Mobility Enhancement & Vision Rehabilitation Publications

Peli E. 2017 Charles F. Prentice Award Lecture: Peripheral Prisms for Visual Field Expansion: A Translational Journey. Optom Vis Sci 2020;97(10):833-846.
On the occasion of being awarded the Prentice Medal, I was asked to summarize my translational journey. Here I describe the process of becoming a low-vision rehabilitation clinician and researcher, frustrated by the unavailability of effective treatments for some conditions. This led to decades of working to understand patients' needs and the complexities and subtleties of their visual systems and conditions. It was followed by many iterations of developing vision aids and the techniques needed to objectively evaluate their benefit. I specifically address one path: the invention and development of peripheral prisms to expand the visual fields of patients with homonymous hemianopia, leading to our latest multiperiscopic prism (mirror-based design) with its clear 45° field-of-view image shift.
Dockery DM, Krzystolik MG. The Use of Mobile Applications as Low-Vision Aids: A Pilot Study. R I Med J (2013) 2020;103(8):69-72.
OBJECTIVE: To determine the most commonly used and highest-rated mobile applications (apps) for low-vision aids. METHODS: This was a convenience sample survey. Patients known to use low-vision apps at a nonprofit low-vision center (INSIGHT, Warwick, RI) were contacted by phone between June and September 2019. INCLUSION CRITERIA: age 18+, Snellen visual acuity (VA) below 20/70, and the use of low-vision mobile apps for at least one month. A standardized script was used to record survey data, and patients rated each app on a scale of one (lowest) to five (highest). RESULTS: Of the sample (n=11), nine patients (81.8%) stated they used an iPhone for low-vision mobile apps. A list of 14 mobile apps was identified; the two most commonly used were Seeing AI (81.8%) and Be My Eyes (63.6%), with average ratings of 4.43/5 and 4.75/5, respectively. CONCLUSIONS: This survey suggests that Seeing AI and Be My Eyes are useful apps for helping low-vision patients with activities of daily living.
Swan G, Goldstein RB, Savage SW, Zhang L, Ahmadi A, Bowers AR. Automatic processing of gaze movements to quantify gaze scanning behaviors in a driving simulator. Behav Res Methods 2020.
Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, large gaze scans to the left and right, comprising head and multiple eye movements, are made. We detail an algorithm called the gaze scan algorithm that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades, then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming marking of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans and can be used to better understand how individuals scan their environment.
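The two-stage structure described above (detect lateral saccades, then merge them into larger gaze scans marked in time and eccentricity) can be illustrated compactly. The following Python sketch is not the published algorithm; the velocity threshold, the merge gap, and all names are illustrative assumptions.

```python
import numpy as np

def detect_saccades(t, gaze, vel_thresh=30.0):
    """Return (start, end) sample-index pairs where the horizontal
    gaze velocity exceeds vel_thresh (deg/s). Assumed threshold."""
    vel = np.gradient(gaze, t)                      # deg/s
    fast = np.abs(vel) > vel_thresh
    padded = np.r_[False, fast, False]              # bracket runs of True
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    return list(zip(edges[::2], edges[1::2] - 1))

def merge_into_scans(saccades, t, gaze, max_gap_s=0.3):
    """Merge same-direction saccades separated by gaps shorter than
    max_gap_s into gaze scans, keeping start/end time and eccentricity."""
    scans = []
    for s0, s1 in saccades:
        d = np.sign(gaze[s1] - gaze[s0])            # scan direction
        if scans and d == scans[-1]["dir"] and t[s0] - scans[-1]["t_end"] <= max_gap_s:
            scans[-1]["t_end"], scans[-1]["end_deg"] = t[s1], gaze[s1]
        else:
            scans.append({"dir": d, "t_start": t[s0], "t_end": t[s1],
                          "start_deg": gaze[s0], "end_deg": gaze[s1]})
    for s in scans:                                 # derived scan measures
        s["magnitude"] = abs(s["end_deg"] - s["start_deg"])
        s["duration"] = s["t_end"] - s["t_start"]
    return scans
```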
Savage SW, Zhang L, Swan G, Bowers AR. The effects of age on the contributions of head and eye movements to scanning behavior at intersections. Transp Res Part F Traffic Psychol Behav 2020;73:128-142.
The current study evaluated the effects of age on the contributions of head and eye movements to scanning behavior at intersections. When approaching intersections, a wide area has to be scanned, requiring large lateral head rotations as well as eye movements. Prior research suggests older drivers scan less extensively. However, due to the wide-ranging differences in methodologies and measures used in prior research, the extent to which age-related changes in eye or head movements contribute to these deficits is unclear. Eleven older (mean 67 years) and 18 younger (mean 27 years) current drivers drove in a simulator while their head and eye movements were tracked. Scans, analyzed for 15 four-way intersections in city drives, were split into two categories: small scans (consisting only of eye movements) and large scans (containing both head and eye movements). Older drivers made smaller large scans than younger drivers (46.6° vs. 53°), as well as smaller small scans (9.2° vs. 10.1°), resulting in overall smaller scans. For large scans, older drivers had both a smaller head and a smaller eye movement component. Older drivers made more small scans than younger drivers (7 vs. 6) but fewer large scans (2.1 vs. 2.7). This resulted in no age effects when considering all scans together. Our results clarify the contributions of eye and head movements to age-related deficits in scanning at intersections, highlight the importance of analyzing both eye and head movements, and suggest the need for older-driver training programs that emphasize the importance of making large scans before entering intersections.
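The small/large scan split lends itself to a simple illustration: gaze-in-world yaw is the sum of eye-in-head yaw and head yaw, and a scan counts as "large" when the head contributes. A minimal Python sketch follows, assuming a made-up 2° head-movement threshold rather than the study's actual criterion.

```python
import numpy as np

HEAD_MOVE_THRESH_DEG = 2.0   # assumed threshold; not the study's criterion

def classify_scan(eye_in_head_deg, head_yaw_deg):
    """Classify one scan as 'small' (eye movements only) or 'large'
    (head plus eye movements), given per-sample yaw traces in degrees."""
    eye = np.asarray(eye_in_head_deg, dtype=float)
    head = np.asarray(head_yaw_deg, dtype=float)
    gaze = eye + head                        # gaze-in-world = eye-in-head + head
    magnitude = gaze.max() - gaze.min()      # overall scan magnitude
    head_part = head.max() - head.min()      # head contribution
    kind = "large" if head_part > HEAD_MOVE_THRESH_DEG else "small"
    return kind, magnitude, head_part
```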
Shi C, Pundlik S, Luo G. Without low spatial frequencies, high resolution vision would be detrimental to motion perception. J Vis 2020;20(8):29.
A normally sighted person can see a grating of 30 cycles per degree or higher, but the spatial frequencies needed for motion perception are much lower than that. For natural images with a wide spectrum, it is unknown how all the visible spatial frequencies contribute to motion speed perception. In this work, we studied the effect of spatial frequency content on motion speed estimation for sequences of natural and stochastic pixel images by simulating different visual conditions: normal vision, low vision (low-pass filtering), and complementary vision (high-pass filtering at the same cutoff frequencies as the corresponding low-vision conditions). Speed was computed using a biological motion energy-based computational model. In natural sequences, there was no difference in speed estimation error between normal vision and low vision conditions, but the error was significantly higher for complementary vision conditions (containing only high-frequency components) at higher speeds. In stochastic sequences, which had a flat frequency distribution, the error in the normal vision condition was significantly larger than in low vision conditions at high speeds. In contrast, no such detrimental effect on speed estimation accuracy was found for low spatial frequencies. The simulation results were consistent with a motion direction detection task performed by human observers viewing stochastic sequences. Together, these results (i) reiterate the importance of low frequencies in motion perception, and (ii) indicate that high frequencies may be detrimental to speed estimation when low-frequency content is weak or absent.
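A hedged sketch of the filtering manipulation the abstract describes: each frame is split into a low-pass version (simulated low vision) and its complementary high-pass residual at a shared cutoff. Gaussian filtering and the cutoff-to-sigma mapping below are stand-in assumptions, not the paper's actual filter design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequency_bands(frame, cutoff_cpd, deg_per_pixel):
    """Split a frame into a low-pass ('low vision') and a complementary
    high-pass ('complementary vision') version at a shared cutoff."""
    # Illustrative mapping from a cutoff in cycles/degree to a Gaussian
    # sigma in pixels; the paper's actual filter design may differ.
    cycles_per_pixel = cutoff_cpd * deg_per_pixel
    sigma = 1.0 / (2.0 * np.pi * cycles_per_pixel)
    low = gaussian_filter(frame.astype(float), sigma)
    high = frame - low        # complement: low + high == original frame
    return low, high
```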
Nartker MS, Alaoui-Soce A, Wolfe JM. Visual search errors are persistent in a laboratory analog of the incidental finding problem. Cogn Res Princ Implic 2020;5(1):32.
When radiologists search for a specific target (e.g., lung cancer), they are also asked to report any other clinically significant "incidental findings" (e.g., pneumonia). These incidental findings are missed at an undesirably high rate. In an effort to understand and reduce these errors, Wolfe et al. (Cognitive Research: Principles and Implications 2:35, 2017) developed "mixed hybrid search" as a model system for incidental findings. In this task, non-expert observers memorize six targets: half of these targets are specific images (analogous to the suspected diagnosis in the clinical task). The other half are broader, categorically defined targets, like "animals" or "cars" (analogous to the less well-specified incidental findings). In subsequent search through displays for any instances of any of the targets, observers miss about one third of the categorical targets, mimicking the incidental finding problem. In the present paper, we attempted to reduce the number of errors in the mixed hybrid search task with the goal of finding methods that could be deployed in a clinical setting. In Experiments 1a and 1b, we reminded observers about the categorical targets by inserting non-search trials in which categorical targets were clearly marked. In Experiment 2, observers responded twice on each trial: once to confirm the presence or absence of the specific targets, and once to confirm the presence or absence of the categorical targets. In Experiment 3, observers were required to confirm the presence or absence of every target on every trial using a checklist procedure. Only Experiment 3 produced a marked decline in categorical target errors, but at the cost of a substantial increase in response time.
Wolfe JM. Visual Search: How Do We Find What We Are Looking For? Annu Rev Vis Sci 2020;6:539-562.
In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
Feldstein IT, Peli E. Pedestrians Accept Shorter Distances to Light Vehicles Than Dark Ones When Crossing the Street. Perception 2020;49(5):558-566.
Does the brightness of an approaching vehicle affect a pedestrian's crossing decision? Thirty participants indicated their street-crossing intentions when facing approaching light or dark vehicles. The experiment was conducted in a real daylight environment and, additionally, in a corresponding virtual one. A real road with actual cars provides high face validity, while a virtual environment ensures the scenario's precise reproducibility and repeatability for each participant. In both settings, participants judged dark vehicles to be a more imminent threat (either closer or moving faster) than light ones. Secondary results showed that participants accepted a significantly shorter time-to-contact when crossing the street in the virtual setting than on the real road.
Feldstein IT, Dyszak GN. Road crossing decisions in real and virtual environments: A comparative study on simulator validity. Accid Anal Prev 2020;137:105356.
Virtual reality (VR) is a valuable tool for the assessment of human perception and behavior in a risk-free environment. Investigators should, however, ensure that the virtual environment used is validated for the experiment's intended research question, since behavior in virtual environments has been shown to differ from behavior in real environments. This article presents the street-crossing decisions of 30 participants who faced an approaching vehicle and had to decide at what moment it was no longer safe to cross, applying the step-back method. The participants executed the task in a real environment and also within a highly immersive VR setup involving a head-mounted display (HMD). The results indicate significant differences between the two settings regarding the participants' behaviors. The time-to-contact of approaching vehicles was significantly lower for crossing decisions in the virtual environment than in the real one. Additionally, participants based their crossing decisions in the real environment on the temporal distance of the approaching vehicle (i.e., time-to-contact), whereas crossing decisions in the virtual environment seemed to depend on the vehicle's spatial distance, neglecting its velocity. Furthermore, a deeper analysis suggests that crossing decisions were not affected by factors such as the participant's gender or the order in which they faced the real and the virtual environment.
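The temporal-versus-spatial distinction above rests on simple arithmetic: time-to-contact divides the spatial gap by the vehicle's speed, so equal distances can correspond to very different safety margins. A minimal worked example, with values invented for illustration:

```python
def time_to_contact(distance_m, speed_mps):
    """Time-to-contact (s) of a vehicle approaching at constant speed."""
    return distance_m / speed_mps

# Same 30 m spatial gap, very different temporal gaps:
print(time_to_contact(30, 10))   # 3.0 s at 10 m/s (~36 km/h)
print(time_to_contact(30, 20))   # 1.5 s at 20 m/s (~72 km/h)
```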
Pamir Z, Canoluk UM, Jung J-H, Peli E. Poor resolution at the back of the tongue is the bottleneck for spatial pattern recognition. Sci Rep 2020;10(1):2435.
Spatial patterns presented on the tongue using electro-tactile sensory substitution devices (SSDs) have been suggested to be recognized better by tracing the pattern with the tip of the tongue. We examined whether the functional benefit of tracing lies in overcoming the poor sensitivity or low spatial resolution at the back of the tongue, or alternatively in compensating for limited information processing capacity by fixating on one segment of the spatial pattern at a time. Using a commercially available SSD, the BrainPort, we compared letter recognition performance in three presentation modes: tracing, static, and drawing. Stimulation intensity was either constant or increased from the tip to the back of the tongue to partially compensate for the decreasing sensitivity. Recognition was significantly better for tracing than for the static and drawing conditions. Confusion analyses showed that, in the static and drawing conditions, letters were confused based on the characteristics presented near the tip. The results suggest that recognition performance is limited by the poor spatial resolution at the back of the tongue, and tracing seems to be an effective strategy to overcome this. Compensating for limited information processing capacity by drawing, or for poor sensitivity by increasing intensity at the back, does not improve performance.
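The intensity-compensation condition can be pictured as a ramp across electrode rows from tongue tip to back. The sketch below is purely illustrative; the normalized values and linear profile are assumptions, not the BrainPort calibration used in the study.

```python
import numpy as np

def intensity_ramp(n_rows, base=0.5, gain=0.5):
    """Illustrative linear intensity ramp across electrode rows, from the
    tongue tip (row 0) to the back (last row), compensating for the
    back-of-tongue sensitivity drop. Values are normalized 0-1; the
    actual device calibration is device-specific."""
    return np.clip(base + gain * np.linspace(0.0, 1.0, n_rows), 0.0, 1.0)

# e.g., a 20-row array: tip rows near 0.5, back rows approaching 1.0
print(intensity_ramp(20))
```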
Wiegand I, Wolfe JM. Age doesn't matter much: hybrid visual and memory search is preserved in older adults. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2020;27(2):220-253.
We tested younger and older observers' attention and long-term memory functions in a "hybrid search" task, in which observers look through visual displays for instances of any of several types of targets held in memory. Apart from a general slowing, search efficiency did not change with age. In both age groups, reaction times increased linearly with the visual set size and logarithmically with the memory set size, with similar relative costs of increasing load (Experiment 1). We replicated this finding and further showed that performance remained comparable between age groups when familiarity cues were made irrelevant (Experiment 2) and when target-context associations had to be retrieved (Experiment 3). Our findings are at variance with theories of cognitive aging that propose age-specific deficits in attention and memory. As hybrid search resembles many real-world searches, our results may help improve the ecological validity of assessments of age-related cognitive decline.
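The set-size relationship reported above has a simple functional form: RT grows linearly with the number of items on the display and logarithmically with the number of targets held in memory. A sketch with invented coefficients (the abstract reports no specific values):

```python
import numpy as np

def predicted_rt_ms(visual_n, memory_n, a=500.0, b=40.0, c=150.0):
    """Illustrative hybrid-search RT model: linear in visual set size,
    logarithmic in memory set size. Coefficients are made-up values."""
    return a + b * visual_n + c * np.log2(memory_n)

# Doubling the memory set adds only a constant c, whereas the visual
# set size contributes linearly:
print(predicted_rt_ms(8, 4))    # 500 + 320 + 300 = 1120 ms
print(predicted_rt_ms(8, 8))    # 500 + 320 + 450 = 1270 ms
```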
Costela FM, Woods RL. A free database of eye movements watching "Hollywood" videoclips. Data Brief 2019;25:103991.
The provided database of tracked eye movements was collected using an infrared, video-camera Eyelink 1000 system from 95 participants as they viewed 'Hollywood' video clips. There are 206 30-second clips and eleven 30-minute clips, for a total viewing time of about 60 hours. The database also provides the raw 30-second video clip files, a short preview of the 30-minute clips, and subjective ratings of the content of each video in eight categories: (1) genre; (2) importance of human faces; (3) importance of human figures; (4) importance of man-made objects; (5) importance of nature; (6) auditory information; (7) lighting; and (8) environment type. Precise timing of the scene cuts within the clips and the democratic gaze scanpath position (center of interest) per frame are provided. At this time, this eye-movement dataset has the widest age range (22-85 years) and is the third largest (in recorded video viewing time) of those that have been made available to the research community. The data-acquisition procedures are described, along with participant demographics, summaries of some common eye-movement statistics, and highlights of research topics in which the database has been used. The dataset is freely available in the Open Science Framework repository (link in the manuscript) and can be used without restriction for educational and research purposes, provided that this paper is cited in any published work.
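One derived measure in the database, the per-frame center of interest, can be approximated by pooling gaze positions across observers. The sketch below uses a per-frame median as one plausible definition; the database's actual computation may differ.

```python
import numpy as np

def center_of_interest(gaze_xy):
    """Per-frame 'democratic' gaze position across observers.

    gaze_xy: array of shape (n_observers, n_frames, 2) in screen
    coordinates, with NaN where gaze was not tracked. The median is
    one plausible definition; the database's computation may differ."""
    return np.nanmedian(gaze_xy, axis=0)    # shape (n_frames, 2)
```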
Schill HM, Cain MS, Josephs EL, Wolfe JM. Axis of rotation as a basic feature in visual search. Atten Percept Psychophys 2019.
Searching for a "Q" among "O"s is easier than the opposite search (Treisman & Gormican in Psychological Review, 95, 15-48, 1988). In many cases, such "search asymmetries" occur because it is easier to search when a target is defined by the presence of a feature (i.e., the line terminator defining the tail of the "Q"), rather than by its absence. Treisman proposed that features that produce a search asymmetry are "basic" features in visual search (Treisman & Gormican in Psychological Review, 95, 15-48, 1988; Treisman & Souther in Journal of Experimental Psychology: General, 114, 285-310, 1985). Other stimulus attributes, such as color, orientation, and motion, have been found to produce search asymmetries (Dick, Ullman, & Sagi in Science, 237, 400-402, 1987; Treisman & Gormican in Psychological Review, 95, 15-48, 1988; Treisman & Souther in Journal of Experimental Psychology: General, 114, 285-310, 1985). Other stimulus properties, such as facial expression, produce asymmetries because one type of item (e.g., neutral faces) demands less attention in search than another (e.g., angry faces). In the present series of experiments, search for a rolling target among spinning distractors proved to be more efficient than searching for a spinning target among rolling distractors. The effect does not appear to be due to differences in physical plausibility, direction of motion, or texture movement. Our results suggest that the spinning stimuli demand less attention, making search through spinning distractors for a rolling target easier than the opposite search.
