Mobility Enhancement & Vision Rehabilitation Publications

Wiegand I, Westenberg E, Wolfe JM. Order, please! Explicit sequence learning in hybrid search in younger and older age. Mem Cognit 2021.
Sequence learning effects in simple perceptual and motor tasks are largely unaffected by normal aging. However, less is known about sequence learning in more complex cognitive tasks that involve attention and memory processes, and how this changes with age. In this study, we examined whether incidental and intentional sequence learning would facilitate hybrid visual and memory search in younger and older adults. Observers performed a hybrid search task, in which they memorized four or 16 target objects and searched for any of those target objects in displays with four or 16 objects. The memorized targets appeared either in a repeating sequential order or in random order. In the first experiment, observers were not told about the sequence before the experiment. Only a subset of younger adults, and none of the older adults, incidentally learned the sequence. The "learners" acquired explicit knowledge about the sequence and searched faster in the sequence condition than in the random condition. In the second experiment, observers were told about the sequence before the search task. Both younger and older adults searched faster in sequence blocks than in random blocks. Older adults, however, showed this sequence-learning effect only in blocks with smaller target sets. Our findings indicate that explicit sequence knowledge can facilitate hybrid search, as it allows observers to predict the next target and restrict their visual and memory search. In older age, the sequence-learning effect is constrained by load, presumably due to age-related decline in executive functions.
Costela FM, Reeves SM, Woods RL. An implementation of Bubble Magnification did not improve the video comprehension of individuals with central vision loss. Ophthalmic Physiol Opt 2021.
PURPOSE: People with central vision loss (CVL) watch television, videos and movies, but often report difficulty and have reduced video comprehension. An approach to assist viewing videos is electronic magnification of the video itself, such as Bubble Magnification. METHODS: We created a Bubble Magnification technique that displayed a magnified segment around the centre of interest (COI), as determined by the gaze of participants with normal vision. Fifteen participants with CVL viewed video clips shown unedited and with 2× and 3× Bubble Magnification. We measured video comprehension and gaze coherence. RESULTS: Video comprehension was significantly worse with both 2× (p = 0.01) and 3× Bubble Magnification (p < 0.001) than with the unedited video. There was no difference in gaze coherence across conditions (p ≥ 0.58). These results were unexpected, as we had anticipated a benefit in both video comprehension and gaze coherence. This initial attempt to implement the Bubble Magnification method had flaws that probably reduced its effectiveness. CONCLUSIONS: In the future, we propose alternative implementations of Bubble Magnification, such as variable magnification and bubble size. This study is a first step in the development of an intelligent-magnification approach to providing a vision rehabilitation aid for people with CVL.
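As a rough illustration of the general approach (not the authors' implementation), a bubble-magnified frame can be produced by cropping a region around the COI and pasting an enlarged copy back over the frame; the function name, radius, and scale below are illustrative assumptions.

    from PIL import Image

    def bubble_magnify(frame: Image.Image, coi_xy, radius=80, scale=2.0):
        """Paste a magnified patch centred on the centre of interest (COI)."""
        cx, cy = coi_xy
        patch = frame.crop((cx - radius, cy - radius, cx + radius, cy + radius))
        size = int(2 * radius * scale)
        big = patch.resize((size, size), Image.BILINEAR)
        out = frame.copy()
        # Centre the enlarged patch on the COI so the bubble stays gaze-locked.
        out.paste(big, (int(cx - radius * scale), int(cy - radius * scale)))
        return out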
Wolfe JM. Guided Search 6.0: An updated model of visual search. Psychon Bull Rev 2021.
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual fields (FVFs) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
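As a minimal sketch of the guidance stage described above, the five preattentive sources can be combined into a single priority map from which the most active location is selected; the linear weighted sum and the unit weights are assumptions here, not GS6's published parameterization.

    import numpy as np

    def priority_map(top_down, bottom_up, history, reward, scene, weights=None):
        # Weighted sum of the five guidance sources (weights are illustrative).
        w = weights or [1.0] * 5
        return sum(wi * s for wi, s in
                   zip(w, (top_down, bottom_up, history, reward, scene)))

    rng = np.random.default_rng(0)
    pmap = priority_map(*[rng.random((32, 32)) for _ in range(5)])
    # Attention is deployed to the peak of the map (~every 50 ms in GS6).
    y, x = np.unravel_index(np.argmax(pmap), pmap.shape)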
Swan G, Savage SW, Zhang L, Bowers AR. Driving With Hemianopia VII: Predicting Hazard Detection With Gaze and Head Scan Magnitude. Transl Vis Sci Technol 2021;10(1):20.
Purpose: One rehabilitation strategy taught to individuals with hemianopic field loss (HFL) is to make a large blind side scan to quickly identify hazards. However, it is not clear what the minimum threshold is for how large the scan should be. Using driving simulation, we evaluated thresholds (criteria) for gaze and head scan magnitudes that best predict detection safety. Methods: Seventeen participants with complete HFL and 15 with normal vision (NV) drove through 4 routes in a virtual city while their eyes and head were tracked. Participants pressed the horn as soon as they detected a motorcycle (10 per drive) that appeared at 54 degrees eccentricity on cross-streets and approached the driver. Results: Those with HFL detected fewer motorcycles than those with NV and had worse detection on the blind side than the seeing side. On the blind side, both safe detections and early detections (detections before the hazard entered the intersection) could be predicted with both gaze (safe 18.5 degrees and early 33.8 degrees) and head (safe 19.3 degrees and early 27 degrees) scans. However, on the seeing side, only early detections could be classified with gaze (25.3 degrees) and head (9.0 degrees). Conclusions: Both head and gaze scan magnitude were significant predictors of detection on the blind side, but less predictive on the seeing side, which was likely driven by the ability to use peripheral vision. Interestingly, head scans were as predictive as gaze scans. Translational Relevance: The minimum scan magnitude could be a useful criterion for scanning training or for developing assistive technologies to improve scanning.
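The reported criteria can be read as simple per-scan classifiers: a blind-side scan predicts safe or early detection when its magnitude reaches the corresponding threshold. A minimal sketch (the function name is illustrative; the constants are the blind-side values reported above):

    SAFE_GAZE_DEG, EARLY_GAZE_DEG = 18.5, 33.8   # blind-side gaze criteria
    SAFE_HEAD_DEG, EARLY_HEAD_DEG = 19.3, 27.0   # blind-side head criteria

    def predict_detection(scan_deg, safe_thr, early_thr):
        # Larger blind-side scans predict safer and earlier detections.
        return {"safe": scan_deg >= safe_thr, "early": scan_deg >= early_thr}

    predict_detection(22.0, SAFE_GAZE_DEG, EARLY_GAZE_DEG)
    # -> {'safe': True, 'early': False}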
Bennett CR, Bex PJ, Merabet LB. Assessing visual search performance using a novel dynamic naturalistic scene. J Vis 2021;21(1):5.
Daily activities require constantly searching for and tracking visual targets in dynamic and complex scenes. Classic work assessing visual search performance has been dominated by the use of simple geometric shapes, patterns, and static backgrounds. Recently, there has been a shift toward investigating visual search in more naturalistic dynamic scenes using virtual reality (VR)-based paradigms. In this direction, we have developed a first-person perspective VR environment combined with eye tracking to capture a variety of objective measures. Participants were instructed to search for a preselected human target walking in a crowded hallway setting. Performance was quantified based on saccade and smooth pursuit ocular motor behavior. To assess the effect of task difficulty, we manipulated factors of the visual scene, including crowd density (i.e., the number of surrounding distractors) and the presence of environmental clutter. In general, results showed a pattern of worsening performance with increasing crowd density. In contrast, the presence of visual clutter had no effect. These results demonstrate how visual search performance can be investigated using VR-based naturalistic dynamic scenes with high behavioral relevance. This engaging platform may also have utility in assessing visual search in a variety of clinical populations of interest.
Benedi-Garcia C, Vinas M, Dorronsoro C, Burns SA, Peli E, Marcos S. Vision is protected against blue defocus. Sci Rep 2021;11(1):352.
Due to chromatic aberration, blue images are defocused when the eye is focused at the middle of the visible spectrum, yet we are normally not aware of chromatic blur. The eye suffers from monochromatic aberrations, which degrade the optical quality of all images projected on the retina. The combination of monochromatic and chromatic aberrations is not additive, and these aberrations may interact to improve image quality. Using adaptive optics, we investigated the optical and visual effects of correcting monochromatic aberrations when viewing polychromatic grayscale, green, and blue images. Correcting the eye's monochromatic aberrations improved the optical quality of the focused green images and degraded the optical quality of the defocused blue images, particularly in eyes with higher amounts of monochromatic aberrations. Perceptual judgments of image quality tracked the optical findings, but the perceptual impact of the monochromatic aberrations correction was smaller than the optical predictions. The visual system appears to be adapted to the blur produced by the native monochromatic aberrations, and possibly to defocus in blue.
Keilty M, Houston KE, Collins C, Trehan R, Chen Y-T, Merabet L, Watts A, Pundlik S, Luo G. Inpatient Virtual Vision Clinic Improves Access to Vision Rehabilitation Before and During the COVID-19 Pandemic. Arch Rehabil Res Clin Transl 2021;3(1):100100.
Objective: To describe and evaluate a secure video call system combined with a suite of iPad vision testing apps to improve access to vision rehabilitation assessment for inpatients. Design: Retrospective. Setting: Two acute care inpatient rehabilitation hospitals and one long-term acute care (LTAC) hospital. Participants: Records of inpatients seen by the vision service. Interventions: Records from a 1-year telemedicine pilot performed at acute rehabilitation (AR) hospital 1 and then expanded to AR hospital 2 and the LTAC hospital during coronavirus disease 2019 (COVID-19) were reviewed. In the virtual visits, an occupational therapist measured the patients' vision with the iPad applications and forwarded results to the off-site Doctor of Optometry (OD) for review prior to a video visit. The OD provided diagnosis and education, press-on prism application supervision, strategies and modifications, and follow-up recommendations. Providers completed the telehealth usability questionnaire (10-point scale). Main Outcome Measures: Vision examinations per month at AR hospital 1 before and with telemedicine. Results: With telemedicine at AR hospital 1, mean visits per month significantly increased from 10.7±5 to 14.9±5 (P=.002). Prism was trialed in 40% of cases, of which 83% were successful, similar to previously reported in-person success rates. COVID-19 caused only a marginal decrease in visits per month (P=.08) at AR hospital 1, whereas the site without an established program (AR hospital 2) had a 3-4 week gap in care while the program was initiated. Cases at the LTAC hospital tended to be more complex and difficult to manage virtually. Median telehealth usability questionnaire scores for the four categories assessed were 7, 8, 6, and 9 of 10. Conclusions: The virtual vision clinic process improved inpatient access to eye and visual neurorehabilitation assessment before and during the COVID-19 quarantine and was well accepted by providers and patients.
Swan G, Goldstein RB, Savage SW, Zhang L, Ahmadi A, Bowers AR. Automatic processing of gaze movements to quantify gaze scanning behaviors in a driving simulator. Behav Res Methods 2021;53(2):487-506.
Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, large gaze scans to the left and right, comprising head and multiple eye movements, are made. We detail an algorithm, called the gaze scan algorithm, that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades, then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking, significantly accelerating the time-consuming processing of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans and can be used to better understand how individuals scan their environment.
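A simplified sketch of the two-stage idea (detect lateral saccades from horizontal gaze velocity, then merge nearby same-direction saccades into one gaze scan); the velocity and gap thresholds below are illustrative choices, not the published parameters.

    import numpy as np

    def detect_scans(t, gaze_x_deg, vel_thr=30.0, merge_gap_s=0.3):
        vel = np.gradient(gaze_x_deg, t)      # lateral gaze velocity, deg/s
        moving = np.abs(vel) > vel_thr        # samples inside a saccade
        edges = np.diff(moving.astype(int))
        onsets = np.where(edges == 1)[0] + 1
        offsets = np.where(edges == -1)[0] + 1
        scans = []
        for s, e in zip(onsets, offsets):
            prev = scans[-1] if scans else None
            # Merge with the previous scan if the gap is short and the
            # direction of movement is unchanged.
            if prev and t[s] - t[prev[1]] < merge_gap_s and \
                    np.sign(vel[s]) == np.sign(vel[prev[0]]):
                scans[-1] = (prev[0], e)
            else:
                scans.append((s, e))
        # Start/end time and eccentricity of each gaze scan.
        return [(t[s], t[e], gaze_x_deg[s], gaze_x_deg[e]) for s, e in scans]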
Moharrer M, Tang X, Luo G. With Motion Perception, Good Visual Acuity May Not Be Necessary for Driving Hazard Detection. Transl Vis Sci Technol 2020;9(13):18.
Purpose: To investigate the roles of motion perception and visual acuity in driving hazard detection. Methods: Detection of driving hazards was tested based on video and still-frames of real-world road scenes. In the experiment using videos, 20 normally sighted participants were tested under four conditions: with or without motion interruption by an interframe mask, and with or without simulated low visual acuity (20/120 on average) using a diffusing filter. Videos were down-sampled to 2.5 Hz to allow the addition of motion-interrupting masks between the frames while maintaining video durations. In addition, single still frames extracted from the videos were shown in random order to eight normally sighted participants, who judged whether the frames were during ongoing hazards, with or without the diffuser. The sensitivity index d-prime (d') was compared between the unmasked motion (n = 20) and still frame (n = 8) conditions. Results: In the experiment using videos, there was a significant reduction in a combined performance score (taking account of reaction time and detection rate) when the motion was disrupted (P = 0.016). The diffuser did not affect the scores (P = 0.419). The score reduction was mostly due to a decrease in the detection rate (P = 0.002), not the response time (P = 0.148). The d' of participants significantly decreased (P < 0.001) from 2.24 with unmasked videos to 0.68 with still frames. Low visual acuity also had a significant effect on d' (P = 0.004), but the change was relatively small, from 2.03 without to 1.56 with the diffuser. Conclusions: Motion perception plays a more important role than visual acuity in detecting driving hazards. Translational Relevance: Motion perception may be a relevant criterion for fitness to drive.
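For reference, d' is the standard signal-detection sensitivity index: the difference of the z-transformed hit and false-alarm rates. A minimal computation (the example rates are illustrative, not the study's data):

    from statistics import NormalDist

    def d_prime(hit_rate: float, fa_rate: float) -> float:
        # z-transform via the inverse normal CDF.
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    d_prime(0.95, 0.30)  # ≈ 2.17, in the range reported for unmasked videos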
Peli E. 2017 Charles F. Prentice Award Lecture: Peripheral Prisms for Visual Field Expansion: A Translational Journey. Optom Vis Sci 2020;97(10):833-846.
On the occasion of being awarded the Prentice Medal, I was asked to summarize my translational journey. Here I describe the process of becoming a low-vision rehabilitation clinician and researcher, frustrated by the unavailability of effective treatments for some conditions. This led to decades of working to understand patients' needs and the complexities and subtleties of their visual systems and conditions. It was followed by many iterations of developing vision aids and the techniques needed to objectively evaluate their benefit. I specifically address one path: the invention and development of peripheral prisms to expand the visual fields of patients with homonymous hemianopia, leading to our latest multiperiscopic prism (mirror-based design) with its clear 45° field-of-view image shift.
Dockery DM, Krzystolik MG. The Use of Mobile Applications as Low-Vision Aids: A Pilot Study. R I Med J (2013) 2020;103(8):69-72.
OBJECTIVE: To determine the most commonly used and highest-rated mobile applications (apps) used as low-vision aids. METHODS: This was a convenience sample survey. Patients known to use low-vision apps at a nonprofit low-vision center (INSIGHT, Warwick, RI) were contacted by phone between June and September 2019. INCLUSION CRITERIA: age 18+, Snellen visual acuity (VA) below 20/70, and the use of low-vision mobile apps for at least one month. A standardized script was used to record survey data, and patients rated apps on a scale of one to five, one being the lowest and five the highest. RESULTS: Of the sample (n=11), nine patients (81.8%) stated they used an iPhone for low-vision mobile apps. A list of 14 mobile apps was identified: the two most commonly used apps were Seeing AI (81.8%) and Be My Eyes (63.6%); their average ratings were 4.43/5 and 4.75/5, respectively. CONCLUSIONS: This survey suggests that Seeing AI and Be My Eyes are useful apps for helping low-vision patients with activities of daily living.
Savage SW, Zhang L, Swan G, Bowers AR. The effects of age on the contributions of head and eye movements to scanning behavior at intersections. Transp Res Part F Traffic Psychol Behav 2020;73:128-142.
The current study was aimed at evaluating the effects of age on the contributions of head and eye movements to scanning behavior at intersections. When approaching intersections, a wide area has to be scanned, requiring large lateral head rotations as well as eye movements. Prior research suggests older drivers scan less extensively. However, due to the wide-ranging differences in methodologies and measures used in prior research, the extent to which age-related changes in eye or head movements contribute to these deficits is unclear. Eleven older (mean 67 years) and 18 younger (mean 27 years) current drivers drove in a simulator while their head and eye movements were tracked. Scans, analyzed for 15 four-way intersections in city drives, were split into two categories: eye-only scans (consisting only of eye movements) and head+eye scans (containing both head and eye movements). Older drivers made smaller head+eye scans than younger drivers (46.6° vs. 53°), as well as smaller eye-only scans (9.2° vs. 10.1°), resulting in overall smaller scans. For head+eye scans, older drivers had both a smaller head and a smaller eye movement component. Older drivers made more eye-only scans than younger drivers (7 vs. 6) but fewer head+eye scans (2.1 vs. 2.7). This resulted in no age effects when considering all scans together. Our results clarify the contributions of eye and head movements to age-related deficits in scanning at intersections, highlight the importance of analyzing both eye and head movements, and suggest the need for older driver training programs that emphasize the importance of making large scans before entering intersections.
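The eye-only versus head+eye split reflects a simple decomposition: the gaze eccentricity reached by a scan is the sum of the head rotation and the eye-in-head rotation. A minimal sketch of that bookkeeping (the 1° head criterion is an illustrative assumption, not the study's definition):

    def classify_scan(head_deg: float, eye_deg: float, head_min: float = 1.0):
        # A scan counts as head+eye when the head contributes appreciably.
        total = head_deg + eye_deg
        kind = "head+eye" if abs(head_deg) > head_min else "eye-only"
        return kind, total

    classify_scan(40.0, 8.0)  # ('head+eye', 48.0), a typical large scan
    classify_scan(0.0, 9.5)   # ('eye-only', 9.5)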
Shi C, Pundlik S, Luo G. Without low spatial frequencies, high resolution vision would be detrimental to motion perception. J Vis 2020;20(8):29.
A normally sighted person can see a grating of 30 cycles per degree or higher, but the spatial frequencies needed for motion perception are much lower than that. For natural images with a broad spatial-frequency spectrum, it is unknown how the visible spatial frequencies contribute to motion speed perception. In this work, we studied the effect of spatial frequency content on motion speed estimation for sequences of natural and stochastic pixel images by simulating different visual conditions: normal vision, low vision (low-pass filtering), and complementary vision (high-pass filtering at the same cutoff frequencies as the corresponding low-vision conditions). Speed was computed using a biological motion energy-based computational model. In natural sequences, there was no difference in speed estimation error between the normal vision and low vision conditions, but the error was significantly higher for the complementary vision conditions (containing only high-frequency components) at higher speeds. In stochastic sequences, which had a flat frequency distribution, the error in the normal vision condition was significantly larger compared with the low vision conditions at high speeds. In contrast, no such detrimental effect on speed estimation accuracy was found for low spatial frequencies. The simulation results were consistent with a motion direction detection task performed by human observers viewing stochastic sequences. Together, these results (i) reiterate the importance of low frequencies in motion perception, and (ii) indicate that high frequencies may be detrimental for speed estimation when low-frequency content is weak or absent.
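A minimal sketch of the filtering manipulation described above, assuming Fourier-domain filtering with the cutoff expressed in cycles per degree; the helper below is an illustrative reconstruction, not the study's code. The low-pass output stands in for the low vision condition and the residual high-pass output for the complementary vision condition.

    import numpy as np

    def split_by_cutoff(img: np.ndarray, cutoff_cpd: float, ppd: float):
        """Return (low-pass, high-pass) versions of a grayscale frame.

        ppd: display resolution in pixels per degree of visual angle.
        """
        f = np.fft.fftshift(np.fft.fft2(img))
        # cycles/sample * samples/degree = cycles/degree
        fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0])) * ppd
        fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1])) * ppd
        r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
        low = np.where(r <= cutoff_cpd, f, 0)   # keep frequencies below cutoff
        inv = lambda s: np.real(np.fft.ifft2(np.fft.ifftshift(s)))
        return inv(low), inv(f - low)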
Nartker MS, Alaoui-Soce A, Wolfe JM. Visual search errors are persistent in a laboratory analog of the incidental finding problem. Cogn Res Princ Implic 2020;5(1):32.
When radiologists search for a specific target (e.g., lung cancer), they are also asked to report any other clinically significant "incidental findings" (e.g., pneumonia). These incidental findings are missed at an undesirably high rate. In an effort to understand and reduce these errors, Wolfe et al. (Cognitive Research: Principles and Implications 2:35, 2017) developed "mixed hybrid search" as a model system for incidental findings. In this task, non-expert observers memorize six targets: half of these targets are specific images (analogous to the suspected diagnosis in the clinical task). The other half are broader, categorically defined targets, like "animals" or "cars" (analogous to the less well-specified incidental findings). In subsequent search through displays for any instances of any of the targets, observers miss about one third of the categorical targets, mimicking the incidental finding problem. In the present paper, we attempted to reduce the number of errors in the mixed hybrid search task with the goal of finding methods that could be deployed in a clinical setting. In Experiments 1a and 1b, we reminded observers about the categorical targets by inserting non-search trials in which categorical targets were clearly marked. In Experiment 2, observers responded twice on each trial: once to confirm the presence or absence of the specific targets, and once to confirm the presence or absence of the categorical targets. In Experiment 3, observers were required to confirm the presence or absence of every target on every trial using a checklist procedure. Only Experiment 3 produced a marked decline in categorical target errors, but at the cost of a substantial increase in response time.
Wolfe JM. Visual Search: How Do We Find What We Are Looking For? Annu Rev Vis Sci 2020;6:539-562.
In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
