Mobility Enhancement & Vision Rehabilitation Publications

Wiegand I, Wolfe JM. Target value and prevalence influence visual foraging in younger and older age. Vision Res 2021;186:87-102.
The prevalence and reward-value of targets have an influence on visual search. The strength of the effect of an item's reward-value on attentional selection varies substantially between individuals and is potentially sensitive to aging. We investigated individual and age differences in a hybrid foraging task, in which the prevalence and value of multiple target types was varied. Using optimal foraging theory measures, foraging was more efficient overall in younger than older observers. However, the influence of prevalence and value on target selections was similar across age groups, suggesting that the underlying cognitive mechanisms are preserved in older age. When prevalence was varied but target value was balanced, younger and older observers preferably selected the most frequent target type and were biased to select another instance of the previously selected target type. When value was varied, younger and older observers showed a tendency to select high-value targets, but preferences were more diverse between individuals. When value and prevalence were inversely related, some observers showed particularly strong preferences for high-valued target types, while others showed a preference for high-prevalent, albeit low-value, target types. In younger adults, individual differences in the selection choices correlated with a personality index, suggesting that avoiding selections of low-value targets may be related to reward-seeking behaviour.
Muralidharan S, Ichhpujani P, Bhartiya S, Singh RB. Eye-tunes: role of music in ophthalmology and vision sciences. Ther Adv Ophthalmol 2021;13:25158414211040890.
Although the healing effect of music has been recognized since time immemorial, there has been a renewed interest in its use in modern medicine. This can be attributed to the increasing focus on holistic healing and on the subjective and objective aspects of well-being. In ophthalmology, this has ranged from using music for patients undergoing diagnostic procedures and surgery, as well as for doctors and the operation theatre staff during surgical procedures. Music has proven to be a potent nonpharmacological sedative and anxiolytic, allaying both the pain and stress of surgery. This review aims to explore the available evidence about the role of music as an adjunct for diagnostic and surgical procedures in current ophthalmic practices.
Avraham D, Jung J-H, Yitzhaky Y, Peli E. Retinal prosthetic vision simulation: temporal aspects. J Neural Eng 2021;18(4).
Objective. The perception of individuals fitted with retinal prostheses is not fully understood, although several retinal implants have been tested and commercialized. Realistic simulations of perception with retinal implants would be useful for future development and evaluation of such systems. Approach. We implemented a retinal prosthetic vision simulation, including temporal features, which have not been previously simulated. In particular, the simulation included temporal aspects such as persistence and perceptual fading of phosphenes and the electrode activation rate. Main results. The simulated phosphene persistence showed an effective reduction in flickering at low electrode activation rates. Although persistence has a positive effect on static scenes, it smears dynamic scenes. Perceptual fading following continuous stimulation affects prosthetic vision of both static and dynamic scenes by making them disappear completely or partially. However, we showed that perceptual fading of a static stimulus might be countered by head-scanning motions, which together with the persistence revealed the contours of the faded object. We also showed that changing the image polarity may improve simulated prosthetic vision in the presence of persistence and perceptual fading. Significance. Temporal aspects have important roles in prosthetic vision, as illustrated by the simulations. Considering these aspects may improve the future design, the training with, and evaluation of retinal prostheses.
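The phosphene persistence described in this abstract can be sketched as a simple decaying trace carried across frames; the decay factor, array representation, and function name below are our illustrative assumptions, not the paper's simulation.

```python
import numpy as np

def simulate_persistence(frames, decay=0.5):
    # Carry a decaying trace of earlier phosphene activity into each
    # new frame; fresh stimulation overrides the faded trace.
    trace = np.zeros_like(frames[0], dtype=float)
    out = []
    for frame in frames:
        trace = np.maximum(frame.astype(float), decay * trace)
        out.append(trace.copy())
    return out

# A single bright frame followed by blank frames fades gradually
# instead of disappearing at once -- reducing flicker on static
# scenes but smearing dynamic ones, as the abstract notes.
frames = [np.ones((2, 2)), np.zeros((2, 2)), np.zeros((2, 2))]
faded = simulate_persistence(frames, decay=0.5)
```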
Pundlik S, Baliutaviciute V, Moharrer M, Bowers AR, Luo G. Home-Use Evaluation of a Wearable Collision Warning Device for Individuals With Severe Vision Impairments: A Randomized Clinical Trial. JAMA Ophthalmol 2021;139(9):998-1005.
Importance: There is scant rigorous evidence about the real-world mobility benefit of electronic mobility aids. Objective: To evaluate the effect of a collision warning device on the number of contacts experienced by blind and visually impaired people in their daily mobility. Design, Setting, and Participants: In this double-masked randomized clinical trial, participants used a collision warning device during their daily mobility over a period of 4 weeks. A volunteer sample of 31 independently mobile individuals with severe visual impairments, including total blindness and peripheral visual field restrictions, who used a long cane or guide dog as their habitual mobility aid completed the study. The study was conducted from January 2018 to December 2019. Interventions: The device automatically detected collision hazards using a chest-mounted video camera. It randomly switched between 2 modes: active mode (intervention condition), where it provided alerts for detected collision threats via 2 vibrotactile wristbands, and silent mode (control condition), where the device still detected collisions but did not provide any warnings to the user. Scene videos along with the collision warning information were recorded by the device. Potential collisions detected by the device were reviewed and scored, including contacts with the hazards, by 2 independent reviewers. Participants and reviewers were masked to the device operation mode. Main Outcomes and Measures: Rate of contacts per 100 hazards per hour, compared between the 2 device modes within each participant. Modified intention-to-treat analysis was used. Results: Of the 31 included participants, 18 (58%) were male, and the median (range) age was 61 (25-73) years. A total of 19 participants (61%) had a visual acuity (VA) of light perception or worse, and 28 (90%) reported a long cane as their habitual mobility aid. The median (interquartile range) number of contacts was lower in the active mode compared with silent mode (9.3 [6.6-14.9] vs 13.8 [6.9-24.3]; difference, 4.5; 95% CI, 1.5-10.7; P < .001). Controlling for demographic characteristics, presence of VA better than light perception, and fall history, the rate of contacts significantly reduced in the active mode compared with the silent mode (β = 0.63; 95% CI, 0.54-0.73; P < .001). Conclusions and Relevance: In this study involving 31 visually impaired participants, the collision warnings were associated with a reduced rate of contacts with obstacles in daily mobility, indicating the potential of the device to augment habitual mobility aids. Trial Registration: ClinicalTrials.gov Identifier: NCT03057496.
Costela FM, Reeves SM, Woods RL. The Effect of Zoom Magnification and Large Display on Video Comprehension in Individuals With Central Vision Loss. Transl Vis Sci Technol 2021;10(8):30.
Purpose: A larger display at the same viewing distance provides relative-size magnification for individuals with central vision loss (CVL). However, the resulting large visible area of the display is expected to result in more head rotation, which may cause discomfort. We created a zoom magnification technique that placed the center of interest (COI) in the center of the display to reduce the need for head rotation. Methods: In a 2 × 2 within-subject study design, 23 participants with CVL viewed video clips from 1.5 m (4.9 feet) shown with or without zoom magnification, and with a large (208 cm/82" diagonal, 69°) or a typical (84 cm/33", 31°) screen. Head position was tracked and a custom questionnaire was used to measure discomfort. Results: Video comprehension was better with the large screen (P < 0.001) and slightly worse with zoom magnification (P = 0.03). Oddly, head movements did not vary with screen size (P = 0.63), yet were greater with zoom magnification (P = 0.001). This finding was unexpected, because the COI remains in the center with zoom magnification, but moves widely with a large screen and no magnification. Conclusions: This initial attempt to implement the zoom magnification method had flaws that may have decreased its effectiveness. In the future, we propose alternative implementations for zoom magnification, such as variable magnification. Translational Relevance: We present the first explicit demonstration that relative-size magnification improves the video comprehension of people with CVL when viewing video.
Saeedi O, Boland MV, D'Acunto L, Swamy R, Hegde V, Gupta S, Venjara A, Tsai J, Myers JS, Wellik SR, DeMoraes G, Pasquale LR, Shen LQ, Li Y, Elze T. Development and Comparison of Machine Learning Algorithms to Determine Visual Field Progression. Transl Vis Sci Technol 2021;10(7):27.
Purpose: To develop and test machine learning classifiers (MLCs) for determining visual field progression. Methods: In total, 90,713 visual fields from 13,156 eyes were included. Six different progression algorithms (linear regression of mean deviation, linear regression of the visual field index, Advanced Glaucoma Intervention Study algorithm, Collaborative Initial Glaucoma Treatment Study algorithm, pointwise linear regression [PLR], and permutation of PLR) were applied to classify each eye as progressing or stable. Six MLCs were applied (logistic regression, random forest, extreme gradient boosting, support vector classifier, convolutional neural network, fully connected neural network) using a training and testing set. For MLC input, visual fields for a given eye were divided into the first and second half and each location averaged over time within each half. Each algorithm was tested for accuracy, sensitivity, positive predictive value, and class bias with a subset of visual fields labeled by a panel of three experts from 161 eyes. Results: MLCs had similar performance metrics as some of the conventional algorithms and ranged from 87% to 91% accurate with sensitivity ranging from 0.83 to 0.88 and specificity from 0.92 to 0.96. All conventional algorithms showed significant class bias, meaning each individual algorithm was more likely to grade uncertain cases as either progressing or stable (P ≤ 0.01). Conversely, all MLCs were balanced, meaning they were equally likely to grade uncertain cases as either progressing or stable (P ≥ 0.08). Conclusions: MLCs showed a moderate to high level of accuracy, sensitivity, and specificity and were more balanced than conventional algorithms. Translational Relevance: MLCs may help to determine visual field progression.
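The classifier input described above (each eye's visual-field series split into first and second halves, with each location averaged within a half) can be sketched as follows; the array names and shapes are our assumptions, not the paper's code.

```python
import numpy as np

def mlc_features(fields):
    # fields: (n_visits, n_locations) array of sensitivities over time.
    # Average each location within the first and the second half of the
    # visits, then concatenate the two averaged fields as MLC input.
    half = len(fields) // 2
    first_half = fields[:half].mean(axis=0)
    second_half = fields[half:].mean(axis=0)
    return np.concatenate([first_half, second_half])

# Four visits, two locations: a progressing eye shows lower average
# sensitivities in the second half than in the first.
fields = np.array([[30.0, 28.0], [29.0, 27.0], [25.0, 24.0], [24.0, 23.0]])
features = mlc_features(fields)
```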
Wiegand I, Westenberg E, Wolfe JM. Order, please! Explicit sequence learning in hybrid search in younger and older age. Mem Cognit 2021;49(6):1220-1235.
Sequence learning effects in simple perceptual and motor tasks are largely unaffected by normal aging. However, less is known about sequence learning in more complex cognitive tasks that involve attention and memory processes and how this changes with age. In this study, we examined whether incidental and intentional sequence learning would facilitate hybrid visual and memory search in younger and older adults. Observers performed a hybrid search task, in which they memorized four or 16 target objects and searched for any of those target objects in displays with four or 16 objects. The memorized targets appeared either in a repeating sequential order or in random order. In the first experiment, observers were not told about the sequence before the experiment. Only a subset of younger adults and none of the older adults incidentally learned the sequence. The "learners" acquired explicit knowledge about the sequence and searched faster in the sequence compared to random condition. In the second experiment, observers were told about the sequence before the search task. Both younger and older adults searched faster in sequence blocks than random blocks. Older adults, however, showed this sequence-learning effect only in blocks with smaller target sets. Our findings indicate that explicit sequence knowledge can facilitate hybrid search, as it allows observers to predict the next target and restrict their visual and memory search. In older age, the sequence-learning effect is constrained by load, presumably due to age-related decline in executive functions.
Costela FM, Reeves SM, Woods RL. An implementation of Bubble Magnification did not improve the video comprehension of individuals with central vision loss. Ophthalmic Physiol Opt 2021;41(4):842-852.
PURPOSE: People with central vision loss (CVL) watch television, videos and movies, but often report difficulty and have reduced video comprehension. An approach to assist viewing videos is electronic magnification of the video itself, such as Bubble Magnification. METHODS: We created a Bubble Magnification technique that displayed a magnified segment around the centre of interest (COI) as determined by the gaze of participants with normal vision. The 15 participants with CVL viewed video clips shown with 2× and 3× Bubble Magnification, and unedited. We measured video comprehension and gaze coherence. RESULTS: Video comprehension was significantly worse with both 2× (p = 0.01) and 3× Bubble Magnification (p < 0.001) than the unedited video. There was no difference in gaze coherence across conditions (p ≥ 0.58). This was unexpected because we expected a benefit in both video comprehension and gaze coherence. This initial attempt to implement the Bubble Magnification method had flaws that probably reduced its effectiveness. CONCLUSIONS: In the future, we propose alternative implementations of Bubble Magnification, such as variable magnification and bubble size. This study is a first step in the development of an intelligent-magnification approach to providing a vision rehabilitation aid to assist people with CVL.
Wolfe JM. Guided Search 6.0: An updated model of visual search. Psychon Bull Rev 2021;28(4):1060-1092.
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual field (FVFs) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
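A minimal sketch of how the five preattentive guidance sources named above could be combined into a spatial priority map; the linear weighting is our simplification for illustration, not the actual GS6 equations.

```python
import numpy as np

def priority_map(top_down, bottom_up, history, reward, scene, weights):
    # Weighted sum of the five guidance maps; selective attention is
    # then deployed to the most active location of the result.
    sources = np.stack([top_down, bottom_up, history, reward, scene])
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (w * sources).sum(axis=0)

shape = (4, 4)
maps = [np.zeros(shape) for _ in range(5)]
maps[1][2, 3] = 1.0  # a strong bottom-up (salience) signal
combined = priority_map(*maps, weights=[1.0, 2.0, 0.5, 0.5, 1.0])
peak = np.unravel_index(np.argmax(combined), shape)
```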
Swan G, Savage SW, Zhang L, Bowers AR. Driving With Hemianopia VII: Predicting Hazard Detection With Gaze and Head Scan Magnitude. Transl Vis Sci Technol 2021;10(1):20.
Purpose: One rehabilitation strategy taught to individuals with hemianopic field loss (HFL) is to make a large blind side scan to quickly identify hazards. However, it is not clear what the minimum threshold is for how large the scan should be. Using driving simulation, we evaluated thresholds (criteria) for gaze and head scan magnitudes that best predict detection safety. Methods: Seventeen participants with complete HFL and 15 with normal vision (NV) drove through 4 routes in a virtual city while their eyes and head were tracked. Participants pressed the horn as soon as they detected a motorcycle (10 per drive) that appeared 54 degrees eccentricity on cross-streets and approached toward the driver. Results: Those with HFL detected fewer motorcycles than those with NV and had worse detection on the blind side than the seeing side. On the blind side, both safe detections and early detections (detections before the hazard entered the intersection) could be predicted with both gaze (safe 18.5 degrees and early 33.8 degrees) and head (safe 19.3 degrees and early 27 degrees) scans. However, on the seeing side, only early detections could be classified with gaze (25.3 degrees) and head (9.0 degrees). Conclusions: Both head and gaze scan magnitude were significant predictors of detection on the blind side, but less predictive on the seeing side, which was likely driven by the ability to use peripheral vision. Interestingly, head scans were as predictive as gaze scans. Translational Relevance: The minimum scan magnitude could be a useful criterion for scanning training or for developing assistive technologies to improve scanning.
Bennett CR, Bex PJ, Merabet LB. Assessing visual search performance using a novel dynamic naturalistic scene. J Vis 2021;21(1):5.
Daily activities require the constant searching and tracking of visual targets in dynamic and complex scenes. Classic work assessing visual search performance has been dominated by the use of simple geometric shapes, patterns, and static backgrounds. Recently, there has been a shift toward investigating visual search in more naturalistic dynamic scenes using virtual reality (VR)-based paradigms. In this direction, we have developed a first-person perspective VR environment combined with eye tracking for the capture of a variety of objective measures. Participants were instructed to search for a preselected human target walking in a crowded hallway setting. Performance was quantified based on saccade and smooth pursuit ocular motor behavior. To assess the effect of task difficulty, we manipulated factors of the visual scene, including crowd density (i.e., number of surrounding distractors) and the presence of environmental clutter. In general, results showed a pattern of worsening performance with increasing crowd density. In contrast, the presence of visual clutter had no effect. These results demonstrate how visual search performance can be investigated using VR-based naturalistic dynamic scenes and with high behavioral relevance. This engaging platform may also have utility in assessing visual search in a variety of clinical populations of interest.
Benedi-Garcia C, Vinas M, Dorronsoro C, Burns SA, Peli E, Marcos S. Vision is protected against blue defocus. Sci Rep 2021;11(1):352.
Due to chromatic aberration, blue images are defocused when the eye is focused to the middle of the visible spectrum, yet we normally are not aware of chromatic blur. The eye suffers from monochromatic aberrations which degrade the optical quality of all images projected on the retina. The combination of monochromatic and chromatic aberrations is not additive and these aberrations may interact to improve image quality. Using Adaptive Optics, we investigated the optical and visual effects of correcting monochromatic aberrations when viewing polychromatic grayscale, green, and blue images. Correcting the eye's monochromatic aberrations improved optical quality of the focused green images and degraded the optical quality of defocused blue images, particularly in eyes with higher amounts of monochromatic aberrations. Perceptual judgments of image quality tracked the optical findings, but the perceptual impact of the monochromatic aberrations correction was smaller than the optical predictions. The visual system appears to be adapted to the blur produced by the native monochromatic aberrations, and possibly to defocus in blue.
Keilty M, Houston KE, Collins C, Trehan R, Chen Y-T, Merabet L, Watts A, Pundlik S, Luo G. Inpatient Virtual Vision Clinic Improves Access to Vision Rehabilitation Before and During the COVID-19 Pandemic. Arch Rehabil Res Clin Transl 2021;3(1):100100.
Objective: To describe and evaluate a secure video call system combined with a suite of iPad vision testing apps to improve access to vision rehabilitation assessment for inpatients. Design: Retrospective. Setting: Two acute care inpatient rehabilitation hospitals and 1 long-term acute care (LTAC) hospital. Participants: Records of inpatients seen by the vision service. Interventions: Records from a 1-year telemedicine pilot performed at acute rehabilitation (AR) hospital 1 and then expanded to AR hospital 2 and the LTAC hospital during coronavirus disease 2019 (COVID-19) were reviewed. In the virtual visits, an occupational therapist measured the patients' vision with the iPad applications and forwarded results to the off-site Doctor of Optometry (OD) for review prior to a video visit. The OD provided diagnosis and education, press-on prism application supervision, strategies and modifications, and follow-up recommendations. Providers completed the telehealth usability questionnaire (10-point scale). Main Outcome Measures: Vision examinations per month at AR hospital 1 before and with telemedicine. Results: With telemedicine at AR hospital 1, mean visits per month significantly increased from 10.7±5 to 14.9±5 (P=.002). Prism was trialed in 40% of cases, of which 83% were successful, similar to previously reported in-person success rates. COVID-19 caused only a marginal decrease in visits per month (P=.08) at AR hospital 1, whereas the site without an established program (AR hospital 2) had a 3-4 week gap in care while the program was initiated. Cases at the LTAC hospital tended to be more complex and difficult to manage virtually. Median category scores on the telehealth usability questionnaire ranged from 6 to 9. Conclusions: The virtual vision clinic process improved inpatient access to eye and visual neurorehabilitation assessment before and during the COVID-19 quarantine and was well accepted by providers and patients.
Swan G, Goldstein RB, Savage SW, Zhang L, Ahmadi A, Bowers AR. Automatic processing of gaze movements to quantify gaze scanning behaviors in a driving simulator. Behav Res Methods 2021;53(2):487-506.
Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, large gaze scans to the left and right, comprising head and multiple eye movements, are made. We detail an algorithm called the gaze scan algorithm that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades, then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming marking of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans and can be used to better understand how individuals scan their environment.
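The merging step of the gaze scan algorithm described above can be sketched like this: same-direction lateral saccades separated by a short gap are joined into one gaze scan. The gap threshold and the (start, end, direction) representation are illustrative assumptions, not the published parameters.

```python
def merge_saccades(saccades, max_gap=0.2):
    # saccades: time-ordered list of (start_s, end_s, direction) tuples.
    scans = []
    for sac in saccades:
        if (scans and sac[2] == scans[-1][2]
                and sac[0] - scans[-1][1] <= max_gap):
            # Same direction and close in time: extend the current scan,
            # so its start/end mark the whole scan in time.
            scans[-1] = (scans[-1][0], sac[1], sac[2])
        else:
            scans.append(sac)
    return scans

# Two leftward saccades 50 ms apart merge into one leftward gaze scan;
# the later rightward saccade starts a new scan.
scans = merge_saccades([(0.0, 0.10, "L"), (0.15, 0.30, "L"), (1.0, 1.1, "R")])
```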
Moharrer M, Tang X, Luo G. With Motion Perception, Good Visual Acuity May Not Be Necessary for Driving Hazard Detection. Transl Vis Sci Technol 2020;9(13):18.
Purpose: To investigate the roles of motion perception and visual acuity in driving hazard detection. Methods: Detection of driving hazard was tested based on video and still-frames of real-world road scenes. In the experiment using videos, 20 normally sighted participants were tested under four conditions: with or without motion interruption by interframe mask, and with or without simulated low visual acuity (20/120 on average) by using a diffusing filter. Videos were down-sampled to 2.5 Hz, to allow the addition of motion interrupting masks between the frames to maintain video durations. In addition, single still frames extracted from the videos were shown in random order to eight normally sighted participants, who judged whether the frames were during ongoing hazards, with or without the diffuser. Sensitivity index d-prime (d') was compared between unmasked motion (n = 20) and still frame conditions (n = 8). Results: In the experiment using videos, there was a significant reduction in a combined performance score (taking account of reaction time and detection rate) when the motion was disrupted (P = 0.016). The diffuser did not affect the scores (P = 0.419). The score reduction was mostly due to a decrease in the detection rate (P = 0.002), not the response time (P = 0.148). The d' of participants significantly decreased (P < 0.001) from 2.24 with unmasked videos to 0.68 with still frames. Low visual acuity also had a significant effect on the d' (P = 0.004), but the change was relatively small, from 2.03 without to 1.56 with the diffuser. Conclusions: Motion perception plays a more important role than visual acuity for detecting driving hazards. Translational Relevance: Motion perception may be a relevant criterion for fitness to drive.
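The sensitivity index d' compared above is the standard signal-detection measure, z(hit rate) − z(false-alarm rate); the rates in the example below are illustrative, not the study's raw data.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # z-transform (inverse normal CDF) each rate, then take the
    # difference: higher d' means better hazard discrimination.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A hit rate of 0.85 against a false-alarm rate of 0.15 gives a d'
# comparable to the unmasked-video condition reported above.
sensitivity = d_prime(0.85, 0.15)
```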
