Mobility Enhancement & Vision Rehabilitation Publications

Wiegand I, Wolfe JM. Age doesn't matter much: hybrid visual and memory search is preserved in older adults. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2019:1-34.
We tested younger and older observers' attention and long-term memory functions in a "hybrid search" task, in which observers look through visual displays for instances of any of several types of targets held in memory. Apart from a general slowing, search efficiency did not change with age. In both age groups, reaction times increased linearly with the visual set size and logarithmically with the memory set size, with similar relative costs of increasing load (Experiment 1). We replicated the finding and further showed that performance remained comparable between age groups when familiarity cues were made irrelevant (Experiment 2) and target-context associations were to be retrieved (Experiment 3). Our findings are at variance with theories of cognitive aging that propose age-specific deficits in attention and memory. As hybrid search resembles many real-world searches, our results might be relevant to improve the ecological validity of assessing age-related cognitive decline.
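
The reaction-time pattern reported above (linear in visual set size, logarithmic in memory set size) can be summarized in a single expression. The sketch below is illustrative only; the coefficients are invented and are not the study's fitted values.

```python
import math

# Illustrative reaction-time model for hybrid search; the coefficients below
# are invented for illustration, not the study's fitted values. RT grows
# linearly with the visual set size and logarithmically with the memory set size.

def predicted_rt_ms(visual_set_size, memory_set_size,
                    intercept_ms=500.0, ms_per_item=40.0, ms_per_doubling=150.0):
    return (intercept_ms
            + ms_per_item * visual_set_size
            + ms_per_doubling * math.log2(memory_set_size))

for v in (4, 8, 16):
    for m in (1, 4, 16):
        print(f"visual set {v:2d}, memory set {m:2d} -> {predicted_rt_ms(v, m):5.0f} ms")
```
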
Barrett AM, Houston KE. Update on the Clinical Approach to Spatial Neglect. Curr Neurol Neurosci Rep 2019;19(5):25.
PURPOSE OF REVIEW: Spatial neglect is asymmetric orienting and action after a brain lesion, causing functional disability. It is common after a stroke; however, it is vastly underdocumented and undertreated. This article addresses the implementation gap in identifying and treating spatial neglect, to reduce disability and improve healthcare costs and burden. RECENT FINDINGS: Professional organizations published recommendations to implement spatial neglect care. Physicians can lead an interdisciplinary team: functionally relevant spatial neglect assessment, evidence-based spatial retraining, and integrated spatial and vision interventions can optimize outcomes. Research also strongly suggests spatial neglect adversely affects motor systems. Spatial neglect therapy might thus "kick-start" rehabilitation and improve paralysis recovery. Clinicians can implement new techniques to detect spatial neglect and lead interdisciplinary teams to promote better, integrated spatial neglect care. Future studies of brain imaging biomarkers to detect spatial neglect, and real-world applicability of prism adaptation treatment, are needed.
Ichhpujani P, Singh RB, Foulsham W, Thakur S, Lamba AS. Visual implications of digital device usage in school children: a cross-sectional study. BMC Ophthalmol 2019;19(1):76.
PURPOSE: To evaluate the use of digital devices, reading habits and the prevalence of eyestrain among urban Indian school children, aged 11-17 years. METHODS: The study included 576 adolescents attending urban schools who were surveyed regarding their electronic device usage. Additional information on the factors that may have an effect on ocular symptoms was collected. RESULTS: Twenty percent of students aged 11 in the study population use digital devices on a daily basis, in comparison with 50% of students aged 17. In addition to using these devices as homework aids, one third of study participants reported using digital devices for reading instead of conventional textbooks. The majority of students preferred sitting on a chair while reading (77%; 445 students), with only 21% (123 students) preferring to lie on the bed and 8 students alternating between chair and bed. There was a significant association between the students who preferred to lie down and those who experienced eyestrain, as reported by a little over one fourth of the student population (27%). Out of 576 students, 18% (103) experienced eyestrain at the end of the day after working on digital devices. CONCLUSIONS: The increased use of digital devices by adolescents brings a new challenge of digital eyestrain at an early age. Our study reports the patterns of electronic device usage by school children, evaluates factors associated with eyestrain and highlights the need for further investigation of these issues.
Saeedi OJ, Elze T, D'Acunto L, Swamy R, Hegde V, Gupta S, Venjara A, Tsai J, Myers JS, Wellik SR, De Moraes CG, Pasquale LR, Shen LQ, Boland MV. Agreement and Predictors of Discordance of 6 Visual Field Progression Algorithms. Ophthalmology 2019;126(6):822-828.
PURPOSE: To determine the agreement of 6 established visual field (VF) progression algorithms in a large dataset of VFs from multiple institutions and to determine predictors of discordance among these algorithms. DESIGN: Retrospective longitudinal cohort study. PARTICIPANTS: Visual fields from 5 major eye care institutions in the United States were analyzed, including a subset of eyes with at least 5 Swedish interactive threshold algorithm standard 24-2 VFs that met our reliability criteria. Of a total of 831 240 VFs, a subset of 90 713 VFs from 13 156 eyes of 8499 patients met the inclusion criteria. METHODS: Six commonly used VF progression algorithms (mean deviation [MD] slope, VF index slope, Advanced Glaucoma Intervention Study, Collaborative Initial Glaucoma Treatment Study, pointwise linear regression, and permutation of pointwise linear regression) were applied to this cohort, and each eye was determined to be stable or progressing using each measure. Agreement between individual algorithms was tested using Cohen's κ coefficient. Bivariate and multivariate analyses were used to determine predictors of discordance (3 algorithms progressing and 3 algorithms stable). MAIN OUTCOME MEASURES: Agreement and discordance between algorithms. RESULTS: Individual algorithms showed poor to moderate agreement with each other when compared directly (κ range, 0.12-0.52). Based on at least 4 algorithms, 11.7% of eyes progressed. Major predictors of discordance or lack of agreement among algorithms were more depressed initial MD (P < 0.01) and older age at first available VF (P < 0.01). A greater number of VFs (P < 0.01), more years of follow-up (P < 0.01), and eye care institution (P = 0.03) also were associated with discordance. CONCLUSIONS: This extremely large comparative series demonstrated that existing algorithms have limited agreement and that agreement varies with clinical parameters, including institution. These issues underscore the challenges to the clinical use and application of progression algorithms and of applying big-data results to individual practices.
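
The agreement analysis described above rests on Cohen's κ between pairs of progression calls. Below is a minimal sketch of that computation using hypothetical stable/progressing labels; it is not the study's data or the authors' code.

```python
# Minimal sketch: Cohen's kappa for two hypothetical VF progression algorithms,
# each labeling eyes as 0 = stable or 1 = progressing.
# Labels are illustrative only; not the study data or the authors' code.

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    p_a1 = sum(a) / n                                 # P(algorithm A calls "progressing")
    p_b1 = sum(b) / n                                 # P(algorithm B calls "progressing")
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)       # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

md_slope = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0]   # e.g., MD-slope calls per eye (hypothetical)
plr      = [0, 1, 1, 0, 0, 1, 0, 0, 1, 1]   # e.g., pointwise linear regression calls (hypothetical)

print(f"kappa = {cohens_kappa(md_slope, plr):.2f}")
```
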
Costela FM, Saunders DR, Rose DJ, Kajtezovic S, Reeves SM, Woods RL. People With Central Vision Loss Have Difficulty Watching Videos. Invest Ophthalmol Vis Sci 2019;60(1):358-364.
Purpose: People with central vision loss (CVL) often report difficulties watching video. We objectively evaluated the ability to follow the story (using the information acquisition method). Methods: Subjects with CVL (n = 23) or normal vision (NV, n = 60) described the content of 30-second video clips from movies and documentaries. We derived an objective information acquisition (IA) score for each response using natural-language processing. To test whether the impact of CVL was simply due to reduced resolution, another group of NV subjects (n = 15) described video clips with defocus blur that reduced visual acuity to 20/50 to 20/800. Mixed models included random effects correcting for differences between subjects and between the clips, with age, gender, cognitive status, and education as covariates. Results: Compared to both NV groups, IA scores were worse for the CVL group (P < 0.001). IA scores decreased with worsening visual acuity (P < 0.001), and this decrease was greater for the CVL group than for the NV-defocus group (P = 0.01), with the discrepancy widening at worse levels of visual acuity. Conclusions: The IA method was able to detect difficulties in following the story experienced by people with CVL. Defocus blur failed to recreate the CVL experience. IA is likely to be useful for evaluations of the effects of vision rehabilitation.
Savage SW, Spano LP, Bowers AR. The effects of age and cognitive load on peripheral-detection performance. J Vis 2019;19(1):15.
Age-related declines in both peripheral vision and cognitive resources could contribute to the increased crash risk of older drivers. However, it is unclear whether increases in age and cognitive load result in equal detriments to detection rates across all peripheral target eccentricities (general interference effect) or whether these detriments become greater with increasing eccentricity (tunnel effect). In the current study we investigated the effects of age and cognitive load on the detection of peripheral motorcycle targets (at 5°-30° eccentricity) in static images of intersections. We used a dual-task paradigm in which cognitive load was manipulated without changing the complexity of the central (foveal) visual stimulus. Each image was displayed briefly (250 ms) to prevent eye movements. When no cognitive load was present, age resulted in a tunnel effect; however, when cognitive load was high, age resulted in a general interference effect. These findings suggest that tunnel and general interference effects can co-occur and that the predominant effect varies with the level of demand placed on participants' resources. High cognitive load had a general interference effect in both age groups, but the effect attenuated at large target eccentricities (opposite of a tunnel effect). Low cognitive load had a general interference effect in the older but not the younger group, impairing detection of motorcycle targets even at 5° eccentricity, which could present an imminent collision risk in real driving.
Wolfe JM, Cain MS, Aizenman AM. Guidance and selection history in hybrid foraging visual search. Atten Percept Psychophys 2019;81(3):637-653.
In Hybrid Foraging tasks, observers search for multiple instances of several types of target. Collecting all the dirty laundry and kitchenware out of a child's room would be a real-world example. How are such foraging episodes structured? A series of four experiments shows that selection of one item from the display makes it more likely that the next item will be of the same type. This pattern holds if the targets are defined by basic features like color and shape but not if they are defined by their identity (e.g., the letters p & d). Additionally, switching between target types during search is expensive in time, with longer response times between successive selections if the target type changes than if they are the same. Finally, the decision to leave a screen/patch for the next screen in these foraging tasks is imperfectly consistent with the predictions of optimal foraging theory. The results of these hybrid foraging studies cast new light on the ways in which prior selection history guides subsequent visual search in general.
Selivanova A, Fenwick E, Man R, Seiple W, Jackson ML. Outcomes After Comprehensive Vision Rehabilitation Using Vision-related Quality of Life Questionnaires: Impact of Vision Impairment and National Eye Institute Visual Functioning Questionnaire. Optom Vis Sci 2019;96(2):87-94.
SIGNIFICANCE: This research is significant because, although vision-related quality of life (VRQoL) is improved after vision rehabilitation (VR), patients with certain characteristics respond less positively on VRQoL measures, and this should inform future care. PURPOSE: The purposes of this study were to evaluate how two VRQoL questionnaires compare in measuring change in patient-reported outcomes after VR and to determine if patient characteristics or occupational therapy (OT) predict higher scores after rehabilitation. METHODS: In a prospective clinical cohort study, 109 patients with low vision completed the Impact of Vision Impairment (IVI) and the National Eye Institute Visual Functioning Questionnaire (NEI VFQ-25) before and after VR. Comprehensive VR included consultation with an ophthalmologist and OT if required. The relationships of six baseline characteristics (age, sex, visual acuity, contrast sensitivity, field loss, diagnosis) and OT were assessed with VRQoL scores using multivariable logistic regression. RESULTS: The mean (SD) age was 68.5 (19.2) years, and 61 (56%) were female. After rehabilitation, increases in scores were observed in all IVI subscales (reading [P < .001], mobility [P = .002], well-being [P = .0003]) and all NEI VFQ-25 subscales (functional [P = .01], socioemotional [P = .003]). Those who were referred to OT but did not attend and those who had hemianopia/field loss were less likely to have higher VRQoL in IVI mobility and well-being. Those attending OT for more than 3 hours were less likely to have better scores in emotional NEI VFQ. Men were less likely to have increased scores in functional and emotional NEI VFQ, whereas those with diagnoses of nonmacular diseases had higher odds of having increased scores on the emotional NEI VFQ (all, P < .05). CONCLUSION: Both the IVI and the NEI VFQ-25 detected change in patients' VRQoL after rehabilitation. Most of the patient characteristics we considered predicted a lower likelihood of increased scores in VRQoL.
Bernstein CA, Nir R-R, Noseda R, Fulton AB, Huntington S, Lee AJ, Bertisch SM, Hovaguimian A, Buettner C, Borsook D, Burstein R. The migraine eye: distinct rod-driven retinal pathways' response to dim light challenges the visual cortex hyperexcitability theory. Pain 2019;160(3):569-578.
Migraine-type photophobia, most commonly described as exacerbation of headache by light, affects nearly 90% of the patients. It is the most bothersome symptom accompanying an attack. Using subjective psychophysical assessments, we showed that migraine patients are more sensitive to all colors of light during the ictal than during the interictal phase and that control subjects do not experience pain when exposed to different colors of light. Based on these findings, we suggested that color preference is unique to migraineurs (as it was not found in control subjects) rather than migraine phase (as it was found in both phases). To identify the origin of this photophobia in migraineurs, we compared the electrical waveforms that were generated in the retina and visual cortex of 46 interictal migraineurs to those generated in 42 healthy controls using color-based electroretinography and visual-evoked potential paradigms. Unexpectedly, it was the amplitude of the retinal rod-driven b wave, which was consistently larger (by 14%-19% in the light-adapted and 18%-34% in the dark-adapted flash ERG) in the migraineurs than in the controls, rather than the retinal cone-driven a wave or the visual-evoked potentials, that differed most strikingly between the 2 groups. Mechanistically, these findings suggest that the inherent hypersensitivity to light among migraine patients may originate in the retinal rods rather than retinal cones or the visual cortex. Clinically, the findings may explain why migraineurs complain that the light is too bright even when it is dim to the extent that nonmigraineurs feel as if they are in a cave.
Costela FM, Woods RL. When Watching Video, Many Saccades Are Curved and Deviate From a Velocity Profile Model. Front Neurosci 2018;12:960.
Commonly, saccades are thought to be ballistic eye movements, not modified during flight, with a straight path and a well-described velocity profile. However, they do not always follow a straight path, and studies of saccade curvature have been reported previously. In a prior study, we developed a real-time, saccade-trajectory prediction algorithm to improve the updating of gaze-contingent displays and found that saccades with a curved path or that deviated from the expected velocity profile were not well fit by our saccade-prediction algorithm (velocity-profile deviation), and thus had larger updating errors than saccades that had a straight path and had a velocity profile that was fit well by the model. Further, we noticed that the curved saccades and saccades with high velocity-profile deviations were more common than we had expected when participants performed a natural-viewing task. Since those saccades caused larger display updating errors, we sought a better understanding of them. Here we examine factors that could affect curvature and velocity profile of saccades using a pool of 218,744 saccades from 71 participants watching "Hollywood" video clips. Those factors included characteristics of the participants (e.g., age), of the videos (importance of faces for following the story, genre), of the saccade (e.g., magnitude, direction), time during the session (e.g., fatigue), and presence and timing of scene cuts. While participants viewed the video clips, saccades were more likely to be horizontal or vertical than oblique. Measured curvature and velocity-profile deviation had continuous, skewed frequency distributions. We used mixed-effects regression models that included cubic terms and found a complex relationship between curvature, velocity-profile deviation and saccade duration (or magnitude). Curvature and velocity-profile deviation were related to some video-dependent features such as lighting, face presence, or nature and human figure content. Time during the session was a predictor for velocity-profile deviations. Further, in univariable models, saccades that were in flight at the time of a scene cut had higher velocity-profile deviations and lower curvature. Saccade characteristics vary with a variety of factors, suggesting complex interactions between oculomotor control and scene content that could be explored further.
Wolfe JM, Utochkin IS. What is a preattentive feature? Curr Opin Psychol 2018;29:19-26.
The concept of a preattentive feature has been central to vision and attention research for about half a century. A preattentive feature is a feature that guides attention in visual search and that cannot be decomposed into simpler features. While that definition seems straightforward, there is no simple diagnostic test that infallibly identifies a preattentive feature. This paper briefly reviews the criteria that have been proposed and illustrates some of the difficulties of definition.
Gao Z, Hwang A, Zhai G, Peli E. Correcting geometric distortions in stereoscopic 3D imaging. PLoS One 2018;13(10):e0205032.
Motion in a distorted virtual 3D space may cause visually induced motion sickness. Geometric distortions in stereoscopic 3D can result from mismatches among image capture, display, and viewing parameters. Three pairs of potential mismatches are considered, including 1) camera separation vs. eye separation, 2) camera field of view (FOV) vs. screen FOV, and 3) camera convergence distance (i.e., distance from the cameras to the point where the convergence axes intersect) vs. screen distance from the observer. The effect of the viewer's head position (i.e., head lateral offset from the screen center) is also considered. The geometric model is expressed as a function of camera convergence distance, the ratios of the three parameter-pairs, and the offset of the head position. We analyze the separate impacts of these five variables, and their interactions, on geometric distortions. This model facilitates insights into the various distortions and leads to methods whereby the user can minimize geometric distortions caused by some parameter-pair mismatches by adjusting other parameter pairs. For example, in postproduction, viewers can correct for a mismatch between camera separation and eye separation by adjusting their distance from the real screen and changing the effective camera convergence distance.
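
The abstract summarizes the distortion model without reproducing it. As a rough illustration of the kind of geometry involved (standard stereoscopic viewing relations, not the authors' model), perceived depth can be computed from screen disparity, eye separation, and viewing distance:

```python
# Rough illustration of standard stereoscopic viewing geometry (not the paper's
# model): perceived distance of a fused point from the viewer, given the
# on-screen disparity d, the eye separation e, and the viewing distance V.
# From similar triangles: Z_p = V * e / (e - d); d = 0 puts the point on the
# screen, and d -> e pushes it to infinity. All numeric values are assumptions.

def perceived_distance(viewing_distance_m, eye_sep_m, disparity_m):
    return viewing_distance_m * eye_sep_m / (eye_sep_m - disparity_m)

V = 2.0      # viewer-to-screen distance in meters (assumed)
e = 0.065    # typical interocular separation in meters (assumed)

for d_cm in (0.0, 1.0, 3.0, 5.0):
    z = perceived_distance(V, e, d_cm / 100.0)
    print(f"disparity {d_cm:3.1f} cm -> perceived distance {z:5.2f} m")
```

Because perceived distance is nonlinear in disparity, mismatched capture and viewing parameters do not simply rescale the scene; they warp it, which is the kind of distortion the model above characterizes.
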
Qiu C, Jung J-H, Tuccar-Burak M, Spano L, Goldstein R, Peli E. Measuring Pedestrian Collision Detection With Peripheral Field Loss and the Impact of Peripheral Prisms. Transl Vis Sci Technol 2018;7(5):1.
Purpose: Peripheral field loss (PFL) due to retinitis pigmentosa, choroideremia, or glaucoma often results in a highly constricted residual central field, which makes it difficult for patients to avoid collision with approaching pedestrians. We developed a virtual environment to evaluate the ability of patients to detect pedestrians and judge potential collisions. We validated the system with both PFL patients and normally sighted subjects with simulated PFL. We also tested whether properly placed high-power prisms may improve pedestrian detection. Methods: A virtual park-like open space was rendered using a driving simulator (configured for walking speeds), and pedestrians in testing scenarios appeared within and outside the residual central field. Nine normally sighted subjects and eight PFL patients performed the pedestrian detection and collision judgment tasks. The performance of the subjects with simulated PFL was further evaluated with field-of-view-expanding prisms. Results: The virtual system for testing pedestrian detection and collision judgment was validated. The performance of PFL patients and normally sighted subjects with simulated PFL was similar. The prisms for simulated PFL improved detection rates, reduced detection response times, and supported reasonable collision judgments in the prism-expanded field; detections and collision judgments in the residual central field were not influenced negatively by the prisms. Conclusions: The scenarios in a virtual environment are suitable for evaluating PFL and the impact of field-of-view-expanding devices. Translational Relevance: This study validated an objective means to evaluate field expansion devices in reproducible near-real-life settings.
Shi C, Luo G. A Streaming Motion Magnification Core for Smart Image Sensors. IEEE Trans Circuits Syst II Express Briefs 2018;65(9):1229-1233.
This paper proposes a modified Eulerian Video Magnification (EVM) algorithm and a hardware implementation of a motion magnification core for smart image sensors. Compared to the original EVM algorithm, we perform the pixel-wise temporal bandpass filtering only once rather than multiple times on all scale layers, to reduce the memory and multiplier requirements for hardware implementation. A pixel stream processing architecture with pipelined blocks is proposed for the magnification core, enabling it to readily fit common image sensing components with streaming pixel output, while achieving higher performance with lower system cost. We implemented an FPGA-based prototype that is able to process up to 90M pixels per second and magnify subtle motion. The motion magnification results are comparable to those of the original algorithm running on a PC.
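
The core operation described above is a pixel-wise temporal bandpass filter whose output is amplified and added back to the frame. A minimal software sketch of that idea, assuming a simple difference-of-low-pass IIR filter (illustrative only, not the paper's FPGA pipeline):

```python
import numpy as np

# Minimal sketch of Eulerian-style, pixel-wise motion magnification on a frame
# stream: a temporal bandpass signal (difference of two first-order IIR
# low-pass filters) is amplified and added back to each frame. Parameters and
# structure are illustrative assumptions, not the paper's FPGA design.

def magnify_stream(frames, alpha=20.0, r_fast=0.4, r_slow=0.05):
    """frames: iterable of 2-D float arrays in [0, 1]; yields magnified frames."""
    lp_fast = lp_slow = None
    for frame in frames:
        if lp_fast is None:
            lp_fast, lp_slow = frame.copy(), frame.copy()
        # First-order IIR low-pass filters with different cutoffs.
        lp_fast += r_fast * (frame - lp_fast)
        lp_slow += r_slow * (frame - lp_slow)
        band = lp_fast - lp_slow                   # temporal bandpass component
        yield np.clip(frame + alpha * band, 0.0, 1.0)

# Toy usage: a barely visible global flicker becomes obvious after magnification.
frames = (np.full((64, 64), 0.5 + 0.005 * np.sin(0.3 * t)) for t in range(100))
for out in magnify_stream(frames):
    pass  # in practice, send each frame to a display or encoder here
```

Filtering each pixel once, in stream order, is what lets such a core sit directly behind a sensor's pixel output without frame buffers for multiple scale layers.
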
Palmer EM, Van Wert MJ, Horowitz TS, Wolfe JM. Measuring the time course of selection during visual search. Atten Percept Psychophys 2018.
In visual search tasks, observers can guide their attention towards items in the visual field that share features with the target item. In this series of studies, we examined the time course of guidance toward a subset of items that have the same color as the target item. Landolt Cs were placed on 16 colored disks. Fifteen distractor Cs had gaps facing up or down while one target C had a gap facing left or right. Observers searched for the target C and reported which side contained the gap as quickly as possible. In the absence of other information, observers must search at random through the Cs. However, during the trial, the disks changed colors. Twelve disks were now of one color and four disks were of another color. Observers knew that the target C would always be in the smaller color set. The experimental question was how quickly observers could guide their attention to the smaller color set. Results indicate that observers could not make instantaneous use of color information to guide the search, even when they knew which two colors would be appearing on every trial. In each study, it took participants 200-300 ms to fully utilize the color information once presented. Control studies replicated the finding with more saturated colors and with colored C stimuli (rather than Cs on colored disks). We conclude that segregation of a display by color for the purposes of guidance takes 200-300 ms to fully develop.
