Mobility Enhancement & Vision Rehabilitation

Costela FM, Saunders DR, Kajtezovic S, Rose DJ, Woods RL. Measuring the Difficulty Watching Video With Hemianopia and an Initial Test of a Rehabilitation Approach. Transl Vis Sci Technol 2018;7(4):13.
Purpose: If you cannot follow the story when watching a video, the viewing experience is degraded. We measured the difficulty of following the story, defined as the ability to acquire visual information, experienced by people with homonymous hemianopia (HH). Further, we proposed and tested a novel rehabilitation aid. Methods: Participants watched 30-second directed video clips. Following each clip, participants described its visual content. An objective score of information acquisition (IA) was derived by comparing each new response to a control database of descriptions of the same clip using natural language processing. Study 1 compared 60 participants with normal vision (NV) to 24 participants with HH to test the hypothesis that participants with HH would score lower than NV participants, consistent with reports from people with HH describing difficulties in video watching. In the second study, 21 participants with HH viewed clips with or without a superimposed dynamic cue that we called a content guide. We hypothesized that IA scores would increase with this content guide. Results: The HH group had a significantly lower IA score, averaging 2.8 shared words compared with 4.3 for the NV group (mixed-effects regression, P < 0.001). Presence of the content guide significantly increased the IA score by 0.5 shared words (P = 0.03). Conclusions: Participants with HH had more difficulty acquiring information from a video, which was objectively demonstrated by a reduced IA score. The content guide improved information acquisition, but not to the level of people with NV. Translational Relevance: The value of the content guide as a possible rehabilitation aid warrants further study involving an extended period of content-guide use and a randomized controlled trial.
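The shared-words IA metric lends itself to a simple illustration. The sketch below is a hypothetical bag-of-words version: the stopword list, tokenizer, and the `ia_score` helper are illustrative assumptions, not the authors' actual NLP pipeline.

```python
# Hypothetical shared-word information-acquisition (IA) score: compare a
# new description against a control database of descriptions of the same
# clip, and report the mean number of shared content words.
STOPWORDS = {"the", "a", "an", "and", "is", "in", "of", "to"}  # assumed list

def content_words(description: str) -> set:
    """Lower-case, whitespace-tokenize, and drop stopwords."""
    return {w for w in description.lower().split() if w not in STOPWORDS}

def ia_score(response: str, control_descriptions: list) -> float:
    """Mean count of content words shared with each control description."""
    resp = content_words(response)
    shared = [len(resp & content_words(c)) for c in control_descriptions]
    return sum(shared) / len(shared)

controls = ["a man walks a dog in the park", "a dog and a man in a park"]
score = ia_score("a man walking his dog", controls)
```

In this toy example the response shares two content words ("man", "dog") with each control description, so the IA score is 2.0, mirroring the "shared words" units reported in the Results.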
Jung J-H, Peli E. Field Expansion for Acquired Monocular Vision Using a Multiplexing Prism. Optom Vis Sci 2018;95(9):814-828.
SIGNIFICANCE: Acquired monocular vision (AMV) is a common visual field loss. Patients report mobility difficulties in walking due to collisions with objects or other pedestrians on the blind side. PURPOSE: The visual field of people with AMV extends more than 90° temporally on the side of the seeing eye but is restricted to approximately 55° nasally. We developed a novel field expansion device using a multiplexing prism (MxP) that superimposes the see-through and shifted views for true field expansion without apical scotoma. We present various designs of the device that enable customized fitting and improved cosmetics. METHODS: A partial MxP segment is attached (base-in) near the nose bridge. To avoid total internal reflection due to the high angle of incidence at the nasal field end (55°), we fit the MxP with serrations facing the eye and tilt the prism base toward the nose. We calculated the width of the MxP (the apex location) needed to prevent apical scotoma and monocular diplopia. We also consider the effect of spectacle prescriptions on these settings. The results were verified perimetrically. RESULTS: We documented the effectiveness of various prototype glasses designs with perimetric measurements. With the prototypes, all patients with AMV had field-of-view expansions up to 90° nasally without any loss of seeing field. CONCLUSIONS: The novel and properly mounted MxP in glasses has the potential for meaningful field-of-view expansion up to the size of normal binocular vision in a cosmetically acceptable form.
Singh AK, Phillips F, Merabet LB, Sinha P. Why Does the Cortex Reorganize after Sensory Loss? Trends Cogn Sci 2018;22(7):569-582.
A growing body of evidence demonstrates that the brain can reorganize dramatically following sensory loss. Although the existence of such neuroplastic crossmodal changes is not in doubt, the functional significance of these changes remains unclear. The dominant belief is that reorganization is compensatory. However, results thus far do not unequivocally indicate that sensory deprivation results in markedly enhanced abilities in other senses. Here, we consider alternative reasons besides sensory compensation that might drive the brain to reorganize after sensory loss. One such possibility is that the cortex reorganizes not to confer functional benefits, but to avoid undesirable physiological consequences of sensory deafferentation. Empirical assessment of the validity of this and other possibilities defines a rich program for future research.
Shi C, Yuan X, Chang K, Cho K-S, Xie XS, Chen DF, Luo G. Optimization of Optomotor Response-based Visual Function Assessment in Mice. Sci Rep 2018;8(1):9708.
Optomotor response/reflex (OMR) assays are emerging as a powerful and versatile tool for phenotypic study and new drug discovery for eye and brain disorders. Yet efficient OMR assessment for visual performance in mice remains a challenge. Existing OMR testing devices for mice require a lengthy procedure and may be subject to bias due to use of artificial criteria. We developed an optimized staircase protocol that utilizes mouse head pausing behavior as a novel indicator for the absence of OMR, to allow rapid and unambiguous vision assessment. It provided a highly sensitive and reliable method that can be easily implemented into automated or manual OMR systems to allow quick and unbiased assessment for visual acuity and contrast sensitivity in mice. The sensitivity and quantitative capacity of the protocol were validated using wild type mice and an inherited mouse model of retinal degeneration - mice carrying rhodopsin deficiency and exhibiting progressive loss of photoreceptors. Our OMR system with this protocol was capable of detecting progressive visual function decline that was closely correlated with the loss of photoreceptors in rhodopsin deficient mice. It provides significant advances over the existing methods in the currently available OMR devices in terms of sensitivity, accuracy and efficiency.
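The staircase idea behind the protocol can be sketched in a few lines. Everything below is an illustrative assumption rather than the authors' exact procedure: the start value, step size, reversal count, and the `responds` callback (which would report whether the mouse tracks the grating, with sustained head pausing scored as "no OMR").

```python
# Hypothetical up-down staircase for estimating a visual threshold (e.g.
# spatial frequency in cycles/degree), in the spirit of the optimized OMR
# protocol. Parameters are illustrative, not the authors' values.
def staircase(responds, start=0.1, step=0.05, reversals_needed=6):
    value, direction, reversals = start, +1, []
    while len(reversals) < reversals_needed:
        new_dir = +1 if responds(value) else -1   # harder if seen, easier if not
        if new_dir != direction:
            reversals.append(value)               # direction flip = reversal
        direction = new_dir
        value = round(max(step, value + direction * step), 3)
    return sum(reversals) / len(reversals)        # threshold estimate

# Simulated mouse that tracks gratings below 0.35 cycles/degree
threshold = staircase(lambda sf: sf < 0.35)
```

The staircase converges to oscillate around the point where the response flips, so the mean of the reversal values estimates the acuity threshold; an automated head-pausing criterion makes each `responds` call fast and unambiguous.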
Mansouri B, Roznik M, Rizzo JF, Prasad S. Rehabilitation of Visual Loss: Where We Are and Where We Need to Be. J Neuroophthalmol 2018;38(2):223-229.
BACKGROUND: Spontaneous recovery of visual loss resulting from injury to the brain is variable. A variety of traditional rehabilitative strategies, including the use of prisms or compensatory saccadic eye movements, have been used successfully to improve visual function and quality-of-life for patients with homonymous hemianopia. More recently, repetitive visual stimulation of the blind area has been reported to be of benefit in expanding the field of vision. EVIDENCE ACQUISITION: We performed a literature review with main focus on clinical studies spanning from 1963 to 2016, including 52 peer-reviewed articles, relevant cross-referenced citations, editorials, and reviews. RESULTS: Repetitive visual stimulation is reported to expand the visual field, although the interpretation of results is confounded by a variety of methodological factors and conflicting outcomes from different research groups. Many studies used subjective assessments of vision and did not include a sufficient number of subjects or controls. CONCLUSIONS: The available clinical evidence does not strongly support claims of visual restoration using repetitive visual stimulation beyond the time that spontaneous visual recovery might occur. This lack of firm supportive evidence does not preclude the potential of real benefit demonstrated in laboratories. Additional well-designed clinical studies with adequate controls and methods to record ocular fixation are needed.
Costela FM, Sheldon SS, Walker B, Woods RL. People with Hemianopia Report Difficulty with TV, Computer, Cinema Use, and Photography. Optom Vis Sci 2018;95(5):428-434.
SIGNIFICANCE: Our survey found that participants with hemianopia report more difficulties watching video in various formats, including television (TV), on computers, and in a movie theater, compared with participants with normal vision (NV). These reported difficulties were not as marked as those reported by people with central vision loss. PURPOSE: The aim of this study was to survey the viewing experience (e.g., frequency, difficulty) of viewing video on TV, computers and portable visual display devices, and at the cinema of people with hemianopia and NV. This information may guide vision rehabilitation. METHODS: We administered a cross-sectional survey to investigate the viewing habits of people with hemianopia (n = 91) or NV (n = 192). The survey, consisting of 22 items, was administered either in person or in a telephone interview. Descriptive statistics are reported. RESULTS: There were five major differences between the hemianopia and NV groups. Many participants with hemianopia reported (1) at least "some" difficulty watching TV (39/82); (2) at least "some" difficulty watching video on a computer (16/62); (3) never attending the cinema (30/87); (4) at least some difficulty watching movies in the cinema (20/56), among those who did attend the cinema; and (5) never taking photographs (24/80). Some people with hemianopia reported methods that they used to help them watch video, including video playback and head turn. CONCLUSIONS: Although people with hemianopia report more difficulty with viewing video on TV and at the cinema, we are not aware of any rehabilitation methods specifically designed to assist people with hemianopia to watch video. The results of this survey may guide future vision rehabilitation.
Rinaldi L, Merabet LB, Vecchi T, Cattaneo Z. The spatial representation of number, time, and serial order following sensory deprivation: A systematic review. Neurosci Biobehav Rev 2018;90:371-380.
The spatial representation of numerical and temporal information is thought to be rooted in our multisensory experiences. Accordingly, we may expect visual or auditory deprivation to affect the way we represent numerical magnitude and time spatially. Here, we systematically review recent findings on how blind and deaf individuals represent abstract concepts such as magnitude and time (e.g., past/future, serial order of events) in a spatial format. Interestingly, available evidence suggests that sensory deprivation does not prevent the spatial "re-mapping" of abstract information, but differences compared to normally sighted and hearing individuals may emerge depending on the specific dimension considered (i.e., numerical magnitude, time as past/future, serial order). Herein we discuss how the study of sensory deprived populations may shed light on the specific, and possibly distinct, mechanisms subserving the spatial representation of these concepts. Furthermore, we pinpoint unresolved issues that need to be addressed by future studies to grasp a full understanding of the spatial representation of abstract information associated with visual and auditory deprivation.
Han S'E, Qiu C, Lee KR, Jung J-H, Peli E. Word recognition: re-thinking prosthetic vision evaluation. J Neural Eng 2018;15(5):055003.
OBJECTIVE: Evaluations of vision prostheses and sensory substitution devices have frequently relied on repeated training and then testing with the same small set of items. These multiple forced-choice tasks produced above chance performance in blind users, but it is unclear if the observed performance represents restoration of vision that transfers to novel, untrained items. APPROACH: Here, we tested the generalizability of the forced-choice paradigm on discrimination of low-resolution word images. Extensive visual training was conducted with the same 10 words used in previous BrainPort tongue stimulation studies. The performance on these 10 words and an additional 50 words was measured before and after the training sessions. MAIN RESULTS: The results revealed minimal performance improvement with the untrained words, demonstrating instead pattern discrimination limited mostly to the trained words. SIGNIFICANCE: These findings highlight the need to reconsider current evaluation practices, in particular, the use of forced-choice paradigms with a few highly trained items. While appropriate for measuring the performance thresholds in acuity or contrast sensitivity of a functioning visual system, performance on such tasks cannot be taken to indicate restored spatial pattern vision.
Shi C, Luo G. A Compact VLSI System for Bio-Inspired Visual Motion Estimation. IEEE Trans Circuits Syst Video Technol 2018;28(4):1021-1036.
This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
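The opponent motion computation at the heart of such bio-inspired estimators can be illustrated with a Reichardt-type correlator, a close relative of the motion-energy scheme the paper implements. This is a floating-point sketch for intuition only; the paper's actual pipeline (ternary edge extraction, spatiotemporal filter banks, confidence maps, fixed-point VLSI arithmetic) is not reproduced, and `opponent_motion` is an assumed name.

```python
import numpy as np

def opponent_motion(frames, shift=1):
    """Opponent correlation along rows: >0 suggests rightward motion.

    frames: 2D array (time, x). Each frame t is correlated with frame t+1
    shifted left/right by `shift` pixels; the difference of the two
    correlations signals net motion direction, as in a Reichardt detector.
    """
    f = np.asarray(frames, dtype=float)
    a, b = f[:-1], f[1:]                            # frames t and t+1
    right = (a * np.roll(b, -shift, axis=-1)).sum() # matches rightward drift
    left = (a * np.roll(b, shift, axis=-1)).sum()   # matches leftward drift
    return float(right - left)

# A 3-pixel bar drifting rightward by one pixel per frame
row = np.zeros(32)
row[5:8] = 1.0
frames = np.stack([np.roll(row, t) for t in range(8)])
```

For the rightward-drifting bar the opponent signal is positive; reversing the drift flips its sign. A full motion-energy model would first band-pass the input with quadrature spatiotemporal filters and square the outputs, which makes the response phase-invariant at extra hardware cost.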
Cattaneo Z, Lega C, Rinaldi L, Fantino M, Ferrari C, Merabet LB, Vecchi T. The Spatial Musical Association of Response Codes does not depend on a normal visual experience: A study with early blind individuals. Atten Percept Psychophys 2018;80(4):813-821.
Converging evidence suggests that the perception of auditory pitch exhibits a characteristic spatial organization. This pitch-space association can be demonstrated experimentally by the Spatial Musical Association of Response Codes (SMARC) effect. This is characterized by faster response times when a low-positioned key is pressed in response to a low-pitched tone, and a high-positioned key is pressed in response to a high-pitched tone. To investigate whether the development of this pitch-space association is mediated by normal visual experience, we tested a group of early blind individuals on a task that required them to discriminate the timbre of different instrument sounds with varying pitch. Results revealed a comparable pattern in the SMARC effect in both blind participants and sighted controls, suggesting that the lack of prior visual experience does not prevent the development of an association between pitch height and vertical space.
Houston KE, Peli E, Goldstein RB, Bowers AR. Driving With Hemianopia VI: Peripheral Prisms and Perceptual-Motor Training Improve Detection in a Driving Simulator. Transl Vis Sci Technol 2018;7(1):5.
Purpose: Drivers with homonymous hemianopia (HH) were previously found to have impaired detection of blind-side hazards, yet in many jurisdictions they may obtain a license. We evaluated whether oblique 57Δ peripheral prisms (p-prisms) and perceptual-motor training improved blind-side detection rates. Methods: Patients with HH (n = 11) wore p-prisms for 2 weeks and then received perceptual-motor training (six visits) detecting and touching stimuli in the prism-expanded vision. In a driving simulator, patients drove and pressed the horn upon detection of pedestrians who ran toward the roadway (26 from each side): (1) without p-prisms at baseline; (2) with p-prisms after 2 weeks acclimation but before training; (3) with p-prisms after training; and (4) 3 months later. Results: P-prisms improved blind-side detection from 42% to 56%, which further improved after training to 72% (all P < 0.001). Blind-side timely responses (adequate time to have stopped) improved from 31% without to 44% with p-prisms (P < 0.001) and further improved with training to 55% (P = 0.02). At the 3-month follow-up, improvements from training were maintained for detection (65%; P = 0.02) but not timely responses (P = 0.725). There was wide between-subject variability in baseline detection performance and response to p-prisms. There were no negative effects of p-prisms on vehicle control or seeing-side performance. Conclusions: P-prisms improved detection with no negative effects, and training may provide additional benefit. Translational Relevance: In jurisdictions where people with HH are legally driving, these data aid in clinical decision making by providing evidence that p-prisms improve performance without negative effects.
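As a side note on the prism strength: prism diopters convert to degrees of image shift via the standard relation that 1Δ deflects light 1 cm at 1 m, i.e. angle = atan(Δ/100). A quick sketch (the helper name `prism_diopters_to_degrees` is illustrative):

```python
import math

def prism_diopters_to_degrees(pd: float) -> float:
    """Convert prism diopters to degrees: 1 PD deflects 1 cm at 1 m."""
    return math.degrees(math.atan(pd / 100.0))

# The 57-diopter peripheral prisms shift the image by roughly 30 degrees
deflection = prism_diopters_to_degrees(57)
```

This ~30° shift is what relocates blind-side hazards into the seeing hemifield, which is why detection improves without requiring large scanning head turns.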
Isik L, Singer J, Madsen JR, Kanwisher N, Kreiman G. What is changing when: Decoding visual information in movies from human intracranial recordings. Neuroimage 2018;180(Pt A):147-159.
The majority of visual recognition studies have focused on the neural responses to repeated presentations of static stimuli with abrupt and well-defined onset and offset times. In contrast, natural vision involves unique renderings of visual inputs that are continuously changing without explicitly defined temporal transitions. Here we considered commercial movies as a coarse proxy to natural vision. We recorded intracranial field potential signals from 1,284 electrodes implanted in 15 patients with epilepsy while the subjects passively viewed commercial movies. We could rapidly detect large changes in the visual inputs within approximately 100 ms of their occurrence, using exclusively field potential signals from ventral visual cortical areas including the inferior temporal gyrus and inferior occipital gyrus. Furthermore, we could decode the content of those visual changes even in a single movie presentation, generalizing across the wide range of transformations present in a movie. These results present a methodological framework for studying cognition during dynamic and natural vision.
Wang S, Woods RL, Costela FM, Luo G. Dynamic gaze-position prediction of saccadic eye movements using a Taylor series. J Vis 2017;17(14):3.
Gaze-contingent displays have been widely used in vision research and virtual reality applications. Due to data transmission, image processing, and display preparation, the time delay between the eye tracker and the monitor update may lead to a misalignment between the eye position and the image manipulation during eye movements. We propose a method to reduce the misalignment using a Taylor series to predict the saccadic eye movement. The proposed method was evaluated using two large datasets including 219,335 human saccades (collected with an EyeLink 1000 system, 95% range from 1° to 32°) and 21,844 monkey saccades (collected with a scleral search coil, 95% range from 1° to 9°). When assuming a 10-ms time delay, the prediction of saccade movements using the proposed method could reduce the misalignment greater than the state-of-the-art methods. The average error was about 0.93° for human saccades and 0.26° for monkey saccades. Our results suggest that this proposed saccade prediction method will create more accurate gaze-contingent displays.
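The Taylor-series idea can be sketched directly: extrapolate the latest gaze sample forward by the system delay using derivative estimates from recent samples. The version below is a minimal illustration with backward finite differences; the paper's derivative estimation and noise handling may differ, and `predict_gaze` is an assumed name.

```python
import numpy as np

def predict_gaze(positions, timestamps, delay, order=2):
    """Extrapolate gaze `delay` seconds ahead with a truncated Taylor series.

    positions/timestamps: recent 1D gaze samples, most recent last.
    Velocity and acceleration come from backward finite differences,
    a deliberate simplification.
    """
    p = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    dt = t[-1] - t[-2]
    pred = p[-1]
    if order >= 1 and len(p) >= 2:
        v = (p[-1] - p[-2]) / dt                      # first-order term
        pred = pred + v * delay
    if order >= 2 and len(p) >= 3:
        a = (p[-1] - 2.0 * p[-2] + p[-3]) / dt**2     # second-order term
        pred = pred + 0.5 * a * delay**2
    return float(pred)
```

During a saccade the gaze position changes rapidly, so rendering at the last measured position lags behind; extrapolating by the known tracker-to-display delay (e.g. 10 ms) shrinks the misalignment between gaze and image manipulation.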
Kim JS, Kanjlia S, Merabet LB, Bedny M. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers. J Neurosci 2017;37(47):11495-11504.
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male) participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT: The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible.
Wolfe JM. Visual Attention: Size Matters. Curr Biol 2017;27(18):R1002-R1003.
When searching real-world scenes, human attention is guided by knowledge of the plausible size of the target object (if an object is six feet tall, it isn't your cat). Computer algorithms typically do not do this, but perhaps they should.