Mobility Enhancement & Vision Rehabilitation

Isayama T, Chen Y, Kono M, Fabre E, Slavsky M, DeGrip WJ, Ma J-X, Crouch RK, Makino CL. Coexpression of three opsins in cone photoreceptors of the salamander Ambystoma tigrinum. J Comp Neurol 2014;522(10):2249-65.
Although more than one type of visual opsin is present in the retina of most vertebrates, it was thought that each type of photoreceptor expresses only one opsin. However, evidence has accumulated that some photoreceptors contain more than one opsin, in many cases as a result of a developmental transition from the expression of one opsin to another. The salamander UV-sensitive (UV) cone is particularly notable because it contains three opsins (Makino and Dodd [1996] J Gen Physiol 108:27-34). Two opsin types are expressed at levels more than 100 times lower than the level of the primary opsin. Here, immunohistochemical experiments identified the primary component as a UV cone opsin and the two minor components as the short wavelength-sensitive (S) and long wavelength-sensitive (L) cone opsins. Based on single-cell recordings of 156 photoreceptors, the presence of three components in UV cones of hatchlings and terrestrial adults ruled out a developmental transition. There was no evidence for multiple opsin types within rods or S cones, but immunohistochemistry and partial bleaching in conjunction with single-cell recording revealed that both single and double L cones contained low levels of short wavelength-sensitive pigments in addition to the main L visual pigment. These results raise the possibility that coexpression of multiple opsins in other vertebrates was overlooked because a minor component absorbing at short wavelengths was masked by the main visual pigment or because the expression level of a component absorbing at long wavelengths was exceedingly low.
Maiello G, Chessa M, Solari F, Bex PJ. Simulated disparity and peripheral blur interact during binocular fusion. J Vis 2014;14(8):13.
We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion.
Draschkow D, Wolfe JM, Võ MLH. Seek and you shall remember: scene semantics interact with visual search to build better memories. J Vis 2014;14(8):10.

Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization.

Luo G, Garaas T, Pomplun M. Salient stimulus attracts focus of peri-saccadic mislocalization. Vision Res 2014;100:93-8.
Visual localization during saccadic eye movements is prone to error. Flashes shortly before and after the onset of saccades are usually perceived to shift towards the saccade target, creating a "compression" pattern. Typically, the saccade landing point coincides with a salient saccade target. We investigated whether the mislocalization focus follows the actual saccade landing point or a salient stimulus. Subjects made saccades to either a target or a memorized location without target. In some conditions, another salient marker was presented between the initial fixation and the saccade landing point. The experiments were conducted on both black and picture backgrounds. The results show that: (a) when a saccade target or a marker (spatially separated from the saccade landing point) was present, the compression pattern of mislocalization was significantly stronger than in conditions without them, for both black and picture background conditions, and (b) the mislocalization focus tended towards the salient stimulus regardless of whether it was the saccade target or the marker. Our results suggest that a salient stimulus presented in the scene may have an attracting effect and therefore contribute to the non-uniformity of saccadic mislocalization of a probing flash.
Dartt DA, Masli S. Conjunctival epithelial and goblet cell function in chronic inflammation and ocular allergic inflammation. Curr Opin Allergy Clin Immunol 2014;14(5):464-70.

PURPOSE OF REVIEW: Although conjunctival goblet cells are a major cell type in ocular mucosa, their responses during ocular allergy are largely unexplored. This review summarizes the recent findings that provide key insights into the mechanisms by which their function and survival are altered during chronic inflammatory responses, including ocular allergy. RECENT FINDINGS: Conjunctiva represents a major component of the ocular mucosa that harbors specialized lymphoid tissue. Exposure of mucin-secreting goblet cells to allergic and inflammatory mediators released by the local innate and adaptive immune cells modulates proliferation, secretory function, and cell survival. Allergic mediators like histamine, leukotrienes, and prostaglandins directly stimulate goblet cell mucin secretion and consistently increase goblet cell proliferation. Goblet cell mucin secretion is also detectable in a murine model of allergic conjunctivitis. Additionally, primary goblet cell cultures allow evaluation of various inflammatory cytokines with respect to changes in goblet cell mucin secretion, proliferation, and apoptosis. These findings in combination with the preclinical mouse models help understand the goblet cell responses and their modulation during chronic inflammatory diseases, including ocular allergy. SUMMARY: Recent findings related to conjunctival goblet cells provide the basis for novel therapeutic approaches, involving modulation of goblet cell mucin production, to improve treatment of ocular allergies.

Bowers AR, Anastasio JR, Sheldon SS, O'Connor MG, Hollis AM, Howe PD, Horowitz TS. Can we improve clinical prediction of at-risk older drivers? Accid Anal Prev 2013;59:537-47.
OBJECTIVES: To conduct a pilot study to evaluate the predictive value of the Montreal Cognitive Assessment test (MoCA) and a brief test of multiple object tracking (MOT) relative to other tests of cognition and attention in identifying at-risk older drivers, and to determine which combination of tests provided the best overall prediction. METHODS: Forty-seven currently licensed drivers (58-95 years), primarily from a clinical driving evaluation program, participated. Their performance was measured on: (1) a screening test battery, comprising MoCA, MOT, Mini-Mental State Examination (MMSE), Trail-Making Test, visual acuity, contrast sensitivity, and Useful Field of View (UFOV) and (2) a standardized road test. RESULTS: Eighteen participants were rated at-risk on the road test. UFOV subtest 2 was the best single predictor with an area under the curve (AUC) of .84. Neither MoCA nor MOT was a better predictor of the at-risk outcome than either MMSE or UFOV, respectively. The best four-test combination (MMSE, UFOV subtest 2, visual acuity and contrast sensitivity) was able to identify at-risk drivers with 95% specificity and 80% sensitivity (.91 AUC). CONCLUSIONS: Although the best four-test combination was much better than a single test in identifying at-risk drivers, there is still much work to do in this field to establish test batteries that have both high sensitivity and specificity.
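The screening statistics this study reports (sensitivity, specificity, and area under the ROC curve) can be sketched as below; the scores, labels, and threshold are hypothetical illustrations, not the study's data or code.

```python
import numpy as np

def sensitivity_specificity(scores, at_risk, threshold):
    """Treat scores at or above `threshold` as screening positive and
    compare against the road-test outcome (True = rated at-risk)."""
    pred = scores >= threshold
    tp = np.sum(pred & at_risk)        # correctly flagged at-risk drivers
    tn = np.sum(~pred & ~at_risk)      # correctly passed safe drivers
    return tp / at_risk.sum(), tn / (~at_risk).sum()

def auc(scores, at_risk):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen at-risk driver scores
    higher than a randomly chosen safe driver (ties count half)."""
    pos, neg = scores[at_risk], scores[~at_risk]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

A "best four-test combination" as in the study would apply such measures to a composite score (e.g., from logistic regression) rather than to a single test.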
Dorr M, Lesmes LA, Lu Z-L, Bex PJ. Rapid and reliable assessment of the contrast sensitivity function on an iPad. Invest Ophthalmol Vis Sci 2013;54(12):7266-73.
PURPOSE: Letter acuity, the predominant clinical assessment of vision, is relatively insensitive to slow vision loss caused by eye disease. While the contrast sensitivity function (CSF) has demonstrated the potential to monitor the slow progress of blinding eye diseases, current tests of CSF lack the reliability or ease-of-use to capture changes in vision in a timely manner. To improve the current state of home testing for vision, we have developed and validated a computerized adaptive test on a commercial tablet device (iPad) that provides an efficient and easy-to-use assessment of the CSF. METHODS: We evaluated the reliability, accuracy, and flexibility of tablet-based CSF assessment. Repeated tablet-based assessments of the spatial CSF, obtained from four normally-sighted observers, which each took 3 to 5 minutes, were compared to measures obtained on CRT-based laboratory equipment; additional tablet-based measures were obtained from six subjects under three different luminance conditions. RESULTS: A Bland-Altman analysis demonstrated that tablet-based assessment was reliable for estimating sensitivities at specific spatial frequencies (coefficient of repeatability 0.14-0.40 log units). The CRT- and tablet-based results demonstrated excellent agreement with absolute mean sensitivity differences <0.05 log units. The tablet-based test also reliably identified changes in contrast sensitivity due to different luminance conditions. CONCLUSIONS: We demonstrate that CSF assessment on a mobile device is indistinguishable from that obtained with specialized laboratory equipment. We also demonstrate better reliability than tests used currently for clinical trials of ophthalmic therapies, drugs, and devices.
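The Bland-Altman repeatability analysis mentioned in the Results can be illustrated with a short sketch. This is not the authors' analysis code; it assumes the common convention that the coefficient of repeatability is 1.96 × the standard deviation of test-retest differences, here in log10 sensitivity units.

```python
import numpy as np

def coefficient_of_repeatability(test, retest):
    """Bland-Altman coefficient of repeatability: 1.96 * SD of the
    test-retest differences (sample SD, ddof=1)."""
    diffs = np.asarray(test) - np.asarray(retest)
    return 1.96 * np.std(diffs, ddof=1)

def limits_of_agreement(a, b):
    """Mean difference (bias) and the 95% limits of agreement."""
    diffs = np.asarray(a) - np.asarray(b)
    bias = diffs.mean()
    half_width = 1.96 * np.std(diffs, ddof=1)
    return bias, bias - half_width, bias + half_width
```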
Connors EC, Yazzolino LA, Sánchez J, Merabet LB. Development of an audio-based virtual gaming environment to assist with navigation skills in the blind. J Vis Exp 2013;(73).
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio-based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Sánchez J, Espinoza M, de Borba Campos M, Merabet LB. Enhancing Orientation and Mobility Skills in Learners who are Blind through Video Gaming. Creat Cognit 2013;2013:353-356.
In this work we present the results of the cognitive impact evaluation regarding the use of Audiopolis, an audio and/or haptic-based videogame. The software has been designed, developed and evaluated for the purpose of developing orientation and mobility (O&M) skills in blind users. The videogame was evaluated through cognitive tasks performed by a sample of 12 learners. The results demonstrated that the use of Audiopolis had a positive impact on the development and use of O&M skills in school-aged blind learners.
Hwang AD, Peli E. Development of a Headlight Glare Simulator for a Driving Simulator. Transp Res Part C Emerg Technol 2013;32:129-143.
We describe the design and construction of a headlight glare simulator to be used with a driving simulator. The system combines a modified programmable off-the-shelf LED display board and a beamsplitter so that the LED lights, representing the headlights of oncoming cars, are superimposed over the driving simulator headlights image. Ideal spatial arrangement of optical components to avoid misalignments of the superimposed images is hard to achieve in practice, and variations inevitably introduce some parallax. Furthermore, the driver's viewing position varies with the driver's height, and seating position preferences exacerbate such misalignment. We reduce the parallax errors using an intuitive calibration procedure (simple drag-and-drop alignment of nine LED positions with calibration dots on the screen). To simulate the dynamics of headlight brightness changes when two vehicles are approaching, LED intensity control algorithms based on both headlight and LED beam shapes were developed. The simulation errors were estimated and compared to real-world headlight brightness variability.
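The distance-dependent brightness such a simulator must reproduce can be approximated with a simple photometric model. The inverse-square falloff is standard photometry, but the Gaussian beam shape and its width below are illustrative assumptions, not the simulator's measured beam data or the authors' control algorithm.

```python
import math

def headlight_illuminance(intensity_cd, distance_m, off_axis_deg,
                          beam_sigma_deg=10.0):
    """Approximate illuminance (lux) at the driver's eye from an
    oncoming headlight: inverse-square falloff with distance, scaled
    by a Gaussian angular beam pattern (an illustrative stand-in for
    a measured beam shape)."""
    beam = math.exp(-0.5 * (off_axis_deg / beam_sigma_deg) ** 2)
    return intensity_cd * beam / distance_m ** 2
```

An LED controller built on such a model would map the computed illuminance to an LED drive level through the LED's own (measured) intensity curve.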
Bowers AR, Tant M, Peli E. A pilot evaluation of on-road detection performance by drivers with hemianopia using oblique peripheral prisms. Stroke Res Treat 2012;2012:176806.
Aims. Homonymous hemianopia (HH), a severe visual consequence of stroke, causes difficulties in detecting obstacles on the nonseeing (blind) side. We conducted a pilot study to evaluate the effects of oblique peripheral prisms, a novel development in optical treatments for HH, on detection of unexpected hazards when driving. Methods. Twelve people with complete HH (median 49 years, range 29-68) completed road tests with sham oblique prism glasses (SP) and real oblique prism glasses (RP). A masked evaluator rated driving performance along the 25 km routes on busy streets in Ghent, Belgium. Results. The proportion of satisfactory responses to unexpected hazards on the blind side was higher in the RP than the SP drive (80% versus 30%; P = 0.001), but similar for unexpected hazards on the seeing side. Conclusions. These pilot data suggest that oblique peripheral prisms may improve responses of people with HH to blindside hazards when driving and provide the basis for a future, larger-sample clinical trial. Testing responses to unexpected hazards in areas of heavy vehicle and pedestrian traffic appears promising as a real-world outcome measure for future evaluations of HH rehabilitation interventions aimed at improving detection when driving.
Luo G, Satgunam PN, Peli E. Visual search performance of patients with vision impairment: effect of JPEG image enhancement. Ophthalmic Physiol Opt 2012;32(5):421-8.
PURPOSE: To measure natural image search performance in patients with central vision impairment. To evaluate the performance effect of a JPEG-based image enhancement technique using the visual search task. METHODS: One hundred and fifty JPEG images were presented on a touch screen monitor in either an enhanced or original version to 19 patients (visual acuity 0.4-1.2 logMAR, 6/15 to 6/90, 20/50 to 20/300) and seven normally sighted controls (visual acuity -0.12 to 0.1 logMAR, 6/4.5 to 6/7.5, 20/15 to 20/25). Each image fell into one of three categories: faces, indoors, and collections. The enhancement was realized by moderately boosting a mid-range spatial frequency band in the discrete cosine transform (DCT) coefficients of the image luminance component. Participants pointed to an object in a picture that matched a given target displayed at the upper-left corner of the monitor. Search performance was quantified by the percentage of correct responses, the median search time of correct responses, and an 'integrated performance' measure - the area under the curve of cumulative correct response rate over search time. RESULTS: Patients were able to perform the search tasks but their performance was substantially worse than the controls. Search performances for the three image categories were significantly different (p <= 0.001) for all the participants, with searching for faces being the most difficult. When search time and correct response were analyzed separately, the effect of enhancement led to an increase in one measure but a decrease in another for many patients. Using the integrated performance, it was found that search performance declined with decrease in acuity (p = 0.005). An improvement with enhancement was found mainly for the patients whose acuity ranged from 0.4 to 0.8 logMAR (6/15 to 6/38, 20/50 to 20/125). Enhancement conferred a small but significant improvement in integrated performance for indoor and collection images (p = 0.025) in the patients.
CONCLUSION: Search performance for natural images can be measured in patients with impaired vision to evaluate the effect of image enhancement. Patients with moderate vision loss might benefit from the moderate level of enhancement used here.
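The enhancement described in the Methods (boosting a mid-range spatial-frequency band in the DCT coefficients of the luminance component) can be sketched as below. The 8×8 blocking, band limits, and gain are illustrative choices, not the authors' exact parameters.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n).reshape(-1, 1)   # frequency index
    m = np.arange(n).reshape(1, -1)   # sample index
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)        # DC row normalization
    return c

C = dct_matrix()

def enhance_luminance(y, gain=1.5, band=(2, 5)):
    """Boost DCT coefficients whose diagonal frequency u+v falls in
    `band`, per 8x8 block of a luminance image (values in 0..255,
    dimensions divisible by 8); gain and band are illustrative."""
    u, v = np.indices((8, 8))
    mid = (band[0] <= u + v) & (u + v <= band[1])
    out = np.empty(y.shape, dtype=float)
    for row in range(0, y.shape[0], 8):
        for col in range(0, y.shape[1], 8):
            coefs = C @ y[row:row+8, col:col+8].astype(float) @ C.T
            coefs[mid] *= gain        # amplify the mid-frequency band
            out[row:row+8, col:col+8] = C.T @ coefs @ C
    return np.clip(out, 0, 255)
```

Because the boost excludes the DC coefficient, uniform regions are left untouched while mid-scale contrast is amplified.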
Merabet LB, Connors EC, Halko MA, Sánchez J. Teaching the blind to find their way by playing video games. PLoS One 2012;7(9):e44958.
Computer-based video games are receiving great interest as a means to learn and acquire new skills. As a novel approach to teaching navigation skills in the blind, we have developed Audio-based Environment Simulator (AbES); a virtual reality environment set within the context of a video game metaphor. Despite the fact that participants were naïve to the overall purpose of the software, we found that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building using audio-based cues alone. This was confirmed by a series of behavioral performance tests designed to assess the transfer of acquired spatial information to a large-scale, real-world indoor navigation task. Furthermore, learning the spatial layout through a goal directed gaming strategy allowed for the mental manipulation of spatial information as evidenced by enhanced navigation performance when compared to an explicit route learning strategy. We conclude that the immersive and highly interactive nature of the software greatly engages the blind user to actively explore the virtual environment. This in turn generates an accurate sense of a large-scale three-dimensional space and facilitates the learning and transfer of navigation skills to the physical world.