Mobility Enhancement & Vision Rehabilitation

Saxena A, Yadav AK. Clustering pedestrians' perceptions towards road infrastructure and traffic characteristics. Int J Inj Contr Saf Promot 2023;30(1):68-78.
In India, over 25,000 pedestrian fatalities occur due to road crashes every year. While several studies have identified possible causative factors that contribute to these fatalities, little is known about how pedestrians perceive their surrounding environment. This study attempts to bridge this gap by analysing pedestrian perceptions of the built environment and traffic-related aspects on urban roads (arterial and sub-arterial). Fourteen parameters were selected to assess pedestrian perception, and four factors were derived through factor analysis. The obtained factor scores were then subjected to two-step cluster analysis to determine whether pedestrian perception differs across socio-economic demographics and travel behaviours. Based on the descriptive analysis, respondents were most satisfied with the 'quality of streetlights at sidewalks' and 'visibility/sight distances', while they were most dissatisfied with 'pedestrian volume at sidewalks' and 'lighting facilities at crossings'. The cluster analysis indicates that female pedestrians walk less frequently than males and perceive a higher probability of collision or near-collision incidents than male pedestrians do. The study findings can aid policymakers in assessing pedestrian perceptions of existing road infrastructure and in suggesting improvements to ensure pedestrian safety.
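The two-stage analysis described in this abstract (factor analysis of the fourteen perception ratings, followed by clustering of respondents on the resulting factor scores) can be sketched in Python. This is an illustrative pipeline only: the file and column names are hypothetical, and scikit-learn's k-means stands in for the SPSS-style two-step cluster analysis used by the authors.

```python
# Illustrative sketch, not the authors' code: derive latent factors from
# 14 perception ratings, then cluster respondents on the factor scores.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical survey file: one row per respondent, 14 Likert items named
# perception_1 ... perception_14, plus demographic/travel-behaviour columns.
survey = pd.read_csv("pedestrian_survey.csv")
items = survey.filter(like="perception_")

# Step 1: four latent factors from the standardized item scores.
z = StandardScaler().fit_transform(items)
factor_scores = FactorAnalysis(n_components=4, random_state=0).fit_transform(z)

# Step 2: cluster respondents on the factor scores (k-means approximates the
# two-step procedure; the cluster count would normally be chosen by fit indices).
survey["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factor_scores)

# Profile clusters against gender and walking frequency (hypothetical columns).
print(pd.crosstab(survey["cluster"], survey["gender"]))
print(survey.groupby("cluster")["walk_trips_per_week"].mean())
```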
Eslami M, Kazeminasab S, Sharma V, Li Y, Fazli M, Wang M, Zebardast N, Elze T. PyVisualFields: A Python Package for Visual Field Analysis. Transl Vis Sci Technol 2023;12(2):6.
PURPOSE: Artificial intelligence (AI) methods are changing all areas of research and offer a variety of analysis capabilities in ophthalmology, specifically for visual fields (VFs), to detect or predict vision loss progression. Whereas most AI algorithms are implemented in Python, which offers numerous open-source functions and algorithms, the majority of algorithms for VF analysis are offered in the R language. This paper introduces PyVisualFields, a package developed to address this gap and make VF analysis available in Python. METHODS: For the first version, the R libraries for VF analysis provided by the vfprogression and visualFields packages were analyzed to identify their overlapping and distinct functions. We then defined and translated this functionality into Python with the help of the wrapper library rpy2. Beyond maintenance, milestones for subsequent versions have been established, and the third version will be R-independent. RESULTS: The developed Python package is available as open-source software via the GitHub repository and is ready to be installed from PyPI. Several Jupyter notebooks are prepared to demonstrate and describe the capabilities of the PyVisualFields package in the categories of data presentation, normalization and deviation analysis, plotting, scoring, and progression analysis. CONCLUSIONS: We developed a Python package and demonstrated its functionality for VF analysis, facilitating ophthalmic research in VF statistical analysis, illustration, and progression prediction. TRANSLATIONAL RELEVANCE: Using this software package, researchers working on VF analysis can more quickly create algorithms for clinical applications using cutting-edge AI techniques.
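The abstract notes that the first release wraps the R packages via rpy2. The snippet below is a minimal sketch of that general wrapping mechanism, not of PyVisualFields' own API (whose functions are documented in the package's Jupyter notebooks).

```python
# Minimal sketch of the rpy2-based wrapping approach described above.
# Requires R plus the visualFields and vfprogression R packages to be installed.
from rpy2.robjects import pandas2ri
from rpy2.robjects.packages import importr

pandas2ri.activate()                        # automatic R <-> pandas conversion

visualFields = importr("visualFields")      # R packages that the first
vfprogression = importr("vfprogression")    # PyVisualFields release wraps

# Functions of the R packages are now attributes of the importr objects
# (R dots become underscores). PyVisualFields layers Python functions for
# data presentation, normalization/deviation analysis, plotting, scoring,
# and progression analysis on top of calls like these.
```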
Schnell AE, Vinken K, Op de Beeck H. The importance of contrast features in rat vision. Sci Rep 2023;13(1):459.
Models of object recognition have mostly focused upon the hierarchical processing of objects from local edges up to more complex shape features. An alternative strategy that might be involved in pattern recognition centres around coarse-level contrast features. In humans and monkeys, the use of such features is most documented in the domain of face perception. Given prior suggestions that, generally, rodents might rely upon contrast features for object recognition, we hypothesized that they would pick up the typical contrast features relevant for face detection. We trained rats in a face-nonface categorization task with stimuli previously used in computer vision and tested for generalization with new, unseen stimuli by including manipulations of the presence and strength of a range of contrast features previously identified to be relevant for face detection. Although overall generalization performance was low, it was significantly modulated by contrast features. A model taking into account the summed strength of contrast features predicted the variation in accuracy across stimuli. Finally, with deep neural networks, we further investigated and quantified the performance and representations of the animals. The findings suggest that rat behaviour in visual pattern recognition tasks is partially explained by contrast feature processing.
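The "summed strength of contrast features" model mentioned above can be illustrated with a short sketch. The region coordinates and feature pairs below are hypothetical placeholders; the actual features follow the earlier face-detection work referenced by the authors, and the point is only to show how a per-stimulus score could be computed.

```python
# Illustrative sketch: a summed contrast-feature score for a grayscale stimulus.
# Region coordinates and (darker, brighter) pairs are hypothetical examples.
import numpy as np

REGIONS = {  # (row_slice, col_slice) into a 100x100 image
    "left_eye":  (slice(30, 45), slice(20, 40)),
    "right_eye": (slice(30, 45), slice(60, 80)),
    "forehead":  (slice(5, 25),  slice(20, 80)),
    "mouth":     (slice(70, 85), slice(35, 65)),
}

FEATURE_PAIRS = [        # each pair: region expected darker, region expected brighter
    ("left_eye", "forehead"),
    ("right_eye", "forehead"),
    ("mouth", "forehead"),
]

def summed_contrast_strength(img: np.ndarray) -> float:
    """Sum of (brighter minus darker) mean-luminance differences over all
    expected contrast features; larger values indicate stronger face-like contrast."""
    means = {name: img[rows, cols].mean() for name, (rows, cols) in REGIONS.items()}
    return float(sum(means[bright] - means[dark] for dark, bright in FEATURE_PAIRS))

# Demo with a random 100x100 "image"; real use would loop over the task stimuli
# and relate the scores to per-stimulus accuracy.
rng = np.random.default_rng(0)
print(summed_contrast_strength(rng.random((100, 100))))
```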
Wolfe JM, Wick FA, Mishra M, DeGutis J, Lyu W. Spatial and temporal massive memory in humans. Curr Biol 2023;33(2):405-410.e4.
It is well known that humans have a massive memory for pictures and scenes.1,2,3,4 They show an ability to encode thousands of images with only a few seconds of exposure to each. In addition to this massive memory for "what" observers have seen, three experiments reported here show that observers have a "spatial massive memory" (SMM) for "where" stimuli have been seen and a "temporal massive memory" (TMM) for "when" stimuli have been seen. The positions in time and space for at least dozens of items can be reported with good, if not perfect, accuracy. Previous work has suggested that there might be good memory for stimulus location,5,6 but there do not seem to have been concerted efforts to measure the extent of this memory. Moreover, in our method, observers are recalling where items were located and not merely recognizing the correct location. This is interesting because massive memory is sometimes thought to be limited to recognition tasks based on a sense of familiarity.
Micheletti S, Merabet LB, Galli J, Fazzi E. Visual intervention in early onset visual impairment: A review. Eur J Neurosci 2023;57(12):1998-2016.
Vision is a primary and motivating sense. Early visual experience derived from the external world is known to have an important impact on the development of central visual pathways, and not surprisingly, visual impairment constitutes a risk factor for overall development. In light of the role of vision in early brain development, infants and young children with visual impairment should thus be entitled to early and effective visual intervention programmes. In this review, we discuss early visual interventions in infants and young children with visual impairment, focusing on their contents and outcomes. We defined a PICO format to critically review different models, with a particular focus on parent-mediated and therapist-mediated approaches. We consider protocols that involved direct manipulation or improvement of the infants' visual inputs or were based on behavioural strategies and communication towards infants with visual impairment. We also provide an overview of the effectiveness of these protocols. A total of nine intervention protocols were selected for the purposes of this review. Most of the studies report substantial agreement on the importance of enriching infant environments, specifically in the context of active play that engages the whole family. However, there is no clear agreement on methodological aspects, including clinical population characteristics, outcome measures, length of treatment and follow-up programmes. Further high-quality, carefully designed and adequately reported studies are needed in order to improve the clinical efficacy of these approaches to treating infants with visual impairment.
Zhang X, Manley CE, Micheletti S, Tesic I, Bennett CR, Fazzi EM, Merabet LB. Assessing visuospatial processing in cerebral visual impairment using a novel and naturalistic static visual search task. Res Dev Disabil 2022;131:104364.
BACKGROUND: Cerebral visual impairment (CVI) is a brain-based visual disorder associated with the maldevelopment of central visual pathways. Individuals with CVI often report difficulties finding a target of interest in cluttered and crowded visual scenes. However, it remains unknown how manipulating task demands and other environmental factors influence visual search performance in this population. AIM: We developed a novel and naturalistic virtual reality (VR) based static visual search task combined with eye tracking, called the "virtual toy box", to objectively assess visual search performance in CVI. METHODS AND PROCEDURES: A total of 38 individuals with CVI (mean age 13.18 years ± 3.58 SD) and 53 controls with neurotypical development (mean age 15.25 years ± 5.72 SD) participated in the study. In a first experiment, study subjects were instructed to search for a preselected toy presented among a varying number of surrounding distractor toys (set size ranging from 1 to 36 items). In a second experiment, we assessed the effects of manipulating item spacing and the size of the visual area explored (field of view; FOV). OUTCOMES AND RESULTS: Behavioral outcomes collected were success rate, reaction time, gaze error, visual search area, and off-screen percent (an index of task compliance). Compared to age-matched controls, participants with CVI showed an overall impairment with respect to all the visual search outcomes of interest. Specifically, individuals with CVI were less likely to find the target and took longer to do so, and their search patterns were less accurate and precise compared to controls. Visual search response profiles were also comparatively less efficient and were associated with a slower initial pre-search (visual orienting) response, as indexed by higher slope and intercept values derived from the analysis of reaction time × set size functions. Search performance was also more negatively affected in CVI at the smallest as well as the largest spacing conditions tested, while increasing FOV was associated with further decreases in gaze accuracy and precision. CONCLUSIONS AND IMPLICATIONS: These results are consistent with a general profile of impaired visual search abilities in CVI, as well as worsening performance with increased visual task demands and an overall sensitivity to visual clutter and crowding. The observed profile of impaired visual search performance may be associated with dysfunctions related to how visual selective attention is deployed in individuals with CVI.
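The slope and intercept measures referred to in the results come from the standard visual-search analysis of reaction time as a linear function of set size. A minimal sketch of that fit is given below; the file and column names are hypothetical.

```python
# Minimal sketch of the reaction time x set size analysis: fit RT as a linear
# function of set size per participant. The slope indexes search efficiency;
# the intercept indexes pre-search (orienting/response) time.
import numpy as np
import pandas as pd

trials = pd.read_csv("search_trials.csv")   # hypothetical columns: participant, set_size, rt_ms

def rt_by_set_size(group: pd.DataFrame) -> pd.Series:
    slope, intercept = np.polyfit(group["set_size"], group["rt_ms"], deg=1)
    return pd.Series({"slope_ms_per_item": slope, "intercept_ms": intercept})

fits = trials.groupby("participant").apply(rt_by_set_size)
print(fits.head())   # distributions of slope/intercept can then be compared across groups
```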
Pamir Z, Jung J-H, Peli E. Preparing participants for the use of the tongue visual sensory substitution device. Disabil Rehabil Assist Technol 2022;17(8):888-896.
PURPOSE: Visual sensory substitution devices (SSDs) convey visual information to a blind person through another sensory modality. Using a visual SSD in various daily activities requires training prior to using the device independently. Yet, there is limited literature about procedures and outcomes of the training conducted to prepare users for practical use of SSDs in daily activities. METHODS: We trained 29 blind adults (9 with congenital and 20 with acquired blindness) in the use of a commercially available electro-tactile SSD, BrainPort. We describe a structured training protocol adapted from previous studies and the responses of participants, and we present retrospective qualitative data on the progress of participants during the training. RESULTS: The length of the training was not a critical factor in reaching an advanced stage. Though performance in the first two sessions seems to be a good indicator of participants' ability to progress in the training protocol, there are large individual differences in how far and how fast each participant progresses. There are differences between congenitally blind users and those blinded later in life. CONCLUSIONS: The information on the training progression would be of interest to researchers preparing studies and to eye care professionals, who may advise patients to use SSDs. IMPLICATIONS FOR REHABILITATION: There are large individual differences in how far and how fast each participant can learn to use a visual-to-tactile sensory substitution device for a variety of tasks. Recognition is mainly achieved through top-down processing with prior knowledge about the possible responses; therefore, the generalizability is still questionable. Users develop different strategies in order to succeed in training tasks.
Goldstein JE, Guo X, Swenor BK, Boland MV, Smith K. Using Electronic Clinical Decision Support to Examine Vision Rehabilitation Referrals and Practice Guidelines in Ophthalmology. Transl Vis Sci Technol 2022;11(10):8.
Purpose: To examine ophthalmologist use of an electronic health record (EHR)-based clinical decision support system (CDSS) to facilitate low vision rehabilitation (LVR) care referral. Methods: The CDSS alert was designed to appear when the best documented visual acuity was <20/40 or a hemianopia or quadrantanopia diagnosis was identified during an ophthalmology encounter from November 6, 2017, to April 5, 2019. Fifteen ophthalmologists representing eight subspecialties from an academic medical center were required to respond to the referral recommendation (order, don't order). LVR referral rates and ophthalmologist user experience were assessed. Encounter characteristics associated with LVR referrals were explored using multilevel logistic regression analysis. Results: The alert appeared for 3625 (8.9%) of 40,931 eligible encounters. The referral rate was 14.8% (535/3625). Of the 3413 encounters that met the visual acuity criterion only, patients with acuity worse than 20/60 were more likely to be referred, and 32.4% of referred patients had acuity between 20/40 and 20/60. Primary reasons for deferring referrals included active medical or surgical treatment, refractive-related issues, and previous connection to LVR services. Eleven of the 13 ophthalmologists agreed that the alert was useful in identifying candidates for LVR services. Conclusions: A CDSS for patient identification and referral offers an acceptable mechanism to apply practice guidelines and prompt ophthalmologists to facilitate LVR care. Further study is warranted to optimize the ophthalmologist user experience while refining alert criteria beyond visual acuity. Translational Relevance: The CDSS provides the framework for multi-center research to assess the development of pragmatic algorithms and standards for facilitating LVR care.
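The alert criterion stated in the Methods (best documented visual acuity worse than 20/40, or a hemianopia/quadrantanopia diagnosis) amounts to a simple trigger rule. The function below is a hypothetical sketch of that rule, not the production EHR logic, and it assumes acuity is stored as the denominator of a 20/x Snellen measurement.

```python
# Hypothetical sketch of the CDSS trigger rule described in the Methods,
# not the production EHR implementation. Acuity is assumed to be stored as
# the Snellen denominator of a 20/x measurement; diagnoses as a set of strings.
FIELD_LOSS_DIAGNOSES = {"hemianopia", "quadrantanopia"}

def lvr_alert_should_fire(best_snellen_denominator: float, diagnoses: set) -> bool:
    """True when the encounter meets the referral-alert criteria: best documented
    acuity worse than 20/40, or a qualifying field-loss diagnosis."""
    acuity_criterion = best_snellen_denominator > 40            # e.g. 20/60, 20/200
    diagnosis_criterion = bool(FIELD_LOSS_DIAGNOSES & {d.lower() for d in diagnoses})
    return acuity_criterion or diagnosis_criterion

print(lvr_alert_should_fire(60, set()))         # True: 20/60 is worse than 20/40
print(lvr_alert_should_fire(25, {"glaucoma"}))  # False: 20/25 acuity, no field loss
```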
Houston KE, Paschalis EI. Feasibility of Magnetic Levator Prosthesis Frame Customization Using Craniofacial Scans and 3-D Printing. Transl Vis Sci Technol 2022;11(10):34.
Purpose: To determine the feasibility of a custom frame generation approach for nonsurgical management of severe blepharoptosis with the magnetic levator prosthesis (MLP). Methods: Participants (n = 8) with severe blepharoptosis (obscuring the visual axis) in one or both eyes who had previously been using a non-custom MLP had a craniofacial scan with a smartphone app to generate a custom MLP frame. A magnetic adhesive was attached to the affected eyelid. The custom MLP frame held a cylindrical magnet near the eyebrow above the affected eyelid, suspending it in the magnetic field while still allowing blinking. The spectacle magnet could be rotated manually, providing adjustable force via angular translation of the magnetic field. Fitting success and comfort were recorded, and interpalpebral fissure (IPF) was measured from video frames after 20 minutes of in-office and one week of at-home use. Preference was documented, custom versus non-custom. Results: Overall, 88% of participants (7/8) were successfully fitted, with a median comfort rating of 9/10 (interquartile range, 7-10) and a median ptosis improvement of 2.3 mm (1.3-5.0; P = 0.01). Exact binomial testing suggested, with 80% power, that the true population success rate was significantly greater than 45% (P = 0.05). Five participants took the custom MLP home for one week, with only one case of mild conjunctival redness, which resolved without treatment. Modulating from the highest to the lowest force setting resulted in a marginally significant median IPF adjustment of 1.5 mm (0.8 to 2.7; P = 0.06). All preferred the custom frame. Conclusions: The three-dimensional custom MLP frame generation approach using a smartphone app-based craniofacial scan is a feasible approach for clinical deployment of the MLP. Translational Relevance: This is the first demonstration of customized frame generation for the MLP.
Hoogsteen KMP, Szpiro S, Kreiman G, Peli E. Beyond the Cane: Describing Urban Scenes to Blind People for Mobility Tasks. ACM Trans Access Comput 2022;15(3).
Blind people face difficulties with independent mobility, impacting employment prospects, social inclusion, and quality of life. Given the advancements in computer vision, with more efficient and effective automated information extraction from visual scenes, it is important to determine what information is worth conveying to blind travelers, especially since people have a limited capacity to receive and process sensory information. We aimed to investigate which objects in a street scene are useful to describe and how those objects should be described. Thirteen cane-using participants, five of whom were early blind, took part in two urban walking experiments. In the first experiment, participants were asked to voice their information needs in the form of questions to the experimenter. In the second experiment, participants were asked to score scene descriptions and navigation instructions, provided by the experimenter, in terms of their usefulness. The descriptions included a variety of objects with various annotations per object. Additionally, we asked participants to rank order the objects and the different descriptions per object in terms of priority and explain why the provided information is or is not useful to them. The results reveal differences between early and late blind participants. Late blind participants requested information more frequently and prioritized information about objects' locations. Our results illustrate how different factors, such as the level of detail, relative position, and what type of information is provided when describing an object, affected the usefulness of scene descriptions. Participants explained how they (indirectly) used information, but they were frequently unable to explain their ratings. The results distinguish between various types of travel information, underscore the importance of featuring these types at multiple levels of abstraction, and highlight gaps in current understanding of travel information needs. Elucidating the information needs of blind travelers is critical for the development of more useful assistive technologies.
Xu J, Baliutaviciute V, Swan G, Bowers AR. Driving With Hemianopia X: Effects of Cross Traffic on Gaze Behaviors and Pedestrian Responses at Intersections. Front Hum Neurosci 2022;16:938140.
Purpose: We conducted a driving simulator study to investigate the effects of monitoring intersection cross traffic on gaze behaviors and responses to pedestrians by drivers with hemianopic field loss (HFL). Methods: Sixteen HFL and sixteen normal vision (NV) participants completed two drives in an urban environment. At 30 intersections, a pedestrian ran across the road when the participant entered the intersection, requiring a braking response to avoid a collision. Intersections with these pedestrian events had either (1) no cross traffic, (2) one approaching car from the side opposite the pedestrian location, or (3) two approaching cars, one from each side at the same time. Results: Overall, HFL drivers made more (p < 0.001) and larger (p = 0.016) blind- than seeing-side scans and looked at the majority (>80%) of cross-traffic on both the blind and seeing sides. They made more numerous and larger gaze scans (p < 0.001) when they fixated cars on both sides (compared to one or no cars) and had lower rates of unsafe responses to blind- but not seeing-side pedestrians (interaction, p = 0.037). They were more likely to demonstrate compensatory blind-side fixation behaviors (faster time to fixate and longer fixation durations) when there was no car on the seeing side. Fixation behaviors and unsafe response rates were most similar to those of NV drivers when cars were fixated on both sides. Conclusion: For HFL participants, making more scans, larger scans and safer responses to pedestrians crossing from the blind side were associated with looking at cross traffic from both directions. Thus, cross traffic might serve as a reminder to scan and provide a reference point to guide blind-side scanning of drivers with HFL. Proactively checking for cross-traffic cars from both sides could be an important safety practice for drivers with HFL.
Manley CE, Bennett CR, Merabet LB. Assessing Higher-Order Visual Processing in Cerebral Visual Impairment Using Naturalistic Virtual-Reality-Based Visual Search Tasks. Children (Basel) 2022;9(8).
Cerebral visual impairment (CVI) is a brain-based disorder associated with the maldevelopment of central visual pathways. Individuals with CVI often report difficulties with daily visual search tasks such as finding a favorite toy or familiar person in cluttered and crowded scenes. We developed two novel virtual reality (VR)-based visual search tasks combined with eye tracking to objectively assess higher-order processing abilities in CVI. The first (virtual toybox) simulates a static object search, while the second (virtual hallway) represents a dynamic human search task. Participants were instructed to search for a preselected target while task demand was manipulated with respect to the presence of surrounding distractors. We found that CVI participants (when compared to age-matched controls) showed an overall impairment with visual search on both tasks and with respect to all gaze metrics. Furthermore, CVI participants showed a trend of worsening performance with increasing task demand. Finally, search performance was also impaired in CVI participants with normal/near normal visual acuity, suggesting that reduced stimulus visibility alone does not account for these observations. This novel approach may have important clinical utility in helping to assess environmental factors related to the functional visual processing difficulties observed in CVI.
Xu J, Emmermann B, Bowers AR. Auditory Reminder Cues to Promote Proactive Scanning on Approach to Intersections in Drivers With Homonymous Hemianopia: Driving With Hemianopia, IX. JAMA Ophthalmol 2022;140(1):75-78.
Importance: Individuals with homonymous hemianopia (HH) are permitted to drive in some jurisdictions. They could compensate for their hemifield vision loss by scanning toward the blind side. However, some drivers with HH do not scan adequately to the blind side when approaching an intersection, resulting in delayed responses to hazards. Objective: To evaluate whether auditory reminder cues promoted proactive scanning on approach to intersections. Design, Setting, and Participants: This cross-sectional, single-visit driving simulator study was conducted from October 2018 to May 2019 at a vision rehabilitation research laboratory. A volunteer sample of individuals with HH without visual neglect was included in this analysis. This post hoc analysis was completed in July and August 2020. Main Outcomes and Measures: Participants completed drives with and without scanning reminder cues (a single tone from a speaker on the blind side). Scanning was quantified by the percentage of intersections at which an early large scan was made (a scan with a head movement of at least 20° made before 30 m from the intersection). Responses to motorcycle hazards at intersections were quantified by the time to the first fixation and the time to the horn-press response. Results: Sixteen individuals were recruited and completed the study. Two were subsequently excluded from analyses. Thus, data from 14 participants (median [IQR] age, 54 [36-66] years; 13 men [93%]) were included. Stroke was the primary cause of the HH (10 participants [71%]). Six (43%) had right-sided HH. Participants were more likely to make an early large scan to the blind side in drives with vs without cues (65% vs 45%; difference, 20% [95% CI, 5%-37%]; P < .001). When participants made an early large scan to the blind side, they were faster to make their first fixation on blind-side motorcycles (mean [SD], 1.77 [1.34] vs 3.88 [1.17] seconds; difference, -2.11 [95% CI, -2.46 to -1.75] seconds; P < .001) and faster to press the horn (mean [SD], 2.54 [1.19] vs 4.54 [1.37] seconds; difference, -2.00 [95% CI, -2.38 to -1.62] seconds; P < .001) than when they did not make an early scan. Conclusions and Relevance: This post hoc analysis suggests that auditory reminder cues may promote proactive scanning, which may be associated with faster responses to hazards. This hypothesis should be considered in future prospective studies.
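The scanning outcome defined above (an early large scan: a head movement of at least 20° made before 30 m from the intersection) can be computed straightforwardly from simulator logs. The sketch below assumes a hypothetical log layout with one row per detected scan.

```python
# Illustrative sketch of the "early large scan" measure: at least one scan with
# head yaw of 20 degrees or more occurring farther than 30 m from the intersection.
# The log layout (one row per detected scan) is a hypothetical assumption.
import pandas as pd

scans = pd.read_csv("scan_log.csv")   # hypothetical columns: drive_id, intersection_id,
                                      #   head_yaw_deg, dist_to_intersection_m

early_large = (scans["head_yaw_deg"].abs() >= 20) & (scans["dist_to_intersection_m"] > 30)

# Percentage of intersections per drive with at least one early large scan --
# the measure compared between cued and uncued drives in the study.
per_intersection = early_large.groupby([scans["drive_id"], scans["intersection_id"]]).any()
pct_early = per_intersection.groupby(level="drive_id").mean() * 100
print(pct_early)
```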
