Mobility Enhancement & Vision Rehabilitation

W
Wolfe JM, Cain MS, Aizenman AM. Guidance and selection history in hybrid foraging visual search. Atten Percept Psychophys 2019;81(3):637-653.
In Hybrid Foraging tasks, observers search for multiple instances of several types of target. Collecting all the dirty laundry and kitchenware out of a child's room would be a real-world example. How are such foraging episodes structured? A series of four experiments shows that selection of one item from the display makes it more likely that the next item will be of the same type. This pattern holds if the targets are defined by basic features like color and shape but not if they are defined by their identity (e.g., the letters p & d). Additionally, switching between target types during search is expensive in time, with longer response times between successive selections if the target type changes than if it stays the same. Finally, the decision to leave a screen/patch for the next screen in these foraging tasks is imperfectly consistent with the predictions of optimal foraging theory. The results of these hybrid foraging studies cast new light on the ways in which prior selection history guides subsequent visual search in general.
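The run structure and switch costs described in this abstract can be quantified directly from a sequence of selections. Below is a minimal Python sketch with hypothetical data and invented function names (not the authors' analysis code): it computes the length of each same-type run and the mean response-time cost of switching target types.

```python
from itertools import groupby

def run_lengths(selections):
    """Length of each run of consecutive same-type selections."""
    return [len(list(group)) for _, group in groupby(selections)]

def switch_cost(selections, rts):
    """Mean RT difference between type-switch and type-repeat selections."""
    repeats, switches = [], []
    for prev, curr, rt in zip(selections, selections[1:], rts[1:]):
        (repeats if curr == prev else switches).append(rt)
    return sum(switches) / len(switches) - sum(repeats) / len(repeats)

# Hypothetical foraging sequence: target types and inter-selection RTs (ms)
types = ["red", "red", "red", "blue", "blue", "red", "red"]
rts = [650, 480, 470, 820, 500, 790, 490]

print(run_lengths(types))               # [3, 2, 2] -> runs of same-type picks
print(round(switch_cost(types, rts)))   # positive value = switching is slower
```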
Wolfe JM, Wick FA, Mishra M, DeGutis J, Lyu W. Spatial and temporal massive memory in humans. Curr Biol 2023;33(2):405-410.e4.
It is well known that humans have a massive memory for pictures and scenes [1-4]. They show an ability to encode thousands of images with only a few seconds of exposure to each. In addition to this massive memory for "what" observers have seen, three experiments reported here show that observers have a "spatial massive memory" (SMM) for "where" stimuli have been seen and a "temporal massive memory" (TMM) for "when" stimuli have been seen. The positions in time and space for at least dozens of items can be reported with good, if not perfect, accuracy. Previous work has suggested that there might be good memory for stimulus location [5,6], but there do not seem to have been concerted efforts to measure the extent of this memory. Moreover, in our method, observers are recalling where items were located and not merely recognizing the correct location. This is interesting because massive memory is sometimes thought to be limited to recognition tasks based on a sense of familiarity.
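The spatial and temporal recall measures described here reduce to simple error computations. A minimal sketch, assuming hypothetical coordinates and serial positions (the paper's actual scoring may differ): spatial error as the Euclidean distance between true and recalled locations, temporal error as the difference between true and recalled serial positions.

```python
import math

def placement_errors(true_pos, recalled_pos, true_order, recalled_order):
    """Spatial error: Euclidean distance between true and recalled locations.
    Temporal error: absolute difference in serial position."""
    spatial = [math.dist(t, r) for t, r in zip(true_pos, recalled_pos)]
    temporal = [abs(t - r) for t, r in zip(true_order, recalled_order)]
    return spatial, temporal

# Hypothetical: three items, true vs. recalled screen positions and serial order
true_pos = [(100, 200), (400, 300), (250, 50)]
recalled_pos = [(120, 190), (390, 340), (500, 400)]
spatial, temporal = placement_errors(true_pos, recalled_pos, [1, 2, 3], [1, 3, 2])
print([round(e) for e in spatial], temporal)  # large third error = poor recall
```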
Wolfe JM, Aizenman AM, Boettcher SEP, Cain MS. Hybrid foraging search: Searching for multiple instances of multiple types of target. Vision Res 2016;119:50-9.
This paper introduces the "hybrid foraging" paradigm. In typical visual search tasks, observers search for one instance of one target among distractors. In hybrid search, observers search through visual displays for one instance of any of several types of target held in memory. In foraging search, observers collect multiple instances of a single target type from visual displays. Combining these paradigms, in hybrid foraging tasks observers search visual displays for multiple instances of any of several types of target (as might be the case in searching the kitchen for dinner ingredients or an X-ray for different pathologies). In the present experiment, observers held 8-64 target objects in memory. They viewed displays of 60-105 randomly moving photographs of objects and used the computer mouse to collect multiple targets before choosing to move to the next display. Rather than selecting at random among available targets, observers tended to collect items in runs of one target type. Reaction time (RT) data indicate that searching again for the same item is more efficient than searching for any of the other targets held in memory. Observers were trying to maximize collection rate. As a result, and consistent with optimal foraging theory, they tended to leave 25-33% of targets uncollected when moving to the next screen/patch. The pattern of RTs shows that while observers were collecting a target item, they had already begun searching memory and the visual display for additional targets, making the hybrid foraging task a useful way to investigate the interaction of visual and memory search.
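The patch-leaving behavior mentioned at the end of this abstract follows the logic of the marginal value theorem from optimal foraging theory: leave the current screen when the instantaneous collection rate falls below the average rate for the whole environment. A toy sketch of that decision rule, with invented numbers rather than the experiment's parameters:

```python
def should_leave(rts_this_patch, overall_rate, window=3):
    """Marginal value theorem rule: leave when the recent collection rate
    (targets per second over the last few picks) falls below the
    environment-wide average rate, which includes travel time."""
    recent = rts_this_patch[-window:]
    instantaneous_rate = len(recent) / sum(recent)
    return instantaneous_rate < overall_rate

# Hypothetical inter-selection times (s): picks slow as the patch depletes
patch_rts = [0.8, 0.9, 1.1, 1.6, 2.4, 3.5]
overall_rate = 0.7  # targets/s averaged over patches, including travel time

for i in range(3, len(patch_rts) + 1):
    print(f"after pick {i}: leave = {should_leave(patch_rts[:i], overall_rate)}")
```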

Wolfe JM. Visual Search: How Do We Find What We Are Looking For? Annu Rev Vis Sci 2020;6:539-562.
In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
Wolfe JM, Utochkin IS. What is a preattentive feature? Curr Opin Psychol 2018;29:19-26.
The concept of a preattentive feature has been central to vision and attention research for about half a century. A preattentive feature is a feature that guides attention in visual search and that cannot be decomposed into simpler features. While that definition seems straightforward, there is no simple diagnostic test that infallibly identifies a preattentive feature. This paper briefly reviews the criteria that have been proposed and illustrates some of the difficulties of definition.
Wolfe JM. Visual Attention: Size Matters. Curr Biol 2017;27(18):R1002-R1003.
When searching real-world scenes, human attention is guided by knowledge of the plausible size of the target object (if an object is six feet tall, it isn't your cat). Computer algorithms typically do not do this, but perhaps they should.
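As an illustration of the kind of size-based guidance the paper argues algorithms should adopt, a candidate detection could be rejected when its implied physical size is implausible for the target. A hypothetical sketch using the pinhole-camera relation (height in meters = pixel height × depth / focal length in pixels); the function, inputs, and tolerance are all invented for illustration:

```python
def plausible_size(candidates, target_height_m, depth_map, focal_px, tol=0.5):
    """Keep candidate boxes whose implied physical height is close to the
    target's known height. candidates: (x, y, w, h) pixel boxes;
    depth_map[y][x]: distance to that pixel in meters (hypothetical inputs)."""
    kept = []
    for x, y, w, h in candidates:
        depth = depth_map[y][x]
        implied_height_m = h * depth / focal_px  # pinhole-camera relation
        if abs(implied_height_m - target_height_m) <= tol * target_height_m:
            kept.append((x, y, w, h))
    return kept

# Searching for a cat (~0.3 m tall): the 0.9 m tall candidate is rejected
depth = [[2.0] * 100 for _ in range(100)]     # flat 2 m scene, hypothetical
boxes = [(10, 20, 15, 30), (50, 10, 40, 90)]  # pixel heights 30 and 90
print(plausible_size(boxes, 0.3, depth, focal_px=200))  # [(10, 20, 15, 30)]
```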
Wolfe JM. Major issues in the study of visual search: Part 2 of "40 Years of Feature Integration: Special Issue in Memory of Anne Treisman". Atten Percept Psychophys 2020;
Wolfe JM. Guided Search 6.0: An updated model of visual search. Psychon Bull Rev 2021;28(4):1060-1092.
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual fields (FVFs) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
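As a rough illustration of the priority-map idea in this abstract, the sketch below combines five guidance maps into a single priority map, applies a foveal bias standing in for a functional visual field, and selects the most active location. The grid, weights, and Gaussian bias are invented for illustration; GS6's actual formulation is specified in the paper itself, not here.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 16, 16  # a coarse spatial grid standing in for the visual field

# Five preattentive guidance sources, each a spatial map (invented values)
top_down  = rng.random((H, W))  # match to current target features
bottom_up = rng.random((H, W))  # local feature contrast / salience
history   = rng.random((H, W))  # priming from previous trials
reward    = rng.random((H, W))  # learned value of locations/features
scene     = rng.random((H, W))  # syntactic/semantic scene guidance

weights = dict(top_down=2.0, bottom_up=1.0, history=0.5, reward=0.5, scene=1.5)

priority = (weights["top_down"] * top_down
            + weights["bottom_up"] * bottom_up
            + weights["history"] * history
            + weights["reward"] * reward
            + weights["scene"] * scene)

# Foveal bias: guidance favors items near fixation (a functional visual field)
fix_y, fix_x = H // 2, W // 2
yy, xx = np.mgrid[0:H, 0:W]
fvf = np.exp(-((yy - fix_y) ** 2 + (xx - fix_x) ** 2) / (2 * 5.0 ** 2))
priority *= fvf

# Attention is deployed to the most active location (~20 times per second)
next_attended = np.unravel_index(np.argmax(priority), priority.shape)
print("next attended location:", next_attended)
```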
Wolfe JM, Evans KK, Drew T, Aizenman A, Josephs E. How do radiologists use the human search engine? Radiat Prot Dosimetry 2016;169(1-4):24-31.
Radiologists perform many 'visual search tasks' in which they look for one or more instances of one or more types of target item in a medical image (e.g. cancer screening). To understand and improve how radiologists do such tasks, it must be understood how the human 'search engine' works. This article briefly reviews some of the relevant work on this aspect of medical image perception. Questions include: How are attention and the eyes guided in radiologic search? How is global (image-wide) information used in search? How might properties of human vision and human cognition lead to errors in radiologic search?
X
Xu J, Hutton A, Dougherty BE, Bowers AR. Driving Difficulties and Preferences of Advanced Driver Assistance Systems by Older Drivers With Central Vision Loss. Transl Vis Sci Technol 2023;12(10):7.
PURPOSE: The purpose of this study was to investigate driving difficulties and Advanced Driver Assistance Systems (ADAS) use and preferences of drivers with and without central vision loss (CVL). METHODS: Fifty-eight drivers with CVL (71 ± 13 years) and 68 without (72 ± 8 years) completed a telephone questionnaire. They rated their perceived driving difficulty and usefulness of technology support in 15 driving situations under good (daytime) and reduced visibility conditions, and reported their experience with, and preferences for, 12 available ADAS technologies. RESULTS: Drivers with CVL reported more difficulty (P = 0.002) and greater usefulness of technology support (P = 0.003) than non-CVL drivers, especially in reduced visibility conditions. Increased driving difficulty was associated with higher perceived technology usefulness (r = 0.34, P < 0.001). Dealing with blind spot road users, glare, unexpected pedestrians, and unfamiliar areas were perceived as the most difficult tasks that would benefit from technology support. Drivers with CVL used more advanced ADAS features than non-CVL drivers (P = 0.02), preferred to own the blind spot warning, pedestrian warning, and forward collision avoidance systems, and favored ADAS support that provided both information and active intervention. The perceived benefits of and willingness to own ADAS technologies were high for both groups. CONCLUSIONS: Drivers with CVL used more advanced ADAS and perceived greater usefulness of driver assistance technology in supporting difficult driving situations, with a strong preference for collision prevention support. TRANSLATIONAL RELEVANCE: This study highlights the specific technology needs and preferences of older drivers with CVL, which can inform future ADAS development, evaluation, and training tailored to this group.
Xu J, Baliutaviciute V, Swan G, Bowers AR. Driving With Hemianopia X: Effects of Cross Traffic on Gaze Behaviors and Pedestrian Responses at Intersections. Front Hum Neurosci 2022;16:938140.
Purpose: We conducted a driving simulator study to investigate the effects of monitoring intersection cross traffic on gaze behaviors and responses to pedestrians by drivers with hemianopic field loss (HFL). Methods: Sixteen HFL and sixteen normal vision (NV) participants completed two drives in an urban environment. At 30 intersections, a pedestrian ran across the road when the participant entered the intersection, requiring a braking response to avoid a collision. Intersections with these pedestrian events had either (1) no cross traffic, (2) one approaching car from the side opposite the pedestrian location, or (3) two approaching cars, one from each side at the same time. Results: Overall, HFL drivers made more (p < 0.001) and larger (p = 0.016) blind- than seeing-side scans and looked at the majority (>80%) of cross-traffic on both the blind and seeing sides. They made more numerous and larger gaze scans (p < 0.001) when they fixated cars on both sides (compared to one or no cars) and had lower rates of unsafe responses to blind- but not seeing-side pedestrians (interaction, p = 0.037). They were more likely to demonstrate compensatory blind-side fixation behaviors (faster time to fixate and longer fixation durations) when there was no car on the seeing side. Fixation behaviors and unsafe response rates were most similar to those of NV drivers when cars were fixated on both sides. Conclusion: For HFL participants, making more scans, larger scans and safer responses to pedestrians crossing from the blind side were associated with looking at cross traffic from both directions. Thus, cross traffic might serve as a reminder to scan and provide a reference point to guide blind-side scanning of drivers with HFL. Proactively checking for cross-traffic cars from both sides could be an important safety practice for drivers with HFL.
Xu J, Emmermann B, Bowers AR. Auditory Reminder Cues to Promote Proactive Scanning on Approach to Intersections in Drivers With Homonymous Hemianopia: Driving With Hemianopia, IX. JAMA Ophthalmol 2022;140(1):75-78.
Importance: Individuals with homonymous hemianopia (HH) are permitted to drive in some jurisdictions. They could compensate for their hemifield vision loss by scanning toward the blind side. However, some drivers with HH do not scan adequately to the blind side when approaching an intersection, resulting in delayed responses to hazards. Objective: To evaluate whether auditory reminder cues promoted proactive scanning on approach to intersections. Design, Setting, and Participants: This cross-sectional, single-visit driving simulator study was conducted from October 2018 to May 2019 at a vision rehabilitation research laboratory. A volunteer sample of individuals with HH without visual neglect was included in this analysis. This post hoc analysis was completed in July and August 2020. Main Outcomes and Measures: Participants completed drives with and without scanning reminder cues (a single tone from a speaker on the blind side). Scanning was quantified by the percentage of intersections at which an early large scan was made (a scan with a head movement of at least 20° made before 30 m from the intersection). Responses to motorcycle hazards at intersections were quantified by the time to the first fixation and the time to the horn-press response. Results: Sixteen individuals were recruited and completed the study. Two were subsequently excluded from analyses. Thus, data from 14 participants (median [IQR] age, 54 [36-66] years; 13 men [93%]) were included. Stroke was the primary cause of the HH (10 participants [71%]). Six (43%) had right-sided HH. Participants were more likely to make an early large scan to the blind side in drives with vs without cues (65% vs 45%; difference, 20% [95% CI, 5%-37%]; P < .001). When participants made an early large scan to the blind side, they were faster to make their first fixation on blind-side motorcycles (mean [SD], 1.77 [1.34] vs 3.88 [1.17] seconds; difference, -2.11 [95% CI, -2.46 to -1.75] seconds; P < .001) and faster to press the horn (mean [SD], 2.54 [1.19] vs 4.54 [1.37] seconds; difference, -2.00 [95% CI, -2.38 to -1.62] seconds; P < .001) than when they did not make an early scan. Conclusions and Relevance: This post hoc analysis suggests that auditory reminder cues may promote proactive scanning, which may be associated with faster responses to hazards. This hypothesis should be considered in future prospective studies.
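The "early large scan" outcome has a simple operational definition: a scan with a head movement of at least 20° made before the driver is within 30 m of the intersection. A minimal sketch of that computation from a head-yaw trace (hypothetical data format and function name, not the study's analysis code):

```python
def made_early_large_scan(samples, min_yaw_deg=20.0, min_distance_m=30.0):
    """samples: list of (distance_to_intersection_m, head_yaw_deg) pairs.
    Returns True if a head movement of at least min_yaw_deg occurred
    before the driver was within min_distance_m of the intersection."""
    early = [yaw for dist, yaw in samples if dist > min_distance_m]
    if not early:
        return False
    # Peak-to-trough yaw excursion during the approach segment
    return max(early) - min(early) >= min_yaw_deg

# Hypothetical approach: yaw swings ~25 deg toward the blind side 45-35 m out
trace = [(60, 0), (50, -2), (45, 18), (40, 23), (35, 5), (25, 0), (10, 0)]
print(made_early_large_scan(trace))  # True: 25 deg excursion before 30 m
```

The percentage reported in the study would then be the proportion of intersections for which this function returns True.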
Z
Zhang X, Manley CE, Micheletti S, Tesic I, Bennett CR, Fazzi EM, Merabet LB. Assessing visuospatial processing in cerebral visual impairment using a novel and naturalistic static visual search task. Res Dev Disabil 2022;131:104364.
BACKGROUND: Cerebral visual impairment (CVI) is a brain-based visual disorder associated with the maldevelopment of central visual pathways. Individuals with CVI often report difficulties finding a target of interest in cluttered and crowded visual scenes. However, it remains unknown how manipulating task demands and other environmental factors influence visual search performance in this population. AIM: We developed a novel and naturalistic virtual reality (VR) based static visual search task combined with eye tracking called the "virtual toy box" to objectively assess visual search performance in CVI. METHODS AND PROCEDURES: A total of 38 individuals with CVI (mean age 13.18 years ± 3.58 SD) and 53 controls with neurotypical development (mean age 15.25 years ± 5.72 SD) participated in the study. In a first experiment, study subjects were instructed to search for a preselected toy presented among a varying number of surrounding distractor toys (set size ranging from 1 to 36 items). In a second experiment, we assessed the effects of manipulating item spacing and the size of the visual area explored (field of view; FOV). OUTCOMES AND RESULTS: Behavioral outcomes collected were success rate, reaction time, gaze error, visual search area, and off-screen percent (an index of task compliance). Compared to age-matched controls, participants with CVI showed an overall impairment with respect to all the visual search outcomes of interest. Specifically, individuals with CVI were less likely and took longer to find the target, and search patterns were less accurate and precise compared to controls. Visual search response profiles were also comparatively less efficient and were associated with a slower initial pre-search (visual orienting) response as indexed by higher slope and intercept values derived from the analysis of reaction time × set size functions. Search performance was also more negatively affected in CVI at the smallest as well as largest spacing conditions tested, while increasing FOV was associated with decreased gaze accuracy and precision. CONCLUSIONS AND IMPLICATIONS: These results are consistent with a general profile of impaired visual search abilities in CVI as well as worsening performance with increased visual task demands and an overall sensitivity to visual clutter and crowding. The observed profile of impaired visual search performance may be associated with dysfunctions related to how visual selective attention is deployed in individuals with CVI.
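The slope and intercept measures mentioned here come from fitting a line to mean reaction time as a function of set size: the slope indexes the per-item cost of search, and the intercept reflects pre-search (visual orienting) processes. A minimal sketch with made-up numbers, not the study's data:

```python
import numpy as np

# Hypothetical mean reaction times (s) at each set size (number of toys)
set_sizes = np.array([1, 4, 9, 16, 25, 36])
mean_rts = np.array([1.2, 1.5, 2.1, 2.8, 3.9, 5.0])

# Least-squares line: RT = slope * set_size + intercept
slope, intercept = np.polyfit(set_sizes, mean_rts, deg=1)
print(f"slope: {slope * 1000:.0f} ms/item")  # per-item search cost
print(f"intercept: {intercept:.2f} s")       # pre-search (orienting) time
```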
