Mobility Enhancement & Vision Rehabilitation

C
Costela FM, Woods RL. A free database of eye movements watching "Hollywood" videoclips. Data Brief 2019;25:103991.
The provided database of tracked eye movements was collected using an infrared, video-based EyeLink 1000 system from 95 participants as they viewed 'Hollywood' video clips. There are 206 30-s clips and eleven 30-min clips, for a total viewing time of about 60 hours. The database also provides the raw 30-s video clip files, a short preview of the 30-min clips, and subjective ratings of the content of each video in eight categories: (1) genre; (2) importance of human faces; (3) importance of human figures; (4) importance of man-made objects; (5) importance of nature; (6) auditory information; (7) lighting; and (8) environment type. Precise timing of the scene cuts within the clips and the democratic gaze scanpath position (center of interest) per frame are also provided. At this time, this eye-movement dataset has the widest age range (22-85 years) and is the third largest (in recorded video viewing time) of those that have been made available to the research community. The data-acquisition procedures are described, along with participant demographics, summaries of some common eye-movement statistics, and highlights of research topics in which the database has been used. The dataset is freely available in the Open Science Framework repository (link in the manuscript) and can be used without restriction for educational and research purposes, provided that this paper is cited in any published work.
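As a sketch of how the per-frame center of interest described above could be derived from such a dataset, the snippet below pools all observers' gaze samples on each video frame and takes the median position. The file name and column names (frame, subject, x, y) are assumptions for illustration, not the actual layout of the OSF files.

```python
import pandas as pd

# Hypothetical long-format gaze table: one row per subject per video frame.
# The file name and column names are assumptions, not the actual OSF layout.
gaze = pd.read_csv("gaze_samples.csv")

# "Democratic" center of interest: a robust central gaze position per frame,
# here the median across all subjects with a valid sample on that frame.
coi = (gaze.dropna(subset=["x", "y"])
           .groupby("frame")[["x", "y"]]
           .median()
           .rename(columns={"x": "coi_x", "y": "coi_y"}))

print(coi.head())
```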
Costela FM, Saunders DR, Rose DJ, Kajtezovic S, Reeves SM, Woods RL. People With Central Vision Loss Have Difficulty Watching Videos. Invest Ophthalmol Vis Sci 2019;60(1):358-364.
Purpose: People with central vision loss (CVL) often report difficulties watching video. We objectively evaluated the ability to follow the story (using the information acquisition method). Methods: Subjects with CVL (n = 23) or normal vision (NV, n = 60) described the content of 30-second video clips from movies and documentaries. We derived an objective information acquisition (IA) score for each response using natural-language processing. To test whether the impact of CVL was simply due to reduced resolution, another group of NV subjects (n = 15) described video clips with defocus blur that reduced visual acuity to between 20/50 and 20/800. Mixed models included random effects correcting for differences between subjects and between the clips, with age, gender, cognitive status, and education as covariates. Results: Compared to both NV groups, IA scores were worse for the CVL group (P < 0.001). IA scores decreased with worsening visual acuity (P < 0.001), and that decrease was greater for the CVL group than for the NV-defocus group (P = 0.01), seen as a greater discrepancy at worse levels of visual acuity. Conclusions: The IA method was able to detect the difficulties in following the story experienced by people with CVL. Defocus blur failed to recreate the CVL experience. IA is likely to be useful for evaluations of the effects of vision rehabilitation.
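The information acquisition (IA) score is based on how many words a new description shares with a database of control descriptions of the same clip. A minimal sketch of that idea is below; it is a simplified stand-in for the paper's natural-language-processing pipeline (no lemmatization or weighting), and the stop-word list and example sentences are made up.

```python
import re

STOP_WORDS = {"the", "a", "an", "and", "of", "in", "on", "is", "was", "to", "it"}

def content_words(text):
    """Lowercase, strip punctuation, drop common stop words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {t for t in tokens if t not in STOP_WORDS}

def ia_score(response, control_descriptions):
    """Mean number of content words shared with each control description.

    A simplified stand-in for the NLP scoring described in the paper; the
    exact pipeline (lemmatization, stop-word list, weighting) is not reproduced.
    """
    resp = content_words(response)
    overlaps = [len(resp & content_words(c)) for c in control_descriptions]
    return sum(overlaps) / len(overlaps)

controls = ["a man runs across a busy street at night",
            "someone is running between cars in the dark"]
print(ia_score("a man running across the street", controls))
```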
Costela FM, Reeves SM, Woods RL. The Effect of Zoom Magnification and Large Display on Video Comprehension in Individuals With Central Vision Loss. Transl Vis Sci Technol 2021;10(8):30.
Purpose: A larger display at the same viewing distance provides relative-size magnification for individuals with central vision loss (CVL). However, the resulting large visible area of the display is expected to result in more head rotation, which may cause discomfort. We created a zoom magnification technique that placed the center of interest (COI) in the center of the display to reduce the need for head rotation. Methods: In a 2 × 2 within-subject study design, 23 participants with CVL viewed video clips from 1.5 m (4.9 feet) shown with or without zoom magnification, and with a large (208 cm/82" diagonal, 69°) or a typical (84 cm/33", 31°) screen. Head position was tracked and a custom questionnaire was used to measure discomfort. Results: Video comprehension was better with the large screen (P < 0.001) and slightly worse with zoom magnification (P = 0.03). Oddly, head movements did not vary with screen size (P = 0.63), yet were greater with zoom magnification (P = 0.001). This finding was unexpected, because the COI remains in the center with zoom magnification, but moves widely with a large screen and no magnification. Conclusions: This initial attempt to implement the zoom magnification method had flaws that may have decreased its effectiveness. In the future, we propose alternative implementations for zoom magnification, such as variable magnification. Translational Relevance: We present the first explicit demonstration that relative-size magnification improves the video comprehension of people with CVL when viewing video.
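The screen angles quoted above follow from simple viewing geometry: they are consistent with the visual angle subtended by each screen's diagonal (208 cm and 84 cm) at the 1.5 m viewing distance. A quick check:

```python
import math

def subtended_angle_deg(extent_cm, distance_cm):
    """Visual angle subtended by an extent viewed from opposite its midpoint."""
    return 2 * math.degrees(math.atan((extent_cm / 2) / distance_cm))

# The 69-degree and 31-degree figures in the abstract match the angle subtended
# by each screen's diagonal at the 1.5 m viewing distance.
print(round(subtended_angle_deg(208, 150)))  # 69 (large screen)
print(round(subtended_angle_deg(84, 150)))   # 31 (typical screen)
```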
Costela FM, Reeves SM, Woods RL. An implementation of Bubble Magnification did not improve the video comprehension of individuals with central vision loss. Ophthalmic Physiol Opt 2021;41(4):842-852.
PURPOSE: People with central vision loss (CVL) watch television, videos and movies, but often report difficulty and have reduced video comprehension. An approach to assist viewing videos is electronic magnification of the video itself, such as Bubble Magnification. METHODS: We created a Bubble Magnification technique that displayed a magnified segment around the centre of interest (COI) as determined by the gaze of participants with normal vision. Fifteen participants with CVL viewed video clips shown unedited and with 2× and 3× Bubble Magnification. We measured video comprehension and gaze coherence. RESULTS: Video comprehension was significantly worse with both 2× (p = 0.01) and 3× Bubble Magnification (p < 0.001) than with the unedited video. There was no difference in gaze coherence across conditions (p ≥ 0.58). These results were unexpected, as we had anticipated a benefit in both video comprehension and gaze coherence. This initial attempt to implement the Bubble Magnification method had flaws that probably reduced its effectiveness. CONCLUSIONS: In the future, we propose alternative implementations of Bubble Magnification, such as variable magnification and bubble size. This study is a first step in the development of an intelligent-magnification approach to providing a vision rehabilitation aid to assist people with CVL.
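A rough illustration of the Bubble Magnification idea, not the paper's implementation, is sketched below with OpenCV: a circular region around the centre of interest is replaced by a magnified copy of the underlying image. It assumes the bubble lies fully inside the frame and omits edge blending and COI smoothing.

```python
import cv2
import numpy as np

def bubble_magnify(frame, cx, cy, radius=80, mag=2.0):
    """Paste a magnified circular 'bubble' centred on the centre of interest.

    Illustrative sketch only (assumes the bubble lies fully inside the frame);
    the paper's actual implementation is not reproduced here.
    """
    r = int(radius)
    sr = int(round(r / mag))                   # source half-width before magnification
    patch = frame[cy - sr:cy + sr, cx - sr:cx + sr]
    patch = cv2.resize(patch, (2 * r, 2 * r))  # magnified to the bubble size

    yy, xx = np.mgrid[-r:r, -r:r]
    mask = xx ** 2 + yy ** 2 <= r ** 2         # circular bubble mask

    out = frame.copy()
    region = out[cy - r:cy + r, cx - r:cx + r]
    region[mask] = patch[mask]                 # writes through the view into `out`
    return out
```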
Costela FM, Saunders DR, Kajtezovic S, Rose DJ, Woods RL. Measuring the Difficulty Watching Video With Hemianopia and an Initial Test of a Rehabilitation Approach. Transl Vis Sci Technol 2018;7(4):13.
Purpose: If you cannot follow the story when watching a video, then the viewing experience is degraded. We measured the difficulty of following the story, defined as the ability to acquire visual information, which is experienced by people with homonymous hemianopia (HH). Further, we proposed and tested a novel rehabilitation aid. Methods: Participants watched 30-second directed video clips. Following each video clip, subjects described the visual content of the clip. An objective score of information acquisition (IA) was derived by comparing each new response to a control database of descriptions of the same clip using natural language processing. Study 1 compared 60 participants with normal vision (NV) to 24 participants with HH to test the hypothesis that participants with HH would score lower than NV participants, consistent with reports from people with HH that describe difficulties in video watching. In the second study, 21 participants with HH viewed clips with or without a superimposed dynamic cue that we called a content guide. We hypothesized that IA scores would increase using this content guide. Results: The HH group had a significantly lower IA score, with an average of 2.8 shared words, compared with 4.3 for the NV group (mixed-effects regression, P < 0.001). Presence of the content guide significantly increased the IA score by 0.5 shared words (P = 0.03). Conclusions: Participants with HH had more difficulty acquiring information from a video, which was objectively demonstrated (reduced IA score). The content guide improved information acquisition, but not to the level of people with NV. Translational Relevance: The value of the content guide as a possible rehabilitation aid warrants further study, involving an extended period of content-guide use and a randomized controlled trial.
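The analysis above used mixed-effects regression. A simplified sketch with statsmodels is shown below, with a random intercept per subject only; the published model also included a random effect of clip and the covariates age, gender, cognitive status, and education. The file and column names are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject x clip, with the IA score,
# the vision group (HH vs NV), and covariates. Column names are assumptions.
df = pd.read_csv("ia_scores.csv")

# Simplified version of the paper's analysis: a linear mixed model with a
# random intercept per subject. The published model also had a random effect
# of clip plus age, gender, cognitive status and education as covariates.
model = smf.mixedlm("ia ~ group + age", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```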
Costela FM, Woods RL. When Watching Video, Many Saccades Are Curved and Deviate From a Velocity Profile Model. Front Neurosci 2018;12:960.
Commonly, saccades are thought to be ballistic eye movements, not modified during flight, with a straight path and a well-described velocity profile. However, they do not always follow a straight path, and studies of saccade curvature have been reported previously. In a prior study, we developed a real-time, saccade-trajectory prediction algorithm to improve the updating of gaze-contingent displays and found that saccades with a curved path or that deviated from the expected velocity profile were not well fit by our saccade-prediction algorithm (velocity-profile deviation), and thus had larger updating errors than saccades that had a straight path and a velocity profile that was fit well by the model. Further, we noticed that curved saccades and saccades with high velocity-profile deviations were more common than we had expected when participants performed a natural-viewing task. Since those saccades caused larger display updating errors, we sought a better understanding of them. Here we examine factors that could affect the curvature and velocity profile of saccades using a pool of 218,744 saccades from 71 participants watching "Hollywood" video clips. Those factors included characteristics of the participants (e.g., age), of the videos (importance of faces for following the story, genre), of the saccade (e.g., magnitude, direction), time during the session (e.g., fatigue) and presence and timing of scene cuts. While viewing the video clips, saccades were more likely to be horizontal or vertical than oblique. Measured curvature and velocity-profile deviation had continuous, skewed frequency distributions. We used mixed-effects regression models that included cubic terms and found a complex relationship between curvature, velocity-profile deviation and saccade duration (or magnitude). Curvature and velocity-profile deviation were related to some video-dependent features such as lighting, face presence, or nature and human-figure content. Time during the session was a predictor of velocity-profile deviations. Further, in univariable models, saccades that were in flight at the time of a scene cut had higher velocity-profile deviations and lower curvature. Saccade characteristics vary with a variety of factors, which suggests complex interactions between oculomotor control and scene content that could be explored further.
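One common way to quantify saccade curvature, which may differ in detail from the metric used in the paper, is the maximum perpendicular deviation of the gaze path from the straight line joining the saccade's start and end points, normalized by saccade amplitude. A minimal sketch:

```python
import numpy as np

def saccade_curvature(x, y):
    """Maximum perpendicular deviation of the saccade path from the chord
    joining its start and end points, as a fraction of saccade amplitude.

    One common curvature metric; not necessarily the paper's exact definition.
    """
    p0 = np.array([x[0], y[0]])
    p1 = np.array([x[-1], y[-1]])
    amp = np.linalg.norm(p1 - p0)
    if amp == 0:
        return 0.0
    pts = np.column_stack([x, y]) - p0
    direction = (p1 - p0) / amp
    # Signed perpendicular distance of each sample from the chord (2-D cross product).
    perp = pts[:, 0] * direction[1] - pts[:, 1] * direction[0]
    return np.max(np.abs(perp)) / amp
```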
Costela FM, Sheldon SS, Walker B, Woods RL. People with Hemianopia Report Difficulty with TV, Computer, Cinema Use, and Photography. Optom Vis Sci 2018;95(5):428-434.
SIGNIFICANCE: Our survey found that participants with hemianopia report more difficulties watching video in various formats, including television (TV), on computers, and in a movie theater, compared with participants with normal vision (NV). These reported difficulties were not as marked as those reported by people with central vision loss. PURPOSE: The aim of this study was to survey the experience (e.g., frequency, difficulty) of people with hemianopia and NV when viewing video on TV, computers and portable visual display devices, and at the cinema. This information may guide vision rehabilitation. METHODS: We administered a cross-sectional survey to investigate the viewing habits of people with hemianopia (n = 91) or NV (n = 192). The survey, consisting of 22 items, was administered either in person or in a telephone interview. Descriptive statistics are reported. RESULTS: There were five major differences between the hemianopia and NV groups. Many participants with hemianopia reported (1) at least "some" difficulty watching TV (39/82); (2) at least "some" difficulty watching video on a computer (16/62); (3) never attending the cinema (30/87); (4) at least some difficulty watching movies in the cinema (20/56), among those who did attend the cinema; and (5) never taking photographs (24/80). Some people with hemianopia reported methods that they used to help them watch video, including video playback and head turn. CONCLUSIONS: Although people with hemianopia report more difficulty with viewing video on TV and at the cinema, we are not aware of any rehabilitation methods specifically designed to assist people with hemianopia to watch video. The results of this survey may guide future vision rehabilitation.
Cunningham CA, Wolfe JM. The role of object categories in hybrid visual and memory search. J Exp Psychol Gen 2014;143(4):1585-99.
In hybrid search, observers search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RTs) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g., this apple in this pose). Typical real-world tasks involve more broadly defined sets of stimuli (e.g., any "apple" or, perhaps, "fruit"). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, observers searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches.
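The core result above, that response time grows linearly with the visual set size but logarithmically with the memory set size, can be captured by fitting RT = a + b·V + c·log2(M). A sketch with made-up mean RTs:

```python
import numpy as np

# Hypothetical mean RTs (ms) for combinations of visual set size V and memory
# set size M, illustrating the hybrid-search pattern: RT grows linearly with V
# and with log2(M). The numbers are made up for illustration.
V = np.array([4, 8, 16, 4, 8, 16, 4, 8, 16])
M = np.array([1, 1, 1, 4, 4, 4, 16, 16, 16])
rt = np.array([720, 840, 1080, 800, 920, 1160, 880, 1000, 1240])

# Least-squares fit of RT = a + b*V + c*log2(M).
X = np.column_stack([np.ones_like(V), V, np.log2(M)])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
a, b, c = coef
print(f"intercept {a:.0f} ms, {b:.0f} ms/item (visual), {c:.0f} ms per doubling of memory set")
```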
D
Dagi LR, Tiedemann LM, Heidary G, Robson CD, Hall AM, Zurakowski D. Using spectral-domain optical coherence tomography to detect optic neuropathy in patients with craniosynostosis. J AAPOS 2014;18(6):543-9.

BACKGROUND: Detecting and monitoring optic neuropathy in patients with craniosynostosis is a clinical challenge due to limited cooperation and subjective measures of visual function. The purpose of this study was to appraise the correlation of peripapillary retinal nerve fiber layer (RNFL) thickness measured by spectral-domain optical coherence tomography (SD-OCT) with indications of optic neuropathy based on fundus examination. METHODS: The medical records of all patients with craniosynostosis presenting for ophthalmic evaluation during 2013 were retrospectively reviewed. The following data were abstracted from the record: diagnosis, historical evidence of elevated intracranial pressure, current ophthalmic evaluation and visual field results, and current peripapillary RNFL thickness. RESULTS: A total of 54 patients were included (mean age, 10.6 years [range, 2.4-33.8 years]). Thirteen (24%) had evidence of optic neuropathy based on current fundus examination. Of these, 10 (77%) demonstrated either peripapillary RNFL elevation with papilledema or RNFL depression with optic atrophy. Sensitivity for detecting optic atrophy was 88%; for papilledema, 60%; and for either form of optic neuropathy, 77%. Specificity was 94%, 90%, and 83%, respectively. Kappa agreement was substantial for optic atrophy (κ = 0.73) and moderate for papilledema (κ = 0.39) and for either form of optic neuropathy (κ = 0.54). Logistic regression indicated that peripapillary RNFL thickness was predictive of optic neuropathy (P < 0.001). Multivariable analysis demonstrated that RNFL thickness measurements were more sensitive at detecting optic neuropathy than visual field testing (likelihood ratio = 10.02; P = 0.002). Sensitivity and specificity of logMAR visual acuity in detecting optic neuropathy were 15% and 95%, respectively. CONCLUSIONS: Peripapillary RNFL thickness measured by SD-OCT provides adjunctive evidence for identifying optic neuropathy in patients with craniosynostosis and appears more sensitive at detecting optic atrophy than papilledema.
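The sensitivity, specificity, and kappa values quoted above can be reproduced from a 2 × 2 table. The sketch below uses counts back-calculated from the abstract's figures for detecting either form of optic neuropathy (13 of 54 patients affected, sensitivity 77%, specificity 83%, κ = 0.54); the counts are inferred for illustration, not taken from the paper's tables.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                             # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

# Counts inferred from the abstract (illustrative): 10 of 13 affected patients
# detected, 34 of 41 unaffected patients correctly classified.
print(diagnostic_stats(tp=10, fp=7, fn=3, tn=34))  # ~ (0.77, 0.83, 0.54)
```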

Dartt DA, Masli S. Conjunctival epithelial and goblet cell function in chronic inflammation and ocular allergic inflammation. Curr Opin Allergy Clin Immunol 2014;14(5):464-70.

PURPOSE OF REVIEW: Although conjunctival goblet cells are a major cell type in ocular mucosa, their responses during ocular allergy are largely unexplored. This review summarizes the recent findings that provide key insights into the mechanisms by which their function and survival are altered during chronic inflammatory responses, including ocular allergy. RECENT FINDINGS: Conjunctiva represents a major component of the ocular mucosa that harbors specialized lymphoid tissue. Exposure of mucin-secreting goblet cells to allergic and inflammatory mediators released by the local innate and adaptive immune cells modulates proliferation, secretory function, and cell survival. Allergic mediators like histamine, leukotrienes, and prostaglandins directly stimulate goblet cell mucin secretion and consistently increase goblet cell proliferation. Goblet cell mucin secretion is also detectable in a murine model of allergic conjunctivitis. Additionally, primary goblet cell cultures allow evaluation of various inflammatory cytokines with respect to changes in goblet cell mucin secretion, proliferation, and apoptosis. These findings in combination with the preclinical mouse models help understand the goblet cell responses and their modulation during chronic inflammatory diseases, including ocular allergy. SUMMARY: Recent findings related to conjunctival goblet cells provide the basis for novel therapeutic approaches, involving modulation of goblet cell mucin production, to improve treatment of ocular allergies.

Dockery DM, Krzystolik MG. The Use of Mobile Applications as Low-Vision Aids: A Pilot Study. R I Med J (2013) 2020;103(8):69-72.
OBJECTIVE: To determine the most commonly used and highest-rated mobile applications (apps) for low-vision aids. METHODS: This was a convenience sample survey. Patients known to use low-vision apps at a nonprofit low-vision center (INSIGHT, Warwick, RI) were contacted by phone between June and September 2019. INCLUSION CRITERIA: age 18+, Snellen visual acuity (VA) below 20/70, and the use of low-vision mobile apps for at least one month. A standardized script was used to record survey data, and patients rated each app on a scale of one to five, one being the lowest and five being the highest. RESULTS: Of the sample (n=11), nine patients (81.8%) stated they used an iPhone for low-vision mobile apps. A list of 14 mobile apps was identified: the two most commonly used apps were Seeing AI (81.8%) and Be My Eyes (63.6%); their average ratings were 4.43/5 and 4.75/5, respectively. CONCLUSIONS: This survey suggests that Seeing AI and Be My Eyes are useful apps to help low-vision patients with activities of daily living.
Doherty AL, Peli E, Luo G. Hazard detection with a monocular bioptic telescope. Ophthalmic Physiol Opt 2015;35(5):530-9.

PURPOSE: The safety of bioptic telescopes for driving remains controversial. The ring scotoma, an area of the visual field not seen by the telescope eye due to the telescope magnification, has been the main cause of concern. This study evaluates whether bioptic users can use the fellow eye to detect hazards in driving videos that fall in the ring scotoma area. METHODS: Twelve visually impaired bioptic users watched a series of driving hazard perception training videos and responded as soon as they detected a hazard while reading aloud letters presented on the screen. The letters were placed such that when reading them through the telescope the hazard fell in the ring scotoma area. Four conditions were tested: no bioptic and no reading, reading without bioptic, reading with a bioptic that did not occlude the fellow eye (non-occluding bioptic), and reading with a bioptic that partially occluded the fellow eye. Eight normally sighted subjects performed the same task with the partially occluding bioptic, detecting lateral hazards (blocked by the device scotoma) and vertical hazards (outside the scotoma) to further determine the cause-and-effect relationship between hazard detection and the fellow eye. RESULTS: There were significant differences in performance between conditions: 83% of hazards were detected with no reading task, dropping to 67% in the reading task with no bioptic, to 50% while reading with the non-occluding bioptic, and 34% while reading with the partially occluding bioptic. For normally sighted subjects, detection of vertical hazards (53%) was significantly higher than detection of lateral hazards (38%) with the partially occluding bioptic. CONCLUSIONS: Detection of driving hazards is impaired by the addition of a secondary reading-like task. Detection is further impaired when reading through a monocular telescope. The effect of the partially occluding bioptic supports the role of the non-occluded fellow eye in compensating for the ring scotoma.

Dorr M, Lesmes LA, Lu Z-L, Bex PJ. Rapid and reliable assessment of the contrast sensitivity function on an iPad. Invest Ophthalmol Vis Sci 2013;54(12):7266-73.
PURPOSE: Letter acuity, the predominant clinical assessment of vision, is relatively insensitive to slow vision loss caused by eye disease. While the contrast sensitivity function (CSF) has demonstrated the potential to monitor the slow progress of blinding eye diseases, current tests of the CSF lack the reliability or ease of use to capture changes in vision in a timely manner. To improve the current state of home testing for vision, we have developed and validated a computerized adaptive test on a commercial tablet device (iPad) that provides an efficient and easy-to-use assessment of the CSF. METHODS: We evaluated the reliability, accuracy, and flexibility of tablet-based CSF assessment. Repeated tablet-based assessments of the spatial CSF, obtained from four normally sighted observers, each of which took 3 to 5 minutes, were compared to measures obtained on CRT-based laboratory equipment; additional tablet-based measures were obtained from six subjects under three different luminance conditions. RESULTS: A Bland-Altman analysis demonstrated that tablet-based assessment was reliable for estimating sensitivities at specific spatial frequencies (coefficient of repeatability 0.14-0.40 log units). The CRT- and tablet-based results demonstrated excellent agreement, with absolute mean sensitivity differences <0.05 log units. The tablet-based test also reliably identified changes in contrast sensitivity due to different luminance conditions. CONCLUSIONS: We demonstrate that CSF assessment on a mobile device is indistinguishable from that obtained with specialized laboratory equipment. We also demonstrate better reliability than tests used currently for clinical trials of ophthalmic therapies, drugs, and devices.
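The repeatability statistic quoted above comes from a Bland-Altman analysis; here the coefficient of repeatability is taken as 1.96 × the SD of the test-retest differences, one common definition. A minimal sketch with made-up log-sensitivity values:

```python
import numpy as np

def bland_altman(test, retest):
    """Bland-Altman summary for two repeated measurements.

    Returns the mean difference (bias), the 95% limits of agreement, and the
    coefficient of repeatability (1.96 x SD of the differences).
    """
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    diff = test - retest
    bias = diff.mean()
    cor = 1.96 * diff.std(ddof=1)
    return bias, (bias - cor, bias + cor), cor

# Illustrative log-sensitivity values at one spatial frequency (made up).
print(bland_altman([1.62, 1.55, 1.70, 1.48], [1.58, 1.60, 1.66, 1.52]))
```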
Draschkow D, Wolfe JM, Võ MLH. Seek and you shall remember: scene semantics interact with visual search to build better memories. J Vis 2014;14(8):10.

Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization.

E
Peli E, Satgunam P. Bitemporal hemianopia; its unique binocular complexities and a novel remedy. Ophthalmic Physiol Opt 2014;34(2):233-42.
Eslami M, Kazeminasab S, Sharma V, Li Y, Fazli M, Wang M, Zebardast N, Elze T. PyVisualFields: A Python Package for Visual Field Analysis. Transl Vis Sci Technol 2023;12(2):6.
PURPOSE: Artificial intelligence (AI) methods are changing all areas of research and offer a variety of analysis capabilities in ophthalmology, specifically for visual fields (VFs), to detect or predict vision loss progression. Whereas most AI algorithms are implemented in Python, which offers numerous open-source functions and algorithms, the majority of algorithms for VF analysis are offered in the R language. This paper introduces PyVisualFields, a package developed to address this gap and make VF analysis available in Python. METHODS: For the first version, the R libraries for VF analysis provided by the vfprogression and visualFields packages were analyzed to identify their overlapping and distinct functions. We then defined and translated this functionality into Python with the help of the wrapper library rpy2. Milestones for the subsequent versions have been established; beyond maintenance, the third version will be R-independent. RESULTS: The developed Python package is available as open-source software via the GitHub repository and is ready to be installed from PyPI. Several Jupyter notebooks are provided to demonstrate and describe the capabilities of the PyVisualFields package in the categories of data presentation, normalization and deviation analysis, plotting, scoring, and progression analysis. CONCLUSIONS: We developed a Python package and demonstrated its functionality for VF analysis, facilitating ophthalmic research in VF statistical analysis, illustration, and progression prediction. TRANSLATIONAL RELEVANCE: Using this software package, researchers working on VF analysis can more quickly create algorithms for clinical applications using cutting-edge AI techniques.
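The wrapping approach described for the first version, calling the R visual field libraries from Python via rpy2, looks roughly like the sketch below. It assumes R and the visualFields R package are installed; specific analysis functions would come from that package's (or PyVisualFields') documentation rather than from this example.

```python
# Minimal sketch of the rpy2 wrapping pattern described above: import an R
# package from Python and call into the embedded R session. Assumes R and the
# 'visualFields' R package are installed.
from rpy2.robjects.packages import importr
from rpy2.robjects import r

visual_fields = importr("visualFields")        # load the R package via rpy2
print(r('packageVersion("visualFields")'))     # confirm it is reachable from Python
```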
F
Falahati M, Kurukuti NM, Vargas-Martin F, Peli E, Jung J-H. Oblique multi-periscopic prism for field expansion of homonymous hemianopia. Biomed Opt Express 2023;14(5):2352-2364.
Oblique Fresnel peripheral prisms have been used for field expansion in homonymous hemianopia during mobility tasks such as walking and driving. However, limited field expansion, low image quality, and a small eye-scanning range limit their effectiveness. We developed a new oblique multi-periscopic prism using a cascade of rotated half-penta prisms, which provides 42° of horizontal field expansion along with an 18° vertical shift, high image quality, and a wider eye-scanning range. Feasibility and performance of a prototype built from a 3D-printed module are demonstrated by raytracing, photographic depiction, and Goldmann perimetry in patients with homonymous hemianopia.
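As a back-of-the-envelope check of the figures above, the oblique deflection can be loosely decomposed into its horizontal and vertical components. Treating the two components as a vector sum is only an approximation at angles this large, and this is not the raytracing analysis used in the paper:

```python
import math

# Rough decomposition of an oblique prism deflection into the horizontal field
# expansion and vertical shift quoted in the abstract. Vector addition of
# angular deflections is only approximate at these magnitudes; illustrative only.
horizontal = 42.0   # deg, horizontal field expansion
vertical = 18.0     # deg, vertical shift

total_deflection = math.hypot(horizontal, vertical)
obliquity = math.degrees(math.atan2(vertical, horizontal))
print(f"total deflection ~{total_deflection:.0f} deg at ~{obliquity:.0f} deg from horizontal")
```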
Feldstein IT, Peli E. Pedestrians Accept Shorter Distances to Light Vehicles Than Dark Ones When Crossing the Street. Perception 2020;49(5):558-566.
Does the brightness of an approaching vehicle affect a pedestrian's crossing decision? Thirty participants indicated their street-crossing intentions when facing approaching light or dark vehicles. The experiment was conducted in a real daylight environment and, additionally, in a corresponding virtual one. A real road with actual cars provides high face validity, while a virtual environment ensures the scenario's precise reproducibility and repeatability for each participant. In both settings, participants judged dark vehicles to be a more imminent threat (either closer or moving faster) when compared with light ones. Secondary results showed that participants accepted a significantly shorter time-to-contact when crossing the street in the virtual setting than on the real road.
Feldstein IT, Dyszak GN. Corrigendum to "Road crossing decisions in real and virtual environments: A comparative study on simulator validity" [Accid. Anal. Prevent. 137 (2021) 105356]. Accid Anal Prev 2022;169:106435.
Feldstein IT, Dyszak GN. Road crossing decisions in real and virtual environments: A comparative study on simulator validity. Accid Anal Prev 2020;137:105356.
Virtual reality (VR) is a valuable tool for the assessment of human perception and behavior in a risk-free environment. Investigators should, however, ensure that the virtual environment used is validated in accordance with the experiment's intended research question, since behavior in virtual environments has been shown to differ from behavior in real environments. This article presents the street-crossing decisions of 30 participants who were facing an approaching vehicle and had to decide at what moment it was no longer safe to cross, applying the step-back method. The participants executed the task in a real environment and also within a highly immersive VR setup involving a head-mounted display (HMD). The results indicate significant differences between the two settings regarding the participants' behaviors. The time-to-contact of approaching vehicles was significantly lower for crossing decisions in the virtual environment than for crossing decisions in the real one. Additionally, it was demonstrated that participants based their crossing decisions in the real environment on the temporal distance of the approaching vehicle (i.e., time-to-contact), whereas the crossing decisions in the virtual environment seemed to depend on the vehicle's spatial distance, neglecting the vehicle's velocity. Furthermore, a deeper analysis suggests that crossing decisions were not affected by factors such as the participant's gender or the order in which they faced the real and the virtual environments.
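The distinction above between temporal distance (time-to-contact) and spatial distance is easy to state numerically: time-to-contact is the gap divided by the approach speed, so two vehicles at the same distance but different speeds have very different time-to-contact values. A minimal sketch with made-up numbers:

```python
def time_to_contact(distance_m, speed_kmh):
    """Temporal distance of an approaching vehicle: gap divided by speed."""
    speed_ms = speed_kmh / 3.6
    return distance_m / speed_ms

# Two vehicles at the same spatial distance but different speeds (made-up
# numbers): a spatial-distance criterion treats them alike, whereas the
# time-to-contact criterion flags the faster one as more urgent.
print(time_to_contact(30, 30))  # ~3.6 s
print(time_to_contact(30, 60))  # ~1.8 s
```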
