Mobility Enhancement & Vision Rehabilitation

R
Rinaldi L, Vecchi T, Fantino M, Merabet LB, Cattaneo Z. The effect of hand movements on numerical bisection judgments in early blind and sighted individuals. Cortex 2015;71:76-84.

Recent evidence suggests that in representing numbers blind individuals might be affected differently by proprioceptive cues (e.g., hand positions, head turns) than are sighted individuals. In this study, we asked a group of early blind and sighted individuals to perform a numerical bisection task while executing hand movements in left or right peripersonal space and with either hand. We found that in bisecting ascending numerical intervals, the hemi-space in which the hand was moved (but not the moved hand itself) influenced the bisection bias similarly in both early blind and sighted participants. However, when numerical intervals were presented in descending order, the moved hand (and not the hemi-space in which it was moved) affected the bisection bias in all participants. Overall, our data show that the operation to be performed on the mental number line affects the activated spatial reference frame, regardless of participants' previous visual experience. In particular, both sighted and early blind individuals' representation of numerical magnitude is mainly rooted in world-centered coordinates when numerical information is given in canonical orientation (i.e., from small to large), whereas hand-centered coordinates become more relevant when the scanning of the mental number line proceeds in non-canonical direction.

S
Saeedi OJ, Elze T, D'Acunto L, Swamy R, Hegde V, Gupta S, Venjara A, Tsai J, Myers JS, Wellik SR, De Moraes CG, Pasquale LR, Shen LQ, Boland MV. Agreement and Predictors of Discordance of 6 Visual Field Progression Algorithms. Ophthalmology 2019;126(6):822-828.
PURPOSE: To determine the agreement of 6 established visual field (VF) progression algorithms in a large dataset of VFs from multiple institutions and to determine predictors of discordance among these algorithms. DESIGN: Retrospective longitudinal cohort study. PARTICIPANTS: Visual fields from 5 major eye care institutions in the United States were analyzed, including a subset of eyes with at least 5 Swedish interactive threshold algorithm standard 24-2 VFs that met our reliability criteria. Of a total of 831 240 VFs, a subset of 90 713 VFs from 13 156 eyes of 8499 patients met the inclusion criteria. METHODS: Six commonly used VF progression algorithms (mean deviation [MD] slope, VF index slope, Advanced Glaucoma Intervention Study, Collaborative Initial Glaucoma Treatment Study, pointwise linear regression, and permutation of pointwise linear regression) were applied to this cohort, and each eye was determined to be stable or progressing using each measure. Agreement between individual algorithms was tested using Cohen's κ coefficient. Bivariate and multivariate analyses were used to determine predictors of discordance (3 algorithms progressing and 3 algorithms stable). MAIN OUTCOME MEASURES: Agreement and discordance between algorithms. RESULTS: Individual algorithms showed poor to moderate agreement with each other when compared directly (κ range, 0.12-0.52). Based on at least 4 algorithms, 11.7% of eyes progressed. Major predictors of discordance or lack of agreement among algorithms were more depressed initial MD (P < 0.01) and older age at first available VF (P < 0.01). A greater number of VFs (P < 0.01), more years of follow-up (P < 0.01), and eye care institution (P = 0.03) also were associated with discordance. CONCLUSIONS: This extremely large comparative series demonstrated that existing algorithms have limited agreement and that agreement varies with clinical parameters, including institution. These issues underscore the challenges to the clinical use and application of progression algorithms and of applying big-data results to individual practices.
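As a concrete illustration of the agreement statistic used in this study, the sketch below computes Cohen's κ for a pair of binary progression classifications and counts eyes that are discordant in the paper's 3-versus-3 sense. It is a minimal Python example on made-up data, not the study's analysis code; the array names and the random classifications are assumptions for illustration only.

```python
# Minimal sketch (not the study's code): Cohen's kappa between two binary
# progression classifications (1 = progressing, 0 = stable), plus the
# "discordance" definition used in the paper (3 of 6 algorithms progressing).
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa for two binary raters given as 0/1 arrays."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                          # observed agreement
    p_yes = np.mean(a) * np.mean(b)               # chance agreement on "progressing"
    p_no = (1 - np.mean(a)) * (1 - np.mean(b))    # chance agreement on "stable"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# calls[i, j] = decision of algorithm j (e.g., MD slope, VFI slope, AGIS, ...) for eye i
rng = np.random.default_rng(0)
calls = rng.integers(0, 2, size=(100, 6))         # hypothetical 100 eyes x 6 algorithms

kappa_md_vs_vfi = cohen_kappa(calls[:, 0], calls[:, 1])
discordant = np.sum(calls.sum(axis=1) == 3)       # exactly 3 progressing, 3 stable
print(f"kappa = {kappa_md_vs_vfi:.2f}, discordant eyes = {discordant}")
```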
Sánchez J, de Borba Campos M, Espinoza M, Merabet LB. Audio Haptic Videogaming for Developing Wayfinding Skills in Learners Who are Blind. IUI 2014;2014:199-208.
Interactive digital technologies are currently being developed as a novel tool for education and skill development. Audiopolis is an audio and haptic based videogame designed for developing orientation and mobility (O&M) skills in people who are blind. We have evaluated the cognitive impact of videogame play on O&M skills by assessing performance on a series of behavioral tasks carried out in both indoor and outdoor virtual spaces. Our results demonstrate that the use of Audiopolis had a positive impact on the development and use of O&M skills in school-aged learners who are blind. The impact of audio and haptic information on learning is also discussed.
Sánchez J, Espinoza M, de Borba Campos M, Merabet LB. Enhancing Orientation and Mobility Skills in Learners who are Blind through Video Gaming. Creat Cognit 2013;2013:353-356.
In this work we present the results of the cognitive impact evaluation regarding the use of Audiopolis, an audio and/or haptic-based videogame. The software has been designed, developed and evaluated for the purpose of developing orientation and mobility (O&M) skills in blind users. The videogame was evaluated through cognitive tasks performed by a sample of 12 learners. The results demonstrated that the use of Audiopolis had a positive impact on the development and use of O&M skills in school-aged blind learners.
Savage SW, Spano LP, Bowers AR. The effects of age and cognitive load on peripheral-detection performance. J Vis 2019;19(1):15.
Age-related declines in both peripheral vision and cognitive resources could contribute to the increased crash risk of older drivers. However, it is unclear whether increases in age and cognitive load result in equal detriments to detection rates across all peripheral target eccentricities (general interference effect) or whether these detriments become greater with increasing eccentricity (tunnel effect). In the current study we investigated the effects of age and cognitive load on the detection of peripheral motorcycle targets (at 5°-30° eccentricity) in static images of intersections. We used a dual-task paradigm in which cognitive load was manipulated without changing the complexity of the central (foveal) visual stimulus. Each image was displayed briefly (250 ms) to prevent eye movements. When no cognitive load was present, age resulted in a tunnel effect; however, when cognitive load was high, age resulted in a general interference effect. These findings suggest that tunnel and general interference effects can co-occur and that the predominant effect varies with the level of demand placed on participants' resources. High cognitive load had a general interference effect in both age groups, but the effect attenuated at large target eccentricities (opposite of a tunnel effect). Low cognitive load had a general interference effect in the older but not the younger group, impairing detection of motorcycle targets even at 5° eccentricity, which could present an imminent collision risk in real driving.
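To make the distinction between the two effects discussed above concrete, the short sketch below checks whether the load-related drop in detection rate grows with eccentricity (a tunnel-like pattern) or stays roughly constant (a general interference pattern). The detection rates and the slope cutoff are made up for illustration and are not the study's data or analysis.

```python
# Illustrative only (made-up detection data, not the study's): a "tunnel" effect
# means the cost of cognitive load grows with eccentricity, i.e., a nonzero slope
# of the load-related detection-rate drop across eccentricity.
import numpy as np

ecc = np.array([5, 10, 20, 30])                    # target eccentricity (deg)
det_no_load = np.array([0.95, 0.90, 0.80, 0.70])   # hypothetical detection rates
det_load = np.array([0.85, 0.78, 0.62, 0.45])

drop = det_no_load - det_load                      # load cost at each eccentricity
slope = np.polyfit(ecc, drop, 1)[0]
print("general interference only" if abs(slope) < 0.002 else
      f"tunnel-like: load cost grows by {slope:.3f} per degree")
```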
Savage SW, Zhang L, Swan G, Bowers AR. The effects of age on the contributions of head and eye movements to scanning behavior at intersections. Transp Res Part F Traffic Psychol Behav 2020;73:128-142.
The current study was aimed at evaluating the effects of age on the contributions of head and eye movements to scanning behavior at intersections. When approaching intersections, a wide area has to be scanned requiring large lateral head rotations as well as eye movements. Prior research suggests older drivers scan less extensively. However, due to the wide-ranging differences in methodologies and measures used in prior research, the extent to which age-related changes in eye or head movements contribute to these deficits is unclear. Eleven older (mean 67 years) and 18 younger (mean 27 years) current drivers drove in a simulator while their head and eye movements were tracked. Scans, analyzed for 15 four-way intersections in city drives, were split into two categories: eye-only scans (consisting only of eye movements) and head+eye scans (containing both head and eye movements). Older drivers made smaller head+eye scans than younger drivers (46.6° vs. 53°), as well as smaller eye-only scans (9.2° vs. 10.1°), resulting in overall smaller scans. For head+eye scans, older drivers had both a smaller head and a smaller eye movement component. Older drivers made more eye-only scans than younger drivers (7 vs. 6) but fewer head+eye scans (2.1 vs. 2.7). This resulted in no age effects when considering all scans. Our results clarify the contributions of eye and head movements to age-related deficits in scanning at intersections, highlight the importance of analyzing both eye and head movements, and suggest the need for older driver training programs that emphasize the importance of making large scans before entering intersections.
Schill HM, Cain MS, Josephs EL, Wolfe JM. Axis of rotation as a basic feature in visual search. Atten Percept Psychophys 2019.
Searching for a "Q" among "O"s is easier than the opposite search (Treisman & Gormican in Psychological Review, 95, 15-48, 1988). In many cases, such "search asymmetries" occur because it is easier to search when a target is defined by the presence of a feature (i.e., the line terminator defining the tail of the "Q"), rather than by its absence. Treisman proposed that features that produce a search asymmetry are "basic" features in visual search (Treisman & Gormican in Psychological Review, 95, 15-48, 1988; Treisman & Souther in Journal of Experimental Psychology: General, 114, 285-310, 1985). Other stimulus attributes, such as color, orientation, and motion, have been found to produce search asymmetries (Dick, Ullman, & Sagi in Science, 237, 400-402, 1987; Treisman & Gormican in Psychological Review, 95, 15-48, 1988; Treisman & Souther in Journal of Experimental Psychology: General, 114, 285-310, 1985). Other stimulus properties, such as facial expression, produce asymmetries because one type of item (e.g., neutral faces) demands less attention in search than another (e.g., angry faces). In the present series of experiments, search for a rolling target among spinning distractors proved to be more efficient than searching for a spinning target among rolling distractors. The effect does not appear to be due to differences in physical plausibility, direction of motion, or texture movement. Our results suggest that the spinning stimuli demand less attention, making search through spinning distractors for a rolling target easier than the opposite search.
Selivanova A, Fenwick E, Man R, Seiple W, Jackson ML. Outcomes After Comprehensive Vision Rehabilitation Using Vision-related Quality of Life Questionnaires: Impact of Vision Impairment and National Eye Institute Visual Functioning Questionnaire. Optom Vis Sci 2019;96(2):87-94.
SIGNIFICANCE: This research is significant because, although vision-related quality of life (VRQoL) is improved after vision rehabilitation (VR), patients with certain characteristics respond less positively on VRQoL measures, and this should inform future care. PURPOSE: The purposes of this study were to evaluate how two VRQoL questionnaires compare in measuring change in patient-reported outcomes after VR and to determine if patient characteristics or occupational therapy (OT) predict higher scores after rehabilitation. METHODS: In a prospective clinical cohort study, 109 patients with low vision completed the Impact of Vision Impairment (IVI) and the National Eye Institute Visual Functioning Questionnaire (NEI VFQ-25) before and after VR. Comprehensive VR included consultation with an ophthalmologist and OT if required. The relationships of six baseline characteristics (age, sex, visual acuity, contrast sensitivity, field loss, diagnosis) and OT were assessed with VRQoL scores using multivariable logistic regression. RESULTS: The mean (SD) age was 68.5 (19.2) years, and 61 (56%) were female. After rehabilitation, increases in scores were observed in all IVI subscales (reading [P < .001], mobility [P = .002], well-being [P = .0003]) and all NEI VFQ-25 subscales (functional [P = .01], socioemotional [P = .003]). Those who were referred to OT but did not attend and those who had hemianopia/field loss were less likely to have higher VRQoL in IVI mobility and well-being. Those attending OT for more than 3 hours were less likely to have better scores in emotional NEI VFQ. Men were less likely to have increased scores in functional and emotional NEI VFQ, whereas those with diagnoses of nonmacular diseases had higher odds of having increased scores on the emotional NEI VFQ (all, P < .05). CONCLUSION: Both the IVI and the NEI VFQ-25 detected change in patients' VRQoL after rehabilitation. Most of the patient characteristics we considered predicted a lower likelihood of increased scores in VRQoL.
Sheldon S, Quint J, Hecht H, Bowers AR. The effect of central vision loss on perception of mutual gaze. Optom Vis Sci 2014;91(8):1000-11.
PURPOSE: To evaluate the effects of central vision loss (CVL) on mutual gaze perception (knowing whether somebody else is looking at you), an important nonverbal visual cue in social interactions. METHODS: Twenty-three persons with CVL (visual acuity 20/50 to 20/200), 16 with a bilateral central scotoma and 7 without, and 23 age-matched control subjects completed a gaze perception task and a brief questionnaire. They adjusted the eyes of a life-size virtual head on a monitor at a 1-m distance until they either appeared to be looking straight at them or were at the extreme left/right or up/down positions at which the eyes still appeared to be looking toward them (defining the range of mutual gaze in the horizontal and vertical planes). RESULTS: The nonscotoma group did not differ from the control subjects in any gaze task measure. However, the gaze direction judgments of the scotoma group had significantly greater variability than those of the nonscotoma and control groups (p < 0.001). In addition, their mutual gaze range tended to be wider (p = 0.15), suggesting a more liberal judgment criterion. Contrast sensitivity was the strongest predictor of variability in gaze direction judgments followed by self-reported difficulties. CONCLUSIONS: Our results suggest that mutual gaze perception is relatively robust to CVL. However, a follow-up study that simulates less-than-optimal viewing conditions of everyday social interactions is needed. The gaze perception task holds promise as a research tool for investigating the effects of vision impairment on mutual gaze judgments. Self-reported difficulty and contrast sensitivity were both independent predictors of gaze perception performance, suggesting that the task captured higher-order as well as low-level visual abilities.
Shen J, Peli E, Bowers AR. Peripheral Prism Glasses: Effects of Moving and Stationary Backgrounds. Optom Vis Sci 2015;92(4):412-420.

PURPOSE: Unilateral peripheral prisms for homonymous hemianopia (HH) expand the visual field through peripheral binocular visual confusion, a stimulus for binocular rivalry that could lead to reduced predominance and partial suppression of the prism image, thereby limiting device functionality. Using natural-scene images and motion videos, we evaluated whether detection was reduced in binocular compared with monocular viewing. METHODS: Detection rates of nine participants with HH or quadranopia and normal binocularity wearing peripheral prisms were determined for static checkerboard perimetry targets briefly presented in the prism expansion area and the seeing hemifield. Perimetry was conducted under monocular and binocular viewing with targets presented over videos of real-world driving scenes and still frame images derived from those videos. RESULTS: With unilateral prisms, detection rates in the prism expansion area were significantly lower in binocular than in monocular (prism eye) viewing on the motion background (medians, 13 and 58%, respectively, p = 0.008) but not the still frame background (medians, 63 and 68%, p = 0.123). When the stimulus for binocular rivalry was reduced by fitting prisms bilaterally in one HH and one normally sighted subject with simulated HH, prism-area detection rates on the motion background were not significantly different (p > 0.6) in binocular and monocular viewing. CONCLUSIONS: Conflicting binocular motion appears to be a stimulus for reduced predominance of the prism image in binocular viewing when using unilateral peripheral prisms. However, the effect was only found for relatively small targets. Further testing is needed to determine the extent to which this phenomenon might affect the functionality of unilateral peripheral prisms in more real-world situations.

Shi C, Yuan X, Chang K, Cho K-S, Xie XS, Chen DF, Luo G. Optimization of Optomotor Response-based Visual Function Assessment in Mice. Sci Rep 2018;8(1):9708.
Optomotor response/reflex (OMR) assays are emerging as a powerful and versatile tool for phenotypic study and new drug discovery for eye and brain disorders. Yet efficient OMR assessment for visual performance in mice remains a challenge. Existing OMR testing devices for mice require a lengthy procedure and may be subject to bias due to use of artificial criteria. We developed an optimized staircase protocol that utilizes mouse head pausing behavior as a novel indicator for the absence of OMR, to allow rapid and unambiguous vision assessment. It provided a highly sensitive and reliable method that can be easily implemented into automated or manual OMR systems to allow quick and unbiased assessment for visual acuity and contrast sensitivity in mice. The sensitivity and quantitative capacity of the protocol were validated using wild type mice and an inherited mouse model of retinal degeneration - mice carrying rhodopsin deficiency and exhibiting progressive loss of photoreceptors. Our OMR system with this protocol was capable of detecting progressive visual function decline that was closely correlated with the loss of photoreceptors in rhodopsin deficient mice. It provides significant advances over the existing methods in the currently available OMR devices in terms of sensitivity, accuracy and efficiency.
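For readers unfamiliar with staircase testing, the sketch below shows one generic up/down staircase over grating spatial frequency in which "head pausing" (no optomotor response) ends the upward run, in the spirit of the protocol described above. The step sizes, convergence rule, and function names are invented for illustration and are not the authors' implementation.

```python
# Illustrative sketch only (assumptions, not the authors' protocol): a simple
# up/down staircase over grating spatial frequency, where a trial "fails" when
# the mouse shows head pausing (no optomotor response).
def staircase_acuity(observe_omr, start_cpd=0.05, step=0.5, reversals_needed=6):
    """observe_omr(freq) -> True if tracking is seen, False if head pausing.
    Spatial frequency is stepped up after tracking and down after pausing;
    the mean of the reversal points estimates the acuity threshold (cyc/deg)."""
    freq = start_cpd
    last_direction = None
    reversals = []
    while len(reversals) < reversals_needed:
        if observe_omr(freq):
            direction = "up"
            new_freq = freq * (1 + step)      # make the grating finer
        else:
            direction = "down"
            new_freq = freq / (1 + step)      # make the grating coarser
        if last_direction is not None and direction != last_direction:
            reversals.append(freq)            # direction change = reversal point
        last_direction = direction
        freq = new_freq
        step = max(step * 0.7, 0.1)           # shrink step size as we converge
    return sum(reversals) / len(reversals)

# Example with a fake observer whose true threshold is 0.4 cyc/deg:
estimate = staircase_acuity(lambda f: f < 0.4)
print(f"estimated acuity ≈ {estimate:.2f} cyc/deg")
```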
Shi C, Pundlik S, Luo G. Without low spatial frequencies, high resolution vision would be detrimental to motion perception. J Vis 2020;20(8):29.
A normally sighted person can see a grating of 30 cycles per degree or higher, but spatial frequencies needed for motion perception are much lower than that. It is unknown for natural images with a wide spectrum how all the visible spatial frequencies contribute to motion speed perception. In this work, we studied the effect of spatial frequency content on motion speed estimation for sequences of natural and stochastic pixel images by simulating different visual conditions, including normal vision, low vision (low-pass filtering), and complementary vision (high-pass filtering at the same cutoff frequencies of the corresponding low-vision conditions) conditions. Speed was computed using a biological motion energy-based computational model. In natural sequences, there was no difference in speed estimation error between normal vision and low vision conditions, but it was significantly higher for complementary vision conditions (containing only high-frequency components) at higher speeds. In stochastic sequences that had a flat frequency distribution, the error in normal vision condition was significantly larger compared with low vision conditions at high speeds. On the contrary, such a detrimental effect on speed estimation accuracy was not found for low spatial frequencies. The simulation results were consistent with the motion direction detection task performed by human observers viewing stochastic sequences. Together, these results (i) reiterate the importance of low frequencies in motion perception, and (ii) indicate that high frequencies may be detrimental for speed estimation when low frequency content is weak or not present.
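The simulated viewing conditions described above can be illustrated with a short sketch: each frame is split into a low-pass ("low vision") and a high-pass ("complementary vision") version, and a crude global shift estimate is computed for each. The Gaussian cutoff, the phase-correlation estimator (a simple stand-in for the biological motion-energy model actually used in the paper), and all parameter values are assumptions for illustration only.

```python
# Minimal sketch (assumed parameters, not the study's pipeline): split frames by
# spatial frequency, then compare a crude shift/speed estimate across conditions.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(frame, sigma_px=4.0):
    """Return (low_pass, high_pass) versions of a grayscale frame.
    A Gaussian blur stands in for an ideal low-pass filter; the residual
    carries only the high spatial frequencies."""
    low = gaussian_filter(frame.astype(float), sigma=sigma_px)
    high = frame - low
    return low, high

def estimate_shift(a, b):
    """Crude global shift estimate via phase correlation (a stand-in for the
    biologically inspired motion-energy model used in the paper)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dx if dx <= a.shape[1] // 2 else dx - a.shape[1]

# Hypothetical 2-frame sequence: a random texture shifted by 2 px per frame.
rng = np.random.default_rng(1)
frame0 = rng.random((128, 128))
frame1 = np.roll(frame0, shift=2, axis=1)

low0, high0 = split_spatial_frequencies(frame0)
low1, high1 = split_spatial_frequencies(frame1)
print("low-pass shift:", estimate_shift(low1, low0),
      "high-pass shift:", estimate_shift(high1, high0))
```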
Shi C, Luo G. A Compact VLSI System for Bio-Inspired Visual Motion Estimation. IEEE Trans Circuits Syst Video Technol 2018;28(4):1021-1036.
This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
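To make the underlying "motion energy" idea concrete, here is a minimal, purely illustrative 1-D opponent-energy computation. It is a sketch of the classic Adelson-Bergen-style energy model the paper builds on, not the proposed VLSI design; the ternary edge extraction, confidence map, pipelining, and fixed-point details are not reproduced, and the filter parameters below are invented.

```python
# Sketch of the core "motion energy" idea (opponent energy in 1-D space + time),
# with made-up filter frequencies; not the paper's hardware algorithm.
import numpy as np

def motion_energy_1d(stimulus, f_spatial=0.1, f_temporal=0.2):
    """stimulus: 2-D array [time, space]. Returns opponent motion energy
    (positive = rightward, negative = leftward) summed over the stimulus."""
    t = np.arange(stimulus.shape[0])[:, None]
    x = np.arange(stimulus.shape[1])[None, :]
    energies = []
    for direction in (+1, -1):
        # quadrature pair of space-time oriented filters
        phase = 2 * np.pi * (f_spatial * x - direction * f_temporal * t)
        even = np.sum(stimulus * np.cos(phase)) ** 2
        odd = np.sum(stimulus * np.sin(phase)) ** 2
        energies.append(even + odd)
    return energies[0] - energies[1]

# A rightward-drifting grating should give positive opponent energy.
t = np.arange(64)[:, None]
x = np.arange(64)[None, :]
rightward = np.cos(2 * np.pi * (0.1 * x - 0.2 * t))
print(motion_energy_1d(rightward) > 0)   # True
```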
Shi C, Luo G. A Streaming Motion Magnification Core for Smart Image Sensors. IEEE Trans Circuits Syst II Express Briefs 2018;65(9):1229-1233.
This paper proposes a modified Eulerian Video Magnification (EVM) algorithm and a hardware implementation of a motion magnification core for smart image sensors. Compared to the original EVM algorithm, we perform the pixel-wise temporal bandpass filtering only once rather than multiple times on all scale layers, to reduce the memory and multiplier requirement for hardware implementation. A pixel stream processing architecture with pipelined blocks is proposed for the magnification core, enabling it to readily fit common image sensing components with streaming pixel output, while achieving higher performance with lower system cost. We implemented an FPGA-based prototype that is able to process up to 90M pixels per second and magnify subtle motion. The motion magnification results are comparable to the original algorithm running on PC.
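A rough software analogue of the central idea can be sketched as follows: band-pass each pixel over time with a pair of streaming first-order low-pass filters (so only two state frames are kept in memory), amplify the band-passed signal, and add it back to the input. The coefficients, the single-level filtering, and the class name are illustrative assumptions, not the authors' RTL design.

```python
# Sketch of pixel-wise temporal band-pass magnification with streaming state;
# coefficients and structure are assumed for illustration only.
import numpy as np

class StreamingMagnifier:
    def __init__(self, shape, alpha=10.0, r1=0.4, r2=0.05):
        self.fast = np.zeros(shape)   # first-order low-pass with higher cutoff
        self.slow = np.zeros(shape)   # first-order low-pass with lower cutoff
        self.alpha, self.r1, self.r2 = alpha, r1, r2

    def process(self, frame):
        frame = frame.astype(float)
        self.fast += self.r1 * (frame - self.fast)
        self.slow += self.r2 * (frame - self.slow)
        band = self.fast - self.slow              # temporal band-pass signal
        return np.clip(frame + self.alpha * band, 0, 255)

# Usage on a hypothetical stream of grayscale frames:
mag = StreamingMagnifier(shape=(120, 160))
for frame in np.random.randint(0, 256, size=(30, 120, 160)):
    out = mag.process(frame)
```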
Singh AK, Phillips F, Merabet LB, Sinha P. Why Does the Cortex Reorganize after Sensory Loss? Trends Cogn Sci 2018;22(7):569-582.
A growing body of evidence demonstrates that the brain can reorganize dramatically following sensory loss. Although the existence of such neuroplastic crossmodal changes is not in doubt, the functional significance of these changes remains unclear. The dominant belief is that reorganization is compensatory. However, results thus far do not unequivocally indicate that sensory deprivation results in markedly enhanced abilities in other senses. Here, we consider alternative reasons besides sensory compensation that might drive the brain to reorganize after sensory loss. One such possibility is that the cortex reorganizes not to confer functional benefits, but to avoid undesirable physiological consequences of sensory deafferentation. Empirical assessment of the validity of this and other possibilities defines a rich program for future research.
Swan G, Goldstein RB, Savage SW, Zhang L, Ahmadi A, Bowers AR. Automatic processing of gaze movements to quantify gaze scanning behaviors in a driving simulator. Behav Res Methods 2021;53(2):487-506.
Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, large gaze scans to the left and right, comprising head and multiple eye movements, are made. We detail an algorithm called the gaze scan algorithm that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades, then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming marking of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans and can be used to better understand how individuals scan their environment.
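The two-stage structure described above (detect lateral saccades, then merge them into larger gaze scans marked by start and end time and eccentricity) can be sketched as follows. The velocity threshold, gap criterion, and function names are invented for illustration and are not the published algorithm's values or code.

```python
# Rough sketch of saccade detection plus merging into gaze scans;
# thresholds are assumed, not the published algorithm's parameters.
import numpy as np

def detect_gaze_scans(t, gaze_x, vel_thresh=30.0, max_gap=0.4):
    """t: time (s); gaze_x: horizontal gaze eccentricity (deg).
    Returns a list of dicts with scan start/end time, eccentricity, magnitude."""
    vel = np.gradient(gaze_x, t)
    saccade = np.abs(vel) > vel_thresh
    # 1) segment contiguous above-threshold samples into lateral saccades
    events, i = [], 0
    while i < len(t):
        if saccade[i]:
            j = i
            while j + 1 < len(t) and saccade[j + 1]:
                j += 1
            events.append((t[i], t[j], gaze_x[i], gaze_x[j]))
            i = j + 1
        else:
            i += 1
    # 2) merge same-direction saccades separated by less than max_gap seconds
    scans = []
    for ev in events:
        if (scans and ev[0] - scans[-1]["end_t"] < max_gap
                and np.sign(ev[3] - ev[2])
                == np.sign(scans[-1]["end_x"] - scans[-1]["start_x"])):
            scans[-1].update(end_t=ev[1], end_x=ev[3])
        else:
            scans.append(dict(start_t=ev[0], end_t=ev[1],
                              start_x=ev[2], end_x=ev[3]))
    for s in scans:
        s["magnitude"] = abs(s["end_x"] - s["start_x"])
        s["duration"] = s["end_t"] - s["start_t"]
    return scans

# Tiny synthetic check: one 40-degree rightward gaze shift at t = 2 s.
t = np.arange(0, 4, 1 / 60.0)
gaze_x = np.where(t < 2, 0.0, 40.0)
print(detect_gaze_scans(t, gaze_x)[0]["magnitude"])   # ~40
```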
Swan G, Savage SW, Zhang L, Bowers AR. Driving With Hemianopia VII: Predicting Hazard Detection With Gaze and Head Scan Magnitude. Transl Vis Sci Technol 2021;10(1):20.
Purpose: One rehabilitation strategy taught to individuals with hemianopic field loss (HFL) is to make a large blind side scan to quickly identify hazards. However, it is not clear what the minimum threshold is for how large the scan should be. Using driving simulation, we evaluated thresholds (criteria) for gaze and head scan magnitudes that best predict detection safety. Methods: Seventeen participants with complete HFL and 15 with normal vision (NV) drove through 4 routes in a virtual city while their eyes and head were tracked. Participants pressed the horn as soon as they detected a motorcycle (10 per drive) that appeared at 54 degrees eccentricity on cross-streets and approached toward the driver. Results: Those with HFL detected fewer motorcycles than those with NV and had worse detection on the blind side than the seeing side. On the blind side, both safe detections and early detections (detections before the hazard entered the intersection) could be predicted with both gaze (safe 18.5 degrees and early 33.8 degrees) and head (safe 19.3 degrees and early 27 degrees) scans. However, on the seeing side, only early detections could be classified with gaze (25.3 degrees) and head (9.0 degrees). Conclusions: Both head and gaze scan magnitude were significant predictors of detection on the blind side, but less predictive on the seeing side, which was likely driven by the ability to use peripheral vision. Interestingly, head scans were as predictive as gaze scans. Translational Relevance: The minimum scan magnitude could be a useful criterion for scanning training or for developing assistive technologies to improve scanning.
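As an illustration of how a scan-magnitude criterion might be chosen to separate safe from unsafe detections, the sketch below searches for the cutoff maximizing Youden's J on made-up data. This is an assumed analysis strategy shown only for illustration, not the paper's classification method or its published threshold values.

```python
# Illustrative sketch (hypothetical data and threshold search): pick the scan
# magnitude cutoff that best separates safe from unsafe detections.
import numpy as np

def best_threshold(magnitudes, safe):
    """magnitudes: scan magnitude per event (deg); safe: 1 = safe detection.
    Returns the cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    magnitudes, safe = np.asarray(magnitudes, float), np.asarray(safe, bool)
    best_c, best_j = None, -np.inf
    for c in np.unique(magnitudes):
        pred = magnitudes >= c
        sens = np.mean(pred[safe]) if safe.any() else 0.0
        spec = np.mean(~pred[~safe]) if (~safe).any() else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c

# Fake example: safe detections tend to follow larger blind-side scans.
rng = np.random.default_rng(2)
mags = np.concatenate([rng.normal(25, 6, 50), rng.normal(12, 6, 50)])
labels = np.concatenate([np.ones(50), np.zeros(50)])
print(f"criterion ≈ {best_threshold(mags, labels):.1f} deg")
```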
T
Tang H, Buia C, Madhavan R, Crone NE, Madsen JR, Anderson WS, Kreiman G. Spatiotemporal dynamics underlying object completion in human ventral visual cortex. Neuron 2014;83(3):736-48.
Natural vision often involves recognizing objects from partial information. Recognition of objects from parts presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. Here we recorded intracranial field potentials of 113 visually selective electrodes from epilepsy patients in response to whole and partial objects. Responses along the ventral visual stream, particularly the inferior occipital and fusiform gyri, remained selective despite showing only 9%-25% of the object areas. However, these visually selective signals emerged ∼100 ms later for partial versus whole objects. These processing delays were particularly pronounced in higher visual areas within the ventral stream. This latency difference persisted when controlling for changes in contrast, signal amplitude, and the strength of selectivity. These results argue against a purely feedforward explanation of recognition from partial information, and provide spatiotemporal constraints on theories of object recognition that involve recurrent processing.
Thornton IM, Bülthoff HH, Horowitz TS, Rynning A, Lee S-W. Interactive multiple object tracking (iMOT). PLoS One 2014;9(2):e86974.
We introduce a new task for exploring the relationship between action and attention. In this interactive multiple object tracking (iMOT) task, implemented as an iPad app, participants were presented with a display of multiple, visually identical disks which moved independently. The task was to prevent any collisions during a fixed duration. Participants could perturb object trajectories via the touchscreen. In Experiment 1, we used a staircase procedure to measure the ability to control moving objects. Object speed was set to 1°/s. On average participants could control 8.4 items without collision. Individual control strategies were quite variable, but did not predict overall performance. In Experiment 2, we compared iMOT with standard MOT performance using identical displays. Object speed was set to 2°/s. Participants could reliably control more objects (M = 6.6) than they could track (M = 4.0), but performance in the two tasks was positively correlated. In Experiment 3, we used a dual-task design. Compared to single-task baseline, iMOT performance decreased and MOT performance increased when the two tasks had to be completed together. Overall, these findings suggest: 1) There is a clear limit to the number of items that can be simultaneously controlled, for a given speed and display density; 2) participants can control more items than they can track; 3) task-relevant action appears not to disrupt MOT performance in the current experimental context.
U
Uchino Y, Uchino M, Yokoi N, Dogru M, Kawashima M, Okada N, Inaba T, Tamaki S, Komuro A, Sonomura Y, Kato H, Argüeso P, Kinoshita S, Tsubota K. Alteration of tear mucin 5AC in office workers using visual display terminals: The Osaka Study. JAMA Ophthalmol 2014;132(8):985-92.
IMPORTANCE: There are limited reports on the relationship between mucin 5AC (MUC5AC) concentrations in tears, working hours, and the frequency of ocular symptoms in visual display terminal (VDT) users. This investigation evaluated these relationships among patients with dry eye disease (DED) and individuals serving as controls. OBJECTIVE: To determine the relationship between MUC5AC concentration in the tears of VDT users based on the diagnosis of DED and frequency of ocular symptoms. DESIGN, SETTING, AND PARTICIPANTS: An institutional, cross-sectional study was conducted. Participants included 96 young and middle-aged Japanese office workers. Both eyes of 96 volunteers (60 men and 36 women) were studied. Participants working in a company that used VDTs completed questionnaires about their working hours and the frequency of ocular symptoms. Dry eye disease was diagnosed as definite or probable, or it was not present. Tear fluid was collected from the inferior fornix after instillation of 50 μL of sterilized saline. The MUC5AC concentration was normalized to tear protein content and expressed as MUC5AC (nanograms) per tear protein (milligrams). The differences in MUC5AC concentration between DED groups, between VDT working hours (short, intermediate, and long), and between symptomatic and asymptomatic groups were evaluated with 95% CIs based on nonparametric Hodges-Lehmann determination. MAIN OUTCOMES AND MEASURES: Ocular surface evaluation, prevalence of DED, and MUC5AC concentration. RESULTS: The prevalence of definite and probable DED was 9% (n = 9) and 57% (n = 55), respectively. The mean MUC5AC concentration was lower in the tears of VDT users with definite DED than in those with no DED (P = .02; Hodges-Lehmann estimator, -2.17; 95% CI, -4.67 to -0.30). The mean MUC5AC concentration in tears was lower in the group that worked longer hours than in the group that worked shorter hours (P = .049; estimated difference, -1.65; 95% CI, -3.12 to 0.00). Furthermore, MUC5AC concentration was lower in participants with symptomatic eye strain than in asymptomatic individuals (P = .001; estimated difference, -1.71; 95% CI, -2.86 to -0.63). CONCLUSIONS AND RELEVANCE: The data obtained in the present study suggest that office workers with prolonged VDT use, as well as those with an increased frequency of eye strain, have a low MUC5AC concentration in their tears. Furthermore, MUC5AC concentration in the tears of patients with DED may be lower than that in individuals without DED.
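The between-group comparisons above rely on the nonparametric Hodges-Lehmann estimator. A minimal sketch of that estimator (the median of all pairwise between-group differences) is shown below; the MUC5AC values in the example are made up and do not come from the study.

```python
# Sketch of the two-sample Hodges-Lehmann estimator (median of all pairwise
# between-group differences); example values are hypothetical.
import numpy as np

def hodges_lehmann(group_a, group_b):
    """Median of all pairwise differences a - b (a robust location shift)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    return float(np.median(a[:, None] - b[None, :]))

# Hypothetical MUC5AC (ng per mg tear protein) in definite-DED vs no-DED eyes:
ded = [1.1, 0.8, 2.0, 1.5, 0.9]
no_ded = [3.2, 4.1, 2.8, 5.0, 3.6]
print(f"estimated shift = {hodges_lehmann(ded, no_ded):.2f}")
```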
