PURPOSE: To compare the variability and the ability to detect visual field progression of the 24-2, the central 12 locations of the 24-2 (C24-2), and the 10-2 visual field (VF) tests in eyes with abnormal VFs. DESIGN: Retrospective, multisite cohort study. PARTICIPANTS: A total of 52,806 24-2 and 11,966 10-2 VF tests from 7,307 eyes in the Glaucoma Research Network database were analyzed. Only eyes with ≥ 5 visits and ≥ 2 years of follow-up were included. METHODS: Linear regression models were used to calculate rates of mean deviation (MD) change (slopes), and their residuals were used to assess variability across the entire MD range. Computer simulations (n=10,000) based on the real MD residuals of our sample were performed to estimate the power to detect significant progression (P < 5%) at various rates of MD change. MAIN OUTCOME MEASURES: Time required to detect progression. RESULTS: For all 3 patterns, MD variability was highest within the -5 to -20 dB range and was consistently lower with the 10-2 than with the 24-2 or C24-2. Overall, the time to detect confirmed significant progression at 80% power was lowest with the 10-2 VF: a decrease of 14.6% to 18.5% compared with the 24-2 and of 22.9% to 26.5% compared with the C24-2. CONCLUSION: Time to detect central VF progression was reduced with 10-2 MD compared with 24-2 and C24-2 MD in glaucoma eyes in this large dataset, in part because 10-2 tests had lower variability. These findings add to the evidence of the potential value of 10-2 testing in the clinical management of glaucoma and in clinical trial design.
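The simulation procedure described in METHODS can be sketched in a few lines. This is an illustrative reconstruction, not the study's code: it assumes ordinary least-squares MD slopes, Gaussian noise in place of residuals resampled from real data, and a one-sided normal test for a negative slope.

```python
import math
import random

def fit_slope(years, md):
    """Ordinary least-squares slope of MD vs. time, with its standard error."""
    n = len(years)
    xbar, ybar = sum(years) / n, sum(md) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, md)) / sxx
    intercept = ybar - slope * xbar
    resid = [y - (intercept + slope * x) for x, y in zip(years, md)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope, se

def power_to_detect(true_slope, resid_sd, years, n_sim=2000, z_crit=1.645, seed=1):
    """Fraction of simulated MD series whose negative slope reaches one-sided P < 5%."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        md = [true_slope * t + rng.gauss(0.0, resid_sd) for t in years]
        slope, se = fit_slope(years, md)
        if slope < 0 and -slope / se > z_crit:
            hits += 1
    return hits / n_sim

# Example: twice-yearly testing for 5 years; power falls as residual SD rises.
visits = [0.5 * i for i in range(10)]
print(power_to_detect(-1.0, 0.5, visits), power_to_detect(-1.0, 2.0, visits))
```

Repeating such a simulation across the observed range of residual variability gives the follow-up time (or number of visits) needed to reach 80% power, which is the quantity compared across the three test patterns in the abstract.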
Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, large gaze scans to the left and right, comprising head and multiple eye movements, are made. We detail an algorithm called the gaze scan algorithm that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades, then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming marking of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans and can be used to better understand how individuals scan their environment.
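The two stages described above (detect lateral saccades, then merge them into gaze scans) can be sketched as follows. The velocity threshold, merge gap, and minimum magnitude below are illustrative placeholders, not the published algorithm's parameters:

```python
def detect_saccades(t, x, vel_thresh=30.0):
    """Find runs of samples where horizontal gaze speed exceeds vel_thresh (deg/s).

    t: timestamps in seconds; x: horizontal gaze position in degrees.
    Returns a list of (start_index, end_index) pairs.
    """
    saccades, in_sacc, start = [], False, 0
    for i in range(1, len(t)):
        v = abs((x[i] - x[i - 1]) / (t[i] - t[i - 1]))
        if v >= vel_thresh and not in_sacc:
            start, in_sacc = i - 1, True
        elif v < vel_thresh and in_sacc:
            saccades.append((start, i - 1))
            in_sacc = False
    if in_sacc:
        saccades.append((start, len(t) - 1))
    return saccades

def merge_into_scans(t, x, saccades, max_gap=0.3, min_magnitude=20.0):
    """Merge same-direction saccades separated by brief pauses into gaze scans.

    Returns (start_time, end_time, magnitude_deg) for each scan whose total
    lateral excursion meets the minimum-magnitude criterion.
    """
    merged = []
    for s, e in saccades:
        direction = 1 if x[e] >= x[s] else -1
        if merged:
            ps, pe, pdir = merged[-1]
            if pdir == direction and t[s] - t[pe] <= max_gap:
                merged[-1] = (ps, e, direction)  # extend the previous scan
                continue
        merged.append((s, e, direction))
    return [(t[s], t[e], x[e] - x[s]) for s, e, _ in merged
            if abs(x[e] - x[s]) >= min_magnitude]
```

On a synthetic trace with two rightward saccades (0° to 25°, a 50 ms pause, then 25° to 50°), the two detected saccades merge into a single 50° gaze scan with its start and end marked in time and eccentricity, as described above.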
Purpose: One rehabilitation strategy taught to individuals with hemianopic field loss (HFL) is to make a large blind-side scan to quickly identify hazards. However, it is not clear what the minimum threshold is for how large the scan should be. Using driving simulation, we evaluated thresholds (criteria) for gaze and head scan magnitudes that best predict detection safety. Methods: Seventeen participants with complete HFL and 15 with normal vision (NV) drove through 4 routes in a virtual city while their eyes and head were tracked. Participants pressed the horn as soon as they detected a motorcycle (10 per drive) that appeared at 54 degrees of eccentricity on cross-streets and approached the driver. Results: Those with HFL detected fewer motorcycles than those with NV and had worse detection on the blind side than the seeing side. On the blind side, both safe detections and early detections (detections before the hazard entered the intersection) could be predicted with both gaze (safe, 18.5 degrees; early, 33.8 degrees) and head (safe, 19.3 degrees; early, 27.0 degrees) scans. However, on the seeing side, only early detections could be classified with gaze (25.3 degrees) and head (9.0 degrees) scans. Conclusions: Both head and gaze scan magnitudes were significant predictors of detection on the blind side but were less predictive on the seeing side, which was likely driven by the ability to use peripheral vision. Interestingly, head scans were as predictive as gaze scans. Translational Relevance: The minimum scan magnitude could be a useful criterion for scanning training or for developing assistive technologies to improve scanning.
SIGNIFICANCE: Think Tank 2019 affirmed that the rate of infection associated with contact lenses has not changed in several decades. There is also a trend toward more serious infections associated with Acanthamoeba and fungi. The growing use of contact lenses in children demands our attention, with surveillance and case-control studies. PURPOSE: The American Academy of Optometry (AAO) gathered researchers and key opinion leaders from around the world to discuss contact lens-associated microbial keratitis at the 2019 AAO Annual Meeting. METHODS: Experts presented within four sessions. Session 1 covered the epidemiology of microbial keratitis, the pathogenesis of Pseudomonas aeruginosa, and the role of lens care systems and storage cases in corneal disease. Session 2 covered nonbacterial forms of keratitis in contact lens wearers. Session 3 covered future needs, challenges, and research questions relating to microbial keratitis in youth and myopia control, the microbiome, antimicrobial surfaces, and genetic susceptibility. Session 4 covered compliance and communication imperatives. RESULTS: The absolute rate of microbial keratitis has remained very consistent for three decades despite new technologies, and extended wear significantly increases the risk. The improved oxygen delivery afforded by silicone hydrogel lenses has not affected these rates, and although the introduction of daily disposable lenses has minimized the risk of severe disease, there is no consistent evidence that they have altered the overall rate of microbial keratitis. Overnight orthokeratology lenses may increase the risk of microbial keratitis, especially secondary to Acanthamoeba, in children. Compliance remains a concern and a significant risk factor for disease. New insights into the host microbiome and genetic susceptibility may yield new theories. More studies, such as case-control designs suited for rare diseases, and registries are needed.
CONCLUSIONS: The first annual AAO Think Tank acknowledged that the risk of microbial keratitis has not decreased over decades, despite innovation. Important questions and research directions remain.
Diagnosis and treatment planning in ophthalmology depend heavily on clinical examination and advanced imaging modalities, which can be time-consuming and carry the risk of human error. Artificial intelligence (AI) and deep learning (DL) are being used in different fields of ophthalmology, in particular for diagnostics and for predicting the outcomes of anterior segment surgeries. This review evaluates recent developments in AI for the diagnosis, surgical intervention, and prognosis of corneal diseases. It also provides a brief overview of newer AI-dependent modalities for corneal disease.
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in December 2019 in Wuhan city, Hubei province, China. This is the third and largest coronavirus outbreak since the new millennium, after SARS in 2002 and Middle East respiratory syndrome (MERS) in 2012. Over 3 million people have been infected, and COVID-19 has caused more than 217,000 deaths. A concern exists regarding the vulnerability of patients who have been treated with immunosuppressive drugs prior to or during this pandemic. Would they be more susceptible to infection by SARS-CoV-2, and how would their clinical course be altered by their immunosuppressed state? This is a question the wider medical fraternity (including ophthalmologists, rheumatologists, gastroenterologists and transplant physicians, among others) must answer. The evidence from the SARS and MERS outbreaks offers some degree of confidence that immunosuppression is largely safe in the current COVID-19 pandemic. Preliminary clinical experience based on case reports, small series and observational studies shows that morbidity and mortality rates in immunosuppressed patients may not differ greatly from those of the general population. Overwhelmingly, current best practice guidelines worldwide recommend the continuation of immunosuppressive treatment in patients who require it, except perhaps for high-dose corticosteroid therapy and in patients with associated risk factors for severe COVID-19 disease.
PURPOSE: To report a case series of patients with treatment-resistant Acanthamoeba keratitis (AK) treated with oral miltefosine, often as salvage therapy. DESIGN: Descriptive, retrospective multicenter case series. METHODS: We reviewed 15 patients with AK unresponsive to therapy who were subsequently given adjuvant systemic miltefosine between 2011 and 2017. The main outcome measures were resolution of infection, final visual acuity, tolerance of miltefosine, and clinical course of disease. RESULTS: All patients had been treated with biguanides and/or diamidines or azoles without resolution of disease before starting miltefosine. Eleven of 15 patients retained count-fingers vision or better, and all were considered disease free at last follow-up. Eleven of 15 patients had worsening inflammation on miltefosine, 10 of whom improved with steroids. Six patients received multiple courses of miltefosine. Most tolerated oral miltefosine well, with mild gastrointestinal symptoms the most common systemic side effect. CONCLUSIONS: Oral miltefosine is a generally well-tolerated treatment adjuvant in patients with refractory AK. The clinician should be prepared for a steroid-responsive inflammatory response, which is frequently encountered during the treatment course.
BACKGROUND/AIMS: Vitrectomy to repair retinal detachment is often performed with either non-contact wide-angle viewing systems or wide-angle contact viewing systems. The purpose of this study is to assess whether the viewing system used is associated with any differences in surgical outcomes of vitrectomy for primary non-complex retinal detachment repair. METHODS: This is a multicenter, interventional, retrospective, comparative study. Eyes that underwent non-complex primary retinal detachment repair by either pars plana vitrectomy (PPV) alone or in combination with scleral buckle/PPV in 2015 were evaluated. The viewing system at the time of the retinal detachment repair was identified and preoperative patient characteristics, intraoperative findings and postoperative outcomes were recorded. RESULTS: A total of 2256 eyes were included in our analysis. Of those, 1893 surgeries used a non-contact viewing system, while 363 used a contact lens system. There was no statistically significant difference in single surgery anatomic success at 3 months (p=0.72), or final anatomic success (p=0.40). Average postoperative visual acuity for the contact-based cases was logMAR 0.345 (20/44 Snellen equivalent) compared with 0.475 (20/60 Snellen equivalent) for non-contact (p=0.001). After controlling for numerous confounding variables in multivariable analysis, viewing system choice was no longer statistically significant (p=0.097). CONCLUSION: There was no statistically significant difference in anatomic success achieved for primary retinal detachment repair when comparing non-contact viewing systems to contact lens systems. Postoperative visual acuity was better in the contact-based group but this was not statistically significant when confounding factors were controlled for.
With the advancement of computational power, refinement of learning algorithms and architectures, and availability of big data, artificial intelligence (AI) technology, particularly with machine learning and deep learning, is paving the way for 'intelligent' healthcare systems. AI-related research in ophthalmology previously focused on the screening and diagnosis of posterior segment diseases, particularly diabetic retinopathy, age-related macular degeneration and glaucoma. There is now emerging evidence demonstrating the application of AI to the diagnosis and management of a variety of anterior segment conditions. In this review, we provide an overview of AI applications to the anterior segment addressing keratoconus, infectious keratitis, refractive surgery, corneal transplant, adult and paediatric cataracts, angle-closure glaucoma and iris tumour, and highlight important clinical considerations for adoption of AI technologies, potential integration with telemedicine and future directions.
AIMS/HYPOTHESIS: Proliferative diabetic retinopathy (PDR) with retinal neovascularisation (NV) is a leading cause of vision loss. This study identified a set of metabolites that were altered in the vitreous humour of PDR patients compared with non-diabetic control participants. We corroborated changes in vitreous metabolites identified in prior studies and identified novel dysregulated metabolites that may lead to treatment strategies for PDR. METHODS: We analysed metabolites in vitreous samples from 43 PDR patients and 21 non-diabetic epiretinal membrane control patients from Japan (age 27-80 years) via ultra-high-performance liquid chromatography-mass spectrometry. We then investigated the association of a novel metabolite (creatine) with retinal NV in mouse oxygen-induced retinopathy (OIR). Creatine or vehicle was administered from postnatal day (P)12 to P16 (during induced NV) via oral gavage. P17 retinas were quantified for NV and vaso-obliteration. RESULTS: We identified 158 metabolites in vitreous samples that were altered in PDR patients vs control participants. We corroborated increases in pyruvate, lactate, proline and allantoin in PDR, which were identified in prior studies. We also found changes in metabolites not previously identified, including creatine. In human vitreous humour, creatine levels were decreased in PDR patients compared with epiretinal membrane control participants (false-discovery rate <0.001). We validated that lower creatine levels were associated with vascular proliferation in mouse retina in the OIR model (p = 0.027) using retinal metabolomics. Oral creatine supplementation reduced NV compared with vehicle (P12 to P16) in OIR (p = 0.0024). CONCLUSIONS/INTERPRETATION: These results suggest that metabolites from vitreous humour may reflect changes in metabolism that can be used to find pathways influencing retinopathy. Creatine supplementation could be useful to suppress NV in PDR.
PURPOSE: To critically evaluate the potential impact of the coronavirus disease (COVID-19) pandemic on global ophthalmology and VISION 2020. DESIGN: Perspective supplemented with epidemiologic insights from available online databases. METHODS: We extracted data from the Global Vision Database (2017) and Global Burden of Disease Study (2017) to highlight temporal trends in global blindness since 1990, and provide a narrative overview of how COVID-19 may derail progress toward the goals of VISION 2020. RESULTS: Over 2 decades of VISION 2020 advocacy and program implementation have culminated in a universal reduction of combined age-standardized prevalence of moderate-to-severe vision impairment (MSVI) across all world regions since 1990. Between 1990 and 2017, low-income countries observed large reductions in the age-standardized prevalence per 100,000 persons of vitamin A deficiency (25,155 to 19,187), undercorrected refractive disorders (2,286 to 2,040), cataract (1,846 to 1,690), onchocerciasis (5,577 to 2,871), trachoma (506 to 159), and leprosy (36 to 26). Despite these reductions, crude projections suggest that more than 700 million persons will experience MSVI or blindness by 2050, principally owing to our growing and ageing global population. CONCLUSIONS: Despite the many resounding successes of VISION 2020, the burden of global blindness and vision impairment is set to reach historic levels in the coming years. The impact of COVID-19, while yet to be fully determined, now threatens the hard-fought gains of global ophthalmology. The postpandemic years will require renewed effort and focus on vision advocacy and expanding eye care services worldwide.
Purpose: To investigate the nature of anatomical and functional recovery kinetics after epiretinal membrane (ERM) removal. Methods: The records of 42 patients (45 eyes) with idiopathic ERM treated with pars plana vitrectomy and surgical peeling of the ERM performed by a single surgeon at Massachusetts Eye and Ear between 2012 and 2017 were retrospectively reviewed. Outcome measures included spectral-domain optical coherence tomography-measured central macular thickness (CMT) pre-operatively and at post-operative day 1, week 1, and months 1, 3, 6, 12 and 24, as well as best-corrected visual acuity (BCVA). Correlations between baseline or early values and final anatomical and functional outcomes were investigated. Results: Improvement in CMT was statistically significant after 1 week and 1, 3, 6, 12 and 24 months (p < 0.01). BCVA improvement was statistically significant after 1, 6, 12 and 24 months of follow-up (p < 0.01). The improvement of BCVA and CMT with time was found to be logarithmic (R² = 0.96 and R² = 0.84, respectively), suggesting that early (<30 days) post-operative functional and anatomical changes may be predictive of long-term outcomes. Preoperative BCVA and CMT showed weak positive correlations with BCVA and CMT at 24 months (R = 0.13 and R = 0.16, respectively). When plotted as a percentage of the fellow normal eye's CMT, the first-week proportional improvement in CMT from the pre-operative baseline was correlated with the final proportional decrease in CMT (R = 0.72), suggesting that first-week postoperative CMT could be predictive of final CMT. Conclusion: There is a logarithmic improvement in CMT and BCVA after ERM peel, with BCVA improvement following CMT improvement. Early (less than 30 days) post-operative anatomical changes can be predictive of long-term anatomical outcomes.
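The logarithmic time course reported above corresponds to a model of the form CMT(t) = a + b·ln(t). A minimal least-squares fit of that model is sketched below; the function names and the sample readings are hypothetical, for illustration only:

```python
import math

def fit_log(days, values):
    """Least-squares fit of values = a + b * ln(days); returns (a, b)."""
    xs = [math.log(d) for d in days]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(values) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, values))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

def predict(a, b, day):
    """Model-predicted value at a given post-operative day."""
    return a + b * math.log(day)

# Hypothetical CMT readings (microns) at day 1, week 1, and months 1-24.
days = [1, 7, 30, 90, 180, 365, 730]
cmt = [452, 410, 372, 348, 331, 318, 305]
a, b = fit_log(days, cmt)
```

Because ln(t) changes fastest at small t, the fit is dominated by the early visits, which is consistent with the finding that first-month measurements are informative about the 24-month outcome.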
In the last two decades, rodents have been on the rise as a dominant model for visual neuroscience. This is particularly true for earlier levels of information processing, but a number of studies have suggested that higher levels of processing, such as invariant object recognition, also occur in rodents. Here we provide a quantitative and comprehensive assessment of this claim by comparing a wide range of rodent behavioral and neural data with convolutional deep neural networks. These networks have been shown to capture hallmark properties of information processing in primates through a succession of convolutional and fully connected layers. We find that performance on rodent object vision tasks can be captured using low- to mid-level convolutional layers only, without any convincing evidence for the need of higher layers known to simulate complex object recognition in primates. Our approach also yields surprising insights into earlier assumptions, for example, that the best-performing animals would be the ones using the most abstract representations, which we show is likely incorrect. Our findings suggest a road ahead for further studies aiming to quantify and establish the richness of the representations underlying information processing in animal models at large.
INTRODUCTION: Contrast sensitivity function (CSF) may better estimate a patient's visual function compared with visual acuity (VA). Our study evaluates the quick CSF (qCSF) method to measure visual function in eyes with macular disease and good letter acuity. METHODS: Patients with maculopathies (retinal vein occlusion, macula-off retinal detachment, dry age-related macular degeneration and wet age-related macular degeneration) and good letter acuity (VA ≥20/30) were included. The qCSF method uses an intelligent algorithm to measure CSF across multiple spatial frequencies. All maculopathy eyes combined and individual macular disease groups were compared with healthy control eyes. Main outcomes included area under the log CSF (AULCSF) and six CS thresholds ranging from 1 cycle per degree (cpd) to 18 cpd. RESULTS: 151 eyes with maculopathy and 93 control eyes with VA ≥20/30 were included. The presence of a maculopathy was associated with significant reduction in AULCSF (β: -0.174; p<0.001) and CS thresholds at all spatial frequencies except for 18 cpd (β: -0.094 to -0.200 log CS, all p<0.01) compared with controls. Reductions in CS thresholds were most notable at low and intermediate spatial frequencies (1.5 cpd, 3 cpd and 6 cpd). CONCLUSION: CSF measured with the qCSF active learning method was found to be significantly reduced in eyes affected by macular disease despite good VA compared with healthy control eyes. The qCSF method is a promising clinical tool to quantify subtle visual deficits that may otherwise go unrecognised by current testing methods.
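The AULCSF outcome above is conventionally computed as the trapezoidal area under the log-contrast-sensitivity curve plotted against log spatial frequency. A minimal sketch follows; the exact frequency grid and integration bounds used by the qCSF method are assumptions here:

```python
import math

def aulcsf(freqs_cpd, log_cs):
    """Trapezoidal area under log CS as a function of log10 spatial frequency.

    freqs_cpd: spatial frequencies in cycles per degree (ascending).
    log_cs: log10 contrast sensitivity threshold at each frequency.
    """
    xs = [math.log10(f) for f in freqs_cpd]
    area = 0.0
    for i in range(1, len(xs)):
        area += 0.5 * (log_cs[i] + log_cs[i - 1]) * (xs[i] - xs[i - 1])
    return area

# The six thresholds reported in the abstract span roughly 1-18 cpd.
freqs = [1.0, 1.5, 3.0, 6.0, 12.0, 18.0]
```

Summarizing the whole curve in one number is what lets AULCSF pick up deficits concentrated at low and intermediate frequencies even when acuity (a high-frequency measure) is preserved.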
Glaucoma leads to millions of cases of visual impairment and blindness around the world. Susceptibility to glaucoma is shaped by both environmental and genetic risk factors. Although over 120 risk loci have been identified for glaucoma, a large portion of its heritability is still unexplained. Here we describe the foundation of the Genetics of GLaucoma Evaluation in the AMish (GGLEAM) study, which investigates the genetic architecture of glaucoma in the Ohio Amish, a population that exhibits lower genetic and environmental heterogeneity than the general population. To date, we have enrolled 81 Amish individuals in our study from Holmes County, Ohio. As part of our enrollment process, 62 GGLEAM study participants (42 glaucoma-affected and 20 unaffected individuals) received comprehensive eye examinations and glaucoma evaluations. Using data from the Anabaptist Genealogy Database, we found that 80 of the GGLEAM study participants were related to one another through a large, multigenerational pedigree containing 1,586 people. We plan to integrate the health and kinship data obtained for the GGLEAM study to interrogate glaucoma genetics and pathophysiology in this unique population.
Botulinum toxin is an important treatment for many conditions in ophthalmology, including strabismus, nystagmus, blepharospasm, hemifacial spasm, spastic and congenital entropion, corneal exposure, and persistent epithelial defects. The mechanism of action of botulinum toxin for both strabismus and nystagmus is the neuromuscular blockade and transient paralysis of extraocular muscles, but when botulinum toxin is used for some forms of strabismus, a single injection can convey indefinite benefits. There are two unique mechanisms of action that account for the long-term effect on ocular alignment: (1) the disruption of a balanced system of agonist-antagonist extraocular muscles and (2) the reestablishment of central control of alignment by the binocular visual system. For other ocular conditions, botulinum toxin acts through transient paralysis of periocular muscles. Botulinum toxin is a powerful tool in ophthalmology, achieving its therapeutic effects by direct neuromuscular blockade of extraocular and periocular muscles and by unique mechanisms related to the underlying structure and function of the visual system.
Retinal imaging remains the mainstay for monitoring and grading diabetic retinopathy (DR). The gold standard for detecting proliferative diabetic retinopathy (PDR) requiring treatment has long been seven-field stereoscopic fundus photography and fluorescein angiography. In the past decade, ultra-widefield fluorescein angiography (UWF-FA) has become more commonly used in clinical practice for the evaluation of more advanced diabetic retinopathy. Since its invention, optical coherence tomography (OCT) has been an important tool for the assessment of diabetic macular edema; however, OCT offered little in the assessment of the neovascular changes associated with PDR until OCT angiography (OCT-A) became available. More recently, swept-source OCT has allowed larger field-of-view scans to assess a variety of DR lesions with widefield swept-source OCT angiography (WF-SS-OCTA). This paper reviews the role of WF-SS-OCTA in detecting neovascularization of the disc (NVD) and elsewhere (NVE), microaneurysms, changes of the foveal avascular zone (FAZ), intraretinal microvascular abnormalities (IRMA), and capillary non-perfusion, as well as the limitations of this evolving technology.