Solaguren-Beascoa M, Bujakowska KM, Méjécase C, Emmenegger L, Orhan E, Neuillé M, Mohand-Saïd S, Condroyer C, Lancelot M-E, Michiels C, Demontant V, Antonio A, Letexier M, Saraiva J-P, Lonjou C, Carpentier W, Léveillard T, Pierce EA, Dollfus H, Sahel J-A, Bhattacharya SS, Audo I, Zeitz C. WDR34, a candidate gene for non-syndromic rod-cone dystrophy. Clin Genet 2021;99(2):298-302.
Rod-cone dystrophy (RCD), also called retinitis pigmentosa, is characterized by rod followed by cone photoreceptor degeneration, leading to gradual visual loss. Mutations in over 65 genes have been associated with non-syndromic RCD, explaining 60% to 70% of cases, with novel gene defects possibly accounting for the unsolved cases. Homozygosity mapping and whole-exome sequencing applied to a case of autosomal recessive non-syndromic RCD from a consanguineous union identified a homozygous variant in WDR34. Mutations in WDR34 have previously been associated with severe ciliopathy syndromes that may include a retinal dystrophy. This is the first report of a homozygous mutation in WDR34 associated with non-syndromic RCD.
OBJECTIVE: Severe preeclampsia complicates roughly 1% of all pregnancies. One defining feature of severe preeclampsia is new-onset visual disturbance. The accessibility of the choroid to high-resolution, noninvasive imaging makes it a reasonable target of investigation for disease prediction, stratification, or monitoring in preeclampsia. This study aimed to compare subfoveal choroidal thickness between women with severe preeclampsia and those with normotensive pregnancies, and to investigate associations between such findings and other indicators of disease severity, including gestational age and serum angiogenic factors. STUDY DESIGN: We designed a case-control study comprising 36 women diagnosed with severe preeclampsia (cases) matched to 37 normotensive women (controls) by race/ethnicity and parity, all examined in the postpartum period. All patients underwent enhanced depth imaging spectral-domain optical coherence tomography and serum analysis. RESULTS: Cases showed no difference in subfoveal choroidal thickness compared with controls (P = 0.65). Among cases, subfoveal choroidal thickness and gestational age at delivery were inversely related (r = 0.86, P < .001). There was a positive association of placental growth factor with subfoveal choroidal thickness among cases (r = 0.54, P = 0.002). CONCLUSION: This study suggests a relationship between the degree of disease severity and the magnitude of choroidal thickening. We also show an association between this index and placental growth factor level in the postpartum period.
Autism spectrum disorder (ASD) is characterized by social deficits and atypical facial processing of emotional expressions. The underlying neuropathology of these abnormalities is still unclear. Recent studies implicate the cerebellum in emotional processing; other studies show cerebellar abnormalities in ASD. Here, we elucidate the spatiotemporal activation of cerebellar lobules during emotional processing of happy and angry faces in adolescents with ASD and typically developing (TD) controls. Using magnetoencephalography, we calculated dynamic statistical parametric maps across a period of 500 ms after emotional stimulus onset and determined differences between group activity to happy and angry emotions. Following happy face presentation, adolescents with ASD exhibited only left-hemispheric cerebellar activation in a cluster extending from lobule VI to lobule V (compared to TD controls). Following angry face presentation, adolescents with ASD exhibited only midline cerebellar activation (posterior IX vermis). Our findings indicate an early (125-175 ms) overactivation in cerebellar activity only for happy faces and a later overactivation for both happy (250-450 ms) and angry (250-350 ms) faces in adolescents with ASD. The prioritized hemispheric activity (happy faces) could reflect the promotion of a more flexible and adaptive social behavior, while the later midline activity (angry faces) may guide conforming behavior.
Graduate medical education (GME) in ophthalmology has faced and overcome many challenges over the years, and 2020 was a game-changing year. Although the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic disrupted medical education globally, ophthalmic educators rapidly transformed their curricula into novel and effective virtual learning formats. Thus, while the COVID-19 outbreak has been one of the most significant challenges faced in the history of medical education, it has also provided an impetus to develop innovative teaching practices, bringing with it unprecedented success in allowing medical students to continue their education in ophthalmology despite these challenges. We review and appraise novel educational interventions implemented by various institutions in response to the COVID-19 pandemic, highlighting their effectiveness and challenges, and proposing future directions beyond the pandemic. Many of these innovations will persist even after the end of the pandemic because they have shown that face-to-face learning is not required for all aspects of the ophthalmic GME curriculum. As ophthalmic educators harness the power of educational technology, it is critical that their novel educational initiatives be incorporated into competency-based curricula with assessments mapped to the competencies. Future research should focus on evaluating the impact of this transformation to virtual learning environments on student performance, as well as on implementing longitudinal assessment strategies for clinical competence in workplace-based practice.
PURPOSE: To compare the variability and ability to detect visual field progression of the 24-2, the central 12 locations of the 24-2 (C24-2), and the 10-2 visual field (VF) tests in eyes with abnormal VFs. DESIGN: Retrospective, multisite cohort. PARTICIPANTS: A total of 52,806 24-2 and 11,966 10-2 VF tests from 7,307 eyes in the Glaucoma Research Network database were analyzed. Only eyes with ≥5 visits and ≥2 years of follow-up were included. METHODS: Linear regression models were used to calculate rates of mean deviation (MD) change (slopes), and their residuals were used to assess variability across the entire MD range. Computer simulations (n = 10,000) based on the real MD residuals of our sample were performed to estimate power to detect significant progression (P < 0.05) at various rates of MD change. MAIN OUTCOME MEASURES: Time required to detect progression. RESULTS: For all 3 patterns, MD variability was highest within the -5 to -20 dB range and consistently lower with the 10-2 than with the 24-2 or C24-2. Overall, time to detect confirmed significant progression at 80% power was lowest with the 10-2 VF, with a decrease of 14.6% to 18.5% compared with the 24-2 and a decrease of 22.9% to 26.5% compared with the C24-2. CONCLUSION: Time to detect central VF progression was reduced with 10-2 MD compared with 24-2 and C24-2 MD in glaucoma eyes in this large dataset, in part because 10-2 tests had lower variability. These findings add to current evidence of the potential value of 10-2 testing in the clinical management of glaucoma patients and in clinical trial design.
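The power-estimation step described in METHODS can be sketched in a few lines: simulate noisy MD series around a fixed true slope, refit a regression on each series, and count the fraction of fits reaching significance. The visit schedule, residual SD, and Gaussian-noise model below are illustrative assumptions; the study itself resampled real MD residuals rather than drawing from a normal distribution.

```python
import numpy as np
from scipy.stats import linregress

def estimate_progression_power(true_slope, residual_sd, years=5.0,
                               visits_per_year=2, n_sim=2000,
                               alpha=0.05, seed=0):
    """Estimate power to detect a significantly negative MD slope
    (one-sided test on the linear-regression slope) for a given
    true rate of MD change (dB/year) and residual SD (dB).
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Evenly spaced visits over the follow-up period (assumed schedule)
    t = np.linspace(0.0, years, int(years * visits_per_year) + 1)
    hits = 0
    for _ in range(n_sim):
        md = true_slope * t + rng.normal(0.0, residual_sd, t.size)
        fit = linregress(t, md)
        # Halve the two-sided p-value for a one-sided test of worsening
        if fit.slope < 0 and fit.pvalue / 2 < alpha:
            hits += 1
    return hits / n_sim
```

As expected, faster true progression and lower residual variability both raise the estimated power, which is the mechanism behind the shorter detection times reported for the less variable 10-2 test.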
Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, large gaze scans to the left and right, comprising head and multiple eye movements, are made. We detail an algorithm called the gaze scan algorithm that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades, then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming marking of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans and can be used to better understand how individuals scan their environment.
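The core merging step of the algorithm described above can be sketched as follows. The `Saccade` fields, the 0.3-second gap threshold, and the same-direction merge rule are hypothetical simplifications for illustration, not the published algorithm's exact criteria.

```python
from dataclasses import dataclass

@dataclass
class Saccade:
    t_start: float    # onset time (s)
    t_end: float      # offset time (s)
    ecc_start: float  # horizontal eccentricity (deg); positive = rightward
    ecc_end: float

def merge_into_gaze_scans(saccades, max_gap_s=0.3):
    """Merge consecutive same-direction lateral saccades into gaze scans.
    Returns a list of (t_start, t_end, magnitude_deg) tuples.
    max_gap_s is an assumed inter-saccade gap threshold."""
    scans = []  # each entry: (t0, t1, ecc0, ecc1, direction)
    for s in sorted(saccades, key=lambda s: s.t_start):
        direction = 1 if s.ecc_end >= s.ecc_start else -1
        if scans:
            t0, t1, e0, e1, d = scans[-1]
            # Extend the current scan if direction matches and gap is short
            if direction == d and s.t_start - t1 <= max_gap_s:
                scans[-1] = (t0, s.t_end, e0, s.ecc_end, d)
                continue
        scans.append((s.t_start, s.t_end, s.ecc_start, s.ecc_end, direction))
    return [(t0, t1, abs(e1 - e0)) for t0, t1, e0, e1, d in scans]
```

With this representation, each gaze scan's start and end are marked in time and eccentricity, so magnitude and duration fall out directly, mirroring the outputs the algorithm provides.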
Purpose: One rehabilitation strategy taught to individuals with hemianopic field loss (HFL) is to make a large blind side scan to quickly identify hazards. However, it is not clear what the minimum threshold is for how large the scan should be. Using driving simulation, we evaluated thresholds (criteria) for gaze and head scan magnitudes that best predict detection safety. Methods: Seventeen participants with complete HFL and 15 with normal vision (NV) drove through 4 routes in a virtual city while their eyes and head were tracked. Participants pressed the horn as soon as they detected a motorcycle (10 per drive) that appeared at 54 degrees eccentricity on cross-streets and approached toward the driver. Results: Those with HFL detected fewer motorcycles than those with NV and had worse detection on the blind side than the seeing side. On the blind side, both safe detections and early detections (detections before the hazard entered the intersection) could be predicted with both gaze (safe 18.5 degrees and early 33.8 degrees) and head (safe 19.3 degrees and early 27 degrees) scans. However, on the seeing side, only early detections could be classified with gaze (25.3 degrees) and head (9.0 degrees). Conclusions: Both head and gaze scan magnitude were significant predictors of detection on the blind side but less predictive on the seeing side, which was likely driven by the ability to use peripheral vision. Interestingly, head scans were as predictive as gaze scans. Translational Relevance: The minimum scan magnitude could be a useful criterion for scanning training or for developing assistive technologies to improve scanning.
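Using the blind-side thresholds reported above, the classification idea can be sketched as a simple rule. The function name, the three-way labels, and the "either gaze or head meets its threshold" rule are illustrative assumptions layered on the study's reported cutoffs.

```python
# Blind-side scan-magnitude thresholds reported in the study (degrees)
GAZE_SAFE, GAZE_EARLY = 18.5, 33.8
HEAD_SAFE, HEAD_EARLY = 19.3, 27.0

def classify_blind_side_scan(gaze_deg, head_deg):
    """Classify a blind-side scan by whether its gaze or head magnitude
    meets the thresholds that best predicted early or safe detection.
    The either-or combination rule is a hypothetical simplification."""
    if gaze_deg >= GAZE_EARLY or head_deg >= HEAD_EARLY:
        return "early"    # detection predicted before hazard enters intersection
    if gaze_deg >= GAZE_SAFE or head_deg >= HEAD_SAFE:
        return "safe"     # detection predicted in time to respond
    return "at-risk"      # scan likely too small for reliable detection
```

A rule of this shape is the kind of criterion the Translational Relevance section envisions for scanning training or assistive-technology alerts.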
SIGNIFICANCE: Think Tank 2019 affirmed that the rate of infection associated with contact lenses has not changed in several decades. Also, there is a trend toward more serious infections associated with Acanthamoeba and fungi. The growing use of contact lenses in children demands our attention with surveillance and case-control studies. PURPOSE: The American Academy of Optometry (AAO) gathered researchers and key opinion leaders from around the world to discuss contact lens-associated microbial keratitis at the 2019 AAO Annual Meeting. METHODS: Experts presented within four sessions. Session 1 covered the epidemiology of microbial keratitis, the pathogenesis of Pseudomonas aeruginosa, and the role of lens care systems and storage cases in corneal disease. Session 2 covered nonbacterial forms of keratitis in contact lens wearers. Session 3 covered future needs, challenges, and research questions in relation to microbial keratitis in youth and myopia control, the microbiome, antimicrobial surfaces, and genetic susceptibility. Session 4 covered compliance and communication imperatives. RESULTS: The absolute rate of microbial keratitis has remained very consistent for three decades despite new technologies, and extended wear significantly increases the risk. Improved oxygen delivery afforded by silicone hydrogel lenses has not impacted the rates, and although the introduction of daily disposable lenses has minimized the risk of severe disease, there is no consistent evidence that they have altered the overall rate of microbial keratitis. Overnight orthokeratology lenses may increase the risk of microbial keratitis, especially secondary to Acanthamoeba, in children. Compliance remains a concern and a significant risk factor for disease. New insights into the host microbiome and genetic susceptibility may uncover new theories. More studies are needed, including case-control designs suited for rare diseases, along with registries.
CONCLUSIONS: The first annual AAO Think Tank acknowledged that the risk of microbial keratitis has not decreased over decades, despite innovation. Important questions and research directions remain.
Diagnosis and treatment planning in ophthalmology depend heavily on clinical examination and advanced imaging modalities, which can be time-consuming and carry the risk of human error. Artificial intelligence (AI) and deep learning (DL) are being used in different fields of ophthalmology, in particular for diagnostics and for predicting outcomes of anterior segment surgeries. This review evaluates recent developments in AI for the diagnostics, surgical interventions, and prognosis of corneal diseases. It also provides a brief overview of newer AI-dependent modalities in corneal diseases.
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in December 2019 in Wuhan city, Hubei province, China. This is the third and largest coronavirus outbreak since the new millennium, after SARS in 2002 and Middle East respiratory syndrome (MERS) in 2012. Over 3 million people have been infected, and COVID-19 has caused more than 217,000 deaths. A concern exists regarding the vulnerability of patients who have been treated with immunosuppressive drugs prior to or during this pandemic. Would they be more susceptible to infection by SARS-CoV-2, and how would their clinical course be altered by their immunosuppressed state? This is a question that the wider medical fraternity, including ophthalmologists, rheumatologists, gastroenterologists and transplant physicians among others, must answer. The evidence from the SARS and MERS outbreaks offers some degree of confidence that immunosuppression is largely safe in the current COVID-19 pandemic. Preliminary clinical experience based on case reports, small series and observational studies shows that morbidity and mortality rates in immunosuppressed patients may not differ greatly from those in the general population. Overwhelmingly, current best practice guidelines worldwide recommend the continuation of immunosuppression in patients who require it, with the possible exceptions of high-dose corticosteroid therapy and patients with associated risk factors for severe COVID-19 disease.
PURPOSE: To report a case series of patients with treatment-resistant Acanthamoeba keratitis (AK) using oral miltefosine, often as salvage therapy. DESIGN: Descriptive, retrospective multicenter case series. METHODS: We reviewed 15 patients with AK unresponsive to therapy who were subsequently given adjuvant systemic miltefosine between 2011 and 2017. The main outcome measures were resolution of infection, final visual acuity, tolerance of miltefosine, and clinical course of disease. RESULTS: All patients were treated with biguanides and/or diamidines or azoles without resolution of disease before starting miltefosine. Eleven of 15 patients retained count fingers or better vision, and all were considered disease free at last follow-up. Eleven of 15 patients had worsening inflammation with miltefosine, with 10 of them improving with steroids. Six patients received multiple courses of miltefosine. Most tolerated oral miltefosine well, with mild gastrointestinal symptoms as the most common systemic side effect. CONCLUSIONS: Oral miltefosine is a generally well-tolerated treatment adjuvant in patients with refractory AK. The clinician should be prepared for a steroid-responsive inflammatory response frequently encountered during the treatment course.
BACKGROUND/AIMS: Vitrectomy to repair retinal detachment is often performed with either non-contact wide-angle viewing systems or wide-angle contact viewing systems. The purpose of this study was to assess whether the viewing system used is associated with any differences in surgical outcomes of vitrectomy for primary non-complex retinal detachment repair. METHODS: This is a multicenter, interventional, retrospective, comparative study. Eyes that underwent non-complex primary retinal detachment repair by either pars plana vitrectomy (PPV) alone or combined scleral buckle/PPV in 2015 were evaluated. The viewing system used at the time of the retinal detachment repair was identified, and preoperative patient characteristics, intraoperative findings and postoperative outcomes were recorded. RESULTS: A total of 2256 eyes were included in our analysis. Of those, 1893 surgeries used a non-contact viewing system, while 363 used a contact lens system. There was no statistically significant difference in single surgery anatomic success at 3 months (p=0.72) or in final anatomic success (p=0.40). Average postoperative visual acuity for the contact-based cases was logMAR 0.345 (20/44 Snellen equivalent) compared with logMAR 0.475 (20/60 Snellen equivalent) for non-contact cases (p=0.001). After controlling for numerous confounding variables in multivariable analysis, viewing system choice was no longer statistically significant (p=0.097). CONCLUSION: There was no statistically significant difference in anatomic success achieved for primary retinal detachment repair when comparing non-contact viewing systems with contact lens systems. Postoperative visual acuity was better in the contact-based group, but the difference was not statistically significant once confounding factors were controlled for.
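The Snellen equivalents quoted above follow the standard conversion from logMAR acuity, where the Snellen denominator equals the testing distance times 10 raised to the logMAR value. A minimal sketch:

```python
def logmar_to_snellen(logmar, distance_ft=20):
    """Convert a logMAR acuity to an approximate Snellen fraction:
    denominator = distance * 10**logmar, rounded to the nearest integer.
    Clinical charts quote only standard lines, so real-world reporting
    would round to the nearest chart line rather than an exact integer."""
    return f"{distance_ft}/{round(distance_ft * 10 ** logmar)}"
```

Applying this to the study's means reproduces the quoted equivalents: logMAR 0.345 gives roughly 20/44 and logMAR 0.475 roughly 20/60.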
With the advancement of computational power, refinement of learning algorithms and architectures, and availability of big data, artificial intelligence (AI) technology, particularly with machine learning and deep learning, is paving the way for 'intelligent' healthcare systems. AI-related research in ophthalmology previously focused on the screening and diagnosis of posterior segment diseases, particularly diabetic retinopathy, age-related macular degeneration and glaucoma. There is now emerging evidence demonstrating the application of AI to the diagnosis and management of a variety of anterior segment conditions. In this review, we provide an overview of AI applications to the anterior segment addressing keratoconus, infectious keratitis, refractive surgery, corneal transplant, adult and paediatric cataracts, angle-closure glaucoma and iris tumour, and highlight important clinical considerations for adoption of AI technologies, potential integration with telemedicine and future directions.
AIMS/HYPOTHESIS: Proliferative diabetic retinopathy (PDR) with retinal neovascularisation (NV) is a leading cause of vision loss. This study identified a set of metabolites that were altered in the vitreous humour of PDR patients compared with non-diabetic control participants. We corroborated changes in vitreous metabolites identified in prior studies and identified novel dysregulated metabolites that may lead to treatment strategies for PDR. METHODS: We analysed metabolites in vitreous samples from 43 PDR patients and 21 non-diabetic epiretinal membrane control patients from Japan (age 27-80 years) via ultra-high-performance liquid chromatography-mass spectrometry. We then investigated the association of a novel metabolite (creatine) with retinal NV in mouse oxygen-induced retinopathy (OIR). Creatine or vehicle was administered from postnatal day (P)12 to P16 (during induced NV) via oral gavage. P17 retinas were quantified for NV and vaso-obliteration. RESULTS: We identified 158 metabolites in vitreous samples that were altered in PDR patients vs control participants. We corroborated increases in pyruvate, lactate, proline and allantoin in PDR, which were identified in prior studies. We also found changes in metabolites not previously identified, including creatine. In human vitreous humour, creatine levels were decreased in PDR patients compared with epiretinal membrane control participants (false-discovery rate <0.001). Using retinal metabolomics, we validated that lower creatine levels were associated with vascular proliferation in mouse retina in the OIR model (p = 0.027). Oral creatine supplementation (P12 to P16) reduced NV compared with vehicle in OIR (p = 0.0024). CONCLUSIONS/INTERPRETATION: These results suggest that metabolites from vitreous humour may reflect changes in metabolism that can be used to find pathways influencing retinopathy. Creatine supplementation could be useful to suppress NV in PDR.
PURPOSE: To evaluate the long-term outcomes of uveitic macular edema (ME). DESIGN: Longitudinal follow-up of a cohort of participants in a randomized clinical trial. PARTICIPANTS: A total of 248 eyes of 177 participants with uveitic ME enrolled in the Multicenter Uveitis Steroid Treatment (MUST) Trial and Follow-up Study. METHODS: OCT measurements, taken at baseline and annually, were graded by reading center graders masked to clinical data. Macular edema was defined as a center macular thickness (CMT) ≥240 μm on time-domain OCT or time-domain OCT equivalent. Resolution of ME was defined as normalization of macular thickness on OCT. Relapse of ME was defined as increase in macular thickness to ≥240 μm in an eye that previously had resolution. Visual acuity was measured at each visit with logarithmic visual acuity charts. MAIN OUTCOME MEASURES: Resolution and relapse of ME. Visual acuity. RESULTS: Among 227 eyes with ME followed ≥1 year, the cumulative percent of eyes with ME resolving at any point during 7 years was 94% (95% confidence interval [CI], 89-97). Epiretinal membranes on OCT were associated with a lower likelihood of ME resolution (hazard ratio [HR], 0.74; 95% CI, 0.55-1.01; P = 0.05). Among 177 eyes with resolved ME, the cumulative percent with relapse within 7 years was 43% (95% CI, 32-51). Eyes in which ME resolved gained a mean of 6.24 letters (95% CI, 4.40-8.09; P < 0.001) compared with eyes that remained free from ME during the 1-year follow-up intervals, whereas eyes in which ME did not resolve experienced no gain in vision (mean change -1.30 letters; 95% CI, -2.70 to 0.09; P = 0.065), and eyes that developed ME during the year (incident or relapsed) experienced a mean loss of -8.65 letters (95% CI, -11.5 to -5.84, P < 0.001). CONCLUSIONS: Given sufficient time and treatment, nearly all uveitic ME resolves, but episodes of relapse were common. 
Visual acuity results were better among eyes with resolved ME, suggesting that control of inflammation and resolution of ME might be visually relevant treatment targets.
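The OCT-based outcome definitions above (ME as CMT ≥240 μm, resolution as normalization below that threshold, relapse as re-thickening after resolution) can be expressed as a small status tracker. This is a schematic of the definitions only, assuming each eye starts with ME as in the study cohort, not the reading center's actual grading pipeline.

```python
CMT_THRESHOLD_UM = 240  # time-domain OCT center macular thickness cutoff

def me_status_over_time(cmt_series_um):
    """Label each annual OCT visit as 'edema', 'resolved', or 'relapsed'
    per the study's definitions. Assumes the series starts in an eye
    already diagnosed with uveitic macular edema."""
    status = []
    ever_resolved = False
    for cmt in cmt_series_um:
        if cmt >= CMT_THRESHOLD_UM:
            # Thickened macula: relapse only if it previously normalized
            status.append("relapsed" if ever_resolved else "edema")
        else:
            ever_resolved = True
            status.append("resolved")
    return status
```

Tracking eyes this way is what allows the study to report cumulative resolution (94% by 7 years) and relapse (43% by 7 years) as time-to-event outcomes.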
PURPOSE: To critically evaluate the potential impact of the coronavirus disease (COVID-19) pandemic on global ophthalmology and VISION 2020. DESIGN: Perspective supplemented with epidemiologic insights from available online databases. METHODS: We extracted data from the Global Vision Database (2017) and Global Burden of Disease Study (2017) to highlight temporal trends in global blindness since 1990, and provide a narrative overview of how COVID-19 may derail progress toward the goals of VISION 2020. RESULTS: Over 2 decades of VISION 2020 advocacy and program implementation have culminated in a universal reduction of combined age-standardized prevalence of moderate-to-severe vision impairment (MSVI) across all world regions since 1990. Between 1990 and 2017, low-income countries observed large reductions in the age-standardized prevalence per 100,000 persons of vitamin A deficiency (25,155 to 19,187), undercorrected refractive disorders (2,286 to 2,040), cataract (1,846 to 1,690), onchocerciasis (5,577 to 2,871), trachoma (506 to 159), and leprosy (36 to 26). Despite these reductions, crude projections suggest that more than 700 million persons will experience MSVI or blindness by 2050, principally owing to our growing and ageing global population. CONCLUSIONS: Despite the many resounding successes of VISION 2020, the burden of global blindness and vision impairment is set to reach historic levels in the coming years. The impact of COVID-19, while yet to be fully determined, now threatens the hard-fought gains of global ophthalmology. The postpandemic years will require renewed effort and focus on vision advocacy and expanding eye care services worldwide.
PURPOSE OF REVIEW: This review will discuss the utility of high-resolution anterior segment optical coherence tomography (HR-OCT), in-vivo confocal microscopy (IVCM) and ultrasound biomicroscopy (UBM) in characterizing and diagnosing various ocular surface tumors, namely ocular surface squamous neoplasia (OSSN), conjunctival lymphoma and conjunctival melanoma. The strengths and limitations of each imaging modality are discussed along with the characteristic findings of each lesion on each imaging platform. RECENT FINDINGS: HR-OCT can consistently be utilized in the clinic setting to distinguish between epithelial ocular surface tumors such as OSSN and subepithelial tumors such as conjunctival lymphoma and conjunctival melanoma, given their distinctive findings. IVCM can be used as an adjunct to HR-OCT to obtain cellular and surface characteristics, whereas UBM can be used to assess tumor depth and thickness for larger and highly pigmented lesions as well as to detect intraocular invasion. SUMMARY: HR-OCT, IVCM and UBM are all helpful imaging modalities to diagnose and characterize various ocular surface tumors and can serve as valuable adjuncts to monitor treatment response and assess for recurrence of ocular surface tumors.