Clinically relevant deep learning for detection and quantification of geographic atrophy from optical coherence tomography: a model development and external validation study

Gongyu Zhang, Dun Jack Fu, Bart Liefers, Livia Faes, Sophie Glinton, Siegfried Wagner, Robbert Struyven, Nikolas Pontikos, Pearse A. Keane, Konstantinos Balaskas*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

BACKGROUND: Geographic atrophy is a major vision-threatening manifestation of age-related macular degeneration, one of the leading causes of blindness globally. Geographic atrophy has no proven treatment or method for easy detection. Rapid, reliable, and objective detection and quantification of geographic atrophy from optical coherence tomography (OCT) retinal scans is necessary for disease monitoring, prognostic research, and to serve as clinical endpoints for therapy development. To this end, we aimed to develop and validate a fully automated method to detect and quantify geographic atrophy from OCT.
METHODS: We did a deep-learning model development and external validation study on OCT retinal scans at Moorfields Eye Hospital Reading Centre and Clinical AI Hub (London, UK). A modified U-Net architecture was used to develop four distinct deep-learning models for segmentation of geographic atrophy and its constituent retinal features from OCT scans acquired with Heidelberg Spectralis. A manually segmented clinical dataset for model development comprised 5049 B-scans from 984 OCT volumes selected randomly from 399 eyes of 200 patients with geographic atrophy secondary to age-related macular degeneration, enrolled in a prospective, multicentre, phase 2 clinical trial for the treatment of geographic atrophy (FILLY study). Performance was externally validated on an independently recruited dataset from patients receiving routine care at Moorfields Eye Hospital (London, UK). The primary outcome was segmentation and classification agreement between deep-learning model geographic atrophy prediction and consensus of two independent expert graders on the external validation dataset.
FINDINGS: The external validation cohort included 884 B-scans from 192 OCT volumes taken from 192 eyes of 110 patients as part of real-life clinical care at Moorfields Eye Hospital between Jan 1, 2016, and Dec 31, 2019 (mean age 78·3 years [SD 11·1], 58 [53%] women). The resultant geographic atrophy deep-learning model produced predictions similar to consensus human specialist grading on the external validation dataset (median Dice similarity coefficient [DSC] 0·96 [IQR 0·10]; intraclass correlation coefficient [ICC] 0·93) and outperformed agreement between human graders (DSC 0·80 [0·28]; ICC 0·79). Similarly, the three independent feature-specific deep-learning models could accurately segment each of the three constituent features of geographic atrophy in the external validation dataset versus consensus grading: retinal pigment epithelium loss (median DSC 0·95 [IQR 0·15]), overlying photoreceptor degeneration (0·96 [0·12]), and hypertransmission (0·97 [0·07]).
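The Dice similarity coefficient (DSC) reported above measures overlap between two binary segmentation masks: twice the intersection divided by the sum of the mask sizes. A minimal sketch of the metric on toy masks (illustrative data only, not the study's scans or its evaluation code):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total

# Toy flattened masks standing in for a model prediction and a grader's annotation
model_pred = [1, 1, 1, 0, 0, 1, 0, 0]
grader     = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice_coefficient(model_pred, grader))  # 2*3 / (4+4) = 0.75
```

A DSC of 1·0 indicates identical masks; the study's median DSC of 0·96 against consensus grading therefore reflects near-complete overlap per B-scan.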
INTERPRETATION: We present a fully developed and validated deep-learning composite model for segmentation of geographic atrophy and its subtypes that achieves performance at a similar level to manual specialist assessment. Fully automated analysis of retinal OCT from routine clinical practice could provide a promising horizon for diagnosis and prognosis in both research and real-life patient care, following further clinical validation.
Original language: English
Pages (from-to): e665-e675
Journal: The Lancet Digital Health
Volume: 3
Issue number: 10
Early online date: 8 Sept 2021
DOIs
Publication status: Published - 1 Oct 2021

Bibliographical note

Funding: Apellis Pharmaceuticals
