Catalogue Search | MBRL
3,584 result(s) for "Fundus oculi"
Retinal age gap as a predictive biomarker for mortality risk
2023
Aim: To develop a deep learning (DL) model that predicts age from fundus images (retinal age) and to investigate the association between the retinal age gap (retinal age predicted by the DL model minus chronological age) and mortality risk.

Methods: A total of 80,169 fundus images of reasonable quality, taken from 46,969 participants in the UK Biobank, were included in this study. Of these, 19,200 fundus images from 11,052 participants without prior medical history at the baseline examination were used to train and validate the DL model for age prediction using fivefold cross-validation. A total of 35,913 of the remaining 35,917 participants had available mortality data and were used to investigate the association between retinal age gap and mortality.

Results: The DL model achieved a strong correlation of 0.81 (p<0.001) between retinal age and chronological age, and an overall mean absolute error of 3.55 years. Cox regression models showed that each 1-year increase in the retinal age gap was associated with a 2% increase in risk of all-cause mortality (hazard ratio (HR)=1.02, 95% CI 1.00 to 1.03, p=0.020) and a 3% increase in risk of cause-specific mortality attributable to non-cardiovascular and non-cancer disease (HR=1.03, 95% CI 1.00 to 1.05, p=0.041) after multivariable adjustments. No significant association was identified between retinal age gap and cardiovascular- or cancer-related mortality.

Conclusions: Our findings indicate that the retinal age gap might be a potential biomarker of ageing that is closely related to risk of mortality, implying the potential of retinal images as a screening tool for risk stratification and delivery of tailored interventions.
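The retinal age gap and the per-year hazard scaling reported above can be sketched as follows (a minimal illustration, not the authors' code; the example ages are invented, and only the HR=1.02 figure comes from the abstract):

```python
# Retinal age gap: DL-predicted retinal age minus chronological age.
# Under a Cox model, a hazard ratio (HR) per 1-year increase compounds
# multiplicatively, so a gap of k years scales the hazard by HR**k.

def retinal_age_gap(predicted_age: float, chronological_age: float) -> float:
    """Positive gap -> the retina looks 'older' than the person."""
    return predicted_age - chronological_age

def hazard_multiplier(gap_years: float, hr_per_year: float = 1.02) -> float:
    """Relative all-cause mortality hazard implied by the gap."""
    return hr_per_year ** gap_years

gap = retinal_age_gap(predicted_age=68.0, chronological_age=63.0)  # hypothetical ages
print(gap)                               # 5.0
print(round(hazard_multiplier(gap), 3))  # 1.02**5 ≈ 1.104
```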
Journal Article
Artificial Intelligence to Detect Papilledema from Ocular Fundus Photographs
by
Zhubo, Jiang
,
Gohier, Philippe
,
Hamann, Steffen
in
Algorithms
,
Area Under Curve
,
Artificial intelligence
2020
A deep-learning system that was applied to 14,341 fundus photographs differentiated optic disks with papilledema from normal disks with 96.4% sensitivity and 84.7% specificity in an external-testing data set. The prevalence of papilledema was 9.5%, yielding positive and negative predictive values of 39.8% and 99.6%, respectively.
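The reported predictive values follow directly from sensitivity, specificity, and prevalence via Bayes' rule; a quick check using the figures above (the helper function is mine, not from the paper):

```python
def ppv_npv(sens: float, spec: float, prev: float) -> tuple[float, float]:
    """Positive/negative predictive value from sensitivity, specificity, prevalence."""
    tp = sens * prev              # true positive rate in the population
    fp = (1 - spec) * (1 - prev)  # false positives among the healthy
    tn = spec * (1 - prev)        # true negatives
    fn = (1 - sens) * prev        # missed cases
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = ppv_npv(sens=0.964, spec=0.847, prev=0.095)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 39.8%, NPV = 99.6%
```

This reproduces the 39.8% and 99.6% reported in the abstract.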
Journal Article
An Efficient Deep Learning Approach to Automatic Glaucoma Detection Using Optic Disc and Optic Cup Localization
2022
Glaucoma is an eye disease caused by excessive intraocular pressure and leads to complete blindness at its advanced stage, whereas timely glaucoma screening and treatment can save the patient from complete vision loss. Accurate screening procedures depend on the availability of human experts who perform manual analysis of retinal samples to identify the glaucomatous-affected regions. However, due to complex glaucoma screening procedures and a shortage of human resources, delays are common, which can increase the rate of vision loss around the globe. To cope with the challenges of manual systems, there is an urgent demand for an effective automated framework that can accurately identify Optic Disc (OD) and Optic Cup (OC) lesions at the earliest stage. Efficient and effective identification and classification of glaucomatous regions is a complicated job due to the wide variations in the mass, shade, orientation, and shape of lesions. Furthermore, the extensive similarity between lesion and eye color further complicates the classification process. To overcome the aforementioned challenges, we present a Deep Learning (DL)-based approach, namely EfficientDet-D0 with EfficientNet-B0 as the backbone. The presented framework comprises three steps for glaucoma localization and classification. Initially, deep features from the suspected samples are computed with the EfficientNet-B0 feature extractor. Then, the Bi-directional Feature Pyramid Network (BiFPN) module of EfficientDet-D0 takes the computed features from EfficientNet-B0 and performs top-down and bottom-up keypoint fusion several times. In the last step, the resultant localized area containing the glaucoma lesion, with its associated class, is predicted. We have confirmed the robustness of our work by evaluating it on a challenging dataset, namely an online retinal fundus image database for glaucoma analysis (ORIGA). Furthermore, we have performed cross-dataset validation on the High-Resolution Fundus (HRF) and Retinal Image database for Optic Nerve Evaluation (RIM-ONE DL) datasets to show the generalization ability of our work. Both the numeric and visual evaluations confirm that EfficientDet-D0 outperforms the latest frameworks and is more proficient in glaucoma classification.
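The BiFPN fusion step mentioned above combines feature maps with learnable, normalized weights. A minimal numpy sketch of this "fast normalized fusion" idea (illustrative only; real EfficientDet fuses resized multi-scale convolutional features, and the toy maps and weights here are invented):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Weighted fusion of same-shape feature maps, BiFPN-style:
    out = sum_i(w_i * f_i) / (sum_i(w_i) + eps), with w_i >= 0 via ReLU."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU keeps weights non-negative
    fused = sum(wi * f for wi, f in zip(w, features))
    return fused / (w.sum() + eps)

# Two toy 2x2 "feature maps" fused with invented, equal weights.
f1 = np.ones((2, 2))
f2 = np.full((2, 2), 3.0)
out = fast_normalized_fusion([f1, f2], weights=[1.0, 1.0])
print(out)  # each entry ≈ (1 + 3) / (2 + 1e-4) ≈ 2.0
```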
Journal Article
Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks
by
Yang, Jian-Feng
,
Huang, Yuqiang
,
Zheng, Dezhi
in
631/114/1305
,
692/53/2421
,
692/699/3161/3175
2021
Retinal fundus diseases can lead to irreversible visual impairment without timely diagnoses and appropriate treatments. Single-disease deep learning algorithms have been developed for the detection of diabetic retinopathy, age-related macular degeneration, and glaucoma. Here, we developed a deep learning platform (DLP) capable of detecting multiple common referable fundus diseases and conditions (39 classes) by using 249,620 fundus images marked with 275,543 labels from heterogeneous sources. Our DLP achieved a frequency-weighted average F1 score of 0.923, sensitivity of 0.978, specificity of 0.996 and area under the receiver operating characteristic curve (AUC) of 0.9984 for multi-label classification in the primary test dataset and reached the average level of retina specialists. External multihospital tests, public data tests and a tele-reading application also showed high efficiency for the detection of multiple retinal diseases and conditions. These results indicate that our DLP can be applied for retinal fundus disease triage, especially in remote areas around the world.
Systems for automatic detection of a single disease may miss other important conditions. Here, the authors show a deep learning platform can detect 39 common retinal diseases and conditions.
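The frequency-weighted average F1 reported above weights each class's F1 by its share of the labels; a small pure-Python sketch (the toy labels below are invented, not the study's data):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to class frequency."""
    support = Counter(y_true)
    total, score = len(y_true), 0.0
    for cls, n in support.items():
        tp = sum(t == p == cls for t, p in zip(y_true, y_pred))
        fp = sum(p == cls and t != cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (n / total) * f1  # weight by class frequency
    return score

y_true = ["DR", "DR", "AMD", "glaucoma", "DR", "AMD"]
y_pred = ["DR", "AMD", "AMD", "glaucoma", "DR", "AMD"]
print(round(weighted_f1(y_true, y_pred), 3))  # 0.833
```

This matches scikit-learn's `f1_score(..., average="weighted")`.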
Journal Article
Pachychoroid disease
2019
Pachychoroid is a relatively novel concept describing a phenotype characterized by attenuation of the choriocapillaris overlying dilated choroidal veins, and associated with progressive retinal pigment epithelium dysfunction and neovascularization. The emphasis in defining pachychoroid-related disorders has shifted away from simply an abnormally thick choroid (pachychoroid) toward a detailed morphological definition of a pathologic state (pachychoroid disease) with functional implications, which will be discussed in this review. Several clinical manifestations have been described to reside within the pachychoroid disease spectrum, including central serous chorioretinopathy, pachychoroid pigment epitheliopathy, pachychoroid neovasculopathy, polypoidal choroidal vasculopathy/aneurysmal type 1 neovascularization, focal choroidal excavation, and peripapillary pachychoroid syndrome. These conditions all exhibit the characteristic choroidal alterations and are believed to represent different manifestations of a common pathogenic process. This review is based on both the current literature and the clinical experience of our individual authors, with an emphasis on the clinical and imaging features, management considerations, and current understanding of the pathogenesis of these disorders within the context of recent findings related to pachychoroid disease.
Journal Article
Automatic interpretation and clinical evaluation for fundus fluorescein angiography images of diabetic retinopathy patients by deep learning
2023
Background/aims: Fundus fluorescein angiography (FFA) is an important technique to evaluate diabetic retinopathy (DR) and other retinal diseases. The interpretation of FFA images is complex and time-consuming, and diagnostic ability is uneven among ophthalmologists. The aim of the study is to develop a clinically usable multilevel classification deep learning model for FFA images, including prediagnosis assessment and lesion classification.

Methods: A total of 15,599 FFA images of 1,558 eyes from 845 patients diagnosed with DR were collected and annotated. Three convolutional neural network (CNN) models were trained to generate labels for image quality, location, laterality of eye, phase and five lesions. Performance of the models was evaluated by accuracy, F1 score, the area under the curve and human-machine comparison. The images with false positive and false negative results were analysed in detail.

Results: Compared with LeNet-5 and VGG16, ResNet18 achieved the best result, with an accuracy of 80.79%–93.34% for prediagnosis assessment and an accuracy of 63.67%–88.88% for lesion detection. The human-machine comparison showed that the CNN had accuracy similar to that of junior ophthalmologists. The false positive and false negative analysis indicated a direction for improvement.

Conclusion: This is the first study to perform automated standardised labelling of FFA images. Our model can be applied in clinical practice and will contribute to the development of intelligent diagnosis of FFA images.
Journal Article
Deep learning-based automated detection for diabetic retinopathy and diabetic macular oedema in retinal fundus photographs
2022
Objectives: To present and validate a deep ensemble algorithm to detect diabetic retinopathy (DR) and diabetic macular oedema (DMO) using retinal fundus images.

Methods: A total of 8,739 retinal fundus images were collected from a retrospective cohort of 3,285 patients. For detecting DR and DMO, a multiple improved Inception-v4 ensembling approach was developed. We measured the algorithm's performance and compared it with that of human experts on our primary dataset, while its generalization was assessed on the publicly available Messidor-2 dataset. We also systematically investigated the impact of the size and number of input images used in training on the model's performance. Further, the time budget of training/inference versus model performance was analyzed.

Results: On our primary test dataset, the model achieved an AUC of 0.992 (95% CI, 0.989–0.995), corresponding to a sensitivity of 0.925 (95% CI, 0.916–0.936) and a specificity of 0.961 (95% CI, 0.950–0.972) for referable DR, while the sensitivity and specificity for ophthalmologists ranged from 0.845 to 0.936, and from 0.912 to 0.971, respectively. For referable DMO, our model generated an AUC of 0.994 (95% CI, 0.992–0.996) with a sensitivity of 0.930 (95% CI, 0.919–0.941) and a specificity of 0.971 (95% CI, 0.965–0.978), whereas ophthalmologists obtained sensitivities ranging between 0.852 and 0.946, and specificities ranging between 0.926 and 0.985.

Conclusion: This study showed that the deep ensemble model exhibited excellent performance in detecting DR and DMO, with good robustness and generalization, and could potentially help support and expand DR/DMO screening programs.
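An ensemble like the one described (multiple improved Inception-v4 models) is typically combined by averaging per-model class probabilities; a hedged numpy illustration (plain averaging is assumed here and may differ from the authors' exact combination rule; the softmax outputs are invented):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average class probabilities from several models, then take argmax."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg, int(np.argmax(avg))

# Toy softmax outputs from three hypothetical models for one image:
# classes = [no referable DR, referable DR]
p1 = np.array([0.30, 0.70])
p2 = np.array([0.40, 0.60])
p3 = np.array([0.20, 0.80])
avg, label = ensemble_predict([p1, p2, p3])
print(avg, label)  # averaged probabilities ≈ [0.3 0.7], predicted class 1
```

Averaging softmax outputs smooths out disagreements between individual models, which is one reason ensembles tend to be more robust than any single member.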
Journal Article
Persistence of Ebola Virus in Ocular Fluid during Convalescence
by
Varkey, Jay B
,
Ribner, Bruce S
,
Kumar, Gokul
in
Adult
,
Aqueous Humor - virology
,
Convalescence
2015
In this report, Ebola virus was cultured from aqueous humor 14 weeks after disease onset and 9 weeks after resolution of viremia, a finding that indicates the potential for delayed clearance of the virus from immune-privileged sites.
The current outbreak of EVD is believed to have begun in December 2013 [1]. As of April 26, 2015, a total of 26,312 cases of EVD (including 10,899 deaths) had been reported in six countries in West Africa (i.e., Sierra Leone, Liberia, Guinea, Mali, Nigeria, and Senegal), the United States, the United Kingdom, and Spain [2]. The outbreak has also resulted in the largest number of EVD survivors in history. Among survivors of EVD, late complications that include ocular disease can develop during convalescence [3,4]. However, few systematic studies have been conducted on post-EVD sequelae, so the incidence and clinical manifestations of . . .
Journal Article
Predicting sex from retinal fundus photographs using automated deep learning
by
Wagner, Siegfried K.
,
Liu, Xiaoxuan
,
Pontikos, Nikolas
in
639/705/117
,
692/308/575
,
Algorithms
2021
Deep learning may transform health care, but model development has largely depended on the availability of advanced technical expertise. Herein we present the development of a deep learning model by clinicians without coding, which predicts reported sex from retinal fundus photographs. A model was trained on 84,743 retinal fundus photos from the UK Biobank dataset. External validation was performed on 252 fundus photos from a tertiary ophthalmic referral center. For internal validation, the area under the receiver operating characteristic curve (AUROC) of the code-free deep learning (CFDL) model was 0.93. Sensitivity, specificity, positive predictive value (PPV) and accuracy (ACC) were 88.8%, 83.6%, 87.3% and 86.5%, and for external validation were 83.9%, 72.2%, 78.2% and 78.6%, respectively. Clinicians are currently unaware of distinct retinal feature variations between males and females, highlighting the importance of model explainability for this task. The model performed significantly worse when foveal pathology was present in the external validation dataset (ACC 69.4%, vs 85.4% in healthy eyes; OR (95% CI): 0.36 (0.19, 0.70), p = 0.0022), suggesting the fovea is a salient region for model performance. Automated machine learning (AutoML) may enable clinician-driven automated discovery of novel insights and disease biomarkers.
Journal Article
Retinal fundus image super-resolution based on generative adversarial network guided with vascular structure prior
2024
Many ophthalmic and systemic diseases can be screened by analyzing retinal fundus images. The clarity and resolution of retinal fundus images directly determine the effectiveness of clinical diagnosis. Deep learning methods based on generative adversarial networks are used in various research fields due to their powerful generative capabilities, especially image super-resolution. Although Real-ESRGAN is a recently proposed method that excels in processing real-world degraded images, it suffers from structural distortions when super-resolving retinal fundus images, which are rich in structural information. To address this shortcoming, we first process the input image using a pre-trained U-Net model to obtain a structural segmentation map of the retinal vessels and use the segmentation map as the structural prior. The spatial feature transform layer is then used to better integrate the structural prior into the generation process of the generator. In addition, we introduce channel and spatial attention modules into the skip connections of the discriminator to emphasize meaningful features and accordingly enhance the discriminative power of the discriminator. On top of the original loss functions, we introduce an L1 loss that measures the pixel-level differences between the segmentation maps of retinal vascular structures in the high-resolution images and the super-resolution images, further constraining the super-resolution output. Simulation results on retinal image datasets show that our improved algorithm achieves better visual performance by suppressing structural distortions in the super-resolution images.
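The added L1 term described above penalizes pixel-wise differences between vessel segmentation maps of the high-resolution image and the super-resolved output; a minimal numpy sketch (the tiny maps below are invented for illustration):

```python
import numpy as np

def structural_l1_loss(seg_hr, seg_sr):
    """Mean absolute difference between two vessel segmentation maps."""
    return float(np.mean(np.abs(seg_hr - seg_sr)))

seg_hr = np.array([[1.0, 0.0], [0.0, 1.0]])  # vessel map from the HR image
seg_sr = np.array([[0.8, 0.1], [0.0, 0.9]])  # vessel map from the SR output
print(round(structural_l1_loss(seg_hr, seg_sr), 3))  # (0.2+0.1+0.0+0.1)/4 = 0.1
```

Minimizing this term pushes the generator to preserve the vascular structure of the original image, not just its pixel intensities.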
Journal Article