Catalogue Search | MBRL
Explore the vast range of titles available.
2,326 result(s) for "fundus imaging"
Smart Phone based Fundus Imaging for Diabetic Retinopathy Detection
2024
INTRODUCTION: Diabetic retinopathy (DR) is one of the consequences of diabetes which, if untreated, may lead to loss of vision. Generally, for DR detection, retinal images are obtained using a traditional fundus camera. A recent trend in the acquisition of eye fundus images is the use of smartphones to acquire images. OBJECTIVES: This paper focuses on the study of existing works which incorporated smartphones for obtaining fundus images and the various devices available in the market. The common datasets used for carrying out DR detection using smartphone-based fundus images, as well as the classification models used for the diagnosis of DR, are also explored. METHODS: A search of information was carried out on articles based on DR detection from fundus images published in the state-of-the-art literature. RESULTS: The majority of the works use smartphone-based fundus imaging (SBFI) devices such as a 20D lens, EyeExaminer, etc. to obtain fundus images. The common databases used for the study are EyePACS, Messidor, etc., and the classification models mostly rely on deep learning frameworks. CONCLUSION: The use of smartphones for capturing fundus images for DR detection is explored. Smartphone devices, datasets used for the study, and currently available classification models for SBFI-based DR detection are discussed in detail. This paper portrays various approaches currently being employed in SBFI-based DR detection.
Journal Article
Longitudinal fundus imaging and its genome-wide association analysis provide evidence for a human retinal aging clock
2023
Biological age, distinct from an individual’s chronological age, has been studied extensively through predictive aging clocks. However, these clocks have limited accuracy in short time-scales. Here we trained deep learning models on fundus images from the EyePACS dataset to predict individuals’ chronological age. Our retinal aging clock, ‘eyeAge’, predicted chronological age more accurately than other aging clocks (mean absolute error of 2.86 and 3.30 years on quality-filtered data from EyePACS and UK Biobank, respectively). Additionally, eyeAge was independent of blood marker-based measures of biological age, maintaining an all-cause mortality hazard ratio of 1.026 even when adjusted for phenotypic age. The individual-specific nature of eyeAge was reinforced via multiple GWAS hits in the UK Biobank cohort. The top GWAS locus was further validated via knockdown of the fly homolog, Alk, which slowed age-related decline in vision in flies. This study demonstrates the potential utility of a retinal aging clock for studying aging and age-related diseases and quantitatively measuring aging on very short time-scales, opening avenues for quick and actionable evaluation of gero-protective therapeutics.
Journal Article
Development and validation of a high-resolution hyperspectral imaging system for the retina
2026
Early detection of Alzheimer's disease, diabetic retinopathy, or macular degeneration with advanced retinal imaging technologies can help improve patient care and treatment outcomes.
We aim to create a high-resolution hyperspectral imaging (HSI) system for the retina. Retinal vessel diameter and oxygenation rate will be extracted simultaneously from HSI data.
Our hyperspectral retinal imaging system consists of a snapshot hyperspectral camera, a high-resolution RGB camera, a beamsplitter, and an imaging endoscope. Multiple pansharpening algorithms, including deep learning methods, were developed to generate high-resolution hyperspectral images that were further used for the measurement of vessel size and oxygenation rate in mice.
The hyperspectral retinal imaging system was tested for its spatial resolution and spectral fidelity in retina phantoms. In vivo imaging experiments were performed in mice. The deep learning-based pansharpening algorithm achieved a root mean square error (RMSE) of …, a correlation coefficient (CC) of …, a spectral angle score of … radians, and an error relative global dimensionless synthesis (ERGAS) score of …. Oxygen saturation and lumen diameters of blood vessels were measured in the retina. The average lumen diameter of the venules was …, whereas the average lumen diameter of the arterioles was …. The average arteriole oxygen saturation was 98%, whereas the average venule oxygen saturation was 58%.
A high-resolution hyperspectral imaging system was developed and validated for retina imaging and measurement of blood vessels and oxygen saturation.
Journal Article
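The pansharpening quality metrics named in this abstract (RMSE, CC, spectral angle, and ERGAS) have standard definitions. A minimal NumPy sketch of those definitions, not drawn from the paper's own code, might look like this (the `ratio` parameter in ERGAS is an assumed resolution ratio between the sharp and hyperspectral pixels):

```python
import numpy as np

def rmse(ref, est):
    """Root mean square error between reference and estimated images."""
    return float(np.sqrt(np.mean((ref - est) ** 2)))

def correlation_coefficient(ref, est):
    """Pearson correlation coefficient between the flattened images."""
    return float(np.corrcoef(ref.ravel(), est.ravel())[0, 1])

def spectral_angle(ref, est, eps=1e-12):
    """Mean spectral angle in radians over pixels; images have shape (H, W, bands)."""
    dot = np.sum(ref * est, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1)
    return float(np.mean(np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))))

def ergas(ref, est, ratio=4):
    """ERGAS score: 100/ratio * RMS of per-band relative RMSE."""
    bands = ref.shape[-1]
    per_band = [(rmse(ref[..., k], est[..., k]) / np.mean(ref[..., k])) ** 2
                for k in range(bands)]
    return float(100.0 / ratio * np.sqrt(np.mean(per_band)))
```

Lower RMSE, spectral angle, and ERGAS and a CC near 1 indicate a better-sharpened image; a perfect reconstruction scores 0, 0, 0, and 1 respectively.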
An empirical study of preprocessing techniques with convolutional neural networks for accurate detection of chronic ocular diseases using fundus images
by Surya, Divyalakshmi Kaiyoor; Mayya, Veena; Acharya, U Rajendra
in Age related diseases; Artificial neural networks; Blood vessels
2023
Chronic Ocular Diseases (COD) such as myopia, diabetic retinopathy, age-related macular degeneration, glaucoma, and cataract can affect the eye and may even lead to severe vision impairment or blindness. According to a recent World Health Organization (WHO) report on vision, at least 2.2 billion individuals worldwide suffer from vision impairment. Often, overt signs indicative of COD do not manifest until the disease has progressed to an advanced stage. However, if COD is detected early, vision impairment can be avoided by early intervention and cost-effective treatment. Ophthalmologists are trained to detect COD by examining certain minute changes in the retina, such as microaneurysms, macular edema, hemorrhages, and alterations in the blood vessels. The range of eye conditions is diverse, and each of these conditions requires a unique patient-specific treatment. Convolutional neural networks (CNNs) have demonstrated significant potential in multi-disciplinary fields, including the detection of a variety of eye diseases. In this study, we combined several preprocessing approaches with convolutional neural networks to accurately detect COD in eye fundus images. To the best of our knowledge, this is the first work that provides a qualitative analysis of preprocessing approaches for COD classification using CNN models. Experimental results demonstrate that CNNs trained on region-of-interest segmented images outperform models trained on the original input images by a substantial margin. Additionally, an ensemble of three preprocessing techniques outperformed other state-of-the-art approaches by 30% and 3% in terms of Kappa and F1 scores, respectively. The developed prototype has been extensively tested and can be evaluated on more comprehensive COD datasets for deployment in a clinical setting.
Journal Article
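The Kappa and F1 scores this abstract reports gains against can be checked from their textbook definitions. A hedged sketch (the confusion matrix in the usage example is hypothetical, not the paper's data):

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows: true, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                       # observed agreement
    expected = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total ** 2  # chance agreement
    return float((observed - expected) / (1.0 - expected))

def f1_binary(cm):
    """F1 score for the positive class of a 2x2 matrix [[TN, FP], [FN, TP]]."""
    tn, fp, fn, tp = np.asarray(cm, dtype=float).ravel()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return float(2 * precision * recall / (precision + recall))

# Hypothetical 2-class confusion matrix for illustration only:
cm = [[40, 10],
      [5, 45]]
```

Kappa discounts chance agreement, which is why it moves much more than F1 on imbalanced multi-class data; a 30% Kappa gain alongside a 3% F1 gain, as quoted above, is consistent with that behavior.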
Optical coherence tomography surpasses fundus imaging and intracranial pressure measurement in monitoring idiopathic intracranial hypertension
2025
We aim to evaluate the retinal nerve fiber layer (RNFL) thickness measured with optical coherence tomography (OCT) in comparison with papilledema grade, and to assess the relationship between RNFL thickness, papilledema grade, and intracranial pressure (ICP) in idiopathic intracranial hypertension (IIH). Sixty-five patients with active IIH (AIIH) with papilledema, 39 with chronic IIH (CIIH) without papilledema, and 80 healthy controls (HC) were examined with OCT and fundus imaging. RNFL thickness, papilledema grade, and ICP level were assessed in 55 patients with AIIH and 26 with CIIH. RNFL thickness was significantly higher in AIIH compared to CIIH or HC. RNFL thickness correlated strongly with papilledema grade (coefficient 0.78, p < 0.01) and moderately with ICP (coefficient 0.569, p < 0.01). RNFL thickness was associated with papilledema progression (R² = 0.656, p < 0.01): specifically, with increases of 9 µm from normal to mild grade (p > 0.05), 91 µm from normal to moderate (p < 0.01), and 214 µm from normal to severe (p < 0.01). ICP showed a weaker correlation with papilledema grades (R² = 0.339, p < 0.05), with a significant increase (8 cm H₂O, p < 0.01) only from normal to severe papilledema. RNFL correlated strongly with papilledema grade and moderately with ICP levels. RNFL thickness increased proportionally per papilledema grade.
Journal Article
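The coefficients of determination quoted in this abstract (R² = 0.656 for RNFL vs. papilledema grade, R² = 0.339 for ICP) summarize how well a least-squares line explains the measurements. A small sketch of that computation, using invented data points rather than the study's measurements:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares line fit of y on x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)          # fit y = slope*x + intercept
    predicted = slope * x + intercept
    residual = np.sum((y - predicted) ** 2)          # unexplained variance
    total = np.sum((y - np.mean(y)) ** 2)            # total variance
    return float(1.0 - residual / total)

# Hypothetical (grade, thickness) pairs for illustration only:
grades = [0, 1, 2, 3]
thickness_um = [100, 112, 190, 310]
```

R² of 1 means the line explains all variance; values such as 0.339 indicate that most of the spread (here, in ICP across papilledema grades) is not captured by the linear trend.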
Development of a deep-learning system for detection of lattice degeneration, retinal breaks, and retinal detachment in tessellated eyes using ultra-wide-field fundus images: a pilot study
2021
Purpose: To investigate the detection of lattice degeneration, retinal breaks, and retinal detachment in tessellated eyes using an ultra-wide-field fundus imaging system (Optos) with convolutional neural network technology. Methods: This study included 1500 Optos color images for tessellated fundus confirmation and peripheral retinal lesion (lattice degeneration, retinal breaks, and retinal detachment) assessment. Three retinal specialists evaluated all images and proposed the reference standard when an agreement was achieved. Then, 722 images were used to train and verify a combined deep-learning system of 3 optimal binary classification models trained using the seResNext50 algorithm with 2 preprocessing methods (original resizing and cropping), and a test set of 189 images was applied to verify the performance compared to the reference standard. Results: With the optimal preprocessing approach (original resizing method for lattice degeneration and retinal detachment, cropping method for retinal breaks), the combined deep-learning system exhibited an area under the curve of 0.888, 0.953, and 1.000 for detection of lattice degeneration, retinal breaks, and retinal detachment, respectively, in tessellated eyes. The referral accuracy of this system was 79.8% compared to the reference standard. Conclusion: A deep-learning system is feasible for detecting lattice degeneration, retinal breaks, and retinal detachment in tessellated eyes using ultra-wide-field images, and this system may be considered for screening and telemedicine.
Journal Article
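The area-under-curve figures above (0.888, 0.953, 1.000) refer to the ROC AUC, which for a binary classifier equals the probability that a random positive case receives a higher score than a random negative one. A dependency-free sketch of that rank-based formulation (the example scores are invented, not the study's outputs):

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney formulation: fraction of positive/negative
    pairs where the positive scores higher (ties count as half a win)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: perfectly ranked scores give AUC = 1.0, matching the
# abstract's best case for retinal detachment.
print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))
```

An AUC of 1.000, as reported for retinal detachment, means every detachment image in the test set was scored above every non-detachment image.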
Detection and diagnosis of diabetic retinopathy in retinal fundus images using agentic AI approaches
2025
In today’s world, Diabetic Retinopathy (DR) remains a leading cause of vision loss globally, necessitating early detection and accurate diagnosis for timely intervention. Traditional machine learning and deep learning-based approaches, while effective, often suffer from issues such as limited interpretability, static decision-making, and inadequate generalization across diverse patient data. This research introduces an Agentic-AI Driven Framework for Diabetic Retinopathy Analysis (AADR-AI), which leverages intelligent agent-based learning mechanisms to enhance decision-making autonomy, dynamic adaptability, and contextual understanding of retinal fundus images. The novelty lies in incorporating agentic intelligence principles, autonomy, reactivity, and proactivity into DR detection systems, allowing real-time analysis and adaptive feature learning based on patient-specific variations. The proposed AADR-AI framework integrates a multi-agent ensemble of convolutional and transformer-based networks, coordinated through a decision fusion layer for robust classification. Key contributions include improved classification accuracy (up to 96.7%), enhanced model efficiency with reduced computational overhead, and real-time adaptability to varying image qualities and disease progression stages. Extensive experimentation on benchmark datasets demonstrates superior performance compared to existing state-of-the-art methods. This work highlights the transformative potential of agentic AI in medical imaging, paving the way for more autonomous and interpretable clinical decision-support systems.
Journal Article
Deep learning-based joint analysis of diabetic retinopathy and glaucoma in retinal fundus images
2025
In current times, diabetic retinopathy (DR) might be more difficult to diagnose when coexisting with glaucoma, since the two diseases share retinal abnormalities. Worldwide, DR is one of the most common causes of blindness. Conventional convolutional neural network (CNN)-based approaches struggle significantly with this type of co-morbid imaging due to the inherent difficulty in understanding both coarse-grained features and global correlations. The authors of this study propose a novel deep learning architecture, ViT-BiFusionDRNet-HGS, to address these limitations. It combines a Vision Transformer (ViT), which captures long-distance spatial correlations, with a Bi-Directional Feature Fusion (BFF) module, which enables the learning of semantic features from low-level textures, and is fine-tuned using the Hunger Games Search (HGS) metaheuristic. Incorporating the HGS algorithm into the model optimizes crucial hyperparameters and fusion weights, allowing for better generalization across complex fundus images, faster convergence, and more accurate lesion localization. With a classification accuracy of 98.4% and sensitivity levels higher than those of CNN, standalone ViT, and other baseline optimizers, the model demonstrated superior performance on open-source datasets for diabetic retinopathy and glaucoma fundus images. Clinically, ViT-BiFusionDRNet-HGS shows great potential as a real-time, scalable system for automated analysis of retinal abnormalities in complex diagnostic situations.
Journal Article
Macretina: a dataset, to support deep learning assisted retinopathy of prematurity diagnosis
2025
Retinopathy of Prematurity (ROP) is a vision-threatening retinal disease found in premature babies, where early diagnosis is very important to prevent irreversible vision loss. In recent years, several studies have been conducted on the development of reliable AI-based screening systems. However, due to the lack of well-annotated public datasets, most of them have been limited to experimental research using single-center datasets. In this study, we introduce Macretina, a comprehensive and expert-annotated dataset curated from 1432 retinal fundus images of 112 premature babies collected at Macretina Hospital, Indore, India. These images were captured using the 3nethra Neo wide-field retinal imaging system, commonly used for retinopathy of prematurity (ROP) screening. The dataset is specially designed to support AI-based automated ROP diagnosis and is organized into three subsets, each addressing a distinct pathologically relevant retinal feature for ROP screening. The three subsets are: Macretina-Ridge, which supports binary classification for ridge/demarcation line detection; Macretina-OD, which supports object detection for optic disc localization; and Macretina-BV, which supports semantic segmentation for blood vessel analysis. We also evaluated the utility of each subset using standard Deep Convolutional Neural Networks (DCNNs), and the experiments achieved promising results across classification, object detection, and segmentation tasks. Our dataset captures a wide range of disease severity and imaging variations, making it well-suited for developing clinically relevant and generalizable AI models.
Journal Article
Multiphoton Microscopy and Fluorescence Lifetime Imaging
by Breymayer, Jasmin; Baldeweck, Thérèse; Becker, Wolfgang
in Biology, life sciences; Cellular biology (cytology); Clinical and internal medicine
2018
This monograph focuses on modern femtosecond laser microscopes for two-photon imaging and nanoprocessing, on laser tweezers for cell micromanipulation, as well as on fluorescence lifetime imaging (FLIM) in the life sciences. The book starts with an introduction by Dr. Wolfgang Kaiser, pioneer of nonlinear optics, and ends with the chapter on clinical multiphoton tomography, the novel high-resolution imaging technique. It includes a foreword by the nonlinear microscopy expert Dr. Colin Sheppard.
Contents:
Part I: Basics. Brief history of fluorescence lifetime imaging; The long journey to the laser and its use for nonlinear optics; Advanced TCSPC-FLIM techniques; Ultrafast lasers in biophotonics.
Part II: Modern nonlinear microscopy of live cells. STED microscopy: exploring fluorescence lifetime gradients for super-resolution at reduced illumination intensities; Principles and applications of temporal-focusing wide-field two-photon microscopy; FLIM-FRET microscopy; TCSPC FLIM and PLIM for metabolic imaging and oxygen sensing; Laser tweezers are sources of two-photon effects; Metabolic shifts in cell proliferation and differentiation; Femtosecond laser nanoprocessing; Cryomultiphoton imaging.
Part III: Nonlinear tissue imaging. Multiphoton Tomography (MPT); Clinical multimodal CARS imaging; In vivo multiphoton microscopy of human skin; Two-photon microscopy and fluorescence lifetime imaging of the cornea; Multiscale correlative imaging of the brain; Revealing interaction of dyes and nanomaterials by multiphoton imaging; Multiphoton FLIM in cosmetic clinical research; Multiphoton microscopy and fluorescence lifetime imaging for resection guidance in malignant glioma surgery; Non-invasive single-photon and multi-photon imaging of stem cells and cancer cells in mouse models; Bedside assessment of multiphoton tomography.