468 result(s) for "Diagnostic tests/Investigation"
New meaning for NLP: the trials and tribulations of natural language processing with GPT-3 in ophthalmology
Natural language processing (NLP) is a subfield of machine intelligence focused on the interaction of human language with computer systems. NLP has recently been discussed in the mainstream media and the literature with the advent of Generative Pre-trained Transformer 3 (GPT-3), a language model capable of producing human-like text. The release of GPT-3 has also sparked renewed interest on the applicability of NLP to contemporary healthcare problems. This article provides an overview of NLP models, with a focus on GPT-3, as well as discussion of applications specific to ophthalmology. We also outline the limitations of GPT-3 and the challenges with its integration into routine ophthalmic care.
Prospective evaluation of an artificial intelligence-enabled algorithm for automated diabetic retinopathy screening of 30 000 patients
Background/aims: Human grading of digital images from diabetic retinopathy (DR) screening programmes represents a significant challenge, due to the increasing prevalence of diabetes. We evaluate the performance of an automated artificial intelligence (AI) algorithm to triage retinal images from the English Diabetic Eye Screening Programme (DESP) into test-positive/technical failure versus test-negative, using human grading following a standard national protocol as the reference standard.
Methods: Retinal images from 30 405 consecutive screening episodes from three English DESPs were manually graded following a standard national protocol and by an automated process with machine learning enabled software, EyeArt v2.1. Screening performance (sensitivity, specificity) and diagnostic accuracy (95% CIs) were determined using human grades as the reference standard.
Results: Sensitivity (95% CI) of EyeArt was 95.7% (94.8% to 96.5%) for referable retinopathy (human graded ungradable, referable maculopathy, moderate-to-severe non-proliferative or proliferative). This comprises sensitivities of 98.3% (97.3% to 98.9%) for mild-to-moderate non-proliferative retinopathy with referable maculopathy, 100% (98.7% to 100%) for moderate-to-severe non-proliferative retinopathy and 100% (97.9% to 100%) for proliferative disease. EyeArt agreed with the human grade of no retinopathy (specificity) in 68% (67% to 69%), with a specificity of 54.0% (53.4% to 54.5%) when combined with non-referable retinopathy.
Conclusion: The algorithm demonstrated safe levels of sensitivity for high-risk retinopathy in a real-world screening service, with specificity that could halve the workload for human graders. AI machine learning and deep learning algorithms such as this can provide clinically equivalent, rapid detection of retinopathy, particularly in settings where a trained workforce is unavailable or where large-scale and rapid results are needed.
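The screening figures above (sensitivity and specificity, each with a 95% CI, against human grading as the reference standard) can be reproduced from a 2×2 confusion matrix. A minimal Python sketch using the Wilson score interval; the counts below are made up for illustration, not the study's data:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def screening_metrics(tp, fn, tn, fp):
    """Sensitivity and specificity with Wilson 95% CIs,
    treating the human grade as ground truth."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
    }

# Hypothetical counts, chosen only to illustrate the calculation:
m = screening_metrics(tp=2254, fn=101, tn=18900, fp=8900)
```

The Wilson interval is preferred over the naive normal approximation here because sensitivities near 100% (as in the proliferative-disease stratum) would otherwise yield confidence bounds above 1.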
Keratoconus staging by decades: a baseline ABCD classification of 1000 patients in the Homburg Keratoconus Center
Background: This retrospective cross-sectional study aims to analyse the keratoconus (KC) stage distribution at different ages within the Homburg Keratoconus Center (HKC).
Methods: 1917 corneae (1000 patients) were allocated to decades of age and classified according to Belin’s ABCD KC grading system, and the stage distribution was analysed.
Results: 73% (n=728) of the patients were male and 27% (n=272) were female. The highest KC prevalence occurred between 21 and 30 years (n=585 corneae, 294 patients). Regarding anterior (A) and posterior (B) curvature, the frequency of A was significantly higher than B in all age groups for stages 0, 1 and 2 (A0>B0; A1>B1; A2>B2; p<0.03, Wilcoxon matched-pairs test). There was no significant difference between the numbers of A3 and B3, but significantly more corneae were classified as B4 than A4 in all age groups (p<0.02). The most frequent A|B combinations were A4|B4 (n=451), A0|B0 (n=311), A2|B4 (n=242), A2|B2 (n=189) and A1|B2 (n=154). Concerning thinnest pachymetry (C), most corneae in all age groups were classified as C0>C1>C2>C3>C4 (p<0.04, Wilcoxon matched-pairs test). For best distance visual acuity (D), a significantly higher number of corneae were classified as D1 compared with D0 (p<0.008; D1>D0>D2>D3>D4).
Conclusion: The stage distributions in all age groups were similar. Early KC becomes manifest in the posterior rather than the anterior corneal curvature, whereas advanced stages of posterior corneal curvature coincide with early and advanced stages of anterior corneal curvature. Thus, this study emphasises the necessity of posterior corneal surface assessment in KC, as enabled by the ABCD grading system.
Convolutional neural network to identify symptomatic Alzheimer’s disease using multimodal retinal imaging
Background/aims: To develop a convolutional neural network (CNN) to detect symptomatic Alzheimer’s disease (AD) using a combination of multimodal retinal images and patient data.
Methods: Colour maps of ganglion cell-inner plexiform layer (GC-IPL) thickness, superficial capillary plexus (SCP) optical coherence tomography angiography (OCTA) images, and ultra-widefield (UWF) colour and fundus autofluorescence (FAF) scanning laser ophthalmoscopy images were captured in individuals with AD or healthy cognition. A CNN to predict AD diagnosis was developed using multimodal retinal images, OCT and OCTA quantitative data, and patient data.
Results: 284 eyes of 159 subjects (222 eyes from 123 cognitively healthy subjects and 62 eyes from 36 subjects with AD) were used to develop the model. Area under the receiver operating characteristic curve (AUC) values for the predicted probability of AD in the independent test set varied by input: UWF colour AUC 0.450 (95% CI 0.282 to 0.592), OCTA SCP 0.582 (95% CI 0.440 to 0.724), UWF FAF 0.618 (95% CI 0.462 to 0.773) and GC-IPL maps 0.809 (95% CI 0.700 to 0.919). A model incorporating all images, quantitative data and patient data (AUC 0.836, 95% CI 0.729 to 0.943) performed similarly to a model incorporating only all images (AUC 0.829, 95% CI 0.719 to 0.939); a model using GC-IPL maps, quantitative data and patient data achieved an AUC of 0.841 (95% CI 0.739 to 0.943).
Conclusion: Our CNN used multimodal retinal images to successfully predict the diagnosis of symptomatic AD in an independent test set. GC-IPL maps were the most useful single inputs for prediction. Models including only images performed similarly to models also including quantitative data and patient data.
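The per-input AUC comparison above rests on a simple rank statistic: AUC is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A small illustrative sketch with invented scores (not the paper's model outputs):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the normalised Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs the classifier ranks correctly,
    counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy predicted probabilities for AD eyes vs cognitively healthy eyes
# (made-up numbers, purely illustrative):
ad = [0.91, 0.84, 0.70, 0.55]
healthy = [0.40, 0.35, 0.62, 0.20]
auc = auc_from_scores(ad, healthy)  # 15 of 16 pairs correct -> 0.9375
```

This O(n·m) pairwise form is the clearest statement of what an AUC of 0.450 (worse than chance, as for UWF colour above) versus 0.809 (GC-IPL maps) actually means; production code would use a sorted, rank-based equivalent.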
Normative data and percentile curves for axial length and axial length/corneal curvature in Chinese children and adolescents aged 4–18 years
Purpose: To develop age-specific and gender-specific reference percentile charts for axial length (AL) and the AL/corneal radius of curvature ratio (AL/CR), and to use these percentiles to determine the probability of myopia and estimate refractive error (RE).
Methods: Analysis of AL, cycloplegic RE and CR of 14 127 Chinese participants aged 4–18 years from 3 studies. AL and AL/CR percentiles were estimated using the Lambda-Mu-Sigma method and compared for agreement using the intraclass correlation coefficient (ICC). Logistic regression was used to model the risk of myopia based on age, gender, and AL and AL/CR percentiles. The accuracy of AL progression and RE estimated using percentiles was validated in an independent sample of 5742 eyes of children aged 7–10 years.
Results: Age-specific and gender-specific AL and AL/CR percentiles (3rd, 5th, 10th, 25th, 50th, 75th, 90th and 95th) are presented. Concordance between AL and AL/CR percentiles improved with age (0.13 at 4 years to >0.75 from 13 years), and a year-to-year change was observed for all except the <10th percentile from 15 years. Increasing age, AL and AL/CR were associated with a more myopic RE (r²=0.45, 0.70 and 0.83, respectively). The sensitivity and specificity of the model to estimate the probability of myopia were 86.0% and 84.5%, respectively. Estimation of the 1-year change in AL using percentiles correlated highly with actual AL (ICC=0.98). Concordance of estimated with actual RE was high (ICC=0.80) and within ±0.50D and ±1.0D of actual RE for 47.4% and 78.9% of eyes, respectively.
Conclusion: Age-specific and gender-specific AL and AL/CR percentiles provide reference data, aid in identifying and monitoring individuals at risk of myopia and have utility in screening for myopia. AL/CR percentiles were more accurate in estimating the probability of myopia in younger children.
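The charts above are built with the smoothed Lambda-Mu-Sigma (LMS) method; as a rough illustration of the underlying idea only, the sketch below computes raw empirical age-specific percentiles from hypothetical (age, AL) records. This is a crude, unsmoothed stand-in for LMS fitting, not the study's procedure:

```python
from collections import defaultdict

def percentile(sorted_vals, p):
    """Linear-interpolation percentile (p in [0, 100]) of a sorted list."""
    if not sorted_vals:
        raise ValueError("empty group")
    k = (len(sorted_vals) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(sorted_vals) - 1)
    frac = k - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def reference_chart(records, pcts=(3, 5, 10, 25, 50, 75, 90, 95)):
    """Empirical age-specific percentile chart from (age, AL) records,
    grouping by whole year of age."""
    by_age = defaultdict(list)
    for age, al in records:
        by_age[age].append(al)
    return {age: {p: percentile(sorted(vals), p) for p in pcts}
            for age, vals in by_age.items()}

# Hypothetical axial lengths in mm, far too few for a real chart:
data = [(7, 22.1), (7, 22.8), (7, 23.4), (7, 24.0),
        (8, 22.5), (8, 23.6), (8, 24.4)]
chart = reference_chart(data)
```

LMS additionally fits smooth curves for skewness (L), median (M) and coefficient of variation (S) across age, which is what makes the published 3rd-95th centile curves continuous rather than the jagged per-year estimates this sketch would give.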
Development and evaluation of a deep learning model for the detection of multiple fundus diseases based on colour fundus photography
Aim: To explore and evaluate an appropriate deep learning system (DLS) for the detection of 12 major fundus diseases using colour fundus photography.
Methods: The diagnostic performance of a DLS was tested on the detection of normal fundus and 12 major fundus diseases, including referable diabetic retinopathy, pathologic myopic retinal degeneration, retinal vein occlusion, retinitis pigmentosa, retinal detachment, wet and dry age-related macular degeneration, epiretinal membrane, macular hole, possible glaucomatous optic neuropathy, papilledema and optic nerve atrophy. The DLS was developed with 56 738 images and tested with 8176 images from one internal test set and two external test sets. A comparison with human doctors was also conducted.
Results: The areas under the receiver operating characteristic curves of the DLS on the internal test set and the two external test sets were 0.950 (95% CI 0.942 to 0.957) to 0.996 (95% CI 0.994 to 0.998), 0.931 (95% CI 0.923 to 0.939) to 1.000 (95% CI 0.999 to 1.000) and 0.934 (95% CI 0.929 to 0.938) to 1.000 (95% CI 0.999 to 1.000), with sensitivities of 80.4% (95% CI 79.1% to 81.6%) to 97.3% (95% CI 96.7% to 97.8%), 64.6% (95% CI 63.0% to 66.1%) to 100% (95% CI 100% to 100%) and 68.0% (95% CI 67.1% to 68.9%) to 100% (95% CI 100% to 100%), respectively, and specificities of 89.7% (95% CI 88.8% to 90.7%) to 98.1% (95% CI 97.7% to 98.6%), 78.7% (95% CI 77.4% to 80.0%) to 99.6% (95% CI 99.4% to 99.8%) and 88.1% (95% CI 87.4% to 88.7%) to 98.7% (95% CI 98.5% to 99.0%), respectively. When compared with human doctors, the DLS obtained higher diagnostic sensitivity but lower specificity.
Conclusion: The proposed DLS is effective in diagnosing normal fundus and 12 major fundus diseases, and thus has considerable potential for fundus disease screening in the real world.
Universal artificial intelligence platform for collaborative management of cataracts
Purpose: To establish and validate a universal artificial intelligence (AI) platform for the collaborative management of cataracts involving multilevel clinical scenarios, and to explore an AI-based medical referral pattern to improve collaborative efficiency and resource coverage.
Methods: The training and validation datasets were derived from the Chinese Medical Alliance for Artificial Intelligence, covering multilevel healthcare facilities and capture modes. The datasets were labelled using a three-step strategy: (1) capture mode recognition; (2) cataract diagnosis as a normal lens, cataract or a postoperative eye and (3) detection of referable cataracts with respect to aetiology and severity. Moreover, we integrated the cataract AI agent with a real-world multilevel referral pattern involving self-monitoring at home, primary healthcare and specialised hospital services.
Results: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance in the three-step tasks: (1) capture mode recognition (area under the curve (AUC) 99.28%–99.71%); (2) cataract diagnosis (normal lens, cataract or postoperative eye, with AUCs of 99.82%, 99.96% and 99.93% for mydriatic-slit lamp mode and AUCs >99% for the other capture modes) and (3) detection of referable cataracts (AUCs >91% in all tests). In the real-world tertiary referral pattern, the agent suggested that 30.3% of people be ‘referred’, substantially increasing the ophthalmologist-to-population service ratio by 10.2-fold compared with the traditional pattern.
Conclusions: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance and effective service for cataracts. Our AI-based medical referral pattern will be extended to other common disease conditions and resource-intensive situations.
Ensemble neural network model for detecting thyroid eye disease using external photographs
Purpose: To describe an artificial intelligence platform that detects thyroid eye disease (TED).
Design: Development of a deep learning model.
Methods: 1944 photographs from a clinical database were used to train a deep learning model, and 344 additional images (‘test set’) were used to calculate performance metrics. Receiver operating characteristic curves, precision–recall curves and heatmaps were generated. From the test set, 50 images were randomly selected (‘survey set’) and used to compare model performance with ophthalmologist performance. 222 images obtained from a separate clinical database were used to assess model recall and to quantitate model performance with respect to disease stage and grade.
Results: The model achieved a test set accuracy of 89.2%, specificity of 86.9%, recall of 93.4%, precision of 79.7% and an F1 score of 86.0%. Heatmaps demonstrated that the model identified pixels corresponding to clinical features of TED. On the survey set, the ensemble model achieved accuracy, specificity, recall, precision and F1 score of 86%, 84%, 89%, 77% and 82%, respectively; 27 ophthalmologists achieved mean performance of 75%, 82%, 63%, 72% and 66%, respectively. On the second test set, the model achieved a recall of 91.9%, with higher recall for moderate to severe (98.2%, n=55) and active disease (98.3%, n=60) than for mild (86.8%, n=68) or stable disease (85.7%, n=63).
Conclusions: The deep learning classifier is a novel approach to identifying TED and a first step in the development of tools to improve diagnostic accuracy and lower barriers to specialist evaluation.
Evaluation and comparison of the new swept source OCT-based IOLMaster 700 with the IOLMaster 500
Purpose: To compare the measurements and failure rates obtained with a new swept source optical coherence tomography (OCT)-based biometer, the IOLMaster 700, with those of the IOLMaster 500.
Setting: Eye Clinic, Baskent University Faculty of Medicine, Ankara, Turkey.
Design: Observational cross-sectional study and evaluation of a new diagnostic technology.
Methods: 188 eyes of 101 subjects were included in the study. Measurements of axial length (AL), anterior chamber depth (ACD) and corneal power (K1 and K2), along with the measurement failure rate, were compared between the new Zeiss IOLMaster 700 and the IOLMaster 500. The results were evaluated using Bland–Altman analyses. The differences between the two methods were assessed using the paired-samples t test, and their correlation was evaluated by the intraclass correlation coefficient (ICC).
Results: The mean age was 68.32±12.71 years and the male/female ratio was 29/72. The agreement between the two devices was outstanding for AL (ICC=1.0), ACD (ICC=0.920), K1 (ICC=0.992) and K2 (ICC=0.989). The IOLMaster 700 was able to measure ACD, AL, K1 and K2 in all eyes within the manufacturer's high-quality SD limits. The IOLMaster 500 was able to measure ACD in 175 eyes, whereas measurements were not possible in the remaining 13 eyes. AL measurements were not possible for 17 eyes with the IOLMaster 500; nine of these eyes had posterior subcapsular cataracts and eight had dense nuclear cataracts.
Conclusions: Although the agreement between the two devices was excellent, the IOLMaster 700 was more effective in obtaining biometric measurements in eyes with posterior subcapsular and dense nuclear cataracts.
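The device-agreement analysis above pairs Bland–Altman limits of agreement with ICCs. A minimal sketch of the Bland–Altman part, run on invented axial-length pairs rather than the study's measurements:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bland-Altman agreement between two paired measurement series:
    mean difference (bias) and 95% limits of agreement,
    bias +/- 1.96 * SD of the paired differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative axial-length pairs in mm (device A vs device B),
# not the study's data:
al_a = [23.41, 24.02, 22.87, 25.10, 23.65]
al_b = [23.40, 24.05, 22.85, 25.12, 23.66]
bias, loa = bland_altman(al_a, al_b)
```

Unlike a correlation or ICC, which can be high for two devices that disagree by a consistent offset, the Bland–Altman bias and limits of agreement express the disagreement directly in the measurement's own units (mm here), which is why the study reports both.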