Asset Details
Vision–language foundation model for echocardiogram interpretation
by Vukadinovic, Milos; Yuan, Neal; Christensen, Matthew; Ouyang, David
Subjects: 631/114/2398 / 692/699/75 / Aorta / Artificial Intelligence / Benchmarks / Biomedical and Life Sciences / Biomedicine / Cancer Research / Datasets / Echocardiography / Echocardiography - methods / Heart transplantation / Heart valves / Humans / Image Interpretation, Computer-Assisted / Image processing / Imaging / Infectious Diseases / Language / Metabolic Diseases / Mitral valve / Molecular Medicine / Neurosciences / Patients / Robustness / Structure-function relationships / Transplants / Ultrasonic imaging / Ultrasound / Video
2024
Journal Article
Overview
The development of robust artificial intelligence models for echocardiography has been limited by the availability of annotated clinical data. Here, to address this challenge and improve the performance of cardiac imaging models, we developed EchoCLIP, a vision–language foundation model for echocardiography that learns the relationship between cardiac ultrasound images and the interpretations of expert cardiologists across a wide range of patients and indications for imaging. After training on 1,032,975 cardiac ultrasound videos and corresponding expert text, EchoCLIP performs well on a diverse range of benchmarks for cardiac image interpretation, despite not having been explicitly trained for individual interpretation tasks. EchoCLIP can assess cardiac function (mean absolute error of 7.1% when predicting left ventricular ejection fraction in an external validation dataset) and identify implanted intracardiac devices (area under the curve (AUC) of 0.84, 0.92 and 0.97 for pacemakers, percutaneous mitral valve repair and artificial aortic valves, respectively). We also developed a long-context variant (EchoCLIP-R) using a custom tokenizer based on common echocardiography concepts. EchoCLIP-R accurately identified unique patients across multiple videos (AUC of 0.86), identified clinical transitions such as heart transplants (AUC of 0.79) and cardiac surgery (AUC of 0.77) and enabled robust image-to-text search (mean cross-modal retrieval rank in the top 1% of candidate text reports). These capabilities represent a substantial step toward understanding and applying foundation models in cardiovascular imaging for preliminary interpretation of echocardiographic findings.
A vision–language foundation model, trained on a dataset of more than 1 million echocardiogram video–text pairs, is able to assess various cardiac structural and functional parameters despite not having been directly trained on any specific image interpretation task.
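The image-to-text search metric quoted above (mean cross-modal retrieval rank) can be illustrated with a minimal sketch. This is not the authors' code: the embeddings, dimensions, and function names below are hypothetical, and it simply shows how a CLIP-style model's paired video and report embeddings are ranked against each other by cosine similarity.

```python
import numpy as np

def cosine_similarity_matrix(video_emb, text_emb):
    """Pairwise cosine similarities between video and text embedding sets."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return v @ t.T

def retrieval_ranks(video_emb, text_emb):
    """Rank (1 = best) of each video's own report among all candidate reports."""
    sims = cosine_similarity_matrix(video_emb, text_emb)
    true_scores = np.diag(sims)  # similarity of each video to its paired report
    # A report's rank = number of candidates scoring at least as high as the true one.
    return (sims >= true_scores[:, None]).sum(axis=1)

# Synthetic stand-in data: 100 paired (video, report) embeddings in 512-d,
# where each "video" embedding is a noisy copy of its report embedding.
rng = np.random.default_rng(0)
text_emb = rng.normal(size=(100, 512))
video_emb = text_emb + 0.1 * rng.normal(size=text_emb.shape)
ranks = retrieval_ranks(video_emb, text_emb)
mean_rank = ranks.mean()  # a well-aligned model drives this toward 1
```

A mean rank within the top 1% of candidates, as reported for EchoCLIP-R, would correspond to `mean_rank / len(text_emb) <= 0.01` in this toy setup.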
Publisher
Nature Publishing Group US