67 result(s) for "Yoshikawa, Takeharu"
GPT-4 Turbo with Vision fails to outperform text-only GPT-4 Turbo in the Japan Diagnostic Radiology Board Examination
Purpose: To assess the performance of GPT-4 Turbo with Vision (GPT-4TV), OpenAI's latest multimodal large language model, by comparing its ability to process both text and image inputs with that of the text-only GPT-4 Turbo (GPT-4 T) in the context of the Japan Diagnostic Radiology Board Examination (JDRBE).
Materials and methods: The dataset comprised questions from JDRBE 2021 and 2023. Six board-certified diagnostic radiologists discussed the questions and provided ground-truth answers, consulting the relevant literature as necessary. Questions were excluded if they lacked associated images, had no unanimous agreement on the answer, or included images rejected by the OpenAI application programming interface. The inputs for GPT-4TV included both text and images, whereas those for GPT-4 T were text only. Both models were deployed on the dataset, and their performance was compared using McNemar's exact test. The radiological credibility of the responses was assessed by two diagnostic radiologists, who assigned legitimacy scores on a five-point Likert scale; these scores were then used to compare model performance with Wilcoxon's signed-rank test.
Results: The dataset comprised 139 questions. GPT-4TV correctly answered 62 questions (45%), whereas GPT-4 T correctly answered 57 (41%). Statistical analysis found no significant performance difference between the two models (P = 0.44). The GPT-4TV responses received significantly lower legitimacy scores from both radiologists than the GPT-4 T responses.
Conclusion: No significant improvement in accuracy was observed when using GPT-4TV with image input compared with text-only GPT-4 T on JDRBE questions.
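The model comparison above relies on McNemar's exact test, which considers only the discordant questions (answered correctly by one model but not the other). A minimal sketch of the two-sided exact test; the discordant counts used in the example are hypothetical, since the abstract does not report them:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar test.

    b: questions model A answered correctly and model B did not
    c: questions model B answered correctly and model A did not
    Under the null hypothesis the discordant counts follow
    Binomial(b + c, 0.5), so the p-value is twice the smaller tail.
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical example: 5 vs 10 discordant questions
p = mcnemar_exact(5, 10)
```

With balanced discordant counts the p-value caps at 1.0, reflecting no evidence of a performance difference.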
Performance changes due to differences in training data for cerebral aneurysm detection in head MR angiography images
Purpose: The performance of computer-aided detection (CAD) software depends on the quality and quantity of the dataset used for machine learning. If the data characteristics at development time differ from those in practical use, the performance of the CAD software degrades. In this study, we investigated how detection performance changes with the training data for cerebral aneurysm detection software applied to head magnetic resonance angiography (MRA) images.
Materials and methods: We used three types of CAD software for cerebral aneurysm detection in MRA images, based on 3D local intensity structure analysis, graph-based features, and a convolutional neural network, respectively. For each type of CAD software, we compared three training patterns: two trained on single-site data and one trained on multisite data. We also carried out internal and external evaluations.
Results: When trained on single-site data, the performance of the CAD software fluctuated widely and unpredictably as the training dataset was changed. Training on multisite data never yielded the lowest performance among the three training patterns, for any CAD software or dataset.
Conclusion: Training cerebral aneurysm detection software on data collected from multiple sites is desirable to ensure stable software performance.
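The single-site versus multisite comparison above amounts to crossing training sources with evaluation sites. A schematic sketch of enumerating those combinations; the site names and even the two-site setup are illustrative, not taken from the paper:

```python
def evaluation_grid(sites):
    """Enumerate (training set, evaluation site, evaluation kind)
    combinations: each single-site training set plus the pooled
    multisite set, evaluated internally (site seen in training)
    and externally (held-out site)."""
    train_sets = {s: [s] for s in sites}          # single-site patterns
    train_sets["multisite"] = list(sites)         # pooled pattern
    grid = []
    for name, train in train_sets.items():
        for test_site in sites:
            kind = "internal" if test_site in train else "external"
            grid.append((name, test_site, kind))
    return grid

grid = evaluation_grid(["siteA", "siteB"])
```

With two sites this yields six experiments per CAD software, making the internal/external distinction explicit.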
Automatic detection of actionable radiology reports using bidirectional encoder representations from transformers
Background: It is essential for radiologists to communicate actionable findings to referring clinicians reliably. Natural language processing (NLP) has been shown to help identify free-text radiology reports containing actionable findings. However, the application of recent deep learning techniques to radiology reports, which could improve detection performance, has not been thoroughly examined. Moreover, the free text that clinicians enter in the ordering form (order information) has seldom been used to identify actionable reports. This study evaluates the benefits of two new approaches: (1) bidirectional encoder representations from transformers (BERT), a recent deep learning architecture in NLP, and (2) using order information in addition to radiology reports.
Methods: We performed binary classification to distinguish actionable reports (radiology reports tagged as actionable in actual radiological practice) from non-actionable ones (those without an actionable tag). We used 90,923 Japanese radiology reports from our hospital, of which 788 (0.87%) were actionable. We evaluated four methods: statistical machine learning with logistic regression (LR) and with a gradient boosting decision tree (GBDT), and deep learning with a bidirectional long short-term memory (LSTM) model and with a publicly available Japanese BERT model. Each method was used with two different inputs, radiology reports alone and pairs of order information and radiology reports; thus, eight experiments were conducted.
Results: Without order information, BERT achieved the highest area under the precision-recall curve (AUPRC), 0.5138, a statistically significant improvement over LR, GBDT, and LSTM, and the highest area under the receiver operating characteristic curve (AUROC), 0.9516. Simply coupling the order information with the radiology reports slightly increased the AUPRC of BERT but did not lead to a statistically significant improvement, possibly because of the complexity of the clinical decisions radiologists make.
Conclusions: BERT was found to be useful for detecting actionable reports. More sophisticated methods are required to use order information effectively.
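The AUPRC reported above is well suited to such a heavily imbalanced task (0.87% positives), where AUROC alone can look deceptively high. A minimal average-precision computation, a standard step-wise estimate of the area under the precision-recall curve, shown on toy data:

```python
def average_precision(labels, scores):
    """Step-wise area under the precision-recall curve:
    precision at each true positive's rank, averaged over
    all positives (recall increments of 1 / total_pos)."""
    ranked = sorted(zip(scores, labels), reverse=True)
    total_pos = sum(labels)
    tp = 0
    ap = 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            tp += 1
            ap += (tp / rank) / total_pos
    return ap

# Toy example: 3 actionable reports among 4, ranked by model score
ap = average_precision([1, 0, 1, 1], [0.9, 0.8, 0.7, 0.1])
```

A random classifier's AUPRC sits near the positive prevalence (here 0.0087), which is why 0.5138 represents a large gain.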
Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study
Previous research applying large language models (LLMs) to medicine focused on text-based information. Recently, multimodal variants of LLMs have acquired the capability of recognizing images. We aimed to evaluate the image recognition capability of generative pretrained transformer (GPT)-4V, a recent multimodal LLM developed by OpenAI, in the medical field by testing how visual information affects its ability to answer questions from the 117th Japanese National Medical Licensing Examination. We focused on 108 questions that had 1 or more images and presented GPT-4V with the same questions under two conditions: (1) with both the question text and associated images and (2) with the question text only. We then compared the difference in accuracy between the 2 conditions using the exact McNemar test. Among the 108 questions with images, GPT-4V's accuracy was 68% (73/108) when presented with images and 72% (78/108) without images (P=.36). For the 2 question categories, clinical and general, the accuracies with and without images were 71% (70/98) versus 78% (76/98; P=.21) and 30% (3/10) versus 20% (2/10; P≥.99), respectively. The additional information from the images did not significantly improve the performance of GPT-4V on the Japanese National Medical Licensing Examination.
Deep generative abnormal lesion emphasization validated by nine radiologists and 1000 chest X-rays with lung nodules
A general-purpose method of emphasizing abnormal lesions in chest radiographs, named EGGPALE (Extrapolative, Generative and General-Purpose Abnormal Lesion Emphasizer), is presented. The proposed EGGPALE method is composed of a flow-based generative model and L-infinity-distance-based extrapolation in a latent space. The flow-based model is trained using only normal chest radiographs, and an invertible mapping function from the image space to the latent space is determined. In the latent space, a given unseen image is extrapolated so that the image point moves away from the normal chest X-ray hyperplane. Finally, the moved point is mapped back to the image space, creating the corresponding emphasized image. The proposed method was evaluated in an image interpretation experiment with nine radiologists and 1,000 chest radiographs, in which both the positive (suspected lung cancer) and negative cases were validated by computed tomography examinations. The sensitivity on EGGPALE-processed images showed an average improvement of +0.0559 over the original images, with a -0.0192 change in average specificity. The area under the receiver operating characteristic curve of the ensemble of nine radiologists showed a statistically significant improvement. These results validate the feasibility of EGGPALE for enhancing abnormal lesions. Our code is available at https://github.com/utrad-ical/Eggpale.
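The core step of the method is extrapolation in latent space: push a latent point away from the hyperplane of normal images, then map it back through the invertible model. A geometric sketch of that push, using an illustrative Euclidean signed-distance move (the actual EGGPALE extrapolation is L-infinity-distance based, and the hyperplane here is a toy stand-in):

```python
def push_from_hyperplane(z, normal, bias=0.0, alpha=2.0):
    """Move latent point z away from the hyperplane n.z + b = 0
    by scaling its signed distance by alpha (illustrative Euclidean
    version of EGGPALE's L-infinity-based extrapolation)."""
    norm_sq = sum(n * n for n in normal)
    # signed distance of z from the hyperplane, in units of |normal|
    signed = (sum(n * zi for n, zi in zip(normal, z)) + bias) / norm_sq
    # add the extra (alpha - 1) * distance along the normal direction
    return [zi + (alpha - 1) * signed * n for n, zi in zip(normal, z)]

# Point at distance 1 from the plane y = 0 ends up at distance 2
z_moved = push_from_hyperplane([1.0, 1.0], [0.0, 1.0])
```

Mapping the moved point back through the flow's inverse then yields an image whose deviation from "normal" is amplified.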
Impact of CT-determined low kidney volume on renal function decline: a propensity score-matched analysis
Objectives: To investigate the relationship between low kidney volume and subsequent estimated glomerular filtration rate (eGFR) decline in an eGFR category G2 (60–89 mL/min/1.73 m2) population.
Methods: In this retrospective study, we evaluated 5531 individuals with eGFR category G2 who underwent medical checkups at our institution between November 2006 and October 2017. Exclusion criteria were absence of a follow-up visit, missing data, prior renal surgery, current renal disease under treatment, large renal masses, and horseshoe kidney. We developed a 3D U-net-based automated system for renal volumetry on CT images. Participants were grouped by a sex-specific kidney volume cutoff set at the mean minus one standard deviation. After 1:1 propensity score matching, we obtained 397 pairs of individuals in the low kidney volume (LKV) and control groups. The primary endpoint was progression of eGFR categories within 5 years, assessed using Cox regression analysis.
Results: This study included 3220 individuals (mean age, 60.0 ± 9.7 years; men, n = 2209). Kidney volume was 404.6 ± 67.1 cm3 in men and 376.8 ± 68.0 cm3 in women. The LKV cutoff was 337.5 cm3 for men and 308.8 cm3 for women. LKV was a significant risk factor for the endpoint, with an adjusted hazard ratio of 1.64 (95% confidence interval: 1.09–2.45; p = 0.02).
Conclusion: Low kidney volume may adversely affect subsequent eGFR maintenance; hence, imaging metrics may help predict eGFR decline.
Critical relevance statement: Low kidney volume is a significant predictor of reduced kidney function over time; thus, kidney volume measurements could aid in early identification of individuals at risk of declining kidney health.
Key points:
• This study explores how kidney volume affects subsequent kidney function maintenance.
• Low kidney volume was associated with estimated glomerular filtration rate decreases.
• Low kidney volume is a prognostic indicator of estimated glomerular filtration rate decline.
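The 1:1 propensity score matching above can be sketched as a greedy nearest-neighbor pairing on estimated propensity scores within a caliper. The score estimation itself (e.g. by logistic regression on the covariates) is omitted, and the scores and caliper below are illustrative, not from the study:

```python
def greedy_match(treated, controls, caliper=0.05):
    """Pair each treated propensity score with the nearest unused
    control score within the caliper (greedy 1:1 matching).
    Returns (treated_index, control_index) pairs."""
    available = list(range(len(controls)))
    pairs = []
    for t_idx, t_score in enumerate(treated):
        best, best_gap = None, caliper
        for c_idx in available:
            gap = abs(controls[c_idx] - t_score)
            if gap <= best_gap:
                best, best_gap = c_idx, gap
        if best is not None:
            available.remove(best)   # each control used at most once
            pairs.append((t_idx, best))
    return pairs

# Illustrative scores: two LKV individuals, three controls
pairs = greedy_match([0.30, 0.70], [0.32, 0.69, 0.10])
```

Unmatched treated individuals (no control within the caliper) are dropped, which is why the matched cohort (397 pairs) is smaller than the full LKV group.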
Pilot study of eruption forecasting with muography using convolutional neural network
Muography is a novel method of visualizing the internal structures of active volcanoes using high-energy, near-horizontally arriving cosmic muons. The purpose of this study is to show the feasibility of using muography to forecast eruption events with the aid of a convolutional neural network (CNN). Seven consecutive daily muographic images were fed into the CNN to compute the probability of an eruption on the eighth day, and our CNN model was trained by hyperparameter tuning with a Bayesian optimization algorithm. Using data acquired at Sakurajima volcano, Japan, as an example, the forecasting performance achieved an area under the receiver operating characteristic curve of 0.726, showing a reasonable correlation between the muographic images and eruption events. Our results suggest that muography has potential for volcanic eruption forecasting.
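The setup above is a sliding-window classification: seven consecutive daily images form the input, and the eruption flag for day eight is the label. A sketch of that windowing step; the CNN itself is omitted and the placeholder data are illustrative:

```python
def make_windows(daily_images, eruption_flags, window=7):
    """Turn a daily sequence into (7-day input, day-8 label) pairs,
    mirroring the setup of feeding seven consecutive muographic
    images to predict an eruption on the eighth day."""
    samples = []
    for start in range(len(daily_images) - window):
        x = daily_images[start:start + window]   # days t .. t+6
        y = eruption_flags[start + window]       # day t+7 label
        samples.append((x, y))
    return samples

# Ten days of placeholder images and eruption flags
imgs = [f"day{i}" for i in range(10)]
flags = [0, 0, 0, 0, 0, 0, 0, 1, 0, 1]
samples = make_windows(imgs, flags)
```

Each sample is then a 7-channel input for the CNN, with a binary eruption/no-eruption target.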
Predicting Breast Cancer Risk Using Radiomics Features of Mammography Images
Mammography images contain abundant information about not only the mammary glands but also the skin, adipose tissue, and stroma, which may reflect the risk of developing breast cancer. We aimed to establish a method to predict breast cancer risk from radiomics features of mammography images, enabling further examinations and prophylactic treatment to reduce breast cancer mortality. We used mammography images of 4000 women with breast cancer and 1000 healthy women from the ‘starting point set’ of the OPTIMAM dataset, a public dataset. We trained a Light Gradient Boosting Machine on radiomics features extracted from the mammography images of women with breast cancer (healthy side only) and of healthy women. This model was a binary classifier discriminating whether a given mammography image was the contralateral side of a woman with breast cancer, and its performance was evaluated using five-fold cross-validation. The average area under the curve across the five folds was 0.60122. Some radiomics features, such as ‘wavelet-H_glcm_Correlation’ and ‘wavelet-H_firstorder_Maximum’, showed distribution differences between the malignant and normal groups; therefore, even a single radiomics feature might reflect breast cancer risk. The odds ratio of breast cancer incidence was 7.38 in women whose estimated malignancy probability was ≥0.95. Radiomics features from mammography images can help predict breast cancer risk.
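The 7.38 odds ratio quoted above compares incidence odds between the high-probability group (estimated malignancy probability ≥0.95) and everyone else. A minimal odds-ratio helper on a toy 2×2 table; the counts are illustrative, not from the study:

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table: (a/b) / (c/d), where the
    'exposed' group is, e.g., women flagged as high risk."""
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))

# Toy table: 30 cases / 10 non-cases among high-risk women,
# 100 cases / 300 non-cases among the rest
or_value = odds_ratio(30, 10, 100, 300)
```

An odds ratio well above 1 indicates that a high predicted probability is strongly associated with actual breast cancer incidence.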
Practical Medical Image Generation with Provable Privacy Protection Based on Denoising Diffusion Probabilistic Models for High-Resolution Volumetric Images
Local differential privacy algorithms combined with deep generative models can enable secure medical image sharing among researchers in the public domain without central administrators; however, such approaches have so far been limited to generating low-resolution images, which are insufficient for diagnosis by medical doctors. To enhance the performance of deep generative models so that they can generate high-resolution medical images, we propose a large-scale diffusion model that can, for the first time, unconditionally generate high-resolution (256×256×256) volumetric medical images (head magnetic resonance images). This diffusion model has 19 billion parameters; to make it tractable to train, we divided it along the time axis into 200 submodels, each with 95 million parameters. Moreover, building on this new diffusion model, we propose a formulation of image anonymization under which the processed images satisfy provable Gaussian local differential privacy and which can generate images semantically different from the original image but belonging to the same class. We believe that this new diffusion model formulation, together with the implementation of local differential privacy algorithms combined with diffusion models, can contribute to the secure sharing of practical images upstream of data processing.
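The 19-billion-parameter model becomes trainable by partitioning it along the diffusion time axis into 200 submodels of about 95 million parameters each, so only one submodel needs to be in memory per range of timesteps. A sketch of mapping a diffusion timestep to its submodel; the total step count and the even partitioning are assumptions for illustration, not details from the paper:

```python
def submodel_index(t, total_steps=1000, n_submodels=200):
    """Map diffusion timestep t (0-based) to the index of the
    submodel responsible for it, assuming the time axis is split
    evenly across submodels (illustrative assumption)."""
    steps_per_model = total_steps // n_submodels  # 5 steps each here
    return t // steps_per_model

# 19e9 parameters / 200 submodels = 95e6 parameters per submodel
first, last = submodel_index(0), submodel_index(999)
```

During sampling, the denoising loop simply swaps in the submodel returned by this mapping as t decreases, keeping the per-step memory footprint at the submodel scale.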