25 result(s) for "Riedel, Pascal"
Evaluating masked self-supervised learning frameworks for 3D dental model segmentation tasks
The application of deep learning to dental models is crucial for automated computer-aided treatment planning. However, developing highly accurate models requires a substantial amount of accurately labeled data, which is challenging to obtain, especially in the medical domain. Masked self-supervised learning has shown great promise in overcoming data scarcity, but its effectiveness has not been well explored in the 3D domain, particularly on dental models. In this work, we investigate the applicability of four recently published masked self-supervised learning frameworks (Point-BERT, Point-MAE, Point-GPT, and Point-M2AE) for improving downstream tasks such as tooth and brace segmentation. These frameworks were pre-trained on a proprietary dataset of over 4000 unlabeled 3D dental models and fine-tuned using the publicly available Teeth3DS dataset for tooth segmentation and a self-constructed braces segmentation dataset. Through a set of experiments, we demonstrate that pre-training can enhance the performance of downstream tasks, especially when training data is scarce or imbalanced, a critical factor for clinical usability. Our results show that the benefits are most noticeable when training data is limited but diminish as more labeled data becomes available, providing insights into when and how this technique should be applied to maximize its effectiveness.
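The patch-masking idea the abstract describes can be sketched in a few lines. This is a simplified illustration of Point-MAE-style masked autoencoding (random grouping instead of farthest-point sampling plus kNN; all names, sizes, and ratios are illustrative, not the frameworks' actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_point_patches(points, num_patches=64, patch_size=32, mask_ratio=0.6):
    """Split a point cloud into patches and mask a fixed ratio of them
    (simplified: random grouping instead of FPS + kNN)."""
    n = num_patches * patch_size          # points actually used
    idx = rng.permutation(len(points))[:n]
    patches = points[idx].reshape(num_patches, patch_size, 3)
    num_masked = int(mask_ratio * num_patches)
    mask = np.zeros(num_patches, dtype=bool)
    mask[rng.choice(num_patches, num_masked, replace=False)] = True
    # An encoder would see only `visible`; a decoder reconstructs `masked`.
    return patches[~mask], patches[mask], mask

cloud = rng.standard_normal((4096, 3))    # toy stand-in for a dental scan
visible, masked, mask = mask_point_patches(cloud)
print(visible.shape, masked.shape, int(mask.sum()))  # (26, 32, 3) (38, 32, 3) 38
```

In a real pipeline the encoder output on the visible patches, plus mask tokens, would feed a decoder trained with a Chamfer-distance reconstruction loss on the masked patches.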
Augmentation strategies for an imbalanced learning problem on a novel COVID-19 severity dataset
Since the beginning of the COVID-19 pandemic, many different machine learning models have been developed to detect and verify COVID-19 pneumonia based on chest X-ray images. Although promising, binary models have only limited implications for medical treatment, whereas the prediction of disease severity suggests more suitable and specific treatment options. In this study, we publish severity scores for the 2358 COVID-19 positive images in the COVIDx8B dataset, creating one of the largest collections of publicly available COVID-19 severity data. Furthermore, we train and evaluate deep learning models on the newly created dataset to provide a first benchmark for the severity classification task. One of the main challenges of this dataset is the skewed class distribution, resulting in undesirable model performance for the most severe cases. We therefore propose and examine different augmentation strategies, specifically targeting majority and minority classes. Our augmentation strategies show significant improvements in precision and recall values for the rare and most severe cases. While the models might not yet fulfill medical requirements, they serve as an appropriate starting point for further research with the proposed dataset to optimize clinical resource allocation and treatment.
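The minority-class targeting described above can be sketched as a resampling step. A minimal illustration, with invented class sizes (in a real pipeline each repeated index would receive its own random augmentation; the paper's actual transforms are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def oversample_minority(labels, target_per_class=None):
    """Build a resampled index list in which every class appears equally
    often: minority classes are repeated (each repeat would be augmented
    differently), majority classes are kept as-is."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = target_per_class or counts.max()
    index = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        reps = rng.choice(idx, size=target, replace=len(idx) < target)
        index.extend(reps.tolist())
    return index

# Skewed severity labels: 90 mild, 8 moderate, 2 severe (illustrative).
labels = [0] * 90 + [1] * 8 + [2] * 2
resampled = oversample_minority(labels)
counts = np.bincount(np.asarray(labels)[resampled])
print(counts)  # [90 90 90]
```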
Leveraging human expert image annotations to improve pneumonia differentiation through human knowledge distillation
In medical imaging, deep learning models can be a critical tool to shorten time-to-diagnosis and support specialized medical staff in clinical decision making. The successful training of deep learning models usually requires large amounts of quality data, which are often not available in many medical imaging tasks. In this work, we train a deep learning model on university hospital chest X-ray data containing 1082 images. The data was reviewed, differentiated into four causes of pneumonia, and annotated by an expert radiologist. To successfully train a model on this small amount of complex image data, we propose a special knowledge distillation process, which we call Human Knowledge Distillation. This process enables deep learning models to utilize annotated regions in the images during the training process. This form of guidance by a human expert improves model convergence and performance. We evaluate the proposed process on our study data for multiple types of models, all of which show improved results. The best model of this study, called PneuKnowNet, shows an improvement of +2.3 percentage points in overall accuracy compared to a baseline model and also leads to more meaningful decision regions. Utilizing this implicit data quality-quantity trade-off can be a promising approach for many scarce data domains beyond medical imaging.
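One plausible way to let annotated regions guide training is to add a term that penalizes model saliency falling outside the expert's annotation. The objective below is a hypothetical sketch of that idea (the function name, weighting, and saliency penalty are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def hkd_loss(logits, label, saliency, annotation, alpha=0.5):
    """Hypothetical region-guided objective: cross-entropy plus a term
    penalizing saliency mass outside the radiologist's annotated region."""
    # Cross-entropy on the class logits (numerically stable softmax).
    p = np.exp(logits - logits.max())
    p /= p.sum()
    ce = -np.log(p[label])
    # Guidance term: fraction of saliency mass outside the annotation.
    sal = saliency / saliency.sum()
    outside = sal[annotation == 0].sum()
    return ce + alpha * outside

logits = np.array([2.0, 0.1, -1.0, 0.3])     # 4 pneumonia causes
annotation = np.zeros((8, 8)); annotation[2:5, 2:5] = 1
uniform = np.ones((8, 8))                     # saliency ignoring the region
focused = annotation.copy() + 1e-9            # saliency inside the region
loss_u = hkd_loss(logits, 0, uniform, annotation)
loss_f = hkd_loss(logits, 0, focused, annotation)
print(loss_f < loss_u)  # region-aligned saliency yields the lower loss
```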
ResNetFed: Federated Deep Learning Architecture for Privacy-Preserving Pneumonia Detection from COVID-19 Chest Radiographs
Personal health data is subject to privacy regulations, making it challenging to apply centralized data-driven methods in healthcare, where personalized training data is frequently used. Federated Learning (FL) promises to provide a decentralized solution to this problem. In FL, siloed data is used for the model training to ensure data privacy. In this paper, we investigate the viability of the federated approach using the detection of COVID-19 pneumonia as a use case. 1411 individual chest radiographs, sourced from the public data repository COVIDx8, are used. The dataset contains radiographs of 753 normal lung findings and 658 COVID-19 related pneumonias. We partition the data unevenly across five separate data silos in order to reflect a typical FL scenario. For the binary image classification analysis of these radiographs, we propose ResNetFed, a pre-trained ResNet50 model modified for federation so that it supports Differential Privacy. In addition, we provide a customized FL strategy for the model training with COVID-19 radiographs. The experimental results show that ResNetFed clearly outperforms locally trained ResNet50 models. Due to the uneven distribution of the data in the silos, we observe that the locally trained ResNet50 models perform significantly worse than ResNetFed models (mean accuracies of 63% and 82.82%, respectively). In particular, ResNetFed shows excellent model performance in underpopulated data silos, achieving up to +34.9 percentage points higher accuracy compared to local ResNet50 models. Thus, with ResNetFed, we provide a federated solution that can assist the initial COVID-19 screening in medical centers in a privacy-preserving manner.
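The server-side aggregation underlying such a setup is, in its simplest form, federated averaging (FedAvg): each round, the global weights become the data-size-weighted mean of the clients' locally trained weights. A minimal sketch with flat weight vectors; the silo sizes are invented, though they total the paper's 1411 radiographs:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: data-size-weighted mean of the
    clients' weight vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()              # per-client mixing weights
    stacked = np.stack(client_weights)        # (num_clients, num_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Five silos with uneven data (sizes illustrative, summing to 1411).
sizes = [600, 400, 250, 100, 61]
clients = [np.full(3, fill) for fill in [1.0, 2.0, 3.0, 4.0, 5.0]]
global_w = fedavg(clients, sizes)
print(np.round(global_w, 3))
```

In a differentially private variant, each client would clip and noise its update before it reaches this averaging step.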
Comparative analysis of open-source federated learning frameworks - a literature-based survey and review
While Federated Learning (FL) provides a privacy-preserving approach to analyze sensitive data without centralizing training data, the field lacks a detailed comparison of emerging open-source FL frameworks. Furthermore, there is currently no standardized, weighted evaluation scheme for a fair comparison of FL frameworks that would support the selection of a suitable FL framework. This study addresses these research gaps by conducting a comparative analysis of 15 individual open-source FL frameworks filtered by two selection criteria, using the literature review methodology proposed by Webster and Watson. These framework candidates are compared using a novel scoring schema with 15 qualitative and quantitative evaluation criteria, focusing on features, interoperability, and user friendliness. The evaluation results show that the FL framework Flower outperforms its peers with an overall score of 84.75%, while Fedlearner lags behind with a total score of 24.75%. The proposed comparison suite offers valuable initial guidance for practitioners and researchers in selecting an FL framework for the design and development of FL-driven systems. In addition, the FL framework comparison suite is designed to be adaptable and extendable, accommodating the inclusion of new FL frameworks and evolving requirements.
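A weighted scoring schema of this kind reduces to a weighted mean over criterion ratings. The criteria, weights, and ratings below are invented for illustration and are not the paper's actual 15 criteria:

```python
def score_framework(ratings, weights):
    """Weighted scoring sketch: each criterion gets a 0-1 rating and a
    weight; the framework score is the weighted mean as a percentage."""
    total_w = sum(weights.values())
    score = sum(ratings[c] * w for c, w in weights.items()) / total_w
    return round(100 * score, 2)

# Illustrative criteria/weights (hypothetical, not the paper's schema).
weights = {"features": 3, "interoperability": 2,
           "documentation": 2, "community": 1}
candidate = {"features": 0.9, "interoperability": 0.85,
             "documentation": 0.8, "community": 0.9}
print(score_framework(candidate, weights))  # 86.25
```

Making weights explicit is what lets the suite be re-run as requirements shift: changing a weight re-ranks the frameworks without re-rating them.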
DilatedToothSegNet: Tooth Segmentation Network on 3D Dental Meshes Through Increasing Receptive Vision
The utilization of advanced intraoral scanners to acquire 3D dental models has gained significant popularity in the fields of dentistry and orthodontics. Accurate segmentation and labeling of teeth on digitized 3D dental surface models are crucial for computer-aided treatment planning. At the same time, manual labeling of these models is a time-consuming task. Recent advances in geometric deep learning have demonstrated remarkable efficiency in surface segmentation when applied to raw 3D models. However, segmentation of the dental surface remains challenging due to the atypical and diverse appearance of the patients’ teeth. Numerous deep learning methods have been proposed to automate dental surface segmentation. Nevertheless, they still show limitations, particularly in cases where teeth are missing or severely misaligned. To overcome these challenges, we introduce a network operator called dilated edge convolution, which enhances the network’s ability to learn additional, more distant features by expanding its receptive field. This leads to improved segmentation results, particularly in complex and challenging cases. To validate the effectiveness of our proposed method, we performed extensive evaluations on the recently published benchmark dataset for dental model segmentation, Teeth3DS. We compared our approach with several other state-of-the-art methods through a quantitative and qualitative analysis. Through these evaluations, we demonstrate the superiority of our proposed method, showcasing its ability to outperform existing approaches in dental surface segmentation.
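The receptive-field idea behind a dilated edge convolution can be illustrated by its neighbor selection: compute the k·d nearest neighbors and keep every d-th one, so edge features span larger mesh distances for the same k. This is a sketch of that selection step only, not the paper's full operator:

```python
import numpy as np

def dilated_knn(points, k=4, dilation=3):
    """Dilated neighbor selection: from the k*dilation nearest
    neighbors of each point, keep every `dilation`-th, widening the
    receptive field without increasing k."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)[:, 1:1 + k * dilation]  # drop self
    return order[:, ::dilation]                            # shape (N, k)

pts = np.random.default_rng(2).standard_normal((100, 3))
nbrs = dilated_knn(pts)
print(nbrs.shape)  # (100, 4)
```

An edge-convolution layer would then build edge features from each point and these (now more distant) neighbors, exactly as in a standard EdgeConv but with the dilated neighborhood.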
Atom-chip-based generation of entanglement for quantum metrology
Quantum measurement in a tangle: Atom interferometers, which rely on the wave properties of particles, are used in a variety of ultra-high-precision measurements, from determining the gravitational constant to defining the time standard. The precision of interferometers is generally limited by classical statistics, arising from the finite number of atoms used in the experiment. Two papers in this issue demonstrate the potential of 'spin-squeezing' in Bose–Einstein condensates (BECs) to facilitate measurements that are more precise than classical statistics allow. Using a specially prepared BEC as the input to an interferometer, Gross et al. beat the classical precision limit. In the second study, Riedel et al. create similar 'spin-squeezed' states in a BEC confined to an 'atom chip' by controlling elastic collisional interactions with a state-dependent potential. This demonstration of multi-particle entanglement on a chip raises the prospect of chip-based portable atomic clocks that also beat the classical precision limits.
Atom chips provide a versatile quantum laboratory for experiments with ultracold atomic gases, but techniques to control atomic interactions and to generate entanglement have been unavailable so far. Here, the experimental generation of multi-particle entanglement on an atom chip is described. The technique is used to produce spin-squeezed states of a two-component Bose–Einstein condensate, which should be useful for quantum metrology.
Atom chips provide a versatile quantum laboratory for experiments with ultracold atomic gases [1]. They have been used in diverse experiments involving low-dimensional quantum gases [2], cavity quantum electrodynamics [3], atom–surface interactions [4,5], and chip-based atomic clocks [6] and interferometers [7,8]. However, a severe limitation of atom chips is that techniques to control atomic interactions and to generate entanglement have not been experimentally available so far. Such techniques enable chip-based studies of entangled many-body systems and are a key prerequisite for atom chip applications in quantum simulations [9], quantum information processing [10] and quantum metrology [11]. Here we report the experimental generation of multi-particle entanglement on an atom chip by controlling elastic collisional interactions with a state-dependent potential [12]. We use this technique to generate spin-squeezed states of a two-component Bose–Einstein condensate [13]; such states are a useful resource for quantum metrology. The observed reduction in spin noise of −3.7 ± 0.4 dB, combined with the spin coherence, implies four-partite entanglement between the condensate atoms [14]; this could be used to improve an interferometric measurement by −2.5 ± 0.6 dB over the standard quantum limit [15]. Our data show good agreement with a dynamical multi-mode simulation [16] and allow us to reconstruct the Wigner function [17] of the spin-squeezed condensate. The techniques reported here could be directly applied to chip-based atomic clocks, currently under development [18].
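For context, the decibel figures above map onto the standard Wineland squeezing parameter; these are the textbook definitions (for N atoms with collective spin components $J_i$ and mean spin along $x$), and the paper's exact normalization may differ:

```latex
\xi^2_R \;=\; \frac{N\,(\Delta J_z)^2}{\langle J_x\rangle^2},
\qquad
\text{gain (dB)} \;=\; 10\log_{10}\xi^2_R,
\qquad
\Delta\varphi_{\mathrm{SQL}} \;=\; \frac{1}{\sqrt{N}} .
```

A value $\xi^2_R < 1$ (negative in dB) certifies both entanglement and metrological gain over the standard quantum limit; on these definitions the quoted $-2.5\,$dB corresponds to $\xi^2_R \approx 0.56$, i.e. phase sensitivity roughly 25% below $\Delta\varphi_{\mathrm{SQL}}$.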
Coherent manipulation of Bose–Einstein condensates with state-dependent microwave potentials on an atom chip
Entanglement-based technologies, such as quantum information processing, quantum simulations and quantum-enhanced metrology, have the potential to revolutionize our way of computing and measuring, and help clarify the puzzling concept of entanglement itself. Ultracold atoms on atom chips are attractive for their implementation, as they provide control over quantum systems in compact, robust and scalable set-ups. An important tool in this system is a potential depending on the internal atomic state. Coherent dynamics in such a potential combined with collisional interactions enables entanglement generation both for individual atoms and ensembles. Here, we demonstrate coherent manipulation of Bose-condensed atoms in a state-dependent potential, generated with microwave near-fields on an atom chip. We reversibly entangle atomic internal and motional states, realizing a trapped-atom interferometer with internal-state labelling. Our system provides control over collisions in mesoscopic condensates, paving the way to on-chip generation of many-particle entanglement and quantum-enhanced metrology with spin-squeezed states. Simultaneous coherent control of internal and motional states of a Bose–Einstein condensate has been demonstrated on an ‘atom chip’. The method should provide a route to generating many-particle entangled states, which are needed for entanglement-based technologies such as quantum-information processing or quantum-enhanced metrology.
Estimating prevalence of subjective cognitive decline in and across international cohort studies of aging: a COSMIC study
Background Subjective cognitive decline (SCD) is recognized as a risk stage for Alzheimer’s disease (AD) and other dementias, but its prevalence is not well known. We aimed to use uniform criteria to better estimate SCD prevalence across international cohorts. Methods We combined individual participant data for 16 cohorts from 15 countries (members of the COSMIC consortium) and used qualitative and quantitative (Item Response Theory/IRT) harmonization techniques to estimate SCD prevalence. Results The sample comprised 39,387 cognitively unimpaired individuals above age 60. The prevalence of SCD across studies was around one quarter with both qualitative harmonization/QH (23.8%, 95%CI = 23.3–24.4%) and IRT (25.6%, 95%CI = 25.1–26.1%); however, prevalence estimates varied widely between studies (QH: 6.1%, 95%CI = 5.1–7.0%, to 52.7%, 95%CI = 47.4–58.0%; IRT: 7.8%, 95%CI = 6.8–8.9%, to 52.7%, 95%CI = 47.4–58.0%). Across studies, SCD prevalence was higher in men than women, at lower levels of education, in Asian and Black African people compared to White people, in lower- and middle-income countries compared to high-income countries, and in studies conducted in later decades. Conclusions SCD is frequent in old age. That a quarter of older individuals report SCD warrants further investigation of its significance as a risk stage for AD and other dementias, and of ways to help individuals with SCD who seek medical advice. Moreover, a standardized instrument to measure SCD is needed to overcome the measurement variability currently dominant in the field.
Lifestyle and incident dementia: A COSMIC individual participant data meta‐analysis
INTRODUCTION The LIfestyle for BRAin Health (LIBRA) index yields a dementia risk score based on modifiable lifestyle factors and is validated in Western samples. We investigated whether the association between LIBRA scores and incident dementia is moderated by geographical location or sociodemographic characteristics. METHODS We combined data from 21 prospective cohorts across six continents (N = 31,680) and conducted cohort‐specific Cox proportional hazard regression analyses in a two‐step individual participant data meta‐analysis. RESULTS A one‐standard‐deviation increase in LIBRA score was associated with a 21% higher risk for dementia. The association was stronger for Asian cohorts compared to European cohorts, and for individuals aged ≤75 years (vs older), though only within the first 5 years of follow‐up. No interactions with sex, education, or socioeconomic position were observed. DISCUSSION Modifiable risk and protective factors appear relevant for dementia risk reduction across diverse geographical and sociodemographic groups. Highlights A two‐step individual participant data meta‐analysis was conducted. This was done at a global scale using data from 21 ethno‐regionally diverse cohorts. The association between a modifiable dementia risk score and dementia was examined. The association was modified by geographical region and age at baseline. Yet, modifiable dementia risk and protective factors appear relevant in all investigated groups and regions.
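The second step of such a two-step IPD meta-analysis is typically an inverse-variance pooling of cohort-specific log hazard ratios from the Cox models. A fixed-effect sketch with invented cohort estimates (the study's actual model, estimates, and random-effects structure may differ):

```python
import math

def pool_fixed_effect(log_hrs, ses):
    """Step two of a two-step IPD meta-analysis: inverse-variance
    (fixed-effect) pooling of cohort-specific log hazard ratios."""
    ws = [1 / se ** 2 for se in ses]                       # precision weights
    pooled = sum(w * b for w, b in zip(ws, log_hrs)) / sum(ws)
    se = math.sqrt(1 / sum(ws))
    hr = math.exp(pooled)
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return hr, ci

# Illustrative per-cohort estimates: log HR per 1-SD LIBRA increase, SE.
log_hrs = [0.22, 0.15, 0.25, 0.18]
ses = [0.08, 0.10, 0.06, 0.09]
hr, ci = pool_fixed_effect(log_hrs, ses)
print(round(hr, 2))
```

With these invented inputs the pooled hazard ratio lands near the 1.21 reported in the abstract; in practice a random-effects model would also estimate between-cohort heterogeneity before pooling.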