Search Results

1,854 results for "639/705/1046"
On evaluation metrics for medical applications of artificial intelligence
Clinicians and software developers need to understand how proposed machine learning (ML) models could improve patient care. No single metric captures all the desirable properties of a model, which is why several metrics are typically reported to summarize a model’s performance. Unfortunately, these measures are not easily understandable by many clinicians. Moreover, comparison of models across studies in an objective manner is challenging, and no tool exists to compare models using the same performance metrics. This paper looks at previous ML studies done in gastroenterology, provides an explanation of what different metrics mean in the context of binary classification in the presented studies, and gives a thorough explanation of how different metrics should be interpreted. We also release an open source web-based tool that may be used to aid in calculating the most relevant metrics presented in this paper so that other researchers and clinicians may easily incorporate them into their research.
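For readers less familiar with these measures, a minimal Python sketch (not the paper's released web tool) of common binary-classification metrics such as sensitivity, specificity, precision, F1 and MCC might look like the following; the counts passed in are made-up examples, and the paper's exact selection of metrics may differ.

    # Illustrative sketch only: common binary-classification metrics from a confusion matrix.
    import math

    def binary_metrics(tp, fp, tn, fn):
        sensitivity = tp / (tp + fn)               # recall / true positive rate
        specificity = tn / (tn + fp)               # true negative rate
        precision = tp / (tp + fp)                 # positive predictive value
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        mcc = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
        return {"sensitivity": sensitivity, "specificity": specificity,
                "precision": precision, "f1": f1, "mcc": mcc}

    print(binary_metrics(tp=90, fp=10, tn=80, fn=20))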
Data-driven capacity estimation of commercial lithium-ion batteries from voltage relaxation
Accurate capacity estimation is crucial for the reliable and safe operation of lithium-ion batteries. In particular, exploiting the relaxation voltage curve features could enable battery capacity estimation without additional cycling information. Here, we report the study of three datasets comprising 130 commercial lithium-ion cells cycled under various conditions to evaluate the capacity estimation approach. One dataset is collected for model building from batteries with LiNi0.86Co0.11Al0.03O2-based positive electrodes. The other two datasets, used for validation, are obtained from batteries with LiNi0.83Co0.11Mn0.07O2-based positive electrodes and batteries with the blend of Li(NiCoMn)O2-Li(NiCoAl)O2 positive electrodes. Base models that use machine learning methods are employed to estimate the battery capacity using features derived from the relaxation voltage profiles. The best model achieves a root-mean-square error of 1.1% for the dataset used for the model building. A transfer learning model is then developed by adding a featured linear transformation to the base model. This extended model achieves a root-mean-square error of less than 1.7% on the datasets used for the model validation, indicating the successful applicability of the capacity estimation approach utilizing cell voltage relaxation. Accurate capacity estimation is crucial for lithium-ion batteries' reliable and safe operation. Here, the authors propose an approach exploiting features from the relaxation voltage curve for battery capacity estimation without requiring other previous cycling information.
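The abstract does not spell out the exact features or regressor, so the following Python sketch only illustrates the general idea under stated assumptions: simple statistics of a synthetic relaxation-voltage trace feed a generic regression model that estimates capacity. Data, features and model choice are placeholders, not the paper's pipeline.

    # Assumed, simplified version of the idea: relaxation-curve statistics -> capacity regression.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def relaxation_features(voltage_curve):
        v = np.asarray(voltage_curve, dtype=float)
        return [v.var(), v.max(), v.min()]          # simple statistics of the rest-period voltage

    rng = np.random.default_rng(0)
    taus = rng.uniform(0.5, 2.0, size=50)           # stand-in relaxation time constants
    t = np.linspace(0, 3, 100)
    curves = [4.1 - 0.05 * np.exp(-t / tau) for tau in taus]   # synthetic voltage relaxation traces
    y = 2.0 + 0.5 * taus + 0.01 * rng.normal(size=50)          # synthetic capacities (Ah)

    X = np.array([relaxation_features(c) for c in curves])
    model = GradientBoostingRegressor().fit(X, y)
    print(model.predict(X[:3]))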
Physics-informed learning of governing equations from scarce data
Harnessing data to discover the underlying governing laws or equations that describe the behavior of complex physical systems can significantly advance our modeling, simulation and understanding of such systems in various science and engineering disciplines. This work introduces a novel approach called physics-informed neural network with sparse regression to discover governing partial differential equations from scarce and noisy data for nonlinear spatiotemporal systems. In particular, this discovery approach seamlessly integrates the strengths of deep neural networks for rich representation learning, physics embedding, automatic differentiation and sparse regression to approximate the solution of system variables, compute essential derivatives, as well as identify the key derivative terms and parameters that form the structure and explicit expression of the equations. The efficacy and robustness of this method are demonstrated, both numerically and experimentally, on discovering a variety of partial differential equation systems with different levels of data scarcity and noise, accounting for different initial/boundary conditions. The resulting computational framework shows the potential for closed-form model discovery in practical applications where large and accurate datasets are intractable to capture. Recovery of underlying governing laws or equations describing the evolution of complex systems from data can be challenging if the dataset is damaged or incomplete. The authors propose a learning approach that allows governing partial differential equations to be discovered from scarce and noisy data.
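As a rough illustration of the sparse-regression ingredient named in the abstract (the physics-informed network itself is omitted), the sketch below applies sequentially thresholded least squares to a toy candidate-term library; the library and data are invented for the example.

    # Sparse regression over a candidate library Theta (e.g. u, u_x, u_xx, u*u_x, ...)
    # against time derivatives u_t: keep only the few terms that explain the dynamics.
    import numpy as np

    def sparse_regression(theta, u_t, threshold=0.1, iters=10):
        xi = np.linalg.lstsq(theta, u_t, rcond=None)[0]
        for _ in range(iters):
            small = np.abs(xi) < threshold          # prune negligible coefficients
            xi[small] = 0.0
            big = ~small
            if big.any():
                xi[big] = np.linalg.lstsq(theta[:, big], u_t, rcond=None)[0]
        return xi

    # Toy example: u_t = -0.5 * u_x with a 3-term candidate library [u, u_x, u_xx].
    rng = np.random.default_rng(1)
    theta = rng.normal(size=(200, 3))
    u_t = -0.5 * theta[:, 1] + 0.01 * rng.normal(size=200)
    print(sparse_regression(theta, u_t))            # expect roughly [0, -0.5, 0]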
Chaos as an intermittently forced linear system
Understanding the interplay of order and disorder in chaos is a central challenge in modern quantitative science. Approximate linear representations of nonlinear dynamics have long been sought, driving considerable interest in Koopman theory. We present a universal, data-driven decomposition of chaos as an intermittently forced linear system. This work combines delay embedding and Koopman theory to decompose chaotic dynamics into a linear model in the leading delay coordinates with forcing by low-energy delay coordinates; this is called the Hankel alternative view of Koopman (HAVOK) analysis. This analysis is applied to the Lorenz system and real-world examples including Earth’s magnetic field reversal and measles outbreaks. In each case, forcing statistics are non-Gaussian, with long tails corresponding to rare intermittent forcing that precedes switching and bursting phenomena. The forcing activity demarcates coherent phase space regions where the dynamics are approximately linear from those that are strongly nonlinear. The huge amount of data generated in fields like neuroscience or finance calls for effective strategies that mine data to reveal underlying dynamics. Here Brunton et al. develop a data-driven technique to analyze chaotic systems and predict their dynamics in terms of a forced linear model.
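A compressed Python sketch of the HAVOK recipe described above, using a synthetic signal rather than the Lorenz or geomagnetic data: build a Hankel matrix of delayed measurements, take an SVD, and regress the leading delay coordinates on themselves plus the last retained coordinate, which plays the role of the intermittent forcing. Dimensions and the test signal are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.sin(np.linspace(0, 60, 3000)) + 0.1 * rng.normal(size=3000)  # stand-in measurement

    q, r = 100, 5                                       # number of delays, retained modes
    H = np.column_stack([x[i:i + len(x) - q] for i in range(q)])  # Hankel matrix (time x delays)
    U, S, _ = np.linalg.svd(H, full_matrices=False)
    v = U[:, :r] * S[:r]                                # delay coordinates over time

    # Discrete-time linear model: v_{1..r-1}(t+1) ~ A v_{1..r-1}(t) + B v_r(t)
    coeffs = np.linalg.lstsq(v[:-1], v[1:, :r - 1], rcond=None)[0]
    A, B = coeffs[:r - 1].T, coeffs[r - 1:].T
    print(A.shape, B.shape)                             # (4, 4) linear part, (4, 1) forcing term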
Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data
Several studies underscore the potential of deep learning in identifying complex patterns, leading to diagnostic and prognostic biomarkers. Identifying sufficiently large and diverse datasets, required for training, is a significant challenge in medicine and can rarely be found in individual institutions. Multi-institutional collaborations based on centrally-shared patient data face privacy and ownership challenges. Federated learning is a novel paradigm for data-private multi-institutional collaborations, where model-learning leverages all available data without sharing data between institutions, by distributing the model-training to the data-owners and aggregating their results. We show that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and evaluate generalizability on data from institutions outside the federation. We further investigate the effects of data distribution across collaborating institutions on model quality and learning patterns, indicating that increased access to data through data-private multi-institutional collaborations can benefit model quality more than the errors introduced by the collaborative method. Finally, we compare with other collaborative-learning approaches demonstrating the superiority of federated learning, and discuss practical implementation considerations. Clinical adoption of federated learning is expected to lead to models trained on datasets of unprecedented size, and hence to have a catalytic impact on precision/personalized medicine.
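The paper's exact aggregation protocol is not given here, so the sketch below shows plain federated averaging on synthetic data as a stand-in: each hypothetical institution takes a few local gradient steps, and only model parameters, never raw records, are sent back and averaged by the server.

    # Assumed federated-averaging sketch; data, model and hyperparameters are invented.
    import numpy as np

    rng = np.random.default_rng(3)
    true_w = np.array([2.0, -1.0])
    sites = []                                        # private datasets of three hypothetical sites
    for _ in range(3):
        X = rng.normal(size=(40, 2))
        sites.append((X, X @ true_w + 0.1 * rng.normal(size=40)))

    w = np.zeros(2)                                   # global model
    for _ in range(50):                               # communication rounds
        local = []
        for X, y in sites:
            w_i = w.copy()
            for _ in range(5):                        # a few local gradient steps per round
                w_i -= 0.05 * (2 / len(y)) * X.T @ (X @ w_i - y)
            local.append(w_i)
        w = np.mean(local, axis=0)                    # server aggregates parameters only
    print(w)                                          # approaches [2, -1]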
Structured information extraction from scientific text with large language models
Extracting structured knowledge from scientific text remains a challenging task for machine learning models. Here, we present a simple approach to joint named entity recognition and relation extraction and demonstrate how pretrained large language models (GPT-3, Llama-2) can be fine-tuned to extract useful records of complex scientific knowledge. We test three representative tasks in materials chemistry: linking dopants and host materials, cataloging metal-organic frameworks, and general composition/phase/morphology/application information extraction. Records are extracted from single sentences or entire paragraphs, and the output can be returned as simple English sentences or a more structured format such as a list of JSON objects. This approach represents a simple, accessible, and highly flexible route to obtaining large databases of structured specialized scientific knowledge extracted from research papers. Extracting scientific data from published research is a complex task requiring specialised tools. Here the authors present a scheme based on large language models to automatise the retrieval of information from text in a flexible and accessible manner.
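A hypothetical sketch of the prompt-and-parse pattern implied by the abstract, for the dopant/host task; call_llm is a placeholder for whichever fine-tuned model API is actually used, and the stubbed response is invented for demonstration.

    import json

    PROMPT_TEMPLATE = (
        "Extract dopant/host relationships from the sentence below as a JSON list "
        "of objects with keys 'dopant' and 'host'.\n\nSentence: {sentence}\nJSON:"
    )

    def extract_records(sentence, call_llm):
        # call_llm is a hypothetical callable wrapping a fine-tuned language model.
        raw = call_llm(PROMPT_TEMPLATE.format(sentence=sentence))
        return json.loads(raw)                  # e.g. [{"dopant": "Mn", "host": "ZnO"}]

    def fake_llm(prompt):                       # stubbed model response, for illustration only
        return '[{"dopant": "Mn", "host": "ZnO"}]'

    print(extract_records("Mn-doped ZnO nanowires were synthesized...", fake_llm))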
Advances, challenges and opportunities in creating data for trustworthy AI
As artificial intelligence (AI) transitions from research to deployment, creating the appropriate datasets and data pipelines to develop and evaluate AI models is increasingly the biggest challenge. Automated AI model builders that are publicly available can now achieve top performance in many applications. In contrast, the design and sculpting of the data used to develop AI often rely on bespoke manual work, and they critically affect the trustworthiness of the model. This Perspective discusses key considerations for each stage of the data-for-AI pipeline—starting from data design to data sculpting (for example, cleaning, valuation and annotation) and data evaluation—to make AI more reliable. We highlight technical advances that help to make the data-for-AI pipeline more scalable and rigorous. Furthermore, we discuss how recent data regulations and policies can impact AI. It has rapidly become clear in the past few years that the creation, use and maintenance of high-quality annotated datasets for robust and reliable AI applications requires careful attention. This Perspective discusses challenges, considerations and best practices for various stages in the data-to-AI pipeline, to encourage a more data-centric approach.
Identifying degradation patterns of lithium ion batteries from impedance spectroscopy using machine learning
Forecasting the state of health and remaining useful life of Li-ion batteries is an unsolved challenge that limits technologies such as consumer electronics and electric vehicles. Here, we build an accurate battery forecasting system by combining electrochemical impedance spectroscopy (EIS)—a real-time, non-invasive and information-rich measurement that is hitherto underused in battery diagnosis—with Gaussian process machine learning. Over 20,000 EIS spectra of commercial Li-ion batteries are collected at different states of health, states of charge and temperatures—the largest dataset to our knowledge of its kind. Our Gaussian process model takes the entire spectrum as input, without further feature engineering, and automatically determines which spectral features predict degradation. Our model accurately predicts the remaining useful life, even without complete knowledge of past operating conditions of the battery. Our results demonstrate the value of EIS signals in battery management systems. Forecasting the state of health and remaining useful life of batteries is a challenge that limits technologies such as electric vehicles. Here, the authors build an accurate battery performance forecasting system using machine learning.
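As a stand-in for the paper's pipeline, the sketch below fits a Gaussian process regressor to synthetic "spectra" treated as raw feature vectors, mirroring the idea of taking the entire spectrum as input without feature engineering; the kernel choice, data and target are assumptions, not the authors' setup.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)
    n_cells, n_freqs = 60, 30
    spectra = rng.normal(size=(n_cells, 2 * n_freqs))       # [Re(Z), Im(Z)] at fixed frequencies
    capacity = 1.0 - 0.3 * spectra[:, 0] ** 2 + 0.01 * rng.normal(size=n_cells)  # synthetic target

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(spectra[:40], capacity[:40])
    pred, std = gp.predict(spectra[40:], return_std=True)   # prediction with uncertainty estimate
    print(pred[:3], std[:3])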
Quantum-chemical insights from deep tensor neural networks
Learning from data has led to paradigm shifts in a multitude of disciplines, including web, text and image search, speech recognition, as well as bioinformatics. Can machine learning enable similar breakthroughs in understanding quantum many-body systems? Here we develop an efficient deep learning approach that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems. We unify concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks, which leads to size-extensive and uniformly accurate (1 kcal mol^-1) predictions in compositional and configurational chemical space for molecules of intermediate size. As an example of chemical relevance, the model reveals a classification of aromatic rings with respect to their stability. Further applications of our model for predicting atomic energies and local chemical potentials in molecules, reliable isomer energies, and molecules with peculiar electronic structure demonstrate the potential of machine learning for revealing insights into complex quantum-chemical systems. Machine learning is an increasingly popular approach to analyse data and make predictions. Here the authors develop a ‘deep learning’ framework for quantitative predictions and qualitative understanding of quantum-mechanical observables of chemical systems, beyond properties trivially contained in the training data.
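The following toy sketch captures only the size-extensive idea, a molecular energy assembled from per-atom contributions refined by pairwise interactions; the weights, features and update rule are placeholders, not the deep tensor neural network architecture itself.

    import numpy as np

    rng = np.random.default_rng(5)
    W_embed = rng.normal(scale=0.1, size=(10, 8))      # toy embedding per atomic number (Z < 10)
    W_inter = rng.normal(scale=0.1, size=(8, 8))       # toy interaction refinement weights
    w_out = rng.normal(scale=0.1, size=8)

    def molecular_energy(atomic_numbers, positions):
        c = W_embed[atomic_numbers]                    # initial per-atom representations
        dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
        for _ in range(3):                             # interaction refinement passes
            msg = np.exp(-dists[..., None]) * (c @ W_inter)[None, :, :]
            c = np.tanh(c + msg.sum(axis=1) - (c @ W_inter))   # exclude self-interaction
        return float(np.sum(c @ w_out))                # sum of atomic energy contributions

    print(molecular_energy(np.array([8, 1, 1]), rng.normal(size=(3, 3))))  # water-like toy input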
Understanding traffic capacity of urban networks
Traffic in an urban network becomes congested once there is a critical number of vehicles in the network. To improve traffic operations, develop new congestion mitigation strategies, and reduce negative traffic externalities, understanding the basic laws governing the network’s critical number of vehicles and the network’s traffic capacity is necessary. However, until now, a holistic understanding of this critical point and an empirical quantification of its driving factors has been missing. Here we show, with billions of vehicle observations from more than 40 cities, how road and bus network topology explains around 90% of the empirically observed critical point variation, therefore making it predictable. Importantly, we find a sublinear relationship between network size and critical accumulation, emphasizing decreasing marginal returns of infrastructure investment. As transportation networks are the lifeline of our cities, our findings have profound implications on how to build and operate our cities more efficiently.
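To make the sublinear scaling claim concrete, the toy fit below recovers a power-law exponent below one from invented data; it illustrates the fitting step only, not the study's actual measurements.

    # Fit n_crit ~ a * L^b on synthetic data; b < 1 corresponds to the reported sublinear scaling.
    import numpy as np

    rng = np.random.default_rng(6)
    network_length = rng.uniform(100, 2000, size=40)            # e.g. lane-kilometres (made up)
    n_critical = 50 * network_length ** 0.85 * np.exp(0.05 * rng.normal(size=40))

    b, log_a = np.polyfit(np.log(network_length), np.log(n_critical), 1)
    print(f"fitted exponent b = {b:.2f} (sublinear if < 1)")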