27 results for "Gupta, Vishu"
Cross-property deep transfer learning framework for enhanced predictive analytics on small materials data
Artificial intelligence (AI) and machine learning (ML) have been increasingly used in materials science to build predictive models and accelerate discovery. For selected properties, availability of large databases has also facilitated application of deep learning (DL) and transfer learning (TL). However, unavailability of large datasets for a majority of properties prohibits widespread application of DL/TL. We present a cross-property deep-transfer-learning framework that leverages models trained on large datasets to build models on small datasets of different properties. We test the proposed framework on 39 computational and two experimental datasets and find that the TL models with only elemental fractions as input outperform ML/DL models trained from scratch even when they are allowed to use physical attributes as input, for 27/39 (≈ 69%) computational and both the experimental datasets. We believe that the proposed framework can be widely useful to tackle the small data challenge in applying AI/ML in materials science.

Artificial intelligence and machine learning can greatly enhance materials property prediction and discovery. Here the authors propose cross-property transfer learning to build accurate models for dozens of properties with limited data availability.
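A minimal sketch of the cross-property transfer idea, with toy linear models standing in for the authors' deep networks (an assumption for illustration): a representation "pretrained" on a large source dataset is frozen, and only a small head is refit on the scarce target property.

```python
def pretrain_feature_stats(big_x):
    """'Pretrain' on a large source dataset: learn a frozen normalisation,
    a stand-in here for a learned deep representation."""
    mean = sum(big_x) / len(big_x)
    std = (sum((v - mean) ** 2 for v in big_x) / len(big_x)) ** 0.5
    return mean, std

def fit_head(small_x, small_y, mean, std, lr=0.05, epochs=500):
    """Fit only a small linear head on the scarce target property,
    keeping the pretrained transform frozen."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(small_x, small_y):
            z = (x - mean) / std  # frozen feature transform
            err = (w * z + b) - y
            w -= lr * err * z
            b -= lr * err
    return w, b
```

Only the head's two parameters are trained on the small dataset, which is why the approach remains usable when target data are scarce.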
Enabling deeper learning on big data for materials informatics applications
The application of machine learning (ML) techniques in materials science has attracted significant attention in recent years, due to their impressive ability to efficiently extract data-driven linkages from various input materials representations to their output properties. While the application of traditional ML techniques has become quite ubiquitous, there have been limited applications of more advanced deep learning (DL) techniques, primarily because big materials datasets are relatively rare. Given the demonstrated potential and advantages of DL and the increasing availability of big materials datasets, it is attractive to use deeper neural networks to boost model performance, but in practice naively deepening a network degrades performance due to the vanishing gradient problem. In this paper, we address the question of how to enable deeper learning for cases where big materials data is available. Here, we present a general deep learning framework based on Individual Residual learning (IRNet) composed of very deep neural networks that can work with any vector-based materials representation as input to build accurate property prediction models. We find that the proposed IRNet models can not only successfully alleviate the vanishing gradient problem and enable deeper learning, but also lead to significantly (up to 47%) better model accuracy as compared to plain deep neural networks and traditional ML techniques for a given input materials representation in the presence of big data.
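A toy numerical illustration (an exposition aid, not IRNet's actual layers) of why wrapping each layer in an identity skip connection keeps a signal from vanishing through a very deep stack:

```python
def plain_stack(x, depth, layer_gain=0.01):
    """A deep plain stack: each weak layer multiplies the signal,
    so it vanishes exponentially with depth."""
    for _ in range(depth):
        x = layer_gain * x
    return x

def residual_stack(x, depth, layer_gain=0.01):
    """The same weak layers, each wrapped in an identity skip (x + f(x)),
    so the signal survives arbitrary depth."""
    for _ in range(depth):
        x = x + layer_gain * x
    return x
```

After 50 layers the plain stack's output is numerically negligible, while the residual stack's output stays of order one; the same identity term is what keeps gradients alive during training.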
Moving closer to experimental level materials property prediction using AI
While experiments and DFT-computations have been the primary means for understanding the chemical and physical properties of crystalline materials, experiments are expensive and DFT-computations are time-consuming and have significant discrepancies against experiments. Currently, predictive modeling based on DFT-computations has provided a rapid screening method for materials candidates for further DFT-computations and experiments; however, such models inherit the large discrepancies from the DFT-based training data. Here, we demonstrate how AI can be leveraged together with DFT to compute materials properties more accurately than DFT itself by focusing on the critical materials science task of predicting "formation energy of a material given its structure and composition". On an experimental hold-out test set containing 137 entries, AI can predict formation energy from materials structure and composition with a mean absolute error (MAE) of 0.064 eV/atom; comparing this against DFT-computations, we find that AI can significantly outperform DFT computations for the same task (discrepancies of > 0.076 eV/atom) for the first time.
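The headline comparison (0.064 vs. > 0.076 eV/atom) is made in terms of mean absolute error; as a quick reference, MAE over a hold-out set is simply:

```python
def mae(predictions, targets):
    """Mean absolute error, e.g. in eV/atom for formation energies."""
    assert len(predictions) == len(targets)
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(targets)
```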
Improving deep learning model performance under parametric constraints for materials informatics applications
Modern machine learning (ML) and deep learning (DL) techniques using high-dimensional data representations have helped accelerate the materials discovery process by efficiently detecting hidden patterns in existing datasets and linking input representations to output properties for a better understanding of the scientific phenomenon. While a deep neural network comprised of fully connected layers has been widely used for materials property prediction, simply creating a deeper model with a large number of layers often runs into the vanishing gradient problem, which degrades performance and limits usage. In this paper, we study and propose architectural principles to address the question of improving the performance of model training and inference under fixed parametric constraints. Here, we present a general deep-learning framework based on branched residual learning (BRNet) with fully connected layers that can work with any numerical vector-based representation as input to build accurate models to predict materials properties. We perform model training for materials properties using numerical vectors representing different composition-based attributes of the respective materials and compare the performance of the proposed models against traditional ML and existing DL architectures. We find that the proposed models are significantly more accurate than the ML/DL models for all data sizes by using different composition-based attributes as input. Further, branched learning requires fewer parameters and results in faster model training due to better convergence during the training phase than existing neural networks, thereby efficiently building accurate models for predicting materials properties.
Structure-aware graph neural network based deep transfer learning framework for enhanced predictive analytics on diverse materials datasets
Modern data mining methods have demonstrated effectiveness in comprehending and predicting materials properties. An essential component in the process of materials discovery is to know which material(s) will possess desirable properties. For many materials properties, performing experiments and density functional theory computations are costly and time-consuming. Hence, it is challenging to build accurate predictive models for such properties using conventional data mining methods due to the small amount of available data. Here we present a framework for materials property prediction tasks using structure information that leverages graph neural network-based architecture along with deep-transfer-learning techniques to drastically improve the model’s predictive ability on diverse materials (3D/2D, inorganic/organic, computational/experimental) data. We evaluated the proposed framework in cross-property and cross-materials class scenarios using 115 datasets to find that transfer learning models outperform the models trained from scratch in 104 cases, i.e., ≈90%, with additional benefits in performance for extrapolation problems. We believe the proposed framework can be widely useful in accelerating materials discovery in materials science.
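A minimal sketch of the structure-aware ingredient, assuming the simplest possible scheme (one round of mean-neighbour message passing on an atom graph) rather than the paper's actual GNN architecture:

```python
def message_pass(features, edges):
    """One round of mean-neighbour aggregation on an undirected graph:
    each node's scalar feature is updated with the mean of its
    neighbours' features -- the core operation behind structure-aware GNNs."""
    out = {}
    for node, feat in features.items():
        nbrs = [features[m] for n, m in edges if n == node]
        nbrs += [features[n] for n, m in edges if m == node]
        if nbrs:
            out[node] = feat + sum(nbrs) / len(nbrs)
        else:
            out[node] = feat
    return out
```

Because the update only depends on graph connectivity, the same learned operation transfers across materials classes (3D/2D, inorganic/organic), which is what makes structural pretraining reusable.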
XElemNet: towards explainable AI for deep neural networks in materials science
Recent progress in deep learning has significantly impacted materials science, leading to accelerated material discovery and innovation. ElemNet, a deep neural network model that predicts formation energy from elemental compositions, exemplifies the application of deep learning techniques in this field. However, the “black-box” nature of deep learning models often raises concerns about their interpretability and reliability. In this study, we propose XElemNet to explore the interpretability of ElemNet by applying a series of explainable artificial intelligence (XAI) techniques, focusing on post-hoc analysis and model transparency. The experiments with artificial binary datasets reveal ElemNet’s effectiveness in predicting convex hulls of element-pair systems across periodic table groups, indicating its capability to effectively discern elemental interactions in most cases. Additionally, feature importance analysis within ElemNet highlights alignment with chemical properties of elements such as reactivity and electronegativity. XElemNet provides insights into the strengths and limitations of ElemNet and offers a potential pathway for explaining other deep learning models in materials science.
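One post-hoc XAI technique of the kind the study applies is permutation-style feature importance, sketched here under two assumptions: a squared-error metric, and a deterministic column reversal standing in for a random shuffle.

```python
def permutation_importance(model, rows, ys, col):
    """Post-hoc importance of one input column: break its pairing with the
    target (here by reversing the column, a deterministic stand-in for a
    random shuffle) and measure how much the squared error grows."""
    def mse(rs):
        return sum((model(r) - y) ** 2 for r, y in zip(rs, ys)) / len(rs)
    base = mse(rows)
    permuted = [list(r) for r in rows]
    col_vals = [r[col] for r in rows][::-1]
    for r, v in zip(permuted, col_vals):
        r[col] = v
    return mse(permuted) - base
```

A column the model relies on yields a large error increase; an ignored column yields roughly zero, which is how such analyses surface alignment with chemical intuition like reactivity and electronegativity.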
An AI framework for time series microstructure prediction from processing parameters
In this study, we present an artificial intelligence (AI)-driven framework for predicting the microstructural texture of polycrystalline materials after a specific deformation process. The microstructural texture is defined in terms of the orientation distribution function (ODF) which indicates the volume density of crystal orientations. Our approach leverages an encoder-decoder model with Long Short-Term Memory (LSTM) layers to model the relationship between processing conditions and material properties. As a case study, we apply our framework to copper, generating a dataset of 3125 unique processing parameter combinations and their corresponding ODF vectors. The resulting predictions enable the calculation of homogenized properties. Our AI-driven framework outperforms traditional material processing simulations, yielding faster results with limited error rates (< 0.3% for both the elastic matrix C and the compliance matrix S), making it a promising tool for the expedited design of microstructures with tailored properties.
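One way a 3125-entry dataset of processing-parameter combinations can arise; the five-parameter, five-level sweep below is an assumption for illustration (5**5 = 3125), not necessarily the authors' design:

```python
from itertools import product

# Hypothetical sweep: five normalised processing parameters, five levels each.
levels = [0.0, 0.25, 0.5, 0.75, 1.0]
grid = list(product(levels, repeat=5))  # each tuple is one processing history
```

Each grid point would then be paired with the ODF vector produced by simulating that processing history, giving the training set for the sequence model.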
Collaborative multi‐aggregator electric vehicle charge scheduling with PV‐assisted charging stations under variable solar profiles
Electric vehicles (EVs) are on the path to becoming a solution to the emissions released by the internal combustion engine vehicles that are on the road. EV charging management integration requires a smart grid platform that allows for communication and control between the aggregator, consumer and grid. This study presents an operational strategy for PV‐assisted charging stations (PVCSs) that allows the EV to be charged primarily by PV energy, followed by the EV station's battery storage (BS) and the grid. Multi‐aggregator collaborative scheduling is considered that includes a monetary penalty on the aggregator for any unscheduled EVs. The impact of the PVCS is compared to a baseline case in which no PV/BS is included. A variation in the PV profile is included in the evaluation to assess its impact on total profits. Profit results are compared in cases of minimum, average and maximum PV energy output. The results indicate that the inclusion of penalties due to unscheduled EVs resulted in lowered profits. Further, the profits experienced an increase as the number of EVs scheduled through PV/BS increased, implying that a smaller percentage of EVs are scheduled by the grid when a greater amount of PV and battery energy are available.
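The stated charging priority (PV first, then the station's battery storage, then the grid) can be sketched as a greedy dispatch; the function name and kWh units are assumptions for illustration, not the paper's actual scheduler.

```python
def dispatch(demand_kwh, pv_kwh, battery_kwh):
    """Serve an EV's energy demand first from PV, then from battery
    storage (BS), and only the remainder from the grid."""
    from_pv = min(demand_kwh, pv_kwh)
    from_bs = min(demand_kwh - from_pv, battery_kwh)
    from_grid = demand_kwh - from_pv - from_bs
    return from_pv, from_bs, from_grid
```

Under this ordering, more available PV/BS energy directly shrinks the grid share, which is the mechanism behind the profit trend the study reports.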
Developing and validating machine learning models to predict next-day extubation
Criteria to identify patients who are ready to be liberated from mechanical ventilation (MV) are imprecise, often resulting in prolonged MV or reintubation, both of which are associated with adverse outcomes. Daily protocol-driven assessment of the need for MV leads to earlier extubation but requires dedicated personnel. We sought to determine whether machine learning (ML) applied to the electronic health record could predict next-day extubation. We examined 37 clinical features aggregated from 12AM-8AM on each patient-ICU-day from a single-center prospective cohort study of patients in our quaternary care medical ICU who received MV. We also tested our models on an external test set from a community hospital ICU in our health care system. We used three data encoding/imputation strategies and built XGBoost, LightGBM, logistic regression, LSTM, and RNN models to predict next-day extubation. We compared model predictions and actual events to examine how model-driven care might have differed from actual care. Our internal cohort included 448 patients and 3,095 ICU days, and our external test cohort had 333 patients and 2,835 ICU days. The best model (LSTM) predicted next-day extubation with an AUROC of 0.870 (95% CI 0.834–0.902) on the internal test cohort and 0.870 (95% CI 0.848–0.885) on the external test cohort. Across multiple model types, measures previously demonstrated to be important in determining readiness for extubation were found to be most informative, including plateau pressure and Richmond Agitation Sedation Scale (RASS) score. Our model often predicted patients to be stable for extubation in the days preceding their actual extubation, with 63.8% of predicted extubations occurring within three days of true extubation. Our findings suggest that an ML model may serve as a useful clinical decision support tool rather than complete replacement of clinical judgement. However, any ML-based model should be compared with protocol-based practice in a prospective, randomized controlled trial to determine improvement in outcomes while maintaining safety as well as cost effectiveness.
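The reported AUROC of 0.870 has a simple probabilistic reading: the chance that a randomly chosen extubated day scores higher than a randomly chosen non-extubated day. A minimal sketch (ties given half credit, an assumed convention):

```python
def auroc(scores, labels):
    """Pairwise AUROC: fraction of (positive, negative) pairs where the
    positive example receives the higher score; ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```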
Simultaneously improving accuracy and computational cost under parametric constraints in materials property prediction tasks
Modern data mining techniques using machine learning (ML) and deep learning (DL) algorithms have been shown to excel in the regression-based task of materials property prediction using various materials representations. In an attempt to improve the predictive performance of the deep neural network model, researchers have tried to add more layers as well as develop new architectural components to create sophisticated and deep neural network models that can aid in the training process and improve the predictive ability of the final model. However, these modifications usually require a lot of computational resources, further increasing the already large model training time; this is often not feasible, limiting usage for most researchers. In this paper, we study and propose a deep neural network framework for regression-based problems comprising fully connected layers that can work with any numerical vector-based materials representations as model input. We present a novel deep regression neural network, iBRNet, with branched skip connections and multiple schedulers, which can reduce the number of parameters used to construct the model, improve the accuracy, and decrease the training time of the predictive model. We perform the model training using composition-based numerical vectors representing the elemental fractions of the respective materials and compare their performance against other traditional ML and several known DL architectures. Using multiple datasets with varying data sizes for training and testing, we show that the proposed iBRNet models outperform the state-of-the-art ML and DL models for all data sizes. We also show that the branched structure and usage of multiple schedulers lead to fewer parameters and faster model training time with better convergence than other neural networks.
Scientific contribution: The combination of multiple callback functions in deep neural networks minimizes training time and maximizes accuracy in a controlled computational environment with parametric constraints for the task of materials property prediction.