Search Results

951 results for "Machine Learning (ML) algorithms"
Prediction of spontaneous combustion susceptibility of coal seams based on coal intrinsic properties using various machine learning tools
Spontaneous combustion of coal leading to mine fire is a major problem in most of the coal mining countries of the world, and it causes major losses to the Indian economy. The liability of coal to spontaneous combustion varies from place to place and depends mainly on the coal's intrinsic properties and other geo-mining factors. Hence, predicting the spontaneous combustion susceptibility of coal is of utmost importance for preventing the risk of fire in coal mines and utility sectors. Machine learning tools are pivotal in system improvements in relation to the statistical analysis of experimental results. The wet oxidation potential (WOP) of coal, determined in the laboratory, is one of the most relied-upon indices for assessing the spontaneous combustion susceptibility of coal. In this study, multiple linear regression (MLR) and five machine learning (ML) techniques, namely Support Vector Regression (SVR), Artificial Neural Network (ANN), Random Forest (RF), Gradient Boosting (GB) and Extreme Gradient Boosting (XGB) algorithms, were used to predict the spontaneous combustion susceptibility (WOP) of coal seams from the coal's intrinsic properties. The results derived from the models were compared with the experimental data, and indicated excellent prediction accuracy and ease of interpretation for the tree-based ensemble algorithms (Random Forest, Gradient Boosting and Extreme Gradient Boosting). MLR exhibited the lowest and XGB the highest predictive performance; the developed XGB model achieved an R² of 0.9879, an RMSE of 4.364 and a VAF of 84.28%. In addition, the sensitivity analysis showed that the WOP of the coal samples considered in the study is most sensitive to changes in volatile matter. Thus, during spontaneous combustion modelling and simulation, volatile matter can be used as the most influential parameter for assessing the fire risk of these coal samples. Further, a partial dependence analysis was done to interpret the complex relationships between the WOP and the intrinsic properties of coal.
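As a concrete illustration (a minimal sketch, not code from the paper), the three goodness-of-fit metrics the abstract reports — R², RMSE and VAF — can be computed from measured and predicted values like this:

```python
import math

def r2_score(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def vaf(y_true, y_pred):
    """Variance accounted for, in percent: 100 * (1 - var(residuals) / var(y))."""
    n = len(y_true)
    resid = [t - p for t, p in zip(y_true, y_pred)]
    rm = sum(resid) / n
    tm = sum(y_true) / n
    var_res = sum((r - rm) ** 2 for r in resid) / n
    var_true = sum((t - tm) ** 2 for t in y_true) / n
    return 100.0 * (1.0 - var_res / var_true)
```

A perfect predictor scores R² = 1, RMSE = 0 and VAF = 100%; comparing these three numbers across MLR, SVR, ANN, RF, GB and XGB is exactly the kind of model ranking the study performs.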
On-Road Experimental Campaign for Machine Learning Based State of Health Estimation of High-Voltage Batteries in Electric Vehicles
The present study investigates the use of machine learning algorithms to estimate the state of health (SOH) of high-voltage batteries in electric vehicles. The analysis is based on open-circuit voltage (OCV) measurements from 12 vehicles with different mileage conditions and focuses on establishing a correlation between the OCV values, the energy stored in the battery, and the battery SOH. The experimental campaign was conducted at the Hyundai Motor Europe Technical Center GmbH (Germany), and the data collection process took advantage of the ETAS Integrated Calibration and Application Tool (INCA) and the ETAS Measure Data Analyzer (MDA) software. Six machine learning algorithms are evaluated and compared, namely linear regression, k-nearest neighbors, support vector machine, random forest, classification and regression tree, and neural network. Among the evaluated algorithms, random forest (RF) exhibits the best performance in predicting the state of health of high-voltage batteries, both for the OCV and the capacity (C) estimation. Specifically, if compared to the worst algorithm (i.e., linear regression), RF achieves a remarkable improvement with a reduction of 96% and 97% in the mean absolute error for the OCV and the C estimation, respectively. Furthermore, the comparison highlighted the main differences in the performance, complexity, interpretability, and specific features of the six algorithms. The findings of the present study will contribute to the development of efficient maintenance strategies, thus reducing the risk of unexpected battery failures.
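The headline comparison above — RF cutting the mean absolute error of linear regression by 96–97% — boils down to two small formulas. A minimal sketch (illustrative only, not the study's code):

```python
def mae(y_true, y_pred):
    """Mean absolute error of an estimate against reference values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mae_reduction_pct(mae_baseline, mae_model):
    """Relative MAE reduction of a model versus a baseline, in percent.
    E.g. a model at 4% of the baseline error gives a 96% reduction."""
    return 100.0 * (1.0 - mae_model / mae_baseline)
```

With hypothetical errors of 10.0 for the baseline and 0.4 for the better model, the reduction evaluates to 96%, matching the scale of improvement the study reports for RF over linear regression.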
Machine learning models for wetland habitat vulnerability in mature Ganges delta
The present study attempts to measure wetland habitat vulnerability (WHV) in the Indian part of the mature Ganges delta. Predictive algorithms belonging to bivariate statistics and machine learning (ML) were applied for data mining and model generation. Results show that 60% of the wetland areas are covered by moderate to very high WHV, of which > 300 km² belong to very high WHV, followed by high vulnerability in almost 150 km². This areal coverage increases by 10–15% from phase II to phase III. On the other hand, a relatively safe situation is confined to < 200 km². The receiver operating characteristic curve, root-mean-square error, and correlation coefficient are used to assess the accuracy of these models and the categorization of habitat vulnerability. Ensemble modeling using the more accurate individual models is done to increase accuracy further. A field-based model, prepared by gathering information directly from the field, exhibits results similar to the algorithm-based models. Analysis of residuals in standard regression strongly supports the relevance of the selected parameters and multi-parametric models.
AI augmented edge and fog computing for Internet of Health Things (IoHT)
Patients today seek a more advanced and personalized health-care system that keeps up with the pace of modern living. Cloud computing delivers resources over the Internet and enables the deployment of an infinite number of applications to provide services to many sectors. The primary limitation of these cloud frameworks right now is their limited scalability, which results in their inability to meet demand. An edge/fog computing environment, paired with current computing techniques, is the answer to fulfill the energy efficiency and latency requirements for the real-time collection and analysis of health data. Additionally, the Internet of Things (IoT) revolution has been essential in changing contemporary healthcare systems by integrating social, economic, and technological perspectives. This requires transitioning from conventional healthcare systems to more adapted healthcare systems that allow patients to be identified, managed, and evaluated more easily. These techniques allow data from many sources to be integrated to effectively assess patient health status and predict potential preventive actions. A subset of the Internet of Things, the Internet of Health Things (IoHT) enables the remote exchange of data for physical processes like patient monitoring, treatment progress, observation, and consultation. Previous surveys related to healthcare mainly focused on architecture and networking, leaving untouched important aspects of smart systems such as optimal computing techniques (artificial intelligence and deep learning), advanced technologies, and services that include 5G and unified communication as a service (UCaaS). This study aims to examine future and existing fog and edge computing architectures and methods that have been augmented with artificial intelligence (AI) for use in healthcare applications, as well as defining the demands and challenges of incorporating fog and edge computing technology in IoHT, thereby helping healthcare professionals and technicians identify the relevant technologies required based on their need for developing IoHT frameworks for remote healthcare. Among the crucial elements to take into account in an IoHT framework are efficient resource management, low latency, and strong security. This review addresses several machine learning techniques for efficient resource management in the IoT, where machine learning (ML) and AI are crucial. It has been noted how the use of modern technologies, such as narrowband IoT (NB-IoT) for wider coverage and Blockchain technology for security, is transforming IoHT. The last part of the review focuses on the future challenges posed by advanced technologies and services. This study provides prospective research suggestions for enhancing edge and fog computing services for healthcare with modern technologies, in order to provide patients with an improved quality of life.
A review of Machine Learning (ML) algorithms used for modeling travel mode choice
In recent decades, transportation planning researchers have used diverse types of machine learning (ML) algorithms to research a wide range of topics. This review paper starts with a brief explanation of some ML algorithms commonly used for transportation research, specifically Artificial Neural Networks (ANN), Decision Trees (DT), Support Vector Machines (SVM) and Cluster Analysis (CA). Then, these different methodologies used by researchers for modeling travel mode choice are collected and compared with the Multinomial Logit Model (MNL) which is the most commonly-used discrete choice model. Finally, the characterization of ML algorithms is discussed and Random Forest (RF), a variant of Decision Tree algorithms, is presented as the best methodology for modeling travel mode choice.
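The Multinomial Logit Model that the review uses as its benchmark assigns each travel mode a probability proportional to the exponential of its utility. A minimal sketch of that choice rule (illustrative, with hypothetical utility values; not code from the review):

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities: P_i = exp(V_i) / sum_j exp(V_j).
    Subtracting the max utility first keeps the exponentials numerically stable."""
    m = max(utilities)
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, `mnl_probabilities([1.2, -0.3, 0.5])` would give the choice shares for three modes (say car, bus, bike) whose systematic utilities are 1.2, -0.3 and 0.5; ML alternatives such as RF replace this parametric rule with a learned, non-linear one.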
Detection of Synergistic Interaction on an Additive Scale Between Two Drugs on Abnormal Elevation of Serum Alanine Aminotransferase Using Machine-Learning Algorithms
Drug-induced liver injury (DILI) is a common adverse drug reaction, with abnormal elevation of serum alanine aminotransferase (ALT). Several clinical studies have investigated whether a combination of two drugs alters the reporting frequency of DILI using traditional statistical methods such as multiple logistic regression (MLR), but this model may over-fit the data. This study aimed to detect a synergistic interaction between two drugs on the risk of abnormal elevation of serum ALT in Japanese adult patients using three machine-learning algorithms: MLR, logistic least absolute shrinkage and selection operator (LASSO) regression, and extreme gradient boosting (XGBoost) algorithms. A total of 58,413 patients were extracted from Nihon University School of Medicine’s Clinical Data Warehouse and assigned to case (N = 4,152) and control (N = 54,261) groups. The MLR model over-fitted a training set. In the logistic LASSO regression model, three combinations showed relative excess risk due to interaction (RERI) for abnormal elevation of serum ALT: diclofenac and famotidine (RERI 2.427, 95% bootstrap confidence interval 1.226–11.003), acetaminophen and ambroxol (0.540, 0.087–4.625), and aspirin and cilostazol (0.188, 0.135–3.010). Moreover, diclofenac (adjusted odds ratio 1.319, 95% bootstrap confidence interval 1.189–2.821) and famotidine (1.643, 1.332–2.071) individually affected the risk of abnormal elevation of serum ALT. In the XGBoost model, not only the individual effects of diclofenac (feature importance 0.004) and famotidine (0.016), but also the interaction term (0.004) was included among the important predictors. Although further study is needed, the combination of diclofenac and famotidine appears to increase the risk of abnormal elevation of serum ALT in the real world.
Implementation of CNN for Plant Identification using UAV Imagery
Plants are the world's most significant resource since they are the only natural source of oxygen. Additionally, plants are considered crucial since they are the major source of energy for humanity and have nutritional, therapeutic, and other benefits. Image identification has become more prominent in this technology-driven world, where many innovations are happening in this sphere. Image processing techniques are increasingly being used by researchers to identify plants. The capacity of Convolutional Neural Networks (CNN) to transfer weights learned on huge standard datasets to tasks with smaller collections or more particular data has improved over time. Several applications use deep learning and Machine Learning (ML) algorithms for image identification, and plant image identification is a prominent example. A plant image dataset of about 300 images, collected by mobile phone and camera from different places in natural scenes and covering nine plant species, is used for training. A five-layered convolutional neural network (CNN) is applied for large-scale plant classification in a natural environment. The proposed work claims higher accuracy in plant identification based on experimental data: the model achieves a top recognition rate of 96% on the NU108 dataset, and UAV images from NU101 achieved an accuracy of 97.8%.
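The core operation in each convolutional layer of such a CNN is a small kernel slid over the image. A minimal pure-Python sketch of that operation (valid padding, single channel; real networks use a framework, this is only to show the mechanics):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as CNN frameworks
    implement it): slide the kernel over the image and take weighted sums."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out
```

A five-layer CNN stacks this operation (plus non-linearities and pooling) five times, with the kernel weights learned from the labeled plant images rather than fixed by hand.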
PredictMed-CDSS: Artificial Intelligence-Based Decision Support System Predicting the Probability to Develop Neuromuscular Hip Dysplasia
Neuromuscular hip dysplasia (NHD) is a common deformity in children with cerebral palsy (CP). Although some predictive factors of NHD are known, the prediction of NHD is in its infancy. We present a Clinical Decision Support System (CDSS) designed to calculate the probability of developing NHD in children with CP. The system utilizes an ensemble of three machine learning (ML) algorithms: Neural Network (NN), Support Vector Machine (SVM), and Logistic Regression (LR). The development and evaluation of the CDSS followed the DECIDE-AI guidelines for AI-driven clinical decision support tools. The ensemble was trained on a data series from 182 subjects. Inclusion criteria were age between 12 and 18 years and diagnosis of CP from two specialized units. Clinical and functional data were collected prospectively between 2005 and 2023, and then analyzed in a cross-sectional study. Accuracy and area under the receiver operating characteristic (AUROC) were calculated for each method. Best logistic regression scores highlighted history of previous orthopedic surgery (p = 0.001), poor motor function (p = 0.004), truncal tone disorder (p = 0.008), scoliosis (p = 0.031), number of affected limbs (p = 0.05), and epilepsy (p = 0.05) as predictors of NHD. Both accuracy and AUROC were highest for NN, 83.7% and 0.92, respectively. The novelty of this study lies in the development of an efficient Clinical Decision Support System (CDSS) prototype, specifically designed to predict future outcomes of neuromuscular hip dysplasia (NHD) in patients with cerebral palsy (CP) using clinical data. The proposed system, PredictMed-CDSS, demonstrated strong predictive performance for estimating the probability of NHD development in children with CP, with the highest accuracy achieved using neural networks (NN). PredictMed-CDSS has the potential to assist clinicians in anticipating the need for early interventions and preventive strategies in the management of NHD among CP patients.
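One common way to combine an ensemble of probabilistic classifiers such as the NN, SVM and LR used here is soft voting: average the per-model probabilities, then threshold. A minimal sketch (an assumed combination rule for illustration; the abstract does not specify how PredictMed-CDSS fuses its three models):

```python
def soft_vote(probabilities, threshold=0.5):
    """Average the probability estimates of several classifiers and
    apply a decision threshold; returns (ensemble probability, decision)."""
    p = sum(probabilities) / len(probabilities)
    return p, p >= threshold
```

With three models reporting NHD probabilities of 0.9, 0.6 and 0.4, the ensemble probability is about 0.63 and the decision is positive.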
Statistical and Machine Learning Classification Approaches to Predicting and Controlling Peak Temperatures During Friction Stir Welding (FSW) of Al-6061-T6 Alloys
This paper presents optimization of peak temperatures achieved during friction stir welding (FSW) of Al-6061-T6 alloys. This research work employed a novel approach by investigating the effect of FSW welding process parameters on peak temperatures through the implementation of finite element analysis (FEA), the Taguchi method, analysis of variance (ANOVA), and machine learning (ML) algorithms. COMSOL 6.0 Multiphysics was used to perform FEA to predict peak temperatures, incorporating seven distinctive welding parameters: tool material, pin diameter, shoulder diameter, tool rotational speed, welding speed, axial force, and coefficient of friction. The influence of these parameters was investigated using an L32 Taguchi array and analysis of variance (ANOVA), revealing that axial force and tool rotational speed were the most significant parameters affecting peak temperatures. Some simulations showed temperatures exceeding the material’s melting point, indicating the need for improved thermal control. This was achieved by using three machine learning (ML) algorithms, i.e., Logistic Regression, k-Nearest Neighbors (k-NN), and Naive Bayes. A dataset of 324 data points was prepared using a factorial design to implement these algorithms. These algorithms predicted the welding conditions where the temperature exceeded the melting temperature of Al-6061-T6. It was found that the Logistic Regression classifier demonstrated the highest performance, achieving an accuracy of 98.14% as compared to Naive Bayes and k-NN classifiers. These findings contribute to sustainable welding practices by minimizing excessive heat generation, preserving material properties, and enhancing weld quality.
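The best-performing classifier here, logistic regression, reduces to a sigmoid of a weighted sum of the welding parameters, thresholded into a binary "exceeds melting point" decision. A minimal sketch (the weights and parameter ordering are hypothetical, not fitted values from the paper):

```python
import math

def exceeds_melting_point(features, weights, bias, threshold=0.5):
    """Logistic-regression decision: sigmoid(bias + w . x) >= threshold.
    features would hold the welding parameters (e.g. axial force,
    rotational speed); weights and bias come from training."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    p = 1.0 / (1.0 + math.exp(-z))
    return p >= threshold
```

In the study's setting, such a classifier trained on the 324 factorial-design points flags parameter combinations whose simulated peak temperature exceeds the melting temperature of Al-6061-T6.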