Catalogue Search | MBRL
288,544 result(s) for "Learning algorithms"
Multi-robot path planning based on a deep reinforcement learning DQN algorithm
2020
The unmanned warehouse dispatching system of the 'goods to people' model is built mainly around handling robots, which saves considerable manpower and improves the efficiency of warehouse picking operations. However, the scheduling algorithm must meet high performance requirements. This study uses the deep Q-network (DQN) algorithm, a deep reinforcement learning method that combines the Q-learning algorithm, an experience replay mechanism, and a convolutional neural network that generates target Q-values, to solve the multi-robot path-planning problem. The aim is to address two shortcomings of Q-learning in robot path planning: slow convergence and excessive randomness. Before the algorithmic process starts, prior knowledge and prior rules are used to improve the DQN algorithm. Simulation results show that the improved DQN algorithm converges faster than the classic deep reinforcement learning algorithm and learns solutions to path-planning problems more quickly, improving the efficiency of multi-robot path planning.
Journal Article
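The abstract's combination of Q-learning, an experience replay buffer, and a separate target network for generating target Q-values can be sketched as follows. The grid size, hyperparameters, and random transitions are illustrative assumptions, not details from the paper.

```python
import random
from collections import deque

import numpy as np

GAMMA = 0.9                    # discount factor (illustrative)
N_STATES, N_ACTIONS = 16, 4    # e.g. a 4x4 grid world per robot

rng = np.random.default_rng(0)
q_online = rng.normal(size=(N_STATES, N_ACTIONS))   # updated every step
q_target = q_online.copy()                          # synced periodically

# experience replay buffer of (state, action, reward, next_state, done)
replay = deque(maxlen=1000)

def td_targets(batch):
    """Target Q-values y = r + gamma * max_a' Q_target(s', a')."""
    ys = []
    for s, a, r, s2, done in batch:
        y = r if done else r + GAMMA * q_target[s2].max()
        ys.append((s, a, y))
    return ys

# fill the buffer with random transitions, then sample a minibatch
for _ in range(100):
    s, a = rng.integers(N_STATES), rng.integers(N_ACTIONS)
    replay.append((s, a, -1.0, rng.integers(N_STATES), False))

batch = random.sample(list(replay), 8)
for s, a, y in td_targets(batch):
    q_online[s, a] += 0.1 * (y - q_online[s, a])   # TD update

q_target = q_online.copy()   # periodic target-network sync
```

Sampling minibatches from the replay buffer breaks the correlation between consecutive transitions, and the periodically synced target network keeps the TD targets stable — the two mechanisms the abstract credits for faster convergence.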
Scalable and distributed machine learning and deep learning patterns
by
Thomas, J. Joshua, 1973- editor
,
Sriraman, Harini, 1982- editor
,
Venkatasubbu, Pattabiraman, 1976- editor
in
Machine learning.
,
Deep learning (Machine learning)
,
Algorithms.
2023
"By the end of this book, you will have the knowledge and abilities necessary to construct and implement a distributed data processing pipeline for machine learning model inference and training. Reduced time costs in machine learning result in shorter model training and model updating cycle wait times. Distributed machine learning enables ML professionals to reduce model training and inference time drastically. With the aid of this helpful manual, you'll be able to use your Python development experience and quickly get started with the creation of distributed ML, including multi-node ML systems" -- Provided by publisher.
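As a rough illustration of the distributed-inference pattern the blurb describes, the sketch below shards a batch of inputs across worker processes with Python's standard multiprocessing module. The `predict` function is a hypothetical stand-in, not an API from the book.

```python
from multiprocessing import Pool

def predict(x):
    # placeholder for a real model's forward pass
    return 2 * x + 1

def predict_batch(chunk):
    return [predict(x) for x in chunk]

def distributed_inference(data, n_workers=4):
    # shard the inputs round-robin, fan out to workers, gather results
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(predict_batch, chunks)
    # undo the round-robin sharding to restore input order
    out = [None] * len(data)
    for w, chunk in enumerate(results):
        for j, y in enumerate(chunk):
            out[w + j * n_workers] = y
    return out

if __name__ == "__main__":
    print(distributed_inference(list(range(8))))  # [1, 3, 5, 7, 9, 11, 13, 15]
```

The same fan-out/gather shape scales from processes on one machine to nodes in a cluster; only the transport changes.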
Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms
2023
The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently explored. Our paper aims to fill this gap and address the AI ethics principle of fairness from a conceptual standpoint, drawing insights from accounts of fairness elaborated in moral philosophy and using them to conceptualise fairness as an ethical value and to redefine fairness in HMLA accordingly. To achieve our goal, following a first section aimed at clarifying the background, methodology and structure of the paper, in the second section, we provide an overview of the discussion of the AI ethics principle of fairness in HMLA and show that the concept of fairness underlying this debate is framed in purely distributive terms and overlaps with non-discrimination, which is defined in turn as the absence of biases. After showing that this framing is inadequate, in the third section, we pursue an ethical inquiry into the concept of fairness and argue that fairness ought to be conceived of as an ethical value. Following a clarification of the relationship between fairness and non-discrimination, we show that the two do not overlap and that fairness requires much more than just non-discrimination. Moreover, we highlight that fairness not only has a distributive but also a socio-relational dimension. Finally, we pinpoint the constitutive components of fairness. In doing so, we base our arguments on a renewed reflection on the concept of respect, which goes beyond the idea of equal respect to include respect for individual persons. 
In the fourth section, we analyse the implications of our conceptual redefinition of fairness as an ethical value in the discussion of fairness in HMLA. Here, we claim that fairness requires more than non-discrimination and the absence of biases as well as more than just distribution; it needs to ensure that HMLA respects persons both as persons and as particular individuals. Finally, in the fifth section, we sketch some broader implications and show how our inquiry can contribute to making HMLA and, more generally, AI promote the social good and a fairer society.
Journal Article
Performance Comparison of an LSTM-based Deep Learning Model versus Conventional Machine Learning Algorithms for Streamflow Forecasting
2021
Streamflow forecasting plays a key role in improving water resource allocation, management and planning, flood warning and forecasting, and mitigation of flood damages. A considerable number of forecasting models and techniques have been employed in streamflow forecasting and have gained importance in hydrological studies in recent decades. In this study, the main objective was to compare the accuracy of four data-driven techniques, Linear Regression (LR), Multilayer Perceptron (MLP), Support Vector Machine (SVM), and Long Short-Term Memory (LSTM) networks, in daily streamflow forecasting. For this purpose, three scenarios were defined based on 26 years of historical precipitation and streamflow series for the Kentucky River basin in eastern Kentucky, US. Statistical criteria including the coefficient of correlation (R), Nash-Sutcliffe coefficient of efficiency (E), Nash-Sutcliffe for high flow (EH), Nash-Sutcliffe for low flow (EL), normalized root mean square error (NRMSE), relative error in estimating maximum flow (REmax), threshold statistics (TS), and average absolute relative error (AARE) were employed to compare the performance of these methods. The results show that the LSTM network outperforms the other models in forecasting daily streamflow, with the lowest values of NRMSE and the highest values of EH, EL, and R under all scenarios. These findings indicate that the LSTM is a robust data-driven technique for characterizing time series behavior in hydrological modeling applications.
Journal Article
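Two of the comparison criteria the abstract lists, the Nash-Sutcliffe efficiency (E) and the normalized RMSE, can be written out directly. The sample flow values below are invented for illustration, not Kentucky River data.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nrmse(obs, sim):
    """Root mean square error normalized by the observed range."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / (obs.max() - obs.min())

# hypothetical observed vs. simulated daily flows
obs = [120.0, 150.0, 300.0, 220.0, 180.0]
sim = [110.0, 160.0, 280.0, 230.0, 170.0]
print(round(nse(obs, sim), 3), round(nrmse(obs, sim), 3))
```

The high-flow and low-flow variants (EH, EL) the abstract uses apply the same NSE formula restricted to the upper or lower part of the flow record.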
Application of artificial intelligence models and optimization algorithms in plant cell and tissue culture
Artificial intelligence (AI) models and optimization algorithms (OA) are broadly employed in different fields of technology and science and have recently been applied to improve different stages of plant tissue culture. The usefulness of AI-OA has been demonstrated in predicting and optimizing the length and number of microshoots or roots, biomass in plant cell cultures or hairy root culture, and environmental conditions for maximum productivity and efficiency, as well as in classifying microshoots and somatic embryos. Despite this potential, the use of AI and OA in the field has been limited by complex terminology and computational algorithms. A systematic review that unravels these modeling and optimization methods is therefore important for plant researchers and is provided in this study. First, the main steps of AI-OA development (from data selection to evaluation of prediction and classification models) are presented, along with several AI models such as artificial neural networks (ANNs), neurofuzzy logic, support vector machines (SVMs), decision trees, random forest (RF), and genetic algorithms (GA). Then, the application of AI-OA models in different steps of plant tissue culture is discussed and highlighted. The review also points out limitations of applying AI-OA in different plant tissue culture processes and provides a new view for future study objectives.

Key points:
• Artificial intelligence models and optimization algorithms can be considered a novel and reliable computational method in plant tissue culture.
• This review provides the main steps and concepts for model development.
• The application of machine learning algorithms in different steps of plant tissue culture has been discussed and highlighted.
Journal Article
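One of the optimization algorithms the review covers, a genetic algorithm, can be sketched on a toy tissue-culture problem: searching for the hormone concentration that maximizes shoot count. The response surface, parameter ranges, and GA settings below are hypothetical assumptions, not data from any cited study.

```python
import random

random.seed(42)

def fitness(conc):
    # made-up response: shoots peak around 2.0 mg/L, drop off on both sides
    return max(0.0, 10.0 - (conc - 2.0) ** 2)

def evolve(pop_size=20, generations=30, low=0.0, high=5.0):
    pop = [random.uniform(low, high) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                  # arithmetic crossover
            child += random.gauss(0, 0.1)        # Gaussian mutation
            children.append(min(high, max(low, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(best, 2))   # close to the optimum at 2.0 mg/L
```

In the AI-OA pipelines the review describes, the hand-written `fitness` function would typically be replaced by a trained model (e.g. an ANN) predicting the culture response from media composition.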
An Evaluation of Eight Machine Learning Regression Algorithms for Forest Aboveground Biomass Estimation from Multiple Satellite Data Products
2020
This study provided a comprehensive evaluation of eight machine learning regression algorithms for forest aboveground biomass (AGB) estimation from satellite data based on leaf area index, canopy height, net primary production, and tree cover data, as well as climatic and topographical data. Some of these algorithms have not been commonly used for forest AGB estimation, such as the extremely randomized trees, stochastic gradient boosting, and categorical boosting (CatBoost) regression. For each algorithm, its hyperparameters were optimized using grid search with cross-validation, the optimal AGB model was developed using the training dataset (80%), and AGB was predicted on the test dataset (20%). Performance metrics, feature importance, as well as overestimation and underestimation were considered as indicators for evaluating the performance of an algorithm. To reduce the impacts of the random training-test data split and sampling method on the performance, the above procedures were repeated 50 times for each algorithm under the random sampling, the stratified sampling, and separate modeling scenarios. The results showed that the five tree-based ensemble algorithms performed better than the three nonensemble algorithms (multivariate adaptive regression splines, support vector regression, and multilayer perceptron), and the CatBoost algorithm outperformed the other algorithms for AGB estimation. Compared with the random sampling scenario, the stratified sampling scenario and separate modeling did not significantly improve the AGB estimates, but modeling AGB for each forest type separately provided stable results in terms of the contributions of the predictor variables to the AGB estimates. All the algorithms showed that forest AGB was underestimated when the AGB values were larger than 210 Mg/ha and overestimated when the AGB values were less than 120 Mg/ha.
This study highlighted the capability of ensemble algorithms to improve AGB estimates and the necessity of improving AGB estimates for high and low AGB levels in future studies.
Journal Article
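The evaluation protocol this abstract describes, an 80/20 train/test split with hyperparameters chosen by grid search under cross-validation, can be sketched in a few lines. Closed-form ridge regression on synthetic data stands in here for the boosting models and satellite predictors, which are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                 # 5 synthetic predictor variables
y = X @ np.array([3.0, -2.0, 0.5, 0.0, 1.0]) + rng.normal(0, 0.5, 200)

def ridge_fit(X, y, alpha):
    # closed-form ridge: w = (X^T X + alpha I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def cv_rmse(X, y, alpha, k=5):
    """Mean RMSE over k cross-validation folds."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for f in folds:
        mask = np.ones(len(y), bool)
        mask[f] = False
        w = ridge_fit(X[mask], y[mask], alpha)
        errs.append(np.sqrt(np.mean((X[f] @ w - y[f]) ** 2)))
    return float(np.mean(errs))

# 80/20 split; grid search runs on the training part only
n_train = int(0.8 * len(y))
X_tr, y_tr, X_te, y_te = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
grid = [0.01, 0.1, 1.0, 10.0]
best_alpha = min(grid, key=lambda a: cv_rmse(X_tr, y_tr, a))
w = ridge_fit(X_tr, y_tr, best_alpha)
test_rmse = float(np.sqrt(np.mean((X_te @ w - y_te) ** 2)))
print(best_alpha, round(test_rmse, 3))
```

Repeating this whole procedure 50 times with fresh random splits, as the study does, averages out the luck of any single train/test partition.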