Catalogue Search | MBRL
Search Results
672,818 results for "Machine learning"
Gaussian Processes for Machine Learning
By Williams, Christopher K. I.; Rasmussen, Carl Edward
In Artificial intelligence; Computer Science; Computing and Information Technology
2005, 2006
Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of the theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics. The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed from both a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.
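To make the supervised-learning setting concrete, here is a minimal Gaussian-process regression sketch using scikit-learn; the toy data, kernel choice and hyperparameters are illustrative assumptions, not material from the book.

```python
# Minimal Gaussian-process regression sketch (illustrative only; toy data,
# not taken from the book). Requires numpy and scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1-D regression problem: noisy observations of sin(x).
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(30, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(30)

# Covariance (kernel) function: squared exponential plus observation noise.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

# Fitting maximises the log marginal likelihood over the kernel
# hyperparameters, one of the model-selection routes the book discusses.
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Predictive mean and uncertainty at new inputs.
X_test = np.linspace(0, 10, 100).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(mean[:5], std[:5])
```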
eBook
Human-in-the-loop machine learning: a state of the art
2023
Researchers are defining new types of interactions between humans and machine learning algorithms, generically called human-in-the-loop machine learning. Depending on who is in control of the learning process, we can identify: active learning, in which the system remains in control; interactive machine learning, in which there is a closer interaction between users and learning systems; and machine teaching, where human domain experts have control over the learning process. Aside from control, humans can also be involved in the learning process in other ways. In curriculum learning, human domain experts try to impose some structure on the examples presented in order to improve learning; in explainable AI, the focus is on the ability of the model to explain to humans why a given solution was chosen. This collaboration between AI models and humans should not be limited to the learning process; if we go further, we see other terms arise, such as Usable and Useful AI. In this paper we review the state of the art of the techniques involved in these new forms of relationship between humans and ML algorithms. Our contribution is not merely to list the different approaches, but to provide definitions that clarify confusing, varied and sometimes contradictory terms; to elucidate and determine the boundaries between the different methods; and to correlate all the techniques, searching for the connections and influences between them.
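To illustrate the case in which the system remains in control, the sketch below shows pool-based active learning with uncertainty sampling; the dataset, classifier and query budget are assumptions chosen for illustration, not taken from the paper.

```python
# Minimal pool-based active-learning sketch (uncertainty sampling).
# Illustrative only: the dataset, classifier and budget are assumptions,
# not taken from the surveyed paper. Requires numpy and scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)

# Start with a small labelled seed set; the rest forms the unlabelled pool.
labelled = list(rng.choice(len(X), size=10, replace=False))
pool = [i for i in range(len(X)) if i not in labelled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # query budget: 20 labels requested from the human oracle
    model.fit(X[labelled], y[labelled])
    # The system stays in control: it picks the pool point it is least sure about.
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)
    query = pool[int(np.argmax(uncertainty))]
    # In a real loop a human would supply this label; here we read it from y.
    labelled.append(query)
    pool.remove(query)

print("final training-set size:", len(labelled))
```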
Journal Article
GPT-4 is here: what scientists think
2023
Researchers are excited about the AI — but many are frustrated that its underlying engineering is cloaked in secrecy.
[Image: The GPT-4 logo in a photo illustration, 13 March 2023, Warsaw, Poland. Credit: Jaap Arriens/NurPhoto via Getty]
Journal Article
Deep learning in practice
\"Deep Learning in Practice helps you learn how to develop and optimize a model for your projects using Deep Learning (DL) methods and architectures. This book is useful for undergraduate and graduate students, as well as practitioners in industry and academia. It will serve as a useful reference for learning deep learning fundamentals and implementing a deep learning model for any project, step by step\"-- Provided by publisher.
Predictors of real-time fMRI neurofeedback performance and improvement – A machine learning mega-analysis
By Hendler, Talma; Bodurka, Jerzy; Megumi, Fukuda
In 501011 Cognitive psychology; 501011 Kognitionspsychologie; Adult
2021
• First machine learning mega-analysis to investigate predictors of real-time fMRI neurofeedback success.
• Inclusion of a pre-training no-feedback run was associated with higher neurofeedback performance.
• Patients were associated with higher neurofeedback performance than healthy individuals.
• More data (sharing) in the future will allow for design optimization and a better understanding of neurofeedback learning.
Real-time fMRI neurofeedback is an increasingly popular neuroimaging technique that allows individuals to gain control over their own brain signals, which can lead to improvements in behavior in healthy participants as well as improvements of clinical symptoms in patient populations. However, a considerable proportion of participants undergoing neurofeedback training do not learn to control their own brain signals and, consequently, do not benefit from neurofeedback interventions, which limits their clinical efficacy. As neurofeedback success varies between studies and participants, it is important to identify factors that might influence it. Here, for the first time, we employed a big-data machine learning approach to investigate the influence of 20 different design-specific (e.g. activity vs. connectivity feedback), region-of-interest-specific (e.g. cortical vs. subcortical) and subject-specific factors (e.g. age) on neurofeedback performance and improvement in 608 participants from 28 independent experiments.
With a classification accuracy of 60% (considerably different from chance level), we identified two factors that significantly influenced neurofeedback performance: both the inclusion of a pre-training no-feedback run before neurofeedback training and training patients rather than healthy participants were associated with better neurofeedback performance. The positive effect of pre-training no-feedback runs might be due to participants becoming familiar with the neurofeedback setup and the mental imagery task before the training runs. The better performance of patients compared with healthy participants might be driven by higher motivation, larger ranges for the regulation of dysfunctional brain signals, or more extensive piloting of clinical experimental paradigms. Given the large heterogeneity of our dataset, these findings likely generalize across neurofeedback studies, thus providing guidance for designing more efficient neurofeedback studies, particularly for improving clinical neurofeedback-based interventions. To facilitate the development of data-driven recommendations for specific design details and subpopulations, the field would benefit from stronger engagement in open-science research practices and data sharing.
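In outline, the analysis the abstract describes is a supervised classification problem: predict neurofeedback success from design-, ROI- and subject-specific factors. The sketch below shows that kind of setup with a random forest and cross-validation; the synthetic data, feature names and classifier choice are assumptions for illustration, not the study's actual data or pipeline.

```python
# Sketch of the kind of classification analysis the abstract describes:
# predicting neurofeedback success (good vs. poor performance) from
# design-, ROI- and subject-specific factors. Synthetic data and the
# classifier choice are assumptions, not the study's actual pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 608  # number of participants in the mega-analysis

# Hypothetical predictor table (a handful of the 20 factors, for illustration).
factors = pd.DataFrame({
    "pre_training_no_feedback_run": rng.integers(0, 2, n),  # design-specific
    "connectivity_feedback": rng.integers(0, 2, n),          # design-specific
    "subcortical_roi": rng.integers(0, 2, n),                # ROI-specific
    "patient": rng.integers(0, 2, n),                        # subject-specific
    "age": rng.normal(35, 12, n),                            # subject-specific
})
success = rng.integers(0, 2, n)  # placeholder labels: good vs. poor regulation

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, factors, success, cv=5)
print("cross-validated accuracy:", scores.mean())

# Feature importances give a first look at which factors drive performance.
clf.fit(factors, success)
print(dict(zip(factors.columns, clf.feature_importances_.round(3))))
```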
Journal Article
Oracle business intelligence with machine learning : artificial intelligence techniques in OBIEE for actionable BI
Use machine learning and Oracle Business Intelligence Enterprise Edition (OBIEE) as a comprehensive BI solution. This book follows a when-to, why-to, and how-to approach to explain the key steps involved in utilizing the artificial intelligence components now available for a successful OBIEE implementation. Oracle Business Intelligence with Machine Learning covers various technologies, including Oracle OBIEE, R Enterprise, Spatial Maps, and machine learning for advanced visualization and analytics. The machine learning material focuses on learning representations of input data suitable for a given prediction problem. The book concentrates on the practical aspects of implementing machine learning solutions using the rich Oracle BI ecosystem, with the primary objective of bridging the gap between the academic state of the art and the industry state of the practice by introducing you to machine learning with OBIEE. You will:
• See machine learning in OBIEE
• Master the fundamentals of machine learning and how it pertains to BI and advanced analytics
• Gain an introduction to Oracle R Enterprise
• Discover the practical considerations of implementing machine learning with OBIEE
First return, then explore
2021
Reinforcement learning promises to solve complex sequential-decision problems autonomously by specifying a high-level reward function only. However, reinforcement learning algorithms struggle when, as is often the case, simple and intuitive rewards provide sparse [1] and deceptive [2] feedback. Avoiding these pitfalls requires a thorough exploration of the environment, but creating algorithms that can do so remains one of the central challenges of the field. Here we hypothesize that the main impediment to effective exploration originates from algorithms forgetting how to reach previously visited states (detachment) and failing to first return to a state before exploring from it (derailment). We introduce Go-Explore, a family of algorithms that addresses these two challenges directly through the simple principles of explicitly 'remembering' promising states and returning to such states before intentionally exploring. Go-Explore solves all previously unsolved Atari games and surpasses the state of the art on all hard-exploration games [1], with orders-of-magnitude improvements on the grand challenges of Montezuma's Revenge and Pitfall. We also demonstrate the practical potential of Go-Explore on a sparse-reward pick-and-place robotics task. Additionally, we show that adding a goal-conditioned policy can further improve Go-Explore's exploration efficiency and enable it to handle stochasticity throughout training. The substantial performance gains from Go-Explore suggest that the simple principles of remembering states, returning to them, and exploring from them are a powerful and general approach to exploration, an insight that may prove critical to the creation of truly intelligent learning agents.
A reinforcement learning algorithm that explicitly remembers promising states and returns to them as a basis for further exploration solves all as-yet-unsolved Atari games and outperforms previous algorithms on Montezuma's Revenge and Pitfall.
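A minimal sketch of the "first return, then explore" principle described above: archive the states ("cells") visited so far, return to a promising archived state by replaying its stored trajectory, then explore from it. The toy deterministic chain environment and the cell-selection heuristic are assumptions for illustration, not the authors' implementation.

```python
# Minimal Go-Explore-style loop: archive visited states ("cells"), return to a
# promising cell, then explore from it with random actions. Toy deterministic
# 1-D chain environment; an illustration of the principle only.
import random

GOAL = 20  # hypothetical goal position on a 1-D chain

def step(state, action):
    """Deterministic toy dynamics: move left/right on a bounded chain."""
    return max(0, state + (1 if action == "right" else -1))

# Archive: cell -> shortest known trajectory (list of actions) reaching it.
archive = {0: []}

for episode in range(500):
    # Select a promising cell to return to (here simply the furthest one found).
    cell = max(archive)
    trajectory = list(archive[cell])

    # Phase 1 ("first return"): replay the stored trajectory to reach the cell.
    state = 0
    for action in trajectory:
        state = step(state, action)

    # Phase 2 ("then explore"): take a few random actions from that state.
    for _ in range(5):
        action = random.choice(["left", "right"])
        state = step(state, action)
        trajectory.append(action)
        # Remember new cells, or shorter routes to already-known cells.
        if state not in archive or len(trajectory) < len(archive[state]):
            archive[state] = list(trajectory)

    if GOAL in archive:
        print("goal reached after", episode + 1, "episodes;",
              "trajectory length:", len(archive[GOAL]))
        break
```

Because the archive remembers how each cell was reached, the agent never forgets promising states (detachment) and always returns to a state before exploring from it (derailment), which is the core idea the abstract highlights.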
Journal Article