Catalogue Search | MBRL
341,578 result(s) for "learning (artificial intelligence)"
The Atlas of AI
by Crawford, Kate
in Artificial intelligence, Artificial intelligence -- Moral and ethical aspects, Artificial intelligence -- Political aspects
2021
The hidden costs of artificial intelligence, from natural resources and labor to privacy and freedom. What happens when artificial intelligence saturates political life and depletes the planet? How is AI shaping our understanding of ourselves and our societies? In this book Kate Crawford reveals how this planetary network is fueling a shift toward undemocratic governance and increased inequality. Drawing on more than a decade of research, the award-winning scholar reveals how AI is a technology of extraction: from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind "automated" services, to the data AI collects from us. Rather than taking a narrow focus on code and algorithms, Crawford offers us a political and a material perspective on what it takes to make artificial intelligence and where it goes wrong. While technical systems present a veneer of objectivity, they are always systems of power. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world.
Scientific discovery in the age of artificial intelligence
2023
Artificial intelligence (AI) is being increasingly integrated into scientific discovery to augment and accelerate research, helping scientists to generate hypotheses, design experiments, collect and interpret large datasets, and gain insights that might not have been possible using traditional scientific methods alone. Here we examine breakthroughs over the past decade that include self-supervised learning, which allows models to be trained on vast amounts of unlabelled data, and geometric deep learning, which leverages knowledge about the structure of scientific data to enhance model accuracy and efficiency. Generative AI methods can create designs, such as small-molecule drugs and proteins, by analysing diverse data modalities, including images and sequences. We discuss how these methods can help scientists throughout the scientific process and the central issues that remain despite such advances. Both developers and users of AI tools need a better understanding of when such approaches need improvement, and challenges posed by poor data quality and stewardship remain. These issues cut across scientific disciplines and require developing foundational algorithmic approaches that can contribute to scientific understanding or acquire it autonomously, making them critical areas of focus for AI innovation.
The advances in artificial intelligence over the past decade are examined, with a discussion on how artificial intelligence systems can aid the scientific process and the central issues that remain despite advances.
Journal Article
Human-in-the-Loop Reinforcement Learning: A Survey and Position on Requirements, Challenges, and Opportunities
by
Retzlaff, Carl Orge
,
Saranti, Anna
,
Holzinger, Andreas
in
Agents (artificial intelligence)
,
Artificial intelligence
,
Explainable artificial intelligence
2024
Artificial intelligence (AI) and especially reinforcement learning (RL) have the potential to enable agents to learn and perform tasks autonomously with superhuman performance. However, we consider RL as fundamentally a Human-in-the-Loop (HITL) paradigm, even when an agent eventually performs its task autonomously. In cases where the reward function is challenging or impossible to define, HITL approaches are considered particularly advantageous. The application of Reinforcement Learning from Human Feedback (RLHF) in systems such as ChatGPT demonstrates the effectiveness of optimizing for user experience and integrating their feedback into the training loop. In HITL RL, human input is integrated during the agent’s learning process, allowing iterative updates and fine-tuning based on human feedback, thus enhancing the agent’s performance. Since the human is an essential part of this process, we argue that human-centric approaches are the key to successful RL, a fact that has not been adequately considered in the existing literature. This paper aims to inform readers about current explainability methods in HITL RL. It also shows how the application of explainable AI (xAI) and specific improvements to existing explainability approaches can enable a better human-agent interaction in HITL RL for all types of users, whether for lay people, domain experts, or machine learning specialists. Accounting for the workflow in HITL RL and based on software and machine learning methodologies, this article identifies four phases for human involvement for creating HITL RL systems: (1) Agent Development, (2) Agent Learning, (3) Agent Evaluation, and (4) Agent Deployment. We highlight human involvement, explanation requirements, new challenges, and goals for each phase. We furthermore identify low-risk, high-return opportunities for explainability research in HITL RL and present long-term research goals to advance the field. 
Finally, we propose a vision of human-robot collaboration that allows both parties to reach their full potential and cooperate effectively.
Journal Article
Deep learning
"Artificial Intelligence is a disruptive technology across business and society. There are three long-term trends driving this AI revolution: the emergence of Big Data, the creation of cheaper and more powerful computers, and the development of better algorithms for processing and learning from data. Deep learning is the subfield of Artificial Intelligence that focuses on creating large neural network models that are capable of making accurate data-driven decisions. Modern neural networks are the most powerful computational models we have for analyzing massive and complex datasets, and consequently deep learning is ideally suited to take advantage of the rapid growth in Big Data and computational power. In the last ten years, deep learning has become the fundamental technology in computer vision systems, speech recognition on mobile phones, information retrieval systems, machine translation, game AI, and self-driving cars. It is set to have a massive impact in healthcare, finance, and smart cities over the next years. This book is designed to give an accessible and concise, but also comprehensive, introduction to the field of Deep Learning. The book explains what deep learning is, how the field has developed, what deep learning can do, and also discusses how the field is likely to develop in the next 10 years. Along the way, the most important neural network architectures are described, including autoencoders, recurrent neural networks, long short-term memory networks, convolutional networks, and more recent developments such as Generative Adversarial Networks, transformer networks, and capsule networks. The book also covers the two most important algorithms for training a neural network: the gradient descent algorithm and backpropagation." -- Provided by publisher.
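The blurb above names gradient descent and backpropagation as the two core training algorithms. A minimal runnable sketch of both, training a one-hidden-layer network on a toy task; the network size, data, and hyperparameters are illustrative assumptions, not taken from the book:

```python
import numpy as np

# Toy task: learn y = x^2 with a 1 -> 16 -> 1 tanh network.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = X ** 2

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    loss = np.mean((pred - y) ** 2)   # mean squared error

    # Backward pass (backpropagation): apply the chain rule layer by layer.
    d_pred = 2 * (pred - y) / len(X)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = d_pred @ W2.T * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient descent update: step against the gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```

The same two ideas scale to the deep architectures the book surveys; only the layer structure and the per-layer derivative rules change.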
A Survey of Zero-shot Generalisation in Deep Reinforcement Learning
by Zhang, Amy; Rocktäschel, Tim; Grefenstette, Edward
in Algorithms, Artificial intelligence, Benchmarks
2023
The study of zero-shot generalisation (ZSG) in deep Reinforcement Learning (RL) aims to produce RL algorithms whose policies generalise well to novel unseen situations at deployment time, avoiding overfitting to their training environments. Tackling this is vital if we are to deploy reinforcement learning algorithms in real-world scenarios, where the environment will be diverse, dynamic and unpredictable. This survey is an overview of this nascent field. We rely on a unifying formalism and terminology for discussing different ZSG problems, building upon previous works. We go on to categorise existing benchmarks for ZSG, as well as current methods for tackling these problems. Finally, we provide a critical discussion of the current state of the field, including recommendations for future work. Among other conclusions, we argue that taking a purely procedural content generation approach to benchmark design is not conducive to progress in ZSG, we suggest fast online adaptation and tackling RL-specific problems as some areas for future work on methods for ZSG, and we recommend building benchmarks in underexplored problem settings such as offline RL ZSG and reward-function variation.
Journal Article
Confident Learning: Estimating Uncertainty in Dataset Labels
2021
Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence. Whereas numerous studies have developed these principles independently, here, we combine them, building on the assumption of a class-conditional noise process to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This results in a generalized CL which is provably consistent and experimentally performant. We present sufficient conditions where CL exactly finds label errors, and show CL performance exceeding seven recent competitive approaches for learning with noisy labels on the CIFAR dataset. Uniquely, the CL framework is not coupled to a specific data modality or model (e.g., we use CL to find several label errors in the presumed error-free MNIST dataset and improve sentiment classification on text data in Amazon Reviews). We also employ CL on ImageNet to quantify ontological class overlap (e.g., estimating 645 missile images are mislabeled as their parent class projectile), and moderately increase model accuracy (e.g., for ResNet) by cleaning data prior to training. These results are replicable using the open-source cleanlab release.
Journal Article
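The abstract above describes confident learning's core principle: counting with per-class probabilistic thresholds to flag examples whose given label disagrees with a confident prediction. A minimal illustrative sketch of that thresholding idea; this is not the cleanlab implementation, and the toy data and function name are assumptions:

```python
import numpy as np

def find_label_issues(labels, pred_probs):
    """Flag likely label errors via per-class confident thresholds.

    For each class j, the threshold t_j is the mean predicted probability
    of class j over examples *given* label j. An example is flagged when
    it confidently belongs (prob >= threshold) only to classes other than
    its given label.
    """
    n_classes = pred_probs.shape[1]
    thresholds = np.array([
        pred_probs[labels == j, j].mean() for j in range(n_classes)
    ])
    issues = []
    for i, (label, probs) in enumerate(zip(labels, pred_probs)):
        confident = np.flatnonzero(probs >= thresholds)
        if confident.size and label not in confident:
            issues.append(i)
    return np.array(issues, dtype=int)

# Toy data: example 2 is labeled class 0, but the model is
# confident it is class 1, so it is flagged as a label issue.
labels = np.array([0, 0, 0, 1, 1])
pred_probs = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.1, 0.9],   # mislabeled example
    [0.2, 0.8],
    [0.3, 0.7],
])
print(find_label_issues(labels, pred_probs))  # -> [2]
```

The full method in the paper goes further, estimating the joint distribution of noisy and true labels from these confident counts; the open-source cleanlab release the abstract mentions implements that complete pipeline.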