Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
18 result(s) for "Sadhu, Arup Kumar"
Multi-Agent Coordination
by Konar, Amit; Sadhu, Arup Kumar
in Computing and Processing; General Topics for Engineers; Multiagent systems
2020
Discover the latest developments in multi-robot coordination techniques with this insightful and original resource. Multi-Agent Coordination: A Reinforcement Learning Approach delivers a comprehensive and unique treatment of the development of multi-robot coordination algorithms with minimal computational burden and reduced storage requirements compared to traditional algorithms. The accomplished academics and engineers who authored this book provide readers with both a high-level introduction to, and overview of, multi-robot coordination, and in-depth analyses of learning-based planning algorithms. You'll learn how to accelerate the exploration of the team-goal, and alternative approaches to speeding up the convergence of TMAQL by identifying the preferred joint action for the team. The authors also propose novel approaches to consensus Q-learning that address the equilibrium selection problem, and a new way of evaluating the threshold value for uniting empires without imposing any significant computational overhead. Finally, the book concludes with an examination of the likely direction of future research in this rapidly developing field.
Readers will discover cutting-edge techniques for multi-agent coordination, including:
- An introduction to multi-agent coordination by reinforcement learning and evolutionary algorithms, covering topics like the Nash equilibrium and correlated equilibrium
- Improving the convergence speed of multi-agent Q-learning for cooperative task planning
- Consensus Q-learning for multi-agent cooperative planning
- Efficient computing of correlated equilibrium for cooperative Q-learning-based multi-agent planning
- A modified imperialist competitive algorithm for multi-agent stick-carrying applications
Perfect for academics, engineers, and professionals who regularly work with multi-agent learning algorithms, Multi-Agent Coordination: A Reinforcement Learning Approach also belongs on the bookshelf of anyone with an advanced interest in machine learning and artificial intelligence as applied to cooperative or competitive robotics.
Conclusions and Future Directions
by Konar, Amit; Sadhu, Arup Kumar
in Consensus Q-learning algorithm; Firefly Algorithm; Imperialist Competitive Algorithm
2020
This chapter concludes the book. The book identifies a few fundamental problems in multi-robot coordination and proposes solutions to them by extending the traditional evolutionary algorithm (EA) and multi-agent Q-learning. It provides the preliminaries of reinforcement learning and EA in the context of multi-robot coordination. The book proposes useful characteristic properties for exploration of the team-goal and joint action selection in multi-agent systems. It also proposes a novel consensus Q-learning algorithm for multi-robot cooperative planning, and introduces a novel approach to correlated Q-learning and subsequent multi-robot planning. Finally, it introduces a novel approach for efficiently combining the Imperialist Competitive Algorithm and the Firefly Algorithm into a hybrid algorithm that exploits the explorative and exploitative capabilities of both ancestor algorithms.
Book Chapter
A Modified Imperialist Competitive Algorithm for Multi-Robot Stick-Carrying Application
by Konar, Amit; Sadhu, Arup Kumar
in evolutionary optimization approach; Firefly Algorithm; hybridization mechanism
2020
This chapter proposes a novel evolutionary optimization approach for solving the multi-robot stick-carrying problem. It proposes a novel way to embed the motion dynamics of fireflies from the Firefly Algorithm (FA) into a sociopolitical evolution-based meta-heuristic search algorithm, the Imperialist Competitive Algorithm (ICA). The chapter also recommends a novel approach for evaluating the threshold value for uniting empires, which accelerates convergence. It formulates the multi-robot stick-carrying problem, then explores the proposed hybridization mechanism along with the experimental settings for the benchmarks and simulation strategies. It also provides computer simulations of the multi-robot stick-carrying problem in conjunction with experiments on Khepera-II mobile robots. The performance of the proposed algorithm is examined against the traditional ICA and the traditional FA with respect to the minimization of benchmark functions.
Book Chapter
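The firefly motion dynamics that this chapter embeds into ICA can be sketched as follows. This is a minimal, generic FA position update, not the chapter's hybrid: the function name, parameter values, and the perturbation term are illustrative assumptions.

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2, rng=random.random):
    """Move firefly xi toward a brighter firefly xj (generic FA step)."""
    # Squared Euclidean distance r^2 between the two fireflies
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    # Attractiveness decays exponentially with distance: beta0 * exp(-gamma * r^2)
    beta = beta0 * math.exp(-gamma * r2)
    # Attraction toward xj plus a small random perturbation per coordinate
    return [a + beta * (b - a) + alpha * (rng() - 0.5) for a, b in zip(xi, xj)]
```

In the hybrid the chapter describes, a step of this kind would replace or augment the colony-assimilation move of ICA, so colonies drift toward their imperialist with distance-dependent attractiveness rather than at a fixed rate.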
Consensus Q-Learning for Multi-agent Cooperative Planning
by Konar, Amit; Sadhu, Arup Kumar
in adaption mechanism; consensus Q-learning; consensus-based multi-robot cooperative planning algorithm
2020
This chapter proposes consensus-based multi-agent Q-learning (MAQL) to address the bottleneck of selecting the optimal equilibrium among multiple equilibrium types. It briefly introduces the adaptation mechanism of single-agent Q-learning and the state-of-the-art equilibrium-based MAQL algorithms. The cooperative control problem employing PGs, focusing mainly on the consensus problem, is then briefly discussed. The chapter proposes a novel consensus Q-learning (CoQL) algorithm and, subsequently, a consensus-based multi-robot cooperative planning algorithm. Two experiments are presented: the first studies the relative performance of CoQL over the reference algorithms; the second studies the relative performance of the consensus-based planning algorithm over the reference algorithms, taking the multi-robot stick-carrying problem as a benchmark measured in the state transitions required to complete the task.
Book Chapter
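One plausible reading of consensus-based joint-action selection can be sketched as below. This is an illustrative toy, not the chapter's CoQL rule: each agent keeps its own Q-table over joint state-action pairs, a unanimous preference is taken as consensus, and the fallback criterion (summed Q-values) is an assumption.

```python
def consensus_joint_action(q_tables, joint_state, joint_actions):
    """Pick a joint action the agents can agree on at a joint state."""
    # Each agent's individually preferred joint action under its own Q-table
    prefs = [max(joint_actions, key=lambda ja, q=q: q[(joint_state, ja)])
             for q in q_tables]
    if all(p == prefs[0] for p in prefs):
        return prefs[0]  # unanimous preference: consensus reached directly
    # Otherwise fall back to the joint action maximizing the summed Q-values
    return max(joint_actions,
               key=lambda ja: sum(q[(joint_state, ja)] for q in q_tables))
```

The point of a consensus rule is that when several equilibria exist, independent greedy choices can land agents on mismatched equilibria; forcing agreement on one joint action avoids that miscoordination.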
Improve Convergence Speed of Multi-Agent Q-Learning for Cooperative Task Planning
by Konar, Amit; Sadhu, Arup Kumar
in fast cooperative multi-agent Q-learning; multi-agent cooperative planning; reinforcement learning
2020
This chapter aims to extend traditional multi-agent Q-learning (MAQL) algorithms to improve their speed of convergence by incorporating two interesting properties, concerning exploration of the team-goal and selection of joint action at a given joint state. It begins by reviewing the preliminaries of reinforcement learning (RL). The motivation of RL is to derive the optimal action at a given environmental state, for which the agent would receive the maximum reward. Deriving the optimal action at a given state from the learned experience of interacting with the environment has plenty of interesting applications, including generating moves in a game, and complex task-planning and motion-planning of a mobile robot in a constrained environment. The chapter then introduces the proposed fast cooperative multi-agent Q-learning algorithms, deals with multi-agent cooperative planning algorithms, and includes experiments and results.
Book Chapter
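The single-agent Q-learning update the chapter builds on can be sketched as a standard tabular step; the function name and default step sizes here are illustrative, not taken from the book.

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: Q(s,a) += alpha * (target - Q(s,a))."""
    # Bootstrapped target: immediate reward plus discounted best next value
    target = r + gamma * max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    # Move Q(s, a) a step of size alpha toward the target
    Q[(s, a)] = old + alpha * (target - old)
    return Q[(s, a)]
```

The multi-agent extensions the chapter proposes keep this update shape but operate over joint states and joint actions, which is why pruning the exploration of team-goals and joint actions matters for convergence speed.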
Introduction
by Konar, Amit; Sadhu, Arup Kumar
in evolutionary algorithms; evolutionary optimization; multi-agent coordination
2020
This chapter provides an introduction to multi-agent coordination by reinforcement learning (RL) and evolutionary algorithms. It also provides a thorough survey of the existing RL literature, with a brief overview of evolutionary optimization (EO), to examine the role of these algorithms in the context of multi-agent coordination. The chapter classifies multi-agent coordination by different criteria, such as the level of cooperation, knowledge sharing, communication, and the like. It also covers multi-robot coordination employing EO, and especially RL, for cooperative games, competitive games, and their composition, with application to static and dynamic games. The chapter describes single-agent planning terminologies and algorithms, and gives an overview of the metrics used to compare the performance of coordination algorithms.
Book Chapter
An Efficient Computing of Correlated Equilibrium for Cooperative Q-Learning-Based Multi-Robot Planning
by Konar, Amit; Sadhu, Arup Kumar
in correlated equilibrium; correlated Q-learning; equilibrium-based multi-agent Q-learning
2020
This chapter introduces a novel approach that adapts the composite rewards of all the agents in one Q-table in joint state-action space during learning, and uses these rewards to compute a correlated equilibrium in the planning phase. The ΩQ-learning algorithms proposed in this chapter have two attractive features not available in traditional correlated Q-learning (CQL). First, during the learning phase, an agent needs to adapt only one Q-table in joint state-action space, unlike adapting m joint Q-tables for m agents in CQL. Second, the evaluation of the computationally expensive correlated equilibrium is avoided by computing it partially during the learning phase and the rest during the planning phase. The chapter provides an overview of single-agent Q-learning and equilibrium-based MAQL algorithms, then presents the proposed cooperative MAQL and corresponding planning algorithms, offers a complexity analysis, and presents simulation and experimental results.
Book Chapter
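The single-joint-Q-table idea can be sketched as below. This is a minimal illustration of adapting one table with a composite reward, assuming (as a simplification) that the composite is the sum of the agents' rewards and that the next-state value is a plain max; the chapter's actual composite-reward handling and correlated-equilibrium computation are more involved.

```python
def joint_q_update(Q, js, ja, rewards, js_next, joint_actions,
                   alpha=0.1, gamma=0.9):
    """Adapt one Q-table over joint (state, action) pairs for all agents."""
    composite = sum(rewards)  # composite reward of all m agents
    # Discounted best value over joint actions at the next joint state
    best_next = max(Q.get((js_next, a), 0.0) for a in joint_actions)
    old = Q.get((js, ja), 0.0)
    Q[(js, ja)] = old + alpha * (composite + gamma * best_next - old)
    return Q[(js, ja)]
```

The storage saving is the point: m agents share one table keyed on joint state-action pairs, instead of each agent maintaining its own joint Q-table as in CQL.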
Front Matter
2020
The prelims comprise: Half-Title Page, Series Page, Title Page, Copyright Page, Table of Contents, Preface, Acknowledgments, About the Authors.
Book Chapter