18 results for "Sadhu, Arup Kumar"
Multi-Agent Coordination
Discover the latest developments in multi-robot coordination techniques with this insightful and original resource. Multi-Agent Coordination: A Reinforcement Learning Approach delivers a comprehensive and unique treatment of the development of multi-robot coordination algorithms with minimal computational burden and reduced storage requirements compared to traditional algorithms. The authors, accomplished academics and engineers, provide readers with both a high-level introduction to and overview of multi-robot coordination, and in-depth analyses of learning-based planning algorithms. You'll learn how to accelerate exploration of the team-goal, and alternative approaches to speeding up the convergence of TMAQL by identifying the preferred joint action for the team. The authors also propose novel approaches to consensus Q-learning that address the equilibrium selection problem, and a new way of evaluating the threshold value for uniting empires without imposing significant computational overhead. Finally, the book concludes with an examination of the likely direction of future research in this rapidly developing field.

Readers will discover cutting-edge techniques for multi-agent coordination, including:
  • An introduction to multi-agent coordination by reinforcement learning and evolutionary algorithms, including topics like the Nash equilibrium and correlated equilibrium
  • Improving the convergence speed of multi-agent Q-learning for cooperative task planning
  • Consensus Q-learning for multi-agent cooperative planning
  • Efficient computing of correlated equilibrium for cooperative Q-learning-based multi-agent planning
  • A modified imperialist competitive algorithm for multi-agent stick-carrying applications

Perfect for academics, engineers, and professionals who regularly work with multi-agent learning algorithms, Multi-Agent Coordination: A Reinforcement Learning Approach also belongs on the bookshelf of anyone with an advanced interest in machine learning and artificial intelligence as it applies to cooperative or competitive robotics.
Conclusions and Future Directions
This chapter concludes the book. The book identifies a few fundamental problems in multi‐robot coordination and proposes solutions by extending the traditional evolutionary algorithm (EA) and multi‐agent Q‐learning. It provides the preliminaries of reinforcement learning and EA in the context of multi‐robot coordination. The book proposes useful characteristic properties for exploration of the team‐goal and joint action selection in multi‐agent systems. It also proposes a novel consensus Q‐learning algorithm for multi‐robot cooperative planning, and introduces a novel approach to correlated Q‐learning and subsequent multi‐robot planning. Finally, it introduces an approach that efficiently combines the Imperialist Competitive Algorithm and the Firefly Algorithm into a hybrid that exploits the explorative and exploitative capabilities of both ancestor algorithms.
A Modified Imperialist Competitive Algorithm for Multi-Robot Stick-Carrying Application
This chapter proposes a novel evolutionary optimization approach to solving a multi‐robot stick‐carrying problem. It embeds the motion dynamics of fireflies from the Firefly Algorithm (FA) into a sociopolitical, evolution‐based meta‐heuristic search algorithm known as the Imperialist Competitive Algorithm (ICA). The chapter also proposes a novel approach to evaluating the threshold value for uniting empires, which accelerates convergence. It formulates the multi‐robot stick‐carrying problem, then explores the proposed hybridization mechanism along with the experimental settings for the benchmarks and the simulation strategies. It also provides computer simulations of the multi‐robot stick‐carrying problem together with experiments on Khepera‐II mobile robots. The performance of the proposed algorithm is compared with that of the traditional ICA and the traditional FA with respect to the minimization of benchmark functions.
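To make the hybridization concrete, here is a minimal sketch of what a firefly-style assimilation step inside ICA could look like. This is illustrative only: the function name and the parameters beta0, gamma_fa, and alpha_fa are assumptions for exposition, not the chapter's actual hybrid, and the empire-uniting threshold is not modeled here.

    import numpy as np

    def assimilate_firefly(colony, imperialist, beta0=1.0, gamma_fa=1.0, alpha_fa=0.1):
        """Move a colony toward its imperialist using the Firefly Algorithm's
        attraction law beta = beta0 * exp(-gamma * r^2), in place of ICA's
        straight-line assimilation step. All parameter values are illustrative."""
        r2 = np.sum((imperialist - colony) ** 2)   # squared distance to imperialist
        beta = beta0 * np.exp(-gamma_fa * r2)      # attractiveness decays with distance
        noise = alpha_fa * (np.random.rand(colony.size) - 0.5)  # small random-walk term
        return colony + beta * (imperialist - colony) + noise

The attraction term pulls nearby colonies more strongly toward their imperialist (exploitation), while the random term preserves exploration, mirroring the composite benefit the authors seek from the two ancestor algorithms.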
Consensus Q-Learning for Multi-agent Cooperative Planning
This chapter proposes consensus‐based multi‐agent Q‐learning (MAQL) to address the bottleneck of optimal equilibrium selection when multiple equilibria exist. It briefly introduces the adaptation mechanism of single‐agent Q‐learning (QL) and the state‐of‐the‐art equilibrium‐based MAQL algorithms. The cooperative control problem employing PGs, focusing mainly on the consensus problem, is then briefly discussed. The chapter proposes a novel consensus QL (CoQL) algorithm, and subsequently a consensus‐based multi‐robot cooperative planning algorithm. Two experiments are then presented: the first studies the relative performance of CoQL against the reference algorithms; the second studies the relative performance of the consensus‐based planning algorithm against the reference algorithms, taking the multi‐robot stick‐carrying problem as a benchmark and measuring the state transitions required to complete the task.
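As a rough illustration of the consensus idea, the sketch below selects the joint action whose Q-value, averaged across the agents' individual Q-tables, is largest. The mean-consensus rule and the data layout are assumptions made for exposition; the chapter's actual CoQL update is not reproduced here.

    import numpy as np

    def consensus_joint_action(q_tables, joint_state, joint_actions):
        """q_tables: one mapping per agent from (joint_state, joint_action)
        to that agent's Q-value. Returns the joint action the team 'agrees'
        on under a simple mean-consensus rule (an illustrative choice)."""
        def mean_value(ja):
            return np.mean([q[(joint_state, ja)] for q in q_tables])
        return max(joint_actions, key=mean_value)

Because every agent evaluates the same averaged quantity, all agents pick the same joint action, sidestepping the coordination failure that arises when multiple equilibria are equally attractive.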
Improving Convergence Speed of Multi-Agent Q-Learning for Cooperative Task Planning
This chapter extends traditional multi‐agent Q‐learning (MAQL) algorithms to improve their speed of convergence by incorporating two interesting properties, concerning exploration of the team‐goal and selection of the joint action at a given joint state. It begins by reviewing the preliminaries of reinforcement learning (RL). The goal of RL is to derive the optimal action at a given environmental state, the action for which the agent obtains the maximum reward. This formulation of deriving the optimal action at a given state from the learned experience of interacting with the environment has many interesting applications, including generating moves in a game and the complex task‐planning and motion‐planning of a mobile robot in a constrained environment. The chapter then introduces the proposed fast cooperative multi‐agent Q‐learning algorithms, covers the multi‐agent cooperative planning algorithms, and includes experiments and results.
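For readers new to the preliminaries, the following is a minimal tabular Q-learning sketch of the single-agent update the chapter builds on. The environment interface (reset, step, actions) and all hyperparameter values are assumptions for illustration; the chapter's fast multi-agent variants are not shown.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
        """Standard one-step update:
        Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        Q = defaultdict(float)  # (state, action) -> estimated value
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                if random.random() < epsilon:             # explore
                    a = random.choice(env.actions)
                else:                                     # exploit
                    a = max(env.actions, key=lambda act: Q[(s, act)])
                s2, r, done = env.step(a)
                best_next = max(Q[(s2, a2)] for a2 in env.actions)
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
                s = s2
        return Q

The properties the chapter proposes act on the exploration step and on joint-action selection in the multi-agent extension of exactly this update.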
Introduction
This chapter provides an introduction to multi‐agent coordination by reinforcement learning (RL) and evolutionary algorithms. It also provides a thorough survey of the existing RL literature, with a brief overview of evolutionary optimization (EO), to examine the role of these algorithms in the context of multi‐agent coordination. The chapter classifies multi‐agent coordination according to different criteria, such as the level of cooperation, knowledge sharing, communication, and the like. It also covers multi‐robot coordination employing EO, and especially RL, for cooperative settings, competitive settings, and their composition, applied to static and dynamic games. The chapter describes single‐agent planning terminologies and algorithms, and concludes with an overview of the metrics used to compare the performance of coordination algorithms.
An Efficient Computing of Correlated Equilibrium for Cooperative Q-Learning-Based Multi-Robot Planning
This chapter introduces a novel approach that adapts the composite rewards of all the agents in one Q‐table in joint state‐action space during learning, and uses these rewards to compute the correlated equilibrium in the planning phase. The ΩQ‐learning algorithms proposed in this chapter have two attractive features not available in traditional correlated Q‐learning (CQL). First, during the learning phase, an agent needs to adapt only one Q‐table in joint state‐action space, unlike the m joint Q‐tables adapted for m agents in CQL. Second, the evaluation of the computationally expensive correlated equilibrium is avoided by computing it partially during the learning phase and the rest during the planning phase. The chapter provides an overview of single‐agent Q‐learning and equilibrium‐based multi‐agent Q‐learning (MAQL) algorithms, presents the proposed cooperative MAQL and corresponding planning algorithms, offers a complexity analysis, and reports simulation and experimental results.
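The storage saving can be pictured with a small sketch: one table over joint state-action pairs instead of CQL's m per-agent joint Q-tables. The class below is an assumption-laden illustration; in particular, the max over next joint actions stands in for the equilibrium value the actual CQL/ΩQ-learning bootstrap would use, and how the composite reward is formed is left abstract.

    from collections import defaultdict

    class JointQTable:
        """One table mapping (joint_state, joint_action) -> composite value,
        versus m joint tables (one per agent) in traditional CQL."""
        def __init__(self, alpha=0.1, gamma=0.9):
            self.q = defaultdict(float)
            self.alpha, self.gamma = alpha, gamma

        def update(self, js, ja, composite_reward, next_js, joint_actions):
            # 'max' is a stand-in for the equilibrium value computed in the
            # chapter; it keeps the sketch self-contained and runnable.
            best_next = max(self.q[(next_js, a)] for a in joint_actions)
            target = composite_reward + self.gamma * best_next
            self.q[(js, ja)] += self.alpha * (target - self.q[(js, ja)])

With m agents, n joint states, and k joint actions, this keeps one n-by-k table rather than m of them, which is the reduced storage requirement the chapter highlights.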
Front Matter
The prelims comprise: half‐title page, series page, title page, copyright page, table of contents, preface, acknowledgments, and about the authors.