Asset Details
Offline Pre-trained Multi-agent Decision Transformer
by Xu, Bo; Li, Xiyun; Zhang, Haifeng; Yang, Yaodong; Wen, Ying; Le, Chenyang; Zhang, Weinan; Wang, Jun; Xing, Dengpeng; Wen, Muning; Meng, Linghui
in Datasets / Modelling / Multiagent systems / Policies / Transformers
2023
Journal Article
Overview
Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies without requiring access to the real environment. Such a paradigm is also desirable for multi-agent reinforcement learning (MARL) tasks, given the combinatorially increased interactions among agents and with the environment. However, in MARL the paradigm of offline pre-training with online fine-tuning has not been studied, nor are datasets or benchmarks for offline MARL research available. In this paper, we facilitate the research by providing large-scale datasets and using them to examine the usage of the decision transformer in the context of MARL. We investigate the generalization of MARL offline pre-training in three aspects: 1) between single agents and multiple agents, 2) from offline pre-training to online fine-tuning, and 3) to multiple downstream tasks with few-shot and zero-shot capabilities. We start by introducing the first offline MARL dataset with diverse quality levels based on the StarCraft II environment, and then propose the novel architecture of multi-agent decision transformer (MADT) for effective offline learning. MADT leverages the transformer’s sequence-modelling ability and integrates it seamlessly with both offline and online MARL tasks. A significant benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios. On the StarCraft II offline dataset, MADT outperforms state-of-the-art offline reinforcement learning (RL) baselines, including BCQ and CQL. When applied to online tasks, the pre-trained MADT significantly improves sample efficiency and enjoys strong performance in both few-shot and zero-shot cases. To the best of our knowledge, this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements for MARL.
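Decision-transformer-style models such as the one described above treat RL as return-conditioned sequence modelling: each trajectory is flattened into interleaved (return-to-go, observation, action) tokens that the transformer learns to autoregress over. The sketch below illustrates only that input construction for a single agent; the function names and layout are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of return-conditioned sequence construction,
# as used by decision-transformer-style models. Illustrative only.

def returns_to_go(rewards, gamma=1.0):
    """Suffix sums of rewards: R_t = r_t + gamma * R_{t+1}."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in range(len(rewards) - 1, -1, -1):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

def build_sequence(rewards, observations, actions):
    """Interleave (return-to-go, observation, action) triples, one per
    timestep, in the order the transformer would consume them."""
    rtg = returns_to_go(rewards)
    tokens = []
    for t in range(len(rewards)):
        tokens.append(("rtg", rtg[t]))
        tokens.append(("obs", observations[t]))
        tokens.append(("act", actions[t]))
    return tokens

# Example: a 3-step trajectory with a single terminal reward.
seq = build_sequence(rewards=[0.0, 0.0, 1.0],
                     observations=["s0", "s1", "s2"],
                     actions=[0, 1, 0])
# Returns-to-go are the suffix sums [1.0, 1.0, 1.0], so the model is
# conditioned at every step on the return still to be achieved.
```

In the multi-agent setting, one plausible extension is to build such a sequence per agent and share the transformer weights across agents, which is consistent with the transfer between agent types the abstract describes.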
Publisher
Springer Nature B.V
Subject
Related Items