Yang, Xintong
PDF - Accepted Post-Print Version. Download (558kB)
Abstract
Although Deep Reinforcement Learning (DRL) has been popular in many disciplines including robotics, state-of-the-art DRL algorithms still struggle to learn long-horizon, multi-step and sparse reward tasks, such as stacking several blocks given only a task-completion reward signal. To improve learning efficiency for such tasks, this paper proposes a DRL exploration technique, termed A2, which integrates two components inspired by human experiences: Abstract demonstrations and Adaptive exploration. A2 starts by decomposing a complex task into subtasks, and then provides the correct order in which the subtasks should be learnt. During training, the agent explores the environment adaptively, acting more deterministically for well-mastered subtasks and more stochastically for ill-learnt subtasks. Ablation and comparative experiments are conducted on several grid-world tasks and three robotic manipulation tasks. We demonstrate that A2 can aid popular DRL algorithms (DQN, DDPG, and SAC) to learn more efficiently and stably in these environments.
| Field | Value |
|---|---|
| Item Type | Conference or Workshop Item (Paper) |
| Date Type | Published Online |
| Status | Published |
| Schools | Engineering, Computer Science & Informatics |
| Publisher | IEEE |
| ISBN | 978-1-6654-9807-4 |
| Date of First Compliant Deposit | 27 July 2022 |
| Last Modified | 15 Dec 2022 15:36 |
| URI | https://orca.cardiff.ac.uk/id/eprint/151519 |