He, Yuanzhi and Wallbridge, Christopher
PDF - Accepted Post-Print Version (482kB)
Abstract
Robot manipulation with simulation has recently become a mainstream approach in robotics, as it entails lower risk and cost than directly training a real robot. Various physics engines, such as MuJoCo, offer simulated environments tailored to robot manipulation tasks. As the field grows rapidly, model complexity and training times increase sharply to meet the demands of diverse tasks, making such problems difficult to solve from scratch. Deep Reinforcement Learning (DRL) is currently the best-performing approach to robot manipulation. However, although some algorithms have used automated curriculum learning to tackle multi-task robot manipulation, the resulting models remain too complex to be trained from scratch in a single run with acceptable accuracy and reasonable training time. To address this, we introduce a novel few-shot Transfer Learning (TL) technique for DRL that applies both Forward Transfer (FT) and Reverse Transfer (RT). TL breaks a complex problem down into easier-to-solve sub-problems and transfers the acquired knowledge to the more complex ones. Our TL method appears to accelerate training on all of the MuJoCo Fetch tasks, and on the most complex environment, FetchSlide, it even improves performance by 20% while reducing training time by 85%.
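The forward-transfer idea described in the abstract, reusing a policy trained on a simpler Fetch task as the starting point for a harder one, can be sketched as follows. This is a minimal illustration and not the paper's few-shot FT/RT method: the algorithm choice (SAC with hindsight experience replay), the Stable-Baselines3 API, the environment IDs (FetchPush-v2 as source, FetchSlide-v2 as target; version suffixes depend on the installed gymnasium-robotics release), and the timestep budgets are all assumptions made for the example.

```python
# Illustrative sketch of forward transfer between two MuJoCo Fetch tasks.
# Not the authors' implementation: algorithm, environment IDs, and budgets
# are assumptions chosen for demonstration only.
import gymnasium as gym
import gymnasium_robotics
from stable_baselines3 import SAC, HerReplayBuffer

# Registers the Fetch environments (gymnasium >= 1.0; older releases
# register them automatically when gymnasium_robotics is imported).
gym.register_envs(gymnasium_robotics)

# 1) Train on the simpler source task (FetchPush) from scratch.
source_env = gym.make("FetchPush-v2")
source_model = SAC(
    "MultiInputPolicy",
    source_env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(n_sampled_goal=4, goal_selection_strategy="future"),
    verbose=1,
)
source_model.learn(total_timesteps=200_000)
source_model.save("fetch_push_sac")

# 2) Forward transfer: load the learned weights as the starting point for
#    the harder target task. FetchPush and FetchSlide share the same
#    observation and action spaces, so the parameters load directly.
target_env = gym.make("FetchSlide-v2")
target_model = SAC.load("fetch_push_sac", env=target_env)
target_model.learn(total_timesteps=200_000)
```

The direct parameter load is only possible because the source and target tasks share observation and action spaces; reverse transfer (feeding knowledge from the harder task back to the simpler ones), as applied in the paper, is not shown here.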
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Date Type: | Publication |
| Status: | Published |
| Schools: | Schools > Computer Science & Informatics |
| Publisher: | Springer |
| ISBN: | 978-3-031-72061-1 |
| Date of First Compliant Deposit: | 22 August 2024 |
| Date of Acceptance: | 21 June 2024 |
| Last Modified: | 13 May 2025 11:29 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/171528 |