You, Yingchaol, Cai, Boliang and Ji, Ze
PDF - Published Version. Available under License Creative Commons Attribution.
Abstract
Proactive robot assistance plays a critical role in human–robot collaborative assembly (HRCA), enhancing operational efficiency, product quality and workers’ ergonomics. The industrial shift toward mass personalisation poses significant challenges for collaborative robots, which must adapt quickly to product changes in order to provide proactive assistance. State-of-the-art knowledge-based task planners in HRCA struggle to update their knowledge quickly enough to accommodate new products. In contrast to conventional methods, this work studies learning proactive assistance by leveraging reinforcement learning (RL) to train a policy that can be directly used for planning robot proactive assistance in HRCA. To address these limitations, we propose an offline RL framework in which a proactive-assistance policy is trained on a dataset visually extracted from human demonstrations. In particular, an RL algorithm with a conservative Q-value estimate is used to train a planning policy in an actor–critic framework with a carefully designed state space and reward function. Experimental results show that, with only a few worker demonstrations as input, the algorithm can train a policy for proactive assistance in HRCA. The assistance provided by the robot fully meets the task requirements and improves human assembly preference satisfaction by 47.06% compared to a static strategy.
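The conservative Q-value idea mentioned in the abstract can be illustrated with a minimal sketch. The code below is an illustrative, assumption-laden example of a CQL-style critic penalty for offline RL with discrete assistance actions; the network sizes, the `alpha` weight and all names are hypothetical and do not reproduce the authors' actor–critic implementation, state space or reward design.

```python
# Illustrative sketch only: a minimal conservative Q-value (CQL-style) critic update
# for offline RL, assuming discrete assistance actions and a transition dataset
# extracted from demonstrations. All hyperparameters and names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Q-values for every discrete assistance action
        return self.net(state)


def conservative_critic_loss(q_net, target_q_net, batch, gamma=0.99, alpha=1.0):
    """Bellman error plus a conservative penalty that pushes down Q-values
    of actions not supported by the demonstration data."""
    s, a, r, s_next, done = batch                      # tensors from the offline dataset
    q_all = q_net(s)                                   # (B, n_actions)
    q_taken = q_all.gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_q_net(s_next).max(dim=1).values
    bellman = F.mse_loss(q_taken, target)
    # Conservative term: logsumexp over all actions minus the value of the dataset action
    conservative = (torch.logsumexp(q_all, dim=1) - q_taken).mean()
    return bellman + alpha * conservative
```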
| Item Type: | Article |
|---|---|
| Date Type: | Publication |
| Status: | Published |
| Schools: | Schools > Engineering |
| Publisher: | Elsevier |
| ISSN: | 0360-8352 |
| Date of First Compliant Deposit: | 23 June 2025 |
| Date of Acceptance: | 7 June 2025 |
| Last Modified: | 04 Aug 2025 12:30 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/179046 |