Munguia-Galeano, Francisco, Veeramani, Satheeshkumar, Hernández, Juan David ORCID: https://orcid.org/0000-0002-9593-6789, Wen, Qingmeng ORCID: https://orcid.org/0000-0002-8972-4042 and Ji, Ze ORCID: https://orcid.org/0000-0002-8968-9902 2023. Affordance-based human-robot interaction with reinforcement learning. IEEE Access 11, pp. 31282-31292. 10.1109/ACCESS.2023.3262450
PDF - Published Version, available under a Creative Commons Attribution License (3MB).
Abstract
Planning precise manipulation in robotics to perform grasp- and release-related operations while interacting with humans is a challenging problem. Reinforcement learning (RL) has the potential to enable robots to attain this capability. In this paper, we propose an affordance-based human-robot interaction (HRI) framework that aims to reduce the size of the action space, which would otherwise considerably impede the agent's exploration efficiency. The framework is based on a new algorithm called Contextual Q-learning (CQL). We first show that the proposed algorithm trains in a short amount of time (2.7 seconds) and reaches an 84% success rate, allowing the robot to efficiently observe the current scenario configuration and learn to solve it. We then empirically validate the framework in real-world HRI scenarios. During the HRI, the robot uses semantic information from the state and the optimal policy of the last training step to search for relevant changes in the environment that may trigger the generation of a new policy.
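The paper's Contextual Q-learning algorithm is not detailed on this page, but the core idea described in the abstract — pruning the action space to affordance-feasible actions so that exploration stays tractable — can be illustrated with a generic tabular Q-learning sketch. The class name `AffordanceQLearner`, the `affordance_fn` callback, and all hyperparameter values below are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

class AffordanceQLearner:
    """Minimal sketch: tabular Q-learning where, at each state, the agent only
    considers actions an affordance model deems feasible (e.g. objects that can
    currently be grasped or released), shrinking the effective action space."""

    def __init__(self, affordance_fn, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)          # Q[(state, action)] -> value estimate
        self.affordance_fn = affordance_fn   # state -> list of feasible actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_action(self, state):
        actions = self.affordance_fn(state)  # reduced, context-dependent action set
        if random.random() < self.epsilon:
            return random.choice(actions)    # explore within feasible actions only
        return max(actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward, next_state, done):
        next_actions = self.affordance_fn(next_state)
        best_next = 0.0 if done or not next_actions else max(
            self.q[(next_state, a)] for a in next_actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

Because `affordance_fn` returns only the actions that make sense in the current scene configuration, both exploration and the Q-table stay small, which is consistent with the short training time reported in the abstract; a change detected in the environment would simply mean re-running training on the new configuration.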
| Item Type | Article |
|---|---|
| Date Type | Published Online |
| Status | Published |
| Schools | Engineering |
| Publisher | Institute of Electrical and Electronics Engineers |
| ISSN | 2169-3536 |
| Date of First Compliant Deposit | 17 April 2023 |
| Date of Acceptance | 24 March 2023 |
| Last Modified | 11 Jun 2024 11:28 |
| URI | https://orca.cardiff.ac.uk/id/eprint/158882 |