Shared Control of Robot Manipulators With Obstacle Avoidance: A Deep Reinforcement Learning Approach

Matteo Rubagotti, Bianca Sangiovanni, Aigerim Nurbayeva, Gian Paolo Incremona, Antonella Ferrara, Almas Shintemirov

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


The recent surge of interest in applying learning-based methods to control systems motivates this work to investigate how a purely model-free and a purely model-based method compare when applied to a shared control problem for a robot manipulator. Specifically, we propose a method based on model-free deep reinforcement learning (DRL) for tracking the position of an operator’s hand with the end effector of a manipulator while automatically avoiding obstacles in the workspace with the whole robot frame. The resulting control strategy generates joint reference velocities via a deep neural network trained using Q-learning. The method is tested in simulation and experimentally on a UR5 manipulator, and it is compared with a model predictive control (MPC) approach that solves the same problem. We observe that DRL outperforms MPC, but only if the provided reference falls within the distribution over which the DRL policy was trained. As expected, the model-based nature of MPC allows it to address unforeseen situations as long as these are compatible with its process model. This is not the case for DRL, for which a human hand reference not seen during training leads to extremely poor performance.
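The abstract does not specify the network architecture, state layout, or action discretization used in the paper, but the general scheme it describes (a Q-learning-trained network mapping the shared-control state to discrete joint reference velocities) can be sketched as follows. All dimensions, velocity levels, and the state composition below are illustrative assumptions, not the authors' actual design:

```python
import numpy as np

N_JOINTS = 6                                 # the UR5 has six joints
STATE_DIM = 3 + N_JOINTS + 6                 # assumed: hand reference (xyz) + joint angles + obstacle distances
VEL_LEVELS = np.array([-0.5, 0.0, 0.5])      # illustrative joint-velocity levels [rad/s]
N_ACTIONS = len(VEL_LEVELS) ** N_JOINTS      # one discrete action per velocity combination

rng = np.random.default_rng(0)


class QNetwork:
    """Tiny two-layer feedforward Q-network (shape chosen for illustration only)."""

    def __init__(self, state_dim, n_actions, hidden=64):
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)

    def q_values(self, state):
        # One Q-value per discrete joint-velocity action
        h = np.tanh(state @ self.w1 + self.b1)
        return h @ self.w2 + self.b2


def select_action(qnet, state, epsilon=0.1):
    """Epsilon-greedy action selection, as in standard Q-learning."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(qnet.q_values(state)))


def action_to_velocities(action):
    """Decode a discrete action index into per-joint reference velocities."""
    vels = np.empty(N_JOINTS)
    for j in range(N_JOINTS):
        action, level = divmod(action, len(VEL_LEVELS))
        vels[j] = VEL_LEVELS[level]
    return vels


# Example: pick joint reference velocities for one (random) state
state = rng.normal(size=STATE_DIM)
a = select_action(QNetwork(STATE_DIM, N_ACTIONS), state, epsilon=0.0)
joint_vel_ref = action_to_velocities(a)
```

The sketch also makes the abstract's out-of-distribution caveat concrete: the greedy action depends entirely on the learned weights, so a hand-reference state far from the training distribution yields Q-values with no meaningful ordering, whereas an MPC controller would still optimize against its process model.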

Original language: English
Pages (from-to): 44-63
Number of pages: 20
Journal: IEEE Control Systems
Issue number: 1
Publication status: Published - Feb 1 2023

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modelling and Simulation
  • Electrical and Electronic Engineering
