On improving the extrapolation capability of task-parameterized movement models

Sylvain Calinon, Tohid Alizadeh, Darwin G. Caldwell

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

50 Citations (Scopus)

Abstract

Gestures are characterized by intermediary or final landmarks (real or virtual) in task space or joint space that can change during the course of the motion, and that are described by varying accuracy and correlation constraints. Generalizing these trajectories in robot learning by imitation is challenging because of the small number of demonstrations provided by the user. We present an approach to statistically encode movements in a task-parameterized mixture model, and derive an expectation-maximization (EM) algorithm to train it. The model automatically extracts the relevance of candidate coordinate systems during the task, and exploits this information during reproduction to adapt the movement in real time to the changing position and orientation of landmarks or objects. The approach is tested with a robotic arm learning to roll out pizza dough. It is compared to three categories of task-parameterized models: 1) Gaussian process regression (GPR) with a database of trajectory models; 2) a multi-stream approach with models trained in several frames of reference; and 3) a parametric Gaussian mixture model (PGMM) modulating the Gaussian centers with the task parameters. We show that the extrapolation capability of the proposed approach outperforms existing methods by extracting the local structures of the task instead of relying on interpolation principles.
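The core idea in the abstract — encoding demonstrations in several candidate coordinate systems and fusing them at reproduction time — can be illustrated with a product of Gaussians: each frame's Gaussian is mapped into the world frame, and the frames that constrain the motion most tightly (smallest covariance) dominate the fused estimate. The sketch below is our own minimal illustration of that fusion step, not the paper's implementation; the function names and toy numbers are invented for the example.

```python
import numpy as np

def transform_gaussian(mu, sigma, A, b):
    # Map a Gaussian from a local candidate frame into the world frame:
    # mean becomes A @ mu + b, covariance becomes A @ sigma @ A.T.
    return A @ mu + b, A @ sigma @ A.T

def product_of_gaussians(mus, sigmas):
    # Fuse per-frame Gaussians. Each frame is weighted by its precision
    # (inverse covariance), so tightly constrained frames dominate.
    precision = sum(np.linalg.inv(s) for s in sigmas)
    cov = np.linalg.inv(precision)
    mean = cov @ sum(np.linalg.inv(s) @ m for s, m in zip(sigmas, mus))
    return mean, cov

# Toy 1-D example: frame A pins the point near 0 with high confidence,
# frame B only loosely suggests 10; the fused mean stays near 0.
mus = [np.array([0.0]), np.array([10.0])]
sigmas = [np.array([[0.01]]), np.array([[100.0]])]
mean, cov = product_of_gaussians(mus, sigmas)
```

Intuitively, this is why such models can extrapolate: when an object frame moves, its Gaussian moves rigidly with it, rather than the system interpolating between previously seen configurations.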

Original language: English
Title of host publication: IEEE International Conference on Intelligent Robots and Systems
Pages: 610-616
Number of pages: 7
DOI: 10.1109/IROS.2013.6696414
Publication status: Published - 2013
Externally published: Yes
Event: 2013 26th IEEE/RSJ International Conference on Intelligent Robots and Systems: New Horizon, IROS 2013 - Tokyo, Japan
Duration: Nov 3, 2013 - Nov 8, 2013

Other

Other: 2013 26th IEEE/RSJ International Conference on Intelligent Robots and Systems: New Horizon, IROS 2013
Country: Japan
City: Tokyo
Period: 11/3/13 - 11/8/13

Fingerprint

  • Extrapolation
  • Trajectories
  • Robot learning
  • Robotic arms
  • Interpolation
  • Demonstrations

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

Cite this

Calinon, S., Alizadeh, T., & Caldwell, D. G. (2013). On improving the extrapolation capability of task-parameterized movement models. In IEEE International Conference on Intelligent Robots and Systems (pp. 610-616). [6696414] https://doi.org/10.1109/IROS.2013.6696414

@inproceedings{0de76226648e441e96e214bb9293b565,
title = "On improving the extrapolation capability of task-parameterized movement models",
abstract = "Gestures are characterized by intermediary or final landmarks (real or virtual) in task space or joint space that can change during the course of the motion, and that are described by varying accuracy and correlation constraints. Generalizing these trajectories in robot learning by imitation is challenging, because of the small number of demonstrations provided by the user. We present an approach to statistically encode movements in a task-parameterized mixture model, and derive an expectation-maximization (EM) algorithm to train it. The model automatically extracts the relevance of candidate coordinate systems during the task, and exploits this information during reproduction to adapt the movement in real-time to changing position and orientation of landmarks or objects. The approach is tested with a robotic arm learning to roll out a pizza dough. It is compared to three categories of task-parameterized models: 1) Gaussian process regression (GPR) with a trajectory models database; 2) Multi-streams approach with models trained in several frames of reference; and 3) Parametric Gaussian mixture model (PGMM) modulating the Gaussian centers with the task parameters. We show that the extrapolation capability of the proposed approach outperforms existing methods, by extracting the local structures of the task instead of relying on interpolation principles.",
author = "Sylvain Calinon and Tohid Alizadeh and Caldwell, {Darwin G.}",
year = "2013",
doi = "10.1109/IROS.2013.6696414",
language = "English",
isbn = "9781467363587",
pages = "610--616",
booktitle = "IEEE International Conference on Intelligent Robots and Systems",

}

TY - GEN

T1 - On improving the extrapolation capability of task-parameterized movement models

AU - Calinon, Sylvain

AU - Alizadeh, Tohid

AU - Caldwell, Darwin G.

PY - 2013

Y1 - 2013

N2 - Gestures are characterized by intermediary or final landmarks (real or virtual) in task space or joint space that can change during the course of the motion, and that are described by varying accuracy and correlation constraints. Generalizing these trajectories in robot learning by imitation is challenging, because of the small number of demonstrations provided by the user. We present an approach to statistically encode movements in a task-parameterized mixture model, and derive an expectation-maximization (EM) algorithm to train it. The model automatically extracts the relevance of candidate coordinate systems during the task, and exploits this information during reproduction to adapt the movement in real-time to changing position and orientation of landmarks or objects. The approach is tested with a robotic arm learning to roll out a pizza dough. It is compared to three categories of task-parameterized models: 1) Gaussian process regression (GPR) with a trajectory models database; 2) Multi-streams approach with models trained in several frames of reference; and 3) Parametric Gaussian mixture model (PGMM) modulating the Gaussian centers with the task parameters. We show that the extrapolation capability of the proposed approach outperforms existing methods, by extracting the local structures of the task instead of relying on interpolation principles.

UR - http://www.scopus.com/inward/record.url?scp=84893808156&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84893808156&partnerID=8YFLogxK

U2 - 10.1109/IROS.2013.6696414

DO - 10.1109/IROS.2013.6696414

M3 - Conference contribution

AN - SCOPUS:84893808156

SN - 9781467363587

SP - 610

EP - 616

BT - IEEE International Conference on Intelligent Robots and Systems

ER -