Gestures are characterized by intermediary or final landmarks (real or virtual) in task space or joint space that can change during the course of the motion, and that are described by varying accuracy and correlation constraints. Generalizing such trajectories in robot learning by imitation is challenging because of the small number of demonstrations provided by the user. We present an approach that statistically encodes movements in a task-parameterized mixture model, and derive an expectation-maximization (EM) algorithm to train it. The model automatically extracts the relevance of candidate coordinate systems during the task, and exploits this information during reproduction to adapt the movement in real time to the changing positions and orientations of landmarks or objects. The approach is tested with a robotic arm learning to roll out pizza dough. It is compared to three categories of task-parameterized models: 1) Gaussian process regression (GPR) with a database of trajectory models; 2) a multi-stream approach with models trained in several frames of reference; and 3) a parametric Gaussian mixture model (PGMM) that modulates the Gaussian centers with the task parameters. We show that the extrapolation capability of the proposed approach outperforms the existing methods by extracting the local structure of the task instead of relying on interpolation principles.
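The adaptation mechanism summarized above can be sketched with a minimal example (not the authors' implementation): Gaussian components learned in local candidate frames are mapped into the global task space through each frame's task parameters (a rotation `A_j` and origin `b_j`), and then fused by a product of Gaussians, so that frames with low variance (high relevance) dominate the reproduced movement. The frame choices and numbers below are hypothetical, for illustration only.

```python
import numpy as np

def transform_gaussian(mu, sigma, A, b):
    """Map a local Gaussian N(mu, sigma) into the global frame via x = A u + b."""
    return A @ mu + b, A @ sigma @ A.T

def gaussian_product(components):
    """Precision-weighted product of Gaussians given as [(mu_j, sigma_j), ...]."""
    precision = sum(np.linalg.inv(s) for _, s in components)
    sigma = np.linalg.inv(precision)
    mu = sigma @ sum(np.linalg.inv(s) @ m for m, s in components)
    return mu, sigma

# Two hypothetical candidate frames: frame 1 constrains x tightly,
# frame 2 constrains y tightly (small variance = high relevance).
mu1, s1 = transform_gaussian(np.zeros(2), np.diag([0.01, 1.0]),
                             np.eye(2), np.array([1.0, 0.0]))
mu2, s2 = transform_gaussian(np.zeros(2), np.diag([1.0, 0.01]),
                             np.eye(2), np.array([0.0, 1.0]))
mu, sigma = gaussian_product([(mu1, s1), (mu2, s2)])
# The fused mean takes its x-coordinate mostly from frame 1
# and its y-coordinate mostly from frame 2.
```

Moving an object (i.e., changing a frame's `A_j` or `b_j`) shifts the transformed Gaussians and hence the fused result directly, without retraining, which is what allows real-time adaptation to new landmark configurations.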