Learning from demonstration (LfD) is an effective method for robot motion learning because it does not require a hand-coded cost function. However, the number of demonstrations that can be performed is limited, and collecting demonstrations under every environmental condition is impractical. Therefore, an algorithm is required that can generate motion data not obtained from demonstrations. To address this problem, this research constructs a motion latent space by abstracting the demonstration data. The motion latent space is a low-dimensional representation of the demonstrated motions, and the demonstration data can be extended by decoding points in this space. This is realized by applying a variational autoencoder (VAE), originally used in the field of image generation, to time-series data. Demonstrations of a reaching task are conducted, and the paper shows that the manipulator can reach the object even when it is located at a position different from those seen in the demonstrations.
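
The core idea can be sketched as follows: a VAE encodes a flattened motion trajectory into a low-dimensional latent vector, and new motions are obtained by decoding points in the latent space (e.g. between two demonstrations). This is a minimal illustrative sketch, not the paper's implementation; the dimensions, the linear encoder/decoder with random weights, and the variable names are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not taken from the paper):
T, D = 50, 7        # trajectory length, joint-space dimension
x_dim = T * D       # flattened time-series input
z_dim = 2           # latent space dimension

# Randomly initialized linear maps stand in for a trained encoder/decoder.
W_enc = rng.normal(0, 0.01, (x_dim, 2 * z_dim))   # outputs [mu, log_var]
W_dec = rng.normal(0, 0.01, (z_dim, x_dim))

def encode(x):
    # Map a flattened trajectory to the parameters of q(z|x).
    h = x @ W_enc
    return h[:z_dim], h[z_dim:]          # mu, log_var

def reparameterize(mu, log_var):
    # z = mu + sigma * eps: the reparameterization trick used by VAEs.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    # Map a latent point back to a trajectory of shape (T, D).
    return (z @ W_dec).reshape(T, D)

# Encode two "demonstrated" trajectories, then decode a point between
# them in latent space to obtain a motion not seen in the demonstrations.
x1 = rng.normal(size=x_dim)
x2 = rng.normal(size=x_dim)
z1 = reparameterize(*encode(x1))
z2 = reparameterize(*encode(x2))
z_mid = 0.5 * (z1 + z2)
new_traj = decode(z_mid)
print(new_traj.shape)  # (50, 7)
```

In the actual method, the encoder and decoder would be neural networks trained on the demonstration data with the usual VAE objective (reconstruction loss plus a KL term), so that decoded latent points yield plausible motions.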