Following the conventional pipeline, the training dataset of a human activity recognition system relies on detecting regions of significant signal variation, and the resulting position-specific classifiers offer users little flexibility to alter sensor positions. In this paper, we propose employing a simulated sensor to generate the corresponding signals from human motion animation and using them as the training dataset. By visualizing the counterpart of the real-world setup, users can place the sensor arbitrarily and obtain accuracy feedback as well as the classifier interface, relieving them of the cost of conventional model training. Case-based validation shows that a classifier trained on simulated sensor data can effectively recognize real-world activities.