Capturing human motion performances with inertial measurement units constitutes the future of mobile sports analysis, but requires sophisticated methods to extract relevant information from the sparse and unintuitive inertial sensor data. Kinematic data such as body joint positions and segment orientations can be estimated from a sensor's accelerations and angular velocities. For further analysis, it is necessary to develop intelligent retrieval strategies that can make sense of the underlying motion information. In this paper, we therefore discuss how to retrieve the main motion determinants from raw and processed inertial sensor data. We design methods that extract a motion's significant technical elements, as well as methods that combine several measurable elements over time to extract the motion features responsible for the aesthetic impression of a sports performance. Embedded in a neural network environment, these feature extractors can then automatically evaluate and rank different performances in mobile training and competition systems, which could contribute to improved measurability and objectivity in performance-oriented sports such as gymnastics and figure skating.
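To make the orientation-estimation step above concrete, the following is a minimal, illustrative sketch (not the paper's actual method) of strapdown gyroscope integration: body-frame angular velocities are accumulated into a segment orientation quaternion, one small rotation increment per sample. All function names are hypothetical, and a practical system would additionally fuse accelerometer (and magnetometer) data to limit drift.

```python
import math

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(omega_samples, dt, q=(1.0, 0.0, 0.0, 0.0)):
    """Integrate body-frame angular velocities (rad/s) sampled at
    interval dt into an orientation quaternion (gyro-only, drifts
    over time without accelerometer/magnetometer correction)."""
    for wx, wy, wz in omega_samples:
        norm = math.sqrt(wx*wx + wy*wy + wz*wz)
        if norm < 1e-12:
            continue  # no rotation in this sample
        half = 0.5 * norm * dt          # half the rotation angle this step
        s = math.sin(half) / norm
        dq = (math.cos(half), wx*s, wy*s, wz*s)  # incremental rotation
        q = quat_mul(q, dq)
        m = math.sqrt(sum(c*c for c in q))
        q = tuple(c / m for c in q)     # renormalize against round-off
    return q

# Example: rotating at pi/2 rad/s about the z-axis for 1 s (100 samples
# at 100 Hz) yields a 90-degree rotation about z.
q = integrate_gyro([(0.0, 0.0, math.pi / 2)] * 100, 0.01)
```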