We envision adding context awareness and ambient intelligence to edutainment and computer gaming applications in general. This requires mixed-reality setups and ever higher levels of immersive human-computer interaction. Here, we focus on the automatic recognition of natural human hand gestures recorded by inexpensive, wearable motion sensors. To study the feasibility of this approach, we chose an educational parking game with 3-D graphics that employs motion sensors and hand gestures as its sole game controls. Our prototype implementation is based on Java-3D for the graphics display and on our own CRN Toolbox for sensor integration. In practice, it shows very promising results regarding game appeal, player satisfaction, extensibility, ease of interfacing to the sensors, and, not least, sufficient accuracy of the real-time gesture recognition to allow smooth game control. An initial quantitative performance evaluation confirms these observations and provides further support for our setup.