Learning Silhouette Features for Control of Human Motion

Project Description

Performance interfaces are useful as a new kind of user interface for computer games (Sony's EyeToy, for example). We present a low-cost, non-intrusive, vision-based performance interface (a "do as I do" interface) for controlling the full-body motion of animated human characters. The system combines information about the user's motion, contained in silhouettes from three viewpoints, with domain knowledge contained in a motion capture database to interactively produce a high-quality animation. Such an interactive system will be useful for authoring, for teleconferencing, or as a control interface for a character in a game. In our system, the user wears street clothes (no markers attached) and performs in front of three video cameras; the resulting silhouettes are used to estimate his or her orientation and body configuration based on a set of discriminative local features. Those features are selected by a machine learning algorithm (AdaBoost) during a preprocessing step. Sequences of motion that approximate the user's actions are extracted from the motion capture database and scaled in time to match the speed of the user's motion. We use swing dancing, an example of complex human motion, to demonstrate the effectiveness of our approach, and we compare the results obtained with the discriminative local features to those obtained with a global feature set (Hu moments) and to ground-truth measurements from a commercial motion capture system.
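For illustration only, the short sketch below (not the implementation used in this project) shows how the global Hu-moment baseline could be computed from a binary silhouette image and used to retrieve the closest frame from a precomputed feature database by nearest-neighbor search. It assumes OpenCV and NumPy, and the names silhouette_features, nearest_frame, and motion_db are hypothetical.

# Minimal, illustrative sketch (not this project's code): compute the global
# Hu-moment feature vector of one binary silhouette and look up the closest
# frame in a toy "motion database" of precomputed feature vectors.
# The function and variable names here are hypothetical.
import cv2
import numpy as np


def silhouette_features(silhouette):
    """Return the 7 Hu moments of a binary silhouette image.

    silhouette: single-channel uint8 image, foreground (the user) nonzero.
    Log-scaling makes the widely ranged moment values comparable under a
    Euclidean distance.
    """
    m = cv2.moments(silhouette, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)


def nearest_frame(query, motion_db):
    """Index of the database frame whose features are closest to the query."""
    dists = np.linalg.norm(motion_db - query, axis=1)
    return int(np.argmin(dists))


if __name__ == "__main__":
    # Synthetic stand-ins, purely to show the data flow: a random binary
    # "silhouette" and a random database of 1000 frames x 7 features.
    rng = np.random.default_rng(0)
    silhouette = (rng.random((240, 320)) > 0.5).astype(np.uint8) * 255
    motion_db = rng.normal(size=(1000, 7))
    query = silhouette_features(silhouette)
    print("closest database frame:", nearest_frame(query, motion_db))

In a full pipeline of the kind described above, the database features would be precomputed offline for every frame of the motion capture data, and the discriminative local features selected by AdaBoost would play the same retrieval role that the Hu moments play in this sketch.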
Other Projects

Quantifying Natural Human Motion
Object space EWA surface splatting
Adaptive EWA volume splatting

Contact

Liu Ren (liuren@cs.cmu.edu, Carnegie Mellon University)