A Data-Driven Approach to Quantifying Natural Human Motion

Project Description

Most artists and animators evaluate the quality of motion by visual inspection, which is time consuming. We present a tool that evaluates animation quality (here, the naturalness of human motion) automatically. Because there is no precise definition of natural human motion, we assume that it can be defined statistically by a large (four-hour) motion capture database. Given a motion, the tool can also automatically pinpoint which parts look unnatural. It may prove useful in verifying that a motion editing operation has not destroyed the naturalness of a motion capture clip, or that a synthetic motion transition lies within the space of motions seen in natural human movement.

The key algorithm is an ensemble of statistical models for individual joints, limbs, and the whole body. We build these models with existing machine learning techniques: mixtures of Gaussians (MoG), hidden Markov models (HMM), and switching linear dynamic systems (SLDS), with a Naive Bayes (NB) model as a baseline for comparison. We tested the techniques on motion capture data held out from the database, keyframed motions, edited motions, motions with noise added, and synthetic motion transitions. We present the results as receiver operating characteristic (ROC) curves and compare them to the judgments made by subjects in a user study.
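As a rough illustration of the per-joint mixture-of-Gaussians idea, the sketch below fits one MoG to short windows of each joint's angle track from natural training motions, scores a test clip window by window (low-scoring windows mark the suspect parts), and sweeps a threshold on a clip-level score over labeled natural and corrupted clips to produce an ROC curve. This is not the authors' code: the window length, feature choice, clip-level score, and all names (windows, train_joint_models, score_motion, clip_score) are illustrative assumptions, AMC parsing is omitted, and synthetic random walks stand in for motion capture data.

```python
# Minimal sketch of a per-joint MoG naturalness score (assumptions noted above).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_curve

WINDOW = 5  # frames per feature window (an illustrative choice, not from the paper)

def windows(angles, w=WINDOW):
    """Stack w consecutive frames of one joint's angle track into feature vectors."""
    return np.stack([angles[i:i + w] for i in range(len(angles) - w + 1)])

def train_joint_models(natural_motions, n_components=4):
    """Fit one MoG per joint over windows drawn from all natural training motions.
    Each motion is assumed to be an array of shape (frames, joints)."""
    n_joints = natural_motions[0].shape[1]
    models = []
    for j in range(n_joints):
        feats = np.vstack([windows(m[:, j]) for m in natural_motions])
        models.append(GaussianMixture(n_components=n_components).fit(feats))
    return models

def score_motion(models, motion):
    """Log-likelihood of each window under each joint's model; low values
    point at the parts of the motion the models consider unnatural."""
    return np.column_stack([models[j].score_samples(windows(motion[:, j]))
                            for j in range(motion.shape[1])])

# Demo with synthetic random walks standing in for parsed AMC joint angles.
rng = np.random.default_rng(0)
natural = [rng.standard_normal((200, 3)).cumsum(axis=0) * 0.1 for _ in range(20)]
models = train_joint_models(natural)

def clip_score(motion):
    """Clip-level score: the worst window over all joints."""
    return score_motion(models, motion).min()

# Held-out natural clips (label 1) vs. noise-corrupted clips (label 0);
# sweeping a threshold over the clip scores yields an ROC curve.
positives = [rng.standard_normal((200, 3)).cumsum(axis=0) * 0.1 for _ in range(20)]
negatives = [p + rng.standard_normal(p.shape) * 2.0 for p in positives]
labels = [1] * len(positives) + [0] * len(negatives)
scores = [clip_score(m) for m in positives + negatives]
fpr, tpr, _ = roc_curve(labels, scores)
print("ROC operating points:", list(zip(fpr.round(2), tpr.round(2)))[:5])
```

The paper's actual ensemble also covers limbs and the whole body and includes HMM and SLDS models; the MoG-per-joint piece shown here is only the simplest member of that ensemble.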

Publication

PDF download (2M)

Video

Full length:

High resolution avi (with audio, DIVX, 93M)

Low resolution avi (with audio, DIVX, 56M)

Segments:

Approach (with audio, DIVX, 10M)

Testing results (with audio, DIVX, 80M)

Data

Training data (AMC files)

Positive testing data (AMC files)

Negative testing data (AMC files)

Acknowledgement

Supported in part by the NSF under Grants IIS-0205224 and IIS-0326322.

Other Projects

Vision-based Performance Animation

Object space EWA surface splatting

Adaptive EWA volume splatting

Contact

Liu Ren (liuren@cs.cmu.edu, Carnegie Mellon University)

Jessica Hodgins (jkh@cs.cmu.edu, Carnegie Mellon University)