Captured human motion data is widely used in video games and feature films because it is easy to collect and can yield rich, realistic character animation. Broader use of motion capture, however, is limited because we do not understand how best to adapt motion to new situations. Difficulties arise when the animated character differs in size and shape from the original actor, or when it must follow a new path, jump a different height or distance, or aim at a target not available in the motion database.
Although much research has been devoted to adapting existing motion to new scenarios, we know that any editing technique will introduce anomalies and errors into the motion. We are interested in developing ways to estimate perceived error in animated human motion so that motion quality can be better controlled to meet user goals.
Our initial project consisted of two user studies that measured perception of errors in human jumping motions. We chose the ballistic phase of a jump because once the character has left the ground, the trajectory of its center of mass is fully determined by the takeoff position, the takeoff velocity, and gravity. Any change to this trajectory is an error, because it violates the laws of physics.
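To make this constraint concrete, the short sketch below (our own illustration, not code from the studies; the function names, y-up axis convention, and uniform sampling rate are assumptions) evaluates the ballistic center-of-mass path p(t) = p0 + v0*t + (1/2)*g*t^2 and measures how far an edited trajectory's acceleration deviates from gravity.

```python
import numpy as np

# Minimal sketch (assumed names and conventions; y-up, meters and seconds):
# during the airborne phase the center of mass must follow
#   p(t) = p0 + v0 * t + 0.5 * g * t**2,
# so its acceleration is exactly g.  Deviation from g is one simple
# measure of physical error in an edited jump.

G = np.array([0.0, -9.81, 0.0])  # gravitational acceleration (m/s^2)

def ballistic_com(p0, v0, t):
    """Center-of-mass position t seconds after takeoff."""
    return p0 + v0 * t + 0.5 * G * t * t

def gravity_error(com_positions, dt):
    """Largest deviation of the estimated center-of-mass acceleration from
    gravity, using second finite differences of uniformly sampled positions
    (array of shape n_frames x 3)."""
    acc = np.diff(com_positions, n=2, axis=0) / dt**2
    return np.max(np.linalg.norm(acc - G, axis=1))

dt = 1.0 / 120.0                      # assumed motion-capture sampling interval
t = np.arange(0.0, 0.6, dt)           # flight phase of one jump
com = np.stack([ballistic_com(np.zeros(3), np.array([2.0, 3.0, 0.0]), ti)
                for ti in t])

print(gravity_error(com, dt))         # ~0: the unedited jump obeys physics
print(gravity_error(com, 1.2 * dt))   # ~3 m/s^2: time scaling by s = 1.2
                                      # makes the apparent gravity g / s**2
```

Under this measure, for example, uniformly time scaling the flight phase by a factor s is equivalent to rescaling the apparent gravity to g/s^2, so a stretched or compressed jump registers as a physics violation even though the shape of its path is unchanged.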
To capture the effects of common motion editing operations such as splicing and time scaling, we introduced the following errors:
Our primary findings were:
Details of these findings can be found in our SIGGRAPH 2003 paper, along with a description of how these results might be used to set thresholds on the amount of error that can be tolerated in a particular application.
This initial study has led to more questions than answers! One study we would like to do in the near term is to compare our results with perception of identical errors applied to a rigid body such as a sphere. Different parts of the brain are active when we perceive biological versus non-biological motion, so sensitivity to physical errors in rigid body motion may be very different from sensitivity to errors in the motion of a humanlike character. If so, we can ask what happens as the representation of the character becomes less and less abstract, moving from the rigid body toward a more humanlike appearance.
On the application side, we are interested in exploring a broader variety of error types and in applying our results to develop more reliable algorithms for generating high-quality motion by editing segments from human motion capture databases.