Modeling Spatial and Temporal Variation in Motion Data
We present a novel method to model and synthesize variation in motion data.
Given a few examples of a particular type of motion as input, we learn a
generative model that is able to synthesize a family of spatial and temporal
variants that are statistically similar to the input examples. The new
variants retain the features of the original examples, but are not exact
copies of them. From the input examples, we learn a Dynamic Bayesian
Network (DBN) model that captures the conditional-independence properties
of the data and models it with a multivariate probability distribution.
We present results for a variety of human motions and for 2D handwritten characters.
We perform a user study showing that our new variants are less repetitive
than the typical game and crowd-simulation approach of replaying a small
number of existing motion clips. Our technique can synthesize new variants
efficiently and has a small memory requirement.
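The paper's DBN captures richer conditional-independence structure than any short snippet can convey; purely as an illustration of the idea, the following is a minimal sketch (in Python with NumPy; all function names are hypothetical, not from the paper) that fits the simplest kind of DBN, a first-order linear-Gaussian dynamics model, to a few example trajectories and then samples new trajectories that are statistically similar to the examples without copying them.

```python
import numpy as np

def fit_lds(examples):
    """Fit a first-order linear-Gaussian model x_t = x_{t-1} @ A + noise
    (one of the simplest Dynamic Bayesian Networks) to example trajectories.
    `examples` is a list of (T_i, D) arrays of pose/feature vectors."""
    X_prev = np.vstack([ex[:-1] for ex in examples])
    X_next = np.vstack([ex[1:] for ex in examples])
    # Least-squares estimate of the transition matrix A.
    A, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)
    resid = X_next - X_prev @ A
    # Residual covariance gives the multivariate Gaussian noise model;
    # a small ridge keeps it positive definite.
    cov = np.cov(resid, rowvar=False) + 1e-6 * np.eye(resid.shape[1])
    return A, cov

def sample_variant(A, cov, x0, length, rng):
    """Synthesize a new trajectory by rolling the learned dynamics forward
    and injecting noise drawn from the learned distribution."""
    out = [np.asarray(x0, dtype=float)]
    for _ in range(length - 1):
        mean = out[-1] @ A
        out.append(rng.multivariate_normal(mean, cov))
    return np.stack(out)

# Toy usage: two phase-shifted circular trajectories as "input examples".
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 60)
examples = [np.stack([np.sin(t + p), np.cos(t + p)], axis=1)
            for p in (0.0, 0.1)]
A, cov = fit_lds(examples)
variant = sample_variant(A, cov, examples[0][0], 60, rng)
```

Each sampled `variant` follows the dynamics shared by the examples while the injected noise makes every run a distinct spatial variant; the actual method additionally models temporal variation (timing changes), which this sketch omits.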
Manfred Lau, Ziv Bar-Joseph, and James Kuffner. 2009.
Modeling Spatial and Temporal Variation in Motion Data.
ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2009), 28(5), Article 171.