
A Finite Element Method for Animating Large Viscoplastic Flow

Adam W. Bargteil, Chris Wojtan, Jessica K. Hodgins, and Greg Turk

Abstract

We present an extension to Lagrangian finite element methods to allow for large plastic deformations of solid materials. These behaviors are seen in such everyday materials as shampoo, dough, and clay as well as in fantastic gooey and blobby creatures in special effects scenes. To account for plastic deformation, we explicitly update the linear basis functions defined over the finite elements during each simulation step. When these updates cause the basis functions to become ill-conditioned, we remesh the simulation domain to produce a new high-quality finite-element mesh, taking care to preserve the original boundary. We also introduce an enhanced plasticity model that preserves volume and includes creep and work hardening/softening. We demonstrate our approach with simulations of synthetic objects that squish, dent, and flow. To validate our methods, we compare simulation results to videos of real materials.
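
The volume-preserving plastic update described above can be illustrated with a small NumPy sketch. The SVD-clamping flow rule and the `yield_strain` and `creep` parameters here are illustrative stand-ins, not the paper's exact formulation:

```python
import numpy as np

def plastic_update(F_e, F_p, yield_strain=0.05, creep=0.9):
    """One step of a volume-preserving plastic update on the elastic/plastic
    split F = F_e @ F_p.  The flow rule (clamp singular values of F_e toward
    1 beyond a yield threshold) and all names are illustrative stand-ins.
    creep in (0, 1] controls how quickly excess strain flows."""
    U, s, Vt = np.linalg.svd(F_e)
    # Elastic stretch beyond the yield threshold flows into the plastic part.
    s_yield = np.clip(s, 1.0 - yield_strain, 1.0 + yield_strain)
    s_new = s + creep * (s_yield - s)
    F_e_new = U @ np.diag(s_new) @ Vt
    # The deformation removed from the elastic part moves to the plastic part.
    F_p_new = np.linalg.inv(F_e_new) @ F_e @ F_p
    # Volume preservation: plastic flow is incompressible, so det(F_p) = 1.
    F_p_new /= np.linalg.det(F_p_new) ** (1.0 / 3.0)
    return F_e_new, F_p_new

# A 20% uniaxial stretch partially yields into permanent deformation.
Fe1, Fp1 = plastic_update(np.diag([1.2, 1.0, 1.0]), np.eye(3))
```

Renormalizing det(F_p) to one after each flow step is what keeps the plastic deformation incompressible in this sketch; the volume change stays in the elastic part.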

Citation

Adam W. Bargteil, Chris Wojtan, Jessica K. Hodgins, and Greg Turk. A finite element method for animating large viscoplastic flow. ACM Transactions on Graphics (SIGGRAPH 2007), 26(3), August 2007. [BiBTeX]


Constrained least-squares optimization for robust estimation of center of rotation

Lillian Y. Chang and Nancy S. Pollard

Abstract

This paper presents a new direct method for estimating the average center of rotation (CoR). An existing least-squares (LS) solution has been shown in previous work to have reduced accuracy for data with a small range of motion (RoM). Alternative methods proposed to improve the CoR estimation use iterative algorithms. However, in this paper we show that with a carefully chosen normalization scheme, constrained least-squares solutions can perform as well as iterative approaches, even for challenging problems with significant noise and a small RoM. In particular, enforcing the normalization constraint avoids poor fits near plane singularities that can affect the existing LS method. Our formulation has an exact solution, accounts for multiple markers simultaneously, and does not depend on manually adjusted parameters. Simulation tests compare the method to four published CoR estimation techniques. The results show that the new approach has the accuracy of the iterative methods as well as the short computation time and repeatability of a least-squares solution. In addition, application of the new method to experimental motion capture data of the thumb carpometacarpal (CMC) joint yielded a more plausible CoR location compared to the previously reported LS solution and required less time than all four alternative techniques.
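
The unconstrained algebraic LS baseline that the paper constrains can be sketched in a few lines of NumPy. A marker at fixed distance r from center c satisfies |p|² = 2p·c + k with k = r² − |c|², which is linear in (c, k); the function name and the noise-free full-sphere data are illustrative (the paper's point is that this baseline degrades for small RoM, which its normalization constraint fixes):

```python
import numpy as np

def sphere_fit_cor(markers):
    """Unconstrained algebraic least-squares sphere fit: solve the linear
    system 2 p.c + k = |p|^2 for the center c and k = r^2 - |c|^2."""
    A = np.hstack([2.0 * markers, np.ones((len(markers), 1))])
    b = (markers ** 2).sum(axis=1)
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    c = x[:3]
    r = np.sqrt(x[3] + c @ c)
    return c, r

# Noise-free check: marker samples on a sphere of radius 0.5 about (1, 2, 3).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
c_est, r_est = sphere_fit_cor(np.array([1.0, 2.0, 3.0]) + 0.5 * dirs)
```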

Citation

Lillian Y. Chang and Nancy S. Pollard. Constrained least-squares optimization for robust estimation of center of rotation. Journal of Biomechanics, 40(6):1392–1400, 2007. [BiBTeX]


Constraint-based Motion Optimization Using A Statistical Dynamic Model

Jinxiang Chai and Jessica K. Hodgins

Abstract

We present a technique for generating animation from a variety of user-defined constraints. We pose constraint-based motion synthesis as a maximum a posteriori (MAP) problem and develop an optimization framework that generates natural motion satisfying user constraints. The system automatically learns a statistical dynamic model from motion capture data and then enforces it as a motion prior. This motion prior, together with user-defined constraints, comprises a trajectory optimization problem. Solving this problem in the low-dimensional space yields optimal natural motion that achieves the goals specified by the user.
We demonstrate the effectiveness of this approach in two domains: human body animation and facial animation. We show that the system can generate natural-looking animation from key-frame constraints, key-trajectory constraints, and a combination of these two constraints. For example, the user can generate a walking animation from a small set of key frames and foot contact constraints. The user can also specify a small set of key trajectories for the positions of the root, hands, and feet to generate a realistic jumping motion. The system can generate motions for a character whose skeletal model is markedly different from those of the subjects in the database. We also show that the system can use a statistical dynamic model learned from a normal walking sequence to create new motion such as walking on a slope.
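
The shape of this MAP trajectory problem can be shown in a toy 1D form. Here the learned statistical dynamic model is stood in for by a constant-acceleration smoothness prior (penalize second differences), keyframes become heavily weighted equality constraints, and the frame count and key values are made up:

```python
import numpy as np

T = 50
keys = {0: 0.0, 25: 2.0, 49: 0.0}          # frame -> target value

# Prior rows: minimize sum_t (x[t] - 2 x[t+1] + x[t+2])^2.
D2 = np.zeros((T - 2, T))
for t in range(T - 2):
    D2[t, t:t + 3] = [1.0, -2.0, 1.0]

# Constraint rows, weighted strongly so the keyframes are met almost exactly.
w = 1e6
C = np.zeros((len(keys), T))
d = np.zeros(len(keys))
for i, (t, v) in enumerate(keys.items()):
    C[i, t] = w
    d[i] = w * v

# MAP estimate = least-squares solve of prior plus constraints.
x = np.linalg.lstsq(np.vstack([D2, C]),
                    np.concatenate([np.zeros(T - 2), d]), rcond=None)[0]
```

The solve returns the smoothest trajectory consistent with the keys; in the paper the quadratic smoothness term is replaced by a prior learned from motion capture data and the optimization runs in a low-dimensional space.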

Citation

Jinxiang Chai and Jessica K. Hodgins. Constraint-based motion optimization using a statistical dynamic model. ACM Transactions on Graphics (SIGGRAPH 2007), 26(3), August 2007. [BiBTeX]


Construction and optimal search of interpolated motion graphs

Alla Safonova and Jessica K. Hodgins

Abstract

Many compelling applications would become feasible if novice users had the ability to synthesize high quality human motion based only on a simple sketch and a few easily specified constraints. We approach this problem by representing the desired motion as an interpolation of two time-scaled paths through a motion graph. The graph is constructed to support interpolation and pruned for efficient search. We use an anytime version of A* search to find a globally optimal solution in this graph that satisfies the user's specification. Our approach retains the natural transitions of motion graphs and the ability to synthesize physically realistic variations provided by interpolation. We demonstrate the power of this approach by synthesizing optimal or near optimal motions that include a variety of behaviors in a single motion.
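
The core of the search is textbook A*; a minimal sketch follows. This omits the paper's anytime bound tightening and the interpolated-motion-graph construction, and the tiny graph and zero heuristic are made up for illustration:

```python
import heapq

def astar(graph, start, goal, h):
    """A* over an explicit graph: `graph` maps node -> list of
    (neighbor, edge_cost); `h` is an admissible heuristic."""
    frontier = [(h(start), 0.0, start, [start])]
    settled = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                # first goal pop is optimal
        if node in settled and settled[node] <= g:
            continue                      # stale queue entry
        settled[node] = g
        for nxt, cost in graph.get(node, []):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Hypothetical graph; with h = 0, A* reduces to Dijkstra's algorithm.
graph = {'a': [('b', 1.0), ('c', 4.0)],
         'b': [('c', 1.0), ('d', 5.0)],
         'c': [('d', 1.0)]}
cost, path = astar(graph, 'a', 'd', lambda n: 0.0)
```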

Citation

Alla Safonova and Jessica K. Hodgins. Construction and optimal search of interpolated motion graphs. ACM Transactions on Graphics (SIGGRAPH 2007), 26(3), August 2007. [BiBTeX]


Data-driven grasp synthesis using shape matching and task-based pruning

Ying Li, Jiaxin L. Fu, and Nancy S. Pollard

Abstract

Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.

Citation

Ying Li, Jiaxin L. Fu, and Nancy S. Pollard. Data-driven grasp synthesis using shape matching and task-based pruning. IEEE Transactions on Visualization and Computer Graphics, 2007. In press. [BiBTeX]


Face Poser: Interactive Modeling of 3D Facial Expressions Using Model Priors

Manfred Lau, Jin-Xiang Chai, Ying-Qing Xu, and Heung-Yeung Shum

Abstract

In this paper, we present an intuitive interface for interactively posing 3D facial expressions. The user can create and edit facial expressions by drawing freeform strokes, or by directly dragging facial points in 2D screen space. Designing such an interface for face modeling and editing is challenging because many unnatural facial expressions might be consistent with the ambiguous user input. The system automatically learns a model prior from a prerecorded facial expression database and uses it to remove the ambiguity. We formulate the problem in a maximum a posteriori (MAP) framework by combining the prior with user-defined constraints. Maximizing the posterior allows us to generate an optimal and natural facial expression that satisfies the user-defined constraints. Our system is interactive; it is also simple and easy to use. A first-time user can learn to use the system and start creating a variety of natural face models within minutes. We evaluate the performance of our approach with cross validation tests, and by comparing with alternative techniques.

Citation

Manfred Lau, Jin-Xiang Chai, Ying-Qing Xu, and Heung-Yeung Shum. Face poser: Interactive modeling of 3D facial expressions using model priors. In 2007 ACM SIGGRAPH / Eurographics Symposium on Computer Animation, August 2007. [BiBTeX]


Feature Selection for Grasp Recognition from Optical Markers

Lillian Y. Chang, Nancy Pollard, Tom Mitchell, and Eric P. Xing

Abstract

Although the human hand is a complex biomechanical system, only a small set of features may be necessary for observation learning of functional grasp classes. We explore how to methodically select a minimal set of hand pose features from optical marker data for grasp recognition. Supervised feature selection is used to determine a reduced feature set of surface marker locations on the hand that is appropriate for grasp classification of individual hand poses. Classifiers trained on the reduced feature set of five markers retain at least 92% of the prediction accuracy of classifiers trained on a full feature set of thirty markers. The reduced model also generalizes better to new subjects. The dramatic reduction of the marker set size and the success of a linear classifier from local marker coordinates recommend optical marker techniques as a practical alternative to data glove methods for observation learning of grasping.
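
The supervised feature-selection idea can be sketched with greedy forward selection and a nearest-centroid classifier on synthetic data (generic stand-ins for the paper's classifiers and marker data; here only features 0 and 3 of ten are informative):

```python
import numpy as np

def nc_accuracy(Xtr, ytr, Xte, yte):
    """Accuracy of a nearest-centroid classifier, a simple stand-in for the
    linear classifiers discussed above."""
    classes = np.unique(ytr)
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
    dists = ((Xte[:, None, :] - cents[None]) ** 2).sum(axis=-1)
    return (classes[dists.argmin(axis=1)] == yte).mean()

def greedy_select(Xtr, ytr, Xte, yte, k):
    """Forward selection: grow the feature (marker) set one feature at a
    time, keeping whichever addition raises held-out accuracy most."""
    chosen = []
    for _ in range(k):
        rest = [j for j in range(Xtr.shape[1]) if j not in chosen]
        scores = [nc_accuracy(Xtr[:, chosen + [j]], ytr,
                              Xte[:, chosen + [j]], yte) for j in rest]
        chosen.append(rest[int(np.argmax(scores))])
    return chosen

# Synthetic "marker" data: 10 features, class signal only in 0 and 3.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 500)
X = rng.normal(size=(1000, 10))
X[y == 1, 0] += 2.5
X[y == 1, 3] += 2.5
perm = rng.permutation(1000)
X, y = X[perm], y[perm]
chosen = greedy_select(X[:600], y[:600], X[600:], y[600:], k=2)
```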

Citation

Lillian Y. Chang, Nancy Pollard, Tom Mitchell, and Eric P. Xing. Feature selection for grasp recognition from optical markers. In Proceedings of the 2007 IEEE/RSJ Intl. Conference on Intelligent Robots and Systems (IROS 2007), pages 2944–2950, October 2007. [BiBTeX]


Legendre Fluids: A Unified Framework for Analytic Reduced Space Modeling and Rendering of Participating Media

Mohit Gupta and Srinivasa G. Narasimhan

Abstract

In this paper, we present a unified framework for reduced space modeling and rendering of dynamic and non-homogeneous participating media, like snow, smoke, dust and fog. The key idea is to represent the 3D spatial variation of the density, velocity and intensity fields of the media using the same analytic basis. In many situations, natural effects such as mist, outdoor smoke and dust are smooth (low frequency) phenomena, and can be compactly represented by a small number of coefficients of a Legendre polynomial basis. We derive analytic expressions for the derivative and integral operators in the Legendre coefficient space, as well as the triple product integrals of Legendre polynomials. These mathematical results allow us to solve both the Navier-Stokes equations for fluid flow and light transport equations for single scattering efficiently in the reduced Legendre space. Since our technique does not depend on volume grid resolution, we can achieve computational speedups as compared to spatial domain methods while having low memory and pre-computation requirements as compared to data-driven approaches. Also, the analytic definition of derivative and integral operators in the Legendre domain avoids the approximation errors inherent in spatial domain finite difference methods. We demonstrate many interesting visual effects resulting from particles immersed in fluids as well as volumetric scattering in non-homogeneous and dynamic participating media, such as fog and mist.
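
The reduced-space idea can be demonstrated in 1D with NumPy's Legendre utilities: a smooth "density" field is represented by a handful of coefficients, and its derivative is taken analytically in coefficient space with no grid finite differences. The field and the degree are illustrative; the paper works with 3D fields and also derives integral and triple-product operators:

```python
import numpy as np
from numpy.polynomial import legendre as leg

xs = np.linspace(-1.0, 1.0, 400)
f = np.exp(-xs ** 2)                      # smooth, low-frequency field

coeffs = leg.legfit(xs, f, deg=10)        # reduced representation: 11 numbers
f_approx = leg.legval(xs, coeffs)

dcoeffs = leg.legder(coeffs)              # analytic derivative in coeff space
df_approx = leg.legval(xs, dcoeffs)
df_true = -2.0 * xs * np.exp(-xs ** 2)    # exact derivative, for comparison
```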

Citation

Mohit Gupta and Srinivasa G. Narasimhan. Legendre fluids: A unified framework for analytic reduced space modeling and rendering of participating media. In 2007 ACM SIGGRAPH / Eurographics Symposium on Computer Animation, August 2007. [BiBTeX]


Many-Worlds Browsing for Control of Multibody Dynamics

Christopher D. Twigg and Doug L. James

Abstract

Animation techniques for controlling passive simulation are commonly based on an optimization paradigm: the user provides goals a priori, and sophisticated numerical methods minimize a cost function that represents these goals. Unfortunately, for multibody systems with discontinuous contact events these optimization problems can be highly nontrivial to solve, and many-hour offline optimizations, unintuitive parameters, and convergence failures can frustrate end-users and limit usage. On the other hand, users are quite adaptable, and systems which provide interactive feedback via an intuitive interface can leverage the user's own abilities to quickly produce interesting animations. However, the online computation necessary for interactivity limits scene complexity in practice. We introduce Many-Worlds Browsing, a method which circumvents these limits by exploiting the speed of multibody simulators to compute numerous example simulations in parallel (offline and online), and allow the user to browse and modify them interactively. We demonstrate intuitive interfaces through which the user can select among the examples and interactively adjust those parts of the scene that do not match their requirements. We show that using a combination of our techniques, unusual and interesting results can be generated for moderately sized scenes with under an hour of user time. Scalability is demonstrated by sampling much larger scenes using modest offline computations.

Citation

Christopher D. Twigg and Doug L. James. Many-worlds browsing for control of multibody dynamics. ACM Transactions on Graphics (SIGGRAPH 2007), 26(3), August 2007. [BiBTeX]


Near-optimal Character Animation with Continuous Control

Adrien Treuille, Yongjoon Lee, and Zoran Popović

Abstract

We present a new model for real-time character animation with multidimensional, interactive control. The underlying motion engine is data-driven, enables rapid transitions, and automatically enforces foot-skate constraints without inverse kinematics. On top of this motion space, our algorithm learns approximately optimal controllers which use a compact basis representation to guide the system through multidimensional state-goal spaces. These controllers enable real-time character animation that fluidly responds to changing user directives and environmental constraints.
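
The "precomputed near-optimal controller" idea can be sketched with tabular value iteration on a toy goal-reaching problem. The paper learns a compact basis representation over a continuous multidimensional state-goal space; here the space is a 10-state line so the value function can simply be tabulated:

```python
import numpy as np

n = 10                     # states 0..9 on a line; state 9 is the goal
actions = (-1, +1)
gamma = 0.95               # discount factor

def step(s, a):
    return min(max(s + a, 0), n - 1)

V = np.zeros(n)
for _ in range(50):        # value-iteration sweeps (plenty to converge)
    for s in range(n - 1): # goal state keeps V = 0 (no further cost)
        V[s] = min(1.0 + gamma * V[step(s, a)] for a in actions)

# Greedy controller: at every state, take the action with the best value.
policy = [min(actions, key=lambda a, s=s: V[step(s, a)]) for s in range(n)]
```

Once V is precomputed, the runtime controller is just the cheap greedy lookup in the last line, which is what makes real-time response to changing goals possible.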

Citation

Adrien Treuille, Yongjoon Lee, and Zoran Popović. Near-optimal character animation with continuous control. ACM Transactions on Graphics (SIGGRAPH 2007), 26(3), August 2007. [BiBTeX]


Photo Clip Art

Jean-François Lalonde, Derek Hoiem, Alexei A. Efros, Carsten Rother, John Winn, and Antonio Criminisi

Abstract

We present a system for inserting new objects into existing photographs by querying a vast image-based object library, precomputed using a publicly available Internet object database. The central goal is to shield the user from all of the arduous tasks typically involved in image compositing. The user is only asked to do two simple things: 1) pick a 3D location in the scene to place a new object; 2) select an object to insert using a hierarchical menu. We pose the problem of object insertion as a data-driven, 3D-based, context-sensitive object retrieval task. Instead of trying to manipulate the object to change its orientation, color distribution, etc. to fit the new image, we simply retrieve an object of a specified class that has all the required properties (camera pose, lighting, resolution, etc.) from our large object library. We present new automatic algorithms for improving object segmentation and blending, estimating true 3D object size and orientation, and estimating scene lighting conditions. We also present an intuitive user interface that makes object insertion fast and simple even for the artistically challenged.

Citation

Jean-François Lalonde, Derek Hoiem, Alexei A. Efros, Carsten Rother, John Winn, and Antonio Criminisi. Photo clip art. ACM Transactions on Graphics (SIGGRAPH 2007), 26(3), August 2007. [BiBTeX]


Responsive Characters from Motion Fragments

James McCann and Nancy S. Pollard

Abstract

In game environments, animated character motion must rapidly adapt to changes in player input -- for example, if a directional signal from the player's gamepad is not incorporated into the character's trajectory immediately, the character may blithely run off a ledge. Traditional schemes for data-driven character animation lack the split-second reactivity required for this direct control; while they can be made to work, motion artifacts will result. We describe an on-line character animation controller that assembles a motion stream from short motion fragments, choosing each fragment based on current player input and the previous fragment. By adding a simple model of player behavior we are able to improve an existing reinforcement learning method for precalculating good fragment choices. We demonstrate the efficacy of our model by comparing the animation selected by our new controller to that selected by existing methods and to the optimal selection, given knowledge of the entire path. This comparison is performed over real-world data collected from a game prototype. Finally, we provide results indicating that occasional low-quality transitions between motion segments are crucial to high-quality on-line motion generation; this is an important result for others crafting animation systems for directly-controlled characters, as it argues against the common practice of transition thresholding.

Citation

James McCann and Nancy S. Pollard. Responsive characters from motion fragments. ACM Transactions on Graphics (SIGGRAPH 2007), 26(3), August 2007. [BiBTeX]


Robust estimation of dominant axis of rotation

Lillian Y. Chang and Nancy Pollard

Abstract

A simple method is developed for robustly estimating a fixed dominant axis of rotation (AoR) of anatomical joints from surface marker data. Previous approaches which assume a model of circular marker trajectories use plane-fitting to estimate the direction of the AoR. However, when there is limited joint range of motion and rotation due to a second degree of freedom, minimizing only the planar error can give poor estimates of the AoR direction. Optimizing a cost function which includes the error component within a plane, instead of only the component orthogonal to a plane, leads to improved estimates of the AoR direction for joints which exhibit additional rotational motion from a second degree of freedom. Results from synthetic data validation show the ranges of motion where the new method has lower estimation error compared to plane-fitting techniques. Estimates of the flexion-extension AoR from empirical motion capture data of the knee and index finger joints were also more anatomically plausible.
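
The plane-fitting baseline that the paper improves on is compact: markers rotating about a fixed axis trace circles in parallel planes, so the axis direction is the smallest-variance direction of the trajectory (the last right singular vector of the centered samples). The sketch below uses an idealized full-circle trajectory, precisely the regime where plane fitting still works well; the paper's in-plane error term matters for small ranges of motion with a second degree of freedom:

```python
import numpy as np

def plane_fit_axis(markers):
    """Baseline plane-fitting AoR direction: smallest-variance direction
    of the centered marker trajectory."""
    centered = markers - markers.mean(axis=0)
    return np.linalg.svd(centered, full_matrices=False)[2][-1]

# Full-circle trajectory about the z axis, offset from the origin.
theta = np.linspace(0.0, 2.0 * np.pi, 100)
traj = np.stack([0.2 + np.cos(theta),
                 -0.1 + np.sin(theta),
                 0.5 * np.ones_like(theta)], axis=1)
axis = plane_fit_axis(traj)          # should align with (0, 0, 1) up to sign
```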

Citation

Lillian Y. Chang and Nancy Pollard. Robust estimation of dominant axis of rotation. Journal of Biomechanics, 40(12):2707–2715, 2007. [BiBTeX]


Scene Completion Using Millions of Photographs

James Hays and Alexei A. Efros

Abstract

What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of results for each input image and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.

Citation

James Hays and Alexei A. Efros. Scene completion using millions of photographs. ACM Transactions on Graphics (SIGGRAPH 2007), 26(3), August 2007. [BiBTeX]


Time-critical distributed contact for 6-DoF haptic rendering of adaptively sampled reduced deformable models

Jernej Barbič and Doug L. James

Abstract

Real-time evaluation of distributed contact forces for rigid or deformable 3D objects is important for providing multi-sensory feedback in emerging real-time applications, such as 6-DoF haptic force-feedback rendering. Unfortunately, at very high temporal rates (1 kHz for haptics), there is often insufficient time to resolve distributed contact between geometrically complex objects. In this paper, we present a spatially and temporally adaptive sample-based approach to approximate contact forces under hard real-time constraints. The approach is CPU based, and supports contact between a rigid and a reduced deformable model with complex geometry. Penalty-based contact forces are efficiently resolved using a multi-resolution point-based representation for one object, and a signed-distance field for the other. Hard real-time approximation of distributed contact forces uses multi-level progressive point-contact sampling, and exploits temporal coherence, graceful degradation and other optimizations. We present several examples of 6-DoF haptic rendering of geometrically complex rigid and deformable objects in distributed contact at real-time kilohertz rates.
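
The penalty-based contact resolution can be sketched as point samples of one object tested against a signed distance field of the other. The paper uses precomputed distance fields and a multi-resolution point hierarchy under a hard real-time budget; here the field is an analytic sphere and the points are hand-picked, purely for illustration:

```python
import numpy as np

def penalty_forces(points, center, radius, k=1000.0):
    """Spring-like force k * penetration along the outward SDF gradient for
    points inside the sphere (SDF < 0); zero force for points outside."""
    d = points - center
    dist = np.linalg.norm(d, axis=-1, keepdims=True)
    phi = dist - radius              # signed distance (negative inside)
    grad = d / dist                  # outward unit gradient of the SDF
    return np.where(phi < 0.0, -k * phi * grad, 0.0)

pts = np.array([[0.5, 0.0, 0.0],     # penetrating point (0.5 deep)
                [2.0, 0.0, 0.0]])    # free point
forces = penalty_forces(pts, np.zeros(3), 1.0)
```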

Citation

Jernej Barbič and Doug L. James. Time-critical distributed contact for 6-dof haptic rendering of adaptively sampled reduced deformable models. In 2007 ACM SIGGRAPH / Eurographics Symposium on Computer Animation, August 2007. [BiBTeX]


Using Color Compatibility for Assessing Image Realism

Jean-François Lalonde and Alexei A. Efros

Abstract

Why does placing an object from one photograph into another often make the colors of that object suddenly look wrong? One possibility is that humans prefer distributions of colors that are often found in nature; that is, we find pleasing these color combinations that we see often. Another possibility is that humans simply prefer colors to be consistent within an image, regardless of what they are. In this paper, we explore some of these issues by studying the color statistics of a large dataset of natural images, and by looking at differences in color distribution in realistic and unrealistic images. We apply our findings to two problems: 1) classifying composite images into realistic vs. non-realistic, and 2) recoloring image regions for realistic compositing.
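
One simple way to score the color consistency of an inserted region against the rest of an image is a chi-square distance between color histograms; this is an illustrative stand-in, not the paper's exact measure, and the "hue" samples below are synthetic:

```python
import numpy as np

def chi2_hist_distance(a, b, bins=16, eps=1e-9):
    """Chi-square distance between normalized 1D color histograms:
    0 for identical distributions, 1 for fully disjoint ones."""
    ha = np.histogram(a, bins=bins, range=(0.0, 1.0))[0].astype(float)
    hb = np.histogram(b, bins=bins, range=(0.0, 1.0))[0].astype(float)
    ha /= ha.sum()
    hb /= hb.sum()
    return 0.5 * ((ha - hb) ** 2 / (ha + hb + eps)).sum()

# Hypothetical hue samples: a background, an object with compatible colors,
# and an object whose colors fall in a disjoint range.
rng = np.random.default_rng(1)
bg = rng.uniform(0.0, 0.5, 5000)
d_ok = chi2_hist_distance(bg, rng.uniform(0.0, 0.5, 1000))
d_bad = chi2_hist_distance(bg, rng.uniform(0.5, 1.0, 1000))
```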

Citation

Jean-François Lalonde and Alexei A. Efros. Using color compatibility for assessing image realism. IEEE International Conference on Computer Vision, 2007. [BiBTeX]


Anthropomorphism influences perception of computer-animated characters' actions

Thierry Chaminade, Jessica Hodgins, and Mitsuo Kawato

Abstract

Computer-animated characters are common in popular culture and have begun to be used as experimental tools in social cognitive neuroscience. Here we investigated how the appearance of these characters influences perception of their actions. Subjects were presented with different characters animated either with motion data captured from human actors or by interpolating between poses (keyframes) designed by an animator, and were asked to categorize the motion as biological or artificial. The response bias towards 'biological', derived from Signal Detection Theory, decreases with the characters' anthropomorphism, while sensitivity is only affected by the simplest rendering style, point-light displays. fMRI showed that the response bias correlates positively with activity in the mentalizing network, including the left temporoparietal junction and anterior cingulate cortex, and negatively with regions sustaining motor resonance. The absence of a significant effect of the characters on brain activity suggests individual differences in the neural responses to unfamiliar artificial agents. While computer-animated characters are invaluable tools for investigating the neural bases of social cognition, further research is required to better understand how factors such as anthropomorphism affect their perception, in order to optimize their appearance for entertainment, research, or therapeutic purposes.

Citation

Thierry Chaminade, Jessica Hodgins, and Mitsuo Kawato. Anthropomorphism influences perception of computer-animated characters' actions. Social Cognitive and Affective Neuroscience, May 2007. [BiBTeX]


Interactive Tensor Field Design and Visualization on Surfaces

Eugene Zhang, James Hays, and Greg Turk

Abstract

Designing tensor fields in the plane and on surfaces is a necessary task in many graphics applications, such as painterly rendering, pen-and-ink sketching of smooth surfaces, and anisotropic remeshing. In this paper, we present an interactive design system that allows a user to create a wide variety of surface tensor fields with control over the number and location of degenerate points. Our system combines basis tensor fields to make an initial tensor field that satisfies a set of user specifications. However, such a field often contains unwanted degenerate points that cannot always be eliminated due to topological constraints of the underlying surface. To reduce the artifacts caused by these degenerate points, our system allows the user to move a degenerate point or to cancel a pair of degenerate points that have opposite tensor indices.

Citation

Eugene Zhang, James Hays, and Greg Turk. Interactive tensor field design and visualization on surfaces. IEEE Transactions on Visualization and Computer Graphics, 13(1):94–107, January 2007. [BiBTeX]
