
A Point-based Method for Animating Incompressible Flow

FunShing Sin, Adam W. Bargteil, and Jessica K. Hodgins

Abstract

In this paper, we present a point-based method for animating incompressible flow. The advection term is handled by moving the sample points through the flow in a Lagrangian fashion. However, unlike most previous approaches, the pressure term is handled by performing a projection onto a divergence-free field. To perform the pressure projection, we compute a Voronoi diagram with the sample points as input. Borrowing from Finite Volume Methods, we then invoke the divergence theorem and ensure that each Voronoi cell is divergence free. To handle complex boundary conditions, Voronoi cells are clipped against obstacle boundaries and free surfaces. The method is stable and flexible, and it combines many of the desirable features of point-based and grid-based methods. We demonstrate our approach on several examples of splashing and streaming liquid and swirling smoke.
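The key step described above is the projection onto a divergence-free field. The sketch below illustrates that step on a periodic regular grid with a damped Jacobi pressure solve, rather than the paper's clipped Voronoi cells, so all names and the discretization are simplifying assumptions, not the authors' method:

```python
import numpy as np

def pressure_project(u, v, iters=2000, omega=0.8):
    """Illustrative pressure projection on a periodic regular grid
    (the paper instead integrates fluxes over clipped Voronoi cells).
    u, v: (n, n) velocity components. Returns a divergence-free field."""
    # Discrete divergence with forward differences.
    div = (np.roll(u, -1, 1) - u) + (np.roll(v, -1, 0) - v)
    # Damped Jacobi iterations for the pressure Poisson equation lap(p) = div.
    p = np.zeros_like(div)
    for _ in range(iters):
        p_new = (np.roll(p, 1, 0) + np.roll(p, -1, 0)
               + np.roll(p, 1, 1) + np.roll(p, -1, 1) - div) * 0.25
        p = (1.0 - omega) * p + omega * p_new
    # Subtract the pressure gradient (backward differences, so the
    # composite operator matches the 5-point Laplacian used above).
    u = u - (p - np.roll(p, 1, 1))
    v = v - (p - np.roll(p, 1, 0))
    return u, v
```

Forward-difference divergence paired with backward-difference gradients makes the composed operator exactly the Laplacian solved for, so the projected field is discretely divergence free up to solver tolerance.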

Citation

FunShing Sin, Adam W. Bargteil, and Jessica K. Hodgins. A point-based method for animating incompressible flow. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, August 2009. [BiBTeX]

Estimating Natural Illumination from a Single Outdoor Image

Jean-François Lalonde, Alexei A. Efros, and Srinivasa G. Narasimhan

Abstract

Given a single outdoor image, we present a method for estimating the likely illumination conditions of the scene. In particular, we compute the probability distribution over the sun position and visibility. The method relies on a combination of weak cues that can be extracted from different portions of the image: the sky, the vertical surfaces, and the ground. While no single cue can reliably estimate illumination by itself, each one can reinforce the others to yield a more robust estimate. This is combined with a data-driven prior computed over a dataset of 6 million Internet photos. We present quantitative results on a webcam dataset with annotated sun positions, as well as qualitative results on consumer-grade photographs downloaded from the Internet. Based on the estimated illumination, we show how to realistically insert synthetic 3-D objects into the scene.
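The cue-combination idea above can be sketched as multiplying per-cue likelihoods with the data-driven prior over discretized sun positions; the function name and the discrete-bin setup are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def combine_cues(prior, likelihoods):
    """Fuse weak per-cue likelihoods (e.g. sky, vertical surfaces, ground)
    with a data-driven prior over discretized sun positions.
    prior: (n,) positive array; likelihoods: list of (n,) positive arrays.
    Returns a normalized posterior over the n sun-position bins."""
    log_post = np.log(np.asarray(prior, float))
    for lk in likelihoods:
        log_post += np.log(np.asarray(lk, float))
    log_post -= log_post.max()      # avoid underflow before exponentiating
    post = np.exp(log_post)
    return post / post.sum()
```

Working in log space keeps the product of many weak likelihoods numerically stable; agreeing cues sharpen the posterior peak even when each cue alone is diffuse.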

Citation

Jean-François Lalonde, Alexei A. Efros, and Srinivasa G. Narasimhan. Estimating natural illumination from a single outdoor image. In IEEE International Conference on Computer Vision, 2009. [BiBTeX]

Face Poser: Interactive Modeling of 3D Facial Expressions Using Facial Priors

Manfred Lau, Jinxiang Chai, Ying-Qing Xu, and Heung-Yeung Shum

Abstract

This article presents an intuitive and easy-to-use system for interactively posing 3D facial expressions. The user can model and edit facial expressions by drawing freeform strokes, by specifying distances between facial points, by incrementally editing curves on the face, or by directly dragging facial points in 2D screen space. Designing such an interface for 3D facial modeling and editing is challenging because many unnatural facial expressions might be consistent with the user's input. We formulate the problem in a maximum a posteriori framework by combining the user's input with priors embedded in a large set of facial expression data. Maximizing the posterior allows us to generate an optimal and natural facial expression that achieves the goal specified by the user. We evaluate the performance of our system by conducting a thorough comparison of our method with alternative facial modeling techniques. To demonstrate the usability of our system, we also perform a user study and compare our system with state-of-the-art facial expression modeling software (Poser 7).
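The maximum a posteriori formulation above reduces, under a linear-Gaussian assumption, to regularized least squares: a likelihood around the user's constraints plus a prior learned from expression data. The names and the linear-Gaussian form below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def map_expression(A, c, mu, prec, lam=1.0):
    """MAP pose sketch: minimize ||A x - c||^2 + lam (x-mu)^T prec (x-mu),
    i.e. a Gaussian likelihood around the user's constraints A x = c plus
    a Gaussian prior N(mu, prec^-1) fit to facial expression data.
    Setting the gradient to zero gives a single linear solve."""
    H = A.T @ A + lam * prec
    b = A.T @ c + lam * prec @ mu
    return np.linalg.solve(H, b)
```

As `lam` grows the solution is pulled toward the prior mean (a natural expression); as it shrinks the solution honors the user's strokes more exactly.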

Citation

Manfred Lau, Jinxiang Chai, Ying-Qing Xu, and Heung-Yeung Shum. Face poser: Interactive modeling of 3D facial expressions using facial priors. ACM Transactions on Graphics, 29(1), 2009. [BiBTeX]

Leveraging the Talent of Hand Animators to Create Three-Dimensional Animation

Eakta Jain, Yaser Sheikh, and Jessica K. Hodgins

Abstract

The skills required to create compelling three-dimensional animation using computer software are quite different from those required to create compelling hand animation with pencil and paper. The three-dimensional medium has several advantages over the traditional medium: it is easy to relight the scene, render it from different viewpoints, and add physical simulations. In this work, we propose a method to leverage the talent of traditionally trained hand animators to create three-dimensional animation, while allowing them to work in the medium that is familiar to them. The input to our algorithm is a set of hand-animated frames. Our key insight is to use motion capture data as a source of domain knowledge and 'lift' the two-dimensional animation to three dimensions, while maintaining the unique style of the input animation. A motion capture clip is projected to two dimensions, the limbs are aligned with the hand-drawn frames, and then the motion is reconstructed into three dimensions. We demonstrate our algorithm on a variety of hand-animated motion sequences on different characters, including ballet, a stylized sneaky walk, and a sequence of jumping jacks.
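The project-match-reconstruct pipeline above can be sketched in a bare-bones form: orthographically project each mocap frame, pick the best 2D match to the drawing, and borrow that frame's depths. Everything here (the function name, orthographic camera, and whole-frame matching) is a simplifying assumption; the paper additionally aligns individual limbs:

```python
import numpy as np

def lift_drawing(drawn_2d, mocap_3d):
    """Lift a hand-drawn 2D pose to 3D using mocap as domain knowledge.
    drawn_2d: (joints, 2) drawn joint positions.
    mocap_3d: (frames, joints, 3) motion capture clip.
    Returns the lifted (joints, 3) pose and the index of the matched frame."""
    proj = mocap_3d[:, :, :2]                     # orthographic projection
    err = np.linalg.norm(proj - drawn_2d, axis=2).sum(axis=1)
    best = int(np.argmin(err))
    # Keep the drawn 2D positions (preserving the animator's style) and
    # append the matched frame's depth coordinate.
    lifted = np.concatenate([drawn_2d, mocap_3d[best, :, 2:]], axis=1)
    return lifted, best
```

Because the drawn x/y coordinates are kept verbatim and only depth is borrowed, the animator's 2D style survives the lift.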

Citation

Eakta Jain, Yaser Sheikh, and Jessica K. Hodgins. Leveraging the talent of hand animators to create three-dimensional animation. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, August 2009. [BiBTeX]

Local Layering

James McCann and Nancy S. Pollard

Abstract

In a conventional 2D painting or compositing program, graphical objects are stacked in a user-specified global order, as if each were printed on an image-sized sheet of transparent film. In this work we show how to relax this restriction so that users can make stacking decisions on a per-overlap basis, as if the layers were pictures cut from a magazine. This allows for complex and visually exciting overlapping patterns, without painstaking layer-splitting, depth-value painting, region coloring, or mask-drawing. Instead, users are presented with a layers dialog which acts locally. Behind the scenes, we divide the image into overlap regions and track the ordering of layers in each region. We formalize this structure as a graph of stacking lists, define the set of orderings where layers do not interpenetrate as consistent, and prove that our local stacking operators are both correct and sufficient to reach any consistent stacking. We also provide a method for updating the local stacking when objects change shape or position due to user editing -- this scheme prevents layer updates from producing undesired intersections. Our method extends trivially to both animation compositing and local visibility adjustment in depth-peeled 3D scenes; the latter allows for the creation of impossible figures which can be viewed and manipulated in real-time.
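The per-overlap bookkeeping described above can be sketched as a map from overlap regions to stacking lists with a local reordering operator. This is a minimal illustration under assumed names; the paper additionally defines consistency across regions and proves its operators preserve it:

```python
class LocalStacking:
    """Graph-of-stacking-lists sketch: each overlap region keeps its own
    top-to-bottom list of the layers present there, so a reordering in
    one region leaves every other region untouched."""

    def __init__(self, regions):
        # regions: mapping of region id -> layer ids, topmost first
        self.regions = {r: list(layers) for r, layers in regions.items()}

    def flip_up(self, region, layer):
        """Move `layer` one step toward the top within a single region."""
        order = self.regions[region]
        i = order.index(layer)
        if i > 0:
            order[i - 1], order[i] = order[i], order[i - 1]

    def top(self, region):
        """Return the topmost layer visible in this overlap region."""
        return self.regions[region][0]
```

A global layers dialog would reorder every region at once; here `flip_up` touches exactly one overlap, which is what makes magazine-cutout-style interleavings expressible.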

Citation

James McCann and Nancy S. Pollard. Local layering. ACM Transactions on Graphics (SIGGRAPH 2009), 28(3), August 2009. [BiBTeX]

Modular Bases for Fluid Dynamics

Martin Wicke, Matt Stanton, and Adrien Treuille

Abstract

We present a new approach to fluid simulation that balances the speed of model reduction with the flexibility of grid-based methods. We construct a set of composable reduced models, or tiles, which capture spatially localized fluid behavior. We then precompute coupling terms so that these models can be rearranged at runtime. To enforce consistency between tiles, we introduce constraint reduction. This technique modifies a reduced model so that a given set of linear constraints can be fulfilled. Because dynamics and constraints can be solved entirely in the reduced space, our method is extremely fast and scales to large domains.
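The constraint-reduction step above can be sketched in linear-algebra terms: with reduced velocities u = B q, enforcing linear constraints C u = 0 restricts the coefficients q to the null space of C B. The function name and the pseudoinverse construction are illustrative assumptions:

```python
import numpy as np

def constraint_projector(B, C):
    """Constraint reduction sketch: velocities are u = B q for a reduced
    basis B; enforcing C u = 0 restricts q to the null space of M = C B.
    Returns the orthogonal projector onto that null space, so any
    projected coefficient vector satisfies the constraints exactly."""
    M = C @ B
    # I - pinv(M) @ M projects onto null(M) (M @ P = 0 by the
    # pseudoinverse identity M @ pinv(M) @ M = M).
    return np.eye(B.shape[1]) - np.linalg.pinv(M) @ M
```

Because the projector lives entirely in the small coefficient space, applying it at runtime costs only a reduced-dimension matrix-vector product, which is what lets dynamics and constraints stay in the reduced space.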

Citation

Martin Wicke, Matt Stanton, and Adrien Treuille. Modular bases for fluid dynamics. ACM Transactions on Graphics (SIGGRAPH 2009), 28(3), August 2009. [BiBTeX]

Simulating Balance Recovery Responses to Trips Based on Biomechanical Principles

Takaaki Shiratori, Brooke Coley, Rakié Cham, and Jessica K. Hodgins

Abstract

To realize the full potential of human simulations in interactive environments, we need controllers that have the ability to respond appropriately to unexpected events. In this paper, we create controllers for the trip recovery responses that occur during walking. Two strategies have been identified in human responses to tripping: impact from an obstacle during early swing leads to an elevating strategy, in which the swing leg is lifted over the obstacle; impact during late swing leads to a lowering strategy, in which the swing leg is positioned immediately in front of the obstacle and the other leg is then swung forward and positioned in front of the body to allow recovery from the fall. We design controllers for both strategies based on the available biomechanical literature and data captured from human subjects in the laboratory. We evaluate our controllers by comparing simulated results and actual responses obtained from a motion capture system.
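The strategy selection described above can be sketched as a phase-based switch; the threshold value and function name are purely illustrative assumptions, not figures from the paper or the biomechanical literature:

```python
def recovery_strategy(swing_phase, threshold=0.6):
    """Pick a trip-recovery strategy from the normalized swing phase at
    impact (0 = start of swing, 1 = end of swing). Early-swing impacts
    trigger the elevating strategy, late-swing impacts the lowering
    strategy. The 0.6 threshold is an illustrative assumption."""
    if not 0.0 <= swing_phase <= 1.0:
        raise ValueError("swing_phase must lie in [0, 1]")
    return "elevating" if swing_phase < threshold else "lowering"
```

In a full controller, the returned label would select between the two sets of joint-torque targets designed from the laboratory data.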

Citation

Takaaki Shiratori, Brooke Coley, Rakié Cham, and Jessica K. Hodgins. Simulating balance recovery responses to trips based on biomechanical principles. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, August 2009. [BiBTeX]

Modeling Spatial and Temporal Variation in Motion Data

Manfred Lau, Ziv Bar-Joseph, and James Kuffner

Abstract

We present a novel method to model and synthesize variation in motion data. Given a few examples of a particular type of motion as input, we learn a generative model that is able to synthesize a family of spatial and temporal variants that are statistically similar to the input examples. The new variants retain the features of the original examples, but are not exact copies of them. We learn a Dynamic Bayesian Network model from the input examples that enables us to capture properties of conditional independence in the data, and model it using a multivariate probability distribution. We present results for a variety of human motions and for 2D handwritten characters. We perform a user study showing that our new variants are less repetitive than the typical game and crowd-simulation approach of replaying a small number of existing motion clips. Our technique synthesizes new variants efficiently and has a small memory footprint.
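The generative idea above can be sketched with the simplest dynamic model of this family: a linear-Gaussian transition sampled forward in time. This is a stand-in for the paper's learned Dynamic Bayesian Network, and the parameters here would in practice be fit from the example motions:

```python
import numpy as np

def synthesize_variant(A, x0, steps, noise_std, seed=None):
    """Sample one motion variant from a linear-Gaussian dynamic model
    x_t = A x_{t-1} + eps, with eps ~ N(0, noise_std^2 I).
    A: (d, d) transition matrix; x0: (d,) initial pose features.
    Returns an array of shape (steps + 1, d)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    frames = [x]
    for _ in range(steps):
        x = A @ x + rng.normal(0.0, noise_std, size=x.shape)
        frames.append(x)
    return np.stack(frames)
```

Each call with a fresh seed yields a distinct trajectory that follows the same learned dynamics, which is the sense in which the variants are statistically similar but not copies.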

Citation

Manfred Lau, Ziv Bar-Joseph, and James Kuffner. Modeling spatial and temporal variation in motion data. ACM Transactions on Graphics (SIGGRAPH Asia 2009), 28(5), 2009. [BiBTeX]

Webcam Clip Art: Appearance and Illuminant Transfer from Time-lapse Sequences

Jean-François Lalonde, Alexei A. Efros, and Srinivasa G. Narasimhan

Abstract

Webcams placed all over the world observe and record the visual appearance of a variety of outdoor scenes over long periods of time. The recorded time-lapse image sequences cover a wide range of illumination and weather conditions -- a vast untapped resource for creating visual realism. In this work, we propose to use a large repository of webcams as a 'clip art' library from which users may transfer scene appearance (objects, scene backdrops, outdoor illumination) into their own time-lapse sequences or even single photographs. The goal is to combine the recent ideas from data-driven appearance transfer techniques with a general and theoretically-grounded physically-based illumination model. To accomplish this, the paper presents three main research contributions: 1) a new, high-quality outdoor webcam database of over 1300 sequences containing over 1.2 million images, calibrated radiometrically and geometrically; 2) a novel approach for matching illuminations across different scenes based on the estimation of the properties of natural illuminants (sun, sky, weather and clouds), the camera geometry, and illumination-dependent scene features; 3) a new algorithm for generating physically plausible high dynamic range environment maps for each frame in a webcam sequence.
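The illuminant-matching contribution above can be sketched at its simplest as nearest-neighbor search in an illumination feature space; the feature choice and function name are assumptions, and the paper's actual matching is physically grounded rather than purely metric:

```python
import numpy as np

def match_illuminant(query, database):
    """Find the webcam frame whose illumination features best match the
    query (features might be sun azimuth/zenith and mean sky color; the
    choice here is illustrative). query: (d,); database: (n, d).
    Returns the index of the closest frame."""
    dists = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(dists))
```

With a calibrated database like the 1300-sequence repository described above, the matched frame supplies appearance (objects, backdrops, illumination) to transfer into the user's photograph.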

Citation

Jean-François Lalonde, Alexei A. Efros, and Srinivasa G. Narasimhan. Webcam clip art: Appearance and illuminant transfer from time-lapse sequences. ACM Transactions on Graphics (SIGGRAPH Asia 2009), 28(5), 2009. [BiBTeX]
