Accelerometer-based User Interfaces for the Control of a Physically Simulated Character

Takaaki Shiratori and Jessica K. Hodgins

Abstract

In late 2006, Nintendo released a new game controller, the Wiimote, which included a three-axis accelerometer. Since then, a large variety of novel applications for these controllers have been developed by both independent and commercial developers. We add to this growing library with three performance interfaces that allow the user to control the motion of a dynamically simulated, animated character through the motion of his or her arms, wrists, or legs. For comparison, we also implement a traditional joystick/button interface. We assess these interfaces by having users test them on a set of tracks containing turns and pits. Two of the interfaces (legs and wrists) were judged to be more immersive and were better liked than the joystick/button interface by our subjects. All three of the Wiimote interfaces provided better control than the joystick interface based on an analysis of the failures seen during the user study.

Citation

Takaaki Shiratori and Jessica K. Hodgins. Accelerometer-based user interfaces for the control of a physically simulated character. ACM Transactions on Graphics (SIGGRAPH Asia 2008), 27(5), December 2008.

Backward Steps in Rigid Body Simulation

Christopher D. Twigg and Doug L. James

Abstract

Physically based simulation of rigid body dynamics is commonly done by time-stepping systems forward in time. In this paper, we propose methods to allow time-stepping rigid body systems backward in time. Unfortunately, reverse-time integration of rigid bodies involving frictional contact is mathematically ill-posed, and can lack unique solutions. We instead propose time-reversed rigid body integrators that can sample possible solutions when unique ones do not exist. We also discuss challenges related to dissipation-related energy gain, sensitivity to initial conditions, stacking, constraints and articulation, rolling, sliding, skidding, bouncing, high angular velocities, rapid velocity growth from micro-collisions, and other problems encountered when going against the usual flow of time.
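
In the smooth, contact-free case, a standard symplectic Euler step is exactly invertible, which is the natural starting point for a time-reversed integrator. A minimal sketch with a scalar state (function names are illustrative; none of the paper's contact-sampling machinery is shown):

```python
def forward_step(x, v, a, dt):
    # Symplectic Euler: update velocity first, then position.
    v_next = v + a * dt
    x_next = x + v_next * dt
    return x_next, v_next

def backward_step(x_next, v_next, a, dt):
    # Exact algebraic inverse of forward_step. This only works in the
    # contact-free case; with frictional contact, reverse integration
    # is ill-posed and possible solutions must be sampled instead.
    v = v_next - a * dt
    x = x_next - v_next * dt
    return x, v
```

Running a backward step on the output of a forward step recovers the original state to machine precision; the difficulties described in the abstract arise precisely where this clean invertibility breaks down.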

Citation

Christopher D. Twigg and Doug L. James. Backward steps in rigid body simulation. ACM Transactions on Graphics (SIGGRAPH 2008), 27(3), August 2008.

Image-based Shaving

Minh Hoai Nguyen, Jean-François Lalonde, Alexei A. Efros, and Fernando de la Torre

Abstract

Many categories of objects, such as human faces, can be naturally viewed as a composition of several different layers. For example, a bearded face with glasses can be decomposed into three layers: a layer for glasses, a layer for the beard and a layer for other permanent facial features. While modeling such a face with a linear subspace model could be very difficult, layer separation allows for easy modeling and modification of certain structures while leaving others unchanged. In this paper, we present a method for automatic layer extraction and its applications to face synthesis and editing. Layers are automatically extracted by utilizing the differences between subspaces and modeled separately. We show that our method can be used for tasks such as beard removal (virtual shaving), beard synthesis, and beard transfer, among others.

Citation

Minh Hoai Nguyen, Jean-François Lalonde, Alexei A. Efros, and Fernando de la Torre. Image-based shaving. Computer Graphics Forum (Eurographics 2008), 27(2), 2008.

Laziness is a virtue: Motion stitching using effort minimization

Lei Li, James McCann, Christos Faloutsos, and Nancy Pollard

Abstract

Given two motion-capture sequences that are to be stitched together, how can we assess the goodness of the stitching? The straightforward solution, Euclidean distance, permits counter-intuitive results because it ignores the effort required to actually make the stitch. The main contribution of our work is that we propose an intuitive, first-principles approach, by computing the effort that is needed to do the transition (laziness-effort, or 'L-score'). Our conjecture is that the smaller the effort, the more natural the transition will seem to humans. Moreover, we propose the elastic L-score which allows for elongated stitching, to make a transition as natural as possible. We present preliminary experiments on both artificial and real motions which show that our L-score approach indeed agrees with human intuition, chooses good stitching points, and generates natural transition paths.
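
The idea of scoring a stitch by effort rather than Euclidean distance can be illustrated with a toy sketch. The constant-acceleration blend model and all names below are illustrative assumptions, not the paper's actual L-score formulation:

```python
def transition_effort(pose_a, vel_a, pose_b, vel_b, duration=0.5):
    # Effort needed to steer each joint from the end state of clip A
    # to the start state of clip B over `duration` seconds: squared
    # constant acceleration, plus any residual velocity mismatch at
    # the end of the blend.
    effort = 0.0
    for qa, va, qb, vb in zip(pose_a, vel_a, pose_b, vel_b):
        accel = 2.0 * ((qb - qa) - va * duration) / duration ** 2
        v_end = va + accel * duration
        effort += accel * accel + (vb - v_end) ** 2
    return effort
```

Two stitches with identical Euclidean pose distance can score very differently: if clip B continues clip A's velocity, the effort is near zero, while a velocity reversal is heavily penalized.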

Citation

Lei Li, James McCann, Christos Faloutsos, and Nancy Pollard. Laziness is a virtue: Motion stitching using effort minimization. In Short Papers Proceedings of EUROGRAPHICS, 2008.

Preparatory object rotation as a human-inspired grasping strategy

Lillian Y. Chang, Garth J. Zeglin, and Nancy S. Pollard

Abstract

Humans exhibit a rich set of manipulation strategies that may be desirable to mimic in humanoid robots. This study investigates preparatory object rotation as a manipulation strategy for grasping objects from different presented orientations. First, we examine how humans use preparatory rotation as a grasping strategy for lifting heavy objects with handles. We used motion capture to record human manipulation examples of 10 participants grasping objects under different task constraints. When sliding contact of the object on the surface was permitted, participants used preparatory rotation to first adjust the object handle to a desired orientation before grasping to lift the object from the surface. Analysis of the human examples suggests that humans may use preparatory object rotation in order to reuse a particular type of grasp in a specific capture region or to decrease the joint torques required to maintain the lifting pose. Second, we designed a preparatory rotation strategy for an anthropomorphic robot manipulator as a method of extending the capture region of a specific grasp prototype. The strategy was implemented as a sequence of two open-loop actions mimicking the human motion: a preparatory rotation action followed by a grasping action. The grasping action alone can only successfully lift the object from a 45-degree region of initial orientations (4 of 24 tested conditions). Our empirical evaluation of the robot preparatory rotation shows that even using a simple open-loop rotation action enables the reuse of the grasping action for a 360-degree capture region of initial object orientations (24 of 24 tested conditions).
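
The two-action strategy can be sketched as a simple planner: grasp directly when the handle already falls inside the grasp's 45-degree capture region, otherwise rotate it into the region first. Angles are in degrees; the function name and region parameterization are illustrative assumptions, not the robot's actual controller:

```python
def plan_grasp(handle_angle, capture_center=0.0, capture_halfwidth=22.5):
    # Signed angular difference between handle and region center,
    # wrapped into [-180, 180).
    diff = (handle_angle - capture_center + 180.0) % 360.0 - 180.0
    if abs(diff) <= capture_halfwidth:
        return ["grasp"]                    # already inside the capture region
    return [("rotate", -diff), "grasp"]     # preparatory rotation, then grasp
```

The open-loop preparatory rotation is what extends the 45-degree capture region of the grasping action alone to the full 360 degrees of initial object orientations.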

Citation

Lillian Y. Chang, Garth J. Zeglin, and Nancy S. Pollard. Preparatory object rotation as a human-inspired grasping strategy. In IEEE-RAS International Conference on Humanoid Robots (Humanoids 2008), pages 527–534, December 2008.

Real-Time Gradient-Domain Painting

James McCann and Nancy S. Pollard

Abstract

We present an image editing program which allows artists to paint in the gradient domain with real-time feedback on megapixel-sized images. Along with a pedestrian, though powerful, gradient-painting brush and gradient-clone tool, we introduce an edge brush designed for edge selection and replay. These brushes, coupled with special blending modes, allow users to accomplish global lighting and contrast adjustments using only local image manipulations -- e.g. strengthening a given edge or removing a shadow boundary. Such operations would be tedious in a conventional intensity-based paint program and hard for users to get right in the gradient domain without real-time feedback. The core of our paint program is a simple-to-implement GPU multigrid method which allows integration of megapixel-sized full-color gradient fields at over 20 frames per second on modest hardware. By way of evaluation, we present example images produced with our program and characterize the iteration time and convergence rate of our integration method.
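
The integration step can be illustrated at small scale with damped Jacobi relaxation on the least-squares normal equations (the paper uses a much faster GPU multigrid solver, for which this relaxation is the standard smoother; names and the tiny grid are illustrative):

```python
def integrate_gradients(gx, gy, h, w, iters=3000, omega=0.8):
    # Find an h-by-w image I whose finite differences best match the
    # target gradient field: gx[y][x] ~ I[y][x+1]-I[y][x] and
    # gy[y][x] ~ I[y+1][x]-I[y][x], via damped Jacobi iteration.
    I = [[0.0] * w for _ in range(h)]
    for _ in range(iters):
        J = [row[:] for row in I]
        for y in range(h):
            for x in range(w):
                acc, deg = 0.0, 0
                if x + 1 < w:   # edge to right neighbor
                    acc += I[y][x + 1] - gx[y][x]; deg += 1
                if x > 0:       # edge from left neighbor
                    acc += I[y][x - 1] + gx[y][x - 1]; deg += 1
                if y + 1 < h:   # edge to lower neighbor
                    acc += I[y + 1][x] - gy[y][x]; deg += 1
                if y > 0:       # edge from upper neighbor
                    acc += I[y - 1][x] + gy[y - 1][x]; deg += 1
                J[y][x] = (1.0 - omega) * I[y][x] + omega * acc / deg
        I = J
    return I
```

The solution is only determined up to an additive constant, so reconstructions are compared after subtracting their means. A multigrid method accelerates exactly this relaxation by also correcting low-frequency error on coarser grids, which is what makes megapixel-rate feedback feasible.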

Citation

James McCann and Nancy S. Pollard. Real-time gradient-domain painting. ACM Transactions on Graphics (SIGGRAPH 2008), 27(3), August 2008.

Six-DoF Haptic Rendering of Contact between Geometrically Complex Reduced Deformable Models

Jernej Barbič and Doug L. James

Abstract

Real-time evaluation of distributed contact forces between rigid or deformable 3D objects is a key ingredient of 6-DoF force-feedback rendering. Unfortunately, at very high temporal rates, there is often insufficient time to resolve contact between geometrically complex objects. We propose a spatially and temporally adaptive approach to approximate distributed contact forces under hard real-time constraints. Our method is CPU-based and supports contact between rigid or reduced deformable models with complex geometry. We propose a contact model that uses a point-based representation for one object and a signed-distance field for the other. This model is related to the Voxmap-PointShell (VPS) method, but gives continuous contact forces and torques, enabling stable rendering of stiff penalty-based distributed contacts. We demonstrate that stable haptic interactions can be achieved by point-sampling offset surfaces to input "polygon soup" geometry using particle repulsion. We introduce a multiresolution nested pointshell construction that permits level-of-detail contact forces and enables graceful degradation of contact in close-proximity scenarios. Parametrically deformed distance fields are proposed for contact between reduced deformable objects. We present several examples of 6-DoF haptic rendering of geometrically complex rigid and deformable objects in distributed contact at real-time kilohertz rates.
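
The point-versus-distance-field contact model can be illustrated in its simplest penalty form (a bare-bones sketch with illustrative names; the paper's model additionally produces torques, keeps forces continuous across contact changes, and runs inside a kilohertz haptic loop):

```python
def penalty_contact_force(points, sdf, sdf_gradient, stiffness=1000.0):
    # Sum penalty forces over all pointshell points: a point with a
    # negative signed distance is inside the other object and is
    # pushed out along the distance-field gradient, with magnitude
    # proportional to its penetration depth.
    fx = fy = fz = 0.0
    for p in points:
        d = sdf(p)
        if d < 0.0:
            gx, gy, gz = sdf_gradient(p)
            fx -= stiffness * d * gx
            fy -= stiffness * d * gy
            fz -= stiffness * d * gz
    return (fx, fy, fz)
```

Because only penetrating points contribute, the cost scales with the number of points actually in contact, which is what makes level-of-detail pointshells and graceful degradation effective.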

Citation

Jernej Barbič and Doug L. James. Six-DoF haptic rendering of contact between geometrically complex reduced deformable models. IEEE Transactions on Haptics, 1(1):39–52, 2008.

What does the sky tell us about the camera?

Jean-François Lalonde, Srinivasa G. Narasimhan, and Alexei A. Efros

Abstract

As the main observed illuminant outdoors, the sky is a rich source of information about the scene. However, it is yet to be fully explored in computer vision because its appearance depends on the sun position, weather conditions, photometric and geometric parameters of the camera, and the location of capture. In this paper, we propose the use of a physically-based sky model to analyze the information available within the visible portion of the sky, observed over time. By fitting this model to an image sequence, we show how to extract camera parameters such as the focal length, and the zenith and azimuth angles. In short, the sky serves as a geometric calibration target. Once the camera parameters are recovered, we show how to use the same model in two applications: 1) segmentation of the sky and cloud layers, and 2) data-driven sky matching across different image sequences based on a novel similarity measure defined on sky parameters. This measure, combined with a rich appearance database, allows us to model a wide range of sky conditions.
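
The model-fitting idea can be sketched with a one-parameter toy version: search for the sun azimuth whose predicted sky intensities best explain the observed pixels. The paper fits a full physically-based sky model jointly with camera parameters; this brute-force one-dimensional search and its names only illustrate the fitting principle:

```python
import math

def fit_sun_azimuth(observed, pixel_azimuths, sky_model):
    # Least-squares search over candidate sun azimuths (1-degree steps):
    # keep the azimuth whose model predictions best match the observed
    # intensities at the given pixel azimuths.
    best_phi, best_err = 0.0, float("inf")
    for step in range(360):
        phi = math.radians(step)
        err = sum((sky_model(a, phi) - o) ** 2
                  for a, o in zip(pixel_azimuths, observed))
        if err < best_err:
            best_phi, best_err = phi, err
    return best_phi
```

Observing the same sky region over time, as the paper does, is what turns this kind of fit into a geometric calibration target for the camera itself.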

Citation

Jean-François Lalonde, Srinivasa G. Narasimhan, and Alexei A. Efros. What does the sky tell us about the camera? In European Conference on Computer Vision, 2008.

FMDistance: A fast and effective distance function for motion capture data

Kensuke Onuma, Christos Faloutsos, and Jessica K. Hodgins

Abstract

Given several motion capture sequences, of similar (but not identical) length, what is a good distance function? We want to find similar sequences, to spot outliers, to create clusters, and to visualize the (large) set of motion capture sequences at our disposal. We propose a set of new features for motion capture sequences. We experiment with numerous variations (112 feature-sets in total, using variations of weights, logarithms, dimensionality reduction), and we show that the appropriate combination leads to near-perfect classification on a database of 226 actions with twelve different categories, and it enables visualization of the whole database as well as outlier detection.
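
The fixed-length-feature idea can be sketched as follows: summarize each sequence by per-joint velocity statistics (log-compressed), then compare descriptors with Euclidean distance. The specific features below are illustrative stand-ins, not any of the paper's 112 evaluated feature sets:

```python
import math

def motion_features(frames):
    # frames: list of poses, each a list of joint values. The descriptor
    # is length-independent: mean absolute per-frame velocity of each
    # joint, log-compressed so fast and slow motions stay comparable.
    n_joints = len(frames[0])
    feats = []
    for j in range(n_joints):
        vels = [abs(frames[t + 1][j] - frames[t][j])
                for t in range(len(frames) - 1)]
        feats.append(math.log(1.0 + sum(vels) / len(vels)))
    return feats

def feature_distance(fa, fb):
    # Euclidean distance between two fixed-length descriptors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fa, fb)))
```

Because the descriptor has fixed length, two similar motions of different durations map to nearby points, while a motion of a different character lands farther away, which is what makes clustering, outlier spotting, and visualization straightforward.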

Citation

Kensuke Onuma, Christos Faloutsos, and Jessica K. Hodgins. FMDistance: A fast and effective distance function for motion capture data. In Short Papers Proceedings of EUROGRAPHICS, 2008.

Action Capture with Accelerometers

Ronit Slyper and Jessica Hodgins

Abstract

We create a performance animation system that leverages the power of low-cost accelerometers, readily available motion capture databases, and construction techniques from e-textiles. Our system, built with only off-the-shelf parts, consists of five accelerometers sewn into a comfortable shirt that streams data to a computer. The accelerometer readings are continuously matched against accelerations computed from existing motion capture data, and an avatar is animated with the closest match. We evaluate our system visually and using simultaneous motion and accelerometer capture.
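
The matching step can be sketched as nearest-neighbor search: accelerations are synthesized from mocap positions by double differencing, and each incoming sensor window selects the closest database clip. The sketch uses 1-D positions and illustrative names; the real system matches five 3-axis sensors continuously:

```python
def synthesize_accelerations(positions, dt):
    # Central second difference: a[t] ~ (p[t-1] - 2 p[t] + p[t+1]) / dt^2,
    # the acceleration an ideal sensor at this point would have measured.
    return [(positions[t - 1] - 2.0 * positions[t] + positions[t + 1]) / dt ** 2
            for t in range(1, len(positions) - 1)]

def closest_clip(sensor_window, database):
    # database: list of (clip_name, accel_window) pairs; return the
    # name of the clip whose synthesized accelerations are closest to
    # the live sensor readings.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda item: sq_dist(item[1], sensor_window))[0]
```

The avatar is then animated with the matched clip, so the quality of the result depends on the motion capture database containing something close to what the wearer is doing.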

Citation

Ronit Slyper and Jessica Hodgins. Action capture with accelerometers. In 2008 ACM SIGGRAPH / Eurographics Symposium on Computer Animation, July 2008.

IM2GPS: Estimating geographic information from a single image

James Hays and Alexei A. Efros

Abstract

Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally — on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we will leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban/rural classification.
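
The data-driven matching idea can be sketched as k-nearest-neighbor voting over a GPS-tagged collection (the tiny database and names below are illustrative stand-ins for the 6-million-image dataset and its scene features):

```python
def geolocate(query_features, database, k=3):
    # database: list of (feature_vector, (lat, lon)) pairs. The k
    # nearest scene matches vote for their locations, yielding a crude
    # discrete probability distribution over the matched places.
    ranked = sorted(database,
                    key=lambda item: sum((a - b) ** 2
                                         for a, b in zip(item[0], query_features)))
    votes = {}
    for _, loc in ranked[:k]:
        votes[loc] = votes.get(loc, 0) + 1
    total = float(sum(votes.values()))
    return {loc: n / total for loc, n in votes.items()}
```

Returning a distribution rather than a single answer is what makes the representation useful even when the scene is ambiguous, e.g. a generic beach that could plausibly be on several coastlines.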

Citation

James Hays and Alexei A. Efros. IM2GPS: Estimating geographic information from a single image. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2008.

Method for determining kinematic parameters of the in vivo thumb carpometacarpal joint

Lillian Y. Chang and Nancy S. Pollard

Abstract

The mobility of the thumb carpometacarpal (CMC) joint is critical for functional grasping and manipulation tasks. We present an optimization technique for determining from surface marker measurements a subject-specific kinematic model of the in vivo CMC joint that is suitable for measuring mobility. Our anatomy-based cost metric scores a candidate joint model by the plausibility of the corresponding joint angle values and kinematic parameters rather than only the marker trajectory reconstruction error. The proposed method repeatably determines CMC joint models with anatomically-plausible directions for the two dominant rotational axes and a lesser range of motion (RoM) for the third rotational axis. We formulate a low-dimensional parameterization of the optimization domain by first solving for joint axis orientation variables that then constrain the search for the joint axis location variables. Individual CMC joint models were determined for 24 subjects. The directions of the flexion-extension (FE) axis and adduction-abduction (AA) axis deviated on average by 9 degrees and 22 degrees, respectively, from the mean axis direction. The average RoM for FE, AA, and pronation-supination (PS) joint angles were 76 degrees, 43 degrees, and 23 degrees for active CMC movement. The mean separation distance between the FE and AA axes was 4.6 mm, and the mean skew angle was 87 degrees from the positive flexion axis to the positive abduction axis.

Citation

Lillian Y. Chang and Nancy S. Pollard. Method for determining kinematic parameters of the in vivo thumb carpometacarpal joint. IEEE Transactions on Biomedical Engineering, 55(7):1897–1906, July 2008.