Motorized Marionette

Controlling a motorized marionette to realize human-like, expressive motions

Project Description

A motorized marionette is a marionette whose strings are driven by motors rather than by an operator's hands and fingers. If such a marionette could be programmed to imitate human motions, it would be a powerful and inexpensive device for entertainment purposes, because operating a marionette by hand normally requires long practice. The objective of this project is to develop a control scheme for a motorized marionette that realizes human-like, expressive motions.

There are two possible approaches to controlling a motorized marionette: (1) capture a real performance and replace the operator's fingers with motors, or (2) use human motion capture data as the reference motion. The first approach could make control easier, because the recorded data should already comply with the kinematic and dynamic constraints of the marionette; however, obtaining the data is harder, because the operator must first master the motion. In the second approach, the data are much easier to obtain, but they must be modified so that the motion becomes feasible for the marionette. We are currently pursuing the second approach: we record a motion sequence from a human actor and transform the data to compensate for the kinematic and dynamic differences between the actor and the marionette. We also recognize that the first approach (using real performance data) is important for capturing the talent of a skilled operator and learning how operators modify human motions to make the marionette's motions appear more expressive.

The main differences between human actors and the marionette are:

  1. Kinematic - size and constraints: For example, the string constraints prevent the marionette from moving around very much, while human actors can. In particular, the pelvis of our current hardware is not directly actuated.
  2. Dynamic - mass properties and actuators: The marionette's joints are not actuated like a human actor's joints, so it is impossible to drive the joints to arbitrary angles. We also encounter undesired passive motions such as swinging, especially during fast motions.

The control software is built to compensate for these significant differences and to realize human-like motions on the marionette.

We used four motions: two stories, each performed by two actors. Experimental results showed that the marionette's motions are close enough to the reference motions to distinguish the two different styles of the same story.

Hardware

The marionette is driven by eight hobby servo motors: six for the arms and two for the legs. The motors accept angle commands with a resolution of approximately 180/256 degrees (about 0.7 degrees). Markers are attached only for the experiments that identify the swing dynamics, not during actual performances.
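For illustration, the quantization implied by that resolution can be sketched as below, assuming an 8-bit command (0-255) mapped linearly onto a 0-180 degree range; the exact command protocol and range are assumptions, not specifications of this hardware.

    def angle_to_command(angle_deg, min_deg=0.0, max_deg=180.0):
        """Quantize a desired servo angle into an 8-bit command.

        Assumes the servo maps commands 0..255 linearly onto
        [min_deg, max_deg], giving the ~180/256 degree resolution
        mentioned above.  The command protocol is hypothetical.
        """
        # Clamp to the servo's mechanical range.
        angle_deg = max(min_deg, min(max_deg, angle_deg))
        # One command count covers (max_deg - min_deg) / 256 degrees.
        step = (max_deg - min_deg) / 256.0
        return min(255, int((angle_deg - min_deg) / step))

For example, angle_to_command(90.0) returns 128, the middle of the command range.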

[Photos: back view and front view of the marionette]

Software

The control software is composed of the following four components:

  1. Identification/controller design: The swing dynamics is identified from responses to step inputs, which are measured with the motion capture facility at CMU. The identified dynamics is then used to design a controller that prevents swinging. This process is required only when a new marionette is developed (an identification sketch follows this list).
  2. Mapping: Compute the translation, rotation, and scaling parameters that map the captured markers into the marionette's workspace. The parameters are computed independently at each frame of the motion capture data. This process is required for each motion sequence (a mapping sketch follows this list).
  3. Controller: Apply the controller designed in step 1 to modify the mapped marker trajectories so that undesired swinging does not occur. This is an online process (a swing-suppression sketch follows this list).
  4. Inverse kinematics: Compute the motor angles that bring the (virtual) markers placed on the marionette to the desired locations. These angles are then converted into the corresponding motor commands (an IK sketch follows this list).
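To make step 1 concrete, one standard textbook technique for identifying a lightly damped swing mode from a measured step response is the logarithmic decrement, which estimates the natural frequency and damping ratio from successive oscillation peaks. The following is a minimal sketch of that generic technique, not the identification code actually used in the project; the function and its peak-picking are hypothetical.

    import numpy as np

    def identify_swing(t, y):
        """Estimate natural frequency wn [rad/s] and damping ratio zeta
        of a second-order swing mode from a decaying oscillation y(t),
        e.g. a marker coordinate after a step input with its
        steady-state value subtracted."""
        # Local maxima above zero (simple 3-point test).
        peaks = [i for i in range(1, len(y) - 1)
                 if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > 0]
        if len(peaks) < 2:
            raise ValueError("need at least two oscillation peaks")
        p0, p1 = peaks[0], peaks[-1]
        n = len(peaks) - 1                            # full periods spanned
        delta = np.log(y[p0] / y[p1]) / n             # logarithmic decrement
        zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)
        Td = (t[p1] - t[p0]) / n                      # damped period
        wd = 2 * np.pi / Td                           # damped frequency
        wn = wd / np.sqrt(1 - zeta**2)                # natural frequency
        return wn, zeta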
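Step 2 amounts to a least-squares similarity fit between corresponding point sets at each frame; one closed-form solution is Umeyama's method, sketched below. The sketch assumes the human and marionette markers are already in correspondence, which is an assumption about the data, not a statement of the project's code.

    import numpy as np

    def fit_similarity(src, dst):
        """Least-squares scale s, rotation R, translation p such that
        dst[i] ~= s * R @ src[i] + p  (Umeyama, 1991).
        src, dst: (N, 3) corresponding marker positions for one frame."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_s, dst - mu_d
        cov = dst_c.T @ src_c / len(src)          # 3x3 cross-covariance
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0                        # guard against reflections
        R = U @ S @ Vt
        var_src = (src_c ** 2).sum() / len(src)   # variance of source points
        s = np.trace(np.diag(D) @ S) / var_src    # uniform scale
        p = mu_d - s * (R @ mu_s)
        return s, R, p

Applying this independently at each frame matches the per-frame parameter computation described in step 2.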
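The controller design in step 3 is not detailed here, but one common way to keep a reference trajectory from exciting an identified swing mode is zero-vibration (ZV) input shaping: the trajectory is convolved with two impulses spaced half a damped period apart. The following is a hedged sketch of that generic technique, not the project's actual controller.

    import numpy as np

    def zv_shape(ref, dt, wn, zeta):
        """Zero-vibration input shaper for a mode with natural
        frequency wn [rad/s] and damping ratio zeta.
        ref: (T,) sampled reference (e.g., one marker coordinate)
        dt:  sample period [s]"""
        K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
        A1, A2 = 1.0 / (1.0 + K), K / (1.0 + K)   # impulse amplitudes (sum to 1)
        Td = 2.0 * np.pi / (wn * np.sqrt(1.0 - zeta**2))
        lag = int(round(0.5 * Td / dt))           # delay of the second impulse
        if lag == 0:
            return ref.copy()
        delayed = np.concatenate([np.full(lag, ref[0]), ref[:-lag]])
        return A1 * ref + A2 * delayed

The shaped trajectory reaches the same endpoints as the original but trades a small delay for greatly reduced residual swing, which is the behavior step 3 asks of the controller.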
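Step 4 can be implemented numerically; a common choice is a damped least-squares (Levenberg-Marquardt) update on the marker Jacobian. In the sketch below, fk and jac stand in for the marionette's forward kinematics and its Jacobian, which the real system would supply; both are placeholders.

    import numpy as np

    def ik_step(q, target, fk, jac, damping=0.1):
        """One damped least-squares IK update.
        q:      (n,) current motor angles
        target: (3m,) stacked desired marker positions
        fk:     function q -> (3m,) stacked marker positions (placeholder)
        jac:    function q -> (3m, n) marker Jacobian (placeholder)"""
        e = target - fk(q)                        # task-space error
        J = jac(q)
        # dq = J^T (J J^T + damping^2 I)^{-1} e   (damped pseudoinverse)
        JJt = J @ J.T + damping**2 * np.eye(J.shape[0])
        dq = J.T @ np.linalg.solve(JJt, e)
        return q + dq

Iterating until the error is small and then quantizing the result (as in the servo-command sketch above) would yield the motor commands described in step 4.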


Experimental Results

We recorded four motion sequences (two stories, each performed by two different actors) and applied the control algorithm described above. The figure below shows snapshots from each stage: the captured motion, the measured marker data, the mapped marker data, and the marionette's motion. The marionette imitates the actors' motions well enough to distinguish the different styles of the same story.

[Figure: snapshots of the captured motion, measured marker data, mapped marker data, and the marionette's motion]

Publications


Videos, Data, Software


Project Team

Support



Katsu Yamane
Last Updated: February 26, 2003