Authored Animation

Sometimes, you want to get animation into your game that would be cumbersome to program by hand. This comes up in a variety of situations.

Whatever the application, there are two basic building blocks worth discussing separately: animating transforms and animating shapes.

Using our Scene class, these can be expressed as:

void update(float elapsed) {
	time += elapsed;

	//---- transform animation ----
	//Simple: set transforms in the scene:
	//(e.g., spin a [hypothetical] crank transform around the z axis:)
	crank_transform->rotation = glm::angleAxis(time, glm::vec3(0.0f, 0.0f, 1.0f));

	//---- shape animation ----
	//More complicated, but more flexible:

	//Option 1: upload new vertex data:
	//   NOTE: flag_drawable->pipeline.vao points to flag_buffer
	std::vector< Vertex > flag_verts = compute_flag_mesh(time); //(hypothetical helper)
	glBindBuffer(GL_ARRAY_BUFFER, flag_buffer);
	glBufferData(GL_ARRAY_BUFFER, flag_verts.size() * sizeof(Vertex), flag_verts.data(), GL_STREAM_DRAW);
	glBindBuffer(GL_ARRAY_BUFFER, 0);

	//Option 2: switch between different sets of static vertex data:
	//   NOTE: assuming all frames are in the same buffer, so the vao doesn't need to be switched
	Mesh const &current_frame = explosion_frames.lookup_frame(time);
	explosion_drawable->pipeline.start = current_frame.start;
	explosion_drawable->pipeline.count = current_frame.count;

	//Option 3: use a shader to animate fixed vertex data:
	Pose player_pose = player_skeleton.compute_pose(time);
	player_drawable->pipeline.set_uniforms = [player_pose]() {
		//set uniforms for pose
	};
}


Data Storage

Before we get into any specifics, it's worth taking a moment to talk about data storage for animation. The straightforward way to store an animation is to sample it at a bunch of frames when exporting, and to pick the closest frame when playing back the animation.
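
For instance, nearest-frame playback might be sketched like this (illustrative Python; the names are made up for this example):

```python
def lookup_frame(frames, frames_per_second, time):
	#pick the index of the sampled frame closest to 'time':
	index = round(time * frames_per_second)
	#clamp to the animation's valid range:
	index = max(0, min(len(frames) - 1, index))
	return frames[index]
```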

(TODO: picture)

This has two problems: jerky motion (when the playback frame-rate exceeds the sampling rate) and data inefficiency.

Jerky Motion. If you export the animation at too low a frame-rate, the motion will appear to jump between frames rather than move smoothly (though this might sometimes be desired for certain effects). FIX: interpolate between frames -- though this can still be a problem if the frame-rate is too low to capture high-frequency effects. FIX: use a high frame-rate.
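
A sketch of the interpolation fix (illustrative Python; this works for positions and scales -- rotations would want quaternion interpolation instead):

```python
def sample_interpolated(frames, frames_per_second, time):
	#fractional frame position:
	t = time * frames_per_second
	#earlier frame index, clamped so i0+1 stays in range:
	i0 = max(0, min(len(frames) - 2, int(t)))
	frac = min(max(t - i0, 0.0), 1.0)
	#linear blend between adjacent sampled values:
	return frames[i0] * (1.0 - frac) + frames[i0 + 1] * frac
```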

Data inefficiency. Sampling every frame means that a 120fps animation takes twice as much storage as a 60fps animation, even if both come from the same source file. (Indeed, we could continue increasing the frame rate until the exported animation is larger than the source file.) Isn't this weird? What's going on?

In most animation programs, motions aren't stored as sampled data at frames but, rather, as relatively expressive animation curves with a few artist-placed control points. So one could imagine exporting these curves directly (upside: small animations; downside: more runtime code complexity).
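
(For reference, Blender's F-Curves are built from cubic Bezier segments. Evaluating one segment at parameter t is small -- an illustrative sketch; note that real F-Curve evaluation also has to solve for t from the time axis:)

```python
def cubic_bezier(p0, p1, p2, p3, t):
	#standard Bernstein form of a cubic Bezier segment, t in [0,1]:
	u = 1.0 - t
	return (u*u*u * p0
		+ 3.0*u*u*t * p1
		+ 3.0*u*t*t * p2
		+ t*t*t * p3)
```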

(TODO: picture)

However, as we continue, I'm going to set this point aside and we'll be exporting and loading animations as samples-per-frame because it's simple and it's unlikely you'll be hitting any resource limits with your games. (This doesn't mean we won't talk about data efficiency later in a different context, though.)

Animating Transforms

Animating transforms is a great way to get simple motions into your game. You've already been animating transforms using code, so all that is really needed is a way to export some transforms from Blender and to hook them up to the scene graph in your game.

So, how do we get transforms out of Blender? Well, we already have scene export code; all we really need is a way to move the playhead in Blender:

bpy.context.scene.frame_set(frame, 0.0)

From this, we can build a script that moves the playhead and writes frames:

import bpy
import struct

frames_data = b''

def write_frame():
	global frames_data
	for obj in objs: #objs, min_frame, max_frame are set earlier in the script
		mat = obj.matrix_world
		#express transformation relative to parent:
		if obj.parent:
			world_to_parent = obj.parent.matrix_world.copy()
			world_to_parent.invert()
			mat = world_to_parent * mat
		trs = mat.decompose() #turn into (translation, rotation, scale)
		frames_data += struct.pack('3f', trs[0].x, trs[0].y, trs[0].z)
		frames_data += struct.pack('4f', trs[1].x, trs[1].y, trs[1].z, trs[1].w)
		frames_data += struct.pack('3f', trs[2].x, trs[2].y, trs[2].z)

for frame in range(min_frame, max_frame+1):
	bpy.context.scene.frame_set(frame, 0.0) #note: second param is sub-frame
	write_frame()

To load and use these animations in our game, we need code to load the file, and code to hook up the transforms to the hierarchy. This turns out to be pretty simple (see: TransformAnimation.hpp and TransformAnimation.cpp).
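
To make the file format concrete, here is an illustrative Python sketch of reading back the data written by the export script (each transform is 10 floats = 40 bytes; the function name is made up):

```python
import struct

def read_transforms(data, count):
	#unpack 'count' (translation, rotation, scale) tuples,
	#matching the 3f + 4f + 3f layout written by write_frame():
	transforms = []
	offset = 0
	for _ in range(count):
		translation = struct.unpack_from('3f', data, offset); offset += 12
		rotation = struct.unpack_from('4f', data, offset); offset += 16
		scale = struct.unpack_from('3f', data, offset); offset += 12
		transforms.append((translation, rotation, scale))
	return transforms
```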

There are some subtleties here (you may wish to look at BridgeMode.cpp).

Animating Shapes

There are some things, however, that we can't represent with transform animations, or that we can represent only inconveniently. The biggest class of these is animations that distort vertex positions in non-rigid ways.

Object Swap Animations

A venerable and still entirely reasonable technique for this sort of animation is what I'll call object-swap animation -- just make a different mesh per animation frame and swap them out in the scene. This is how classic 2D games have always worked (building a "sprite sheet" of different character poses and drawing the correct one). And it works pretty well in 3D, too, but it does have a certain "retro" or "lo-fi" aesthetic.
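
As a sketch of what the lookup_frame helper used earlier might look like (illustrative Python; FrameSet and its fields are made-up names):

```python
class FrameSet:
	def __init__(self, frames, frames_per_second):
		self.frames = frames  #list of (start, count) ranges in one shared vertex buffer
		self.frames_per_second = frames_per_second
	def lookup_frame(self, time):
		#select a frame by time, looping the animation:
		index = int(time * self.frames_per_second) % len(self.frames)
		return self.frames[index]
```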

Vertex Blend Animations

Unlike 2D images, 3D models are relatively easy to interpolate between. So this leads us to the next step in shape animation: vertex blend animation.

Just as we did back in scene graph animation, the idea is to smooth out the abrupt frame changes between different meshes by interpolating vertex positions. This is pretty straightforward to write in a shader:

#version 330
uniform mat4 object_to_clip;
uniform mat4x3 object_to_light;
uniform mat3 normal_to_light;
uniform float interp;
layout(location=0) in vec4 Position1;
in vec4 Position2;
in vec3 Normal;
in vec4 Color;
out vec3 position;
out vec3 normal;
out vec4 color;
void main() {
	vec4 Position = mix(Position1, Position2, interp);
	gl_Position = object_to_clip * Position;
	position = object_to_light * Position;
	normal = normal_to_light * Normal;
	color = Color;
}

Of course, you will also have to adjust your C++ code to carefully bind more attributes, but at least you won't need to store any more data than the previous method (why?).
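
(One way to see why: if the frames are already stored back-to-back in a single buffer -- as in the object-swap setup -- you can bind two adjacent frames as Position1 and Position2 just by byte offset. An illustrative sketch of that offset computation, with made-up names:)

```python
def blend_frame_offsets(time, frames_per_second, frame_count, frame_bytes):
	#fractional frame position:
	t = time * frames_per_second
	#earlier frame, clamped so the later frame stays in range:
	i0 = max(0, min(frame_count - 2, int(t)))
	interp = min(max(t - i0, 0.0), 1.0)
	#byte offsets at which to bind Position1 / Position2, plus the 'interp' uniform:
	return (i0 * frame_bytes, (i0 + 1) * frame_bytes, interp)
```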

How would we update this code to avoid weird lighting glitches?

Bone-Based Animation

(see: BoneAnimation.hpp and BoneAnimation.cpp)

Vertex blends still lose volume under rotation, and require a lot of data to be stored to avoid this (recall that we were worrying about 40 bytes / transform / frame -- now we're talking 16 bytes / vertex / frame, and meshes generally have far more vertices than scenes have transforms). So, for certain kinds of animation, we can do better.
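
To make the comparison concrete (with made-up but plausible numbers -- a 10,000-vertex character, a 30-bone skeleton, two seconds sampled at 60fps):

```python
vertices = 10_000
bones = 30
frames = 2 * 60  #two seconds at 60fps

#vertex blend: one vec4 position (16 bytes) per vertex per frame:
vertex_blend_bytes = vertices * 16 * frames  #about 19.2 MB
#skeleton: one 40-byte transform per bone per frame:
skeleton_bytes = bones * 40 * frames  #about 144 KB
```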

Observe that a lot of character animation can be thought of as blending between local rigid transforms. This is known as "bone-based animation" or "skinned animation". The idea is that we capture this notion of smoothly interpolating between rigid transforms by... um... smoothly interpolating between rigid transforms. We call this "linear blend skinning".
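
The blend itself is just a weighted sum of each bone's transform applied to the vertex. An illustrative 2D sketch (made-up names; real skinning uses 3D bone matrices on the GPU):

```python
import math

def apply_bone(bone, v):
	#apply a 2D rigid transform (angle, tx, ty) to point v = (x, y):
	angle, tx, ty = bone
	c, s = math.cos(angle), math.sin(angle)
	x, y = v
	return (c * x - s * y + tx, s * x + c * y + ty)

def skin_vertex(v, bones, weights):
	#linear blend skinning: weighted sum of each bone's transform of v:
	x = y = 0.0
	for bone, w in zip(bones, weights):
		bx, by = apply_bone(bone, v)
		x += w * bx
		y += w * by
	return (x, y)
```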

LBS asset pipeline:

  1. Build a mesh to animate in some static (bind) pose.
  2. Construct a hierarchy of transforms (skeleton) for the mesh you want to animate.
  3. Label each vertex of the mesh with a list of transforms that influence it, and the weights of those transforms.
  4. Animate the skeleton.
  5. Export the mesh, weights, and animations.

LBS runtime pipeline:

  1. Load the mesh, weights, animations.
  2. Send the mesh+weights to the GPU.
  3. Per-frame: compute transforms for each bone and send them to the GPU. The shader applies the transforms and blends the results.

Aside: relativity

When exporting animations, one finds oneself asking what to export the animations relative to. Exporting global transforms could be very awkward, e.g., with walk cycles. When exporting a hierarchy animation, things are a bit simpler (only really need to care about root motion). Some choices: first frame (seems logical), last frame (why?), frame deltas (drifts). Probably the right answer is different depending on the animation.
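
As an illustrative sketch of the "first frame" choice (1D root translation only; the name is made up):

```python
def relative_to_first_frame(track):
	#re-express a sampled root-translation track relative to its first frame,
	#so playback can start from wherever the object currently is:
	origin = track[0]
	return [p - origin for p in track]
```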