CSAIL Research Abstracts - 2005

Intuitive Control of Character Animation

Eugene Hsu & Jovan Popovic

Animated characters bring realism and visual appeal to many applications, such as feature films, educational visualizations, video games, and simulated environments. A compelling character animation is often unparalleled for illustrating concepts or eliciting emotions. Creating such content, however, is a challenge: expert animators often work for days to create precious minutes of movement.

To better understand these difficulties, consider a comparison with image editing. Like images, character motions have simple computational representations: images are numbers representing pixels, and motions are numbers representing frames. To modify an image, artists can easily obtain tools that provide basic controls such as brightness, contrast, and sharpening, or more complex ones that apply embossing or watercolor effects. These are all operations that artists intuitively understand.

An animator has no such luxuries and is relegated to controls that might specify the path of a hand or the joint angle of an elbow. Editing tools contain no magical buttons that can make a motion more energetic, sadder, or more sneaky. As a result, authoring such motion is an exercise in patience and precision for even the most skilled animator.

The objective of our research is to simplify this task. Rather than manipulating motion directly, our goal is to derive intuitive controls by example. For instance, to design an operation that converts a normal walk to a sneaky one, one might provide examples of the desired mapping. From those examples, a mathematical description of the normal-to-sneaky mapping can be learned and then applied to new motions.

Our recent work in style translation aims to do just that [1]. Here, the mathematical description is a linear time-invariant model, and the method used to find the mapping is system identification. Similar techniques are applied in a variety of engineering disciplines; for instance, such models have been used to describe how aircraft react to pilot input. In our work, we have shown how these models can be applied to make a walking character limp, or a weak fighter punch and kick more aggressively.
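To make the idea concrete, the sketch below fits a simple linear time-invariant (FIR) map between paired example motions by least squares and applies it to new input. This is only a toy illustration of system identification, not the model of [1]; the function names, the FIR simplification, and the NumPy-based pose representation are all assumptions made for the example.

```python
import numpy as np

def fit_lti_map(x, y, order=3):
    """Fit a linear time-invariant (FIR) map from input frames x to output
    frames y by least squares -- a toy stand-in for the system identification
    used in style translation. x and y are (T, D) arrays of T pose frames."""
    T, D = x.shape
    # Each regressor row stacks the current frame with (order - 1) past frames.
    X = np.array([np.concatenate([x[t - k] for k in range(order)])
                  for t in range(order - 1, T)])
    Y = y[order - 1:]                       # align targets with regressors
    coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coeffs                           # shape (order * D, D)

def apply_lti_map(x, coeffs, order):
    """Translate a new motion x using the learned coefficients."""
    T, _ = x.shape
    y = x.copy()                            # leave the first frames unchanged
    for t in range(order - 1, T):
        row = np.concatenate([x[t - k] for k in range(order)])
        y[t] = row @ coeffs
    return y
```

Given a normal walk and its sneaky counterpart as training pairs, `fit_lti_map` recovers the filter coefficients, and `apply_lti_map` translates previously unseen walks in the same style.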

Our style translation system changes the normal walk motion (top) into a sneaky crouch (bottom) [1].

Techniques that discover mathematical relationships can be extended to enable even more complex control tasks. Sometimes the appropriate motion is difficult for humans even to imagine, let alone animate. Consider the task of animating a virtual dance partner for an existing character. An animator with no knowledge of dance would not even know how the partner should respond. In previous work [2], we demonstrated how such an animation task could be performed by observing examples of how people dance. The mathematical techniques themselves are based on established work in speech recognition, which is quite reasonable, considering the importance of first recognizing what the existing character is doing. This framework generalizes to many situations in which one motion drives another. For instance, one might want to perform a motion with an input device such as a mouse. Given examples of how mouse input maps to the desired motion, such as a walk, our technique can carry out this task.
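The "one motion drives another" idea can be illustrated with a deliberately simplified sketch: store paired example frames of the two motions, then, for each new input frame, emit the output frame whose paired input is nearest. This frame-by-frame nearest-neighbor lookup is an assumption for illustration only; the actual method of [2] uses models from speech recognition that also account for temporal context, and the function names here are hypothetical.

```python
import numpy as np

def learn_pairing(inputs, outputs):
    """Store paired example frames: inputs[i] was captured together with outputs[i]."""
    return np.asarray(inputs, dtype=float), np.asarray(outputs, dtype=float)

def drive_motion(new_input, pairing):
    """For each frame of a new input motion, return the example output frame
    whose paired input frame is nearest -- a toy, frame-by-frame version of
    example-based control; no temporal smoothing is attempted."""
    example_in, example_out = pairing
    frames = []
    for frame in np.asarray(new_input, dtype=float):
        distances = np.linalg.norm(example_in - frame, axis=1)
        frames.append(example_out[np.argmin(distances)])
    return np.array(frames)
```

With recorded dancer/partner pairs as the examples, a new dancer motion passed to `drive_motion` yields a plausible partner response, frame by frame.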

The motion of a real dancer (orange) is used to control a synthetic dance partner (gray) [2].

Despite the progress in this area of research, there is still a large gap between computational representations of motion and how humans intuitively understand them. Thus, our long-term objective is to devise new techniques that make this medium more accessible for amateur animators.


[1] Eugene Hsu, Kari Pulli, and Jovan Popovic. Style translation for human motion. ACM Transactions on Graphics, 24(3), 2005. To appear.

[2] Eugene Hsu, Sommer Gentry, and Jovan Popovic. Example-based control of human motion. Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 69-77. 2004.
