Skinned Animation: Content Creation

Animation through skinned meshes is a key piece of technology that lets modern 3D video games avoid looking like the original Virtua Fighter arcade game. If you develop a 3D game today that features any sort of humanoid character, chances are you are dealing with skinned mesh animation. While most people do not need to know the exact workings of skinned meshes, it is a topic that is handy to have a basic understanding of. To explain mesh skinning, I am going to follow the pipeline from content creation to rendering in-game. This post covers the content creation side; a future post will cover rendering skinned characters in-game and their performance impact.

The first thing you need for a skinned mesh is a skeleton. You might think you need the character first, but when it comes to animating, you are taking the motion of the skeleton and applying it to a mesh. This means that, with the proper animation setup, animations can actually be interchangeable across different characters with similar body shapes. So what is a skeleton? It is a hierarchy of transformation nodes. The minimum you need to represent this data is a position, rotation, and scale per node. The next key piece of information is the hierarchy itself. Except for the root node, each node of the skeleton is parented to another node in the hierarchy: the foot bone is parented to the ankle bone, which is parented to the knee bone. The effect of this is that nodes inherit the transformation information of their parents. If you rotate the knee, then the ankle and foot will rotate relative to that position. Here is a diagram I’ve drawn up of this basic concept, with some common bones included and arrows to show the parenting. As you can imagine, the more bones you have in your skeleton, the more natural the result, and the more motion you can capture for your characters.
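The parenting idea above can be sketched in a few lines of code. This is a minimal illustration in Python rather than engine code (Unity stores this in `Transform` hierarchies), restricted to 2D position and rotation to keep it short; the bone names and layout are my own assumptions, not a real rig.

```python
import math

class Bone:
    """A node in a skeleton hierarchy (2D, rotation + translation only;
    a real engine also stores scale and works in 3D)."""
    def __init__(self, name, parent=None, local_pos=(0.0, 0.0), local_rot=0.0):
        self.name = name
        self.parent = parent
        self.local_pos = local_pos   # position relative to the parent bone
        self.local_rot = local_rot   # rotation (radians) relative to the parent

    def world_transform(self):
        """Walk up the parent chain, accumulating rotation and position."""
        if self.parent is None:
            return self.local_pos, self.local_rot
        (px, py), prot = self.parent.world_transform()
        c, s = math.cos(prot), math.sin(prot)
        lx, ly = self.local_pos
        # Rotate the local offset by the parent's world rotation, then translate.
        return (px + c * lx - s * ly, py + s * lx + c * ly), prot + self.local_rot

hip = Bone("hip")
knee = Bone("knee", parent=hip, local_pos=(0.0, -4.0))
ankle = Bone("ankle", parent=knee, local_pos=(0.0, -4.0))

# Rotating the knee moves the ankle with it, because the ankle
# inherits its parent's transform.
knee.local_rot = math.pi / 2
pos, _ = ankle.world_transform()   # the ankle swings out to (4, -4)
```

The key point the code makes concrete: a bone only ever stores its transform relative to its parent, and a world position is recovered by composing transforms up the chain.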

Alright, so now you have a skeleton built out of nodes parented to each other. How do you get an animation out of that? Animation curves. An animation curve defines the path a piece of data takes over time; within the context of skeletal animation, that data is the position, rotation, and scale of a node. An animation curve is defined by a set of keyframes, plus hints for what type of path the curve should follow between them. These hints are usually stored per keyframe, as left and right tangents. A keyframe is a point on the curve where the data is pinned to a specific value. In this simple animation curve of the Y value of a transformation, I have two keyframes defined: at zero seconds, the Y position is set to 0, and at half a second, the Y position is set to 12. Because this animation curve consists of only two points, it is linear, and moves the Y value upward at a constant rate.

Where does the curve come in? Technically that line is a curve, but you want to see a curve that actually curves, which happens when there are enough keyframes for the software to generate a curve instead of a line. In this example, I have three keyframes set. The first keyframe, at zero seconds, starts with a Y position of zero. At one sixth of a second, 0:10 on the timeline, there is a keyframe with Y set to 2.38. At 0:15 on the timeline, there is a third keyframe, set to the same 2.38 value as the second. Even though these two keyframes are set to the same value, the animation curve is not a straight line between them; the software generated a curve through those three points, which you see represented by the green line. Sometimes you might not be happy with the generated curve, or want finer control over it. This is where the hints I mentioned previously, the tangents, come in. In Unity’s animation system, and other similar animation systems, tangents tell the software that resolves the animation curve what rules to follow for the path between each keyframe.
In this case in Unity, the curves I have shown have their tangents set to free, letting the software automatically generate a smooth curve. In this example, I have set that same curve to linear for both the left and right tangents of the middle keyframe. The last tangent type in Unity and many other animation editors is the step, as seen here: a step tangent removes the curve entirely, holding the previous keyframe’s value and then transitioning abruptly to the current one.
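Keyframe evaluation can be sketched concisely. The snippet below is a simplified Python illustration, not Unity's implementation: real editors like Unity's evaluate cubic Hermite segments whose shape comes from the per-key tangents, while this sketch shows only the two simplest cases, linear and step.

```python
def evaluate_curve(keyframes, t, step=False):
    """Evaluate an animation curve at time t.

    keyframes: list of (time, value) pairs, sorted by time.
    step=True mimics a 'step' tangent: hold each key's value until the
    next key, producing an abrupt transition instead of a blend.
    """
    # Before the first key or after the last, clamp to the end values.
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the segment containing t and interpolate within it.
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            if step:
                return v0                   # hold the previous key's value
            u = (t - t0) / (t1 - t0)        # normalized position in segment
            return v0 + u * (v1 - v0)       # linear interpolation

# The two-key curve from the post: Y goes from 0 at 0.0s to 12 at 0.5s.
keys = [(0.0, 0.0), (0.5, 12.0)]
halfway = evaluate_curve(keys, 0.25)        # linear: halfway up, 6.0
held = evaluate_curve(keys, 0.25, step=True)  # step: still 0.0
```

Because the example curve has only two keys, the linear result rises at a constant rate, exactly as described above; the step variant holds 0 until the second key is reached.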

Our skeleton now has a series of curves moving the position, rotation, and scale of different bones. This is the point where a key pipeline concern arises: manipulating these curves. If you have ever used a basic tool for manipulating the position, rotation, and scale of a node, you can imagine that trying to animate a skeleton with it would be extremely difficult, and it would be very easy to create unnatural motion, such as pulling the ankle node too far away from the knee node and stretching out the character. This is where a technical artist who is an expert at building constraint, control, and rigging systems becomes extremely valuable. A rig is the container for the skeleton and the tools for manipulating it. A good rig has constraints built in to prevent you from doing things like bending a character’s elbow backwards or other unnatural movement. The controls are the tools provided to the animator for manipulating the transforms of the bones. While not a direct comparison, a metaphor that makes this easier to understand is the manipulator of a marionette puppet. The puppeteer does not move the hands and legs of the puppet directly; he uses the cross-shaped manipulator, which has strings attached to the points of articulation on the marionette. There are some keywords that help in understanding this process as well: inverse and forward kinematics. Take a look at the skeletal drawing I provided here. Forward kinematics is when you determine the motion of child nodes from the parent, using the controls on the rig to manipulate the bones. An example might be rotating the knee, which moves the ankle and foot bones relative to the knee. Inverse kinematics is when you define an endpoint, and the positions of the bones are computed back from there. An example of this would be wanting to place the character’s foot on top of a step.
With an inverse kinematics setup, you just use the control for the ankle bone to place it on top of the step, and the rigging system figures out where the knee and hip should be to get the ankle there. As you can imagine, the better the rigging system, the more productive your animation staff will be. An animation is generated by using the controls on the rig to pose the skeleton at keyframes; the animation tool then generates a curve between those keyframes for each piece of data that has one.
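The FK/IK distinction can be shown with the classic analytic two-bone solve (law of cosines). This is a minimal 2D Python sketch under my own simplifications, not what any particular rigging package does internally; production IK adds joint limits, pole vectors, and soft clamping.

```python
import math

def fk(l1, l2, hip, knee):
    """Forward kinematics: from joint angles to the ankle position.
    l1/l2 are thigh/shin lengths; knee is the bend added at the knee."""
    jx, jy = l1 * math.cos(hip), l1 * math.sin(hip)   # knee joint position
    return (jx + l2 * math.cos(hip + knee),
            jy + l2 * math.sin(hip + knee))

def two_bone_ik(l1, l2, target):
    """Inverse kinematics: from a desired ankle position back to angles."""
    tx, ty = target
    # Clamp unreachable targets to just inside the chain's full extension.
    d = min(math.hypot(tx, ty), l1 + l2 - 1e-9)
    # Law of cosines gives the interior angle at the knee.
    cos_knee = (l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle: direction to the target, minus the offset from the bent knee.
    cos_hip = (l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)
    hip = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_hip)))
    return hip, knee

# "Place the ankle on the step": solve IK for a target...
hip, knee = two_bone_ik(1.0, 1.0, (1.0, 1.0))
# ...then run FK with the resulting angles to confirm the ankle lands there.
end = fk(1.0, 1.0, hip, knee)
```

Running FK on the IK result is exactly the relationship described above: FK goes from joint angles outward to an end position, while IK inverts that to recover the angles from a desired end position.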

So now our skeleton has a completed animation. An animated skeleton is meaningless for a game on its own; the nodes are abstract data with no graphical representation, so there is nothing to see. For the next step, you need a character model to bind the skeleton to. Here I have drawn a character over the skeleton I shared earlier. Attaching the mesh of a character to a skeleton is called binding. You accomplish this by assigning a set of bind weights to every vertex on the character, telling it how much influence that vertex should take from each bone. In the graphic where I drew a character over the skeleton, I also applied a gradient to show the weighting process. Look at the right elbow and shoulder. The vertices in the red part of the gradient would be bound to the right elbow, and share motion with the right elbow. The vertices in the blue part would be bound to the right shoulder, and share motion with the right shoulder. The vertices in between, in the purple area, would be bound to both the elbow and the shoulder: the more red, the more influence they take from the elbow, and the more blue, the more influence from the shoulder. Assigning the vertices of the character mesh like this allows the mesh to move in sync with the skeleton, giving you skeletal animation of that character. When the knee rotates to perform a kick, moving the foot and ankle with it, the vertices on the character mesh move relative to the bones they are bound to.
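The blending in the "purple area" is just a weighted sum per vertex. Below is a deliberately stripped-down Python sketch of that idea: bones are reduced to plain translation offsets, whereas real skinning multiplies each vertex by full bone matrices premultiplied by the inverse bind pose. The specific numbers are invented for illustration.

```python
def skin_vertex(vertex, bone_offsets, weights):
    """Blend a vertex between bones by its bind weights.

    vertex:       (x, y) position in the bind pose.
    bone_offsets: how far each influencing bone has moved from bind pose.
    weights:      per-bone influence; should sum to 1.
    """
    x, y = vertex
    # Each bone pulls the vertex by its own motion, scaled by its weight.
    dx = sum(w * ox for (ox, oy), w in zip(bone_offsets, weights))
    dy = sum(w * oy for (ox, oy), w in zip(bone_offsets, weights))
    return x + dx, y + dy

# A vertex halfway between elbow and shoulder (the 'purple' area):
# the elbow has moved (2, 0) while the shoulder hasn't moved at all.
pos = skin_vertex((0.0, 1.0),
                  bone_offsets=[(2.0, 0.0), (0.0, 0.0)],  # elbow, shoulder
                  weights=[0.5, 0.5])
# With 50/50 weights, the vertex moves half as far as the elbow did.
```

A fully red vertex (weight 1.0 on the elbow) would move with the elbow exactly; a fully blue one would stay with the shoulder; everything in between blends smoothly.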

After all of those steps, you now have a skinned and rigged character, and you are ready to export the animations from your 3D modeling package into Unity. My next post will cover the process Unity uses to render this data, its runtime performance impact, and the tradeoffs you take on when you add more keyframes, bones, and other data to your skinned character.


Joseph Stankowicz is a software engineer who has worked in the video games industry for over eight years. The last two years have had a heavy focus on Unity development, where he helped ship over eleven titles to iOS and Android platforms. He is also really excited about 3D printing, and keeps his Solidoodle 3 printing out stuff as often as possible. You can view his LinkedIn profile here

Posted in Unity3D, Unity3D Performance