We are currently in the midst of creating our first scene. It’s all very exciting. The plan is to take the motion capture data from the session with India last week, incorporate environmental transitions, and effectively make the 30-second scene in which India comes to life and starts dancing for Manny.
Status Updates
Our pipeline has six phases: concept, modeling, UV mapping, texturing, rigging, and deployment into the scene.
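For the curious, the phases above can be pictured as a simple status tracker. This is a toy sketch in Python — the data structure and names are illustrative, not our actual production tooling:

```python
# Toy sketch of tracking an asset through our six pipeline phases.
# Phase names mirror the post; everything else is hypothetical.
PHASES = ["concept", "modeling", "uv mapping", "texture", "rig", "deploy"]

def advance(asset):
    """Move an asset to its next phase, if one remains."""
    i = PHASES.index(asset["phase"])
    if i + 1 < len(PHASES):
        asset["phase"] = PHASES[i + 1]
    return asset

manny = {"name": "Manny", "phase": "modeling"}
advance(manny)
print(manny["phase"])  # uv mapping
```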
Currently in the concept phase is the lighting design. Our environment modeler is setting up the source lighting over break, and the resulting lit images will heavily inform how that lighting is balanced.
Modeling is split into two parts. Our character modeler is currently adding humanoid features to some of the objects in our environment: models of Christ the Redeemer and Shiva. She is also creating markers that indicate which side of Manny is the front, so we’ll know which way he is looking. Our environment modeler is currently shifting the table on which the action occurs to account for unanticipated problems with the balance of the composition. From there, all objects will be UV mapped for texturing.
We are not yet formally texturing our objects; for now we’re applying flat block colors. After spring break, though, the 2D artists on the team will be looking into physically based rendering (PBR) texture pipelines. I will be writing more on that in my next post.
Our rigger is, at this moment, working with the mo-cap data. We have talked in parts about our motion capture pipeline, but to give you the full scope:
Motion Capture, the Method
The process starts with getting our dancers in the studio. We put them in unitards and attach nodes all over their bodies. Sometimes we have to go so far as to attach the nodes using band-aids, as they will go flying off using anything less.
The data is a bunch of moving balls in space, and it is up to Justin, the research associate responsible for managing the motion capture studio, to clean that up and turn it into meaningful motion. In the meantime, our tech artist has to completely reshape the humanoid mo-cap rig into something that fits the model we have. After all, Jean-Baptist doesn’t look anything like our Brazilian Doll, does he?
What ends up happening is that if the skeleton doesn’t fit the skin, the limbs cut straight through the body. It’s pretty jarring to see in motion. Our tech artist worked very hard to adjust the skeleton for the Indian doll, and is now doing the same for the Brazilian one. From there, we will integrate the animation into the newly fitted skeleton, and we’ll finally be able to see our little dolls dance.
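To give a feel for what “fitting” the skeleton means, here is a toy retargeting sketch in Python. Everything in it — the bone names, the lengths, the idea of reducing retargeting to per-bone scale factors — is a simplification for illustration, not our tech artist’s actual process:

```python
# Hypothetical skeletons: each bone name maps to its length (in cm).
mocap_skeleton = {"spine": 50.0, "upper_arm": 30.0, "thigh": 45.0}
doll_skeleton = {"spine": 20.0, "upper_arm": 10.0, "thigh": 15.0}

def retarget_scales(source, target):
    """Per-bone scale factors that squeeze the mo-cap proportions
    into the doll's. Joint rotations copy over unchanged; bone
    translations get multiplied by these factors."""
    return {bone: target[bone] / source[bone] for bone in source}

scales = retarget_scales(mocap_skeleton, doll_skeleton)
print(scales["spine"])  # 0.4
```

Without a step like this, the mo-cap limbs keep their human proportions and stab straight through the doll’s body — exactly the problem described above.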
Finally, when the dances look right, we “layer” the animation into parts and “bake” them onto the skeleton, which permanently attaches the motion to the skeleton in space. We then place it in the environment, and we have ourselves an animated scene.
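As a rough picture of what “baking” does: whatever layered curves drive a joint get evaluated at every frame and written out as one explicit key per frame, so the result no longer depends on the live layers. A toy sketch with made-up curve functions — not how our animation package does it internally:

```python
import math

def layered_rotation(frame):
    """Two hypothetical animation layers summed together:
    a base sway plus a small hand-keyed offset layer."""
    base = 30.0 * math.sin(frame / 12.0)
    offset = 5.0 if frame > 24 else 0.0
    return base + offset

def bake(curve, start, end):
    """Collapse the layers into one explicit key per frame."""
    return {f: curve(f) for f in range(start, end + 1)}

keys = bake(layered_rotation, 0, 48)
print(len(keys))  # 49 keys: one per frame, layers flattened away
```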