Friday 4 September 2015

Interview: Rigging the cast of The Pirates! Band of Misfits with Character TD Martin Orlowski


Rigging Dojo received more than a few requests via Twitter to interview Martin Orlowski. After seeing his work we got really excited and reached out to him. We are very thankful that Martin took the time for this great interview!

Martin is currently working as a Character TD / Animation TD at Sony Pictures Imageworks.
Be sure to check out Martin's website www.martintomoya.com to see more great examples of his work.

Let's get started with a bit of background about you: what led you to computer graphics and rigging as your specialty?
For as long as I can remember I've been interested in math, programming and animation. While studying mathematics at university I really enjoyed the challenge of solving a variety of abstract problems, but I soon realized I was missing that extra bit of visual experience. I decided to pursue a career in animation, which turned out to be a spot-on decision when I got into rigging. The character TD role has challenged both the artistic and the technical side of me ever since.
What is your approach to R&D at the start of a project (animator input, film reference, modeling adjustments, etc.)?
There are two things I want to look at first: the design and the actions of the characters. The first can be seen through concepts or models, the latter by watching a storyboard or a previz/cinematic if one is already in place. This gives me a solid understanding of the initial rig requirements and is a good starting point for a discussion with the animation directors about their rig performance expectations.
Can you talk about your relationship with the Animation team during this process? I think TDs can sometimes have trouble getting requirements or educating animators on how the rig works. Any advice on how to best get feedback during pre-production on the rigs? 
The animation team on the Pirates project was a mix of stop-motion and CG animators, some experienced in both. The variety of backgrounds meant a variety of requests for the rigs and the tools.

I first start with a list of rig features and discuss it with the animation supervisor. Once we agree on what should go into the initial rig, a first version is developed and handed to the supervisor for testing, while scripting of the auto-rigging tools starts in parallel. The base rig was built in a way that left us enough room for more advanced upgrades to be performed later in the production. If any updates were requested after the first tests, they would be taken care of before passing the rig over to the team.

Handing over the rig was usually done during a quick meeting where we could give a solid introduction to all the features and also get some fresh ideas or requests directly from the animators. Similar meetings were run when developing animation tools to keep them as animator-friendly as possible.
Buried Treasure: Details on the animation rigs and tools    

Were you involved from the outset of the production with rigging as well as the tools? Was the interface system engineered after rigs were set requiring you to work with established constraints? 
Being the lead character TD on the whole production, from previz until the last frame of CG animation, was a big responsibility, but it allowed me to keep the whole rigging/animation process very clean and consistent. We started writing all the character tools for previz already foreseeing what we would need for the VFX character/animation workflow. This meant that instead of writing similar tools twice, we kept expanding them through all the stages of the production.
The rig interfaces were created after the auto-rigging tools but they were all sharing the same python libraries, which made the interfaces compatible with the rigs throughout the process.
We constantly updated the facial system to meet shot requirements, but since every part of the rig was treated as a module, swapping the controls on the fly was as simple as hitting a button.
Talking about modular rigging, are you using anything like Maya Assets or other ways to keep connections between rig parts? 
I use Maya assets/containers in a lot of situations, like packing up scenes and animation, exporting/importing objects, and connecting rig modules and in-scene rig UIs. On this project the rigs were built hierarchically to keep the number of constraints and switching nodes to a minimum. This was done to save calculation time, as a lot of characters had to be used in some heavy crowd scenes. The rig, however, kept a history of all the modules like limbs, spines, etc., so the information was there whenever needed. I used my custom nodes (HUBs) to store the data. The HUB nodes were also added to connect the face UIs with the head rigs. Each HUB had all of its in and out connections exposed as attributes, so connecting or disconnecting those two was as simple as linking or unlinking two nodes.
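To illustrate the hub idea in the simplest possible terms, here is a minimal Maya Python sketch, assuming a plain network node with message attributes standing in for the custom HUB nodes; the function names and attribute prefixes are hypothetical, not the production API:

```python
import maya.cmds as cmds

def make_hub(name, outputs=(), inputs=()):
    """Create a network node whose attributes expose a module's ins and outs."""
    hub = cmds.createNode('network', name=name + '_HUB')
    for attr in outputs:
        cmds.addAttr(hub, longName='out_' + attr, attributeType='message')
    for attr in inputs:
        cmds.addAttr(hub, longName='in_' + attr, attributeType='message')
    return hub

def link_hubs(src_hub, dst_hub):
    """Connect every out_* attribute on src_hub to the matching in_* on dst_hub."""
    for out_attr in cmds.listAttr(src_hub, userDefined=True) or []:
        if not out_attr.startswith('out_'):
            continue
        in_attr = 'in_' + out_attr[len('out_'):]
        if cmds.attributeQuery(in_attr, node=dst_hub, exists=True):
            cmds.connectAttr('%s.%s' % (src_hub, out_attr),
                             '%s.%s' % (dst_hub, in_attr), force=True)

def unlink_hubs(src_hub, dst_hub):
    """Break every connection coming into dst_hub from src_hub."""
    for in_attr in cmds.listAttr(dst_hub, userDefined=True) or []:
        if not in_attr.startswith('in_'):
            continue
        plug = '%s.%s' % (dst_hub, in_attr)
        sources = cmds.listConnections(plug, source=True, destination=False,
                                       plugs=True) or []
        for src in sources:
            if src.split('.')[0] == src_hub:
                cmds.disconnectAttr(src, plug)
```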
What kind of turnaround times were expected for a start to finish animation ready asset?
With all the tools in place, a single TD was able to deliver a full CG puppet within a day. On top of the 70+ different VFX characters, we also created multiple generic CG rigs that allowed the directors to create hundreds of additional, completely unique pirates and scientists directly in the crowd scenes. Some special assets, like the Pirate Captain's ship, were constantly being updated throughout the production. The rigging process started during previz, where we had to make the CG asset reflect the limitations of the physical ship on the floor. That meant rigging a replica of the gimbal the boat was sitting on. This stage was crucial, as the animation curves created for the previz scenes were transferred to the gimbal moves driving the real boat.
Across the ship assets there were over 3000 ropes and around 200 sail shapes. By splitting the boats into low-res chunks with a few switchable proxies, we managed to keep them performing well. Scenes with the full pirate crew on board were still animation friendly.
Were you using the Maya reference system as is to deal with the proxies or did you have to create or add onto the system for production?
Aardman's pipeline has its own layer over Maya referencing. This makes the workflow more artist friendly and helps prevent many issues. Even though the referencing system in Maya has its flaws, it has always been my preference over importing rig assets into the animation scenes. By keeping the rigs tidy and remembering a few things, such as creating character sets directly in the animation scene rather than in the rig source files, most of the typical issues can be avoided.
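As a rough sketch of that last point, this is what referencing a rig and building the character set in the animation scene might look like, assuming a hypothetical rig path and a '_CTL' naming convention for controls (not Aardman's pipeline code):

```python
import maya.cmds as cmds

def reference_rig(rig_path, namespace):
    """Reference the published rig rather than importing it."""
    return cmds.file(rig_path, reference=True, namespace=namespace)

def build_character_set(namespace):
    """Create the character set here, in the animation scene,
    instead of baking it into the rig source file."""
    controls = cmds.ls(namespace + ':*_CTL', type='transform') or []
    if not controls:
        return None
    return cmds.character(controls, name=namespace + '_charSet')

# Hypothetical usage:
# reference_rig('/jobs/pirates/rigs/pirateCaptain_rig.ma', 'pirateCaptain')
# build_character_set('pirateCaptain')
```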
How interchangeable are the rigs for the transfer of Animation Data? How did you handle and account for proportional differences between the characters when reusing or transferring animation?
With the help of a special 'Poser' rig built for the modelers, the characters were modeled in a 'relaxed T-pose'. This simple tool allowed us to have the CG puppets in a deformation-friendly neutral position without the problems of non-matching poses in the feet, shoulders and arms. Additionally, a special reference list was created for the animators displaying the characters' skeleton family tree, as the puppets often shared the armature, or some parts of it, between them. Rig controls were pre-grouped on all rigs, and it was also easy to create exclusion sets on controls, or just on a bunch of attributes, at both rig and animation scene level.
The Quality GuAard is a feature that immediately stood out in your reel, can you give an example of the kind of tests you are running there, and the auto fixes?
The Quality GuAard was one of the very first tools implemented in the character workflow. It was a crucial decision that ensured we started with the best possible models. The QG was built as a versatile application that made creating a new custom set of checks as simple as writing a quick ASCII config file and dropping it into a folder. It supports running checks on a specific project only, as well as on a specific type of data. With additional checks popping up every day, its use was extended to rigs, animation and renderable scenes. The checks cover testing and fixing anything from clean topology, through strict naming conventions, to the tidiness of the published scene that would be shared with another department or user.
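A minimal sketch of what such a config-driven checker could look like, purely for illustration; the folder path, config keys and example check are assumptions, not the actual Quality GuAard implementation:

```python
import os

CHECK_DIR = '/pipeline/quality_checks'   # assumed location of the config files

def load_checks(check_dir=CHECK_DIR):
    """Parse simple 'key = value' ASCII config files into check descriptions."""
    checks = []
    for name in sorted(os.listdir(check_dir)):
        if not name.endswith('.cfg'):
            continue
        cfg = {}
        with open(os.path.join(check_dir, name)) as f:
            for line in f:
                if '=' in line:
                    key, value = line.split('=', 1)
                    cfg[key.strip()] = value.strip()
        checks.append(cfg)
    return checks

def run_checks(project, data_type, scene_nodes):
    """Run every check whose filters match the current project and data type."""
    report = []
    for cfg in load_checks():
        if cfg.get('project', project) != project:
            continue
        if cfg.get('data_type', data_type) != data_type:
            continue
        func = globals().get(cfg.get('check', ''))   # e.g. check_naming below
        passed = func(scene_nodes) if func else False
        report.append((cfg.get('check'), passed))
    return report

def check_naming(scene_nodes):
    """Example check: every node follows a <side>_<name>_<type> convention."""
    return all(len(node.split('_')) == 3 for node in scene_nodes)
```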
Was this a post process that would get run at a single point each day, or was it something that was "live" each time the animators opened and saved a file, for example?
The QG was used as a daily tool by the artists, but it was also embedded into the scene publishing workflow. Modelers could check their models while working on them to detect problems in their work-in-progress meshes, and animators could use the tool to find and possibly fix issues in their scenes before they had to call for TD support.
Did you find any patterns or certain things that kept getting broken during production?
One of the biggest challenges for all the departments was keeping the scenes tidy and consistent with all the naming conventions on huge assets like the boats. Having the checks run before each publish made working with those assets a pleasure and kept us out of a lot of trouble down the production line.
Looking back do you feel that you over engineered any aspect of it, or the flip side to that coin is there anything you feel like you might want to re-visit and beef up in the future?
As far as I remember, every single button was used at some point during the production, so I think we benefited from all of it. As with every tool I write, as soon as it is released there are tons of fresh ideas buzzing in my head, so at this point I'm really looking forward to writing a new set. And although, luckily, I only had good feedback on the friendliness of the UIs, I'm quite confident the next ones will be done even better.
How many scripts make up the system? What would you say is the balance of languages i.e. X percentage Python vs. X percentage Mel or pyMel?
There are around 200 Python files making up the whole system, each with several function/object definitions, topped off with a little under 100 custom icons. Because of the heavy focus on the look of the UI, I didn't use any MEL on this project. It's all written in Python, mainly PyMEL and PyQt, but wherever PyMEL objects would affect the speed of the tools, maya.cmds was used instead.
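The PyMEL vs. maya.cmds trade-off can be illustrated with a small, hypothetical example: both functions zero out every control's translation, but in a loop over thousands of objects the plain string-based cmds call is noticeably cheaper than wrapping each node in a PyNode (the '*_CTL' pattern is assumed for illustration):

```python
import maya.cmds as cmds
import pymel.core as pm

def zero_translates_pymel(pattern='*_CTL'):
    # Readable, but each PyNode wraps a full API object behind the scenes.
    for ctl in pm.ls(pattern, type='transform'):
        ctl.translate.set((0, 0, 0))

def zero_translates_cmds(pattern='*_CTL'):
    # Same result with lightweight string handles; preferred in hot loops.
    for ctl in cmds.ls(pattern, type='transform') or []:
        cmds.setAttr(ctl + '.translate', 0, 0, 0, type='double3')
```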
There is a current discussion about pose tools and whether they should be stored in a human-readable format vs. locked down in a binary format. I would love to get your take on this and compare it to how you created your amazing-looking pose and animation export tools.
If the speed of the tool is significantly affected by the format and there's enough time to write a really solid set of query/edit tools for it, then I'd opt for binary. In any other circumstances, human readable is the friendlier approach. You benefit from the fastest implementation, and if a quick fix or debug is needed it can be handled by a not-so-technical person. If we're talking about a big production pipeline and a long project this is probably not a winning argument, but it's certainly something to think about for smaller-scale productions. My approach for the animData was to stick with Maya's built-in animCurves saved in Maya ASCII format along with additional metadata. This was driven by the idea of being able to reference the animation and not only import it. Referenced animation was kept as a direct Maya reference. The downside was that having to open a Maya file every time the tool needed more information about the animation would have become a big performance bottleneck. I overcame this by writing a Python module that reads .ma files directly and extracts the information needed instantly. There was never a need to open animData .ma files in a background Maya session.
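A minimal sketch of the kind of direct .ma parsing described above, assuming a hypothetical file path; it only scans for 'createNode animCurve*' lines, which is enough to list which curves an animData file contains without launching Maya:

```python
import re

# Maya ASCII files declare curves as: createNode animCurveTL -n "curveName";
CREATE_NODE = re.compile(r'createNode\s+(animCurve\w+)\s+-n\s+"([^"]+)"')

def list_anim_curves(ma_path):
    """Return [(curveType, curveName), ...] found in a Maya ASCII file."""
    curves = []
    with open(ma_path) as f:
        for line in f:
            match = CREATE_NODE.search(line)
            if match:
                curves.append((match.group(1), match.group(2)))
    return curves

# Hypothetical usage:
# list_anim_curves('/jobs/pirates/animData/scarf_walk.ma')
```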
I am very curious about the Pirate Puppet Sphereton process, and what your experience was like having to mix between CG and stop-motion puppets?
Having another rigging department just next door to us was a great experience. We were really lucky to work alongside some of the oldest riggers in the industry! And they didn't even have computers :) The idea of the Sphereton (some say it sounds like a character from the Transformers movies) was to provide the ability to build something in Maya that would be as close to the puppet armature as possible. This way we could look at the puppets' beautifully hand-drawn blueprints and quickly recreate something that matched all the pivots and measurements. Once that was in place, creating the rest of the body rig was as simple as pushing the right buttons: the Sphereton was converted into a control skeleton and we could work on the final skinning. The Sphereton pose was saved along with the CG puppet to allow for a quick rewind/recreation of the rig. Additionally, if another puppet shared some parts of the skeleton, we had that information in place.
Were there any surprises or unanticipated challenges that pushed you and the team on this film?
Animators on the floor had thousands of mouth pieces available for their stop-motion puppets. Because we were making CG doubles for 90% of them, plus dozens of additional CG-only puppets, we didn't want to end up animating heavy characters in several crowd scenes, nor did we want to be limited in the facial expressions. The trick was to pick as few essential mouth shapes as possible and provide additional controls on top, while at the same time staying very true to what the characters on the floor were able to perform.
We wrote a Transf'yer Face tool that allowed us to grab any hi-res mouth that was 3D printed at the Rapid Prototyping department and quickly map it onto our heads, which had a different topology, producing a new blendShape.
After several discussions we picked the 8 most crucial shapes (11 for previz) to be created by default, and we were able to quickly provide any additional shape whenever a shot needed it. This saved us loads of time and kept the scenes light enough to animate in.
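One common way to map a shape across differing topologies is a wrap deformer plus a blendShape on the scan, baked back out as a target; the sketch below shows that generic approach, not necessarily how the Transf'yer Face tool worked internally, and the object names are hypothetical:

```python
import maya.cmds as cmds

def transfer_shape(scanned_mouth, neutral_scan, rig_head, target_name):
    """Carry the difference between a scanned mouth and its neutral scan
    onto the rig head's topology, returning a new blendShape target mesh."""
    # Drive a copy of the rig head with the neutral scan via a wrap deformer.
    driven = cmds.duplicate(rig_head, name=target_name)[0]
    cmds.select(driven, neutral_scan)
    cmds.CreateWrap()   # standard Maya wrap deformer (deformed first, influence last)
    # Morph the neutral scan into the scanned mouth; the wrap carries the
    # deformation over to the rig-topology copy.
    bs = cmds.blendShape(scanned_mouth, neutral_scan)[0]
    cmds.setAttr(bs + '.' + scanned_mouth, 1.0)
    # Bake the result into a clean mesh that can be used as a blendShape
    # target on the actual rig head.
    target = cmds.duplicate(driven, name=target_name + '_bsTarget')[0]
    cmds.delete(driven)
    return target
```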

Did the shots get cached using Maya/Alembic or some other caching system when animation published for the rest of the pipeline? Were fix ups or shot sculpts done on the caches to fix deformation problems or were they addressed in the rig?
I took on the challenge of writing a new caching workflow. It was written in Python around a studio mod of the open-source GTO plug-in (from Tweak Software). We modified the plug-in to our needs, and the workflow covered the whole animation process, from the creation of the model, through rigging, up to the final render scenes, where the caches were referenced in and automatically attached to the geometry, notifying the users whenever they went out of date. For convenience a lot of data was saved along with the caches, so one could always read the full history of the scene it came from and of the cached assets.

The biggest challenge we had with the caches was handling multiple low-res boats in one scene and getting the caches saved for the hi-res renderable geometries. This meant dealing with a few million polygons and thousands of objects that had to be constantly updated as the model changed throughout the production. After a few discussions and a couple of extra checks added to the Quality GuAard, we were happily sailing our way through the oceans of caches. Crowd scenes with more than 120 characters were also cached in one go using smart loading/unloading algorithms. With fix-ups we always tried to go back to the roots of the deformation problems; only a handful of scenes had them done directly on the caches.
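The "caches know where they came from" idea can be sketched with a simple sidecar metadata file and an out-of-date check; the file layout and key names below are assumptions for illustration, not the GTO-based production format:

```python
import json
import os
import time

def publish_cache_metadata(cache_path, source_scene, asset_versions):
    """Record the scene and asset versions a cache was generated from."""
    meta = {
        'source_scene': source_scene,
        'asset_versions': asset_versions,   # e.g. {'pirateCaptain': 'v012'}
        'published': time.strftime('%Y-%m-%d %H:%M:%S'),
    }
    with open(cache_path + '.meta.json', 'w') as f:
        json.dump(meta, f, indent=2)

def cache_is_out_of_date(cache_path, current_versions):
    """Return True if any cached asset has a newer published version."""
    meta_path = cache_path + '.meta.json'
    if not os.path.exists(meta_path):
        return True
    with open(meta_path) as f:
        meta = json.load(f)
    cached = meta.get('asset_versions', {})
    return any(cached.get(asset) != version
               for asset, version in current_versions.items())
```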
Wow that is great! I would love to hear more about the smart loading algorithms.
Smart loading and unloading was my concept for detecting all the animated assets in the scene and deciding what absolutely has to be left in the scene for the cached object when creating caches, i.e. props, environments and other puppets that the rig was affected by. Additionally, it distinguishes different parts of the rig, defining what is deformed and what is just simple transform animation. From a not-so-tricky idea it grew into quite a complex challenge, but once implemented, cache publishing was 5-10x faster.
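A hedged sketch of the smart load/unload concept: keep only the references that actually feed into the character being cached, and split its geometry into deformed versus transform-only sets. This is an illustration of the idea, not the production algorithm:

```python
import maya.cmds as cmds

def references_feeding(character_meshes):
    """Reference files whose nodes appear in the character's input history."""
    needed = set()
    for node in cmds.listHistory(character_meshes, allConnections=True) or []:
        if cmds.referenceQuery(node, isNodeReferenced=True):
            needed.add(cmds.referenceQuery(node, filename=True))
    return needed

def unload_unneeded(character_meshes):
    """Unload every referenced file the character does not depend on."""
    needed = references_feeding(character_meshes)
    for ref_file in cmds.file(query=True, reference=True) or []:
        if ref_file not in needed:
            ref_node = cmds.referenceQuery(ref_file, referenceNode=True)
            cmds.file(unloadReference=ref_node)

def split_deformed(meshes):
    """Separate meshes with deformer history from purely transformed ones."""
    deformed, transform_only = [], []
    for mesh in meshes:
        history = cmds.listHistory(mesh, pruneDagObjects=True) or []
        has_deformer = bool(cmds.ls(history, type='geometryFilter'))
        (deformed if has_deformer else transform_only).append(mesh)
    return deformed, transform_only
```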
How did you keep from constantly talking like pirates and what were the naming requirements for the tools with pirate sounding names? :)
Tharrr was so much pirate feeling around Aardman that I didn't notice until now that I had named the tools that way 😉
Deformations are still a challenging area with many ways to approach them. I would love to hear your thoughts on this. Pose-space corrections, shot sculpting for post fixes, good skinning to start with… what is your preferred workflow for dealing with character skinning?
I prefer to start with simple but solid skinning on a character so it can be released for animator testing and feedback as quickly as possible. That way I end up seeing the variety of scenarios the character will be thrown into. This is the time when a second layer of deformations can be applied: pose-space corrections, additional bones or deformers. I keep post fixes as the absolute last resort, for when there is no time for a rig revision or the shot/pose is so unique that there is no point cluttering up the rig with additional nodes.
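As a tiny example of that layering, a corrective blendShape can be added in front of the existing skinCluster and driven by a joint angle with a set-driven key; a very simple stand-in for a real pose-space correction setup, with hypothetical node names:

```python
import maya.cmds as cmds

def add_elbow_corrective(mesh, corrective_target, elbow_joint):
    """Layer a corrective shape on top of the existing skinning."""
    # Put the blendShape in front of the chain so it evaluates before the skin.
    bs = cmds.blendShape(corrective_target, mesh, frontOfChain=True)[0]
    weight_attr = bs + '.' + corrective_target
    driver = elbow_joint + '.rotateY'
    # No correction at rest, full correction when the elbow is bent to -90.
    cmds.setDrivenKeyframe(weight_attr, currentDriver=driver,
                           driverValue=0.0, value=0.0)
    cmds.setDrivenKeyframe(weight_attr, currentDriver=driver,
                           driverValue=-90.0, value=1.0)
    return bs
```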
Do you use any publicly available scripts or tools when rigging?
I use whatever is available in the studio pipeline along with my own libraries. If there is anything publicly available that has proven to be useful and there is no need to rewrite it, I make sure I dedicate plenty of time to testing it properly before relying on it.
Anything you wish would get fixed or changed in the software to make your life easier?
With the whole of Maya now running in Qt and its API exposed through Python, I would say it's reasonably easy to overcome most of the daily problems.
Do you see any major advancement in rigging for cg characters in the near future?
The breakthrough step for me will be when we finally 'catch up' with the stop-motion rigging department. We need to be able to create one single puppet for the whole production; I mean not having to create multiple versions of a character, with one set of geometry and shaders for rendering and a different one for animation. Being able to see the very final shape and look while animating is the way to go. They've been able to do that for ages with puppets… without computers!
Also, did you hit a problem where the CG model looked maybe too clean compared to the puppet? How close to the feel of the puppet deformations were you trying to get with the CG rigs?
We were aiming for a convincing full-frame CG character double. After we created our first CG puppet, for Scarf, I animated it and the final render was presented to the puppet animation team. All we heard back was that it caused a lot of confusion there, as they didn't quite remember animating him that way. This was the perfect benchmark, telling us that our CG puppets were difficult to distinguish from the physical ones. Fantastic teamwork between the modeling, rigging, look-dev and lighting departments meant that we were not really a "Band of Misfits".
What are some things that real armature rigs get for free that are a struggle for the CG version? Any surprises there?
One of the biggest challenges was that, with all the tools available for the CG rigs (squash and stretch, inverse kinematics, space switching, constraints, etc.), we could easily drift away from the armature performance. It was a tough task for both the rigging and the animation team not to let the CG animation show that we had a fancier toolset available. On the other hand, we were constantly surprised by our colleagues and how amazingly well a rigid steel armature could be animated.
What training or continued education resources have you found to be the most helpful to you growing as a character TD?
Every extra workshop I take adds a new perspective to what I do and I would be happy to spend a whole day listening to someone else’s approach, even if all I got from it was one single tip or inspiration.
Any last tips or advice you can give to someone that wants to improve their skills as a TD?
Practice, learn from the best in the industry, do plenty of brainstorming before you start a job, try and test new ideas, challenge yourself and eat ham! (You will need to watch the Pirates movie to see why the ham is so crucial)
What do you see missing in current TD skills or reels?
I see a lot of character TD reels that simply can't stand out. The same rigs and solutions seen everywhere, even though learnt from the best and perfectly correct, don't show one of the most crucial attributes of a TD: the creative mind. Working in the industry challenges your creativity on a daily basis, so show on your reel that you can think outside the box.
Thank you so much for taking time to talk with us and share your views and experiences. We are huge fans of Aardman and Sony Pictures Animation and your work there is really refreshing and inspiring.