Long before motion capture (mocap) found its way into video games, life sciences researchers were using it in the 1970s to photogrammetrically track and analyse gait. Fast forward some 40 years, and the technology landscape is changing in countless ways.
Graham Qually, a motion capture artist in Vancouver, is one of those on the frontier of these advances. His time at Rainmaker Studios and Ubisoft has made him a big player in the motion capture realm. Now he has his own studio, Mimic Performance Capture, where he and his team lend a hand to some of the biggest names in the gaming world.
Industry evolution
Having witnessed the big changes in mocap, he observes, “In the early 2000s, we were using a piece of software called Vicon IQ; there’s very little information about it out there. But if you search the later Vicon software, you’d get hundreds of results. Not only is the technology growing, but the community is, too. The realm is a lot more accessible and you can buy affordable mocap suits for about $1,800, but when I started out, these suits cost $80,000.”
A mocap suit typically comprises breathable, four-way-stretch, antimicrobial polypropylene material with Velcro-friendly surfaces for custom marker sets and skeletons. The markers are what deliver accurate, reliable data in any movement analysis application, capturing every movement in fine detail. Cleaning, mapping and editing that data follows, turning the raw captures into a coherent recording.
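To give a rough idea of what that clean-up involves, here is a minimal, purely illustrative Python sketch (not Qually’s pipeline or any vendor’s actual tooling; the function name and data layout are assumptions) of one common step: filling short occlusion gaps in a single marker’s trajectory by interpolating from the frames where the marker was visible.

```python
import numpy as np

def fill_marker_gaps(positions: np.ndarray) -> np.ndarray:
    """Fill short occlusion gaps in one marker's trajectory.

    positions: (frames, 3) array of x/y/z coordinates; frames where the
    marker was occluded are NaN. Returns a copy with the gaps interpolated
    from the frames where the marker was visible.
    """
    cleaned = positions.copy()
    frame_index = np.arange(len(cleaned))
    for axis in range(3):
        track = cleaned[:, axis]
        missing = np.isnan(track)
        if missing.any() and not missing.all():
            # Linearly interpolate the missing samples from the visible ones.
            track[missing] = np.interp(frame_index[missing],
                                       frame_index[~missing],
                                       track[~missing])
    return cleaned

# A five-frame track where the marker dropped out on frames 2 and 3.
raw = np.array([
    [0.0, 0.0, 1.0],
    [0.1, 0.0, 1.0],
    [np.nan, np.nan, np.nan],
    [np.nan, np.nan, np.nan],
    [0.4, 0.0, 1.1],
])
print(fill_marker_gaps(raw))
```

Commercial mocap software performs this kind of gap-filling, along with filtering and marker labelling, across hundreds of markers at once.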
But it’s not just the suits that have evolved; Qually notes just how advanced capture methods have become, whether they rely on infrared cameras or inertial suits. The advent of markerless mocap, which works with regular cameras, marks a huge leap forward in the tech, given that he has used infrared systems for most of his career.
“From my experience, Rainmaker was a mocap facility, so we’d put people in a suit and capture their body: no fingers, no face, no audio,” Qually recalls. “Then there’s Ubisoft, which is a performance capture studio; we had the space sound-proofed, we had mic booms and everything was timecode-synced. Even back in 2012, you were combining pieces that weren’t built to work together. For example, the company that created the timecode generator we were using didn’t understand what we were doing.”
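“Timecode-synced” here means that every device in the room (cameras, audio, the mocap system) stamps its recordings against a shared clock, so takes can be lined up later. As a purely illustrative sketch, and not the studio’s actual tooling, converting an SMPTE-style timecode string into milliseconds looks something like this:

```python
def timecode_to_ms(timecode: str, fps: int = 60) -> float:
    """Convert a non-drop-frame "HH:MM:SS:FF" timecode to milliseconds."""
    hours, minutes, seconds, frames = (int(part) for part in timecode.split(":"))
    total_seconds = hours * 3600 + minutes * 60 + seconds + frames / fps
    return total_seconds * 1000.0

# Two takes stamped against the same clock can now be lined up numerically.
print(timecode_to_ms("01:02:03:30"))  # 3723500.0 ms at 60 fps
```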
But now, Qually and his mocap colleagues are seeing a huge shift in product innovation, especially in the conversation with product developers, as the industry, and the support around it, develops.
“It’s a two-way street now, which I love. We stay in touch with Unreal and Vicon, and we inform them of what we’re doing and what we require, and they support that. For example, we use IKinema as a solver, and we were using a different rig the other day. They organised a call with us at 8 am the next day and walked us through it.”
Real-time rendering has also seen major advancements, and not just in terms of speed and quality. “We used to render stuff out of Autodesk’s MotionBuilder and get a character and nothing else: a grey background, no lighting,” Qually says. “Now, we can do straight streams, where actors see for themselves exactly what their characters look like as they’re acting, whether the body has fur or scales, and how their movements read with different body proportions.”
Down to the character
While using all the right tools is key, working with characters and narrative plotlines naturally calls for intricate captures of expressions and emotions. Qually explains, “A game like L.A. Noire is built around reading someone’s face, so you can tell if they’re lying or not. There are degrees of how much it actually worked. And when I played The Last of Us, which is very emotional and engaging, there were times I just had to put the controller down and take in the drama. So much of that came from the intricacies and interactions of performance capture, and, of course, writing and directing.”
Qually, who worked on mocap for 2017’s horror flick It, was presented with a whole new playing field for his skill set. “We had nine kids in, which was new for us,” he describes. “For one scene, we took two contortionists, suspended them from a stunt beam and folded them together. We sewed one girl’s upper torso into the other girl’s legs, and then we did the same with the other torso, so that they were running together. We wanted Pennywise to be very off-putting but still very human when you saw it.”
About the physicality
Motion and performance capture involve heavy, minute calculations. Qually says a lot of his job involves scheduling down to the millisecond, going as far as understanding how long it takes to execute commands in a single program.
- For movement-heavy games like Assassin’s Creed, the bigger the space, the more sound-proofing is needed, but there’s an upper limit to it.
- Qually and his team typically work in a 40 x 40 x 20 foot volume, so placing a stuntman on a 10-foot platform and capturing the action completely is a no-brainer.
With Far Cry 5 set to release this year, he recalls using motion fields during his time working on the game. For Far Cry 4 and earlier, the mocap artists had a total blast employing ragdoll physics for explosions. “We spent a week in the volume, just having stunt guys being pulled on rigs and thrown onto mats in every way conceivable,” he laughs. “I felt so bad for them. Every time a gaming company like this moved on to a new production, they took incremental steps to make it better.”
Naturally, there are days when a software crash or a couple of difficult actors disrupt an entire schedule; just a few glitches in the system of working in motion capture. But Qually, who now sits comfortably in his own studio, explains that there are plenty more mocap advancements to come, and he’s ready to learn them all.