It was the summer of 2000. We had survived Y2K, and I was busy working on the David Fincher film Panic Room. We were attempting to previs the entire film before principal photography began. It was slow, painstaking work. Next to my desk sat the editor charged with cutting the previs into a thrilling game of cat and mouse. Farther down the hall were the director’s office and a small screening room.
It was an excellent question. And now, more than 20 years later, it’s the perfect time to answer it.
What Fincher recognized that evening was the power that technology brings to the visualization process. Game engines, like the one he used, are designed to work in real time, which is much faster than rendering an animation frame by frame and then playing it back. Real-time interaction feels more natural and intuitive.
The problem was that in the early 2000s, there was nothing intuitive or natural about using game engines for anything other than game development.
People were talking about using game engines for previs, and early experiments were done with emerging tools like XSI Viewer. Still, the workflows were too complex for the fast-paced world of feature-film previs. Developers needed to write code and test features. Then artists had to be trained in new animation techniques. All of this might have been achievable with months to design a pipeline, but our projects were measured in weeks, with lead times of days.
Fast forward to today and you have a very different technology landscape. Motion capture technology has evolved and matured. Game engines like Unity and Unreal are common in industries ranging from architecture to automotive design to biomedical research. And graphics cards now pack more memory and computing power than the high-end workstations of a decade ago.
So how has all of this changed previs? What would my work on Panic Room look like today?
The first answer is that it would look much better. Better not because it would be more realistic, but better because I could choose how it would look. In 2000, I had very few choices when it came to visual style. I took what I could squeeze out of my Windows NT workstation running an early version of Softimage XSI, and that was it.
Now, with more powerful hardware and software, visual style is a choice. And with game engines like Unreal, we can apply different looks in real time depending on the creative goals of the project. It can look realistic, with natural lighting and atmospheric effects. Or it can look hand-drawn, like Proof’s work on The Blacklist Season 7 finale (see “Hybrid Drama,” CGW Issue 2, 2020).
Filming for that episode was cut short by the pandemic, leaving the studio with just over half the episode shot and no practical way to finish it. Series creator Jon Bokenkamp wanted to push the graphic-novel, film-noir feel of the series, so the team at Proof dialed in a visual style, and a few weeks later our previs aired on national television. A better-looking previs is one suited to the taste and style of the project and its filmmakers. The “look” is a creative choice in a field of creative choices.
At the other end of the spectrum is our work on the recent Amazon Prime release The Tomorrow War.
Proof joined the VFX team on The Tomorrow War during post-production as they built the director’s cut. Most of the work was postvis – adding CG creatures, effects, and backgrounds to the practical plates. We needed to make the shots as believable as possible while ensuring a super-fast turnaround, so that the director, who is also an editor, could experiment with the animation and craft the story he wanted to tell. This meant delivering multiple iterations of shots, all with excellent creature animation and nuanced lighting so the creatures felt integrated into the plates.
Achieving higher quality was especially important for the lab sequence where the creature slowly wakes up and begins to struggle against the chains holding it down. It is a calm, intimate and tense cinematic moment. Our animators had to dig deep, imbuing the creature with a sense of emotion and purpose as it slowly becomes aware of the danger it finds itself in.
We developed a lightweight rig, giving our animators the subtle control they needed over the creature’s limbs and appendages while remaining simple enough to work with quickly. The rig was custom built for the show, although we managed to repurpose an old set of “vines” from another film into the chain harness that traps the creature. These shots go beyond mere temp comps; they are first passes of animation that evoke mood, tone, sentiment, and story.
While the lab sequence showed our ability to create a dramatic moment, other sequences were about the chaos of war. For these, our team used MASH in Autodesk Maya to animate hordes of creatures, then layered in specific keyframed performances to tie the scenes together. We augmented the creature assets with multiple “damage” states to reflect injuries taken during battle.
We used hardware rendering for the CG elements, combining Autodesk Arnold shaders with Maya Viewport 2.0 rendering to achieve fast renders with sophisticated lighting. Every shot was finished in Foundry’s Nuke, where Proof compositors could focus on color and integration. The film was shot with anamorphic lenses, so great care was taken in removing the lens distortion for animation and rendering, then reapplying it in Nuke to match the look of the final film.
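For readers curious about the mechanics, the undistort/redistort round-trip works roughly like this: plate coordinates are mapped into undistorted space, where CG is animated and rendered with ideal lenses, and the distortion is then reapplied so everything lines up with the anamorphic footage. The sketch below is illustrative only, not Proof’s actual pipeline – it uses a hypothetical one-term radial (Brown) model and a made-up coefficient:

```python
# Illustrative sketch of a lens-distortion round-trip (assumed simple
# one-term radial Brown model; k1 value is a placeholder, not from the film).

def distort(x, y, k1=-0.12):
    """Apply radial distortion to a normalized image coordinate."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x * s, y * s

def undistort(x, y, k1=-0.12, iters=20):
    """Invert the distortion by fixed-point iteration on the radius."""
    xu, yu = x, y
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        s = 1.0 + k1 * r2
        xu, yu = x / s, y / s
    return xu, yu

# Round trip: a distorted plate coordinate maps to undistorted space
# (where CG is rendered), then back, landing where it started.
xd, yd = 0.4, 0.3
xu, yu = undistort(xd, yd)
xr, yr = distort(xu, yu)
assert abs(xr - xd) < 1e-9 and abs(yr - yd) < 1e-9
```

In production this modeling is handled by dedicated lens-distortion tools rather than hand-rolled math, but the principle – render straight, then bend the CG to match the glass – is the same.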
Achieving a compelling cinematic look was essential, as the postvis sat in the edit for months and was used for multiple screenings with filmmakers, studio executives, and test audiences. The shots had to be believable without sacrificing turnaround time.
Scale is another big difference in previs. While on Panic Room we set out to previs the entire feature, the reality is that we ran out of time. And that film was remarkably contained: five characters trapped in a townhouse over the course of one night, with only a handful of visual effects. The films we’re working on now involve ensemble casts, multiple locations, and over-the-top visual effects sequences.
Take Proof’s recent work on F9. This was a project that covered all aspects of visualization – from developing the story with the filmmakers, to highly detailed technical visualization to aid in the filming of complex action shots, to postvis delivered version after version for editorial, to the production of dozens of visual effects shots that appear in the final film. Previs alone covered six sequences totaling over 3,000 individual shots spanning from Southern California to Scotland to Eastern Europe and beyond.
And for postvis, we delivered more than 700 shots touching almost every sequence in the film. In some cases, we were replacing blue screens with CG set extensions. In others, we generated complex animation and effects to embed the filmed actors in almost entirely computer-generated worlds.
Our work on F9 lasted 26 months with an average team of 12 artists. Panic Room was ambitious in its attempt to plan an entire film; the visualization work on F9 was epic, touching every aspect of the production – from pre-production to filming to post. This level of involvement is now typical of blockbuster movies. Previs is an essential part of the creative decision-making process from start to finish.
And that brings me to the real answer to the question. The significant change in previs isn’t how much work we’re able to produce, and it’s not what visualization work looks like; it’s about what it feels like.
My working relationship with Fincher on Panic Room was call-and-response. He called for changes, and I responded by making them. Sometimes that response took minutes or even hours; in rare cases, it might take a day or more. The goal was to make the feedback loop as tight as possible, but it was still a loop. Now the goal is to remove the loop entirely, engaging filmmakers directly in the creative decision-making process. To hand them the gamepad and let them create their own shots.
To do this, we use an array of real-time technologies that allow us to capture the action and record it as animation. We dress the performers in motion capture suits and have them drive CG characters that are integrated into the world of the film’s story. The director can step in and block the action as if it were happening for real. We hand the director or cinematographer a virtual camera and let them compose the shots themselves. They can experiment with composition and coverage, then play back the footage in real time to assess what works and what doesn’t.
That’s what Fincher saw that night in 2000 – using real-time technologies to engage and interact with animation as a living document.
Visualization today is all of that: it aims for higher quality while doing much more, and it incorporates new technologies that minimize or even bypass the iterative loop. Interestingly, what hasn’t changed is the “why.” Previs has always been about creative communication, ideation, and technical problem-solving in service of telling better stories. The way we do this work at Proof has definitely changed, and for the better, but the reason we do it will always remain the same.
Ron Frankel is the Founder and Chairman of Proof Inc., and Partner/Managing Director of Proof London Ltd. Founded in 2002, Proof is the original visualization studio dedicated to providing the highest-quality visualization services for the feature film, broadcast, and immersive entertainment industries.