A Brand New Game: NVIDIA Research Brings AI to Computer Graphics

Using AI, the researchers automated the task of converting live actor performances (left) into virtual computer game characters (right).

SIGGRAPH 2017 — LOS ANGELES — The same GPUs that bring games to your screen could soon be used to harness the power of AI to help game and movie makers go faster, spend less, and create richer experiences.

At SIGGRAPH 2017 this week, NVIDIA is showcasing research that makes it easier to animate realistic human faces, simulate the interaction of light with surfaces in a scene, and render realistic images faster.

NVIDIA is combining its expertise in artificial intelligence with its long experience in computer graphics to advance 3D graphics for games, virtual reality, films and product design.

Facing forward

Game studios create animated faces by recording video of actors performing every line of dialogue for every character in a game. Software then turns this footage into a digital double of the actor, which becomes the character’s animated face.

Existing software forces artists to spend hundreds of hours revising these digital faces to better match the real actors. It’s tedious work for artists and expensive for studios, and it’s hard to change once it’s done.

Reducing the work involved in creating facial animations would let game artists add more character dialogue and more supporting characters, while giving them the flexibility to iterate quickly on script changes.

Remedy Entertainment – best known for games like Quantum Break, Max Payne and Alan Wake – approached NVIDIA Research with an idea: help the studio produce realistic facial animation for digital doubles with less effort and at lower cost.

Artificially intelligent game faces

Using Remedy’s vast animation data store, NVIDIA GPUs, and deep learning, NVIDIA researchers Samuli Laine, Tero Karras, Timo Aila, and Jaakko Lehtinen trained a neural network to produce facial animations directly from videos of actors.

Instead of requiring laborious data conversion and hours of edited actor video, NVIDIA’s solution needs only five minutes of training data. The trained network automatically generates all the facial animation needed for an entire game from a single video stream, and it produces animation that is more consistent than existing methods while retaining the same fidelity.
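The team’s published work frames this as a regression problem: a convolutional network takes a video frame of the actor’s face and directly predicts the 3D positions of the vertices of a fixed-topology face mesh. The PyTorch sketch below illustrates that framing only; the layer sizes, names, and mesh resolution are all illustrative assumptions, not NVIDIA’s actual architecture.

```python
import torch
import torch.nn as nn

NUM_VERTICES = 5_000  # hypothetical mesh resolution, chosen for illustration

class FrameToMesh(nn.Module):
    """Toy network: one grayscale video frame in, one face mesh out."""
    def __init__(self, num_vertices: int = NUM_VERTICES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One (x, y, z) triple per mesh vertex.
        self.head = nn.Linear(128, num_vertices * 3)

    def forward(self, frame):            # frame: (batch, 1, H, W)
        x = self.features(frame).flatten(1)
        return self.head(x).view(-1, NUM_VERTICES, 3)

# Training pairs each captured frame with the vertex positions produced
# by the studio's existing capture pipeline, then minimizes L2 error.
model = FrameToMesh()
frames = torch.randn(4, 1, 240, 320)          # stand-in for real footage
target = torch.randn(4, NUM_VERTICES, 3)      # stand-in for tracked meshes
loss = nn.functional.mse_loss(model(frames), target)
loss.backward()
```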

The research team then went a step further, training a system to generate realistic facial animation using only audio. With this tool, game studios will be able to add more supporting game characters, create live-animated avatars, and more easily produce games in multiple languages.
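The audio-driven variant can be framed the same way with a different input: a short window of audio features in, per-frame animation parameters out. The sketch below is a hypothetical illustration only; the feature type (log-mel frames), the blendshape-weight output, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

N_MELS, WINDOW, N_BLENDSHAPES = 40, 16, 50  # all illustrative assumptions

class AudioToFace(nn.Module):
    """Toy network: a window of audio features in, blendshape weights out."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(N_MELS, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * WINDOW, N_BLENDSHAPES)

    def forward(self, mel):              # mel: (batch, N_MELS, WINDOW)
        return self.head(self.conv(mel).flatten(1))

weights = AudioToFace()(torch.randn(2, N_MELS, WINDOW))  # -> (2, 50)
```

Because audio is far cheaper to record than calibrated facial video, a model like this is what makes localized dialogue and live-animated avatars practical at scale.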

Towards a new era in gaming

Antti Herva, lead technical character artist at Remedy, said that over time, the new methods will allow the studio to create bigger and richer game worlds with more characters than is currently possible. Already, the studio is creating high-quality facial animation in much less time than in the past.

“Based on the NVIDIA Research work we’ve seen in AI-driven facial animation, we’re confident that AI will revolutionize content creation,” Herva said. “Complex facial animation for digital doubles like those in Quantum Break can take many man-years to create. After working with NVIDIA to create video- and audio-driven deep neural networks for facial animation, we can reduce that time by 80% on large-scale projects and free up our artists to focus on other tasks.”

Creating images with AI

AI also holds promise for rendering 3D graphics, the process that turns digital worlds into the realistic images you see on screen. Filmmakers and designers use a technique called ray tracing to simulate light reflecting off surfaces in a virtual scene. NVIDIA is using AI to improve both ray tracing and rasterization, a less costly rendering technique used in computer games.

Although ray tracing generates very realistic images, simulating millions of virtual light rays for each image incurs significant computational cost. Partially rendered images appear noisy, like a photograph taken in very low light conditions.
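That graininess is Monte Carlo noise: each pixel is a statistical estimate, and its standard error shrinks only with the square root of the sample count. The toy Python experiment below makes that concrete (the uniform “radiance” distribution is an arbitrary stand-in, not a real renderer):

```python
import numpy as np

# Why low-sample renders look grainy: each pixel is a Monte Carlo
# estimate, and its standard error shrinks only as 1/sqrt(N).
rng = np.random.default_rng(0)
for n in (4, 64, 1024):
    # 10,000 independent "pixels", each averaging n random samples.
    estimates = rng.uniform(0, 1, size=(10_000, n)).mean(axis=1)
    print(f"{n:5d} samples/pixel -> std error {estimates.std():.4f}")
# Quadrupling the sample count only halves the noise, which is what
# makes brute-force ray tracing so computationally expensive.
```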

To clean up these noisy frames, the researchers used GPU-accelerated deep learning to predict final rendered frames from partially finished results. Led by Chakravarty R. Alla Chaitanya, an NVIDIA research intern from McGill University, the team created an AI solution that generates high-quality images from noisy, approximate inputs in a fraction of the time required by existing methods.
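The published version of this work feeds the network auxiliary scene buffers alongside the noisy render and trains it against fully converged reference frames. The PyTorch sketch below captures that training setup in miniature; the choice of buffers and the plain convolutional stack are simplifying assumptions (the paper describes a recurrent autoencoder).

```python
import torch
import torch.nn as nn

# Noisy RGB + surface normals + albedo + depth, stacked channel-wise.
IN_CHANNELS = 3 + 3 + 3 + 1

class Denoiser(nn.Module):
    """Toy denoiser: render buffers in, predicted clean RGB out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(IN_CHANNELS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),   # predict clean RGB
        )

    def forward(self, buffers):           # (batch, IN_CHANNELS, H, W)
        return self.net(buffers)

noisy = torch.rand(1, IN_CHANNELS, 128, 128)  # stand-in for render buffers
clean_reference = torch.rand(1, 3, 128, 128)  # converged ground truth
loss = nn.functional.l1_loss(Denoiser()(noisy), clean_reference)
loss.backward()
```

The key economy is that the expensive converged frames are needed only once, at training time; at render time the network replaces most of the remaining samples.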

This work is more than a research project. It will soon be a product. Today we announced the NVIDIA OptiX 5.0 SDK, the latest version of our ray-tracing engine. OptiX 5.0, which incorporates NVIDIA Research AI denoising technology, will be available free to registered developers in November.

AI smooths rough edges

NVIDIA researchers used AI to tackle a computer game rendering problem called aliasing. Anti-aliasing is another way to reduce visual noise – in this case, the jagged, stair-stepped edges known as “jaggies” that appear in partially rendered images where smooth lines should be.

NVIDIA researchers Marco Salvi and Anjul Patney trained a neural network to recognize these artifacts and replace the affected pixels with smooth, anti-aliased ones. The AI-based technique produces sharper images than existing algorithms.
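One plausible way to set up such a network is as supervised image-to-image translation: render training images once with aliasing and once with heavy supersampling, then teach a small network to map the former to the latter. The sketch below follows that framing; the residual design and all sizes are assumptions, not the researchers’ actual model.

```python
import torch
import torch.nn as nn

class AntiAliasNet(nn.Module):
    """Toy image-to-image network: aliased render in, smoothed render out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, aliased):
        # Predict a residual correction: edges move toward the smooth
        # reference while flat regions pass through nearly unchanged.
        return aliased + self.net(aliased)

aliased = torch.rand(1, 3, 128, 128)       # 1-sample-per-pixel render
reference = torch.rand(1, 3, 128, 128)     # supersampled ground truth
loss = nn.functional.mse_loss(AntiAliasNet()(aliased), reference)
loss.backward()
```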

How AI draws the right rays

NVIDIA is developing more efficient methods for tracing virtual light rays. Computers sample the paths of many light rays to generate a photorealistic image. The problem is that not all of these light paths contribute to the final image.

Researchers Ken Dahm and Alex Keller used machine learning to guide the choice of light paths. They achieved this by connecting the mathematics of light transport to reinforcement learning, a concept from artificial intelligence.

Their solution learns to distinguish “helpful” paths – those most likely to connect lights with virtual cameras – from paths that don’t contribute to the image.
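The insight is that path guiding has the same shape as Q-learning: the “value” of shooting a ray in some direction from some point is the light that comes back, and sampling in proportion to learned values steers future rays toward the lights. The tabular toy below illustrates that loop under heavy simplifying assumptions; a real renderer works over continuous positions and directions, not a handful of cells.

```python
import numpy as np

N_CELLS, N_DIRS, ALPHA = 64, 16, 0.2      # toy discretization, made up
rng = np.random.default_rng(1)
Q = np.ones((N_CELLS, N_DIRS))            # optimistic uniform start

def sample_direction(cell: int) -> int:
    # Importance-sample directions in proportion to their learned value.
    p = Q[cell] / Q[cell].sum()
    return rng.choice(N_DIRS, p=p)

def update(cell: int, direction: int, emitted: float, next_cell: int) -> None:
    # Reinforcement-style update: reward is the radiance emitted at the
    # hit point plus the value of continuing the path from there.
    target = emitted + Q[next_cell].mean()
    Q[cell, direction] += ALPHA * (target - Q[cell, direction])

# Tiny driver: pretend cell 0 only sees light from direction 3, via cell 7.
for _ in range(200):
    d = sample_direction(0)
    update(0, d, emitted=1.0 if d == 3 else 0.0, next_cell=7)
print(Q[0].argmax())   # direction 3 dominates after a few hundred rays
```

During rendering, each traced ray both contributes to the image and refines Q, so later rays concentrate on directions that actually reach the lights.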

NVIDIA AI Research at SIGGRAPH

At SIGGRAPH, you can learn more about how AI is changing computer graphics by visiting us at booth #403 beginning Tuesday and attending NVIDIA’s SIGGRAPH talks on AI research:

Tuesday, August 1

Wednesday, August 2

Thursday, August 3

  • Learning Light Transport the Reinforced Way, Room 153, 3:45-5:15 p.m.

The left inset shows an aliased image that is jagged and pixelated. NVIDIA’s AI anti-aliasing algorithm produced the larger image and the right inset by learning the mapping from aliased to anti-aliased images. Image courtesy of Epic Games.

Simulating the light reflections in this virtual scene – shown here without denoising – is a challenge because the only light enters through the slightly open doorway. NVIDIA’s AI-guided light simulation reduces the number of virtual light rays required, delivering up to 10x faster image synthesis.

Source: Nvidia
