Faster computer graphics | MIT News


Photographs of moving objects are almost always a bit blurry – or very blurry, if the objects are moving fast enough. To make their work look as much like a conventional movie as possible, game and movie animators try to replicate that blur. But counterintuitively, producing blurry images is actually more computationally complex than producing perfectly sharp images.

In August, at this year’s Siggraph conference – the premier conference on computer graphics – researchers from the computer graphics group at MIT’s Computer Science and Artificial Intelligence Laboratory will present a pair of papers describing new techniques for calculating blur much more efficiently. The result could be more compelling video games and digital video images that take minutes rather than hours to render.

The image sensor of a digital camera, and even the film of a conventional camera, can be thought of as a grid of color detectors, each detector corresponding to a pixel of the final image. If the photographed objects are stationary, then during a single exposure, each detector registers the color of a single point on the surface of an object. But if objects are moving, light from different points on an object, and even from different objects, will hit a single detector. The detector effectively averages the colors of all those points, and the result is blur.
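To make that averaging concrete, here is a minimal sketch in Python. The one-dimensional scene, the moving object’s path, and the helper names (scene_color, exposed_pixel) are all invented for illustration, and a real sensor integrates continuously rather than over 256 discrete samples.

```python
import numpy as np

def scene_color(x, t):
    """Hypothetical 1-D scene: the color visible at sensor position x at
    time t, with a bright object sliding across a dark background."""
    object_pos = 0.2 + 0.6 * t                 # the object moves during the exposure
    if abs(x - object_pos) < 0.05:
        return np.array([1.0, 0.8, 0.2])       # object color
    return np.zeros(3)                         # background

def exposed_pixel(x, num_samples=256):
    """A detector averages the light it receives over the whole exposure,
    mixing object and background colors wherever the object passed by."""
    times = np.linspace(0.0, 1.0, num_samples)
    return np.mean([scene_color(x, t) for t in times], axis=0)

print(exposed_pixel(0.5))   # a washed-out mix: the motion blur at this pixel
```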

On the right, a standard digital animation algorithm simulates blur by sampling 256 different points on the wings of a moving butterfly for each pixel in the frame. On the left is the image produced by sampling one point per pixel. In the center is the result of a new algorithm that samples a single point per pixel but infers color values from surrounding points. The result is very close to the 256-sample image but much easier to calculate.
Images courtesy of Jaakko Lehtinen

Digitally rendering a video image is a computationally intensive process with several discrete steps. First, the computer must determine how the objects in the scene are moving. Second, it must calculate how light rays from an imaginary light source would reflect off the objects. Finally, it determines which rays of light would actually strike an imaginary lens. If the objects in the video are moving slowly enough, the computer needs to go through this process only once per frame. If the objects are moving fast, however, it may have to repeat the process tens or even hundreds of times.
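In rough pseudocode, that brute-force approach looks something like the sketch below, where the callables positions_at, shade, and project are invented stand-ins for the three stages; no real rendering pipeline exposes them this way.

```python
import numpy as np

def render_frame(positions_at, shade, project, t0=0.0, t1=1.0, num_time_samples=64):
    """Brute-force motion blur: repeat the move/shade/project pipeline at
    many instants within the frame interval [t0, t1] and average the results."""
    images = []
    for t in np.linspace(t0, t1, num_time_samples):
        positions = positions_at(t)                 # 1. where the objects are at time t
        colors = shade(positions)                   # 2. how light reflects off them
        images.append(project(positions, colors))   # 3. which rays reach the lens
    return np.mean(images, axis=0)                  # average of all passes: the blurred frame
```

For slow motion, num_time_samples could be 1; for fast motion it grows into the tens or hundreds, which is exactly the cost the two papers attack.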

Color-fast

Given how difficult it is to calculate blur, you might think animators would just ignore it. But that leads to surprisingly unconvincing video. “Motion doesn’t look smooth at all,” says Jaakko Lehtinen, who worked on both projects as a postdoc in the computer graphics group and is now a senior researcher at graphics-chip maker Nvidia.

To get an idea of what blur-free motion looks like, Lehtinen says, consider the type of clay animation familiar from old movies or Christmas specials such as “Rudolph the Red-Nosed Reindeer.” “There’s no motion blur, because the scene is actually still when you take the shot,” says Lehtinen. “It just looks jerky. The movement doesn’t feel natural.”

The MIT researchers took two different approaches to simplifying the blur calculation, corresponding to two different stages in the graphics rendering pipeline. Graduate student Jonathan Ragan-Kelley is the lead author of one of the Siggraph papers, joined by Associate Professor Frédo Durand, who leads the computer graphics group; Lehtinen; graduate student Jiawen Chen; and Michael Doggett of Lund University in Sweden. In this paper, the researchers make the simplifying assumption that the way light reflects off a moving object does not change over the course of a single frame. For each pixel in the final image, their algorithm still averages the colors of multiple points on object surfaces, but it calculates those colors only once. The researchers found a way to represent the relationship between the color calculations and the shapes of the corresponding objects as entries in a table. For each pixel in the final image, the algorithm simply looks up the corresponding values in the table. This greatly simplifies the calculation but has little effect on the final image.
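Based only on that description, the trick can be sketched as memoized shading: the expensive color calculation runs once per surface location and lands in a table, and subsequent samples that hit the same location become cheap lookups. Every name below (Sample, shade, the table layout) is hypothetical, not the paper’s actual data structure.

```python
from collections import namedtuple
import numpy as np

# One visibility sample: which object covers the pixel at some instant in the
# frame, and where on that object's surface the sample lands (u, v quantized
# so that nearby samples share a table entry).
Sample = namedtuple("Sample", ["object_id", "u", "v"])

def pixel_color_decoupled(samples, shade, table):
    """Average many samples per pixel, but run the expensive shading function
    only once per surface location; repeat visits are table lookups."""
    colors = []
    for s in samples:
        key = (s.object_id, s.u, s.v)
        if key not in table:                       # first visit: do the real work
            table[key] = shade(s.object_id, s.u, s.v)
        colors.append(table[key])                  # later visits: cheap lookup
    return np.mean(colors, axis=0)
```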

Adopting the researchers’ proposal would require modifying the architecture of graphics chips. “You could really imagine going ahead and building what they suggest,” says Henry Moreton, a distinguished engineer at Nvidia. “But I think the greatest value of the paper is that it points to strategies for solving these problems in a more elegant, more efficient and more practical way. Whether they manifest themselves in exactly the way the paper presents is probably not that likely. But what they did was point to a new way to attack the problem.”

Turning the tables

The second of the computer graphics group’s Siggraph papers, led by Lehtinen and also featuring Durand, Chen and two of Lehtinen’s Nvidia colleagues, reduces the computational burden of determining which rays of light would hit an imaginary lens. To produce compelling motion blur, digital animators might typically consider the contributions that more than 100 discrete points on the surfaces of moving objects make to the color value of a single pixel. Instead, Lehtinen and his colleagues’ algorithm looks at a smaller number of points (perhaps around 16) and makes an educated guess about the color values of the intermediate points. The result: a digital video frame that would normally take about an hour to render can take about 10 minutes.
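As a toy illustration of that idea, one can shade a handful of instants and interpolate between them. Plain linear interpolation stands in here for the paper’s far more sophisticated reconstruction, and shade_at is an invented placeholder that returns a scalar brightness.

```python
import numpy as np

def reconstructed_pixel(shade_at, num_sparse=16, num_dense=256):
    """Shade ~16 instants per pixel (the expensive step), infer 256 values
    by interpolation (cheap), and average them to approximate the
    brute-force 256-sample result."""
    sparse_t = np.linspace(0.0, 1.0, num_sparse)
    sparse_vals = np.array([shade_at(t) for t in sparse_t])   # the real work
    dense_t = np.linspace(0.0, 1.0, num_dense)
    dense_vals = np.interp(dense_t, sparse_t, sparse_vals)    # educated guesses
    return dense_vals.mean()

# Example: brightness that varies smoothly as an object sweeps past the pixel.
print(reconstructed_pixel(lambda t: 0.5 + 0.5 * np.sin(2 * np.pi * t)))
```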

In fact, both techniques apply not only to motion blur but also to the kind of blur that occurs, for example, in the background of an image when the camera is focused on an object in the foreground. That, too, is something animators seek to replicate. “Where the director and cinematographer choose to focus the lens subtly directs your attention when you’re looking at the image,” says Lehtinen. If an animated movie lacks such defocus effects, “there’s just something wrong,” Lehtinen says. “It doesn’t look like a movie.” Indeed, Lehtinen says, even though the paper has yet to be presented, several major special-effects companies have already contacted the researchers about the work.
