SIGGRAPH 2020 technical papers reveal the latest trends in computer graphics and interactive techniques


CHICAGO–(BUSINESS WIRE)–SIGGRAPH 2020 announces the 163 research projects, from 24 countries, selected for its world-renowned Technical Papers program. Throughout its 47-year history, the conference has continuously showcased innovative and cutting-edge research across the many subfields of computer graphics. With its recent move to a virtual format, SIGGRAPH 2020 is working with contributing researchers to provide a new way for attendees to experience this content, and will announce details in the coming weeks.

Each year, the SIGGRAPH Technical Papers program sets the tone for the future of visual computing. From 443 submissions, the 2020 program jury selected each project through a double-blind review process, with additional selections drawn from the peer-reviewed journal ACM Transactions on Graphics (TOG). Trends this year include the ubiquity of deep learning, with new applications proposed not only for images but also for animation, geometry, and more, and a return to fundamentals through the use of 2D graphics for things like icons, sketches, diagrams, and line drawings.

“I am thrilled to announce a glimpse of the incredible work of researchers who continue to think beyond what is possible in visual computing, and I look forward to seeing how these projects fuel memorable discussions at the first-ever digital SIGGRAPH,” said SIGGRAPH 2020 Technical Papers Chair Szymon Rusinkiewicz of Princeton University. “Submitted papers were stronger than they have ever been, and the research coming out of the community continues to be amazing.”

Along with new research from Stanford University, Facebook, Microsoft, Pixar Animation Studios, Google, MIT, and NVIDIA, highlights from the 2020 Technical Papers Program include:

RoboCut: Hot-wire Cutting with Robot-controlled Flexible Rods

Authors: Simon Duenser and Stelian Coros, ETH Zurich; Roi Poranne, ETH Zurich and University of Haifa; Bernhard Thomaszewski, University of Montreal

This paper presents path optimization for robotic hot-wire cutting with flexible tools. In it, the team proposes a framework that addresses physical modeling, non-rigid surface fitting, and collision avoidance in a unified manner.

AnisoMPM: Animating Anisotropic Damage Mechanics

Authors: Joshuah Wolper, Minchen Li, Yu Fang, Ziyin Qu, Jiecong Lu, Meggie Cheng and Chenfanfu Jiang, University of Pennsylvania; and, Yunuo Chen, University of Pennsylvania and University of Science and Technology of China

With this paper, the researchers present AnisoMPM: a robust and general approach that couples anisotropic damage evolution with anisotropic elastic response to animate the dynamic fracture of isotropic, transversely isotropic, and orthotropic materials.

Unsupervised K-modal Styled Content Generation

Authors: Omry Sendik and Daniel Cohen-Or, Tel Aviv University; Dani Lischinski, Hebrew University of Jerusalem

This paper presents uMM-GAN, a new architecture designed for unsupervised generative modeling of multimodal distributions. It effectively disentangles modes from style, providing an independent degree of control over generation.

Continuous Multiple Importance Sampling

Authors: Rex West and Toshiya Hachisuka, University of Tokyo; Iliyan Georgiev, Autodesk; and, Adrien Gruson, McGill University

Researchers from the University of Tokyo, Autodesk, and McGill University present a generalization of multiple importance sampling to uncountably infinite sets of techniques, equipped with a corresponding continuous balance heuristic and a practical stochastic estimator.
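For context, classical multiple importance sampling combines a finite set of sampling techniques by weighting each sample with the balance heuristic, w_i(x) = p_i(x) / Σ_j p_j(x); the paper above generalizes this discrete setting to a continuum of techniques. Below is a minimal sketch of the classical discrete case only (function names and the toy integrand are illustrative, not from the paper):

```python
import math
import random

def balance_heuristic(pdfs, i, x):
    """Balance-heuristic weight for technique i at sample x:
    w_i(x) = p_i(x) / sum_j p_j(x)."""
    return pdfs[i](x) / sum(p(x) for p in pdfs)

def mis_estimate(f, samplers, pdfs, n_per_technique):
    """Estimate the integral of f by drawing n samples from each
    technique and combining them with the balance heuristic."""
    total = 0.0
    for i, (sample, pdf) in enumerate(zip(samplers, pdfs)):
        for _ in range(n_per_technique):
            x = sample()
            total += balance_heuristic(pdfs, i, x) * f(x) / pdf(x)
    return total / n_per_technique

# Toy example: integrate f(x) = 3x^2 on [0, 1] (exact value 1)
# with a uniform technique and a linear-pdf technique p(x) = 2x.
random.seed(0)
f = lambda x: 3.0 * x * x
samplers = [lambda: random.random(),            # uniform on [0, 1]
            lambda: math.sqrt(random.random())]  # inverse-CDF of p(x)=2x
pdfs = [lambda x: 1.0,
        lambda x: 2.0 * x]
estimate = mis_estimate(f, samplers, pdfs, n_per_technique=20000)
```

The balance heuristic ensures each sample is weighted toward whichever technique explains it best, keeping variance low even when one technique is a poor fit in part of the domain.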

Quanta Burst Photography

Authors: Sizhuo Ma, Shantanu Gupta and Mohit Gupta, University of Wisconsin-Madison; and Arin C. Ulku, Claudio Bruschini and Edoardo Charbon, Ecole Polytechnique Fédérale de Lausanne

This paper introduces quanta burst photography, a computational photography technique that exploits single-photon cameras as passive imaging devices for photography in challenging conditions, including ultra-low light and fast motion.
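A single-photon camera records streams of 1-bit frames, where each pixel fires if at least one photon arrives during the frame. Under standard Poisson photon statistics, the underlying flux can be recovered from the fraction of fired frames, since p = 1 − exp(−λ) implies λ = −ln(1 − p). The toy sketch below illustrates only this basic imaging model; it omits the motion alignment and merging that are the paper's actual contribution, and all names are illustrative:

```python
import math
import random

def estimate_flux(binary_frames):
    """Recover per-pixel photon flux from a stack of 1-bit frames.
    Each frame is a list of 0/1 pixel values; a pixel fires with
    probability p = 1 - exp(-lambda), so lambda = -ln(1 - p)."""
    n = len(binary_frames)
    num_pixels = len(binary_frames[0])
    flux = []
    for j in range(num_pixels):
        ones = sum(frame[j] for frame in binary_frames)
        p = min(ones / n, 1.0 - 1e-9)  # clamp to keep the log finite
        flux.append(-math.log(1.0 - p))
    return flux

# Simulate 2000 binary frames of a single pixel with true flux 0.7
random.seed(0)
true_lam = 0.7
fire_prob = 1.0 - math.exp(-true_lam)
frames = [[1 if random.random() < fire_prob else 0] for _ in range(2000)]
recovered = estimate_flux(frames)[0]
```

Averaging many such binary frames is what lets these sensors operate passively in ultra-low light; handling scene and camera motion across the burst is where the paper's technique comes in.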

Incremental Potential Contact: Intersection- and Inversion-free Large Deformation Dynamics

Authors: Minchen Li, University of Pennsylvania and Adobe Research; Zachary Ferguson, Teseo Schneider, Denis Zorin and Daniele Panozzo, New York University; Timothy Langlois and Danny M. Kaufman, Adobe Research; and, Chenfanfu Jiang, University of Pennsylvania

In this paper, researchers propose Incremental Potential Contact (IPC) for robust and accurate time stepping of nonlinear elastodynamics. IPC guarantees intersection- and inversion-free trajectories regardless of materials, time-step sizes, velocities, or severity of deformation.
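The intersection-free guarantee rests on smoothly clamped log-barrier potentials applied to contact distances: the barrier is exactly zero once a contact pair is farther apart than a threshold, and grows without bound as the distance approaches zero, so any finite-energy step cannot tunnel through contact. A minimal sketch of that barrier shape (parameter names here are illustrative, following the standard clamped log-barrier form described in the paper):

```python
import math

def barrier(d, dhat):
    """Smoothly clamped log-barrier on an unsigned contact distance d.
    Returns 0 for d >= dhat (no contact force at a safe distance) and
    grows to infinity as d -> 0, blocking intersection."""
    if d >= dhat:
        return 0.0
    return -((d - dhat) ** 2) * math.log(d / dhat)
```

Because the barrier and its first derivatives vanish smoothly at d = dhat, contacts activate and deactivate without energy jumps, which is what allows the method to use standard smooth optimization for each implicit time step.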

For even more highlights, check out the Tech Papers preview just released on YouTube:

On going virtual: “Together with our hundreds of contributors, we look forward to letting their exceptional content shine in a new digital venue. We are confident that we will deliver a strong SIGGRAPH 2020 that celebrates advancements and achievements in computer graphics and interactive techniques, and are optimistic that this new format will allow more members of our global community to come together and participate,” shared SIGGRAPH 2020 Conference Chair Kristy Pron of Walt Disney Imagineering.

Registration for the SIGGRAPH 2020 virtual conference is not yet available. For conference updates and a full list of 2020 technical papers, visit


ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, bringing together educators, researchers, and professionals to inspire dialogue, share resources, and address challenges in the field. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. The SIGGRAPH conference is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2020, the 47th annual conference hosted by ACM SIGGRAPH, will take place virtually.

