CHICAGO–(BUSINESS WIRE)–SIGGRAPH 2018, the world’s leading annual interdisciplinary educational event showcasing the latest in computer graphics and interactive techniques, will present 128 cutting-edge technical papers from around the world, highlighting important new scientific work. The 45th SIGGRAPH conference will take place August 12-16 at the Vancouver Convention Centre. To register for the conference, visit s2018.SIGGRAPH.org.
Technical papers for SIGGRAPH 2018 were chosen through a rigorous peer-review process by a prestigious international jury of scholars and scientists. Each selected paper meets the highest scientific standards and will be published in a special issue of ACM Transactions on Graphics. In addition to the papers selected by the conference jury, select papers published in ACM Transactions on Graphics over the past year will also be presented in Vancouver.
“Vancouver is about to overflow with the who’s-who of computer graphics. Whether students, engineers, or professors, everyone with a passion for technology and graphics converges on the SIGGRAPH conference to discover the most innovative ideas in the field. Technical papers, in particular, are always at the leading edge,” said Mathieu Desbrun, SIGGRAPH 2018 Technical Papers Chair and Carl Braun Professor at the California Institute of Technology.
Highlights of this year’s technical papers program include:
Looking to listen at the cocktail party: a speaker-independent audio-visual model for speech separation [Israel, U.S.]
Authors: Ariel Ephrat, Google Inc., Hebrew University of Jerusalem; Inbar Mosseri, Oran Lang, Tali Dekel, Kevin Wilson, Avinatan Hassidim, and Michael Rubinstein, Google Inc.; and, William Freeman, Google Inc., Massachusetts Institute of Technology (MIT)
The authors present a trained machine-learning model that uses both visual and auditory cues from an input video to separate the speech of the different speakers in the video. (link)
Mode-adaptive neural networks for quadruped motion control [United Kingdom, U.S.]
Authors: He Zhang, Sebastian Starke and Taku Komura, University of Edinburgh; and, Jun Saito, Adobe Research
This paper proposes a data-driven approach to animating quadruped motion. The new architecture, called Mode-Adaptive Neural Networks, can learn a wide range of locomotion modes and non-cyclic actions. (link)
Skaterbots: optimization-based design and motion synthesis for robotic creatures with legs and wheels [Switzerland, U.S., Canada]
Authors: Moritz Geilinger, Roi Poranne and Stelian Coros, ETH Zurich, Department of Computer Science; Ruta Desai, Carnegie Mellon University; and, Bernhard Thomaszewski, University of Montreal
The Skaterbots researchers propose a computational approach to design optimization and motion synthesis for robotic creatures that move using arbitrary arrangements of legs and wheels. (link)
DeepMimic: example-guided deep reinforcement learning of physics-based character skills [U.S., Canada]
Authors: Xue Bin Peng, Pieter Abbeel and Sergey Levine, University of California, Berkeley; and, Michiel van de Panne, University of British Columbia
This paper presents a deep reinforcement learning framework that allows simulated characters to mimic a rich repertoire of highly dynamic and acrobatic skills from reference motion clips. (link)
Automatic machine knitting of 3D meshes [U.S., Switzerland]
Authors: Vidya Narayanan, Lea Albaugh, Jessica Hodgins and James McCann, Carnegie Mellon University; and, Stelian Coros, ETH Zurich, Carnegie Mellon University
The researchers present the first computational approach capable of transforming 3D meshes, created with traditional modeling programs, into instructions for a computer-controlled knitting machine. (link)
Instant 3D Photography [United Kingdom, U.S.]
Authors: Peter Hedman, University College London; Johannes Kopf, Facebook
In less than 60 seconds, the authors’ method converts color and depth images from a dual-lens camera phone into a highly detailed 3D panorama, which can be viewed with head-motion parallax in VR. (link)
Deep video portraits [Germany, France, United Kingdom, U.S.]
Authors: Hyeongwoo Kim, Ayush Tewari, Weipeng Xu and Christian Theobalt, Max Planck Institute for Informatics; Pablo Garrido and Patrick Perez, Technicolor; Justus Thies and Matthias Niessner, Technical University of Munich; Christian Richardt, University of Bath; and, Michael Zollhöfer, Stanford University
This new approach to deep video portraits enables full control of a target actor by transferring head pose, facial expressions, and eye motion with a high level of photorealism. (link)
The Technical Papers program also offers a unique opportunity to hear from all authors in a single two-hour window, at the Technical Papers Fast Forward event. This much-anticipated Sunday evening session offers an entertaining and illuminating summary in which each author is given 30 seconds to “wow” the crowd with their findings and entice attendees to hear their full presentations throughout the week of the conference.
For more information on research at SIGGRAPH, listen to this podcast with Pixar’s Ed Catmull and Tony DeRose or watch the 2018 Technical Papers Preview Trailer. The Technical Papers program is open to Full Conference and Full Conference Platinum registrants only.
About ACM, ACM SIGGRAPH and SIGGRAPH 2018:
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, bringing together educators, researchers, and professionals to inspire dialogue, share resources, and address challenges in the field. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s premier annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place August 12-16 at the Vancouver Convention Centre in Vancouver, BC.