3. Computer graphics give birth to Big Data


The explosion of breakthroughs, investments and entrepreneurial activity around artificial intelligence over the past decade has been driven exclusively by deep learning, a sophisticated statistical analysis technique for finding hidden patterns in large amounts of data. The term “artificial intelligence”, coined in 1955, has been applied (or misapplied) to deep learning, a more advanced version of machine learning, an approach to training computers to perform certain tasks that was itself given its name in 1959.

The recent success of deep learning is the result of the increased availability of vast amounts of data (big data) and the advent of graphics processing units (GPUs), which greatly increased the breadth and depth of the data used to train computers and reduced the time required to train deep learning algorithms.

The term “big data” first appeared in the computer science literature in an October 1997 article by Michael Cox and David Ellsworth, “Application-controlled demand paging for out-of-core visualization”, published in the Proceedings of the 8th IEEE Visualization Conference. They wrote: “Visualization provides an interesting challenge for computer systems: data sets are generally quite large, taxing the capacities of main memory, local disk, and even remote disk. We call this the problem of big data. When data sets do not fit in main memory (in core), or when they do not fit even on local disk, the most common solution is to acquire more resources.”

The term was also used at the time outside of academia. For example, John R. Mashey, chief scientist at SGI, gave a presentation titled “Big Data…and the Next Wave of Infrastress” at a USENIX meeting in April 1998. SGI, founded in 1981 as Silicon Graphics, Inc., focused on the development of hardware and software for 3D computer graphics.

SGI founder Jim Clark completed his doctoral dissertation in 1974 at the University of Utah under the supervision of Ivan Sutherland, the “father of computer graphics”. Clark went on to found Netscape Communications, whose web browser success and 1995 IPO launched the “Internet boom”. The invention of the Web in 1989 by Tim Berners-Lee, and its success in turning billions of people around the world into consumers and creators of digital data, made it easy to annotate billions of widely shared digital images (e.g., labeling a photo of a cat as “cat”).

In 2007, computer scientist Fei-Fei Li and her colleagues at Princeton University began assembling ImageNet, a large database of annotated images designed to advance research in visual object recognition software. Five years later, in October 2012, a deep learning artificial neural network designed by researchers at the University of Toronto achieved an error rate of just 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the 25% error rate achieved by the best entry the previous year, heralding the resurgence of “artificial intelligence”.

Big data was indeed big. In 1996, digital storage became more cost-effective than paper for storing data, according to R.J.T. Morris and B.J. Truskowski in “The Evolution of Storage Systems”. And in 2002, digital information storage surpassed non-digital storage for the first time. According to “The World’s Technological Capacity to Store, Communicate, and Compute Information” by Martin Hilbert and Priscila Lopez, global information storage capacity grew at a compound annual growth rate of 25% between 1986 and 2007. They also estimated that in 1986, 99.2% of all storage capacity was analog, but that by 2007, 94% of storage capacity was digital, a complete reversal of roles.

In October 2000, Peter Lyman and Hal Varian of UC Berkeley published “How Much Information?”, the first comprehensive study to quantify, in terms of computer storage, the total amount of new and original information (not counting copies) created worldwide each year: in 1999, the world produced 1.5 exabytes of original data. In March 2007, John Gantz, David Reinsel and other IDC researchers published the first study to estimate and forecast the amount of digital data created and replicated each year: 161 exabytes in 2006, forecast to grow more than sixfold to 988 exabytes by 2010, or roughly doubling every 18 months.
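
A quick back-of-envelope check, using only the figures quoted above, shows that the IDC forecast is indeed consistent with an 18-month doubling time:

\[
\frac{988}{161} \approx 6.1 \approx 2^{2.6},
\qquad
\frac{4\ \text{years}}{2.6\ \text{doublings}} \approx 1.5\ \text{years} \approx 18\ \text{months per doubling}.
\]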

The information explosion (a term first used in 1941, according to the Oxford English Dictionary) became the great explosion of digital data. But the amount of data available was only one of two catalysts for the success of deep learning. The other was GPUs.

While the development of deep learning algorithms and their practical application progressed steadily during the 1980s and 1990s, they were limited by insufficient computing power. In October 1986, David Rumelhart, Geoffrey Hinton and Ronald Williams published “Learning representations by back-propagating errors”, in which they described “a new learning procedure, back-propagation, for networks of neuron-like units”, a conceptual breakthrough in the evolution of deep learning. Three years later, Yann LeCun and other AT&T Bell Labs researchers successfully applied a backpropagation algorithm to a multilayer neural network to recognize handwritten ZIP codes. But given the hardware limitations of the time, it took about three days (still a significant improvement over previous efforts) to train the network.
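
To make the idea concrete, here is a minimal sketch of backpropagation in NumPy, training a tiny two-layer network on the XOR toy problem. It is purely illustrative, a sketch of the chain-rule procedure rather than LeCun’s ZIP-code network; the layer sizes, learning rate and task are arbitrary choices.

```python
import numpy as np

# A tiny two-layer network trained with backpropagation on the XOR toy problem.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20_000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: propagate the output error back through the layers
    # with the chain rule, giving a gradient for every weight and bias.
    d_out = (out - y) * out * (1 - out)    # delta at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)   # delta at the hidden layer

    # Gradient-descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]]
```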

Computer graphics, the cradle of big data, came to the rescue. In the 1990s, real-time 3D graphics were becoming increasingly common in arcade, computer, and console games, leading to increased demand for hardware-accelerated 3D graphics. Sony first used the term GPU for Geometry Processing Unit when it launched the PS1 video game console in 1994.

Video game rendering requires many operations to be performed quickly in parallel. Graphics cards are designed for a high degree of parallelism and high memory bandwidth, at the cost of lower clock speeds and reduced branching capability compared to traditional CPUs. It turns out that deep learning algorithms running on artificial neural networks require similar characteristics: parallelism, high memory bandwidth, and little branching.
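
As a small illustration (the array sizes below are made up), a batch of graphics vertex transforms and a fully connected neural-network layer both reduce to one large, uniform matrix multiply, exactly the kind of data-parallel, branch-free arithmetic that GPUs accelerate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Graphics workload: transform a batch of 3D vertices (in homogeneous
# coordinates) by a single 4x4 model-view matrix -- one dense multiply
# applied uniformly to every vertex, with no data-dependent branching.
vertices = rng.normal(size=(100_000, 4))
model_view = rng.normal(size=(4, 4))
transformed = vertices @ model_view

# Deep learning workload: push a batch of inputs through one fully connected
# layer -- again a single dense multiply applied uniformly to every example.
batch = rng.normal(size=(100_000, 64))
weights = rng.normal(size=(64, 256))
activations = np.maximum(batch @ weights, 0.0)  # ReLU: elementwise, branch-free

print(transformed.shape, activations.shape)  # (100000, 4) (100000, 256)
```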

From the late 2000s, many researchers demonstrated the usefulness of GPUs for deep learning, in particular for training artificial neural networks. General-purpose GPU computing, enabled by new programming tools such as NVIDIA’s CUDA, was applied to a variety of deep learning tasks. The most visible application of this kind was the winning entry of the 2012 ImageNet challenge mentioned above.

On March 18, 2020, the Association for Computing Machinery (ACM) named Patrick M. (Pat) Hanrahan and Edwin E. (Ed) Catmull winners of the 2019 ACM A.M. Turing Award for their seminal contributions to 3D computer graphics, and the revolutionary impact of these techniques on computer-generated imagery (CGI) in film and other applications.

Today, according to ACM’s press release, “Computer-animated 3D films are a hugely popular genre in the $138 billion global motion picture industry. 3D computer imagery is also at the heart of the burgeoning video game industry, as well as the emerging areas of virtual reality and augmented reality. Catmull and Hanrahan made pioneering technical contributions that remain integral to how CGI imagery is developed today. Moreover, their insights into graphics processing unit (GPU) programming have had implications beyond computer graphics, impacting diverse fields including data center management and artificial intelligence.”

Like Jim Clark, Catmull was a student of Ivan Sutherland and earned his doctorate from the University of Utah in 1974. As Robert Rivlin wrote in his 1986 book The Algorithmic Image: Graphic Visions of the Computer Age, “Nearly every influential person in the modern computer graphics community has either passed through the University of Utah or come into contact with it in some way.”

In a 2010 interview with Pat Hanrahan, Catmull described the working environment at the U of U:

“Dave Evans was the department chair and Ivan was teaching, but their company, Evans and Sutherland, was taking up all of their spare time. The students were quite independent, which I took as a real positive in the sense that the students had to do something on their own. We were expected to create original work. We were at the frontier, and our job was to expand it. They basically said, “You can check in with us from time to time, and we’ll get back to you, but we’re not running this.”

I thought it worked great! It created this supportive and collegial working environment among us.”

Later in the same discussion, Hanrahan said:

“When I first became interested in computer graphics in graduate school, I heard about this quest to create a complete computer-generated image. At the time, I was very interested in artificial intelligence, which has this idea of a Turing test and of imitating the mind. I thought that making a computer-generated image was a precursor to, or at least as complicated as, modeling the human mind, because you would have to model this whole virtual world, and there would have to be people in that world… and if the virtual world and the people in it didn’t seem intelligent, then that world wouldn’t pass the Turing test and therefore wouldn’t seem plausible.

I guess I was savvy enough to think that we weren’t really going to be able to model human intelligence in my lifetime. So one of the reasons I was interested in computer graphics was that I thought it had good long-term career potential.”

See also

12 AI Milestones: 2. Ramon Llull and his “thinking machine”

12 AI Milestones: 1. Shakey the Robot
