Longterm Wiki

A Timeline of Deep Learning | Flagship Pioneering

web

Data Status

Not fetched

Cited by 1 page

Page | Type | Quality
Deep Learning Revolution Era | Historical | 44.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 5 KB
A Timeline of Deep Learning

1943

 Two researchers in Chicago, Warren McCulloch and Walter Pitts, show that highly simplified models of neurons could be used to encode mathematical functions.
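In modern code, a McCulloch-Pitts unit is a one-liner: it fires when the weighted sum of its binary inputs reaches a threshold. A minimal sketch in Python (the weights and thresholds are illustrative choices, not values from the 1943 paper):

def mcp_neuron(inputs, weights, threshold):
    # Fire (output 1) iff the weighted sum of binary inputs meets the threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logical AND and OR as threshold units -- examples of the "mathematical
# functions" such simplified neurons can encode.
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0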

1958

Frank Rosenblatt, a psychologist at the Cornell Aeronautical Laboratory, builds a basic neural network into a machine called the Perceptron. It takes in images through a camera and learns to categorize them as knobs are turned to adjust the weights of “association cells” inside the machine. Rosenblatt says it should eventually be possible to mass-produce Perceptrons that are conscious of their own existence.
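The knob-turning has a simple mathematical form, later formalized as the perceptron learning rule: whenever the machine misclassifies, nudge each weight toward the correct answer in proportion to its input. A sketch of that rule in the standard textbook formulation (not Rosenblatt's hardware):

def train_perceptron(samples, lr=0.1, epochs=20):
    # samples: list of (feature_vector, label) pairs with labels in {0, 1}.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = y - pred  # -1, 0, or +1
            # "Turn the knobs": shift each weight in proportion to its input.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learns any linearly separable concept, e.g. logical OR.
w, b = train_perceptron([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)])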
 
1959

 Stanford researchers Bernard Widrow and Ted Hoff show how neural networks can predict upcoming bits in a data stream. The technology proves useful in noise filters for phone lines and other communications channels.
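Their update, now known as the Widrow-Hoff or least-mean-squares (LMS) rule, is the heart of such adaptive filters: predict the next sample from recent ones, then shift the filter weights in proportion to the prediction error. A minimal sketch (the tap count, step size, and test signal are illustrative):

import math

def lms_predict(signal, taps=4, mu=0.05):
    # Predict each sample from the previous `taps` samples, adapting the
    # weights with the Widrow-Hoff (LMS) update after every prediction.
    w = [0.0] * taps
    errors = []
    for t in range(taps, len(signal)):
        x = signal[t - taps:t]  # recent history
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        err = signal[t] - y_hat  # prediction error
        w = [wi + mu * err * xi for wi, xi in zip(w, x)]
        errors.append(err)
    return w, errors

# The errors shrink as the filter locks onto the signal's structure --
# the same adaptation that made LMS useful for cleaning up phone lines.
w, errs = lms_predict([math.sin(0.3 * t) for t in range(200)])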

1969

Research on neural networks stalls after MIT’s Marvin Minsky and Seymour Papert argue, in a book called “Perceptrons,” that the method would be too limited to be useful even if neural networks had many more layers of artificial neurons than Rosenblatt’s machine did.

1986

David Rumelhart, Geoff Hinton, and Ronald Williams publish a landmark paper on “backpropagation,” a method for training neural networks by adjusting the weights on the connections between artificial neurons to reduce error. The backpropagation algorithm had been applied in other computing work since the 1970s, but this paper brings it into wide use for training neural networks.
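In modern terms, backpropagation is the chain rule applied layer by layer: run the network forward, measure the error at the output, then propagate error gradients backward to update every weight. A compact NumPy sketch on a toy problem (the architecture, learning rate, and XOR task are illustrative, not from the paper):

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule from the output error down to each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

# After training, `out` should sit close to the XOR targets.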
 
1990

AT&T researcher Yann LeCun, who decades later will oversee AI research at Facebook, uses backpropagation to train a system that can read handwritten numbers on checks.

1992

 Gerald Tesauro of IBM uses reinforcement learning to get a computer to play championship-level backgammon.
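The core of Tesauro's TD-Gammon is temporal-difference learning: nudge a position's estimated value toward the value of the position that follows it, so the program can improve from self-play without labeled examples. A toy sketch of the TD(0) update on a table of state values (the states and reward are made up for illustration):

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=1.0):
    # Move V[state] toward the bootstrapped target: reward plus the
    # (discounted) value of the state that actually followed.
    V[state] += alpha * (reward + gamma * V[next_state] - V[state])

# One episode of an invented three-state game that ends in a win.
V = {"start": 0.0, "mid": 0.0, "win": 1.0}
td0_update(V, "mid", 0.0, "win")    # credit flows back from the win
td0_update(V, "start", 0.0, "mid")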

2006

Hinton and colleagues show how to train a deep neural network quickly by training it one layer at a time.
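The recipe is usually called greedy layer-wise pretraining: train the first layer to model its raw input, freeze it, train the next layer on the first layer's outputs, and so on, before fine-tuning the whole stack. A NumPy sketch of the greedy loop, using a plain linear autoencoder as a stand-in for the restricted Boltzmann machines Hinton actually used:

import numpy as np

def train_autoencoder(X, hidden, lr=0.05, steps=300, seed=0):
    # One tied-weight linear autoencoder: learn W so that X @ W @ W.T
    # reconstructs X. (A stand-in for the 2006 paper's layer modules.)
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    for _ in range(steps):
        E = X @ W @ W.T - X  # reconstruction error
        W -= lr * (X.T @ E @ W + E.T @ X @ W) / len(X)
    return W

def pretrain_stack(X, layer_sizes):
    # Greedy layer-wise pretraining: each layer models the codes
    # produced by the (frozen) layer beneath it.
    weights, data = [], X
    for h in layer_sizes:
        W = train_autoencoder(data, h)
        weights.append(W)
        data = data @ W  # freeze this layer; pass its codes upward
    return weights

stack = pretrain_stack(np.random.default_rng(1).normal(size=(200, 8)), [6, 4, 2])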

2012

“Deep learning” takes off after Hinton and two of his students establish that a neural network trained with their method outperforms other computing techniques on a standard test for classifying images. Their system’s error rate is 15 percent; the next-best entrant is wrong 26 percent of the time.

2014

 Google researcher Ian Goodfellow plays two neural networks off each other to create what he calls a “generative adversarial network.” One network is programmed to generate data—such as an image of a face—while the other, known as the discr
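The cached text breaks off mid-sentence, but the two roles it describes are the generator and the discriminator, trained against each other. A heavily simplified one-dimensional sketch of that adversarial loop (the models and data are toy stand-ins, nothing like Goodfellow's networks):

import math, random

random.seed(0)
sigmoid = lambda t: 1 / (1 + math.exp(-t))

# Generator g(z) = a*z + b tries to mimic real data drawn from Normal(3, 1);
# discriminator D(x) = sigmoid(w*x + c) tries to tell real samples from fakes.
a, b = 1.0, 0.0  # generator parameters
w, c = 0.0, 0.0  # discriminator parameters
lr = 0.01

for step in range(5000):
    z = random.gauss(0, 1)
    real, fake = random.gauss(3, 1), a * z + b
    # Discriminator step: raise D(real), lower D(fake).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - dr) * real - df * fake)
    c += lr * ((1 - dr) - df)
    # Generator step: change (a, b) so the discriminator rates fakes as real.
    df = sigmoid(w * fake + c)
    a += lr * (1 - df) * w * z
    b += lr * (1 - df) * w

# By the end, b should have drifted toward 3, the mean of the real data.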

... (truncated, 5 KB total)
Resource ID: 554f01ec5d7e73e3 | Stable ID: ZTU1ZThjYW