The History of Artificial Intelligence

AI research has been driven by the potential for computers to automate tasks and process information quickly, starting with the invention of programmable digital computers in the 1940s.
Written on Jan 5, 2023 in History of AI


As early as 1943, neuroscientist Warren McCulloch and logician Walter Pitts proposed a simplified mathematical model of the neuron in their paper A Logical Calculus of the Ideas Immanent in Nervous Activity.
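
Purely as an illustration of the idea (not the paper's exact formalism), the sketch below shows a binary threshold unit of the kind McCulloch and Pitts described: it fires only when enough excitatory inputs are active and no inhibitory input is active. The function name, inputs, and threshold are hypothetical example values.

```python
# A minimal sketch (not the original formalism) of a McCulloch-Pitts-style
# threshold unit: fire when enough excitatory inputs are on, unless any
# inhibitory input is active. All values below are illustrative.

def mp_neuron(excitatory, inhibitory, threshold):
    if any(inhibitory):                      # an active inhibitory input blocks firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Example: a two-input unit acting as a logical AND gate.
print(mp_neuron([1, 1], [], threshold=2))    # -> 1
print(mp_neuron([1, 0], [], threshold=2))    # -> 0
```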

This small step towards the creation of neural networks was followed by the ‘Age of Ideas’. Between 1950 and 1960, several models were proposed that could potentially make AI a reality. During this period, mathematician Alan Turing proposed the Turing Test to determine whether a machine could exhibit human-like intelligent behaviour.

In the summer of 1956, computer scientists Marvin Minsky and John McCarthy hosted the “conference that started it all” at Dartmouth College, which laid down the fundamental principles of AI, as described in the Harvard blog The History of Artificial Intelligence.

Two years later, in 1958, Frank Rosenblatt proposed the ‘perceptron’ to overcome the limitations of the McCulloch-Pitts neuron model and introduced the concept of supervised training, as seen in the article Rosenblatt's perceptron, the first modern neural network.
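
As a rough illustration of what supervised training meant for the perceptron, the sketch below runs Rosenblatt-style error-driven weight updates on a toy linearly separable problem (logical OR). The learning rate, dataset, and number of passes are illustrative choices, not taken from the article or Rosenblatt's paper.

```python
# A compact sketch of perceptron learning on a toy problem (logical OR).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])                     # OR labels

w = np.zeros(2)                                # weights and bias start at zero
b = 0.0
lr = 0.1

for _ in range(10):                            # a few passes over the data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        error = target - pred                  # supervised signal: label minus prediction
        w += lr * error * xi                   # nudge weights only when the unit is wrong
        b += lr * error

print([(1 if xi @ w + b > 0 else 0) for xi in X])  # -> [0, 1, 1, 1]
```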

In the 1960s, the backpropagation algorithm was introduced. This algorithm would significantly change the way neural networks were trained, making it possible to train multi-layer networks on problems that had previously been out of reach. However, it was popularized only in 1986 by David Rumelhart, Geoffrey Hinton, and Ronald Williams in their paper Learning representations by back-propagating errors.

Despite all of these advancements, the lack of computational power and data hindered the development of AI technologies. By the late 1970s, governments and corporations were losing faith in AI and cut funding, leading to the first ‘AI winter’.

However, with the introduction of early deep learning models in the 1980s, AI research slowly began to pick up pace again.

It was during this time that Kunihiko Fukushima developed the neocognitron, a neural network with overlapping receptive fields, similar to those in the visual cortex of animals. The neocognitron introduced two types of layers, feature-extracting cells and pooling cells, that would later be used by researchers such as Alex Waibel and Yann LeCun to develop convolutional neural networks.

In 1982, John Hopfield introduced the Hopfield network, an early form of recurrent neural network (RNN). RNNs enable neural networks to learn patterns or sequences over time.
However, RNNs faced the vanishing gradient problem. To overcome this, the LSTM was proposed by Sepp Hochreiter and Jürgen Schmidhuber in 1997. The initial version of the LSTM block included memory cells and input and output gates. In subsequent years, Felix Gers, his advisor Jürgen Schmidhuber, and Fred Cummins made significant changes to the architecture, notably adding the forget gate, enabling the LSTM to perform more efficiently.
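
For readers curious what those gates actually do, here is a simplified sketch of a single LSTM step with input, forget, and output gates (the forget gate being the later addition). The weight shapes, random initialization, and toy sequence are illustrative assumptions, not the original formulation.

```python
# A simplified sketch of one LSTM time step; all sizes and values are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One time step: returns the new hidden state h and cell state c."""
    z = W @ np.concatenate([x, h_prev]) + b       # joint projection of input and previous state
    i, f, o, g = np.split(z, 4)                   # gate pre-activations
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input, forget, output gates
    g = np.tanh(g)                                # candidate cell update
    c = f * c_prev + i * g                        # gated memory update keeps gradients flowing
    h = o * np.tanh(c)                            # exposed hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hidden, n_in + n_hidden))
b = np.zeros(4 * n_hidden)

h = c = np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):              # a toy sequence of 5 steps
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)                                    # (4,)
```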

Developments in the 1990s led to IBM’s chess-playing computer ‘Deep Blue’ defeating then world chess champion Garry Kasparov in 1997. The 2000s and 2010s saw some landmark changes, with AI technologies such as the Roomba and voice assistants such as Siri and Alexa coming into homes.

The past decade saw rapid growth in the field of AI.

In 2012, Geoffrey Hinton, often described as a godfather of deep learning, and his colleagues introduced the ‘dropout’ regularization technique as an efficient way of performing approximate model averaging with neural networks.
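
As a rough sketch of the idea, the snippet below applies inverted dropout to a layer's activations at training time, randomly zeroing units and rescaling the rest so that nothing extra is needed at inference; the drop probability and toy activations are illustrative choices, not from the paper.

```python
# A small, illustrative sketch of inverted dropout on a layer's activations.
import numpy as np

def dropout(activations, p_drop=0.5, training=True, rng=None):
    """Zero a random subset of units and rescale the survivors."""
    if not training or p_drop == 0.0:
        return activations                        # no-op at inference time
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(activations.shape) >= p_drop
    # Scaling by 1/(1 - p_drop) keeps the expected activation unchanged,
    # which is what lets inference skip dropout entirely.
    return activations * mask / (1.0 - p_drop)

a = np.ones(8)
print(dropout(a, p_drop=0.5))   # roughly half the units zeroed, the rest scaled to 2.0
```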

Shortly after, in 2014, the chatbot Eugene Goostman was reported to have passed the Turing test, at least in some regard. Eugene Goostman was designed to impersonate a 13-year-old boy and convinced about 33% of the panel of judges that it was human. Here is an actual conversation between Eugene and TIME’s Doug Aamoth: Interview: Eugene Goostman Passes the Turing Test | Time.

The same year, Ian Goodfellow and his colleagues introduced ‘Generative Adversarial Networks’ (GANs). It has been said that GANs will bring about the next AI revolution.

From the Turing Test in the 1950s to the cutting-edge technologies of today, AI has seen exponential growth over its decades-long journey. But what does the future look like? As AI becomes increasingly embedded in our society, it will change how we work and live. Will AI usher in a new era of prosperity or challenge the very nature of human intelligence?
