Alin Rauta

Posted on • Originally published at alinrauta.com

A brief summary on the history of AI: Part 1

This article was originally published on my blog

The genesis of artificial intelligence (1943–1955):

The first seeds of AI are generally recognized to have been planted by Warren McCulloch and Walter Pitts in 1943. They came up with a model of artificial neurons in which each neuron is either “on” or “off”, switching to “on” in response to stimulation by a sufficient number of neighboring neurons.

They showed that any computable function could be computed by some network of connected neurons and that all the logical connectives (and, or, not, etc.) could be implemented by simple net structures. McCulloch and Pitts also suggested that networks could learn.
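To make that idea concrete, here is a minimal sketch in Python of how such on/off threshold neurons can implement the basic logical connectives. The weights and thresholds are illustrative choices of mine, not values from the original 1943 paper:

```python
# A minimal sketch of a McCulloch-Pitts style neuron: each unit is either
# "on" (1) or "off" (0) and fires when the weighted sum of its inputs
# reaches a threshold. Weights and thresholds here are illustrative only.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical connectives as single threshold units:
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    return mp_neuron([a], [-1], threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```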

In 1949 Donald Hebb demonstrated a simple updating rule for modifying the connection strengths between neurons. His rule, now called Hebbian learning, remains an influential model to this day.
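In its simplest textbook form, the rule says that the connection between two neurons strengthens in proportion to how often they are active at the same time. A toy sketch, where the learning rate and activity patterns are made up for illustration:

```python
# A toy illustration of Hebbian learning in its simplest form:
# the connection strength grows whenever the two neurons fire together
# (delta_w = eta * x * y). The numbers below are purely illustrative.

eta = 0.1   # learning rate (arbitrary choice)
w = 0.0     # connection strength between neuron A and neuron B

# Pairs of simultaneous activities (1 = firing, 0 = silent)
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for x, y in activity:
    w += eta * x * y   # strengthen the link only when both neurons fire

print(f"final connection strength: {w:.1f}")  # 0.3: grew on the 3 co-firings
```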

In 1950, two Harvard undergraduate students, Marvin Minsky and Dean Edmonds, built the first neural network computer; it was able to simulate a network of 40 neurons.

There were a number of early examples of work that can be characterized as AI, but Alan Turing’s vision was perhaps the most influential. He gave lectures on the topic as early as 1947 at the London Mathematical Society. In his 1950 article “Computing Machinery and Intelligence” he introduced the Turing Test, machine learning, genetic algorithms and reinforcement learning.

The birth of artificial intelligence (1956)

In the summer of 1956, a two-month workshop was organized at Dartmouth, attended by ten men. In their own words, this was the scope of the workshop:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

They agreed to name this new field “Artificial Intelligence”, and this is where it all officially began. The Dartmouth workshop did not lead to any new breakthroughs, but it did introduce to each other all the major figures who, together with their students and colleagues at MIT, CMU, Stanford and IBM, would dominate the field for the next decades.

The golden period (1956–1974)

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground, and of success. Given the limited computers and programming tools of the time, the programs developed were simply astonishing. Computers solving algebra problems, proving theorems in geometry, and learning to speak English seemed unreal to most people at the time.

The most influential and successful directions AI research took in that period were reasoning as search, natural language, and micro-worlds.

All these accomplishments led researchers to express intense optimism, in private and in print, predicting that a fully intelligent machine would be built in less than 20 years. (Yep, we now know they underestimated the task at hand.)

The first AI winter (1974-1980)

The tremendous optimism expressed by AI researchers created high expectations. Their failure to appreciate how hard the problems they faced really were led to a wave of disappointment and a loss of funding, something we today call an “AI winter”. As a result, the rate of innovation and progress in AI stagnated in this period.

Now, what were the problems that led to this AI winter? In essence, they were the absence of the very factors that today make AI fast and useful.

Limited computer power

At the time there was not enough memory or processing power to apply AI to anything useful. It was a mere toy that could do its job only in trivial and simple situations. For instance, natural language programs were demonstrated with a vocabulary of only 20 words, because that was all that would fit in memory.

Not enough data

Many important artificial intelligence applications (like computer vision or natural language) require simply enormous amounts of information about the world, which was not available at that time. No one in 1970 could build a database that large, and no one knew how a program might learn so much information.

To be continued…

The sources used for this article:

  1. One of the most influential textbooks in AI: Artificial Intelligence: A Modern Approach (Stuart Russell, Peter Norvig)
  2. Wikipedia

If you liked this article and want to see more of these, then follow me on twitter and on dev.to.

P.S. I’m an indie maker and I’m writing a book on the basics of AI. If you want to support me and you’re interested in AI, then you can pre-order my book at a discount here (you won’t get charged until I finish the book): https://gumroad.com/l/SXpw/sideproject
