This article was originally published on my blog
Here we go again (1980-1987)
In the 1980s a form of AI program called “expert systems” was successfully adopted by corporations around the world. By 1986, the R1 expert system was saving Digital Equipment Corporation an estimated $40 million a year. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project, and the United States responded by forming the Microelectronics and Computer Technology Corporation (MCC), a research consortium designed to assure national competitiveness. Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988, including hundreds of companies building expert systems, vision systems, robots, and software and hardware specialized for these purposes.
Expert systems are programs that solve problems in a specific domain of knowledge using logical rules derived from the knowledge of human experts. In the same period, a “knowledge revolution” was taking place: AI researchers were beginning to believe that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways.
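To get a feel for how such a system works, here is a minimal, purely illustrative sketch in Python: a couple of hypothetical IF-THEN rules are applied repeatedly to a set of known facts until nothing new can be derived (a technique known as forward chaining). The rules and facts below are toy examples I made up; real expert systems of the era encoded thousands of rules.

```python
# A toy forward-chaining rule engine: hypothetical rules and facts,
# for illustration only.

RULES = [
    # (premises, conclusion)
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are known and it adds something new.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES))
# -> includes 'possible_flu' and 'see_doctor'
```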
Another beneficial event for AI was the return of neural networks. John Hopfield's proof that a form of neural network could learn and process information in an entirely new way, together with the popularisation of “backpropagation”, revived the field of connectionism (artificial neural networks).
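For the curious, here is a tiny, purely illustrative sketch of a Hopfield-style network in Python: patterns are stored with a Hebbian weight rule, and a corrupted pattern is recovered by repeatedly updating neurons until the state settles. The patterns themselves are arbitrary toy examples.

```python
import numpy as np

# Two toy patterns of +1/-1 values to store in the network.
patterns = np.array([[1,  1, 1,  1, -1, -1, -1, -1],
                     [1, -1, 1, -1,  1, -1,  1, -1]])

# Hebbian weight matrix (no self-connections).
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, sweeps=5):
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):          # asynchronous neuron updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = np.array([-1, 1, 1, 1, -1, -1, -1, -1])  # first pattern with one bit flipped
print(recall(noisy))                             # -> [ 1  1  1  1 -1 -1 -1 -1]
```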
The second AI winter (1987-1993)
Unfortunately, another “AI winter” followed, as many companies fell by the wayside after failing to deliver on extravagant promises. In the late 1980s and early 1990s, AI suffered a series of financial setbacks. Expert systems proved too expensive to maintain and useful only in a few narrow contexts. Japan's fifth generation project did not achieve its ambitious goals, and the United States cut its funding for AI.
In the same period, a new approach to AI emerged. A group of researchers believed that a body is essential for an intelligent machine: it has to move, perceive, and deal with the real world. The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 1970s from a successful background in theoretical neuroscience to lead the group studying computer vision.
In terms of methodology, AI adopted the scientific method. To be accepted, hypotheses must be subjected to rigorous empirical experiments, and the results must be analyzed statistically for their significance. A relevant example is the field of speech recognition. In the 1970s, a wide variety of architectures and approaches were tried. Many of these were rather ad hoc and fragile, and were demonstrated on only a few specially selected examples. In later years, approaches based on hidden Markov models (HMMs) came to dominate the area.
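To illustrate the flavour of the HMM approach (with made-up states, observations, and probabilities rather than a real acoustic model), here is a toy Viterbi decoder in Python that finds the most likely sequence of hidden states behind a sequence of observations:

```python
import numpy as np

# Toy HMM: hidden states, start/transition/emission probabilities are invented.
states = ["silence", "speech"]
start = np.array([0.7, 0.3])                  # P(first state)
trans = np.array([[0.8, 0.2],                 # P(next state | current state)
                  [0.3, 0.7]])
emit  = np.array([[0.9, 0.1],                 # P(observation | state)
                  [0.2, 0.8]])                # observations: 0 = quiet, 1 = loud

def viterbi(obs):
    V = start * emit[:, obs[0]]               # best path probability ending in each state
    back = []
    for o in obs[1:]:
        scores = V[:, None] * trans           # extend every path by one step
        back.append(scores.argmax(axis=0))    # remember the best predecessor
        V = scores.max(axis=0) * emit[:, o]
    path = [int(V.argmax())]
    for ptr in reversed(back):                # trace the best path backwards
        path.append(int(ptr[path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi([0, 1, 1, 0]))                  # -> ['silence', 'speech', 'speech', 'silence']
```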
It starts to take shape (1993-2011)
The field of AI started to be used successfully throughout the technology industry, although largely behind the scenes (after two AI winters caused by unrealistically high expectations, people had become more prudent). Some of the success was due to increasing computer power, and some was achieved by applying the scientific method to research. AI fragmented into competing subfields focused on particular problems or approaches. AI was both more cautious and more successful than it had ever been.
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. Moreover, in 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.
The 1990s also saw the emergence of intelligent agents: systems that perceive their environment and take actions that maximize their chances of success. Despite these successes, some influential founders of AI, including John McCarthy, Marvin Minsky, Nils Nilsson and Patrick Winston, expressed discontent with the progress of the field. In their view, AI should put less emphasis on creating ever-improved versions of applications that are good at one specific task, such as driving a car, playing chess, or recognizing speech, and should instead return to its roots of striving for, in Simon's words, “machines that think, that learn and that create.”
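To make the agent idea concrete, here is a miniature version of the classic two-square vacuum world often used to introduce agents: the agent perceives its location and whether that square is dirty, chooses an action, and the environment applies it. The world layout and scoring are deliberately simplified and purely illustrative.

```python
# A toy agent loop in a two-square vacuum world (illustrative only).
world = {"A": "dirty", "B": "dirty"}
location, score = "A", 0

def agent(percept):
    loc, status = percept
    if status == "dirty":
        return "suck"
    return "right" if loc == "A" else "left"   # move on if the square is clean

for step in range(4):
    action = agent((location, world[location]))
    if action == "suck":
        world[location] = "clean"
        score += 1                             # reward for cleaning a square
    elif action == "right":
        location = "B"
    else:
        location = "A"

print(world, score)                            # -> {'A': 'clean', 'B': 'clean'} 2
```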
A new era of AI (2011-present)
The first decades of the 21st century brought access to large amounts of data (known as “big data”), cheaper and faster computers, and advanced machine learning techniques. One of these factors has played an especially influential role in how AI changed its focus: data. Throughout the 60-year history of computer science, the emphasis has been on the algorithm as the main subject of study. But some recent work in AI suggests that for many problems, it makes more sense to worry about the data and be less picky about which algorithm to apply. That is how important data has become, and why it is now arguably one of the most sought-after resources in the world.
AI today is present in many real-world applications such as self-driving cars, speech recognition, spam filters, the social media algorithms that govern your feed, and robotics. These are just a few examples, not an exhaustive list.
The sources used for this article:
- One of the most influential textbooks in AI: Artificial Intelligence: A Modern Approach (Stuart Russell and Peter Norvig)
- Wikipedia
If you liked this article and want to see more like it, then follow me on Twitter and on dev.to.
P.S. I’m an indie maker and I’m writing a book on the basics of AI. If you want to support me and you’re interested in AI, then you can pre-order my book at a discount here (you won’t get charged until I finish the book): https://gumroad.com/l/SXpw/sideproject