From Turing to Watson: The development of thinking systems
Scientists have been working on artificial intelligence since the middle of the last century. Their goal: to develop machines that learn and think like humans. Here is an overview of the key technological milestones they have reached along the way.
1936: Turing machine
The British mathematician Alan Turing applies his theories to prove that a computing machine — known as a ‘Turing machine’ — would be capable of executing cognitive processes, provided they can be broken down into individual steps and represented by an algorithm. In doing so, he lays the foundation for what we call artificial intelligence today.
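Turing's idea — that any process reducible to discrete steps can be carried out mechanically — can be illustrated with a minimal simulator. The machine, alphabet, and rule table below are illustrative choices, not taken from Turing's 1936 paper:

```python
# A minimal Turing machine: a tape of symbols, a read/write head, and a
# table of transition rules of the form
#   (state, symbol) -> (new_state, symbol_to_write, head_move)
def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example rules: flip every bit of a binary string, halt at the first blank.
rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine("1011", rules))  # → 0100
```

Despite its simplicity, this step-by-step model is computationally universal: any algorithm can, in principle, be expressed as such a rule table.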
1956: The history begins: the term ‘AI’ is coined
In the summer of 1956, scientists gather for a conference at Dartmouth College in New Hampshire. They believe that aspects of learning as well as other characteristics of human intelligence can be simulated by machines. The computer scientist John McCarthy proposes calling this ‘artificial intelligence.’ The world’s first AI program, ‘Logic Theorist’ — which manages to prove several dozen mathematical theorems — is also presented during the conference.
1966: Birth of the first chatbot
The German-American computer scientist Joseph Weizenbaum of the Massachusetts Institute of Technology invents a computer program that communicates with humans. ‘ELIZA’ uses scripts to simulate various conversation partners such as a psychotherapist. Weizenbaum is surprised at the simplicity of the means required for ELIZA to create the illusion of a human conversation partner.
1972: AI enters the medical field
With ‘MYCIN’, artificial intelligence finds its way into medical practice: the expert system developed by Ted Shortliffe at Stanford University supports the diagnosis and antibiotic treatment of infectious diseases. Expert systems are computer programs that bundle the knowledge of a specialist field using rules and a knowledge base, and in medicine they are used to support diagnosis and treatment decisions.
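The core of such a system is a knowledge base of if-then rules plus an inference engine that chains them together. The following sketch shows the forward-chaining idea with invented toy rules — this is not MYCIN's actual medical knowledge or its (more sophisticated, certainty-factor-based) reasoning:

```python
# Toy rule base: (set of required facts, conclusion to add).
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "recommend_specialist"),
    ({"fever", "cough"}, "suspect_respiratory_infection"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are all known,
    until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "stiff_neck"}, RULES)))
# → ['fever', 'recommend_specialist', 'stiff_neck', 'suspect_meningitis']
```

Note how the second rule only fires after the first has added its conclusion — conclusions become premises for further rules, which is what lets a rule base encode chains of expert reasoning.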
1986: ‘NETtalk’ speaks
The computer is given a voice for the first time. Terrence J. Sejnowski and Charles Rosenberg teach their ‘NETtalk’ program to speak by feeding it sample sentences and phoneme chains. NETtalk is able to read words and pronounce them correctly, and can apply what it has learned to words it does not know. It is one of the early artificial neural networks — programs that are supplied with large datasets and are able to draw their own conclusions on this basis. In both structure and function, they loosely resemble the human brain.
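The "learn from examples" principle behind such networks can be shown with a single artificial neuron — a far smaller cousin of NETtalk's network. The task (learning logical OR), learning rate, and epoch count below are arbitrary illustrative choices:

```python
# A single neuron trained with the classic perceptron rule:
# nudge the weights whenever a prediction is wrong.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, ±1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Four examples of logical OR; the neuron infers the pattern on its own.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # → [0, 1, 1, 1]
```

Nothing in the code states the OR rule explicitly — the behaviour is encoded in the learned weights, just as NETtalk's pronunciation rules were encoded in its connection strengths rather than programmed by hand.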
1997: Computer beats world chess champion
The AI chess computer ‘Deep Blue’ from IBM defeats the reigning chess world champion Garry Kasparov in a tournament. This is considered a historic success in an area previously dominated by humans. Critics, however, object that Deep Blue wins merely through the brute-force calculation of vast numbers of possible moves, rather than with cognitive intelligence.
2011: AI enters everyday life
Leaps in hardware and software technology pave the way for artificial intelligence to enter everyday life. Powerful processors and graphics cards in computers, smartphones, and tablets give regular consumers access to AI programs. Digital assistants in particular enjoy great popularity: Apple’s ‘Siri’ comes to the market in 2011, Microsoft introduces the ‘Cortana’ software in 2014, and Amazon presents Amazon Echo with the voice service ‘Alexa’ in 2015.
2011: AI ‘Watson’ wins quiz show
The computer program ‘Watson’ competes in the U.S. television quiz show Jeopardy! in the form of an animated on-screen symbol and wins against two human champions. In doing so, Watson proves that it can process natural language and answer difficult questions quickly.
2018: AI debates space travel and makes a hairdressing appointment
These two examples demonstrate the capabilities of artificial intelligence: In June, ‘Project Debater’ from IBM debated complex topics with two master debaters — and performed remarkably well. A few weeks before, Google demonstrated at a conference how the AI program ‘Duplex’ phones a hairdresser and conversationally makes an appointment — without the person on the other end of the line noticing that they are talking to a machine.
20xx: The near future is intelligent
Decades of research notwithstanding, artificial intelligence is still in its relative infancy. It needs to become more reliable and secure against manipulation before it can be used in sensitive areas such as autonomous driving or medicine. Another goal is for AI systems to learn to explain their decisions so that humans can understand them and better study how AI arrives at its conclusions. Numerous scientists, such as Qreatiq-endowed professor Matthias Hein at the University of Tübingen, are working on these topics.