Alan Turing, The Turing Test and Logic Theorist

Series 1: AI Basics — Chapter 3: A Brief History of Artificial Intelligence

Past stories you must know

AI for Non-Techies
8 min read · Mar 11, 2024

Welcome to Chapter 3 of my first Series on AI Basics. The articles are structured into multiple Series, which are categories of AI-related topics, and Chapters, which are concepts within each topic.

Chapter 1: Fundamentally Understanding AI

Chapter 2: Differences between AI, ML, DL, GenAI & LLMs

Chapter 3: A Brief History of Artificial Intelligence [this article]

Chapter 4: How AI Systems Work

Chapter 5: How LLMs Work

This article covers the following sections and is inspired by an article from the Harvard Graduate School of Arts and Sciences’ Science in the News blog:

1. AI’s early years (1950s — 1980s)
2. The boom (1980s — 2000s)
3. The modern era (2010s — present)

1. AI’s early years (1950s — 1980s)

Who is Alan Turing and what does he have to do with AI?

Between 1939 and 1945, Alan Turing, a British mathematician and computer scientist, played a crucial role in breaking German military codes during World War II. The codes were generated by a machine called Enigma, which Germany used to encrypt its military communications.

AI, as we understand it today, did not play a role in deciphering the Enigma messages. The concept of AI and its related fields, such as machine learning and deep learning, had not yet been developed during World War II. However, the efforts to decipher the Enigma messages involved several important precursors to modern AI and computer science. The work at Bletchley Park, led by mathematicians like Alan Turing, laid some of the groundwork for the development of AI and computer science in the following decades.

Alan Turing

Turing suggested that humans use available information as well as reason to solve problems and make decisions, so why couldn’t machines do the same? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.

Although he wrote this seminal paper, something stopped Turing from implementing the idea: the computers of his day. Before 1949, computers lacked a capability we take for granted today: they could execute commands, but they could not store them. In other words, they could be told what to do, but they could not remember what they had done. Computers were also extremely expensive in the 1950s (roughly $200,000 a month to lease) and could be afforded only by the largest companies.

Testing the intelligence of machines: The Turing Test, aka the Imitation Game (yes, the Hollywood movie of the same name is based on Turing)

In the same 1950 paper, Alan Turing proposed what became known as the Turing Test, which he originally called the Imitation Game. It is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. This concept has since become a significant benchmark in the field of artificial intelligence.

The test involves a human judge engaging in a natural language conversation with two other parties: one a human and one a machine posing as a human. Both the human and the machine try to convince the judge that they are the human. If the judge is unable to reliably tell the machine from the human after a period of conversation, then the machine is said to have passed the Turing Test, demonstrating a level of intelligence and conversational ability comparable to a human’s.

The goal of the Turing Test was not necessarily to recreate true human intelligence but rather to determine whether a machine could exhibit behaviors that were indistinguishable from those of humans in a text-only conversation. If a machine could convince a person it was not a machine through natural language discussion, then it could be considered to have achieved a level of intelligence, at least for conversation purposes.

The Turing Test
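To make the setup concrete, here is a toy sketch of the test’s structure in Python. The canned replies and the random-guessing judge are invented stand-ins, not a real evaluation; the point is only to show the blind, randomized pairing of judge, human, and machine.

```python
import random

# A toy sketch of the Turing Test setup: a judge exchanges messages with two
# hidden participants, one human and one machine, and must say which is which.
# The replies and the judge below are invented stand-ins for real conversation.

def human_reply(question: str) -> str:
    return "Swimming, mostly. I grew up near the sea."             # stand-in for a person

def machine_reply(question: str) -> str:
    return "I enjoy swimming; I spent my childhood on the coast."  # stand-in for a chatbot

def run_round(judge) -> bool:
    """Run one round and return True if the judge correctly spots the machine."""
    # Randomly assign the human and the machine to the anonymous labels A and B.
    responders = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(responders)
    labels = dict(zip("AB", responders))

    question = "What is your favourite hobby?"
    transcript = {label: fn(question) for label, (_, fn) in labels.items()}

    guess = judge(transcript)  # the judge names "A" or "B" as the machine
    actual = next(label for label, (who, _) in labels.items() if who == "machine")
    return guess == actual

# A judge who cannot tell the two apart does no better than guessing, so over
# many rounds it identifies the machine only about half the time. That is the
# "cannot reliably tell them apart" outcome the test probes for.
guessing_judge = lambda transcript: random.choice(list(transcript))
accuracy = sum(run_round(guessing_judge) for _ in range(1000)) / 1000
print(f"Judge identified the machine {accuracy:.0%} of the time")
```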

To date, no AI system is widely accepted as having definitively passed a rigorous, blind Turing Test. While recent progress has been impressive, we have not yet achieved human-level artificial general intelligence (AGI). It’s worth noting that the Turing Test itself has been subject to criticism and debate regarding its validity as a measure of intelligence or consciousness in machines. Some argue that passing the Turing Test doesn’t necessarily imply true intelligence or understanding.

The full test aims for human-level conversational ability across all contexts, which remains an ongoing challenge. Progress toward this benchmark continues with newer, more powerful generative models, which we will talk about in the future.

Logic Theorist

Roughly five years after Turing’s paper, Allen Newell, Cliff Shaw, and Herbert Simon developed the Logic Theorist, a program created to imitate the problem-solving capabilities of a human being; many in the field consider it the first AI program. The work was presented in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence, hosted by Marvin Minsky and John McCarthy, the man who coined the term artificial intelligence.

John McCarthy coined the term Artificial Intelligence at this conference in 1956. Credit: https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

Although the event didn't achieve its objective of agreeing on standard methods for AI, it provided a launch pad for the next twenty years of AI research.

Between 1957 and 1974, AI thrived as computers became faster, cheaper, and more accessible, with improved storage capacity. Additionally, machine learning algorithms advanced, and people became more adept at selecting the appropriate algorithm for their tasks. In 1970, Marvin Minsky famously expressed optimism to Life Magazine, suggesting that within three to eight years, a machine with the general intelligence of an average human would be developed. Despite this optimism and some foundational progress, achieving the ultimate goals of natural language processing, abstract thinking, and self-recognition remained distant objectives.

Breaking through the initial challenges of AI uncovered a host of obstacles, primarily centered around the severe limitations in computational power. Computers could not store vast amounts of information or process it swiftly. For tasks like communication, understanding the meanings of numerous words and their combinations was essential. Hans Moravec, a doctoral student under McCarthy, remarked that computers remained millions of times too feeble to demonstrate intelligence. As patience waned, so did funding, leading to a decade-long slowdown in research progress.

2. The boom (1980s — 2000s)

The 1980s marked a period of rapid growth and interest in AI, known as the “AI boom.” During this time, AI experienced a resurgence driven by two key factors: a broadening of the algorithmic toolkit and increased funding from governments and industry.

John Hopfield and David Rumelhart played a pivotal role in popularizing neural network (“deep learning”) techniques that enable computers to learn from experience. Around the same time, Edward Feigenbaum introduced expert systems, which replicate the decision-making process of a human expert: the program asks an expert in a field how to respond in a given situation, and once these responses have been captured for virtually every situation, non-experts can receive advice from the program.
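To make the idea concrete, here is a minimal, hypothetical sketch of that pattern: the expert’s knowledge is captured as explicit if-then rules, and a simple matching loop hands the advice back to a non-expert. The rules, facts, and names below are invented for illustration and are not drawn from any historical system.

```python
# A minimal, hypothetical expert-system sketch: knowledge elicited from a human
# expert is stored as explicit if-then rules, and the program applies whichever
# rules match the facts a non-expert user provides.

# Each rule pairs the conditions the expert described with the expert's advice.
RULES = [
    ({"engine_cranks": False, "battery_ok": False}, "Charge or replace the battery."),
    ({"engine_cranks": False, "battery_ok": True},  "Have the starter motor checked."),
    ({"engine_cranks": True,  "fuel_level_low": True}, "Refuel the car."),
]

def advise(facts: dict) -> list[str]:
    """Return the advice of every rule whose conditions all match the given facts."""
    return [
        recommendation
        for conditions, recommendation in RULES
        if all(facts.get(key) == value for key, value in conditions.items())
    ]

# A non-expert describes the situation and gets the expert's answer back.
print(advise({"engine_cranks": False, "battery_ok": False}))
# ['Charge or replace the battery.']
```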

As part of its Fifth Generation Computer Project (FGCP), the Japanese government provided significant funding for expert systems and other AI-related initiatives. Between 1982 and 1990, it allocated $400 million toward revolutionizing computer processing, implementing logic programming, and advancing artificial intelligence. Despite these ambitious endeavors, most of the goals were ultimately not achieved.

Following the AI boom, an AI Winter set in between 1987 and 1993, characterized by low interest, reduced funding, and limited breakthroughs in AI research. The collapse of the market for specialized AI hardware and the underwhelming performance of expert systems contributed to this period of decreased support for AI initiatives.

Despite the challenges of the AI Winter, the 1990s saw impressive strides in AI research, including the development of systems that could defeat a world champion at chess.

Throughout the 1990s and 2000s, several significant milestones in artificial intelligence were reached. A notable achievement occurred in 1997, when IBM’s Deep Blue, a chess-playing computer program, defeated reigning world chess champion and grandmaster Garry Kasparov. It was the first time a computer had defeated a reigning world chess champion, and it marked a major step toward artificially intelligent decision-making programs.

3. The modern era (2010s — present)

The impact of Moore’s Law on AI at the turn of the millennium

During the late 1990s and 2000s it was often observed that we hadn’t become dramatically smarter about how to program AI, so looking back, a fair question to ask is: what caused the significant advances in AI? It turns out to be the rate at which computing has been getting cheaper and more powerful.

Moore’s Law, first articulated by Gordon Moore in 1965, is commonly summarized as saying that the memory and speed of computers double every two years. More precisely, it observes that the number of components that can be placed on a single chip at the lowest cost per component doubles roughly every two years. It is this steady progress that ultimately made it possible to overcome the challenges faced in the early and boom periods of AI described above. That is how IBM’s Deep Blue could beat Garry Kasparov in 1997 and Google’s AlphaGo could beat Ke Jie in 2017.

In other words, we tend to saturate AI’s capabilities at the level of the computational power available at the time (computer storage and processing speed), and then wait for Moore’s Law to catch up again.
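As a rough, back-of-the-envelope illustration of that compounding (using the simplified “double every two years” rule above, not exact hardware figures), the twenty years between those two milestones alone imply roughly a thousandfold increase in capability:

```python
# Back-of-the-envelope Moore's Law arithmetic, assuming capability doubles
# every two years (a simplification of Moore's original observation).

def growth_factor(start_year: int, end_year: int, doubling_period: float = 2.0) -> float:
    """How many times more capable hardware is at end_year than at start_year."""
    return 2 ** ((end_year - start_year) / doubling_period)

# From Deep Blue beating Kasparov (1997) to AlphaGo beating Ke Jie (2017):
print(f"{growth_factor(1997, 2017):,.0f}x")  # 2**10 = 1,024x
```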

Thereafter, intelligent assistants like Siri (2011), Alexa (2014), and Google Assistant (2016) became ubiquitous and expanded natural language interaction capabilities. Advances in deep learning enabled machines to learn from vast amounts of data without explicit programming. This has led to progress in areas like speech transcription, emotion recognition from audio or video recordings, and image generation from text inputs, along with the specific developments below:

Deep Learning Revolution: The 2010s marked a significant shift toward deep learning, with deep neural networks like AlexNet (2012) outperforming traditional models in tasks such as image recognition.

Transformer Architecture: Introduced in 2017, the transformer architecture gave neural networks a much better way to capture dependencies between words, and it has become the dominant architecture for large language models like GPT-4.

Diffusion Models: First described in 2015, diffusion models began powering image generators such as DALL-E 2 in the 2020s, showcasing the rapid advance of generative AI.

There were other developments, and other lulls, in AI’s remarkable history, but the most important milestones have been called out here. Now that you have an idea of the history of this revolutionary technology, we’ll talk about how AI systems work in the next chapter in a few days.

See you there!
