Introduction to Artificial Intelligence

Understand the differences between human and artificial intelligence, how we measure intelligence, and how AI has evolved.

Overview

Humans are social beings with the unique ability to think and reason. Imagine a child tasked with assembling a simple puzzle. The child starts by examining the individual pieces, recognizing colors and shapes, and mentally categorizing them. They use their memory to recall the image on the puzzle box and match pieces against this mental picture. When a piece doesn't fit, they try different orientations and combinations until the puzzle is finally solved.

Intelligence is the ability to acquire and apply knowledge and skills to solve problems, adapt to new situations, and learn from experience. It involves reasoning, problem-solving, planning, abstract thinking, and comprehension. An entity that can perform most of these tasks can be considered intelligent. Intelligence matters enough that we have a standard measure of it in humans: the intelligence quotient (IQ).

Now that we’ve established a basic idea of intelligence, a series of questions arises: Can machines be intelligent, too? Can we build such machines? Which machines are intelligent? A branch of computer science answers all these questions.

Artificial intelligence (AI) is the branch of computer science that aims to create machines and systems that mimic human-like intellectual behavior, that is, machines that can think and act like rational humans.

AI involves developing algorithms and systems to perform tasks requiring human intelligence, such as visual perception, speech recognition, decision-making, language translation, and more.

Take a moment and answer this question:

Can we consider a scientific calculator an intelligent device?

Turing test

Now, you must be wondering: if we have a measure for human intelligence, how can we measure whether a machine is intelligent? The Turing Test, proposed by British mathematician and computer scientist Alan Turing in 1950, is a way to check whether a computer can behave so much like a human that the two become indistinguishable. How can you perform this test? We only need three players (a human judge, a human, and a machine) and a text chat. The judge converses with both hidden players; if the judge can't tell which replies come from the computer and which from the human, the computer is said to have passed the Turing Test, i.e., it is a human-like AI.
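The structure of the test can be sketched as a toy simulation. In this hypothetical setup (the function names and replies are ours, purely for illustration), the machine imitates the human perfectly, so the judge can only guess at random and the machine "passes":

```python
import random

def human(prompt):
    """Stand-in for a real person at the keyboard."""
    return f"I'd say: {prompt} That's a hard one."

def machine(prompt):
    """A toy program trying to imitate the human's style.
    Here it happens to produce identical replies."""
    return f"I'd say: {prompt} That's a hard one."

def run_round(prompt):
    """One round of the imitation game; returns True if the judge
    correctly identifies the machine."""
    # Seat the two players anonymously as A and B.
    players = [("human", human), ("machine", machine)]
    random.shuffle(players)
    replies = {seat: fn(prompt) for seat, (_, fn) in zip("AB", players)}
    # The replies are indistinguishable, so the judge can only guess.
    judge_guess = random.choice("AB")
    truth = {seat: name for seat, (name, _) in zip("AB", players)}
    return truth[judge_guess] == "machine"

# Over many rounds, the judge spots the machine only about half the time:
wins = sum(run_round("Can machines think?") for _ in range(10_000))
print(wins / 10_000)  # close to 0.5 -> the machine "passes"
```

A real Turing Test, of course, involves a human judge and free-form conversation; the simulation only shows why indistinguishable replies reduce the judge to chance-level guessing.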

Turing test: The judge on the right side interacts with both a human and a robot, struggling to determine which one is which

History of AI

AI’s story is often traced back to 1642, when Blaise Pascal built the first mechanical calculator. In 1837, Charles Babbage and Ada Lovelace designed the first programmable machine.

The reference to Blaise Pascal’s mechanical calculator is an important milestone in the history of computing, but it is not part of AI history proper. This calculator was designed to perform basic arithmetic operations like addition and subtraction. While this marked a significant advancement in mechanical computation, it could not learn, reason, or adapt—which are key characteristics of AI. AI’s history more accurately begins in the mid-20th century, with the development of the first computers capable of performing tasks beyond arithmetic calculations. 

In 1943, Warren McCulloch and Walter Pitts laid the foundations for artificial neural networks, bridging the gap between the brain and machines. In 1950, Alan Turing published the paper “Computing Machinery and Intelligence,” introducing the Turing test for machine intelligence. In 1956, the term “artificial intelligence” was coined at the Dartmouth Conference. In the mid-1960s, ELIZA, the first chatbot, was developed at MIT to converse with humans. Expert systems, created in the 1980s, showcased the early potential of AI. However, the complexity of real-world problems soon revealed the limitations of these early AI systems, leading to a period known as the “AI winter” during the 1970s and 1980s, characterized by reduced funding and interest.
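ELIZA worked by matching the user's input against hand-written patterns and echoing parts of it back in a template. A minimal sketch of that idea (the rules below are illustrative, not Weizenbaum's original script):

```python
import re

# ELIZA-style rules: a regex pattern and a response template.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(message):
    """Match the message against each rule in order and fill
    the captured text into the response template."""
    text = message.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am feeling sad"))  # How long have you been feeling sad?
print(respond("Hello there"))       # Please, go on.
```

There is no learning or understanding here, only pattern matching, which is exactly why ELIZA's apparent intelligence was so surprising at the time.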

History of AI over the years

In 1997, the reigning world chess champion, Garry Kasparov, was defeated by IBM's Deep Blue computer program. In 2002, iRobot launched Roomba, an autonomous vacuum-cleaning robot that could detect and avoid obstacles. In 2009, Google began building a self-driving car capable of navigating city streets.

The 21st century has witnessed exponential growth in AI research and applications. Machine learning (ML), a subset of AI focused on developing algorithms that allow computers to learn from data and make predictions, gained prominence, as did deep learning (DL), a subset of machine learning built on neural networks with many layers. Breakthroughs in natural language processing (NLP), computer vision, and robotics have led to practical applications in industries including healthcare, finance, and transportation. AI technologies like virtual assistants, e.g., Siri and Alexa, and recommendation systems have become integral to everyday life.
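To make "learning from data and making predictions" concrete, here is a minimal sketch: fitting a line y = w*x + b to example points with the closed-form least-squares solution, then using the learned parameters to predict an unseen input. The data (hours studied vs. exam score) is made up purely for illustration:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = w*x + b: 'learning' w and b from data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - w * mean_x
    return w, b

# Training data: hours studied -> exam score (made-up numbers).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 68]

w, b = fit_line(hours, scores)

def predict(x):
    """Apply the learned parameters to a new input."""
    return w * x + b

print(round(predict(6)))  # predicted score for 6 hours of study -> 72
```

Modern ML models are vastly more complex, but the core loop is the same: adjust parameters to fit observed data, then predict on new inputs.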

The latest leap in AI is marked by the rise of generative AI and large language models (LLMs), such as OpenAI's GPT, Google's Gemini, and Microsoft's Bing AI. Capable of understanding and generating human-like text, these models have revolutionized how we interact with machines: they can write essays, create poetry, generate code, and even hold coherent, contextually relevant conversations.