
The Evolution of AI: Interpretant is All You Need

Junaid Akhtar
Oct 20, 2023
13 min read


Instead of showcasing ChatGPT as an alien event that poofed out of nothingness, this blog is an attempt at contextualizing ChatGPT’s development within the history of ideas being pursued in the domain of artificial intelligence (AI). But it doesn’t stop there. It further contextualizes those ideas as neatly placed within the history of ideas pursued by the foundational domain of human knowledge—philosophy. By understanding the evolution of AI, we can understand not only what ChatGPT brings to the table for humanity, but also where it’s going to hit the wall. But even more importantly, this historical route could be the AI community’s guide for one way forward.

Yoshua Bengio is a professor of computer science at the University of Montreal and a recipient of the prestigious 2018 Turing Award for his contributions to deep learning. He recently posed a very pertinent question to the deep learning community, asking whether “we have the architectures we need, and all that remains is to develop better hardware and datasets so we can keep scaling up? Or are we still missing something?” He then responded on their behalf (DeepLearning.AI, 2023, “Yoshua Bengio Wants Neural Nets That Reason”):

I’ve been studying, in collaboration with neuroscientists and cognitive neuroscientists, the performance gap between state-of-the-art systems and humans. The differences lead me to believe that simply scaling up is not going to fill the gap. Instead, building into our models a human-like ability to discover and reason with high-level concepts and relationships between them can make the difference.

In a previous blog, we established that human language and the way we learn it (Choi, Yejin, “Why AI Is Incredibly Smart and Shockingly Stupid,” TED, April 23, 2023, https://www.youtube.com/watch?v=SvBR0OGT5VI) is “never about predicting the next word, but it’s about making sense of the world and how the world works.” What we can take away from Helen Keller, Yejin Choi, and Yoshua Bengio is simply this: We want to know!

We want to know—the world

Now, there are levels to knowing. At the most primitive level, we need to know what’s out there—ontology. We start describing things as they are through matter-of-fact propositional statements: X is Y. However, this could not be the only level of knowing what’s out there, as most individuals cannot remain perpetually satisfied believing what they know about the world to be doubtlessly true. So it becomes important to also investigate the best ways of knowing or knowing how to get to know our ontological truths. This is the realm of second-order thinking, and there is one branch of philosophy that has busied itself in examining the very nature of belief, knowledge, and the justifications of beliefs as knowledge—epistemology.

Deduction and induction are two widely known and established methods of reasoning within epistemology.

  • Deductive reasoning works its way from general principles towards particular instances/statements and arrives at conclusions with certainty.

  • Inductive reasoning moves in the opposite direction, from particulars to general statements.

Both reasoning methods have their own strengths. However, an American logician, Charles Sanders Peirce, found that no process of scientific inquiry could be complete without a third type of reasoning. He called it abductive reasoning. Let’s understand the three forms of reasoning and their interrelationship through an example!

Penguins cannot fly
AJ is a penguin
Therefore, AJ cannot fly
Deductive reasoning

If you really think hard about it, where did the general principle that the deductive argument unquestioningly starts with come from? Of course, someone would have observed one particular penguin being unable to fly, despite seeing that it has wings. However, we can only do a finite sampling in our finite lifetime. Other observers would have noted down similar observations and thus contributed to the collective human inductive experience.

Penguin #1 cannot fly
Penguin #2 cannot fly
...
Penguin #n cannot fly
Therefore, penguins cannot fly
Inductive reasoning

But if you still really think hard about it, who told the very first observer to focus on the characteristic of flying with reference to penguins? Clearly, it would take many particular observers many years of observing many particular penguins before any of them could confidently claim that no penguin can fly. At the moment of that first observation, almost infinite possibilities presented themselves, and being unable to fly was only one of the possible hypotheses. Peirce was sharp enough to identify this and insisted that no scientific discovery or inquiry process could be complete without this first abductive hypothesization. With an abductive hypothesis in hand, we can generate a deductive argument, which is then inductively tested over time. Of course, this is a continuous process: inductive testing validates or evolves the hypotheses and might eventually converge on the truth in the distant future.

Peircean model for the process of discovery

In practice, almost every scientist during their lab experiment, every doctor in their differential diagnosis, or every detective in their inquiry uses this triadic process. However, while communicating their discoveries, they often reduce their explanations to deduction and induction. This Peircean repair is an important contribution to epistemology, as abduction alone sets out to do what truth-preserving deduction and induction cannot seem to do—discover and possibly add something new to our knowledge. This is the same faculty that Bengio pointed out for AI researchers and computer scientists.
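The triadic process above can be sketched in code. This is a toy illustration using the blog’s penguin example, not Peirce’s own formalism; the function names and data shapes are mine:

```python
def abduce(observation):
    """Abduction: leap from one surprising particular to a general hypothesis
    (only one of the many hypotheses the moment affords)."""
    return "no penguin can fly" if not observation["flies"] else "every penguin can fly"

def deduce(hypothesis, name):
    """Deduction: apply the general hypothesis to a particular case with certainty."""
    return f"{name} cannot fly" if hypothesis == "no penguin can fly" else f"{name} can fly"

def induct(hypothesis, samples):
    """Induction: test the hypothesis against many particulars; the verdict is
    the universe's corrective feedback."""
    expected = hypothesis == "every penguin can fly"
    return all(s["flies"] == expected for s in samples)

# One cycle of the process of discovery:
first_sighting = {"name": "Penguin #1", "flies": False}
hypothesis = abduce(first_sighting)   # "no penguin can fly"
prediction = deduce(hypothesis, "AJ") # "AJ cannot fly"
field_data = [{"name": f"Penguin #{i}", "flies": False} for i in range(2, 101)]
confirmed = induct(hypothesis, field_data)
print(hypothesis, "|", prediction, "|",
      "hypothesis survives, for now" if confirmed else "revise hypothesis")
```

A single counterexample in `field_data` would make `induct` return `False`, sending the inquirer back to abduce a better hypothesis, which is exactly the continuous loop the Peircean model describes.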

Let's continue our journey through the levels of knowing. Had there been only one thinking mind in the world attempting to understand the world, even a casual habit of reasoning would not be of much social trouble, but the fact is that there are other minds—minds that can think, minds that can think differently, minds that can very well believe the opposite of what we believe in, that too about the same phenomenon in the world. At this third level of thinking, we realize that there’s not one universal way of knowing employed by reasoning minds spread across time, space, cultures, and perspectives. The knowledge claims of this shared world are represented in terms of ideas shared by others who are also trying to understand the same world.

Ideas or concepts, however, require another vehicle to get from one mind to another—language. When this is the reality of the place we all seem to occupy, it does not suffice to just invest in knowing how to best know the world. The importance of language in the bigger picture of a community (or an individual inside a community) that’s trying to understand the truth of their shared reality becomes manifold.

We want to know—the world and the word

From language as a vehicle of communication to recognizing the need for the communal logic of language required to achieve the shared goal of human understanding and thinking is just one step away. Developing the formal logic and ethics of this was one of the biggest contributions of Charles Peirce—semiotics, his theory of signs. Peirce realized that as far as human experience is concerned, we do not have immediate access to reality; we do not experience things as they are. Instead, what we always come across are signs, and signs always require mediated interpretation before they can be understood. This is equally true for objects of the world and symbols of human words (whether in speech or text). One of the distinctive features of Peircean semiotics is that it treats both as signs without any distinction.

Sign—an irreducible triad

The structuralist formulation of a sign theory, before Peirce, suggested that there are two elements involved in every sign:

  • Signifier (that which is physically expressed)

  • Signified (the concept that the signifier refers to)

Peirce repaired this with a triadic formulation where every sign brings into relation three elements. Let’s understand the elements of a triadic sign through the iconic Helen Keller moment (Keller, Helen, The Story of My Life, 1902) where she realized for the first time that the hand gestures of the alphabet that she had been making (sign vehicle) and the physical cool wetness that she felt every time she put her hand in water (object) are cognitively related through the non-trivial process of naming (interpretant in this scenario):

We walked down the path to the well-house, attracted by the fragrance of the honeysuckle with which it was covered. Some one was drawing water and my teacher placed my hand under the spout. As the cool stream gushed over one hand she spelled into the other the word water, first slowly, then rapidly. I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten—a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that “w-a-t-e-r” meant the wonderful cool something that was flowing over my hand. That living word awakened my soul, gave it light, hope, joy, set it free! There were barriers still, it is true, but barriers that could in time be swept away. —The Story of My Life

The interpretant is not necessarily an interpreter, a subject, or a person. Peircean semiosis is a dynamic and ever-evolving process where an entire triadic sign (sign vehicle, object, and interpretant) can act as an interpretant for another sign. This way, our worldview keeps growing. Here is one more example from Helen’s early learning phase that exemplifies the growth of her semiotic understanding of the world through the world model present in her imagination:

When the sun got round to the window where she was sitting with her book, she got up impatiently and shut the window. But when the sun came in just the same, she came over to me with a grieved look and spelled emphatically: “Sun is bad boy. Sun must go to bed.” —The Story of My Life

Helen was as serious in her understanding of the world as the people once were in believing that the sun revolves around the earth, as most signs and their interpretations of the day pointed towards this interpretation being valid. Of course, the universe gives us signs of corrective feedback as well.

Let’s try one last exampleDeely, John, The Red Book: The Beginning of Postmodern Times or: Charles Sanders Peirce and the Recovery of Signum, 2000. to understand interpretant. Imagine that while hiking in the mountains, you come across a very peculiar bone. Peirce would say that the relation between the bone (sign vehicle) and dinosaur (object that the sign points to) is inexplicable without the paleontologist’s long-term habits of thought (interpretant):

The one perceiving the bone may be an ignorant human animal, or indeed an animal other than human. As a key to the past and to some scientific knowledge, the bone is in this case wasted, though it may be excellent to chew on or to use as a club. However, with luck, the one perceiving the bone, the one for whom the bone is objectified, may happen to be a paleontologist. In this circumstance the bone becomes a sign, not of a chew toy or of warfare, but of the age of the dinosaurs, and of some individual and type of individual dinosaur as well. A relation which was once physical between the bone and the dinosaur whose bone it was now has a chance of being reconstructed by the scientific mind. Should that happen, a relation once only physical comes to exist again, unchanged as a relation — that is to say, in its essential rationale and structure as a relation — but now existing only as purely objective.
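The irreducible triad, and the way a whole sign can stand as the interpretant of another, can be sketched as a small data structure. This is a minimal illustration under my own naming, not Peirce’s notation; the example objects are the bone and the paleontologist from the passage above:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Sign:
    vehicle: str                       # what is physically present (the bone)
    obj: str                           # what the sign points to (the dinosaur)
    interpretant: Union[str, "Sign"]   # the habit of thought relating the two

def unfold(sign, depth=0):
    """Walk the chain of interpretants standing behind a sign."""
    lines = ["  " * depth + f"{sign.vehicle} -> {sign.obj}"]
    if isinstance(sign.interpretant, Sign):
        # Semiosis: a whole prior sign serves as the interpretant here.
        lines += unfold(sign.interpretant, depth + 1)
    else:
        lines.append("  " * (depth + 1) + f"via: {sign.interpretant}")
    return lines

# The paleontologist's training is itself the product of earlier semiosis,
# and it is what relates the bone to the dinosaur:
training = Sign("fossil morphology", "extinct species", "years of disciplined inquiry")
bone = Sign("peculiar bone", "a dinosaur", training)
print("\n".join(unfold(bone)))
```

For the ignorant perceiver, `interpretant` would be something like `"chew toy"`, and the bone-to-dinosaur relation would never be reconstructed; the chain of interpretants is what makes the bone a sign of the age of the dinosaurs at all.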

From the language of logic to the logic of language

For Peirce, just like our genes encode within us our ancestral inheritance, so does our language encode within itself our ancestral wisdom, to be further used by us for thinking. So, by design, our sign interpretation (thought being housed in ancestral language) is a community project—past and present included. Peirce did not throw the baby (tradition) out with the bath water like some of his skeptic contemporaries but introduced ethics around the usage of logic amidst a community of inquirers and called it pragmatism:

Pragmatism is the principle that every theoretical judgment expressible in a sentence in the indicative mood is a confused form of thought whose only meaning, if it has any, lies in its tendency to enforce a corresponding practical maxim expressible as a conditional sentence having its apodosis in the imperative mood. [1903: CP 5.18]

This is perhaps what is meant by the logic of language: whatever ordinary folks state as matter of fact, casually insisting that their propositional statements be taken for universal truths, a pragmaticist would state as conditional statements, whose consequent becomes an imperative necessity once the conditions are verified as true. This move from the casual fact-sayer (without regard to context or conditions) to the careful conditional-stater is a move towards ethics in this universe, given its structure and our capacities and limitations as human beings. With the universe’s corrective feedback, and openness toward receiving it, we grow our habits of correct and useful abductions and interpretants along with our world models.

Charles Sanders Peirce

History of AI and its future vector

As opposed to the thousands of years that philosophy has had to mature its thought on the subject, the domain of AI is a newborn in comparison. The evolution of AI has spread over a couple of decades exploring deductive rule-based expert systems and another couple of decades exploring inductive machine learning-based systems. While deductive AI systems have a human-explainable, transparent worldview and can perform causal reasoning, they are too brittle and lack growth. Inductive AI systems, on the other hand, show signs of learning but are ultimately black boxes without any trace of a worldview; hence, even the latest large language models suffer from “hallucinations.” Lastly, any scientific model of the world should be able to teach us something about how the phenomenon it models works. Perhaps Chomsky, being a scientist, is correct in identifying that even though these large language models are useful as tools, they don’t tell us anything about how language works.

If the history of philosophy is to be our guiding interpretant, AI needs to follow suit and move in the direction of seriously developing computational models for abductive systems that can integrate expert systems and machine learning for a holistic approximation of general human intelligence—language models grounded by world models.


If you enjoyed reading this blog, do read the first and second blogs from this series:

Frequently Asked Questions

How has AI developed over the years?

In the 1980s, advancements in computing and data storage led to a resurgence in AI research, marked by fresh funding and new algorithms. John Hopfield and David Rumelhart brought neural network techniques back to the forefront, enabling computers to learn from experience.


  
