COVID-19 was a tough phase for the whole world, but it was tougher still on the senior board members of the European Research Council (ERC), as the council became embroiled in a controversy over its newly appointed president in 2020. At the heart of the controversy was a disagreement over the mission of the ERC. Billions of euros in funding are entrusted to the ERC every year, and the council's mission is to "encourage the highest quality research in Europe through competitive funding and to support investigator-driven frontier research across all fields, based on scientific excellence."
The newly appointed president wanted to channel more of the funding toward COVID-19-related research proposals (all in good spirit, and apparently the need of the hour), while the senior board members, who had the institutional memory intact, were almost certain that the ERC ought to stay true to its mission of funding the most excellent researchers, those who have dedicated their lives to conducting fundamental scientific research, because if the world was to find a solution to the pandemic, it would come out of the labs that had been doing fundamental research for years. The board members prevailed, and the president resigned within months of taking office.
Coming to the world of artificial intelligence (AI), there has been a lot of buzz and excitement around deep learning in recent years.
The real question is: What would it take to build these deeper models of AI?
Will it just take a few more billion dollars thrown at the problem, a few billion more training parameters trained on a few more gigabytes of data on even more powerful computing infrastructure? In other words, can we simply engineer our way through? Or would it take the AI community the same ethos as the ERC's, doing excellent ground-up fundamental science, to achieve its ideal of simulating human-level intelligence? This series of blogs recommends the latter!
It is a historical fact that the pioneers of AI stayed true to the process of doing good science, and it would serve us well to follow suit. Just as all science labs, including the biology labs that applied their theoretical models to creating COVID-19 vaccines, follow the process represented below (in the left half of the figure), so will the research labs of AI need to if they want to create computational models of a deeper AI: an AI that does not suffer from the long list of issues that Professor Yann LeCun believes the current deep learning models of AI suffer from.
But more importantly, deeper AI is a proposition for the AI community to take a more scientific route, as illustrated in the right half of the following figure. It does not start with simply engineering for an application. Rather, it studies the locus of intelligence as a naturally occurring phenomenon, not narrowly, but in as holistic and broad a sense as possible. It then studies the traditions or schools of thought that have theorized about such holistic intelligence, instead of computer scientists trying to reinvent the wheel. Only then does it create computational models that stay true to the theory they are extracted from, and finally, those computational models are applied to interesting problems.
By following the above steps, this series has laid out a scientific framework for a deeper AI and has furnished the first two important steps of the pipeline in detail. If we add to this what some of the alternative voices within the AI community have independently been saying, there is a clear call to continue the pipeline and develop computational models for this deeper theoretical understanding of intelligence and artificial intelligence.
Deeper AI models: A community of pragmatic, semiotic, and abductive agents interacting with human agency, converging on the truth, emulating a deeper understanding of human intelligence.
Deep learning models: Shallow pattern-recognition learners that require lots of training data and bypass deductive and causal reasoning. Once trained, the system is an uninterpretable black box.
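Since the contrast above hinges on abductive reasoning, here is a minimal, hypothetical sketch of what abductive inference looks like computationally; the causal rules and observations below are invented purely for illustration and do not come from the series itself. Where a pattern learner maps inputs to outputs statistically, an abductive agent reasons backward from an observed effect to the causes that could explain it:

```python
# Toy abductive inference: reason backward from an observed effect
# to candidate explanations. All rules here are invented examples.

RULES = {
    # cause -> effects that the cause can produce
    "rain":      {"wet_grass", "wet_road"},
    "sprinkler": {"wet_grass"},
    "oil_spill": {"slippery_road"},
}

def abduce(observation):
    """Return all causes whose known effects include the observation."""
    return sorted(cause for cause, effects in RULES.items()
                  if observation in effects)

print(abduce("wet_grass"))  # -> ['rain', 'sprinkler']
```

Note that the agent returns every hypothesis consistent with the evidence; ranking those hypotheses (by plausibility, simplicity, or prior belief) is where the deeper theory of abduction comes in.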
While less prominent voices have tried to make this case for years, several of the most prominent members of the AI community are now saying similar things:
Yann LeCun, a Turing Award recipient, Professor at New York University, and Chief AI Scientist at Meta/Facebook, has been vocal in his critique of LLMs: he argues that autoregressive language models lack an internal model of the world and cannot truly reason or plan, and that simply scaling them up will not lead to human-level intelligence.
Geoffrey Hinton, considered another godfather of deep-learning AI besides LeCun, has likewise suggested that the field may need to rethink its foundations; as early as 2017, speaking of backpropagation, he remarked that his "view is throw it all away and start again."
Kathleen Creel is a postdoctoral fellow at the Institute for Human-Centered Artificial Intelligence, Stanford University. As a solution to the arbitrary bias shown by automated machine-learning-based decision systems trained on one complete dataset, she has explored introducing randomness into such systems, for instance by varying the models or data they are trained on, so that the same individuals are not systematically excluded by every deployed system.
Douglas Lenat, another academician who has also served on the scientific advisory boards of both Microsoft and Apple, took a different route from inductive machine learning. He and his team of 60 researchers ran a 37-year-long project curating millions of hand-crafted explicit deductive rules, an effort equivalent to roughly 2,000 person-years. This gave birth to Cyc, a vast knowledge base and reasoning engine that encodes common-sense knowledge as explicit logical assertions.
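To make "explicit deductive rules" concrete, here is a minimal forward-chaining sketch of the idea; the two rules below are toy placeholders and bear no relation to Cyc's actual knowledge base or inference engine. The engine repeatedly fires any rule whose premises are already known, until no new fact can be derived:

```python
# Minimal forward chaining over explicit deductive rules.
# The rules and facts are toy placeholders, not Cyc's content.

RULES = [
    ({"dog"}, "mammal"),           # every dog is a mammal
    ({"mammal"}, "warm_blooded"),  # every mammal is warm-blooded
]

def forward_chain(facts):
    """Repeatedly apply rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"dog"})))  # -> ['dog', 'mammal', 'warm_blooded']
```

The contrast with inductive machine learning is the point: every conclusion here is traceable to a human-authored rule, so the system is interpretable by construction, and the cost is exactly the hand-curation effort that Lenat's team invested.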
Yejin Choi is a celebrated professor and researcher in the domain of natural language processing. In her talks and research, she has argued that common sense is the "dark matter" of intelligence, and that today's large language models, despite their scale, remain surprisingly brittle without it.
In line with the ERC's mission, this series of blogs, along with the above alternative voices from prominent members of the AI community, points toward looking past the shallow science of deep learning to a deeper, more fundamental science of intelligence.
Series in episodic sequence: