A Gautam Hazari AI column.
AI Beyond Artificial, and Ahead of Intelligence
If there were an Oscar equivalent for terminology, it would be no exaggeration to say that “Artificial Intelligence” would be the clear winner for the century.
From dinner tables to boardrooms, the term “Artificial Intelligence” has spread from casual to serious discussion, and is debated at the extremes: from taking our species to the next level on an accelerated journey, to posing an existential threat to humanity, or at least a threat to the position of Homo sapiens in the food chain.
I have a fundamental problem with the term, though, and not just academically but for the sake of everything we are doing around it. Terminology plays a critical role in our cognition, and language is the most critical tool that has taken our species to the top of the food chain; it has helped evolve the biological superiorities of our brains, from the frontal cortex to the number of synaptic connections, and our cognitive processes.
Let’s go back to the origin of the term “Artificial Intelligence”. The year was 1955, and John McCarthy was a young Assistant Professor of Mathematics at Dartmouth College in Hanover, New Hampshire.
McCarthy was keen to gather the best minds to discuss ‘thinking machines’ and to develop further ideas around them. He decided to approach the Rockefeller Foundation and, along with other eager and enthusiastic individuals on the subject, made a formal proposal on the 2nd of September 1955.
The fellow proposers included Marvin Minsky, co-founder of the MIT AI Lab and inventor in 1951 of one of the first neural networks, SNARC (Stochastic Neural Analog Reinforcement Calculator); Nathaniel Rochester, who wrote the first assembler; and Claude Shannon, the father of information theory and the person who showed how Boolean logic could be realised in electrical circuits.
McCarthy needed a term to describe what the group was supposed to discuss. He wanted to avoid existing terms such as “thinking machines”, “cybernetics” and “automata theory”, to sidestep knowledge inertia, information overload and any political implications, and he preferred a new, neutral and refreshing term: “Artificial Intelligence”.
The project was called the “Dartmouth Summer Research Project on Artificial Intelligence”, and 11 thought leaders participated in the summer of 1956, over six to eight weeks. Only six of them attended the whole workshop, though.
The term “Artificial Intelligence” was used tactically, almost as a placeholder. McCarthy may never have imagined that the same term would become the hinge of the largest technological revolution humanity has ever seen.
AI Beyond Artificial
One of the most amazing features nature has invented for living organisms is the ability to see: the ability to interact with light, whether direct, reflected, refracted or diffracted, and to identify objects through it.
How vision works in animals, including our own species, had always been a mystery, until David Hunter Hubel, a Canadian-American neurophysiologist, and Torsten Nils Wiesel, a Swedish neurophysiologist, showed in the late 1950s and 1960s that the visual cortex in cats and monkeys contains neurons that individually respond to small regions of the visual field.
The mystery of how animals’ visual systems work began to unfold, and the pair received the Nobel Prize in Physiology or Medicine in 1981. Inspired by their work, Kunihiko Fukushima, a Japanese computer scientist, introduced the “Neocognitron” in 1980, and the CNN (Convolutional Neural Network) architecture was born.
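To make that lineage concrete, here is a minimal, illustrative sketch of the core convolutional idea, not Fukushima’s actual Neocognitron: each output “neuron” computes its value from only a small patch of the input, echoing the local receptive fields Hubel and Wiesel observed. The edge-detecting kernel and toy image below are assumptions, chosen purely for illustration.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel across the image: each output value is
    computed from a local patch (a 'receptive field') of the input,
    echoing what Hubel and Wiesel observed in the visual cortex."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]   # local region only
            out[i, j] = np.sum(patch * kernel)  # one 'neuron' responds
    return out

# A toy vertical-edge detector: responds strongly where brightness
# changes from left to right, and stays quiet on flat regions.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

image = np.zeros((8, 8))
image[:, :4] = 1.0                      # bright left half, dark right half
print(convolve2d(image, edge_kernel))   # peaks along the vertical edge
```

Stacking layers of such local detectors, with pooling in between, is essentially what the Neocognitron proposed and what modern CNNs still do.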
And that was a significant step towards the revolution we see today. One may ask: why? The inflexion point for AI, and specifically for the machine-learning revolution, came in 2012, when AlexNet won the ImageNet Large Scale Visual Recognition Challenge, on the 30th of September 2012 to be precise.
I believe ImageNet was the trigger the technology world had been waiting for, but that deserves a dedicated discussion altogether, to give it the recognition it merits. AlexNet is a CNN model created by Alex Krizhevsky in collaboration with Ilya Sutskever, later of OpenAI fame, and the Turing Award winner Geoffrey Hinton, when Krizhevsky was a PhD student of Hinton’s at the University of Toronto.
AlexNet was a catalyst that brought neural networks into the mainstream technology world; from investors to implementers, it seems everyone started to take neural nets seriously. The current explosion of LLMs, triggered by the phenomenal event of the 30th of November 2022, the public release of ChatGPT, also rests on neural nets: the transformer architecture used by GPT is a form of neural network architecture.
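To underline that last point, here is a minimal single-head self-attention sketch, the transformer’s core operation. This is not GPT’s actual implementation; the dimensions and the randomly initialised weight matrices are assumptions for illustration only. It shows that the mechanism is built from the same ingredients as any neural network: weight matrices, matrix multiplication and a simple non-linearity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.
    x: (seq_len, d_model) token embeddings.
    wq, wk, wv: (d_model, d_model) projection matrices, which a real
    model would learn; here they are random, purely for illustration."""
    q, k, v = x @ wq, x @ wk, x @ wv            # three linear layers
    scores = q @ k.T / np.sqrt(k.shape[-1])     # token-to-token affinity
    weights = softmax(scores)                   # attention distribution
    return weights @ v                          # mix the values accordingly

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                         # four toy 'tokens'
x = rng.normal(size=(seq_len, d_model))
wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)      # (4, 8)
```

A full transformer stacks many such attention layers with feed-forward networks, which is why it is fair to call it a form of neural net.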
Now, neural nets, and CNNs in particular, were created by studying how biological neural networks work. Tagging the entire category of technology as “artificial” may not position it properly; even the term “neural network” is borrowed from the natural, biological one.
Now let’s talk about the second word: “Intelligence”. This is where we need to be extremely careful, so that we don’t create critical boundaries between technology and humanity; the most important driver for any technology must be humanisation.
Here is a snippet from John McCarthy’s proposal, in which he introduced the term “Artificial Intelligence” for the famous Dartmouth College summer project:
“An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
What is intelligence? There may be a plurality of definitions, and the snippet above offers some elements. It should also be considered in context: several other terms are used in conjunction with “intelligence”, such as “consciousness”, “cognition”, even “sapience” and, in a somewhat loose way, “sentience”.
They are not synonyms, but they are used in the context of intelligence, and it is important that we find a way to define them, or at least a way to recognise them. It is also important to define them neutrally, so that they can be used and identified beyond biological narratives.
Alan Turing was extremely thoughtful and far-sighted when he devised the “Imitation Game”, which we later called the “Turing Test”. We need to extend it to cover these other terms, including consciousness and sentience. Otherwise, a new species will be waiting in the near future to surprise us: a digital species, a technological species.
How will we know when the technology we create has become conscious? Sentient? Or even sapient? Calling it just “intelligence” risks blinding us. With the LLM revolution, Moravec’s paradox, the observation that high-level reasoning is comparatively easy for machines while skills such as perception and language long seemed hard, is already invalidated, and we have a language singularity between humans and machines.
The problem we had to solve in every previous technological revolution was the “control problem”. It was certainly a hard problem: how do we control the use of nuclear technology? Our collective effort solved it, or at least generally built controls around it.
The problem we need to solve now goes beyond the control problem: it is the “alignment problem”.
How do we ensure that the technology we are building, the apps, the APIs and the AI, has goals and objectives aligned with those of our species? The alignment problem becomes much more profound when what we are building is intelligent, and could be conscious, sentient, thoughtful and everything else we humans thought made us different.
So, what do I suggest the term should be?
I prefer to call it “Augmented Intelligence”.
For the time being, the word “augmented” at least reminds us of the alignment problem and keeps the message of humanisation: instead of creating a boundary between the artificial and the natural or humane, it says that the objective of this technology, and of any technology we build, is to augment human capabilities.
I can live with the word “intelligence” for the time being, so long as we don’t ignore the related aspects: consciousness, sentience, sapience and every other element that makes us human.
AI beyond artificial. Let us humanise every technology we build.
Connect with Gautam Hazari on LinkedIn to stay updated on his latest insights and contributions to digital identity, technology and AI beyond artificial.