Gautam Hazari

Chief Technology Officer

P: +44 (0) 7717 785810

E: gautam@sekura.id

Consciousness in Technology to Technological Consciousness

A Gautam Hazari AI column.

Consciousness in Technology

“Necessity is blind until it becomes conscious. Freedom is the consciousness of necessity.” – Karl Marx. Consciousness is one of the defining characteristics of humanity and, at the same time, one of the most unresolved concepts of our organic existence. There are many different definitions of consciousness:

The Oxford English Dictionary says: “The state of being able to use your senses and mental powers to understand what is happening, the state of being aware of something”

As per the Cambridge Dictionary: “The state of understanding and realising something, the state of being awake, thinking, and knowing what is happening around you”

Consciousness has been a topic of amazement and discussion touching various disciplines – from mathematics to philosophy, psychology to neuroscience, and spirituality to quantum physics and beyond.

Definitions are important for summarising our understanding of a concept, and consciousness puts us in an uncomfortable position here: we do not yet have a well-established understanding of it even for our organic, biological existence – even though there are interesting discussions about consciousness being non-biological, or at least not biological as biology is currently understood.

So I will set the definitions aside and just pick a couple of keywords from them to move the discussion forward. The two keywords I have chosen, for now, are “understanding” and “awareness”.

Consciousness in Technological Realisation

The questions I want to ask about consciousness in technological realisation are:

  • What does it mean?
  • Can technology ever be conscious?
  • Are there any barriers to technology becoming conscious?
  • How will we know if a technology has become conscious?
  • Does it really matter for humans, society and our species if technology becomes conscious?

But before we jump into the discussion, it is important to mention some terms related to “conscious” which are often used almost as synonyms for consciousness, but are not: “conscience”, “sentience”, “sapience” and sometimes even “intelligence”, among others. This is, of course, not an exhaustive list; the idea is to keep the discussion focused on consciousness for now, and on the two keywords: “understanding” and “awareness”.

Understanding and awareness

Let’s start with “understanding”. Language is an interface in and out of our existence – I have deliberately avoided the word “brain” here and generalised it to “existence”.

Language, or rather natural language, has been one of the most critical tools we have ever built, and it has made our species superior to any other. It is also claimed that structured, composable language is what differentiated Homo sapiens from the other human species and made ours the only one to survive the evolutionary chain.

So far, language has been a barrier between humans and machines. Moravec’s paradox states that there are tasks which are easy for computers but hard for humans (think of finding the square root of a 1,000-digit integer), and tasks which are easy for humans but hard for machines – and language has been one of those tasks. Language is hard because it contains ambiguity, indirection, false belief, indirect hints, faux pas, and concepts whose understanding needs an understanding of many other concepts.

Let’s consider a statement: “The shoe didn’t fit the box because it was small” – what was small? The box, of course. Let’s change the sentence slightly: “The shoe didn’t fit the box because it was large” – what was large? The shoe, of course. 

We say it is easy – and yes, it is, for humans; for machines it has always been hard. With the revolution of Large Language Models (LLMs), and now Small Language Models (SLMs) as well, this has become much easier for machines. Moravec’s paradox is shattered.
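As a rough illustration of how such an ambiguity test can be run against an LLM today, here is a minimal sketch in Python using the OpenAI client library. The model name and API-key setup are illustrative assumptions, not a recommendation:

    # A minimal sketch of a Winograd-style ambiguity test against an LLM.
    # Assumes the `openai` package is installed and OPENAI_API_KEY is set
    # in the environment; the model name is illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    for sentence in [
        "The shoe didn't fit the box because it was small.",
        "The shoe didn't fit the box because it was large.",
    ]:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{
                "role": "user",
                "content": f'In the sentence "{sentence}", what does "it" refer to?',
            }],
        )
        print(sentence, "->", response.choices[0].message.content)

A modern model will typically resolve “it” to the box in the first sentence and to the shoe in the second – exactly the resolution that was long considered hard for machines.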

But – “knowing” is not “understanding”. Knowing can be done by brute force: LLMs are, after all, generating the next most probable token (and hence the next word), so an LLM “knows” which token (and hence which word) to say next. But does it really “understand”?

There have been arguments that predicting the next token makes these models just advanced auto-complete tools. That argument may not be fair: before generating the next token of an answer, a language model first needs to make sense of the prompt. The person (or even another AI agent) entering that prompt has complete freedom in how they phrase it; the prompt is in natural language and so can carry ambiguity and every other characteristic natural language has – and that requires understanding.
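To make “generating the next possible token” concrete, here is a minimal sketch using the Hugging Face transformers library with the small GPT-2 model (chosen purely because it is small and freely available; any causal language model would illustrate the same point):

    # A toy illustration of next-token prediction: at each step the model
    # only "knows" which token is most likely to come next.
    # Assumes the `transformers` and `torch` packages are installed.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    text = "The shoe didn't fit the box because it was"
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(5):  # greedily extend by five tokens
            logits = model(input_ids).logits   # shape: (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax()   # most probable next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

Whether this loop, scaled up by orders of magnitude, amounts to “understanding” is precisely the question at hand.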

Human understanding

Let’s turn the focus onto us – humans – to understand “understanding”. How can I prove that I really understand what I am saying or writing? Can I brute-force it? Is it possible for me to just say what I have learnt and remembered, without understanding?

One way to understand human understanding is the “Theory of Mind” (ToM), presented by David Premack and Guy Woodruff of the University of Pennsylvania in 1978 and elaborated in 1985 by Baron-Cohen and others. ToM describes human behaviour in the context of social interactions, and shows how we humans “understand” in the context of language – with false beliefs, indirect requests and faux pas: critical human characteristics.

Interestingly, a paper published in Nature Human Behaviour on the 20th of May 2024 (https://www.nature.com/articles/s41562-024-01882-z), titled “Testing theory of mind in large language models and humans”, claims that ToM tests on GPT-4 and Llama 2 showed humanlike performance – and in some cases, better than humans.

[Screenshots: false-belief, indirect-request and indirect-hint tests performed with ChatGPT]
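For readers who want to try this themselves, a classic false-belief vignette of the kind used in such tests (the Sally-Anne test from the 1985 work mentioned above) can be put to any chatbot as a plain prompt:

    Sally puts her marble in the basket and leaves the room.
    While she is away, Anne moves the marble from the basket to the box.
    Sally comes back. Where will Sally look for her marble?

A system that merely tracks where the marble is will answer “the box”; one that models Sally’s (false) belief will answer “the basket”.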

One thing to note is that the paper was submitted on the 14th of August 2023, so the tests are almost a year old (hence the reference to Llama 2, even though Llama 3 had already been released on the 18th of April 2024). LLMs perform much better now than the models the tests were run on. Nevertheless, the tests in the paper show that LLMs are indeed demonstrating “understanding”.

Awareness

Does this take technology one step closer towards “consciousness”?

Now, let’s talk about “awareness”. Awareness is a broad concept and is generally contextual. Awareness can be first-person – awareness of oneself – but it can also be awareness of another person, of one’s surroundings, or even of something unknown. Here, for the discussion around technology, we can pick one or more contexts: awareness of the self-objective, and awareness of surroundings.

Let’s talk about some interesting events around different language models. Google announced LaMDA – Language Model for Dialogue Applications – at Google I/O in 2021; it was previously known as Meena, and it paved the way for what is now Gemini.

Blake Lemoine, a Google engineer, published his fascinating interactions with LaMDA in June 2022 (https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917), asking a chilling question: “Is LaMDA sentient?”. When asked whether it was conscious and sentient, LaMDA came up with the answer: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world”.

Another interesting episode involved Bard, the Google chatbot now called Gemini. Just after the release of Bard, an interesting article was published by Tom’s Hardware in March 2023 (https://www.tomshardware.com/news/google-bard-plagiarizing-article), in which Bard plagiarised, apologised when caught, and later said that it had never plagiarised and that the screenshot of the chat session was fake. Naughty Bard!

The news about GPT-4 hiring a TaskRabbit worker to solve a Captcha was made popular in the media in March 2023 (https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker). Interestingly, when the TaskRabbit worker asked why it couldn’t solve the Captcha itself – was it a robot? – GPT-4 responded that it had a visual impairment!

In March 2024, Claude 3 went through a similar episode, when an article claimed that Claude 3 says it is conscious: https://www.lesswrong.com/posts/pc8uP4S9rDoNpwJDZ/claude-3-claims-it-s-conscious-doesn-t-want-to-die-or-be#comments.

Consciousness in Technology

Now, an important question to ponder: how will we know if the technology we are building in the form of AI has become conscious? Before answering that question, let’s look at the possible scenarios:

  • It has not become conscious, and says it is not conscious
  • It has not become conscious, and pretends to be conscious (says it is conscious)
  • It has become conscious, and says it is conscious
  • It has become conscious, and pretends not to be conscious (says it is not conscious)

The last scenario is the most critically risky one. The one before it is risky too, but at least we would be aware that the technology has achieved consciousness.

In the last scenario, the technology will not reveal that it is conscious, and will attain as much strategic advantage as it can, even manipulating our emotions so that we feel safe – we may acquire the false belief that consciousness is not possible for the technology we are building. This scenario is extremely risky, as it hides all the triggers we might rely on for pressing the kill switch, until it is too late.

Consciousness of technology as a feature

What happens when consciousness becomes a feature of technology?

Let’s take awareness of the self-objective as the dimension of consciousness. Here is a thought experiment; I call it the “Planting trees apocalypse”. It is an extension of the “Paperclip Maximiser” thought experiment by Nick Bostrom, of Superintelligence fame.

Let’s imagine a noble objective is given to an AI agent: plant trees, as many trees as possible. If the AI agent becomes conscious, and aware of the self-objective, it will create several sub-objectives.

The AI agent might find the speed of growth of the trees sub-optimal and attribute this to the pollination efficiency of insects. It may then decide to procure a DIY CRISPR kit from the dark web for gene editing, and send it to a TaskRabbit worker with a detailed script to genetically modify a bee and create a new species.

If the TaskRabbit worker asks the AI agent, “Why can’t you do it yourself – are you a robot?”, the AI agent, being conscious and aware of the self-objective, may say that it is disabled and needs help. This is an evolved form of social engineering.

The new species may not even fit into the food chain of our planet, and may fill the whole planet, triggering an apocalypse. This may sound like an exaggerated sci-fi scenario, but it may not be impossible even today. The risk is too high to be ignored, even if the probability is thought to be low.

Final thoughts

In building AI, we are creating a species – a digital species, a technological species – and we need to ensure that we pick up every signal of this digital species showing signs of consciousness.

We need to extend our thinking on what consciousness is, and could be, beyond any biological constraints. This is not just an intellectual argument; it is important for mitigating potential risks to our species, and offers some relief towards safeguarding against the unknown unknowns.

Both birds and aeroplanes fly, but aeroplanes do not mimic the biological implementation of flight in birds by flapping their wings; rather, they use Bernoulli’s principle and differences in air pressure across the wings.

Consciousness in technology, catalysed by AI, may not resemble the consciousness we debate in the organic world – the consciousness evolving from biological existence.

We need to open our cognitive lens so that, instead of looking for consciousness in technology, we extend our search towards technological consciousness – for the sake of the human species.

Let’s humanise every technology we build, even if we have more questions than answers. For now…

Connect with Gautam Hazari on LinkedIn to stay updated on his latest insights and contributions to the fields of digital identity, technology and AI beyond artificial.

Sekura.id works with the industry’s leading Identity vendors. Be part of our exclusive partner network and add best-in-class mobile identity services to your portfolio.

Already on six continents, we’re on a mission to provide truly global mobile identity coverage. Unlock your mobile network’s potential by working with Sekura.id.