One way to predict the future in a more objective, evidence-centric way is to look at recently filed or published patents and join the dots. While browsing patents published in July 2023, I stumbled upon a US patent published on the 20th of July 2023. My curiosity was piqued, and then heightened when I saw the applicant’s name: Apple Inc.
The title is a little unassuming: “Biosignal Sensing Device Using Dynamic Selection of Electrodes”:
The title doesn’t give many hints, and although it doesn’t reveal much, it is enough to catalyse my curiosity. So, I continued to read. Then, the following diagram appeared:
This, then, is the next evolution of the AirPods – or, if we generalise the approach, the evolution of any earphones.
No, the evolution is not about the next generation of ‘Active Noise Cancellation’ or ‘Vent’ systems for pressure equalisation or perhaps even motion- or speech-detecting accelerometers.
As the title of the patent hinted with the unassuming words – “Biosignal” and “Electrodes” – it is about fitting an EEG (Electroencephalography) sensor, as electrodes, into the AirPods – and the “Biosignals” referred to in the title are brain signals.
Yes. You read it right: the “brain signals”.
So, what is this about? Simply, it is about using the electrodes in the AirPods to sense (read) specific biosignals: brain signals.
The brain signals can then be profiled, or categorised, using trained machine learning models, to understand our thoughts and emotional state – calm, sad, happy, grumpy, angry, unsure, confused, bewildered and many more.
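To make the idea of categorising brain signals concrete, here is a minimal sketch of the shape such a classifier could take. Everything in it is an assumption for illustration: the band-power features (alpha, beta, theta), the centroid values and the emotion labels are invented; a real system would use a trained model over far richer signal features.

```python
# Illustrative only: hypothetical (alpha, beta, theta) relative
# band-power profiles for a handful of invented emotional states.
EMOTION_CENTROIDS = {
    "calm":     (0.50, 0.20, 0.30),
    "stressed": (0.15, 0.60, 0.25),
    "drowsy":   (0.30, 0.10, 0.60),
}

def classify_emotion(alpha: float, beta: float, theta: float) -> str:
    """Nearest-centroid guess at an emotional state from band powers."""
    features = (alpha, beta, theta)
    # Pick the label whose centroid is closest (squared Euclidean distance).
    return min(
        EMOTION_CENTROIDS,
        key=lambda label: sum(
            (f - c) ** 2 for f, c in zip(features, EMOTION_CENTROIDS[label])
        ),
    )

print(classify_emotion(0.52, 0.18, 0.30))  # a calm-looking profile -> "calm"
```

A nearest-centroid rule is the simplest possible stand-in for the “deeply-trained” models the patent era would actually demand, but it shows the read-then-categorise pipeline in a few lines.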
So, when we don’t like that specific song in the curated Spotify playlist, we don’t even need to double-tap the side of the AirPods to skip to the next song – just a “thought” of “not liking it” could be sensed by the electrodes in the AirPods and yes, the playlist skips to the next song.
Machine learning models learn along the way about what we like or don’t. It’s the first time in the history of songs (and perhaps humanity), that it’s not just that we are listening to the songs; the songs are listening to us as well – where “us” is a generalised representation of our “thoughts” in the context here.
These “thoughts” are sensed by the electrodes acting as EEG sensors, relayed over Bluetooth to the mobile device, then to the mobile app, to the cloud, and to the machine learning algorithms in the cloud – forming the “Internet of Thoughts”: the IoTh.
Full disclosure: this is not what the patent is revealing or claiming, this is what my curious brain is extrapolating and predicting.
Imagine an IFTTT (If This Then That – a service termed the rules engine of the internet, where one can create rules that trigger actions when certain conditions are met) with rules like: “If [Thoughts = Stressed] Then [Divert all calls to an AI assistant]” – a “thoughtful”, caring personal assistant bot in the IoTh, the “Internet of Thoughts”. And yes, instead of all calls, it could be calls from specific contacts as well – I leave it to the imagination of intelligent readers to expand on other possible scenarios.
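The “If [Thoughts = Stressed] Then [Divert calls]” idea can be sketched as a tiny rules engine. This is a hypothetical illustration, not any real IFTTT or Apple API: the state keys, rule conditions and action strings are all invented.

```python
from typing import Callable

# A rule is a (condition, action) pair: the condition tests a sensed
# state dictionary, and the action returns a description of what to do.
Rule = tuple[Callable[[dict], bool], Callable[[], str]]

rules: list[Rule] = [
    # "If [Thoughts = Stressed] Then [Divert all calls to an AI assistant]"
    (lambda state: state.get("thought") == "stressed",
     lambda: "divert calls to AI assistant"),
    # "If [Thoughts = Dislike current song] Then [Skip to next song]"
    (lambda state: state.get("thought") == "dislike_song",
     lambda: "skip to next song"),
]

def evaluate(state: dict) -> list[str]:
    """Run every rule whose condition matches and collect the actions."""
    return [action() for condition, action in rules if condition(state)]

print(evaluate({"thought": "stressed"}))  # ['divert calls to AI assistant']
```

The point of the sketch is the shape of the system: once thoughts are just another sensed state, they slot into the same condition-action machinery the rest of the automated internet already runs on.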
Portable EEG sensing devices are not new; I have been experimenting with headbands like “Muse” for many years. Beyond the meditation apps, brain signals are profiled to identify “calm” or “not calm” states, and gamified so you earn points when you can keep your brain “calm” – as read by the electrodes in the “Muse” headband, which acts as your personal meditation coach. There were a few issues with this being a catalyst for the “Internet of Thoughts”. The obvious one is the form factor (reminiscent of Walkman-type headphones), along with the uncomfortable reality of it being “yet another gadget” to carry or wear, which limits its usability as a general, multi-purpose wearable device.
Bio-sensing electrodes as on-ear EEG sensors built into the AirPods, though, fit into a generic, multi-purpose device – opening the door for the “Internet of Thoughts” to become a mainstream reality. It also fits with the “desire” to wear the device, judging by the more than 150 million AirPods Apple had sold by 2022.
Let’s take a peek back into the history of the Internet, to make a prediction on the possible evolutionary path of the IoTh – the “Internet of Thoughts”.
The “Internet” (more precisely the World Wide Web – technically the Internet, the World Wide Web and even ‘the Web’ are not the same thing) – which is really the “IoC”, the “Internet of Content” – has gone through a steep evolution.
1989 saw the birth of Web 1 (though we did not call it Web 1 then; I am using the term to differentiate it from Web 2). Web 1 was a “read-only” ecosystem, where users could consume content from the Internet. Then, with the introduction of mobile phones making the Internet more personal, Web 2 was born in 2004-2005.
The “Internet” now became “writable”. Users started to “write” into the Web, by generating content, mostly using mobile phones and uploading to the Internet. The “writable” Web has continued until today and has evolved enormously, with the explosion of user-generated content in the “writable” Web, mostly fuelled by social media.
It has also created an industry for content creators and at the same time disrupted the traditional news, media and entertainment industry. Today, 95 million photos and videos are “written” into Instagram every day – that is 65,972 every single minute – a lot of restaurant meals and lifestyle shots for sure. Not to mention the two billion snaps “written” into Snapchat each day…
The Web is at an interesting transformational juncture where more “writing” is happening than we can collectively “read”! This is pushing unimaginable personal and social psychological boundaries – our average attention span has reduced to 8.25 seconds, and we may need to revisit some of the metaphors and jokes, as the attention span of a goldfish is 9 seconds.
A similar trend can be seen in the IoT world – the “Internet of Things” – where the ability to both read from and write to the “things” makes it a complete ecosystem of “smart everything”; from smart homes to smart cities and everything in between.
IoT “sensors” read the various parameters – temperature, humidity, acceleration, angle of rotation, altitude, direction, and anything that needs to be measured.
The “actuators” are the elements which wait for instructions to be “written” to perform actions; the motors, the steppers, the switches and so on – which get instructions “written” from the connected Internet and the “actuators” act upon them.
The LIDAR (Light Detection and Ranging) reads a nearby obstacle in the path of an autonomous car, and the microprocessors, with the help of trained machine learning algorithms, write to the braking system to act.
A two-way interaction system, with read and write, completes the “smart” ecosystem.
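The read-decide-write loop described above can be sketched in a few lines. The function names, the distance threshold and the command strings are all illustrative assumptions; a real autonomous-driving stack is vastly more complex.

```python
BRAKE_DISTANCE_M = 10.0  # hypothetical minimum safe distance, in metres

def read_lidar_distance(raw_reading: float) -> float:
    """Stand-in for the 'read' side: an obstacle distance from LIDAR."""
    return raw_reading

def decide_brake(distance_m: float) -> bool:
    """Decision step: brake when the obstacle is inside the safe distance."""
    return distance_m < BRAKE_DISTANCE_M

def write_brake_command(should_brake: bool) -> str:
    """Stand-in for the 'write' side: a command to the braking actuator."""
    return "BRAKE" if should_brake else "CRUISE"

# The complete loop: sensor read -> decision -> actuator write.
distance = read_lidar_distance(7.5)
print(write_brake_command(decide_brake(distance)))  # BRAKE
```

Replace “LIDAR distance” with “sensed thought” and “brake command” with “skip song”, and the same two-way loop becomes the read-and-write IoTh the rest of this piece speculates about.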
So, what can we predict for the IoTh?
If the electrodes in the AirPods can read thoughts and connect them to the Internet, that creates the IoTh – but this is only the initial, “read-only” phase.
If we take our cues from the “Internet of Content” [the Web] and the “Internet of Things”, is it not reasonable to assume that it is only a matter of time before the IoTh becomes “writable” too?
So, I am walking past a fast-food restaurant, listening to my favourite music on my AirPods. An app geo-locates the fast-food restaurant near me and “writes” the “thoughts” into my brain using the electrodes: that “I am hungry”, even though I may not be, and that “I love that burger in that fast-food restaurant a few metres from me” – which might not even match my tastes in reality.
Imagine an application profiling my inclinations from my online activities during an election, then “writing” the “thoughts” into my brain using the electrodes to amplify or change my inclination towards one political party or another.
No, this isn’t an episode of “Black Mirror”, or another release of the movie, “The Matrix”; this is a reality we are stepping into, in the Moore’s Law-defying accelerated digital world.
Let’s take a deep breath and remind ourselves, again – the Internet was created without an Identity Layer. The Internet is still missing the Identity Layer.
We are still using stop-gap approaches like passwords to solve this identity crisis, even though 81% of data breaches are related to passwords – “123456” has been the most popular password for the last five years, and it can be cracked instantly.
We have seen how this lack of Identity Layer brings life-or-death situations in the IoT world (imagine the serious implications when the password used in a connected pacemaker is compromised).
The other approaches for solving this Identity crisis, like biometrics, have been severely challenged by the accelerated revolution of Generative AI.
We cannot afford to wait to see the implications of this lack of Identity Layer for the IoTh – the “Internet of Thoughts”.
The way to change the future is to create it.
At Sekura.id, we are busily creating and building the missing Identity Layer for the Internet.
Our vision is to make the digital world safer, and this expands on the “Internet of Things” and the “Internet of Thoughts” too.
The entire Sekura.id family is passionate about solving this Identity crisis of the Internet with the Sekura API Framework (SAFr).
Let’s make our digital world a SAFr place. The “Internet of Content”, the “Internet of Things”, the “Internet of Thoughts” and beyond. Let’s work towards “ID for all and Everything” – and, yes, for “Every Thought” too.
Gautam Hazari is Sekura’s Chief Technology Officer and a Mobile Identity guru. Like Apple, he holds several technology patents too. He also plays 11 musical instruments and can be found working on the next iteration of Mobile Identity solutions. Probably at the same time…