Deepfakes – how the humble SIM comes to the rescue
When you’re taking a leisurely stroll through the bustling streets of London, the sight of the London Eye and Millennium Bridge is hard to miss. But what’s the common link between the London Eye, Millennium Bridge, City of Manchester Stadium, La Sagrada Familia in Spain, and the Sydney Opera House?
Yes, they’re iconic structures, but they share something else: the British firm Arup. Headquartered in London, founded in 1946, and operating around 94 offices across 35 countries, Arup has played a role, in some shape or form, in creating each of these magnificent structures.
At the start of this year, a member of Arup’s finance team in their Hong Kong office received an email from the CFO regarding some potential financial transactions. Initially, the email was seen as suspicious and suspected to be a phishing attempt. However, a video call was then set up, featuring not just the CFO but also multiple colleagues known to the employee.
Fifteen transactions, amounting to $25.6 million, were carried out. So, what was common among the people joining the video call? No, they were not employees of Arup—they weren’t even real people! They were all generated through deepfakes. This is one of the most high-profile deepfake attacks recorded, according to a report published by CNN World in February this year.
The meteoric rise of the deepfake
The term “deepfakes” was introduced to the digital world in 2017 by a Reddit user of the same name. It combines “deep learning” and “fakes”: deep learning is the technology used to create them (the “how”), and the fakes themselves are the result (the “what”).
Interestingly, the most popular deep learning method used to generate deepfakes is the Generative Adversarial Network (GAN), a machine learning framework developed by Ian Goodfellow and his colleagues in 2014, three years before the term “deepfakes” was coined. The first working deep learning algorithm, however, was published way back in 1965 by Alexey Ivakhnenko and Valentin Lapa.
According to Home Security Heroes, there was a 550% rise in deepfake videos between 2019 and 2023, and it’s estimated that 8 out of 10 people will likely encounter a deepfake by 2025. VPNRanks claimed that in 2023, financial losses related to deepfakes reached around $12.3 billion, a figure expected to soar to $40 billion by 2027, growing at a CAGR of 32%.
There’s even a cottage industry on the dark web selling software to execute scams for prices ranging from $20 to thousands of dollars (Bloomberg). Deepfake apps are easily accessible on major app platforms, and some even offer the option to apply deepfakes in real-time video calls (as used in the Arup deepfake attack).
Security is solved. Identity is the real issue.
Let’s face it—deepfakes are not just security problems; they’re identity problems. Security is a backdoor problem, whereas identity is a front-door problem. In the Arup deepfake attack, the fraudsters didn’t attempt a backdoor attack to steal the money; they walked in through the front door, disguising their identities as the CFO and the colleagues in a believable way.
If we can find a solution to verify that videos of a person truly belong to that person, we have a way forward. The solution must also be an identity solution, and, as with any technology, it needs to be humanised.
In recent years, cryptography has been at the forefront of identity solutions, from blockchain to cryptocurrencies, and from passkeys to eSignatures. Sekura has been leading identity, authentication, and fraud mitigation solutions by reusing cryptographic mechanisms in the humble yet powerful SIM.
How the SIM is the answer to identifying a deepfake
The SIM is an efficient cryptographic engine embedded in hardware, and it has been performing cryptographic authentication since 1991. The “I” in SIM stands for Identity, and that remains true for eSIMs. So, what could a solution combining cryptography and the SIM look like? It’s simple: it’s about reusing the SAFr Auth product and extending it.
The SIM contains a unique cryptographic key that authenticates the device’s possession of the SIM. The binary package of a captured video could be cryptographically signed with the unique key in the SIM so that the same signature can be verified (authenticated) while the video is being consumed (played, streamed).
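To make the sign-then-verify idea concrete, here is a minimal sketch in Python. It is illustrative only: the key, function names, and payload are all hypothetical stand-ins, and HMAC-SHA256 over the video bytes is used in place of the SIM’s real on-chip signing, which would happen inside the SIM itself (the key never leaving the hardware) via the operator and device telephony stack.

```python
import hashlib
import hmac

# Hypothetical stand-in for the secret key held inside the SIM.
# In the real flow this key never leaves the SIM; signing would be
# performed by the SIM's cryptographic engine, not application code.
SIM_KEY = b"example-sim-secret-key"

def sign_video(video_bytes: bytes) -> str:
    """Sign the captured video's binary package at capture time."""
    return hmac.new(SIM_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    """Verify the signature when the video is consumed (played, streamed)."""
    expected = hmac.new(SIM_KEY, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

video = b"\x00\x01...binary video payload..."
sig = sign_video(video)
print(verify_video(video, sig))                 # genuine capture: True
print(verify_video(video + b"tampered", sig))   # altered content: False
```

Any modification to the video bytes after capture, including replacing frames with AI-generated ones, breaks the signature, so the player can distinguish a device-captured video from a manipulated one.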
Perhaps a visible mark could be added to the video frame (similar to the blue tick used on Twitter/X for verified profiles) for verified (authenticated) videos. As long as AI-generated videos (and images and audio) can be identified as such, humans can be protected from impersonation fraud, while the videos can still be enjoyed for entertainment.
Of course, this requires collaboration across the ecosystem—from mobile operators to OEMs and video capture and display app providers.
AI is bringing unprecedented changes to our daily lives. Many of these changes are positive, propelling humanity to the next level of evolution, from accelerating protein folding to finding cures for cancer. But some changes threaten our digital safety.
As the old Chinese proverb says: “When the winds of change blow, some build walls, while others build windmills.” I choose to build windmills, and I’m sure you would as well. Let’s make the digital world SAFr—together, we can do it.
Gautam Hazari, the founding father of mobile identity and a renowned expert in artificial intelligence, has been pioneering secure, human-centric identity solutions for decades. As Sekura.id’s Chief Technology Officer, Gautam blends his visionary expertise in mobile and digital identity with a deep understanding of fraud mitigation, helping to make the digital world safer. His latest innovations tackle today’s sophisticated fraud challenges, where AI and identity intersect. After all, as we navigate through the rising tides of deepfakes and identity risks, it’s clear that the humble SIM is far from obsolete—it’s our quiet, steadfast guardian.
“Security might keep the back door locked, but identity keeps the front door safe.”