The Introduction of ‘LongNet’ – Gautam Hazari

8,000 tokens vs. 1 billion tokens

July 2023 was an interesting month for the digital world: from the announcement of xAI by none other than Elon Musk on the 12th of July, in an endeavour to “understand the universe”, to the launch of Worldcoin and World ID on the 24th of July by Sam Altman, CEO of OpenAI (the company behind ChatGPT), triggering interesting discussions around privacy, UBI (Universal Basic Income), decentralisation and biometrics. 

Amid the noise of AI rhetoric across the digital world, it’s easy to overlook significant developments that may alter the gradient of Generative AI’s already steep growth curve. A similarly significant event was witnessed on the 12th of June 2017, when the famous paper “Attention Is All You Need” was published by researchers from Google and the University of Toronto, introducing the world to a new architecture called the “Transformer”. 

It took another five years to make it to popular usage when ChatGPT was launched by OpenAI on the 30th of November 2022.

Just a reminder: ChatGPT uses the Large Language Model – GPT – the Generative Pre-Trained Transformer, a realisation of the 2017 paper. The rest is, as they say, history. 

Coming back to the significant event which might have been overlooked in July, it’s the introduction of “LongNet”, in a paper published on the 19th of July by Microsoft Research.

The paper proposes a breakthrough called “Dilated Attention”, which will put Transformers on multiple rocket boosters, pushing towards light speed. 

The hint is there in the title of the paper itself: “LONGNET: Scaling Transformers to 1,000,000,000 tokens”, yes that’s 1 billion tokens! 

For context – the “tokens” referred to here relate to the sequence length used by Transformer-based Large Language Models, which in turn defines how much text can be used as input to provide context (e.g., the text we type into ChatGPT prompts). 

The number of words is approximately 0.75 times the number of tokens. So, what’s the sequence length/tokens for GPT-4? 

It is approximately 8,000 tokens, which is around 6,000 words (8,000 × 0.75), although there is also a newer 32K-token version of GPT-4 (gpt-4-32k). 
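The 0.75 words-per-token rule of thumb used above is easy to express as a quick sanity check (the function name and the heuristic ratio are just the article’s rough rule, not an exact tokeniser calculation):

```python
# Rough rule of thumb from the article: 1 token ≈ 0.75 English words.
# Real tokenisers vary by language and vocabulary, so treat this as an estimate.
def tokens_to_words(tokens: int, words_per_token: float = 0.75) -> int:
    return int(tokens * words_per_token)

print(tokens_to_words(8_000))          # ≈ 6,000 words (GPT-4 8K context)
print(tokens_to_words(32_000))         # ≈ 24,000 words (gpt-4-32k)
print(tokens_to_words(1_000_000_000))  # ≈ 750,000,000 words (LongNet's claim)
```

At a billion tokens, that is roughly three-quarters of a billion words of context – orders of magnitude beyond entire book series.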

Now compare that to what the “Dilated Attention” will bring in – with 1 billion tokens. 8,000 tokens vs. 1 billion tokens.
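The core idea that makes this scaling possible can be sketched in a few lines: instead of letting every token attend to every other token (a cost that grows quadratically with sequence length), the sequence is split into segments, and within each segment only every r-th position participates in attention. This is a deliberately simplified illustration of the pattern described in the LongNet paper – the real mechanism mixes several segment-length/dilation pairs in parallel and softmax-weights their outputs – and the function name and parameters here are our own:

```python
import numpy as np

def dilated_attention_mask(seq_len: int, segment_len: int, dilation: int) -> np.ndarray:
    """Boolean mask: within each segment, only every `dilation`-th
    position attends to every `dilation`-th position.
    Simplified sketch of one (segment, dilation) pair from LongNet;
    the paper combines several such pairs in parallel."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for start in range(0, seq_len, segment_len):
        idx = np.arange(start, min(start + segment_len, seq_len), dilation)
        mask[np.ix_(idx, idx)] = True  # sparse block of attended pairs
    return mask

seq_len = 64
dense_pairs = seq_len * seq_len  # full self-attention: O(n^2) pairs
sparse_pairs = int(dilated_attention_mask(seq_len, segment_len=16, dilation=4).sum())
print(dense_pairs, sparse_pairs)  # 4096 dense pairs vs 64 dilated pairs
```

Because the number of attended pairs grows linearly rather than quadratically with sequence length, the same hardware budget stretches to vastly longer sequences – which is what lets the paper talk about a billion tokens at all.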

Here is a relative positioning of sequence length/tokens across popular Transformer models (GPT shown here is the first version which had 512 tokens):

With tokens in the range of tens of thousands in GPT-4, we are already witnessing a hitherto unimaginable revolution in the form of Generative AI. With 1 billion tokens using “Dilated Attention” in LongNet, it is not even possible to comprehend what this will bring for Generative AI.

We have already seen the impact of the current Generative AI and LLMs (Large Language Models) on the fraud landscape in the digital world – LexisNexis CEO Haywood Talcove warned of a looming threat: AI-assisted fraud schemes could surpass $1 trillion if swift action is not taken “in weeks”.

These models are already close to invalidating every known form of active authentication; “Dilated Attention”-based LongNet will make any form of “active authentication” completely unusable. 

The solution is simple, and we’ve talked about it many times: use “passive authentication” – trusting cryptography in the zero-trust environment and bridging cryptography with humanisation.

The mathematics behind cryptography and cryptographic authentication stands tall in the unimaginable fraud landscape that Generative AI, enabled by the LongNet and “Dilated Attention” breakthrough, will bring in, as it does not rely on users actively participating in the authentication process. 

So, let’s add portable, secure hardware dedicated to cryptographic authentication into the mix – and yes, we’re talking about the humble SIM card, which has been with us since 1991, cannot be spoofed, and has been quietly providing cryptography: seamlessly, silently, securely and swiftly.

We get assurance in the form of a future-proof solution – and yes, it is humanised too, as it works seamlessly without requiring the human user to worry about the cryptography. 

This is SAFr Auth, the future of Authentication providing the Internet’s missing Identity layer.

Let’s make the world a SAFr place, now and for the future.

Gautam Hazari is Sekura’s Chief Technology Officer and a Mobile Identity guru. He and his team have built the Sekura API Framework (SAFr) – a unique platform that connects in real-time to mobile operators to allow enterprises to build trust in their customers, prevent fraud and create awesome logins.

Sekura.id works with the industry’s leading Identity vendors. Be part of our exclusive partner network and add best-in-class mobile identity services to your portfolio.

Already on six continents, we’re on a mission to provide truly global mobile identity coverage. Unlock your mobile network’s potential by working with Sekura.id.