While the performance of current AI systems may seem impressive, there’s a long way to go before we’re likely to see true human-like capabilities.

AI is everywhere – or so you might be forgiven for thinking. Every corporate announcement and government initiative these days seems to cite it as a badge of honor.

Artificial intelligence has come a long way since the 1950s, when the term was coined by Stanford emeritus professor John McCarthy, who defined it as “the science and engineering of making intelligent machines.”

But the term actually covers a number of different technologies and concepts, which can have radically different characteristics and capabilities. So how do experts categorize the different stages of AI, and how can we expect it to develop in the future? Let’s dive into three key terms: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence.

Artificial Narrow Intelligence (ANI)

Artificial narrow intelligence (ANI), also sometimes known as weak AI, is regarded as the most basic stage of AI, embracing everything from simple rule-based and decision-tree systems to artificial neural networks that can recognize patterns and make decisions based on them.

It also includes ‘genetic algorithms’ and ‘evolutionary computation’, which mimic natural selection to improve performance over time, as well as fuzzy logic systems and Bayesian networks that can make use of imprecise or incomplete information.
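
To make that a little more concrete, here is a minimal Python sketch of a genetic algorithm of the kind mentioned above. Everything in it – the bit-string representation, the fitness function, the population size and mutation rate – is an illustrative assumption rather than a description of any real production system.

```python
import random

# Toy genetic algorithm: evolve a random bit-string towards all 1s.
# All parameters here are illustrative assumptions, not taken from a real system.
TARGET_LEN, POP_SIZE, MUTATION_RATE, GENERATIONS = 20, 30, 0.05, 100

def fitness(individual):
    # Fitness is simply the number of 1s in the bit-string.
    return sum(individual)

def mutate(individual):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in individual]

def crossover(a, b):
    # Single-point crossover: splice the front of one parent onto the back of another.
    point = random.randint(1, TARGET_LEN - 1)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Reproduction: refill the population with mutated offspring of random parents.
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

print("Best fitness:", max(fitness(ind) for ind in population))
```

Run for enough generations, selection and mutation alone nudge the population towards the optimum – performance improves over time without anyone writing explicit rules for how to solve the problem.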

However, ANI is reliant on the data it’s trained on, and can’t teach itself to perform new tasks.

It’s used widely in tasks such as manufacturing assembly, supply chain management, customer service, healthcare diagnosis, financial data analysis, and in personal assistants such as Siri and Alexa.

Right now, perhaps the most prominent examples of ANI are ChatGPT and similar generative AI models. Fed with data harvested from the internet, such models – entertainingly and rather accurately described as ‘auto-complete on steroids’ – may appear sophisticated but in fact perform only a narrow function and are incapable of reasoning or learning.

They are also prone to, well, making things up – a phenomenon known as hallucination.
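To see why ‘auto-complete on steroids’ is a fair description, here is a deliberately tiny sketch of next-word prediction. Real systems such as ChatGPT use large neural networks over much longer contexts rather than a simple look-up table, so treat this purely as an analogy; the corpus is made up for illustration.

```python
import random
from collections import defaultdict

# Toy 'auto-complete': a bigram table built from a tiny made-up corpus.
# Real LLMs use neural networks over far longer contexts, but the core idea
# is the same: pick a plausible next word given the words that came before.
corpus = "the cat sat on the mat and the cat chased the dog".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)  # remember what tends to follow each word

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample a statistically plausible continuation
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Notice that the model never checks whether its output is true – it only ever asks what usually comes next – which is also why hallucination comes baked into this way of generating text.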

Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI), also known as strong AI, is seen as the next stage for AI – and is really what most people imagine a true AI to be.

An AGI would be capable of the same level of learning and understanding as a human being, and of carrying out the same level of intellectual tasks – while having instant access to a far greater range of data.

Unsurprisingly, the concept has caused a certain level of alarm, with Stephen Hawking warning that it could even spell the end of the human race: “It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Some are concerned that AGI could be just around the corner – including, for what it’s worth, Elon Musk.

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast – it is growing at a pace close to exponential,” he commented on an essay by computer scientist Jaron Lanier.

“The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most.”

But before you start to panic, it’s worth noting that he made this comment way back in 2014. And other, perhaps more expert, observers currently believe that AGI isn’t going to be taking over the world any time soon.

In 2017, for example, Richard Sutton, professor of computer science at the University of Alberta, suggested that there was a 25% chance of it emerging by 2030, a 50% chance by 2040, and a 10% chance that it would never materialize at all.

Artificial Superintelligence (ASI)

This is where we get into real science-fiction territory. Artificial superintelligence (ASI) would, as the name implies, surpass human intelligence in every way. It could be self-aware, with its own emotions, beliefs, and desires.

In 2021, an international group of researchers concluded that it would not be possible to contain an ASI, because no containment algorithm could guarantee that it would never harm people.

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations,” explained team member Iyad Rahwan, director of the Center for Humans and Machines.

“If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.”
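The ‘basic rules from theoretical computer science’ Rahwan refers to echo the classic halting problem. The Python sketch below is not the researchers’ actual construction – just a minimal illustration of the self-reference trap that rules out a perfect, fully general checker; halts() is a hypothetical oracle that cannot actually be implemented.

```python
# Illustrative only: the halting-problem construction the containment argument
# builds on. 'halts' is a hypothetical perfect analyzer; the construction shows
# that no such function can exist.

def halts(program, data):
    """Hypothetical oracle: True if program(data) eventually stops."""
    raise NotImplementedError("No general algorithm can decide this for all programs.")

def contrarian(program):
    # Do the opposite of whatever the oracle predicts about this very call.
    if halts(contrarian, program):
        while True:   # if the oracle says we halt, loop forever
            pass
    return "halted"   # if the oracle says we loop forever, halt immediately

# Asking halts(contrarian, contrarian) is contradictory either way, so a fully
# general analyzer - and with it a guaranteed containment check - cannot exist.
```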

Just this month, however, ChatGPT creator OpenAI announced the creation of a dedicated team charged with ensuring that these fears never come to pass.

It says it plans to dedicate 20% of its computing power to this end, and is currently recruiting staff.

“Our alignment research aims to make artificial general intelligence (AGI) aligned with human values and follow human intent,” the team explains.

“We take an iterative, empirical approach: by attempting to align highly capable AI systems, we can learn what works and what doesn’t, thus refining our ability to make AI systems safer and more aligned.”

In the long term, we can expect to see more initiatives such as this, as well as increased national and international controls on the development and use of AI. And while there’s no doubt about the potential dangers of an ASI, we’re a lot further from seeing one than might be apparent from much of the hype.

In short, there’s plenty of time.
