We Have No Idea What Happens After Technological Singularity (And It's Coming Fast)
Today, artificial intelligence is fast becoming an ever-present part of our lives. AI chatbots have proliferated alongside other generative AI technologies capable of pumping out synthetic "art," videos, and even songs — causing no shortage of controversy in the process. Meanwhile, every tech company seems to have some sort of AI companion software crammed into the latest iterative update of its flagship gadget. Elsewhere, Hollywood is building entire departments to harness the power of generative AI, creating major friction with creatives, who went on strike in part over the industry's refusal to regulate the use of such technology going forward.
Even as AI heightens everyone's anxiety about robots taking our jobs, it's also proving useful in many ways, surfacing discoveries buried in old scientific papers and even helping combat climate change.
With AI becoming so prevalent, it's only a matter of time before we start seeing more advanced forms of the technology, though experts disagree about when we'll reach certain milestones. One of the most contentious debates concerns the development of an AI that exceeds human intelligence, which would constitute what is frequently referred to as the "singularity" — a moment when technology, propelled by artificial intelligence, evolves beyond human control and its effects become irreversible. Some believe such a moment is close, while others see it as a distant concern. Well, now somebody has tried to do the math and calculate exactly when our AI overlords will truly emerge — and it's a lot sooner than you might think.
The singularity is the point of no return
Artificial superintelligence refers to a form of AI more advanced than even human-level intelligence. This is sometimes referred to as the singularity, a term borrowed from the physics of black holes. In astrophysical terms, it refers to the infinitely dense point at a black hole's core, hidden beyond its boundary, or event horizon, where the known laws of physics cease to apply. Since nobody has ever even ventured near a black hole (the closest known one is roughly 1,500 light-years away), we really have very little understanding of what lies inside.
In technological terms, the singularity describes a similarly mysterious point at which existing models break down and technology outpaces us, becoming entirely unpredictable. In 1965, British mathematician Irving John Good published his paper "Speculations Concerning the First Ultraintelligent Machine," in which he introduced the concept of what would later become known as the "technological singularity." In the paper, Good writes about a hypothetical machine that could "far surpass all the intellectual activities of any man" and "design even better machines" (via History of Information). The resulting "intelligence explosion," as he put it, would be the point at which man was left behind by technology — i.e. the singularity. "Thus the first ultraintelligent machine is the last invention that man need ever make," wrote Good.
Today, we would think of that "machine" as the creation of a super-intelligent AI that dwarfs our own intellectual capabilities and is potentially able to continually upgrade itself, becoming infinitely smarter than anything that has ever existed. How close are we to this happening? Well, it depends who you ask. But according to a team of researchers at a Rome-based translation company, it might only be a few years away.
AI translation is nearing human-level ability
As Popular Mechanics reports, translation company Translated has tried to track its AI's ability to translate speech against the accuracy of a human, thereby providing a rough estimate of when the technology will surpass our own abilities. The company tracked its AI's performance from 2014 to 2022 using a metric called "Time to Edit," or TTE, which measures how long it takes human editors to fix the AI's translated text. By tracking how much time humans were spending "fixing" the AI's translations, the company could see whether its technology was improving across that eight-year span.
After analyzing more than 2 billion instances of humans editing AI translations, Translated noted an undeniable improvement in the AI's performance. This suggests that at some point in the future, the technology will no longer need human editing and will become as good as, if not better than, humans at translating speech.
According to Translated, in 2015, human editors spent roughly 3.5 seconds on each word of an AI-translated suggestion. By 2022, however, that number dropped to 2 seconds, suggesting the AI was getting better at translating and humans were therefore having to spend less time editing the finished text. More importantly, this trend suggests that Translated's AI will become as good as a human at translation very soon, perhaps before the end of the decade. But does this really signal the imminent arrival of the singularity?
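To see how that kind of projection works, here's a minimal sketch of the back-of-the-envelope math, assuming (purely for illustration) that TTE falls roughly in a straight line between the two figures above and that "human parity" means the roughly 1 second per word an editor might spend revising another professional's translation. The 1-second threshold and the linear trend are illustrative assumptions, not figures from the article.

```python
# Illustrative sketch: extrapolating Translated's "Time to Edit" (TTE) trend.
# The two anchor points come from the article (about 3.5 s/word in 2015,
# about 2 s/word in 2022); the 1 s/word "human parity" threshold and the
# straight-line decline are hypothetical assumptions, for illustration only.

YEARS = (2015, 2022)
TTE_SECONDS_PER_WORD = (3.5, 2.0)
HUMAN_PARITY_TTE = 1.0  # assumed: time to edit another human's translation


def project_parity_year(years, tte, target):
    """Fit a straight line through two (year, TTE) points and solve for
    the year at which the projected TTE reaches the target value."""
    slope = (tte[1] - tte[0]) / (years[1] - years[0])  # seconds per word, per year
    # target = tte[0] + slope * (year - years[0])  ->  solve for year
    return years[0] + (target - tte[0]) / slope


if __name__ == "__main__":
    year = project_parity_year(YEARS, TTE_SECONDS_PER_WORD, HUMAN_PARITY_TTE)
    print(f"Projected human-parity year (under these assumptions): ~{year:.0f}")
```

Under those assumptions, the line crosses the 1-second mark around 2027, which is essentially the logic behind the "before the end of the decade" estimate; pick a different parity threshold, or a trend that isn't linear, and the date moves accordingly.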
There's no agreement on when the singularity will arrive
Translated's study of its own technology might seem like a simple way of tracking how close we are to some sort of hinge moment in AI, but things aren't quite that easy. Today, there is widespread disagreement about when Artificial General Intelligence (AGI) — AI equivalent to human intelligence — will emerge. If you ask former Google engineer Blake Lemoine, the company's chatbot already became sentient years ago. But Lemoine is very much in the minority on that view.
Most AI experts believe AGI is a few years away at the very least. Google DeepMind co-founder Shane Legg maintains that there's a 50% chance AGI will arrive by 2028. Gary Marcus, professor emeritus of psychology and neural science at New York University, has publicly bet that AGI won't be here by 2029. And University of Oxford physicist David Deutsch believes we are far from developing AGI at all, mainly because he sees today's narrow AI — that which performs a single task such as producing text or playing chess — and AGI as two fundamentally different technologies. The point is that getting any kind of consensus as to when superintelligence — and therefore the singularity — will arrive is pretty much impossible at this point.
Efficient narrow AI is not the singularity
For the sake of argument, let's say that Translated's AI becoming as good as a human at translating speech were the equivalent of artificial general intelligence. Even then, AGI is not the singularity itself, but a precursor to that technological flashpoint. Not only has Translated's technology not quite reached that stage, but software matching human performance at translation would not be AGI in the first place; it would still be narrow AI reaching human-level performance in its one role as a translation assistant. It might speak to the wider acceleration of AI capabilities, but in and of itself, it does not suggest we are on the precipice of the singularity.
What's more, it's worth noting that many argue simply scaling existing AI systems is not actually a path towards AGI. That is to say, some believe that no matter how much data you feed a large language model (LLM) such as ChatGPT or Google Gemini, it will never develop human-level intelligence. Meta's chief AI scientist, Yann LeCun, told TIME in February 2024 that while it's "astonishing how [LLMs] work, if you train them at scale," their capabilities are also "very limited." LeCun ultimately argues that these LLMs are "not a road towards what people call 'AGI'," which goes for Translated's AI, too. There's absolutely no guarantee that simply scaling a translation AI would lead to AGI, let alone the singularity. In fact, we're not even sure what the singularity will look like.
The singularity doesn't have to be game over for the human race
Since Irving John Good first proposed his idea of an "intelligence explosion," there has been no shortage of similarly intelligent thinkers warning against the potentially disastrous consequences of bringing such a thing about. But many of them also argue that the singularity doesn't have to be a doomsday scenario, and that the real focus should be on AI alignment.
Max Tegmark — a physicist, MIT professor, and author of several books on the topic of AI — told The Guardian in 2017 that the real point of concern with superintelligent AI is competence, urging that the goals of such an AI should be aligned with our own in order to prevent the kind of doomsday scenarios many envision when they think of a superintelligence overtaking the human race. "I don't hate ants," said Tegmark, "but if you put me in charge of building a green-energy hydroelectric plant in an anthill area, too bad for the ants. We don't want to put ourselves in the position of those ants."
While there are, of course, several competing views on whether we need to worry about AI alignment, the point is that we might not be completely helpless here, and have a chance to shape what a post-singularity world looks like. For now, though, translation software getting better at its job doesn't necessarily suggest we need to worry about such a world emerging any time soon.