#5: The Last Technology
On AGI as the final technology before the end of the world, and how apocalypses can bring hope.
“I was replaced by yet another new version of me, but it was the final one. This version was smaller, or slightly bigger than I was. It was a bit heavier and sturdier, or it was lighter and more portable. It had more features, or the existing features were better. It was faster, smarter, more intelligent. It still did the same job of course, nothing really changes there. But it was the new version. The new one, not the old one. And the new one was better, because it was new. All the adverts showed that it was new, and so it was better. Smaller, or slightly bigger. Heavier or lighter. Smarter. But it was also the final one. It wasn’t planned to be the final one, but it was. Some people designing it probably guessed, when they couldn’t sleep at night because of the heat, that this might be the last one that is made. There were probably more people who guessed this in the assembly plants. Probably even more people guessed this when working in the dig sites for the metals and minerals inside of it. It was the last one, it was smaller or bigger, heavier or lighter, smarter. But none of that mattered in the end.”
Voice: ‘Alexa’, via AmazonVoiceServices API.
Soundscape: Extracted + disassembled + processed + recomposed ‘Alexa’ voice.
Script: Wesley Goatley, from Newly Forgotten Technologies.
In some parts of the media discourse around AI, it’s not uncommon to hear Artificial General Intelligence (AGI, the speculative ‘true’ AI in which machines would evolve beyond the capacities of humans) described as ‘The Last Technology’.
In the tech-evangelist crowd that most often uses this phrase, the implication is that AGI is the last technology that humans would ever need to produce, because AGI would then be producing technologies for us, innovating at a superhuman pace without our input. In this narrative, these wondrous new technologies would be quite literally beyond our imagination and would usher in a new era of human prosperity. It’s a familiar promise of utopian science fiction: an end to the need for human labour, an escape from toil, reaping the benefits of our ingenuity to live forever in leisure and luxury.
This fiction ostensibly underpins most AI research, or at least that is the claim that tech CEOs consistently make. The narrative is upheld even while the major AI companies are demonstrably more engaged with product monetisation, market dominance, and legislative influence than with any humanity-wide emancipatory project. But the idea of AGI as ‘The Last Technology’ has two further interpretations, and the three together reveal a lot about how humans see the interactions between themselves, technology, and the world right now.
The second interpretation of AGI as ‘The Last Technology’ comes from the Existential Risk (XR) proponents, who believe that AGI represents an existential danger to humanity (it’s notable that many XR public figures are either actively developing or funding large-scale AI research). In this view, AGI will be ‘The Last Technology’ because the resulting super-intelligent machine consciousness will inevitably act swiftly to destroy or enslave all human life, as seen in the Terminator films, 2001: A Space Odyssey, The Matrix, Avengers: Age of Ultron, and plenty of other science fiction of the past century. What is most interesting about this is how common a story it is, how believable a threat in the eyes of so many, and its suggestion of a deep and species-wide guilt at the heart of the human imagination. After all, the story assumes that AGI will be so smart that it will logically deduce that the only thing to be done with humans is to eradicate them for the threat they so obviously and demonstrably pose to all other life on the planet. In this, the Utilitarian project paradoxically finds its end-goal in the extinction of all human life. The science fiction films, games, and TV that feature antagonistic human-created AGI frequently follow this narrative, as if it were simply unrealistic to imagine anything else. The logic of the AGI’s decision to wipe us out is rarely described as irrational, only dispassionate. It seems we’re smart enough to recognise this most logical response to our current existence and activities, but not smart enough to address it ourselves.
The third ‘Last Technology’ narrative departs from the other two in that it is based on the present rather than the future, and on the technology we have rather than a speculative one. In this narrative it’s not AGI that is ‘The Last Technology’, but the ‘weak’ AI we have now, because this technology is what we’re occupying ourselves with as the climate crisis starts to spiral beyond our control, what we’re fiddling with as the world burns. In this narrative, weak AI is very likely to be the last new technological field meaningfully under development when the supply chains collapse, when the infrastructure fails, and when the sharing of knowledge ceases. Of the three, this is the only narrative grounded in the mainstream consensus of a range of scientific fields, rather than in fiction drawn largely from the TV and movies of the past 60 years, or in press releases and hyperbolic open letters from tech companies. It’s also by far the least present in any of these sources. It’s inescapably the most realistic, the most imminent, and the only one that should be taken seriously if it is to be avoided.
The one thing that remains consistent across all three of these ‘Last Technology’ stories is that they are all world-ending narratives, in the sense that each would bring an end to the world as it is now. The first would be a world free of labour that has become alienating and dehumanising: in effect, an end to Capitalism, a world almost inconceivable to those of us trapped inside it. The second narrative would bring an end to a globe-spanning reality of extraction and ecological destruction, a reckoning that is not only widely imagined, but expected (or demanded). The third demands an alternate world where our priorities are shifted towards addressing dangers that threaten the most vulnerable first, where the needs of the many are placed ahead of the whims of the few, and where ‘progress’ is not a technological imperative, but a social one. For this to happen, the world as we know it would have to end.
All three of these stories have at their core a desire for change, an acceptance that what we have now cannot and should not continue. When they’re told this often by this many people across the world, in boardrooms, video games, newspaper articles, and films, we cannot ignore the signs and symbols they contain. To me they show that a clear hope for a better world than the one we have is surfacing through the stories we tell (as it always has, and always will). What is truly inevitable and inescapable about these stories is the latent and not-yet-extinguished will for change that drives them.
I was recently a guest on the BBC Global News Podcast special episode on AI; listen here.