#14: AI Will Only Die When Capitalism Does
On Deepseek, AI as the perfect product of capitalism, and the methods of intervention.
“That final A.I. summer had burned so brightly, and for so much longer than the ones before. In the previous A.I. summers, those times when A.I. had captured the public imagination and that of the militaries who funded it, the A.I. salesmen always made grandiose promises of what A.I. was going to do, what it could be, how it would transform everything. They made themselves out to be prophets of a glorious technological future, a golden age for humanity, one of unparalleled progress. They promised that true artificial intelligence was just around the corner, and if they had just a bit more time and just a bit more funding it would be here, and it would change everything. In those A.I. summers the military eventually pulled the funding when all they got after years of waiting were some unintelligent computer programs pretending to be human, instead of intelligent computer programs that could drive tanks. And so after every A.I. summer there would be an A.I. winter, where the funding would dry up, and A.I. would be mocked for being a fantasy, and the salesmen either went bust or rebranded themselves. They’d wait for the next summer to come, when the cycle would begin again. But that final A.I. summer had been different.”
Excerpt from ‘A Final A.I. Winter’ from the collection Newly Forgotten Technologies: Stories From AI-Free Futures.
In January the tech news landscape became dominated by Deepseek, a Chinese company who released a series of AI models demonstrating performance comparable to OpenAI’s flagship ChatGPT and DALL-E products while being far cheaper and more efficient to train and deploy. On top of this, Deepseek released the source code for these models (though not the training data), including some models that can run on consumer-grade laptops. Deepseek’s announcement was swiftly followed by similar models from ByteDance and other major Chinese tech companies, all of which boasted similar gains in performance and efficiency.
[Endless opinion content and AI slop abounded about the ‘death of American AI’ after the Deepseek news]
Deepseek achieved their successes using far less powerful hardware than is available to their US counterparts, due to US restrictions on exporting this hardware to China as part of the ongoing trade war between the two countries. The global AI discourse had previously orbited around a few powerful US companies (such as OpenAI and Anthropic) who promised that their products would one day become machine intelligences, and that this lofty goal required their huge hardware purchases, funding demands, data center usage, resource consumption, political favours, and legal exemptions. Deepseek’s success undermines the legitimacy of these demands and demonstrates that US AI research is not the most advanced, efficient, or profitable in the world.
The Deepseek event does not mean that the generative AI bubble is bursting. Generative AI is proliferating at scale, and you do not have to look far to find companies and governments attempting to shoehorn generative AI tools into any device, product, or service they can (or simply rebranding their existing software as ‘AI’), even in the absence of public demand. This desperate splattering of the AI brand over even the most banal consumer tech creates a cultural and economic pressure to maintain the hype and hyperbole that the AI industry has always needed to survive; at least until those devices fail, disappoint, and cause the inevitable chaos that comes from relying on such an inherently unreliable technology.
[This story is from August 2023, and is as relevant now as ever]
The only genuinely hopeful thing I think will emerge from all this is that Deepseek’s success may reduce the extreme energy consumption involved in training and deploying AI products, which drove precipitous increases in many tech companies’ energy usage in 2024. Deepseek’s demonstration that the mainstream US generative AI industry has been inefficient in its use of resources sets a far lower bar for the cost, and therefore the consumption, of generative AI development, and its approach can be replicated through Deepseek’s open-access code (unlike OpenAI, who are in no way ‘open’ to sharing their flagship code or data). The best-case scenario is that a push towards cheaper, lower-compute models becomes the dominant paradigm of the industry, rather than the hugely consumptive and expensive approaches that the US tech industry has indulged in up to now.
However, these energy savings may be undone by the sheer number of new AI bottom-feeders enabled by the Deepseek model. Considering that the US AI giants have yet to make generative AI profitable, there will likely be many actors worldwide making and training a mass of new products, run at the lowest possible cost, to try and finally turn a profit in this market. If this follows every other gold rush in the history of the internet, it will include the invasive collection and selling of users’ data to create that revenue, done through poorly developed products that offer novelty parlour tricks, have poor security, and are ripe for hacking attacks.
[As the old saying goes, if you’re not paying for a service you are not the customer, you are the product]
Regardless, generative AI will eventually fade away as it inevitably succumbs to the wrath of the hype cycle, because it is a technology that trades on what it pretends to be (a machine that’s alive) rather than what it is (bias automation). The evidence of this can be seen when companies like OpenAI demand trillions in investment, promising that they need it not to make more generative AI, but to create the AGI God/Pinocchio/Frankenstein/artificial slaves/Majel Barrett (delete as appropriate to your ideological position on AI). Once the brand of generative AI has lost its use in legitimising this investment, it’ll be replaced by another thing that we will be told is the next step towards creating humanlike intelligent machines. ‘Quantum AI’ would be my current bet. Quantum AI has all the hallmarks needed for this scam: it combines two concepts that are inherently difficult to explain and often poorly defined, and both are the subject of endless reams of fiction. Regardless of the brand, I think we will always have something that promises Artificial Intelligence for as long as late capitalism persists, because AI is the perfect capitalist product. It is built upon systems that are maintained by (and maintain) an entrenched capitalist order: speculative financial investment, the exploitation of individual rights, and intensive resource extraction. What we get in return are promises of solutions to the crises that capitalism itself creates, such as wage slavery (“AI will do your job for you”), climate collapse (“AI will ‘solve’ it”), and health epidemics (“AI will find all the cancers”). Capitalism has always sold back cures to its own diseases, but never has the scale of consumption been so large, the product so absent, and the timeline so critical.
No technology is good, bad, or neutral, so it falls to us to see the current state of AI tools as a provocation to consider how these things could work for our benefit rather than against us. I see genuine value in taking code such as Deepseek’s and other open-source models and interrogating what a progressive non- or anti-capitalist generative AI looks like, not in theory but in practice.
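To give a sense of what ‘in practice’ can mean at its most basic, here is a minimal sketch in Python of running one of the smaller open-weights models entirely on your own machine. It assumes the Hugging Face transformers library and the DeepSeek-R1-Distill-Qwen-1.5B checkpoint (my assumption of a laptop-sized starting point; any open-weights model would do), and nothing in it touches a commercial API or sends a prompt anywhere:

```python
# Minimal sketch: run a small open-weights Deepseek distillation locally.
# Assumes the Hugging Face `transformers` library is installed and that the
# deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B checkpoint is the chosen model
# (an assumption; any open-weights checkpoint works the same way).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

# Download (once) and load the tokenizer and weights onto the local machine.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

# From here on, everything runs on your own hardware: no API, no subscription,
# no prompts leaving the device.
prompt = "Describe the human labour involved in training a language model."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are sitting on your own disk, questions about what the model privileges and what it omits stop being abstract: you can probe its outputs directly, and keep, retrain, or discard it on your own terms.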
This still won’t make generative AI anything more than bias automation, but this can be an opportunity, not an obstacle. ‘Bias’ gets used pejoratively, but at its core it describes a viewpoint and a set of knowledges that get embedded into the things we make. In which case, as generative AI tools proliferate around us, what are the biases that we’d want to see in them, rather than the ones that are there right now? Biases towards whom, or against what? A lot of my work is going to be oriented towards these questions this year. But at the same time, I feel more and more that critically-led Luddism is a valid position. When it comes down to it, some machines need to be destroyed for us to have a just society beyond and without them.