#18: The Horizon
On a future after data centers collapse; low-power AI and digital recycling in art practice; and postcapitalist storytelling.
This is the second of three posts on my recent artwork The Harbinger, The Horizon, and The Hope, a work in three parts that proposes three different near-futures for our relationship to AI tools and computational culture.
This post is on The Horizon, a low-power voice-to-voice artwork built from recycled and hacked electronics, which tells a story about survival and flourishing in an era of technocapitalist collapse. Here’s the description from its first exhibition at the Paul Mellon Center for British Art, earlier this year:
“The Horizon represents a glimpse into a possible near-future of post-abundance computation that also shows us how to better use the tools we have today.
It is a device from a future where resource scarcity, climate change, and the related political fallouts have restricted data center and cloud computing resources to industrial, governmental, and military sectors, and everyday people must make do without some of the ‘magic’ technologies they used to take for granted.
In this world, a lone former engineer is trying to build their own DIY AI assistant cobbled together from artefacts of the era of computational abundance: a hacked Kindle, a disassembled Amazon Echo, an old Raspberry Pi computer, and solar panels all jury-rigged together to make a recycled, low-power version of the AI consumer tech of today. This DIY system runs an offline voice assistant designed to reduce its power consumption as much as possible, with most of its old features disabled so that it now functions as a cooking assistant: suggesting recipes, playing music, and helping to maintain a simple vegetable garden nearby. Through interactions with the voice assistant, audiences can stumble upon the last voice memos that the mysterious former engineer received before the servers went down, which reveal what daily life may look like for those living through a future such as this.
This future is a likely one, but the warning it carries can teach us something about what we can do today. For example, restricting the AI model to be custom, single-purpose, and offline reduces its carbon footprint, secures the personal data of the user, and presents an alternative to mainstream AI tools that often ship features nobody asked for and address non-existent problems. It presents a paradigm for AI devices that, through disaster, might show us how to live better with the tools we have today.”
[Installation view of The Horizon.]
Context
The work was inspired by the fragility of the current era of computational abundance, in which vastly consumptive and unsustainable systems such as generative AI proliferate at scale and are hastily embedded into everyday systems and technologies without clear purpose or value. This has created a vast demand for computational resources to perform the constant churning of AI slop, and countries are scrambling to green-light new data center projects despite protests from local communities impacted by the pollution and water shortages these infrastructures create. I’ve argued elsewhere that, given the state of the climate crisis, this is the worst time in history for such a consumptive technology to be developed and rolled out so aggressively. Combined with other strands of the polycrisis, such as resource wars and rising global temperatures, the fragility of this moment becomes clear, and its unsustainability obvious.
Under these conditions, it’s plausible that this era of abundance is a historic blip we’re living through, one doomed to be short-lived. Under the pressures of spiralling costs, energy consumption, and resource scarcity, there would likely be a period of managed decline: a rationing of data center usage that changes it from an ever-present, always-on layer of everyday life into something scarce and privileged. Crucially, this rationing of always-on data center capacity would likely only be applied to us: governmental, industrial, and military systems rely on these infrastructures too, and in a crisis of this kind they would not be ‘switched off’ for them. We saw this form of prioritisation and government capture of everyday systems like logistics during the pandemic, and in the world wars of the last century, when everyday things became a rarity for all but the most powerful.
This is the reality that The Horizon is set in, where people are adapting to these changes. I wanted the work to show a glimpse of a new world that is struggling to be born, and where people are living, thriving, and adapting regardless.
Technicals
Much like The Harbinger, this work was designed to realistically and accurately depict an object from this potential future, moving beyond simulation or speculation into something that feels lived-in and tangible. To do this meaningfully, I set myself the limit of only using widely available and long-lived hardware to build the device.
At the center of it is a Raspberry Pi 5, running in a low power configuration with an optimised large language model running on-device (locally hosted, private, no data center calls made), powered by a solar panel and battery. For the display I repurposed an old Kindle e-reader, jailbroken to act as a screen for the Pi. The Kindle’s e-ink display is both extremely low power and highly resilient, already being almost 10 years old at this point; perfect for this context.
[Detail view of The Horizon, showing the hacked Kindle display and Amazon Echo mic and speaker that are used in the work. Picture taken at InSpace Gallery, Edinburgh.]
The microphone and speaker are both taken from a disassembled Amazon Echo smart speaker. For me, this felt like a believable setup: I’ve always seen the Echo as the exemplar object of this age of computational excess. It arguably heralded the current age of anthropomorphic AI, the exploitation of science fiction narratives, and the over-promising and under-delivering that defines the AI field today. On top of that, it has a unique history of AI waste, with multiple new Echo models released every year despite few meaningful changes to the hardware since 2016. In the future scenario of The Horizon, these devices would go from being disposable to being a valuable repository of components to be retooled into something new.
I wanted the LLM interaction to feel restricted and single-purpose, a genuine departure from how LLMs are designed today. Instead of being a source of probabilistic and error-prone answers to general questions, I wanted this system to reject general-purpose queries, doggedly refusing to do anything other than answer questions related to growing food, cooking, foraging, and its other narrowed functionalities. One solution would have been to train my own model, but I don’t think that training a model from scratch, with the associated energy costs, is ethically defensible in a work such as this. Instead, I opted to use a widely distributed model (Llama 3.2) as the base and restrict its functions through refined instructions in the system prompt (much as the system prompt in The Harbinger forced the model to respond in a way that kept its fictional reality intact). This is a trial-and-error process with no quick and easy shortcut; the instructions you need change depending on the model and a host of other parameters. But it’s also a fairly organic process: you test an instruction, query the model, go back and tweak the instruction, over and over until you get the effect you want.
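A restriction of this kind can be sketched as a system prompt prepended to every request. The prompt text and function name below are illustrative placeholders, not the actual prompt used in The Horizon:

```python
# Illustrative sketch of narrowing a general-purpose local model to a
# single domain via its system prompt. The prompt wording and helper
# name are hypothetical, not those used in the work.

SYSTEM_PROMPT = """You are an offline cooking and gardening assistant.
You ONLY answer questions about growing food, cooking, and foraging.
For any other topic, reply exactly:
"That is beyond my operational capabilities."
Keep every answer under three sentences."""

def build_messages(user_query: str) -> list[dict]:
    """Prepend the restrictive system prompt to each user request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

# In use, this message list would be passed to a locally hosted model.
messages = build_messages("How do I prepare sage for cooking?")
```

In practice, the refinement loop described above happens inside that prompt string: each iteration tightens the wording until the model reliably refuses anything off-topic.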
Narrowing the model to reject general queries also makes it possible to meaningfully save energy by scripting very brief responses (such as always replying “That is beyond my operational capabilities” to such queries), which cuts down on both processing time and token generation. Removing the sycophantic preamble from replies limits token generation and CPU runtime further, lowering the energy use of the model again. I’ve been experimenting with other techniques for this too, including using low-level models to parse inputs, much as GPT-5 does. But more on this approach in the next and final post in this series, on The Hope.
Interaction
As audiences interrogate the voice assistant in The Horizon, the details of its future are exposed, bit by bit. The hard limitations of what the model will give a response to are obvious straight away, which steers the audience into engaging in a realistic roleplay with the device, such as asking where to find sage in the local area, and how to prepare it for cooking, or checking on the health of the plants in the nearby garden via the digital irrigation system that the system can access. This open-ended form of narrative isn’t didactic, doesn’t force the audience into a particular form of engagement, and makes the story behind the work a process of discovery rather than exposition.
One of the things audiences discover is a personal narrative buried at the center of the work, where the synthetic voice of the LLM is swapped for a human one. With some probing, audiences can listen to the last six voice messages that the Engineer who built this device ever received, like a small time capsule from the before-times. Performed by the artist Aimee Neat, these voice recordings show a one-sided conversation from the sender (whose identity is not revealed, though it’s implied there is an intimate relationship to the Engineer), in which they report their everyday struggles as networks and infrastructures start to buckle and fail.
[One of the voice messages stored on The Horizon]
The conversation eventually takes on a more desperate tone, where the distance between the two people becomes harder to overcome as transport networks start to fail when the data centers they rely on begin to collapse. The messages end on a hopeful note, and a commitment to overcoming the distance. This little glimpse of desperation and hope is my favourite detail in the work.
The tension between energy and consumption stays at the center of the audience’s interaction with The Horizon. Alongside the visible battery and solar panel connected to the system, every request, response, or action of the device has an energy cost, shown via a battery indicator on the Kindle display and a readout at the bottom of the screen that reports the cost of each interaction, such as an LLM request, a piece of voice synthesis, or a sound file playback. As audiences interact with the work, the battery slowly drains. When it runs down to zero, the system shuts down for the night and the screen fades to black. Rather than a system that seems infinite, powered by infrastructure that’s out of sight, this keeps energy conservation at the heart of the interaction. After a time the system reboots, but it wipes any interaction or conversation that happened before the shutdown, prompting a fresh start with the work on the part of the audience.
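The battery mechanic can be modelled as a simple energy budget that each interaction draws down, with a reboot that restores charge and wipes the session. All of the costs and the capacity below are invented placeholders, not measurements from the installation:

```python
# Toy model of the work's battery mechanic: each interaction type has
# an energy cost, hitting zero shuts the system down and wipes the
# conversation, and a later reboot starts fresh. All numbers are
# illustrative placeholders, not figures from the actual device.

COSTS_MWH = {"llm_request": 40, "voice_synthesis": 25, "audio_playback": 10}

class EnergyBudget:
    def __init__(self, capacity_mwh: int = 200):
        self.capacity = capacity_mwh
        self.remaining = capacity_mwh
        self.history: list[str] = []  # the session's interactions

    def spend(self, action: str) -> bool:
        """Deduct the action's cost; return False while the system is dark."""
        if self.remaining == 0:
            return False  # battery flat: no interaction until reboot
        self.history.append(action)
        self.remaining = max(0, self.remaining - COSTS_MWH[action])
        if self.remaining == 0:
            self.history.clear()  # shutdown wipes the session
        return True

    def reboot(self):
        """Fresh start after the overnight recharge."""
        self.remaining = self.capacity
        self.history = []
```

Keeping the ledger per-interaction is what lets the on-screen readout attribute a concrete cost to each request rather than showing an abstract battery level alone.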
Future
The work is grounded in the potential of this future of data center scarcity, but also in how we could do things differently today. The LLM at its core functions in a low-powered way, using recycled tools, to do things of genuine use to people, such as offering advice on how to better feed yourself and others. It does this in a private, offline capacity, without implying that the machine is somehow alive or thinking, and without the assumption that a device like this should be a poor replacement for Wikipedia or Google. Instead it demonstrates computing within limits, addressing the actual needs of individuals and communities, and positing what a meaningful engagement with these tools might look like in supporting human life, rather than as one more risk vector for civilisation.
I think the future will involve us looking back at the current period of computational abundance with a sense of loss, and of guilt. But even if some of the infrastructures collapse, many of the tools and devices will still be left around, sitting inert on shelves, in drawers, or in e-waste dumps. At that point it’ll be our responsibility to remake them into something better than what they were originally designed for.
The next post here will be on the final part of this work, The Hope, where I ask the question “what could an ethical AI device look, feel, and sound like?”
Thanks for reading.