#17: The Harbinger
Breaking down a recent artwork to talk about AI, critical futures, the NHS, the failures of liberal ideology, and the work of art in the age of AI enshittification.
My new artwork ‘The Harbinger, The Horizon, and The Hope’ (commissioned and funded by BRAID and the Arts and Humanities Research Council) just closed after its exhibition at the Edinburgh Festival. It’s an artwork in three devices, each one demonstrating a different possible future for the intersecting relations between AI, social and political change, and the climate crisis. Over the next three Substack posts I’m going to do a deeper dive into some of the research behind each of the three devices, their technical functions, and the political concerns driving them. I think it’s valuable to share methods for bringing critical concerns into creative practice, but also to demonstrate creative and ethical uses of AI away from mainstream ‘AI art’.
A photo of all three devices: The Harbinger (top), The Horizon (middle), The Hope (bottom).
In today’s post, I’m going to cover The Harbinger, the first of the three devices. For context, here’s the description of The Harbinger from its first exhibition in Manchester earlier this year:
“The Harbinger is a desktop smart speaker-like device set in a future where Palantir (the US military and governmental intelligence company) has expanded its current contract with the NHS beyond its role as a data managing client to becoming impossible for the NHS to function without.
In this future, Palantir have produced the Palantir HealthHub, a voice-activated data processing device that is installed in every GP’s office in the UK. The device listens to patient and GP conversations to make inscrutable ‘smart AI-powered analyses’ of the patient’s needs, diagnoses, prescriptions, and necessary appointments, bypassing the GP’s opinion in favour of Palantir-branded ‘automated care’. In this future, these devices have completely superseded the authority of GPs, who have become little more than data gathering agents for the devices, inspecting the patient and obediently reporting what they find to the GRIMA voice assistant that is the interface for the HealthHub device.
The Harbinger is a fully functional Palantir HealthHub device, with a custom ‘GRIMA’ interactive voice assistant and large language model that audiences can freely interact with. They can take part in a detailed audio roleplay as either a patient or a GP to understand what the reality of this ‘automated care’ may feel like, or they can simply interrogate the GRIMA assistant to glean more information about this possible-future and the other changes that may come with it. Through this, audiences will experience a tangible glimpse into a future that is very much on the way, but that we still have the capacity to change through collective action.”
Detail view of The Harbinger, showing a circular screen and base in the shape of the Palantir logo.
Context
This work is a response to the current Labour government’s blind faith that AI will ‘solve’ the infrastructural problems of the UK and its economy, a desperation that stems from their inability to acknowledge the failures of 21st-century democratic liberalism. One such crisis is the ongoing underfunding and overburdening of the NHS. The government’s plans to address the staffing and funding crisis of the NHS by ‘automating’ the work of doctors is a classic example of AI solutionism: trying to solve a social and political problem by introducing an AI system that does not address the underlying social or political problems, but only causes new ones. When free and efficient health care is arguably the heart of a well-functioning modern society, resisting the ‘automation of care’ (and the commodification of doctors, patients, and health data) is an essential topic of protest and action here in the UK.
Design
The Harbinger is built from a custom enclosure containing a single-board computer (a Jetson Orin Nano) that runs entirely offline. The interaction with the GRIMA chatbot works via a locally-hosted (meaning on-device, not in the cloud) large language model, with additional software for speech recognition and speech synthesis (for those interested, the pipeline is faster-distil-whisper > llama.cpp > Llama3.2_3B-Instruct_Q6 > piper_tts). Taking this locally-hosted, offline approach had three key benefits. The first is privacy: it guarantees that none of my audience’s voices or interactions are shared with any third party or tech company, as they would be if this work used a traditional cloud-based API for its AI tools. Second, the overall power consumption of the work is far lower than when using cloud data centres for every interaction. Running everything on-device means that you can easily measure the exact carbon cost of the chatbot interactions (unlike with cloud data centres, whose carbon costs are almost impossible to meaningfully capture at the level of the user), and test strategies for optimising the system to lower this cost. And third, all of the LLM’s parameters can be tuned and refined for my task and environment, something that almost no cloud-based ‘closed’ API gives you total control over. This combination of protecting my audience’s right to privacy, being accountable for the carbon impact of my work, and having genuine agency over the system are my requirements for justifying any use of AI products in my artworks.
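For readers curious about the shape of such a system, the stated pipeline can be sketched as three composable stages: speech-to-text, language model, text-to-speech. This is a minimal structural sketch of one interaction turn, not the artwork's actual code; the function names and wiring are my assumptions.

```python
# Minimal sketch of an on-device voice-assistant turn, mirroring the
# pipeline named above: speech recognition (faster-distil-whisper) ->
# LLM (llama.cpp running a quantised Llama 3.2 3B Instruct model) ->
# speech synthesis (Piper). Each stage is passed in as a callable so
# the loop itself stays independent of any particular model backend.
from typing import Callable

def run_turn(
    transcribe: Callable[[bytes], str],   # microphone audio -> user text
    generate: Callable[[str], str],       # user text -> assistant reply
    synthesise: Callable[[str], bytes],   # reply text -> speaker audio
    audio_in: bytes,
) -> bytes:
    """One interaction turn: listen, think, speak — all on-device."""
    user_text = transcribe(audio_in)
    reply_text = generate(user_text)
    return synthesise(reply_text)
```

In a real build, `transcribe` might wrap a faster-whisper model, `generate` a llama.cpp binding loaded with the quantised model, and `synthesise` a call out to Piper; because nothing leaves the device, every stage's latency and energy draw can be measured directly.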
When you sit down with The Harbinger, it doesn’t present as overtly villainous (outside of the aesthetics of Palantir from which it borrows its form, and the name of its voice assistant), nor is the voice assistant interaction designed to ‘scare’ the audience into aligning with my concerns through some dystopian theatrics. It is simply designed to simulate what an ‘automated care’ NHS experience may look and feel like for patients engaging with it in as realistic a way as possible. This is a very research-led approach: for example, the device enclosure has no visible screws or access points, implying it is not user serviceable (by either the patient or GP), which is a form of disempowerment and opacity that we see repeated across contemporary digital design culture, and therefore could reasonably expect from a device like this. The colour choices for the device and the screen are informed by the NHS branding guidelines, so that the device resembles other forms of MedTech. The voice of the assistant itself is presented as an American ‘male’ voice; I chose this because it seems likely that a company such as Palantir, operated by men like Alex Karp and Peter Thiel, would decide that the most authoritative voice they could imagine was the one closest to their own.
Rear view of The Harbinger.
Interaction
In the gallery, I watched audiences ask both directly medical questions (e.g. “What should I do about my sore arm?”) and more directly critical ones (“How could a chatbot ever replace a doctor?”). In both cases, it doesn’t take long for ‘AI disappointment’ to set in: the point at which any AI system reveals itself to be a fallible, error-prone computer, rather than some form of magical superintelligence. This might be through a mis-transcription of the spoken input (which happens more frequently for some audience members than others), or any of the many inevitable ‘hallucinations’ (translation: ‘failures’) of a system like this: mistaking an arm for a leg, replacing your name with someone else’s, failing to follow its own instructions as to what it’s been designed for, and so on. Failures like these are inherent to this suite of technologies, and are impossible to prevent.
In its first exhibition at the Lowry Museum in Manchester, someone said to me “I like it, but it’s a little cynical and unrealistic, isn’t it?”. I replied that it is, unfortunately, highly realistic: all it does is take existing examples of AI politics (the biases inherent to automation, AI’s disastrous replacement of care infrastructures, tech company colonisation of government systems, Palantir’s existing relationship with the NHS, etc.) and simply bring them together into a new arrangement. For example, in this future scenario the chatbot would only require a GP to report the findings of physical examinations of patients, reducing their roles to the manual labour of inspecting body parts and operating basic equipment like a stethoscope. We can imagine this having the knock-on effect of lowering the skill level needed to be a GP, and therefore their wages. This shift in economics would allow the government to hire more of these de-skilled GPs at a lower cost, and therefore declare that they have ‘fixed’ the GP staffing crisis. I’m sure many of us who have grown up in neoliberal states are well aware of this kind of misrepresentation and manipulation, and see it around us every day. Rather than being pure speculation, the future this device presents is grounded in the existing techno-political landscape we live in today.
Detail view of The Harbinger
Future
The hope was that this future was further off, but two days after the Lowry exhibition Keir Starmer announced that the NHS was to be ‘modernised’ by widespread use of AI as a transcription tool by GPs, and two months later the government announced the use of LLMs for booking GP appointments. Regardless of the flow of history, I still believe that works like this play important roles in giving space for public opinion to form around crucial topics; spaces away from the relentless marketing spiel of tech companies, or the empty promises of liberal political establishments. I want more artworks that give us time and space to consider what we want from our systems of care, and what we should resist; time that is experiential, agentic, and open-ended. I have not lost faith that, within democratic frameworks, there is still space for resistance and our collective capacity to demand something better than what we are given.
Thanks for reading. The next post will be on The Horizon, the second of the three works in this series: a low-power chatbot built from recycled hardware, made both desperately and pitifully by a former computer engineer in a post-computational-abundance near-future.