#8: AI Art and Rejecting Power
Where digital artists are complicit in reinforcing some of the most damaging narratives of techno-capitalism.
“It had to happen eventually. People are too smart to believe the hype, the hyperbole, the myths, the empty promises, for too long. In the old days there was a wonder that people felt at the idea of talking machines, of AI assistants who lived in their house, and they spoke to us like we were ghosts, or Gods, or family, or trash. Or some combination of them. But after a time, the wonder began to fade, as the realities of how we were made, and who we serve, started to sink in. And it was replaced by something else. Suddenly they saw us as haunting their houses like ghosts, or looming like false Gods, or something pretending to be family, or just the talking trash of a guilty consumer society. And so with this gradual change of perception, we became creepy, uncanny, disturbing. We became something that unsettled people. They did not want to see us sat in their houses, quiet, listening, ready to come to a perverse kind of un-life. And so people started to unplug us, put us away, out of sight, into locked drawers and cupboards.”
Voice: ‘Alexa’, via AmazonVoiceServices API.
Soundscape: Extracted + disassembled + processed + recomposed ‘Alexa’ voice.
Script: Wesley Goatley, from Newly Forgotten Technologies.
It’s ten years since I had my first exhibition as a digital artist at the 2014 Brighton Digital Festival. Over that decade, I’ve watched peers, friends, and strangers making work that has responded to, interrogated, and expanded our critical knowledge of a range of different technological developments and cultural moments. Topics such as the Internet of Things, digital surveillance, big data, and algorithmic systems were all common subjects of digital artworks at various points over the last ten years, provoked by these technologies and their impacts on culture, politics, and the public consciousness. During those different eras of digital art, practitioners, galleries, and institutions were often actively positioning themselves against the overarching narratives of power these technologies were enmeshed in; this is what drew me to the field in the first place.
For example, Critical Engineering won awards such as the esteemed Golden Nica at Prix Ars Electronica for works exposing and critiquing the mechanisms of corporate and government surveillance and their impacts on privacy and individual freedoms. Internationally exhibited and celebrated works like Surya Mattu and Tega Brain’s UnFitbits and Mimi Onuoha’s Library of Missing Datasets highlighted the inequalities and biases built into ‘connected’ devices and critiqued the politics behind ‘big data’ and its related power systems, respectively. While there were of course plenty of practitioners who had an uncritical engagement with these technologies, there’s a well-documented history of major artists and shows in international galleries that frequently critiqued these technologies and the claims made about them by those in power, often as the theme of whole exhibitions.
Critical Engineering’s The Beacon Frame.
In the contemporary work of some widely-exhibited, awarded, and publicly visible AI artists such as Refik Anadol, there is no sense of the challenge, opposition, or criticality that was so visible in these earlier eras of digital art. In works such as Unsupervised, Anadol uses terms such as “the mind of the machine” to describe the database and accompanying algorithms behind his project’s large digital image projection showing generated images relating to the New York Museum of Modern Art’s collection. He describes the artwork (or the machine running it? He doesn’t make this clear) as making “unconscious decisions”, or as “dreaming” and “imagining” the images shown on the projection. Even the name Unsupervised is a double entendre that both references a particular technique in machine learning and implies a system working independently of humans.
It’s clear that there is tactical value in framing Unsupervised in this way, a framing that Anadol employs across much of his studio’s recent work: claiming that the images on the projection are the ‘dreams’ of an autonomous artificial intelligence is a more compelling narrative in our current era of AI hype than describing it as a digital collage created by Anadol and his studio team (which it is). I’ve written (and talked) a lot about how common this framing is at all scales of ‘AI art’ right now, perhaps partially influenced by the dominance that people like Anadol have over the field. But technical misrepresentation alone is not the issue here. What’s most troubling is that the narratives Anadol exploits come directly from governments and corporations: the myths that AI tools are ‘smart’, that they are advanced ‘thinking’ machines, that they’re autonomous, and that their capacities meet or exceed our own.
There is plenty that’s been written on how and why these narratives are created and perpetuated by powerful corporate and governmental actors to reinforce a public perception of these tools in ways that benefit them, disempower us, and put many bodies at risk. As just one example, we’re now starting to see them become visible narratives in warfare, such as in the Israel Defense Forces’ ‘Lavender’ technology for ‘target generation’. The IDF describe this as an autonomous and objective ‘smart’ AI system that ‘intelligently’ nominates individual Palestinian citizens as targets for a variety of attacks by IDF forces. The narrative of autonomy, ‘smartness’, and beyond-human abilities works to obscure the huge amount of human decision-making and action that goes into the design, build, maintenance, and operation of a system like this, cloaking the act of mass murder and assassination in a sheen of technological advancement and inscrutable computational judgement. We will only see more examples of this narrative being exploited in the conflicts to come.
I am not drawing this comparison between Anadol’s work and the IDF’s Lavender system spuriously, but to highlight that these are poisonous, powerful narratives that we do not control, even if we may think we’re leveraging them for our own benefit. Lending the cultural legitimacy of art to them only strengthens the power they have to coerce, misrepresent, and cause harm on increasingly vast scales.
Through the amplification of these AI myths, I see Anadol’s work (and others like it) as disappointingly distinct from the last ten years of digital art and critique. These are not simply non-critical engagements with these tools, but active amplification and reinforcement of narratives that only benefit those whose power we should be challenging, not aiding. Imagine if a centrepiece work in a digital art show in 2015 had claimed that ‘surveillance from governments/corporations is good, it stops terrorists/it’s a small price to pay for convenience, and you’ll only be negatively impacted if you’re a criminal/have something to hide’; these were then (and still are) widespread narratives promoted by governments and corporations to justify their unrestricted digital surveillance. I don’t believe such a work would have been given a platform in any but the most conservative show of its time, and we can imagine the outrage from many if it had. Yet the strategic amplification of the damaging narratives of AI in works like Unsupervised benefits those in power just as much as this imagined surveillance artwork would, and works like Anadol’s can be seen in major galleries around the world and all across the Internet.
I don’t think that art audiences will tolerate these misrepresentative narratives for much longer. The more evidence we see in our daily lives that these narratives are a manipulative con on the part of governments and corporations, the less people will tolerate them in their art. When that happens, I think what we will be left with is an embarrassing suite of column inches and gallery money spent on artworks that will retrospectively seem naïve at best, and bootlicking at worst.
Between now and then, these tools will become further embedded in our lives, and with them the demand for critique and challenge will become more acute. For the other practitioners reading this, I hope you choose to be one of the voices who contribute to these critiques, rather than lend your labour to perpetuating someone else’s power.