Illuminating the Deep Blue Sublime

Dominik Vrabič Dežman

Abstract

Artificial Intelligence is becoming increasingly integrated into global society – it is often referred to as the latest general-purpose technology1. Significant advances in machine learning and the proliferation of big data have produced a social climate sensitised towards AI’s exponential societal integration. Upon closer examination, a gap becomes clear between AI’s status quo of rapid social integration and the hype of its mediated visuality.

I argue that this gap – between the actual social implementation and integration of AI, and the sublime, religiously exalted, mediated visual image of it – amounts to a fundamental misrepresentation and non-understanding of AI, and sustains the intensification of technicist power and globalising capital accumulation.

Secondly, this thesis questions and destabilises AI’s narrowly mediated public visuality. Every figuration of AI, a fundamentally non-visual technology, is inherently an artistic gesture – the agency and social responsibility of the designer are therefore increasingly crucial in mediating emergent concepts such as AI to the public. The focus of this essay is on the production and perpetuation of signification – on the image-producers and platforms involved in the propagation of imagery surrounding AI. The thesis attempts to trace the cultural authority that AI has taken on through its being-mediated-as-image.

Thirdly, this thesis proposes some possible future strategies for, and argues the urgency of, desisting from the current problematic of sublime disclosure of emerging technologies, using AI as a central case study.

The research is positioned at the vanguard of cultural production at the time of writing. A central methodology is the collecting and organising of visual material pertaining to the mediated visuality of AI. This includes archival footage, art reports, art writing, tweets, emoji, and screenshots.

Keywords: Artificial Intelligence, Media Sensationalism, Technological Sublime, Social Epistemology, Responsibility, Design Studies

  1. ‘Artificial Intelligence as a General Purpose Technology: An Historical Perspective’, Department of Economics, The University of Warwick, accessed 9 December 2020, https://warwick.ac.uk/fac/soc/economics/research/centres/cage/news/06-07-20-artificial_intelligence_as_a_general_purpose_technology_an_historical_perspective/.↩︎

Preface

It appears we are in the midst of an AI turn, reflected in the recent explosion of writings, critical academic pieces, art projects, research initiatives … Even during the time of writing, many more will have been published. This thesis does not aim to be comprehensive. My hope is for this paper to mark a small contribution to the exponentially growing mosaic of work produced in recent years.

Introduction

“In the context of this thinning boundary [between art and technology], it seems legitimate and necessary to ask whether and to what extent … art is complicit with the manipulative flows of power or whether, on the contrary, it exposes, complicates, or perhaps even contests them.”2

The thesis emerged from a personal fascination with the phenomenon of artificial intelligence and a deep involvement with technology… It began as an exploration of AI aesthetics, the seeming lack of consensus about what the term might refer to, and how it has been addressed by several artists and designers. First grounded in arts and design, the focus soon shifted. I am deeply interested in AI’s march into the fabric of our society, which has happened seemingly overnight and continues to permeate our existence. Eventually, the broader interest became clear – to examine the visuality of AI as a hallmark of AI’s ontological status in the world, vis-à-vis its exponentially intensifying presence in our society.

The distinction between artist and designer is not important for the aims of this paper. The focus is on visuality, the aesthetic conceptions and frontages devised to mediate the concept of Artificial Intelligence in the public sphere. I rather distinguish between images, image-producers and image-platforms. It is my belief that every role of visual production inherently carries both the responsibility and the potential of political agency. My conviction is that art and design should be actively engaged in the social and political reality of our world. They should not be disinterested.

The focus of this paper is on signifiers. Those may hail from art, design, the mediasphere, or any interstitial space. These disciplines, and the artefacts that they produce, are directly plugged into the global machinery of cultural production. They are a product of their surrounding cultural regime. I am mapping and exploring what this regime might be.

This thesis is a call to action, a soft proposal. It might become the seed of an expansive, omnidirectional flow of possibilities. In that way, it aspires to be non-directional, non-arboreal, non-linear in its ambition of proposing a way forward. Several such possibilities are explored, and I try to legitimise them within a theoretical framework. Together, they represent but a minuscule subset of an incomprehensibly bigger set of strategies for progressing from our present paradigm. I hope that each small possible move forward may be regarded as an emancipatory strategy in its own right.

This thesis aims to perform the work of a discursive prism through which the narrowly mediated visual flow is examined, scrutinised, destabilised, diffused, reconfigured, and finally projected omnidirectionally as loose trajectories of poietic potential, pointing towards diverse alternative ways of relating socially to AI as it settles into the new normal of our newly inaugurated, AI-powered era.

  1. Krzysztof Ziarek, The Force of Art (Stanford University Press, 2004), 96.↩︎

A World of Signifying Surfaces

Images as subjects of study are of particular note in this social moment. We have undergone the pictorial turn,3 in which symbolic-representation systems have taken central place. It has never been so interesting to look at images and decode the substratum of reality that they are supposedly representations of. Media theorist Vilém Flusser, too, amplifies the importance of this shift. Tracing the thinking of Baudrillard, Flusser posits that "the existential interests of the material world are being replaced by symbolic universes and the values of things.”4

The oversaturation of images in our daily life seems only to have confirmed these claims since the author’s time of writing. The image has become the universal building block of our culture, which is becoming predominantly visual. The image reaches many more people, and affects them more deeply, than text: "The illiterate are no longer excluded, as they used to be, from a culture encoded in texts, but participate almost totally in a culture encoded in images.”5

The role of the image is no longer to mediate an underlying authentic reality, as was long the case. It is to signify a concept.

“Images are significant surfaces. Images signify – mainly – something 'out there' in space and time that they have to make comprehensible to us as abstractions (as reductions of the four dimensions of space and time to the two surface dimensions).”6

The status of imagery should be withdrawn from the traditional conception of an opaque, prima facie representational artefact. In our present time, images hold a new modality of social existence. The Centre for the Study of the Networked Image has put forward the term “networked image”:

“… the ‘networked image’ is at the centre of a new global mode of reproduction and representation in which the visual image is paramount. … what constitutes an image has been radically transformed, and with it the theories that allow us to study it.”7

The authors further recognise “the need for an enlarged scope that can account for the image as a dynamic, distributed and computational object … in using the term ‘networked image’ — preferring it to operative or computational image, or even post-photography — we emphasise the network as a descriptor of dynamic social relations as much as technological infrastructure.”8

This paper has been grounded in arts and design, in visual culture, in aesthetics. Aesthetics of language, aesthetics of networked imagery, aesthetics of social shifts.

  • Image – a singular signifying surface / semantic unit.
  • Visual Artefact – an image, text, sign, trope, symbol, or topos; a unit of aesthetic disclosure.
  • Imagery – visual artefacts at a given point in time, collectively.
  1. The term was introduced in 1994 by scholar W. J. T. Mitchell, a central voice in visual studies. For further reading, I recommend W. J. T. Mitchell, Picture Theory: Essays on Verbal and Visual Representation (Chicago: University of Chicago Press, 1994); W. J. T. Mitchell, What Do Pictures Want? The Lives and Loves of Images, repr. (Chicago, Ill.: Univ. of Chicago Press, 2010); W. J. T. Mitchell, Image Science: Iconology, Visual Culture, and Media Aesthetics (Chicago; London: The University of Chicago Press, 2015).↩︎

  2. Vilém Flusser, Towards a Philosophy of Photography (London: Reaktion Books, 1983), 79.↩︎

  3. Flusser, 61.↩︎

  4. Flusser, Towards a Philosophy of Photography, 9.↩︎

  5. Centre for the Study of the Networked Image, ‘About – CSNI’, 2020, https://www.centreforthestudyof.net/?page_id=756.↩︎

  6. Centre for the Study of the Networked Image.↩︎

I.
AI in the Public Eye


Encountering the Deep Blue Sublime

Googling the keyword “Artificial Intelligence” summons a slew of diverse images. Scrolling through the search results, more of the same emerges… It seems as if these images all hail from the same fictive universe of blue monochrome zeros and ones, materialising from the depths of infinite space before dissolving and disappearing back again... Examining the results further, we encounter robots, humanoid hands, clashes between organic, arboreal networks and digital, orthogonally delineated circuitry… Repeating the same search on other platforms, such as Flickr and Yahoo Images, yields more of the same. It seems we are encountering an artistic canon.

Figure 2: “Robot with AI”. iStock Photo.

In most cases, the subject (AI) takes on an extremely anthropomorphised form, appearing as a prototypically white human-like robot,9 or in others, a body part (typically a brain, arm, leg) stylised to appear composed of ethereal circuitry. The suggestion of infinite space is further reinforced by an expansive dark vacuum surrounding the subject, or in other cases, an infinitely distant vanishing point. The colour palette is typically delimited to blue monochrome, a palette employed perennially in the mediation of cyber technology. The images are rife with pictorial stand-ins for intellectual potency. The portrayed anthropomorphic figures are often seen instructing, operating machinery, manipulating floating touch-screen interfaces.

Figure 3: Headline Image of piece titled “WILL AI BE TEACHING STUDENTS IN THE FUTURE?”. Adapted from https://2ser.com/will-ai-teaching-students-future/.
Figure 4: Artificial Intelligence. Digital Image. Shutterstock.
Figure 5: “Blue digital computer brain on circuit board with glow and flares”. iStockPhoto.

The central subject of these visual artefacts is typically seen suspended in the centre of the picture plane, often the sole resident of this elevated domain. It could be described as a depersonalised subject and, at the same time, a sprawling constellation. The entity inhabits a deep blue habitat. Its constitution extends into an unfathomably extensive network of 0s and 1s, of cables, flows, nodes – all conduits for the dissemination of data, their mainspring. It is represented as travelling streams of light, or as undulating waves perturbing the surface of an immeasurably deep ocean. An intense visual velocity is present in the depicted directional streams.

Figure 6: “White humanoid on blurred background using globe network hologram”. Shutterstock.

The images impress upon the beholder the dominance of this abstract, diffuse entity of artificial intelligence, while assembling a language of visual unity. They invite a certain sublime feeling, relegating rational interpretation to secondary importance.10

Figure 7: Google Image Search results for “Artificial Intelligence”.

These visual artefacts, taken as a whole, appear to tell a myth about the social role of artificial intelligence. Following their chain of signification, it appears they point nowhere – to the absence of reality, or a complete dismissal of any tangible actuality. By means of repetition and reinforcement, these deep blue visuals reinstate and affirm the performative role of these emergent technologies in our society… They reinforce the link between the stand-in and the concept it represents… and point to no concept whatsoever! These visual flows might well be considered simulacra, based on the fundamental absence (dissimulation) of the material referent they denote. Rather, they are chimaeric visual forms of a compounding stack of cultural stereotypes and tired, repetitive iconography.

Branching out and asking Google for images of other emerging cybertechnologies, such as Bitcoin, blockchain, or the cloud, procures remarkably similar results. The deep-blue imagery saturates the whole media ecosystem – it has crept in to accompany everything from blogs, clickbait articles, and radio shows to reputable news sources.

  1. A broader discussion on the phenomenon may be followed in: Stephen Cave and Kanta Dihal, ‘The Whiteness of AI’, Philosophy & Technology 33, no. 4 (1 December 2020): 685–703, https://doi.org/10.1007/s13347-020-00415-6.↩︎

  2. Jos de Mul, ‘The Technological Sublime’, Next Nature Network, 17 July 2011, https://nextnature.net/2011/07/the-technological-sublime.↩︎

The Sublime Language of Our Century

“To make machines look intelligent it was necessary that the sources of their power, the labour force which surrounded and ran them, be rendered invisible.”12

The glowing visuals go hand-in-hand with the sensationalised language used to write about Artificial Intelligence. The rhetoric across the corpus shifts from mentions of the fourth industrial revolution and discriminatory technologies to killer robots.13 News outlets continue to construe research outcomes as indicators of AI’s intensifying personification. A 2017 article reported Facebook to have shut down “artificial intelligence robots … after they start[ed] talking to each other in their own language”.14 This is just one of a myriad of cases of how AI research trickles down into its portrayal in popular media. An analogous process is happening in the production of the visual artefacts employed to signify such emergent technologies in the mass-media theatre.

Figure 8: “Go master quits because AI ‘cannot be defeated’”. BBC News. 27 November 2019. Figuring AI as an immanent entity of superhuman formidability.

When appearing to serve an altruistic purpose, AI is often described as a benevolent force; “AI-powered” has become a fashionable moniker. The mundane facts of machine learning are reported far less frequently in the media – for instance, that Google’s famous AlphaGo deep learning model, which defeated the European Go champion Fan Hui in 2015, cost an estimated $38 million to train.15

Sublime language is often a cover-up for a less-than-sublime reality, in which the intelligence of such technologies lags far behind the promises of their awe-inspiring veneer. This sensationalist language is indicative of the significant dissonance between the discourse of artificial intelligence in academia and its subsequent, amplified sensationalism in mass media.16

Figure 9: “Burger King trolls artificial intelligence with new ads.” Tweet. The subsequent revelation that the commercials were not actually generated by AI adds a whole new level of cringe to the game…

AI is regularly portrayed as an immanent force, able to act outside its predicted norms and beyond the rules that we – humans – impose on it.17 The observation that such disclosures are not serving us is hardly new. A 2018 study of 760 AI-related news articles conducted by researchers from the Oxford Internet Institute established that UK news outlets predominantly “[portray] AI as a relevant and competent solution to a range of public problems, … with little acknowledgement of on-going debates concerning AI’s potential effects.”18 The authors attribute the rise in sensationalist AI reporting partly to the increased tasking of non-specialised journalists with covering the field, and reaffirm the value of news coverage that can “provide publics with space and resources to make sense of and address pressing public problems.”19 The space and resources currently provided to help make sense of AI in society raise many questions.

  1. The title of this chapter is taken from the opening of Wark’s Capital Is Dead.↩︎

  2. Simon Schaffer as quoted by Pasquinelli. Simon Schaffer, ‘Babbage’s Intelligence: Calculating Engines and the Factory System’, Critical Inquiry 21, no. 1 (1994): 205; Matteo Pasquinelli, ‘Abnormal Encephalization in the Age of Machine Learning’, E-Flux Journal 75 (September 2016), https://www.e-flux.com/journal/75/67133/abnormal-encephalization-in-the-age-of-machine-learning/.↩︎

  3. J Scott Brennen, Philip N Howard, and Rasmus Kleis Nielsen, ‘An Industry-Led Debate: How UK Media Cover Artificial Intelligence’, 2018, 10.↩︎

  4. Andrew Griffin, ‘Facebook’s Artificial Intelligence Robots Shut down after They Start Talking to Each Other in Their Own Language | The Independent’, 2017, https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html.↩︎

  5. Nur Ahmed and Muntasir Wahed, ‘The De-Democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research’, 22 October 2020, 52.↩︎

  6. Colm Gorey, ‘Should We Believe the Hype? Media May Be Warping Reality of AI’s Powers’, Silicon Republic, 21 February 2019, https://www.siliconrepublic.com/machines/ai-media-coverage-bias.↩︎

  7. Vladan Joler and Matteo Pasquinelli, ‘The Nooscope Manifested: AI as Instrument of Knowledge Extractivism’, The Nooscope Manifested: AI as Instrument of Knowledge Extractivism, accessed 1 September 2020, http://nooscope.ai/.↩︎

  8. Brennen, Howard, and Nielsen, ‘An Industry-Led Debate: How UK Media Cover Artificial Intelligence’.↩︎

  9. Brennen, Howard, and Nielsen, 2.↩︎

Tracking the Footprints to the Source

It is difficult to identify the intention behind such imagery, if there is any in the first place. These images clearly do not faithfully or accurately represent AI. Turning the optics of analysis, we can begin to inspect the source of these visual flows, where the canon is continually renewed, produced, and advanced. Hiking further and analysing the sources of these search results and the media pieces harbouring them, one is typically directed to stock-photo platforms. There, the deep blue aesthetic reigns supreme. Together with the media, these are the sites where the image that lands in collective memory is fabricated.

Figure 10: user Monsitj. iStock Photo. One of hundreds of stock image-producers of AI-related imagery.
Figure 11: IT Technician Works on Artificial Intelligence, Big Data Mining, Neural Network Project. Gorodenkoff – Shutterstock.

The exponentially growing demand for AI-related imagery is being answered by a proliferation of visual production, mostly taking place on stock-image search platforms, which appears to be consolidating into a legitimate aesthetic genre. These flows of visual production are governed by the same received clichés in the visual articulation of the images in question. The ideas are increasingly ensconcing themselves in collective imagination through a scattered process of trans-cultural diffusion, culminating in the reproduction of identical aesthetic choices and imperatives. This narrow articulation of the aesthetic norm has been further reinforced by the demand generated by media outlets, which purchase these images from image-producers for use in their own published content. This is, of course, a simplified rendition of the whole complexity of the transactions. It does seem very plausible, however, that the exponentially growing mention of AI brings with it an exponentially growing number of images used to mediate the concept.

Figure 12: Popularity of the term “AI” in the English Corpus across time. After the most recent AI winter, we seem to be driving head-first into an unprecedented era of AI-hype. “AI” is, of course, a shorthand for “Artificial Intelligence”. “AI” abstracts the real-world referent away even further.
Figure 13: “Wired brain illustration – next step to artificial intelligence”. Laurent T/Shutterstock.

Another prominent trope is the comparison of the brain to the computer. The comparison has endured in academia and common discourse for a long time. The general critical consensus is that far too much weight has been placed on asserting a high similarity between the biological framework and that of artificial intelligence.20 The comparison of the human brain to a computer was criticised as early as 1951 by neuroscientist Karl Lashley:

“Descartes was impressed by the hydraulic figures in the royal gardens and developed a hydraulic theory of the action of the brain,” Lashley wrote. “We have since had telephone theories, electrical field theories and now theories based on computing machines and automatic rudders. I suggest we are more likely to find out about how the brain works by studying the brain itself, and the phenomena of behaviour, than by indulging in far-fetched physical analogies.”21

Figure 14: Promotional poster for Minority Report (2002).

Proceeding further, one may observe a very homogeneous look applied, regurgitated, replicated, and imitated across all major stock photo engines. This portrayed cultural authority predominantly draws from sci-fi clichés22 and recycled religious iconography, something that the scientific community itself vehemently distances itself from.23

Figure 17: Hands of God and Adam.
Figure 16: Robot Hand and Human Hand Touching. Artificial Intelligence and Cooperation Concept. Digital Image. Adobe Stock. Accessed 15 February 2020. https://stock.adobe.com/.

The reproduction of these aesthetic trends contributes to the formation and concretisation of a mythology surrounding AI, and of its current hype. It has produced a discursive regime within which a limited, extremely specific palette of visual vocabulary (in many cases clearly traceable to its originator) stands available, borrowed from a narrow part of the vast pool of (popular) cultural production. Admittedly, one may observe multiple exceptions to this narrowly defined normativity – they are, however, very few in number.

The only way in which most people have experienced or seen AI has been mediated through image search platforms, sensationalised media broadcasts, popular culture, cinema. These image-producers have subsequently become the makers of the public image of Artificial Intelligence. They solidify our understanding of AI in terms of these images. Molly Wright Steenson comments on the status of popular imagery used to represent the technology: "Our pop culture visions of A.I. are not helping us. In fact, they're hurting us. They're decades out of date. And to make matters worse, we keep using the old clichés in order to talk about emerging technologies today.”25

Steenson proceeds to quote Eric Johnson:

“Today, intelligent environments are a reality. But why do we depict them as layered and ghosted? These cultural clichés/touchstones are popular for another reason: It is really, really hard to talk about digital-reality tech otherwise … These fields are full of jargon, inconsistent in practice and difficult to grok if you haven't seen all the latest demos; pop culture is a shortcut to a common ideal, a shared vision.”26

I wonder what better ways exist than the reproduction of these tired clichés.

  1. David Watson, ‘The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence’, Minds and Machines 29, no. 3 (September 2019): 417–40, https://doi.org/10.1007/s11023-019-09506-6.↩︎

  2. Matthew Cobb, ‘Why Your Brain Is Not a Computer’, The Guardian, 27 February 2020, sec. Science, https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness.↩︎

  3. David Shultz, ‘Which Movies Get Artificial Intelligence Right? | Science | AAAS’, 2015, https://www.sciencemag.org/news/2015/07/which-movies-get-artificial-intelligence-right.↩︎

  4. Denis Vidal, ‘Anthropomorphism or Sub-Anthropomorphism? An Anthropological Approach to Gods and Robots’, Journal of the Royal Anthropological Institute 13, no. 4 (December 2007): 917–33, https://doi.org/10.1111/j.1467-9655.2007.00464.x.↩︎

  5. Michelangelo. Hands of God and Adam. 1509. Fresco, 280 x 570 cm. Vatican Museums. https://commons.wikimedia.org/wiki/File:Hands_of_God_and_Adam.jpg.↩︎

  6. Molly Wright Steenson, ‘A.I. Needs New Clichés’, Medium, 13 June 2018, https://medium.com/s/story/ai-needs-new-clich%C3%A9s-ed0d6adb8cbb.↩︎

  7. Steenson.↩︎

What is AI, Anyways?

In discussing AI, one may before long encounter a slippage of reliable, explicitly agreed-upon definitions, owing to an overarching lack of consensus, chaos around the current implementations of AI, and an evident discrepancy between its media sensationalism and the inchoate status quo of the technology...

In pursuit of an agreed-upon definition of Artificial Intelligence, I turned to the dictionary, where AI is defined as: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”27 The definition does little to clarify the ambiguity; it seems rather circular. So, I proceeded to the drawing board…

Figure 19: Diagram – Taxonomy of AI. Some of the many referents of “AI”…

In fact, there is no singular type of AI. The field refers to an extremely vast conglomerate of diverse tangential disciplines that have come to be contained under the same sign. Framing AI as an unorganised cluster of loosely correlated semantic forms helps in discussing the multiplex nature of its social integration. It is important to question the catch-all that has become draped around and above the disciplines.

When following a news piece on the exponential economic integration of Artificial Intelligence, shifting our gaze to a visual artefact rife with sublime “neural aesthetics”, and then proceeding to examine an installation of smart home assistants communicating with one another, the underlying referent of AI shifts and shuffles. The complexity intensifies when one realises that AI often refers to ASI (artificial superintelligence) or AGI (artificial general intelligence)28. AI is also used to refer to the academic field of Artificial Intelligence, and the term has come to represent a general algorithmic panacea within marketing rhetoric, an exciting prospect of added value for “AI-powered” products, and any chimaeric interstice within the semantic field.

Figure 19: Meme found online. Source unknown.

The ontological classifications and diversity of meanings that AI encroaches on form an assemblage with soft edges – not defined by formlessness but rather by a gaseous loosening of form, appearing right in front of you yet, at the same time, severely out of focus. When one tries to tread closer and examine these visual veneers, they fall apart at once, sublimate, disintegrate, reshuffle, in front of one’s eyes. Smokescreens, mirrors, illusions. The front-face doesn’t hold up. Peering into the gap reveals the complex constellations of power flows, aptly concealed behind the sublime veneer, behind the circular signifier, the proxy of sublime non-understanding, othering, dissimulation.

  1. ‘Artificial Intelligence | Meaning of Artificial Intelligence by Lexico’, Lexico Dictionaries | English, accessed 12 February 2020, https://www.lexico.com/definition/artificial_intelligence.↩︎

  2. Speculative philosophical and sci-fi concepts concerning the possibility of a general superintelligence taking over Earth.↩︎

AI – a Titan in its Infancy?

The historical dialectic of artificial intelligence (in the dictionary sense) seems to oscillate between AI hype and AI winter – up until some years ago, when the technology’s trajectory became exponential. The use of the term “artificial intelligence” has grown exponentially in global media circuits in recent years.29 Its promising commercial applications have resulted in it being called “The New Electricity”.30 Following a prolonged period of disinterest in AI research referred to as the AI Winter,31 a renewed AI enthusiasm, or "hype", has been gaining a foothold in recent decades, vis-à-vis promises of autonomous “deep learning” and large-scale applications of neural networks.32 It is often said that we are undergoing the latest industrial revolution, with big data and AI paving the way forward.33 The academic and commercial field of artificial intelligence has been steadily growing into a lucrative industry, with a projected compound annual growth rate of 46.2 percent from 2019 to 2025.34 Extensive reporting in the media has cemented its presence in public consciousness. It has become embedded into our social fabric in numerous ways, proportional to the exponentially growing volume of big data.35 The EU alone intends to invest over $200BN in the industry over the next ten years,36 while the amount of data used to train the algorithms will likely quadruple by 2025.

The deployment of AI seems to have taken off at scales inconceivable in the previous century. This is largely thanks to the recently achieved wide availability of data, the lifeblood of artificial intelligence. We have reached a point in social history where the readily available amount of training data has become sufficient for large-scale implementation and training of machine learning – its volume has doubled in the last ten years.37 In our globalised, internetworked reality, this trend often appears to sprawl in tandem with the economic integration of AI into global industry. We are being ushered towards an “AI-powered” era, the advent of which is gaining significant social momentum, furthering the social climate of AI-hype. Unfortunately, the outcomes, at the time of writing, rarely deliver on their promises.

Our collective imagination seems to be stirred by the potential of autonomous, creative AI – of true, non-human intelligence. Artificial intelligence is capable of highly complex calculations, while still in its (literal) infancy in terms of abstract reasoning capabilities.38 Discussions on the epistemological potentials of the creativity of AI are hardly new. In his essay Words Made Flesh, scholar Florian Cramer examines several historical debates regarding the creative limits of artificial intelligence, tracing some as far back as the 17th century:

“In 1674, three years after his permutational sonnet, Quirinus Kuhlmann published his correspondence with Athanasius Kircher in a book Epistolae duae. It documents an early debate about automatically generated art and its cognitive limitations. In his … he [Kuhlmann] rejects the idea of a machine … that generates poetry, arguing that such a machine could indeed be built, but it would not produce good artistic results. One could teach, Kuhlmann writes, every little boy verse composition through simple formal rules and tables of elements (“paucis tabellis”). The result however would be versifications, not poetry (“sed versûs, non poëma”).”39

In the correspondence, the limit to artificial creativity is set by theological reservations. In our present episteme, the limit appears to be drawn from the bleeding edge of computing power.

  1. Wim Naudé, ‘The Race against the Robots and the Fallacy of the Giant Cheesecake: Immediate and Imagined Impacts of Artificial Intelligence’, Artificial Intelligence, 2019, 32.↩︎

  2. Artificial Intelligence Is the New Electricity, Future Forum (Stanford School of Business), accessed 12 February 2020, https://www.youtube.com/watch?v=21EiKfQYZXc.↩︎

  3. Chris Smith and Brian McGuire, ‘The History of Artificial Intelligence’ (History of Computing, University of Washington, 2006), 27.↩︎

  4. Wim Naudé, ‘AI’s Current Hype and Hysteria Could Set the Technology Back by Decades’, The Conversation, accessed 12 February 2020, http://theconversation.com/ais-current-hype-and-hysteria-could-set-the-technology-back-by-decades-120514.↩︎

  5. Klaus Schwab, ‘The Fourth Industrial Revolution’, 26 January 2016, https://www.foreignaffairs.com/articles/2015-12-12/fourth-industrial-revolution.↩︎

  6. Research and Markets ltd, ‘Artificial Intelligence Market Size, Share & Trends Analysis Report By Solution, By Technology (Deep Learning, Machine Learning), By End Use (Advertising & Media, Law, Healthcare), And Segment Forecasts, 2019 – 2025’, accessed 13 February 2020, https://www.researchandmarkets.com/reports/4375395/artificial-intelligence-market-size-share-and.↩︎

  7. SINTEF, ‘Big Data, for Better or Worse: 90% of World’s Data Generated over Last Two Years’, ScienceDaily, accessed 5 February 2020, https://www.sciencedaily.com/releases/2013/05/130522085217.htm.↩︎

  8. Al Jazeera, EU to Unveil Proposed Regulations for Artificial Intelligence, 2020, https://www.youtube.com/watch?v=xBg6JthpeFg&list=PLzGHKb8i9vTysJlqfhIyEieT2FqwITBEj.↩︎

  9. SINTEF, ‘Big Data, for Better or Worse’.↩︎

  10. Yonatan Zunger, ‘Asking the Right Questions About AI’, Medium, 12 October 2017, https://medium.com/@yonatanzunger/asking-the-right-questions-about-ai-7ed2d9820c48.↩︎

  11. Florian Cramer, Words Made Flesh (Rotterdam: Piet Zwart Institute, 2005).↩︎

A Brief History of AI’s (Public) Visuality – in Pictures

In itself, AI is neither a benign nor a malign force. In common parlance, the term “AI” picks out a loosely entangled cluster of technological projects, undertaken since 1956 under this now-well-known denominator. In 1959, the term “machine learning” was coined. From its inception, AI has been largely funded by defence departments and developed with the goal of advancing the United States in the Cold War-era race. The industry soon branched out to other nations.

1956 – The founding fathers of AI. Pictured left to right are Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence (Photo: Margaret Minsky).
Figure 21: Eliza. 1964. The first conversational AI. Deemed so realistic by its users that they would sometimes engage in physical altercations with the machine running it.
Figure 22: Shakey the Robot. First “AI”. 1966. Outcome of billions of dollars of funding by the US Defence Department. Part of the reason for the onset of the first “AI-Winter”.
Figure 23: HAL, the superintelligence from 2001: A Space Odyssey. In 1968, Minsky (who was part of the original summer conference) advised Stanley Kubrick on the film’s depiction of AI.
Figure 24: 2002 – Roomba, starting a new craze in domestic robotic assistants. More rudimentary than Shakey in many ways.
Figure 25: IBM Watson participating on Jeopardy in 2011.
Figure 26: In 2016, Google DeepMind’s AlphaGo beat one of the world’s best (human) players, Lee Sedol, at the ancient strategy game Go, which up to then was said to be too intuitive for a computer.
Figure 27: Google self-driving car. AI has become integrated into the fabric of social reality in more and more subtle ways…
Figure 28: Xinhua’s first English #AI anchor makes debut at the World Internet Conference that opened in Wuzhen, China.
Figure 29: Apple. Siri Redesign. 2020.
Figure 30: Ian Goodfellow. Tweet. 3 March 2018.
Figure 31: Amazon Echo (3rd Generation). Amazon. https://www.amazon.com/Amazon-Echo-And-Alexa-Devices/.
Figure 32: “The age of A.I.”. Official Trailer. Youtube Originals. 2020.
Figure 33: EU to unveil proposed regulations for artificial intelligence. Al Jazeera, 2020.
Figure 34: “AI” waiter. Courtesy of Al Jazeera. AI assistants are typically rendered as feminised entities.
Figure 35: Google Assistant. Google. Thank-you, just browsing.
  1. AI, just like the internet, began as a military effort and later gained exponential public footing. I feel that, unlike the internet, AI has never been expressly a democratic platform, even though its implications reach intimately into the lives of all of us, determining our potential futures and possibilities of social participation, or exclusion.↩︎

  2. AIArtists.org, ‘Artificial Intelligence Timeline’, AIArtists.org, accessed 9 July 2020, https://aiartists.org/ai-timeline-art.↩︎

  3. For further reading see: Helen Hester, ‘Technically Female: Women, Machines, and Hyperemployment’, Salvage (blog), 8 August 2016, https://salvage.zone/in-print/technically-female-women-machines-and-hyperemployment/.↩︎

Technological Sublime, Algorithmic Sublime

The presence of an awe-inspiring veneer is difficult to contest in the public image of AI. It is important to elaborate on the notion of the technological sublime, which underpins the conceptual development of this thesis. The technological sublime has been defined by scholars as

"… a feeling of "astonishment, awe, terror, and psychic distance"—feelings once reserved for natural wonders or intense spiritual experiences, but increasingly applied to technologies that are new and potentially transformatory, but also complex and poorly understood.”43

A precondition to achieving a sublime experience is that “one must be somewhat inexperienced with an object, landscape, or engagement…” The reduction in the sublime qualities of a referent comes about through “experience and familiarity”.44 It is questionable how much familiarity the aforementioned visual depictions allow for: “Familiarity with an object threatens to undermine its potential sublimity”.

AI’s unfamiliarity, its exponential emergence, and the sudden flood of media attention all provide ample grounds for evoking sublime feelings in the beholder.

“The ways that algorithms ignite the contemporary cultural imagination … makes them seem still in the realm of science fiction, harbingers of a revolutionary future of which we are forever on the cusp.”45

In that way, I propose the action of desubliming to refer to strategies for keeping AI’s sublime veneer at bay – working towards a perpetual unpeeling of it and keeping it unpeeled – while populating the gap with alternatives of poietic46 potential: resubliming.

  1. Morgan G Ames, ‘Deconstructing the Algorithmic Sublime’, Big Data & Society 5, no. 1 (June 2018): 4, https://doi.org/10.1177/2053951718779194.↩︎

  2. Kyle Craft-Jenkins, ‘Artificial Intelligence and the Technological Sublime: How Virtual Characters Influence the Landscape of Modern Sublimity’ (Kentucky, University of Kentucky, 2012), 4.↩︎

  3. Ames, ‘Deconstructing the Algorithmic Sublime’.↩︎

  4. I am employing the concept of poiesis, “coming into being” within the context of aesthetic production, from Ziarek. See Ziarek, The Force of Art, 111–140.↩︎

AI – an Amorphous, Faceless Entity

AI with human-like characteristics is quickly creeping into our lives – and that might be the problem: that it is human-like, anthropomorphically conceived and designed. Anthropomorphism isn’t necessarily a bad thing – in commercial applications, robots that resemble humans, while avoiding the uncanny valley, elicit greater public acceptance of, and empathy towards, the machines.47

Figure 36: In many cases, AI has come to correspond with a plasticky, tawdry humanoid robot.

AI could be described as an amorphous entity. As a signifier, it morphs to fit the identity of the field in which it is deployed. AI the doctor cures cancer, AI the artist creates paintings, AI the economist predicts return rates. Representing AI with such human-like agency and subjectivity serves to maintain its position as a terminus of responsibility, gatekeeping its broader context – the power constellations producing it. It also projects into AI a certain striving towards subjectivity, towards humanity – AI holds up a mirror to our own biases and hopes. The way in which its disclosure has been designed is misleading at best, and hugely detrimental to our relating at worst.

It is said within the AI industry that once a task has been achieved by a machine, it is no longer considered intelligent.48 At first, such tasks were menial. They are becoming more and more advanced, as AI finds applications in nearly all disciplines which benefit from technology.

Figure 37: “Close-up of Two Businessman Shaking Hands In Front Of Mallet”. iStock Photo.
Figure 38: “Medical technology concept with 3d rendering robot hand or cyborg hand hold stethoscope”. iStock Photo.
Figure 39: Ai-Da Robot with Painting. Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations. “AI” the artist. Our well-documented tendency to anthropomorphise technologies further results in peculiar visual outcomes of what AI may look like as a dweller of the world…

Artificial Intelligence is a theoretical discipline that is increasingly used to automate ever more processes within ever more enterprises and industries. It has no inherent visual appearance except for the visual notation of theoretical formulas, visual outputs, the tools used to develop it, the interfaces of dataset-labelling programs, command-line interfaces, etc. Its meaning is therefore prone to misrepresentation, further intensified by a lack of clear consensus,49 and may be instrumentalised by (corporate) actors, its image malleable to their respective agendas.

Figure 40: Conceptual diagram of a neural network.
Figure 41: Big Data Artificial Intelligence Concept Machine. Shutterstock. Another floating signifier, this time more faithfully aligned to the actual theoretical backbone of AI. A big semantic leap in representation.
  1. Elaine Moore, ‘Me, Myself and A.I. — Should Robots Look like Us?’, Financial Times, 13 September 2019, https://www.ft.com/content/044e8fd2-d42c-11e9-8367-807ebd53ab77.↩︎

  2. Shelly Fan and Matthew Taylor, Will AI Replace Us? A Primer for the 21st Century, Big Idea (New York, New York: Thames & Hudson, Inc, 2019).↩︎

  3. Ariel Procaccia, ‘Beware of Geeks Bearing AI Gifts’, Bloomberg.Com, 10 July 2019, https://www.bloomberg.com/opinion/articles/2019-07-10/ai-hype-fools-a-lot-of-the-people-a-lot-of-the-time.↩︎

Who Might These Narratives Serve?

“Algorithms have everything to do with the people who define and deploy them, and the institutions and power relations in which they are embedded.”50

At a given point in time in public media, visual style is interrelated with writing style, is interrelated with thinking style, is interrelated with a social reality that is coming into being. The images pose an interesting question of visual analysis, and invite us to look beyond the surface and examine them in more detail… They are the tips of an iceberg… they "have to be decoded as an expression of the concealed interests of those in power.”51

Figure 42: Should we be afraid of Artificial Intelligence? Thumbnail of YouTube video. 2020. A questionable representation, yet one offering valuable insight into the social climate we are in at the present time.

The visual artefacts appear as a diversion to cast our glances towards, while the actual deployment and unfolding of AI into the social fabric continues in much sneakier, clandestine ways. We have been stripped of our ability to recognise AI when it actually happens.

The mediated presence of AI is marked by a diffuse signifier that points in many different directions and discloses little. Its underlying regime is organised around the upkeep of power, the withholding of the possibility of dissent. It keeps in place the absence of possibilities for questioning AI by wrapping its public presence in a sublime veil – an impenetrable, imperceptible barrier rather than an invitation for further discussion.

In her book Capital Is Dead, McKenzie Wark traces the emergence of a new mode of social relations, a global operation of capital accumulation based on the control of the tools used for the interpretation of data. Walmart, for instance, controls “almost as many data centers as physical distribution centers, and they are about as large.”52 Data is becoming the organising principle of global economic operations.53 The same data-centric rationality is increasingly extending to the governmental level, with AI used to delineate and command the economic and political reality of our world. This data is extremely valuable; it is used to extract all sorts of predictive metrics about individual or group behaviour.

“The arms race of AI companies is, still today, concerned with finding the simplest and fastest algorithms with which to capitalise data.”54

The increasing consolidation of data as a resource for capital accumulation has severe implications for the average person, who has no access to these tools of data interpretation. The problem lies with the “extraction of what you might call surplus information, out of individual workers and consumers, in order to build predictive models which further subordinate all activity to the same information political economy.”55

“The act of translating the world's complexity into a computable format and structure inherently leads to compression: in which the territory is compressed into a map. …

At small scales or with narrowly scoped "translations", this is actually fine. Accounting software is a good example of this, as is music production software. ….

But at scale, any computing system that interacts with the wider world – such as a global transportation system (Uber), a home-based hospitality company (Airbnb), or egalitarian content delivery system (YouTube) will be forced to see its map interface with the territory. ….

Have you seen what happens when you take a screenshot, print it out and scan it back into your machine? Maybe not much at first, but over time more and more artifacts – noise – will appear. It is called generation loss and eventually, as entropy gains an ever-increasing foothold, nothing will be left but this digital gibberish. This is what happens with these computing systems at scale. As these systems enter into a cycle of ingesting and translating (compressing) the world's complexity, they then output their distorted model for the world to deal with.”56

Similar sentiments are voiced by Pasquinelli:

“Mass digitalisation … has made available vast resources of data that, for the first time in history, are free and unregulated. A regime of knowledge extractivism (then known as Big Data) gradually employed efficient algorithms to extract ‘intelligence’ from these open sources of data, mainly for the purpose of predicting consumer behaviours and selling ads. The knowledge economy morphed into a novel form of capitalism, called cognitive capitalism and then surveillance capitalism, by different authors. It was the Internet information overflow, vast datacentres, faster microprocessors and algorithms for data compression that laid the groundwork for the rise of AI monopolies in the 21st century.”57

Figure 43: Ross, Benjamin. ‘Automakers Making Deals to Speed Incorporation of AI’. AI Trends (blog), 1 July 2020. https://www.aitrends.com/ai-and-business-strategy/automakers-making-deals-to-speed-incorporation-of-ai/. The immense hype in AI has resulted in increased institutional pressure to speed up the pace of its incorporation.

These posits tie into the larger discourse of intensifying technification: “The instrumentalization of information enables all of the earth to appear as a resource to be mobilized under the control of information, but where that control is based on information that treats everything, including information itself, as a commodity.”58 Data has been dubbed the new oil,59 the new electricity,60 etc. It is the lifeblood of artificial intelligence. The sublime visual language simulates the social power of artificial intelligence, while dissimulating the rampant data extractionism by the vectoralist class – those with the means to interpret and monetise the extracted data. AI is not out there to get us. The people deploying and operating it are, however, often there to accumulate profit from our personal metrics.

Figure 44: “Human face cyborg is showing Dollar banknotes in the night. Absurd superhero. Future of the robots.” iStock Photo. Getting it right… Kinda?

The vectoralist class is not a subset of the capitalist oligarchy – it is delineated as a stratum above, owning not capital but the means by which to produce it: patents, copyrights. AI is enlisted in the high ranks of the new accumulative frontiers of this expansive class. The people operating with data are at the vanguard of the development of AI, developing complex, multivariate and extremely profitable tools for its analytics. The reality of AI is, to a large extent, coextensive with the reality of data extractionism and colonialism.

Figure 45: Artificial Intelligence. Phonlamai Photo/Shutterstock.
Figure 46: “Businessperson And Robot Playing Tug Of War On Colorful Background”. Little does the businessman know that he is not actually tugging against an anthropomorphic AI – he is fighting against a constellation of technicist power structures.

It is reasonable to expect that this intensifying trend of automation, towards the ubiquitous permeation of AI, will continue as more and more aspects of our existence are commodified as data points, and as more data is produced by an increasing number of “smart” devices, indices, metrics, flows. It is therefore not a question of whether AI will arrive, but of reacting with urgency to how it is coming into being. We need to rework the core societal mechanisms that permit its social standing – the economic and legal apparatuses, the media circuitry, the gatekeeping of awareness, of understanding. The economic apparatus is not likely to budge anytime soon, since big data is seemingly the new big source of capital. “While power is capable of everything, what it cannot do is let go of power.”61 More wants more. AI is enlisted in the new conquest under the flag of capital accumulation.

"... pioneering artificial intelligence research will be a field of haves and have-nots. And the haves will be mainly a few big tech companies like Google, Microsoft, Amazon and Facebook, which each spend billions a year building out their data center…”62

Figure 47: AI hype, emergent technology hype. Source unknown.

Returning our sights to the publicly mediated figurations, we can quickly see that they are largely anonymous – they focus on AI, rather than on the constellations it operates from and is suspended within. While we are transfixed by this monolithic visual representation, the real work is happening behind the scenes. In this role, it effectively serves as a proxy, making us look one way in sublime awe, while further othering the technology. When examined in more detail, the engineered cultural authority of AI is nothing more than a Potemkin village. Unlike the original ploy, however, the purpose of this exalted façade is not to oversignify status to the regime in power. It appears to be quite the opposite – it is the regime in power performing to the people.

Certainly, the present mythologies surrounding anthropomorphic AI have stirred the Western collective imagination for a long time.63 As detached from reality as the representations appear, they hold denotative legitimacy for most beholders. Paradoxically, it appears that the aesthetic fiction surrounding AI has become more real than reality itself and usurped the hypostatic “authentic truth”, further affirming the simulacra-like nature of AI’s cultural standing.

  1. Ames, ‘Deconstructing the Algorithmic Sublime’.↩︎

  2. Flusser, Towards a Philosophy of Photography, 72.↩︎

  3. McKenzie Wark, Capital Is Dead (London ; New York: Verso, 2019), 21.↩︎

  4. Amongst several scholars, I have found the following analysis of data-centred economic models to be a very useful resource. See Naudé, ‘The Race against the Robots and the Fallacy of the Giant Cheesecake: Immediate and Imagined Impacts of Artificial Intelligence’, 38.↩︎

  5. Joler and Pasquinelli, ‘The Nooscope Manifested’.↩︎

  6. Wark, Capital Is Dead, 25.↩︎

  7. Quote by Alexander Singh, part of ongoing research titled Intertechnics. Reproduced with permission from author. Emphasis added.↩︎

  8. Joler and Pasquinelli, ‘The Nooscope Manifested’, 5.↩︎

  9. Wark, Capital Is Dead, 29.↩︎

  10. ‘The World’s Most Valuable Resource Is No Longer Oil, but Data’, The Economist, accessed 5 February 2020, https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data.↩︎

  11. Artificial Intelligence Is the New Electricity. Future Forum. Stanford School of Business, 2017. https://www.youtube.com/watch?v=21EiKfQYZXc.↩︎

  12. Ziarek, The Force of Art.↩︎

  13. Ahmed and Wahed, ‘The De-Democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research’, 54.↩︎

  14. One of the earliest prominent references to such a depiction is Karel Čapek’s 1920 science fiction play R.U.R. – Rossumovi Univerzální Roboti (Rossum's Universal Robots).↩︎

AI – a Global Entity Too Vast to Survey

Figure 48: Meme found online. Source unknown.

The public face of artificial intelligence enterprises rarely mentions the precise modus operandi of the material aspect of AI, reducing it to some variant of the common denominator “powered by AI”.64 Rather than approaching the status of a sublime hyper-entity, artificial intelligence is more accurately denoted as a constellation of data, labour, and planetary resources.65

Most artificial intelligence datasets are manually labelled in various production sites of the so-called “AI Supply Chain”,66 a global assembly line of “construction workers… [of] the digital world”67, with tasks commissioned by the global north routinely outsourced to the global south, where data and labour regulations are lax. The ghost labour employed for training AI is not confined to the global south – it happens all along the global supply chain. In their 2019 book, authors Mary L. Gray and Siddharth Suri write on the problematics of this new social organisation of work: emerging platform-based innovations are designed to appear to “deliver goods and services … under the pretense that a combination of APIs and artificial intelligence have eliminated what traditional employers used to pay for, namely, recruiting, training, and retaining workers.”68 According to the authors, the solution lies in “[recognising] that on-demand platforms aren’t just software. They are bustling, dynamic online labor markets that consist of humans on both sides of the market.”69 AI is rife with fauxtomation.70 The planetary reality of AI further reveals the disingenuousness of the prevalent myth of ethereal “AI-powered” tech – human labour is integral to the operation of these virtual technologies: “Some authors [suggest] replacing ‘automation’ with the more accurate term heteromation.”71

Case Study – Baidu Big Data Centre

China has emerged as a major player in the growing industry of data processing and labelling. New York Times author Li Yuan draws a lucid comparison: "If China is the Saudi Arabia of data, … these businesses are the refineries, turning raw data into the fuel that can power China’s A.I. ambitions.”72

Figure 49: Baidu Big Data Bainiaohe base. Taken from Time magazine.

The workers performing these menial labelling tasks are considered by some an interim labour force, to be eventually replaced by the very AI it has been employed to develop.74

Figure 50: Cong, Yan. Workers at the Headquarters of Ruijin Technology Company in Jiaxian, in Central China’s Henan Province. Digital Photograph. New York Times.
  1. ‘“AI-Powered” Is Tech’s Meaningless Equivalent of “All Natural”’, TechCrunch (blog), accessed 16 February 2020, http://social.techcrunch.com/2017/01/10/ai-powered-is-techs-meaningless-equivalent-of-all-natural/.↩︎

  2. Kate Crawford and Vladan Joler, ‘Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources’, 2018, https://anatomyof.ai/.↩︎

  3. Madhumita Murgia, ‘AI’s New Workforce: The Data-Labelling Industry Spreads Globally’, 24 July 2019, https://www.ft.com/content/56dde36c-aa40-11e9-984c-fac8325aaa04.↩︎

  4. Li Yuan, ‘How Cheap Labor Drives China’s A.I. Ambitions’, The New York Times, 25 November 2018, sec. Business, https://www.nytimes.com/2018/11/25/business/china-artificial-intelligence-labeling.html.↩︎

  5. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Eamon Dolan Books, 2019).↩︎

  6. Gray and Suri, 367.↩︎

  7. Coined by writer and artist Astra Taylor. It combines the word “faux” meaning fake with automation to express how work accomplished through human effort is falsely perceived as automated. Adapted from https://www.igi-global.com/dictionary/automation-and-augmentation/85839.↩︎

  8. Joler and Pasquinelli, ‘The Nooscope Manifested’, 19.↩︎

  9. Yuan, ‘How Cheap Labor Drives China’s A.I. Ambitions’.↩︎

  10. Charlie Campbell, ‘“AI Farms” Are at the Forefront of China’s Global Ambitions’, Time, accessed 28 August 2020, https://time.com/5518339/china-ai-farm-artificial-intelligence-cybersecurity/.↩︎

  11. Edd Gent, ‘The “Ghost Work” Powering Tech Magic’, accessed 17 February 2020, https://www.bbc.com/worklife/article/20190829-the-ghost-work-powering-tech-magic.↩︎

AI’s Black Box Problem

Figure 51: Meme about the black-box irony of neural networks. Source unknown.

The sublime blue images in this paper are a case in point of a non-rational gaze towards the emerging technology of Artificial Intelligence. Moving beyond the aesthetic level, the sublime also reigns supreme in the actual impenetrability of AI. It is a black box. Nobody really knows how the algorithms arrive at the conclusions they do – we know the inputs (the data), and we see the outputs, or rather, segments of them as mediated by software interfaces. The precise process of arriving at those outputs remains unknown.75
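To make this asymmetry concrete, consider a minimal sketch of my own (assuming the scikit-learn library; the toy dataset and model are hypothetical, not drawn from the thesis): a small neural network is trained, its inputs and outputs are fully visible, yet its internals are nothing more than arrays of learned weights, offering no human-readable account of why a particular output was produced.

```python
# A minimal illustration (assumed: scikit-learn is installed; data and model are toy examples).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Inputs: a synthetic dataset we can fully inspect.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# A small neural network, trained end-to-end.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X, y)

# Outputs: predictions we can observe...
print(model.predict(X[:3]))

# ...but the "explanation" is only a stack of weight matrices, opaque to a human reader.
print([w.shape for w in model.coefs_])
```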

“The black box effect … has become a generic pretext for the opinion that AI systems are not just inscrutable and opaque, but even ‘alien’ and out of control. … the black box rhetoric, … is closely tied to conspiracy theory sentiments in which AI is an occult power that cannot be studied, known, or politically controlled.”76

AI performs as a sort of partition: in front, the technology is kept opaque; beyond it, constellations of power, with their human rights and labour abuses, have also successfully hidden themselves. The sublime rhetoric (of which the black-box talk is also a part) might be likened to a smoke screen that prevents nuanced understanding. The sublime aesthetic trends have become as impenetrable as AI itself; they are a reflection of, and an accomplice in, our societal confusion. The smoke screen in question does not conceal an authentic, genuine AI. It forms a structurally indispensable part of AI’s ontological presence.

  1. Zunger, ‘Asking the Right Questions About AI’.↩︎

  2. Hochschule für Künste Bremen, The Nooscope Manifested: AI as Instrument of Knowledge Extractivism, 2020, 4, https://vimeo.com/418823251.↩︎

Against the Anthropocene – Against the Sublime Awe

Figure 52: Protest sign seen during the 2019 climate strike in Seattle. Source: Twitter – https://twitter.com/emilymbender/status/1175149636241129472/

In his 2017 book Against the Anthropocene, cultural critic TJ Demos presents an overview of the problematic term Anthropocene, and the sublime visuality used to mediate its corollary concept of climate destruction to the general public. The imagery used to mediate the phenomenon hardly answers to an investigative agenda; it most often bestows sublime awe, complete anonymisation, and a diffusion of culpability. Demos warns of the neoliberal political ideologies embedded in the staging of visual material surrounding the Anthropocene, and of the agenda to diffuse any sense of individual agency, responsibility, or inquiry into the massive global apparatus of resource extraction.77 A similar appeal to the sublime underpins the mediated visuality of AI.

Figure 53: A case in point of the semantic construction centred around AI, rather than the people who operate it.

Besides hiding AI’s planetary, resource-based nature, current trends in disclosure withhold technological literacy from citizens, diffusing and dissimulating responsibility for the enterprise’s operations. Educating citizens means giving them a vocabulary with which to relate to these emerging technologies – to consider them, to form a common-sense understanding and opinion, and to be capable of criticism towards the data-fuelled enterprises and the asymmetrical transactions they rely on. A shift towards transparent disclosure would expose an extremely telling compass for locating the networks of power and mapping out their impact in the world.

The global AI enterprise itself is too vast for an individual to grasp or survey. This brings a diffusion of responsibility. Whom should one point at with a possible j’accuse? To what proverbial door might a list of demands be posted? Who is to be held accountable? The mediated visuality conceals the culprits fully.

I would like to dispel any implication of orchestrated intentionality within these visual trends. I believe they are symptoms of a disorganised sprawling and striving towards power, which, rather than being meticulously orchestrated and deployed, “proceeds in a mindless automatic fashion.”78 The current global arms race results in deploying premature technologies at scales too large to survey.79 We are entering a speculative bubble that might eventually burst into another AI winter. The visual artefacts are bound to leave interesting souvenirs for future generations.

Figure 54: Deep Blue faced Garry Kasparov in a televised, highly publicised match in 1997. Kasparov was defeated. This was a watershed moment for public consciousness and collective opinion about the social potency of AI.
Figure 55: Shutterstock. A Man Playing Chess with a Robot. n.d. The original subject-matter inspiration is blatantly evident. The AI has become anthropomorphised. A big semantic leap.
  1. TJ Demos, Against the Anthropocene: Visual Culture and Environment Today (Berlin: Sternberg Press, 2017).↩︎

  2. Flusser, Towards a Philosophy of Photography, 64.↩︎

  3. Naudé, ‘The Race against the Robots and the Fallacy of the Giant Cheesecake: Immediate and Imagined Impacts of Artificial Intelligence’.↩︎

The Problem Isn’t in AI – It’s in Us

We are undergoing an exponential proliferation of AI discourse, with many scholars exploring the complications that the technology’s encroachment upon, and integration into, the fabric of our daily reality has brought with it.80

Figure 56: https://twitter.com/MissIG_Geek/status/1275796932720549891.

The inequality in the development of AI is also underpinned by the factual reality that AI has become far more computationally intensive in recent years. The capital needed to train these models is concentrated in the hands of a priori wealthy, elite research institutions and organisations, which further propagate the unequal distribution of AI research and representation.81 Another huge social problem of AI is the lack of ethics education in the scientific and industry sectors that engineer these predictive models. This problem is intimately interwoven with the disproportionately high funding of top AI researchers by corporate entities with their own agendas of capital accumulation,82 which outstrip public capacities in terms of mobility and budget.

Governmental initiatives for the mass deployment of AI seek to automate policy, including increasing the efficiency of surveillance systems and automating citizen control. The industry is a prolific subject for examining many of the defining power structures, systemic inequalities, and societal biases of our time. The global project of AI has repeatedly been exposed as neoliberal, colonial, racist, and uncritically predictive. Sabelo Mhlambi comments succinctly:

“In the present age, artificial intelligence is increasingly being used as the moral arbitrator of society, and the institutions largely advancing its ethics are incredibly non-representative of diverse human and interrelated human experience. AI is dominated by white institutions, elite, wealthy, and often heralded as the prime answer to social problems. Through “intelligent” machines whose decisions and ways surpass human comprehension and explainability, and, through the datafication of humans, nature, can discern and predict human action and intent, therefore being justified to mediate human affairs, a white serving creation myth emerges. It is a myth that alleges the supremacy and impartiality of algorithms, an implied morality, while in actuality automating social biases that favor whiteness and marginalize non-white people.”

The creation myth of AI is a carte blanche upon which the same “matrix of domination”83 is inscribed, upon which an Anglo-centric gaze reigns supreme. The anonymous aesthetics, and the discursive tendency for AI to be portrayed as a force majeure, further keep this inequality in place and lend space to the perpetuation of the same colonial dynamics.

As it stands, AI remains a brilliant multi-purpose tool. It is also a salient mirror of our own biases and ambiguities. The computer will always compute exactly what it has been instructed to. The instructions (the structural changes) are what we have to work on.84

Figure 57: Meme found online. Source unknown.
  1. A thorough survey stands outside the scope of this paper. Some of the essential readings that have been formative in my understanding of the problematics of our current AI-integrations are: Adam, Alison. ‘A Feminist Critique of Artificial Intelligence’. European Journal of Women’s Studies 2, no. 3 (1995): 355–377 | Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018. | Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin’s Press, 2018. | Kate Crawford, and Trevor Paglen. ‘Excavating AI’, 2019. https://www.excavating.ai/.↩︎

  2. Ahmed and Wahed, ‘The De-Democratization of AI: Deep Learning and the Compute Divide in Artificial’.↩︎

  3. Will Knight, ‘Many Top AI Researchers Get Financial Backing From Big Tech | WIRED’, 10 April 2020, https://web.archive.org/web/20201004110850/https://www.wired.com/story/top-ai-researchers-financial-backing-big-tech/.↩︎

  4. Sabelo Mhlambi, ‘God in the Image of White Men: Creation Myths, Power Asymmetries and AI’, Sabelo Mhlambi, 29 March 2019, /2019/03/29/God-in-the-image-of-white-men.↩︎

  5. Zunger, ‘Asking the Right Questions About AI’.↩︎

II.
(Re)sublime

Responsibility + Agency of the Image-Producer

It is impossible to give a definite form to AI. Every figuration of it is a creative gesture. Every act of depicting it marks a distancing from reality – without implied intentionality, a mediation, a dissimulation (by virtue of the gap between reality and signifier) of its true nature. Every act of depiction is therefore an act of subliming/resubliming AI. In the act, a new fiction is inevitably created, a meaning encoded, a new mythology given birth.

Communicative design is “engaged at the same time in the private interests of clients and media. To secure its existence…. [it] must constantly strive to neutralize these inherent conflicts of interest by developing a mediating concept aimed at consensus.”85

Innately embedded in social reality, each act of design advances an agenda and may be seen as an inherently political act. Each choice to select a designed artefact, to elect a specific predetermined image to mediate a concept to a viewership, further reinforces the arbitrary union between the intangible concept of AI, and the visual stand-in used to mediate it to the general public.

“Today, visual design no longer solely exists for example on the canvas of a poster nor the printed matter bound into books. It exists intangibly, embodying products completely removed from physical space. It works to represent businesses with no offices, or technology with no hardware.”86

There will be more and more intangible concepts generated as the digitisation of our global reality proceeds, each with a unique semantic gap that will have to be filled by a visual stand-in. It will become increasingly important to determine how these referents are created. It is difficult to observe the creation of an organic visual culture around them, since these technologies are becoming increasingly abstract and artificial as our world becomes increasingly pervaded by technology.

  1. Jan Van Toorn, ‘Design and Reflexivity’, 1994, https://designopendata.wordpress.com/portfolio/design-and-reflexivity-1994-jan-van-toorn/.↩︎

  2. Darin Buzon, ‘Design Thinking Is a Rebrand for White Supremacy’, Medium, 16 June 2020, https://medium.com/@dabuzon/design-thinking-is-a-rebrand-for-white-supremacy-b3d31aa55831.↩︎

Autonomous Art as a Strategy of Redisposing Social Relations

“Symbolic Forms are social forms – Symbolic productions represent the social position and mentality of the elites that create and disseminate them.”87

In his work, Ziarek further emphasizes the agency of autonomous art in disrupting the technicist agenda: “… [the] importance … lies in preserving, against the progressing saturation of all aspects of modern reality by technopower, the possibility of what I have termed an ‘aphetic forcework’.”88

Ziarek’s aphetic forcework refers to the reconfiguring of social relations, a coming-into-being of poietic potential, manifesting a Heideggerian momentum of release, Gelassenheit. Ziarek and Demos both express a strong belief in collaborative, participatory practices for redirecting the social momentum, raising the potential of art and design “as a collaborative and interactive, rather than individual-oriented, medium”.89

“Art guards the freedom, the otherness, of things and objects in the reality external to art precisely by refusing to be reduced to or representable as an object, in this way calling into question the powerful machinery of representation.”90

In our visually driven culture, figurations play a central role in mediating any given concept. The solution lies in disrupting the identical replication of an exhausted referent, in diversifying representation and thus refusing the status quo in which AI is reduced to a consolidated entity via its monolithic aesthetic. I concur with Ziarek on the importance of creating (visual) expression marked by an “…unwillingness to subsist on the terms prescribed … by the social operations of power that regulate cultural and aesthetic discourses.”91 An additional perspective was given in 1994 by designer Jan van Toorn: “In other words, the designer must take on an oppositional stance, implying a departure from the circle of common-sense cultural representation.”92

  1. Van Toorn, ‘Design and Reflexivity’.↩︎

  2. Ziarek, The Force of Art, 79.↩︎

  3. Ziarek, 45.↩︎

  4. Ziarek, 60.↩︎

  5. Ziarek, 68.↩︎

  6. Van Toorn, ‘Design and Reflexivity’.↩︎

The “Cartographic Shift” and the Importance of Cognitive Mapping

Visual representations have attempted to capture the totality of the global internetworked AI enterprise, often referred to as the ‘global AI supply chain’.93 The attempt to depict these flows and figure them in a cartographic format brings with it inherent problems – it is impossible to escape a degree of essentialism.

Toscano and Kinkle refer to Jameson’s appeal for the urgency of critical cognitive mapping, and posit that “cultural producers, for the most part, do not literally attempt to generate maps of the new interconnected global reality, or even to address it frontally.”94 According to them, the critical address of our global technocapitalist society is rarely undertaken by producers of (visual) culture; rather, it is left to the critic to survey and respond to. It is here that design can play an interesting role. In a unique position, the designer can function both as critic and as cultural/visual producer. In his 1994 essay Design and Reflexivity, Dutch author Jan van Toorn urges (fellow) designers to cease perpetuating the same symbolic forms: instead, "[m]oving from a reproductive order to a commentating one, operative criticism can make use of a long reflexive practice …”95 – by entering the machinery of cultural production and destabilising it from within.

This multidirectional potency gives the designer-critic a unique position of agency from which to comment on cultural developments while at the same time producing culture. The immense need for cognitive mapping in our advanced technocapitalist society has resulted in several cartographic endeavours. The cartographic endeavour of ‘mapping capitalism’96 is, to a growing extent, coextensive with the cartographic endeavour of mapping artificial intelligence. It shares the same gestures and runs along a coaxial objective. This project of cartography involves the mission of “complex seeing”, of an omniscient narrator that surveys the landscape and reveals its intricacies, inner mechanisms, workings, and flows. In this regard, the cartographic narrator moves “around” and above the power dynamic, disclosing and dividing the global constellation into discernible parts, in the process desubliming it and shedding light on the anatomical complexity of artificial intelligence.

At the core of the sublime disclosure of such technologies lies the fact that they are, in essence, theoretical models resulting in vast systems of physical infrastructure and data flows, involving a plethora of actors and nodes. By sole virtue of their complexity, they are impossible to disclose in a concise aesthetic form, such as an image or headline. The intentionality of such artefacts in media emissions is further steered by corporate agendas and bridled by the constellations of power that exert influence upon them.

Figure 58: “View of a Cyborg hand holding a Connection around a world globe 3d rendering”. Shutterstock.

Perhaps the exalted aesthetic trends are simply a reflection of excitement for the possible innovations that artificial intelligence may bring into our internetworked society. This does not take away from the long record of violations of data rights and human labour, and the intensifying stratification of capital, all of which become othered from the hyper-entity of AI when it is disclosed as a sublime force rather than as a traceable topology of real-time processes and infrastructure, backed by accountable agents.

The exponentially sprawling enterprise of AI is becoming increasingly interwoven into the global circuit of commerce and policy-making apparatuses, and it is equally pervading the social fabric of our public and private lives. This is not, however, happening in the conspicuous, caricatural gesture in which it is most usually portrayed in public emissions. The presence of AI has become interwoven into society in a much more diffuse manner. Endeavours to map the outlines of this global colossus have been on the rise in recent years.

Figure 59: Crawford, Kate, and Vladan Joler. Anatomy of an AI System. 2018. Digital. https://anatomyof.ai/img/ai-anatomy-map.pdf

Described by the authors as an “anatomical case study of the Amazon Echo as a[n] artificial intelligence system made of human labor”, the diagram shows the complexity of a specific AI-powered product, the Amazon Echo home assistant. The visualisation was awarded Design of the Year 2019 by Dezeen, a leading design publication.97 It stands as a potent testimony to the power of cognitive mapping in the project to dispel the algorithmic sublime, or at least to replace it with a data-visualisation sublime,98 an information overload that reveals the true extent of the global enterprise. Joler has been a key player in the production of critical cartographic figurations of AI.

Figure 60: Vladan Joler and Matteo Pasquinelli. Nooscope – AI as Instrument of Knowledge Extractivism. 2020. In my opinion, another extremely valuable initiative created in the past year.

Vladan Joler and Matteo Pasquinelli define the purpose of the Nooscope as being to “secularize AI from the ideological status of ‘intelligent machine’ to one of knowledge instrument”.99 They refer to the need for a new understanding of AI, one that our current social moment is sensitised for:

“Ultimately, the Nooscope manifests for a novel Machinery Question in the age of AI. The Machinery Question was a debate that sparked in England during the industrial revolution, when the response to the employment of machines and workers’ subsequent technological unemployment was a social campaign for more education about machines, that took the form of the Mechanics’ Institute Movement.”

Joler and Pasquinelli’s mapping and its accompanying essay accomplish the lucid task of discussing the extremely nonvisual aspects of AI by inventing the visual language used to mediate it, and by basing their argumentation on the conceptual substratum that this language creates.

Cartographic disclosure may be a potent strategy of insurrection against the globalising technicist agenda – it is an undertaking towards the becoming-known of the unknowable, the grasping of the ungraspable, the absolute disclosure of the diffuse, exponentially growing enterprise of Artificial Intelligence. Once you can trace the contours of the gargantuan enterprise, you can begin to sense who the actors involved are, how they interoperate, and, even more importantly, where the blame might lie and who might be deriving capital from its workings. Such global mapping attempts are, however, entrapped in their own dialectic:

“Overview, especially when it comes to capital, is a fantasy – if a very effective, and often destructive, one. Because we can’t extricate ourselves from our positions in a totality that is such through its unevenness and antagonism, there is in the end something reactionary about the notion of a metalanguage that could capture, that could represent, capitalism as such.”100

There is no such thing as a conclusive, exhaustive disclosure of AI. The potency of mapping lies in its gesture of showing the iceberg for what it is – a cryptogram of knowledge, power, resources, and corporate actors. Such critical initiatives frontally address the misleading front face of “AI-powered” tech; they rupture and redispose existing social relations in subtle, inventive ways. We need a new rhetoric, and to discourage sensationalist titles and parlance in which AI is donned with an autonomous, human-like personality and character – a framing that further discloses it as a sublime entity with human-like cognitive capacities, while setting expectations for its anticipated performance disproportionately high.

From charming nonentity, to performative scientific tool, to an integrated invisible force, the ontological presence of AI has undergone a cascade of paradigm shifts. From AI-powered to AI-dominated, we are in the midst of a rapid destabilisation of social relations and a renewed formation of forces. It is also the role of imagery and visual mediation here, and of cultural production at large, to establish these new relationalities of poietic potential. It seems rather unlikely that cartographic endeavours will take centre stage in the public theatre any time soon. It takes little argumentation to agree that a holographic rendition of a robot is more universally mediatable than a several-thousand-word academic diagram.

Our episteme is sensitised for a redefinition of the visual language used to disclose emergent technologies and asks for a more vigilant approach, combined with a generally better understanding of the highly abstracted technologies we use on a daily basis.

  1. One of the first coverages of the sprawling industry appeared in The New York Times in 2018, coeval with Crawford and Joler’s explication of AI. Refer to Yuan, ‘How Cheap Labor Drives China’s A.I. Ambitions’.↩︎

  2. Alberto Toscano and Jeff Kinkle, Cartographies of the Absolute, 2015, https://search.ebscohost.com/login.asp?direct=true&scope=site&db=nlebk&db=nlabk&AN=941611.↩︎

  3. Van Toorn, ‘Design and Reflexivity’.↩︎

  4. Toscano and Kinkle, Cartographies of the Absolute.↩︎

  5. ‘Anatomy of an AI System Wins Design of the Year 2019’, Dezeen, 22 November 2019, https://www.dezeen.com/2019/11/22/anatomy-of-an-ai-system-design-of-the-year-2019/.↩︎

  6. Lev Manovich, ‘The Anti-Sublime Ideal in Data Art’, 2002, http://www.manovich.net/DOCS/Data_art.doc.↩︎

  7. Joler and Pasquinelli, ‘The Nooscope Manifested’, 2.↩︎

  8. Toscano and Kinkle, Cartographies of the Absolute, 354.↩︎

Subverting the Instruments of Power

Turning the instruments of power back on themselves means reworking them from the inside, from within the paradigm, and thereby changing it. It is a sort of culture jamming that helps to turn the power dynamic on its head.

Case study: White Collar Crime Risk Zones

Figure 61: White Collar Crime Risk Zones. Brian Clifton, Sam Lavigne, and Francis Tseng for The New Inquiry. Using machine learning to subvert the power dynamic.

These types of initiatives are consequential – they stand in opposition to the reproductive forces that perpetuate the same linear, asymmetrical transmissions of technopower. They derail the current momentums of power and transfigure them with clear expositions and demands aimed at the actual sources of power. Visual production, art, and design here play a fundamental role.

Case Study: AI Incident Database

Figure 62: AI Incident database, search for “Facial Recognition”. https://incidentdatabase.ai/discover/index.html

The AI Incident Database is described by its authors as “intended for users that need to discover whether their intelligent system or problem area has previously produced incidents in the real world.” The user can query the database for specific types of incident, for example only those involving “facial recognition”. At the time of writing, the database consists of 1175 press reports of AI-related incidents. Showing the chinks in AI’s armour is a potent strategy for dispelling the sublime frontage.
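The querying workflow described above can be pictured with a minimal, purely hypothetical sketch – this is not the AI Incident Database’s actual interface or API, and the two example records and the search() helper are invented for illustration only – showing what keyword-based discovery over a collection of incident reports amounts to.

```python
# Hypothetical sketch only: a local stand-in for keyword search over incident reports.
# The records and the search() helper are invented for illustration; the real database
# is queried through its web interface at incidentdatabase.ai.
from dataclasses import dataclass
from typing import List

@dataclass
class IncidentReport:
    title: str
    description: str

reports: List[IncidentReport] = [
    IncidentReport("Example report A", "An incident involving facial recognition."),
    IncidentReport("Example report B", "An incident involving automated credit scoring."),
]

def search(items: List[IncidentReport], keyword: str) -> List[IncidentReport]:
    """Return reports whose title or description mentions the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [r for r in items if kw in (r.title + " " + r.description).lower()]

print([r.title for r in search(reports, "facial recognition")])  # -> ['Example report A']
```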

The Need for Symbolic Language

The conceptual slot carved out by the sudden presence of new technologies has largely been filled in by blasé figurations. A shift towards the development of a poetics of AI is needed – to invent new words, signs, and meanings with which to relate to these new technologies.

Figure 63: Nora N. Khan, Adam Ferriss. Towards a Poetics of Artificial Superintelligence.

The answer may lie in devising better words for delineating the concept. Khan urges us to rethink the aesthetic language and cultural forms used to mediate these new technologies. The author quotes poet Jackie Wang, who wrote on their experience of encountering figurations of aliens: “…aliens could look like anything and yet we represent them as creatures close to humans. The aliens at this museum had two legs, two eyes, mouth — their form was essentially human. I wondered, is this the best we can come up with?”101 AI is an alien too, something we have never seen before, and it is becoming increasingly potent… There is a lot of poignancy in the comparison – I fully agree that we are in dire need of “an imaginative paradigm shift”.102

Figure 64: “Businessman on blurred background using digital artificial intelligence interface 3D rendering”. Shutterstock.

As it stands, AI is represented as a simulacrum of what non-human subjectivity may look like, a poor image, a replica. All other possibilities, including the plethora of ones in which AI might not conform to anthropocentrism, are dauntingly scary, and very rarely depicted. It seems we are trying to project an overall veneer of familiarity and control onto the emergence of AI, by disclosing it as ≤ human, possibly equal, and vehemently never more. Khan observes how “For most people, thinking of a world in which we are not the central intelligence is not only incredibly difficult but also aesthetically repulsive. Popular images of AGI [artificial general intelligence], let alone true ASI [artificial superintelligence], are soaked in doomsday rhetoric.”103

Figure 65: Robot from Interstellar (2014). Looking towards science fiction, one of the most powerful forms of enacting poietic potential.
  1. Nora Khan, ‘Towards a Poetics of Artificial Superintelligence’, Medium, 10 October 2016, https://medium.com/after-us/towards-a-poetics-of-artificial-superintelligence-ebff11d2d249.↩︎

  2. Khan.↩︎

  3. Khan, 5.↩︎

  4. An extremely comprehensive overview of AI-rhetoric as seen in Science Fiction has been produced by Christopher Noessel in 2018. See: Noessel, Christopher. ‘What Stories Aren’t We Telling Ourselves about A.I.?’ scifiinterfaces.com, July 2018. https://d2w9rnfcy7mm78.cloudfront.net/7485022/original_d589dd23e7368978cddb6cf5b5dae505.png.↩︎

AI – Mystical Code, Words Becoming Flesh

Figure 66: !MEDIENGRUPPE BITNIK. Alexiety. 2018. This is also AI. A consequential use of the current paradigm, to expose the peculiarity of AI as it is incorporated into the world at present. Language-based, sleek outlook.

Florian Cramer posits the presence of a transcendental mythology in our common relating towards algorithms and code – of words becoming flesh. Our collective imagination is stirred by the promises of these technologies, their limitations, and the potential shifts they bring about. The fascination is a sort of ouroboros, self-defeating, self-perpetuating:

“Computation and its imaginary are rich with contradictions and loaded with metaphysical and ontological speculation. Underneath those contradictions and speculations lies an obsession with code that executes, the phantasm that words become flesh. It remains a phantasm, because again and again, the execution fails to match the boundless speculative expectations invested into it.”105

A.I. might simply be the newest mystical entity, bearing the most elaborate, mysterious code we've ever seen.

  1. Cramer, Words Made Flesh, 126.↩︎

Resubliming is Not Enough

Figure 67: Refik Anadol. Archive Dreaming. Representing AI as a sublime, omniscient entity… Beautiful insight into the expansive potential of AI as an information sifter. At the same time, subliming the technology much further with imagined super-human capacity. Produced within Google’s residency program.

Developing diverse visual expressions to mediate the concept of AI is a viable step forward, but it is not enough. A critical position is of fundamental importance for the image-producer – being aware of which agendas are being perpetuated with every figuration. Van Toorn mobilises fellow designers “to apply our imaginative power once again to how we deal with communicative reality.”106

A détournement is not enough. The visual artefacts are symptoms, rather than instruments with/of their own pointed agency. We need a coherent demolishing, and restructuring, towards a new, different, more informed, more inclusive reality, in which AI is a more universally understood topic in fundamental discussions, rather than an impenetrable force, impossible to interact, interface, or negotiate with.

Poietic figurations showing the possibilities of AI are extremely valuable. Decontextualised, and built upon a fundamental societal non-understanding, and even fear, of these technologies, however, they may further perpetuate their sublime otherness. Resubliming these technologies, I believe, should therefore come second to education.

Figure 68: Caye, Marie, and Arvid Jense. S.A.M. The Symbiotic Autonomous Machine. 2017. Installation. Arvid & Marie. https://arvidandmarie.com/sam.html. A quirky, non-anthropomorphic figuration of AI.

It is impossible to do anything but consent by default to these recent technological developments and the increasing footprint they are claiming in one’s personal territory, existence, and subjectivity – they are being implemented on a playing field of which one had no prior awareness or knowledge, and equally little agency to delineate. We should advocate for greater awareness, knowledge, and a fundamental epistemological reconfiguration of the policies and strategies for mediating the concept of AI to the general public. We need positive complications with the potential to inspire closer examination, dissent, discussion, and a plurality of opinions.

III.
Paving the Future

“It is going to get easier and easier... and more and more convenient and more and more pleasurable ... to sit alone with images on a screen ... given to us by people who do not love us but want our money. And that's fine in low doses, but if it is the basic main staple of your diet, you are going to die.”107

Figure 69: “Robot and human handshake collaboration”. iStock Photo.
  1. Van Toorn, ‘Design and Reflexivity’.↩︎

  2. David Foster Wallace’s character in the film The End of the Tour, 2015.↩︎

Education = Destabilisation!

Listening to an old podcast on the nascent introduction of the internet, I noticed that people’s confusion was equally grounded. The host confusedly asked: “What are you supposed to do with it, call it?”.108 It was impossible to foresee the exponential growth that would happen in the years following. Right now, we are seemingly in the middle of a similar process of exponential growth with AI, only that it has already come much further, and the average informed citizen has comparatively less knowledge about it, less possibility of participation, and is bereft of a vocabulary to address it or frameworks for relating to it. We are only beginning to talk about it, and it is already well implemented – not in a democratic way, but by the hegemonic constellations of power. And we know so disproportionately little about it compared to how much it knows about us, and how much it is integrated into our daily lives. It has become a sort of permeating force, pervading the very reality of our common existence, in a similar way that the internet did in such a short period of time. Being internet-illiterate in 2020 is a huge disadvantage. Being AI-illiterate, I imagine, might prove even more disadvantageous in the imminent future.

In addition to the role that the shifting of aesthetic production towards socially consequential, active, poietic expression can play in reconfiguring the complications of technopower, education is the foremost driver in changing the (aesthetic) discourse surrounding AI. In order to have a say in AI discussions and decisions regarding its increasing social prominence and integration into the fabric of our global society, we will need to disalienate it and find ways of understanding it sufficiently to pinpoint its shortcomings and hold the right people accountable.

Figure 70: AI+ME. How AI Sees the World. 2020. A children’s book informing readers about AI, still employing the same visual clichés. Children’s books could offer prime space for introducing new and diverse modalities of relating to AI in the future. The book explains AI, but in terms of it being anthropomorphic.

Case Study: Finland’s goal to educate 1% of Europeans on AI

Figure 71: Elements of AI – Homepage

At the end of its European presidency in 2019, Finland launched an initiative to educate at least 1% of European citizens in the fundamentals of AI. In addition, it offered the course openly to the public, to promote greater education and diversity in training. Called “Elements of AI”,109 the course aims to “demystify AI”. Google’s own Sundar Pichai has commented: “… I wish it is a template which other countries can use”.110 This type of global sensitisation of the general population, with the goal of equipping any citizen with enough knowledge to be able to participate in the discussion, seems like exactly the type of initiative we need (more of).

Diverse visual representations, without a strong underpinning of education, greater awareness, and an exposure of the complications of power operating on the obverse side, serve us little. They are, however, of prime importance once a fundamental strategy of education is in place. Greater understanding will likely result in the elimination of such tired clichés. When we pave this newly opened space with topoi of anthropomorphic killer robots, we perpetuate an agenda in which such an ontological figuration is conceivable, effable, slightly more normative.

Figure 72: AI is a tool – a potent tool, excelling at challenges of big data, trained on extremely specific scenarios.
Figure 73: This is already a much better example of a direction forward. AI does make us all buy more things. Shedding light on the actual applications of AI, and the extent to which it does so.
  1. ‘The Lost Cities of Geo’, 99% Invisible (podcast), accessed 13 January 2021, https://99percentinvisible.org/episode/the-lost-cities-of-geo/.↩︎

  2. ‘Elements of AI’, Elements of AI, accessed 29 November 2020, https://www.elementsofai.com/.↩︎

  3. Google CEO Sundar Pichai as quoted on ‘Elements of AI’.↩︎

  4. Andrey Kurenkov, ‘A “Brief” History of Neural Nets and Deep Learning’, 2015, http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/.↩︎

Landmark Regulations

Figure 74: It will be difficult to enforce indeed.

The AI enterprise is a nonspatial territory. It is not governed by intrinsic regulations guiding its expansion, nor by elected subjects entitled to its profits; its constitution bears no enshrined notions of democracy. Things are beginning to change. Landmark regulations have been enacted.112 Despite the notable shifts in public perception, many such manifestos fail to deliver – they land as palliatives on top of a behemoth industry that has already grown far too large to promptly concede. Peering into the institutions where power is produced and maintained will continue to be an extremely important strategy going forward. It is about the sites of policymaking and knowledge production (educational institutions, and the media – including its aesthetics). It is about confronting the machinery of global economics, so as not to integrate AI into it in the same way it has been so far – as a tool for accumulating technocapital:

“Machine intelligence is not biomorphic—it will never be autonomous from humankind and, for sure, from the difficulties of capital, since it is a functional component of industrial planning, marketing strategies, securitarian apparatuses, and finance.”113

Political discussions on AI are inherently intertwined with discussions on data. Our societal attitudes towards data will have to be reconfigured before the biases and fundamental inequalities that AI is perpetuating can begin to ebb away.

Figure 75: AI’s speedy incorporation is bringing with it a plethora of legal and ethical concerns.
  1. See the appendix for public milestone regulations. In addition, many interesting initiatives are springing up in the private sector – for further reading, see for example: Futurism. ‘To Build Trust in Artificial Intelligence, IBM Wants Developers to Prove Their Algorithms Are Fair’. Accessed 28 August 2020. https://futurism.com/trust-artificial-intelligence-ibm.↩︎

  2. Pasquinelli, ‘Abnormal Encephalization in the Age of Machine Learning’.↩︎

Reviving the Commons, Coming Together

“AI is one of the most consequential technologies of our time, and it is well-acknowledged that democratizing AI will benefit a large number of people”114

Demos refers to the importance of collective action in standing up to the inaccessibility of the regime that is causing widespread climate destruction. I, too, urge a coming together in the spirit of interrogating our received thoughts about AI. We are in need of coming together as a biopolitical assemblage, “…comprised of living subjects, physical space, material infrastructure, technological devices, cultural forms, and organization practices that simultaneously stage dissent against the status quo while prefiguring ‘alternative worlds’.”115

A similar energy of collective action has been channeled at the level of determining ethics for future AI systems, in the proposal to establish people’s councils to counterbalance the asymmetry inherent in the huge scale of data collection for training machine learning models. People’s councils are “horizontal structures in which everyone has an equal say about the matter being decided.”116 Employing such councils (as opposed to leaving choice up to a handful of individuals) in the collective decisions that are routinely being made within the field of AI “means countering lack of consent with democratic consensus, replacing opacity with openness and reintroducing the discourse that defines due process.”117

Figure 76: https://twitter.com/danmcquillan/status/1306647351566766080.

A proposed strategy for countering the worrying trends of de-democratisation in AI research has been the establishment of common datasets:

“Shared public datasets that can help to train and test AI models will be particularly beneficial for resource-constrained organizations. We posit that by releasing publicly owned data, governments can help non-elite universities and startups in the AI research race.”118

It would serve us well to think of ways to legitimise the image-making of these emerging, completely intangible concepts, as a collaborative practice. Can image-making somehow become a democratic process? Our images of AI constitute a shared, common dataset, holding our shared symbolic reality. The relationship between image and reality is fundamental, core-constituent. It would be “of public interest to acquaint a wider audience with forms of communication contributing to more independent and radical democratic shaping of opinion.”119

Figure 77: Image found online. Source unknown.
  1. Ahmed and Wahed, ‘The De-Democratization of AI: Deep Learning and the Compute Divide in Artificial’, 38.↩︎

  2. McKee, Yates. Strike Art: Contemporary Art and the Post-Occupy Condition. London ; New York: Verso, 2016. As quoted in Demos, Against the Anthropocene: Visual Culture and Environment Today. 44.↩︎

  3. Dan McQuillan, ‘People’s Councils for Ethical Machine Learning’, Social Media+ Society 4, no. 2 (2018). 7.↩︎

  4. McQuillan, 4.↩︎

  5. Ahmed and Wahed, ‘The De-Democratization of AI: Deep Learning and the Compute Divide in Artificial’, 89.↩︎

  6. Van Toorn, ‘Design and Reflexivity’.↩︎

AI’s Emergence into Society

“Previously the tool was the variable and the human being the constant, subsequently the human being became the variable and the machine the constant.”120

AI becoming kin, or a future of “singularity”, are but two of an infinite number of possibilities. A big problem with the imagery used to mediate the concept of artificial intelligence is that it cannot seem to let go of putting the human first. Rethinking this would surely be a viable strategy forward. AI is already so othered – why do we not proceed to other it in more inventive, less solipsistic ways?

In regard to the future potentials of non-human intelligence, keeping it within our own bounds – on our own territory, in the familiar – appears a repressive, colonial gesture. Envisioning possibilities of a greater other instils fear. Then we no longer hold dominion. If we begin to discuss true superintelligence that operates in its own, non-anthropocentric paradigm, our current visions of AI quickly reveal themselves as very naïve.

Figure 78: The Singularity? Based on current trends, it is not going to happen as soon as some would like to believe.

Rather than integration, it is important that we first address the project of disalienating AI as a growing urgency going forward – moving away from the default and expected state of alienation from the actuality of the technology and its workings. The aim is to cease disclosing it in the paradigm of the sublime simulacrum, and rather to manifest its social potential in a relationality that is operational – one that is present, plausible, and affords interaction, exchange, and intersubjectivity. One that is accessible. Counteracting these visual flows is on many occasions symmetrical with counteracting digital capitalism.

Focusing on the provable, the material, might then be the better strategy towards dispelling the technological sublime and promoting fruitful societal discourse and participation in the huge benefits that emerging cybertechnologies are bringing into our common society.

A shift of optics towards an emphasis on the human-powered, physical fabric of algorithmic infrastructures and networks may serve as an important step.121 This will require a drastic change of course in the paradigm of aesthetic disclosure surrounding emergent technologies. Fundamentally, it means actively guiding new discourse towards the destabilisation of the immaculate, exalted aura that such emergent technologies adopt nearly as quickly as they are introduced.

Desubliming comes in two prescriptions: first, the fundamental need to disclose an informed reality of AI to the public, rather than perpetuating the mythology; and second, a poietic resubliming – using figuration differently.

I here turn to image-producers from all walks of life – of image, text, and signification at large – to pave the way towards a future of emergent potential. Speaking from personal practice, I would hope that the field of image-producers can gather the momentum necessary “to apply our imaginative power once again to how we deal with communicative reality.”122

  1. Flusser, Towards a Philosophy of Photography. 24.↩︎

  2. Ames, ‘Deconstructing the Algorithmic Sublime’.↩︎

  3. Van Toorn, ‘Design and Reflexivity’.↩︎

Conclusion

Figure 79: “Futuristic evolution of people digital transformation abstract technology background. Artificial intelligence and big data concept. Business growth computer and investment”. iStock Photo. The planetary intensity remains strong.

Seeing AI is not achieved by zooming out to its totality – it is an impossible entity to map (imagine disclosing the internet as a map at this point in time). It does help, however, to collectively know how it works at a baseline level, what is reasonable to demand, and what is unreasonable to expect. For that, we need a relation to it based on accessibility and understanding. That requires looking underneath the sublime front and desubliming it, to make room for information, for an elemental knowledge of its core workings. Passing the relay baton forward, the time comes for invoking the forgotten social contract of image-producers in imaginative figuration, of draping the signifier in different sublime dress. It is important to keep weaving new mythologies and stories into existence. We have to liquefy the monolithic image, and not proceed to ossify another in its stead. Let us move towards the welcoming of pluriformity… the myriad shifting faces of AI.

Bibliography

Adam, Alison. ‘A Feminist Critique of Artificial Intelligence’. European Journal of Women’s Studies 2, no. 3 (1995): 355–377.

Ahmed, Nur, and Muntasir Wahed. ‘The De-Democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research’, 22 October 2020, 52.

AIArtists.org. ‘Artificial Intelligence Timeline’. AIArtists.org. Accessed 9 July 2020. https://aiartists.org/ai-timeline-art.

Al Jazeera. EU to Unveil Proposed Regulations for Artificial Intelligence, 2020. https://www.youtube.com/watch?v=xBg6JthpeFg&list=PLzGHKb8i9vTysJlqfhIyEieT2FqwITBEj.

Ames, Morgan G. ‘Deconstructing the Algorithmic Sublime’. Big Data & Society 5, no. 1 (June 2018): 4. https://doi.org/10.1177/2053951718779194.

Amoore, Louise. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press, 2020.

———. ‘Why “Ditch the Algorithm” Is the Future of Political Protest | Louise Amoore’. the Guardian, 19 August 2020. http://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics.

‘Artificial Intelligence - Rise of the Machines | Briefing’. The Economist, 2015. https://www.economist.com/briefing/2015/05/09/rise-of-the-machines.

Artificial Intelligence Is the New Electricity. Future Forum. Stanford School of Business. Accessed 12 February 2020. https://www.youtube.com/watch?v=21EiKfQYZXc.

Barthes, Roland. Mythologies. Vintage, 1993.

Batzoglou, Serafim. ‘Supercreativity’. Medium, 21 October 2019. https://towardsdatascience.com/supercreativity-b4114ebd0357.

Baudrillard, Jean. Simulacra and Simulation. The Body, in Theory. Ann Arbor: University of Michigan Press, 1994.

Baudrillard, Jean, and Mark Poster. Selected Writings. Stanford, Calif: Stanford University Press, 1988.

Borowski, Judy, and Christina Funke. ‘Challenges of Comparing Human and Machine Perception’. The Gradient, 6 July 2020. https://thegradient.pub/challenges-of-comparing-human-and-machine-perception/.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. First edition. Oxford: Oxford University Press, 2014.

Brennen, J Scott, Philip N Howard, and Rasmus Kleis Nielsen. ‘An Industry-Led Debate: How UK Media Cover Artificial Intelligence’, 2018, 10.

Bridle, James. New Dark Age: Technology, Knowledge and the End of the Future. London ; Brooklyn, NY: Verso, 2018.

Buzon, Darin. ‘Design Thinking Is a Rebrand for White Supremacy’. Medium, 16 June 2020. https://medium.com/@dabuzon/design-thinking-is-a-rebrand-for-white-supremacy-b3d31aa55831.

Campbell, Charlie. ‘“AI Farms” Are at the Forefront of China’s Global Ambitions’. Time. Accessed 28 August 2020. https://time.com/5518339/china-ai-farm-artificial-intelligence-cybersecurity/.

Centre for the Study of the Networked Image. ‘About – CSNI’, 2020. https://www.centreforthestudyof.net/?page_id=756.

Chandler, Daniel. ‘Semiotics: The Basics’, 2017, 353.

———. ‘Technological Determinism: Reductionism’. Accessed 18 September 2020. http://visual-memory.co.uk/daniel/Documents/tecdet/tdet03.html.

Cobb, Matthew. ‘Why Your Brain Is Not a Computer’. The Guardian, 27 February 2020, sec. Science. https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness.

Craft-Jenkins, Kyle. ‘Artificial Intelligence and the Technological Sublime: How Virtual Characters Influence the Landscape of Modern Sublimity’. University of Kentucky, 2012.

Cramer, Florian. Words Made Flesh. Rotterdam: Piet Zwart Institute, 2005.

Cristian M. AI and Our Health Data: A Pandemic Threat to Our Privacy | The Listening Post, 2020. https://www.youtube.com/watch?v=N5GInEKo8fw.

———. Artificial Intelligence: The World According to AI |Targeted by Algorithm (Ep1)| The Big Picture, 2019. https://www.youtube.com/watch?v=134huBl7MAA&t=339s.

———. EU to Unveil Proposed Regulations for Artificial Intelligence, 2020. https://www.youtube.com/watch?v=xBg6JthpeFg.

Day, Ronald E. ‘A Review of: "The Digital Sublime: Myth, Power, and Cyberspace’: By Vincent Mosco. Cambridge, MA: MIT Press, 2004. Ix + 218 Pp. $27.95 (Cloth). ISBN 0-262-13439-X.’ The Information Society 21, no. 3 (July 2005): 223–24. https://doi.org/10.1080/01972240490951999.

De Dios Santos, Juan. ‘On the Sensationalism of Artificial Intelligence News’. KDnuggets (blog), 2019. https://www.kdnuggets.com/on-the-sensationalism-of-artificial-intelligence-news.html/.

Demos, TJ. Against the Anthropocene: Visual Culture and Environment Today. Berlin: Sternberg Press, 2017.

Deng, Iris. ‘In China 9 out of 10 Workers Trust a Robot More than Their Manager’. South China Morning Post, 16 October 2019. https://www.scmp.com/tech/big-tech/article/3033143/almost-90-cent-chinese-workers-trust-robot-more-their-human-managers.

DiResta, Renée. ‘The Supply of Disinformation Will Soon Be Infinite’. The Atlantic, 20 September 2020. https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/.

Domingos, Pedro. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. New York: Basic Books, a member of the Perseus Books Group, 2015.

Dubber, Markus Dirk, Frank Pasquale, and Sunit Das. The Oxford Handbook of Ethics of AI, 2020.

Elements of AI. ‘Elements of AI’. Accessed 29 November 2020. https://www.elementsofai.com/.

Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin’s Press, 2018.

Fan, Shelly, and Matthew Taylor. Will AI Replace Us? A Primer for the 21st Century. Big Idea. New York, New York: Thames & Hudson, Inc, 2019.

Federici, Silvia, and Peter Linebaugh. Re-Enchanting the World: Feminism and the Politics of the Commons. Kairos. Oakland, CA: PM, 2019.

Flusser, Vilém. Towards a Philosophy of Photography. London: Reaktion Books, 1983.

Gorey, Colm. ‘Should We Believe the Hype? Media May Be Warping Reality of AI’s Powers’. Silicon Republic, 21 February 2019. https://www.siliconrepublic.com/machines/ai-media-coverage-bias.

Gray, Mary L., and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Eamon Dolan Books, 2019.

Griffin, Andrew. ‘Facebook’s Artificial Intelligence Robots Shut down after They Start Talking to Each Other in Their Own Language | The Independent’, 2017. https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html.

Hao, Karen. ‘Why Kids Need Special Protection from AI’s Influence | MIT Technology Review’, 17 September 2020. https://www.technologyreview.com/2020/09/17/1008549/kids-need-protection-from-ai/.

Harman, Graham. Object-Oriented Ontology: A New Theory of Everything. Penguin UK, 2018.

Heikkila, Andrew. ‘Blade Runner Rule: Public Perception and A.I. in Marketing and Brand R’. Datafloq, 2017. https://datafloq.com/read/public-perception-ai-marketing-brand/3832.

Hreha, Jason. ‘Anthropomorphism and AI’. Medium, 26 March 2018. https://becominghuman.ai/anthropomorphism-and-ai-151ed0b17dad.

Johnson, Eric. ‘“Minority Report” Interface Designer: Future Tech Needs a Better Dictionary’. Vox, 25 June 2015. https://www.vox.com/2015/6/25/11563868/minority-report-interface-designer-future-tech-needs-a-better.

Joler, Vladan, and Matteo Pasquinelli. ‘The Nooscope Manifested: AI as Instrument of Knowledge Extractivism’. The Nooscope Manifested: AI as Instrument of Knowledge Extractivism. Accessed 1 September 2020. http://nooscope.ai/.

Khan, Nora. ‘Towards a Poetics of Artificial Superintelligence’. Medium, 10 October 2016. https://medium.com/after-us/towards-a-poetics-of-artificial-superintelligence-ebff11d2d249.

Kantayya, Shalini. Coded Bias. Documentary. 7th Empire Media, 2020.

Kate Crawford, and Trevor Paglen. ‘Excavating AI’, 2019. https://www.excavating.ai/.

Kherbek, William. ‘The Firewall Next Time: Belief, Power, and AI |’. Flash Art, 19 November 2020. https://flash---art.com/2020/11/episode-ii-the-firewall-next-time-belief-power-and-ai/.

Knight, Will. ‘Many Top AI Researchers Get Financial Backing From Big Tech | WIRED’, 10 April 2020. https://web.archive.org/web/20201004110850/https://www.wired.com/story/top-ai-researchers-financial-backing-big-tech/.

Krieg, Peter. Maschinenträume. Documentary. Barfuß,  Landeszentrale für politische Bildung NRW,  Norddeutscher Rundfunk (NDR), 1988.

Kurenkov, Andrey. ‘A “Brief” History of Neural Nets and Deep Learning’, 2015. http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/.

Lev Manovich. AI Aesthetics. Moscow: Strelka Press, 2019.

Lexico. ‘Artificial Intelligence | Definition of Artificial Intelligence by Oxford Dictionary’. Lexico Dictionaries | English. Accessed 9 December 2020. https://www.lexico.com/definition/artificial_intelligence.

———. ‘Image-Maker | Definition of Image-Maker by Oxford Dictionary’. Lexico Dictionaries | English. Accessed 15 September 2020. https://www.lexico.com/definition/image-maker.

Lovelock, James. Novacene: The next Phase of Gaia. Place of publication not identified: ALLEN LANE, 2019.

ltd, Research and Markets. ‘Artificial Intelligence Market Size, Share & Trends Analysis Report’. Accessed 13 February 2020. https://www.researchandmarkets.com/reports/4375395/artificial-intelligence-market-size-share-and.

Manovich, Lev. ‘The Anti-Sublime Ideal in Data Art’, 2002. http://www.manovich.net/DOCS/Data_art.doc.

Markoff, John. ‘When Is the Singularity? Probably Not in Your Lifetime’. The New York Times, 7 April 2016, sec. Science. https://www.nytimes.com/2016/04/07/science/artificial-intelligence-when-is-the-singularity.html.

Matlack, Samuel. ‘Confronting the Technological Society’. The New Atlantis. Accessed 15 July 2020. https://www.thenewatlantis.com/publications/confronting-the-technological-society.

McKee, Yates. Strike Art: Contemporary Art and the Post-Occupy Condition. London ; New York: Verso, 2016.

McQuillan, Dan. ‘People’s Councils for Ethical Machine Learning’. Social Media + Society 4, no. 2 (2018).

Mehrotra, Dhruv. ‘Horror Stories From Inside Amazon’s Mechanical Turk’. Gizmodo, 28 January 2020. https://gizmodo.com/horror-stories-from-inside-amazons-mechanical-turk-1840878041.

Mhlambi, Sabelo. ‘God in the Image of White Men: Creation Myths, Power Asymmetries and AI’. Sabelo Mhlambi, 29 March 2019. /2019/03/29/God-in-the-image-of-white-men.

Mitchell, Melanie. ‘Artificial Intelligence Hits the Barrier of Meaning’. The New York Times, 5 November 2018, sec. Opinion. https://www.nytimes.com/2018/11/05/opinion/artificial-intelligence-machine-learning.html.

Mitchell, W. J. T. Image Science: Iconology, Visual Culture, and Media Aesthetics. Chicago; London: The University of Chicago Press, 2015.

———. Picture Theory: Essays on Verbal and Visual Representation. Chicago: University of Chicago Press, 1994.

———. What Do Pictures Want? The Lives and Loves of Images. Reprint. Chicago: University of Chicago Press, 2010.

Moore, Elaine. ‘Me, Myself and A.I. — Should Robots Look like Us?’ Financial Times, 13 September 2019. https://www.ft.com/content/044e8fd2-d42c-11e9-8367-807ebd53ab77.

Morley, Simon, ed. The Sublime. Documents of Contemporary Art. London: Whitechapel Gallery; Cambridge, Mass.: MIT Press, 2010.

Mul, Jos de. ‘The Technological Sublime’. Next Nature Network, 17 July 2011. https://nextnature.net/2011/07/the-technological-sublime.

Naudé, Wim. ‘The Race against the Robots and the Fallacy of the Giant Cheesecake: Immediate and Imagined Impacts of Artificial Intelligence’. Artificial Intelligence, 2019, 32.

Naughton, John. ‘Don’t Believe the Hype: The Media Are Unwittingly Selling Us an AI Fantasy’. The Guardian, 13 January 2019, sec. Opinion. https://www.theguardian.com/commentisfree/2019/jan/13/dont-believe-the-hype-media-are-selling-us-an-ai-fantasy.

Next Generation. ‘What We’re Getting Wrong About AI’, 2019. https://www.nextgeneration.ie/blog/2019/07/what-were-getting-wrong-about-ai.

Nielsen, Michael A. ‘Neural Networks and Deep Learning’, 2015. http://neuralnetworksanddeeplearning.com.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.

Noessel, Christopher. ‘What Stories Aren’t We Telling Ourselves about A.I.?’ scifiinterfaces.com, July 2018. https://d2w9rnfcy7mm78.cloudfront.net/7485022/original_d589dd23e7368978cddb6cf5b5dae505.png.

OpenAI. ‘OpenAI Charter’. OpenAI. Accessed 29 August 2020. https://openai.com/charter/.

Paglen, Trevor. ‘Operational Images’. E-Flux Journal 59 (November 2014): 3.

Pasquinelli, Matteo. ‘Abnormal Encephalization in the Age of Machine Learning’. E-Flux Journal 75 (September 2016). https://www.e-flux.com/journal/75/67133/abnormal-encephalization-in-the-age-of-machine-learning/.

Pasquinelli, Matteo, and Vladan Joler. ‘The Nooscope Manifested: Artificial Intelligence as Instrument of Knowledge Extractivism’, 2020.

Phan, Thao. ‘Amazon Echo and the Aesthetics of Whiteness’. Catalyst: Feminism, Theory, Technoscience 5, no. 1 (1 April 2019): 1–38. https://doi.org/10.28968/cftt.v5i1.29586.

Piper, Kelsey. ‘The American Public Is Already Worried about AI Catastrophe’. Vox, 9 January 2019. https://www.vox.com/future-perfect/2019/1/9/18174081/fhi-govai-ai-safety-american-public-worried-ai-catastrophe.

Procaccia, Ariel. ‘Beware of Geeks Bearing AI Gifts’. Bloomberg.com, 10 July 2019. https://www.bloomberg.com/opinion/articles/2019-07-10/ai-hype-fools-a-lot-of-the-people-a-lot-of-the-time.

Robitzski, Dan. ‘To Build Trust in Artificial Intelligence, IBM Wants Developers to Prove Their Algorithms Are Fair’. Futurism, 2018. https://futurism.com/trust-artificial-intelligence-ibm.

Rose, Gillian. Visual Methodologies: An Introduction to Researching with Visual Materials. 4th edition. London: SAGE Publications Ltd, 2016.

Ross, Benjamin. ‘Automakers Making Deals to Speed Incorporation of AI’. AI Trends (blog), 1 July 2020. https://www.aitrends.com/ai-and-business-strategy/automakers-making-deals-to-speed-incorporation-of-ai/.

Schaffer, Simon. ‘Babbage’s Intelligence: Calculating Engines and the Factory System’. Critical Inquiry 21, no. 1 (1994): 203–27.

Schneider, Julia, and Lena Kadriye Ziyal. ‘A Comic Essay on Artificial Intelligence’, 2019.

Shultz, David. ‘Which Movies Get Artificial Intelligence Right?’ Science, 2015. https://www.sciencemag.org/news/2015/07/which-movies-get-artificial-intelligence-right.

SINTEF. ‘Big Data, for Better or Worse: 90% of World’s Data Generated over Last Two Years’. ScienceDaily. Accessed 5 February 2020. https://www.sciencedaily.com/releases/2013/05/130522085217.htm.

Smith, Chris, and Brian McGuire. ‘The History of Artificial Intelligence’. University of Washington, 2006.

Spencer, Michael. ‘Artificial Intelligence Hype Is Real’. Forbes. Accessed 12 February 2020. https://www.forbes.com/sites/cognitiveworld/2019/02/25/artificial-intelligence-hype-is-real/.

Steenson, Molly Wright. ‘A.I. Needs New Clichés’. Medium, 13 June 2018. https://medium.com/s/story/ai-needs-new-clich%C3%A9s-ed0d6adb8cbb.

Stinson, Liz. ‘How Can Designers Responsibly Use Science Fiction as Inspiration?’ Eye on Design (blog), 8 July 2020. https://eyeondesign.aiga.org/how-can-designers-responsibly-use-science-fiction-as-inspiration/.

‘The Anthropocene’. Accessed 12 February 2020. http://www.anthropocene.info/.

Toscano, Alberto, and Jeff Kinkle. Cartographies of the Absolute, 2015. https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=941611.

TranslateMedia. ‘Are You Bold Enough to Let AI Represent Your Brand?’ TranslateMedia, 3 October 2017. https://www.translatemedia.com/us/blog-usa/bold-enough-let-ai-represent-brand/.

Turkle, Sherry. The Second Self: Computers and the Human Spirit. 20th anniversary ed., 1st MIT Press ed. Cambridge, Mass: MIT Press, 2005.

UNICEF. ‘AI for Children’. Accessed 20 September 2020. https://www.unicef.org/globalinsight/featured-projects/ai-children.

Van Toorn, Jan. ‘Design and Reflexivity’, 1994. https://designopendata.wordpress.com/portfolio/design-and-reflexivity-1994-jan-van-toorn/.

Vidal, Denis. ‘Anthropomorphism or Sub-Anthropomorphism? An Anthropological Approach to Gods and Robots’. Journal of the Royal Anthropological Institute 13, no. 4 (December 2007): 917–33. https://doi.org/10.1111/j.1467-9655.2007.00464.x.

Wark, McKenzie. Capital Is Dead. London; New York: Verso, 2019.

Watson, David. ‘The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence’. Minds and Machines 29, no. 3 (September 2019): 417–40. https://doi.org/10.1007/s11023-019-09506-6.

Yuan, Li. ‘How Cheap Labor Drives China’s A.I. Ambitions’. The New York Times, 25 November 2018, sec. Business. https://www.nytimes.com/2018/11/25/business/china-artificial-intelligence-labeling.html.

zgh. ‘Artificial Intelligence vs. Programmed Stupidity’. Zork (the) Hun (blog), 4 September 2018. http://zorkhun.com/wp/2018/09/04/artificial-intelligence-vs-programmed-stupidity/.

Zhang, Sarah. ‘China’s Artificial-Intelligence Boom’. The Atlantic, 16 February 2017. https://www.theatlantic.com/technology/archive/2017/02/china-artificial-intelligence/516615/.

Ziarek, Krzysztof. The Force of Art. Stanford University Press, 2004.

Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. First edition. New York: PublicAffairs, 2019.

Zuidervaart, Lambert. ‘The Social Significance of Autonomous Art: Adorno and Bürger’. The Journal of Aesthetics and Art Criticism 48, no. 1 (1990): 61. https://doi.org/10.2307/431200.

Zunger, Yonatan. ‘Asking the Right Questions About AI’. Medium, 12 October 2017. https://medium.com/@yonatanzunger/asking-the-right-questions-about-ai-7ed2d9820c48.

Appendix: AI-News Audit Questionnaire

To be used when assessing a media text that frames AI in sublime terms; a minimal code sketch of the checklist follows the list below.

  • Is the AI presented as an agentic, subjective entity?
  • Is the piece illustrated with sublime imagery?
  • Is the AI described as having disproportionate power?
  • Is the AI described as acting beyond its actual capabilities?
  • Does the piece explicitly name the human operators behind the AI?
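
Purely as an illustration, and not as part of the thesis apparatus, the questionnaire could be encoded as a small checklist script. The sketch below is hypothetical throughout: the names QUESTIONS, AuditResult, audit and sublime_score are my own, and the scoring is only one possible way of tallying answers.

    # Hypothetical sketch only: the questionnaire rendered as a small Python
    # checklist. Names (QUESTIONS, AuditResult, audit) are illustrative.
    from dataclasses import dataclass, field

    QUESTIONS = [
        "Is the AI presented as an agentic, subjective entity?",
        "Is the piece illustrated with sublime imagery?",
        "Is the AI described as having disproportionate power?",
        "Is the AI described as acting beyond its actual capabilities?",
        "Does the piece explicitly name the human operators behind the AI?",
    ]

    @dataclass
    class AuditResult:
        answers: dict = field(default_factory=dict)  # question text -> True/False

        @property
        def sublime_score(self) -> int:
            # "Yes" to any of the first four questions signals sublime framing;
            # "no" to the last (no visible operators) does the same.
            score = sum(1 for q in QUESTIONS[:4] if self.answers.get(q))
            if not self.answers.get(QUESTIONS[4], False):
                score += 1
            return score

    def audit(answers: dict) -> AuditResult:
        """Collect yes/no answers keyed by question text."""
        return AuditResult(answers=answers)

    # Example: a piece with agentic framing and sublime imagery, no operators named.
    result = audit({QUESTIONS[0]: True, QUESTIONS[1]: True, QUESTIONS[4]: False})
    print(f"Sublime-framing score: {result.sublime_score} / 5")

Under these assumptions, a "yes" to any of the first four questions, or a "no" to the last, each add one point; a high score suggests the piece leans on sublime framing rather than on the concrete operations and operators of the system.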

🦋
Thank you for reading.