Strategic wargames can have both indirect and direct influences on real-world decisions, but when AI algorithms become the designers of the outcomes behind war strategy, we face two challenges: lack of explainability and bias. Uniformity serves as the fundamental prerequisite for the assertion of power. Thus, I see AI systems as a form of power that can solidify its foundation on absolute conditions, emphasising uniformity of thought, pathways, speech, and behavioural patterns. This can translate into the jeopardising of freedom of expression, and into an ultimate tool for warfare.
This thesis introduces the research question: how do AI and video games influence warfare? In chapter 1, I examine how machine learning processes work and draw associations between ancient Greek mythology and the current AI landscape. In chapter 2, I present the Theory of Games and Economic Behaviour and draw connections with the representation of war in video games, film, and war subculture. Finally, in chapter 3, I find it essential to discuss and challenge the use of these systems and their impact on society, because behind any strategy there is a goal, which can develop a narrative of its own. Thus, I give examples of AI implementation in real warfare, in order to understand how these AI systems work and how they can be used for the creation of new knowledge and the implementation of new ways of being and perceiving the world.
“The oracle at Delphi, defined as both the response given to a question and the person (priestess) who gives the response, was believed to be the vessel through which the god Apollo spoke. By acting as the voice of the Gods, the oracle had a profound influence on the culture and politics of Greece, particularly during the height of the oracle’s power between the 6th and 4th centuries BCE. This period saw major developments in all areas of Greek life as well as an increase in conflict and major changes in political systems.” 1
Imagine the oracle at Delphi as a lineage of priestly women, a class of philosophers and scholars, who looked at the world in a global manner: surveying territories, how areas, nature, and the weather moved, how these affected the economies of the region, and which economy subjugated the other to achieve further development.
Acquiring prophecies directly from Apollo granted vision to surveil the enemy without being on the ground, to kill without breaking invisibility. Major upheavals and conflicts created uncertainty about the future and an increase in the ancient Greeks seeking advice and reassurance from the Gods, the highest authorities in ancient Greece.
As such, the oracles, who acted as intermediaries between the mortals and the Gods, became increasingly important and influential during this period. The oracle had an influence over decisions to go to war and over the political consequences of warfare.
2
Today satellites are the watchers of the skies: a mirror that projects and translates back every development on Earth. AI is at the heart of the satellite, the point from which images could speak. But a translation is only an echo of the original, even if it always spoke the truth. Was AI an imprisoned spirit? Whom did it serve? Or did it hold ownership of itself?
If you look at a map, you can examine who made it, for whom and for what purpose, what it shows and what it lacks. Creating a policy based on these facts requires human interpretation, just as mortals required the oracles to translate the messages from Apollo.
When looking at a map from the sky, ‘winning’ translates as ‘keeping the target in constant sight’, which in turn means a new balance of forces that relies on instant power sensors, interceptors, and remote electronic detectors.
The precise vision provided by intelligent satellites automates the perception of the enemy’s territory, by keeping the enemy under remote surveillance in real time.
Digital intelligences are often portrayed as superior to human brains for being faster at processing information and having immense storage capacity.
But imagining the brain as an Amazon storage facility undersells human capabilities, all the more so when the human brain is still so poorly understood.
So far there is no evidence that shows us how the brain completely functions, and “different research communities often see only what they expect to see in the brain.
A computer scientist’s perspective may be biased towards neural networks with established utility, a physicist may seek the energy landscapes that have proven invaluable for other questions, and a neuroscientist may aim to describe the incredibly complex biology of neural circuits.
The pursuit of a common theoretical framework that bridges top-down and bottom-up computational perspectives while allowing the realities of biology to be incorporated will be critical in furthering our understanding of the brain.”
3
But can AI systems be actual models of human intuition? Geoffrey Hinton, aka ‘the godfather of AI’, gives an interesting example of what he calls ‘artificial intuition’. Suppose, he says, you are told to make a choice: either all cats are male and all dogs are female, or all cats are female and all dogs are male.
Even though this is biological nonsense, he says, it is much more natural to make all cats female and all dogs male, because this is not a question of logic but of the big patterns of neural activity in your brain that create your perception and form your intuition. He explains that inside your head you have a big pattern of neural activity that represents cat, a big pattern that represents man, and a big pattern that represents woman.
The pattern for cat is more like the pattern for woman than it is like the pattern for man. That is the result of a lot of learning about men, women, cats, and dogs, which has made it intuitively obvious to you that cats are more like women and dogs are more like men. Because of these learned patterns of neural activity, you do not require sequential reasoning to solve the problem.
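Hinton’s point can be loosely illustrated with vector embeddings, where a concept is a pattern of numbers and ‘likeness’ is the overlap between patterns. The following is a minimal sketch; the three-dimensional ‘concept vectors’ are invented for illustration only, whereas real learned embeddings have hundreds of dimensions and are learned from data, not set by hand.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-d "concept vectors" (entirely hypothetical dimensions).
cat   = np.array([0.2, 0.8, 0.9])
woman = np.array([0.4, 0.7, 0.8])
man   = np.array([0.9, 0.3, 0.4])

print(cosine(cat, woman))  # higher: the patterns overlap more
print(cosine(cat, man))    # lower: the patterns overlap less
```

No rule or chain of logic is consulted; the ‘intuition’ is simply that one stored pattern lies closer to another.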
But how these AI systems create big patterns of artificial neural activity, and what the reasoning behind certain outcomes is, remains unclear. In other words, how such a system produces its information is a black box.
Decision making in modern Western societies is understood as rational—self-interested, purposeful, and efficient.4 In human brains, however, some connections between neurons are stronger than others, shaped by biases, reason, emotions, and memories that have a direct impact on our actions. Neural networks are the most popular machine learning scheme in AI because they emulate the human brain. According to IBM, “Artificial neural networks (ANNs) are comprised of node layers, containing an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to another and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network. Neural networks rely on training data to learn and improve their accuracy over time. However, once these learning algorithms are fine-tuned for accuracy, they are powerful tools in computer science and artificial intelligence, allowing us to classify and cluster data at a high velocity.”5
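The mechanics the IBM passage describes fit in a few lines of code. This is a minimal sketch, not IBM’s implementation; the weights and inputs are invented purely to make the example concrete. A node computes a weighted sum plus a bias and passes data onward only if it clears its threshold.

```python
import numpy as np

def node(x, weights, bias, threshold=0.0):
    """One artificial neuron: weighted sum plus bias, gated by a threshold."""
    activation = np.dot(weights, x) + bias
    # The node 'fires' only above the threshold; otherwise nothing
    # is passed to the next layer.
    return activation if activation > threshold else 0.0

# Hypothetical values, chosen only for illustration.
x = np.array([0.5, -1.2, 0.8])     # outputs arriving from the previous layer
w = np.array([0.9, 0.3, -0.5])     # learned connection weights
print(node(x, w, bias=0.1))        # value sent onward, or 0.0 if silent
```

Training consists of nudging such weights, across thousands or millions of nodes, until the network’s outputs match its examples; no single weight is meaningful on its own, which is one root of the black-box problem discussed below.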
AI is therefore incapable of creating anything original; the outputs it produces are the product of an endless algorithmic reproduction― a copy of a copy, an incomplete projection.
Interpreting the meaning of hidden layers in an artificial neural network is an ongoing, active area of research. Methods such as activation maximization, gradient-based methods, and feature visualization try to uncover the patterns and representations that artificial neural networks use to process information. Yet there is no definitive consensus on how to interpret unit values.
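Activation maximization, for instance, asks what input a unit responds to most strongly by running gradient ascent on the input itself. The sketch below assumes a single made-up ‘hidden unit’ rather than a real trained network:

```python
import torch

torch.manual_seed(0)
w = torch.randn(16)                  # fixed, random stand-in for learned weights

def unit(x):
    """A toy hidden unit: linear map followed by tanh."""
    return torch.tanh(torch.dot(w, x))

# Gradient-ascend the INPUT so the unit's activation grows: the
# resulting x is a picture of the pattern the unit 'looks for'.
x = torch.zeros(16, requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.1)
for _ in range(100):
    optimizer.zero_grad()
    (-unit(x)).backward()            # maximizing activation = minimizing its negative
    optimizer.step()

print(unit(x).item())                # approaches 1.0; x now aligns with w
```

On a real network the recovered input is typically ambiguous, which is part of why no consensus on interpretation exists.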
In a world in which we are subjected to algorithmic governability and the tyranny of patterns, Hito Steyerl defines one of the risks of these systems as ‘apophenia’: the paranoid and quasi-mystic tendency to seek out trends and anomalies and to extract meaning from them; the tendency to see resonances and signifiers in what is false, purely random statistical noise.
6
Thus, how can we escape this feedback circuit, when the relation between organic data and computational models provides a path to influence the world?
In March 2023, Pause Giant AI Experiments: An Open Letter was published online.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.”
Geoffrey Hinton did not sign this letter, yet when asked about it, he underscored AI’s areas of benefit as the reason why development will not stop. He then identified the problem: 99% of the money is going into developing these systems, while not enough (1%) goes into research on control. He argued that the future is uncertain and that "Humanity is just a passing phase in the evolution of intelligence."
The machine can produce convincing answers to badly formulated questions. But the answer might not lie in the archives of data, but in the rural fields where people have harvested, where old and new civilizations collapsed, and where humans and other living beings alike now spring up again. There is no more time to stubbornly reimagine the future through prophecies; it is time to look at the ground below our feet, in the gravity of all our weight.
Warfare and games are located at the intersection of economic transactions and military action. In 1944, John von Neumann and Oskar Morgenstern published a mathematical theory of economic and social organization based on games of strategy, the Theory of Games and Economic Behaviour7, which seeks to formalize human behaviour economically and militarily. This method has been put to use to model, among other things, institutions, networks, and social dynamics, by calculating all the possible outcomes and probabilities in order to maximise utility and profit through changes of policy.
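The book’s central case is the two-person zero-sum game, in which one player’s gain is the other’s loss. Below is a minimal sketch of its maximin logic, over a payoff matrix invented purely for illustration:

```python
import numpy as np

# Hypothetical zero-sum game: rows are player A's strategies, columns
# are player B's; each entry is A's payoff, which is also B's loss.
payoffs = np.array([[3, -1],
                    [0,  2]])

# Maximin: A assumes B will answer each row with the outcome worst
# for A, then picks the row whose worst case is best.
row_worst = payoffs.min(axis=1)             # worst outcome of each A strategy
best_row = int(row_worst.argmax())          # A's security strategy
print(best_row, int(row_worst[best_row]))   # strategy 1 guarantees at least 0
```

Scaled up from this toy matrix to institutions, networks, and battle plans, the same logic of enumerating outcomes and securing the best worst case is what made the theory attractive to economists and the military alike.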
With the help of AI, video games can serve not only to coerce human behaviour, but also to train AI strategically. Humans guide, assist, and work for the machine by correcting its errors. Humans help the machine recognize our world by means of identification and acknowledgement of the existence, validity, or legality of ideas. Prior to AI, we can find an example of warfare video game implementation in the video installations Serious Games I–IV by Harun Farocki, where he explores how the U.S. military employs video game technology to train troops for war and to treat an after-effect of war, post-traumatic stress disorder (PTSD).
This video series depicts how the military explores drone technology and computer-aided training programs. The footage shows soldiers participating in simulated war operations, which provide an alternative topology of media: a mock city erected for military exercises, where Afghans and Iraqis are portrayed as extra avatars. What I find compelling in these videos is that the soldiers are not being trained to withstand the emotionally devastating experiences of war, but rather to tolerate the monotony of war.
“AI could rapidly uncover insights from vast data. Players could experience more immersive games with AI-generated scenarios and adversarial strategies. The expected result? A transformative leap in foresight and strategic advantage over competitors. However, both wargames and AI models share two challenges—lack of explainability (difficulties in comprehending how knowledge is produced) and bias, which raise ethical concerns. Wargames are ‘not reproducible,’ according to NATO and UK’s Ministry of Defence wargaming guidance. When combined with black-box deep learning models—systems where the decision-making process is opaque and not readily interpretable—trust in outcomes diminishes further. Biases in both can arise from limited data or flawed design, potentially leading to erroneous conclusions. Additionally, wargame methods and insights are often classified. Turbo charging them with AI can propagate errors with significant real-world consequences free from public scrutiny. Wargames can carry risks that, without ethical guardrails, could damage players and society.”
The Lydian king Croesus was advised by the Delphic oracle that if he fought against Persia, he "would destroy a great empire". The prophecy turned out to be meant for Lydia, not for Persia as he had interpreted it. Could AI systems misrepresent real scenarios? For instance, by exaggerating chances or misrepresenting adversaries?
An example of intentionally misrepresenting reality can be found in the trailer for Call of Duty: Black Ops at the end of 2010, which used the motto “There’s a Soldier in All of Us” with a tag that said: “Bankers, construction workers, office drones, cooks... Who doesn't like shooting people? Even Kobe Bryant and Jimmy Kimmel want in on the fun.”, portraying the landscape as an urban warfare spectacle. Copywriter Chris DeNinno explains the campaign case study for Call of Duty: Black Ops, which provides interesting conclusions on how to make violence appealing.
"We all know who the biggest players are in the world of entertainment. But a Video game? For the launch of Call of Duty Black Ops, we were asked to go toe to toe with the biggest names in entertainment. The goal cannot just be the bestselling video game ever, but the biggest entertainment launch in history. Bigger than even. Yes, Avatar. The challenge would be to overcome the perception that Call of Duty is just a shooter for basement dwelling teenagers. To achieve our goal, we have to go beyond the core. So we took a hard look at who really played Call of Duty and discovered it wasn't who you'd expect. It's your neighbour, your doctor, your boss, the mailman who just passed you on the street. Call of Duty should become an entertainment icon for the masses and the ultimate entertainment event. All we needed to do was let the world know our brand believe, "there's a soldier in all of us." We blasted through convention with a campaign that involved real players from all walks of life, delivering the game's epic thrill ride to the broadest range of people possible. We first established the iconic game mark, then put actual gamers right into it. In film, we took the idea to a whole new level. And online we coaxed everyone's inner soldier to come out. The campaign changed the perception of who really plays Call of Duty and invited everyone to be part of the fun. It's an absolute fantasy to be able to live out the game. Blogs, social media and the mainstream exploded with unprecedented amounts of buzz. The film was viewed over 5 million times on YouTube and named one of the year's ten most innovative viral videos. And when compared to Call Duty's previous blockbuster title Modern Warfare 2, our campaign generated five times the global online buzz at launch. Facebook fans and Twitter followers jumped 248%. And unique traffic in callofduty.com increased 175%. In just 24 hours our efforts paid off with the groundbreaking 360 million in sales. For perspective, Modern Warfare 2 earned 310 million in first day sales, while Avatar is the biggest film of all time, earned 242 million in its opening weekend. Eight weeks later, the game surpassed 1 billion in sales worldwide, making Call of Duty Black Ops, the biggest entertainment launch of anything ever." 8
“War is not merely a political act but a real political instrument, a continuation of political intercourse, a carrying out of the same by other means.” 9 —Clausewitz
If society favours and encourages sadistic, competitive behaviour, or struggle and death among distrustful people, the policies created by this society will reflect back onto it. Similarly, the story of video games and new technologies will take on the same characteristics. However, it is human consciousness which may decide to adopt or to reject whatever practices its society has set on the table.
This type of behaviour is well depicted in the work of photographer Simon Lehner, Men Don’t Play / Men Do, which explores the subculture of war simulation in central Europe. He documents groups of people who try to simulate war zones as authentically as possible, producing a surreal scene of war fetishisation that distils the hyper-masculine aspects of war, separating it from its grim and deadly socio-political realities. He describes what these simulations are like: “weekends with 50 hours of no sleep and straight warfare, with dummy weapons and plastic bullets, up to 1,500 participants, tanks, helicopters, explosive suicide belts connected to cell phones, real tactics and artificial deaths,” held in the forests of places like Hungary and the Czech Republic, but made to simulate situations in Iraq and Afghanistan.10
In Simon Lehner’s work, warfare players seem to be in a military summer-camp superproduction, where the glorification of the jarhead and the dehumanization of war serve as tools for entertainment.
Kubrick’s film Full Metal Jacket depicts a very different picture of the price of becoming a soldier, a tool of war. The first half of the film shows the indoctrination into army service: a process meant to break men into killers, in which soldiers are stripped of their physical individuality, identity, and free will, to erase any sign of self-purpose. The film then moves to the U.S. invasion of Vietnam, portraying the juxtaposition of seeking peace by means of death and violence, and questioning the purpose of war and the commitment to the task.
With this example I aim to show that even soldiers, after being indoctrinated, can change their minds if their integrity has not completely vanished. But that is not the case with AI-controlled gadgets, because AI has no consciousness of its actions and will therefore be an excellent executor of commands. And that is not all it offers: it is software that can act as an entire army.
War is not bound to certainty, as J. Peter Scoblic writes in Learning from the Future: Essays on Uncertainty, Foresight, and the Long Term: “War games had been around for centuries and could take various forms—from the highly stylized to the hyper-realistic. But in the mid-1950s, the social scientists at RAND—partly in opposition to the mathematized simulations of the physical scientists and the economists—developed a free-form game that emphasized the importance of historical context and human judgment (Bessner, 2015, p. 31-53).11
(...) To Kahn (1962) war games “encouraged the development of several degrees of understanding” (p. 157)12.
First, they could move a potential future out of the realm of ignorance if a player said, “It never occurred to me that the response to X could or would be Y.” Second, they could improve players’ intuitive assessments of a situation—that is, their judgment. “Finally, and most significantly, one may learn something about a whole class of situations by amassing enough experiences with specific examples” (Kahn, 1962, p. 157). In other words, games could create classes of instances where none had existed. They could create analogy. Obviously, a game situation was not perfectly analogous to the real world, but Kahn maintained there was “nothing sacred” (Kahn & Mann, 1957c, p. 11)13
about total fealty to reality. (...) Nothing else is essential” (Kahn & Mann, 1957c, p. 11). Kahn agreed with Goldhamer and Speier that games were not predictive—“The reason for this is, first of all the obvious one: the future is uncertain” (Kahn & Mann, 1957c, p. 12). But, “insofar as some parts of the future are more or less determined or even over-determined by existing constraints, a war game might be successful in exploring these constraints and, therefore, useful in predictions” (Kahn & Mann, 1957c, p. 12). In other words, war games could help bound uncertainty.”12
(Entlastungen) Pipilottis Fehler is a work by Pipilotti Rist from a time when she was focusing on mistakes produced by machines.
“When machines that handle information have too much or too little to process, they look very similar to how our body reacts to over- or under-estimation. If too much is wanted of us, we react with psychosomatic illnesses.”15
She is convinced that machines carry these types of illnesses in themselves, because the machines were made by humans. Our character is in their senses, she says.
“When the machines produce these pictures, they come close to our inner pictures. Inside our civilization, inside ‘the system’, we try hard to always look without mistakes, to show us from the best side. But if you see the whole system… What we have done to the earth, maybe the whole system is a mistake.”
The figure falling up in her work is an homage to the strength of the human: we can be mobbed, we can be ignored, we can be hurt, but we stand up again and again. She says she is appreciating the force of her fellow humans in that scene, but in total it also refers to the idea of ‘coming to work’, which is about trial and error in many fields, not only in art. She encourages us to make mistakes and not to be ashamed of making them, because, she emphasised, “most things don't happen because, but instead.”
The lack of human consciousness makes AI only partially objective, because it can assess parameters but not human context, such as war campaigns and their impact on society. The only reason for war should be to protect yourself and your property. But AI has no feelings of self and belonging, let alone remorse, and if it does, it is because it has been told to. Therefore it cannot fully adhere to the objective rules of war: lacking consciousness, it needs to be ‘told’ how to proceed, and it is barred from understanding the context of war. AI can adjust strategic parameters and simulate military actions, but it cannot be condemned for the losses of war, nor can it be expected to respect lives. AI is better off without consciousness: a machine that makes no mistakes but has access to everything. This is the perfect soldier.
Companies like Shield AI regard themselves as “Building the world's best AI pilot”, and declare their project Hivemind, their AI pilot, to have the mission of protecting service members and civilians with intelligent systems, quoting Sun Tzu: “The greatest victory requires no war”.16
However, predictive policing is used to manufacture supposedly objective judgements, and to divert accountability from civil authority onto algorithms. An overview of predictive policing by the USA National Institute of Justice states that “Predictive policing leverages computer models—such as those used in the business industry to anticipate how market conditions or industry trends will evolve over time—for law enforcement purposes, namely anticipating likely crime events and informing actions to prevent crime. Predictions can focus on variables such as places, people, groups or incidents. Demographic trends, populations of paroled persons and economic conditions may all affect crime rates in particular areas. Using models supported by prior crime and environmental data to inform different kinds of interventions can help police reduce the number of crime incidents.”17
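The place-based variant of what the NIJ passage describes can be reduced to a toy model: rank map cells by their historical incident counts and patrol the top of the list. The sketch below uses entirely synthetic data and is not any agency’s actual system; its very simplicity illustrates the critique at hand, since the ‘prediction’ is little more than recorded history fed back as policy.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a city: a 10x10 grid of map cells with
# randomly generated historical incident counts.
past_incidents = rng.poisson(lam=2.0, size=(10, 10))

# Naive place-based predictor: tomorrow's risk is proportional to
# yesterday's counts, smoothed so unseen cells are not ranked at zero.
risk = (past_incidents + 1) / (past_incidents.sum() + past_incidents.size)

# The three 'hotspots' a patrol plan built on this model would prioritise.
top3 = [np.unravel_index(i, risk.shape)
        for i in np.argsort(risk, axis=None)[::-1][:3]]
print(top3)

# Note the feedback loop: cells that are patrolled more produce more
# recorded incidents, which raises their rank for the next cycle.
```

Any bias in how incidents were recorded in the first place is thus laundered into an apparently neutral ranking.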
In data lies the business: without people's data it would have been impossible to create AI. The endless use of machine learning in every industry will make it impossible for any legislation to keep people's data away from risk, and the legislation that may appear in the future may protect only limited access to new data. BigTech companies have acquired data from individuals through very questionable channels; without it, they would not have been able to develop AI. This explains the urgency with which people are called to interact with AI. For instance, it has been encouraged to use AI models to “un-bias” the patterns in AI training data. But this is just a hoax, since you cannot get unbiased data from biased people. In fact, there should not be one general way of thinking, but space for different opinions. AI is not intelligent but a reflection of the people who make it possible. The parameters are set by humans.
The ultimate goal at the moment is for machine learning companies to acquire more data to make bigger models, as The Guardian reported on Mark Zuckerberg: “The Meta chief executive has said the company will attempt to build an artificial general intelligence (AGI) system and make it open source, meaning it will be accessible to developers outside the company. The system should be made ‘as widely available as we responsibly can’, he added”.18
But the World Wide Web provides too much information to be fed into a database. Who monitors what is true? If the machine relies on human consciousness to make the right decisions, the outcomes are left up to a few individuals. AI becomes a vessel for human action.
The actions of AI implemented in real warfare can be better understood in the ongoing conflict between Israel and Palestine:
“The Gospel”, an artificial intelligence system, is described as a mass assassination factory in an article by +972 Magazine: “The IDF said that “through the rapid and automatic extraction of intelligence”, the Gospel produced targeting recommendations for its researchers “with the goal of a complete match between the recommendation of the machine and the identification carried out by a person”. Systems such as the Gospel, they said, had played a critical role in building lists of individuals authorised to be assassinated. Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers. In an interview published before the war, he said it was “a machine that produces vast amounts of data more effectively than any human and translates it into targets for attack”. According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.” Precisely what forms of data are ingested into the Gospel is not known. But experts said AI-based decision support systems for targeting would typically analyse large sets of information from a range of sources, such as drone footage, intercepted communications, surveillance data and information drawn from monitoring the movements and behaviour patterns of individuals and large groups.”19
The air flowed quickly in the cool morning, forming between the players' legs and bumping into the next obstacle. The flesh, the sweat of youth in the air, unintelligible shouts disappearing into the landscape, blended with the echo of cannons spitting fire. The rocks disappeared in the air. The players plunged into the ocean. The death of the human relationship in the rules of the game dissolves on the battlefield, where brothers mourn death and shake the brotherhood of war, after war. But the emergence of war technologies disrupts the laws of the game. Man's responsibility is stretched by existential emptiness. There is no more excitement on the horizon. Young people don't excite each other; they don't fight hand-to-hand anymore. Time disappears behind the crescent sun, to reappear in the legend of time.
The sky projects ricochet spheres from unfair projectiles, illuminating those asleep. In their dreams, language is a purposeful manifestation of life, where words suggest rather than describe, becoming a translation, an echo of the original.
In the panorama, projectiles in the shape of a diamond reflect everything, but are empty by their very nature. Their only commitment is to predict policy.
On the ground, alienation becomes the killer of resilience, and a tool for the endorsement of war.
AI has a knack for mobilizing technology to make society's miseries bearable. With the latest example, we can conclude that AI influences warfare and that it can be used as a tool for terror, where war is stripped of any humanity. Games, consequently, are a vehicle through which AI can be trained, and they have become a key aspect of warfare development through the assembling of information. AI systems carry the imprint of the people who make use of them. For whatever outcome an AI system may provide, it first needs a prompt, an input, a clear goal to be submitted, before it can provide any answer. AI systems require your interaction before they produce anything. Current artificial intelligence is just a system based on pattern associations. If media can convince you through video games that violence is fun, you may start softening your convictions about warfare. AI does not have consciousness or a mind of its own; all we can expect is the creation of an ever bigger database of human information and pattern associations. We do not know at all whether AI is actually taking over everything, because transparency is not legally required. But faster than you think, you will see AI appear as a new feature in every application you use. Before you realise it, it may become your ultimate commodity.