War and the coming of Artificial Intelligence
Christopher Coker

Artificial Intelligence (AI) is already transforming our lives, for good or ill. And in the case of war, killing machines are not just the weapons of the future; they are already here. Israel uses the Harpy, a drone that loiters in the sky until a target appears, then seeks out and destroys radar systems on its own, without human permission. The Sea Hunter, a US anti-submarine vessel, can cruise the oceans for months at a time scouting for enemies with no one aboard, a veritable lethal Mary Celeste. While no fully autonomous weapon has yet been developed, some 400 partly autonomous weapons and robotic systems are currently under development around the world. And AlphaGo Zero, the machine that plays Go, the ancient board game, has clearly demonstrated that an algorithm can learn, develop creative strategies, and plan and solve problems through experience.

Why is this important? Agency in war, as in life, has always depended, at least hitherto, on a range of human qualities: willpower, courage, fear, tactical experience and creativity, what T.E. Lawrence described as the 'irrational tenth', which could only be ensured by 'instinct, sharpened by thought'. Machines will soon supply the irrational tenth, unencumbered by fear, fatigue, or even moral qualms. The question that then arises is this: will they ever replace us?

Whether they will ever develop consciousness or have their own thoughts is a question we can leave to one side. If it happens, it is probably a long way off, though Google's Ray Kurzweil has brought the date of the so-called 'Singularity' forward from 2042 to 2039. But even as machines become more autonomous, the most complex and sophisticated among them will remain actors, not agents, for some time yet.

The first and most obvious explanation is that their intelligence is directionless. Ours is motivated. Thanks to natural selection, we are governed by a mix of emotions, instincts and drives that are programmed into us so that we can reproduce. We have to balance our goals with our need to survive – choosing actions that are life-sustaining. Machines are motivated by no innate desire to sustain their own existence.

Because we do not ascribe intentions to machines, we do not believe – at least for now – that we need to engage in an intentional relationship with them. 'Orders of intentionality' is a term the philosopher Daniel Dennett introduced to help us think about how social intelligence works. If I believe you to know something, then I can cope with one order of intentionality. If I believe that you believe that I know something, then I can cope with two orders of intentionality. If I believe that you believe that my wife believes that I know something, then I can cope with three orders of intentionality. We humans regularly manage at least three orders in everyday life. In other words, an entity that has no motivation is not one that can be networked into social life.

Secondly, machines are not only motivation-less; they are also non-aspirational, or, we might say, non-teleological. Teleological questions include the basics: what am I doing on this battlefield? Why am I taking risks? What is this entire conflict about? Am I willing to die, and if so, for what: a religion, a country, a family? All of these are aspirational and involve a teleological language, one that produces a sense of purpose, or end. A peculiarly human trait is the willingness to serve others, and the greatest human desire is not so much to be used as to be useful. Young jihadists are often only too willing to surrender their individuality in the hope of being useful to others. Even the most intelligent machines will have no such thoughts.

Thirdly, we understand that our own agency is determined not by logic but by something dear to economists: rationality. AI is logical, and we build it for a reason: machines can take decisions and make choices in a way that we cannot. The idea of logic as a driver of our actions has its supporters as well as its detractors. Among the first, Ray Kurzweil talks of robots giving us a significant 'human upgrade' and regards the introduction of autonomous robots as a moral demand of the times. The detractors, for their part, insist that it will reduce what we call 'Meaningful Human Control' (MHC). But human control is itself a contested concept, and its loss has been a permanent feature of war. We find it in revenge attacks; in the routine dehumanisation of an enemy; in the deployment of inexperienced, or merely poorly trained, troops; in the issuing of unclear orders by commanders; and even, regrettably, in the pleasure of killing. This is why Ronald Arkin, who has been trying to design a conscience that can be programmed into the next generation of machines, thinks that 'simply being human is the weakest point of the kill chain' – our biology works against us when it comes to complying with the tenets of humanitarian law.

Autonomous weapon systems, it is true, are likely to out-perform us in situations where bounded (i.e. situation-specific) morality applies. For it is often the situation in which humans find themselves that encourages immoral actions. Machines, by comparison, are not situationists, largely because they do not have to wrestle with the fight-or-flight dynamic that is hard-wired into the rest of us. Nor would they suffer from human prejudices, or be prone to a familiar problem in social psychology, that of 'scenario fulfilment' – the way in which our minds tend to reinforce pre-existing belief patterns.

And of course autonomous machines would raise another question about human agency: we are responsible for the decisions we take. A human being has moral standing for that reason; a robot would not. Put another way, a robot's responsibility would be logical, not rational: it would take the same decision over and over again. If that ever becomes the case, we would see a change of the first order in our understanding of ethics. For us, living ethically has never been about optimising the good; it has been about the precept of right conduct, for example towards prisoners of war. It has involved cultivating virtues and refusing to perform actions that we cannot reconcile with our conscience or sense of self. Living ethically, for that reason, is rational, not logical: it involves balancing the claims of a variety of goods (winning against acting correctly) and calculating how different values should be applied in circumstances where there may often be no single idea of right or wrong. And that is probably what is most dangerous about killer robots, although it is not an argument that the campaign against them often makes. Logic can be dangerous because it is devoid of common sense.

In the end, it is probably best neither to exaggerate the extent of human control in war, nor the extent to which we can replace it with mechanical means, but to recognise that we will probably continue to need each other. The dangers posed by machines will correlate exactly with how much leeway we give them in fulfilling our goals. And that matters if you subscribe to Aristotle's claim that the only purpose of war is peace. We may invent machines to make war on our behalf, but only we are in a position to make peace with each other – or at least until the day the machines wake up.