Nicolae Andronic is a second-year IR student at King’s, from Romania. His interests in the IR field range from broad questions, like changes in the structure of the international system, to more specific ones, like the dynamics of particular regions.
AI is widely seen as the next frontier of military competition and, while its implications are mostly “known unknowns” and “unknown unknowns”, there seems to be little time to reflect on them. China is catching up quickly and, in some areas of AI research and development, is already overtaking the U.S. by serious margins. This is worrying at a time when some voices claim that AI will impact not just military tactics, but also strategic analysis and decision-making, eventually replacing humans in a number of key tasks. War appears bound to enter a science-fiction age. However, few seem to look at a more elusive, yet greater danger of this technological revolution: the refusal to acknowledge that war is about people and is waged, ultimately, by people.
The U.S. has a long record of doing this. Guided missiles appeared during the 1950s, and U.S. Air Force experts were quick to proclaim that the cannon “dogfights” of the Second World War were obsolete and that the future of air-to-air combat lay in guided missiles. Pilots were no longer even trained for such supposedly anachronistic battles. However, when F-4 Phantoms, equipped with the latest missiles but no cannons, had their baptism of fire in Vietnam, the experts got a nasty surprise. It turned out that the missiles were far from perfect and often failed to lock on. Even worse, when the MiGs got close to the F-4s, the latter were defenseless and fell prey to the very tactics that pilots were no longer taught and to the very cannons that the experts had proclaimed “obsolete”.
Another example is Afghanistan. Despite their overwhelming technological advantage, U.S. forces have been unable to drive out the Taliban. Erik Prince best described the situation in an interview: “We spent close to a trillion dollars largely using equipment that was designed to fight against the Soviet Union but it’s been misused fighting against pick-up trucks and people living in local villages”. While many of Erik Prince’s ideas are, to put it politely, debatable, he is right on this one. High-tech equipment is consistently outmatched by simple tricks and determined people. What is more, asymmetric warfare is no longer just the realm of the weak: “The strategies of Russia and China, not to mention Iran or ISIL, may have more in common with insurgent forces than the kind of conventional air or naval engagements the U.S. military expects in a great power showdown”.
At a higher level, the entire U.S. strategy seems to be based on the avoidance of direct contact and on the use of technology as a replacement for troops. LTG Michel Yakovleff highlighted that “the Western way of war is… strategically bankrupt and it is morally bankrupt” because of its excessive use of ordnance and sparing use of troops: Western powers can pulverize a place and inflict massive casualties, but they fail to control it and to win over its people precisely because of this disproportionate and inefficient show of force. They also staunchly refuse to accept that no amount of “big shiny objects” can control a piece of land the way infantry forces can.
This is not to say that technology does not matter or is not worth investing in. History has shown very clearly the price of opposing or ignoring technological progress. The Samurai opposed the introduction of firearms; firearms are still with us today, while the Samurai are not. AI will get better than people at a number of tasks, especially in high-tech domains like air or sea combat. In a world of information, AI can process more data than the human brain and can therefore save lives by painting a more complete picture of the battlefield without suffering from oversaturation or bias. Robots can play an important part in helping infantry perform various tasks, too. There is no reason not to let them help human soldiers wherever, whenever and however they can.
The problem is therefore not a technological one, but fundamentally one of outlook: the U.S. has consistently embarked on a journey to eliminate, or distance itself from, the human aspect of war. The main danger of the coming revolution is not AI itself, but the way in which American (and other Western) decision-makers will be tempted to view the world: from a detached perspective, in which technology wins their wars for them and there is no need to commit troops and suffer the moral cost of the body bags arriving from abroad. While the wish to put troops out of harm’s way is absolutely noble and rational, it is counterproductive if it denies the very nature of war, which cannot be changed. The desire to solve the conflict in Afghanistan from 15,000 feet above the ground ended up prolonging the chaos that claimed the lives of so many American and allied soldiers. More people died because the U.S. did not accept that, if it was going to war, there were going to be troops on the ground and, yes, casualties.

The only way to avoid direct contact and casualties is not to go to war. Once combat starts, there is direct contact and there are casualties. And AI will not change that. So, in the not-so-distant future, when AI might prescribe solutions different from those of experienced soldiers, it might be worth acknowledging that AI, like any technology, cannot see all aspects of war. Because war is a human enterprise and a human action, it requires a human to comprehend its non-quantifiable aspects. AI can count the dead, but it cannot assess their impact upon the living.