Artificial (Military) Intelligence in the Defence Industry

Over the last twenty years we have seen the emergence of a new era, one where the future of human value seems uncertain in the face of an artificial intelligence industry that is swallowing up entire sectors of society. This takeover started with low-skill, repetitive, entry-level jobs, where AI's efficiency made it the clear frontrunner. Now it has graduated to a level some deem concerning and others beneficial: military implementation.

Breaking Barriers

The concept of using artificial intelligence in warfare has been around since the creation of autonomous machines, and its integration into the military could bring significant benefits in the years to come. The United States exemplifies how AI can optimize military efforts through its current initiatives and future implementation plans. The country is deploying fleets of small, remotely piloted surveillance drones for missions in Ukraine to support the fight against Russia. These drones can track soldier well-being, predict Air Force maintenance requirements, and monitor opposing aerial forces. The United States also plans to expand its military use of AI by building more AI-enabled autonomous vehicles by 2026 and increasing the number of air and sea vehicles controlled by AI. At present, the military is focusing on "human-machine teaming" through unmanned vehicles controlled remotely by pilots.

It is projected that in the coming decades countries will continue to pour money into research aimed at fully automating their militaries. This shift from human to machine combat comes with several theorized improvements. AI implementation would not only reduce casualties by taking over soldiers' jobs in dangerous environments, but also cut costs. AI targeting accuracy surpasses that of human operators, so less ammunition is wasted on the battlefield. Greater AI efficiency would also reduce the number of personnel in the field, the consumption of resources, and the amount of equipment lost or damaged, allowing the military to direct its spending elsewhere.

Many other global powers are already using artificial intelligence in their military efforts, which has led to significant upgrades in weaponry and increased operational efficiency. Lethal autonomous weapons (LAWs) have been at the forefront of these changes, as AI-driven automation in factories has made such machines increasingly easy to manufacture. The first documented use of a LAW came during the Libyan civil war, when, according to a 2021 UN report, an autonomous drone hunted down and attacked human targets. The war in Ukraine has since made drones in military combat commonplace. Thousands of unmanned aerial vehicles (UAVs) are being used by both Russia and Ukraine to destroy enemy tanks and patrol the skies, and some of these drones use AI for target identification. Both countries are also racing to develop next-generation UAVs fully guided by artificial intelligence. Such AI-driven drones could wreak havoc, because they do not depend on the radio link between pilot and drone that air defense systems normally jam.

 

The components of a First-Person View (FPV) drone currently used in military operations in Ukraine. A complete drone can cost as little as $500.

Source: How drone combat in Ukraine is changing warfare (reuters.com)

 

Operating in Dangerous Territory

Despite these potential benefits, there is still skepticism about the use of AI in the military, and for good reason: AI brings real risks. From an operational standpoint, the concerns begin with reliability and fragility. Because military AI systems are so new, an international race is developing between nations, encouraging countries to rush development with little attention to reliability, fragility, and safety. As described in a research paper by the RAND Corporation, this race could prevent international consensus on the requirements surrounding safety systems for military AI, creating a "race to the bottom" and potentially reducing human control over military AI in the future. A lack of safety regulation leaves military AI systems exposed to perhaps the biggest risk: hacking. Most military AI is built on machine learning (ML) systems, which learn by observing training data. These systems can be compromised through a form of attack known as data poisoning, in which hackers falsify or manipulate the data the AI learns from, corrupting the system's intended function and causing the algorithm to make mistakes on the battlefield. Because data poisoning of military AI remains relatively unexplored, it is unclear for now whether this form of hacking has the upper hand, or whether it can be readily mitigated through defensive measures.
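To make the mechanism concrete, the sketch below illustrates label-flipping, one simple form of data poisoning, on a toy classifier. The synthetic dataset, the logistic-regression model, and the 30% flip rate are illustrative assumptions for demonstration only, not a description of any fielded military system.

```python
# Minimal illustration of label-flipping data poisoning on a toy classifier.
# Synthetic data and a simple logistic regression stand in for the far more
# complex models discussed above; every parameter here is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "sensor" data with two classes (e.g. target vs. non-target).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def train_and_score(train_labels):
    """Train on the given labels and report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, train_labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean labels.
print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# Poisoned training set: an attacker flips the labels of 30% of the
# training examples, corrupting what the model learns without ever
# touching the data it will face "in the field".
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```

Even this crude attack typically degrades test accuracy; more targeted poisoning can be far more damaging while remaining harder to detect, which is why corrupted training data is treated as a serious threat to ML-based systems.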

Is AI in the Military Unethical?

From an ethical standpoint, military AI creates risks through an accountability gap and through infringements on morality and human rights. Experts fear that by increasing the use of AI in war, countries will lose accountability, which is an essential distributor of moral responsibility and a deterrent against harmful actions on a global scale. AI's ability to act with full autonomy creates a "moral buffer" for the military operators and pilots behind the scenes. Without control over the actions and outcomes of autonomous machines, military action becomes easier to take and loses the consequences and accountability that normally restrain violence. Additionally, when autonomous vehicles make mistakes or cause harm through malfunction, it is difficult to determine whether blame lies with the machine's creator or with the AI itself, given how unprecedented machine-learning systems are. AI in war creates further ethical conflict through the argument that taking a life requires human judgment and dignity. Former UN Special Rapporteur on Extrajudicial, Summary, and Arbitrary Executions Christof Heyns made this argument, as paraphrased in the RAND Corporation's research paper: "Nonhuman systems do not have the necessary moral qualities to justify their actions in ways that respect victims and thus should not make decisions with such significant ethical implications." A lack of human emotion, judgment, and dignity makes killing on a large scale easier and less morally constrained, and this ethical risk could pose a significant threat to human wellbeing and global peace if military AI becomes the norm.

Furthermore, a major movement has been building among influential figures in the robotics industry as concerns grow about the decision-making of autonomous machines. One such figure is Geoffrey Hinton, widely regarded as the "Godfather of AI". Hinton resigned from Google in May 2023, telling the public he regretted part of his work on AI because he feared the damage it could cause once AI reached human-level intelligence. Many other experts in robotics hold a similar view: AI should not be allowed to automate the critical decisions that humans now make. Today, several campaigns are actively opposing the automation of decision-making. One such campaign, "Stop Killer Robots", highlights the danger of giving robots the ability to kill: "Machines don't see us as people, just another piece of code to be processed and sorted [...] Whether on the battlefield or at a protest, machines cannot make complex ethical choices, they cannot comprehend the value of human life..." Though they may make a more efficient soldier, autonomous machines without the ability to comprehend morality as humans do pose a risk to the basic morality people stand by.

Conclusion: The Question of Control

Ultimately, artificial intelligence seems bound to become an integral part of military operations in the years to come, driven by international competition and the innate human drive to develop and advance. The important question, then, is not whether AI should be used in warfare, but how society can mitigate the risks it poses.
