
AI in Warfare: Data is the New Ammunition

  • Writer: Kieren Sharma
  • Jul 21, 2024
  • 4 min read

Updated: Feb 28

In our latest episode, we delved into the complex and increasingly relevant topic of AI in warfare. We aimed to demystify the subject, moving beyond scary headlines to explore the history, current applications, and ethical considerations surrounding the use of artificial intelligence in military contexts.

The Historical Roots of AI in Warfare

We started by exploring the surprising historical links between technological innovation and military investment. Key examples include:


  • Alan Turing's work at Bletchley Park during World War II: His efforts to crack the German Enigma cipher produced the electromechanical Bombe, and the wider Bletchley Park codebreaking effort later built the Colossus computer (to break the Lorenz cipher), a crucial tool for the Allied forces. This work laid the foundations of modern computing and cemented Turing as a founding figure in AI.

  • ARPANET: Developed by the US Advanced Research Projects Agency (ARPA, now DARPA) in the late 1960s, ARPANET was designed to connect military and academic research institutions so they could share resources and communicate reliably. It became the precursor to the modern internet.

  • DARPA's Challenges: Since the early 2000s, DARPA has run public challenges that encourage the development of technologies for military use. These include the DARPA Grand Challenge, whose 2005 edition was won by Stanford's autonomous vehicle Stanley, and robotics challenges and contracts that helped fund companies like Boston Dynamics.

  • The Third Offset Strategy: In 2014, the US Department of Defense launched this strategy, investing heavily in artificial intelligence and autonomous weapons.



Current Applications of AI in Warfare: Beyond the Battlefield


While autonomous weapons often dominate the narrative, AI is being used in many other ways. One useful framing is the “three D's”: AI takes on dull, dirty, and dangerous tasks, such as transcribing communications, identifying objects in video footage, and operating in battle zones contaminated with biological or chemical weapons. Other applications include:


  • Cyber and Information Warfare: AI plays a role in manipulating social media algorithms and creating deepfakes, both of which can be used to sway public sentiment.

  • Integration of Humans and AI Systems: AI systems process the vast amounts of data collected by battlefield sensors and help human operators make decisions, increasing military capability without increasing personnel.

  • Command and Control: AI facilitates better decision-making and coordination of autonomous assets.

  • Predictive Maintenance: AI is used to predict when equipment such as warplanes needs maintenance, before failures occur.

  • Autonomous Vehicles: AI is used to operate unmanned aerial vehicles (UAVs) and unmanned underwater vehicles (UUVs). These can operate for long periods and go to places that are too dangerous or inaccessible for humans.

  • Defense Systems: AI is used in defensive systems such as Israel's Iron Dome, which intercepts incoming rockets and missiles.

  • Target Prediction: AI is used to track individuals, combining video surveillance, phone data, and other sources of information to identify potential targets.


Metrics for Success: Precision vs. Recall

We highlighted that AI systems in general need a central goal and a metric to optimise for. Using target prediction as an example, we explored the implications of two important metrics:


  • Precision: Measures the proportion of positive predictions that are correct (true positives divided by everything the system flags). A system achieving 100% precision exclusively identifies true targets, never mistaking a civilian for a target.

  • Recall: Measures the proportion of all actual targets that are detected (true positives divided by all real targets). A system with 100% recall recognises every actual target but might also incorrectly flag some innocent civilians.


The choice of which metric to prioritise has huge implications in warfare, where a focus on recall might lead to high civilian casualties, while a focus on precision might allow threats to slip through.
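
To make this trade-off concrete, here is a minimal sketch in Python. The counts and threshold settings below are invented purely for illustration (they are not figures from the episode); they show how the same system, tuned two different ways, yields very different precision and recall.

```python
# Minimal sketch of the precision/recall trade-off for a hypothetical
# target-prediction system. All counts below are invented for illustration.

def precision(tp: int, fp: int) -> float:
    """Fraction of flagged targets that are real targets."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of real targets that the system actually flags."""
    return tp / (tp + fn)

# Suppose the system reviews 1,000 people, 50 of whom are genuine targets.
# A low decision threshold flags aggressively; a high one flags cautiously.
low_threshold = {"tp": 49, "fp": 120, "fn": 1}   # flags almost anything suspicious
high_threshold = {"tp": 30, "fp": 1, "fn": 20}   # flags only near-certain cases

for name, c in [("low threshold", low_threshold), ("high threshold", high_threshold)]:
    print(f"{name}: precision={precision(c['tp'], c['fp']):.2f}, "
          f"recall={recall(c['tp'], c['fn']):.2f}")

# low threshold:  precision=0.29, recall=0.98 -> few missed targets, many false alarms
# high threshold: precision=0.97, recall=0.60 -> few false alarms, many missed targets
```

Neither setting is objectively better: choosing a threshold encodes a value judgment about which kind of error, false alarms or missed targets, is more acceptable.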



Ethical and Legal Implications

The ethical concerns surrounding AI in warfare are significant and include:


  • Responsibility: When AI systems make errors, who is responsible? Is it the AI, the developer, or the human operator?

  • Human Oversight: Maintaining meaningful human oversight is crucial so that moral responsibility is not abdicated to machines.

  • The Lavender System: The reported use of the Lavender system in the Israel-Gaza conflict highlights the dangers of acting directly on AI predictions without proper human review and decision-making.

  • The need for regulation: There are currently no clear, globally recognised definitions of what autonomous weapons actually are, hindering international efforts to regulate their use.

  • The potential for an arms race: The increasing investment in AI and autonomous weapon systems could lead to a dangerous arms race.

  • The production “valley of death”: This phrase describes the gap between prototyping a technology and deploying it at scale; the push to rush new AI systems across that gap without considering the ethical and real-world implications is a cause for concern.



The Future of AI in Warfare

The trends are clear: defence budgets are increasingly being spent on AI and autonomous systems, and the lines between intelligence, surveillance, and command and control are likely to blur. As AI takes on more authority, many questions about how to build ethical and robust systems remain unanswered. Some anticipate that, as with nuclear weapons, the fear of mutually assured destruction will act as a deterrent; but AI weapons are far easier and cheaper to produce, lowering the barrier to entry and potentially broadening the threat.



If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!

