US Air Force tests combat applications of AI

Four-day experiment part of efforts to improve decision-making, embrace automation

24 July 2025

The US Air Force has conducted Experiment 3, a four-day exercise to evaluate the impact of artificial intelligence (AI) on the speed of target identification and decision-making in a simulated combat environment. The aim of the experiment was to accelerate the kill chain: the process of detecting threats, designating targets, engaging them, and assessing the results.

By integrating AI software into existing workflows, operators were able to accelerate targeting decisions and reduce their cognitive load. The goal was to leverage AI's ability to analyse large volumes of data and present actionable information to human operators. Comparing decisions made independently by operators with those based on AI recommendations highlighted the complementary strengths of human judgment and machine processing power.

This human-machine team approach underscores the Air Force’s commitment to deploying AI as a force multiplier, while maintaining human oversight in critical decision-making processes. The feedback collected during Experiment 3 will be used to refine AI algorithms and operational procedures.

The exercise is part of the US military's efforts to embrace automation, artificial intelligence, data-driven command, and interconnected sensor networks in order to modernise the kill chain and maintain a competitive edge in future conflicts. Former Air Force Secretary Frank Kendall has emphasised the need for decision-making at machine speed, especially in highly automated and autonomous future conflicts.

Beyond target acquisition, AI has the potential to transform core military activities such as document analysis, situational reporting, and research tasks, freeing personnel for more strategic work. However, the growing integration of AI in military applications also raises ethical questions about responsibility, bias, and the risk of unintended consequences.

Although the Pentagon maintains a policy of human oversight in critical AI-assisted decision-making loops, concerns remain about whether that control can be sustained in fast-paced, data-driven combat environments. The rapid pace of AI development also poses challenges for effective regulation and oversight.

Business AM

TechCentral.ie