Building a Layered Strategy for the Use of Artificial Intelligence in Air Power: Understanding its Applications at the Strategic, Operational and Tactical Levels

Jean-Christophe Noël, Research Associate, French Institute of International Relations (IFRI) and Editor-in-Chief, Vortex

Abstract

AI is a formidable enabler in air power, but its potential has not yet been realized. Provided its limitations are acknowledged and appropriately managed, AI has the potential to significantly improve the air force's planning and decision-making processes at the different levels of warfare. Advances that simplify the use of AI and allow as much data as possible to be exploited as precisely as possible will improve the prospects for AI adoption. In the long term, AI may prove to be most useful at the joint level, where it can benefit from the vast data and information-sharing each force component can make available. There are, however, complex challenges and risks relating to the expanded use of AI in warfare. The fundamental constraints of AI, at both the technological and human user levels, need to be accounted for in order to chart a future direction.

The Author

Jean-Christophe Noël is a former fighter pilot with the French Air Force and was previously a Military Fellow at the Center for Strategic and International Studies (CSIS) in Washington DC, United States. Jean-Christophe is currently a Research Associate at the French Institute of International Relations (IFRI) and Editor-in-Chief of Vortex, the French military air power review.

Introduction

The spectacular developments now underway in artificial intelligence (AI) did not always attract interest from the community of military aviators. Recent air combat simulations pitting AI against experienced fighter pilots have, however, drawn widespread attention (Ernest et al., 2016). In these simulations, human pilots were crushed by their AI opponents. The idea that pilots, like many other trades, will eventually disappear as a result of advances in AI has gained increasing traction as a result (Pashakhanlou, 2019). Despite the accelerating gains in AI, however, little has changed so far in the role of human pilots in air combat or in leading the projection of air power. Rather than being replaced, human pilots are progressively accruing the benefits of AI in the cockpit, just as air force officers are in headquarters and air operations centers.

Provided its limitations are acknowledged and appropriately managed, AI has the potential to significantly improve the air force's information power and attrition capabilities in support of planning and decision-making processes at the different levels of warfare. In evaluating the multiple dimensions of a strategy for the use of AI in air power, air force leaders must chart a clear direction through key dilemmas. What tactical, operational, or strategic applications for AI can be developed? Is the use of AI more suited to particular levels of warfare than others? How should AI be integrated into established ways of warfare? This paper explores some of the fundamental challenges relating to the use of AI at the classical levels of warfare before considering its future direction and, finally, discussing key associated constraints and dangers that lie ahead.

AI at the Tactical Level

Definitions of artificial intelligence (AI) vary considerably from one author to another. The Journal officiel de la République française defines AI as “a theoretical and practical interdisciplinary field that aims to understand the mechanisms of cognition and thought and their imitation by a hardware and software device for the purpose of assisting or replacing human activity” (2018). Building on this definition, we can understand AI as a computing technology that solves problems by drawing on an ever-expanding volume of available data, ever-growing computing power, and progress in software design. AI applications are manifold and affect virtually all fields: AI can streamline administrative tasks, improve the performance of air fleet maintenance, and optimize missile guidance systems.

As Michael C. Horowitz points out, however, AI is not itself a weapon (Horowitz, 2018). It is an enabler, more akin to inventions such as electricity or the internal combustion engine than to the battle tank or fighter aircraft. An increasing number of military players are now introducing AI into military operations, primarily focused on tactical activity. Israel, one of the pioneers in this area, recently exploited three AI-enabled capabilities – The Alchemist, The Gospel, and Depth of Wisdom – in joint operations (Antebi, 2022). The Alchemist exploits tactical and operational data to alert troops of possible attacks through a handheld tablet. The Gospel offers recommendations for threat identification, which operators must validate before deciding on appropriate responses (Ahronheim, 2021). This application reportedly saved a year’s worth of effort compared with achieving the same results using existing methods. Finally, Depth of Wisdom was able to generate the most comprehensive mapping of underground tunnels ever achieved.

AI at the Strategic and Operational Levels

AI has demonstrated results in transforming the battlespace at the tactical level, but it also has potential advantages to offer at the higher levels of warfare. However, as the strategic, operational, and tactical levels of warfare require different types of considerations and reasoning, the potential applications and results that can be obtained with AI vary accordingly.

According to Clausewitz, strategy must weaken and ultimately break the will of adversaries. This goal is, however, not easily measured or quantified. While damage caused to adversaries through military action can undermine their resolve, this is not always true of ideologically, politically, or psychologically driven opponents. As a human activity in which creativity, surprise, deception, and psychological factors all play a role, warfare is not reducible to a simple series of logical actions and predictable results (Payne, 2021).

AI is not able to probe the psychological factors of war or understand why defeat on the battlefield does not necessarily mean giving up the will to fight. Just as sports-analysis software can neither play the game nor predict what happens in it, AI cannot solve problems caused by complex interactions involving human beings. AI will not be able to anticipate the human creativity and elements of surprise or deception frequently encountered in military operations, let alone provide solutions for them. These limits are explained by the fact that strong AI, which would match or even surpass the cognitive abilities of human beings, does not presently exist.

AI will not be able to anticipate the human creativity and elements of surprise or deception frequently encountered in military operations, let alone provide solutions for them.

– Jean-Christophe Noël

AI will increasingly be involved in operations, from the planning stages through to the execution of air campaigns, but for the foreseeable future air forces will need to rely on narrow AI. Narrow AI is limited to specific tasks and roles, which it can complete at a level of performance that exceeds human abilities. Table 8.1 compares attributes of narrow AI and strong AI, the latter of which remains at an early stage of development. While narrow AI can support tactical activity, it has largely proven inadequate at aggregating tactical gains into decisive advantage at the operational level. Such observations have been recorded in various attempts, such as an effort by the United States Navy to develop an operational-level AI system for naval commanders (Aycock and Glenney, 2021).

Table 8.1: Comparing Attributes of Narrow AI and Strong AI

It remains to be seen how best to capitalize on emerging technologies, since possessing technology alone is insufficient. In World War II, the Allies had comparatively more tanks than the Germans, but their armies suffered key defeats owing to the strength of Germany’s military doctrine. AI has become a central issue in the competition between the United States and China for technological superiority, a reminder of Cold War dynamics. Yet the results militaries obtain from AI, whatever the technological sophistication of the systems, will depend on the doctrines and concepts it is combined with. AI solutions must be tailored to the specific constraints and characteristics of the military environment. As such, AI-enabled capabilities must be developed holistically, with doctrine and operational concepts evolving and integrating simultaneously.

In the meantime, AI can be harnessed in more limited ways to delve into the workings of a particular device or to focus on vulnerabilities in the adversary’s system – echoing John A. Warden’s (1995) renowned concept of ‘centers of gravity.’ AI is also beginning to be used in influence and psychological operations (PSYOPS), which have become an essential part of military activity. In modern conflicts, all sides can distort, manipulate, and disseminate misinformation. AI has many uses in this context and can support offensive and defensive PSYOPS in various ways. Along the same lines, AI will have an expanded role in information warfare, where information systems represent critical centers of gravity for all sides.

Perhaps the most significant use of AI at the operational level lies in its ability to optimize intelligence and provide predictive analytics, allowing air forces to better anticipate threats and changes in the environment. The presence of particular individuals, specific keywords, and other patterns can provide advance signals of adversaries’ intent and future plans. By taking into account as much readily available information as possible – video, text, and images that may not previously have been effectively fused and exploited – AI can produce powerful results. The Collection and Monitoring via Planning for Active Situational Scenarios (COMPASS) program is an ambitious effort that aims to do precisely this by combining several disciplines, such as game theory, modeling, and simulation (Tucker, 2018).
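
To make the idea concrete, the sketch below shows one very simple way weak indicators from different sources might be fused into a single warning score. It is an illustrative toy, not the COMPASS program: all source names, weights, and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    source: str    # where the signal came from, e.g. "imagery" (hypothetical)
    weight: float  # analyst-assigned reliability of this source (hypothetical)
    active: bool   # whether the pattern was seen in the latest collection cycle

def warning_score(indicators: list[Indicator]) -> float:
    """Weighted fraction of active indicators, in [0, 1]."""
    total = sum(i.weight for i in indicators)
    hits = sum(i.weight for i in indicators if i.active)
    return hits / total if total else 0.0

observations = [
    Indicator("imagery", 0.5, active=True),      # e.g. unusual vehicle massing
    Indicator("signals", 0.3, active=False),     # e.g. keyword frequency spike
    Indicator("open-source", 0.2, active=True),  # e.g. named individual appears
]

ALERT_THRESHOLD = 0.6  # hypothetical cue level for analyst review
if warning_score(observations) > ALERT_THRESHOLD:
    print("Cue for analyst review: possible adversary preparation")
```

Real fusion systems would learn such weights from data rather than fix them by hand, but the basic pattern – many weak signals, one human-reviewable cue – is the same.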

Similarly, AI can perform a valuable role in high-level planning by supporting the evaluation and testing of different proposals and courses of action (CoAs). By modeling adversarial forces, their doctrines, capabilities, logistics, and possibly the command styles of their leaders, AI can help commanders and operational planners gauge which CoAs are likely to produce the most desirable results. Going further, by changing modeling parameters, AI can enrich thinking on assumptions and highlight cultural biases or new insights. AI applications may help draw attention to considerations that are overlooked or even help develop new ways of thinking about challenges. AI has obvious potential for making valuable inputs at different stages of developing courses of action, as reflected in Figure 8.1 and illustrated in the sketch that follows it.

Figure 8.1: Abstract Flow Chart for Developing Courses of Action
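
As a minimal sketch of this kind of decision support, the toy code below scores a handful of candidate CoAs by repeated simulation against a crude stochastic adversary model. Every CoA name, probability, cost figure, and the adversary-readiness parameter is invented for illustration; varying that parameter is the simplest analogue of the assumption-probing described above.

```python
import random

ADVERSARY_READINESS = 0.6  # hypothetical parameter; vary it to probe assumptions

def simulate_once(p_success: float, cost: float, readiness: float) -> float:
    """One simulated run: payoff of 10 if the CoA succeeds, minus its cost."""
    succeeded = random.random() < p_success * (1.0 - 0.5 * readiness)
    return (10.0 if succeeded else 0.0) - cost

def evaluate(coas: dict[str, tuple[float, float]], runs: int = 10_000) -> dict[str, float]:
    """Average simulated payoff per course of action over many runs."""
    return {
        name: sum(simulate_once(p, cost, ADVERSARY_READINESS) for _ in range(runs)) / runs
        for name, (p, cost) in coas.items()
    }

# Hypothetical candidate CoAs: (baseline success probability, relative cost).
candidate_coas = {
    "suppress air defences first": (0.8, 4.0),
    "strike logistics nodes": (0.6, 2.0),
    "decoy, then strike": (0.7, 3.0),
}

for name, score in sorted(evaluate(candidate_coas).items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected payoff {score:.2f}")
```

An operational system would replace the one-line adversary model with detailed force, logistics, and doctrine models, but the planner's question is unchanged: which CoA holds up best across many runs and across different assumptions?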

Future Directions for AI

The operational results and experience gained in using AI demonstrate important ways this technology can support tactical activity: enhancing intelligence, strengthening force protection, and assisting decision-making. The brief observations covered in this paper provide a glimpse into the future potential of AI. The growing introduction of AI into defense advances the revolution in military affairs (RMA) that commenced at the close of the 20th century. As in the 1990s, the underlying goal remains to integrate new intelligence technologies to dominate the battlespace by lifting ‘the fog of war.’

The multiplication of battlespace sensors has enhanced the collection of information, which must be processed, merged, and distributed to force elements to create multiple kill chains. This trend will become more pronounced as the concept of mosaic warfare, still in its early phases, is brought to reality (Clark, Patt, and Schramm, 2020). AI is highly relevant to multi-domain operations (MDO) constructs, which bring together joint capabilities to make possible the early detection of adversarial vulnerabilities and to coordinate synchronized effects against them. AI makes it possible to detect even temporary vulnerabilities – by anticipating or identifying an adversarial radar malfunction, for example – and to trigger rapid actions and effects against time-sensitive targets.
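
A toy sketch of that last idea, under entirely hypothetical data and thresholds: monitor an adversary emitter's observed transmissions and cue operators to a short window of opportunity when the emissions lapse. Note that it produces a human-reviewable cue, not an automatic strike decision.

```python
from datetime import datetime, timedelta

MAX_SILENCE = timedelta(seconds=30)  # hypothetical: longer silence suggests a malfunction
WINDOW = timedelta(minutes=2)        # hypothetical: assumed duration of the opportunity

def find_opportunity(emissions: list[datetime]) -> tuple[datetime, datetime] | None:
    """Return a (start, end) cueing window after the first long gap in emissions."""
    for earlier, later in zip(emissions, emissions[1:]):
        if later - earlier > MAX_SILENCE:
            start = earlier + MAX_SILENCE
            return start, start + WINDOW
    return None

# Simulated emitter observations with a gap between t+20s and t+95s.
t0 = datetime(2024, 1, 1, 12, 0, 0)
observed = [t0 + timedelta(seconds=s) for s in (0, 10, 20, 95, 105)]

window = find_opportunity(observed)
if window:
    print(f"Cue operators: possible window {window[0]:%H:%M:%S} to {window[1]:%H:%M:%S}")
```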

When AI is designed into a system-of-systems, its potential is amplified. Two promising directions in this regard have emerged in thinking about the future of air power: drone swarms and loyal wingmen. In drone swarms, small autonomous systems will operate much like an anthill, where each individual element is not necessarily highly specialized but, combined into a system, contributes to a semblance of collective intelligence. As one element offsets the technical limitations of others, working together in sync, these swarms can perform complex functions such as detection, deception, and strike, as the sketch below illustrates. Drone swarms are seen as an essential means of saturating enemy air defenses in the future (Hamilton and Ochmanek, 2020).
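
The following minimal sketch illustrates the 'collective behavior from simple elements' principle: each simulated drone follows only two local rules (drift toward a goal and the swarm centroid; keep separation from close neighbours), yet the group as a whole converges on the objective area. All parameters are illustrative, not a fielded swarm algorithm.

```python
import random

def step(positions, goal, cohesion=0.05, min_sep=1.0):
    """Advance every drone one step using only local rules."""
    cx = sum(x for x, _ in positions) / len(positions)  # swarm centroid
    cy = sum(y for _, y in positions) / len(positions)
    moved = []
    for x, y in positions:
        # Rule 1: drift toward the shared goal and the swarm centroid.
        dx = cohesion * (goal[0] - x) + cohesion * (cx - x)
        dy = cohesion * (goal[1] - y) + cohesion * (cy - y)
        # Rule 2: push away from any neighbour that is too close.
        for ox, oy in positions:
            if (ox, oy) != (x, y) and abs(ox - x) + abs(oy - y) < min_sep:
                dx += 0.1 * (x - ox)
                dy += 0.1 * (y - oy)
        moved.append((x + dx, y + dy))
    return moved

swarm = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
for _ in range(200):
    swarm = step(swarm, goal=(50.0, 50.0))

cx = sum(x for x, _ in swarm) / len(swarm)
cy = sum(y for _, y in swarm) / len(swarm)
print(f"Swarm centroid after 200 steps: ({cx:.1f}, {cy:.1f})")  # near the goal
```

No drone in this sketch knows the swarm's overall state, yet the collective behavior emerges anyway; losing a few elements degrades the swarm only gracefully, which is precisely its attraction for saturating defenses.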

The loyal wingman concept, on the other hand, is yet more ambitious. Sixth-generation aircraft now under development are envisioned to operate with autonomous drones in executing missions collaboratively. These wingmen will improve situational awareness and survivability for their manned counterparts and assist pilots in making better decisions faster. Loyal wingmen will be adaptable to roles that reflect specific mission objectives, such as electronic warfare or strike functions. Building on the approach leading air forces have used for the past century – exploiting quality to defeat adversaries – the loyal wingman will redefine the dynamics of human-machine teaming and lead to radical changes in the future structure of air forces.

AI-Associated Constraints and Dangers

Any overview of AI uses by the military would be incomplete without highlighting the constraints and dangers awaiting users. AI is not a magical enabler. Like any emerging technology or new technique, AI will need to evolve and be tested, which will require significant investment ahead. The application of AI in military operations is not a simple matter of running software. AI demands various investment streams to develop the required systems, the enabling infrastructure, and, of course, the human factors that will enable its most effective use and protect it from sophisticated adversaries.

New digital architectures, hardware, and supporting infrastructure must be created to exploit the ‘big data’ that makes AI possible. Combat clouds will need to be developed to store data, and it will be necessary to determine the nature and requirements of data and data systems, as well as the appropriate policies and governance frameworks. Positioning combat clouds and servers brings its own challenges – they must be close to users, but should they be airborne or on the ground? Whatever the answer, it must ensure connectivity between headquarters, command elements, and edge warfighters.

In contemporary military conflicts, all sides understand the critical dependence on connectivity and communication flows. During the conflict in Ukraine, for example, the Russian military targeted servers and data exchange nodes belonging to Viasat, a commercial telecommunications service provider, to deny communications to Ukrainian forces (Burgess, 2022). AI, in fact, has various pitfalls that can be exploited to the detriment of its users by adversaries who understand and can target its inherent limitations and vulnerabilities. Deep learning techniques, for example, depend on the quality and variety of the information they are given to produce accurate results.

This is why cultural and unconscious biases, or limits on the volume of information, can lead operators to make incorrect judgments when working with AI. Trust issues also arise in the human-machine nexus that AI relies upon. If AI is more creative than a pilot or the commanders it supports and offers unusual ways to achieve mission goals, this may raise doubts and confusion, which is unacceptable in high-speed combat. If courses of action generated in the same way are recommended to allies or coalition partners, the absence of transparent reasoning can amplify the negative consequences.

On the other hand, humans naturally tend to believe that machines are superior when offered a result that seems coherent. The risk of over-automation can thus lead to aberrations. Designating objectives through reliance on AI without human involvement, where a decision-maker is under high pressure (hierarchical or time pressure, for example), may cause errors that lead to catastrophic consequences.

As with any technology – per the well-known dialectic of the sword and the shield – AI will inevitably trigger counter-strategies and may produce threats more quickly than expected. NATO air forces, for example, have developed offensive capabilities with autonomous drones without giving sufficient thought to defending against similar systems used by adversaries. The threat of competitors bringing their capabilities to bear in the era of disruptive technologies is sometimes overlooked, and air forces must guard against repeating similar mistakes with AI. This is especially important as much AI is developed using commercially available or open-access software, which offers adversaries various avenues for devising counter-strategies.

Conclusion

AI is a formidable enabler in air power, but its potential has not yet been realized. Advances that simplify the use of AI and allow as much data as possible to be exploited as precisely as possible will improve its prospects for adoption across the different levels of warfare. In the long term, AI may prove to be most useful at the joint level, where it can benefit from the vast data and information-sharing each force component makes available. Air power leaders will need to decide how much autonomy can be given to machines to take advantage of their qualities without undermining strategy.

AI may prove to be most useful at the joint level, where it can benefit from the vast data and information-sharing each force component makes available.

– Jean-Christophe Noël

The scramble among air forces to accelerate operational tempos and processes by compressing time cycles must not become an end in itself. The purpose of war is ultimately to achieve political effects, not to conduct operations in the shortest timelines possible. For the time being, AI cannot yet be employed continuously across the three classical levels of warfare, nor can it be placed at the center of military decision-making processes or battlespace operations. Changing this will require significant advances in technology and concepts, as well as a shift in mindsets. When that happens, predictions that pilots will disappear may well be fulfilled quickly.

References:

Ahronheim, A. (2021), “Israel’s operation against Hamas was the World’s First AI War.” The Jerusalem Post.

Antebi, L. (2022), “Has Artificial Intelligence Triumphed over Terrorism?” Vortex, 3: 103-117.
Available at: https://fr.calameo.com/cesa/read/006940288d34d5c710fcc

Burgess, M. (2022), “A Mysterious Satellite Hack Has Victims Far Beyond Ukraine.” Wired.
Available at: https://www.wired.com/story/viasat-internet-hack-ukraine-russia/

Clark, B., Patt, D. and Schramm, H. (2020), “Mosaic Warfare: Exploiting Artificial Intelligence and Autonomous Systems to Implement Decision-Centric Operations.” Center for Strategic and Budgetary Assessments.
Available at: https://csbaonline.org/uploads/documents/Mosaic_Warfare_Web.pdf

Ernest, N., Carroll, D., Schumacher, C., Clark, M. and Cohen, K. (2016), “Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions.” Journal of Defense Management, 6: 144

Aycock, A. and Glenney IV, W. (2021), “Trying to Put Mahan in a Box,” in Tangredi, S. J. and Galdorisi, G. (eds.), AI at War, Annapolis: Naval Institute Press: 265-85.

Hamilton, T. and Ochmanek, D. (2020), “Operating Low-Cost, Reusable Unmanned Aerial Vehicles in Contested Environments.” RAND Corporation.
Available at: https://www.rand.org/content/dam/rand/pubs/research_reports/RR4400/RR4407/RAND_RR4407.pdf

Horowitz, M. (2018), “The Promise and Peril of Military Applications of Artificial Intelligence.” Bulletin of the Atomic Scientists.
Available at: https://thebulletin.org/2018/04/the-promise-and-peril-of-military-applications-of-artificial-intelligence/

Pashakhanlou, A. H. (2019), “AI, autonomy, and airpower: the end of pilots?” Defence Studies, 19(4): 337-52

Payne, K. (2021), I, Warbot. London: Hurst and Company, 152-58

Journal officiel de la République française (2018), “Vocabulaire de l’intelligence artificielle,” n°58

Tucker, P. (2018), “The Pentagon Wants AI to Reveal Adversaries’ True Intentions.” Defense One.
Available at: https://www.defenseone.com/technology/2018/03/pentagon-wants-ai-reveal-deceptive-adversaries-true-intentions/146739/

Warden, J. A. (1995), “The Enemy as a System.” Airpower Journal, 9(1): 40-55
