The traditional response to the acceptance challenge posed by the military use of Artificial Intelligence (AI) has been to insist on 'meaningful human control' as a way of engendering confidence and trust. Given the ubiquity and rapid advance of AI and its underpinning technologies, this is no longer an adequate response. AI will play an essential and growing role in a broad range of command and control (C2) activities across the whole spectrum of operations.

While less directly threatening in the public mind than ‘killer robots’, the use of AI in military decision-making presents key challenges as well as enormous advantages. Increasing human oversight over the technology itself will not prevent inadvertent (let alone intentional) misuse.

This paper builds on the premise that trust at all levels (operators, commanders, political leaders and the public) is essential to the effective adoption of AI for military decision-making and explores key related questions. What does trust in AI actually entail? How can it be built and sustained in support of military decision-making? What changes are needed for a symbiotic relationship between human operators and artificial agents for future command?

Trust in AI can be said to exist when humans hold certain expectations of the AI’s behaviour without reference to intentionality or morality on the part of the artificial agent. At the same time, however, trust is not just a function of the technology’s performance and reliability – it cannot be assured solely by resolving issues of data integrity and interpretability, important as they are. Trust-building in military AI must also address needed changes in military organisation and command structures, culture and leadership. Achieving an overall appropriate level of trust requires a holistic approach. In addition to trusting the purpose for which AI is put to use, military commanders and operators need to sufficiently trust – and be adequately trained and experienced on how to trust – the inputs, process and outputs that underpin any particular AI model. However, the most difficult, and arguably most critical, dimension is trust at the level of the organisational ecosystem. Without changes to the institutional elements of military decision-making, future AI use in C2 will remain suboptimal, confined within an analogue framework. The effective introduction of any new technology, let alone one as transformational as AI, requires a fundamental rethinking of how human activities are organised.

Prioritising the human and institutional dimensions does not mean applying more control over the technology; rather, it requires reimagining the human role and contribution within the evolving human–machine cognitive system. Future commanders will need to be able to lead diverse teams across a true ‘Whole Force’ that integrates contributions from across the military, government and civilian spheres. They must understand enough about their artificial teammates to be capable of both collaborating with and challenging them. This is more akin to the murmuration of starlings than the genius of the individual ‘kingfisher’ leader. For new concepts of command and leadership to develop, Defence must rethink its approach not only to training and career management but also to decision-making structures and processes, including the size, location and composition of future headquarters.

AI is already transforming warfare and challenging longstanding human habits. By embracing greater experimentation in training and exercises, and by exploring alternative models for C2, Defence can better prepare for the inevitable change that lies ahead.

Download the complete paper (PDF) - Trust in AI

About the Authors

Christina Balis
Christina is the Global Campaign Director of Training and Mission Rehearsal at QinetiQ. Her 20 years' experience on both sides of the Atlantic spans consulting, industry and public policy settings, with a particular focus on defence, global security and transatlantic relations.
Paul O'Neill
Paul is the Director of Military Sciences at RUSI. With over 30 years’ experience in strategy and human resources, his research interests cover national security strategy and organisational aspects of defence and security, particularly organisational design, human resources, professional military education and decision-making.

Related Content

Trust in Training

Trust in AI builds on the earlier report produced by QinetiQ, The Trust Factor, which looked at trust as a fundamental component of military capability and an essential requirement for military adaptability in the 2020s.

Artificial Intelligence, Analytics & Advanced Computing

Our Data Science experts work on a range of Artificial Intelligence (AI) projects, using a variety of Machine Learning (ML) techniques to discover patterns in data and to analyse, classify and verify it.