
Adopting and Assuring Disruptive Technologies for Operational Advantage

21/03/2023

With today's unprecedented rate of technological change, the traditional capabilities of defence are being enhanced with newer, disruptive capabilities such as Artificial Intelligence (AI).

As such, industry and defence must jointly figure out how to deliver these newer technologies to the warfighter faster, while assuring their safety and ensuring confidence in their operational performance. Part of this will involve the Test and Evaluation (T&E) of such technologies - ensuring capabilities are safe and fit for purpose.

But how do we test something that’s fundamentally new, and that has never been assured before? We start by building on what we have learned from testing earlier capabilities.

Evaluation focus, user focus
The test data that we gather is essential, but what we do with it matters most. We must not become overly fixated on testing itself. Ultimately, evaluation - namely the results and safety recommendations that we provide - is what counts; testing is a means to that end.

We also need to maintain our focus on what these technologies deliver for the end-user. How can the warfighter best leverage these capabilities and the information we provide? The safety recommendations and associated information that we provide must consider their needs at all times.

Closing the skills gap
Getting the right people, with the right skills, in the right place is a high priority for delivering acquisition programmes - including for developing and evaluating these new technologies. We need to grow our talent pool and bring in new people with fresh ideas, and to do so in a timely way.

Dismantling communication barriers
To fully bring T&E into the capability development process, we must build an assurance culture across the whole enterprise. And to do this, we must be able to share and collaborate without hindrance, leveraging the more advanced IT that’s being used by many organisations.

Working with (…not against) the regulations
The regulations, although essential, can impede those who aren’t experienced with them. The aim is to work within the regulations, not for them. Early-career T&E professionals should spend more time understanding the spirit of the rules: it can take a few years to develop the interpretation skills needed to tailor them and apply them proportionately.

Innovative technology requires innovative evaluation
Ultimately, there’s little point in pursuing innovative science and technology if we aren’t equally innovative in how we evaluate it; otherwise, we may never see the full range of applications these new technologies can offer.

The field of AI is a useful example, as it is an area with a lot of promise and hyperbole. What should the assurance of AI and its related technologies look like in the future?

It's worth pausing on the difference between conventional software and AI. The key difference is that AI systems learn, initially from large training data sets, and they can continue to learn once deployed, which makes their behaviour very difficult to predict. They are also often ‘opaque’: they act without being able to explain their decisions. Both properties create challenges for testing.
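To make that concrete, here is a minimal sketch of the continued-learning problem, using scikit-learn's SGDClassifier as a stand-in for a deployed system that keeps learning. The data and model are illustrative assumptions, not any real defence system; the point is simply that the same probe input can receive a different prediction once the model has updated itself on new data.

```python
# Illustrative sketch: a model that keeps learning after deployment can
# change its answer to the same input, which complicates one-off assurance.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Initial training data: two loosely separated classes.
X_train = rng.normal(loc=[[0.0, 0.0]] * 50 + [[3.0, 3.0]] * 50, scale=1.0)
y_train = np.array([0] * 50 + [1] * 50)

model = SGDClassifier(random_state=0)
model.partial_fit(X_train, y_train, classes=[0, 1])

probe = np.array([[1.5, 1.5]])  # a fixed input we monitor over time
print("before update:", model.predict(probe))

# New operational data arrives from a shifted distribution and the deployed
# model keeps learning; its decision boundary moves, so the prediction for
# the very same probe input may change.
X_new = rng.normal(loc=[[2.0, 2.0]] * 50, scale=1.0)
y_new = np.array([0] * 50)
model.partial_fit(X_new, y_new)

print("after update: ", model.predict(probe))
```

A system whose answers can drift like this cannot be signed off once and forgotten; its assurance evidence ages as the model changes.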

Evaluating complex systems isn't new, and many of the considerations we’d normally apply still hold here - safety, security and regulatory compliance, for example. However, we must also look at new dimensions like trust and ethics - both around the technology itself and how we choose to deploy it. We must understand how it may respond and evolve once fielded, the interaction between people and systems, and the potential for hostile interference.
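One way to probe some of these new dimensions is to measure how stable a model's outputs are under small input perturbations - a crude proxy for resilience to noise or hostile interference. The sketch below is a hypothetical example using a toy scikit-learn model, not an agreed assurance method.

```python
# Illustrative robustness probe: jitter the inputs and count how often the
# model's predictions stay the same. The model and data are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = DecisionTreeClassifier(random_state=0).fit(X, y)

def stability_under_noise(model, X, scale=0.1, trials=20):
    """Fraction of predictions unchanged when inputs are randomly jittered."""
    baseline = model.predict(X)
    unchanged = [
        np.mean(model.predict(X + rng.normal(scale=scale, size=X.shape)) == baseline)
        for _ in range(trials)
    ]
    return float(np.mean(unchanged))

print(f"prediction stability at noise scale 0.1: {stability_under_noise(model, X):.2f}")
```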

In a military context, AI assurance must include user-based approaches and a clear understanding of what the technology is being used for. For example, back-office mission support and frontline use in a mission-critical environment are very different - so testing must be rigorous and tailored to each application.

That said, as far as we know, there are no agreed metrics for evaluating the training of AI-based systems. The US Joint Artificial Intelligence Center (JAIC) is blazing a trail here; though it does not yet have all the answers, we would do well to follow its lead. Well-defined metrics and assurance frameworks will allow those who aren't deep experts in AI to operate assurance programmes, utilising critical T&E enablers like testbeds and data infrastructure.
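As a sketch of what such a framework might eventually look like, the example below packages a fixed set of named metrics with explicit pass thresholds, so a T&E practitioner who is not an AI specialist could run it and read the verdicts. The metric names and thresholds are illustrative assumptions, not an agreed standard.

```python
# Illustrative assurance check: named metrics with pre-agreed thresholds,
# reported as simple PASS/FAIL verdicts. All names and numbers are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Stand-in data and model, not any real system under test.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Hypothetical pass thresholds an assurance framework might fix in advance.
THRESHOLDS = {"accuracy": 0.90, "stability": 0.95}

def assurance_report(model, X_test, y_test):
    """Return each named metric with a simple PASS/FAIL verdict."""
    preds = model.predict(X_test)
    jittered = model.predict(X_test + rng.normal(scale=0.05, size=X_test.shape))
    metrics = {
        "accuracy": accuracy_score(y_test, preds),
        "stability": float(np.mean(jittered == preds)),  # robustness to small noise
    }
    return {name: (value, "PASS" if value >= THRESHOLDS[name] else "FAIL")
            for name, value in metrics.items()}

for name, (value, verdict) in assurance_report(model, X_test, y_test).items():
    print(f"{name}: {value:.3f} ({verdict})")
```

Packaging checks this way is what lets testbeds and shared data infrastructure, rather than scarce AI specialists, carry the routine load of an assurance programme.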