

How Test & Evaluation is carried out in other sectors: by QinetiQ & SIA Partners


QinetiQ had many opportunities in 2022 to analyse and discuss how organisations across the Defence Enterprise are transforming Test & Evaluation to respond to emerging technologies and the changing threat landscape. We have turned some of that content into a series of short blogs. Here’s the 3rd in the series:

You can read the 1st blog here.
You can read the 2nd blog here.

As may be obvious, Test and Evaluation (T&E) is not solely a feature of military capability development. Whilst it’s probably true that defence has one of the highest burdens of assurance - and consequently, the most rigorous need for independent T&E - there are many private sector companies doing T&E in innovative ways that we can look to for ideas.

In this piece, we will do just that, exploring the overlap of commercial and military T&E, and particularly looking to understand what can be learnt from the commercial sector’s adoption of digital engineering and the associated T&E.

Digital twins and digital threads

The transformation being driven by Industry 4.0 and the adoption of digital engineering is the key to unlocking the enterprise benefits of the virtualisation of T&E.

This transformation creates a progressive suite of digital artifacts, ultimately culminating in the asset digital twin. So, what is a digital twin? It is a virtual representation of a real-world asset that exchanges data with its real-world counterpart. This could be anything from an individual component, a bed of components, or a whole system, all the way through to a system of systems or processes - like a factory. This journey provides a coherent and increasingly capable suite of models that can be used for T&E progressively through the full capability lifecycle.
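The core idea - a virtual object that ingests telemetry from its physical counterpart and exposes a queryable state - can be sketched in a few lines. This is a minimal illustration only; the class, field names, and toy wear model are hypothetical, not drawn from any real twin framework.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Illustrative digital twin of a single physical asset."""
    asset_id: str
    state: dict = field(default_factory=dict)   # latest synced state
    history: list = field(default_factory=list) # every reading ever ingested

    def ingest(self, telemetry: dict) -> None:
        """Update the virtual state from a real-world sensor reading."""
        self.history.append(telemetry)
        self.state.update(telemetry)

    def predicted_wear(self) -> float:
        """Toy model: wear accumulates with observed operating hours."""
        return 0.01 * self.state.get("hours_run", 0)

twin = PumpTwin("pump-42")
twin.ingest({"hours_run": 1200, "vibration_mm_s": 2.4})
twin.ingest({"hours_run": 1250, "vibration_mm_s": 2.9})
print(twin.predicted_wear())  # prediction reflects the latest synced state
```

The essential behaviour is the two-way relationship: live data flows in, and predictions about the asset flow back out.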

But how comprehensive must such a twin be? It is perhaps natural to try to build a complete digital twin of an entire system, its parts all working together - but this is not always necessary (or practical). For example, a fluid dynamics computation of an individual element of a wing is relatively straightforward. However, as soon as this element is placed next to another component of the wing, and then a spinning wheel that bounces, along with crosswinds and temperature fluctuations, the model struggles.

But in reality, a model need not be perfect to be useful. Simpler digital representations (those with fewer variables) can still yield useful information. So, it’s important not to get too concerned with the idea that ‘complete’ digital twin systems are necessary to realise benefit.

We also need to consider the digital thread, which is an integral part of the digital twin. A digital thread is a continuous stream of data collected over the lifecycle of the capability - drawn both from digital simulations and live testing. This data feeds into the twin, increasing its realism and usefulness.
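One way to picture the digital thread is as an append-only log of lifecycle data, with each record tagged by its provenance (simulation or live test) so it can be traced when it feeds the twin. The sketch below is purely illustrative; the class and record fields are hypothetical.

```python
from datetime import datetime, timezone

class DigitalThread:
    """Illustrative append-only log of lifecycle data for one capability."""

    def __init__(self):
        self._records = []

    def record(self, source: str, payload: dict) -> None:
        # Provenance matters: twin users need to know whether a data
        # point came from a simulation or from a physical test.
        assert source in ("simulation", "live_test")
        self._records.append({
            "source": source,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "payload": payload,
        })

    def from_source(self, source: str) -> list:
        """Retrieve all records of a given provenance."""
        return [r for r in self._records if r["source"] == source]

thread = DigitalThread()
thread.record("simulation", {"load_case": "gust", "max_stress_mpa": 310})
thread.record("live_test", {"load_case": "gust", "max_stress_mpa": 325})
print(len(thread.from_source("live_test")))  # 1
```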

Looking at some private sector examples

There are a number of digital engineering triumphs that are worth briefly mentioning.

  • First, General Motors, which claims to have reduced its development time by 117 weeks on the Hummer EV, using a combination of digital engineering and simulation.
  • Next is Jaguar, which claims to have reduced the development time of its first all-electric SUV, the I-PACE, down from 18 months to 12 weeks.
  • And then there’s Airbus, perhaps the most impressive - which used simulation instead of a physical wing bending test to save about €3 million and 4 months on the certification of the A350. €3 million might not seem substantial (considering the cost of major aviation projects), but if similar savings can be made across a number of required tests, they could really add up.

Looking to Formula 1

F1 companies share many (but not all) of defence’s vehicle engineering challenges - for example, fielding sophisticated vehicles that are required to perform at the edge of their performance envelopes, and managing a suite of components and sensitive intellectual property from multiple sub-contractors.

Most, if not all, of the major F1 companies have invested heavily in digital twins. Many started small, with twins of individual components or small collections of components, and then built from there. Below are two successful examples.

#1: McLaren

If you were to observe McLaren at their testing bays in Barcelona, you'd see something that looks like chicken wire hanging off the test vehicle. This wire is covered in sensors, allowing McLaren to test how the car physically behaves, gathering large volumes of data to feed back into their digital models.

McLaren does this because it is limited in the amount of track and wind tunnel testing that it can do. As such, it has to feed the most accurate data possible back into its digital twins, so that the next time the test team gets to the track, it’s ready with the most up-to-date iteration of the vehicle.

#2: Rolls-Royce

Our second example is Rolls-Royce. Like McLaren, its aim is to do as much work as possible on digital models, so that it can optimise expensive and relatively rare live test opportunities. In fact, Rolls-Royce has reached the point where the regulator has enough faith in the digital model to reduce some of the physical tests that it would have previously insisted upon.

Of more interest, perhaps, is Rolls-Royce’s ‘IntelligentEngine’. Since its introduction, Rolls-Royce claims that it’s been able to switch the ratio of live/virtual testing from 90/10 to 40/60. It did this by initially building a digital twin of a generic engine model and updating it over time, instead of trying to build a separate digital twin for every engine in its catalogue.

Traditional long cycle test burns are very expensive. To date, IntelligentEngine is estimated to have saved Rolls-Royce tens of millions of dollars.

What next?

We can look to such private companies as good examples, whilst appreciating that their goals and circumstances differ markedly from those of defence.

But how do you establish a digital twin programme? There's a lot to do first in data strategy and data governance - the architecture of how data will be managed, and the protocols around it.

And there are critical ‘soft’ cultural factors too, like breaking down organisational silos. For example, it is important to fight the possessiveness that some personnel and teams have over data. That mentality (and the silos that result from it) must be broken down in order to facilitate the information sharing required to populate the digital thread - and, consequently, the digital twin.

Boeing: driving cultural change

Boeing is a trailblazer here. It pushed hard for the use of digital engineering and its associated technologies, but recognised that internal adoption would be hard. As such, it developed a strategy to start small, demonstrate value and scale up.

The company had a large stakeholder base that was wedded to the ‘safe’ way that things had ‘always been done’. In order to challenge this culture, Boeing decided to start small, running some minor modelling projects in a clearly defined way. It then demonstrated how to visualise the data from these models, proving its value to the organisation’s sceptics. Once this was done, Boeing took the opportunity to roll out this digital approach elsewhere across the organisation in a controlled way, scaling from there.

Data governance: the importance of the repository

Data and technology development go hand in hand - new technologies drive better adoption and better use of data, and more sophisticated data use helps to innovate new technologies.

But what’s the best way to approach this growing volume of data? Multiple information sources should be pulled into a data repository with a well-defined metadata structure that sits on top of everything. Without this, we are left with a large amount of data that is very hard to understand: we cannot easily tell what the data is, where it came from, how good its quality is, or what we can do with it.
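In practice, this means every dataset entering the repository carries a small metadata envelope describing what it is, where it came from, and how much it can be trusted - and ingest is refused (or flagged) when the envelope is incomplete. The field names below are illustrative assumptions, not a real schema.

```python
# Hypothetical minimal metadata contract for repository entries.
REQUIRED_METADATA = {
    "dataset_id",     # unique name for the dataset
    "origin",         # e.g. "simulation" or "live_test"
    "collected_at",   # when the data was gathered
    "quality_grade",  # an agreed quality/confidence rating
    "description",    # human-readable context
}

def validate_entry(metadata: dict) -> list:
    """Return the required metadata fields that are missing, if any."""
    return sorted(REQUIRED_METADATA - metadata.keys())

entry = {
    "dataset_id": "wind-tunnel-007",
    "origin": "live_test",
    "collected_at": "2022-06-14",
    "description": "Wing section pressure taps, run 7",
}
print(validate_entry(entry))  # ['quality_grade'] - flagged before ingest
```

The point is not the specific fields but the discipline: a repository without an enforced metadata structure quickly becomes the hard-to-understand data pile described above.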

This takes us to modelling and simulation, where we begin to develop and test the digital model. From here, we move to the final step - virtualisation and analysis - where a large volume of data and predictions are incorporated into the model, ready to be exploited. Now, the power of digital engineering is unlocked, and continues to grow in proportion to the development of this model.

The new role of live testing

T&E continues to increase the amount of simulation in the product lifecycle - but live testing is still vital. However, its role changes - shifting from live being the sole point of data gathering, to something that validates the model, feeding back into the whole digital process. Live testing also requires strong data governance, in order to be properly integrated into the digital thread.
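Live testing as model validation can be made concrete with a simple acceptance check: compare the twin’s predictions against measured values and accept the model only while the worst-case relative error stays inside an agreed tolerance. This is a hedged sketch of the idea; the function, inputs, and 5% tolerance are assumptions for illustration.

```python
def validate_model(predicted, measured, tolerance=0.05):
    """Compare model predictions with live-test measurements.

    Returns (passed, worst_relative_error): the model 'passes' only if
    every prediction is within the agreed relative tolerance of the
    corresponding measurement.
    """
    errors = [abs(p - m) / abs(m) for p, m in zip(predicted, measured)]
    worst = max(errors)
    return worst <= tolerance, worst

# Hypothetical example: three predicted vs. measured load values.
ok, worst = validate_model(
    predicted=[101.0, 205.0, 298.0],
    measured=[100.0, 200.0, 300.0],
)
print(ok, round(worst, 3))  # True 0.025 - within the 5% tolerance
```

Each live test that passes (or fails) a check like this feeds straight back into the digital thread, tightening or correcting the model for the next cycle.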

If we want to ensure that we fully utilise digital twins, we must, when possible, ‘turn up’ the volume on live test cycles. This will allow us to increase our confidence in digital models, prove that they work in increasingly complex circumstances, and use them more broadly.