Testing for space: embracing change
Testing technology is difficult and intricate at the best of times. Testing technology for space is in a league of its own.
The space industry presents particular complexities for testing, for two reasons. Firstly, testing requires replicating the environment in which the hardware will be used. In space, these environments are nothing short of extreme – huge temperature variations, and the immense vibrations of launch. Earth's space radiation environment is also challenging to recreate in a test. Secondly, in most other industries, if something goes wrong you can react and fix it. That is rarely possible once hardware is in space.
Waste of space?
Owing to these constraints, the space industry has lagged behind, holding on to traditional testing approaches. The product being tested, say an ion engine, often has to run for its full lifetime duration to see it through all stages – possibly an entire decade. Couple this with the industry's risk-averse outlook, and there is a perfect recipe for this lag to persist.
Now that more – and increasingly commercial – players in space are utilising Commercial Off The Shelf (COTS) technology, the 'NewSpace' era is sending shockwaves across the field. For instance, the launch market is growing. Previously, flight hardware was designed with just one launcher in mind. Now, owing to sheer competition, the aim is to qualify for multiple launchers, requiring broader specification levels and forward-thinking design. This entails greater time spent in the concept phase, plus enhanced testing. But the latter hasn't materialised. Testing remains mostly unchanged, restricting innovation simply because everything must be retested whenever a design changes. Decisions will have to be made as to whether to ride the wave or sink beneath it.
The merit of digital twins
As mentioned, there is little appetite for risk, so transforming the testing of spaceflight hardware demands a culture shift towards 'failing fast' – one that is incompatible with current methods. Proving the case for it is therefore tricky. Not only does the business need to be on side; the customer needs to accept the risk. Beyond this, the customer's insurers need to buy in, and they can be hesitant to insure a mission that hasn't considered all risks and isn't guaranteed to work. Ironically, this justification process is a time-consuming burden in itself, thereby reinforcing the case for staying as is.
The 'digital twin' approach has revolutionised some industries, and has been deemed the Holy Grail of test and evaluation. Digital twins are digital replicas of a physical entity, which can be used to perform accelerated testing or to predict failures – massively reducing the cost and schedule risk burden on major test programmes.
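To make the idea concrete, the sketch below shows – in highly simplified form – how a digital twin can compress a life test. A toy erosion model of an ion engine's grid is calibrated against a short run of real test telemetry, then run forward in simulation to predict lifetime. All function names, numbers, and the linear erosion model itself are hypothetical illustrations, not any real engine's physics.

```python
import math

# Illustrative sketch only: a toy "digital twin" of an ion engine's
# grid erosion. The model, telemetry values, and failure threshold
# are all hypothetical.

def erosion_model(hours, rate):
    """Fraction of grid material eroded after `hours` of firing (toy linear model)."""
    return rate * hours

def calibrate(telemetry):
    """Fit the erosion rate from (hours, measured_erosion) pairs by least squares."""
    num = sum(h * e for h, e in telemetry)
    den = sum(h * h for h, _ in telemetry)
    return num / den

def predict_lifetime(rate, failure_threshold=0.30):
    """Simulated hours of firing until erosion reaches the failure threshold."""
    return failure_threshold / rate

# Only 1,000 hours of physical testing on the stand, rather than a
# decade-long life test; the twin extrapolates the rest.
telemetry = [(200, 0.0021), (500, 0.0049), (1000, 0.0102)]

rate = calibrate(telemetry)
print(f"Predicted lifetime: {predict_lifetime(rate):,.0f} hours")
```

The point is the workflow, not the model: a short, cheap physical test calibrates the twin, and the expensive remainder of the campaign moves into simulation, where design changes can be re-evaluated without restarting a ten-year burn.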
Meeting in the middle
Of course, implementing digital twins will not happen overnight. But to continue as we are will only make the leap greater in the future.
In the meantime, making current testing facilities more effective will go some way. Efficiencies can be offered to industry via national testing facilities, such as Oxford's Harwell services. UK companies can access these for their specific needs, rather than each building their own facility, streamlining the entire testing process. But again, this calls for collaboration and collective change; without it, we cannot build our own path towards the inevitable digital future.