A fine balance
Harnessing the value of an ever-widening range of innovation, from artificial intelligence to synthetic training, now requires an agreed way forward on guidelines and standards that can underpin safe and effective deployment. But each of these technologies presents unique factors that affect not only how it should be regulated, but how much regulation is required to balance safety and accountability with enabling the vital flow of innovation needed to maximise its potential in the field.
Autonomous systems are a particularly pertinent example. Rapid development of autonomous technology is largely being driven outside the highly regulated markets of defence, security and critical infrastructure (CI). The scale of the benefit is such that businesses are eager to exploit it, and so autonomy is already being adopted in commercial environments without an in-depth understanding of the regulatory and legal requirements needed to underpin its appropriate use. The mismatch between the time taken to realise its potential and the time required by regulators to develop the necessary codes and practices is giving rise to vulnerabilities that could never be acceptable in hostile environments.
So while the prospect of generating advantage from autonomy is engaging, the defence, security and CI sectors are at an impasse. The potential for adversaries to use these commercially driven technologies is growing rapidly, and those adversaries are rarely bound by similar obligations or subject to the rules of engagement. Deterrents therefore need to keep pace and be clear, present and available. But without suitable regulatory guidance and approved practices there are considerable risks to rapid deployment that will prevent autonomy from moving from theory to practical adoption.
Primarily, there is a risk to operational safety. The impact that a malfunctioning autonomous system could have on people, the environment, critical infrastructure, or political stability will vary for each system depending on its design, size and operational context. A supportive regulatory system will need to reflect this diversity and provide guidance and standards designed to reduce the risk of failure across all instances of use.
There is also a risk that an absence of regulation will reduce clarity of accountability, which is vital for maintaining standards through clear, designated avenues for redress. This is particularly relevant in the world of autonomous systems. In the non-autonomous world, the presence of someone in charge with clear accountability provides the necessary legal link between a wrongdoer and their employer; for an autonomous system there will always be a question over who (or what) was at fault. That inability to quantify regulatory risk often leads those developing or deploying such novel technologies to deprioritise compliance. Appropriate regulation makes compliance a necessity and moves it directly onto the critical path, raising awareness of its ability to reduce risk and protect profitability.
And there is a risk of failing to secure and maintain public support. Public perception is more likely to be positive if the move to autonomy is gradual and supported by a clear and considered regulatory structure. Given the emotive debates that autonomous systems stimulate and the current pace of technical development, the risk of losing public support for their use in defence, security and CI is high unless a suitable regulatory structure is clearly and visibly in place.
But by far the greatest danger arising from the need to regulate is that, in addressing the risks outlined above, we retreat to the safety of over-regulation and stifle the much-needed translation of commercial innovation into deployable defence capability. When change of this magnitude occurs in any complex environment, the tendency is to seek solace in risk avoidance, especially when the difference between what comes next and what went before is so remarkably stark. The challenge of balancing this equation must be tackled if we are to avoid either exposing ourselves to unnecessary risk or overburdening the sector with a level of bureaucracy that reduces our ability to employ autonomy as part of an effective defence strategy.
Addressing the formidable challenge of achieving that balance requires two significant changes to the way we approach the adoption of emergent technologies:
- There needs to be a cultural shift within the regulatory community from risk avoidance to risk management, identifying and mitigating risks as they emerge along the technology roadmap. There are many examples of technology development where regulation has followed commercialisation and deployment, dealing with risks as they materialise. This may require defence to develop innovative approaches to safety management, enabling deployment into capability without a complete set of regulatory instruments, while understanding the levels of risk involved.
- To make this approach successful, defence, security and CI need to be able to call on a far more agile test and evaluation process than they have employed before – one that can match the pace of technical development. It must adapt to accommodate the unpredictable direction of innovation and rapidly deliver accurate guidance on the changing risk profile of new iterations of autonomous technologies across a variety of hostile environments. With a faster and more flexible way to assess fitness for purpose, safety and security credentials, and potential vulnerabilities, regulatory teams can embark on the challenge of building a more ‘organic’ set of regulatory instruments that move and shift in line with the pace and path of commercial innovation as it flexes – safely accelerating the integration of autonomy into defence, security and CI strategies.
As more emergent technologies begin to show promise in the defence, security and CI environments, closing the gap between theoretical relevance and practical deployment will be a crucial factor in maximising their potential to affect outcomes. Establishing appropriate supporting regulation is a key part of that. Autonomous systems in particular are exposing the constraints of current instruments but they are also highlighting the steps required to deliver the proactive regulation and governance that will enable autonomy to become a widespread deployable defence and security mechanism.