Today, QinetiQ launches the “Enacting Prototype Warfare” report – the result of a year-long consultation with military, industry and academic stakeholders on the complexities of human-machine teams and the impacts they will have on Land forces. Over the coming weeks, we’ll explore the logic, the applications and the implications of human-machine teams. This short blog examines the strategic rationale for human-machine teaming, along with an initial consideration of the technology’s ethical and legal repercussions.
The rationale for creating teams of humans working with Robotic and Autonomous Systems (RAS) has developed over many years. The path has not been linear, but it is now understood that Land force generation in the digital era relies upon the complementarity of humans and machines. First, though, it’s worth recapping why the combination of humans and machines on the battlefield offers so much benefit for Land forces.
Benefits of human-machine teaming
Stand-off capabilities have long been a feature of Air and Maritime environments, but the complex, congested, contested and constrained Land environment is rarely so permissive.
Effective adoption of RAS promises to reduce risk to life and assets in a way that has previously been impossible on land. RAS can replace humans in dull, dirty, dangerous and demanding tasks, freeing soldiers to operate where their unique skills achieve the greatest impact.
RAS can expand the scale, direction and velocity of threats presented to an adversary, forcing multiple simultaneous dilemmas on them, to the point of overwhelming their decision-making.
And RAS allows this without the increase in manpower that traditional manned assets require. In this way, RAS drives an increase in the tempo of military operations and offers tactical opportunities that manned capability alone cannot match.
Reducing cognitive burden
Technological advances in sensing provide ever-greater volumes of information. This can create an operational advantage but also places complex demands on soldiers’ information-processing capabilities. These demands render soldiers vulnerable to cognitive overload, which can result in errors.
RAS can take on this responsibility – gathering and processing data either at the edge (within the equipment that has gathered the data) or via centralised data stores. These systems never experience fatigue or overload, and they can present the human in the loop with analysed data, freeing them to make faster, better decisions.
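To make the edge-processing idea concrete, here is a minimal, hypothetical sketch of how an edge node might triage raw sensor detections before anything reaches the operator. The `Detection` type, thresholds and labels are illustrative assumptions, not part of any real RAS system described in the report.

```python
from dataclasses import dataclass

# Hypothetical illustration: an edge node filters raw sensor detections
# and forwards only high-confidence, high-priority items to the operator,
# reducing the volume of data a human must review.

@dataclass
class Detection:
    sensor_id: str
    label: str
    confidence: float  # 0.0 to 1.0

def triage_at_edge(detections,
                   min_confidence=0.8,
                   priority_labels=("vehicle", "person")):
    """Keep only detections worth a human's attention; discard the rest."""
    return [d for d in detections
            if d.confidence >= min_confidence and d.label in priority_labels]

raw = [
    Detection("uav-1", "vehicle", 0.95),
    Detection("uav-1", "foliage", 0.99),  # high confidence but low priority
    Detection("ugv-2", "person", 0.40),   # priority label but low confidence
    Detection("ugv-2", "person", 0.88),
]

for_operator = triage_at_edge(raw)
# Only detections meeting both criteria reach the human in the loop.
```

The design point is simple: the filtering logic runs on the equipment that gathered the data, so the human sees two analysed items instead of every raw reading.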
Ethical considerations for RAS
RAS offers unique opportunities – but the ethical frameworks and regulations guiding its use are incomplete. As such, it’s vital that these guidelines and regulations evolve in tandem with the technology, ensuring that RAS strategy remains compliant with the Laws of Armed Conflict.
As legislation struggles to match the pace of technological progress, law alone may be insufficient. RAS developers and users must exercise good judgement in developing operating concepts to avoid actions that could later be deemed unethical or illegal.
Organisations should hold themselves accountable for ethics, establishing ‘red lines’ and formalising and enforcing them through the following steps:
- Create a RAS-focused Ethics Committee – comprising senior leadership, corporate responsibility professionals, legal advisors and other key stakeholders
- Agree an independent principles charter – specifying what outcomes the organisation deems unacceptable for its technology
- Share the charter – through doctrine, tactics and procedures, with all personnel and key stakeholders, to direct the ethical and legal use of RAS
- Monitor ethical use – continuously, as RAS adoption grows
New technologies have always pushed regulatory boundaries and regulations have always adapted to accommodate them. However, if RAS-related regulations are to keep pace with technological developments, this process must be accelerated through closer collaboration.