The role of trust in teaming humans and machines

31/03/2022

Caren Soper, Principal Human Factors Specialist, QinetiQ


Trust in the ‘Second Machine Age’
It’s probable that no other profession or institution relies on trust more than the military. Indeed, trust, rather than control, shapes modern military command philosophies; strip trust away, and no organisation can survive in an ever-evolving world.

The so-called ‘Second Machine Age’, with its focus on automating cognitive rather than manually intensive tasks, casts new light on traditional notions of trust. The significant growth of Robotic and Autonomous Systems (RAS) within defence has been widely lauded – the British Army recently unveiled plans to make greater use of RAS to prepare its forces for future battlefield challenges, and automation is considered particularly effective for dull, dirty and dangerous tasks. However, RAS bring a range of ‘human’ issues with significant safety, legal and ethical implications. This piece explores why developing trust in RAS is paramount to ensuring they are fully accepted and used appropriately to meet mission goals legally, safely and ethically.

Enhancing collaboration
The scope and scale of autonomy in the military are expected to rise dramatically over the coming years, and this will require a change in approach as the autonomous system (AS) becomes more of a collaborative team member. High-performing, collaborative human teams depend on deep trust to deliver mission-critical tasks effectively. This is where design comes into play: it will be critical to create features and functionality that enable and enhance collaborative relationships between the human element and the AS.

With this in mind, we have developed a construct that helps identify key design features to be included in highly autonomous systems to enhance trust. The findings indicate that the system should be understandable, transparent, humanised and intuitive, with further aspects of AS design and performance, such as repeatability and reliability, found to heighten trust. The construct draws on two compatible approaches. The first is anthropomorphism, an inference process that attributes human-like characteristics to machines, such as the capacity for conscious feeling, which can help to enhance trust. The second is a three-layered model covering ‘dispositional trust’, an individual’s predisposed tendency to trust; ‘situational trust’, which relates to the context, such as the environmental setting; and ‘learned trust’, which is particularly relevant as it relates to the design features that may affect perceptions of performance and the level of trust. The model provides a new lens for conceptualising the human-related aspects of trust development and can be used as a basis for the design of autonomous vehicles.
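
To make the layered structure concrete, here is a minimal sketch in Python; the class, the 0–1 scores and the equal weighting are hypothetical illustrations, not part of the published model.

```python
from dataclasses import dataclass

@dataclass
class TrustAssessment:
    """Illustrative encoding of the three-layered trust model.

    Fields are scored 0.0-1.0. Both the fields and the simple
    averaging below are hypothetical, for illustration only.
    """
    dispositional: float  # predisposed tendency to trust
    situational: float    # context, e.g. the environmental setting
    learned: float        # shaped by design features and perceived performance

    def overall(self) -> float:
        # Naive equal weighting; a real model would weight the layers
        # differently and update 'learned' as experience accumulates.
        return (self.dispositional + self.situational + self.learned) / 3

# Example: a cautious operator, a benign setting, a reliably performing AS
print(TrustAssessment(dispositional=0.4, situational=0.7, learned=0.8).overall())
```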

A greater degree of trust can be achieved by incorporating design features in five key areas, alongside higher levels of anthropomorphism, as has been found in trials with uncrewed vehicles (UxVs). The five areas, encoded as a simple checklist in the sketch after this list, are:

  • Transparency - the explicit portrayal of the inner workings and logic of the AS
  • Appearance - a well-designed interface that is aesthetically pleasing, with anthropomorphic features including name, gender and appropriate essential characteristics
  • Ease of use - the provision of enhanced system usability and visual clarity of data, with ongoing salient feedback
  • Communication style - the use of verbal communication, instead of text, with human voice rather than synthetic speech
  • Level of operator control - highly autonomous machines may take the operator out of the loop altogether, but keeping the operator in the loop in some way can enhance trust
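
As a purely illustrative aid, the five areas can be expressed as a design-review checklist; the enum values and helper below are a hypothetical encoding, not an assessment tool from the construct itself.

```python
from enum import Enum

class TrustDesignArea(Enum):
    """The five design areas above; the wording here is an
    illustrative paraphrase, not any fielded tooling."""
    TRANSPARENCY = "inner workings and logic explicitly portrayed"
    APPEARANCE = "aesthetically pleasing, anthropomorphic interface"
    EASE_OF_USE = "usable system, clear data, ongoing salient feedback"
    COMMUNICATION_STYLE = "human voice rather than text or synthetic speech"
    OPERATOR_CONTROL = "operator kept in the loop in some way"

def trust_gaps(implemented: set[TrustDesignArea]) -> list[TrustDesignArea]:
    """Return the trust-relevant design areas not yet addressed."""
    return [area for area in TrustDesignArea if area not in implemented]

# Example: a prototype covering transparency and ease of use only
for gap in trust_gaps({TrustDesignArea.TRANSPARENCY, TrustDesignArea.EASE_OF_USE}):
    print(f"Not yet addressed: {gap.name.lower()} ({gap.value})")
```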

Trust built through training 
Impressive design, however, means nothing unless users are trained effectively in how the AS works. Trust needs to be built through training, and people need to fully understand how autonomous capabilities work in order to trust the functionality. Emerging training technologies such as Virtual Reality (VR) and Augmented Reality (AR) are vital for helping users understand a system’s plan, action or decision, and can go a long way towards enhancing operator trust. In the military context, UxVs should be seen as just another asset to be deployed to undertake missions and collect key data and information. Seen this way, much of the training gap concerns the capabilities and limitations specific to each UxV type, and can be closed through on-the-job knowledge, just as it would be for a new helicopter.


The ethics of removing humans from the equation
Looking further ahead, there is without doubt an aspiration for fully autonomous systems, with many hoping to remove the human element from the loop completely. This move brings with it many ethical and legal considerations. Where lethal effects and the potential loss of life come into play, all sorts of ethical dilemmas open up. One perspective is that full automation makes such scenarios easier to deal with, as there is no human in the loop; an alternative view is that these ethical considerations increase the need for the human element: personnel with the resourcefulness to respond to unplanned situations. Many argue that it is morally right for humans to be involved at some level in situations where human life is at risk.

An alternative approach that might help in such scenarios is adaptive autonomy, where the level of autonomy varies with the situation. As the scenario progresses, responsibility for a given task switches between the operator and the machine, depending on the circumstances. We found that the concept of adaptive autonomy had considerable merit from a user perspective, and it would allow human control to come into play, if need be, when, for example, life-or-death decisions must be made quickly.
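
To illustrate the idea, here is a minimal sketch of such a hand-over policy; the situation attributes, rules and thresholds are all invented for illustration. The point is the shape of the decision, not the numbers: control passes to whichever party the circumstances favour, with a human retained whenever life is at risk.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    lethal_effects_possible: bool  # could the task result in loss of life?
    time_pressure: float           # 0.0 (relaxed) to 1.0 (split-second)
    machine_confidence: float      # system's self-assessed confidence, 0.0-1.0

def allocate_task(s: Situation) -> str:
    """Hypothetical adaptive-autonomy policy: who takes the next task?

    The rules and thresholds are illustrative; a fielded policy would
    flow from doctrine, safety cases and legal review.
    """
    if s.lethal_effects_possible:
        # Keep a human at some level whenever life is at risk
        return "operator"
    if s.machine_confidence < 0.6:
        # Machine is unsure of itself; hand the task back
        return "operator"
    if s.time_pressure > 0.9:
        # Too fast for a human response; machine acts, operator monitors
        return "machine, operator monitoring"
    return "machine"

print(allocate_task(Situation(True, 0.95, 0.99)))   # -> operator
print(allocate_task(Situation(False, 0.5, 0.95)))   # -> machine
```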

Final thoughts
With greater functionality and more frequent use, user acceptance of and trust in RAS are likely to increase. The key, however, is maintaining that trust, and this will depend on the system functioning as expected. A crucial aspect will be full visibility of a system’s inner workings, enabling the user to understand exactly how it is operating. There is also a risk that users will start to place too much trust in systems: blind faith heralds a new set of complications and hazards, as the user might not be inclined or ready to intervene in critical situations.

A deeper dive into our thoughts on the importance of trust in defence capability can be found in The Trust Factor report – a series of short essays exploring the topic.