
New models can sense human trust in smart machines

Researchers have created new “classification models” that can sense how well humans trust intelligent machines they collaborate with.

The research aims to improve the quality of interactions and teamwork between people and their robotic counterparts. It was led by assistant professor Neera Jain and associate professor Tahira Reid of Purdue University's School of Mechanical Engineering.

“Intelligent machines, and more broadly, intelligent systems are becoming increasingly common in the everyday lives of humans,” Jain said. “As humans are increasingly required to interact with intelligent systems, trust becomes an important factor for synergistic interactions.”

The researchers developed two types of "classifier-based empirical trust sensor models" to improve trust between humans and intelligent machines.

The models use two different techniques that provide data to ‘measure’ trust: electroencephalography (EEG) and galvanic skin response. The first records brainwave patterns, and the second monitors changes in the electrical characteristics of the skin, providing psychophysiological “feature sets” correlated with trust.
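The article does not publish the study's actual feature definitions, but the idea of turning raw EEG and galvanic skin response signals into a psychophysiological feature vector can be sketched roughly like this (the band choice and GSR statistics here are illustrative assumptions, not the paper's feature set):

```python
import numpy as np

def eeg_band_power(signal, fs, band=(8.0, 12.0)):
    """Mean spectral power in a frequency band (here the alpha band), via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def trust_features(eeg, gsr, fs):
    """Combine an EEG band-power feature with simple GSR statistics."""
    return np.array([
        eeg_band_power(eeg, fs),      # EEG alpha-band power
        gsr.mean(),                   # tonic skin-conductance level
        np.abs(np.diff(gsr)).mean(),  # phasic GSR activity
    ])

# Example: one second of synthetic data sampled at 256 Hz
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)  # a pure 10 Hz "alpha" tone
gsr = np.full(fs, 2.0)            # flat skin conductance
features = trust_features(eeg, gsr, fs)
```

A feature vector like this, computed over a sliding window, is what lets the classifier run on a continuous stream rather than on isolated post-event epochs.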

The models were trialled on 45 human subjects, achieving mean accuracies of 71.22 per cent and 78.55 per cent, respectively.

It is the first time EEG measurements have been used to gauge trust in real time.

“We are using these data in a very new way,” Jain said. “We are looking at it in sort of a continuous stream as opposed to looking at brain waves after a specific trigger or event.”

The findings of the research are detailed in a research paper appearing in a special issue of the Association for Computing Machinery’s Transactions on Interactive Intelligent Systems. The journal’s special issue is titled “Trust and Influence in Intelligent Human-Machine Interaction.”

“We are interested in using feedback-control principles to design machines that are capable of responding to changes in human trust level in real time to build and manage trust in the human-machine relationship,” Jain said. “In order to do this, we require a sensor for estimating human trust level, again in real-time. The results presented in this paper show that psychophysiological measurements could be used to do this.”

The issue of human trust in machines is important for the efficient operation of “human-agent collectives”, she added.

The models they developed are classification algorithms.

“The idea is to be able to use these models to classify when someone is likely feeling trusting versus likely feeling distrusting,” Reid said.
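As a minimal sketch of what such a trust/distrust classifier does, here is a nearest-centroid classifier over two synthetic feature clusters. The features, cluster locations, and classifier choice are all assumptions for illustration; the study's own models and data are not reproduced here:

```python
import numpy as np

# Hypothetical psychophysiological feature vectors (e.g. an EEG band-power
# feature and a GSR statistic), drawn from two well-separated clusters.
rng = np.random.default_rng(0)
trusting = rng.normal(loc=[1.0, 0.2], scale=0.1, size=(20, 2))
distrusting = rng.normal(loc=[0.2, 1.0], scale=0.1, size=(20, 2))

X = np.vstack([trusting, distrusting])
y = np.array([1] * 20 + [0] * 20)  # 1 = trusting, 0 = distrusting

# Nearest-centroid rule: label a sample by the closer class mean.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def classify(sample):
    dists = {c: np.linalg.norm(sample - m) for c, m in centroids.items()}
    return min(dists, key=dists.get)

# Training accuracy on the synthetic data
accuracy = np.mean([classify(x) == label for x, label in zip(X, y)])
```

Because the clusters here are cleanly separated, this toy classifier scores near-perfectly; real psychophysiological data overlap far more, which is why the reported accuracies sit in the 71 to 79 per cent range.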

Jain and Reid have also investigated trust levels related to gender and cultural differences, as well as dynamic models able to predict how trust will change in the future based on the data.

BY SHACK15

Co-working space and blog dedicated to all things data science.
