Robots can work better with concealed identity: Study
Robots are more persuasive when they pretend to be human, but once they disclose their non-human nature, their efficiency is compromised, a new study has revealed.
Recent technological breakthroughs in artificial intelligence (AI) have made it possible for machines or robots to pass as humans. A team of researchers led by Talal Rahwan, associate professor of Computer Science at New York University Abu Dhabi (NYUAD), conducted an experiment to study how people interact with robots they believe to be human, and how such interactions change once the robots reveal their identity.
The researchers found that robots are more efficient than humans at certain human-machine interactions, but only if they are allowed to hide their non-human nature. In their paper titled 'Behavioural Evidence for a Transparency-Efficiency Tradeoff in Human-Machine Cooperation' published in Nature Machine Intelligence, the researchers presented their experiment in which participants were asked to play a cooperation game with either a human associate or a bot associate.
This game, called the Iterated Prisoner's Dilemma, was designed to capture situations in which each of the interacting parties can either act selfishly in an attempt to exploit the other, or act cooperatively in an attempt to attain a mutually beneficial outcome.
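The structure of the game can be sketched in a few lines of code. This is a minimal illustration of the standard Iterated Prisoner's Dilemma, not the study's actual implementation; the payoff values below are the conventional textbook ones (temptation > reward > punishment > sucker's payoff) and are an assumption.

```python
# Illustrative payoff matrix for the Prisoner's Dilemma.
# "C" = cooperate, "D" = defect. Values are the conventional
# T=5 > R=3 > P=1 > S=0 ordering, assumed here for illustration.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both earn the reward R
    ("C", "D"): (0, 5),  # cooperator gets sucker's payoff S, defector gets temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: both earn the punishment P
}

def play_round(move_a: str, move_b: str) -> tuple[int, int]:
    """Return the pair of payoffs for one round, given each player's move."""
    return PAYOFFS[(move_a, move_b)]

def play_iterated(moves_a, moves_b) -> tuple[int, int]:
    """Accumulate payoffs over repeated rounds (the 'iterated' part)."""
    score_a = score_b = 0
    for a, b in zip(moves_a, moves_b):
        pa, pb = play_round(a, b)
        score_a += pa
        score_b += pb
    return score_a, score_b
```

Because the game is repeated, each party's willingness to keep cooperating depends on the other's past behaviour, which is what lets the researchers measure cooperation rates with human versus bot partners.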
The researchers gave some participants incorrect information about the identity of their associate. Some participants who interacted with a human were told they were interacting with a bot, and vice versa. Through this experiment, researchers were able to determine whether people are prejudiced against social partners they believe to be robots, and assess the degree to which such prejudice, if it exists, affects the efficiency of bots that are transparent about their non-human nature.
The results showed that robots posing as humans were more efficient at persuading the partner to cooperate in the game. However, as soon as their true nature was revealed, cooperation rates dropped and the bots' superiority was negated.
"Although there is broad consensus that machines should be transparent about how they make decisions, it is less clear whether they should be transparent about who they are," said Rahwan.
"Consider, for example, Google Duplex, an automated voice assistant capable of generating human-like speech to make phone calls and book appointments on behalf of its user. Google Duplex's speech is so realistic that the person on the other side of the phone may not even realise that they are talking to a bot. Is it ethical to develop such a system?"
He added: "Should we prohibit robots from passing as humans, and force them to be transparent about who they are? If the answer is 'Yes', then our findings highlight the need to set standards for the efficiency cost that we are willing to pay in return for such transparency."