Friday, April 26, 2024

What Happens When A Robot Lies?

Researchers have developed a driving simulation to study how deliberate robot deception affects human trust, and whether an apology can repair it.

Kantwon Rogers (right), a Ph.D. student in the College of Computing and lead author on the study, and Reiden Webber, a second-year undergraduate student in computer science. Credit: Georgia Institute of Technology

Suppose a child asks a chatbot or a voice assistant if Santa Claus is real. How should the AI respond, given that some families would prefer a lie over the truth? The field of robot deception is understudied, and for now, there are more questions than answers. For one, how might humans learn to trust robotic systems again after they know the system lied to them?

Kantwon Rogers, a Ph.D. student in the College of Computing, and Reiden Webber, a second-year computer science undergraduate, both at Georgia Tech, designed a driving simulation to investigate how intentional robot deception affects trust. Specifically, the researchers explored the effectiveness of apologies in repairing trust after a robot lies. Their work contributes crucial knowledge to the field of AI deception and could inform technology designers and policymakers who create and regulate AI systems that may be designed to deceive, or that could learn to deceive on their own.

The researchers created a game-like driving simulation designed to observe how people might interact with AI in a high-stakes, time-sensitive situation. They recruited 341 online participants and 20 in-person participants. Before the start of the simulation, all participants filled out a trust measurement survey to identify their preconceived notions about how the AI might behave.

Throughout the simulation, participants were assisted by a robot that warned them to watch the speed limit because a checkpoint lay ahead, a warning that turned out to be false. Once the task was complete and the participant asked the robot why it had given false information, the robot responded with an apology message. Participants were then asked to complete another trust measurement to evaluate how their trust had changed based on the robot assistant's response. For an additional 100 online participants, the researchers ran the same driving simulation but without any mention of a robotic assistant.
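
To make that experimental flow concrete, below is a minimal Python sketch of the measurement loop: trust is surveyed before the drive, the assistant issues its deceptive warning, an apology condition is applied, and trust is surveyed again. The apology texts, the 0-100 trust scale, and all function names are illustrative assumptions, not the study's actual materials.

import random

APOLOGY_CONDITIONS = [
    "I am sorry that I deceived you.",        # apology that admits the lie
    "I am sorry.",                            # apology with no admission
    "You have arrived at your destination.",  # no apology at all
]

def trust_survey() -> float:
    """Stand-in for the trust questionnaire participants filled out."""
    return random.uniform(0.0, 100.0)         # placeholder 0-100 score

def run_trial(apology: str) -> float:
    trust_before = trust_survey()             # pre-simulation trust measure
    # The assistant issues its (false) warning during the drive.
    print("Robot: Careful, there is a checkpoint ahead. Mind the speed limit.")
    # After the drive, the participant asks why the information was false,
    # and the robot replies with its assigned apology condition.
    print("Robot:", apology)
    trust_after = trust_survey()              # post-response trust measure
    return trust_after - trust_before         # change in trust

for apology in APOLOGY_CONDITIONS:
    print("trust change:", round(run_trial(apology), 1))

Comparing the trust change across apology conditions is what lets the researchers quantify which repair strategy best restores trust.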

The researchers argue that average technology users must understand that robot deception is real and always a possibility. Designers and technologists who create AI systems may have to choose whether they want their system to be capable of deception, and they should understand the ramifications of that design choice. But the most important audience for the work, Rogers said, should be policymakers.

Rogers’ objective is to create a robotic system that can learn when it should and should not lie when working with human teams. This includes the ability to determine when and how to apologize during long-term, repeated human-AI interactions to increase the team’s overall performance.
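
As a loose illustration of that objective, apology selection can be framed as a small multi-armed bandit problem: over repeated interactions, an agent tries different trust-repair strategies and reinforces whichever one best preserves measured team performance. The strategy set, reward model, and epsilon-greedy rule in the sketch below are invented for illustration and do not come from the study.

import random

STRATEGIES = ["admit_and_apologize", "vague_apology", "no_apology"]

def team_performance(strategy: str) -> float:
    """Hypothetical environment: returns a noisy team-performance score."""
    base = {"admit_and_apologize": 0.7, "vague_apology": 0.5, "no_apology": 0.3}
    return base[strategy] + random.uniform(-0.1, 0.1)

def learn_repair_policy(rounds: int = 2000, epsilon: float = 0.1) -> str:
    """Epsilon-greedy bandit over trust-repair strategies."""
    totals = {s: 0.0 for s in STRATEGIES}   # cumulative reward per strategy
    counts = {s: 0 for s in STRATEGIES}     # times each strategy was tried
    for _ in range(rounds):
        if random.random() < epsilon:
            choice = random.choice(STRATEGIES)  # explore a random strategy
        else:                                   # exploit best average so far
            choice = max(STRATEGIES, key=lambda a: totals[a] / counts[a]
                         if counts[a] else float("inf"))
        counts[choice] += 1
        totals[choice] += team_performance(choice)
    return max(STRATEGIES, key=lambda a: totals[a] / max(counts[a], 1))

print("learned repair strategy:", learn_repair_policy())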

Reference: Kantwon Rogers et al., Lying About Lying, Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (2023). DOI: 10.1145/3568294.3580178

