Purdue researchers are working to reduce the “trust gap” between humans and artificial intelligence.
Researchers hope to do this by making AI better at interfacing with humans in more nuanced ways.
Even as various chatbots become more popular, researchers say responses tend to be stilted – and feel more like Wikipedia entries than actual human speech.
Aniket Bera is an associate professor of computer science at Purdue. He says that can make it more difficult for people to trust what they hear from a chatbot.
“We need to bridge the gap — the trust gap,” he said. “I feel one of the ways we’re trying to do is trying to make our technologies understand humans better and at the same time respond to humans better.”
Bera says cultural nuances are difficult to capture, and artificial intelligence can’t always adequately respond to those differences.
That’s especially important when it comes to therapy – an area where AI chatbots are increasingly being used to meet people’s needs.
Bera said complex cultural differences are something even humans can miss.
“The first time I spoke to a therapist, I felt like in the U.S., my cultural background is not being captured,” he said. “They don’t understand what I’m going through because I come from a different culture. I understand things very differently. That’s one aspect.”
Another aspect of Bera’s research involves training AI to understand different human emotional responses. “I’m fine” can carry very different meanings depending on intonation – something AI currently isn’t able to read in a meaningful way.
Bera said chat therapy is not intended to replace existing therapists but to bridge the gap for people who aren’t able to regularly meet with real-world specialists.
He said the research is still 10 to 15 years away from making AI meaningfully better at engaging with humans in these more complex ways.