Artificial intelligence is getting chatty, but still hasn’t perfected the gift of gab

By Shi En Kim

An Amazon Echo smart speaker, which comes equipped with the artificially intelligent personal assistant Alexa. (Pixabay)

It’s one thing to be able to talk, but it’s another to converse. And artificial intelligence (AI) has yet to check off the latter. Even with tremendous progress in recent years, the technology still hasn’t achieved the level of naturalness that would make you want to trade your best friend or favorite coworker for a smart virtual assistant.

You can find talking AI systems almost everywhere these days: You may only need to look across the room in your home to find Alexa, Siri, or Google Home, or call a customer service hotline and engage with a chatbot. AI systems are so advanced that they can transcribe speech to text, recognize human faces, spot patterns in a haphazard heap of data, and defeat grandmasters at strategy games such as chess and Go. But they have yet to master the nuances and subtleties of the art of human conversation.

“We're very far away from the kinds of conversations that humans are used to” like simple chit-chat, the kind of dialogue in which “you can talk about anything,” said Susan E. Brennan, a cognitive scientist at Stony Brook University. She and two other panelists discussed AI’s conversational advances during a Feb. 9 session at this year’s virtual annual meeting of the American Association for the Advancement of Science.

Conversation itself is collaborative in nature, Brennan said. If conversation is a dance, the human and AI partners need to accomplish several “moves” to stay in sync: taking turns initiating the conversation topic, checking in with each other to make sure that the message is getting across, repairing misunderstandings, and adapting to each other’s responses. Achieving all these skills with AI is easier said than done. “Communication is more than just message passing,” said Brennan. She noted that humans use pauses, tone, and other nonverbal cues to convey a world of information. About 70% of interpersonal communication between two humans is nonverbal. And AI systems have yet to pick up on these cues in conversation because they still struggle to recognize the speaker’s intention behind the words.

Marilyn Walker, a computer scientist at the University of California, Santa Cruz, developed a bot named Athena in 2019 to push the conversational skills of AI as close to a human’s as possible. Athena struggled to find balance in a two-way conversation. It relied too much on semantics, choosing the topic that would most interest the human partner, to keep the conversation going. The AI was a good talker, but not as good a listener. Any conversation that flows organically requires the speaker to open up and be vulnerable. Athena’s users preferred to hear personal experiences and opinions from the AI rather than the fun facts it spouted. But the bot doesn’t have any personal experiences to draw on for a more humanlike conversation, said Walker, so such responses would have to be handcrafted by the engineers.

To improve a bot’s conversational abilities, Walker suggested that conversational AI systems “need to have more flexible strategies for dialogue and interaction.” She added that “this is going to require a deeper understanding of the users’ utterances and the context.”

Clear, two-way communication is key for AI systems to collaborate with human counterparts. In a 2017 experiment, researchers from Arizona State University assigned a team of humans and an AI system a specific task: reconnoiter an area using a remotely piloted aircraft in a synthetic environment, similar to the simulated reality of the movie The Matrix. They found that the AI “teammate” hoarded information rather than sharing it, which indirectly slowed the team down.

An AI teammate that can’t communicate effectively can bring down its team, said Nancy Cooke, a human systems engineer at Arizona State University who directed the experiment.

“One of the things that I think [AI] really needs … is [to] have a better understanding of the task — not just its task, but [also] what the other people are doing,” Cooke said. She aims to build a better awareness into her bot so that it can anticipate the needs of its teammates in future tasks, not just fulfill its solo duties.

The three researchers note that the goal of conversational AI isn’t always to make AI identical to a human. One capability that they consider unnecessary is full natural language processing, that is, imbuing AI with the exactness of human speech. Cooke thinks simpler communication already works well enough for AI systems in team-based tasks. We can also look to human-animal partnerships, such as military dogs and their human handlers, for examples of successful communication in a team that doesn’t rely on natural language.

The rule of conversation is that there are no rules: it’s challenging to anticipate all the possible things people can say in a conversation, said Brennan. But with the help of new learning and data analytics tools, AI might have a shot at rivaling humans at conversation.

“I will never say never,” said Brennan.

Shi En Kim (@goes_by_kim) is a freelance science writer and PhD student in molecular engineering at the University of Chicago. Read more of her work on her website shienkim.wordpress.com.

This story was edited by NASW member Rodrigo Pérez Ortega, who served as Kim's mentor during the NASW-AAAS Spring Virtual Mentoring Program.

Hero image credit: Pixabay.com
