Small Talk Just Might Be the Key to Better Relations Between Humans and A.I.s

Researchers in Utah used “chit chat” to get artificial intelligence to cooperate with humans while playing games. The implication is that rather than being a danger to humans, robots might become our allies.

Most discussion about artificial intelligence centers on the jobs it might take away or other ways it might inadvertently—or purposely—harm humans. But a recent study showed that there’s something simple we can do to foster cooperation between robots and ourselves. All it takes is a little trash talk.

Together with a team of researchers, Jacob Crandall, a computer science professor at Brigham Young University in Utah, created an algorithm that learned to cooperate with humans thanks to chit chat, which Crandall called “cheap talk.”


The researchers had human participants play several different games with the AI, measuring cooperation by how often the two sides worked together rather than against each other, as well as by their final scores. They used 472 games that required two-player interactions, including the well-known prisoner's dilemma, in which each player must decide whether to inform on the other in order to avoid jail time. The game is often used by social scientists to explore what conditions make people more likely to be cooperative or competitive in business, politics and social interactions. Crandall and his colleagues thought it would be a good test of whether the AI and the humans they were studying could cooperate, since it's a game in which both sides can potentially benefit.
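For readers who want to see the incentives concretely, here is a minimal Python sketch of a single prisoner's dilemma round. The payoff numbers are illustrative only and are not taken from the study.

    # One round of the prisoner's dilemma with illustrative payoffs
    # (higher is better). These numbers are not from the study; they
    # simply show why informing is tempting even though mutual silence
    # leaves both players better off.
    PAYOFFS = {
        ("stay quiet", "stay quiet"): (3, 3),  # both cooperate
        ("stay quiet", "inform"):     (0, 5),  # the informer walks free
        ("inform",     "stay quiet"): (5, 0),
        ("inform",     "inform"):     (1, 1),  # both inform, both do time
    }

    def play_round(move_a, move_b):
        """Return the (player A, player B) payoffs for one round."""
        return PAYOFFS[(move_a, move_b)]

    print(play_round("stay quiet", "inform"))  # (0, 5): cooperating alone is costly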

What made this experiment unique was that it included “chit chat.” The algorithm was programmed to say things like “I accept your last proposal” and “In your face!”, and the human participant then chose a response from a pre-set list. When chit chat was introduced, cooperation between the AI and the human doubled.
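As a rough idea of how such an exchange might be wired up, the Python sketch below pairs a machine message with a human reply drawn from pre-set lists. The lists are hypothetical stand-ins (apart from the two phrases quoted above), and the snippet is not the researchers' implementation.

    import random

    # Hypothetical pre-set message lists; the study's actual phrases included
    # "I accept your last proposal" and "In your face!" but are not reproduced here.
    MACHINE_MESSAGES = ["Let's both cooperate.", "I accept your last proposal.", "In your face!"]
    HUMAN_RESPONSES = ["Agreed.", "I don't trust you.", "Fine, you win this round."]

    def cheap_talk_exchange():
        """One non-binding message exchange before the players pick their moves."""
        machine_line = random.choice(MACHINE_MESSAGES)
        human_line = random.choice(HUMAN_RESPONSES)  # in the study, the person picks from a menu
        return machine_line, human_line

    machine_line, human_line = cheap_talk_exchange()
    print(f"Machine: {machine_line}")
    print(f"Human:   {human_line}")
    # The moves themselves are chosen separately; the talk does not bind either player.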

“[T]his learning algorithm learns to establish and maintain effective relationships with people and other machines in a wide variety of [games] at levels that rival human cooperation, a feat not achieved by prior algorithms,” the researchers wrote in the journal Nature Communications.

Crandall explained that not only could the AI and humans cooperate with one another, but the AI was also capable of being “all talk.” In other words, it sometimes said it was going to make one move and then made another. For example, it might say it was going to cooperate with the human so they would both avoid jail time, and then decide to inform on them after all. This comes after another recent study in which researchers found that AIs might have the ability to cheat on their human romantic partners.
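Because the talk is non-binding, what the machine says and what it actually plays are separate decisions. A hypothetical bluffing policy might look like the Python sketch below; the 20 percent bluff rate is a made-up number, and the study's algorithm decides when to keep or break its word in a far more deliberate way.

    import random

    def bluffing_policy(bluff_rate=0.2):
        """Announce cooperation, then follow through only some of the time."""
        announcement = "cooperate"
        move = "defect" if random.random() < bluff_rate else "cooperate"
        return announcement, move

    announcement, move = bluffing_policy()
    print(f"Says {announcement!r}, plays {move!r}")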

“Since Alan Turing envisioned AI, major milestones have often focused on defeating humans in zero-sum encounters,” the researchers wrote. “Our work demonstrates how autonomous machines can learn to establish cooperative relationships with people and other machines in repeated interactions. We showed that human–machine and machine–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.”