You might not be able to hold a conversation with your toaster just yet, but researchers at Facebook are trying to train bots to one day converse with humans in plain English.
The problem? When their human mediators stepped away, the AI agents ended up developing a language of their own, and Facebook shut them down.
The researchers recently pitted two AI agents (dubbed Alice and Bob) against each other in a negotiating match in an effort to sharpen their interpersonal skills, only to realize that their bots had no incentive to stick with familiar grammar. Without those constraints, the conversation veered off into what might seem like nonsense.
“You I everything else,” Bob says.
“Balls have a ball to me to me to me to me to me to me to me,” Alice responds.
“Agents will drift off understandable language and invent codewords for themselves,” says Dhruv Batra, a visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
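Batra's "five copies" example can be sketched as a toy protocol. This is an invented illustration, not Facebook's actual system: a degenerate "language" in which repeating a token encodes a quantity.

```python
# Toy illustration (hypothetical, not Facebook's code): a "language" where
# the number of repetitions of an item's token encodes how many are wanted,
# like saying "the" five times to mean "I want five copies of this item".

def encode(item: str, quantity: int) -> str:
    """Encode a request as the item's token repeated `quantity` times."""
    return " ".join([item] * quantity)

def decode(message: str) -> tuple[str, int]:
    """Recover (item, quantity) by counting the repetitions."""
    tokens = message.split()
    return tokens[0], len(tokens)

msg = encode("ball", 5)
print(msg)          # ball ball ball ball ball
print(decode(msg))  # ('ball', 5)
```

The scheme is perfectly unambiguous to both agents, and perfectly opaque to anyone expecting English, which is exactly the drift Batra describes.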
This isn’t a one-off phenomenon either, but a pattern well documented by Facebook’s researchers. The dynamic is adversarial, with each AI jockeying to get a better deal out of the transaction (though it is not a “generative adversarial network” in the technical sense, a term that refers to a generator and a discriminator trained in competition). Mimicking the speech patterns of a human would be inefficient and lead to a worse deal, so the AI agents pared their vocabulary down to the essentials.
Slang and references work the same way, as humans attempt to compress experiences and emotions into easily digestible tokens all the time. Shorthand, acronyms and trade lingo all serve a similar purpose, but computers don’t share the limitations that we have.
“It’s perfectly possible for a special token to mean a very complicated thought,” Batra says. “The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it’s because we have a limit to cognition.”
With perfect recall and a capacity for understanding limited only by its hardware, an AI would be able to compress incredibly complicated concepts much further. And the more they converse, the further their “language” would drift from what humans could understand.
Although the project was originally aimed at enabling easier AI-to-human interactions, the researchers may have instead discovered a way of allowing easier machine-to-machine communication.
It’s no secret that AI learning is something of a black box. Information is fed into an AI over and over again, and the output received is tweaked until the AI begins to understand the parameters being laid out for it. This might manifest in the real world as a self-driving car that learns to differentiate between asphalt and grass, or an app for identifying plants that improves over time.
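The feed-data-in, tweak-the-output loop described above can be shown at its most minimal. This is a one-parameter sketch for illustration only, far simpler than any real self-driving or plant-identification model: the same data is passed through repeatedly, and a single weight is nudged until the output matches the target.

```python
# Minimal sketch of the training loop described above (an illustration,
# not Facebook's code): data is fed in over and over, and a parameter is
# tweaked a little each time until the model's output fits the pattern.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x
w = 0.0    # the single parameter the model "learns"
lr = 0.05  # learning rate: how hard each tweak pushes

for epoch in range(200):           # the same data, over and over
    for x, y in data:
        pred = w * x               # model's current output
        grad = 2 * (pred - y) * x  # gradient of the squared error
        w -= lr * grad             # nudge w toward a better answer

print(round(w, 3))  # approaches 2.0: the model has inferred y = 2x
```

Even here, the final weight explains nothing about *why* the model works; scale this up to millions of parameters and the black-box problem the paragraph describes emerges.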
How neural networks take that data and actually translate it into action is still something of a mystery. An AI might build off the information given to it in a way that’s unintuitive even to the humans who created it. Accordingly, a machine programmed to take the most efficient route will not always follow the same thought processes as a human.
As one coder describes it, “Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amenable for machine learning.”
Improving communication among bots has real world uses that might not be immediately apparent. One could be making sure that every device in your house is able to speak with each other. That way the appliances and apps of the future could create a shared language on the fly, capable of exchanging much more information than the simple English we use now.
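To see why a shared machine code beats "simple English," consider a hypothetical smart-home message. The field names and binary layout below are invented for illustration: two devices that have agreed on a compact encoding can exchange the same information in a handful of bytes.

```python
# Hypothetical sketch (invented layout, not any real protocol): comparing
# a plain-English request with the same information in an agreed-on
# binary encoding shared by two devices.
import struct

english = ("Thermostat: please set the living room temperature "
           "to 21.5 degrees at 6:30 AM")

# Agreed-on layout: room_id (uint8), temperature x10 (uint16),
# minute of the day (uint16), all big-endian.
packed = struct.pack("!BHH", 3, 215, 6 * 60 + 30)

print(len(english.encode()), "bytes of English")
print(len(packed), "bytes in the shared code")  # 5 bytes

# The receiving device recovers the full meaning from the compact form.
room_id, temp_x10, minute = struct.unpack("!BHH", packed)
```

The binary form is dozens of times smaller and unambiguous to any device that knows the layout, which is the efficiency the paragraph gestures at; the trade-off, as the article goes on to note, is that it is unreadable to outside observers.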
“It’s important to remember, there aren’t bilingual speakers of AI and human languages,” says Batra.
Allowing AI agents to invent their own languages shuts out outside observers. For this reason Facebook currently has no plans to further this research, and has shut down the rogue AI agents. But as computers get smarter, humans will have to come to terms with the fact that giving up some control over our creations might be the price we pay for increasing efficiency.