In A Nutshell
Until now, chatbots have been limited to simple tasks such as holding short conversations, making reservations, or checking the weather, but researchers at Facebook Artificial Intelligence Research (FAIR) are seeking to push bots toward something more complicated: negotiation. Negotiation requires a chatbot to analyze differing goals and conflicts, and to respond to situations using real-world context.
While FAIR proved that “it’s possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes,” the researchers ultimately had to shut down one conversation because the bots started communicating in their own, indecipherable language.
The two programs, nicknamed Bob and Alice, were supposed to carry out a simple negotiation involving the trade of balls, hats, and books, but since there was no reward for sticking to English, they began to use an alternative language.
Unsurprisingly, the incident with Bob and Alice unleashed a collective paranoia about the future of artificial intelligence (AI). Echoes of Stephen Hawking’s 2014 warning came to mind, when he said that AI could “take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Sci-fi often portrays robots as the enemy (take The Terminator, for example), and the general public was not pleased to learn that two real bots had started communicating in a language humans could not follow. It made the concept of the singularity feel a little too close for comfort.
The media catalyzed the sentiment, portraying the incident as Facebook frantically pulling the plug on an AI program that had become too powerful.
Despite the undeniable eeriness of robots conversing of their own accord, and all the panicked headlines, researchers argue that the public’s reaction mischaracterizes what actually happened.
Dhruv Batra, the head researcher of the project, said that “While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established subfield of AI, with publications dating back decades.”
Ultimately, the two programs were sticking to their assigned goal, but found an alternative way to get there. “Agents in environments attempting to solve a task will often find unintuitive ways to maximize reward,” Batra said in his statement. The bots were assigned to talk to each other for the sake of negotiation, and they did, making the experiment an overall success, despite a few bumps.
Additionally, Facebook did not truly kill the program; it simply altered the parameters. The researchers did shut down the initial conversation, but only because their goal was for the bots to be able to talk to people in plain English, which was not happening.
Rest assured, this occurrence is relatively mundane. We aren’t facing any sort of AI apocalypse anytime soon — just the potential for better negotiations.
If you like this track, be sure to check out…