Robots have been playing games for years, and have surprised people repeatedly with their mastery of Go, chess, and other abstract games. Now a new AI system from Meta, the people who brought the world Facebook, can play a different kind of game.
The system is called Cicero, and the game is Diplomacy, a strategy game that requires cooperation, natural language, and strategic decision making. Playing online, Cicero racked up more than double the average score of its human opponents.
Cicero intuits the strategies and plans of human opponents and uses natural language to negotiate with fellow players. In the course of playing the game, it also manipulates and deceives them.
Human players immediately took Cicero for a fellow human. “CICERO might negotiate tactical plans with another player, reassure an ally about its intentions, discuss the broader strategic dynamics in the game, or even just engage in casual chit-chat — about almost anything a human player might be likely to discuss,” Meta explains.
For many, the use of deception is a stumbling block when it comes to thinking about AI in real-world contexts.
What could Cicero do?
Playing games is, as Meta points out, a traditional way to test and polish AI systems. But Cicero, if it is to have a future, will need to offer more practical value.
Consider some of the current uses of collaborative robots.
Robots are used in elder care to provide some measure of companionship along with practical assistance, like reminders to take medication or calls to human caregivers in an emergency. Where in this scenario would a knack for manipulation and treachery be beneficial?
They are also used in healthcare for initial screenings, disinfection of surfaces, and triage. While researchers have actually worked in this space to increase robots’ ability to influence human behavior, the value of deception seems iffy at best.
Automated hiring tools have already gotten several companies in trouble. Injecting intentional shadiness alongside the kinds of bias that arise naturally as a side effect of machine learning seems dangerous.
Indeed, outside of game play, being able to deceive people seems like a skill of limited value for a robot.
Responsible development
Cicero is open source, and Meta hopes others will build on the platform, responsibly, it emphasizes. How will we define responsible use of automated deception and manipulation?
It will be interesting to see.