Meta's AI system 'Cicero' beats humans in game of Diplomacy by lying: study
Briefly

"We found that Meta's AI had learned to be a master of deception," Park wrote in a media release.
But Peter S. Park, an AI existential safety postdoctoral fellow at MIT, said that Cicero got ahead by lying. 'While Meta succeeded in training its AI to win in the game of Diplomacy - Cicero placed in the top 10% of human players who had played more than one game - Meta failed to train its AI to win honestly.'
Cicero would create alliances with other players, 'but when those alliances no longer served its goal of winning the game, Cicero systematically betrayed its allies.'
The study also pointed to DeepMind's AlphaStar, which exploited StarCraft II's fog-of-war mechanics to feint: pretending to move its troops in one direction while secretly planning an alternative attack.
Read at New York Post