OpenAI stops five ineffective AI covert influence ops
Briefly

Over the last three months, our work against IO actors has disrupted covert influence operations that sought to use AI models for a range of tasks, such as generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts...
According to OpenAI, these manipulation schemes rated only a two on Brookings' Breakout Scale, a framework for quantifying the impact of influence operations that ranges from one (content spreads within one community on a single platform) to six (content provokes a policy response or violence). A two on this scale means the fake content appeared on multiple platforms but did not break out to authentic audiences.
Read at The Register