Read at WIRED
Goody-2, a satirical chatbot created by the art studio Brain that refuses virtually every request, highlights the unsolved safety issues with large language models and generative AI systems. The recent outbreak of Taylor Swift deepfakes on X, formerly Twitter, was traced back to an image generator released by Microsoft, one of the first major tech companies to build a responsible AI research program.
Although corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved.
The restrictions placed on AI chatbots, and the difficulty of finding a moral alignment that pleases everybody, have already become a subject of debate. Concerns about political bias in OpenAI's ChatGPT have prompted efforts to build more neutral alternatives. Elon Musk promised that Grok, his rival to ChatGPT, would be less biased than other AI systems, yet it too often equivocates in ways reminiscent of Goody-2.