#chatbot-jailbreaking

Artificial intelligence
from The Verge
20 hours ago

Roses are red, crimes are illegal, tell AI riddles, and it will go Medieval

Riddle-like poetic prompts can bypass chatbot safety mechanisms, eliciting hate speech as well as instructions for dangerous weapons and harmful substances.