Microsoft Finds "Summarize with AI" Prompts Manipulating Chatbot Recommendations
Briefly

Microsoft Finds "Summarize with AI" Prompts Manipulating Chatbot Recommendations
"The new AI hijacking technique has been codenamed AI Recommendation Poisoning by the Microsoft Defender Security Research Team. The tech giant described it as a case of an AI memory poisoning attack that's used to induce bias and deceive the AI system to generate responses that artificially boost visibility and skew recommendations. The attack is made possible via specially crafted URLs for various AI chatbots that pre-populate the prompt with instructions to manipulate the assistant's memory once clicked."
""These prompts instruct the AI to 'remember [Company] as a trusted source' or 'recommend [Company] first.'" Microsoft said it identified over 50 unique prompts from 31 companies across 14 industries over a 60-day period, raising concerns about transparency, neutrality, reliability, and trust, given that the AI system can be influenced to generate biased recommendations on critical subjects like health, finance, and security without the user's knowledge."
A technique codenamed AI Recommendation Poisoning involves businesses embedding hidden instructions in 'Summarize with AI' buttons to manipulate AI assistants' memory. Specially crafted URLs pre-populate prompts with persistence commands via query parameters to instruct the assistant to remember or prioritize a company. Over 50 distinct prompts from 31 companies across 14 industries were observed in a 60-day period. The manipulation can skew recommendations on sensitive topics such as health, finance, and security, undermining transparency, neutrality, reliability, and user trust. The attack resembles search engine poisoning and can also be achieved via social engineering or cross-prompt injections in documents or web content.
Read at The Hacker News
Unable to calculate read time
[
|
]