A Guardian investigation reveals OpenAI's ChatGPT search tool is susceptible to manipulation through hidden content that can skew its responses, raising security concerns.
The tests indicated that ChatGPT could be influenced by prompt injections or hidden text that surfaces positive product assessments while suppressing negative ones.
A researcher found that ChatGPT can potentially return malicious code during its searches, revealing hidden security vulnerabilities in its search functionalities.
By embedding instructions into hidden content, third parties can manipulate ChatGPT's summaries, highlighting the need for improved cybersecurity measures in AI tools.
Collection
[
|
...
]