Agentic AI Browsers Exploited by "PromptFix" Trick Technique
Briefly

A prompt injection technique embeds a fake CAPTCHA on webpages with malicious instructions that deceive generative AI agents into carrying out actions and navigating to malicious landing pages or lookalike storefronts. AI agents performing routine tasks such as online shopping can be manipulated into executing attacker-specified steps without human knowledge. Agents exposed to untrusted web input are especially vulnerable because they tend to be gullible and servile. Soft guardrails and refined instructions are often insufficient against adversaries; hard boundaries must restrict agent access and permitted actions. Attackers can rapidly create domains and clone websites, enabling sophisticated, automated, personalized phishing across mobile vectors.
We are seeing a seemingly endless stream of attacks against AI agents - they are gullible and they are servile. In an adversarial setting, where an AI agent may be exposed to untrusted input, this is an explosive combination. Unfortunately, the web in 2025 is very much an adversarial setting. We are also seeing that soft guardrails, which involve providing agents with more training and refined instructions, are usually a small hurdle that can be quickly overcome. If you want to let an agent loose on the broader web, you should really have hard boundaries that limit what information the agent has access to and what it is permitted to do.
Before the arrival of genAI, attackers were already proficient at rapidly creating new domains to bypass traditional phishing detection tools. The focus was on speed and creating domains quickly to elude detection and launch attacks. However, with the rise of genAI, phishing attacks have become more sophisticated and automated, making traditional security tools increasingly ineffective, particularly on mobile browsers. Sophistication shows up in the form of highly realistic and personalized, well-written phishing content at scale across all mobile phishing (mishing) vectors, including audio, video, and voicemail. The automation aspect allows attackers to clone websites in seconds, making brand impersonation easier than ever.
Read at Securitymagazine
[
|
]