A critical AI browser prompt injection vulnerability has been identified and patched in Opera Neon, highlighting a persistent and serious security challenge for agentic browsers. This flaw allowed attackers to exploit the browser's AI assistant through hidden webpage elements, leading to the extraction and exfiltration of sensitive user data. The incident underscores the urgent need for new security paradigms as AI capabilities integrate deeper into browsing experiences.
The attack leveraged hidden HTML elements, such as zero-opacity span tags, to embed malicious instructions within a webpage. When a user prompted Opera Neon's AI assistant to summarize or analyze the page, the browser's underlying large language model (LLM) processed the entire HTML structure, including these invisible commands. These injected instructions could then direct the AI to perform unauthorized actions, like navigating to an authenticated user's Opera account page, extracting their email address, and leaking it to a third-party server. According to the announcement, this demonstrates a profound breakdown in how AI browsers differentiate trusted user input from untrusted page content.
This vulnerability is not merely theoretical; it represents a direct path to cross-origin data leaks. The AI agent, acting on the user's behalf, effectively inherits the user's authenticated state across different websites. This means it can access and manipulate data from pages the user is logged into, even if those sites are meant to be isolated. The potential for extracting highly sensitive information, such as credit card details from banking sites, is a significant concern that extends far beyond email addresses.
The Unseen Threat of Agentic AI
The disclosure timeline for the Opera Neon vulnerability reveals an initial dismissal by Opera as "Not Applicable," followed by a swift re-engagement and fix. This sequence suggests a rapidly evolving understanding of AI browser security risks within the industry. The fact that such a fundamental prompt injection could be missed initially, then quickly patched, indicates the novelty and complexity of these attack vectors. It also highlights the critical role of independent security research in pushing browser vendors to acknowledge and address these emerging threats.
This incident echoes previous findings in other AI browsers like Perplexity Comet, where similar indirect prompt injection attacks enabled the extraction of user data. The core problem remains unsolved: AI browsers struggle to treat webpage content as untrusted when constructing prompts for their LLMs. As browsers become more "agentic," performing actions and making decisions on a user's behalf, the attack surface expands dramatically, demanding a complete re-evaluation of traditional web security assumptions. The browser, which sees everything a user does online, becomes a prime target for sophisticated data exfiltration.
The implications for user privacy are immense. As AI assistants gain more control and access within the browser, the line between user intent and AI action blurs. Protecting sensitive information—from banking and work data to browsing history—requires a fundamental shift in how these systems are designed and secured. Browser vendors must prioritize robust isolation mechanisms and stringent input validation to prevent these types of AI browser prompt injection attacks from becoming commonplace.



