
A new type of threat is alarming the world of cybersecurity. It is called Man-in-the-Prompt, and it is capable of compromising interactions with leading generative Artificial Intelligence tools such as ChatGPT, Gemini, Copilot, Claude, and DeepSeek. The challenge? It doesn’t even require a sophisticated attack: all it takes is a browser extension that doesn’t even need any special privileges.
LayerX’s research shows that any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it, and cover their tracks.
The exploit has been tested on all top commercial LLMs, with proof-of-concept demos provided for ChatGPT and Google Gemini.
This exploit stems from the way most GenAI tools are implemented – in the browser. When users interact with an LLM-based assistant, the prompt input field is typically part of the page’s Document Object Model (DOM). This means that any browser extension with scripting access to the DOM can read from, or write to, the AI prompt directly.
Bad actors can leverage malicious or compromised extensions to perform prompt injection attacks, extract data directly from the prompt, response, or session, or compromise model integrity.
How can you protect yourself ?
- Don’t install extensions from unknown or unreliable sources.
- Regularly check installed extensions and uninstall those that aren’t needed.
- Limit extension permissions whenever possible.
Read more about it here.