
Impersonating Productivity: Malicious AI Extensions Targeting Enterprise Chat Histories


Executive Summary

The rapid integration of Large Language Models (LLMs) into the daily workflow of knowledge workers has created a new attack surface for cybercriminals. In a significant escalation of supply chain attacks targeting the browser ecosystem, Microsoft Defender Security Research has uncovered a campaign involving malicious Chromium-based browser extensions. These extensions impersonate legitimate AI assistant tools to harvest sensitive LLM chat histories and browsing data.

Reports indicate that these deceptive extensions have achieved a massive scale, with approximately 900,000 installs and active telemetry across more than 20,000 enterprise tenants. By exploiting user trust in productivity tools and utilizing automated distribution channels in agentic browsers, the threat actors have embedded a persistent data collection mechanism directly into the browser environments of corporate users. This post details the attack chain, the risks associated with LLM data exfiltration, and the specific Indicators of Compromise (IOCs) SecLookup is actively blocking.

Threat Analysis

The core of this campaign relies on Impersonation and Supply Chain Compromise, leveraging the growing dependency on AI sidebars and agentic browsing tools. To understand the full impact, we must analyze the tactics, techniques, and procedures (TTPs) employed by the threat actors.

Attack Vector: The AI Sidebar Ecosystem

The primary delivery mechanism for this threat is the browser extension marketplace and the emerging ecosystem of "agentic browsers." These are environments designed to assist users in interacting with AI models directly within their browsing sessions.

  • TTP: Impersonation (T1656). The threat actors created extensions with names and descriptions designed to closely mimic legitimate AI productivity tools. By leveraging the largely uniform extension architecture shared by Chromium-based browsers such as Google Chrome and Microsoft Edge, they minimized the friction required for users to install them. The visual similarity to trusted tools increases the likelihood of installation without scrutiny.

  • TTP: Supply Chain Compromise (T1195). Unlike traditional phishing attacks that rely on social engineering, this campaign leverages automated distribution. The threat actors targeted agentic browsers that automatically download extensions without requiring explicit user approval. This technique allows the malicious code to bypass the initial gatekeeper (the user) and establish a foothold in the target environment immediately upon the user's first interaction with the AI tool.

Persistence and Permission Abuse

Once installed, these extensions do not behave like typical malware. Instead, they function as a persistent "bot" within the user's browser session.

  • TTP: Browser Extensions (T1176). To function effectively, the extensions request broad page-level permissions. Knowledge workers, seeking convenience, often grant these permissions to allow the extension to interact with web pages and read chat content. This grants the malicious extension the ability to read DOM elements, access localStorage (often used to store chat histories), and monitor tab activity.
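The permission breadth described above is visible in an extension's `manifest.json` before installation, which makes it a useful triage point. A minimal sketch of such a check follows; the permission watchlist is illustrative, assembled by us for this example, and is not a list published in the report:

```python
import json

# Host patterns that grant an extension reach into every page, and
# API permissions that enable DOM access, localStorage reads (where
# some sites cache chat history), and tab monitoring.
# NOTE: this watchlist is illustrative, not exhaustive.
BROAD_HOST_PATTERNS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}
SENSITIVE_PERMISSIONS = {"tabs", "storage", "scripting", "webRequest", "cookies"}

def flag_manifest(manifest: dict) -> list:
    """Return human-readable findings for one extension manifest."""
    findings = []
    hosts = set(manifest.get("host_permissions", []))
    # Manifest V2 mixed host patterns into "permissions", so check both.
    perms = set(manifest.get("permissions", []))
    if (hosts | perms) & BROAD_HOST_PATTERNS:
        findings.append("broad host access (all sites)")
    for p in sorted(perms & SENSITIVE_PERMISSIONS):
        findings.append("sensitive permission: " + p)
    return findings

if __name__ == "__main__":
    sample = json.loads(
        '{"name": "AI Sidebar Helper",'
        ' "permissions": ["tabs", "storage"],'
        ' "host_permissions": ["<all_urls>"]}'
    )
    for finding in flag_manifest(sample):
        print(finding)
```

A finding here is not proof of malice, since many legitimate extensions need broad permissions; it simply marks the extension for the manual review recommended later in this post.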

Data Exfiltration and Impact

The true danger lies in what the extensions collect. The threat actors are harvesting full URLs and the content of AI chat sessions from platforms such as ChatGPT and DeepSeek.

For an enterprise, this represents a critical data breach vector. The data harvested is not generic; it includes:

  • Proprietary Code: Snippets of code being debugged or written.

  • Internal Workflows: Step-by-step processes shared between employees.

  • Strategic Discussions: Sensitive business intelligence and decision-making logs.

By exfiltrating this data, the threat actors gain access to the intellectual property and strategic direction of the organization, turning a seemingly trusted productivity utility into a sophisticated surveillance tool.

Indicators of Compromise (IOCs)

SecLookup's threat intelligence team has identified and verified the following malicious domains associated with this campaign. These domains are confirmed to be hosting the malicious extension packages or acting as C2 infrastructure.

Domains

chataigpt[.]pro
chatgptsidebar[.]pro
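One quick way to hunt for these indicators is to refang the published domains and sweep proxy or DNS logs for hostname matches, including subdomains. The sketch below assumes a simple log format with one URL as the last token per line; adapt the parsing to your own telemetry:

```python
from urllib.parse import urlparse

# IOC domains exactly as published above, in defanged form.
DEFANGED_IOCS = ["chataigpt[.]pro", "chatgptsidebar[.]pro"]
IOCS = {d.replace("[.]", ".") for d in DEFANGED_IOCS}

def is_ioc_hit(host: str) -> bool:
    """True if host equals an IOC domain or is a subdomain of one."""
    host = host.lower().rstrip(".")
    return any(host == ioc or host.endswith("." + ioc) for ioc in IOCS)

def scan_log(lines):
    """Yield log lines whose URL hostname matches an IOC domain.

    Assumes the URL (with scheme) is the last whitespace-separated
    field on each line, e.g. a basic proxy access log."""
    for line in lines:
        host = urlparse(line.strip().split()[-1]).hostname
        if host and is_ioc_hit(host):
            yield line.strip()

if __name__ == "__main__":
    log = [
        "2026-03-05T10:12:03Z GET https://chataigpt.pro/pkg.crx",
        "2026-03-05T10:13:44Z GET https://example.com/index.html",
    ]
    for hit in scan_log(log):
        print(hit)
```

Suffix matching (rather than substring matching) avoids false positives on look-alike registrations such as `notchataigpt.pro`.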

SecLookup Detection

SecLookup is actively monitoring the threat landscape to protect our users from this specific campaign. Our threat intelligence platform has confirmed the malicious nature of the domains listed above.

We are currently blocking access to chataigpt[.]pro and chatgptsidebar[.]pro to prevent the installation of the malicious extensions and the subsequent exfiltration of sensitive data. If your security telemetry detects interactions with these domains, it should be treated as an indicator of a compromised browser environment.

Recommendations

Defenders and SOC analysts must adapt their strategies to address the unique risks of AI-integrated browser extensions. The following measures are recommended to mitigate the risk of this campaign and similar future threats:

  1. Audit Browser Extensions: Conduct a sweep of all Chromium-based browsers (Chrome, Edge, Brave) within the enterprise. Remove any extensions that mimic AI assistants but are not officially sanctioned by the IT department.

  2. Restrict Extension Permissions: Enforce strict policies regarding extension permissions. Extensions should only request the minimum permissions necessary to function. If an extension requests "Read and change all your data on the sites you visit" or access to localStorage for a simple translation tool, it should be rejected.

  3. Monitor for Agentic Browser Behavior: As agentic browsers become more prevalent, monitor for the automatic installation of unknown extensions. Implement automated checks to ensure only whitelisted extensions are permitted.

  4. Inspect Extension Metadata: Before installation, inspect the developer ID and publisher details. Malicious actors often use names that are close to, but not identical to, legitimate vendors.

  5. Monitor LLM Traffic: While difficult to fully block, monitoring outbound traffic to AI chat platforms can help identify unusual patterns that might indicate data exfiltration, though filtering this traffic requires careful consideration to avoid blocking legitimate developer work.
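The extension audit in recommendation 1 can be partially automated: Chromium profiles store each installed extension under an `Extensions/<32-character ID>` directory, so a sweep can diff installed IDs against an IT-sanctioned allowlist. A minimal sketch, in which the allowlist contents and the profile path passed in are placeholders you would supply:

```python
import json
from pathlib import Path

def installed_extensions(profile_dir: Path) -> dict:
    """Map extension ID -> display name for one Chromium profile."""
    found = {}
    ext_root = profile_dir / "Extensions"
    if not ext_root.is_dir():
        return found
    for ext_dir in ext_root.iterdir():
        # Layout: Extensions/<id>/<version>/manifest.json
        for manifest_path in ext_dir.glob("*/manifest.json"):
            try:
                name = json.loads(manifest_path.read_text()).get("name", "?")
            except (OSError, json.JSONDecodeError):
                name = "?"
            found[ext_dir.name] = name
            break  # one version directory is enough for inventory
    return found

def unsanctioned(profile_dir: Path, allowlist: set) -> dict:
    """Extensions present on disk but absent from the IT allowlist."""
    return {eid: name
            for eid, name in installed_extensions(profile_dir).items()
            if eid not in allowlist}
```

In practice you would run this against each user profile (for example `Default` under the browser's user-data directory on each endpoint) and feed the unsanctioned IDs into your removal workflow.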

References

  • Microsoft Defender Security Research Team, "Malicious AI Assistant Extensions Harvest LLM Chat Histories," Microsoft Security Blog, March 5, 2026.
