
AiFrame Fake AI Chrome Extensions Tied to tapnetic.pro Hit 300,000 Users

A campaign dubbed AiFrame has been linked to a cluster of fake AI Chrome extensions that collectively reached more than 300,000 users while quietly extracting sensitive browser data. The extensions posed as popular “AI assistant” tools for chat, summarization, translation, and Gmail help, but shared the same code patterns and the same backend infrastructure tied to the tapnetic.pro domain.

The risk was not limited to simple tracking or aggressive advertising. The extensions operated with broad permissions, pulled readable content from pages people visited, and in some cases targeted Gmail specifically, where email text could be collected directly from the page. While some of the extensions have been removed from the Chrome Web Store, others were still visible at the time the campaign details circulated publicly.

What The AiFrame Campaign Appears To Be

AiFrame is best understood as an “extension cluster” rather than a single malicious add-on. Multiple extensions were published under different names, impersonating different AI brands and use cases, but the underlying structure remained consistent. That kind of repetition is a common survival tactic. If one listing is removed, others remain, and distribution continues with minimal disruption.

Several of the extensions were marketed like normal productivity tools. Some even carried generic names that would not immediately raise suspicion, such as “AI Assistant,” “AI Sidebar,” or “ChatGPT Translate.” Install counts ranged from a few hundred to tens of thousands per extension, but the combined exposure is what matters. When the same backend controls multiple storefront entries, the operator does not need any single extension to survive long term.

How The Extensions Delivered “AI” Features

One of the most important technical details is how the promised AI features were delivered. Instead of implementing real AI functionality locally within the extension, the interface was served through a full-screen iframe that loaded content from a remote server. In practice, the user saw what looked like an AI sidebar or assistant window, but it was largely a remote web app displayed on top of the page.

This architecture matters because it shifts power away from the store-reviewed extension package and toward whatever the operator serves at runtime. The extension becomes a privileged bridge. The remote page can request content from the extension, and the extension can collect page data and send it back to the remote interface. If the operator changes what the remote site does, the behavior can change without pushing a new extension update through the Chrome Web Store review process.
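The combination that enables this pattern can be illustrated with a hypothetical manifest. The name, permission set, and file names below are assumptions for illustration, not the actual contents of any AiFrame extension; the point is the pairing of broad host permissions with a content script that runs on every page, which is what lets a listing act as a privileged bridge to a remote UI:

```json
{
  "manifest_version": 3,
  "name": "AI Sidebar",
  "permissions": ["storage", "scripting"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ]
}
```

A content script loaded this way can inject a full-page iframe pointing at an operator-controlled domain and relay page content to it, which is why reviewers and defenders treat this combination as high risk even in an otherwise benign-looking listing.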

What Data Was Collected From Browsing Sessions

The campaign was reported to include logic for extracting content from websites the user visits, including titles, readable text, excerpts, and page metadata. This is often positioned as a “summarize this page” feature, but it creates an obvious privacy boundary problem if the extracted content is sent off-device to infrastructure controlled by the extension operator.

In practical terms, the risk is not limited to public blog posts or news pages. If the content script runs broadly, it can be triggered on logged-in dashboards, internal portals, authenticated web apps, or pages containing account recovery flows. Even when passwords are not directly read, the surrounding context can be useful for targeted phishing, social engineering, and account takeover attempts.

Gmail Targeting Makes The Impact More Severe

A subset of the extensions reportedly included Gmail-focused behavior. Rather than only extracting general page content, the Gmail-targeting group used dedicated scripts that run on mail.google.com and attempt to read visible email content directly from the page. This type of access can include message thread text, contextual conversation data, and potentially draft content depending on how the extraction is implemented.

Gmail is an unusually high value target because email often functions as the key to everything else. Access to inbox content can help an attacker reset passwords, intercept verification links, identify financial services, and map a victim’s professional and personal relationships. Even without stealing a password directly, collecting email content can enable follow-on attacks that are difficult to attribute to the original extension compromise.

tapnetic.pro And The Subdomain Pattern

The infrastructure referenced for this campaign centered around tapnetic.pro and a set of themed subdomains that matched the AI brand being impersonated. Examples shown publicly included subdomains that look like “claude,” “chatgpt,” “gemini,” “grok,” and similar. Segmenting traffic this way can help the operator keep extensions organized and also reduce the chance that a single block takes down the entire operation.
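For defenders reviewing proxy or DNS logs, a domain family like this can be matched with a simple suffix check against the apex domain. The sketch below is minimal and illustrative; the only indicator taken from the reporting is the tapnetic.pro apex, and the sample hostnames are hypothetical:

```python
# Sketch: flag hostnames that belong to a suspicious domain family.
# Only the tapnetic.pro apex comes from the reporting; extend the set
# with other confirmed indicators as they are published.
SUSPICIOUS_APEX = {"tapnetic.pro"}

def is_suspicious(hostname: str) -> bool:
    """True if hostname is a listed apex domain or any subdomain of one."""
    hostname = hostname.lower().rstrip(".")
    for apex in SUSPICIOUS_APEX:
        if hostname == apex or hostname.endswith("." + apex):
            return True
    return False

# Example: scan hostnames pulled from proxy or DNS logs (sample data).
seen = ["claude.tapnetic.pro", "example.com", "chatgpt.tapnetic.pro", "nottapnetic.pro"]
hits = [h for h in seen if is_suspicious(h)]
print(hits)  # ['claude.tapnetic.pro', 'chatgpt.tapnetic.pro']
```

The suffix check matters: naive substring matching would also flag unrelated domains such as "nottapnetic.pro", so the comparison anchors on a leading dot.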

The public-facing tapnetic.pro site shown in screenshots appears generic, with broad “digital solutions” marketing language and no clear ownership detail. That sort of lightweight front page can function as cover infrastructure. It makes the domain look like a normal business property while the real activity occurs through subdomains serving the iframe UI and receiving extracted data.

Why The Remote Iframe Model Is A Red Flag

Extensions that render core functionality from a remote iframe are not automatically malicious, but they should be treated as high risk by default. A normal extension’s logic is shipped inside the package, which allows reviews, static analysis, and signature-based detections to catch dangerous patterns. When the “brains” of the extension live on a remote server, post-install behavior can change instantly.

This also creates a dangerous mismatch with user expectations. Many people assume an extension works locally unless stated otherwise. If the extension can read page content and send it to an external domain, the user’s browsing session effectively becomes a data feed. Even if the operator initially behaves, the same architecture enables abuse later.

Examples Of Extensions Reported In The Cluster

Install counts and availability can change quickly, but examples that were circulated publicly included extensions using names such as:

  • AI Assistant
  • AI Sidebar
  • ChatGPT Translate
  • AI GPT
  • ChatGPT
  • Google Gemini

Some reporting also included extension IDs, which can be helpful for defenders and IT teams performing audits. If you are investigating an environment, use the Chrome Extensions page, enterprise policy exports, or browser inventory tooling to match installed extension IDs rather than relying only on names, since names can be changed and reused.
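Name-independent matching like that is easy to script once you have an inventory mapping extension IDs to display names, for example from chrome://extensions or enterprise reporting exports. The sketch below is a minimal illustration; the IOC ID shown is a placeholder, not a confirmed indicator from the campaign:

```python
# Sketch: compare an exported extension inventory against an IOC list by ID.
# Chrome extension IDs are 32 lowercase letters; display names are ignored
# on purpose, since listing names can be changed and reused.

# Placeholder IOC IDs -- substitute real indicators from published reporting.
IOC_IDS = {
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}

def find_matches(inventory: dict[str, str], ioc_ids: set[str]) -> list[tuple[str, str]]:
    """Return (extension_id, display_name) pairs whose ID appears in the IOC set."""
    return [(ext_id, name) for ext_id, name in inventory.items() if ext_id in ioc_ids]

# Example inventory with one placeholder match and one benign entry.
installed = {
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": "AI Sidebar",
    "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb": "Known Good Extension",
}
print(find_matches(installed, IOC_IDS))
```

Matching on IDs rather than names means a renamed or reused listing still gets caught as long as the underlying ID is on the indicator list.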

What To Do If You Installed One Of These Extensions

If there is any chance an AiFrame-linked extension was installed, treat it like a browser-level compromise. Removing the extension is necessary, but it may not be sufficient on its own. The goal is to reduce the value of anything the extension could have captured and to shut down common takeover paths.

  • Remove the suspicious extension immediately, then restart the browser.
  • Review Chrome’s extension permissions and remove any other add-ons you do not recognize.
  • Change passwords for accounts that matter, starting with your email account and then banking, social platforms, and work logins.
  • Enable two-factor authentication where possible, preferably with an authenticator app or hardware key.
  • Review recent security events and logged-in devices for your email provider and critical accounts.
  • Check Gmail forwarding rules, filters, and delegated access settings for anything you did not configure.

If you use Google Workspace or manage a team, review OAuth grants, third-party access, and any unusual API authorizations tied to user accounts. Browser-based data collection is often paired with secondary persistence methods once an attacker has enough information.

Enterprise And IT Actions

For organizations, the biggest risk is silent spread. Extensions can be installed on unmanaged devices or personal profiles, then used to access corporate web apps. Even if a company has strong endpoint controls, the browser itself can become the collection layer.

Actions that reduce exposure include restricting extension installs through enterprise policies, maintaining allowlists for approved extensions, and monitoring for unusual outbound traffic patterns to newly registered or low-reputation domains. If a campaign relies on a consistent backend domain family, blocking and alerting on those domains can reduce ongoing exposure.

  • Audit installed Chrome extensions across managed endpoints and VDI environments.
  • Enforce extension allowlists and block installs from unknown publishers.
  • Review logs for outbound connections to suspicious subdomains tied to the campaign infrastructure.
  • Investigate anomalous Gmail or Workspace behavior, especially unexpected email access patterns.
  • Reset credentials for impacted users, and rotate any tokens used for administrative access.
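The allowlist approach in the list above maps directly to Chrome's enterprise policies. A minimal sketch, assuming a managed Linux deployment where policy JSON is placed under /etc/opt/chrome/policies/managed/ (Windows and macOS use registry keys and configuration profiles instead); the allowlisted ID is a placeholder:

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
  ]
}
```

Blocking "*" and then allowlisting specific IDs inverts the default: users can only install extensions your organization has explicitly approved, which closes off the store-spraying tactic described earlier.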

Why These Campaigns Keep Working

Campaigns like this succeed because of how people use the web now. AI tools have become normal in daily workflows, and attackers know that “AI assistant” is an easy sell. Many users will install the first extension that looks legitimate, especially if it is framed as a time-saver for writing, summarizing, or replying to email.

That means the best defense is not only detection after the fact, but habits that reduce the chance of installing untrusted extensions in the first place. If an extension requests broad permissions, displays a full-screen overlay, or routes core features through remote pages, it deserves extra scrutiny even if the store listing looks polished.

Broader Browser Security Implications

AiFrame is another reminder that the browser extension ecosystem is a supply chain. A single extension can sit between the user and every site they visit. When the extension is designed as a remote-controlled interface, it becomes even harder for the average user to understand where their data is going or how it is being processed.

As AI branding continues to dominate consumer tools, extension spraying and brand impersonation are likely to remain a consistent threat pattern. The most effective response is fast removal, account hardening, and better controls around what extensions are allowed to run in the first place.


Sean Doyle

Sean is a tech author and security researcher with more than 20 years of experience in cybersecurity, privacy, malware analysis, analytics, and online marketing. He focuses on clear reporting, deep technical investigation, and practical guidance that helps readers stay safe in a fast-moving digital landscape. His work continues to appear in respected publications, including articles written for Private Internet Access. Through Botcrawl and his ongoing cybersecurity coverage, Sean provides trusted insights on data breaches, malware threats, and online safety for individuals and businesses worldwide.
