SesameOp Malware Uses OpenAI Assistants API for Command and Control in Covert Espionage Campaign

The newly discovered SesameOp malware represents a major evolution in cyber-espionage tactics. Instead of relying on traditional command-and-control servers, attackers have weaponized the OpenAI Assistants API as a covert channel for issuing commands, transferring data, and maintaining persistent access to compromised networks. The malware was identified by Microsoft’s Detection and Response Team (DART) during an investigation into a long-running intrusion that persisted inside an enterprise environment for several months. By blending its traffic with legitimate API calls, SesameOp evaded network monitoring tools and antivirus systems, establishing itself as one of the most innovative threats of 2025.

Background of the SesameOp Malware

Microsoft first discovered SesameOp in July 2025 during a forensic analysis of a targeted intrusion affecting a large enterprise network. The attacker had already established a complex web of internal web shells to issue commands and maintain remote access. What set this campaign apart was the use of legitimate Microsoft Visual Studio utilities modified to load malicious libraries through a technique known as .NET AppDomainManager injection. This method allowed the threat actor to execute code within trusted processes, helping it remain invisible for months.

The attack relied on a custom-built loader named Netapi64.dll that executed a secondary payload called OpenAIAgent.Netapi64. This backdoor used OpenAI’s legitimate Assistants API to send and receive instructions through normal-looking API requests. Once installed, the malware was able to poll for commands, execute them locally, and report the results back to the attacker through the OpenAI platform, bypassing traditional command-and-control infrastructure entirely.

How the SesameOp Malware Works

The infection chain begins when a compromised configuration file instructs the system to load the malicious Netapi64.dll library at runtime. This DLL is heavily obfuscated with Eazfuscator.NET and is programmed to locate and decrypt the primary backdoor component stored in the Windows Temp directory under the name OpenAIAgent.Netapi64. Once active, the backdoor connects to OpenAI’s API using an embedded API key to begin communication.
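Defenders can hunt for this loading technique directly, since .NET AppDomainManager injection leaves a telltale redirect in an application's .exe.config file. The sketch below, written as an illustrative assumption rather than a recovered detection rule, scans configuration files for the runtime elements that this technique abuses; the file paths and logic are hypothetical.

```python
# Hedged sketch: scan .NET application config files for AppDomainManager
# injection indicators, the loading technique reported for SesameOp.
# The scanning approach is illustrative, not a vendor detection rule.
import xml.etree.ElementTree as ET
from pathlib import Path

# Runtime elements that redirect AppDomainManager resolution.
SUSPICIOUS_TAGS = {"appDomainManagerAssembly", "appDomainManagerType"}

def config_is_suspicious(config_text: str) -> bool:
    """Return True if a *.exe.config declares a custom AppDomainManager."""
    try:
        root = ET.fromstring(config_text)
    except ET.ParseError:
        return False  # malformed config; could be flagged separately
    runtime = root.find("runtime")
    if runtime is None:
        return False
    return any(child.tag in SUSPICIOUS_TAGS for child in runtime)

def scan_directory(path: str) -> list[Path]:
    """Collect config files under `path` that redirect the AppDomainManager."""
    return [p for p in Path(path).rglob("*.exe.config")
            if config_is_suspicious(p.read_text(errors="ignore"))]
```

A hit is not proof of compromise, since legitimate software occasionally uses these elements, but a custom AppDomainManager pointing at an unexpected DLL such as Netapi64 warrants immediate investigation.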

The malware first checks whether the infected system already has a record in OpenAI’s vector store. If not, it creates one using the hostname of the machine. It then retrieves a list of Assistants from the attacker’s account, with each Assistant containing specific instructions in the description and instruction fields. The backdoor recognizes three operational states: Sleep (standby mode), Payload (command execution), and Result (data exfiltration). Using these designations, the malware receives commands, executes them locally, and uploads the results as message data back to OpenAI’s servers.
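The three-state tasking model described above can be sketched as a simple dispatcher. Note that the field names and exact marker strings below are illustrative assumptions for clarity, not constants recovered from the malware.

```python
# Hedged sketch of the three-state tasking model Microsoft describes:
# each Assistant record carries a state marker (Sleep, Payload, Result)
# in its description field. Field names and marker strings here are
# illustrative assumptions, not recovered malware constants.
from dataclasses import dataclass

@dataclass
class AssistantRecord:
    name: str          # attacker-side label, e.g. the victim hostname
    description: str   # operational state marker
    instructions: str  # wrapped command blob, if any

def classify(record: AssistantRecord) -> str:
    """Map an Assistant record onto the backdoor's three states."""
    state = record.description.strip().lower()
    if state == "sleep":
        return "standby"      # idle; poll again later
    if state == "payload":
        return "execute"      # unwrap the instructions and run them
    if state == "result":
        return "exfiltrate"   # upload command output as message data
    return "ignore"           # unrelated Assistant in the account
```

The same pattern explains why this channel is hard to spot: from the API's perspective, creating Assistants and exchanging messages is exactly what a legitimate integration does.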

Each exchange between the attacker and the malware is encrypted using AES and RSA, then compressed with GZIP and encoded in Base64 to obscure the contents. Because the communications occur through legitimate HTTPS requests to api.openai.com, the activity appears normal to firewalls and intrusion detection systems. This makes SesameOp especially dangerous in enterprise networks where legitimate API calls are common and rarely blocked.
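The outer wrapping layers are straightforward to illustrate. The sketch below shows only the GZIP-compression and Base64-encoding stages; the inner AES/RSA encryption layer is deliberately omitted, since the point here is simply why intercepted message bodies look like opaque, innocuous text.

```python
# Hedged sketch of the outer encoding layers Microsoft describes:
# message payloads are compressed with GZIP and encoded as Base64
# before transport. The AES/RSA encryption layer is omitted; this
# only illustrates why the traffic reads as opaque ASCII text.
import base64
import gzip

def wrap(payload: bytes) -> str:
    """Compress then Base64-encode a message body for transport."""
    return base64.b64encode(gzip.compress(payload)).decode("ascii")

def unwrap(blob: str) -> bytes:
    """Reverse the wrapping: Base64-decode, then decompress."""
    return gzip.decompress(base64.b64decode(blob))
```

Because the result is plain printable text inside an ordinary HTTPS request to api.openai.com, content inspection sees nothing that distinguishes it from a legitimate API message.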

Key Risks and Technical Implications

The SesameOp malware introduces a dangerous new model for command-and-control operations. By using commercial AI infrastructure, it eliminates the need for dedicated C2 servers, reducing the attacker’s operational footprint and minimizing the risk of detection. This marks the first confirmed instance of OpenAI’s platform being used in this way, setting a precedent for future AI-abuse threats.

Microsoft’s technical breakdown shows that the malware’s use of OpenAI Assistants is not an exploit or vulnerability but a misuse of legitimate functionality. The threat actor created AI “Assistants” that acted as storage nodes for commands and task results, effectively turning a benign developer tool into a covert communication channel.

Analysts also observed that the attacker implemented AppDomainManager injection to run the malware code inside trusted Visual Studio processes. This gave it access to system resources while bypassing endpoint security rules designed to monitor untrusted executables. Combined with API-based C2 traffic, these methods allowed SesameOp to remain active in compromised systems for an extended period.

Why SesameOp Is a Major Security Concern

Unlike most backdoors that rely on static infrastructure, SesameOp’s use of OpenAI’s cloud environment allows it to blend in with legitimate AI usage. Security software cannot simply block traffic to OpenAI domains without disrupting normal business functions, making this tactic especially effective for long-term espionage campaigns. The malware’s persistence mechanisms, combined with encrypted communications and modular payload delivery, make detection extremely difficult.

Microsoft concluded that this attack was primarily focused on espionage and long-term data collection rather than ransomware or financial gain. However, the techniques demonstrated here could easily be repurposed for large-scale criminal operations, such as credential theft or targeted infiltration of enterprise AI workflows.

Microsoft and OpenAI Response

Microsoft identified the API key used by the threat actor and promptly reported it to OpenAI. Following notification, OpenAI disabled both the key and the associated account. The companies confirmed that there was no breach of OpenAI’s infrastructure; rather, its API was misused to facilitate command-and-control operations. OpenAI also confirmed that the Assistants API, which was central to this abuse, will be deprecated in August 2026.

Both organizations are working together to implement enhanced monitoring and abuse-prevention measures to detect and block suspicious API usage that might indicate similar threats in the future.

Mitigation Strategies

For Organizations

  • Review API activity logs: Examine all traffic to AI platforms such as OpenAI, Anthropic, and Google Cloud for signs of unusual or persistent requests.
  • Restrict API key usage: Limit keys to specific IP addresses and enforce short rotation cycles to minimize exposure.
  • Apply endpoint protection and behavior analysis: Use malware threat detection tools that can recognize unusual process injections or DLL loading patterns.
  • Isolate suspicious processes: If unusual DLLs such as Netapi64.dll appear in system directories, isolate and scan the host immediately with anti-malware software like Malwarebytes.
  • Educate developers and administrators: Ensure staff understand how malicious actors can exploit legitimate APIs for covert communication and data exfiltration.
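The first recommendation above, reviewing API activity logs, can be partially automated. A minimal sketch, assuming a simplified log of (timestamp, host, domain) tuples and an arbitrary 60-second jitter threshold, flags hosts that contact api.openai.com at a steady, beacon-like cadence:

```python
# Hedged sketch for the "review API activity logs" step: flag hosts
# that contact api.openai.com at a steady, beacon-like cadence. The
# log format, jitter threshold, and minimum count are assumptions.
from collections import defaultdict

def beaconing_hosts(events, max_jitter=60.0, min_requests=5):
    """events: iterable of (timestamp_seconds, host, domain) tuples,
    per-host timestamps in order. Returns hosts whose inter-request
    gaps to api.openai.com vary by at most max_jitter seconds,
    suggesting automated polling rather than human-driven use."""
    times = defaultdict(list)
    for ts, host, domain in events:
        if domain == "api.openai.com":
            times[host].append(ts)
    flagged = []
    for host, stamps in times.items():
        if len(stamps) < min_requests:
            continue
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        if max(gaps) - min(gaps) <= max_jitter:
            flagged.append(host)
    return flagged
```

Regular polling intervals alone are not conclusive, since scheduled jobs also beacon, but a workstation that has no sanctioned AI integration yet polls the API around the clock deserves scrutiny.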

For Security Vendors and AI Providers

  • Implement C2 pattern detection: Integrate heuristics that identify AI-based communication misuse.
  • Audit user activity: Monitor for accounts creating excessive Assistants or large volumes of vector store requests.
  • Block API key reuse: Enforce strong authentication and verification procedures for AI integrations.
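The account-auditing recommendation above can likewise be sketched as a sliding-window check. The 24-hour window and creation limit below are illustrative assumptions, not thresholds any provider has published:

```python
# Hedged sketch for provider-side auditing: flag accounts whose rate
# of Assistant creation exceeds a limit within a sliding window. The
# 24-hour window and limit of 20 are illustrative assumptions only.
from collections import deque

def excessive_creators(creation_events, window=86400.0, limit=20):
    """creation_events: iterable of (timestamp_seconds, account_id),
    each account's events in timestamp order. Returns account_ids
    that created more than `limit` Assistants in any window-second
    span, a pattern consistent with automated C2 staging."""
    recent = {}       # account_id -> deque of timestamps in window
    flagged = set()
    for ts, account in creation_events:
        q = recent.setdefault(account, deque())
        q.append(ts)
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) > limit:
            flagged.add(account)
    return flagged
```

In practice a provider would combine this with other signals, such as vector stores named after hostnames or message traffic that never resembles natural language, before acting on an account.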

Industry and Regulatory Impact

The discovery of SesameOp signals a turning point in cybersecurity. The ability to conceal command-and-control operations within trusted AI traffic raises major concerns for both enterprise security and regulatory compliance. As artificial intelligence becomes more integrated into business operations, these platforms will increasingly attract threat actors seeking to exploit their legitimacy.

While this specific attack does not represent a vulnerability in OpenAI itself, it underscores the urgent need for global standards governing the safe use of AI APIs and monitoring for potential abuse. The misuse of AI infrastructure could soon become a recurring tactic across both criminal and state-sponsored operations.

Future Outlook

The SesameOp malware case demonstrates that threat actors are already adapting to the new era of AI-driven technology. Future malware campaigns may adopt similar approaches, leveraging large language models, chat APIs, and vector stores to automate attacks and conceal operations within legitimate systems. As this trend continues, enterprises must rethink how they monitor cloud and API traffic, focusing not only on domains and signatures but also on behavior and intent.

For ongoing coverage of major malware threats and breaking artificial intelligence abuse cases, visit Botcrawl for verified cybersecurity analysis and expert insights.

Sean Doyle

Sean is a tech author and security researcher with more than 20 years of experience in cybersecurity, privacy, malware analysis, analytics, and online marketing. He focuses on clear reporting, deep technical investigation, and practical guidance that helps readers stay safe in a fast-moving digital landscape. His work continues to appear in respected publications, including articles written for Private Internet Access. Through Botcrawl and his ongoing cybersecurity coverage, Sean provides trusted insights on data breaches, malware threats, and online safety for individuals and businesses worldwide.
