SesameOp: How Attackers Used the OpenAI Assistants API as Command and Control
Overview
In November 2025, Microsoft disclosed the discovery of a novel backdoor named SesameOp. What makes this case stand out is that the malware didn’t rely on traditional command-and-control (C2) servers or infrastructure - instead, it abused the OpenAI Assistants API, a legitimate cloud service, to communicate with its operators.
Microsoft’s Detection and Response Team (DART) found that the attackers had maintained persistence inside the compromised enterprise network for months.
There is no evidence that OpenAI’s platform had a vulnerability exploited in this campaign; the threat actors used a valid API key (controlled by the attacker) and normal API endpoints for covert messaging. OpenAI cooperated with the investigation to disable the malicious account and associated API key.
This case demonstrates a growing trend in which threat actors “live off the land” by using trusted cloud platforms to hide malicious activity. In such scenarios, reputation-based blocking of domains or IPs is often ineffective unless defenders incorporate application context and behavioral analytics.
How It Works
At a technical level, SesameOp operates through two main components: a loader and a backdoor. The backdoor uses the OpenAI Assistants API as an application-layer covert channel, hiding commands and results inside otherwise legitimate API objects.
Infection Chain and Loader
The loader, named Netapi64.dll, is a heavily obfuscated .NET module; Eazfuscator.NET artifacts and strings are visible in the binary.
Key observed behaviors:
- Creates a marker file at C:\Windows\Temp\Netapi64.start.
- Creates a mutex to enforce a single running instance.
- Searches for payload files with names ending in .Netapi64, XOR-decodes them, and executes them dynamically.
- Injects into legitimate host processes by manipulating the .NET AppDomainManager entry point, allowing stealthy execution at application startup.
Note: the actor was also observed using development-tool binaries and libraries (for example, Visual Studio utilities) to load and execute the loader; this toolchain abuse helped bypass simple allowlists.
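The exact XOR scheme has not been published; as an illustration only, a repeating-key XOR decoder applied to files ending in .Netapi64 might look like the sketch below (the key and the single-directory scan are assumptions):

```python
import pathlib

def xor_decode(data: bytes, key: bytes) -> bytes:
    """XOR each payload byte against a repeating key; XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def find_and_decode(directory: str, key: bytes) -> list[bytes]:
    """Hypothetical scan: locate *.Netapi64 files and XOR-decode their contents."""
    return [xor_decode(p.read_bytes(), key)
            for p in pathlib.Path(directory).glob("*.Netapi64")]
```

Because XOR is symmetric, the same routine both encodes and decodes, which keeps such loaders small and generic.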
Backdoor and API Abuse
The second stage, OpenAIAgent.Netapi64, does not interact with AI models; instead, it uses OpenAI’s Assistants API endpoints as a C2 channel.
Upon launch, the backdoor reads a configuration embedded in its resource section, formatted as:
<OpenAI_API_Key>|<Dictionary_Key_Name>|<Proxy>
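Parsing that pipe-delimited configuration is straightforward; the sketch below is an illustrative reconstruction, not recovered malware code (field handling, such as treating an empty proxy as absent, is an assumption):

```python
def parse_config(blob: str) -> dict:
    """Split the embedded configuration into its three pipe-delimited fields."""
    api_key, dict_key_name, proxy = blob.split("|", 2)
    return {
        "api_key": api_key,
        "dictionary_key": dict_key_name,
        "proxy": proxy or None,  # assumption: empty field means "no proxy"
    }

# Example with placeholder values:
cfg = parse_config("sk-EXAMPLE|cmd_dict|http://proxy.local:8080")
```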
Using this data, the malware connects to the attacker’s OpenAI account, lists Assistants and vector stores via API calls, and reads instructions hidden in Assistant descriptions - typically labeled SLEEP, Payload, or Result.
If an Assistant is marked Payload, the malware retrieves an encrypted command. The message contains a Base64-encoded AES key (itself encrypted with a hard-coded RSA key) and a GZIP-compressed, AES-encrypted payload. The malware decrypts and decompresses the payload, interprets it as a dictionary of commands, then executes it through .NET's JScript engine, invoked via reflection.
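The unwrapping sequence can be sketched as follows. The `key`/`body` message fields and the caller-supplied crypto callables are assumptions standing in for details Microsoft did not publish; only the Base64/GZIP layering uses the standard library directly:

```python
import base64
import gzip
import json

def unwrap_payload(message: dict, rsa_decrypt, aes_decrypt) -> dict:
    """Illustrative reconstruction of the layered unwrapping:
    1. Base64-decode and RSA-decrypt the per-message AES key.
    2. AES-decrypt the payload body.
    3. GZIP-decompress and parse the command dictionary.
    rsa_decrypt and aes_decrypt are placeholders for the crypto layers,
    whose exact parameters are not public."""
    aes_key = rsa_decrypt(base64.b64decode(message["key"]))
    compressed = aes_decrypt(aes_key, base64.b64decode(message["body"]))
    return json.loads(gzip.decompress(compressed))
```

Layering a per-message AES key under a hard-coded RSA key means each command is encrypted with fresh key material while the operator only needs to embed one public key in the binary.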
Execution results are compressed and encrypted, then sent back to the attacker by creating a new Assistant whose name is the victim’s Base64-encoded hostname and whose description is marked Result. The entire exchange occurs over HTTPS to api.openai.com, blending with legitimate traffic and evading many network-based detections.
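Defenders auditing an OpenAI account's Assistant objects can invert that naming trick. The sketch below flags Assistant names that Base64-decode to a hostname-like string; the hostname pattern is an assumption and will produce both false positives and misses:

```python
import base64
import re
import socket

def looks_like_encoded_hostname(name: str) -> bool:
    """Flag Assistant names that Base64-decode to a hostname-like string -
    the naming convention Microsoft reported for SesameOp's Result objects."""
    try:
        decoded = base64.b64decode(name, validate=True).decode("ascii")
    except Exception:
        return False
    # Assumed pattern: alphanumeric start, then letters/digits/dots/hyphens.
    return bool(re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9.\-]{1,62}", decoded))

# Example: the name such malware would generate for the local machine.
encoded = base64.b64encode(socket.gethostname().encode()).decode()
```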
Persistence and Evasion
The malware’s obfuscation and DLL injection techniques make static analysis and signature detection difficult. Because communications occur with a trusted domain, reputation-based defenses often fail to flag it. The malware also deletes Assistants and messages after each operation to reduce traces in the attacker account, and leverages developer utilities to piggyback on trusted processes.
Risks
SesameOp highlights several practical risks for enterprises and security teams.
Trusted Service Abuse
Attackers can hide C2 and data exfiltration inside legitimate API calls to well-known cloud providers, nullifying simple allowlist-based defenses.
Long-Term Persistence
The campaign prioritized stealth and intelligence gathering over destructive behavior, enabling months of undetected access in the observed case.
Complex Payload Delivery
Multi-layer encryption (RSA + AES), compression (GZIP), and reflection-based script execution make payloads difficult to detect with signature-only engines.
Developer Toolchain Exposure
Compromise or abuse of developer environments and signed utilities (for example, Visual Studio) is a high-value vector because such tools are often broadly trusted within organizations.
Detection Gaps in Cloud-Integrated Environments
Blanket allowlisting of major cloud API domains is insufficient. Application, user, and host context are required to separate benign and malicious uses of the same endpoints.
Real-Life Example Usage
The SesameOp campaign came to light after Microsoft investigated a compromised enterprise network in July 2025. The attackers had injected malicious libraries into legitimate Visual Studio utilities to maintain persistence.
Once active, the loader Netapi64.dll executed the backdoor OpenAIAgent.Netapi64. The backdoor reached out to the OpenAI Assistants API, using a valid API key and account owned by the threat actor. From there, it fetched encrypted instructions hidden within Assistant metadata, executed them locally, and returned the results to the attacker - all disguised as normal API interactions.
Microsoft concluded that the campaign’s objective was espionage, not financial gain. OpenAI cooperated in disabling the malicious account and API key. However, the tactic itself - abusing legitimate cloud APIs - remains a growing challenge for defenders.
Recommendations
1. Enhance Endpoint Protection
Enable tamper protection and EDR block mode in Defender or equivalent solutions. Activate cloud-delivered protection and Potentially Unwanted Application (PUA) blocking to catch unknown threats. Monitor module-load (LoadLibrary) events for unexpected DLLs such as Netapi64.dll and for assemblies bearing Eazfuscator markers.
2. Monitor Cloud API Traffic
Regularly audit outbound connections to large-scale cloud APIs like OpenAI, Anthropic, and AWS. Investigate any processes that make outbound HTTPS requests to AI or LLM-related domains, especially non-browser executables.
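As a starting point, such an audit can be scripted against a process-to-destination log export. The CSV column names and process list below are hypothetical and should be adapted to your proxy or EDR schema:

```python
import csv
import io

# Assumed watchlists - extend to match your environment.
AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com"}
BROWSERS = {"chrome.exe", "msedge.exe", "firefox.exe"}

def flag_suspicious(log_csv: str) -> list[dict]:
    """Scan a process-to-destination log (hypothetical CSV with 'process'
    and 'dest_host' columns) and return rows where a non-browser process
    contacted an AI API endpoint."""
    return [row for row in csv.DictReader(io.StringIO(log_csv))
            if row["dest_host"] in AI_API_DOMAINS
            and row["process"].lower() not in BROWSERS]
```

Rows returned by this filter are leads, not verdicts: legitimate automation and developer tooling also call these endpoints, so correlate with host and user context before acting.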
3. Harden Developer Toolchains
Restrict the installation of Visual Studio and .NET development tools on non-developer systems. Implement application control (AppLocker, Microsoft Defender Application Control, or equivalent) to block unauthorized binaries. Monitor for abnormal AppDomainManager or .NET runtime configuration changes and for unexpected DLL loads into developer processes.
4. Segment and Control API Usage
Replace broad domain allowlisting with per-host or per-user API access policies: only approved systems should be able to call external AI APIs. Centralize API key management: enforce least privilege on keys, rotate keys regularly, and revoke unused keys immediately.
5. Expand Behavioral Analytics
Deploy logging that captures process-to-network relationships. Build baselines of normal cloud API usage to spot anomalies. Look for indicators such as unusual DLL names (for example, Netapi64.dll), new Assistants created by non-standard processes, or GZIP-compressed data blobs in outbound HTTPS sessions.
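The GZIP indicator is cheap to check: GZIP streams begin with the magic bytes 0x1f 0x8b. A minimal sketch - bearing in mind this is a weak signal on its own, since GZIP is ubiquitous in benign traffic and must be correlated with other anomalies:

```python
import gzip

def has_gzip_magic(blob: bytes) -> bool:
    """Return True if the blob starts with the GZIP magic bytes 0x1f 0x8b."""
    return blob[:2] == b"\x1f\x8b"

# Example: compressed data carries the magic; plain text does not.
sample = gzip.compress(b"command output")
```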
6. Prepare for Cloud-Based Persistence
Incorporate “trusted-service abuse” scenarios into incident response playbooks. Extend monitoring beyond IP reputation - include behavioral analysis and application context.
Final Thoughts
SesameOp is a preview of where threat operations are heading. Attackers no longer need hidden servers or obscure domains - they can blend into legitimate cloud traffic, using APIs like OpenAI’s as covert communication channels.
For cybersecurity teams, this means visibility and behavioral detection now outweigh traditional blocklist approaches. The line between “trusted” and “malicious” services is fading, and defenders must adapt quickly to this new era of AI-assisted stealth malware.