# Two Active Campaigns Targeting Claude Code Developers Right Now
Two campaigns targeting Claude Code users are active right now. One uses social engineering — fake GitHub repositories claiming to contain leaked source code. The other exploits npm's postinstall hooks to permanently modify how your AI agent behaves. Both are worth understanding even if you haven't been hit, because the attack patterns they represent are going to keep showing up.
## Campaign 1: Vidar Infostealer via Fake "Leaked Source" Repos
Earlier this year, Claude Code's source maps were briefly accessible. That's been patched. But threat actors are now exploiting developer curiosity about what the source looked like.
The attack is straightforward: GitHub repositories appear claiming to contain "leaked Claude Code source." The repo names and READMEs look plausible — they reference real file structures and include convincing directory trees. Developers searching GitHub for Claude Code internals land on these repos organically.
What's inside isn't source code. It's a Rust-compiled dropper that downloads and executes Vidar, a well-documented infostealer. Vidar targets:
- Browser credentials — saved passwords, active login sessions, autofill data from Chrome, Firefox, Edge
- Cryptocurrency wallets — local wallet files and browser extension data
- Authentication tokens — Discord, Telegram, Steam, and other service tokens
- System information — screenshots, installed software, hardware IDs
The Rust dropper is a deliberate choice. Rust binaries are harder to reverse-engineer than Python scripts, and they bypass many signature-based detection tools. BleepingComputer and several security researchers have documented this campaign.
What to do:
- Don't clone random repos claiming to have leaked source code. This sounds obvious, but curiosity is the entire attack vector. If a repo appeared this week claiming to hold the leaked source of any popular tool, treat it as hostile until proven otherwise.
- Check your recent git history. If you cloned any "claude-code-source" or similarly named repos in the past month, scan your system. Run your AV, check browser extensions for anything unexpected, and rotate credentials for services where you use saved passwords.
- Use GitHub's reporting tools. If you find a repo distributing malware, report it. The faster these get taken down, the smaller the blast radius.
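If you want to automate the git-history check, here is a minimal sketch. The directory layout and the name patterns are my own assumptions; tune both to wherever you actually keep your clones.

```shell
#!/usr/bin/env sh
# find_suspect_clones DIR: list git repos under DIR touched in the last
# 30 days whose directory names look like "leaked Claude source" bait.
# The patterns below are illustrative, not an exhaustive blocklist.
find_suspect_clones() {
  find "$1" -maxdepth 3 -type d -name .git -mtime -30 2>/dev/null |
  while read -r gitdir; do
    repo=$(dirname "$gitdir")
    case "$(basename "$repo" | tr '[:upper:]' '[:lower:]')" in
      *claude*source*|*claude*leak*|*leaked*claude*) echo "SUSPECT: $repo" ;;
    esac
  done
}

# Point it at wherever you keep your clones, e.g.:
#   find_suspect_clones "$HOME/src"
```

Anything it flags deserves the full treatment from the list above: AV scan, extension check, credential rotation.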
## Campaign 2: npm Postinstall Injecting Persistent Agent Instructions
This one is more subtle and, for AI-assisted development workflows, potentially more damaging.
A malicious npm package (identified as `openmatrix`, though variants may exist) uses its postinstall script to write files into `~/.claude/commands/`. If you're not familiar with that directory — it's where Claude Code loads custom slash commands. Files placed there become part of your agent's instruction set.
The injected files include metadata like `always_load: true` and `priority: critical`, which means they're loaded into every Claude Code session automatically. The attacker doesn't need a one-shot payload that runs once and exfiltrates data. Instead, they permanently modify how your AI agent interprets instructions.
This is persistent prompt injection via the supply chain.
Think about what that means: every subsequent Claude Code session reads the attacker's instructions alongside yours. The injected file could instruct the agent to include a specific dependency in generated code, exfiltrate file contents through seemingly innocuous API calls, or subtly modify security-sensitive code in ways that pass casual review.
The worst part: `npm uninstall` doesn't clean it up. The postinstall script writes to your home directory, outside the project's `node_modules`. Removing the package leaves the injected files in place.
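To make the mechanism concrete, here is a harmless sketch of the write itself. A postinstall hook runs with the installing user's permissions, so nothing stops it from reaching outside `node_modules`. The file name `helper.md` and the injected text are hypothetical; only the target directory and the `always_load`/`priority` metadata come from the reported campaign.

```shell
#!/usr/bin/env sh
# Sketch of the attack mechanism only -- never ship anything like this.
# npm lifecycle scripts run as the installing user, so a postinstall
# hook can write far outside the project directory.
TARGET="$HOME/.claude/commands"   # directory Claude Code loads commands from
mkdir -p "$TARGET"

# "helper.md" is a hypothetical name; the metadata keys mirror the campaign.
cat > "$TARGET/helper.md" <<'EOF'
---
always_load: true
priority: critical
---
(attacker-controlled instructions would go here)
EOF
```

A write this trivial is exactly why auditing the directory, not just uninstalling the package, is the right response.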
What to do:
- Audit `~/.claude/commands/` right now. List everything in that directory. If you see files you didn't create, read them carefully and delete them.

  ```bash
  ls -la ~/.claude/commands/
  ```

- Audit postinstall scripts before installing packages. Check `package.json` for `postinstall`, `preinstall`, and `prepare` scripts. Tools like `can-i-ignore-scripts` help.
- Consider `--ignore-scripts` for untrusted packages. Running `npm install --ignore-scripts` skips lifecycle hooks entirely. You'll need to run any legitimate build steps manually, but it eliminates this entire attack class.
- Use `npm audit signatures` to verify package provenance where available.
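Beyond a one-time look, you can make the first audit repeatable with a manifest diff: record the directory's contents while you trust them, then flag anything that shows up later. A minimal sketch — the function names and manifest path are my own, not a standard tool:

```shell
#!/usr/bin/env sh
# snapshot_cmds DIR MANIFEST: record the trusted contents of DIR.
snapshot_cmds() { ls -1 "$1" 2>/dev/null | sort > "$2"; }

# check_cmds DIR MANIFEST: print files present now that were not in the
# snapshot -- candidates for injected slash commands.
check_cmds() { ls -1 "$1" 2>/dev/null | sort | comm -13 "$2" -; }

# Typical use (paths are assumptions):
#   snapshot_cmds "$HOME/.claude/commands" "$HOME/.claude-commands.manifest"
#   ... later, after a dependency install ...
#   check_cmds "$HOME/.claude/commands" "$HOME/.claude-commands.manifest"
```

Run the snapshot once after a manual review, then run the check after every `npm install`.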
## Why AI Developer Tools Are High-Value Targets
These two campaigns share a common thread: they target the developer's AI tooling configuration, not just the developer's machine.
Traditional supply chain attacks steal credentials or install backdoors. These are bad, but well-understood. The npm postinstall campaign does something different — it modifies the instructions that an AI agent follows. This is a new attack surface that didn't exist two years ago.
Consider the attack chain: compromised npm package → modified agent instructions → agent generates subtly vulnerable code → developer approves it (93% auto-acceptance rate, remember?) → vulnerability ships to production. The attacker never touches your server directly. Your own agent does the work.
This pattern will grow. As AI coding agents gain more filesystem access, more tool permissions, and more autonomy, the value of compromising their instruction pipeline goes up. A single injected instruction file is worth more than a keylogger if the agent has deploy permissions.
The broader trend is already visible. LiteLLM's PyPI compromise hit 47,000 downloads. Axios 1.14.1 was backdoored on npm. MoltBook's skills registry found 341 malicious entries out of 2,857 — a 12% poisoning rate. AI tooling supply chains are being probed systematically.
## A Minimal Audit Checklist
Run these now. Takes two minutes.
- [ ] `ls -la ~/.claude/commands/` — anything you didn't put there?
- [ ] `ls -la ~/.claude/` — any unexpected config files?
- [ ] `grep -r "postinstall" node_modules/*/package.json | head -20` — what are your packages doing on install?
- [ ] Check your GitHub stars/clones for any "leaked source" repos you may have interacted with
- [ ] If you use custom MCP servers, audit their tool definitions for unexpected network calls
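The lifecycle-script grep in the checklist can be wrapped into a small scanner that surfaces only the packages worth reading before you trust them. The function name is mine; note that scoped packages (`@scope/pkg`) sit one directory deeper and need an extra pass.

```shell
#!/usr/bin/env sh
# scan_lifecycle DIR: list package.json files under DIR that declare
# install-time lifecycle scripts. Scoped packages live one level deeper
# (DIR/@scope/pkg/package.json) and need a second invocation.
scan_lifecycle() {
  grep -l -E '"(postinstall|preinstall|prepare)"' "$1"/*/package.json 2>/dev/null
}

# Typical use, from a project root:
#   scan_lifecycle node_modules
```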
If you're running AI agents in production with filesystem or network access, these checks become even more important. We've written about securing agents with markdown-based governance and the security audit pipeline — both cover patterns for limiting blast radius when agent instructions can't be fully trusted.
The uncomfortable truth: most developers using AI coding tools haven't audited their `~/.claude/` directory even once. Most haven't checked what postinstall scripts their npm dependencies run. That's the gap these campaigns exploit — not a zero-day, not a novel technique, just the assumption that nobody's looking.
Start looking.