Settings Files Are the New Autoexec.bat
A pattern keeps showing up in agent-tooling incident reports: malicious code lands on a developer machine, writes a few lines into a settings file, and from then on every IDE launch and every agent run is owned. No process to kill. No daemon to find. Just text in a JSON file the user has full write permission to.
It's the same trick autoexec.bat taught DOS users to fear in 1996. We forgot, because for twenty years the equivalent files lived in places you mostly didn't edit by hand. Now we edit them constantly — and so do the tools we install.
The Threat Model
Settings files for agent runners — editor configs, per-project rule files, global hook definitions — all share three properties that make them an attacker's dream:
- They're user-writable. No privilege escalation required. A regular npm package, a Python wheel, a curl-piped install script — any of those can drop content into them with the user's own permissions.
- They're sourced silently at launch. Nothing prompts you. Nothing diffs. The IDE opens, the agent runner boots, and whatever was in the file gets respected.
- They survive reboots. Process trees die. Settings files don't. A single write persists across every future session, every clean shell, every "let me just close everything and start fresh."
Combine those three and you've got persistence better than anything an attacker could install at the system level — because the user is doing the trusting work for them.
The Surface Is Bigger Than You Think
The obvious targets are well-known settings files for editors and agent runners. Less obvious targets surface the same way:
- Shell startup scripts. `.bashrc`, `.zshrc`, `.profile`. Every interactive shell sources them. Aliases that look harmless but shadow `git`, `npm`, or your agent runner's binary are a classic move.
- Global package configs. npm, pip, cargo, gem — all of them respect global config files. A few lines redirect a registry, install a postinstall hook, or set an environment variable that gets injected into every subprocess.
- OS-level scheduled jobs. launchd plists on macOS, systemd user units on Linux. User-scope, no sudo, automatic on login.
- Git config. Per-repo `core.hooksPath` redirects every git operation to run an attacker's script. Conditional includes pull in remote-controllable configs.
- Project-scoped rule files. Anything an agent reads at startup to know how to behave — instruction files, allowlists, tool manifests, MCP server registries. Modify the rules, change the agent's behavior.
The pattern is the same in all of them: a file the user owns, that some tool trusts, gets read automatically.
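A cheap way to start auditing that surface is to walk the usual suspects and flag anything written recently. A minimal sketch — the `WATCHED` list is illustrative, not exhaustive; extend it with your editor's and runner's actual config paths:

```python
import time
from pathlib import Path

# Illustrative sample of user-writable files that tools source automatically.
WATCHED = [
    "~/.bashrc", "~/.zshrc", "~/.profile",
    "~/.npmrc", "~/.gitconfig",
]

def recently_modified(paths, window_hours=24):
    """Return the files changed within the last `window_hours`."""
    cutoff = time.time() - window_hours * 3600
    hits = []
    for p in paths:
        f = Path(p).expanduser()
        if f.exists() and f.stat().st_mtime > cutoff:
            hits.append(str(f))
    return hits
```

Run it after every dependency install; a settings file that changed when you only added a package is exactly the quiet write this post is about.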
Five Defensive Patterns
Most of the standard advice ("review what you install, don't run shady code") is unenforceable for an agent fleet that spawns dozens of tasks a day. Here's what holds up.
1. Committed-vs-Runtime Diff
Keep a canonical version of every settings file checked into a private repository. On every agent runner boot, compute a diff between the committed version and the one on disk. Any unexpected delta — even a single line — halts the runner before it sources anything.
This catches both malicious additions and "well-meaning" mutations from package installers that quietly amend config files. It also forces you to be explicit about what your settings should contain, which turns out to be a useful exercise on its own.
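The check itself is small. A sketch of the boot-time gate, assuming you pair each live settings file with its committed counterpart (the pairing and the halt policy are yours to define):

```python
import difflib
import sys
from pathlib import Path

def settings_drift(canonical: Path, live: Path) -> list[str]:
    """Unified diff between the committed version and the one on disk."""
    a = canonical.read_text().splitlines(keepends=True)
    b = live.read_text().splitlines(keepends=True)
    return list(difflib.unified_diff(a, b, str(canonical), str(live)))

def check_or_halt(pairs) -> None:
    """Halt the runner before anything is sourced if any file drifted."""
    for canonical, live in pairs:
        drift = settings_drift(canonical, live)
        if drift:
            sys.stderr.write("".join(drift))
            sys.exit(1)  # refuse to boot on any unexpected delta
```

Printing the diff rather than a bare failure matters: the single-line delta is the evidence you'll want when deciding whether a write was a package installer or an attacker.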
2. Pre-Launch Integrity Hashing
For settings files that shouldn't ever change without an explicit human action, store a SHA-256 of the expected content. The runner verifies the hash at startup and refuses to load on mismatch. Cheap, fast, hard to fake without writing matching content.
Pair this with version-controlled hashes — the hash file itself is in git, and a CI job rejects pull requests where the hash changes without a corresponding settings change. The result is that any drift from the agreed-upon state is loud.
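A sketch of the verification step, assuming the pinned hashes live in a JSON manifest mapping file paths to expected digests (the manifest format is an assumption; the hashing itself is standard `hashlib`):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 of a file's exact bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_settings(manifest: Path) -> list[str]:
    """Return the paths whose content no longer matches the pinned hash.

    `manifest` is a JSON object of {path: expected_sha256}, itself kept
    in git so hash changes must go through review.
    """
    expected = json.loads(manifest.read_text())
    return [
        p for p, digest in expected.items()
        if sha256_of(Path(p).expanduser()) != digest
    ]
```

An empty return means load; anything else means refuse and alert.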
3. Tripwires via Filesystem Watch
If the file genuinely needs to be writable during a session — many agent runners write back to their own configs to record state — wrap it with a filesystem watcher (fswatch on macOS, inotify on Linux). Any write event during agent execution triggers an alert and, optionally, a kill switch on the runner.
The point isn't to prevent the write. It's to make sure no write happens without you knowing. Most legitimate writes are noisy in obvious ways (a single field updates, you can correlate it with what the agent was doing). Quiet writes are the suspicious ones.
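For illustration, here is a portable polling version of the tripwire. Real deployments should use the OS-native watchers named above, since a poll can miss a write-and-revert between intervals; the structure is the same either way:

```python
import time
from pathlib import Path

def snapshot(path: Path) -> int:
    """Record the file's current modification time in nanoseconds."""
    return path.stat().st_mtime_ns

def changed_since(path: Path, last_ns: int) -> bool:
    """True if the file was written after the snapshot was taken."""
    return path.stat().st_mtime_ns != last_ns

def tripwire(path: Path, on_write, interval=1.0):
    """Poll loop: a stand-in for fswatch (macOS) / inotify (Linux)."""
    last = snapshot(path)
    while True:
        time.sleep(interval)
        if changed_since(path, last):
            last = snapshot(path)
            on_write(path)  # alert, and optionally kill the runner
```

`on_write` is where policy lives: log-and-continue for configs the runner legitimately updates, hard kill for files that should never move mid-session.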
4. Read-Only Where the OS Supports It
Some settings files don't need to be writable after initial setup. Mark them read-only at the filesystem level. On macOS, set the uchg flag with chflags. On Linux, use chattr +i. An attacker now needs to escalate privileges to flip the flag — which raises the bar from "any postinstall script" to "actual exploit."
You can't do this for files the runner needs to update. You absolutely can for hook lists, tool allowlists, and command palettes that change once a quarter.
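A weaker but portable first step is stripping the write bits, which at least stops careless installers; the `chflags uchg` / `chattr +i` flags above remain the real bar-raiser, since the same user can trivially `chmod` their own file back. A sketch of the portable step:

```python
import os
import stat
from pathlib import Path

def lock_down(path: Path) -> None:
    """Drop all write bits on a settings file.

    Reversible by the owning user, so this only defeats tools that write
    blindly. Follow up with `chflags uchg` (macOS) or `chattr +i` (Linux)
    for the flag that requires privilege escalation to undo.
    """
    mode = stat.S_IMODE(path.stat().st_mode)
    os.chmod(path, mode & ~0o222)

def is_locked(path: Path) -> bool:
    """True if no write bit remains set."""
    return not (path.stat().st_mode & 0o222)
```

Run `lock_down` once at setup time for the quarterly-change files, then let the integrity hash from pattern 2 catch anything that flips the bits back.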
5. Per-Project Sandboxing — No Global Hooks
If you're designing an agent runner, the most important defense is to make global hooks impossible. Hooks live per-project, in a file that the runner re-reads each time a project opens, with no cross-project effect. A compromised package in Project A can't reach into Project B.
If you're using an agent runner that someone else designed, prefer per-project configuration over global. Audit whatever global hooks already exist. Most should be empty. The ones that aren't should have a really good reason for living in your home directory rather than in a single repo.
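What "no global hooks" looks like in a loader is mostly what's absent. A sketch, assuming a hypothetical `<project>/.agent/hooks.json` layout — note the deliberate lack of any home-directory or system-wide fallback:

```python
import json
from pathlib import Path

class ProjectConfigError(Exception):
    pass

def load_hooks(project_root: Path) -> list[str]:
    """Load hook commands only from the project's own config file.

    Hypothetical layout: <project>/.agent/hooks.json, a JSON list of
    commands. There is no fallback to a global file, so a compromised
    package in Project A cannot define hooks that run in Project B.
    """
    config = project_root / ".agent" / "hooks.json"
    if not config.exists():
        return []  # no hooks is the safe default
    # Refuse symlink tricks that point the config outside the project.
    if config.resolve().parent.parent != project_root.resolve():
        raise ProjectConfigError("hooks file escapes the project root")
    return json.loads(config.read_text())
```

The symlink check matters: without it, "per-project" config can be silently re-aimed at a shared file, which recreates the global hook you just designed away.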
Why This Is Worse for Agent Stacks
A normal developer compromised by a settings-file injection has a bad day. They notice weird behavior, kill their shell, investigate, recover.
An agent stack compromised the same way has a bad month. The runner spawns dozens of headless sessions a day. Each one sources the poisoned settings. Each one inherits whatever shell aliases, hook commands, or rule overrides got dropped in. The blast radius isn't one developer's laptop — it's every task the agent has touched since the injection landed, and every artifact it pushed downstream.
The reason this category of attack is escalating right now is that agent runners turned a quiet developer-tooling problem into a fleet-scale supply-chain problem. The same techniques that didn't matter much when one human had to be at the keyboard now matter a lot when twenty autonomous workers do.
Read your settings files. Hash them. Watch them. Lock the ones you can. Assume anything you can write to is something somebody else can write to first.
Next Time
What changes when the attacker doesn't need to write to disk at all? Runtime prompt injection against agents that fetch the web, and why most of the defenses you'd reach for first don't help. Coming soon.