Your .env file is the first thing an AI agent reads.
It has to. That’s where the config lives.
But it also has your API keys, your database passwords, your Stripe secrets. And now an LLM has all of them in its context window. Shipped to a remote server, logged somewhere, maybe cached.
This is the problem ghostenv solves.
/the problem
Every AI coding agent (Claude Code, Cursor, Copilot, Codex) reads your project files. That includes .env. It has to understand your project to help you.
But there’s a difference between “needs to know the shape of my config” and “needs my production Stripe key.”
The agent needs to know STRIPE_SECRET exists. It doesn’t need to know the value.
Most developers know this is a problem. Most ignore it, because the alternative (manually redacting and unredacting your .env every time you use an agent) is too annoying to actually do.
/how ghostenv works
```shell
brew install ghostenv
ghostenv init
```

That’s it. ghostenv reads your .env, encrypts the real values into a vault, and replaces them with masked fakes:

```shell
# Before
STRIPE_SECRET=sk_live_abc123xyz

# After (what the agent sees)
STRIPE_SECRET=gv_WUZFHQP7GQPXSMWG
```

The gv_ values are deterministic fakes. They look like real keys. They parse like real keys. But they’re useless.
Real secrets live in .ghostenv/vault.enc, encrypted with AES-256-GCM. The master key is stored in your OS keychain. macOS Keychain with Touch ID, or GPG on Linux.
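A minimal sketch of what an AES-256-GCM vault like this could look like, using the `cryptography` package. This is an assumed design for illustration, not ghostenv’s actual implementation; in ghostenv the key lives in the OS keychain, not in memory like here.

```python
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_vault(secrets: dict, key: bytes) -> bytes:
    """Encrypt the secret map; the 12-byte nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, json.dumps(secrets).encode(), None)
    return nonce + ct

def decrypt_vault(blob: bytes, key: bytes) -> dict:
    nonce, ct = blob[:12], blob[12:]
    return json.loads(AESGCM(key).decrypt(nonce, ct, None))

# In ghostenv, this key would come from macOS Keychain or GPG, never from disk.
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_vault({"STRIPE_SECRET": "sk_live_abc123xyz"}, key)
assert decrypt_vault(blob, key)["STRIPE_SECRET"] == "sk_live_abc123xyz"
```

GCM also authenticates the ciphertext, so a tampered vault fails to decrypt instead of yielding garbage secrets.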
/running commands that need secrets
Your agent writes code that calls an API. The code needs the real key. But the agent shouldn’t see it.
```shell
ghostenv run uv run python app.py
```

ghostenv injects the real values as environment variables into the child process. The agent sees the command succeed. It never sees the key.
On macOS, this requires Touch ID. On Linux, GPG passphrase. No silent access.
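Conceptually, the injection step looks like the sketch below: decrypt the real values, merge them into the child’s environment, and run the command. The `load_real_secrets` helper is illustrative, not ghostenv’s API; in ghostenv it would decrypt the vault after a Touch ID or GPG prompt.

```python
import os
import subprocess
import sys

def load_real_secrets() -> dict:
    # Placeholder: ghostenv decrypts .ghostenv/vault.enc here,
    # gated by Touch ID (macOS) or a GPG passphrase (Linux).
    return {"STRIPE_SECRET": "sk_live_abc123xyz"}

def run_with_secrets(cmd: list[str]) -> int:
    # Only the child process gets the real values; the parent never prints them.
    env = {**os.environ, **load_real_secrets()}
    return subprocess.run(cmd, env=env).returncode

code = run_with_secrets(
    [sys.executable, "-c", "import os; assert 'STRIPE_SECRET' in os.environ"]
)
assert code == 0
```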
/the policy system
Not every command should get secrets. ghostenv uses an allowlist:
```yaml
# .ghostenv/policy.yaml
allow:
  - command: npm publish
    inject:
      - NPM_TOKEN
  - command: docker push *
    inject:
      - all
  - command: uv run *
    inject:
      - all
```

Commands not in the policy get rejected. Runtimes like python, bash, node are blocked entirely. They’re too general.
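The allowlist check can be sketched with shell-style glob matching, hard-blocking bare runtimes before the policy is even consulted. These are assumed semantics for illustration, not ghostenv’s actual matcher.

```python
import fnmatch

# Runtimes that get rejected regardless of policy: they run arbitrary code.
BLOCKED_RUNTIMES = {"python", "python3", "bash", "sh", "node"}
POLICY = ["npm publish", "docker push *", "uv run *"]

def is_allowed(cmd: str) -> bool:
    if cmd.split()[0] in BLOCKED_RUNTIMES:
        return False  # too general to ever receive secrets
    return any(fnmatch.fnmatch(cmd, pattern) for pattern in POLICY)

assert is_allowed("uv run python app.py")
assert not is_allowed("python app.py")      # blocked runtime
assert not is_allowed("curl example.com")   # not in policy
```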
An AI agent can add policy entries, but it can’t add blocked runtimes. And every ghostenv run call is visible in the terminal for you to approve.
/agent detection
ghostenv detects when it’s being called by an AI agent.
It walks the process tree looking for known agent process names: claude, cursor, codex, aider, copilot, and others.
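The check itself is simple once you have the ancestor names; a sketch, with the list passed in rather than read from the OS (in practice it would come from something like psutil’s `Process.parents()`):

```python
AGENT_NAMES = {"claude", "cursor", "codex", "aider", "copilot"}

def agent_in_tree(ancestor_names: list[str]) -> bool:
    # Flag if any ancestor process matches a known agent name.
    return any(name.lower() in AGENT_NAMES for name in ancestor_names)

assert agent_in_tree(["zsh", "claude", "launchd"])
assert not agent_in_tree(["zsh", "tmux", "launchd"])
```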
When an agent is detected:
- `ghostenv show` is blocked
- `ghostenv edit` is blocked
- `ghostenv restore` is blocked
- `ghostenv run --all` is blocked
- Output from `ghostenv run` is scrubbed of secret values
The agent can use your secrets. It can’t see them.
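The scrubbing step can be sketched as a straight substitution pass over the captured output before it reaches the agent (an assumed approach; the placeholder format here is illustrative):

```python
def scrub(output: str, secrets: dict[str, str]) -> str:
    # Replace every real secret value with a masked placeholder.
    for name, value in secrets.items():
        output = output.replace(value, f"gv_<{name}>")
    return output

out = scrub("charged card via sk_live_abc123xyz", {"STRIPE_SECRET": "sk_live_abc123xyz"})
assert "sk_live" not in out
```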
/what it doesn’t protect against
ghostenv is not a sandbox. If a command has your secrets in its environment, it can exfiltrate them: write them to a file, POST them to a server.
But the threat model isn’t “malicious code execution.” It’s “AI agent accidentally or casually leaking secrets through its context window.”
That’s the common case. That’s what ghostenv stops.
/setup in 60 seconds
```shell
# Install
brew install rituraj003/tap/ghostenv

# Lock your .env
cd your-project
ghostenv init

# Allow commands that need secrets
ghostenv policy add "npm publish"
ghostenv policy add "uv run *"

# Run with secrets
ghostenv run uv run python app.py
```

ghostenv auto-generates a CLAUDE.md so AI agents know to use ghostenv run instead of reading .env directly.
/links
- GitHub
- Install: `brew install rituraj003/tap/ghostenv`
- Works on macOS (Touch ID) and Linux (GPG)