We shipped an AI agent to production without guardrails. It sent 4,000 duplicate emails in 90 seconds. That was the last time.
In 2024, we were engineering leads at a fintech company deploying AI agents to real workflows — document review, account reconciliation, customer outreach. We had capable models. We had great prompts. What we didn't have was any meaningful perimeter around what those agents could do.
The culprit was a billing-notification agent that entered a retry loop. In 90 seconds, it called our email provider 4,000 times. Every customer got the same notification dozens of times. We had no kill switch, no rate limit at the agent level, and no audit trail to reconstruct what had happened.
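In hindsight, even a crude agent-level guard would have capped the blast radius. Here's a minimal sketch of the two missing controls; `AgentGuard` and `send_email` are illustrative names, not our actual stack:

```python
import threading
import time


def send_email(to: str, body: str) -> None:
    """Stand-in for the real email-provider call (hypothetical)."""
    print(f"sending to {to}")


class AgentGuard:
    """Per-agent token bucket plus a manual kill switch."""

    def __init__(self, max_actions: int, per_seconds: float) -> None:
        self.capacity = float(max_actions)
        self.tokens = float(max_actions)
        self.refill_rate = max_actions / per_seconds   # tokens per second
        self.last_refill = time.monotonic()
        self.killed = False
        self._lock = threading.Lock()

    def kill(self) -> None:
        """Hard stop: every subsequent action is refused."""
        self.killed = True

    def allow(self) -> bool:
        with self._lock:
            if self.killed:
                return False
            now = time.monotonic()
            elapsed = now - self.last_refill
            self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
            self.last_refill = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False


guard = AgentGuard(max_actions=50, per_seconds=60)   # 50 sends per minute, per agent


def send_notification(to: str, body: str) -> None:
    if not guard.allow():
        raise RuntimeError("agent rate limit hit or kill switch engaged")
    send_email(to, body)
```

Against a guard like this, a retry loop fails fast after the fiftieth send instead of the four-thousandth, and a human can flip the kill switch while diagnosing the loop.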
We fixed it, then looked for a product that would prevent it from happening again. We couldn't find one we trusted — everything we evaluated was either after-the-fact monitoring or so restrictive it made agents useless. So we built Agent Enclosure.
Capable agents and safe agents can be the same thing. The answer isn't to hobble your agents with overly restrictive prompts. The answer is a well-defined perimeter.
Agent Enclosure gives you a sandboxed execution environment where every action is bounded, logged, and reversible. Agents can be powerful. They just need to run inside a fence.
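What does "bounded, logged, and reversible" look like in code? This isn't Agent Enclosure's actual API, just a sketch of the contract a fence implies: no action runs without a policy check, an audit record, and a registered compensating action.

```python
import time
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class Enclosure:
    """Illustrative perimeter: an allowlist, an audit log, and an undo stack."""

    allowed_actions: set[str]                       # the agent's explicit bounds
    audit_log: list[dict] = field(default_factory=list)
    undo_stack: list[Callable[[], None]] = field(default_factory=list)

    def execute(self, action: str, fn: Callable[..., Any],
                undo: Callable[[], None], **kwargs: Any) -> Any:
        if action not in self.allowed_actions:      # bounded
            raise PermissionError(f"{action!r} is outside this agent's perimeter")
        self.audit_log.append(                      # logged
            {"ts": time.time(), "action": action, "args": kwargs}
        )
        self.undo_stack.append(undo)                # reversible
        return fn(**kwargs)

    def rollback(self) -> None:
        """Walk the undo stack backwards, compensating for each completed action."""
        while self.undo_stack:
            self.undo_stack.pop()()


# Hypothetical usage: this agent may flag invoices and nothing else.
enclosure = Enclosure(allowed_actions={"invoice.flag"})
enclosure.execute(
    "invoice.flag",
    fn=lambda invoice_id: print(f"flagged {invoice_id}"),
    undo=lambda: print("unflagged inv_123"),
    invoice_id="inv_123",
)
```

In production the check has to live outside the agent's own process, so a misbehaving agent can't route around it, but the contract is the same.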
We're building for security-conscious teams at fintech, healthtech, and enterprise SaaS — teams where "an AI made a mistake" is not an acceptable post-incident summary.
Every agent should have an explicit, auditable manifest of what it's allowed to do. No ambient authority.
If you can't replay what an agent did, you can't learn from it. Every action deserves a record.
We build audit logs and controls that regulators understand — because our customers need to pass reviews, not just feel secure.
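The first two of those principles need surprisingly little machinery. A hedged sketch with illustrative field names rather than a real schema: a manifest that enumerates capabilities with limits, and a replay helper that walks an append-only audit log, one JSON record per line.

```python
import json

# Capability manifest: if it isn't listed here, the agent can't do it.
MANIFEST = {
    "agent": "billing-notifier",
    "capabilities": {
        "email.send":  {"max_per_minute": 50},
        "ledger.read": {"scope": "billing"},
    },
}


def replay(log_path: str) -> list[dict]:
    """Reconstruct an agent's run, action by action, from its audit log."""
    actions = []
    with open(log_path) as f:
        for line in f:                  # append-only, one JSON record per line
            record = json.loads(line)
            actions.append(record)
            print(f'{record["ts"]}  {record["action"]}  {record.get("args", {})}')
    return actions
```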
We're a small team with a focused mission. If you're building in AI security, we'd love to talk.