For the past year, AI has been mostly a product question for compliance teams: "Are we training on customer data?" "Do we have a data processing agreement with our AI provider?" "What's our retention policy for AI-generated outputs?"
That's changing. As more companies deploy AI agents — not just AI assistants, but agents that take actions in real systems — regulators and auditors are starting to ask a different and harder set of questions. Questions that most teams haven't prepared for.
We've talked to dozens of security engineers, compliance leads, and GRC teams at fintech and healthtech companies. Here are the questions they're being asked:
SOC 2 CC6.3 requires that access to information assets is restricted based on roles and least-privilege principles. Auditors are extending this to AI agents. "What was this agent's permission set? Who authorized it? When was it last reviewed?"
Most teams answer this by describing their IAM configuration — but as we explored in our piece on agent permissions and IAM, traditional IAM doesn't capture the full picture of agent access. Auditors are starting to notice.
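To make the auditor's three questions concrete, here is a minimal sketch of a per-agent permission manifest that records the permission set alongside who authorized it and when it was last reviewed. All names (`AgentPermissionManifest`, the resource strings, the review window) are illustrative assumptions, not a real API:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Sketch: a per-agent manifest that answers CC6.3-style questions directly —
# what the permission set is, who authorized it, and when it was reviewed.
@dataclass
class AgentPermissionManifest:
    agent_id: str
    allowed_resources: frozenset[str]   # explicit allowlist; nothing implied
    authorized_by: str                  # the human owner of the grant
    last_reviewed: date

    def permits(self, resource: str) -> bool:
        """Allowlist check: anything not explicitly granted is denied."""
        return resource in self.allowed_resources

    def is_review_overdue(self, max_age_days: int = 90) -> bool:
        """Flag manifests that haven't been re-reviewed within the window."""
        return date.today() - self.last_reviewed > timedelta(days=max_age_days)

manifest = AgentPermissionManifest(
    agent_id="billing-agent-01",
    allowed_resources=frozenset({"invoices:read", "invoices:write"}),
    authorized_by="jane.doe@example.com",
    last_reviewed=date(2024, 1, 15),
)
assert manifest.permits("invoices:read")
assert not manifest.permits("customers:delete")
```

The point of keeping authorization and review metadata in the manifest itself, rather than in a spreadsheet, is that the answer to the auditor's question is then a lookup, not an archaeology exercise.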
This is the most basic audit question, and many teams can't answer it precisely. They have application logs, database write logs, API gateway logs — but not a unified view of "this agent session, in this time window, took these actions."
Reconstructing agent behavior from fragmented logs is painful, time-consuming, and often incomplete. The absence of a unified agent action log is increasingly a finding in SOC 2 reviews.
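A unified view doesn't require exotic tooling, just a session-keyed record per action. The sketch below shows the shape of such a record (field names are illustrative assumptions), so that "this agent session, in this time window, took these actions" becomes a single query instead of a cross-log reconstruction:

```python
from datetime import datetime, timezone

# Sketch: one unified, session-keyed record per agent action.
def log_agent_action(log: list, session_id: str, agent_id: str,
                     action: str, target: str, outcome: str) -> dict:
    entry = {
        "session_id": session_id,   # ties every action to one agent run
        "agent_id": agent_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,           # e.g. "db.write", "api.call"
        "target": target,
        "outcome": outcome,
    }
    log.append(entry)
    return entry

def actions_for_session(log: list, session_id: str) -> list:
    """The query an auditor actually asks: everything one session did."""
    return [e for e in log if e["session_id"] == session_id]

log: list = []
log_agent_action(log, "sess-42", "billing-agent-01", "db.write", "invoices", "ok")
log_agent_action(log, "sess-43", "billing-agent-01", "api.call", "stripe", "ok")
assert len(actions_for_session(log, "sess-42")) == 1
```

In production this would write to durable storage rather than an in-memory list, but the design choice that matters is the session identifier attached at write time; it cannot be recovered later from fragmented application and gateway logs.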
This is the question that stumps most teams. The honest answer is often "we're relying on our prompt to constrain the agent's behavior" — which is not a satisfying answer to an auditor who has read about prompt injection.
The better answer: "The agent ran inside an enclosure with explicit allowlist-only access, so it was architecturally incapable of accessing resources outside its manifest." That answer requires infrastructure, not just documentation.
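"Architecturally incapable" can be sketched in a few lines. Here the runtime, not the prompt, enforces the allowlist, so an out-of-manifest access fails closed even if the prompt is subverted. The class and exception names are hypothetical, not a real library:

```python
# Sketch: allowlist-only enforcement at the runtime boundary.
class AccessDenied(Exception):
    pass

class EnclosedRuntime:
    def __init__(self, allowlist: set[str]):
        self.allowlist = frozenset(allowlist)

    def access(self, resource: str) -> str:
        # Denial happens before any call is made: the agent cannot reach
        # resources outside its manifest regardless of what its prompt says.
        if resource not in self.allowlist:
            raise AccessDenied(f"{resource} is not in the agent's manifest")
        return f"granted:{resource}"

runtime = EnclosedRuntime({"invoices:read"})
assert runtime.access("invoices:read") == "granted:invoices:read"
try:
    runtime.access("payroll:read")
except AccessDenied:
    pass  # out-of-manifest access fails closed
```

The difference between this and a prompt-level instruction is exactly the difference the auditor is probing for: one is a control, the other is a request.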
This is emerging particularly in financial services and healthcare, where transaction decisions need to be explainable. "An AI agent made this recommendation" is increasingly not sufficient. Auditors want to see: what inputs did the agent receive, what reasoning did it follow, what actions did it take, and what was the output?
This requires capturing not just the final action but the full execution trace — tool calls, intermediate outputs, decision context.
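A minimal sketch of such a trace, capturing inputs, tool calls, intermediate results, and the final output as ordered events in one session. The event kinds and field names here are illustrative assumptions:

```python
from datetime import datetime, timezone

# Sketch: a full execution trace, not just the final action. Each event
# records what the agent saw, which tool it called, and what came back —
# the chain an auditor needs to explain a decision.
class ExecutionTrace:
    def __init__(self, session_id: str):
        self.session_id = session_id
        self.events: list[dict] = []

    def record(self, kind: str, detail: dict) -> None:
        self.events.append({
            "session_id": self.session_id,
            "ts": datetime.now(timezone.utc).isoformat(),
            "kind": kind,    # "input", "tool_call", "tool_result", "output"
            "detail": detail,
        })

trace = ExecutionTrace("sess-42")
trace.record("input", {"prompt": "Is this transaction fraudulent?"})
trace.record("tool_call", {"tool": "risk_score", "args": {"txn_id": "t-9"}})
trace.record("tool_result", {"tool": "risk_score", "score": 0.87})
trace.record("output", {"recommendation": "flag for manual review"})

kinds = [e["kind"] for e in trace.events]
assert kinds == ["input", "tool_call", "tool_result", "output"]
```

With events captured in order, "why did the agent recommend this?" is answerable by replaying the trace rather than guessing from the final output.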
Based on the questions above, here's what an agent audit log needs to look like to hold up in a compliance review — starting with the distinction that trips teams up.

A log is a record of what happened. An audit trail is a record that can be relied upon in a compliance review or legal proceeding. The difference matters in practice.
Audit trails need to be:

- Complete — every action the agent took is captured, with no gaps to explain away.
- Tamper-evident — the record can be shown not to have been altered after the fact.
- Attributable — each action ties back to a specific agent, session, and authorizing identity.
Application logs typically fail on completeness and tamper-evidence. Scattered API logs fail on attribution. Building a proper audit trail for agent actions requires thinking about these properties explicitly.
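One common way to make a log tamper-evident is hash chaining: each entry commits to the hash of the previous entry, so any after-the-fact edit breaks every subsequent link. This is a minimal sketch of the idea, not a production design (real systems would also anchor the chain externally and sign entries):

```python
import hashlib
import json

# Sketch: tamper evidence via hash chaining. Editing any historical entry
# changes its hash, which no longer matches the "prev" recorded downstream.
def append_entry(chain: list, payload: dict) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    entry = {"payload": payload, "prev": prev_hash,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify(chain: list) -> bool:
    """Recompute every link; any mismatch means the history was altered."""
    prev = "0" * 64
    for e in chain:
        body = json.dumps({"payload": e["payload"], "prev": prev}, sort_keys=True)
        if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

chain: list = []
append_entry(chain, {"action": "db.write", "target": "invoices"})
append_entry(chain, {"action": "api.call", "target": "stripe"})
assert verify(chain)

chain[0]["payload"]["target"] = "payroll"   # tamper with history
assert not verify(chain)
```

The property this buys is exactly the one application logs lack: you can demonstrate to a reviewer that the record they're reading is the record that was written.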
For teams operating in healthcare, HIPAA's audit controls standard (§164.312(b)) requires "hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information."
If AI agents access or process ePHI — which they do in any clinical or administrative workflow — they fall squarely under this requirement. The audit controls need to cover the agent's activity, not just the underlying data systems. This is a gap most healthcare AI teams haven't closed.
The compliance posture shift: "We have AI" used to be a product announcement. Now it's a compliance disclosure that triggers specific questions. Teams building for regulated industries need to treat agent auditability as a first-class requirement, not an afterthought.
The cost of retrofitting audit infrastructure into a deployed agent system is high. Access logs are scattered, integrity can't be proven retroactively, and session-level attribution requires reconstructing history you didn't capture.
The right time to build audit infrastructure is before your first production deployment — when you can design the logging and permission model together, rather than layering audit on top of a system that wasn't designed for it.
Agent Enclosure captures a complete, tamper-evident execution trace for every agent session by default. Your audit trail is there when you need it — whether for an internal review, a customer security questionnaire, or a formal compliance audit.
Every agent session logged, attributed, and tamper-evident. Export for SOC 2, HIPAA, or internal review.
Request early access