This repository demonstrates how untrusted AI agents can be safely contained using workload identity and service mesh authorization policies. The lab shows that a compromised agent cannot perform lateral movement, service enumeration, or data exfiltration.
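As an illustration of the containment approach described above, a service-mesh authorization policy can pin access to a workload identity so that a compromised agent's requests are rejected. The sketch below uses an Istio `AuthorizationPolicy`; the namespace, labels, and service-account names are assumptions for illustration, not taken from this repository:

```yaml
# Hypothetical sketch: only the approved workload identity (SPIFFE principal
# established via mTLS) may reach the sensitive service; in Istio, the presence
# of any ALLOW policy implicitly denies all other traffic to the selected workload.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: restrict-agent-access
  namespace: data-services          # assumed namespace
spec:
  selector:
    matchLabels:
      app: sensitive-api            # assumed workload label
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/trusted/sa/approved-client   # assumed service account
```

Because the agent's pod runs under a different service account, its identity never matches the allowed principal, which is what blocks lateral movement regardless of network reachability.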
The lab and its attack scenarios were validated on a live cluster; see the advisor validation memo.
The deployment flow builds images and pushes them to an OCI registry reachable from the cluster.
The default `REGISTRY` value uses ttl.sh (an ephemeral, anonymous public registry) for portability; override it to point at a private registry for air-gapped or production clusters.
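A minimal sketch of how the override works, assuming a conventional `REGISTRY` environment variable with a ttl.sh fallback (the image name and tag here are hypothetical):

```shell
#!/bin/sh
# Fall back to ttl.sh when REGISTRY is unset or empty; callers override it
# with e.g. REGISTRY=registry.example.com for private clusters.
REGISTRY="${REGISTRY:-ttl.sh}"
IMAGE="${REGISTRY}/agent-lab-demo:1h"   # on ttl.sh the tag encodes the image TTL
echo "Pushing to ${IMAGE}"
# docker build -t "${IMAGE}" . && docker push "${IMAGE}"   # actual build/push omitted
```

Running the script unmodified targets `ttl.sh/agent-lab-demo:1h`; exporting `REGISTRY` first redirects the push without editing the script.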