OS-level sandbox for AI coding agents - kernel-enforced file, command, and network isolation
-
Updated
Mar 10, 2026 - Go
OS-level sandbox for AI coding agents - kernel-enforced file, command, and network isolation
AI agent powered by MCP, LangGraph, LangChain, RAG, OpenAI, and PostgreSQL
History Poison Lab: Vulnerable LLM implementation demonstrating Chat History Poisoning attacks. Learn how attackers manipulate chat context and explore mitigation strategies for secure LLM applications.
LLM-powered Python test generaunit-testingtor CLI with single-function parsing, prompt-injection sanitization, output validation, and deterministic settings.
🤖 Transform AI development with powerful tools and frameworks designed for the next generation of intelligent applications and solutions.
Add a description, image, and links to the secure-llm topic page so that developers can more easily learn about it.
To associate your repository with the secure-llm topic, visit your repo's landing page and select "manage topics."