██╗ ██╗███████╗ ██████╗ ██╗ ██╗██████╗ ██████╗███████╗
╚██╗██╔╝██╔════╝██╔═══██╗██║ ██║██╔══██╗██╔════╝██╔════╝
╚███╔╝ ███████╗██║ ██║██║ ██║██████╔╝██║ █████╗
██╔██╗ ╚════██║██║ ██║██║ ██║██╔══██╗██║ ██╔══╝
██╔╝ ██╗███████║╚██████╔╝╚██████╔╝██║ ██║╚██████╗███████╗
╚═╝ ╚═╝╚══════╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝ ╚═════╝╚══════╝
_sec
AI Security Research & Tools
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
We specialize in offensive security research for AI systems. Our focus: finding vulnerabilities in LLMs, AI agents, and RAG architectures before attackers do.
🔴 AI Red Teaming Adversarial testing of production AI systems
🛡️ LLM Security Assessment Prompt injection, jailbreaks, guardrail testing
🤖 Agent Vulnerability Tool abuse, MCP attacks, agentic exploitation
📊 RAG Security Research Data exfiltration, context poisoning vectors
| Project | Description |
|---|---|
| llm-security-payloads | 200+ curated LLM attack payloads |
| agentaudit-cli | Command-line AI security scanner (coming soon) |
🌐 xsourcesec.com
🚀 app.xsourcesec.com
📧 security@xsourcesec.com
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AgentAudit — Automated AI Security Testing