Home
Welcome to the official documentation for RubyLLM::Agents, a production-ready Rails engine for building, managing, and monitoring LLM-powered AI agents.
- Getting Started - Installation and initial setup
- Installation - Detailed installation steps
- Configuration - Configure the initializer
- First Agent - Build your first AI agent
- Generators - Scaffold agents, embedders, and more
- Migration - Upgrade from v0.5.0 to v1.0.0
- Agent DSL - Declarative agent configuration
- Parameters - Required and optional parameters
- Prompts and Schemas - Structure inputs and outputs
- Conversation History - Multi-turn conversations
- Result Object - Access execution metadata
- Tools - Enable agents to call external functions
- Streaming - Real-time response streaming
- Thinking - Extended reasoning and chain-of-thought
- Attachments - Vision and multimodal support
- Caching - Response caching with TTL
- Execution Tracking - Automatic logging and analytics
- Embeddings - Text-to-vector embeddings
- Moderation - Content safety filtering
- Audio - Transcription (speech-to-text) and TTS (text-to-speech)
- Image Operations - Generation, analysis, editing, pipelines
- Reliability Overview - Build resilient agents
- Automatic Retries - Handle transient failures
- Model Fallbacks - Fallback model chains
- Circuit Breakers - Prevent cascading failures
- Workflows Overview - Compose agents
- Pipeline Workflows - Sequential execution
- Parallel Workflows - Concurrent execution
- Router Workflows - Conditional dispatch
- Async/Fiber - Fiber-based concurrent execution with rate limiting
- Budget Controls - Spending limits
- Multi-Tenancy - Per-tenant isolation and budgets
- Alerts - Notifications and webhooks
- PII Redaction - Data protection
- Testing Agents - RSpec patterns and mocking
- Error Handling - Error types and recovery
- Dashboard - Monitoring UI guide
- Database Queries - Execution model queries and analytics
- Production Deployment - Deployment guide
- Background Jobs - Async logging
- Troubleshooting - Common issues
- Best Practices - Production guidelines
- API Reference - Class documentation
- Examples - Real-world use cases
- FAQ - Common questions
- Contributing - How to contribute
RubyLLM::Agents is a Rails engine that provides:
- Clean DSL for defining AI agents with declarative configuration (a minimal sketch follows this list)
- Automatic tracking of every execution with costs, tokens, and timing
- Production reliability with retries, fallbacks, and circuit breakers
- Budget controls to prevent runaway costs
- Workflow orchestration for complex multi-agent scenarios
- Real-time dashboard for monitoring and debugging
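For orientation, the snippet below sketches what a declarative agent definition might look like. It is illustrative only: the base class, the `model` and `param` macros, and the `call`/result interface are assumptions, not the gem's confirmed API, so consult the Agent DSL, Parameters, and Result Object pages for the real thing.

```ruby
# Illustrative sketch only: class, macro, and method names here are assumed,
# not taken from the gem. See the Agent DSL and Result Object pages.
class SummarizerAgent < ApplicationAgent
  model "gpt-4o-mini"            # target LLM (assumed macro)
  param :text, required: true    # declared input (assumed macro)

  def prompt
    "Summarize the following text in three bullet points:\n\n#{params[:text]}"
  end
end

result = SummarizerAgent.call(text: "Long article body...")
result.content  # model output; cost, tokens, and timing are tracked per execution
```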
Current version: v1.0.0-beta.3. See the CHANGELOG for release history.
Through RubyLLM, RubyLLM::Agents supports the following providers (a configuration sketch follows this list):
- OpenAI - GPT-4, GPT-4o, GPT-4o-mini, GPT-3.5
- Anthropic - Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
- Google - Gemini 2.0 Flash, Gemini 1.5 Pro
- And more - Any provider supported by RubyLLM
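Provider credentials are supplied to the underlying RubyLLM library. A minimal sketch, assuming RubyLLM's standard `configure` block in a Rails initializer (the file path is just an example, and you only set keys for the providers you use):

```ruby
# config/initializers/ruby_llm.rb (example path)
# Assumes RubyLLM's configure block; set keys only for the providers you use.
RubyLLM.configure do |config|
  config.openai_api_key    = ENV["OPENAI_API_KEY"]
  config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
  config.gemini_api_key    = ENV["GEMINI_API_KEY"]
end
```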
- GitHub Issues - Report bugs
- GitHub Discussions - Ask questions
- RubyLLM Documentation - Underlying LLM library