
RubyLLM::Agents Documentation

Welcome to the official documentation for RubyLLM::Agents, a production-ready Rails engine for building, managing, and monitoring LLM-powered AI agents.

Quick Navigation

Getting Started

Core Concepts

Features

Specialized Capabilities

Production Features

Reliability

Workflow Orchestration

Concurrency

  • Async/Fiber - Fiber-based concurrent execution with rate limiting

Governance

Development

Operations

Reference


About RubyLLM::Agents

RubyLLM::Agents is a Rails engine that provides:

  • Clean DSL for defining AI agents with declarative configuration (see the sketch after this list)
  • Automatic tracking of every execution with costs, tokens, and timing
  • Production reliability with retries, fallbacks, and circuit breakers
  • Budget controls to prevent runaway costs
  • Workflow orchestration for complex multi-agent scenarios
  • Real-time dashboard for monitoring and debugging
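
To give a concrete feel for the declarative style described above, here is a minimal sketch of an agent definition. The base class name, DSL methods (`model`, `instructions`, `retries`, `budget_limit`), and invocation style are illustrative assumptions, not the gem's confirmed API; see the Getting Started and Core Concepts pages for the actual syntax.

```ruby
# Hypothetical sketch only: the class names, DSL methods, and options below
# are assumptions about what a declarative agent definition could look like.
class SummarizerAgent < RubyLLM::Agents::Base   # assumed base class
  model "gpt-4o-mini"                           # assumed model selector
  instructions "Summarize the input text in three concise bullet points."

  retries 2          # assumed reliability setting (automatic retry on failure)
  budget_limit 5.00  # assumed per-execution cost ceiling in USD
end

# Assumed invocation; each run would then be tracked with cost, tokens,
# and timing, per the feature list above.
result = SummarizerAgent.call(text: "Long article text...")
```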

Current Version

v1.0.0-beta.3 - See CHANGELOG for release history.

Supported LLM Providers

Through RubyLLM, RubyLLM::Agents supports:

  • OpenAI - GPT-4, GPT-4o, GPT-4o-mini, GPT-3.5
  • Anthropic - Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  • Google - Gemini 2.0 Flash, Gemini 1.5 Pro
  • And more - Any provider supported by RubyLLM
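
Since model access goes through RubyLLM, provider credentials are configured there. A minimal sketch, assuming RubyLLM's standard `configure` block and that RubyLLM::Agents reads the same global configuration (for example, from a Rails initializer):

```ruby
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  config.openai_api_key    = ENV["OPENAI_API_KEY"]     # GPT-4o, GPT-4o-mini, ...
  config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]  # Claude 3.5 Sonnet, ...
  config.gemini_api_key    = ENV["GEMINI_API_KEY"]     # Gemini 2.0 Flash, ...
end
```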

Need Help?
