adham90 edited this page Jan 21, 2026 · 2 revisions

Frequently Asked Questions

Common questions about RubyLLM::Agents.

General

What is RubyLLM::Agents?

RubyLLM::Agents is a Rails engine for building, managing, and monitoring LLM-powered AI agents. It provides:

  • Clean DSL for agent configuration
  • Automatic execution tracking
  • Cost analytics and budget controls
  • Reliability features (retries, fallbacks, circuit breakers)
  • Workflow orchestration
  • Real-time dashboard

How is it different from LangChain?

| Aspect        | RubyLLM::Agents       | LangChain          |
|---------------|-----------------------|--------------------|
| Language      | Ruby/Rails            | Python/JS          |
| Integration   | Rails-native          | Framework-agnostic |
| Focus         | Production operations | Rapid prototyping  |
| Observability | Built-in dashboard    | Requires add-ons   |
| Cost tracking | Automatic             | Manual             |

What LLM providers are supported?

Through RubyLLM, we support:

  • OpenAI (GPT-4, GPT-4o, GPT-3.5)
  • Anthropic (Claude 3.5, Claude 3)
  • Google (Gemini 2.0, Gemini 1.5)
  • And more via RubyLLM

What Ruby/Rails versions are required?

  • Ruby >= 3.1.0
  • Rails >= 7.0

Configuration

How do I set API keys?

# Environment variables (recommended)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...

# Or Rails credentials
rails credentials:edit
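
If you keep keys in Rails credentials rather than environment variables, they can be wired up in an initializer. This is a hedged sketch: the `RubyLLM.configure` option names and the credentials structure shown here are assumptions; check RubyLLM's configuration documentation for the exact setting names your version uses.

```ruby
# config/initializers/ruby_llm.rb
# Assumes RubyLLM exposes a `configure` block with per-provider key
# settings; option names may differ in your RubyLLM version.
RubyLLM.configure do |config|
  config.openai_api_key    = Rails.application.credentials.dig(:openai, :api_key)
  config.anthropic_api_key = Rails.application.credentials.dig(:anthropic, :api_key)
  config.gemini_api_key    = Rails.application.credentials.dig(:google, :api_key)
end
```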

How do I change the default model?

# config/initializers/ruby_llm_agents.rb
RubyLLM::Agents.configure do |config|
  config.default_model = "gpt-4o"
end

How do I enable caching?

module LLM
  class MyAgent < ApplicationAgent
    cache 1.hour  # Cache for 1 hour
  end
end

How do I configure the dashboard?

config.dashboard_auth = ->(controller) {
  controller.current_user&.admin?
}

Usage

How do I call an agent?

result = LLM::MyAgent.call(query: "test")
result.content      # The response
result.total_cost   # Cost in USD

How do I use streaming?

module LLM
  class StreamingAgent < ApplicationAgent
    streaming true
  end
end

LLM::StreamingAgent.call(prompt: "Write a story") do |chunk|
  print chunk
end

How do I send images to an agent?

result = LLM::VisionAgent.call(
  question: "Describe this image",
  with: "photo.jpg"
)

How do I get structured output?

def schema
  @schema ||= RubyLLM::Schema.create do
    string :title
    array :tags, of: :string
  end
end

How do I debug an agent?

result = LLM::MyAgent.call(query: "test", dry_run: true)
# Shows the rendered prompts without making an API call

Costs & Budgets

How are costs calculated?

Costs are calculated based on:

  • Input tokens × model input price
  • Output tokens × model output price

Prices are from RubyLLM's model pricing data.
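
The arithmetic can be sketched in plain Ruby. The per-million-token prices below are illustrative only, not RubyLLM's actual pricing data:

```ruby
# Illustrative prices in USD per 1M tokens (not real pricing data).
INPUT_PRICE_PER_M  = 2.50
OUTPUT_PRICE_PER_M = 10.00

# Cost = input tokens x input price + output tokens x output price.
def cost_usd(input_tokens, output_tokens)
  (input_tokens / 1_000_000.0) * INPUT_PRICE_PER_M +
    (output_tokens / 1_000_000.0) * OUTPUT_PRICE_PER_M
end

cost_usd(1_200, 350)  # => 0.0065
```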

How do I set budget limits?

config.budgets = {
  global_daily: 100.0,      # $100/day
  global_monthly: 2000.0,   # $2000/month
  enforcement: :hard        # Block when exceeded
}

How do I check current spending?

RubyLLM::Agents::BudgetTracker.status
# => { global_daily: { limit: 100, current: 45.50, ... } }

Why is my agent blocked?

Check if budget is exceeded:

RubyLLM::Agents::BudgetTracker.exceeded?(:global, :daily)

Reliability

How do retries work?

retries max: 3, backoff: :exponential

Failed requests are automatically retried with increasing delays.
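
Exponential backoff typically doubles the delay on each attempt. A minimal sketch of the delay schedule (the 1-second base delay is an assumed default, not necessarily the engine's actual value):

```ruby
# Delay in seconds before retry attempt `attempt` (1-based),
# doubling each time. The base delay of 1 second is an assumption.
def backoff_delay(attempt, base: 1.0)
  base * (2**(attempt - 1))
end

(1..3).map { |n| backoff_delay(n) }  # => [1.0, 2.0, 4.0]
```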

How do fallbacks work?

model "gpt-4o"
fallback_models "gpt-4o-mini", "claude-3-haiku"

If the primary model fails, fallbacks are tried in order.

What is a circuit breaker?

Circuit breakers prevent cascading failures by temporarily blocking requests to failing services.

circuit_breaker errors: 10, within: 60, cooldown: 300

After 10 errors in 60 seconds, requests are blocked for 5 minutes.
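
The mechanics can be sketched as a sliding-window breaker. This is an illustrative model of the behavior described above, not the engine's actual implementation:

```ruby
# Minimal circuit breaker sketch: opens after `errors` failures
# within `within` seconds, then rejects requests for `cooldown` seconds.
class SimpleBreaker
  def initialize(errors:, within:, cooldown:, clock: -> { Time.now.to_f })
    @errors, @within, @cooldown, @clock = errors, within, cooldown, clock
    @failures = []   # timestamps of recent failures
    @opened_at = nil # when the breaker tripped, or nil if closed
  end

  def record_failure
    now = @clock.call
    @failures << now
    @failures.reject! { |t| t < now - @within } # keep window only
    @opened_at = now if @failures.size >= @errors
  end

  def open?
    return false unless @opened_at
    if @clock.call - @opened_at < @cooldown
      true
    else
      # Cooldown elapsed: reset and allow requests again.
      @opened_at = nil
      @failures.clear
      false
    end
  end
end
```

With `errors: 10, within: 60, cooldown: 300`, the tenth failure inside a minute trips the breaker, and `open?` stays true for five minutes.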


Performance

How do I improve latency?

  1. Enable caching: cache 1.hour
  2. Use streaming: streaming true
  3. Use faster models: model "gpt-4o-mini"
  4. Enable async logging: config.async_logging = true

How do I reduce costs?

  1. Enable caching
  2. Use cheaper models for simple tasks
  3. Set budget limits
  4. Optimize prompts (shorter = cheaper)

Why is the dashboard slow?

  1. Too much data: Set config.retention_period = 30.days
  2. Missing indexes: Run rails generate ruby_llm_agents:upgrade
  3. Complex queries: Reduce config.dashboard_per_page

Data & Privacy

What data is logged?

By default:

  • Agent type, model, status
  • Token counts, costs, duration
  • Parameters (redacted)
  • Prompts (optional)
  • Responses (optional)

How do I redact sensitive data?

config.redaction = {
  fields: %w[ssn credit_card],
  patterns: [/\d{3}-\d{2}-\d{4}/]
}
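
Pattern-based redaction generally amounts to replacing matches with a placeholder before anything is persisted. A sketch of the idea (illustrative, not the engine's actual redactor; the placeholder string is an assumption):

```ruby
# Replace matches of each pattern with a placeholder before logging.
REDACTION_PATTERNS = [/\d{3}-\d{2}-\d{4}/].freeze # e.g. US SSN format

def redact(text, patterns: REDACTION_PATTERNS, replacement: "[REDACTED]")
  patterns.reduce(text) { |acc, pattern| acc.gsub(pattern, replacement) }
end

redact("SSN: 123-45-6789")  # => "SSN: [REDACTED]"
```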

How do I disable prompt storage?

config.persist_prompts = false
config.persist_responses = false

How long is data retained?

config.retention_period = 30.days

Run cleanup regularly to delete old data.


Workflows

How do I chain agents?

workflow = RubyLLM::Agents::Workflow.pipeline(
  Agent1, Agent2, Agent3
)
result = workflow.call(input: data)

How do I run agents in parallel?

workflow = RubyLLM::Agents::Workflow.parallel(
  a: AgentA,
  b: AgentB
)
result = workflow.call(input: data)

How do I route to different agents?

workflow = RubyLLM::Agents::Workflow.router(
  classifier: IntentClassifier,
  routes: {
    "support" => SupportAgent,
    "sales" => SalesAgent
  }
)

Troubleshooting

Agent returns nil

  1. Check for errors: result.success?
  2. Check schema matches response
  3. Try dry_run to see prompts

Executions not appearing in dashboard

  1. Check async logging: is the job processor running?
  2. Try sync: config.async_logging = false
  3. Check for database errors

Rate limit errors

  1. Add retries with backoff
  2. Add fallback models
  3. Implement request queuing

Memory issues

  1. Disable response storage
  2. Set retention period
  3. Use streaming for large responses

Getting Help

Where can I report bugs?

GitHub Issues: https://github.com/adham90/ruby_llm-agents/issues

Where can I ask questions?

GitHub Discussions: https://github.com/adham90/ruby_llm-agents/discussions

How do I contribute?

See the Contributing guide.
