3 changes: 3 additions & 0 deletions examples/rig-simple/.gitignore
@@ -0,0 +1,3 @@
target/
Cargo.lock
generated_recipes.json
14 changes: 14 additions & 0 deletions examples/rig-simple/Cargo.toml
@@ -0,0 +1,14 @@
[package]
name = "rig-simple"
version = "0.1.0"
edition = "2021"

[dependencies]
rig-core = { version = "0.23" }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1.41", features = ["full"] }
reqwest = { version = "0.12", features = ["json"] }
uuid = { version = "1.10", features = ["v4"] }
anyhow = "1.0"
schemars = { version = "1.0", features = ["derive"] }
240 changes: 240 additions & 0 deletions examples/rig-simple/README.md
@@ -0,0 +1,240 @@
# Try Coagent - rig-simple Example

This example demonstrates how to use the `rig-core` library with Ollama to generate structured recipe data in Rust. This is a translation of the `langchain-simple` Python example using the rig framework.

## Prerequisites

1. **Rust 1.70+** - Install from https://rustup.rs
2. **Ollama** - Install from https://ollama.ai
3. **Coagent Server** - Running on `http://localhost:3000`

## Installation

### Install Ollama and Pull a Model

```bash
# Install Ollama (see https://ollama.ai)
# Then pull a model:
ollama pull qwen3:8b
# or
ollama pull llama2
```

## Configuration

Edit `src/config.rs` to customize:
- Ollama model name (default: `qwen3:8b`)
- Ollama base URL (default: `http://localhost:11434`)
- Temperature and max_tokens
- Default ingredients list
- Number of recipes to generate
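
The settings above can be pictured as a plain config struct with defaults. This is an illustrative sketch, not the actual contents of `src/config.rs`; the field names, and the temperature and max_tokens defaults, are assumptions:

```rust
// Hypothetical sketch of the kind of settings src/config.rs holds.
// Field names and the temperature/max_tokens values are illustrative.
#[derive(Debug, Clone)]
struct Config {
    model: String,
    base_url: String,
    temperature: f64,   // assumed default
    max_tokens: u32,    // assumed default
    ingredients: Vec<String>,
    max_recipes: usize,
}

impl Default for Config {
    fn default() -> Self {
        Config {
            model: "qwen3:8b".to_string(),
            base_url: "http://localhost:11434".to_string(),
            temperature: 0.7,
            max_tokens: 2048,
            ingredients: vec!["chicken".to_string(), "rice".to_string()],
            max_recipes: 3,
        }
    }
}

fn main() {
    let cfg = Config::default();
    println!("{} @ {}", cfg.model, cfg.base_url);
}
```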

## Usage

### Build and Run

```bash
cd examples/rig-simple
cargo run
```

### Development

```bash
# Check for compilation errors
cargo check

# Run with verbose output
RUST_LOG=debug cargo run

# Build optimized release version
cargo build --release
./target/release/rig-simple
```

## What This Example Does

1. **Structured Output**: Uses rig's extractor pattern with JsonSchema and serde for type-safe structured data
2. **rig-core Integration**: Demonstrates the rig framework with Ollama LLM
3. **Coagent Logging**: Logs all LLM calls and run metadata to the Coagent server
4. **Metadata Extraction**: Captures timing information
5. **Output**: Generates 3 recipes and saves them to `generated_recipes.json`
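
Before the extractor is invoked, the configured ingredients have to be folded into a prompt string. A minimal std-only sketch of that step (the function name and prompt wording are illustrative, not taken from `src/generator.rs`):

```rust
// Illustrative: turn the ingredient list and recipe count into the
// user prompt that would be handed to the rig extractor.
fn build_prompt(ingredients: &[&str], max_recipes: usize) -> String {
    format!(
        "Generate {} recipes using only these ingredients: {}. \
         Return structured recipe data.",
        max_recipes,
        ingredients.join(", ")
    )
}

fn main() {
    let prompt = build_prompt(&["chicken", "rice", "garlic"], 3);
    println!("{prompt}");
}
```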

## Key Differences from langchain-simple (Python)

This Rust example provides similar functionality to the Python version but with:

- **Type Safety**: Compile-time guarantees for data structures using Rust's type system
- **Performance**: Native performance with async/await using Tokio
- **No Python Dependencies**: Pure Rust implementation using `rig-core`
- **Error Handling**: Robust error handling with `anyhow` and `Result` types
- **Extractor Pattern**: Uses rig's built-in extractor for automatic JSON schema generation

### Rust vs Python Comparison

**Python (langchain-simple)**:
```python
from typing import List

from langchain_ollama import OllamaLLM
from pydantic import BaseModel

class Recipe(BaseModel):
    name: str
    ingredients: List[str]
    # ...

llm = OllamaLLM(model="qwen3:8b")
result = chain.invoke({"ingredients": ingredients_str})
```

**Rust (rig-simple)**:
```rust
use rig::providers::ollama;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(JsonSchema, Serialize, Deserialize)]
struct Recipe {
    name: String,
    ingredients: Vec<String>,
    // ...
}

let client = ollama::Client::from_url("http://localhost:11434");
let extractor = client.extractor::<RecipeCollection>("qwen3:8b").build();
let result = extractor.extract(&prompt).await?;
```

## Why rig-core?

The rig framework provides several advantages:

1. **Unified Interface**: Single API for multiple LLM providers (20+ supported)
2. **Built-in Extractors**: Automatic structured output using JSON Schema
3. **Type Safety**: Full Rust type safety with serde and schemars
4. **Modern Design**: Async/await first design with Tokio
5. **Vector Store Support**: Built-in support for 10+ vector databases
6. **Production Ready**: Used by organizations like St Jude, Coral Protocol, and Nethermind

## Project Structure

```
rig-simple/
├── Cargo.toml            # Rust dependencies
├── README.md             # This file
└── src/
    ├── main.rs           # Entry point and main logic
    ├── config.rs         # Configuration structs
    ├── models.rs         # Recipe data models with JsonSchema
    ├── generator.rs      # Recipe generation logic using rig
    └── coagent_client.rs # Coagent logging client
```

## Troubleshooting

### Ollama Connection Issues
- Ensure Ollama is running: `ollama serve`
- Check available models: `ollama list`
- Verify the model name in `src/config.rs` matches an available model

### Coagent Server Issues
- Ensure the Coagent server is running on `http://localhost:3000`
- Check the server logs for any errors
- The example will continue running even if logging fails (warnings will be printed)

### Model Not Found
```bash
ollama pull qwen3:8b
# or try another model
ollama pull llama2
```

### Compilation Errors
```bash
# Update dependencies
cargo update

# Clean and rebuild
cargo clean
cargo build
```

### JSON Parsing Errors

If you encounter JSON parsing errors, the model may be returning malformed JSON or wrapping it in markdown code blocks. The example includes logic to handle common cases, but you may need to:

1. Try a different model (e.g., `llama3.2`, `qwen3:8b`)
2. Adjust the temperature in `src/config.rs`
3. Check the raw response for debugging
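
The "wrapped in markdown" case can be handled with a small pre-parse cleanup. This is a minimal std-only sketch of that idea; the example's actual handling may differ, and the function name is illustrative:

```rust
// Strip an optional ```json ... ``` fence from a model response so the
// remaining text can be fed to a JSON parser.
fn strip_code_fence(raw: &str) -> &str {
    let trimmed = raw.trim();
    if let Some(rest) = trimmed.strip_prefix("```") {
        // Drop the fence line itself (which may carry a language tag
        // such as "json"), keeping everything after the first newline.
        let body = rest.splitn(2, '\n').nth(1).unwrap_or("");
        body.strip_suffix("```").map(str::trim).unwrap_or(body.trim())
    } else {
        trimmed
    }
}

fn main() {
    let raw = "```json\n{\"name\": \"Pilaf\"}\n```";
    println!("{}", strip_code_fence(raw));
}
```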

## Example Output

```
🍳 Advanced rig-core Recipe Generator (Structured Output)
============================================================
Generating structured recipes with: chicken, tomatoes, onions, garlic, rice, olive oil, salt, pepper
Using model: qwen3:8b
Max recipes: 3

Generating recipes... (this may take a moment)
------------------------------------------------------------

🎉 Structured Recipe Collection:
============================================================

📋 Recipe 1: Chicken and Rice Pilaf
⏱️ Prep: 15 min | Cook: 30 min | Serves: 4

🥘 Ingredients:
• 1 lb chicken breast, diced
• 2 cups rice
• 1 onion, chopped
• 3 cloves garlic, minced
• 2 tomatoes, diced
• 2 tbsp olive oil
• Salt and pepper to taste

👨‍🍳 Instructions:
1. Heat olive oil in a large skillet over medium heat
2. Add diced chicken and cook until browned
3. Add onion and garlic, sauté until fragrant
4. Stir in rice and cook for 2 minutes
5. Add tomatoes and 4 cups water, bring to boil
6. Reduce heat, cover, and simmer for 20 minutes
7. Season with salt and pepper to taste

💾 Recipes saved to: generated_recipes.json

Duration: 4521 ms
```

## Dependencies

- **rig-core** (0.23) - Rust LLM framework with multi-provider support
- **serde** (1.0) - Serialization framework
- **serde_json** (1.0) - JSON support for serde
- **schemars** (1.0) - JSON Schema generation for Rust types
- **tokio** (1.41) - Async runtime
- **reqwest** (0.12) - HTTP client for Coagent logging
- **uuid** (1.10) - UUID generation for session IDs
- **anyhow** (1.0) - Error handling

## License

This example is part of the Infinyon Coagent project.

## Related Examples

- **langchain-simple** - Python version using LangChain
- **rust-genai-simple** - Rust version using rust-genai
- **smolagents** - Python agent example

## Contributing

Contributions are welcome! Please feel free to submit issues or pull requests.

## Learn More

- **rig GitHub**: https://github.com/0xPlaygrounds/rig
- **rig Documentation**: https://docs.rs/rig-core
- **Ollama**: https://ollama.ai
- **Coagent**: https://github.com/terraphim/try-coagent