A CLI tool for conversing with AI models via OpenRouter. It maintains conversation state across invocations, so you can hold coherent multi-turn conversations directly from your terminal.

This tool is still in early development and not ready for general use.

## Features
- Persistent conversations - State saved between sessions
- Multiple contexts - Separate conversations for different projects/topics
- Plugin system - Extend capabilities with custom tools
- Streaming responses - Real-time output as the AI responds
- Rolling compaction - Automatic context management with intelligent summarization
- Agentic workflows - Built-in tools for todos, goals, and autonomous processing
- Cross-context messaging - Contexts can communicate with each other
- Large output caching - Tool outputs automatically cached with surgical access tools
- Unix philosophy - Only LLM output goes to stdout (pipeable)
## Installation

```shell
cargo install --path .
```

## Configuration

Create `~/.chibi/config.toml`:

```toml
api_key = "your-openrouter-api-key"
model = "anthropic/claude-sonnet-4"
context_window_limit = 200000
warn_threshold_percent = 80.0
```

Copy example prompts:
```shell
mkdir -p ~/.chibi/prompts
cp examples/prompts/*.md ~/.chibi/prompts/
```

## Usage

```shell
# Simple prompt
chibi "What is Rust?"

# Pipe content
cat error.log | chibi "explain this error"

# Different contexts
chibi -c coding "Review this function"
chibi -c research "Find info about X"

# See tool activity
chibi -v "Read my Cargo.toml"
```

## Documentation

- Getting Started - Installation and first steps
- Configuration - Full config reference including API parameters
- Contexts - Managing multiple conversations
- Plugins - Creating tools for the LLM
- Hooks - Lifecycle event system
- Agentic Workflows - Autonomous multi-step processing
- CLI Reference - All command flags
- Transcript Format - JSONL format specification
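Transcripts are stored as JSONL: one JSON object per line. As a rough sketch of working with that shape, the function below prints one field from each entry (the `role` field name is an assumption for illustration, not taken from the format specification):

```shell
# Print the "role" field of each JSONL entry, skipping blank lines.
# Uses python3 for JSON parsing so the sketch does not depend on jq.
print_roles() {
  python3 -c '
import sys, json
for line in sys.stdin:
    line = line.strip()
    if line:
        print(json.loads(line).get("role", ""))
'
}
```

Because each line is an independent JSON object, tools like `tail` and `grep` compose naturally with transcripts before any JSON parsing happens.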
## Command Reference

```shell
# Contexts
chibi -c <name>         # Switch context
chibi -C <name>         # Transient context (one-off)
chibi -L                # List contexts
chibi -l                # Current context info

# History
chibi -a                # Archive current context
chibi -z                # Compact current context
chibi -g 10             # Show last 10 log entries

# System prompts
chibi -y "prompt"       # Set current context's prompt
chibi -n system_prompt  # View current prompt

# Tools
chibi -v                # Verbose (see tool calls)
chibi -p plugin args    # Run plugin directly
```

See the CLI Reference for the complete list.
## Plugins

See chibi-plugins for ready-to-use plugins:

- `read_file` - Read file contents
- `fetch_url` - Fetch web content
- `run_command` - Execute shell commands (with confirmation)
- `web_search` - Search via DuckDuckGo
- `recurse` - Continue processing autonomously
- `sub-agent` - Spawn sub-agents in other contexts
- `github-mcp` - GitHub integration via MCP
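The exact plugin contract is covered in the Plugins guide; as a rough sketch, a plugin can be an executable script that takes arguments and writes its result to stdout. A hypothetical line-counting plugin body, written as a shell function for illustration:

```shell
# Hypothetical plugin body: count lines in the file named by its argument.
# The argument-in/stdout-out contract here is an assumption, not the documented API.
count_lines() {
  file="$1"
  if [ ! -f "$file" ]; then
    echo "count_lines: no such file: $file" >&2
    return 1
  fi
  # wc -l pads with whitespace on some platforms; strip it for clean output
  wc -l < "$file" | tr -d ' \t'
}
```

Dropped into `~/.chibi/plugins/` as an executable script, something of this shape could then be run directly with `chibi -p <plugin> <args>`.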
## File Layout

```
~/.chibi/
├── config.toml           # Global configuration
├── models.toml           # Model metadata (optional)
├── prompts/              # System prompts
│   ├── chibi.md          # Default prompt
│   └── reflection.md     # LLM's persistent memory
├── plugins/              # Plugin scripts
└── contexts/<name>/
    ├── context.jsonl     # Conversation history
    ├── local.toml        # Per-context config
    ├── todos.md          # Current todos
    ├── goals.md          # Current goals
    ├── system_prompt.md  # Custom prompt (optional)
    └── tool_cache/       # Cached large tool outputs
```
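Per-context settings live in `local.toml`. A sketch of what such a file might contain, assuming it accepts the same keys as the global `config.toml` and overrides them for that context (the override semantics are an assumption here):

```toml
# ~/.chibi/contexts/coding/local.toml (hypothetical context name)
model = "anthropic/claude-sonnet-4"
context_window_limit = 200000
```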
## License

ISC
Make meow, not rawr

