Set these environment variables for the providers you want to use:
```bash
# Cloud Providers
export OPENAI_API_KEY="sk-..." # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..." # Anthropic (API key method)
export FIREWORKS_API_KEY="fw_..." # Fireworks AI
export DEEPSEEK_API_KEY="sk-..." # DeepSeek
export XAI_API_KEY="xai-..." # xAI (Grok)
export GROQ_API_KEY="gsk_..." # Groq
export TOGETHER_API_KEY="..." # Together AI
export MISTRAL_API_KEY="..." # Mistral AI
export OPENROUTER_API_KEY="sk-or-..." # OpenRouter
export PERPLEXITY_API_KEY="pplx-..." # Perplexity
export GOOGLE_API_KEY="..." # Google/Gemini
export GEMINI_API_KEY="..." # Alternative for Google
export COHERE_API_KEY="..." # Cohere
export VERCEL_API_KEY="..." # Vercel AI Gateway
# Azure OpenAI
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
# Cloudflare
export CLOUDFLARE_API_TOKEN="..."
export CLOUDFLARE_ACCOUNT_ID="..."
export CLOUDFLARE_GATEWAY_ID="..." # For AI Gateway
# AWS Bedrock
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
export AWS_BEARER_TOKEN_BEDROCK="..." # Alternative: Bearer token
```
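OpenCode reads these keys from the environment at launch, so a provider that never shows up in the model list is often just a missing export in the current shell. A minimal sanity check before starting (variable names as listed above; adjust the list to the providers you actually use):

```bash
# Warn about any provider keys you expect to have set
for var in OPENAI_API_KEY ANTHROPIC_API_KEY FIREWORKS_API_KEY; do
  [ -n "${!var}" ] || echo "warning: $var is not set" >&2
done
```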
`LOCAL_API_BASE` (default: `http://127.0.0.1:1234/v1`) is your OpenAI-compatible API endpoint. It can point to any OpenAI-compatible server:
```bash
# LM Studio (default)
export LOCAL_API_BASE=http://localhost:1234/v1
# Ollama
export LOCAL_API_BASE=http://localhost:11434/v1
# vLLM
export LOCAL_API_BASE=http://localhost:8000/v1
# Custom OpenAI-compatible service
export LOCAL_API_BASE=https://api.your-service.com/v1
```
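Before pointing OpenCode at a new endpoint, it can help to confirm the server actually speaks the OpenAI protocol. A quick check with curl (assumes the server is already running):

```bash
# An OpenAI-compatible server answers with a JSON object containing a "data" array of models
curl -s "${LOCAL_API_BASE:-http://127.0.0.1:1234/v1}/models"
```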
`OPENCODE_CONFIG` (default: `~/.config/opencode/opencode.json`) sets a custom path to the OpenCode configuration file:
```bash
# Use custom config location
export OPENCODE_CONFIG=/path/to/custom/opencode.json
```
`XDG_CONFIG_HOME` (default: `~/.config`) sets a custom config directory location:
```bash
# Use custom config directory
export XDG_CONFIG_HOME=/home/user/my-configs
# Config will be at: $XDG_CONFIG_HOME/opencode/opencode.json
```
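Taken together, these variables determine which file OpenCode loads. A small sketch of the effective path based on the defaults documented above (OpenCode may also pick up project-level config, which this one-liner ignores):

```bash
# Print the config path implied by the defaults above
echo "${OPENCODE_CONFIG:-${XDG_CONFIG_HOME:-$HOME/.config}/opencode/opencode.json}"
```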
OpenCode uses JSON configuration files. Here's the complete schema for local providers:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "local": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Local Provider Name",
      "options": {
        "baseURL": "http://localhost:1234/v1"
      },
      "models": {
        "model-id": {
          "name": "Display Name",
          "tools": true
        }
      }
    }
  }
}
```

Type: Object. Description: Configuration for your local AI provider.
| Field | Type | Required | Description |
|---|---|---|---|
| `npm` | string | Yes | Must be `@ai-sdk/openai-compatible` |
| `name` | string | Yes | Display name for the provider |
| `options` | object | Yes | Provider-specific options |
| `models` | object | Yes | Available models configuration |
Type: Object. Description: Connection options for the provider.

| Field | Type | Required | Description |
|---|---|---|---|
| `baseURL` | string | Yes | OpenAI-compatible API endpoint |
Type: Object. Description: Model configurations indexed by model ID.

| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | No | Display name (defaults to model ID) |
| `tools` | boolean | No | Enable tool calling (default: true) |
For example, a chat model with tool calling enabled:

```json
{
  "llama-3.1-8b-instruct": {
    "name": "Llama 3.1 8B Instruct",
    "tools": true
  }
}
```

A vision model with tool calling disabled:

```json
{
  "llava-7b": {
    "name": "LLaVA 7B Vision",
    "tools": false
  }
}
```

An embedding model:

```json
{
  "nomic-embed-text": {
    "name": "Nomic Embed Text",
    "tools": false
  }
}
```

You can configure multiple providers simultaneously:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "local-lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio",
      "options": {
        "baseURL": "http://localhost:1234/v1"
      },
      "models": {}
    },
    "local-ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {}
    },
    "fireworks": {
      "options": {
        "baseURL": "https://api.fireworks.ai/inference/v1"
      },
      "models": {
        "deepseek-v3p2": {
          "name": "DeepSeek V3.2"
        }
      }
    }
  }
}
```
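After hand-editing, it is worth confirming the file still parses and the provider keys look right. A quick check (assumes jq is installed and the config lives at the default path):

```bash
jq '.provider | keys' ~/.config/opencode/opencode.json
# Expected output for the example above:
# ["fireworks", "local-lmstudio", "local-ollama"]
```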
The sync script only updates the "local" provider. For multiple local endpoints, create multiple providers:

```bash
# Sync LM Studio
cd /path/to/opencode-local-setup
LOCAL_API_BASE=http://localhost:1234/v1 node scripts/sync-local-models.mjs
# Sync Ollama
LOCAL_API_BASE=http://localhost:11434/v1 node scripts/sync-local-models.mjs ~/.opencode-ollama.json
# Use different configs
OPENCODE_CONFIG=~/.opencode-lmstudio.json opencode
OPENCODE_CONFIG=~/.opencode-ollama.json opencode
```
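To avoid typing the `OPENCODE_CONFIG` prefix every time, you could wrap each endpoint in a shell alias; the alias names below are only illustrative, not part of the setup scripts:

```bash
# Add to ~/.bashrc or ~/.zshrc
alias oc-lmstudio='OPENCODE_CONFIG=~/.opencode-lmstudio.json opencode'
alias oc-ollama='OPENCODE_CONFIG=~/.opencode-ollama.json opencode'
```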
When multiple models have the same ID, OpenCode uses the first matching provider. Use prefixes to disambiguate:

```
# In OpenCode
/models list # Shows all models
/models use local/llama-3.1 # Use local provider
/models use fireworks/deepseek # Use fireworks provider
```

For providers requiring authentication:

```json
{
  "provider": {
    "custom": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Custom Provider",
      "options": {
        "baseURL": "https://api.custom.com/v1",
        "headers": {
          "Authorization": "Bearer YOUR_API_KEY"
        }
      }
    }
  }
}
```
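Hard-coding a bearer token in opencode.json makes it easy to commit by accident. One workaround (a sketch using envsubst from GNU gettext, not an OpenCode feature; CUSTOM_API_KEY is an illustrative variable name) is to keep a template with a placeholder and render it when your shell starts:

```bash
# opencode.json.tmpl contains: "Authorization": "Bearer ${CUSTOM_API_KEY}"
export CUSTOM_API_KEY="..."
envsubst < ~/.config/opencode/opencode.json.tmpl > ~/.config/opencode/opencode.json
```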
Some models may need specific configuration:

```json
{
  "models": {
    "qwen-long-writer": {
      "name": "Qwen Long Writer",
      "tools": true,
      "contextWindow": 32000
    }
  }
}
```

Models can also be selected directly from the command line:

```bash
# Use specific local model
opencode -m local/llama-3.1-8b-instruct
# Use cloud provider model
opencode -m fireworks/accounts/fireworks/models/deepseek-v3p2
```

```bash
# After installation
oc-local # Uses local provider
deepseek # Uses Fireworks DeepSeek
```

```bash
# Temporarily override endpoint
LOCAL_API_BASE=http://localhost:8000/v1 opencode
# With custom config
OPENCODE_CONFIG=/path/to/config.json opencode
```

The relevant OpenAI-compatible endpoints (a quick end-to-end check follows the list):

- `GET /v1/models` - List available models
- `POST /v1/chat/completions` - Chat completion
- `POST /v1/embeddings` - Text embeddings
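As a quick end-to-end check of the chat route (the model ID below is only an example; substitute one that `/v1/models` actually returns):

```bash
curl -s "${LOCAL_API_BASE:-http://127.0.0.1:1234/v1}/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.1-8b-instruct", "messages": [{"role": "user", "content": "Say hello"}]}'
```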
Compatibility notes for common local servers (example startup commands follow the list):

- LM Studio: Full OpenAI compatibility at `http://localhost:1234/v1`
- Ollama: OpenAI compatibility at `http://localhost:11434/v1`
- vLLM: Full OpenAI compatibility, configurable port (default: 8000)
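For reference, each server exposes its OpenAI-compatible API once it is running; the exact commands and flags vary by version, so treat these as illustrative:

```bash
# LM Studio: start the bundled local server (also available from the app's Developer tab)
lms server start        # default port 1234
# Ollama: the API is available once the daemon is running
ollama serve            # default port 11434
# vLLM: serve a model through its OpenAI-compatible server
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```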