
API Reference

Environment Variables

Provider API Keys

Set these environment variables for the providers you want to use:

# Cloud Providers
export OPENAI_API_KEY="sk-..."           # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..."    # Anthropic (API key method)
export FIREWORKS_API_KEY="fw_..."         # Fireworks AI
export DEEPSEEK_API_KEY="sk-..."          # DeepSeek
export XAI_API_KEY="xai-..."              # xAI (Grok)
export GROQ_API_KEY="gsk_..."             # Groq
export TOGETHER_API_KEY="..."             # Together AI
export MISTRAL_API_KEY="..."              # Mistral AI
export OPENROUTER_API_KEY="sk-or-..."     # OpenRouter
export PERPLEXITY_API_KEY="pplx-..."      # Perplexity
export GOOGLE_API_KEY="..."               # Google/Gemini
export GEMINI_API_KEY="..."               # Alternative for Google
export COHERE_API_KEY="..."               # Cohere
export VERCEL_API_KEY="..."               # Vercel AI Gateway

# Azure OpenAI
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"

# Cloudflare
export CLOUDFLARE_API_TOKEN="..."
export CLOUDFLARE_ACCOUNT_ID="..."
export CLOUDFLARE_GATEWAY_ID="..."        # For AI Gateway

# AWS Bedrock
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
export AWS_BEARER_TOKEN_BEDROCK="..."     # Alternative: Bearer token
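
Before launching, it can help to check which of the keys above your shell session actually exposes. A minimal sketch (the `configured_providers` helper and the trimmed key map are illustrative assumptions, not part of OpenCode):

```python
import os

# A subset of the environment variable names from the provider list above.
PROVIDER_KEYS = {
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
    "Fireworks AI": "FIREWORKS_API_KEY",
    "Groq": "GROQ_API_KEY",
    "Google/Gemini": "GOOGLE_API_KEY",
}

def configured_providers(env=None):
    """Return the providers whose API key is set and non-empty."""
    env = os.environ if env is None else env
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]

print(configured_providers({"OPENAI_API_KEY": "sk-test"}))  # ['OpenAI']
```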

LOCAL_API_BASE

Default: http://127.0.0.1:1234/v1

The base URL of your OpenAI-compatible API endpoint. It can point at any OpenAI-compatible server:

# LM Studio (default)
export LOCAL_API_BASE=http://localhost:1234/v1

# Ollama
export LOCAL_API_BASE=http://localhost:11434/v1

# vLLM
export LOCAL_API_BASE=http://localhost:8000/v1

# Custom OpenAI-compatible service
export LOCAL_API_BASE=https://api.your-service.com/v1
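
The fallback behavior can be sketched as follows; the `resolve_api_base` helper is an assumption for illustration (OpenCode applies the same default internally), and the trailing-slash stripping is a defensive choice, not documented behavior:

```python
import os

def resolve_api_base(env=None):
    """Resolve LOCAL_API_BASE, falling back to the documented default.

    Trailing slashes are stripped so later path joins stay predictable.
    """
    env = os.environ if env is None else env
    base = env.get("LOCAL_API_BASE", "http://127.0.0.1:1234/v1")
    return base.rstrip("/")

print(resolve_api_base({}))  # http://127.0.0.1:1234/v1
print(resolve_api_base({"LOCAL_API_BASE": "http://localhost:8000/v1/"}))  # http://localhost:8000/v1
```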

OPENCODE_CONFIG

Default: ~/.config/opencode/opencode.json

Custom path to OpenCode configuration file:

# Use custom config location
export OPENCODE_CONFIG=/path/to/custom/opencode.json

XDG_CONFIG_HOME

Default: ~/.config

Custom config directory location:

# Use custom config directory
export XDG_CONFIG_HOME=/home/user/my-configs
# Config will be at: $XDG_CONFIG_HOME/opencode/opencode.json
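
The lookup order implied by the two variables above can be sketched like this, assuming OPENCODE_CONFIG takes precedence over XDG_CONFIG_HOME (the `config_path` helper is hypothetical, written only to make the precedence explicit):

```python
import os

def config_path(env=None):
    """Compute the opencode.json path: OPENCODE_CONFIG wins,
    then $XDG_CONFIG_HOME/opencode/opencode.json, then ~/.config."""
    env = os.environ if env is None else env
    if env.get("OPENCODE_CONFIG"):
        return env["OPENCODE_CONFIG"]
    xdg = env.get("XDG_CONFIG_HOME") or os.path.join(
        env.get("HOME", os.path.expanduser("~")), ".config"
    )
    return os.path.join(xdg, "opencode", "opencode.json")

print(config_path({"XDG_CONFIG_HOME": "/home/user/my-configs"}))
# /home/user/my-configs/opencode/opencode.json
```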

Configuration Schema

OpenCode uses JSON configuration files. Here's the complete schema for local providers:

Basic Structure

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "local": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Local Provider Name",
      "options": {
        "baseURL": "http://localhost:1234/v1"
      },
      "models": {
        "model-id": {
          "name": "Display Name",
          "tools": true
        }
      }
    }
  }
}
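
A quick structural check against the required fields documented in the tables that follow can catch typos before OpenCode ever loads the file. This validator is a sketch under the stated schema, not an official tool:

```python
# Required fields per the provider configuration tables below.
REQUIRED_FIELDS = {"npm": str, "name": str, "options": dict, "models": dict}

def validate_provider(entry):
    """Return a list of schema violations for one provider entry."""
    errors = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in entry:
            errors.append(f"missing required field: {field}")
        elif not isinstance(entry[field], ftype):
            errors.append(f"{field} must be {ftype.__name__}")
    if entry.get("npm") not in (None, "@ai-sdk/openai-compatible"):
        errors.append("npm must be @ai-sdk/openai-compatible")
    if "baseURL" not in entry.get("options", {}):
        errors.append("options.baseURL is required")
    return errors
```

An entry that matches the Basic Structure above should validate cleanly; one missing `npm` or `options.baseURL` produces a descriptive error per violation.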

Provider Configuration

local

Type: Object
Description: Configuration for your local AI provider

| Field   | Type   | Required | Description                       |
| ------- | ------ | -------- | --------------------------------- |
| npm     | string | Yes      | Must be @ai-sdk/openai-compatible |
| name    | string | Yes      | Display name for the provider     |
| options | object | Yes      | Provider-specific options         |
| models  | object | Yes      | Available models configuration    |

options

Type: Object
Description: Connection options for the provider

| Field   | Type   | Required | Description                    |
| ------- | ------ | -------- | ------------------------------ |
| baseURL | string | Yes      | OpenAI-compatible API endpoint |

models

Type: Object
Description: Model configurations indexed by model ID

| Field | Type    | Required | Description                         |
| ----- | ------- | -------- | ----------------------------------- |
| name  | string  | No       | Display name (defaults to model ID) |
| tools | boolean | No       | Enable tool calling (default: true) |

Model Configuration Examples

Standard LLM (with tool support)

{
  "llama-3.1-8b-instruct": {
    "name": "Llama 3.1 8B Instruct",
    "tools": true
  }
}

Vision Model

{
  "llava-7b": {
    "name": "LLaVA 7B Vision",
    "tools": false
  }
}

Embedding Model (no tools)

{
  "nomic-embed-text": {
    "name": "Nomic Embed Text",
    "tools": false
  }
}

Multi-Provider Setup

You can configure multiple providers simultaneously:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "local-lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio",
      "options": {
        "baseURL": "http://localhost:1234/v1"
      },
      "models": {}
    },
    "local-ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {}
    },
    "fireworks": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Fireworks AI",
      "options": {
        "baseURL": "https://api.fireworks.ai/inference/v1"
      },
      "models": {
        "deepseek-v3p2": {
          "name": "DeepSeek V3.2"
        }
      }
    }
  }
}

Using Multiple Endpoints

The sync script only updates the "local" provider. For multiple local endpoints, create multiple providers:

# Sync LM Studio
cd /path/to/opencode-local-setup
LOCAL_API_BASE=http://localhost:1234/v1 node scripts/sync-local-models.mjs

# Sync Ollama  
LOCAL_API_BASE=http://localhost:11434/v1 node scripts/sync-local-models.mjs ~/.opencode-ollama.json

# Use different configs
OPENCODE_CONFIG=~/.opencode-lmstudio.json opencode
OPENCODE_CONFIG=~/.opencode-ollama.json opencode
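
Generating the per-endpoint config files by hand gets tedious; a sketch like the following writes one minimal config per endpoint, mirroring the Basic Structure section (the `write_endpoint_config` helper and the /tmp paths are illustrative, not part of the sync script):

```python
import json

def write_endpoint_config(path, provider_id, name, base_url):
    """Write a minimal opencode.json for one endpoint; models are
    left empty so the sync script can fill them in."""
    config = {
        "$schema": "https://opencode.ai/config.json",
        "provider": {
            provider_id: {
                "npm": "@ai-sdk/openai-compatible",
                "name": name,
                "options": {"baseURL": base_url},
                "models": {},
            }
        },
    }
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config

# Illustrative paths; point these at your real config locations.
write_endpoint_config("/tmp/opencode-lmstudio.json", "local-lmstudio",
                      "LM Studio", "http://localhost:1234/v1")
write_endpoint_config("/tmp/opencode-ollama.json", "local-ollama",
                      "Ollama", "http://localhost:11434/v1")
```

Each file can then be selected via OPENCODE_CONFIG as shown above.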

Provider Resolution

When multiple models have the same ID, OpenCode uses the first matching provider. Use prefixes to disambiguate:

# In OpenCode
/models list                    # Shows all models
/models use local/llama-3.1     # Use local provider
/models use fireworks/deepseek  # Use fireworks provider

Advanced Options

Custom Headers

For providers requiring authentication:

{
  "provider": {
    "custom": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Custom Provider",
      "options": {
        "baseURL": "https://api.custom.com/v1",
        "headers": {
          "Authorization": "Bearer YOUR_API_KEY"
        }
      }
    }
  }
}
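
To see what the headers option amounts to on the wire, the sketch below builds (but does not send) the GET /v1/models request implied by the config above. That @ai-sdk/openai-compatible forwards custom headers verbatim is an assumption here; the helper itself is hypothetical:

```python
import urllib.request

def build_models_request(options):
    """Construct the GET {baseURL}/models request with the
    provider's custom headers attached."""
    url = options["baseURL"].rstrip("/") + "/models"
    return urllib.request.Request(url, headers=options.get("headers", {}))

req = build_models_request({
    "baseURL": "https://api.custom.com/v1",
    "headers": {"Authorization": "Bearer YOUR_API_KEY"},
})
print(req.full_url)                     # https://api.custom.com/v1/models
print(req.get_header("Authorization"))  # Bearer YOUR_API_KEY
```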

Model-Specific Settings

Some models may need specific configuration:

{
  "models": {
    "qwen-long-writer": {
      "name": "Qwen Long Writer",
      "tools": true,
      "contextWindow": 32000
    }
  }
}

Command-Line Usage

Basic Usage

# Use specific local model
opencode -m local/llama-3.1-8b-instruct

# Use cloud provider model
opencode -m fireworks/accounts/fireworks/models/deepseek-v3p2

Using Convenience Functions

# After installation
oc-local                   # Uses local provider
deepseek                   # Uses Fireworks DeepSeek

With Custom Endpoints

# Temporarily override endpoint
LOCAL_API_BASE=http://localhost:8000/v1 opencode

# With custom config
OPENCODE_CONFIG=/path/to/config.json opencode

API Endpoints

Standard OpenAI-Compatible

  • GET /v1/models - List available models
  • POST /v1/chat/completions - Chat completion
  • POST /v1/embeddings - Text embeddings
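
The GET /v1/models endpoint is what model discovery relies on. The sketch below stands up a stub OpenAI-compatible server in-process and queries it; the stub, its single hard-coded model, and the `list_model_ids` helper are all illustrative assumptions:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubHandler(BaseHTTPRequestHandler):
    """Stub server exposing GET /v1/models, the first endpoint listed above."""
    def do_GET(self):
        if self.path == "/v1/models":
            body = json.dumps({"object": "list",
                               "data": [{"id": "llama-3.1-8b-instruct"}]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

def list_model_ids(base_url):
    """Fetch GET {base}/models and return the model IDs."""
    with urlopen(base_url.rstrip("/") + "/models") as resp:
        return [m["id"] for m in json.load(resp)["data"]]

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
print(list_model_ids(f"http://127.0.0.1:{server.server_port}/v1"))
server.shutdown()
```

Pointing `list_model_ids` at a real LM Studio, Ollama, or vLLM endpoint returns the models those servers actually expose.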

Provider-Specific

  • LM Studio: Full OpenAI compatibility at http://localhost:1234/v1
  • Ollama: OpenAI compatibility at http://localhost:11434/v1
  • vLLM: Full OpenAI compatibility, configurable port (default: 8000)