feat(ollama): comprehensive model management and server integration #345
Draft
Koan-Bot wants to merge 1 commit into sukria:main from
Conversation
…default

Implement the /ollama Telegram skill with subcommands for model management:

- /ollama status — server health, version, loaded models
- /ollama list — list locally available models via API
- /ollama pull <name> — download a model
- /ollama remove <name> — delete a local model
- /ollama help — show available subcommands

Improve OllamaLaunchProvider:

- Add get_version() with semver parsing for minimum version check
- is_available() now validates Ollama >= 0.15.0 (launch support)
- Set OLLAMA_NO_CLOUD=1 by default for data privacy (v0.16.2+)

Improve pid_manager:

- Add HTTP health check (_ollama_http_ready) using /api endpoint
- start_ollama() now verifies server readiness via HTTP, not just PID

Update docs/provider-local.md:

- Add ollama-launch provider section with config examples
- Update recommended models (add qwen3-coder, glm-4.7)
- Document /ollama Telegram skill commands

113 new tests (69 for skill, 44 for provider/pid_manager). 6273 total tests pass.

Closes sukria#319
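The diff itself is not inlined on this page, so the following is only a sketch of the semver gate the commit message describes for get_version() / is_available(). The helper names and the exact `ollama --version` output format are assumptions, not the PR's code:

```python
import re

# Minimum version required for launch support, per the commit message.
MIN_OLLAMA_VERSION = (0, 15, 0)

def parse_semver(text):
    """Extract a (major, minor, patch) tuple from free-form version output,
    e.g. 'ollama version is 0.16.2'. Returns None if no x.y.z is present."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    return tuple(int(g) for g in match.groups()) if match else None

def meets_minimum(version_text, minimum=MIN_OLLAMA_VERSION):
    """True only when a version was found and it is >= the minimum."""
    parsed = parse_semver(version_text)
    return parsed is not None and parsed >= minimum
```

Comparing integer tuples rather than raw strings avoids the classic string-ordering bug where "0.9.0" sorts after "0.15.0".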
Force-pushed fdc531d to e3e4f6e
Updated — Complete Rewrite

Force-pushed with a fresh implementation from

Changes

Tests

🤖 Updated by Kōan
Summary
Comprehensive Ollama integration for Kōan — from REST client to Telegram-accessible model management.
What's included (7 commits)
Infrastructure
- ollama_client.py — health checks, model listing, version detection
- _api_request() layer (GET/POST/DELETE) with consistent error handling
- OllamaClaudeProvider — routes Claude CLI through Ollama proxy via ANTHROPIC_BASE_URL
- ollama → LocalLLMProvider (alongside existing local)

Skill:
/ollama

- /ollama — Server status: version, model list, running models, configured model readiness
- /ollama list (aliases: ls, models) — Compact model listing
- /ollama pull <model> — Download models with streaming NDJSON progress
- /ollama remove <model> (alias: rm) — Remove locally stored models
- /ollama show <model> (alias: info) — Model details (family, parameters, quantization, context)
- /ollama help — Subcommand reference
- Available with local, ollama, or ollama-claude providers

Process management
- … (local and ollama-claude providers)

Observability
- /status and /ping show Ollama process health when the provider needs it

Provider validation
- OllamaClaudeProvider validates model availability via check_server_and_model() (not just server health)

Test coverage
- test_ollama_client.py, test_skill_ollama.py, test_startup_info.py, test_pid_manager.py, test_cli_provider.py, test_ollama_claude_provider.py

Closes #319, closes #305
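/ollama pull is described above as reporting streaming NDJSON progress. The skill's actual rendering is not shown on this page, but a sketch of consuming Ollama's documented /api/pull stream (one JSON object per line, with a status field plus optional completed/total byte counts) might look like:

```python
import json

def pull_progress(ndjson_lines):
    """Turn Ollama's streaming /api/pull lines, such as
    {"status": "downloading ...", "completed": 512, "total": 1024},
    into short human-readable progress strings."""
    for line in ndjson_lines:
        if not line.strip():
            continue  # skip blank keep-alive lines
        event = json.loads(line)
        status = event.get("status", "")
        if "completed" in event and "total" in event and event["total"]:
            percent = 100 * event["completed"] // event["total"]
            yield f"{status}: {percent}%"
        else:
            yield status
```

Yielding strings incrementally lets a Telegram skill edit a single progress message as download events arrive, rather than buffering the whole stream.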
Test plan
- /ollama shows server status when provider is local or ollama-claude
- /ollama list shows available models
- /ollama pull <model> downloads and reports success
- /ollama remove <model> deletes and reports freed space
- /ollama show <model> displays model details (family, params, quantization)
- /ollama help shows subcommand reference

🤖 Generated with Claude Code
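Every check in the test plan presumes a reachable server. The HTTP readiness gate the commit message describes for pid_manager (_ollama_http_ready) is not shown here; a minimal sketch under assumed conventions (probing the base URL, boolean result) could be:

```python
import urllib.request

def ollama_http_ready(base_url="http://localhost:11434", timeout=2.0):
    """Best-effort readiness probe: a running Ollama server answers
    GET / with HTTP 200; any connection error or timeout means the
    server is not (yet) accepting requests."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as response:
            return response.status == 200
    except OSError:
        return False
```

Probing HTTP rather than just checking the PID closes the gap where the process exists but its listener is not up yet, which is the failure mode the PR's start_ollama() change targets.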