Description
Problem Statement
I am currently limited to using cloud-based AI providers, which raises concerns regarding data privacy and incurs usage costs. I want to be able to utilize local LLMs to generate commit messages without sending code snippets to external servers.
Proposed Solution
I propose implementing support for Ollama as a new AI provider. This integration should allow users to configure MateCommit to communicate with a local Ollama instance (usually running on localhost:11434).
The implementation needs to:
- Create a new struct for Ollama that satisfies the existing AIProvider interface.
- Add configuration options for the Ollama endpoint and model name.
- Ensure the prompt formatting works well with standard local models like Llama 3 or Mistral.
Alternatives Considered
I considered using generic HTTP bindings for local servers, but a dedicated Ollama provider ensures better default settings and easier configuration for end users.
Additional Context
This is a great entry point for contributors looking to work with AI integrations. The AIProvider interface is already established, so this task primarily involves implementing the specific API calls for Ollama.