Implement support for Ollama AI provider #77

@thomas-vilte

Description
Problem Statement

I am currently limited to cloud-based AI providers, which raises data-privacy concerns and incurs usage costs. I want to be able to use local LLMs to generate commit messages without sending code snippets to external servers.

Proposed Solution

I propose implementing support for Ollama as a new AI provider. This integration should allow users to configure MateCommit to communicate with a local Ollama instance (usually running on localhost:11434).
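For illustration, the new configuration might look something like the fragment below. The key names (`active_provider`, `endpoint`, `model`) are hypothetical placeholders, not MateCommit's actual config schema:

```json
{
  "ai": {
    "active_provider": "ollama",
    "ollama": {
      "endpoint": "http://localhost:11434",
      "model": "llama3"
    }
  }
}
```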

The implementation needs to:

  1. Create a new struct for Ollama that satisfies the existing AIProvider interface.
  2. Add configuration options for the Ollama endpoint and model name.
  3. Ensure the prompt formatting works well with standard local models like Llama 3 or Mistral.

Alternatives Considered

I considered using generic HTTP bindings for local servers, but a dedicated Ollama provider ensures better default settings and easier configuration for end users.

Additional Context

This is a great entry point for contributors looking to work with AI integrations. The AIProvider interface is already established, so this task primarily involves implementing the specific API calls for Ollama.

Metadata


    Labels

    ai: AI functionalities, prompts, and integration with LLM models
    enhancement: Improvement or extension of an existing functionality
    feature: New features
    good first issue: Good for newcomers
