Description
Problem Statement
Users currently cannot use OpenAI's models to generate commit messages in MateCommit. Since we aim to support a diverse range of AI backends, the absence of an OpenAI integration limits the tool's utility for developers who rely on GPT-3.5 or GPT-4 for high-quality text generation. Furthermore, with the recent introduction of the new provider interface, all new integrations need to adhere to that standard for maintainability.
Proposed Solution
I propose implementing a dedicated OpenAI provider that fully integrates with our new provider interface. The implementation should include the following steps:
- Create a new `OpenAIProvider` struct that satisfies the provider interface contract.
- Implement the logic to communicate with the OpenAI API endpoints.
- Add configuration support to handle the OpenAI API key securely.
- Allow users to configure the specific model they wish to use (e.g., `gpt-3.5-turbo`, `gpt-4`), with a sensible default.
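The steps above could look roughly like the following sketch. Since this issue doesn't pin down the actual interface, the `Provider` interface, the `OpenAIProvider` struct, and the `NewOpenAIProvider` constructor are all hypothetical names, not existing MateCommit APIs; the request shape follows OpenAI's public Chat Completions endpoint:

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// Provider is a hypothetical stand-in for the new provider interface.
type Provider interface {
	GenerateCommitMessage(ctx context.Context, diff string) (string, error)
}

// OpenAIProvider generates commit messages via the OpenAI Chat Completions API.
type OpenAIProvider struct {
	apiKey string
	model  string
	client *http.Client
}

// NewOpenAIProvider falls back to gpt-3.5-turbo when no model is configured,
// giving the "sensible default" the issue asks for.
func NewOpenAIProvider(apiKey, model string) *OpenAIProvider {
	if model == "" {
		model = "gpt-3.5-turbo"
	}
	return &OpenAIProvider{apiKey: apiKey, model: model, client: http.DefaultClient}
}

// Compile-time check that the struct satisfies the interface contract.
var _ Provider = (*OpenAIProvider)(nil)

func (p *OpenAIProvider) GenerateCommitMessage(ctx context.Context, diff string) (string, error) {
	body, err := json.Marshal(map[string]any{
		"model": p.model,
		"messages": []map[string]string{
			{"role": "system", "content": "Write a concise git commit message for the following diff."},
			{"role": "user", "content": diff},
		},
	})
	if err != nil {
		return "", err
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		"https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	// The API key is injected per request, never logged or embedded in the URL.
	req.Header.Set("Authorization", "Bearer "+p.apiKey)
	req.Header.Set("Content-Type", "application/json")

	resp, err := p.client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("openai: unexpected status %d", resp.StatusCode)
	}
	var out struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	if len(out.Choices) == 0 {
		return "", fmt.Errorf("openai: empty response")
	}
	return out.Choices[0].Message.Content, nil
}

func main() {
	p := NewOpenAIProvider("sk-placeholder", "")
	fmt.Println(p.model) // prints the default: gpt-3.5-turbo
}
```

Keeping the `http.Client` as a struct field (rather than using package-level calls) makes it easy to inject a stub client in unit tests without real network traffic.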
Alternatives Considered
I considered relying solely on local LLMs or other existing providers, but OpenAI remains an industry standard with high availability and quality. I also considered implementing this with the legacy provider pattern, but that would introduce technical debt immediately; therefore, using the new interface is mandatory.
Additional Context
This implementation should serve as a reference example for future cloud-based provider integrations. Please ensure error handling covers cases like quota limits and invalid API keys.