📌 Description
This PR introduces a pluggable LLM provider architecture that lets users choose their LLM provider (e.g., Groq, Gemini, OpenAI, open-source models) and a specific model at runtime.
The goal is to reduce dependency on a single AI provider, improve system resilience, and align with Perspective-AI’s mission of decentralization, user choice, and cost-efficient AI usage.
🎯 Motivation
Perspective-AI aims to counter algorithmic echo chambers by presenting balanced, multi-perspective analysis. Relying on a single LLM provider introduces:
- Vendor lock-in
- A single point of failure
- Cost-scaling issues
- Reduced transparency and flexibility
By enabling multiple LLM backends, we:
- Improve system redundancy
- Encourage open-source LLM adoption
- Allow users to select models based on cost, latency, or quality
- Make the system more future-proof and extensible
✨ Key Features Added
- LLM Provider Selection
Users can now choose from multiple LLM providers, such as:
- Groq
- Gemini
- OpenAI
- Open-source / self-hosted LLMs (future-ready)
This selection can be configured via:
- API request parameters
- UI-level controls (frontend-ready)
- Environment-level defaults
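For illustration, a per-request choice could simply override environment-level defaults. This is a minimal sketch; the variable and function names (`LLM_PROVIDER`, `LLM_MODEL`, `resolve_llm_choice`) are hypothetical placeholders, not existing configuration in the repo:

```python
import os

# Hypothetical resolution order: explicit request parameter first,
# then environment-level defaults (names are placeholders).
DEFAULT_PROVIDER = os.getenv("LLM_PROVIDER", "groq")
DEFAULT_MODEL = os.getenv("LLM_MODEL", "llama3-70b-8192")

def resolve_llm_choice(request_params: dict) -> tuple[str, str]:
    """Return (provider, model), preferring per-request values over env defaults."""
    provider = request_params.get("provider", DEFAULT_PROVIDER)
    model = request_params.get("model", DEFAULT_MODEL)
    return provider, model
```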
- Model-Level Granularity
Each provider exposes its available models, allowing users to select:
- Faster vs. more capable models
- Cost-efficient open-source alternatives
- Task-specific models for reasoning or summarization
Example:
- Groq → llama3-70b, mixtral
- Gemini → gemini-pro
- Open-source → Mistral, LLaMA, etc.
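One possible way to surface the per-provider options is a small registry used for validation before any API call is made. The registry name and model identifiers below are illustrative assumptions and would need to track what each provider actually serves:

```python
# Illustrative provider → model registry; identifiers are examples only.
SUPPORTED_MODELS = {
    "groq": ["llama3-70b-8192", "mixtral-8x7b-32768"],
    "gemini": ["gemini-pro"],
    "openai": ["gpt-4o", "gpt-4o-mini"],
    "open_source": ["mistral-7b-instruct", "llama-3-8b-instruct"],
}

def validate_choice(provider: str, model: str) -> None:
    """Reject unknown provider/model combinations before any API call."""
    if model not in SUPPORTED_MODELS.get(provider, []):
        raise ValueError(f"Unsupported provider/model combination: {provider}/{model}")
```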
- Unified LLM Interface
A provider-agnostic abstraction layer ensures:
- Clean separation between business logic and LLM implementations
- Easy onboarding of new providers
- Minimal refactoring when switching models
This keeps LangChain/LangGraph workflows intact while enabling backend flexibility.
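A minimal sketch of that abstraction, assuming the project keeps using LangChain chat models; the integration packages (`langchain_groq`, `langchain_google_genai`, `langchain_openai`) and the `get_chat_model` factory name are assumptions rather than existing code:

```python
from langchain_core.language_models.chat_models import BaseChatModel

def get_chat_model(provider: str, model: str, **kwargs) -> BaseChatModel:
    """Return a LangChain chat model for the requested provider.

    Downstream LangChain/LangGraph chains only see BaseChatModel,
    so swapping providers requires no changes to business logic.
    """
    if provider == "groq":
        from langchain_groq import ChatGroq
        return ChatGroq(model=model, **kwargs)
    if provider == "gemini":
        from langchain_google_genai import ChatGoogleGenerativeAI
        return ChatGoogleGenerativeAI(model=model, **kwargs)
    if provider == "openai":
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model=model, **kwargs)
    raise ValueError(f"Unknown LLM provider: {provider}")
```

Adding a new provider (for example, a self-hosted model behind an OpenAI-compatible endpoint) would then only mean adding one more branch or registry entry.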
- Cost-Efficient & Open-Source Friendly
- Default preference can be given to open-source or low-cost LLMs
- Reduces reliance on expensive proprietary APIs
- Supports the long-term sustainability of the project
🧠 Architectural Impact
- Before: single LLM provider → tightly coupled → limited scalability
- After: pluggable LLM layer → provider abstraction → dynamic model routing
This aligns with Perspective-AI’s broader architecture of:
- Modular design
- Redundancy in critical dependencies
- Transparent AI decision pipelines
🔐 Security & Configuration
- API keys are managed via environment variables
- No provider-specific secrets are exposed to the frontend
- Safe fallbacks ensure graceful degradation if a provider fails
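As a sketch of the graceful-degradation idea (the helper name and ordering logic are assumptions, not the final design): API keys stay in environment variables read by each integration, and if the preferred provider fails, the request is retried against the next configured one.

```python
def invoke_with_fallback(prompt: str, providers: list[tuple[str, str]]):
    """Try each (provider, model) pair in order; fail only if all of them do."""
    last_error = None
    for provider, model in providers:
        try:
            llm = get_chat_model(provider, model)  # factory from the sketch above
            return llm.invoke(prompt).content
        except Exception as err:  # e.g. missing key, rate limit, provider outage
            last_error = err
    raise RuntimeError("All configured LLM providers failed") from last_error
```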
📈 Expected Outcomes
- Reduced vendor lock-in
- Improved system resilience
- Lower operational costs
- Better experimentation with reasoning quality
- Stronger alignment with open-source principles
🛠️ Future Scope
- Auto-routing models based on task complexity
- User-defined cost/latency preferences
- Self-hosted local LLM support
- Provider benchmarking & quality comparison UI
@ParagGhatage Assign this to me!