feat: provider and model configuration system and fix all linting errors #28
Open
ErikKoning (Erik-Koning) wants to merge 1 commit into langchain-ai:main
Conversation
- Add ability to create custom providers (Azure AI Foundry, AWS Bedrock, etc.)
- Support multiple configurations per provider with deployment-style selection
- Store custom providers in ~/.openwork/user-providers.json
- Add ProviderConfigDialog for managing API keys, endpoints, and deployments
- Add AddProviderDialog for creating new custom providers
- Fix 138 linting errors (missing return types, unused variables, etc.)
- Add explicit return types to all component functions
- Fix React hooks exhaustive-deps and setState-in-effect issues
- Update UI components with proper TypeScript annotations

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
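A custom-provider entry in `~/.openwork/user-providers.json` might look roughly like the sketch below. The field names here are illustrative assumptions, not the actual schema used by the PR:

```typescript
// Hypothetical shape for an entry in ~/.openwork/user-providers.json.
// Field names are illustrative assumptions, not the actual schema.
interface CustomProviderConfig {
  id: string;                         // e.g. "azure-ai-foundry"
  displayName: string;                // label shown in the provider UI
  apiFormat: "openai" | "anthropic";  // wire format the endpoint speaks
  baseUrl: string;                    // custom endpoint URL
  deployments: string[];              // deployment-style model selection
}

const example: CustomProviderConfig = {
  id: "azure-ai-foundry",
  displayName: "Azure AI Foundry",
  apiFormat: "openai",
  baseUrl: "https://my-resource.openai.azure.com",
  deployments: ["gpt-4o-prod", "gpt-4o-mini-dev"],
};
```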
Member
Hunter Lovell (hntrl) left a comment
Directionally I like the way that this is headed but there's a few nits to address:
- Can we pre-fetch/cache the available model strings for each provider when we first open the popover? There's a slight delay that comes from retrieving stored profiles over IPC
- We have the option to "name" a configuration for default providers, so my assumption is that we can select between multiple configs for that default provider. I don't believe that's the case, though
- Selecting the pencil icon next to each model in the provider tab takes me to edit the API key config, but my assumption would be that's for configuring the model profile itself
- I like the idea of having model profiles to adjust things like model parameters and whatnot, but I think part of that would require upstreaming a way to get static chat model params to langchain (exporting a chat model zod schema, which is definitely something we could do). E.g. I can have an `opus-super-omega-max` model profile that maxes out all reasoning and budgets
- What are the custom providers you're looking to use? I think the UX for adding a provider that complies with a schema is a little odd right now. I'd expect something like:
- you explicitly set a provider instead of an "api format"
- if it's an OpenAI provider, you can configure the base URL in that dialog. Same if it's an Anthropic provider, etc.
- Each of these would have its own "onboarding flow" to define provider-only auth methods or whatever we deem fit to put in there
For the lints + formats, I'm going to push changes to solve this on current main. If you wouldn't mind rebasing and descoping this PR to just the model configuration changes, that would derisk the PR and be much appreciated 🙏
Description
Enables a unified system for managing providers and models, and lets users add their own locally. Default models and providers are unchanged.
UI bug fixes and improvements
Lint errors resolved
Related Issue
#11 #7 #5
Additional Notes
```
Built-in Providers (multi-model)    Custom Providers (deployment)
├── Anthropic                       ├── Azure AI Foundry (user-created)
├── OpenAI                          ├── AWS Bedrock (user-created)
└── Google                          └── My Custom Endpoint (user-created)
```
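The split above could be modeled as a discriminated union, where built-in providers expose model IDs directly and custom providers expose deployment names. The type and field names below are illustrative assumptions, not the PR's actual code:

```typescript
// Illustrative sketch; names are assumptions, not the actual code.
type Provider =
  | { kind: "builtin"; id: "anthropic" | "openai" | "google"; models: string[] }
  | { kind: "custom"; id: string; baseUrl: string; deployments: string[] };

function selectableModels(p: Provider): string[] {
  // Built-ins list model IDs directly; custom providers list deployment
  // names that the remote endpoint maps to models server-side.
  return p.kind === "builtin" ? p.models : p.deployments;
}

const bedrock: Provider = {
  kind: "custom",
  id: "aws-bedrock",
  baseUrl: "https://bedrock-runtime.us-east-1.amazonaws.com",
  deployments: ["anthropic.claude-3-5-sonnet"],
};
```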
Storage:

```
~/.openwork/
├── provider-configs.json   ← Multi-config per provider
├── user-providers.json     ← Custom provider definitions
└── .env                    ← Legacy (migrated on startup)
```
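A startup migration from the legacy `.env` to `provider-configs.json` could be sketched as follows. The paths follow the layout above, but the key-naming convention and JSON shape are assumptions for illustration:

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Hypothetical sketch of the legacy .env -> provider-configs.json
// migration run on startup; key names and JSON shape are assumptions.
function migrateLegacyEnv(
  dir: string = path.join(os.homedir(), ".openwork")
): void {
  const envPath = path.join(dir, ".env");
  const configPath = path.join(dir, "provider-configs.json");
  // Only migrate if a legacy file exists and no new config is present.
  if (!fs.existsSync(envPath) || fs.existsSync(configPath)) return;

  // Parse simple KEY=value lines, e.g. ANTHROPIC_API_KEY=sk-...
  const configs: Record<string, { apiKey: string }[]> = {};
  for (const line of fs.readFileSync(envPath, "utf8").split("\n")) {
    const match = line.match(/^([A-Z]+)_API_KEY=(.+)$/);
    if (match) configs[match[1].toLowerCase()] = [{ apiKey: match[2] }];
  }
  fs.writeFileSync(configPath, JSON.stringify(configs, null, 2));
}
```

Writing each provider's config as an array leaves room for the multi-config-per-provider selection described above.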