support a custom HTTPX client in Client and AsyncClient
#380
Conversation
This is needed for `pydantic-ai`.
Thanks for the PR. Can you provide some details in the description of why this change is necessary for `pydantic-ai`?
We would like to reuse a set of HTTP connections (encapsulated in an HTTPX client) when creating Ollama clients, so that creating new clients is as cheap as possible. Please could you at least kick off the tests? By the way, you can configure GitHub to run tests automatically, even for new contributors; that would make it much more attractive to contribute to the library.
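As a minimal sketch of that motivation, assuming the `client=` parameter this PR proposes, several Ollama clients can share one connection pool:

```python
import httpx

from ollama import Client

# One HTTPX client owns the connection pool, so TCP/TLS connections are
# established once and reused by every Ollama client created from it.
shared_http = httpx.Client()

client_a = Client(client=shared_http)  # no new connection pool
client_b = Client(client=shared_http)  # shares the same pool
```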
Hi, can you fix the tests? ollama-python currently targets Python 3.8, so the `X | Y` union type syntax isn't supported.
Review thread on the changed lines (`class BaseClient:`, `class Client:`, `@overload`):
These overloads aren't necessary since there's no overlap between the client and non-client versions.
They are needed to maintain proper typing: the point is that you can't pass `client` together with `follow_redirects` or `timeout`. These overloads mean doing so will give a typing error.
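For reference, a minimal sketch of the overload pattern being discussed; the parameter names (`host`, `follow_redirects`, `timeout`) come from the comment above, while the default host and implementation body are illustrative, not the PR's exact diff:

```python
from typing import Optional, overload

import httpx


class Client:
  @overload
  def __init__(self, *, client: httpx.Client) -> None: ...

  @overload
  def __init__(
    self,
    host: Optional[str] = None,
    *,
    follow_redirects: bool = True,
    timeout: Optional[float] = None,
  ) -> None: ...

  def __init__(self, host=None, *, client=None, follow_redirects=True, timeout=None):
    # No single overload accepts both `client` and `timeout`/`follow_redirects`,
    # so a type checker rejects Client(client=..., timeout=...).
    if client is not None:
      self._client = client
    else:
      self._client = httpx.Client(
        base_url=host or 'http://localhost:11434',
        follow_redirects=follow_redirects,
        timeout=timeout,
      )
```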
Fixed the suggestions, I think. By the way, it would be easier for contributors and less work for you if you let tests run automatically.
Friendly request to please take another peek at this PR. I'm here as a user of pydantic-ai and would love to be able to leverage Ollama structured outputs and pydantic-ai for super-powered local agents. 😀 Really appreciate the hard work from both projects, so thank you all in advance! Also, if you have a donation link of some kind, I'd be happy to use it to support the project.
Likewise on Pydantic AI + Ollama. I was saddened to find out that streaming tool calls don't work with the OpenAI compat layer. Would be nice to use the Ollama bindings directly. |
Allow passing a pre-configured `httpx.Client` or `httpx.AsyncClient`
instance to reuse connections and custom configurations.
```python
import httpx
from ollama import Client
custom_httpx_client = httpx.Client(timeout=30.0)
client = Client(client=custom_httpx_client)
messages = [
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
]

for part in client.chat('gpt-oss:120b-cloud', messages=messages, stream=True):
  print(part.message.content, end='', flush=True)
```
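For the async side, a minimal counterpart, assuming `AsyncClient` gains the same `client=` parameter:

```python
import asyncio

import httpx

from ollama import AsyncClient


async def main():
  custom_httpx_client = httpx.AsyncClient(timeout=30.0)
  client = AsyncClient(client=custom_httpx_client)

  messages = [{'role': 'user', 'content': 'Why is the sky blue?'}]
  async for part in await client.chat('gpt-oss:120b-cloud', messages=messages, stream=True):
    print(part.message.content, end='', flush=True)


asyncio.run(main())
```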
Note: this is an updated minimal version of
ollama#380 to support a custom
`httpx` client.
I took the approach of checking for the class instead of using isinstance, so that nothing has to be a direct instance of httpx.Client or httpx.AsyncClient. It's up to the user to provide the right custom client.
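A minimal sketch of what that looks like; the names are illustrative rather than the PR's actual code:

```python
from typing import Optional

import httpx


class Client:
  def __init__(self, client: Optional[httpx.Client] = None, **kwargs):
    # No isinstance() check: any object that behaves like an httpx.Client
    # is accepted (subclasses, test doubles, wrappers). The annotation
    # documents the expectation; the user must supply a suitable client.
    self._client = client if client is not None else httpx.Client(**kwargs)
```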
Created a follow-up PR that tries to keep the diff small, in hopes we can get this in soon: #618
As well as allowing a custom HTTPX client, this also improves type safety in `Client` and `AsyncClient`: previously `self._client` implicitly had a type of `None`.
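A hedged sketch of that typing improvement, again with illustrative names:

```python
from typing import Optional

import httpx


class AsyncClient:
  # Declaring the attribute type up front means type checkers see
  # self._client as httpx.AsyncClient, never as an implicit None.
  _client: httpx.AsyncClient

  def __init__(self, client: Optional[httpx.AsyncClient] = None) -> None:
    self._client = client if client is not None else httpx.AsyncClient()
```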