diff --git a/README.md b/README.md
index f98523f..518dae4 100644
--- a/README.md
+++ b/README.md
@@ -30,10 +30,19 @@ response = client.chat.completions.create(
 # Cache hit - returns instantly
 response = client.chat.completions.create(
     model="gpt-4o",
-    messages=[{"role": "user", "content": "What's France's capital city?"}]
+    messages=[{"role": "user", "content": "Tell me France's capital city"}]
 )
 ```
 
+Node.js follows the same pattern: point the base URL at your Semcache host instead of OpenAI:
+
+```js
+const OpenAI = require('openai');
+
+// Point to your Semcache host instead of OpenAI
+const openai = new OpenAI({baseURL: 'http://localhost:8080', apiKey: 'your-key'});
+```
+
 ## Features
 
 - **🧠 Completely in-memory** - Prompts, responses and the vector database are stored in-memory