From a4bc0f78f3c2dc54f2855acc8d5970e23e076a3a Mon Sep 17 00:00:00 2001
From: Louis
Date: Wed, 11 Jun 2025 10:13:21 +0200
Subject: [PATCH 1/2] Update readme with nodejs example

---
 README.md | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f98523f..9d0bccd 100644
--- a/README.md
+++ b/README.md
@@ -30,10 +30,19 @@ response = client.chat.completions.create(
 # Cache hit - returns instantly
 response = client.chat.completions.create(
     model="gpt-4o",
-    messages=[{"role": "user", "content": "What's France's capital city?"}]
+    messages=[{"role": "user", "content": "Tell me Fracnce's capital city"}]
 )
 ```
 
+Node.js follows a similar pattern of changing the base URL to point to your Semcache host:
+
+```js
+const OpenAI = require('openai');
+
+// Point to your Semcache host instead of OpenAI
+const openai = new OpenAI({baseURL: 'http://localhost:8080', apiKey: 'your-key'});
+```
+
 ## Features
 
 - **🧠 Completely in-memory** - Prompts, responses and the vector database are stored in-memory

From 8d83744191f19a7de799782702c87185ae507e2a Mon Sep 17 00:00:00 2001
From: Louis
Date: Wed, 11 Jun 2025 10:14:04 +0200
Subject: [PATCH 2/2] Spelling mistake

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9d0bccd..518dae4 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ response = client.chat.completions.create(
 # Cache hit - returns instantly
 response = client.chat.completions.create(
     model="gpt-4o",
-    messages=[{"role": "user", "content": "Tell me Fracnce's capital city"}]
+    messages=[{"role": "user", "content": "Tell me France's capital city"}]
 )
 ```