
MCP server for PostgreSQL - Query databases using natural language via Model Context Protocol


Anashel-RPG/mcp-postgres

MCP Postgres (MCP + LLM toolkit for Postgres)

Connect any Postgres database to an MCP server and a chat UI designed for tool-building, observability, and safe data exploration.

What this is

This repo is a toolkit + reference implementation for running:

  • pg-mcp: an MCP server (Cloudflare Worker) exposing Postgres-focused tools (not just “run SQL”)
  • llm-app: a lightweight chat UI + LLM orchestrator (Cloudflare Worker + Vite frontend) built to inspect, debug, and measure every tool interaction

The demo ships with the public NY SPARCS Hospital Inpatient Discharges (2021) dataset (≈2.13M rows) so you can try it immediately.

What makes it different

Most “LLM + Postgres” projects are thin SQL wrappers. This toolkit intentionally pushes the LLM toward business-intelligence-style tools:

  • compare / segment / pivot / summarize
  • scoped exploration with row caps
  • tool-by-tool profiling (latency + concurrency)

You can still run specific queries, but if you inspect the LLM↔MCP exchange you’ll typically see tool-driven analysis rather than “generate one giant SQL query”.
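Concretely, a “compare by segment” request typically reaches pg-mcp as an MCP tools/call over JSON-RPC. The envelope below follows the MCP spec; the tool name and arguments are invented for illustration and are not the repo’s actual tool schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "segment_summarize",
    "arguments": {
      "table": "sparcs_discharges",
      "group_by": "facility_county",
      "metric": "avg(total_charges)",
      "max_rows": 50
    }
  }
}
```

Inspecting the LLM↔MCP exchange in the demo UI shows a sequence of calls like this rather than one large generated SQL statement.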

Features (demo UI)

  • Full tool transparency: inspect every MCP call, payload, response, and timing
  • SSE conversation inspection: view the full streaming exchange end-to-end
  • Latency + concurrency testing: measure parallel tool calls and overhead
  • Table UX: export tables, open results in modals, copy JSON
  • Multilingual chat: ships with 5 languages; add more easily
  • On-the-fly glossary: generates a glossary to explain unfamiliar fields/values
  • Row-Level Security (RLS) friendly: user sessions map to a “security scope”; the MCP should only pivot data the user is allowed to see
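As a sketch of the RLS idea (the table and column names here are invented, and the repo’s actual scope mechanism may differ), a Postgres policy keyed to a per-session setting looks like:

```sql
-- Hypothetical: restrict each MCP session to its security scope.
-- The MCP would SET app.security_scope at the start of each session.
ALTER TABLE sparcs_discharges ENABLE ROW LEVEL SECURITY;

CREATE POLICY scope_filter ON sparcs_discharges
  USING (facility_id::text = current_setting('app.security_scope', true));
```

The second argument to current_setting makes it return NULL (so the policy matches nothing) when no scope is set, which keeps the fail-closed posture.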

Architecture

  • Runtime: Cloudflare Workers (2 workers) + Cloudflare Hyperdrive (Postgres connection)
  • State: Durable Objects (auth, chat sessions, scoped dataset handles)
  • Observability: Cloudflare Logs + Traces enabled in wrangler.toml
  • Security posture (default): API-key protected, fail-closed, with extra CORS + schema filtering options

Architecture diagram

flowchart LR
  U[User / Browser] -->|HTTPS| APP["llm-app Worker<br/>(API + UI)"]

  subgraph CF[Cloudflare]
    APP -->|serves| ASSETS["Static assets<br/>(Vite build via Workers Assets)"]

    APP -->|reads/writes| DO_AUTH["Durable Object<br/>AuthConfig"]
    APP -->|reads/writes| DO_CHAT["Durable Object<br/>ChatSession"]

    APP -->|MCP JSON-RPC<br/>X-API-Key| MCP["pg-mcp Worker"]
    MCP -->|stores dataset handles| DO_DATA["Durable Object<br/>DatasetStore"]

    MCP -->|Hyperdrive| HD[(Hyperdrive)]
  end

  HD -->|Postgres| PG[(PostgreSQL)]

  APP -->|LLM API| LLM["OpenAI-compatible LLM endpoint"]

What’s in the diagram

  • Durable Objects:
    • llm-app: AuthConfig, ChatSession
    • pg-mcp: DatasetStore
  • KV/R2/D1: not used in the current implementation (the “KV” you may see in the UI code is just a key/value markdown rendering feature, not Cloudflare KV).

Project status (important)

This repo is very close to being turnkey, but it’s not “one command and you’re live” yet.

  • Some setup still requires manual steps (Cloudflare login, Hyperdrive creation, SPARCS CSV download, setting OpenAI key).
  • I’m actively working on simplifying install/deploy (and adding a Docker/local wrapper) this week.

That said, there is already a working setup flow (doctor, setup, deploy:all) and the demo is running in production.

Quickstart (Cloudflare Workers)

Requirements

  • Node.js >= 20 (Wrangler v4 requirement)
  • A Postgres database
  • A Cloudflare account with Workers + (optionally) Hyperdrive access
  • The SPARCS CSV (for the demo dataset): see sql/dataset/README.md

1) Install dependencies

cd mcp-postgres
npm install

2) Run the environment doctor

npm run doctor

3) Configure database import (demo dataset)

The importer reads sql/setup/.env. If it doesn’t exist, it will be created from sql/setup/env.template.

npm run setup:db -- --schema-only

Then edit:

  • sql/setup/.env → set DATABASE_URL=postgres://...
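After the first run, sql/setup/.env should contain a line like the following (placeholder credentials shown — substitute your own connection string):

```shell
# sql/setup/.env
DATABASE_URL=postgres://user:password@host:5432/dbname
```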

To import a safe sample (recommended on first run):

npm run setup:db -- --max-rows 250000 --index-mode minimal

To import the full dataset:

npm run setup:db -- --full

4) Create a Hyperdrive connection (recommended)

Create a Hyperdrive config using the repo-pinned Wrangler (v4):

npm run wrangler -- hyperdrive create my-hyperdrive --connection-string="postgres://..."

Then set the returned Hyperdrive ID in:

  • workers/pg-mcp/wrangler.toml → [[hyperdrive]] id = "YOUR_HYPERDRIVE_ID"
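The resulting block in workers/pg-mcp/wrangler.toml should look roughly like this (the binding name here is an assumption — keep whatever name the worker code already references):

```toml
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "YOUR_HYPERDRIVE_ID"
```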

5) Set secrets

First, login to Cloudflare (using repo-pinned Wrangler):

npm run wrangler -- login

Generate + upload the shared MCP API key to both workers:

npm run setup:secrets
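If you prefer to manage the key yourself, setup:secrets is roughly equivalent to generating a random value and uploading it to both workers — a sketch, assuming openssl is available:

```shell
# Generate a 256-bit key, hex-encoded (64 characters)
MCP_API_KEY=$(openssl rand -hex 32)
echo "${#MCP_API_KEY}"   # prints 64

# Upload the same value to both workers (shown for reference):
# npm run wrangler -- secret put MCP_API_KEY --config workers/pg-mcp/wrangler.toml
# npm run wrangler -- secret put MCP_API_KEY --config workers/llm-app/wrangler.toml
```

Both workers must see the identical MCP_API_KEY value, since llm-app sends it on every MCP call and pg-mcp rejects anything else.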

Set your OpenAI key for llm-app (required for chat):

npm run wrangler -- secret put OPENAI_API_KEY --config workers/llm-app/wrangler.toml

6) Deploy both workers

This deploys pg-mcp, detects its URL, wires llm-app to it, then deploys llm-app.

npm run deploy:all

7) Local/dev loops

# UI dev (remote backend)
npm run dev

# Run the MCP worker locally
npm run dev:mcp

# Run the full app in “local” mode (see llm-app scripts)
npm run dev:local

Using pg-mcp from an MCP client (Cursor, etc.)

This repo includes a template cursor-mcp.json you can copy into your MCP client config.

  • Template: cursor-mcp.json
  • Fill in:
    • url: your deployed pg-mcp RPC endpoint
    • X-API-Key: the MCP_API_KEY you generated
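Filled in, the client config will look roughly like this — the URL path and exact field names below are placeholders, so copy the real shape from the repo’s cursor-mcp.json template:

```json
{
  "mcpServers": {
    "pg-mcp": {
      "url": "https://pg-mcp.example.workers.dev/rpc",
      "headers": {
        "X-API-Key": "YOUR_MCP_API_KEY"
      }
    }
  }
}
```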

Security notes (read this)

pg-mcp is designed to be fail-closed:

  • If MCP_API_KEY is missing, it rejects requests by default.
  • There is an ALLOW_UNAUTHENTICATED flag in workers/pg-mcp/wrangler.toml intended only for local development.
  • CORS controls and schema filtering are available via CORS_*, EXCLUDED_SCHEMAS, and ALLOWED_SOURCES.
  • Multiple “row cap” safeguards exist (MAX_*_ROWS) to prevent accidental large pulls.
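Illustratively, these knobs live in workers/pg-mcp/wrangler.toml. The variable names below are taken from the list above; the values are placeholders, and the exact CORS_* and MAX_*_ROWS names should be read from the shipped wrangler.toml:

```toml
[vars]
ALLOW_UNAUTHENTICATED = "false"   # local development only; never "true" in production
EXCLUDED_SCHEMAS = "pg_catalog,information_schema"
# CORS_* and MAX_*_ROWS: see the shipped wrangler.toml for the exact names and defaults
```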

Repo layout

  • workers/pg-mcp/: MCP server Worker
  • workers/llm-app/: chat UI + LLM orchestration Worker + frontend
  • shared/: shared utilities (telemetry, middleware)
  • sql/: demo schema/import/indexes/rollups + dataset docs
  • postman/: Postman collection + environment template

Roadmap (near-term)

  • A truly turnkey install/deploy flow
  • A Docker/local wrapper so it can run anywhere (not just Workers)
  • More prebuilt tool packs and safer “cap + scope” patterns for production data

If you’re building MCP tools and want a UI where you can see everything (payloads, SSE, concurrency, latency) while staying secure by default, this is exactly what I wanted to exist.
