express-reverse-proxy

Lightweight Node.js CLI tool to serve static front-end files and reverse-proxy API requests to a back-end server. Zero-config start, flexible JSON configuration, PM2 and Docker ready.



Installation

Global install (recommended for CLI use):

npm install -g @lopatnov/express-reverse-proxy

Local dev dependency:

npm install --save-dev @lopatnov/express-reverse-proxy

Run without installing:

npx @lopatnov/express-reverse-proxy

Quick Start

  1. Generate a server-config.json interactively:
express-reverse-proxy --init

Or create it manually:

{
  "port": 8080,
  "folders": "www",
  "proxy": {
    "/api": "http://localhost:4000"
  }
}
  2. Start the server:
express-reverse-proxy

Your front-end files from ./www are now served at http://localhost:8080, and any request to /api/* is forwarded to your back-end at http://localhost:4000.

No config file? If server-config.json is not found, the server starts with built-in defaults: port 8000, serving files from the current directory (.). A warning is printed to the console.

See server-config.json for a full configuration example.


Demo

The repository includes a full working demo: two mock back-end APIs and two front-end clients, each served by a separate proxy instance.

demo/
  server-a.js       — Users API    (port 4001)
  server-b.js       — Products API (port 4002)
  client-a/         — Frontend A   (port 8080, proxies /api → :4001)
  client-b/         — Frontend B   (port 8081, proxies /api → :4002)
  server-config.json

Start all demo processes at once:

npm run demo

This command starts both mock back-ends and the proxy server (serving both clients) as a single Node.js-managed process group. Open the clients in your browser:

URL Description
http://localhost:8080 Client A — Users API demo
http://localhost:8081 Client B — Products API demo

Click the Send request buttons to see live API responses flowing through the proxy. Press Ctrl+C to stop all processes.

Demo architecture:

Browser
  │
  ├─▶  localhost:8080  (express-reverse-proxy)
  │      ├─▶  GET /         → serves demo/client-a/index.html
  │      └─▶  GET /api/**   → proxied to demo/server-a.js :4001
  │
  └─▶  localhost:8081  (express-reverse-proxy)
         ├─▶  GET /         → serves demo/client-b/index.html
         └─▶  GET /api/**   → proxied to demo/server-b.js :4002

How It Works

Every incoming request passes through the middleware chain in this order:

Request
  │
  ├─▶  Logging (morgan)                     — log to console or file
  ├─▶  responseTime                          — measure and record latency
  ├─▶  cors                                  — CORS headers + preflight OPTIONS
  ├─▶  compression                           — gzip/deflate response body
  ├─▶  helmet                                — security HTTP headers
  ├─▶  favicon                               — serve /favicon.ico from memory
  │
  ├─▶  healthCheck                           — GET /__health__ → {status, uptime}
  │       └─▶  Path matches → respond immediately (bypasses auth below)
  │
  ├─▶  rateLimit                             — 429 if client over limit
  ├─▶  basicAuth                             — 401 if credentials missing/wrong
  │
  ├─▶  Custom headers applied (headers)
  │
  ├─▶  Redirect rules checked (redirects)
  │       └─▶  Path matches → 301/302 redirect
  │
  ├─▶  Static files checked (folders, in order)
  │       └─▶  File found → serve it
  │
  ├─▶  CGI scripts checked (cgi)
  │       └─▶  Path + extension matches → execute script → stream response
  │
  ├─▶  File upload handler (upload)
  │       └─▶  POST path matches → save files, return JSON
  │           GET path matches → serve uploaded file
  │
  ├─▶  Reverse proxy rules checked (proxy)
  │       └─▶  Path matches → forward to back-end → return response
  │               (round-robin if multiple targets configured)
  │
  └─▶  No match → unhandled handler (by Accept header)
            └─▶  Return status + body

Options that are not configured are skipped entirely. Static files always take priority over proxy rules.
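
The first-match-wins behavior can be sketched as a plain handler list. This is a minimal illustration of the dispatch principle, not the package's actual code; the handler names and match rules are invented for the example:

```javascript
// Illustrative sketch: handlers are tried in order, the first match wins.
// Because "static" is checked before "proxy", a real file under a proxied
// path prefix is served from disk instead of being forwarded.
const staticFiles = new Set(["/index.html", "/api/mock.json"]);

const handlers = [
  { name: "static",    matches: (req) => staticFiles.has(req.path) },
  { name: "proxy",     matches: (req) => req.path.startsWith("/api") },
  { name: "unhandled", matches: () => true }, // always matches last
];

function dispatch(req) {
  return handlers.find((h) => h.matches(req)).name;
}

console.log(dispatch({ path: "/api/mock.json" })); // static (file exists, wins over proxy)
console.log(dispatch({ path: "/api/users" }));     // proxy
console.log(dispatch({ path: "/nope" }));          // unhandled
```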


CLI Options

The package installs two equivalent commands — use whichever you prefer:

express-reverse-proxy [options]
lerp [options]

lerp is a short alias for Lopatnov Express Reverse Proxy.

Option Description
--help Print help and exit
--config <file> Path to the JSON configuration file. Default: server-config.json
--init Interactively create a server-config.json in the current directory
--cluster [action] Manage the PM2 cluster. Action defaults to start when omitted
--cluster-config <file> Path to a custom PM2 ecosystem config file. Default: ecosystem.config.cjs next to server.js

--config

Specify the path to the configuration file. Accepts a file path or a directory (in which case server-config.json inside that directory is used).

express-reverse-proxy --config ./configs/prod.json
express-reverse-proxy --config ./configs/

Default: server-config.json in the current working directory.

If the file is not found and --config was explicitly provided, the server exits with an error. If no --config is given and the default file is missing, the server starts with built-in defaults (port: 8000, folders: ".") and prints a warning.

--init

Interactively creates a server-config.json in the current working directory. Asks for port, static folder, optional proxy target, and hot reload preference.

express-reverse-proxy --init

Example session:

Port [8000]: 8080
Static folder [.]: www
Proxy path (e.g. /api) [skip]: /api
Proxy target for /api: http://localhost:4000
Hot reload? [y/N]: y
[init] Created /your/project/server-config.json

If server-config.json already exists, the command asks before overwriting.

--cluster

Manage the PM2 process cluster. Action defaults to start when omitted.

Action Description
start Start the cluster (default when action is omitted)
stop Stop all cluster instances
restart Restart all cluster instances
status Show PM2 process status table
logs Stream the last 200 log lines
monitor Open the PM2 real-time monitor

express-reverse-proxy --cluster              # same as --cluster start
express-reverse-proxy --cluster start
express-reverse-proxy --cluster stop
express-reverse-proxy --cluster restart
express-reverse-proxy --cluster status
express-reverse-proxy --cluster logs
express-reverse-proxy --cluster monitor

Pass --config to forward a custom config path to all cluster workers:

express-reverse-proxy --cluster start --config ./configs/prod.json

--cluster-config

Override the PM2 ecosystem config file used to start the cluster. Default: ecosystem.config.cjs in the same directory as server.js.

express-reverse-proxy --cluster start --cluster-config ./my-ecosystem.config.cjs
express-reverse-proxy --cluster restart --cluster-config /etc/myapp/ecosystem.config.cjs

Useful when you need custom PM2 settings such as a different number of instances, environment variables, or log file paths. See PM2 ecosystem documentation for all options.


Configuration

All configuration lives in a single JSON file (default server-config.json).

IDE autocomplete (JSON Schema)

Add a $schema reference to your config file to get property autocomplete, descriptions, and type checking in VS Code and other editors:

{
  "$schema": "https://unpkg.com/@lopatnov/express-reverse-proxy/server-config.schema.json",
  "port": 8080,
  "folders": "www"
}

Environment variables

Variable Default Description
PORT 8000 Overrides the port when it is not set in the config file
NODE_ENV Passed through to PM2 env profiles (env / env_development)

port

The port the server listens on. Defaults to 8000. Can also be set via the PORT environment variable.

{
  "port": 8080
}

logging

Controls HTTP request logging (Morgan). Enabled by default (dev format). Set to false to silence per-request log lines — useful in production behind another proxy, or to keep console output clean.

{
  "port": 8080,
  "logging": false,
  "folders": "www"
}

Object form — write logs to a file and/or choose a different format:

{
  "logging": { "format": "combined", "file": "./logs/access.log" }
}
Option Default Description
format "combined" Morgan format: combined, common, dev, short, or tiny
file none Path to log file (relative to the config file). Appended to if the file already exists

When file is set, logs are written to the file only (not to the console).

Logging is applied per-site, so each virtual host can have its own format and log file.

hotReload

Watches the folders directories for file changes and automatically reloads connected browser tabs. Uses Server-Sent Events (SSE). Intended for local development only.

{
  "port": 8080,
  "hotReload": true,
  "folders": "www"
}

The server exposes two endpoints when hot reload is enabled:

Endpoint Description
GET /__hot-reload__ SSE stream — browsers subscribe here
GET /__hot-reload__/client.js Ready-to-use client script

Connecting the client

Option A — plain HTML project: add a script tag to your page. The file is served directly by the dev server, no installation needed:

<script src="/__hot-reload__/client.js"></script>

Option B — bundled project (Vite, webpack, etc.): import the client module. The bundler resolves it through the package exports field:

import '@lopatnov/express-reverse-proxy/hot-reload-client';

Both options connect to /__hot-reload__ and call location.reload() when a file change is detected. The connection is re-established automatically after 3 seconds if the server restarts.

PM2 note: hot reload works best with a single process (node server.js). If using PM2, set instances: 1 in your ecosystem config — each worker maintains its own file watcher and SSE client list independently.

headers

Add headers to every response — useful for CORS in development.

{
  "headers": {
    "Access-Control-Allow-Origin": "*"
  }
}

redirects

Permanently or temporarily redirect URL paths to new destinations. Redirects are checked before static files and proxy rules.

Object form — map source paths to destinations:

{
  "redirects": {
    "/old-path": "/new-path",
    "/legacy": "https://new.example.com",
    "/temp": { "to": "/temporary-destination", "status": 302 }
  }
}

Array form — explicit entries with from, to, and optional status:

{
  "redirects": [
    { "from": "/old", "to": "/new" },
    { "from": "/moved", "to": "https://example.com", "status": 301 },
    { "from": "/temp", "to": "/somewhere", "status": 302 }
  ]
}
Field Default Description
from Source URL path (array form only, required)
to Destination path or full URL (required)
status 301 HTTP redirect status: 301, 302, 307, or 308

301 — Moved Permanently. 302 — Found (temporary). Use 301 for permanent URL changes and 302 for temporary ones.
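
Redirect lookup can be pictured as a first-match search over the array form, with the status defaulting to 301. A minimal sketch, not the package's actual code:

```javascript
// Illustrative redirect lookup: first rule whose "from" matches the
// request path wins; status defaults to 301 when omitted.
function findRedirect(pathname, rules) {
  const rule = rules.find((r) => r.from === pathname);
  if (!rule) return null; // no redirect, fall through to static/proxy
  return { location: rule.to, status: rule.status ?? 301 };
}

const rules = [
  { from: "/old", to: "/new" },
  { from: "/temp", to: "/somewhere", status: 302 },
];

console.log(findRedirect("/old", rules));   // location /new, status 301
console.log(findRedirect("/temp", rules));  // location /somewhere, status 302
console.log(findRedirect("/other", rules)); // null
```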

folders

Serve static files. Supports three forms:

Single directory:

{
  "folders": "www"
}

Multiple directories (searched in order):

{
  "folders": ["./www", "./mock-json", "../../images"]
}

URL path mapping (nested objects supported):

{
  "folders": {
    "/": "dist",
    "/api": "./mock-json",
    "/assets": {
      "/images": "./images",
      "/css": "./scss/dist",
      "/script": "./scripts"
    }
  }
}

The above maps:

URL path Local directory
/ dist
/api ./mock-json
/assets/images ./images
/assets/css ./scss/dist
/assets/script ./scripts
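
The nested mapping above can be thought of as flattening into (URL prefix, directory) mount pairs. A rough sketch of that idea, not the actual implementation:

```javascript
// Rough sketch: flatten a nested "folders" object into
// (URL prefix, directory) mounts by concatenating route segments.
function flattenFolders(spec, prefix = "") {
  const mounts = [];
  for (const [route, dir] of Object.entries(spec)) {
    const url = (prefix + route).replace(/\/{2,}/g, "/"); // collapse doubled slashes
    if (typeof dir === "string") mounts.push([url, dir]);
    else mounts.push(...flattenFolders(dir, url)); // recurse into nested objects
  }
  return mounts;
}

const mounts = flattenFolders({
  "/": "dist",
  "/assets": { "/images": "./images", "/css": "./scss/dist" },
});
console.log(mounts.map(([url, dir]) => `${url} -> ${dir}`).join("\n"));
// / -> dist
// /assets/images -> ./images
// /assets/css -> ./scss/dist
```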

proxy

Forward requests to a back-end server. Supports three forms:

Proxy everything to one server:

{
  "proxy": "http://localhost:4000"
}

Map a URL path prefix to a server:

{
  "proxy": {
    "/api": "http://localhost:4000"
  }
}

Multiple proxy rules:

{
  "proxy": [
    { "/api": "http://localhost:4000" },
    { "/auth": "http://localhost:5000" }
  ]
}

Load balancing — pass an array of targets for a path to distribute requests in round-robin:

{
  "proxy": {
    "/api": ["http://backend1:3000", "http://backend2:3000", "http://backend3:3000"]
  }
}

Requests to /api are forwarded to the backends in turn: backend1, backend2, backend3, backend1, …
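
Round-robin selection amounts to a wrapping counter over the target list. A minimal sketch (the package's internal implementation may differ):

```javascript
// Round-robin target selection sketched with a simple wrapping counter.
function makeRoundRobin(targets) {
  let i = 0;
  return () => targets[i++ % targets.length];
}

const next = makeRoundRobin([
  "http://backend1:3000",
  "http://backend2:3000",
  "http://backend3:3000",
]);

console.log(next()); // http://backend1:3000
console.log(next()); // http://backend2:3000
console.log(next()); // http://backend3:3000
console.log(next()); // http://backend1:3000 (wraps around)
```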

unhandled

Control responses when no static file or proxy rule matches. Rules are selected by the request's Accept header.

{
  "unhandled": {
    "html": {
      "status": 307,
      "headers": { "Location": "/" }
    },
    "json": {
      "status": 404,
      "send": { "error": "Not Found" }
    },
    "xml": {
      "status": 404,
      "send": "<error>Not Found</error>"
    },
    "*": {
      "status": 404,
      "file": "./www/not-found.txt"
    }
  }
}

Each Accept key supports these response options:

Option Type Description
status number HTTP response status code
headers object Additional response headers
send string | object Inline response body (text or JSON)
file string Path to file whose contents are sent as body
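
Rule selection by Accept header can be sketched as a priority check over the configured keys, with "*" as the fallback. Illustrative only; the package's actual content negotiation may be more nuanced:

```javascript
// Sketch: pick the first configured "unhandled" rule the client accepts,
// falling back to the "*" entry.
function pickRule(acceptHeader, rules) {
  const wants = (type) => acceptHeader.includes(type);
  if (wants("text/html") && rules.html) return rules.html;
  if (wants("application/json") && rules.json) return rules.json;
  if (wants("application/xml") && rules.xml) return rules.xml;
  return rules["*"];
}

const rules = {
  json: { status: 404, send: { error: "Not Found" } },
  "*": { status: 404, file: "./www/not-found.txt" },
};

console.log(pickRule("application/json", rules).send.error); // Not Found
console.log(pickRule("text/plain", rules).file);             // ./www/not-found.txt
```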

host

Route requests to this configuration based on the HTTP Host header. Enables virtual hosting — multiple sites on one server process.

Value Behavior
"app.localhost" Only handles requests whose Host header matches exactly
"*" or omitted Catch-all — handles any request not matched by another entry

To use multi-site mode, make the config file an array instead of an object. Specific hosts are always checked before the catch-all.

Multiple sites on one port — routing by Host header:

[
  { "host": "app.localhost", "port": 8080, "folders": "www" },
  { "host": "admin.localhost", "port": 8080, "folders": "admin" },
  { "host": "*", "port": 8080, "folders": "fallback" }
]

Multiple sites on different ports — one server instance per port:

[
  { "host": "app.localhost", "port": 8080, "folders": "www" },
  { "host": "admin.localhost", "port": 8080, "folders": "admin" },
  {
    "host": "api.localhost",
    "port": 9090,
    "proxy": { "/": "http://localhost:4000" }
  },
  { "host": "*", "port": 9090, "folders": "fallback" }
]

Configs with the same port share one Express server; configs with different port values each start their own server.

Two entries with the same host and port cause a startup error. The same host on different ports is allowed.
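
The "specific host before catch-all" rule can be sketched as a two-pass lookup over the site array (illustrative, not the actual routing code):

```javascript
// Sketch of host-based site selection: exact Host match first,
// then the "*" (or omitted-host) catch-all.
function selectSite(hostHeader, sites) {
  const host = hostHeader.split(":")[0]; // Host header may include a port
  return (
    sites.find((s) => s.host === host) ??
    sites.find((s) => !s.host || s.host === "*")
  );
}

const sites = [
  { host: "app.localhost", folders: "www" },
  { host: "admin.localhost", folders: "admin" },
  { host: "*", folders: "fallback" },
];

console.log(selectSite("admin.localhost:8080", sites).folders); // admin
console.log(selectSite("other.example", sites).folders);        // fallback
```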

ssl

Enable HTTPS on a port by adding an ssl object to any site config for that port. All sites sharing the same port use the same certificate.

Field Type Description
key string Path to the private key file (PEM format)
cert string Path to the certificate file (PEM format)
ca string (optional) Path to the CA bundle for client validation
redirect integer (optional) HTTP port to redirect (301) to HTTPS

Paths are resolved relative to the config file, not the current working directory.

{
  "port": 443,
  "ssl": {
    "key": "./certs/key.pem",
    "cert": "./certs/cert.pem"
  },
  "folders": "./public",
  "proxy": {
    "/api": "http://localhost:4000"
  }
}

Automatic HTTP → HTTPS redirect — set redirect to the HTTP port to also listen on plain HTTP and redirect all traffic to HTTPS:

{
  "port": 443,
  "ssl": {
    "key": "./certs/key.pem",
    "cert": "./certs/cert.pem",
    "redirect": 80
  },
  "folders": "./public"
}

This starts an HTTPS server on port 443 and a tiny redirect-only HTTP server on port 80. All http:// requests are permanently redirected (301) to https://.

All site configs on the same port must either all have ssl or none — mixing is a startup error.
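
The redirect server's job reduces to rewriting the request URL: swap the scheme, drop the plain-HTTP port, keep host and path. A small sketch of that mapping (illustrative only):

```javascript
// Sketch of the HTTP -> HTTPS redirect target: https scheme, same host
// and path, port omitted when it is the HTTPS default (443).
function httpsLocation(hostHeader, url, httpsPort = 443) {
  const host = hostHeader.split(":")[0]; // drop the plain-HTTP port
  const portSuffix = httpsPort === 443 ? "" : `:${httpsPort}`;
  return `https://${host}${portSuffix}${url}`;
}

console.log(httpsLocation("example.com:80", "/api/users"));        // https://example.com/api/users
console.log(httpsLocation("localhost:8080", "/index.html", 8443)); // https://localhost:8443/index.html
```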

compression

Enable gzip/deflate response compression. Reduces the size of HTML, CSS, JS, and JSON responses sent to the browser. Set to true for defaults, or pass an options object.

{
  "port": 8080,
  "compression": true,
  "folders": "www"
}

With custom options (see compression docs):

{
  "compression": { "level": 6, "threshold": 1024 }
}

Compression is applied per-site. Assets that are already compressed (images, fonts, video) are not affected — the browser signals it accepts compressed responses via the Accept-Encoding header.

helmet

Set security-related HTTP response headers. Protects against common web vulnerabilities by configuring headers such as Content-Security-Policy, X-Frame-Options, Strict-Transport-Security, and others.

{
  "port": 8080,
  "helmet": true,
  "folders": "www"
}

Disable a specific header (see helmet docs for all options):

{
  "helmet": { "contentSecurityPolicy": false }
}

When helmet: true is set, the default helmet configuration is applied. This may block inline scripts and cross-origin resources. Adjust contentSecurityPolicy or other options as needed for your project.

cors

Enable CORS (Cross-Origin Resource Sharing) headers and handle preflight OPTIONS requests automatically. Useful when your front-end on one origin calls an API on a different origin.

{
  "port": 8080,
  "cors": true,
  "proxy": { "/api": "http://localhost:4000" }
}

Restrict to a specific origin (see cors docs):

{
  "cors": { "origin": "https://app.example.com" }
}

The cors middleware handles OPTIONS preflight requests that the headers option cannot respond to. Use cors when you need to allow requests from JavaScript on a different domain — for example a React app calling this proxy's API routes.

favicon

Serve a favicon file efficiently. The file is read into memory at startup and served from there on every /favicon.ico request — before static folder scanning or proxy rules run.

{
  "port": 8080,
  "favicon": "./public/favicon.ico",
  "folders": "www"
}

The path is resolved relative to the config file, consistent with the ssl option. Absolute paths are also accepted.

If your favicon already lives inside a directory listed in folders, this option is not needed — express.static will serve it automatically.

responseTime

Add an X-Response-Time header to every response, recording how long the server took to handle the request. Useful for performance monitoring and debugging.

{
  "port": 8080,
  "responseTime": true,
  "folders": "www"
}

With custom precision (see response-time docs):

{
  "responseTime": { "digits": 0, "suffix": false }
}

rateLimit

Limit the number of requests a client can make in a time window. Responds with 429 Too Many Requests when the limit is exceeded. Useful when running without a dedicated reverse proxy.

{
  "port": 8080,
  "rateLimit": { "windowMs": 60000, "limit": 100 },
  "folders": "www"
}
Option Default Description
windowMs 60000 Time window in milliseconds
limit 5 Maximum requests per client per window
message built-in Response body when limit is exceeded

See express-rate-limit docs for all options.

Rate limiting is applied per-site and per IP address. In production behind Nginx or Caddy, configure rate limiting there instead — it runs before Node.js and is more efficient.
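
The windowMs/limit semantics can be sketched as a fixed-window counter per client IP. This is a simplified illustration; express-rate-limit itself supports more strategies and stores:

```javascript
// Fixed-window rate limiter sketch: per-IP counter that resets when the
// window expires. Returns true while the client is under the limit.
function makeLimiter({ windowMs, limit }) {
  const hits = new Map(); // ip -> { count, windowStart }
  return (ip, now = Date.now()) => {
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(ip, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit; // false -> respond 429
  };
}

const allow = makeLimiter({ windowMs: 60000, limit: 2 });
console.log(allow("1.2.3.4", 0));     // true
console.log(allow("1.2.3.4", 1000));  // true
console.log(allow("1.2.3.4", 2000));  // false (over limit)
console.log(allow("1.2.3.4", 61000)); // true (new window)
```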

basicAuth

Protect the site with HTTP Basic Authentication. All requests must include valid credentials or the server responds with 401 Unauthorized.

{
  "port": 8080,
  "basicAuth": {
    "users": { "admin": "s3cr3t" },
    "challenge": true
  },
  "folders": "www"
}
Option Default Description
users Object mapping username → password (required)
challenge false Send WWW-Authenticate header to trigger browser login dialog
realm Realm string shown in the browser login dialog

See express-basic-auth docs for all options.

Passwords are compared in plain text. Do not use Basic Auth over plain HTTP in production — always combine with ssl or put behind a TLS-terminating proxy.

healthCheck

Expose a lightweight health check endpoint. Returns a JSON response with server status, uptime, and current timestamp. Useful for load balancers, monitoring systems, and container health checks.

{
  "port": 8080,
  "healthCheck": true,
  "folders": "www"
}

Default endpoint: GET /__health__

{ "status": "ok", "uptime": 42.3, "timestamp": "2026-01-01T12:00:00.000Z" }

Custom path:

{
  "healthCheck": { "path": "/health" }
}
Option Default Description
path "/__health__" URL path of the health endpoint

The health check endpoint is placed before rate limiting and basic auth — it is always publicly accessible regardless of other authentication settings.

cgi

Execute server-side scripts using the CGI (Common Gateway Interface) protocol. When a request matches the configured URL prefix and file extension, the script is spawned as a child process — HTTP headers become environment variables, the request body is piped to stdin, and the script's stdout is streamed back as the HTTP response.

{
  "port": 8080,
  "cgi": {
    "path": "/cgi-bin",
    "dir": "./cgi-bin",
    "extensions": [".cgi", ".pl", ".py", ".sh"],
    "interpreters": {
      ".py": "python3",
      ".sh": "sh",
      ".pl": "perl"
    }
  }
}
Option Default Description
path "/cgi-bin" URL prefix that triggers CGI dispatch
dir "./cgi-bin" Local directory containing scripts (resolved relative to config file)
extensions [".cgi", ".pl", ".py", ".sh"] File extensions treated as executable CGI scripts
interpreters {} Map of file extension → interpreter command

Shorthand — point directly to the script directory (all defaults apply):

{
  "cgi": "./cgi-bin"
}

CGI environment variables set for every request:

Variable Value
REQUEST_METHOD HTTP method (GET, POST, …)
QUERY_STRING URL query string (without ?)
CONTENT_TYPE Content-Type request header
CONTENT_LENGTH Content-Length request header
SCRIPT_FILENAME Absolute path to the script file
SCRIPT_NAME URL path to the script (e.g. /cgi-bin/hello.py)
SERVER_NAME Requested hostname
SERVER_PORT Server listen port
REMOTE_ADDR Client IP address
HTTP_* All request headers (e.g. HTTP_ACCEPT, HTTP_HOST)
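
Building that environment follows the standard CGI convention: fixed variables from the request line, plus one HTTP_* variable per header. A sketch of the mapping (illustrative, not the package's code):

```javascript
// Sketch of request -> CGI environment mapping. Header names are
// uppercased, dashes become underscores, and each gets an HTTP_ prefix.
function cgiEnv(req, scriptFile) {
  const env = {
    REQUEST_METHOD: req.method,
    QUERY_STRING: req.url.split("?")[1] ?? "",
    SCRIPT_FILENAME: scriptFile,
    REMOTE_ADDR: req.ip,
  };
  for (const [name, value] of Object.entries(req.headers)) {
    env["HTTP_" + name.toUpperCase().replace(/-/g, "_")] = value;
  }
  return env;
}

const env = cgiEnv(
  {
    method: "GET",
    url: "/cgi-bin/hello.py?name=world",
    ip: "127.0.0.1",
    headers: { accept: "text/plain", host: "localhost:8080" },
  },
  "/srv/cgi-bin/hello.py"
);
console.log(env.QUERY_STRING); // name=world
console.log(env.HTTP_ACCEPT);  // text/plain
```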

A minimal Python example (cgi-bin/hello.py):

#!/usr/bin/env python3
print("Content-Type: text/plain")
print("Status: 200 OK")
print()
print("Hello from CGI!")

Unix/macOS note: Scripts must be executable: chmod +x cgi-bin/hello.py. Alternatively, configure an interpreters entry for the extension — no executable bit required when an interpreter is specified.

Windows note: Scripts are not directly executable on Windows. You must configure interpreters for every extension you use; otherwise the request returns a 500 spawn error.

Array form — multiple independent CGI directories on the same site:

{
  "cgi": [
    {
      "path": "/py-scripts",
      "dir": "./py-scripts",
      "extensions": [".py"],
      "interpreters": { ".py": "python3" }
    },
    {
      "path": "/node-scripts",
      "dir": "./node-scripts",
      "extensions": [".js"],
      "interpreters": { ".js": "node" }
    }
  ]
}

Each entry in the array sets up an independent CGI mount point with its own directory, URL prefix, extensions, and interpreters.

upload

Accept file uploads via multipart/form-data and save them to a local directory. Uploaded files can be retrieved immediately via GET.

{
  "port": 8080,
  "upload": {
    "path": "/upload",
    "dir": "./uploads",
    "maxFileSize": 10485760,
    "maxFiles": 10,
    "allowedTypes": ["image/jpeg", "image/png", "application/pdf"],
    "fieldName": "file"
  }
}

Shorthand — directory only (all defaults apply):

{
  "upload": "./uploads"
}
Option Default Description
path "/upload" URL prefix for the upload endpoint
dir "./uploads" Save directory (resolved relative to the config file)
maxFileSize none Maximum file size in bytes; responds with 413 when exceeded
maxFiles none Maximum number of files per request; responds with 400 when exceeded
allowedTypes none MIME type whitelist; responds with 400 when the type is not in the list
fieldName any field Accept only files uploaded in this specific form field

Array form — multiple upload endpoints on the same site:

{
  "upload": [
    { "path": "/photos", "dir": "./photos", "allowedTypes": ["image/jpeg", "image/png"] },
    { "path": "/docs",   "dir": "./documents", "allowedTypes": ["application/pdf"], "maxFileSize": 5242880 }
  ]
}

HTTP interface:

Method URL Description
POST <path> Upload files via multipart/form-data
GET <path>/<name> Retrieve a previously uploaded file

POST success response (200):

{
  "files": [
    { "file": "photo-1700000000000-123456789.jpg", "size": 45678, "originalName": "photo.jpg" }
  ]
}

Upload with curl:

curl -F "file=@photo.jpg" http://localhost:8080/upload

The upload directory is created automatically at startup if it does not exist. Saved filenames include a timestamp and random suffix to avoid collisions.


Configuration Recipes

Static files first, then fall back to back-end

All unmatched requests are forwarded to localhost:4000.

{
  "port": 8080,
  "folders": "www",
  "proxy": "http://localhost:4000"
}
  • GET /index.html → served from ./www/index.html
  • GET /missing → proxied to http://localhost:4000/missing

Static files + API on a specific path

Only /api/* requests go to the back-end; everything else stays local.

{
  "port": 8080,
  "folders": "www",
  "proxy": {
    "/api": "http://localhost:4000"
  }
}
  • GET /index.html → served from ./www/index.html
  • GET /api/users → proxied to http://localhost:4000/users
  • GET /missing → 404 Not Found

Hot reload dev server

Local development setup: serve a front-end build folder, proxy API requests to a local back-end, and automatically reload the browser on file changes.

{
  "port": 3000,
  "hotReload": true,
  "folders": "./dist",
  "proxy": {
    "/api": "http://localhost:4000"
  }
}

Add the client script to your HTML (or import it in your bundler entry point):

<script src="/__hot-reload__/client.js"></script>

The browser reconnects automatically after server restarts.


HTTPS with automatic HTTP redirect

Serve the site over HTTPS and redirect all plain-HTTP traffic (port 80) to HTTPS (port 443) with a permanent 301 redirect.

mkdir certs
openssl req -x509 -newkey rsa:2048 -keyout certs/key.pem -out certs/cert.pem \
  -days 365 -nodes -subj "/CN=example.com"

{
  "port": 443,
  "ssl": {
    "key": "./certs/key.pem",
    "cert": "./certs/cert.pem",
    "redirect": 80
  },
  "folders": "./public",
  "proxy": {
    "/api": "http://localhost:4000"
  }
}

The server logs two listeners on startup:

[listen] https://localhost:443
[listen] http redirect :80 → https :443

HTTPS with a self-signed certificate (local dev)

mkdir certs
openssl req -x509 -newkey rsa:2048 -keyout certs/key.pem -out certs/cert.pem \
  -days 365 -nodes -subj "/CN=localhost"

{
  "port": 8443,
  "ssl": {
    "key": "./certs/key.pem",
    "cert": "./certs/cert.pem"
  },
  "folders": "www",
  "proxy": {
    "/api": "http://localhost:4000"
  }
}

Start and open in browser (accept the self-signed cert warning):

express-reverse-proxy --config server-config.json
# [listen] https://localhost:8443

Production hardening (helmet + cors + compression)

Enable security headers, CORS, and response compression in one config:

{
  "port": 8080,
  "compression": true,
  "helmet": true,
  "cors": { "origin": "https://app.example.com" },
  "responseTime": true,
  "folders": "www",
  "proxy": {
    "/api": "http://localhost:4000"
  }
}

Protected admin area

Protect a site with rate limiting and HTTP Basic Auth. Useful for internal tools or staging environments.

{
  "port": 8080,
  "rateLimit": { "windowMs": 60000, "limit": 30 },
  "basicAuth": {
    "users": { "admin": "s3cr3t", "viewer": "readonly" },
    "challenge": true
  },
  "folders": "./admin",
  "proxy": {
    "/api": "http://localhost:4000"
  }
}
  • Requests without valid credentials → 401 Unauthorized (browser shows login dialog)
  • More than 30 requests per minute from the same IP → 429 Too Many Requests

Always combine Basic Auth with ssl in production — credentials are transmitted in plain text otherwise.


URL migration (permanent redirects)

Redirect old URLs to new ones after a site restructure, without breaking existing links or SEO rankings.

{
  "port": 8080,
  "redirects": [
    { "from": "/about.html",    "to": "/about",       "status": 301 },
    { "from": "/products.html", "to": "/products",    "status": 301 },
    { "from": "/blog/:slug",    "to": "/posts/:slug", "status": 301 }
  ],
  "folders": "./public"
}

Or as an object map for simple path-to-path redirects:

{
  "redirects": {
    "/old-home":  "/",
    "/old-about": "/about",
    "/legacy-api": "https://api.example.com"
  }
}

Load-balanced API proxy

Distribute API traffic across multiple back-end instances using round-robin load balancing. No external load balancer required.

{
  "port": 8080,
  "folders": "./public",
  "proxy": {
    "/api": [
      "http://backend-1:3000",
      "http://backend-2:3000",
      "http://backend-3:3000"
    ]
  }
}

Requests to /api/* are forwarded to the three back-ends in turn. If a back-end is down, its slot in the rotation still receives requests — add a health check at the application level or use a dedicated load balancer for automatic failover.


File upload server

Accept file uploads from a web form or API client and serve them back over HTTP.

{
  "port": 8080,
  "upload": [
    {
      "path": "/photos",
      "dir": "./storage/photos",
      "maxFileSize": 5242880,
      "allowedTypes": ["image/jpeg", "image/png", "image/webp"]
    },
    {
      "path": "/documents",
      "dir": "./storage/docs",
      "maxFileSize": 10485760,
      "allowedTypes": ["application/pdf"]
    }
  ]
}

Upload a photo:

curl -F "file=@photo.jpg" http://localhost:8080/photos
# {"files":[{"file":"photo-1700000000000-123456789.jpg","size":45678,"originalName":"photo.jpg"}]}

Retrieve it:

curl http://localhost:8080/photos/photo-1700000000000-123456789.jpg

Health check for Docker / Kubernetes

Add a health check endpoint and write access logs to a file — a common pattern for containerized deployments.

{
  "port": 8080,
  "healthCheck": { "path": "/health" },
  "logging": { "format": "combined", "file": "/var/log/app/access.log" },
  "compression": true,
  "folders": "./public",
  "proxy": {
    "/api": "http://backend:3000"
  }
}

Docker HEALTHCHECK:

HEALTHCHECK --interval=30s --timeout=5s --start-period=10s \
  CMD curl -f http://localhost:8080/health || exit 1

Kubernetes liveness/readiness probe:

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30

CORS headers + rich error responses

{
  "port": 8080,
  "headers": {
    "Access-Control-Allow-Origin": "*"
  },
  "folders": "www",
  "proxy": {
    "/api": "https://stat.ripe.net"
  },
  "unhandled": {
    "html": {
      "status": 307,
      "headers": { "Location": "/" }
    },
    "json": {
      "status": 404,
      "send": { "error": "JSON Not Found" }
    },
    "xml": {
      "status": 404,
      "send": "<error>Not Found</error>"
    }
  }
}

Docker & PM2

Docker

A Dockerfile is included. Build and run:

docker build -t express-reverse-proxy .
docker run -p 8080:8080 -v $(pwd)/server-config.json:/app/server-config.json express-reverse-proxy

PM2

The package includes a default ecosystem.config.cjs, resolved automatically from the package directory. It runs the server in cluster mode — PM2 acts as a load balancer and all worker processes share a single port through the Node.js cluster module. Without cluster mode each instance would try to bind its own copy of the port and all but the first would fail.

Default ecosystem.config.cjs (for reference)
// ecosystem.config.cjs
const path = require("path");
module.exports = {
  apps: [
    {
      name: "express-reverse-proxy", // process name in pm2 list
      script: path.join(__dirname, "server.js"), // absolute path — works after global install
      instances: "max", // one worker per CPU core
      exec_mode: "cluster", // required for port sharing
      wait_ready: true, // wait for process.send('ready') before marking healthy
      listen_timeout: 30000, // ms to wait for 'ready' signal
      kill_timeout: 5000, // ms to wait for graceful shutdown before SIGKILL
      shutdown_with_message: true, // send 'shutdown' message instead of SIGINT
      pmx: false, // disable the PM2 metrics module (see the wmic note in Troubleshooting)
      env: { NODE_ENV: "production" },
      env_development: { NODE_ENV: "development" },
    },
  ],
};

To customize PM2 behavior, provide your own file via --cluster-config (optional):

express-reverse-proxy --cluster start --cluster-config ./my-ecosystem.config.cjs

Run via npm scripts:

| Script | Description |
| --- | --- |
| npm run pm2-start | Start cluster (max CPU cores); reads server-config.json from cwd |
| npm run pm2-restart | Restart all instances |
| npm run pm2-stop | Stop all instances |
| npm run pm2-status | Show process status |
| npm run pm2-logs | Show last 200 log lines |
| npm run pm2-monitor | Open real-time monitor |

Or use the CLI directly:

express-reverse-proxy --cluster start
express-reverse-proxy --cluster status
express-reverse-proxy --cluster stop

Behind a reverse proxy

For production deployments it is common to place a dedicated reverse proxy in front of express-reverse-proxy to handle TLS termination, HTTP/2, gzip compression, and rate limiting. In this setup the Node.js server listens on a local port over plain HTTP, while the outer proxy terminates HTTPS connections from the internet:

Internet (HTTPS / HTTP/2)
        ↓
  Nginx or Caddy          — TLS, HTTP/2, gzip, rate limiting
        ↓ HTTP/1.1 (localhost)
  express-reverse-proxy   — PM2 cluster, routing, static files, API proxy
        ↓
  Backend API servers

No ssl config is needed in server-config.json when the outer proxy handles TLS.

Nginx

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass         http://127.0.0.1:8080;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}

Free certificates can be obtained with Certbot: certbot --nginx -d example.com.
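For reference, consuming the X-Forwarded-* headers set by the Nginx block above can be sketched in plain Node.js. This is a hedged illustration, not code from this package — express-reverse-proxy's own handling may differ:

```javascript
// Sketch: recover the original client IP and protocol from the
// X-Forwarded-* headers the Nginx config above sets.
// Only trust these headers when requests can reach you solely through
// your own reverse proxy.
function clientInfo(headers, socketAddress) {
  const forwarded = headers["x-forwarded-for"];
  return {
    // X-Forwarded-For may hold a chain "client, proxy1, proxy2" —
    // the left-most entry is the original client.
    ip: forwarded ? forwarded.split(",")[0].trim() : socketAddress,
    protocol: headers["x-forwarded-proto"] || "http",
  };
}

// clientInfo({ "x-forwarded-for": "203.0.113.7", "x-forwarded-proto": "https" }, "127.0.0.1")
//   → { ip: "203.0.113.7", protocol: "https" }
```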

Caddy

Caddy provisions and renews Let's Encrypt certificates automatically — no extra tooling needed:

example.com {
    reverse_proxy 127.0.0.1:8080
}

Start with caddy run --config Caddyfile.


Testing

The project uses Cypress for E2E testing. Tests cover static file serving, reverse proxy routing, custom response headers, and unhandled route behaviour on both Client A (:8080) and Client B (:8081).

Open Cypress interactively (pick tests to run in the browser UI):

npm run cypress:open

Run all tests headlessly (requires demo servers to be running first):

# Terminal 1 — start demo servers
npm run demo

# Terminal 2 — run tests
npm run cypress:run

Or start everything automatically:

npm test

npm test uses scripts/test.js, which starts all demo servers, waits for all four ports (4001, 4002, 8080, 8081) to be ready, runs Cypress, then shuts everything down.

Test coverage

| Suite | What is tested |
| --- | --- |
| static.cy.js | Both clients load, serve CSS, return custom headers, redirect unhandled HTML routes, return 404 for unhandled JSON |
| proxy.cy.js | /api/users proxied to Users API, /api/products proxied to Products API, 404 for non-existent resources, UI button interaction |

Troubleshooting

Server starts but static files are not served

  • Check that the path in folders is correct relative to where you run the command, not relative to server-config.json.
  • Verify the directory exists: ls ./www (or the path you configured).

Proxy requests return 502 or fail silently

  • Confirm the back-end is running and reachable: curl http://localhost:4000/api/health.
  • The proxy address must include the protocol: "http://localhost:4000".

Port already in use

Error: listen EADDRINUSE :::8080

Either change port in server-config.json, or set the environment variable:

PORT=9090 express-reverse-proxy

CORS errors in the browser

Add a headers block to your config:

{
  "headers": {
    "Access-Control-Allow-Origin": "*"
  }
}

server-config.json not found

If no config file is found, the server starts with built-in defaults — port 8000, serving files from . (the current directory) — and prints a yellow warning. This is useful for a quick local file preview.

To use your own config, either place server-config.json in the working directory or specify a path with --config:

express-reverse-proxy --config ./configs/dev.json

PM2 shows Error: spawn wmic ENOENT on Windows 11

PM2 error: Error caught while calling pidusage
PM2 error: Error: Error: spawn wmic ENOENT

wmic was removed in newer Windows 11 builds. PM2 uses it internally to collect CPU/memory metrics, but this does not affect the server — all instances start and serve requests normally. The metrics columns in pm2 status will show 0% / 0b.

To suppress the errors, ecosystem.config.cjs already includes pmx: false which disables the metrics module. If the errors still appear after restarting, delete the PM2 daemon state and start fresh:

pm2 kill
npm run pm2-start

Multiple configs with the same host + port

Error: Duplicate host "app.localhost" on port 8080

Each host + port combination must be unique across all entries in an array config. The same host on different ports is allowed.
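A valid multi-entry setup might look like the following. The key names here are illustrative assumptions — check the repo's server-config.json for the exact schema:

```json
[
  { "hostname": "app.localhost",   "port": 8080, "folders": "client-a" },
  { "hostname": "admin.localhost", "port": 8080, "folders": "client-b" },
  { "hostname": "app.localhost",   "port": 8081, "folders": "client-a" }
]
```

Every host + port pair is unique, and app.localhost appears on both 8080 and 8081, which is allowed.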


Contributing

Contributions are welcome! Please read CONTRIBUTING.md before opening a pull request.


Built With

  • Node.js — JavaScript runtime (ESM)
  • Express — HTTP server framework
  • express-http-proxy — reverse proxy middleware
  • Morgan — HTTP request logger
  • compression — gzip/deflate response compression
  • helmet — security HTTP headers
  • cors — CORS headers and preflight handling
  • serve-favicon — efficient favicon serving
  • response-time — X-Response-Time header
  • express-rate-limit — request rate limiting
  • express-basic-auth — HTTP Basic Authentication
  • CGI support — built on Node.js child_process.spawn (no external dependency)
  • multer — multipart/form-data file upload handling
  • PM2 — production process manager with clustering
  • Biome — fast linter and formatter (Rust-based)
  • Cypress — E2E testing framework
  • Docker — containerization

License

Apache-2.0 © 2020–2026 Oleksandr Lopatnov · LinkedIn
