85 changes: 25 additions & 60 deletions README.md
API gateway, usage metering, and billing services for the Callora API marketplace.
- Health check: `GET /api/health`
- Placeholder routes: `GET /api/apis`, `GET /api/usage`
- JSON body parsing; ready to add auth, metering, and contract calls
- In-memory `VaultRepository` with:
- `create(userId, contractId, network)`
- `findByUserId(userId, network)`
- `updateBalanceSnapshot(id, balance, lastSyncedAt)`
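The documented surface above can be sketched as a minimal in-memory implementation. This is illustrative only: the method signatures and the one-vault-per-user-per-network and non-negative `bigint` rules come from this README, while the `Vault` field names and id scheme are assumptions.

```typescript
// Minimal sketch of the in-memory VaultRepository described above.
// Field names beyond the documented method signatures are assumptions.
interface Vault {
  id: number;
  userId: string;
  contractId: string;
  network: string;
  balanceSnapshot: bigint; // smallest units, non-negative
  lastSyncedAt: Date | null;
}

class VaultRepository {
  private vaults: Vault[] = [];
  private nextId = 1;

  create(userId: string, contractId: string, network: string): Vault {
    // Enforce one vault per user per network.
    if (this.findByUserId(userId, network)) {
      throw new Error(`Vault already exists for ${userId} on ${network}`);
    }
    const vault: Vault = {
      id: this.nextId++,
      userId,
      contractId,
      network,
      balanceSnapshot: 0n,
      lastSyncedAt: null,
    };
    this.vaults.push(vault);
    return vault;
  }

  findByUserId(userId: string, network: string): Vault | undefined {
    // Network-aware lookup: one user may hold vaults on several networks.
    return this.vaults.find((v) => v.userId === userId && v.network === network);
  }

  updateBalanceSnapshot(id: number, balance: bigint, lastSyncedAt: Date): Vault {
    if (balance < 0n) throw new Error('balanceSnapshot must be non-negative');
    const vault = this.vaults.find((v) => v.id === id);
    if (!vault) throw new Error(`Vault ${id} not found`);
    vault.balanceSnapshot = balance;
    vault.lastSyncedAt = lastSyncedAt;
    return vault;
  }
}
```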

## Vault repository behavior

- Enforces one vault per user per network.
- `balanceSnapshot` is stored in smallest units as non-negative integer `bigint` values.
- `findByUserId` is network-aware and returns the vault for a specific user/network pair.

## Developer analytics

Endpoint:

`GET /api/developers/analytics`

Authentication:

- Requires `x-user-id` header (developer identity for now).

Query params:

- `from` (required): ISO date/time
- `to` (required): ISO date/time
- `groupBy` (optional): `day | week | month` (default: `day`)
- `apiId` (optional): filters to one API (must belong to the authenticated developer)
- `includeTop` (optional): set to `true` to include `topEndpoints` and anonymized `topUsers`
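A request to the analytics endpoint can be assembled like this. The path, params, and `x-user-id` header come from the docs above; the helper function and base URL are illustrative.

```typescript
// Builds the query string for GET /api/developers/analytics.
// Param names match the documentation above; the helper itself is a sketch.
function buildAnalyticsUrl(
  base: string,
  params: {
    from: string;
    to: string;
    groupBy?: 'day' | 'week' | 'month';
    apiId?: string;
    includeTop?: boolean;
  }
): string {
  const qs = new URLSearchParams({ from: params.from, to: params.to });
  if (params.groupBy) qs.set('groupBy', params.groupBy);
  if (params.apiId) qs.set('apiId', params.apiId);
  if (params.includeTop) qs.set('includeTop', 'true');
  return `${base}/api/developers/analytics?${qs.toString()}`;
}

// Usage (x-user-id identifies the developer for now):
// await fetch(
//   buildAnalyticsUrl('http://localhost:3000', {
//     from: '2024-01-01', to: '2024-01-31', groupBy: 'week',
//   }),
//   { headers: { 'x-user-id': 'dev-123' } }
// );
```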

## Local setup

1. **Prerequisites:** Node.js 18+
2. **Install and run (dev):**

```bash
cd callora-backend
npm install
npm run dev
```
3. API base: [http://localhost:3000](http://localhost:3000). Example: [http://localhost:3000/api/health](http://localhost:3000/api/health).

### Docker Setup

You can run the entire stack (API and PostgreSQL) locally using Docker Compose:

```bash
docker compose up --build
```

The API will be available at http://localhost:3000, and the PostgreSQL database will be mapped to local port 5432.

## Scripts

| Command | Description |
|---|---|
| `npm run dev` | Run with tsx watch (no build) |
| `npm run build` | Compile TypeScript to `dist/` |
| `npm start` | Run compiled `dist/index.js` |
| `npm test` | Run unit tests |
| `npm run test:coverage` | Run unit tests with coverage |

## Database migrations

This repository includes SQL migrations for `api_keys`, `vaults`, and `audit_logs` in `migrations/`.

- `audit_logs` provides a compliance-oriented, append-only record of sensitive actions.
- `api_keys` stores only `key_hash` (never the raw API key).
- `api_keys` enforces unique `(user_id, api_id)` and has an index on `(user_id, prefix)` for key lookup.
- `vaults` stores per-user, per-network snapshots with unique `(user_id, network)`.

Run migrations with PostgreSQL:

```bash
psql "$DATABASE_URL" -f migrations/0001_create_api_keys_and_vaults.up.sql
psql "$DATABASE_URL" -f migrations/0002_create_audit_logs.up.sql
```

## Observability (Prometheus Metrics)

The application exposes a standard Prometheus text-format metrics endpoint at `GET /api/metrics`. It automatically tracks `http_requests_total`, `http_request_duration_seconds`, and default Node.js system metrics.

### Production security

In production (`NODE_ENV=production`), this endpoint is protected. You must configure the `METRICS_API_KEY` environment variable and scrape the endpoint with an `Authorization: Bearer <YOUR_METRICS_API_KEY>` header.
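A scraper or health script can attach the required header like this. The `/api/metrics` path and `METRICS_API_KEY` variable come from the docs above; the helper function is illustrative, not part of the repository.

```typescript
// Builds the headers needed to scrape the protected metrics endpoint
// in production. The key name comes from the README; the helper is a sketch.
function metricsRequestHeaders(apiKey: string | undefined): Record<string, string> {
  // In production the endpoint rejects requests without a Bearer token.
  if (!apiKey) throw new Error('METRICS_API_KEY is required in production');
  return { Authorization: `Bearer ${apiKey}` };
}

// Usage:
// const res = await fetch('http://localhost:3000/api/metrics', {
//   headers: metricsRequestHeaders(process.env.METRICS_API_KEY),
// });
// const body = await res.text(); // Prometheus text format
```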

## Project layout

```text
callora-backend/
|-- src/
| |-- index.ts # Express entry point (app and routes)
| |-- routes/ # API routes (audit, auth, keys, etc.)
| |-- audit.ts # Audit logging service
| |-- repositories/
| |-- vaultRepository.ts # Vault repository implementation
| |-- vaultRepository.test.ts # Unit tests
```

## Environment

- `PORT` - HTTP port (default: 3000). Optional for local dev.
- `DATABASE_URL` - PostgreSQL connection string.

This repo is part of [Callora](https://github.com/your-org/callora). Frontend: `callora-frontend`. Contracts: `callora-contracts`.
9 changes: 9 additions & 0 deletions eslint.config.js
@@ -13,6 +13,15 @@ export default tseslint.config(
},
},
},
{
rules: {
'@typescript-eslint/no-unused-vars': [
'error',
{ varsIgnorePattern: '^_', argsIgnorePattern: '^_' },
],
'@typescript-eslint/no-explicit-any': 'off',
},
},
{
ignores: ['dist/', 'node_modules/'],
}
21 changes: 21 additions & 0 deletions jest.config.cjs
@@ -0,0 +1,21 @@
/** @type {import('ts-jest').JestConfigWithTsJest} */
module.exports = {
preset: 'ts-jest',
testEnvironment: 'node',
testMatch: ['**/?(*.)+(spec|test).ts'],
transform: {
'^.+\\.ts$': [
'ts-jest',
{
tsconfig: {
module: 'CommonJS',
moduleResolution: 'Node',
esModuleInterop: true
}
}
]
},
moduleNameMapper: {
'^(\\.{1,2}/.*)\\.js$': '$1'
}
};
2 changes: 1 addition & 1 deletion jest.config.js
@@ -10,7 +10,7 @@ export default {
useESM: true,
tsconfig: {
module: 'ESNext',
moduleResolution: 'Bundler',
moduleResolution: 'node',
},
}],
},
4 changes: 4 additions & 0 deletions migrations/0002_create_audit_logs.down.sql
@@ -0,0 +1,4 @@
DROP TRIGGER IF EXISTS trg_prevent_audit_logs_delete ON audit_logs;
DROP TRIGGER IF EXISTS trg_prevent_audit_logs_update ON audit_logs;
DROP FUNCTION IF EXISTS prevent_audit_logs_mutation;
DROP TABLE IF EXISTS audit_logs;
30 changes: 30 additions & 0 deletions migrations/0002_create_audit_logs.up.sql
@@ -0,0 +1,30 @@
CREATE TABLE audit_logs (
id BIGSERIAL PRIMARY KEY,
actor_user_id BIGINT NOT NULL,
action VARCHAR(64) NOT NULL,
resource TEXT NOT NULL,
ip INET,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX idx_audit_logs_actor_user_id_created_at
ON audit_logs (actor_user_id, created_at DESC);
CREATE INDEX idx_audit_logs_action_created_at
ON audit_logs (action, created_at DESC);
CREATE INDEX idx_audit_logs_resource_created_at
ON audit_logs (resource, created_at DESC);

CREATE OR REPLACE FUNCTION prevent_audit_logs_mutation()
RETURNS TRIGGER AS $$
BEGIN
RAISE EXCEPTION 'audit_logs is append-only';
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_prevent_audit_logs_update
BEFORE UPDATE ON audit_logs
FOR EACH ROW EXECUTE FUNCTION prevent_audit_logs_mutation();

CREATE TRIGGER trg_prevent_audit_logs_delete
BEFORE DELETE ON audit_logs
FOR EACH ROW EXECUTE FUNCTION prevent_audit_logs_mutation();
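Because the triggers above make `audit_logs` append-only, application code only ever issues `INSERT`s. A minimal sketch of building that parameterized statement (column names match the migration; the helper itself and its usage with node-postgres are illustrative, not part of the repository):

```typescript
// Builds a parameterized INSERT for the append-only audit_logs table.
// Column names come from the migration above; the helper is a sketch.
interface AuditQuery {
  text: string;
  values: (string | number | null)[];
}

function buildAuditInsert(
  actorUserId: number,
  action: string,
  resource: string,
  ip: string | null = null
): AuditQuery {
  return {
    text:
      'INSERT INTO audit_logs (actor_user_id, action, resource, ip) VALUES ($1, $2, $3, $4)',
    values: [actorUserId, action, resource, ip],
  };
}

// Usage with node-postgres: const q = buildAuditInsert(7, 'api_key.created', 'api_keys:42');
// await pool.query(q.text, q.values);
// There is no update or delete helper: the triggers above raise
// 'audit_logs is append-only' on any UPDATE or DELETE.
```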