s2.dev is a serverless datastore for real-time, streaming data.
This repository contains:
- s2-cli - The official S2 command-line interface
- s2-lite - An open source, self-hostable server implementation of the S2 API
```shell
# Homebrew
brew install s2-streamstore/s2/s2

# Cargo
cargo install --locked s2-cli

# Install script
curl -fsSL https://raw.githubusercontent.com/s2-streamstore/s2/main/install.sh | bash
```

Or specify a version with `VERSION=x.y.z` before the command. See all releases.
```shell
docker pull ghcr.io/s2-streamstore/s2
```

s2-lite is embedded as the `s2 lite` subcommand of the CLI. It's a self-hostable server implementation of the S2 API.
It uses SlateDB as its storage engine, which relies entirely on object storage for durability.
It is easy to run `s2 lite` against object stores like AWS S3 and Tigris. It is a single-node binary with no other external dependencies. Just like s2.dev, data is always durable on object storage before being acknowledged or returned to readers.
You can also simply omit `--bucket`, which makes it operate entirely in memory (or use `--local-root` to persist to local disk instead).
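To illustrate the disk-backed mode, here is a hypothetical invocation using the `--local-root` flag mentioned above (the port and directory name are arbitrary choices, not defaults):

```shell
# Persist stream data to a local directory instead of an object store.
# ./s2lite-data is an arbitrary example path; omit --local-root (and
# --bucket) to run fully in-memory. Guarded so the snippet is inert
# where the s2 CLI isn't installed.
if command -v s2 >/dev/null; then
  s2 lite --port 8080 --local-root ./s2lite-data
else
  echo "s2 CLI not installed"
fi
```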
> [!TIP]
> In-memory s2-lite is a very effective S2 emulator for integration tests.
Here's how you can run in-memory without any external dependency:
```shell
# Using Docker
docker run -p 8080:80 ghcr.io/s2-streamstore/s2 lite

# Or directly with the CLI
s2 lite --port 8080
```

### AWS S3 bucket example
```shell
docker run -p 8080:80 \
  -e AWS_PROFILE=${AWS_PROFILE} \
  -v ~/.aws:/home/nonroot/.aws:ro \
  ghcr.io/s2-streamstore/s2 lite \
  --bucket ${S3_BUCKET} \
  --path s2lite
```

### Static credentials example (Tigris, R2, etc.)
```shell
docker run -p 8080:80 \
  -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
  -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
  -e AWS_ENDPOINT_URL_S3=${AWS_ENDPOINT_URL_S3} \
  ghcr.io/s2-streamstore/s2 lite \
  --bucket ${S3_BUCKET} \
  --path s2lite
```

> [!NOTE]
> Point the S2 CLI or SDKs at your lite instance like this:
```shell
export S2_ACCOUNT_ENDPOINT="http://localhost:8080"
export S2_BASIN_ENDPOINT="http://localhost:8080"
export S2_ACCESS_TOKEN="ignored"
```

Let's make sure the server is ready:

```shell
while ! curl -sf ${S2_ACCOUNT_ENDPOINT}/health -o /dev/null; do echo Waiting...; sleep 2; done && echo Up!
```

Install the CLI (see Installation above), or upgrade if `s2 --version` is older than 0.26.
Let's create a basin with auto-creation of streams enabled:

```shell
s2 create-basin liteness --create-stream-on-append --create-stream-on-read
```

Test your performance:
```shell
s2 bench liteness --target-mibps 10 --duration 5s --catchup-delay 0s
```

Now let's try streaming sessions. In one or more new terminals (make sure you re-export the env vars noted above):
```shell
s2 read s2://liteness/starwars 2> /dev/null
```

Now, back in your original terminal, let's write to the stream:
```shell
nc starwars.s2.dev 23 | s2 append s2://liteness/starwars
```

- `/health` returns 200 on success, for readiness and liveness checks
- `/metrics` returns Prometheus text format
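A minimal probe built from these two endpoints, assuming a local instance on port 8080 as in the examples above (the snippet degrades gracefully when no server is running):

```shell
BASE="http://localhost:8080"

# /health: expect HTTP 200 when the server is ready
curl -sf "$BASE/health" >/dev/null && echo "healthy" || echo "unreachable"

# /metrics: Prometheus text format; show the first few lines
curl -s "$BASE/metrics" | head -n 5 || true
```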
Use `SL8_`-prefixed environment variables, e.g.:

```shell
# Defaults to 50ms for remote bucket / 5ms in-memory
SL8_FLUSH_INTERVAL=10ms
```

- HTTP serving is implemented using axum
- Each stream corresponds to a Tokio task called `streamer` that owns the current `tail` position, serializes appends, and broadcasts acknowledged records to followers
- Appends are pipelined to improve performance against high-latency object storage
- `lite::backend::kv::Key` documents the data modeling in SlateDB
> [!TIP]
> Pipelining is temporarily disabled by default; it will be enabled once it is completely safe.
> For now, you can set `S2LITE_PIPELINE=true` to get a sense of what performance will look like.
- CLI ✅ v0.26+
- TypeScript SDK ✅ v0.22+
- Go SDK ✅ v0.11+
- Rust SDK ✅ v0.22+
- Python 🚧 needs to be migrated to v1 API
- Java 🚧 needs to be migrated to v1 API
Complete specs are available:
> [!IMPORTANT]
> Unlike the cloud service, where the basin is implicit as a subdomain, `/streams/*` requests must specify the basin using the `S2-Basin` header. The SDKs take care of this automatically.
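For example, a raw HTTP request listing streams in the `liteness` basin created earlier might look like the following sketch (the exact path shape and the `Authorization` header format are assumptions; lite ignores the token value, and the SDKs set `S2-Basin` for you):

```shell
# S2-Basin selects the basin, since there is no per-basin subdomain locally.
# Falls back to a friendly message when no server is listening.
curl -s "${S2_BASIN_ENDPOINT:-http://localhost:8080}/streams" \
  -H "S2-Basin: liteness" \
  -H "Authorization: Bearer ignored" || echo "server not running"
```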
| Endpoint | Support |
|---|---|
| `/basins` | Supported |
| `/streams` | Supported |
| `/streams/{stream}/records` | Supported |
| `/access-tokens` | Not supported #28 |
| `/metrics` | Not supported |