xchannel is a tiny, zero-copy, mmap-backed IPC channel with automatic region and file rolling. It lets a single writer append messages to a persistent stream that multiple readers can replay from the beginning (LateJoin) or tail in real time (Live)—without a broker or background service.
- Shared‑memory / IPC logs without a broker.
- Constant-time tailing (readers only track a byte offset).
- Works on Linux and macOS, on both typical 4 KiB page systems and 16 KiB page systems (e.g. Apple Silicon).
- Zero‑copy access: messages are written directly into a memory‑mapped region and read back without additional copying.
- Rolling regions and files: large channels are segmented into fixed‑size regions. When a region fills up the writer rolls over to the next region; when the end of a file is reached a new file with an incremented sequence number is created automatically.
- Two reader modes:
  - LateJoin – start from the beginning of the earliest existing channel file.
  - Live – join the channel at the current write position and observe only new messages.
- MTU enforcement: optional maximum message size to defend against unbounded memory usage or corrupted input.
- Atomic state management: the shared write position and message count are tracked using atomic variables with proper memory ordering.
- Very simple, low maintenance: the system relies on a minimal set of concepts. There are no background services, no complex synchronization mechanisms, and no external dependencies.
- No back pressure: readers cannot slow down the writer, retention is controlled by rolling policy rather than consumer speed.
- Clear non-aliasing contract (single writer): readers never observe bytes while they’re being written. This is a language-agnostic safety property (C/C++/Rust/…), and fits Rust’s &mut/& guarantees naturally.
```rust
use xchannel::{WriterBuilder, ReaderBuilder};

let region = xchannel::page_size(); // ensure page-aligned regions
let mut w = WriterBuilder::new("demo.xch")
    .region_size(region)
    .file_roll_size(10_000_000)
    .build()?;

// write a message
let payload = b"hello world";
if let Some(buf) = w.try_reserve(payload.len()) {
    buf.copy_from_slice(payload);
    w.commit(1, payload.len() as u32, 0 /* timestamp_ns */)?;
}

// read it back
let mut r = ReaderBuilder::new("demo.xch")
    .late_join()
    .batch_limit(1000)
    .build()?;

if let Some(msg) = r.try_read() {
    let hdr = msg.header();
    println!("type={}, len={}", hdr.message_type, hdr.length);
    println!("payload={:?}", msg.payload());
}
```

```rust
use xchannel::ReaderBuilder;

let mut r = ReaderBuilder::new("demo.xch").late_join().build()?;
if let Some(batch) = r.try_read_batch(None) {
    for idx in 0..batch.len() {
        let msg = batch.get(idx).unwrap();
        let hdr = msg.header();
        let payload = msg.payload();
        // payload is opaque bytes; parse as needed.
        println!(
            "type={}, len={}, first={:?}",
            hdr.message_type,
            hdr.length,
            payload.get(0)
        );
    }
}
```

Zero-copy • Append-only • Multi-reader
Typical Rust channels:
- std::sync::mpsc
- tokio::mpsc
- crossbeam
These are great, but they:
- work only inside one process
- keep messages only in memory
- cannot replay history
- cannot easily be tailed from another process
Some systems need:
- cross-process communication
- persistent message streams
- very low overhead
- simple debugging
Typical examples:
- trading systems
- logging pipelines
- market data distribution
- real-time analytics
Instead of a queue:
Use an append-only log stored in a memory-mapped file
Writer ---> mmap file ---> Readers
Properties:
- writer appends messages
- readers scan sequentially
- messages remain persistent
Readers can start:
- from beginning
- from current tail
```
Writer
  │
  │ append messages
  ▼
┌─────────────────────────────────────────────┐
│ mmap file                                   │
│                                             │
│ msg1 msg2 msg3 msg4 msg5 msg6 msg7 msg8 ... │
│                                             │
└─────────────────────────────────────────────┘
      ▲                          ▲
      │                          │
   Reader A                   Reader B
  (LateJoin)                   (Live)
```
Key property:
Readers never block the writer.
Channel files are divided into regions.
```
File
┌─────────────────────────────┐
│ Region 0                    │
│ ChannelHeader + messages    │
├─────────────────────────────┤
│ Region 1                    │
│ messages                    │
├─────────────────────────────┤
│ Region 2                    │
│ messages                    │
└─────────────────────────────┘
```
Regions provide:
- predictable memory layout
- simple boundary handling
- easier file rolling
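With fixed-size regions, locating a record's region from its byte offset is plain integer arithmetic. A minimal sketch (`REGION_SIZE` here is an illustrative value, not xchannel's default):

```rust
// Sketch: map a channel byte offset to a fixed-size region.
// REGION_SIZE is illustrative, not xchannel's actual default.
const REGION_SIZE: u64 = 1 << 20; // 1 MiB regions

/// Index of the region containing `offset`.
fn region_index(offset: u64) -> u64 {
    offset / REGION_SIZE
}

/// Byte position of `offset` inside its region.
fn region_offset(offset: u64) -> u64 {
    offset % REGION_SIZE
}

fn main() {
    let off = 3 * REGION_SIZE + 123;
    println!("{} {}", region_index(off), region_offset(off)); // prints "3 123"
}
```

Because region size is fixed (and page-aligned), both reader and writer can do this translation without consulting any shared metadata.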
Each record looks like:
[ MessageHeader ][ payload ][ padding ]
Header fields include:
- committed flag
- message type
- payload length
- timestamp
Readers check:
header.committed
```
┌──────────────────────────────┐
│ MessageHeader                │
│                              │
│   committed                  │
│   message_type               │
│   payload_length             │
│   timestamp_ns               │
└──────────────────────────────┘
               │
               ▼
┌──────────────────────────────┐
│ Payload bytes                │
│   user message               │
└──────────────────────────────┘
               │
               ▼
┌──────────────────────────────┐
│ Padding (optional)           │
└──────────────────────────────┘
```
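The record header above can be sketched as a `#[repr(C)]` struct. Field widths and ordering here are assumptions for illustration, not xchannel's actual wire format:

```rust
use std::sync::atomic::AtomicBool;

// Sketch of the record header described above.
// Field widths are illustrative; the real layout may differ.
#[allow(dead_code)]
#[repr(C)]
struct MessageHeader {
    committed: AtomicBool, // publication flag, written last
    message_type: u16,     // user-defined type tag
    payload_length: u32,   // bytes of payload that follow
    timestamp_ns: u64,     // nanosecond timestamp
}

fn main() {
    // The header is small and fixed-size, so readers can step
    // from record to record with plain pointer arithmetic.
    println!("{}", std::mem::size_of::<MessageHeader>()); // prints "16"
}
```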
reserve → write payload → publish
Steps:
- reserve memory
- write payload
- prepare next header
- commit current message
The commit flag is written last.
Writer publishes a message in this order:
1. write payload(i)
2. prepare header(i+1) // write-ahead header
3. commit header(i) = true (Release)
Meaning:
- the next header slot exists before publication
- the message becomes visible only after commit
When a reader sees:
header(i).committed == true
then:
- payload(i) is fully written
- header(i+1) already exists
So the reader can continue scanning safely:
header(i) → payload(i) → header(i+1)
No global metadata required.
```
header(i) ready
      │
      ▼
write payload(i)
      │
      ▼
prepare header(i+1)
      │
      ▼
commit header(i)
```
Key property:
The commit flag is the only synchronization point.
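The scan step itself is simple arithmetic: each committed header tells the reader where the next one starts. A sketch, assuming a 16-byte header and 8-byte record alignment (both illustrative values):

```rust
// Sketch: advance a scan offset past one record.
// HEADER_SIZE and ALIGN are illustrative values.
const HEADER_SIZE: usize = 16;
const ALIGN: usize = 8; // records assumed 8-byte aligned

/// Round `n` up to the next multiple of ALIGN.
fn align_up(n: usize) -> usize {
    (n + ALIGN - 1) & !(ALIGN - 1)
}

/// Offset of header(i+1), given header(i)'s offset and payload length.
fn next_header(offset: usize, payload_len: usize) -> usize {
    offset + HEADER_SIZE + align_up(payload_len)
}

fn main() {
    // An 11-byte payload pads to 16, so the next header sits 32 bytes later.
    println!("{}", next_header(0, 11)); // prints "32"
}
```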
Naive design:
writer updates global head pointer
readers poll the same pointer
Result:
CPU1 (writer) <----> CPU2 (reader)
cache invalidations
This causes unnecessary cache coherence traffic.
Each message has its own commit flag.
Readers check different cache lines as they scan.
msg1.header.committed
msg2.header.committed
msg3.header.committed
Benefits:
- minimal contention
- scalable readers
- avoids cache bouncing
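A complementary technique for keeping adjacent flags out of the same cache line is explicit alignment. The sketch below assumes 64-byte cache lines and is an illustration of the idea, not xchannel's actual layout:

```rust
use std::sync::atomic::AtomicBool;

// Sketch: cache-line alignment prevents two adjacent commit flags
// from sharing a cache line (false sharing). 64-byte lines assumed.
#[allow(dead_code)]
#[repr(C, align(64))]
struct AlignedHeader {
    committed: AtomicBool,
    payload_length: u32,
}

fn main() {
    // Each header occupies its own 64-byte cache line.
    println!("{}", std::mem::align_of::<AlignedHeader>()); // prints "64"
}
```

The trade-off is space: alignment pads every header to a full cache line, which is why per-message flags spread across a sequentially scanned log can achieve low contention without it.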
```rust
use xchannel::WriterBuilder;

fn main() -> std::io::Result<()> {
    let mut writer = WriterBuilder::new("demo.xch").build()?;

    let payload = b"hello xchannel";
    if let Some(buf) = writer.try_reserve(payload.len()) {
        buf.copy_from_slice(payload);
        writer.commit(1, payload.len() as u32, 0)?;
    }
    Ok(())
}
```

```rust
use xchannel::ReaderBuilder;

fn main() -> std::io::Result<()> {
    let mut reader = ReaderBuilder::new("demo.xch")
        .late_join()
        .build()?;

    while let Some(msg) = reader.try_read() {
        let header = msg.header();
        let payload = msg.payload();
        println!(
            "type={} len={} payload={:?}",
            header.message_type,
            header.length,
            payload
        );
    }
    Ok(())
}
```

Rust enforces strict aliasing rules:
&mut T → exclusive access
&T → shared access
Simultaneous read/write would violate:
&mut [u8] vs &[u8]
This is especially important with mmap memory.
Writer and readers operate on the same mapped memory.
Naively this could allow:
writer writing payload
reader reading same payload
This would break Rust’s non-aliasing contract.
The algorithm guarantees:
Writer and readers never access the same memory region simultaneously
Except one field:
AtomicBool committed
Writer accesses:
payload(i)
header(i)
Readers access only:
payload(j)
header(j)
Where
j < committed_index
Meaning the message is already published.
Writer:

```
write payload
prepare next header
commit = true (Release)
```

Reader:

```
if committed.load(Acquire) {
    read payload
}
```
Guarantees:
- no partial reads
- correct memory ordering
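This publication pattern can be demonstrated with plain std atomics. The sketch below is standalone code, not xchannel's implementation: an AtomicU64 stands in for the payload and an AtomicBool for the commit flag:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Stand-ins for payload(i) and header(i).committed.
fn publish_and_read() -> u64 {
    let payload = Arc::new(AtomicU64::new(0));
    let committed = Arc::new(AtomicBool::new(false));

    let (p, c) = (payload.clone(), committed.clone());
    let writer = thread::spawn(move || {
        p.store(42, Ordering::Relaxed);   // 1. write payload
        c.store(true, Ordering::Release); // 2. publish: commit flag last
    });

    // Reader side: spin until the commit flag becomes visible.
    while !committed.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    writer.join().unwrap();
    // The Acquire load synchronized with the Release store,
    // so the payload write is guaranteed to be visible here.
    payload.load(Ordering::Relaxed)
}

fn main() {
    println!("{}", publish_and_read()); // prints 42
}
```

Once the reader observes `committed == true` via Acquire, every write the writer performed before the Release store is visible, which is exactly why a committed record can be read without further synchronization.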
After commit:
writer never touches payload again
Then readers access:
&[u8]
Timeline:
writer (&mut) → finished
reader (&) → begins
No overlapping access.
Rust’s non-aliasing guarantee is preserved.
Both sides access only:
AtomicBool committed
Safe because:
- atomic operations
- Acquire / Release ordering
- tiny memory footprint
LateJoin – start from the beginning.
Useful for:
- replay
- debugging
- analytics
Live – start from the current tail.
Useful for:
- real-time consumers
- monitoring
- streaming pipelines
Channels can run indefinitely.
Files roll when necessary:
```
demo.xch
demo.xch.1
demo.xch.2
```
Process:
- writer writes Roll marker
- creates next file
- readers follow automatically
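The rolled file names can be derived with a small helper. This sketch infers the naming scheme from the file names above; it is not part of xchannel's API:

```rust
// Sketch: produce rolled file names matching the scheme above.
fn rolled_name(base: &str, seq: u32) -> String {
    if seq == 0 {
        base.to_string()            // first file keeps the base name
    } else {
        format!("{}.{}", base, seq) // later files append a sequence number
    }
}

fn main() {
    for seq in 0..3 {
        println!("{}", rolled_name("demo.xch", seq));
    }
    // prints:
    // demo.xch
    // demo.xch.1
    // demo.xch.2
}
```

Because the sequence is monotonically increasing, a reader that hits a Roll marker only needs the current sequence number to open the next file.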
Benefits:
- zero-copy payload access
- OS page cache handles IO
- sequential reads are extremely fast
- minimal syscalls
Readers simply scan memory:
header → header → header
The design aligns with Rust principles:
Ownership transfer:
writer owns payload → commit → reader observes
Concurrency:
atomic publication
Memory layout:
simple + predictable
xchannel relies on:
- append-only log
- commit-flag publication
- write-ahead headers
- sequential scanning
- strict memory ownership transfer
Result:
safe + zero-copy + low latency + scalable
Good fit:
- market data distribution
- logging pipelines
- inter-process messaging
- persistent event streams
- historical replay (simulation)
xchannel provides:
- mmap-based IPC channels
- zero-copy message access
- append-only persistent log
- minimal contention design
- Rust-safe memory access model