Live custody monitoring for PeerDAS.
Status: Beta. A version of this codebase is feeding The Lab.
Check out our roadmap to v1.
Request for feedback: We're looking for input from L2s and the wider ecosystem on how to make Dasmon even more useful to them. Please file an issue to let us know what's missing!
PeerDAS transforms Ethereum's data availability layer from full replication, where every node stores every blob, to a sharded architecture where nodes store a variable subset of data columns. This is what makes dramatic blob scalability improvements possible.
But with this shift comes a new challenge: peers can claim custody commitments without actually storing the data. How do we verify the network is upholding its collective data availability guarantees?
Dasmon answers a simple question: are peers storing the data they say they're supposed to?
It achieves this by pairing with a Consensus Layer client (currently by embedding inside Prysm) and performing continuous verification: it tracks peers, samples their claimed custody ranges, and validates the returned data columns against canonical KZG commitments. Every test result is logged, analyzed, and can be streamed in real time.
Peer tracking. Automatically discovers and tracks peers joining the network, monitoring their custody metadata: which columns they claim to store, and for which slot ranges.
Intelligent sampling. Rather than testing randomly, dasmon uses sampling that's aware of past coverage to systematically prioritize untested (slot, column) combinations. This maximizes coverage while minimizing redundant testing.
Response validation. Every response is validated against known KZG commitments. Dasmon verifies not just that data was returned, but that the data matches what should be stored.
Real-time event streaming. All test results, custody updates, and peer status changes are streamed via a local event stream (which, in The Lab version, feeds into EthPandaOps' Xatu). An (experimental) WebSocket endpoint enables live monitoring dashboards and alerting.
Historical analytics. SQLite-backed test history with configurable retention (default: 18 days) enables long-term compliance analysis and trend detection, and also feeds the coverage-aware sampler.
Dasmon embeds into your beacon node and operates as a background monitoring engine:
- Peer discovery: subscribes to libp2p network events to track connected peers.
- Custody tracking: periodically fetches peer metadata to learn custody ranges and group counts.
- Smart job generation: uses coverage-aware sampling to select (slot, column) pairs that need testing.
- RPC verification: requests `DataColumnSidecarsByRange` from peers and validates responses against KZG commitments.
- History recording: persists results to SQLite for long-term coverage analysis.
- Event emission: broadcasts test results, custody updates, and system snapshots.
The engine runs a worker pool that dispatches verification jobs with round-robin peer selection. Jobs are rate-limited per peer and constrained by configurable sampling parameters.
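To make the dispatch model concrete, here is a minimal, self-contained sketch of a worker pool with round-robin peer selection and a per-peer rate limit. None of the names (`job`, `roundRobin`, `perPeerLimiter`) come from dasmon's API; a real worker would issue the RPC and KZG checks where the print statement sits.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// job is an illustrative verification job: a slot and column set to test on one peer.
type job struct {
	peer    string
	slot    uint64
	columns []uint64
}

// roundRobin hands peers out in rotation so no single peer absorbs every job.
type roundRobin struct {
	mu    sync.Mutex
	peers []string
	next  int
}

func (r *roundRobin) pick() string {
	r.mu.Lock()
	defer r.mu.Unlock()
	p := r.peers[r.next%len(r.peers)]
	r.next++
	return p
}

// perPeerLimiter enforces a minimum interval between tests to the same peer
// (the role played by test_interval in the config).
type perPeerLimiter struct {
	mu       sync.Mutex
	next     map[string]time.Time
	interval time.Duration
}

func (l *perPeerLimiter) wait(peer string) {
	l.mu.Lock()
	at := l.next[peer]
	if now := time.Now(); at.Before(now) {
		at = now
	}
	l.next[peer] = at.Add(l.interval)
	l.mu.Unlock()
	time.Sleep(time.Until(at))
}

func main() {
	peers := &roundRobin{peers: []string{"peerA", "peerB", "peerC"}}
	limit := &perPeerLimiter{next: map[string]time.Time{}, interval: 200 * time.Millisecond}
	jobs := make(chan job)

	var wg sync.WaitGroup
	for w := 0; w < 4; w++ { // "workers: 4"
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				limit.wait(j.peer)
				// A real worker would request DataColumnSidecarsByRange here and
				// validate the returned columns against KZG commitments.
				fmt.Printf("verify slot=%d columns=%v peer=%s\n", j.slot, j.columns, j.peer)
			}
		}()
	}

	// The sampler would choose these (slot, column) pairs; peers come from the rotation.
	for slot := uint64(100); slot < 112; slot++ {
		jobs <- job{peer: peers.pick(), slot: slot, columns: []uint64{3, 17, 42}}
	}
	close(jobs)
	wg.Wait()
}
```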
Dasmon needs blob KZG commitments to verify responses. It queries your beacon node first, then falls back to a configurable Beacon API endpoint. Backfilling happens at startup (preload mode) or continuously in the background.
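The lookup order is simple enough to sketch. `localCommitments` and `beaconAPICommitments` below are hypothetical stand-ins for "ask the embedding beacon node" and "query the configured `backfill.beacon_api` endpoint"; they are not dasmon functions.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// Commitment stands in for a 48-byte blob KZG commitment.
type Commitment [48]byte

// localCommitments represents asking the embedding beacon node (cheap, local).
func localCommitments(ctx context.Context, slot uint64) ([]Commitment, error) {
	return nil, errors.New("slot not held locally")
}

// beaconAPICommitments represents querying the configured Beacon API endpoint,
// e.g. fetching the block for the slot and reading its blob KZG commitments.
func beaconAPICommitments(ctx context.Context, slot uint64) ([]Commitment, error) {
	return []Commitment{{}}, nil
}

// commitmentsForSlot mirrors the lookup order described above:
// local beacon node first, then the Beacon API fallback.
func commitmentsForSlot(ctx context.Context, slot uint64) ([]Commitment, error) {
	if c, err := localCommitments(ctx, slot); err == nil && len(c) > 0 {
		return c, nil
	}
	return beaconAPICommitments(ctx, slot)
}

func main() {
	c, err := commitmentsForSlot(context.Background(), 123456)
	fmt.Println(len(c), err)
}
```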
The sampler optimizes for coverage (see the sketch after this list):
- Prioritizes untested (slot, column) combinations
- Considers tests within a configurable history window
- Retries failed tests after a configurable delay
- Applies slot bias (recent/historical) within valid custody ranges
- Respects block filters to focus on specific slot ranges
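A toy version of that selection logic, assuming history is just a map from (slot, column) to the last test result; the names here are illustrative, not dasmon's.

```go
package main

import (
	"fmt"
	"time"
)

type pair struct {
	slot   uint64
	column uint64
}

type result struct {
	testedAt time.Time
	passed   bool
}

// pickPairs prefers untested (slot, column) pairs, then failed pairs whose
// retry delay has elapsed, up to a per-request limit.
func pickPairs(history map[pair]result, candidates []pair, retryAfter time.Duration, limit int) []pair {
	var untested, retryable []pair
	now := time.Now()
	for _, p := range candidates {
		r, seen := history[p]
		switch {
		case !seen:
			untested = append(untested, p)
		case !r.passed && now.Sub(r.testedAt) > retryAfter:
			retryable = append(retryable, p)
		}
	}
	picks := append(untested, retryable...)
	if len(picks) > limit {
		picks = picks[:limit]
	}
	return picks
}

func main() {
	history := map[pair]result{
		{slot: 100, column: 3}: {testedAt: time.Now().Add(-48 * time.Hour), passed: false},
		{slot: 100, column: 4}: {testedAt: time.Now(), passed: true},
	}
	candidates := []pair{{100, 3}, {100, 4}, {100, 5}, {101, 3}}
	fmt.Println(pickPairs(history, candidates, 24*time.Hour, 128))
}
```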
Dasmon is designed to embed into beacon clients. Right now it integrates with a custom fork of Prysm operated by EthPandaOps (named 'Tysm'), but it is trivial to integrate into any Prysm node.
Implement the Externs interface to provide access to your beacon node's internals:
```go
type Externs interface {
    // Retrieve custody metadata for a peer
    PeerRefresh(ctx context.Context, id peer.ID) (*PeerCustody, error)

    // Query blocks in a slot range (returns blocks with KZG commitments)
    ChainQuery(ctx context.Context, query BlockQuery) ([]*Block, error)

    // Get current head block and root
    ChainHead(ctx context.Context) (block *Block, root [32]byte, err error)

    // Subscribe to chain events (finalization, reorgs)
    ChainSubscribe(ctx context.Context) (<-chan ChainEvent, error)
}
```

See externs.go for the complete interface definition.
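For orientation, here is what a skeleton adapter might look like. This is a hypothetical sketch, not the Tysm implementation, and it assumes the four methods shown above are the whole interface; consult externs.go for the authoritative definition.

```go
package integration

import (
	"context"

	"github.com/ethp2p/dasmon"
	"github.com/libp2p/go-libp2p/core/peer"
)

// nodeExterns is a hypothetical adapter satisfying dasmon.Externs by delegating
// to your beacon node's internals. Every method body here is a stub; a real
// adapter would hold handles to the node's p2p service, block store, and event feed.
type nodeExterns struct{}

func (e *nodeExterns) PeerRefresh(ctx context.Context, id peer.ID) (*dasmon.PeerCustody, error) {
	// Fetch the peer's latest metadata (custody groups, slot range) from the
	// p2p layer and translate it into a dasmon.PeerCustody.
	return &dasmon.PeerCustody{}, nil
}

func (e *nodeExterns) ChainQuery(ctx context.Context, q dasmon.BlockQuery) ([]*dasmon.Block, error) {
	// Load the blocks (with their blob KZG commitments) for the requested slot range.
	return nil, nil
}

func (e *nodeExterns) ChainHead(ctx context.Context) (*dasmon.Block, [32]byte, error) {
	// Return the current head block and its root.
	return &dasmon.Block{}, [32]byte{}, nil
}

func (e *nodeExterns) ChainSubscribe(ctx context.Context) (<-chan dasmon.ChainEvent, error) {
	// Bridge the node's finalization/reorg notifications onto a channel.
	ch := make(chan dasmon.ChainEvent)
	return ch, nil
}

// Compile-time check, under the assumption that the interface has exactly the
// four methods shown above.
var _ dasmon.Externs = (*nodeExterns)(nil)
```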
A minimal configuration looks like this:

```yaml
database_path: ./dasmon.db
workers: 4
http_addr: :9090
backfill:
  beacon_api: http://localhost:5052
```

That's enough to get started. Dasmon uses sensible defaults for everything else.
```yaml
# Timing
test_interval: 10s            # Min time between tests to the same peer
peer_refresh_interval: 10s    # How often to refresh custody metadata
reconnection_grace: 5m        # Grace period for disconnected peers

# Sampling
sampling:
  max_columns: 128            # Max columns per RPC request
  max_slots: 128              # Max slots per RPC request
  slot_bias: heavily_recent   # uniform | recent | historical | heavily_*

# Coverage optimization
sampling_strategy:
  history_window: 18d         # 18 days
  prefer_untested: true
  retry_failures: true
  retry_after: 24h

# Filters
block_filter:
  start_slot: -8192           # Relative to head (negative = slots back)
  end_slot: 0                 # 0 = current head
node_filter:
  include: []                 # Empty = all peers
  exclude: []

# Infrastructure
database_path: ./dasmon.db
workers: 6
http_addr: :9090
event_log_file: ./dasmon.events.log
backfill:
  mode: background            # or "preload"
  beacon_api: http://localhost:5052
metrics:
  enabled: true
  port: 9091
```

Embedding dasmon in a Go program looks like this:

```go
import "github.com/ethp2p/dasmon"

env := &dasmon.Environment{
    Host:    libp2pHost,
    Externs: myExternsImpl,
}

// No error handling for brevity.
config, _ := dasmon.ParseYAML(configData)
engine, _ := dasmon.NewEngine(env, config)

engine.Start()
defer engine.Stop()

// Subscribe to events
sub, _ := engine.Subscribe(1000)
for evt := range sub.Out() {
    event := evt.(dasmon.Event)
    // Handle Sampling, CustodyUpdate, NodeTracking, SystemSnapshot
}
```

When `http_addr` is configured:
| Endpoint | Description |
|---|---|
| `GET /status` | Engine status (peers tracked, jobs executed) |
| `GET /peers` | Monitored peers with custody info |
| `GET /snapshot` | Complete state snapshot with sequence number |
| `GET /ws` | WebSocket for real-time event streaming |
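Any HTTP client works against these endpoints. Here is a minimal Go example that polls `/status`; since the response schema isn't documented here, it decodes into a generic map rather than assuming field names.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Poll dasmon's status endpoint (http_addr defaults to :9090 in the examples above).
	resp, err := http.Get("http://localhost:9090/status")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode into a generic map; the exact fields are defined by dasmon.
	var status map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		panic(err)
	}
	fmt.Printf("dasmon status: %v\n", status)
}
```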
All events include sequence numbers and timestamps for ordering and deduplication:
- Sampling: test results (success/failure, duration, error details)
- CustodyUpdate: peer custody range or group count changes
- NodeTracking: peer addition/removal from monitoring
- SystemSnapshot: periodic status snapshots
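The sequence numbers make deduplication straightforward, for example when replaying the event log or reconnecting to the WebSocket. A sketch using a hypothetical consumer-side event struct (`Seq` and `Kind` are illustrative, not dasmon's field names):

```go
package main

import "fmt"

// event is a hypothetical stand-in for a dasmon event as seen by a consumer:
// a sequence number plus a kind string.
type event struct {
	Seq  uint64
	Kind string // "Sampling", "CustodyUpdate", "NodeTracking", "SystemSnapshot"
}

// dedup drops any event whose sequence number has already been seen.
func dedup(in []event) []event {
	seen := make(map[uint64]bool)
	var out []event
	for _, e := range in {
		if seen[e.Seq] {
			continue
		}
		seen[e.Seq] = true
		out = append(out, e)
	}
	return out
}

func main() {
	events := []event{
		{Seq: 1, Kind: "NodeTracking"},
		{Seq: 2, Kind: "Sampling"},
		{Seq: 2, Kind: "Sampling"}, // duplicate received after a reconnect
		{Seq: 3, Kind: "SystemSnapshot"},
	}
	for _, e := range dedup(events) {
		fmt.Println(e.Seq, e.Kind)
	}
}
```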
GPLv3.
