A safe, policy-driven framework for processing extremely long inputs using Retrieval-augmented Long-context Memory (RLM) patterns.
RLM Controller enables LLM agents to process inputs that exceed typical context windows (50k+ characters) by:
- Storing input as external context files
- Intelligently slicing and searching content
- Spawning parallel subcalls for analysis
- Aggregating structured results with traceability
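The store → slice → aggregate loop above can be sketched in a few lines of Python. The function names here are illustrative, not the skill's actual API; this is only a minimal model of the pattern under default-policy assumptions:

```python
import hashlib

def store_context(text: str, ctx: dict) -> str:
    """Store input as an external context entry, keyed by a content hash."""
    ctx_id = hashlib.sha256(text.encode()).hexdigest()[:12]
    ctx[ctx_id] = text
    return ctx_id

def slice_context(text: str, max_chars: int = 16_000) -> list[str]:
    """Fallback chunking: fixed-size slices under the 16k-char policy cap."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def aggregate(results: list[dict]) -> dict:
    """Merge per-slice results, keeping counts for traceability."""
    return {"slices": len(results),
            "findings": [r for r in results if r.get("match")]}
```

Each slice would then be handed to a subcall, and `aggregate` merges whatever structured output the subcalls return.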
- 🧠 Smart Slicing: Keyword-based planning with fallback chunking
- 🔒 Security-First: Prompt injection mitigation, no code execution, strict limits
- ⚡ Parallel Execution: Async batch processing for speed
- 📋 Full Traceability: JSONL logging for every operation
- 🎯 OpenClaw Native: Designed for the OpenClaw agent framework
```bash
python3 scripts/rlm_ctx.py store --infile input.txt --ctx-dir ./ctx
```

```bash
python3 scripts/rlm_auto.py \
  --ctx ./ctx/<ctx_id>.txt \
  --goal "analyze authentication logic" \
  --outdir ./run1
```

```bash
python3 scripts/rlm_async_plan.py \
  --plan ./run1/plan.json \
  --batch-size 4 > ./run1/async_plan.json
```

```bash
python3 scripts/rlm_async_spawn.py \
  --async-plan ./run1/async_plan.json \
  --out ./run1/spawn.jsonl
```

Use `sessions_spawn` to execute subcalls in parallel batches. See docs/flows.md for complete workflows.
```
rlm-controller/
├── scripts/                        # Core utilities (~766 LOC)
│   ├── rlm_ctx.py                  # Context store/peek/search/chunk
│   ├── rlm_plan.py                 # Keyword-based slice planner
│   ├── rlm_auto.py                 # Auto artifact generator
│   ├── rlm_async_plan.py           # Batch scheduler
│   ├── rlm_async_spawn.py          # Spawn manifest builder
│   ├── rlm_emit_toolcalls.py       # Toolcall formatter
│   ├── rlm_batch_runner.py         # Assistant-driven executor
│   ├── rlm_runner.py               # JSONL orchestrator
│   ├── rlm_trace_summary.py        # Log summarizer
│   ├── rlm_path.py                 # Shared path-validation helpers
│   ├── rlm_redact.py               # Secret pattern redaction
│   └── cleanup.sh                  # Artifact cleanup
├── docs/                           # Documentation
│   ├── flows.md                    # Manual & async workflows
│   ├── policy.md                   # Limits & decision rules
│   ├── security.md                 # Security foundations
│   ├── security_checklist.md       # Pre/during/post run checks
│   ├── security_audit_response.md  # OpenClaw audit response
│   └── cleanup_ignore.txt          # Cleanup exclusions
└── SKILL.md                        # OpenClaw skill manifest
```
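Of the scripts above, rlm_redact.py masks secret-looking strings before content reaches subcalls. A minimal version of that idea looks like the following; the patterns here are illustrative examples, not the script's actual rules:

```python
import re

# Illustrative secret patterns only; the real script's rules may differ.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def redact(text: str, mask: str = "[REDACTED]") -> str:
    """Replace every match of a secret pattern with a fixed mask."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(mask, text)
    return text
```

Redacting before slicing means leaked credentials never appear in any slice, subcall prompt, or trace log.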
RLM Controller is designed with security-first principles:
- ✅ No code execution: only safelisted helper scripts run
- ✅ Prompt injection mitigation: input is treated as data, not commands
- ✅ Strict limits: max recursion 1, max subcalls 32, max slice 16k chars
- ✅ Bounded work: hard caps on batches and total slices
- ✅ Least privilege: subcalls are read-only by design
See docs/security.md for detailed safeguards.
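Enforcing the hard caps above amounts to a single guard before any plan is executed. A hypothetical helper (not part of the shipped scripts) might look like:

```python
def check_limits(n_subcalls: int, slice_len: int,
                 max_subcalls: int = 32, max_slice: int = 16_000) -> None:
    """Reject plans that exceed the hard caps described in docs/policy.md."""
    if n_subcalls > max_subcalls:
        raise ValueError(f"plan needs {n_subcalls} subcalls; cap is {max_subcalls}")
    if slice_len > max_slice:
        raise ValueError(f"slice of {slice_len} chars exceeds cap of {max_slice}")
```

Failing fast here keeps a runaway planner from ever reaching the spawn stage.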
- π Large Documentation: Process entire codebases or API docs
- π Dense Logs: Analyze thousands of log lines for patterns
- π Repository Analysis: Multi-file security audits
- π Dataset Processing: Extract structured data from large files
- Python 3.7+
- OpenClaw framework (for `sessions_spawn` integration)
- Unix-like environment (bash scripts)
Default policies can be customized in docs/policy.md:
- Max subcalls: 32
- Max slice size: 16k chars
- Batch size: 4
- Max recursion depth: 1
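The batch-size policy reduces to a simple grouping step when scheduling subcalls; a sketch of that step, assuming the default size of 4:

```python
def batch(items: list, size: int = 4) -> list[list]:
    """Group planned subcalls into fixed-size batches for parallel execution."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```

With max subcalls at 32 and batch size 4, a full run is at most 8 parallel batches.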
This skill integrates with the OpenClaw agent framework:
- Uses `sessions_spawn` for parallel subcalls
- Respects sub-agent constraints (no nested spawning)
- Compatible with OpenClaw's tool safety model
- Getting Started Guide - Manual & async workflows
- Security Model - Threat model & mitigations
- Policy Reference - Limits & decision rules
- Security Checklist - Operational guidance
Licensed under the Apache License, Version 2.0. See LICENCE.md for details.
Contributions welcome! Please:
- Review docs/security.md for security requirements
- Ensure all scripts pass basic smoke tests
- Update documentation for any new features
- Follow existing code style (Python PEP 8)
Production Ready - Fully functional for OpenClaw deployments.
Future enhancements:
- HTML trace viewer for log visualization
- Direct LLM API integration (currently requires OpenClaw)
- Additional output formats
Developed as part of the OpenClaw ecosystem for safe, scalable agent operations.