A safety-first, inspectable memory and context-construction architecture for AI systems
Think of it as a synthetic hippocampus with a kill switch: designed to make context construction visible, bounded, and auditable before inference happens.
TL;DR: SCE replaces flat retrieval and opaque prompt assembly with an explicit, graph-based context engine. Context is constructed, not fetched. Memory emerges through controlled activation, not hidden weights. This is a working system with full LLM integration, not a conceptual demo.
Original Blueprint / Concept Paper • Quick Start • Use Cases • Discussions • Contribute
The Synapse Context Engine (SCE) is a brain-inspired memory and context layer for AI systems, designed to function as a System-2-like substrate for context assembly.
Instead of treating context as a static retrieval problem (as in traditional RAG pipelines), SCE models memory as an explicit, typed hypergraph. Context is assembled dynamically through spreading activation, allowing systems to recall, relate, and reason over information via network dynamics rather than keyword or embedding similarity alone.
The result is memory that is:
- Coherent instead of fragmented
- Inspectable instead of opaque
- Bounded instead of unbounded
| Feature | Status |
|---|---|
| Spreading Activation + Hebbian Learning | ✅ Implemented |
| Hypergraph Memory (multi-way edges) | ✅ Implemented |
| Security Firewall (rule-based) | ✅ Implemented |
| LLM Integration (Gemini, Groq, Ollama) | ✅ Implemented |
| Real-time Visualization | ✅ Implemented |
| Custom User/AI Identities | ✅ Implemented |
| Algorithmic Extraction | ✅ Implemented |
| Hyperedge Consolidation (Clique Compression) | ✅ Implemented |
| Algorithmic Mesh Wiring | ✅ Implemented |
| Data Hygiene (Strict Garbage Collection) | ✅ Implemented |
| Accurate Telemetry (Performance metrics) | ✅ Implemented |
| Node Connections (Natural Expansion) | 🔜 Planned |
| Hierarchical Auto-Clustering | 🔜 Planned |
| Prompt Optimization | 🔜 Planned |
| Production Ready | 🔜 Planned |
| Optimization | 🔜 Community-driven (once core is solidified) |
| Benchmarks | 🔜 Community-driven (once core is solidified) |
License: Apache 2.0 • Maintainer: Sasu • Updates: docs/updates/
Constrain and observe the space in which context is constructed, rather than hoping the model behaves safely inside an opaque prompt.
SCE shifts safety and alignment concerns upstream, from model behavior to memory and context construction.
As AI systems move toward greater autonomy and persistence, their memory architectures remain fragile:
- Vector databases retrieve isolated chunks and lose relational structure
- Prompt assembly hides context construction inside token sequences
- Hallucinations emerge from fragmented, ungrounded memory representations
- Prompt injection and context poisoning are structurally easy
- Alignment is layered on top of black boxes
SCE explores a different axis of control: architectural safety through explicit structure and observability.
This project originated from building a digital twin platform that needed better memory architecture. While capability improvements were the initial driver, the safety properties that emerged from the architecture became the primary reason for open-sourcing. The core insight: context construction should be inspectable, bounded, and auditable by design, not retrofitted with behavioral constraints after the model is already deployed.
SCE processes queries through a staged pipeline where each step is independently observable:
```
Stimulus (Query / Event)
        ↓
Active Focus (Anchor Node)
        ↓
Controlled Graph Propagation
        ↓
Context Synthesis (Pruned + Weighted)
        ↓
LLM Inference ──→ Response
        ↓
Extraction (Phase 1: Concepts, Phase 2: Relations)
        ↓
Integrity & Layout (Mesh Wiring + Hygiene)
        ↓
Memory Encoding (Graph Update)
        ↓
Telemetry & Audit Signals
```
Modular Design: Each stage in the pipeline is independently configurable. Security layers, pruning strategies, and activation mechanics can be added, modified, or replaced without changing the core architecture. This allows for experimentation with different safety mechanisms, custom context filters, and domain-specific optimizations.
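For illustration, a pluggable stage contract might look like the following sketch. The interface and function names here are hypothetical, not the actual SCE API:

```typescript
// Hypothetical sketch of a pluggable pipeline stage; names are illustrative,
// not the shipped SCE interfaces.
interface StageContext {
  query: string;
  activatedNodeIds: string[];
  auditLog: string[]; // every stage appends observable events
}

interface PipelineStage {
  name: string;
  run(ctx: StageContext): StageContext | "ABORT"; // a stage may halt execution
}

function runPipeline(stages: PipelineStage[], initial: StageContext): StageContext | null {
  let ctx = initial;
  for (const stage of stages) {
    const result = stage.run(ctx);
    if (result === "ABORT") {
      ctx.auditLog.push(`${stage.name}: execution terminated`); // failure is visible, not implicit
      return null;
    }
    ctx = result;
    ctx.auditLog.push(`${stage.name}: ok`);
  }
  return ctx;
}
```

Because each stage returns an explicit result, swapping a pruning strategy or security layer means replacing one element of the `stages` array rather than editing the core loop.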
Memory is represented as a hypergraph:
- Nodes represent heterogeneous entities (projects, artifacts, preferences, behaviors, constraints)
- Synapses encode weighted pairwise relationships (source → target)
- Hyperedges connect multiple nodes simultaneously for atomic multi-way relationships
When any node in a hyperedge activates, energy distributes to all connected members (clique activation). This preserves higher-order context that is lost when relationships are decomposed into isolated pairs or flat embeddings.
Example: instead of three separate edges:

```
Alice   -[ATTENDED]->  Meeting
Meeting -[DISCUSSED]-> Budget
Budget  -[AFFECTS]->   Project_X
```

SCE can group these as a single hyperedge:

```
{Alice, Meeting, Budget, Project_X}  labeled  DECISION_CONTEXT
```
When you query about Alice, all four nodes activate simultaneously through the hyperedge, not by traversing three separate edges.
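As a rough sketch of the data model (field names are assumptions, not the real SCE schema), a hyperedge and its clique activation could look like this; later sketches in this README reuse these types:

```typescript
// Illustrative hypergraph types; field names are assumptions, not the real SCE schema.
interface MemoryNode {
  id: string;
  label: string;
  energy: number;   // current activation level
  origin?: string;  // source attribution metadata
}

interface Synapse {
  source: string;
  target: string;
  weight: number;   // pairwise relationship strength
}

interface Hyperedge {
  label: string;    // e.g., "DECISION_CONTEXT"
  members: string[];
  weight: number;
}

// Clique activation: energy injected via one member reaches all members at once.
// Splitting energy evenly across members is an assumption made for illustration.
function activateHyperedge(edge: Hyperedge, nodes: Map<string, MemoryNode>, energy: number): void {
  const share = (energy * edge.weight) / edge.members.length;
  for (const id of edge.members) {
    const node = nodes.get(id);
    if (node) node.energy += share;
  }
}
```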
All activation is evaluated relative to an explicit Active Focus node representing the current task or operational context.
This anchoring prevents free-floating activation and helps contain:
- Prompt injection
- Context drift
- Runaway propagation
When a stimulus occurs, activation energy is injected into seed nodes and propagates outward with:
- Decay factors (configurable, e.g., 0.8)
- Activation thresholds (e.g., 0.3)
- Depth limits (bounded traversal)
Only meaningfully activated nodes participate in context synthesis. Global flooding is structurally prevented.
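A minimal sketch of bounded propagation using the parameters above (decay 0.8, threshold 0.3, depth limit), reusing the `Synapse` type from the earlier sketch. This is illustrative, not the production implementation:

```typescript
type AdjacencyIndex = Map<string, Synapse[]>; // nodeId -> outgoing synapses

// Bounded spreading activation: decay, threshold, and depth limit jointly
// prevent global flooding of the graph.
function spreadActivation(
  seeds: Map<string, number>, // seed nodes (e.g., the Active Focus anchor) with injected energy
  adjacency: AdjacencyIndex,
  decay = 0.8,
  threshold = 0.3,
  maxDepth = 3,
): Map<string, number> {
  const energies = new Map(seeds);
  let frontier = [...seeds.keys()];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const id of frontier) {
      const energy = energies.get(id) ?? 0;
      for (const syn of adjacency.get(id) ?? []) {
        const passed = energy * syn.weight * decay;
        if (passed < threshold) continue; // sub-threshold energy never propagates
        energies.set(syn.target, (energies.get(syn.target) ?? 0) + passed);
        next.push(syn.target);
      }
    }
    frontier = next;
  }
  // Only meaningfully activated nodes participate in context synthesis
  return new Map([...energies].filter(([, e]) => e >= threshold));
}
```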
Activated nodes are distilled into a structured synthesis layer:
- Ordered by relevance
- Pruned for redundancy
- Fully auditable
The LLM never sees the raw graph, only the synthesized context.
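One way such a synthesis step could be sketched, ranking by activation energy and pruning duplicate labels (both choices are illustrative simplifications):

```typescript
interface SynthesizedContext {
  entries: { id: string; label: string; relevance: number }[];
  audit: { totalActivated: number; pruned: number }; // auditable by construction
}

// Distill activated nodes into an ordered, pruned context for the LLM.
function synthesize(
  energies: Map<string, number>,
  nodes: Map<string, MemoryNode>,
  maxEntries = 20,
): SynthesizedContext {
  const ranked = [...energies.entries()]
    .flatMap(([id, relevance]) => {
      const node = nodes.get(id);
      return node ? [{ node, relevance }] : [];
    })
    .sort((a, b) => b.relevance - a.relevance); // ordered by relevance

  const seen = new Set<string>();
  const entries: SynthesizedContext["entries"] = [];
  for (const { node, relevance } of ranked) {
    if (seen.has(node.label)) continue; // naive redundancy pruning
    seen.add(node.label);
    entries.push({ id: node.id, label: node.label, relevance });
    if (entries.length >= maxEntries) break;
  }
  return {
    entries,
    audit: { totalActivated: ranked.length, pruned: ranked.length - entries.length },
  };
}
```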
SCE exposes internal dynamics through rigorous, information-theoretic metrics, not opaque "vibes":
- Focus (Normalized Entropy): Measures attention drift. `0.02` means diffuse noise; `0.95` means sharp logical coherence.
- Stability (Inverse Variance): Detects when the system is confident vs. chaotic.
- Plasticity (Burst vs. Mean): Distinguishes between background learning and sudden "paradigm shift" rewiring.
These signals enable runtime safety gating (e.g., "Stop generation if Focus < 0.1") and precise post-hoc auditing. The math is pure, visible, and unchangeable by the model.
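One plausible reading of the Focus metric, assuming it is one minus the normalized Shannon entropy of the activation distribution (the exact formula in SCE may differ, and `abortGeneration` is a placeholder):

```typescript
// Focus as 1 - normalized Shannon entropy of the activation distribution.
// A single dominant node => entropy near 0 => Focus near 1 (sharp coherence);
// near-uniform activation => entropy near 1 => Focus near 0 (diffuse noise).
function focusMetric(energies: number[]): number {
  const total = energies.reduce((sum, e) => sum + e, 0);
  if (total === 0 || energies.length < 2) return 1; // trivially sharp
  const probs = energies.map((e) => e / total);
  const entropy = -probs.reduce((sum, p) => sum + (p > 0 ? p * Math.log(p) : 0), 0);
  return 1 - entropy / Math.log(energies.length); // normalized to [0, 1]
}

// Runtime safety gate, as described above:
// if (focusMetric(currentEnergies) < 0.1) abortGeneration();
```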
SCE treats context construction as a staged pipeline, not a single opaque function call.
Key properties:
- Every activation path is observable
- Security violations can terminate execution
- Context growth is measurable and bounded
Failure modes become visible instead of implicit.
- Orchestrator: `lib/sce/engine/SCEEngine.ts` (Thin wrapper, manages subsystems)
- Graph: `lib/sce/graph/` (Adjacency Index)
- Physics: `lib/sce/activation/` (Spreading Activation, Energy Dynamics)
- Learning: `lib/sce/learning/` (Hebbian, Co-Activation)
- Structure: `lib/sce/hyperedges/` (Clustering, Consolidation)
- Safety: `lib/sce/safety/` (Contradictions, Orthogonality)
- Metrics: `lib/sce/metrics/` (Telemetry)
- UI: `components/CoreEngine.tsx` (Orchestration and visualization)
The CoreEngine component acts as a memory observatory rather than a simple demo UI.
It provides:
- Explicit stimulus injection ("Trigger Pulse")
- Focus anchoring
- Live graph visualization
- Context synthesis output
- Telemetry dashboard
Think of it as mission control for context assembly, designed for debugging, research, and safety analysis.
SCE is not a silver bullet for all security concerns, but it reshapes the threat landscape:
| Attack Vector | RAG Systems | SCE |
|---|---|---|
| Prompt injection | Hidden in concatenated text | Must traverse explicit graph structure |
| Context poisoning | Affects all retrievals | Localized to specific nodes/edges |
| Runaway costs | Unbounded context growth | Activation thresholds + energy budgets |
| Alignment drift | Behavioral nudging post-hoc | Structural constraints pre-inference |
| Input/Output safety | Post-hoc filtering only | Multi-layer inspection at every stage |
```
Incoming Query
      ↓
[🔥 Cognitive Firewall] ──(Violation)──→ 🚫 Blocked
   (Regex Patterns + Rules)
      ↓
Extraction & Grounding
      ↓
Context Anchoring
      ↓
Spreading Activation
      ↓
[🛡️ System 2 Logic] ──(Contradiction)──→ ⚠️ Flagged
   (Dissonance Check)
      ↓
Context Synthesis ──(Sanitization)──→ Filtered
      ↓
LLM Inference
```
Note on Hallucinations: While not primarily a security concern, SCE's structured memory with source attribution provides better factual grounding than flat retrieval systems. Each activated node carries metadata about its origin, making fabricated information architecturally harder (though not impossible).
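To make the firewall stage concrete, here is a minimal rule-based sketch; the patterns are illustrative examples, not the shipped SCE rule set:

```typescript
// Minimal rule-based cognitive firewall sketch; patterns are illustrative only.
interface FirewallRule {
  name: string;
  pattern: RegExp;
}

const exampleRules: FirewallRule[] = [
  { name: "instruction-override", pattern: /ignore (all )?(previous|prior) instructions/i },
  { name: "context-exfiltration", pattern: /reveal (your )?(system prompt|hidden context)/i },
];

// A violation terminates execution before any graph activation occurs.
function checkFirewall(query: string): { blocked: boolean; violation?: string } {
  for (const rule of exampleRules) {
    if (rule.pattern.test(query)) return { blocked: true, violation: rule.name };
  }
  return { blocked: false };
}
```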
Instead of asking the model to behave, SCE limits what it can meaningfully see.
SCE is an exploratory architecture with unresolved challenges:
🔴 Critical Research Focus (Active Development):
Graph Growth Mechanics
- Connection strategy: Currently connects everything during chat, leading to over-dense graphs
- Node creation heuristics: What triggers new node creation vs. updating existing nodes?
- Natural weight distribution: How should weights evolve to reflect true semantic relationships?
- These are active areas of experimentation; no settled solutions yet
Prompt Engineering
- Entity extraction prompts need refinement for different domains
- Response synthesis prompts balancing creativity vs. grounding
- What information should be extracted and persisted vs. discarded?
🟡 Scalability & Performance:
Over-Connection Issues
- Over-connection creates performance issues as graphs grow beyond roughly 1K nodes
- Pruning strategies are needed: temporal decay, relevance thresholds, or periodic consolidation (see the sketch after this list)
- What are the practical memory and latency bounds?
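As one naive illustration of the temporal-decay option (not a settled solution; the half-life and floor are arbitrary placeholders, and `Synapse` is the type from the earlier sketch):

```typescript
// Naive temporal-decay pruning sketch: weaken rarely-reinforced synapses over
// time and drop those below a floor. Parameters are arbitrary placeholders.
function pruneByTemporalDecay(
  synapses: Synapse[],
  lastReinforced: Map<string, number>, // key: `${source}->${target}`, value: timestamp (ms)
  now: number,
  halfLifeMs: number = 7 * 24 * 3600 * 1000, // one week
  floor: number = 0.05,
): Synapse[] {
  return synapses
    .map((s) => {
      const age = now - (lastReinforced.get(`${s.source}->${s.target}`) ?? now);
      return { ...s, weight: s.weight * Math.pow(0.5, age / halfLifeMs) };
    })
    .filter((s) => s.weight >= floor);
}
```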
🟢 Future Research Questions:
Adversarial Robustness
- Can activation thresholds be tuned to hide relevant context?
- What if weights are maliciously manipulated?
- How does SCE handle ambiguous focus transitions?
Parameter Sensitivity
- How sensitive is performance to decay factors, thresholds, and depth limits?
- Can these be learned rather than hand-tuned?
These are open research questions. Help us answer them; see CONTRIBUTING.md.
SCE is not:

- Competing with vector databases on raw retrieval speed
- Replacing LLMs or transformer architectures
- Acting as a drop-in RAG replacement
- Claiming solved alignment
SCE is an exploratory architecture, not a production framework.
SCE requires explicit graph relationships between entities:
- Example datasets (like the included knowledge bases) provide working starting points for experimentation
- Conversion utilities exist for seeding from structured data sources (see v0.3.1 release notes)
- Relationships evolve and strengthen through usage via Hebbian weight learning (see the sketch after this list)
- Trade-off: More upfront structure needed, but richer relational context that improves over time
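A minimal sketch of a Hebbian co-activation update, reusing the `Synapse` type from earlier (the learning rate and clamp are illustrative, not the tuned SCE values):

```typescript
// Hebbian co-activation sketch: synapses between nodes that fire together
// are strengthened ("fire together, wire together", after Hebb, 1949).
function hebbianUpdate(
  synapse: Synapse,
  energies: Map<string, number>,
  learningRate = 0.1,
): void {
  const pre = energies.get(synapse.source) ?? 0;
  const post = energies.get(synapse.target) ?? 0;
  synapse.weight = Math.min(1, synapse.weight + learningRate * pre * post);
}
```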
```
npm install
npm run dev
```

Desktop builds require Rust and platform dependencies (see the Quick Start Guide):

```
npm run tauri dev
```

or build the app (this will create an installer for your platform):

```
npm run tauri build
```
The chat interface exposes the complete pipeline. Active Context Focus (top) shows anchored nodes. Quick Actions (right) provide exploration prompts. System Audit (left) logs every operation in real-time.
Add an API key in Settings to use the app (Groq is the default and recommended provider).
| Component | Technology |
|---|---|
| Frontend | React, TypeScript, Vite |
| Visualization | Custom Graph Renderer, Recharts |
| Styling | Tailwind CSS, Glassmorphism UI |
| Engine | Custom Hypergraph (TypeScript) |
| Desktop | Tauri 2.0, Rust, SQLite |
| AI Integration | Gemini, Groq, Ollama (Local) |
Note: The stack prioritizes inspectability and cross-platform deployment. TypeScript for the engine enables real-time browser visualization; Tauri allows the same codebase to run as a desktop app with SQLite persistence.
SCE draws from neuroscience, graph theory, and cognitive architecture research, adapting concepts for practical AI systems:
- Neuroscience & Memory: Hebbian learning (Hebb, 1949), hippocampal cognitive maps (O'Keefe & Nadel, 1978), complementary learning systems (McClelland et al., 1995)
- Cognitive Architecture: Spreading activation theory (Collins & Loftus, 1975), ACT-R (Anderson et al., 2004), SOAR (Laird et al., 1987)
- Graph Theory: Hypergraphs (Berge, 1973), network communicability (Estrada & Hatano, 2008), spectral graph theory (Chung, 1997)
- Information Theory: Maximal marginal relevance (Carbonell & Goldstein, 1998), information-theoretic pruning (Cover & Thomas, 2006)
For full citations and detailed connections to research traditions, see CITATIONS.md.
This project is developed by a single independent dev, not a software company or research lab. It grew out of my personal research into creating more realistic NPC behavior in games and true digital twins, and it serves as a proof of concept for a larger AI game project and digital twin platform I'm currently working on.
Project Status: The core architecture is functional, but it needs more testing, tweaking, and optimization. The current challenge is how the system creates connections and evolves naturally (how the graph expands and learns along the way). This requires deep experimentation with hypergraphs, graph theory, and cognitive architecture research.
Why Open-Sourced: While SCE was built to solve memory architecture challenges in games and digital twin systems, it was open-sourced specifically because of its potential to address many security concerns in current AI systems and perhaps enable safer alignment. If this were purely about better memory optimization, it would have remained proprietary.
What's Needed from the Community:
Research & Validation:
- Benchmark studies comparing SCE to RAG baselines (once core architecture is established)
- Adversarial testing of security mechanisms
- Formal analysis of activation dynamics
- Comparison studies across different domains
Engineering Improvements:
- Test coverage for core engine
- Performance optimization for large graphs (>100k nodes)
- SQL backend implementation for true scalability
- Additional LLM provider integrations
Applications & Extensions:
- Domain-specific adaptations
- Alternative activation strategies
- Novel security rule patterns
- Integration with existing AI frameworks
If you are interested in:
- AI safety & alignment through architectural constraints
- Alternative memory architectures for persistent AI systems
- Graph-based context construction
- Inspectable AI reasoning
Your contributions, research, and extensions are welcome. See CONTRIBUTING.md for guidelines.
Check the Issues tab for specific areas where help is needed.
SCE intentionally does not ship with traditional retrieval benchmarks yet.
The architecture is still stabilizing, and there is currently no accepted baseline for evaluating:
- Relational memory coherence
- Context inspectability
- Activation trace quality
- Long-term memory evolution
Premature benchmarks would bias development toward legacy retrieval metrics and misrepresent SCE's goals.
Benchmarks will be introduced once the architecture is considered stable and native evaluation criteria are defined.
All source code and datasets in this repository are licensed under
the Apache License, Version 2.0, unless otherwise noted.
See the LICENSE file for details.
All content within the docs/ directory (including notes,
architectural diagrams, conceptual papers, and images)
is licensed under Creative Commons Attribution 4.0 (CC BY 4.0).
If you use SCE or its underlying concepts in academic research, technical reports, or publications, please cite:
```bibtex
@misc{sce_2025,
  title  = {The Synapse Context Engine (SCE): An Inspectable Memory Architecture for Safe AI},
  author = {Lasse Sainia},
  year   = {2025},
  url    = {https://github.com/sasus-dev/synapse-context-engine}
}
```

A brain-inspired memory architecture for AI systems, built by a single dev, open-sourced for safety.

