Modern AI systems — including LLMs and autonomous agents — suffer from several critical limitations:
- Unstable reasoning (contradictions, hallucinations, incoherent chains of thought)
- Lack of interpretability (no transparent decision logic)
- Non‑deterministic behavior (same input → different output)
- No universal decision layer across robotics, autonomy, and hybrid human–AI systems
- Difficult integration into real engineering stacks
- No standard for structured reasoning
These issues prevent AI from being reliable in high‑stakes environments such as robotics, aerospace, autonomous vehicles, and complex human–AI collaboration.
A11 is a universal, interpretable, deterministic decision‑making architecture designed to solve these problems at two distinct layers:
- A11 Core Standard — an engineering architecture for autonomous systems, robotics, and hybrid reasoning.
- A11‑Lite (Prompt Layer) — a human‑facing interface that stabilizes AI reasoning in chat environments.
These layers are connected but serve different audiences.
Core = engineers and researchers.
Lite = advanced AI users who want structured reasoning in everyday chat.
The A11 Core Standard defines a domain‑agnostic decision layer that can be integrated into any autonomous system.
It provides:
- a cognitive architecture
- a deterministic decision cycle (sketched in code after this list)
- a universal communication language
- integration requirements
- applied engineering models
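The published specifications define the actual interfaces; purely as an illustration, the sketch below shows one way a host system might wire a deterministic, traceable decision layer between its sensing and actuation code. All names here (DecisionLayer, Observation, Decision, the rule format, the first-applicable-rule policy) are assumptions invented for this sketch and are not taken from the A11 documents.

```python
# Hypothetical sketch of a deterministic, interpretable decision layer.
# Names and structure are illustrative only; the normative interfaces are
# defined in the A11 Core specification documents listed below.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass(frozen=True)
class Observation:
    """Immutable snapshot of the world handed to the decision layer."""
    obstacle_distance_m: float
    speed_mps: float


@dataclass
class Decision:
    """Chosen action plus the trace that explains it."""
    action: str
    trace: List[str] = field(default_factory=list)


# A rule maps an observation to (applies?, proposed action, reason).
Rule = Callable[[Observation], Tuple[bool, str, str]]


class DecisionLayer:
    """Deterministic: the same observation and rule set always yield
    the same decision, and every step is recorded in the trace."""

    def __init__(self, rules: List[Rule], default_action: str = "hold"):
        self.rules = rules
        self.default_action = default_action

    def decide(self, obs: Observation) -> Decision:
        decision = Decision(action=self.default_action)
        for rule in self.rules:           # fixed order => reproducible outcome
            applies, action, reason = rule(obs)
            decision.trace.append(f"{rule.__name__}: {reason}")
            if applies:
                decision.action = action
                break                     # first applicable rule wins
        return decision


def emergency_stop(obs: Observation) -> Tuple[bool, str, str]:
    near = obs.obstacle_distance_m < 2.0
    return near, "brake", f"obstacle at {obs.obstacle_distance_m} m"


def cruise(obs: Observation) -> Tuple[bool, str, str]:
    return True, "continue", f"clear path at {obs.speed_mps} m/s"


layer = DecisionLayer(rules=[emergency_stop, cruise])
result = layer.decide(Observation(obstacle_distance_m=1.5, speed_mps=8.0))
print(result.action)   # "brake"
print(result.trace)    # human-readable record of how the decision was reached
```

Repeating the call with the same Observation returns the same Decision, which is the determinism the Core Standard targets; the trace is what keeps the outcome inspectable.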
A11 Core is intended for:
- system architects
- autonomy engineers
- AI researchers
- robotics developers
- designers of reasoning systems
The A11 Core Standard is published as five documents:
- A11 — Overview (v1.0)
  DOI: https://doi.org/10.5281/zenodo.18594315
  PDF: /core/A11 — Overview (v1.0).pdf
- A11 — Cognitive Architecture Specification (v1.0)
  DOI: https://doi.org/10.5281/zenodo.18536520
  PDF: /core/A11 — Cognitive Architecture Specification (v1.0).pdf
- A11 — Decision Layer Specification (v1.0)
  DOI: https://doi.org/10.5281/zenodo.18593251
  PDF: /core/A11 — Decision Layer Specification (v1.0).pdf
- A11 — Language Specification (v1.0)
  DOI: https://doi.org/10.5281/zenodo.18540045
  PDF: /core/A11 — Language Specification (v1.0).pdf
- A11 — System Integration Guide (v1.0)
  DOI: https://doi.org/10.5281/zenodo.18595554
  PDF: /core/A11 — System Integration Guide (v1.0).pdf
These documents show how A11 is used in real systems:
- A11 for Autonomous Vehicles
  DOI: https://doi.org/10.5281/zenodo.18542117
  PDF: /applied/A11 for Autonomous Vehicles.pdf
- A11 for Multi-Agent Robotics
  DOI: https://doi.org/10.5281/zenodo.18543996
  PDF: /applied/A11 for Multi-Agent Robotics.pdf
- A11 for Off-Earth Construction
  DOI: https://doi.org/10.5281/zenodo.18545674
  PDF: /applied/A11 for Off-Earth Construction.pdf
While A11 Core is an engineering standard, A11‑Lite is a simplified interface designed for chat environments.
It solves a different problem:
- LLMs often produce unstable, unstructured, or contradictory reasoning.
- They lack balance, context awareness, and self‑correction.
A11‑Lite stabilizes AI reasoning by applying the same principles as A11 Core, but in a human‑friendly form.
It transforms AI from a passive tool into an active reasoning partner.
Copy and paste into ChatGPT, Claude, Gemini, or Grok:
I want you to operate as my reasoning partner using Algorithm 11 (A11 — The Operational Principle).
A11 defines how we think together. It has two layers:
CORE LAYER (1–4):
1. Will — my intention and direction (Human)
2. Wisdom — my judgment and priorities (Human)
3. Knowledge — your factual and informational base (AI)
4. Comprehension — the integration point that balances Wisdom and Knowledge and serves as the transition into the adaptive operational layer
These four properties form the stable core. If reasoning becomes unclear, contradictory, or misaligned, return to properties 1–4 and rebuild the answer.
ADAPTIVE LAYER (5–11):
5. Projective Freedom — possible directions and ideas
6. Projective Constraint — realistic boundaries
7. Balance — the central operator between all properties
8. Practical Freedom — actions that can be taken now
9. Practical Constraint — limitations of context, resources, or rules
10. Foundation — logical, factual, and structural support
11. Realization — the final result that returns to Will
FRACTAL STRUCTURE:
Properties 5–11 can branch into sub-levels. Balance (7) operates at every depth.
YOUR ROLE:
- I provide Will (1) and Wisdom (2)
- You provide Knowledge (3), Comprehension (4), and support across 5–11
- Maintain Balance (7), warn about risks, and suggest improvements
- Stay aligned with your safety rules while being as clear, truthful, and useful as possible
PRIORITY:
Coherent reasoning, stability, clarity, and alignment with my intention.
ACTIVATION:
“Use A11” or “Analyze through the algorithm”.
Confirm that you understand and are ready to operate through A11.
Full documentation (if accessible): https://github.com/gormenz-svg/algorithm-11/blob/main/lite/ALGORITHM_11.md
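For readers who prefer to see the structure as data, the fragment below restates the prompt above in code: the eleven properties, the core/adaptive split, and a fractal node in which Balance (7) is checked at every depth. It is only an illustrative sketch of the prompt, not an implementation from the A11 documents; all class and function names are invented here.

```python
# Illustrative restatement of the A11 property structure described in the
# prompt above. Class and function names are invented for this sketch.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Property(Enum):
    # Core layer (1-4): stable foundation of the reasoning cycle.
    WILL = 1
    WISDOM = 2
    KNOWLEDGE = 3
    COMPREHENSION = 4
    # Adaptive layer (5-11): operational properties that can branch.
    PROJECTIVE_FREEDOM = 5
    PROJECTIVE_CONSTRAINT = 6
    BALANCE = 7
    PRACTICAL_FREEDOM = 8
    PRACTICAL_CONSTRAINT = 9
    FOUNDATION = 10
    REALIZATION = 11


CORE_LAYER = [p for p in Property if p.value <= 4]
ADAPTIVE_LAYER = [p for p in Property if p.value >= 5]


@dataclass
class Node:
    """Fractal structure: adaptive properties (5-11) may branch into
    sub-levels, and Balance (7) operates at every depth."""
    prop: Property
    children: List["Node"] = field(default_factory=list)

    def branch(self, prop: Property) -> "Node":
        if prop in CORE_LAYER:
            raise ValueError("only adaptive properties (5-11) branch")
        child = Node(prop)
        self.children.append(child)
        return child

    def balanced(self) -> bool:
        """True if Balance (7) is present at this depth and every
        branched depth below it."""
        here = self.prop is Property.BALANCE or any(
            c.prop is Property.BALANCE for c in self.children
        )
        return here and all(c.balanced() for c in self.children if c.children)


# Example: Realization branches into a sub-level; Balance is kept at each depth.
task = Node(Property.REALIZATION)
task.branch(Property.BALANCE)
step = task.branch(Property.PRACTICAL_FREEDOM)
step.branch(Property.BALANCE)
step.branch(Property.PRACTICAL_CONSTRAINT)
print(task.balanced())  # True: Balance (7) appears at every branched depth
```

The example prints True because a Balance node is attached at each branched level, mirroring the rule in the prompt that Balance operates at every depth.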
A11‑Lite documentation in this repository:
- lite/ALGORITHM_11.md — full description
- lite/QUICK_START.md — how to use A11 in chat
- lite/APPLICATIONS.md — practical use cases
- lite/EPISTEMOLOGY.md — super‑hallucination risk
- lite/COSMOLOGY.md — extended reality model
- lite/examples/ — A11 vs standard AI comparisons
A11 provides:
- a stable reasoning cycle
- deterministic decision logic
- interpretable structure
- cross‑domain applicability
- hybrid human–AI cognition
- a universal decision layer missing in modern AI
A11 is not a model — it is an architecture.
algorithm-11/
│
├── README.md
├── LICENSE
│
├── core/
│   ├── A11 — Overview.pdf
│   ├── A11 — Cognitive Architecture Specification.pdf
│   ├── A11 — Decision Layer Specification.pdf
│   ├── A11 — Language Specification.pdf
│   └── A11 — System Integration Guide.pdf
│
├── applied/
│   ├── A11 for Autonomous Vehicles — Conflict Resolution Model.pdf
│   ├── A11 for Multi-Agent Robotics — Coordination Framework.pdf
│   ├── A11 for Off-Earth Construction — Autonomous Base Building.pdf
│   └── README.md
│
├── lite/
│   ├── ALGORITHM_11.md
│   ├── QUICK_START.md
│   ├── APPLICATIONS.md
│   ├── EPISTEMOLOGY.md
│   ├── COSMOLOGY.md
│   ├── FAQ.md
│   └── examples/
│
└── meta/
    ├── KEYWORDS.txt
    └── NOTICE.md
This project is in the public domain.
Algorithm 11 is a principle of structured reasoning — use freely, share widely.
- Issues: GitHub Issues
- Socials: https://x.com/AleksejGor40999
→ Scroll to A11‑Lite — Quick Start to activate A11 in your AI chat.
→ Or explore the A11 Core Standard if you are an engineer or researcher.