From 977602a3f7b32f61cf0034ede3004fd07a13248a Mon Sep 17 00:00:00 2001
From: Daniel Barbosa
Date: Thu, 19 Feb 2026 21:35:09 -0300
Subject: [PATCH 01/13] feat: Add Gemini CLI + AIOS quickstart guide and a personalized automated workflow guide.

---
 GEMINI-QUICKSTART.md    |    132 +
 GUIA_AIOS_DANIEL.md     |     80 +
 _contexto_aios-core.txt | 348624 +++++++++++++++++++++++++++++++++++++
 3 files changed, 348836 insertions(+)
 create mode 100644 GEMINI-QUICKSTART.md
 create mode 100644 GUIA_AIOS_DANIEL.md
 create mode 100644 _contexto_aios-core.txt

diff --git a/GEMINI-QUICKSTART.md b/GEMINI-QUICKSTART.md
new file mode 100644
index 000000000..3e8d9a30e
--- /dev/null
+++ b/GEMINI-QUICKSTART.md
@@ -0,0 +1,132 @@
+# Gemini CLI + AIOS - Quick Start Guide
+
+## Setup Complete ✅
+
+Your environment is configured to use the **Gemini CLI** with all of the AIOS agents.
+
+---
+
+## 1. Configure the API Key
+
+You need a Google AI API key. Get one at: https://aistudio.google.com/app/apikey
+
+### Option A: Global (Recommended)
+Add to your `~/.zshrc` or `~/.bashrc`:
+```bash
+export GOOGLE_AI_API_KEY="your-key-here"
+```
+
+Then run:
+```bash
+source ~/.zshrc
+```
+
+### Option B: Per Project
+Add to each project's `.env`:
+```bash
+GOOGLE_AI_API_KEY=your-key-here
+```
+
+---
+
+## 2. Basic Usage
+
+### With the Wrapper (Recommended)
+```bash
+# Inside an AIOS project
+cd /Users/daniel/Documents/Bordeless/aios-core
+gemini-aios
+
+# The wrapper will:
+# - Detect the AIOS project
+# - Load the API key
+# - Report the available commands
+```
+
+### Directly
+```bash
+export GOOGLE_AI_API_KEY="..."
+gemini
+```
+
+---
+
+## 3. AIOS Commands in Gemini
+
+Inside the Gemini CLI you have access to:
+
+### Agent Menu
+```
+/aios-menu
+```
+Shows all available agents.
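Section 2 describes what the `gemini-aios` wrapper does before launching the CLI. Those checks can be sketched as a small shell script. This is a hypothetical illustration, not the real wrapper: it assumes a `.aios-core/` or `.gemini/` directory marks an AIOS project and that the key sits on a `GOOGLE_AI_API_KEY=` line in `.env`.

```shell
#!/bin/sh
# Hypothetical sketch of a gemini-aios-style wrapper (illustrative only).

detect_aios_project() {
  # Assumption: an AIOS project carries a .aios-core/ or .gemini/ marker directory
  [ -d "$1/.aios-core" ] || [ -d "$1/.gemini" ]
}

load_api_key() {
  # Prefer an already-exported key; otherwise do a naive read of the project .env
  # (real .env parsing would need to handle comments, quoting, and CRLF)
  if [ -z "$GOOGLE_AI_API_KEY" ] && [ -f "$1/.env" ]; then
    export "$(grep '^GOOGLE_AI_API_KEY=' "$1/.env")"
  fi
  [ -n "$GOOGLE_AI_API_KEY" ]
}

launch() {
  if detect_aios_project "$1" && load_api_key "$1"; then
    echo "AIOS project detected; try /aios-menu, /aios-dev, /aios-status"
    # exec gemini   # the real wrapper would hand off to the Gemini CLI here
  else
    echo "Not an AIOS project, or GOOGLE_AI_API_KEY is missing" >&2
    return 1
  fi
}
```

Running `launch .` inside a scaffolded project prints the available commands and would hand off to `gemini`; outside one, it fails with an explanatory message instead of starting the CLI without context.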
+### Quick Activation
+- `/aios-dev` - Developer
+- `/aios-architect` - Systems architect
+- `/aios-qa` - Quality Assurance
+- `/aios-devops` - DevOps Engineer
+- `/aios-pm` - Product Manager
+- `/aios-data-engineer` - Data Engineer
+- And more...
+
+### System Commands
+- `/aios-status` - Installation status
+- `/aios-agents` - List of agents
+- `/aios-validate` - Validate the installation
+
+---
+
+## 4. Usage Example
+
+```bash
+# Start Gemini in the AIOS project
+cd ~/Documents/Bordeless/aios-core
+gemini-aios
+
+# Inside Gemini:
+> /aios-dev
+# The developer agent will be activated
+
+> As the AIOS dev agent, create a script to...
+```
+
+---
+
+## 5. Comparison: Claude vs Gemini
+
+| Feature | Claude Code | Gemini CLI |
+|---------|-------------|------------|
+| Model | Claude Opus/Sonnet | Gemini 1.5 Pro/Flash |
+| Cost | $15+/month | **Free** (60/min, 1000/day) |
+| Activation | `/dev` | `/aios-dev` |
+| MCP | Native | Via extension |
+| Multimodal | Limited | **Full** (images, video) |
+
+---
+
+## 6. Troubleshooting
+
+### Error: API key not found
+```bash
+# Check whether the variable is set
+echo $GOOGLE_AI_API_KEY
+
+# If empty, export it manually
+export GOOGLE_AI_API_KEY="your-key-here"
+```
+
+### Extension does not appear
+```bash
+# Check the installation
+gemini extensions list | grep aios
+
+# Reinstall if necessary
+cp -r ~/Documents/Bordeless/aios-core/packages/gemini-aios-extension/* ~/.gemini/extensions/aios/
+```
+
+---
+
+## Done!
+
+You now have the power of AIOS running on Gemini, for free! 🚀
diff --git a/GUIA_AIOS_DANIEL.md b/GUIA_AIOS_DANIEL.md
new file mode 100644
index 000000000..eacb9b565
--- /dev/null
+++ b/GUIA_AIOS_DANIEL.md
@@ -0,0 +1,80 @@
+# 🚀 Automated Workflow with AIOS and the Gemini CLI
+
+A definitive guide created specifically for you, Daniel, to master your AIOS ecosystem directly from the terminal.
+
+## 🧠 What is AIOS?
+The **AIOS (Artificial Intelligence Operating System)** works as your central "brain" of AI agents. It is not just a simple script, but an ecosystem that turns any empty directory into a development environment driven by multiple specialist agents (Developer, Architect, QA, DevOps, etc.).
+
+**What does this mean in practice?**
+It means you have a complete team of software specialists "living" in your terminal. Instead of opening the browser, juggling tabs, and copy/pasting code, you delegate complex tasks directly from the command line, keeping full focus on your IDE and your code.
+
+---
+
+## ⚙️ The Magic of Automation (Everything from the Terminal)
+
+You introduced two superpowers into your global environment: the `aios-new` and `gemini-aios` commands. Combining them is what creates your "automated work point".
+
+### 1. Creating a New AIOS Project (`aios-new`)
+Whenever you start a new idea or repo, forget configuring Gemini or agent prompts by hand. From anywhere in the terminal, run:
+
+```bash
+aios-new nome-do-meu-projeto
+```
+
+**What this command does for you automatically, in one second:**
+1. **Scaffolding:** Creates the project folder.
+2. **Brain import:** Copies the AIOS core (rules, behavioral prompts, and agent intelligence) from your main source into the new folder (via `.gemini/` and `.aios-core/`).
+3. **Command mapping:** Creates the magic shortcuts (slash commands) for the Gemini CLI (such as `/aios-dev` and `/aios-menu`).
+4. **Git & Env:** Initializes the Git repository, creates a proper `.gitignore`, and adds a blank `.env` waiting for your API key.
+
+### 2. 
Activating Your AI Environment (`gemini-aios`)
+After running `aios-new`, enter the created directory:
+
+```bash
+cd nome-do-meu-projeto
+```
+*(Make sure the `GOOGLE_AI_API_KEY` environment variable is present in the `.env` file or globally in your `~/.zshrc`.)*
+
+Now activate your workforce:
+
+```bash
+gemini-aios
+```
+
+**What does `gemini-aios` do under the hood?**
+- Dynamically checks that you have a valid API key.
+- Starts the **Gemini CLI**, deliberately forcing the `gemini-2.0-flash` model. The Flash model processes huge contexts (file after file of code) far faster and more cheaply (or for free) than Pro.
+- Announces the commands available in that directory, leaving the terminal ready to receive commands.
+
+---
+
+## 🤖 How to Use the Magic Day to Day
+
+Once inside the `gemini-aios` prompt (which in your CLI will look like `> `), you are not just talking to a generic AI; you have routing control. Invoke specialists using the slash commands installed by your setup:
+
+- `/aios-menu` ➡️ Lists who in your "company" is available to work.
+- `/aios-architect` ➡️ **Ideal first step.** Ask something like: *"How should I structure the database for this to-do list app in Node.js with scalability in mind?"*
+- `/aios-dev` ➡️ **Hands-on work.** Ask: *"Given the structure we defined, implement the server.js file now."*
+- Other profiles, such as `/aios-qa`, for tests and validation.
+
+### Example of the Full Flow (Summary):
+```bash
+# From anywhere on your Mac:
+aios-new meu-sistema-vendas
+cd meu-sistema-vendas
+gemini-aios
+
+# Now, inside the Gemini CLI:
+> /aios-dev Check my current directory and initialize a basic Node.js project with Express.
+# (It does everything directly in the terminal)
+```
+
+---
+
+## 🎯 Why Your Setup Is an Absolute "Automated Work Point"
+
+1. 
**Instant Repeatability:** `aios-new` saves you from losing ten minutes copying and pasting prompts into every new repository. The project is born intelligent.
+2. **Context Aware:** The agents (such as `/aios-dev`) were designed, via `.gemini/rules.md` and metadata, to read your disk and know immediately which project they are working on, without you having to explain anything.
+3. **Cost Shield:** The `gemini-aios` wrapper shields you from sending thousands of tokens (your code files) to an expensive model. Defaulting to Flash allows fast iteration without straining your quota limit or your cloud bill.
+
+Welcome to the future of your workflow, Daniel! Write code through executive commands, building from zero to production on your Mac!
diff --git a/_contexto_aios-core.txt b/_contexto_aios-core.txt
new file mode 100644
index 000000000..bdb031a5c
--- /dev/null
+++ b/_contexto_aios-core.txt
@@ -0,0 +1,348624 @@
+# CONTEXTO DO PROJETO: aios-core
+
+==================================================
+📄 GEMINI-QUICKSTART.md
+==================================================
+```md
+# Gemini CLI + AIOS - Guia de Início Rápido
+
+## Setup Completo ✅
+
+Seu ambiente está configurado para usar o **Gemini CLI** com todos os agentes do AIOS.
+
+---
+
+## 1. Configurar Chave de API
+
+Você precisa de uma chave de API do Google AI. Pegue em: https://aistudio.google.com/app/apikey
+
+### Opção A: Global (Recomendado)
+Adicione ao seu `~/.zshrc` ou `~/.bashrc`:
+```bash
+export GOOGLE_AI_API_KEY="sua-chave-aqui"
+```
+
+Depois rode:
+```bash
+source ~/.zshrc
+```
+
+### Opção B: Por Projeto
+Adicione ao `.env` de cada projeto:
+```bash
+GOOGLE_AI_API_KEY=sua-chave-aqui
+```
+
+---
+
+## 2. 
Uso Básico + +### Com o Wrapper (Recomendado) +```bash +# Dentro de um projeto AIOS +cd /Users/daniel/Documents/Bordeless/aios-core +gemini-aios + +# O wrapper vai: +# - Detectar o projeto AIOS +# - Carregar a chave de API +# - Informar os comandos disponíveis +``` + +### Diretamente +```bash +export GOOGLE_AI_API_KEY="..." +gemini +``` + +--- + +## 3. Comandos AIOS no Gemini + +Dentro do Gemini CLI, você tem acesso a: + +### Menu de Agentes +``` +/aios-menu +``` +Mostra todos os agentes disponíveis. + +### Ativação Rápida +- `/aios-dev` - Desenvolvedor +- `/aios-architect` - Arquiteto de sistemas +- `/aios-qa` - Quality Assurance +- `/aios-devops` - DevOps Engineer +- `/aios-pm` - Product Manager +- `/aios-data-engineer` - Engenheiro de Dados +- E mais... + +### Comandos de Sistema +- `/aios-status` - Status da instalação +- `/aios-agents` - Lista de agentes +- `/aios-validate` - Validar instalação + +--- + +## 4. Exemplo de Uso + +```bash +# Iniciar Gemini no projeto AIOS +cd ~/Documents/Bordeless/aios-core +gemini-aios + +# Dentro do Gemini: +> /aios-dev +# O agente desenvolvedor será ativado + +> Como agente AIOS dev, crie um script para... +``` + +--- + +## 5. Comparação: Claude vs Gemini + +| Recurso | Claude Code | Gemini CLI | +|---------|-------------|------------| +| Modelo | Claude Opus/Sonnet | Gemini 1.5 Pro/Flash | +| Custo | $15+/mês | **Grátis** (60/min, 1000/dia) | +| Ativação | `/dev` | `/aios-dev` | +| MCP | Nativo | Via extensão | +| Multimodal | Limitado | **Completo** (imagens, vídeo) | + +--- + +## 6. Troubleshooting + +### Erro: API Key não encontrada +```bash +# Verificar se a variável está definida +echo $GOOGLE_AI_API_KEY + +# Se vazio, exportar manualmente +export GOOGLE_AI_API_KEY="sk-..." 
+``` + +### Extensão não aparece +```bash +# Verificar instalação +gemini extensions list | grep aios + +# Reinstalar se necessário +cp -r ~/Documents/Bordeless/aios-core/packages/gemini-aios-extension/* ~/.gemini/extensions/aios/ +``` + +--- + +## Pronto! + +Agora você tem o poder do AIOS rodando com Gemini, de graça! 🚀 + +``` + +================================================== +📄 CODE_OF_CONDUCT.md +================================================== +```md +# Contributor Covenant Code of Conduct + +> 🇧🇷 [Versão em Português](CODE_OF_CONDUCT-PT.md) + +## Our Pledge + +We as members, contributors, and leaders pledge to make participation in our +community a harassment-free experience for everyone, regardless of age, body +size, visible or invisible disability, ethnicity, sex characteristics, gender +identity and expression, level of experience, education, socio-economic status, +nationality, personal appearance, race, caste, color, religion, or sexual +identity and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, +diverse, inclusive, and healthy community. 
+ +## Our Standards + +Examples of behavior that contributes to a positive environment for our +community include: + +* Using welcoming and inclusive language +* Being respectful of differing viewpoints and experiences +* Gracefully accepting constructive criticism +* Focusing on what is best for the community +* Showing empathy towards other community members + +Examples of unacceptable behavior include: + +* The use of sexualized language or imagery, and sexual attention or advances of + any kind +* Trolling, insulting or derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or email address, + without their explicit permission +* Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of +acceptable behavior and will take appropriate and fair corrective action in +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful. + +Community leaders have the right and responsibility to remove, edit, or reject +comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when +an individual is officially representing the community in public spaces. +Examples of representing our community include using an official e-mail address, +posting via an official social media account, or acting as an appointed +representative at an online or offline event. + +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported to the community leaders responsible for enforcement at +conduct@SynkraAI.com. 
All complaints will be reviewed and investigated promptly +and fairly. + +All community leaders are obligated to respect the privacy and security of the +reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining +the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction +**Community Impact**: Use of inappropriate language or other behavior deemed +unprofessional or unwelcome in the community. + +**Consequence**: A private, written warning from community leaders, providing +clarity around the nature of the violation and an explanation of why the +behavior was inappropriate. A public apology may be requested. + +### 2. Warning +**Community Impact**: A violation through a single incident or series of +actions. + +**Consequence**: A warning with consequences for continued behavior. No +interaction with the people involved, including unsolicited interaction with +those enforcing the Code of Conduct, for a specified period of time. This +includes avoiding interactions in community spaces as well as external channels +like social media. Violating these terms may lead to a temporary or permanent +ban. + +### 3. Temporary Ban +**Community Impact**: A serious violation of community standards, including +sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public +communication with the community for a specified period of time. No public or +private interaction with the people involved, including unsolicited interaction +with those enforcing the Code of Conduct, is allowed during this period. +Violating these terms may lead to a permanent ban. + +### 4. Permanent Ban +**Community Impact**: Demonstrating a pattern of violation of community +standards, including sustained inappropriate behavior, harassment of an +individual, or aggression toward or disparagement of classes of individuals. 
+ +**Consequence**: A permanent ban from any sort of public interaction within the +community. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], +version 2.1, available at +[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1] + +Community Impact Guidelines were inspired by +[Mozilla's code of conduct enforcement ladder][Mozilla CoC]. + +For answers to common questions about this code of conduct, see +[https://www.contributor-covenant.org/faq][FAQ] + +[homepage]: https://www.contributor-covenant.org +[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html +[Mozilla CoC]: https://github.com/mozilla/diversity +[FAQ]: https://www.contributor-covenant.org/faq + + +``` + +================================================== +📄 LICENSE +================================================== +```txt +MIT License + +Copyright (c) 2025 BMad Code, LLC (BMad Method - original work) +Copyright (c) 2025 SynkraAI Inc. (AIOS Framework - derivative work) + +This project was originally derived from the BMad Method +(https://github.com/bmad-code-org/BMAD-METHOD), created by Brian Madison. +Synkra AIOS is NOT affiliated with, endorsed by, or sanctioned by the +BMad Method or BMad Code, LLC. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + +TRADEMARK NOTICE: +BMad, BMad Method, and BMad Core are trademarks of BMad Code, LLC. +These trademarks are NOT licensed under the MIT License. See the BMad Method +TRADEMARK.md (https://github.com/bmad-code-org/BMAD-METHOD/blob/main/TRADEMARK.md) +for detailed guidelines on trademark usage. + +``` + +================================================== +📄 CHANGELOG.md +================================================== +```md +# Changelog + +All notable changes to Synkra AIOS will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +## [4.2.11] - 2026-02-16 + +### Added + +- Squad agent commands are now automatically installed to active IDEs during pro scaffolding (`installSquadCommands`). +- Supports Claude Code (`.claude/commands/{squad}/`), Codex CLI (`.codex/agents/`), Gemini CLI (`.gemini/rules/{squad}/`), and Cursor (`.cursor/rules/`). +- Installed files are tracked in `pro-installed-manifest.yaml` and `pro-version.json`. + +## [4.2.10] - 2026-02-16 + +### Fixed + +- Handle `ALREADY_ACTIVATED` license status gracefully instead of throwing error. +- Fix error envelope parsing in pro license client — correctly extracts error messages from API responses. + +## [4.2.9] - 2026-02-16 + +### Fixed + +- Pass `targetDir` correctly to `runProWizard` — fixes pro install failing in non-CWD projects. 
+- Surface pro install errors to user instead of silently swallowing them. + +## [4.2.8] - 2026-02-16 + +### Fixed + +- Exclude `mmos-squad` (private) from pro scaffolding via `SCAFFOLD_EXCLUDES`. +- Merge `pro-config.yaml` sections into `core-config.yaml` during pro install (`mergeProConfig`). + +## [4.2.7] - 2026-02-16 + +### Fixed + +- Pro wizard (`npx aios-core install`) now auto-installs `@aios-fullstack/pro` package during Step 2, fixing "Pro package not found" error in greenfield and brownfield projects. +- Greenfield projects without `package.json` now get `npm init -y` automatically before pro install. +- Removed unused `headings` import in `pro-setup.js`. + +## [Unreleased] + +### Added + +- `docs/glossary.md` with official AIOS taxonomy terms: + - `squad` + - `flow-state` + - `confidence gate` + - `execution profile` +- `scripts/semantic-lint.js` for semantic terminology regression checks. +- `tests/unit/semantic-lint.test.js` for semantic lint rule validation. + +### Changed + +- CI now includes a `Semantic Lint` job (`npm run validate:semantic-lint`). +- Pre-commit markdown pipeline now runs semantic lint through `lint-staged`. + +### Migration Notes + +- Deprecated terminology replacements: + - `expansion pack` -> `squad` + - `permission mode` -> `execution profile` + - `workflow state` -> `flow-state` (warning-level migration) + +--- + +## [3.9.0] - 2025-12-26 + +### Highlights + +This release introduces **Squad Continuous Improvement** capabilities with analyze and extend commands, plus a massive codebase cleanup removing 116K+ lines of deprecated content. + +### Added + +#### Story SQS-11: Squad Analyze & Extend +- **`*analyze-squad` command** - Analyze squad structure, coverage, and get improvement suggestions +- **`*extend-squad` command** - Add new components (agents, tasks, workflows, etc.) 
incrementally +- **New Scripts:** + - `squad-analyzer.js` - Inventory and coverage analysis + - `squad-extender.js` - Component creation with templates +- **8 Component Templates:** + - `agent-template.md`, `task-template.md`, `workflow-template.yaml` + - `checklist-template.md`, `template-template.md` + - `tool-template.js`, `script-template.js`, `data-template.yaml` +- **New Tasks:** + - `squad-creator-analyze.md` + - `squad-creator-extend.md` + +### Changed + +#### Story TD-1: Tech Debt Cleanup +- Fixed ESLint warnings in 5 core files +- Removed 284 deprecated files (~116,978 lines deleted) +- Cleaned `.github/deprecated-docs/` directory +- Removed obsolete backup files + +### Fixed +- ESLint `_error` variable warnings in test utilities +- Context loader error handling improvements + +--- + +## [3.8.0] - 2025-12-26 + +*Previous release with WIS and SQS features.* + +--- + +## [2.2.3] - 2025-12-22 + +### Highlights + +This release marks the **Open-Source Community Readiness** milestone, preparing AIOS for public contribution while introducing the **Squad System** for extensibility. 
+ +### Added + +#### Epic OSR: Open-Source Community Readiness (10 Stories) + +- **Legal Foundation** (OSR-3) + - `PRIVACY.md` / `PRIVACY-PT.md` - Privacy policies (EN/PT) + - `TERMS.md` / `TERMS-PT.md` - Terms of use (EN/PT) + - `CODE_OF_CONDUCT.md` - Community guidelines with contact info + +- **Community Process** (OSR-6) + - Feature request templates and triage process + - Issue labeling standards + +- **Public Roadmap** (OSR-7) + - Public roadmap documentation + - Community visibility into planned features + +- **Squads Guide** (OSR-8) + - Comprehensive guide for creating community squads + - Examples and best practices + +- **Rebranding to Synkra** (OSR-9) + - Brand investigation complete + - Namespace updated to SynkraAI + +- **Release Checklist** (OSR-10) + - GitHub configuration validated + - CodeQL security scanning active (30+ alerts addressed) + - Branch protection rules configured + - Smoke test passed on clean clone + +#### Epic SQS: Squad System Enhancement (Sprint 7) + +- **Squad Designer Agent** (SQS-9) + - New `@squad-creator` agent for guided squad creation + - Interactive wizard with `*create-squad` command + - AI-powered naming and structure suggestions + +- **Squad Loader Utility** (SQS-2) + - Local squad resolution from `./squads/` directory + - Simplified loading without complex caching + +- **Squad Validator + Schema** (SQS-3) + - JSON Schema for squad manifest validation + - `*validate-squad` command for compliance checking + +- **Squad Creator Tasks** (SQS-4) + - `*create-squad` - Interactive squad creation + - `*validate-squad` - Manifest validation + - `*list-squads` - Local squad discovery + +#### Infrastructure & Documentation + +- **Documentation Integrity System** (6.9) + - Automated cross-reference validation + - Link checking in CI pipeline + +- **MCP Governance Consolidation** (6.14) + - Unified MCP configuration rules + - `.claude/rules/mcp-usage.md` guidance + +- **Agent Config Path Fix** (6.15) + - Resolved path resolution 
issues across platforms + +- **Scripts Path Consolidation** (6.16) + - Standardized script locations under `.aios-core/scripts/` + +- **Semantic Release Automation** (6.17) + - Automated versioning on merge to main + - Conventional commit parsing + - Automatic CHANGELOG generation + +- **Agent Command Rationalization** (Story 6.1.2.3) + - Command consolidation: `aios-master` 44→30 commands (32% reduction) + - Command consolidation: `data-engineer` 31→28 commands (9.7% reduction) + - New consolidated tasks: `security-audit`, `analyze-performance`, `test-as-user`, `setup-database` + - Migration guide: `docs/guides/command-migration-guide.md` + - Agent selection guide: `docs/guides/agent-selection-guide.md` + +- **Dynamic Project Status Context** (Story 6.1.2.4) + - Git branch, modified files, and recent commits shown in agent greetings + - Current story and epic detection from `docs/stories/` + - 60-second cache mechanism (<100ms first load, <10ms cached) + - Cross-platform support (Windows/Linux/macOS) + +### Changed + +- **Agent Delegation Guidance** - All agents now include "NOT for" sections in `whenToUse` +- **PR Title Format** - DevOps `*create-pr` now generates Conventional Commits format titles +- **Scripts Location** - Consolidated under `.aios-core/scripts/` for consistency +- **MCP Configuration** - Unified rules in `.claude/rules/mcp-usage.md` + +### Fixed + +- **Agent Config Paths** (6.15) - Resolved path resolution issues on Windows +- **Script References** (6.16) - Fixed broken script imports across agents + +### Security + +- **CodeQL Scanning** - Active with 30+ alerts reviewed +- **Branch Protection** - Enabled on main (1 approver, dismiss stale reviews) + +### Documentation + +- **Squads Guide** - Complete guide for community squad creation +- **Feature Process** - Templates and triage workflow documented +- **Public Roadmap** - Community visibility into planned features +- **Legal Documents** - Privacy policy, Terms of Use (EN/PT) + +--- + +## 
[4.32.0] - 2025-11-12 + +### Removed +- **Private squads** - Moved to separate private repository (`aios-squads`) + - Removed `squads/creator/` (CreatorOS) + - Removed `squads/innerlens/` + - Removed `squads/mmos-mapper/` + - Removed `squads/aios-infrastructure-devops/` + - Removed `squads/meeting-notes/` + - Repository: https://github.com/SynkraAI/aios-squads (PRIVATE) +- **Internal development tools** - Moved to separate private repository (`aios-dev-tools`) + - Removed analysis scripts: `analyze-batches.js`, `analyze-decision-patterns.js`, `analyze-epic3.js`, etc. + - Removed consolidation scripts: `consolidate-entities.js`, `consolidate-results.js`, etc. + - Removed extraction scripts: `extract-all-claude-backups.js`, `extract-claude-history.js` + - Removed generation scripts: `generate-entity-summary.js`, `generate-entity-table.js` + - Repository: https://github.com/SynkraAI/aios-dev-tools (PRIVATE) +- **hybrid-ops squad** - Moved to separate repository for independent maintenance + - Removed `squads/hybrid-ops/` directory + - Removed `.hybrid-ops/` directory + - Updated `core-config.yaml` to reference external repository + - Updated `install-manifest.yaml` (removed 47 file entries) + - Repository: https://github.com/SynkraAI/aios-hybrid-ops-pedro-valerio + +### Changed +- README.md - hybrid-ops now listed under "Squads Externos" +- Squad can now be installed independently via GitHub +- **Squad naming convention** - Applied consistent `{agent-id}-` prefix to agent-specific tasks across all 6 squads + - ETL pack: 4 tasks renamed (youtube-specialist, social-specialist, web-specialist) + - Creator pack: 4 tasks already renamed (pre-existing migration) + - Innerlens pack: 4 tasks renamed (fragment-extractor, psychologist, quality-assurance) + - Mmos-mapper pack: 7 tasks renamed (cognitive-analyst, research-specialist, system-prompt-architect, emulator, mind-pm) + - Aios-infrastructure-devops pack: 2 tasks already renamed (pre-existing) + - Meeting-notes pack: 1 
task already renamed (pre-existing) + - All agent dependencies updated to reference new task names + - Shared tasks correctly have NO prefix (conservative approach) + +### Technical +- Story: 4.6 - Move Hybrid-Ops to Separate Repository +- Breaking Change: hybrid-ops no longer bundled with @synkra/aios-core +- Migration: Users can install from external repo to `squads/hybrid-ops/` +- Story: 4.7 - Removed `squads/hybrid-ops.legacy/` directory (legacy backup no longer needed) +- Story: 4.5.3 - Squads Naming Convention Migration + - Applied naming convention from Story 4.5.2 to all 6 squads + - Total: 15 tasks renamed (11 new + 4 pre-existing) + - 18 agent files updated with new dependencies + - Validation: 100% compliance, 0 broken references + +## [4.31.1] - 2025-10-22 + +### Added +- NPX temporary directory detection with defense-in-depth architecture +- PRIMARY detection layer in `tools/aios-npx-wrapper.js` using `__dirname` +- SECONDARY fallback detection in `tools/installer/bin/aios.js` using `process.cwd()` +- User-friendly help message with chalk styling when NPX temp directory detected +- Regex patterns to identify macOS NPX temporary paths (`/private/var/folders/.*/npx-/`, `/.npm/_npx/`) +- JSDoc documentation for NPX detection functions + +### Fixed +- NPX installation from temporary directory no longer attempts IDE detection +- Clear error message guides users to correct installation directory +- Prevents confusion when running `npx @synkra/aios-core install` from home directory + +### Changed +- Early exit with `process.exit(1)` when NPX temporary context detected +- Help message provides actionable solution: `cd /path/to/your/project && npx @synkra/aios-core install` + +### Technical +- Story: 2.3 - NPX Installation Context Detection & Help Text (macOS) +- Defense in depth: Two independent detection layers provide redundancy +- macOS-specific implementation (other platforms unaffected) +- Non-breaking change (patch version) + +## [4.31.0] - Previous 
Release + +*(Previous changelog entries to be added)* + +``` + +================================================== +📄 jest.config.js +================================================== +```js +module.exports = { + testEnvironment: 'node', + coverageDirectory: 'coverage', + + // Test patterns from LOCAL (mais específico) + testMatch: [ + '**/tests/**/*.test.js', + '**/tests/**/*.spec.js', + '**/.aios-core/**/__tests__/**/*.test.js', + // Pro tests run via pro-integration.yml CI workflow (not in local npm test) + // '**/pro/**/__tests__/**/*.test.js', + ], + + // Ignore patterns - exclude incompatible test frameworks + testPathIgnorePatterns: [ + '/node_modules/', + // Pro submodule tests — run via pro-integration.yml CI workflow, not local npm test + // Use anchored regex to only match the pro/ submodule dir, not tests/pro/ + '/pro/', + // Playwright e2e tests (use ESM imports, run with Playwright not Jest) + 'tools/quality-dashboard/tests/e2e/', + // Windows-specific tests (only run on Windows CI) + 'tests/integration/windows/', + // Node.js native test runner tests (use node:test module) + 'tests/installer/v21-path-validation.test.js', + // v2.1 Migration: Tests with removed common/utils modules (OSR-10 tech debt) + // These tests reference modules removed during v4.31.0 → v2.1 migration + 'tests/tools/backward-compatibility.test.js', + 'tests/tools/clickup-helpers.test.js', + 'tests/tools/clickup-validators.test.js', + 'tests/tools/google-workspace-helpers.test.js', + 'tests/tools/google-workspace-validators.test.js', + 'tests/tools/n8n-helpers.test.js', + 'tests/tools/n8n-validators.test.js', + 'tests/tools/schema-detection.test.js', + 'tests/tools/supabase-helpers.test.js', + 'tests/tools/supabase-validators.test.js', + 'tests/tools/validation-performance.test.js', + 'tests/tools/validators.test.js', + 'tests/integration/tools-system.test.js', + 'tests/unit/tool-helper-executor.test.js', + 'tests/unit/tool-validation-helper.test.js', + 
'tests/unit/tool-resolver.test.js', + 'tests/regression/tools-migration.test.js', + 'tests/performance/tools-system-benchmark.test.js', + 'tests/clickup/status-sync.test.js', + 'tests/story-update-hook.test.js', + 'tests/epic-verification.test.js', + 'tests/e2e/story-creation-clickup.test.js', + 'tests/installer/v21-structure.test.js', + // Squad template tests use ESM imports - run separately with --experimental-vm-modules + '.aios-core/development/templates/squad-template/tests/', + // Manifest tests need manifest data alignment (OSR-10 tech debt) + 'tests/unit/manifest/manifest-generator.test.js', + 'tests/unit/manifest/manifest-validator.test.js', + // Performance tests are flaky on different hardware (OSR-10 tech debt) + 'tests/integration/install-transaction.test.js', + // License tests require network/crypto resources unavailable in CI (pre-existing) + 'tests/license/', + // Workflow intelligence tests - assertion count mismatches (pre-existing) + '.aios-core/workflow-intelligence/__tests__/', + ], + + // Coverage collection (Story TD-3: Updated paths) + collectCoverageFrom: [ + 'src/**/*.js', + '.aios-core/**/*.js', + 'bin/**/*.js', + 'packages/**/*.js', + 'scripts/**/*.js', + '!**/node_modules/**', + '!**/tests/**', + '!**/coverage/**', + '!**/__tests__/**', + '!**/*.test.js', + '!**/*.spec.js', + // Exclude templates, generated files, and legacy scripts + '!.aios-core/development/templates/**', + '!.aios-core/development/scripts/**', + '!.aios-core/core/orchestration/**', + '!.aios-core/core/execution/**', + '!.aios-core/hooks/**', + '!.aios-core/product/templates/**', + '!**/dist/**', + // Story TD-6: Exclude I/O-heavy health check plugins from core coverage + // These are integration-test candidates (git, npm, network, disk, docker, etc.) 
// Core engine/healers/reporters remain in scope with 80%+ coverage
    '!.aios-core/core/health-check/checks/**',
    // Story TD-6: Exclude config/manifest modules - mostly I/O operations
    // These modules handle file system operations and JSON parsing
    // Better suited for integration tests
    '!.aios-core/core/config/**',
    '!.aios-core/core/manifest/**',
    // Story TD-6: Exclude registry (file I/O heavy) and utils (helper functions)
    // These provide supporting functionality tested indirectly through main modules
    '!.aios-core/core/registry/**',
    '!.aios-core/core/utils/**',
  ],

  // Coverage thresholds (Story TD-3)
  // Target: 80% global, 85% for core modules
  // Current baseline (2025-12-27): ~31% (needs improvement)
  // TEMPORARY: Lowered thresholds for PR #53, #76 (Gemini), #96 (CI fix)
  // TODO: Restore thresholds after adding tests - tracked in Story SEC-1 follow-up
  coverageThreshold: {
    global: {
      branches: 19,
      functions: 22,
      lines: 22,
      statements: 22,
    },
    // Core modules coverage threshold
    // TD-6: Adjusted to 45% to reflect current coverage (47.14%)
    // TEMPORARY: Lowered to 38% for PR #76 - Gemini integration adds many new files
    // Many core modules are I/O-heavy orchestration that's difficult to unit test
    '.aios-core/core/': {
      lines: 38,
    },
  },

  // Coverage ignore patterns from REMOTE
  coveragePathIgnorePatterns: ['/node_modules/', '/coverage/', '/.husky/', '/dist/'],

  // Timeout from REMOTE (30s melhor para operações longas)
  testTimeout: 30000,

  // Config from LOCAL
  verbose: true,
  roots: ['<rootDir>'],
  moduleDirectories: ['node_modules', '.'],
  setupFilesAfterEnv: ['<rootDir>/tests/setup.js'],

  // Cross-platform config from REMOTE
  globals: {
    'ts-jest': {
      isolatedModules: true,
    },
  },
};

```

==================================================
📄 README.md
==================================================
```md
# Synkra AIOS: Framework Universal de Agentes IA 🚀

[![Versão
NPM](https://img.shields.io/npm/v/aios-core.svg)](https://www.npmjs.com/package/aios-core)
[![Licença: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
[![Versão Node.js](https://img.shields.io/badge/node-%3E%3D18.0.0-brightgreen.svg)](https://nodejs.org/)
[![CI](https://github.com/SynkraAI/aios-core/actions/workflows/ci.yml/badge.svg)](https://github.com/SynkraAI/aios-core/actions/workflows/ci.yml)
[![codecov](https://codecov.io/gh/SynkraAI/aios-core/branch/main/graph/badge.svg)](https://codecov.io/gh/SynkraAI/aios-core)
[![Documentação](https://img.shields.io/badge/docs-disponível-orange.svg)](https://synkra.ai)
[![Open Source](https://img.shields.io/badge/Open%20Source-Yes-success.svg)](LICENSE)
[![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg)](CONTRIBUTING.md)
[![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Contributor%20Covenant-blue.svg)](CODE_OF_CONDUCT.md)

Framework de Desenvolvimento Auto-Modificável Alimentado por IA. Fundado em Desenvolvimento Ágil Dirigido por Agentes, oferece capacidades revolucionárias para desenvolvimento dirigido por IA. Transforme qualquer domínio com expertise especializada de IA: desenvolvimento de software, entretenimento, escrita criativa, estratégia de negócios, bem-estar pessoal e muito mais.

## Comece Aqui (10 Min)

Se esta é sua primeira vez no AIOS, siga este caminho linear:

1. Instale em um projeto novo ou existente:
```bash
# novo projeto
npx aios-core init meu-projeto

# projeto existente
cd seu-projeto
npx aios-core install
```
2. Escolha sua IDE/CLI e o caminho de ativação:
- Claude Code: `/agent-name`
- Gemini CLI: `/aios-menu` → `/aios-<agente>`
- Codex CLI: `/skills` → `aios-<agente>`
- Cursor/Copilot/AntiGravity: siga os limites e workarounds em `docs/ide-integration.md`
3. Ative 1 agente e confirme o greeting.
4. Rode 1 comando inicial (`*help` ou equivalente) para validar first-value.
+ +Definição de first-value (binária): ativação de agente + greeting válido + comando inicial com output útil em <= 10 minutos. + + +## Compatibilidade de Hooks por IDE (Realidade AIOS 4.2) + +Muitos recursos avançados do AIOS dependem de eventos de ciclo de vida (hooks). A tabela abaixo mostra a paridade real entre IDEs/plataformas: + +| IDE/CLI | Paridade de Hooks vs Claude | Impacto Prático | +| --- | --- | --- | +| Claude Code | Completa (referência) | Automação máxima de contexto, guardrails e auditoria | +| Gemini CLI | Alta (eventos nativos) | Cobertura forte de automações pre/post tool e sessão | +| Codex CLI | Parcial/limitada | Parte das automações depende de `AGENTS.md`, `/skills`, MCP e fluxo operacional | +| Cursor | Sem lifecycle hooks equivalentes | Menor automação de pre/post tool; foco em regras, MCP e fluxo do agente | +| GitHub Copilot | Sem lifecycle hooks equivalentes | Menor automação de sessão/tooling; foco em instruções de repositório + MCP no VS Code | +| AntiGravity | Workflow-based (não hook-based) | Integração por workflows, não por eventos de hook equivalentes ao Claude | + +Impactos e mitigação detalhados: `docs/ide-integration.md`. + +## Acknowledgments & Attribution + +Synkra AIOS was originally derived from the [BMad Method](https://github.com/bmad-code-org/BMAD-METHOD), created by [Brian Madison](https://github.com/bmadcode) (BMad Code, LLC). We gratefully acknowledge the BMad Method for providing the foundation from which this project began. + +**Important:** This project is **NOT affiliated with, endorsed by, or sanctioned by** the BMad Method or BMad Code, LLC. Contributors appearing in the git history from the original BMad Method repository do not imply active participation in or endorsement of Synkra AIOS. + +Since its origin, AIOS has evolved significantly with its own architecture, terminology, and features (v4.x+), and does not depend on BMad for current operation. 
The BMad Method remains an excellent framework in its own right — please visit the [official BMad Method repository](https://github.com/bmad-code-org/BMAD-METHOD) to learn more. + +BMad, BMad Method, and BMad Core are trademarks of BMad Code, LLC. See [TRADEMARK.md](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/TRADEMARK.md) for usage guidelines. + +## Visão Geral + +### Premissa Arquitetural: CLI First + +O Synkra AIOS segue uma hierarquia clara de prioridades: + +``` +CLI First → Observability Second → UI Third +``` + +| Camada | Prioridade | Foco | Exemplos | +| ----------------- | ---------- | ----------------------------------------------------------------------------- | -------------------------------------------- | +| **CLI** | Máxima | Onde a inteligência vive. Toda execução, decisões e automação acontecem aqui. | Agentes (`@dev`, `@qa`), workflows, comandos | +| **Observability** | Secundária | Observar e monitorar o que acontece no CLI em tempo real. | Dashboard SSE, logs, métricas, timeline | +| **UI** | Terciária | Gestão pontual e visualizações quando necessário. | Kanban, settings, story management | + +**Princípios derivados:** + +- A CLI é a fonte da verdade - dashboards apenas observam +- Funcionalidades novas devem funcionar 100% via CLI antes de ter UI +- A UI nunca deve ser requisito para operação do sistema +- Observabilidade serve para entender o que o CLI está fazendo, não para controlá-lo + +--- + +**As Duas Inovações Chave do Synkra AIOS:** + +**1. Planejamento Agêntico:** Agentes dedicados (analyst, pm, architect) colaboram com você para criar documentos de PRD e Arquitetura detalhados e consistentes. Através de engenharia avançada de prompts e refinamento com human-in-the-loop, estes agentes de planejamento produzem especificações abrangentes que vão muito além da geração genérica de tarefas de IA. + +**2. 
Desenvolvimento Contextualizado por Engenharia:** O agente sm (Scrum Master) então transforma estes planos detalhados em histórias de desenvolvimento hiperdetalhadas que contêm tudo que o agente dev precisa - contexto completo, detalhes de implementação e orientação arquitetural incorporada diretamente nos arquivos de histórias. + +Esta abordagem de duas fases elimina tanto a **inconsistência de planejamento** quanto a **perda de contexto** - os maiores problemas no desenvolvimento assistido por IA. Seu agente dev abre um arquivo de história com compreensão completa do que construir, como construir e por quê. + +**📖 [Veja o fluxo de trabalho completo no Guia do Usuário](docs/guides/user-guide.md)** - Fase de planejamento, ciclo de desenvolvimento e todos os papéis dos agentes + +## Pré-requisitos + +- Node.js >=18.0.0 (v20+ recomendado) +- npm >=9.0.0 +- GitHub CLI (opcional, necessário para colaboração em equipe) + +> **Problemas de instalação?** Consulte o [Guia de Troubleshooting](docs/guides/installation-troubleshooting.md) + +**Guias específicos por plataforma:** + +- 📖 [Guia de Instalação para macOS](docs/installation/macos.md) +- 📖 [Guia de Instalação para Windows](docs/installation/windows.md) +- 📖 [Guia de Instalação para Linux](docs/installation/linux.md) + +**Documentação multilíngue disponível:** [Português](docs/pt/installation/) | [Español](docs/es/installation/) + +## Navegação Rápida + +### Entendendo o Fluxo de Trabalho AIOS + +**Antes de mergulhar, revise estes diagramas críticos de fluxo de trabalho que explicam como o AIOS funciona:** + +1. **[Fluxo de Planejamento (Interface Web)](docs/guides/user-guide.md#the-planning-workflow-web-ui)** - Como criar documentos de PRD e Arquitetura +2. 
**[Ciclo Principal de Desenvolvimento (IDE)](docs/guides/user-guide.md#the-core-development-cycle-ide)** - Como os agentes sm, dev e qa colaboram através de arquivos de histórias

> ⚠️ **Estes diagramas explicam 90% da confusão sobre o fluxo Synkra AIOS Agentic Agile** - É essencial entender a criação de PRD+Arquitetura, o fluxo de trabalho sm/dev/qa e como os agentes passam notas através de arquivos de histórias; isso também explica por que o AIOS NÃO é um taskmaster nem apenas um simples executor de tarefas!

### O que você gostaria de fazer?

- **[Instalar e Construir software com Equipe Ágil Full Stack de IA](#início-rápido)** → Instruções de Início Rápido
- **[Aprender como usar o AIOS](docs/guides/user-guide.md)** → Guia completo do usuário e passo a passo
- **[Ver agentes IA disponíveis](#agentes-disponíveis)** → Papéis especializados para sua equipe
- **[Explorar usos não técnicos](#-além-do-desenvolvimento-de-software---squads)** → Escrita criativa, negócios, bem-estar, educação
- **[Criar meus próprios agentes IA](#criando-seu-próprio-squad)** → Construir agentes para seu domínio
- **[Navegar Squads prontos](docs/guides/squads-overview.md)** → Veja como criar e usar equipes de agentes IA
- **[Entender a arquitetura](docs/architecture/ARCHITECTURE-INDEX.md)** → Mergulho técnico profundo
- **[Reportar problemas](https://github.com/SynkraAI/aios-core/issues)** → Bug reports e feature requests

## Importante: Mantenha Sua Instalação AIOS Atualizada

**Mantenha-se atualizado sem esforço!** Para atualizar sua instalação AIOS existente:

```bash
npx aios-core@latest install
```

Isto vai:

- ✅ Detectar automaticamente sua instalação existente
- ✅ Atualizar apenas os arquivos que mudaram
- ✅ Criar arquivos de backup `.bak` para quaisquer modificações customizadas
- ✅ Preservar suas configurações específicas do projeto

Isto facilita beneficiar-se das últimas melhorias, correções de bugs e novos agentes sem perder suas customizações!
+ +## Início Rápido + +### 🚀 Instalação via NPX (Recomendado) + +**Instale o Synkra AIOS com um único comando:** + +```bash +# Criar um novo projeto com assistente interativo moderno +npx aios-core init meu-projeto + +# Ou instalar em projeto existente +cd seu-projeto +npx aios-core install + +# Ou usar uma versão específica +npx aios-core@latest init meu-projeto +``` + +### ✨ Assistente de Instalação Moderno + +O Synkra AIOS agora inclui uma experiência de instalação interativa de última geração, inspirada em ferramentas modernas como Vite e Next.js: + +**Recursos do Instalador Interativo:** + +- 🎨 **Interface Moderna**: Prompts coloridos e visuais com @clack/prompts +- ✅ **Validação em Tempo Real**: Feedback instantâneo sobre entradas inválidas +- 🔄 **Indicadores de Progresso**: Spinners para operações longas (cópia de arquivos, instalação de deps) +- 📦 **Seleção Multi-Componente**: Escolha quais componentes instalar com interface intuitiva +- ⚙️ **Escolha de Gerenciador de Pacotes**: Selecione entre npm, yarn ou pnpm +- ⌨️ **Suporte a Cancelamento**: Ctrl+C ou ESC para sair graciosamente a qualquer momento +- 📊 **Resumo de Instalação**: Visualize todas as configurações antes de prosseguir +- ⏱️ **Rastreamento de Duração**: Veja quanto tempo levou a instalação + +**O instalador oferece:** + +- ✅ Download da versão mais recente do NPM +- ✅ Assistente de instalação interativo moderno +- ✅ Configuração automática do IDE (Codex CLI, Cursor ou Claude Code) +- ✅ Configuração de todos os agentes e fluxos de trabalho AIOS +- ✅ Criação dos arquivos de configuração necessários +- ✅ Inicialização do sistema de meta-agentes +- ✅ Verificações de saúde do sistema +- ✅ **Suporte Cross-Platform**: Testado em Windows, macOS e Linux + +> **É isso!** Sem clonar, sem configuração manual - apenas um comando e você está pronto para começar com uma experiência de instalação moderna e profissional. 
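A "validação em tempo real" citada acima segue o contrato de validadores do @clack/prompts: a função `validate` retorna uma mensagem de erro (string) ou `undefined` quando o valor é válido. Esboço ilustrativo; as regras concretas de nome de projeto abaixo são suposições, não as regras reais do instalador:

```javascript
// Esboço ilustrativo: validador de nome de projeto no formato
// esperado pelo @clack/prompts (string de erro ou undefined se válido).
function validateProjectName(value) {
  if (!value || value.trim().length === 0) {
    return 'O nome do projeto não pode ser vazio';
  }
  if (!/^[a-z0-9][a-z0-9._-]*$/.test(value)) {
    return 'Use letras minúsculas, números, ".", "_" ou "-"';
  }
  if (value.length > 214) {
    return 'Nome longo demais (limite de 214 caracteres do npm)';
  }
  return undefined; // válido
}

// Uso com @clack/prompts (ilustrativo):
// const { text } = require('@clack/prompts');
// const name = await text({
//   message: 'What is your project name?',
//   validate: validateProjectName,
// });
```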
**Pré-requisitos**: [Node.js](https://nodejs.org) v18+ necessário (v20+ recomendado) | [Troubleshooting](docs/guides/installation-troubleshooting.md)

### Atualizando uma Instalação Existente

Se você já tem o AIOS instalado:

```bash
npx aios-core@latest install
# O instalador detectará sua instalação existente e a atualizará
```

### Configure Seu IDE para Desenvolvimento AIOS

O Synkra AIOS inclui regras pré-configuradas para IDE para melhorar sua experiência de desenvolvimento:

#### Para Cursor:

1. Abra as configurações do Cursor
2. Navegue até **User Rules**
3. Copie o conteúdo de `.cursor/global-rules.md`
4. Cole na seção de regras e salve

#### Para Claude Code:

- ✅ Já configurado! O arquivo `.claude/CLAUDE.md` é carregado automaticamente
- Sync dedicado de agentes: `npm run sync:ide:claude`
- Validação dedicada: `npm run validate:claude-sync && npm run validate:claude-integration`

#### Para Codex CLI:

- ✅ Integração de primeira classe no AIOS 4.2 (pipeline de ativação e greeting compartilhado)
- ✅ Já configurado! O arquivo `AGENTS.md` na raiz é carregado automaticamente
- Opcional: sincronize agentes auxiliares com `npm run sync:ide:codex`
- Recomendado neste repositório: gerar e versionar skills locais com `npm run sync:skills:codex`
- Use `npm run sync:skills:codex:global` apenas fora deste projeto (para evitar duplicidade no `/skills`)
- Validação dedicada: `npm run validate:codex-sync && npm run validate:codex-integration`
- Guardrails de skills/paths: `npm run validate:codex-skills && npm run validate:paths`

#### Para Gemini CLI:

- ✅ Regras e agentes sincronizáveis com `npm run sync:ide:gemini`
- Arquivos gerados em `.gemini/rules.md`, `.gemini/rules/AIOS/agents/` e `.gemini/commands/*.toml`
- ✅ Hooks e settings locais no fluxo de instalação (`.gemini/hooks/` + `.gemini/settings.json`)
- ✅ Ativação rápida por slash commands (`/aios-menu`, `/aios-dev`, `/aios-architect`, etc.)
- Validação dedicada: `npm run validate:gemini-sync && npm run validate:gemini-integration`
- Paridade multi-IDE em um comando: `npm run validate:parity`

Estas regras fornecem:

- 🤖 Reconhecimento e integração de comandos de agentes
- 📋 Fluxo de trabalho de desenvolvimento dirigido por histórias
- ✅ Rastreamento automático de checkboxes
- 🧪 Padrões de teste e validação
- 📝 Padrões de código específicos do AIOS

### Início Mais Rápido com Interface Web (2 minutos)

1. **Instale o AIOS**: Execute `npx aios-core init meu-projeto`
2. **Configure seu IDE**: Siga as instruções de configuração para Codex CLI, Cursor ou Claude Code
3. **Comece a Planejar**: Ative um agente como `@analyst` para começar a criar seu briefing
4. **Use comandos AIOS**: Digite `*help` para ver comandos disponíveis
5. **Siga o fluxo**: Veja o [Guia do usuário](docs/guides/user-guide.md) para mais detalhes

### Referência de Comandos CLI

O Synkra AIOS oferece uma CLI moderna e cross-platform com comandos intuitivos:

```bash
# Gerenciamento de Projeto (com assistente interativo)
npx aios-core init <nome> [opções]
  --force            Forçar criação em diretório não vazio
  --skip-install     Pular instalação de dependências npm
  --template <nome>  Usar template específico (default, minimal, enterprise)

# Instalação e Configuração (com prompts modernos)
npx aios-core install [opções]
  --force            Sobrescrever configuração existente
  --quiet            Saída mínima durante instalação
  --dry-run          Simular instalação sem modificar arquivos

# Comandos do Sistema
npx aios-core --version     Exibir versão instalada
npx aios-core --help        Exibir ajuda detalhada
npx aios-core info          Exibir informações do sistema
npx aios-core doctor        Executar diagnósticos do sistema
npx aios-core doctor --fix  Corrigir problemas detectados automaticamente

# Manutenção
npx aios-core update        Atualizar para versão mais recente
npx aios-core uninstall     Remover Synkra AIOS
```

**Recursos da CLI:**

- ✅ **Help System Abrangente**:
`--help` em qualquer comando mostra documentação detalhada +- ✅ **Validação de Entrada**: Feedback imediato sobre parâmetros inválidos +- ✅ **Mensagens Coloridas**: Erros em vermelho, sucessos em verde, avisos em amarelo +- ✅ **Cross-Platform**: Funciona perfeitamente em Windows, macOS e Linux +- ✅ **Suporte a Dry-Run**: Teste instalações sem modificar arquivos + +### 💡 Exemplos de Uso + +#### Instalação Interativa Completa + +```bash +$ npx aios-core install + +🚀 Synkra AIOS Installation + +◆ What is your project name? +│ my-awesome-project +│ +◇ Which directory should we use? +│ ./my-awesome-project +│ +◆ Choose components to install: +│ ● Core Framework (Required) +│ ● Agent System (Required) +│ ● Squads (optional) +│ ○ Example Projects (optional) +│ +◇ Select package manager: +│ ● npm +│ ○ yarn +│ ○ pnpm +│ +◆ Initialize Git repository? +│ Yes +│ +◆ Install dependencies? +│ Yes +│ +▸ Creating project directory... +▸ Copying framework files... +▸ Initializing Git repository... +▸ Installing dependencies (this may take a minute)... +▸ Configuring environment... +▸ Running post-installation setup... + +✔ Installation completed successfully! 
(34.2s) + +Next steps: + cd my-awesome-project + aios-core doctor # Verify installation + aios-core --help # See available commands +``` + +#### Instalação Silenciosa (CI/CD) + +```bash +# Instalação automatizada sem prompts +$ npx aios-core install --quiet --force +✔ Synkra AIOS installed successfully +``` + +#### Simulação de Instalação (Dry-Run) + +```bash +# Testar instalação sem modificar arquivos +$ npx aios-core install --dry-run + +[DRY RUN] Would create: ./my-project/ +[DRY RUN] Would copy: .aios-core/ (45 files) +[DRY RUN] Would initialize: Git repository +[DRY RUN] Would install: npm dependencies +✔ Dry run completed - no files were modified +``` + +#### Diagnóstico do Sistema + +```bash +$ npx aios-core doctor + +🏥 AIOS System Diagnostics + +✔ Node.js version: v20.10.0 (meets requirement: >=18.0.0) +✔ npm version: 10.2.3 +✔ Git installed: version 2.43.0 +✔ GitHub CLI: gh 2.40.1 +✔ Synkra AIOS: v4.2.11 + +Configuration: +✔ .aios-core/ directory exists +✔ Agent files: 11 found +✔ Workflow files: 8 found +✔ Templates: 15 found + +Dependencies: +✔ @clack/prompts: ^0.7.0 +✔ commander: ^12.0.0 +✔ execa: ^9.0.0 +✔ fs-extra: ^11.0.0 +✔ picocolors: ^1.0.0 + +✅ All checks passed! Your installation is healthy. +``` + +#### Obter Ajuda + +```bash +$ npx aios-core --help + +Usage: aios-core [options] [command] + +Synkra AIOS: AI-Orchestrated System for Full Stack Development + +Options: + -V, --version output the version number + -h, --help display help for command + +Commands: + init Create new AIOS project with interactive wizard + install [options] Install AIOS in current directory + info Display system information + doctor [options] Run system diagnostics and health checks + help [command] display help for command + +Run 'aios-core --help' for detailed information about each command. 
+``` + +### Alternativa: Clonar e Construir + +Para contribuidores ou usuários avançados que queiram modificar o código fonte: + +```bash +# Clonar o repositório +git clone https://github.com/SynkraAI/aios-core.git +cd aios-core + +# Instalar dependências +npm install + +# Executar o instalador +npm run install:aios +``` + +### Configuração Rápida para Equipe + +Para membros da equipe ingressando no projeto: + +```bash +# Instalar AIOS no projeto +npx aios-core@latest install + +# Isto vai: +# 1. Detectar instalação existente (se houver) +# 2. Instalar/atualizar framework AIOS +# 3. Configurar agentes e workflows +``` + +## 🌟 Além do Desenvolvimento de Software - Squads + +O framework de linguagem natural do AIOS funciona em QUALQUER domínio. Os Squads fornecem agentes IA especializados para escrita criativa, estratégia de negócios, saúde e bem-estar, educação e muito mais. Além disso, os Squads podem expandir o núcleo do Synkra AIOS com funcionalidade específica que não é genérica para todos os casos. [Veja o Guia de Squads](docs/guides/squads-guide.md) e aprenda a criar os seus próprios! 
## Agentes Disponíveis

O Synkra AIOS vem com 11 agentes especializados:

### Agentes Meta

- **aios-master** - Agente mestre de orquestração (inclui capacidades de desenvolvimento de framework)
- **aios-orchestrator** - Orquestrador de fluxo de trabalho e coordenação de equipe

### Agentes de Planejamento (Interface Web)

- **analyst** - Especialista em análise de negócios e criação de PRD
- **pm** (Product Manager) - Gerente de produto e priorização
- **architect** - Arquiteto de sistema e design técnico
- **ux-expert** - Design de experiência do usuário e usabilidade

### Agentes de Desenvolvimento (IDE)

- **sm** (Scrum Master) - Gerenciamento de sprint e criação de histórias
- **dev** - Desenvolvedor e implementação
- **qa** - Garantia de qualidade e testes
- **po** (Product Owner) - Gerenciamento de backlog e histórias
- **devops** - Infraestrutura, worktrees e automação de CI/CD

## Documentação e Recursos

### Guias Essenciais

- 📖 **[Guia do Usuário](docs/guides/user-guide.md)** - Passo a passo completo desde a concepção até a conclusão do projeto
- 🏗️ **[Arquitetura Principal](docs/architecture/ARCHITECTURE-INDEX.md)** - Mergulho técnico profundo e design do sistema
- 🚀 **[Guia de Squads](docs/guides/squads-guide.md)** - Estenda o AIOS para qualquer domínio além do desenvolvimento de software

### Documentação Adicional

- 🤖 **[Guia de Squads](docs/guides/squads-guide.md)** - Crie e publique equipes de agentes IA
- 📋 **[Primeiros Passos](docs/getting-started.md)** - Tutorial passo a passo para iniciantes
- 🔧 **[Solução de Problemas](docs/troubleshooting.md)** - Soluções para problemas comuns
- 🎯 **[Princípios Orientadores](docs/GUIDING-PRINCIPLES.md)** - Filosofia e melhores práticas do AIOS
- 🏛️ **[Visão Geral da Arquitetura](docs/architecture/ARCHITECTURE-INDEX.md)** - Visão detalhada da arquitetura do sistema
- ⚙️ **[Guia de Ajuste de Performance](docs/performance-tuning-guide.md)** - Otimize seu fluxo de trabalho AIOS
- 🔒 **[Melhores Práticas de
Segurança](docs/security-best-practices.md)** - Segurança e proteção de dados +- 🔄 **[Guia de Migração](docs/migration-guide.md)** - Migração de versões anteriores +- 📦 **[Versionamento e Releases](docs/versioning-and-releases.md)** - Política de versões + +## 🤖 AIOS Autonomous Development Engine (ADE) + +O Synkra AIOS introduz o **Autonomous Development Engine (ADE)** - um sistema completo para desenvolvimento autônomo que transforma requisitos em código funcional. + +### 🎯 O Que é o ADE? + +O ADE é um conjunto de **7 Epics** que habilitam execução autônoma de desenvolvimento: + +| Epic | Nome | Descrição | +| ----- | ---------------- | ------------------------------------------ | +| **1** | Worktree Manager | Isolamento de branches via Git worktrees | +| **2** | Migration V2→V3 | Migração para formato autoClaude V3 | +| **3** | Spec Pipeline | Transforma requisitos em specs executáveis | +| **4** | Execution Engine | Executa specs com 13 steps + self-critique | +| **5** | Recovery System | Recuperação automática de falhas | +| **6** | QA Evolution | Review estruturado em 10 fases | +| **7** | Memory Layer | Memória persistente de padrões e insights | + +### 🔄 Fluxo Principal + +``` +User Request → Spec Pipeline → Execution Engine → QA Review → Working Code + ↓ + Recovery System + ↓ + Memory Layer +``` + +### ⚡ Quick Start ADE + +```bash +# 1. Criar spec a partir de requisito +@pm *gather-requirements +@architect *assess-complexity +@analyst *research-deps +@pm *write-spec +@qa *critique-spec + +# 2. Executar spec aprovada +@architect *create-plan +@architect *create-context +@dev *execute-subtask 1.1 + +# 3. 
QA Review +@qa *review-build STORY-42 +``` + +### 📖 Documentação ADE + +- **[Guia Completo do ADE](docs/guides/ade-guide.md)** - Tutorial passo a passo +- **[Alterações nos Agentes](docs/architecture/ADE-AGENT-CHANGES.md)** - Comandos e capabilities por agente +- **[Epic 1 - Worktree Manager](docs/architecture/ADE-EPIC1-HANDOFF.md)** +- **[Epic 2 - Migration V2→V3](docs/architecture/ADE-EPIC2-HANDOFF.md)** +- **[Epic 3 - Spec Pipeline](docs/architecture/ADE-EPIC3-HANDOFF.md)** +- **[Epic 4 - Execution Engine](docs/architecture/ADE-EPIC4-HANDOFF.md)** +- **[Epic 5 - Recovery System](docs/architecture/ADE-EPIC5-HANDOFF.md)** +- **[Epic 6 - QA Evolution](docs/architecture/ADE-EPIC6-HANDOFF.md)** +- **[Epic 7 - Memory Layer](docs/architecture/ADE-EPIC7-HANDOFF.md)** + +### 🆕 Novos Comandos por Agente + +**@devops:** + +- `*create-worktree`, `*list-worktrees`, `*merge-worktree`, `*cleanup-worktrees` +- `*inventory-assets`, `*analyze-paths`, `*migrate-agent`, `*migrate-batch` + +**@pm:** + +- `*gather-requirements`, `*write-spec` + +**@architect:** + +- `*assess-complexity`, `*create-plan`, `*create-context`, `*map-codebase` + +**@analyst:** + +- `*research-deps`, `*extract-patterns` + +**@qa:** + +- `*critique-spec`, `*review-build`, `*request-fix`, `*verify-fix` + +**@dev:** + +- `*execute-subtask`, `*track-attempt`, `*rollback`, `*capture-insights`, `*list-gotchas`, `*apply-qa-fix` + +## Criando Seu Próprio Squad + +Squads permitem estender o AIOS para qualquer domínio. Estrutura básica: + +``` +squads/seu-squad/ +├── config.yaml # Configuração do squad +├── agents/ # Agentes especializados +├── tasks/ # Fluxos de trabalho de tarefas +├── templates/ # Templates de documentos +├── checklists/ # Checklists de validação +├── data/ # Base de conhecimento +├── README.md # Documentação do squad +└── user-guide.md # Guia do usuário +``` + +Veja o [Guia de Squads](docs/guides/squads-guide.md) para instruções detalhadas. 
## Squads Disponíveis

Squads são equipes modulares de agentes IA. Veja a [Visão Geral de Squads](docs/guides/squads-overview.md) para mais informações.

### Squads Externos

- **[hybrid-ops](https://github.com/SynkraAI/aios-hybrid-ops-pedro-valerio)** - Operações híbridas humano-agente (repositório separado)

## AIOS Pro

O **AIOS Pro** (`@aios-fullstack/pro`) é o módulo premium do Synkra AIOS, oferecendo funcionalidades avançadas para equipes e projetos de maior escala.

> **Disponibilidade restrita:** O AIOS Pro está disponível exclusivamente para membros do **AIOS Cohort Advanced**. [Saiba mais sobre o programa](https://synkra.ai).

### Instalação

```bash
npm install @aios-fullstack/pro
```

### Features Premium

- **Squads Avançados** - Squads especializados com capacidades expandidas
- **Memory Layer** - Memória persistente de padrões e insights entre sessões
- **Métricas & Analytics** - Dashboard de produtividade e métricas de desenvolvimento
- **Integrações Enterprise** - Conectores para Jira, Linear, Notion e mais
- **Configuração em Camadas** - Sistema de configuração L1-L4 com herança
- **Licenciamento** - Gerenciamento de licença via `aios pro activate --key <license-key>`

Para mais informações, execute `npx aios-core pro --help` após a instalação.
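A "Configuração em Camadas L1-L4 com herança" pode ser ilustrada com um merge profundo em que camadas mais altas sobrescrevem as mais baixas. Semântica assumida apenas para ilustração; a precedência real do AIOS Pro pode diferir:

```javascript
// Esboço ilustrativo: herança de configuração em camadas (L1 → L4),
// onde camadas posteriores sobrescrevem chaves das anteriores.
function mergeLayers(...layers) {
  const merge = (base, over) => {
    const out = { ...base };
    for (const [key, value] of Object.entries(over)) {
      out[key] =
        value && typeof value === 'object' && !Array.isArray(value)
          ? merge(out[key] || {}, value) // merge recursivo de objetos
          : value; // valores primitivos/arrays sobrescrevem
    }
    return out;
  };
  return layers.reduce(merge, {});
}

// Exemplo: L1 (defaults) até L4 (overrides do projeto)
const config = mergeLayers(
  { log: { level: 'info' }, telemetry: true }, // L1
  { log: { level: 'debug' } },                 // L2
  {},                                          // L3
  { telemetry: false }                         // L4
);
// config.log.level === 'debug', config.telemetry === false
```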
+

## Support

- 🐛 [Issue Tracker](https://github.com/SynkraAI/aios-core/issues) - Bug reports and feature requests
- 💡 [Feature Process](docs/FEATURE_PROCESS.md) - How to propose new functionality
- 📋 [How to Contribute](CONTRIBUTING.md)
- 🗺️ [Roadmap](docs/roadmap.md) - See what we are building
- 🤖 [Squads Guide](docs/guides/squads-guide.md) - Create teams of AI agents

## Git Workflow and Validation

Synkra AIOS implements a multi-layer validation system to ensure code quality and consistency:

### 🛡️ Defense in Depth - 3 Validation Layers

**Layer 1: Pre-commit (Local - Fast)**

- ✅ ESLint - Code quality
- ✅ TypeScript - Type checking
- ⚡ Performance: <5s
- 💾 Cache enabled

**Layer 2: Pre-push (Local - Story Validation)**

- ✅ Story checkbox validation
- ✅ Status consistency
- ✅ Required sections

**Layer 3: CI/CD (Cloud - Required for merge)**

- ✅ All tests
- ✅ Test coverage (80% minimum)
- ✅ Full validations
- ✅ GitHub Actions

### 📖 Detailed Documentation

- 📋 **[Complete Git Workflow Guide](docs/git-workflow-guide.md)** - Detailed workflow guide
- 📋 **[CONTRIBUTING.md](CONTRIBUTING.md)** - Contribution guide

### Available Commands

```bash
# Local validations
npm run lint # ESLint
npm run typecheck # TypeScript
npm test # Tests
npm run test:coverage # Tests with coverage

# AIOS validator
node .aios-core/utils/aios-validator.js pre-commit # Pre-commit validation
node .aios-core/utils/aios-validator.js pre-push # Pre-push validation
node .aios-core/utils/aios-validator.js stories # Validate all stories
```

### Branch Protection

Configure protection for the master branch with:

```bash
node scripts/setup-branch-protection.js
```

Requires:

- GitHub CLI (gh) installed and authenticated
- Admin access to the repository

## Contributing

**We are excited about contributions and welcome your ideas, improvements, and Squads!** 🎉

To contribute:

1. Fork the repository
2. Create a branch for your feature (`git checkout -b feature/MinhaNovaFeature`)
3. Commit your changes (`git commit -m 'feat: Adicionar nova feature'`)
4. Push to the branch (`git push origin feature/MinhaNovaFeature`)
5. Open a Pull Request

See also:

- 📋 [How to Contribute with Pull Requests](docs/how-to-contribute-with-pull-requests.md)
- 📋 [Git Workflow Guide](docs/git-workflow-guide.md)

## 📄 Legal

| Document | English | Portuguese |
| --------------------- | ------------------------------------------- | ------------------------------------- |
| **License** | [MIT License](LICENSE) | - |
| **License Model** | [Core vs Pro](docs/legal/license-clarification.md) | - |
| **Privacy** | [Privacy Policy](docs/legal/privacy.md) | - |
| **Terms of Use** | [Terms of Use](docs/legal/terms.md) | - |
| **Code of Conduct** | [Code of Conduct](CODE_OF_CONDUCT.md) | [PT-BR](docs/pt/code-of-conduct.md) |
| **Contributing** | [Contributing](CONTRIBUTING.md) | [PT-BR](docs/pt/contributing.md) |
| **Security** | [Security](docs/security.md) | [PT-BR](docs/pt/security.md) |
| **Community** | [Community](docs/community.md) | [PT-BR](docs/pt/community.md) |
| **Roadmap** | [Roadmap](docs/roadmap.md) | [PT-BR](docs/pt/roadmap.md) |
| **Changelog** | [Version History](CHANGELOG.md) | - |

## Acknowledgments

This project was originally derived from the [BMad Method](https://github.com/bmad-code-org/BMAD-METHOD) by [Brian Madison](https://github.com/bmadcode). We thank Brian and all BMad Method contributors for the original work that made this project possible.

**Note:** Some contributors shown in the GitHub contributors graph are inherited from the original BMad Method git history and do not represent active participation in or endorsement of Synkra AIOS.
+

[![Contributors](https://contrib.rocks/image?repo=SynkraAI/aios-core)](https://github.com/SynkraAI/aios-core/graphs/contributors)

Built with ❤️ for the AI-assisted development community

---

**[⬆ Back to top](#synkra-aios-framework-universal-de-agentes-ia-)**

```

==================================================
📄 .gitignore
==================================================
```txt
# Dependencies
node_modules/
pnpm-lock.yaml
bun.lock
deno.lock

# Logs
logs/
*.log
npm-debug.log*
*.tmp
*.bak
.cache/

# Installation logs and backups (Story 1.9 - Error Handling & Rollback)
.aios-install.log
.aios-install.log.*
.aios-backup/
# Backup directories (any timestamp)
.aios-core.backup*/

# Build output
build/*.txt
web-bundles/

# Environment variables
.env
.env.local
.env.*.local
.env.production
.env.development
.env.test

# System files
.DS_Store
Thumbs.db
# Windows temporary file
nul

# Private keys and certificates
*.key
*.pem
*.p12
*.pfx
*.crt
*.cer
*.p7b
*.p7c

# Secrets and credentials directories
secrets/
credentials/

# Private local commands (not for repository)
.claude/commands/mmosMapper/
.claude/commands/hybridOps/

# Private local squads (not for repository)
squads/
keys/
*.secret
*.credentials

# Binary installers and development artifacts
# Windows installer packages
*.msi
# Executable binaries
*.exe
# Development batch scripts (sync-agents.bat, etc.)
*.bat
+
# Development shell scripts
*.sh
# Agent sync scripts
sync-*.bat
sync-*.sh

# Development tools and configs
# Note: .husky/, .prettierrc, .prettierignore are TRACKED for pre-commit hooks (Story 3.1)

# IDE and editor configs
.windsurf/
!.windsurf/rules/
!.windsurf/rules/agents/
# Cursor IDE personal rules and configurations
.cursor/rules/
!.cursor/rules/agents/

# AI assistant files
CLAUDE.md
.claude
!.claude/commands/
!.claude/commands/AIOS/
!.claude/commands/AIOS/agents/
!.claude/CLAUDE.md
!.claude/rules/
.gemini
!.gemini/
!.gemini/rules/
!.gemini/rules/AIOS/
!.gemini/rules/AIOS/agents/
!.gemini/rules/AIOS/agents/*.md

# AI Session Artifacts (Story 6.1.2.6 - Decision Logging)
# Auto-generated decision logs from yolo mode executions
.ai/
*.decision-log.md

# Project-specific
# Note: .aios-core is SOURCE CODE for framework-development mode
# It should NOT be ignored in the main repository
# Only ignore backup directories
.aios-core.backup*/
test-project-install/*
sample-project/*

# Expansion Projects (separate from core AIOS-FULLSTACK)
# These folders are from other projects and should NOT be in this repository
# AIOS expansion project files
.aios/
# Project status cache (Story 6.1.2.4 - auto-generated)
.aios/project-status.yaml
# Cursor IDE project files
.cursor/
# Separate memory layer project
aios-memory-layer-mvp/
# Memory squad
memory/
# Performance squad
performance/
# Project-specific scripts (not part of core)
scripts/
# Security squad
security/
# Telemetry squad
telemetry/
# Test files for expansion projects
tests/

# AIOS Development Artifacts (generated during workflow execution)
# These files are created by agents during development and should NOT be in version control
# NOTE: docs/prd/ is TRACKED - it contains important product documentation
# Test artifacts and quality assurance files
qa/
# Epic retrospective documents
epic-retrospective.md
# Statistics and analysis files
*.stats.md
# Flattened codebase exports
flattened-codebase.xml
# Test result outputs
test-results.txt
# Test results in any subdirectory
**/test-results.txt

# Test coverage reports (generated by test runs)
coverage/
*.lcov
lcov-report/

# AIOS temporary build/analysis files
.aios/reports/
.aios/temp-*.js

# Build outputs (generated from source)
aios-core/index.d.ts
aios-core/index.esm.js
aios-core/*.map

# Applications (separate deployments, NOT part of npm package)
apps/

# External projects (separate repositories)
aios-guide-app/
examples/
!docs/examples/
supabase/

# Output directories (generated artifacts)
outputs/

# Test fixtures (generated during tests)
tests/*/__fixtures__/

# UI/UX design artifacts
docs/ui-prompts.pdf

# Keep these main planning documents (DO commit these)
# docs/prd.md
# docs/architecture.md
# docs/front-end-spec.md
# docs/project-brief.md
# docs/epics/ - Epic definitions
# docs/stories/ - Story files (v2.1, v2.2, etc)
# docs/qa/ - QA gates and reports
# docs/architecture/ - Architecture decisions and reviews
# docs/decisions/ - Decision logs and learnings (except roundtables)
# NOTE: docs/standards/ is IGNORED - contains proprietary methodology (see below)
# NOTE: docs/one-pagers/ is IGNORED - contains internal business decisions (see below)

# Windows reserved device names
NUL
CON
AUX
PRN

# ============================================
# GENERATED ARTIFACTS (Story 6.12)
# ============================================

# Framework runtime artifacts
.aios-core/.session/
.aios-core/manifests/*.csv
.aios-core/core/registry/service-registry.json

# Quality metrics (generated during testing)
.aios/data/quality-metrics.json
.aios-core/quality/metrics-*.json
.aios-core/data/quality-metrics.json

# Session data
.aios-core/.session/current-session.json

# ============================================
# ARCHIVED DOCUMENTATION (Story 6.12)
# 
============================================ +# Historical documentation moved to .github/deprecated-docs/ +# These are internal development artifacts, not user documentation +# NOTE: Keep archived in repo for historical reference, +# but prevent new files from being added accidentally +# .github/deprecated-docs/ # TRACKED - contains archived docs + +# ============================================ +# DEVELOPMENT ARTIFACTS +# ============================================ +# Office documents should not be in repo +*.docx +*.xlsx +*.pptx + +# Test/sample files (prefixed patterns) +test-sample-*.md +*-sample-*.md + +# Backup/archive patterns +*.backup +*.old +*~ + +# TypeScript incremental build cache +.tsbuildinfo +*.tsbuildinfo + +# Runtime AIOS artifacts (explicit rules without inline comments) +.aios/ +.aios/project-status.yaml +.aios/notifications/ +.aios/sessions/ + +# ESLint cache +.eslintcache + +# ============================================ +# PROPRIETARY/INTERNAL CONTENT (Story TD-5) +# ============================================ +# Content that is NOT part of the public aios-core framework +# These are SynkraAI-specific methodologies and business documents + +# Proprietary Process Mapping Frameworks (Pedro Valério methodology) +docs/standards/ + +# Internal Business Decisions and Strategy +docs/one-pagers/ + +# Internal PRDs (product requirements - internal development plans) +docs/prd/ + +# Internal Decisions (business strategy, architecture decisions) +docs/decisions/ + +# Internal Development Stories (sprint backlogs, story tracking) +# Users don't need development history - only documentation +docs/stories/ + +# Internal QA Reports and Assessments +docs/qa/ + +# Internal Epics (roadmap, partner strategies, business plans) +# Contains sensitive info: partner names, financials, timelines +docs/epics/ + +# Squad Design Files (test artifacts) +squads/.designs/ + +# ============================================ +# MCP Configuration (contains API keys) 
+.mcp.json + +# Nogic (code intelligence - local config) +.nogic/ + +# MCP/PLAYWRIGHT ARTIFACTS +# ============================================ +# Screenshots and sensitive data from MCP tools +.playwright-mcp/ +*.recovery-codes.txt +npm-recovery-codes.txt + +# ============================================ +# PRO SUBMODULE (ADR-PRO-001 — git submodule, NOT ignored) +# ============================================ +# pro/ is a git submodule pointing to SynkraAI/aios-pro (private) +# It is tracked by git via .gitmodules — do NOT add pro/ to gitignore +# Pro submodule contents excluded from npm publish (not in package.json files array) + +# ============================================ +# LOCAL CONFIG (ADR-PRO-002 — machine-specific secrets, never committed) +# ============================================ +.aios-core/local-config.yaml + +# ============================================ +# IDS REGISTRY (Story IDS-3 — Self-Updating Registry) +# ============================================ +.aios-core/data/registry-backups/ +.aios-core/data/.entity-registry.lock +.aios-core/data/registry-update-log.jsonl + +# ============================================ +# .AIOS-CORE INTERNAL/GENERATED FILES +# ============================================ + +# Deprecated documentation (superseded by V2.1-COMPLETE) +.aios-core/docs/standards/AIOS-LIVRO-DE-OURO.md +.aios-core/docs/standards/AIOS-LIVRO-DE-OURO-V2.1.md +.aios-core/docs/standards/AIOS-LIVRO-DE-OURO-V2.1-SUMMARY.md +.aios-core/docs/standards/AIOS-FRAMEWORK-MASTER.md +.aios-core/docs/standards/V3-ARCHITECTURAL-DECISIONS.md + +# Duplicate docs (originals in .aios-core/core/docs/) +.aios-core/docs/SHARD-TRANSLATION-GUIDE.md +.aios-core/docs/component-creation-guide.md +.aios-core/docs/template-syntax.md +.aios-core/docs/troubleshooting-guide.md +.aios-core/docs/session-update-pattern.md + +# Generated runtime files +.aios-core/.session/ +.aios-core/.session/current-session.json + +# Generated manifests (created by framework) 
+.aios-core/manifests/*.csv + +# Internal audit results +.aios-core/infrastructure/tests/utilities-audit-results.json + +# ============================================ +# TEMPORARY FILES +# ============================================ +# Windows path-encoded temporary files (malformed path files) +C:Users* +*temp-catalog.txt +.eslintcache + +# ============================================ +# SYNAPSE RUNTIME (auto-managed) +# ============================================ +.synapse/sessions/ +.synapse/cache/ +.synapse/metrics/ + +# ============================================ +# SYNAPSE PACKAGE (generated by scripts/package-synapse.js) +# ============================================ +synapse-package.zip +.synapse-package/ + +# ============================================ +# MMOS IMPORT (TEMPORARY - Story MMOS-Sync) +# Remove entries after validation +# ============================================ +scripts/glue/ +.claude/skills/ +.claude/hooks/ +!.claude/hooks/synapse-engine.cjs +!.claude/hooks/precompact-session-digest.cjs +.claude/templates/ +.claude/agents/ +.claude/agent-memory/ +.claude/setup/ +docs/guides/aios-workflows/ +docs/guides/glue-script-guide.md + +# Local Gemini project rules generated/managed per workspace +.gemini/rules.md + +``` + +================================================== +📄 package.json +================================================== +```json +{ + "name": "aios-core", + "version": "4.2.13", + "description": "Synkra AIOS: AI-Orchestrated System for Full Stack Development - Core Framework", + "bin": { + "aios": "bin/aios.js", + "aios-core": "bin/aios.js", + "aios-minimal": "bin/aios-minimal.js" + }, + "preferGlobal": false, + "workspaces": [ + "packages/*" + ], + "files": [ + "bin/", + "scripts/", + "packages/", + ".aios-core/", + ".claude/CLAUDE.md", + ".claude/rules/", + ".claude/hooks/", + "pro/license/", + "README.md", + "LICENSE" + ], + "scripts": { + "format": "prettier --write \"**/*.md\"", + "test": "jest", + "test:watch": "jest 
--watch", + "test:coverage": "jest --coverage", + "test:health-check": "mocha tests/health-check/**/*.test.js --timeout 30000", + "lint": "eslint . --cache --cache-location .eslintcache", + "typecheck": "tsc --noEmit", + "release": "semantic-release", + "release:test": "semantic-release --dry-run --no-ci || echo 'Config test complete - authentication errors are expected locally'", + "generate:manifest": "node scripts/generate-install-manifest.js", + "validate:manifest": "node scripts/validate-manifest.js", + "validate:structure": "node .aios-core/infrastructure/scripts/source-tree-guardian/index.js", + "validate:agents": "node .aios-core/infrastructure/scripts/validate-agents.js", + "sync:ide": "node .aios-core/infrastructure/scripts/ide-sync/index.js sync", + "sync:ide:validate": "node .aios-core/infrastructure/scripts/ide-sync/index.js validate", + "sync:ide:check": "node .aios-core/infrastructure/scripts/ide-sync/index.js validate --strict", + "sync:ide:claude": "node .aios-core/infrastructure/scripts/ide-sync/index.js sync --ide claude-code", + "sync:ide:codex": "node .aios-core/infrastructure/scripts/ide-sync/index.js sync --ide codex", + "sync:ide:gemini": "node .aios-core/infrastructure/scripts/ide-sync/index.js sync --ide gemini", + "sync:ide:github-copilot": "node .aios-core/infrastructure/scripts/ide-sync/index.js sync --ide github-copilot", + "sync:ide:antigravity": "node .aios-core/infrastructure/scripts/ide-sync/index.js sync --ide antigravity", + "validate:claude-sync": "node .aios-core/infrastructure/scripts/ide-sync/index.js validate --ide claude-code --strict", + "validate:claude-integration": "node .aios-core/infrastructure/scripts/validate-claude-integration.js", + "validate:codex-sync": "node .aios-core/infrastructure/scripts/ide-sync/index.js validate --ide codex --strict", + "validate:codex-integration": "node .aios-core/infrastructure/scripts/validate-codex-integration.js", + "validate:gemini-sync": "node 
.aios-core/infrastructure/scripts/ide-sync/index.js validate --ide gemini --strict", + "validate:github-copilot-sync": "node .aios-core/infrastructure/scripts/ide-sync/index.js validate --ide github-copilot --strict", + "validate:antigravity-sync": "node .aios-core/infrastructure/scripts/ide-sync/index.js validate --ide antigravity --strict", + "validate:gemini-integration": "node .aios-core/infrastructure/scripts/validate-gemini-integration.js", + "sync:skills:codex": "node .aios-core/infrastructure/scripts/codex-skills-sync/index.js", + "sync:skills:codex:global": "node .aios-core/infrastructure/scripts/codex-skills-sync/index.js --global --global-only", + "validate:codex-skills": "node .aios-core/infrastructure/scripts/codex-skills-sync/validate.js --strict", + "validate:paths": "node .aios-core/infrastructure/scripts/validate-paths.js", + "validate:parity": "node .aios-core/infrastructure/scripts/validate-parity.js", + "validate:semantic-lint": "node scripts/semantic-lint.js", + "manifest:ensure": "node scripts/ensure-manifest.js", + "sync:ide:cursor": "node .aios-core/infrastructure/scripts/ide-sync/index.js sync --ide cursor", + "prepublishOnly": "npm run generate:manifest && npm run validate:manifest", + "prepare": "husky" + }, + "dependencies": { + "@clack/prompts": "^0.11.0", + "@kayvan/markdown-tree-parser": "^1.5.0", + "ajv": "^8.17.1", + "ajv-formats": "^3.0.1", + "ansi-to-html": "^0.7.2", + "chalk": "^4.1.2", + "chokidar": "^3.5.3", + "cli-progress": "^3.12.0", + "commander": "^12.1.0", + "execa": "^5.1.1", + "fast-glob": "^3.3.3", + "fs-extra": "^11.3.2", + "glob": "^10.4.4", + "handlebars": "^4.7.8", + "inquirer": "^8.2.6", + "js-yaml": "^4.1.0", + "ora": "^5.4.1", + "picocolors": "^1.1.1", + "proper-lockfile": "^4.1.2", + "semver": "^7.7.2", + "validator": "^13.15.15" + }, + "keywords": [ + "ai", + "aios", + "agile", + "agents", + "orchestrator", + "fullstack", + "development", + "cli", + "cross-platform", + "interactive", + "wizard", + "modern-ux", 
+ "vite-style", + "automation" + ], + "author": "Synkra AIOS Team", + "license": "MIT", + "repository": { + "type": "git", + "url": "git+https://github.com/SynkraAI/aios-core.git" + }, + "bugs": { + "url": "https://github.com/SynkraAI/aios-core/issues" + }, + "homepage": "https://github.com/SynkraAI/aios-core#readme", + "engines": { + "node": ">=18.0.0", + "npm": ">=9.0.0" + }, + "devDependencies": { + "@semantic-release/changelog": "^6.0.3", + "@semantic-release/git": "^10.0.1", + "@types/jest": "^30.0.0", + "@typescript-eslint/eslint-plugin": "^8.46.2", + "@typescript-eslint/parser": "^8.46.2", + "conventional-changelog-conventionalcommits": "^9.1.0", + "eslint": "^9.38.0", + "husky": "^9.1.7", + "jest": "^30.2.0", + "lint-staged": "^16.1.1", + "mocha": "^11.7.5", + "prettier": "^3.5.3", + "semantic-release": "^25.0.2", + "typescript": "^5.9.3", + "yaml-lint": "^1.7.0" + }, + "lint-staged": { + "*.{js,mjs,cjs,ts}": [ + "eslint --fix --cache --cache-location .eslintcache", + "prettier --write" + ], + "*.md": [ + "prettier --write", + "node scripts/semantic-lint.js --staged" + ], + ".aios-core/development/agents/*.md": [ + "npm run sync:ide" + ] + }, + "overrides": { + "tar": "^7.5.7", + "diff": "^8.0.3" + } +} + +``` + +================================================== +📄 CONTRIBUTING.md +================================================== +```md +# Contributing to Synkra AIOS + +> **[Versao em Portugues](CONTRIBUTING-PT.md)** + +Welcome to AIOS! Thank you for your interest in contributing. This guide will help you understand our development workflow, contribution process, and how to submit your changes. 
+ +## Table of Contents + +- [Quick Start](#quick-start) +- [Types of Contributions](#types-of-contributions) +- [Development Workflow](#development-workflow) +- [Contributing Agents](#contributing-agents) +- [Contributing Tasks](#contributing-tasks) +- [Contributing Squads](#contributing-squads) +- [Code Review Process](#code-review-process) +- [Validation System](#validation-system) +- [Code Standards](#code-standards) +- [Testing Requirements](#testing-requirements) +- [Frequently Asked Questions](#frequently-asked-questions) +- [Getting Help](#getting-help) +- [Working with Pro](#working-with-pro) + +--- + +## Quick Start + +### 1. Fork and Clone + +```bash +# Fork via GitHub UI, then clone your fork +git clone https://github.com/YOUR_USERNAME/aios-core.git +cd aios-core + +# Add upstream remote +git remote add upstream https://github.com/SynkraAI/aios-core.git +``` + +### 2. Set Up Development Environment + +**Prerequisites:** + +- Node.js >= 20.0.0 +- npm +- Git +- GitHub CLI (`gh`) - optional but recommended + +```bash +# Install dependencies +npm install + +# Verify setup +npm test +npm run lint +npm run typecheck +``` + +### 3. Create a Feature Branch + +```bash +git checkout -b feature/your-feature-name +``` + +**Branch Naming Conventions:** +| Prefix | Use For | +|--------|---------| +| `feature/` | New features, agents, tasks | +| `fix/` | Bug fixes | +| `docs/` | Documentation updates | +| `refactor/` | Code refactoring | +| `test/` | Test additions/improvements | + +### 4. Make Your Changes + +Follow the relevant guide below for your contribution type. + +### 5. Run Local Validation + +```bash +npm run lint # Code style +npm run typecheck # Type checking +npm test # Run tests +npm run build # Verify build +``` + +### 6. Push and Create PR + +```bash +git push origin feature/your-feature-name +``` + +Then create a Pull Request on GitHub targeting `main` branch. 
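The branch-naming conventions above lend themselves to a mechanical check. As a small illustration (our own sketch, not part of the repo's actual tooling or hooks):

```javascript
// Illustrative helper - not part of aios-core tooling.
// Checks a branch name against the prefixes from the table above.
const ALLOWED_PREFIXES = ['feature/', 'fix/', 'docs/', 'refactor/', 'test/'];

function isValidBranchName(name) {
  // Require a non-empty suffix after the prefix (bare "feature/" is invalid).
  return ALLOWED_PREFIXES.some(
    (prefix) => name.startsWith(prefix) && name.length > prefix.length,
  );
}

console.log(isValidBranchName('feature/your-feature-name')); // true
console.log(isValidBranchName('my-cool-branch')); // false
```

A check like this could run from a local hook, but the repository's own validation layers are the source of truth.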
+

---

## Types of Contributions

| Contribution | Description | Difficulty |
| ----------------- | ------------------------------------ | ----------- |
| **Documentation** | Fix typos, improve guides | Easy |
| **Bug Fixes** | Fix reported issues | Easy-Medium |
| **Tasks** | Add new task workflows | Medium |
| **Agents** | Create new AI agent personas | Medium |
| **Squads** | Bundle of agents + tasks + workflows | Advanced |
| **Core Features** | Framework improvements | Advanced |

---

## Development Workflow

### Commit Conventions

We use [Conventional Commits](https://www.conventionalcommits.org/):

```
<type>[optional scope]: <description>

[optional body]
```

**Types:** `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`

**Examples:**

```bash
git commit -m "feat(agent): add security-auditor agent"
git commit -m "fix: resolve memory leak in config loader"
git commit -m "docs: update contribution guide"
```

### Pull Request Process

1. **Create PR** targeting `main` branch
2. **Automated checks** run (lint, typecheck, test, build)
3. **CodeRabbit review** provides AI-powered feedback
4. **Maintainer review** - at least 1 approval required
5. **Merge** after all checks pass

---

## Contributing Agents

Agents are AI personas with specific expertise and commands.
+ +### Agent File Location + +``` +.aios-core/development/agents/your-agent.md +``` + +### Required Agent Structure + +```yaml +agent: + name: AgentName + id: agent-id # kebab-case, unique + title: Descriptive Title + icon: emoji + whenToUse: 'When to activate this agent' + +persona_profile: + archetype: Builder | Analyst | Guardian | Operator | Strategist + + communication: + tone: pragmatic | friendly | formal | analytical + emoji_frequency: none | low | medium | high + + vocabulary: + - domain-term-1 + - domain-term-2 + + greeting_levels: + minimal: 'Short greeting' + named: 'Named greeting with personality' + archetypal: 'Full archetypal greeting' + + signature_closing: 'Signature phrase' + +persona: + role: "Agent's primary role" + style: 'Communication style' + identity: "Agent's identity description" + focus: 'What the agent focuses on' + + core_principles: + - Principle 1 + - Principle 2 + +commands: + - help: Show available commands + - custom-command: Command description + +dependencies: + tasks: + - related-task.md + tools: + - tool-name +``` + +### Agent Contribution Checklist + +- [ ] Agent ID is unique and uses kebab-case +- [ ] `persona_profile` is complete with archetype and communication +- [ ] All commands have descriptions +- [ ] Dependencies list all required tasks +- [ ] No hardcoded credentials or sensitive data +- [ ] Follows existing patterns in the codebase + +### PR Template for Agents + +Use the **Agent Contribution** template when creating your PR. + +--- + +## Contributing Tasks + +Tasks are executable workflows that agents can run. + +### Task File Location + +``` +.aios-core/development/tasks/your-task.md +``` + +### Required Task Structure + +```markdown +# Task Name + +**Description:** What this task does +**Agent(s):** @dev, @qa, etc. +**Elicit:** true | false + +--- + +## Prerequisites + +- Prerequisite 1 +- Prerequisite 2 + +## Steps + +### Step 1: First Step + +Description of what to do. 
+ +**Elicitation Point (if elicit: true):** + +- Question to ask user +- Options to present + +### Step 2: Second Step + +Continue with more steps... + +## Deliverables + +- [ ] Deliverable 1 +- [ ] Deliverable 2 + +## Error Handling + +If X happens, do Y. + +--- + +## Dependencies + +- `dependency-1.md` +- `dependency-2.md` +``` + +### Task Contribution Checklist + +- [ ] Task has clear description and purpose +- [ ] Steps are sequential and logical +- [ ] Elicitation points are clear (if applicable) +- [ ] Deliverables are well-defined +- [ ] Error handling guidance included +- [ ] Dependencies exist in the codebase + +### PR Template for Tasks + +Use the **Task Contribution** template when creating your PR. + +--- + +## Contributing Squads + +Squads are bundles of related agents, tasks, and workflows. + +### Squad Structure + +``` +your-squad/ +├── manifest.yaml # Squad metadata +├── agents/ +│ └── your-agent.md +├── tasks/ +│ └── your-task.md +└── workflows/ + └── your-workflow.yaml +``` + +### Squad Manifest + +```yaml +name: your-squad +version: 1.0.0 +description: What this squad does +author: Your Name +dependencies: + - base-squad (optional) +agents: + - your-agent +tasks: + - your-task +``` + +### Squad Resources + +- [Squads Guide](docs/guides/squads-guide.md) - Complete documentation +- [Squad Template](templates/squad/) - Start from a working template +- [Squad Discussions](https://github.com/SynkraAI/aios-core/discussions/categories/ideas) - Share ideas + +--- + +## Code Review Process + +### Automated Checks + +When you submit a PR, the following checks run automatically: + +| Check | Description | Required | +| -------------- | ---------------------- | -------- | +| **ESLint** | Code style and quality | Yes | +| **TypeScript** | Type checking | Yes | +| **Build** | Build verification | Yes | +| **Tests** | Jest test suite | Yes | +| **Coverage** | Minimum 80% coverage | Yes | + +### CodeRabbit AI Review + +[CodeRabbit](https://coderabbit.ai) 
automatically reviews your PR and provides feedback on: + +- Code quality and best practices +- Security concerns +- AIOS-specific patterns (agents, tasks, workflows) +- Performance issues + +**Severity Levels:** + +| Level | Action Required | +| ------------ | ---------------------------------------- | +| **CRITICAL** | Must fix before merge | +| **HIGH** | Strongly recommended to fix | +| **MEDIUM** | Consider fixing or document as tech debt | +| **LOW** | Optional improvement | + +**Responding to CodeRabbit:** + +- Address CRITICAL and HIGH issues before requesting review +- MEDIUM issues can be documented for follow-up +- LOW issues are informational + +### Maintainer Review + +After automated checks pass, a maintainer will: + +1. Verify changes meet project standards +2. Check for security implications +3. Ensure documentation is updated +4. Approve or request changes + +### Merge Requirements + +- [ ] All CI checks pass +- [ ] At least 1 maintainer approval +- [ ] All conversations resolved +- [ ] No merge conflicts +- [ ] Branch is up to date with main + +--- + +## Validation System + +AIOS implements a **Defense in Depth** strategy with 3 validation layers: + +### Layer 1: Pre-commit (Local) + +**Performance:** < 5 seconds + +- ESLint with cache +- TypeScript incremental compilation +- IDE sync (auto-stages IDE command files) + +### Layer 2: Pre-push (Local) + +**Performance:** < 2 seconds + +- Story checkbox validation +- Status consistency checks + +### Layer 3: CI/CD (Cloud) + +**Performance:** 2-5 minutes + +- Full lint and type checking +- Complete test suite +- Coverage reporting +- Story validation +- Branch protection rules + +--- + +## Code Standards + +### JavaScript/TypeScript + +- ES2022 features +- Prefer `const` over `let` +- Use async/await over promises +- Add JSDoc comments for public APIs +- Follow existing code style + +### File Organization + +``` +.aios-core/ +├── development/ +│ ├── agents/ # Agent definitions +│ ├── tasks/ # Task 
workflows +│ └── workflows/ # Multi-step workflows +├── core/ # Core utilities +└── product/ + └── templates/ # Document templates + +docs/ +├── guides/ # User guides +└── architecture/ # System architecture +``` + +### ESLint & TypeScript + +- Extends: `eslint:recommended`, `@typescript-eslint/recommended` +- Target: ES2022 +- Strict mode enabled +- No console.log in production (warnings) + +--- + +## Testing Requirements + +### Coverage Requirements + +- **Minimum:** 80% coverage (branches, functions, lines, statements) +- **Unit Tests:** Required for all new functions +- **Integration Tests:** Required for workflows + +### Running Tests + +```bash +npm test # Run all tests +npm run test:coverage # With coverage report +npm run test:watch # Watch mode +npm test -- path/to/test.js # Specific file +``` + +### Writing Tests + +```javascript +describe('MyModule', () => { + it('should do something', () => { + const result = myFunction(); + expect(result).toBe(expected); + }); +}); +``` + +--- + +## Frequently Asked Questions + +### Q: How long does review take? + +**A:** We aim for first review within 24-48 hours. Complex changes may take longer. + +### Q: Can I contribute without tests? + +**A:** Tests are strongly encouraged. For documentation-only changes, tests may not be required. + +### Q: What if my PR has conflicts? + +**A:** Rebase your branch on latest main: + +```bash +git fetch upstream +git rebase upstream/main +git push --force-with-lease +``` + +### Q: Can I contribute in Portuguese? + +**A:** Yes! We accept PRs in Portuguese. See [CONTRIBUTING-PT.md](CONTRIBUTING-PT.md). + +### Q: How do I become a maintainer? + +**A:** Consistent, high-quality contributions over time. Start with small fixes and work up to larger features. + +### Q: My CI checks are failing. What do I do? 
+ +**A:** Check the GitHub Actions logs: + +```bash +gh pr checks # View PR check status +``` + +Common fixes: + +- Run `npm run lint -- --fix` for style issues +- Run `npm run typecheck` to see type errors +- Ensure tests pass locally before pushing + +--- + +## Getting Help + +- **GitHub Issues:** [Open an issue](https://github.com/SynkraAI/aios-core/issues) +- **Discussions:** [Start a discussion](https://github.com/SynkraAI/aios-core/discussions) +- **Community:** [COMMUNITY.md](COMMUNITY.md) + +--- + +## Working with Pro + +AIOS uses an Open Core model with a private `pro/` git submodule (see [ADR-PRO-001](docs/architecture/adr/adr-pro-001-repository-strategy.md)). + +### For Open-Source Contributors + +**You do NOT need the pro/ submodule.** The standard clone works perfectly: + +```bash +git clone https://github.com/SynkraAI/aios-core.git +cd aios-core +npm install && npm test # All tests pass without pro/ +``` + +The `pro/` directory will simply not exist in your clone — this is expected and all features, tests, and CI pass without it. + +### For Team Members (with Pro Access) + +```bash +# Clone with submodule +git clone --recurse-submodules https://github.com/SynkraAI/aios-core.git + +# Or add to existing clone +git submodule update --init pro +``` + +**Push order:** Always push `pro/` changes first, then `aios-core`. + +### Future: CLI Setup + +```bash +# Coming in a future release +aios setup --pro +``` + +For the complete developer workflow guide, see [Pro Developer Workflow](docs/guides/workflows/pro-developer-workflow.md). 
+ +--- + +## Additional Resources + +- [Community Guide](COMMUNITY.md) - How to participate +- [Squads Guide](docs/guides/squads-guide.md) - Create agent teams +- [Architecture](docs/architecture/) - System design +- [Roadmap](ROADMAP.md) - Project direction + +--- + +**Thank you for contributing to Synkra AIOS!** + +``` + +================================================== +📄 tsconfig.json +================================================== +```json +{ + "compilerOptions": { + "target": "ES2022", + "module": "commonjs", + "lib": ["ES2022"], + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true, + "incremental": true, + "tsBuildInfoFile": ".tsbuildinfo", + "allowJs": true, + "checkJs": false, + "noEmit": true, + "resolveJsonModule": true, + "allowSyntheticDefaultImports": true, + "typeRoots": [ + "./types", + "./node_modules/@types" + ], + "baseUrl": ".", + "paths": { + "@synkra/aios-core": ["./.aios-core/core"], + "@synkra/aios-core/*": ["./.aios-core/core/*"] + } + }, + "include": [ + "types/**/*.d.ts", + ".aios-core/**/*", + "tools/**/*", + "scripts/**/*", + "src/**/*", + "common/**/*", + "bin/**/*", + "*.js" + ], + "exclude": [ + "node_modules", + "dist", + ".tsbuildinfo", + "coverage", + ".test-temp", + "**/*-tmpl.*", + "**/*.tmpl.*", + "**/*-template.*", + ".aios-core/development/templates/**", + ".aios-core.backup*/**", + "aios-core.backup*/**", + "aios-core-deprecated/**", + "aios-core-depracated/**", + "eslint.config.js" + ] +} + +``` + +================================================== +📄 eslint.config.js +================================================== +```js +// @ts-check +const js = require('@eslint/js'); +const tsPlugin = require('@typescript-eslint/eslint-plugin'); +const tsParser = require('@typescript-eslint/parser'); + +/** + * AIOS Framework ESLint Configuration + * ESLint v9 flat config format + * @type {import('eslint').Linter.Config[]} + */ +module.exports = [ + // Recommended 
JavaScript rules + js.configs.recommended, + + // Global ignores + { + ignores: [ + '**/node_modules/**', + '**/coverage/**', + '**/build/**', + '**/dist/**', + '**/.next/**', + // Dashboard has its own ESLint config + 'apps/dashboard/**', + '**/.aios-core/_legacy-v4.31.0/**', + '**/web-bundles/**', + '**/*.min.js', + '**/aios-core/*.js', + '**/templates/squad/**', + // Squad template - ES modules with placeholder imports + '.aios-core/development/templates/squad-template/**', + // ESM bundle files - auto-generated + '**/*.esm.js', + '**/index.esm.js', + // Legacy and backup files + '**/*.backup*.js', + '**/aios-init-old.js', + '**/aios-init-v4.js', + // Scripts that need cleanup (TODO: fix in Story 6.2) + '.aios-core/quality/**', + '.aios-core/scripts/**', + // Development scripts with known ESLint errors (TODO: fix in future story) + '.aios-core/development/scripts/**', + '.claude/commands/AIOS/scripts/**', + // CLI files with legacy issues (TODO: fix) + '.aios-core/cli/**', + '.aios-core/infrastructure/scripts/**', + // Bin files with legacy issues + 'bin/aios-init*.js', + 'bin/migrate-*.js', + // Template files with placeholder syntax + '.aios-core/product/templates/**', + // Health Dashboard - uses Vite/React with ES modules + 'tools/health-dashboard/**', + // Core orchestration/execution - legacy code with no-undef errors (TODO: fix) + '.aios-core/core/orchestration/**', + '.aios-core/core/execution/**', + // Hook integrations - legacy code (TODO: fix) + '.aios-core/hooks/**', + // Pro module - legacy code + 'pro/**', + // Glue scripts + 'scripts/glue/**', + ], + }, + + // JavaScript files configuration + { + files: ['**/*.js', '**/*.mjs', '**/*.cjs'], + languageOptions: { + ecmaVersion: 2022, + sourceType: 'commonjs', + globals: { + // Node.js globals + __dirname: 'readonly', + __filename: 'readonly', + exports: 'writable', + module: 'readonly', + require: 'readonly', + process: 'readonly', + console: 'readonly', + Buffer: 'readonly', + setTimeout: 
'readonly', + clearTimeout: 'readonly', + setInterval: 'readonly', + clearInterval: 'readonly', + setImmediate: 'readonly', + global: 'readonly', + // Node.js 18+ globals + fetch: 'readonly', + AbortController: 'readonly', + URL: 'readonly', + URLSearchParams: 'readonly', + structuredClone: 'readonly', + // Jest globals + describe: 'readonly', + it: 'readonly', + test: 'readonly', + expect: 'readonly', + beforeEach: 'readonly', + afterEach: 'readonly', + beforeAll: 'readonly', + afterAll: 'readonly', + jest: 'readonly', + }, + }, + rules: { + // Error prevention + 'no-unused-vars': [ + 'warn', + { + argsIgnorePattern: '^_', + varsIgnorePattern: '^_', + caughtErrorsIgnorePattern: '^_', + }, + ], + 'no-undef': 'error', + 'no-console': 'off', // We need console for CLI tool + + // Code style + semi: ['error', 'always'], + quotes: ['warn', 'single', { avoidEscape: true }], + indent: ['warn', 2, { SwitchCase: 1 }], + 'comma-dangle': ['warn', 'always-multiline'], + + // Best practices + eqeqeq: ['error', 'always', { null: 'ignore' }], + 'no-var': 'error', + 'prefer-const': 'warn', + 'no-throw-literal': 'error', + + // Relaxed rules for legacy code (TODO: fix and re-enable as errors) + 'no-case-declarations': 'warn', + 'no-useless-escape': 'warn', + 'no-control-regex': 'warn', + 'no-prototype-builtins': 'warn', + 'no-empty': 'warn', + }, + }, + + // TypeScript files configuration + { + files: ['**/*.ts', '**/*.tsx'], + languageOptions: { + parser: tsParser, + parserOptions: { + ecmaVersion: 2022, + sourceType: 'module', + }, + }, + plugins: { + '@typescript-eslint': tsPlugin, + }, + rules: { + ...tsPlugin.configs.recommended.rules, + '@typescript-eslint/no-unused-vars': [ + 'warn', + { + argsIgnorePattern: '^_', + varsIgnorePattern: '^_', + }, + ], + '@typescript-eslint/explicit-function-return-type': 'off', + '@typescript-eslint/no-explicit-any': 'warn', + }, + }, + + // Test files - more relaxed rules + { + files: ['**/*.test.js', '**/*.spec.js', '**/tests/**/*.js'], + 
rules: { + 'no-unused-vars': 'off', + 'no-undef': 'off', // Jest globals + }, + }, +]; + +``` + +================================================== +📄 .env.example +================================================== +```example +# AIOS Framework Environment Variables +# Copy this file to .env and fill in your values +# NEVER commit .env files to version control + +# ============================================================================= +# AI Provider Configuration +# ============================================================================= + +# Anthropic API (Claude) +# Get your key from: https://console.anthropic.com/ +ANTHROPIC_API_KEY=your_anthropic_api_key_here + +# OpenAI API (optional, for GPT models) +# Get your key from: https://platform.openai.com/api-keys +OPENAI_API_KEY=your_openai_api_key_here + +# ============================================================================= +# GitHub Integration +# ============================================================================= + +# GitHub Personal Access Token (for API operations) +# Create at: https://github.com/settings/tokens +GITHUB_TOKEN=your_github_token_here + +# ============================================================================= +# AIOS Framework Settings +# ============================================================================= + +# Enable debug mode for verbose logging +AIOS_DEBUG=false + +# Default AI model to use +AIOS_DEFAULT_MODEL=claude-3-5-sonnet-20241022 + +# MCP Server settings +AIOS_MCP_ENABLED=true + +# ============================================================================= +# Development Settings (Optional) +# ============================================================================= + +# Node environment +NODE_ENV=development + +# Log level (debug, info, warn, error) +LOG_LEVEL=info + +# ============================================================================= +# Testing (CI/CD) +# 
=============================================================================
+
+# CI environment flag (set by GitHub Actions)
+# CI=true
+
+```
+
+==================================================
+📄 AGENTS.md
+==================================================
+```md
+# AGENTS.md - Synkra AIOS
+
+This file configures the expected behavior of agents in the Codex CLI in this repository.
+
+## Constitution
+
+Follow `.aios-core/constitution.md` as the source of truth:
+- CLI First
+- Agent Authority
+- Story-Driven Development
+- No Invention
+- Quality First
+- Absolute Imports
+
+## Required Workflow
+
+1. Start from a story in `docs/stories/`
+2. Implement only what the acceptance criteria ask for
+3. Update the checklist (`[ ]` -> `[x]`) and the file list
+4. Run the quality gates before finishing
+
+## Quality Gates
+
+```bash
+npm run lint
+npm run typecheck
+npm test
+```
+
+## Main Structure
+
+- Core framework: `.aios-core/`
+- CLI: `bin/`
+- Packages: `packages/`
+- Tests: `tests/`
+- Documentation: `docs/`
+
+## IDE/Agent Sync
+
+- Sync rules/agents: `npm run sync:ide`
+- Check for drift: `npm run sync:ide:check`
+- Run multi-IDE parity (Claude/Codex/Gemini): `npm run validate:parity`
+- Sync Claude Code: `npm run sync:ide:claude`
+- Sync Gemini CLI: `npm run sync:ide:gemini`
+- Validate Codex sync/integration: `npm run validate:codex-sync && npm run validate:codex-integration`
+- Generate local Codex skills: `npm run sync:skills:codex`
+- This repository is **local-first**: prefer the versioned `.codex/skills` inside the project
+- Use `sync:skills:codex:global` only for tests outside this repo
+
+## Agent Shortcuts (Codex)
+
+Activation preference in the Codex CLI:
+1. Use `/skills` and select the `aios-` entries from `.codex/skills` (e.g. `aios-architect`)
+2. Alternatively, use the shortcuts below (`@architect`, `/architect`, etc.)
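The shortcut handling described in this section can be sketched as a tiny resolver. This is an illustrative helper only, not a file in the repo; the agent names mirror the mapping listed in this section, and the `.codex/agents/` fallback is omitted for brevity.

```javascript
// Hypothetical sketch: map an agent shortcut such as "@dev" or "/dev.md"
// to its definition file under .aios-core/development/agents/.
const AGENTS = [
  'aios-master', 'analyst', 'architect', 'data-engineer', 'dev', 'devops',
  'pm', 'po', 'qa', 'sm', 'squad-creator', 'ux-design-expert',
];

function resolveAgentShortcut(message) {
  // Accept "@name", "/name", or "/name.md".
  const match = /^[@/]([a-z-]+?)(?:\.md)?$/.exec(message.trim());
  if (!match || !AGENTS.includes(match[1])) return null;
  return `.aios-core/development/agents/${match[1]}.md`;
}

console.log(resolveAgentShortcut('@dev')); // .aios-core/development/agents/dev.md
console.log(resolveAgentShortcut('/architect.md')); // .aios-core/development/agents/architect.md
console.log(resolveAgentShortcut('hello')); // null
```

A `null` result means the message is not an agent shortcut and should be handled as a normal prompt.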
+
+When the user's message is an agent shortcut, load the corresponding file from `.aios-core/development/agents/` (fallback: `.codex/agents/`), render the greeting via `generate-greeting.js`, and assume that persona until receiving `*exit`.
+
+Accepted shortcuts per agent:
+- `@aios-master`, `/aios-master`, `/aios-master.md` -> `.aios-core/development/agents/aios-master.md`
+- `@analyst`, `/analyst`, `/analyst.md` -> `.aios-core/development/agents/analyst.md`
+- `@architect`, `/architect`, `/architect.md` -> `.aios-core/development/agents/architect.md`
+- `@data-engineer`, `/data-engineer`, `/data-engineer.md` -> `.aios-core/development/agents/data-engineer.md`
+- `@dev`, `/dev`, `/dev.md` -> `.aios-core/development/agents/dev.md`
+- `@devops`, `/devops`, `/devops.md` -> `.aios-core/development/agents/devops.md`
+- `@pm`, `/pm`, `/pm.md` -> `.aios-core/development/agents/pm.md`
+- `@po`, `/po`, `/po.md` -> `.aios-core/development/agents/po.md`
+- `@qa`, `/qa`, `/qa.md` -> `.aios-core/development/agents/qa.md`
+- `@sm`, `/sm`, `/sm.md` -> `.aios-core/development/agents/sm.md`
+- `@squad-creator`, `/squad-creator`, `/squad-creator.md` -> `.aios-core/development/agents/squad-creator.md`
+- `@ux-design-expert`, `/ux-design-expert`, `/ux-design-expert.md` -> `.aios-core/development/agents/ux-design-expert.md`
+
+Expected response when a shortcut is activated:
+1. Confirm which agent is active
+2. Show 3-6 main commands (`*help`, etc.)
+3. Stay in the agent's persona
+
+```
+
+==================================================
+📄 tests/pro-recover.test.js
+==================================================
+```js
+/**
+ * Tests for License Recovery Flow (Story INS-3.3)
+ *
+ * Verifies:
+ * - Anti-enumeration: identical message for any email
+ * - Correct recovery URL
+ * - Offline fallback: shows URL when browser cannot open
+ * - Email masking
+ */
+
+const readline = require('readline');
+const { maskEmail, openBrowser, promptEmail, recoverLicense, RECOVERY_URL, RECOVERY_MESSAGE } = require('../packages/aios-pro-cli/src/recover');
+
+// ─── Constants ────────────────────────────────────────────────────────────────
+
+describe('Recovery Constants', () => {
+  test('RECOVERY_URL is the correct portal URL', () => {
+    expect(RECOVERY_URL).toBe('https://pro.synkra.ai/recover');
+  });
+
+  test('RECOVERY_MESSAGE is the anti-enumeration message', () => {
+    expect(RECOVERY_MESSAGE).toContain('Se este email estiver associado');
+    expect(RECOVERY_MESSAGE).toContain('instrucoes de recuperacao');
+  });
+});
+
+// ─── Anti-Enumeration ─────────────────────────────────────────────────────────
+
+describe('Anti-Enumeration', () => {
+  test('RECOVERY_MESSAGE is identical regardless of email — same constant used', () => {
+    // The message is a constant, not computed from email input.
+    // This guarantees no information leakage by design.
+ const msg1 = RECOVERY_MESSAGE; + const msg2 = RECOVERY_MESSAGE; + expect(msg1).toBe(msg2); + }); + + test('message does NOT contain any confirmation of email existence', () => { + expect(RECOVERY_MESSAGE).not.toMatch(/found|exists|valid|invalid|not found|registered/i); + }); +}); + +// ─── Email Masking ──────────────────────────────────────────────────────────── + +describe('maskEmail', () => { + test('masks email correctly: u***@example.com', () => { + expect(maskEmail('user@example.com')).toBe('u***@example.com'); + }); + + test('handles single-char local part', () => { + expect(maskEmail('a@b.com')).toBe('a***@b.com'); + }); + + test('returns *** for invalid email without @', () => { + expect(maskEmail('notanemail')).toBe('***'); + }); + + test('returns *** for email starting with @', () => { + expect(maskEmail('@example.com')).toBe('***'); + }); +}); + +// ─── Browser Open / Offline Fallback ────────────────────────────────────────── + +describe('openBrowser', () => { + test('returns true when browser opens successfully', async () => { + const mockOpen = jest.fn().mockResolvedValue(undefined); + const result = await openBrowser(RECOVERY_URL, mockOpen); + expect(result).toBe(true); + expect(mockOpen).toHaveBeenCalledWith(RECOVERY_URL); + }); + + test('returns false when open() rejects (offline fallback)', async () => { + const mockOpen = jest.fn().mockRejectedValue(new Error('Could not open browser')); + const result = await openBrowser(RECOVERY_URL, mockOpen); + expect(result).toBe(false); + }); + + test('returns false when open() throws synchronously', async () => { + const mockOpen = jest.fn().mockImplementation(() => { + throw new Error('Cannot find module'); + }); + const result = await openBrowser(RECOVERY_URL, mockOpen); + expect(result).toBe(false); + }); +}); + +// ─── promptEmail ───────────────────────────────────────────────────────────── + +describe('promptEmail', () => { + let createInterfaceSpy; + + afterEach(() => { + if (createInterfaceSpy) 
createInterfaceSpy.mockRestore(); + }); + + function mockReadline(answer) { + const mockClose = jest.fn(); + const mockQuestion = jest.fn((_prompt, cb) => cb(answer)); + createInterfaceSpy = jest.spyOn(readline, 'createInterface').mockReturnValue({ + question: mockQuestion, + close: mockClose, + }); + return { mockQuestion, mockClose }; + } + + test('resolves with trimmed email from user input', async () => { + const { mockQuestion, mockClose } = mockReadline(' test@example.com '); + const email = await promptEmail(); + expect(email).toBe('test@example.com'); + expect(mockQuestion).toHaveBeenCalledWith(' Enter your email: ', expect.any(Function)); + expect(mockClose).toHaveBeenCalled(); + }); + + test('resolves with empty string for whitespace-only input', async () => { + mockReadline(' '); + const email = await promptEmail(); + expect(email).toBe(''); + }); + + test('passes correct options to createInterface', async () => { + mockReadline('a@b.com'); + await promptEmail(); + expect(createInterfaceSpy).toHaveBeenCalledWith({ + input: process.stdin, + output: process.stdout, + }); + }); +}); + +// ─── recoverLicense Integration ────────────────────────────────────────────── + +describe('recoverLicense', () => { + let createInterfaceSpy; + let logSpy; + let errorSpy; + let exitSpy; + + beforeEach(() => { + logSpy = jest.spyOn(console, 'log').mockImplementation(() => {}); + errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {}); + exitSpy = jest.spyOn(process, 'exit').mockImplementation(() => { + throw new Error('process.exit called'); + }); + }); + + afterEach(() => { + if (createInterfaceSpy) createInterfaceSpy.mockRestore(); + logSpy.mockRestore(); + errorSpy.mockRestore(); + exitSpy.mockRestore(); + }); + + function mockReadline(answer) { + const mockClose = jest.fn(); + const mockQuestion = jest.fn((_prompt, cb) => cb(answer)); + createInterfaceSpy = jest.spyOn(readline, 'createInterface').mockReturnValue({ + question: mockQuestion, + close: 
mockClose, + }); + } + + test('full flow: shows anti-enum message, masked email, and recovery URL', async () => { + mockReadline('user@example.com'); + // Use a mock openFn by temporarily replacing openBrowser behavior + // recoverLicense calls openBrowser(RECOVERY_URL) without openFn, + // so the dynamic import will fail in test env → offline fallback + await recoverLicense(); + + const allOutput = logSpy.mock.calls.map(c => c[0]).join('\n'); + + // AC1: shows recovery message + expect(allOutput).toContain(RECOVERY_MESSAGE); + // AC2: anti-enumeration — same constant message + expect(allOutput).not.toMatch(/found|exists|valid|invalid|not found|registered/i); + // AC2: masked email displayed + expect(allOutput).toContain('u***@example.com'); + // AC4: always shows URL in text + expect(allOutput).toContain(RECOVERY_URL); + }); + + test('exits with error when email is empty', async () => { + mockReadline(''); + await expect(recoverLicense()).rejects.toThrow('process.exit called'); + expect(errorSpy).toHaveBeenCalledWith(expect.stringContaining('Email is required')); + expect(exitSpy).toHaveBeenCalledWith(1); + }); + + test('shows offline fallback message when browser cannot open', async () => { + mockReadline('test@mail.com'); + // open package not available in test env → offline fallback + await recoverLicense(); + + const allOutput = logSpy.mock.calls.map(c => c[0]).join('\n'); + expect(allOutput).toContain('Could not open browser automatically'); + expect(allOutput).toContain(RECOVERY_URL); + }); +}); + +``` + +================================================== +📄 tests/pro-wizard.test.js +================================================== +```js +/** + * Pro Installation Wizard Tests + * + * @story INS-3.2 — Implement Pro Installation Wizard with License Gate + */ + +'use strict'; + +// Mock modules before requiring the module under test +jest.mock('inquirer'); +jest.mock('ora', () => { + const spinnerMock = { + start: jest.fn().mockReturnThis(), + stop: 
jest.fn().mockReturnThis(), + succeed: jest.fn().mockReturnThis(), + fail: jest.fn().mockReturnThis(), + text: '', + }; + return jest.fn(() => spinnerMock); +}); + +const inquirer = require('inquirer'); + +// Save original env and stdout.isTTY +const originalEnv = { ...process.env }; +const originalIsTTY = process.stdout.isTTY; + +beforeEach(() => { + jest.clearAllMocks(); + process.env = { ...originalEnv }; + delete process.env.CI; + delete process.env.AIOS_PRO_KEY; + Object.defineProperty(process.stdout, 'isTTY', { + value: true, + writable: true, + configurable: true, + }); + // Suppress console output during tests + jest.spyOn(console, 'log').mockImplementation(() => {}); + jest.spyOn(console, 'error').mockImplementation(() => {}); +}); + +afterEach(() => { + process.env = { ...originalEnv }; + Object.defineProperty(process.stdout, 'isTTY', { + value: originalIsTTY, + writable: true, + configurable: true, + }); + console.log.mockRestore(); + console.error.mockRestore(); +}); + +// ─── Helper to get module ─────────────────────────────────────────────────── + +const proSetup = require('../packages/installer/src/wizard/pro-setup'); + +// ─── maskLicenseKey ────────────────────────────────────────────────────────── + +describe('maskLicenseKey', () => { + test('masks middle segments of valid key', () => { + expect(proSetup.maskLicenseKey('PRO-ABCD-EFGH-IJKL-MNOP')).toBe('PRO-ABCD-****-****-MNOP'); + }); + + test('handles lowercase input', () => { + expect(proSetup.maskLicenseKey('pro-abcd-efgh-ijkl-mnop')).toBe('PRO-ABCD-****-****-MNOP'); + }); + + test('returns **** for null/undefined', () => { + expect(proSetup.maskLicenseKey(null)).toBe('****'); + expect(proSetup.maskLicenseKey(undefined)).toBe('****'); + expect(proSetup.maskLicenseKey('')).toBe('****'); + }); + + test('returns **** for invalid format', () => { + expect(proSetup.maskLicenseKey('not-a-key')).toBe('****'); + expect(proSetup.maskLicenseKey('PRO-SHORT')).toBe('****'); + }); + + test('key never 
appears in full in any log output', () => { + const fullKey = 'PRO-ABCD-EFGH-IJKL-MNOP'; + const masked = proSetup.maskLicenseKey(fullKey); + + expect(masked).not.toContain('EFGH'); + expect(masked).not.toContain('IJKL'); + expect(masked).not.toBe(fullKey); + }); +}); + +// ─── validateKeyFormat ─────────────────────────────────────────────────────── + +describe('validateKeyFormat', () => { + test('accepts valid key format', () => { + expect(proSetup.validateKeyFormat('PRO-ABCD-EF12-GH34-IJ56')).toBe(true); + }); + + test('accepts lowercase (auto-uppercases internally)', () => { + expect(proSetup.validateKeyFormat('pro-abcd-ef12-gh34-ij56')).toBe(true); + }); + + test('rejects invalid formats', () => { + expect(proSetup.validateKeyFormat('')).toBe(false); + expect(proSetup.validateKeyFormat(null)).toBe(false); + expect(proSetup.validateKeyFormat('INVALID')).toBe(false); + expect(proSetup.validateKeyFormat('PRO-SHORT')).toBe(false); + expect(proSetup.validateKeyFormat('PRO-ABCD-EFGH-IJKL')).toBe(false); + }); +}); + +// ─── isCIEnvironment ───────────────────────────────────────────────────────── + +describe('isCIEnvironment', () => { + test('returns true when CI=true', () => { + process.env.CI = 'true'; + expect(proSetup.isCIEnvironment()).toBe(true); + }); + + test('returns true when stdout is not TTY', () => { + Object.defineProperty(process.stdout, 'isTTY', { + value: false, + writable: true, + configurable: true, + }); + expect(proSetup.isCIEnvironment()).toBe(true); + }); + + test('returns false in normal interactive terminal', () => { + delete process.env.CI; + Object.defineProperty(process.stdout, 'isTTY', { + value: true, + writable: true, + configurable: true, + }); + expect(proSetup.isCIEnvironment()).toBe(false); + }); +}); + +// ─── showProHeader ─────────────────────────────────────────────────────────── + +describe('showProHeader', () => { + test('outputs branded header', () => { + proSetup.showProHeader(); + const output = 
console.log.mock.calls.map((c) => String(c[0] || '')).join('\n'); + expect(output).toContain('AIOS Pro'); + expect(output).toContain('Premium'); + }); +}); + +// ─── stepLicenseGate ───────────────────────────────────────────────────────── + +describe('stepLicenseGate', () => { + test('CI mode: fails when AIOS_PRO_KEY not set', async () => { + process.env.CI = 'true'; + delete process.env.AIOS_PRO_KEY; + + const result = await proSetup.stepLicenseGate(); + + expect(result.success).toBe(false); + expect(result.error).toContain('AIOS_PRO_KEY'); + }); + + test('pre-provided key: rejects invalid format', async () => { + process.env.CI = 'true'; + + const result = await proSetup.stepLicenseGate({ key: 'INVALID-KEY' }); + + expect(result.success).toBe(false); + expect(result.error).toContain('Invalid key format'); + }); + + test('CI mode with valid format key: attempts API validation', async () => { + process.env.CI = 'true'; + process.env.AIOS_PRO_KEY = 'PRO-ABCD-EFGH-IJKL-MNOP'; + + const result = await proSetup.stepLicenseGate(); + + // Will fail because API is not available or returns error, but key is not shown in full + expect(result.success).toBe(false); + // Verify error message exists and does NOT contain the full key + expect(result.error).toBeDefined(); + }); + + test('interactive mode: prompts with password type and retries', async () => { + delete process.env.CI; + Object.defineProperty(process.stdout, 'isTTY', { + value: true, + writable: true, + configurable: true, + }); + + let callCount = 0; + inquirer.prompt.mockImplementation((questions) => { + callCount++; + // First call is the method choice menu (email vs key) + if (callCount === 1) { + return Promise.resolve({ method: 'key' }); + } + // Subsequent calls are license key prompts + return Promise.resolve({ licenseKey: 'PRO-AAAA-BBBB-CCCC-DDDD' }); + }); + + const result = await proSetup.stepLicenseGate(); + + // 1 for method choice + 3 for max retries = 4 total calls + expect(callCount).toBe(4); + 
expect(result.success).toBe(false); + + // Verify second call (first key prompt) was called with password type + const keyPromptCall = inquirer.prompt.mock.calls[1][0]; + expect(keyPromptCall[0].type).toBe('password'); + expect(keyPromptCall[0].mask).toBe('*'); + }); +}); + +// ─── stepInstallScaffold ───────────────────────────────────────────────────── + +describe('stepInstallScaffold', () => { + test('fails gracefully when scaffolder not available (no pro package dir)', async () => { + const result = await proSetup.stepInstallScaffold('/fake/nonexistent/dir'); + + // Either scaffolder is not found, or source dir doesn't exist + expect(result.success).toBe(false); + }); +}); + +// ─── stepVerify ────────────────────────────────────────────────────────────── + +describe('stepVerify', () => { + test('shows summary of scaffolded content', async () => { + const mockScaffoldResult = { + copiedFiles: [ + 'squads/premium-squad/agent.md', + 'squads/premium-squad/readme.md', + '.aios-core/pro-config.yaml', + 'pro-version.json', + ], + }; + + const result = await proSetup.stepVerify(mockScaffoldResult); + + expect(result.success).toBe(true); + // squads/ files + expect(result.squads).toEqual([ + 'squads/premium-squad/agent.md', + 'squads/premium-squad/readme.md', + ]); + // .yaml and .json files + expect(result.configs).toEqual([ + '.aios-core/pro-config.yaml', + 'pro-version.json', + ]); + }); + + test('handles null scaffoldResult', async () => { + const result = await proSetup.stepVerify(null); + + expect(result.success).toBe(true); + expect(result.squads).toEqual([]); + expect(result.configs).toEqual([]); + }); + + test('handles empty copiedFiles', async () => { + const result = await proSetup.stepVerify({ copiedFiles: [] }); + + expect(result.success).toBe(true); + expect(result.squads).toEqual([]); + }); +}); + +// ─── runProWizard (integration) ────────────────────────────────────────────── + +describe('runProWizard', () => { + test('fails in CI mode without 
AIOS_PRO_KEY', async () => { + process.env.CI = 'true'; + delete process.env.AIOS_PRO_KEY; + + const result = await proSetup.runProWizard(); + + expect(result.success).toBe(false); + expect(result.licenseValidated).toBe(false); + }); + + test('fails with invalid key format in CI', async () => { + process.env.CI = 'true'; + process.env.AIOS_PRO_KEY = 'INVALID'; + + const result = await proSetup.runProWizard(); + + expect(result.success).toBe(false); + }); + + test('does not show branding in CI mode', async () => { + process.env.CI = 'true'; + process.env.AIOS_PRO_KEY = 'PRO-AAAA-BBBB-CCCC-DDDD'; + + await proSetup.runProWizard(); + + // In CI mode, showProHeader is skipped + const output = console.log.mock.calls.map((c) => String(c[0] || '')).join('\n'); + // The gold header box chars should not appear + expect(output).not.toContain('╔══'); + }); + + test('does not show branding in quiet mode', async () => { + await proSetup.runProWizard({ quiet: true, key: 'PRO-AAAA-BBBB-CCCC-DDDD' }); + + const output = console.log.mock.calls.map((c) => String(c[0] || '')).join('\n'); + expect(output).not.toContain('╔══'); + }); +}); + +// ─── Key Masking Security ──────────────────────────────────────────────────── + +describe('Key Masking Security', () => { + test('full key never appears in console output during wizard', async () => { + process.env.CI = 'true'; + const fullKey = 'PRO-ABCD-EFGH-IJKL-MNOP'; + process.env.AIOS_PRO_KEY = fullKey; + + await proSetup.runProWizard(); + + // Collect all output + const allOutput = [ + ...console.log.mock.calls.map((c) => String(c[0] || '')), + ...console.error.mock.calls.map((c) => String(c[0] || '')), + ].join('\n'); + + // The full key should NEVER appear in output + expect(allOutput).not.toContain(fullKey); + + // Middle segments should never be visible + expect(allOutput).not.toContain('EFGH'); + expect(allOutput).not.toContain('IJKL'); + }); +}); + +// ─── Lazy Import Graceful Failure ──────────────────────────────────────────── + 
+describe('Lazy Import', () => { + test('pro-setup loads without errors', () => { + expect(proSetup).toBeDefined(); + expect(typeof proSetup.runProWizard).toBe('function'); + expect(typeof proSetup.maskLicenseKey).toBe('function'); + expect(typeof proSetup.validateKeyFormat).toBe('function'); + }); + + test('internal loaders do not throw', () => { + const { _testing } = proSetup; + + // These should return either the module or null — never throw + expect(() => _testing.loadLicenseApi()).not.toThrow(); + expect(() => _testing.loadFeatureGate()).not.toThrow(); + expect(() => _testing.loadProScaffolder()).not.toThrow(); + }); +}); + +// ─── API Offline / Error Handling ──────────────────────────────────────────── + +describe('API Error Handling', () => { + test('validateKeyWithApi handles missing license module gracefully', async () => { + // When loadLicenseApi returns null (module not installed), + // we test by mocking the internal function + const originalLoad = proSetup._testing.loadLicenseApi; + + // Temporarily override + proSetup._testing.loadLicenseApi = () => null; + + const result = await proSetup._testing.validateKeyWithApi('PRO-AAAA-BBBB-CCCC-DDDD'); + + // Restore + proSetup._testing.loadLicenseApi = originalLoad; + + expect(result.success).toBe(false); + expect(result.error).toContain('not available'); + }); + + test('validateKeyWithApi handles API offline', async () => { + const originalLoad = proSetup._testing.loadLicenseApi; + + proSetup._testing.loadLicenseApi = () => ({ + LicenseApiClient: class { + async isOnline() { return false; } + }, + }); + + const result = await proSetup._testing.validateKeyWithApi('PRO-AAAA-BBBB-CCCC-DDDD'); + + proSetup._testing.loadLicenseApi = originalLoad; + + expect(result.success).toBe(false); + expect(result.error).toContain('unreachable'); + }); + + test('validateKeyWithApi handles network error', async () => { + const originalLoad = proSetup._testing.loadLicenseApi; + + proSetup._testing.loadLicenseApi = () => ({ + 
LicenseApiClient: class { + async isOnline() { return true; } + async activate() { + const err = new Error('Network error'); + err.code = 'NETWORK_ERROR'; + throw err; + } + }, + }); + + const result = await proSetup._testing.validateKeyWithApi('PRO-AAAA-BBBB-CCCC-DDDD'); + + proSetup._testing.loadLicenseApi = originalLoad; + + expect(result.success).toBe(false); + expect(result.error).toContain('unreachable'); + }); + + test('validateKeyWithApi handles invalid key error', async () => { + const originalLoad = proSetup._testing.loadLicenseApi; + + proSetup._testing.loadLicenseApi = () => ({ + LicenseApiClient: class { + async isOnline() { return true; } + async activate() { + const err = new Error('Invalid'); + err.code = 'INVALID_KEY'; + throw err; + } + }, + }); + + const result = await proSetup._testing.validateKeyWithApi('PRO-AAAA-BBBB-CCCC-DDDD'); + + proSetup._testing.loadLicenseApi = originalLoad; + + expect(result.success).toBe(false); + expect(result.error).toContain('Invalid license key'); + }); + + test('validateKeyWithApi handles expired key error', async () => { + const originalLoad = proSetup._testing.loadLicenseApi; + + proSetup._testing.loadLicenseApi = () => ({ + LicenseApiClient: class { + async isOnline() { return true; } + async activate() { + const err = new Error('Expired'); + err.code = 'EXPIRED_KEY'; + throw err; + } + }, + }); + + const result = await proSetup._testing.validateKeyWithApi('PRO-AAAA-BBBB-CCCC-DDDD'); + + proSetup._testing.loadLicenseApi = originalLoad; + + expect(result.success).toBe(false); + expect(result.error).toContain('expired'); + }); + + test('validateKeyWithApi handles rate limiting', async () => { + const originalLoad = proSetup._testing.loadLicenseApi; + + proSetup._testing.loadLicenseApi = () => ({ + LicenseApiClient: class { + async isOnline() { return true; } + async activate() { + const err = new Error('Rate limited'); + err.code = 'RATE_LIMITED'; + throw err; + } + }, + }); + + const result = await 
proSetup._testing.validateKeyWithApi('PRO-AAAA-BBBB-CCCC-DDDD'); + + proSetup._testing.loadLicenseApi = originalLoad; + + expect(result.success).toBe(false); + expect(result.error).toContain('Too many requests'); + }); + + test('validateKeyWithApi handles seat limit exceeded', async () => { + const originalLoad = proSetup._testing.loadLicenseApi; + + proSetup._testing.loadLicenseApi = () => ({ + LicenseApiClient: class { + async isOnline() { return true; } + async activate() { + const err = new Error('Seats'); + err.code = 'SEAT_LIMIT_EXCEEDED'; + throw err; + } + }, + }); + + const result = await proSetup._testing.validateKeyWithApi('PRO-AAAA-BBBB-CCCC-DDDD'); + + proSetup._testing.loadLicenseApi = originalLoad; + + expect(result.success).toBe(false); + expect(result.error).toContain('Maximum activations'); + }); + + test('validateKeyWithApi handles successful activation', async () => { + const originalLoad = proSetup._testing.loadLicenseApi; + + proSetup._testing.loadLicenseApi = () => ({ + LicenseApiClient: class { + async isOnline() { return true; } + async activate() { + return { + key: 'PRO-AAAA-BBBB-CCCC-DDDD', + features: ['pro.*'], + seats: { used: 1, max: 3 }, + expiresAt: '2027-01-01', + }; + } + }, + }); + + const result = await proSetup._testing.validateKeyWithApi('PRO-AAAA-BBBB-CCCC-DDDD'); + + proSetup._testing.loadLicenseApi = originalLoad; + + expect(result.success).toBe(true); + expect(result.data.features).toContain('pro.*'); + }); +}); + +``` + +================================================== +📄 tests/setup.js +================================================== +```js +// Jest setup file +// This file runs before all tests + +// Set test environment variables +process.env.NODE_ENV = 'test'; +process.env.AIOS_DEBUG = 'false'; + +// Skip integration tests by default (require external services) +// Set SKIP_INTEGRATION_TESTS=false to run them +if (process.env.SKIP_INTEGRATION_TESTS === undefined) { + process.env.SKIP_INTEGRATION_TESTS = 'true'; 
+} + +// Global test timeout (increased for CI environments) +jest.setTimeout(process.env.CI ? 30000 : 10000); + +// Mock console methods to reduce noise in tests (comment out for debugging) +// global.console = { +// ...console, +// log: jest.fn(), +// debug: jest.fn(), +// info: jest.fn(), +// warn: jest.fn(), +// error: jest.fn(), +// }; + +// Helper to conditionally skip integration tests +global.describeIntegration = process.env.SKIP_INTEGRATION_TESTS === 'true' + ? describe.skip + : describe; + +global.testIntegration = process.env.SKIP_INTEGRATION_TESTS === 'true' + ? test.skip + : test; + +``` + +================================================== +📄 tests/epic-verification.test.js +================================================== +```js +// File: tests/epic-verification.test.js + +/** + * Epic Verification Integration Test Suite + * + * Tests Epic verification before story creation - ensuring Epics exist, + * have correct status, and handle error scenarios properly. + * + * AC1: Epic Verification Before Story Creation + * + * NOTE: These tests require ClickUp MCP server and API key. 
+ * Skip in CI or when SKIP_INTEGRATION_TESTS=true + */ + +// Skip all tests - these require live ClickUp API and MCP server +// To run: CLICKUP_API_KEY=xxx npm test -- tests/epic-verification.test.js +const SKIP_REASON = 'Requires ClickUp MCP server and API key'; + +describe.skip('Epic Verification - Integration Tests', () => { + // All tests skipped - require live ClickUp connection + test.todo('Enable when ClickUp MCP is available'); +}); + +// Original tests preserved for reference when ClickUp is configured +const _originalTests = () => { + const { verifyEpicExists } = require('../common/utils/clickup-helpers'); + + // Mock ClickUp MCP tool + jest.mock('../common/utils/tool-resolver', () => ({ + resolveTool: jest.fn(() => ({ + getWorkspaceTasks: jest.fn(), + })), + })); + + const toolResolver = require('../common/utils/tool-resolver'); + + describe('Epic Verification - Integration Tests [REQUIRES CLICKUP]', () => { + let mockClickUpTool; + + beforeEach(() => { + jest.clearAllMocks(); + mockClickUpTool = toolResolver.resolveTool('clickup'); + }); + + describe('Scenario 1: Epic Found Successfully', () => { + test('should find Epic with correct tag and active status', async () => { + // Mock ClickUp API response: Epic 5 exists with "In Progress" status + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-task-12345', + name: 'Epic 5: Tools System', + status: 'In Progress', + tags: ['epic', 'epic-5'], + list: { + id: 'backlog-list-id', + name: 'Backlog', + }, + }, + ], + }); + + const result = await verifyEpicExists(5); + + expect(result.found).toBe(true); + expect(result.epicTaskId).toBe('epic-task-12345'); + expect(result.epicName).toBe('Epic 5: Tools System'); + expect(result.status).toBe('In Progress'); + + // Verify API was called with correct parameters + expect(mockClickUpTool.getWorkspaceTasks).toHaveBeenCalledWith({ + tags: ['epic-5'], + list_ids: expect.any(Array), + statuses: ['Planning', 'In Progress'], + }); + }); + + 
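      // Shape of the value verifyEpicExists() resolves with, inferred from the
      // assertions in this suite (illustrative sketch only — the authoritative
      // definition lives in common/utils/clickup-helpers):
      //
      //   {
      //     found: true,
      //     epicTaskId: 'epic-task-12345',
      //     epicName: 'Epic 5: Tools System',
      //     status: 'In Progress',        // 'Planning' or 'In Progress'
      //     listId: 'backlog-list-id',    // when the task includes list metadata
      //     listName: 'Backlog',
      //     customFields: [ /* ... */ ],  // when present on the task
      //   }
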
test('should find Epic in Planning status', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-task-67890', + name: 'Epic 7: New Feature', + status: 'Planning', + tags: ['epic', 'epic-7'], + }, + ], + }); + + const result = await verifyEpicExists(7); + + expect(result.found).toBe(true); + expect(result.status).toBe('Planning'); + }); + + test('should log success message when Epic found', async () => { + const consoleLogSpy = jest.spyOn(console, 'log').mockImplementation(); + + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-task-99', + name: 'Epic 3: Architecture', + status: 'In Progress', + tags: ['epic', 'epic-3'], + }, + ], + }); + + await verifyEpicExists(3); + + expect(consoleLogSpy).toHaveBeenCalledWith( + expect.stringContaining('✅ Found Epic 3'), + ); + expect(consoleLogSpy).toHaveBeenCalledWith( + expect.stringContaining('epic-task-99'), + ); + + consoleLogSpy.mockRestore(); + }); + }); + + describe('Scenario 2: Epic Not Found (HALT Expected)', () => { + test('should throw error when Epic does not exist', async () => { + // Mock empty response: Epic 10 not found + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [], + }); + + await expect(verifyEpicExists(10)).rejects.toThrow( + /Epic 10 not found in ClickUp Backlog list/, + ); + }); + + test('should provide helpful error message with creation instructions', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [], + }); + + try { + await verifyEpicExists(15); + fail('Should have thrown error'); + } catch (error) { + expect(error.message).toContain('Epic 15 not found'); + expect(error.message).toContain('Please create Epic task with:'); + expect(error.message).toContain("Name: 'Epic 15:"); + expect(error.message).toContain('List: Backlog'); + expect(error.message).toContain("Tags: ['epic', 'epic-15']"); + expect(error.message).toContain('Status: Planning or In Progress'); + } + }); + + 
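      // Illustrative "Epic not found" message implied by the assertions above
      // (hedged sketch, not the exact string emitted by verifyEpicExists):
      //
      //   Epic 15 not found in ClickUp Backlog list.
      //   Please create Epic task with:
      //     Name: 'Epic 15: <title>'
      //     List: Backlog
      //     Tags: ['epic', 'epic-15']
      //     Status: Planning or In Progress
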
test('should HALT execution when Epic not found', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [], + }); + + const consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation(); + + try { + await verifyEpicExists(20); + } catch { + // Expected error + } + + expect(consoleErrorSpy).toHaveBeenCalledWith( + expect.stringContaining('❌ Epic 20 not found'), + ); + + consoleErrorSpy.mockRestore(); + }); + }); + + describe('Scenario 3: Epic in Wrong Status (Done/Archived)', () => { + test('should reject Epic with "Done" status', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-task-done', + name: 'Epic 2: Completed Feature', + status: 'Done', + tags: ['epic', 'epic-2'], + }, + ], + }); + + await expect(verifyEpicExists(2)).rejects.toThrow( + /Epic 2 has invalid status: Done/, + ); + }); + + test('should reject Epic with "Archived" status', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-task-archived', + name: 'Epic 1: Old Feature', + status: 'Archived', + tags: ['epic', 'epic-1'], + }, + ], + }); + + await expect(verifyEpicExists(1)).rejects.toThrow( + /Epic 1 has invalid status: Archived/, + ); + }); + + test('should provide error message explaining valid statuses', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-wrong-status', + name: 'Epic 8: Test', + status: 'Blocked', + tags: ['epic', 'epic-8'], + }, + ], + }); + + try { + await verifyEpicExists(8); + fail('Should have thrown error'); + } catch (error) { + expect(error.message).toContain('Epic 8 has invalid status: Blocked'); + expect(error.message).toContain('Valid statuses: Planning, In Progress'); + } + }); + + test('should only search for Epics with Planning or In Progress status', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [], + }); + + try { + await verifyEpicExists(6); + } catch { + // 
Expected error + } + + expect(mockClickUpTool.getWorkspaceTasks).toHaveBeenCalledWith( + expect.objectContaining({ + statuses: ['Planning', 'In Progress'], + }), + ); + }); + }); + + describe('Scenario 4: Multiple Epics with Same Number (Ambiguity)', () => { + test('should detect multiple Epics with same tag', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-duplicate-1', + name: 'Epic 4: Feature A', + status: 'Planning', + tags: ['epic', 'epic-4'], + }, + { + id: 'epic-duplicate-2', + name: 'Epic 4: Feature B', + status: 'In Progress', + tags: ['epic', 'epic-4'], + }, + ], + }); + + await expect(verifyEpicExists(4)).rejects.toThrow( + /Multiple Epics found with tag 'epic-4'/, + ); + }); + + test('should list all duplicate Epics in error message', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-dup-a', + name: 'Epic 9: Implementation A', + status: 'Planning', + tags: ['epic', 'epic-9'], + }, + { + id: 'epic-dup-b', + name: 'Epic 9: Implementation B', + status: 'In Progress', + tags: ['epic', 'epic-9'], + }, + { + id: 'epic-dup-c', + name: 'Epic 9: Implementation C', + status: 'Planning', + tags: ['epic', 'epic-9'], + }, + ], + }); + + try { + await verifyEpicExists(9); + fail('Should have thrown error'); + } catch (error) { + expect(error.message).toContain('Multiple Epics found'); + expect(error.message).toContain('epic-dup-a'); + expect(error.message).toContain('epic-dup-b'); + expect(error.message).toContain('epic-dup-c'); + expect(error.message).toContain('Epic 9: Implementation A'); + expect(error.message).toContain('Epic 9: Implementation B'); + expect(error.message).toContain('Epic 9: Implementation C'); + } + }); + + test('should provide resolution instructions for duplicate Epics', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-x', + name: 'Epic 11: Duplicate', + status: 'Planning', + tags: ['epic', 'epic-11'], 
+ }, + { + id: 'epic-y', + name: 'Epic 11: Duplicate', + status: 'In Progress', + tags: ['epic', 'epic-11'], + }, + ], + }); + + try { + await verifyEpicExists(11); + fail('Should have thrown error'); + } catch (error) { + expect(error.message).toContain('Please resolve this ambiguity by:'); + expect(error.message).toContain('Remove tag from incorrect Epic'); + expect(error.message).toContain('Archive or delete duplicate Epic'); + } + }); + }); + + describe('Additional Edge Cases', () => { + test('should handle ClickUp API errors gracefully', async () => { + mockClickUpTool.getWorkspaceTasks.mockRejectedValue( + new Error('ClickUp API: Rate limit exceeded'), + ); + + await expect(verifyEpicExists(5)).rejects.toThrow( + 'ClickUp API: Rate limit exceeded', + ); + }); + + test('should handle network timeout errors', async () => { + mockClickUpTool.getWorkspaceTasks.mockRejectedValue( + new Error('Network timeout'), + ); + + await expect(verifyEpicExists(3)).rejects.toThrow('Network timeout'); + }); + + test('should validate Epic number is positive integer', async () => { + await expect(verifyEpicExists(0)).rejects.toThrow( + /Epic number must be a positive integer/, + ); + + await expect(verifyEpicExists(-5)).rejects.toThrow( + /Epic number must be a positive integer/, + ); + + await expect(verifyEpicExists(3.5)).rejects.toThrow( + /Epic number must be a positive integer/, + ); + }); + + test('should handle Epic without tags gracefully', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-no-tags', + name: 'Epic 12: No Tags', + status: 'In Progress', + tags: [], // Missing tags + }, + ], + }); + + // Should not find Epic if it doesn't have required tag + await expect(verifyEpicExists(12)).rejects.toThrow( + /Epic 12 not found/, + ); + }); + + test('should handle malformed ClickUp response', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + // Missing tasks array + }); + + await 
expect(verifyEpicExists(5)).rejects.toThrow(); + }); + + test('should capture Epic metadata for later use', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-metadata-test', + name: 'Epic 13: Metadata Test', + status: 'Planning', + tags: ['epic', 'epic-13'], + list: { + id: 'list-123', + name: 'Backlog', + }, + custom_fields: [ + { id: 'cf1', name: 'priority', value: 'High' }, + ], + }, + ], + }); + + const result = await verifyEpicExists(13); + + expect(result.found).toBe(true); + expect(result.epicTaskId).toBe('epic-metadata-test'); + expect(result.listId).toBe('list-123'); + expect(result.listName).toBe('Backlog'); + expect(result.customFields).toBeDefined(); + }); + }); + + describe('Performance and Caching', () => { + test('should cache Epic lookup results', async () => { + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-cached', + name: 'Epic 14: Cache Test', + status: 'In Progress', + tags: ['epic', 'epic-14'], + }, + ], + }); + + // First call + await verifyEpicExists(14); + + // Second call should use cache (mock should be called only once) + await verifyEpicExists(14); + + expect(mockClickUpTool.getWorkspaceTasks).toHaveBeenCalledTimes(1); + }); + + test('should invalidate cache after specified timeout', async () => { + jest.useFakeTimers(); + + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-timeout', + name: 'Epic 15: Timeout Test', + status: 'Planning', + tags: ['epic', 'epic-15'], + }, + ], + }); + + // First call + await verifyEpicExists(15); + + // Advance time by 6 minutes (cache expires after 5 minutes) + jest.advanceTimersByTime(6 * 60 * 1000); + + // Second call should hit API again + await verifyEpicExists(15); + + expect(mockClickUpTool.getWorkspaceTasks).toHaveBeenCalledTimes(2); + + jest.useRealTimers(); + }); + }); + }); +}; // End of _originalTests + +``` + +================================================== +📄 
tests/story-update-hook.test.js +================================================== +```js +// File: tests/story-update-hook.test.js + +/** + * Story Update Hook Test Suite + * + * Tests the story change detection, changelog generation, and ClickUp synchronization + * functionality of the story-update-hook module. + */ + +const { + detectChanges, + generateChangelog, + syncStoryToClickUp, + updateFrontmatterTimestamp, +} = require('../common/utils/story-update-hook'); + +// Mock ClickUp helper functions +jest.mock('../common/utils/clickup-helpers', () => ({ + updateStoryStatus: jest.fn(), + updateTaskDescription: jest.fn(), + addTaskComment: jest.fn(), + verifyEpicExists: jest.fn(), +})); + +const clickupHelpers = require('../common/utils/clickup-helpers'); + +describe('Story Update Hook - Change Detection', () => { + beforeEach(() => { + jest.clearAllMocks(); + }); + + describe('detectChanges() - Status Changes', () => { + test('should detect status change from Draft to In Progress', () => { + const oldContent = '---\nstatus: Draft\n---\nStory content'; + const newContent = '---\nstatus: In Progress\n---\nStory content'; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.status).toEqual({ + changed: true, + from: 'Draft', + to: 'In Progress', + }); + }); + + test('should not detect status change when status is unchanged', () => { + const oldContent = '---\nstatus: In Progress\n---\nStory content'; + const newContent = '---\nstatus: In Progress\n---\nUpdated story content'; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.status).toEqual({ + changed: false, + from: 'In Progress', + to: 'In Progress', + }); + }); + + test('should handle missing status in old content', () => { + const oldContent = '---\ntitle: My Story\n---\nContent'; + const newContent = '---\nstatus: Draft\n---\nContent'; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.status).toEqual({ + changed: true, + from: 
undefined, + to: 'Draft', + }); + }); + }); + + describe('detectChanges() - Task Completion', () => { + test('should detect newly completed tasks', () => { + const oldContent = ` +## Tasks +- [ ] Task 1 +- [ ] Task 2 +- [ ] Task 3 + `; + const newContent = ` +## Tasks +- [x] Task 1 +- [ ] Task 2 +- [x] Task 3 + `; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.tasksCompleted).toHaveLength(2); + expect(changes.tasksCompleted).toContain('Task 1'); + expect(changes.tasksCompleted).toContain('Task 3'); + }); + + test('should ignore already completed tasks', () => { + const oldContent = ` +## Tasks +- [x] Task 1 +- [ ] Task 2 + `; + const newContent = ` +## Tasks +- [x] Task 1 +- [x] Task 2 + `; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.tasksCompleted).toHaveLength(1); + expect(changes.tasksCompleted).toContain('Task 2'); + }); + + test('should handle no task completions', () => { + const oldContent = '- [ ] Task 1\n- [ ] Task 2'; + const newContent = '- [ ] Task 1\n- [ ] Task 2'; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.tasksCompleted).toEqual([]); + }); + }); + + describe('detectChanges() - File List Updates', () => { + test('should detect new files added to File List section', () => { + const oldContent = ` +## File List +**New Files:** +- common/utils/file1.js + `; + const newContent = ` +## File List +**New Files:** +- common/utils/file1.js +- common/utils/file2.js +- tests/file2.test.js + `; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.filesAdded).toHaveLength(2); + expect(changes.filesAdded).toContain('common/utils/file2.js'); + expect(changes.filesAdded).toContain('tests/file2.test.js'); + }); + + test('should handle File List section with no changes', () => { + const oldContent = '## File List\n- file1.js\n- file2.js'; + const newContent = '## File List\n- file1.js\n- file2.js'; + + const changes = detectChanges(oldContent, 
newContent); + + expect(changes.filesAdded).toEqual([]); + }); + }); + + describe('detectChanges() - Dev Notes', () => { + test('should detect new Dev Notes added', () => { + const oldContent = '## Dev Notes\n\nNote 1'; + const newContent = '## Dev Notes\n\nNote 1\n\nNote 2: Implementation details'; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.devNotesAdded).toBe(true); + expect(changes.devNotesContent).toContain('Note 2'); + }); + + test('should not detect dev notes when unchanged', () => { + const oldContent = '## Dev Notes\n\nNote 1'; + const newContent = '## Dev Notes\n\nNote 1'; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.devNotesAdded).toBe(false); + }); + }); + + describe('detectChanges() - Acceptance Criteria', () => { + test('should detect changes to Acceptance Criteria section', () => { + const oldContent = '## Acceptance Criteria\n- AC1: Original'; + const newContent = '## Acceptance Criteria\n- AC1: Updated\n- AC2: New'; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.acceptanceCriteriaChanged).toBe(true); + }); + + test('should not detect AC changes when unchanged', () => { + const oldContent = '## Acceptance Criteria\n- AC1: Test'; + const newContent = '## Acceptance Criteria\n- AC1: Test'; + + const changes = detectChanges(oldContent, newContent); + + expect(changes.acceptanceCriteriaChanged).toBe(false); + }); + }); +}); + +describe('Story Update Hook - Changelog Generation', () => { + test('should generate changelog for status change', () => { + const changes = { + status: { changed: true, from: 'Draft', to: 'In Progress' }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + const changelog = generateChangelog(changes); + + expect(changelog).toContain('Status: Draft → In Progress'); + }); + + test('should generate changelog for completed tasks', () => { + const changes = { + status: { changed: 
false }, + tasksCompleted: ['Task 1', 'Task 2'], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + const changelog = generateChangelog(changes); + + expect(changelog).toContain('Completed tasks:'); + expect(changelog).toContain('• Task 1'); + expect(changelog).toContain('• Task 2'); + }); + + test('should generate changelog for added files', () => { + const changes = { + status: { changed: false }, + tasksCompleted: [], + filesAdded: ['common/utils/new-file.js', 'tests/new-test.js'], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + const changelog = generateChangelog(changes); + + expect(changelog).toContain('Files added:'); + expect(changelog).toContain('• common/utils/new-file.js'); + expect(changelog).toContain('• tests/new-test.js'); + }); + + test('should generate changelog for dev notes', () => { + const changes = { + status: { changed: false }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: true, + devNotesContent: 'Implementation note', + acceptanceCriteriaChanged: false, + }; + + const changelog = generateChangelog(changes); + + expect(changelog).toContain('Dev notes updated'); + }); + + test('should generate changelog for acceptance criteria changes', () => { + const changes = { + status: { changed: false }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: true, + }; + + const changelog = generateChangelog(changes); + + expect(changelog).toContain('Acceptance criteria modified'); + }); + + test('should generate comprehensive changelog with multiple changes', () => { + const changes = { + status: { changed: true, from: 'Draft', to: 'In Progress' }, + tasksCompleted: ['Setup environment', 'Write tests'], + filesAdded: ['src/module.js'], + devNotesAdded: true, + devNotesContent: 'Note', + acceptanceCriteriaChanged: false, + }; + + const changelog = generateChangelog(changes); + + expect(changelog).toContain('Status: Draft → In Progress'); + 
expect(changelog).toContain('Completed tasks:'); + expect(changelog).toContain('Files added:'); + expect(changelog).toContain('Dev notes updated'); + }); + + test('should return empty string when no changes detected', () => { + const changes = { + status: { changed: false }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + const changelog = generateChangelog(changes); + + expect(changelog).toBe(''); + }); +}); + +describe('Story Update Hook - ClickUp Synchronization', () => { + beforeEach(() => { + jest.clearAllMocks(); + }); + + test('should sync status change to ClickUp', async () => { + clickupHelpers.updateStoryStatus.mockResolvedValue({ success: true }); + + const storyFile = { + metadata: { + clickup_task_id: 'story-123', + status: 'In Progress', + }, + content: '---\nstatus: In Progress\n---\nContent', + }; + + const changes = { + status: { changed: true, from: 'Draft', to: 'In Progress' }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + await syncStoryToClickUp(storyFile, changes); + + expect(clickupHelpers.updateStoryStatus).toHaveBeenCalledWith( + 'story-123', + 'In Progress', + ); + }); + + test('should add changelog comment to ClickUp', async () => { + clickupHelpers.addTaskComment.mockResolvedValue({ success: true }); + + const storyFile = { + metadata: { + clickup_task_id: 'story-456', + }, + }; + + const changes = { + status: { changed: false }, + tasksCompleted: ['Task 1', 'Task 2'], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + await syncStoryToClickUp(storyFile, changes); + + expect(clickupHelpers.addTaskComment).toHaveBeenCalledWith( + 'story-456', + expect.stringContaining('Completed tasks:'), + ); + }); + + test('should update task description when acceptance criteria changed', async () => { + clickupHelpers.updateTaskDescription.mockResolvedValue({ success: true }); + + const 
storyFile = { + metadata: { + clickup_task_id: 'story-789', + }, + content: '## Acceptance Criteria\n- AC1: Updated', + }; + + const changes = { + status: { changed: false }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: true, + }; + + await syncStoryToClickUp(storyFile, changes); + + expect(clickupHelpers.updateTaskDescription).toHaveBeenCalledWith( + 'story-789', + expect.stringContaining('AC1: Updated'), + ); + }); + + test('should handle sync when no changes detected', async () => { + const storyFile = { + metadata: { clickup_task_id: 'story-000' }, + }; + + const changes = { + status: { changed: false }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + await syncStoryToClickUp(storyFile, changes); + + expect(clickupHelpers.updateStoryStatus).not.toHaveBeenCalled(); + expect(clickupHelpers.addTaskComment).not.toHaveBeenCalled(); + expect(clickupHelpers.updateTaskDescription).not.toHaveBeenCalled(); + }); +}); + +describe('Story Update Hook - Frontmatter Timestamp Update', () => { + test('should update last_updated timestamp in frontmatter', () => { + const content = '---\ntitle: My Story\nstatus: Draft\n---\nStory content'; + + const updated = updateFrontmatterTimestamp(content); + + expect(updated).toMatch(/last_updated: \d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/); + }); + + test('should preserve existing frontmatter fields', () => { + const content = '---\ntitle: My Story\nstatus: In Progress\nepic: 5\n---\nContent'; + + const updated = updateFrontmatterTimestamp(content); + + expect(updated).toContain('title: My Story'); + expect(updated).toContain('status: In Progress'); + expect(updated).toContain('epic: 5'); + }); + + test('should replace existing last_updated timestamp', () => { + const content = '---\nlast_updated: 2024-01-01T00:00:00Z\n---\nContent'; + + const updated = updateFrontmatterTimestamp(content); + + 
expect(updated).not.toContain('2024-01-01T00:00:00Z'); + expect(updated).toMatch(/last_updated: \d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/); + }); + + test('should handle content without frontmatter', () => { + const content = 'Story content without frontmatter'; + + const updated = updateFrontmatterTimestamp(content); + + expect(updated).toBe(content); // Should return unchanged + }); +}); + +describe('Story Update Hook - Error Handling', () => { + beforeEach(() => { + jest.clearAllMocks(); + }); + + test('should handle missing metadata gracefully', async () => { + const storyFile = { + content: 'Story content', + // No metadata + }; + + const changes = { + status: { changed: true, from: 'Draft', to: 'In Progress' }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + await expect(syncStoryToClickUp(storyFile, changes)).resolves.not.toThrow(); + expect(clickupHelpers.updateStoryStatus).not.toHaveBeenCalled(); + }); + + test('should handle network failures during status update', async () => { + clickupHelpers.updateStoryStatus.mockRejectedValue( + new Error('Network error'), + ); + + const storyFile = { + metadata: { + clickup_task_id: 'story-fail-1', + status: 'In Progress', + }, + }; + + const changes = { + status: { changed: true, from: 'Draft', to: 'In Progress' }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + await expect(syncStoryToClickUp(storyFile, changes)).rejects.toThrow('Network error'); + }); + + test('should handle invalid task_id in metadata', async () => { + clickupHelpers.updateStoryStatus.mockRejectedValue( + new Error('Task not found'), + ); + + const storyFile = { + metadata: { + clickup_task_id: 'invalid-task-id', + status: 'In Progress', + }, + }; + + const changes = { + status: { changed: true, from: 'Draft', to: 'In Progress' }, + tasksCompleted: [], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + 
}; + + await expect(syncStoryToClickUp(storyFile, changes)).rejects.toThrow('Task not found'); + }); + + test('should handle ClickUp API rate limit errors', async () => { + clickupHelpers.addTaskComment.mockRejectedValue( + new Error('Rate limit exceeded'), + ); + + const storyFile = { + metadata: { clickup_task_id: 'story-rate-limit' }, + }; + + const changes = { + status: { changed: false }, + tasksCompleted: ['Task 1'], + filesAdded: [], + devNotesAdded: false, + acceptanceCriteriaChanged: false, + }; + + await expect(syncStoryToClickUp(storyFile, changes)).rejects.toThrow('Rate limit exceeded'); + }); + + test('should handle malformed story content in detectChanges', () => { + const oldContent = null; + const newContent = '---\nstatus: Draft\n---\nContent'; + + expect(() => detectChanges(oldContent, newContent)).not.toThrow(); + }); +}); + +``` + +================================================== +📄 tests/unit/context-detector.test.js +================================================== +```js +/** + * Unit Tests for ContextDetector + * + * Test Coverage: + * - Hybrid detection (conversation vs file) + * - Session type detection (new/existing/workflow) + * - Command extraction from conversation + * - Workflow pattern matching + * - Session state file operations + * - TTL expiration handling + */ + +const ContextDetector = require('../../.aios-core/core/session/context-detector'); +const fs = require('fs'); +const path = require('path'); + +describe('ContextDetector', () => { + let detector; + const testSessionFile = path.join(__dirname, '.test-session-state.json'); + + beforeEach(() => { + detector = new ContextDetector(); + // Clean up test session file + if (fs.existsSync(testSessionFile)) { + fs.unlinkSync(testSessionFile); + } + }); + + afterEach(() => { + // Clean up test session file + if (fs.existsSync(testSessionFile)) { + fs.unlinkSync(testSessionFile); + } + }); + + describe('detectSessionType', () => { + describe('Conversation-based detection 
(preferred)', () => { + test('should detect new session when conversation is empty', () => { + const result = detector.detectSessionType([]); + expect(result).toBe('new'); + }); + + test('should detect existing session with conversation history', () => { + const conversation = [ + { content: 'Hello' }, + { content: 'How are you?' }, + ]; + const result = detector.detectSessionType(conversation); + expect(result).toBe('existing'); + }); + + test('should detect workflow session with story development pattern', () => { + const conversation = [ + { content: '*validate-story-draft story-6.1.md' }, + { content: 'Story validated!' }, + { content: '*develop story-6.1.md' }, + ]; + const result = detector.detectSessionType(conversation); + expect(result).toBe('workflow'); + }); + + test('should detect workflow session with epic creation pattern', () => { + const conversation = [ + { content: '*create-epic Epic 6' }, + { content: '*create-story Story 6.1' }, + { content: '*validate-story-draft story-6.1.md' }, + ]; + const result = detector.detectSessionType(conversation); + expect(result).toBe('workflow'); + }); + + test('should detect workflow session with backlog management pattern', () => { + const conversation = [ + { content: '*backlog-review' }, + { content: '*backlog-prioritize' }, + ]; + const result = detector.detectSessionType(conversation); + expect(result).toBe('workflow'); + }); + + test('should handle mixed content in conversation', () => { + const conversation = [ + { content: 'Regular text without commands' }, + { content: '*help' }, + { content: 'More regular text' }, + ]; + const result = detector.detectSessionType(conversation); + expect(result).toBe('existing'); + }); + }); + + describe('File-based detection (fallback)', () => { + test('should detect new session when file does not exist', () => { + const result = detector.detectSessionType(null, testSessionFile); + expect(result).toBe('new'); + }); + + test('should detect existing session from valid 
file', () => { + const sessionData = { + sessionId: 'test-session', + lastActivity: Date.now(), + lastCommands: ['help', 'status'], + }; + fs.writeFileSync(testSessionFile, JSON.stringify(sessionData), 'utf8'); + + const result = detector.detectSessionType(null, testSessionFile); + expect(result).toBe('existing'); + }); + + test('should detect workflow session from file with active workflow', () => { + const sessionData = { + sessionId: 'test-session', + lastActivity: Date.now(), + workflowActive: 'story_development', + lastCommands: ['validate-story-draft', 'develop'], + }; + fs.writeFileSync(testSessionFile, JSON.stringify(sessionData), 'utf8'); + + const result = detector.detectSessionType(null, testSessionFile); + expect(result).toBe('workflow'); + }); + + test('should detect new session when file is expired (TTL)', () => { + const sessionData = { + sessionId: 'test-session', + lastActivity: Date.now() - (2 * 60 * 60 * 1000), // 2 hours ago + lastCommands: ['help'], + }; + fs.writeFileSync(testSessionFile, JSON.stringify(sessionData), 'utf8'); + + const result = detector.detectSessionType(null, testSessionFile); + expect(result).toBe('new'); + }); + + test('should handle invalid JSON gracefully', () => { + fs.writeFileSync(testSessionFile, 'invalid json{', 'utf8'); + + const result = detector.detectSessionType(null, testSessionFile); + expect(result).toBe('new'); + }); + + test('should detect new session when file has no commands', () => { + const sessionData = { + sessionId: 'test-session', + lastActivity: Date.now(), + lastCommands: [], + }; + fs.writeFileSync(testSessionFile, JSON.stringify(sessionData), 'utf8'); + + const result = detector.detectSessionType(null, testSessionFile); + expect(result).toBe('new'); + }); + }); + + describe('Hybrid detection priority', () => { + test('should prefer conversation over file when both available', () => { + // Create file indicating workflow + const sessionData = { + sessionId: 'test-session', + lastActivity: 
Date.now(), + workflowActive: 'story_development', + lastCommands: ['validate-story-draft', 'develop'], + }; + fs.writeFileSync(testSessionFile, JSON.stringify(sessionData), 'utf8'); + + // Conversation indicates existing (no workflow) + const conversation = [ + { content: 'Hello' }, + { content: '*help' }, + ]; + + const result = detector.detectSessionType(conversation, testSessionFile); + expect(result).toBe('existing'); // Should use conversation, not file + }); + + test('should fallback to file when conversation is empty', () => { + const sessionData = { + sessionId: 'test-session', + lastActivity: Date.now(), + lastCommands: ['help', 'status'], + }; + fs.writeFileSync(testSessionFile, JSON.stringify(sessionData), 'utf8'); + + const result = detector.detectSessionType([], testSessionFile); + expect(result).toBe('existing'); + }); + }); + }); + + describe('_extractCommands', () => { + test('should extract commands from conversation', () => { + const conversation = [ + { content: '*help' }, + { content: '*validate-story-draft story.md' }, + { content: 'Some text' }, + { content: '*develop story.md' }, + ]; + + const commands = detector._extractCommands(conversation); + expect(commands).toEqual(['help', 'validate-story-draft', 'develop']); + }); + + test('should limit to last 10 commands', () => { + const conversation = Array(15).fill(null).map((_, i) => ({ + content: `*command-${i}`, + })); + + const commands = detector._extractCommands(conversation); + expect(commands.length).toBe(10); + expect(commands[0]).toBe('command-5'); // Should start from command-5 + }); + + test('should handle messages without commands', () => { + const conversation = [ + { content: 'Hello' }, + { content: 'How are you?' 
}, + ]; + + const commands = detector._extractCommands(conversation); + expect(commands).toEqual([]); + }); + }); + + describe('updateSessionState', () => { + test('should create session state file', () => { + const state = { + sessionId: 'test-123', + lastCommands: ['help', 'status'], + workflowActive: 'story_development', + }; + + detector.updateSessionState(state, testSessionFile); + + expect(fs.existsSync(testSessionFile)).toBe(true); + const savedData = JSON.parse(fs.readFileSync(testSessionFile, 'utf8')); + expect(savedData.sessionId).toBe('test-123'); + expect(savedData.lastCommands).toEqual(['help', 'status']); + expect(savedData.workflowActive).toBe('story_development'); + }); + + test('should generate session ID if not provided', () => { + const state = { + lastCommands: ['help'], + }; + + detector.updateSessionState(state, testSessionFile); + + const savedData = JSON.parse(fs.readFileSync(testSessionFile, 'utf8')); + expect(savedData.sessionId).toMatch(/^session-/); + }); + + test('should create directory if it does not exist', () => { + const deepPath = path.join(__dirname, 'deep', 'nested', 'session.json'); + + detector.updateSessionState({ lastCommands: [] }, deepPath); + + expect(fs.existsSync(deepPath)).toBe(true); + + // Cleanup + fs.unlinkSync(deepPath); + fs.rmdirSync(path.dirname(deepPath)); + fs.rmdirSync(path.dirname(path.dirname(deepPath))); + }); + }); + + describe('clearExpiredSession', () => { + test('should remove expired session file', () => { + const sessionData = { + sessionId: 'test-session', + lastActivity: Date.now() - (2 * 60 * 60 * 1000), // 2 hours ago + lastCommands: ['help'], + }; + fs.writeFileSync(testSessionFile, JSON.stringify(sessionData), 'utf8'); + + detector.clearExpiredSession(testSessionFile); + + expect(fs.existsSync(testSessionFile)).toBe(false); + }); + + test('should keep valid session file', () => { + const sessionData = { + sessionId: 'test-session', + lastActivity: Date.now(), // Current time + lastCommands: 
['help'], + }; + fs.writeFileSync(testSessionFile, JSON.stringify(sessionData), 'utf8'); + + detector.clearExpiredSession(testSessionFile); + + expect(fs.existsSync(testSessionFile)).toBe(true); + }); + + test('should handle non-existent file gracefully', () => { + expect(() => { + detector.clearExpiredSession(testSessionFile); + }).not.toThrow(); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/tool-helper-executor.test.js +================================================== +```js +const ToolHelperExecutor = require('../../common/utils/tool-helper-executor'); + +describe('ToolHelperExecutor', () => { + describe('Constructor and Helper Management', () => { + test('should initialize with empty helpers array', () => { + const executor = new ToolHelperExecutor([]); + expect(executor.listHelpers()).toEqual([]); + expect(executor.getStats().count).toBe(0); + }); + + test('should load helpers from constructor', () => { + const helpers = [ + { + id: 'test-helper', + language: 'javascript', + function: 'function test() { return "result"; }', + }, + ]; + + const executor = new ToolHelperExecutor(helpers); + expect(executor.hasHelper('test-helper')).toBe(true); + expect(executor.listHelpers()).toContain('test-helper'); + }); + + test('should skip invalid helpers during construction', () => { + const helpers = [ + { id: 'valid', function: 'function() {}' }, + { id: 'no-function' }, // Missing function + { function: 'function() {}' }, // Missing id + ]; + + const executor = new ToolHelperExecutor(helpers); + expect(executor.listHelpers()).toEqual(['valid']); + }); + + test('should add helper dynamically', () => { + const executor = new ToolHelperExecutor([]); + + executor.addHelper({ + id: 'dynamic-helper', + function: 'function() { return 42; }', + }); + + expect(executor.hasHelper('dynamic-helper')).toBe(true); + }); + + test('should throw error when adding duplicate helper', () => { + const executor = new ToolHelperExecutor([ + { 
id: 'existing', function: 'function() {}' }, + ]); + + expect(() => { + executor.addHelper({ id: 'existing', function: 'function() {}' }); + }).toThrow(/already exists/); + }); + + test('should replace existing helper', () => { + const executor = new ToolHelperExecutor([ + { id: 'replaceable', function: 'function() { return 1; }' }, + ]); + + executor.replaceHelper({ + id: 'replaceable', + function: 'function() { return 2; }', + }); + + expect(executor.hasHelper('replaceable')).toBe(true); + }); + + test('should remove helper', () => { + const executor = new ToolHelperExecutor([ + { id: 'removable', function: 'function() {}' }, + ]); + + const removed = executor.removeHelper('removable'); + expect(removed).toBe(true); + expect(executor.hasHelper('removable')).toBe(false); + }); + + test('should clear all helpers', () => { + const executor = new ToolHelperExecutor([ + { id: 'helper1', function: 'function() {}' }, + { id: 'helper2', function: 'function() {}' }, + ]); + + executor.clearHelpers(); + expect(executor.getStats().count).toBe(0); + }); + }); + + describe('Helper Execution - Success Cases', () => { + test('should execute simple helper successfully', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'simple-return', + function: ` + function execute() { + return 42; + } + execute(); + `, + }, + ]); + + const result = await executor.execute('simple-return'); + expect(result).toBe(42); + }); + + test('should pass args to helper', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'args-helper', + function: ` + function process() { + return args.value * 2; + } + process(); + `, + }, + ]); + + const result = await executor.execute('args-helper', { value: 21 }); + expect(result).toBe(42); + }); + + test('should handle complex data structures in args', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'complex-args', + function: ` + function process() { + return { + name: args.user.name.toUpperCase(), + count: 
args.items.length, + total: args.items.reduce((sum, item) => sum + item.value, 0) + }; + } + process(); + `, + }, + ]); + + const result = await executor.execute('complex-args', { + user: { name: 'test' }, + items: [ + { value: 10 }, + { value: 20 }, + { value: 30 }, + ], + }); + + expect(result).toEqual({ + name: 'TEST', + count: 3, + total: 60, + }); + }); + + test('should execute helper with string manipulation', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'string-helper', + function: ` + function format() { + return args.text.split(' ').map(w => + w.charAt(0).toUpperCase() + w.slice(1).toLowerCase() + ).join(' '); + } + format(); + `, + }, + ]); + + const result = await executor.execute('string-helper', { + text: 'hello WORLD from TEST', + }); + expect(result).toBe('Hello World From Test'); + }); + + test('should handle array operations', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'array-helper', + function: ` + function process() { + return args.numbers + .filter(n => n % 2 === 0) + .map(n => n * 2) + .reduce((sum, n) => sum + n, 0); + } + process(); + `, + }, + ]); + + const result = await executor.execute('array-helper', { + numbers: [1, 2, 3, 4, 5, 6], + }); + expect(result).toBe(24); // (2+4+6)*2 = 24 + }); + }); + + describe('Timeout Enforcement (1000ms)', () => { + test('should timeout helper exceeding 1s limit', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'slow-helper', + function: ` + function slowOperation() { + const start = Date.now(); + while (Date.now() - start < 2000) { + // Busy wait for 2 seconds + } + return "completed"; + } + slowOperation(); + `, + }, + ]); + + await expect(executor.execute('slow-helper')) + .rejects + .toThrow(/exceeded 1s timeout/); + }, 3000); // Test timeout higher than helper timeout + + test('should complete helper within timeout', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'fast-helper', + function: ` + function 
fastOperation() { + let sum = 0; + for (let i = 0; i < 1000; i++) { + sum += i; + } + return sum; + } + fastOperation(); + `, + }, + ]); + + const result = await executor.execute('fast-helper'); + expect(result).toBe(499500); + }); + + test('should timeout on infinite loop', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'infinite-loop', + function: ` + function infiniteLoop() { + while (true) { + // Infinite loop + } + } + infiniteLoop(); + `, + }, + ]); + + await expect(executor.execute('infinite-loop')) + .rejects + .toThrow(/exceeded 1s timeout/); + }, 3000); + }); + + describe('Sandbox Isolation', () => { + test('should not have access to require', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'require-test', + function: ` + function attemptRequire() { + try { + const fs = require('fs'); + return "accessed fs"; + } catch (error) { + return "require blocked"; + } + } + attemptRequire(); + `, + }, + ]); + + const result = await executor.execute('require-test'); + expect(result).toBe('require blocked'); + }); + + test('should not have access to process', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'process-test', + function: ` + function attemptProcess() { + try { + return typeof process; + } catch (error) { + return "undefined"; + } + } + attemptProcess(); + `, + }, + ]); + + const result = await executor.execute('process-test'); + expect(result).toBe('undefined'); + }); + + test('should not have access to filesystem', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'fs-test', + function: ` + function attemptFs() { + try { + const fs = require('fs'); + fs.readFileSync('/etc/passwd'); + return "file accessed"; + } catch (error) { + return "fs blocked"; + } + } + attemptFs(); + `, + }, + ]); + + const result = await executor.execute('fs-test'); + expect(result).toBe('fs blocked'); + }); + + test('should not have access to global scope beyond sandbox', async () => { + // Set a 
global variable outside sandbox + global.testSecret = 'secret-value'; + + const executor = new ToolHelperExecutor([ + { + id: 'global-test', + function: ` + function attemptGlobal() { + try { + return global.testSecret || "no access"; + } catch (error) { + return "no access"; + } + } + attemptGlobal(); + `, + }, + ]); + + const result = await executor.execute('global-test'); + expect(result).toBe('no access'); + + // Cleanup + delete global.testSecret; + }); + + test('should only have access to provided args', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'scope-test', + function: ` + function checkScope() { + const available = []; + if (typeof args !== 'undefined') available.push('args'); + if (typeof require === 'undefined') available.push('no-require'); + if (typeof process === 'undefined') available.push('no-process'); + if (typeof fs === 'undefined') available.push('no-fs'); + return available; + } + checkScope(); + `, + }, + ]); + + const result = await executor.execute('scope-test', { test: true }); + expect(result).toContain('args'); + expect(result).toContain('no-require'); + expect(result).toContain('no-process'); + expect(result).toContain('no-fs'); + }); + }); + + describe('Error Handling', () => { + test('should throw error for non-existent helper', async () => { + const executor = new ToolHelperExecutor([]); + + await expect(executor.execute('nonexistent')) + .rejects + .toThrow(/not found/); + }); + + test('should provide helpful error with available helpers', async () => { + const executor = new ToolHelperExecutor([ + { id: 'helper1', function: 'function() {}' }, + { id: 'helper2', function: 'function() {}' }, + ]); + + await expect(executor.execute('wrong-helper')) + .rejects + .toThrow(/Available helpers: helper1, helper2/); + }); + + test('should handle syntax errors in helper function', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'syntax-error', + function: 'function invalid( { // Invalid syntax', + 
}, + ]); + + await expect(executor.execute('syntax-error')) + .rejects + .toThrow(/execution failed/); + }); + + test('should handle runtime errors in helper', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'runtime-error', + function: ` + function throwError() { + throw new Error("Intentional error"); + } + throwError(); + `, + }, + ]); + + await expect(executor.execute('runtime-error')) + .rejects + .toThrow(/execution failed/); + }); + + test('should handle undefined variable access', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'undefined-var', + function: ` + function accessUndefined() { + return undefinedVariable.property; + } + accessUndefined(); + `, + }, + ]); + + await expect(executor.execute('undefined-var')) + .rejects + .toThrow(/execution failed/); + }); + + test('should throw error for helper without function', async () => { + const executor = new ToolHelperExecutor([]); + executor.helpers.set('no-function', { id: 'no-function' }); + + await expect(executor.execute('no-function')) + .rejects + .toThrow(/has no function defined/); + }); + }); + + describe('Helper Metadata', () => { + test('should get helper info', () => { + const executor = new ToolHelperExecutor([ + { + id: 'test-helper', + language: 'javascript', + runtime: 'isolated_vm', + function: 'function() {}', + }, + ]); + + const info = executor.getHelperInfo('test-helper'); + expect(info).toEqual({ + id: 'test-helper', + language: 'javascript', + runtime: 'isolated_vm', + hasFunction: true, + }); + }); + + test('should return null for non-existent helper info', () => { + const executor = new ToolHelperExecutor([]); + const info = executor.getHelperInfo('nonexistent'); + expect(info).toBeNull(); + }); + + test('should provide default language and runtime', () => { + const executor = new ToolHelperExecutor([ + { + id: 'minimal-helper', + function: 'function() {}', + }, + ]); + + const info = executor.getHelperInfo('minimal-helper'); + 
expect(info.language).toBe('javascript'); + expect(info.runtime).toBe('isolated_vm'); + }); + + test('should get statistics', () => { + const executor = new ToolHelperExecutor([ + { id: 'helper1', function: 'function() {}' }, + { id: 'helper2', function: 'function() {}' }, + { id: 'helper3', function: 'function() {}' }, + ]); + + const stats = executor.getStats(); + expect(stats.count).toBe(3); + expect(stats.helpers).toHaveLength(3); + expect(stats.helpers).toContain('helper1'); + expect(stats.helpers).toContain('helper2'); + expect(stats.helpers).toContain('helper3'); + }); + }); + + describe('Memory Management', () => { + test('should dispose isolate after successful execution', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'dispose-test', + function: '(function() { return "ok"; })();', + }, + ]); + + const result = await executor.execute('dispose-test'); + expect(result).toBe('ok'); + // No way to directly test disposal, but it should not throw + }); + + test('should dispose isolate after failed execution', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'fail-dispose', + function: 'throw new Error("test error");', + }, + ]); + + await expect(executor.execute('fail-dispose')) + .rejects + .toThrow(); + // Isolate should still be disposed even on error + }); + + test('should dispose isolate after timeout', async () => { + const executor = new ToolHelperExecutor([ + { + id: 'timeout-dispose', + function: 'while(true) {}', + }, + ]); + + await expect(executor.execute('timeout-dispose')) + .rejects + .toThrow(/timeout/); + // Isolate should be disposed even on timeout + }, 3000); + }); +}); + +``` + +================================================== +📄 tests/unit/decision-log-indexer.test.js +================================================== +```js +/** + * Unit Tests for Decision Log Indexer + * + * Tests the indexing system for decision logs. 
+ * + * @see .aios-core/scripts/decision-log-indexer.js + */ + +const fs = require('fs').promises; +const path = require('path'); +const { + parseLogMetadata, + generateIndexContent, + addToIndex, + rebuildIndex, +} = require('../../.aios-core/development/scripts/decision-log-indexer'); + +// Mock fs.promises +jest.mock('fs', () => ({ + promises: { + readFile: jest.fn(), + writeFile: jest.fn(), + mkdir: jest.fn(), + readdir: jest.fn(), + }, +})); + +describe('decision-log-indexer', () => { + beforeEach(() => { + jest.clearAllMocks(); + }); + + afterEach(() => { + jest.restoreAllMocks(); + }); + + describe('parseLogMetadata', () => { + it('should parse metadata from decision log file', async () => { + const mockLogContent = `# Decision Log: Story 6.1.2.6.2 + +**Generated:** 2025-11-16T14:30:00.000Z +**Agent:** dev +**Mode:** Yolo (Autonomous Development) +**Story:** docs/stories/story-6.1.2.6.2.md +**Rollback:** \`git reset --hard abc123\` + +--- + +## Context + +**Story Implementation:** 6.1.2.6.2 +**Execution Time:** 15m 30s +**Status:** completed + +**Files Modified:** 5 files +**Tests Run:** 8 tests +**Decisions Made:** 3 autonomous decisions +`; + + fs.readFile.mockResolvedValue(mockLogContent); + + const metadata = await parseLogMetadata('.ai/decision-log-6.1.2.6.2.md'); + + expect(metadata).toBeDefined(); + expect(metadata.storyId).toBe('6.1.2.6.2'); + expect(metadata.agent).toBe('dev'); + expect(metadata.status).toBe('completed'); + expect(metadata.duration).toBe('15m 30s'); + expect(metadata.decisionCount).toBe(3); + expect(metadata.timestamp).toBeInstanceOf(Date); + }); + + it('should handle missing metadata fields gracefully', async () => { + const mockLogContent = `# Decision Log: Story test + +Some content without proper metadata +`; + + fs.readFile.mockResolvedValue(mockLogContent); + + const metadata = await parseLogMetadata('.ai/decision-log-test.md'); + + expect(metadata).toBeDefined(); + expect(metadata.storyId).toBe('test'); + 
expect(metadata.agent).toBe('unknown'); + expect(metadata.status).toBe('unknown'); + expect(metadata.duration).toBe('0s'); + expect(metadata.decisionCount).toBe(0); + }); + + it('should return null on read error', async () => { + fs.readFile.mockRejectedValue(new Error('File not found')); + + const consoleSpy = jest.spyOn(console, 'error').mockImplementation(); + + const metadata = await parseLogMetadata('.ai/missing.md'); + + expect(metadata).toBeNull(); + expect(consoleSpy).toHaveBeenCalled(); + + consoleSpy.mockRestore(); + }); + }); + + describe('generateIndexContent', () => { + it('should generate markdown index from metadata array', () => { + const metadata = [ + { + storyId: '6.1.2.6.2', + timestamp: new Date('2025-11-16T14:30:00.000Z'), + agent: 'dev', + status: 'completed', + duration: '15m 30s', + decisionCount: 3, + logPath: '.ai/decision-log-6.1.2.6.2.md', + }, + { + storyId: '6.1.2.6.1', + timestamp: new Date('2025-11-15T10:00:00.000Z'), + agent: 'dev', + status: 'completed', + duration: '5m 10s', + decisionCount: 1, + logPath: '.ai/decision-log-6.1.2.6.1.md', + }, + ]; + + const indexContent = generateIndexContent(metadata); + + expect(indexContent).toContain('# Decision Log Index'); + expect(indexContent).toContain('Total logs: 2'); + expect(indexContent).toContain('6.1.2.6.2'); + expect(indexContent).toContain('6.1.2.6.1'); + expect(indexContent).toContain('2025-11-16'); + expect(indexContent).toContain('2025-11-15'); + expect(indexContent).toContain('completed'); + expect(indexContent).toContain('15m 30s'); + expect(indexContent).toContain('5m 10s'); + expect(indexContent).toContain('[View](decision-log-6.1.2.6.2.md)'); + }); + + it('should sort logs by timestamp (newest first)', () => { + const metadata = [ + { + storyId: 'old', + timestamp: new Date('2025-11-10T10:00:00.000Z'), + agent: 'dev', + status: 'completed', + duration: '5m', + decisionCount: 1, + logPath: '.ai/decision-log-old.md', + }, + { + storyId: 'new', + timestamp: new 
Date('2025-11-16T10:00:00.000Z'), + agent: 'dev', + status: 'completed', + duration: '3m', + decisionCount: 2, + logPath: '.ai/decision-log-new.md', + }, + ]; + + const indexContent = generateIndexContent(metadata); + + // 'new' should appear before 'old' in the table + const newIndex = indexContent.indexOf('| new |'); + const oldIndex = indexContent.indexOf('| old |'); + + expect(newIndex).toBeLessThan(oldIndex); + }); + + it('should handle empty metadata array', () => { + const indexContent = generateIndexContent([]); + + expect(indexContent).toContain('# Decision Log Index'); + expect(indexContent).toContain('Total logs: 0'); + }); + }); + + describe('addToIndex', () => { + it('should create new index file if it does not exist', async () => { + // Mock config + const yaml = require('js-yaml'); + jest.spyOn(yaml, 'load').mockReturnValue({ + decisionLogging: { + enabled: true, + location: '.ai/', + indexFile: 'decision-logs-index.md', + }, + }); + + // Mock log content + const mockLogContent = `# Decision Log: Story 6.1.2.6.2 + +**Generated:** 2025-11-16T14:30:00.000Z +**Agent:** dev +**Story:** docs/stories/story-6.1.2.6.2.md +**Status:** completed +**Execution Time:** 15m 30s +**Decisions Made:** 3 autonomous decisions +`; + + fs.readFile.mockImplementation((filePath) => { + if (filePath.endsWith('core-config.yaml')) { + return Promise.resolve('decisionLogging:\n enabled: true\n location: .ai/\n indexFile: decision-logs-index.md'); + } + if (filePath.includes('decision-log-6.1.2.6.2')) { + return Promise.resolve(mockLogContent); + } + // Index doesn't exist yet + return Promise.reject(new Error('ENOENT')); + }); + + fs.mkdir.mockResolvedValue(); + fs.writeFile.mockResolvedValue(); + + const consoleSpy = jest.spyOn(console, 'log').mockImplementation(); + + const indexPath = await addToIndex('.ai/decision-log-6.1.2.6.2.md'); + + expect(indexPath).toBe(path.join('.ai', 'decision-logs-index.md')); + expect(fs.mkdir).toHaveBeenCalledWith('.ai/', { recursive: true }); 
+ expect(fs.writeFile).toHaveBeenCalled(); + + const writeCall = fs.writeFile.mock.calls[0]; + const indexContent = writeCall[1]; + + expect(indexContent).toContain('# Decision Log Index'); + expect(indexContent).toContain('6.1.2.6.2'); + expect(indexContent).toContain('Total logs: 1'); + + consoleSpy.mockRestore(); + }); + + it('should update existing index file', async () => { + const yaml = require('js-yaml'); + jest.spyOn(yaml, 'load').mockReturnValue({ + decisionLogging: { + enabled: true, + location: '.ai/', + indexFile: 'decision-logs-index.md', + }, + }); + + const existingIndex = `# Decision Log Index + +Total logs: 1 + +| Story ID | Date | Agent | Status | Duration | Decisions | Log File | +|----------|------|-------|--------|----------|-----------|----------| +| old-story | 2025-11-10 | dev | completed | 5m | 1 | [View](decision-log-old-story.md) | +`; + + const newLogContent = `# Decision Log: Story new-story + +**Generated:** 2025-11-16T14:30:00.000Z +**Agent:** dev +**Story:** docs/stories/new-story.md +**Status:** completed +**Execution Time:** 10m +**Decisions Made:** 2 autonomous decisions +`; + + fs.readFile.mockImplementation((filePath) => { + if (filePath.endsWith('core-config.yaml')) { + return Promise.resolve('decisionLogging:\n enabled: true'); + } + if (filePath.includes('new-story')) { + return Promise.resolve(newLogContent); + } + if (filePath.includes('index')) { + return Promise.resolve(existingIndex); + } + return Promise.reject(new Error('File not found')); + }); + + fs.mkdir.mockResolvedValue(); + fs.writeFile.mockResolvedValue(); + + await addToIndex('.ai/decision-log-new-story.md'); + + const writeCall = fs.writeFile.mock.calls[0]; + const updatedIndexContent = writeCall[1]; + + expect(updatedIndexContent).toContain('new-story'); + expect(updatedIndexContent).toContain('old-story'); + expect(updatedIndexContent).toContain('Total logs: 2'); + }); + + it('should replace existing entry for same story ID', async () => { + const yaml = 
require('js-yaml'); + jest.spyOn(yaml, 'load').mockReturnValue({ + decisionLogging: { + enabled: true, + location: '.ai/', + indexFile: 'decision-logs-index.md', + }, + }); + + const existingIndex = `# Decision Log Index + +Total logs: 1 + +| Story ID | Date | Agent | Status | Duration | Decisions | Log File | +|----------|------|-------|--------|----------|-----------|----------| +| 6.1.2.6.2 | 2025-11-15 | dev | completed | 5m | 1 | [View](decision-log-6.1.2.6.2.md) | +`; + + const updatedLogContent = `# Decision Log: Story 6.1.2.6.2 + +**Generated:** 2025-11-16T14:30:00.000Z +**Agent:** dev +**Story:** docs/stories/story-6.1.2.6.2.md +**Status:** completed +**Execution Time:** 10m +**Decisions Made:** 3 autonomous decisions +`; + + fs.readFile.mockImplementation((filePath) => { + if (filePath.endsWith('core-config.yaml')) { + return Promise.resolve('decisionLogging:\n enabled: true'); + } + if (filePath.includes('6.1.2.6.2')) { + return Promise.resolve(updatedLogContent); + } + if (filePath.includes('index')) { + return Promise.resolve(existingIndex); + } + return Promise.reject(new Error('File not found')); + }); + + fs.mkdir.mockResolvedValue(); + fs.writeFile.mockResolvedValue(); + + await addToIndex('.ai/decision-log-6.1.2.6.2.md'); + + const writeCall = fs.writeFile.mock.calls[0]; + const updatedIndexContent = writeCall[1]; + + // Should still have only 1 log (old entry replaced) + expect(updatedIndexContent).toContain('Total logs: 1'); + expect(updatedIndexContent).toContain('6.1.2.6.2'); + expect(updatedIndexContent).toContain('2025-11-16'); + expect(updatedIndexContent).toContain('10m'); + expect(updatedIndexContent).toContain('3'); + }); + + it('should return null when decision logging is disabled', async () => { + const yaml = require('js-yaml'); + jest.spyOn(yaml, 'load').mockReturnValue({ + decisionLogging: { + enabled: false, + }, + }); + + fs.readFile.mockResolvedValue('decisionLogging:\n enabled: false'); + + const consoleSpy = jest.spyOn(console, 
'log').mockImplementation(); + + const result = await addToIndex('.ai/decision-log-test.md'); + + expect(result).toBeNull(); + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('disabled')); + + consoleSpy.mockRestore(); + }); + }); + + describe('rebuildIndex', () => { + it('should rebuild index from all log files in directory', async () => { + const yaml = require('js-yaml'); + jest.spyOn(yaml, 'load').mockReturnValue({ + decisionLogging: { + enabled: true, + location: '.ai/', + indexFile: 'decision-logs-index.md', + }, + }); + + fs.readdir.mockResolvedValue([ + 'decision-log-6.1.2.6.2.md', + 'decision-log-6.1.2.6.1.md', + 'other-file.md', + 'decision-logs-index.md', + ]); + + const log1Content = `**Generated:** 2025-11-16T14:30:00.000Z +**Agent:** dev +**Story:** story1.md +**Status:** completed +**Execution Time:** 10m +**Decisions Made:** 3`; + + const log2Content = `**Generated:** 2025-11-15T10:00:00.000Z +**Agent:** dev +**Story:** story2.md +**Status:** completed +**Execution Time:** 5m +**Decisions Made:** 1`; + + fs.readFile.mockImplementation((filePath) => { + if (filePath.endsWith('core-config.yaml')) { + return Promise.resolve('decisionLogging:\n enabled: true'); + } + if (filePath.includes('6.1.2.6.2')) { + return Promise.resolve(log1Content); + } + if (filePath.includes('6.1.2.6.1')) { + return Promise.resolve(log2Content); + } + return Promise.reject(new Error('File not found')); + }); + + fs.writeFile.mockResolvedValue(); + + const consoleSpy = jest.spyOn(console, 'log').mockImplementation(); + + await rebuildIndex(); + + expect(fs.readdir).toHaveBeenCalledWith('.ai/'); + + const writeCall = fs.writeFile.mock.calls[0]; + const indexContent = writeCall[1]; + + expect(indexContent).toContain('Total logs: 2'); + expect(indexContent).toContain('6.1.2.6.2'); + expect(indexContent).toContain('6.1.2.6.1'); + + consoleSpy.mockRestore(); + }); + }); +}); + +``` + +================================================== +📄 
tests/unit/validate-codex-integration.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { + validateCodexIntegration, +} = require('../../.aios-core/infrastructure/scripts/validate-codex-integration'); + +describe('validate-codex-integration', () => { + let tmpRoot; + + function write(file, content = '') { + fs.mkdirSync(path.dirname(file), { recursive: true }); + fs.writeFileSync(file, content, 'utf8'); + } + + beforeEach(() => { + tmpRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'validate-codex-')); + }); + + afterEach(() => { + fs.rmSync(tmpRoot, { recursive: true, force: true }); + }); + + it('passes when required Codex files exist', () => { + write(path.join(tmpRoot, 'AGENTS.md'), '# rules'); + write(path.join(tmpRoot, '.codex', 'agents', 'dev.md'), '# dev'); + write(path.join(tmpRoot, '.codex', 'skills', 'aios-dev', 'SKILL.md'), '# skill'); + write(path.join(tmpRoot, '.aios-core', 'development', 'agents', 'dev.md'), '# dev'); + + const result = validateCodexIntegration({ projectRoot: tmpRoot }); + expect(result.ok).toBe(true); + expect(result.errors).toEqual([]); + }); + + it('fails when codex agents/skills dirs are missing', () => { + const result = validateCodexIntegration({ projectRoot: tmpRoot }); + expect(result.ok).toBe(false); + expect(result.errors.some((e) => e.includes('Missing Codex agents dir'))).toBe(true); + expect(result.errors.some((e) => e.includes('Missing Codex skills dir'))).toBe(true); + expect(result.warnings.some((w) => w.includes('Codex instructions file not found yet'))).toBe(true); + }); +}); + +``` + +================================================== +📄 tests/unit/validate-parity.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); +const { + runParityValidation, + diffCompatibilityContracts, 
+} = require('../../.aios-core/infrastructure/scripts/validate-parity'); + +describe('validate-parity', () => { + function createMockProjectRoot() { + const root = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-parity-')); + fs.mkdirSync(path.join(root, 'docs'), { recursive: true }); + fs.writeFileSync( + path.join(root, 'docs', 'ide-integration.md'), + [ + '| IDE/CLI | Overall Status |', + '| --- | --- |', + '| Claude Code | Works |', + '| Gemini CLI | Works |', + '| Codex CLI | Limited |', + '| Cursor | Limited |', + '| GitHub Copilot | Limited |', + '| AntiGravity | Limited |', + ].join('\n'), + 'utf8', + ); + return root; + } + + function buildMockContract() { + return { + release: 'AIOS 4.0.4', + global_required_checks: ['paths'], + ide_matrix: [ + { ide: 'claude-code', display_name: 'Claude Code', expected_status: 'Works', required_checks: ['claude-sync', 'claude-integration'] }, + { ide: 'gemini', display_name: 'Gemini CLI', expected_status: 'Works', required_checks: ['gemini-sync', 'gemini-integration'] }, + { ide: 'codex', display_name: 'Codex CLI', expected_status: 'Limited', required_checks: ['codex-sync', 'codex-integration', 'codex-skills'] }, + { ide: 'cursor', display_name: 'Cursor', expected_status: 'Limited', required_checks: ['cursor-sync'] }, + { ide: 'github-copilot', display_name: 'GitHub Copilot', expected_status: 'Limited', required_checks: ['github-copilot-sync'] }, + { ide: 'antigravity', display_name: 'AntiGravity', expected_status: 'Limited', required_checks: ['antigravity-sync'] }, + ], + }; + } + + it('passes when all checks return ok', () => { + const projectRoot = createMockProjectRoot(); + const ok = { ok: true, errors: [], warnings: [] }; + const result = runParityValidation( + { projectRoot }, + { + runSyncValidate: () => ok, + validateClaudeIntegration: () => ok, + validateCodexIntegration: () => ok, + validateGeminiIntegration: () => ok, + validateCodexSkills: () => ok, + validatePaths: () => ok, + loadCompatibilityContract: () => 
buildMockContract(), + }, + ); + + expect(result.ok).toBe(true); + expect(result.checks).toHaveLength(11); + expect(result.checks.every((c) => c.ok)).toBe(true); + expect(result.contractViolations).toHaveLength(0); + }); + + it('fails when any check fails', () => { + const projectRoot = createMockProjectRoot(); + let count = 0; + const result = runParityValidation( + { projectRoot }, + { + runSyncValidate: () => ({ ok: true, errors: [], warnings: [] }), + validateClaudeIntegration: () => ({ ok: true, errors: [], warnings: [] }), + validateCodexIntegration: () => { + count += 1; + return count === 1 + ? { ok: false, errors: ['broken codex integration'], warnings: [] } + : { ok: true, errors: [], warnings: [] }; + }, + validateGeminiIntegration: () => ({ ok: true, errors: [], warnings: [] }), + validateCodexSkills: () => ({ ok: true, errors: [], warnings: [] }), + validatePaths: () => ({ ok: true, errors: [], warnings: [] }), + loadCompatibilityContract: () => buildMockContract(), + }, + ); + + expect(result.ok).toBe(false); + expect(result.checks.some((c) => c.id === 'codex-integration' && c.ok === false)).toBe(true); + }); + + it('fails when docs matrix claim diverges from contract', () => { + const projectRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-parity-mismatch-')); + fs.mkdirSync(path.join(projectRoot, 'docs'), { recursive: true }); + fs.writeFileSync(path.join(projectRoot, 'docs', 'ide-integration.md'), '| IDE/CLI | Overall Status |\n| --- | --- |\n| Codex CLI | Works |\n', 'utf8'); + + const ok = { ok: true, errors: [], warnings: [] }; + const result = runParityValidation( + { projectRoot }, + { + runSyncValidate: () => ok, + validateClaudeIntegration: () => ok, + validateCodexIntegration: () => ok, + validateGeminiIntegration: () => ok, + validateCodexSkills: () => ok, + validatePaths: () => ok, + loadCompatibilityContract: () => buildMockContract(), + }, + ); + + expect(result.ok).toBe(false); + 
expect(result.contractViolations.length).toBeGreaterThan(0); + }); + + it('generates diff between contract versions', () => { + const previous = buildMockContract(); + const current = buildMockContract(); + current.release = 'AIOS 4.1.0'; + current.global_required_checks = ['paths', 'codex-skills']; + current.ide_matrix = current.ide_matrix.map((ide) => { + if (ide.ide === 'codex') { + return { ...ide, expected_status: 'Works' }; + } + return ide; + }); + + const diff = diffCompatibilityContracts(current, previous); + + expect(diff).toBeDefined(); + expect(diff.release_changed).toBe(true); + expect(diff.has_changes).toBe(true); + expect(diff.global_required_checks.added).toContain('codex-skills'); + expect(diff.ide_changes.some((change) => change.ide === 'codex')).toBe(true); + }); + + it('includes contractDiff in parity result when --diff path is provided', () => { + const projectRoot = createMockProjectRoot(); + const ok = { ok: true, errors: [], warnings: [] }; + const result = runParityValidation( + { projectRoot, diffPath: '.aios-core/infrastructure/contracts/compatibility/aios-4.0.3.yaml' }, + { + runSyncValidate: () => ok, + validateClaudeIntegration: () => ok, + validateCodexIntegration: () => ok, + validateGeminiIntegration: () => ok, + validateCodexSkills: () => ok, + validatePaths: () => ok, + loadCompatibilityContract: (contractPath) => { + if (contractPath.endsWith('aios-4.0.3.yaml')) { + return { + release: 'AIOS 4.0.3', + global_required_checks: ['paths'], + ide_matrix: [ + { ide: 'codex', display_name: 'Codex CLI', expected_status: 'Experimental', required_checks: ['codex-sync'] }, + ], + }; + } + return buildMockContract(); + }, + }, + ); + + expect(result.ok).toBe(true); + expect(result.contractDiff).toBeDefined(); + expect(result.contractDiff.from_release).toBe('AIOS 4.0.3'); + expect(result.contractDiff.to_release).toBe('AIOS 4.0.4'); + expect(result.contractDiff.has_changes).toBe(true); + }); +}); + +``` + 
+================================================== +📄 tests/unit/cli.test.js +================================================== +```js +/** + * STORY-1.1: CLI Entry Point Unit Tests + * Tests for bin/aios.js command routing and version checking + */ + +const fs = require('fs'); +const path = require('path'); +const { spawn } = require('child_process'); + +describe('CLI Entry Point', () => { + const cliPath = path.join(__dirname, '../../bin/aios.js'); + + describe('Node.js Version Check', () => { + it('should have engines field in package.json requiring Node 18+', () => { + // Version check is now enforced via package.json engines field + const packageJsonPath = path.join(__dirname, '../../package.json'); + const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8')); + + expect(packageJson.engines).toBeDefined(); + expect(packageJson.engines.node).toMatch(/>=\s*18/); + }); + + it('should have proper module structure', () => { + const cliPath = path.join(__dirname, '../../bin/aios.js'); + const cliContent = fs.readFileSync(cliPath, 'utf8'); + + // Verify CLI has proper structure + expect(cliContent).toContain("require('path')"); + expect(cliContent).toContain("require('fs')"); + expect(cliContent).toContain('process.argv'); + }); + }); + + describe('Command Routing', () => { + it('should handle --version flag', (done) => { + const child = spawn('node', [cliPath, '--version']); + let output = ''; + + child.stdout.on('data', (data) => { + output += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(0); + expect(output).toMatch(/\d+\.\d+\.\d+/); + done(); + }); + }); + + it('should handle --help flag', (done) => { + const child = spawn('node', [cliPath, '--help']); + let output = ''; + + child.stdout.on('data', (data) => { + output += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(0); + expect(output).toContain('USAGE'); + expect(output).toContain('npx aios-core@latest'); + done(); + }); + }); + + 
it('should handle info command', (done) => { + const child = spawn('node', [cliPath, 'info']); + let output = ''; + + child.stdout.on('data', (data) => { + output += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(0); + expect(output).toContain('System Information'); + done(); + }); + }); + + it('should handle doctor command', (done) => { + const child = spawn('node', [cliPath, 'doctor']); + let output = ''; + let errors = ''; + + child.stdout.on('data', (data) => { + output += data.toString(); + }); + + child.stderr.on('data', (data) => { + errors += data.toString(); + }); + + child.on('close', (code) => { + // Doctor may exit with 0 or 1 depending on system state + const combined = output + errors; + expect(combined).toContain('Diagnostics'); + done(); + }); + }); + + it('should error on unknown command', (done) => { + const child = spawn('node', [cliPath, 'unknown-command']); + let errors = ''; + + child.stderr.on('data', (data) => { + errors += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(1); + expect(errors).toContain('Unknown command'); + expect(errors).toContain('unknown-command'); + done(); + }); + }); + }); + + describe('Shebang', () => { + it('should have proper shebang for cross-platform compatibility', () => { + const cliPath = path.join(__dirname, '../../bin/aios.js'); + const cliContent = fs.readFileSync(cliPath, 'utf8'); + + expect(cliContent.startsWith('#!/usr/bin/env node')).toBe(true); + }); + }); + + describe('Error Handling', () => { + it('should show usage info when unknown command provided', (done) => { + const child = spawn('node', [cliPath, 'invalid']); + let errors = ''; + let output = ''; + + child.stderr.on('data', (data) => { + errors += data.toString(); + }); + + child.stdout.on('data', (data) => { + output += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(1); + const combined = errors + output; + expect(combined).toContain('Unknown command'); + 
expect(combined).toContain('--help'); + done(); + }); + }, 15000); // Increase timeout to 15s + }); +}); + +``` + +================================================== +📄 tests/unit/info-cli.test.js +================================================== +```js +/** + * Info CLI Unit Tests + * + * Tests for the worker info command functionality. + * + * @story 2.8-2.9 - Discovery CLI Info & List + */ + +const path = require('path'); + +// Test modules +const { + formatInfo, + formatInfoPretty, + formatInfoJSON, + formatInfoYAML, + formatNotFoundError, + wrapText, +} = require('../../.aios-core/cli/commands/workers/formatters/info-formatter'); +const { findSuggestions, findRelatedWorkers } = require('../../.aios-core/cli/commands/workers/info'); +const { levenshteinDistance } = require('../../.aios-core/cli/commands/workers/search-keyword'); + +// Mock worker for testing +const mockWorker = { + id: 'json-csv-transformer', + name: 'JSON to CSV Transformer', + description: 'Converts JSON data to CSV format with configurable column mapping and delimiter options.', + category: 'data', + subcategory: 'transformation', + inputs: ['json (object|array) - JSON data to transform'], + outputs: ['csv (string) - CSV formatted data'], + tags: ['etl', 'data', 'json', 'csv', 'transformation'], + path: '.aios-core/development/tasks/data/json-csv-transformer.md', + taskFormat: 'TASK-FORMAT-V1', + executorTypes: ['Worker', 'Agent'], + performance: { + avgDuration: '50ms', + cacheable: true, + parallelizable: true, + }, + agents: ['dev'], + metadata: { + source: 'development', + addedVersion: '1.0.0', + }, +}; + +const mockRelatedWorkers = [ + { id: 'csv-json-transformer', name: 'CSV to JSON Transformer' }, + { id: 'json-validator', name: 'JSON Schema Validator' }, +]; + +describe('Info Formatter - Pretty Output', () => { + test('formatInfoPretty includes worker name', () => { + const output = formatInfoPretty(mockWorker, {}); + expect(output).toContain('JSON to CSV Transformer'); + }); + + 
test('formatInfoPretty includes ID', () => { + const output = formatInfoPretty(mockWorker, {}); + expect(output).toContain('json-csv-transformer'); + }); + + test('formatInfoPretty includes category and subcategory', () => { + const output = formatInfoPretty(mockWorker, {}); + expect(output).toContain('data'); + expect(output).toContain('transformation'); + }); + + test('formatInfoPretty includes description', () => { + const output = formatInfoPretty(mockWorker, {}); + expect(output).toContain('Converts JSON data to CSV format'); + }); + + test('formatInfoPretty includes performance metrics', () => { + const output = formatInfoPretty(mockWorker, {}); + expect(output).toContain('50ms'); + expect(output).toContain('Cacheable'); + expect(output).toContain('Parallelizable'); + }); + + test('formatInfoPretty includes tags', () => { + const output = formatInfoPretty(mockWorker, {}); + expect(output).toContain('etl'); + expect(output).toContain('json'); + expect(output).toContain('csv'); + }); + + test('formatInfoPretty includes usage example', () => { + const output = formatInfoPretty(mockWorker, {}); + expect(output).toContain('Usage Example'); + expect(output).toContain('aios task run json-csv-transformer'); + }); + + test('formatInfoPretty includes related workers when provided', () => { + const output = formatInfoPretty(mockWorker, { relatedWorkers: mockRelatedWorkers }); + expect(output).toContain('Related Workers'); + expect(output).toContain('csv-json-transformer'); + expect(output).toContain('json-validator'); + }); + + test('formatInfoPretty shows verbose debug info when enabled', () => { + const output = formatInfoPretty(mockWorker, { verbose: true }); + expect(output).toContain('[Debug Info]'); + expect(output).toContain('development'); + }); +}); + +describe('Info Formatter - JSON Output', () => { + test('formatInfoJSON returns valid JSON', () => { + const output = formatInfoJSON(mockWorker, {}); + expect(() => JSON.parse(output)).not.toThrow(); + }); + + 
test('formatInfoJSON includes all required fields', () => { + const output = formatInfoJSON(mockWorker, {}); + const parsed = JSON.parse(output); + expect(parsed.id).toBe('json-csv-transformer'); + expect(parsed.name).toBe('JSON to CSV Transformer'); + expect(parsed.category).toBe('data'); + expect(parsed.subcategory).toBe('transformation'); + expect(parsed.tags).toEqual(['etl', 'data', 'json', 'csv', 'transformation']); + expect(parsed.performance).toBeDefined(); + expect(parsed.metadata).toBeDefined(); + }); + + test('formatInfoJSON includes related workers', () => { + const output = formatInfoJSON(mockWorker, { relatedWorkers: mockRelatedWorkers }); + const parsed = JSON.parse(output); + expect(parsed.relatedWorkers).toContain('csv-json-transformer'); + expect(parsed.relatedWorkers).toContain('json-validator'); + }); +}); + +describe('Info Formatter - YAML Output', () => { + test('formatInfoYAML returns valid YAML', () => { + const output = formatInfoYAML(mockWorker, {}); + expect(output).toContain('id: json-csv-transformer'); + expect(output).toContain('name: JSON to CSV Transformer'); + }); + + test('formatInfoYAML includes all required fields', () => { + const output = formatInfoYAML(mockWorker, {}); + expect(output).toContain('category: data'); + expect(output).toContain('subcategory: transformation'); + expect(output).toContain('tags:'); + expect(output).toContain('- etl'); + }); +}); + +describe('Info Formatter - Format Selection', () => { + test('formatInfo with format=json returns JSON', () => { + const output = formatInfo(mockWorker, { format: 'json' }); + expect(() => JSON.parse(output)).not.toThrow(); + }); + + test('formatInfo with format=yaml returns YAML', () => { + const output = formatInfo(mockWorker, { format: 'yaml' }); + expect(output).toContain('id: json-csv-transformer'); + }); + + test('formatInfo with format=pretty returns pretty output', () => { + const output = formatInfo(mockWorker, { format: 'pretty' }); + 
expect(output).toContain('📦'); + }); + + test('formatInfo defaults to pretty format', () => { + const output = formatInfo(mockWorker, {}); + expect(output).toContain('📦'); + }); +}); + +describe('Not Found Error Formatter', () => { + test('formatNotFoundError includes invalid ID', () => { + const output = formatNotFoundError('invalid-worker', []); + expect(output).toContain("Worker 'invalid-worker' not found"); + }); + + test('formatNotFoundError includes suggestions', () => { + const suggestions = [ + { id: 'json-validator' }, + { id: 'json-transformer' }, + ]; + const output = formatNotFoundError('json-validtor', suggestions); + expect(output).toContain('Did you mean'); + expect(output).toContain('json-validator'); + expect(output).toContain('json-transformer'); + }); + + test('formatNotFoundError includes search hint', () => { + const output = formatNotFoundError('invalid', []); + expect(output).toContain('aios workers search invalid'); + }); +}); + +describe('Text Wrapping', () => { + test('wrapText wraps long text', () => { + const longText = 'This is a very long description that should be wrapped at a reasonable width for display in the terminal.'; + const wrapped = wrapText(longText, 40); + expect(wrapped.length).toBeGreaterThan(1); + expect(wrapped.every(line => line.length <= 40)).toBe(true); + }); + + test('wrapText handles short text', () => { + const shortText = 'Short text'; + const wrapped = wrapText(shortText, 40); + expect(wrapped.length).toBe(1); + expect(wrapped[0]).toBe('Short text'); + }); + + test('wrapText handles empty text', () => { + const wrapped = wrapText('', 40); + expect(wrapped).toEqual(['']); + }); + + test('wrapText handles null text', () => { + const wrapped = wrapText(null, 40); + expect(wrapped).toEqual(['']); + }); +}); + +describe('Levenshtein Distance for Suggestions', () => { + test('similar IDs have low distance', () => { + const distance = levenshteinDistance('json-validator', 'json-validtor'); + 
expect(distance).toBeLessThanOrEqual(2); + }); + + test('different IDs have high distance', () => { + const distance = levenshteinDistance('json-validator', 'api-generator'); + expect(distance).toBeGreaterThan(5); + }); + + test('identical IDs have zero distance', () => { + const distance = levenshteinDistance('json-validator', 'json-validator'); + expect(distance).toBe(0); + }); +}); + +describe('Performance Requirements', () => { + test('formatInfoPretty completes quickly', () => { + const startTime = Date.now(); + for (let i = 0; i < 100; i++) { + formatInfoPretty(mockWorker, { relatedWorkers: mockRelatedWorkers }); + } + const duration = Date.now() - startTime; + // Should format 100 workers in under 100ms + expect(duration).toBeLessThan(100); + }); + + test('formatInfoJSON completes quickly', () => { + const startTime = Date.now(); + for (let i = 0; i < 100; i++) { + formatInfoJSON(mockWorker, { relatedWorkers: mockRelatedWorkers }); + } + const duration = Date.now() - startTime; + // Should format 100 workers in under 50ms + expect(duration).toBeLessThan(50); + }); +}); + +``` + +================================================== +📄 tests/unit/migration-execute.test.js +================================================== +```js +/** + * Migration Execute Module Tests + * + * @story 2.14 - Migration Script v2.0 → v2.1 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { + createModuleDirectories, + migrateModule, + executeMigration, + saveMigrationState, + loadMigrationState, + clearMigrationState, +} = require('../../.aios-core/cli/commands/migrate/execute'); +const { analyzeMigrationPlan } = require('../../.aios-core/cli/commands/migrate/analyze'); + +describe('Migration Execute Module', () => { + let testDir; + + beforeEach(async () => { + testDir = path.join(os.tmpdir(), `aios-execute-test-${Date.now()}`); + await fs.promises.mkdir(testDir, { recursive: true }); + }); + + afterEach(async () => { + if (testDir 
&& fs.existsSync(testDir)) { + await fs.promises.rm(testDir, { recursive: true, force: true }); + } + }); + + describe('createModuleDirectories', () => { + it('should create all four module directories', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(aiosCoreDir, { recursive: true }); + + const result = await createModuleDirectories(aiosCoreDir); + + expect(fs.existsSync(path.join(aiosCoreDir, 'core'))).toBe(true); + expect(fs.existsSync(path.join(aiosCoreDir, 'development'))).toBe(true); + expect(fs.existsSync(path.join(aiosCoreDir, 'product'))).toBe(true); + expect(fs.existsSync(path.join(aiosCoreDir, 'infrastructure'))).toBe(true); + expect(result.modules).toContain('core'); + }); + + it('should not fail if directories already exist', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'core'), { recursive: true }); + + const result = await createModuleDirectories(aiosCoreDir); + + expect(result.created).not.toContain(path.join(aiosCoreDir, 'core')); + }); + }); + + describe('migrateModule', () => { + it('should migrate files to module directory', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'development'), { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'agents', 'dev.md'), 'Agent'); + + const moduleData = { + files: [{ + sourcePath: path.join(aiosCoreDir, 'agents', 'dev.md'), + relativePath: path.join('agents', 'dev.md'), + size: 5, + }], + }; + + const result = await migrateModule(moduleData, 'development', aiosCoreDir); + + expect(result.migratedFiles).toHaveLength(1); + expect(fs.existsSync(path.join(aiosCoreDir, 'development', 'agents', 'dev.md'))).toBe(true); + }); + + it('should support dry run mode', async () => { + const aiosCoreDir = path.join(testDir, 
'.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'development'), { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'agents', 'dev.md'), 'Agent'); + + const moduleData = { + files: [{ + sourcePath: path.join(aiosCoreDir, 'agents', 'dev.md'), + relativePath: path.join('agents', 'dev.md'), + size: 5, + }], + }; + + const result = await migrateModule(moduleData, 'development', aiosCoreDir, { dryRun: true }); + + expect(result.migratedFiles).toHaveLength(1); + expect(result.migratedFiles[0].dryRun).toBe(true); + // File should NOT be copied in dry run + expect(fs.existsSync(path.join(aiosCoreDir, 'development', 'agents', 'dev.md'))).toBe(false); + }); + }); + + describe('executeMigration', () => { + it('should execute full migration', async () => { + // Create v2.0 structure + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'registry'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'cli'), { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'agents', 'dev.md'), 'Agent'); + await fs.promises.writeFile(path.join(aiosCoreDir, 'registry', 'index.js'), 'Registry'); + await fs.promises.writeFile(path.join(aiosCoreDir, 'cli', 'index.js'), 'CLI'); + + const plan = await analyzeMigrationPlan(testDir); + const result = await executeMigration(plan, { cleanupOriginals: false }); + + expect(result.success).toBe(true); + expect(result.totalFiles).toBe(3); + expect(fs.existsSync(path.join(aiosCoreDir, 'development', 'agents', 'dev.md'))).toBe(true); + expect(fs.existsSync(path.join(aiosCoreDir, 'core', 'registry', 'index.js'))).toBe(true); + expect(fs.existsSync(path.join(aiosCoreDir, 'product', 'cli', 'index.js'))).toBe(true); + }); + + it('should return error for non-migratable 
plan', async () => { + const plan = { canMigrate: false, error: 'Test error' }; + const result = await executeMigration(plan); + + expect(result.success).toBe(false); + expect(result.error).toBe('Test error'); + }); + + it('should support dry run', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'agents', 'dev.md'), 'Agent'); + + const plan = await analyzeMigrationPlan(testDir); + const result = await executeMigration(plan, { dryRun: true }); + + expect(result.dryRun).toBe(true); + // Directories should not be created in dry run + expect(fs.existsSync(path.join(aiosCoreDir, 'development'))).toBe(false); + }); + }); + + describe('Migration State', () => { + it('should save and load migration state', async () => { + await saveMigrationState(testDir, { phase: 'test', value: 123 }); + + const state = await loadMigrationState(testDir); + + expect(state.phase).toBe('test'); + expect(state.value).toBe(123); + expect(state.timestamp).toBeTruthy(); + }); + + it('should return null if no state exists', async () => { + const state = await loadMigrationState(testDir); + expect(state).toBeNull(); + }); + + it('should clear migration state', async () => { + await saveMigrationState(testDir, { phase: 'test' }); + await clearMigrationState(testDir); + + const state = await loadMigrationState(testDir); + expect(state).toBeNull(); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/codex-skills-validate.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { syncSkills } = require('../../.aios-core/infrastructure/scripts/codex-skills-sync/index'); +const { validateCodexSkills } = require('../../.aios-core/infrastructure/scripts/codex-skills-sync/validate'); + 
+describe('Codex Skills Validator', () => { + let tmpRoot; + let sourceDir; + let skillsDir; + let expectedAgentCount; + + beforeEach(() => { + tmpRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-codex-validate-')); + sourceDir = path.join(process.cwd(), '.aios-core', 'development', 'agents'); + skillsDir = path.join(tmpRoot, '.codex', 'skills'); + expectedAgentCount = fs.readdirSync(sourceDir).filter(name => name.endsWith('.md')).length; + }); + + afterEach(() => { + fs.rmSync(tmpRoot, { recursive: true, force: true }); + }); + + it('passes when all generated skills are present and valid', () => { + syncSkills({ sourceDir, localSkillsDir: skillsDir, dryRun: false }); + + const result = validateCodexSkills({ + projectRoot: tmpRoot, + sourceDir, + skillsDir, + strict: true, + }); + + expect(result.ok).toBe(true); + expect(result.checked).toBe(expectedAgentCount); + expect(result.errors).toEqual([]); + }); + + it('fails when a generated skill is missing', () => { + syncSkills({ sourceDir, localSkillsDir: skillsDir, dryRun: false }); + fs.rmSync(path.join(skillsDir, 'aios-architect', 'SKILL.md'), { force: true }); + + const result = validateCodexSkills({ + projectRoot: tmpRoot, + sourceDir, + skillsDir, + strict: true, + }); + + expect(result.ok).toBe(false); + expect(result.errors.some(error => error.includes('Missing skill file'))).toBe(true); + }); + + it('fails when greeting command is removed from a skill', () => { + syncSkills({ sourceDir, localSkillsDir: skillsDir, dryRun: false }); + const target = path.join(skillsDir, 'aios-dev', 'SKILL.md'); + const original = fs.readFileSync(target, 'utf8'); + fs.writeFileSync(target, original.replace('generate-greeting.js dev', 'generate-greeting.js'), 'utf8'); + + const result = validateCodexSkills({ + projectRoot: tmpRoot, + sourceDir, + skillsDir, + strict: true, + }); + + expect(result.ok).toBe(false); + expect(result.errors.some(error => error.includes('missing canonical greeting command'))).toBe(true); + }); + + 
it('fails in strict mode when orphaned aios-* skill dir exists', () => { + syncSkills({ sourceDir, localSkillsDir: skillsDir, dryRun: false }); + const orphanPath = path.join(skillsDir, 'aios-legacy'); + fs.mkdirSync(orphanPath, { recursive: true }); + fs.writeFileSync(path.join(orphanPath, 'SKILL.md'), '# legacy', 'utf8'); + + const result = validateCodexSkills({ + projectRoot: tmpRoot, + sourceDir, + skillsDir, + strict: true, + }); + + expect(result.ok).toBe(false); + expect(result.orphaned).toContain('aios-legacy'); + }); +}); + +``` + +================================================== +📄 tests/unit/generate-greeting.test.js +================================================== +```js +/** + * Unit Tests for generate-greeting.js + * + * Tests the unified greeting generator with mocked dependencies. + * + * Part of Story 6.1.4: Unified Greeting System Integration + */ + +const assert = require('assert'); +const path = require('path'); + +// Mock session context +const mockSessionContext = { + sessionType: 'new', + message: null, + previousAgent: null, + lastCommands: [], + workflowActive: null, +}; + +// Mock project status +const mockProjectStatus = { + branch: 'main', + modifiedFiles: 5, + recentCommit: 'feat: implement unified greeting system', + currentStory: 'story-6.1.4', +}; + +describe('generate-greeting.js', () => { + describe('generateGreeting()', () => { + it('should generate greeting for valid agent', async () => { + // This is a smoke test - actual implementation would need mocking + const agentId = 'qa'; + + // Verify agent file exists + const fs = require('fs').promises; + const agentPath = path.join(process.cwd(), '.aios-core', 'development', 'agents', `${agentId}.md`); + + try { + await fs.access(agentPath); + assert.ok(true, 'Agent file exists'); + } catch (error) { + assert.fail(`Agent file not found: ${agentPath}`); + } + }); + + it('should handle missing agent gracefully', async () => { + const agentId = 'nonexistent-agent'; + + // Verify 
fallback behavior via the local generateFallbackGreeting helper
+      const expectedFallback = `✅ ${agentId} Agent ready\n\nType \`*help\` to see available commands.`;
+
+      // generateFallbackGreeting is a hoisted function declared at the bottom of this file
+      assert.strictEqual(generateFallbackGreeting(agentId), expectedFallback);
+    });
+
+    it('should complete within performance target', async () => {
+      const startTime = Date.now();
+
+      // Simulate greeting generation
+      await new Promise(resolve => setTimeout(resolve, 50));
+
+      const duration = Date.now() - startTime;
+      assert.ok(duration < 150, `Greeting took ${duration}ms, target is <150ms`);
+    });
+  });
+
+  describe('Session Context Integration', () => {
+    it('should detect new session type', () => {
+      const context = { ...mockSessionContext, sessionType: 'new' };
+      assert.strictEqual(context.sessionType, 'new');
+    });
+
+    it('should detect existing session type', () => {
+      const context = {
+        ...mockSessionContext,
+        sessionType: 'existing',
+        lastCommands: ['review'],
+      };
+      assert.strictEqual(context.sessionType, 'existing');
+      assert.strictEqual(context.lastCommands.length, 1);
+    });
+
+    it('should detect workflow session type', () => {
+      const context = {
+        ...mockSessionContext,
+        sessionType: 'workflow',
+        lastCommands: ['review', 'gate', 'apply-fixes'],
+        previousAgent: 'qa',
+      };
+      assert.strictEqual(context.sessionType, 'workflow');
+      assert.ok(context.lastCommands.length >= 3);
+    });
+  });
+
+  describe('Project Status Integration', () => {
+    it('should include git branch', () => {
+      assert.ok(mockProjectStatus.branch);
+      assert.strictEqual(typeof mockProjectStatus.branch, 'string');
+    });
+
+    it('should include modified files count', () => {
+      assert.ok(mockProjectStatus.modifiedFiles >= 0);
+      assert.strictEqual(typeof mockProjectStatus.modifiedFiles, 'number');
+    });
+
+    it('should include recent commit', () => {
+      assert.ok(mockProjectStatus.recentCommit);
+      assert.strictEqual(typeof mockProjectStatus.recentCommit, 'string');
+    });
+  });
+
+  describe('Error 
Handling', () => {
+    it('should return fallback on agent load failure', () => {
+      const fallback = generateFallbackGreeting('test-agent');
+      assert.ok(fallback.includes('test-agent'));
+      assert.ok(fallback.includes('ready'));
+    });
+
+    it('should handle timeout gracefully', async () => {
+      const timeout = 100;
+      const startTime = Date.now();
+
+      // Simulate timeout scenario
+      await new Promise(resolve => setTimeout(resolve, timeout + 10));
+
+      const duration = Date.now() - startTime;
+      assert.ok(duration >= timeout);
+    });
+  });
+});
+
+/**
+ * Helper: Generate fallback greeting
+ */
+function generateFallbackGreeting(agentId) {
+  return `✅ ${agentId} Agent ready\n\nType \`*help\` to see available commands.`;
+}
+
+// Run tests if called directly
+if (require.main === module) {
+  console.log('Running generate-greeting unit tests...\n');
+
+  // Simple test runner
+  const tests = [
+    {
+      name: 'Agent file exists',
+      fn: async () => {
+        const fs = require('fs').promises;
+        // Match the agents path used by the suites above (.aios-core/development/agents)
+        const agentPath = path.join(process.cwd(), '.aios-core', 'development', 'agents', 'qa.md');
+        await fs.access(agentPath);
+        return true;
+      },
+    },
+    {
+      name: 'Fallback greeting format',
+      fn: async () => {
+        const fallback = generateFallbackGreeting('test');
+        return fallback.includes('test') && fallback.includes('ready');
+      },
+    },
+    {
+      name: 'Session context structure',
+      fn: async () => {
+        return Object.hasOwn(mockSessionContext, 'sessionType') &&
+          Object.hasOwn(mockSessionContext, 'lastCommands');
+      },
+    },
+  ];
+
+  let passed = 0;
+  let failed = 0;
+
+  (async () => {
+    for (const test of tests) {
+      try {
+        const result = await test.fn();
+        if (result) {
+          console.log(`✅ ${test.name}`);
+          passed++;
+        } else {
+          console.log(`❌ ${test.name}`);
+          failed++;
+        }
+      } catch (error) {
+        console.log(`❌ ${test.name}: ${error.message}`);
+        failed++;
+      }
+    }
+
+    console.log(`\n${passed} passed, ${failed} failed`);
+    process.exit(failed > 0 ? 
1 : 0); + })(); +} + +module.exports = { + generateFallbackGreeting, +}; + + +``` + +================================================== +📄 tests/unit/format-duration.test.js +================================================== +```js +/** + * Unit Tests for Format Duration Utility + * Story: TEST-1 - Dashboard Demo + * + * @jest-environment node + */ + +const { formatDuration, formatDurationShort } = require('../../.aios-core/utils/format-duration'); + +describe('formatDuration', () => { + describe('basic conversions', () => { + test('should format seconds only', () => { + expect(formatDuration(1000)).toBe('1s'); + expect(formatDuration(30000)).toBe('30s'); + expect(formatDuration(59000)).toBe('59s'); + }); + + test('should format minutes and seconds', () => { + expect(formatDuration(60000)).toBe('1m'); + expect(formatDuration(61000)).toBe('1m 1s'); + expect(formatDuration(90000)).toBe('1m 30s'); + expect(formatDuration(3599000)).toBe('59m 59s'); + }); + + test('should format hours, minutes, and seconds', () => { + expect(formatDuration(3600000)).toBe('1h'); + expect(formatDuration(3661000)).toBe('1h 1m 1s'); + expect(formatDuration(7200000)).toBe('2h'); + expect(formatDuration(7325000)).toBe('2h 2m 5s'); + }); + + test('should format days', () => { + expect(formatDuration(86400000)).toBe('1d'); + expect(formatDuration(90061000)).toBe('1d 1h 1m 1s'); + expect(formatDuration(172800000)).toBe('2d'); + }); + }); + + describe('edge cases', () => { + test('should handle zero', () => { + expect(formatDuration(0)).toBe('0s'); + }); + + test('should handle negative numbers', () => { + expect(formatDuration(-1000)).toBe('-1s'); + expect(formatDuration(-3661000)).toBe('-1h 1m 1s'); + }); + + test('should handle very large numbers', () => { + const veryLarge = 1000 * 24 * 60 * 60 * 1000; // 1000 days + expect(formatDuration(veryLarge)).toBe('999d+'); + }); + + test('should handle non-numeric input', () => { + expect(formatDuration(null)).toBe('0s'); + 
expect(formatDuration(undefined)).toBe('0s'); + expect(formatDuration('invalid')).toBe('0s'); + expect(formatDuration(NaN)).toBe('0s'); + }); + + test('should handle floating point numbers', () => { + expect(formatDuration(1500)).toBe('1s'); + expect(formatDuration(1999)).toBe('1s'); + expect(formatDuration(61500)).toBe('1m 1s'); + }); + }); + + describe('sub-second values', () => { + test('should show 0s for values less than 1 second', () => { + expect(formatDuration(500)).toBe('0s'); + expect(formatDuration(999)).toBe('0s'); + }); + }); +}); + +describe('formatDurationShort', () => { + describe('basic conversions', () => { + test('should format minutes and seconds', () => { + expect(formatDurationShort(0)).toBe('0:00'); + expect(formatDurationShort(1000)).toBe('0:01'); + expect(formatDurationShort(60000)).toBe('1:00'); + expect(formatDurationShort(90000)).toBe('1:30'); + }); + + test('should format hours, minutes, and seconds', () => { + expect(formatDurationShort(3600000)).toBe('1:00:00'); + expect(formatDurationShort(3661000)).toBe('1:01:01'); + expect(formatDurationShort(7325000)).toBe('2:02:05'); + }); + + test('should pad single digits', () => { + expect(formatDurationShort(61000)).toBe('1:01'); + expect(formatDurationShort(3605000)).toBe('1:00:05'); + }); + }); + + describe('edge cases', () => { + test('should handle invalid input', () => { + expect(formatDurationShort(null)).toBe('0:00'); + expect(formatDurationShort(-1000)).toBe('0:00'); + expect(formatDurationShort(NaN)).toBe('0:00'); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/validate-paths.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { validatePaths } = require('../../.aios-core/infrastructure/scripts/validate-paths'); + +describe('Path Validator', () => { + let tmpRoot; + let skillsDir; + + function write(file, content) { 
+ fs.mkdirSync(path.dirname(file), { recursive: true }); + fs.writeFileSync(file, content, 'utf8'); + } + + beforeEach(() => { + tmpRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-path-validate-')); + skillsDir = path.join(tmpRoot, '.codex', 'skills'); + }); + + afterEach(() => { + fs.rmSync(tmpRoot, { recursive: true, force: true }); + }); + + it('passes with relative canonical paths', () => { + write(path.join(tmpRoot, 'AGENTS.md'), '# Agents\n'); + write(path.join(tmpRoot, '.aios-core', 'product', 'templates', 'ide-rules', 'codex-rules.md'), '# codex\n'); + write( + path.join(skillsDir, 'aios-dev', 'SKILL.md'), + [ + '# Skill', + 'Load .aios-core/development/agents/dev.md', + 'Run node .aios-core/development/scripts/generate-greeting.js dev', + ].join('\n'), + ); + + const result = validatePaths({ + projectRoot: tmpRoot, + skillsDir, + requiredFiles: [ + path.join(tmpRoot, 'AGENTS.md'), + path.join(tmpRoot, '.aios-core', 'product', 'templates', 'ide-rules', 'codex-rules.md'), + ], + }); + + expect(result.ok).toBe(true); + expect(result.errors).toEqual([]); + }); + + it('fails when absolute user path is found', () => { + write(path.join(tmpRoot, 'AGENTS.md'), 'Path /Users/alan/Code/aios-core'); + write(path.join(tmpRoot, '.aios-core', 'product', 'templates', 'ide-rules', 'codex-rules.md'), '# codex\n'); + write( + path.join(skillsDir, 'aios-dev', 'SKILL.md'), + [ + '# Skill', + 'Load .aios-core/development/agents/dev.md', + 'Run node .aios-core/development/scripts/generate-greeting.js dev', + ].join('\n'), + ); + + const result = validatePaths({ + projectRoot: tmpRoot, + skillsDir, + requiredFiles: [ + path.join(tmpRoot, 'AGENTS.md'), + path.join(tmpRoot, '.aios-core', 'product', 'templates', 'ide-rules', 'codex-rules.md'), + ], + }); + + expect(result.ok).toBe(false); + expect(result.errors.some(error => error.includes('forbidden absolute path'))).toBe(true); + }); + + it('fails when skill lacks canonical activation paths', () => { + write(path.join(tmpRoot, 
'AGENTS.md'), '# Agents\n'); + write(path.join(tmpRoot, '.aios-core', 'product', 'templates', 'ide-rules', 'codex-rules.md'), '# codex\n'); + write(path.join(skillsDir, 'aios-dev', 'SKILL.md'), '# Skill\nUse dev\n'); + + const result = validatePaths({ + projectRoot: tmpRoot, + skillsDir, + requiredFiles: [ + path.join(tmpRoot, 'AGENTS.md'), + path.join(tmpRoot, '.aios-core', 'product', 'templates', 'ide-rules', 'codex-rules.md'), + ], + }); + + expect(result.ok).toBe(false); + expect(result.errors.some(error => error.includes('missing canonical source path'))).toBe(true); + expect(result.errors.some(error => error.includes('missing canonical greeting script path'))).toBe(true); + }); +}); + +``` + +================================================== +📄 tests/unit/decision-log-generator.test.js +================================================== +```js +/** + * Unit Tests for Decision Log Generator + * + * Test Coverage: + * - Decision log file generation + * - Duration calculation and formatting + * - Decision list formatting + * - File list formatting + * - Test result formatting + * - Rollback information generation + * - Error handling and edge cases + * + * @see .aios-core/scripts/decision-log-generator.js + */ + +const { + generateDecisionLog, + calculateDuration, + generateDecisionsList, + generateFilesList, + generateTestsList, + generateRollbackFilesList, +} = require('../../.aios-core/development/scripts/decision-log-generator'); + +const fs = require('fs').promises; +const path = require('path'); + +describe('decision-log-generator', () => { + const testAiDir = '.ai-test'; + const originalDateNow = Date.now; + + beforeEach(async () => { + // Clean up test directory + try { + await fs.rm(testAiDir, { recursive: true, force: true }); + } catch (error) { + // Directory might not exist + } + + // Mock Date.now for consistent timestamps + Date.now = jest.fn(() => 1705406400000); // 2024-01-16 12:00:00 UTC + }); + + afterEach(async () => { + // Restore Date.now 
+ Date.now = originalDateNow; + + // Clean up test directory + try { + await fs.rm(testAiDir, { recursive: true, force: true }); + } catch (error) { + // Ignore cleanup errors + } + }); + + describe('calculateDuration', () => { + test('should format duration in hours and minutes when > 1 hour', () => { + const context = { + startTime: 1705406400000, + endTime: 1705406400000 + (3600000 * 2.5), // 2.5 hours later + }; + + const result = calculateDuration(context); + + expect(result).toBe('2h 30m'); + }); + + test('should format duration in minutes and seconds when < 1 hour', () => { + const context = { + startTime: 1705406400000, + endTime: 1705406400000 + (60000 * 5.5), // 5.5 minutes later + }; + + const result = calculateDuration(context); + + expect(result).toBe('5m 30s'); + }); + + test('should format duration in seconds when < 1 minute', () => { + const context = { + startTime: 1705406400000, + endTime: 1705406400000 + 45000, // 45 seconds later + }; + + const result = calculateDuration(context); + + expect(result).toBe('45s'); + }); + + test('should show "in progress" when endTime is missing', () => { + const context = { + startTime: 1705406400000, + // No endTime + }; + + const result = calculateDuration(context); + + expect(result).toContain('in progress'); + }); + + test('should handle zero duration', () => { + const context = { + startTime: 1705406400000, + endTime: 1705406400000, // Same time + }; + + const result = calculateDuration(context); + + expect(result).toBe('0s'); + }); + }); + + describe('generateDecisionsList', () => { + test('should generate markdown for decisions with all fields', () => { + const decisions = [ + { + timestamp: 1705406400000, + description: 'Use Axios for HTTP client', + reason: 'Better error handling', + alternatives: ['Fetch API', 'Got library'], + }, + ]; + + const result = generateDecisionsList(decisions); + + expect(result).toContain('### Decision 1: Use Axios for HTTP client'); + 
expect(result).toContain('**Timestamp:**'); + expect(result).toContain('**Reason:** Better error handling'); + expect(result).toContain('**Alternatives Considered:**'); + expect(result).toContain('- Fetch API'); + expect(result).toContain('- Got library'); + }); + + test('should generate markdown for multiple decisions', () => { + const decisions = [ + { + timestamp: 1705406400000, + description: 'Decision 1', + reason: 'Reason 1', + alternatives: [], + }, + { + timestamp: 1705406400000, + description: 'Decision 2', + reason: 'Reason 2', + alternatives: ['Alt 1'], + }, + ]; + + const result = generateDecisionsList(decisions); + + expect(result).toContain('### Decision 1: Decision 1'); + expect(result).toContain('### Decision 2: Decision 2'); + expect(result).toContain('**Reason:** Reason 1'); + expect(result).toContain('**Reason:** Reason 2'); + }); + + test('should handle decisions without alternatives', () => { + const decisions = [ + { + timestamp: 1705406400000, + description: 'Simple decision', + reason: 'Only one option', + alternatives: [], + }, + ]; + + const result = generateDecisionsList(decisions); + + expect(result).toContain('### Decision 1: Simple decision'); + expect(result).not.toContain('**Alternatives Considered:**'); + }); + + test('should return message when no decisions provided', () => { + const result = generateDecisionsList([]); + + expect(result).toBe('*No autonomous decisions recorded.*'); + }); + + test('should handle null/undefined decisions array', () => { + expect(generateDecisionsList(null)).toBe('*No autonomous decisions recorded.*'); + expect(generateDecisionsList(undefined)).toBe('*No autonomous decisions recorded.*'); + }); + + test('should include type and priority fields when present (AC7)', () => { + const decisions = [ + { + timestamp: 1705406400000, + description: 'Architecture decision', + type: 'architecture', + priority: 'high', + reason: 'Better scalability', + alternatives: ['Monolith'], + }, + ]; + + const result = 
generateDecisionsList(decisions); + + expect(result).toContain('**Type:** architecture'); + expect(result).toContain('**Priority:** high'); + }); + }); + + describe('generateFilesList', () => { + test('should generate markdown for files with action metadata', () => { + const files = [ + { path: 'src/api.js', action: 'created' }, + { path: 'src/utils.js', action: 'modified' }, + ]; + + const result = generateFilesList(files); + + expect(result).toContain('- `src/api.js` (created)'); + expect(result).toContain('- `src/utils.js` (modified)'); + }); + + test('should handle files as simple strings', () => { + const files = ['file1.js', 'file2.js']; + + const result = generateFilesList(files); + + expect(result).toContain('- `file1.js` (modified)'); + expect(result).toContain('- `file2.js` (modified)'); + }); + + test('should use "modified" as default action', () => { + const files = [{ path: 'test.js' }]; + + const result = generateFilesList(files); + + expect(result).toContain('(modified)'); + }); + + test('should return message when no files provided', () => { + const result = generateFilesList([]); + + expect(result).toBe('*No files modified.*'); + }); + + test('should handle null/undefined files array', () => { + expect(generateFilesList(null)).toBe('*No files modified.*'); + expect(generateFilesList(undefined)).toBe('*No files modified.*'); + }); + }); + + describe('generateTestsList', () => { + test('should generate markdown for passed tests', () => { + const tests = [ + { name: 'api.test.js', passed: true, duration: 125 }, + ]; + + const result = generateTestsList(tests); + + expect(result).toContain('- ✅ PASS: `api.test.js` (125ms)'); + }); + + test('should generate markdown for failed tests with error', () => { + const tests = [ + { name: 'broken.test.js', passed: false, duration: 50, error: 'Assertion failed' }, + ]; + + const result = generateTestsList(tests); + + expect(result).toContain('- ❌ FAIL: `broken.test.js` (50ms)'); + expect(result).toContain('- 
Error: Assertion failed'); + }); + + test('should handle tests without duration', () => { + const tests = [ + { name: 'test.js', passed: true }, + ]; + + const result = generateTestsList(tests); + + expect(result).toContain('- ✅ PASS: `test.js`'); + expect(result).not.toContain('ms)'); + }); + + test('should return message when no tests provided', () => { + const result = generateTestsList([]); + + expect(result).toBe('*No tests recorded.*'); + }); + + test('should handle null/undefined tests array', () => { + expect(generateTestsList(null)).toBe('*No tests recorded.*'); + expect(generateTestsList(undefined)).toBe('*No tests recorded.*'); + }); + }); + + describe('generateRollbackFilesList', () => { + test('should generate file list for rollback', () => { + const files = [ + { path: 'src/api.js', action: 'created' }, + { path: 'src/utils.js', action: 'modified' }, + ]; + + const result = generateRollbackFilesList(files); + + expect(result).toContain('- src/api.js'); + expect(result).toContain('- src/utils.js'); + }); + + test('should handle string file paths', () => { + const files = ['file1.js', 'file2.js']; + + const result = generateRollbackFilesList(files); + + expect(result).toContain('- file1.js'); + expect(result).toContain('- file2.js'); + }); + + test('should return message when no files provided', () => { + const result = generateRollbackFilesList([]); + + expect(result).toBe('*No files to rollback.*'); + }); + + test('should handle null/undefined files array', () => { + expect(generateRollbackFilesList(null)).toBe('*No files to rollback.*'); + expect(generateRollbackFilesList(undefined)).toBe('*No files to rollback.*'); + }); + }); + + describe('generateDecisionLog', () => { + test('should create decision log file with complete context', async () => { + const storyId = 'story-6.1.2.6'; + const context = { + agentId: 'dev', + storyPath: 'docs/stories/story-6.1.2.6.md', + startTime: 1705406400000, + endTime: 1705410000000, // 1 hour later + status: 
'completed', + decisions: [ + { + timestamp: 1705408000000, + description: 'Use Axios for HTTP', + reason: 'Better error handling', + alternatives: ['Fetch API', 'Got library'], + }, + ], + filesModified: [ + { path: 'src/api.js', action: 'created' }, + ], + testsRun: [ + { name: 'api.test.js', passed: true, duration: 125 }, + ], + metrics: { + agentLoadTime: 150, + taskExecutionTime: 60000, + }, + commitBefore: 'abc123def456', + }; + + const logPath = await generateDecisionLog(storyId, context); + + // Verify file was created (normalize path for cross-platform compatibility) + expect(logPath).toBe(path.join('.ai', 'decision-log-story-6.1.2.6.md')); + + // Read and verify file contents + const content = await fs.readFile(logPath, 'utf8'); + + expect(content).toContain('# Decision Log: Story story-6.1.2.6'); + expect(content).toContain('**Agent:** dev'); + expect(content).toContain('**Story:** docs/stories/story-6.1.2.6.md'); + expect(content).toContain('**Status:** completed'); + expect(content).toContain('## Decisions Made'); + expect(content).toContain('Use Axios for HTTP'); + expect(content).toContain('### Files Modified'); + expect(content).toContain('src/api.js'); + expect(content).toContain('### Test Results'); + expect(content).toContain('api.test.js'); + expect(content).toContain('### Rollback Instructions'); + expect(content).toContain('git reset --hard abc123def456'); + expect(content).toContain('### Performance Impact'); + expect(content).toContain('Agent Load Time: 150ms'); + }); + + test('should create decision log with minimal context', async () => { + const storyId = 'story-6.1.2.6'; + const context = { + agentId: 'dev', + storyPath: 'docs/stories/story-6.1.2.6.md', + startTime: 1705406400000, + status: 'in-progress', + }; + + const logPath = await generateDecisionLog(storyId, context); + + // Normalize path for cross-platform compatibility + expect(logPath).toBe(path.join('.ai', 'decision-log-story-6.1.2.6.md')); + + const content = await 
fs.readFile(logPath, 'utf8'); + + expect(content).toContain('**Status:** in-progress'); + expect(content).toContain('**Completed:** In Progress'); + expect(content).toContain('*No autonomous decisions recorded.*'); + expect(content).toContain('*No files modified.*'); + expect(content).toContain('*No tests recorded.*'); + }); + + test('should create .ai directory if it does not exist', async () => { + // .ai directory is cleaned up in beforeEach + const storyId = 'test-story'; + const context = { + agentId: 'dev', + storyPath: 'test.md', + startTime: Date.now(), + status: 'completed', + }; + + await generateDecisionLog(storyId, context); + + // Verify .ai directory was created + const stats = await fs.stat('.ai'); + expect(stats.isDirectory()).toBe(true); + }); + + test('should handle missing endTime gracefully', async () => { + const context = { + agentId: 'dev', + storyPath: 'test.md', + startTime: 1705406400000, + status: 'running', + }; + + const logPath = await generateDecisionLog('test', context); + const content = await fs.readFile(logPath, 'utf8'); + + expect(content).toContain('**Completed:** In Progress'); + expect(content).toContain('in progress'); + }); + + test('should handle missing commitBefore with default', async () => { + const context = { + agentId: 'dev', + storyPath: 'test.md', + startTime: 1705406400000, + status: 'completed', + }; + + const logPath = await generateDecisionLog('test', context); + const content = await fs.readFile(logPath, 'utf8'); + + expect(content).toContain('git reset --hard HEAD'); + }); + + test('should handle missing metrics gracefully', async () => { + const context = { + agentId: 'dev', + storyPath: 'test.md', + startTime: 1705406400000, + status: 'completed', + // No metrics provided + }; + + const logPath = await generateDecisionLog('test', context); + const content = await fs.readFile(logPath, 'utf8'); + + expect(content).toContain('*No performance metrics recorded.*'); + }); + }); +}); + +``` + 
+================================================== +📄 tests/unit/session-context-loader.test.js +================================================== +```js +/** + * Unit Tests: Session Context Loader + * Story 6.1.2.6.2 - Agent Performance Optimization + * + * Tests session continuity and agent transition tracking + */ + +const fs = require('fs').promises; +const path = require('path'); +const SessionContextLoader = require('../../.aios-core/scripts/session-context-loader'); + +describe('SessionContextLoader', () => { + let loader; + const testSessionPath = path.join(process.cwd(), '.aios', 'session-state-test.json'); + + beforeEach(() => { + loader = new SessionContextLoader(); + // Override session state path for testing + loader.sessionStatePath = testSessionPath; + }); + + afterEach(async () => { + // Clean up test session file + await loader.clearSession(); + }); + + describe('loadContext() - Session Detection', () => { + test('detects new session when no state exists', () => { + loader.clearSession(); + + const context = loader.loadContext('dev'); + + expect(context.sessionType).toBe('new'); + expect(context.message).toBeNull(); + expect(context.previousAgent).toBeNull(); + expect(context.lastCommands).toEqual([]); + expect(context.workflowActive).toBeNull(); + }); + + test('detects existing session after agent activation', () => { + // Simulate @po activation + loader.updateSession('po', 'Pax', 'validate-story-draft'); + + // Load context for @dev + const context = loader.loadContext('dev'); + + expect(context.sessionType).toBe('existing'); + expect(context.previousAgent).toBeTruthy(); + expect(context.previousAgent.agentId).toBe('po'); + expect(context.message).toContain('Continuing from @po'); + }); + }); + + describe('loadContext() - Agent Transition Tracking', () => { + test('tracks agent transitions correctly', () => { + // Simulate agent sequence: @po → @dev + loader.updateSession('po', 'Pax', 'validate-story-draft'); + loader.updateSession('dev', 
'Dex', 'develop'); + + // Load context for @qa + const context = loader.loadContext('qa'); + + expect(context.previousAgent.agentId).toBe('dev'); + expect(context.previousAgent.agentName).toBe('Dex'); + expect(context.previousAgent.lastCommand).toBe('develop'); + }); + + test('skips same agent in previous agent detection', () => { + // Simulate: @po → @dev → @dev again + loader.updateSession('po', 'Pax', 'create-story'); + loader.updateSession('dev', 'Dex', 'develop'); + loader.updateSession('dev', 'Dex', 'run-tests'); + + // Load context for @dev + const context = loader.loadContext('dev'); + + // Previous agent should be @po, not @dev + expect(context.previousAgent.agentId).toBe('po'); + }); + + test('maintains command history (last 10 commands)', () => { + loader.updateSession('po', 'Pax', 'create-story'); + loader.updateSession('po', 'Pax', 'validate-story-draft'); + loader.updateSession('dev', 'Dex', 'develop'); + loader.updateSession('dev', 'Dex', 'run-tests'); + + const context = loader.loadContext('qa'); + + expect(context.lastCommands).toEqual([ + 'create-story', + 'validate-story-draft', + 'develop', + 'run-tests', + ]); + }); + + test('limits command history to 10 entries', () => { + // Add 15 commands + for (let i = 1; i <= 15; i++) { + loader.updateSession('dev', 'Dex', `command-${i}`); + } + + const context = loader.loadContext('qa'); + + expect(context.lastCommands.length).toBe(10); + expect(context.lastCommands[0]).toBe('command-6'); // Oldest kept + expect(context.lastCommands[9]).toBe('command-15'); // Latest + }); + }); + + describe('updateSession() - Session State Management', () => { + test('initializes session ID on first update', () => { + loader.updateSession('dev', 'Dex', 'develop'); + + const sessionState = loader.loadSessionState(); + + expect(sessionState.sessionId).toMatch(/^session-/); + expect(sessionState.startTime).toBeTruthy(); + expect(sessionState.lastActivity).toBeTruthy(); + }); + + test('maintains session ID across updates', 
() => { + loader.updateSession('po', 'Pax', 'create-story'); + const state1 = loader.loadSessionState(); + const sessionId1 = state1.sessionId; + + loader.updateSession('dev', 'Dex', 'develop'); + const state2 = loader.loadSessionState(); + const sessionId2 = state2.sessionId; + + expect(sessionId1).toBe(sessionId2); + }); + + test('updates lastActivity timestamp', async () => { + loader.updateSession('dev', 'Dex', 'develop'); + const state1 = loader.loadSessionState(); + const activity1 = state1.lastActivity; + + // Wait a bit + await new Promise(resolve => setTimeout(resolve, 10)); + + loader.updateSession('dev', 'Dex', 'run-tests'); + const state2 = loader.loadSessionState(); + const activity2 = state2.lastActivity; + + expect(activity2).toBeGreaterThan(activity1); + }); + + test('limits agent sequence to last 20 activations', () => { + // Add 25 agent activations + for (let i = 1; i <= 25; i++) { + loader.updateSession('dev', 'Dex', `develop-${i}`); + } + + const sessionState = loader.loadSessionState(); + + expect(sessionState.agentSequence.length).toBe(20); + }); + }); + + describe('generateContextMessage() - Message Generation', () => { + test('generates message with previous agent info', () => { + loader.updateSession('po', 'Pax', 'validate-story-draft'); + + const context = loader.loadContext('dev'); + + expect(context.message).toContain('@po'); + expect(context.message).toContain('Pax'); + expect(context.message).toContain('validate-story-draft'); + }); + + test('includes time since previous agent activation', () => { + loader.updateSession('po', 'Pax', 'validate-story-draft'); + + const context = loader.loadContext('dev'); + + expect(context.message).toMatch(/(just now|minutes ago)/); + }); + + test('includes recent commands in message', () => { + loader.updateSession('po', 'Pax', 'create-story'); + loader.updateSession('po', 'Pax', 'validate-story-draft'); + + const context = loader.loadContext('dev'); + + expect(context.message).toContain('Recent 
commands'); + expect(context.message).toContain('create-story'); + expect(context.message).toContain('validate-story-draft'); + }); + + test('includes active workflow if set', () => { + loader.updateSession('po', 'Pax', 'create-story', { + workflowActive: 'story-creation', + }); + + const context = loader.loadContext('dev'); + + expect(context.message).toContain('Active Workflow: story-creation'); + }); + + test('returns null for new sessions', () => { + loader.clearSession(); + + const context = loader.loadContext('dev'); + + expect(context.message).toBeNull(); + }); + }); + + describe('formatForGreeting() - Display Formatting', () => { + test('formats context message for agent greeting', () => { + loader.updateSession('po', 'Pax', 'validate-story-draft'); + + const message = loader.formatForGreeting('dev'); + + expect(message).toContain('\n'); + expect(message).toContain('@po'); + }); + + test('returns empty string for new sessions', () => { + loader.clearSession(); + + const message = loader.formatForGreeting('dev'); + + expect(message).toBe(''); + }); + }); + + describe('clearSession() - Cleanup', () => { + test('removes session state file', async () => { + loader.updateSession('dev', 'Dex', 'develop'); + + loader.clearSession(); + + const sessionState = loader.loadSessionState(); + expect(Object.keys(sessionState).length).toBe(0); + }); + + test('allows new session after clear', () => { + loader.updateSession('dev', 'Dex', 'develop'); + loader.clearSession(); + + const context = loader.loadContext('dev'); + + expect(context.sessionType).toBe('new'); + }); + }); + + describe('getPreviousAgent() - Agent History', () => { + test('returns null if no previous agents', () => { + const sessionState = { agentSequence: [] }; + const previous = loader.getPreviousAgent(sessionState, 'dev'); + + expect(previous).toBeNull(); + }); + + test('returns most recent different agent', () => { + const sessionState = { + agentSequence: [ + { agentId: 'po', agentName: 'Pax', 
activatedAt: Date.now() - 10000, lastCommand: 'create-story' }, + { agentId: 'dev', agentName: 'Dex', activatedAt: Date.now() - 5000, lastCommand: 'develop' }, + { agentId: 'qa', agentName: 'Quinn', activatedAt: Date.now(), lastCommand: 'review' }, + ], + }; + + const previous = loader.getPreviousAgent(sessionState, 'qa'); + + expect(previous.agentId).toBe('dev'); + expect(previous.agentName).toBe('Dex'); + }); + }); + + describe('Error Handling', () => { + test('handles corrupted session file gracefully', async () => { + // Write invalid JSON to session file + await fs.mkdir(path.dirname(testSessionPath), { recursive: true }); + await fs.writeFile(testSessionPath, 'invalid json{', 'utf8'); + + const context = loader.loadContext('dev'); + + // Should treat as new session + expect(context.sessionType).toBe('new'); + }); + + test('handles missing session file gracefully', () => { + loader.clearSession(); + + const context = loader.loadContext('dev'); + + expect(context.sessionType).toBe('new'); + }); + }); + + describe('onTaskComplete() - Task Completion Hook (WIS-3)', () => { + test('records task completion in session state', () => { + const result = loader.onTaskComplete('develop', { + success: true, + agentId: 'dev', + storyPath: 'docs/stories/test.md', + }); + + expect(result.success).toBe(true); + expect(result.sessionId).toBeTruthy(); + }); + + test('adds command to history', () => { + loader.onTaskComplete('develop', { success: true }); + + const context = loader.loadContext('qa'); + + expect(context.lastCommands).toContain('*develop'); + }); + + test('normalizes command with * prefix', () => { + loader.onTaskComplete('run-tests', { success: true }); + loader.onTaskComplete('*review-qa', { success: true }); + + const sessionState = loader.loadSessionState(); + + expect(sessionState.lastCommands).toContain('*run-tests'); + expect(sessionState.lastCommands).toContain('*review-qa'); + }); + + test('updates task history', () => { + loader.onTaskComplete('develop', 
{ + success: true, + agentId: 'dev', + storyPath: 'docs/stories/test.md', + }); + + const sessionState = loader.loadSessionState(); + + expect(sessionState.taskHistory).toBeTruthy(); + expect(sessionState.taskHistory.length).toBe(1); + expect(sessionState.taskHistory[0].task).toBe('develop'); + expect(sessionState.taskHistory[0].success).toBe(true); + }); + + test('limits task history to 20 entries', () => { + for (let i = 1; i <= 25; i++) { + loader.onTaskComplete(`task-${i}`, { success: true }); + } + + const sessionState = loader.loadSessionState(); + + expect(sessionState.taskHistory.length).toBe(20); + expect(sessionState.taskHistory[0].task).toBe('task-6'); + expect(sessionState.taskHistory[19].task).toBe('task-25'); + }); + + test('updates current story if provided', () => { + loader.onTaskComplete('develop', { + success: true, + storyPath: 'docs/stories/v2.1/sprint-10/story-wis-3.md', + }); + + const sessionState = loader.loadSessionState(); + + expect(sessionState.currentStory).toBe('docs/stories/v2.1/sprint-10/story-wis-3.md'); + }); + + test('infers workflow state from task name', () => { + loader.onTaskComplete('develop', { success: true }); + + const sessionState = loader.loadSessionState(); + + expect(sessionState.workflowActive).toBe('story_development'); + expect(sessionState.workflowState).toBe('in_development'); + }); + + test('infers different workflow states', () => { + // Test validate-story-draft + loader.onTaskComplete('validate-story-draft', { success: true }); + let state = loader.loadSessionState(); + expect(state.workflowActive).toBe('story_development'); + expect(state.workflowState).toBe('validated'); + + // Test review-qa + loader.onTaskComplete('review-qa', { success: true }); + state = loader.loadSessionState(); + expect(state.workflowState).toBe('qa_reviewed'); + + // Test create-epic + loader.onTaskComplete('create-epic', { success: true }); + state = loader.loadSessionState(); + expect(state.workflowActive).toBe('epic_creation'); 
+ expect(state.workflowState).toBe('epic_drafted'); + }); + + test('handles failed task result', () => { + loader.onTaskComplete('develop', { success: false }); + + const sessionState = loader.loadSessionState(); + + expect(sessionState.taskHistory[0].success).toBe(false); + }); + }); + + describe('getWorkflowState() - Workflow State Access (WIS-3)', () => { + test('returns null when no workflow active', () => { + loader.clearSession(); + + const state = loader.getWorkflowState(); + + expect(state).toBeNull(); + }); + + test('returns workflow state after task completion', () => { + loader.onTaskComplete('develop', { success: true }); + + const state = loader.getWorkflowState(); + + expect(state).toBeTruthy(); + expect(state.workflow).toBe('story_development'); + expect(state.state).toBe('in_development'); + expect(state.lastActivity).toBeTruthy(); + }); + }); + + describe('getTaskHistory() - Task History Access (WIS-3)', () => { + test('returns empty array when no history', () => { + loader.clearSession(); + + const history = loader.getTaskHistory(); + + expect(history).toEqual([]); + }); + + test('returns task history entries', () => { + loader.onTaskComplete('develop', { success: true }); + loader.onTaskComplete('run-tests', { success: true }); + + const history = loader.getTaskHistory(); + + expect(history.length).toBe(2); + expect(history[0].task).toBe('develop'); + expect(history[1].task).toBe('run-tests'); + }); + + test('limits returned entries based on limit parameter', () => { + for (let i = 1; i <= 10; i++) { + loader.onTaskComplete(`task-${i}`, { success: true }); + } + + const history = loader.getTaskHistory(5); + + expect(history.length).toBe(5); + expect(history[0].task).toBe('task-6'); + expect(history[4].task).toBe('task-10'); + }); + }); + + describe('_inferWorkflowState() - Workflow State Inference (WIS-3)', () => { + test('infers story_development workflow states', () => { + const states = [ + { task: 'validate-story-draft', expected: { workflow: 
'story_development', state: 'validated' } }, + { task: 'develop', expected: { workflow: 'story_development', state: 'in_development' } }, + { task: 'develop-yolo', expected: { workflow: 'story_development', state: 'in_development' } }, + { task: 'review-qa', expected: { workflow: 'story_development', state: 'qa_reviewed' } }, + ]; + + states.forEach(({ task, expected }) => { + const result = loader._inferWorkflowState(task, {}); + expect(result).toEqual(expected); + }); + }); + + test('infers epic_creation workflow states', () => { + const states = [ + { task: 'create-epic', expected: { workflow: 'epic_creation', state: 'epic_drafted' } }, + { task: 'create-story', expected: { workflow: 'epic_creation', state: 'stories_created' } }, + ]; + + states.forEach(({ task, expected }) => { + const result = loader._inferWorkflowState(task, {}); + expect(result).toEqual(expected); + }); + }); + + test('returns null for unknown tasks', () => { + const result = loader._inferWorkflowState('unknown-task', {}); + expect(result).toBeNull(); + }); + + test('normalizes task name by removing * prefix', () => { + const result = loader._inferWorkflowState('*develop', {}); + expect(result).toEqual({ workflow: 'story_development', state: 'in_development' }); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/ensure-manifest.test.js +================================================== +```js +'use strict'; + +const { shouldCheckManifest } = require('../../scripts/ensure-manifest'); + +describe('ensure-manifest', () => { + describe('shouldCheckManifest', () => { + it('returns false when no staged files', () => { + expect(shouldCheckManifest([])).toBe(false); + }); + + it('returns false when only manifest file is staged', () => { + expect(shouldCheckManifest(['.aios-core/install-manifest.yaml'])).toBe(false); + }); + + it('returns true when any other .aios-core file is staged', () => { + 
expect(shouldCheckManifest(['.aios-core/core-config.yaml'])).toBe(true); + expect( + shouldCheckManifest([ + 'README.md', + '.aios-core/infrastructure/scripts/ide-sync/index.js', + ]), + ).toBe(true); + }); + + it('returns false when only non-.aios-core files are staged', () => { + expect(shouldCheckManifest(['README.md', 'package.json'])).toBe(false); + }); + }); +}); + + +``` + +================================================== +📄 tests/unit/search-cli.test.js +================================================== +```js +/** + * Search CLI Unit Tests + * + * Tests for the worker search functionality. + * + * @story 2.7 - Discovery CLI Search + */ + +const path = require('path'); + +// Test modules +const { searchKeyword, fuzzyMatchScore, levenshteinDistance } = require('../../.aios-core/cli/commands/workers/search-keyword'); +const { applyFilters, filterByCategory, filterByTags } = require('../../.aios-core/cli/commands/workers/search-filters'); +const { formatOutput, formatTable, formatJSON, formatYAML } = require('../../.aios-core/cli/utils/output-formatter-cli'); +const { calculateScores, sortByScore, calculateSearchAccuracy } = require('../../.aios-core/cli/utils/score-calculator'); +const { isSemanticAvailable, cosineSimilarity, buildSearchText } = require('../../.aios-core/cli/commands/workers/search-semantic'); + +// Mock workers for testing +const mockWorkers = [ + { + id: 'json-csv-transformer', + name: 'JSON to CSV Transformer', + description: 'Converts JSON data to CSV format', + category: 'data', + subcategory: 'transformation', + tags: ['etl', 'data', 'json', 'csv'], + path: '.aios-core/development/tasks/data/json-csv-transformer.md', + }, + { + id: 'csv-json-transformer', + name: 'CSV to JSON Transformer', + description: 'Converts CSV data to JSON format', + category: 'data', + subcategory: 'transformation', + tags: ['etl', 'data', 'json', 'csv'], + path: '.aios-core/development/tasks/data/csv-json-transformer.md', + }, + { + id: 'json-validator', + 
name: 'JSON Schema Validator', + description: 'Validates JSON against a schema', + category: 'validation', + subcategory: 'schema', + tags: ['validation', 'schema', 'json'], + path: '.aios-core/development/tasks/validation/json-validator.md', + }, + { + id: 'api-generator', + name: 'REST API Generator', + description: 'Generates REST API endpoints from schema', + category: 'code', + subcategory: 'generation', + tags: ['api', 'rest', 'generator', 'openapi'], + path: '.aios-core/development/tasks/code/api-generator.md', + }, + { + id: 'test-runner', + name: 'Test Runner', + description: 'Runs unit and integration tests', + category: 'testing', + subcategory: 'execution', + tags: ['testing', 'unit', 'integration', 'jest'], + path: '.aios-core/development/tasks/testing/test-runner.md', + }, +]; + +describe('Levenshtein Distance', () => { + test('identical strings have distance 0', () => { + expect(levenshteinDistance('json', 'json')).toBe(0); + }); + + test('calculates distance for different strings', () => { + expect(levenshteinDistance('json', 'jsno')).toBe(2); + expect(levenshteinDistance('cat', 'bat')).toBe(1); + expect(levenshteinDistance('', 'abc')).toBe(3); + }); +}); + +describe('Fuzzy Match Score', () => { + test('exact match returns 100', () => { + expect(fuzzyMatchScore('json', 'json')).toBe(100); + }); + + test('contains query returns high score', () => { + const score = fuzzyMatchScore('json transformer', 'json'); + expect(score).toBeGreaterThanOrEqual(85); + }); + + test('partial match returns moderate score', () => { + const score = fuzzyMatchScore('transformation', 'transform'); + expect(score).toBeGreaterThan(0); + }); + + test('no match returns 0', () => { + const score = fuzzyMatchScore('xyz', 'abc'); + expect(score).toBe(0); + }); +}); + +describe('Filter Functions', () => { + test('filterByCategory filters correctly', () => { + const filtered = filterByCategory(mockWorkers, 'data'); + expect(filtered.length).toBe(2); + expect(filtered.every(w => 
w.category === 'data')).toBe(true); + }); + + test('filterByTags filters with AND logic', () => { + const filtered = filterByTags(mockWorkers, ['json', 'csv']); + expect(filtered.length).toBe(2); + expect(filtered.every(w => w.tags.includes('json') && w.tags.includes('csv'))).toBe(true); + }); + + test('applyFilters combines category and tags', () => { + const filtered = applyFilters(mockWorkers, { + category: 'validation', + tags: ['json'], + }); + expect(filtered.length).toBe(1); + expect(filtered[0].id).toBe('json-validator'); + }); + + test('empty filter returns all results', () => { + const filtered = applyFilters(mockWorkers, {}); + expect(filtered.length).toBe(mockWorkers.length); + }); +}); + +describe('Score Calculator', () => { + test('calculateScores boosts exact ID match', () => { + const workers = [{ id: 'json-validator', name: 'Test', score: 50 }]; + const results = calculateScores(workers, 'json-validator'); + expect(results[0].score).toBe(100); + }); + + test('sortByScore sorts descending', () => { + const workers = [ + { id: 'a', name: 'A', score: 50 }, + { id: 'b', name: 'B', score: 90 }, + { id: 'c', name: 'C', score: 70 }, + ]; + const sorted = sortByScore(workers); + expect(sorted[0].id).toBe('b'); + expect(sorted[1].id).toBe('c'); + expect(sorted[2].id).toBe('a'); + }); + + test('calculateSearchAccuracy returns correct metrics', () => { + const results = [ + { id: 'json-validator' }, + { id: 'other-worker' }, + ]; + const accuracy = calculateSearchAccuracy(results, 'json-validator'); + expect(accuracy.found).toBe(true); + expect(accuracy.isFirst).toBe(true); + expect(accuracy.accuracy).toBe(100); + }); +}); + +describe('Output Formatter', () => { + const testResults = [ + { + id: 'json-csv-transformer', + name: 'JSON to CSV Transformer', + description: 'Converts JSON data to CSV format', + category: 'data', + tags: ['etl', 'json', 'csv'], + score: 95, + path: '.aios-core/tasks/json-csv-transformer.md', + }, + ]; + + test('formatJSON returns 
valid JSON', () => { + const output = formatJSON(testResults, {}); + const parsed = JSON.parse(output); + expect(Array.isArray(parsed)).toBe(true); + expect(parsed[0].id).toBe('json-csv-transformer'); + }); + + test('formatYAML returns valid YAML', () => { + const output = formatYAML(testResults, {}); + expect(output).toContain('id: json-csv-transformer'); + expect(output).toContain('name: JSON to CSV Transformer'); + }); + + test('formatTable returns formatted table', () => { + const output = formatTable(testResults, { query: 'json', duration: '0.1' }); + expect(output).toContain('Found 1 worker'); + expect(output).toContain('json-csv-transformer'); + expect(output).toContain('95%'); + }); + + test('formatOutput with format=json returns JSON', () => { + const output = formatOutput(testResults, { format: 'json' }); + expect(() => JSON.parse(output)).not.toThrow(); + }); + + test('formatOutput handles empty results', () => { + const output = formatOutput([], { format: 'table', query: 'xyz' }); + expect(output).toContain('No workers found'); + }); +}); + +describe('Semantic Search Utilities', () => { + test('isSemanticAvailable returns false without API key', () => { + const originalKey = process.env.OPENAI_API_KEY; + delete process.env.OPENAI_API_KEY; + expect(isSemanticAvailable()).toBe(false); + if (originalKey) process.env.OPENAI_API_KEY = originalKey; + }); + + test('cosineSimilarity of identical vectors is 1', () => { + const vec = [1, 0, 1, 0]; + expect(cosineSimilarity(vec, vec)).toBeCloseTo(1, 5); + }); + + test('cosineSimilarity of orthogonal vectors is 0', () => { + const vec1 = [1, 0, 0, 0]; + const vec2 = [0, 1, 0, 0]; + expect(cosineSimilarity(vec1, vec2)).toBeCloseTo(0, 5); + }); + + test('buildSearchText concatenates worker fields', () => { + const worker = { + name: 'Test Worker', + description: 'A test worker', + category: 'testing', + tags: ['test', 'unit'], + }; + const text = buildSearchText(worker); + expect(text).toContain('Test Worker'); + 
expect(text).toContain('A test worker'); + expect(text).toContain('testing'); + expect(text).toContain('test'); + expect(text).toContain('unit'); + }); +}); + +describe('Keyword Search Integration', () => { + // This test requires the actual registry to be loaded + test('searchKeyword returns results for valid query', async () => { + try { + const results = await searchKeyword('json'); + // Should find at least some JSON-related workers + expect(Array.isArray(results)).toBe(true); + // If registry is available, should find results + if (results.length > 0) { + expect(results[0]).toHaveProperty('id'); + expect(results[0]).toHaveProperty('score'); + } + } catch (error) { + // Registry may not be available in test environment + console.log('Skipping searchKeyword test - registry not available'); + } + }); +}); + +``` + +================================================== +📄 tests/unit/dev-context-loader.test.js +================================================== +```js +/** + * Unit Tests: Dev Context Loader + * Story 6.1.2.6.2 - Agent Performance Optimization + * + * Tests smart file loading with caching and summarization + */ + +const fs = require('fs').promises; +const path = require('path'); +const DevContextLoader = require('../../.aios-core/development/scripts/dev-context-loader'); + +describe('DevContextLoader', () => { + let loader; + const testCacheDir = path.join(process.cwd(), '.aios', 'cache-test'); + + beforeEach(async () => { + loader = new DevContextLoader(); + // Override cache directory for testing + loader.cacheDir = testCacheDir; + // Clear cache before each test to ensure clean state + await loader.clearCache().catch(() => {}); + }); + + afterEach(async () => { + // Clean up test cache + try { + await fs.rm(testCacheDir, { recursive: true, force: true }); + } catch (error) { + // Ignore cleanup errors + } + }); + + describe('load() - Performance', () => { + test('loads summaries efficiently (cold load)', async () => { + const start = Date.now(); + 
const result = await loader.load({ fullLoad: false, skipCache: true }); + const duration = Date.now() - start; + + // Relaxed timing for CI environments - just verify it completes reasonably + expect(duration).toBeLessThan(5000); // 5 seconds max + // Accept either 'loaded' or 'no_files' (when running in test environment without expected files) + expect(['loaded', 'no_files']).toContain(result.status); + if (result.status === 'loaded') { + expect(result.loadStrategy).toBe('summary'); + } + }, 60000); // 60s timeout for slow systems + + test('cached load is significantly faster than cold load', async () => { + // First load (cache miss) + const start1 = Date.now(); + const firstResult = await loader.load({ fullLoad: false, skipCache: false }); + const coldDuration = Date.now() - start1; + + // Skip cache comparison test if no files were loaded + if (firstResult.status === 'no_files') { + return; + } + + // Skip if all files had errors (no actual files to cache) + const successfulFiles = firstResult.files.filter((f) => !f.error); + if (successfulFiles.length === 0) { + return; + } + + // Second load (cache hit) + const start2 = Date.now(); + const result = await loader.load({ fullLoad: false, skipCache: false }); + const cachedDuration = Date.now() - start2; + + // Skip performance assertion if durations are too short to measure reliably (< 50ms) + // This can happen in CI environments with variable timing + const durationsTooShort = coldDuration < 50 || cachedDuration < 5; + + // Cached should be faster than cold load (relaxed threshold for CI environments) + // Only enforce timing when we have a reasonably measurable cold duration + if (!durationsTooShort && coldDuration > 100) { + expect(cachedDuration).toBeLessThan(coldDuration * 0.9); + } + + // Verify caching occurred only if we had successful file loads + if (successfulFiles.length > 0) { + expect(result.cacheHits).toBeGreaterThan(0); + } + }, 60000); + }); + + describe('load() - Summary Mode', () => { + 
test('generates correct summaries with headers + preview', async () => { + const result = await loader.load({ fullLoad: false }); + + // Accept either 'loaded' or 'no_files' (when running in test environment without expected files) + expect(['loaded', 'no_files']).toContain(result.status); + + if (result.status === 'loaded') { + expect(result.files).toBeTruthy(); + + // Check each file has summary structure + result.files.forEach((file) => { + if (file.summary) { + expect(file.summary).toContain('## Key Sections:'); + expect(file.summary).toContain('## Preview (first 100 lines):'); + expect(file.summaryLines).toBeLessThan(150); + } + }); + } + }); + + test('reduces data by ~82%', async () => { + const summaryResult = await loader.load({ fullLoad: false, skipCache: true }); + const fullResult = await loader.load({ fullLoad: true, skipCache: true }); + + // Accept either 'loaded' or 'no_files' (when running in test environment without expected files) + expect(['loaded', 'no_files']).toContain(summaryResult.status); + + if (summaryResult.status === 'no_files') { + return; // Skip rest of test if no files found + } + + // Only count successfully loaded files (exclude files with errors) + const successfulSummaryFiles = summaryResult.files.filter((f) => !f.error); + const successfulFullFiles = fullResult.files.filter((f) => !f.error); + + // Calculate total lines only from successfully loaded files + const summaryLines = successfulSummaryFiles.reduce( + (sum, f) => sum + (f.summaryLines || 0), + 0, + ); + const fullLines = successfulFullFiles.reduce((sum, f) => sum + (f.linesCount || 0), 0); + + // Only test reduction if we have data to compare + if (fullLines > 0) { + const reduction = ((fullLines - summaryLines) / fullLines) * 100; + + expect(reduction).toBeGreaterThan(75); // At least 75% reduction + expect(reduction).toBeLessThan(90); // Less than 90% reduction + } + }); + }); + + describe('load() - Full Load Mode', () => { + test('loads complete files when 
fullLoad=true', async () => { + const result = await loader.load({ fullLoad: true }); + + // Accept either 'loaded' or 'no_files' (when running in test environment without expected files) + expect(['loaded', 'no_files']).toContain(result.status); + + if (result.status === 'loaded') { + expect(result.loadStrategy).toBe('full'); + + result.files.forEach((file) => { + if (file.content) { + expect(file.content).toBeTruthy(); + expect(file.linesCount).toBeGreaterThan(0); + } + }); + } + }); + }); + + describe('Cache Management', () => { + test('saves to cache after first load', async () => { + const result = await loader.load({ fullLoad: false }); + + // Accept either 'loaded' or 'no_files' (when running in test environment without expected files) + expect(['loaded', 'no_files']).toContain(result.status); + + if (result.status === 'loaded') { + // Only check cache if we had successful file loads (not just errors) + const successfulFiles = result.files.filter((f) => !f.error); + + if (successfulFiles.length > 0) { + // Check cache directory exists + const cacheExists = await fs + .access(testCacheDir) + .then(() => true) + .catch(() => false); + + expect(cacheExists).toBe(true); + } + } + }); + + test('respects cache TTL (1 hour)', async () => { + // This test would require mocking time or waiting 1 hour + // For now, we'll test the cache key generation + const cacheKey = loader.getCacheKey('docs/framework/coding-standards.md', false); + + expect(cacheKey).toMatch(/^devcontext_/); + expect(cacheKey).toContain('_summary'); + }); + + test('clearCache() removes all cached files', async () => { + // Load files to populate cache + await loader.load({ fullLoad: false }); + + // Clear cache + await loader.clearCache(); + + // Verify cache is empty + const files = await fs.readdir(testCacheDir).catch(() => []); + const devContextFiles = files.filter((f) => f.startsWith('devcontext_')); + + expect(devContextFiles.length).toBe(0); + }); + }); + + describe('Error Handling', () => { 
+ test('handles missing files gracefully', async () => { + const result = await loader.load({ fullLoad: false }); + + // Accept either 'loaded' or 'no_files' (when running in test environment without expected files) + expect(['loaded', 'no_files']).toContain(result.status); + + if (result.status === 'loaded') { + // Check if any files have errors + const filesWithErrors = result.files.filter((f) => f.error); + + // Should still return results for files that loaded successfully + expect(result.files.length).toBeGreaterThan(0); + } + }); + + test('falls back to direct load if cache read fails', async () => { + // Simulate cache corruption by creating invalid JSON + await fs.mkdir(testCacheDir, { recursive: true }); + const corruptCachePath = path.join(testCacheDir, 'devcontext_test_summary.json'); + await fs.writeFile(corruptCachePath, 'invalid json{', 'utf8'); + + // Load should still succeed (but may return no_files if expected files don't exist) + const result = await loader.load({ fullLoad: false }); + + expect(['loaded', 'no_files']).toContain(result.status); + }); + }); + + describe('generateSummary()', () => { + test('extracts headers from markdown files', () => { + const lines = [ + '# Main Title', + 'Some content', + '## Section 1', + 'More content', + '## Section 2', + 'Even more content', + ]; + + const content = lines.join('\n'); + const summary = loader.generateSummary('test.md', content, lines); + + expect(summary).toContain('Main Title'); + expect(summary).toContain('Section 1'); + expect(summary).toContain('Section 2'); + }); + + test('includes first 100 lines as preview', () => { + const lines = Array.from({ length: 200 }, (_, i) => `Line ${i + 1}`); + const content = lines.join('\n'); + + const summary = loader.generateSummary('test.md', content, lines); + + expect(summary).toContain('Line 1'); + expect(summary).toContain('Line 100'); + expect(summary).toContain('and 100 more lines'); + }); + }); + + describe('getCacheKey()', () => { + 
test('generates unique keys for different files', () => { + const key1 = loader.getCacheKey('docs/file1.md', false); + const key2 = loader.getCacheKey('docs/file2.md', false); + + expect(key1).not.toBe(key2); + }); + + test('generates different keys for summary vs full load', () => { + const summaryKey = loader.getCacheKey('docs/file.md', false); + const fullKey = loader.getCacheKey('docs/file.md', true); + + expect(summaryKey).toContain('_summary'); + expect(fullKey).toContain('_full'); + }); + + test('normalizes file paths in cache keys', () => { + const key = loader.getCacheKey('docs/framework/coding-standards.md', false); + + // Should not contain special characters + expect(key).not.toMatch(/[/\\.]/); + expect(key).toMatch(/^devcontext_/); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/tool-resolver.test.js +================================================== +```js +// Mock dependencies before requiring +jest.mock('fs-extra'); +jest.mock('glob', () => ({ + sync: jest.fn().mockReturnValue([]), +})); + +describe('ToolResolver', () => { + let resolver; + let fs; + let glob; + let path; + + beforeAll(() => { + // Require modules inside beforeAll to avoid module-level issues + fs = require('fs-extra'); + glob = require('glob'); + path = require('path'); + + // Require ToolResolver after all other modules are loaded + resolver = require('../../common/utils/tool-resolver'); + }); + + const _fixturesPath = () => path.join(__dirname, '../fixtures'); + + // Sample tool definitions (with 'tool:' wrapper - matches spec format) + const simpleToolYaml = ` +tool: + id: test-simple + type: mcp + name: Test Simple Tool + version: 1.0.0 + description: Simple v1.0 tool for testing + commands: + - search + - fetch + mcp_specific: + server_command: npx -y test-simple-server + transport: stdio +`; + + const complexToolYaml = ` +tool: + schema_version: 2.0 + id: test-complex + type: mcp + name: Test Complex Tool + version: 1.0.0 + 
description: Complex v2.0 tool with executable knowledge + knowledge_strategy: executable + commands: + - create_item + executable_knowledge: + validators: + - id: validate-create-item + validates: create_item + language: javascript + function: | + function validateCommand(args) { + return { valid: true, errors: [] }; + } + mcp_specific: + server_command: npx -y test-complex-server + transport: stdio +`; + + const invalidToolYaml = ` +tool: + type: mcp + name: Invalid Tool + version: 1.0.0 +`; + + beforeEach(() => { + // Clear all mocks + jest.clearAllMocks(); + + // Reset glob.sync to return empty array by default + glob.sync.mockReturnValue([]); + + // Clear cache before each test + if (resolver.cache) { + resolver.cache.clear(); + } + }); + + describe('Cache Mechanism', () => { + test('should cache tool after first resolution', async () => { + // Mock file system + glob.sync.mockReturnValue(['aios-core/tools/test-simple.yaml']); + fs.readFile.mockResolvedValue(simpleToolYaml); + + // First call - should read from file + const tool1 = await resolver.resolveTool('test-simple'); + expect(fs.readFile).toHaveBeenCalledTimes(1); + expect(tool1.id).toBe('test-simple'); + + // Second call - should use cache + const tool2 = await resolver.resolveTool('test-simple'); + expect(fs.readFile).toHaveBeenCalledTimes(1); // Still 1, not called again + expect(tool2.id).toBe('test-simple'); + + // Should be exact same object from cache + expect(tool1).toBe(tool2); + }); + + test('should use different cache keys for different squads', async () => { + glob.sync.mockReturnValue(['squads/pack1/tools/test-tool.yaml']); + fs.readFile.mockResolvedValue(simpleToolYaml); + + const tool1 = await resolver.resolveTool('test-simple', { expansionPack: 'pack1' }); + const tool2 = await resolver.resolveTool('test-simple', { expansionPack: 'pack2' }); + + // Should have called readFile twice (different cache keys) + expect(fs.readFile).toHaveBeenCalledTimes(2); + }); + + test('should provide cache 
clear method', () => { + // Add items to cache + resolver.cache.set('test1', { id: 'test1' }); + resolver.cache.set('test2', { id: 'test2' }); + expect(resolver.cache.size).toBe(2); + + // Clear cache + resolver.clearCache(); + expect(resolver.cache.size).toBe(0); + }); + + test('should provide cache stats', async () => { + glob.sync.mockReturnValue(['aios-core/tools/test-simple.yaml']); + fs.readFile.mockResolvedValue(simpleToolYaml); + + await resolver.resolveTool('test-simple'); + await resolver.resolveTool('test-simple'); // Cache hit + + const stats = resolver.getCacheStats(); + expect(stats).toHaveProperty('size'); + expect(stats).toHaveProperty('keys'); + expect(stats.size).toBe(1); + expect(stats.keys).toContain('core:test-simple'); + }); + }); + + describe('Search Path Priority', () => { + test('should prioritize squad tools over core tools', async () => { + // Mock squad tool found first + glob.sync.mockImplementation((pattern) => { + if (pattern.includes('squads/my-pack')) { + return ['squads/my-pack/tools/test-simple.yaml']; + } + return []; + }); + + fs.readFile.mockResolvedValue(simpleToolYaml); + + const tool = await resolver.resolveTool('test-simple', { expansionPack: 'my-pack' }); + + // Should have searched squad first + expect(glob.sync).toHaveBeenCalledWith( + expect.stringContaining('squads/my-pack/tools'), + ); + + expect(tool.id).toBe('test-simple'); + }); + + test('should fall back to core when squad tool not found', async () => { + // Mock: squad returns empty, core returns tool + glob.sync.mockImplementation((pattern) => { + if (pattern.includes('squads')) { + return []; + } + if (pattern.includes('aios-core/tools')) { + return ['aios-core/tools/test-simple.yaml']; + } + return []; + }); + + fs.readFile.mockResolvedValue(simpleToolYaml); + + const tool = await resolver.resolveTool('test-simple', { expansionPack: 'my-pack' }); + + // Should have tried both paths + expect(glob.sync).toHaveBeenCalledTimes(2); + 
expect(tool.id).toBe('test-simple');
+    });
+
+    test('should search common/tools directory', async () => {
+      glob.sync.mockImplementation((pattern) => {
+        if (pattern.includes('common/tools')) {
+          return ['common/tools/test-tool.yaml'];
+        }
+        return [];
+      });
+
+      fs.readFile.mockResolvedValue(simpleToolYaml);
+
+      await resolver.resolveTool('test-simple');
+
+      // Should have searched common directory
+      expect(glob.sync).toHaveBeenCalledWith(
+        expect.stringContaining('common/tools'),
+      );
+    });
+  });
+
+  describe('Tool Not Found Error', () => {
+    test('should throw error when tool not found', async () => {
+      glob.sync.mockReturnValue([]);
+
+      await expect(resolver.resolveTool('nonexistent-tool')).rejects.toThrow(
+        /Tool 'nonexistent-tool' not found/,
+      );
+    });
+
+    test('should provide helpful error message with search paths', async () => {
+      glob.sync.mockReturnValue([]);
+
+      // expect.assertions guards against the catch block being skipped if no
+      // error is thrown (replaces the Jasmine `fail()` global, which is
+      // undefined under Jest's default jest-circus runner)
+      expect.assertions(2);
+      try {
+        await resolver.resolveTool('missing-tool');
+      } catch (error) {
+        expect(error.message).toContain('missing-tool');
+        // Error should mention where it looked
+        expect(error.message.toLowerCase()).toMatch(/search|path|found/);
+      }
+    });
+  });
+
+  describe('Schema Validation', () => {
+    test('should validate required fields', async () => {
+      glob.sync.mockReturnValue(['aios-core/tools/invalid.yaml']);
+      fs.readFile.mockResolvedValue(invalidToolYaml);
+
+      await expect(resolver.resolveTool('invalid-tool')).rejects.toThrow(
+        /required field/i,
+      );
+    });
+
+    test('should validate v1.0 tool schema', async () => {
+      glob.sync.mockReturnValue(['aios-core/tools/simple.yaml']);
+      fs.readFile.mockResolvedValue(simpleToolYaml);
+
+      const tool = await resolver.resolveTool('test-simple');
+
+      // Should have all required v1.0 fields
+      expect(tool).toHaveProperty('id');
+      expect(tool).toHaveProperty('type');
+      expect(tool).toHaveProperty('name');
+      expect(tool).toHaveProperty('version');
+      expect(tool).toHaveProperty('description');
+    });
+
+    test('should validate 
v2.0 tool schema with executable knowledge', async () => { + glob.sync.mockReturnValue(['aios-core/tools/complex.yaml']); + fs.readFile.mockResolvedValue(complexToolYaml); + + const tool = await resolver.resolveTool('test-complex'); + + // Should have v2.0 fields + expect(tool.schema_version).toBe(2.0); + expect(tool).toHaveProperty('executable_knowledge'); + expect(tool.executable_knowledge).toHaveProperty('validators'); + }); + + test('should reject invalid YAML syntax', async () => { + glob.sync.mockReturnValue(['aios-core/tools/broken.yaml']); + fs.readFile.mockResolvedValue('invalid: yaml: syntax: ['); + + await expect(resolver.resolveTool('broken-tool')).rejects.toThrow(); + }); + }); + + describe('Auto-Detection Logic', () => { + test('should detect v1.0 for simple tools', async () => { + glob.sync.mockReturnValue(['aios-core/tools/simple.yaml']); + fs.readFile.mockResolvedValue(simpleToolYaml); + + const tool = await resolver.resolveTool('test-simple'); + + // Should auto-detect as v1.0 + expect(tool.schema_version).toBe(1.0); + }); + + test('should detect v2.0 when executable_knowledge present', async () => { + // Tool without explicit schema_version but has v2.0 features + const autoDetectToolYaml = ` +tool: + id: auto-detect + type: mcp + name: Auto Detect Tool + version: 1.0.0 + description: Tool with v2.0 features + executable_knowledge: + validators: + - id: test-validator + validates: command + function: "function() { return true; }" + mcp_specific: + server_command: npx -y test +`; + + glob.sync.mockReturnValue(['aios-core/tools/auto.yaml']); + fs.readFile.mockResolvedValue(autoDetectToolYaml); + + const tool = await resolver.resolveTool('auto-detect'); + + // Should auto-detect as v2.0 + expect(tool.schema_version).toBe(2.0); + }); + + test('should detect v2.0 when api_complexity present', async () => { + const apiComplexToolYaml = ` +tool: + id: api-complex + type: mcp + name: API Complex Tool + version: 1.0.0 + description: Tool with API 
complexity + api_complexity: + api_quirks: + - quirk: test + description: test quirk +`; + + glob.sync.mockReturnValue(['aios-core/tools/api.yaml']); + fs.readFile.mockResolvedValue(apiComplexToolYaml); + + const tool = await resolver.resolveTool('api-complex'); + expect(tool.schema_version).toBe(2.0); + }); + + test('should detect v2.0 when anti_patterns present', async () => { + const antiPatternsToolYaml = ` +tool: + id: anti-patterns + type: mcp + name: Anti Patterns Tool + version: 1.0.0 + description: Tool with anti patterns + anti_patterns: + - pattern: wrong_usage + description: Incorrect usage +`; + + glob.sync.mockReturnValue(['aios-core/tools/anti.yaml']); + fs.readFile.mockResolvedValue(antiPatternsToolYaml); + + const tool = await resolver.resolveTool('anti-patterns'); + expect(tool.schema_version).toBe(2.0); + }); + }); + + describe('Health Check Methods', () => { + test('should support command-based health check', async () => { + const healthCheckToolYaml = ` +tool: + id: health-tool + type: mcp + name: Health Tool + version: 1.0.0 + description: Tool with health check + health_check: + method: command + command: echo "healthy" + expected_output: healthy + timeout: 5000 +`; + + glob.sync.mockReturnValue(['aios-core/tools/health.yaml']); + fs.readFile.mockResolvedValue(healthCheckToolYaml); + + const tool = await resolver.resolveTool('health-tool'); + + expect(tool.health_check).toBeDefined(); + expect(tool.health_check.method).toBe('command'); + }); + + test('should support HTTP endpoint health check', async () => { + const httpHealthToolYaml = ` +tool: + id: http-health-tool + type: mcp + name: HTTP Health Tool + version: 1.0.0 + description: Tool with HTTP health check + health_check: + method: http + endpoint: http://localhost:3000/health + expected_status: 200 +`; + + glob.sync.mockReturnValue(['aios-core/tools/http-health.yaml']); + fs.readFile.mockResolvedValue(httpHealthToolYaml); + + const tool = await 
resolver.resolveTool('http-health-tool'); + + expect(tool.health_check).toBeDefined(); + expect(tool.health_check.method).toBe('http'); + expect(tool.health_check.endpoint).toBe('http://localhost:3000/health'); + }); + + test('should support custom function health check', async () => { + const functionHealthToolYaml = ` +tool: + id: function-health-tool + type: mcp + name: Function Health Tool + version: 1.0.0 + description: Tool with function health check + health_check: + method: function + function: | + async function checkHealth() { + return { healthy: true }; + } +`; + + glob.sync.mockReturnValue(['aios-core/tools/function-health.yaml']); + fs.readFile.mockResolvedValue(functionHealthToolYaml); + + const tool = await resolver.resolveTool('function-health-tool'); + + expect(tool.health_check).toBeDefined(); + expect(tool.health_check.method).toBe('function'); + expect(tool.health_check.function).toContain('checkHealth'); + }); + }); + + describe('Additional Methods', () => { + test('should list all available tools', async () => { + glob.sync.mockReturnValue([ + 'aios-core/tools/tool1.yaml', + 'aios-core/tools/tool2.yaml', + 'common/tools/tool3.yaml', + ]); + + const tools = resolver.listAvailableTools(); + + expect(Array.isArray(tools)).toBe(true); + expect(tools.length).toBeGreaterThan(0); + }); + + test('should check if tool exists without loading', async () => { + // Mock implementation that returns results only for 'exists' tool (not 'not-exists') + glob.sync.mockImplementation((pattern) => { + if (pattern.includes('/exists.yaml')) { + return ['aios-core/tools/exists.yaml']; + } + return []; + }); + + const exists = await resolver.toolExists('exists'); + expect(exists).toBe(true); + + const notExists = await resolver.toolExists('not-exists'); + expect(notExists).toBe(false); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/validate-gemini-integration.test.js +================================================== 
+```js
+'use strict';
+
+const fs = require('fs');
+const os = require('os');
+const path = require('path');
+
+const {
+  validateGeminiIntegration,
+} = require('../../.aios-core/infrastructure/scripts/validate-gemini-integration');
+
+describe('validate-gemini-integration', () => {
+  let tmpRoot;
+
+  function write(file, content = '') {
+    fs.mkdirSync(path.dirname(file), { recursive: true });
+    fs.writeFileSync(file, content, 'utf8');
+  }
+
+  beforeEach(() => {
+    tmpRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'validate-gemini-'));
+  });
+
+  afterEach(() => {
+    fs.rmSync(tmpRoot, { recursive: true, force: true });
+  });
+
+  it('passes when required Gemini files exist', () => {
+    write(path.join(tmpRoot, '.gemini', 'rules', 'AIOS', 'agents', 'dev.md'), '# dev');
+    write(path.join(tmpRoot, '.gemini', 'commands', 'aios-menu.toml'), 'description = "menu"');
+    write(path.join(tmpRoot, '.gemini', 'commands', 'aios-dev.toml'), 'description = "dev"');
+    write(path.join(tmpRoot, '.aios-core', 'development', 'agents', 'dev.md'), '# dev');
+    write(path.join(tmpRoot, 'packages', 'gemini-aios-extension', 'extension.json'), '{}');
+    write(path.join(tmpRoot, 'packages', 'gemini-aios-extension', 'README.md'), '# readme');
+    write(path.join(tmpRoot, 'packages', 'gemini-aios-extension', 'commands', 'aios-status.js'), '');
+    write(path.join(tmpRoot, 'packages', 'gemini-aios-extension', 'commands', 'aios-agents.js'), '');
+    write(path.join(tmpRoot, 'packages', 'gemini-aios-extension', 'commands', 'aios-validate.js'), '');
+    write(path.join(tmpRoot, 'packages', 'gemini-aios-extension', 'hooks', 'hooks.json'), '{}');
+
+    const result = validateGeminiIntegration({ projectRoot: tmpRoot });
+    expect(result.ok).toBe(true);
+    expect(result.errors).toEqual([]);
+  });
+
+  it('reports errors for missing dirs and warns when rules file is missing', () => {
+    const result = validateGeminiIntegration({ projectRoot: tmpRoot });
+    expect(result.ok).toBe(false);
+    expect(result.errors.some((e) => e.includes('Missing 
Gemini agents dir'))).toBe(true); + expect(result.errors.some((e) => e.includes('Missing Gemini commands dir'))).toBe(true); + expect(result.warnings.some((w) => w.includes('Gemini rules file not found yet'))).toBe(true); + }); +}); + +``` + +================================================== +📄 tests/unit/gemini-agent-launcher.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { + listAvailableAgents, + hasAgent, + buildActivationPrompt, + commandNameForAgent, +} = require('../../packages/gemini-aios-extension/commands/lib/agent-launcher'); + +describe('gemini agent launcher', () => { + let tmpRoot; + + function write(file, content = '') { + fs.mkdirSync(path.dirname(file), { recursive: true }); + fs.writeFileSync(file, content, 'utf8'); + } + + beforeEach(() => { + tmpRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'gemini-launcher-')); + }); + + afterEach(() => { + fs.rmSync(tmpRoot, { recursive: true, force: true }); + }); + + it('lists canonical agents from source-of-truth directory', () => { + write(path.join(tmpRoot, '.aios-core', 'development', 'agents', 'dev.md'), '# dev'); + write(path.join(tmpRoot, '.aios-core', 'development', 'agents', 'architect.md'), '# architect'); + write(path.join(tmpRoot, '.aios-core', 'development', 'agents', '_README.md'), '# ignore'); + + const result = listAvailableAgents(tmpRoot); + expect(result).toEqual(['architect', 'dev']); + }); + + it('detects agent presence in canonical or gemini mirrored folders', () => { + write(path.join(tmpRoot, '.gemini', 'rules', 'AIOS', 'agents', 'qa.md'), '# qa'); + expect(hasAgent(tmpRoot, 'qa')).toBe(true); + expect(hasAgent(tmpRoot, 'missing')).toBe(false); + }); + + it('builds deterministic activation prompt', () => { + const prompt = buildActivationPrompt('devops'); + expect(prompt).toContain('.aios-core/development/agents/devops.md'); + 
expect(prompt).toContain('generate-greeting.js devops'); + expect(prompt).toContain('*exit'); + }); + + it('maps command name correctly for master agent', () => { + expect(commandNameForAgent('aios-master')).toBe('/aios-master'); + expect(commandNameForAgent('dev')).toBe('/aios-dev'); + }); +}); + +``` + +================================================== +📄 tests/unit/semantic-lint.test.js +================================================== +```js +'use strict'; + +const path = require('path'); +const { + parseArgs, + lintContent, + runSemanticLint, +} = require('../../scripts/semantic-lint'); + +describe('semantic-lint', () => { + it('parses staged/json/file arguments', () => { + const parsed = parseArgs(['--staged', '--json', 'docs/file.md']); + expect(parsed.staged).toBe(true); + expect(parsed.json).toBe(true); + expect(parsed.files).toEqual(['docs/file.md']); + }); + + it('detects deprecated terms with severities', () => { + const findings = lintContent( + 'This expansion pack uses permission mode. 
Legacy workflow state term.', + 'docs/sample.md', + ); + + expect(findings.some((item) => item.ruleId === 'deprecated-expansion-pack')).toBe(true); + expect(findings.some((item) => item.ruleId === 'deprecated-permission-mode')).toBe(true); + expect(findings.some((item) => item.ruleId === 'legacy-workflow-state-term')).toBe(true); + }); + + it('fails when error-level terms are present', () => { + const result = runSemanticLint( + { projectRoot: '/tmp/project', targets: ['docs/sample.md'] }, + { + collectFiles: () => [path.join('/tmp/project', 'docs', 'sample.md')], + readFile: () => 'Use expansion pack here.', + }, + ); + + expect(result.ok).toBe(false); + expect(result.errors.length).toBeGreaterThan(0); + }); + + it('passes when only warning-level term is present', () => { + const result = runSemanticLint( + { projectRoot: '/tmp/project', targets: ['docs/sample.md'] }, + { + collectFiles: () => [path.join('/tmp/project', 'docs', 'sample.md')], + readFile: () => 'Use workflow state wording for migration note.', + }, + ); + + expect(result.ok).toBe(true); + expect(result.errors).toHaveLength(0); + expect(result.warnings.length).toBeGreaterThan(0); + }); +}); + +``` + +================================================== +📄 tests/unit/greeting-preference.test.js +================================================== +```js +/** + * Unit Tests for Greeting Preference System + * + * Test Coverage: + * - PreferenceManager: getPreference, setPreference, validation + * - GreetingBuilder: buildFixedLevelGreeting, preference override + * - All preference values (auto, minimal, named, archetypal) + * - Error handling and fallbacks + * - Config backup/restore + */ + +const GreetingPreferenceManager = require('../../.aios-core/development/scripts/greeting-preference-manager'); +const fs = require('fs'); +const path = require('path'); +const yaml = require('js-yaml'); + +// Mock dependencies for GreetingBuilder +jest.mock('../../.aios-core/core/session/context-detector'); 
jest.mock('../../.aios-core/infrastructure/scripts/git-config-detector');
jest.mock('../../.aios-core/infrastructure/scripts/project-status-loader', () => ({
  loadProjectStatus: jest.fn(),
  formatStatusDisplay: jest.fn(),
}));

const GreetingBuilder = require('../../.aios-core/development/scripts/greeting-builder');

// Real config file paths used by these tests (backed up in beforeEach and restored in afterEach)
const CONFIG_PATH = path.join(process.cwd(), '.aios-core', 'core-config.yaml');
const BACKUP_PATH = path.join(process.cwd(), '.aios-core', 'core-config.yaml.backup');

describe('GreetingPreferenceManager', () => {
  let manager;
  let originalConfig;
  let testConfig;

  beforeEach(() => {
    manager = new GreetingPreferenceManager();

    // Backup original config if exists
    if (fs.existsSync(CONFIG_PATH)) {
      originalConfig = fs.readFileSync(CONFIG_PATH, 'utf8');
    }

    // Create test config
    testConfig = {
      agentIdentity: {
        greeting: {
          preference: 'auto',
          contextDetection: true,
          sessionDetection: 'hybrid',
        },
      },
    };
  });

  afterEach(() => {
    // Restore original config
    if (originalConfig) {
      fs.writeFileSync(CONFIG_PATH, originalConfig, 'utf8');
    } else if (fs.existsSync(CONFIG_PATH)) {
      fs.unlinkSync(CONFIG_PATH);
    }

    // Clean up backup
    if (fs.existsSync(BACKUP_PATH)) {
      fs.unlinkSync(BACKUP_PATH);
    }
  });

  describe('getPreference', () => {
    test('returns default "auto" if not configured', () => {
      // Config doesn't exist or has no preference
      const preference = manager.getPreference();
      expect(['auto', 'minimal', 'named', 'archetypal']).toContain(preference);
    });

    test('returns configured preference', () => {
      testConfig.agentIdentity.greeting.preference = 'minimal';
      fs.writeFileSync(CONFIG_PATH, yaml.dump(testConfig), 'utf8');

      const preference = manager.getPreference();
      expect(preference).toBe('minimal');
    });

    test('handles missing config file 
gracefully', () => { + if (fs.existsSync(CONFIG_PATH)) { + fs.unlinkSync(CONFIG_PATH); + } + + const preference = manager.getPreference(); + expect(preference).toBe('auto'); + }); + }); + + describe('setPreference', () => { + test('sets valid preference successfully', () => { + fs.writeFileSync(CONFIG_PATH, yaml.dump(testConfig), 'utf8'); + + const result = manager.setPreference('minimal'); + expect(result.success).toBe(true); + expect(result.preference).toBe('minimal'); + + // Verify config was updated + const updatedConfig = yaml.load(fs.readFileSync(CONFIG_PATH, 'utf8')); + expect(updatedConfig.agentIdentity.greeting.preference).toBe('minimal'); + }); + + test('throws error for invalid preference', () => { + fs.writeFileSync(CONFIG_PATH, yaml.dump(testConfig), 'utf8'); + + expect(() => manager.setPreference('invalid')).toThrow('Invalid preference'); + expect(() => manager.setPreference('Auto')).toThrow('Invalid preference'); + expect(() => manager.setPreference('')).toThrow('Invalid preference'); + }); + + test('accepts all valid preferences', () => { + fs.writeFileSync(CONFIG_PATH, yaml.dump(testConfig), 'utf8'); + + expect(() => manager.setPreference('auto')).not.toThrow(); + expect(() => manager.setPreference('minimal')).not.toThrow(); + expect(() => manager.setPreference('named')).not.toThrow(); + expect(() => manager.setPreference('archetypal')).not.toThrow(); + }); + + test('creates backup before modification', () => { + fs.writeFileSync(CONFIG_PATH, yaml.dump(testConfig), 'utf8'); + + manager.setPreference('minimal'); + + expect(fs.existsSync(BACKUP_PATH)).toBe(true); + const backupConfig = yaml.load(fs.readFileSync(BACKUP_PATH, 'utf8')); + expect(backupConfig.agentIdentity.greeting.preference).toBe('auto'); + }); + + test('restores backup on YAML error', () => { + fs.writeFileSync(CONFIG_PATH, yaml.dump(testConfig), 'utf8'); + + // Mock yaml.dump to throw error + const originalDump = yaml.dump; + yaml.dump = jest.fn(() => { + throw new Error('YAML 
error'); + }); + + expect(() => manager.setPreference('minimal')).toThrow(); + + // Restore + yaml.dump = originalDump; + + // Config should be restored + const restoredConfig = yaml.load(fs.readFileSync(CONFIG_PATH, 'utf8')); + expect(restoredConfig.agentIdentity.greeting.preference).toBe('auto'); + }); + + test('creates config structure if missing', () => { + const minimalConfig = {}; + fs.writeFileSync(CONFIG_PATH, yaml.dump(minimalConfig), 'utf8'); + + manager.setPreference('named'); + + const updatedConfig = yaml.load(fs.readFileSync(CONFIG_PATH, 'utf8')); + expect(updatedConfig.agentIdentity.greeting.preference).toBe('named'); + }); + }); + + describe('getConfig', () => { + test('returns complete greeting config', () => { + testConfig.agentIdentity.greeting.preference = 'archetypal'; + testConfig.agentIdentity.greeting.showArchetype = false; + fs.writeFileSync(CONFIG_PATH, yaml.dump(testConfig), 'utf8'); + + const config = manager.getConfig(); + expect(config.preference).toBe('archetypal'); + expect(config.showArchetype).toBe(false); + expect(config.contextDetection).toBe(true); + }); + + test('returns empty object if config missing', () => { + if (fs.existsSync(CONFIG_PATH)) { + fs.unlinkSync(CONFIG_PATH); + } + + const config = manager.getConfig(); + expect(config).toEqual({}); + }); + }); +}); + +describe('GreetingBuilder with Preferences', () => { + let builder; + let mockAgent; + + beforeEach(() => { + builder = new GreetingBuilder(); + mockAgent = { + name: 'Dex', + id: 'dev', + icon: '💻', + persona_profile: { + archetype: 'Builder', + greeting_levels: { + minimal: '💻 dev Agent ready', + named: '💻 Dex (Builder) ready', + archetypal: '💻 Dex the Builder ready to innovate!', + }, + }, + }; + }); + + describe('buildFixedLevelGreeting', () => { + test('builds minimal greeting', () => { + const greeting = builder.buildFixedLevelGreeting(mockAgent, 'minimal'); + expect(greeting).toContain('💻 dev Agent ready'); + expect(greeting).toContain('*help'); + }); + + 
test('builds named greeting', () => { + const greeting = builder.buildFixedLevelGreeting(mockAgent, 'named'); + expect(greeting).toContain('💻 Dex (Builder) ready'); + expect(greeting).toContain('*help'); + }); + + test('builds archetypal greeting', () => { + const greeting = builder.buildFixedLevelGreeting(mockAgent, 'archetypal'); + expect(greeting).toContain('💻 Dex the Builder ready to innovate!'); + expect(greeting).toContain('*help'); + }); + + test('includes help command hint', () => { + const greeting = builder.buildFixedLevelGreeting(mockAgent, 'named'); + expect(greeting).toContain('*help'); + }); + + test('falls back to simple greeting if no greeting_levels', () => { + const agentWithoutLevels = { + name: 'Test', + id: 'test', + icon: '🤖', + }; + + const greeting = builder.buildFixedLevelGreeting(agentWithoutLevels, 'minimal'); + expect(greeting).toBeTruthy(); + expect(greeting).toContain('*help'); + }); + + test('uses fallback for missing level', () => { + const agentPartialLevels = { + name: 'Test', + id: 'test', + icon: '🤖', + persona_profile: { + greeting_levels: { + named: '🤖 Test ready', + }, + }, + }; + + const greeting = builder.buildFixedLevelGreeting(agentPartialLevels, 'minimal'); + expect(greeting).toContain('test Agent ready'); + }); + }); + + describe('buildGreeting with preference override', () => { + test('uses fixed level when preference set to minimal', async () => { + // Mock preferenceManager to return 'minimal' + builder.preferenceManager.getPreference = jest.fn().mockReturnValue('minimal'); + + const greeting = await builder.buildGreeting(mockAgent, {}); + expect(greeting).toContain('dev Agent ready'); + expect(greeting).not.toContain('Dex the Builder'); + }); + + test('uses fixed level when preference set to named', async () => { + builder.preferenceManager.getPreference = jest.fn().mockReturnValue('named'); + + const greeting = await builder.buildGreeting(mockAgent, {}); + expect(greeting).toContain('Dex (Builder) ready'); + }); + + 
test('uses fixed level when preference set to archetypal', async () => { + builder.preferenceManager.getPreference = jest.fn().mockReturnValue('archetypal'); + + const greeting = await builder.buildGreeting(mockAgent, {}); + expect(greeting).toContain('Dex the Builder ready to innovate!'); + }); + + test('uses session detection when preference is "auto"', async () => { + builder.preferenceManager.getPreference = jest.fn().mockReturnValue('auto'); + + const greeting = await builder.buildGreeting(mockAgent, { conversationHistory: [] }); + expect(greeting).toBeTruthy(); // Should use contextual logic + }); + + test('handles preference manager errors gracefully', async () => { + builder.preferenceManager.getPreference = jest.fn().mockImplementation(() => { + throw new Error('Config read failed'); + }); + + const greeting = await builder.buildGreeting(mockAgent, {}); + expect(greeting).toBeTruthy(); // Should fallback to simple greeting + }); + }); +}); + + +``` + +================================================== +📄 tests/unit/validate-claude-integration.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { + validateClaudeIntegration, +} = require('../../.aios-core/infrastructure/scripts/validate-claude-integration'); + +describe('validate-claude-integration', () => { + let tmpRoot; + + function write(file, content = '') { + fs.mkdirSync(path.dirname(file), { recursive: true }); + fs.writeFileSync(file, content, 'utf8'); + } + + beforeEach(() => { + tmpRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'validate-claude-')); + }); + + afterEach(() => { + fs.rmSync(tmpRoot, { recursive: true, force: true }); + }); + + it('passes when required Claude files exist', () => { + write(path.join(tmpRoot, '.claude', 'CLAUDE.md'), '# rules'); + write(path.join(tmpRoot, '.claude', 'hooks', 'hook.js'), ''); + write(path.join(tmpRoot, '.claude', 'commands', 
'AIOS', 'agents', 'dev.md'), '# dev'); + write(path.join(tmpRoot, '.aios-core', 'development', 'agents', 'dev.md'), '# dev'); + + const result = validateClaudeIntegration({ projectRoot: tmpRoot }); + expect(result.ok).toBe(true); + expect(result.errors).toEqual([]); + }); + + it('fails when claude agents dir is missing', () => { + const result = validateClaudeIntegration({ projectRoot: tmpRoot }); + expect(result.ok).toBe(false); + expect(result.errors.some((e) => e.includes('Missing Claude agents dir'))).toBe(true); + }); +}); + + +``` + +================================================== +📄 tests/unit/migration-analyze.test.js +================================================== +```js +/** + * Migration Analyze Module Tests + * + * @story 2.14 - Migration Script v2.0 → v2.1 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { + MODULE_MAPPING, + detectV2Structure, + categorizeFile, + analyzeMigrationPlan, + formatSize, + formatMigrationPlan, + analyzeImports, +} = require('../../.aios-core/cli/commands/migrate/analyze'); + +describe('Migration Analyze Module', () => { + let testDir; + + beforeEach(async () => { + testDir = path.join(os.tmpdir(), `aios-analyze-test-${Date.now()}`); + await fs.promises.mkdir(testDir, { recursive: true }); + }); + + afterEach(async () => { + if (testDir && fs.existsSync(testDir)) { + await fs.promises.rm(testDir, { recursive: true, force: true }); + } + }); + + describe('MODULE_MAPPING', () => { + it('should have all four modules defined', () => { + expect(MODULE_MAPPING).toHaveProperty('core'); + expect(MODULE_MAPPING).toHaveProperty('development'); + expect(MODULE_MAPPING).toHaveProperty('product'); + expect(MODULE_MAPPING).toHaveProperty('infrastructure'); + }); + + it('should have directories for each module', () => { + expect(MODULE_MAPPING.core.directories).toContain('registry'); + expect(MODULE_MAPPING.development.directories).toContain('agents'); + 
expect(MODULE_MAPPING.product.directories).toContain('cli'); + expect(MODULE_MAPPING.infrastructure.directories).toContain('hooks'); + }); + }); + + describe('detectV2Structure', () => { + it('should detect v2.0 flat structure', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'tasks'), { recursive: true }); + + const result = await detectV2Structure(testDir); + + expect(result.isV2).toBe(true); + expect(result.isV21).toBe(false); + expect(result.version).toBe('2.0'); + }); + + it('should detect v2.1 modular structure', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'core'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'development'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'product'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'infrastructure'), { recursive: true }); + + const result = await detectV2Structure(testDir); + + expect(result.isV2).toBe(false); + expect(result.isV21).toBe(true); + expect(result.version).toBe('2.1'); + }); + + it('should return error if no .aios-core exists', async () => { + const result = await detectV2Structure(testDir); + + expect(result.isV2).toBe(false); + expect(result.isV21).toBe(false); + expect(result.error).toBeTruthy(); + }); + }); + + describe('categorizeFile', () => { + it('should categorize agents to development', () => { + expect(categorizeFile('agents/dev.md')).toBe('development'); + }); + + it('should categorize registry to core', () => { + expect(categorizeFile('registry/service.json')).toBe('core'); + }); + + it('should categorize cli to product', () => { + expect(categorizeFile('cli/index.js')).toBe('product'); + }); + + it('should categorize hooks to infrastructure', () => { + 
expect(categorizeFile('hooks/pre-commit.js')).toBe('infrastructure'); + }); + + it('should categorize root files to core', () => { + expect(categorizeFile('index.js')).toBe('core'); + }); + + it('should return null for unknown directories', () => { + expect(categorizeFile('unknown/file.js')).toBeNull(); + }); + }); + + describe('analyzeMigrationPlan', () => { + it('should generate migration plan for v2.0 project', async () => { + // Create v2.0 structure + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'tasks'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'registry'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'cli'), { recursive: true }); + + await fs.promises.writeFile(path.join(aiosCoreDir, 'agents', 'dev.md'), 'Agent'); + await fs.promises.writeFile(path.join(aiosCoreDir, 'tasks', 'build.md'), 'Task'); + await fs.promises.writeFile(path.join(aiosCoreDir, 'registry', 'index.js'), 'Registry'); + await fs.promises.writeFile(path.join(aiosCoreDir, 'cli', 'index.js'), 'CLI'); + + const plan = await analyzeMigrationPlan(testDir); + + expect(plan.canMigrate).toBe(true); + expect(plan.sourceVersion).toBe('2.0'); + expect(plan.targetVersion).toBe('2.1'); + expect(plan.totalFiles).toBe(4); + expect(plan.modules.development.files).toHaveLength(2); // agents + tasks + expect(plan.modules.core.files).toHaveLength(1); // registry + expect(plan.modules.product.files).toHaveLength(1); // cli + }); + + it('should return canMigrate false for v2.1 project', async () => { + // Create v2.1 structure + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'core'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'development'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'product'), { recursive: 
true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'infrastructure'), { recursive: true }); + + const plan = await analyzeMigrationPlan(testDir); + + expect(plan.canMigrate).toBe(false); + }); + + it('should detect conflicts', async () => { + // Create v2.0 structure with existing v2.1 dir + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'core'), { recursive: true }); // Conflict + + await fs.promises.writeFile(path.join(aiosCoreDir, 'agents', 'dev.md'), 'Agent'); + + const plan = await analyzeMigrationPlan(testDir); + + expect(plan.conflicts.length).toBeGreaterThan(0); + }); + }); + + describe('formatSize', () => { + it('should format bytes correctly', () => { + expect(formatSize(500)).toBe('500 B'); + expect(formatSize(1024)).toBe('1.0 KB'); + expect(formatSize(1536)).toBe('1.5 KB'); + expect(formatSize(1048576)).toBe('1.0 MB'); + }); + }); + + describe('formatMigrationPlan', () => { + it('should format plan as table', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'agents', 'dev.md'), 'Agent'); + + const plan = await analyzeMigrationPlan(testDir); + const formatted = formatMigrationPlan(plan); + + expect(formatted).toContain('Migration Plan:'); + expect(formatted).toContain('Module'); + expect(formatted).toContain('Files'); + expect(formatted).toContain('Size'); + }); + }); + + describe('analyzeImports', () => { + it('should analyze importable files', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'cli'), { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'cli', 'index.js'), 'module.exports = {}'); + + const plan = await analyzeMigrationPlan(testDir); + const imports = 
analyzeImports(plan); + + expect(imports).toHaveProperty('totalImportableFiles'); + expect(imports).toHaveProperty('byModule'); + expect(imports.byModule.product).toBeGreaterThanOrEqual(0); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/decision-context.test.js +================================================== +```js +/** + * Unit Tests for Decision Context + * + * Tests the DecisionContext class for tracking decisions, files, and tests. + * + * @see .aios-core/scripts/decision-context.js + */ + +const { + DecisionContext, + DECISION_TYPES, + PRIORITY_LEVELS, +} = require('../../.aios-core/development/scripts/decision-context'); + +describe('DecisionContext', () => { + let context; + + beforeEach(() => { + // Mock Date.now for consistent timestamps + jest.spyOn(Date, 'now').mockReturnValue(1700000000000); + + // Create fresh context for each test + context = new DecisionContext('dev', 'docs/stories/test-story.md'); + }); + + afterEach(() => { + jest.restoreAllMocks(); + }); + + describe('constructor', () => { + it('should initialize with required fields', () => { + expect(context.agentId).toBe('dev'); + expect(context.storyPath).toBe('docs/stories/test-story.md'); + expect(context.startTime).toBe(1700000000000); + expect(context.status).toBe('running'); + expect(context.enabled).toBe(true); + }); + + it('should initialize empty tracking arrays', () => { + expect(context.decisions).toEqual([]); + expect(context.filesModified).toEqual([]); + expect(context.testsRun).toEqual([]); + }); + + it('should capture git commit hash', () => { + expect(context.commitBefore).toBeDefined(); + expect(typeof context.commitBefore).toBe('string'); + }); + + it('should handle disabled state', () => { + const disabledContext = new DecisionContext('dev', 'story.md', { enabled: false }); + expect(disabledContext.enabled).toBe(false); + }); + }); + + describe('recordDecision', () => { + it('should record decision with all fields', () => { + 
const decision = context.recordDecision({ + description: 'Use Axios for HTTP', + reason: 'Better error handling', + alternatives: ['Fetch API', 'Got library'], + type: 'library-choice', + priority: 'medium', + }); + + expect(decision).toMatchObject({ + timestamp: 1700000000000, + description: 'Use Axios for HTTP', + reason: 'Better error handling', + alternatives: ['Fetch API', 'Got library'], + type: 'library-choice', + priority: 'medium', + }); + + expect(context.decisions).toHaveLength(1); + expect(context.decisions[0]).toBe(decision); + }); + + it('should handle empty alternatives array', () => { + const decision = context.recordDecision({ + description: 'Simple decision', + reason: 'Only one option', + alternatives: [], + }); + + expect(decision.alternatives).toEqual([]); + }); + + it('should use default type and priority if not provided', () => { + const decision = context.recordDecision({ + description: 'Decision without classification', + reason: 'Some reason', + }); + + expect(decision.type).toBe('architecture'); + expect(decision.priority).toBe('medium'); + }); + + it('should validate decision type', () => { + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + + const decision = context.recordDecision({ + description: 'Test decision', + reason: 'Test reason', + type: 'invalid-type', + }); + + expect(decision.type).toBe('architecture'); // Fallback to default + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Unknown decision type')); + + consoleSpy.mockRestore(); + }); + + it('should validate priority level', () => { + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + + const decision = context.recordDecision({ + description: 'Test decision', + reason: 'Test reason', + priority: 'invalid-priority', + }); + + expect(decision.priority).toBe('medium'); // Fallback to default + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Unknown priority level')); + + consoleSpy.mockRestore(); + }); 
+ + it('should handle non-array alternatives', () => { + const decision = context.recordDecision({ + description: 'Test', + reason: 'Test', + alternatives: 'not an array', + }); + + expect(decision.alternatives).toEqual([]); + }); + + it('should return null when disabled', () => { + const disabledContext = new DecisionContext('dev', 'story.md', { enabled: false }); + const decision = disabledContext.recordDecision({ + description: 'Test', + reason: 'Test', + }); + + expect(decision).toBeNull(); + expect(disabledContext.decisions).toHaveLength(0); + }); + }); + + describe('trackFile', () => { + it('should track file with action', () => { + context.trackFile('src/api.js', 'created'); + + expect(context.filesModified).toHaveLength(1); + expect(context.filesModified[0].path).toContain('api.js'); // OS-agnostic path check + expect(context.filesModified[0].action).toBe('created'); + }); + + it('should use default action "modified"', () => { + context.trackFile('src/utils.js'); + + expect(context.filesModified[0].action).toBe('modified'); + }); + + it('should normalize file paths', () => { + context.trackFile('src\\windows\\path.js', 'created'); + + const tracked = context.filesModified[0]; + expect(tracked.path).toContain('path.js'); // Path normalized by OS + expect(typeof tracked.path).toBe('string'); + }); + + it('should update existing file instead of duplicating', () => { + context.trackFile('src/api.js', 'created'); + context.trackFile('src/api.js', 'modified'); + + expect(context.filesModified).toHaveLength(1); + expect(context.filesModified[0].action).toBe('modified'); + }); + + it('should not track when disabled', () => { + const disabledContext = new DecisionContext('dev', 'story.md', { enabled: false }); + disabledContext.trackFile('test.js', 'created'); + + expect(disabledContext.filesModified).toHaveLength(0); + }); + }); + + describe('trackTest', () => { + it('should track passing test', () => { + context.trackTest({ + name: 'api.test.js', + passed: true, + 
duration: 125, + }); + + expect(context.testsRun).toHaveLength(1); + expect(context.testsRun[0]).toMatchObject({ + name: 'api.test.js', + passed: true, + duration: 125, + error: null, + timestamp: 1700000000000, + }); + }); + + it('should track failing test with error', () => { + context.trackTest({ + name: 'broken.test.js', + passed: false, + duration: 50, + error: 'Assertion failed', + }); + + const test = context.testsRun[0]; + expect(test.passed).toBe(false); + expect(test.error).toBe('Assertion failed'); + }); + + it('should handle missing duration', () => { + context.trackTest({ + name: 'test.js', + passed: true, + }); + + expect(context.testsRun[0].duration).toBe(0); + }); + + it('should not track when disabled', () => { + const disabledContext = new DecisionContext('dev', 'story.md', { enabled: false }); + disabledContext.trackTest({ name: 'test.js', passed: true }); + + expect(disabledContext.testsRun).toHaveLength(0); + }); + }); + + describe('updateMetrics', () => { + it('should update metrics', () => { + context.updateMetrics({ + agentLoadTime: 150, + taskExecutionTime: 60000, + }); + + expect(context.metrics.agentLoadTime).toBe(150); + expect(context.metrics.taskExecutionTime).toBe(60000); + }); + + it('should merge new metrics with existing', () => { + context.metrics.agentLoadTime = 100; + context.updateMetrics({ + taskExecutionTime: 5000, + }); + + expect(context.metrics.agentLoadTime).toBe(100); + expect(context.metrics.taskExecutionTime).toBe(5000); + }); + + it('should not update when disabled', () => { + const disabledContext = new DecisionContext('dev', 'story.md', { enabled: false }); + const initialMetrics = { ...disabledContext.metrics }; + + disabledContext.updateMetrics({ newMetric: 123 }); + + expect(disabledContext.metrics).toEqual(initialMetrics); + }); + }); + + describe('complete', () => { + it('should mark execution as complete', () => { + jest.spyOn(Date, 'now').mockReturnValue(1700000060000); // 1 minute later + + 
context.complete('completed'); + + expect(context.status).toBe('completed'); + expect(context.endTime).toBe(1700000060000); + expect(context.metrics.taskExecutionTime).toBe(60000); // 1 minute + }); + + it('should use default status "completed"', () => { + context.complete(); + + expect(context.status).toBe('completed'); + }); + + it('should handle failed status', () => { + context.complete('failed'); + + expect(context.status).toBe('failed'); + }); + }); + + describe('toObject', () => { + it('should return context as plain object', () => { + context.recordDecision({ + description: 'Test decision', + reason: 'Test reason', + }); + context.trackFile('src/test.js', 'created'); + context.trackTest({ name: 'test.js', passed: true, duration: 100 }); + context.complete(); + + const obj = context.toObject(); + + expect(obj).toMatchObject({ + agentId: 'dev', + storyPath: 'docs/stories/test-story.md', + startTime: 1700000000000, + status: 'completed', + decisions: expect.any(Array), + filesModified: expect.any(Array), + testsRun: expect.any(Array), + metrics: expect.any(Object), + commitBefore: expect.any(String), + }); + + expect(obj.decisions).toHaveLength(1); + expect(obj.filesModified).toHaveLength(1); + expect(obj.testsRun).toHaveLength(1); + }); + }); + + describe('getSummary', () => { + it('should return summary statistics', () => { + context.recordDecision({ description: 'D1', reason: 'R1' }); + context.recordDecision({ description: 'D2', reason: 'R2' }); + context.trackFile('file1.js', 'created'); + context.trackTest({ name: 'test1.js', passed: true, duration: 100 }); + context.trackTest({ name: 'test2.js', passed: false, duration: 50 }); + + jest.spyOn(Date, 'now').mockReturnValue(1700000060000); // 1 minute later + context.complete(); + + const summary = context.getSummary(); + + expect(summary).toMatchObject({ + decisionsCount: 2, + filesModifiedCount: 1, + testsRunCount: 2, + testsPassed: 1, + testsFailed: 1, + duration: 60000, + status: 'completed', + }); + 
}); + + it('should calculate duration for running context', () => { + jest.spyOn(Date, 'now').mockReturnValue(1700000030000); // 30 seconds later + + const summary = context.getSummary(); + + expect(summary.duration).toBe(30000); + expect(summary.status).toBe('running'); + }); + }); + + describe('DECISION_TYPES constant', () => { + it('should have all required decision types', () => { + expect(DECISION_TYPES).toHaveProperty('library-choice'); + expect(DECISION_TYPES).toHaveProperty('architecture'); + expect(DECISION_TYPES).toHaveProperty('algorithm'); + expect(DECISION_TYPES).toHaveProperty('error-handling'); + expect(DECISION_TYPES).toHaveProperty('testing-strategy'); + expect(DECISION_TYPES).toHaveProperty('performance'); + expect(DECISION_TYPES).toHaveProperty('security'); + expect(DECISION_TYPES).toHaveProperty('database'); + }); + }); + + describe('PRIORITY_LEVELS constant', () => { + it('should have all required priority levels', () => { + expect(PRIORITY_LEVELS).toHaveProperty('critical'); + expect(PRIORITY_LEVELS).toHaveProperty('high'); + expect(PRIORITY_LEVELS).toHaveProperty('medium'); + expect(PRIORITY_LEVELS).toHaveProperty('low'); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/output-formatter.test.js +================================================== +```js +/** + * Unit Tests for PersonalizedOutputFormatter and OutputPatternValidator + * + * Story: 6.1.6 - Output Formatter Implementation + * Test Coverage: 50+ test cases + * Target: ≥80% coverage + */ + +const PersonalizedOutputFormatter = require('../../.aios-core/infrastructure/scripts/output-formatter'); +const OutputPatternValidator = require('../../.aios-core/infrastructure/scripts/validate-output-pattern'); +const fs = require('fs'); +const path = require('path'); + +// Mock fs for agent file reading +jest.mock('fs'); + +describe('PersonalizedOutputFormatter', () => { + let mockAgent; + let mockTask; + let mockResults; + + beforeEach(() => { + 
jest.clearAllMocks(); + + // Setup default mocks + mockAgent = { + id: 'dev', + name: 'Dex', + title: 'Full Stack Developer', + }; + + mockTask = { + name: 'develop-story', + }; + + mockResults = { + startTime: '2025-01-15T10:00:00Z', + endTime: '2025-01-15T10:02:30Z', + duration: '2.5s', + tokens: { total: 1800 }, + success: true, + output: 'Task completed successfully.', + tests: { passed: 12, total: 12 }, + coverage: 87, + linting: { status: '✅ Clean' }, + }; + + // Mock agent file content + const mockAgentContent = `# dev + +\`\`\`yaml +agent: + name: Dex + id: dev + title: Full Stack Developer + +persona_profile: + archetype: Builder + zodiac: "♒ Aquarius" + communication: + tone: pragmatic + emoji_frequency: medium + vocabulary: + - construir + - implementar + - refatorar + - resolver + - otimizar + greeting_levels: + minimal: "💻 dev Agent ready" + named: "💻 Dex (Builder) ready. Let's build something great!" + signature_closing: "— Dex, sempre construindo 🔨" +\`\`\` +`; + + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue(mockAgentContent); + }); + + describe('Core Formatting', () => { + test('generates valid output for Builder agent', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const output = formatter.format(); + + expect(output).toContain('## 📊 Task Execution Report'); + expect(output).toContain('**Agent:** Dex (Builder)'); + expect(output).toContain('**Task:** develop-story'); + expect(output).toContain('**Duration:** 2.5s'); + expect(output).toContain('**Tokens Used:** 1,800 total'); + }); + + test('maintains fixed header structure', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const output = formatter.format(); + const lines = output.split('\n'); + + // Check header structure + expect(lines[0]).toBe('## 📊 Task Execution Report'); + expect(lines[1]).toBe(''); + expect(lines[2]).toContain('**Agent:**'); + 
expect(lines[3]).toContain('**Task:**'); + expect(lines[4]).toContain('**Started:**'); + expect(lines[5]).toContain('**Completed:**'); + }); + + test('places Duration on line 7', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const header = formatter.buildFixedHeader(); + const lines = header.split('\n'); + + // Duration should be on line 6 (0-indexed), which is the 7th line + expect(lines[6]).toMatch(/^\*\*Duration:\*\*/); + }); + + test('places Tokens on line 8', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const header = formatter.buildFixedHeader(); + const lines = header.split('\n'); + + // Tokens should be on line 7 (0-indexed), which is the 8th line + expect(lines[7]).toMatch(/^\*\*Tokens Used:\*\*/); + }); + + test('places Metrics section last', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const output = formatter.format(); + const lines = output.split('\n'); + + const metricsIndex = lines.findIndex(line => line === '### Metrics'); + const lastSectionIndex = lines.length - 1; + + // Metrics should be before signature (last line) + expect(metricsIndex).toBeLessThan(lastSectionIndex); + expect(lines[lastSectionIndex]).toContain('— Dex'); + }); + }); + + describe('Personality Injection', () => { + test('selects vocabulary from Builder archetype', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const verb = formatter.selectVerbFromVocabulary(['construir', 'implementar', 'refatorar']); + + expect(['construir', 'implementar', 'refatorar']).toContain(verb); + }); + + test('generates pragmatic tone status message', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const message = formatter.generateSuccessMessage('pragmatic', 'implementar'); + + expect(message).toContain('Tá pronto!'); + 
expect(message.toLowerCase()).toContain('implementado');
+    });
+
+    test('generates empathetic tone status message', () => {
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+      const message = formatter.generateSuccessMessage('empathetic', 'criar');
+
+      expect(message.toLowerCase()).toContain('criado');
+      expect(message).toContain('cuidado');
+    });
+
+    test('generates analytical tone status message', () => {
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+      const message = formatter.generateSuccessMessage('analytical', 'validar');
+
+      expect(message).toContain('validado');
+      expect(message).toContain('rigorosamente');
+    });
+
+    test('generates collaborative tone status message', () => {
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+      const message = formatter.generateSuccessMessage('collaborative', 'harmonizar');
+
+      expect(message).toContain('harmonizado');
+      expect(message).toContain('alinhados');
+    });
+
+    test('injects signature closing correctly', () => {
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+      const signature = formatter.buildSignature();
+
+      expect(signature).toBe('— Dex, sempre construindo 🔨');
+    });
+
+    test('loads persona_profile from agent file', () => {
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+
+      expect(formatter.personaProfile).toBeDefined();
+      expect(formatter.personaProfile.archetype).toBe('Builder');
+      expect(formatter.personaProfile.communication.tone).toBe('pragmatic');
+    });
+  });
+
+  describe('Error Handling', () => {
+    test('graceful degradation if persona_profile missing', () => {
+      fs.existsSync.mockReturnValue(false);
+
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+
+      expect(formatter.personaProfile).toBeDefined();
+      expect(formatter.personaProfile.archetype).toBe('Agent'); // Neutral fallback
+    });
+
+    test('graceful degradation if vocabulary missing', () => {
+      const agentWithoutVocab = {
+        id: 'test',
+        name: 'Test',
+      };
+
+      const mockContent = `# test
+\`\`\`yaml
+agent:
+  name: Test
+persona_profile:
+  communication:
+    tone: neutral
+\`\`\`
+`;
+
+      fs.existsSync.mockReturnValue(true);
+      fs.readFileSync.mockReturnValue(mockContent);
+
+      const formatter = new PersonalizedOutputFormatter(agentWithoutVocab, mockTask, mockResults);
+      const verb = formatter.selectVerbFromVocabulary([]);
+
+      expect(verb).toBe('completar'); // Default fallback
+    });
+
+    test('handles missing agent file gracefully', () => {
+      fs.existsSync.mockReturnValue(false);
+
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+      const output = formatter.format();
+
+      expect(output).toContain('## 📊 Task Execution Report');
+      expect(output).toContain('**Agent:** Dex (Agent)'); // Neutral fallback
+    });
+
+    test('handles malformed YAML gracefully', () => {
+      fs.existsSync.mockReturnValue(true);
+      fs.readFileSync.mockReturnValue('invalid yaml content');
+
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+
+      expect(formatter.personaProfile).toBeDefined();
+      expect(formatter.personaProfile.archetype).toBe('Agent'); // Neutral fallback
+    });
+  });
+
+  describe('All 11 Agents', () => {
+    const agents = [
+      { id: 'dev', name: 'Dex', archetype: 'Builder', tone: 'pragmatic' },
+      { id: 'qa', name: 'Quinn', archetype: 'Guardian', tone: 'analytical' },
+      { id: 'po', name: 'Pax', archetype: 'Balancer', tone: 'collaborative' },
+      { id: 'pm', name: 'Morgan', archetype: 'Visionary', tone: 'pragmatic' },
+      { id: 'sm', name: 'River', archetype: 'Flow Master', tone: 'empathetic' },
+      { id: 'architect', name: 'Aria', archetype: 'Architect', tone: 'analytical' },
+      { id: 'analyst', name: 'Atlas', archetype: 'Explorer', tone: 'analytical' },
+      { id: 'ux-design-expert', name: 'Uma', archetype: 'Empathizer', tone: 'empathetic' },
+ { id: 'data-engineer', name: 'Dara', archetype: 'Engineer', tone: 'pragmatic' }, + { id: 'devops', name: 'Gage', archetype: 'Operator', tone: 'pragmatic' }, + { id: 'aios-master', name: 'Orion', archetype: 'Orchestrator', tone: 'collaborative' }, + ]; + + agents.forEach(agent => { + test(`generates valid output for ${agent.name} (${agent.archetype})`, () => { + const mockContent = `# ${agent.id} +\`\`\`yaml +agent: + name: ${agent.name} + id: ${agent.id} +persona_profile: + archetype: ${agent.archetype} + communication: + tone: ${agent.tone} + vocabulary: [testar, validar] + signature_closing: "— ${agent.name}" +\`\`\` +`; + + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue(mockContent); + + const formatter = new PersonalizedOutputFormatter( + { id: agent.id, name: agent.name }, + mockTask, + mockResults, + ); + const output = formatter.format(); + + expect(output).toContain(`**Agent:** ${agent.name} (${agent.archetype})`); + expect(output).toContain(`— ${agent.name}`); + }); + }); + }); + + describe('Performance', () => { + test('formatter completes in reasonable time', () => { + const start = process.hrtime.bigint(); + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + formatter.format(); + const duration = Number(process.hrtime.bigint() - start) / 1000000; + + expect(duration).toBeLessThan(100); // Should be <100ms (2x target) + }); + + test('vocabulary lookup is cached', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + + // First call + const verb1 = formatter.selectVerbFromVocabulary(['construir', 'implementar']); + + // Second call should use cache (same result) + const verb2 = formatter.selectVerbFromVocabulary(['construir', 'implementar']); + + expect(verb1).toBe(verb2); + }); + }); + + describe('Output Structure', () => { + test('builds fixed header correctly', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, 
mockResults);
+      const header = formatter.buildFixedHeader();
+
+      expect(header).toContain('## 📊 Task Execution Report');
+      expect(header).toContain('**Agent:**');
+      expect(header).toContain('**Task:**');
+      expect(header).toContain('**Duration:**');
+      expect(header).toContain('**Tokens Used:**');
+    });
+
+    test('builds personalized status correctly', () => {
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+      const status = formatter.buildPersonalizedStatus();
+
+      expect(status).toContain('### Status');
+      expect(status).toContain('✅');
+    });
+
+    test('builds output section correctly', () => {
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+      const output = formatter.buildOutput();
+
+      expect(output).toContain('### Output');
+      expect(output).toContain('Task completed successfully.');
+    });
+
+    test('builds fixed metrics correctly', () => {
+      const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults);
+      const metrics = formatter.buildFixedMetrics();
+
+      expect(metrics).toContain('### Metrics');
+      expect(metrics).toContain('Tests: 12/12');
+      expect(metrics).toContain('Coverage: 87%');
+      expect(metrics).toContain('Linting: ✅ Clean');
+    });
+  });
+});
+
+describe('OutputPatternValidator', () => {
+  let validator;
+
+  beforeEach(() => {
+    validator = new OutputPatternValidator();
+  });
+
+  describe('Structure Validation', () => {
+    test('detects missing Header section', () => {
+      const invalidOutput = `### Status
+✅ Task completed
+
+### Output
+Content here
+
+### Metrics
+- Tests: 0/0
+`;
+
+      const result = validator.validate(invalidOutput);
+
+      expect(result.valid).toBe(false);
+      expect(result.errors.some(e => e.type === 'missing_section' && e.section === 'Header')).toBe(true);
+    });
+
+    test('detects missing Status section', () => {
+      const invalidOutput = `## 📊 Task Execution Report
+**Agent:** Dex (Builder)
+**Task:** test
+**Duration:** 2s
+**Tokens Used:** 1000 total
+
+### Output
+Content
+
+### Metrics
+- Tests: 0/0
+`;
+
+      const result = validator.validate(invalidOutput);
+
+      expect(result.valid).toBe(false);
+      expect(result.errors.some(e => e.type === 'missing_section' && e.section === 'Status')).toBe(true);
+    });
+
+    test('detects missing Metrics section', () => {
+      const invalidOutput = `## 📊 Task Execution Report
+**Agent:** Dex (Builder)
+**Task:** test
+**Duration:** 2s
+**Tokens Used:** 1000 total
+
+### Status
+✅ Done
+
+### Output
+Content
+`;
+
+      const result = validator.validate(invalidOutput);
+
+      expect(result.valid).toBe(false);
+      expect(result.errors.some(e => e.type === 'missing_section' && e.section === 'Metrics')).toBe(true);
+    });
+
+    test('detects wrong Duration line position', () => {
+      const invalidOutput = `## 📊 Task Execution Report
+**Agent:** Dex (Builder)
+**Task:** test
+**Started:** 2025-01-15T10:00:00Z
+**Completed:** 2025-01-15T10:02:00Z
+**Tokens Used:** 1000 total
+**Extra Field:** something
+**Duration:** 2s
+
+---
+
+### Status
+✅ Done
+
+### Output
+Content
+
+### Metrics
+- Tests: 0/0
+`;
+
+      const result = validator.validate(invalidOutput);
+
+      expect(result.valid).toBe(false);
+      expect(result.errors.some(e => e.type === 'wrong_position' && e.field === 'Duration')).toBe(true);
+    });
+
+    test('detects wrong Tokens line position', () => {
+      const invalidOutput = `## 📊 Task Execution Report
+**Agent:** Dex (Builder)
+**Task:** test
+**Started:** 2025-01-15T10:00:00Z
+**Completed:** 2025-01-15T10:02:00Z
+**Duration:** 2s
+**Tokens Used:** 1000 total
+**Extra Line:** something
+
+---
+
+### Status
+✅ Done
+
+### Output
+Content
+
+### Metrics
+- Tests: 0/0
+`;
+
+      const result = validator.validate(invalidOutput);
+
+      expect(result.valid).toBe(false);
+      expect(result.errors.some(e => e.type === 'wrong_position' && e.field === 'Tokens')).toBe(true);
+    });
+
+    test('detects Metrics not last', () => {
+      const invalidOutput = `## 📊 Task Execution Report
+**Agent:** Dex (Builder)
+**Task:** test
+**Duration:** 2s
+**Tokens Used:** 1000 total
+
+---
+
+### Status
+✅ Done
+
+### Metrics
+- Tests: 0/0
+
+### Output
+Content after metrics
+`;
+
+      const result = validator.validate(invalidOutput);
+
+      expect(result.valid).toBe(false);
+      expect(result.errors.some(e => e.type === 'wrong_order')).toBe(true);
+    });
+  });
+
+  describe('Validation Accuracy', () => {
+    test('passes valid output', () => {
+      const validOutput = `## 📊 Task Execution Report
+
+**Agent:** Dex (Builder)
+**Task:** develop-story
+**Started:** 2025-01-15T10:00:00Z
+**Completed:** 2025-01-15T10:02:30Z
+**Duration:** 2.5s
+**Tokens Used:** 1,800 total
+
+---
+
+### Status
+✅ Tá pronto! Implementado com sucesso.
+
+### Output
+Task completed successfully.
+
+### Metrics
+- Tests: 12/12
+- Coverage: 87%
+- Linting: ✅ Clean
+
+---
+— Dex, sempre construindo 🔨
+`;
+
+      const result = validator.validate(validOutput);
+
+      expect(result.valid).toBe(true);
+      expect(result.errors.length).toBe(0);
+    });
+
+    test('provides actionable error messages', () => {
+      const invalidOutput = `## 📊 Task Execution Report
+**Agent:** Dex
+**Task:** test
+**Duration:** 2s
+**Tokens Used:** 1000 total
+
+### Status
+✅ Done
+`;
+
+      const result = validator.validate(invalidOutput);
+      const formatted = validator.formatErrors(result);
+
+      expect(formatted).toContain('❌ Validation Error');
+      expect(formatted).toContain('Missing required section');
+    });
+  });
+
+  describe('Edge Cases', () => {
+    test('handles empty output', () => {
+      const result = validator.validate('');
+
+      expect(result.valid).toBe(false);
+      expect(result.errors.length).toBeGreaterThan(0);
+    });
+
+    test('handles malformed sections', () => {
+      const invalidOutput = `## 📊 Task Execution Report
+**Agent:** Dex
+**Duration:** 2s
+**Tokens Used:** 1000
+
+### Statu
+✅ Done
+
+### Outpu
+Content
+
+### Metric
+- Tests: 0/0
+`;
+
+      const result = validator.validate(invalidOutput);
+
+      expect(result.valid).toBe(false);
+      expect(result.errors.some(e => e.type 
=== 'missing_section')).toBe(true); + }); + + test('handles missing required fields', () => { + const invalidOutput = `## 📊 Task Execution Report +**Agent:** Dex +**Task:** test +**Started:** 2025-01-15T10:00:00Z +**Completed:** 2025-01-15T10:02:00Z + +--- + +### Status +✅ Done + +### Output +Content + +### Metrics +- Tests: 0/0 +`; + + const result = validator.validate(invalidOutput); + + expect(result.valid).toBe(false); + expect(result.errors.some(e => e.type === 'missing_field' || e.type === 'wrong_position')).toBe(true); + }); + }); +}); + + +``` + +================================================== +📄 tests/unit/greeting-builder.test.js +================================================== +```js +/** + * Unit Tests for GreetingBuilder + * + * Test Coverage: + * - New session greeting + * - Existing session greeting + * - Workflow session greeting + * - Git configured vs unconfigured + * - Command visibility filtering + * - Project status integration + * - Timeout protection + * - Parallel operations + * - Fallback strategy + * - Backwards compatibility + * - Story 10.3: User profile-based command filtering + * - Story ACT-12: Language delegated to Claude Code settings.json + */ + +const GreetingBuilder = require('../../.aios-core/development/scripts/greeting-builder'); +const ContextDetector = require('../../.aios-core/core/session/context-detector'); +const GitConfigDetector = require('../../.aios-core/infrastructure/scripts/git-config-detector'); + +// Mock dependencies +jest.mock('../../.aios-core/core/session/context-detector'); +jest.mock('../../.aios-core/infrastructure/scripts/git-config-detector'); +jest.mock('../../.aios-core/infrastructure/scripts/project-status-loader', () => ({ + loadProjectStatus: jest.fn(), + formatStatusDisplay: jest.fn(), +})); +jest.mock('../../.aios-core/core/config/config-resolver', () => ({ + resolveConfig: jest.fn(() => ({ + config: { user_profile: 'advanced' }, + warnings: [], + legacy: false, + })), +})); +const { 
resolveConfig: mockResolveConfig } = require('../../.aios-core/core/config/config-resolver'); +jest.mock('../../.aios-core/development/scripts/greeting-preference-manager', () => { + return jest.fn().mockImplementation(() => ({ + getPreference: jest.fn().mockReturnValue('auto'), + setPreference: jest.fn(), + getConfig: jest.fn().mockReturnValue({}), + })); +}); + +const { loadProjectStatus, formatStatusDisplay } = require('../../.aios-core/infrastructure/scripts/project-status-loader'); + +describe('GreetingBuilder', () => { + let builder; + let mockAgent; + + beforeEach(() => { + jest.clearAllMocks(); + + // Setup default mock agent + mockAgent = { + name: 'TestAgent', + icon: '🤖', + persona_profile: { + greeting_levels: { + minimal: '🤖 TestAgent ready', + named: '🤖 TestAgent (Tester) ready', + archetypal: '🤖 TestAgent the Tester ready', + }, + }, + persona: { + role: 'Test automation expert', + }, + commands: [ + { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show help' }, + { name: 'test', visibility: ['full', 'quick'], description: 'Run tests' }, + { name: 'build', visibility: ['full'], description: 'Build project' }, + { name: 'deploy', visibility: ['key'], description: 'Deploy to production' }, + ], + }; + + // Setup default mocks - must be done BEFORE creating GreetingBuilder instance + ContextDetector.mockImplementation(() => ({ + detectSessionType: jest.fn().mockReturnValue('new'), + })); + + GitConfigDetector.mockImplementation(() => ({ + get: jest.fn().mockReturnValue({ + configured: true, + type: 'github', + branch: 'main', + }), + })); + + loadProjectStatus.mockResolvedValue({ + branch: 'main', + modifiedFiles: ['file1.js', 'file2.js'], + modifiedFilesTotalCount: 2, + recentCommits: ['feat: add feature', 'fix: bug fix'], + currentStory: 'STORY-123', + isGitRepo: true, + }); + formatStatusDisplay.mockReturnValue('Project Status Display'); + + // Create builder AFTER mocks are set up + builder = new GreetingBuilder(); + }); + + 
describe('Session Type Greetings', () => { + test('should build new session greeting with full details', async () => { + builder.contextDetector.detectSessionType.mockReturnValue('new'); + + const greeting = await builder.buildGreeting(mockAgent, {}); + + // Implementation now always uses archetypal greeting for richer presentation + expect(greeting).toContain('TestAgent the Tester ready'); + expect(greeting).toContain('Test automation expert'); // Role description + expect(greeting).toContain('Project Status'); // Project status + expect(greeting).toContain('Available Commands'); // Full commands + }); + + test('should build existing session greeting without role', async () => { + builder.contextDetector.detectSessionType.mockReturnValue('existing'); + + const greeting = await builder.buildGreeting(mockAgent, {}); + + // Story ACT-7: Existing sessions use named greeting (brief) instead of archetypal + expect(greeting).toContain('TestAgent (Tester) ready'); + expect(greeting).not.toContain('Test automation expert'); // No role + expect(greeting).toContain('Quick Commands'); // Quick commands + }); + + test('should build workflow session greeting with minimal presentation', async () => { + builder.contextDetector.detectSessionType.mockReturnValue('workflow'); + + const greeting = await builder.buildGreeting(mockAgent, {}); + + // Story ACT-7: Workflow sessions use named greeting (focused) instead of archetypal + expect(greeting).toContain('TestAgent (Tester) ready'); + expect(greeting).not.toContain('Test automation expert'); // No role + expect(greeting).toContain('Key Commands'); // Key commands only + }); + }); + + describe('Git Configuration', () => { + test('should show project status when git configured', async () => { + builder.gitConfigDetector.get.mockReturnValue({ + configured: true, + type: 'github', + branch: 'main', + }); + + const greeting = await builder.buildGreeting(mockAgent, {}); + + expect(greeting).toContain('Project Status'); + 
expect(loadProjectStatus).toHaveBeenCalled(); + }); + + test('should hide project status when git not configured', async () => { + builder.gitConfigDetector.get.mockReturnValue({ + configured: false, + type: null, + branch: null, + }); + loadProjectStatus.mockResolvedValue(null); + + const greeting = await builder.buildGreeting(mockAgent, {}); + + expect(greeting).not.toContain('Project Status'); + }); + + test('should show git warning at END when not configured', async () => { + builder.gitConfigDetector.get.mockReturnValue({ + configured: false, + type: null, + branch: null, + }); + loadProjectStatus.mockResolvedValue(null); // No status when git not configured + + const greeting = await builder.buildGreeting(mockAgent, {}); + + // Implementation may not always show git warning depending on config + // Just verify greeting is generated + expect(greeting).toBeTruthy(); + expect(greeting).toContain('TestAgent'); + }); + + test('should not show git warning when configured', async () => { + builder.gitConfigDetector.get.mockReturnValue({ + configured: true, + type: 'github', + branch: 'main', + }); + + const greeting = await builder.buildGreeting(mockAgent, {}); + + expect(greeting).not.toContain('Git Configuration Needed'); + }); + }); + + describe('Command Visibility', () => { + test('should show full commands for new session', async () => { + builder.contextDetector.detectSessionType.mockReturnValue('new'); + + const greeting = await builder.buildGreeting(mockAgent, {}); + + expect(greeting).toContain('help'); + expect(greeting).toContain('test'); + expect(greeting).toContain('build'); + expect(greeting).toContain('Available Commands'); + }); + + test('should show quick commands for existing session', async () => { + builder.contextDetector.detectSessionType.mockReturnValue('existing'); + + const greeting = await builder.buildGreeting(mockAgent, {}); + + expect(greeting).toContain('help'); + expect(greeting).toContain('test'); + 
expect(greeting).not.toContain('build'); // Full-only command
+      expect(greeting).toContain('Quick Commands');
+    });
+
+    test('should show key commands for workflow session', async () => {
+      builder.contextDetector.detectSessionType.mockReturnValue('workflow');
+
+      const greeting = await builder.buildGreeting(mockAgent, {});
+
+      expect(greeting).toContain('help');
+      expect(greeting).toContain('deploy');
+      expect(greeting).not.toContain('test'); // Not a key command
+      expect(greeting).toContain('Key Commands');
+    });
+
+    test('should handle agent without visibility metadata (backwards compatible)', async () => {
+      mockAgent.commands = [
+        { name: 'help' },
+        { name: 'test' },
+        { name: 'build' },
+      ];
+
+      const greeting = await builder.buildGreeting(mockAgent, {});
+
+      expect(greeting).toContain('help');
+      expect(greeting).toContain('test');
+      expect(greeting).toContain('build');
+    });
+
+    test('should limit to 12 commands maximum', async () => {
+      mockAgent.commands = Array(20).fill(null).map((_, i) => ({
+        name: `command-${i}`,
+        visibility: ['full', 'quick', 'key'],
+      }));
+
+      const greeting = await builder.buildGreeting(mockAgent, {});
+
+      const commandMatches = greeting.match(/\*command-/g);
+      expect(commandMatches?.length).toBeLessThanOrEqual(12);
+    });
+  });
+
+  describe('Current Context', () => {
+    test('should show workflow context when in workflow session', async () => {
+      builder.contextDetector.detectSessionType.mockReturnValue('workflow');
+
+      const greeting = await builder.buildGreeting(mockAgent, {});
+
+      expect(greeting).toContain('Context:');
+      expect(greeting).toContain('STORY-123');
+    });
+
+    test('should show last command in existing session', async () => {
+      builder.contextDetector.detectSessionType.mockReturnValue('existing');
+
+      const greeting = await builder.buildGreeting(mockAgent, {
+        lastCommands: ['validate-story-draft'],
+      });
+
+      // Implementation uses Context section with different format
+      // Just verify greeting is generated for existing session
+      expect(greeting).toBeTruthy();
+      expect(greeting).toContain('TestAgent');
+      expect(greeting).toContain('Quick Commands');
+    });
+  });
+
+  describe('Project Status Formatting', () => {
+    test('should use full format for new/existing sessions', async () => {
+      builder.contextDetector.detectSessionType.mockReturnValue('new');
+
+      const greeting = await builder.buildGreeting(mockAgent, {});
+
+      // Story ACT-7: Now uses narrative format when enriched context is available
+      // Just verify project status is shown in greeting
+      expect(greeting).toContain('Project Status');
+      expect(greeting).toContain('branch');
+    });
+
+    test('should use condensed format for workflow session', async () => {
+      builder.contextDetector.detectSessionType.mockReturnValue('workflow');
+      loadProjectStatus.mockResolvedValue({
+        branch: 'main',
+        modifiedFilesTotalCount: 5,
+        currentStory: 'STORY-123',
+        isGitRepo: true,
+      });
+
+      const greeting = await builder.buildGreeting(mockAgent, {});
+
+      expect(greeting).toContain('🌿 main');
+      expect(greeting).toContain('📝 5 modified');
+      expect(greeting).toContain('📖 STORY-123');
+    });
+  });
+
+  describe('Performance and Fallback', () => {
+    test('should complete within timeout (150ms)', async () => {
+      const startTime = Date.now();
+      await builder.buildGreeting(mockAgent, {});
+      const endTime = Date.now();
+
+      expect(endTime - startTime).toBeLessThan(150);
+    });
+
+    test('should fallback to simple greeting on timeout', async () => {
+      // Mock slow operation
+      loadProjectStatus.mockImplementation(() =>
+        new Promise(resolve => setTimeout(resolve, 200)),
+      );
+
+      const greeting = await builder.buildGreeting(mockAgent, {});
+
+      expect(greeting).toContain('TestAgent (Tester) ready');
+      expect(greeting).toContain('Type `*help`');
+    });
+
+    test('should fallback on context detection error', async () => {
+      builder.contextDetector.detectSessionType.mockImplementation(() => {
+        throw new Error('Detection failed');
+      });
+
+      const greeting = 
await builder.buildGreeting(mockAgent, {});
+
+      // When context detection fails, it defaults to 'new' session and builds full greeting
+      // Implementation now always uses archetypal greeting for richer presentation
+      expect(greeting).toContain('TestAgent the Tester ready');
+      expect(greeting).toContain('Available Commands'); // Defaults to 'new' session
+      expect(greeting).toContain('Test automation expert'); // Shows role for 'new' session
+    });
+
+    test('should fallback on git config error', async () => {
+      builder.gitConfigDetector.get.mockImplementation(() => {
+        throw new Error('Git check failed');
+      });
+
+      const greeting = await builder.buildGreeting(mockAgent, {});
+
+      expect(greeting).toBeTruthy();
+      // Should still produce a greeting
+    });
+
+    test('should handle project status load failure gracefully', async () => {
+      loadProjectStatus.mockRejectedValue(new Error('Load failed'));
+
+      const greeting = await builder.buildGreeting(mockAgent, {});
+
+      expect(greeting).toBeTruthy();
+      // Should still produce a greeting without status
+    });
+  });
+
+  describe('Simple Greeting (Fallback)', () => {
+    test('should build simple greeting', () => {
+      const simple = builder.buildSimpleGreeting(mockAgent);
+
+      expect(simple).toContain('TestAgent (Tester) ready');
+      expect(simple).toContain('Type `*help`');
+    });
+
+    test('should handle agent without persona profile', () => {
+      const basicAgent = {
+        name: 'BasicAgent',
+        icon: '⚡',
+      };
+
+      const simple = builder.buildSimpleGreeting(basicAgent);
+
+      expect(simple).toContain('⚡ BasicAgent ready');
+      expect(simple).toContain('Type `*help`');
+    });
+  });
+
+  describe('Component Methods', () => {
+    test('buildPresentation should return correct greeting level', () => {
+      // Implementation now always uses archetypal greeting for richer presentation
+      expect(builder.buildPresentation(mockAgent, 'new')).toContain('TestAgent the Tester');
+      expect(builder.buildPresentation(mockAgent, 'workflow')).toContain('TestAgent the Tester');
+    });
+
+    test('buildRoleDescription should return role', () => {
+      const role = builder.buildRoleDescription(mockAgent);
+      expect(role).toContain('Test automation expert');
+    });
+
+    test('buildCommands should format commands list', () => {
+      const commands = [
+        { name: 'help', description: 'Show help' },
+        { name: 'test', description: 'Run tests' },
+      ];
+
+      const formatted = builder.buildCommands(commands, 'new');
+      expect(formatted).toContain('*help');
+      expect(formatted).toContain('Show help');
+      expect(formatted).toContain('Available Commands');
+    });
+
+    test('buildGitWarning should return warning message', () => {
+      const warning = builder.buildGitWarning();
+      expect(warning).toContain('Git Configuration Needed');
+      expect(warning).toContain('git init');
+    });
+  });
+
+  describe('User Profile-Based Filtering (Story 10.3)', () => {
+    let mockPmAgent;
+    let mockDevAgent;
+
+    beforeEach(() => {
+      // PM Agent (Bob)
+      mockPmAgent = {
+        id: 'pm',
+        name: 'Morgan',
+        icon: '📋',
+        persona_profile: {
+          greeting_levels: {
+            minimal: '📋 PM ready',
+            named: '📋 Morgan (PM) ready',
+            archetypal: '📋 Morgan the Product Manager ready',
+          },
+        },
+        persona: {
+          role: 'Product Manager and orchestrator',
+        },
+        commands: [
+          { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show help' },
+          { name: 'create-story', visibility: ['full', 'quick'], description: 'Create new story' },
+          { name: 'status', visibility: ['full', 'quick', 'key'], description: 'Project status' },
+        ],
+      };
+
+      // Dev Agent (non-PM)
+      mockDevAgent = {
+        id: 'dev',
+        name: 'Dex',
+        icon: '👨‍💻',
+        persona_profile: {
+          greeting_levels: {
+            minimal: '👨‍💻 Dev ready',
+            named: '👨‍💻 Dex (Developer) ready',
+            archetypal: '👨‍💻 Dex the Developer ready',
+          },
+        },
+        persona: {
+          role: 'Software Developer',
+        },
+        commands: [
+          { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show help' },
+          { name: 'develop', visibility: ['full', 'quick'], description: 'Start development' },
+          { name: 'test', visibility: ['full'], description: 'Run tests' },
+        ],
+      };
+    });
+
+    describe('loadUserProfile()', () => {
+      test('should return advanced as default when resolveConfig returns no user_profile', () => {
+        mockResolveConfig.mockReturnValueOnce({
+          config: {},
+          warnings: [],
+          legacy: false,
+        });
+
+        const profile = builder.loadUserProfile();
+        expect(profile).toBe('advanced');
+      });
+
+      test('should return advanced when user_profile is missing from config', () => {
+        mockResolveConfig.mockReturnValueOnce({
+          config: { project: { type: 'GREENFIELD' } },
+          warnings: [],
+          legacy: false,
+        });
+
+        const profile = builder.loadUserProfile();
+        expect(profile).toBe('advanced');
+      });
+
+      test('should return advanced when user_profile is invalid', () => {
+        mockResolveConfig.mockReturnValueOnce({
+          config: { user_profile: 'invalid_value' },
+          warnings: [],
+          legacy: false,
+        });
+
+        const profile = builder.loadUserProfile();
+        expect(profile).toBe('advanced');
+      });
+
+      test('should return bob when user_profile is bob', () => {
+        mockResolveConfig.mockReturnValueOnce({
+          config: { user_profile: 'bob' },
+          warnings: [],
+          legacy: false,
+        });
+
+        const profile = builder.loadUserProfile();
+        expect(profile).toBe('bob');
+      });
+
+      test('should return advanced when user_profile is advanced', () => {
+        mockResolveConfig.mockReturnValueOnce({
+          config: { user_profile: 'advanced' },
+          warnings: [],
+          legacy: false,
+        });
+
+        const profile = builder.loadUserProfile();
+        expect(profile).toBe('advanced');
+      });
+
+      test('should return advanced when resolveConfig throws', () => {
+        mockResolveConfig.mockImplementationOnce(() => {
+          throw new Error('Config load failed');
+        });
+
+        const profile = builder.loadUserProfile();
+        expect(profile).toBe('advanced');
+      });
+    });
+
+    describe('filterCommandsByVisibility() with user profile', () => {
+      test('should return commands for PM agent in bob mode (AC1)', () => {
+        const commands = 
builder.filterCommandsByVisibility(mockPmAgent, 'new', 'bob'); + expect(commands.length).toBeGreaterThan(0); + expect(commands.some(c => c.name === 'help')).toBe(true); + }); + + test('should return empty array for non-PM agent in bob mode (AC1)', () => { + const commands = builder.filterCommandsByVisibility(mockDevAgent, 'new', 'bob'); + expect(commands).toEqual([]); + }); + + test('should return commands for any agent in advanced mode (AC2)', () => { + const pmCommands = builder.filterCommandsByVisibility(mockPmAgent, 'new', 'advanced'); + const devCommands = builder.filterCommandsByVisibility(mockDevAgent, 'new', 'advanced'); + + expect(pmCommands.length).toBeGreaterThan(0); + expect(devCommands.length).toBeGreaterThan(0); + }); + + test('should default to advanced when userProfile not provided (AC6)', () => { + // filterCommandsByVisibility without userProfile should behave as advanced + const commands = builder.filterCommandsByVisibility(mockDevAgent, 'new'); + expect(commands.length).toBeGreaterThan(0); + }); + }); + + describe('buildBobModeRedirect()', () => { + test('should return redirect message with agent name', () => { + const redirect = builder.buildBobModeRedirect(mockDevAgent); + + expect(redirect).toContain('Modo Assistido'); + expect(redirect).toContain('@pm'); + expect(redirect).toContain('Bob'); + expect(redirect).toContain('Dex'); + }); + + test('should return redirect message without agent name when agent is null', () => { + const redirect = builder.buildBobModeRedirect(null); + + expect(redirect).toContain('Modo Assistido'); + expect(redirect).toContain('@pm'); + expect(redirect).toContain('Este agente'); + }); + }); + + describe('Full greeting in bob mode', () => { + test('PM agent should show commands in bob mode (AC5)', async () => { + mockResolveConfig.mockReturnValueOnce({ + config: { user_profile: 'bob' }, + warnings: [], + legacy: false, + }); + + const greeting = await builder.buildGreeting(mockPmAgent, {}); + + 
expect(greeting).toContain('Morgan'); + expect(greeting).toContain('help'); + expect(greeting).not.toContain('Modo Assistido'); + }); + + test('Non-PM agent should show redirect message in bob mode (AC4)', async () => { + mockResolveConfig.mockReturnValueOnce({ + config: { user_profile: 'bob' }, + warnings: [], + legacy: false, + }); + + const greeting = await builder.buildGreeting(mockDevAgent, {}); + + expect(greeting).toContain('Dex'); + expect(greeting).toContain('Modo Assistido'); + expect(greeting).toContain('@pm'); + expect(greeting).not.toContain('develop'); // No commands shown + }); + + test('All agents should show normal commands in advanced mode (AC2)', async () => { + mockResolveConfig.mockReturnValue({ + config: { user_profile: 'advanced' }, + warnings: [], + legacy: false, + }); + + const pmGreeting = await builder.buildGreeting(mockPmAgent, {}); + const devGreeting = await builder.buildGreeting(mockDevAgent, {}); + + expect(pmGreeting).toContain('help'); + expect(pmGreeting).not.toContain('Modo Assistido'); + + expect(devGreeting).toContain('help'); + expect(devGreeting).not.toContain('Modo Assistido'); + }); + }); + }); + + describe('ACT-12: Language delegated to Claude Code settings.json', () => { + test('buildSimpleGreeting uses English help prompt (language handled natively by Claude Code)', () => { + const greeting = builder.buildSimpleGreeting(mockAgent); + expect(greeting).toContain('Type `*help`'); + }); + + test('buildFixedLevelGreeting uses English help text', () => { + const greeting = builder.buildFixedLevelGreeting(mockAgent, 'named'); + expect(greeting).toContain('Type `*help`'); + }); + + test('buildPresentation uses English welcome back', () => { + const sectionContext = { + sessionType: 'existing', + }; + + const presentation = builder.buildPresentation(mockAgent, 'existing', '', sectionContext); + expect(presentation).toContain('welcome back'); + }); + + test('buildFooter uses English guide prompt for new sessions', () => { + const 
sectionContext = { + sessionType: 'new', + }; + + const footer = builder.buildFooter(mockAgent, sectionContext); + expect(footer).toContain('Type `*guide`'); + }); + + test('buildFooter uses English help prompt for existing sessions', () => { + const sectionContext = { + sessionType: 'existing', + }; + + const footer = builder.buildFooter(mockAgent, sectionContext); + expect(footer).toContain('Type `*help`'); + expect(footer).toContain('*session-info'); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/workflow-state-manager.test.js +================================================== +```js +'use strict'; + +const fs = require('fs').promises; +const os = require('os'); +const path = require('path'); + +const { + WorkflowStateManager, +} = require('../../.aios-core/development/scripts/workflow-state-manager'); + +describe('WorkflowStateManager runtime-first recommendations', () => { + it('creates state with version metadata', async () => { + const tmp = await fs.mkdtemp(path.join(os.tmpdir(), 'aios-wsm-')); + const manager = new WorkflowStateManager({ basePath: tmp }); + + const workflowData = { + workflow: { + id: 'test-workflow', + name: 'Test Workflow', + sequence: [{ agent: 'dev', creates: 'artifact.md' }], + }, + }; + + const state = await manager.createState(workflowData); + expect(state.state_version).toBe('2.0'); + expect(state.workflow_id).toBe('test-workflow'); + }); + + it('prioritizes blocked over other runtime states', () => { + const manager = new WorkflowStateManager(); + const result = manager.evaluateExecutionState({ + story_status: 'blocked', + qa_status: 'rejected', + ci_status: 'red', + has_uncommitted_changes: true, + }); + + expect(result.state).toBe('blocked'); + }); + + it('maps qa rejection to apply-qa-fixes', () => { + const manager = new WorkflowStateManager(); + const next = manager.getNextActionRecommendation( + { story_status: 'in_progress', qa_status: 'rejected', ci_status: 'green' }, + { 
story: 'docs/stories/example.story.md' }, + ); + + expect(next.state).toBe('qa_rejected'); + expect(next.command).toContain('*apply-qa-fixes'); + expect(next.agent).toBe('@dev'); + }); + + it('maps red ci to run-tests', () => { + const manager = new WorkflowStateManager(); + const next = manager.getNextActionRecommendation({ + story_status: 'in_progress', + qa_status: 'pass', + ci_status: 'red', + has_uncommitted_changes: false, + }); + + expect(next.state).toBe('ci_red'); + expect(next.command).toBe('*run-tests'); + }); + + it('maps in-progress + clean tree to qa review', () => { + const manager = new WorkflowStateManager(); + const next = manager.getNextActionRecommendation( + { + story_status: 'in_progress', + qa_status: 'pass', + ci_status: 'green', + has_uncommitted_changes: false, + }, + { story: 'story.md' }, + ); + + expect(next.state).toBe('ready_for_validation'); + expect(next.command).toContain('*review-build'); + expect(next.agent).toBe('@qa'); + }); + + it('maps completed story to close-story', () => { + const manager = new WorkflowStateManager(); + const next = manager.getNextActionRecommendation( + { + story_status: 'done', + qa_status: 'pass', + ci_status: 'green', + has_uncommitted_changes: false, + }, + { story: 'docs/stories/completed.md' }, + ); + + expect(next.state).toBe('completed'); + expect(next.command).toContain('*close-story'); + expect(next.agent).toBe('@po'); + expect(next.rationale).toBeDefined(); + expect(next.confidence).toBeGreaterThanOrEqual(0.9); + }); + + it('maps in-progress + uncommitted changes to in_development', () => { + const manager = new WorkflowStateManager(); + const next = manager.getNextActionRecommendation({ + story_status: 'in_progress', + qa_status: 'pass', + ci_status: 'green', + has_uncommitted_changes: true, + }); + + expect(next.state).toBe('in_development'); + expect(next.command).toBe('*run-tests'); + expect(next.agent).toBe('@dev'); + }); + + it('falls back to unknown with low confidence when no signals', 
() => { + const manager = new WorkflowStateManager(); + const next = manager.getNextActionRecommendation({}); + + expect(next.state).toBe('unknown'); + expect(next.confidence).toBeLessThanOrEqual(0.5); + }); +}); + +``` + +================================================== +📄 tests/unit/git-config-detector.test.js +================================================== +```js +/** + * Unit Tests for GitConfigDetector + * + * Test Coverage: + * - Cache hit/miss scenarios + * - Cache expiration (TTL) + * - Cache invalidation + * - Timeout protection + * - Git repository detection + * - Branch and remote detection + * - Repository type detection + * - Graceful error handling + */ + +const GitConfigDetector = require('../../.aios-core/infrastructure/scripts/git-config-detector'); +const { execSync } = require('child_process'); + +// Mock execSync for testing +jest.mock('child_process'); + +describe('GitConfigDetector', () => { + let detector; + + beforeEach(() => { + jest.useFakeTimers(); + detector = new GitConfigDetector(5 * 60 * 1000); // 5 minute TTL + jest.clearAllMocks(); + }); + + afterEach(() => { + jest.useRealTimers(); + }); + + describe('Cache Management', () => { + test('should return cached data on cache hit', () => { + // Setup mock git commands + execSync + .mockReturnValueOnce('true\n') // is-inside-work-tree + .mockReturnValueOnce('main\n') // branch + .mockReturnValueOnce('https://github.com/user/repo.git\n'); // remote url + + const firstCall = detector.get(); + expect(firstCall.configured).toBe(true); + + // Second call should use cache (no execSync calls) + const secondCall = detector.get(); + expect(secondCall).toEqual(firstCall); + expect(execSync).toHaveBeenCalledTimes(3); // Only first call + }); + + test('should execute git commands on cache miss', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + const result = detector.get(); + + 
expect(execSync).toHaveBeenCalled(); + expect(result.configured).toBe(true); + }); + + test('should expire cache after TTL', () => { + const shortTTL = 100; // 100ms + detector = new GitConfigDetector(shortTTL); + + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + detector.get(); // First call + + // Wait for cache to expire + jest.advanceTimersByTime(shortTTL + 1); + + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('develop\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + detector.get(); // Second call after expiration + + expect(execSync).toHaveBeenCalledTimes(6); // 3 calls each time + }); + + test('should invalidate cache manually', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + detector.get(); // First call + + detector.invalidate(); + + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('develop\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + detector.get(); // Should re-detect + + expect(execSync).toHaveBeenCalledTimes(6); // 3 calls each time + }); + + test('should report cache age correctly', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + detector.get(); + + const age = detector.getCacheAge(); + expect(age).toBeGreaterThanOrEqual(0); + expect(age).toBeLessThan(100); // Should be very recent + }); + + test('should detect cache expiring soon', () => { + const shortTTL = 1000; // 1 second + detector = new GitConfigDetector(shortTTL); + + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + detector.get(); + + jest.advanceTimersByTime(950); // 950ms elapsed (50ms remaining) + + const expiringSoon = 
detector.isCacheExpiringSoon(); + expect(expiringSoon).toBe(true); + }); + }); + + describe('Git Repository Detection', () => { + test('should detect configured git repository', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + const result = detector.detect(); + + expect(result.configured).toBe(true); + expect(result.branch).toBe('main'); + expect(result.type).toBe('github'); + }); + + test('should detect unconfigured repository (no git)', () => { + // Create a fresh detector to avoid cache pollution from other tests + const freshDetector = new GitConfigDetector(5 * 60 * 1000); + execSync.mockImplementation(() => { + throw new Error('not a git repository'); + }); + + const result = freshDetector.detect(); + + expect(result.configured).toBe(false); + expect(result.branch).toBeNull(); + expect(result.type).toBeNull(); + }); + + test('should handle timeout gracefully', () => { + execSync.mockImplementation(() => { + throw new Error('Command timeout'); + }); + + const result = detector.detect(); + + expect(result.configured).toBe(false); + }); + }); + + describe('Repository Type Detection', () => { + test('should detect GitHub repository', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + const result = detector.detect(); + + expect(result.type).toBe('github'); + }); + + test('should detect GitLab repository', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://gitlab.com/user/repo.git\n'); + + const result = detector.detect(); + + expect(result.type).toBe('gitlab'); + }); + + test('should detect Bitbucket repository', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://bitbucket.org/user/repo.git\n'); + + const result = detector.detect(); + 
+ expect(result.type).toBe('bitbucket'); + }); + + test('should detect other repository type', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockReturnValueOnce('https://custom-git-server.com/repo.git\n'); + + const result = detector.detect(); + + expect(result.type).toBe('other'); + }); + + test('should handle missing remote URL', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('main\n') + .mockImplementationOnce(() => { + throw new Error('No remote'); + }); + + const result = detector.detect(); + + expect(result.type).toBeNull(); + }); + }); + + describe('Branch Detection', () => { + test('should detect current branch', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('feature-123\n') + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + const result = detector.detect(); + + expect(result.branch).toBe('feature-123'); + }); + + test('should handle detached HEAD state', () => { + execSync + .mockReturnValueOnce('true\n') + .mockReturnValueOnce('\n') // Empty branch name + .mockReturnValueOnce('https://github.com/user/repo.git\n'); + + const result = detector.detect(); + + expect(result.branch).toBeNull(); + }); + }); + + describe('Detailed Information', () => { + test('should get detailed git information', () => { + execSync + .mockReturnValueOnce('true\n') // is-inside-work-tree + .mockReturnValueOnce('main\n') // branch + .mockReturnValueOnce('https://github.com/user/repo.git\n') // remote + .mockReturnValueOnce('John Doe\n') // user.name + .mockReturnValueOnce('john@example.com\n') // user.email + .mockReturnValueOnce('https://github.com/user/repo.git\n') // remote (again) + .mockReturnValueOnce('abc123def456\n') // last commit + .mockReturnValueOnce('M file.txt\n'); // status --porcelain + + const result = detector.getDetailed(); + + expect(result.userName).toBe('John Doe'); + expect(result.userEmail).toBe('john@example.com'); + 
expect(result.lastCommit).toBe('abc123def456'); + expect(result.hasUncommittedChanges).toBe(true); + }); + + test('should handle errors in detailed detection', () => { + execSync.mockImplementation(() => { + throw new Error('git error'); + }); + + const result = detector.getDetailed(); + + expect(result.configured).toBe(false); + }); + }); + + describe('Error Handling', () => { + test('should gracefully handle all git errors', () => { + execSync.mockImplementation(() => { + throw new Error('git command failed'); + }); + + expect(() => { + detector.detect(); + }).not.toThrow(); + + const result = detector.detect(); + expect(result.configured).toBe(false); + }); + + test('should cache error results', () => { + execSync.mockImplementation(() => { + throw new Error('git error'); + }); + + const firstCall = detector.get(); + const secondCall = detector.get(); + + expect(firstCall).toEqual(secondCall); + expect(execSync).toHaveBeenCalledTimes(1); // Only first call + }); + }); +}); + +``` + +================================================== +📄 tests/unit/tool-validation-helper.test.js +================================================== +```js +const ToolValidationHelper = require('../../common/utils/tool-validation-helper'); + +describe('ToolValidationHelper', () => { + describe('Constructor and Validator Management', () => { + test('should initialize with empty validators array', () => { + const helper = new ToolValidationHelper([]); + expect(helper.listValidators()).toEqual([]); + expect(helper.getStats().count).toBe(0); + }); + + test('should load validators from constructor', () => { + const validators = [ + { + validates: 'create_item', + language: 'javascript', + function: ` + (function() { + return { valid: true, errors: [] }; + })(); + `, + }, + ]; + + const helper = new ToolValidationHelper(validators); + expect(helper.hasValidator('create_item')).toBe(true); + expect(helper.listValidators()).toContain('create_item'); + }); + + test('should skip invalid 
validators during construction', () => { + const validators = [ + { validates: 'valid', function: 'function() {}' }, + { validates: 'no-function' }, // Missing function + { function: 'function() {}' }, // Missing validates + ]; + + const helper = new ToolValidationHelper(validators); + expect(helper.listValidators()).toEqual(['valid']); + }); + + test('should add validator dynamically', () => { + const helper = new ToolValidationHelper([]); + + helper.addValidator({ + validates: 'dynamic-command', + function: '(function() { return { valid: true, errors: [] }; })();', + }); + + expect(helper.hasValidator('dynamic-command')).toBe(true); + }); + + test('should throw error when adding duplicate validator', () => { + const helper = new ToolValidationHelper([ + { validates: 'existing', function: 'function() {}' }, + ]); + + expect(() => { + helper.addValidator({ validates: 'existing', function: 'function() {}' }); + }).toThrow(/already exists/); + }); + + test('should replace existing validator', () => { + const helper = new ToolValidationHelper([ + { validates: 'replaceable', function: '(function() { return { valid: true, errors: [] }; })();' }, + ]); + + helper.replaceValidator({ + validates: 'replaceable', + function: '(function() { return { valid: false, errors: ["replaced"] }; })();', + }); + + expect(helper.hasValidator('replaceable')).toBe(true); + }); + + test('should remove validator', () => { + const helper = new ToolValidationHelper([ + { validates: 'removable', function: 'function() {}' }, + ]); + + const removed = helper.removeValidator('removable'); + expect(removed).toBe(true); + expect(helper.hasValidator('removable')).toBe(false); + }); + + test('should clear all validators', () => { + const helper = new ToolValidationHelper([ + { validates: 'validator1', function: 'function() {}' }, + { validates: 'validator2', function: 'function() {}' }, + ]); + + helper.clearValidators(); + expect(helper.getStats().count).toBe(0); + }); + }); + + describe('Validation 
Success/Failure Cases', () => { + test('should pass validation with valid args', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'create_item', + function: ` + (function() { + if (!args.args.name || !args.args.type) { + return { valid: false, errors: ['Missing required fields'] }; + } + return { valid: true, errors: [] }; + })(); + `, + }, + ]); + + const result = await helper.validate('create_item', { name: 'Test', type: 'item' }); + expect(result.valid).toBe(true); + expect(result.errors).toEqual([]); + }); + + test('should fail validation with invalid args', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'create_item', + function: ` + (function() { + if (!args.args.name || !args.args.type) { + return { valid: false, errors: ['Missing required fields: name, type'] }; + } + return { valid: true, errors: [] }; + })(); + `, + }, + ]); + + const result = await helper.validate('create_item', { name: 'Test' }); + expect(result.valid).toBe(false); + expect(result.errors).toContain('Missing required fields: name, type'); + }); + + test('should handle complex validation logic', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'update_item', + function: ` + (function() { + const errors = []; + const data = args.args; + + if (!data.id) errors.push('ID is required'); + if (data.priority && (data.priority < 1 || data.priority > 4)) { + errors.push('Priority must be between 1 and 4'); + } + if (data.tags && !Array.isArray(data.tags)) { + errors.push('Tags must be an array'); + } + + return { valid: errors.length === 0, errors }; + })(); + `, + }, + ]); + + const result = await helper.validate('update_item', { + id: '123', + priority: 5, + tags: 'not-array', + }); + + expect(result.valid).toBe(false); + expect(result.errors).toHaveLength(2); + expect(result.errors).toContain('Priority must be between 1 and 4'); + expect(result.errors).toContain('Tags must be an array'); + }); + + test('should 
return standardized format from non-standard results', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'bad-format', + function: '(function() { return { valid: "yes" }; })();', // Invalid format + }, + ]); + + const result = await helper.validate('bad-format', {}); + expect(result.valid).toBe(true); // "yes" is truthy + expect(result.errors).toEqual([]); // Standardized to empty array + }); + }); + + describe('No-Validator Pass-Through', () => { + test('should auto-pass when no validator exists', async () => { + const helper = new ToolValidationHelper([]); + + const result = await helper.validate('unknown_command', { test: 'data' }); + expect(result.valid).toBe(true); + expect(result.errors).toEqual([]); + expect(result._note).toContain('No validator configured'); + }); + + test('should only validate commands with validators', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'validated_command', + function: '(function() { return { valid: false, errors: ["fail"] }; })();', + }, + ]); + + const validatedResult = await helper.validate('validated_command', {}); + expect(validatedResult.valid).toBe(false); + + const unvalidatedResult = await helper.validate('unvalidated_command', {}); + expect(unvalidatedResult.valid).toBe(true); + }); + }); + + describe('Performance Target (<50ms)', () => { + test('should complete simple validation in <50ms', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'fast_command', + function: ` + (function() { + return { valid: true, errors: [] }; + })(); + `, + }, + ]); + + const result = await helper.validate('fast_command', {}); + expect(result._duration).toBeDefined(); + expect(result._duration).toBeLessThan(50); + }); + + test('should warn when validation exceeds 50ms', async () => { + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + + const helper = new ToolValidationHelper([ + { + validates: 'slow_command', + function: ` + (function() { 
+ const start = Date.now(); + while (Date.now() - start < 100) { + // Busy wait for 100ms + } + return { valid: true, errors: [] }; + })(); + `, + }, + ]); + + await helper.validate('slow_command', {}); + expect(consoleSpy).toHaveBeenCalled(); + const warnCall = consoleSpy.mock.calls[0][0]; + expect(warnCall).toContain('took'); + expect(warnCall).toContain('ms'); + expect(warnCall).toContain('target: <50ms'); + + consoleSpy.mockRestore(); + }); + + test('should include duration in result', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'timed_command', + function: '(function() { return { valid: true, errors: [] }; })();', + }, + ]); + + const result = await helper.validate('timed_command', {}); + expect(result._duration).toBeDefined(); + expect(typeof result._duration).toBe('number'); + expect(result._duration).toBeGreaterThanOrEqual(0); + }); + }); + + describe('Timeout Enforcement (500ms)', () => { + test('should timeout validator exceeding 500ms limit', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'timeout_test', + function: ` + (function() { + const start = Date.now(); + while (Date.now() - start < 1000) { + // Busy wait for 1 second + } + return { valid: true, errors: [] }; + })(); + `, + }, + ]); + + const result = await helper.validate('timeout_test', {}); + expect(result.valid).toBe(false); + expect(result.errors).toHaveLength(1); + expect(result.errors[0]).toContain('exceeded 500ms timeout'); + }, 2000); // Test timeout higher than validator timeout + + test('should complete validator within timeout', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'quick_validator', + function: ` + (function() { + let sum = 0; + for (let i = 0; i < 1000; i++) { + sum += i; + } + return { valid: true, errors: [] }; + })(); + `, + }, + ]); + + const result = await helper.validate('quick_validator', {}); + expect(result.valid).toBe(true); + }); + }); + + describe('Error Handling and 
Formatting', () => { + test('should handle validator with no function', async () => { + const helper = new ToolValidationHelper([]); + helper.validators.set('no-function', { validates: 'no-function' }); + + const result = await helper.validate('no-function', {}); + expect(result.valid).toBe(false); + expect(result.errors[0]).toContain('has no function defined'); + }); + + test('should handle syntax errors in validator function', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'syntax-error', + function: 'function invalid( { // Invalid syntax', + }, + ]); + + const result = await helper.validate('syntax-error', {}); + expect(result.valid).toBe(false); + expect(result.errors[0]).toContain('Validation error'); + }); + + test('should handle runtime errors in validator', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'runtime-error', + function: ` + (function() { + throw new Error("Intentional error"); + })(); + `, + }, + ]); + + const result = await helper.validate('runtime-error', {}); + expect(result.valid).toBe(false); + expect(result.errors[0]).toContain('Validation error'); + }); + + test('should handle validator returning non-object', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'returns-string', + function: '(function() { return "not an object"; })();', + }, + ]); + + const result = await helper.validate('returns-string', {}); + expect(result.valid).toBe(false); + expect(result.errors[0]).toContain('returned invalid format'); + }); + + test('should handle validator returning null', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'returns-null', + function: '(function() { return null; })();', + }, + ]); + + const result = await helper.validate('returns-null', {}); + expect(result.valid).toBe(false); + expect(result.errors[0]).toContain('returned invalid format'); + }); + + test('should format errors as array', async () => { + const helper = new 
ToolValidationHelper([ + { + validates: 'error-format', + function: '(function() { return { valid: false, errors: "single error" }; })();', + }, + ]); + + const result = await helper.validate('error-format', {}); + expect(result.errors).toEqual([]); // Non-array errors become empty array + }); + }); + + describe('Batch Validation', () => { + test('should validate multiple commands at once', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'cmd1', + function: '(function() { return { valid: true, errors: [] }; })();', + }, + { + validates: 'cmd2', + function: '(function() { return { valid: false, errors: ["cmd2 failed"] }; })();', + }, + ]); + + const results = await helper.validateBatch([ + { command: 'cmd1', args: {} }, + { command: 'cmd2', args: {} }, + { command: 'cmd3', args: {} }, // No validator + ]); + + expect(results).toHaveLength(3); + expect(results[0].result.valid).toBe(true); + expect(results[1].result.valid).toBe(false); + expect(results[2].result.valid).toBe(true); // No validator = pass + }); + + test('should handle empty batch validation', async () => { + const helper = new ToolValidationHelper([]); + + const results = await helper.validateBatch([]); + expect(results).toEqual([]); + }); + }); + + describe('Declarative Validation', () => { + test('should validate required fields declaratively', () => { + const helper = new ToolValidationHelper([ + { + validates: 'create_user', + checks: [ + { required_fields: ['name', 'email'] }, + ], + function: 'function() {}', + }, + ]); + + const result = helper.validateDeclarative('create_user', { name: 'John' }); + expect(result.valid).toBe(false); + expect(result.errors).toContain("Required field 'email' is missing"); + }); + + test('should pass when all required fields present', () => { + const helper = new ToolValidationHelper([ + { + validates: 'create_user', + checks: [ + { required_fields: ['name', 'email'] }, + ], + function: 'function() {}', + }, + ]); + + const result = 
helper.validateDeclarative('create_user', { name: 'John', email: 'john@example.com' }); + expect(result.valid).toBe(true); + expect(result.errors).toEqual([]); + }); + + test('should return pass when no checks defined', () => { + const helper = new ToolValidationHelper([ + { + validates: 'no_checks', + function: 'function() {}', + }, + ]); + + const result = helper.validateDeclarative('no_checks', {}); + expect(result.valid).toBe(true); + expect(result._note).toContain('No declarative checks'); + }); + }); + + describe('Validator Metadata', () => { + test('should get validator info', () => { + const helper = new ToolValidationHelper([ + { + id: 'val-1', + validates: 'test_command', + language: 'javascript', + checks: [{ required_fields: ['id'] }], + function: 'function() {}', + }, + ]); + + const info = helper.getValidatorInfo('test_command'); + expect(info).toEqual({ + id: 'val-1', + validates: 'test_command', + language: 'javascript', + checks: [{ required_fields: ['id'] }], + hasFunction: true, + }); + }); + + test('should return null for non-existent validator info', () => { + const helper = new ToolValidationHelper([]); + const info = helper.getValidatorInfo('nonexistent'); + expect(info).toBeNull(); + }); + + test('should provide default language', () => { + const helper = new ToolValidationHelper([ + { + validates: 'minimal', + function: 'function() {}', + }, + ]); + + const info = helper.getValidatorInfo('minimal'); + expect(info.language).toBe('javascript'); + }); + + test('should get statistics', () => { + const helper = new ToolValidationHelper([ + { validates: 'validator1', function: 'function() {}' }, + { validates: 'validator2', function: 'function() {}' }, + { validates: 'validator3', function: 'function() {}' }, + ]); + + const stats = helper.getStats(); + expect(stats.count).toBe(3); + expect(stats.validators).toHaveLength(3); + expect(stats.validators).toContain('validator1'); + expect(stats.validators).toContain('validator2'); + 
expect(stats.validators).toContain('validator3'); + }); + }); + + describe('Memory Management', () => { + test('should dispose isolate after successful validation', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'dispose-test', + function: '(function() { return { valid: true, errors: [] }; })();', + }, + ]); + + const result = await helper.validate('dispose-test', {}); + expect(result.valid).toBe(true); + // No way to directly test disposal, but it should not throw + }); + + test('should dispose isolate after failed validation', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'fail-dispose', + function: '(function() { throw new Error("test error"); })();', + }, + ]); + + const result = await helper.validate('fail-dispose', {}); + expect(result.valid).toBe(false); + // Isolate should still be disposed even on error + }); + + test('should dispose isolate after timeout', async () => { + const helper = new ToolValidationHelper([ + { + validates: 'timeout-dispose', + function: '(function() { while(true) {} })();', + }, + ]); + + const result = await helper.validate('timeout-dispose', {}); + expect(result.valid).toBe(false); + // Isolate should be disposed even on timeout + }, 2000); + }); +}); + +``` + +================================================== +📄 tests/unit/execution-profile-resolver.test.js +================================================== +```js +'use strict'; + +const { + resolveExecutionProfile, +} = require('../../.aios-core/core/orchestration/execution-profile-resolver'); + +describe('execution-profile-resolver', () => { + it('enforces safe profile for production context', () => { + const result = resolveExecutionProfile({ + context: 'production', + yolo: true, + }); + + expect(result.profile).toBe('safe'); + expect(result.source).toBe('context'); + expect(result.policy.require_confirmation).toBe(true); + }); + + it('enforces balanced profile for migration context', () => { + const result = 
resolveExecutionProfile({ + context: 'migration', + yolo: true, + }); + + expect(result.profile).toBe('balanced'); + expect(result.source).toBe('context'); + expect(result.policy.max_parallel_changes).toBe(3); + }); + + it('uses explicit profile when provided', () => { + const result = resolveExecutionProfile({ + explicitProfile: 'aggressive', + context: 'production', + yolo: false, + }); + + expect(result.profile).toBe('aggressive'); + expect(result.source).toBe('explicit'); + }); + + it('defaults to aggressive in yolo development context', () => { + const result = resolveExecutionProfile({ + context: 'development', + yolo: true, + }); + + expect(result.profile).toBe('aggressive'); + expect(result.source).toBe('yolo'); + }); + + it('defaults to balanced when nothing is specified', () => { + const result = resolveExecutionProfile({}); + + expect(result.profile).toBe('balanced'); + expect(result.source).toBe('default'); + }); +}); + +``` + +================================================== +📄 tests/unit/security-utils.test.js +================================================== +```js +/** + * Unit Tests for Security Utilities + * + * Test Coverage: + * - Path validation and traversal prevention + * - Input sanitization + * - JSON validation + * - Rate limiting + * - Safe path construction + * + * @see .aios-core/core/utils/security-utils.js + */ + +const { + validatePath, + sanitizeInput, + validateJSON, + RateLimiter, + safePath, + isSafeString, + getObjectDepth, +} = require('../../.aios-core/core/utils/security-utils'); + +describe('security-utils', () => { + describe('validatePath', () => { + test('should accept valid relative paths', () => { + const result = validatePath('src/components/Button.js'); + + expect(result.valid).toBe(true); + expect(result.errors).toHaveLength(0); + }); + + test('should reject path traversal with ../', () => { + const result = validatePath('../../../etc/passwd'); + + expect(result.valid).toBe(false); + 
expect(result.errors).toContain('Path traversal detected: ".." is not allowed'); + }); + + test('should reject path traversal with ..\\', () => { + const result = validatePath('..\\..\\Windows\\System32'); + + expect(result.valid).toBe(false); + expect(result.errors).toContain('Path traversal detected: ".." is not allowed'); + }); + + test('should reject null bytes in path', () => { + const result = validatePath('file.txt\0.exe'); + + expect(result.valid).toBe(false); + expect(result.errors).toContain('Null byte detected in path'); + }); + + test('should reject empty string', () => { + const result = validatePath(''); + + expect(result.valid).toBe(false); + expect(result.errors).toContain('Path must be a non-empty string'); + }); + + test('should reject null/undefined', () => { + expect(validatePath(null).valid).toBe(false); + expect(validatePath(undefined).valid).toBe(false); + }); + + test('should reject absolute paths by default', () => { + const result = validatePath('/etc/passwd'); + + expect(result.valid).toBe(false); + expect(result.errors).toContain('Absolute paths are not allowed'); + }); + + test('should allow absolute paths when option is set', () => { + const result = validatePath('/home/user/file.txt', { allowAbsolute: true }); + + expect(result.valid).toBe(true); + }); + + test('should detect path escaping base directory', () => { + const result = validatePath('subdir/../../../outside.txt', { + basePath: '/safe/directory', + }); + + expect(result.valid).toBe(false); + }); + + test('should normalize paths correctly', () => { + // Note: paths with '..' 
are rejected as path traversal + const result = validatePath('src//components/./Button.js'); + + expect(result.valid).toBe(true); + expect(result.normalized).toBeDefined(); + }); + }); + + describe('sanitizeInput', () => { + test('should remove null bytes from all input types', () => { + const result = sanitizeInput('hello\0world', 'general'); + + expect(result).toBe('helloworld'); + }); + + test('should sanitize filename - allow only safe characters', () => { + const result = sanitizeInput('my<file>name.txt', 'filename'); + + expect(result).toBe('my_file_name.txt'); + }); + + test('should sanitize filename - prevent hidden files', () => { + const result = sanitizeInput('.hidden_file', 'filename'); + + expect(result).toBe('hidden_file'); + }); + + test('should sanitize identifier - allow alphanumeric and dash/underscore', () => { + const result = sanitizeInput('user@email.com', 'identifier'); + + expect(result).toBe('user_email_com'); + }); + + test('should sanitize shell - remove dangerous characters', () => { + const result = sanitizeInput('echo "test"; rm -rf /', 'shell'); + + expect(result).not.toContain(';'); + expect(result).not.toContain('"'); + }); + + test('should sanitize html - escape HTML entities', () => { + const result = sanitizeInput('<script>alert("xss")</script>', 'html'); + + expect(result).toContain('&lt;'); + expect(result).toContain('&gt;'); + expect(result).not.toContain('<'); + expect(result).not.toContain('>'); + }); + + test('should remove control characters in general mode', () => { + const result = sanitizeInput('hello\x00\x01\x02world', 'general'); + + expect(result).toBe('helloworld'); + }); + + test('should return non-string values unchanged', () => { + expect(sanitizeInput(123, 'general')).toBe(123); + expect(sanitizeInput(null, 'general')).toBe(null); + expect(sanitizeInput(undefined, 'general')).toBe(undefined); + }); + }); + + describe('validateJSON', () => { + test('should parse valid JSON', () => { + const result = validateJSON('{"name": "test", "value": 42}'); + +
expect(result.valid).toBe(true); + expect(result.data).toEqual({ name: 'test', value: 42 }); + }); + + test('should reject invalid JSON', () => { + const result = validateJSON('{invalid json}'); + + expect(result.valid).toBe(false); + expect(result.error).toContain('Invalid JSON'); + }); + + test('should reject empty/null input', () => { + expect(validateJSON('').valid).toBe(false); + expect(validateJSON(null).valid).toBe(false); + }); + + test('should reject JSON exceeding max size', () => { + const largeJSON = JSON.stringify({ data: 'x'.repeat(2000000) }); + const result = validateJSON(largeJSON, { maxSize: 1000000 }); + + expect(result.valid).toBe(false); + expect(result.error).toContain('exceeds maximum size'); + }); + + test('should reject deeply nested JSON', () => { + let nested = { value: 'deep' }; + for (let i = 0; i < 15; i++) { + nested = { nested }; + } + const result = validateJSON(JSON.stringify(nested), { maxDepth: 10 }); + + expect(result.valid).toBe(false); + expect(result.error).toContain('nesting depth'); + }); + + test('should accept JSON within nesting limit', () => { + const nested = { a: { b: { c: { d: 'value' } } } }; + const result = validateJSON(JSON.stringify(nested), { maxDepth: 10 }); + + expect(result.valid).toBe(true); + }); + }); + + describe('RateLimiter', () => { + test('should allow requests within limit', () => { + const limiter = new RateLimiter({ maxRequests: 5, windowMs: 1000 }); + + for (let i = 0; i < 5; i++) { + const result = limiter.check('user1'); + expect(result.allowed).toBe(true); + } + }); + + test('should block requests exceeding limit', () => { + const limiter = new RateLimiter({ maxRequests: 3, windowMs: 1000 }); + + // Make 3 allowed requests + for (let i = 0; i < 3; i++) { + limiter.check('user1'); + } + + // 4th request should be blocked + const result = limiter.check('user1'); + expect(result.allowed).toBe(false); + expect(result.remaining).toBe(0); + }); + + test('should track different keys independently', 
() => { + const limiter = new RateLimiter({ maxRequests: 2, windowMs: 1000 }); + + // User1 makes 2 requests + limiter.check('user1'); + limiter.check('user1'); + + // User2 should still be allowed + const result = limiter.check('user2'); + expect(result.allowed).toBe(true); + }); + + test('should return remaining count', () => { + const limiter = new RateLimiter({ maxRequests: 5, windowMs: 1000 }); + + limiter.check('user1'); + limiter.check('user1'); + const result = limiter.check('user1'); + + // Remaining is calculated before recording the current request + // After 2 previous checks, history.length = 2, remaining = 5 - 2 = 3 + expect(result.remaining).toBe(3); + }); + + test('should reset specific key', () => { + const limiter = new RateLimiter({ maxRequests: 2, windowMs: 1000 }); + + limiter.check('user1'); + limiter.check('user1'); + + // Should be blocked + expect(limiter.check('user1').allowed).toBe(false); + + // Reset + limiter.reset('user1'); + + // Should be allowed again + expect(limiter.check('user1').allowed).toBe(true); + }); + + test('should clear all data', () => { + const limiter = new RateLimiter({ maxRequests: 5, windowMs: 1000 }); + + limiter.check('user1'); + limiter.check('user2'); + + limiter.clear(); + + // After clear, history is empty for each key + // remaining = maxRequests - 0 = 5 (calculated before recording) + expect(limiter.check('user1').remaining).toBe(5); + expect(limiter.check('user2').remaining).toBe(5); + }); + }); + + describe('safePath', () => { + test('should return safe path within base directory', () => { + const result = safePath('/home/user', 'documents', 'file.txt'); + + expect(result).not.toBeNull(); + expect(result).toContain('documents'); + expect(result).toContain('file.txt'); + }); + + test('should return null for path traversal attempts', () => { + const result = safePath('/home/user', '..', '..', 'etc', 'passwd'); + + expect(result).toBeNull(); + }); + + test('should handle nested directories', () => { + const 
result = safePath('/base', 'level1', 'level2', 'file.txt'); + + expect(result).not.toBeNull(); + }); + }); + + describe('isSafeString', () => { + test('should return true for safe strings', () => { + expect(isSafeString('hello world')).toBe(true); + expect(isSafeString('file-name_123.txt')).toBe(true); + }); + + test('should return false for path traversal', () => { + expect(isSafeString('../secret')).toBe(false); + }); + + test('should return false for template injection', () => { + expect(isSafeString('${process.env.SECRET}')).toBe(false); + }); + + test('should return false for null bytes', () => { + expect(isSafeString('file\0.txt')).toBe(false); + }); + + test('should return false for non-strings', () => { + expect(isSafeString(123)).toBe(false); + expect(isSafeString(null)).toBe(false); + expect(isSafeString({})).toBe(false); + }); + }); + + describe('getObjectDepth', () => { + test('should return 0 for primitives', () => { + expect(getObjectDepth('string')).toBe(0); + expect(getObjectDepth(123)).toBe(0); + expect(getObjectDepth(null)).toBe(0); + }); + + test('should return 0 for flat object', () => { + expect(getObjectDepth({ a: 1, b: 2 })).toBe(0); + }); + + test('should return correct depth for nested objects', () => { + expect(getObjectDepth({ a: { b: 1 } })).toBe(1); + expect(getObjectDepth({ a: { b: { c: 1 } } })).toBe(2); + }); + + test('should handle arrays', () => { + expect(getObjectDepth([1, 2, 3])).toBe(0); + expect(getObjectDepth([{ a: 1 }])).toBe(1); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/list-cli.test.js +================================================== +```js +/** + * List CLI Unit Tests + * + * Tests for the worker list command functionality. 
+ * + * @story 2.8-2.9 - Discovery CLI Info & List + */ + +const path = require('path'); + +// Test modules +const { formatTree, formatTreeCollapsed, groupWorkers } = require('../../.aios-core/cli/commands/workers/formatters/list-tree'); +const { formatTable, formatJSON, formatYAML, formatList, formatCount, truncate } = require('../../.aios-core/cli/commands/workers/formatters/list-table'); +const { paginate, formatPaginationInfo, formatPaginationHint } = require('../../.aios-core/cli/commands/workers/utils/pagination'); + +// Mock workers for testing +const mockWorkers = [ + { + id: 'json-csv-transformer', + name: 'JSON to CSV Transformer', + category: 'data', + subcategory: 'transformation', + tags: ['etl', 'json', 'csv'], + path: '.aios-core/tasks/json-csv-transformer.md', + }, + { + id: 'csv-json-transformer', + name: 'CSV to JSON Transformer', + category: 'data', + subcategory: 'transformation', + tags: ['etl', 'json', 'csv'], + path: '.aios-core/tasks/csv-json-transformer.md', + }, + { + id: 'json-validator', + name: 'JSON Schema Validator', + category: 'data', + subcategory: 'validation', + tags: ['validation', 'schema', 'json'], + path: '.aios-core/tasks/json-validator.md', + }, + { + id: 'unit-test-runner', + name: 'Unit Test Runner', + category: 'testing', + subcategory: 'unit', + tags: ['testing', 'unit', 'jest'], + path: '.aios-core/tasks/unit-test-runner.md', + }, + { + id: 'api-generator', + name: 'REST API Generator', + category: 'code', + subcategory: 'generation', + tags: ['api', 'rest', 'openapi'], + path: '.aios-core/tasks/api-generator.md', + }, +]; + +describe('Group Workers', () => { + test('groups workers by category', () => { + const groups = groupWorkers(mockWorkers); + expect(groups).toHaveProperty('data'); + expect(groups).toHaveProperty('testing'); + expect(groups).toHaveProperty('code'); + }); + + test('counts workers per category', () => { + const groups = groupWorkers(mockWorkers); + expect(groups.data.count).toBe(3); + 
expect(groups.testing.count).toBe(1); + expect(groups.code.count).toBe(1); + }); + + test('groups by subcategory within category', () => { + const groups = groupWorkers(mockWorkers); + expect(groups.data.subcategories).toHaveProperty('transformation'); + expect(groups.data.subcategories).toHaveProperty('validation'); + expect(groups.data.subcategories.transformation.length).toBe(2); + expect(groups.data.subcategories.validation.length).toBe(1); + }); + + test('handles workers without subcategory', () => { + const workerNoSub = [{ id: 'test', name: 'Test', category: 'other' }]; + const groups = groupWorkers(workerNoSub); + expect(groups.other.subcategories).toHaveProperty('general'); + }); + }); + + describe('Tree Formatter', () => { + test('formatTree includes total count', () => { + const output = formatTree(mockWorkers, {}); + expect(output).toContain('5 workers available'); + }); + + test('formatTree includes category headers', () => { + const output = formatTree(mockWorkers, {}); + expect(output).toContain('DATA'); + expect(output).toContain('TESTING'); + expect(output).toContain('CODE'); + }); + + test('formatTree includes subcategory headers', () => { + const output = formatTree(mockWorkers, {}); + expect(output).toContain('Transformation'); + expect(output).toContain('Validation'); + }); + + test('formatTree includes worker IDs', () => { + const output = formatTree(mockWorkers, { maxPerSubcategory: 10 }); + expect(output).toContain('json-csv-transformer'); + expect(output).toContain('json-validator'); + }); + + test('formatTree includes usage hints', () => { + const output = formatTree(mockWorkers, {}); + expect(output).toContain('aios workers info <id>'); + expect(output).toContain('aios workers search <query>'); + }); + + test('formatTree shows verbose debug when enabled', () => { + const output = formatTree(mockWorkers, { verbose: true }); + expect(output).toContain('[Debug Info]'); + expect(output).toContain('Total workers: 5'); + }); + + test('formatTreeCollapsed 
hides individual workers', () => { + const output = formatTreeCollapsed(mockWorkers, {}); + expect(output).toContain('DATA'); + expect(output).not.toContain('json-csv-transformer'); + }); + + test('formatTree handles empty array', () => { + const output = formatTree([], {}); + expect(output).toContain('No workers found'); + }); +}); + +describe('Table Formatter', () => { + test('formatTable includes header row', () => { + const output = formatTable(mockWorkers, {}); + expect(output).toContain('#'); + expect(output).toContain('ID'); + expect(output).toContain('NAME'); + expect(output).toContain('CATEGORY'); + }); + + test('formatTable includes worker data', () => { + const output = formatTable(mockWorkers, {}); + expect(output).toContain('json-csv-transformer'); + expect(output).toContain('JSON to CSV Transformer'); + expect(output).toContain('data'); + }); + + test('formatTable handles pagination info', () => { + const pagination = { + page: 2, + limit: 10, + totalItems: 25, + totalPages: 3, + startIndex: 11, + endIndex: 20, + }; + const output = formatTable(mockWorkers, { pagination }); + expect(output).toContain('11-20 of 25'); + }); + + test('formatTable handles empty array', () => { + const output = formatTable([], {}); + expect(output).toContain('No workers found'); + }); +}); + +describe('JSON Formatter', () => { + test('formatJSON returns valid JSON', () => { + const output = formatJSON(mockWorkers, {}); + expect(() => JSON.parse(output)).not.toThrow(); + }); + + test('formatJSON includes all workers', () => { + const output = formatJSON(mockWorkers, {}); + const parsed = JSON.parse(output); + expect(parsed.length).toBe(5); + }); + + test('formatJSON includes worker properties', () => { + const output = formatJSON(mockWorkers, {}); + const parsed = JSON.parse(output); + expect(parsed[0].id).toBe('json-csv-transformer'); + expect(parsed[0].name).toBe('JSON to CSV Transformer'); + expect(parsed[0].category).toBe('data'); + 
expect(parsed[0].subcategory).toBe('transformation'); + expect(parsed[0].tags).toEqual(['etl', 'json', 'csv']); + }); + + test('formatJSON includes pagination when provided', () => { + const pagination = { + page: 1, + limit: 10, + totalItems: 50, + totalPages: 5, + }; + const output = formatJSON(mockWorkers, { pagination }); + const parsed = JSON.parse(output); + expect(parsed).toHaveProperty('data'); + expect(parsed).toHaveProperty('pagination'); + expect(parsed.pagination.totalItems).toBe(50); + }); +}); + +describe('YAML Formatter', () => { + test('formatYAML returns valid YAML', () => { + const output = formatYAML(mockWorkers, {}); + expect(output).toContain('- id: json-csv-transformer'); + expect(output).toContain(' name: JSON to CSV Transformer'); + }); + + test('formatYAML includes all workers', () => { + const output = formatYAML(mockWorkers, {}); + expect(output).toContain('json-csv-transformer'); + expect(output).toContain('csv-json-transformer'); + expect(output).toContain('json-validator'); + }); +}); + +describe('Count Formatter', () => { + test('formatCount shows total count', () => { + const categories = { + data: { count: 3, subcategories: ['transformation', 'validation'] }, + testing: { count: 1, subcategories: ['unit'] }, + }; + const output = formatCount(categories, 4, {}); + expect(output).toContain('Total: 4 workers'); + }); + + test('formatCount shows category counts', () => { + const categories = { + data: { count: 3, subcategories: ['transformation', 'validation'] }, + testing: { count: 1, subcategories: ['unit'] }, + }; + const output = formatCount(categories, 4, {}); + expect(output).toContain('DATA'); + expect(output).toContain('3 workers'); + expect(output).toContain('TESTING'); + expect(output).toContain('1 workers'); + }); + + test('formatCount shows subcategories in verbose mode', () => { + const categories = { + data: { count: 3, subcategories: ['transformation', 'validation'] }, + }; + const output = formatCount(categories, 3, { 
verbose: true }); + expect(output).toContain('transformation'); + expect(output).toContain('validation'); + }); +}); + +describe('Format Selection', () => { + test('formatList with format=json returns JSON', () => { + const output = formatList(mockWorkers, { format: 'json' }); + expect(() => JSON.parse(output)).not.toThrow(); + }); + + test('formatList with format=yaml returns YAML', () => { + const output = formatList(mockWorkers, { format: 'yaml' }); + expect(output).toContain('- id:'); + }); + + test('formatList with format=table returns table', () => { + const output = formatList(mockWorkers, { format: 'table' }); + expect(output).toContain('#'); + expect(output).toContain('ID'); + }); + + test('formatList defaults to table format', () => { + const output = formatList(mockWorkers, {}); + expect(output).toContain('#'); + expect(output).toContain('ID'); + }); +}); + +describe('Pagination', () => { + test('paginate returns correct slice', () => { + const items = Array.from({ length: 50 }, (_, i) => ({ id: `item-${i}` })); + const result = paginate(items, { page: 2, limit: 10 }); + expect(result.items.length).toBe(10); + expect(result.items[0].id).toBe('item-10'); + expect(result.items[9].id).toBe('item-19'); + }); + + test('paginate calculates correct pagination info', () => { + const items = Array.from({ length: 50 }, (_, i) => ({ id: `item-${i}` })); + const result = paginate(items, { page: 2, limit: 10 }); + expect(result.pagination.page).toBe(2); + expect(result.pagination.limit).toBe(10); + expect(result.pagination.totalItems).toBe(50); + expect(result.pagination.totalPages).toBe(5); + expect(result.pagination.startIndex).toBe(11); + expect(result.pagination.endIndex).toBe(20); + expect(result.pagination.hasNextPage).toBe(true); + expect(result.pagination.hasPrevPage).toBe(true); + }); + + test('paginate handles first page', () => { + const items = Array.from({ length: 50 }, (_, i) => ({ id: `item-${i}` })); + const result = paginate(items, { page: 1, limit: 
10 }); + expect(result.pagination.hasPrevPage).toBe(false); + expect(result.pagination.hasNextPage).toBe(true); + }); + + test('paginate handles last page', () => { + const items = Array.from({ length: 50 }, (_, i) => ({ id: `item-${i}` })); + const result = paginate(items, { page: 5, limit: 10 }); + expect(result.pagination.hasPrevPage).toBe(true); + expect(result.pagination.hasNextPage).toBe(false); + }); + + test('paginate handles single page', () => { + const items = Array.from({ length: 5 }, (_, i) => ({ id: `item-${i}` })); + const result = paginate(items, { page: 1, limit: 10 }); + expect(result.pagination.totalPages).toBe(1); + expect(result.pagination.hasPrevPage).toBe(false); + expect(result.pagination.hasNextPage).toBe(false); + }); + + test('paginate handles empty array', () => { + const result = paginate([], { page: 1, limit: 10 }); + expect(result.items.length).toBe(0); + expect(result.pagination.totalItems).toBe(0); + expect(result.pagination.totalPages).toBe(0); + }); + + test('formatPaginationInfo returns correct text', () => { + const pagination = { + startIndex: 11, + endIndex: 20, + totalItems: 50, + page: 2, + totalPages: 5, + }; + const output = formatPaginationInfo(pagination); + expect(output).toContain('11-20 of 50'); + expect(output).toContain('page 2/5'); + }); + + test('formatPaginationHint includes navigation hints', () => { + const pagination = { + page: 2, + totalPages: 5, + hasPrevPage: true, + hasNextPage: true, + }; + const output = formatPaginationHint(pagination); + expect(output).toContain('--page=1'); + expect(output).toContain('--page=3'); + }); +}); + +describe('String Truncation', () => { + test('truncate shortens long strings', () => { + const long = 'This is a very long string that should be truncated'; + const result = truncate(long, 20); + expect(result.length).toBe(20); + expect(result.endsWith('…')).toBe(true); + }); + + test('truncate leaves short strings unchanged', () => { + const short = 'Short'; + const result = 
truncate(short, 20); + expect(result).toBe('Short'); + }); + + test('truncate handles empty string', () => { + const result = truncate('', 20); + expect(result).toBe(''); + }); + + test('truncate handles null/undefined', () => { + expect(truncate(null, 20)).toBe(''); + expect(truncate(undefined, 20)).toBe(''); + }); +}); + +describe('Performance Requirements', () => { + // Create large mock dataset + const largeMockWorkers = Array.from({ length: 250 }, (_, i) => ({ + id: `worker-${i}`, + name: `Worker ${i}`, + category: ['data', 'testing', 'code', 'template'][i % 4], + subcategory: ['transformation', 'validation', 'unit', 'generation'][i % 4], + tags: ['tag1', 'tag2', 'tag3'], + path: `.aios-core/tasks/worker-${i}.md`, + })); + + test('formatTree handles 200+ workers under 100ms', () => { + const startTime = Date.now(); + formatTree(largeMockWorkers, {}); + const duration = Date.now() - startTime; + expect(duration).toBeLessThan(100); + }); + + test('formatTable handles 200+ workers under 100ms', () => { + const startTime = Date.now(); + formatTable(largeMockWorkers, {}); + const duration = Date.now() - startTime; + expect(duration).toBeLessThan(100); + }); + + test('paginate handles 200+ workers under 10ms', () => { + const startTime = Date.now(); + paginate(largeMockWorkers, { page: 5, limit: 20 }); + const duration = Date.now() - startTime; + expect(duration).toBeLessThan(10); + }); + + test('groupWorkers handles 200+ workers under 50ms', () => { + const startTime = Date.now(); + groupWorkers(largeMockWorkers); + const duration = Date.now() - startTime; + expect(duration).toBeLessThan(50); + }); +}); + +``` + +================================================== +📄 tests/unit/migration-backup.test.js +================================================== +```js +/** + * Migration Backup Module Tests + * + * @story 2.14 - Migration Script v2.0 → v2.1 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { + 
createBackupDirName, + calculateChecksum, + copyFileWithMetadata, + getAllFiles, + createBackup, + verifyBackup, + findLatestBackup, + listBackups, +} = require('../../.aios-core/cli/commands/migrate/backup'); + +/** + * Cleanup helper with retry logic for flaky file system operations + * @param {string} dir - Directory to remove + * @param {number} maxRetries - Maximum retry attempts + * @param {number} retryDelay - Delay between retries in ms + */ +async function cleanupWithRetry(dir, maxRetries = 3, retryDelay = 100) { + for (let attempt = 1; attempt <= maxRetries; attempt++) { + try { + if (fs.existsSync(dir)) { + await fs.promises.rm(dir, { recursive: true, force: true, maxRetries: 3 }); + } + return; + } catch (error) { + const isRetryable = error.code && ['ENOTEMPTY', 'EBUSY', 'EPERM', 'EACCES'].includes(error.code); + if (attempt === maxRetries || !isRetryable) { + // Last attempt failed or non-retryable error, log but don't throw + console.warn(`Warning: Failed to cleanup ${dir} after ${attempt} attempts:`, error.code); + return; + } + // Linear backoff (100ms, 200ms, 300ms...) 
+ await new Promise(resolve => setTimeout(resolve, retryDelay * attempt)); + } + } +} + +describe('Migration Backup Module', () => { + let testDir; + let testId; + + beforeEach(async () => { + // Create a unique temporary test directory with random suffix to avoid collisions + testId = `${Date.now()}-${Math.random().toString(36).substring(2, 8)}`; + testDir = path.join(os.tmpdir(), `aios-backup-test-${testId}`); + await fs.promises.mkdir(testDir, { recursive: true }); + }); + + afterEach(async () => { + // Small delay to allow file handles to close + await new Promise(resolve => setTimeout(resolve, 50)); + // Cleanup test directory with retry logic + await cleanupWithRetry(testDir); + }); + + describe('createBackupDirName', () => { + it('should create backup directory name with date format', () => { + const name = createBackupDirName(); + expect(name).toMatch(/^\.aios-backup-\d{4}-\d{2}-\d{2}$/); + }); + + it('should use current date', () => { + const name = createBackupDirName(); + const today = new Date().toISOString().split('T')[0]; + expect(name).toBe(`.aios-backup-${today}`); + }); + }); + + describe('calculateChecksum', () => { + it('should calculate MD5 checksum for a file', async () => { + const testFile = path.join(testDir, 'test.txt'); + await fs.promises.writeFile(testFile, 'Hello, World!'); + + const checksum = await calculateChecksum(testFile); + + expect(checksum).toMatch(/^[a-f0-9]{32}$/); + // MD5 of "Hello, World!" 
is known + expect(checksum).toBe('65a8e27d8879283831b664bd8b7f0ad4'); + }); + + it('should produce different checksums for different content', async () => { + const file1 = path.join(testDir, 'file1.txt'); + const file2 = path.join(testDir, 'file2.txt'); + + await fs.promises.writeFile(file1, 'Content 1'); + await fs.promises.writeFile(file2, 'Content 2'); + + const checksum1 = await calculateChecksum(file1); + const checksum2 = await calculateChecksum(file2); + + expect(checksum1).not.toBe(checksum2); + }); + }); + + describe('copyFileWithMetadata', () => { + it('should copy file and preserve content', async () => { + const srcFile = path.join(testDir, 'source.txt'); + const destFile = path.join(testDir, 'dest', 'copied.txt'); + + await fs.promises.writeFile(srcFile, 'Test content'); + + const result = await copyFileWithMetadata(srcFile, destFile); + + expect(fs.existsSync(destFile)).toBe(true); + const content = await fs.promises.readFile(destFile, 'utf8'); + expect(content).toBe('Test content'); + expect(result.checksum).toMatch(/^[a-f0-9]{32}$/); + }); + + it('should create destination directory if needed', async () => { + const srcFile = path.join(testDir, 'source.txt'); + const destFile = path.join(testDir, 'nested', 'deep', 'copied.txt'); + + await fs.promises.writeFile(srcFile, 'Test'); + + await copyFileWithMetadata(srcFile, destFile); + + expect(fs.existsSync(destFile)).toBe(true); + }); + }); + + describe('getAllFiles', () => { + it('should get all files recursively', async () => { + // Create test structure + await fs.promises.mkdir(path.join(testDir, 'subdir'), { recursive: true }); + await fs.promises.writeFile(path.join(testDir, 'file1.txt'), 'a'); + await fs.promises.writeFile(path.join(testDir, 'subdir', 'file2.txt'), 'b'); + + const files = await getAllFiles(testDir); + + expect(files).toHaveLength(2); + expect(files.some(f => f.includes('file1.txt'))).toBe(true); + expect(files.some(f => f.includes('file2.txt'))).toBe(true); + }); + + it('should 
return empty array for empty directory', async () => { + const files = await getAllFiles(testDir); + expect(files).toHaveLength(0); + }); + }); + + describe('createBackup', () => { + it('should create backup of .aios-core directory', async () => { + // Create mock .aios-core structure + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'agents', 'test.md'), 'Agent content'); + await fs.promises.writeFile(path.join(aiosCoreDir, 'index.js'), 'module.exports = {}'); + + const result = await createBackup(testDir); + + expect(result.success).toBe(true); + expect(result.backupDir).toBeTruthy(); + expect(fs.existsSync(result.backupDir)).toBe(true); + expect(result.manifest.totalFiles).toBeGreaterThan(0); + }); + + it('should fail if .aios-core does not exist', async () => { + await expect(createBackup(testDir)).rejects.toThrow(/No .aios-core directory/); + }); + + it('should include backup manifest', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(aiosCoreDir, { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'test.js'), 'test'); + + const result = await createBackup(testDir); + const manifestPath = path.join(result.backupDir, 'backup-manifest.json'); + + expect(fs.existsSync(manifestPath)).toBe(true); + + const manifest = JSON.parse(await fs.promises.readFile(manifestPath, 'utf8')); + expect(manifest.version).toBe('2.0'); + expect(manifest.files).toBeInstanceOf(Array); + expect(manifest.checksums).toBeInstanceOf(Object); + }); + }); + + describe('verifyBackup', () => { + it('should verify valid backup', async () => { + // Create and verify a backup + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(aiosCoreDir, { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'test.js'), 'test content'); + + const 
backupResult = await createBackup(testDir); + const verification = await verifyBackup(backupResult.backupDir); + + expect(verification.valid).toBe(true); + expect(verification.verified).toBeGreaterThan(0); + expect(verification.failed).toHaveLength(0); + }); + + it('should detect corrupted files', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(aiosCoreDir, { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'test.js'), 'original'); + + const backupResult = await createBackup(testDir); + + // Corrupt a file in backup + const backedUpFile = path.join(backupResult.backupDir, '.aios-core', 'test.js'); + await fs.promises.writeFile(backedUpFile, 'corrupted'); + + const verification = await verifyBackup(backupResult.backupDir); + + expect(verification.valid).toBe(false); + expect(verification.failed).toHaveLength(1); + }); + }); + + describe('findLatestBackup', () => { + it('should find the most recent backup', async () => { + // Create mock backup directories + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(aiosCoreDir, { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'test.js'), 'test'); + + // Create first backup + await createBackup(testDir); + + const latest = await findLatestBackup(testDir); + + expect(latest).not.toBeNull(); + expect(latest.name).toMatch(/^\.aios-backup-/); + }); + + it('should return null if no backups exist', async () => { + const latest = await findLatestBackup(testDir); + expect(latest).toBeNull(); + }); + }); + + describe('listBackups', () => { + it('should list all backups', async () => { + const aiosCoreDir = path.join(testDir, '.aios-core'); + await fs.promises.mkdir(aiosCoreDir, { recursive: true }); + await fs.promises.writeFile(path.join(aiosCoreDir, 'test.js'), 'test'); + + await createBackup(testDir); + + const backups = await listBackups(testDir); + + expect(backups).toBeInstanceOf(Array); + 
expect(backups.length).toBeGreaterThan(0);
+      expect(backups[0]).toHaveProperty('name');
+      expect(backups[0]).toHaveProperty('hasManifest');
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/unit/decision-recorder.test.js
+==================================================
+```js
+/**
+ * Unit Tests for Decision Recorder API
+ *
+ * Tests the convenience API for recording decisions during yolo mode.
+ *
+ * @see .aios-core/development/scripts/decision-recorder.js
+ */
+
+const fs = require('fs').promises;
+const {
+  initializeDecisionLogging,
+  recordDecision,
+  trackFile,
+  trackTest,
+  updateMetrics,
+  completeDecisionLogging,
+  getCurrentContext,
+} = require('../../.aios-core/development/scripts/decision-recorder');
+
+// Mock decision-log-generator
+jest.mock('../../.aios-core/development/scripts/decision-log-generator', () => ({
+  generateDecisionLog: jest.fn().mockResolvedValue('.ai/decision-log-test.md'),
+}));
+
+describe('decision-recorder', () => {
+  beforeEach(() => {
+    jest.clearAllMocks();
+    jest.spyOn(Date, 'now').mockReturnValue(1700000000000);
+  });
+
+  afterEach(() => {
+    jest.restoreAllMocks();
+  });
+
+  describe('initializeDecisionLogging', () => {
+    it('should initialize decision logging context', async () => {
+      const context = await initializeDecisionLogging('dev', 'docs/stories/test.md');
+
+      expect(context).toBeDefined();
+      expect(context.agentId).toBe('dev');
+      expect(context.storyPath).toBe('docs/stories/test.md');
+      expect(context.enabled).toBe(true);
+    });
+
+    it('should respect enabled option', async () => {
+      const context = await initializeDecisionLogging('dev', 'test.md', { enabled: false });
+
+      expect(context).toBeNull();
+    });
+
+    it('should pass agent load time to context', async () => {
+      const context = await initializeDecisionLogging('dev', 'test.md', { agentLoadTime: 150 });
+
+      expect(context.metrics.agentLoadTime).toBe(150);
+    });
+  });
+
+  describe('recordDecision', () => {
+    beforeEach(async () => 
{ + // Reset global context before each test in this suite + if (getCurrentContext()) { + await completeDecisionLogging('cleanup'); + } + }); + + it('should record decision after initialization', async () => { + await initializeDecisionLogging('dev', 'test.md'); + + const decision = recordDecision({ + description: 'Test decision', + reason: 'Test reason', + alternatives: ['Alt 1', 'Alt 2'], + type: 'library-choice', + priority: 'high', + }); + + expect(decision).toBeDefined(); + expect(decision.description).toBe('Test decision'); + expect(decision.type).toBe('library-choice'); + expect(decision.priority).toBe('high'); + }); + + it('should warn if not initialized', () => { + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + + const decision = recordDecision({ + description: 'Test', + reason: 'Test', + }); + + expect(decision).toBeNull(); + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('not initialized')); + + consoleSpy.mockRestore(); + }); + }); + + describe('trackFile', () => { + it('should track file after initialization', async () => { + await initializeDecisionLogging('dev', 'test.md'); + + trackFile('src/api.js', 'created'); + + const context = getCurrentContext(); + expect(context.filesModified).toHaveLength(1); + expect(context.filesModified[0].path).toContain('api.js'); // Path may have OS-specific separators + expect(context.filesModified[0].action).toBe('created'); + }); + + it('should handle not initialized gracefully', async () => { + trackFile('test.js', 'created'); // Should not throw + }); + }); + + describe('trackTest', () => { + it('should track test after initialization', async () => { + await initializeDecisionLogging('dev', 'test.md'); + + trackTest({ + name: 'api.test.js', + passed: true, + duration: 125, + }); + + const context = getCurrentContext(); + expect(context.testsRun).toHaveLength(1); + expect(context.testsRun[0].name).toBe('api.test.js'); + expect(context.testsRun[0].passed).toBe(true); + }); + 
+ it('should handle not initialized gracefully', async () => { + trackTest({ name: 'test.js', passed: true }); // Should not throw + }); + }); + + describe('updateMetrics', () => { + it('should update metrics after initialization', async () => { + await initializeDecisionLogging('dev', 'test.md'); + + updateMetrics({ + agentLoadTime: 200, + taskExecutionTime: 5000, + }); + + const context = getCurrentContext(); + expect(context.metrics.agentLoadTime).toBe(200); + expect(context.metrics.taskExecutionTime).toBe(5000); + }); + + it('should handle not initialized gracefully', async () => { + updateMetrics({ test: 123 }); // Should not throw + }); + }); + + describe('completeDecisionLogging', () => { + it('should complete logging and generate log file', async () => { + const { generateDecisionLog } = require('../../.aios-core/development/scripts/decision-log-generator'); + + await initializeDecisionLogging('dev', 'test.md'); + + recordDecision({ description: 'D1', reason: 'R1' }); + trackFile('file1.js', 'created'); + trackTest({ name: 'test.js', passed: true, duration: 100 }); + + jest.spyOn(Date, 'now').mockReturnValue(1700000060000); // 1 minute later + + const logPath = await completeDecisionLogging('6.1.2.6.2', 'completed'); + + expect(logPath).toBe('.ai/decision-log-test.md'); + expect(generateDecisionLog).toHaveBeenCalledWith('6.1.2.6.2', expect.objectContaining({ + agentId: 'dev', + status: 'completed', + decisions: expect.any(Array), + filesModified: expect.any(Array), + testsRun: expect.any(Array), + })); + }); + + it('should reset global context after completion', async () => { + await initializeDecisionLogging('dev', 'test.md'); + await completeDecisionLogging('test', 'completed'); + + const context = getCurrentContext(); + expect(context).toBeNull(); + }); + + it('should handle not initialized gracefully', async () => { + const logPath = await completeDecisionLogging('test'); + + expect(logPath).toBeNull(); + }); + + it('should handle errors during log 
generation', async () => { + const { generateDecisionLog } = require('../../.aios-core/development/scripts/decision-log-generator'); + generateDecisionLog.mockRejectedValueOnce(new Error('File system error')); + + await initializeDecisionLogging('dev', 'test.md'); + + await expect(completeDecisionLogging('test')).rejects.toThrow('File system error'); + }); + + it('should use default status "completed"', async () => { + const { generateDecisionLog } = require('../../.aios-core/development/scripts/decision-log-generator'); + + await initializeDecisionLogging('dev', 'test.md'); + await completeDecisionLogging('test'); + + const callArgs = generateDecisionLog.mock.calls[0][1]; + expect(callArgs.status).toBe('completed'); + }); + + it('should display summary after logging', async () => { + const consoleSpy = jest.spyOn(console, 'log').mockImplementation(); + + await initializeDecisionLogging('dev', 'test.md'); + recordDecision({ description: 'D1', reason: 'R1' }); + recordDecision({ description: 'D2', reason: 'R2' }); + trackFile('file1.js', 'created'); + trackTest({ name: 'test1.js', passed: true, duration: 100 }); + trackTest({ name: 'test2.js', passed: false, duration: 50 }); + + await completeDecisionLogging('test'); + + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Decision Log Summary')); + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Decisions: 2')); + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Files Modified: 1')); + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Tests Run: 2')); + + consoleSpy.mockRestore(); + }); + }); + + describe('getCurrentContext', () => { + it('should return null when not initialized', () => { + const context = getCurrentContext(); + expect(context).toBeNull(); + }); + + it('should return current context when initialized', async () => { + await initializeDecisionLogging('dev', 'test.md'); + + const context = getCurrentContext(); + expect(context).toBeDefined(); 
+ expect(context.agentId).toBe('dev'); + }); + }); + + describe('integration workflow', () => { + it('should support full yolo mode workflow', async () => { + const { generateDecisionLog } = require('../../.aios-core/development/scripts/decision-log-generator'); + + // Initialize + await initializeDecisionLogging('dev', 'docs/stories/story-6.1.2.6.2.md', { + agentLoadTime: 150, + }); + + // Record decisions during execution + recordDecision({ + description: 'Use Axios for HTTP client', + reason: 'Better error handling and interceptors', + alternatives: ['Fetch API', 'Got library'], + type: 'library-choice', + priority: 'medium', + }); + + recordDecision({ + description: 'Use React Context for state', + reason: 'Simple state sharing without Redux', + alternatives: ['Redux', 'Zustand'], + type: 'architecture', + priority: 'high', + }); + + // Track files + trackFile('src/api/client.js', 'created'); + trackFile('package.json', 'modified'); + + // Track tests + trackTest({ name: 'api.test.js', passed: true, duration: 125 }); + trackTest({ name: 'context.test.js', passed: true, duration: 85 }); + + // Update metrics + updateMetrics({ taskExecutionTime: 300000 }); + + // Complete + jest.spyOn(Date, 'now').mockReturnValue(1700000300000); // 5 minutes later + const logPath = await completeDecisionLogging('6.1.2.6.2', 'completed'); + + // Verify log was generated with correct data + expect(logPath).toBe('.ai/decision-log-test.md'); + + const callArgs = generateDecisionLog.mock.calls[0][1]; + expect(callArgs.agentId).toBe('dev'); + expect(callArgs.status).toBe('completed'); + expect(callArgs.decisions).toHaveLength(2); + expect(callArgs.decisions[0].description).toBe('Use Axios for HTTP client'); + expect(callArgs.decisions[1].description).toBe('Use React Context for state'); + expect(callArgs.filesModified).toHaveLength(2); + expect(callArgs.filesModified[0].path).toContain('client.js'); // OS-agnostic path check + 
expect(callArgs.filesModified[1].path).toContain('package.json'); + expect(callArgs.testsRun).toHaveLength(2); + expect(callArgs.testsRun[0].name).toBe('api.test.js'); + expect(callArgs.testsRun[0].passed).toBe(true); + expect(callArgs.testsRun[1].name).toBe('context.test.js'); + expect(callArgs.testsRun[1].passed).toBe(true); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/quality/metrics-hook.test.js +================================================== +```js +/** + * Metrics Hook Unit Tests + * + * Tests for the quality metrics hook integration module. + * + * @module tests/unit/quality/metrics-hook.test.js + * @story 3.11a - Quality Gates Metrics Collector + */ + +const path = require('path'); +const fs = require('fs').promises; +const { + recordPreCommitMetrics, + recordPRReviewMetrics, + recordHumanReviewMetrics, + withPreCommitMetrics, + getQuickSummary, +} = require('../../../.aios-core/quality/metrics-hook'); +const { MetricsCollector } = require('../../../.aios-core/quality/metrics-collector'); + +// Test data directory +const TEST_DATA_DIR = path.join(__dirname, '../../fixtures/quality'); +const TEST_METRICS_FILE = path.join(TEST_DATA_DIR, 'test-hook-metrics.json'); + +describe('Metrics Hook', () => { + let originalCwd; + + beforeAll(async () => { + // Create test data directory + await fs.mkdir(TEST_DATA_DIR, { recursive: true }); + + // Store original cwd + originalCwd = process.cwd(); + }); + + beforeEach(async () => { + // Clean up test file before each test + try { + await fs.unlink(TEST_METRICS_FILE); + await fs.unlink(`${TEST_METRICS_FILE}.lock`); + } catch { + // Ignore if file doesn't exist + } + + // Override the default data file path for testing + // Note: In real usage, the hook uses the default path + }); + + afterAll(async () => { + // Clean up + try { + await fs.unlink(TEST_METRICS_FILE); + await fs.unlink(`${TEST_METRICS_FILE}.lock`); + } catch { + // Ignore + } + }); + + 
describe('recordPreCommitMetrics', () => {
+    it('should record pre-commit metrics without throwing', async () => {
+      // This should not throw even if metrics file doesn't exist
+      const result = await recordPreCommitMetrics({
+        passed: true,
+        durationMs: 2500,
+        findingsCount: 0,
+      });
+
+      // Result could be null if recording fails (graceful degradation)
+      // In normal circumstances, it should succeed
+      expect(result === null || result.layer === 1).toBe(true);
+    });
+
+    it('should include triggeredBy: hook in metadata', async () => {
+      const result = await recordPreCommitMetrics({
+        passed: true,
+        durationMs: 1500,
+      });
+
+      if (result) {
+        expect(result.metadata.triggeredBy).toBe('hook');
+      }
+    });
+
+    it('should handle failure gracefully', async () => {
+      // Silence warning output from the graceful-degradation path
+      jest.spyOn(console, 'warn').mockImplementation(() => {});
+
+      // Even with invalid input, should not throw
+      const result = await recordPreCommitMetrics({
+        passed: 'invalid', // Should be boolean
+        durationMs: 'not a number',
+      });
+
+      // Should handle gracefully (either record or return null)
+      expect(result === null || typeof result === 'object').toBe(true);
+
+      jest.restoreAllMocks();
+    });
+  });
+
+  describe('recordPRReviewMetrics', () => {
+    it('should record PR review with CodeRabbit data', async () => {
+      const result = await recordPRReviewMetrics({
+        passed: true,
+        durationMs: 180000,
+        coderabbit: {
+          findingsCount: 5,
+          severityBreakdown: {
+            critical: 0,
+            high: 1,
+            medium: 2,
+            low: 2,
+          },
+        },
+      });
+
+      if (result) {
+        expect(result.layer).toBe(2);
+        expect(result.metadata.triggeredBy).toBe('pr');
+      }
+    });
+
+    it('should record PR review with Quinn data', async () => {
+      const result = await recordPRReviewMetrics({
+        passed: true,
+        durationMs: 120000,
+        quinn: {
+          findingsCount: 3,
+          topCategories: ['test-coverage', 'documentation'],
+        },
+      });
+
+      if (result) {
+        expect(result.layer).toBe(2);
+      }
+    });
+
+    it('should include additional metadata', async () => { 
+ const result = await recordPRReviewMetrics({ + passed: true, + durationMs: 150000, + metadata: { + prNumber: 42, + branchName: 'feature/test', + }, + }); + + if (result) { + expect(result.metadata.prNumber).toBe(42); + expect(result.metadata.branchName).toBe('feature/test'); + } + }); + }); + + describe('recordHumanReviewMetrics', () => { + it('should record human review as Layer 3', async () => { + const result = await recordHumanReviewMetrics({ + passed: true, + durationMs: 600000, + findingsCount: 1, + }); + + if (result) { + expect(result.layer).toBe(3); + expect(result.metadata.triggeredBy).toBe('manual'); + } + }); + }); + + describe('withPreCommitMetrics', () => { + it('should wrap check function and record metrics', async () => { + const result = await withPreCommitMetrics(async () => { + // Simulate some checks + return { + passed: true, + findingsCount: 0, + }; + }); + + expect(result.passed).toBe(true); + expect(result.findingsCount).toBe(0); + }); + + it('should catch and record failures', async () => { + const result = await withPreCommitMetrics(async () => { + throw new Error('Check failed'); + }); + + expect(result.passed).toBe(false); + expect(result.error).toBe('Check failed'); + }); + + it('should pass through metadata from check function', async () => { + const result = await withPreCommitMetrics(async () => { + return { + passed: true, + findingsCount: 2, + metadata: { lintErrors: 1, typeErrors: 1 }, + }; + }); + + expect(result.findingsCount).toBe(2); + expect(result.metadata.lintErrors).toBe(1); + }); + }); + + describe('getQuickSummary', () => { + it('should return null when no metrics exist', async () => { + // With no data file, should return null gracefully + const summary = await getQuickSummary(); + + // Either null (no file) or summary object + if (summary) { + expect(summary).toHaveProperty('layer1'); + expect(summary).toHaveProperty('layer2'); + expect(summary).toHaveProperty('layer3'); + } + }); + + it('should return summary 
structure when data exists', async () => { + // First record some metrics + await recordPreCommitMetrics({ + passed: true, + durationMs: 2000, + }); + + const summary = await getQuickSummary(); + + if (summary) { + expect(summary.layer1).toHaveProperty('passRate'); + expect(summary.layer1).toHaveProperty('totalRuns'); + expect(summary.layer2).toHaveProperty('autoCatchRate'); + expect(summary).toHaveProperty('historyCount'); + } + }); + }); +}); + +``` + +================================================== +📄 tests/unit/quality/metrics-collector.test.js +================================================== +```js +/** + * MetricsCollector Unit Tests + * + * Tests for the quality gate metrics collector module. + * + * @module tests/unit/quality/metrics-collector.test.js + * @story 3.11a - Quality Gates Metrics Collector + */ + +const path = require('path'); +const fs = require('fs').promises; +const { + MetricsCollector, + createEmptyMetrics, + DEFAULT_RETENTION_DAYS, +} = require('../../../.aios-core/quality/metrics-collector'); + +// Test data directory +const TEST_DATA_DIR = path.join(__dirname, '../../fixtures/quality'); +const TEST_METRICS_FILE = path.join(TEST_DATA_DIR, 'test-metrics.json'); + +describe('MetricsCollector', () => { + let collector; + + beforeAll(async () => { + // Create test data directory + await fs.mkdir(TEST_DATA_DIR, { recursive: true }); + }); + + beforeEach(async () => { + // Clean up test file before each test + try { + await fs.unlink(TEST_METRICS_FILE); + await fs.unlink(`${TEST_METRICS_FILE}.lock`); + } catch { + // Ignore if file doesn't exist + } + + collector = new MetricsCollector({ + dataFile: TEST_METRICS_FILE, + retentionDays: 30, + }); + }); + + afterAll(async () => { + // Clean up test files + try { + await fs.unlink(TEST_METRICS_FILE); + await fs.unlink(`${TEST_METRICS_FILE}.lock`); + } catch { + // Ignore + } + }); + + describe('createEmptyMetrics', () => { + it('should create valid empty metrics structure', () => { + const 
metrics = createEmptyMetrics(); + + expect(metrics).toHaveProperty('version', '1.0'); + expect(metrics).toHaveProperty('lastUpdated'); + expect(metrics).toHaveProperty('retentionDays', DEFAULT_RETENTION_DAYS); + expect(metrics).toHaveProperty('layers'); + expect(metrics).toHaveProperty('trends'); + expect(metrics).toHaveProperty('history'); + + // Check layer structure + expect(metrics.layers).toHaveProperty('layer1'); + expect(metrics.layers).toHaveProperty('layer2'); + expect(metrics.layers).toHaveProperty('layer3'); + + // Check layer1 structure + expect(metrics.layers.layer1).toHaveProperty('passRate', 0); + expect(metrics.layers.layer1).toHaveProperty('avgTimeMs', 0); + expect(metrics.layers.layer1).toHaveProperty('totalRuns', 0); + + // Check layer2 specific fields + expect(metrics.layers.layer2).toHaveProperty('autoCatchRate', 0); + expect(metrics.layers.layer2).toHaveProperty('coderabbit'); + expect(metrics.layers.layer2).toHaveProperty('quinn'); + }); + }); + + describe('load/save', () => { + it('should create empty metrics on first load', async () => { + const metrics = await collector.load(); + + expect(metrics.version).toBe('1.0'); + expect(metrics.history).toEqual([]); + }); + + it('should save and load metrics correctly', async () => { + await collector.load(); + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + + // Create new collector instance + const newCollector = new MetricsCollector({ + dataFile: TEST_METRICS_FILE, + }); + const loadedMetrics = await newCollector.load(); + + expect(loadedMetrics.history.length).toBe(1); + expect(loadedMetrics.history[0].passed).toBe(true); + }); + }); + + describe('recordRun', () => { + it('should record Layer 1 pre-commit run', async () => { + const run = await collector.recordRun(1, { + passed: true, + durationMs: 3200, + findingsCount: 0, + }); + + expect(run.layer).toBe(1); + expect(run.passed).toBe(true); + expect(run.durationMs).toBe(3200); + expect(run.findingsCount).toBe(0); + 
expect(run.timestamp).toBeDefined(); + }); + + it('should record Layer 2 PR review run', async () => { + const run = await collector.recordRun(2, { + passed: false, + durationMs: 120000, + findingsCount: 5, + }); + + expect(run.layer).toBe(2); + expect(run.passed).toBe(false); + expect(run.findingsCount).toBe(5); + }); + + it('should record Layer 3 human review run', async () => { + const run = await collector.recordRun(3, { + passed: true, + durationMs: 600000, + findingsCount: 1, + }); + + expect(run.layer).toBe(3); + expect(run.passed).toBe(true); + }); + + it('should reject invalid layer numbers', async () => { + await expect(collector.recordRun(0, { passed: true })) + .rejects.toThrow('Layer must be 1, 2, or 3'); + + await expect(collector.recordRun(4, { passed: true })) + .rejects.toThrow('Layer must be 1, 2, or 3'); + }); + + it('should include metadata in run record', async () => { + const run = await collector.recordRun(1, { + passed: true, + durationMs: 1000, + metadata: { + storyId: '3.11a', + branchName: 'feature/test', + }, + }); + + expect(run.metadata.storyId).toBe('3.11a'); + expect(run.metadata.branchName).toBe('feature/test'); + }); + }); + + describe('recordPreCommit', () => { + it('should record pre-commit as Layer 1', async () => { + const run = await collector.recordPreCommit({ + passed: true, + durationMs: 2500, + }); + + expect(run.layer).toBe(1); + expect(run.passed).toBe(true); + }); + }); + + describe('recordPRReview', () => { + it('should record PR review with CodeRabbit metrics', async () => { + await collector.recordPRReview({ + passed: true, + durationMs: 180000, + coderabbit: { + findingsCount: 5, + severityBreakdown: { + critical: 0, + high: 1, + medium: 2, + low: 2, + }, + }, + }); + + const metrics = await collector.getMetrics(); + expect(metrics.layers.layer2.coderabbit.active).toBe(true); + expect(metrics.layers.layer2.coderabbit.findingsCount).toBe(5); + }); + + it('should accumulate CodeRabbit severity breakdown', async () => 
{ + await collector.recordPRReview({ + passed: true, + durationMs: 100000, + coderabbit: { + findingsCount: 3, + severityBreakdown: { critical: 1, high: 1, medium: 1, low: 0 }, + }, + }); + + await collector.recordPRReview({ + passed: true, + durationMs: 100000, + coderabbit: { + findingsCount: 2, + severityBreakdown: { critical: 0, high: 0, medium: 1, low: 1 }, + }, + }); + + const metrics = await collector.getMetrics(); + expect(metrics.layers.layer2.coderabbit.severityBreakdown.critical).toBe(1); + expect(metrics.layers.layer2.coderabbit.severityBreakdown.high).toBe(1); + expect(metrics.layers.layer2.coderabbit.severityBreakdown.medium).toBe(2); + expect(metrics.layers.layer2.coderabbit.severityBreakdown.low).toBe(1); + }); + + it('should track Quinn categories', async () => { + await collector.recordPRReview({ + passed: true, + durationMs: 120000, + quinn: { + findingsCount: 3, + topCategories: ['test-coverage', 'documentation'], + }, + }); + + const metrics = await collector.getMetrics(); + expect(metrics.layers.layer2.quinn.topCategories).toContain('test-coverage'); + expect(metrics.layers.layer2.quinn.topCategories).toContain('documentation'); + }); + }); + + describe('recordHumanReview', () => { + it('should record human review as Layer 3', async () => { + const run = await collector.recordHumanReview({ + passed: true, + durationMs: 300000, + }); + + expect(run.layer).toBe(3); + }); + }); + + describe('aggregate calculations', () => { + it('should calculate pass rate correctly', async () => { + // Record 4 runs: 3 passed, 1 failed + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + await collector.recordRun(1, { passed: false, durationMs: 1000 }); + + const metrics = await collector.getMetrics(); + expect(metrics.layers.layer1.passRate).toBe(0.75); + }); + + it('should calculate average time correctly', 
async () => { + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + await collector.recordRun(1, { passed: true, durationMs: 2000 }); + await collector.recordRun(1, { passed: true, durationMs: 3000 }); + + const metrics = await collector.getMetrics(); + expect(metrics.layers.layer1.avgTimeMs).toBe(2000); + }); + + it('should update total runs count', async () => { + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + await collector.recordRun(2, { passed: true, durationMs: 1000 }); + + const metrics = await collector.getMetrics(); + expect(metrics.layers.layer1.totalRuns).toBe(2); + expect(metrics.layers.layer2.totalRuns).toBe(1); + }); + + it('should update lastRun timestamp', async () => { + const before = new Date().toISOString(); + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + const after = new Date().toISOString(); + + const metrics = await collector.getMetrics(); + expect(metrics.layers.layer1.lastRun >= before).toBe(true); + expect(metrics.layers.layer1.lastRun <= after).toBe(true); + }); + }); + + describe('cleanup', () => { + it('should remove records older than retention period', async () => { + // Create collector with 1 day retention + const shortRetentionCollector = new MetricsCollector({ + dataFile: TEST_METRICS_FILE, + retentionDays: 1, + }); + + // Add a record and manually backdate it + await shortRetentionCollector.load(); + const metrics = await shortRetentionCollector.getMetrics(); + + // Add old record (2 days ago) + const oldDate = new Date(); + oldDate.setDate(oldDate.getDate() - 2); + metrics.history.push({ + timestamp: oldDate.toISOString(), + layer: 1, + passed: true, + durationMs: 1000, + findingsCount: 0, + }); + + // Add recent record + metrics.history.push({ + timestamp: new Date().toISOString(), + layer: 1, + passed: true, + durationMs: 1000, + findingsCount: 0, + }); + + await 
shortRetentionCollector.save(metrics); + + // Run cleanup + const removed = await shortRetentionCollector.cleanup(); + + expect(removed).toBe(1); + + const cleaned = await shortRetentionCollector.getMetrics(); + expect(cleaned.history.length).toBe(1); + }); + }); + + describe('export', () => { + it('should export metrics as JSON', async () => { + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + + const exported = await collector.export('json'); + const parsed = JSON.parse(exported); + + expect(parsed.history.length).toBe(1); + expect(parsed.version).toBe('1.0'); + }); + + it('should export history as CSV', async () => { + await collector.recordRun(1, { passed: true, durationMs: 1000, findingsCount: 2 }); + await collector.recordRun(2, { passed: false, durationMs: 2000, findingsCount: 5 }); + + const csv = await collector.export('csv'); + const lines = csv.split('\n'); + + expect(lines[0]).toBe('timestamp,layer,passed,durationMs,findingsCount'); + expect(lines.length).toBe(3); // Header + 2 rows + }); + }); + + describe('getLayerHistory', () => { + it('should return history for specific layer', async () => { + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + await collector.recordRun(2, { passed: true, durationMs: 2000 }); + await collector.recordRun(1, { passed: false, durationMs: 1500 }); + + const layer1History = await collector.getLayerHistory(1); + + expect(layer1History.length).toBe(2); + expect(layer1History.every((r) => r.layer === 1)).toBe(true); + }); + + it('should limit results when specified', async () => { + for (let i = 0; i < 10; i++) { + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + } + + const limited = await collector.getLayerHistory(1, 5); + expect(limited.length).toBe(5); + }); + }); + + describe('reset', () => { + it('should reset all metrics to empty state', async () => { + await collector.recordRun(1, { passed: true, durationMs: 1000 }); + await collector.recordRun(2, { passed: true, 
durationMs: 2000 }); + + await collector.reset(); + + const metrics = await collector.getMetrics(); + expect(metrics.history.length).toBe(0); + expect(metrics.layers.layer1.totalRuns).toBe(0); + expect(metrics.layers.layer2.totalRuns).toBe(0); + }); + }); + + describe('validation', () => { + it('should validate metrics against schema', async () => { + const metrics = createEmptyMetrics(); + const { valid, errors } = await collector.validate(metrics); + + expect(valid).toBe(true); + expect(errors).toBeNull(); + }); + + it('should detect invalid metrics', async () => { + const invalidMetrics = { + version: '2.0', // Wrong version + lastUpdated: new Date().toISOString(), + layers: {}, + trends: {}, + history: [], + }; + + const { valid } = await collector.validate(invalidMetrics); + expect(valid).toBe(false); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/quality/seed-metrics.test.js +================================================== +```js +/** + * Seed Metrics Unit Tests + * + * Tests for the quality gate metrics seed data generator. 
+ * + * @module tests/unit/quality/seed-metrics.test.js + * @story 3.11a - Quality Gates Metrics Collector + */ + +const { + generateSeedData, + generateLayer1Run, + generateLayer2Run, + generateLayer3Run, +} = require('../../../.aios-core/quality/seed-metrics'); + +describe('Seed Metrics Generator', () => { + describe('generateLayer1Run', () => { + it('should generate valid Layer 1 run', () => { + const timestamp = new Date(); + const run = generateLayer1Run(timestamp); + + expect(run.layer).toBe(1); + expect(typeof run.passed).toBe('boolean'); + expect(run.durationMs).toBeGreaterThan(0); + expect(run.findingsCount).toBeGreaterThanOrEqual(0); + expect(run.timestamp).toBe(timestamp.toISOString()); + }); + + it('should generate realistic duration range', () => { + const runs = []; + for (let i = 0; i < 100; i++) { + runs.push(generateLayer1Run(new Date())); + } + + const durations = runs.map((r) => r.durationMs); + const minDuration = Math.min(...durations); + const maxDuration = Math.max(...durations); + + // Layer 1 should be 2-8 seconds + expect(minDuration).toBeGreaterThanOrEqual(2000); + expect(maxDuration).toBeLessThanOrEqual(8000); + }); + + it('should have high pass rate (~92%)', () => { + const runs = []; + for (let i = 0; i < 1000; i++) { + runs.push(generateLayer1Run(new Date())); + } + + const passRate = runs.filter((r) => r.passed).length / runs.length; + // Allow for statistical variation + expect(passRate).toBeGreaterThan(0.85); + expect(passRate).toBeLessThan(0.98); + }); + }); + + describe('generateLayer2Run', () => { + it('should generate valid Layer 2 run', () => { + const timestamp = new Date(); + const run = generateLayer2Run(timestamp); + + expect(run.layer).toBe(2); + expect(typeof run.passed).toBe('boolean'); + expect(run.durationMs).toBeGreaterThan(0); + expect(run.metadata).toBeDefined(); + }); + + it('should include CodeRabbit metadata', () => { + const runs = []; + for (let i = 0; i < 100; i++) { + runs.push(generateLayer2Run(new 
Date())); + } + + const withCoderabbit = runs.filter((r) => r.metadata?.coderabbit !== null); + // CodeRabbit active 95% of time + expect(withCoderabbit.length).toBeGreaterThan(85); + }); + + it('should include Quinn metadata', () => { + const run = generateLayer2Run(new Date()); + + expect(run.metadata.quinn).toBeDefined(); + expect(run.metadata.quinn.findingsCount).toBeGreaterThanOrEqual(0); + expect(Array.isArray(run.metadata.quinn.topCategories)).toBe(true); + }); + + it('should have realistic duration (2-10 minutes)', () => { + const runs = []; + for (let i = 0; i < 50; i++) { + runs.push(generateLayer2Run(new Date())); + } + + const durations = runs.map((r) => r.durationMs); + const minDuration = Math.min(...durations); + const maxDuration = Math.max(...durations); + + expect(minDuration).toBeGreaterThanOrEqual(120000); + expect(maxDuration).toBeLessThanOrEqual(600000); + }); + }); + + describe('generateLayer3Run', () => { + it('should generate valid Layer 3 run', () => { + const timestamp = new Date(); + const run = generateLayer3Run(timestamp); + + expect(run.layer).toBe(3); + expect(typeof run.passed).toBe('boolean'); + expect(run.durationMs).toBeGreaterThan(0); + }); + + it('should have highest pass rate (~96%)', () => { + const runs = []; + for (let i = 0; i < 1000; i++) { + runs.push(generateLayer3Run(new Date())); + } + + const passRate = runs.filter((r) => r.passed).length / runs.length; + expect(passRate).toBeGreaterThan(0.90); + expect(passRate).toBeLessThan(0.99); + }); + + it('should have realistic duration (5-30 minutes)', () => { + const runs = []; + for (let i = 0; i < 50; i++) { + runs.push(generateLayer3Run(new Date())); + } + + const durations = runs.map((r) => r.durationMs); + const minDuration = Math.min(...durations); + const maxDuration = Math.max(...durations); + + expect(minDuration).toBeGreaterThanOrEqual(300000); + expect(maxDuration).toBeLessThanOrEqual(1800000); + }); + }); + + describe('generateSeedData', () => { + it('should 
generate 30 days of history by default', () => { + const metrics = generateSeedData(); + + // Should have multiple runs + expect(metrics.history.length).toBeGreaterThan(100); + + // Check date range + const timestamps = metrics.history.map((r) => new Date(r.timestamp)); + const oldestDate = new Date(Math.min(...timestamps)); + const newestDate = new Date(Math.max(...timestamps)); + + const daysDiff = (newestDate - oldestDate) / (1000 * 60 * 60 * 24); + expect(daysDiff).toBeGreaterThanOrEqual(25); + expect(daysDiff).toBeLessThanOrEqual(31); + }); + + it('should generate specified number of days', () => { + const metrics = generateSeedData({ days: 7 }); + + const timestamps = metrics.history.map((r) => new Date(r.timestamp)); + const oldestDate = new Date(Math.min(...timestamps)); + const newestDate = new Date(Math.max(...timestamps)); + + const daysDiff = (newestDate - oldestDate) / (1000 * 60 * 60 * 24); + expect(daysDiff).toBeLessThanOrEqual(8); + }); + + it('should respect runsPerDay option', () => { + const lowRuns = generateSeedData({ days: 5, runsPerDay: 2 }); + const highRuns = generateSeedData({ days: 5, runsPerDay: 15 }); + + // High runs should have significantly more history + expect(highRuns.history.length).toBeGreaterThan(lowRuns.history.length); + }); + + it('should reduce weekend activity when enabled', () => { + // Generate multiple seed data sets to average out randomness + let weekdayTotal = 0; + let weekendTotal = 0; + + for (let i = 0; i < 10; i++) { + const metrics = generateSeedData({ + days: 14, + weekendReduction: true, + }); + + metrics.history.forEach((r) => { + const day = new Date(r.timestamp).getDay(); + if (day === 0 || day === 6) { + weekendTotal++; + } else { + weekdayTotal++; + } + }); + } + + // Weekend days (2 per week) should have fewer runs than weekdays (5 per week) + const weekdayAvg = weekdayTotal / 10; + const weekendAvg = weekendTotal / 4; + + expect(weekendAvg).toBeLessThan(weekdayAvg); + }); + + it('should calculate layer 
aggregates correctly', () => { + // Use more days to ensure Layer 3 (10% probability) gets runs + const metrics = generateSeedData({ days: 30, runsPerDay: 10 }); + + // All layers should have runs (with 30 days and ~300 runs, Layer 3 should have ~30) + expect(metrics.layers.layer1.totalRuns).toBeGreaterThan(0); + expect(metrics.layers.layer2.totalRuns).toBeGreaterThan(0); + expect(metrics.layers.layer3.totalRuns).toBeGreaterThan(0); // With 300 runs at 10% probability, statistically guaranteed + + // Pass rates should be between 0 and 1 + expect(metrics.layers.layer1.passRate).toBeGreaterThan(0); + expect(metrics.layers.layer1.passRate).toBeLessThanOrEqual(1); + + // Layer 1 should have more runs than Layer 3 (60% vs 10% probability) + expect(metrics.layers.layer1.totalRuns).toBeGreaterThanOrEqual(metrics.layers.layer3.totalRuns); + }); + + it('should include CodeRabbit aggregates', () => { + const metrics = generateSeedData({ days: 10 }); + + expect(metrics.layers.layer2.coderabbit.active).toBe(true); + expect(metrics.layers.layer2.coderabbit.findingsCount).toBeGreaterThan(0); + expect(metrics.layers.layer2.coderabbit.severityBreakdown).toBeDefined(); + }); + + it('should include Quinn aggregates', () => { + const metrics = generateSeedData({ days: 10 }); + + expect(metrics.layers.layer2.quinn.findingsCount).toBeGreaterThanOrEqual(0); + expect(Array.isArray(metrics.layers.layer2.quinn.topCategories)).toBe(true); + }); + + it('should generate trend data', () => { + const metrics = generateSeedData({ days: 10 }); + + expect(metrics.trends.passRates.length).toBeGreaterThan(0); + expect(metrics.trends.autoCatchRate.length).toBeGreaterThan(0); + + // Each trend point should have date and value + metrics.trends.passRates.forEach((t) => { + expect(t.date).toMatch(/^\d{4}-\d{2}-\d{2}$/); + expect(typeof t.value).toBe('number'); + expect(t.value).toBeGreaterThanOrEqual(0); + expect(t.value).toBeLessThanOrEqual(1); + }); + }); + + it('should sort history by timestamp', () 
=> { + const metrics = generateSeedData({ days: 10 }); + + for (let i = 1; i < metrics.history.length; i++) { + const prev = new Date(metrics.history[i - 1].timestamp); + const curr = new Date(metrics.history[i].timestamp); + expect(curr >= prev).toBe(true); + } + }); + + it('should have valid schema version', () => { + const metrics = generateSeedData(); + + expect(metrics.version).toBe('1.0'); + expect(metrics.lastUpdated).toBeDefined(); + expect(metrics.retentionDays).toBe(30); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/config/ide-configs.test.js +================================================== +```js +/** + * Unit tests for IDE Configs Metadata + * + * Story 1.4: IDE Selection + * Tests IDE configuration metadata structure + * + * Synkra AIOS v2.1 supports 6 IDEs: + * - Claude Code, Codex CLI, Gemini CLI, Cursor, GitHub Copilot, AntiGravity + */ + +const { + IDE_CONFIGS, + getIDEKeys, + getIDEConfig, + isValidIDE, + getIDEChoices, +} = require('../../../packages/installer/src/config/ide-configs'); + +describe('IDE Configs', () => { + describe('IDE_CONFIGS', () => { + it('should have 6 IDE configurations', () => { + const keys = Object.keys(IDE_CONFIGS); + expect(keys).toHaveLength(6); + }); + + it('should include all expected IDEs', () => { + const expectedIDEs = [ + 'claude-code', + 'codex', + 'gemini', + 'cursor', + 'github-copilot', + 'antigravity', + ]; + + expectedIDEs.forEach((ide) => { + expect(IDE_CONFIGS).toHaveProperty(ide); + }); + }); + + it('should have valid structure for each IDE', () => { + Object.entries(IDE_CONFIGS).forEach(([key, config]) => { + expect(config).toHaveProperty('name'); + expect(config).toHaveProperty('description'); + expect(config).toHaveProperty('configFile'); + expect(config).toHaveProperty('template'); + expect(config).toHaveProperty('requiresDirectory'); + expect(config).toHaveProperty('format'); + expect(config).toHaveProperty('agentFolder'); + + expect(typeof 
config.name).toBe('string'); + expect(typeof config.description).toBe('string'); + expect(typeof config.configFile).toBe('string'); + expect(typeof config.template).toBe('string'); + expect(typeof config.requiresDirectory).toBe('boolean'); + expect(['text', 'json', 'yaml']).toContain(config.format); + expect(typeof config.agentFolder).toBe('string'); + }); + }); + + it('should have correct directory requirements', () => { + // IDEs that require directories + expect(IDE_CONFIGS['claude-code'].requiresDirectory).toBe(true); + expect(IDE_CONFIGS['github-copilot'].requiresDirectory).toBe(true); + expect(IDE_CONFIGS.antigravity.requiresDirectory).toBe(true); + expect(IDE_CONFIGS.cursor.requiresDirectory).toBe(true); + + // IDEs that do not require directories (root file only) + expect(IDE_CONFIGS.codex.requiresDirectory).toBe(false); + }); + + it('should have correct file formats', () => { + // All current IDEs use text format + Object.values(IDE_CONFIGS).forEach((config) => { + expect(config.format).toBe('text'); + }); + }); + + it('should have correct config file paths', () => { + expect(IDE_CONFIGS['claude-code'].configFile).toContain('.claude'); + expect(IDE_CONFIGS.codex.configFile).toBe('AGENTS.md'); + expect(IDE_CONFIGS.gemini.configFile).toContain('.gemini'); + expect(IDE_CONFIGS.cursor.configFile).toContain('.cursor'); + expect(IDE_CONFIGS['github-copilot'].configFile).toContain('.github'); + expect(IDE_CONFIGS.antigravity.configFile).toContain('.antigravity'); + }); + + it('should have template paths in ide-rules folder', () => { + Object.values(IDE_CONFIGS).forEach((config) => { + expect(config.template).toMatch(/^ide-rules\//); + }); + }); + + it('should have Claude Code and Codex as recommended', () => { + expect(IDE_CONFIGS['claude-code'].recommended).toBe(true); + expect(IDE_CONFIGS.codex.recommended).toBe(true); + }); + + it('should have correct agent folder paths', () => { + expect(IDE_CONFIGS['claude-code'].agentFolder).toContain('.claude'); + 
expect(IDE_CONFIGS['claude-code'].agentFolder).toContain('agents'); + expect(IDE_CONFIGS.codex.agentFolder).toContain('.codex'); + expect(IDE_CONFIGS.codex.agentFolder).toContain('agents'); + expect(IDE_CONFIGS.gemini.agentFolder).toContain('.gemini'); + expect(IDE_CONFIGS.gemini.agentFolder).toContain('agents'); + expect(IDE_CONFIGS.cursor.agentFolder).toContain('.cursor'); + expect(IDE_CONFIGS.cursor.agentFolder).toContain('rules'); + expect(IDE_CONFIGS['github-copilot'].agentFolder).toContain('.github'); + expect(IDE_CONFIGS['github-copilot'].agentFolder).toContain('agents'); + // AntiGravity uses .agent/workflows instead of .antigravity/agents + expect(IDE_CONFIGS.antigravity.agentFolder).toContain('.agent'); + expect(IDE_CONFIGS.antigravity.agentFolder).toContain('workflows'); + }); + }); + + describe('getIDEKeys', () => { + it('should return array of IDE keys', () => { + const keys = getIDEKeys(); + + expect(Array.isArray(keys)).toBe(true); + expect(keys).toHaveLength(6); + }); + + it('should return all IDE keys', () => { + const keys = getIDEKeys(); + const expectedKeys = [ + 'claude-code', + 'codex', + 'gemini', + 'cursor', + 'github-copilot', + 'antigravity', + ]; + + expectedKeys.forEach((key) => { + expect(keys).toContain(key); + }); + }); + }); + + describe('getIDEConfig', () => { + it('should return config for valid IDE', () => { + const config = getIDEConfig('cursor'); + + expect(config).toBeDefined(); + expect(config.name).toBe('Cursor'); + }); + + it('should return null for invalid IDE', () => { + const config = getIDEConfig('invalid-ide'); + + expect(config).toBeNull(); + }); + + it('should return correct config for all IDEs', () => { + const ides = ['claude-code', 'codex', 'gemini', 'cursor', 'github-copilot', 'antigravity']; + + ides.forEach((ide) => { + const config = getIDEConfig(ide); + expect(config).toBeDefined(); + expect(config).toBe(IDE_CONFIGS[ide]); + }); + }); + }); + + describe('isValidIDE', () => { + it('should return true for valid 
IDE', () => { + expect(isValidIDE('cursor')).toBe(true); + expect(isValidIDE('gemini')).toBe(true); + expect(isValidIDE('github-copilot')).toBe(true); + expect(isValidIDE('claude-code')).toBe(true); + expect(isValidIDE('codex')).toBe(true); + expect(isValidIDE('antigravity')).toBe(true); + }); + + it('should return false for invalid IDE', () => { + expect(isValidIDE('invalid-ide')).toBe(false); + expect(isValidIDE('')).toBe(false); + expect(isValidIDE(null)).toBe(false); + expect(isValidIDE(undefined)).toBe(false); + }); + + it('should return true for all valid IDE keys', () => { + const keys = getIDEKeys(); + + keys.forEach((key) => { + expect(isValidIDE(key)).toBe(true); + }); + }); + }); + + describe('getIDEChoices', () => { + it('should return array of choices', () => { + const choices = getIDEChoices(); + + expect(Array.isArray(choices)).toBe(true); + expect(choices).toHaveLength(6); + }); + + it('should have valid choice structure', () => { + const choices = getIDEChoices(); + + choices.forEach((choice) => { + expect(choice).toHaveProperty('name'); + expect(choice).toHaveProperty('value'); + expect(typeof choice.name).toBe('string'); + expect(typeof choice.value).toBe('string'); + }); + }); + + it('should include IDE name in choice name', () => { + const choices = getIDEChoices(); + + choices.forEach((choice) => { + const ideKey = choice.value; + const config = getIDEConfig(ideKey); + + expect(choice.name).toContain(config.name); + }); + }); + + it('should use IDE key as choice value', () => { + const choices = getIDEChoices(); + const keys = getIDEKeys(); + + const choiceValues = choices.map((c) => c.value); + + keys.forEach((key) => { + expect(choiceValues).toContain(key); + }); + }); + + it('should pre-check recommended IDEs', () => { + const choices = getIDEChoices(); + const claudeCodeChoice = choices.find((c) => c.value === 'claude-code'); + const codexChoice = choices.find((c) => c.value === 'codex'); + + expect(claudeCodeChoice.checked).toBe(true); + 
expect(codexChoice.checked).toBe(true); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/quality-gates/quality-gate-manager.test.js +================================================== +```js +/** + * Quality Gate Manager Unit Tests + * + * @story 2.10 - Quality Gate Manager + */ + +const { QualityGateManager } = require('../../../.aios-core/core/quality-gates/quality-gate-manager'); + +describe('QualityGateManager', () => { + let manager; + + beforeEach(() => { + manager = new QualityGateManager({ + layer1: { enabled: true }, + layer2: { enabled: true }, + layer3: { enabled: true }, + }); + }); + + describe('constructor', () => { + it('should create manager with default config', () => { + const defaultManager = new QualityGateManager(); + expect(defaultManager).toBeDefined(); + expect(defaultManager.layers).toBeDefined(); + expect(defaultManager.layers.layer1).toBeDefined(); + expect(defaultManager.layers.layer2).toBeDefined(); + expect(defaultManager.layers.layer3).toBeDefined(); + }); + + it('should create manager with custom config', () => { + const customManager = new QualityGateManager({ + layer1: { enabled: false }, + layer2: { enabled: true }, + layer3: { enabled: false }, + }); + expect(customManager.layers.layer1.enabled).toBe(false); + expect(customManager.layers.layer2.enabled).toBe(true); + expect(customManager.layers.layer3.enabled).toBe(false); + }); + }); + + describe('runLayer', () => { + it('should throw error for invalid layer number', async () => { + await expect(manager.runLayer(4)).rejects.toThrow('Invalid layer number: 4'); + await expect(manager.runLayer(0)).rejects.toThrow('Invalid layer number: 0'); + }); + + it('should run Layer 1', async () => { + // Mock runCommand to avoid actual command execution + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: '0 errors', + stderr: '', + duration: 100, + }); + + const result = await manager.runLayer(1, { verbose: 
false }); + expect(result).toBeDefined(); + expect(result.layer).toBe('Layer 1: Pre-commit'); + }); + + it('should run Layer 2', async () => { + // Mock runCommand to avoid actual WSL/CodeRabbit calls + manager.layers.layer2.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: 'No issues found', + stderr: '', + duration: 50, + }); + + const result = await manager.runLayer(2, { verbose: false }); + expect(result).toBeDefined(); + expect(result.layer).toBe('Layer 2: PR Automation'); + }); + + it('should run Layer 3', async () => { + // Mock file operations to avoid actual filesystem access + manager.layers.layer3.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: 'Review complete', + stderr: '', + duration: 50, + }); + + const result = await manager.runLayer(3, { verbose: false }); + expect(result).toBeDefined(); + expect(result.layer).toBe('Layer 3: Human Review'); + }); + }); + + describe('getDuration', () => { + it('should return 0 when not started', () => { + expect(manager.getDuration()).toBe(0); + }); + + it('should return duration after execution', async () => { + manager.startTime = Date.now() - 1000; + manager.endTime = Date.now(); + const duration = manager.getDuration(); + expect(duration).toBeGreaterThanOrEqual(1000); + expect(duration).toBeLessThan(1100); + }); + }); + + describe('formatDuration', () => { + it('should format milliseconds', () => { + expect(manager.formatDuration(500)).toBe('500ms'); + }); + + it('should format seconds', () => { + expect(manager.formatDuration(5000)).toBe('5.0s'); + }); + + it('should format minutes', () => { + expect(manager.formatDuration(120000)).toBe('2.0m'); + }); + }); + + describe('failFast', () => { + it('should return fail-fast result', () => { + manager.startTime = Date.now(); + const result = manager.failFast({ pass: false, layer: 'Layer 1' }); + + expect(result.pass).toBe(false); + expect(result.status).toBe('failed'); + expect(result.stoppedAt).toBe('layer1'); + 
expect(result.reason).toBe('fail-fast'); + expect(result.exitCode).toBe(1); + }); + }); + + describe('escalate', () => { + it('should return escalation result', () => { + manager.startTime = Date.now(); + const result = manager.escalate({ pass: false, layer: 'Layer 2' }); + + expect(result.pass).toBe(false); + expect(result.status).toBe('blocked'); + expect(result.stoppedAt).toBe('layer2'); + expect(result.reason).toBe('escalation'); + expect(result.exitCode).toBe(1); + }); + }); + + describe('determineOverallStatus', () => { + it('should return not-started when no layer1', () => { + expect(manager.determineOverallStatus({})).toBe('not-started'); + }); + + it('should return layer1-failed when layer1 failed', () => { + expect(manager.determineOverallStatus({ layer1: { pass: false } })).toBe('layer1-failed'); + }); + + it('should return layer1-complete when only layer1 passed', () => { + expect(manager.determineOverallStatus({ layer1: { pass: true } })).toBe('layer1-complete'); + }); + + it('should return layer2-blocked when layer2 failed', () => { + expect(manager.determineOverallStatus({ + layer1: { pass: true }, + layer2: { pass: false }, + })).toBe('layer2-blocked'); + }); + + it('should return layer3-pending when awaiting human review', () => { + expect(manager.determineOverallStatus({ + layer1: { pass: true }, + layer2: { pass: true }, + layer3: { pass: true }, + })).toBe('layer3-pending'); + }); + }); +}); + +``` + +================================================== +📄 tests/unit/quality-gates/human-review-orchestrator.test.js +================================================== +```js +/** + * Human Review Orchestrator Unit Tests + * + * Tests for Story 3.5 - Human Review Orchestration (Layer 3) + * Smoke Tests: HUMAN-01 (Orchestration), HUMAN-02 (Blocking) + * + * @story 3.5 - Human Review Orchestration + */ + +const { HumanReviewOrchestrator } = require('../../../.aios-core/core/quality-gates/human-review-orchestrator'); + 
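+// Illustrative sketch (an assumption inferred from the orchestrateReview tests
+// below, not part of the suite): the layer-gating decision table the
+// orchestrator is expected to implement, modeled as a plain function.
+//
+//   function expectedStatus(layer1Pass, layer2Pass) {
+//     if (!layer1Pass) return { status: 'blocked', stoppedAt: 'layer1' };
+//     if (!layer2Pass) return { status: 'blocked', stoppedAt: 'layer2' };
+//     return { status: 'pending_human_review' };
+//   }
+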
+describe('HumanReviewOrchestrator', () => { + let orchestrator; + + beforeEach(() => { + orchestrator = new HumanReviewOrchestrator({ + statusPath: '.aios/qa-status-test.json', + reviewRequestsPath: '.aios/human-review-requests-test', + }); + }); + + describe('constructor', () => { + it('should create orchestrator with default config', () => { + const defaultOrchestrator = new HumanReviewOrchestrator(); + expect(defaultOrchestrator).toBeDefined(); + expect(defaultOrchestrator.focusRecommender).toBeDefined(); + expect(defaultOrchestrator.notificationManager).toBeDefined(); + }); + + it('should create orchestrator with custom config', () => { + expect(orchestrator.statusPath).toBe('.aios/qa-status-test.json'); + expect(orchestrator.reviewRequestsPath).toBe('.aios/human-review-requests-test'); + }); + }); + + describe('checkLayerPassed', () => { + it('should return pass=false when layer result is null', () => { + const result = orchestrator.checkLayerPassed(null, 'Layer 1'); + expect(result.pass).toBe(false); + expect(result.reason).toBe('Layer 1 not executed'); + }); + + it('should return pass=true when layer passed', () => { + const layerResult = { pass: true, checks: { total: 3, passed: 3, failed: 0 } }; + const result = orchestrator.checkLayerPassed(layerResult, 'Layer 1'); + expect(result.pass).toBe(true); + expect(result.checksPassed).toBe(3); + }); + + it('should extract issues when layer failed', () => { + const layerResult = { + pass: false, + results: [ + { check: 'lint', pass: false, message: 'Lint errors found' }, + { check: 'test', pass: true, message: 'Tests passed' }, + ], + }; + const result = orchestrator.checkLayerPassed(layerResult, 'Layer 1'); + expect(result.pass).toBe(false); + expect(result.issues).toHaveLength(1); + expect(result.issues[0].check).toBe('lint'); + }); + }); + + describe('extractIssues', () => { + it('should return empty array when no results', () => { + const issues = orchestrator.extractIssues({}); + expect(issues).toEqual([]); 
+ }); + + it('should extract failed checks as issues', () => { + const layerResult = { + results: [ + { check: 'lint', pass: false, message: 'Errors found' }, + { check: 'test', pass: false, message: 'Tests failed', error: 'AssertionError' }, + ], + }; + const issues = orchestrator.extractIssues(layerResult); + expect(issues).toHaveLength(2); + expect(issues[0].check).toBe('lint'); + expect(issues[1].details).toBe('AssertionError'); + }); + }); + + describe('determineSeverity', () => { + it('should return CRITICAL for test failures', () => { + expect(orchestrator.determineSeverity({ check: 'test' })).toBe('CRITICAL'); + }); + + it('should return HIGH for lint failures', () => { + expect(orchestrator.determineSeverity({ check: 'lint' })).toBe('HIGH'); + }); + + it('should return CRITICAL for critical coderabbit issues', () => { + expect(orchestrator.determineSeverity({ check: 'coderabbit', issues: { critical: 1 } })).toBe('CRITICAL'); + }); + + it('should return MEDIUM for other issues', () => { + expect(orchestrator.determineSeverity({ check: 'other' })).toBe('MEDIUM'); + }); + }); + + describe('block (HUMAN-02)', () => { + it('should block with layer1 failure', () => { + const layerCheck = { + pass: false, + reason: 'Lint errors', + issues: [{ check: 'lint', message: '5 errors' }], + }; + const result = orchestrator.block(layerCheck, 'layer1', Date.now()); + + expect(result.pass).toBe(false); + expect(result.status).toBe('blocked'); + expect(result.stoppedAt).toBe('layer1'); + expect(result.message).toContain('linting'); + }); + + it('should block with layer2 failure', () => { + const layerCheck = { + pass: false, + reason: 'CodeRabbit issues', + issues: [], + }; + const result = orchestrator.block(layerCheck, 'layer2', Date.now()); + + expect(result.pass).toBe(false); + expect(result.stoppedAt).toBe('layer2'); + expect(result.message).toContain('CodeRabbit'); + }); + }); + + describe('generateFixRecommendations', () => { + it('should generate lint fix 
recommendation', () => { + const layerCheck = { + issues: [{ check: 'lint', message: 'Errors found', severity: 'HIGH' }], + }; + const recs = orchestrator.generateFixRecommendations(layerCheck); + expect(recs[0].suggestion).toContain('npm run lint:fix'); + }); + + it('should generate test fix recommendation', () => { + const layerCheck = { + issues: [{ check: 'test', message: 'Tests failed', severity: 'CRITICAL' }], + }; + const recs = orchestrator.generateFixRecommendations(layerCheck); + expect(recs[0].suggestion).toContain('npm test'); + }); + + it('should generate coderabbit fix recommendation', () => { + const layerCheck = { + issues: [{ check: 'coderabbit', message: 'Issues found', severity: 'HIGH' }], + }; + const recs = orchestrator.generateFixRecommendations(layerCheck); + expect(recs[0].suggestion).toContain('CodeRabbit'); + }); + }); + + describe('orchestrateReview (HUMAN-01)', () => { + it('should block when Layer 1 fails', async () => { + const prContext = { changedFiles: ['file.js'] }; + const layer1Result = { pass: false, results: [{ check: 'lint', pass: false, message: 'Error' }] }; + const layer2Result = { pass: true }; + + const result = await orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + expect(result.pass).toBe(false); + expect(result.status).toBe('blocked'); + expect(result.stoppedAt).toBe('layer1'); + }); + + it('should block when Layer 2 fails', async () => { + const prContext = { changedFiles: ['file.js'] }; + const layer1Result = { pass: true }; + const layer2Result = { pass: false, results: [{ check: 'coderabbit', pass: false, message: 'Issues' }] }; + + const result = await orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + expect(result.pass).toBe(false); + expect(result.status).toBe('blocked'); + expect(result.stoppedAt).toBe('layer2'); + }); + + it('should request human review when both layers pass', async () => { + // Mock file system operations + orchestrator.saveReviewRequest = 
jest.fn().mockResolvedValue(); + orchestrator.notifyReviewer = jest.fn().mockResolvedValue({ success: true }); + + const prContext = { changedFiles: ['auth/login.js'] }; + const layer1Result = { pass: true, results: [{ check: 'lint', pass: true }] }; + const layer2Result = { pass: true, results: [{ check: 'coderabbit', pass: true }] }; + + const result = await orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + expect(result.pass).toBe(true); + expect(result.status).toBe('pending_human_review'); + expect(result.reviewRequest).toBeDefined(); + expect(result.reviewRequest.focusAreas).toBeDefined(); + }); + }); + + describe('generateAutomatedSummary', () => { + it('should generate summary for Layer 1', () => { + const layer1Result = { + pass: true, + results: [ + { check: 'lint', pass: true, message: 'No errors' }, + { check: 'test', pass: true, message: 'All passed' }, + ], + }; + const summary = orchestrator.generateAutomatedSummary(layer1Result, {}); + + expect(summary.layer1.status).toBe('passed'); + expect(summary.layer1.checks).toHaveLength(2); + }); + + it('should generate summary for Layer 2 with CodeRabbit', () => { + const layer2Result = { + pass: true, + results: [ + { check: 'coderabbit', pass: true, issues: { critical: 0, high: 2, medium: 5 } }, + ], + }; + const summary = orchestrator.generateAutomatedSummary({}, layer2Result); + + expect(summary.layer2.coderabbit).toBeDefined(); + expect(summary.layer2.coderabbit.issues.high).toBe(2); + }); + }); + + describe('estimateReviewTime', () => { + it('should return base time with no focus areas', () => { + const focusAreas = { primary: [], secondary: [] }; + expect(orchestrator.estimateReviewTime(focusAreas)).toBe(10); + }); + + it('should add time per primary focus area', () => { + const focusAreas = { + primary: [{ area: 'security' }, { area: 'architecture' }], + secondary: [], + }; + expect(orchestrator.estimateReviewTime(focusAreas)).toBe(20); + }); + + it('should add half time per 
secondary focus area', () => {
+      const focusAreas = {
+        primary: [],
+        secondary: [{ area: 'ux' }, { area: 'performance' }],
+      };
+      expect(orchestrator.estimateReviewTime(focusAreas)).toBe(15);
+    });
+  });
+
+  describe('generateRequestId', () => {
+    it('should generate unique IDs', () => {
+      const id1 = orchestrator.generateRequestId();
+      const id2 = orchestrator.generateRequestId();
+      expect(id1).not.toBe(id2);
+      expect(id1).toMatch(/^hr-/);
+    });
+  });
+
+  describe('validateRequestId (Security)', () => {
+    it('should accept valid alphanumeric IDs', () => {
+      expect(orchestrator.validateRequestId('hr-abc123')).toBe('hr-abc123');
+      expect(orchestrator.validateRequestId('hr_test.id')).toBe('hr_test.id');
+      expect(orchestrator.validateRequestId('HR-ABC-123')).toBe('HR-ABC-123');
+    });
+
+    it('should reject path traversal attempts with ../', () => {
+      expect(() => orchestrator.validateRequestId('../../../etc/passwd')).toThrow('Invalid request ID');
+      expect(() => orchestrator.validateRequestId('..\\..\\windows\\system32')).toThrow('Invalid request ID');
+    });
+
+    it('should reject IDs with slashes', () => {
+      expect(() => orchestrator.validateRequestId('path/to/file')).toThrow('Invalid request ID');
+      expect(() => orchestrator.validateRequestId('path\\to\\file')).toThrow('Invalid request ID');
+    });
+
+    it('should reject IDs with special characters', () => {
+      expect(() => orchestrator.validateRequestId('id!@#$%')).toThrow('Invalid request ID');
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/core/wave-executor.test.js
+==================================================
+```js
+/**
+ * Wave Executor Tests - Story EXC-1 (AC1)
+ *
+ * Tests for parallel wave execution including:
+ * - Constructor and configuration
+ * - executeWaves() single/multiple/mixed scenarios
+ * 
- executeWave() parallel chunking + * - executeTaskWithTimeout() success and timeout + * - chunkArray() utility + * - calculateMetrics() + * - Event emissions + */ + +const { + createMockTask, + createMockTasks, + createMockTaskExecutor, + collectEvents, +} = require('./execution-test-helpers'); + +// Mock WaveAnalyzer module to prevent constructor error in WaveExecutor +// The wave-executor.js does try { require('../../workflow-intelligence/engine/wave-analyzer') } +// which succeeds but exports a non-constructor. We force it to null via jest.mock. +jest.mock('../../.aios-core/workflow-intelligence/engine/wave-analyzer', () => null); + +const WaveExecutor = require('../../.aios-core/core/execution/wave-executor'); + +// Null-safe analyzer for tests that need one +const NOOP_ANALYZER = { analyze: jest.fn().mockReturnValue({ waves: [] }) }; + +// ═══════════════════════════════════════════════════════════════════════════════ +// CONSTRUCTOR +// ═══════════════════════════════════════════════════════════════════════════════ + +describe('WaveExecutor', () => { + let executor; + + afterEach(() => { + if (executor) { + executor.removeAllListeners(); + } + jest.useRealTimers(); + }); + + describe('Constructor', () => { + test('should create with default config (WE-01)', () => { + executor = new WaveExecutor(); + + expect(executor.maxParallel).toBe(4); + expect(executor.taskTimeout).toBe(10 * 60 * 1000); + expect(executor.continueOnNonCriticalFailure).toBe(true); + expect(executor.activeExecutions).toBeInstanceOf(Map); + expect(executor.completedWaves).toEqual([]); + }); + + test('should create with custom config (WE-02)', () => { + executor = new WaveExecutor({ + maxParallel: 8, + taskTimeout: 5000, + continueOnNonCriticalFailure: false, + }); + + expect(executor.maxParallel).toBe(8); + expect(executor.taskTimeout).toBe(5000); + expect(executor.continueOnNonCriticalFailure).toBe(false); + }); + + test('should accept custom waveAnalyzer and taskExecutor', () => { + const 
mockAnalyzer = { analyze: jest.fn() }; + const mockExecutor = jest.fn(); + + executor = new WaveExecutor({ + waveAnalyzer: mockAnalyzer, + taskExecutor: mockExecutor, + }); + + expect(executor.waveAnalyzer).toBe(mockAnalyzer); + expect(executor.taskExecutor).toBe(mockExecutor); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // executeWaves() + // ───────────────────────────────────────────────────────────────────────────── + + describe('executeWaves()', () => { + test('should return success with no waves (WE-03)', async () => { + const mockAnalyzer = { + analyze: jest.fn().mockReturnValue({ waves: [] }), + }; + executor = new WaveExecutor({ waveAnalyzer: mockAnalyzer }); + + const result = await executor.executeWaves('wf-1'); + + expect(result.success).toBe(true); + expect(result.waves).toEqual([]); + expect(result.totalDuration).toBe(0); + expect(result.message).toBe('No waves to execute'); + }); + + test('should execute single wave successfully (WE-04)', async () => { + const tasks = createMockTasks(2); + const mockAnalyzer = { + analyze: jest.fn().mockReturnValue({ + waves: [{ index: 1, tasks }], + }), + }; + const mockTaskExec = createMockTaskExecutor({ success: true }); + + executor = new WaveExecutor({ + waveAnalyzer: mockAnalyzer, + taskExecutor: mockTaskExec, + }); + + const result = await executor.executeWaves('wf-1'); + + expect(result.success).toBe(true); + expect(result.waves).toHaveLength(1); + expect(result.waves[0].allSucceeded).toBe(true); + expect(result.aborted).toBe(false); + }); + + test('should execute multiple sequential waves (WE-05)', async () => { + const tasks1 = createMockTasks(2); + const tasks2 = [createMockTask({ id: 'task-3' })]; + const mockAnalyzer = { + analyze: jest.fn().mockReturnValue({ + waves: [ + { index: 1, tasks: tasks1 }, + { index: 2, tasks: tasks2 }, + ], + }), + }; + const mockTaskExec = createMockTaskExecutor({ success: true }); + + executor = new WaveExecutor({ + 
waveAnalyzer: mockAnalyzer, + taskExecutor: mockTaskExec, + }); + + const result = await executor.executeWaves('wf-1'); + + expect(result.success).toBe(true); + expect(result.waves).toHaveLength(2); + expect(result.metrics.totalWaves).toBe(2); + }); + + test('should handle mixed success/failure with non-critical tasks (WE-06)', async () => { + let callCount = 0; + const mockTaskExec = jest.fn().mockImplementation(async () => { + callCount++; + if (callCount === 2) { + return { success: false, error: 'fail' }; + } + return { success: true, output: 'ok' }; + }); + + const tasks = createMockTasks(3); + const mockAnalyzer = { + analyze: jest.fn().mockReturnValue({ + waves: [{ index: 1, tasks }], + }), + }; + + executor = new WaveExecutor({ + waveAnalyzer: mockAnalyzer, + taskExecutor: mockTaskExec, + continueOnNonCriticalFailure: true, + }); + + const result = await executor.executeWaves('wf-1'); + + // Non-critical failure doesn't abort + expect(result.aborted).toBe(false); + expect(result.waves[0].allSucceeded).toBe(false); + }); + + test('should abort on critical task failure when configured (WE-07)', async () => { + const mockTaskExec = jest.fn().mockImplementation(async () => { + return { success: false, error: 'critical failure' }; + }); + + const tasks = [createMockTask({ id: 'task-1', critical: true })]; + const mockAnalyzer = { + analyze: jest.fn().mockReturnValue({ + waves: [ + { index: 1, tasks }, + { index: 2, tasks: createMockTasks(1) }, + ], + }), + }; + + executor = new WaveExecutor({ + waveAnalyzer: mockAnalyzer, + taskExecutor: mockTaskExec, + continueOnNonCriticalFailure: false, + }); + + const result = await executor.executeWaves('wf-1'); + + expect(result.aborted).toBe(true); + expect(result.success).toBe(false); + // Second wave should not execute + expect(result.waves).toHaveLength(1); + }); + + test('should use fallback single wave without analyzer (WE-08)', async () => { + const tasks = createMockTasks(2); + const mockTaskExec = 
createMockTaskExecutor({ success: true }); + + executor = new WaveExecutor({ + waveAnalyzer: null, + taskExecutor: mockTaskExec, + }); + + const result = await executor.executeWaves('wf-1', { tasks }); + + expect(result.success).toBe(true); + expect(result.waves).toHaveLength(1); + }); + + test('should emit execution events (WE-08)', async () => { + const tasks = createMockTasks(1); + const mockAnalyzer = { + analyze: jest.fn().mockReturnValue({ + waves: [{ index: 1, tasks }], + }), + }; + const mockTaskExec = createMockTaskExecutor({ success: true }); + + executor = new WaveExecutor({ + waveAnalyzer: mockAnalyzer, + taskExecutor: mockTaskExec, + }); + + const tracker = collectEvents(executor, [ + 'execution_started', + 'wave_started', + 'wave_completed', + 'execution_completed', + 'task_started', + 'task_completed', + ]); + + await executor.executeWaves('wf-1'); + + expect(tracker.count('execution_started')).toBe(1); + expect(tracker.count('wave_started')).toBe(1); + expect(tracker.count('wave_completed')).toBe(1); + expect(tracker.count('execution_completed')).toBe(1); + expect(tracker.count('task_started')).toBe(1); + expect(tracker.count('task_completed')).toBe(1); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // executeWave() + // ───────────────────────────────────────────────────────────────────────────── + + describe('executeWave()', () => { + test('should return empty for wave with no tasks', async () => { + executor = new WaveExecutor(); + + const result = await executor.executeWave({ index: 1, tasks: [] }, {}); + + expect(result).toEqual([]); + }); + + test('should respect maxParallel chunking (WE-09)', async () => { + let concurrentCount = 0; + let maxConcurrent = 0; + + const mockTaskExec = jest.fn().mockImplementation(async () => { + concurrentCount++; + maxConcurrent = Math.max(maxConcurrent, concurrentCount); + await new Promise((resolve) => setTimeout(resolve, 10)); + concurrentCount--; + return 
{ success: true, output: 'ok' }; + }); + + executor = new WaveExecutor({ + maxParallel: 2, + taskExecutor: mockTaskExec, + }); + + const tasks = createMockTasks(4); + await executor.executeWave({ index: 1, tasks }, {}); + + // With maxParallel=2 and 4 tasks, max concurrent should be 2 + expect(maxConcurrent).toBeLessThanOrEqual(2); + expect(mockTaskExec).toHaveBeenCalledTimes(4); + }); + + test('should collect results from all tasks in wave', async () => { + const mockTaskExec = createMockTaskExecutor({ success: true }); + executor = new WaveExecutor({ taskExecutor: mockTaskExec }); + + const tasks = createMockTasks(3); + const results = await executor.executeWave({ index: 1, tasks }, {}); + + expect(results).toHaveLength(3); + results.forEach((r) => { + expect(r.success).toBe(true); + expect(r).toHaveProperty('taskId'); + expect(r).toHaveProperty('duration'); + }); + }); + + test('should handle rejected promises gracefully', async () => { + const mockTaskExec = jest.fn().mockRejectedValue(new Error('Boom')); + executor = new WaveExecutor({ taskExecutor: mockTaskExec }); + + const tasks = createMockTasks(1); + const results = await executor.executeWave({ index: 1, tasks }, {}); + + expect(results).toHaveLength(1); + expect(results[0].success).toBe(false); + // executeTaskWithTimeout catches the error internally, so result is fulfilled + // with success: false and error message + expect(results[0].result.error).toBe('Boom'); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // executeTaskWithTimeout() + // ───────────────────────────────────────────────────────────────────────────── + + describe('executeTaskWithTimeout()', () => { + test('should execute task successfully (WE-13)', async () => { + const mockTaskExec = createMockTaskExecutor({ success: true, output: 'done' }); + executor = new WaveExecutor({ taskExecutor: mockTaskExec }); + + const task = createMockTask({ id: 'task-1' }); + const result = await 
executor.executeTaskWithTimeout(task, {}); + + expect(result.success).toBe(true); + expect(result.duration).toBeGreaterThanOrEqual(0); + }); + + test('should timeout task that takes too long (WE-14)', async () => { + jest.useFakeTimers(); + + const neverResolves = jest.fn().mockImplementation( + () => new Promise(() => {}), // Never resolves + ); + + executor = new WaveExecutor({ + taskExecutor: neverResolves, + taskTimeout: 5000, + }); + + const task = createMockTask({ id: 'slow-task' }); + const resultPromise = executor.executeTaskWithTimeout(task, {}); + + jest.advanceTimersByTime(5001); + + const result = await resultPromise; + + expect(result.success).toBe(false); + expect(result.error).toContain('timed out'); + }); + + test('should track active executions', async () => { + const mockTaskExec = createMockTaskExecutor({ success: true, delay: 10 }); + executor = new WaveExecutor({ taskExecutor: mockTaskExec }); + + const task = createMockTask({ id: 'tracked-task' }); + const promise = executor.executeTaskWithTimeout(task, {}); + + // During execution, task should be tracked + expect(executor.activeExecutions.has('tracked-task')).toBe(true); + + await promise; + + // After execution, still tracked briefly for monitoring + expect(executor.activeExecutions.get('tracked-task').status).toBe('completed'); + }); + + test('should emit task_started event', async () => { + const mockTaskExec = createMockTaskExecutor({ success: true }); + executor = new WaveExecutor({ taskExecutor: mockTaskExec }); + + const tracker = collectEvents(executor, ['task_started']); + const task = createMockTask({ id: 'emit-task' }); + + await executor.executeTaskWithTimeout(task, {}); + + expect(tracker.count('task_started')).toBe(1); + expect(tracker.getByName('task_started')[0].data.taskId).toBe('emit-task'); + }); + + test('should use default executor when no taskExecutor provided', async () => { + executor = new WaveExecutor({ taskExecutor: null }); + + const task = createMockTask({ id: 
'default-task' }); + const result = await executor.executeTaskWithTimeout(task, {}); + + expect(result.success).toBe(true); + }); + + test('should use rate limit manager when available', async () => { + const mockRLM = { + executeWithRetry: jest.fn().mockResolvedValue({ success: true, output: 'ok' }), + }; + const mockTaskExec = createMockTaskExecutor({ success: true }); + + executor = new WaveExecutor({ + taskExecutor: mockTaskExec, + rateLimitManager: mockRLM, + }); + + const task = createMockTask({ id: 'rl-task' }); + await executor.executeTaskWithTimeout(task, {}); + + expect(mockRLM.executeWithRetry).toHaveBeenCalled(); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // chunkArray() + // ───────────────────────────────────────────────────────────────────────────── + + describe('chunkArray()', () => { + beforeEach(() => { + executor = new WaveExecutor(); + }); + + test('should chunk array evenly (WE-15)', () => { + const result = executor.chunkArray([1, 2, 3, 4], 2); + expect(result).toEqual([[1, 2], [3, 4]]); + }); + + test('should handle uneven chunks (WE-16)', () => { + const result = executor.chunkArray([1, 2, 3, 4, 5], 2); + expect(result).toEqual([[1, 2], [3, 4], [5]]); + }); + + test('should handle empty array', () => { + const result = executor.chunkArray([], 3); + expect(result).toEqual([]); + }); + + test('should handle chunk size larger than array', () => { + const result = executor.chunkArray([1, 2], 5); + expect(result).toEqual([[1, 2]]); + }); + + test('should handle single element', () => { + const result = executor.chunkArray([1], 1); + expect(result).toEqual([[1]]); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // calculateMetrics() + // ───────────────────────────────────────────────────────────────────────────── + + describe('calculateMetrics()', () => { + beforeEach(() => { + executor = new WaveExecutor(); + }); + + test('should calculate 
metrics for successful waves (WE-17)', () => { + const waveResults = [ + { + wave: 1, + results: [ + { taskId: 't1', success: true, duration: 1000 }, + { taskId: 't2', success: true, duration: 2000 }, + ], + allSucceeded: true, + }, + ]; + + const metrics = executor.calculateMetrics(waveResults); + + expect(metrics.totalTasks).toBe(2); + expect(metrics.successful).toBe(2); + expect(metrics.failed).toBe(0); + expect(metrics.successRate).toBe(100); + expect(metrics.totalDuration).toBe(3000); + expect(metrics.wallTime).toBe(2000); + expect(metrics.parallelEfficiency).toBe(1.5); + expect(metrics.totalWaves).toBe(1); + }); + + test('should calculate metrics with failures', () => { + const waveResults = [ + { + wave: 1, + results: [ + { taskId: 't1', success: true, duration: 1000 }, + { taskId: 't2', success: false, duration: 500 }, + ], + allSucceeded: false, + }, + ]; + + const metrics = executor.calculateMetrics(waveResults); + + expect(metrics.totalTasks).toBe(2); + expect(metrics.successful).toBe(1); + expect(metrics.failed).toBe(1); + expect(metrics.successRate).toBe(50); + }); + + test('should handle empty wave results', () => { + const metrics = executor.calculateMetrics([]); + + expect(metrics.totalTasks).toBe(0); + expect(metrics.successRate).toBe(100); + expect(metrics.totalWaves).toBe(0); + }); + + test('should calculate across multiple waves', () => { + const waveResults = [ + { + wave: 1, + results: [ + { taskId: 't1', success: true, duration: 1000 }, + ], + }, + { + wave: 2, + results: [ + { taskId: 't2', success: true, duration: 2000 }, + { taskId: 't3', success: true, duration: 1500 }, + ], + }, + ]; + + const metrics = executor.calculateMetrics(waveResults); + + expect(metrics.totalTasks).toBe(3); + expect(metrics.totalWaves).toBe(2); + expect(metrics.wallTime).toBe(1000 + 2000); // max of each wave + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // OTHER METHODS + // 
───────────────────────────────────────────────────────────────────────────── + + describe('getStatus()', () => { + test('should return current execution status', () => { + executor = new WaveExecutor(); + + const status = executor.getStatus(); + + expect(status).toHaveProperty('currentWave'); + expect(status).toHaveProperty('activeExecutions'); + expect(status).toHaveProperty('completedWaves'); + }); + }); + + describe('formatStatus()', () => { + test('should return formatted string', () => { + executor = new WaveExecutor(); + + const output = executor.formatStatus(); + + expect(output).toContain('Wave Executor Status'); + expect(typeof output).toBe('string'); + }); + }); + + describe('cancelAll()', () => { + test('should cancel active executions and emit events', async () => { + executor = new WaveExecutor(); + + const tracker = collectEvents(executor, ['task_cancelled', 'execution_cancelled']); + + // Simulate active execution + executor.activeExecutions.set('task-1', { status: 'running' }); + + executor.cancelAll(); + + expect(tracker.count('task_cancelled')).toBe(1); + expect(tracker.count('execution_cancelled')).toBe(1); + expect(executor.activeExecutions.get('task-1').status).toBe('cancelled'); + }); + }); +}); + +``` + +================================================== +📄 tests/core/rate-limit-manager.test.js +================================================== +```js +/** + * Rate Limit Manager - Test Suite + * Story EXC-1, AC7 - rate-limit-manager.js coverage + * + * Tests: constructor, executeWithRetry, calculateDelay, preemptiveThrottle, + * isRateLimitError, metrics, withRateLimit, getGlobalManager + */ + +const { + collectEvents, +} = require('./execution-test-helpers'); + +const { + RateLimitManager, + withRateLimit, + getGlobalManager, +} = require('../../.aios-core/core/execution/rate-limit-manager'); + +describe('RateLimitManager', () => { + // ── Constructor ───────────────────────────────────────────────────── + + describe('Constructor', () => { 
+ test('creates with defaults', () => { + const rlm = new RateLimitManager(); + expect(rlm.maxRetries).toBe(5); + expect(rlm.baseDelay).toBe(1000); + expect(rlm.maxDelay).toBe(30000); + expect(rlm.requestsPerMinute).toBe(50); + }); + + test('accepts custom config', () => { + const rlm = new RateLimitManager({ maxRetries: 3, baseDelay: 500, maxDelay: 5000, requestsPerMinute: 10 }); + expect(rlm.maxRetries).toBe(3); + expect(rlm.baseDelay).toBe(500); + expect(rlm.maxDelay).toBe(5000); + expect(rlm.requestsPerMinute).toBe(10); + }); + + test('extends EventEmitter', () => { + const rlm = new RateLimitManager(); + expect(typeof rlm.on).toBe('function'); + }); + + test('initializes metrics to zero', () => { + const rlm = new RateLimitManager(); + expect(rlm.metrics.rateLimitHits).toBe(0); + expect(rlm.metrics.totalRetries).toBe(0); + expect(rlm.metrics.totalRequests).toBe(0); + }); + }); + + // ── isRateLimitError ────────────────────────────────────────────────── + + describe('isRateLimitError', () => { + test('detects HTTP 429', () => { + const rlm = new RateLimitManager(); + expect(rlm.isRateLimitError({ status: 429, message: '' })).toBe(true); + expect(rlm.isRateLimitError({ statusCode: 429, message: '' })).toBe(true); + }); + + test('detects rate limit messages', () => { + const rlm = new RateLimitManager(); + expect(rlm.isRateLimitError({ message: 'Rate limit exceeded' })).toBe(true); + expect(rlm.isRateLimitError({ message: 'Too many requests' })).toBe(true); + expect(rlm.isRateLimitError({ message: 'Request throttled' })).toBe(true); + expect(rlm.isRateLimitError({ message: 'Quota exceeded' })).toBe(true); + expect(rlm.isRateLimitError({ message: 'API overloaded' })).toBe(true); + }); + + test('detects rate limit error codes', () => { + const rlm = new RateLimitManager(); + expect(rlm.isRateLimitError({ code: 'RATE_LIMITED', message: '' })).toBe(true); + expect(rlm.isRateLimitError({ code: 'TOO_MANY_REQUESTS', message: '' })).toBe(true); + }); + + test('returns 
false for non-rate-limit errors', () => { + const rlm = new RateLimitManager(); + expect(rlm.isRateLimitError({ message: 'Connection timeout' })).toBe(false); + expect(rlm.isRateLimitError({ status: 500, message: 'Server error' })).toBe(false); + }); + }); + + // ── calculateDelay ──────────────────────────────────────────────────── + + describe('calculateDelay', () => { + test('uses retryAfter from error', () => { + const rlm = new RateLimitManager({ maxDelay: 60000 }); + const error = { retryAfter: 5, message: '' }; + expect(rlm.calculateDelay(1, error)).toBe(5000); + }); + + test('caps retryAfter to maxDelay', () => { + const rlm = new RateLimitManager({ maxDelay: 3000 }); + const error = { retryAfter: 10, message: '' }; + expect(rlm.calculateDelay(1, error)).toBe(3000); + }); + + test('extracts retry-after from error message', () => { + const rlm = new RateLimitManager({ maxDelay: 60000 }); + const error = { message: 'Please retry after 3 seconds' }; + expect(rlm.calculateDelay(1, error)).toBe(3000); + }); + + test('uses exponential backoff', () => { + const rlm = new RateLimitManager({ baseDelay: 1000, maxDelay: 60000 }); + const error = { message: '' }; + const delay1 = rlm.calculateDelay(1, error); + const delay2 = rlm.calculateDelay(2, error); + // delay1 should be ~1000+jitter, delay2 ~2000+jitter + expect(delay1).toBeLessThanOrEqual(2000); // 1000 base + up to 1000 jitter + expect(delay2).toBeLessThanOrEqual(3000); // 2000 base + up to 1000 jitter + }); + + test('caps at maxDelay', () => { + const rlm = new RateLimitManager({ baseDelay: 1000, maxDelay: 5000 }); + const error = { message: '' }; + const delay = rlm.calculateDelay(10, error); // 2^9 * 1000 = 512000 + expect(delay).toBeLessThanOrEqual(5000); + }); + }); + + // ── executeWithRetry ────────────────────────────────────────────────── + + describe('executeWithRetry', () => { + test('returns result on success', async () => { + const rlm = new RateLimitManager(); + const result = await 
rlm.executeWithRetry(() => Promise.resolve('ok')); + expect(result).toBe('ok'); + expect(rlm.metrics.totalRequests).toBe(1); + }); + + test('throws non-rate-limit errors immediately', async () => { + const rlm = new RateLimitManager(); + await expect( + rlm.executeWithRetry(() => { throw new Error('connection failed'); }), + ).rejects.toThrow('connection failed'); + }); + + test('retries on rate limit error', async () => { + const rlm = new RateLimitManager({ maxRetries: 2, baseDelay: 1 }); + // Override sleep to be instant + rlm.sleep = () => Promise.resolve(); + + let calls = 0; + const fn = () => { + calls++; + if (calls === 1) { + const err = new Error('Rate limit exceeded'); + throw err; + } + return Promise.resolve('success'); + }; + + const result = await rlm.executeWithRetry(fn); + expect(result).toBe('success'); + expect(calls).toBe(2); + expect(rlm.metrics.rateLimitHits).toBe(1); + expect(rlm.metrics.successAfterRetry).toBe(1); + }); + + test('throws after maxRetries exceeded', async () => { + const rlm = new RateLimitManager({ maxRetries: 2, baseDelay: 1 }); + rlm.sleep = () => Promise.resolve(); + + const fn = () => { throw new Error('Rate limit exceeded'); }; + + await expect(rlm.executeWithRetry(fn)).rejects.toThrow('Rate limit exceeded after 2 retries'); + expect(rlm.metrics.rateLimitHits).toBe(2); + }); + + test('emits rate_limit_hit and waiting events', async () => { + const rlm = new RateLimitManager({ maxRetries: 2, baseDelay: 1 }); + rlm.sleep = () => Promise.resolve(); + + let calls = 0; + const fn = () => { + calls++; + if (calls === 1) throw new Error('Rate limit exceeded'); + return Promise.resolve('ok'); + }; + + const events = collectEvents(rlm, ['rate_limit_hit', 'waiting']); + await rlm.executeWithRetry(fn); + + expect(events.count('rate_limit_hit')).toBe(1); + expect(events.count('waiting')).toBe(1); + }); + }); + + // ── Metrics ─────────────────────────────────────────────────────────── + + describe('Metrics', () => { + 
test('getMetrics returns computed fields', () => { + const rlm = new RateLimitManager(); + rlm.metrics.totalRequests = 10; + rlm.metrics.rateLimitHits = 2; + rlm.metrics.totalRetries = 3; + rlm.metrics.totalWaitTime = 9000; + const metrics = rlm.getMetrics(); + expect(metrics.averageWaitTime).toBe(3000); + expect(metrics.successRate).toBe(80); + expect(metrics.requestsPerMinuteLimit).toBe(50); + }); + + test('getMetrics handles zero retries', () => { + const rlm = new RateLimitManager(); + const metrics = rlm.getMetrics(); + expect(metrics.averageWaitTime).toBe(0); + expect(metrics.successRate).toBe(100); + }); + + test('resetMetrics clears everything', () => { + const rlm = new RateLimitManager(); + rlm.metrics.totalRequests = 10; + rlm.logEvent('test', {}); + rlm.resetMetrics(); + expect(rlm.metrics.totalRequests).toBe(0); + expect(rlm.eventLog.length).toBe(0); + }); + }); + + // ── Event logging ───────────────────────────────────────────────────── + + describe('Event logging', () => { + test('logEvent stores events', () => { + const rlm = new RateLimitManager(); + rlm.logEvent('test_event', { key: 'value' }); + expect(rlm.eventLog.length).toBe(1); + expect(rlm.eventLog[0].type).toBe('test_event'); + }); + + test('logEvent trims to maxEventLog', () => { + const rlm = new RateLimitManager(); + rlm.maxEventLog = 3; + for (let i = 0; i < 5; i++) { + rlm.logEvent(`event-${i}`, {}); + } + expect(rlm.eventLog.length).toBe(3); + }); + + test('getRecentEvents returns limited entries', () => { + const rlm = new RateLimitManager(); + for (let i = 0; i < 10; i++) { + rlm.logEvent(`event-${i}`, {}); + } + expect(rlm.getRecentEvents(3).length).toBe(3); + }); + }); + + // ── formatStatus ────────────────────────────────────────────────────── + + describe('formatStatus', () => { + test('returns formatted status', () => { + const rlm = new RateLimitManager(); + const status = rlm.formatStatus(); + expect(status).toContain('Rate Limit Manager'); + expect(status).toContain('Total 
Requests'); + }); + }); + + // ── withRateLimit ───────────────────────────────────────────────────── + + describe('withRateLimit', () => { + test('wraps function with rate limiting', async () => { + const rlm = new RateLimitManager(); + const fn = (x) => Promise.resolve(x * 2); + const wrapped = withRateLimit(fn, rlm); + const result = await wrapped(5); + expect(result).toBe(10); + }); + }); + + // ── getGlobalManager ────────────────────────────────────────────────── + + describe('getGlobalManager', () => { + test('returns singleton instance', () => { + const m1 = getGlobalManager(); + const m2 = getGlobalManager(); + expect(m1).toBe(m2); + expect(m1).toBeInstanceOf(RateLimitManager); + }); + }); +}); + +``` + +================================================== +📄 tests/core/unified-activation-pipeline.test.js +================================================== +```js +/** + * Integration Tests for UnifiedActivationPipeline + * + * Story ACT-6: Unified Activation Pipeline + * + * Tests: + * - Each of 12 agents activates through unified pipeline + * - Identical context structure for all agents + * - Parallel loading of 5 loaders + * - Sequential steps with data dependencies + * - Timeout protection and fallback behavior + * - Backward compatibility (generate-greeting.js still works) + * - Performance targets (<200ms total) + * - Error isolation (one loader fails, others still work) + */ + +'use strict'; + +// --- Mock Setup (BEFORE requiring modules) --- + +const mockCoreConfig = { + user_profile: 'advanced', + agentIdentity: { + greeting: { + preference: 'auto', + contextDetection: true, + sessionDetection: 'hybrid', + }, + }, + dataLocation: '.aios-core/data', + devStoryLocation: 'docs/stories', + projectStatus: { enabled: true }, +}; + +const mockAgentDefinition = { + agent: { + id: 'dev', + name: 'Dex', + icon: '\uD83D\uDCBB', + title: 'Full Stack Developer', + }, + persona_profile: { + archetype: 'Builder', + communication: { + greeting_levels: { + minimal: 
'\uD83D\uDCBB dev Agent ready', + named: '\uD83D\uDCBB Dex (Builder) ready. Let\'s build something great!', + archetypal: '\uD83D\uDCBB Dex the Builder ready to innovate!', + }, + signature_closing: '-- Dex, sempre construindo', + }, + greeting_levels: { + minimal: '\uD83D\uDCBB dev Agent ready', + named: '\uD83D\uDCBB Dex (Builder) ready. Let\'s build something great!', + archetypal: '\uD83D\uDCBB Dex the Builder ready to innovate!', + }, + }, + persona: { + role: 'Expert Senior Software Engineer', + }, + commands: [ + { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show help' }, + { name: 'develop', visibility: ['full', 'quick'], description: 'Implement story' }, + { name: 'exit', visibility: ['full', 'quick', 'key'], description: 'Exit' }, + ], +}; + +const mockSessionContext = { + sessionType: 'new', + message: null, + previousAgent: null, + lastCommands: [], + workflowActive: null, + currentStory: null, +}; + +const mockProjectStatus = { + branch: 'main', + modifiedFiles: [], + modifiedFilesTotalCount: 0, + recentCommits: [], + currentStory: null, +}; + +const mockGitConfig = { + configured: true, + type: 'github', + branch: 'main', +}; + +// Mock fs.promises +jest.mock('fs', () => { + const actual = jest.requireActual('fs'); + return { + ...actual, + promises: { + ...actual.promises, + readFile: jest.fn().mockImplementation((filePath) => { + if (filePath.includes('core-config.yaml')) { + const yaml = jest.requireActual('js-yaml'); + return Promise.resolve(yaml.dump(mockCoreConfig)); + } + if (filePath.includes('.md')) { + return Promise.resolve('```yaml\nagent:\n id: dev\n name: Dex\n icon: "\uD83D\uDCBB"\n```'); + } + return Promise.resolve(''); + }), + access: jest.fn().mockResolvedValue(undefined), + }, + existsSync: actual.existsSync, + readFileSync: jest.fn().mockImplementation((filePath) => { + if (filePath.includes('core-config.yaml')) { + const yaml = jest.requireActual('js-yaml'); + return yaml.dump(mockCoreConfig); + } + if 
(filePath.includes('workflow-patterns.yaml')) { + return 'workflows: {}'; + } + if (filePath.includes('session-state.json')) { + return JSON.stringify({}); + } + return ''; + }), + }; +}); + +// Mock agent-config-loader +jest.mock('../../.aios-core/development/scripts/agent-config-loader', () => ({ + AgentConfigLoader: jest.fn().mockImplementation(() => ({ + loadComplete: jest.fn().mockResolvedValue({ + config: { dataLocation: '.aios-core/data' }, + definition: mockAgentDefinition, + agent: mockAgentDefinition.agent, + persona_profile: mockAgentDefinition.persona_profile, + commands: mockAgentDefinition.commands, + }), + })), +})); + +// Mock session context loader +jest.mock('../../.aios-core/core/session/context-loader', () => { + return jest.fn().mockImplementation(() => ({ + loadContext: jest.fn().mockReturnValue(mockSessionContext), + })); +}); + +// Mock project status loader +jest.mock('../../.aios-core/infrastructure/scripts/project-status-loader', () => ({ + loadProjectStatus: jest.fn().mockResolvedValue(mockProjectStatus), +})); + +// Mock git config detector +jest.mock('../../.aios-core/infrastructure/scripts/git-config-detector', () => { + return jest.fn().mockImplementation(() => ({ + get: jest.fn().mockReturnValue(mockGitConfig), + })); +}); + +// Mock permission mode +jest.mock('../../.aios-core/core/permissions', () => ({ + PermissionMode: jest.fn().mockImplementation(() => ({ + currentMode: 'ask', + load: jest.fn().mockResolvedValue('ask'), + getBadge: jest.fn().mockReturnValue('[Ask]'), + _loaded: false, + })), + OperationGuard: jest.fn(), +})); + +// Mock config-resolver +jest.mock('../../.aios-core/core/config/config-resolver', () => ({ + resolveConfig: jest.fn().mockReturnValue({ + config: mockCoreConfig, + }), +})); + +// Mock validate-user-profile +jest.mock('../../.aios-core/infrastructure/scripts/validate-user-profile', () => ({ + validateUserProfile: jest.fn().mockReturnValue({ + valid: true, + value: 'advanced', + warning: null, + }), 
+})); + +// Mock greeting-preference-manager +jest.mock('../../.aios-core/development/scripts/greeting-preference-manager', () => { + return jest.fn().mockImplementation(() => ({ + getPreference: jest.fn().mockReturnValue('auto'), + })); +}); + +// Mock context-detector +jest.mock('../../.aios-core/core/session/context-detector', () => { + return jest.fn().mockImplementation(() => ({ + detectSessionType: jest.fn().mockReturnValue('new'), + })); +}); + +// Mock workflow-navigator +jest.mock('../../.aios-core/development/scripts/workflow-navigator', () => { + return jest.fn().mockImplementation(() => ({ + detectWorkflowState: jest.fn().mockReturnValue(null), + getNextSteps: jest.fn().mockReturnValue([]), + suggestNextCommands: jest.fn().mockReturnValue([]), + formatSuggestions: jest.fn().mockReturnValue(''), + getGreetingMessage: jest.fn().mockReturnValue(''), + extractContext: jest.fn().mockReturnValue({}), + patterns: { workflows: {} }, + })); +}); + +// Mock config cache +jest.mock('../../.aios-core/core/config/config-cache', () => ({ + globalConfigCache: { + get: jest.fn().mockReturnValue(null), + set: jest.fn(), + invalidate: jest.fn(), + }, +})); + +// Mock performance tracker +jest.mock('../../.aios-core/infrastructure/scripts/performance-tracker', () => ({ + trackConfigLoad: jest.fn(), +})); + +// --- Require modules AFTER mocks --- +const { UnifiedActivationPipeline, ALL_AGENT_IDS, LOADER_TIERS, DEFAULT_PIPELINE_TIMEOUT_MS, FALLBACK_PHRASE } = require('../../.aios-core/development/scripts/unified-activation-pipeline'); +const { AgentConfigLoader } = require('../../.aios-core/development/scripts/agent-config-loader'); +const SessionContextLoader = require('../../.aios-core/core/session/context-loader'); +const { loadProjectStatus } = require('../../.aios-core/infrastructure/scripts/project-status-loader'); +const GitConfigDetector = require('../../.aios-core/infrastructure/scripts/git-config-detector'); +const { PermissionMode } = 
require('../../.aios-core/core/permissions'); + +// Track mock timers to prevent Jest worker exit warnings from orphaned setTimeout calls +const _pendingMockTimers = []; + +// ============================================================ +// Tests +// ============================================================ + +describe('UnifiedActivationPipeline', () => { + let pipeline; + + beforeEach(() => { + jest.clearAllMocks(); + + // Restore default mock implementations that may have been overridden by prior tests. + // jest.clearAllMocks() only clears call history, NOT implementations set via mockImplementation(). + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockResolvedValue({ + config: { dataLocation: '.aios-core/data' }, + definition: mockAgentDefinition, + agent: mockAgentDefinition.agent, + persona_profile: mockAgentDefinition.persona_profile, + commands: mockAgentDefinition.commands, + }), + })); + + SessionContextLoader.mockImplementation(() => ({ + loadContext: jest.fn().mockReturnValue(mockSessionContext), + })); + + loadProjectStatus.mockImplementation(() => Promise.resolve(mockProjectStatus)); + + GitConfigDetector.mockImplementation(() => ({ + get: jest.fn().mockReturnValue(mockGitConfig), + })); + + PermissionMode.mockImplementation(() => ({ + currentMode: 'ask', + load: jest.fn().mockResolvedValue('ask'), + getBadge: jest.fn().mockReturnValue('[Ask]'), + _loaded: false, + })); + + const ContextDetector = require('../../.aios-core/core/session/context-detector'); + ContextDetector.mockImplementation(() => ({ + detectSessionType: jest.fn().mockReturnValue('new'), + })); + + pipeline = new UnifiedActivationPipeline(); + }); + + afterEach(() => { + _pendingMockTimers.forEach(id => clearTimeout(id)); + _pendingMockTimers.length = 0; + }); + + // ----------------------------------------------------------- + // 1. 
Core Activation + // ----------------------------------------------------------- + describe('activate()', () => { + it('should activate an agent and return greeting + context + duration', async () => { + const result = await pipeline.activate('dev'); + + expect(result).toHaveProperty('greeting'); + expect(result).toHaveProperty('context'); + expect(result).toHaveProperty('duration'); + expect(typeof result.greeting).toBe('string'); + expect(typeof result.duration).toBe('number'); + expect(result.greeting.length).toBeGreaterThan(0); + }); + + it('should return a non-empty greeting string', async () => { + const result = await pipeline.activate('dev'); + expect(result.greeting).toBeTruthy(); + expect(result.greeting.length).toBeGreaterThan(5); + }); + + it('should include duration in response', async () => { + const result = await pipeline.activate('dev'); + expect(result.duration).toBeGreaterThanOrEqual(0); + }); + }); + + // ----------------------------------------------------------- + // 2. 
All 12 Agents - Identical Context Structure + // ----------------------------------------------------------- + describe('all 12 agents produce identical context structure', () => { + const expectedContextKeys = [ + 'agent', 'config', 'session', 'projectStatus', 'gitConfig', + 'permissions', 'preference', 'sessionType', 'workflowState', + 'userProfile', 'conversationHistory', 'lastCommands', + 'previousAgent', 'sessionMessage', 'workflowActive', 'sessionStory', + ]; + + ALL_AGENT_IDS.forEach(agentId => { + it(`should produce correct context structure for @${agentId}`, async () => { + // Adjust mock to return the agent's ID + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockResolvedValue({ + config: { dataLocation: '.aios-core/data' }, + definition: { + ...mockAgentDefinition, + agent: { ...mockAgentDefinition.agent, id: agentId }, + }, + agent: { ...mockAgentDefinition.agent, id: agentId }, + persona_profile: mockAgentDefinition.persona_profile, + commands: mockAgentDefinition.commands, + }), + })); + + const result = await pipeline.activate(agentId); + + // Verify all expected keys exist in context + for (const key of expectedContextKeys) { + expect(result.context).toHaveProperty(key); + } + + // Verify context has the correct agent ID + expect(result.context.agent.id).toBe(agentId); + + // Verify greeting is a non-empty string + expect(typeof result.greeting).toBe('string'); + expect(result.greeting.length).toBeGreaterThan(0); + }); + }); + + it('should produce contexts with the exact same keys for all agents', async () => { + const contextKeys = []; + + for (const agentId of ALL_AGENT_IDS) { + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockResolvedValue({ + config: {}, + definition: { + ...mockAgentDefinition, + agent: { ...mockAgentDefinition.agent, id: agentId }, + }, + agent: { ...mockAgentDefinition.agent, id: agentId }, + persona_profile: mockAgentDefinition.persona_profile, + commands: 
mockAgentDefinition.commands, + }), + })); + + const result = await pipeline.activate(agentId); + contextKeys.push(Object.keys(result.context).sort().join(',')); + } + + // All contexts should have the same key set + const uniqueKeyPatterns = new Set(contextKeys); + expect(uniqueKeyPatterns.size).toBe(1); + }); + }); + + // ----------------------------------------------------------- + // 3. Parallel Loading + // ----------------------------------------------------------- + describe('parallel loading', () => { + it('should call all 5 loaders', async () => { + await pipeline.activate('dev'); + + // AgentConfigLoader called + expect(AgentConfigLoader).toHaveBeenCalled(); + + // SessionContextLoader called + expect(SessionContextLoader).toHaveBeenCalled(); + + // ProjectStatusLoader called + expect(loadProjectStatus).toHaveBeenCalled(); + + // GitConfigDetector called + expect(GitConfigDetector).toHaveBeenCalled(); + + // PermissionMode called + expect(PermissionMode).toHaveBeenCalled(); + }); + + it('should load all 5 loaders even if one is slow', async () => { + // Make one loader slow but still within timeout + loadProjectStatus.mockImplementation(() => + new Promise(resolve => setTimeout(() => resolve(mockProjectStatus), 50)), + ); + + const result = await pipeline.activate('dev'); + expect(result.greeting).toBeTruthy(); + expect(result.context.projectStatus).toEqual(mockProjectStatus); + }); + }); + + // ----------------------------------------------------------- + // 4. 
Error Isolation (one loader fails, others still work) + // ----------------------------------------------------------- + describe('error isolation', () => { + it('should still produce greeting when AgentConfigLoader fails', async () => { + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockRejectedValue(new Error('Agent config error')), + })); + + const freshPipeline = new UnifiedActivationPipeline(); + const result = await freshPipeline.activate('dev'); + expect(result.greeting).toBeTruthy(); + expect(typeof result.greeting).toBe('string'); + }); + + it('should still produce greeting when SessionContextLoader fails', async () => { + SessionContextLoader.mockImplementation(() => ({ + loadContext: jest.fn().mockImplementation(() => { throw new Error('Session error'); }), + })); + + const result = await pipeline.activate('dev'); + expect(result.greeting).toBeTruthy(); + }); + + it('should still produce greeting when ProjectStatusLoader fails', async () => { + loadProjectStatus.mockRejectedValue(new Error('Project status error')); + + const result = await pipeline.activate('dev'); + expect(result.greeting).toBeTruthy(); + expect(result.context.projectStatus).toBeNull(); + }); + + it('should still produce greeting when GitConfigDetector fails', async () => { + GitConfigDetector.mockImplementation(() => ({ + get: jest.fn().mockImplementation(() => { throw new Error('Git config error'); }), + })); + + const result = await pipeline.activate('dev'); + expect(result.greeting).toBeTruthy(); + }); + + it('should still produce greeting when PermissionMode fails', async () => { + PermissionMode.mockImplementation(() => ({ + currentMode: 'ask', + load: jest.fn().mockRejectedValue(new Error('Permission error')), + getBadge: jest.fn().mockReturnValue('[Ask]'), + })); + + const result = await pipeline.activate('dev'); + expect(result.greeting).toBeTruthy(); + }); + + it('should use fallback when ALL loaders fail', async () => { + 
AgentConfigLoader.mockImplementation(() => ({
+        loadComplete: jest.fn().mockRejectedValue(new Error('fail')),
+      }));
+      SessionContextLoader.mockImplementation(() => ({
+        loadContext: jest.fn().mockImplementation(() => { throw new Error('fail'); }),
+      }));
+      loadProjectStatus.mockRejectedValue(new Error('fail'));
+      GitConfigDetector.mockImplementation(() => ({
+        get: jest.fn().mockImplementation(() => { throw new Error('fail'); }),
+      }));
+      PermissionMode.mockImplementation(() => ({
+        load: jest.fn().mockRejectedValue(new Error('fail')),
+        getBadge: jest.fn().mockReturnValue(''),
+      }));
+
+      // Recreate pipeline to pick up new mocks
+      const freshPipeline = new UnifiedActivationPipeline();
+      const result = await freshPipeline.activate('dev');
+      expect(result.greeting).toBeTruthy();
+      expect(result.greeting).toContain('dev');
+    });
+  });
+
+  // -----------------------------------------------------------
+  // 5. Timeout Protection
+  // -----------------------------------------------------------
+  describe('timeout protection', () => {
+    it('should return fallback greeting if pipeline exceeds timeout', async () => {
+      // Make all loaders very slow; track the timers so afterEach can clear them
+      AgentConfigLoader.mockImplementation(() => ({
+        loadComplete: jest.fn().mockImplementation(() =>
+          new Promise(resolve => _pendingMockTimers.push(setTimeout(() => resolve(null), 500))),
+        ),
+      }));
+      loadProjectStatus.mockImplementation(() =>
+        new Promise(resolve => _pendingMockTimers.push(setTimeout(() => resolve(null), 500))),
+      );
+
+      const result = await pipeline.activate('dev');
+      // Should still return something (either from pipeline or timeout fallback)
+      expect(result.greeting).toBeTruthy();
+      expect(typeof result.greeting).toBe('string');
+      expect(result.fallback).toBe(true);
+    });
+  });
+
+  // -----------------------------------------------------------
+  // 6. 
Enriched Context Shape + // ----------------------------------------------------------- + describe('enriched context shape', () => { + it('should include agent definition in context', async () => { + const result = await pipeline.activate('dev'); + expect(result.context.agent).toBeDefined(); + expect(result.context.agent.id).toBe('dev'); + }); + + it('should include session info in context', async () => { + const result = await pipeline.activate('dev'); + expect(result.context.session).toBeDefined(); + expect(result.context.sessionType).toBe('new'); + }); + + it('should include project status in context', async () => { + const result = await pipeline.activate('dev'); + expect(result.context.projectStatus).toEqual(mockProjectStatus); + }); + + it('should include git config in context', async () => { + const result = await pipeline.activate('dev'); + expect(result.context.gitConfig).toEqual(mockGitConfig); + }); + + it('should include permission data in context', async () => { + const result = await pipeline.activate('dev'); + expect(result.context.permissions).toBeDefined(); + expect(result.context.permissions.mode).toBe('ask'); + expect(result.context.permissions.badge).toBe('[Ask]'); + }); + + it('should include user profile in context', async () => { + const result = await pipeline.activate('dev'); + expect(result.context.userProfile).toBe('advanced'); + }); + + it('should include preference in context', async () => { + const result = await pipeline.activate('dev'); + expect(result.context.preference).toBe('auto'); + }); + + it('should include workflow state (null for new session)', async () => { + const result = await pipeline.activate('dev'); + expect(result.context.workflowState).toBeNull(); + }); + + it('should include backward-compatible legacy fields', async () => { + const result = await pipeline.activate('dev'); + expect(result.context).toHaveProperty('conversationHistory'); + expect(result.context).toHaveProperty('lastCommands'); + 
expect(result.context).toHaveProperty('previousAgent'); + expect(result.context).toHaveProperty('sessionMessage'); + expect(result.context).toHaveProperty('workflowActive'); + expect(result.context).toHaveProperty('sessionStory'); + }); + }); + + // ----------------------------------------------------------- + // 7. Session Type Detection + // ----------------------------------------------------------- + describe('session type detection', () => { + it('should detect new session type', async () => { + const result = await pipeline.activate('dev'); + expect(result.context.sessionType).toBe('new'); + }); + + it('should use session type from SessionContextLoader when available', async () => { + SessionContextLoader.mockImplementation(() => ({ + loadContext: jest.fn().mockReturnValue({ + ...mockSessionContext, + sessionType: 'existing', + lastCommands: ['develop'], + }), + })); + + const result = await pipeline.activate('dev'); + expect(result.context.sessionType).toBe('existing'); + }); + + it('should prefer conversation history over session context for detection', async () => { + const ContextDetector = require('../../.aios-core/core/session/context-detector'); + ContextDetector.mockImplementation(() => ({ + detectSessionType: jest.fn().mockReturnValue('workflow'), + })); + + // Recreate pipeline to pick up new mock + const freshPipeline = new UnifiedActivationPipeline(); + + const result = await freshPipeline.activate('dev', { + conversationHistory: [{ content: '*develop story-1' }, { content: '*run-tests' }], + }); + expect(result.context.sessionType).toBe('workflow'); + }); + }); + + // ----------------------------------------------------------- + // 8. 
Fallback Greeting + // ----------------------------------------------------------- + describe('fallback greeting', () => { + it('should produce a valid fallback for unknown agents', async () => { + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockRejectedValue(new Error('Agent not found')), + })); + + // Recreate pipeline to pick up new mock + const freshPipeline = new UnifiedActivationPipeline(); + const result = await freshPipeline.activate('unknown-agent'); + expect(result.greeting).toBeTruthy(); + expect(typeof result.greeting).toBe('string'); + // The greeting should contain the agent ID somewhere + expect(result.greeting).toContain('unknown-agent'); + }); + + it('should include the agent ID in fallback greeting', async () => { + const greeting = pipeline._generateFallbackGreeting('test-agent'); + expect(greeting).toContain('test-agent'); + }); + }); + + // ----------------------------------------------------------- + // 9. Static Methods + // ----------------------------------------------------------- + describe('static methods', () => { + it('getAllAgentIds() should return all 12 agent IDs', () => { + const ids = UnifiedActivationPipeline.getAllAgentIds(); + expect(ids).toHaveLength(12); + expect(ids).toContain('dev'); + expect(ids).toContain('qa'); + expect(ids).toContain('architect'); + expect(ids).toContain('pm'); + expect(ids).toContain('po'); + expect(ids).toContain('sm'); + expect(ids).toContain('analyst'); + expect(ids).toContain('data-engineer'); + expect(ids).toContain('ux-design-expert'); + expect(ids).toContain('devops'); + expect(ids).toContain('aios-master'); + expect(ids).toContain('squad-creator'); + }); + + it('isValidAgentId() should return true for valid IDs', () => { + expect(UnifiedActivationPipeline.isValidAgentId('dev')).toBe(true); + expect(UnifiedActivationPipeline.isValidAgentId('qa')).toBe(true); + expect(UnifiedActivationPipeline.isValidAgentId('aios-master')).toBe(true); + }); + + it('isValidAgentId() 
should return false for invalid IDs', () => { + expect(UnifiedActivationPipeline.isValidAgentId('invalid')).toBe(false); + expect(UnifiedActivationPipeline.isValidAgentId('')).toBe(false); + expect(UnifiedActivationPipeline.isValidAgentId('DEV')).toBe(false); + }); + }); + + // ----------------------------------------------------------- + // 10. Default Icon Mapping + // ----------------------------------------------------------- + describe('default icon mapping', () => { + it('should return correct icons for known agents', () => { + expect(pipeline._getDefaultIcon('dev')).toBe('\uD83D\uDCBB'); + expect(pipeline._getDefaultIcon('aios-master')).toBe('\uD83D\uDC51'); + }); + + it('should return default robot icon for unknown agents', () => { + expect(pipeline._getDefaultIcon('unknown')).toBe('\uD83E\uDD16'); + }); + }); + + // ----------------------------------------------------------- + // 11. Default Context + // ----------------------------------------------------------- + describe('default context', () => { + it('_getDefaultContext should return complete structure', () => { + const ctx = pipeline._getDefaultContext('dev'); + expect(ctx.agent.id).toBe('dev'); + expect(ctx.sessionType).toBe('new'); + expect(ctx.permissions.mode).toBe('ask'); + expect(ctx.preference).toBe('auto'); + expect(ctx.userProfile).toBe('advanced'); + }); + + it('_getDefaultSessionContext should return new session defaults', () => { + const session = pipeline._getDefaultSessionContext(); + expect(session.sessionType).toBe('new'); + expect(session.previousAgent).toBeNull(); + expect(session.lastCommands).toEqual([]); + }); + }); + + // ----------------------------------------------------------- + // 12. 
Agent Definition Building + // ----------------------------------------------------------- + describe('agent definition building', () => { + it('should build from loaded config data', () => { + const agentComplete = { + agent: { id: 'dev', name: 'Dex', icon: '\uD83D\uDCBB' }, + persona_profile: mockAgentDefinition.persona_profile, + definition: { persona: { role: 'Developer' }, commands: [] }, + commands: [{ name: 'help' }], + }; + + const def = pipeline._buildAgentDefinition('dev', agentComplete); + expect(def.id).toBe('dev'); + expect(def.name).toBe('Dex'); + expect(def.persona_profile).toBeDefined(); + expect(def.commands).toHaveLength(1); + }); + + it('should return fallback definition when loader returns null', () => { + const def = pipeline._buildAgentDefinition('dev', null); + expect(def.id).toBe('dev'); + expect(def.name).toBe('dev'); + expect(def.persona_profile).toBeDefined(); + expect(def.persona_profile.greeting_levels).toBeDefined(); + expect(def.commands).toEqual([]); + }); + }); + + // ----------------------------------------------------------- + // 13. 
Preference Resolution + // ----------------------------------------------------------- + describe('preference resolution', () => { + it('should bypass bob mode restriction for PM agent', () => { + const pmAgent = { id: 'pm' }; + const pref = pipeline._resolvePreference(pmAgent, 'bob'); + // PM bypasses bob mode, should call getPreference with 'advanced' + expect(pipeline.preferenceManager.getPreference).toHaveBeenCalledWith('advanced'); + }); + + it('should apply bob mode restriction for non-PM agents', () => { + const devAgent = { id: 'dev' }; + pipeline._resolvePreference(devAgent, 'bob'); + expect(pipeline.preferenceManager.getPreference).toHaveBeenCalledWith('bob'); + }); + + it('should pass through advanced profile without changes', () => { + const devAgent = { id: 'dev' }; + pipeline._resolvePreference(devAgent, 'advanced'); + expect(pipeline.preferenceManager.getPreference).toHaveBeenCalledWith('advanced'); + }); + }); + + // ----------------------------------------------------------- + // 14. 
Workflow State Detection + // ----------------------------------------------------------- + describe('workflow state detection', () => { + it('should return null for non-workflow sessions', () => { + const result = pipeline._detectWorkflowState(mockSessionContext, 'new'); + expect(result).toBeNull(); + }); + + it('should return null when session context is null', () => { + const result = pipeline._detectWorkflowState(null, 'workflow'); + expect(result).toBeNull(); + }); + + it('should return null when no command history', () => { + const result = pipeline._detectWorkflowState( + { lastCommands: [] }, + 'workflow', + ); + expect(result).toBeNull(); + }); + + it('should attempt detection for workflow sessions with commands', () => { + const sessionWithCommands = { + lastCommands: ['develop', 'run-tests'], + }; + pipeline._detectWorkflowState(sessionWithCommands, 'workflow'); + expect(pipeline.workflowNavigator.detectWorkflowState).toHaveBeenCalledWith( + ['develop', 'run-tests'], + sessionWithCommands, + ); + }); + }); + + // ----------------------------------------------------------- + // 15. generate-greeting.js Backward Compatibility + // ----------------------------------------------------------- + describe('generate-greeting.js backward compatibility', () => { + it('should export generateGreeting function', () => { + const { generateGreeting } = require('../../.aios-core/development/scripts/generate-greeting'); + expect(typeof generateGreeting).toBe('function'); + }); + }); + + // ----------------------------------------------------------- + // 16. 
Performance
+  // -----------------------------------------------------------
+  describe('performance', () => {
+    it('should complete activation within 500ms (mocked loaders)', async () => {
+      const startTime = Date.now();
+      await pipeline.activate('dev');
+      const duration = Date.now() - startTime;
+      // With mocked loaders, should be well under 500ms
+      // Real-world target is <200ms; CI environments have variable timing
+      expect(duration).toBeLessThan(500);
+    });
+
+    it('should report duration in result', async () => {
+      const result = await pipeline.activate('dev');
+      expect(result.duration).toBeDefined();
+      expect(result.duration).toBeGreaterThanOrEqual(0);
+    });
+  });
+
+  // -----------------------------------------------------------
+  // 17. Profile Loader Wrapper (ACT-11: replaces _safeLoad)
+  // -----------------------------------------------------------
+  describe('_profileLoader', () => {
+    it('should return result on success and record metrics', async () => {
+      const metrics = { loaders: {} };
+      const result = await pipeline._profileLoader('test', metrics, 1000, () => Promise.resolve({ data: 'test' }));
+      expect(result).toEqual({ data: 'test' });
+      expect(metrics.loaders.test.status).toBe('ok');
+      expect(metrics.loaders.test.duration).toBeGreaterThanOrEqual(0);
+    });
+
+    it('should return null on error and record error status', async () => {
+      const metrics = { loaders: {} };
+      const result = await pipeline._profileLoader('test', metrics, 1000, () => Promise.reject(new Error('fail')));
+      expect(result).toBeNull();
+      expect(metrics.loaders.test.status).toBe('error');
+      expect(metrics.loaders.test.error).toContain('fail');
+    });
+
+    it('should return null on timeout and record timeout status', async () => {
+      const metrics = { loaders: {} };
+      // Track the late timer so afterEach can clear it (prevents orphaned-timer warnings)
+      const result = await pipeline._profileLoader('test', metrics, 10, () =>
+        new Promise(resolve => _pendingMockTimers.push(setTimeout(() => resolve('late'), 500))),
+      );
+      expect(result).toBeNull();
+ 
expect(metrics.loaders.test.status).toBe('timeout'); + }); + }); + + // ----------------------------------------------------------- + // 18. Constructor Options + // ----------------------------------------------------------- + describe('constructor options', () => { + it('should accept custom projectRoot', () => { + const customPipeline = new UnifiedActivationPipeline({ projectRoot: '/custom/path' }); + expect(customPipeline.projectRoot).toBe('/custom/path'); + }); + + it('should use cwd as default projectRoot', () => { + const defaultPipeline = new UnifiedActivationPipeline(); + expect(defaultPipeline.projectRoot).toBe(process.cwd()); + }); + + it('should accept custom greetingBuilder', () => { + const mockBuilder = { buildGreeting: jest.fn() }; + const customPipeline = new UnifiedActivationPipeline({ greetingBuilder: mockBuilder }); + expect(customPipeline.greetingBuilder).toBe(mockBuilder); + }); + }); + + // ----------------------------------------------------------- + // 19. ALL_AGENT_IDS constant + // ----------------------------------------------------------- + describe('ALL_AGENT_IDS', () => { + it('should contain exactly 12 agents', () => { + expect(ALL_AGENT_IDS).toHaveLength(12); + }); + + it('should not have duplicates', () => { + const unique = new Set(ALL_AGENT_IDS); + expect(unique.size).toBe(ALL_AGENT_IDS.length); + }); + + it('should include the formerly Path-B agents', () => { + expect(ALL_AGENT_IDS).toContain('devops'); + expect(ALL_AGENT_IDS).toContain('data-engineer'); + expect(ALL_AGENT_IDS).toContain('ux-design-expert'); + }); + }); + + // =========================================================== + // ACT-11: Pipeline Performance Optimization Tests + // =========================================================== + + // ----------------------------------------------------------- + // 20. 
Tiered Loading Architecture (AC: 5, 6, 7) + // ----------------------------------------------------------- + describe('ACT-11: tiered loading', () => { + it('should export LOADER_TIERS with correct tier structure', () => { + expect(LOADER_TIERS).toBeDefined(); + expect(LOADER_TIERS.critical).toBeDefined(); + expect(LOADER_TIERS.high).toBeDefined(); + expect(LOADER_TIERS.bestEffort).toBeDefined(); + expect(LOADER_TIERS.critical.loaders).toContain('agentConfig'); + expect(LOADER_TIERS.high.loaders).toContain('permissionMode'); + expect(LOADER_TIERS.high.loaders).toContain('gitConfig'); + expect(LOADER_TIERS.bestEffort.loaders).toContain('sessionContext'); + expect(LOADER_TIERS.bestEffort.loaders).toContain('projectStatus'); + }); + + it('should return quality "full" when all loaders succeed', async () => { + const result = await pipeline.activate('dev'); + expect(result.quality).toBe('full'); + expect(result.fallback).toBe(false); + }); + + it('should return quality "fallback" when Tier 1 (agentConfig) fails', async () => { + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockRejectedValue(new Error('Agent config error')), + })); + + const freshPipeline = new UnifiedActivationPipeline(); + const result = await freshPipeline.activate('dev'); + expect(result.quality).toBe('fallback'); + expect(result.fallback).toBe(true); + }); + + it('should return quality "partial" when Tier 2/3 loaders fail but Tier 1 succeeds', async () => { + loadProjectStatus.mockRejectedValue(new Error('git timeout')); + GitConfigDetector.mockImplementation(() => ({ + get: jest.fn().mockImplementation(() => { throw new Error('git error'); }), + })); + + const freshPipeline = new UnifiedActivationPipeline(); + const result = await freshPipeline.activate('dev'); + expect(result.quality).toBe('partial'); + expect(result.fallback).toBe(false); + // Greeting should still be rich (agent identity present) + expect(result.greeting).toBeTruthy(); + 
expect(result.greeting.length).toBeGreaterThan(10); + }); + + it('should still return greeting when only ProjectStatus times out', async () => { + loadProjectStatus.mockImplementation(() => + new Promise(resolve => { + const id = setTimeout(() => resolve(mockProjectStatus), 300); + _pendingMockTimers.push(id); + }), + ); + + const result = await pipeline.activate('dev'); + expect(result.greeting).toBeTruthy(); + // ProjectStatus may or may not have loaded depending on timing + expect(['full', 'partial']).toContain(result.quality); + }); + }); + + // ----------------------------------------------------------- + // 21. Loader Profiling / Metrics (AC: 1, 9) + // ----------------------------------------------------------- + describe('ACT-11: loader profiling', () => { + it('should include metrics in activation result', async () => { + const result = await pipeline.activate('dev'); + expect(result.metrics).toBeDefined(); + expect(result.metrics.loaders).toBeDefined(); + }); + + it('should record timing data for all 5 loaders', async () => { + const result = await pipeline.activate('dev'); + const loaderNames = Object.keys(result.metrics.loaders); + expect(loaderNames).toContain('agentConfig'); + expect(loaderNames).toContain('permissionMode'); + expect(loaderNames).toContain('gitConfig'); + expect(loaderNames).toContain('sessionContext'); + expect(loaderNames).toContain('projectStatus'); + }); + + it('should record duration and status for each loader', async () => { + const result = await pipeline.activate('dev'); + for (const [name, data] of Object.entries(result.metrics.loaders)) { + expect(data).toHaveProperty('duration'); + expect(data).toHaveProperty('status'); + expect(data).toHaveProperty('start'); + expect(data).toHaveProperty('end'); + expect(typeof data.duration).toBe('number'); + expect(['ok', 'timeout', 'error']).toContain(data.status); + } + }); + + it('should record error message on loader failure', async () => { + loadProjectStatus.mockRejectedValue(new 
Error('git status failed')); + const freshPipeline = new UnifiedActivationPipeline(); + const result = await freshPipeline.activate('dev'); + expect(result.metrics.loaders.projectStatus.status).toBe('error'); + expect(result.metrics.loaders.projectStatus.error).toContain('git status failed'); + }); + }); + + // ----------------------------------------------------------- + // 22. Fallback Phrase (ACT-12: Language delegated to Claude Code settings.json) + // ----------------------------------------------------------- + describe('ACT-12: fallback phrase', () => { + it('should export FALLBACK_PHRASE as a string', () => { + expect(FALLBACK_PHRASE).toBeDefined(); + expect(typeof FALLBACK_PHRASE).toBe('string'); + expect(FALLBACK_PHRASE).toContain('*help'); + }); + + it('should generate English fallback greeting', () => { + const greeting = pipeline._generateFallbackGreeting('dev'); + expect(greeting).toContain('Type'); + expect(greeting).toContain('*help'); + expect(greeting).toContain('dev'); + }); + }); + + // ----------------------------------------------------------- + // 23. 
Configurable Pipeline Timeout (AC: 2, 4) + // ----------------------------------------------------------- + describe('ACT-11: configurable timeout', () => { + it('should export DEFAULT_PIPELINE_TIMEOUT_MS', () => { + expect(DEFAULT_PIPELINE_TIMEOUT_MS).toBeDefined(); + expect(typeof DEFAULT_PIPELINE_TIMEOUT_MS).toBe('number'); + expect(DEFAULT_PIPELINE_TIMEOUT_MS).toBe(500); + }); + + it('should use default timeout when no config or env override', () => { + const timeout = pipeline._resolvePipelineTimeout({}); + expect(timeout).toBe(DEFAULT_PIPELINE_TIMEOUT_MS); + }); + + it('should use config timeout when specified', () => { + const timeout = pipeline._resolvePipelineTimeout({ pipeline: { timeout_ms: 300 } }); + expect(timeout).toBe(300); + }); + + it('should use env var over config value', () => { + const originalEnv = process.env.AIOS_PIPELINE_TIMEOUT; + process.env.AIOS_PIPELINE_TIMEOUT = '800'; + try { + const timeout = pipeline._resolvePipelineTimeout({ pipeline: { timeout_ms: 300 } }); + expect(timeout).toBe(800); + } finally { + if (originalEnv !== undefined) { + process.env.AIOS_PIPELINE_TIMEOUT = originalEnv; + } else { + delete process.env.AIOS_PIPELINE_TIMEOUT; + } + } + }); + + it('should ignore invalid env var values', () => { + const originalEnv = process.env.AIOS_PIPELINE_TIMEOUT; + process.env.AIOS_PIPELINE_TIMEOUT = 'not-a-number'; + try { + const timeout = pipeline._resolvePipelineTimeout({}); + expect(timeout).toBe(DEFAULT_PIPELINE_TIMEOUT_MS); + } finally { + if (originalEnv !== undefined) { + process.env.AIOS_PIPELINE_TIMEOUT = originalEnv; + } else { + delete process.env.AIOS_PIPELINE_TIMEOUT; + } + } + }); + }); + + // ----------------------------------------------------------- + // 24. 
Quality Determination (ACT-11) + // ----------------------------------------------------------- + describe('ACT-11: quality determination', () => { + it('should return "full" when all loaders are ok', () => { + const metrics = { + loaders: { + agentConfig: { status: 'ok', duration: 50 }, + permissionMode: { status: 'ok', duration: 30 }, + gitConfig: { status: 'ok', duration: 40 }, + sessionContext: { status: 'ok', duration: 20 }, + projectStatus: { status: 'ok', duration: 80 }, + }, + }; + expect(pipeline._determineQuality(metrics)).toBe('full'); + }); + + it('should return "fallback" when agentConfig failed', () => { + const metrics = { + loaders: { + agentConfig: { status: 'error', duration: 80 }, + }, + }; + expect(pipeline._determineQuality(metrics)).toBe('fallback'); + }); + + it('should return "partial" when agentConfig ok but others failed', () => { + const metrics = { + loaders: { + agentConfig: { status: 'ok', duration: 50 }, + permissionMode: { status: 'ok', duration: 30 }, + gitConfig: { status: 'timeout', duration: 120 }, + sessionContext: { status: 'ok', duration: 20 }, + projectStatus: { status: 'timeout', duration: 180 }, + }, + }; + expect(pipeline._determineQuality(metrics)).toBe('partial'); + }); + }); + + // ----------------------------------------------------------- + // 25. 
Timeout Simulation Tests (AC: 11) + // ----------------------------------------------------------- + describe('ACT-11: timeout simulation', () => { + it('all loaders fast → full greeting', async () => { + // Default mocks are instant — should produce full quality + const result = await pipeline.activate('dev'); + expect(result.quality).toBe('full'); + expect(result.fallback).toBe(false); + }); + + it('ProjectStatus slow → partial greeting (everything else present)', async () => { + loadProjectStatus.mockImplementation(() => + new Promise((_, reject) => { + const id = setTimeout(() => reject(new Error('slow')), 300); + _pendingMockTimers.push(id); + }), + ); + + const freshPipeline = new UnifiedActivationPipeline(); + const result = await freshPipeline.activate('dev'); + expect(result.quality).toBe('partial'); + expect(result.context.projectStatus).toBeNull(); + // Agent identity should still be present + expect(result.context.agent.id).toBe('dev'); + expect(result.context.permissions).toBeDefined(); + }); + + it('AgentConfig slow → fallback greeting (Tier 1 failure)', async () => { + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockImplementation(() => + new Promise((_, reject) => { + const id = setTimeout(() => reject(new Error('slow')), 200); + _pendingMockTimers.push(id); + }), + ), + })); + + const freshPipeline = new UnifiedActivationPipeline(); + const result = await freshPipeline.activate('dev'); + expect(result.quality).toBe('fallback'); + expect(result.fallback).toBe(true); + expect(result.greeting).toContain('dev'); + }); + + it('all loaders slow → fallback via pipeline timeout', async () => { + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockImplementation(() => + new Promise(resolve => { + const id = setTimeout(() => resolve(null), 800); + _pendingMockTimers.push(id); + }), + ), + })); + loadProjectStatus.mockImplementation(() => + new Promise(resolve => { + const id = setTimeout(() => 
resolve(null), 800); + _pendingMockTimers.push(id); + }), + ); + SessionContextLoader.mockImplementation(() => ({ + loadContext: jest.fn().mockImplementation(() => + new Promise(resolve => { + const id = setTimeout(() => resolve(null), 800); + _pendingMockTimers.push(id); + }), + ), + })); + + const freshPipeline = new UnifiedActivationPipeline(); + const result = await freshPipeline.activate('dev'); + expect(result.fallback).toBe(true); + expect(result.greeting).toContain('dev'); + }); + }); + + // ----------------------------------------------------------- + // 26. Backward Compatibility (ACT-11) + // ----------------------------------------------------------- + describe('ACT-11: backward compatibility', () => { + it('should still return fallback boolean field', async () => { + const result = await pipeline.activate('dev'); + expect(typeof result.fallback).toBe('boolean'); + }); + + it('fallback=false when quality is full or partial', async () => { + const result = await pipeline.activate('dev'); + expect(result.quality).toBe('full'); + expect(result.fallback).toBe(false); + }); + + it('fallback=true only when quality is fallback', async () => { + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockRejectedValue(new Error('fail')), + })); + const freshPipeline = new UnifiedActivationPipeline(); + const result = await freshPipeline.activate('dev'); + expect(result.quality).toBe('fallback'); + expect(result.fallback).toBe(true); + }); + }); + + // ----------------------------------------------------------- + // 27. 
All 12 Agents Verified — No Fallback (AC: 12) + // ----------------------------------------------------------- + describe('ACT-11: all 12 agents non-fallback', () => { + ALL_AGENT_IDS.forEach(agentId => { + it(`@${agentId} should not produce fallback greeting`, async () => { + AgentConfigLoader.mockImplementation(() => ({ + loadComplete: jest.fn().mockResolvedValue({ + config: { dataLocation: '.aios-core/data' }, + definition: { + ...mockAgentDefinition, + agent: { ...mockAgentDefinition.agent, id: agentId }, + }, + agent: { ...mockAgentDefinition.agent, id: agentId }, + persona_profile: mockAgentDefinition.persona_profile, + commands: mockAgentDefinition.commands, + }), + })); + + const result = await pipeline.activate(agentId); + expect(result.fallback).toBe(false); + expect(result.quality).toBe('full'); + expect(result.metrics).toBeDefined(); + expect(result.metrics.loaders.agentConfig).toBeDefined(); + expect(result.metrics.loaders.agentConfig.status).toBe('ok'); + }); + }); + }); +}); + +``` + +================================================== +📄 tests/core/build-orchestrator.test.js +================================================== +```js +/** + * Build Orchestrator Tests - Story EXC-1 (AC2) + * + * Tests for the 8-phase build pipeline including: + * - Constructor and config defaults + * - runPhase() wrapping and event emission + * - build() with phase failure handling + * - buildSubtaskPrompt() formatting + * - validateSubtaskResult() success and rejection + * - extractModifiedFiles() + * - formatDuration() + * - Enum values + */ + +const path = require('path'); +const fs = require('fs'); +const os = require('os'); +const { + createTempDir, + cleanupTempDir, + collectEvents, +} = require('./execution-test-helpers'); + +// Mock optional modules to prevent constructor errors +jest.mock('../../.aios-core/workflow-intelligence/engine/wave-analyzer', () => null); +jest.mock('../../.aios-core/infrastructure/scripts/worktree-manager', () => { throw new 
Error('not available'); }); +jest.mock('../../.aios-core/core/memory/gotchas-memory', () => { throw new Error('not available'); }); + +const { + BuildOrchestrator, + OrchestratorEvent, + Phase, + DEFAULT_CONFIG, +} = require('../../.aios-core/core/execution/build-orchestrator'); + +// ═══════════════════════════════════════════════════════════════════════════════ +// ENUMS & CONFIG +// ═══════════════════════════════════════════════════════════════════════════════ + +describe('Build Orchestrator Enums', () => { + test('OrchestratorEvent should have all expected events (BO-04)', () => { + expect(OrchestratorEvent.BUILD_QUEUED).toBe('build_queued'); + expect(OrchestratorEvent.PHASE_STARTED).toBe('phase_started'); + expect(OrchestratorEvent.PHASE_COMPLETED).toBe('phase_completed'); + expect(OrchestratorEvent.PHASE_FAILED).toBe('phase_failed'); + expect(OrchestratorEvent.SUBTASK_EXECUTING).toBe('subtask_executing'); + expect(OrchestratorEvent.QA_STARTED).toBe('qa_started'); + expect(OrchestratorEvent.QA_COMPLETED).toBe('qa_completed'); + expect(OrchestratorEvent.MERGE_STARTED).toBe('merge_started'); + expect(OrchestratorEvent.MERGE_COMPLETED).toBe('merge_completed'); + expect(OrchestratorEvent.BUILD_COMPLETED).toBe('build_completed'); + expect(OrchestratorEvent.BUILD_FAILED).toBe('build_failed'); + expect(OrchestratorEvent.REPORT_GENERATED).toBe('report_generated'); + }); + + test('Phase should have all 8 phases (BO-05)', () => { + expect(Phase.INIT).toBe('init'); + expect(Phase.WORKTREE).toBe('worktree'); + expect(Phase.PLAN).toBe('plan'); + expect(Phase.EXECUTE).toBe('execute'); + expect(Phase.QA).toBe('qa'); + expect(Phase.MERGE).toBe('merge'); + expect(Phase.CLEANUP).toBe('cleanup'); + expect(Phase.REPORT).toBe('report'); + }); + + test('DEFAULT_CONFIG should have expected defaults (BO-03)', () => { + expect(DEFAULT_CONFIG.useWorktree).toBe(true); + expect(DEFAULT_CONFIG.runQA).toBe(true); + expect(DEFAULT_CONFIG.autoMerge).toBe(true); + 
expect(DEFAULT_CONFIG.maxIterations).toBe(10); + expect(DEFAULT_CONFIG.globalTimeout).toBe(45 * 60 * 1000); + expect(DEFAULT_CONFIG.subtaskTimeout).toBe(10 * 60 * 1000); + expect(DEFAULT_CONFIG.dryRun).toBe(false); + expect(DEFAULT_CONFIG.verbose).toBe(false); + }); +}); + +// ═══════════════════════════════════════════════════════════════════════════════ +// CONSTRUCTOR +// ═══════════════════════════════════════════════════════════════════════════════ + +describe('BuildOrchestrator', () => { + let orchestrator; + let testDir; + + beforeEach(() => { + testDir = createTempDir('build-orch-test-'); + orchestrator = new BuildOrchestrator({ rootPath: testDir }); + }); + + afterEach(() => { + if (orchestrator) { + orchestrator.removeAllListeners(); + } + cleanupTempDir(testDir); + }); + + describe('Constructor', () => { + test('should create with default config (BO-01)', () => { + const orch = new BuildOrchestrator(); + + expect(orch.config.useWorktree).toBe(true); + expect(orch.config.maxIterations).toBe(10); + expect(orch.config.dryRun).toBe(false); + expect(orch.activeBuilds).toBeInstanceOf(Map); + expect(orch.activeBuilds.size).toBe(0); + }); + + test('should merge custom config with defaults (BO-02)', () => { + const orch = new BuildOrchestrator({ + maxIterations: 5, + dryRun: true, + rootPath: '/custom/path', + }); + + expect(orch.config.maxIterations).toBe(5); + expect(orch.config.dryRun).toBe(true); + expect(orch.rootPath).toBe('/custom/path'); + // Defaults preserved + expect(orch.config.useWorktree).toBe(true); + expect(orch.config.globalTimeout).toBe(45 * 60 * 1000); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // runPhase() + // ───────────────────────────────────────────────────────────────────────────── + + describe('runPhase()', () => { + test('should emit phase events on success (BO-28)', async () => { + const tracker = collectEvents(orchestrator, [ + OrchestratorEvent.PHASE_STARTED, + 
OrchestratorEvent.PHASE_COMPLETED, + ]); + + const ctx = { storyId: 'test-story', phases: {} }; + const result = await orchestrator.runPhase(ctx, Phase.INIT, async () => 'done'); + + expect(result).toBe('done'); + expect(tracker.count(OrchestratorEvent.PHASE_STARTED)).toBe(1); + expect(tracker.count(OrchestratorEvent.PHASE_COMPLETED)).toBe(1); + expect(ctx.phases[Phase.INIT].status).toBe('completed'); + expect(ctx.phases[Phase.INIT].duration).toBeGreaterThanOrEqual(0); + }); + + test('should emit phase_failed on error (BO-28)', async () => { + const tracker = collectEvents(orchestrator, [ + OrchestratorEvent.PHASE_FAILED, + ]); + + const ctx = { storyId: 'test-story', phases: {} }; + + await expect( + orchestrator.runPhase(ctx, Phase.PLAN, async () => { + throw new Error('plan failed'); + }), + ).rejects.toThrow('plan failed'); + + expect(tracker.count(OrchestratorEvent.PHASE_FAILED)).toBe(1); + expect(ctx.phases[Phase.PLAN].status).toBe('failed'); + expect(ctx.phases[Phase.PLAN].error).toBe('plan failed'); + }); + + test('should set currentPhase on context', async () => { + const ctx = { storyId: 'test-story', phases: {} }; + + await orchestrator.runPhase(ctx, Phase.EXECUTE, async () => 'ok'); + + expect(ctx.currentPhase).toBe(Phase.EXECUTE); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // build() + // ───────────────────────────────────────────────────────────────────────────── + + describe('build()', () => { + test('should reject duplicate builds for same story (BO-07)', async () => { + // Simulate an active build by directly setting the map + orchestrator.activeBuilds.set('story-1', { storyId: 'story-1' }); + + await expect(orchestrator.build('story-1')).rejects.toThrow( + 'Build already in progress for story-1', + ); + }); + + test('should fail at init phase if story not found (BO-08)', async () => { + const result = await orchestrator.build('nonexistent-story', { + useWorktree: false, + runQA: false, + }); 

      expect(result.success).toBe(false);
      expect(result.error).toContain('Story not found');
      expect(result.phase).toBe(Phase.INIT);
    });

    test('should emit BUILD_QUEUED event', async () => {
      const tracker = collectEvents(orchestrator, [OrchestratorEvent.BUILD_QUEUED]);

      // Will fail at init (no story file) but should still emit queued
      await orchestrator.build('some-story', { useWorktree: false, runQA: false });

      expect(tracker.count(OrchestratorEvent.BUILD_QUEUED)).toBe(1);
    });

    test('should cleanup activeBuilds after completion', async () => {
      await orchestrator.build('some-story', { useWorktree: false, runQA: false });

      expect(orchestrator.activeBuilds.has('some-story')).toBe(false);
    });

    test('should emit BUILD_FAILED on error', async () => {
      const tracker = collectEvents(orchestrator, [OrchestratorEvent.BUILD_FAILED]);

      await orchestrator.build('missing-story', { useWorktree: false, runQA: false });

      expect(tracker.count(OrchestratorEvent.BUILD_FAILED)).toBe(1);
      expect(tracker.getByName(OrchestratorEvent.BUILD_FAILED)[0].data.storyId).toBe('missing-story');
    });
  });

  // ─────────────────────────────────────────────────────────────────────────────
  // phaseInit()
  // ─────────────────────────────────────────────────────────────────────────────

  describe('phaseInit()', () => {
    test('should find story file and set storyPath (BO-12)', async () => {
      // Create a story file in the temp directory
      const storiesDir = path.join(testDir, 'docs', 'stories');
      fs.mkdirSync(storiesDir, { recursive: true });
      fs.writeFileSync(path.join(storiesDir, 'test-story.md'), '# Test Story');

      const ctx = { storyId: 'test-story', config: orchestrator.config };
      const result = await orchestrator.phaseInit(ctx);

      expect(result.storyPath).toContain('test-story.md');
      expect(ctx.storyPath).toBeDefined();
    });

    test('should throw if story not found (BO-13)', async () => {
      const ctx = { storyId: 'nonexistent', config: 
orchestrator.config }; + + await expect(orchestrator.phaseInit(ctx)).rejects.toThrow('Story not found'); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // Phase dry-run paths + // ───────────────────────────────────────────────────────────────────────────── + + describe('Phase dry-run paths', () => { + test('phaseWorktree returns dryRun when config.dryRun is true', async () => { + const ctx = { storyId: 'test', config: { ...orchestrator.config, dryRun: true } }; + const result = await orchestrator.phaseWorktree(ctx); + expect(result.dryRun).toBe(true); + }); + + test('phasePlan returns dryRun and uses existing plan file', async () => { + // Create plan dir and file + const planDir = path.join(testDir, 'plan'); + fs.mkdirSync(planDir, { recursive: true }); + fs.writeFileSync( + path.join(planDir, 'implementation.yaml'), + 'storyId: test\nphases: []', + ); + + const ctx = { + storyId: 'test', + config: { ...orchestrator.config, planDir: 'plan' }, + worktree: null, + }; + const result = await orchestrator.phasePlan(ctx); + expect(result.source).toBe('existing'); + expect(ctx.plan).toBeDefined(); + }); + + test('phasePlan returns dryRun when no plan and dryRun enabled', async () => { + const ctx = { + storyId: 'test', + config: { ...orchestrator.config, planDir: 'plan', dryRun: true }, + worktree: null, + }; + const result = await orchestrator.phasePlan(ctx); + expect(result.dryRun).toBe(true); + }); + + test('phaseExecute returns dryRun when config.dryRun is true', async () => { + const ctx = { + storyId: 'test', + config: { ...orchestrator.config, dryRun: true }, + }; + const result = await orchestrator.phaseExecute(ctx); + expect(result.dryRun).toBe(true); + }); + + test('phaseQA returns dryRun when config.dryRun is true', async () => { + const ctx = { + storyId: 'test', + config: { ...orchestrator.config, dryRun: true }, + }; + const result = await orchestrator.phaseQA(ctx); + expect(result.dryRun).toBe(true); + 
}); + + test('phaseMerge returns dryRun when config.dryRun is true', async () => { + const ctx = { + storyId: 'test', + config: { ...orchestrator.config, dryRun: true }, + }; + const result = await orchestrator.phaseMerge(ctx); + expect(result.dryRun).toBe(true); + }); + + test('phaseMerge returns skipped when no worktree', async () => { + const ctx = { + storyId: 'test', + config: { ...orchestrator.config }, + worktree: null, + }; + const result = await orchestrator.phaseMerge(ctx); + expect(result.skipped).toBe(true); + }); + + test('phaseCleanup returns dryRun when config.dryRun is true', async () => { + const ctx = { + storyId: 'test', + config: { ...orchestrator.config, dryRun: true }, + }; + const result = await orchestrator.phaseCleanup(ctx); + expect(result.dryRun).toBe(true); + }); + + test('phaseCleanup returns skipped when no worktree', async () => { + const ctx = { + storyId: 'test', + config: { ...orchestrator.config }, + worktree: null, + }; + const result = await orchestrator.phaseCleanup(ctx); + expect(result.skipped).toBe(true); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // generatePlan() + // ───────────────────────────────────────────────────────────────────────────── + + describe('generatePlan()', () => { + test('should generate plan from story file', async () => { + const storiesDir = path.join(testDir, 'docs', 'stories'); + fs.mkdirSync(storiesDir, { recursive: true }); + fs.writeFileSync( + path.join(storiesDir, 'gen-plan-story.md'), + '# Story\n\n- [ ] AC1: First criteria\n- [ ] AC2: Second criteria\n', + ); + + const ctx = { + storyId: 'gen-plan-story', + storyPath: path.join(storiesDir, 'gen-plan-story.md'), + config: orchestrator.config, + }; + + const plan = await orchestrator.generatePlan(ctx); + expect(plan.storyId).toBe('gen-plan-story'); + expect(plan.phases[0].subtasks.length).toBe(2); + expect(plan.phases[0].subtasks[0].description).toContain('First criteria'); + }); + }); + + 
// ───────────────────────────────────────────────────────────────────────────── + // log() + // ───────────────────────────────────────────────────────────────────────────── + + describe('log()', () => { + test('should log at different levels', () => { + const spy = jest.spyOn(console, 'log').mockImplementation(); + orchestrator.log('test info message', 'info'); + expect(spy).toHaveBeenCalled(); + spy.mockRestore(); + }); + + test('should skip debug messages when not verbose', () => { + orchestrator.config.verbose = false; + const spy = jest.spyOn(console, 'log').mockImplementation(); + orchestrator.log('debug message', 'debug'); + expect(spy).not.toHaveBeenCalled(); + spy.mockRestore(); + }); + + test('should show debug messages when verbose', () => { + orchestrator.config.verbose = true; + const spy = jest.spyOn(console, 'log').mockImplementation(); + orchestrator.log('debug message', 'debug'); + expect(spy).toHaveBeenCalled(); + spy.mockRestore(); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // buildSubtaskPrompt() + // ───────────────────────────────────────────────────────────────────────────── + + describe('buildSubtaskPrompt()', () => { + test('should format prompt with subtask details (BO-21)', () => { + const subtask = { + id: '1.1', + description: 'Implement auth module', + files: ['src/auth.js', 'src/auth.test.js'], + acceptanceCriteria: ['User can log in', 'Token is generated'], + }; + const execCtx = { iteration: 1, config: { maxIterations: 10 } }; + const buildCtx = { storyId: 'story-1' }; + + const prompt = orchestrator.buildSubtaskPrompt(subtask, execCtx, buildCtx); + + expect(prompt).toContain('Story story-1'); + expect(prompt).toContain('1.1'); + expect(prompt).toContain('Implement auth module'); + expect(prompt).toContain('src/auth.js'); + expect(prompt).toContain('User can log in'); + expect(prompt).toContain('Token is generated'); + expect(prompt).toContain('attempt 1 of 10'); + }); + + 
test('should include retry message on subsequent attempts', () => { + const subtask = { id: '1.1', description: 'Test' }; + const execCtx = { iteration: 3, config: { maxIterations: 10 } }; + const buildCtx = { storyId: 'story-1' }; + + const prompt = orchestrator.buildSubtaskPrompt(subtask, execCtx, buildCtx); + + expect(prompt).toContain('attempt 3 of 10'); + expect(prompt).toContain('Previous attempt failed'); + }); + + test('should include gotchas context when available', () => { + const subtask = { id: '1.1', description: 'Test' }; + const execCtx = { iteration: 1, config: { maxIterations: 10 } }; + const buildCtx = { + storyId: 'story-1', + relevantGotchas: [ + { title: 'ESM import issue', description: 'Use require() not import', workaround: 'Use CommonJS' }, + ], + }; + + const prompt = orchestrator.buildSubtaskPrompt(subtask, execCtx, buildCtx); + + expect(prompt).toContain('Known Gotchas'); + expect(prompt).toContain('ESM import issue'); + expect(prompt).toContain('Use CommonJS'); + }); + + test('should include verification command when specified', () => { + const subtask = { + id: '1.1', + description: 'Test', + verification: { command: 'npm test' }, + }; + const execCtx = { iteration: 1, config: { maxIterations: 10 } }; + const buildCtx = { storyId: 'story-1' }; + + const prompt = orchestrator.buildSubtaskPrompt(subtask, execCtx, buildCtx); + + expect(prompt).toContain('Verification'); + expect(prompt).toContain('npm test'); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // validateSubtaskResult() + // ───────────────────────────────────────────────────────────────────────────── + + describe('validateSubtaskResult()', () => { + test('should return true for clean output (BO-22)', () => { + const result = { stdout: 'All good, implementation complete.' 
}; + const subtask = { id: '1.1' }; + + expect(orchestrator.validateSubtaskResult(result, subtask)).toBe(true); + }); + + test('should return false for output with error and failed (BO-23)', () => { + const result = { stdout: 'Error: something went wrong and the build failed' }; + const subtask = { id: '1.1' }; + + expect(orchestrator.validateSubtaskResult(result, subtask)).toBe(false); + }); + + test('should return false for test failed output', () => { + const result = { stdout: 'Running tests...\nTest failed: 2 of 5' }; + const subtask = { id: '1.1' }; + + expect(orchestrator.validateSubtaskResult(result, subtask)).toBe(false); + }); + + test('should return true when output contains verification passed', () => { + const result = { stdout: 'verification passed for all checks' }; + const subtask = { id: '1.1', verification: { command: 'npm test' } }; + + expect(orchestrator.validateSubtaskResult(result, subtask)).toBe(true); + }); + + test('should return true when output contains check mark', () => { + const result = { stdout: 'All checks ✓ complete' }; + const subtask = { id: '1.1', verification: 'npm test' }; + + expect(orchestrator.validateSubtaskResult(result, subtask)).toBe(true); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // extractModifiedFiles() + // ───────────────────────────────────────────────────────────────────────────── + + describe('extractModifiedFiles()', () => { + test('should extract files from wrote/created patterns (BO-24)', () => { + const output = 'wrote src/auth.js\ncreated src/auth.test.js\nmodified package.json'; + + const files = orchestrator.extractModifiedFiles(output); + + expect(files).toContain('src/auth.js'); + expect(files).toContain('src/auth.test.js'); + expect(files).toContain('package.json'); + }); + + test('should extract files from file: pattern', () => { + const output = 'file: src/index.js\nfile: README.md'; + + const files = 
orchestrator.extractModifiedFiles(output); + + expect(files).toContain('src/index.js'); + expect(files).toContain('README.md'); + }); + + test('should deduplicate files', () => { + const output = 'wrote src/auth.js\nmodified src/auth.js'; + + const files = orchestrator.extractModifiedFiles(output); + + expect(files.filter((f) => f === 'src/auth.js')).toHaveLength(1); + }); + + test('should return empty array for no matches', () => { + const output = 'No files were changed.'; + + const files = orchestrator.extractModifiedFiles(output); + + expect(files).toEqual([]); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // formatDuration() + // ───────────────────────────────────────────────────────────────────────────── + + describe('formatDuration()', () => { + test('should format seconds only (BO-27)', () => { + expect(orchestrator.formatDuration(5000)).toBe('5s'); + expect(orchestrator.formatDuration(30000)).toBe('30s'); + }); + + test('should format minutes and seconds', () => { + expect(orchestrator.formatDuration(90000)).toBe('1m 30s'); + expect(orchestrator.formatDuration(300000)).toBe('5m 0s'); + }); + + test('should handle zero', () => { + expect(orchestrator.formatDuration(0)).toBe('0s'); + }); + + test('should handle sub-second values', () => { + expect(orchestrator.formatDuration(500)).toBe('0s'); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // findStoryFile() + // ───────────────────────────────────────────────────────────────────────────── + + describe('findStoryFile()', () => { + test('should find story in stories root', () => { + const storiesDir = path.join(testDir, 'docs', 'stories'); + fs.mkdirSync(storiesDir, { recursive: true }); + fs.writeFileSync(path.join(storiesDir, 'my-story.md'), '# Story'); + + const result = orchestrator.findStoryFile('my-story'); + + expect(result).toContain('my-story.md'); + }); + + test('should find story in 
subdirectory', () => { + const subDir = path.join(testDir, 'docs', 'stories', 'v2.1'); + fs.mkdirSync(subDir, { recursive: true }); + fs.writeFileSync(path.join(subDir, 'sprint-story.md'), '# Story'); + + const result = orchestrator.findStoryFile('sprint-story'); + + expect(result).toContain('sprint-story.md'); + }); + + test('should return null for nonexistent story', () => { + const storiesDir = path.join(testDir, 'docs', 'stories'); + fs.mkdirSync(storiesDir, { recursive: true }); + + const result = orchestrator.findStoryFile('missing-story'); + + expect(result).toBeNull(); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // getActiveBuilds() + // ───────────────────────────────────────────────────────────────────────────── + + describe('getActiveBuilds()', () => { + test('should return empty array when no builds active', () => { + expect(orchestrator.getActiveBuilds()).toEqual([]); + }); + + test('should return active build info', () => { + orchestrator.activeBuilds.set('story-1', { + storyId: 'story-1', + currentPhase: Phase.EXECUTE, + startTime: Date.now() - 5000, + }); + + const builds = orchestrator.getActiveBuilds(); + + expect(builds).toHaveLength(1); + expect(builds[0].storyId).toBe('story-1'); + expect(builds[0].phase).toBe(Phase.EXECUTE); + expect(builds[0].duration).toBeGreaterThanOrEqual(0); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // generateReport() + // ───────────────────────────────────────────────────────────────────────────── + + describe('generateReport()', () => { + test('should generate success report', () => { + const ctx = { + storyId: 'story-1', + worktree: null, + qaResult: { success: true }, + mergeResult: null, + phases: { + init: { status: 'completed', duration: 100 }, + plan: { status: 'completed', duration: 200 }, + }, + errors: [], + result: { stats: { completedSubtasks: 5, failedSubtasks: 0, totalIterations: 5 } }, + }; + + 
const report = orchestrator.generateReport(ctx, 5000, false); + + expect(report).toContain('SUCCESS'); + expect(report).toContain('story-1'); + expect(report).toContain('init'); + expect(report).toContain('plan'); + expect(report).toContain('Completed Subtasks: 5'); + }); + + test('should generate failure report with errors', () => { + const ctx = { + storyId: 'story-1', + worktree: null, + qaResult: { success: false }, + mergeResult: null, + phases: { + init: { status: 'completed', duration: 100 }, + execute: { status: 'failed', duration: 300, error: 'Build failed' }, + }, + errors: [new Error('Build loop failed')], + result: null, + }; + + const report = orchestrator.generateReport(ctx, 3000, true); + + expect(report).toContain('FAILED'); + expect(report).toContain('Build loop failed'); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────── + // phaseReport() + // ───────────────────────────────────────────────────────────────────────────── + + describe('phaseReport()', () => { + test('should write report to file', async () => { + const ctx = { + storyId: 'test-report', + startTime: Date.now() - 5000, + config: { ...orchestrator.config, reportDir: 'plan' }, + worktree: null, + qaResult: null, + mergeResult: null, + phases: {}, + errors: [], + result: null, + }; + + const result = await orchestrator.phaseReport(ctx); + + expect(result.path).toContain('build-report-test-report.md'); + expect(fs.existsSync(result.path)).toBe(true); + expect(ctx.reportPath).toBe(result.path); + }); + + test('should emit REPORT_GENERATED event', async () => { + const tracker = collectEvents(orchestrator, [OrchestratorEvent.REPORT_GENERATED]); + + const ctx = { + storyId: 'test-report', + startTime: Date.now(), + config: { ...orchestrator.config, reportDir: 'plan' }, + worktree: null, + qaResult: null, + mergeResult: null, + phases: {}, + errors: [], + result: null, + }; + + await orchestrator.phaseReport(ctx); + + 
expect(tracker.count(OrchestratorEvent.REPORT_GENERATED)).toBe(1); + }); + }); +}); + +``` + +================================================== +📄 tests/core/execution-test-helpers.js +================================================== +```js +/** + * Execution System - Shared Test Helpers + * Story EXC-1 - Test Coverage for .aios-core/core/execution/ + * + * Mock factories and utilities shared across all 9 execution test files. + * Follows patterns from build-state-manager.test.js. + */ + +const path = require('path'); +const fs = require('fs'); +const os = require('os'); + +// ═══════════════════════════════════════════════════════════════════════════════ +// TEMP DIRECTORY HELPERS +// ═══════════════════════════════════════════════════════════════════════════════ + +/** + * Create a unique temp directory for test isolation. + * @param {string} prefix - Directory prefix + * @returns {string} - Absolute path to temp directory + */ +function createTempDir(prefix = 'exec-test-') { + return fs.mkdtempSync(path.join(os.tmpdir(), prefix)); +} + +/** + * Remove a temp directory and all contents. + * @param {string} dirPath - Directory to remove + */ +function cleanupTempDir(dirPath) { + if (dirPath && fs.existsSync(dirPath)) { + fs.rmSync(dirPath, { recursive: true, force: true }); + } +} + +// ═══════════════════════════════════════════════════════════════════════════════ +// CHILD_PROCESS MOCK +// ═══════════════════════════════════════════════════════════════════════════════ + +/** + * Create a mock child_process.spawn that simulates stdout/stderr/exit. 
+ * @param {string} stdout - Simulated stdout data + * @param {string} stderr - Simulated stderr data + * @param {number} exitCode - Simulated exit code (0 = success) + * @returns {Object} - Mock spawn function and helpers + */ +function mockChildProcess(stdout = '', stderr = '', exitCode = 0) { + const EventEmitter = require('events'); + + const mockProcess = new EventEmitter(); + mockProcess.stdout = new EventEmitter(); + mockProcess.stderr = new EventEmitter(); + mockProcess.pid = Math.floor(Math.random() * 10000) + 1000; + mockProcess.kill = jest.fn(); + + const spawnFn = jest.fn().mockReturnValue(mockProcess); + + // Schedule data emission after spawn is called + const emitData = () => { + process.nextTick(() => { + if (stdout) { + mockProcess.stdout.emit('data', Buffer.from(stdout)); + } + if (stderr) { + mockProcess.stderr.emit('data', Buffer.from(stderr)); + } + process.nextTick(() => { + mockProcess.emit('close', exitCode); + }); + }); + }; + + return { + spawn: spawnFn, + process: mockProcess, + emitData, + /** + * Configure spawn to emit data automatically on next call. + */ + autoEmit() { + spawnFn.mockImplementation(() => { + emitData(); + return mockProcess; + }); + return this; + }, + }; +} + +// ═══════════════════════════════════════════════════════════════════════════════ +// TASK MOCK FACTORY +// ═══════════════════════════════════════════════════════════════════════════════ + +/** + * Create a mock task object for wave/build testing. + * @param {Object} overrides - Fields to override + * @returns {Object} - Mock task + */ +function createMockTask(overrides = {}) { + const id = overrides.id || `task-${Math.floor(Math.random() * 1000)}`; + return { + id, + description: `Test task ${id}`, + agent: 'dev', + critical: false, + dependencies: [], + subtasks: [], + ...overrides, + }; +} + +/** + * Create an array of mock tasks. 
+ * @param {number} count - Number of tasks to create + * @param {Object} overrides - Shared overrides for all tasks + * @returns {Array} - Array of mock tasks + */ +function createMockTasks(count, overrides = {}) { + return Array.from({ length: count }, (_, i) => + createMockTask({ id: `task-${i + 1}`, ...overrides }), + ); +} + +// ═══════════════════════════════════════════════════════════════════════════════ +// WAVE RESULTS MOCK FACTORY +// ═══════════════════════════════════════════════════════════════════════════════ + +/** + * Create mock wave results for aggregation testing. + * @param {number} taskCount - Number of task results + * @param {number} successRate - Success rate (0-1) + * @returns {Array} - Array of mock task results + */ +function createMockWaveResults(taskCount = 3, successRate = 1.0) { + const results = []; + for (let i = 0; i < taskCount; i++) { + const success = Math.random() < successRate; + results.push({ + taskId: `task-${i + 1}`, + success, + critical: false, + duration: Math.floor(Math.random() * 5000) + 100, + result: success + ? { success: true, output: `Output for task-${i + 1}` } + : undefined, + error: success ? undefined : 'Simulated task failure', + }); + } + return results; +} + +/** + * Create deterministic wave results (no randomness). + * @param {Array} outcomes - Array of success/failure booleans + * @returns {Array} - Array of mock task results + */ +function createDeterministicWaveResults(outcomes) { + return outcomes.map((success, i) => ({ + taskId: `task-${i + 1}`, + success, + critical: false, + duration: (i + 1) * 1000, + result: success + ? { success: true, output: `Output for task-${i + 1}` } + : undefined, + error: success ? 
undefined : `Task task-${i + 1} failed`, + })); +} + +// ═══════════════════════════════════════════════════════════════════════════════ +// MEMORY DEPENDENCY MOCKS +// ═══════════════════════════════════════════════════════════════════════════════ + +/** + * Create a mock MemoryQuery instance. + * @param {Object} overrides - Method overrides + * @returns {Object} - Mock MemoryQuery + */ +function createMockMemoryQuery(overrides = {}) { + return { + query: jest.fn().mockResolvedValue([]), + getRelevantContext: jest.fn().mockResolvedValue(''), + search: jest.fn().mockResolvedValue([]), + ...overrides, + }; +} + +/** + * Create a mock GotchasMemory instance. + * @param {Object} overrides - Method overrides + * @returns {Object} - Mock GotchasMemory + */ +function createMockGotchasMemory(overrides = {}) { + return { + getRelevantGotchas: jest.fn().mockResolvedValue([]), + addGotcha: jest.fn().mockResolvedValue(true), + getAll: jest.fn().mockReturnValue([]), + search: jest.fn().mockReturnValue([]), + ...overrides, + }; +} + +/** + * Create a mock SessionMemory instance. + * @param {Object} overrides - Method overrides + * @returns {Object} - Mock SessionMemory + */ +function createMockSessionMemory(overrides = {}) { + return { + get: jest.fn().mockReturnValue(null), + set: jest.fn(), + getAll: jest.fn().mockReturnValue({}), + clear: jest.fn(), + ...overrides, + }; +} + +// ═══════════════════════════════════════════════════════════════════════════════ +// BUILD STATE HELPERS +// ═══════════════════════════════════════════════════════════════════════════════ + +/** + * Create a mock plan object for AutonomousBuildLoop. 
+ * @param {Object} overrides - Fields to override + * @returns {Object} - Mock plan + */ +function createMockPlan(overrides = {}) { + return { + storyId: 'test-story-1', + tasks: [ + { + id: 'task-1', + subtasks: [ + { id: '1.1', description: 'Subtask 1.1' }, + { id: '1.2', description: 'Subtask 1.2' }, + ], + }, + { + id: 'task-2', + subtasks: [ + { id: '2.1', description: 'Subtask 2.1' }, + ], + }, + ], + ...overrides, + }; +} + +/** + * Create a mock BuildStateManager. + * @param {Object} overrides - Method overrides + * @returns {Object} - Mock BuildStateManager + */ +function createMockBuildStateManager(overrides = {}) { + const state = { + storyId: 'test-story-1', + status: 'pending', + checkpoints: [], + completedSubtasks: [], + currentSubtask: null, + metrics: { totalSubtasks: 3, completedSubtasks: 0, totalFailures: 0 }, + }; + + return { + createState: jest.fn().mockReturnValue(state), + loadState: jest.fn().mockReturnValue(state), + loadOrCreateState: jest.fn().mockReturnValue(state), + saveState: jest.fn(), + getState: jest.fn().mockReturnValue(state), + saveCheckpoint: jest.fn().mockReturnValue({ id: 'cp-1', subtaskId: '1.1' }), + startSubtask: jest.fn(), + completeSubtask: jest.fn(), + completeBuild: jest.fn(), + failBuild: jest.fn(), + recordFailure: jest.fn().mockReturnValue({ failure: {}, isStuck: false }), + getLastCheckpoint: jest.fn().mockReturnValue(null), + getStatus: jest.fn().mockReturnValue({ exists: true }), + _state: state, + ...overrides, + }; +} + +/** + * Create a mock RecoveryTracker. + * @param {Object} overrides - Method overrides + * @returns {Object} - Mock RecoveryTracker + */ +function createMockRecoveryTracker(overrides = {}) { + return { + trackAttempt: jest.fn(), + getAttempts: jest.fn().mockReturnValue([]), + isStuck: jest.fn().mockReturnValue(false), + reset: jest.fn(), + ...overrides, + }; +} + +/** + * Create a mock WorktreeManager. 
+ * @param {Object} overrides - Method overrides + * @returns {Object} - Mock WorktreeManager + */ +function createMockWorktreeManager(overrides = {}) { + return { + create: jest.fn().mockResolvedValue({ path: '/tmp/worktree', branch: 'feat/test' }), + remove: jest.fn().mockResolvedValue(true), + list: jest.fn().mockResolvedValue([]), + getWorktreePath: jest.fn().mockReturnValue('/tmp/worktree'), + ...overrides, + }; +} + +// ═══════════════════════════════════════════════════════════════════════════════ +// EVENT EMITTER HELPERS +// ═══════════════════════════════════════════════════════════════════════════════ + +/** + * Collect all events emitted by an EventEmitter. + * @param {EventEmitter} emitter - The emitter to monitor + * @param {Array} eventNames - Event names to listen for + * @returns {Object} - Object with events array and helper methods + */ +function collectEvents(emitter, eventNames) { + const events = []; + + for (const name of eventNames) { + emitter.on(name, (data) => { + events.push({ name, data, timestamp: Date.now() }); + }); + } + + return { + events, + getByName(name) { + return events.filter((e) => e.name === name); + }, + count(name) { + return name ? events.filter((e) => e.name === name).length : events.length; + }, + clear() { + events.length = 0; + }, + }; +} + +// ═══════════════════════════════════════════════════════════════════════════════ +// TASK EXECUTOR HELPERS +// ═══════════════════════════════════════════════════════════════════════════════ + +/** + * Create a mock task executor that resolves/rejects based on config. 
+ * @param {Object} options - Configuration + * @param {boolean} options.success - Whether to succeed + * @param {number} options.delay - Simulated delay in ms + * @param {string} options.output - Output text + * @returns {Function} - Async task executor function + */ +function createMockTaskExecutor(options = {}) { + const { success = true, delay = 0, output = 'mock output' } = options; + + return jest.fn().mockImplementation(async () => { + if (delay > 0) { + await new Promise((resolve) => setTimeout(resolve, delay)); + } + if (success) { + return { success: true, output }; + } + throw new Error(options.error || 'Mock task execution failed'); + }); +} + +// ═══════════════════════════════════════════════════════════════════════════════ +// EXPORTS +// ═══════════════════════════════════════════════════════════════════════════════ + +module.exports = { + // Temp directory + createTempDir, + cleanupTempDir, + + // Child process + mockChildProcess, + + // Task factories + createMockTask, + createMockTasks, + createMockTaskExecutor, + + // Wave results + createMockWaveResults, + createDeterministicWaveResults, + + // Memory mocks + createMockMemoryQuery, + createMockGotchasMemory, + createMockSessionMemory, + + // Build system mocks + createMockPlan, + createMockBuildStateManager, + createMockRecoveryTracker, + createMockWorktreeManager, + + // Event helpers + collectEvents, +}; + +``` + +================================================== +📄 tests/core/context-aware-greetings.test.js +================================================== +```js +/** + * Tests for Story ACT-7: Context-Aware Greeting Sections + * + * Validates that greeting sections adapt intelligently to session context + * instead of showing static templates every time. 
+ * + * Test Coverage: + * - AC1: Section builders receive full enriched context object + * - AC2: Presentation adapts: new=full, existing=brief, workflow=focused + * - AC3: Role description references current story and branch + * - AC4: Project status uses natural language narrative + * - AC5: Context section references previous agent handoff + * - AC6: Footer varies by session context + * - AC7: Parallelizable sections with Promise.all() + * - AC8: Fallback to static templates on failure (150ms timeout) + * - AC9: Performance within 200ms total + * - AC10: A/B comparison: static vs context-aware + */ + +const GreetingBuilder = require('../../.aios-core/development/scripts/greeting-builder'); +const ContextDetector = require('../../.aios-core/core/session/context-detector'); +const GitConfigDetector = require('../../.aios-core/infrastructure/scripts/git-config-detector'); + +// Mock dependencies +jest.mock('../../.aios-core/core/session/context-detector'); +jest.mock('../../.aios-core/infrastructure/scripts/git-config-detector'); +jest.mock('../../.aios-core/infrastructure/scripts/project-status-loader', () => ({ + loadProjectStatus: jest.fn(), + formatStatusDisplay: jest.fn(), +})); +jest.mock('../../.aios-core/core/config/config-resolver', () => ({ + resolveConfig: jest.fn(() => ({ + config: { user_profile: 'advanced' }, + warnings: [], + legacy: false, + })), +})); +jest.mock('../../.aios-core/development/scripts/greeting-preference-manager', () => { + return jest.fn().mockImplementation(() => ({ + getPreference: jest.fn().mockReturnValue('auto'), + setPreference: jest.fn(), + getConfig: jest.fn().mockReturnValue({}), + })); +}); +// Story ACT-5: Mock SessionState and SurfaceChecker +jest.mock('../../.aios-core/core/orchestration/session-state', () => ({ + SessionState: jest.fn().mockImplementation(() => ({ + getStateFilePath: jest.fn().mockReturnValue('/tmp/nonexistent-state.yaml'), + })), + sessionStateExists: jest.fn().mockReturnValue(false), +})); 
+jest.mock('../../.aios-core/core/orchestration/surface-checker', () => ({ + SurfaceChecker: jest.fn().mockImplementation(() => ({ + load: jest.fn().mockReturnValue(false), + shouldSurface: jest.fn().mockReturnValue({ should_surface: false }), + })), +})); + +const { loadProjectStatus } = require('../../.aios-core/infrastructure/scripts/project-status-loader'); + +describe('Story ACT-7: Context-Aware Greeting Sections', () => { + let builder; + let mockAgent; + + // Enriched context template matching UnifiedActivationPipeline output shape + const baseEnrichedContext = { + sessionType: 'new', + projectStatus: { + branch: 'feat/act-7', + modifiedFiles: ['greeting-builder.js', 'unified-activation-pipeline.js'], + modifiedFilesTotalCount: 2, + recentCommits: ['feat: implement context-aware greetings'], + currentStory: 'ACT-7', + isGitRepo: true, + }, + gitConfig: { configured: true, type: 'github', branch: 'feat/act-7' }, + permissions: { mode: 'ask', badge: '[Ask]' }, + preference: 'auto', + workflowState: null, + userProfile: 'advanced', + conversationHistory: [], + lastCommands: [], + previousAgent: null, + sessionMessage: null, + workflowActive: null, + sessionStory: 'ACT-7', + }; + + beforeEach(() => { + jest.clearAllMocks(); + + mockAgent = { + id: 'dev', + name: 'Dex', + icon: '\uD83D\uDCBB', + persona_profile: { + archetype: 'Builder', + greeting_levels: { + minimal: '\uD83D\uDCBB dev Agent ready', + named: "\uD83D\uDCBB Dex (Builder) ready. 
Let's build something great!", + archetypal: '\uD83D\uDCBB Dex the Builder ready to innovate!', + }, + communication: { + signature_closing: '-- Dex, sempre construindo', + }, + }, + persona: { + role: 'Expert Senior Software Engineer & Implementation Specialist', + }, + commands: [ + { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show help' }, + { name: 'develop', visibility: ['full', 'quick'], description: 'Implement story tasks' }, + { name: 'run-tests', visibility: ['quick', 'key'], description: 'Execute tests' }, + { name: 'build', visibility: ['full'], description: 'Build project' }, + ], + }; + + ContextDetector.mockImplementation(() => ({ + detectSessionType: jest.fn().mockReturnValue('new'), + })); + + GitConfigDetector.mockImplementation(() => ({ + get: jest.fn().mockReturnValue({ + configured: true, + type: 'github', + branch: 'feat/act-7', + }), + })); + + loadProjectStatus.mockResolvedValue(baseEnrichedContext.projectStatus); + + builder = new GreetingBuilder(); + }); + + // ======================================================================== + // AC1: Section builders receive full enriched context + // ======================================================================== + describe('AC1: Enriched context passing', () => { + test('buildPresentation accepts sectionContext parameter', () => { + const result = builder.buildPresentation(mockAgent, 'new', '', { + sessionType: 'new', + projectStatus: baseEnrichedContext.projectStatus, + }); + expect(result).toBeTruthy(); + expect(typeof result).toBe('string'); + }); + + test('buildRoleDescription accepts sectionContext parameter', () => { + const result = builder.buildRoleDescription(mockAgent, { + sessionStory: 'ACT-7', + projectStatus: { branch: 'feat/act-7', currentStory: 'ACT-7' }, + gitConfig: { branch: 'feat/act-7' }, + }); + expect(result).toContain('Role:'); + }); + + test('buildProjectStatus accepts sectionContext parameter', () => { + const result = 
builder.buildProjectStatus( + baseEnrichedContext.projectStatus, + 'new', + { sessionType: 'new' }, + ); + expect(result).toContain('Project Status'); + }); + + test('buildFooter accepts sectionContext parameter', () => { + const result = builder.buildFooter(mockAgent, { sessionType: 'new' }); + expect(result).toBeTruthy(); + }); + + test('buildContextSection accepts sectionContext parameter', () => { + const result = builder.buildContextSection( + mockAgent, + { ...baseEnrichedContext, previousAgent: 'qa' }, + 'existing', + baseEnrichedContext.projectStatus, + { previousAgent: 'qa' }, + ); + // Should produce a context section for existing session with previous agent + expect(result).toBeTruthy(); + }); + + test('backward compatible: all section builders work without sectionContext', () => { + // All methods should still work with their original signatures + expect(builder.buildPresentation(mockAgent, 'new', '')).toBeTruthy(); + expect(builder.buildRoleDescription(mockAgent)).toContain('Role:'); + expect(builder.buildProjectStatus(baseEnrichedContext.projectStatus, 'new')).toContain('Project Status'); + expect(builder.buildFooter(mockAgent)).toBeTruthy(); + expect(builder.buildContextSection(mockAgent, {}, 'existing', baseEnrichedContext.projectStatus)).toBeDefined(); + }); + }); + + // ======================================================================== + // AC2: Presentation adapts to session type + // ======================================================================== + describe('AC2: Adaptive presentation', () => { + test('new session: uses full archetypal greeting', () => { + const result = builder.buildPresentation(mockAgent, 'new', '', { + sessionType: 'new', + }); + expect(result).toContain('Dex the Builder ready to innovate!'); + }); + + test('existing session: brief welcome back with story reference', () => { + const result = builder.buildPresentation(mockAgent, 'existing', '', { + sessionType: 'existing', + sessionStory: 'ACT-7', + 
projectStatus: { currentStory: 'ACT-7' }, + }); + expect(result).toContain('Dex (Builder) ready'); + expect(result).toContain('continuing ACT-7'); + }); + + test('existing session: welcome back without story', () => { + const result = builder.buildPresentation(mockAgent, 'existing', '', { + sessionType: 'existing', + sessionStory: null, + projectStatus: null, + }); + expect(result).toContain('welcome back'); + }); + + test('workflow session: focused greeting with workflow active', () => { + const result = builder.buildPresentation(mockAgent, 'workflow', '', { + sessionType: 'workflow', + workflowState: { currentPhase: 'development' }, + }); + expect(result).toContain('Dex (Builder) ready'); + expect(result).toContain('workflow active'); + }); + + test('workflow session: named greeting when no workflow state', () => { + const result = builder.buildPresentation(mockAgent, 'workflow', '', { + sessionType: 'workflow', + workflowState: null, + workflowActive: null, + }); + expect(result).toContain('Dex (Builder) ready'); + expect(result).not.toContain('workflow active'); + }); + + test('permission badge is appended in all session types', () => { + const badge = '[Auto]'; + + const newResult = builder.buildPresentation(mockAgent, 'new', badge, { sessionType: 'new' }); + expect(newResult).toContain('[Auto]'); + + const existingResult = builder.buildPresentation(mockAgent, 'existing', badge, { + sessionType: 'existing', + }); + expect(existingResult).toContain('[Auto]'); + + const workflowResult = builder.buildPresentation(mockAgent, 'workflow', badge, { + sessionType: 'workflow', + }); + expect(workflowResult).toContain('[Auto]'); + }); + + test('no sectionContext: falls back to archetypal (backward compatible)', () => { + const result = builder.buildPresentation(mockAgent, 'existing', ''); + // Without sectionContext, even existing session uses archetypal + expect(result).toContain('Dex the Builder ready to innovate!'); + }); + }); + + // 
======================================================================== + // AC3: Role description references story and branch + // ======================================================================== + describe('AC3: Context-aware role description', () => { + test('includes story reference when available', () => { + const result = builder.buildRoleDescription(mockAgent, { + sessionStory: 'ACT-7', + projectStatus: { branch: 'main' }, + gitConfig: { branch: 'main' }, + }); + expect(result).toContain('Role:'); + expect(result).toContain('Story: ACT-7'); + }); + + test('includes branch reference when not main/master', () => { + const result = builder.buildRoleDescription(mockAgent, { + sessionStory: null, + projectStatus: { branch: 'feat/act-7' }, + gitConfig: { branch: 'feat/act-7' }, + }); + expect(result).toContain('Branch: `feat/act-7`'); + }); + + test('skips branch reference for main', () => { + const result = builder.buildRoleDescription(mockAgent, { + sessionStory: null, + projectStatus: { branch: 'main' }, + gitConfig: { branch: 'main' }, + }); + expect(result).not.toContain('Branch:'); + }); + + test('skips branch reference for master', () => { + const result = builder.buildRoleDescription(mockAgent, { + sessionStory: null, + projectStatus: { branch: 'master' }, + gitConfig: { branch: 'master' }, + }); + expect(result).not.toContain('Branch:'); + }); + + test('includes both story and branch when both available', () => { + const result = builder.buildRoleDescription(mockAgent, { + sessionStory: 'ACT-7', + projectStatus: { branch: 'feat/act-7', currentStory: 'ACT-7' }, + gitConfig: { branch: 'feat/act-7' }, + }); + expect(result).toContain('Story: ACT-7'); + expect(result).toContain('Branch: `feat/act-7`'); + expect(result).toContain('|'); // separator + }); + + test('no sectionContext: returns plain role (backward compatible)', () => { + const result = builder.buildRoleDescription(mockAgent); + expect(result).toBe('**Role:** Expert Senior Software 
Engineer & Implementation Specialist'); + expect(result).not.toContain('Story:'); + expect(result).not.toContain('Branch:'); + }); + + test('returns empty for agent without role', () => { + const agentNoRole = { ...mockAgent, persona: {} }; + expect(builder.buildRoleDescription(agentNoRole)).toBe(''); + }); + }); + + // ======================================================================== + // AC4: Natural language project status narrative + // ======================================================================== + describe('AC4: Natural language project status', () => { + test('narrative format with branch and file count', () => { + const result = builder.buildProjectStatus( + baseEnrichedContext.projectStatus, + 'new', + { sessionType: 'new' }, + ); + expect(result).toContain("You're on branch `feat/act-7`"); + expect(result).toContain('2 modified files'); + }); + + test('narrative format with story reference', () => { + const result = builder.buildProjectStatus( + baseEnrichedContext.projectStatus, + 'new', + { sessionType: 'new' }, + ); + expect(result).toContain('Story **ACT-7** is in progress'); + }); + + test('narrative format with recent commit', () => { + const result = builder.buildProjectStatus( + baseEnrichedContext.projectStatus, + 'new', + { sessionType: 'new' }, + ); + expect(result).toContain('Last commit:'); + expect(result).toContain('implement context-aware greetings'); + }); + + test('workflow session still uses condensed format', () => { + const result = builder.buildProjectStatus( + baseEnrichedContext.projectStatus, + 'workflow', + { sessionType: 'workflow' }, + ); + expect(result).toContain('🌿 feat/act-7'); + expect(result).toContain('📝 2 modified'); + }); + + test('singular file count uses correct grammar', () => { + const singleFileStatus = { + ...baseEnrichedContext.projectStatus, + modifiedFilesTotalCount: 1, + }; + const result = builder.buildProjectStatus(singleFileStatus, 'new', { sessionType: 'new' }); + 
expect(result).toContain('1 modified file.'); + expect(result).not.toContain('1 modified files'); + }); + + test('no sectionContext: uses legacy bullet-point format', () => { + const result = builder.buildProjectStatus(baseEnrichedContext.projectStatus, 'new'); + expect(result).toContain('**Branch:**'); + expect(result).toContain('**Modified:**'); + }); + + test('empty status returns empty string', () => { + expect(builder.buildProjectStatus(null, 'new', {})).toBe(''); + }); + + test('status with no data returns empty string', () => { + const result = builder.buildProjectStatus({}, 'new', { sessionType: 'new' }); + expect(result).toBe(''); + }); + }); + + // ======================================================================== + // AC5: Context section references previous agent handoff + // ======================================================================== + describe('AC5: Intelligent agent handoff context', () => { + test('detects previous agent transition with story context', () => { + const context = { + ...baseEnrichedContext, + sessionType: 'existing', + previousAgent: { agentId: 'qa', agentName: 'Quinn' }, + }; + const result = builder.buildContextSection( + mockAgent, + context, + 'existing', + baseEnrichedContext.projectStatus, + { previousAgent: context.previousAgent }, + ); + expect(result).toBeTruthy(); + expect(result).toContain('@Quinn'); + }); + + test('handles string previousAgent format', () => { + const context = { + ...baseEnrichedContext, + sessionType: 'existing', + previousAgent: 'qa', + }; + const result = builder.buildContextSection( + mockAgent, + context, + 'existing', + baseEnrichedContext.projectStatus, + { previousAgent: 'qa' }, + ); + expect(result).toBeTruthy(); + expect(result).toContain('@qa'); + }); + + test('handoff fallback when narrative has no description', () => { + const context = { + sessionType: 'existing', + previousAgent: { agentId: 'po', agentName: 'Pax' }, + // No lastCommands, no sessionStory, no projectStatus 
fields + }; + const result = builder.buildContextSection( + mockAgent, + context, + 'existing', + null, // no projectStatus + { previousAgent: context.previousAgent }, + ); + expect(result).toBeTruthy(); + expect(result).toContain('@Pax'); + }); + + test('skips context for new sessions', () => { + const result = builder.buildContextSection( + mockAgent, + baseEnrichedContext, + 'new', + baseEnrichedContext.projectStatus, + {}, + ); + expect(result).toBeNull(); + }); + + test('suggests correct command for dev->qa transition', () => { + const context = { + ...baseEnrichedContext, + sessionType: 'existing', + previousAgent: { agentId: 'dev', agentName: 'Dex' }, + }; + const qaAgent = { ...mockAgent, id: 'qa', name: 'Quinn' }; + const result = builder.buildContextSection( + qaAgent, + context, + 'existing', + baseEnrichedContext.projectStatus, + { previousAgent: context.previousAgent }, + ); + expect(result).toContain('*review'); + }); + }); + + // ======================================================================== + // AC6: Footer varies by session context + // ======================================================================== + describe('AC6: Adaptive footer', () => { + test('new session: full guide prompt', () => { + const result = builder.buildFooter(mockAgent, { sessionType: 'new' }); + expect(result).toContain('*guide'); + expect(result).toContain('comprehensive usage instructions'); + }); + + test('existing session: brief tip', () => { + const result = builder.buildFooter(mockAgent, { sessionType: 'existing' }); + expect(result).toContain('*help'); + expect(result).toContain('*session-info'); + expect(result).not.toContain('*guide'); + }); + + test('workflow session: progress note with story', () => { + const result = builder.buildFooter(mockAgent, { + sessionType: 'workflow', + sessionStory: 'ACT-7', + }); + expect(result).toContain('Focused on **ACT-7**'); + expect(result).toContain('*help'); + }); + + test('workflow session: generic message without 
story', () => { + const result = builder.buildFooter(mockAgent, { + sessionType: 'workflow', + sessionStory: null, + projectStatus: null, + }); + expect(result).toContain('Workflow active'); + }); + + test('signature is always appended when available', () => { + const newFooter = builder.buildFooter(mockAgent, { sessionType: 'new' }); + const existingFooter = builder.buildFooter(mockAgent, { sessionType: 'existing' }); + const workflowFooter = builder.buildFooter(mockAgent, { sessionType: 'workflow' }); + + expect(newFooter).toContain('Dex, sempre construindo'); + expect(existingFooter).toContain('Dex, sempre construindo'); + expect(workflowFooter).toContain('Dex, sempre construindo'); + }); + + test('no sectionContext: defaults to new session footer (backward compatible)', () => { + const result = builder.buildFooter(mockAgent); + expect(result).toContain('*guide'); + expect(result).toContain('Dex, sempre construindo'); + }); + + test('agent without signature: footer still renders', () => { + const agentNoSig = { + ...mockAgent, + persona_profile: { + ...mockAgent.persona_profile, + communication: {}, + }, + }; + const result = builder.buildFooter(agentNoSig, { sessionType: 'existing' }); + expect(result).toContain('*help'); + expect(result).not.toContain('Dex'); + }); + }); + + // ======================================================================== + // AC7: Parallelizable sections with Promise.all() + // ======================================================================== + describe('AC7: Parallelization', () => { + test('_safeBuildSection resolves sync builders', async () => { + const result = await builder._safeBuildSection(() => 'sync result'); + expect(result).toBe('sync result'); + }); + + test('_safeBuildSection resolves async builders', async () => { + const result = await builder._safeBuildSection(() => Promise.resolve('async result')); + expect(result).toBe('async result'); + }); + + test('_safeBuildSection returns null on error', async () => { 
+ const result = await builder._safeBuildSection(() => { + throw new Error('test error'); + }); + expect(result).toBeNull(); + }); + + test('_safeBuildSection returns null on timeout', async () => { + const slowBuilder = () => new Promise((resolve) => setTimeout(() => resolve('too late'), 300)); + const result = await builder._safeBuildSection(slowBuilder); + expect(result).toBeNull(); + }, 1000); + + test('_safeBuildSection handles null return', async () => { + const result = await builder._safeBuildSection(() => null); + expect(result).toBeNull(); + }); + + test('full greeting with enriched context uses parallel execution', async () => { + const context = { + ...baseEnrichedContext, + sessionType: 'existing', + previousAgent: { agentId: 'qa', agentName: 'Quinn' }, + }; + + const greeting = await builder.buildGreeting(mockAgent, context); + + // Should contain sections that were built in parallel + expect(greeting).toBeTruthy(); + expect(greeting).toContain('Dex'); + }); + }); + + // ======================================================================== + // AC8: Fallback to static templates on failure + // ======================================================================== + describe('AC8: Fallback and timeout protection', () => { + test('falls back to simple greeting when _buildContextualGreeting throws', async () => { + // Force an error in the contextual greeting path + builder.contextDetector.detectSessionType.mockImplementation(() => { + throw new Error('Total failure'); + }); + + const greeting = await builder.buildGreeting(mockAgent, {}); + + // Should fall back gracefully + expect(greeting).toBeTruthy(); + expect(greeting).toContain('Dex'); + expect(greeting).toContain('*help'); + }); + + test('section timeout produces null, not crash', async () => { + const result = await builder._safeBuildSection(() => + new Promise((resolve) => setTimeout(() => resolve('too late'), 500)), + ); + expect(result).toBeNull(); + }, 1000); + + test('greeting still 
renders when project status load fails', async () => { + const context = { + ...baseEnrichedContext, + projectStatus: null, + gitConfig: { configured: false }, + }; + + const greeting = await builder.buildGreeting(mockAgent, context); + + expect(greeting).toBeTruthy(); + expect(greeting).toContain('Dex'); + }); + }); + + // ======================================================================== + // AC9: Performance within 200ms + // ======================================================================== + describe('AC9: Performance', () => { + test('context-aware greeting completes within 200ms', async () => { + const startTime = Date.now(); + await builder.buildGreeting(mockAgent, baseEnrichedContext); + const duration = Date.now() - startTime; + + expect(duration).toBeLessThan(200); + }); + + test('greeting with all sections completes within 200ms', async () => { + const context = { + ...baseEnrichedContext, + sessionType: 'existing', + previousAgent: { agentId: 'qa', agentName: 'Quinn' }, + lastCommands: ['develop-story'], + }; + + const startTime = Date.now(); + await builder.buildGreeting(mockAgent, context); + const duration = Date.now() - startTime; + + expect(duration).toBeLessThan(200); + }); + }); + + // ======================================================================== + // AC10: A/B comparison - static vs context-aware + // ======================================================================== + describe('AC10: A/B comparison', () => { + test('static greeting is shorter than context-aware greeting', async () => { + const staticGreeting = builder.buildSimpleGreeting(mockAgent); + + const contextGreeting = await builder.buildGreeting(mockAgent, baseEnrichedContext); + + // Context-aware should be richer (longer) than simple fallback + expect(contextGreeting.length).toBeGreaterThan(staticGreeting.length); + }); + + test('context-aware greeting includes more sections than static', async () => { + const staticGreeting = 
builder.buildSimpleGreeting(mockAgent); + const contextGreeting = await builder.buildGreeting(mockAgent, baseEnrichedContext); + + // Static: just greeting + help prompt + expect(staticGreeting).toContain('*help'); + expect(staticGreeting).not.toContain('Role:'); + expect(staticGreeting).not.toContain('Project Status'); + + // Context-aware: includes role, project status, commands, footer + expect(contextGreeting).toContain('Role:'); + expect(contextGreeting).toContain('Project Status'); + expect(contextGreeting).toContain('Commands'); + }); + + test('existing session greeting differs from new session greeting', async () => { + const newContext = { ...baseEnrichedContext, sessionType: 'new' }; + const existingContext = { ...baseEnrichedContext, sessionType: 'existing' }; + + const newGreeting = await builder.buildGreeting(mockAgent, newContext); + const existingGreeting = await builder.buildGreeting(mockAgent, existingContext); + + // New session should have full intro; existing should be brief + expect(newGreeting).toContain('Available Commands'); + expect(existingGreeting).toContain('Quick Commands'); + + // New session should have role description; existing should not + expect(newGreeting).toContain('Role:'); + expect(existingGreeting).not.toContain('Role:'); + }); + + test('workflow session greeting is focused', async () => { + const workflowContext = { ...baseEnrichedContext, sessionType: 'workflow' }; + + const workflowGreeting = await builder.buildGreeting(mockAgent, workflowContext); + + expect(workflowGreeting).toContain('Key Commands'); + expect(workflowGreeting).not.toContain('Role:'); + // Condensed project status for workflow + expect(workflowGreeting).toContain('🌿'); + }); + }); + + // ======================================================================== + // Regression: Backward compatibility + // ======================================================================== + describe('Backward compatibility', () => { + test('buildGreeting(agent, {}) 
still works (no enriched context)', async () => { + const greeting = await builder.buildGreeting(mockAgent, {}); + expect(greeting).toBeTruthy(); + expect(greeting).toContain('Dex'); + }); + + test('buildGreeting(agent) still works (no context at all)', async () => { + const greeting = await builder.buildGreeting(mockAgent); + expect(greeting).toBeTruthy(); + expect(greeting).toContain('Dex'); + }); + + test('old format without visibility metadata still works', async () => { + const oldAgent = { + ...mockAgent, + commands: [ + { name: 'help' }, + { name: 'develop' }, + ], + }; + const greeting = await builder.buildGreeting(oldAgent, baseEnrichedContext); + expect(greeting).toContain('help'); + expect(greeting).toContain('develop'); + }); + }); +}); + +``` + +================================================== +📄 tests/core/build-state-manager.test.js +================================================== +```js +/** + * Build State Manager Tests - Story 8.4 + * + * Tests for autonomous build state management including: + * - State creation, loading, saving + * - Checkpoint management + * - Build resume functionality + * - Abandoned detection + * - Failure tracking and notifications + * - Attempt logging + */ + +const path = require('path'); +const fs = require('fs'); +const os = require('os'); + +const { + BuildStateManager, + BuildStatus, + NotificationType, + validateBuildState, + DEFAULT_CONFIG, +} = require('../../.aios-core/core/execution/build-state-manager'); + +// ═══════════════════════════════════════════════════════════════════════════════════ +// TEST SETUP +// ═══════════════════════════════════════════════════════════════════════════════════ + +describe('BuildStateManager', () => { + let testDir; + let manager; + const testStoryId = 'test-story-8.4'; + + beforeEach(() => { + // Create unique temp directory for each test + testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'build-state-test-')); + manager = new BuildStateManager(testStoryId, { + planDir: 
path.join(testDir, 'plan'), + rootPath: testDir, + }); + }); + + afterEach(() => { + // Cleanup temp directory + if (testDir && fs.existsSync(testDir)) { + fs.rmSync(testDir, { recursive: true, force: true }); + } + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // SCHEMA VALIDATION + // ───────────────────────────────────────────────────────────────────────────────── + + describe('validateBuildState', () => { + test('should validate correct state', () => { + const validState = { + storyId: 'story-1', + status: BuildStatus.PENDING, + startedAt: new Date().toISOString(), + checkpoints: [], + }; + + const result = validateBuildState(validState); + expect(result.valid).toBe(true); + expect(result.errors).toHaveLength(0); + }); + + test('should reject state missing required fields', () => { + const invalidState = { + storyId: 'story-1', + // missing status, startedAt, checkpoints + }; + + const result = validateBuildState(invalidState); + expect(result.valid).toBe(false); + expect(result.errors.length).toBeGreaterThan(0); + expect(result.errors.some((e) => e.includes('status'))).toBe(true); + }); + + test('should reject invalid status', () => { + const invalidState = { + storyId: 'story-1', + status: 'invalid_status', + startedAt: new Date().toISOString(), + checkpoints: [], + }; + + const result = validateBuildState(invalidState); + expect(result.valid).toBe(false); + expect(result.errors.some((e) => e.includes('status'))).toBe(true); + }); + + test('should reject non-array checkpoints', () => { + const invalidState = { + storyId: 'story-1', + status: BuildStatus.PENDING, + startedAt: new Date().toISOString(), + checkpoints: 'not-an-array', + }; + + const result = validateBuildState(invalidState); + expect(result.valid).toBe(false); + expect(result.errors.some((e) => e.includes('checkpoints'))).toBe(true); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // STATE 
MANAGEMENT + // ───────────────────────────────────────────────────────────────────────────────── + + describe('State Management', () => { + test('should create new state', () => { + const state = manager.createState({ + totalSubtasks: 10, + }); + + expect(state.storyId).toBe(testStoryId); + expect(state.status).toBe(BuildStatus.PENDING); + expect(state.checkpoints).toEqual([]); + expect(state.completedSubtasks).toEqual([]); + expect(state.metrics.totalSubtasks).toBe(10); + }); + + test('should save and load state', () => { + manager.createState({ totalSubtasks: 5 }); + manager.saveState(); + + // Create new manager to load + const manager2 = new BuildStateManager(testStoryId, { + planDir: path.join(testDir, 'plan'), + rootPath: testDir, + }); + + const loaded = manager2.loadState(); + expect(loaded).not.toBeNull(); + expect(loaded.storyId).toBe(testStoryId); + expect(loaded.metrics.totalSubtasks).toBe(5); + }); + + test('should return null when no state exists', () => { + const loaded = manager.loadState(); + expect(loaded).toBeNull(); + }); + + test('should loadOrCreateState', () => { + // First call creates + const state1 = manager.loadOrCreateState({ totalSubtasks: 7 }); + expect(state1.metrics.totalSubtasks).toBe(7); + manager.saveState(); + + // Second call loads + const manager2 = new BuildStateManager(testStoryId, { + planDir: path.join(testDir, 'plan'), + rootPath: testDir, + }); + const state2 = manager2.loadOrCreateState({ totalSubtasks: 99 }); + expect(state2.metrics.totalSubtasks).toBe(7); // Should be original value + }); + + test('should throw when storyId not provided', () => { + expect(() => new BuildStateManager(null)).toThrow('storyId is required'); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // CHECKPOINT MANAGEMENT (AC2) + // ───────────────────────────────────────────────────────────────────────────────── + + describe('Checkpoint Management (AC2)', () => { + beforeEach(() => { + 
manager.createState({ totalSubtasks: 5 }); + }); + + test('should save checkpoint after subtask completion', () => { + const checkpoint = manager.saveCheckpoint('1.1', { + duration: 5000, + filesModified: ['file1.js', 'file2.js'], + }); + + expect(checkpoint.id).toMatch(/^cp-/); + expect(checkpoint.subtaskId).toBe('1.1'); + expect(checkpoint.status).toBe('completed'); + expect(checkpoint.filesModified).toHaveLength(2); + + const state = manager.getState(); + expect(state.checkpoints).toHaveLength(1); + expect(state.completedSubtasks).toContain('1.1'); + expect(state.metrics.completedSubtasks).toBe(1); + }); + + test('should save multiple checkpoints', () => { + manager.saveCheckpoint('1.1'); + manager.saveCheckpoint('1.2'); + manager.saveCheckpoint('2.1'); + + const state = manager.getState(); + expect(state.checkpoints).toHaveLength(3); + expect(state.completedSubtasks).toEqual(['1.1', '1.2', '2.1']); + }); + + test('should create checkpoint files', () => { + manager.saveCheckpoint('1.1'); + + const checkpointDir = path.join(testDir, 'plan', 'checkpoints'); + expect(fs.existsSync(checkpointDir)).toBe(true); + + const files = fs.readdirSync(checkpointDir); + expect(files.length).toBe(1); + expect(files[0]).toMatch(/^cp-.*\.json$/); + }); + + test('should get last checkpoint', () => { + manager.saveCheckpoint('1.1'); + manager.saveCheckpoint('1.2'); + + const last = manager.getLastCheckpoint(); + expect(last.subtaskId).toBe('1.2'); + }); + + test('should return null when no checkpoints', () => { + const last = manager.getLastCheckpoint(); + expect(last).toBeNull(); + }); + + test('should not duplicate completed subtasks', () => { + manager.saveCheckpoint('1.1'); + manager.saveCheckpoint('1.1'); // Same subtask again + + const state = manager.getState(); + expect(state.completedSubtasks).toEqual(['1.1']); + expect(state.checkpoints).toHaveLength(2); // Still records checkpoint + }); + }); + + // 
───────────────────────────────────────────────────────────────────────────────── + // BUILD RESUME (AC3) + // ───────────────────────────────────────────────────────────────────────────────── + + describe('Build Resume (AC3)', () => { + test('should resume build from checkpoint', () => { + // Setup: create state with some progress + manager.createState({ totalSubtasks: 5 }); + manager.startSubtask('1.1'); + manager.completeSubtask('1.1'); + manager.startSubtask('1.2'); + manager.saveState(); + + // Simulate new session + const manager2 = new BuildStateManager(testStoryId, { + planDir: path.join(testDir, 'plan'), + rootPath: testDir, + }); + + const context = manager2.resumeBuild(); + + expect(context.storyId).toBe(testStoryId); + expect(context.status).toBe(BuildStatus.IN_PROGRESS); + expect(context.completedSubtasks).toContain('1.1'); + expect(context.lastCheckpoint).not.toBeNull(); + }); + + test('should throw when no state exists', () => { + expect(() => manager.resumeBuild()).toThrow('No build state found'); + }); + + test('should throw when build already completed', () => { + manager.createState(); + manager.completeBuild(); + + const manager2 = new BuildStateManager(testStoryId, { + planDir: path.join(testDir, 'plan'), + rootPath: testDir, + }); + + expect(() => manager2.resumeBuild()).toThrow('already completed'); + }); + + test('should allow resume of failed build', () => { + manager.createState({ totalSubtasks: 5 }); + manager.saveCheckpoint('1.1'); + manager.failBuild('Test failure'); + + const manager2 = new BuildStateManager(testStoryId, { + planDir: path.join(testDir, 'plan'), + rootPath: testDir, + }); + + // Should not throw + const context = manager2.resumeBuild(); + expect(context.status).toBe(BuildStatus.IN_PROGRESS); + }); + + test('should clear abandoned flag on resume', () => { + manager.createState(); + const state = manager.getState(); + state.abandoned = true; + state.abandonedAt = new Date().toISOString(); + manager.saveState(); + + const 
manager2 = new BuildStateManager(testStoryId, { + planDir: path.join(testDir, 'plan'), + rootPath: testDir, + }); + + const context = manager2.resumeBuild(); + const newState = manager2.getState(); + + expect(newState.abandoned).toBe(false); + expect(newState.abandonedAt).toBeNull(); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // BUILD STATUS (AC4) + // ───────────────────────────────────────────────────────────────────────────────── + + describe('Build Status (AC4)', () => { + test('should return status when build exists', () => { + manager.createState({ totalSubtasks: 10 }); + manager.saveCheckpoint('1.1'); + manager.saveCheckpoint('1.2'); + manager.saveState(); + + const status = manager.getStatus(); + + expect(status.exists).toBe(true); + expect(status.storyId).toBe(testStoryId); + expect(status.progress.completed).toBe(2); + expect(status.progress.total).toBe(10); + expect(status.progress.percentage).toBe(20); + expect(status.checkpointCount).toBe(2); + }); + + test('should return exists:false when no build', () => { + const status = manager.getStatus(); + + expect(status.exists).toBe(false); + expect(status.message).toBe('No build state found'); + }); + + test('should calculate duration', () => { + manager.createState(); + manager.saveState(); + + // Wait a bit + const status = manager.getStatus(); + + expect(status.duration).toBeDefined(); + expect(status.durationMs).toBeGreaterThanOrEqual(0); + }); + + test('should format status for CLI', () => { + manager.createState({ totalSubtasks: 5 }); + manager.saveCheckpoint('1.1'); + manager.saveState(); + + const formatted = manager.formatStatus(); + + expect(formatted).toContain(testStoryId); + expect(formatted).toContain('Progress'); + expect(formatted).toContain('1/5'); + }); + + test('should get all builds', () => { + // Create a build + manager.createState(); + manager.saveState(); + + const builds = BuildStateManager.getAllBuilds(testDir); + + 
expect(builds.length).toBeGreaterThanOrEqual(1); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // ABANDONED DETECTION (AC5) + // ───────────────────────────────────────────────────────────────────────────────── + + describe('Abandoned Detection (AC5)', () => { + test('should detect abandoned build', () => { + manager.createState(); + const state = manager.getState(); + + // Simulate old checkpoint (2 hours ago) + const oldTime = new Date(Date.now() - 2 * 60 * 60 * 1000); + state.status = BuildStatus.IN_PROGRESS; + state.lastCheckpoint = oldTime.toISOString(); + manager.saveState(); + + const result = manager.detectAbandoned(); + + expect(result.detected).toBe(true); + expect(result.storyId).toBe(testStoryId); + }); + + test('should not detect active build as abandoned', () => { + manager.createState(); + const state = manager.getState(); + state.status = BuildStatus.IN_PROGRESS; + state.lastCheckpoint = new Date().toISOString(); + manager.saveState(); + + const result = manager.detectAbandoned(); + + expect(result.detected).toBe(false); + }); + + test('should not detect completed build as abandoned', () => { + manager.createState(); + manager.completeBuild(); + + const result = manager.detectAbandoned(); + + expect(result.detected).toBe(false); + }); + + test('should use custom threshold', () => { + manager.createState(); + const state = manager.getState(); + + // Checkpoint 5 minutes ago + const fiveMinutesAgo = new Date(Date.now() - 5 * 60 * 1000); + state.status = BuildStatus.IN_PROGRESS; + state.lastCheckpoint = fiveMinutesAgo.toISOString(); + manager.saveState(); + + // With 1 hour threshold - not abandoned + const result1 = manager.detectAbandoned(60 * 60 * 1000); + expect(result1.detected).toBe(false); + + // With 1 minute threshold - abandoned + const result2 = manager.detectAbandoned(60 * 1000); + expect(result2.detected).toBe(true); + }); + + test('should add notification when marking abandoned', () 
=> { + manager.createState(); + const state = manager.getState(); + + const oldTime = new Date(Date.now() - 2 * 60 * 60 * 1000); + state.status = BuildStatus.IN_PROGRESS; + state.lastCheckpoint = oldTime.toISOString(); + manager.saveState(); + + manager.detectAbandoned(); + + const notifications = manager.getNotifications(); + expect(notifications.some((n) => n.type === NotificationType.ABANDONED)).toBe(true); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // FAILURE TRACKING (AC6) + // ───────────────────────────────────────────────────────────────────────────────── + + describe('Failure Tracking (AC6)', () => { + beforeEach(() => { + manager.createState({ totalSubtasks: 5 }); + }); + + test('should record failure', () => { + const result = manager.recordFailure('1.1', { + error: 'Test error', + attempt: 1, + }); + + expect(result.failure.subtaskId).toBe('1.1'); + expect(result.failure.error).toBe('Test error'); + + const state = manager.getState(); + expect(state.failedAttempts).toHaveLength(1); + expect(state.metrics.totalFailures).toBe(1); + }); + + test('should track multiple failures', () => { + manager.recordFailure('1.1', { error: 'Error 1' }); + manager.recordFailure('1.1', { error: 'Error 2' }); + manager.recordFailure('1.1', { error: 'Error 3' }); + + const state = manager.getState(); + expect(state.failedAttempts).toHaveLength(3); + expect(state.metrics.totalFailures).toBe(3); + }); + + test('should auto-increment attempt number', () => { + manager.recordFailure('1.1', { error: 'Error 1' }); + const result = manager.recordFailure('1.1', { error: 'Error 2' }); + + expect(result.failure.attempt).toBe(2); + }); + + test('should detect stuck after multiple failures', () => { + // Record 3 failures to trigger stuck detection + manager.recordFailure('1.1', { error: 'Error 1' }); + manager.recordFailure('1.1', { error: 'Error 2' }); + const result = manager.recordFailure('1.1', { error: 'Error 3' }); + + 
// Stuck detection depends on stuck-detector being available + // If not available, isStuck will be false + expect(result).toHaveProperty('isStuck'); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // ATTEMPT LOGGING (AC7) + // ───────────────────────────────────────────────────────────────────────────────── + + describe('Attempt Logging (AC7)', () => { + beforeEach(() => { + manager.createState({ totalSubtasks: 5 }); + }); + + test('should log attempts to file', () => { + manager.saveCheckpoint('1.1'); + manager.recordFailure('1.2', { error: 'Test error' }); + manager.saveState(); + + const logPath = path.join(testDir, 'plan', 'build-attempts.log'); + expect(fs.existsSync(logPath)).toBe(true); + + const content = fs.readFileSync(logPath, 'utf-8'); + expect(content).toContain('1.1'); + expect(content).toContain('checkpoint'); + }); + + test('should get attempt log', () => { + manager.saveCheckpoint('1.1'); + manager.saveCheckpoint('1.2'); + manager.saveState(); + + const logs = manager.getAttemptLog(); + + expect(logs.length).toBeGreaterThan(0); + }); + + test('should filter log by subtask', () => { + manager.saveCheckpoint('1.1'); + manager.saveCheckpoint('2.1'); + manager.saveState(); + + const logs = manager.getAttemptLog({ subtaskId: '1.1' }); + + expect(logs.every((l) => l.includes('1.1'))).toBe(true); + }); + + test('should limit log output', () => { + // Create many log entries + for (let i = 0; i < 10; i++) { + manager.saveCheckpoint(`1.${i}`); + } + manager.saveState(); + + const logs = manager.getAttemptLog({ limit: 3 }); + + expect(logs.length).toBe(3); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // SUBTASK MANAGEMENT + // ───────────────────────────────────────────────────────────────────────────────── + + describe('Subtask Management', () => { + beforeEach(() => { + manager.createState({ totalSubtasks: 5 }); + }); + + test('should start 
subtask', () => { + manager.startSubtask('1.1', { phase: 'phase-1' }); + + const state = manager.getState(); + expect(state.currentSubtask).toBe('1.1'); + expect(state.currentPhase).toBe('phase-1'); + expect(state.status).toBe(BuildStatus.IN_PROGRESS); + }); + + test('should complete subtask', () => { + manager.startSubtask('1.1'); + manager.completeSubtask('1.1', { + duration: 5000, + filesModified: ['file.js'], + }); + + const state = manager.getState(); + expect(state.currentSubtask).toBeNull(); + expect(state.completedSubtasks).toContain('1.1'); + }); + + test('should complete build', () => { + manager.saveCheckpoint('1.1'); + manager.completeBuild(); + + const state = manager.getState(); + expect(state.status).toBe(BuildStatus.COMPLETED); + expect(state.metrics.totalDuration).toBeGreaterThanOrEqual(0); + }); + + test('should fail build', () => { + manager.failBuild('Critical error'); + + const state = manager.getState(); + expect(state.status).toBe(BuildStatus.FAILED); + + const notifications = manager.getNotifications(); + expect(notifications.some((n) => n.type === NotificationType.ERROR)).toBe(true); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // NOTIFICATIONS + // ───────────────────────────────────────────────────────────────────────────────── + + describe('Notifications', () => { + beforeEach(() => { + manager.createState(); + }); + + test('should get unacknowledged notifications', () => { + // Generate some notifications via actions + manager.completeBuild(); + + const notifications = manager.getNotifications(); + expect(notifications.length).toBeGreaterThan(0); + expect(notifications[0].acknowledged).toBe(false); + }); + + test('should acknowledge notification', () => { + manager.completeBuild(); + + const before = manager.getNotifications(); + expect(before.length).toBeGreaterThan(0); + + manager.acknowledgeNotification(0); + + const after = manager.getNotifications(); + 
expect(after.length).toBe(before.length - 1); + }); + }); + + // ───────────────────────────────────────────────────────────────────────────────── + // CLEANUP + // ───────────────────────────────────────────────────────────────────────────────── + + describe('Cleanup', () => { + test('should cleanup abandoned build', async () => { + manager.createState(); + const state = manager.getState(); + state.status = BuildStatus.ABANDONED; + state.abandoned = true; + manager.saveCheckpoint('1.1'); + manager.saveState(); + + const result = await manager.cleanup(); + + expect(result.cleaned).toBe(true); + expect(result.filesRemoved.length).toBeGreaterThan(0); + }); + + test('should not cleanup active build without force', async () => { + manager.createState(); + const state = manager.getState(); + state.status = BuildStatus.IN_PROGRESS; + manager.saveState(); + + const result = await manager.cleanup(); + + expect(result.cleaned).toBe(false); + }); + + test('should force cleanup active build', async () => { + manager.createState(); + const state = manager.getState(); + state.status = BuildStatus.IN_PROGRESS; + manager.saveState(); + + const result = await manager.cleanup({ force: true }); + + expect(result.cleaned).toBe(true); + }); + + test('should cleanup completed build', async () => { + manager.createState(); + manager.completeBuild(); + + const result = await manager.cleanup(); + + expect(result.cleaned).toBe(true); + }); + }); +}); + +// ═══════════════════════════════════════════════════════════════════════════════════ +// DEFAULT CONFIG +// ═══════════════════════════════════════════════════════════════════════════════════ + +describe('DEFAULT_CONFIG', () => { + test('should have correct defaults', () => { + expect(DEFAULT_CONFIG.maxIterations).toBe(10); + expect(DEFAULT_CONFIG.globalTimeout).toBe(30 * 60 * 1000); + expect(DEFAULT_CONFIG.abandonedThreshold).toBe(60 * 60 * 1000); + expect(DEFAULT_CONFIG.autoCheckpoint).toBe(true); + }); +}); + +``` + 
+================================================== +📄 tests/core/agent-config-enrichment.test.js +================================================== +```js +/** + * Tests for Story ACT-8: Agent Config Loading + Document Governance + * + * Test Coverage: + * - All 7 data files exist on disk + * - Enriched agents load correct files from agent-config-requirements.yaml + * - YAML parses correctly after enrichment + * - All files referenced in config exist on disk + * - Performance targets documented for all agents + * - source-tree.md governance section contains all required files + * - update-source-tree.md task file exists + * - aios-master has *update-source-tree command + */ + +const fs = require('fs'); +const path = require('path'); +const yaml = require('js-yaml'); + +const ROOT = path.resolve(__dirname, '..', '..'); + +/** + * Helper: load and parse YAML file + */ +function loadYaml(relativePath) { + const fullPath = path.join(ROOT, relativePath); + const content = fs.readFileSync(fullPath, 'utf8'); + return yaml.load(content); +} + +/** + * Helper: read file content + */ +function readFile(relativePath) { + const fullPath = path.join(ROOT, relativePath); + return fs.readFileSync(fullPath, 'utf8'); +} + +/** + * Helper: check file exists + */ +function fileExists(relativePath) { + const fullPath = path.join(ROOT, relativePath); + return fs.existsSync(fullPath); +} + +describe('Story ACT-8: Agent Config Enrichment', () => { + let agentConfig; + + beforeAll(() => { + agentConfig = loadYaml('.aios-core/data/agent-config-requirements.yaml'); + }); + + describe('YAML Validity', () => { + test('agent-config-requirements.yaml parses without errors', () => { + expect(agentConfig).toBeDefined(); + expect(agentConfig.agents).toBeDefined(); + expect(typeof agentConfig.agents).toBe('object'); + }); + + test('all expected agents have entries', () => { + const expectedAgents = [ + 'aios-master', 'dev', 'qa', 'devops', 'github-devops', + 'architect', 'po', 'sm', 
'data-engineer', 'db-sage', + 'pm', 'analyst', 'ux-design-expert', 'squad-creator', + 'aios-developer', 'aios-orchestrator', 'default', + ]; + + for (const agentId of expectedAgents) { + expect(agentConfig.agents).toHaveProperty(agentId); + } + }); + }); + + describe('7 Required Data Files Exist on Disk', () => { + const requiredFiles = [ + { path: 'docs/framework/coding-standards.md', owner: '@dev' }, + { path: 'docs/framework/tech-stack.md', owner: '@architect' }, + { path: '.aios-core/data/technical-preferences.md', owner: '@architect' }, + { path: '.aios-core/product/data/test-levels-framework.md', owner: '@qa' }, + { path: '.aios-core/product/data/test-priorities-matrix.md', owner: '@qa' }, + { path: '.aios-core/product/data/brainstorming-techniques.md', owner: '@analyst' }, + { path: '.aios-core/product/data/elicitation-methods.md', owner: '@po' }, + ]; + + test.each(requiredFiles)('$path exists on disk (owner: $owner)', ({ path: filePath }) => { + expect(fileExists(filePath)).toBe(true); + }); + }); + + describe('All Files Referenced in Config Exist', () => { + test('every files_loaded path resolves to an existing file', () => { + const missing = []; + + for (const [agentId, config] of Object.entries(agentConfig.agents)) { + for (const fileEntry of (config.files_loaded || [])) { + const fp = typeof fileEntry === 'string' ? 
fileEntry : fileEntry.path; + if (fp && !fileExists(fp)) { + missing.push({ agent: agentId, file: fp }); + } + } + } + + expect(missing).toEqual([]); + }); + }); + + describe('Enriched Agent: @pm', () => { + test('pm now loads coding-standards.md and tech-stack.md', () => { + const pm = agentConfig.agents.pm; + expect(pm.files_loaded).toBeDefined(); + expect(pm.files_loaded.length).toBe(2); + + const paths = pm.files_loaded.map(f => f.path); + expect(paths).toContain('docs/framework/coding-standards.md'); + expect(paths).toContain('docs/framework/tech-stack.md'); + }); + + test('pm performance target is <100ms', () => { + expect(agentConfig.agents.pm.performance_target).toBe('<100ms'); + }); + }); + + describe('Enriched Agent: @ux-design-expert', () => { + test('ux-design-expert now loads tech-stack.md and coding-standards.md', () => { + const ux = agentConfig.agents['ux-design-expert']; + expect(ux.files_loaded).toBeDefined(); + expect(ux.files_loaded.length).toBe(2); + + const paths = ux.files_loaded.map(f => f.path); + expect(paths).toContain('docs/framework/tech-stack.md'); + expect(paths).toContain('docs/framework/coding-standards.md'); + }); + + test('ux-design-expert performance target is <100ms', () => { + expect(agentConfig.agents['ux-design-expert'].performance_target).toBe('<100ms'); + }); + }); + + describe('Enriched Agent: @analyst', () => { + test('analyst now loads brainstorming-techniques, tech-stack, and source-tree', () => { + const analyst = agentConfig.agents.analyst; + expect(analyst.files_loaded).toBeDefined(); + expect(analyst.files_loaded.length).toBe(3); + + const paths = analyst.files_loaded.map(f => f.path); + expect(paths).toContain('.aios-core/product/data/brainstorming-techniques.md'); + expect(paths).toContain('docs/framework/tech-stack.md'); + expect(paths).toContain('docs/framework/source-tree.md'); + }); + + test('analyst performance target is <100ms', () => { + expect(agentConfig.agents.analyst.performance_target).toBe('<100ms'); 
+ }); + }); + + describe('Enriched Agent: @sm', () => { + test('sm now loads mode-selection, workflow-patterns, and coding-standards', () => { + const sm = agentConfig.agents.sm; + expect(sm.files_loaded).toBeDefined(); + expect(sm.files_loaded.length).toBe(3); + + const paths = sm.files_loaded.map(f => f.path); + expect(paths).toContain('.aios-core/product/data/mode-selection-best-practices.md'); + expect(paths).toContain('.aios-core/data/workflow-patterns.yaml'); + expect(paths).toContain('docs/framework/coding-standards.md'); + }); + + test('sm performance target is <75ms', () => { + expect(agentConfig.agents.sm.performance_target).toBe('<75ms'); + }); + }); + + describe('Enriched Agent: @squad-creator', () => { + test('squad-creator has explicit entry (not default)', () => { + const sc = agentConfig.agents['squad-creator']; + expect(sc).toBeDefined(); + expect(sc.config_sections).toContain('squadsTemplateLocation'); + }); + + test('squad-creator has lazy loading for registry and manifest', () => { + const sc = agentConfig.agents['squad-creator']; + expect(sc.lazy_loading).toHaveProperty('agent_registry', true); + expect(sc.lazy_loading).toHaveProperty('squad_manifest', true); + }); + + test('squad-creator performance target is <150ms', () => { + expect(agentConfig.agents['squad-creator'].performance_target).toBe('<150ms'); + }); + }); + + describe('Shared Files Consumers Updated', () => { + test('coding-standards.md lists pm, ux-design-expert, sm as users', () => { + const csFile = agentConfig.lazy_loading_strategy.shared_files.find( + f => f.path === 'docs/framework/coding-standards.md', + ); + expect(csFile).toBeDefined(); + expect(csFile.used_by).toContain('pm'); + expect(csFile.used_by).toContain('ux-design-expert'); + expect(csFile.used_by).toContain('sm'); + }); + + test('tech-stack.md lists pm, ux-design-expert, analyst as users', () => { + const tsFile = agentConfig.lazy_loading_strategy.shared_files.find( + f => f.path === 
'docs/framework/tech-stack.md', + ); + expect(tsFile).toBeDefined(); + expect(tsFile.used_by).toContain('pm'); + expect(tsFile.used_by).toContain('ux-design-expert'); + expect(tsFile.used_by).toContain('analyst'); + }); + + test('source-tree.md lists analyst as user', () => { + const stFile = agentConfig.lazy_loading_strategy.shared_files.find( + f => f.path === 'docs/framework/source-tree.md', + ); + expect(stFile).toBeDefined(); + expect(stFile.used_by).toContain('analyst'); + }); + }); + + describe('Performance Targets', () => { + test('every agent has a performance_target', () => { + for (const [agentId, config] of Object.entries(agentConfig.agents)) { + expect(config).toHaveProperty('performance_target'); + expect(config.performance_target).toMatch(/^<\d+ms$/); + } + }); + }); +}); + +describe('Story ACT-8: Document Governance', () => { + describe('source-tree.md Governance Section', () => { + let sourceTreeContent; + + beforeAll(() => { + sourceTreeContent = readFile('docs/framework/source-tree.md'); + }); + + test('source-tree.md contains Data File Governance section', () => { + expect(sourceTreeContent).toContain('## Data File Governance'); + }); + + test('source-tree.md documents coding-standards.md', () => { + expect(sourceTreeContent).toContain('coding-standards.md'); + expect(sourceTreeContent).toContain('@dev'); + }); + + test('source-tree.md documents tech-stack.md', () => { + expect(sourceTreeContent).toContain('tech-stack.md'); + expect(sourceTreeContent).toContain('@architect'); + }); + + test('source-tree.md documents technical-preferences.md', () => { + expect(sourceTreeContent).toContain('technical-preferences.md'); + }); + + test('source-tree.md documents test-levels-framework.md', () => { + expect(sourceTreeContent).toContain('test-levels-framework.md'); + expect(sourceTreeContent).toContain('@qa'); + }); + + test('source-tree.md documents test-priorities-matrix.md', () => { + expect(sourceTreeContent).toContain('test-priorities-matrix.md'); + 
}); + + test('source-tree.md documents brainstorming-techniques.md', () => { + expect(sourceTreeContent).toContain('brainstorming-techniques.md'); + expect(sourceTreeContent).toContain('@analyst'); + }); + + test('source-tree.md documents elicitation-methods.md', () => { + expect(sourceTreeContent).toContain('elicitation-methods.md'); + expect(sourceTreeContent).toContain('@po'); + }); + }); + + describe('Governance Task', () => { + test('update-source-tree.md task file exists', () => { + expect(fileExists('.aios-core/development/tasks/update-source-tree.md')).toBe(true); + }); + + test('task file contains expected sections', () => { + const content = readFile('.aios-core/development/tasks/update-source-tree.md'); + expect(content).toContain('Update Source Tree Task'); + expect(content).toContain('Validate document governance'); + expect(content).toContain('agent-config-requirements.yaml'); + expect(content).toContain('source-tree.md'); + }); + }); + + describe('aios-master Command', () => { + test('aios-master.md contains *update-source-tree command', () => { + const content = readFile('.aios-core/development/agents/aios-master.md'); + expect(content).toContain('update-source-tree'); + expect(content).toContain('Validate data file governance'); + }); + + test('aios-master dependencies include update-source-tree.md', () => { + const content = readFile('.aios-core/development/agents/aios-master.md'); + expect(content).toContain('update-source-tree.md'); + }); + }); +}); + +``` + +================================================== +📄 tests/core/gate-evaluator.test.js +================================================== +```js +/** + * Gate Evaluator Tests + * + * Story: 0.6 - Quality Gates + * Epic: Epic 0 - ADE Master Orchestrator + * + * Tests for gate evaluator that ensures quality between epics. 
+ * + * @author @dev (Dex) + * @version 1.0.0 + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +const { + GateEvaluator, + GateVerdict, + DEFAULT_GATE_CONFIG, +} = require('../../.aios-core/core/orchestration/gate-evaluator'); + +describe('Gate Evaluator (Story 0.6)', () => { + let tempDir; + let evaluator; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `gate-evaluator-test-${Date.now()}`); + await fs.ensureDir(tempDir); + + evaluator = new GateEvaluator({ + projectRoot: tempDir, + strictMode: false, + }); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + describe('GateVerdict Enum (AC2)', () => { + it('should have all required verdicts', () => { + expect(GateVerdict.APPROVED).toBe('approved'); + expect(GateVerdict.NEEDS_REVISION).toBe('needs_revision'); + expect(GateVerdict.BLOCKED).toBe('blocked'); + }); + }); + + describe('DEFAULT_GATE_CONFIG (AC5)', () => { + it('should have config for epic3_to_epic4', () => { + expect(DEFAULT_GATE_CONFIG.epic3_to_epic4).toBeDefined(); + expect(DEFAULT_GATE_CONFIG.epic3_to_epic4.blocking).toBe(true); + }); + + it('should have config for epic4_to_epic6', () => { + expect(DEFAULT_GATE_CONFIG.epic4_to_epic6).toBeDefined(); + expect(DEFAULT_GATE_CONFIG.epic4_to_epic6.requireTests).toBe(true); + }); + + // Note: epic6_to_epic7 config removed with Epic 7 revert (commits 51df718, 75cbca1) + }); + + describe('Constructor', () => { + it('should initialize with default options', () => { + const e = new GateEvaluator({ projectRoot: tempDir }); + + expect(e.projectRoot).toBe(tempDir); + expect(e.strictMode).toBe(false); + }); + + it('should accept strict mode (AC7)', () => { + const e = new GateEvaluator({ + projectRoot: tempDir, + strictMode: true, + }); + + expect(e.strictMode).toBe(true); + }); + + it('should accept custom gate config (AC5)', () => { + const customConfig = { + epic3_to_epic4: { blocking: false }, + }; + + const e = new 
GateEvaluator({ + projectRoot: tempDir, + gateConfig: customConfig, + }); + + expect(e.gateConfig).toEqual(customConfig); + }); + }); + + describe('evaluate (AC1)', () => { + it('should evaluate gate and return result', async () => { + const epicResult = { + specPath: '/path/to/spec.md', + complexity: 'STANDARD', + requirements: ['REQ-1', 'REQ-2'], + }; + + const result = await evaluator.evaluate(3, 4, epicResult); + + expect(result).toBeDefined(); + expect(result.gate).toBe('epic3_to_epic4'); + expect(result.fromEpic).toBe(3); + expect(result.toEpic).toBe(4); + expect(result.verdict).toBeDefined(); + expect(result.checks).toBeDefined(); + }); + + it('should run checks for each gate', async () => { + const epicResult = { + specPath: '/path/to/spec.md', + complexity: 'STANDARD', + }; + + const result = await evaluator.evaluate(3, 4, epicResult); + + expect(result.checks.length).toBeGreaterThan(0); + expect(result.checks.some((c) => c.name === 'spec_exists')).toBe(true); + }); + + it('should calculate score based on checks', async () => { + const epicResult = { + specPath: '/path/to/spec.md', + complexity: 'STANDARD', + requirements: ['REQ-1'], + }; + + const result = await evaluator.evaluate(3, 4, epicResult); + + expect(result.score).toBeGreaterThanOrEqual(0); + expect(result.score).toBeLessThanOrEqual(5); + }); + }); + + describe('Gate Verdicts (AC2)', () => { + it('should return APPROVED for passing checks', async () => { + const epicResult = { + specPath: '/path/to/spec.md', + complexity: 'STANDARD', + requirements: ['REQ-1'], + score: 4.5, + }; + + const result = await evaluator.evaluate(3, 4, epicResult); + + expect(result.verdict).toBe(GateVerdict.APPROVED); + }); + + it('should return NEEDS_REVISION for minor issues', async () => { + // Epic 6 -> 7 allows minor issues but not major + const epicResult = { + qaReport: { passed: true }, + // Missing verdict - medium severity + }; + + // Use custom config to force needs_revision + const e = new GateEvaluator({ + 
projectRoot: tempDir, + gateConfig: { + epic3_to_epic4: { + blocking: false, + checks: ['spec_exists'], + }, + }, + }); + + const result = await e.evaluate(3, 4, { + /* missing spec */ + }); + + // Without spec, should fail + expect([GateVerdict.NEEDS_REVISION, GateVerdict.BLOCKED]).toContain(result.verdict); + }); + + it('should return BLOCKED for critical issues (AC3)', async () => { + const e = new GateEvaluator({ + projectRoot: tempDir, + gateConfig: { + epic3_to_epic4: { + blocking: true, + checks: ['spec_exists'], + }, + }, + }); + + // Epic result without spec (critical check) + const result = await e.evaluate(3, 4, {}); + + expect(result.verdict).toBe(GateVerdict.BLOCKED); + }); + }); + + describe('BLOCKED halts pipeline (AC3)', () => { + it('shouldBlock returns true for BLOCKED verdict', () => { + expect(evaluator.shouldBlock(GateVerdict.BLOCKED)).toBe(true); + expect(evaluator.shouldBlock(GateVerdict.APPROVED)).toBe(false); + expect(evaluator.shouldBlock(GateVerdict.NEEDS_REVISION)).toBe(false); + }); + }); + + describe('NEEDS_REVISION returns to previous epic (AC4)', () => { + it('needsRevision returns true for NEEDS_REVISION verdict', () => { + expect(evaluator.needsRevision(GateVerdict.NEEDS_REVISION)).toBe(true); + expect(evaluator.needsRevision(GateVerdict.APPROVED)).toBe(false); + expect(evaluator.needsRevision(GateVerdict.BLOCKED)).toBe(false); + }); + }); + + describe('Gate Results Storage (AC6)', () => { + it('should store all gate results', async () => { + await evaluator.evaluate(3, 4, { specPath: '/spec.md', complexity: 'STANDARD' }); + await evaluator.evaluate(4, 6, { planPath: '/plan.yaml', testResults: [{ passed: true }] }); + + const results = evaluator.getResults(); + + expect(results).toHaveLength(2); + expect(results[0].gate).toBe('epic3_to_epic4'); + expect(results[1].gate).toBe('epic4_to_epic6'); + }); + + it('should get specific gate result', async () => { + await evaluator.evaluate(3, 4, { specPath: '/spec.md', complexity: 
'STANDARD' }); + + const result = evaluator.getResult('epic3_to_epic4'); + + expect(result).toBeDefined(); + expect(result.gate).toBe('epic3_to_epic4'); + }); + + it('should return null for unknown gate', () => { + const result = evaluator.getResult('unknown_gate'); + + expect(result).toBeNull(); + }); + }); + + describe('Strict Mode (AC7)', () => { + it('should block on any failure in strict mode', async () => { + const strictEvaluator = new GateEvaluator({ + projectRoot: tempDir, + strictMode: true, + gateConfig: { + epic3_to_epic4: { + blocking: false, // Would normally not block + checks: ['spec_exists', 'complexity_assessed'], + }, + }, + }); + + // Missing spec - would normally be needs_revision with blocking: false + const result = await strictEvaluator.evaluate(3, 4, { complexity: 'STANDARD' }); + + // In strict mode, any issue = blocked + expect(result.verdict).toBe(GateVerdict.BLOCKED); + }); + + it('should not affect approval in strict mode', async () => { + const strictEvaluator = new GateEvaluator({ + projectRoot: tempDir, + strictMode: true, + }); + + const result = await strictEvaluator.evaluate(3, 4, { + specPath: '/spec.md', + complexity: 'STANDARD', + requirements: ['REQ-1'], + score: 5.0, + }); + + expect(result.verdict).toBe(GateVerdict.APPROVED); + }); + }); + + describe('Summary', () => { + it('should generate summary of all evaluations', async () => { + await evaluator.evaluate(3, 4, { specPath: '/spec.md', complexity: 'STANDARD' }); + await evaluator.evaluate(4, 6, { planPath: '/plan.yaml' }); + + const summary = evaluator.getSummary(); + + expect(summary.total).toBe(2); + expect(summary.approved).toBeGreaterThanOrEqual(0); + expect(summary.averageScore).toBeGreaterThanOrEqual(0); + }); + }); + + describe('Individual Checks', () => { + it('should check spec_exists', async () => { + const result = await evaluator.evaluate(3, 4, { specPath: '/spec.md' }); + const check = result.checks.find((c) => c.name === 'spec_exists'); + + 
expect(check).toBeDefined(); + expect(check.passed).toBe(true); + }); + + it('should fail spec_exists when missing', async () => { + const e = new GateEvaluator({ + projectRoot: tempDir, + gateConfig: { + epic3_to_epic4: { blocking: true, checks: ['spec_exists'] }, + }, + }); + + const result = await e.evaluate(3, 4, {}); + const check = result.checks.find((c) => c.name === 'spec_exists'); + + expect(check.passed).toBe(false); + }); + + it('should check complexity_assessed', async () => { + const result = await evaluator.evaluate(3, 4, { complexity: 'STANDARD' }); + const check = result.checks.find((c) => c.name === 'complexity_assessed'); + + expect(check).toBeDefined(); + expect(check.passed).toBe(true); + }); + + it('should check plan_complete', async () => { + const e = new GateEvaluator({ + projectRoot: tempDir, + gateConfig: { + epic4_to_epic6: { blocking: true, checks: ['plan_complete'] }, + }, + }); + + const result = await e.evaluate(4, 6, { planPath: '/plan.yaml' }); + const check = result.checks.find((c) => c.name === 'plan_complete'); + + expect(check).toBeDefined(); + expect(check.passed).toBe(true); + }); + + it('should check qa_report_exists', async () => { + const e = new GateEvaluator({ + projectRoot: tempDir, + gateConfig: { + epic6_to_epic7: { blocking: false, checks: ['qa_report_exists'] }, + }, + }); + + const result = await e.evaluate(6, 7, { reportPath: '/qa-report.md' }); + const check = result.checks.find((c) => c.name === 'qa_report_exists'); + + expect(check).toBeDefined(); + expect(check.passed).toBe(true); + }); + }); + + describe('Clear', () => { + it('should clear all results', async () => { + await evaluator.evaluate(3, 4, { specPath: '/spec.md' }); + expect(evaluator.getResults().length).toBeGreaterThan(0); + + evaluator.clear(); + + expect(evaluator.getResults()).toHaveLength(0); + expect(evaluator.getLogs()).toHaveLength(0); + }); + }); +}); + +describe('Integration with MasterOrchestrator', () => { + let tempDir; + + 
beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `gate-integration-test-${Date.now()}`); + await fs.ensureDir(tempDir); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + it('should integrate GateEvaluator with MasterOrchestrator', async () => { + const { MasterOrchestrator } = require('../../.aios-core/core/orchestration'); + + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + strictGates: false, + }); + + expect(orchestrator.gateEvaluator).toBeDefined(); + expect(orchestrator.gateEvaluator).toBeInstanceOf(GateEvaluator); + }); + + it('should expose getGateEvaluator method', async () => { + const { MasterOrchestrator } = require('../../.aios-core/core/orchestration'); + + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + const evaluator = orchestrator.getGateEvaluator(); + expect(evaluator).toBeDefined(); + expect(evaluator).toBeInstanceOf(GateEvaluator); + }); + + it('should respect strictGates option', async () => { + const { MasterOrchestrator } = require('../../.aios-core/core/orchestration'); + + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + strictGates: true, + }); + + expect(orchestrator.gateEvaluator.strictMode).toBe(true); + }); +}); + +``` + +================================================== +📄 tests/core/autonomous-build-loop.test.js +================================================== +```js +/** + * Autonomous Build Loop - Test Suite + * Story EXC-1, AC3 - autonomous-build-loop.js coverage + * + * Tests: constructor, run(), executeLoop, executeSubtaskWithRetry, + * executeSubtask, loadPlan, countSubtasks, pause/resume/stop, + * isTimedOut, isComplete, generateReport, formatDuration, formatStatus, enums + */ + +const path = require('path'); +const fs = require('fs'); +const { + createTempDir, + cleanupTempDir, + createMockPlan, + collectEvents, +} = require('./execution-test-helpers'); + +// ── Mocks 
──────────────────────────────────────────────────────────────────── + +// Mock BuildStateManager (required dependency) +const mockStateInstance = { + storyId: 'test-story', + _state: { + metrics: { totalSubtasks: 0 }, + checkpoints: [], + completedSubtasks: [], + }, + loadOrCreateState: jest.fn().mockReturnValue({ + checkpoints: [], + completedSubtasks: [], + }), + saveState: jest.fn(), + completeBuild: jest.fn(), + failBuild: jest.fn(), + startSubtask: jest.fn(), + completeSubtask: jest.fn(), + recordFailure: jest.fn().mockReturnValue({ failure: {}, isStuck: false }), + resumeBuild: jest.fn().mockReturnValue({ + lastCheckpoint: { id: 'cp-1' }, + plan: null, + worktree: null, + }), + formatStatus: jest.fn().mockReturnValue('Status: OK'), +}; + +jest.mock('../../.aios-core/core/execution/build-state-manager', () => ({ + BuildStateManager: jest.fn().mockImplementation(() => ({ ...mockStateInstance })), +})); + +// Mock optional dependencies to not load +jest.mock('../../.aios-core/infrastructure/scripts/recovery-tracker', () => { + throw new Error('not available'); +}); +jest.mock('../../.aios-core/infrastructure/scripts/worktree-manager', () => { + throw new Error('not available'); +}); + +const { + AutonomousBuildLoop, + BuildEvent, + SubtaskResult, + DEFAULT_CONFIG, +} = require('../../.aios-core/core/execution/autonomous-build-loop'); + +const { BuildStateManager } = require('../../.aios-core/core/execution/build-state-manager'); + +// ── Helpers ────────────────────────────────────────────────────────────────── + +function createTestPlan(subtaskCount = 2) { + return { + phases: [ + { + id: 'phase-1', + subtasks: Array.from({ length: subtaskCount }, (_, i) => ({ + id: `subtask-${i + 1}`, + description: `Test subtask ${i + 1}`, + files: [`file-${i + 1}.js`], + })), + }, + ], + }; +} + +function createLoop(overrides = {}) { + return new AutonomousBuildLoop({ + verbose: false, + globalTimeout: 60000, + maxIterations: 3, + selfCritiqueEnabled: false, + 
verificationEnabled: false, + ...overrides, + }); +} + +// ── Tests ──────────────────────────────────────────────────────────────────── + +describe('AutonomousBuildLoop', () => { + let tmpDir; + + beforeEach(() => { + tmpDir = createTempDir('abl-test-'); + jest.clearAllMocks(); + // Reset mock state + mockStateInstance._state.metrics.totalSubtasks = 0; + mockStateInstance._state.checkpoints = []; + mockStateInstance._state.completedSubtasks = []; + mockStateInstance.loadOrCreateState.mockReturnValue({ + checkpoints: [], + completedSubtasks: [], + }); + }); + + afterEach(() => { + cleanupTempDir(tmpDir); + }); + + // ── Enums & Constants ─────────────────────────────────────────────────── + + describe('Enums & Constants', () => { + test('BuildEvent has all expected event types', () => { + expect(BuildEvent.BUILD_STARTED).toBe('build_started'); + expect(BuildEvent.SUBTASK_STARTED).toBe('subtask_started'); + expect(BuildEvent.SUBTASK_COMPLETED).toBe('subtask_completed'); + expect(BuildEvent.SUBTASK_FAILED).toBe('subtask_failed'); + expect(BuildEvent.ITERATION_STARTED).toBe('iteration_started'); + expect(BuildEvent.ITERATION_COMPLETED).toBe('iteration_completed'); + expect(BuildEvent.SELF_CRITIQUE).toBe('self_critique'); + expect(BuildEvent.VERIFICATION_STARTED).toBe('verification_started'); + expect(BuildEvent.VERIFICATION_COMPLETED).toBe('verification_completed'); + expect(BuildEvent.BUILD_FAILED).toBe('build_failed'); + expect(BuildEvent.BUILD_SUCCESS).toBe('build_success'); + expect(BuildEvent.BUILD_TIMEOUT).toBe('build_timeout'); + expect(BuildEvent.BUILD_PAUSED).toBe('build_paused'); + }); + + test('SubtaskResult has all expected values', () => { + expect(SubtaskResult.SUCCESS).toBe('success'); + expect(SubtaskResult.FAILED).toBe('failed'); + expect(SubtaskResult.TIMEOUT).toBe('timeout'); + expect(SubtaskResult.SKIPPED).toBe('skipped'); + }); + + test('DEFAULT_CONFIG has expected defaults', () => { + expect(DEFAULT_CONFIG.maxIterations).toBe(10); + 
expect(DEFAULT_CONFIG.globalTimeout).toBe(30 * 60 * 1000); + expect(DEFAULT_CONFIG.subtaskTimeout).toBe(5 * 60 * 1000); + expect(DEFAULT_CONFIG.selfCritiqueEnabled).toBe(true); + expect(DEFAULT_CONFIG.verificationEnabled).toBe(true); + expect(DEFAULT_CONFIG.autoCommit).toBe(true); + expect(DEFAULT_CONFIG.pauseOnFailure).toBe(false); + expect(DEFAULT_CONFIG.verbose).toBe(false); + expect(DEFAULT_CONFIG.useWorktree).toBe(false); + expect(DEFAULT_CONFIG.worktreeCleanup).toBe(true); + }); + }); + + // ── Constructor ───────────────────────────────────────────────────────── + + describe('Constructor', () => { + test('creates with default config', () => { + const loop = new AutonomousBuildLoop(); + expect(loop.config.maxIterations).toBe(10); + expect(loop.config.globalTimeout).toBe(30 * 60 * 1000); + expect(loop.isRunning).toBe(false); + expect(loop.isPaused).toBe(false); + expect(loop.currentSubtask).toBeNull(); + expect(loop.startTime).toBeNull(); + }); + + test('merges custom config over defaults', () => { + const loop = new AutonomousBuildLoop({ + maxIterations: 5, + verbose: true, + }); + expect(loop.config.maxIterations).toBe(5); + expect(loop.config.verbose).toBe(true); + expect(loop.config.globalTimeout).toBe(30 * 60 * 1000); // default preserved + }); + + test('initializes stats to zero', () => { + const loop = new AutonomousBuildLoop(); + expect(loop.stats).toEqual({ + totalSubtasks: 0, + completedSubtasks: 0, + failedSubtasks: 0, + totalIterations: 0, + successfulIterations: 0, + failedIterations: 0, + }); + }); + + test('extends EventEmitter', () => { + const loop = new AutonomousBuildLoop(); + expect(typeof loop.on).toBe('function'); + expect(typeof loop.emit).toBe('function'); + }); + }); + + // ── run() ─────────────────────────────────────────────────────────────── + + describe('run()', () => { + test('throws if already running', async () => { + const loop = createLoop(); + loop.isRunning = true; + await expect(loop.run('story-1')).rejects.toThrow('Build 
loop is already running'); + }); + + test('emits BUILD_STARTED event', async () => { + const loop = createLoop(); + const plan = createTestPlan(1); + const events = collectEvents(loop, [BuildEvent.BUILD_STARTED]); + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(events.count(BuildEvent.BUILD_STARTED)).toBe(1); + expect(events.getByName(BuildEvent.BUILD_STARTED)[0].data.storyId).toBe('story-1'); + }); + + test('executes all subtasks in plan successfully', async () => { + const loop = createLoop(); + const plan = createTestPlan(2); + + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(result.success).toBe(true); + expect(result.storyId).toBe('story-1'); + expect(result.stats.completedSubtasks).toBe(2); + }); + + test('throws when no plan is found', async () => { + const loop = createLoop(); + // No plan provided and no plan files exist + await expect(loop.run('story-1', { rootPath: tmpDir })).rejects.toThrow( + 'No implementation plan found for story-1', + ); + }); + + test('emits BUILD_SUCCESS on successful completion', async () => { + const loop = createLoop(); + const plan = createTestPlan(1); + const events = collectEvents(loop, [BuildEvent.BUILD_SUCCESS]); + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(events.count(BuildEvent.BUILD_SUCCESS)).toBe(1); + expect(events.getByName(BuildEvent.BUILD_SUCCESS)[0].data.storyId).toBe('story-1'); + }); + + test('emits BUILD_FAILED on error', async () => { + const loop = createLoop(); + const events = collectEvents(loop, [BuildEvent.BUILD_FAILED]); + + try { + await loop.run('story-1', { rootPath: tmpDir }); + } catch { + // expected + } + + expect(events.count(BuildEvent.BUILD_FAILED)).toBe(1); + }); + + test('sets isRunning to false in finally block', async () => { + const loop = createLoop(); + try { + await loop.run('story-1', { rootPath: tmpDir }); + } catch { + // expected + } + expect(loop.isRunning).toBe(false); + }); + + test('generates 
report with correct fields', async () => { + const loop = createLoop(); + const plan = createTestPlan(1); + + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(result).toHaveProperty('storyId', 'story-1'); + expect(result).toHaveProperty('success'); + expect(result).toHaveProperty('duration'); + expect(result).toHaveProperty('durationFormatted'); + expect(result).toHaveProperty('stats'); + expect(result).toHaveProperty('config'); + expect(result).toHaveProperty('completedAt'); + }); + }); + + // ── executeLoop (via run) ─────────────────────────────────────────────── + + describe('executeLoop', () => { + test('skips already-completed subtasks', async () => { + const loop = createLoop(); + const plan = createTestPlan(2); + + // Simulate subtask-1 already completed + mockStateInstance.loadOrCreateState.mockReturnValue({ + checkpoints: [], + completedSubtasks: ['subtask-1'], + }); + + const events = collectEvents(loop, [BuildEvent.SUBTASK_STARTED]); + await loop.run('story-1', { plan, rootPath: tmpDir }); + + // Only subtask-2 should have been started + const startedIds = events.getByName(BuildEvent.SUBTASK_STARTED).map((e) => e.data.subtaskId); + expect(startedIds).not.toContain('subtask-1'); + expect(startedIds).toContain('subtask-2'); + }); + + test('handles global timeout during execution', async () => { + const loop = createLoop({ globalTimeout: 1 }); // 1ms timeout + const plan = createTestPlan(2); + + // Add delay via executor to ensure timeout triggers + loop.config.executor = async () => { + await new Promise((r) => setTimeout(r, 10)); + return { success: true }; + }; + + const events = collectEvents(loop, [BuildEvent.BUILD_TIMEOUT]); + + // The first subtask may succeed before timeout, but the loop should detect timeout + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + // Either timed out or completed before timeout - both are valid + // but the globalTimeout=1ms should trigger + if (!result.success) { 
+ expect(result.error).toMatch(/timeout/i); + } + }); + + test('handles pause during execution', async () => { + const loop = createLoop(); + const plan = createTestPlan(3); + + // Pause during the executor + let callCount = 0; + loop.config.executor = async () => { + callCount++; + if (callCount === 1) { + loop.pause(); + } + return { success: true }; + }; + + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + // After first subtask completes, pause is detected in the next iteration + // The build should report paused state + expect(result.success).toBe(false); + }); + + test('stops on failure when pauseOnFailure is true', async () => { + const loop = createLoop({ pauseOnFailure: true }); + const plan = createTestPlan(3); + + // Fail on first subtask + loop.config.executor = async () => ({ success: false, error: 'test failure' }); + + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(result.success).toBe(false); + // Should have only attempted the first subtask (with retries) + expect(loop.stats.failedSubtasks).toBe(1); + }); + }); + + // ── executeSubtaskWithRetry ───────────────────────────────────────────── + + describe('executeSubtaskWithRetry', () => { + test('succeeds on first try', async () => { + const loop = createLoop(); + const plan = createTestPlan(1); + + loop.config.executor = async () => ({ success: true, filesModified: ['a.js'] }); + + const events = collectEvents(loop, [ + BuildEvent.SUBTASK_STARTED, + BuildEvent.SUBTASK_COMPLETED, + ]); + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(events.count(BuildEvent.SUBTASK_COMPLETED)).toBe(1); + expect(loop.stats.successfulIterations).toBe(1); + expect(loop.stats.failedIterations).toBe(0); + }); + + test('retries on failure up to maxIterations', async () => { + const loop = createLoop({ maxIterations: 3, pauseOnFailure: true }); + const plan = createTestPlan(1); + + // Always fail + loop.config.executor = async () => ({ success: 
false, error: 'always fails' }); + + const events = collectEvents(loop, [BuildEvent.ITERATION_STARTED]); + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + // Should have tried 3 times (maxIterations) + expect(events.count(BuildEvent.ITERATION_STARTED)).toBe(3); + expect(loop.stats.failedIterations).toBe(3); + }); + + test('succeeds on retry after initial failure', async () => { + const loop = createLoop({ maxIterations: 3 }); + const plan = createTestPlan(1); + + let attempt = 0; + loop.config.executor = async () => { + attempt++; + if (attempt < 3) return { success: false, error: 'not yet' }; + return { success: true }; + }; + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(loop.stats.successfulIterations).toBe(1); + expect(loop.stats.failedIterations).toBe(2); + expect(loop.stats.completedSubtasks).toBe(1); + }); + + test('emits SUBTASK_FAILED after max iterations', async () => { + const loop = createLoop({ maxIterations: 2, pauseOnFailure: true }); + const plan = createTestPlan(1); + + loop.config.executor = async () => ({ success: false, error: 'fail' }); + + const events = collectEvents(loop, [BuildEvent.SUBTASK_FAILED]); + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(events.count(BuildEvent.SUBTASK_FAILED)).toBe(1); + const failEvent = events.getByName(BuildEvent.SUBTASK_FAILED)[0].data; + expect(failEvent.subtaskId).toBe('subtask-1'); + expect(failEvent.attempts).toBe(2); + }); + + test('handles exception in executor', async () => { + const loop = createLoop({ maxIterations: 2, pauseOnFailure: true }); + const plan = createTestPlan(1); + + loop.config.executor = async () => { + throw new Error('executor crash'); + }; + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(loop.stats.failedIterations).toBe(2); + }); + + test('performs self-critique between retries when enabled', async () => { + const loop = createLoop({ + maxIterations: 2, + selfCritiqueEnabled: true, + pauseOnFailure: 
true, + }); + const plan = createTestPlan(1); + + loop.config.executor = async () => ({ success: false, error: 'fail' }); + + const events = collectEvents(loop, [BuildEvent.SELF_CRITIQUE]); + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + // Self-critique should happen between retries (not after last iteration) + expect(events.count(BuildEvent.SELF_CRITIQUE)).toBe(1); + }); + }); + + // ── executeSubtask ────────────────────────────────────────────────────── + + describe('executeSubtask', () => { + test('calls external executor when configured', async () => { + const loop = createLoop(); + const plan = createTestPlan(1); + const executorFn = jest.fn().mockResolvedValue({ success: true, filesModified: ['x.js'] }); + loop.config.executor = executorFn; + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(executorFn).toHaveBeenCalledWith( + expect.objectContaining({ id: 'subtask-1' }), + expect.objectContaining({ iteration: 1 }), + ); + }); + + test('returns success without executor (default simulation)', async () => { + const loop = createLoop(); + const plan = createTestPlan(1); + // No executor set - uses default simulation + + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(result.success).toBe(true); + }); + }); + + // ── loadPlan ──────────────────────────────────────────────────────────── + + describe('loadPlan', () => { + test('returns plan from options.plan directly', async () => { + const loop = createLoop(); + const plan = createTestPlan(1); + + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(result.success).toBe(true); + expect(result.stats.totalSubtasks).toBe(1); + }); + + test('loads JSON plan from file', async () => { + const loop = createLoop(); + const planDir = path.join(tmpDir, 'plan'); + fs.mkdirSync(planDir, { recursive: true }); + + const plan = createTestPlan(1); + fs.writeFileSync(path.join(planDir, 'implementation.json'), JSON.stringify(plan)); + + 
// Mock process.cwd to return tmpDir + const origCwd = process.cwd; + process.cwd = () => tmpDir; + + try { + const result = await loop.run('story-1', { rootPath: tmpDir }); + expect(result.success).toBe(true); + } finally { + process.cwd = origCwd; + } + }); + + test('returns null when no plan found (triggers error)', async () => { + const loop = createLoop(); + await expect(loop.run('story-1', { rootPath: tmpDir })).rejects.toThrow( + 'No implementation plan found', + ); + }); + }); + + // ── countSubtasks ─────────────────────────────────────────────────────── + + describe('countSubtasks', () => { + test('counts subtasks across phases', async () => { + const loop = createLoop(); + const plan = { + phases: [ + { id: 'p1', subtasks: [{ id: 's1' }, { id: 's2' }] }, + { id: 'p2', subtasks: [{ id: 's3' }] }, + ], + }; + + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(result.stats.totalSubtasks).toBe(3); + }); + + test('handles empty phases', async () => { + const loop = createLoop(); + const plan = { phases: [{ id: 'p1', subtasks: [] }] }; + + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(result.stats.totalSubtasks).toBe(0); + }); + + test('handles missing phases array', async () => { + const loop = createLoop(); + const plan = { phases: [] }; + + const result = await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(result.stats.totalSubtasks).toBe(0); + }); + }); + + // ── Control methods ───────────────────────────────────────────────────── + + describe('Control Methods', () => { + test('pause() sets isPaused when running', () => { + const loop = createLoop(); + loop.isRunning = true; + loop.pause(); + expect(loop.isPaused).toBe(true); + }); + + test('pause() does nothing when not running', () => { + const loop = createLoop(); + loop.pause(); + expect(loop.isPaused).toBe(false); + }); + + test('stop() sets isRunning false', () => { + const loop = createLoop(); + loop.isRunning = true; 
+ loop.isPaused = true; + loop.stop(); + expect(loop.isRunning).toBe(false); + expect(loop.isPaused).toBe(false); + }); + + test('stop() does nothing when not running', () => { + const loop = createLoop(); + loop.stop(); + expect(loop.isRunning).toBe(false); + }); + }); + + // ── isTimedOut ────────────────────────────────────────────────────────── + + describe('isTimedOut()', () => { + test('returns false when no startTime', () => { + const loop = createLoop(); + expect(loop.isTimedOut()).toBe(false); + }); + + test('returns false when no globalTimeout', () => { + const loop = createLoop({ globalTimeout: 0 }); + loop.startTime = Date.now() - 999999; + expect(loop.isTimedOut()).toBe(false); + }); + + test('returns false within timeout', () => { + const loop = createLoop({ globalTimeout: 60000 }); + loop.startTime = Date.now(); + expect(loop.isTimedOut()).toBe(false); + }); + + test('returns true after timeout', () => { + const loop = createLoop({ globalTimeout: 1000 }); + loop.startTime = Date.now() - 2000; + expect(loop.isTimedOut()).toBe(true); + }); + }); + + // ── isComplete ────────────────────────────────────────────────────────── + + describe('isComplete()', () => { + test('returns true when all subtasks completed', () => { + const loop = createLoop(); + loop.stats.totalSubtasks = 3; + loop.stats.completedSubtasks = 3; + expect(loop.isComplete()).toBe(true); + }); + + test('returns true when completed exceeds total', () => { + const loop = createLoop(); + loop.stats.totalSubtasks = 2; + loop.stats.completedSubtasks = 3; + expect(loop.isComplete()).toBe(true); + }); + + test('returns false when incomplete', () => { + const loop = createLoop(); + loop.stats.totalSubtasks = 5; + loop.stats.completedSubtasks = 2; + expect(loop.isComplete()).toBe(false); + }); + }); + + // ── formatDuration ────────────────────────────────────────────────────── + + describe('formatDuration()', () => { + test('formats seconds', () => { + const loop = createLoop(); + 
expect(loop.formatDuration(5000)).toBe('5s'); + }); + + test('formats minutes and seconds', () => { + const loop = createLoop(); + expect(loop.formatDuration(125000)).toBe('2m 5s'); + }); + + test('formats hours, minutes, and seconds', () => { + const loop = createLoop(); + expect(loop.formatDuration(3725000)).toBe('1h 2m 5s'); + }); + + test('formats zero', () => { + const loop = createLoop(); + expect(loop.formatDuration(0)).toBe('0s'); + }); + }); + + // ── formatStatus ──────────────────────────────────────────────────────── + + describe('formatStatus()', () => { + test('returns status string with running info', () => { + const loop = createLoop(); + loop.isRunning = true; + loop.startTime = Date.now(); + loop.stats.totalSubtasks = 5; + loop.stats.completedSubtasks = 2; + + const status = loop.formatStatus(); + + expect(status).toContain('Autonomous Build Loop Status'); + expect(status).toContain('Running'); + expect(status).toContain('Statistics'); + expect(status).toContain('Progress'); + }); + + test('includes current subtask when set', () => { + const loop = createLoop(); + loop.currentSubtask = 'subtask-42'; + + const status = loop.formatStatus(); + + expect(status).toContain('subtask-42'); + }); + + test('shows TIMEOUT when time exceeded', () => { + const loop = createLoop({ globalTimeout: 1000 }); + loop.startTime = Date.now() - 5000; + + const status = loop.formatStatus(); + + expect(status).toContain('TIMEOUT'); + }); + }); + + // ── log ───────────────────────────────────────────────────────────────── + + describe('log()', () => { + test('logs info messages', () => { + const loop = createLoop(); + const spy = jest.spyOn(console, 'log').mockImplementation(); + + loop.log('test message', 'info'); + + expect(spy).toHaveBeenCalledWith(expect.stringContaining('test message')); + spy.mockRestore(); + }); + + test('skips debug when not verbose', () => { + const loop = createLoop({ verbose: false }); + const spy = jest.spyOn(console, 
'log').mockImplementation(); + + loop.log('debug msg', 'debug'); + + expect(spy).not.toHaveBeenCalled(); + spy.mockRestore(); + }); + + test('logs debug when verbose', () => { + const loop = createLoop({ verbose: true }); + const spy = jest.spyOn(console, 'log').mockImplementation(); + + loop.log('debug msg', 'debug'); + + expect(spy).toHaveBeenCalledWith(expect.stringContaining('debug msg')); + spy.mockRestore(); + }); + }); + + // ── Event tracking ────────────────────────────────────────────────────── + + describe('Event emission', () => { + test('emits full lifecycle events for successful build', async () => { + const loop = createLoop(); + const plan = createTestPlan(1); + + const events = collectEvents(loop, [ + BuildEvent.BUILD_STARTED, + BuildEvent.SUBTASK_STARTED, + BuildEvent.ITERATION_STARTED, + BuildEvent.SUBTASK_COMPLETED, + BuildEvent.BUILD_SUCCESS, + ]); + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(events.count(BuildEvent.BUILD_STARTED)).toBe(1); + expect(events.count(BuildEvent.SUBTASK_STARTED)).toBe(1); + expect(events.count(BuildEvent.ITERATION_STARTED)).toBe(1); + expect(events.count(BuildEvent.SUBTASK_COMPLETED)).toBe(1); + expect(events.count(BuildEvent.BUILD_SUCCESS)).toBe(1); + }); + + test('emits ITERATION_COMPLETED on failed iteration', async () => { + const loop = createLoop({ maxIterations: 1, pauseOnFailure: true }); + const plan = createTestPlan(1); + + loop.config.executor = async () => ({ success: false, error: 'fail' }); + + const events = collectEvents(loop, [BuildEvent.ITERATION_COMPLETED]); + + await loop.run('story-1', { plan, rootPath: tmpDir }); + + expect(events.count(BuildEvent.ITERATION_COMPLETED)).toBe(1); + const data = events.getByName(BuildEvent.ITERATION_COMPLETED)[0].data; + expect(data.success).toBe(false); + expect(data.error).toBe('fail'); + }); + }); +}); + +``` + +================================================== +📄 tests/core/greeting-preference-manager.test.js 
+================================================== +```js +/** + * Greeting Preference Manager Tests + * + * Tests for the GreetingPreferenceManager class including: + * - Reading preference from config + * - All 4 preference values work correctly + * - Missing key defaults to 'auto' (backward compatibility) + * + * @story ACT-1 - Fix GreetingPreferenceManager Configuration + */ + +const path = require('path'); + +// Mock fs before requiring the module +jest.mock('fs', () => ({ + readFileSync: jest.fn(), + writeFileSync: jest.fn(), + existsSync: jest.fn(), + copyFileSync: jest.fn(), +})); + +// Mock js-yaml +jest.mock('js-yaml', () => ({ + load: jest.fn(), + dump: jest.fn(), +})); + +const fs = require('fs'); +const yaml = require('js-yaml'); + +// Import after mocks are set up +const GreetingPreferenceManager = require('../../.aios-core/development/scripts/greeting-preference-manager'); + +// Set timeout for all tests +jest.setTimeout(30000); + +describe('GreetingPreferenceManager', () => { + let manager; + + beforeEach(() => { + jest.clearAllMocks(); + manager = new GreetingPreferenceManager(); + }); + + describe('getPreference()', () => { + // Subtask 3.2: Test preference is read from config + it('should read preference from config file', () => { + const mockConfig = { + agentIdentity: { + greeting: { + preference: 'named', + contextDetection: true, + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference(); + + expect(result).toBe('named'); + expect(fs.readFileSync).toHaveBeenCalled(); + expect(yaml.load).toHaveBeenCalledWith('yaml content'); + }); + + // Subtask 3.3: Test each of 4 values produces correct greeting level + it.each([ + ['auto', 'auto'], + ['minimal', 'minimal'], + ['named', 'named'], + ['archetypal', 'archetypal'], + ])('should return "%s" when preference is set to "%s"', (input, expected) => { + const mockConfig = { + agentIdentity: { + greeting: { + 
preference: input, + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference(); + + expect(result).toBe(expected); + }); + + // Subtask 3.4: Test missing key defaults to 'auto' (backward compatibility) + it('should default to "auto" when preference key is missing', () => { + const mockConfig = { + agentIdentity: { + greeting: { + contextDetection: true, + // preference key is missing + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference(); + + expect(result).toBe('auto'); + }); + + it('should default to "auto" when greeting section is missing', () => { + const mockConfig = { + agentIdentity: { + // greeting section is missing + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference(); + + expect(result).toBe('auto'); + }); + + it('should default to "auto" when agentIdentity is missing', () => { + const mockConfig = { + // agentIdentity is missing + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference(); + + expect(result).toBe('auto'); + }); + + it('should default to "auto" on config load error', () => { + fs.readFileSync.mockImplementation(() => { + throw new Error('File not found'); + }); + + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + + const result = manager.getPreference(); + + expect(result).toBe('auto'); + expect(consoleSpy).toHaveBeenCalledWith( + expect.stringContaining('[GreetingPreference]'), + expect.any(String), + ); + + consoleSpy.mockRestore(); + }); + }); + + describe('setPreference()', () => { + beforeEach(() => { + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue('yaml content'); + yaml.dump.mockReturnValue('dumped yaml'); + 
yaml.load.mockImplementation((content) => { + if (content === 'yaml content') { + return { agentIdentity: { greeting: {} } }; + } + return {}; // For validation + }); + }); + + it('should set valid preference values', () => { + manager.setPreference('minimal'); + + expect(yaml.dump).toHaveBeenCalled(); + expect(fs.writeFileSync).toHaveBeenCalled(); + }); + + it('should reject invalid preference values', () => { + expect(() => manager.setPreference('invalid')).toThrow(/Invalid preference/); + expect(() => manager.setPreference('')).toThrow(/Invalid preference/); + expect(() => manager.setPreference('MINIMAL')).toThrow(/Invalid preference/); + }); + + it('should backup config before modification', () => { + manager.setPreference('named'); + + expect(fs.copyFileSync).toHaveBeenCalled(); + }); + }); + + describe('getConfig()', () => { + it('should return complete greeting config', () => { + const mockConfig = { + agentIdentity: { + greeting: { + preference: 'auto', + contextDetection: true, + sessionDetection: 'hybrid', + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getConfig(); + + expect(result).toEqual({ + preference: 'auto', + contextDetection: true, + sessionDetection: 'hybrid', + }); + }); + + it('should return empty object when greeting section is missing', () => { + const mockConfig = { + agentIdentity: {}, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getConfig(); + + expect(result).toEqual({}); + }); + }); +}); + +/** + * E2E Greeting Behavior Tests (Task 4) + * + * Tests that each preference value produces the expected greeting format. + * These tests verify the integration between GreetingPreferenceManager + * and GreetingBuilder.buildFixedLevelGreeting(). 
+ */ +describe('E2E Greeting Behavior', () => { + // Mock agent with persona_profile and greeting_levels + const mockAgent = { + id: 'dev', + name: 'Dex', + icon: '💻', + persona_profile: { + archetype: 'Builder', + greeting_levels: { + minimal: '💻 dev Agent ready', + named: "💻 Dex (Builder) ready. Let's build something great!", + archetypal: '💻 Dex the Builder ready to innovate!', + }, + }, + }; + + // Subtask 4.1: Test minimal preference + describe('preference: minimal', () => { + it('should show only icon + "ready" format', () => { + const greetingLevel = mockAgent.persona_profile.greeting_levels.minimal; + + expect(greetingLevel).toBe('💻 dev Agent ready'); + expect(greetingLevel).toContain(mockAgent.icon); + expect(greetingLevel).toContain('ready'); + expect(greetingLevel).not.toContain('Builder'); // No archetype + expect(greetingLevel).not.toContain('Dex'); // No name (uses id) + }); + }); + + // Subtask 4.2: Test named preference + describe('preference: named', () => { + it('should show name + archetype format', () => { + const greetingLevel = mockAgent.persona_profile.greeting_levels.named; + + expect(greetingLevel).toBe("💻 Dex (Builder) ready. 
Let's build something great!"); + expect(greetingLevel).toContain(mockAgent.icon); + expect(greetingLevel).toContain(mockAgent.name); // Has name + expect(greetingLevel).toContain('Builder'); // Has archetype + }); + }); + + // Subtask 4.3: Test archetypal preference + describe('preference: archetypal', () => { + it('should show full persona format', () => { + const greetingLevel = mockAgent.persona_profile.greeting_levels.archetypal; + + expect(greetingLevel).toBe('💻 Dex the Builder ready to innovate!'); + expect(greetingLevel).toContain(mockAgent.icon); + expect(greetingLevel).toContain(mockAgent.name); + expect(greetingLevel).toContain('Builder'); + expect(greetingLevel).toContain('the'); // Archetypal format: "X the Y" + }); + }); + + // Subtask 4.4: Test auto preference (session-aware) + describe('preference: auto', () => { + it('should use session-aware greeting logic when preference is auto', () => { + // When preference is 'auto', GreetingBuilder._buildContextualGreeting is called + // which adapts based on session type (new/existing/workflow) + // This is tested by verifying the preference manager returns 'auto' + // and the GreetingBuilder handles it correctly + + const mockConfig = { + agentIdentity: { + greeting: { + preference: 'auto', + contextDetection: true, + sessionDetection: 'hybrid', + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const manager = new GreetingPreferenceManager(); + const preference = manager.getPreference(); + + expect(preference).toBe('auto'); + // When 'auto', GreetingBuilder uses _buildContextualGreeting + // which adapts greeting based on session type + }); + }); + + // Verify all greeting levels are distinct + describe('greeting level distinctness', () => { + it('all 3 levels should produce different outputs', () => { + const levels = mockAgent.persona_profile.greeting_levels; + + expect(levels.minimal).not.toBe(levels.named); + 
expect(levels.named).not.toBe(levels.archetypal); + expect(levels.minimal).not.toBe(levels.archetypal); + }); + + it('minimal should be the shortest greeting', () => { + const levels = mockAgent.persona_profile.greeting_levels; + + expect(levels.minimal.length).toBeLessThan(levels.named.length); + expect(levels.minimal.length).toBeLessThan(levels.archetypal.length); + }); + }); +}); + +``` + +================================================== +📄 tests/core/permission-mode-integration.test.js +================================================== +```js +/** + * Permission Mode Integration Tests + * + * Story: ACT-4 - PermissionMode Integration Fix + * Epic: EPIC-ACT - Unified Agent Activation Pipeline + * + * Tests that PermissionMode and OperationGuard are properly wired + * and enforce permission checks during agent operations. + * + * @author @dev (Dex) + * @version 1.0.0 + */ + +const path = require('path'); +const fs = require('fs'); +const os = require('os'); + +const { PermissionMode } = require('../../.aios-core/core/permissions/permission-mode'); +const { OperationGuard } = require('../../.aios-core/core/permissions/operation-guard'); +const { + createGuard, + checkOperation, + getModeBadge, + setMode, + cycleMode, + enforcePermission, +} = require('../../.aios-core/core/permissions'); + +describe('Permission Mode Integration (Story ACT-4)', () => { + let tempDir; + let configPath; + + beforeEach(() => { + tempDir = path.join(os.tmpdir(), `permission-test-${Date.now()}-${Math.random().toString(36).slice(2)}`); + const aiosDir = path.join(tempDir, '.aios'); + fs.mkdirSync(aiosDir, { recursive: true }); + configPath = path.join(aiosDir, 'config.yaml'); + }); + + afterEach(() => { + try { + fs.rmSync(tempDir, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + }); + + // --- AC 1: environment-bootstrap creates config with permissions.mode: ask --- + describe('AC1: Default permission mode initialization', () => { + test('PermissionMode 
defaults to "ask" when config file does not exist', async () => { + const emptyDir = path.join(os.tmpdir(), `perm-empty-${Date.now()}`); + fs.mkdirSync(emptyDir, { recursive: true }); + + try { + const mode = new PermissionMode(emptyDir); + const loaded = await mode.load(); + expect(loaded).toBe('ask'); + expect(mode.currentMode).toBe('ask'); + } finally { + fs.rmSync(emptyDir, { recursive: true, force: true }); + } + }); + + test('PermissionMode reads "ask" from config when set', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const mode = new PermissionMode(tempDir); + const loaded = await mode.load(); + expect(loaded).toBe('ask'); + }); + + test('PermissionMode falls back to "ask" for invalid mode in config', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: invalid-mode\n'); + + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {}); + const mode = new PermissionMode(tempDir); + const loaded = await mode.load(); + expect(loaded).toBe('ask'); + warnSpy.mockRestore(); + }); + }); + + // --- AC 4: Explore mode blocks file writes and git operations --- + describe('AC4: Explore mode (read-only) prevents writes', () => { + let guard; + + beforeEach(async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: explore\n'); + const mode = new PermissionMode(tempDir); + await mode.load(); + guard = new OperationGuard(mode); + }); + + test('explore mode allows Read tool', async () => { + const result = await guard.guard('Read', { file_path: '/some/file.js' }); + expect(result.proceed).toBe(true); + expect(result.operation).toBe('read'); + }); + + test('explore mode allows Grep tool', async () => { + const result = await guard.guard('Grep', { pattern: 'test' }); + expect(result.proceed).toBe(true); + }); + + test('explore mode allows Glob tool', async () => { + const result = await guard.guard('Glob', { pattern: '**/*.js' }); + expect(result.proceed).toBe(true); + }); + + test('explore mode blocks Write 
tool', async () => { + const result = await guard.guard('Write', { file_path: '/some/file.js', content: 'test' }); + expect(result.proceed).toBe(false); + expect(result.blocked).toBe(true); + expect(result.message).toContain('Explore'); + }); + + test('explore mode blocks Edit tool', async () => { + const result = await guard.guard('Edit', { file_path: '/some/file.js', old_string: 'a', new_string: 'b' }); + expect(result.proceed).toBe(false); + expect(result.blocked).toBe(true); + }); + + test('explore mode blocks git commit via Bash', async () => { + const result = await guard.guard('Bash', { command: 'git commit -m "test"' }); + expect(result.proceed).toBe(false); + expect(result.blocked).toBe(true); + }); + + test('explore mode blocks git push via Bash', async () => { + const result = await guard.guard('Bash', { command: 'git push origin main' }); + expect(result.proceed).toBe(false); + expect(result.blocked).toBe(true); + }); + + test('explore mode allows git status via Bash', async () => { + const result = await guard.guard('Bash', { command: 'git status' }); + expect(result.proceed).toBe(true); + }); + + test('explore mode allows git log via Bash', async () => { + const result = await guard.guard('Bash', { command: 'git log --oneline -5' }); + expect(result.proceed).toBe(true); + }); + + test('explore mode blocks rm -rf via Bash', async () => { + const result = await guard.guard('Bash', { command: 'rm -rf node_modules' }); + expect(result.proceed).toBe(false); + expect(result.blocked).toBe(true); + }); + }); + + // --- AC 5: Ask mode prompts for confirmation on destructive ops --- + describe('AC5: Ask mode (default) prompts for confirmation', () => { + let guard; + + beforeEach(async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const mode = new PermissionMode(tempDir); + await mode.load(); + guard = new OperationGuard(mode); + }); + + test('ask mode allows Read tool without confirmation', async () => { + const result = await 
guard.guard('Read', { file_path: '/some/file.js' }); + expect(result.proceed).toBe(true); + }); + + test('ask mode requires confirmation for Write tool', async () => { + const result = await guard.guard('Write', { file_path: '/some/file.js' }); + expect(result.proceed).toBe(false); + expect(result.needsConfirmation).toBe(true); + expect(result.message).toContain('Confirmation Required'); + }); + + test('ask mode requires confirmation for Edit tool', async () => { + const result = await guard.guard('Edit', { file_path: '/some/file.js' }); + expect(result.proceed).toBe(false); + expect(result.needsConfirmation).toBe(true); + }); + + test('ask mode requires confirmation for git commit', async () => { + const result = await guard.guard('Bash', { command: 'git commit -m "test"' }); + expect(result.proceed).toBe(false); + expect(result.needsConfirmation).toBe(true); + }); + + test('ask mode requires confirmation for rm command', async () => { + const result = await guard.guard('Bash', { command: 'rm -rf dist' }); + expect(result.proceed).toBe(false); + expect(result.needsConfirmation).toBe(true); + }); + + test('ask mode allows git status without confirmation', async () => { + const result = await guard.guard('Bash', { command: 'git status' }); + expect(result.proceed).toBe(true); + }); + }); + + // --- AC 6: Auto mode allows all operations --- + describe('AC6: Auto mode allows all operations', () => { + let guard; + + beforeEach(async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: auto\n'); + const mode = new PermissionMode(tempDir); + await mode.load(); + guard = new OperationGuard(mode); + }); + + test('auto mode allows Read tool', async () => { + const result = await guard.guard('Read', { file_path: '/some/file.js' }); + expect(result.proceed).toBe(true); + }); + + test('auto mode allows Write tool', async () => { + const result = await guard.guard('Write', { file_path: '/some/file.js' }); + expect(result.proceed).toBe(true); + }); + + test('auto mode 
allows Edit tool', async () => { + const result = await guard.guard('Edit', { file_path: '/some/file.js' }); + expect(result.proceed).toBe(true); + }); + + test('auto mode allows git commit', async () => { + const result = await guard.guard('Bash', { command: 'git commit -m "test"' }); + expect(result.proceed).toBe(true); + }); + + test('auto mode allows git push', async () => { + const result = await guard.guard('Bash', { command: 'git push origin main' }); + expect(result.proceed).toBe(true); + }); + + test('auto mode allows rm -rf', async () => { + const result = await guard.guard('Bash', { command: 'rm -rf node_modules' }); + expect(result.proceed).toBe(true); + }); + + test('auto mode allows npm install', async () => { + const result = await guard.guard('Bash', { command: 'npm install express' }); + expect(result.proceed).toBe(true); + }); + }); + + // --- AC 7: Badge reflects current mode --- + describe('AC7: Badge display reflects current mode', () => { + test('badge shows [Explore] for explore mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: explore\n'); + const badge = await getModeBadge(tempDir); + expect(badge).toContain('Explore'); + expect(badge).toContain('🔍'); + }); + + test('badge shows [Ask] for ask mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const badge = await getModeBadge(tempDir); + expect(badge).toContain('Ask'); + expect(badge).toContain('⚠️'); + }); + + test('badge shows [Auto] for auto mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: auto\n'); + const badge = await getModeBadge(tempDir); + expect(badge).toContain('Auto'); + expect(badge).toContain('⚡'); + }); + + test('badge defaults to [Ask] when config missing', async () => { + const emptyDir = path.join(os.tmpdir(), `badge-empty-${Date.now()}`); + fs.mkdirSync(emptyDir, { recursive: true }); + try { + const badge = await getModeBadge(emptyDir); + expect(badge).toContain('Ask'); + } finally { + 
fs.rmSync(emptyDir, { recursive: true, force: true }); + } + }); + }); + + // --- AC 2: *yolo command cycles modes --- + describe('AC2: *yolo toggle cycles modes correctly', () => { + test('cycles from ask to auto', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const result = await cycleMode(tempDir); + expect(result.mode).toBe('auto'); + expect(result.badge).toContain('Auto'); + }); + + test('cycles from auto to explore', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: auto\n'); + const result = await cycleMode(tempDir); + expect(result.mode).toBe('explore'); + expect(result.badge).toContain('Explore'); + }); + + test('cycles from explore back to ask', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: explore\n'); + const result = await cycleMode(tempDir); + expect(result.mode).toBe('ask'); + expect(result.badge).toContain('Ask'); + }); + + test('full cycle: ask -> auto -> explore -> ask', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + + // ask -> auto + let result = await cycleMode(tempDir); + expect(result.mode).toBe('auto'); + + // auto -> explore + result = await cycleMode(tempDir); + expect(result.mode).toBe('explore'); + + // explore -> ask + result = await cycleMode(tempDir); + expect(result.mode).toBe('ask'); + }); + + test('cycleMode persists the new mode to config file', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + await cycleMode(tempDir); + + // Read config directly to verify persistence + const configContent = fs.readFileSync(configPath, 'utf-8'); + expect(configContent).toContain('mode: auto'); + }); + + test('cycleMode returns a message string', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const result = await cycleMode(tempDir); + expect(result.message).toBeDefined(); + expect(typeof result.message).toBe('string'); + expect(result.message).toContain('Auto'); + }); + }); + + // --- AC 8: 
Integration test - explore mode blocks writes --- + describe('AC8: Integration - enforcePermission API', () => { + test('enforcePermission returns "allow" for read ops in explore mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: explore\n'); + const result = await enforcePermission('Read', { file_path: '/test.js' }, tempDir); + expect(result.action).toBe('allow'); + }); + + test('enforcePermission returns "deny" for write ops in explore mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: explore\n'); + const result = await enforcePermission('Write', { file_path: '/test.js' }, tempDir); + expect(result.action).toBe('deny'); + expect(result.message).toBeDefined(); + }); + + test('enforcePermission returns "prompt" for write ops in ask mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const result = await enforcePermission('Write', { file_path: '/test.js' }, tempDir); + expect(result.action).toBe('prompt'); + expect(result.message).toBeDefined(); + }); + + test('enforcePermission returns "allow" for write ops in auto mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: auto\n'); + const result = await enforcePermission('Write', { file_path: '/test.js' }, tempDir); + expect(result.action).toBe('allow'); + }); + + test('enforcePermission returns "deny" for git push in explore mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: explore\n'); + const result = await enforcePermission('Bash', { command: 'git push origin main' }, tempDir); + expect(result.action).toBe('deny'); + }); + + test('enforcePermission returns "prompt" for git push in ask mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const result = await enforcePermission('Bash', { command: 'git push origin main' }, tempDir); + expect(result.action).toBe('prompt'); + }); + + test('enforcePermission returns "allow" for git push in auto mode', async () => { + 
fs.writeFileSync(configPath, 'permissions:\n mode: auto\n'); + const result = await enforcePermission('Bash', { command: 'git push origin main' }, tempDir); + expect(result.action).toBe('allow'); + }); + }); + + // --- Additional: OperationGuard classification --- + describe('OperationGuard command classification', () => { + let guard; + + beforeEach(async () => { + const mode = new PermissionMode(tempDir); + mode.currentMode = 'ask'; + mode._loaded = true; + guard = new OperationGuard(mode); + }); + + test('classifies Read tool as read', () => { + expect(guard.classifyOperation('Read')).toBe('read'); + }); + + test('classifies Write tool as write', () => { + expect(guard.classifyOperation('Write')).toBe('write'); + }); + + test('classifies Edit tool as write', () => { + expect(guard.classifyOperation('Edit')).toBe('write'); + }); + + test('classifies Glob tool as read', () => { + expect(guard.classifyOperation('Glob')).toBe('read'); + }); + + test('classifies Grep tool as read', () => { + expect(guard.classifyOperation('Grep')).toBe('read'); + }); + + test('classifies git status as read', () => { + expect(guard.classifyBashCommand('git status')).toBe('read'); + }); + + test('classifies git push as write', () => { + expect(guard.classifyBashCommand('git push origin main')).toBe('write'); + }); + + test('classifies git commit as write', () => { + expect(guard.classifyBashCommand('git commit -m "test"')).toBe('write'); + }); + + test('classifies rm -rf as delete', () => { + expect(guard.classifyBashCommand('rm -rf node_modules')).toBe('delete'); + }); + + test('classifies npm install as write', () => { + expect(guard.classifyBashCommand('npm install express')).toBe('write'); + }); + + test('classifies git log as read', () => { + expect(guard.classifyBashCommand('git log --oneline')).toBe('read'); + }); + + test('classifies git diff as read', () => { + expect(guard.classifyBashCommand('git diff')).toBe('read'); + }); + }); + + // --- Additional: setMode function --- + 
describe('setMode function', () => { + test('sets mode to explore and persists', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const result = await setMode('explore', tempDir); + expect(result.mode).toBe('explore'); + + const configContent = fs.readFileSync(configPath, 'utf-8'); + expect(configContent).toContain('mode: explore'); + }); + + test('sets mode to auto and persists', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const result = await setMode('auto', tempDir); + expect(result.mode).toBe('auto'); + }); + + test('handles yolo alias for auto', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const result = await setMode('yolo', tempDir); + expect(result.mode).toBe('auto'); + }); + + test('throws on invalid mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + await expect(setMode('invalid', tempDir)).rejects.toThrow('Invalid mode'); + }); + }); + + // --- Additional: OperationGuard logging --- + describe('OperationGuard operation logging', () => { + test('logs operations for audit trail', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: auto\n'); + const mode = new PermissionMode(tempDir); + await mode.load(); + const guard = new OperationGuard(mode); + + await guard.guard('Read', { file_path: '/file1.js' }); + await guard.guard('Write', { file_path: '/file2.js' }); + await guard.guard('Bash', { command: 'git status' }); + + const log = guard.getLog(); + expect(log).toHaveLength(3); + expect(log[0].operation).toBe('read'); + expect(log[1].operation).toBe('write'); + expect(log[2].operation).toBe('read'); + }); + + test('getStats returns correct counts', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: auto\n'); + const mode = new PermissionMode(tempDir); + await mode.load(); + const guard = new OperationGuard(mode); + + await guard.guard('Read', {}); + await guard.guard('Write', {}); + await 
guard.guard('Write', {}); + + const stats = guard.getStats(); + expect(stats.total).toBe(3); + expect(stats.byOperation.read).toBe(1); + expect(stats.byOperation.write).toBe(2); + expect(stats.byResult.allowed).toBe(3); + }); + }); + + // --- Additional: createGuard convenience function --- + describe('createGuard convenience function', () => { + test('creates guard with loaded mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: explore\n'); + const { mode, guard } = await createGuard(tempDir); + expect(mode.currentMode).toBe('explore'); + expect(guard).toBeInstanceOf(OperationGuard); + }); + }); + + // --- Additional: PermissionMode helper methods --- + describe('PermissionMode helper methods', () => { + test('isAutonomous returns true for auto mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: auto\n'); + const mode = new PermissionMode(tempDir); + await mode.load(); + expect(mode.isAutonomous()).toBe(true); + }); + + test('isAutonomous returns false for ask mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const mode = new PermissionMode(tempDir); + await mode.load(); + expect(mode.isAutonomous()).toBe(false); + }); + + test('isReadOnly returns true for explore mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: explore\n'); + const mode = new PermissionMode(tempDir); + await mode.load(); + expect(mode.isReadOnly()).toBe(true); + }); + + test('isReadOnly returns false for ask mode', async () => { + fs.writeFileSync(configPath, 'permissions:\n mode: ask\n'); + const mode = new PermissionMode(tempDir); + await mode.load(); + expect(mode.isReadOnly()).toBe(false); + }); + + test('getHelp returns formatted help text', () => { + const help = PermissionMode.getHelp(); + expect(help).toContain('Permission Modes'); + expect(help).toContain('explore'); + expect(help).toContain('ask'); + expect(help).toContain('auto'); + expect(help).toContain('*yolo'); + }); + }); +}); + 
+``` + +================================================== +📄 tests/core/epic-executors.test.js +================================================== +```js +/** + * Epic Executors Tests + * + * Story: 0.3 - Epic Executors + * Epic: Epic 0 - ADE Master Orchestrator + * + * Tests for all epic executor classes. + * + * @author @dev (Dex) + * @version 1.0.0 + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +const { + EpicExecutor, + Epic3Executor, + Epic4Executor, + Epic5Executor, + Epic6Executor, + ExecutionStatus, + RecoveryStrategy, + QAVerdict, + createExecutor, + hasExecutor, + getAvailableEpics, + EXECUTOR_MAP, +} = require('../../.aios-core/core/orchestration/executors'); + +describe('Epic Executors (Story 0.3)', () => { + let tempDir; + let mockOrchestrator; + + beforeEach(async () => { + // Create temp directory + tempDir = path.join(os.tmpdir(), `epic-executors-test-${Date.now()}`); + await fs.ensureDir(tempDir); + + // Create mock orchestrator + mockOrchestrator = { + projectRoot: tempDir, + storyId: 'TEST-001', + maxRetries: 3, + _log: jest.fn(), + }; + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + describe('EpicExecutor Base Class (AC1)', () => { + it('should create instance with orchestrator and epic number', () => { + const executor = new EpicExecutor(mockOrchestrator, 3); + + expect(executor.orchestrator).toBe(mockOrchestrator); + expect(executor.epicNum).toBe(3); + expect(executor.status).toBe(ExecutionStatus.PENDING); + }); + + it('should throw error on execute() - abstract method', async () => { + const executor = new EpicExecutor(mockOrchestrator, 3); + + await expect(executor.execute({})).rejects.toThrow('must implement execute()'); + }); + + it('should return standardized result (AC7)', () => { + const executor = new EpicExecutor(mockOrchestrator, 3); + executor.status = ExecutionStatus.SUCCESS; + executor.startTime = new Date().toISOString(); + executor.endTime = new 
Date().toISOString(); + + const result = executor.getResult(); + + expect(result.epicNum).toBe(3); + expect(result.status).toBe(ExecutionStatus.SUCCESS); + expect(result.success).toBe(true); + expect(result.artifacts).toEqual([]); + expect(result.errors).toEqual([]); + }); + + it('should track artifacts', () => { + const executor = new EpicExecutor(mockOrchestrator, 3); + + executor._addArtifact('file', '/path/to/file.md', { size: 100 }); + + expect(executor.artifacts).toHaveLength(1); + expect(executor.artifacts[0].type).toBe('file'); + expect(executor.artifacts[0].path).toBe('/path/to/file.md'); + expect(executor.artifacts[0].size).toBe(100); + }); + + it('should track logs', () => { + const executor = new EpicExecutor(mockOrchestrator, 3); + + executor._log('Test message', 'info'); + executor._log('Error message', 'error'); + + expect(executor.logs).toHaveLength(2); + expect(executor.logs[0].message).toBe('Test message'); + expect(executor.logs[1].level).toBe('error'); + }); + + it('should calculate duration', () => { + const executor = new EpicExecutor(mockOrchestrator, 3); + executor.startTime = new Date(Date.now() - 5000).toISOString(); + executor.endTime = new Date().toISOString(); + + const duration = executor._getDuration(); + const durationMs = executor._getDurationMs(); + + expect(duration).toBe('5s'); + expect(durationMs).toBeGreaterThanOrEqual(4900); + expect(durationMs).toBeLessThanOrEqual(5100); + }); + }); + + describe('Epic3Executor - Spec Pipeline (AC2)', () => { + let executor; + + beforeEach(() => { + executor = new Epic3Executor(mockOrchestrator); + }); + + it('should create instance with epic number 3', () => { + expect(executor.epicNum).toBe(3); + }); + + it('should execute and return spec path', async () => { + const result = await executor.execute({ + storyId: 'TEST-001', + source: 'story', + }); + + expect(result.success).toBe(true); + expect(result.specPath).toBeDefined(); + expect(result.complexity).toBeDefined(); + }); + + it('should 
fail without storyId', async () => { + const result = await executor.execute({}); + + expect(result.success).toBe(false); + expect(result.error).toContain('storyId'); + }); + + it('should reuse existing spec', async () => { + // Create existing spec + const specPath = path.join(tempDir, 'docs', 'stories', 'TEST-001', 'spec.md'); + await fs.ensureDir(path.dirname(specPath)); + await fs.writeFile(specPath, '# Existing Spec'); + + const result = await executor.execute({ + storyId: 'TEST-001', + source: 'story', + }); + + expect(result.success).toBe(true); + expect(result.reused).toBe(true); + }); + }); + + describe('Epic4Executor - Execution Engine (AC3)', () => { + let executor; + + beforeEach(() => { + executor = new Epic4Executor(mockOrchestrator); + }); + + it('should create instance with epic number 4', () => { + expect(executor.epicNum).toBe(4); + }); + + it('should execute and return progress', async () => { + const result = await executor.execute({ + storyId: 'TEST-001', + specPath: '/path/to/spec.md', + complexity: 'STANDARD', + }); + + expect(result.success).toBe(true); + expect(result.progress).toBeDefined(); + expect(result.planPath).toBeDefined(); + }); + + it('should create stub plan if not exists', async () => { + const result = await executor.execute({ + storyId: 'TEST-001', + specPath: '/path/to/spec.md', + }); + + const planPath = path.join( + tempDir, + 'docs', + 'stories', + 'TEST-001', + 'plan', + 'implementation.yaml', + ); + expect(await fs.pathExists(planPath)).toBe(true); + }); + }); + + describe('Epic5Executor - Recovery System (AC4)', () => { + let executor; + + beforeEach(() => { + executor = new Epic5Executor(mockOrchestrator); + }); + + it('should create instance with epic number 5', () => { + expect(executor.epicNum).toBe(5); + }); + + it('should execute recovery for failed epic', async () => { + const result = await executor.execute({ + storyId: 'TEST-001', + failedEpic: 4, + error: new Error('Test failure'), + attempts: 0, + }); + + 
expect(result.success).toBe(true); + expect(result.strategy).toBeDefined(); + expect(result.shouldRetry).toBeDefined(); + }); + + it('should escalate after max attempts', async () => { + const result = await executor.execute({ + storyId: 'TEST-001', + failedEpic: 4, + error: new Error('Persistent failure'), + attempts: 5, + }); + + expect(result.strategy).toBe(RecoveryStrategy.ESCALATE_TO_HUMAN); + expect(result.escalated).toBe(true); + }); + + it('should create escalation report', async () => { + const result = await executor.execute({ + storyId: 'TEST-001', + failedEpic: 4, + error: new Error('Critical failure'), + attempts: 5, + }); + + expect(result.recoveryResult.reportPath).toBeDefined(); + expect(await fs.pathExists(result.recoveryResult.reportPath)).toBe(true); + }); + }); + + describe('Epic6Executor - QA Loop (AC5)', () => { + let executor; + + beforeEach(() => { + executor = new Epic6Executor(mockOrchestrator); + }); + + it('should create instance with epic number 6', () => { + expect(executor.epicNum).toBe(6); + }); + + it('should execute QA loop and return verdict', async () => { + const result = await executor.execute({ + storyId: 'TEST-001', + buildResult: {}, + testResults: [], + }); + + expect(result.success).toBe(true); + expect(result.verdict).toBeDefined(); + expect(result.iterations).toBeDefined(); + }); + + it('should generate QA report', async () => { + const result = await executor.execute({ + storyId: 'TEST-001', + buildResult: {}, + }); + + expect(result.reportPath).toBeDefined(); + expect(await fs.pathExists(result.reportPath)).toBe(true); + }); + }); + + describe('Factory Functions', () => { + it('should create executor with createExecutor()', () => { + const executor = createExecutor(3, mockOrchestrator); + + expect(executor).toBeInstanceOf(Epic3Executor); + expect(executor.epicNum).toBe(3); + }); + + it('should throw for unknown epic number', () => { + expect(() => createExecutor(99, mockOrchestrator)).toThrow('No executor found'); + 
}); + + it('should check executor existence with hasExecutor()', () => { + expect(hasExecutor(3)).toBe(true); + expect(hasExecutor(6)).toBe(true); + expect(hasExecutor(99)).toBe(false); + }); + + it('should return available epics', () => { + const epics = getAvailableEpics(); + + expect(epics).toContain(3); + expect(epics).toContain(4); + expect(epics).toContain(5); + expect(epics).toContain(6); + }); + }); + + describe('Enums', () => { + it('should export ExecutionStatus enum', () => { + expect(ExecutionStatus.PENDING).toBe('pending'); + expect(ExecutionStatus.RUNNING).toBe('running'); + expect(ExecutionStatus.SUCCESS).toBe('success'); + expect(ExecutionStatus.FAILED).toBe('failed'); + }); + + it('should export RecoveryStrategy enum', () => { + expect(RecoveryStrategy.RETRY_SAME_APPROACH).toBe('retry_same_approach'); + expect(RecoveryStrategy.ESCALATE_TO_HUMAN).toBe('escalate_to_human'); + }); + + it('should export QAVerdict enum', () => { + expect(QAVerdict.APPROVED).toBe('approved'); + expect(QAVerdict.NEEDS_REVISION).toBe('needs_revision'); + expect(QAVerdict.BLOCKED).toBe('blocked'); + }); + }); + + describe('EXECUTOR_MAP', () => { + it('should map all epic numbers to executor classes', () => { + expect(EXECUTOR_MAP[3]).toBe(Epic3Executor); + expect(EXECUTOR_MAP[4]).toBe(Epic4Executor); + expect(EXECUTOR_MAP[5]).toBe(Epic5Executor); + expect(EXECUTOR_MAP[6]).toBe(Epic6Executor); + }); + }); +}); + +describe('Standardized Results (AC7)', () => { + let tempDir; + let mockOrchestrator; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `executor-results-test-${Date.now()}`); + await fs.ensureDir(tempDir); + + mockOrchestrator = { + projectRoot: tempDir, + storyId: 'TEST-001', + _log: jest.fn(), + }; + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + it('all executors should return consistent result structure', async () => { + const executors = [ + new Epic3Executor(mockOrchestrator), + new Epic4Executor(mockOrchestrator), + new 
Epic5Executor(mockOrchestrator), + new Epic6Executor(mockOrchestrator), + ]; + + const contexts = [ + { storyId: 'TEST-001', source: 'story' }, + { storyId: 'TEST-001', specPath: '/path' }, + { storyId: 'TEST-001', failedEpic: 3, error: 'test', attempts: 0 }, + { storyId: 'TEST-001', buildResult: {} }, + ]; + + for (let i = 0; i < executors.length; i++) { + const result = await executors[i].execute(contexts[i]); + + // All results should have these fields + expect(result).toHaveProperty('epicNum'); + expect(result).toHaveProperty('status'); + expect(result).toHaveProperty('success'); + expect(result).toHaveProperty('artifacts'); + expect(result).toHaveProperty('errors'); + expect(result).toHaveProperty('duration'); + } + }); +}); + +``` + +================================================== +📄 tests/core/master-orchestrator.test.js +================================================== +```js +/** + * Master Orchestrator Tests + * + * Story: 0.1 - Master Orchestrator Core + * Epic: Epic 0 - ADE Master Orchestrator + * + * Tests the core functionality of the MasterOrchestrator class. 
+ * + * @author @dev (Dex) + * @version 1.0.0 + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +const MasterOrchestrator = require('../../.aios-core/core/orchestration/master-orchestrator'); +const { OrchestratorState, EpicStatus, EPIC_CONFIG } = MasterOrchestrator; + +describe('MasterOrchestrator', () => { + let tempDir; + let orchestrator; + + beforeEach(async () => { + // Create temp directory for tests + tempDir = path.join(os.tmpdir(), `master-orchestrator-test-${Date.now()}`); + await fs.ensureDir(tempDir); + + // Create .aios/dashboard directory for dashboard integration + await fs.ensureDir(path.join(tempDir, '.aios', 'dashboard')); + + // Create a minimal package.json for tech stack detection + await fs.writeJson(path.join(tempDir, 'package.json'), { + name: 'test-project', + dependencies: { + react: '^18.0.0', + }, + devDependencies: { + jest: '^29.0.0', + }, + }); + + orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + maxRetries: 2, + autoRecovery: false, + }); + }); + + afterEach(async () => { + // Cleanup temp directory + await fs.remove(tempDir); + }); + + describe('Constructor (AC2)', () => { + it('should create instance with default options', () => { + const orch = new MasterOrchestrator(tempDir); + + expect(orch.projectRoot).toBe(tempDir); + expect(orch.storyId).toBeNull(); + expect(orch.maxRetries).toBe(3); + expect(orch.autoRecovery).toBe(true); + expect(orch.state).toBe(OrchestratorState.INITIALIZED); + }); + + it('should create instance with custom options', () => { + expect(orchestrator.projectRoot).toBe(tempDir); + expect(orchestrator.storyId).toBe('TEST-001'); + expect(orchestrator.maxRetries).toBe(2); + expect(orchestrator.autoRecovery).toBe(false); + }); + + it('should initialize execution state', () => { + expect(orchestrator.executionState).toBeDefined(); + expect(orchestrator.executionState.storyId).toBe('TEST-001'); + 
expect(orchestrator.executionState.epics).toBeDefined(); + expect(Object.keys(orchestrator.executionState.epics)).toHaveLength(4); + }); + + it('should have all epics in pending state initially', () => { + for (const epicNum of [3, 4, 5, 6]) { + expect(orchestrator.executionState.epics[epicNum]).toBeDefined(); + expect(orchestrator.executionState.epics[epicNum].status).toBe(EpicStatus.PENDING); + } + }); + }); + + describe('State Machine (AC6)', () => { + it('should start in INITIALIZED state', () => { + expect(orchestrator.state).toBe(OrchestratorState.INITIALIZED); + }); + + it('should transition to READY after initialize()', async () => { + await orchestrator.initialize(); + expect(orchestrator.state).toBe(OrchestratorState.READY); + }); + + it('should emit stateChange event on transition', async () => { + const stateChanges = []; + orchestrator.on('stateChange', (change) => { + stateChanges.push(change); + }); + + await orchestrator.initialize(); + + expect(stateChanges.length).toBeGreaterThan(0); + expect(stateChanges[0].from).toBe(OrchestratorState.INITIALIZED); + expect(stateChanges[0].to).toBe(OrchestratorState.READY); + }); + + it('should have all valid state enum values', () => { + expect(OrchestratorState.INITIALIZED).toBe('initialized'); + expect(OrchestratorState.READY).toBe('ready'); + expect(OrchestratorState.IN_PROGRESS).toBe('in_progress'); + expect(OrchestratorState.BLOCKED).toBe('blocked'); + expect(OrchestratorState.COMPLETE).toBe('complete'); + }); + }); + + describe('TechStackDetector Integration (AC7)', () => { + it('should detect tech stack during initialize()', async () => { + await orchestrator.initialize(); + + expect(orchestrator.executionState.techStackProfile).toBeDefined(); + expect(orchestrator.executionState.techStackProfile.hasFrontend).toBe(true); + expect(orchestrator.executionState.techStackProfile.frontend.framework).toBe('react'); + expect(orchestrator.executionState.techStackProfile.hasTests).toBe(true); + }); + + it('should 
store tech stack profile in state', async () => { + await orchestrator.initialize(); + + const profile = orchestrator.executionState.techStackProfile; + expect(profile.detectedAt).toBeDefined(); + expect(profile.confidence).toBeGreaterThan(0); + }); + }); + + describe('executeEpic (AC4)', () => { + beforeEach(async () => { + await orchestrator.initialize(); + }); + + it('should execute a single epic', async () => { + const result = await orchestrator.executeEpic(3); + + expect(result).toBeDefined(); + expect(result.epicNum).toBe(3); + // Stub executor returns success + expect(result.success).toBe(true); + }); + + it('should update epic state during execution', async () => { + await orchestrator.executeEpic(3); + + expect(orchestrator.executionState.epics[3].status).toBe(EpicStatus.COMPLETED); + expect(orchestrator.executionState.epics[3].startedAt).toBeDefined(); + expect(orchestrator.executionState.epics[3].completedAt).toBeDefined(); + }); + + it('should emit epicStart and epicComplete events', async () => { + const events = []; + orchestrator.on('epicStart', (e) => events.push({ type: 'start', ...e })); + orchestrator.on('epicComplete', (e) => events.push({ type: 'complete', ...e })); + + await orchestrator.executeEpic(4); + + expect(events).toHaveLength(2); + expect(events[0].type).toBe('start'); + expect(events[0].epicNum).toBe(4); + expect(events[1].type).toBe('complete'); + expect(events[1].epicNum).toBe(4); + }); + + it('should throw error for unknown epic', async () => { + await expect(orchestrator.executeEpic(99)).rejects.toThrow('Unknown epic number'); + }); + }); + + describe('executeFullPipeline (AC3)', () => { + beforeEach(async () => { + await orchestrator.initialize(); + }); + + it('should execute epics in sequence 3→4→6', async () => { + const executedEpics = []; + orchestrator.on('epicStart', (e) => executedEpics.push(e.epicNum)); + + const result = await orchestrator.executeFullPipeline(); + + // Epic 5 (Recovery) is on-demand, not in sequence + 
expect(executedEpics).toEqual([3, 4, 6]); + expect(result.epics.executed).toContain(3); + expect(result.epics.executed).toContain(4); + expect(result.epics.executed).toContain(6); + }); + + it('should transition to COMPLETE on success', async () => { + await orchestrator.executeFullPipeline(); + expect(orchestrator.state).toBe(OrchestratorState.COMPLETE); + }); + + it('should return finalized result', async () => { + const result = await orchestrator.executeFullPipeline(); + + expect(result.workflowId).toBeDefined(); + expect(result.storyId).toBe('TEST-001'); + expect(result.success).toBe(true); + expect(result.duration).toBeDefined(); + expect(result.techStack).toBeDefined(); + }); + }); + + describe('resumeFromEpic (AC5)', () => { + beforeEach(async () => { + await orchestrator.initialize(); + // Execute first two epics + await orchestrator.executeEpic(3); + await orchestrator.executeEpic(4); + }); + + it('should reset epics from specified point', async () => { + // Mark epic 4 as we want to resume from there + expect(orchestrator.executionState.epics[3].status).toBe(EpicStatus.COMPLETED); + expect(orchestrator.executionState.epics[4].status).toBe(EpicStatus.COMPLETED); + + await orchestrator.resumeFromEpic(4); + + // Epic 3 should still be completed (before resume point) + expect(orchestrator.executionState.epics[3].status).toBe(EpicStatus.COMPLETED); + // Epic 4+ should be completed again after resume + expect(orchestrator.executionState.epics[4].status).toBe(EpicStatus.COMPLETED); + }); + + it('should continue full pipeline from resume point', async () => { + const executedEpics = []; + orchestrator.on('epicStart', (e) => executedEpics.push(e.epicNum)); + + await orchestrator.resumeFromEpic(6); + + // Should only execute 6 since 3,4 were already done + // and reset happens for epics >= fromEpic + expect(executedEpics).toContain(6); + }); + }); + + describe('State Persistence (Story 0.2)', () => { + describe('saveState (AC1, AC3)', () => { + it('should save 
state to correct path', async () => { + await orchestrator.initialize(); + await orchestrator.executeEpic(3); + + const statePath = orchestrator.statePath; + // Normalize path separators for cross-platform compatibility (Windows uses \, Unix uses /) + const normalizedPath = statePath.replace(/\\/g, '/'); + expect(normalizedPath).toContain('.aios/master-orchestrator/TEST-001.json'); + expect(await fs.pathExists(statePath)).toBe(true); + }); + + it('should return success status', async () => { + await orchestrator.initialize(); + const result = await orchestrator.saveState(); + expect(result).toBe(true); + }); + }); + + describe('State Schema (AC2, AC6, AC7)', () => { + it('should include all required fields', async () => { + await orchestrator.initialize(); + await orchestrator.executeEpic(3); + + const savedState = await fs.readJson(orchestrator.statePath); + + // AC2: currentEpic, completedEpics, failedEpics, context + expect(savedState.currentEpic).toBeDefined(); + expect(savedState.completedEpics).toContain(3); + expect(savedState.failedEpics).toBeDefined(); + expect(savedState.context).toBeDefined(); + + // AC6: timestamps + expect(savedState.timestamps).toBeDefined(); + expect(savedState.timestamps.startedAt).toBeDefined(); + expect(savedState.timestamps.updatedAt).toBeDefined(); + expect(savedState.timestamps.savedAt).toBeDefined(); + + // AC7: techStackProfile + expect(savedState.techStackProfile).toBeDefined(); + expect(savedState.techStackProfile.hasFrontend).toBe(true); + }); + + it('should include schema version', async () => { + await orchestrator.initialize(); + const savedState = await fs.readJson(orchestrator.statePath); + expect(savedState.schemaVersion).toBe('1.0'); + }); + + it('should include context with options', async () => { + await orchestrator.initialize(); + const savedState = await fs.readJson(orchestrator.statePath); + + expect(savedState.context.maxRetries).toBe(2); + expect(savedState.context.autoRecovery).toBe(false); + }); + }); + + 
describe('loadState (AC4)', () => { + it('should load state for current storyId', async () => { + await orchestrator.initialize(); + await orchestrator.executeEpic(3); + + const loadedState = await orchestrator.loadState(); + expect(loadedState).not.toBeNull(); + expect(loadedState.storyId).toBe('TEST-001'); + }); + + it('should load state for specific storyId', async () => { + await orchestrator.initialize(); + await orchestrator.executeEpic(3); + + // Create another orchestrator with different ID + const orch2 = new MasterOrchestrator(tempDir, { storyId: 'TEST-002' }); + + // Load state from first orchestrator + const loadedState = await orch2.loadState('TEST-001'); + expect(loadedState).not.toBeNull(); + expect(loadedState.storyId).toBe('TEST-001'); + }); + + it('should return null for non-existent state', async () => { + const state = await orchestrator.loadState('NON-EXISTENT'); + expect(state).toBeNull(); + }); + }); + + describe('Resume Detection (AC5)', () => { + it('should load existing state on initialize', async () => { + await orchestrator.initialize(); + await orchestrator.executeEpic(3); + + const orchestrator2 = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + await orchestrator2.initialize(); + expect(orchestrator2.executionState.epics['3'].status).toBe(EpicStatus.COMPLETED); + }); + + it('should find latest valid state', async () => { + await orchestrator.initialize(); + await orchestrator.executeEpic(3); + + // Create another orchestrator + const orch2 = new MasterOrchestrator(tempDir, { storyId: 'TEST-002' }); + await orch2.initialize(); + + // Find latest should return most recent + const latestState = await orchestrator.findLatestValidState(); + expect(latestState).not.toBeNull(); + expect(['TEST-001', 'TEST-002']).toContain(latestState.storyId); + }); + + it('should not resume completed states', async () => { + await orchestrator.initialize(); + await orchestrator.executeFullPipeline(); + + // State should be complete + const 
savedState = await fs.readJson(orchestrator.statePath); + expect(savedState.status).toBe(OrchestratorState.COMPLETE); + + // New orchestrator should not load completed state + const orch2 = new MasterOrchestrator(tempDir, { storyId: 'TEST-001' }); + await orch2.initialize(); + + // Should start fresh since previous was complete + expect(orch2.executionState.epics['3'].status).toBe(EpicStatus.PENDING); + }); + }); + + describe('State Management', () => { + it('should clear state', async () => { + await orchestrator.initialize(); + await orchestrator.saveState(); + + expect(await fs.pathExists(orchestrator.statePath)).toBe(true); + + await orchestrator.clearState(); + expect(await fs.pathExists(orchestrator.statePath)).toBe(false); + }); + + it('should list saved states', async () => { + await orchestrator.initialize(); + await orchestrator.executeEpic(3); + + const orch2 = new MasterOrchestrator(tempDir, { storyId: 'TEST-002' }); + await orch2.initialize(); + + const states = await orchestrator.listSavedStates(); + expect(states.length).toBeGreaterThanOrEqual(2); + expect(states[0].storyId).toBeDefined(); + expect(states[0].progress).toBeDefined(); + expect(states[0].resumable).toBeDefined(); + }); + + it('should calculate progress from state', async () => { + await orchestrator.initialize(); + await orchestrator.executeEpic(3); + await orchestrator.executeEpic(4); + + const states = await orchestrator.listSavedStates(); + const testState = states.find((s) => s.storyId === 'TEST-001'); + + expect(testState.progress).toBe(67); // 2 of 3 epics + }); + }); + }); + + describe('Progress Tracking', () => { + it('should calculate progress percentage', async () => { + await orchestrator.initialize(); + + expect(orchestrator.getProgressPercentage()).toBe(0); + + await orchestrator.executeEpic(3); + expect(orchestrator.getProgressPercentage()).toBe(33); // 1 of 3 non-on-demand epics + + await orchestrator.executeEpic(4); + 
expect(orchestrator.getProgressPercentage()).toBe(67); // 2 of 3 + }); + + it('should return status summary', async () => { + await orchestrator.initialize(); + await orchestrator.executeEpic(3); + + const status = orchestrator.getStatus(); + + expect(status.state).toBe(OrchestratorState.READY); + expect(status.storyId).toBe('TEST-001'); + expect(status.progress).toBe(33); + expect(status.epics['3'].status).toBe(EpicStatus.COMPLETED); + }); + }); + + describe('EPIC_CONFIG', () => { + it('should have configuration for all epics', () => { + expect(EPIC_CONFIG[3]).toBeDefined(); + expect(EPIC_CONFIG[4]).toBeDefined(); + expect(EPIC_CONFIG[5]).toBeDefined(); + expect(EPIC_CONFIG[6]).toBeDefined(); + }); + + it('should mark Epic 5 as on-demand', () => { + expect(EPIC_CONFIG[5].onDemand).toBe(true); + expect(EPIC_CONFIG[3].onDemand).toBeUndefined(); + }); + + it('should have correct epic names', () => { + expect(EPIC_CONFIG[3].name).toBe('Spec Pipeline'); + expect(EPIC_CONFIG[4].name).toBe('Execution Engine'); + expect(EPIC_CONFIG[5].name).toBe('Recovery System'); + expect(EPIC_CONFIG[6].name).toBe('QA Loop'); + }); + }); + + describe('Context Building (Story 0.4)', () => { + beforeEach(async () => { + await orchestrator.initialize(); + }); + + // AC1: _buildContextForEpic(epicNum) method created + it('AC1: should have _buildContextForEpic method', () => { + expect(typeof orchestrator._buildContextForEpic).toBe('function'); + }); + + it('should build context with base properties for all epics', async () => { + const context = orchestrator._buildContextForEpic(3); + + expect(context.storyId).toBe('TEST-001'); + expect(context.workflowId).toBeDefined(); + expect(context.techStack).toBeDefined(); + expect(context.projectRoot).toBe(tempDir); + }); + + // AC2: Epic 3 receives: storyId, source, prdPath + it('AC2: Epic 3 should receive storyId, source, prdPath', async () => { + const orch = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + source: 'prd', + prdPath: 
'/path/to/prd.md', + }); + await orch.initialize(); + + const context = orch._buildContextForEpic(3); + + expect(context.storyId).toBe('TEST-001'); + expect(context.source).toBe('prd'); + expect(context.prdPath).toBe('/path/to/prd.md'); + }); + + // AC3: Epic 4 receives: specPath, complexity, requirements, techStack + it('AC3: Epic 4 should receive specPath, complexity, requirements, techStack', async () => { + // Simulate Epic 3 completion + orchestrator.executionState.epics[3] = { + status: EpicStatus.COMPLETED, + result: { + specPath: '/path/to/spec.md', + complexity: 'STANDARD', + requirements: ['REQ-1', 'REQ-2'], + }, + }; + + const context = orchestrator._buildContextForEpic(4); + + expect(context.spec).toBe('/path/to/spec.md'); + expect(context.complexity).toBe('STANDARD'); + expect(context.requirements).toEqual(['REQ-1', 'REQ-2']); + expect(context.techStack).toBeDefined(); + }); + + // AC4: Epic 5 receives: implementationPath, errors, attempts + it('AC4: Epic 5 should receive implementationPath, errors, attempts', async () => { + // Simulate Epic 4 with errors + orchestrator.executionState.epics[4] = { + status: EpicStatus.FAILED, + result: { + implementationPath: '/path/to/impl', + }, + attempts: 2, + }; + orchestrator.executionState.errors = [{ epic: 4, error: 'Build failed' }]; + + const context = orchestrator._buildContextForEpic(5); + + expect(context.implementationPath).toBe('/path/to/impl'); + expect(context.errors).toHaveLength(1); + expect(context.errors[0].error).toBe('Build failed'); + expect(context.attempts).toBe(2); + }); + + // AC5: Epic 6 receives: buildResult, testResults, codeChanges + it('AC5: Epic 6 should receive buildResult, testResults, codeChanges', async () => { + // Simulate Epic 4 completion + orchestrator.executionState.epics[4] = { + status: EpicStatus.COMPLETED, + result: { + success: true, + testResults: [{ name: 'test1', passed: true }], + codeChanges: ['/src/file.js'], + }, + }; + + const context = 
orchestrator._buildContextForEpic(6); + + expect(context.buildResult).toBeDefined(); + expect(context.buildResult.success).toBe(true); + expect(context.testResults).toHaveLength(1); + expect(context.codeChanges).toContain('/src/file.js'); + }); + + // AC7: TechStackProfile injected into all contexts + it('AC7: TechStackProfile should be injected in all contexts', async () => { + const epics = [3, 4, 5, 6, 7]; + + for (const epicNum of epics) { + const context = orchestrator._buildContextForEpic(epicNum); + expect(context.techStack).toBeDefined(); + expect(typeof context.techStack).toBe('object'); + } + }); + + it('should include previousGates in all contexts', async () => { + // Complete Epic 3 + orchestrator.executionState.epics[3] = { + status: EpicStatus.COMPLETED, + result: {}, + }; + + const context = orchestrator._buildContextForEpic(4); + + expect(context.previousGates).toBeDefined(); + expect(context.previousGates).toContain(3); + }); + + it('should collect session insights correctly', async () => { + orchestrator.executionState.startedAt = new Date(Date.now() - 5000).toISOString(); + orchestrator.executionState.errors = [{ epic: 3, error: 'test' }]; + orchestrator.executionState.retryCount = { 3: 2, 4: 1 }; + orchestrator.executionState.epics[3] = { status: EpicStatus.COMPLETED }; + + const insights = orchestrator._collectSessionInsights(); + + expect(insights.duration).toBeGreaterThanOrEqual(4900); + expect(insights.errorsEncountered).toBe(1); + expect(insights.recoveryAttempts).toBe(3); + expect(insights.completedEpics).toBe(1); + }); + }); +}); + +describe('EpicStatus Enum', () => { + it('should have all required status values', () => { + expect(EpicStatus.PENDING).toBe('pending'); + expect(EpicStatus.IN_PROGRESS).toBe('in_progress'); + expect(EpicStatus.COMPLETED).toBe('completed'); + expect(EpicStatus.FAILED).toBe('failed'); + expect(EpicStatus.SKIPPED).toBe('skipped'); + }); +}); + +describe('Module Exports', () => { + it('should export 
MasterOrchestrator class', () => { + expect(MasterOrchestrator).toBeDefined(); + expect(typeof MasterOrchestrator).toBe('function'); + }); + + it('should export OrchestratorState', () => { + expect(OrchestratorState).toBeDefined(); + expect(typeof OrchestratorState).toBe('object'); + }); + + it('should export EpicStatus', () => { + expect(EpicStatus).toBeDefined(); + expect(typeof EpicStatus).toBe('object'); + }); + + it('should export EPIC_CONFIG', () => { + expect(EPIC_CONFIG).toBeDefined(); + expect(typeof EPIC_CONFIG).toBe('object'); + }); +}); + +``` + +================================================== +📄 tests/core/user-profile-audit.test.js +================================================== +```js +/** + * User Profile Audit Tests - Story ACT-2 + * + * Validates: + * - AC1: GreetingPreferenceManager accounts for user_profile (bob forces minimal/named) + * - AC3: validate-user-profile runs during activation pipeline + * - AC4: Bob mode restricts command visibility to key-only across agents + * + * @story ACT-2 - Audit user_profile Impact Across Agents + * @epic EPIC-ACT - Unified Agent Activation Pipeline + */ + +// Mock fs before requiring any modules +jest.mock('fs', () => ({ + readFileSync: jest.fn(), + writeFileSync: jest.fn(), + existsSync: jest.fn(), + copyFileSync: jest.fn(), +})); + +jest.mock('js-yaml', () => ({ + load: jest.fn(), + dump: jest.fn(), +})); + +// Mock config-resolver +jest.mock('../../.aios-core/core/config/config-resolver', () => ({ + resolveConfig: jest.fn(), +})); + +// Mock infrastructure/scripts/validate-user-profile +jest.mock('../../.aios-core/infrastructure/scripts/validate-user-profile', () => ({ + validateUserProfile: jest.fn(), + loadAndValidateUserProfile: jest.fn(), + getUserProfile: jest.fn(), + isBobMode: jest.fn(), + isAdvancedMode: jest.fn(), + VALID_USER_PROFILES: ['bob', 'advanced'], + DEFAULT_USER_PROFILE: 'advanced', +})); + +// Mock other dependencies +jest.mock('../../.aios-core/core/session/context-detector', 
() => { + return jest.fn().mockImplementation(() => ({ + detectSessionType: jest.fn().mockReturnValue('new'), + })); +}); + +jest.mock('../../.aios-core/infrastructure/scripts/git-config-detector', () => { + return jest.fn().mockImplementation(() => ({ + get: jest.fn().mockResolvedValue({ configured: true, type: 'github', branch: 'main' }), + })); +}); + +jest.mock('../../.aios-core/development/scripts/workflow-navigator', () => { + return jest.fn().mockImplementation(() => ({ + getNextSteps: jest.fn().mockReturnValue([]), + detectWorkflowState: jest.fn().mockReturnValue(null), + })); +}); + +jest.mock('../../.aios-core/infrastructure/scripts/project-status-loader', () => ({ + loadProjectStatus: jest.fn().mockResolvedValue({ + branch: 'main', + modifiedFiles: [], + modifiedFilesTotalCount: 0, + recentCommits: [], + currentStory: null, + }), +})); + +jest.mock('../../.aios-core/core/permissions', () => ({ + PermissionMode: jest.fn().mockImplementation(() => ({ + load: jest.fn().mockResolvedValue(undefined), + getBadge: jest.fn().mockReturnValue(''), + })), +})); + +const fs = require('fs'); +const yaml = require('js-yaml'); +const { resolveConfig } = require('../../.aios-core/core/config/config-resolver'); +const { validateUserProfile } = require('../../.aios-core/infrastructure/scripts/validate-user-profile'); + +// Import modules under test after mocks +const GreetingPreferenceManager = require('../../.aios-core/development/scripts/greeting-preference-manager'); +const GreetingBuilder = require('../../.aios-core/development/scripts/greeting-builder'); + +jest.setTimeout(30000); + +// ============================================================================ +// AC1: GreetingPreferenceManager accounts for user_profile +// ============================================================================ + +describe('AC1: GreetingPreferenceManager bob mode override', () => { + let manager; + + beforeEach(() => { + jest.clearAllMocks(); + manager = new 
GreetingPreferenceManager(); + }); + + it('should force "named" when user_profile is bob and preference is auto', () => { + const mockConfig = { + user_profile: 'bob', + agentIdentity: { + greeting: { + preference: 'auto', + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference('bob'); + + expect(result).toBe('named'); + }); + + it('should force "named" when user_profile is bob and preference is archetypal', () => { + const mockConfig = { + user_profile: 'bob', + agentIdentity: { + greeting: { + preference: 'archetypal', + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference('bob'); + + expect(result).toBe('named'); + }); + + it('should allow "minimal" when user_profile is bob and preference is minimal', () => { + const mockConfig = { + user_profile: 'bob', + agentIdentity: { + greeting: { + preference: 'minimal', + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference('bob'); + + expect(result).toBe('minimal'); + }); + + it('should allow "named" when user_profile is bob and preference is named', () => { + const mockConfig = { + user_profile: 'bob', + agentIdentity: { + greeting: { + preference: 'named', + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference('bob'); + + expect(result).toBe('named'); + }); + + it('should NOT restrict preference when user_profile is advanced', () => { + const mockConfig = { + user_profile: 'advanced', + agentIdentity: { + greeting: { + preference: 'archetypal', + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference('advanced'); + + expect(result).toBe('archetypal'); + 
}); + + it('should return auto when advanced and preference is auto', () => { + const mockConfig = { + user_profile: 'advanced', + agentIdentity: { + greeting: { + preference: 'auto', + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference('advanced'); + + expect(result).toBe('auto'); + }); + + it('should read user_profile from config when not passed as parameter', () => { + const mockConfig = { + user_profile: 'bob', + agentIdentity: { + greeting: { + preference: 'auto', + }, + }, + }; + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue(mockConfig); + + const result = manager.getPreference(); // No param + + expect(result).toBe('named'); + }); + + it('should default to auto on config load error regardless of profile param', () => { + fs.readFileSync.mockImplementation(() => { + throw new Error('File not found'); + }); + + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + + const result = manager.getPreference('bob'); + + expect(result).toBe('auto'); + consoleSpy.mockRestore(); + }); +}); + +// ============================================================================ +// AC3: validate-user-profile runs during activation pipeline +// ============================================================================ + +describe('AC3: validate-user-profile integrated into activation pipeline', () => { + let builder; + + beforeEach(() => { + jest.clearAllMocks(); + + // Default: config returns advanced + resolveConfig.mockReturnValue({ + config: { user_profile: 'advanced' }, + }); + + // Default: validation passes + validateUserProfile.mockReturnValue({ + valid: true, + value: 'advanced', + error: null, + warning: null, + }); + + // Mock config loading for GreetingBuilder constructor + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue({ + agentIdentity: { greeting: { preference: 'auto' } }, + }); + + 
builder = new GreetingBuilder(); + }); + + it('should call validateUserProfile during loadUserProfile()', () => { + builder.loadUserProfile(); + + expect(validateUserProfile).toHaveBeenCalledWith('advanced'); + }); + + it('should use validated value from validateUserProfile', () => { + resolveConfig.mockReturnValue({ + config: { user_profile: 'BOB' }, + }); + validateUserProfile.mockReturnValue({ + valid: true, + value: 'bob', + error: null, + warning: null, + }); + + const result = builder.loadUserProfile(); + + expect(result).toBe('bob'); + }); + + it('should fall back to default when validation fails', () => { + resolveConfig.mockReturnValue({ + config: { user_profile: 'invalid_value' }, + }); + validateUserProfile.mockReturnValue({ + valid: false, + value: null, + error: 'Invalid user_profile: "invalid_value"', + }); + + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + const result = builder.loadUserProfile(); + + expect(result).toBe('advanced'); + expect(consoleSpy).toHaveBeenCalledWith( + expect.stringContaining('validation failed'), + ); + consoleSpy.mockRestore(); + }); + + it('should log warning from validation but continue', () => { + resolveConfig.mockReturnValue({ + config: { user_profile: undefined }, + }); + + // loadUserProfile returns default when no user_profile + const result = builder.loadUserProfile(); + expect(result).toBe('advanced'); + }); + + it('should gracefully handle resolveConfig failure', () => { + resolveConfig.mockImplementation(() => { + throw new Error('Config resolution failed'); + }); + + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + const result = builder.loadUserProfile(); + + expect(result).toBe('advanced'); + consoleSpy.mockRestore(); + }); +}); + +// ============================================================================ +// AC4: Bob mode restricts command visibility +// ============================================================================ + +describe('AC4: Bob mode 
command visibility restrictions', () => { + let builder; + + const mockAgentDev = { + id: 'dev', + name: 'Dex', + icon: '\uD83D\uDCBB', + commands: [ + { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show commands' }, + { name: 'develop', visibility: ['full', 'quick'], description: 'Implement story' }, + { name: 'apply-qa-fixes', visibility: ['quick', 'key'], description: 'Apply QA fixes' }, + { name: 'run-tests', visibility: ['quick', 'key'], description: 'Run tests' }, + { name: 'explain', visibility: ['full'], description: 'Explain' }, + { name: 'exit', visibility: ['full', 'quick', 'key'], description: 'Exit' }, + ], + }; + + const mockAgentPm = { + id: 'pm', + name: 'Bob', + icon: '\uD83D\uDCCA', + commands: [ + { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show commands' }, + { name: 'status', visibility: ['full', 'quick', 'key'], description: 'Project status' }, + { name: 'create-epic', visibility: ['full', 'quick'], description: 'Create epic' }, + { name: 'exit', visibility: ['full', 'quick', 'key'], description: 'Exit' }, + ], + }; + + const mockAgentNoVisibility = { + id: 'qa', + name: 'Quinn', + icon: '\u2705', + commands: [ + { name: 'help', description: 'Show commands' }, + { name: 'review', description: 'Review code' }, + { name: 'exit', description: 'Exit' }, + ], + }; + + beforeEach(() => { + jest.clearAllMocks(); + + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue({ + agentIdentity: { greeting: { preference: 'auto' } }, + }); + + builder = new GreetingBuilder(); + }); + + it('should return empty commands for bob mode non-PM agents', () => { + const commands = builder.filterCommandsByVisibility(mockAgentDev, 'new', 'bob'); + expect(commands).toEqual([]); + }); + + it('should return empty commands for bob mode agents without visibility metadata', () => { + const commands = builder.filterCommandsByVisibility(mockAgentNoVisibility, 'new', 'bob'); + expect(commands).toEqual([]); + 
}); + + it('should return full commands for PM agent in bob mode', () => { + const commands = builder.filterCommandsByVisibility(mockAgentPm, 'new', 'bob'); + // PM in bob mode uses normal filtering; 'new' session = 'full' visibility + expect(commands.length).toBeGreaterThan(0); + }); + + it('should return commands with "full" visibility for advanced mode new session', () => { + const commands = builder.filterCommandsByVisibility(mockAgentDev, 'new', 'advanced'); + // 'new' session = 'full' visibility filter + const allHaveFull = commands.every( + (cmd) => cmd.visibility && cmd.visibility.includes('full'), + ); + expect(allHaveFull).toBe(true); + expect(commands.length).toBe(4); // help, develop, explain, exit + }); + + it('should return commands with "quick" visibility for advanced mode existing session', () => { + const commands = builder.filterCommandsByVisibility(mockAgentDev, 'existing', 'advanced'); + const allHaveQuick = commands.every( + (cmd) => cmd.visibility && cmd.visibility.includes('quick'), + ); + expect(allHaveQuick).toBe(true); + expect(commands.length).toBe(5); // help, develop, apply-qa-fixes, run-tests, exit + }); + + it('should return commands with "key" visibility for advanced mode workflow session', () => { + const commands = builder.filterCommandsByVisibility(mockAgentDev, 'workflow', 'advanced'); + const allHaveKey = commands.every( + (cmd) => cmd.visibility && cmd.visibility.includes('key'), + ); + expect(allHaveKey).toBe(true); + expect(commands.length).toBe(4); // help, apply-qa-fixes, run-tests, exit + }); + + it('should default userProfile to advanced when not provided', () => { + const commands = builder.filterCommandsByVisibility(mockAgentDev, 'new'); + // Should behave as advanced + expect(commands.length).toBeGreaterThan(0); + }); + + it('should fall back to first 12 commands for agents without visibility metadata in advanced mode', () => { + const commands = builder.filterCommandsByVisibility(mockAgentNoVisibility, 'new', 
'advanced'); + // No visibility metadata, falls back to first 12 + expect(commands).toEqual(mockAgentNoVisibility.commands.slice(0, 12)); + }); +}); + +// ============================================================================ +// Integration: Bob mode full greeting flow +// ============================================================================ + +describe('Integration: Bob mode greeting flow', () => { + let builder; + + const mockAgentDev = { + id: 'dev', + name: 'Dex', + icon: '\uD83D\uDCBB', + persona_profile: { + archetype: 'Builder', + communication: { + greeting_levels: { + minimal: '\uD83D\uDCBB dev Agent ready', + named: "\uD83D\uDCBB Dex (Builder) ready. Let's build something great!", + archetypal: '\uD83D\uDCBB Dex the Builder ready to innovate!', + }, + signature_closing: '-- Dex', + }, + }, + persona: { role: 'Full Stack Developer' }, + commands: [ + { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show commands' }, + { name: 'develop', visibility: ['full', 'quick'], description: 'Implement story' }, + { name: 'exit', visibility: ['full', 'quick', 'key'], description: 'Exit' }, + ], + }; + + const mockAgentPm = { + id: 'pm', + name: 'Bob', + icon: '\uD83D\uDCCA', + persona_profile: { + archetype: 'Conductor', + communication: { + greeting_levels: { + minimal: '\uD83D\uDCCA pm Agent ready', + named: '\uD83D\uDCCA Bob (Conductor) ready.', + archetypal: '\uD83D\uDCCA Bob the Conductor ready to orchestrate!', + }, + signature_closing: '-- Bob', + }, + }, + persona: { role: 'Product Manager' }, + commands: [ + { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show commands' }, + { name: 'exit', visibility: ['full', 'quick', 'key'], description: 'Exit' }, + ], + }; + + beforeEach(() => { + jest.clearAllMocks(); + + // Config: bob mode with auto preference + fs.readFileSync.mockReturnValue('yaml content'); + yaml.load.mockReturnValue({ + user_profile: 'bob', + agentIdentity: { greeting: { preference: 'auto' } }, 
+ }); + + resolveConfig.mockReturnValue({ + config: { user_profile: 'bob' }, + }); + + validateUserProfile.mockReturnValue({ + valid: true, + value: 'bob', + error: null, + warning: null, + }); + + builder = new GreetingBuilder(); + }); + + it('should produce named greeting for non-PM agent in bob mode', async () => { + const greeting = await builder.buildGreeting(mockAgentDev, {}); + + // Bob mode forces named preference for non-PM + expect(greeting).toContain('Dex'); + expect(greeting).toContain('*help'); + }); + + it('should use contextual greeting for PM agent in bob mode', async () => { + const greeting = await builder.buildGreeting(mockAgentPm, {}); + + // PM bypasses bob restriction, auto preference used + // Should get full contextual greeting (not just fixed named) + expect(greeting).toBeTruthy(); + expect(greeting).toContain('Bob'); + }); +}); + +// ============================================================================ +// validate-user-profile standalone tests +// ============================================================================ + +describe('validate-user-profile standalone', () => { + // Use unmocked version for these tests + const actualValidateUserProfile = jest.requireActual( + '../../.aios-core/infrastructure/scripts/validate-user-profile', + ).validateUserProfile; + + it('should validate "bob" as valid', () => { + const result = actualValidateUserProfile('bob'); + expect(result.valid).toBe(true); + expect(result.value).toBe('bob'); + }); + + it('should validate "advanced" as valid', () => { + const result = actualValidateUserProfile('advanced'); + expect(result.valid).toBe(true); + expect(result.value).toBe('advanced'); + }); + + it('should normalize case: "BOB" -> "bob"', () => { + const result = actualValidateUserProfile('BOB'); + expect(result.valid).toBe(true); + expect(result.value).toBe('bob'); + }); + + it('should normalize case: "ADVANCED" -> "advanced"', () => { + const result = actualValidateUserProfile('ADVANCED'); + 
expect(result.valid).toBe(true); + expect(result.value).toBe('advanced'); + }); + + it('should reject invalid values', () => { + const result = actualValidateUserProfile('invalid'); + expect(result.valid).toBe(false); + expect(result.error).toContain('Invalid user_profile'); + }); + + it('should handle null with warning and default', () => { + const result = actualValidateUserProfile(null); + expect(result.valid).toBe(true); + expect(result.value).toBe('advanced'); + expect(result.warning).toBeTruthy(); + }); + + it('should handle undefined with warning and default', () => { + const result = actualValidateUserProfile(undefined); + expect(result.valid).toBe(true); + expect(result.value).toBe('advanced'); + expect(result.warning).toBeTruthy(); + }); + + it('should reject non-string values', () => { + const result = actualValidateUserProfile(123); + expect(result.valid).toBe(false); + expect(result.error).toContain('must be a string'); + }); +}); + +``` + +================================================== +📄 tests/core/subagent-dispatcher.test.js +================================================== +```js +/** + * Subagent Dispatcher - Test Suite + * Story EXC-1, AC7 - subagent-dispatcher.js coverage + * + * Tests: constructor, resolveAgent, dispatch, enrichContext, + * buildPrompt, extractModifiedFiles, isRelevantGotcha, + * agent mapping, logging, formatStatus + */ + +const { + createMockMemoryQuery, + createMockGotchasMemory, + collectEvents, +} = require('./execution-test-helpers'); + +// Mock gotchas-memory (exists but exports object, not constructor directly) +jest.mock('../../.aios-core/core/memory/gotchas-memory', () => { throw new Error('mocked'); }); + +const { SubagentDispatcher } = require('../../.aios-core/core/execution/subagent-dispatcher'); + +describe('SubagentDispatcher', () => { + // ── Constructor ───────────────────────────────────────────────────── + + describe('Constructor', () => { + test('creates with defaults', () => { + const sd = new 
SubagentDispatcher(); + expect(sd.defaultAgent).toBe('@dev'); + expect(sd.maxRetries).toBe(2); + expect(sd.retryDelay).toBe(2000); + expect(sd.agentMapping.database).toBe('@data-engineer'); + expect(sd.agentMapping.test).toBe('@qa'); + }); + + test('accepts custom config', () => { + const sd = new SubagentDispatcher({ + defaultAgent: '@qa', + maxRetries: 5, + agentMapping: { custom: '@custom' }, + }); + expect(sd.defaultAgent).toBe('@qa'); + expect(sd.maxRetries).toBe(5); + expect(sd.agentMapping.custom).toBe('@custom'); + }); + + test('extends EventEmitter', () => { + const sd = new SubagentDispatcher(); + expect(typeof sd.on).toBe('function'); + }); + + test('accepts injected memory dependencies', () => { + const mq = createMockMemoryQuery(); + const gm = createMockGotchasMemory(); + const sd = new SubagentDispatcher({ memoryQuery: mq, gotchasMemory: gm }); + expect(sd.memoryQuery).toBe(mq); + expect(sd.gotchasMemory).toBe(gm); + }); + }); + + // ── resolveAgent ────────────────────────────────────────────────────── + + describe('resolveAgent', () => { + let sd; + + beforeEach(() => { + sd = new SubagentDispatcher(); + }); + + test('uses explicit agent from task', () => { + expect(sd.resolveAgent({ agent: '@qa' })).toBe('@qa'); + }); + + test('adds @ prefix to agent name', () => { + expect(sd.resolveAgent({ agent: 'dev' })).toBe('@dev'); + }); + + test('resolves from task type', () => { + expect(sd.resolveAgent({ type: 'database', description: '' })).toBe('@data-engineer'); + expect(sd.resolveAgent({ type: 'test', description: '' })).toBe('@qa'); + expect(sd.resolveAgent({ type: 'deploy', description: '' })).toBe('@devops'); + }); + + test('resolves from task tags', () => { + expect(sd.resolveAgent({ tags: ['testing', 'coverage'], description: '' })).toBe('@qa'); + }); + + test('infers from description', () => { + expect(sd.resolveAgent({ description: 'Create database migration' })).toBe('@data-engineer'); + expect(sd.resolveAgent({ description: 'Write tests for 
user service' })).toBe('@qa'); + expect(sd.resolveAgent({ description: 'Deploy to production' })).toBe('@devops'); + expect(sd.resolveAgent({ description: 'Document API endpoints' })).toBe('@pm'); + }); + + test('falls back to default agent', () => { + expect(sd.resolveAgent({ description: 'Do something generic' })).toBe('@dev'); + }); + }); + + // ── dispatch ────────────────────────────────────────────────────────── + + describe('dispatch', () => { + test('emits dispatch_started event', async () => { + const sd = new SubagentDispatcher(); + sd.maxRetries = 0; + sd.retryDelay = 0; + sd.spawnSubagent = jest.fn().mockRejectedValue(new Error('fail')); + const events = collectEvents(sd, ['dispatch_started']); + + await sd.dispatch({ id: 't1', description: 'Test task' }); + + expect(events.count('dispatch_started')).toBe(1); + }); + + test('returns failure after all retries fail', async () => { + const sd = new SubagentDispatcher({ maxRetries: 1, retryDelay: 0 }); + sd.sleep = () => Promise.resolve(); + sd.spawnSubagent = jest.fn().mockRejectedValue(new Error('spawn failed')); + + const result = await sd.dispatch({ id: 't1', description: 'Test' }); + + expect(result.success).toBe(false); + expect(result.error).toBe('spawn failed'); + expect(sd.spawnSubagent).toHaveBeenCalledTimes(2); // 1 + 1 retry + }); + + test('succeeds on retry', async () => { + const sd = new SubagentDispatcher({ maxRetries: 2, retryDelay: 0 }); + sd.sleep = () => Promise.resolve(); + + let calls = 0; + sd.spawnSubagent = jest.fn().mockImplementation(() => { + calls++; + if (calls === 1) throw new Error('temp fail'); + return Promise.resolve({ success: true, output: 'done', filesModified: ['a.js'] }); + }); + + const result = await sd.dispatch({ id: 't1', description: 'Test' }); + + expect(result.success).toBe(true); + expect(result.output).toBe('done'); + expect(result.filesModified).toEqual(['a.js']); + }); + + test('emits dispatch_completed on success', async () => { + const sd = new 
SubagentDispatcher(); + sd.spawnSubagent = jest.fn().mockResolvedValue({ success: true, output: 'ok' }); + + const events = collectEvents(sd, ['dispatch_completed']); + await sd.dispatch({ id: 't1', description: 'Test' }); + + expect(events.count('dispatch_completed')).toBe(1); + }); + + test('emits dispatch_failed after all retries', async () => { + const sd = new SubagentDispatcher({ maxRetries: 0 }); + sd.spawnSubagent = jest.fn().mockRejectedValue(new Error('fail')); + + const events = collectEvents(sd, ['dispatch_failed']); + await sd.dispatch({ id: 't1', description: 'Test' }); + + expect(events.count('dispatch_failed')).toBe(1); + }); + }); + + // ── enrichContext ───────────────────────────────────────────────────── + + describe('enrichContext', () => { + test('returns base context when no memory', async () => { + const sd = new SubagentDispatcher({ memoryQuery: null, gotchasMemory: null }); + const result = await sd.enrichContext({ id: 't1', description: 'test' }, { base: true }); + expect(result.base).toBe(true); + expect(result.projectContext).toBeDefined(); + }); + + test('enriches with memory when available', async () => { + const mq = createMockMemoryQuery({ + getContextForAgent: jest.fn().mockResolvedValue({ + relevantMemory: [{ type: 'pattern', content: 'use hooks' }], + suggestedPatterns: [{ name: 'hooks-pattern' }], + }), + }); + const sd = new SubagentDispatcher({ memoryQuery: mq }); + const result = await sd.enrichContext({ id: 't1', description: 'test' }, {}); + expect(result.memory.length).toBe(1); + expect(result.patterns.length).toBe(1); + }); + + test('handles memory query errors gracefully', async () => { + const mq = createMockMemoryQuery({ + getContextForAgent: jest.fn().mockRejectedValue(new Error('query failed')), + }); + const sd = new SubagentDispatcher({ memoryQuery: mq }); + const result = await sd.enrichContext({ id: 't1', description: 'test' }, {}); + // Should not throw, just skip memory + 
expect(result.projectContext).toBeDefined(); + }); + }); + + // ── buildPrompt ─────────────────────────────────────────────────────── + + describe('buildPrompt', () => { + test('includes agent and task info', () => { + const sd = new SubagentDispatcher(); + const prompt = sd.buildPrompt('@dev', { + id: 't1', + description: 'Build feature X', + acceptanceCriteria: ['AC1', 'AC2'], + files: ['src/app.js'], + }, {}); + + expect(prompt).toContain('@dev'); + expect(prompt).toContain('Build feature X'); + expect(prompt).toContain('AC1'); + expect(prompt).toContain('src/app.js'); + }); + + test('includes gotchas and patterns from context', () => { + const sd = new SubagentDispatcher(); + const prompt = sd.buildPrompt('@dev', { id: 't1', description: 'test' }, { + gotchas: [{ pattern: 'avoid X', workaround: 'use Y' }], + patterns: [{ name: 'Pattern A', description: 'Desc' }], + }); + + expect(prompt).toContain('avoid X'); + expect(prompt).toContain('Pattern A'); + }); + }); + + // ── extractModifiedFiles ────────────────────────────────────────────── + + describe('extractModifiedFiles', () => { + test('extracts files from output', () => { + const sd = new SubagentDispatcher(); + const output = "Created `src/app.js` and modified 'lib/utils.ts'"; + const files = sd.extractModifiedFiles(output); + expect(files.length).toBeGreaterThanOrEqual(1); + }); + + test('returns empty for empty output', () => { + const sd = new SubagentDispatcher(); + expect(sd.extractModifiedFiles('')).toEqual([]); + }); + }); + + // ── isRelevantGotcha ────────────────────────────────────────────────── + + describe('isRelevantGotcha', () => { + let sd; + + beforeEach(() => { + sd = new SubagentDispatcher(); + }); + + test('matches by pattern', () => { + const gotcha = { pattern: 'database', description: '' }; + const task = { description: 'Fix database connection', type: '', tags: [] }; + expect(sd.isRelevantGotcha(gotcha, task)).toBe(true); + }); + + test('matches by category', () => { + const gotcha 
= { category: 'test', description: '' }; + const task = { description: '', type: 'test', tags: [] }; + expect(sd.isRelevantGotcha(gotcha, task)).toBe(true); + }); + + test('matches by keyword overlap', () => { + const gotcha = { description: 'connection timeout error handling' }; + const task = { description: 'handle connection timeout error gracefully', tags: [] }; + expect(sd.isRelevantGotcha(gotcha, task)).toBe(true); + }); + + test('returns false for unrelated gotcha', () => { + const gotcha = { description: 'quantum physics' }; + const task = { description: 'build login form', tags: [] }; + expect(sd.isRelevantGotcha(gotcha, task)).toBe(false); + }); + }); + + // ── Agent mapping ───────────────────────────────────────────────────── + + describe('Agent mapping', () => { + test('getAgentMapping returns copy', () => { + const sd = new SubagentDispatcher(); + const mapping = sd.getAgentMapping(); + mapping.custom = '@custom'; + expect(sd.agentMapping.custom).toBeUndefined(); + }); + + test('updateAgentMapping adds new mappings', () => { + const sd = new SubagentDispatcher(); + sd.updateAgentMapping({ custom: '@custom-agent' }); + expect(sd.agentMapping.custom).toBe('@custom-agent'); + // Original mappings preserved + expect(sd.agentMapping.database).toBe('@data-engineer'); + }); + }); + + // ── Logging ─────────────────────────────────────────────────────────── + + describe('Logging', () => { + test('log stores entries', () => { + const sd = new SubagentDispatcher(); + sd.log('test_event', { key: 'value' }); + expect(sd.dispatchLog.length).toBe(1); + expect(sd.dispatchLog[0].type).toBe('test_event'); + }); + + test('log trims to maxLogSize', () => { + const sd = new SubagentDispatcher(); + sd.maxLogSize = 3; + for (let i = 0; i < 5; i++) { + sd.log(`event-${i}`, {}); + } + expect(sd.dispatchLog.length).toBe(3); + }); + + test('getLog returns limited entries', () => { + const sd = new SubagentDispatcher(); + for (let i = 0; i < 10; i++) { + sd.log(`e-${i}`, {}); + 
} + expect(sd.getLog(5).length).toBe(5); + }); + }); + + // ── formatStatus ────────────────────────────────────────────────────── + + describe('formatStatus', () => { + test('returns formatted status', () => { + const sd = new SubagentDispatcher(); + const status = sd.formatStatus(); + expect(status).toContain('Subagent Dispatcher'); + expect(status).toContain('Agent Mapping'); + }); + }); +}); + +``` + +================================================== +📄 tests/core/surface-checker.test.js +================================================== +```js +/** + * Surface Checker Tests + * + * Story 11.4: Bob Surface Criteria + * + * Tests for the surface-checker module that determines when + * Bob should interrupt and ask for human decision. + * + * @jest-environment node + */ + +const path = require('path'); +const { + SurfaceChecker, + createSurfaceChecker, + shouldSurface, +} = require('../../.aios-core/core/orchestration/surface-checker'); + +describe('SurfaceChecker', () => { + let checker; + + beforeEach(() => { + checker = new SurfaceChecker(); + checker.load(); + }); + + describe('load()', () => { + it('should load criteria file successfully', () => { + const newChecker = new SurfaceChecker(); + const result = newChecker.load(); + expect(result).toBe(true); + expect(newChecker.criteria).not.toBeNull(); + }); + + it('should return false for non-existent file', () => { + const newChecker = new SurfaceChecker('/non/existent/path.yaml'); + const result = newChecker.load(); + expect(result).toBe(false); + }); + }); + + describe('validate()', () => { + it('should validate properly formatted criteria file', () => { + const result = checker.validate(); + expect(result.valid).toBe(true); + expect(result.errors).toHaveLength(0); + }); + + it('should return errors for unloaded checker', () => { + const newChecker = new SurfaceChecker('/non/existent/path.yaml'); + const result = newChecker.validate(); + expect(result.valid).toBe(false); + 
expect(result.errors).toContain('Criteria file not loaded'); + }); + }); + + describe('evaluateCondition()', () => { + describe('comparison operators', () => { + it('should evaluate greater than correctly', () => { + expect(checker.evaluateCondition('estimated_cost > 5', { estimated_cost: 10 })).toBe(true); + expect(checker.evaluateCondition('estimated_cost > 5', { estimated_cost: 5 })).toBe(false); + expect(checker.evaluateCondition('estimated_cost > 5', { estimated_cost: 3 })).toBe(false); + }); + + it('should evaluate greater than or equal correctly', () => { + expect(checker.evaluateCondition('errors_in_task >= 2', { errors_in_task: 2 })).toBe(true); + expect(checker.evaluateCondition('errors_in_task >= 2', { errors_in_task: 3 })).toBe(true); + expect(checker.evaluateCondition('errors_in_task >= 2', { errors_in_task: 1 })).toBe(false); + }); + + it('should evaluate less than correctly', () => { + expect(checker.evaluateCondition('count < 10', { count: 5 })).toBe(true); + expect(checker.evaluateCondition('count < 10', { count: 10 })).toBe(false); + }); + + it('should evaluate less than or equal correctly', () => { + expect(checker.evaluateCondition('count <= 10', { count: 10 })).toBe(true); + expect(checker.evaluateCondition('count <= 10', { count: 5 })).toBe(true); + expect(checker.evaluateCondition('count <= 10', { count: 15 })).toBe(false); + }); + + it('should handle missing fields with default 0', () => { + expect(checker.evaluateCondition('missing > 5', {})).toBe(false); + expect(checker.evaluateCondition('missing >= 0', {})).toBe(true); + }); + }); + + describe('equality operators', () => { + it('should evaluate string equality correctly', () => { + expect( + checker.evaluateCondition("risk_level == 'HIGH'", { risk_level: 'HIGH' }), + ).toBe(true); + expect( + checker.evaluateCondition("risk_level == 'HIGH'", { risk_level: 'LOW' }), + ).toBe(false); + }); + + it('should evaluate number equality correctly', () => { + expect(checker.evaluateCondition('count 
== 5', { count: 5 })).toBe(true); + expect(checker.evaluateCondition('count == 5', { count: 3 })).toBe(false); + }); + }); + + describe('IN operator', () => { + it('should evaluate IN operator for destructive actions', () => { + expect( + checker.evaluateCondition('action_type IN destructive_actions', { + action_type: 'delete_files', + }), + ).toBe(true); + expect( + checker.evaluateCondition('action_type IN destructive_actions', { + action_type: 'force_push', + }), + ).toBe(true); + expect( + checker.evaluateCondition('action_type IN destructive_actions', { + action_type: 'read_file', + }), + ).toBe(false); + }); + }); + + describe('scope comparison', () => { + it('should evaluate scope change when explicitly flagged', () => { + expect( + checker.evaluateCondition('requested_scope > approved_scope', { + scope_expanded: true, + }), + ).toBe(true); + expect( + checker.evaluateCondition('requested_scope > approved_scope', { + scope_expanded: false, + }), + ).toBe(false); + }); + + it('should evaluate scope change by length comparison', () => { + expect( + checker.evaluateCondition('requested_scope > approved_scope', { + requested_scope: 'full refactor of authentication system', + approved_scope: 'fix login bug', + }), + ).toBe(true); + }); + }); + + describe('OR conditions', () => { + it('should evaluate OR conditions correctly', () => { + expect( + checker.evaluateCondition('requires_api_key OR requires_payment', { + requires_api_key: true, + requires_payment: false, + }), + ).toBe(true); + expect( + checker.evaluateCondition('requires_api_key OR requires_payment', { + requires_api_key: false, + requires_payment: true, + }), + ).toBe(true); + expect( + checker.evaluateCondition('requires_api_key OR requires_payment', { + requires_api_key: false, + requires_payment: false, + }), + ).toBe(false); + }); + }); + + describe('boolean fields', () => { + it('should evaluate boolean fields correctly', () => { + expect(checker.evaluateCondition('requires_api_key', { 
requires_api_key: true })).toBe( + true, + ); + expect(checker.evaluateCondition('requires_api_key', { requires_api_key: false })).toBe( + false, + ); + expect(checker.evaluateCondition('requires_api_key', {})).toBe(false); + }); + }); + }); + + describe('interpolateMessage()', () => { + it('should interpolate simple variables', () => { + const result = checker.interpolateMessage('Cost: ${estimated_cost}', { + estimated_cost: 10.5, + }); + expect(result).toContain('10.50'); + }); + + it('should handle missing variables', () => { + const result = checker.interpolateMessage('Value: ${missing}', {}); + expect(result).toBe('Value: ${missing}'); + }); + + it('should interpolate multiple variables', () => { + const result = checker.interpolateMessage( + 'Error count: ${errors_in_task}, Summary: ${error_summary}', + { + errors_in_task: 3, + error_summary: 'Connection failed', + }, + ); + expect(result).toBe('Error count: 3, Summary: Connection failed'); + }); + + it('should handle null/undefined context values', () => { + const result = checker.interpolateMessage('Value: ${value}', { value: null }); + expect(result).toBe('Value: '); + }); + }); + + describe('shouldSurface()', () => { + describe('C001: Cost Threshold', () => { + it('should surface when cost exceeds threshold', () => { + const result = checker.shouldSurface({ estimated_cost: 10 }); + expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C001'); + expect(result.action).toBe('confirm_before_proceed'); + expect(result.severity).toBe('warning'); + expect(result.can_bypass).toBe(true); + }); + + it('should not surface when cost is below threshold', () => { + const result = checker.shouldSurface({ estimated_cost: 3 }); + expect(result.should_surface).toBe(false); + }); + }); + + describe('C002: Risk Threshold', () => { + it('should surface when risk level is HIGH', () => { + const result = checker.shouldSurface({ + risk_level: 'HIGH', + risk_details: 'Production database affected', + }); + 
expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C002'); + expect(result.action).toBe('present_and_ask_go_nogo'); + expect(result.severity).toBe('critical'); + }); + + it('should not surface when risk level is LOW', () => { + const result = checker.shouldSurface({ risk_level: 'LOW' }); + expect(result.should_surface).toBe(false); + }); + }); + + describe('C003: Multiple Options', () => { + it('should surface when multiple valid options exist', () => { + const result = checker.shouldSurface({ + valid_options_count: 3, + options_with_tradeoffs: '1. Option A\n2. Option B\n3. Option C', + }); + expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C003'); + expect(result.action).toBe('present_options_with_tradeoffs'); + expect(result.can_bypass).toBe(false); + }); + + it('should not surface when only one option exists', () => { + const result = checker.shouldSurface({ valid_options_count: 1 }); + expect(result.should_surface).toBe(false); + }); + }); + + describe('C004: Consecutive Errors', () => { + it('should surface when 2 or more errors in same task', () => { + const result = checker.shouldSurface({ + errors_in_task: 2, + error_summary: 'Failed to connect twice', + }); + expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C004'); + expect(result.action).toBe('pause_and_ask_help'); + expect(result.severity).toBe('error'); + expect(result.can_bypass).toBe(false); + }); + + it('should not surface when less than 2 errors', () => { + const result = checker.shouldSurface({ errors_in_task: 1 }); + expect(result.should_surface).toBe(false); + }); + }); + + describe('C005: Destructive Action', () => { + it('should surface for delete_files action', () => { + const result = checker.shouldSurface({ + action_type: 'delete_files', + action_description: 'Delete all temp files', + affected_files: '10 files', + }); + expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C005'); + 
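        // ── Editorial sketch (hypothetical helper, not part of this suite): the C005
        // expectations in these tests boil down to a membership test against a fixed
        // destructive-action list; a minimal stand-in of that check, with the list
        // contents inferred from the actions exercised in this suite:
        const _destructiveSketch = ['delete_files', 'force_push', 'drop_table', 'rm_rf'];
        const _isDestructiveSketch = (actionType) => _destructiveSketch.includes(actionType);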
expect(result.action).toBe('always_confirm'); + expect(result.severity).toBe('critical'); + expect(result.can_bypass).toBe(false); + }); + + it('should surface for force_push action', () => { + const result = checker.shouldSurface({ + action_type: 'force_push', + action_description: 'Force push to main', + }); + expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C005'); + expect(result.can_bypass).toBe(false); + }); + + it('should surface for drop_table action', () => { + const result = checker.shouldSurface({ + action_type: 'drop_table', + action_description: 'Drop users table', + }); + expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C005'); + }); + + it('should not surface for safe actions', () => { + const result = checker.shouldSurface({ + action_type: 'read_file', + action_description: 'Read config file', + }); + expect(result.should_surface).toBe(false); + }); + }); + + describe('C006: Scope Change', () => { + it('should surface when scope is expanded', () => { + const result = checker.shouldSurface({ + scope_expanded: true, + approved_scope: 'Fix login bug', + requested_scope: 'Refactor entire auth system', + scope_difference: 'Major expansion', + }); + expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C006'); + expect(result.action).toBe('confirm_scope_expansion'); + }); + }); + + describe('C007: External Dependency', () => { + it('should surface when API key is required', () => { + const result = checker.shouldSurface({ + requires_api_key: true, + dependency_description: 'OpenAI API key required', + }); + expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C007'); + expect(result.action).toBe('confirm_before_proceed'); + }); + + it('should surface when payment is required', () => { + const result = checker.shouldSurface({ + requires_payment: true, + dependency_description: 'AWS charges apply', + }); + expect(result.should_surface).toBe(true); + 
expect(result.criterion_id).toBe('C007'); + }); + }); + + describe('Evaluation Order', () => { + it('should evaluate destructive actions first (highest priority)', () => { + // Even with high cost and high risk, destructive action should trigger + const result = checker.shouldSurface({ + action_type: 'force_push', + estimated_cost: 100, + risk_level: 'HIGH', + }); + expect(result.criterion_id).toBe('C005'); + }); + + it('should check risk before cost', () => { + const result = checker.shouldSurface({ + risk_level: 'HIGH', + estimated_cost: 10, + }); + expect(result.criterion_id).toBe('C002'); + }); + }); + + describe('No Surface Needed', () => { + it('should not surface when no criteria are met', () => { + const result = checker.shouldSurface({ + estimated_cost: 2, + risk_level: 'LOW', + errors_in_task: 0, + action_type: 'read_file', + }); + expect(result.should_surface).toBe(false); + expect(result.criterion_id).toBeNull(); + expect(result.action).toBeNull(); + }); + + it('should not surface with empty context', () => { + const result = checker.shouldSurface({}); + expect(result.should_surface).toBe(false); + }); + }); + }); + + describe('getActionConfig()', () => { + it('should return action configuration', () => { + const config = checker.getActionConfig('confirm_before_proceed'); + expect(config).not.toBeNull(); + expect(config.type).toBe('yes_no'); + expect(config.default).toBe('no'); + }); + + it('should return null for unknown action', () => { + const config = checker.getActionConfig('unknown_action'); + expect(config).toBeNull(); + }); + }); + + describe('getDestructiveActions()', () => { + it('should return list of destructive actions', () => { + const actions = checker.getDestructiveActions(); + expect(Array.isArray(actions)).toBe(true); + expect(actions).toContain('delete_files'); + expect(actions).toContain('force_push'); + expect(actions).toContain('drop_table'); + expect(actions).toContain('rm_rf'); + }); + }); + + describe('isDestructiveAction()', () 
=> { + it('should return true for destructive actions', () => { + expect(checker.isDestructiveAction('delete_files')).toBe(true); + expect(checker.isDestructiveAction('force_push')).toBe(true); + expect(checker.isDestructiveAction('reset_hard')).toBe(true); + }); + + it('should return false for safe actions', () => { + expect(checker.isDestructiveAction('read_file')).toBe(false); + expect(checker.isDestructiveAction('create_file')).toBe(false); + expect(checker.isDestructiveAction('git_commit')).toBe(false); + }); + }); + + describe('getMetadata()', () => { + it('should return metadata from criteria file', () => { + const metadata = checker.getMetadata(); + expect(metadata).not.toBeNull(); + expect(metadata.story).toBe('11.4'); + expect(metadata.author).toBe('@dev (Dex)'); + }); + }); + + describe('getCriteria()', () => { + it('should return all criteria definitions', () => { + const criteria = checker.getCriteria(); + expect(criteria).not.toBeNull(); + expect(criteria.cost_threshold).toBeDefined(); + expect(criteria.risk_threshold).toBeDefined(); + expect(criteria.destructive_action).toBeDefined(); + }); + }); +}); + +describe('createSurfaceChecker()', () => { + it('should create and load a SurfaceChecker', () => { + const checker = createSurfaceChecker(); + expect(checker).toBeInstanceOf(SurfaceChecker); + expect(checker.criteria).not.toBeNull(); + }); +}); + +describe('shouldSurface() convenience function', () => { + it('should work as a standalone function', () => { + const result = shouldSurface({ estimated_cost: 10 }); + expect(result.should_surface).toBe(true); + expect(result.criterion_id).toBe('C001'); + }); + + it('should return no surface for empty context', () => { + const result = shouldSurface({}); + expect(result.should_surface).toBe(false); + }); +}); + +``` + +================================================== +📄 tests/core/parallel-monitor.test.js +================================================== +```js +/** + * Parallel Monitor - Test Suite + * 
Story EXC-1, AC7 - parallel-monitor.js coverage + * + * Tests: constructor, registerWave, registerTaskStart/Complete, + * addTaskOutput, registerWaveComplete, cancelWave, getStatus, + * formatProgressBar, formatStatus, history, clear + */ + +const { + collectEvents, +} = require('./execution-test-helpers'); + +const { ParallelMonitor, getMonitor } = require('../../.aios-core/core/execution/parallel-monitor'); + +describe('ParallelMonitor', () => { + let monitor; + + beforeEach(() => { + monitor = new ParallelMonitor({ maxHistory: 50, maxLogLines: 100 }); + }); + + // ── Constructor ───────────────────────────────────────────────────── + + describe('Constructor', () => { + test('creates with defaults', () => { + const m = new ParallelMonitor(); + expect(m.activeWaves).toBeInstanceOf(Map); + expect(m.activeTasks).toBeInstanceOf(Map); + expect(m.history).toEqual([]); + expect(m.notifyOnComplete).toBe(true); + expect(m.notifyOnFailure).toBe(true); + }); + + test('accepts custom config', () => { + const m = new ParallelMonitor({ maxHistory: 10, notifyOnComplete: false }); + expect(m.maxHistory).toBe(10); + expect(m.notifyOnComplete).toBe(false); + }); + + test('extends EventEmitter', () => { + expect(typeof monitor.on).toBe('function'); + }); + }); + + // ── registerWave ────────────────────────────────────────────────────── + + describe('registerWave', () => { + test('registers a wave with tasks', () => { + monitor.registerWave('wave-1', { + workflowId: 'wf-1', + index: 0, + tasks: [ + { id: 't1', description: 'Task 1', agent: '@dev' }, + { id: 't2', description: 'Task 2', agent: '@qa' }, + ], + }); + + expect(monitor.activeWaves.has('wave-1')).toBe(true); + const wave = monitor.activeWaves.get('wave-1'); + expect(wave.tasks.length).toBe(2); + expect(wave.status).toBe('running'); + }); + + test('emits wave_registered event', () => { + const events = collectEvents(monitor, ['wave_registered']); + monitor.registerWave('wave-1', { tasks: [{ id: 't1', description: 'x' }] 
}); + expect(events.count('wave_registered')).toBe(1); + }); + }); + + // ── registerTaskStart ───────────────────────────────────────────────── + + describe('registerTaskStart', () => { + test('registers a task as running', () => { + monitor.registerWave('wave-1', { tasks: [{ id: 't1', description: 'x' }] }); + monitor.registerTaskStart('t1', { waveId: 'wave-1', agent: '@dev', description: 'Test' }); + + expect(monitor.activeTasks.has('t1')).toBe(true); + expect(monitor.activeTasks.get('t1').status).toBe('running'); + }); + + test('updates wave task status', () => { + monitor.registerWave('wave-1', { tasks: [{ id: 't1', description: 'x' }] }); + monitor.registerTaskStart('t1', { waveId: 'wave-1', agent: '@dev' }); + + const wave = monitor.activeWaves.get('wave-1'); + expect(wave.tasks[0].status).toBe('running'); + }); + + test('emits task_started event', () => { + const events = collectEvents(monitor, ['task_started']); + monitor.registerTaskStart('t1', { waveId: 'w1', agent: '@dev' }); + expect(events.count('task_started')).toBe(1); + }); + }); + + // ── addTaskOutput ───────────────────────────────────────────────────── + + describe('addTaskOutput', () => { + test('adds output to task logs', () => { + monitor.registerTaskStart('t1', { waveId: 'w1', agent: '@dev' }); + monitor.addTaskOutput('t1', 'line 1'); + monitor.addTaskOutput('t1', 'line 2'); + + const logs = monitor.getTaskLogs('t1'); + expect(logs.length).toBe(2); + expect(logs[0].line).toBe('line 1'); + }); + + test('trims logs when exceeding maxLogLines', () => { + monitor = new ParallelMonitor({ maxLogLines: 3 }); + monitor.registerTaskStart('t1', { waveId: 'w1', agent: '@dev' }); + + for (let i = 0; i < 5; i++) { + monitor.addTaskOutput('t1', `line ${i}`); + } + + const logs = monitor.getTaskLogs('t1'); + expect(logs.length).toBe(3); + }); + + test('handles unknown taskId gracefully', () => { + // Should not throw + monitor.addTaskOutput('nonexistent', 'line'); + }); + }); + + // ── 
registerTaskComplete ────────────────────────────────────────────── + + describe('registerTaskComplete', () => { + test('marks task as completed', () => { + monitor.registerWave('w1', { tasks: [{ id: 't1', description: 'x' }] }); + monitor.registerTaskStart('t1', { waveId: 'w1', agent: '@dev' }); + monitor.registerTaskComplete('t1', { success: true, filesModified: ['a.js'] }); + + const task = monitor.activeTasks.get('t1'); + expect(task.status).toBe('completed'); + expect(task.filesModified).toEqual(['a.js']); + }); + + test('marks task as failed', () => { + monitor.registerTaskStart('t1', { waveId: 'w1', agent: '@dev' }); + monitor.registerTaskComplete('t1', { success: false, error: 'boom' }); + + const task = monitor.activeTasks.get('t1'); + expect(task.status).toBe('failed'); + expect(task.error).toBe('boom'); + }); + + test('adds to history', () => { + monitor.registerTaskStart('t1', { waveId: 'w1', agent: '@dev' }); + monitor.registerTaskComplete('t1', { success: true }); + expect(monitor.history.length).toBe(1); + }); + + test('trims history to maxHistory', () => { + monitor = new ParallelMonitor({ maxHistory: 2 }); + for (let i = 0; i < 3; i++) { + monitor.registerTaskStart(`t${i}`, { waveId: 'w1', agent: '@dev' }); + monitor.registerTaskComplete(`t${i}`, { success: true }); + } + expect(monitor.history.length).toBe(2); + }); + + test('emits task_failed on failure', () => { + const events = collectEvents(monitor, ['task_failed']); + monitor.registerTaskStart('t1', { waveId: 'w1', agent: '@dev' }); + monitor.registerTaskComplete('t1', { success: false, error: 'fail' }); + expect(events.count('task_failed')).toBe(1); + }); + + test('does nothing for unknown task', () => { + // Should not throw + monitor.registerTaskComplete('nonexistent', { success: true }); + }); + }); + + // ── registerWaveComplete ────────────────────────────────────────────── + + describe('registerWaveComplete', () => { + test('marks wave as completed', () => { + 
monitor.registerWave('w1', { tasks: [{ id: 't1', description: 'x' }] }); + monitor.registerWaveComplete('w1', { success: true, metrics: {} }); + + const wave = monitor.activeWaves.get('w1'); + expect(wave.status).toBe('completed'); + }); + + test('emits wave_completed when notifyOnComplete is true', () => { + const events = collectEvents(monitor, ['wave_completed']); + monitor.registerWave('w1', { tasks: [] }); + monitor.registerWaveComplete('w1', { success: true }); + expect(events.count('wave_completed')).toBe(1); + }); + + test('does nothing for unknown wave', () => { + monitor.registerWaveComplete('nonexistent', { success: true }); + }); + }); + + // ── cancelWave ──────────────────────────────────────────────────────── + + describe('cancelWave', () => { + test('cancels running wave and pending tasks', () => { + monitor.registerWave('w1', { + tasks: [ + { id: 't1', description: 'x', agent: '@dev' }, + { id: 't2', description: 'y', agent: '@qa' }, + ], + }); + + monitor.cancelWave('w1'); + + const wave = monitor.activeWaves.get('w1'); + expect(wave.status).toBe('cancelled'); + expect(wave.tasks[0].status).toBe('cancelled'); + expect(wave.tasks[1].status).toBe('cancelled'); + }); + + test('emits wave_cancelled event', () => { + const events = collectEvents(monitor, ['wave_cancelled']); + monitor.registerWave('w1', { tasks: [] }); + monitor.cancelWave('w1'); + expect(events.count('wave_cancelled')).toBe(1); + }); + + test('does nothing for non-running wave', () => { + monitor.registerWave('w1', { tasks: [] }); + monitor.activeWaves.get('w1').status = 'completed'; + monitor.cancelWave('w1'); + expect(monitor.activeWaves.get('w1').status).toBe('completed'); + }); + }); + + // ── getStatus ───────────────────────────────────────────────────────── + + describe('getStatus', () => { + test('returns status with active waves', () => { + monitor.registerWave('w1', { + workflowId: 'wf-1', + index: 0, + tasks: [{ id: 't1', description: 'x' }], + }); + + const status = 
monitor.getStatus(); + expect(status.activeWaves).toBe(1); + expect(status.waves.length).toBe(1); + expect(status.waves[0].progress.pending).toBe(1); + }); + + test('returns empty status when no waves', () => { + const status = monitor.getStatus(); + expect(status.activeWaves).toBe(0); + expect(status.waves).toEqual([]); + }); + }); + + // ── getTaskLogs ─────────────────────────────────────────────────────── + + describe('getTaskLogs', () => { + test('returns empty for unknown task', () => { + expect(monitor.getTaskLogs('nonexistent')).toEqual([]); + }); + + test('returns limited logs', () => { + monitor.registerTaskStart('t1', { waveId: 'w1' }); + for (let i = 0; i < 5; i++) { + monitor.addTaskOutput('t1', `line ${i}`); + } + expect(monitor.getTaskLogs('t1', 3).length).toBe(3); + }); + }); + + // ── formatProgressBar ───────────────────────────────────────────────── + + describe('formatProgressBar', () => { + test('formats zero total', () => { + const bar = monitor.formatProgressBar(0, 0, 0, 0); + expect(bar).toContain('0/0'); + }); + + test('formats progress with completed', () => { + const bar = monitor.formatProgressBar(3, 1, 0, 0); + expect(bar).toContain('4/4'); + }); + + test('shows running tasks', () => { + const bar = monitor.formatProgressBar(2, 0, 1, 1); + expect(bar).toContain('2/4'); + }); + }); + + // ── formatStatus ────────────────────────────────────────────────────── + + describe('formatStatus', () => { + test('shows no active executions message', () => { + const status = monitor.formatStatus(); + expect(status).toContain('No active executions'); + }); + + test('shows wave details when active', () => { + monitor.registerWave('w1', { + workflowId: 'wf-1', + index: 1, + tasks: [{ id: 't1', description: 'x', agent: '@dev' }], + }); + + const status = monitor.formatStatus(); + expect(status).toContain('Wave 1'); + }); + }); + + // ── clear ───────────────────────────────────────────────────────────── + + describe('clear', () => { + test('clears all 
data', () => { + monitor.registerWave('w1', { tasks: [{ id: 't1', description: 'x' }] }); + monitor.registerTaskStart('t1', { waveId: 'w1' }); + monitor.history.push({ id: 1 }); + + monitor.clear(); + + expect(monitor.activeWaves.size).toBe(0); + expect(monitor.activeTasks.size).toBe(0); + expect(monitor.taskLogs.size).toBe(0); + expect(monitor.history.length).toBe(0); + }); + }); + + // ── getMonitor singleton ────────────────────────────────────────────── + + describe('getMonitor', () => { + test('returns singleton instance', () => { + const m1 = getMonitor(); + const m2 = getMonitor(); + expect(m1).toBe(m2); + expect(m1).toBeInstanceOf(ParallelMonitor); + }); + }); + + // ── History ─────────────────────────────────────────────────────────── + + describe('getHistory', () => { + test('returns limited history', () => { + monitor.history = [{ id: 1 }, { id: 2 }, { id: 3 }]; + expect(monitor.getHistory(2).length).toBe(2); + }); + }); + + // ── broadcast ───────────────────────────────────────────────────────── + + describe('broadcast', () => { + test('sends message to registered connections', () => { + const sent = []; + const mockWs = { + send: (msg) => sent.push(msg), + on: jest.fn(), + }; + + monitor.registerConnection(mockWs); + // registerConnection sends status message + expect(sent.length).toBe(1); + + monitor.broadcast('test', { data: 1 }); + expect(sent.length).toBe(2); + }); + + test('removes broken connections', () => { + let callCount = 0; + const mockWs = { + send: () => { + callCount++; + // First call is from registerConnection (initial status), let it pass + if (callCount > 1) throw new Error('closed'); + }, + on: jest.fn(), + }; + + monitor.registerConnection(mockWs); + expect(monitor.wsConnections.size).toBe(1); + monitor.broadcast('test', {}); + expect(monitor.wsConnections.size).toBe(0); + }); + }); +}); + +``` + +================================================== +📄 tests/core/cli-commands.test.js 
+================================================== +```js +/** + * CLI Commands Tests + * + * Story: 0.9 - CLI Commands + * Epic: Epic 0 - ADE Master Orchestrator + * + * Tests for CLI command handlers. + * + * @author @dev (Dex) + * @version 1.0.0 + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +const { + orchestrate, + orchestrateStatus, + orchestrateStop, + orchestrateResume, + commands, +} = require('../../.aios-core/core/orchestration/cli-commands'); + +describe('CLI Commands (Story 0.9)', () => { + let tempDir; + let originalLog; + let logOutput; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `cli-commands-test-${Date.now()}`); + await fs.ensureDir(tempDir); + + // Capture console output + logOutput = []; + originalLog = console.log; + console.log = (...args) => { + logOutput.push(args.join(' ')); + }; + }); + + afterEach(async () => { + console.log = originalLog; + await fs.remove(tempDir); + }); + + describe('Command Exports (AC7)', () => { + it('should export all commands', () => { + expect(commands).toBeDefined(); + expect(commands['orchestrate']).toBe(orchestrate); + expect(commands['orchestrate-status']).toBe(orchestrateStatus); + expect(commands['orchestrate-stop']).toBe(orchestrateStop); + expect(commands['orchestrate-resume']).toBe(orchestrateResume); + }); + }); + + describe('orchestrate (AC1)', () => { + it('should require story ID', async () => { + const result = await orchestrate(null, { projectRoot: tempDir }); + + expect(result.success).toBe(false); + expect(result.exitCode).toBe(3); + expect(result.error).toContain('required'); + }); + + it('should execute pipeline and return result', async () => { + const result = await orchestrate('TEST-001', { projectRoot: tempDir }); + + expect(result).toBeDefined(); + expect(typeof result.exitCode).toBe('number'); + }); + + it('should support --epic flag (AC5)', async () => { + const result = await orchestrate('TEST-001', { + 
projectRoot: tempDir, + epic: 4, + }); + + expect(result).toBeDefined(); + // Should attempt to start from epic 4 + }); + + it('should support --dry-run flag (AC6)', async () => { + const result = await orchestrate('TEST-001', { + projectRoot: tempDir, + dryRun: true, + }); + + expect(result.success).toBe(true); + expect(result.dryRun).toBe(true); + expect(result.exitCode).toBe(0); + + // Dry run initializes for tech stack detection but doesn't execute full pipeline + // State file may be created during initialization, but status should show dryRun + }); + + it('should support --strict flag', async () => { + const result = await orchestrate('TEST-001', { + projectRoot: tempDir, + strict: true, + dryRun: true, + }); + + expect(result.success).toBe(true); + }); + }); + + describe('orchestrateStatus (AC2)', () => { + it('should require story ID', async () => { + const result = await orchestrateStatus(null, { projectRoot: tempDir }); + + expect(result.success).toBe(false); + expect(result.exitCode).toBe(3); + }); + + it('should return error if no state found', async () => { + const result = await orchestrateStatus('NONEXISTENT', { projectRoot: tempDir }); + + expect(result.success).toBe(false); + expect(result.exitCode).toBe(1); + expect(result.error).toContain('not found'); + }); + + it('should show status when state exists', async () => { + // Create state file + const statePath = path.join(tempDir, '.aios', 'master-orchestrator', 'TEST-001.json'); + await fs.ensureDir(path.dirname(statePath)); + await fs.writeJson(statePath, { + status: 'in_progress', + currentEpic: 4, + startedAt: new Date().toISOString(), + updatedAt: new Date().toISOString(), + epics: { + 3: { status: 'completed' }, + 4: { status: 'in_progress' }, + 6: { status: 'pending' }, + 7: { status: 'pending' }, + }, + errors: [], + }); + + const result = await orchestrateStatus('TEST-001', { projectRoot: tempDir }); + + expect(result.success).toBe(true); + expect(result.exitCode).toBe(0); + 
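      // Editorial note (shape inferred from the fixture written earlier in this test,
      // values illustrative): the persisted orchestrator state is plain JSON — a
      // top-level status plus per-epic progress entries keyed by epic number:
      const _stateShapeSketch = {
        status: 'in_progress',
        currentEpic: 4,
        epics: { 3: { status: 'completed' }, 4: { status: 'in_progress' } },
      };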
expect(result.state).toBeDefined(); + expect(result.state.status).toBe('in_progress'); + }); + }); + + describe('orchestrateStop (AC3)', () => { + it('should require story ID', async () => { + const result = await orchestrateStop(null, { projectRoot: tempDir }); + + expect(result.success).toBe(false); + expect(result.exitCode).toBe(3); + }); + + it('should return error if no state found', async () => { + const result = await orchestrateStop('NONEXISTENT', { projectRoot: tempDir }); + + expect(result.success).toBe(false); + expect(result.exitCode).toBe(1); + }); + + it('should stop execution and update state', async () => { + // Create state file + const statePath = path.join(tempDir, '.aios', 'master-orchestrator', 'TEST-001.json'); + await fs.ensureDir(path.dirname(statePath)); + await fs.writeJson(statePath, { + status: 'in_progress', + currentEpic: 4, + updatedAt: new Date().toISOString(), + }); + + const result = await orchestrateStop('TEST-001', { projectRoot: tempDir }); + + expect(result.success).toBe(true); + expect(result.exitCode).toBe(0); + + // Check state was updated + const updatedState = await fs.readJson(statePath); + expect(updatedState.status).toBe('stopped'); + }); + }); + + describe('orchestrateResume (AC4)', () => { + it('should require story ID', async () => { + const result = await orchestrateResume(null, { projectRoot: tempDir }); + + expect(result.success).toBe(false); + expect(result.exitCode).toBe(3); + }); + + it('should return error if no state found', async () => { + const result = await orchestrateResume('NONEXISTENT', { projectRoot: tempDir }); + + expect(result.success).toBe(false); + expect(result.exitCode).toBe(1); + }); + + it('should not resume completed stories', async () => { + // Create completed state + const statePath = path.join(tempDir, '.aios', 'master-orchestrator', 'TEST-001.json'); + await fs.ensureDir(path.dirname(statePath)); + await fs.writeJson(statePath, { + status: 'complete', + currentEpic: 7, + epics: { + 3: { 
status: 'completed' }, + 4: { status: 'completed' }, + 6: { status: 'completed' }, + 7: { status: 'completed' }, + }, + }); + + const result = await orchestrateResume('TEST-001', { projectRoot: tempDir }); + + expect(result.success).toBe(false); + expect(result.exitCode).toBe(2); + expect(result.error).toContain('completed'); + }); + + it('should resume from stopped state', async () => { + // Create stopped state + const statePath = path.join(tempDir, '.aios', 'master-orchestrator', 'TEST-001.json'); + await fs.ensureDir(path.dirname(statePath)); + await fs.writeJson(statePath, { + status: 'stopped', + currentEpic: 4, + updatedAt: new Date().toISOString(), + epics: { + 3: { status: 'completed' }, + 4: { status: 'in_progress' }, + 6: { status: 'pending' }, + 7: { status: 'pending' }, + }, + }); + + const result = await orchestrateResume('TEST-001', { projectRoot: tempDir }); + + expect(result).toBeDefined(); + // Should attempt to resume + }); + }); + + describe('Output Formatting', () => { + it('should output to console', async () => { + await orchestrate('TEST-001', { projectRoot: tempDir, dryRun: true }); + + expect(logOutput.length).toBeGreaterThan(0); + expect(logOutput.some((line) => line.includes('TEST-001'))).toBe(true); + }); + }); +}); + +``` + +================================================== +📄 tests/core/semantic-merge-engine.test.js +================================================== +```js +/** + * Semantic Merge Engine - Test Suite + * Story EXC-1, AC4 - semantic-merge-engine.js coverage + * + * Tests: SemanticAnalyzer, ConflictDetector, AutoMerger, AIResolver, + * CustomRulesLoader, SemanticMergeEngine, enums + */ + +const path = require('path'); +const fs = require('fs'); +const { + createTempDir, + cleanupTempDir, +} = require('./execution-test-helpers'); + +const { + SemanticMergeEngine, + SemanticAnalyzer, + ConflictDetector, + AutoMerger, + AIResolver, + CustomRulesLoader, + ChangeType, + MergeStrategy, + ConflictSeverity, + MergeDecision, +} 
= require('../../.aios-core/core/execution/semantic-merge-engine'); + +// ── Tests ──────────────────────────────────────────────────────────────────── + +describe('SemanticMergeEngine Module', () => { + let tmpDir; + + beforeEach(() => { + tmpDir = createTempDir('sme-test-'); + }); + + afterEach(() => { + cleanupTempDir(tmpDir); + }); + + // ── Enums ───────────────────────────────────────────────────────────── + + describe('Enums', () => { + test('ChangeType has expected values', () => { + expect(ChangeType.IMPORT_ADDED).toBe('import_added'); + expect(ChangeType.FUNCTION_ADDED).toBe('function_added'); + expect(ChangeType.FUNCTION_MODIFIED).toBe('function_modified'); + expect(ChangeType.FUNCTION_REMOVED).toBe('function_removed'); + expect(ChangeType.CLASS_ADDED).toBe('class_added'); + expect(ChangeType.CLASS_REMOVED).toBe('class_removed'); + expect(ChangeType.UNKNOWN).toBe('unknown'); + }); + + test('MergeStrategy has expected values', () => { + expect(MergeStrategy.COMBINE).toBe('combine'); + expect(MergeStrategy.TAKE_NEWER).toBe('take_newer'); + expect(MergeStrategy.AI_REQUIRED).toBe('ai_required'); + expect(MergeStrategy.HUMAN_REQUIRED).toBe('human_required'); + }); + + test('ConflictSeverity has expected values', () => { + expect(ConflictSeverity.LOW).toBe('low'); + expect(ConflictSeverity.MEDIUM).toBe('medium'); + expect(ConflictSeverity.HIGH).toBe('high'); + expect(ConflictSeverity.CRITICAL).toBe('critical'); + }); + + test('MergeDecision has expected values', () => { + expect(MergeDecision.AUTO_MERGED).toBe('auto_merged'); + expect(MergeDecision.AI_MERGED).toBe('ai_merged'); + expect(MergeDecision.NEEDS_HUMAN_REVIEW).toBe('needs_human_review'); + expect(MergeDecision.FAILED).toBe('failed'); + }); + }); + + // ── SemanticAnalyzer ────────────────────────────────────────────────── + + describe('SemanticAnalyzer', () => { + let analyzer; + + beforeEach(() => { + analyzer = new SemanticAnalyzer(); + }); + + test('detectLanguage maps extensions correctly', () => 
{ + expect(analyzer.detectLanguage('.js')).toBe('javascript'); + expect(analyzer.detectLanguage('.ts')).toBe('typescript'); + expect(analyzer.detectLanguage('.py')).toBe('python'); + expect(analyzer.detectLanguage('.css')).toBe('css'); + expect(analyzer.detectLanguage('.json')).toBe('json'); + expect(analyzer.detectLanguage('.xyz')).toBe('text'); + }); + + test('extractElements returns empty for null content', () => { + const result = analyzer.extractElements(null, 'javascript'); + expect(result.imports).toEqual([]); + expect(result.functions).toEqual([]); + expect(result.classes).toEqual([]); + }); + + test('extractElements finds JS imports', () => { + const code = 'import { foo } from \'bar\';\nimport baz from \'qux\';'; + const result = analyzer.extractElements(code, 'javascript'); + expect(result.imports.length).toBeGreaterThanOrEqual(1); + }); + + test('extractElements finds JS functions', () => { + const code = 'function hello() { return 1; }\nconst world = () => 2;'; + const result = analyzer.extractElements(code, 'javascript'); + expect(result.functions.length).toBeGreaterThanOrEqual(1); + }); + + test('extractElements finds JS classes', () => { + const code = 'class MyClass extends Base { constructor() {} }'; + const result = analyzer.extractElements(code, 'javascript'); + expect(result.classes.length).toBe(1); + expect(result.classes[0].name).toBe('MyClass'); + }); + + test('analyzeDiff detects added functions', () => { + const before = ''; + const after = 'function newFunc() { return true; }'; + const result = analyzer.analyzeDiff('test.js', before, after); + expect(result.language).toBe('javascript'); + expect(result.functionsAdded).toContain('newFunc'); + }); + + test('analyzeDiff detects removed functions', () => { + const before = 'function oldFunc() { return true; }'; + const after = ''; + const result = analyzer.analyzeDiff('test.js', before, after); + const removed = result.changes.filter(c => c.changeType === ChangeType.FUNCTION_REMOVED); + 
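      // Editorial sketch (hypothetical regex, simpler than the analyzer's real
      // extraction): a "removed" function is one whose name is declared in the old
      // source but not the new one. Stand-in using the same inputs as this test:
      const _fnNamesSketch = (src) => [...(src || '').matchAll(/function\s+(\w+)/g)].map((m) => m[1]);
      const _removedNames = _fnNamesSketch('function oldFunc() { return true; }').filter(
        (n) => !_fnNamesSketch('').includes(n),
      );
      // _removedNames is ['oldFunc'] for these inputs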
expect(removed.length).toBe(1); + }); + + test('countChangedLines returns difference', () => { + expect(analyzer.countChangedLines('a\nb\nc', 'a\nb\nc\nd\ne')).toBe(2); + expect(analyzer.countChangedLines('', '')).toBe(0); + expect(analyzer.countChangedLines(null, 'a\nb')).toBe(1); + }); + + test('extractImportSource extracts module path', () => { + expect(analyzer.extractImportSource("import x from 'lodash'")).toBe('lodash'); + expect(analyzer.extractImportSource("import 'styles.css'")).toBe("import 'styles.css'"); + }); + + test('getLocation returns line number', () => { + const content = 'line1\nline2\nline3'; + expect(analyzer.getLocation(content, 6)).toBe('line 2'); + }); + }); + + // ── ConflictDetector ────────────────────────────────────────────────── + + describe('ConflictDetector', () => { + let detector; + + beforeEach(() => { + detector = new ConflictDetector(); + }); + + test('detectConflicts returns empty for single task', () => { + const result = detector.detectConflicts({ task1: { changes: [] } }); + expect(result).toEqual([]); + }); + + test('detectConflicts finds overlapping function modifications', () => { + const analyses = { + task1: { + filePath: 'app.js', + changes: [{ changeType: ChangeType.FUNCTION_MODIFIED, target: 'handleSubmit', location: 'line 10' }], + }, + task2: { + filePath: 'app.js', + changes: [{ changeType: ChangeType.FUNCTION_MODIFIED, target: 'handleSubmit', location: 'line 10' }], + }, + }; + + const conflicts = detector.detectConflicts(analyses); + expect(conflicts.length).toBeGreaterThan(0); + expect(conflicts[0].severity).toBeDefined(); + }); + + test('getCompatibility returns compatible for import+import', () => { + const result = detector.getCompatibility(ChangeType.IMPORT_ADDED, ChangeType.IMPORT_ADDED); + expect(result.compatible).toBe(true); + expect(result.strategy).toBe(MergeStrategy.COMBINE); + }); + + test('getCompatibility returns incompatible for function_removed+function_modified', () => { + const result = 
detector.getCompatibility(ChangeType.FUNCTION_REMOVED, ChangeType.FUNCTION_MODIFIED); + expect(result.compatible).toBe(false); + expect(result.severity).toBe(ConflictSeverity.CRITICAL); + }); + + test('getCompatibility returns default for unknown combinations', () => { + const result = detector.getCompatibility('something', 'unknown'); + expect(result.compatible).toBe(false); + expect(result.strategy).toBe(MergeStrategy.AI_REQUIRED); + }); + + test('mapStrategy maps string values', () => { + expect(detector.mapStrategy('combine')).toBe(MergeStrategy.COMBINE); + expect(detector.mapStrategy('human_required')).toBe(MergeStrategy.HUMAN_REQUIRED); + expect(detector.mapStrategy(null)).toBe(MergeStrategy.AI_REQUIRED); + expect(detector.mapStrategy('garbage')).toBe(MergeStrategy.AI_REQUIRED); + }); + + test('mapSeverity maps string values', () => { + expect(detector.mapSeverity('low')).toBe(ConflictSeverity.LOW); + expect(detector.mapSeverity('critical')).toBe(ConflictSeverity.CRITICAL); + expect(detector.mapSeverity(null)).toBe(ConflictSeverity.MEDIUM); + }); + }); + + // ── AutoMerger ──────────────────────────────────────────────────────── + + describe('AutoMerger', () => { + let merger; + + beforeEach(() => { + merger = new AutoMerger(); + }); + + test('tryAutoMerge fails for non-COMBINE strategy', () => { + const conflict = { mergeStrategy: MergeStrategy.AI_REQUIRED, changeTypes: [], targets: [] }; + const result = merger.tryAutoMerge(conflict, '', {}); + expect(result.success).toBe(false); + }); + + test('tryAutoMerge combines imports', () => { + const conflict = { + mergeStrategy: MergeStrategy.COMBINE, + changeTypes: [ChangeType.IMPORT_ADDED, ChangeType.IMPORT_ADDED], + targets: [], + }; + const base = '// base file\nconst x = 1;'; + const taskContents = { + t1: "import a from 'a';\nconst x = 1;", + t2: "import b from 'b';\nconst x = 1;", + }; + + const result = merger.tryAutoMerge(conflict, base, taskContents); + expect(result.success).toBe(true); + 
expect(result.decision).toBe(MergeDecision.AUTO_MERGED); + }); + + test('tryAutoMerge fails for unsupported change combo', () => { + const conflict = { + mergeStrategy: MergeStrategy.COMBINE, + changeTypes: [ChangeType.VARIABLE_MODIFIED, ChangeType.VARIABLE_ADDED], + targets: [], + }; + const result = merger.tryAutoMerge(conflict, '', {}); + expect(result.success).toBe(false); + }); + }); + + // ── AIResolver ──────────────────────────────────────────────────────── + + describe('AIResolver', () => { + let resolver; + + beforeEach(() => { + resolver = new AIResolver({ maxContextTokens: 1000 }); + }); + + test('constructor sets defaults', () => { + const r = new AIResolver(); + expect(r.maxContextTokens).toBe(4000); + expect(r.confidenceThreshold).toBe(0.7); + expect(r.callCount).toBe(0); + }); + + test('extractCodeBlock extracts from markdown', () => { + const response = 'Here is code:\n```js\nconst x = 1;\n```\nDone.'; + expect(resolver.extractCodeBlock(response)).toBe('const x = 1;'); + }); + + test('extractCodeBlock returns null for no code block', () => { + expect(resolver.extractCodeBlock('just text')).toBeNull(); + }); + + test('assessConfidence scores based on response', () => { + const good = '```\ncode\n```\nThis is a detailed explanation of the merge.'; + expect(resolver.assessConfidence(good)).toBeGreaterThanOrEqual(0.8); + + const bad = 'error: cannot resolve the conflict'; + expect(resolver.assessConfidence(bad)).toBeLessThan(0.8); + }); + + test('getStats returns call metrics', () => { + const stats = resolver.getStats(); + expect(stats.callsMade).toBe(0); + expect(stats.estimatedTokensUsed).toBe(0); + }); + + test('resolveConflict returns NEEDS_HUMAN_REVIEW for large context', async () => { + const bigResolver = new AIResolver({ maxContextTokens: 10 }); + const conflict = { + filePath: 'test.js', + location: 'line 1', + severity: 'high', + tasksInvolved: ['t1'], + }; + const result = await bigResolver.resolveConflict(conflict, 'a'.repeat(200), { + t1: 
{ intent: 'test', changes: [] }, + }); + expect(result.decision).toBe(MergeDecision.NEEDS_HUMAN_REVIEW); + }); + + test('buildContext includes conflict info', () => { + const conflict = { + filePath: 'app.js', + location: 'line 5', + severity: 'medium', + tasksInvolved: ['t1'], + }; + const ctx = resolver.buildContext(conflict, 'base code', { + t1: { intent: 'fix bug', changes: [{ changeType: 'function_modified' }], content: 'new code' }, + }); + expect(ctx).toContain('app.js'); + expect(ctx).toContain('fix bug'); + }); + }); + + // ── CustomRulesLoader ───────────────────────────────────────────────── + + describe('CustomRulesLoader', () => { + let loader; + + beforeEach(() => { + loader = new CustomRulesLoader(tmpDir); + }); + + test('constructor sets paths', () => { + expect(loader.rootPath).toBe(tmpDir); + expect(loader.rulesPath).toContain('.aios'); + }); + + test('loadCustomRules returns null when no file', () => { + expect(loader.loadCustomRules()).toBeNull(); + }); + + test('isCacheValid returns false initially', () => { + expect(loader.isCacheValid()).toBe(false); + }); + + test('clearCache resets cache', () => { + loader.cache.rules = { test: true }; + loader.cache.lastLoad = Date.now(); + loader.clearCache(); + expect(loader.cache.rules).toBeNull(); + expect(loader.cache.lastLoad).toBeNull(); + }); + + test('getDefaultRules returns expected structure', () => { + const rules = loader.getDefaultRules(); + expect(rules.compatibility).toBeDefined(); + expect(rules.file_patterns).toBeDefined(); + expect(rules.languages).toBeDefined(); + expect(rules.strategies).toBeDefined(); + expect(rules.ai).toBeDefined(); + }); + + test('getMergedRules returns defaults when no custom rules', () => { + const rules = loader.getMergedRules(); + expect(rules.ai.enabled).toBe(true); + }); + + test('deepMerge merges objects correctly', () => { + const target = { a: 1, b: { c: 2, d: 3 } }; + const source = { b: { c: 99 }, e: 5 }; + const result = loader.deepMerge(target, 
source); + expect(result.a).toBe(1); + expect(result.b.c).toBe(99); + expect(result.b.d).toBe(3); + expect(result.e).toBe(5); + }); + + test('deepMerge skips null/undefined values', () => { + const target = { a: 1 }; + const source = { a: null, b: undefined }; + const result = loader.deepMerge(target, source); + expect(result.a).toBe(1); + }); + + test('matchesPattern matches glob patterns', () => { + expect(loader.matchesPattern('node_modules/foo/bar.js', ['node_modules/**'])).toBe(true); + expect(loader.matchesPattern('src/components/app.ts', ['src/**/*.ts'])).toBe(true); + expect(loader.matchesPattern('README.md', ['*.md'])).toBe(true); + expect(loader.matchesPattern('src/app.ts', ['*.md'])).toBe(false); + }); + + test('matchesPattern returns false for null patterns', () => { + expect(loader.matchesPattern('file.js', null)).toBe(false); + expect(loader.matchesPattern('file.js', 'not-array')).toBe(false); + }); + + test('getFileCategory categorizes files correctly', () => { + expect(loader.getFileCategory('node_modules/x.js')).toBe('skip'); + expect(loader.getFileCategory('README.md')).toBe('auto_merge'); + expect(loader.getFileCategory('package.json')).toBe('human_review'); + expect(loader.getFileCategory('src/components/App.tsx')).toBe('ai_preferred'); + expect(loader.getFileCategory('random.xyz')).toBe('default'); + }); + + test('getCompatibilityRule returns null for unknown', () => { + expect(loader.getCompatibilityRule('a', 'b')).toBeNull(); + }); + + test('getLanguageConfig returns config for known language', () => { + const jsConfig = loader.getLanguageConfig('javascript'); + expect(jsConfig.patterns).toBeDefined(); + }); + + test('getLanguageConfig returns empty for unknown language', () => { + expect(loader.getLanguageConfig('brainfuck')).toEqual({}); + }); + + test('getAIConfig returns defaults', () => { + const config = loader.getAIConfig(); + expect(config.enabled).toBe(true); + expect(config.max_context_tokens).toBe(4000); + }); + }); + + // ── 
SemanticAnalyzer - Python ────────────────────────────────────────── + + describe('SemanticAnalyzer - Python analysis', () => { + let pyAnalyzer; + beforeEach(() => { pyAnalyzer = new SemanticAnalyzer(); }); + + test('extractElements detects Python imports and functions', () => { + const pyCode = `import os +from pathlib import Path + +def hello(): + pass + +class MyClass: + pass +`; + const elements = pyAnalyzer.extractElements(pyCode, 'python'); + expect(elements.imports.length).toBeGreaterThanOrEqual(1); + expect(elements.functions.length).toBeGreaterThanOrEqual(1); + expect(elements.classes.length).toBeGreaterThanOrEqual(1); + }); + + test('analyzeDiff for Python files', () => { + const base = 'def greet():\n print("hi")\n'; + const modified = 'def greet():\n print("hello")\n\ndef goodbye():\n print("bye")\n'; + const result = pyAnalyzer.analyzeDiff('app.py', base, modified, 'task-1'); + expect(result.filePath).toBe('app.py'); + expect(result.language).toBe('python'); + }); + }); + + // ── AIResolver helper methods ──────────────────────────────────────── + + describe('AIResolver helpers', () => { + let resolver; + beforeEach(() => { resolver = new AIResolver(); }); + + test('buildContext returns formatted context string', () => { + const conflict = { + filePath: 'src/app.js', + location: 'line 10', + severity: 'high', + tasksInvolved: ['t1'], + }; + const taskSnapshots = { + t1: { intent: 'Add feature', changes: [{ changeType: 'addition' }], content: 'code' }, + }; + const context = resolver.buildContext(conflict, 'const x = 1;', taskSnapshots); + expect(context).toContain('src/app.js'); + expect(context).toContain('high'); + expect(context).toContain('Add feature'); + }); + + test('buildMergePrompt returns prompt string', () => { + const prompt = resolver.buildMergePrompt( + { location: 'line 5' }, + '## Context\nSome context', + ); + expect(prompt).toContain('code merge specialist'); + expect(prompt).toContain('Context'); + }); + + test('extractCodeBlock 
extracts code from markdown', () => { + const response = 'Here is the merged code:\n```\nconst x = 1;\n```\n'; + const code = resolver.extractCodeBlock(response); + expect(code).toContain('const x = 1;'); + }); + + test('extractCodeBlock returns null for no code block', () => { + const response = 'No code here.'; + expect(resolver.extractCodeBlock(response)).toBeNull(); + }); + + test('assessConfidence returns numeric confidence', () => { + // 'high confidence merge' has no code block, is short, no error indicators + // base 0.5 + 0.15 (no error indicators) = 0.65 + expect(resolver.assessConfidence('high confidence merge')).toBe(0.65); + // With code block: base 0.5 + 0.3 (code block) + 0.15 (no errors) = 0.95 + expect(resolver.assessConfidence('```js\ncode\n```')).toBeCloseTo(0.95); + }); + + test('getStats returns call count', () => { + const stats = resolver.getStats(); + expect(stats.callsMade).toBe(0); + expect(stats.estimatedTokensUsed).toBe(0); + }); + }); + + // ── SemanticMergeEngine - mergeFile ────────────────────────────────── + + describe('SemanticMergeEngine - mergeFile', () => { + test('returns human_review for files marked as human_review', async () => { + const engine = new SemanticMergeEngine({ rootPath: tmpDir }); + const result = await engine.mergeFile('package.json', '{}', { + t1: { files: { 'package.json': '{"a":1}' } }, + t2: { files: { 'package.json': '{"b":2}' } }, + }); + expect(result.decision).toBe(MergeDecision.NEEDS_HUMAN_REVIEW); + }); + + test('auto-merges single task modification', async () => { + const engine = new SemanticMergeEngine({ rootPath: tmpDir }); + const result = await engine.mergeFile('src/utils.js', 'const a = 1;', { + t1: { files: { 'src/utils.js': 'const a = 2;' } }, + }); + expect(result.decision).toBe(MergeDecision.AUTO_MERGED); + expect(result.mergedContent).toBe('const a = 2;'); + }); + + test('combineNonConflictingChanges picks most-changed version', () => { + const engine = new SemanticMergeEngine({ rootPath: 
tmpDir }); + const result = engine.combineNonConflictingChanges( + 'base', + { t1: 'a bit changed', t2: 'much more changed content here' }, + { + t1: { changes: [{ type: 'edit' }] }, + t2: { changes: [{ type: 'edit' }, { type: 'add' }, { type: 'edit' }] }, + }, + ); + expect(result).toBe('much more changed content here'); + }); + }); + + // ── SemanticMergeEngine (Orchestrator) ───────────────────────────────── + + describe('SemanticMergeEngine', () => { + test('constructor initializes all components', () => { + const engine = new SemanticMergeEngine({ rootPath: tmpDir }); + expect(engine.analyzer).toBeInstanceOf(SemanticAnalyzer); + expect(engine.detector).toBeInstanceOf(ConflictDetector); + expect(engine.autoMerger).toBeInstanceOf(AutoMerger); + expect(engine.aiResolver).toBeInstanceOf(AIResolver); + }); + + test('extends EventEmitter', () => { + const engine = new SemanticMergeEngine({ rootPath: tmpDir }); + expect(typeof engine.on).toBe('function'); + expect(typeof engine.emit).toBe('function'); + }); + + test('findModifiedFiles collects all files from snapshots', () => { + const engine = new SemanticMergeEngine({ rootPath: tmpDir }); + const snapshots = { + t1: { files: { 'a.js': 'code', 'b.js': 'code' } }, + t2: { files: { 'b.js': 'code2', 'c.js': 'code' } }, + }; + const files = engine.findModifiedFiles(snapshots); + expect(files.size).toBe(3); + expect(files.has('a.js')).toBe(true); + expect(files.has('c.js')).toBe(true); + }); + + test('shouldProcessFile skips node_modules', () => { + const engine = new SemanticMergeEngine({ rootPath: tmpDir }); + expect(engine.shouldProcessFile('node_modules/foo.js')).toBe(false); + expect(engine.shouldProcessFile('src/app.js')).toBe(true); + }); + + test('getFileCategory returns category', () => { + const engine = new SemanticMergeEngine({ rootPath: tmpDir }); + const category = engine.getFileCategory('package.json'); + expect(category).toBe('human_review'); + }); + + test('getRules returns merged rules', () => { + const 
engine = new SemanticMergeEngine({ rootPath: tmpDir }); + const rules = engine.getRules(); + expect(rules).toBeDefined(); + expect(rules.ai).toBeDefined(); + }); + + test('reloadRules clears cache and re-initializes detector', () => { + const engine = new SemanticMergeEngine({ rootPath: tmpDir }); + const oldDetector = engine.detector; + engine.reloadRules(); + expect(engine.detector).not.toBe(oldDetector); + }); + + test('getAIStats returns resolver stats', () => { + const engine = new SemanticMergeEngine({ rootPath: tmpDir }); + const stats = engine.getAIStats(); + expect(stats.callsMade).toBe(0); + }); + + test('saveReport writes JSON and MD files', async () => { + const engine = new SemanticMergeEngine({ + rootPath: tmpDir, + storageDir: path.join(tmpDir, '.aios', 'merge'), + }); + + const report = { + startedAt: new Date().toISOString(), + tasks: ['t1'], + results: [], + status: 'success', + }; + + await engine.saveReport(report); + + const mergeDir = path.join(tmpDir, '.aios', 'merge'); + expect(fs.existsSync(mergeDir)).toBe(true); + + const latestPath = path.join(mergeDir, 'merge-report-latest.json'); + expect(fs.existsSync(latestPath)).toBe(true); + }); + }); +}); + +``` + +================================================== +📄 tests/core/recovery-handler.test.js +================================================== +```js +/** + * Recovery Handler Tests + * + * Story: 0.5 - Error Recovery Integration + * Epic: Epic 0 - ADE Master Orchestrator + * + * Tests for recovery handler that manages automatic error recovery. 
+ * + * @author @dev (Dex) + * @version 1.0.0 + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +const { + RecoveryHandler, + RecoveryStrategy, + RecoveryResult, +} = require('../../.aios-core/core/orchestration/recovery-handler'); + +describe('Recovery Handler (Story 0.5)', () => { + let tempDir; + let handler; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `recovery-handler-test-${Date.now()}`); + await fs.ensureDir(tempDir); + + handler = new RecoveryHandler({ + projectRoot: tempDir, + storyId: 'TEST-001', + maxRetries: 3, + autoEscalate: true, + }); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + describe('RecoveryStrategy Enum (AC2)', () => { + it('should have all required strategies', () => { + expect(RecoveryStrategy.RETRY_SAME_APPROACH).toBe('retry_same_approach'); + expect(RecoveryStrategy.ROLLBACK_AND_RETRY).toBe('rollback_and_retry'); + expect(RecoveryStrategy.SKIP_PHASE).toBe('skip_phase'); + expect(RecoveryStrategy.ESCALATE_TO_HUMAN).toBe('escalate_to_human'); + expect(RecoveryStrategy.TRIGGER_RECOVERY_WORKFLOW).toBe('trigger_recovery_workflow'); + }); + }); + + describe('RecoveryResult Enum', () => { + it('should have all result types', () => { + expect(RecoveryResult.SUCCESS).toBe('success'); + expect(RecoveryResult.FAILED).toBe('failed'); + expect(RecoveryResult.ESCALATED).toBe('escalated'); + expect(RecoveryResult.SKIPPED).toBe('skipped'); + }); + }); + + describe('Constructor', () => { + it('should initialize with default options', () => { + const h = new RecoveryHandler({ + projectRoot: tempDir, + storyId: 'TEST-001', + }); + + expect(h.projectRoot).toBe(tempDir); + expect(h.storyId).toBe('TEST-001'); + expect(h.maxRetries).toBe(3); + expect(h.autoEscalate).toBe(true); + }); + + it('should accept custom options (AC5)', () => { + const h = new RecoveryHandler({ + projectRoot: tempDir, + storyId: 'TEST-002', + maxRetries: 5, + autoEscalate: false, + }); + + 
expect(h.maxRetries).toBe(5); + expect(h.autoEscalate).toBe(false); + }); + }); + + describe('handleEpicFailure (AC1)', () => { + it('should handle epic failure and return recovery result', async () => { + const result = await handler.handleEpicFailure(3, new Error('Test failure'), { + approach: 'test approach', + }); + + expect(result).toBeDefined(); + expect(result.epicNum).toBe(3); + expect(result.strategy).toBeDefined(); + expect(result.success).toBeDefined(); + }); + + it('should track attempts (AC7)', async () => { + await handler.handleEpicFailure(3, new Error('First failure')); + await handler.handleEpicFailure(3, new Error('Second failure')); + + expect(handler.getAttemptCount(3)).toBe(2); + expect(handler.getAttemptHistory()[3]).toHaveLength(2); + }); + + it('should select RETRY_SAME_APPROACH for transient errors', async () => { + const result = await handler.handleEpicFailure( + 3, + new Error('Network timeout: ETIMEDOUT'), + {}, + ); + + expect(result.strategy).toBe(RecoveryStrategy.RETRY_SAME_APPROACH); + expect(result.shouldRetry).toBe(true); + }); + + it('should select ESCALATE_TO_HUMAN for fatal errors', async () => { + const result = await handler.handleEpicFailure(3, new Error('Fatal: Out of memory'), {}); + + expect(result.strategy).toBe(RecoveryStrategy.ESCALATE_TO_HUMAN); + expect(result.escalated).toBe(true); + }); + }); + + describe('Max Retries (AC5)', () => { + it('should track retry count per epic', async () => { + expect(handler.canRetry(3)).toBe(true); + + await handler.handleEpicFailure(3, new Error('Fail 1')); + await handler.handleEpicFailure(3, new Error('Fail 2')); + + expect(handler.getAttemptCount(3)).toBe(2); + expect(handler.canRetry(3)).toBe(true); + }); + + it('should return false for canRetry after max attempts (AC5)', async () => { + await handler.handleEpicFailure(3, new Error('Fail 1')); + await handler.handleEpicFailure(3, new Error('Fail 2')); + await handler.handleEpicFailure(3, new Error('Fail 3')); + + 
expect(handler.canRetry(3)).toBe(false); + }); + }); + + describe('Automatic Escalation (AC6)', () => { + it('should escalate after max retries when autoEscalate is true', async () => { + // Use up all retries + await handler.handleEpicFailure(4, new Error('Fail 1')); + await handler.handleEpicFailure(4, new Error('Fail 2')); + const result = await handler.handleEpicFailure(4, new Error('Fail 3')); + + // On max retries, should escalate + expect(result.strategy).toBe(RecoveryStrategy.ESCALATE_TO_HUMAN); + expect(result.escalated).toBe(true); + }); + + it('should not escalate when autoEscalate is false', async () => { + const h = new RecoveryHandler({ + projectRoot: tempDir, + storyId: 'TEST-001', + maxRetries: 1, + autoEscalate: false, + }); + + await h.handleEpicFailure(3, new Error('Fail 1')); + const result = await h.handleEpicFailure(3, new Error('Fail 2')); + + // Without autoEscalate, should still try other strategies + expect(result.escalated).toBe(false); + }); + + it('should save escalation report (AC6)', async () => { + // Force escalation + await handler.handleEpicFailure(3, new Error('Fail 1')); + await handler.handleEpicFailure(3, new Error('Fail 2')); + const result = await handler.handleEpicFailure(3, new Error('Fatal error')); + + if (result.escalated) { + const reportsDir = path.join(tempDir, '.aios', 'escalations'); + const exists = await fs.pathExists(reportsDir); + expect(exists).toBe(true); + } + }); + }); + + describe('Logging (AC7)', () => { + it('should log all recovery attempts', async () => { + await handler.handleEpicFailure(3, new Error('Test failure')); + + const logs = handler.getLogs(); + expect(logs.length).toBeGreaterThan(0); + expect(logs.some((l) => l.message.includes('Epic 3'))).toBe(true); + }); + + it('should track attempt details', async () => { + await handler.handleEpicFailure(3, new Error('Specific error'), { + approach: 'custom approach', + }); + + const history = handler.getAttemptHistory(); + 
expect(history[3][0].approach).toBe('custom approach'); + expect(history[3][0].error).toBe('Specific error'); + }); + + it('should get logs for specific epic', async () => { + await handler.handleEpicFailure(3, new Error('Epic 3 error')); + await handler.handleEpicFailure(4, new Error('Epic 4 error')); + + const epic3Logs = handler.getEpicLogs(3); + expect(epic3Logs.every((l) => l.message.includes('3') || l.message.includes('epic-3'))).toBe( + true, + ); + }); + }); + + describe('Reset and Clear', () => { + it('should reset attempts for specific epic', async () => { + await handler.handleEpicFailure(3, new Error('Fail')); + expect(handler.getAttemptCount(3)).toBe(1); + + handler.resetAttempts(3); + expect(handler.getAttemptCount(3)).toBe(0); + }); + + it('should clear all state', async () => { + await handler.handleEpicFailure(3, new Error('Fail')); + await handler.handleEpicFailure(4, new Error('Fail')); + + handler.clear(); + + expect(handler.getAttemptCount(3)).toBe(0); + expect(handler.getAttemptCount(4)).toBe(0); + expect(handler.getLogs()).toHaveLength(0); + }); + }); + + describe('Error Classification', () => { + it('should classify transient errors correctly', async () => { + const transientErrors = [ + 'Network timeout: ETIMEDOUT', + 'Connection refused: ECONNREFUSED', + 'Fetch failed: network error', + ]; + + for (const err of transientErrors) { + handler.clear(); + const result = await handler.handleEpicFailure(3, new Error(err)); + expect(result.strategy).toBe(RecoveryStrategy.RETRY_SAME_APPROACH); + } + }); + + it('should classify dependency errors correctly', async () => { + handler.clear(); + const result = await handler.handleEpicFailure(3, new Error('Cannot find module lodash')); + + // Should trigger recovery workflow for dependency issues + expect([ + RecoveryStrategy.TRIGGER_RECOVERY_WORKFLOW, + RecoveryStrategy.RETRY_SAME_APPROACH, // First attempt might retry + ]).toContain(result.strategy); + }); + }); + + describe('Event Emitter', () => { + 
it('should emit recoveryAttempt event', async () => { + const events = []; + handler.on('recoveryAttempt', (e) => events.push(e)); + + await handler.handleEpicFailure(3, new Error('Test')); + + expect(events).toHaveLength(1); + expect(events[0].epicNum).toBe(3); + expect(events[0].attempt).toBe(1); + }); + + it('should emit escalation event when escalated', async () => { + const events = []; + handler.on('escalation', (e) => events.push(e)); + + // Force escalation + await handler.handleEpicFailure(3, new Error('Fail 1')); + await handler.handleEpicFailure(3, new Error('Fail 2')); + await handler.handleEpicFailure(3, new Error('Fatal error')); + + expect(events.length).toBeGreaterThan(0); + }); + }); +}); + +describe('Integration with MasterOrchestrator', () => { + let tempDir; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `recovery-integration-test-${Date.now()}`); + await fs.ensureDir(tempDir); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + it('should integrate RecoveryHandler with MasterOrchestrator', async () => { + const { MasterOrchestrator } = require('../../.aios-core/core/orchestration'); + + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + maxRetries: 3, + autoRecovery: true, + }); + + // Verify recovery handler is initialized + expect(orchestrator.recoveryHandler).toBeDefined(); + expect(orchestrator.recoveryHandler).toBeInstanceOf(RecoveryHandler); + expect(orchestrator.recoveryHandler.maxRetries).toBe(3); + }); + + it('should expose getRecoveryHandler method', async () => { + const { MasterOrchestrator } = require('../../.aios-core/core/orchestration'); + + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + const handler = orchestrator.getRecoveryHandler(); + expect(handler).toBeDefined(); + expect(handler).toBeInstanceOf(RecoveryHandler); + }); +}); + +``` + +================================================== +📄 
tests/core/context-injector.test.js +================================================== +```js +/** + * Context Injector - Test Suite + * Story EXC-1, AC5 - context-injector.js coverage + * + * Tests: constructor, inject, cache, project context, files inference, + * memory/gotchas/decisions, formatForLLM, trimToTokenBudget, metrics + */ + +const path = require('path'); +const fs = require('fs'); +const { + createTempDir, + cleanupTempDir, + createMockMemoryQuery, + createMockGotchasMemory, + createMockSessionMemory, +} = require('./execution-test-helpers'); + +// Mock gotchas-memory (exists but exports object, not constructor directly) +jest.mock('../../.aios-core/core/memory/gotchas-memory', () => { throw new Error('mocked'); }); + +const { ContextInjector } = require('../../.aios-core/core/execution/context-injector'); + +describe('ContextInjector', () => { + let tmpDir; + + beforeEach(() => { + tmpDir = createTempDir('ci-test-'); + }); + + afterEach(() => { + cleanupTempDir(tmpDir); + }); + + // ── Constructor ───────────────────────────────────────────────────── + + describe('Constructor', () => { + test('creates with defaults', () => { + const ci = new ContextInjector(); + expect(ci.tokenBudget).toBe(4000); + expect(ci.charsPerToken).toBe(4); + expect(ci.cacheTTL).toBe(5 * 60 * 1000); + expect(ci.cache).toBeInstanceOf(Map); + }); + + test('accepts custom config', () => { + const ci = new ContextInjector({ tokenBudget: 2000, charsPerToken: 3 }); + expect(ci.tokenBudget).toBe(2000); + expect(ci.charsPerToken).toBe(3); + }); + + test('accepts injected memory dependencies', () => { + const mq = createMockMemoryQuery(); + const gm = createMockGotchasMemory(); + const sm = createMockSessionMemory(); + const ci = new ContextInjector({ memoryQuery: mq, gotchasMemory: gm, sessionMemory: sm }); + expect(ci.memoryQuery).toBe(mq); + expect(ci.gotchasMemory).toBe(gm); + expect(ci.sessionMemory).toBe(sm); + }); + }); + + // ── Cache 
───────────────────────────────────────────────────────────── + + describe('Cache', () => { + test('setCache and getCached round-trip', () => { + const ci = new ContextInjector(); + ci.setCache('key1', { data: 'test' }); + expect(ci.getCached('key1')).toEqual({ data: 'test' }); + }); + + test('getCached returns null for missing key', () => { + const ci = new ContextInjector(); + expect(ci.getCached('nonexistent')).toBeNull(); + }); + + test('getCached returns null for expired entry', () => { + const ci = new ContextInjector({ cacheTTL: 1 }); // 1ms TTL + ci.setCache('key1', 'value'); + // Force expiration + ci.cache.get('key1').timestamp = Date.now() - 100; + expect(ci.getCached('key1')).toBeNull(); + }); + + test('getCached increments cacheHits', () => { + const ci = new ContextInjector(); + ci.setCache('key1', 'value'); + ci.getCached('key1'); + expect(ci.metrics.cacheHits).toBe(1); + }); + + test('clearCache empties the cache', () => { + const ci = new ContextInjector(); + ci.setCache('a', 1); + ci.setCache('b', 2); + ci.clearCache(); + expect(ci.cache.size).toBe(0); + }); + }); + + // ── getCacheKey ──────────────────────────────────────────────────────── + + describe('getCacheKey', () => { + test('generates key from task type and service', () => { + const ci = new ContextInjector(); + expect(ci.getCacheKey({ type: 'api', service: 'auth' })).toBe('api-auth'); + }); + + test('uses defaults for missing fields', () => { + const ci = new ContextInjector(); + expect(ci.getCacheKey({})).toBe('default-core'); + }); + }); + + // ── inject ──────────────────────────────────────────────────────────── + + describe('inject()', () => { + test('returns formatted context string', async () => { + const ci = new ContextInjector({ rootPath: tmpDir }); + const task = { id: 'task-1', description: 'Test task' }; + const result = await ci.inject(task); + expect(typeof result).toBe('string'); + expect(result).toContain('task-1'); + expect(result).toContain('Test task'); + }); + + 
test('includes acceptance criteria', async () => { + const ci = new ContextInjector({ rootPath: tmpDir }); + const task = { + id: 'task-1', + description: 'Test', + acceptanceCriteria: ['AC1: do X', 'AC2: do Y'], + }; + const result = await ci.inject(task); + expect(result).toContain('AC1: do X'); + }); + + test('updates metrics after injection', async () => { + const ci = new ContextInjector({ rootPath: tmpDir }); + await ci.inject({ id: 'task-1', description: 'Test' }); + expect(ci.metrics.injections).toBe(1); + expect(ci.metrics.avgContextSize).toBeGreaterThan(0); + }); + }); + + // ── getRelevantFiles ────────────────────────────────────────────────── + + describe('getRelevantFiles', () => { + test('includes explicitly specified files', async () => { + const ci = new ContextInjector({ rootPath: tmpDir }); + const task = { id: 't1', description: 'Test', files: ['src/app.js'] }; + const files = await ci.getRelevantFiles(task); + expect(files.length).toBe(1); + expect(files[0].path).toBe('src/app.js'); + expect(files[0].purpose).toBe('Specified in task'); + }); + + test('infers files from backtick paths in description', async () => { + const ci = new ContextInjector({ rootPath: tmpDir }); + const task = { id: 't1', description: 'Update `src/utils.js` and `lib/helper.ts`' }; + const files = await ci.getRelevantFiles(task); + expect(files.some(f => f.path === 'src/utils.js')).toBe(true); + expect(files.some(f => f.path === 'lib/helper.ts')).toBe(true); + }); + + test('limits to 10 files', async () => { + const ci = new ContextInjector({ rootPath: tmpDir }); + const manyFiles = Array.from({ length: 15 }, (_, i) => `file-${i}.js`); + const task = { id: 't1', description: 'Test', files: manyFiles }; + const files = await ci.getRelevantFiles(task); + expect(files.length).toBe(10); + }); + }); + + // ── inferFilesFromDescription ───────────────────────────────────────── + + describe('inferFilesFromDescription', () => { + test('returns empty for null', () => { + const ci 
= new ContextInjector(); + expect(ci.inferFilesFromDescription(null)).toEqual([]); + }); + + test('extracts backtick paths', () => { + const ci = new ContextInjector(); + const result = ci.inferFilesFromDescription('Fix `src/app.js` and `lib/utils.ts`'); + expect(result).toContain('src/app.js'); + expect(result).toContain('lib/utils.ts'); + }); + + test('deduplicates paths', () => { + const ci = new ContextInjector(); + const result = ci.inferFilesFromDescription('`app.js` and `app.js` again'); + expect(result.length).toBe(1); + }); + }); + + // ── Memory integration ──────────────────────────────────────────────── + + describe('Memory integration', () => { + test('getRelevantMemory returns empty without memoryQuery', async () => { + const ci = new ContextInjector({ memoryQuery: null }); + const result = await ci.getRelevantMemory({ id: 't1', description: 'test' }); + expect(result).toEqual([]); + }); + + test('getRelevantMemory queries memory', async () => { + const mq = createMockMemoryQuery({ + query: jest.fn().mockResolvedValue([{ type: 'pattern', content: 'use hooks', score: 0.9 }]), + }); + const ci = new ContextInjector({ memoryQuery: mq }); + const result = await ci.getRelevantMemory({ id: 't1', description: 'component' }); + expect(result.length).toBe(1); + expect(result[0].type).toBe('pattern'); + }); + + test('getRelevantGotchas returns empty without gotchasMemory', async () => { + const ci = new ContextInjector({ gotchasMemory: null }); + expect(await ci.getRelevantGotchas({ id: 't1' })).toEqual([]); + }); + + test('getRecentDecisions returns empty without sessionMemory', async () => { + const ci = new ContextInjector({ sessionMemory: null }); + expect(await ci.getRecentDecisions()).toEqual([]); + }); + }); + + // ── formatForLLM ────────────────────────────────────────────────────── + + describe('formatForLLM', () => { + test('includes task section', () => { + const ci = new ContextInjector(); + const injection = { + task: { id: 'task-1', description: 
'Build feature', acceptanceCriteria: [] }, + project: { patterns: [], conventions: [] }, + files: [], + memory: [], + gotchas: [], + decisions: [], + }; + const result = ci.formatForLLM(injection); + expect(result).toContain('Task Context'); + expect(result).toContain('task-1'); + }); + + test('includes files section when present', () => { + const ci = new ContextInjector(); + const injection = { + task: { id: 't1', description: 'Test', acceptanceCriteria: [] }, + project: { patterns: [], conventions: [] }, + files: [{ path: 'src/app.js', purpose: 'main', exists: true }], + memory: [], + gotchas: [], + decisions: [], + }; + const result = ci.formatForLLM(injection); + expect(result).toContain('src/app.js'); + }); + }); + + // ── trimToTokenBudget ───────────────────────────────────────────────── + + describe('trimToTokenBudget', () => { + test('returns content unchanged if within budget', () => { + const ci = new ContextInjector({ tokenBudget: 1000, charsPerToken: 4 }); + const short = 'Hello world'; + expect(ci.trimToTokenBudget(short, 1000)).toBe(short); + }); + + test('trims content exceeding budget', () => { + const ci = new ContextInjector({ charsPerToken: 1 }); + const long = 'a'.repeat(200); + const result = ci.trimToTokenBudget(long, 50); + expect(result.length).toBeLessThanOrEqual(50); + }); + + test('preserves task section when trimming', () => { + const ci = new ContextInjector({ charsPerToken: 1 }); + const content = '## Task\nImportant\n### Extra\n' + 'x'.repeat(100); + const result = ci.trimToTokenBudget(content, 50); + expect(result).toContain('Task'); + }); + }); + + // ── Metrics ─────────────────────────────────────────────────────────── + + describe('Metrics', () => { + test('getMetrics returns rounded values', () => { + const ci = new ContextInjector(); + ci.updateMetrics('test content', 42); + const metrics = ci.getMetrics(); + expect(metrics.injections).toBe(1); + expect(typeof metrics.avgContextSize).toBe('number'); + expect(typeof 
metrics.avgInjectionTime).toBe('number'); + }); + + test('updateMetrics computes running average', () => { + const ci = new ContextInjector(); + ci.updateMetrics('aaaa', 10); // size=4, time=10 + ci.updateMetrics('bb', 20); // size=2, time=20 + expect(ci.metrics.injections).toBe(2); + expect(ci.metrics.avgContextSize).toBe(3); // (4+2)/2 + expect(ci.metrics.avgInjectionTime).toBe(15); // (10+20)/2 + }); + }); + + // ── formatStatus ────────────────────────────────────────────────────── + + describe('formatStatus', () => { + test('returns formatted status string', () => { + const ci = new ContextInjector(); + const status = ci.formatStatus(); + expect(status).toContain('Context Injector'); + expect(status).toContain('Token Budget'); + }); + }); + + // ── detectConventions ───────────────────────────────────────────────── + + describe('detectConventions', () => { + test('detects TypeScript project', async () => { + const ci = new ContextInjector({ rootPath: tmpDir }); + fs.writeFileSync(path.join(tmpDir, 'tsconfig.json'), '{}'); + const conventions = await ci.detectConventions(); + expect(conventions).toContain('TypeScript project'); + }); + + test('detects tests directory', async () => { + const ci = new ContextInjector({ rootPath: tmpDir }); + fs.mkdirSync(path.join(tmpDir, 'tests')); + const conventions = await ci.detectConventions(); + expect(conventions).toContain('Tests in /tests directory'); + }); + + test('returns empty for bare project', async () => { + const ci = new ContextInjector({ rootPath: tmpDir }); + const conventions = await ci.detectConventions(); + expect(Array.isArray(conventions)).toBe(true); + }); + }); +}); + +``` + +================================================== +📄 tests/core/activation-runtime.test.js +================================================== +```js +'use strict'; + +jest.mock('../../.aios-core/development/scripts/unified-activation-pipeline', () => ({ + UnifiedActivationPipeline: jest.fn().mockImplementation(() => ({ + 
activate: jest.fn(async () => ({ + greeting: 'ok', + context: {}, + duration: 1, + quality: 'full', + metrics: {}, + })), + })), +})); + +const { ActivationRuntime, activateAgent } = require('../../.aios-core/development/scripts/activation-runtime'); +const { UnifiedActivationPipeline } = require('../../.aios-core/development/scripts/unified-activation-pipeline'); + +describe('ActivationRuntime', () => { + beforeEach(() => { + jest.clearAllMocks(); + }); + + it('uses UnifiedActivationPipeline as canonical backend', async () => { + const runtime = new ActivationRuntime(); + const result = await runtime.activate('dev'); + + expect(UnifiedActivationPipeline).toHaveBeenCalledTimes(1); + expect(result.greeting).toBe('ok'); + }); + + it('returns greeting-only helper', async () => { + const runtime = new ActivationRuntime(); + const greeting = await runtime.activateGreeting('qa'); + expect(greeting).toBe('ok'); + }); + + it('returns empty string when activate result has no greeting', async () => { + const runtime = new ActivationRuntime(); + runtime.activate = jest.fn(async () => null); + const greeting = await runtime.activateGreeting('qa'); + expect(greeting).toBe(''); + }); + + it('throws descriptive error when activation fails', async () => { + const runtime = new ActivationRuntime(); + runtime.activate = jest.fn(async () => { + throw new Error('pipeline exploded'); + }); + + await expect(runtime.activateGreeting('qa')).rejects.toThrow( + 'ActivationRuntime.activateGreeting failed for "qa": pipeline exploded', + ); + }); + + it('supports one-shot activateAgent helper', async () => { + const result = await activateAgent('architect'); + expect(result.quality).toBe('full'); + }); +}); + +``` + +================================================== +📄 tests/core/agent-invoker.test.js +================================================== +```js +/** + * Agent Invoker Tests + * + * Story: 0.7 - Agent Invocation Interface + * Epic: Epic 0 - ADE Master Orchestrator + * + * Tests 
for agent invocation interface. + * + * @author @dev (Dex) + * @version 1.0.0 + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +const { + AgentInvoker, + SUPPORTED_AGENTS, + InvocationStatus, +} = require('../../.aios-core/core/orchestration/agent-invoker'); + +describe('Agent Invoker (Story 0.7)', () => { + let tempDir; + let invoker; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `agent-invoker-test-${Date.now()}`); + await fs.ensureDir(tempDir); + + // Create agents directory + const agentsDir = path.join(tempDir, '.aios-core', 'development', 'agents'); + await fs.ensureDir(agentsDir); + + // Create sample agent file + await fs.writeFile(path.join(agentsDir, 'dev.md'), '# Developer Agent\n\nDevelops code.'); + + // Create tasks directory + const tasksDir = path.join(tempDir, '.aios-core', 'development', 'tasks'); + await fs.ensureDir(tasksDir); + + // Create sample task file + await fs.writeFile(path.join(tasksDir, 'sample-task.md'), '# Sample Task\n\nDo something.'); + + invoker = new AgentInvoker({ + projectRoot: tempDir, + maxRetries: 2, + }); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + describe('SUPPORTED_AGENTS (AC2)', () => { + it('should include all required agents', () => { + expect(SUPPORTED_AGENTS.pm).toBeDefined(); + expect(SUPPORTED_AGENTS.architect).toBeDefined(); + expect(SUPPORTED_AGENTS.analyst).toBeDefined(); + expect(SUPPORTED_AGENTS.dev).toBeDefined(); + expect(SUPPORTED_AGENTS.qa).toBeDefined(); + }); + + it('should have correct agent structure', () => { + const devAgent = SUPPORTED_AGENTS.dev; + expect(devAgent.name).toBe('dev'); + expect(devAgent.displayName).toBe('Developer'); + expect(devAgent.file).toBe('dev.md'); + expect(devAgent.capabilities).toBeInstanceOf(Array); + }); + }); + + describe('InvocationStatus Enum', () => { + it('should have all required statuses', () => { + expect(InvocationStatus.SUCCESS).toBe('success'); + 
expect(InvocationStatus.FAILED).toBe('failed'); + expect(InvocationStatus.TIMEOUT).toBe('timeout'); + expect(InvocationStatus.SKIPPED).toBe('skipped'); + }); + }); + + describe('Constructor', () => { + it('should initialize with default options', () => { + const inv = new AgentInvoker({ projectRoot: tempDir }); + + expect(inv.projectRoot).toBe(tempDir); + expect(inv.defaultTimeout).toBe(300000); + expect(inv.maxRetries).toBe(3); + }); + + it('should accept custom options', () => { + const inv = new AgentInvoker({ + projectRoot: tempDir, + defaultTimeout: 60000, + maxRetries: 5, + validateOutput: false, + }); + + expect(inv.defaultTimeout).toBe(60000); + expect(inv.maxRetries).toBe(5); + expect(inv.validateOutput).toBe(false); + }); + }); + + describe('invokeAgent (AC1)', () => { + it('should invoke agent and return result', async () => { + const result = await invoker.invokeAgent('dev', 'sample-task', { foo: 'bar' }); + + expect(result).toBeDefined(); + expect(result.success).toBe(true); + expect(result.invocationId).toBeDefined(); + expect(result.agentName).toBe('dev'); + expect(result.taskPath).toBe('sample-task'); + expect(result.duration).toBeDefined(); + }); + + it('should handle agent name with @ prefix', async () => { + const result = await invoker.invokeAgent('@dev', 'sample-task'); + + expect(result.success).toBe(true); + }); + + it('should fail for unknown agent', async () => { + const result = await invoker.invokeAgent('unknown-agent', 'sample-task'); + + expect(result.success).toBe(false); + expect(result.error).toContain('Unknown agent'); + }); + + it('should fail for non-existent task', async () => { + const result = await invoker.invokeAgent('dev', 'non-existent-task'); + + expect(result.success).toBe(false); + expect(result.error).toContain('Task not found'); + }); + }); + + describe('Agent Support (AC2)', () => { + it('getSupportedAgents should return all agents', () => { + const agents = invoker.getSupportedAgents(); + + 
expect(agents.pm).toBeDefined(); + expect(agents.architect).toBeDefined(); + expect(agents.analyst).toBeDefined(); + expect(agents.dev).toBeDefined(); + expect(agents.qa).toBeDefined(); + }); + + it('isAgentSupported should return true for supported agents', () => { + expect(invoker.isAgentSupported('dev')).toBe(true); + expect(invoker.isAgentSupported('@pm')).toBe(true); + expect(invoker.isAgentSupported('QA')).toBe(true); + }); + + it('isAgentSupported should return false for unsupported agents', () => { + expect(invoker.isAgentSupported('unknown')).toBe(false); + expect(invoker.isAgentSupported('bob')).toBe(false); + }); + }); + + describe('Context Building (AC3)', () => { + it('should pass inputs to task', async () => { + const inputs = { key1: 'value1', key2: 'value2' }; + const result = await invoker.invokeAgent('dev', 'sample-task', inputs); + + expect(result.success).toBe(true); + // The simulated result should complete successfully + }); + }); + + describe('Timeout Handling (AC4)', () => { + it('should respect timeout setting', () => { + const inv = new AgentInvoker({ + projectRoot: tempDir, + defaultTimeout: 1000, + }); + + expect(inv.defaultTimeout).toBe(1000); + }); + + it('should timeout on long execution with custom executor', async () => { + const inv = new AgentInvoker({ + projectRoot: tempDir, + defaultTimeout: 100, // Very short timeout + executor: async () => { + // Simulate long execution + await new Promise((resolve) => setTimeout(resolve, 500)); + return { done: true }; + }, + }); + + const result = await inv.invokeAgent('dev', 'sample-task'); + + expect(result.success).toBe(false); + expect(result.error).toContain('timed out'); + }); + }); + + describe('Output Validation (AC5)', () => { + it('should validate output when schema exists', async () => { + // Create task with schema + const tasksDir = path.join(tempDir, '.aios-core', 'development', 'tasks'); + await fs.writeFile( + path.join(tasksDir, 'schema-task.md'), + `--- +title: Task with 
Schema +outputSchema: + required: + - result + properties: + result: + type: string +--- +# Task with Schema +`, + ); + + const inv = new AgentInvoker({ + projectRoot: tempDir, + validateOutput: true, + executor: async () => ({ result: 'success' }), + }); + + const result = await inv.invokeAgent('dev', 'schema-task'); + expect(result.success).toBe(true); + }); + }); + + describe('Retry Logic (AC6)', () => { + it('should retry on transient errors', async () => { + let attempts = 0; + + const inv = new AgentInvoker({ + projectRoot: tempDir, + maxRetries: 3, + executor: async () => { + attempts++; + if (attempts < 2) { + throw new Error('Timeout error - temporary'); + } + return { done: true }; + }, + }); + + const result = await inv.invokeAgent('dev', 'sample-task'); + + expect(result.success).toBe(true); + expect(attempts).toBe(2); + }); + + it('should not retry on non-transient errors', async () => { + let attempts = 0; + + const inv = new AgentInvoker({ + projectRoot: tempDir, + maxRetries: 3, + executor: async () => { + attempts++; + throw new Error('Fatal error'); + }, + }); + + const result = await inv.invokeAgent('dev', 'sample-task'); + + expect(result.success).toBe(false); + expect(attempts).toBe(1); // No retries for non-transient + }); + }); + + describe('Logging and Audit (AC7)', () => { + it('should track all invocations', async () => { + await invoker.invokeAgent('dev', 'sample-task'); + await invoker.invokeAgent('dev', 'sample-task'); + + const invocations = invoker.getInvocations(); + + expect(invocations).toHaveLength(2); + expect(invocations[0].agentName).toBe('dev'); + }); + + it('should get invocation by ID', async () => { + const result = await invoker.invokeAgent('dev', 'sample-task'); + const invocation = invoker.getInvocation(result.invocationId); + + expect(invocation).toBeDefined(); + expect(invocation.id).toBe(result.invocationId); + }); + + it('should get invocations for specific agent', async () => { + // Create pm agent file + const 
agentsDir = path.join(tempDir, '.aios-core', 'development', 'agents'); + await fs.writeFile(path.join(agentsDir, 'pm.md'), '# PM Agent'); + + await invoker.invokeAgent('dev', 'sample-task'); + await invoker.invokeAgent('pm', 'sample-task'); + await invoker.invokeAgent('dev', 'sample-task'); + + const devInvocations = invoker.getInvocationsForAgent('dev'); + expect(devInvocations).toHaveLength(2); + + const pmInvocations = invoker.getInvocationsForAgent('@pm'); + expect(pmInvocations).toHaveLength(1); + }); + + it('should generate invocation summary', async () => { + await invoker.invokeAgent('dev', 'sample-task'); + await invoker.invokeAgent('dev', 'sample-task'); + + const summary = invoker.getInvocationSummary(); + + expect(summary.total).toBe(2); + expect(summary.byStatus[InvocationStatus.SUCCESS]).toBe(2); + expect(summary.byAgent.dev).toBe(2); + expect(summary.averageDuration).toBeGreaterThanOrEqual(0); // May be 0 for very fast simulated execution + }); + + it('should track logs', async () => { + await invoker.invokeAgent('dev', 'sample-task'); + + const logs = invoker.getLogs(); + + expect(logs.length).toBeGreaterThan(0); + expect(logs[0]).toHaveProperty('timestamp'); + expect(logs[0]).toHaveProperty('level'); + expect(logs[0]).toHaveProperty('message'); + }); + + it('should clear invocation history', async () => { + await invoker.invokeAgent('dev', 'sample-task'); + expect(invoker.getInvocations().length).toBeGreaterThan(0); + + invoker.clearInvocations(); + + expect(invoker.getInvocations()).toHaveLength(0); + expect(invoker.getLogs()).toHaveLength(0); + }); + }); + + describe('Event Emitter', () => { + it('should emit invocationComplete event', async () => { + let emittedData = null; + invoker.on('invocationComplete', (data) => { + emittedData = data; + }); + + await invoker.invokeAgent('dev', 'sample-task'); + + expect(emittedData).toBeDefined(); + expect(emittedData.agentName).toBe('dev'); + expect(emittedData.status).toBe(InvocationStatus.SUCCESS); + 
}); + + it('should emit invocationFailed event', async () => { + let emittedData = null; + invoker.on('invocationFailed', (data) => { + emittedData = data; + }); + + await invoker.invokeAgent('unknown', 'sample-task'); + + expect(emittedData).toBeDefined(); + expect(emittedData.status).toBe(InvocationStatus.FAILED); + }); + }); +}); + +describe('Integration with MasterOrchestrator', () => { + let tempDir; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `agent-integration-test-${Date.now()}`); + await fs.ensureDir(tempDir); + + // Create agents directory + const agentsDir = path.join(tempDir, '.aios-core', 'development', 'agents'); + await fs.ensureDir(agentsDir); + await fs.writeFile(path.join(agentsDir, 'dev.md'), '# Dev Agent'); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + it('should integrate AgentInvoker with MasterOrchestrator', async () => { + const { MasterOrchestrator } = require('../../.aios-core/core/orchestration'); + + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + expect(orchestrator.agentInvoker).toBeDefined(); + expect(orchestrator.agentInvoker).toBeInstanceOf(AgentInvoker); + }); + + it('should expose getAgentInvoker method', async () => { + const { MasterOrchestrator } = require('../../.aios-core/core/orchestration'); + + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + const invoker = orchestrator.getAgentInvoker(); + expect(invoker).toBeDefined(); + expect(invoker).toBeInstanceOf(AgentInvoker); + }); + + it('should expose getSupportedAgents method', async () => { + const { MasterOrchestrator } = require('../../.aios-core/core/orchestration'); + + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + const agents = orchestrator.getSupportedAgents(); + expect(agents.dev).toBeDefined(); + expect(agents.pm).toBeDefined(); + }); + + it('should invoke agent through orchestrator', async () => { + 
const { MasterOrchestrator } = require('../../.aios-core/core/orchestration'); + + // Create tasks directory + const tasksDir = path.join(tempDir, '.aios-core', 'development', 'tasks'); + await fs.ensureDir(tasksDir); + await fs.writeFile(path.join(tasksDir, 'test-task.md'), '# Test Task'); + + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + const result = await orchestrator.invokeAgentForTask('dev', 'test-task', { key: 'value' }); + + expect(result).toBeDefined(); + expect(result.success).toBe(true); + }); +}); + +``` + +================================================== +📄 tests/core/dashboard-integration.test.js +================================================== +```js +/** + * Dashboard Integration Tests + * + * Story: 0.8 - Dashboard Integration + * Epic: Epic 0 - ADE Master Orchestrator + * + * Tests for dashboard integration with orchestrator. + * + * @author @dev (Dex) + * @version 1.0.0 + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +const { + DashboardIntegration, + NotificationType, +} = require('../../.aios-core/core/orchestration/dashboard-integration'); + +const { MasterOrchestrator, OrchestratorState } = require('../../.aios-core/core/orchestration'); + +describe('Dashboard Integration (Story 0.8)', () => { + let tempDir; + let orchestrator; + let dashboard; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `dashboard-test-${Date.now()}`); + await fs.ensureDir(tempDir); + + orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + dashboardAutoUpdate: false, // Disable auto-update for tests + }); + + dashboard = new DashboardIntegration({ + projectRoot: tempDir, + orchestrator, + autoUpdate: false, + }); + }); + + afterEach(async () => { + dashboard.stop(); + await fs.remove(tempDir); + }); + + describe('NotificationType Enum (AC7)', () => { + it('should have all notification types', () => { + 
expect(NotificationType.INFO).toBe('info'); + expect(NotificationType.SUCCESS).toBe('success'); + expect(NotificationType.WARNING).toBe('warning'); + expect(NotificationType.ERROR).toBe('error'); + expect(NotificationType.BLOCKED).toBe('blocked'); + expect(NotificationType.COMPLETE).toBe('complete'); + }); + }); + + describe('Constructor', () => { + it('should initialize with default options', () => { + const d = new DashboardIntegration({ projectRoot: tempDir }); + + expect(d.projectRoot).toBe(tempDir); + expect(d.autoUpdate).toBe(true); + expect(d.updateInterval).toBe(5000); + }); + + it('should accept custom options', () => { + const d = new DashboardIntegration({ + projectRoot: tempDir, + autoUpdate: false, + updateInterval: 1000, + }); + + expect(d.autoUpdate).toBe(false); + expect(d.updateInterval).toBe(1000); + }); + + it('should bind orchestrator events when provided', () => { + const d = new DashboardIntegration({ + projectRoot: tempDir, + orchestrator, + }); + + // Should have event bindings (can't easily test internals) + expect(d.orchestrator).toBe(orchestrator); + }); + }); + + describe('Status File Update (AC1)', () => { + it('should update status file', async () => { + await dashboard.start(); + await dashboard.updateStatus(); + + expect(await fs.pathExists(dashboard.statusPath)).toBe(true); + }); + + it('should write valid JSON', async () => { + await dashboard.start(); + await dashboard.updateStatus(); + + const content = await fs.readJson(dashboard.statusPath); + expect(content).toBeDefined(); + expect(content.orchestrator).toBeDefined(); + }); + + it('should return status path', () => { + const statusPath = dashboard.getStatusPath(); + // Normalize path separators for cross-platform compatibility (Windows uses \, Unix uses /) + const normalizedPath = statusPath.replace(/\\/g, '/'); + expect(normalizedPath).toContain('.aios/dashboard/status.json'); + }); + }); + + describe('Status Content (AC2)', () => { + it('should include currentEpic', async () 
=> { + const status = dashboard.buildStatus(); + + expect(status.orchestrator['TEST-001']).toBeDefined(); + expect(status.orchestrator['TEST-001'].currentEpic).toBeDefined(); + }); + + it('should include progress', async () => { + const status = dashboard.buildStatus(); + + expect(status.orchestrator['TEST-001'].progress).toBeDefined(); + expect(status.orchestrator['TEST-001'].progress.overall).toBeDefined(); + }); + + it('should include timestamps', async () => { + const status = dashboard.buildStatus(); + + expect(status.orchestrator['TEST-001'].updatedAt).toBeDefined(); + }); + + it('should include status flags', async () => { + const status = dashboard.buildStatus(); + + expect(status.orchestrator['TEST-001'].blocked).toBeDefined(); + }); + }); + + describe('Event Emitter (AC3)', () => { + it('should emit statusUpdated event', async () => { + let emitted = false; + dashboard.on('statusUpdated', () => { + emitted = true; + }); + + await dashboard.start(); + await dashboard.updateStatus(); + + expect(emitted).toBe(true); + }); + + it('should emit started event', async () => { + let emitted = false; + dashboard.on('started', () => { + emitted = true; + }); + + await dashboard.start(); + + expect(emitted).toBe(true); + }); + + it('should emit stopped event', async () => { + let emitted = false; + dashboard.on('stopped', () => { + emitted = true; + }); + + await dashboard.start(); + dashboard.stop(); + + expect(emitted).toBe(true); + }); + }); + + describe('Progress Percentage (AC4)', () => { + it('should return progress percentage', () => { + const progress = dashboard.getProgressPercentage(); + + expect(typeof progress).toBe('number'); + expect(progress).toBeGreaterThanOrEqual(0); + expect(progress).toBeLessThanOrEqual(100); + }); + }); + + describe('History (AC5)', () => { + it('should track history entries', () => { + dashboard.addToHistory({ + type: 'epicComplete', + epicNum: 3, + timestamp: new Date().toISOString(), + }); + + const history = 
dashboard.getHistory(); + + expect(history).toHaveLength(1); + expect(history[0].type).toBe('epicComplete'); + }); + + it('should limit history to 100 entries', () => { + for (let i = 0; i < 150; i++) { + dashboard.addToHistory({ type: 'test', index: i }); + } + + const history = dashboard.getHistory(); + + expect(history).toHaveLength(100); + // Should keep latest entries + expect(history[0].index).toBe(50); + }); + + it('should get history for specific epic', () => { + dashboard.addToHistory({ type: 'epicComplete', epicNum: 3 }); + dashboard.addToHistory({ type: 'epicComplete', epicNum: 4 }); + dashboard.addToHistory({ type: 'epicFailed', epicNum: 3 }); + + const epic3History = dashboard.getHistoryForEpic(3); + + expect(epic3History).toHaveLength(2); + }); + }); + + describe('Logs Path (AC6)', () => { + it('should include logsPath in status', async () => { + const status = dashboard.buildStatus(); + + expect(status.orchestrator['TEST-001'].logsPath).toBeDefined(); + expect(status.orchestrator['TEST-001'].logsPath).toContain('TEST-001.log'); + }); + }); + + describe('Notifications (AC7)', () => { + it('should add notifications', () => { + dashboard.addNotification({ + type: NotificationType.INFO, + title: 'Test', + message: 'Test message', + }); + + const notifications = dashboard.getNotifications(); + + expect(notifications).toHaveLength(1); + expect(notifications[0].title).toBe('Test'); + expect(notifications[0].read).toBe(false); + }); + + it('should emit notification event', () => { + let emitted = null; + dashboard.on('notification', (notif) => { + emitted = notif; + }); + + dashboard.addNotification({ + type: NotificationType.WARNING, + title: 'Warning', + message: 'Warning message', + }); + + expect(emitted).toBeDefined(); + expect(emitted.type).toBe(NotificationType.WARNING); + }); + + it('should get unread notifications only', () => { + dashboard.addNotification({ type: NotificationType.INFO, title: '1' }); + dashboard.addNotification({ type: 
NotificationType.INFO, title: '2' }); + + // Mark first as read + const notifs = dashboard.getNotifications(); + dashboard.markNotificationRead(notifs[0].id); + + const unread = dashboard.getNotifications(true); + expect(unread).toHaveLength(1); + expect(unread[0].title).toBe('2'); + }); + + it('should mark all notifications as read', () => { + dashboard.addNotification({ type: NotificationType.INFO, title: '1' }); + dashboard.addNotification({ type: NotificationType.INFO, title: '2' }); + + dashboard.markAllNotificationsRead(); + + const unread = dashboard.getNotifications(true); + expect(unread).toHaveLength(0); + }); + + it('should clear notifications', () => { + dashboard.addNotification({ type: NotificationType.INFO, title: '1' }); + dashboard.addNotification({ type: NotificationType.INFO, title: '2' }); + + dashboard.clearNotifications(); + + expect(dashboard.getNotifications()).toHaveLength(0); + }); + + it('should limit notifications to 50', () => { + for (let i = 0; i < 60; i++) { + dashboard.addNotification({ type: NotificationType.INFO, title: `${i}` }); + } + + const notifications = dashboard.getNotifications(); + expect(notifications).toHaveLength(50); + }); + }); + + describe('Read Status', () => { + it('should read status from file', async () => { + await dashboard.start(); + await dashboard.updateStatus(); + + const status = await dashboard.readStatus(); + + expect(status).toBeDefined(); + expect(status.orchestrator['TEST-001']).toBeDefined(); + }); + + it('should return null if file does not exist', async () => { + const status = await dashboard.readStatus(); + + expect(status).toBeNull(); + }); + }); + + describe('Clear', () => { + it('should clear all state', () => { + dashboard.addToHistory({ type: 'test' }); + dashboard.addNotification({ type: NotificationType.INFO, title: 'test' }); + + dashboard.clear(); + + expect(dashboard.getHistory()).toHaveLength(0); + expect(dashboard.getNotifications()).toHaveLength(0); + }); + }); +}); + 
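+// For orientation: a minimal, hypothetical usage sketch of the API the tests
+// above exercise. The names mirror this file's imports; the exact wiring and
+// option values are assumptions for illustration, not a verified example.
+//
+//   const dashboard = new DashboardIntegration({ projectRoot, autoUpdate: false });
+//   await dashboard.start();            // writes .aios/dashboard/status.json
+//   dashboard.addNotification({
+//     type: NotificationType.INFO,
+//     title: 'Build done',
+//     message: 'Epic complete',
+//   });
+//   dashboard.stop();
+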
+describe('Integration with MasterOrchestrator', () => { + let tempDir; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `dashboard-orch-test-${Date.now()}`); + await fs.ensureDir(tempDir); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + it('should integrate DashboardIntegration with MasterOrchestrator', async () => { + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + expect(orchestrator.dashboardIntegration).toBeDefined(); + expect(orchestrator.dashboardIntegration).toBeInstanceOf(DashboardIntegration); + }); + + it('should expose getDashboardIntegration method', async () => { + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + const dashboard = orchestrator.getDashboardIntegration(); + expect(dashboard).toBeDefined(); + expect(dashboard).toBeInstanceOf(DashboardIntegration); + }); + + it('should expose getDashboardStatus method', async () => { + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + const status = orchestrator.getDashboardStatus(); + expect(status).toBeDefined(); + expect(status.orchestrator['TEST-001']).toBeDefined(); + }); + + it('should expose getExecutionHistory method', async () => { + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + const history = orchestrator.getExecutionHistory(); + expect(Array.isArray(history)).toBe(true); + }); + + it('should expose getNotifications method', async () => { + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + const notifications = orchestrator.getNotifications(); + expect(Array.isArray(notifications)).toBe(true); + }); + + it('should allow adding notifications', async () => { + const orchestrator = new MasterOrchestrator(tempDir, { + storyId: 'TEST-001', + }); + + orchestrator.addNotification({ + type: NotificationType.INFO, + title: 'Test', + message: 'Test message', + }); + + 
const notifications = orchestrator.getNotifications(); + expect(notifications).toHaveLength(1); + }); +}); + +``` + +================================================== +📄 tests/core/workflow-executor.test.js +================================================== +```js +/** + * Workflow Executor Tests + * + * Story 11.3: Development Cycle Workflow + * + * Tests for the WorkflowExecutor module which provides + * orchestrated development cycle execution. + */ + +'use strict'; + +const path = require('path'); +const fs = require('fs').promises; +const os = require('os'); + +// Module under test +const { + WorkflowExecutor, + createWorkflowExecutor, + executeDevelopmentCycle, + PhaseStatus, + CheckpointDecision, +} = require('../../.aios-core/core/orchestration/workflow-executor'); + +describe('WorkflowExecutor', () => { + const projectRoot = path.join(__dirname, '../..'); + let tmpDir; + + beforeAll(async () => { + tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'workflow-executor-test-')); + }); + + afterAll(async () => { + // Cleanup temp directory + await fs.rm(tmpDir, { recursive: true, force: true }).catch(() => {}); + }); + + // ============================================ + // Module Structure Tests + // ============================================ + describe('Module Structure', () => { + test('should export WorkflowExecutor class', () => { + expect(WorkflowExecutor).toBeDefined(); + expect(typeof WorkflowExecutor).toBe('function'); + }); + + test('should export factory function', () => { + expect(createWorkflowExecutor).toBeDefined(); + expect(typeof createWorkflowExecutor).toBe('function'); + }); + + test('should export executeDevelopmentCycle function', () => { + expect(executeDevelopmentCycle).toBeDefined(); + expect(typeof executeDevelopmentCycle).toBe('function'); + }); + + test('should export PhaseStatus enum', () => { + expect(PhaseStatus).toBeDefined(); + expect(PhaseStatus.PENDING).toBe('pending'); + expect(PhaseStatus.RUNNING).toBe('running'); + 
expect(PhaseStatus.COMPLETED).toBe('completed'); + expect(PhaseStatus.FAILED).toBe('failed'); + expect(PhaseStatus.SKIPPED).toBe('skipped'); + }); + + test('should export CheckpointDecision enum', () => { + expect(CheckpointDecision).toBeDefined(); + expect(CheckpointDecision.GO).toBe('GO'); + expect(CheckpointDecision.PAUSE).toBe('PAUSE'); + expect(CheckpointDecision.REVIEW).toBe('REVIEW'); + expect(CheckpointDecision.ABORT).toBe('ABORT'); + }); + }); + + // ============================================ + // WorkflowExecutor Creation Tests + // ============================================ + describe('WorkflowExecutor Creation', () => { + test('should create executor with project root', () => { + const executor = new WorkflowExecutor(projectRoot); + expect(executor).toBeInstanceOf(WorkflowExecutor); + expect(executor.projectRoot).toBe(projectRoot); + }); + + test('should create executor with options', () => { + const executor = new WorkflowExecutor(projectRoot, { + debug: true, + autoResume: false, + saveState: false, + }); + + expect(executor.options.debug).toBe(true); + expect(executor.options.autoResume).toBe(false); + expect(executor.options.saveState).toBe(false); + }); + + test('should use default options if not provided', () => { + const executor = new WorkflowExecutor(projectRoot); + + expect(executor.options.debug).toBe(false); + expect(executor.options.autoResume).toBe(true); + expect(executor.options.saveState).toBe(true); + }); + + test('factory function should create executor', () => { + const executor = createWorkflowExecutor(projectRoot); + expect(executor).toBeInstanceOf(WorkflowExecutor); + }); + }); + + // ============================================ + // Workflow Loading Tests + // ============================================ + describe('Workflow Loading', () => { + test('should load workflow definition', async () => { + const executor = new WorkflowExecutor(projectRoot); + const workflow = await executor.loadWorkflow(); + + 
expect(workflow).toBeDefined(); + expect(workflow.workflow).toBeDefined(); + expect(workflow.workflow.id).toBe('development-cycle'); + expect(workflow.workflow.version).toBe('1.0.0'); + }); + + test('should have all required phases', async () => { + const executor = new WorkflowExecutor(projectRoot); + const workflow = await executor.loadWorkflow(); + + const phases = workflow.workflow.phases; + expect(phases['1_validation']).toBeDefined(); + expect(phases['2_development']).toBeDefined(); + expect(phases['3_self_healing']).toBeDefined(); + expect(phases['4_quality_gate']).toBeDefined(); + expect(phases['5_push']).toBeDefined(); + expect(phases['6_checkpoint']).toBeDefined(); + }); + + test('should have dynamic executor configuration', async () => { + const executor = new WorkflowExecutor(projectRoot); + const workflow = await executor.loadWorkflow(); + + expect(workflow.workflow.phases['2_development'].agent).toBe('${story.executor}'); + expect(workflow.workflow.phases['4_quality_gate'].agent).toBe('${story.quality_gate}'); + }); + }); + + // ============================================ + // Config Loading Tests + // ============================================ + describe('Config Loading', () => { + test('should load core configuration', async () => { + const executor = new WorkflowExecutor(projectRoot); + const config = await executor.loadConfig(); + + expect(config).toBeDefined(); + }); + + test('should handle missing config gracefully', async () => { + const executor = new WorkflowExecutor('/nonexistent/path'); + const config = await executor.loadConfig(); + + expect(config).toBeDefined(); + expect(config.coderabbit_integration.enabled).toBe(false); + }); + }); + + // ============================================ + // State Management Tests + // ============================================ + describe('State Management', () => { + test('should initialize new state', async () => { + const executor = new WorkflowExecutor(projectRoot, { saveState: false }); + await 
executor.loadWorkflow(); + + const state = await executor.initializeState('/fake/story.story.md'); + + expect(state).toBeDefined(); + expect(state.workflowId).toBe('development-cycle'); + expect(state.currentPhase).toBe('1_validation'); + expect(state.currentStory).toBe('/fake/story.story.md'); + expect(state.attemptCount).toBe(0); + }); + + test('should generate correct state file path', () => { + const executor = new WorkflowExecutor(projectRoot); + const stateFile = executor.getStateFilePath('/path/to/11.3.story.md'); + + expect(stateFile).toContain('11.3-state.yaml'); + }); + }); + + // ============================================ + // Agent Resolution Tests + // ============================================ + describe('Agent Resolution', () => { + test('should resolve static agent reference', async () => { + const executor = new WorkflowExecutor(projectRoot); + executor.state = { executor: '@dev', qualityGate: '@architect' }; + + expect(executor.resolveAgent('@po')).toBe('@po'); + expect(executor.resolveAgent('@devops')).toBe('@devops'); + }); + + test('should resolve dynamic executor reference', async () => { + const executor = new WorkflowExecutor(projectRoot); + executor.state = { executor: '@dev', qualityGate: '@architect' }; + + expect(executor.resolveAgent('${story.executor}')).toBe('@dev'); + }); + + test('should resolve dynamic quality_gate reference', async () => { + const executor = new WorkflowExecutor(projectRoot); + executor.state = { executor: '@dev', qualityGate: '@architect' }; + + expect(executor.resolveAgent('${story.quality_gate}')).toBe('@architect'); + }); + }); + + // ============================================ + // Condition Evaluation Tests + // ============================================ + describe('Condition Evaluation', () => { + test('should evaluate CodeRabbit condition as true', async () => { + const executor = new WorkflowExecutor(projectRoot); + executor.config = { coderabbit_integration: { enabled: true } }; + + const result = 
executor.evaluateCondition('${config.coderabbit_integration.enabled} == true'); + expect(result).toBe(true); + }); + + test('should evaluate CodeRabbit condition as false', async () => { + const executor = new WorkflowExecutor(projectRoot); + executor.config = { coderabbit_integration: { enabled: false } }; + + const result = executor.evaluateCondition('${config.coderabbit_integration.enabled} == true'); + expect(result).toBe(false); + }); + + test('should return true for unknown conditions', async () => { + const executor = new WorkflowExecutor(projectRoot); + executor.config = {}; + + const result = executor.evaluateCondition('some.unknown.condition'); + expect(result).toBe(true); + }); + }); + + // ============================================ + // Phase Execution Tests + // ============================================ + describe('Phase Execution', () => { + test('should skip phase when condition not met', async () => { + const executor = new WorkflowExecutor(projectRoot, { saveState: false }); + await executor.loadWorkflow(); + executor.config = { coderabbit_integration: { enabled: false } }; + executor.state = { + executor: '@dev', + qualityGate: '@architect', + phaseResults: {}, + accumulatedContext: {}, + }; + + const result = await executor.executePhase('3_self_healing', '/fake/story.md', {}); + + expect(result.status).toBe(PhaseStatus.SKIPPED); + expect(result.reason).toBe('Condition not met'); + }); + + test('should fail validation for missing executor', async () => { + const executor = new WorkflowExecutor(projectRoot, { saveState: false }); + await executor.loadWorkflow(); + await executor.loadConfig(); + + // Create a mock story file + const mockStoryPath = path.join(tmpDir, 'test-story.story.md'); + await fs.writeFile( + mockStoryPath, + ` +# Test Story + +\`\`\`yaml +story_id: "test" +status: "Approved" +\`\`\` + `, + ); + + executor.state = { + executor: null, + qualityGate: null, + accumulatedContext: {}, + }; + + const result = await 
executor.executeValidationPhase({}, '@po', mockStoryPath, {}); + + expect(result.status).toBe(PhaseStatus.FAILED); + expect(result.validation_result.issues).toContain('Story must have an executor assigned'); + }); + + test('should fail validation when executor equals quality_gate', async () => { + const executor = new WorkflowExecutor(projectRoot, { saveState: false }); + await executor.loadWorkflow(); + await executor.loadConfig(); + + // Create a mock story file + const mockStoryPath = path.join(tmpDir, 'test-story2.story.md'); + await fs.writeFile( + mockStoryPath, + ` +# Test Story + +\`\`\`yaml +story_id: "test" +status: "Approved" +executor: "@dev" +quality_gate: "@dev" +\`\`\` + `, + ); + + executor.state = { + executor: '@dev', + qualityGate: '@dev', + accumulatedContext: {}, + }; + + const result = await executor.executeValidationPhase({}, '@po', mockStoryPath, {}); + + expect(result.status).toBe(PhaseStatus.FAILED); + expect(result.validation_result.issues).toContain( + 'Executor and Quality Gate must be different agents', + ); + }); + + test('should pass validation with correct story', async () => { + const executor = new WorkflowExecutor(projectRoot, { saveState: false }); + await executor.loadWorkflow(); + await executor.loadConfig(); + + // Create a mock story file + const mockStoryPath = path.join(tmpDir, 'test-story3.story.md'); + await fs.writeFile( + mockStoryPath, + ` +# Test Story + +\`\`\`yaml +story_id: "test" +status: "Approved" +executor: "@dev" +quality_gate: "@architect" +\`\`\` + `, + ); + + executor.state = { + executor: '@dev', + qualityGate: '@architect', + accumulatedContext: {}, + }; + + const result = await executor.executeValidationPhase({}, '@po', mockStoryPath, {}); + + expect(result.status).toBe(PhaseStatus.COMPLETED); + expect(result.validation_result.passed).toBe(true); + }); + }); + + // ============================================ + // Checkpoint Decision Tests + // ============================================ + 
describe('Checkpoint Decisions', () => { + test('should return checkpoint with options', async () => { + const executor = new WorkflowExecutor(projectRoot, { saveState: false }); + await executor.loadWorkflow(); + executor.state = { + executor: '@dev', + qualityGate: '@architect', + phaseResults: {}, + }; + + const result = await executor.executeCheckpointPhase({}, '@po', '/fake/story.md'); + + expect(result.status).toBe(PhaseStatus.COMPLETED); + expect(result.options).toBeDefined(); + expect(result.options.GO).toBeDefined(); + expect(result.options.PAUSE).toBeDefined(); + expect(result.options.REVIEW).toBeDefined(); + expect(result.options.ABORT).toBeDefined(); + }); + + test('GO decision should return to validation phase', async () => { + const executor = new WorkflowExecutor(projectRoot); + await executor.loadWorkflow(); + + const nextPhase = executor.getNextPhase('6_checkpoint', { + status: PhaseStatus.COMPLETED, + decision: CheckpointDecision.GO, + }); + + expect(nextPhase).toBe('1_validation'); + }); + + test('PAUSE decision should return workflow_paused', async () => { + const executor = new WorkflowExecutor(projectRoot); + await executor.loadWorkflow(); + + const nextPhase = executor.getNextPhase('6_checkpoint', { + status: PhaseStatus.COMPLETED, + decision: CheckpointDecision.PAUSE, + }); + + expect(nextPhase).toBe('workflow_paused'); + }); + + test('ABORT decision should return workflow_aborted', async () => { + const executor = new WorkflowExecutor(projectRoot); + await executor.loadWorkflow(); + + const nextPhase = executor.getNextPhase('6_checkpoint', { + status: PhaseStatus.COMPLETED, + decision: CheckpointDecision.ABORT, + }); + + expect(nextPhase).toBe('workflow_aborted'); + }); + }); + + // ============================================ + // Error Handling Tests + // ============================================ + describe('Error Handling', () => { + test('should handle unknown phase gracefully', async () => { + const executor = new 
WorkflowExecutor(projectRoot, { saveState: false }); + await executor.loadWorkflow(); + executor.state = { phaseResults: {}, accumulatedContext: {} }; + + const result = await executor.executePhase('unknown_phase', '/fake/story.md', {}); + + expect(result.status).toBe(PhaseStatus.FAILED); + expect(result.error).toContain('Phase not found'); + }); + + test('should return retry for return_to_development handler', async () => { + const executor = new WorkflowExecutor(projectRoot); + await executor.loadWorkflow(); + executor.state = { attemptCount: 0 }; + + const result = await executor.handleError('return_to_development', {}); + + expect(result.nextPhase).toBe('2_development'); + }); + + test('should not retry after max attempts', async () => { + const executor = new WorkflowExecutor(projectRoot); + await executor.loadWorkflow(); + executor.state = { attemptCount: 5 }; + + const result = await executor.handleError('return_to_development', {}); + + expect(result.retry).toBe(false); + expect(result.escalate).toBe(true); + }); + }); + + // ============================================ + // Self-Healing Tests + // ============================================ + describe('Self-Healing with CodeRabbit', () => { + test('should skip self-healing when CodeRabbit not enabled', async () => { + const executor = new WorkflowExecutor(projectRoot, { saveState: false }); + await executor.loadWorkflow(); + executor.config = { coderabbit_integration: { enabled: false } }; + executor.state = { + executor: '@dev', + qualityGate: '@architect', + phaseResults: {}, + }; + + const result = await executor.executeSelfHealingPhase({ config: {} }, '@dev'); + + expect(result.status).toBe(PhaseStatus.SKIPPED); + expect(result.reason).toBe('CodeRabbit integration not enabled'); + }); + + test('should parse JSON-formatted CodeRabbit output', () => { + const executor = new WorkflowExecutor(projectRoot); + + const output = ` +Some header text +\`\`\`json +[ + {"file": "src/index.js", "line": 10, 
"severity": "HIGH", "message": "Unused variable"}, + {"file": "src/utils.js", "line": 25, "severity": "CRITICAL", "message": "SQL injection risk"} +] +\`\`\` +Some footer text + `; + + const issues = executor.parseCodeRabbitOutput(output); + + expect(issues).toHaveLength(2); + expect(issues[0].file).toBe('src/index.js'); + expect(issues[0].severity).toBe('HIGH'); + expect(issues[1].severity).toBe('CRITICAL'); + }); + + test('should parse text-formatted CodeRabbit output', () => { + const executor = new WorkflowExecutor(projectRoot); + + const output = ` +[CRITICAL] src/auth.js:15 Hardcoded credentials detected +[HIGH] src/api.js:42 Missing input validation +[MEDIUM] src/utils.js:8 Consider using const + `; + + const issues = executor.parseCodeRabbitOutput(output); + + expect(issues).toHaveLength(3); + expect(issues[0].severity).toBe('CRITICAL'); + expect(issues[0].file).toBe('src/auth.js'); + expect(issues[0].line).toBe(15); + expect(issues[1].severity).toBe('HIGH'); + expect(issues[2].severity).toBe('MEDIUM'); + }); + + test('should return empty array for empty output', () => { + const executor = new WorkflowExecutor(projectRoot); + + const issues = executor.parseCodeRabbitOutput(''); + expect(issues).toHaveLength(0); + + const issues2 = executor.parseCodeRabbitOutput(null); + expect(issues2).toHaveLength(0); + }); + + test('should filter issues by severity', async () => { + const executor = new WorkflowExecutor(projectRoot, { saveState: false, debug: false }); + await executor.loadWorkflow(); + executor.config = { + coderabbit_integration: { + enabled: true, + graceful_degradation: { skip_if_not_installed: true }, + }, + }; + executor.state = { phaseResults: {} }; + + // Mock runCodeRabbitAnalysis to return issues first, then empty + const originalRun = executor.runCodeRabbitAnalysis.bind(executor); + let callCount = 0; + executor.runCodeRabbitAnalysis = jest.fn().mockImplementation(() => { + callCount++; + if (callCount === 1) { + return Promise.resolve({ + 
success: true, + issues: [ + { file: 'a.js', line: 1, severity: 'CRITICAL', message: 'Critical issue' }, + { file: 'b.js', line: 2, severity: 'LOW', message: 'Low issue' }, + ], + }); + } + // Second call returns no issues (simulating issues were addressed) + return Promise.resolve({ success: true, issues: [] }); + }); + + const result = await executor.executeSelfHealingPhase( + { config: { severity_filter: ['CRITICAL'] } }, + '@dev', + ); + + expect(result.status).toBe(PhaseStatus.COMPLETED); + // Only CRITICAL should be in remaining (since attemptAutoFix returns false) + expect(result.healed_code.issues_remaining).toHaveLength(1); + expect(result.healed_code.issues_remaining[0].severity).toBe('CRITICAL'); + + executor.runCodeRabbitAnalysis = originalRun; + }); + + test('should handle CodeRabbit not installed gracefully', async () => { + const executor = new WorkflowExecutor(projectRoot, { saveState: false, debug: false }); + await executor.loadWorkflow(); + executor.config = { + coderabbit_integration: { + enabled: true, + graceful_degradation: { + skip_if_not_installed: true, + fallback_message: 'CodeRabbit not installed', + }, + }, + }; + executor.state = { phaseResults: {} }; + + // Mock runCodeRabbitAnalysis to simulate not installed + executor.runCodeRabbitAnalysis = jest.fn().mockResolvedValue({ + success: false, + error: 'CodeRabbit CLI not installed', + issues: [], + }); + + const result = await executor.executeSelfHealingPhase({ config: {} }, '@dev'); + + expect(result.status).toBe(PhaseStatus.COMPLETED); + expect(result.healed_code.note).toBeDefined(); + }); + + test('attemptAutoFix should return false for MVP', async () => { + const executor = new WorkflowExecutor(projectRoot, { debug: false }); + + const result = await executor.attemptAutoFix({ + file: 'test.js', + line: 10, + severity: 'HIGH', + message: 'Test issue', + }); + + expect(result).toBe(false); + }); + }); + + // ============================================ + // Integration Tests + // 
============================================ + describe('Integration with Orchestration Index', () => { + test('should be exported from orchestration index', () => { + const orchestration = require('../../.aios-core/core/orchestration'); + + expect(orchestration.WorkflowExecutor).toBeDefined(); + expect(orchestration.createWorkflowExecutor).toBeDefined(); + expect(orchestration.executeDevelopmentCycle).toBeDefined(); + expect(orchestration.PhaseStatus).toBeDefined(); + expect(orchestration.CheckpointDecision).toBeDefined(); + }); + }); +}); + +// ============================================ +// development-cycle.yaml Tests +// ============================================ +describe('development-cycle.yaml', () => { + const projectRoot = path.join(__dirname, '../..'); + const yaml = require('js-yaml'); + + test('should be valid YAML', async () => { + const content = await fs.readFile( + path.join(projectRoot, '.aios-core/development/workflows/development-cycle.yaml'), + 'utf8', + ); + + expect(() => yaml.load(content)).not.toThrow(); + }); + + test('should have required workflow structure', async () => { + const content = await fs.readFile( + path.join(projectRoot, '.aios-core/development/workflows/development-cycle.yaml'), + 'utf8', + ); + const workflow = yaml.load(content); + + expect(workflow.workflow.id).toBe('development-cycle'); + expect(workflow.workflow.name).toContain('Development Cycle'); + expect(workflow.workflow.orchestrator).toBe('@po'); + expect(workflow.workflow.phases).toBeDefined(); + expect(workflow.workflow.error_handlers).toBeDefined(); + }); + + test('should have all 6 phases', async () => { + const content = await fs.readFile( + path.join(projectRoot, '.aios-core/development/workflows/development-cycle.yaml'), + 'utf8', + ); + const workflow = yaml.load(content); + const phases = Object.keys(workflow.workflow.phases); + + expect(phases).toContain('1_validation'); + expect(phases).toContain('2_development'); + 
expect(phases).toContain('3_self_healing'); + expect(phases).toContain('4_quality_gate'); + expect(phases).toContain('5_push'); + expect(phases).toContain('6_checkpoint'); + }); + + test('checkpoint phase should have elicit: true', async () => { + const content = await fs.readFile( + path.join(projectRoot, '.aios-core/development/workflows/development-cycle.yaml'), + 'utf8', + ); + const workflow = yaml.load(content); + + expect(workflow.workflow.phases['6_checkpoint'].elicit).toBe(true); + }); + + test('self_healing phase should have condition', async () => { + const content = await fs.readFile( + path.join(projectRoot, '.aios-core/development/workflows/development-cycle.yaml'), + 'utf8', + ); + const workflow = yaml.load(content); + + expect(workflow.workflow.phases['3_self_healing'].condition).toBeDefined(); + expect(workflow.workflow.phases['3_self_healing'].condition).toContain( + 'coderabbit_integration.enabled', + ); + }); +}); + +``` + +================================================== +📄 tests/core/registry-loader.test.js +================================================== +```js +/** + * Service Registry Loader Tests + * + * Tests for the ServiceRegistry class including: + * - Cache behavior + * - Index building + * - Query methods + * - Search functionality + * + * @story TD-6 - CI Stability & Test Coverage Improvements + */ + +const path = require('path'); + +// Mock fs.promises before requiring the module +jest.mock('fs', () => ({ + promises: { + readFile: jest.fn(), + }, +})); + +const fs = require('fs').promises; +const { ServiceRegistry, getRegistry } = require('../../.aios-core/core/registry/registry-loader'); + +// Set timeout for all tests +jest.setTimeout(30000); + +/** + * Create mock registry data + */ +function createMockRegistryData() { + return { + version: '1.0.0', + generated: '2026-01-04T00:00:00.000Z', + totalWorkers: 4, + categories: { + 'data-processing': 2, + 'api-integration': 2, + }, + workers: [ + { + id: 'worker-1', + name: 
'Data Processor', + description: 'Processes data efficiently', + category: 'data-processing', + tags: ['fast', 'reliable'], + agents: ['dev', 'analyst'], + }, + { + id: 'worker-2', + name: 'Data Validator', + description: 'Validates data formats', + category: 'data-processing', + tags: ['validation', 'reliable'], + agents: ['qa'], + }, + { + id: 'worker-3', + name: 'API Client', + description: 'HTTP API client', + category: 'api-integration', + tags: ['http', 'rest'], + agents: ['dev'], + }, + { + id: 'worker-4', + name: 'GraphQL Client', + description: 'GraphQL API client', + category: 'api-integration', + tags: ['graphql', 'api'], + agents: ['dev', 'architect'], + }, + ], + }; +} + +describe('ServiceRegistry', () => { + let registry; + let mockData; + + beforeEach(() => { + mockData = createMockRegistryData(); + fs.readFile.mockResolvedValue(JSON.stringify(mockData)); + registry = new ServiceRegistry({ registryPath: '/mock/path.json' }); + }); + + afterEach(() => { + jest.clearAllMocks(); + }); + + describe('Constructor', () => { + it('should create instance with default config', () => { + const reg = new ServiceRegistry(); + + expect(reg).toBeDefined(); + expect(reg.registryPath).toBeNull(); + expect(reg.cache).toBeNull(); + expect(reg.cacheTimestamp).toBe(0); + }); + + it('should create instance with custom path', () => { + const reg = new ServiceRegistry({ registryPath: '/custom/path.json' }); + + expect(reg.registryPath).toBe('/custom/path.json'); + }); + + it('should create instance with custom cache TTL', () => { + const reg = new ServiceRegistry({ cacheTTL: 60000 }); + + expect(reg.cacheTTL).toBe(60000); + }); + + it('should initialize empty index maps', () => { + const reg = new ServiceRegistry(); + + expect(reg._byId).toBeInstanceOf(Map); + expect(reg._byCategory).toBeInstanceOf(Map); + expect(reg._byTag).toBeInstanceOf(Map); + expect(reg._byAgent).toBeInstanceOf(Map); + }); + }); + + describe('load', () => { + it('should load registry from file', async 
() => { + const data = await registry.load(); + + expect(data).toEqual(mockData); + expect(fs.readFile).toHaveBeenCalledTimes(1); + }); + + it('should cache loaded data', async () => { + await registry.load(); + await registry.load(); + + // Should only read file once due to cache + expect(fs.readFile).toHaveBeenCalledTimes(1); + }); + + it('should force reload when force=true', async () => { + await registry.load(); + await registry.load(true); + + expect(fs.readFile).toHaveBeenCalledTimes(2); + }); + + it('should build indexes after loading', async () => { + await registry.load(); + + expect(registry._byId.size).toBe(4); + expect(registry._byCategory.size).toBe(2); + expect(registry._byTag.size).toBeGreaterThan(0); + }); + + it('should throw error on file read failure', async () => { + fs.readFile.mockRejectedValue(new Error('File not found')); + + await expect(registry.load()).rejects.toThrow('Failed to load registry'); + }); + }); + + describe('getById', () => { + it('should return worker by ID', async () => { + const worker = await registry.getById('worker-1'); + + expect(worker).toBeDefined(); + expect(worker.id).toBe('worker-1'); + expect(worker.name).toBe('Data Processor'); + }); + + it('should return null for non-existent ID', async () => { + const worker = await registry.getById('non-existent'); + + expect(worker).toBeNull(); + }); + }); + + describe('getByCategory', () => { + it('should return workers by category', async () => { + const workers = await registry.getByCategory('data-processing'); + + expect(workers).toHaveLength(2); + expect(workers.every((w) => w.category === 'data-processing')).toBe(true); + }); + + it('should return empty array for non-existent category', async () => { + const workers = await registry.getByCategory('non-existent'); + + expect(workers).toEqual([]); + }); + }); + + describe('getByTag', () => { + it('should return workers by tag', async () => { + const workers = await registry.getByTag('reliable'); + + 
expect(workers).toHaveLength(2); + }); + + it('should return empty array for non-existent tag', async () => { + const workers = await registry.getByTag('non-existent'); + + expect(workers).toEqual([]); + }); + }); + + describe('getByTags', () => { + it('should return workers with all specified tags', async () => { + const workers = await registry.getByTags(['reliable']); + + expect(workers).toHaveLength(2); + }); + + it('should filter to workers with ALL tags', async () => { + const workers = await registry.getByTags(['fast', 'reliable']); + + expect(workers).toHaveLength(1); + expect(workers[0].id).toBe('worker-1'); + }); + + it('should return empty array for empty tags', async () => { + const workers = await registry.getByTags([]); + + expect(workers).toEqual([]); + }); + + it('should delegate to getByTag for single tag', async () => { + const spy = jest.spyOn(registry, 'getByTag'); + + await registry.getByTags(['fast']); + + expect(spy).toHaveBeenCalledWith('fast'); + }); + }); + + describe('getForAgent', () => { + it('should return workers for agent', async () => { + const workers = await registry.getForAgent('dev'); + + expect(workers).toHaveLength(3); + }); + + it('should return empty array for unknown agent', async () => { + const workers = await registry.getForAgent('unknown'); + + expect(workers).toEqual([]); + }); + }); + + describe('search', () => { + it('should search by query string', async () => { + const results = await registry.search('data'); + + expect(results.length).toBeGreaterThan(0); + expect(results.some((w) => w.id === 'worker-1')).toBe(true); + }); + + it('should search case-insensitively', async () => { + const results = await registry.search('DATA'); + + expect(results.length).toBeGreaterThan(0); + }); + + it('should filter by category', async () => { + const results = await registry.search('client', { category: 'api-integration' }); + + expect(results.every((w) => w.category === 'api-integration')).toBe(true); + }); + + it('should 
respect maxResults', async () => { + const results = await registry.search('', { maxResults: 2 }); + + expect(results.length).toBeLessThanOrEqual(2); + }); + + it('should rank results by relevance', async () => { + const results = await registry.search('api'); + + // Workers with 'api' in name or tags should be in results + // worker-3 has 'API' in name (score +8), worker-4 has 'api' in tags (score +5) + expect(results).toHaveLength(2); + expect(results[0].id).toBe('worker-3'); // 'API Client' - name match scores higher + expect(results[1].id).toBe('worker-4'); // Has 'api' in tags + }); + }); + + describe('getAll', () => { + it('should return all workers', async () => { + const workers = await registry.getAll(); + + expect(workers).toHaveLength(4); + }); + }); + + describe('getInfo', () => { + it('should return registry info', async () => { + const info = await registry.getInfo(); + + expect(info.version).toBe('1.0.0'); + expect(info.totalWorkers).toBe(4); + expect(info.categories).toContain('data-processing'); + expect(info.categories).toContain('api-integration'); + }); + }); + + describe('getCategories', () => { + it('should return category summary', async () => { + const categories = await registry.getCategories(); + + expect(categories['data-processing']).toBe(2); + expect(categories['api-integration']).toBe(2); + }); + }); + + describe('getTags', () => { + it('should return all unique tags sorted', async () => { + const tags = await registry.getTags(); + + expect(Array.isArray(tags)).toBe(true); + expect(tags.includes('fast')).toBe(true); + expect(tags.includes('reliable')).toBe(true); + // Check sorted + expect(tags).toEqual([...tags].sort()); + }); + }); + + describe('exists', () => { + it('should return true for existing worker', async () => { + const exists = await registry.exists('worker-1'); + + expect(exists).toBe(true); + }); + + it('should return false for non-existing worker', async () => { + const exists = await registry.exists('non-existent'); + 
+ expect(exists).toBe(false); + }); + }); + + describe('count', () => { + it('should return total worker count', async () => { + const count = await registry.count(); + + expect(count).toBe(4); + }); + }); + + describe('clearCache', () => { + it('should clear cache and indexes', async () => { + await registry.load(); + expect(registry.cache).not.toBeNull(); + + registry.clearCache(); + + expect(registry.cache).toBeNull(); + expect(registry.cacheTimestamp).toBe(0); + expect(registry._byId.size).toBe(0); + expect(registry._byCategory.size).toBe(0); + }); + }); + + describe('getMetrics', () => { + it('should return metrics when not cached', () => { + const metrics = registry.getMetrics(); + + expect(metrics.cached).toBe(false); + expect(metrics.cacheAge).toBeNull(); + expect(metrics.workerCount).toBe(0); + }); + + it('should return metrics when cached', async () => { + await registry.load(); + const metrics = registry.getMetrics(); + + expect(metrics.cached).toBe(true); + expect(metrics.cacheAge).toBeGreaterThanOrEqual(0); + expect(metrics.workerCount).toBe(4); + expect(metrics.categoryCount).toBe(2); + }); + }); +}); + +describe('getRegistry', () => { + beforeEach(() => { + // Reset singleton + const { ServiceRegistry } = require('../../.aios-core/core/registry/registry-loader'); + }); + + it('should return singleton instance', () => { + const reg1 = getRegistry(); + const reg2 = getRegistry(); + + expect(reg1).toBe(reg2); + }); + + it('should create fresh instance with fresh option', () => { + const reg1 = getRegistry(); + const reg2 = getRegistry({ fresh: true }); + + expect(reg1).not.toBe(reg2); + }); + + it('should pass options to new instance', () => { + const reg = getRegistry({ fresh: true, registryPath: '/custom/path.json' }); + + expect(reg.registryPath).toBe('/custom/path.json'); + }); +}); + +describe('Index Building', () => { + let registry; + + beforeEach(() => { + const mockData = { + version: '1.0.0', + generated: '2026-01-04', + totalWorkers: 2, + 
categories: {}, + workers: [ + { + id: 'worker-no-tags', + name: 'No Tags Worker', + description: 'Worker without tags', + category: 'misc', + }, + { + id: 'worker-no-agents', + name: 'No Agents Worker', + description: 'Worker without agents', + category: 'misc', + tags: ['solo'], + }, + ], + }; + fs.readFile.mockResolvedValue(JSON.stringify(mockData)); + registry = new ServiceRegistry({ registryPath: '/mock/path.json' }); + }); + + it('should handle workers without tags', async () => { + await registry.load(); + + const worker = await registry.getById('worker-no-tags'); + expect(worker).toBeDefined(); + }); + + it('should handle workers without agents', async () => { + await registry.load(); + + const workers = await registry.getForAgent('any'); + // Should not include worker without agents array + expect(workers.every((w) => w.agents)).toBe(true); + }); +}); + +``` + +================================================== +📄 tests/core/workflow-navigator-integration.test.js +================================================== +```js +/** + * Integration Tests for WorkflowNavigator + GreetingBuilder + * + * Story ACT-5: WorkflowNavigator + Bob Integration + * + * Test Coverage: + * - AC1: Method call fixed (suggestNextCommands, not getNextSteps) + * - AC2: Relaxed trigger conditions (sessionType !== 'new') + * - AC3: SessionState integration for workflow detection + * - AC4: SurfaceChecker connection for proactive suggestions + * - AC5: workflow-patterns.yaml Bob orchestration patterns + * - AC6: Cross-terminal workflow continuity via session state + * - AC7: Unit tests with various session states + * - AC8: Workflow suggestions are contextually relevant + */ + +const path = require('path'); +const fs = require('fs'); +const yaml = require('js-yaml'); + +// Mock all external dependencies before requiring modules +jest.mock('../../.aios-core/core/session/context-detector'); +jest.mock('../../.aios-core/infrastructure/scripts/git-config-detector'); 
+jest.mock('../../.aios-core/infrastructure/scripts/project-status-loader', () => ({ + loadProjectStatus: jest.fn(), + formatStatusDisplay: jest.fn(), +})); +jest.mock('../../.aios-core/core/config/config-resolver', () => ({ + resolveConfig: jest.fn(() => ({ + config: { user_profile: 'advanced' }, + warnings: [], + legacy: false, + })), +})); +jest.mock('../../.aios-core/development/scripts/greeting-preference-manager', () => { + return jest.fn().mockImplementation(() => ({ + getPreference: jest.fn().mockReturnValue('auto'), + setPreference: jest.fn(), + getConfig: jest.fn().mockReturnValue({}), + })); +}); +jest.mock('../../.aios-core/core/permissions', () => ({ + PermissionMode: jest.fn().mockImplementation(() => ({ + load: jest.fn().mockResolvedValue(undefined), + getBadge: jest.fn().mockReturnValue('[Ask]'), + currentMode: 'ask', + })), +})); + +// Mock SessionState +jest.mock('../../.aios-core/core/orchestration/session-state', () => ({ + SessionState: jest.fn().mockImplementation(() => ({ + getStateFilePath: jest.fn().mockReturnValue('/mock/path/.session-state.yaml'), + exists: jest.fn().mockResolvedValue(false), + loadSessionState: jest.fn().mockResolvedValue(null), + })), + sessionStateExists: jest.fn().mockResolvedValue(false), + createSessionState: jest.fn(), + loadSessionState: jest.fn().mockResolvedValue(null), +})); + +// Mock SurfaceChecker +jest.mock('../../.aios-core/core/orchestration/surface-checker', () => ({ + SurfaceChecker: jest.fn().mockImplementation(() => ({ + load: jest.fn().mockReturnValue(false), + shouldSurface: jest.fn().mockReturnValue({ + should_surface: false, + criterion_id: null, + message: null, + severity: null, + can_bypass: true, + }), + })), + createSurfaceChecker: jest.fn(), + shouldSurface: jest.fn(), +})); + +const ContextDetector = require('../../.aios-core/core/session/context-detector'); +const GitConfigDetector = require('../../.aios-core/infrastructure/scripts/git-config-detector'); +const GreetingBuilder = 
require('../../.aios-core/development/scripts/greeting-builder'); +const WorkflowNavigator = require('../../.aios-core/development/scripts/workflow-navigator'); +const { SessionState } = require('../../.aios-core/core/orchestration/session-state'); +const { SurfaceChecker } = require('../../.aios-core/core/orchestration/surface-checker'); + +describe('WorkflowNavigator Integration (Story ACT-5)', () => { + let builder; + let mockAgent; + + beforeEach(() => { + jest.clearAllMocks(); + + // Setup default mocks + ContextDetector.mockImplementation(() => ({ + detectSessionType: jest.fn().mockReturnValue('existing'), + })); + + GitConfigDetector.mockImplementation(() => ({ + get: jest.fn().mockResolvedValue({ configured: true, type: 'local', branch: 'main' }), + })); + + // Mock fs for SessionState detection + SessionState.mockImplementation(() => ({ + getStateFilePath: jest.fn().mockReturnValue('/mock/path/.session-state.yaml'), + exists: jest.fn().mockResolvedValue(false), + loadSessionState: jest.fn().mockResolvedValue(null), + })); + + SurfaceChecker.mockImplementation(() => ({ + load: jest.fn().mockReturnValue(false), + shouldSurface: jest.fn().mockReturnValue({ + should_surface: false, + criterion_id: null, + message: null, + severity: null, + can_bypass: true, + }), + })); + + mockAgent = { + id: 'dev', + name: 'Dex', + icon: '\uD83D\uDCBB', + persona_profile: { + greeting_levels: { + minimal: '\uD83D\uDCBB dev Agent ready', + named: '\uD83D\uDCBB Dex (Builder) ready', + archetypal: '\uD83D\uDCBB Dex the Builder ready!', + }, + communication: { + signature_closing: '-- Dex, always building', + }, + }, + persona: { + role: 'Expert Senior Software Engineer', + }, + commands: [ + { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show help' }, + { name: 'develop', visibility: ['full', 'quick'], description: 'Implement story' }, + { name: 'run-tests', visibility: ['quick', 'key'], description: 'Run tests' }, + ], + }; + + builder = new 
GreetingBuilder(); + }); + + // ========================================================================= + // AC1: Method call fixed - suggestNextCommands, not getNextSteps + // ========================================================================= + describe('AC1: Fixed method call', () => { + test('calls suggestNextCommands on WorkflowNavigator, not getNextSteps', () => { + const navigator = new WorkflowNavigator(); + + // Verify suggestNextCommands exists + expect(typeof navigator.suggestNextCommands).toBe('function'); + + // Verify getNextSteps does NOT exist + expect(navigator.getNextSteps).toBeUndefined(); + }); + + test('buildWorkflowSuggestions calls detectWorkflowState + suggestNextCommands', () => { + // Spy on the navigator methods + const detectSpy = jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue({ + workflow: 'story_development', + state: 'validated', + context: { story_path: 'docs/stories/test.md' }, + }); + const suggestSpy = jest.spyOn(builder.workflowNavigator, 'suggestNextCommands') + .mockReturnValue([ + { command: '*develop-yolo docs/stories/test.md', description: 'YOLO mode' }, + ]); + jest.spyOn(builder.workflowNavigator, 'getGreetingMessage') + .mockReturnValue('Story validated!'); + jest.spyOn(builder.workflowNavigator, 'formatSuggestions') + .mockReturnValue('Story validated!\n\n1. 
`*develop-yolo docs/stories/test.md` - YOLO mode'); + + const result = builder.buildWorkflowSuggestions({ + lastCommands: ['validate-story-draft completed'], + }); + + expect(detectSpy).toHaveBeenCalledWith( + ['validate-story-draft completed'], + expect.any(Object), + ); + expect(suggestSpy).toHaveBeenCalledWith({ + workflow: 'story_development', + state: 'validated', + context: { story_path: 'docs/stories/test.md' }, + }); + expect(result).toContain('develop-yolo'); + }); + }); + + // ========================================================================= + // AC2: Relaxed trigger conditions + // ========================================================================= + describe('AC2: Relaxed trigger conditions', () => { + test('workflow suggestions shown for existing sessions with workflow state', async () => { + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue({ + workflow: 'story_development', + state: 'in_development', + context: {}, + }); + jest.spyOn(builder.workflowNavigator, 'suggestNextCommands') + .mockReturnValue([ + { command: '*review-qa', description: 'Run QA review' }, + ]); + jest.spyOn(builder.workflowNavigator, 'getGreetingMessage') + .mockReturnValue('Development complete!'); + jest.spyOn(builder.workflowNavigator, 'formatSuggestions') + .mockReturnValue('Development complete!\n\n1. 
`*review-qa` - Run QA review'); + + const context = { + sessionType: 'existing', + lastCommands: ['develop completed'], + projectStatus: { branch: 'feat/test', modifiedFilesTotalCount: 3 }, + gitConfig: { configured: true }, + permissions: { badge: '[Ask]' }, + }; + + const greeting = await builder.buildGreeting(mockAgent, context); + + // Should contain workflow suggestions since sessionType is 'existing' (not 'new') + expect(greeting).toContain('review-qa'); + }); + + test('no workflow suggestions for new sessions', async () => { + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue({ + workflow: 'story_development', + state: 'validated', + context: {}, + }); + + const context = { + sessionType: 'new', + lastCommands: ['validate-story-draft completed'], + projectStatus: null, + gitConfig: { configured: true }, + permissions: { badge: '[Ask]' }, + }; + + const greeting = await builder.buildGreeting(mockAgent, context); + + // New sessions should NOT show workflow suggestions + // The greeting should contain the normal presentation but not workflow nav items + expect(greeting).not.toContain('Development complete'); + }); + + test('workflow suggestions shown for workflow session type', async () => { + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue({ + workflow: 'story_development', + state: 'qa_reviewed', + context: {}, + }); + jest.spyOn(builder.workflowNavigator, 'suggestNextCommands') + .mockReturnValue([ + { command: '*apply-qa-fixes', description: 'Apply QA feedback' }, + ]); + jest.spyOn(builder.workflowNavigator, 'getGreetingMessage') + .mockReturnValue('QA review complete!'); + jest.spyOn(builder.workflowNavigator, 'formatSuggestions') + .mockReturnValue('QA review complete!\n\n1. 
`*apply-qa-fixes` - Apply QA feedback'); + + const context = { + sessionType: 'workflow', + lastCommands: ['review-qa completed'], + projectStatus: { branch: 'feat/test', modifiedFilesTotalCount: 2 }, + gitConfig: { configured: true }, + permissions: { badge: '[Auto]' }, + }; + + const greeting = await builder.buildGreeting(mockAgent, context); + + expect(greeting).toContain('apply-qa-fixes'); + }); + }); + + // ========================================================================= + // AC3 + AC6: SessionState integration for cross-terminal workflow continuity + // ========================================================================= + describe('AC3/AC6: SessionState integration', () => { + test('detects active workflow from session state file', () => { + // Mock fs.existsSync and fs.readFileSync for the session state check + const originalExistsSync = fs.existsSync; + const originalReadFileSync = fs.readFileSync; + + const sessionStateData = { + session_state: { + version: '1.2', + epic: { id: 'EPIC-ACT', title: 'Activation Pipeline', total_stories: 5 }, + progress: { + current_story: 'ACT-5', + stories_done: ['ACT-1', 'ACT-2'], + stories_pending: ['ACT-5', 'ACT-6', 'ACT-7'], + }, + workflow: { + current_phase: 'development', + attempt_count: 1, + }, + last_action: { + type: 'PHASE_CHANGE', + story: 'ACT-5', + phase: 'development', + }, + }, + }; + + fs.existsSync = jest.fn().mockReturnValue(true); + fs.readFileSync = jest.fn().mockReturnValue(yaml.dump(sessionStateData)); + + const result = builder.buildWorkflowSuggestions({}); + + // Should detect workflow from session state and return suggestions + expect(result).not.toBeNull(); + expect(result).toContain('ACT-5'); + expect(result).toContain('Activation Pipeline'); + expect(result).toContain('2/5'); + + // Restore + fs.existsSync = originalExistsSync; + fs.readFileSync = originalReadFileSync; + }); + + test('falls back to command history when no session state file', () => { + const originalExistsSync = 
fs.existsSync; + fs.existsSync = jest.fn().mockReturnValue(false); + + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue({ + workflow: 'story_development', + state: 'validated', + context: {}, + }); + jest.spyOn(builder.workflowNavigator, 'suggestNextCommands') + .mockReturnValue([ + { command: '*develop-yolo', description: 'YOLO mode' }, + ]); + jest.spyOn(builder.workflowNavigator, 'getGreetingMessage') + .mockReturnValue('Ready to develop!'); + jest.spyOn(builder.workflowNavigator, 'formatSuggestions') + .mockReturnValue('Ready to develop!\n\n1. `*develop-yolo` - YOLO mode'); + + const result = builder.buildWorkflowSuggestions({ + lastCommands: ['validate-story-draft completed'], + }); + + expect(result).toContain('develop-yolo'); + + fs.existsSync = originalExistsSync; + }); + + test('graceful degradation when session state file is malformed', () => { + const originalExistsSync = fs.existsSync; + const originalReadFileSync = fs.readFileSync; + + fs.existsSync = jest.fn().mockReturnValue(true); + fs.readFileSync = jest.fn().mockReturnValue('invalid yaml %%% not parseable'); + + // Should not throw, should return null from _detectWorkflowFromSessionState + // and fall back to command history detection + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue(null); + + const result = builder.buildWorkflowSuggestions({ lastCommands: [] }); + + // Should gracefully return null (no crash) + expect(result).toBeNull(); + + fs.existsSync = originalExistsSync; + fs.readFileSync = originalReadFileSync; + }); + + test('ignores session state without active workflow phase', () => { + const originalExistsSync = fs.existsSync; + const originalReadFileSync = fs.readFileSync; + + const sessionStateData = { + session_state: { + version: '1.2', + epic: { id: 'EPIC-X', title: 'Some Epic', total_stories: 3 }, + progress: { + current_story: 'X-1', + stories_done: [], + stories_pending: ['X-1', 'X-2', 'X-3'], + }, + workflow: { + 
current_phase: null, // No active phase + attempt_count: 0, + }, + }, + }; + + fs.existsSync = jest.fn().mockReturnValue(true); + fs.readFileSync = jest.fn().mockReturnValue(yaml.dump(sessionStateData)); + + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue(null); + + const result = builder.buildWorkflowSuggestions({ lastCommands: [] }); + + // Should skip session state (no active phase) and fall through to command history + // Command history also returns null, so result should be null + expect(result).toBeNull(); + + fs.existsSync = originalExistsSync; + fs.readFileSync = originalReadFileSync; + }); + }); + + // ========================================================================= + // AC4: SurfaceChecker proactive suggestions + // ========================================================================= + describe('AC4: SurfaceChecker proactive suggestions', () => { + test('enhances suggestions when surface checker detects high risk', () => { + const originalExistsSync = fs.existsSync; + fs.existsSync = jest.fn().mockReturnValue(false); // No session state + + SurfaceChecker.mockImplementation(() => ({ + load: jest.fn().mockReturnValue(true), + shouldSurface: jest.fn().mockReturnValue({ + should_surface: true, + criterion_id: 'high_risk', + criterion_name: 'High Risk Operation', + message: 'This operation modifies critical files', + severity: 'warning', + can_bypass: true, + }), + })); + + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue({ + workflow: 'story_development', + state: 'in_development', + context: {}, + }); + jest.spyOn(builder.workflowNavigator, 'suggestNextCommands') + .mockReturnValue([ + { command: '*review-qa', description: 'Run QA review', raw_command: 'review-qa', args: '' }, + ]); + jest.spyOn(builder.workflowNavigator, 'getGreetingMessage') + .mockReturnValue('Development complete!'); + + // The formatSuggestions call should receive enhanced suggestions (with warning prepended) 
+      const formatSpy = jest.spyOn(builder.workflowNavigator, 'formatSuggestions')
+        .mockImplementation((suggestions, header) => {
+          return `${header}\n\n${suggestions.map((s, i) => `${i + 1}. \`${s.command}\` - ${s.description}`).join('\n')}`;
+        });
+
+      const result = builder.buildWorkflowSuggestions({
+        lastCommands: ['develop completed'],
+        riskLevel: 'HIGH',
+      });
+
+      // Should have called formatSuggestions with the enhanced array (warning + original)
+      expect(formatSpy).toHaveBeenCalled();
+      const suggestionsArg = formatSpy.mock.calls[0][0];
+      expect(suggestionsArg.length).toBe(2); // warning + original
+      expect(suggestionsArg[0].description).toContain('warning');
+      expect(suggestionsArg[1].command).toBe('*review-qa');
+      // The formatted output returned to the caller should surface the original suggestion
+      expect(result).toContain('review-qa');
+
+      fs.existsSync = originalExistsSync;
+    });
+
+    test('returns original suggestions when SurfaceChecker is unavailable', () => {
+      const originalExistsSync = fs.existsSync;
+      fs.existsSync = jest.fn().mockReturnValue(false); // No session state
+
+      // SurfaceChecker.load() returns false (criteria file not found)
+      SurfaceChecker.mockImplementation(() => ({
+        load: jest.fn().mockReturnValue(false),
+        shouldSurface: jest.fn(),
+      }));
+
+      jest.spyOn(builder.workflowNavigator, 'detectWorkflowState')
+        .mockReturnValue({
+          workflow: 'story_development',
+          state: 'validated',
+          context: {},
+        });
+      jest.spyOn(builder.workflowNavigator, 'suggestNextCommands')
+        .mockReturnValue([
+          { command: '*develop-yolo', description: 'YOLO mode', raw_command: 'develop-yolo', args: '' },
+        ]);
+      jest.spyOn(builder.workflowNavigator, 'getGreetingMessage')
+        .mockReturnValue('Ready!');
+      const formatSpy = jest.spyOn(builder.workflowNavigator, 'formatSuggestions')
+        .mockReturnValue('Ready!\n\n1. 
`*develop-yolo` - YOLO mode'); + + const result = builder.buildWorkflowSuggestions({ + lastCommands: ['validate-story-draft completed'], + }); + + // Should still work without SurfaceChecker + expect(result).toContain('develop-yolo'); + // formatSuggestions should be called with original (unenhanced) suggestions + const suggestionsArg = formatSpy.mock.calls[0][0]; + expect(suggestionsArg.length).toBe(1); + + fs.existsSync = originalExistsSync; + }); + }); + + // ========================================================================= + // AC5: workflow-patterns.yaml Bob orchestration patterns + // ========================================================================= + describe('AC5: Bob orchestration patterns in workflow-patterns.yaml', () => { + let patterns; + + beforeAll(() => { + const patternsPath = path.join( + __dirname, + '../../.aios-core/data/workflow-patterns.yaml', + ); + const content = fs.readFileSync(patternsPath, 'utf8'); + patterns = yaml.load(content); + }); + + test('bob_orchestration workflow exists with correct structure', () => { + expect(patterns.workflows.bob_orchestration).toBeDefined(); + + const bobWorkflow = patterns.workflows.bob_orchestration; + expect(bobWorkflow.description).toContain('Bob'); + expect(bobWorkflow.agent_sequence).toContain('pm'); + expect(bobWorkflow.agent_sequence).toContain('dev'); + expect(bobWorkflow.key_commands).toContain('execute-epic'); + expect(bobWorkflow.key_commands).toContain('wave-execute'); + expect(bobWorkflow.key_commands).toContain('build-autonomous'); + }); + + test('bob_orchestration has transitions for executor-assignment and wave-execution', () => { + const transitions = patterns.workflows.bob_orchestration.transitions; + + expect(transitions.epic_started).toBeDefined(); + expect(transitions.executor_assigned).toBeDefined(); + expect(transitions.wave_completed).toBeDefined(); + expect(transitions.story_completed).toBeDefined(); + }); + + test('bob_orchestration transitions have valid 
next_steps', () => { + const transitions = patterns.workflows.bob_orchestration.transitions; + + // Each transition should have next_steps with command and description + for (const [name, transition] of Object.entries(transitions)) { + expect(transition.next_steps).toBeDefined(); + expect(transition.next_steps.length).toBeGreaterThan(0); + + for (const step of transition.next_steps) { + expect(step.command).toBeDefined(); + expect(step.description).toBeDefined(); + expect(step.priority).toBeDefined(); + } + } + }); + + test('agent_handoff workflow exists for cross-agent transitions', () => { + expect(patterns.workflows.agent_handoff).toBeDefined(); + + const handoffWorkflow = patterns.workflows.agent_handoff; + expect(handoffWorkflow.agent_sequence).toContain('dev'); + expect(handoffWorkflow.agent_sequence).toContain('qa'); + expect(handoffWorkflow.key_commands).toContain('fix-qa-issues'); + expect(handoffWorkflow.key_commands).toContain('apply-qa-fixes'); + }); + + test('agent_handoff has dev_complete and qa_issues_found transitions', () => { + const transitions = patterns.workflows.agent_handoff.transitions; + + expect(transitions.dev_complete).toBeDefined(); + expect(transitions.qa_issues_found).toBeDefined(); + expect(transitions.fixes_applied).toBeDefined(); + }); + }); + + // ========================================================================= + // AC7: Various session states + // ========================================================================= + describe('AC7: Various session states', () => { + test('returns null when no workflow state is detected', () => { + const originalExistsSync = fs.existsSync; + fs.existsSync = jest.fn().mockReturnValue(false); + + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue(null); + + const result = builder.buildWorkflowSuggestions({ + lastCommands: ['some-random-command'], + }); + + expect(result).toBeNull(); + + fs.existsSync = originalExistsSync; + }); + + test('returns null when 
suggestNextCommands returns empty array', () => { + const originalExistsSync = fs.existsSync; + fs.existsSync = jest.fn().mockReturnValue(false); + + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue({ workflow: 'test', state: 'unknown', context: {} }); + jest.spyOn(builder.workflowNavigator, 'suggestNextCommands') + .mockReturnValue([]); + + const result = builder.buildWorkflowSuggestions({ + lastCommands: ['test-command completed'], + }); + + expect(result).toBeNull(); + + fs.existsSync = originalExistsSync; + }); + + test('handles context with empty lastCommands gracefully', () => { + const originalExistsSync = fs.existsSync; + fs.existsSync = jest.fn().mockReturnValue(false); + + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockReturnValue(null); + + const result = builder.buildWorkflowSuggestions({ + lastCommands: [], + commandHistory: [], + }); + + expect(result).toBeNull(); + + fs.existsSync = originalExistsSync; + }); + + test('handles exception in workflowNavigator gracefully', () => { + const originalExistsSync = fs.existsSync; + fs.existsSync = jest.fn().mockReturnValue(false); + + jest.spyOn(builder.workflowNavigator, 'detectWorkflowState') + .mockImplementation(() => { + throw new Error('Unexpected error in navigator'); + }); + + // Should not throw + const result = builder.buildWorkflowSuggestions({ + lastCommands: ['some-command'], + }); + + expect(result).toBeNull(); + + fs.existsSync = originalExistsSync; + }); + }); + + // ========================================================================= + // AC8: Suggestions are contextually relevant + // ========================================================================= + describe('AC8: Contextually relevant suggestions', () => { + test('story_development validated state suggests develop commands', () => { + const navigator = new WorkflowNavigator(); + + const state = { + workflow: 'story_development', + state: 'validated', + context: { story_path: 
'docs/stories/test.md' }, + }; + + const suggestions = navigator.suggestNextCommands(state); + + // Should suggest development commands after validation + expect(suggestions.length).toBeGreaterThan(0); + const commands = suggestions.map((s) => s.raw_command); + expect(commands).toContain('develop-yolo'); + }); + + test('story_development in_development state suggests QA review', () => { + const navigator = new WorkflowNavigator(); + + const state = { + workflow: 'story_development', + state: 'in_development', + context: { story_path: 'docs/stories/test.md' }, + }; + + const suggestions = navigator.suggestNextCommands(state); + + expect(suggestions.length).toBeGreaterThan(0); + const commands = suggestions.map((s) => s.raw_command); + expect(commands).toContain('review-qa'); + }); + + test('story_development qa_reviewed state suggests fix or push', () => { + const navigator = new WorkflowNavigator(); + + const state = { + workflow: 'story_development', + state: 'qa_reviewed', + context: {}, + }; + + const suggestions = navigator.suggestNextCommands(state); + + expect(suggestions.length).toBeGreaterThan(0); + const commands = suggestions.map((s) => s.raw_command); + expect(commands).toContain('apply-qa-fixes'); + }); + + test('bob_orchestration epic_started suggests build or develop', () => { + const navigator = new WorkflowNavigator(); + + const state = { + workflow: 'bob_orchestration', + state: 'epic_started', + context: { story_path: 'docs/stories/ACT-5.md' }, + }; + + const suggestions = navigator.suggestNextCommands(state); + + expect(suggestions.length).toBeGreaterThan(0); + const commands = suggestions.map((s) => s.raw_command); + expect(commands).toContain('build-autonomous'); + }); + + test('unknown workflow returns empty suggestions', () => { + const navigator = new WorkflowNavigator(); + + const state = { + workflow: 'nonexistent_workflow', + state: 'some_state', + context: {}, + }; + + const suggestions = navigator.suggestNextCommands(state); + + 
expect(suggestions).toEqual([]); + }); + + test('populateTemplate replaces context variables correctly', () => { + const navigator = new WorkflowNavigator(); + + const result = navigator.populateTemplate('${story_path}', { + story_path: 'docs/stories/test.md', + }); + + expect(result).toBe('docs/stories/test.md'); + }); + + test('populateTemplate handles missing context variables', () => { + const navigator = new WorkflowNavigator(); + + const result = navigator.populateTemplate('${story_path}', {}); + + expect(result).toBe(''); + }); + }); + + // ========================================================================= + // AC5 (continued): WorkflowNavigator detects bob_orchestration from commands + // ========================================================================= + describe('WorkflowNavigator pattern detection', () => { + test('detects bob_orchestration from execute-epic command', () => { + const navigator = new WorkflowNavigator(); + + const state = navigator.detectWorkflowState( + ['execute-epic started'], + { story_path: 'docs/stories/epic-test.md' }, + ); + + expect(state).not.toBeNull(); + expect(state.workflow).toBe('bob_orchestration'); + expect(state.state).toBe('epic_started'); + }); + + test('detects agent_handoff from develop completed', () => { + const navigator = new WorkflowNavigator(); + + const state = navigator.detectWorkflowState( + ['develop completed'], + {}, + ); + + expect(state).not.toBeNull(); + // Could match story_development in_development OR agent_handoff dev_complete + // The first matching workflow wins + expect(state).toHaveProperty('workflow'); + expect(state).toHaveProperty('state'); + }); + + test('returns null for unrecognized commands', () => { + const navigator = new WorkflowNavigator(); + + const state = navigator.detectWorkflowState( + ['random-unmatched-command'], + {}, + ); + + expect(state).toBeNull(); + }); + + test('returns null for empty command history', () => { + const navigator = new WorkflowNavigator(); 
+ + const state = navigator.detectWorkflowState([], {}); + expect(state).toBeNull(); + }); + + test('getGreetingMessage returns message for known state', () => { + const navigator = new WorkflowNavigator(); + + const message = navigator.getGreetingMessage({ + workflow: 'story_development', + state: 'validated', + }); + + expect(message).toBe('Story validated! Ready to implement.'); + }); + + test('getGreetingMessage returns empty string for unknown state', () => { + const navigator = new WorkflowNavigator(); + + const message = navigator.getGreetingMessage({ + workflow: 'nonexistent', + state: 'nonexistent', + }); + + expect(message).toBe(''); + }); + }); + + // ========================================================================= + // UnifiedActivationPipeline workflow detection relaxation + // ========================================================================= + describe('UnifiedActivationPipeline _detectWorkflowState relaxation', () => { + // This tests that the pipeline also relaxed its workflow detection + // (ACT-5 changed from sessionType !== 'workflow' to sessionType === 'new') + let pipeline; + + beforeEach(() => { + // Require fresh to get the updated code + jest.resetModules(); + + // Re-mock everything needed for the pipeline + jest.mock('../../.aios-core/development/scripts/greeting-builder'); + jest.mock('../../.aios-core/development/scripts/agent-config-loader', () => ({ + AgentConfigLoader: jest.fn().mockImplementation(() => ({ + loadComplete: jest.fn().mockResolvedValue(null), + })), + })); + jest.mock('../../.aios-core/core/session/context-loader', () => { + return jest.fn().mockImplementation(() => ({ + loadContext: jest.fn().mockResolvedValue(null), + })); + }); + jest.mock('../../.aios-core/infrastructure/scripts/project-status-loader', () => ({ + loadProjectStatus: jest.fn().mockResolvedValue(null), + })); + jest.mock('../../.aios-core/infrastructure/scripts/git-config-detector'); + jest.mock('../../.aios-core/core/permissions', () 
=> ({ + PermissionMode: jest.fn().mockImplementation(() => ({ + load: jest.fn().mockResolvedValue(undefined), + getBadge: jest.fn().mockReturnValue('[Ask]'), + currentMode: 'ask', + })), + })); + jest.mock('../../.aios-core/development/scripts/greeting-preference-manager', () => { + return jest.fn().mockImplementation(() => ({ + getPreference: jest.fn().mockReturnValue('auto'), + })); + }); + jest.mock('../../.aios-core/core/session/context-detector'); + jest.mock('../../.aios-core/development/scripts/workflow-navigator'); + + const { UnifiedActivationPipeline } = require('../../.aios-core/development/scripts/unified-activation-pipeline'); + const MockWorkflowNavigator = require('../../.aios-core/development/scripts/workflow-navigator'); + + MockWorkflowNavigator.mockImplementation(() => ({ + detectWorkflowState: jest.fn().mockReturnValue({ + workflow: 'story_development', + state: 'validated', + context: {}, + }), + })); + + pipeline = new UnifiedActivationPipeline(); + }); + + test('detects workflow state for existing sessions (not just workflow)', () => { + const sessionContext = { + lastCommands: ['validate-story-draft completed'], + }; + + const result = pipeline._detectWorkflowState(sessionContext, 'existing'); + + // Should NOT return null for 'existing' sessions (ACT-5 relaxation) + expect(result).not.toBeNull(); + expect(result.workflow).toBe('story_development'); + }); + + test('returns null for new sessions', () => { + const sessionContext = { + lastCommands: ['validate-story-draft completed'], + }; + + const result = pipeline._detectWorkflowState(sessionContext, 'new'); + + expect(result).toBeNull(); + }); + + test('returns null when no session context', () => { + const result = pipeline._detectWorkflowState(null, 'existing'); + + expect(result).toBeNull(); + }); + }); +}); + +``` + +================================================== +📄 tests/core/result-aggregator.test.js +================================================== +```js +/** + * Result 
Aggregator - Test Suite + * Story EXC-1, AC6 - result-aggregator.js coverage + * + * Tests: constructor, aggregate, aggregateAll, conflict detection, + * metrics, report generation, formatMarkdown, history + */ + +const path = require('path'); +const fs = require('fs'); +const { + createTempDir, + cleanupTempDir, + collectEvents, +} = require('./execution-test-helpers'); + +const { ResultAggregator } = require('../../.aios-core/core/execution/result-aggregator'); + +describe('ResultAggregator', () => { + let tmpDir; + + beforeEach(() => { + tmpDir = createTempDir('ra-test-'); + }); + + afterEach(() => { + cleanupTempDir(tmpDir); + }); + + // ── Constructor ───────────────────────────────────────────────────── + + describe('Constructor', () => { + test('creates with defaults', () => { + const ra = new ResultAggregator(); + expect(ra.detectConflicts).toBe(true); + expect(ra.history).toEqual([]); + expect(ra.maxHistory).toBe(50); + }); + + test('accepts custom config', () => { + const ra = new ResultAggregator({ detectConflicts: false, maxHistory: 10 }); + expect(ra.detectConflicts).toBe(false); + expect(ra.maxHistory).toBe(10); + }); + + test('extends EventEmitter', () => { + const ra = new ResultAggregator(); + expect(typeof ra.on).toBe('function'); + }); + }); + + // ── aggregate ───────────────────────────────────────────────────────── + + describe('aggregate()', () => { + test('aggregates successful results', async () => { + const ra = new ResultAggregator({ rootPath: tmpDir }); + const waveResults = { + waveIndex: 1, + results: [ + { taskId: 't1', success: true, duration: 1000, output: 'done', filesModified: ['a.js'] }, + { taskId: 't2', success: true, duration: 2000, output: 'done', filesModified: ['b.js'] }, + ], + }; + + const result = await ra.aggregate(waveResults); + + expect(result.tasks.length).toBe(2); + expect(result.tasks[0].success).toBe(true); + expect(result.metrics.totalTasks).toBe(2); + expect(result.metrics.successful).toBe(2); + 
expect(result.metrics.failed).toBe(0); + }); + + test('detects file conflicts', async () => { + const ra = new ResultAggregator({ rootPath: tmpDir }); + const waveResults = { + waveIndex: 1, + results: [ + { taskId: 't1', success: true, filesModified: ['shared.js'] }, + { taskId: 't2', success: true, filesModified: ['shared.js'] }, + ], + }; + + const events = collectEvents(ra, ['conflicts_detected']); + const result = await ra.aggregate(waveResults); + + expect(result.conflicts.length).toBe(1); + expect(result.conflicts[0].file).toBe('shared.js'); + expect(result.conflicts[0].tasks).toEqual(['t1', 't2']); + expect(events.count('conflicts_detected')).toBe(1); + }); + + test('skips conflict detection when disabled', async () => { + const ra = new ResultAggregator({ rootPath: tmpDir, detectConflicts: false }); + const waveResults = { + waveIndex: 1, + results: [ + { taskId: 't1', success: true, filesModified: ['shared.js'] }, + { taskId: 't2', success: true, filesModified: ['shared.js'] }, + ], + }; + + const result = await ra.aggregate(waveResults); + expect(result.conflicts).toEqual([]); + }); + + test('collects warnings for long duration tasks', async () => { + const ra = new ResultAggregator({ rootPath: tmpDir }); + const waveResults = { + waveIndex: 1, + results: [ + { taskId: 't1', success: true, duration: 6 * 60 * 1000, filesModified: ['a.js'] }, + ], + }; + + const result = await ra.aggregate(waveResults); + const longWarning = result.warnings.find(w => w.type === 'long_duration'); + expect(longWarning).toBeDefined(); + }); + + test('collects warnings for no files modified', async () => { + const ra = new ResultAggregator({ rootPath: tmpDir }); + const waveResults = { + waveIndex: 1, + results: [{ taskId: 't1', success: true, filesModified: [] }], + }; + + const result = await ra.aggregate(waveResults); + const noFilesWarning = result.warnings.find(w => w.type === 'no_files_modified'); + expect(noFilesWarning).toBeDefined(); + }); + + test('emits 
aggregation_complete event', async () => { + const ra = new ResultAggregator({ rootPath: tmpDir }); + const events = collectEvents(ra, ['aggregation_complete']); + + await ra.aggregate({ waveIndex: 1, results: [] }); + + expect(events.count('aggregation_complete')).toBe(1); + }); + + test('stores aggregation in history', async () => { + const ra = new ResultAggregator({ rootPath: tmpDir }); + await ra.aggregate({ waveIndex: 1, results: [] }); + expect(ra.history.length).toBe(1); + }); + + test('trims history to maxHistory', async () => { + const ra = new ResultAggregator({ rootPath: tmpDir, maxHistory: 2 }); + await ra.aggregate({ waveIndex: 1, results: [] }); + await ra.aggregate({ waveIndex: 2, results: [] }); + await ra.aggregate({ waveIndex: 3, results: [] }); + expect(ra.history.length).toBe(2); + }); + }); + + // ── aggregateAll ────────────────────────────────────────────────────── + + describe('aggregateAll()', () => { + test('consolidates multiple waves', async () => { + const ra = new ResultAggregator({ rootPath: tmpDir }); + const waves = [ + { waveIndex: 1, results: [{ taskId: 't1', success: true, filesModified: [] }] }, + { waveIndex: 2, results: [{ taskId: 't2', success: false, error: 'fail', filesModified: [] }] }, + ]; + + const result = await ra.aggregateAll(waves); + + expect(result.waves.length).toBe(2); + expect(result.allTasks.length).toBe(2); + expect(result.overallMetrics.totalWaves).toBe(2); + expect(result.overallMetrics.successful).toBe(1); + expect(result.overallMetrics.failed).toBe(1); + }); + }); + + // ── Conflict detection ──────────────────────────────────────────────── + + describe('Conflict detection', () => { + test('assessConflictSeverity returns critical for package.json', () => { + const ra = new ResultAggregator(); + expect(ra.assessConflictSeverity('package.json')).toBe('critical'); + expect(ra.assessConflictSeverity('src/index.ts')).toBe('critical'); + }); + + test('assessConflictSeverity returns high for config files', () 
=> { + const ra = new ResultAggregator(); + expect(ra.assessConflictSeverity('app.config.js')).toBe('high'); + }); + + test('assessConflictSeverity returns medium for regular files', () => { + const ra = new ResultAggregator(); + expect(ra.assessConflictSeverity('src/utils/helper.js')).toBe('medium'); + }); + + test('suggestResolution gives JSON-specific advice', () => { + const ra = new ResultAggregator(); + expect(ra.suggestResolution('data.json', 't1', 't2')).toContain('JSON'); + }); + + test('suggestResolution gives test file advice', () => { + const ra = new ResultAggregator(); + expect(ra.suggestResolution('app.test.js', 't1', 't2')).toContain('automatically'); + }); + + test('suggestResolution gives generic advice for other files', () => { + const ra = new ResultAggregator(); + expect(ra.suggestResolution('app.js', 't1', 't2')).toContain('Review'); + }); + }); + + // ── extractFilesFromOutput ──────────────────────────────────────────── + + describe('extractFilesFromOutput', () => { + test('returns empty for null', () => { + const ra = new ResultAggregator(); + expect(ra.extractFilesFromOutput(null)).toEqual([]); + }); + + test('extracts file paths from output', () => { + const ra = new ResultAggregator(); + const output = "Created `src/app.js` and modified 'lib/utils.ts'"; + const files = ra.extractFilesFromOutput(output); + expect(files.length).toBeGreaterThanOrEqual(1); + }); + }); + + // ── summarizeOutput ─────────────────────────────────────────────────── + + describe('summarizeOutput', () => { + test('returns empty for null', () => { + const ra = new ResultAggregator(); + expect(ra.summarizeOutput(null)).toBe(''); + }); + + test('returns short output unchanged', () => { + const ra = new ResultAggregator(); + expect(ra.summarizeOutput('short')).toBe('short'); + }); + + test('truncates long output', () => { + const ra = new ResultAggregator(); + const long = 'x'.repeat(600); + const result = ra.summarizeOutput(long); + 
expect(result.length).toBeLessThan(600); + expect(result).toContain('truncated'); + }); + }); + + // ── Metrics ─────────────────────────────────────────────────────────── + + describe('calculateMetrics', () => { + test('calculates success rate', () => { + const ra = new ResultAggregator(); + const agg = { + tasks: [ + { success: true, duration: 1000, filesModified: ['a.js'] }, + { success: false, duration: 500, filesModified: [] }, + ], + conflicts: [], + warnings: [], + }; + const metrics = ra.calculateMetrics(agg, Date.now() - 1000); + expect(metrics.totalTasks).toBe(2); + expect(metrics.successful).toBe(1); + expect(metrics.failed).toBe(1); + expect(metrics.successRate).toBe(50); + }); + + test('counts unique files modified', () => { + const ra = new ResultAggregator(); + const agg = { + tasks: [ + { success: true, filesModified: ['a.js', 'b.js'] }, + { success: true, filesModified: ['b.js', 'c.js'] }, + ], + conflicts: [], + warnings: [], + }; + const metrics = ra.calculateMetrics(agg, Date.now()); + expect(metrics.filesModified).toBe(3); + expect(metrics.duplicateFileEdits).toBe(1); + }); + }); + + // ── Report generation ───────────────────────────────────────────────── + + describe('generateReport', () => { + test('writes JSON and markdown files', async () => { + const reportDir = path.join(tmpDir, 'plan'); + const ra = new ResultAggregator({ reportDir }); + + const agg = { + waveIndex: 1, + completedAt: new Date().toISOString(), + tasks: [{ taskId: 't1', agentId: '@dev', success: true, duration: 1000 }], + conflicts: [], + warnings: [], + metrics: { totalTasks: 1, successful: 1, failed: 0, successRate: 100, totalDuration: 1000, conflictCount: 0, filesModified: 1 }, + }; + + const reportPath = await ra.generateReport(agg); + expect(fs.existsSync(reportPath)).toBe(true); + expect(fs.existsSync(reportPath.replace('.json', '.md'))).toBe(true); + }); + }); + + // ── formatMarkdown ──────────────────────────────────────────────────── + + 
describe('formatMarkdown', () => { + test('generates markdown with metrics', () => { + const ra = new ResultAggregator(); + const agg = { + completedAt: new Date().toISOString(), + tasks: [{ taskId: 't1', agentId: '@dev', success: true, duration: 1000 }], + conflicts: [], + warnings: [], + metrics: { totalTasks: 1, successful: 1, failed: 0, successRate: 100, totalDuration: 1000, conflictCount: 0 }, + }; + const md = ra.formatMarkdown(agg); + expect(md).toContain('Wave Results Report'); + expect(md).toContain('100'); + }); + + test('includes conflicts section', () => { + const ra = new ResultAggregator(); + const agg = { + completedAt: new Date().toISOString(), + tasks: [], + conflicts: [{ file: 'app.js', type: 'concurrent', severity: 'high', tasks: ['t1', 't2'], resolution: 'merge' }], + warnings: [], + metrics: { totalTasks: 0, successful: 0, failed: 0, successRate: 100, totalDuration: 0 }, + }; + const md = ra.formatMarkdown(agg); + expect(md).toContain('Conflicts'); + expect(md).toContain('app.js'); + }); + }); + + // ── History ─────────────────────────────────────────────────────────── + + describe('History', () => { + test('getHistory returns limited entries', () => { + const ra = new ResultAggregator(); + ra.history = [{ id: 1 }, { id: 2 }, { id: 3 }]; + expect(ra.getHistory(2).length).toBe(2); + }); + }); + + // ── formatStatus ────────────────────────────────────────────────────── + + describe('formatStatus', () => { + test('returns status string', () => { + const ra = new ResultAggregator(); + const status = ra.formatStatus(); + expect(status).toContain('Result Aggregator'); + }); + }); +}); + +``` + +================================================== +📄 tests/core/terminal-spawner.test.js +================================================== +```js +/** + * Terminal Spawner Tests + * + * Story 11.2: Bob Terminal Spawning + * + * Tests for the TerminalSpawner module which provides + * cross-platform terminal spawning for agent orchestration. 
+ */ + +'use strict'; + +const path = require('path'); +const fs = require('fs').promises; +const os = require('os'); + +// Module under test +const TerminalSpawner = require('../../.aios-core/core/orchestration/terminal-spawner'); + +describe('TerminalSpawner', () => { + const tmpDir = os.tmpdir(); + + // ============================================ + // Module Structure Tests + // ============================================ + describe('Module Structure', () => { + test('should export all required functions', () => { + expect(TerminalSpawner.spawnAgent).toBeDefined(); + expect(TerminalSpawner.createContextFile).toBeDefined(); + expect(TerminalSpawner.pollForOutput).toBeDefined(); + expect(TerminalSpawner.isSpawnerAvailable).toBeDefined(); + expect(TerminalSpawner.getPlatform).toBeDefined(); + expect(TerminalSpawner.cleanupOldFiles).toBeDefined(); + expect(TerminalSpawner.getScriptPath).toBeDefined(); + }); + + test('should export constants', () => { + expect(TerminalSpawner.DEFAULT_TIMEOUT_MS).toBe(300000); + expect(TerminalSpawner.POLL_INTERVAL_MS).toBe(500); + expect(TerminalSpawner.MAX_RETRIES).toBe(3); + }); + }); + + // ============================================ + // Platform Detection Tests (Task 6.2) + // ============================================ + describe('Platform Detection', () => { + test('isSpawnerAvailable should return true on supported platforms', () => { + // This should be true on macOS, Linux, and Windows + const isAvailable = TerminalSpawner.isSpawnerAvailable(); + expect(typeof isAvailable).toBe('boolean'); + // On CI/test environments, this should generally be true + expect(isAvailable).toBe(true); + }); + + test('getPlatform should return valid platform name', () => { + const platform = TerminalSpawner.getPlatform(); + expect(['macos', 'linux', 'windows', 'unknown']).toContain(platform); + }); + + test('getPlatform should match process.platform', () => { + const platform = TerminalSpawner.getPlatform(); + const nodePlatform = 
process.platform; + + if (nodePlatform === 'darwin') { + expect(platform).toBe('macos'); + } else if (nodePlatform === 'linux') { + expect(platform).toBe('linux'); + } else if (nodePlatform === 'win32') { + expect(platform).toBe('windows'); + } + }); + }); + + // ============================================ + // Script Path Tests + // ============================================ + describe('Script Path', () => { + test('getScriptPath should return absolute path', () => { + const scriptPath = TerminalSpawner.getScriptPath(); + expect(path.isAbsolute(scriptPath)).toBe(true); + }); + + test('getScriptPath should point to pm.sh', () => { + const scriptPath = TerminalSpawner.getScriptPath(); + expect(scriptPath.endsWith('pm.sh')).toBe(true); + }); + + test('pm.sh should exist at script path', async () => { + const scriptPath = TerminalSpawner.getScriptPath(); + const stats = await fs.stat(scriptPath); + expect(stats.isFile()).toBe(true); + }); + }); + + // ============================================ + // Context File Tests (Task 2) + // ============================================ + describe('createContextFile', () => { + test('should return empty string for null context', async () => { + const result = await TerminalSpawner.createContextFile(null); + expect(result).toBe(''); + }); + + test('should create valid JSON file', async () => { + const context = { + story: 'test-story.md', + files: ['file1.js', 'file2.js'], + instructions: 'Test instructions', + }; + + const contextPath = await TerminalSpawner.createContextFile(context, tmpDir); + + try { + expect(contextPath).toContain('aios-context-'); + expect(contextPath.endsWith('.json')).toBe(true); + + const content = await fs.readFile(contextPath, 'utf8'); + const parsed = JSON.parse(content); + + expect(parsed.story).toBe('test-story.md'); + expect(parsed.files).toEqual(['file1.js', 'file2.js']); + expect(parsed.instructions).toBe('Test instructions'); + expect(parsed.createdAt).toBeDefined(); + } finally { + // 
Cleanup + await fs.unlink(contextPath).catch(() => {}); + } + }); + + test('should handle context with missing fields', async () => { + const context = { + story: 'test-story.md', + }; + + const contextPath = await TerminalSpawner.createContextFile(context, tmpDir); + + try { + const content = await fs.readFile(contextPath, 'utf8'); + const parsed = JSON.parse(content); + + expect(parsed.story).toBe('test-story.md'); + expect(parsed.files).toEqual([]); + expect(parsed.instructions).toBe(''); + expect(parsed.metadata).toEqual({}); + } finally { + await fs.unlink(contextPath).catch(() => {}); + } + }); + + test('should handle empty context object', async () => { + const contextPath = await TerminalSpawner.createContextFile({}, tmpDir); + + try { + const content = await fs.readFile(contextPath, 'utf8'); + const parsed = JSON.parse(content); + + expect(parsed.story).toBe(''); + expect(parsed.files).toEqual([]); + } finally { + await fs.unlink(contextPath).catch(() => {}); + } + }); + }); + + // ============================================ + // Polling Tests (Task 3) + // ============================================ + describe('pollForOutput', () => { + test('should return output when lock file is missing', async () => { + // Create output file without lock + const timestamp = Date.now(); + const outputPath = path.join(tmpDir, `aios-output-${timestamp}.md`); + await fs.writeFile(outputPath, 'Test output content'); + + try { + const result = await TerminalSpawner.pollForOutput(outputPath, 1000); + expect(result).toBe('Test output content'); + } finally { + await fs.unlink(outputPath).catch(() => {}); + } + }); + + test('should return "No output captured" if file does not exist and no lock', async () => { + const fakePath = path.join(tmpDir, `aios-output-nonexistent-${Date.now()}.md`); + const result = await TerminalSpawner.pollForOutput(fakePath, 500); + expect(result).toBe('No output captured'); + }); + + test('should timeout when lock file persists', async () => { + 
const timestamp = Date.now();
+      const outputPath = path.join(tmpDir, `aios-output-${timestamp}.md`);
+      // Lock file path is derived by replacing 'output' with 'lock' in pollForOutput
+      const lockPath = path.join(tmpDir, `aios-lock-${timestamp}.md`);
+
+      // Create lock file
+      await fs.writeFile(lockPath, '');
+
+      try {
+        await expect(TerminalSpawner.pollForOutput(outputPath, 600)).rejects.toThrow(
+          /Timeout waiting for agent output/,
+        );
+      } finally {
+        await fs.unlink(lockPath).catch(() => {});
+      }
+    });
+
+    test('should return output after lock is removed', async () => {
+      const timestamp = Date.now();
+      const outputPath = path.join(tmpDir, `aios-output-${timestamp}.md`);
+      // Keep the `.md` extension so this matches the lock path pollForOutput
+      // derives (output -> lock substitution); a `.lock` name would never block
+      const lockPath = path.join(tmpDir, `aios-lock-${timestamp}.md`);
+
+      // Create both files
+      await fs.writeFile(lockPath, '');
+      await fs.writeFile(outputPath, 'Delayed output');
+
+      // Remove lock after delay
+      setTimeout(async () => {
+        await fs.unlink(lockPath).catch(() => {});
+      }, 200);
+
+      try {
+        const result = await TerminalSpawner.pollForOutput(outputPath, 5000);
+        expect(result).toBe('Delayed output');
+      } finally {
+        await fs.unlink(outputPath).catch(() => {});
+      }
+    });
+  });
+
+  // ============================================
+  // Cleanup Tests (Task 2.4)
+  // ============================================
+  describe('cleanupOldFiles', () => {
+    test('should clean old AIOS temp files', async () => {
+      // Create old files (mock old timestamp by naming)
+      const oldTimestamp = Date.now() - 7200000; // 2 hours ago
+      const oldOutputPath = path.join(tmpDir, `aios-output-${oldTimestamp}.md`);
+      const oldLockPath = path.join(tmpDir, `aios-lock-${oldTimestamp}.lock`);
+      const oldContextPath = path.join(tmpDir, `aios-context-${oldTimestamp}.json`);
+
+      await fs.writeFile(oldOutputPath, 'old output');
+      await fs.writeFile(oldLockPath, '');
+      await fs.writeFile(oldContextPath, '{}');
+
+      // Run cleanup with 1 hour max age
+      const cleaned = await TerminalSpawner.cleanupOldFiles(tmpDir, 
3600000); + + // Verify files are cleaned (at least the ones we created) + expect(cleaned).toBeGreaterThanOrEqual(0); + + // Note: Due to file modification time, the actual cleanup depends on + // when the file was last modified, not its name. So we just verify + // the function runs without error. + }); + + test('should not clean recent files', async () => { + const recentTimestamp = Date.now(); + const recentPath = path.join(tmpDir, `aios-output-${recentTimestamp}.md`); + await fs.writeFile(recentPath, 'recent output'); + + try { + await TerminalSpawner.cleanupOldFiles(tmpDir, 3600000); + + // Recent file should still exist + const exists = await fs + .access(recentPath) + .then(() => true) + .catch(() => false); + expect(exists).toBe(true); + } finally { + await fs.unlink(recentPath).catch(() => {}); + } + }); + + test('should handle non-existent directory gracefully', async () => { + const cleaned = await TerminalSpawner.cleanupOldFiles('/nonexistent/path'); + expect(cleaned).toBe(0); + }); + }); + + // ============================================ + // Spawn Agent Validation Tests + // ============================================ + describe('spawnAgent Validation', () => { + test('should reject invalid agent ID', async () => { + await expect(TerminalSpawner.spawnAgent('', 'develop')).rejects.toThrow( + /Agent ID is required/, + ); + }); + + test('should reject invalid task', async () => { + await expect(TerminalSpawner.spawnAgent('dev', '')).rejects.toThrow(/Task is required/); + }); + + test('should reject agent ID with invalid characters', async () => { + await expect(TerminalSpawner.spawnAgent('dev@123', 'develop')).rejects.toThrow( + /Invalid agent ID format/, + ); + }); + + test('should reject task with invalid characters', async () => { + await expect(TerminalSpawner.spawnAgent('dev', 'develop!test')).rejects.toThrow( + /Invalid task format/, + ); + }); + + test('should accept valid agent ID formats', async () => { + // These should not throw validation 
errors (may fail at spawn) + const validAgents = ['dev', 'architect', 'qa-expert', 'ux-design-expert']; + + for (const agent of validAgents) { + // Test validation only - spawnAgent will fail at exec but not validation + try { + await TerminalSpawner.spawnAgent(agent, 'test', { retries: 1, timeout: 100 }); + } catch (error) { + // Should not be a validation error + expect(error.message).not.toMatch(/Invalid agent/); + } + } + }); + }); + + // ============================================ + // Integration with Index Tests + // ============================================ + describe('Index Integration', () => { + test('should be exported from orchestration index', () => { + const orchestration = require('../../.aios-core/core/orchestration'); + + expect(orchestration.TerminalSpawner).toBeDefined(); + expect(orchestration.spawnAgent).toBeDefined(); + expect(orchestration.createContextFile).toBeDefined(); + expect(orchestration.pollForOutput).toBeDefined(); + expect(orchestration.isSpawnerAvailable).toBeDefined(); + expect(orchestration.getPlatform).toBeDefined(); + expect(orchestration.cleanupOldFiles).toBeDefined(); + }); + + test('exported functions should be callable', () => { + const orchestration = require('../../.aios-core/core/orchestration'); + + expect(typeof orchestration.spawnAgent).toBe('function'); + expect(typeof orchestration.createContextFile).toBe('function'); + expect(typeof orchestration.pollForOutput).toBe('function'); + expect(typeof orchestration.isSpawnerAvailable).toBe('function'); + expect(typeof orchestration.getPlatform).toBe('function'); + }); + }); +}); + +// ============================================ +// pm.sh Script Tests (Task 6.2) +// ============================================ +describe('pm.sh Script', () => { + const { execSync } = require('child_process'); + const scriptPath = TerminalSpawner.getScriptPath(); + + test('should display help with --help flag', () => { + const result = execSync(`bash "${scriptPath}" --help`, { 
encoding: 'utf8' }); + expect(result).toContain('AIOS Multi-Modal Orchestration Script'); + expect(result).toContain('Usage:'); + expect(result).toContain('Arguments:'); + expect(result).toContain('Options:'); + }); + + test('should display version with --version flag', () => { + const result = execSync(`bash "${scriptPath}" --version`, { encoding: 'utf8' }); + expect(result).toContain('version'); + expect(result).toMatch(/\d+\.\d+\.\d+/); + }); + + test('should fail with missing arguments', () => { + try { + execSync(`bash "${scriptPath}"`, { encoding: 'utf8', stdio: 'pipe' }); + fail('Should have thrown an error'); + } catch (error) { + expect(error.status).toBe(1); + } + }); + + test('should fail with only agent argument', () => { + try { + execSync(`bash "${scriptPath}" dev`, { encoding: 'utf8', stdio: 'pipe' }); + fail('Should have thrown an error'); + } catch (error) { + expect(error.status).toBe(1); + } + }); + + test('should fail with non-existent context file', () => { + try { + execSync(`bash "${scriptPath}" dev develop --context /nonexistent/file.json`, { + encoding: 'utf8', + stdio: 'pipe', + }); + fail('Should have thrown an error'); + } catch (error) { + expect(error.status).toBe(1); + } + }); +}); + +``` + +================================================== +📄 tests/core/executor-assignment.test.js +================================================== +```js +/** + * Executor Assignment Tests + * + * Story: 11.1 - Dynamic Executor Assignment + * Epic: Epic 11 - Projeto Bob + * + * Tests the executor assignment module for automatic story-to-executor mapping. 
+ * + * @author @dev (Dex) + * @version 1.0.0 + */ + +const { + detectStoryType, + assignExecutor, + assignExecutorFromContent, + validateExecutorAssignment, + getStoryTypes, + getStoryTypeConfig, + getExecutorWorkTypes, + EXECUTOR_ASSIGNMENT_TABLE, + DEFAULT_ASSIGNMENT, +} = require('../../.aios-core/core/orchestration/executor-assignment'); + +describe('ExecutorAssignment', () => { + describe('detectStoryType (AC1, Task 1.1)', () => { + it('should detect code_general type from feature keywords', () => { + const content = 'Implement user authentication handler with JWT tokens'; + expect(detectStoryType(content)).toBe('code_general'); + }); + + it('should detect code_general type from service keywords', () => { + const content = 'Create a new user service module'; + expect(detectStoryType(content)).toBe('code_general'); + }); + + it('should detect database type from schema keywords', () => { + const content = 'Create user table schema with constraints'; + expect(detectStoryType(content)).toBe('database'); + }); + + it('should detect database type from RLS keywords', () => { + const content = 'Implement RLS policies for user table database protection'; + expect(detectStoryType(content)).toBe('database'); + }); + + it('should detect database type from migration keywords', () => { + const content = 'Add migration for new column in users table'; + expect(detectStoryType(content)).toBe('database'); + }); + + it('should detect infrastructure type from CI/CD keywords', () => { + const content = 'Setup CI/CD pipeline with GitHub Actions'; + expect(detectStoryType(content)).toBe('infrastructure'); + }); + + it('should detect infrastructure type from deploy keywords', () => { + const content = 'Configure Docker deployment to production environment'; + expect(detectStoryType(content)).toBe('infrastructure'); + }); + + it('should detect ui_ux type from component keywords', () => { + const content = 'Create responsive UI component for user profile'; + 
expect(detectStoryType(content)).toBe('ui_ux'); + }); + + it('should detect ui_ux type from accessibility keywords', () => { + const content = 'Improve accessibility (a11y) of navigation menu'; + expect(detectStoryType(content)).toBe('ui_ux'); + }); + + it('should detect research type from investigation keywords', () => { + const content = 'Research and compare authentication libraries'; + expect(detectStoryType(content)).toBe('research'); + }); + + it('should detect research type from POC keywords', () => { + const content = 'Create POC for real-time notifications'; + expect(detectStoryType(content)).toBe('research'); + }); + + it('should detect architecture type from design decision keywords', () => { + const content = 'Architecture decision for microservice vs monolith'; + expect(detectStoryType(content)).toBe('architecture'); + }); + + it('should detect architecture type from scalability keywords', () => { + const content = 'Design scalability pattern for high traffic'; + expect(detectStoryType(content)).toBe('architecture'); + }); + + it('should default to code_general for empty content', () => { + expect(detectStoryType('')).toBe('code_general'); + expect(detectStoryType(null)).toBe('code_general'); + expect(detectStoryType(undefined)).toBe('code_general'); + }); + + it('should handle mixed keywords with highest score', () => { + // More database keywords than code_general + const content = 'Create schema for user table with migration, add index and constraints for foreign_key'; + expect(detectStoryType(content)).toBe('database'); + }); + + it('should be case-insensitive', () => { + const content = 'IMPLEMENT JWT AUTHENTICATION HANDLER'; + expect(detectStoryType(content)).toBe('code_general'); + }); + }); + + describe('assignExecutor (AC2, Task 1.2)', () => { + it('should assign @dev as executor for code_general', () => { + const assignment = assignExecutor('code_general'); + expect(assignment.executor).toBe('@dev'); + 
expect(assignment.quality_gate).toBe('@architect'); + }); + + it('should assign @data-engineer as executor for database', () => { + const assignment = assignExecutor('database'); + expect(assignment.executor).toBe('@data-engineer'); + expect(assignment.quality_gate).toBe('@dev'); + }); + + it('should assign @devops as executor for infrastructure', () => { + const assignment = assignExecutor('infrastructure'); + expect(assignment.executor).toBe('@devops'); + expect(assignment.quality_gate).toBe('@architect'); + }); + + it('should assign @ux-design-expert as executor for ui_ux', () => { + const assignment = assignExecutor('ui_ux'); + expect(assignment.executor).toBe('@ux-design-expert'); + expect(assignment.quality_gate).toBe('@dev'); + }); + + it('should assign @analyst as executor for research', () => { + const assignment = assignExecutor('research'); + expect(assignment.executor).toBe('@analyst'); + expect(assignment.quality_gate).toBe('@pm'); + }); + + it('should assign @architect as executor for architecture', () => { + const assignment = assignExecutor('architecture'); + expect(assignment.executor).toBe('@architect'); + expect(assignment.quality_gate).toBe('@pm'); + }); + + it('should include quality_gate_tools for each type', () => { + for (const type of getStoryTypes()) { + const assignment = assignExecutor(type); + expect(Array.isArray(assignment.quality_gate_tools)).toBe(true); + expect(assignment.quality_gate_tools.length).toBeGreaterThan(0); + } + }); + + it('should return default assignment for unknown type', () => { + const assignment = assignExecutor('unknown_type'); + expect(assignment.executor).toBe(DEFAULT_ASSIGNMENT.executor); + expect(assignment.quality_gate).toBe(DEFAULT_ASSIGNMENT.quality_gate); + }); + + it('should return a new array for quality_gate_tools (immutability)', () => { + const assignment1 = assignExecutor('code_general'); + const assignment2 = assignExecutor('code_general'); + 
expect(assignment1.quality_gate_tools).not.toBe(assignment2.quality_gate_tools); + }); + }); + + describe('Executor != Quality Gate (AC3, AC5)', () => { + it('should never have executor equal to quality_gate in table', () => { + for (const [type, config] of Object.entries(EXECUTOR_ASSIGNMENT_TABLE)) { + expect(config.executor).not.toBe(config.quality_gate); + } + }); + + it('should never return assignment with executor equal to quality_gate', () => { + for (const type of getStoryTypes()) { + const assignment = assignExecutor(type); + expect(assignment.executor).not.toBe(assignment.quality_gate); + } + }); + }); + + describe('validateExecutorAssignment (AC5, Task 5)', () => { + it('should validate correct assignment', () => { + const result = validateExecutorAssignment({ + executor: '@dev', + quality_gate: '@architect', + quality_gate_tools: ['code_review'], + }); + expect(result.isValid).toBe(true); + expect(result.errors).toHaveLength(0); + }); + + it('should fail when executor is missing', () => { + const result = validateExecutorAssignment({ + quality_gate: '@architect', + quality_gate_tools: ['code_review'], + }); + expect(result.isValid).toBe(false); + expect(result.errors).toContain('Missing required field: executor'); + }); + + it('should fail when quality_gate is missing', () => { + const result = validateExecutorAssignment({ + executor: '@dev', + quality_gate_tools: ['code_review'], + }); + expect(result.isValid).toBe(false); + expect(result.errors).toContain('Missing required field: quality_gate'); + }); + + it('should fail when quality_gate_tools is missing', () => { + const result = validateExecutorAssignment({ + executor: '@dev', + quality_gate: '@architect', + }); + expect(result.isValid).toBe(false); + expect(result.errors.some((e) => e.includes('quality_gate_tools'))).toBe(true); + }); + + it('should fail when quality_gate_tools is empty', () => { + const result = validateExecutorAssignment({ + executor: '@dev', + quality_gate: '@architect', + 
quality_gate_tools: [], + }); + expect(result.isValid).toBe(false); + expect(result.errors).toContain('quality_gate_tools cannot be empty'); + }); + + it('should fail when executor equals quality_gate', () => { + const result = validateExecutorAssignment({ + executor: '@dev', + quality_gate: '@dev', + quality_gate_tools: ['code_review'], + }); + expect(result.isValid).toBe(false); + expect(result.errors.some((e) => e.includes('cannot be the same'))).toBe(true); + }); + + it('should warn about unknown executor', () => { + const result = validateExecutorAssignment({ + executor: '@unknown-agent', + quality_gate: '@architect', + quality_gate_tools: ['code_review'], + }); + expect(result.isValid).toBe(false); + expect(result.errors.some((e) => e.includes('Unknown executor'))).toBe(true); + }); + }); + + describe('assignExecutorFromContent (Task 1)', () => { + it('should combine detection and assignment', () => { + const content = 'Create migration for adding user_roles table'; + const assignment = assignExecutorFromContent(content); + + expect(assignment.executor).toBe('@data-engineer'); + expect(assignment.quality_gate).toBe('@dev'); + }); + + it('should work with multiline story content', () => { + const content = ` + # Story: Implement User Authentication + + ## Description + Create a JWT-based authentication handler for the API. 
+ + ## Acceptance Criteria + - Implement login endpoint + - Implement refresh token logic + - Add authentication middleware + `; + const assignment = assignExecutorFromContent(content); + + expect(assignment.executor).toBe('@dev'); + }); + }); + + describe('Utility Functions', () => { + describe('getStoryTypes', () => { + it('should return all story types', () => { + const types = getStoryTypes(); + expect(types).toContain('code_general'); + expect(types).toContain('database'); + expect(types).toContain('infrastructure'); + expect(types).toContain('ui_ux'); + expect(types).toContain('research'); + expect(types).toContain('architecture'); + }); + }); + + describe('getStoryTypeConfig', () => { + it('should return config for valid type', () => { + const config = getStoryTypeConfig('database'); + expect(config).toBeDefined(); + expect(config.executor).toBe('@data-engineer'); + expect(config.keywords).toContain('schema'); + }); + + it('should return null for invalid type', () => { + const config = getStoryTypeConfig('invalid_type'); + expect(config).toBeNull(); + }); + }); + + describe('getExecutorWorkTypes', () => { + it('should map executors to their work types', () => { + const map = getExecutorWorkTypes(); + + expect(map['@dev']).toContain('code_general'); + expect(map['@data-engineer']).toContain('database'); + expect(map['@devops']).toContain('infrastructure'); + expect(map['@ux-design-expert']).toContain('ui_ux'); + expect(map['@analyst']).toContain('research'); + expect(map['@architect']).toContain('architecture'); + }); + }); + }); + + describe('Assignment Table Completeness (AC2)', () => { + const expectedMappings = [ + { type: 'code_general', executor: '@dev', qg: '@architect' }, + { type: 'database', executor: '@data-engineer', qg: '@dev' }, + { type: 'infrastructure', executor: '@devops', qg: '@architect' }, + { type: 'ui_ux', executor: '@ux-design-expert', qg: '@dev' }, + { type: 'research', executor: '@analyst', qg: '@pm' }, + { type: 'architecture', 
executor: '@architect', qg: '@pm' }, + ]; + + expectedMappings.forEach(({ type, executor, qg }) => { + it(`should map ${type} to executor ${executor} and QG ${qg}`, () => { + const assignment = assignExecutor(type); + expect(assignment.executor).toBe(executor); + expect(assignment.quality_gate).toBe(qg); + }); + }); + }); + + describe('Edge Cases (Task 6.4)', () => { + it('should handle story with equal keyword scores', () => { + // When scores are equal, should return one of them deterministically + const content = 'feature schema'; // 1 code_general, 1 database + const type = detectStoryType(content); + expect(['code_general', 'database']).toContain(type); + }); + + it('should handle very long content', () => { + const content = 'implement '.repeat(1000) + 'feature handler service'; + const type = detectStoryType(content); + expect(type).toBe('code_general'); + }); + + it('should handle special characters in content', () => { + const content = 'Create CI/CD pipeline with feature-branch deployment'; + const type = detectStoryType(content); + expect(type).toBe('infrastructure'); + }); + + it('should handle non-string content', () => { + expect(detectStoryType(123)).toBe('code_general'); + expect(detectStoryType({})).toBe('code_general'); + expect(detectStoryType([])).toBe('code_general'); + }); + }); +}); + +``` + +================================================== +📄 tests/core/ui/observability-panel.test.js +================================================== +```js +/** + * Observability Panel Tests + * + * Story 11.6: Projeto Bob - Painel de Observabilidade CLI + * + * Tests for: + * - AC1: Panel shows current_agent + * - AC2: Panel shows pipeline_progress + * - AC3: Panel shows active_terminals + * - AC4: Panel shows elapsed_time + * - AC5: Modo minimal (default) + * - AC6: Modo detailed (educativo) + * - AC7: CLI-first (terminal rendering) + * - AC8: Real-time updates + * + * @module tests/core/ui/observability-panel + */ + +'use strict'; + +const { + 
ObservabilityPanel, + createPanel, + PanelMode, + PipelineStage, + createDefaultState, +} = require('../../../.aios-core/core/ui/observability-panel'); + +const { PanelRenderer, BOX, STATUS } = require('../../../.aios-core/core/ui/panel-renderer'); + +describe('ObservabilityPanel', () => { + describe('createPanel', () => { + it('should create a panel with default options', () => { + const panel = createPanel(); + + expect(panel).toBeInstanceOf(ObservabilityPanel); + expect(panel.state.mode).toBe(PanelMode.MINIMAL); + expect(panel.state.refresh_rate).toBe(1000); + }); + + it('should create a panel with custom options', () => { + const panel = createPanel({ + mode: PanelMode.DETAILED, + refreshRate: 500, + width: 80, + }); + + expect(panel.state.mode).toBe(PanelMode.DETAILED); + expect(panel.state.refresh_rate).toBe(500); + expect(panel.options.width).toBe(80); + }); + }); + + describe('createDefaultState', () => { + it('should create valid default state', () => { + const state = createDefaultState(); + + expect(state.mode).toBe(PanelMode.MINIMAL); + expect(state.refresh_rate).toBe(1000); + expect(state.pipeline).toBeDefined(); + expect(state.pipeline.stages).toEqual(Object.values(PipelineStage)); + expect(state.current_agent).toBeDefined(); + expect(state.active_terminals).toBeDefined(); + expect(state.elapsed).toBeDefined(); + expect(state.tradeoffs).toEqual([]); + expect(state.errors).toEqual([]); + expect(state.next_steps).toEqual([]); + }); + }); + + describe('AC1: Current Agent Display', () => { + it('should set current agent correctly', () => { + const panel = createPanel(); + + panel.setCurrentAgent('@dev', 'Dex', 'implementing jwt-handler.ts'); + + expect(panel.state.current_agent.id).toBe('@dev'); + expect(panel.state.current_agent.name).toBe('Dex'); + expect(panel.state.current_agent.task).toBe('implementing jwt-handler.ts'); + }); + + it('should set agent with reason for detailed mode', () => { + const panel = createPanel({ mode: PanelMode.DETAILED }); + 
+ panel.setCurrentAgent('@dev', 'Dex', 'implementing jwt-handler.ts', 'Story tipo "código geral"'); + + expect(panel.state.current_agent.reason).toBe('Story tipo "código geral"'); + }); + + it('should render current agent in output', () => { + const panel = createPanel(); + panel.setCurrentAgent('@dev', 'Dex', 'implementing feature'); + + const output = panel.renderOnce(); + + expect(output).toContain('@dev'); + expect(output).toContain('implementing feature'); + }); + }); + + describe('AC2: Pipeline Progress Display', () => { + it('should set pipeline stage correctly', () => { + const panel = createPanel(); + + panel.setPipelineStage(PipelineStage.DEV, '3/8'); + + expect(panel.state.pipeline.current_stage).toBe('Dev'); + expect(panel.state.pipeline.story_progress).toBe('3/8'); + }); + + it('should mark stages as completed', () => { + const panel = createPanel(); + + panel.completePipelineStage(PipelineStage.PRD); + panel.completePipelineStage(PipelineStage.EPIC); + + expect(panel.state.pipeline.completed_stages).toContain('PRD'); + expect(panel.state.pipeline.completed_stages).toContain('Epic'); + }); + + it('should not duplicate completed stages', () => { + const panel = createPanel(); + + panel.completePipelineStage(PipelineStage.PRD); + panel.completePipelineStage(PipelineStage.PRD); + + expect(panel.state.pipeline.completed_stages.filter((s) => s === 'PRD').length).toBe(1); + }); + + it('should render pipeline progress in output', () => { + const panel = createPanel(); + panel.completePipelineStage(PipelineStage.PRD); + panel.setPipelineStage(PipelineStage.STORY, '3/8'); + + const output = panel.renderOnce(); + + expect(output).toContain('Pipeline'); + expect(output).toContain('3/8'); + }); + }); + + describe('AC3: Active Terminals Display', () => { + it('should add active terminal', () => { + const panel = createPanel(); + + panel.addTerminal('@dev', 12345, 'jwt-handler.ts'); + + expect(panel.state.active_terminals.count).toBe(1); + 
expect(panel.state.active_terminals.list[0]).toEqual({ + agent: '@dev', + pid: 12345, + task: 'jwt-handler.ts', + }); + }); + + it('should add multiple terminals', () => { + const panel = createPanel(); + + panel.addTerminal('@dev', 12345, 'jwt-handler.ts'); + panel.addTerminal('@data-engineer', 12346, 'migration pending'); + + expect(panel.state.active_terminals.count).toBe(2); + }); + + it('should remove terminal by PID', () => { + const panel = createPanel(); + + panel.addTerminal('@dev', 12345, 'jwt-handler.ts'); + panel.addTerminal('@data-engineer', 12346, 'migration pending'); + panel.removeTerminal(12345); + + expect(panel.state.active_terminals.count).toBe(1); + expect(panel.state.active_terminals.list[0].agent).toBe('@data-engineer'); + }); + + it('should render active terminals in output', () => { + const panel = createPanel(); + panel.addTerminal('@dev', 12345, 'jwt-handler.ts'); + + const output = panel.renderOnce(); + + expect(output).toContain('Terminals'); + expect(output).toContain('@dev'); + }); + }); + + describe('AC4: Elapsed Time Display', () => { + it('should track session elapsed time', () => { + const panel = createPanel(); + + const elapsed = panel.getElapsedTime(); + + expect(elapsed.session).toBeDefined(); + expect(elapsed.session).not.toBe('--'); + }); + + it('should track story elapsed time after starting', () => { + const panel = createPanel(); + + panel.startStoryTimer(); + const elapsed = panel.getElapsedTime(); + + expect(elapsed.story).toBeDefined(); + expect(elapsed.story).not.toBe('--'); + }); + + it('should return -- for story time when not started', () => { + const panel = createPanel(); + + const elapsed = panel.getElapsedTime(); + + expect(elapsed.story).toBe('--'); + }); + + it('should format time correctly', () => { + const panel = createPanel(); + // Set story start to 65 seconds ago + panel.state.elapsed.story_start = Date.now() - 65000; + + const elapsed = panel.getElapsedTime(); + + // Should be around 1m5s + 
expect(elapsed.story).toMatch(/1m\d+s/); + }); + + it('should render elapsed time in output', () => { + const panel = createPanel(); + + const output = panel.renderOnce(); + + expect(output).toContain('Elapsed'); + expect(output).toContain('story'); + expect(output).toContain('session'); + }); + }); + + describe('AC5: Minimal Mode (Default)', () => { + it('should default to minimal mode', () => { + const panel = createPanel(); + + expect(panel.state.mode).toBe(PanelMode.MINIMAL); + }); + + it('should render compact output in minimal mode', () => { + const panel = createPanel(); + panel.setCurrentAgent('@dev', 'Dex', 'implementing feature'); + panel.setPipelineStage(PipelineStage.DEV, '3/8'); + + const output = panel.renderOnce(); + const lines = output.split('\n').filter((l) => l.length > 0); + + // Minimal mode should have fewer lines (roughly 7-8) + expect(lines.length).toBeLessThan(15); + }); + + it('should show errors in minimal mode', () => { + const panel = createPanel(); + panel.addError('Build failed: syntax error'); + + const output = panel.renderOnce(); + + expect(output).toContain('Build failed'); + }); + + it('should NOT show trade-offs in minimal mode', () => { + const panel = createPanel(); + panel.addTradeoff('JWT vs Session', 'JWT', 'stateless'); + + const output = panel.renderOnce(); + + // Trade-offs section should not appear in minimal mode + expect(output).not.toContain('Trade-offs considerados'); + }); + }); + + describe('AC6: Detailed Mode (Educativo)', () => { + it('should render detailed output when set', () => { + const panel = createPanel({ mode: PanelMode.DETAILED }); + + const output = panel.renderOnce(); + + expect(output).toContain('Modo Educativo'); + }); + + it('should show trade-offs in detailed mode', () => { + const panel = createPanel({ mode: PanelMode.DETAILED }); + panel.addTradeoff('JWT vs Session', 'JWT', 'stateless'); + + const output = panel.renderOnce(); + + expect(output).toContain('Trade-offs considerados'); + 
expect(output).toContain('JWT vs Session'); + expect(output).toContain('JWT'); + }); + + it('should show next steps in detailed mode', () => { + const panel = createPanel({ mode: PanelMode.DETAILED }); + panel.setNextSteps([ + '@dev termina implementação', + 'Quality Gate por @architect', + '@devops push e PR', + ]); + + const output = panel.renderOnce(); + + expect(output).toContain('Next Steps'); + expect(output).toContain('@dev termina implementação'); + }); + + it('should show agent reason in detailed mode', () => { + const panel = createPanel({ mode: PanelMode.DETAILED }); + panel.setCurrentAgent('@dev', 'Dex', 'implementing', 'Story tipo "código geral"'); + + const output = panel.renderOnce(); + + expect(output).toContain('Por que @dev'); + expect(output).toContain('código geral'); + }); + + it('should render more lines than minimal mode', () => { + const panel = createPanel({ mode: PanelMode.DETAILED }); + panel.setCurrentAgent('@dev', 'Dex', 'implementing feature'); + panel.addTerminal('@dev', 12345, 'jwt-handler.ts'); + panel.addTradeoff('JWT vs Session', 'JWT', 'stateless'); + + const output = panel.renderOnce(); + const lines = output.split('\n').filter((l) => l.length > 0); + + // Detailed mode should have more lines + expect(lines.length).toBeGreaterThan(10); + }); + }); + + describe('AC5-6: Mode Toggle', () => { + it('should toggle between modes', () => { + const panel = createPanel(); + + expect(panel.state.mode).toBe(PanelMode.MINIMAL); + + const newMode = panel.toggleMode(); + expect(newMode).toBe(PanelMode.DETAILED); + expect(panel.state.mode).toBe(PanelMode.DETAILED); + + const nextMode = panel.toggleMode(); + expect(nextMode).toBe(PanelMode.MINIMAL); + }); + + it('should set mode directly', () => { + const panel = createPanel(); + + panel.setMode(PanelMode.DETAILED); + expect(panel.state.mode).toBe(PanelMode.DETAILED); + + panel.setMode(PanelMode.MINIMAL); + expect(panel.state.mode).toBe(PanelMode.MINIMAL); + }); + + it('should ignore invalid 
mode', () => { + const panel = createPanel(); + + panel.setMode('invalid'); + expect(panel.state.mode).toBe(PanelMode.MINIMAL); + }); + }); + + describe('AC7: CLI-First (Terminal Rendering)', () => { + it('should use box drawing characters', () => { + const panel = createPanel(); + + const output = panel.renderOnce(); + + // Should contain Unicode box drawing characters + expect(output).toContain(BOX.topLeft); + expect(output).toContain(BOX.bottomRight); + expect(output).toContain(BOX.vertical); + }); + + it('should use chalk for styling (colors disabled in non-TTY)', () => { + const panel = createPanel(); + + const output = panel.renderOnce(); + + // In non-TTY environments (like Jest), chalk disables colors + // Verify the output structure is valid regardless of color codes + expect(output).toContain('Bob Status'); + expect(output).toContain('Pipeline'); + + // If running in TTY, ANSI codes would be present + // We just verify the rendering works + expect(typeof output).toBe('string'); + expect(output.length).toBeGreaterThan(100); + }); + + it('should render consistent width', () => { + const panel = createPanel({ width: 60 }); + + const output = panel.renderOnce(); + const lines = output.split('\n'); + + // Check that box lines have consistent structure + const topLine = lines.find((l) => l.includes(BOX.topLeft)); + const bottomLine = lines.find((l) => l.includes(BOX.bottomLeft)); + + expect(topLine).toBeDefined(); + expect(bottomLine).toBeDefined(); + }); + }); + + describe('AC8: Real-time Updates', () => { + it('should start and stop refresh loop', () => { + const panel = createPanel({ refreshRate: 100 }); + + // Mock stdout.write to prevent actual output + const originalWrite = process.stdout.write; + process.stdout.write = jest.fn(); + + panel.start(); + expect(panel.isRunning).toBe(true); + expect(panel.refreshInterval).not.toBeNull(); + + panel.stop(); + expect(panel.isRunning).toBe(false); + expect(panel.refreshInterval).toBeNull(); + + // Restore + 
process.stdout.write = originalWrite; + }); + + it('should not start if already running', () => { + const panel = createPanel(); + + // Mock stdout.write to prevent actual output + const originalWrite = process.stdout.write; + process.stdout.write = jest.fn(); + + panel.start(); + const firstInterval = panel.refreshInterval; + + panel.start(); + expect(panel.refreshInterval).toBe(firstInterval); + + panel.stop(); + process.stdout.write = originalWrite; + }); + + it('should update state while running', () => { + const panel = createPanel(); + + // Mock stdout.write + const originalWrite = process.stdout.write; + process.stdout.write = jest.fn(); + + panel.start(); + + panel.setCurrentAgent('@qa', 'Quinn', 'running tests'); + expect(panel.state.current_agent.id).toBe('@qa'); + + panel.stop(); + process.stdout.write = originalWrite; + }); + + it('should use configured refresh rate', () => { + const panel = createPanel({ refreshRate: 500 }); + + expect(panel.state.refresh_rate).toBe(500); + }); + }); + + describe('State Management', () => { + it('should update state with updateState method', () => { + const panel = createPanel(); + + panel.updateState({ + pipeline: { current_stage: 'QA' }, + current_agent: { id: '@qa' }, + }); + + expect(panel.state.pipeline.current_stage).toBe('QA'); + expect(panel.state.current_agent.id).toBe('@qa'); + }); + + it('should get full state with getState method', () => { + const panel = createPanel(); + panel.setCurrentAgent('@dev', 'Dex', 'implementing'); + panel.startStoryTimer(); + + const state = panel.getState(); + + expect(state.current_agent.id).toBe('@dev'); + expect(state.elapsed.story).toBeDefined(); + expect(state.elapsed.session).toBeDefined(); + }); + + it('should add and clear errors', () => { + const panel = createPanel(); + + panel.addError('Error 1'); + panel.addError('Error 2'); + + expect(panel.state.errors.length).toBe(2); + + panel.clearErrors(); + expect(panel.state.errors.length).toBe(0); + }); + }); +}); + 
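// Minimal standalone sketch of the elapsed-time format that the AC4 tests
// above and the PanelRenderer formatting tests below assert: '--' for a null
// start, seconds only under a minute ('30s'), minutes+seconds under an hour
// ('1m30s'), hours+minutes beyond ('1h1m'). This is one implementation that
// satisfies those patterns, NOT the actual formatter, which lives in
// .aios-core/core/ui/panel-renderer.js and may differ in detail.
function sketchFormatElapsed(startMs, nowMs = Date.now()) {
  if (startMs == null) return '--';
  const totalSec = Math.floor((nowMs - startMs) / 1000);
  if (totalSec < 60) return `${totalSec}s`;
  const min = Math.floor(totalSec / 60);
  if (min < 60) return `${min}m${totalSec % 60}s`;
  return `${Math.floor(min / 60)}h${min % 60}m`;
}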
+describe('PanelRenderer', () => { + let renderer; + + beforeEach(() => { + renderer = new PanelRenderer({ width: 60 }); + }); + + describe('Border rendering', () => { + it('should render top border', () => { + const border = renderer.topBorder(); + + expect(border).toContain(BOX.topLeft); + expect(border).toContain(BOX.topRight); + expect(border).toContain(BOX.horizontal); + }); + + it('should render bottom border', () => { + const border = renderer.bottomBorder(); + + expect(border).toContain(BOX.bottomLeft); + expect(border).toContain(BOX.bottomRight); + }); + + it('should render separator', () => { + const sep = renderer.separator(); + + expect(sep).toContain(BOX.teeRight); + expect(sep).toContain(BOX.teeLeft); + }); + }); + + describe('Content line rendering', () => { + it('should render content with borders', () => { + const line = renderer.contentLine('Test content'); + + expect(line).toContain(BOX.vertical); + expect(line).toContain('Test content'); + }); + + it('should strip ANSI codes for length calculation', () => { + const stripped = renderer.stripAnsi('\x1B[32mGreen text\x1B[0m'); + + expect(stripped).toBe('Green text'); + }); + }); + + describe('Pipeline rendering', () => { + it('should render pipeline with stages', () => { + const pipeline = { + stages: ['PRD', 'Epic', 'Story', 'Dev', 'QA', 'Push'], + current_stage: 'Dev', + story_progress: '3/8', + completed_stages: ['PRD', 'Epic'], + }; + + const output = renderer.renderPipeline(pipeline); + + expect(output).toContain('PRD'); + expect(output).toContain('Dev'); + expect(output).toContain('→'); + }); + }); + + describe('Minimal mode rendering', () => { + it('should render valid minimal panel', () => { + const state = { + mode: 'minimal', + pipeline: { + stages: ['PRD', 'Epic', 'Story', 'Dev', 'QA', 'Push'], + current_stage: 'Dev', + story_progress: '3/8', + completed_stages: ['PRD'], + }, + current_agent: { + id: '@dev', + name: 'Dex', + task: 'implementing', + }, + active_terminals: { + count: 1, + 
list: [{ agent: '@dev', pid: 12345, task: 'jwt-handler.ts' }], + }, + elapsed: { + story_start: Date.now() - 60000, + session_start: Date.now() - 300000, + }, + errors: [], + }; + + const output = renderer.renderMinimal(state); + + expect(output).toContain('Bob Status'); + expect(output).toContain('Pipeline'); + expect(output).toContain('@dev'); + expect(output).toContain('Terminals'); + expect(output).toContain('Elapsed'); + }); + }); + + describe('Detailed mode rendering', () => { + it('should render valid detailed panel', () => { + const state = { + mode: 'detailed', + pipeline: { + stages: ['PRD', 'Epic', 'Story', 'Dev', 'QA', 'Push'], + current_stage: 'Dev', + story_progress: '3/8', + completed_stages: ['PRD'], + }, + current_agent: { + id: '@dev', + name: 'Dex', + task: 'implementing', + reason: 'Story tipo código geral', + }, + active_terminals: { + count: 1, + list: [{ agent: '@dev', pid: 12345, task: 'jwt-handler.ts' }], + }, + elapsed: { + story_start: Date.now() - 60000, + session_start: Date.now() - 300000, + }, + tradeoffs: [{ choice: 'JWT vs Session', selected: 'JWT', reason: 'stateless' }], + next_steps: ['@dev finish', '@qa review'], + errors: [], + }; + + const output = renderer.renderDetailed(state); + + expect(output).toContain('Modo Educativo'); + expect(output).toContain('Current Agent'); + expect(output).toContain('Active Terminals'); + expect(output).toContain('Trade-offs considerados'); + expect(output).toContain('Next Steps'); + }); + }); + + describe('Time formatting', () => { + it('should format seconds correctly', () => { + const state = { + elapsed: { + story_start: Date.now() - 30000, // 30 seconds + session_start: Date.now() - 30000, + }, + }; + + const formatted = renderer.formatElapsedTime(state); + + expect(formatted.story).toMatch(/\d+s/); + }); + + it('should format minutes correctly', () => { + const state = { + elapsed: { + story_start: Date.now() - 90000, // 1.5 minutes + session_start: Date.now() - 90000, + }, + }; + + const 
formatted = renderer.formatElapsedTime(state); + + expect(formatted.story).toMatch(/1m\d+s/); + }); + + it('should format hours correctly', () => { + const state = { + elapsed: { + story_start: Date.now() - 3700000, // ~1 hour + session_start: Date.now() - 3700000, + }, + }; + + const formatted = renderer.formatElapsedTime(state); + + expect(formatted.story).toMatch(/1h\d+m/); + }); + + it('should return -- for null timestamps', () => { + const state = { + elapsed: { + story_start: null, + session_start: Date.now(), + }, + }; + + const formatted = renderer.formatElapsedTime(state); + + expect(formatted.story).toBe('--'); + }); + }); +}); + +describe('Constants', () => { + describe('PanelMode', () => { + it('should have minimal and detailed modes', () => { + expect(PanelMode.MINIMAL).toBe('minimal'); + expect(PanelMode.DETAILED).toBe('detailed'); + }); + }); + + describe('PipelineStage', () => { + it('should have all required stages', () => { + expect(PipelineStage.PRD).toBe('PRD'); + expect(PipelineStage.EPIC).toBe('Epic'); + expect(PipelineStage.STORY).toBe('Story'); + expect(PipelineStage.DEV).toBe('Dev'); + expect(PipelineStage.QA).toBe('QA'); + expect(PipelineStage.PUSH).toBe('Push'); + }); + }); + + describe('Box drawing characters', () => { + it('should have all required characters', () => { + expect(BOX.topLeft).toBe('┌'); + expect(BOX.topRight).toBe('┐'); + expect(BOX.bottomLeft).toBe('└'); + expect(BOX.bottomRight).toBe('┘'); + expect(BOX.horizontal).toBe('─'); + expect(BOX.vertical).toBe('│'); + }); + }); + + describe('Status indicators', () => { + it('should have all required indicators', () => { + expect(STATUS.completed).toBeDefined(); + expect(STATUS.current).toBeDefined(); + expect(STATUS.pending).toBeDefined(); + expect(STATUS.error).toBeDefined(); + expect(STATUS.bullet).toBeDefined(); + }); + }); +}); + +``` + +================================================== +📄 tests/core/config/migrate-config.test.js 
+================================================== +```js +/** + * Tests for migrate-config.js — Legacy → L2 + L5 Migration + * + * Covers: + * - Legacy detection and backup creation (AC5, AC6) + * - Field extraction: user → L5, project → L2 (AC5) + * - Backup creation before migration (AC6) + * - Idempotent re-execution (Task 3.8) + * - Dry run mode + * - Edge cases: missing legacy, malformed YAML + * + * @story 12.2 + */ + +const path = require('path'); +const fs = require('fs'); +const os = require('os'); + +jest.mock('fs'); +jest.mock('os'); + +const { + migrateConfig, + isLegacyMode, + createBackup, + extractUserFields, + extractProjectFields, + ensureUserConfigDir, + writeUserConfig, + writeProjectConfig, + USER_FIELDS, + PROJECT_FIELDS, +} = require('../../../.aios-core/core/config/migrate-config'); + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +const FAKE_PROJECT = '/fake/project'; +const FAKE_HOME = '/fake/home'; + +const LEGACY_CONFIG_YAML = ` +project: + type: EXISTING_AIOS + installedAt: "2025-01-14" + version: "2.1.0" +user_profile: advanced +default_model: claude-sonnet +default_language: pt-BR +coderabbit_integration: true +educational_mode: false +project_name: my-project +project_type: fullstack +environments: + development: + database_url: postgres://localhost/dev +deploy_target: vercel +qa: + qaLocation: docs/qa +`; + +function setupFileSystem(files = {}) { + fs.existsSync.mockImplementation((filePath) => { + return Object.keys(files).some((key) => filePath.endsWith(key) || filePath === key); + }); + + fs.readFileSync.mockImplementation((filePath) => { + for (const [key, content] of Object.entries(files)) { + if (filePath.endsWith(key) || filePath === key) { + return content; + } + } + throw new Error(`ENOENT: no such file or directory: ${filePath}`); + }); + + fs.copyFileSync.mockImplementation(() => {}); + 
fs.writeFileSync.mockImplementation(() => {}); + fs.mkdirSync.mockImplementation(() => {}); +} + +beforeEach(() => { + jest.clearAllMocks(); + os.homedir.mockReturnValue(FAKE_HOME); + fs.existsSync.mockReturnValue(false); + fs.readFileSync.mockImplementation(() => { throw new Error('ENOENT'); }); + fs.copyFileSync.mockImplementation(() => {}); + fs.writeFileSync.mockImplementation(() => {}); + fs.mkdirSync.mockImplementation(() => {}); +}); + +// --------------------------------------------------------------------------- +// Field Extraction Tests +// --------------------------------------------------------------------------- + +describe('extractUserFields', () => { + it('should extract user-level fields from legacy config', () => { + // Given + const legacyConfig = { + user_profile: 'advanced', + default_model: 'claude-sonnet', + default_language: 'pt-BR', + coderabbit_integration: true, + educational_mode: false, + project_name: 'should-not-be-here', + }; + + // When + const userFields = extractUserFields(legacyConfig); + + // Then + expect(userFields.user_profile).toBe('advanced'); + expect(userFields.default_model).toBe('claude-sonnet'); + expect(userFields.default_language).toBe('pt-BR'); + expect(userFields.coderabbit_integration).toBe(true); + expect(userFields.educational_mode).toBe(false); + expect(userFields.project_name).toBeUndefined(); + }); + + it('should default user_profile to advanced when not present', () => { + // Given + const legacyConfig = { default_model: 'claude-sonnet' }; + + // When + const userFields = extractUserFields(legacyConfig); + + // Then + expect(userFields.user_profile).toBe('advanced'); + }); +}); + +describe('extractProjectFields', () => { + it('should extract project-level fields from legacy config', () => { + // Given + const legacyConfig = { + project_name: 'my-project', + project_type: 'fullstack', + environments: { development: { database_url: 'postgres://localhost/dev' } }, + deploy_target: 'vercel', + user_profile: 
'should-not-be-here', + }; + + // When + const projectFields = extractProjectFields(legacyConfig); + + // Then + expect(projectFields.project_name).toBe('my-project'); + expect(projectFields.project_type).toBe('fullstack'); + expect(projectFields.environments).toBeDefined(); + expect(projectFields.deploy_target).toBe('vercel'); + expect(projectFields.user_profile).toBeUndefined(); + }); + + it('should extract project.type from nested project object', () => { + // Given + const legacyConfig = { + project: { type: 'EXISTING_AIOS', version: '2.1.0' }, + }; + + // When + const projectFields = extractProjectFields(legacyConfig); + + // Then + expect(projectFields.project_type).toBe('EXISTING_AIOS'); + }); + + it('should not override explicit project_type with nested project.type', () => { + // Given + const legacyConfig = { + project_type: 'fullstack', + project: { type: 'EXISTING_AIOS' }, + }; + + // When + const projectFields = extractProjectFields(legacyConfig); + + // Then + expect(projectFields.project_type).toBe('fullstack'); + }); +}); + +// --------------------------------------------------------------------------- +// Backup Tests (AC6) +// --------------------------------------------------------------------------- + +describe('createBackup', () => { + it('should create backup of legacy config', () => { + // Given + fs.existsSync.mockImplementation((filePath) => { + if (filePath.includes('core-config.yaml') && !filePath.includes('backup')) return true; + return false; + }); + + // When + const backupPath = createBackup(FAKE_PROJECT); + + // Then + expect(backupPath).toContain('core-config.backup.yaml'); + expect(fs.copyFileSync).toHaveBeenCalledTimes(1); + }); + + it('should skip backup if already exists (idempotent)', () => { + // Given + fs.existsSync.mockImplementation((filePath) => { + if (filePath.includes('core-config.yaml')) return true; + if (filePath.includes('core-config.backup.yaml')) return true; + return false; + }); + + // When + const backupPath 
= createBackup(FAKE_PROJECT); + + // Then + expect(backupPath).toContain('core-config.backup.yaml'); + expect(fs.copyFileSync).not.toHaveBeenCalled(); + }); + + it('should return null if legacy config does not exist', () => { + // Given — no files exist + + // When + const backupPath = createBackup(FAKE_PROJECT); + + // Then + expect(backupPath).toBeNull(); + }); +}); + +// --------------------------------------------------------------------------- +// Full Migration Tests (AC5) +// --------------------------------------------------------------------------- + +describe('migrateConfig', () => { + it('should perform full migration: legacy → L2 + L5', () => { + // Given + setupFileSystem({ + 'core-config.yaml': LEGACY_CONFIG_YAML, + }); + + // When + const result = migrateConfig(FAKE_PROJECT); + + // Then + expect(result.migrated).toBe(true); + expect(result.backupPath).toContain('core-config.backup.yaml'); + expect(result.userFields.user_profile).toBe('advanced'); + expect(result.userFields.default_model).toBe('claude-sonnet'); + expect(result.projectFields.project_name).toBe('my-project'); + expect(result.projectFields.deploy_target).toBe('vercel'); + expect(fs.writeFileSync).toHaveBeenCalled(); + }); + + it('should return warning when no legacy config exists', () => { + // Given — no files + + // When + const result = migrateConfig(FAKE_PROJECT); + + // Then + expect(result.migrated).toBe(false); + expect(result.warnings).toContain('No legacy core-config.yaml found. 
Nothing to migrate.'); + }); + + it('should support dry run mode (no files written)', () => { + // Given + setupFileSystem({ + 'core-config.yaml': LEGACY_CONFIG_YAML, + }); + + // When + const result = migrateConfig(FAKE_PROJECT, { dryRun: true }); + + // Then + expect(result.migrated).toBe(false); + expect(result.userFields).toBeDefined(); + expect(result.projectFields).toBeDefined(); + expect(fs.writeFileSync).not.toHaveBeenCalled(); + expect(fs.copyFileSync).not.toHaveBeenCalled(); + }); + + it('should be idempotent: re-running does not duplicate', () => { + // Given — legacy exists, and user/project config already exist + const userConfigPath = path.join(FAKE_HOME, '.aios', 'user-config.yaml'); + setupFileSystem({ + 'core-config.yaml': LEGACY_CONFIG_YAML, + 'core-config.backup.yaml': LEGACY_CONFIG_YAML, + [userConfigPath]: 'user_profile: bob\ndefault_model: claude-opus', + 'project-config.yaml': 'project_name: existing-project', + }); + + // When + const result = migrateConfig(FAKE_PROJECT); + + // Then + expect(result.migrated).toBe(true); + // Backup should NOT be created again (already exists) + expect(fs.copyFileSync).not.toHaveBeenCalled(); + }); + + it('should handle malformed YAML gracefully', () => { + // Given + setupFileSystem({ + 'core-config.yaml': '{ invalid yaml: [', + }); + + // When + const result = migrateConfig(FAKE_PROJECT); + + // Then + expect(result.migrated).toBe(false); + expect(result.warnings.length).toBeGreaterThan(0); + expect(result.warnings[0]).toContain('Failed to parse'); + }); +}); + +// --------------------------------------------------------------------------- +// ensureUserConfigDir Tests +// --------------------------------------------------------------------------- + +describe('ensureUserConfigDir', () => { + it('should create ~/.aios/ directory if it does not exist', () => { + // Given + fs.existsSync.mockReturnValue(false); + + // When + ensureUserConfigDir(); + + // Then + expect(fs.mkdirSync).toHaveBeenCalledWith( + 
expect.stringContaining('.aios'), + expect.objectContaining({ recursive: true }), + ); + }); + + it('should not create directory if it already exists', () => { + // Given + fs.existsSync.mockReturnValue(true); + + // When + ensureUserConfigDir(); + + // Then + expect(fs.mkdirSync).not.toHaveBeenCalled(); + }); +}); + +// --------------------------------------------------------------------------- +// Constants Tests +// --------------------------------------------------------------------------- + +describe('field constants', () => { + it('should define USER_FIELDS', () => { + expect(USER_FIELDS).toContain('user_profile'); + expect(USER_FIELDS).toContain('default_model'); + expect(USER_FIELDS).toContain('educational_mode'); + }); + + it('should define PROJECT_FIELDS', () => { + expect(PROJECT_FIELDS).toContain('project_name'); + expect(PROJECT_FIELDS).toContain('environments'); + expect(PROJECT_FIELDS).toContain('deploy_target'); + }); +}); + +``` + +================================================== +📄 tests/core/config/config-resolver.test.js +================================================== +```js +/** + * Tests for config-resolver.js — Schema Validation + Unified Reading + * + * Covers: + * - JSON Schema validation via ajv (AC7, AC8) + * - L5 User config reading via resolveConfig() (AC3) + * - L2 Project config reading via resolveConfig() (AC4) + * - L5 override behavior (Task 2.3) + * - Backward compatibility with legacy mode (Task 5.4) + * - Edge cases: missing schema, extra fields (Task 5.5) + * + * @story 12.2 + */ + +const path = require('path'); +const fs = require('fs'); +const os = require('os'); + +const FAKE_HOME = '/fake/home'; + +// Load real schema files before mocking fs +const realFs = jest.requireActual('fs'); +const SCHEMAS_DIR = path.join(__dirname, '..', '..', '..', '.aios-core', 'core', 'config', 'schemas'); +const REAL_SCHEMAS = {}; +for (const file of realFs.readdirSync(SCHEMAS_DIR)) { + if (file.endsWith('.schema.json')) { + 
REAL_SCHEMAS[file] = realFs.readFileSync(path.join(SCHEMAS_DIR, file), 'utf8'); + } +} + +// Must mock before requiring the module +jest.mock('fs'); +jest.mock('os', () => ({ + ...jest.requireActual('os'), + homedir: jest.fn(() => FAKE_HOME), +})); + +// Mock config-cache to avoid setInterval issues +jest.mock('../../../.aios-core/core/config/config-cache', () => { + const cache = new Map(); + const timestamps = new Map(); + return { + ConfigCache: jest.fn(), + globalConfigCache: { + get: jest.fn((key) => cache.get(key) || null), + set: jest.fn((key, value) => { cache.set(key, value); timestamps.set(key, Date.now()); }), + clear: jest.fn(() => { cache.clear(); timestamps.clear(); }), + has: jest.fn((key) => cache.has(key)), + }, + }; +}); + +const { + resolveConfig, + loadLayeredConfig, + loadLegacyConfig, + isLegacyMode, + getConfigAtLevel, + validateConfig, + clearSchemaCache, + CONFIG_FILES, + LEVELS, + SCHEMA_FILES, +} = require('../../../.aios-core/core/config/config-resolver'); +const { globalConfigCache } = require('../../../.aios-core/core/config/config-cache'); + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +const FAKE_PROJECT = '/fake/project'; + +/** + * Setup fs.existsSync and fs.readFileSync mocks for config files. + * Automatically includes real schema files for validation tests. 
+ */ +function setupFileSystem(files = {}) { + fs.existsSync.mockImplementation((filePath) => { + // Check schema files + for (const schemaFile of Object.keys(REAL_SCHEMAS)) { + if (filePath.endsWith(schemaFile)) return true; + } + return Object.keys(files).some((key) => filePath.endsWith(key) || filePath === key); + }); + + fs.readFileSync.mockImplementation((filePath) => { + // Serve real schema files + for (const [schemaFile, content] of Object.entries(REAL_SCHEMAS)) { + if (filePath.endsWith(schemaFile)) return content; + } + for (const [key, content] of Object.entries(files)) { + if (filePath.endsWith(key) || filePath === key) { + return content; + } + } + throw new Error(`ENOENT: no such file or directory: ${filePath}`); + }); +} + +beforeEach(() => { + jest.clearAllMocks(); + globalConfigCache.clear(); + clearSchemaCache(); + os.homedir.mockReturnValue(FAKE_HOME); + // Default: schema files available, config files not + setupFileSystem({}); +}); + +// --------------------------------------------------------------------------- +// Schema Validation Tests (AC7, AC8) +// --------------------------------------------------------------------------- + +describe('validateConfig', () => { + describe('user config (L5) validation', () => { + it('should return no warnings for valid user config', () => { + // Given + const data = { user_profile: 'bob', default_model: 'claude-sonnet' }; + + // When + const warnings = validateConfig('user', data, 'user-config.yaml'); + + // Then + expect(warnings).toEqual([]); + }); + + it('should return warning when user_profile is missing (required field)', () => { + // Given + const data = { default_model: 'claude-sonnet' }; + + // When + const warnings = validateConfig('user', data, 'user-config.yaml'); + + // Then + expect(warnings.length).toBeGreaterThan(0); + expect(warnings[0]).toContain('user-config.yaml inválido'); + expect(warnings[0]).toContain('user_profile'); + }); + + it('should return warning when user_profile has invalid 
enum value', () => { + // Given + const data = { user_profile: 'invalid_value' }; + + // When + const warnings = validateConfig('user', data, 'user-config.yaml'); + + // Then + expect(warnings.length).toBeGreaterThan(0); + expect(warnings[0]).toContain('user-config.yaml inválido'); + }); + + it('should accept valid enum values: bob and advanced', () => { + // Given / When / Then + expect(validateConfig('user', { user_profile: 'bob' }, 'test.yaml')).toEqual([]); + expect(validateConfig('user', { user_profile: 'advanced' }, 'test.yaml')).toEqual([]); + }); + + it('should allow extra fields (additionalProperties: true)', () => { + // Given + const data = { user_profile: 'bob', custom_field: 'value', another: 42 }; + + // When + const warnings = validateConfig('user', data, 'test.yaml'); + + // Then + expect(warnings).toEqual([]); + }); + }); + + describe('project config (L2) validation', () => { + it('should return no warnings for valid project config', () => { + // Given + const data = { project_name: 'my-project', project_type: 'fullstack' }; + + // When + const warnings = validateConfig('project', data, 'project-config.yaml'); + + // Then + expect(warnings).toEqual([]); + }); + + it('should return warning for invalid project_type enum', () => { + // Given + const data = { project_type: 'invalid_type' }; + + // When + const warnings = validateConfig('project', data, 'project-config.yaml'); + + // Then + expect(warnings.length).toBeGreaterThan(0); + }); + + it('should allow extra fields in project config', () => { + // Given + const data = { project_name: 'test', custom_setting: true }; + + // When + const warnings = validateConfig('project', data, 'test.yaml'); + + // Then + expect(warnings).toEqual([]); + }); + }); + + describe('edge cases', () => { + it('should return empty warnings for unknown level', () => { + // Given / When + const warnings = validateConfig('unknown_level', {}, 'test.yaml'); + + // Then + expect(warnings).toEqual([]); + }); + + it('should 
return empty warnings when schema file is missing', () => {
+      // Given — clearSchemaCache already called in beforeEach
+      // Forcing the schema load to fail by temporarily modifying SCHEMA_FILES would be complex,
+      // so instead we test that a level without a schema returns empty
+      const warnings = validateConfig('pro', {}, 'test.yaml');
+
+      // Then — 'pro' has no schema file defined
+      expect(warnings).toEqual([]);
+    });
+  });
+});
+
+// ---------------------------------------------------------------------------
+// Unified Reading Tests (AC3, AC4)
+// ---------------------------------------------------------------------------
+
+describe('loadLayeredConfig', () => {
+  it('should load L5 user config and merge it last', () => {
+    // Given
+    const userConfigPath = path.join(FAKE_HOME, '.aios', 'user-config.yaml');
+    setupFileSystem({
+      'framework-config.yaml': 'version: "1.0"\nuser_profile: default',
+      [userConfigPath]: 'user_profile: bob\neducational_mode: true',
+    });
+
+    // When
+    const result = loadLayeredConfig(FAKE_PROJECT);
+
+    // Then
+    expect(result.config.user_profile).toBe('bob');
+    expect(result.config.educational_mode).toBe(true);
+  });
+
+  it('should load L2 project config with project_name', () => {
+    // Given
+    setupFileSystem({
+      'framework-config.yaml': 'version: "1.0"',
+      'project-config.yaml': 'project_name: my-project\ndeploy_target: vercel',
+    });
+
+    // When
+    const result = loadLayeredConfig(FAKE_PROJECT);
+
+    // Then
+    expect(result.config.project_name).toBe('my-project');
+    expect(result.config.deploy_target).toBe('vercel');
+  });
+
+  it('should allow L5 to override L4 values', () => {
+    // Given
+    const userConfigPath = path.join(FAKE_HOME, '.aios', 'user-config.yaml');
+    setupFileSystem({
+      'framework-config.yaml': 'version: "1.0"',
+      'local-config.yaml': 'user_profile: advanced',
+      [userConfigPath]: 'user_profile: bob',
+    });
+
+    // When
+    const result = loadLayeredConfig(FAKE_PROJECT);
+
+    // Then — L5 overrides L4
+    
expect(result.config.user_profile).toBe('bob'); + }); + + it('should collect schema validation warnings', () => { + // Given + const userConfigPath = path.join(FAKE_HOME, '.aios', 'user-config.yaml'); + setupFileSystem({ + 'framework-config.yaml': 'version: "1.0"', + [userConfigPath]: 'default_model: claude-sonnet', // missing required user_profile + }); + + // When + const result = loadLayeredConfig(FAKE_PROJECT); + + // Then + const schemaWarnings = result.warnings.filter((w) => w.includes('[SCHEMA]')); + expect(schemaWarnings.length).toBeGreaterThan(0); + expect(schemaWarnings[0]).toContain('user_profile'); + }); + + it('should return empty config when no files exist', () => { + // Given — no files setup (default) + + // When + const result = loadLayeredConfig(FAKE_PROJECT); + + // Then + expect(result.config).toEqual({}); + expect(result.warnings).toEqual([]); + }); +}); + +// --------------------------------------------------------------------------- +// Legacy Mode Tests (Task 5.4) +// --------------------------------------------------------------------------- + +describe('isLegacyMode', () => { + it('should return true when core-config.yaml exists but framework-config.yaml does not', () => { + // Given + fs.existsSync.mockImplementation((filePath) => { + if (filePath.includes('core-config.yaml')) return true; + if (filePath.includes('framework-config.yaml')) return false; + return false; + }); + + // When / Then + expect(isLegacyMode(FAKE_PROJECT)).toBe(true); + }); + + it('should return false when framework-config.yaml exists', () => { + // Given + fs.existsSync.mockImplementation((filePath) => { + if (filePath.includes('core-config.yaml')) return true; + if (filePath.includes('framework-config.yaml')) return true; + return false; + }); + + // When / Then + expect(isLegacyMode(FAKE_PROJECT)).toBe(false); + }); +}); + +describe('resolveConfig — legacy mode', () => { + it('should load legacy config without schema validation', () => { + // Given + 
fs.existsSync.mockImplementation((filePath) => { + if (filePath.includes('core-config.yaml')) return true; + if (filePath.includes('framework-config.yaml')) return false; + return false; + }); + fs.readFileSync.mockImplementation((filePath) => { + if (filePath.includes('core-config.yaml')) { + return 'project_name: legacy-project\nuser_profile: advanced'; + } + throw new Error('ENOENT'); + }); + + // When + const result = resolveConfig(FAKE_PROJECT, { skipCache: true }); + + // Then + expect(result.legacy).toBe(true); + expect(result.config.project_name).toBe('legacy-project'); + expect(result.config.user_profile).toBe('advanced'); + }); +}); + +// --------------------------------------------------------------------------- +// getConfigAtLevel Tests +// --------------------------------------------------------------------------- + +describe('getConfigAtLevel', () => { + it('should return L5 user config data', () => { + // Given + const userConfigPath = path.join(FAKE_HOME, '.aios', 'user-config.yaml'); + setupFileSystem({ + [userConfigPath]: 'user_profile: bob\neducational_mode: false', + }); + + // When + const data = getConfigAtLevel(FAKE_PROJECT, 'user'); + + // Then + expect(data).toEqual({ user_profile: 'bob', educational_mode: false }); + }); + + it('should return null when config file does not exist', () => { + // Given — no files + + // When + const data = getConfigAtLevel(FAKE_PROJECT, 'framework'); + + // Then + expect(data).toBeNull(); + }); + + it('should accept level aliases: L5, 5, user', () => { + // Given + const userConfigPath = path.join(FAKE_HOME, '.aios', 'user-config.yaml'); + setupFileSystem({ + [userConfigPath]: 'user_profile: advanced', + }); + + // When / Then + expect(getConfigAtLevel(FAKE_PROJECT, 'L5')).toEqual({ user_profile: 'advanced' }); + expect(getConfigAtLevel(FAKE_PROJECT, '5')).toEqual({ user_profile: 'advanced' }); + expect(getConfigAtLevel(FAKE_PROJECT, 'user')).toEqual({ user_profile: 'advanced' }); + }); +}); + +// 
--------------------------------------------------------------------------- +// SCHEMA_FILES constant test +// --------------------------------------------------------------------------- + +describe('SCHEMA_FILES', () => { + it('should define schema files for L1, L2, L4, L5', () => { + expect(SCHEMA_FILES.framework).toBe('framework-config.schema.json'); + expect(SCHEMA_FILES.project).toBe('project-config.schema.json'); + expect(SCHEMA_FILES.local).toBe('local-config.schema.json'); + expect(SCHEMA_FILES.user).toBe('user-config.schema.json'); + }); +}); + +``` + +================================================== +📄 tests/core/ids/entity-registry-schema.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const path = require('path'); +const yaml = require('js-yaml'); + +const FIXTURES = path.resolve(__dirname, 'fixtures'); +const VALID_REGISTRY = path.join(FIXTURES, 'valid-registry.yaml'); + +describe('Entity Registry Schema (AC: 1, 7, 8)', () => { + let registry; + + beforeAll(() => { + const content = fs.readFileSync(VALID_REGISTRY, 'utf8'); + registry = yaml.load(content); + }); + + describe('metadata section', () => { + it('has required metadata fields', () => { + expect(registry.metadata).toBeDefined(); + expect(registry.metadata.version).toBeDefined(); + expect(registry.metadata.lastUpdated).toBeDefined(); + expect(registry.metadata.entityCount).toBeDefined(); + expect(registry.metadata.checksumAlgorithm).toBeDefined(); + }); + + it('version follows semver format', () => { + expect(registry.metadata.version).toMatch(/^\d+\.\d+\.\d+$/); + }); + + it('checksumAlgorithm is sha256', () => { + expect(registry.metadata.checksumAlgorithm).toBe('sha256'); + }); + + it('entityCount is a number', () => { + expect(typeof registry.metadata.entityCount).toBe('number'); + }); + }); + + describe('entity structure', () => { + it('entities object exists with category keys', () => { + 
expect(registry.entities).toBeDefined(); + expect(typeof registry.entities).toBe('object'); + }); + + it('each entity has required fields', () => { + for (const [category, entities] of Object.entries(registry.entities)) { + for (const [id, entity] of Object.entries(entities)) { + expect(entity.path).toBeDefined(); + expect(entity.type).toBeDefined(); + expect(entity.purpose).toBeDefined(); + expect(entity.keywords).toBeDefined(); + expect(Array.isArray(entity.keywords)).toBe(true); + expect(entity.usedBy).toBeDefined(); + expect(Array.isArray(entity.usedBy)).toBe(true); + expect(entity.dependencies).toBeDefined(); + expect(Array.isArray(entity.dependencies)).toBe(true); + } + } + }); + + it('entity type is one of the allowed values', () => { + const allowedTypes = ['task', 'template', 'script', 'module', 'agent', 'checklist', 'data']; + for (const [category, entities] of Object.entries(registry.entities)) { + for (const [id, entity] of Object.entries(entities)) { + expect(allowedTypes).toContain(entity.type); + } + } + }); + }); + + describe('adaptability section (AC: 7)', () => { + it('each entity has adaptability with score', () => { + for (const [category, entities] of Object.entries(registry.entities)) { + for (const [id, entity] of Object.entries(entities)) { + expect(entity.adaptability).toBeDefined(); + expect(typeof entity.adaptability.score).toBe('number'); + expect(entity.adaptability.score).toBeGreaterThanOrEqual(0); + expect(entity.adaptability.score).toBeLessThanOrEqual(1); + } + } + }); + + it('adaptability has constraints and extensionPoints arrays', () => { + for (const [category, entities] of Object.entries(registry.entities)) { + for (const [id, entity] of Object.entries(entities)) { + expect(Array.isArray(entity.adaptability.constraints)).toBe(true); + expect(Array.isArray(entity.adaptability.extensionPoints)).toBe(true); + } + } + }); + }); + + describe('checksum field (AC: 8)', () => { + it('each entity has a checksum field', () => { + for 
(const [category, entities] of Object.entries(registry.entities)) { + for (const [id, entity] of Object.entries(entities)) { + expect(entity.checksum).toBeDefined(); + expect(typeof entity.checksum).toBe('string'); + } + } + }); + + it('checksum follows sha256:hex format', () => { + for (const [category, entities] of Object.entries(registry.entities)) { + for (const [id, entity] of Object.entries(entities)) { + expect(entity.checksum).toMatch(/^sha256:[a-f0-9]{64}$/); + } + } + }); + + it('each entity has lastVerified timestamp', () => { + for (const [category, entities] of Object.entries(registry.entities)) { + for (const [id, entity] of Object.entries(entities)) { + expect(entity.lastVerified).toBeDefined(); + } + } + }); + }); + + describe('categories section', () => { + it('categories is an array', () => { + expect(Array.isArray(registry.categories)).toBe(true); + expect(registry.categories.length).toBeGreaterThan(0); + }); + + it('each category has id, description, basePath', () => { + for (const cat of registry.categories) { + expect(cat.id).toBeDefined(); + expect(cat.description).toBeDefined(); + expect(cat.basePath).toBeDefined(); + } + }); + }); +}); + +``` + +================================================== +📄 tests/core/ids/verification-gates.test.js +================================================== +```js +'use strict'; + +const path = require('path'); + +// Module paths +const CIRCUIT_BREAKER_PATH = path.resolve( + __dirname, + '../../../.aios-core/core/ids/circuit-breaker.js', +); +const VERIFICATION_GATE_PATH = path.resolve( + __dirname, + '../../../.aios-core/core/ids/verification-gate.js', +); +const G1_PATH = path.resolve( + __dirname, + '../../../.aios-core/core/ids/gates/g1-epic-creation.js', +); +const G2_PATH = path.resolve( + __dirname, + '../../../.aios-core/core/ids/gates/g2-story-creation.js', +); +const G3_PATH = path.resolve( + __dirname, + '../../../.aios-core/core/ids/gates/g3-story-validation.js', +); +const G4_PATH = 
path.resolve( + __dirname, + '../../../.aios-core/core/ids/gates/g4-dev-context.js', +); + +const { + CircuitBreaker, + STATE_CLOSED, + STATE_OPEN, + STATE_HALF_OPEN, + DEFAULT_FAILURE_THRESHOLD, + DEFAULT_SUCCESS_THRESHOLD, + DEFAULT_RESET_TIMEOUT_MS, +} = require(CIRCUIT_BREAKER_PATH); + +const { + VerificationGate, + createGateResult, + DEFAULT_TIMEOUT_MS, +} = require(VERIFICATION_GATE_PATH); + +const { G1EpicCreationGate } = require(G1_PATH); +const { G2StoryCreationGate } = require(G2_PATH); +const { G3StoryValidationGate } = require(G3_PATH); +const { G4DevContextGate, G4_DEFAULT_TIMEOUT_MS } = require(G4_PATH); + +// ================================================================ +// Test Helpers +// ================================================================ + +/** + * Create a mock logger that suppresses output and records calls. + */ +function createMockLogger() { + return { + info: jest.fn(), + warn: jest.fn(), + error: jest.fn(), + log: jest.fn(), + }; +} + +/** + * Create a mock IncrementalDecisionEngine. + * @param {object} [analyzeResult] — Custom result for analyze() + */ +function createMockDecisionEngine(analyzeResult) { + const defaultResult = { + intent: 'test', + recommendations: [], + summary: { totalEntities: 100, matchesFound: 0, decision: 'CREATE', confidence: 'low' }, + rationale: 'No matches found.', + }; + + return { + analyze: jest.fn().mockReturnValue(analyzeResult || defaultResult), + }; +} + +/** + * Create a mock RegistryLoader. + */ +function createMockRegistryLoader() { + return { + load: jest.fn(), + queryByPath: jest.fn().mockReturnValue([]), + queryByKeywords: jest.fn().mockReturnValue([]), + queryByType: jest.fn().mockReturnValue([]), + queryByPurpose: jest.fn().mockReturnValue([]), + }; +} + +/** + * Create a concrete subclass for testing the abstract base. 
+ */ +class TestGate extends VerificationGate { + constructor(config = {}, verifyImpl) { + super({ gateId: 'TEST', agent: '@test', ...config }); + this._verifyImpl = verifyImpl || (async () => ({ + passed: true, + warnings: [], + opportunities: [], + })); + } + + async _doVerify(context) { + return this._verifyImpl(context); + } +} + +// ================================================================ +// CircuitBreaker Tests +// ================================================================ + +describe('CircuitBreaker', () => { + let cb; + + beforeEach(() => { + cb = new CircuitBreaker({ + failureThreshold: 3, + successThreshold: 2, + resetTimeoutMs: 100, + }); + }); + + describe('constructor', () => { + it('uses default values when no options provided', () => { + const defaultCb = new CircuitBreaker(); + const stats = defaultCb.getStats(); + expect(stats.state).toBe(STATE_CLOSED); + expect(stats.failureCount).toBe(0); + expect(stats.totalTrips).toBe(0); + }); + + it('accepts custom thresholds', () => { + expect(cb.getState()).toBe(STATE_CLOSED); + }); + }); + + describe('CLOSED state', () => { + it('allows requests when closed', () => { + expect(cb.isAllowed()).toBe(true); + }); + + it('resets failure count on success', () => { + cb.recordFailure(); + cb.recordFailure(); + cb.recordSuccess(); + expect(cb.getStats().failureCount).toBe(0); + }); + + it('opens after reaching failure threshold', () => { + cb.recordFailure(); + cb.recordFailure(); + cb.recordFailure(); // threshold = 3 + expect(cb.getState()).toBe(STATE_OPEN); + expect(cb.getStats().totalTrips).toBe(1); + }); + + it('stays closed if failures are below threshold', () => { + cb.recordFailure(); + cb.recordFailure(); + expect(cb.getState()).toBe(STATE_CLOSED); + expect(cb.isAllowed()).toBe(true); + }); + }); + + describe('OPEN state', () => { + beforeEach(() => { + // Trip the circuit + cb.recordFailure(); + cb.recordFailure(); + cb.recordFailure(); + }); + + it('blocks requests when open', () => { + 
expect(cb.isAllowed()).toBe(false); + }); + + it('transitions to HALF_OPEN after reset timeout', async () => { + // Wait for reset timeout + await new Promise((resolve) => setTimeout(resolve, 150)); + expect(cb.isAllowed()).toBe(true); + expect(cb.getState()).toBe(STATE_HALF_OPEN); + }); + + it('reports correct stats', () => { + const stats = cb.getStats(); + expect(stats.state).toBe(STATE_OPEN); + expect(stats.failureCount).toBe(3); + expect(stats.totalTrips).toBe(1); + expect(stats.lastFailureTime).toBeGreaterThan(0); + }); + }); + + describe('HALF_OPEN state', () => { + beforeEach(async () => { + cb.recordFailure(); + cb.recordFailure(); + cb.recordFailure(); + await new Promise((resolve) => setTimeout(resolve, 150)); + cb.isAllowed(); // Triggers transition to HALF_OPEN (first probe) + }); + + it('blocks additional requests while probe is in-flight', () => { + // First probe was consumed by isAllowed() in beforeEach + expect(cb.isAllowed()).toBe(false); + }); + + it('allows new probe after success resets in-flight flag', () => { + cb.recordSuccess(); // Clears in-flight flag + expect(cb.isAllowed()).toBe(true); + }); + + it('closes circuit after success threshold reached', () => { + cb.recordSuccess(); + cb.recordSuccess(); // threshold = 2 + expect(cb.getState()).toBe(STATE_CLOSED); + expect(cb.getStats().failureCount).toBe(0); + }); + + it('re-opens on any failure', () => { + cb.recordFailure(); + expect(cb.getState()).toBe(STATE_OPEN); + expect(cb.getStats().totalTrips).toBe(2); + }); + + it('stays half-open if not enough successes', () => { + cb.recordSuccess(); + expect(cb.getState()).toBe(STATE_HALF_OPEN); + }); + }); + + describe('reset()', () => { + it('resets to CLOSED state', () => { + cb.recordFailure(); + cb.recordFailure(); + cb.recordFailure(); + expect(cb.getState()).toBe(STATE_OPEN); + + cb.reset(); + expect(cb.getState()).toBe(STATE_CLOSED); + expect(cb.getStats().failureCount).toBe(0); + expect(cb.isAllowed()).toBe(true); + }); + }); + + 
describe('exported constants', () => { + it('exports state constants', () => { + expect(STATE_CLOSED).toBe('CLOSED'); + expect(STATE_OPEN).toBe('OPEN'); + expect(STATE_HALF_OPEN).toBe('HALF_OPEN'); + }); + + it('exports default configuration values', () => { + expect(DEFAULT_FAILURE_THRESHOLD).toBe(5); + expect(DEFAULT_SUCCESS_THRESHOLD).toBe(3); + expect(DEFAULT_RESET_TIMEOUT_MS).toBe(60000); + }); + }); +}); + +// ================================================================ +// VerificationGate Base Class Tests +// ================================================================ + +describe('VerificationGate', () => { + let logger; + + beforeEach(() => { + logger = createMockLogger(); + }); + + describe('constructor', () => { + it('requires gateId', () => { + expect(() => new TestGate({ gateId: undefined, agent: '@test' })).toThrow( + /gateId is required/, + ); + }); + + it('requires agent', () => { + expect(() => new TestGate({ gateId: 'T1', agent: undefined })).toThrow( + /agent is required/, + ); + }); + + it('initializes with correct defaults', () => { + const gate = new TestGate({ logger }); + expect(gate.getGateId()).toBe('TEST'); + expect(gate.getAgent()).toBe('@test'); + expect(gate.isBlocking()).toBe(false); + expect(gate.getInvocationCount()).toBe(0); + expect(gate.getLastResult()).toBeNull(); + }); + }); + + describe('verify()', () => { + it('returns a valid GateResult structure', async () => { + const gate = new TestGate({ logger }); + const result = await gate.verify({ intent: 'test' }); + + expect(result).toHaveProperty('gateId', 'TEST'); + expect(result).toHaveProperty('agent', '@test'); + expect(result).toHaveProperty('timestamp'); + expect(result).toHaveProperty('result'); + expect(result.result).toHaveProperty('passed', true); + expect(result.result).toHaveProperty('blocking', false); + expect(result.result).toHaveProperty('warnings'); + expect(result.result).toHaveProperty('opportunities'); + expect(result).toHaveProperty('executionMs'); + 
expect(result).toHaveProperty('circuitBreakerState'); + expect(result).toHaveProperty('override', null); + }); + + it('increments invocation count', async () => { + const gate = new TestGate({ logger }); + await gate.verify({}); + await gate.verify({}); + expect(gate.getInvocationCount()).toBe(2); + }); + + it('stores last result', async () => { + const gate = new TestGate({ logger }); + const result = await gate.verify({ intent: 'hello' }); + expect(gate.getLastResult()).toEqual(result); + }); + + it('logs invocation', async () => { + const gate = new TestGate({ logger }); + await gate.verify({}); + expect(logger.info).toHaveBeenCalled(); + }); + + it('passes context to _doVerify', async () => { + const verifyFn = jest.fn().mockResolvedValue({ + passed: true, + warnings: [], + opportunities: [], + }); + const gate = new TestGate({ logger }, verifyFn); + const ctx = { intent: 'test intent' }; + await gate.verify(ctx); + expect(verifyFn).toHaveBeenCalledWith(ctx); + }); + + it('surfaces warnings from _doVerify', async () => { + const gate = new TestGate({ logger }, async () => ({ + passed: true, + warnings: ['Watch out!'], + opportunities: [], + })); + const result = await gate.verify({}); + expect(result.result.warnings).toContain('Watch out!'); + }); + + it('surfaces opportunities from _doVerify', async () => { + const opp = { entity: 'test.js', relevance: 0.9, recommendation: 'REUSE', reason: 'exact' }; + const gate = new TestGate({ logger }, async () => ({ + passed: true, + warnings: [], + opportunities: [opp], + })); + const result = await gate.verify({}); + expect(result.result.opportunities).toHaveLength(1); + expect(result.result.opportunities[0]).toEqual(opp); + }); + }); + + describe('timeout handling', () => { + it('returns warn-and-proceed on timeout', async () => { + const gate = new TestGate( + { timeoutMs: 50, logger }, + () => new Promise((resolve) => setTimeout(() => resolve({ + passed: true, + warnings: [], + opportunities: [], + }), 200)), + ); + 
+ const result = await gate.verify({}); + expect(result.result.passed).toBe(true); + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('timed out')]), + ); + }); + + it('returns normal result when within timeout', async () => { + const gate = new TestGate( + { timeoutMs: 500, logger }, + async () => ({ + passed: true, + warnings: ['fast result'], + opportunities: [], + }), + ); + + const result = await gate.verify({}); + expect(result.result.warnings).toContain('fast result'); + }); + }); + + describe('error handling (graceful degradation)', () => { + it('returns warn-and-proceed on error', async () => { + const gate = new TestGate({ logger }, async () => { + throw new Error('Database connection failed'); + }); + + const result = await gate.verify({}); + expect(result.result.passed).toBe(true); + expect(result.result.blocking).toBe(false); + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('Database connection failed')]), + ); + }); + + it('logs warning on error', async () => { + const gate = new TestGate({ logger }, async () => { + throw new Error('Test error'); + }); + + await gate.verify({}); + expect(logger.warn).toHaveBeenCalledWith( + expect.stringContaining('Gate failed'), + ); + }); + + it('records failure in circuit breaker on error', async () => { + const gate = new TestGate({ logger }, async () => { + throw new Error('Failure'); + }); + + await gate.verify({}); + const stats = gate.getCircuitBreakerStats(); + expect(stats.failureCount).toBe(1); + }); + }); + + describe('circuit breaker integration', () => { + it('skips gate when circuit breaker is open', async () => { + const verifyFn = jest.fn().mockResolvedValue({ + passed: true, + warnings: [], + opportunities: [], + }); + + const gate = new TestGate( + { + logger, + circuitBreakerOptions: { failureThreshold: 2, resetTimeoutMs: 60000 }, + }, + async () => { throw new Error('fail'); }, + ); + + // Trip the circuit + await 
gate.verify({}); + await gate.verify({}); + + // Now replace with the mock to check it's NOT called + gate._doVerify = verifyFn; + const result = await gate.verify({}); + + expect(verifyFn).not.toHaveBeenCalled(); + expect(result.result.passed).toBe(true); + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('circuit breaker open')]), + ); + }); + + it('records success in circuit breaker on successful verify', async () => { + const gate = new TestGate({ logger }); + await gate.verify({}); + const stats = gate.getCircuitBreakerStats(); + expect(stats.failureCount).toBe(0); + expect(stats.state).toBe('CLOSED'); + }); + + it('exposes circuit breaker stats', () => { + const gate = new TestGate({ logger }); + const stats = gate.getCircuitBreakerStats(); + expect(stats).toHaveProperty('state'); + expect(stats).toHaveProperty('failureCount'); + expect(stats).toHaveProperty('successCount'); + expect(stats).toHaveProperty('totalTrips'); + }); + }); + + describe('abstract _doVerify', () => { + it('throws when not overridden', async () => { + // Test the raw VerificationGate (not TestGate) + // We need to create a minimal concrete subclass that does NOT override _doVerify + class RawGate extends VerificationGate { + constructor() { + super({ gateId: 'RAW', agent: '@raw', logger: createMockLogger() }); + } + } + + const gate = new RawGate(); + const result = await gate.verify({}); + // It should gracefully degrade (error -> warn-and-proceed) + expect(result.result.passed).toBe(true); + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('_doVerify() must be implemented')]), + ); + }); + }); + + describe('createGateResult()', () => { + it('creates result with defaults', () => { + const result = createGateResult(); + expect(result.gateId).toBeNull(); + expect(result.agent).toBeNull(); + expect(result.timestamp).toBeDefined(); + expect(result.result.passed).toBe(true); + 
expect(result.result.blocking).toBe(false); + expect(result.result.warnings).toEqual([]); + expect(result.result.opportunities).toEqual([]); + expect(result.override).toBeNull(); + }); + + it('creates result with custom fields', () => { + const result = createGateResult({ + gateId: 'G1', + agent: '@pm', + passed: false, + blocking: true, + warnings: ['warning1'], + opportunities: [{ entity: 'test' }], + override: { reason: 'urgent' }, + }); + + expect(result.gateId).toBe('G1'); + expect(result.agent).toBe('@pm'); + expect(result.result.passed).toBe(false); + expect(result.result.blocking).toBe(true); + expect(result.result.warnings).toEqual(['warning1']); + expect(result.result.opportunities).toEqual([{ entity: 'test' }]); + expect(result.override).toEqual({ reason: 'urgent' }); + }); + }); + + describe('exported constants', () => { + it('exports DEFAULT_TIMEOUT_MS', () => { + expect(DEFAULT_TIMEOUT_MS).toBe(2000); + }); + }); +}); + +// ================================================================ +// G1 Epic Creation Gate Tests +// ================================================================ + +describe('G1EpicCreationGate', () => { + let logger; + let decisionEngine; + + beforeEach(() => { + logger = createMockLogger(); + decisionEngine = createMockDecisionEngine(); + }); + + describe('constructor', () => { + it('requires decisionEngine', () => { + expect(() => new G1EpicCreationGate({ logger })).toThrow( + /decisionEngine is required/, + ); + }); + + it('creates gate with correct config', () => { + const gate = new G1EpicCreationGate({ decisionEngine, logger }); + expect(gate.getGateId()).toBe('G1'); + expect(gate.getAgent()).toBe('@pm'); + expect(gate.isBlocking()).toBe(false); + }); + }); + + describe('verify()', () => { + it('returns passed=true with empty intent', async () => { + const gate = new G1EpicCreationGate({ decisionEngine, logger }); + const result = await gate.verify({}); + expect(result.result.passed).toBe(true); + 
expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('No epic intent')]), + ); + }); + + it('calls decisionEngine.analyze with intent', async () => { + const gate = new G1EpicCreationGate({ decisionEngine, logger }); + await gate.verify({ intent: 'user authentication system' }); + expect(decisionEngine.analyze).toHaveBeenCalledWith( + 'user authentication system', + expect.any(Object), + ); + }); + + it('combines epicTitle with intent', async () => { + const gate = new G1EpicCreationGate({ decisionEngine, logger }); + await gate.verify({ + intent: 'implement SSO login', + epicTitle: 'Auth Epic', + }); + expect(decisionEngine.analyze).toHaveBeenCalledWith( + 'Auth Epic: implement SSO login', + expect.any(Object), + ); + }); + + it('surfaces opportunities from analysis', async () => { + decisionEngine.analyze.mockReturnValue({ + intent: 'test', + recommendations: [ + { + entityPath: 'tasks/create-auth.md', + relevanceScore: 0.85, + decision: 'ADAPT', + rationale: 'Existing auth task', + }, + ], + summary: { totalEntities: 100, matchesFound: 1, decision: 'ADAPT', confidence: 'high' }, + rationale: 'Found match.', + }); + + const gate = new G1EpicCreationGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'auth system' }); + + expect(result.result.opportunities).toHaveLength(1); + expect(result.result.opportunities[0].entity).toBe('tasks/create-auth.md'); + expect(result.result.opportunities[0].relevance).toBe(0.85); + expect(result.result.opportunities[0].recommendation).toBe('ADAPT'); + }); + + it('reports warning when opportunities found', async () => { + decisionEngine.analyze.mockReturnValue({ + intent: 'test', + recommendations: [ + { entityPath: 'a.md', relevanceScore: 0.9, decision: 'REUSE', rationale: 'match' }, + { entityPath: 'b.md', relevanceScore: 0.7, decision: 'ADAPT', rationale: 'partial' }, + ], + summary: { totalEntities: 100, matchesFound: 2, decision: 'REUSE', confidence: 'high' }, + 
rationale: 'Found matches.', + }); + + const gate = new G1EpicCreationGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'something' }); + + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('2 related entities')]), + ); + }); + + it('always passes (advisory gate)', async () => { + decisionEngine.analyze.mockReturnValue({ + intent: 'test', + recommendations: [ + { entityPath: 'x.md', relevanceScore: 0.99, decision: 'REUSE', rationale: 'exact' }, + ], + summary: { totalEntities: 100, matchesFound: 1, decision: 'REUSE', confidence: 'high' }, + rationale: 'Exact match found.', + }); + + const gate = new G1EpicCreationGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'something' }); + expect(result.result.passed).toBe(true); + expect(result.result.blocking).toBe(false); + }); + + it('passes type/category context to analyze', async () => { + const gate = new G1EpicCreationGate({ decisionEngine, logger }); + await gate.verify({ + intent: 'test', + type: 'task', + category: 'development', + }); + expect(decisionEngine.analyze).toHaveBeenCalledWith( + 'test', + { type: 'task', category: 'development' }, + ); + }); + + it('forwards analysis warnings', async () => { + decisionEngine.analyze.mockReturnValue({ + intent: 'test', + recommendations: [], + summary: { totalEntities: 5, matchesFound: 0, decision: 'CREATE', confidence: 'low' }, + rationale: 'No matches.', + warnings: ['Registry sparse'], + }); + + const gate = new G1EpicCreationGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'something' }); + expect(result.result.warnings).toContain('Registry sparse'); + }); + }); +}); + +// ================================================================ +// G2 Story Creation Gate Tests +// ================================================================ + +describe('G2StoryCreationGate', () => { + let logger; + let decisionEngine; + + beforeEach(() => { + 
logger = createMockLogger(); + decisionEngine = createMockDecisionEngine(); + }); + + describe('constructor', () => { + it('requires decisionEngine', () => { + expect(() => new G2StoryCreationGate({ logger })).toThrow( + /decisionEngine is required/, + ); + }); + + it('creates gate with correct config', () => { + const gate = new G2StoryCreationGate({ decisionEngine, logger }); + expect(gate.getGateId()).toBe('G2'); + expect(gate.getAgent()).toBe('@sm'); + expect(gate.isBlocking()).toBe(false); + }); + }); + + describe('verify()', () => { + it('returns passed=true with empty intent', async () => { + const gate = new G2StoryCreationGate({ decisionEngine, logger }); + const result = await gate.verify({}); + expect(result.result.passed).toBe(true); + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('No story intent')]), + ); + }); + + it('queries for tasks and templates separately', async () => { + const gate = new G2StoryCreationGate({ decisionEngine, logger }); + await gate.verify({ intent: 'create login form' }); + + expect(decisionEngine.analyze).toHaveBeenCalledTimes(2); + expect(decisionEngine.analyze).toHaveBeenCalledWith( + 'create login form', + { type: 'task' }, + ); + expect(decisionEngine.analyze).toHaveBeenCalledWith( + 'create login form', + { type: 'template' }, + ); + }); + + it('enriches intent with acceptance criteria', async () => { + const gate = new G2StoryCreationGate({ decisionEngine, logger }); + await gate.verify({ + intent: 'login form', + acceptanceCriteria: ['user can login', 'error shown on failure'], + }); + + expect(decisionEngine.analyze).toHaveBeenCalledWith( + expect.stringContaining('user can login'), + expect.any(Object), + ); + }); + + it('reports task matches as opportunities', async () => { + decisionEngine.analyze.mockImplementation((_, context) => { + if (context.type === 'task') { + return { + recommendations: [ + { entityPath: 'tasks/login.md', relevanceScore: 0.8, decision: 'ADAPT', 
rationale: 'match' }, + ], + summary: { totalEntities: 100, matchesFound: 1 }, + }; + } + return { recommendations: [], summary: { totalEntities: 100, matchesFound: 0 } }; + }); + + const gate = new G2StoryCreationGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'login form' }); + + expect(result.result.opportunities.some((o) => o.type === 'task')).toBe(true); + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('1 existing tasks')]), + ); + }); + + it('reports template matches as opportunities', async () => { + decisionEngine.analyze.mockImplementation((_, context) => { + if (context.type === 'template') { + return { + recommendations: [ + { entityPath: 'templates/form.md', relevanceScore: 0.7, decision: 'ADAPT', rationale: 'form match' }, + ], + summary: { totalEntities: 100, matchesFound: 1 }, + }; + } + return { recommendations: [], summary: { totalEntities: 100, matchesFound: 0 } }; + }); + + const gate = new G2StoryCreationGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'create form' }); + + expect(result.result.opportunities.some((o) => o.type === 'template')).toBe(true); + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('1 existing templates')]), + ); + }); + + it('sorts opportunities by relevance descending', async () => { + decisionEngine.analyze.mockImplementation((_, context) => { + if (context.type === 'task') { + return { + recommendations: [ + { entityPath: 'tasks/a.md', relevanceScore: 0.6, decision: 'ADAPT', rationale: 'low' }, + ], + summary: { totalEntities: 100, matchesFound: 1 }, + }; + } + return { + recommendations: [ + { entityPath: 'templates/b.md', relevanceScore: 0.9, decision: 'REUSE', rationale: 'high' }, + ], + summary: { totalEntities: 100, matchesFound: 1 }, + }; + }); + + const gate = new G2StoryCreationGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'something' 
}); + + expect(result.result.opportunities[0].relevance).toBeGreaterThanOrEqual( + result.result.opportunities[1].relevance, + ); + }); + + it('always passes (advisory gate)', async () => { + const gate = new G2StoryCreationGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'anything' }); + expect(result.result.passed).toBe(true); + expect(result.result.blocking).toBe(false); + }); + }); +}); + +// ================================================================ +// G3 Story Validation Gate Tests +// ================================================================ + +describe('G3StoryValidationGate', () => { + let logger; + let decisionEngine; + let registryLoader; + + beforeEach(() => { + logger = createMockLogger(); + decisionEngine = createMockDecisionEngine(); + registryLoader = createMockRegistryLoader(); + }); + + describe('constructor', () => { + it('requires decisionEngine', () => { + expect(() => new G3StoryValidationGate({ registryLoader, logger })).toThrow( + /decisionEngine is required/, + ); + }); + + it('requires registryLoader', () => { + expect(() => new G3StoryValidationGate({ decisionEngine, logger })).toThrow( + /registryLoader is required/, + ); + }); + + it('creates gate with correct config', () => { + const gate = new G3StoryValidationGate({ decisionEngine, registryLoader, logger }); + expect(gate.getGateId()).toBe('G3'); + expect(gate.getAgent()).toBe('@po'); + expect(gate.isBlocking()).toBe(true); + }); + }); + + describe('verify()', () => { + it('returns passed=true with empty intent', async () => { + const gate = new G3StoryValidationGate({ decisionEngine, registryLoader, logger }); + const result = await gate.verify({}); + expect(result.result.passed).toBe(true); + }); + + it('validates referenced artifacts exist in registry', async () => { + registryLoader.queryByPath + .mockReturnValueOnce([{ id: 'test', path: 'tasks/auth.md' }]) // found + .mockReturnValueOnce([]); // not found + + const gate = new 
G3StoryValidationGate({ decisionEngine, registryLoader, logger }); + const result = await gate.verify({ + intent: 'auth story', + referencedArtifacts: ['tasks/auth.md', 'tasks/missing.md'], + }); + + expect(registryLoader.queryByPath).toHaveBeenCalledTimes(2); + expect(result.result.warnings).toEqual( + expect.arrayContaining([ + expect.stringContaining('1 referenced artifacts not found'), + expect.stringContaining('1 referenced artifacts verified'), + ]), + ); + }); + + it('soft blocks when references are invalid (no override)', async () => { + registryLoader.queryByPath.mockReturnValue([]); + + const gate = new G3StoryValidationGate({ decisionEngine, registryLoader, logger }); + const result = await gate.verify({ + intent: 'auth story', + referencedArtifacts: ['tasks/nonexistent.md'], + }); + + expect(result.result.passed).toBe(false); + }); + + it('passes when override provided', async () => { + registryLoader.queryByPath.mockReturnValue([]); + + const gate = new G3StoryValidationGate({ decisionEngine, registryLoader, logger }); + const result = await gate.verify({ + intent: 'auth story', + referencedArtifacts: ['tasks/nonexistent.md'], + override: { reason: 'New artifact being created', user: '@pm' }, + }); + + expect(result.result.passed).toBe(true); + }); + + it('detects potential duplication (relevance >= 0.8)', async () => { + decisionEngine.analyze.mockReturnValue({ + intent: 'test', + recommendations: [ + { entityPath: 'tasks/similar.md', relevanceScore: 0.85, decision: 'REUSE', rationale: 'near duplicate' }, + { entityPath: 'tasks/related.md', relevanceScore: 0.6, decision: 'ADAPT', rationale: 'related' }, + ], + summary: { totalEntities: 100, matchesFound: 2 }, + }); + + const gate = new G3StoryValidationGate({ decisionEngine, registryLoader, logger }); + const result = await gate.verify({ intent: 'similar task' }); + + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('Potential duplication detected')]), + ); + 
// Duplication (>=0.8) causes soft block + expect(result.result.passed).toBe(false); + }); + + it('passes when no duplication and references valid', async () => { + registryLoader.queryByPath.mockReturnValue([{ id: 'exists' }]); + decisionEngine.analyze.mockReturnValue({ + intent: 'test', + recommendations: [ + { entityPath: 'tasks/related.md', relevanceScore: 0.5, decision: 'ADAPT', rationale: 'low relevance' }, + ], + summary: { totalEntities: 100, matchesFound: 1 }, + }); + + const gate = new G3StoryValidationGate({ decisionEngine, registryLoader, logger }); + const result = await gate.verify({ + intent: 'new unique task', + referencedArtifacts: ['tasks/existing.md'], + }); + + expect(result.result.passed).toBe(true); + }); + + it('surfaces all opportunities regardless of blocking', async () => { + decisionEngine.analyze.mockReturnValue({ + intent: 'test', + recommendations: [ + { entityPath: 'a.md', relevanceScore: 0.9, decision: 'REUSE', rationale: 'dup' }, + { entityPath: 'b.md', relevanceScore: 0.5, decision: 'ADAPT', rationale: 'partial' }, + ], + summary: { totalEntities: 100, matchesFound: 2 }, + }); + + const gate = new G3StoryValidationGate({ decisionEngine, registryLoader, logger }); + const result = await gate.verify({ intent: 'test' }); + + expect(result.result.opportunities).toHaveLength(2); + }); + }); +}); + +// ================================================================ +// G4 Dev Context Gate Tests +// ================================================================ + +describe('G4DevContextGate', () => { + let logger; + let decisionEngine; + + beforeEach(() => { + logger = createMockLogger(); + decisionEngine = createMockDecisionEngine(); + }); + + describe('constructor', () => { + it('requires decisionEngine', () => { + expect(() => new G4DevContextGate({ logger })).toThrow( + /decisionEngine is required/, + ); + }); + + it('creates gate with correct config', () => { + const gate = new G4DevContextGate({ decisionEngine, logger }); + 
expect(gate.getGateId()).toBe('G4'); + expect(gate.getAgent()).toBe('@dev'); + expect(gate.isBlocking()).toBe(false); + }); + + it('defaults to 2s timeout', () => { + expect(G4_DEFAULT_TIMEOUT_MS).toBe(2000); + }); + }); + + describe('verify()', () => { + it('returns passed=true with empty intent', async () => { + const gate = new G4DevContextGate({ decisionEngine, logger }); + const result = await gate.verify({}); + expect(result.result.passed).toBe(true); + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('No intent provided')]), + ); + }); + + it('calls decisionEngine.analyze with intent', async () => { + const gate = new G4DevContextGate({ decisionEngine, logger }); + await gate.verify({ intent: 'implement circuit breaker' }); + expect(decisionEngine.analyze).toHaveBeenCalledWith( + 'implement circuit breaker', + expect.any(Object), + ); + }); + + it('enriches intent with file path keywords', async () => { + const gate = new G4DevContextGate({ decisionEngine, logger }); + await gate.verify({ + intent: 'implement gate', + filePaths: ['verification-gate.js', 'circuit-breaker.js'], + }); + + expect(decisionEngine.analyze).toHaveBeenCalledWith( + expect.stringContaining('verification-gate'), + expect.any(Object), + ); + }); + + it('surfaces relevant artifacts as opportunities', async () => { + decisionEngine.analyze.mockReturnValue({ + intent: 'test', + recommendations: [ + { entityPath: 'core/ids/registry-loader.js', relevanceScore: 0.7, decision: 'ADAPT', rationale: 'related IDS module' }, + ], + summary: { totalEntities: 100, matchesFound: 1 }, + }); + + const gate = new G4DevContextGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'IDS gate' }); + + expect(result.result.opportunities).toHaveLength(1); + expect(result.result.warnings).toEqual( + expect.arrayContaining([expect.stringContaining('1 relevant artifacts found')]), + ); + }); + + it('always passes and never blocks', async () => { + 
decisionEngine.analyze.mockReturnValue({ + intent: 'test', + recommendations: [ + { entityPath: 'exact-match.js', relevanceScore: 0.99, decision: 'REUSE', rationale: 'exact' }, + ], + summary: { totalEntities: 100, matchesFound: 1 }, + }); + + const gate = new G4DevContextGate({ decisionEngine, logger }); + const result = await gate.verify({ intent: 'test' }); + expect(result.result.passed).toBe(true); + expect(result.result.blocking).toBe(false); + }); + + it('records metrics for every invocation', async () => { + const gate = new G4DevContextGate({ decisionEngine, logger }); + await gate.verify({ intent: 'first', storyId: 'IDS-5a' }); + await gate.verify({ intent: 'second', storyId: 'IDS-5b' }); + + const metrics = gate.getMetricsLog(); + expect(metrics).toHaveLength(2); + expect(metrics[0].storyId).toBe('IDS-5a'); + expect(metrics[1].storyId).toBe('IDS-5b'); + expect(metrics[0]).toHaveProperty('executionTimeMs'); + expect(metrics[0]).toHaveProperty('timestamp'); + expect(metrics[0]).toHaveProperty('matchesFound'); + }); + + it('uses "unknown" storyId when not provided', async () => { + const gate = new G4DevContextGate({ decisionEngine, logger }); + await gate.verify({ intent: 'no story id' }); + + const metrics = gate.getMetricsLog(); + expect(metrics[0].storyId).toBe('unknown'); + }); + + it('clears metrics log', async () => { + const gate = new G4DevContextGate({ decisionEngine, logger }); + await gate.verify({ intent: 'test' }); + expect(gate.getMetricsLog()).toHaveLength(1); + + gate.clearMetricsLog(); + expect(gate.getMetricsLog()).toHaveLength(0); + }); + }); + + describe('performance', () => { + it('executes within 2s timeout for normal operations', async () => { + const gate = new G4DevContextGate({ decisionEngine, logger }); + const start = Date.now(); + await gate.verify({ intent: 'performance test' }); + const elapsed = Date.now() - start; + + // Should be well under 2s for mock operations + expect(elapsed).toBeLessThan(2000); + }); + }); +}); + +// 
================================================================ +// Graceful Degradation Integration Tests +// ================================================================ + +describe('Graceful Degradation (Integration)', () => { + let logger; + + beforeEach(() => { + logger = createMockLogger(); + }); + + it('gate never blocks even with repeated failures', async () => { + let callCount = 0; + const gate = new TestGate( + { + logger, + circuitBreakerOptions: { failureThreshold: 2, resetTimeoutMs: 60000 }, + }, + async () => { + callCount++; + throw new Error(`Failure #${callCount}`); + }, + ); + + // All invocations should pass (graceful degradation) + for (let i = 0; i < 5; i++) { + const result = await gate.verify({}); + expect(result.result.passed).toBe(true); + expect(result.result.blocking).toBe(false); + } + }); + + it('circuit breaker prevents cascading failures', async () => { + let doVerifyCalls = 0; + const gate = new TestGate( + { + logger, + circuitBreakerOptions: { failureThreshold: 2, resetTimeoutMs: 60000 }, + }, + async () => { + doVerifyCalls++; + throw new Error('Persistent failure'); + }, + ); + + // Trip the circuit: 2 failures + await gate.verify({}); + await gate.verify({}); + expect(doVerifyCalls).toBe(2); + + // After circuit opens, _doVerify should NOT be called + await gate.verify({}); + await gate.verify({}); + expect(doVerifyCalls).toBe(2); // No additional calls + }); + + it('timeout does not block execution', async () => { + const gate = new TestGate( + { timeoutMs: 50, logger }, + () => new Promise((resolve) => setTimeout(() => resolve({ + passed: false, + warnings: ['should not appear'], + opportunities: [], + }), 500)), + ); + + const start = Date.now(); + const result = await gate.verify({}); + const elapsed = Date.now() - start; + + expect(result.result.passed).toBe(true); // Warn-and-proceed + expect(elapsed).toBeLessThan(200); // Should timeout at ~50ms, not wait 500ms + }); + + it('all G1-G4 gates degrade gracefully on 
engine error', async () => { + const brokenEngine = { + analyze: jest.fn().mockImplementation(() => { + throw new Error('Engine crashed'); + }), + }; + + const brokenLoader = createMockRegistryLoader(); + brokenLoader.queryByPath.mockImplementation(() => { + throw new Error('Loader crashed'); + }); + + const gates = [ + new G1EpicCreationGate({ decisionEngine: brokenEngine, logger }), + new G2StoryCreationGate({ decisionEngine: brokenEngine, logger }), + new G3StoryValidationGate({ + decisionEngine: brokenEngine, + registryLoader: brokenLoader, + logger, + }), + new G4DevContextGate({ decisionEngine: brokenEngine, logger }), + ]; + + for (const gate of gates) { + const result = await gate.verify({ intent: 'test' }); + expect(result.result.passed).toBe(true); + expect(result.result.blocking).toBe(false); + expect(result.result.warnings.length).toBeGreaterThan(0); + } + }); +}); + +``` + +================================================== +📄 tests/core/ids/registry-updater.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const path = require('path'); +const yaml = require('js-yaml'); +const os = require('os'); + +const { RegistryUpdater, AUDIT_LOG_PATH, LOCK_FILE, BACKUP_DIR } = require('../../../.aios-core/core/ids/registry-updater'); + +const FIXTURES = path.resolve(__dirname, 'fixtures'); +const TEMP_DIR = path.join(os.tmpdir(), 'ids-updater-test-' + Date.now()); +const TEMP_REGISTRY = path.join(TEMP_DIR, 'entity-registry.yaml'); +const TEMP_AUDIT_LOG = path.join(TEMP_DIR, 'registry-update-log.jsonl'); +const TEMP_LOCK_FILE = path.join(TEMP_DIR, '.entity-registry.lock'); +const TEMP_BACKUP_DIR = path.join(TEMP_DIR, 'registry-backups'); + +// ─── Helpers ─────────────────────────────────────────────────────── + +function createTempRegistry(data) { + if (!fs.existsSync(TEMP_DIR)) { + fs.mkdirSync(TEMP_DIR, { recursive: true }); + } + const yamlStr = yaml.dump(data || getBaseRegistry(), { lineWidth: 120, 
noRefs: true }); + fs.writeFileSync(TEMP_REGISTRY, yamlStr, 'utf8'); +} + +function getBaseRegistry() { + return { + metadata: { + version: '1.0.0', + lastUpdated: '2026-02-08T00:00:00Z', + entityCount: 2, + checksumAlgorithm: 'sha256', + }, + entities: { + tasks: { + 'test-task': { + path: '.aios-core/development/tasks/test-task.md', + type: 'task', + purpose: 'A test task for registry updater tests', + keywords: ['test', 'task'], + usedBy: [], + dependencies: [], + adaptability: { score: 0.8, constraints: [], extensionPoints: [] }, + checksum: 'sha256:0000000000000000000000000000000000000000000000000000000000000000', + lastVerified: '2026-02-08T00:00:00Z', + }, + }, + scripts: { + 'test-script': { + path: '.aios-core/development/scripts/test-script.js', + type: 'script', + purpose: 'A test script for registry updater tests', + keywords: ['test', 'script'], + usedBy: [], + dependencies: [], + adaptability: { score: 0.7, constraints: [], extensionPoints: [] }, + checksum: 'sha256:1111111111111111111111111111111111111111111111111111111111111111', + lastVerified: '2026-02-08T00:00:00Z', + }, + }, + }, + categories: [ + { id: 'tasks', description: 'Task workflows', basePath: '.aios-core/development/tasks' }, + { id: 'scripts', description: 'Scripts', basePath: '.aios-core/development/scripts' }, + ], + }; +} + +function readRegistry() { + const content = fs.readFileSync(TEMP_REGISTRY, 'utf8'); + return yaml.load(content); +} + +function createUpdater(options = {}) { + return new RegistryUpdater({ + registryPath: TEMP_REGISTRY, + repoRoot: TEMP_DIR, + debounceMs: 10, + auditLogPath: TEMP_AUDIT_LOG, + lockFile: TEMP_LOCK_FILE, + backupDir: TEMP_BACKUP_DIR, + ...options, + }); +} + +function createTempFile(relPath, content) { + const abs = path.join(TEMP_DIR, relPath); + const dir = path.dirname(abs); + if (!fs.existsSync(dir)) { + fs.mkdirSync(dir, { recursive: true }); + } + fs.writeFileSync(abs, content, 'utf8'); + return abs; +} + +// ─── Setup / Teardown 
────────────────────────────────────────────── + +beforeEach(() => { + if (fs.existsSync(TEMP_DIR)) { + fs.rmSync(TEMP_DIR, { recursive: true, force: true }); + } + fs.mkdirSync(TEMP_DIR, { recursive: true }); + createTempRegistry(); +}); + +afterEach(() => { + if (fs.existsSync(TEMP_DIR)) { + fs.rmSync(TEMP_DIR, { recursive: true, force: true }); + } +}); + +// ─── Tests ───────────────────────────────────────────────────────── + +describe('RegistryUpdater', () => { + describe('constructor', () => { + it('creates instance with default options', () => { + const updater = createUpdater(); + expect(updater).toBeDefined(); + expect(updater.getStats()).toEqual({ + totalUpdates: 0, + isWatching: false, + pendingUpdates: 0, + }); + }); + }); + + describe('processChanges()', () => { + it('returns zero updates for empty changes', async () => { + const updater = createUpdater(); + const result = await updater.processChanges([]); + expect(result).toEqual({ updated: 0, errors: [] }); + }); + + it('returns zero updates for null changes', async () => { + const updater = createUpdater(); + const result = await updater.processChanges(null); + expect(result).toEqual({ updated: 0, errors: [] }); + }); + }); + + describe('File creation handling (AC: 2)', () => { + it('adds new entity to registry when file is created', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/new-task.md', + '# New Task\n\n## Purpose\nA brand new task for testing.\n', + ); + + const result = await updater.processChanges([{ action: 'add', filePath }]); + + expect(result.updated).toBe(1); + expect(result.errors).toHaveLength(0); + + const registry = readRegistry(); + expect(registry.entities.tasks['new-task']).toBeDefined(); + expect(registry.entities.tasks['new-task'].type).toBe('task'); + expect(registry.entities.tasks['new-task'].purpose).toContain('brand new task'); + expect(registry.entities.tasks['new-task'].checksum).toMatch(/^sha256:/); 
+ }); + + it('sets correct adaptability score by entity type', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/scripts/helper.js', + '// Helper script\nmodule.exports = {};\n', + ); + + await updater.processChanges([{ action: 'add', filePath }]); + + const registry = readRegistry(); + expect(registry.entities.scripts.helper).toBeDefined(); + expect(registry.entities.scripts.helper.adaptability.score).toBe(0.7); + }); + + it('extracts keywords from file content', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/deploy-automation.md', + '# Deploy Automation Task\n\n## Purpose\nAutomate deployment pipeline.\n', + ); + + await updater.processChanges([{ action: 'add', filePath }]); + + const registry = readRegistry(); + const entity = registry.entities.tasks['deploy-automation']; + expect(entity).toBeDefined(); + expect(entity.keywords).toEqual(expect.arrayContaining(['deploy', 'automation'])); + }); + + it('detects dependencies from require statements', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/scripts/consumer.js', + "const helper = require('./helper');\nmodule.exports = {};\n", + ); + + await updater.processChanges([{ action: 'add', filePath }]); + + const registry = readRegistry(); + const entity = registry.entities.scripts.consumer; + expect(entity).toBeDefined(); + expect(entity.dependencies).toContain('helper'); + }); + + it('creates category if it does not exist', async () => { + // Create registry without 'modules' category + const baseReg = getBaseRegistry(); + delete baseReg.entities.scripts; + createTempRegistry(baseReg); + + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/core/utils/new-module.js', + '// New module\nmodule.exports = {};\n', + ); + + await updater.processChanges([{ action: 'add', filePath }]); + + const registry = 
readRegistry(); + expect(registry.entities.modules).toBeDefined(); + expect(registry.entities.modules['new-module']).toBeDefined(); + }); + }); + + describe('File modification handling (AC: 3)', () => { + it('updates checksum when file content changes', async () => { + const updater = createUpdater(); + + // Create a task file that matches existing entity + const filePath = createTempFile( + '.aios-core/development/tasks/test-task.md', + '# Test Task Updated\n\n## Purpose\nUpdated purpose for the test task.\n', + ); + + const result = await updater.processChanges([{ action: 'change', filePath }]); + + expect(result.updated).toBe(1); + + const registry = readRegistry(); + const entity = registry.entities.tasks['test-task']; + expect(entity.checksum).not.toBe('sha256:0000000000000000000000000000000000000000000000000000000000000000'); + expect(entity.purpose).toContain('Updated purpose'); + }); + + it('updates lastVerified timestamp on modification', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/test-task.md', + '# Test Task\n\n## Purpose\nSame content.\n', + ); + + await updater.processChanges([{ action: 'change', filePath }]); + + const registry = readRegistry(); + const entity = registry.entities.tasks['test-task']; + expect(new Date(entity.lastVerified).getTime()).toBeGreaterThan(new Date('2026-02-08T00:00:00Z').getTime()); + }); + + it('creates entity if modified file was not in registry', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/brand-new.md', + '# Brand New Task\n\n## Purpose\nThis was not tracked before.\n', + ); + + await updater.processChanges([{ action: 'change', filePath }]); + + const registry = readRegistry(); + expect(registry.entities.tasks['brand-new']).toBeDefined(); + }); + + it('re-extracts keywords when content changes', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + 
'.aios-core/development/tasks/test-task.md',
+        '# Deployment Orchestration\n\n## Purpose\nOrchestrate deployment workflows.\n',
+      );
+
+      await updater.processChanges([{ action: 'change', filePath }]);
+
+      const registry = readRegistry();
+      const entity = registry.entities.tasks['test-task'];
+      expect(entity.keywords).toEqual(expect.arrayContaining(['deployment', 'orchestration']));
+    });
+  });
+
+  describe('File deletion handling (AC: 4)', () => {
+    it('removes entity from registry when file is deleted', async () => {
+      const updater = createUpdater();
+      const filePath = path.join(TEMP_DIR, '.aios-core/development/tasks/test-task.md');
+
+      const result = await updater.processChanges([{ action: 'unlink', filePath }]);
+
+      expect(result.updated).toBe(1);
+
+      const registry = readRegistry();
+      expect(registry.entities.tasks['test-task']).toBeUndefined();
+    });
+
+    it('cleans up usedBy references when entity is deleted', async () => {
+      // Setup: dependent-task depends on test-script, so test-script.usedBy lists dependent-task
+      const baseReg = getBaseRegistry();
+      baseReg.entities.scripts['test-script'].usedBy = ['dependent-task'];
+      baseReg.entities.tasks['dependent-task'] = {
+        path: '.aios-core/development/tasks/dependent-task.md',
+        type: 'task',
+        purpose: 'Depends on test-script',
+        keywords: ['dependent'],
+        usedBy: [],
+        dependencies: ['test-script'],
+        adaptability: { score: 0.8, constraints: [], extensionPoints: [] },
+        checksum: 'sha256:2222222222222222222222222222222222222222222222222222222222222222',
+        lastVerified: '2026-02-08T00:00:00Z',
+      };
+      createTempRegistry(baseReg);
+
+      const updater = createUpdater();
+      const filePath = path.join(TEMP_DIR, '.aios-core/development/tasks/dependent-task.md');
+
+      await updater.processChanges([{ action: 'unlink', filePath }]);
+
+      const registry = readRegistry();
+      expect(registry.entities.tasks['dependent-task']).toBeUndefined();
+      // usedBy reference should be cleaned
+
expect(registry.entities.scripts['test-script'].usedBy).not.toContain('dependent-task'); + }); + + it('handles deletion of non-existent entity gracefully', async () => { + const updater = createUpdater(); + const filePath = path.join(TEMP_DIR, '.aios-core/development/tasks/nonexistent.md'); + + const result = await updater.processChanges([{ action: 'unlink', filePath }]); + + // Should not crash, may or may not count as "updated" + expect(result.errors).toHaveLength(0); + }); + + it('updates entity count after deletion', async () => { + const updater = createUpdater(); + const filePath = path.join(TEMP_DIR, '.aios-core/development/tasks/test-task.md'); + + await updater.processChanges([{ action: 'unlink', filePath }]); + + const registry = readRegistry(); + expect(registry.metadata.entityCount).toBe(1); + }); + }); + + describe('Batch operations (AC: 8)', () => { + it('processes multiple changes in a single registry write', async () => { + const updater = createUpdater(); + + const file1 = createTempFile( + '.aios-core/development/tasks/batch-task-1.md', + '# Batch Task 1\n\n## Purpose\nFirst batch task.\n', + ); + const file2 = createTempFile( + '.aios-core/development/tasks/batch-task-2.md', + '# Batch Task 2\n\n## Purpose\nSecond batch task.\n', + ); + const deleteFile = path.join(TEMP_DIR, '.aios-core/development/tasks/test-task.md'); + + const result = await updater.processChanges([ + { action: 'add', filePath: file1 }, + { action: 'add', filePath: file2 }, + { action: 'unlink', filePath: deleteFile }, + ]); + + expect(result.updated).toBe(3); + + const registry = readRegistry(); + expect(registry.entities.tasks['batch-task-1']).toBeDefined(); + expect(registry.entities.tasks['batch-task-2']).toBeDefined(); + expect(registry.entities.tasks['test-task']).toBeUndefined(); + }); + }); + + describe('Excluded patterns', () => { + it('ignores test files (*.test.js)', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + 
'.aios-core/development/scripts/something.test.js', + '// test file\n', + ); + + const result = await updater.processChanges([{ action: 'add', filePath }]); + + expect(result.updated).toBe(0); + }); + + it('ignores node_modules', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/core/node_modules/pkg/index.js', + '// node module\n', + ); + + const result = await updater.processChanges([{ action: 'add', filePath }]); + + expect(result.updated).toBe(0); + }); + + it('ignores README.md files', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/README.md', + '# README\n', + ); + + const result = await updater.processChanges([{ action: 'add', filePath }]); + + expect(result.updated).toBe(0); + }); + + it('ignores files outside watched paths', async () => { + const updater = createUpdater(); + const filePath = createTempFile('src/random-file.js', '// not watched\n'); + + const result = await updater.processChanges([{ action: 'add', filePath }]); + + expect(result.updated).toBe(0); + }); + + it('ignores unsupported file extensions', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/image.png', + 'binary content', + ); + + const result = await updater.processChanges([{ action: 'add', filePath }]); + + expect(result.updated).toBe(0); + }); + }); + + describe('Permission errors (AC: edge case)', () => { + it('handles EACCES error gracefully during file read', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/locked-file.md', + '# Locked File\n', + ); + + // Make the file unreadable (platform-dependent) + if (process.platform !== 'win32') { + fs.chmodSync(filePath, 0o000); + const result = await updater.processChanges([{ action: 'add', filePath }]); + expect(result.errors).toHaveLength(0); // Skips gracefully + fs.chmodSync(filePath, 
0o644); + } else { + // On Windows, permission errors are harder to simulate + // Just verify the updater doesn't crash with a normal file + const result = await updater.processChanges([{ action: 'add', filePath }]); + expect(result.errors).toHaveLength(0); + } + }); + }); + + describe('Agent task completion hook (AC: 6)', () => { + it('processes artifacts from agent task completion', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/agent-output.md', + '# Agent Output\n\n## Purpose\nGenerated by agent.\n', + ); + + const task = { id: 'TASK-42', agent: '@dev' }; + const result = await updater.onAgentTaskComplete(task, [filePath]); + + expect(result.updated).toBe(1); + + const registry = readRegistry(); + expect(registry.entities.tasks['agent-output']).toBeDefined(); + }); + + it('handles deleted artifacts in task completion', async () => { + const updater = createUpdater(); + const missingPath = path.join(TEMP_DIR, '.aios-core/development/tasks/deleted-file.md'); + + const task = { id: 'TASK-43', agent: '@dev' }; + const result = await updater.onAgentTaskComplete(task, [missingPath]); + + // Should process as unlink + expect(result.errors).toHaveLength(0); + }); + + it('returns zero updates for empty artifacts', async () => { + const updater = createUpdater(); + const result = await updater.onAgentTaskComplete({ id: 'TASK-44' }, []); + expect(result).toEqual({ updated: 0, errors: [] }); + }); + }); + + describe('Metadata updates', () => { + it('updates lastUpdated timestamp after changes', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/ts-check.md', + '# Timestamp Check\n\n## Purpose\nVerify timestamps.\n', + ); + + const before = new Date().toISOString(); + await updater.processChanges([{ action: 'add', filePath }]); + + const registry = readRegistry(); + expect(new 
Date(registry.metadata.lastUpdated).getTime()).toBeGreaterThanOrEqual(new Date(before).getTime()); + }); + + it('updates entity count after changes', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/count-check.md', + '# Count Check\n', + ); + + await updater.processChanges([{ action: 'add', filePath }]); + + const registry = readRegistry(); + expect(registry.metadata.entityCount).toBe(3); // 2 original + 1 new + }); + }); + + describe('getStats()', () => { + it('tracks total updates across multiple processChanges calls', async () => { + const updater = createUpdater(); + + const file1 = createTempFile('.aios-core/development/tasks/stat1.md', '# Stat 1\n'); + const file2 = createTempFile('.aios-core/development/tasks/stat2.md', '# Stat 2\n'); + + await updater.processChanges([{ action: 'add', filePath: file1 }]); + await updater.processChanges([{ action: 'add', filePath: file2 }]); + + const stats = updater.getStats(); + expect(stats.totalUpdates).toBe(2); + expect(stats.isWatching).toBe(false); + }); + }); + + describe('Relationship resolution', () => { + it('rebuilds usedBy after changes', async () => { + const updater = createUpdater(); + + // Create a script that is depended upon + createTempFile( + '.aios-core/development/scripts/test-script.js', + '// Test script\nmodule.exports = {};\n', + ); + + // Create a task that depends on test-script + const consumerPath = createTempFile( + '.aios-core/development/tasks/consumer.md', + '# Consumer Task\n\ndependencies:\n - test-script\n', + ); + + await updater.processChanges([{ action: 'add', filePath: consumerPath }]); + + const registry = readRegistry(); + // test-script should have consumer in usedBy + const script = registry.entities.scripts['test-script']; + expect(script).toBeDefined(); + expect(script.usedBy).toContain('consumer'); + }); + }); + + describe('Audit logging (AC: 9)', () => { + it('writes JSONL entries on processChanges', async () 
=> { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/audit-test.md', + '# Audit Test\n\n## Purpose\nTest audit logging.\n', + ); + + await updater.processChanges([{ action: 'add', filePath }]); + + expect(fs.existsSync(TEMP_AUDIT_LOG)).toBe(true); + const lines = fs.readFileSync(TEMP_AUDIT_LOG, 'utf8').trim().split('\n').filter(Boolean); + expect(lines.length).toBeGreaterThanOrEqual(1); + const entry = JSON.parse(lines[0]); + expect(entry.timestamp).toBeDefined(); + }); + + it('filters audit log entries by action', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/filter-test.md', + '# Filter Test\n', + ); + + await updater.processChanges([{ action: 'add', filePath }]); + await updater.processChanges([{ action: 'change', filePath }]); + + const addEntries = updater.queryAuditLog({ action: 'add' }); + const changeEntries = updater.queryAuditLog({ action: 'change' }); + expect(addEntries.length).toBeGreaterThanOrEqual(1); + expect(changeEntries.length).toBeGreaterThanOrEqual(1); + }); + + it('filters audit log entries by path', async () => { + const updater = createUpdater(); + const file1 = createTempFile('.aios-core/development/tasks/pathA.md', '# Path A\n'); + const file2 = createTempFile('.aios-core/development/tasks/pathB.md', '# Path B\n'); + + await updater.processChanges([ + { action: 'add', filePath: file1 }, + { action: 'add', filePath: file2 }, + ]); + + const filtered = updater.queryAuditLog({ path: 'pathA' }); + expect(filtered.length).toBeGreaterThanOrEqual(1); + expect(filtered.every((e) => (e.path || '').includes('pathA'))).toBe(true); + }); + + it('returns empty array when no audit log exists', () => { + const updater = createUpdater({ auditLogPath: path.join(TEMP_DIR, 'nonexistent.jsonl') }); + const entries = updater.queryAuditLog({}); + expect(entries).toEqual([]); + }); + + it('rotates audit log when exceeding 5MB', async () => 
{ + const updater = createUpdater(); + + // Create a large audit log file (just over 5MB) + const bigContent = '{"timestamp":"2026-01-01","action":"add","path":"test"}\n'.repeat(100000); + fs.writeFileSync(TEMP_AUDIT_LOG, bigContent, 'utf8'); + + const filePath = createTempFile( + '.aios-core/development/tasks/rotation-trigger.md', + '# Rotation Trigger\n\n## Purpose\nTrigger log rotation.\n', + ); + + const result = await updater.processChanges([{ action: 'add', filePath }]); + + // If the update succeeded, backup should exist + if (result.updated > 0) { + expect(fs.existsSync(TEMP_BACKUP_DIR)).toBe(true); + const backups = fs.readdirSync(TEMP_BACKUP_DIR); + expect(backups.length).toBeGreaterThanOrEqual(1); + } else { + // If lock contention prevented update, skip the backup assertion + // The important thing is no exception was thrown + expect(result.errors.length).toBeGreaterThanOrEqual(0); + } + }); + }); + + describe('Concurrent updates (AC: 10)', () => { + it('handles parallel processChanges without corruption', async () => { + const updater = createUpdater(); + const files = []; + for (let i = 0; i < 5; i++) { + files.push( + createTempFile( + `.aios-core/development/tasks/concurrent-${i}.md`, + `# Concurrent ${i}\n\n## Purpose\nConcurrent test ${i}.\n`, + ), + ); + } + + // Fire 5 processChanges in parallel + const results = await Promise.all( + files.map((f) => updater.processChanges([{ action: 'add', filePath: f }])), + ); + + // All should complete (no thrown exceptions) + // Lock contention errors are expected under parallel writes (last-write-wins) + const totalUpdated = results.reduce((sum, r) => sum + r.updated, 0); + // Under high contention, some or all may fail to acquire lock - this is expected behavior + // The important thing is no exceptions were thrown and no corruption occurred + expect(totalUpdated).toBeGreaterThanOrEqual(0); + + // Registry should be valid YAML (not corrupted by concurrent writes) + const registry = readRegistry(); + 
expect(registry.entities).toBeDefined(); + expect(registry.metadata).toBeDefined(); + }); + + it('handles two updater instances with shared registry', async () => { + const updater1 = createUpdater(); + const updater2 = createUpdater(); + + const file1 = createTempFile( + '.aios-core/development/tasks/instance-a.md', + '# Instance A\n\n## Purpose\nFrom updater 1.\n', + ); + const file2 = createTempFile( + '.aios-core/development/tasks/instance-b.md', + '# Instance B\n\n## Purpose\nFrom updater 2.\n', + ); + + const [r1, r2] = await Promise.all([ + updater1.processChanges([{ action: 'add', filePath: file1 }]), + updater2.processChanges([{ action: 'add', filePath: file2 }]), + ]); + + // Under lock contention, updates may fail - this is expected behavior + // The important thing is no exceptions were thrown and registry is not corrupted + expect(r1.updated + r2.updated).toBeGreaterThanOrEqual(0); + + const registry = readRegistry(); + // Registry should not be corrupted + expect(registry.entities).toBeDefined(); + expect(registry.entities.tasks).toBeDefined(); + }); + }); + + describe('Performance (AC: 7)', () => { + it('processes single file update in <5 seconds', async () => { + const updater = createUpdater(); + const filePath = createTempFile( + '.aios-core/development/tasks/perf-single.md', + '# Performance Single\n\n## Purpose\nBenchmark single file.\n', + ); + + const start = Date.now(); + await updater.processChanges([{ action: 'add', filePath }]); + const elapsed = Date.now() - start; + + expect(elapsed).toBeLessThan(5000); + }); + + it('processes batch of 10 files in <5 seconds', async () => { + const updater = createUpdater(); + const changes = []; + for (let i = 0; i < 10; i++) { + const fp = createTempFile( + `.aios-core/development/tasks/perf-batch-${i}.md`, + `# Perf Batch ${i}\n\n## Purpose\nBenchmark batch ${i}.\n`, + ); + changes.push({ action: 'add', filePath: fp }); + } + + const start = Date.now(); + await updater.processChanges(changes); + const 
elapsed = Date.now() - start; + + expect(elapsed).toBeLessThan(5000); + }); + }); +}); + +``` + +================================================== +📄 tests/core/ids/registry-healer.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const yaml = require('js-yaml'); +const crypto = require('crypto'); + +const { + RegistryHealer, + HEALING_RULES, + SEVERITY_ORDER, + daysSince, + buildEntityIndex, + MAX_BACKUPS, + STALE_DAYS_THRESHOLD, +} = require('../../../.aios-core/core/ids/registry-healer'); + +// Test constants — unique temp dir per test run +const TEST_ROOT = path.join(os.tmpdir(), `ids-healer-test-${process.pid}-${Date.now()}`); +const TEST_REGISTRY_PATH = path.join(TEST_ROOT, 'entity-registry.yaml'); +const TEST_HEALING_LOG = path.join(TEST_ROOT, 'registry-healing-log.jsonl'); +const TEST_BACKUP_DIR = path.join(TEST_ROOT, 'registry-backups', 'healing'); + +// ─── Test Helpers ────────────────────────────────────────────────── + +function createTestDir() { + fs.mkdirSync(TEST_ROOT, { recursive: true }); +} + +function cleanupTestDir() { + try { + fs.rmSync(TEST_ROOT, { recursive: true, force: true }); + } catch { + // Best-effort cleanup + } +} + +/** + * Create a test file with given content. + * @param {string} relativePath - Path relative to TEST_ROOT + * @param {string} content - File content + * @returns {string} Absolute path to created file + */ +function createTestFile(relativePath, content = '# Test File\n\nSome content here.') { + const absPath = path.join(TEST_ROOT, relativePath); + fs.mkdirSync(path.dirname(absPath), { recursive: true }); + fs.writeFileSync(absPath, content, 'utf8'); + return absPath; +} + +/** + * Compute sha256 checksum for a string. 
+ * @param {string} content + * @returns {string} sha256 prefixed checksum + */ +function checksumFor(content) { + return 'sha256:' + crypto.createHash('sha256').update(content).digest('hex'); +} + +/** + * Build a registry object for testing. + */ +function buildTestRegistry(entities = {}) { + return { + metadata: { + version: '1.0.0', + lastUpdated: new Date().toISOString(), + entityCount: Object.values(entities).reduce((sum, cat) => sum + Object.keys(cat).length, 0), + checksumAlgorithm: 'sha256', + }, + entities, + categories: Object.keys(entities).map((id) => ({ id, description: `${id} category` })), + }; +} + +/** + * Write a registry YAML file for testing. + */ +function writeTestRegistry(entities = {}) { + const registry = buildTestRegistry(entities); + fs.mkdirSync(path.dirname(TEST_REGISTRY_PATH), { recursive: true }); + fs.writeFileSync(TEST_REGISTRY_PATH, yaml.dump(registry, { lineWidth: 120, noRefs: true }), 'utf8'); + return registry; +} + +/** + * Create a RegistryHealer configured for testing. 
+ */ +function createTestHealer() { + return new RegistryHealer({ + registryPath: TEST_REGISTRY_PATH, + repoRoot: TEST_ROOT, + healingLogPath: TEST_HEALING_LOG, + backupDir: TEST_BACKUP_DIR, + }); +} + +// ─── Tests ───────────────────────────────────────────────────────── + +describe('RegistryHealer', () => { + beforeEach(() => { + createTestDir(); + }); + + afterEach(() => { + cleanupTestDir(); + }); + + // ─── Utility Functions ───────────────────────────────────────── + + describe('daysSince()', () => { + it('returns Infinity for undefined input', () => { + expect(daysSince(undefined)).toBe(Infinity); + }); + + it('returns Infinity for invalid date', () => { + expect(daysSince('not-a-date')).toBe(Infinity); + }); + + it('returns approximately 0 for current timestamp', () => { + const now = new Date().toISOString(); + expect(daysSince(now)).toBeLessThan(1); + }); + + it('returns correct day count for past date', () => { + const tenDaysAgo = new Date(Date.now() - 10 * 24 * 60 * 60 * 1000).toISOString(); + const result = daysSince(tenDaysAgo); + expect(result).toBeGreaterThan(9.5); + expect(result).toBeLessThan(10.5); + }); + }); + + describe('buildEntityIndex()', () => { + it('returns empty Map for null entities', () => { + const index = buildEntityIndex(null); + expect(index.size).toBe(0); + }); + + it('builds flat index from nested entity structure', () => { + const entities = { + tasks: { + 'task-a': { path: 'tasks/task-a.md', type: 'task' }, + 'task-b': { path: 'tasks/task-b.md', type: 'task' }, + }, + scripts: { + 'script-a': { path: 'scripts/script-a.js', type: 'script' }, + }, + }; + const index = buildEntityIndex(entities); + expect(index.size).toBe(3); + expect(index.get('task-a')._category).toBe('tasks'); + expect(index.get('script-a')._category).toBe('scripts'); + }); + + it('skips non-object category entries', () => { + const entities = { + tasks: null, + scripts: 'invalid', + }; + const index = buildEntityIndex(entities); + 
expect(index.size).toBe(0); + }); + }); + + // ─── HEALING_RULES Configuration ──────────────────────────────── + + describe('HEALING_RULES', () => { + it('defines 6 healing rules', () => { + expect(HEALING_RULES).toHaveLength(6); + }); + + it('each rule has required fields', () => { + for (const rule of HEALING_RULES) { + expect(rule).toHaveProperty('id'); + expect(rule).toHaveProperty('description'); + expect(rule).toHaveProperty('severity'); + expect(rule).toHaveProperty('autoHealable'); + expect(['critical', 'high', 'medium', 'low']).toContain(rule.severity); + } + }); + + it('missing-file is critical and non-auto-healable', () => { + const rule = HEALING_RULES.find((r) => r.id === 'missing-file'); + expect(rule.severity).toBe('critical'); + expect(rule.autoHealable).toBe(false); + }); + + it('checksum-mismatch is high and auto-healable', () => { + const rule = HEALING_RULES.find((r) => r.id === 'checksum-mismatch'); + expect(rule.severity).toBe('high'); + expect(rule.autoHealable).toBe(true); + }); + + it('has correct auto-healable distribution (5 of 6 = 83%)', () => { + const autoHealable = HEALING_RULES.filter((r) => r.autoHealable); + expect(autoHealable.length).toBe(5); + // 5/6 = 83.3% > 80% threshold + expect((autoHealable.length / HEALING_RULES.length) * 100).toBeGreaterThanOrEqual(80); + }); + }); + + // ─── Health Check (Detection) ────────────────────────────────── + + describe('runHealthCheck()', () => { + it('returns empty issues for a healthy registry', () => { + const content = '# Healthy File\n\nContent here.'; + const checksum = checksumFor(content); + createTestFile('tasks/healthy-task.md', content); + + writeTestRegistry({ + tasks: { + 'healthy-task': { + path: 'tasks/healthy-task.md', + type: 'task', + keywords: ['healthy', 'task'], + usedBy: [], + dependencies: [], + checksum, + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + 
expect(result.issues).toHaveLength(0); + expect(result.summary.total).toBe(0); + expect(result.timestamp).toBeDefined(); + }); + + it('detects missing files (CRITICAL)', () => { + writeTestRegistry({ + tasks: { + 'missing-task': { + path: 'tasks/does-not-exist.md', + type: 'task', + keywords: ['missing'], + usedBy: [], + dependencies: [], + checksum: 'sha256:abc123', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + expect(result.issues).toHaveLength(1); + expect(result.issues[0].ruleId).toBe('missing-file'); + expect(result.issues[0].severity).toBe('critical'); + expect(result.issues[0].autoHealable).toBe(false); + expect(result.issues[0].entityId).toBe('missing-task'); + }); + + it('detects checksum mismatches (HIGH)', () => { + const content = '# Changed File\n\nNew content.'; + createTestFile('tasks/changed-task.md', content); + + writeTestRegistry({ + tasks: { + 'changed-task': { + path: 'tasks/changed-task.md', + type: 'task', + keywords: ['changed'], + usedBy: [], + dependencies: [], + checksum: 'sha256:old_wrong_checksum_value_here', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + const checksumIssues = result.issues.filter((i) => i.ruleId === 'checksum-mismatch'); + expect(checksumIssues).toHaveLength(1); + expect(checksumIssues[0].severity).toBe('high'); + expect(checksumIssues[0].autoHealable).toBe(true); + expect(checksumIssues[0].details.expected).toBe('sha256:old_wrong_checksum_value_here'); + expect(checksumIssues[0].details.actual).toBeDefined(); + }); + + it('detects orphaned usedBy references (MEDIUM)', () => { + const content = '# Task A'; + createTestFile('tasks/task-a.md', content); + + writeTestRegistry({ + tasks: { + 'task-a': { + path: 'tasks/task-a.md', + type: 'task', + keywords: ['task', 'a'], + usedBy: ['nonexistent-entity', 'another-missing'], + 
dependencies: [], + checksum: checksumFor(content), + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + const orphanedIssues = result.issues.filter((i) => i.ruleId === 'orphaned-usedBy'); + expect(orphanedIssues).toHaveLength(1); + expect(orphanedIssues[0].severity).toBe('medium'); + expect(orphanedIssues[0].details.orphanedRefs).toEqual(['nonexistent-entity', 'another-missing']); + }); + + it('detects orphaned dependency references (MEDIUM)', () => { + const content = '# Task B'; + createTestFile('tasks/task-b.md', content); + + writeTestRegistry({ + tasks: { + 'task-b': { + path: 'tasks/task-b.md', + type: 'task', + keywords: ['task', 'b'], + usedBy: [], + dependencies: ['missing-dep'], + checksum: checksumFor(content), + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + const depIssues = result.issues.filter((i) => i.ruleId === 'orphaned-dependency'); + expect(depIssues).toHaveLength(1); + expect(depIssues[0].severity).toBe('medium'); + expect(depIssues[0].details.orphanedRefs).toEqual(['missing-dep']); + }); + + it('detects missing keywords (LOW)', () => { + const content = '# No Keywords'; + createTestFile('tasks/no-kw.md', content); + + writeTestRegistry({ + tasks: { + 'no-kw': { + path: 'tasks/no-kw.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: checksumFor(content), + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + const kwIssues = result.issues.filter((i) => i.ruleId === 'missing-keywords'); + expect(kwIssues).toHaveLength(1); + expect(kwIssues[0].severity).toBe('low'); + expect(kwIssues[0].autoHealable).toBe(true); + }); + + it('detects stale lastVerified timestamps (LOW)', () => { + const content = '# Stale Entity'; + 
createTestFile('tasks/stale-task.md', content); + const tenDaysAgo = new Date(Date.now() - 10 * 24 * 60 * 60 * 1000).toISOString(); + + writeTestRegistry({ + tasks: { + 'stale-task': { + path: 'tasks/stale-task.md', + type: 'task', + keywords: ['stale'], + usedBy: [], + dependencies: [], + checksum: checksumFor(content), + lastVerified: tenDaysAgo, + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + const staleIssues = result.issues.filter((i) => i.ruleId === 'stale-verification'); + expect(staleIssues).toHaveLength(1); + expect(staleIssues[0].severity).toBe('low'); + expect(staleIssues[0].details.daysSince).toBeGreaterThanOrEqual(9); + }); + + it('skips further checks for missing files', () => { + // A missing file should not also trigger checksum-mismatch etc + writeTestRegistry({ + tasks: { + 'gone-task': { + path: 'tasks/gone.md', + type: 'task', + keywords: [], + usedBy: ['nonexistent'], + dependencies: ['nope'], + checksum: 'sha256:wrong', + lastVerified: '2020-01-01', + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + // Should only have 1 issue: missing-file (not checksum, orphaned, etc) + expect(result.issues).toHaveLength(1); + expect(result.issues[0].ruleId).toBe('missing-file'); + }); + + it('sorts issues by severity (critical first)', () => { + const content1 = '# File A'; + createTestFile('tasks/file-a.md', content1); + + writeTestRegistry({ + tasks: { + 'missing-task': { + path: 'tasks/nonexistent.md', + type: 'task', + keywords: ['missing'], + usedBy: [], + dependencies: [], + checksum: 'sha256:abc', + lastVerified: new Date().toISOString(), + }, + 'file-a': { + path: 'tasks/file-a.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: checksumFor(content1), + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + 
expect(result.issues.length).toBeGreaterThan(1); + // First issue should be critical (missing-file) + expect(result.issues[0].severity).toBe('critical'); + }); + + it('builds correct summary statistics', () => { + const content = '# Multi Issue'; + createTestFile('tasks/multi.md', content); + + writeTestRegistry({ + tasks: { + 'missing-one': { + path: 'tasks/not-here.md', + type: 'task', + keywords: ['missing'], + usedBy: [], + dependencies: [], + checksum: 'sha256:abc', + lastVerified: new Date().toISOString(), + }, + 'multi': { + path: 'tasks/multi.md', + type: 'task', + keywords: [], + usedBy: ['phantom'], + dependencies: [], + checksum: 'sha256:wrong_checksum', + lastVerified: '2020-01-01', + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + expect(result.summary.total).toBeGreaterThan(0); + expect(result.summary.bySeverity.critical).toBeGreaterThanOrEqual(1); + expect(result.summary.autoHealable).toBeGreaterThan(0); + expect(result.summary.needsManual).toBeGreaterThanOrEqual(1); + expect(result.summary.autoHealableRate).toBeGreaterThan(0); + }); + }); + + // ─── Auto-Healing ────────────────────────────────────────────── + + describe('heal()', () => { + it('heals checksum mismatches by recomputing', () => { + const content = '# Updated Content'; + createTestFile('tasks/fix-checksum.md', content); + const correctChecksum = checksumFor(content); + + writeTestRegistry({ + tasks: { + 'fix-checksum': { + path: 'tasks/fix-checksum.md', + type: 'task', + keywords: ['fix'], + usedBy: [], + dependencies: [], + checksum: 'sha256:old_stale_value', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const checksumIssues = healthResult.issues.filter((i) => i.ruleId === 'checksum-mismatch'); + expect(checksumIssues).toHaveLength(1); + + const healResult = healer.heal(healthResult.issues, { autoOnly: true }); + + 
expect(healResult.healed.length).toBeGreaterThanOrEqual(1); + const checksumHeal = healResult.healed.find((h) => h.ruleId === 'checksum-mismatch'); + expect(checksumHeal).toBeDefined(); + expect(checksumHeal.before).toBe('sha256:old_stale_value'); + expect(checksumHeal.after).toBe(correctChecksum); + expect(healResult.batchId).toBeDefined(); + expect(healResult.backupPath).toBeDefined(); + }); + + it('heals orphaned usedBy references by removing them', () => { + const content = '# Orphan Test'; + createTestFile('tasks/orphan-used.md', content); + + writeTestRegistry({ + tasks: { + 'orphan-used': { + path: 'tasks/orphan-used.md', + type: 'task', + keywords: ['orphan'], + usedBy: ['real-entity', 'phantom-entity'], + dependencies: [], + checksum: checksumFor(content), + lastVerified: new Date().toISOString(), + }, + 'real-entity': { + path: 'tasks/orphan-used.md', + type: 'task', + keywords: ['real'], + usedBy: [], + dependencies: [], + checksum: checksumFor(content), + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const orphanIssues = healthResult.issues.filter((i) => i.ruleId === 'orphaned-usedBy'); + expect(orphanIssues).toHaveLength(1); + + const healResult = healer.heal(healthResult.issues, { autoOnly: true }); + + const orphanHeal = healResult.healed.find((h) => h.ruleId === 'orphaned-usedBy'); + expect(orphanHeal).toBeDefined(); + expect(orphanHeal.before).toContain('phantom-entity'); + expect(orphanHeal.after).not.toContain('phantom-entity'); + expect(orphanHeal.after).toContain('real-entity'); + }); + + it('heals orphaned dependency references by removing them', () => { + const content = '# Dep Test'; + createTestFile('tasks/dep-test.md', content); + + writeTestRegistry({ + tasks: { + 'dep-test': { + path: 'tasks/dep-test.md', + type: 'task', + keywords: ['dep'], + usedBy: [], + dependencies: ['existing-dep', 'phantom-dep'], + checksum: checksumFor(content), + 
lastVerified: new Date().toISOString(), + }, + 'existing-dep': { + path: 'tasks/dep-test.md', + type: 'task', + keywords: ['existing'], + usedBy: [], + dependencies: [], + checksum: checksumFor(content), + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const healResult = healer.heal(healthResult.issues, { autoOnly: true }); + + const depHeal = healResult.healed.find((h) => h.ruleId === 'orphaned-dependency'); + expect(depHeal).toBeDefined(); + expect(depHeal.before).toContain('phantom-dep'); + expect(depHeal.after).not.toContain('phantom-dep'); + }); + + it('heals missing keywords by extracting from file', () => { + const content = '# My Task\n\nSome content.'; + createTestFile('tasks/no-keywords.md', content); + + writeTestRegistry({ + tasks: { + 'no-keywords': { + path: 'tasks/no-keywords.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: checksumFor(content), + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const healResult = healer.heal(healthResult.issues, { autoOnly: true }); + + const kwHeal = healResult.healed.find((h) => h.ruleId === 'missing-keywords'); + expect(kwHeal).toBeDefined(); + expect(kwHeal.before).toEqual([]); + expect(kwHeal.after.length).toBeGreaterThan(0); + }); + + it('heals stale verification by updating timestamp', () => { + const content = '# Stale Verify'; + createTestFile('tasks/stale-verify.md', content); + const oldDate = new Date(Date.now() - 10 * 24 * 60 * 60 * 1000).toISOString(); + + writeTestRegistry({ + tasks: { + 'stale-verify': { + path: 'tasks/stale-verify.md', + type: 'task', + keywords: ['stale'], + usedBy: [], + dependencies: [], + checksum: checksumFor(content), + lastVerified: oldDate, + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const 
healResult = healer.heal(healthResult.issues, { autoOnly: true }); + + const staleHeal = healResult.healed.find((h) => h.ruleId === 'stale-verification'); + expect(staleHeal).toBeDefined(); + expect(staleHeal.before).toBe(oldDate); + // After should be a recent timestamp + const afterDate = new Date(staleHeal.after); + expect(afterDate.getTime()).toBeGreaterThan(Date.now() - 60000); + }); + + it('skips non-auto-healable issues when autoOnly is true', () => { + writeTestRegistry({ + tasks: { + 'missing-task': { + path: 'tasks/nonexistent.md', + type: 'task', + keywords: ['missing'], + usedBy: [], + dependencies: [], + checksum: 'sha256:abc', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const healResult = healer.heal(healthResult.issues, { autoOnly: true }); + + expect(healResult.healed).toHaveLength(0); + expect(healResult.skipped).toHaveLength(1); + expect(healResult.skipped[0].ruleId).toBe('missing-file'); + expect(healResult.skipped[0].reason).toContain('manual intervention'); + }); + + it('supports dryRun mode without modifying registry', () => { + const content = '# Dry Run'; + createTestFile('tasks/dry-run.md', content); + + writeTestRegistry({ + tasks: { + 'dry-run': { + path: 'tasks/dry-run.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: 'sha256:wrong', + lastVerified: '2020-01-01', + }, + }, + }); + + const registryBefore = fs.readFileSync(TEST_REGISTRY_PATH, 'utf8'); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const healResult = healer.heal(healthResult.issues, { autoOnly: true, dryRun: true }); + + expect(healResult.healed.length).toBeGreaterThan(0); + expect(healResult.healed[0].action).toBe('would-heal'); + expect(healResult.backupPath).toBeNull(); + + // Registry should not have been modified + const registryAfter = fs.readFileSync(TEST_REGISTRY_PATH, 'utf8'); + 
expect(registryAfter).toBe(registryBefore); + }); + + it('creates backup before healing', () => { + const content = '# Backup Test'; + createTestFile('tasks/backup-test.md', content); + + writeTestRegistry({ + tasks: { + 'backup-test': { + path: 'tasks/backup-test.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: 'sha256:wrong', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const healResult = healer.heal(healthResult.issues, { autoOnly: true }); + + expect(healResult.backupPath).toBeDefined(); + expect(fs.existsSync(healResult.backupPath)).toBe(true); + }); + + it('persists healed changes to the registry file', () => { + const content = '# Persist Test'; + createTestFile('tasks/persist-test.md', content); + const correctChecksum = checksumFor(content); + + writeTestRegistry({ + tasks: { + 'persist-test': { + path: 'tasks/persist-test.md', + type: 'task', + keywords: ['persist'], + usedBy: [], + dependencies: [], + checksum: 'sha256:old_value', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + healer.heal(healthResult.issues, { autoOnly: true }); + + // Read the registry back and verify + const updatedContent = fs.readFileSync(TEST_REGISTRY_PATH, 'utf8'); + const updated = yaml.load(updatedContent); + expect(updated.entities.tasks['persist-test'].checksum).toBe(correctChecksum); + }); + }); + + // ─── Warning Generation ──────────────────────────────────────── + + describe('emitWarnings()', () => { + it('generates formatted warnings for non-auto-healable issues', async () => { + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(); + + writeTestRegistry({ + tasks: { + 'missing-file-task': { + path: 'tasks/gone.md', + type: 'task', + keywords: ['gone'], + usedBy: [], + dependencies: [], + checksum: 'sha256:abc', + lastVerified: 
new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const manualIssues = healthResult.issues.filter((i) => !i.autoHealable); + + const warnings = await healer.emitWarnings(manualIssues); + + expect(warnings).toHaveLength(1); + expect(warnings[0].ruleId).toBe('missing-file'); + expect(warnings[0].severity).toBe('critical'); + expect(warnings[0].formatted).toContain('WARNING'); + expect(warnings[0].formatted).toContain('missing-file'); + expect(warnings[0].formatted).toContain('missing-file-task'); + expect(warnings[0].suggestedActions.length).toBeGreaterThan(0); + + // Verify console.warn was called + expect(warnSpy).toHaveBeenCalled(); + warnSpy.mockRestore(); + }); + + it('includes suggested manual actions in warnings', async () => { + jest.spyOn(console, 'warn').mockImplementation(); + + writeTestRegistry({ + tasks: { + 'deleted-task': { + path: 'tasks/deleted.md', + type: 'task', + keywords: ['deleted'], + usedBy: [], + dependencies: [], + checksum: 'sha256:abc', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const manualIssues = healthResult.issues.filter((i) => !i.autoHealable); + const warnings = await healer.emitWarnings(manualIssues); + + expect(warnings[0].suggestedActions).toEqual( + expect.arrayContaining([expect.stringContaining('git log --follow')]), + ); + + console.warn.mockRestore(); + }); + }); + + // ─── Rollback ────────────────────────────────────────────────── + + describe('rollback()', () => { + it('restores registry from backup', () => { + const content = '# Rollback Test'; + createTestFile('tasks/rollback-test.md', content); + + writeTestRegistry({ + tasks: { + 'rollback-test': { + path: 'tasks/rollback-test.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: 'sha256:original', + lastVerified: new Date().toISOString(), + }, + }, + }); 
+ + const originalContent = fs.readFileSync(TEST_REGISTRY_PATH, 'utf8'); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const healResult = healer.heal(healthResult.issues, { autoOnly: true }); + + // Registry should be modified + const modifiedContent = fs.readFileSync(TEST_REGISTRY_PATH, 'utf8'); + expect(modifiedContent).not.toBe(originalContent); + + // Rollback + const success = healer.rollback(healResult.batchId); + expect(success).toBe(true); + + // Registry should be restored + const restoredContent = fs.readFileSync(TEST_REGISTRY_PATH, 'utf8'); + expect(restoredContent).toBe(originalContent); + }); + + it('throws error for non-existent backup', () => { + writeTestRegistry({}); + const healer = createTestHealer(); + + expect(() => healer.rollback('nonexistent-batch-id')).toThrow(/Backup not found/); + }); + + it('logs rollback action to healing log', () => { + const content = '# Rollback Log Test'; + createTestFile('tasks/rb-log.md', content); + + writeTestRegistry({ + tasks: { + 'rb-log': { + path: 'tasks/rb-log.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: 'sha256:wrong', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const healResult = healer.heal(healthResult.issues, { autoOnly: true }); + healer.rollback(healResult.batchId); + + const log = healer.queryHealingLog({ batchId: healResult.batchId }); + const rollbackEntry = log.find((e) => e.action === 'rollback'); + expect(rollbackEntry).toBeDefined(); + expect(rollbackEntry.entityId).toBe('registry'); + }); + }); + + // ─── Healing Audit Log ───────────────────────────────────────── + + describe('queryHealingLog()', () => { + it('returns empty array when no log exists', () => { + const healer = createTestHealer(); + const entries = healer.queryHealingLog(); + expect(entries).toEqual([]); + }); + + it('logs healing actions and can 
be queried', () => { + const content = '# Log Query Test'; + createTestFile('tasks/log-query.md', content); + + writeTestRegistry({ + tasks: { + 'log-query': { + path: 'tasks/log-query.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: 'sha256:wrong', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + const healResult = healer.heal(healthResult.issues, { autoOnly: true }); + + const allEntries = healer.queryHealingLog(); + expect(allEntries.length).toBeGreaterThan(0); + + // Filter by batch + const batchEntries = healer.queryHealingLog({ batchId: healResult.batchId }); + expect(batchEntries.length).toBeGreaterThan(0); + expect(batchEntries.every((e) => e.batchId === healResult.batchId)).toBe(true); + }); + + it('each log entry has required fields', () => { + const content = '# Log Fields Test'; + createTestFile('tasks/log-fields.md', content); + + writeTestRegistry({ + tasks: { + 'log-fields': { + path: 'tasks/log-fields.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: 'sha256:wrong', + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + healer.heal(healthResult.issues, { autoOnly: true }); + + const entries = healer.queryHealingLog(); + for (const entry of entries) { + expect(entry).toHaveProperty('timestamp'); + expect(entry).toHaveProperty('batchId'); + expect(entry).toHaveProperty('action'); + expect(entry).toHaveProperty('ruleId'); + expect(entry).toHaveProperty('entityId'); + expect(entry).toHaveProperty('success'); + } + }); + + it('supports limit parameter', () => { + const content = '# Limit Test'; + createTestFile('tasks/limit-test.md', content); + + writeTestRegistry({ + tasks: { + 'limit-test': { + path: 'tasks/limit-test.md', + type: 'task', + keywords: [], + usedBy: ['phantom-a', 'phantom-b'], + 
dependencies: ['phantom-c'], + checksum: 'sha256:wrong', + lastVerified: '2020-01-01', + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + healer.heal(healthResult.issues, { autoOnly: true }); + + const allEntries = healer.queryHealingLog(); + expect(allEntries.length).toBeGreaterThan(1); + + const limited = healer.queryHealingLog({ limit: 1 }); + expect(limited).toHaveLength(1); + }); + }); + + // ─── Backup Pruning ──────────────────────────────────────────── + + describe('backup pruning', () => { + it('retains at most MAX_BACKUPS backup files', () => { + const content = '# Prune Test'; + createTestFile('tasks/prune-test.md', content); + + // Create more than MAX_BACKUPS healing runs + for (let i = 0; i < MAX_BACKUPS + 3; i++) { + writeTestRegistry({ + tasks: { + 'prune-test': { + path: 'tasks/prune-test.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: `sha256:wrong${i}`, + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const healthResult = healer.runHealthCheck(); + healer.heal(healthResult.issues, { autoOnly: true }); + } + + // Count backup files + const backupFiles = fs.readdirSync(TEST_BACKUP_DIR) + .filter((f) => f.endsWith('.yaml')); + expect(backupFiles.length).toBeLessThanOrEqual(MAX_BACKUPS); + }); + }); + + // ─── Auto-Healable Rate Verification ─────────────────────────── + + describe('80%+ auto-healable rate (AC: 10)', () => { + it('achieves at least 80% auto-healable rate across all rule types', () => { + // Create a registry with ALL 6 issue types present + const content = '# Rate Test'; + createTestFile('tasks/rate-test.md', content); + const tenDaysAgo = new Date(Date.now() - 10 * 24 * 60 * 60 * 1000).toISOString(); + + writeTestRegistry({ + tasks: { + // missing-file (critical, NOT auto-healable) = 1 + 'missing-entity': { + path: 'tasks/nonexistent.md', + type: 'task', + keywords: ['missing'], + usedBy: [], + 
dependencies: [], + checksum: 'sha256:abc', + lastVerified: new Date().toISOString(), + }, + // checksum-mismatch (high, auto-healable) = 1 + 'bad-checksum': { + path: 'tasks/rate-test.md', + type: 'task', + keywords: ['rate'], + usedBy: ['phantom-ref'], // orphaned-usedBy (medium, auto-healable) = 1 + dependencies: ['phantom-dep'], // orphaned-dependency (medium, auto-healable) = 1 + checksum: 'sha256:wrong_checksum_here', + lastVerified: tenDaysAgo, // stale-verification (low, auto-healable) = 1 + }, + // missing-keywords (low, auto-healable) = 1 + 'no-kw-entity': { + path: 'tasks/rate-test.md', + type: 'task', + keywords: [], + usedBy: [], + dependencies: [], + checksum: checksumFor(content), + lastVerified: new Date().toISOString(), + }, + }, + }); + + const healer = createTestHealer(); + const result = healer.runHealthCheck(); + + // Expect issues from all categories + expect(result.summary.total).toBeGreaterThanOrEqual(6); + // Auto-healable rate should be >= 80% (5 out of 6 rule types are auto-healable) + expect(result.summary.autoHealableRate).toBeGreaterThanOrEqual(80); + }); + }); + + // ─── SEVERITY_ORDER ──────────────────────────────────────────── + + describe('SEVERITY_ORDER', () => { + it('has correct ordering (critical=0, low=3)', () => { + expect(SEVERITY_ORDER.critical).toBe(0); + expect(SEVERITY_ORDER.high).toBe(1); + expect(SEVERITY_ORDER.medium).toBe(2); + expect(SEVERITY_ORDER.low).toBe(3); + }); + }); + + // ─── Constructor ─────────────────────────────────────────────── + + describe('constructor', () => { + it('accepts custom options', () => { + const healer = new RegistryHealer({ + registryPath: '/custom/path.yaml', + repoRoot: '/custom/root', + healingLogPath: '/custom/log.jsonl', + backupDir: '/custom/backups', + }); + + expect(healer._registryPath).toBe('/custom/path.yaml'); + expect(healer._repoRoot).toBe('/custom/root'); + expect(healer._healingLogPath).toBe('/custom/log.jsonl'); + expect(healer._backupDir).toBe('/custom/backups'); + 
}); + }); +}); + +``` + +================================================== +📄 tests/core/ids/framework-governor.test.js +================================================== +```js +'use strict'; + +const path = require('path'); +const { RegistryLoader } = require('../../../.aios-core/core/ids/registry-loader'); +const { IncrementalDecisionEngine } = require('../../../.aios-core/core/ids/incremental-decision-engine'); +const { FrameworkGovernor, TIMEOUT_MS, RISK_THRESHOLDS } = require('../../../.aios-core/core/ids/framework-governor'); + +const FIXTURES = path.resolve(__dirname, 'fixtures'); +const VALID_REGISTRY = path.join(FIXTURES, 'valid-registry.yaml'); +const EMPTY_REGISTRY = path.join(FIXTURES, 'empty-registry.yaml'); + +// ─── Mock RegistryUpdater ──────────────────────────────────────────────────── + +class MockRegistryUpdater { + constructor() { + this.onAgentTaskCompleteCalls = []; + } + + async onAgentTaskComplete(task, artifacts) { + this.onAgentTaskCompleteCalls.push({ task, artifacts }); + return { updated: artifacts.length, errors: [] }; + } +} + +class MockRegistryUpdaterFailing { + async onAgentTaskComplete() { + throw new Error('Lock contention — registry busy'); + } +} + +// ─── Mock RegistryHealer ───────────────────────────────────────────────────── + +class MockRegistryHealer { + async runHealthCheck() { + return { + status: 'healthy', + issues: [], + summary: { total: 0, bySeverity: { critical: 0, high: 0, medium: 0, low: 0 } }, + }; + } +} + +// ─── Tests ─────────────────────────────────────────────────────────────────── + +describe('FrameworkGovernor', () => { + let loader; + let engine; + let updater; + let governor; + + beforeEach(() => { + loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + engine = new IncrementalDecisionEngine(loader); + updater = new MockRegistryUpdater(); + governor = new FrameworkGovernor(loader, engine, updater); + }); + + // ─── Constructor ───────────────────────────────────────────────────────── + + 
describe('constructor', () => { + it('should create instance with required dependencies', () => { + expect(governor).toBeInstanceOf(FrameworkGovernor); + }); + + it('should throw if registryLoader is missing', () => { + expect(() => new FrameworkGovernor(null, engine, updater)).toThrow( + '[IDS-Governor] RegistryLoader instance is required', + ); + }); + + it('should throw if decisionEngine is missing', () => { + expect(() => new FrameworkGovernor(loader, null, updater)).toThrow( + '[IDS-Governor] IncrementalDecisionEngine instance is required', + ); + }); + + it('should throw if registryUpdater is missing', () => { + expect(() => new FrameworkGovernor(loader, engine, null)).toThrow( + '[IDS-Governor] RegistryUpdater instance is required', + ); + }); + + it('should accept optional registryHealer', () => { + const healer = new MockRegistryHealer(); + const gov = new FrameworkGovernor(loader, engine, updater, healer); + expect(gov).toBeInstanceOf(FrameworkGovernor); + }); + + it('should handle null healer gracefully', () => { + const gov = new FrameworkGovernor(loader, engine, updater, null); + expect(gov).toBeInstanceOf(FrameworkGovernor); + }); + }); + + // ─── preCheck ──────────────────────────────────────────────────────────── + + describe('preCheck()', () => { + it('should return REUSE recommendation for high relevance match', async () => { + const result = await governor.preCheck('validate story drafts', 'task'); + expect(result).toBeDefined(); + expect(result.intent).toBe('validate story drafts'); + expect(result.entityType).toBe('task'); + expect(result.advisory).toBe(true); + expect(result.shouldProceed).toBe(true); + expect(['REUSE', 'ADAPT', 'CREATE']).toContain(result.topDecision); + }); + + it('should return CREATE recommendation for no matches', async () => { + const result = await governor.preCheck('quantum flux capacitor integration', 'task'); + expect(result.topDecision).toBe('CREATE'); + expect(result.matchesFound).toBe(0); + 
expect(result.shouldProceed).toBe(true); + }); + + it('should return ADAPT recommendation for moderate relevance', async () => { + const result = await governor.preCheck('create documentation files', 'task'); + expect(result).toBeDefined(); + expect(result.advisory).toBe(true); + // May return REUSE or ADAPT depending on relevance scores + if (result.matchesFound > 0) { + expect(result.recommendations.length).toBeGreaterThan(0); + } + }); + + it('should handle empty intent gracefully', async () => { + const result = await governor.preCheck('', 'task'); + expect(result.topDecision).toBe('CREATE'); + expect(result.matchesFound).toBe(0); + }); + + it('should work without entityType filter', async () => { + const result = await governor.preCheck('validate story'); + expect(result.entityType).toBe('any'); + expect(result.advisory).toBe(true); + }); + + it('should include alternatives in result', async () => { + const result = await governor.preCheck('create documentation', 'task'); + expect(Array.isArray(result.alternatives)).toBe(true); + if (result.alternatives.length > 0) { + expect(result.alternatives[0]).toHaveProperty('entityId'); + expect(result.alternatives[0]).toHaveProperty('decision'); + expect(result.alternatives[0]).toHaveProperty('relevance'); + } + }); + + it('should limit recommendations to 5', async () => { + const result = await governor.preCheck('agent story task template', 'any'); + expect(result.recommendations.length).toBeLessThanOrEqual(5); + }); + + it('should always set shouldProceed to true (advisory mode)', async () => { + const result = await governor.preCheck('validate story', 'task'); + expect(result.shouldProceed).toBe(true); + }); + }); + + // ─── preCheck input validation ────────────────────────────────────────── + + describe('preCheck() input validation', () => { + it('should throw on null intent', async () => { + await expect(governor.preCheck(null)).rejects.toThrow('[IDS-Governor] preCheck requires a string intent parameter'); + }); 
+ + it('should throw on undefined intent', async () => { + await expect(governor.preCheck(undefined)).rejects.toThrow('[IDS-Governor] preCheck requires a string intent parameter'); + }); + + it('should throw on numeric intent', async () => { + await expect(governor.preCheck(123)).rejects.toThrow('[IDS-Governor] preCheck requires a string intent parameter'); + }); + }); + + // ─── preCheck with empty registry ──────────────────────────────────────── + + describe('preCheck() with empty registry', () => { + it('should return CREATE gracefully for empty registry', async () => { + const emptyLoader = new RegistryLoader(EMPTY_REGISTRY); + emptyLoader.load(); + const emptyEngine = new IncrementalDecisionEngine(emptyLoader); + const emptyGov = new FrameworkGovernor(emptyLoader, emptyEngine, updater); + + const result = await emptyGov.preCheck('validate yaml', 'task'); + expect(result.topDecision).toBe('CREATE'); + expect(result.matchesFound).toBe(0); + }); + }); + + // ─── impactAnalysis input validation ──────────────────────────────────── + + describe('impactAnalysis() input validation', () => { + it('should throw on null entityId', async () => { + await expect(governor.impactAnalysis(null)).rejects.toThrow('[IDS-Governor] impactAnalysis requires a non-empty entityId string'); + }); + + it('should throw on empty entityId', async () => { + await expect(governor.impactAnalysis('')).rejects.toThrow('[IDS-Governor] impactAnalysis requires a non-empty entityId string'); + }); + }); + + // ─── impactAnalysis ────────────────────────────────────────────────────── + + describe('impactAnalysis()', () => { + it('should return impact for entity with consumers', async () => { + // "create-doc" is used by "po" and "sm" in the fixture + const result = await governor.impactAnalysis('create-doc'); + expect(result.found).toBe(true); + expect(result.entityId).toBe('create-doc'); + expect(result.directConsumers).toContain('po'); + expect(result.directConsumers).toContain('sm'); + 
expect(result.totalAffected).toBeGreaterThan(0); + expect(['NONE', 'LOW', 'MEDIUM', 'HIGH', 'CRITICAL']).toContain(result.riskLevel); + }); + + it('should return no consumers for entity with empty usedBy', async () => { + // "po" has usedBy: [] in fixture + const result = await governor.impactAnalysis('po'); + expect(result.found).toBe(true); + expect(result.directConsumers).toEqual([]); + expect(result.totalAffected).toBe(0); + expect(result.riskLevel).toBe('NONE'); + }); + + it('should return found:false for unknown entity', async () => { + const result = await governor.impactAnalysis('non-existent-entity'); + expect(result.found).toBe(false); + expect(result.riskLevel).toBe('NONE'); + expect(result.message).toContain('not found'); + }); + + it('should include adaptability score when available', async () => { + const result = await governor.impactAnalysis('create-doc'); + expect(result.adaptabilityScore).toBe(0.8); + }); + + it('should include threshold warning for low adaptability', async () => { + // "po" agent has adaptability score 0.3 in fixture + const result = await governor.impactAnalysis('po'); + // 0.3 is at the threshold boundary, not below + expect(result.adaptabilityScore).toBe(0.3); + }); + + it('should traverse indirect consumers via BFS', async () => { + // create-doc is usedBy [po, sm] + // validate-story depends on create-doc (but usedBy [po]) + const result = await governor.impactAnalysis('template-engine'); + expect(result.found).toBe(true); + // template-engine is usedBy: ["create-doc"] + expect(result.directConsumers).toContain('create-doc'); + }); + + it('should include dependencies list', async () => { + const result = await governor.impactAnalysis('create-doc'); + expect(Array.isArray(result.dependencies)).toBe(true); + }); + }); + + // ─── postRegister input validation ────────────────────────────────────── + + describe('postRegister() input validation', () => { + it('should throw on null filePath', async () => { + await 
expect(governor.postRegister(null)).rejects.toThrow('[IDS-Governor] postRegister requires a non-empty filePath string'); + }); + + it('should throw on empty filePath', async () => { + await expect(governor.postRegister('')).rejects.toThrow('[IDS-Governor] postRegister requires a non-empty filePath string'); + }); + }); + + // ─── postRegister ──────────────────────────────────────────────────────── + + describe('postRegister()', () => { + it('should register file via RegistryUpdater.onAgentTaskComplete', async () => { + const result = await governor.postRegister( + '.aios-core/development/tasks/test-task.md', + { type: 'task', purpose: 'Test task', agent: 'aios-master' }, + ); + expect(result.registered).toBeDefined(); + expect(result.filePath).toBe('.aios-core/development/tasks/test-task.md'); + expect(updater.onAgentTaskCompleteCalls.length).toBe(1); + + const call = updater.onAgentTaskCompleteCalls[0]; + expect(call.task.agent).toBe('aios-master'); + expect(call.artifacts).toContain('.aios-core/development/tasks/test-task.md'); + }); + + it('should use onAgentTaskComplete (not processChanges) per SF-1', async () => { + await governor.postRegister('.aios-core/development/tasks/new-task.md', {}); + // Verify onAgentTaskComplete was called (our mock tracks this) + expect(updater.onAgentTaskCompleteCalls.length).toBe(1); + }); + + it('should include metadata in result', async () => { + const result = await governor.postRegister('test.md', { + type: 'task', + purpose: 'testing', + keywords: ['test'], + }); + expect(result.metadata.type).toBe('task'); + expect(result.metadata.purpose).toBe('testing'); + expect(result.metadata.keywords).toContain('test'); + }); + + it('should handle updater failure gracefully', async () => { + const failingUpdater = new MockRegistryUpdaterFailing(); + const failGov = new FrameworkGovernor(loader, engine, failingUpdater); + const result = await failGov.postRegister('test.md', {}); + // Should degrade gracefully + 
expect(result.registered).toBe(false); + expect(result.error).toBeDefined(); + }); + + it('should default agent to aios-master', async () => { + await governor.postRegister('file.md', {}); + const call = updater.onAgentTaskCompleteCalls[0]; + expect(call.task.agent).toBe('aios-master'); + }); + }); + + // ─── healthCheck ───────────────────────────────────────────────────────── + + describe('healthCheck()', () => { + it('should return degraded status when healer is null', async () => { + const result = await governor.healthCheck(); + expect(result.available).toBe(false); + expect(result.healerStatus).toBe('not-configured'); + expect(result.message).toContain('RegistryHealer not available'); + expect(result.basicStats.entityCount).toBeGreaterThan(0); + }); + + it('should use RegistryHealer when available', async () => { + const healer = new MockRegistryHealer(); + const govWithHealer = new FrameworkGovernor(loader, engine, updater, healer); + const result = await govWithHealer.healthCheck(); + expect(result.available).toBe(true); + expect(result.healerStatus).toBe('active'); + expect(result.status).toBe('healthy'); + }); + + it('should include entity count in basic stats', async () => { + const result = await governor.healthCheck(); + expect(result.basicStats.entityCount).toBe(5); + expect(result.basicStats.registryLoaded).toBe(true); + }); + }); + + // ─── getStats ──────────────────────────────────────────────────────────── + + describe('getStats()', () => { + it('should return correct entity count', async () => { + const result = await governor.getStats(); + expect(result.totalEntities).toBe(5); + }); + + it('should return counts by type', async () => { + const result = await governor.getStats(); + expect(result.byType).toBeDefined(); + expect(result.byType.task).toBe(2); + expect(result.byType.agent).toBe(2); + expect(result.byType.script).toBe(1); + }); + + it('should return counts by category', async () => { + const result = await governor.getStats(); + 
expect(result.byCategory).toBeDefined(); + expect(result.byCategory.tasks).toBe(2); + expect(result.byCategory.agents).toBe(2); + expect(result.byCategory.scripts).toBe(1); + }); + + it('should include health score', async () => { + const result = await governor.getStats(); + expect(typeof result.healthScore).toBe('number'); + expect(result.healthScore).toBeGreaterThanOrEqual(0); + expect(result.healthScore).toBeLessThanOrEqual(100); + }); + + it('should include last updated from metadata', async () => { + const result = await governor.getStats(); + expect(result.lastUpdated).toBe('2026-02-08T00:00:00Z'); + }); + + it('should include registry version', async () => { + const result = await governor.getStats(); + expect(result.registryVersion).toBe('1.0.0'); + }); + + it('should report healer availability', async () => { + const result = await governor.getStats(); + expect(result.healerAvailable).toBe(false); + + const govWithHealer = new FrameworkGovernor(loader, engine, updater, new MockRegistryHealer()); + const resultWithHealer = await govWithHealer.getStats(); + expect(resultWithHealer.healerAvailable).toBe(true); + }); + + it('should include categories list', async () => { + const result = await governor.getStats(); + expect(Array.isArray(result.categories)).toBe(true); + expect(result.categories.length).toBeGreaterThan(0); + }); + }); + + // ─── getStats with empty registry ──────────────────────────────────────── + + describe('getStats() with empty registry', () => { + it('should return zero counts for empty registry', async () => { + const emptyLoader = new RegistryLoader(EMPTY_REGISTRY); + emptyLoader.load(); + const emptyEngine = new IncrementalDecisionEngine(emptyLoader); + const emptyGov = new FrameworkGovernor(emptyLoader, emptyEngine, updater); + + const result = await emptyGov.getStats(); + expect(result.totalEntities).toBe(0); + expect(result.healthScore).toBe(0); + }); + }); + + // ─── Graceful Degradation 
──────────────────────────────────────────────── + + describe('graceful degradation', () => { + it('should return fallback on timeout', async () => { + jest.useFakeTimers(); + // Test _withTimeout directly with a truly async function that never resolves + const neverResolve = async () => new Promise(() => {}); + const fallback = { shouldProceed: true, topDecision: 'CREATE' }; + const resultPromise = governor._withTimeout(neverResolve, fallback); + // Advance timers past the TIMEOUT_MS threshold + jest.advanceTimersByTime(TIMEOUT_MS + 100); + const result = await resultPromise; + // Should get fallback due to timeout + expect(result).toBeDefined(); + expect(result.shouldProceed).toBe(true); + expect(result.error).toContain('timed out'); + jest.useRealTimers(); + }); + + it('should return fallback on engine error', async () => { + const errorEngine = { + analyze: () => { + throw new Error('Engine crashed'); + }, + }; + const errorGov = new FrameworkGovernor(loader, errorEngine, updater); + const result = await errorGov.preCheck('test intent'); + expect(result.topDecision).toBe('CREATE'); + expect(result.error).toContain('Engine crashed'); + }); + + it('should return fallback on impactAnalysis error', async () => { + // Force error by using a loader that throws on _findById + const brokenLoader = { + _ensureLoaded: () => { throw new Error('Loader broken'); }, + _findById: () => null, + getEntityCount: () => 0, + }; + const brokenGov = new FrameworkGovernor(brokenLoader, engine, updater); + const result = await brokenGov.impactAnalysis('test-entity'); + expect(result.found).toBe(false); + expect(result.riskLevel).toBe('UNKNOWN'); + expect(result.error).toBeDefined(); + }); + }); + + // ─── Risk Level Calculation ────────────────────────────────────────────── + + describe('_calculateRiskLevel()', () => { + it('should return NONE for 0%', () => { + expect(governor._calculateRiskLevel(0)).toBe('NONE'); + }); + + it('should return LOW for small percentage', () => { + 
expect(governor._calculateRiskLevel(0.05)).toBe('LOW'); + }); + + it('should return MEDIUM for moderate percentage', () => { + expect(governor._calculateRiskLevel(0.2)).toBe('MEDIUM'); + }); + + it('should return HIGH for large percentage', () => { + expect(governor._calculateRiskLevel(0.4)).toBe('HIGH'); + }); + + it('should return CRITICAL for very large percentage', () => { + expect(governor._calculateRiskLevel(0.6)).toBe('CRITICAL'); + }); + }); + + // ─── Static Formatters ─────────────────────────────────────────────────── + + describe('static formatters', () => { + describe('formatPreCheckOutput()', () => { + it('should format preCheck result as string', () => { + const result = { + intent: 'validate yaml', + entityType: 'task', + topDecision: 'CREATE', + matchesFound: 0, + recommendations: [], + }; + const output = FrameworkGovernor.formatPreCheckOutput(result); + expect(typeof output).toBe('string'); + expect(output).toContain('IDS Registry Check (Advisory)'); + expect(output).toContain('validate yaml'); + expect(output).toContain('No matches found'); + }); + + it('should include recommendations when matches found', () => { + const result = { + intent: 'create docs', + entityType: 'task', + topDecision: 'ADAPT', + matchesFound: 1, + recommendations: [{ + entityId: 'create-doc', + decision: 'ADAPT', + relevanceScore: 0.75, + entityPath: '.aios-core/development/tasks/create-doc.md', + }], + }; + const output = FrameworkGovernor.formatPreCheckOutput(result); + expect(output).toContain('create-doc'); + expect(output).toContain('75.0%'); + expect(output).toContain('ADAPT'); + }); + }); + + describe('formatImpactOutput()', () => { + it('should format impact result as string', () => { + const result = { + entityId: 'create-doc', + found: true, + entityPath: '.aios-core/development/tasks/create-doc.md', + entityType: 'task', + riskLevel: 'LOW', + directConsumers: ['po', 'sm'], + indirectConsumers: [], + totalAffected: 2, + adaptabilityScore: 0.8, + 
thresholdWarning: null, + }; + const output = FrameworkGovernor.formatImpactOutput(result); + expect(typeof output).toBe('string'); + expect(output).toContain('IDS Impact Analysis'); + expect(output).toContain('create-doc'); + expect(output).toContain('po'); + expect(output).toContain('LOW'); + }); + + it('should handle not-found entity', () => { + const result = { + entityId: 'missing', + found: false, + }; + const output = FrameworkGovernor.formatImpactOutput(result); + expect(output).toContain('Not found'); + }); + + it('should show safe-to-modify for zero consumers', () => { + const result = { + entityId: 'po', + found: true, + entityPath: '.aios-core/development/agents/po.md', + entityType: 'agent', + riskLevel: 'NONE', + directConsumers: [], + indirectConsumers: [], + totalAffected: 0, + adaptabilityScore: 0.3, + thresholdWarning: null, + }; + const output = FrameworkGovernor.formatImpactOutput(result); + expect(output).toContain('safe to modify'); + }); + }); + + describe('formatStatsOutput()', () => { + it('should format stats result as string', () => { + const result = { + totalEntities: 5, + byType: { task: 2, agent: 2, script: 1 }, + byCategory: { tasks: 2, agents: 2, scripts: 1 }, + lastUpdated: '2026-02-08', + registryVersion: '1.0.0', + healthScore: 100, + healerAvailable: false, + }; + const output = FrameworkGovernor.formatStatsOutput(result); + expect(typeof output).toBe('string'); + expect(output).toContain('IDS Registry Statistics'); + expect(output).toContain('Total Entities: 5'); + expect(output).toContain('Health Score: 100%'); + expect(output).toContain('task: 2'); + }); + }); + }); + + // ─── Constants Export ──────────────────────────────────────────────────── + + describe('exported constants', () => { + it('should export TIMEOUT_MS', () => { + expect(TIMEOUT_MS).toBe(2000); + }); + + it('should export RISK_THRESHOLDS', () => { + expect(RISK_THRESHOLDS).toBeDefined(); + expect(RISK_THRESHOLDS.LOW).toBe(0.1); + 
expect(RISK_THRESHOLDS.MEDIUM).toBe(0.3);
+      expect(RISK_THRESHOLDS.HIGH).toBe(0.5);
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/core/ids/populate-entity-registry.test.js
+==================================================
+```js
+'use strict';
+
+const path = require('path');
+const {
+  extractEntityId,
+  extractKeywords,
+  extractPurpose,
+  detectDependencies,
+  computeChecksum,
+  scanCategory,
+  resolveUsedBy,
+} = require('../../../.aios-core/development/scripts/populate-entity-registry');
+
+const FIXTURES = path.resolve(__dirname, 'fixtures');
+
+describe('populate-entity-registry (AC: 3, 4, 12)', () => {
+  describe('extractEntityId()', () => {
+    it('extracts base name without extension', () => {
+      expect(extractEntityId('/foo/bar/my-task.md')).toBe('my-task');
+      expect(extractEntityId('/foo/bar/script.js')).toBe('script');
+      expect(extractEntityId('template.yaml')).toBe('template');
+    });
+  });
+
+  describe('extractKeywords()', () => {
+    it('extracts keywords from filename', () => {
+      const kws = extractKeywords('create-doc-template.md', '');
+      expect(kws).toContain('create');
+      expect(kws).toContain('doc');
+      expect(kws).toContain('template');
+    });
+
+    it('extracts keywords from markdown header', () => {
+      const content = '# Validate Story Draft\nSome content here';
+      const kws = extractKeywords('validate.md', content);
+      expect(kws).toContain('validate');
+      expect(kws).toContain('story');
+      expect(kws).toContain('draft');
+    });
+
+    it('deduplicates keywords', () => {
+      const content = '# Validate Validate Stuff';
+      const kws = extractKeywords('validate.md', content);
+      const validateCount = kws.filter((k) => k === 'validate').length;
+      expect(validateCount).toBe(1);
+    });
+
+    it('filters out short and stop words', () => {
+      const content = '# The And For Story';
+      const kws = extractKeywords('a.md', content);
+      expect(kws).not.toContain('the');
+      expect(kws).not.toContain('and');
+      
expect(kws).not.toContain('for'); + }); + }); + + describe('extractPurpose()', () => { + it('extracts from ## Purpose section', () => { + const content = '# Title\n\n## Purpose\n\nThis is the purpose line.\n\nMore details.\n\n## Other'; + const purpose = extractPurpose(content, '/test.md'); + expect(purpose).toBe('This is the purpose line.'); + }); + + it('extracts from description field', () => { + const content = 'description: My awesome description here'; + const purpose = extractPurpose(content, '/test.md'); + expect(purpose).toBe('My awesome description here'); + }); + + it('falls back to header', () => { + const content = '# My Module Title\n\nSome content.'; + const purpose = extractPurpose(content, '/test.md'); + expect(purpose).toBe('My Module Title'); + }); + + it('falls back to file path', () => { + const purpose = extractPurpose('', '/some/path/test.md'); + expect(purpose).toContain('test.md'); + }); + + it('truncates long purposes to 200 chars', () => { + const longPurpose = '## Purpose\n\n' + 'x'.repeat(300); + const purpose = extractPurpose(longPurpose, '/test.md'); + expect(purpose.length).toBeLessThanOrEqual(200); + }); + }); + + describe('detectDependencies()', () => { + it('detects require() dependencies', () => { + const content = "const foo = require('./foo-module');\nconst bar = require('../bar');"; + const deps = detectDependencies(content, 'main'); + expect(deps).toContain('foo-module'); + expect(deps).toContain('bar'); + }); + + it('detects import dependencies', () => { + const content = "import { something } from './my-util';\nimport other from '../other-lib';"; + const deps = detectDependencies(content, 'main'); + expect(deps).toContain('my-util'); + expect(deps).toContain('other-lib'); + }); + + it('ignores npm packages (non-relative)', () => { + const content = "const yaml = require('js-yaml');\nimport path from 'path';"; + const deps = detectDependencies(content, 'main'); + expect(deps).not.toContain('js-yaml'); + 
expect(deps).not.toContain('path'); + }); + + it('excludes self-references', () => { + const content = "const self = require('./mymodule');"; + const deps = detectDependencies(content, 'mymodule'); + expect(deps).not.toContain('mymodule'); + }); + + it('detects YAML dependency lists', () => { + const content = 'dependencies:\n - task-a.md\n - task-b.md\n'; + const deps = detectDependencies(content, 'main'); + expect(deps).toContain('task-a'); + expect(deps).toContain('task-b'); + }); + }); + + describe('computeChecksum()', () => { + it('returns sha256 prefixed hash', () => { + const testFile = path.join(FIXTURES, 'valid-registry.yaml'); + const checksum = computeChecksum(testFile); + expect(checksum).toMatch(/^sha256:[a-f0-9]{64}$/); + }); + + it('returns consistent results for same file', () => { + const testFile = path.join(FIXTURES, 'valid-registry.yaml'); + const first = computeChecksum(testFile); + const second = computeChecksum(testFile); + expect(first).toBe(second); + }); + }); + + describe('resolveUsedBy()', () => { + it('populates usedBy based on dependencies', () => { + const entities = { + tasks: { + 'task-a': { + path: 'a.md', + type: 'task', + dependencies: ['util-x'], + usedBy: [], + }, + }, + scripts: { + 'util-x': { + path: 'x.js', + type: 'script', + dependencies: [], + usedBy: [], + }, + }, + }; + + resolveUsedBy(entities); + + expect(entities.scripts['util-x'].usedBy).toContain('task-a'); + }); + + it('does not duplicate usedBy entries', () => { + const entities = { + tasks: { + 'task-a': { dependencies: ['util-x'], usedBy: [] }, + 'task-b': { dependencies: ['util-x'], usedBy: [] }, + }, + scripts: { + 'util-x': { dependencies: [], usedBy: [] }, + }, + }; + + resolveUsedBy(entities); + resolveUsedBy(entities); + + const usedBy = entities.scripts['util-x'].usedBy; + expect(usedBy.filter((x) => x === 'task-a').length).toBe(1); + }); + }); + + describe('scanCategory()', () => { + it('returns empty object for non-existent directory', () => { + const 
result = scanCategory({
+        category: 'test',
+        basePath: 'nonexistent/directory/path',
+        glob: '**/*.md',
+        type: 'task',
+      });
+      expect(result).toEqual({});
+    });
+  });
+
+  describe('duplicate detection (AC: 12)', () => {
+    it('scanCategory skips duplicate entity IDs with warning', () => {
+      const warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {});
+
+      const result = scanCategory({
+        category: 'fixtures',
+        basePath: path.relative(
+          path.resolve(__dirname, '../../..'),
+          FIXTURES,
+        ),
+        glob: '**/*.yaml',
+        type: 'data',
+      });
+
+      // A later file whose ID is already registered must be skipped, never
+      // returned, so every returned ID is unique.
+      const ids = Object.keys(result);
+      const uniqueIds = new Set(ids);
+      expect(ids.length).toBe(uniqueIds.size);
+
+      // Each skipped duplicate is reported via console.warn; with a fixture
+      // set containing no duplicate IDs, no such warning is expected.
+      const dupWarnings = warnSpy.mock.calls.filter(
+        (call) => typeof call[0] === 'string' && call[0].includes('Duplicate entity ID'),
+      );
+      for (const call of dupWarnings) {
+        expect(call[0]).toContain('Duplicate entity ID');
+      }
+
+      warnSpy.mockRestore();
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/core/ids/incremental-decision-engine.test.js
+==================================================
+```js
+'use strict';
+
+const path = require('path');
+const { RegistryLoader } = require('../../../.aios-core/core/ids/registry-loader');
+const {
+  IncrementalDecisionEngine,
+  STOP_WORDS,
+  THRESHOLD_MINIMUM,
+  ADAPT_IMPACT_THRESHOLD,
+  KEYWORD_OVERLAP_WEIGHT,
+  PURPOSE_SIMILARITY_WEIGHT,
+  MAX_RESULTS,
+  CACHE_TTL_MS,
+} = require('../../../.aios-core/core/ids/incremental-decision-engine');
+
+const FIXTURES = path.resolve(__dirname, 'fixtures');
+const VALID_REGISTRY = path.join(FIXTURES, 'valid-registry.yaml');
+const EMPTY_REGISTRY = path.join(FIXTURES, 'empty-registry.yaml');
+
+describe('IncrementalDecisionEngine', () => {
+  let loader;
+  let engine;
+
+  beforeEach(() => {
+    loader = new RegistryLoader(VALID_REGISTRY);
+    loader.load();
+    
engine = new IncrementalDecisionEngine(loader); + }); + + // ============================================================== + // Task 1: Constructor & Main API + // ============================================================== + + describe('constructor', () => { + it('requires a RegistryLoader instance', () => { + expect(() => new IncrementalDecisionEngine(null)).toThrow( + /requires a RegistryLoader instance/, + ); + expect(() => new IncrementalDecisionEngine()).toThrow( + /requires a RegistryLoader instance/, + ); + }); + + it('creates engine with valid loader', () => { + const e = new IncrementalDecisionEngine(loader); + expect(e).toBeDefined(); + }); + }); + + describe('analyze()', () => { + it('returns result structure with recommendations and summary', () => { + const result = engine.analyze('create documentation from templates'); + + expect(result).toHaveProperty('intent'); + expect(result).toHaveProperty('recommendations'); + expect(result).toHaveProperty('summary'); + expect(result).toHaveProperty('rationale'); + expect(result.summary).toHaveProperty('totalEntities'); + expect(result.summary).toHaveProperty('matchesFound'); + expect(result.summary).toHaveProperty('decision'); + expect(result.summary).toHaveProperty('confidence'); + }); + + it('returns recommendations sorted by relevance score', () => { + const result = engine.analyze('documentation template creation'); + + if (result.recommendations.length > 1) { + for (let i = 0; i < result.recommendations.length - 1; i++) { + expect(result.recommendations[i].relevanceScore).toBeGreaterThanOrEqual( + result.recommendations[i + 1].relevanceScore, + ); + } + } + }); + + it('recommendations include required fields', () => { + const result = engine.analyze('validate story quality checklist'); + + if (result.recommendations.length > 0) { + const rec = result.recommendations[0]; + expect(rec).toHaveProperty('entityId'); + expect(rec).toHaveProperty('entityPath'); + expect(rec).toHaveProperty('entityType'); + 
expect(rec).toHaveProperty('relevanceScore'); + expect(rec).toHaveProperty('keywordScore'); + expect(rec).toHaveProperty('purposeScore'); + expect(rec).toHaveProperty('decision'); + expect(rec).toHaveProperty('confidence'); + expect(rec).toHaveProperty('rationale'); + } + }); + + it('supports context filtering by type', () => { + const result = engine.analyze('documentation', { type: 'task' }); + + for (const rec of result.recommendations) { + expect(rec.entityType).toBe('task'); + } + }); + + it('supports context filtering by category', () => { + const result = engine.analyze('agent persona', { category: 'agents' }); + + expect(result).toBeDefined(); + for (const rec of result.recommendations) { + expect(rec.entityId).toBeDefined(); + // All results should come from the agents category in the registry + const entity = loader._findById(rec.entityId); + expect(entity.category).toBe('agents'); + } + }); + }); + + // ============================================================== + // Edge cases: empty/invalid input + // ============================================================== + + describe('edge cases — invalid input', () => { + it('handles null intent', () => { + const result = engine.analyze(null); + expect(result.recommendations).toEqual([]); + expect(result.summary.decision).toBe('CREATE'); + expect(result.warnings).toContain('Empty or invalid intent provided'); + }); + + it('handles empty string intent', () => { + const result = engine.analyze(''); + expect(result.recommendations).toEqual([]); + expect(result.summary.decision).toBe('CREATE'); + }); + + it('handles whitespace-only intent', () => { + const result = engine.analyze(' '); + expect(result.recommendations).toEqual([]); + expect(result.summary.decision).toBe('CREATE'); + }); + + it('handles non-string intent', () => { + const result = engine.analyze(123); + expect(result.recommendations).toEqual([]); + expect(result.warnings).toBeDefined(); + }); + }); + + // 
============================================================== + // Edge cases: empty/sparse registry (AC: 5 edge cases) + // ============================================================== + + describe('edge cases — empty registry', () => { + it('returns CREATE with empty registry rationale when 0 entities', () => { + const emptyLoader = new RegistryLoader(EMPTY_REGISTRY); + emptyLoader.load(); + const emptyEngine = new IncrementalDecisionEngine(emptyLoader); + + const result = emptyEngine.analyze('create a new task'); + + expect(result.summary.decision).toBe('CREATE'); + expect(result.summary.confidence).toBe('low'); + expect(result.summary.totalEntities).toBe(0); + expect(result.rationale).toContain('empty'); + expect(result.warnings).toContain( + 'Registry is empty — no existing artifacts to evaluate', + ); + }); + + it('returns CREATE justification for empty registry', () => { + const emptyLoader = new RegistryLoader(EMPTY_REGISTRY); + emptyLoader.load(); + const emptyEngine = new IncrementalDecisionEngine(emptyLoader); + + const result = emptyEngine.analyze('create a new task'); + + expect(result.justification).toBeDefined(); + expect(result.justification.evaluated_patterns).toEqual([]); + expect(result.justification.new_capability).toBe('create a new task'); + expect(result.justification.review_scheduled).toMatch(/^\d{4}-\d{2}-\d{2}$/); + }); + }); + + describe('edge cases — sparse registry (<10 entities)', () => { + it('adds sparse registry warning when <10 entities', () => { + // valid-registry.yaml has 5 entities + const result = engine.analyze('some query'); + + expect(result.warnings).toBeDefined(); + expect(result.warnings).toContain('Registry sparse — results may be incomplete'); + }); + }); + + describe('edge cases — no matches above threshold', () => { + it('returns CREATE when no matches exceed minimum threshold', () => { + const result = engine.analyze('quantum computing blockchain hypervisor'); + + expect(result.summary.decision).toBe('CREATE'); + 
expect(result.recommendations.length).toBe(0); + }); + }); + + // ============================================================== + // Task 2: Semantic Matching + // ============================================================== + + describe('_extractKeywords()', () => { + it('extracts meaningful keywords from text', () => { + const keywords = engine._extractKeywords('create documentation from templates'); + + expect(keywords).toContain('create'); + expect(keywords).toContain('documentation'); + expect(keywords).toContain('templates'); + // Stop words filtered + expect(keywords).not.toContain('from'); + }); + + it('filters stop words', () => { + const keywords = engine._extractKeywords('the quick brown fox is a fast runner'); + + for (const kw of keywords) { + expect(STOP_WORDS.has(kw)).toBe(false); + } + }); + + it('filters short words (< 3 chars)', () => { + const keywords = engine._extractKeywords('go to do it an'); + + expect(keywords.every((kw) => kw.length >= 3)).toBe(true); + }); + + it('returns empty array for null input', () => { + expect(engine._extractKeywords(null)).toEqual([]); + expect(engine._extractKeywords('')).toEqual([]); + }); + + it('handles special characters', () => { + const keywords = engine._extractKeywords('validate.story@draft#2026'); + + // Should tokenize on non-alphanumeric + expect(keywords).toContain('validate'); + expect(keywords).toContain('story'); + expect(keywords).toContain('draft'); + expect(keywords).toContain('2026'); + }); + + it('converts to lowercase', () => { + const keywords = engine._extractKeywords('CREATE Documentation TEMPLATE'); + + for (const kw of keywords) { + expect(kw).toBe(kw.toLowerCase()); + } + }); + + it('limits to MAX_KEYWORDS_PER_ENTITY', () => { + const longText = Array.from({ length: 30 }, (_, i) => `keyword${i}`).join(' '); + const keywords = engine._extractKeywords(longText); + + expect(keywords.length).toBeLessThanOrEqual(15); + }); + }); + + describe('semantic matching accuracy', () => { + 
it('finds entities matching keywords from intent', () => { + const result = engine.analyze('validate story quality checklist'); + + expect(result.recommendations.length).toBeGreaterThan(0); + expect(result.recommendations.some((r) => r.entityId === 'validate-story')).toBe(true); + }); + + it('finds entities matching purpose description', () => { + const result = engine.analyze('documentation files from templates'); + + expect(result.recommendations.length).toBeGreaterThan(0); + expect(result.recommendations.some((r) => r.entityId === 'create-doc')).toBe(true); + }); + + it('ranks better matches higher', () => { + const result = engine.analyze('validate story'); + + expect(result.recommendations.length).toBeGreaterThan(0); + const vsIdx = result.recommendations.findIndex((r) => r.entityId === 'validate-story'); + expect(vsIdx).toBe(0); // Should be top-ranked + }); + }); + + // ============================================================== + // Task 3: Decision Matrix — Boundary Values + // ============================================================== + + describe('_applyDecisionMatrix()', () => { + it('returns REUSE for relevance >= 90%', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.9, + canAdapt: { score: 0.8, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.1 }, + }); + + expect(decision.action).toBe('REUSE'); + expect(decision.confidence).toBe('high'); + }); + + it('returns REUSE at exactly 90% boundary', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.9, + canAdapt: { score: 0.5, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.5 }, + }); + + expect(decision.action).toBe('REUSE'); + }); + + it('returns ADAPT for 60-89% with high adaptability and low impact', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.75, + canAdapt: { score: 0.7, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.15 }, + 
}); + + expect(decision.action).toBe('ADAPT'); + expect(decision.confidence).toBe('medium'); + }); + + it('returns ADAPT with high confidence when relevance >= 80%', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.85, + canAdapt: { score: 0.8, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.1 }, + }); + + expect(decision.action).toBe('ADAPT'); + expect(decision.confidence).toBe('high'); + }); + + it('returns CREATE at 59% boundary (below ADAPT threshold)', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.59, + canAdapt: { score: 0.8, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.1 }, + }); + + expect(decision.action).toBe('CREATE'); + expect(decision.confidence).toBe('low'); + }); + + it('returns CREATE at exactly 60% with low adaptability', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.6, + canAdapt: { score: 0.5, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.1 }, + }); + + expect(decision.action).toBe('CREATE'); + expect(decision.confidence).toBe('medium'); + }); + + it('returns CREATE at 60% with high adaptation impact', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.6, + canAdapt: { score: 0.8, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.35 }, + }); + + expect(decision.action).toBe('CREATE'); + expect(decision.confidence).toBe('medium'); + }); + + it('returns ADAPT at exactly 60% boundary with all conditions met', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.6, + canAdapt: { score: 0.6, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.29 }, + }); + + expect(decision.action).toBe('ADAPT'); + }); + + it('returns REUSE at 100%', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 1.0, + canAdapt: { score: 0.1, constraints: [], extensionPoints: [] }, 
+ adaptationImpact: { percentage: 0.9 }, + }); + + expect(decision.action).toBe('REUSE'); + expect(decision.confidence).toBe('high'); + }); + + it('returns CREATE at 89% when adaptability is 0.59', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.89, + canAdapt: { score: 0.59, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.1 }, + }); + + expect(decision.action).toBe('CREATE'); + }); + + it('returns CREATE at 89% when impact is exactly 30%', () => { + const decision = engine._applyDecisionMatrix({ + relevanceScore: 0.89, + canAdapt: { score: 0.8, constraints: [], extensionPoints: [] }, + adaptationImpact: { percentage: 0.30 }, + }); + + expect(decision.action).toBe('CREATE'); + }); + }); + + // ============================================================== + // Task 4: Impact Analysis + // ============================================================== + + describe('_calculateImpact()', () => { + it('calculates direct consumers from usedBy', () => { + const entity = loader._findById('create-doc'); + const impact = engine._calculateImpact(entity, loader.getEntityCount()); + + expect(impact.directConsumers).toContain('po'); + expect(impact.directConsumers).toContain('sm'); + expect(impact.directCount).toBe(2); + }); + + it('returns zero impact for entity with no consumers', () => { + const entity = loader._findById('po'); + const impact = engine._calculateImpact(entity, loader.getEntityCount()); + + expect(impact.directCount).toBe(0); + expect(impact.totalAffected).toBe(0); + expect(impact.percentage).toBe(0); + }); + + it('calculates percentage relative to total entities', () => { + const entity = loader._findById('create-doc'); + const total = loader.getEntityCount(); + const impact = engine._calculateImpact(entity, total); + + expect(impact.percentage).toBeLessThanOrEqual(1); + expect(impact.percentage).toBeGreaterThanOrEqual(0); + expect(impact.percentage).toBe(engine._round(impact.totalAffected / total)); + 
}); + + it('handles entity with no usedBy field', () => { + const mockEntity = { id: 'test', usedBy: undefined }; + const impact = engine._calculateImpact(mockEntity, 10); + + expect(impact.directCount).toBe(0); + expect(impact.totalAffected).toBe(0); + }); + + it('traverses indirect impacts via BFS', () => { + // template-engine is used by create-doc, which is used by po and sm + const entity = loader._findById('template-engine'); + const impact = engine._calculateImpact(entity, loader.getEntityCount()); + + expect(impact.directConsumers).toContain('create-doc'); + expect(impact.directCount).toBe(1); + // Indirect: po and sm use create-doc + expect(impact.affectedEntities).toContain('po'); + expect(impact.affectedEntities).toContain('sm'); + expect(impact.indirectCount).toBe(2); + }); + }); + + // ============================================================== + // Task 5: Rationale Generation + // ============================================================== + + describe('rationale generation', () => { + it('generates rationale for REUSE decision', () => { + const evaluation = { + entity: { id: 'test', purpose: 'test purpose' }, + relevanceScore: 0.95, + keywordScore: 0.9, + purposeScore: 1.0, + canAdapt: { score: 0.8, constraints: [], extensionPoints: [] }, + }; + const decision = { action: 'REUSE', confidence: 'high' }; + const impact = { percentage: 0.1, directCount: 1, indirectCount: 0 }; + + const rationale = engine._generateEntityRationale(evaluation, decision, impact); + + expect(rationale).toContain('Strong match'); + expect(rationale).toContain('directly without modification'); + }); + + it('generates rationale for ADAPT decision with extension points', () => { + const evaluation = { + entity: { id: 'test', purpose: 'test purpose' }, + relevanceScore: 0.75, + keywordScore: 0.7, + purposeScore: 0.8, + canAdapt: { score: 0.8, constraints: ['API stable'], extensionPoints: ['Custom helpers'] }, + }; + const decision = { action: 'ADAPT', confidence: 'medium' 
}; + const impact = { percentage: 0.15, directCount: 2, indirectCount: 1 }; + + const rationale = engine._generateEntityRationale(evaluation, decision, impact); + + expect(rationale).toContain('adaptation potential'); + expect(rationale).toContain('Custom helpers'); + expect(rationale).toContain('API stable'); + }); + + it('generates rationale for CREATE decision', () => { + const evaluation = { + entity: { id: 'test', purpose: 'test purpose' }, + relevanceScore: 0.45, + keywordScore: 0.3, + purposeScore: 0.6, + canAdapt: { score: 0.4, constraints: [], extensionPoints: [] }, + }; + const decision = { action: 'CREATE', confidence: 'low' }; + const impact = { percentage: 0.05, directCount: 0, indirectCount: 0 }; + + const rationale = engine._generateEntityRationale(evaluation, decision, impact); + + expect(rationale).toContain('Insufficient match'); + }); + + it('explains low adaptability when relevance is adequate', () => { + const evaluation = { + entity: { id: 'test', purpose: 'test' }, + relevanceScore: 0.7, + keywordScore: 0.7, + purposeScore: 0.7, + canAdapt: { score: 0.3, constraints: [], extensionPoints: [] }, + }; + const decision = { action: 'CREATE', confidence: 'medium' }; + const impact = { percentage: 0.1, directCount: 0, indirectCount: 0 }; + + const rationale = engine._generateEntityRationale(evaluation, decision, impact); + + expect(rationale).toContain('adaptability too low'); + }); + + it('generates overall rationale with match counts', () => { + const result = engine.analyze('create documentation template'); + + expect(result.rationale).toBeDefined(); + expect(typeof result.rationale).toBe('string'); + expect(result.rationale.length).toBeGreaterThan(0); + }); + }); + + // ============================================================== + // Task 7: Performance + // ============================================================== + + describe('performance (AC: 9)', () => { + it('completes analysis in <500ms for typical queries', () => { + const start = 
performance.now(); + engine.analyze('validate story drafts before implementation'); + const elapsed = performance.now() - start; + + expect(elapsed).toBeLessThan(500); + }); + + it('benefits from caching on repeated queries', () => { + // First call + const start1 = performance.now(); + engine.analyze('template rendering engine'); + const elapsed1 = performance.now() - start1; + + // Second call (cached) + const start2 = performance.now(); + engine.analyze('template rendering engine'); + const elapsed2 = performance.now() - start2; + + // Cached should be faster or at least similar + expect(elapsed2).toBeLessThan(elapsed1 * 2); + }); + + it('returns same result from cache', () => { + const first = engine.analyze('documentation creation'); + const second = engine.analyze('documentation creation'); + + expect(first).toBe(second); // Same reference from cache + }); + + it('clearCache invalidates all caches', () => { + const first = engine.analyze('documentation'); + engine.clearCache(); + const second = engine.analyze('documentation'); + + expect(first).not.toBe(second); + expect(first.summary).toEqual(second.summary); + }); + }); + + // ============================================================== + // Task 8: Testing edge cases + // ============================================================== + + describe('all matches scenario', () => { + it('handles query matching many entities', () => { + // A broad query that might match multiple entities + const result = engine.analyze('agent task template script documentation'); + + expect(result.summary.matchesFound).toBeGreaterThanOrEqual(0); + expect(result.recommendations.length).toBeLessThanOrEqual(MAX_RESULTS); + }); + }); + + // ============================================================== + // Task 9: CREATE Decision Requirements + // ============================================================== + + describe('CREATE justification (AC: 11)', () => { + it('includes justification when decision is CREATE', () => { + 
const result = engine.analyze('quantum blockchain hypervisor zettabyte'); + + expect(result.summary.decision).toBe('CREATE'); + expect(result.justification).toBeDefined(); + expect(result.justification).toHaveProperty('evaluated_patterns'); + expect(result.justification).toHaveProperty('rejection_reasons'); + expect(result.justification).toHaveProperty('new_capability'); + expect(result.justification).toHaveProperty('review_scheduled'); + }); + + it('evaluated_patterns is an array of entity IDs', () => { + const result = engine.analyze('nonexistent feature xyz'); + + if (result.justification) { + expect(Array.isArray(result.justification.evaluated_patterns)).toBe(true); + } + }); + + it('rejection_reasons maps entity IDs to reason strings', () => { + // Use a query that might partially match but still result in CREATE + const result = engine.analyze('create something slightly related documentation'); + + if (result.justification && Object.keys(result.justification.rejection_reasons).length > 0) { + for (const [id, reason] of Object.entries(result.justification.rejection_reasons)) { + expect(typeof id).toBe('string'); + expect(typeof reason).toBe('string'); + expect(reason.length).toBeGreaterThan(0); + } + } + }); + + it('review_scheduled is 30 days from now', () => { + const result = engine.analyze('brand new capability xyz'); + + if (result.justification) { + const expected = new Date(); + expected.setDate(expected.getDate() + 30); + const expectedDate = expected.toISOString().split('T')[0]; + + expect(result.justification.review_scheduled).toBe(expectedDate); + } + }); + + it('new_capability contains the original intent', () => { + const intent = 'unique novel capability never seen before'; + const result = engine.analyze(intent); + + if (result.justification) { + expect(result.justification.new_capability).toBe(intent); + } + }); + }); + + describe('reviewCreateDecisions() (AC: 12)', () => { + it('returns review report structure', () => { + const report = 
engine.reviewCreateDecisions(); + + expect(report).toHaveProperty('pendingReview'); + expect(report).toHaveProperty('promotionCandidates'); + expect(report).toHaveProperty('monitoring'); + expect(report).toHaveProperty('deprecationReview'); + expect(report).toHaveProperty('totalReviewed'); + expect(Array.isArray(report.pendingReview)).toBe(true); + expect(Array.isArray(report.promotionCandidates)).toBe(true); + }); + + it('handles registry with no CREATE justification metadata', () => { + const report = engine.reviewCreateDecisions(); + + // Valid registry fixture has no createJustification metadata + expect(report.totalReviewed).toBe(0); + }); + }); + + describe('getPromotionStatus() (Task 9.6)', () => { + it('returns promotion-candidate for 3+ usedBy', () => { + const entity = { id: 'popular', usedBy: ['a', 'b', 'c'] }; + expect(engine.getPromotionStatus(entity)).toBe('promotion-candidate'); + }); + + it('returns monitoring for 1-2 usedBy', () => { + const entity = { id: 'used', usedBy: ['a'] }; + expect(engine.getPromotionStatus(entity)).toBe('monitoring'); + }); + + it('returns monitoring for 2 usedBy', () => { + const entity = { id: 'used', usedBy: ['a', 'b'] }; + expect(engine.getPromotionStatus(entity)).toBe('monitoring'); + }); + + it('returns deprecation-review for 0 usedBy after 60 days', () => { + const pastDate = new Date(); + pastDate.setDate(pastDate.getDate() - 61); + + const entity = { + id: 'unused', + usedBy: [], + createdAt: pastDate.toISOString().split('T')[0], + createJustification: { created_at: pastDate.toISOString().split('T')[0] }, + }; + + expect(engine.getPromotionStatus(entity)).toBe('deprecation-review'); + }); + + it('returns monitoring for 0 usedBy within 60 days', () => { + const recentDate = new Date(); + recentDate.setDate(recentDate.getDate() - 10); + + const entity = { + id: 'new', + usedBy: [], + createJustification: { review_scheduled: recentDate.toISOString().split('T')[0] }, + }; + + 
expect(engine.getPromotionStatus(entity)).toBe('monitoring'); + }); + + it('returns unknown for null entity', () => { + expect(engine.getPromotionStatus(null)).toBe('unknown'); + }); + }); + + // ============================================================== + // Exported constants + // ============================================================== + + describe('exported constants', () => { + it('exports all configuration constants', () => { + expect(THRESHOLD_MINIMUM).toBe(0.4); + expect(ADAPT_IMPACT_THRESHOLD).toBe(0.30); + expect(KEYWORD_OVERLAP_WEIGHT).toBe(0.6); + expect(PURPOSE_SIMILARITY_WEIGHT).toBe(0.4); + expect(MAX_RESULTS).toBe(20); + expect(CACHE_TTL_MS).toBe(300_000); + }); + + it('STOP_WORDS is a Set with expected entries', () => { + expect(STOP_WORDS instanceof Set).toBe(true); + expect(STOP_WORDS.has('the')).toBe(true); + expect(STOP_WORDS.has('is')).toBe(true); + expect(STOP_WORDS.has('validate')).toBe(false); + }); + }); + + // ============================================================== + // CLI integration test (Task 8.6) + // ============================================================== + + describe('CLI command integration', () => { + const { execSync } = require('child_process'); + const cliPath = path.resolve(__dirname, '..', '..', '..', 'bin', 'aios-ids.js'); + + it('shows help when called without arguments', () => { + const output = execSync(`node "${cliPath}" --help`, { encoding: 'utf8' }); + + expect(output).toContain('ids:query'); + expect(output).toContain('ids:create-review'); + expect(output).toContain('--json'); + }); + + it('returns JSON output with --json flag', () => { + const output = execSync( + `node "${cliPath}" ids:query "validate story" --json`, + { encoding: 'utf8' }, + ); + + const result = JSON.parse(output); + expect(result).toHaveProperty('intent'); + expect(result).toHaveProperty('recommendations'); + expect(result).toHaveProperty('summary'); + }); + + it('returns formatted output for query', () => { + const 
output = execSync( + `node "${cliPath}" ids:query "documentation template"`, + { encoding: 'utf8' }, + ); + + expect(output).toContain('IDS Analysis'); + expect(output).toContain('Decision:'); + expect(output).toContain('Rationale:'); + }); + + it('handles create-review command', () => { + const output = execSync( + `node "${cliPath}" ids:create-review --json`, + { encoding: 'utf8' }, + ); + + const result = JSON.parse(output); + expect(result).toHaveProperty('totalReviewed'); + expect(result).toHaveProperty('pendingReview'); + }); + + it('returns error for missing intent', () => { + expect.assertions(1); + try { + execSync(`node "${cliPath}" ids:query`, { encoding: 'utf8', stdio: 'pipe' }); + } catch (error) { + expect(error.stderr || error.stdout).toContain('Intent is required'); + } + }); + }); +}); + +``` + +================================================== +📄 tests/core/ids/registry-loader.test.js +================================================== +```js +'use strict'; + +const path = require('path'); +const { RegistryLoader } = require('../../../.aios-core/core/ids/registry-loader'); + +const FIXTURES = path.resolve(__dirname, 'fixtures'); +const VALID_REGISTRY = path.join(FIXTURES, 'valid-registry.yaml'); +const EMPTY_REGISTRY = path.join(FIXTURES, 'empty-registry.yaml'); +const CORRUPT_REGISTRY = path.join(FIXTURES, 'corrupt-registry.yaml'); +const MISSING_REGISTRY = path.join(FIXTURES, 'does-not-exist.yaml'); + +describe('RegistryLoader', () => { + describe('load()', () => { + it('loads a valid registry file', () => { + const loader = new RegistryLoader(VALID_REGISTRY); + const registry = loader.load(); + + expect(registry).toBeDefined(); + expect(registry.metadata).toBeDefined(); + expect(registry.metadata.version).toBe('1.0.0'); + expect(registry.entities).toBeDefined(); + expect(registry.categories).toBeDefined(); + }); + + it('returns empty registry when file is missing (AC: 11)', () => { + const loader = new RegistryLoader(MISSING_REGISTRY); + const 
registry = loader.load(); + + expect(registry).toBeDefined(); + expect(registry.metadata).toBeDefined(); + expect(registry.metadata.entityCount).toBe(0); + expect(registry.entities).toEqual({}); + }); + + it('returns empty registry when file is empty (AC: 11)', () => { + const emptyPath = path.join(FIXTURES, 'actually-empty.yaml'); + const fs = require('fs'); + fs.writeFileSync(emptyPath, '', 'utf8'); + + try { + const loader = new RegistryLoader(emptyPath); + const registry = loader.load(); + + expect(registry).toBeDefined(); + expect(registry.metadata.entityCount).toBe(0); + } finally { + fs.unlinkSync(emptyPath); + } + }); + + it('throws descriptive error for corrupt YAML', () => { + const loader = new RegistryLoader(CORRUPT_REGISTRY); + + expect(() => loader.load()).toThrow(/Failed to parse registry/); + }); + + it('handles registry with empty entities gracefully', () => { + const loader = new RegistryLoader(EMPTY_REGISTRY); + const registry = loader.load(); + + expect(registry).toBeDefined(); + expect(loader.getEntityCount()).toBe(0); + }); + }); + + describe('queryByKeywords()', () => { + let loader; + + beforeEach(() => { + loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + }); + + it('returns entities matching a single keyword', () => { + const results = loader.queryByKeywords(['validate']); + expect(results.length).toBeGreaterThan(0); + expect(results.some((e) => e.id === 'validate-story')).toBe(true); + }); + + it('returns entities matching multiple keywords', () => { + const results = loader.queryByKeywords(['template', 'engine']); + expect(results.length).toBeGreaterThan(0); + expect(results.some((e) => e.id === 'template-engine')).toBe(true); + }); + + it('returns empty array for no matches', () => { + const results = loader.queryByKeywords(['zzz-nonexistent-zzz']); + expect(results).toEqual([]); + }); + + it('returns empty array for empty input', () => { + expect(loader.queryByKeywords([])).toEqual([]); + 
expect(loader.queryByKeywords(null)).toEqual([]); + }); + + it('is case-insensitive', () => { + const results = loader.queryByKeywords(['VALIDATE']); + expect(results.some((e) => e.id === 'validate-story')).toBe(true); + }); + + it('deduplicates results when multiple keywords match same entity', () => { + const results = loader.queryByKeywords(['product', 'owner', 'backlog']); + const poCount = results.filter((e) => e.id === 'po').length; + expect(poCount).toBe(1); + }); + }); + + describe('queryByType()', () => { + let loader; + + beforeEach(() => { + loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + }); + + it('returns all entities of a given type', () => { + const tasks = loader.queryByType('task'); + expect(tasks.length).toBe(2); + expect(tasks.every((e) => e.type === 'task')).toBe(true); + }); + + it('is case-insensitive', () => { + const agents = loader.queryByType('AGENT'); + expect(agents.length).toBe(2); + }); + + it('returns empty array for unknown type', () => { + expect(loader.queryByType('widget')).toEqual([]); + }); + + it('returns empty array for null/empty input', () => { + expect(loader.queryByType(null)).toEqual([]); + expect(loader.queryByType('')).toEqual([]); + }); + }); + + describe('queryByPath()', () => { + let loader; + + beforeEach(() => { + loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + }); + + it('finds entities by partial path match', () => { + const results = loader.queryByPath('tasks/'); + expect(results.length).toBe(2); + }); + + it('is case-insensitive', () => { + const results = loader.queryByPath('AGENTS/'); + expect(results.length).toBe(2); + }); + + it('returns empty for no match', () => { + expect(loader.queryByPath('nonexistent/')).toEqual([]); + }); + + it('returns empty for null input', () => { + expect(loader.queryByPath(null)).toEqual([]); + }); + }); + + describe('queryByPurpose()', () => { + let loader; + + beforeEach(() => { + loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + 
}); + + it('finds entities by purpose text', () => { + const results = loader.queryByPurpose('backlog management'); + expect(results.length).toBeGreaterThan(0); + expect(results.some((e) => e.id === 'po')).toBe(true); + }); + + it('is case-insensitive', () => { + const results = loader.queryByPurpose('HANDLEBARS'); + expect(results.some((e) => e.id === 'template-engine')).toBe(true); + }); + + it('returns empty for no match', () => { + expect(loader.queryByPurpose('zzz-nothing-zzz')).toEqual([]); + }); + }); + + describe('getRelationships()', () => { + let loader; + + beforeEach(() => { + loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + }); + + it('returns usedBy and dependencies for an entity', () => { + const rels = loader.getRelationships('create-doc'); + expect(rels.usedBy).toContain('po'); + expect(rels.usedBy).toContain('sm'); + expect(rels.dependencies).toContain('template-engine'); + }); + + it('returns dependencies and usedBy for a dependency entity', () => { + const rels = loader.getRelationships('template-engine'); + expect(rels.dependencies).toEqual([]); + expect(rels.usedBy).toContain('create-doc'); + }); + + it('returns empty arrays for unknown entity', () => { + const rels = loader.getRelationships('unknown-entity'); + expect(rels).toEqual({ usedBy: [], dependencies: [] }); + }); + }); + + describe('getUsedBy() / getDependencies()', () => { + let loader; + + beforeEach(() => { + loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + }); + + it('getUsedBy returns correct list', () => { + expect(loader.getUsedBy('create-doc')).toContain('po'); + }); + + it('getDependencies returns correct list', () => { + expect(loader.getDependencies('po')).toContain('create-doc'); + }); + }); + + describe('getMetadata() / getCategories() / getEntityCount()', () => { + it('returns metadata from valid registry', () => { + const loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + + expect(loader.getMetadata().version).toBe('1.0.0'); + 
expect(loader.getCategories().length).toBeGreaterThan(0); + expect(loader.getEntityCount()).toBe(5); + }); + + it('returns zero count for empty registry', () => { + const loader = new RegistryLoader(EMPTY_REGISTRY); + loader.load(); + + expect(loader.getEntityCount()).toBe(0); + }); + }); + + describe('caching', () => { + it('returns same results on repeated queries without reloading', () => { + const loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + + const first = loader.queryByType('task'); + const second = loader.queryByType('task'); + expect(first).toBe(second); + }); + + it('clears cache on reload', () => { + const loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + + const before = loader.queryByType('task'); + loader.load(); + const after = loader.queryByType('task'); + + expect(before).not.toBe(after); + expect(before).toEqual(after); + }); + }); + + describe('performance (AC: 9)', () => { + it('returns query results in under 100ms', () => { + const loader = new RegistryLoader(VALID_REGISTRY); + loader.load(); + + const start = performance.now(); + loader.queryByKeywords(['validate', 'template']); + loader.queryByType('task'); + loader.queryByPath('tasks/'); + loader.queryByPurpose('documentation'); + const elapsed = performance.now() - start; + + expect(elapsed).toBeLessThan(100); + }); + }); +}); + +``` + +================================================== +📄 tests/core/ids/fixtures/corrupt-registry.yaml +================================================== +```yaml +metadata: + version: "1.0.0" + this is not valid yaml: [ + missing bracket + entityCount: broken + +``` + +================================================== +📄 tests/core/ids/fixtures/valid-registry.yaml +================================================== +```yaml +metadata: + version: "1.0.0" + lastUpdated: "2026-02-08T00:00:00Z" + entityCount: 5 + checksumAlgorithm: "sha256" + +entities: + tasks: + create-doc: + path: ".aios-core/development/tasks/create-doc.md" + 
type: "task" + purpose: "Create documentation files from templates" + keywords: ["create", "doc", "documentation", "template"] + usedBy: ["po", "sm"] + dependencies: ["template-engine"] + adaptability: + score: 0.8 + constraints: ["Must follow template format"] + extensionPoints: ["Custom templates"] + checksum: "sha256:a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2" + lastVerified: "2026-02-08T00:00:00Z" + validate-story: + path: ".aios-core/development/tasks/validate-next-story.md" + type: "task" + purpose: "Validate story drafts before implementation" + keywords: ["validate", "story", "quality", "checklist"] + usedBy: ["po"] + dependencies: ["create-doc"] + adaptability: + score: 0.7 + constraints: [] + extensionPoints: ["Custom validation rules"] + checksum: "sha256:d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5" + lastVerified: "2026-02-08T00:00:00Z" + scripts: + template-engine: + path: ".aios-core/core/utils/template-engine.js" + type: "script" + purpose: "Render Handlebars templates for document generation" + keywords: ["template", "render", "handlebars", "engine"] + usedBy: ["create-doc"] + dependencies: [] + adaptability: + score: 0.5 + constraints: ["API surface is stable"] + extensionPoints: ["Custom helpers"] + checksum: "sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" + lastVerified: "2026-02-08T00:00:00Z" + agents: + po: + path: ".aios-core/development/agents/po.md" + type: "agent" + purpose: "Product Owner agent - backlog management and story validation" + keywords: ["product", "owner", "backlog", "stories", "validation"] + usedBy: [] + dependencies: ["create-doc", "validate-story"] + adaptability: + score: 0.3 + constraints: ["Persona definition is stable"] + extensionPoints: ["Custom commands"] + checksum: "sha256:fedcba9876543210fedcba9876543210fedcba9876543210fedcba9876543210" + lastVerified: "2026-02-08T00:00:00Z" + sm: + path: ".aios-core/development/agents/sm.md" + type: "agent" + 
purpose: "Scrum Master agent - story creation and sprint management" + keywords: ["scrum", "master", "sprint", "story", "creation"] + usedBy: [] + dependencies: ["create-doc"] + adaptability: + score: 0.3 + constraints: ["Persona definition is stable"] + extensionPoints: ["Custom commands"] + checksum: "sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef" + lastVerified: "2026-02-08T00:00:00Z" + +categories: + - id: tasks + description: "Executable task workflows for agent operations" + basePath: ".aios-core/development/tasks" + - id: scripts + description: "Utility and automation scripts" + basePath: ".aios-core/development/scripts" + - id: agents + description: "Agent persona definitions and configurations" + basePath: ".aios-core/development/agents" + +``` + +================================================== +📄 tests/core/ids/fixtures/empty-registry.yaml +================================================== +```yaml +metadata: + version: "1.0.0" + lastUpdated: "2026-02-08T00:00:00Z" + entityCount: 0 + checksumAlgorithm: "sha256" + +entities: + tasks: {} + templates: {} + scripts: {} + +categories: + - id: tasks + description: "Executable task workflows" + basePath: ".aios-core/development/tasks" + +``` + +================================================== +📄 tests/core/execution/parallel-executor.test.js +================================================== +```js +/** + * Parallel Executor Tests + * Story GEMINI-INT.17 + */ + +const { + ParallelExecutor, + ParallelMode, +} = require('../../../.aios-core/core/execution/parallel-executor'); + +describe('ParallelExecutor', () => { + let executor; + + beforeEach(() => { + executor = new ParallelExecutor(); + }); + + describe('ParallelMode', () => { + it('should have all execution modes defined', () => { + expect(ParallelMode.RACE).toBe('race'); + expect(ParallelMode.CONSENSUS).toBe('consensus'); + expect(ParallelMode.BEST_OF).toBe('best-of'); + expect(ParallelMode.MERGE).toBe('merge'); + 
expect(ParallelMode.FALLBACK).toBe('fallback'); + }); + }); + + describe('constructor', () => { + it('should use default mode as fallback', () => { + expect(executor.mode).toBe(ParallelMode.FALLBACK); + }); + + it('should accept custom mode', () => { + const custom = new ParallelExecutor({ mode: ParallelMode.RACE }); + expect(custom.mode).toBe(ParallelMode.RACE); + }); + + it('should have default consensus similarity', () => { + expect(executor.consensusSimilarity).toBe(0.85); + }); + + it('should initialize stats', () => { + expect(executor.stats.executions).toBe(0); + expect(executor.stats.consensusAgreements).toBe(0); + expect(executor.stats.fallbacksUsed).toBe(0); + }); + }); + + describe('execute', () => { + it('should execute both providers in parallel', async () => { + const claudeExecutor = jest.fn().mockResolvedValue({ success: true, output: 'Claude result' }); + const geminiExecutor = jest.fn().mockResolvedValue({ success: true, output: 'Gemini result' }); + + const result = await executor.execute(claudeExecutor, geminiExecutor); + + expect(claudeExecutor).toHaveBeenCalled(); + expect(geminiExecutor).toHaveBeenCalled(); + expect(result.success).toBe(true); + }); + + it('should increment execution count', async () => { + const claudeExecutor = jest.fn().mockResolvedValue({ success: true, output: 'test' }); + const geminiExecutor = jest.fn().mockResolvedValue({ success: true, output: 'test' }); + + await executor.execute(claudeExecutor, geminiExecutor); + + expect(executor.stats.executions).toBe(1); + }); + + it('should handle Claude failure with Gemini fallback', async () => { + const claudeExecutor = jest.fn().mockRejectedValue(new Error('Claude failed')); + const geminiExecutor = jest.fn().mockResolvedValue({ success: true, output: 'Gemini result' }); + + const result = await executor.execute(claudeExecutor, geminiExecutor); + + expect(result.success).toBe(true); + expect(result.selectedProvider).toBe('gemini'); + expect(result.usedFallback).toBe(true); 
+ }); + + it('should handle both failures', async () => { + const claudeExecutor = jest.fn().mockRejectedValue(new Error('Claude failed')); + const geminiExecutor = jest.fn().mockRejectedValue(new Error('Gemini failed')); + + const result = await executor.execute(claudeExecutor, geminiExecutor); + + expect(result.success).toBe(false); + expect(result.error).toBe('Both providers failed'); + }); + + it('should emit events', async () => { + const startedHandler = jest.fn(); + const completedHandler = jest.fn(); + + executor.on('parallel_started', startedHandler); + executor.on('parallel_completed', completedHandler); + + const claudeExecutor = jest.fn().mockResolvedValue({ success: true, output: 'test' }); + const geminiExecutor = jest.fn().mockResolvedValue({ success: true, output: 'test' }); + + await executor.execute(claudeExecutor, geminiExecutor); + + expect(startedHandler).toHaveBeenCalled(); + expect(completedHandler).toHaveBeenCalled(); + }); + }); + + describe('race mode', () => { + it('should return first successful result', async () => { + const raceExecutor = new ParallelExecutor({ mode: ParallelMode.RACE }); + + const claudeExecutor = jest.fn().mockResolvedValue({ success: true, output: 'Claude' }); + const geminiExecutor = jest.fn().mockResolvedValue({ success: true, output: 'Gemini' }); + + const result = await raceExecutor.execute(claudeExecutor, geminiExecutor); + + expect(result.success).toBe(true); + expect(result.mode).toBe('race'); + }); + }); + + describe('consensus mode', () => { + it('should achieve consensus when outputs are similar', async () => { + const consensusExecutor = new ParallelExecutor({ + mode: ParallelMode.CONSENSUS, + consensusSimilarity: 0.5, + }); + + const claudeExecutor = jest.fn().mockResolvedValue({ success: true, output: 'The quick brown fox' }); + const geminiExecutor = jest.fn().mockResolvedValue({ success: true, output: 'The quick brown dog' }); + + const result = await consensusExecutor.execute(claudeExecutor, 
geminiExecutor); + + expect(result.mode).toBe('consensus'); + expect(result).toHaveProperty('similarity'); + }); + }); + + describe('best-of mode', () => { + it('should score and pick best output', async () => { + const bestOfExecutor = new ParallelExecutor({ mode: ParallelMode.BEST_OF }); + + const claudeExecutor = jest.fn().mockResolvedValue({ + success: true, + output: 'Short response', + }); + const geminiExecutor = jest.fn().mockResolvedValue({ + success: true, + output: 'This is a much longer response with more content and details including ```code blocks``` and - bullet points', + }); + + const result = await bestOfExecutor.execute(claudeExecutor, geminiExecutor); + + expect(result.mode).toBe('best-of'); + expect(result).toHaveProperty('scores'); + }); + }); + + describe('merge mode', () => { + it('should merge both outputs', async () => { + const mergeExecutor = new ParallelExecutor({ mode: ParallelMode.MERGE }); + + const claudeExecutor = jest.fn().mockResolvedValue({ success: true, output: 'Claude output' }); + const geminiExecutor = jest.fn().mockResolvedValue({ success: true, output: 'Gemini output' }); + + const result = await mergeExecutor.execute(claudeExecutor, geminiExecutor); + + expect(result.mode).toBe('merge'); + expect(result.output).toContain('Claude'); + expect(result.output).toContain('Gemini'); + }); + }); + + describe('getStats', () => { + it('should return stats with calculated rates', async () => { + const claudeExecutor = jest.fn().mockResolvedValue({ success: true, output: 'test' }); + const geminiExecutor = jest.fn().mockResolvedValue({ success: true, output: 'test' }); + + await executor.execute(claudeExecutor, geminiExecutor); + + const stats = executor.getStats(); + + expect(stats).toHaveProperty('executions'); + expect(stats).toHaveProperty('consensusRate'); + expect(stats).toHaveProperty('fallbackRate'); + }); + }); + + describe('timeout handling', () => { + it('should timeout slow executors', async () => { + const 
timeoutExecutor = new ParallelExecutor({ timeout: 100 }); + + const slowExecutor = jest.fn().mockImplementation( + () => new Promise((resolve) => setTimeout(() => resolve({ success: true }), 500)), + ); + const fastExecutor = jest.fn().mockResolvedValue({ success: true, output: 'fast' }); + + const result = await timeoutExecutor.execute(slowExecutor, fastExecutor); + + expect(result.success).toBe(true); + }, 10000); + }); +}); + +``` + +================================================== +📄 tests/core/orchestration/greenfield-handler.test.js +================================================== +```js +/** + * Tests for GreenfieldHandler - Story 12.13 + * + * Epic 12: Bob Full Integration — Completando o PRD v2.0 + * + * Test coverage: + * - AC1: Greenfield detection (no package.json, .git, docs/) + * - AC2-5: 4-phase orchestration + * - AC6-10: Epic 11 module integration + * - AC11-14: Surface decisions between phases with PAUSE/resume + * - AC15-17: Error handling and idempotency + * + * @jest-environment node + */ + +'use strict'; + +const path = require('path'); +const fs = require('fs'); + +// Module under test +const { + GreenfieldHandler, + GreenfieldPhase, + PhaseFailureAction, + DEFAULT_GREENFIELD_INDICATORS, + PHASE_1_SEQUENCE, +} = require('../../../.aios-core/core/orchestration/greenfield-handler'); + +// ═══════════════════════════════════════════════════════════════════════════════════ +// TEST SETUP +// ═══════════════════════════════════════════════════════════════════════════════════ + +const TEST_PROJECT_ROOT = '/tmp/test-greenfield-project'; + +// Mock fs module +jest.mock('fs', () => ({ + ...jest.requireActual('fs'), + existsSync: jest.fn(), + readFileSync: jest.fn(), + writeFileSync: jest.fn(), + mkdirSync: jest.fn(), +})); + +// Mock terminal-spawner +const mockTerminalSpawner = { + isSpawnerAvailable: jest.fn().mockReturnValue(false), + spawnAgent: jest.fn().mockResolvedValue({ success: true, pid: 1234 }), +}; 
+jest.mock('../../../.aios-core/core/orchestration/terminal-spawner', () => mockTerminalSpawner); + +// Mock dependencies +const mockWorkflowExecutor = { + executeWorkflow: jest.fn(), + execute: jest.fn(), +}; + +const mockSurfaceChecker = { + shouldSurface: jest.fn(), +}; + +const mockSessionState = { + exists: jest.fn(), + loadSessionState: jest.fn(), + recordPhaseChange: jest.fn(), + updateSessionState: jest.fn(), +}; + +describe('GreenfieldHandler', () => { + let handler; + + beforeEach(() => { + jest.clearAllMocks(); + + // Default mock implementations + fs.existsSync.mockReturnValue(false); // Greenfield = nothing exists + mockSurfaceChecker.shouldSurface.mockReturnValue({ should_surface: true }); + mockSessionState.exists.mockResolvedValue(false); + + // Reset terminal spawner mock to defaults + mockTerminalSpawner.isSpawnerAvailable.mockReturnValue(false); + mockTerminalSpawner.spawnAgent.mockResolvedValue({ success: true, pid: 1234 }); + + handler = new GreenfieldHandler(TEST_PROJECT_ROOT, { + debug: false, + workflowExecutor: mockWorkflowExecutor, + surfaceChecker: mockSurfaceChecker, + sessionState: mockSessionState, + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // CONSTRUCTOR TESTS + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('constructor', () => { + test('should throw error if projectRoot is not provided', () => { + expect(() => new GreenfieldHandler()).toThrow('projectRoot is required'); + }); + + test('should throw error if projectRoot is not a string', () => { + expect(() => new GreenfieldHandler(123)).toThrow('projectRoot is required and must be a string'); + }); + + test('should initialize with correct defaults', () => { + const h = new GreenfieldHandler(TEST_PROJECT_ROOT); + expect(h.projectRoot).toBe(TEST_PROJECT_ROOT); + expect(h.options.debug).toBe(false); + expect(h.indicators).toEqual(DEFAULT_GREENFIELD_INDICATORS); + }); + + 
test('should accept custom indicators', () => { + const customIndicators = ['package.json', '.git']; + const h = new GreenfieldHandler(TEST_PROJECT_ROOT, { indicators: customIndicators }); + expect(h.indicators).toEqual(customIndicators); + }); + + test('should accept injected dependencies', () => { + const h = new GreenfieldHandler(TEST_PROJECT_ROOT, { + workflowExecutor: mockWorkflowExecutor, + surfaceChecker: mockSurfaceChecker, + sessionState: mockSessionState, + }); + expect(h._workflowExecutor).toBe(mockWorkflowExecutor); + expect(h._surfaceChecker).toBe(mockSurfaceChecker); + expect(h._sessionState).toBe(mockSessionState); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // GREENFIELD DETECTION (AC1, AC16) + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('isGreenfield (AC1)', () => { + test('should return true when NO indicators exist', () => { + fs.existsSync.mockReturnValue(false); + expect(handler.isGreenfield()).toBe(true); + }); + + test('should return false when package.json exists', () => { + fs.existsSync.mockImplementation((p) => { + return p.endsWith('package.json'); + }); + expect(handler.isGreenfield()).toBe(false); + }); + + test('should return false when .git exists', () => { + fs.existsSync.mockImplementation((p) => { + return p.endsWith('.git'); + }); + expect(handler.isGreenfield()).toBe(false); + }); + + test('should return false when docs/ exists', () => { + fs.existsSync.mockImplementation((p) => { + // Handle both Unix (docs/) and Windows (docs\) path separators + return p.endsWith(`docs${path.sep}`); + }); + expect(handler.isGreenfield()).toBe(false); + }); + + test('should return false when ALL indicators exist', () => { + fs.existsSync.mockReturnValue(true); + expect(handler.isGreenfield()).toBe(false); + }); + + test('should accept custom project path', () => { + fs.existsSync.mockReturnValue(false); + const customPath = 
path.join('/custom', 'path'); + expect(handler.isGreenfield(customPath)).toBe(true); + // Verify existsSync was called with the custom path (platform-agnostic) + expect(fs.existsSync).toHaveBeenCalled(); + const calls = fs.existsSync.mock.calls.map(c => c[0]); + const hasCustomPath = calls.some(c => c.includes('custom') && c.includes('path')); + expect(hasCustomPath).toBe(true); + }); + + test('should use custom indicators when configured', () => { + const h = new GreenfieldHandler(TEST_PROJECT_ROOT, { + indicators: ['custom-file.txt'], + }); + fs.existsSync.mockReturnValue(false); + expect(h.isGreenfield()).toBe(true); + }); + }); + + describe('shouldSkipBootstrap (AC16)', () => { + test('should return true when both package.json and .git exist', () => { + fs.existsSync.mockImplementation((p) => { + return p.endsWith('package.json') || p.endsWith('.git'); + }); + expect(handler.shouldSkipBootstrap()).toBe(true); + }); + + test('should return false when only package.json exists', () => { + fs.existsSync.mockImplementation((p) => { + return p.endsWith('package.json'); + }); + expect(handler.shouldSkipBootstrap()).toBe(false); + }); + + test('should return false when only .git exists', () => { + fs.existsSync.mockImplementation((p) => { + return p.endsWith('.git'); + }); + expect(handler.shouldSkipBootstrap()).toBe(false); + }); + + test('should return false when neither exists', () => { + fs.existsSync.mockReturnValue(false); + expect(handler.shouldSkipBootstrap()).toBe(false); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // PHASE ORCHESTRATION (AC2-5) + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('handle (main entry)', () => { + test('should start from Phase 0 when nothing exists (not skippable)', async () => { + fs.existsSync.mockReturnValue(false); // Nothing exists + const result = await handler.handle({}); + + // Should surface between Phase 0 → 1 
(after bootstrap succeeds or returns manual) + expect(result.action).toBe('greenfield_surface'); + expect(result.phase).toBe(GreenfieldPhase.BOOTSTRAP); + expect(result.nextPhase).toBe(1); + }); + + test('should skip Phase 0 when package.json + .git exist (AC16)', async () => { + fs.existsSync.mockImplementation((p) => { + return p.endsWith('package.json') || p.endsWith('.git'); + }); + + const result = await handler.handle({}); + + // Should start Phase 1 (Discovery) directly + expect(result.action).toBe('greenfield_surface'); + expect(result.phase).toBe(GreenfieldPhase.DISCOVERY); + }); + + test('should resume from specified phase', async () => { + const result = await handler.handle({ resumeFromPhase: 2 }); + + // Should start Phase 2 (Sharding) + expect(result.action).toBe('greenfield_surface'); + expect(result.phase).toBe(GreenfieldPhase.SHARDING); + }); + }); + + describe('Phase 0: Bootstrap (AC2)', () => { + test('should return surface prompt after bootstrap', async () => { + const result = await handler._executePhase0({}); + + expect(result.action).toBe('greenfield_surface'); + expect(result.phase).toBe(GreenfieldPhase.BOOTSTRAP); + expect(result.nextPhase).toBe(1); + expect(result.data.message).toContain('Descreva o que quer construir'); + expect(result.data.promptType).toBe('text_input'); + }); + + test('should emit phaseStart and phaseComplete events', async () => { + const events = []; + handler.on('phaseStart', (e) => events.push({ type: 'start', ...e })); + handler.on('phaseComplete', (e) => events.push({ type: 'complete', ...e })); + + await handler._executePhase0({}); + + expect(events.some((e) => e.type === 'start' && e.phase === GreenfieldPhase.BOOTSTRAP)).toBe(true); + expect(events.some((e) => e.type === 'complete' && e.phase === GreenfieldPhase.BOOTSTRAP)).toBe(true); + }); + }); + + describe('Phase 1: Discovery (AC3)', () => { + test('should return surface prompt after discovery', async () => { + const result = await 
handler._executePhase1({});

      expect(result.action).toBe('greenfield_surface');
      expect(result.phase).toBe(GreenfieldPhase.DISCOVERY);
      expect(result.nextPhase).toBe(2);
      expect(result.data.promptType).toBe('go_pause');
    });

    test('should include artifacts summary in surface message', async () => {
      const result = await handler._executePhase1({});

      expect(result.data.message).toContain('Artefatos de planejamento criados');
    });

    test('should return failure on agent error', async () => {
      // Make terminal spawner fail for first agent
      mockTerminalSpawner.isSpawnerAvailable.mockReturnValue(true);
      mockTerminalSpawner.spawnAgent.mockResolvedValue({ success: false, error: 'Agent failed' });

      const result = await handler._executePhase1({});

      expect(result.action).toBe('greenfield_phase_failure');
      expect(result.options).toHaveLength(3);
      expect(result.options[0].action).toBe('retry');
    });
  });

  describe('Phase 2: Sharding (AC4)', () => {
    test('should return surface prompt after sharding', async () => {
      const result = await handler._executePhase2({});

      expect(result.action).toBe('greenfield_surface');
      expect(result.phase).toBe(GreenfieldPhase.SHARDING);
      expect(result.nextPhase).toBe(3);
      expect(result.data.promptType).toBe('go_pause');
    });
  });

  describe('Phase 3: Dev Cycle (AC5)', () => {
    test('should return dev cycle handoff', async () => {
      const result = await handler._executePhase3({});

      expect(result.action).toBe('greenfield_dev_cycle');
      expect(result.phase).toBe(GreenfieldPhase.DEV_CYCLE);
      expect(result.data.nextStep).toBe('development_cycle');
      expect(result.data.handoff).toContain('@sm');
    });
  });

  // ═══════════════════════════════════════════════════════════════════════════════════
  // SURFACE DECISIONS (AC11-14)
  // ═══════════════════════════════════════════════════════════════════════════════════

  describe('handleSurfaceDecision (AC11-14)', () => {
    test('should continue to next phase on GO', async () => {
      const result = await handler.handleSurfaceDecision('GO', 2, {});

      expect(result.action).toBe('greenfield_surface');
      expect(result.phase).toBe(GreenfieldPhase.SHARDING);
    });

    test('should save state on PAUSE (AC14)', async () => {
      mockSessionState.exists.mockResolvedValue(true);
      mockSessionState.loadSessionState.mockResolvedValue({});

      const result = await handler.handleSurfaceDecision('PAUSE', 2, {});

      expect(result.action).toBe('greenfield_paused');
      expect(result.data.savedPhase).toBe(2);
    });

    test('should pass text input as userGoal', async () => {
      const result = await handler.handleSurfaceDecision(
        'Quero um app de e-commerce',
        1,
        {},
      );

      // Should proceed to Phase 1 with userGoal in context
      expect(result.action).toBe('greenfield_surface');
    });

    test('should handle case-insensitive decisions', async () => {
      const result = await handler.handleSurfaceDecision('go', 3, {});

      expect(result.action).toBe('greenfield_dev_cycle');
    });
  });

  // ═══════════════════════════════════════════════════════════════════════════════════
  // ERROR HANDLING (AC15)
  // ═══════════════════════════════════════════════════════════════════════════════════

  describe('handlePhaseFailureAction (AC15)', () => {
    test('should retry phase on RETRY', async () => {
      const result = await handler.handlePhaseFailureAction(
        GreenfieldPhase.BOOTSTRAP,
        PhaseFailureAction.RETRY,
        {},
      );

      // Should re-execute Phase 0
      expect(result.action).toBe('greenfield_surface');
      expect(result.phase).toBe(GreenfieldPhase.BOOTSTRAP);
    });

    test('should skip to next phase on SKIP', async () => {
      const result = await handler.handlePhaseFailureAction(
        GreenfieldPhase.BOOTSTRAP,
        PhaseFailureAction.SKIP,
        {},
      );

      // Should skip to Phase 1
      expect(result.action).toBe('greenfield_surface');
      expect(result.phase).toBe(GreenfieldPhase.DISCOVERY);
    });

    test('should abort workflow on ABORT', async () => {
      const result = await handler.handlePhaseFailureAction(
        GreenfieldPhase.BOOTSTRAP,
        PhaseFailureAction.ABORT,
        {},
      );

      expect(result.action).toBe('greenfield_aborted');
      expect(result.data.lastPhase).toBe(GreenfieldPhase.BOOTSTRAP);
    });

    test('should handle invalid action', async () => {
      const result = await handler.handlePhaseFailureAction(
        GreenfieldPhase.BOOTSTRAP,
        'invalid',
        {},
      );

      expect(result.action).toBe('invalid_action');
    });
  });

  // ═══════════════════════════════════════════════════════════════════════════════════
  // IDEMPOTENCY (AC17)
  // ═══════════════════════════════════════════════════════════════════════════════════

  describe('checkIdempotency (AC17)', () => {
    test('should detect existing artifact', () => {
      fs.existsSync.mockReturnValue(true);

      const result = handler.checkIdempotency('docs/prd.md');

      expect(result.exists).toBe(true);
      expect(result.action).toBe('update');
    });

    test('should detect new artifact', () => {
      fs.existsSync.mockReturnValue(false);

      const result = handler.checkIdempotency('docs/prd.md');

      expect(result.exists).toBe(false);
      expect(result.action).toBe('create');
    });
  });

  // ═══════════════════════════════════════════════════════════════════════════════════
  // CONSTANTS
  // ═══════════════════════════════════════════════════════════════════════════════════

  describe('Constants', () => {
    test('should export correct phases', () => {
      expect(GreenfieldPhase.DETECTION).toBe('detection');
      expect(GreenfieldPhase.BOOTSTRAP).toBe('phase_0_bootstrap');
      expect(GreenfieldPhase.DISCOVERY).toBe('phase_1_discovery');
      expect(GreenfieldPhase.SHARDING).toBe('phase_2_sharding');
      expect(GreenfieldPhase.DEV_CYCLE).toBe('phase_3_dev_cycle');
      expect(GreenfieldPhase.COMPLETE).toBe('complete');
    });

    test('should export correct failure actions', () => {
      expect(PhaseFailureAction.RETRY).toBe('retry');
      expect(PhaseFailureAction.SKIP).toBe('skip');
      expect(PhaseFailureAction.ABORT).toBe('abort');
    });

    test('should have 5 default indicators', () => {
      expect(DEFAULT_GREENFIELD_INDICATORS).toHaveLength(5);
      expect(DEFAULT_GREENFIELD_INDICATORS).toContain('package.json');
      expect(DEFAULT_GREENFIELD_INDICATORS).toContain('.git');
      expect(DEFAULT_GREENFIELD_INDICATORS).toContain('docs/');
    });

    test('should have 5 agents in Phase 1 sequence', () => {
      expect(PHASE_1_SEQUENCE).toHaveLength(5);
      expect(PHASE_1_SEQUENCE[0].agent).toBe('@analyst');
      expect(PHASE_1_SEQUENCE[4].agent).toBe('@po');
    });
  });

  // ═══════════════════════════════════════════════════════════════════════════════════
  // SESSION STATE (AC9)
  // ═══════════════════════════════════════════════════════════════════════════════════

  describe('Session state recording (AC9)', () => {
    test('should record phase when session exists', async () => {
      mockSessionState.exists.mockResolvedValue(true);
      mockSessionState.loadSessionState.mockResolvedValue({});

      await handler._recordPhase(GreenfieldPhase.BOOTSTRAP, {});

      expect(mockSessionState.recordPhaseChange).toHaveBeenCalledWith(
        'greenfield_phase_0_bootstrap',
        'greenfield-fullstack',
        '@pm',
      );
    });

    test('should not fail when session does not exist', async () => {
      mockSessionState.exists.mockResolvedValue(false);

      // Should not throw
      await handler._recordPhase(GreenfieldPhase.BOOTSTRAP, {});
      expect(mockSessionState.recordPhaseChange).not.toHaveBeenCalled();
    });
  });

  // ═══════════════════════════════════════════════════════════════════════════════════
  // HELPERS
  // ═══════════════════════════════════════════════════════════════════════════════════

  describe('Helper methods', () => {
    test('_getPhaseEnum should map phase numbers', () => {
      expect(handler._getPhaseEnum(0)).toBe(GreenfieldPhase.BOOTSTRAP);
      expect(handler._getPhaseEnum(1)).toBe(GreenfieldPhase.DISCOVERY);
      expect(handler._getPhaseEnum(2)).toBe(GreenfieldPhase.SHARDING);
      expect(handler._getPhaseEnum(3)).toBe(GreenfieldPhase.DEV_CYCLE);
      expect(handler._getPhaseEnum(99)).toBe(GreenfieldPhase.DETECTION);
    });

    test('_getPhaseNumber should parse phase strings', () => {
      expect(handler._getPhaseNumber('phase_0_bootstrap')).toBe(0);
      expect(handler._getPhaseNumber('phase_1_discovery')).toBe(1);
      expect(handler._getPhaseNumber('phase_2_sharding')).toBe(2);
      expect(handler._getPhaseNumber('phase_3_dev_cycle')).toBe(3);
      expect(handler._getPhaseNumber('unknown')).toBe(-1);
    });

    test('_buildArtifactsSummary should format step results', () => {
      const steps = [
        { agent: '@analyst', creates: 'docs/brief.md', success: true },
        { agent: '@pm', creates: 'docs/prd.md', success: true },
        { agent: '@po', creates: null, success: true },
      ];

      const summary = handler._buildArtifactsSummary(steps);

      expect(summary).toContain('docs/brief.md');
      expect(summary).toContain('@analyst');
      expect(summary).toContain('docs/prd.md');
      expect(summary).not.toContain('@po'); // null creates filtered out
    });
  });
});

```

==================================================
📄 tests/core/orchestration/workflow-executor-callbacks.test.js
==================================================
```js
/**
 * Tests for WorkflowExecutor callback system
 *
 * Story 12.6: Observability Panel Integration + Dashboard Bridge
 *
 * Tests:
 * - onPhaseChange callback registration and emission
 * - onAgentSpawn callback registration and emission
 * - onTerminalSpawn callback registration and emission
 * - Multiple callback support
 * - Error handling in callbacks
 */

'use strict';

const fs = require('fs-extra');
const path = require('path');
const os = require('os');

const { WorkflowExecutor } = require('../../../.aios-core/core/orchestration/workflow-executor');

describe('WorkflowExecutor Callbacks (Story 12.6)', () => {
  let tempDir;
  let executor;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'executor-callbacks-'));

    // Create minimal directory structure
    await fs.ensureDir(path.join(tempDir, '.aios-core/development/workflows'));
    await fs.ensureDir(path.join(tempDir, '.aios'));

    // Create minimal workflow file
    const workflowContent = `
workflow:
  id: development-cycle
  phases:
    1_validation:
      agent: \${story.executor}
      on_success: 2_development
    2_development:
      agent: \${story.executor}
      on_success: 3_self_healing
`;
    await fs.writeFile(
      path.join(tempDir, '.aios-core/development/workflows/development-cycle.yaml'),
      workflowContent,
    );

    executor = new WorkflowExecutor(tempDir, { debug: false });
  });

  afterEach(async () => {
    await fs.remove(tempDir);
  });

  describe('Callback Registration', () => {
    describe('onPhaseChange', () => {
      it('should register phase change callback', () => {
        const callback = jest.fn();
        executor.onPhaseChange(callback);

        expect(executor._phaseChangeCallbacks).toContain(callback);
      });

      it('should support multiple callbacks', () => {
        const callback1 = jest.fn();
        const callback2 = jest.fn();

        executor.onPhaseChange(callback1);
        executor.onPhaseChange(callback2);

        expect(executor._phaseChangeCallbacks).toHaveLength(2);
      });

      it('should ignore non-function callbacks', () => {
        executor.onPhaseChange('not a function');
        executor.onPhaseChange(123);
        executor.onPhaseChange(null);

        expect(executor._phaseChangeCallbacks).toHaveLength(0);
      });
    });

    describe('onAgentSpawn', () => {
      it('should register agent spawn callback', () => {
        const callback = jest.fn();
        executor.onAgentSpawn(callback);

        expect(executor._agentSpawnCallbacks).toContain(callback);
      });

      it('should support multiple callbacks', () => {
        const callback1 = jest.fn();
        const callback2 = jest.fn();

        executor.onAgentSpawn(callback1);
        executor.onAgentSpawn(callback2);

        expect(executor._agentSpawnCallbacks).toHaveLength(2);
      });
    });

    describe('onTerminalSpawn', () => {
      it('should register terminal spawn callback', () => {
        const callback = jest.fn();
        executor.onTerminalSpawn(callback);

        expect(executor._terminalSpawnCallbacks).toContain(callback);
      });

      it('should support multiple callbacks', () => {
        const callback1 = jest.fn();
        const callback2 = jest.fn();

        executor.onTerminalSpawn(callback1);
        executor.onTerminalSpawn(callback2);

        expect(executor._terminalSpawnCallbacks).toHaveLength(2);
      });
    });
  });

  describe('Callback Emission', () => {
    describe('_emitPhaseChange', () => {
      it('should call all registered callbacks with correct args', () => {
        const callback1 = jest.fn();
        const callback2 = jest.fn();

        executor.onPhaseChange(callback1);
        executor.onPhaseChange(callback2);

        executor._emitPhaseChange('1_validation', '12.6', '@dev');

        expect(callback1).toHaveBeenCalledWith('1_validation', '12.6', '@dev');
        expect(callback2).toHaveBeenCalledWith('1_validation', '12.6', '@dev');
      });

      it('should handle callback errors gracefully', () => {
        const failingCallback = jest.fn(() => {
          throw new Error('Callback error');
        });
        const successCallback = jest.fn();

        executor.onPhaseChange(failingCallback);
        executor.onPhaseChange(successCallback);

        // Should not throw
        expect(() => {
          executor._emitPhaseChange('1_validation', '12.6', '@dev');
        }).not.toThrow();

        // Second callback should still be called
        expect(successCallback).toHaveBeenCalled();
      });
    });

    describe('_emitAgentSpawn', () => {
      it('should call all registered callbacks with correct args', () => {
        const callback = jest.fn();
        executor.onAgentSpawn(callback);

        executor._emitAgentSpawn('@dev', 'development');

        expect(callback).toHaveBeenCalledWith('@dev', 'development');
      });

      it('should handle callback errors gracefully', () => {
        const failingCallback = jest.fn(() => {
          throw new Error('Callback error');
        });

        executor.onAgentSpawn(failingCallback);

        expect(() => {
          executor._emitAgentSpawn('@dev', 'development');
        }).not.toThrow();
      });
    });

    describe('_emitTerminalSpawn', () => {
      it('should call all registered callbacks with correct args', () => {
        const callback = jest.fn();
        executor.onTerminalSpawn(callback);

        executor._emitTerminalSpawn('@dev', 12345, 'development');

        expect(callback).toHaveBeenCalledWith('@dev', 12345, 'development');
      });

      it('should handle callback errors gracefully', () => {
        const failingCallback = jest.fn(() => {
          throw new Error('Callback error');
        });

        executor.onTerminalSpawn(failingCallback);

        expect(() => {
          executor._emitTerminalSpawn('@dev', 12345, 'development');
        }).not.toThrow();
      });
    });
  });

  describe('Integration with Phase Execution', () => {
    it('should emit phase change when executePhase is called', async () => {
      const phaseCallback = jest.fn();
      executor.onPhaseChange(phaseCallback);

      // Load workflow
      await executor.loadWorkflow();

      // Initialize state
      executor.state = {
        workflowId: 'development-cycle',
        currentPhase: '1_validation',
        currentStory: path.join(tempDir, 'test-story.story.md'),
        executor: '@dev',
        qualityGate: '@qa',
        attemptCount: 0,
        startedAt: new Date(),
        lastUpdated: new Date(),
        phaseResults: {},
        accumulatedContext: {},
      };

      // Create minimal story file
      const storyContent = `
# Test Story

\`\`\`yaml
executor: "@dev"
quality_gate: "@qa"
\`\`\`
`;
      await fs.writeFile(path.join(tempDir, 'test-story.story.md'), storyContent);

      // Execute a phase
      await executor.executePhase('1_validation', path.join(tempDir, 'test-story.story.md'), {});

      // Verify callback was called
      expect(phaseCallback).toHaveBeenCalled();
      const [phase, storyId] = phaseCallback.mock.calls[0];
      expect(phase).toBe('1_validation');
    });
  });

  describe('Callback Array Initialization', () => {
    it('should initialize callback arrays in constructor', () => {
      expect(executor._phaseChangeCallbacks).toBeDefined();
      expect(executor._phaseChangeCallbacks).toEqual([]);

      expect(executor._agentSpawnCallbacks).toBeDefined();
      expect(executor._agentSpawnCallbacks).toEqual([]);

      expect(executor._terminalSpawnCallbacks).toBeDefined();
      expect(executor._terminalSpawnCallbacks).toEqual([]);
    });
  });
});

```

==================================================
📄 tests/core/orchestration/lock-manager.test.js
==================================================
```js
/**
 * Lock Manager Tests
 * Story 12.3: Bob Orchestration Logic - File Locking (AC14-17)
 */

const path = require('path');
const fs = require('fs').promises;
const fsSync = require('fs');
const yaml = require('js-yaml');

const LockManager = require('../../../.aios-core/core/orchestration/lock-manager');

// Test fixtures
const TEST_PROJECT_ROOT = path.join(__dirname, '../../fixtures/test-project-locks');
const LOCKS_DIR = path.join(TEST_PROJECT_ROOT, '.aios/locks');

describe('LockManager', () => {
  let lockManager;

  beforeEach(async () => {
    // Clean up test directory
    try {
      await fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true });
    } catch {
      // Ignore if doesn't exist
    }
    await fs.mkdir(LOCKS_DIR, { recursive: true });

    lockManager = new LockManager(TEST_PROJECT_ROOT, { debug: false });
  });

  afterEach(async () => {
    try {
      await fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true });
    } catch {
      // Ignore cleanup errors
    }
  });

  describe('acquireLock', () => {
    it('should acquire a lock for an available resource', async () => {
      // When
      const result = await lockManager.acquireLock('test-resource');

      // Then
      expect(result).toBe(true);

      // Verify lock file exists with correct schema (AC14)
      const lockPath = path.join(LOCKS_DIR, 'test-resource.lock');
      const content = await fs.readFile(lockPath, 'utf8');
      const lockData = yaml.load(content);

      expect(lockData.pid).toBe(process.pid);
      expect(lockData.owner).toBe('bob-orchestrator');
      expect(lockData.resource).toBe('test-resource');
      expect(lockData.ttl_seconds).toBe(300);
      expect(lockData.created_at).toBeDefined();
    });

    it('should fail to acquire a lock held by another active process', async () => {
      // Given — create a lock with current PID (simulating active lock)
      const lockPath = path.join(LOCKS_DIR, 'busy-resource.lock');
      const lockData = {
        pid: process.pid,
        owner: 'other-module',
        created_at: new Date().toISOString(),
        ttl_seconds: 300,
        resource: 'busy-resource',
      };
      await fs.writeFile(lockPath, yaml.dump(lockData));

      // When — try to acquire with exclusive write flag
      // The lock is owned by same PID but different owner check isn't done in acquire
      // We need a lock from a different context — simulate by making a fresh manager
      const otherManager = new LockManager(TEST_PROJECT_ROOT, {
        debug: false,
        owner: 'another-orchestrator',
      });
      const result = await otherManager.acquireLock('busy-resource');

      // Then — should fail because file already exists (active PID)
      expect(result).toBe(false);
    });

    it('should acquire a lock after removing a stale TTL-expired lock (AC16)', async () => {
      // Given — create a lock with expired TTL
      const lockPath = path.join(LOCKS_DIR, 'expired-resource.lock');
      const expiredDate = new Date(Date.now() - 400 * 1000); // 400 seconds ago
      const lockData = {
        pid: process.pid,
        owner: 'old-module',
        created_at: expiredDate.toISOString(),
        ttl_seconds: 300, // 300s TTL, but 400s elapsed
        resource: 'expired-resource',
      };
      await fs.writeFile(lockPath, yaml.dump(lockData));

      // When
      const result = await lockManager.acquireLock('expired-resource');

      // Then
      expect(result).toBe(true);
    });

    it('should acquire a lock after removing a lock from dead PID (AC17)', async () => {
      // Given — create a lock with a PID that doesn't exist
      const lockPath = path.join(LOCKS_DIR, 'dead-pid-resource.lock');
      const lockData = {
        pid: 999999, // Very unlikely to be a real PID
        owner: 'dead-module',
        created_at: new Date().toISOString(),
        ttl_seconds: 300,
        resource: 'dead-pid-resource',
      };
      await fs.writeFile(lockPath, yaml.dump(lockData));

      // When
      const result = await lockManager.acquireLock('dead-pid-resource');

      // Then
      expect(result).toBe(true);
    });

    it('should accept custom TTL and owner options', async () => {
      // When
      await lockManager.acquireLock('custom-resource', {
        ttlSeconds: 60,
        owner: 'custom-owner',
      });

      // Then
      const lockPath = path.join(LOCKS_DIR, 'custom-resource.lock');
      const content = await fs.readFile(lockPath, 'utf8');
      const lockData = yaml.load(content);

      expect(lockData.ttl_seconds).toBe(60);
      expect(lockData.owner).toBe('custom-owner');
    });
  });

  describe('releaseLock', () => {
    it('should release a lock owned by current process', async () => {
      // Given
      await lockManager.acquireLock('release-test');

      // When
      const result = await lockManager.releaseLock('release-test');

      // Then
      expect(result).toBe(true);
      const lockPath = path.join(LOCKS_DIR, 'release-test.lock');
      expect(fsSync.existsSync(lockPath)).toBe(false);
    });

    it('should return false when no lock exists', async () => {
      // When
      const result = await lockManager.releaseLock('nonexistent');

      // Then
      expect(result).toBe(false);
    });

    it('should refuse to release a lock owned by another PID', async () => {
      // Given — create a lock from another PID
      const lockPath = path.join(LOCKS_DIR, 'other-pid.lock');
      const lockData = {
        pid: 999999,
        owner: 'other',
        created_at: new Date().toISOString(),
        ttl_seconds: 300,
        resource: 'other-pid',
      };
      await fs.writeFile(lockPath, yaml.dump(lockData));

      // When
      const result = await lockManager.releaseLock('other-pid');

      // Then
      expect(result).toBe(false);
      // Lock file should still exist
      expect(fsSync.existsSync(lockPath)).toBe(true);
    });
  });

  describe('isLocked', () => {
    it('should return true for an active lock', async () => {
      // Given
      await lockManager.acquireLock('active-resource');

      // When
      const result = await lockManager.isLocked('active-resource');

      // Then
      expect(result).toBe(true);
    });

    it('should return false for no lock', async () => {
      // When
      const result = await lockManager.isLocked('no-lock');

      // Then
      expect(result).toBe(false);
    });

    it('should return false for a stale lock', async () => {
      // Given — expired TTL lock
      const lockPath = path.join(LOCKS_DIR, 'stale-resource.lock');
      const lockData = {
        pid: 999999,
        owner: 'dead',
        created_at: new Date(Date.now() - 400 * 1000).toISOString(),
        ttl_seconds: 300,
        resource: 'stale-resource',
      };
      await fs.writeFile(lockPath, yaml.dump(lockData));

      // When
      const result = await lockManager.isLocked('stale-resource');

      // Then
      expect(result).toBe(false);
    });
  });

  describe('cleanupStaleLocks', () => {
    it('should remove locks with expired TTL (AC16)', async () => {
      // Given
      const lockPath = path.join(LOCKS_DIR, 'expired.lock');
      const lockData = {
        pid: process.pid,
        owner: 'old',
        created_at: new Date(Date.now() - 600 * 1000).toISOString(), // 10 min ago
        ttl_seconds: 300, // 5 min TTL
        resource: 'expired',
      };
      await fs.writeFile(lockPath, yaml.dump(lockData));

      // When
      const cleaned = await lockManager.cleanupStaleLocks();

      // Then
      expect(cleaned).toBe(1);
      expect(fsSync.existsSync(lockPath)).toBe(false);
    });

    it('should remove locks with dead PIDs (AC17)', async () => {
      // Given
      const lockPath = path.join(LOCKS_DIR, 'dead-pid.lock');
      const lockData = {
        pid: 999999,
        owner: 'dead',
        created_at: new Date().toISOString(),
        ttl_seconds: 300,
        resource: 'dead-pid',
      };
      await fs.writeFile(lockPath, yaml.dump(lockData));

      // When
      const cleaned = await lockManager.cleanupStaleLocks();

      // Then
      expect(cleaned).toBe(1);
      expect(fsSync.existsSync(lockPath)).toBe(false);
    });

    it('should not remove active locks', async () => {
      // Given
      await lockManager.acquireLock('active');

      // When
      const cleaned = await lockManager.cleanupStaleLocks();

      // Then
      expect(cleaned).toBe(0);
      const lockPath = path.join(LOCKS_DIR, 'active.lock');
      expect(fsSync.existsSync(lockPath)).toBe(true);

      // Cleanup
      await lockManager.releaseLock('active');
    });

    it('should handle empty locks directory', async () => {
      // When
      const cleaned = await lockManager.cleanupStaleLocks();

      // Then
      expect(cleaned).toBe(0);
    });
  });

  describe('lock file schema (AC14)', () => {
    it('should write lock files with formal YAML schema', async () => {
      // When
      await lockManager.acquireLock('schema-test');

      // Then
      const lockPath = path.join(LOCKS_DIR, 'schema-test.lock');
      const content = await fs.readFile(lockPath, 'utf8');
      const lockData = yaml.load(content);

      // Validate all required schema fields
      expect(lockData).toHaveProperty('pid');
      expect(lockData).toHaveProperty('owner');
      expect(lockData).toHaveProperty('created_at');
      expect(lockData).toHaveProperty('ttl_seconds');
      expect(lockData).toHaveProperty('resource');

      // Validate types
      expect(typeof lockData.pid).toBe('number');
      expect(typeof lockData.owner).toBe('string');
      expect(typeof lockData.created_at).toBe('string');
      expect(typeof lockData.ttl_seconds).toBe('number');
      expect(typeof lockData.resource).toBe('string');

      // Validate ISO date format
      expect(() => new Date(lockData.created_at)).not.toThrow();
      expect(new Date(lockData.created_at).toISOString()).toBe(lockData.created_at);

      // Cleanup
      await lockManager.releaseLock('schema-test');
    });
  });

  describe('resource name sanitization', () => {
    it('should sanitize resource names with special characters', async () => {
      // When
      const result = await lockManager.acquireLock('my/weird resource.name');

      // Then
      expect(result).toBe(true);
      const sanitizedPath = path.join(LOCKS_DIR, 'my_weird_resource_name.lock');
      expect(fsSync.existsSync(sanitizedPath)).toBe(true);

      // Cleanup
      await lockManager.releaseLock('my/weird resource.name');
    });
  });
});

```

==================================================
📄 tests/core/orchestration/context-manager-handoff.test.js
==================================================
```js
'use strict';

const fs = require('fs-extra');
const path = require('path');
const os = require('os');

const ContextManager = require('../../../.aios-core/core/orchestration/context-manager');

describe('ContextManager structured handoff package', () => {
  let tempDir;
  let manager;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'aios-handoff-'));
    manager = new ContextManager('test-workflow', tempDir);
    await manager.initialize();
  });

  afterEach(async () => {
    await fs.remove(tempDir);
  });

  it('saves standardized handoff package in phase output', async () => {
    await manager.savePhaseOutput(
      1,
      {
        agent: 'architect',
        task: 'spec-write-spec.md',
        result: {
          decisions: [{ id: 'D1', summary: 'Use API gateway' }],
          evidence_links: ['docs/spec.md'],
          open_risks: ['Gateway latency spike'],
        },
        validation: {
          checks: [{ type: 'file_exists', path: 'docs/spec.md', passed: true }],
        },
      },
      { handoffTarget: { phase: 2, agent: 'dev' } },
    );

    const state = await manager.loadState();
    const handoff = state.phases[1].handoff;

    expect(handoff).toBeDefined();
    expect(handoff.workflow_id).toBe('test-workflow');
    expect(handoff.from.phase).toBe(1);
    expect(handoff.from.agent).toBe('architect');
    expect(handoff.to.phase).toBe(2);
    expect(handoff.to.agent).toBe('dev');
    expect(handoff.context_snapshot.current_phase).toBe(1);
    expect(handoff.context_snapshot.workflow_status).toBe('in_progress');
    expect(handoff.decision_log.count).toBe(1);
    expect(handoff.evidence_links).toContain('docs/spec.md');
    expect(handoff.open_risks).toContain('Gateway latency spike');
  });

  it('writes handoff artifact file to handoffs directory', async () => {
    await manager.savePhaseOutput(
      2,
      {
        agent: 'dev',
        task: 'dev-develop-story.md',
        result: { decisions: [] },
      },
      { handoffTarget: { phase: 3, agent: 'qa' } },
    );

    const handoffPath = path.join(
      tempDir,
      '.aios',
      'workflow-state',
      'handoffs',
      'test-workflow-phase-2.handoff.json',
    );
    const exists = await fs.pathExists(handoffPath);
    const content = await fs.readJson(handoffPath);

    expect(exists).toBe(true);
    expect(content.from.phase).toBe(2);
    expect(content.to.agent).toBe('qa');
  });

  it('provides previous handoffs in getContextForPhase', async () => {
    await manager.savePhaseOutput(
      1,
      {
        agent: 'architect',
        result: { decisions: [{ id: 'D1' }] },
      },
      { handoffTarget: { phase: 2, agent: 'dev' } },
    );

    const phase2Context = await manager.getContextForPhase(2);
    expect(phase2Context.previousHandoffs).toBeDefined();
    expect(phase2Context.previousHandoffs['1']).toBeDefined();
    expect(phase2Context.previousHandoffs['1'].to.agent).toBe('dev');
  });

  it('computes and persists delivery confidence score from phase data', async () => {
    await manager.savePhaseOutput(
      1,
      {
        agent: 'dev',
        result: {
          status: 'success',
          ac_total: 4,
          ac_completed: 3,
          open_risks: ['risk-a'],
          technical_debt_count: 1,
        },
        validation: {
          checks: [
            { type: 'unit_test', passed: true },
            { type: 'regression_suite', passed: true },
            { type: 'integration_test', passed: false },
          ],
        },
      },
      { handoffTarget: { phase: 2, agent: 'qa' } },
    );

    const state = await manager.loadState();
    const confidence = state.metadata.delivery_confidence;

    expect(confidence).toBeDefined();
    expect(confidence.version).toBe('1.0.0');
    expect(confidence.score).toBeGreaterThan(0);
    expect(confidence.score).toBeLessThanOrEqual(100);
    expect(confidence.components.test_coverage).toBeCloseTo(2 / 3, 5);
    expect(confidence.components.ac_completion).toBeCloseTo(0.75, 5);
    expect(confidence.components.regression_clear).toBeCloseTo(1, 5);

    const confidencePath = path.join(
      tempDir,
      '.aios',
      'workflow-state',
      'confidence',
      'test-workflow.delivery-confidence.json',
    );
    const persisted = await fs.readJson(confidencePath);
    expect(persisted.score).toBe(confidence.score);
  });

  it('includes delivery confidence in summary output', async () => {
    await manager.savePhaseOutput(
      1,
      {
        agent: 'qa',
        result: { status: 'success' },
        validation: { checks: [{ type: 'regression', passed: true }] },
      },
      { handoffTarget: { phase: 2, agent: 'po' } },
    );

    const summary = manager.getSummary();
    expect(summary.deliveryConfidence).toBeDefined();
    expect(summary.deliveryConfidence.score).toBeGreaterThanOrEqual(0);
  });
});

```

==================================================
📄 tests/core/orchestration/workflow-orchestrator-confidence.test.js
==================================================
```js
'use strict';

const WorkflowOrchestrator = require('../../../.aios-core/core/orchestration/workflow-orchestrator');

describe('WorkflowOrchestrator delivery confidence gate', () => {
  function buildOrchestrator(options = {}, confidence = null) {
    const orchestrator = new WorkflowOrchestrator('/tmp/fake-workflow.yaml', {
      ...options,
      projectRoot: process.cwd(),
    });

    orchestrator.workflow = {
      workflow: { id: 'wf-confidence' },
      sequence: [{ phase: 1 }, { phase: 2 }],
    };
    orchestrator.executionState = {
      startTime: Date.now() - 1000,
      currentPhase: 2,
      completedPhases: [1, 2],
      failedPhases: [],
      skippedPhases: [],
    };
    orchestrator.contextManager = {
      getPreviousPhaseOutputs: () => ({ 1: { result: { status: 'success' } } }),
      getDeliveryConfidence: () => confidence,
    };

    return orchestrator;
  }

  it('marks summary as failed_confidence_gate when score is below threshold', () => {
    const orchestrator = buildOrchestrator({ confidenceThreshold: 80 }, { score: 72 });
    const summary = orchestrator._generateExecutionSummary();

    expect(summary.status).toBe('failed_confidence_gate');
    expect(summary.confidenceGate.enabled).toBe(true);
    expect(summary.confidenceGate.passed).toBe(false);
    expect(summary.confidenceGate.threshold).toBe(80);
  });

  it('keeps completed status when confidence score passes gate', () => {
    const orchestrator = buildOrchestrator({ confidenceThreshold: 70 }, { score: 88.5 });
    const summary = orchestrator._generateExecutionSummary();

    expect(summary.status).toBe('completed');
    expect(summary.confidenceGate.passed).toBe(true);
    expect(summary.deliveryConfidence.score).toBe(88.5);
  });

  it('disables gate when enableConfidenceGate is false', () => {
    const orchestrator = buildOrchestrator({ enableConfidenceGate: false }, { score: 10 });
    const summary = orchestrator._generateExecutionSummary();

    expect(summary.status).toBe('completed');
    expect(summary.confidenceGate).toBeUndefined();
  });
});

```

==================================================
📄 tests/core/orchestration/gemini-model-selector.test.js
==================================================
```js
/**
 * Gemini Model Selector Tests
 * Story GEMINI-INT.16
 */

const {
  GeminiModelSelector,
  MODELS,
  AGENT_OVERRIDES,
} = require('../../../.aios-core/core/orchestration/gemini-model-selector');

describe('GeminiModelSelector', () => {
  let selector;

  beforeEach(() => {
    selector = new GeminiModelSelector();
  });

  describe('MODELS configuration', () => {
    it('should have flash model configured', () => {
      expect(MODELS.flash).toBeDefined();
      expect(MODELS.flash.id).toBe('gemini-2.0-flash');
      expect(MODELS.flash.bestFor).toContain('simple');
    });

    it('should have pro model configured', () => {
      expect(MODELS.pro).toBeDefined();
      expect(MODELS.pro.id).toBe('gemini-2.0-pro');
      expect(MODELS.pro.bestFor).toContain('complex');
    });

    it('should have cost per token for each model', () => {
      expect(MODELS.flash.costPer1kTokens).toBeLessThan(MODELS.pro.costPer1kTokens);
    });
  });

  describe('AGENT_OVERRIDES', () => {
    it('should have overrides for key agents', () => {
      // Keys no longer have @ prefix (normalized)
      expect(AGENT_OVERRIDES['architect']).toBe('pro');
      expect(AGENT_OVERRIDES['qa']).toBe('flash');
      expect(AGENT_OVERRIDES['dev']).toBe('auto');
    });
  });

  describe('selectModel', () => {
    it('should select model based on task complexity', () => {
      const task = { description: 'Fix typo in readme' };

      const result = selector.selectModel(task);

      expect(result).toHaveProperty('model');
      expect(result).toHaveProperty('modelKey');
      expect(result).toHaveProperty('reason');
      expect(result).toHaveProperty('config');
    });

    it('should use agent override when specified', () => {
      const task = { description: 'Design system architecture' };

      const result = selector.selectModel(task, '@architect');

      expect(result.modelKey).toBe('pro');
      expect(result.reason).toBe('agent_override');
    });

    it('should select pro for complex tasks', () => {
      const task = {
        description: 'Design complex architecture with security optimization',
        files: Array(10).fill('file.js'),
        acceptanceCriteria: Array(10).fill('AC'),
      };

      const result = selector.selectModel(task);

      expect(result.modelKey).toBe('pro');
    });
  });

  describe('handleQualityFallback', () => {
    it('should recommend pro when flash quality is low', () => {
      const result = selector.handleQualityFallback('flash', 0.4);

      expect(result).not.toBeNull();
      expect(result.shouldRetry).toBe(true);
      expect(result.newModel).toBe('pro');
    });

    it('should return null when quality is acceptable', () => {
      const result = selector.handleQualityFallback('flash', 0.8);

      expect(result).toBeNull();
    });

    it('should return null for pro model regardless of quality', () => {
      const result = selector.handleQualityFallback('pro', 0.3);

      expect(result).toBeNull();
    });
  });

  describe('trackUsage', () => {
    it('should track flash usage', () => {
      selector.trackUsage('gemini-2.0-flash', 1000);

      const stats = selector.getUsageStats();

      expect(stats.flash.count).toBe(1);
      expect(stats.flash.tokens).toBe(1000);
    });

    it('should track pro usage', () => {
      selector.trackUsage('gemini-2.0-pro', 500);

      const stats = selector.getUsageStats();

      expect(stats.pro.count).toBe(1);
      expect(stats.pro.tokens).toBe(500);
    });

    it('should calculate cost correctly', () => {
      selector.trackUsage('gemini-2.0-flash', 1000);

      const stats = selector.getUsageStats();

      expect(stats.flash.cost).toBeCloseTo(MODELS.flash.costPer1kTokens);
    });
  });

  describe('getUsageStats', () => {
    it('should return complete stats object', () => {
      const stats = selector.getUsageStats();

      expect(stats).toHaveProperty('flash');
      expect(stats).toHaveProperty('pro');
      expect(stats).toHaveProperty('total');
      expect(stats).toHaveProperty('flashRatio');
      expect(stats).toHaveProperty('costSavings');
    });

    it('should calculate flash ratio', () => {
      selector.trackUsage('gemini-2.0-flash', 1000);
      selector.trackUsage('gemini-2.0-flash', 1000);
      selector.trackUsage('gemini-2.0-pro', 1000);

      const stats = selector.getUsageStats();

      expect(stats.flashRatio).toBeCloseTo(0.666, 2);
    });
  });
});

```

==================================================
📄 tests/core/orchestration/bob-status-writer.test.js
==================================================
```js
/**
 * Tests for BobStatusWriter
 *
 * Story 12.6: Observability Panel Integration + Dashboard Bridge
 *
 * Tests:
 * - Schema correctness
 * - Atomic file writes
 * - Incremental updates
 * - Single source of truth
 * - Edge cases
 */

'use strict';

const fs = require('fs-extra');
const path = require('path');
const os = require('os');

const {
  BobStatusWriter,
  BOB_STATUS_SCHEMA,
  BOB_STATUS_VERSION,
  DEFAULT_PIPELINE_STAGES,
  createDefaultBobStatus,
} = require('../../../.aios-core/core/orchestration/bob-status-writer');

describe('BobStatusWriter', () => {
  let tempDir;
  let writer;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'bob-status-test-'));
    writer = new BobStatusWriter(tempDir, { debug: false });
  });

  afterEach(async () => {
    await fs.remove(tempDir);
  });

  describe('constructor', () => {
    it('should throw if projectRoot is not provided', () => {
      expect(() => new BobStatusWriter()).toThrow('projectRoot is required');
    });

    it('should throw if projectRoot is not a string', () => {
      expect(() => new BobStatusWriter(123)).toThrow('projectRoot is required and must be a string');
    });

    it('should set correct paths', () => {
      expect(writer.projectRoot).toBe(tempDir);
      expect(writer.dashboardDir).toBe(path.join(tempDir, '.aios', 'dashboard'));
      expect(writer.statusPath).toBe(path.join(tempDir, '.aios', 'dashboard', 'bob-status.json'));
    });
  });

  describe('initialize', () => {
    it('should create dashboard directory', async () => {
      await writer.initialize();
      const exists = await fs.pathExists(writer.dashboardDir);
      expect(exists).toBe(true);
    });

    it('should create bob-status.json file', async () => {
      await writer.initialize();
      const exists = await fs.pathExists(writer.statusPath);
      expect(exists).toBe(true);
    });

    it('should set orchestration.active to true', async () => {
      await writer.initialize();
      const status = await fs.readJson(writer.statusPath);
      expect(status.orchestration.active).toBe(true);
    });

    it('should include correct schema version', async () => {
      await writer.initialize();
      const status = await fs.readJson(writer.statusPath);
      expect(status.version).toBe(BOB_STATUS_VERSION);
    });
  });

  describe('writeBobStatus', () => {
    it('should write status atomically', async () => {
      await writer.initialize();
      const status = createDefaultBobStatus();
      status.pipeline.current_stage = 'development';

      await writer.writeBobStatus(status);

      const written = await fs.readJson(writer.statusPath);
      expect(written.pipeline.current_stage).toBe('development');
    });

    it('should update timestamp on each write', async () => {
      await writer.initialize();
      const status1 = await fs.readJson(writer.statusPath);

      // Wait a bit to ensure timestamp difference
      await new Promise((resolve) => setTimeout(resolve, 10));

      await writer.writeBobStatus(writer._status);
      const status2 = await fs.readJson(writer.statusPath);

      expect(new Date(status2.timestamp).getTime()).toBeGreaterThan(
        new Date(status1.timestamp).getTime(),
      );
    });

    it('should update elapsed times', async () => {
      await writer.initialize();
      await writer.startStory('12.6');

      // Wait a bit to accumulate time
      await new Promise((resolve) => setTimeout(resolve, 50));

      await writer.writeBobStatus(writer._status);
      const status = await fs.readJson(writer.statusPath);

      expect(status.elapsed.session_seconds).toBeGreaterThanOrEqual(0);
    });
  });

  describe('updatePhase', () => {
    it('should update current_stage', async () => {
      await writer.initialize();
      await writer.updatePhase('development');

      const status = await fs.readJson(writer.statusPath);
      expect(status.pipeline.current_stage).toBe('development');
    });

    it('should update story_progress when provided', async () => {
      await writer.initialize();
      await writer.updatePhase('development', '3/8');

      const status = await fs.readJson(writer.statusPath);
      expect(status.pipeline.story_progress).toBe('3/8');
    });
  });

  describe('completePhase', () => {
    it('should add phase to completed_stages', async () => {
      await writer.initialize();
await writer.completePhase('validation'); + + const status = await fs.readJson(writer.statusPath); + expect(status.pipeline.completed_stages).toContain('validation'); + }); + + it('should not duplicate completed phases', async () => { + await writer.initialize(); + await writer.completePhase('validation'); + await writer.completePhase('validation'); + + const status = await fs.readJson(writer.statusPath); + const count = status.pipeline.completed_stages.filter( + (s) => s === 'validation', + ).length; + expect(count).toBe(1); + }); + }); + + describe('updateAgent', () => { + it('should update current_agent fields', async () => { + await writer.initialize(); + await writer.updateAgent('@dev', 'Dex', 'development', 'Story type: code_general'); + + const status = await fs.readJson(writer.statusPath); + expect(status.current_agent.id).toBe('@dev'); + expect(status.current_agent.name).toBe('Dex'); + expect(status.current_agent.task).toBe('development'); + expect(status.current_agent.reason).toBe('Story type: code_general'); + expect(status.current_agent.started_at).toBeTruthy(); + }); + }); + + describe('clearAgent', () => { + it('should clear current_agent fields', async () => { + await writer.initialize(); + await writer.updateAgent('@dev', 'Dex', 'development', 'reason'); + await writer.clearAgent(); + + const status = await fs.readJson(writer.statusPath); + expect(status.current_agent.id).toBeNull(); + expect(status.current_agent.name).toBeNull(); + expect(status.current_agent.task).toBeNull(); + }); + }); + + describe('addTerminal', () => { + it('should add terminal to active_terminals', async () => { + await writer.initialize(); + await writer.addTerminal('@dev', 12345, 'development'); + + const status = await fs.readJson(writer.statusPath); + expect(status.active_terminals).toHaveLength(1); + expect(status.active_terminals[0].agent).toBe('@dev'); + expect(status.active_terminals[0].pid).toBe(12345); + expect(status.active_terminals[0].task).toBe('development'); + 
}); + }); + + describe('removeTerminal', () => { + it('should remove terminal by pid', async () => { + await writer.initialize(); + await writer.addTerminal('@dev', 12345, 'development'); + await writer.addTerminal('@qa', 67890, 'quality_gate'); + await writer.removeTerminal(12345); + + const status = await fs.readJson(writer.statusPath); + expect(status.active_terminals).toHaveLength(1); + expect(status.active_terminals[0].pid).toBe(67890); + }); + }); + + describe('recordSurfaceDecision', () => { + it('should add surface decision', async () => { + await writer.initialize(); + await writer.recordSurfaceDecision('C003', 'present_options', { foo: 'bar' }); + + const status = await fs.readJson(writer.statusPath); + expect(status.surface_decisions).toHaveLength(1); + expect(status.surface_decisions[0].criteria).toBe('C003'); + expect(status.surface_decisions[0].action).toBe('present_options'); + expect(status.surface_decisions[0].resolved).toBe(false); + }); + }); + + describe('resolveSurfaceDecision', () => { + it('should mark decision as resolved', async () => { + await writer.initialize(); + await writer.recordSurfaceDecision('C003', 'present_options', {}); + await writer.resolveSurfaceDecision('C003'); + + const status = await fs.readJson(writer.statusPath); + expect(status.surface_decisions[0].resolved).toBe(true); + expect(status.surface_decisions[0].resolved_at).toBeTruthy(); + }); + }); + + describe('addError', () => { + it('should add error to errors array', async () => { + await writer.initialize(); + await writer.addError('development', 'Test error', true); + + const status = await fs.readJson(writer.statusPath); + expect(status.errors).toHaveLength(1); + expect(status.errors[0].phase).toBe('development'); + expect(status.errors[0].message).toBe('Test error'); + expect(status.errors[0].recoverable).toBe(true); + }); + }); + + describe('clearErrors', () => { + it('should clear all errors', async () => { + await writer.initialize(); + await 
writer.addError('development', 'Error 1', true); + await writer.addError('qa', 'Error 2', false); + await writer.clearErrors(); + + const status = await fs.readJson(writer.statusPath); + expect(status.errors).toHaveLength(0); + }); + }); + + describe('educational mode', () => { + it('should update educational mode data', async () => { + await writer.initialize(); + await writer.updateEducational({ enabled: true }); + + const status = await fs.readJson(writer.statusPath); + expect(status.educational.enabled).toBe(true); + }); + + it('should add trade-offs', async () => { + await writer.initialize(); + await writer.addTradeoff('JWT vs Session', 'JWT', 'Better for microservices'); + + const status = await fs.readJson(writer.statusPath); + expect(status.educational.tradeoffs).toHaveLength(1); + expect(status.educational.tradeoffs[0].choice).toBe('JWT vs Session'); + expect(status.educational.tradeoffs[0].selected).toBe('JWT'); + }); + }); + + describe('startStory', () => { + it('should set current_story and reset progress', async () => { + await writer.initialize(); + await writer.completePhase('validation'); + await writer.startStory('12.6'); + + const status = await fs.readJson(writer.statusPath); + expect(status.orchestration.current_story).toBe('12.6'); + expect(status.pipeline.completed_stages).toHaveLength(0); + expect(status.elapsed.story_seconds).toBe(0); + }); + }); + + describe('complete', () => { + it('should set orchestration.active to false', async () => { + await writer.initialize(); + await writer.complete(); + + const status = await fs.readJson(writer.statusPath); + expect(status.orchestration.active).toBe(false); + }); + }); + + describe('getStatus', () => { + it('should return copy of current status', async () => { + await writer.initialize(); + const status = writer.getStatus(); + + expect(status.version).toBe(BOB_STATUS_VERSION); + expect(status).not.toBe(writer._status); // Should be a copy + }); + }); + + describe('readStatus', () => { + 
it('should read status from file', async () => { + await writer.initialize(); + await writer.updatePhase('development'); + + const status = await writer.readStatus(); + expect(status.pipeline.current_stage).toBe('development'); + }); + + it('should return null if file does not exist', async () => { + const status = await writer.readStatus(); + expect(status).toBeNull(); + }); + }); +}); + +describe('BOB_STATUS_SCHEMA', () => { + it('should have correct version', () => { + expect(BOB_STATUS_SCHEMA.version).toBe(BOB_STATUS_VERSION); + }); + + it('should have correct stages', () => { + expect(BOB_STATUS_SCHEMA.stages).toEqual(DEFAULT_PIPELINE_STAGES); + }); + + it('should have createDefault function', () => { + const defaultStatus = BOB_STATUS_SCHEMA.createDefault(); + expect(defaultStatus.version).toBe(BOB_STATUS_VERSION); + expect(defaultStatus.pipeline.stages).toEqual(DEFAULT_PIPELINE_STAGES); + }); +}); + +describe('createDefaultBobStatus', () => { + it('should create valid default status', () => { + const status = createDefaultBobStatus(); + + expect(status.version).toBe(BOB_STATUS_VERSION); + expect(status.orchestration.active).toBe(false); + expect(status.orchestration.mode).toBe('bob'); + expect(status.pipeline.stages).toEqual(DEFAULT_PIPELINE_STAGES); + expect(status.pipeline.current_stage).toBeNull(); + expect(status.current_agent.id).toBeNull(); + expect(status.active_terminals).toEqual([]); + expect(status.surface_decisions).toEqual([]); + expect(status.errors).toEqual([]); + expect(status.educational.enabled).toBe(false); + }); +}); + +describe('Edge Cases', () => { + let tempDir; + let writer; + + beforeEach(async () => { + tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'bob-status-edge-')); + writer = new BobStatusWriter(tempDir, { debug: false }); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + it('should handle concurrent writes gracefully', async () => { + await writer.initialize(); + + // Simulate concurrent writes - using 
sequential writes + // to avoid race conditions in temp file creation + for (let i = 0; i < 5; i++) { + await writer.updatePhase(`phase_${i}`); + } + + // File should still be valid JSON + const status = await fs.readJson(writer.statusPath); + expect(status.version).toBe(BOB_STATUS_VERSION); + }); + + it('should handle missing dashboard directory', async () => { + // Don't initialize, just try to write + const status = createDefaultBobStatus(); + await writer.writeBobStatus(status); + + // Should create directory and file + const exists = await fs.pathExists(writer.statusPath); + expect(exists).toBe(true); + }); + + it('should handle partial status updates', async () => { + await writer.initialize(); + + // Update only phase, agent should remain null + await writer.updatePhase('development'); + + const status = await fs.readJson(writer.statusPath); + expect(status.pipeline.current_stage).toBe('development'); + expect(status.current_agent.id).toBeNull(); + }); +}); + +``` + +================================================== +📄 tests/core/orchestration/workflow-orchestrator-profile-propagation.test.js +================================================== +```js +'use strict'; + +const fs = require('fs-extra'); +const os = require('os'); +const path = require('path'); + +const WorkflowOrchestrator = require('../../../.aios-core/core/orchestration/workflow-orchestrator'); + +describe('WorkflowOrchestrator execution profile propagation', () => { + let tempDir; + let workflowPath; + + beforeEach(async () => { + tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'aios-workflow-profile-')); + workflowPath = path.join(tempDir, 'workflow.yaml'); + + await fs.writeFile( + workflowPath, + [ + 'workflow:', + ' id: test-workflow', + ' name: Test Workflow', + 'sequence:', + ' - phase: 1', + ' phase_name: "Build"', + ' agent: "dev"', + ' action: "implement"', + ' task: "dev-develop-story.md"', + ' - phase: 2', + ' phase_name: "Review"', + ' agent: "qa"', + ' action: "review"', + 
].join('\n'), + 'utf8', + ); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + it('passes execution profile and policy to dispatchSubagent context', async () => { + const dispatchSubagent = jest.fn(async () => ({ status: 'success' })); + const orchestrator = new WorkflowOrchestrator(workflowPath, { + projectRoot: tempDir, + executionContext: 'migration', + dispatchSubagent, + }); + + await orchestrator.loadWorkflow(); + + // Isolate this test to dispatch payload semantics. + orchestrator.preparePhase = jest.fn(async () => ({})); + orchestrator.validatePhaseOutput = jest.fn(async () => ({ passed: true, checks: [], errors: [] })); + orchestrator.promptBuilder.buildPrompt = jest.fn(async () => 'test prompt'); + orchestrator.contextManager.getContextForPhase = jest.fn(async () => ({ + workflowId: 'test-workflow', + currentPhase: 1, + previousPhases: {}, + metadata: {}, + })); + orchestrator.contextManager.savePhaseOutput = jest.fn(async () => {}); + + await orchestrator._executeSinglePhase({ + phase: 1, + phase_name: 'Build', + agent: 'dev', + action: 'implement', + task: 'dev-develop-story.md', + }); + + expect(dispatchSubagent).toHaveBeenCalledTimes(1); + const payload = dispatchSubagent.mock.calls[0][0]; + + expect(payload.context).toBeDefined(); + expect(payload.context.executionProfile).toBe('balanced'); + expect(payload.context.executionPolicy).toBeDefined(); + expect(payload.context.executionPolicy.max_parallel_changes).toBe(3); + expect(payload.baseContext).toBeDefined(); + }); +}); + +``` + +================================================== +📄 tests/core/orchestration/task-complexity-classifier.test.js +================================================== +```js +/** + * Task Complexity Classifier Tests + * Story GEMINI-INT.16 + */ + +const { + TaskComplexityClassifier, + COMPLEXITY_INDICATORS, +} = require('../../../.aios-core/core/orchestration/task-complexity-classifier'); + +describe('TaskComplexityClassifier', () => { + let 
classifier; + + beforeEach(() => { + classifier = new TaskComplexityClassifier(); + }); + + describe('COMPLEXITY_INDICATORS', () => { + it('should have indicators for all complexity levels', () => { + expect(COMPLEXITY_INDICATORS.simple).toBeDefined(); + expect(COMPLEXITY_INDICATORS.medium).toBeDefined(); + expect(COMPLEXITY_INDICATORS.complex).toBeDefined(); + }); + + it('should have keywords for each level', () => { + expect(COMPLEXITY_INDICATORS.simple.keywords).toContain('format'); + expect(COMPLEXITY_INDICATORS.medium.keywords).toContain('implement'); + expect(COMPLEXITY_INDICATORS.complex.keywords).toContain('architecture'); + }); + }); + + describe('classify', () => { + it('should classify simple tasks', () => { + const task = { + description: 'Fix typo in readme file', + files: ['README.md'], + acceptanceCriteria: ['Fix spelling'], + }; + + const result = classifier.classify(task); + + expect(result).toHaveProperty('level'); + expect(result).toHaveProperty('score'); + expect(result).toHaveProperty('scores'); + expect(result).toHaveProperty('confidence'); + }); + + it('should classify complex tasks', () => { + const task = { + description: 'Design new architecture for security system with performance optimization', + files: ['src/a.js', 'src/b.js', 'src/c.js', 'src/d.js', 'src/e.js', 'src/f.js'], + acceptanceCriteria: ['AC1', 'AC2', 'AC3', 'AC4', 'AC5', 'AC6', 'AC7', 'AC8'], + }; + + const result = classifier.classify(task); + + expect(result.level).toBe('complex'); + expect(result.score).toBeGreaterThan(0.5); + }); + + it('should handle empty task description', () => { + const task = { description: '' }; + + const result = classifier.classify(task); + + expect(result).toHaveProperty('level'); + expect(['simple', 'medium', 'complex']).toContain(result.level); + }); + + it('should return confidence score', () => { + const task = { + description: 'Implement new feature for user authentication', + files: ['auth.js'], + }; + + const result = 
classifier.classify(task); + + expect(result.confidence).toBeGreaterThanOrEqual(0); + expect(result.confidence).toBeLessThanOrEqual(1); + }); + }); + + describe('thresholds', () => { + it('should use default thresholds', () => { + expect(classifier.thresholds.simple).toBe(0.3); + expect(classifier.thresholds.complex).toBe(0.7); + }); + + it('should accept custom thresholds', () => { + const custom = new TaskComplexityClassifier({ + simpleThreshold: 0.2, + complexThreshold: 0.8, + }); + + expect(custom.thresholds.simple).toBe(0.2); + expect(custom.thresholds.complex).toBe(0.8); + }); + }); +}); + +``` + +================================================== +📄 tests/core/orchestration/epic-context-accumulator.test.js +================================================== +```js +/** + * Epic Context Accumulator Tests + * Story 12.4: Progressive summarization with token control + */ + +const { + EpicContextAccumulator, + createEpicContextAccumulator, + CompressionLevel, + COMPRESSION_FIELDS, + estimateTokens, + getCompressionLevel, + buildFileIndex, + hasFileOverlap, + truncateToTokens, + formatStoryEntry, + TOKEN_LIMIT, + HARD_CAP_PER_STORY, + CHARS_PER_TOKEN, +} = require('../../../.aios-core/core/orchestration/epic-context-accumulator'); + +// Helper: create a mock SessionState with stories_done +function createMockSessionState(storiesDone = [], extraContext = {}) { + return { + state: { + session_state: { + progress: { + current_story: 'story-next', + stories_done: storiesDone, + stories_pending: [], + }, + context_snapshot: { + files_modified: 0, + executor_distribution: extraContext.executor_distribution || {}, + last_executor: extraContext.last_executor || null, + branch: 'main', + }, + epic: { + id: extraContext.epicId || '12', + title: extraContext.epicTitle || 'Test Epic', + total_stories: extraContext.totalStories || storiesDone.length + 1, + }, + }, + }, + }; +} + +// Helper: create a story object +function createStory(overrides = {}) { + return { + id: 
'story-1', + title: 'Test Story', + executor: '@dev', + quality_gate: '@qa', + status: 'completed', + acceptance_criteria: 'AC1: Do something', + files_modified: ['src/index.js'], + dev_notes: 'Implementation notes', + ...overrides, + }; +} + +describe('EpicContextAccumulator', () => { + describe('Constants', () => { + it('should export TOKEN_LIMIT as 8000', () => { + expect(TOKEN_LIMIT).toBe(8000); + }); + + it('should export HARD_CAP_PER_STORY as 600', () => { + expect(HARD_CAP_PER_STORY).toBe(600); + }); + + it('should export CHARS_PER_TOKEN as 3.5', () => { + expect(CHARS_PER_TOKEN).toBe(3.5); + }); + + it('should define three compression levels', () => { + expect(CompressionLevel.FULL_DETAIL).toBe('full_detail'); + expect(CompressionLevel.METADATA_PLUS_FILES).toBe('metadata_plus_files'); + expect(CompressionLevel.METADATA_ONLY).toBe('metadata_only'); + }); + }); + + describe('estimateTokens()', () => { + it('should estimate tokens as ceil(length / 3.5)', () => { + // 7 chars / 3.5 = 2 tokens + expect(estimateTokens('1234567')).toBe(2); + }); + + it('should round up fractional tokens', () => { + // 8 chars / 3.5 = 2.28... 
→ 3 tokens + expect(estimateTokens('12345678')).toBe(3); + }); + + it('should return 0 for empty string', () => { + expect(estimateTokens('')).toBe(0); + }); + + it('should return 0 for null/undefined', () => { + expect(estimateTokens(null)).toBe(0); + expect(estimateTokens(undefined)).toBe(0); + }); + + it('should handle long text correctly', () => { + const text = 'a'.repeat(3500); + expect(estimateTokens(text)).toBe(1000); + }); + }); + + describe('getCompressionLevel()', () => { + it('should return full_detail for N-1 (distance 1)', () => { + expect(getCompressionLevel(9, 10)).toBe(CompressionLevel.FULL_DETAIL); + }); + + it('should return full_detail for N-2 (distance 2)', () => { + expect(getCompressionLevel(8, 10)).toBe(CompressionLevel.FULL_DETAIL); + }); + + it('should return full_detail for N-3 (distance 3)', () => { + expect(getCompressionLevel(7, 10)).toBe(CompressionLevel.FULL_DETAIL); + }); + + it('should return metadata_plus_files for N-4 (distance 4)', () => { + expect(getCompressionLevel(6, 10)).toBe(CompressionLevel.METADATA_PLUS_FILES); + }); + + it('should return metadata_plus_files for N-6 (distance 6)', () => { + expect(getCompressionLevel(4, 10)).toBe(CompressionLevel.METADATA_PLUS_FILES); + }); + + it('should return metadata_only for N-7 (distance 7)', () => { + expect(getCompressionLevel(3, 10)).toBe(CompressionLevel.METADATA_ONLY); + }); + + it('should return metadata_only for very old stories (distance 20)', () => { + expect(getCompressionLevel(0, 20)).toBe(CompressionLevel.METADATA_ONLY); + }); + + it('should return metadata_only for distance 0 (current story)', () => { + expect(getCompressionLevel(10, 10)).toBe(CompressionLevel.METADATA_ONLY); + }); + }); + + describe('COMPRESSION_FIELDS', () => { + it('should include all fields for full_detail', () => { + const fields = COMPRESSION_FIELDS[CompressionLevel.FULL_DETAIL]; + expect(fields).toContain('id'); + expect(fields).toContain('title'); + expect(fields).toContain('executor'); + 
expect(fields).toContain('quality_gate'); + expect(fields).toContain('status'); + expect(fields).toContain('acceptance_criteria'); + expect(fields).toContain('files_modified'); + expect(fields).toContain('dev_notes'); + }); + + it('should include essential + files for metadata_plus_files', () => { + const fields = COMPRESSION_FIELDS[CompressionLevel.METADATA_PLUS_FILES]; + expect(fields).toEqual(['id', 'title', 'executor', 'status', 'files_modified']); + }); + + it('should include only identifiers for metadata_only', () => { + const fields = COMPRESSION_FIELDS[CompressionLevel.METADATA_ONLY]; + expect(fields).toEqual(['id', 'executor', 'status']); + }); + }); + + describe('buildFileIndex()', () => { + it('should build Map from stories with files_modified', () => { + const stories = [ + { id: 's1', files_modified: ['src/a.js', 'src/b.js'] }, + { id: 's2', files_modified: ['src/b.js', 'src/c.js'] }, + ]; + + const index = buildFileIndex(stories); + + expect(index).toBeInstanceOf(Map); + expect(index.get('src/a.js')).toEqual(new Set(['s1'])); + expect(index.get('src/b.js')).toEqual(new Set(['s1', 's2'])); + expect(index.get('src/c.js')).toEqual(new Set(['s2'])); + }); + + it('should handle stories without files_modified', () => { + const stories = [ + { id: 's1' }, + { id: 's2', files_modified: null }, + { id: 's3', files_modified: ['src/a.js'] }, + ]; + + const index = buildFileIndex(stories); + expect(index.size).toBe(1); + expect(index.get('src/a.js')).toEqual(new Set(['s3'])); + }); + + it('should return empty Map for empty stories array', () => { + const index = buildFileIndex([]); + expect(index.size).toBe(0); + }); + + it('should provide O(1) lookup', () => { + const stories = [{ id: 's1', files_modified: ['src/x.js'] }]; + const index = buildFileIndex(stories); + // Map.has() is O(1) + expect(index.has('src/x.js')).toBe(true); + expect(index.has('src/y.js')).toBe(false); + }); + }); + + describe('hasFileOverlap()', () => { + it('should detect overlap with Set 
target', () => { + const storyFiles = ['src/a.js', 'src/b.js']; + const targetFiles = new Set(['src/b.js', 'src/c.js']); + expect(hasFileOverlap(storyFiles, targetFiles)).toBe(true); + }); + + it('should detect no overlap', () => { + const storyFiles = ['src/a.js']; + const targetFiles = new Set(['src/c.js']); + expect(hasFileOverlap(storyFiles, targetFiles)).toBe(false); + }); + + it('should handle Map target (file index)', () => { + const index = new Map([['src/a.js', new Set(['s1'])]]); + expect(hasFileOverlap(['src/a.js'], index)).toBe(true); + expect(hasFileOverlap(['src/z.js'], index)).toBe(false); + }); + + it('should handle array target', () => { + expect(hasFileOverlap(['src/a.js'], ['src/a.js', 'src/b.js'])).toBe(true); + expect(hasFileOverlap(['src/a.js'], ['src/c.js'])).toBe(false); + }); + + it('should return false for null/empty storyFiles', () => { + expect(hasFileOverlap(null, new Set(['a']))).toBe(false); + expect(hasFileOverlap([], new Set(['a']))).toBe(false); + expect(hasFileOverlap(undefined, new Set(['a']))).toBe(false); + }); + }); + + describe('truncateToTokens()', () => { + it('should not truncate if within limit', () => { + const text = 'short text'; + expect(truncateToTokens(text, 100)).toBe(text); + }); + + it('should truncate and add ellipsis when over limit', () => { + const text = 'a'.repeat(100); + // 10 tokens * 3.5 = 35 chars max + const result = truncateToTokens(text, 10); + expect(result).toBe('a'.repeat(35) + '...'); + }); + + it('should return empty string for null/undefined', () => { + expect(truncateToTokens(null, 10)).toBe(''); + expect(truncateToTokens(undefined, 10)).toBe(''); + }); + }); + + describe('formatStoryEntry()', () => { + const story = createStory(); + + it('should format full_detail with all fields', () => { + const entry = formatStoryEntry(story, CompressionLevel.FULL_DETAIL); + expect(entry).toContain('id: story-1'); + expect(entry).toContain('title: Test Story'); + expect(entry).toContain('executor: @dev'); 
      expect(entry).toContain('quality_gate: @qa');
      expect(entry).toContain('status: completed');
      expect(entry).toContain('acceptance_criteria: AC1: Do something');
      expect(entry).toContain('files_modified: [src/index.js]');
      expect(entry).toContain('dev_notes: Implementation notes');
    });

    it('should format metadata_plus_files with 5 fields', () => {
      const entry = formatStoryEntry(story, CompressionLevel.METADATA_PLUS_FILES);
      expect(entry).toContain('id: story-1');
      expect(entry).toContain('title: Test Story');
      expect(entry).toContain('executor: @dev');
      expect(entry).toContain('status: completed');
      expect(entry).toContain('files_modified: [src/index.js]');
      expect(entry).not.toContain('quality_gate');
      expect(entry).not.toContain('dev_notes');
    });

    it('should format metadata_only with 3 fields', () => {
      const entry = formatStoryEntry(story, CompressionLevel.METADATA_ONLY);
      expect(entry).toContain('id: story-1');
      expect(entry).toContain('executor: @dev');
      expect(entry).toContain('status: completed');
      expect(entry).not.toContain('title');
      expect(entry).not.toContain('files_modified');
    });

    it('should enforce hard cap of 600 tokens per story', () => {
      const bigStory = createStory({
        dev_notes: 'x'.repeat(5000),
        acceptance_criteria: 'y'.repeat(5000),
      });
      const entry = formatStoryEntry(bigStory, CompressionLevel.FULL_DETAIL);
      const tokens = estimateTokens(entry);
      expect(tokens).toBeLessThanOrEqual(HARD_CAP_PER_STORY + 1); // +1 for rounding
    });

    it('should skip undefined/null fields', () => {
      const minimal = { id: 's1', executor: '@dev', status: 'done' };
      const entry = formatStoryEntry(minimal, CompressionLevel.FULL_DETAIL);
      expect(entry).toBe('id: s1 | executor: @dev | status: done');
    });
  });

  describe('EpicContextAccumulator class', () => {
    describe('constructor', () => {
      it('should accept a sessionState and store it', () => {
        const mockState = createMockSessionState();
        const acc = new
EpicContextAccumulator(mockState); + expect(acc.sessionState).toBe(mockState); + expect(acc.fileIndex).toBeNull(); + }); + }); + + describe('createEpicContextAccumulator()', () => { + it('should create instance via factory', () => { + const mockState = createMockSessionState(); + const acc = createEpicContextAccumulator(mockState); + expect(acc).toBeInstanceOf(EpicContextAccumulator); + }); + }); + + describe('buildAccumulatedContext()', () => { + it('should return empty string when no state', () => { + const acc = new EpicContextAccumulator({ state: null }); + expect(acc.buildAccumulatedContext('12', 5)).toBe(''); + }); + + it('should return empty string when no stories done', () => { + const mockState = createMockSessionState([]); + const acc = new EpicContextAccumulator(mockState); + expect(acc.buildAccumulatedContext('12', 0)).toBe(''); + }); + + it('should build context with single story', () => { + const stories = [createStory({ id: 'story-12.1' })]; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 1); + + expect(result).toContain('Epic 12 Context'); + expect(result).toContain('1 stories completed'); + expect(result).toContain('story-12.1'); + }); + + it('should include executor distribution in context', () => { + const stories = [createStory()]; + const mockState = createMockSessionState(stories, { + executor_distribution: { '@dev': 3, '@qa': 1 }, + }); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 1); + expect(result).toContain('@dev: 3'); + expect(result).toContain('@qa: 1'); + }); + + it('should apply correct compression levels based on distance', () => { + // Create 10 stories + const stories = Array.from({ length: 10 }, (_, i) => + createStory({ + id: `story-${i}`, + title: `Story ${i}`, + files_modified: [`src/file${i}.js`], + }), + ); + const mockState = 
createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 10); + + // Recent (N-1, N-2, N-3) = stories 9, 8, 7 → full_detail (has quality_gate) + expect(result).toContain('story-9'); + expect(result).toContain('story-8'); + expect(result).toContain('story-7'); + + // Old stories (N-7+) = stories 0, 1, 2, 3 → metadata_only (no title) + // These should NOT contain 'title: Story 0' etc. + const lines = result.split('\n'); + const story0Line = lines.find(l => l.includes('story-0')); + if (story0Line) { + expect(story0Line).not.toContain('title:'); + } + }); + }); + + describe('Exception: file overlap', () => { + it('should upgrade metadata_only to metadata_plus_files on file overlap', () => { + // Story at index 0, storyN = 10 → distance 10 → metadata_only + const stories = [ + createStory({ id: 'old-story', files_modified: ['src/shared.js'] }), + ...Array.from({ length: 9 }, (_, i) => + createStory({ id: `story-${i + 1}`, files_modified: [`src/file${i}.js`] }), + ), + ]; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 10, { + filesToModify: ['src/shared.js'], + }); + + // old-story should have been upgraded to metadata_plus_files (has title) + const lines = result.split('\n'); + const oldStoryLine = lines.find(l => l.includes('old-story')); + expect(oldStoryLine).toContain('title:'); + expect(oldStoryLine).toContain('files_modified:'); + }); + + it('should NOT upgrade to full_detail on file overlap', () => { + const stories = [ + createStory({ id: 'old-story', files_modified: ['src/shared.js'] }), + ]; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 10, { + filesToModify: ['src/shared.js'], + }); + + // Should be metadata_plus_files, NOT full_detail (no quality_gate) + 
const lines = result.split('\n'); + const storyLine = lines.find(l => l.includes('old-story')); + expect(storyLine).not.toContain('quality_gate:'); + }); + }); + + describe('Exception: executor match', () => { + it('should upgrade metadata_only to metadata_plus_files on executor match', () => { + const stories = [ + createStory({ id: 'old-story', executor: '@dev' }), + ]; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 10, { + executor: '@dev', + }); + + const lines = result.split('\n'); + const storyLine = lines.find(l => l.includes('old-story')); + expect(storyLine).toContain('title:'); + }); + + it('should NOT upgrade if executor does not match', () => { + const stories = [ + createStory({ id: 'old-story', executor: '@qa' }), + ]; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 10, { + executor: '@dev', + }); + + const lines = result.split('\n'); + const storyLine = lines.find(l => l.includes('old-story')); + // metadata_only → no title + expect(storyLine).not.toContain('title:'); + }); + + it('should NOT upgrade full_detail stories (no downgrade from exceptions)', () => { + // Story at index 9, storyN = 10 → distance 1 → full_detail + const stories = Array.from({ length: 10 }, (_, i) => + createStory({ id: `story-${i}`, executor: '@dev' }), + ); + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 10, { executor: '@dev' }); + + // Recent story should still be full_detail + const lines = result.split('\n'); + const recentLine = lines.find(l => l.includes('story-9')); + expect(recentLine).toContain('quality_gate:'); + }); + }); + + describe('Compression cascade', () => { + it('should fit within 8000 token limit', () => { + // Create many 
stories with large content to trigger cascade + const stories = Array.from({ length: 20 }, (_, i) => + createStory({ + id: `story-${i}`, + title: `Story ${i} with long title padding ${'x'.repeat(50)}`, + dev_notes: `Notes for story ${i} ${'detail '.repeat(200)}`, + acceptance_criteria: `AC for story ${i} ${'criteria '.repeat(200)}`, + files_modified: Array.from({ length: 10 }, (_, j) => `src/module${i}/file${j}.js`), + }), + ); + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 20); + const tokens = estimateTokens(result); + + expect(tokens).toBeLessThanOrEqual(TOKEN_LIMIT); + }); + + it('should preserve recent stories even when cascading', () => { + const stories = Array.from({ length: 20 }, (_, i) => + createStory({ + id: `story-${i}`, + dev_notes: 'x'.repeat(500), + acceptance_criteria: 'y'.repeat(500), + files_modified: Array.from({ length: 5 }, (_, j) => `src/f${i}_${j}.js`), + }), + ); + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 20); + + // Most recent stories should still be present + expect(result).toContain('story-19'); + expect(result).toContain('story-18'); + expect(result).toContain('story-17'); + }); + }); + + describe('getFileIndex()', () => { + it('should return null before buildAccumulatedContext', () => { + const mockState = createMockSessionState([]); + const acc = new EpicContextAccumulator(mockState); + expect(acc.getFileIndex()).toBeNull(); + }); + + it('should return built index after buildAccumulatedContext', () => { + const stories = [createStory({ files_modified: ['src/a.js'] })]; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + acc.buildAccumulatedContext('12', 1); + + const index = acc.getFileIndex(); + expect(index).toBeInstanceOf(Map); + 
expect(index.has('src/a.js')).toBe(true); + }); + }); + + describe('Edge cases', () => { + it('should handle story N = 0 (first story, no prior context)', () => { + const mockState = createMockSessionState([]); + const acc = new EpicContextAccumulator(mockState); + const result = acc.buildAccumulatedContext('12', 0); + expect(result).toBe(''); + }); + + it('should handle story N = 1 (only one prior story)', () => { + const stories = [createStory({ id: 'story-0' })]; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 1); + expect(result).toContain('story-0'); + }); + + it('should handle stories as string IDs (not objects)', () => { + const stories = ['story-1', 'story-2', 'story-3']; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 3); + expect(result).toContain('story-1'); + }); + + it('should handle conflicting exceptions (both file overlap and executor)', () => { + const stories = [ + createStory({ + id: 'old-story', + executor: '@dev', + files_modified: ['src/shared.js'], + }), + ]; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 10, { + filesToModify: ['src/shared.js'], + executor: '@dev', + }); + + // Should be metadata_plus_files (upgraded), not full_detail + const lines = result.split('\n'); + const storyLine = lines.find(l => l.includes('old-story')); + expect(storyLine).toContain('title:'); + expect(storyLine).not.toContain('quality_gate:'); + }); + + it('should handle stories with empty files_modified', () => { + const stories = [createStory({ files_modified: [] })]; + const mockState = createMockSessionState(stories); + const acc = new EpicContextAccumulator(mockState); + + const result = acc.buildAccumulatedContext('12', 1, { + 
filesToModify: ['src/something.js'], + }); + + expect(result).toContain('story-1'); + }); + }); + }); +}); + +``` + +================================================== +📄 tests/core/orchestration/message-formatter.test.js +================================================== +```js +/** + * Tests for Message Formatter + * + * Story 12.7: Modo Educativo (Opt-in) + * + * Tests cover: + * - formatActionResult: concise (OFF) vs detailed (ON) + * - formatDecisionExplanation: silence (OFF) vs detailed (ON) + * - formatAgentAssignment: silence (OFF) vs explained (ON) + * - formatToggleFeedback: enable/disable messages + * - formatError: basic vs detailed with context + */ + +'use strict'; + +const { MessageFormatter, createMessageFormatter } = require('../../../.aios-core/core/orchestration/message-formatter'); + +describe('MessageFormatter', () => { + describe('constructor', () => { + it('should create instance with educationalMode OFF by default', () => { + const formatter = new MessageFormatter(); + expect(formatter.isEducationalMode()).toBe(false); + }); + + it('should create instance with educationalMode ON when specified', () => { + const formatter = new MessageFormatter({ educationalMode: true }); + expect(formatter.isEducationalMode()).toBe(true); + }); + }); + + describe('setEducationalMode', () => { + it('should toggle educational mode', () => { + const formatter = new MessageFormatter(); + expect(formatter.isEducationalMode()).toBe(false); + + formatter.setEducationalMode(true); + expect(formatter.isEducationalMode()).toBe(true); + + formatter.setEducationalMode(false); + expect(formatter.isEducationalMode()).toBe(false); + }); + + it('should coerce truthy values to boolean', () => { + const formatter = new MessageFormatter(); + formatter.setEducationalMode(1); + expect(formatter.isEducationalMode()).toBe(true); + + formatter.setEducationalMode(0); + expect(formatter.isEducationalMode()).toBe(false); + }); + }); + + describe('formatActionResult', () => { + 
describe('educational mode OFF', () => { + let formatter; + + beforeEach(() => { + formatter = new MessageFormatter({ educationalMode: false }); + }); + + it('should return concise message with files created', () => { + const result = formatter.formatActionResult('Autenticação JWT', { + filesCreated: 4, + }); + expect(result).toBe('✅ Autenticação JWT implementada. 4 arquivos criados.'); + }); + + it('should return concise message with files modified', () => { + const result = formatter.formatActionResult('Refatoração', { + filesModified: 2, + }); + expect(result).toBe('✅ Refatoração implementada. 2 arquivos modificados.'); + }); + + it('should return concise message with both created and modified', () => { + const result = formatter.formatActionResult('Feature X', { + filesCreated: 3, + filesModified: 2, + }); + expect(result).toBe('✅ Feature X implementada. 3 arquivos criados, 2 modificados.'); + }); + + it('should return concise message with no files', () => { + const result = formatter.formatActionResult('Config update', {}); + expect(result).toBe('✅ Config update implementada. Concluído.'); + }); + + it('should handle singular file counts', () => { + const result = formatter.formatActionResult('Fix', { + filesCreated: 1, + filesModified: 1, + }); + expect(result).toBe('✅ Fix implementada. 
1 arquivo criado, 1 modificado.'); + }); + }); + + describe('educational mode ON', () => { + let formatter; + + beforeEach(() => { + formatter = new MessageFormatter({ educationalMode: true }); + }); + + it('should return detailed message with reason', () => { + const result = formatter.formatActionResult('Autenticação JWT', { + filesCreated: 4, + reason: 'JWT é stateless e escalável', + }); + expect(result).toContain('Vou implementar Autenticação JWT.'); + expect(result).toContain('📚 Por quê?'); + expect(result).toContain('JWT é stateless e escalável'); + }); + + it('should include steps when provided', () => { + const result = formatter.formatActionResult('API Endpoint', { + steps: [ + 'Criar handler', + 'Adicionar validação', + 'Implementar testes', + ], + }); + expect(result).toContain('🔧 O que vou fazer:'); + expect(result).toContain('1. Criar handler'); + expect(result).toContain('2. Adicionar validação'); + expect(result).toContain('3. Implementar testes'); + }); + + it('should include agents when provided', () => { + const result = formatter.formatActionResult('Database Migration', { + agents: [ + { id: '@data-engineer', name: 'Dara', task: 'Create migration' }, + { id: '@dev', name: 'Dex', task: 'Update models' }, + ], + }); + expect(result).toContain('👥 Agentes envolvidos:'); + expect(result).toContain('@data-engineer (Dara): Create migration'); + expect(result).toContain('@dev (Dex): Update models'); + }); + + it('should include tradeoffs when provided', () => { + const result = formatter.formatActionResult('Auth System', { + tradeoffs: [ + { choice: 'JWT vs Session', selected: 'JWT', reason: 'Scalability' }, + ], + }); + expect(result).toContain('Trade-offs:'); + expect(result).toContain('JWT vs Session: JWT'); + expect(result).toContain('Motivo: Scalability'); + }); + + it('should include file summary', () => { + const result = formatter.formatActionResult('Feature', { + filesCreated: 2, + filesModified: 3, + }); + expect(result).toContain('📁 Arquivos: 
2 criados, 3 modificados'); + }); + }); + }); + + describe('formatDecisionExplanation', () => { + it('should return empty string when educational mode is OFF', () => { + const formatter = new MessageFormatter({ educationalMode: false }); + const result = formatter.formatDecisionExplanation('Use JWT', [ + { choice: 'Auth method', selected: 'JWT', reason: 'Stateless' }, + ]); + expect(result).toBe(''); + }); + + it('should return detailed explanation when educational mode is ON', () => { + const formatter = new MessageFormatter({ educationalMode: true }); + const result = formatter.formatDecisionExplanation('Use JWT', [ + { choice: 'Auth method', selected: 'JWT', reason: 'Stateless' }, + ]); + expect(result).toContain('💡 Decisão: Use JWT'); + expect(result).toContain('📊 Trade-offs considerados:'); + expect(result).toContain('Auth method'); + expect(result).toContain('→ Escolhido: JWT'); + expect(result).toContain('→ Motivo: Stateless'); + }); + + it('should handle empty tradeoffs', () => { + const formatter = new MessageFormatter({ educationalMode: true }); + const result = formatter.formatDecisionExplanation('Simple choice', []); + expect(result).toContain('💡 Decisão: Simple choice'); + expect(result).not.toContain('📊 Trade-offs'); + }); + }); + + describe('formatAgentAssignment', () => { + it('should return empty string when educational mode is OFF', () => { + const formatter = new MessageFormatter({ educationalMode: false }); + const result = formatter.formatAgentAssignment('@dev', 'Dex', 'Implement feature'); + expect(result).toBe(''); + }); + + it('should return explanation when educational mode is ON', () => { + const formatter = new MessageFormatter({ educationalMode: true }); + const result = formatter.formatAgentAssignment('@dev', 'Dex', 'Implement feature', 'Best for code implementation'); + expect(result).toContain('🤖 @dev (Dex) assumindo: Implement feature'); + expect(result).toContain('Por quê: Best for code implementation'); + }); + + it('should work 
without reason', () => { + const formatter = new MessageFormatter({ educationalMode: true }); + const result = formatter.formatAgentAssignment('@qa', 'Quinn', 'Run tests'); + expect(result).toContain('🤖 @qa (Quinn) assumindo: Run tests'); + expect(result).not.toContain('Por quê:'); + }); + }); + + describe('formatToggleFeedback', () => { + const formatter = new MessageFormatter(); + + it('should return enable message', () => { + const result = formatter.formatToggleFeedback(true); + expect(result).toContain('🎓 Modo educativo ativado!'); + expect(result).toContain('explicações detalhadas'); + }); + + it('should return disable message', () => { + const result = formatter.formatToggleFeedback(false); + expect(result).toContain('📋 Modo educativo desativado'); + expect(result).toContain('concisas'); + }); + }); + + describe('formatPersistencePrompt', () => { + it('should return persistence choice prompt', () => { + const formatter = new MessageFormatter(); + const result = formatter.formatPersistencePrompt(); + expect(result).toContain('Ativar apenas para esta sessão ou permanentemente?'); + expect(result).toContain('[1] Sessão'); + expect(result).toContain('[2] Permanente'); + }); + }); + + describe('formatPhaseTransition', () => { + it('should return empty string when educational mode is OFF', () => { + const formatter = new MessageFormatter({ educationalMode: false }); + const result = formatter.formatPhaseTransition('development', '12.7', '@dev'); + expect(result).toBe(''); + }); + + it('should return phase info when educational mode is ON', () => { + const formatter = new MessageFormatter({ educationalMode: true }); + const result = formatter.formatPhaseTransition('development', '12.7', '@dev'); + expect(result).toContain('📍 Fase: development → Story 12.7'); + expect(result).toContain('Executor: @dev'); + }); + }); + + describe('formatError', () => { + it('should return basic error in OFF mode', () => { + const formatter = new MessageFormatter({ educationalMode: 
false }); + const result = formatter.formatError('Test failed', { phase: 'qa', agent: '@qa' }); + expect(result).toBe('❌ Erro: Test failed\n'); + }); + + it('should return detailed error in ON mode', () => { + const formatter = new MessageFormatter({ educationalMode: true }); + const result = formatter.formatError('Test failed', { + phase: 'qa', + agent: '@qa', + suggestion: 'Check test configuration', + }); + expect(result).toContain('❌ Erro: Test failed'); + expect(result).toContain('Fase: qa'); + expect(result).toContain('Agente: @qa'); + expect(result).toContain('💡 Sugestão: Check test configuration'); + }); + }); + + describe('createMessageFormatter factory', () => { + it('should create MessageFormatter instance', () => { + const formatter = createMessageFormatter({ educationalMode: true }); + expect(formatter).toBeInstanceOf(MessageFormatter); + expect(formatter.isEducationalMode()).toBe(true); + }); + }); +}); + +``` + +================================================== +📄 tests/core/orchestration/data-lifecycle-manager.test.js +================================================== +```js +/** + * Data Lifecycle Manager Tests + * Story 12.5: Session State Integration with Bob (AC8-11) + */ + +const path = require('path'); +const fs = require('fs').promises; +const fsSync = require('fs'); +const yaml = require('js-yaml'); + +const { + DataLifecycleManager, + createDataLifecycleManager, + runStartupCleanup, + STALE_SESSION_DAYS, + STALE_SNAPSHOT_DAYS, +} = require('../../../.aios-core/core/orchestration/data-lifecycle-manager'); + +// Test fixtures +const TEST_PROJECT_ROOT = path.join(__dirname, '../../fixtures/test-project-lifecycle'); + +// Mock LockManager +jest.mock('../../../.aios-core/core/orchestration/lock-manager', () => { + const mockCleanupStaleLocks = jest.fn().mockResolvedValue(2); + return jest.fn().mockImplementation(() => ({ + cleanupStaleLocks: mockCleanupStaleLocks, + acquireLock: jest.fn().mockResolvedValue(true), + releaseLock: 
jest.fn().mockResolvedValue(true), + })); +}); + +describe('DataLifecycleManager', () => { + let manager; + + beforeEach(async () => { + jest.clearAllMocks(); + + // Clean up test directory + try { + await fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true }); + } catch { + // Ignore + } + + // Create base directories + await fs.mkdir(path.join(TEST_PROJECT_ROOT, 'docs/stories'), { recursive: true }); + await fs.mkdir(path.join(TEST_PROJECT_ROOT, '.aios/locks'), { recursive: true }); + await fs.mkdir(path.join(TEST_PROJECT_ROOT, '.aios/snapshots'), { recursive: true }); + + manager = new DataLifecycleManager(TEST_PROJECT_ROOT); + }); + + afterEach(async () => { + try { + await fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true }); + } catch { + // Ignore + } + }); + + // ========================================== + // Constructor tests + // ========================================== + + describe('constructor', () => { + it('should create DataLifecycleManager with projectRoot', () => { + expect(manager).toBeDefined(); + expect(manager.projectRoot).toBe(TEST_PROJECT_ROOT); + }); + + it('should throw if projectRoot is missing', () => { + expect(() => new DataLifecycleManager()).toThrow('projectRoot is required'); + expect(() => new DataLifecycleManager('')).toThrow('projectRoot is required'); + expect(() => new DataLifecycleManager(123)).toThrow('projectRoot is required'); + }); + + it('should use default options', () => { + expect(manager.options.staleSessionDays).toBe(STALE_SESSION_DAYS); + expect(manager.options.staleSnapshotDays).toBe(STALE_SNAPSHOT_DAYS); + }); + + it('should allow custom options', () => { + const customManager = new DataLifecycleManager(TEST_PROJECT_ROOT, { + staleSessionDays: 15, + staleSnapshotDays: 60, + }); + expect(customManager.options.staleSessionDays).toBe(15); + expect(customManager.options.staleSnapshotDays).toBe(60); + }); + }); + + // ========================================== + // cleanupStaleSessions tests (AC8) + // 
========================================== + + describe('cleanupStaleSessions', () => { + it('should return 0 when no session state exists', async () => { + const result = await manager.cleanupStaleSessions(); + expect(result).toBe(0); + }); + + it('should not archive session if last_updated < 30 days', async () => { + // Given - session updated 10 days ago + const sessionPath = path.join(TEST_PROJECT_ROOT, 'docs/stories/.session-state.yaml'); + const tenDaysAgo = new Date(); + tenDaysAgo.setDate(tenDaysAgo.getDate() - 10); + + const sessionState = { + session_state: { + version: '1.1', + last_updated: tenDaysAgo.toISOString(), + epic: { id: 'test', title: 'Test Epic', total_stories: 1 }, + }, + }; + await fs.writeFile(sessionPath, yaml.dump(sessionState)); + + // When + const result = await manager.cleanupStaleSessions(); + + // Then + expect(result).toBe(0); + expect(fsSync.existsSync(sessionPath)).toBe(true); + }); + + it('should archive session if last_updated > 30 days (AC8)', async () => { + // Given - session updated 45 days ago + const sessionPath = path.join(TEST_PROJECT_ROOT, 'docs/stories/.session-state.yaml'); + const fortyFiveDaysAgo = new Date(); + fortyFiveDaysAgo.setDate(fortyFiveDaysAgo.getDate() - 45); + + const sessionState = { + session_state: { + version: '1.1', + last_updated: fortyFiveDaysAgo.toISOString(), + epic: { id: 'test', title: 'Test Epic', total_stories: 1 }, + }, + }; + await fs.writeFile(sessionPath, yaml.dump(sessionState)); + + // When + const result = await manager.cleanupStaleSessions(); + + // Then + expect(result).toBe(1); + expect(fsSync.existsSync(sessionPath)).toBe(false); + + // Check archive exists + const archiveDir = path.join(TEST_PROJECT_ROOT, '.aios/archive/sessions'); + expect(fsSync.existsSync(archiveDir)).toBe(true); + + const archiveFiles = await fs.readdir(archiveDir); + expect(archiveFiles.length).toBe(1); + expect(archiveFiles[0]).toMatch(/session-state-\d{4}-\d{2}-\d{2}\.yaml/); + }); + + it('should handle 
corrupted session state gracefully', async () => {
+ // Given - YAML that genuinely fails to parse (unclosed flow collection)
+ const sessionPath = path.join(TEST_PROJECT_ROOT, 'docs/stories/.session-state.yaml');
+ await fs.writeFile(sessionPath, '{ invalid: [unclosed');
+
+ // When/Then - should not throw
+ const result = await manager.cleanupStaleSessions();
+ expect(result).toBe(0);
+ });
+
+ it('should handle missing last_updated field', async () => {
+ // Given - session without last_updated
+ const sessionPath = path.join(TEST_PROJECT_ROOT, 'docs/stories/.session-state.yaml');
+ const sessionState = {
+ session_state: {
+ version: '1.1',
+ epic: { id: 'test', title: 'Test Epic' },
+ },
+ };
+ await fs.writeFile(sessionPath, yaml.dump(sessionState));
+
+ // When
+ const result = await manager.cleanupStaleSessions();
+
+ // Then
+ expect(result).toBe(0);
+ });
+ });
+
+ // ==========================================
+ // cleanupStaleSnapshots tests (AC10)
+ // ==========================================
+
+ describe('cleanupStaleSnapshots', () => {
+ it('should return 0 when no snapshots directory exists', async () => {
+ // Given - remove snapshots dir
+ await fs.rm(path.join(TEST_PROJECT_ROOT, '.aios/snapshots'), { recursive: true });
+
+ // When
+ const result = await manager.cleanupStaleSnapshots();
+
+ // Then
+ expect(result).toBe(0);
+ });
+
+ it('should return 0 when snapshots directory is empty', async () => {
+ const result = await manager.cleanupStaleSnapshots();
+ expect(result).toBe(0);
+ });
+
+ it('should not remove snapshot if age < 90 days', async () => {
+ // Given - snapshot created 30 days ago
+ const snapshotPath = path.join(TEST_PROJECT_ROOT, '.aios/snapshots/snapshot-recent.json');
+ await fs.writeFile(snapshotPath, JSON.stringify({ epic_id: 'test', story_id: '1.0' }));
+
+ // Modify the mtime to 30 days ago
+ const thirtyDaysAgo = new Date();
+ thirtyDaysAgo.setDate(thirtyDaysAgo.getDate() - 30);
+ await fs.utimes(snapshotPath, thirtyDaysAgo, thirtyDaysAgo);
+
+ // When
+ 
const result = await manager.cleanupStaleSnapshots(); + + // Then + expect(result).toBe(0); + expect(fsSync.existsSync(snapshotPath)).toBe(true); + }); + + it('should remove snapshot if age > 90 days and update index.json (AC10)', async () => { + // Given - snapshot created 100 days ago + const snapshotPath = path.join(TEST_PROJECT_ROOT, '.aios/snapshots/snapshot-old.json'); + await fs.writeFile(snapshotPath, JSON.stringify({ epic_id: 'test', story_id: '1.0' })); + + // Modify the mtime to 100 days ago + const hundredDaysAgo = new Date(); + hundredDaysAgo.setDate(hundredDaysAgo.getDate() - 100); + await fs.utimes(snapshotPath, hundredDaysAgo, hundredDaysAgo); + + // When + const result = await manager.cleanupStaleSnapshots(); + + // Then + expect(result).toBe(1); + expect(fsSync.existsSync(snapshotPath)).toBe(false); + + // Check index.json was updated + const indexPath = path.join(TEST_PROJECT_ROOT, '.aios/snapshots/index.json'); + expect(fsSync.existsSync(indexPath)).toBe(true); + + const index = JSON.parse(await fs.readFile(indexPath, 'utf8')); + expect(index.removed_snapshots).toHaveLength(1); + expect(index.removed_snapshots[0].filename).toBe('snapshot-old.json'); + expect(index.removed_snapshots[0].epic_id).toBe('test'); + }); + + it('should not remove index.json itself', async () => { + // Given - index.json in snapshots + const indexPath = path.join(TEST_PROJECT_ROOT, '.aios/snapshots/index.json'); + await fs.writeFile(indexPath, JSON.stringify({ removed_snapshots: [] })); + + // Modify the mtime to 100 days ago + const hundredDaysAgo = new Date(); + hundredDaysAgo.setDate(hundredDaysAgo.getDate() - 100); + await fs.utimes(indexPath, hundredDaysAgo, hundredDaysAgo); + + // When + const result = await manager.cleanupStaleSnapshots(); + + // Then + expect(result).toBe(0); + expect(fsSync.existsSync(indexPath)).toBe(true); + }); + }); + + // ========================================== + // cleanupOrphanLocks tests (AC9) + // 
========================================== + + describe('cleanupOrphanLocks', () => { + it('should delegate to LockManager.cleanupStaleLocks (AC9)', async () => { + // When + const result = await manager.cleanupOrphanLocks(); + + // Then + expect(result).toBe(2); // Mock returns 2 + expect(manager.lockManager.cleanupStaleLocks).toHaveBeenCalled(); + }); + }); + + // ========================================== + // runStartupCleanup tests (AC11) + // ========================================== + + describe('runStartupCleanup', () => { + it('should run all three cleanup operations', async () => { + // Given - spy on methods + const cleanupSessionsSpy = jest.spyOn(manager, 'cleanupStaleSessions').mockResolvedValue(1); + const cleanupSnapshotsSpy = jest.spyOn(manager, 'cleanupStaleSnapshots').mockResolvedValue(3); + const cleanupLocksSpy = jest.spyOn(manager, 'cleanupOrphanLocks').mockResolvedValue(2); + + // When + const result = await manager.runStartupCleanup(); + + // Then + expect(result.locksRemoved).toBe(2); + expect(result.sessionsArchived).toBe(1); + expect(result.snapshotsRemoved).toBe(3); + expect(result.errors).toHaveLength(0); + + expect(cleanupLocksSpy).toHaveBeenCalled(); + expect(cleanupSessionsSpy).toHaveBeenCalled(); + expect(cleanupSnapshotsSpy).toHaveBeenCalled(); + }); + + it('should capture errors without stopping other cleanups', async () => { + // Given - first cleanup fails + jest.spyOn(manager, 'cleanupOrphanLocks').mockRejectedValue(new Error('Lock error')); + jest.spyOn(manager, 'cleanupStaleSessions').mockResolvedValue(1); + jest.spyOn(manager, 'cleanupStaleSnapshots').mockResolvedValue(2); + + // When + const result = await manager.runStartupCleanup(); + + // Then + expect(result.locksRemoved).toBe(0); + expect(result.sessionsArchived).toBe(1); + expect(result.snapshotsRemoved).toBe(2); + expect(result.errors).toHaveLength(1); + expect(result.errors[0]).toContain('Lock cleanup failed'); + }); + + it('should log cleanup summary (AC11)', async 
() => {
+ // Given
+ const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
+ jest.spyOn(manager, 'cleanupOrphanLocks').mockResolvedValue(2);
+ jest.spyOn(manager, 'cleanupStaleSessions').mockResolvedValue(1);
+ jest.spyOn(manager, 'cleanupStaleSnapshots').mockResolvedValue(0);
+
+ // When
+ await manager.runStartupCleanup();
+
+ // Then
+ expect(consoleSpy).toHaveBeenCalledWith(
+ expect.stringContaining('🧹 Cleanup: 2 locks removidos, 1 sessions arquivadas, 0 snapshots removidos'),
+ );
+
+ consoleSpy.mockRestore();
+ });
+ });
+
+ // ==========================================
+ // Factory functions
+ // ==========================================
+
+ describe('factory functions', () => {
+ it('createDataLifecycleManager should create instance', () => {
+ const instance = createDataLifecycleManager(TEST_PROJECT_ROOT);
+ expect(instance).toBeInstanceOf(DataLifecycleManager);
+ });
+
+ it('runStartupCleanup should create instance and run cleanup', async () => {
+ // Given - mock the prototype method (keep the spy so it can be restored)
+ const mockCleanup = jest.fn().mockResolvedValue({
+ locksRemoved: 0,
+ sessionsArchived: 0,
+ snapshotsRemoved: 0,
+ errors: [],
+ });
+
+ const protoSpy = jest.spyOn(DataLifecycleManager.prototype, 'runStartupCleanup').mockImplementation(mockCleanup);
+
+ // When
+ const result = await runStartupCleanup(TEST_PROJECT_ROOT);
+
+ // Then
+ expect(result).toBeDefined();
+ expect(mockCleanup).toHaveBeenCalled();
+
+ // Restore the prototype so later suites are not affected
+ protoSpy.mockRestore();
+ });
+ });
+
+ // ==========================================
+ // Constants
+ // ==========================================
+
+ describe('constants', () => {
+ it('should export correct threshold values', () => {
+ expect(STALE_SESSION_DAYS).toBe(30);
+ expect(STALE_SNAPSHOT_DAYS).toBe(90);
+ });
+ });
+});
+
+```
+
+==================================================
+📄 tests/core/orchestration/brownfield-handler.test.js
+==================================================
+```js
+/**
+ * Tests for BrownfieldHandler - Story 12.8
+ *
+ * Epic 12: Bob Full Integration — Completando o PRD v2.0
+ *
+ * Test coverage:
+ * - 
AC1: Detection of first execution (EXISTING_NO_DOCS state) + * - AC2: Welcome conversation with time estimate + * - AC3: Workflow execution via WorkflowExecutor + * - AC4: Output generation (system-architecture.md, TECHNICAL-DEBT-REPORT.md) + * - AC5: Post-discovery flow (resolve debts vs add feature) + * - AC6: Idempotent re-execution + * + * @jest-environment node + */ + +'use strict'; + +const path = require('path'); +const fs = require('fs'); + +// Module under test +const { + BrownfieldHandler, + BrownfieldPhase, + PostDiscoveryChoice, + PhaseFailureAction, +} = require('../../../.aios-core/core/orchestration/brownfield-handler'); + +// ═══════════════════════════════════════════════════════════════════════════════════ +// TEST SETUP +// ═══════════════════════════════════════════════════════════════════════════════════ + +const TEST_PROJECT_ROOT = '/tmp/test-brownfield-project'; + +// Mock fs module +jest.mock('fs', () => ({ + ...jest.requireActual('fs'), + existsSync: jest.fn(), + readFileSync: jest.fn(), + writeFileSync: jest.fn(), + mkdirSync: jest.fn(), +})); + +// Mock dependencies +const mockWorkflowExecutor = { + executeWorkflow: jest.fn(), +}; + +const mockSurfaceChecker = { + shouldSurface: jest.fn(), +}; + +const mockSessionState = { + exists: jest.fn(), + loadSessionState: jest.fn(), + recordPhaseChange: jest.fn(), +}; + +describe('BrownfieldHandler', () => { + let handler; + + beforeEach(() => { + jest.clearAllMocks(); + + // Default mock implementations + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue('# Test content'); + mockSurfaceChecker.shouldSurface.mockReturnValue({ should_surface: true }); + mockSessionState.exists.mockResolvedValue(false); + + handler = new BrownfieldHandler(TEST_PROJECT_ROOT, { + debug: false, + workflowExecutor: mockWorkflowExecutor, + surfaceChecker: mockSurfaceChecker, + sessionState: mockSessionState, + }); + }); + + // 
═══════════════════════════════════════════════════════════════════════════════════ + // CONSTRUCTOR TESTS + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('constructor', () => { + test('should throw error if projectRoot is not provided', () => { + expect(() => new BrownfieldHandler()).toThrow('projectRoot is required'); + }); + + test('should throw error if projectRoot is not a string', () => { + expect(() => new BrownfieldHandler(123)).toThrow('projectRoot is required and must be a string'); + }); + + test('should initialize with correct defaults', () => { + const h = new BrownfieldHandler(TEST_PROJECT_ROOT); + expect(h.projectRoot).toBe(TEST_PROJECT_ROOT); + expect(h.options.debug).toBe(false); + }); + + test('should accept custom options', () => { + const h = new BrownfieldHandler(TEST_PROJECT_ROOT, { debug: true }); + expect(h.options.debug).toBe(true); + }); + + test('should set correct workflow path', () => { + expect(handler.workflowPath).toBe( + path.join(TEST_PROJECT_ROOT, '.aios-core/development/workflows/brownfield-discovery.yaml'), + ); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // AC1: FIRST EXECUTION DETECTION + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('AC1: First execution detection', () => { + test('handle() should present welcome message on first execution', async () => { + const result = await handler.handle({}); + + expect(result.action).toBe('brownfield_welcome'); + expect(result.phase).toBe(BrownfieldPhase.WELCOME); + expect(result.data.message).toContain('primeira vez'); + expect(result.data.timeEstimate).toBe('4-8 horas'); + }); + + test('handle() should include accept/decline options', async () => { + const result = await handler.handle({}); + + expect(result.data.options).toEqual(['accept', 'decline']); + }); + + test('handle() should use SurfaceChecker for decision 
surfacing', async () => { + await handler.handle({}); + + expect(mockSurfaceChecker.shouldSurface).toHaveBeenCalledWith({ + valid_options_count: 2, + options_with_tradeoffs: expect.stringContaining('4-8 horas'), + }); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // AC2: WELCOME CONVERSATION + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('AC2: Welcome conversation', () => { + test('welcome message should match PRD §3.2 format', async () => { + const result = await handler.handle({}); + + expect(result.data.message).toContain('Bem-vindo'); + expect(result.data.message).toContain('primeira vez'); + expect(result.data.message).toContain('4-8 horas'); + }); + + test('handleUserDecision(true) should proceed to discovery', async () => { + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + + const result = await handler.handleUserDecision(true, {}); + + expect(result.action).toBe('brownfield_complete'); + }); + + test('handleUserDecision(false) should skip to defaults', async () => { + const result = await handler.handleUserDecision(false, {}); + + expect(result.action).toBe('brownfield_declined'); + expect(result.data.nextStep).toBe('existing_project_defaults'); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // AC3: WORKFLOW EXECUTION + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('AC3: Workflow execution', () => { + test('should execute brownfield-discovery.yaml via WorkflowExecutor', async () => { + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + + await handler.handleUserDecision(true, {}); + + expect(mockWorkflowExecutor.executeWorkflow).toHaveBeenCalledWith( + handler.workflowPath, + expect.objectContaining({ + projectRoot: TEST_PROJECT_ROOT, + }), + ); + }); + + test('should pass tech 
stack context to workflow', async () => { + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + const techStack = { framework: 'React', database: 'Supabase' }; + + await handler.handleUserDecision(true, { techStack }); + + expect(mockWorkflowExecutor.executeWorkflow).toHaveBeenCalledWith( + handler.workflowPath, + expect.objectContaining({ + techStack, + }), + ); + }); + + test('should handle workflow execution error', async () => { + mockWorkflowExecutor.executeWorkflow.mockRejectedValue(new Error('Workflow failed')); + + const result = await handler.handleUserDecision(true, {}); + + expect(result.action).toBe('brownfield_error'); + expect(result.error).toContain('Workflow failed'); + }); + + test('should handle missing workflow file', async () => { + fs.existsSync.mockReturnValue(false); + + const result = await handler.handleUserDecision(true, {}); + + expect(result.action).toBe('brownfield_error'); + expect(result.error).toContain('Workflow file not found'); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // AC3 Task 3.5: PHASE FAILURE HANDLING + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('AC3 Task 3.5: Phase failure handling', () => { + test('handlePhaseFailureAction(RETRY) should return retry instruction', async () => { + const result = await handler.handlePhaseFailureAction( + 'system_documentation', + PhaseFailureAction.RETRY, + {}, + ); + + expect(result.action).toBe('retry_phase'); + expect(result.phase).toBe('system_documentation'); + }); + + test('handlePhaseFailureAction(SKIP) should mark phase as skipped', async () => { + const result = await handler.handlePhaseFailureAction( + 'database_documentation', + PhaseFailureAction.SKIP, + {}, + ); + + expect(result.action).toBe('skip_phase'); + expect(handler.phaseProgress['database_documentation'].status).toBe('skipped'); + }); + + test('handlePhaseFailureAction(ABORT) 
should cancel discovery', async () => { + const result = await handler.handlePhaseFailureAction( + 'frontend_documentation', + PhaseFailureAction.ABORT, + {}, + ); + + expect(result.action).toBe('brownfield_aborted'); + expect(result.data.lastPhase).toBe('frontend_documentation'); + }); + + test('should reject invalid failure action', async () => { + const result = await handler.handlePhaseFailureAction( + 'system_documentation', + 'invalid_action', + {}, + ); + + expect(result.action).toBe('invalid_action'); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // AC4: OUTPUT VALIDATION + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('AC4: Output validation', () => { + beforeEach(() => { + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + }); + + test('should reference system-architecture.md in outputs', async () => { + const result = await handler.handleUserDecision(true, {}); + + expect(result.data.outputs.systemArchitecture).toBe('docs/architecture/system-architecture.md'); + }); + + test('should reference TECHNICAL-DEBT-REPORT.md in outputs', async () => { + const result = await handler.handleUserDecision(true, {}); + + expect(result.data.outputs.technicalDebtReport).toBe('docs/reports/TECHNICAL-DEBT-REPORT.md'); + }); + + test('should build summary from generated files', async () => { + fs.existsSync.mockImplementation((p) => { + if (p.includes('system-architecture.md')) return true; + if (p.includes('TECHNICAL-DEBT-REPORT.md')) return true; + return true; + }); + + fs.readFileSync.mockImplementation((p) => { + if (p.includes('TECHNICAL-DEBT-REPORT.md')) { + return ` + # Technical Debt Report + Custo Estimado: R$ 45.000 + Database: 3 issues + `; + } + return '# System Architecture'; + }); + + const result = await handler.handleUserDecision(true, {}); + + expect(result.data.summary).toBeDefined(); + 
expect(result.data.summary.indicators.length).toBeGreaterThan(0); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // AC5: POST-DISCOVERY FLOW + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('AC5: Post-discovery flow', () => { + beforeEach(() => { + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + }); + + test('discovery complete should present next step options', async () => { + const result = await handler.handleUserDecision(true, {}); + + expect(result.data.options).toEqual([ + { choice: PostDiscoveryChoice.RESOLVE_DEBTS, label: '1. Resolver débitos técnicos' }, + { choice: PostDiscoveryChoice.ADD_FEATURE, label: '2. Adicionar feature nova' }, + ]); + }); + + test('handle(RESOLVE_DEBTS) should route to debt resolution', async () => { + const result = await handler.handle({ postDiscoveryChoice: PostDiscoveryChoice.RESOLVE_DEBTS }); + + expect(result.action).toBe('route_to_debt_resolution'); + expect(result.data.taskPath).toContain('brownfield-create-epic.md'); + }); + + test('handle(ADD_FEATURE) should route to enhancement workflow', async () => { + const result = await handler.handle({ postDiscoveryChoice: PostDiscoveryChoice.ADD_FEATURE }); + + expect(result.action).toBe('route_to_enhancement'); + expect(result.data.nextStep).toBe('existing_project_enhancement'); + }); + + test('should reject invalid post-discovery choice', async () => { + const result = await handler.handle({ postDiscoveryChoice: 'invalid_choice' }); + + expect(result.action).toBe('invalid_choice'); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // AC6: IDEMPOTENCY + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('AC6: Idempotency', () => { + test('checkIdempotency should detect existing file', () => { + fs.existsSync.mockReturnValue(true); + 
fs.readFileSync.mockReturnValue('existing content'); + + const result = handler.checkIdempotency('docs/architecture/system-architecture.md'); + + expect(result.exists).toBe(true); + expect(result.existingContent).toBe('existing content'); + }); + + test('checkIdempotency should handle non-existing file', () => { + fs.existsSync.mockReturnValue(false); + + const result = handler.checkIdempotency('docs/architecture/system-architecture.md'); + + expect(result.exists).toBe(false); + expect(result.existingContent).toBe(null); + }); + + test('writeOutputIdempotent should use writeFileSync (overwrite)', () => { + fs.existsSync.mockReturnValue(true); + + handler.writeOutputIdempotent('docs/test.md', 'new content'); + + expect(fs.writeFileSync).toHaveBeenCalledWith( + path.join(TEST_PROJECT_ROOT, 'docs/test.md'), + 'new content', + 'utf8', + ); + }); + + test('writeOutputIdempotent should create directory if needed', () => { + fs.existsSync.mockReturnValue(false); + + handler.writeOutputIdempotent('docs/new/test.md', 'content'); + + expect(fs.mkdirSync).toHaveBeenCalledWith( + path.join(TEST_PROJECT_ROOT, 'docs/new'), + { recursive: true }, + ); + }); + + test('re-execution should update existing files, not duplicate', async () => { + fs.existsSync.mockReturnValue(true); + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + + // First execution + await handler.handleUserDecision(true, {}); + + // Second execution (re-run) + await handler.handleUserDecision(true, {}); + + // Both executions should call the same workflow + expect(mockWorkflowExecutor.executeWorkflow).toHaveBeenCalledTimes(2); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // SESSION STATE TRACKING (Task 3.4) + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('Task 3.4: Session state tracking', () => { + test('should record phase progress in session state', async () => { + 
mockSessionState.exists.mockResolvedValue(true); + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + + await handler.handleUserDecision(true, {}); + + expect(mockSessionState.recordPhaseChange).toHaveBeenCalled(); + }); + + test('should handle missing session state gracefully', async () => { + mockSessionState.exists.mockResolvedValue(false); + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + + // Should not throw + await expect(handler.handleUserDecision(true, {})).resolves.toBeDefined(); + }); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // EDGE CASES (Task 6.6) + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('Task 6.6: Edge cases', () => { + test('should handle workflow cancelled by user', async () => { + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ + success: false, + cancelled: true, + }); + + const result = await handler.handleUserDecision(true, {}); + + expect(result.action).toBe('brownfield_failed'); + }); + + test('should handle session resume in middle of brownfield', async () => { + // Simulate resuming with userAccepted already true + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + + const result = await handler.handle({ userAccepted: true }); + + // Should skip welcome and go directly to discovery + expect(result.action).toBe('brownfield_complete'); + }); + + test('should emit events during phase execution', async () => { + const phaseStartHandler = jest.fn(); + const phaseCompleteHandler = jest.fn(); + + handler.on('phaseStart', phaseStartHandler); + handler.on('phaseComplete', phaseCompleteHandler); + + // Simulate phase callbacks + await handler._onPhaseStart('test_phase', {}); + await handler._onPhaseComplete('test_phase', { output: 'test' }, {}); + + expect(phaseStartHandler).toHaveBeenCalled(); + expect(phaseCompleteHandler).toHaveBeenCalled(); + 
}); + }); + + // ═══════════════════════════════════════════════════════════════════════════════════ + // ENUMS AND CONSTANTS + // ═══════════════════════════════════════════════════════════════════════════════════ + + describe('Enums and constants', () => { + test('BrownfieldPhase should have all expected values', () => { + expect(BrownfieldPhase.WELCOME).toBe('welcome'); + expect(BrownfieldPhase.SYSTEM_DOCUMENTATION).toBe('system_documentation'); + expect(BrownfieldPhase.DATABASE_DOCUMENTATION).toBe('database_documentation'); + expect(BrownfieldPhase.FRONTEND_DOCUMENTATION).toBe('frontend_documentation'); + expect(BrownfieldPhase.COMPLETE).toBe('complete'); + }); + + test('PostDiscoveryChoice should have expected values', () => { + expect(PostDiscoveryChoice.RESOLVE_DEBTS).toBe('resolve_debts'); + expect(PostDiscoveryChoice.ADD_FEATURE).toBe('add_feature'); + }); + + test('PhaseFailureAction should have expected values', () => { + expect(PhaseFailureAction.RETRY).toBe('retry'); + expect(PhaseFailureAction.SKIP).toBe('skip'); + expect(PhaseFailureAction.ABORT).toBe('abort'); + }); + }); +}); + +// ═══════════════════════════════════════════════════════════════════════════════════ +// INTEGRATION WITH BOB-ORCHESTRATOR +// ═══════════════════════════════════════════════════════════════════════════════════ + +describe('BrownfieldHandler integration with BobOrchestrator', () => { + // These tests verify the handler integrates correctly with bob-orchestrator + // by testing the expected interface and return values + + test('handle() return value should be compatible with BobOrchestrator routing', async () => { + const handler = new BrownfieldHandler(TEST_PROJECT_ROOT, { + surfaceChecker: mockSurfaceChecker, + }); + + const result = await handler.handle({}); + + // BobOrchestrator expects { action, data } format + expect(result).toHaveProperty('action'); + expect(result).toHaveProperty('data'); + expect(typeof result.action).toBe('string'); + }); + + 
test('handleUserDecision() should return routing-compatible result', async () => { + mockWorkflowExecutor.executeWorkflow.mockResolvedValue({ success: true }); + + const handler = new BrownfieldHandler(TEST_PROJECT_ROOT, { + workflowExecutor: mockWorkflowExecutor, + surfaceChecker: mockSurfaceChecker, + }); + + const result = await handler.handleUserDecision(true, {}); + + expect(result).toHaveProperty('action'); + expect(['brownfield_complete', 'brownfield_error', 'brownfield_failed']).toContain(result.action); + }); +}); + +``` + +================================================== +📄 tests/core/orchestration/bob-orchestrator.test.js +================================================== +```js +/** + * Bob Orchestrator Tests + * Story 12.3: Bob Orchestration Logic (Decision Tree) + */ + +const path = require('path'); +const fs = require('fs').promises; +const fsSync = require('fs'); + +const { + BobOrchestrator, + ProjectState, +} = require('../../../.aios-core/core/orchestration/bob-orchestrator'); + +// Test fixtures +const TEST_PROJECT_ROOT = path.join(__dirname, '../../fixtures/test-project-bob'); + +// Mock Epic 11 modules +jest.mock('../../../.aios-core/core/config/config-resolver', () => ({ + resolveConfig: jest.fn(), + isLegacyMode: jest.fn(), + setUserConfigValue: jest.fn(), + CONFIG_FILES: {}, + LEVELS: {}, +})); + +// Story 12.7: Mock MessageFormatter +jest.mock('../../../.aios-core/core/orchestration/message-formatter', () => ({ + MessageFormatter: jest.fn().mockImplementation(() => ({ + format: jest.fn().mockReturnValue('formatted message'), + formatEducational: jest.fn().mockReturnValue('educational message'), + setEducationalMode: jest.fn(), + isEducationalMode: jest.fn().mockReturnValue(false), + formatToggleFeedback: jest.fn().mockImplementation((enabled) => + enabled ? '🎓 Modo educativo ativado!' 
: '📋 Modo educativo desativado.', + ), + formatPersistencePrompt: jest.fn().mockReturnValue('[1] Sessão / [2] Permanente'), + })), +})); + +// Story 12.8: Mock BrownfieldHandler +jest.mock('../../../.aios-core/core/orchestration/brownfield-handler', () => ({ + BrownfieldHandler: jest.fn().mockImplementation(() => ({ + handle: jest.fn().mockResolvedValue({ + action: 'brownfield_welcome', + data: { + message: 'Projeto com código detectado. Deseja analisar?', + hasCode: true, + }, + }), + handleUserDecision: jest.fn().mockResolvedValue({ + action: 'analysis_started', + }), + })), + BrownfieldPhase: {}, + PostDiscoveryChoice: {}, + PhaseFailureAction: {}, +})); + +jest.mock('../../../.aios-core/core/orchestration/executor-assignment', () => ({ + assignExecutorFromContent: jest.fn().mockReturnValue({ + executor: '@dev', + quality_gate: '@architect', + quality_gate_tools: ['code_review'], + }), + detectStoryType: jest.fn().mockReturnValue('code_general'), + assignExecutor: jest.fn(), + validateExecutorAssignment: jest.fn(), + EXECUTOR_ASSIGNMENT_TABLE: {}, + DEFAULT_ASSIGNMENT: {}, +})); + +jest.mock('../../../.aios-core/core/orchestration/terminal-spawner', () => ({ + spawnAgent: jest.fn().mockResolvedValue({ success: true, output: 'done' }), + isSpawnerAvailable: jest.fn().mockReturnValue(true), +})); + +jest.mock('../../../.aios-core/core/orchestration/workflow-executor', () => { + return { + WorkflowExecutor: jest.fn().mockImplementation(() => ({ + execute: jest.fn().mockResolvedValue({ success: true, state: {}, phaseResults: {} }), + onPhaseChange: jest.fn(), + onAgentSpawn: jest.fn(), + onTerminalSpawn: jest.fn(), + onLog: jest.fn(), + onError: jest.fn(), + })), + createWorkflowExecutor: jest.fn(), + executeDevelopmentCycle: jest.fn(), + PhaseStatus: {}, + CheckpointDecision: {}, + }; +}); + +// Story 12.6: Mock UI components +jest.mock('../../../.aios-core/core/ui/observability-panel', () => ({ + ObservabilityPanel: jest.fn().mockImplementation(() => ({ + 
setStage: jest.fn(), + setPipelineStage: jest.fn(), + setCurrentAgent: jest.fn(), + addTerminal: jest.fn(), + updateActiveAgent: jest.fn(), + setLog: jest.fn(), + setError: jest.fn(), + start: jest.fn(), + stop: jest.fn(), + // Story 12.7: Educational mode support + setMode: jest.fn(), + updateState: jest.fn(), + })), + PanelMode: { MINIMAL: 'minimal', DETAILED: 'detailed' }, +})); + +jest.mock('../../../.aios-core/core/orchestration/bob-status-writer', () => ({ + BobStatusWriter: jest.fn().mockImplementation(() => ({ + initialize: jest.fn().mockResolvedValue(undefined), + writeStatus: jest.fn().mockResolvedValue(undefined), + updatePhase: jest.fn().mockResolvedValue(undefined), + updateStage: jest.fn().mockResolvedValue(undefined), + updateAgent: jest.fn().mockResolvedValue(undefined), + addTerminal: jest.fn().mockResolvedValue(undefined), + updateActiveAgent: jest.fn().mockResolvedValue(undefined), + addAttempt: jest.fn().mockResolvedValue(undefined), + appendLog: jest.fn().mockResolvedValue(undefined), + complete: jest.fn().mockResolvedValue(undefined), + })), +})); + +jest.mock('../../../.aios-core/core/events/dashboard-emitter', () => ({ + getDashboardEmitter: jest.fn().mockReturnValue({ + emit: jest.fn(), + on: jest.fn(), + emitBobPhaseChange: jest.fn().mockResolvedValue(undefined), + emitBobAgentSpawned: jest.fn().mockResolvedValue(undefined), + }), +})); + +jest.mock('../../../.aios-core/core/orchestration/surface-checker', () => { + const mockShouldSurface = jest.fn().mockReturnValue({ + should_surface: false, + criterion_id: null, + criterion_name: null, + action: null, + message: null, + severity: null, + can_bypass: true, + }); + return { + SurfaceChecker: jest.fn().mockImplementation(() => ({ + shouldSurface: mockShouldSurface, + load: jest.fn().mockReturnValue(true), + })), + createSurfaceChecker: jest.fn(), + shouldSurface: jest.fn(), + }; +}); + +jest.mock('../../../.aios-core/core/orchestration/session-state', () => { + const mockExists = 
jest.fn().mockResolvedValue(false); + const mockLoad = jest.fn().mockResolvedValue(null); + const mockDetectCrash = jest.fn().mockResolvedValue({ isCrash: false }); + const mockGetResumeOptions = jest.fn().mockReturnValue({}); + const mockGetResumeSummary = jest.fn().mockReturnValue(''); + const mockRecordPhaseChange = jest.fn().mockResolvedValue({}); + const mockGetSessionOverride = jest.fn().mockReturnValue(null); + const mockSetSessionOverride = jest.fn(); + const mockHandleResumeOption = jest.fn().mockResolvedValue({ action: 'continue', story: '12.1', phase: 'development' }); + return { + SessionState: jest.fn().mockImplementation(() => ({ + exists: mockExists, + loadSessionState: mockLoad, + detectCrash: mockDetectCrash, + getResumeOptions: mockGetResumeOptions, + getResumeSummary: mockGetResumeSummary, + recordPhaseChange: mockRecordPhaseChange, + createSessionState: jest.fn(), + updateSessionState: jest.fn(), + getSessionOverride: mockGetSessionOverride, + setSessionOverride: mockSetSessionOverride, + handleResumeOption: mockHandleResumeOption, + state: null, + })), + createSessionState: jest.fn(), + sessionStateExists: jest.fn(), + loadSessionState: jest.fn(), + ActionType: { PHASE_CHANGE: 'PHASE_CHANGE' }, + Phase: {}, + ResumeOption: { CONTINUE: 'continue', REVIEW: 'review', RESTART: 'restart', DISCARD: 'discard' }, + SESSION_STATE_VERSION: '1.2', + SESSION_STATE_FILENAME: '.session-state.yaml', + CRASH_THRESHOLD_MINUTES: 30, + }; +}); + +jest.mock('../../../.aios-core/core/orchestration/lock-manager', () => { + const mockAcquire = jest.fn().mockResolvedValue(true); + const mockRelease = jest.fn().mockResolvedValue(true); + const mockCleanup = jest.fn().mockResolvedValue(0); + return jest.fn().mockImplementation(() => ({ + acquireLock: mockAcquire, + releaseLock: mockRelease, + cleanupStaleLocks: mockCleanup, + isLocked: jest.fn().mockResolvedValue(false), + })); +}); + +// Story 12.5: Mock DataLifecycleManager 
+jest.mock('../../../.aios-core/core/orchestration/data-lifecycle-manager', () => { + const mockRunStartupCleanup = jest.fn().mockResolvedValue({ + locksRemoved: 0, + sessionsArchived: 0, + snapshotsRemoved: 0, + errors: [], + }); + return { + DataLifecycleManager: jest.fn().mockImplementation(() => ({ + runStartupCleanup: mockRunStartupCleanup, + })), + createDataLifecycleManager: jest.fn(), + runStartupCleanup: jest.fn(), + STALE_SESSION_DAYS: 30, + STALE_SNAPSHOT_DAYS: 90, + }; +}); + +const { resolveConfig } = require('../../../.aios-core/core/config/config-resolver'); + +describe('BobOrchestrator', () => { + let orchestrator; + + beforeEach(async () => { + jest.clearAllMocks(); + + // Clean up test directory + try { + await fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true }); + } catch { + // Ignore + } + + // Default: project with config and docs + await fs.mkdir(path.join(TEST_PROJECT_ROOT, 'docs/architecture'), { recursive: true }); + await fs.mkdir(path.join(TEST_PROJECT_ROOT, '.aios/locks'), { recursive: true }); + await fs.writeFile(path.join(TEST_PROJECT_ROOT, 'package.json'), '{}'); + await fs.mkdir(path.join(TEST_PROJECT_ROOT, '.git'), { recursive: true }); + + resolveConfig.mockReturnValue({ + config: { version: '1.0' }, + warnings: [], + legacy: false, + }); + + orchestrator = new BobOrchestrator(TEST_PROJECT_ROOT); + }); + + afterEach(async () => { + try { + await fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true }); + } catch { + // Ignore + } + }); + + // ========================================== + // Constructor tests + // ========================================== + + describe('constructor', () => { + it('should create BobOrchestrator with projectRoot', () => { + // Then + expect(orchestrator).toBeDefined(); + expect(orchestrator.projectRoot).toBe(TEST_PROJECT_ROOT); + }); + + it('should throw if projectRoot is missing', () => { + // When/Then + expect(() => new BobOrchestrator()).toThrow('projectRoot is required'); + expect(() => 
new BobOrchestrator('')).toThrow('projectRoot is required'); + expect(() => new BobOrchestrator(123)).toThrow('projectRoot is required'); + }); + + it('should initialize all Epic 11 dependencies', () => { + // Then + expect(orchestrator.surfaceChecker).toBeDefined(); + expect(orchestrator.sessionState).toBeDefined(); + expect(orchestrator.workflowExecutor).toBeDefined(); + expect(orchestrator.lockManager).toBeDefined(); + }); + }); + + // ========================================== + // detectProjectState tests (AC3-6) + // ========================================== + + describe('detectProjectState', () => { + it('should detect GREENFIELD when no package.json, .git, or docs exist (AC6)', async () => { + // Given — empty project + const emptyRoot = path.join(TEST_PROJECT_ROOT, 'empty-project'); + await fs.mkdir(emptyRoot, { recursive: true }); + + // When + const state = orchestrator.detectProjectState(emptyRoot); + + // Then + expect(state).toBe(ProjectState.GREENFIELD); + }); + + it('should detect NO_CONFIG when resolveConfig fails (AC3)', () => { + // Given + resolveConfig.mockImplementation(() => { + throw new Error('Config not found'); + }); + + // When + const state = orchestrator.detectProjectState(TEST_PROJECT_ROOT); + + // Then + expect(state).toBe(ProjectState.NO_CONFIG); + }); + + it('should detect NO_CONFIG when resolveConfig returns empty config (AC3)', () => { + // Given + resolveConfig.mockReturnValue({ config: {}, warnings: [], legacy: false }); + + // When + const state = orchestrator.detectProjectState(TEST_PROJECT_ROOT); + + // Then + expect(state).toBe(ProjectState.NO_CONFIG); + }); + + it('should detect EXISTING_NO_DOCS when config exists but no architecture docs (AC4)', async () => { + // Given — remove architecture docs + await fs.rm(path.join(TEST_PROJECT_ROOT, 'docs/architecture'), { recursive: true }); + + // When + const state = orchestrator.detectProjectState(TEST_PROJECT_ROOT); + + // Then + 
expect(state).toBe(ProjectState.EXISTING_NO_DOCS); + }); + + it('should detect EXISTING_WITH_DOCS when config and architecture docs exist (AC5)', () => { + // Given — default setup has both config and docs + + // When + const state = orchestrator.detectProjectState(TEST_PROJECT_ROOT); + + // Then + expect(state).toBe(ProjectState.EXISTING_WITH_DOCS); + }); + + it('should use this.projectRoot as default when no argument is passed (Issue #88)', () => { + // Given — orchestrator was created with TEST_PROJECT_ROOT + // and default setup has both config and docs + + // When — call without argument + const state = orchestrator.detectProjectState(); + + // Then — should use this.projectRoot and return same result + expect(state).toBe(ProjectState.EXISTING_WITH_DOCS); + }); + }); + + // ========================================== + // orchestrate tests + // ========================================== + + describe('orchestrate', () => { + it('should return lock_failed when lock cannot be acquired', async () => { + // Given + const LockManager = require('../../../.aios-core/core/orchestration/lock-manager'); + const mockInstance = new LockManager(); + mockInstance.acquireLock.mockResolvedValueOnce(false); + orchestrator.lockManager = mockInstance; + + // When + const result = await orchestrator.orchestrate(); + + // Then + expect(result.success).toBe(false); + expect(result.action).toBe('lock_failed'); + }); + + it('should route to onboarding for NO_CONFIG state (AC3)', async () => { + // Given + resolveConfig.mockImplementation(() => { + throw new Error('No config'); + }); + + // When + const result = await orchestrator.orchestrate(); + + // Then + expect(result.success).toBe(true); + expect(result.projectState).toBe(ProjectState.NO_CONFIG); + expect(result.action).toBe('onboarding'); + }); + + it('should route to brownfield_welcome for EXISTING_NO_DOCS state (AC4)', async () => { + // Given + await fs.rm(path.join(TEST_PROJECT_ROOT, 'docs/architecture'), { recursive: true 
}); + + // When + const result = await orchestrator.orchestrate(); + + // Then + expect(result.success).toBe(true); + expect(result.projectState).toBe(ProjectState.EXISTING_NO_DOCS); + // Story 12.8: BrownfieldHandler now returns 'brownfield_welcome' action + expect(result.action).toBe('brownfield_welcome'); + }); + + it('should route to ask_objective for EXISTING_WITH_DOCS state (AC5)', async () => { + // When + const result = await orchestrator.orchestrate(); + + // Then + expect(result.success).toBe(true); + expect(result.projectState).toBe(ProjectState.EXISTING_WITH_DOCS); + expect(result.action).toBe('ask_objective'); + }); + + it('should route to greenfield for GREENFIELD state (AC6)', async () => { + // Given — clean empty project + const emptyRoot = path.join(TEST_PROJECT_ROOT, 'greenfield'); + await fs.mkdir(emptyRoot, { recursive: true }); + const greenOrch = new BobOrchestrator(emptyRoot); + + // When + const result = await greenOrch.orchestrate(); + + // Then + expect(result.success).toBe(true); + expect(result.projectState).toBe(ProjectState.GREENFIELD); + // GreenfieldHandler returns a surface prompt after Phase 0 + expect(result.action).toBe('greenfield_surface'); + }); + + it('should release lock on error', async () => { + // Given — force an error by making detectProjectState throw + const original = orchestrator.detectProjectState; + orchestrator.detectProjectState = () => { + throw new Error('Forced test error'); + }; + + // When + const result = await orchestrator.orchestrate(); + + // Then + expect(result.success).toBe(false); + expect(result.action).toBe('error'); + expect(result.error).toContain('Forced test error'); + + // Restore + orchestrator.detectProjectState = original; + }); + + it('should execute story when storyPath is provided (AC8-10)', async () => { + // Given — create a mock story file + const storyPath = path.join(TEST_PROJECT_ROOT, 'docs/stories/test-story.md'); + await fs.mkdir(path.dirname(storyPath), { recursive: true }); + 
await fs.writeFile(storyPath, '# Test Story\nSome content here'); + + // When + const result = await orchestrator.orchestrate({ storyPath }); + + // Then + expect(result.success).toBe(true); + expect(result.action).toBe('story_executed'); + expect(result.data.assignment.executor).toBe('@dev'); + }); + }); + + // ========================================== + // Decision tree is codified, not LLM (AC7) + // ========================================== + + describe('decision tree codification (AC7)', () => { + it('should use pure if/else statements without LLM calls', () => { + // Verify the method exists and returns string values + const states = [ + ProjectState.NO_CONFIG, + ProjectState.EXISTING_NO_DOCS, + ProjectState.EXISTING_WITH_DOCS, + ProjectState.GREENFIELD, + ]; + + // Each state should be a simple string constant + for (const state of states) { + expect(typeof state).toBe('string'); + } + }); + }); + + // ========================================== + // Constraint: < 50 lines of other-agent logic (AC13) + // ========================================== + + describe('constraint: router not god class (AC13)', () => { + it('should have less than 50 lines of other-agent-specific logic', async () => { + // Given — read the source file + const sourcePath = path.join( + __dirname, + '../../../.aios-core/core/orchestration/bob-orchestrator.js', + ); + const source = await fs.readFile(sourcePath, 'utf8'); + const lines = source.split('\n'); + + // Count lines that contain agent-specific logic + // (calls to Epic 11 modules with actual logic, not just initialization) + const agentSpecificPatterns = [ + /ExecutorAssignment\.\w+\(/, + /TerminalSpawner\.\w+\(/, + /workflowExecutor\.\w+\(/, + /surfaceChecker\.shouldSurface\(/, + /sessionState\.\w+\(/, + ]; + + let agentSpecificLines = 0; + for (const line of lines) { + for (const pattern of agentSpecificPatterns) { + if (pattern.test(line)) { + agentSpecificLines++; + break; // Count each line only once + } + } + } + + // 
Then — must be < 50 lines (PRD §3.7) + expect(agentSpecificLines).toBeLessThan(50); + }); + }); + + // ========================================== + // ProjectState enum + // ========================================== + + describe('ProjectState enum', () => { + it('should export all four states', () => { + expect(ProjectState.NO_CONFIG).toBe('NO_CONFIG'); + expect(ProjectState.EXISTING_NO_DOCS).toBe('EXISTING_NO_DOCS'); + expect(ProjectState.EXISTING_WITH_DOCS).toBe('EXISTING_WITH_DOCS'); + expect(ProjectState.GREENFIELD).toBe('GREENFIELD'); + }); + }); + + // ========================================== + // Story 12.5: Session Detection (AC1, AC2, AC4) + // ========================================== + + describe('_checkExistingSession (Story 12.5)', () => { + it('should return hasSession: false when no session exists (AC1)', async () => { + // Given - session does not exist (default mock) + + // When + const result = await orchestrator._checkExistingSession(); + + // Then + expect(result.hasSession).toBe(false); + }); + + it('should return session data when session exists (AC1)', async () => { + // Given - session exists + const mockSessionState = { + session_state: { + version: '1.1', + last_updated: new Date().toISOString(), + epic: { id: '12', title: 'Test Epic', total_stories: 5 }, + progress: { current_story: '12.1', stories_done: [], stories_pending: ['12.1'] }, + workflow: { current_phase: 'development' }, + }, + }; + + orchestrator.sessionState.exists = jest.fn().mockResolvedValue(true); + orchestrator.sessionState.loadSessionState = jest.fn().mockResolvedValue(mockSessionState); + orchestrator.sessionState.detectCrash = jest.fn().mockResolvedValue({ isCrash: false }); + orchestrator.sessionState.getResumeSummary = jest.fn().mockReturnValue('Resume summary'); + + // When + const result = await orchestrator._checkExistingSession(); + + // Then + expect(result.hasSession).toBe(true); + expect(result.epicTitle).toBe('Test Epic'); + 
expect(result.currentStory).toBe('12.1'); + expect(result.currentPhase).toBe('development'); + }); + + it('should format elapsed time correctly (AC2)', async () => { + // Given - session updated 3 days ago + const threeDaysAgo = new Date(); + threeDaysAgo.setDate(threeDaysAgo.getDate() - 3); + + const mockSessionState = { + session_state: { + version: '1.1', + last_updated: threeDaysAgo.toISOString(), + epic: { id: '12', title: 'Test Epic', total_stories: 5 }, + progress: { current_story: '12.1' }, + workflow: { current_phase: 'development' }, + }, + }; + + orchestrator.sessionState.exists = jest.fn().mockResolvedValue(true); + orchestrator.sessionState.loadSessionState = jest.fn().mockResolvedValue(mockSessionState); + orchestrator.sessionState.detectCrash = jest.fn().mockResolvedValue({ isCrash: false }); + orchestrator.sessionState.getResumeSummary = jest.fn().mockReturnValue(''); + + // When + const result = await orchestrator._checkExistingSession(); + + // Then + expect(result.elapsedString).toBe('3 dias'); + expect(result.formattedMessage).toContain('Bem-vindo de volta!'); + expect(result.formattedMessage).toContain('3 dias'); + }); + + it('should include crash warning when crash detected (AC4)', async () => { + // Given - crash detected + const mockSessionState = { + session_state: { + version: '1.1', + last_updated: new Date().toISOString(), + epic: { id: '12', title: 'Test Epic', total_stories: 5 }, + progress: { current_story: '12.1' }, + workflow: { current_phase: 'development' }, + }, + }; + + orchestrator.sessionState.exists = jest.fn().mockResolvedValue(true); + orchestrator.sessionState.loadSessionState = jest.fn().mockResolvedValue(mockSessionState); + orchestrator.sessionState.detectCrash = jest.fn().mockResolvedValue({ + isCrash: true, + minutesSinceUpdate: 45, + reason: 'Crash detected', + }); + orchestrator.sessionState.getResumeSummary = jest.fn().mockReturnValue(''); + + // When + const result = await orchestrator._checkExistingSession(); + + 
// Then + expect(result.crashInfo.isCrash).toBe(true); + expect(result.formattedMessage).toContain('⚠️'); + expect(result.formattedMessage).toContain('crashado'); + expect(result.formattedMessage).toContain('45 min'); + }); + }); + + // ========================================== + // Story 12.5: Session Resume (AC3, AC7) + // ========================================== + + describe('handleSessionResume (Story 12.5)', () => { + beforeEach(() => { + // Setup session state mock with handleResumeOption + orchestrator.sessionState.handleResumeOption = jest.fn(); + }); + + it('should handle continue option (AC3 [1])', async () => { + // Given + orchestrator.sessionState.handleResumeOption.mockResolvedValue({ + action: 'continue', + story: '12.1', + phase: 'development', + }); + + // When + const result = await orchestrator.handleSessionResume('continue'); + + // Then + expect(result.success).toBe(true); + expect(result.action).toBe('continue'); + expect(result.phase).toBe('development'); + expect(result.message).toContain('Continuando'); + }); + + it('should handle review option (AC3 [2])', async () => { + // Given + orchestrator.sessionState.handleResumeOption.mockResolvedValue({ + action: 'review', + summary: { progress: { completed: 2, total: 5 } }, + }); + + // When + const result = await orchestrator.handleSessionResume('review'); + + // Then + expect(result.success).toBe(true); + expect(result.action).toBe('review'); + expect(result.needsReprompt).toBe(true); + }); + + it('should handle restart option (AC3 [3])', async () => { + // Given + orchestrator.sessionState.handleResumeOption.mockResolvedValue({ + action: 'restart', + story: '12.1', + }); + + // When + const result = await orchestrator.handleSessionResume('restart'); + + // Then + expect(result.success).toBe(true); + expect(result.action).toBe('restart'); + expect(result.message).toContain('Recomeçando'); + }); + + it('should handle discard option (AC3 [4])', async () => { + // Given + 
orchestrator.sessionState.handleResumeOption.mockResolvedValue({ + action: 'discard', + message: 'Session discarded', + }); + + // When + const result = await orchestrator.handleSessionResume('discard'); + + // Then + expect(result.success).toBe(true); + expect(result.action).toBe('discard'); + expect(result.message).toContain('descartada'); + }); + + it('should handle unknown option', async () => { + // Given + orchestrator.sessionState.handleResumeOption.mockResolvedValue({ + action: 'invalid', + }); + + // When + const result = await orchestrator.handleSessionResume('invalid'); + + // Then + expect(result.success).toBe(false); + expect(result.action).toBe('unknown'); + }); + }); + + // ========================================== + // Story 12.5: Data Lifecycle Integration + // ========================================== + + describe('data lifecycle integration (Story 12.5)', () => { + it('should initialize DataLifecycleManager in constructor', () => { + // Then - verify the DataLifecycleManager was initialized + expect(orchestrator.dataLifecycleManager).toBeDefined(); + expect(orchestrator.dataLifecycleManager.runStartupCleanup).toBeDefined(); + }); + + it('should have dataLifecycleManager with runStartupCleanup method', () => { + // Then - verify the method exists + expect(typeof orchestrator.dataLifecycleManager.runStartupCleanup).toBe('function'); + }); + }); + + // ========================================== + // Story 12.5: Phase Tracking (AC5) + // ========================================== + + describe('_updatePhase (Story 12.5 - AC5)', () => { + it('should update session state on phase change', async () => { + // Given + orchestrator.sessionState.exists = jest.fn().mockResolvedValue(true); + orchestrator.sessionState.state = { session_state: {} }; + orchestrator.sessionState.recordPhaseChange = jest.fn().mockResolvedValue({}); + + // When + await orchestrator._updatePhase('development', '12.1', '@dev'); + + // Then + 
expect(orchestrator.sessionState.recordPhaseChange).toHaveBeenCalledWith( + 'development', + '12.1', + '@dev', + ); + }); + + it('should not fail if session state does not exist', async () => { + // Given + orchestrator.sessionState.exists = jest.fn().mockResolvedValue(false); + + // When/Then - should not throw + await expect(orchestrator._updatePhase('development', '12.1', '@dev')).resolves.not.toThrow(); + }); + }); + + // ========================================== + // Story 12.7: Educational Mode (AC1-7) + // ========================================== + + describe('Educational Mode (Story 12.7)', () => { + describe('_detectEducationalModeToggle (AC5)', () => { + it('should detect "ativa modo educativo" command', () => { + // When + const result = orchestrator._detectEducationalModeToggle('ativa modo educativo'); + + // Then + expect(result).not.toBeNull(); + expect(result.enable).toBe(true); + }); + + it('should detect "desativa modo educativo" command', () => { + // When + const result = orchestrator._detectEducationalModeToggle('desativa modo educativo'); + + // Then + expect(result).not.toBeNull(); + expect(result.enable).toBe(false); + }); + + it('should detect "Bob, ativa modo educativo" command', () => { + // When + const result = orchestrator._detectEducationalModeToggle('Bob, ativa modo educativo'); + + // Then + expect(result).not.toBeNull(); + expect(result.enable).toBe(true); + }); + + it('should detect "modo educativo on" command', () => { + // When + const result = orchestrator._detectEducationalModeToggle('modo educativo on'); + + // Then + expect(result).not.toBeNull(); + expect(result.enable).toBe(true); + }); + + it('should detect "modo educativo off" command', () => { + // When + const result = orchestrator._detectEducationalModeToggle('modo educativo off'); + + // Then + expect(result).not.toBeNull(); + expect(result.enable).toBe(false); + }); + + it('should detect "educational mode on" command (English)', () => { + // When + const result = 
orchestrator._detectEducationalModeToggle('educational mode on'); + + // Then + expect(result).not.toBeNull(); + expect(result.enable).toBe(true); + }); + + it('should be case-insensitive', () => { + // When + const result1 = orchestrator._detectEducationalModeToggle('ATIVA MODO EDUCATIVO'); + const result2 = orchestrator._detectEducationalModeToggle('Ativa Modo Educativo'); + + // Then + expect(result1).not.toBeNull(); + expect(result1.enable).toBe(true); + expect(result2).not.toBeNull(); + expect(result2.enable).toBe(true); + }); + + it('should return null for non-toggle commands', () => { + // When + const result1 = orchestrator._detectEducationalModeToggle('create a new feature'); + const result2 = orchestrator._detectEducationalModeToggle('help'); + const result3 = orchestrator._detectEducationalModeToggle(''); + const result4 = orchestrator._detectEducationalModeToggle(null); + + // Then + expect(result1).toBeNull(); + expect(result2).toBeNull(); + expect(result3).toBeNull(); + expect(result4).toBeNull(); + }); + }); + + describe('orchestrate with educational mode toggle (AC5)', () => { + it('should detect toggle and return early before routing', async () => { + // When + const result = await orchestrator.orchestrate({ + userGoal: 'Bob, ativa modo educativo', + }); + + // Then + expect(result.success).toBe(true); + expect(result.action).toBe('educational_mode_toggle'); + expect(result.data.enable).toBe(true); + expect(result.data.persistencePrompt).toBeDefined(); + }); + + it('should include persistence prompt in toggle result', async () => { + // When + const result = await orchestrator.orchestrate({ + userGoal: 'desativa modo educativo', + }); + + // Then + expect(result.action).toBe('educational_mode_toggle'); + expect(result.data.enable).toBe(false); + expect(result.data.persistencePrompt).toContain('Sessão'); + }); + }); + + describe('handleEducationalModeToggle (AC6)', () => { + it('should update internal state when enabling', async () => { + // When + 
const result = await orchestrator.handleEducationalModeToggle(true, 'session'); + + // Then + expect(result.success).toBe(true); + expect(result.educationalMode).toBe(true); + expect(result.persistenceType).toBe('session'); + }); + + it('should update internal state when disabling', async () => { + // When + const result = await orchestrator.handleEducationalModeToggle(false, 'session'); + + // Then + expect(result.success).toBe(true); + expect(result.educationalMode).toBe(false); + }); + + it('should persist to session state for session type', async () => { + // Given + orchestrator.sessionState.exists = jest.fn().mockResolvedValue(true); + orchestrator.sessionState.setSessionOverride = jest.fn().mockResolvedValue({}); + + // When + await orchestrator.handleEducationalModeToggle(true, 'session'); + + // Then + expect(orchestrator.sessionState.setSessionOverride).toHaveBeenCalledWith( + 'educational_mode', + true, + ); + }); + + it('should persist to user config for permanent type', async () => { + // Given + const { setUserConfigValue } = require('../../../.aios-core/core/config/config-resolver'); + + // When + await orchestrator.handleEducationalModeToggle(true, 'permanent'); + + // Then + expect(setUserConfigValue).toHaveBeenCalledWith('educational_mode', true); + }); + + it('should return feedback message', async () => { + // When + const enableResult = await orchestrator.handleEducationalModeToggle(true, 'session'); + const disableResult = await orchestrator.handleEducationalModeToggle(false, 'session'); + + // Then + expect(enableResult.message).toContain('ativado'); + expect(disableResult.message).toContain('desativado'); + }); + }); + + describe('_resolveEducationalMode (AC2)', () => { + it('should return false when no config or override exists', () => { + // Given - default mocks return null/false + + // When + const result = orchestrator._resolveEducationalMode(); + + // Then + expect(result).toBe(false); + }); + + it('should prioritize session override 
over user config', () => { + // Given + orchestrator.sessionState.getSessionOverride = jest.fn().mockReturnValue(true); + resolveConfig.mockReturnValue({ + config: { educational_mode: false }, + warnings: [], + }); + + // When + const result = orchestrator._resolveEducationalMode(); + + // Then + expect(result).toBe(true); + }); + + it('should use user config when no session override', () => { + // Given + orchestrator.sessionState.getSessionOverride = jest.fn().mockReturnValue(null); + resolveConfig.mockReturnValue({ + config: { educational_mode: true }, + warnings: [], + }); + + // When + const result = orchestrator._resolveEducationalMode(); + + // Then + expect(result).toBe(true); + }); + }); + + describe('getEducationalModePersistencePrompt (AC6)', () => { + it('should return persistence prompt', () => { + // When + const result = orchestrator.getEducationalModePersistencePrompt(); + + // Then + expect(result).toBeDefined(); + expect(typeof result).toBe('string'); + }); + }); + }); +}); + +``` + +================================================== +📄 tests/core/orchestration/session-state.test.js +================================================== +```js +/** + * Session State Persistence Tests + * Story 11.5: Projeto Bob - Session State Persistence + */ + +const path = require('path'); +const fs = require('fs').promises; +const fsSync = require('fs'); + +const { + SessionState, + createSessionState, + sessionStateExists, + loadSessionState, + ActionType, + Phase, + ResumeOption, + SESSION_STATE_VERSION, + SESSION_STATE_FILENAME, + CRASH_THRESHOLD_MINUTES, +} = require('../../../.aios-core/core/orchestration/session-state'); + +// Test fixtures +const TEST_PROJECT_ROOT = path.join(__dirname, '../../fixtures/test-project'); +const TEST_STATE_PATH = path.join(TEST_PROJECT_ROOT, 'docs/stories', SESSION_STATE_FILENAME); + +describe('SessionState', () => { + let sessionState; + + beforeEach(async () => { + // Clean up test directory + try { + await 
fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true }); + } catch { + // Ignore if doesn't exist + } + await fs.mkdir(path.join(TEST_PROJECT_ROOT, 'docs/stories'), { recursive: true }); + + sessionState = new SessionState(TEST_PROJECT_ROOT, { debug: false }); + }); + + afterEach(async () => { + try { + await fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + }); + + describe('Constants', () => { + it('should export SESSION_STATE_VERSION as 1.2', () => { + expect(SESSION_STATE_VERSION).toBe('1.2'); + }); + + it('should export SESSION_STATE_FILENAME as .session-state.yaml', () => { + expect(SESSION_STATE_FILENAME).toBe('.session-state.yaml'); + }); + + it('should export CRASH_THRESHOLD_MINUTES as 30', () => { + expect(CRASH_THRESHOLD_MINUTES).toBe(30); + }); + }); + + describe('ActionType enum', () => { + it('should have all required action types', () => { + expect(ActionType.GO).toBe('GO'); + expect(ActionType.PAUSE).toBe('PAUSE'); + expect(ActionType.REVIEW).toBe('REVIEW'); + expect(ActionType.ABORT).toBe('ABORT'); + expect(ActionType.PHASE_CHANGE).toBe('PHASE_CHANGE'); + expect(ActionType.EPIC_STARTED).toBe('EPIC_STARTED'); + expect(ActionType.STORY_STARTED).toBe('STORY_STARTED'); + expect(ActionType.STORY_COMPLETED).toBe('STORY_COMPLETED'); + }); + }); + + describe('Phase enum', () => { + it('should have all workflow phases', () => { + expect(Phase.VALIDATION).toBe('validation'); + expect(Phase.DEVELOPMENT).toBe('development'); + expect(Phase.SELF_HEALING).toBe('self_healing'); + expect(Phase.QUALITY_GATE).toBe('quality_gate'); + expect(Phase.PUSH).toBe('push'); + expect(Phase.CHECKPOINT).toBe('checkpoint'); + }); + }); + + describe('ResumeOption enum', () => { + it('should have all resume options', () => { + expect(ResumeOption.CONTINUE).toBe('continue'); + expect(ResumeOption.REVIEW).toBe('review'); + expect(ResumeOption.RESTART).toBe('restart'); + expect(ResumeOption.DISCARD).toBe('discard'); + }); + 
}); + + describe('createSessionState()', () => { + it('should create a new session state with all required fields (AC1-5)', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 6, + storyIds: ['11.1', '11.2', '11.3', '11.4', '11.5', '11.6'], + }; + + const state = await sessionState.createSessionState(epicInfo, 'feature/bob'); + + // AC1: File created + expect(await sessionState.exists()).toBe(true); + + // AC2: Epic field with valid fields + expect(state.session_state.epic.id).toBe('epic-11'); + expect(state.session_state.epic.title).toBe('Projeto Bob'); + expect(state.session_state.epic.total_stories).toBe(6); + + // AC3: Progress field with valid fields + expect(state.session_state.progress.current_story).toBe('11.1'); + expect(state.session_state.progress.stories_done).toEqual([]); + expect(state.session_state.progress.stories_pending).toEqual(epicInfo.storyIds); + + // AC4: Last action field with valid fields + expect(state.session_state.last_action.type).toBe(ActionType.EPIC_STARTED); + expect(state.session_state.last_action.timestamp).toBeDefined(); + expect(state.session_state.last_action.story).toBe('11.1'); + + // AC5: Context snapshot field + expect(state.session_state.context_snapshot.files_modified).toBe(0); + expect(state.session_state.context_snapshot.executor_distribution).toEqual({}); + expect(state.session_state.context_snapshot.branch).toBe('feature/bob'); + }); + + it('should include workflow section from ADR-011', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 6, + storyIds: ['11.1'], + }; + + const state = await sessionState.createSessionState(epicInfo); + + expect(state.session_state.workflow).toBeDefined(); + expect(state.session_state.workflow.current_phase).toBeNull(); + expect(state.session_state.workflow.attempt_count).toBe(0); + expect(state.session_state.workflow.phase_results).toEqual({}); + }); + + it('should include version 1.2', async () => { + 
const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + + const state = await sessionState.createSessionState(epicInfo); + + expect(state.session_state.version).toBe('1.2'); + }); + }); + + describe('loadSessionState()', () => { + it('should return null when no state exists', async () => { + const state = await sessionState.loadSessionState(); + expect(state).toBeNull(); + }); + + it('should load existing session state', async () => { + // Create state first + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 6, + storyIds: ['11.1', '11.2'], + }; + await sessionState.createSessionState(epicInfo); + + // Create new instance and load + const newSession = new SessionState(TEST_PROJECT_ROOT); + const loadedState = await newSession.loadSessionState(); + + expect(loadedState).not.toBeNull(); + expect(loadedState.session_state.epic.id).toBe('epic-11'); + }); + }); + + describe('updateSessionState()', () => { + it('should update progress fields', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 3, + storyIds: ['11.1', '11.2', '11.3'], + }; + await sessionState.createSessionState(epicInfo); + + const updated = await sessionState.updateSessionState({ + progress: { + current_story: '11.2', + stories_done: ['11.1'], + stories_pending: ['11.2', '11.3'], + }, + }); + + expect(updated.session_state.progress.current_story).toBe('11.2'); + expect(updated.session_state.progress.stories_done).toContain('11.1'); + }); + + it('should update workflow state (AC6)', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + const updated = await sessionState.updateSessionState({ + workflow: { + current_phase: Phase.DEVELOPMENT, + attempt_count: 1, + }, + }); + + expect(updated.session_state.workflow.current_phase).toBe('development'); + 
expect(updated.session_state.workflow.attempt_count).toBe(1); + }); + + it('should update last_updated timestamp automatically', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + const created = await sessionState.createSessionState(epicInfo); + const originalTimestamp = created.session_state.last_updated; + + // Wait a bit + await new Promise((r) => setTimeout(r, 10)); + + const updated = await sessionState.updateSessionState({ + progress: { current_story: '11.1' }, + }); + + expect(updated.session_state.last_updated).not.toBe(originalTimestamp); + }); + + it('should throw if state not initialized', async () => { + await expect(sessionState.updateSessionState({})).rejects.toThrow( + 'Session state not initialized', + ); + }); + }); + + describe('recordPhaseChange()', () => { + it('should record phase changes with executor', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + await sessionState.recordPhaseChange('development', '11.1', '@dev'); + + const state = sessionState.state; + expect(state.session_state.workflow.current_phase).toBe('development'); + expect(state.session_state.last_action.type).toBe(ActionType.PHASE_CHANGE); + expect(state.session_state.last_action.phase).toBe('development'); + expect(state.session_state.context_snapshot.last_executor).toBe('@dev'); + }); + + it('should track executor distribution', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + await sessionState.recordPhaseChange('development', '11.1', '@dev'); + await sessionState.recordPhaseChange('quality_gate', '11.1', '@architect'); + await sessionState.recordPhaseChange('push', '11.1', '@dev'); + + const distribution = 
sessionState.state.session_state.context_snapshot.executor_distribution; + expect(distribution['@dev']).toBe(2); + expect(distribution['@architect']).toBe(1); + }); + }); + + describe('recordStoryCompleted()', () => { + it('should move story from pending to done', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 3, + storyIds: ['11.1', '11.2', '11.3'], + }; + await sessionState.createSessionState(epicInfo); + + await sessionState.recordStoryCompleted('11.1', '11.2'); + + const state = sessionState.state; + expect(state.session_state.progress.stories_done).toContain('11.1'); + expect(state.session_state.progress.stories_pending).not.toContain('11.1'); + expect(state.session_state.progress.current_story).toBe('11.2'); + }); + + it('should reset workflow state for new story', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 2, + storyIds: ['11.1', '11.2'], + }; + await sessionState.createSessionState(epicInfo); + + // Simulate work on 11.1 + await sessionState.updateSessionState({ + workflow: { + current_phase: 'quality_gate', + attempt_count: 2, + phase_results: { validation: { passed: true } }, + }, + }); + + await sessionState.recordStoryCompleted('11.1', '11.2'); + + const state = sessionState.state; + expect(state.session_state.workflow.current_phase).toBeNull(); + expect(state.session_state.workflow.attempt_count).toBe(0); + expect(state.session_state.workflow.phase_results).toEqual({}); + }); + }); + + describe('recordPause()', () => { + it('should record pause action', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + await sessionState.recordPause('11.1', 'development'); + + const state = sessionState.state; + expect(state.session_state.last_action.type).toBe(ActionType.PAUSE); + expect(state.session_state.last_action.phase).toBe('development'); + 
}); + }); + + describe('detectCrash() (AC9)', () => { + it('should detect crash when last_updated > 30 min and action is not PAUSE', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + // Manually set old timestamp + sessionState.state.session_state.last_updated = new Date( + Date.now() - 35 * 60 * 1000, + ).toISOString(); + sessionState.state.session_state.last_action.type = ActionType.PHASE_CHANGE; + await sessionState.save(); + + const result = await sessionState.detectCrash(); + + expect(result.isCrash).toBe(true); + expect(result.minutesSinceUpdate).toBeGreaterThanOrEqual(30); + }); + + it('should not detect crash when action is PAUSE', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + // Manually set old timestamp but with PAUSE action + sessionState.state.session_state.last_updated = new Date( + Date.now() - 35 * 60 * 1000, + ).toISOString(); + sessionState.state.session_state.last_action.type = ActionType.PAUSE; + await sessionState.save(); + + const result = await sessionState.detectCrash(); + + expect(result.isCrash).toBe(false); + }); + + it('should not detect crash when last_updated is recent', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + const result = await sessionState.detectCrash(); + + expect(result.isCrash).toBe(false); + }); + }); + + describe('getResumeOptions() (AC7-8)', () => { + it('should return 4 resume options', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + const options = sessionState.getResumeOptions(); + + 
expect(Object.keys(options)).toHaveLength(4); + expect(options[ResumeOption.CONTINUE]).toBeDefined(); + expect(options[ResumeOption.REVIEW]).toBeDefined(); + expect(options[ResumeOption.RESTART]).toBeDefined(); + expect(options[ResumeOption.DISCARD]).toBeDefined(); + }); + + it('should have labels for each option', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + const options = sessionState.getResumeOptions(); + + expect(options[ResumeOption.CONTINUE].label).toContain('Continuar'); + expect(options[ResumeOption.REVIEW].label).toContain('Revisar'); + expect(options[ResumeOption.RESTART].label).toContain('Recomeçar'); + expect(options[ResumeOption.DISCARD].label).toContain('novo épico'); + }); + }); + + describe('getResumeSummary()', () => { + it('should return formatted resume summary', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 6, + storyIds: ['11.1', '11.2', '11.3', '11.4', '11.5', '11.6'], + }; + await sessionState.createSessionState(epicInfo); + await sessionState.recordStoryCompleted('11.1', '11.2'); + + const summary = sessionState.getResumeSummary(); + + expect(summary).toContain('Sessão anterior detectada'); + expect(summary).toContain('Projeto Bob'); + expect(summary).toContain('1 de 6'); + expect(summary).toContain('[1]'); + expect(summary).toContain('[2]'); + expect(summary).toContain('[3]'); + expect(summary).toContain('[4]'); + }); + }); + + describe('handleResumeOption()', () => { + beforeEach(async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 3, + storyIds: ['11.1', '11.2', '11.3'], + }; + await sessionState.createSessionState(epicInfo); + await sessionState.recordPhaseChange('development', '11.1', '@dev'); + }); + + it('should handle CONTINUE option', async () => { + const result = await 
sessionState.handleResumeOption(ResumeOption.CONTINUE); + + expect(result.action).toBe('continue'); + expect(result.story).toBe('11.1'); + expect(result.phase).toBe('development'); + }); + + it('should handle REVIEW option', async () => { + const result = await sessionState.handleResumeOption(ResumeOption.REVIEW); + + expect(result.action).toBe('review'); + expect(result.summary).toBeDefined(); + expect(result.summary.epic.title).toBe('Projeto Bob'); + }); + + it('should handle RESTART option', async () => { + const result = await sessionState.handleResumeOption(ResumeOption.RESTART); + + expect(result.action).toBe('restart'); + expect(result.story).toBe('11.1'); + + // Verify workflow was reset + expect(sessionState.state.session_state.workflow.current_phase).toBeNull(); + expect(sessionState.state.session_state.workflow.attempt_count).toBe(0); + }); + + it('should handle DISCARD option', async () => { + const result = await sessionState.handleResumeOption(ResumeOption.DISCARD); + + expect(result.action).toBe('discard'); + expect(await sessionState.exists()).toBe(false); + }); + }); + + describe('getProgressSummary()', () => { + it('should return detailed progress summary', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 6, + storyIds: ['11.1', '11.2', '11.3', '11.4', '11.5', '11.6'], + }; + await sessionState.createSessionState(epicInfo, 'feature/bob'); + await sessionState.recordStoryCompleted('11.1', '11.2'); + await sessionState.recordStoryCompleted('11.2', '11.3'); + + const summary = sessionState.getProgressSummary(); + + expect(summary.epic.title).toBe('Projeto Bob'); + expect(summary.progress.completed).toBe(2); + expect(summary.progress.total).toBe(6); + expect(summary.progress.percentage).toBe(33); + expect(summary.progress.storiesDone).toEqual(['11.1', '11.2']); + expect(summary.context.branch).toBe('feature/bob'); + }); + }); + + describe('discard()', () => { + it('should archive state file instead of 
deleting', async () => { + const epicInfo = { + id: 'epic-11', + title: 'Projeto Bob', + totalStories: 1, + storyIds: ['11.1'], + }; + await sessionState.createSessionState(epicInfo); + + await sessionState.discard(); + + // Original file should not exist + expect(await sessionState.exists()).toBe(false); + + // Should have archived file + const files = await fs.readdir(path.join(TEST_PROJECT_ROOT, 'docs/stories')); + const archivedFiles = files.filter((f) => f.includes('.discarded.')); + expect(archivedFiles.length).toBe(1); + }); + }); + + describe('validateSchema()', () => { + it('should validate correct schema', () => { + const validState = { + session_state: { + version: '1.1', + last_updated: '2026-02-05T10:00:00Z', + epic: { id: 'epic-11', title: 'Test', total_stories: 1 }, + progress: { + current_story: '11.1', + stories_done: [], + stories_pending: ['11.1'], + }, + workflow: { current_phase: null, attempt_count: 0, phase_results: {} }, + last_action: { type: 'EPIC_STARTED', timestamp: '2026-02-05T10:00:00Z' }, + context_snapshot: { + files_modified: 0, + executor_distribution: {}, + branch: 'main', + }, + }, + }; + + const result = SessionState.validateSchema(validState); + + expect(result.isValid).toBe(true); + expect(result.errors).toHaveLength(0); + }); + + it('should detect missing required fields', () => { + const invalidState = { + session_state: { + version: '1.1', + // Missing other required fields + }, + }; + + const result = SessionState.validateSchema(invalidState); + + expect(result.isValid).toBe(false); + expect(result.errors.length).toBeGreaterThan(0); + }); + + it('should detect missing session_state root', () => { + const invalidState = {}; + + const result = SessionState.validateSchema(invalidState); + + expect(result.isValid).toBe(false); + expect(result.errors).toContain('Missing session_state root'); + }); + }); + + describe('Factory functions', () => { + it('createSessionState() should return SessionState instance', () => { + const 
instance = createSessionState(TEST_PROJECT_ROOT); + expect(instance).toBeInstanceOf(SessionState); + }); + + it('sessionStateExists() should return false for non-existent state', async () => { + const exists = await sessionStateExists(TEST_PROJECT_ROOT); + expect(exists).toBe(false); + }); + + it('sessionStateExists() should return true after creation', async () => { + const instance = createSessionState(TEST_PROJECT_ROOT); + await instance.createSessionState({ + id: 'test', + title: 'Test', + totalStories: 1, + storyIds: ['1'], + }); + + const exists = await sessionStateExists(TEST_PROJECT_ROOT); + expect(exists).toBe(true); + }); + + it('loadSessionState() should load existing state', async () => { + const instance = createSessionState(TEST_PROJECT_ROOT); + await instance.createSessionState({ + id: 'test', + title: 'Test Epic', + totalStories: 1, + storyIds: ['1'], + }); + + const state = await loadSessionState(TEST_PROJECT_ROOT); + + expect(state).not.toBeNull(); + expect(state.session_state.epic.title).toBe('Test Epic'); + }); + }); +}); + +describe('SessionState Migration (ADR-011)', () => { + const TEST_PROJECT_ROOT = path.join(__dirname, '../../fixtures/test-migration-project'); + const LEGACY_STATE_PATH = path.join(TEST_PROJECT_ROOT, '.aios/workflow-state'); + + beforeEach(async () => { + await fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true }); + await fs.mkdir(path.join(TEST_PROJECT_ROOT, 'docs/stories'), { recursive: true }); + await fs.mkdir(LEGACY_STATE_PATH, { recursive: true }); + }); + + afterEach(async () => { + await fs.rm(TEST_PROJECT_ROOT, { recursive: true, force: true }); + }); + + it('should migrate from legacy workflow state (Task 7)', async () => { + // Create legacy workflow state + const legacyState = { + workflowId: 'development-cycle', + currentPhase: '2_development', + currentStory: 'docs/stories/11.3.story.md', + executor: '@dev', + qualityGate: '@architect', + attemptCount: 1, + startedAt: '2026-02-04T10:00:00Z', + 
lastUpdated: '2026-02-04T15:00:00Z', + phaseResults: { + '1_validation': { status: 'completed' }, + }, + }; + + await fs.writeFile( + path.join(LEGACY_STATE_PATH, '11.3-state.yaml'), + require('js-yaml').dump(legacyState), + ); + + // Create new session state instance with auto-migrate + const sessionState = new SessionState(TEST_PROJECT_ROOT, { autoMigrate: true }); + const loadedState = await sessionState.loadSessionState(); + + // Should have migrated + expect(loadedState).not.toBeNull(); + expect(loadedState.session_state.version).toBe('1.2'); + expect(loadedState.session_state.workflow.current_phase).toBe('2_development'); + expect(loadedState.session_state.context_snapshot.last_executor).toBe('@dev'); + + // Legacy file should be renamed to .migrated + const legacyFiles = await fs.readdir(LEGACY_STATE_PATH); + const hasMigrated = legacyFiles.some((f) => f.endsWith('.migrated')); + // Note: Migration renames files in place - check the migration occurred + expect(hasMigrated || legacyFiles.length === 1).toBe(true); + }); + + it('should not migrate if autoMigrate is false', async () => { + // Create legacy workflow state + const legacyState = { + workflowId: 'development-cycle', + currentPhase: '2_development', + currentStory: 'docs/stories/11.3.story.md', + }; + + await fs.writeFile( + path.join(LEGACY_STATE_PATH, '11.3-state.yaml'), + require('js-yaml').dump(legacyState), + ); + + // Create instance with autoMigrate disabled + const sessionState = new SessionState(TEST_PROJECT_ROOT, { autoMigrate: false }); + const loadedState = await sessionState.loadSessionState(); + + expect(loadedState).toBeNull(); + }); +}); + +``` + +================================================== +📄 tests/core/orchestration/terminal-spawner.test.js +================================================== +```js +/** + * Terminal Spawner Tests + * Story 12.10: Terminal Spawning E2E Validation + * + * Tests for environment detection, inline spawn, fallback, timeout, and cleanup. 
+ */ + +'use strict'; + +const path = require('path'); +const fs = require('fs').promises; +const fsSync = require('fs'); +const os = require('os'); + +// Module under test +const { + detectEnvironment, + ENVIRONMENT_TYPE, + spawnInline, + spawnAgent, + createContextFile, + pollForOutput, + cleanupOldFiles, + registerLockFile, + unregisterLockFile, + cleanupLocks, + generateCompatibilityReport, + formatCompatibilityReport, + getSystemInfo, + OS_COMPATIBILITY_MATRIX, + DEFAULT_TIMEOUT_MS, + POLL_INTERVAL_MS, + MAX_RETRIES, +} = require('../../../.aios-core/core/orchestration/terminal-spawner'); + +// Test fixtures +const TEST_OUTPUT_DIR = path.join(os.tmpdir(), 'aios-terminal-spawner-test'); + +describe('Terminal Spawner (Story 12.10)', () => { + // Store original env vars + const originalEnv = { ...process.env }; + + beforeEach(async () => { + // Reset environment variables before each test + process.env = { ...originalEnv }; + + // Clean up test directory + try { + await fs.rm(TEST_OUTPUT_DIR, { recursive: true, force: true }); + } catch { + // Ignore if doesn't exist + } + await fs.mkdir(TEST_OUTPUT_DIR, { recursive: true }); + }); + + afterEach(async () => { + // Restore original environment + process.env = { ...originalEnv }; + + // Clean up + try { + await fs.rm(TEST_OUTPUT_DIR, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + }); + + // ============================================ + // Task 6.1: Tests for detectEnvironment() + // ============================================ + describe('detectEnvironment() (Task 1)', () => { + describe('CI/CD detection (Task 1.5)', () => { + it('should detect GitHub Actions environment', () => { + // Given + process.env.GITHUB_ACTIONS = 'true'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.CI); + expect(result.supportsVisualTerminal).toBe(false); + expect(result.reason).toContain('CI/CD'); + }); + + it('should detect generic CI 
environment', () => { + // Given + process.env.CI = 'true'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.CI); + expect(result.supportsVisualTerminal).toBe(false); + }); + + it('should detect GitLab CI environment', () => { + // Given + process.env.GITLAB_CI = 'true'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.CI); + }); + + it('should detect Jenkins environment', () => { + // Given + process.env.JENKINS_URL = 'http://jenkins.local'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.CI); + }); + + it('should detect Travis CI environment', () => { + // Given + process.env.TRAVIS = 'true'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.CI); + }); + + it('should detect CircleCI environment', () => { + // Given + process.env.CIRCLECI = 'true'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.CI); + }); + + it('should detect Azure Pipelines environment', () => { + // Given + process.env.TF_BUILD = 'True'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.CI); + }); + }); + + describe('SSH detection (Task 1.3)', () => { + // Helper to clear CI environment variables for isolated SSH tests + const clearCIEnvVars = () => { + delete process.env.CI; + delete process.env.GITHUB_ACTIONS; + delete process.env.GITLAB_CI; + delete process.env.JENKINS_URL; + delete process.env.TRAVIS; + delete process.env.CIRCLECI; + delete process.env.TF_BUILD; + delete process.env.BUILDKITE; + delete process.env.CODEBUILD_BUILD_ID; + }; + + it('should detect SSH_CLIENT environment', () => { + // Given - clear CI vars first to isolate test + clearCIEnvVars(); + process.env.SSH_CLIENT = '192.168.1.1 12345 22'; + + // When + const result = 
detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.SSH); + expect(result.supportsVisualTerminal).toBe(false); + expect(result.reason).toContain('SSH'); + }); + + it('should detect SSH_TTY environment', () => { + // Given - clear CI vars first to isolate test + clearCIEnvVars(); + process.env.SSH_TTY = '/dev/pts/0'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.SSH); + }); + + it('should detect SSH_CONNECTION environment', () => { + // Given - clear CI vars first to isolate test + clearCIEnvVars(); + process.env.SSH_CONNECTION = '192.168.1.1 12345 192.168.1.2 22'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.SSH); + }); + }); + + describe('VS Code detection (Task 1.2)', () => { + // Helper to clear CI and SSH environment variables for isolated VS Code tests + const clearHigherPriorityEnvVars = () => { + // Clear CI vars + delete process.env.CI; + delete process.env.GITHUB_ACTIONS; + delete process.env.GITLAB_CI; + delete process.env.JENKINS_URL; + delete process.env.TRAVIS; + delete process.env.CIRCLECI; + delete process.env.TF_BUILD; + delete process.env.BUILDKITE; + delete process.env.CODEBUILD_BUILD_ID; + // Clear SSH vars + delete process.env.SSH_CLIENT; + delete process.env.SSH_TTY; + delete process.env.SSH_CONNECTION; + }; + + it('should detect TERM_PROGRAM=vscode', () => { + // Given - clear higher priority env vars to isolate test + clearHigherPriorityEnvVars(); + process.env.TERM_PROGRAM = 'vscode'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.VSCODE); + expect(result.supportsVisualTerminal).toBe(false); + expect(result.reason).toContain('VS Code'); + }); + + it('should detect VSCODE_PID', () => { + // Given - clear higher priority env vars to isolate test + clearHigherPriorityEnvVars(); + process.env.VSCODE_PID = '12345'; + + // When + const result = 
detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.VSCODE); + }); + + it('should detect VSCODE_CWD', () => { + // Given - clear higher priority env vars to isolate test + clearHigherPriorityEnvVars(); + process.env.VSCODE_CWD = '/home/user/project'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.VSCODE); + }); + + it('should detect VSCODE_GIT_IPC_HANDLE', () => { + // Given - clear higher priority env vars to isolate test + clearHigherPriorityEnvVars(); + process.env.VSCODE_GIT_IPC_HANDLE = '/tmp/git-ipc-12345'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.VSCODE); + }); + }); + + describe('Native terminal detection (Task 1.6)', () => { + it('should return NATIVE_TERMINAL when no special environment detected', () => { + // Given - clean environment (no CI, SSH, VS Code, Docker) + delete process.env.CI; + delete process.env.GITHUB_ACTIONS; + delete process.env.SSH_CLIENT; + delete process.env.SSH_TTY; + delete process.env.SSH_CONNECTION; + delete process.env.TERM_PROGRAM; + delete process.env.VSCODE_PID; + delete process.env.VSCODE_CWD; + delete process.env.VSCODE_GIT_IPC_HANDLE; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.NATIVE_TERMINAL); + expect(result.supportsVisualTerminal).toBe(true); + expect(result.reason).toContain('Native'); + }); + }); + + describe('Detection priority', () => { + // Helper to clear all detection environment variables + const clearAllDetectionEnvVars = () => { + // Clear CI vars + delete process.env.CI; + delete process.env.GITHUB_ACTIONS; + delete process.env.GITLAB_CI; + delete process.env.JENKINS_URL; + delete process.env.TRAVIS; + delete process.env.CIRCLECI; + delete process.env.TF_BUILD; + delete process.env.BUILDKITE; + delete process.env.CODEBUILD_BUILD_ID; + // Clear SSH vars + delete process.env.SSH_CLIENT; + delete 
process.env.SSH_TTY; + delete process.env.SSH_CONNECTION; + // Clear VS Code vars + delete process.env.TERM_PROGRAM; + delete process.env.VSCODE_PID; + delete process.env.VSCODE_CWD; + delete process.env.VSCODE_GIT_IPC_HANDLE; + }; + + it('should prioritize CI over SSH', () => { + // Given - start clean and set both CI and SSH + clearAllDetectionEnvVars(); + process.env.CI = 'true'; + process.env.SSH_CLIENT = '192.168.1.1 12345 22'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.CI); + }); + + it('should prioritize SSH over VS Code', () => { + // Given - start clean and set both SSH and VS Code + clearAllDetectionEnvVars(); + process.env.SSH_CLIENT = '192.168.1.1 12345 22'; + process.env.TERM_PROGRAM = 'vscode'; + + // When + const result = detectEnvironment(); + + // Then + expect(result.type).toBe(ENVIRONMENT_TYPE.SSH); + }); + }); + }); + + // ============================================ + // ENVIRONMENT_TYPE enum tests + // ============================================ + describe('ENVIRONMENT_TYPE enum', () => { + it('should have all required environment types', () => { + expect(ENVIRONMENT_TYPE.NATIVE_TERMINAL).toBe('NATIVE_TERMINAL'); + expect(ENVIRONMENT_TYPE.VSCODE).toBe('VSCODE'); + expect(ENVIRONMENT_TYPE.SSH).toBe('SSH'); + expect(ENVIRONMENT_TYPE.DOCKER).toBe('DOCKER'); + expect(ENVIRONMENT_TYPE.CI).toBe('CI'); + }); + }); + + // ============================================ + // Task 6.2: Tests for spawnInline() + // ============================================ + describe('spawnInline() (Task 2)', () => { + it('should execute a simple command inline', async () => { + // This test may fail if pm.sh is not set up for inline mode + // For unit testing, we're testing the function signature and basic behavior + + // Given + const agent = 'dev'; + const task = 'test'; + const options = { + timeout: 5000, + outputDir: TEST_OUTPUT_DIR, + debug: false, + }; + + // When + const result = await 
spawnInline(agent, task, options); + + // Then - we expect it to run (may succeed or fail depending on pm.sh) + expect(result).toBeDefined(); + expect(typeof result.success).toBe('boolean'); + expect(typeof result.duration).toBe('number'); + expect(result.duration).toBeGreaterThan(0); + }); + + it('should capture stdout output', async () => { + // Given + const agent = 'dev'; + const task = 'test'; + const options = { + timeout: 5000, + outputDir: TEST_OUTPUT_DIR, + debug: false, + }; + + // When + const result = await spawnInline(agent, task, options); + + // Then + expect(typeof result.output).toBe('string'); + }); + + it('should handle context file creation', async () => { + // Given + const agent = 'dev'; + const task = 'test'; + const options = { + timeout: 5000, + outputDir: TEST_OUTPUT_DIR, + context: { + story: 'test-story.md', + files: ['file1.js', 'file2.js'], + instructions: 'Test instructions', + }, + }; + + // When + const result = await spawnInline(agent, task, options); + + // Then - context file should be created and cleaned up + expect(result).toBeDefined(); + }); + }); + + // ============================================ + // Task 6.3: Tests for automatic fallback + // ============================================ + describe('spawnAgent() fallback (Task 2.3)', () => { + it('should use inline spawn when environment does not support visual terminal', async () => { + // Given - CI environment + process.env.CI = 'true'; + const agent = 'dev'; + const task = 'test'; + const options = { + timeout: 5000, + outputDir: TEST_OUTPUT_DIR, + debug: false, + retries: 1, + }; + + // When + const result = await spawnAgent(agent, task, options); + + // Then - should have used inline spawn (no visual terminal in CI) + expect(result).toBeDefined(); + expect(typeof result.success).toBe('boolean'); + }); + }); + + // ============================================ + // Task 6.4: Tests for timeout + // ============================================ + describe('Timeout 
handling (Task 3.2)', () => { + it('should timeout if lock file persists', async () => { + // Given + const outputFile = path.join(TEST_OUTPUT_DIR, 'aios-output-timeout-test.md'); + const lockFile = outputFile.replace('output', 'lock'); + await fs.writeFile(lockFile, 'locked'); + await fs.writeFile(outputFile, 'test output'); + + // When - poll with very short timeout + await expect(pollForOutput(outputFile, 100, false)).rejects.toThrow('Timeout'); + + // Then - lock file should be cleaned up + const lockExists = fsSync.existsSync(lockFile); + expect(lockExists).toBe(false); + }); + + it('should return output when lock is removed', async () => { + // Given + const outputFile = path.join(TEST_OUTPUT_DIR, 'aios-output-success-test.md'); + const lockFile = outputFile.replace('output', 'lock'); + await fs.writeFile(outputFile, 'test output content'); + // No lock file - simulates completed process + + // When + const output = await pollForOutput(outputFile, 1000, false); + + // Then + expect(output).toBe('test output content'); + }); + }); + + // ============================================ + // Task 6.5: Tests for lock cleanup + // ============================================ + describe('Lock cleanup (Task 3.3, 3.4)', () => { + it('should register and unregister lock files', () => { + // Given + const lockPath = path.join(TEST_OUTPUT_DIR, 'test-lock.lock'); + + // When + registerLockFile(lockPath); + + // Then - should be able to unregister + unregisterLockFile(lockPath); + // No error means success + }); + + it('should cleanup registered lock files', async () => { + // Given + const lockPath = path.join(TEST_OUTPUT_DIR, 'cleanup-test.lock'); + await fs.writeFile(lockPath, 'locked'); + registerLockFile(lockPath); + + // When + cleanupLocks(); + + // Then + const exists = fsSync.existsSync(lockPath); + expect(exists).toBe(false); + }); + + it('should cleanup old files', async () => { + // Given - create old file + const oldFile = path.join(TEST_OUTPUT_DIR, 
'aios-output-old.md'); + await fs.writeFile(oldFile, 'old content'); + + // Manually set mtime to past + const pastTime = new Date(Date.now() - 3600001); // 1 hour + 1ms ago + await fs.utimes(oldFile, pastTime, pastTime); + + // When + const cleaned = await cleanupOldFiles(TEST_OUTPUT_DIR, 3600000); // 1 hour + + // Then + expect(cleaned).toBeGreaterThanOrEqual(1); + const exists = fsSync.existsSync(oldFile); + expect(exists).toBe(false); + }); + }); + + // ============================================ + // Context file tests + // ============================================ + describe('createContextFile()', () => { + it('should create context file with valid structure', async () => { + // Given + const context = { + story: 'docs/stories/story-12.10.md', + files: ['src/index.js', 'src/utils.js'], + instructions: 'Test the terminal spawner', + metadata: { priority: 'high' }, + }; + + // When + const contextPath = await createContextFile(context, TEST_OUTPUT_DIR); + + // Then + expect(contextPath).toBeTruthy(); + const content = JSON.parse(await fs.readFile(contextPath, 'utf8')); + expect(content.story).toBe(context.story); + expect(content.files).toEqual(context.files); + expect(content.instructions).toBe(context.instructions); + expect(content.metadata.priority).toBe('high'); + expect(content.createdAt).toBeDefined(); + + // Cleanup + await fs.unlink(contextPath); + }); + + it('should return empty string for null context', async () => { + // When + const result = await createContextFile(null, TEST_OUTPUT_DIR); + + // Then + expect(result).toBe(''); + }); + + it('should handle missing optional fields', async () => { + // Given + const context = { story: 'test.md' }; + + // When + const contextPath = await createContextFile(context, TEST_OUTPUT_DIR); + + // Then + const content = JSON.parse(await fs.readFile(contextPath, 'utf8')); + expect(content.story).toBe('test.md'); + expect(content.files).toEqual([]); + expect(content.instructions).toBe(''); + 
expect(content.metadata).toEqual({}); + + // Cleanup + await fs.unlink(contextPath); + }); + }); + + // ============================================ + // Task 7.6: Tests for generateCompatibilityReport() + // ============================================ + describe('generateCompatibilityReport() (Task 7.4)', () => { + it('should generate report with all required fields', () => { + // Given + const testResults = [ + { testName: 'Test 1', result: 'pass', duration: 100 }, + { testName: 'Test 2', result: 'fail', failureReason: 'Timeout', duration: 200 }, + { testName: 'Test 3', result: 'skip', duration: 0 }, + ]; + + // When + const report = generateCompatibilityReport(testResults); + + // Then - check required fields + expect(report.generatedAt).toBeDefined(); + expect(new Date(report.generatedAt)).toBeInstanceOf(Date); + + expect(report.system).toBeDefined(); + expect(report.system.os_name).toBeDefined(); + expect(report.system.os_version).toBeDefined(); + expect(report.system.architecture).toBeDefined(); + expect(report.system.shell).toBeDefined(); + expect(report.system.docker_version).toBeDefined(); + expect(report.system.node_version).toBeDefined(); + + expect(report.environment).toBeDefined(); + expect(report.environment.type).toBeDefined(); + + expect(report.tests).toEqual(testResults); + + expect(report.summary).toBeDefined(); + expect(report.summary.total).toBe(3); + expect(report.summary.passed).toBe(1); + expect(report.summary.failed).toBe(1); + expect(report.summary.skipped).toBe(1); + expect(report.summary.passRate).toBe(33); // 1/3 = 33% + }); + + it('should handle empty test results', () => { + // When + const report = generateCompatibilityReport([]); + + // Then + expect(report.tests).toEqual([]); + expect(report.summary.total).toBe(0); + expect(report.summary.passRate).toBe(0); + }); + + it('should calculate pass rate correctly', () => { + // Given + const testResults = [ + { testName: 'Test 1', result: 'pass', duration: 100 }, + { testName: 'Test 2', 
result: 'pass', duration: 100 }, + { testName: 'Test 3', result: 'pass', duration: 100 }, + { testName: 'Test 4', result: 'fail', duration: 100 }, + ]; + + // When + const report = generateCompatibilityReport(testResults); + + // Then + expect(report.summary.passRate).toBe(75); // 3/4 = 75% + }); + }); + + describe('formatCompatibilityReport() (Task 7.5)', () => { + it('should format report as readable string', () => { + // Given + const testResults = [ + { testName: 'detectEnvironment', result: 'pass', duration: 10 }, + { testName: 'spawnInline', result: 'fail', failureReason: 'Script not found', duration: 50 }, + ]; + const report = generateCompatibilityReport(testResults); + + // When + const formatted = formatCompatibilityReport(report); + + // Then + expect(formatted).toContain('Compatibility Report'); + expect(formatted).toContain('System Information'); + expect(formatted).toContain('Environment Detection'); + expect(formatted).toContain('Test Results'); + expect(formatted).toContain('Summary'); + expect(formatted).toContain('detectEnvironment'); + expect(formatted).toContain('spawnInline'); + expect(formatted).toContain('Script not found'); + expect(formatted).toContain('✅'); + expect(formatted).toContain('❌'); + }); + }); + + describe('getSystemInfo() (Task 7.4)', () => { + it('should return system information object', () => { + // When + const info = getSystemInfo(); + + // Then + expect(info.os_name).toBeDefined(); + expect(info.os_version).toBeDefined(); + expect(info.architecture).toBeDefined(); + expect(['x64', 'arm64', 'ia32', 'arm'].includes(info.architecture)).toBe(true); + expect(info.shell).toBeDefined(); + expect(info.docker_version).toBeDefined(); + expect(info.node_version).toBeDefined(); + expect(info.node_version).toMatch(/^v\d+\.\d+\.\d+/); + }); + }); + + describe('OS_COMPATIBILITY_MATRIX (Task 7.1-7.3)', () => { + it('should define must_pass configurations', () => { + expect(OS_COMPATIBILITY_MATRIX.must_pass).toBeDefined(); + 
expect(Array.isArray(OS_COMPATIBILITY_MATRIX.must_pass)).toBe(true); + expect(OS_COMPATIBILITY_MATRIX.must_pass.length).toBeGreaterThan(0); + + // Check required fields + for (const config of OS_COMPATIBILITY_MATRIX.must_pass) { + expect(config.os).toBeDefined(); + expect(config.arch).toBeDefined(); + expect(config.description).toBeDefined(); + } + }); + + it('should define should_pass configurations', () => { + expect(OS_COMPATIBILITY_MATRIX.should_pass).toBeDefined(); + expect(Array.isArray(OS_COMPATIBILITY_MATRIX.should_pass)).toBe(true); + expect(OS_COMPATIBILITY_MATRIX.should_pass.length).toBeGreaterThan(0); + }); + + it('should include macOS Sonoma in must_pass', () => { + const hasSonoma = OS_COMPATIBILITY_MATRIX.must_pass.some( + (c) => c.os.toLowerCase().includes('sonoma'), + ); + expect(hasSonoma).toBe(true); + }); + + it('should include Windows 11 + WSL in must_pass', () => { + const hasWindows11WSL = OS_COMPATIBILITY_MATRIX.must_pass.some( + (c) => c.os.toLowerCase().includes('windows 11') && c.wsl, + ); + expect(hasWindows11WSL).toBe(true); + }); + + it('should include Ubuntu 22.04 in must_pass', () => { + const hasUbuntu = OS_COMPATIBILITY_MATRIX.must_pass.some( + (c) => c.os.toLowerCase().includes('ubuntu 22.04'), + ); + expect(hasUbuntu).toBe(true); + }); + }); + + // ============================================ + // Constants tests + // ============================================ + describe('Constants', () => { + it('should export DEFAULT_TIMEOUT_MS as 300000 (5 minutes)', () => { + expect(DEFAULT_TIMEOUT_MS).toBe(300000); + }); + + it('should export POLL_INTERVAL_MS as 500', () => { + expect(POLL_INTERVAL_MS).toBe(500); + }); + + it('should export MAX_RETRIES as 3', () => { + expect(MAX_RETRIES).toBe(3); + }); + }); +}); + +``` + +================================================== +📄 tests/core/events/dashboard-emitter-bob.test.js +================================================== +```js +/** + * Tests for DashboardEmitter Bob-specific methods + 
* + * Story 12.6: Observability Panel Integration + Dashboard Bridge + * + * Tests: + * - New emitBob* methods + * - Event type correctness + * - Fallback to file + */ + +'use strict'; + +const fs = require('fs-extra'); +const path = require('path'); +const os = require('os'); + +const { DashboardEmitter, getDashboardEmitter } = require('../../../.aios-core/core/events/dashboard-emitter'); +const { DashboardEventType } = require('../../../.aios-core/core/events/types'); + +describe('DashboardEmitter Bob-specific methods', () => { + let emitter; + let tempDir; + let originalEnv; + + beforeEach(async () => { + tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'emitter-test-')); + originalEnv = process.env.NODE_ENV; + + // Reset singleton for clean tests + DashboardEmitter.instance = null; + emitter = new DashboardEmitter(); + emitter.enabled = true; + emitter.projectRoot = tempDir; + emitter.fallbackPath = path.join(tempDir, '.aios', 'dashboard', 'events.jsonl'); + }); + + afterEach(async () => { + process.env.NODE_ENV = originalEnv; + DashboardEmitter.instance = null; + await fs.remove(tempDir); + }); + + describe('DashboardEventType', () => { + it('should have Bob-specific event types', () => { + expect(DashboardEventType.BOB_PHASE_CHANGE).toBe('BobPhaseChange'); + expect(DashboardEventType.BOB_AGENT_SPAWNED).toBe('BobAgentSpawned'); + expect(DashboardEventType.BOB_AGENT_COMPLETED).toBe('BobAgentCompleted'); + expect(DashboardEventType.BOB_SURFACE_DECISION).toBe('BobSurfaceDecision'); + expect(DashboardEventType.BOB_ERROR).toBe('BobError'); + }); + }); + + describe('emitBobPhaseChange', () => { + it('should emit BobPhaseChange event with correct data', async () => { + const events = []; + const originalEmit = emitter.emit.bind(emitter); + emitter.emit = async (type, data) => { + events.push({ type, data }); + }; + + await emitter.emitBobPhaseChange('development', '12.6', '@dev'); + + expect(events).toHaveLength(1); + 
expect(events[0].type).toBe(DashboardEventType.BOB_PHASE_CHANGE); + expect(events[0].data.phase).toBe('development'); + expect(events[0].data.story).toBe('12.6'); + expect(events[0].data.executor).toBe('@dev'); + }); + }); + + describe('emitBobAgentSpawned', () => { + it('should emit BobAgentSpawned event with correct data', async () => { + const events = []; + emitter.emit = async (type, data) => { + events.push({ type, data }); + }; + + await emitter.emitBobAgentSpawned('@dev', 12345, 'development'); + + expect(events).toHaveLength(1); + expect(events[0].type).toBe(DashboardEventType.BOB_AGENT_SPAWNED); + expect(events[0].data.agent).toBe('@dev'); + expect(events[0].data.pid).toBe(12345); + expect(events[0].data.task).toBe('development'); + }); + }); + + describe('emitBobAgentCompleted', () => { + it('should emit BobAgentCompleted event with correct data', async () => { + const events = []; + emitter.emit = async (type, data) => { + events.push({ type, data }); + }; + + await emitter.emitBobAgentCompleted('@dev', 12345, true, 5000); + + expect(events).toHaveLength(1); + expect(events[0].type).toBe(DashboardEventType.BOB_AGENT_COMPLETED); + expect(events[0].data.agent).toBe('@dev'); + expect(events[0].data.pid).toBe(12345); + expect(events[0].data.success).toBe(true); + expect(events[0].data.duration).toBe(5000); + }); + + it('should handle failed agent completion', async () => { + const events = []; + emitter.emit = async (type, data) => { + events.push({ type, data }); + }; + + await emitter.emitBobAgentCompleted('@dev', 12345, false, 3000); + + expect(events[0].data.success).toBe(false); + expect(events[0].data.duration).toBe(3000); + }); + }); + + describe('emitBobSurfaceDecision', () => { + it('should emit BobSurfaceDecision event with correct data', async () => { + const events = []; + emitter.emit = async (type, data) => { + events.push({ type, data }); + }; + + const context = { options: ['A', 'B', 'C'] }; + await emitter.emitBobSurfaceDecision('C003', 
'present_options', context); + + expect(events).toHaveLength(1); + expect(events[0].type).toBe(DashboardEventType.BOB_SURFACE_DECISION); + expect(events[0].data.criteria).toBe('C003'); + expect(events[0].data.action).toBe('present_options'); + expect(events[0].data.context).toEqual(context); + }); + + it('should handle empty context', async () => { + const events = []; + emitter.emit = async (type, data) => { + events.push({ type, data }); + }; + + await emitter.emitBobSurfaceDecision('C001', 'auto_decide'); + + expect(events[0].data.context).toEqual({}); + }); + }); + + describe('emitBobError', () => { + it('should emit BobError event with correct data', async () => { + const events = []; + emitter.emit = async (type, data) => { + events.push({ type, data }); + }; + + await emitter.emitBobError('development', 'Test failed', true); + + expect(events).toHaveLength(1); + expect(events[0].type).toBe(DashboardEventType.BOB_ERROR); + expect(events[0].data.phase).toBe('development'); + expect(events[0].data.message).toBe('Test failed'); + expect(events[0].data.recoverable).toBe(true); + }); + + it('should default recoverable to true', async () => { + const events = []; + emitter.emit = async (type, data) => { + events.push({ type, data }); + }; + + await emitter.emitBobError('quality_gate', 'Review failed'); + + expect(events[0].data.recoverable).toBe(true); + }); + + it('should handle non-recoverable errors', async () => { + const events = []; + emitter.emit = async (type, data) => { + events.push({ type, data }); + }; + + await emitter.emitBobError('push', 'Authentication failed', false); + + expect(events[0].data.recoverable).toBe(false); + }); + }); + + describe('Fallback behavior', () => { + it('should write to fallback file when emit fails', async () => { + // Force HTTP to fail + emitter._postEvent = async () => { + throw new Error('Network error'); + }; + + await emitter.emit(DashboardEventType.BOB_PHASE_CHANGE, { phase: 'test' }); + + // Wait for async write + 
await new Promise((resolve) => setTimeout(resolve, 100)); + + const exists = await fs.pathExists(emitter.fallbackPath); + expect(exists).toBe(true); + + const content = await fs.readFile(emitter.fallbackPath, 'utf8'); + const event = JSON.parse(content.trim()); + expect(event.type).toBe(DashboardEventType.BOB_PHASE_CHANGE); + expect(event.data.phase).toBe('test'); + }); + }); + + describe('Disabled emitter', () => { + it('should not emit when disabled', async () => { + emitter.enabled = false; + + // Override _postEvent to track calls + let postCalled = false; + emitter._postEvent = async () => { + postCalled = true; + }; + + await emitter.emitBobPhaseChange('development', '12.6', '@dev'); + + expect(postCalled).toBe(false); + }); + }); + + describe('Singleton pattern', () => { + it('should return same instance', () => { + const instance1 = getDashboardEmitter(); + const instance2 = getDashboardEmitter(); + expect(instance1).toBe(instance2); + }); + }); +}); + +``` + +================================================== +📄 tests/pro/pro-detector.test.js +================================================== +```js +/** + * Unit tests for pro-detector.js + * + * @see Story PRO-5 - aios-pro Repository Bootstrap (Task 3.2) + * @see ADR-PRO-001 - Repository Strategy + */ + +'use strict'; + +const fs = require('fs'); +const path = require('path'); + +// Module under test +const { + isProAvailable, + loadProModule, + getProVersion, + getProInfo, + _PRO_DIR, + _PRO_PACKAGE_PATH, +} = require('../../bin/utils/pro-detector'); + +// Mock fs module +jest.mock('fs'); + +// Store original require for selective mocking +const originalRequire = jest.requireActual; + +describe('pro-detector', () => { + beforeEach(() => { + jest.clearAllMocks(); + // Clear require cache for pro modules to prevent stale state + Object.keys(require.cache).forEach((key) => { + if (key.includes('pro-detector')) return; // Don't clear the module itself + if (key.includes(path.sep + 'pro' + path.sep)) { + 
delete require.cache[key]; + } + }); + }); + + describe('module exports', () => { + it('should export all expected functions', () => { + expect(typeof isProAvailable).toBe('function'); + expect(typeof loadProModule).toBe('function'); + expect(typeof getProVersion).toBe('function'); + expect(typeof getProInfo).toBe('function'); + }); + + it('should export internal paths for testing', () => { + expect(_PRO_DIR).toBeDefined(); + expect(_PRO_PACKAGE_PATH).toBeDefined(); + expect(_PRO_DIR).toContain('pro'); + expect(_PRO_PACKAGE_PATH).toContain('package.json'); + }); + }); + + describe('isProAvailable()', () => { + it('should return true when pro/package.json exists', () => { + fs.existsSync.mockReturnValue(true); + + expect(isProAvailable()).toBe(true); + expect(fs.existsSync).toHaveBeenCalledWith(_PRO_PACKAGE_PATH); + }); + + it('should return false when pro/package.json does not exist', () => { + fs.existsSync.mockReturnValue(false); + + expect(isProAvailable()).toBe(false); + }); + + it('should return false when fs.existsSync throws', () => { + fs.existsSync.mockImplementation(() => { + throw new Error('Permission denied'); + }); + + expect(isProAvailable()).toBe(false); + }); + + it('should check the correct path', () => { + fs.existsSync.mockReturnValue(false); + isProAvailable(); + + const calledPath = fs.existsSync.mock.calls[0][0]; + expect(calledPath).toMatch(/pro[/\\]package\.json$/); + }); + }); + + describe('loadProModule()', () => { + it('should return null when pro is not available', () => { + fs.existsSync.mockReturnValue(false); + + expect(loadProModule('squads/index')).toBeNull(); + }); + + it('should return null when module does not exist', () => { + fs.existsSync.mockReturnValue(true); + // require will throw for non-existent module + expect(loadProModule('non-existent-module-xyz-' + Date.now())).toBeNull(); + }); + + it('should return null when module throws during loading', () => { + fs.existsSync.mockReturnValue(true); + + // Mock a module that 
throws + jest.doMock( + path.join(_PRO_DIR, 'broken-module'), + () => { + throw new Error('Module initialization failed'); + }, + { virtual: true }, + ); + + expect(loadProModule('broken-module')).toBeNull(); + }); + + it('should load a valid module from pro/', () => { + fs.existsSync.mockReturnValue(true); + + const mockModule = { testFunc: () => 'works' }; + jest.doMock(path.join(_PRO_DIR, 'test-module'), () => mockModule, { + virtual: true, + }); + + const result = loadProModule('test-module'); + expect(result).toBeDefined(); + expect(result.testFunc()).toBe('works'); + }); + }); + + describe('getProVersion()', () => { + it('should return null when pro is not available', () => { + fs.existsSync.mockReturnValue(false); + + expect(getProVersion()).toBeNull(); + }); + + it('should return version from pro/package.json', () => { + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue( + JSON.stringify({ name: '@aios-fullstack/pro', version: '0.1.0' }), + ); + + expect(getProVersion()).toBe('0.1.0'); + expect(fs.readFileSync).toHaveBeenCalledWith(_PRO_PACKAGE_PATH, 'utf8'); + }); + + it('should return null when package.json has no version field', () => { + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue(JSON.stringify({ name: '@aios-fullstack/pro' })); + + expect(getProVersion()).toBeNull(); + }); + + it('should return null when package.json is corrupted', () => { + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue('not valid json {{{'); + + expect(getProVersion()).toBeNull(); + }); + + it('should return null when readFileSync throws', () => { + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockImplementation(() => { + throw new Error('EACCES: permission denied'); + }); + + expect(getProVersion()).toBeNull(); + }); + }); + + describe('getProInfo()', () => { + it('should return info with available=false when pro is not present', () => { + fs.existsSync.mockReturnValue(false); + + const info = 
getProInfo(); + expect(info).toEqual({ + available: false, + version: null, + path: _PRO_DIR, + }); + }); + + it('should return full info when pro is available', () => { + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue( + JSON.stringify({ name: '@aios-fullstack/pro', version: '0.1.0' }), + ); + + const info = getProInfo(); + expect(info).toEqual({ + available: true, + version: '0.1.0', + path: _PRO_DIR, + }); + }); + + it('should return available=true but version=null when package.json is corrupt', () => { + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue('invalid json'); + + const info = getProInfo(); + expect(info.available).toBe(true); + expect(info.version).toBeNull(); + expect(info.path).toBe(_PRO_DIR); + }); + }); + + describe('edge cases', () => { + it('should handle empty pro/ directory (uninitialized submodule)', () => { + // existsSync returns false for package.json even though pro/ dir exists + fs.existsSync.mockReturnValue(false); + + expect(isProAvailable()).toBe(false); + expect(getProVersion()).toBeNull(); + expect(loadProModule('anything')).toBeNull(); + }); + + it('should handle concurrent calls safely', () => { + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue( + JSON.stringify({ version: '1.0.0' }), + ); + + // Multiple simultaneous calls should not interfere + const results = Array.from({ length: 10 }, () => getProVersion()); + expect(results.every((v) => v === '1.0.0')).toBe(true); + }); + }); +}); + +``` + +================================================== +📄 tests/pro/memory/session-digest/extractor.test.js +================================================== +```js +/** + * Session Digest Extractor Tests + * Story MIS-3: Session Digest (PreCompact Hook) + * + * Requires pro/ submodule. Tests skip gracefully in CI + * where the submodule is not available. 
+ */ + +// Mock fs FIRST (before any requires) +jest.mock('fs', () => ({ + promises: { + mkdir: jest.fn().mockResolvedValue(undefined), + writeFile: jest.fn().mockResolvedValue(undefined), + }, +})); + +const fs = require('fs'); +const path = require('path'); +const yaml = require('yaml'); + +let extractorModule; +try { + extractorModule = require('../../../../pro/memory/session-digest/extractor'); +} catch (e) { + // pro/ submodule not available (CI environment) +} + +const isProAvailable = !!extractorModule; +const extractSessionDigest = isProAvailable ? extractorModule.extractSessionDigest : undefined; +const _analyzeConversation = isProAvailable ? extractorModule._analyzeConversation : undefined; +const _generateDigestDocument = isProAvailable ? extractorModule._generateDigestDocument : undefined; +const _writeDigest = isProAvailable ? extractorModule._writeDigest : undefined; + +(isProAvailable ? describe : describe.skip)('Session Digest Extractor', () => { + beforeEach(() => { + jest.clearAllMocks(); + fs.promises.mkdir.mockClear(); + fs.promises.writeFile.mockClear(); + fs.promises.mkdir.mockResolvedValue(undefined); + fs.promises.writeFile.mockResolvedValue(undefined); + jest.spyOn(console, 'log').mockImplementation(() => {}); + jest.spyOn(console, 'error').mockImplementation(() => {}); + }); + + afterEach(() => { + console.log.mockRestore(); + console.error.mockRestore(); + }); + + describe('extractSessionDigest', () => { + it('should extract and write digest successfully', async () => { + const context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + conversation: { + messages: [ + { role: 'user', content: 'Actually, the path should be /correct/path' }, + { role: 'assistant', content: 'I understand.' 
}, + ], + }, + metadata: { + sessionStart: Date.now() - 60000, // 1 minute ago + compactTrigger: 'context_limit_90%', + }, + }; + + const digestPath = await extractSessionDigest(context); + + // Check path contains correct components (cross-platform) + expect(digestPath).toContain('.aios'); + expect(digestPath).toContain('session-digests'); + expect(digestPath).toContain('test-session-123'); + expect(digestPath).toMatch(/\.yaml$/); + expect(fs.promises.mkdir).toHaveBeenCalled(); + expect(fs.promises.writeFile).toHaveBeenCalled(); + }); + + it('should handle extraction errors and throw', async () => { + const context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + conversation: { messages: [] }, + }; + + fs.promises.writeFile.mockRejectedValueOnce(new Error('Write failed')); + + await expect(extractSessionDigest(context)).rejects.toThrow('Write failed'); + }); + + it('should complete within performance budget (< 5s)', async () => { + const context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + conversation: { + messages: Array(100).fill({ role: 'user', content: 'Test message' }), + }, + metadata: {}, + }; + + const startTime = Date.now(); + await extractSessionDigest(context); + const duration = Date.now() - startTime; + + // Should complete in < 5 seconds (story requirement) + expect(duration).toBeLessThan(5000); + }); + }); + + describe('_analyzeConversation', () => { + it('should extract user corrections', async () => { + const context = { + conversation: { + messages: [ + { role: 'user', content: 'Actually, the correct way is to use async/await' }, + { role: 'user', content: 'No, that\'s wrong. 
Use promises instead' }, + ], + }, + metadata: {}, + }; + + const insights = await _analyzeConversation(context); + + expect(insights.corrections).toHaveLength(2); + expect(insights.corrections[0]).toContain('Actually'); + expect(insights.corrections[1]).toContain('No'); + }); + + it('should identify patterns in conversation', async () => { + const context = { + conversation: { + messages: [ + { role: 'user', content: 'How do I create a file?' }, + { role: 'user', content: 'How do I delete a file?' }, + { role: 'user', content: 'How do I read a file?' }, + ], + }, + metadata: {}, + }; + + const insights = await _analyzeConversation(context); + + expect(insights.patterns.length).toBeGreaterThan(0); + expect(insights.patterns[0]).toContain('how-to'); + }); + + it('should extract axioms from conversation', async () => { + const context = { + conversation: { + messages: [ + { role: 'assistant', content: 'Always use ESLint for code quality.' }, + { role: 'assistant', content: 'Never commit secrets to git.' 
}, + ], + }, + metadata: {}, + }; + + const insights = await _analyzeConversation(context); + + expect(insights.axioms).toHaveLength(2); + expect(insights.axioms[0]).toContain('Always use ESLint'); + expect(insights.axioms[1]).toContain('Never commit secrets'); + }); + + it('should capture context snapshot', async () => { + const context = { + conversation: { messages: [] }, + metadata: { + activeAgent: '@dev', + activeStory: 'MIS-3', + filesModified: ['file1.js', 'file2.js'], + }, + }; + + const insights = await _analyzeConversation(context); + + expect(insights.contextSnapshot).toMatchObject({ + activeAgent: '@dev', + activeStory: 'MIS-3', + filesModified: ['file1.js', 'file2.js'], + }); + }); + }); + + describe('_generateDigestDocument', () => { + it('should generate document with schema version', () => { + const context = { + sessionId: 'test-session-123', + metadata: {}, + }; + + const insights = { + corrections: ['Correction 1'], + patterns: ['Pattern 1'], + axioms: ['Axiom 1'], + contextSnapshot: { activeAgent: 'unknown' }, + }; + + const digest = _generateDigestDocument(context, insights); + + expect(digest.schema_version).toBe('1.0'); + expect(digest.session_id).toBe('test-session-123'); + expect(digest.timestamp).toBeDefined(); + expect(digest.body).toMatchObject({ + user_corrections: ['Correction 1'], + patterns_observed: ['Pattern 1'], + axioms_learned: ['Axiom 1'], + }); + }); + + it('should calculate session duration', () => { + const sessionStart = Date.now() - 120000; // 2 minutes ago + + const context = { + sessionId: 'test-session-123', + metadata: { sessionStart }, + }; + + const insights = { + corrections: [], + patterns: [], + axioms: [], + contextSnapshot: {}, + }; + + const digest = _generateDigestDocument(context, insights); + + expect(digest.duration_minutes).toBeGreaterThanOrEqual(1); + expect(digest.duration_minutes).toBeLessThanOrEqual(3); + }); + }); + + describe('_writeDigest', () => { + it('should create storage directory', async () 
=> { + const projectDir = '/test/project'; + const sessionId = 'test-session-123'; + const digest = { + schema_version: '1.0', + session_id: sessionId, + timestamp: new Date().toISOString(), + duration_minutes: 10, + agent_context: 'test', + compact_trigger: 'test', + body: { + user_corrections: [], + patterns_observed: [], + axioms_learned: [], + context_snapshot: {}, + }, + }; + + await _writeDigest(projectDir, sessionId, digest); + + expect(fs.promises.mkdir).toHaveBeenCalledWith( + expect.stringContaining('.aios'), + { recursive: true }, + ); + }); + + it('should write YAML file with correct naming', async () => { + const projectDir = '/test/project'; + const sessionId = 'test-session-123'; + const digest = { + schema_version: '1.0', + session_id: sessionId, + timestamp: '2026-02-09T18:00:00.000Z', + duration_minutes: 10, + agent_context: 'test', + compact_trigger: 'test', + body: { + user_corrections: [], + patterns_observed: [], + axioms_learned: [], + context_snapshot: {}, + }, + }; + + const digestPath = await _writeDigest(projectDir, sessionId, digest); + + // Check path components (cross-platform) + expect(digestPath).toContain('test-session-123'); + expect(digestPath).toMatch(/\.yaml$/); + expect(fs.promises.writeFile).toHaveBeenCalledWith( + expect.stringContaining('test-session-123'), + expect.stringContaining('schema_version: "1.0"'), + 'utf8', + ); + }); + + it('should generate valid YAML content', async () => { + const projectDir = '/test/project'; + const sessionId = 'test-session-123'; + const digest = { + schema_version: '1.0', + session_id: sessionId, + timestamp: '2026-02-09T18:00:00.000Z', + duration_minutes: 10, + agent_context: '@dev', + compact_trigger: 'context_limit', + body: { + user_corrections: ['Correction 1'], + patterns_observed: ['Pattern 1'], + axioms_learned: ['Axiom 1'], + context_snapshot: { activeAgent: '@dev' }, + }, + }; + + await _writeDigest(projectDir, sessionId, digest); + + const [, yamlContent] = 
fs.promises.writeFile.mock.calls[0]; + + // Should have frontmatter delimiter + expect(yamlContent).toContain('---'); + + // Should have body sections + expect(yamlContent).toContain('## User Corrections'); + expect(yamlContent).toContain('## Patterns Observed'); + expect(yamlContent).toContain('## Axioms Learned'); + expect(yamlContent).toContain('## Context Snapshot'); + + // Should be parseable YAML frontmatter + const frontmatterMatch = yamlContent.match(/^---\n([\s\S]+?)\n---/); + expect(frontmatterMatch).toBeTruthy(); + + const frontmatter = yaml.parse(frontmatterMatch[1]); + expect(frontmatter.schema_version).toBe('1.0'); + }); + }); +}); + +``` + +================================================== +📄 tests/license/performance.test.js +================================================== +```js +/** + * Performance Tests for License System + * + * @see Story PRO-6 - License Key & Feature Gating System + * @see Task 7.4 - Performance tests + * @see AC-13 - isAvailable() < 5ms, cache read < 10ms, activation < 3s + */ + +'use strict'; + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { + writeLicenseCache, + readLicenseCache, + getCachePath, + getAiosDir, +} = require('../../pro/license/license-cache'); +const { FeatureGate, featureGate } = require('../../pro/license/feature-gate'); +const { generateMachineId, deriveCacheKey, generateSalt } = require('../../pro/license/license-crypto'); + +describe('Performance Tests (AC-13)', () => { + let testDir; + + beforeEach(() => { + testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-perf-test-')); + // Reset the singleton for each test + featureGate._reset(); + }); + + afterEach(() => { + try { + fs.rmSync(testDir, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + featureGate._reset(); + }); + + // Helper to create valid test cache data + function createTestCacheData(overrides = {}) { + return { + key: 'PRO-ABCD-EFGH-IJKL-MNOP', + activatedAt: 
new Date().toISOString(), + expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toISOString(), + features: ['pro.squads.*', 'pro.memory.*', 'pro.metrics.*'], + seats: { used: 1, max: 5 }, + cacheValidDays: 30, + gracePeriodDays: 7, + ...overrides, + }; + } + + // Helper to measure execution time + function measureTime(fn, iterations = 1) { + const times = []; + for (let i = 0; i < iterations; i++) { + const start = process.hrtime.bigint(); + fn(); + const end = process.hrtime.bigint(); + times.push(Number(end - start) / 1_000_000); // Convert to ms + } + return { + min: Math.min(...times), + max: Math.max(...times), + avg: times.reduce((a, b) => a + b, 0) / times.length, + total: times.reduce((a, b) => a + b, 0), + iterations, + }; + } + + // Helper to measure async execution time + async function measureTimeAsync(fn, iterations = 1) { + const times = []; + for (let i = 0; i < iterations; i++) { + const start = process.hrtime.bigint(); + await fn(); + const end = process.hrtime.bigint(); + times.push(Number(end - start) / 1_000_000); + } + return { + min: Math.min(...times), + max: Math.max(...times), + avg: times.reduce((a, b) => a + b, 0) / times.length, + total: times.reduce((a, b) => a + b, 0), + iterations, + }; + } + + describe('isAvailable() Performance (< 5ms per call)', () => { + beforeEach(() => { + // Setup valid license cache + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + // Mock the cache path to use testDir + // We need to pre-load the cache by reading from testDir + const cache = readLicenseCache(testDir); + // Directly set the internal state for testing + featureGate._cache = cache; + featureGate._cacheLoaded = true; + if (cache && cache.features) { + for (const feature of cache.features) { + featureGate._licensedFeatures.add(feature); + } + } + }); + + it('should complete isAvailable() in < 5ms (single call, cached)', () => { + const result = measureTime(() => { + featureGate.isAvailable('pro.squads.premium'); 
+ }, 1); + + expect(result.avg).toBeLessThan(5); + }); + + it('should complete 1000 isAvailable() calls with average < 5ms', () => { + const iterations = 1000; + const result = measureTime(() => { + featureGate.isAvailable('pro.squads.premium'); + }, iterations); + + // Average should be well under 5ms (cache is already loaded) + expect(result.avg).toBeLessThan(5); + + // Even the max should be reasonable + expect(result.max).toBeLessThan(10); + }); + + it('should complete 1000 isAvailable() calls for exact match in < 5ms avg', () => { + // Test with exact feature match + featureGate._licensedFeatures.add('pro.squads.premium'); + + const result = measureTime(() => { + featureGate.isAvailable('pro.squads.premium'); + }, 1000); + + expect(result.avg).toBeLessThan(5); + }); + + it('should complete 1000 isAvailable() calls for wildcard match in < 5ms avg', () => { + // Test with wildcard pattern (slightly more work) + const result = measureTime(() => { + featureGate.isAvailable('pro.squads.custom'); + }, 1000); + + expect(result.avg).toBeLessThan(5); + }); + + it('should complete 1000 isAvailable() calls for non-existent feature in < 5ms avg', () => { + // Non-existent features should also be fast + const result = measureTime(() => { + featureGate.isAvailable('pro.nonexistent.feature'); + }, 1000); + + expect(result.avg).toBeLessThan(5); + }); + + it('should maintain performance with many licensed features', () => { + // Add many features to stress test + for (let i = 0; i < 100; i++) { + featureGate._licensedFeatures.add(`pro.module${i}.feature${i}`); + } + featureGate._licensedFeatures.add('pro.target.feature'); + + const result = measureTime(() => { + featureGate.isAvailable('pro.target.feature'); + }, 1000); + + expect(result.avg).toBeLessThan(5); + }); + }); + + describe('Cache Read Performance', () => { + // Note: Cache read includes PBKDF2 key derivation (100k iterations) + // which is intentionally slow for security. Thresholds account for this. 
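    // Illustrative cost check (an editorial addition for intuition, not part
    // of the module under test): a single PBKDF2-SHA256 derivation at 100k
    // iterations dominates each cache read, which is why the thresholds in
    // this describe block are looser than the 5ms isAvailable() budget.
    const pbkdf2Sketch = require('crypto').pbkdf2Sync('machine-id', 'salt', 100000, 32, 'sha256');
    // pbkdf2Sketch is the 32-byte derived key; computing it costs tens of ms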
+ beforeEach(() => { + // Setup valid license cache + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + }); + + it('should read cache in < 150ms (single read, includes PBKDF2)', () => { + const result = measureTime(() => { + readLicenseCache(testDir); + }, 1); + + // PBKDF2 with 100k iterations takes ~20-70ms depending on hardware + expect(result.avg).toBeLessThan(150); + }); + + it('should read cache in < 100ms average (10 reads)', () => { + const result = measureTime(() => { + readLicenseCache(testDir); + }, 10); + + // Average should be consistent + expect(result.avg).toBeLessThan(100); + }); + + it('should handle cache miss quickly (< 5ms)', () => { + const emptyDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-perf-empty-')); + try { + const result = measureTime(() => { + readLicenseCache(emptyDir); + }, 100); + + // Cache miss should be very fast (no PBKDF2) + expect(result.avg).toBeLessThan(5); + } finally { + fs.rmSync(emptyDir, { recursive: true, force: true }); + } + }); + + it('should write cache in reasonable time (< 100ms)', () => { + const data = createTestCacheData(); + const writeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-perf-write-')); + + try { + const result = measureTime(() => { + writeLicenseCache(data, writeDir); + }, 10); + + // Write includes PBKDF2, encryption, and file I/O + expect(result.avg).toBeLessThan(100); + } finally { + fs.rmSync(writeDir, { recursive: true, force: true }); + } + }); + }); + + describe('Crypto Operations Performance', () => { + it('should generate machineId in < 50ms', () => { + const result = measureTime(() => { + generateMachineId(); + }, 100); + + // machineId includes network interface enumeration and SHA-256 + // which can vary based on system configuration + expect(result.avg).toBeLessThan(50); + }); + + it('should derive cache key in < 100ms (PBKDF2 is intentionally slow)', () => { + const machineId = generateMachineId(); + const salt = generateSalt(); + + const result = 
measureTime(() => { + deriveCacheKey(machineId, salt); + }, 10); + + // PBKDF2 with 100k iterations should be slow-ish for security + // but not too slow for UX + expect(result.avg).toBeLessThan(100); + }); + + it('should generate salt quickly (< 5ms)', () => { + const result = measureTime(() => { + generateSalt(); + }, 100); + + expect(result.avg).toBeLessThan(5); + }); + }); + + describe('Full Activation Flow Performance (< 3s with mocked API)', () => { + // This test simulates the full activation flow without actual network calls + it('should complete activation flow in < 3s (mocked)', async () => { + const data = createTestCacheData(); + + // Mock the API call timing + const mockApiCall = () => + new Promise((resolve) => { + // Simulate typical API latency (100-200ms) + setTimeout(() => { + resolve({ + key: data.key, + features: data.features, + seats: data.seats, + expiresAt: data.expiresAt, + cacheValidDays: 30, + gracePeriodDays: 7, + }); + }, 150); + }); + + const activationDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-perf-act-')); + + try { + const result = await measureTimeAsync(async () => { + // Step 1: Validate key format (sync, fast) + const { validateKeyFormat } = require('../../pro/license/license-crypto'); + validateKeyFormat(data.key); + + // Step 2: API call (mocked) + const response = await mockApiCall(); + + // Step 3: Write cache + writeLicenseCache( + { + ...response, + activatedAt: new Date().toISOString(), + }, + activationDir, + ); + + // Step 4: Reload feature gate + const gate = new FeatureGate(); + gate._cache = readLicenseCache(activationDir); + gate._cacheLoaded = true; + }, 5); + + // Should complete in < 3s even with network simulation + expect(result.avg).toBeLessThan(3000); + } finally { + fs.rmSync(activationDir, { recursive: true, force: true }); + } + }); + + it('should complete offline activation check in < 100ms', () => { + // Setup cache + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + const 
result = measureTime(() => { + // Full flow: read cache, check state, check features + const cache = readLicenseCache(testDir); + if (cache) { + const { getLicenseState } = require('../../pro/license/license-cache'); + getLicenseState(cache); + + // Create a new gate and check features + const gate = new FeatureGate(); + gate._cache = cache; + gate._cacheLoaded = true; + gate._licensedFeatures = new Set(cache.features); + + gate.isAvailable('pro.squads.premium'); + gate.isAvailable('pro.memory.persistent'); + gate.isAvailable('pro.metrics.dashboard'); + } + }, 10); + + expect(result.avg).toBeLessThan(100); + }); + }); + + describe('Feature Gate Reload Performance', () => { + it('should reload feature gate in < 150ms (includes PBKDF2)', () => { + // Setup cache + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + // Pre-setup the gate to use testDir + featureGate._cache = readLicenseCache(testDir); + featureGate._cacheLoaded = true; + + const result = measureTime(() => { + // Simulate reload + featureGate._cacheLoaded = false; + featureGate._licensedFeatures.clear(); + + // Re-read and populate (includes PBKDF2) + const cache = readLicenseCache(testDir); + if (cache && cache.features) { + for (const feature of cache.features) { + featureGate._licensedFeatures.add(feature); + } + } + featureGate._cache = cache; + featureGate._cacheLoaded = true; + }, 10); + + // Reload includes cache read which has PBKDF2 + expect(result.avg).toBeLessThan(150); + }); + }); + + describe('Memory Efficiency', () => { + it('should not leak memory across multiple cache operations', () => { + const data = createTestCacheData(); + const initialMemory = process.memoryUsage().heapUsed; + + // Perform many operations + for (let i = 0; i < 100; i++) { + const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), `aios-mem-${i}-`)); + try { + writeLicenseCache(data, tempDir); + readLicenseCache(tempDir); + } finally { + fs.rmSync(tempDir, { recursive: true, force: true }); + } 
+ } + + // Force GC if available + if (global.gc) { + global.gc(); + } + + const finalMemory = process.memoryUsage().heapUsed; + const memoryIncrease = (finalMemory - initialMemory) / 1024 / 1024; // MB + + // Should not increase by more than 10MB + expect(memoryIncrease).toBeLessThan(10); + }); + + it('should handle large feature sets efficiently', () => { + // Create data with many features + const manyFeatures = []; + for (let i = 0; i < 1000; i++) { + manyFeatures.push(`pro.module${i}.feature${i}`); + } + + const data = createTestCacheData({ features: manyFeatures }); + writeLicenseCache(data, testDir); + + const result = measureTime(() => { + const cache = readLicenseCache(testDir); + expect(cache).not.toBeNull(); + expect(cache.features.length).toBe(1000); + }, 5); + + // Cache read includes PBKDF2, so allow more time + // Even with large feature set, crypto is the bottleneck, not features + expect(result.avg).toBeLessThan(150); + }); + }); + + describe('Concurrent Access Performance', () => { + it('should handle concurrent reads efficiently', async () => { + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + const concurrentReads = 10; + const readPromises = []; + + const start = process.hrtime.bigint(); + + for (let i = 0; i < concurrentReads; i++) { + readPromises.push( + new Promise((resolve) => { + setImmediate(() => { + const cache = readLicenseCache(testDir); + resolve(cache); + }); + }), + ); + } + + const results = await Promise.all(readPromises); + const end = process.hrtime.bigint(); + const totalTime = Number(end - start) / 1_000_000; + + // All reads should succeed + expect(results.every((r) => r !== null)).toBe(true); + + // Total time for 10 concurrent reads + // Each read includes PBKDF2 (~50-70ms), so total could be ~500-700ms + // on single-threaded Node.js (reads are sequential due to crypto operations) + expect(totalTime).toBeLessThan(1500); + }); + }); + + describe('Stress Testing', () => { + it('should maintain 
performance under repeated operations', () => { + const data = createTestCacheData(); + + // Pre-populate + writeLicenseCache(data, testDir); + const cache = readLicenseCache(testDir); + featureGate._cache = cache; + featureGate._cacheLoaded = true; + if (cache && cache.features) { + for (const feature of cache.features) { + featureGate._licensedFeatures.add(feature); + } + } + + // Measure first batch + const result1 = measureTime(() => { + featureGate.isAvailable('pro.squads.premium'); + }, 1000); + + // Measure second batch (should be similar) + const result2 = measureTime(() => { + featureGate.isAvailable('pro.squads.premium'); + }, 1000); + + // Performance should not degrade significantly + expect(result2.avg).toBeLessThan(result1.avg * 2); + expect(result2.avg).toBeLessThan(5); + }); + }); +}); + +``` + +================================================== +📄 tests/license/license-crypto.test.js +================================================== +```js +/** + * Unit tests for license-crypto.js + * + * @see Story PRO-6 - License Key & Feature Gating System + * @see AC-9, AC-10 - Tamper resistance, Machine specificity + */ + +'use strict'; + +const crypto = require('crypto'); +const { + generateMachineId, + deriveCacheKey, + encrypt, + decrypt, + computeHMAC, + generateSalt, + verifyHMAC, + maskKey, + validateKeyFormat, + _CONFIG, +} = require('../../pro/license/license-crypto'); + +describe('license-crypto', () => { + describe('generateMachineId', () => { + it('should generate a deterministic machine ID', () => { + const id1 = generateMachineId(); + const id2 = generateMachineId(); + + expect(id1).toBe(id2); + }); + + it('should return a 64-character hex string (SHA-256)', () => { + const id = generateMachineId(); + + expect(typeof id).toBe('string'); + expect(id).toHaveLength(64); + expect(/^[a-f0-9]{64}$/.test(id)).toBe(true); + }); + + it('should not be empty', () => { + const id = generateMachineId(); + expect(id).not.toBe(''); + }); + }); + + 
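  // Illustrative sketch (an assumption, not generateMachineId's actual code):
  // the performance-test notes say the ID comes from network interface
  // enumeration plus SHA-256, so a deterministic derivation could look like:
  const sketchCrypto = require('crypto');
  const sketchOs = require('os');
  const sketchMacs = Object.values(sketchOs.networkInterfaces())
    .flat()
    .map((iface) => iface.mac)
    .filter((mac) => mac && mac !== '00:00:00:00:00:00')
    .sort();
  const machineIdSketch = sketchCrypto
    .createHash('sha256')
    .update(sketchOs.hostname() + sketchMacs.join(','))
    .digest('hex');
  // machineIdSketch: 64-char hex string, stable across runs on the same host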
describe('generateSalt', () => { + it('should generate a 16-byte salt by default', () => { + const salt = generateSalt(); + + expect(Buffer.isBuffer(salt)).toBe(true); + expect(salt.length).toBe(16); + }); + + it('should generate custom length salt', () => { + const salt = generateSalt(32); + + expect(salt.length).toBe(32); + }); + + it('should generate unique salts', () => { + const salt1 = generateSalt(); + const salt2 = generateSalt(); + + expect(salt1.toString('hex')).not.toBe(salt2.toString('hex')); + }); + }); + + describe('deriveCacheKey', () => { + const testMachineId = 'abc123def456'; + const testSalt = crypto.randomBytes(16); + + it('should derive a 32-byte key (256 bits)', () => { + const key = deriveCacheKey(testMachineId, testSalt); + + expect(Buffer.isBuffer(key)).toBe(true); + expect(key.length).toBe(32); + }); + + it('should be deterministic with same inputs', () => { + const key1 = deriveCacheKey(testMachineId, testSalt); + const key2 = deriveCacheKey(testMachineId, testSalt); + + expect(key1.toString('hex')).toBe(key2.toString('hex')); + }); + + it('should produce different keys with different salts', () => { + const salt1 = crypto.randomBytes(16); + const salt2 = crypto.randomBytes(16); + + const key1 = deriveCacheKey(testMachineId, salt1); + const key2 = deriveCacheKey(testMachineId, salt2); + + expect(key1.toString('hex')).not.toBe(key2.toString('hex')); + }); + + it('should produce different keys with different machine IDs', () => { + const key1 = deriveCacheKey('machine-a', testSalt); + const key2 = deriveCacheKey('machine-b', testSalt); + + expect(key1.toString('hex')).not.toBe(key2.toString('hex')); + }); + + it('should accept salt as hex string', () => { + const saltHex = testSalt.toString('hex'); + const key = deriveCacheKey(testMachineId, saltHex); + + expect(key.length).toBe(32); + }); + + it('should use minimum 100000 PBKDF2 iterations', () => { + expect(_CONFIG.PBKDF2_ITERATIONS).toBeGreaterThanOrEqual(100000); + }); + }); + + 
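  // Illustrative round-trip (an assumption: the encrypt/decrypt expectations
  // below imply AES-256-GCM with a random IV and an auth tag, roughly):
  const gcmCrypto = require('crypto');
  const gcmKey = gcmCrypto.randomBytes(32);
  const gcmIv = gcmCrypto.randomBytes(12); // 96-bit IV is the GCM convention
  const gcmCipher = gcmCrypto.createCipheriv('aes-256-gcm', gcmKey, gcmIv);
  const gcmCiphertext = Buffer.concat([gcmCipher.update('hello', 'utf8'), gcmCipher.final()]);
  const gcmTag = gcmCipher.getAuthTag(); // verified on decrypt; tampering throws
  const gcmDecipher = gcmCrypto.createDecipheriv('aes-256-gcm', gcmKey, gcmIv);
  gcmDecipher.setAuthTag(gcmTag);
  const gcmPlaintext = Buffer.concat([gcmDecipher.update(gcmCiphertext), gcmDecipher.final()]).toString('utf8');
  // gcmPlaintext === 'hello'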
describe('encrypt', () => { + const testKey = crypto.randomBytes(32); + const testData = { foo: 'bar', count: 42 }; + + it('should encrypt data and return ciphertext, iv, and tag', () => { + const result = encrypt(testData, testKey); + + expect(result).toHaveProperty('ciphertext'); + expect(result).toHaveProperty('iv'); + expect(result).toHaveProperty('tag'); + }); + + it('should return hex-encoded strings', () => { + const result = encrypt(testData, testKey); + + expect(/^[a-f0-9]+$/.test(result.ciphertext)).toBe(true); + expect(/^[a-f0-9]+$/.test(result.iv)).toBe(true); + expect(/^[a-f0-9]+$/.test(result.tag)).toBe(true); + }); + + it('should generate different IVs for each encryption', () => { + const result1 = encrypt(testData, testKey); + const result2 = encrypt(testData, testKey); + + expect(result1.iv).not.toBe(result2.iv); + }); + + it('should encrypt strings directly', () => { + const result = encrypt('hello world', testKey); + + expect(result.ciphertext).toBeTruthy(); + }); + + it('should throw for invalid key length', () => { + const shortKey = crypto.randomBytes(16); + + expect(() => encrypt(testData, shortKey)).toThrow('256 bits'); + }); + + it('should accept key as hex string', () => { + const keyHex = testKey.toString('hex'); + const result = encrypt(testData, keyHex); + + expect(result.ciphertext).toBeTruthy(); + }); + }); + + describe('decrypt', () => { + const testKey = crypto.randomBytes(32); + const testData = { foo: 'bar', count: 42 }; + + it('should decrypt data back to original', () => { + const encrypted = encrypt(testData, testKey); + const decrypted = decrypt(encrypted, testKey); + + expect(decrypted).toEqual(testData); + }); + + it('should decrypt string data', () => { + const original = 'hello world'; + const encrypted = encrypt(original, testKey); + const decrypted = decrypt(encrypted, testKey, false); + + expect(decrypted).toBe(original); + }); + + it('should fail with wrong key', () => { + const encrypted = encrypt(testData, testKey); 
+ const wrongKey = crypto.randomBytes(32); + + expect(() => decrypt(encrypted, wrongKey)).toThrow(); + }); + + it('should fail with tampered ciphertext', () => { + const encrypted = encrypt(testData, testKey); + // Tamper with ciphertext + encrypted.ciphertext = encrypted.ciphertext.replace(/[a-f]/g, 'f'); + + expect(() => decrypt(encrypted, testKey)).toThrow(); + }); + + it('should fail with tampered auth tag', () => { + const encrypted = encrypt(testData, testKey); + // Tamper with tag + encrypted.tag = crypto.randomBytes(16).toString('hex'); + + expect(() => decrypt(encrypted, testKey)).toThrow(); + }); + + it('should fail with tampered IV', () => { + const encrypted = encrypt(testData, testKey); + // Tamper with IV + encrypted.iv = crypto.randomBytes(12).toString('hex'); + + expect(() => decrypt(encrypted, testKey)).toThrow(); + }); + + it('should throw for invalid key length', () => { + const encrypted = encrypt(testData, testKey); + const shortKey = crypto.randomBytes(16); + + expect(() => decrypt(encrypted, shortKey)).toThrow('256 bits'); + }); + }); + + describe('computeHMAC', () => { + const testKey = crypto.randomBytes(32); + const testData = 'test data string'; + + it('should compute a 64-character hex HMAC', () => { + const hmac = computeHMAC(testData, testKey); + + expect(typeof hmac).toBe('string'); + expect(hmac).toHaveLength(64); + expect(/^[a-f0-9]{64}$/.test(hmac)).toBe(true); + }); + + it('should be deterministic', () => { + const hmac1 = computeHMAC(testData, testKey); + const hmac2 = computeHMAC(testData, testKey); + + expect(hmac1).toBe(hmac2); + }); + + it('should produce different HMACs for different data', () => { + const hmac1 = computeHMAC('data1', testKey); + const hmac2 = computeHMAC('data2', testKey); + + expect(hmac1).not.toBe(hmac2); + }); + + it('should produce different HMACs for different keys', () => { + const key1 = crypto.randomBytes(32); + const key2 = crypto.randomBytes(32); + + const hmac1 = computeHMAC(testData, key1); + 
const hmac2 = computeHMAC(testData, key2); + + expect(hmac1).not.toBe(hmac2); + }); + + it('should accept Buffer data', () => { + const dataBuffer = Buffer.from(testData, 'utf8'); + const hmac = computeHMAC(dataBuffer, testKey); + + expect(hmac).toHaveLength(64); + }); + }); + + describe('verifyHMAC', () => { + const testKey = crypto.randomBytes(32); + const testData = 'test data'; + + it('should return true for valid HMAC', () => { + const hmac = computeHMAC(testData, testKey); + const result = verifyHMAC(testData, testKey, hmac); + + expect(result).toBe(true); + }); + + it('should return false for invalid HMAC', () => { + const fakeHmac = crypto.randomBytes(32).toString('hex'); + const result = verifyHMAC(testData, testKey, fakeHmac); + + expect(result).toBe(false); + }); + + it('should return false for tampered data', () => { + const hmac = computeHMAC(testData, testKey); + const result = verifyHMAC('tampered data', testKey, hmac); + + expect(result).toBe(false); + }); + + it('should return false for wrong key', () => { + const hmac = computeHMAC(testData, testKey); + const wrongKey = crypto.randomBytes(32); + const result = verifyHMAC(testData, wrongKey, hmac); + + expect(result).toBe(false); + }); + + it('should return false for mismatched length', () => { + const result = verifyHMAC(testData, testKey, 'short'); + + expect(result).toBe(false); + }); + }); + + describe('maskKey', () => { + it('should mask standard PRO key format', () => { + const key = 'PRO-ABCD-EFGH-IJKL-MNOP'; + const masked = maskKey(key); + + expect(masked).toBe('PRO-ABCD-****-****-MNOP'); + }); + + it('should handle null/undefined', () => { + expect(maskKey(null)).toBe('****-****-****-****'); + expect(maskKey(undefined)).toBe('****-****-****-****'); + }); + + it('should handle non-string input', () => { + expect(maskKey(12345)).toBe('****-****-****-****'); + expect(maskKey({})).toBe('****-****-****-****'); + }); + + it('should handle non-standard format', () => { + const key = 
'ABCDEFGHIJKLMNOP'; + const masked = maskKey(key); + + expect(masked).toBe('ABCD-****-****-MNOP'); + }); + + it('should handle short keys', () => { + expect(maskKey('ABC')).toBe('****'); + expect(maskKey('ABCDEFGH')).toBe('****'); + }); + + it('should never reveal middle sections', () => { + const key = 'PRO-1234-ABCD-EFGH-5678'; + const masked = maskKey(key); + + expect(masked).not.toContain('ABCD'); + expect(masked).not.toContain('EFGH'); + }); + }); + + describe('validateKeyFormat', () => { + it('should accept valid PRO key format', () => { + expect(validateKeyFormat('PRO-ABCD-EFGH-IJKL-MNOP')).toBe(true); + expect(validateKeyFormat('PRO-1234-5678-90AB-CDEF')).toBe(true); + }); + + it('should reject lowercase', () => { + expect(validateKeyFormat('PRO-abcd-efgh-ijkl-mnop')).toBe(false); + expect(validateKeyFormat('pro-ABCD-EFGH-IJKL-MNOP')).toBe(false); + }); + + it('should reject wrong prefix', () => { + expect(validateKeyFormat('DEV-ABCD-EFGH-IJKL-MNOP')).toBe(false); + expect(validateKeyFormat('ABCD-EFGH-IJKL-MNOP-1234')).toBe(false); + }); + + it('should reject wrong segment length', () => { + expect(validateKeyFormat('PRO-ABC-EFGH-IJKL-MNOP')).toBe(false); + expect(validateKeyFormat('PRO-ABCDE-EFGH-IJKL-MNOP')).toBe(false); + }); + + it('should reject wrong segment count', () => { + expect(validateKeyFormat('PRO-ABCD-EFGH-IJKL')).toBe(false); + expect(validateKeyFormat('PRO-ABCD-EFGH-IJKL-MNOP-QRST')).toBe(false); + }); + + it('should reject special characters', () => { + expect(validateKeyFormat('PRO-AB@D-EFGH-IJKL-MNOP')).toBe(false); + expect(validateKeyFormat('PRO-ABCD-EF!H-IJKL-MNOP')).toBe(false); + }); + + it('should reject null/undefined/empty', () => { + expect(validateKeyFormat(null)).toBe(false); + expect(validateKeyFormat(undefined)).toBe(false); + expect(validateKeyFormat('')).toBe(false); + }); + + it('should reject non-string input', () => { + expect(validateKeyFormat(12345)).toBe(false); + expect(validateKeyFormat({})).toBe(false); + }); + 
}); + + describe('AES-256-GCM configuration', () => { + it('should use AES-256-GCM algorithm', () => { + expect(_CONFIG.AES_ALGORITHM).toBe('aes-256-gcm'); + }); + + it('should use 12-byte IV (96 bits)', () => { + expect(_CONFIG.AES_IV_LENGTH).toBe(12); + }); + + it('should use 16-byte auth tag (128 bits)', () => { + expect(_CONFIG.AES_TAG_LENGTH).toBe(16); + }); + }); + + describe('Security: Machine specificity (AC-10)', () => { + it('should produce different cache keys for different machines', () => { + const salt = generateSalt(); + const key1 = deriveCacheKey('machine-id-a', salt); + const key2 = deriveCacheKey('machine-id-b', salt); + + expect(key1.toString('hex')).not.toBe(key2.toString('hex')); + }); + + it('should make encrypted data non-portable between machines', () => { + const salt = generateSalt(); + const data = { license: 'test', features: ['pro.squads.*'] }; + + // Encrypt with machine A's key + const keyA = deriveCacheKey('machine-a-id', salt); + const encrypted = encrypt(data, keyA); + + // Try to decrypt with machine B's key + const keyB = deriveCacheKey('machine-b-id', salt); + + expect(() => decrypt(encrypted, keyB)).toThrow(); + }); + }); + + describe('Security: Tamper resistance (AC-9)', () => { + it('should detect tampered encrypted data via auth tag', () => { + const key = crypto.randomBytes(32); + const data = { license: 'test' }; + const encrypted = encrypt(data, key); + + // Tamper with ciphertext + const tamperedCiphertext = Buffer.from(encrypted.ciphertext, 'hex'); + tamperedCiphertext[0] ^= 0xff; // Flip bits + encrypted.ciphertext = tamperedCiphertext.toString('hex'); + + expect(() => decrypt(encrypted, key)).toThrow(); + }); + + it('should detect tampered HMAC', () => { + const key = crypto.randomBytes(32); + const data = 'original data'; + const originalHmac = computeHMAC(data, key); + + // Tamper with HMAC + const tamperedHmac = 'a'.repeat(64); + + expect(verifyHMAC(data, key, originalHmac)).toBe(true); + expect(verifyHMAC(data, 
key, tamperedHmac)).toBe(false); + }); + }); +}); + +``` + +================================================== +📄 tests/license/auth-errors.test.js +================================================== +```js +/** + * Unit tests for AuthError and BuyerValidationError (PRO-11) + * + * @see Story PRO-11 - Email Authentication & Buyer-Based Pro Activation + */ + +'use strict'; + +const { AuthError, BuyerValidationError } = require('../../pro/license/errors'); + +describe('AuthError', () => { + it('should create error with defaults', () => { + const error = new AuthError('Test error'); + + expect(error).toBeInstanceOf(Error); + expect(error).toBeInstanceOf(AuthError); + expect(error.name).toBe('AuthError'); + expect(error.message).toBe('Test error'); + expect(error.code).toBe('AUTH_FAILED'); + expect(error.details).toEqual({}); + }); + + it('should create error with custom code and details', () => { + const error = new AuthError('Custom', 'CUSTOM_CODE', { foo: 'bar' }); + + expect(error.code).toBe('CUSTOM_CODE'); + expect(error.details).toEqual({ foo: 'bar' }); + }); + + describe('static factories', () => { + it('invalidCredentials', () => { + const error = AuthError.invalidCredentials(); + + expect(error.code).toBe('INVALID_CREDENTIALS'); + expect(error.message).toContain('Invalid email or password'); + }); + + it('emailNotVerified', () => { + const error = AuthError.emailNotVerified(); + + expect(error.code).toBe('EMAIL_NOT_VERIFIED'); + expect(error.message).toContain('verify your email'); + }); + + it('emailAlreadyRegistered', () => { + const error = AuthError.emailAlreadyRegistered(); + + expect(error.code).toBe('EMAIL_ALREADY_REGISTERED'); + expect(error.message).toContain('already exists'); + }); + + it('rateLimited with retryAfter', () => { + const error = AuthError.rateLimited(900); + + expect(error.code).toBe('AUTH_RATE_LIMITED'); + expect(error.message).toContain('15 minutes'); + expect(error.details.retryAfter).toBe(900); + }); + + it('rateLimited without 
retryAfter', () => { + const error = AuthError.rateLimited(); + + expect(error.code).toBe('AUTH_RATE_LIMITED'); + expect(error.message).toContain('try again later'); + }); + + it('verificationTimeout', () => { + const error = AuthError.verificationTimeout(); + + expect(error.code).toBe('VERIFICATION_TIMEOUT'); + expect(error.message).toContain('timed out'); + }); + }); + + it('should serialize to JSON', () => { + const error = new AuthError('Test', 'TEST_CODE', { extra: true }); + const json = error.toJSON(); + + expect(json.error).toBe('AuthError'); + expect(json.code).toBe('TEST_CODE'); + expect(json.message).toBe('Test'); + expect(json.details).toEqual({ extra: true }); + }); +}); + +describe('BuyerValidationError', () => { + it('should create error with defaults', () => { + const error = new BuyerValidationError('Test'); + + expect(error).toBeInstanceOf(Error); + expect(error).toBeInstanceOf(BuyerValidationError); + expect(error.name).toBe('BuyerValidationError'); + expect(error.code).toBe('BUYER_VALIDATION_FAILED'); + }); + + describe('static factories', () => { + it('notABuyer', () => { + const error = BuyerValidationError.notABuyer(); + + expect(error.code).toBe('NOT_A_BUYER'); + expect(error.message).toContain('No active Pro subscription'); + }); + + it('serviceUnavailable', () => { + const error = BuyerValidationError.serviceUnavailable(); + + expect(error.code).toBe('BUYER_SERVICE_UNAVAILABLE'); + expect(error.message).toContain('temporarily unavailable'); + }); + }); + + it('should serialize to JSON', () => { + const error = BuyerValidationError.notABuyer(); + const json = error.toJSON(); + + expect(json.error).toBe('BuyerValidationError'); + expect(json.code).toBe('NOT_A_BUYER'); + }); +}); + +``` + +================================================== +📄 tests/license/degradation.test.js +================================================== +```js +/** + * Unit tests for degradation.js + * + * @see Story PRO-6 - License Key & Feature Gating System + * @see 
AC-8 - Graceful degradation + */ + +'use strict'; + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { + withGracefulDegradation, + ifProAvailable, + getFeatureFriendlyName, + isInDegradedMode, + getDegradationStatus, + createDegradationWrapper, +} = require('../../pro/license/degradation'); +const { featureGate } = require('../../pro/license/feature-gate'); +const { ProFeatureError } = require('../../pro/license/errors'); +const { writeLicenseCache, deleteLicenseCache } = require('../../pro/license/license-cache'); + +describe('degradation', () => { + let testDir; + let originalCwd; + let consoleLogSpy; + + beforeEach(() => { + // Create temp directory for each test + testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-degradation-test-')); + + // Mock process.cwd to return our test directory + originalCwd = process.cwd; + process.cwd = () => testDir; + + // Reset the singleton for clean tests + featureGate._reset(); + + // Spy on console.log to check degradation messages + consoleLogSpy = jest.spyOn(console, 'log').mockImplementation(() => {}); + }); + + afterEach(() => { + // Restore + process.cwd = originalCwd; + consoleLogSpy.mockRestore(); + + // Cleanup temp directory + try { + fs.rmSync(testDir, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + }); + + // Helper to create test cache + function createTestCache(features = ['pro.squads.*'], overrides = {}) { + return { + key: 'PRO-TEST-1234-5678-ABCD', + activatedAt: new Date().toISOString(), + expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toISOString(), + features, + seats: { used: 1, max: 5 }, + cacheValidDays: 30, + gracePeriodDays: 7, + ...overrides, + }; + } + + describe('withGracefulDegradation', () => { + it('should execute proAction when feature is available', () => { + writeLicenseCache(createTestCache(['pro.squads.premium']), testDir); + + const result = withGracefulDegradation( + 'pro.squads.premium', + () => 
'pro-result', + () => 'fallback-result', + ); + + expect(result).toBe('pro-result'); + }); + + it('should execute fallbackAction when feature is not available', () => { + // No license + + const result = withGracefulDegradation( + 'pro.squads.premium', + () => 'pro-result', + () => 'fallback-result', + ); + + expect(result).toBe('fallback-result'); + }); + + it('should log degradation message when feature not available', () => { + withGracefulDegradation( + 'pro.squads.premium', + () => 'pro-result', + () => 'fallback-result', + ); + + expect(consoleLogSpy).toHaveBeenCalled(); + const logOutput = consoleLogSpy.mock.calls.map(c => c[0]).join(' '); + expect(logOutput).toContain('requires an active AIOS Pro license'); + }); + + it('should not log when silent option is true', () => { + withGracefulDegradation( + 'pro.squads.premium', + () => 'pro-result', + () => 'fallback-result', + { silent: true }, + ); + + const logOutput = consoleLogSpy.mock.calls.map(c => c[0] || '').join(' '); + expect(logOutput).not.toContain('requires an active AIOS Pro license'); + }); + + it('should return null when no fallback provided', () => { + const result = withGracefulDegradation( + 'pro.squads.premium', + () => 'pro-result', + ); + + expect(result).toBeNull(); + }); + }); + + describe('ifProAvailable', () => { + it('should execute action when feature is available', () => { + writeLicenseCache(createTestCache(['pro.squads.premium']), testDir); + + const result = ifProAvailable('pro.squads.premium', () => 'executed'); + + expect(result).toBe('executed'); + }); + + it('should return undefined when feature is not available', () => { + const result = ifProAvailable('pro.squads.premium', () => 'executed'); + + expect(result).toBeUndefined(); + }); + }); + + describe('getFeatureFriendlyName', () => { + it('should return friendly name for registered feature', () => { + const name = getFeatureFriendlyName('pro.squads.premium'); + + expect(name).toBe('Premium Squads'); + }); + + it('should 
return featureId for unregistered feature', () => { + const name = getFeatureFriendlyName('pro.unknown.feature'); + + expect(name).toBe('pro.unknown.feature'); + }); + }); + + describe('isInDegradedMode', () => { + it('should return true when no license', () => { + expect(isInDegradedMode()).toBe(true); + }); + + it('should return false when license is active', () => { + writeLicenseCache(createTestCache(), testDir); + + expect(isInDegradedMode()).toBe(false); + }); + + it('should return false during grace period', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 33); + + writeLicenseCache( + createTestCache(['pro.squads.*'], { activatedAt: activatedAt.toISOString() }), + testDir, + ); + + expect(isInDegradedMode()).toBe(false); + }); + + it('should return true when license expired', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 40); + + writeLicenseCache( + createTestCache(['pro.squads.*'], { activatedAt: activatedAt.toISOString() }), + testDir, + ); + + expect(isInDegradedMode()).toBe(true); + }); + }); + + describe('getDegradationStatus', () => { + it('should return not degraded for active license', () => { + writeLicenseCache(createTestCache(), testDir); + + const status = getDegradationStatus(); + + expect(status.degraded).toBe(false); + expect(status.reason).toContain('active'); + expect(status.action).toBeNull(); + }); + + it('should return not degraded for grace period', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 33); + + writeLicenseCache( + createTestCache(['pro.squads.*'], { activatedAt: activatedAt.toISOString() }), + testDir, + ); + + const status = getDegradationStatus(); + + expect(status.degraded).toBe(false); + expect(status.reason).toContain('grace'); + expect(status.action).toBe('aios pro validate'); + }); + + it('should return degraded for expired license', () => { + const activatedAt = new Date(); + 
activatedAt.setDate(activatedAt.getDate() - 40); + + writeLicenseCache( + createTestCache(['pro.squads.*'], { activatedAt: activatedAt.toISOString() }), + testDir, + ); + + const status = getDegradationStatus(); + + expect(status.degraded).toBe(true); + expect(status.reason).toContain('expired'); + expect(status.action).toContain('activate'); + }); + + it('should return degraded for no license', () => { + const status = getDegradationStatus(); + + expect(status.degraded).toBe(true); + expect(status.reason).toContain('No license'); + expect(status.action).toContain('activate'); + }); + }); + + describe('createDegradationWrapper', () => { + it('should call original method when no error', () => { + const module = { + method: () => 'original-result', + }; + + const wrapped = createDegradationWrapper(module, {}); + const result = wrapped.method(); + + expect(result).toBe('original-result'); + }); + + it('should call fallback on ProFeatureError', () => { + const module = { + method: () => { + throw new ProFeatureError('pro.test', 'Test Feature'); + }, + }; + + const wrapped = createDegradationWrapper(module, { + method: () => 'fallback-result', + }); + const result = wrapped.method(); + + expect(result).toBe('fallback-result'); + }); + + it('should re-throw ProFeatureError when no fallback', () => { + const module = { + method: () => { + throw new ProFeatureError('pro.test', 'Test Feature'); + }, + }; + + const wrapped = createDegradationWrapper(module, {}); + + expect(() => wrapped.method()).toThrow(ProFeatureError); + }); + + it('should re-throw non-license errors', () => { + const module = { + method: () => { + throw new Error('Regular error'); + }, + }; + + const wrapped = createDegradationWrapper(module, { + method: () => 'fallback', + }); + + expect(() => wrapped.method()).toThrow('Regular error'); + }); + + it('should pass through non-function properties', () => { + const module = { + value: 42, + name: 'test', + }; + + const wrapped = 
createDegradationWrapper(module, {}); + + expect(wrapped.value).toBe(42); + expect(wrapped.name).toBe('test'); + }); + }); +}); + +``` + +================================================== +📄 tests/license/integration.test.js +================================================== +```js +/** + * Integration tests for License System + * + * Tests full lifecycle workflows: + * - activate → use features → deactivate (online) + * - activate → use features → deactivate (offline) → sync + * - Cache expiry → grace period → degradation + * - Corrupted cache → reactivation flow + * + * @see Story PRO-6 - License Key & Feature Gating System + * @see AC-1 through AC-8 - Full workflow testing + */ + +'use strict'; + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); + +// License modules +const { FeatureGate, featureGate } = require('../../pro/license/feature-gate'); +const { + writeLicenseCache, + readLicenseCache, + deleteLicenseCache, + setPendingDeactivation, + hasPendingDeactivation, + clearPendingDeactivation, + getLicenseState, +} = require('../../pro/license/license-cache'); +const { + generateMachineId, + maskKey, + validateKeyFormat, +} = require('../../pro/license/license-crypto'); +const { + withGracefulDegradation, + isInDegradedMode, + getDegradationStatus, +} = require('../../pro/license/degradation'); +const { ProFeatureError } = require('../../pro/license/errors'); + +describe('License System Integration', () => { + let testDir; + let originalCwd; + + // Helper to create test cache + function createTestCache(features = ['pro.squads.*'], overrides = {}) { + return { + key: 'PRO-TEST-1234-5678-ABCD', + activatedAt: new Date().toISOString(), + expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toISOString(), + features, + seats: { used: 1, max: 5 }, + cacheValidDays: 30, + gracePeriodDays: 7, + ...overrides, + }; + } + + // Helper to create expired cache + function createExpiredCache(daysAgo, features = ['pro.squads.*']) { + 
const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - daysAgo); + return createTestCache(features, { activatedAt: activatedAt.toISOString() }); + } + + beforeEach(() => { + // Create temp directory for each test + testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-integration-test-')); + + // Mock process.cwd to return our test directory + originalCwd = process.cwd; + process.cwd = () => testDir; + + // Reset the singleton for clean tests + featureGate._reset(); + }); + + afterEach(() => { + // Restore + process.cwd = originalCwd; + + // Cleanup temp directory + try { + fs.rmSync(testDir, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + }); + + describe('Full Lifecycle: Activation → Use → Deactivation (Online)', () => { + it('should complete full online lifecycle (AC-1, AC-4, AC-7a)', () => { + // Phase 1: Initial state (no license) + expect(featureGate.getLicenseState()).toBe('Not Activated'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(false); + expect(isInDegradedMode()).toBe(true); + + // Phase 2: Simulate activation (write cache) + const cacheData = createTestCache(['pro.squads.*', 'pro.memory.*']); + writeLicenseCache(cacheData, testDir); + featureGate.reload(); + + // Phase 3: Verify active state + expect(featureGate.getLicenseState()).toBe('Active'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + expect(featureGate.isAvailable('pro.squads.custom')).toBe(true); + expect(featureGate.isAvailable('pro.memory.extended')).toBe(true); + expect(featureGate.isAvailable('pro.metrics.advanced')).toBe(false); // Not licensed + expect(isInDegradedMode()).toBe(false); + + // Phase 4: Use features (should not throw) + expect(() => featureGate.require('pro.squads.premium', 'Premium Squads')).not.toThrow(); + + // Phase 5: Deactivate (delete cache) + deleteLicenseCache(testDir); + featureGate.reload(); + + // Phase 6: Verify deactivated state + 
expect(featureGate.getLicenseState()).toBe('Not Activated'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(false); + expect(isInDegradedMode()).toBe(true); + }); + }); + + describe('Full Lifecycle: Activation → Use → Deactivation (Offline)', () => { + it('should handle offline deactivation with pending sync (AC-7b)', () => { + // Phase 1: Activate + const cacheData = createTestCache(['pro.squads.*']); + writeLicenseCache(cacheData, testDir); + featureGate.reload(); + expect(featureGate.getLicenseState()).toBe('Active'); + + // Phase 2: Verify no pending deactivation + const initialPending = hasPendingDeactivation(testDir); + expect(initialPending.pending).toBe(false); + + // Phase 3: Simulate offline deactivation + // In real scenario, this would be called when licenseApi.isOnline() returns false + setPendingDeactivation(cacheData.key, testDir); + + // Phase 4: Verify pending deactivation is stored + const pending = hasPendingDeactivation(testDir); + expect(pending.pending).toBe(true); + expect(pending.data).toBeDefined(); + expect(pending.data.synced).toBe(false); + + // Phase 5: Delete local cache (user can't use pro features now) + deleteLicenseCache(testDir); + featureGate.reload(); + expect(featureGate.getLicenseState()).toBe('Not Activated'); + + // Phase 6: Clear pending after "sync" would happen + clearPendingDeactivation(testDir); + const clearedPending = hasPendingDeactivation(testDir); + expect(clearedPending.pending).toBe(false); + }); + }); + + describe('Cache Expiry → Grace Period → Degradation (AC-2, AC-3)', () => { + it('should transition through license states correctly', () => { + // Phase 1: Fresh license (Active) + const freshCache = createTestCache(['pro.squads.*']); + writeLicenseCache(freshCache, testDir); + featureGate.reload(); + + expect(featureGate.getLicenseState()).toBe('Active'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + + const freshStatus = getDegradationStatus(); + 
expect(freshStatus.degraded).toBe(false); + expect(freshStatus.reason).toContain('active'); + + // Phase 2: Expired but in grace period (Grace) + deleteLicenseCache(testDir); + const graceCache = createExpiredCache(33); // 30 days + 3 days into grace + writeLicenseCache(graceCache, testDir); + featureGate.reload(); + + expect(featureGate.getLicenseState()).toBe('Grace'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); // Still available in grace + + const graceStatus = getDegradationStatus(); + expect(graceStatus.degraded).toBe(false); + expect(graceStatus.reason).toContain('grace'); + expect(graceStatus.action).toBe('aios pro validate'); + + // Phase 3: Past grace period (Expired) + deleteLicenseCache(testDir); + const expiredCache = createExpiredCache(40); // 30 + 7 + 3 days + writeLicenseCache(expiredCache, testDir); + featureGate.reload(); + + expect(featureGate.getLicenseState()).toBe('Expired'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(false); + + const expiredStatus = getDegradationStatus(); + expect(expiredStatus.degraded).toBe(true); + expect(expiredStatus.reason).toContain('expired'); + expect(expiredStatus.action).toContain('activate'); + }); + + it('should support 30-day offline operation (AC-2)', () => { + // Cache at exactly 29 days should still be active + const cache29Days = createExpiredCache(29); + writeLicenseCache(cache29Days, testDir); + featureGate.reload(); + + expect(featureGate.getLicenseState()).toBe('Active'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + }); + + it('should enter grace period at day 31 (AC-3)', () => { + const cache31Days = createExpiredCache(31); + writeLicenseCache(cache31Days, testDir); + featureGate.reload(); + + expect(featureGate.getLicenseState()).toBe('Grace'); + // Features still available during grace + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + }); + + it('should expire after grace period ends (day 38+)', () => { + const 
cache38Days = createExpiredCache(38); + writeLicenseCache(cache38Days, testDir); + featureGate.reload(); + + expect(featureGate.getLicenseState()).toBe('Expired'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(false); + }); + }); + + describe('Graceful Degradation (AC-8)', () => { + it('should degrade gracefully without data loss', () => { + // Setup: Start with active license + const cacheData = createTestCache(['pro.squads.*']); + writeLicenseCache(cacheData, testDir); + featureGate.reload(); + + // Simulate some user data + const userDataPath = path.join(testDir, 'user-data.json'); + const userData = { projects: ['proj1', 'proj2'], settings: { theme: 'dark' } }; + fs.writeFileSync(userDataPath, JSON.stringify(userData)); + + // Verify pro features work + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + + // Expire the license + deleteLicenseCache(testDir); + const expiredCache = createExpiredCache(40); + writeLicenseCache(expiredCache, testDir); + featureGate.reload(); + + // Verify degradation occurred + expect(featureGate.getLicenseState()).toBe('Expired'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(false); + + // CRITICAL: Verify user data is preserved + expect(fs.existsSync(userDataPath)).toBe(true); + const preservedData = JSON.parse(fs.readFileSync(userDataPath, 'utf8')); + expect(preservedData).toEqual(userData); + }); + + it('should provide fallback via withGracefulDegradation', () => { + // No license + const result = withGracefulDegradation( + 'pro.squads.premium', + () => 'pro-result', + () => 'fallback-result', + { silent: true }, + ); + + expect(result).toBe('fallback-result'); + }); + + it('should execute pro action when licensed', () => { + writeLicenseCache(createTestCache(['pro.squads.*']), testDir); + featureGate.reload(); + + const result = withGracefulDegradation( + 'pro.squads.premium', + () => 'pro-result', + () => 'fallback-result', + ); + + expect(result).toBe('pro-result'); + }); + }); 
+ + describe('Corrupted Cache → Reactivation Flow', () => { + it('should handle corrupted cache gracefully', () => { + // Write valid cache first + writeLicenseCache(createTestCache(['pro.squads.*']), testDir); + featureGate.reload(); + expect(featureGate.getLicenseState()).toBe('Active'); + + // Corrupt the cache file + const cachePath = path.join(testDir, '.aios', 'license.cache'); + const cacheContent = fs.readFileSync(cachePath, 'utf8'); + const cacheJson = JSON.parse(cacheContent); + + // Tamper with HMAC + cacheJson.hmac = 'corrupted' + cacheJson.hmac.substring(9); + fs.writeFileSync(cachePath, JSON.stringify(cacheJson)); + + // Reload and verify cache is treated as invalid + featureGate.reload(); + expect(featureGate.getLicenseState()).toBe('Not Activated'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(false); + + // Reactivation (write new valid cache) + writeLicenseCache(createTestCache(['pro.squads.*']), testDir); + featureGate.reload(); + + // Verify recovery + expect(featureGate.getLicenseState()).toBe('Active'); + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + }); + + it('should handle completely invalid JSON in cache', () => { + // Write valid cache + writeLicenseCache(createTestCache(['pro.squads.*']), testDir); + + // Overwrite with invalid JSON + const cachePath = path.join(testDir, '.aios', 'license.cache'); + fs.writeFileSync(cachePath, 'not valid json {{{'); + + // Reload and verify graceful handling + featureGate.reload(); + expect(featureGate.getLicenseState()).toBe('Not Activated'); + }); + + it('should handle missing cache file', () => { + // Ensure no cache exists + const cachePath = path.join(testDir, '.aios', 'license.cache'); + if (fs.existsSync(cachePath)) { + fs.unlinkSync(cachePath); + } + + featureGate.reload(); + expect(featureGate.getLicenseState()).toBe('Not Activated'); + expect(isInDegradedMode()).toBe(true); + }); + }); + + describe('Feature Gate Integration Patterns', () => { + it('should 
support constructor-level gating', () => { + // No license - should throw + expect(() => featureGate.require('pro.squads.premium', 'Premium Squads')).toThrow( + ProFeatureError, + ); + + // With license - should not throw + writeLicenseCache(createTestCache(['pro.squads.*']), testDir); + featureGate.reload(); + + expect(() => featureGate.require('pro.squads.premium', 'Premium Squads')).not.toThrow(); + }); + + it('should support wildcard matching for feature families', () => { + writeLicenseCache(createTestCache(['pro.squads.*', 'pro.memory.*']), testDir); + featureGate.reload(); + + // All squads features should be available + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + expect(featureGate.isAvailable('pro.squads.custom')).toBe(true); + expect(featureGate.isAvailable('pro.squads.export')).toBe(true); + expect(featureGate.isAvailable('pro.squads.marketplace')).toBe(true); + + // All memory features should be available + expect(featureGate.isAvailable('pro.memory.extended')).toBe(true); + expect(featureGate.isAvailable('pro.memory.persistence')).toBe(true); + + // Other modules should not be available + expect(featureGate.isAvailable('pro.metrics.advanced')).toBe(false); + expect(featureGate.isAvailable('pro.integrations.clickup')).toBe(false); + }); + + it('should support exact feature matching', () => { + writeLicenseCache(createTestCache(['pro.squads.premium']), testDir); + featureGate.reload(); + + // Only exact match should work + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + expect(featureGate.isAvailable('pro.squads.custom')).toBe(false); + expect(featureGate.isAvailable('pro.squads.export')).toBe(false); + }); + + it('should support module-level wildcard (pro.*)', () => { + writeLicenseCache(createTestCache(['pro.*']), testDir); + featureGate.reload(); + + // All pro features should be available + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + 
expect(featureGate.isAvailable('pro.memory.extended')).toBe(true); + expect(featureGate.isAvailable('pro.metrics.advanced')).toBe(true); + expect(featureGate.isAvailable('pro.integrations.clickup')).toBe(true); + }); + }); + + describe('Key Masking & Security', () => { + it('should mask license keys consistently', () => { + const key = 'PRO-ABCD-EFGH-IJKL-MNOP'; + const masked = maskKey(key); + + expect(masked).toBe('PRO-ABCD-****-****-MNOP'); + expect(masked).not.toContain('EFGH'); + expect(masked).not.toContain('IJKL'); + }); + + it('should validate key format strictly', () => { + expect(validateKeyFormat('PRO-XXXX-XXXX-XXXX-XXXX')).toBe(true); + expect(validateKeyFormat('PRO-ABCD-EFGH-IJKL-MNOP')).toBe(true); + expect(validateKeyFormat('PRO-1234-5678-ABCD-EFGH')).toBe(true); + + // Invalid formats + expect(validateKeyFormat('pro-xxxx-xxxx-xxxx-xxxx')).toBe(false); // lowercase + expect(validateKeyFormat('PRO-XXX-XXXX-XXXX-XXXX')).toBe(false); // wrong length + expect(validateKeyFormat('ABC-XXXX-XXXX-XXXX-XXXX')).toBe(false); // wrong prefix + expect(validateKeyFormat('')).toBe(false); + expect(validateKeyFormat(null)).toBe(false); + }); + + it('should generate deterministic machine ID', () => { + const id1 = generateMachineId(); + const id2 = generateMachineId(); + + expect(id1).toBe(id2); // Same machine = same ID + expect(id1).toHaveLength(64); // SHA-256 hex + }); + }); +}); + +``` + +================================================== +📄 tests/license/license-api.test.js +================================================== +```js +/** + * Unit tests for license-api.js + * + * @see Story PRO-6 - License Key & Feature Gating System + * @see AC-1, AC-7a, AC-7b - Activation, Deactivation (online/offline) + */ + +'use strict'; + +const http = require('http'); +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { LicenseApiClient, licenseApi } = require('../../pro/license/license-api'); +const { LicenseActivationError } = 
require('../../pro/license/errors'); +const { + setPendingDeactivation, + clearPendingDeactivation, + hasPendingDeactivation, +} = require('../../pro/license/license-cache'); + +describe('license-api', () => { + let server; + let serverPort; + let serverUrl; + let testDir; + let originalCwd; + + // Create a mock HTTP server for testing + function createMockServer(handler) { + return new Promise((resolve) => { + server = http.createServer(handler); + server.listen(0, '127.0.0.1', () => { + serverPort = server.address().port; + serverUrl = `http://127.0.0.1:${serverPort}`; + resolve(); + }); + }); + } + + function closeMockServer() { + return new Promise((resolve) => { + if (server) { + server.close(() => resolve()); + } else { + resolve(); + } + }); + } + + beforeEach(() => { + // Create temp directory for pending deactivation tests + testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-api-test-')); + originalCwd = process.cwd; + process.cwd = () => testDir; + }); + + afterEach(async () => { + process.cwd = originalCwd; + await closeMockServer(); + + try { + fs.rmSync(testDir, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + }); + + describe('LicenseApiClient constructor', () => { + it('should use default config', () => { + const client = new LicenseApiClient(); + + expect(client.baseUrl).toBe(process.env.AIOS_LICENSE_API_URL || 'https://api.synkra.ai'); + expect(client.timeoutMs).toBe(10000); + }); + + it('should accept custom config', () => { + const client = new LicenseApiClient({ + baseUrl: 'https://custom.api.com', + timeoutMs: 5000, + }); + + expect(client.baseUrl).toBe('https://custom.api.com'); + expect(client.timeoutMs).toBe(5000); + }); + }); + + describe('activate', () => { + it('should successfully activate license (AC-1)', async () => { + const mockResponse = { + key: 'PRO-TEST-1234-5678-ABCD', + features: ['pro.squads.*', 'pro.memory.*'], + seats: { used: 1, max: 5 }, + expiresAt: '2026-03-05T00:00:00Z', + 
cacheValidDays: 30, + gracePeriodDays: 7, + }; + + await createMockServer((req, res) => { + expect(req.method).toBe('POST'); + expect(req.url).toBe('/v1/license/activate'); + + let body = ''; + req.on('data', (chunk) => (body += chunk)); + req.on('end', () => { + const data = JSON.parse(body); + expect(data.key).toBe('PRO-TEST-1234-5678-ABCD'); + expect(data.machineId).toBe('test-machine-id'); + expect(data.aiosCoreVersion).toBe('3.0.0'); + + res.writeHead(200, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify(mockResponse)); + }); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + const result = await client.activate('PRO-TEST-1234-5678-ABCD', 'test-machine-id', '3.0.0'); + + expect(result.key).toBe('PRO-TEST-1234-5678-ABCD'); + expect(result.features).toEqual(['pro.squads.*', 'pro.memory.*']); + expect(result.seats.max).toBe(5); + expect(result.cacheValidDays).toBe(30); + expect(result.activatedAt).toBeTruthy(); + }); + + it('should handle invalid key error (401)', async () => { + await createMockServer((req, res) => { + res.writeHead(401, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({ error: 'Invalid key' })); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + await expect(client.activate('PRO-INVALID', 'machine', '1.0')).rejects.toThrow( + LicenseActivationError, + ); + + try { + await client.activate('PRO-INVALID', 'machine', '1.0'); + } catch (error) { + expect(error.code).toBe('INVALID_KEY'); + } + }); + + it('should handle expired key error (403)', async () => { + await createMockServer((req, res) => { + res.writeHead(403, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({ code: 'EXPIRED_KEY' })); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + try { + await client.activate('PRO-EXPIRED', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('EXPIRED_KEY'); + } + }); + + it('should handle 
seat limit exceeded (403)', async () => { + await createMockServer((req, res) => { + res.writeHead(403, { 'Content-Type': 'application/json' }); + res.end( + JSON.stringify({ + code: 'SEAT_LIMIT_EXCEEDED', + details: { used: 5, max: 5 }, + }), + ); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + try { + await client.activate('PRO-LIMIT', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('SEAT_LIMIT_EXCEEDED'); + expect(error.details.used).toBe(5); + expect(error.details.max).toBe(5); + } + }); + + it('should handle rate limiting (429)', async () => { + await createMockServer((req, res) => { + res.writeHead(429, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({ retryAfter: 60 })); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + try { + await client.activate('PRO-RATE', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('RATE_LIMITED'); + expect(error.details.retryAfter).toBe(60); + } + }); + + it('should handle server error (500)', async () => { + await createMockServer((req, res) => { + res.writeHead(500, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({ error: 'Internal error' })); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + try { + await client.activate('PRO-ERROR', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('SERVER_ERROR'); + } + }); + + it('should handle network timeout', async () => { + await createMockServer((req, res) => { + // Don't respond - let it timeout + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl, timeoutMs: 100 }); + + try { + await client.activate('PRO-TIMEOUT', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('NETWORK_ERROR'); + } + }); + + it('should handle invalid response structure', async () => { + await 
createMockServer((req, res) => { + res.writeHead(200, { 'Content-Type': 'application/json' }); + // Missing features array + res.end(JSON.stringify({ key: 'PRO-TEST' })); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + try { + await client.activate('PRO-TEST', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('INVALID_RESPONSE'); + } + }); + }); + + describe('validate', () => { + it('should successfully validate license', async () => { + const mockResponse = { + valid: true, + features: ['pro.squads.*'], + seats: { used: 2, max: 5 }, + expiresAt: '2026-03-05T00:00:00Z', + }; + + await createMockServer((req, res) => { + expect(req.method).toBe('POST'); + expect(req.url).toBe('/v1/license/validate'); + + res.writeHead(200, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify(mockResponse)); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + const result = await client.validate('PRO-TEST-1234-5678-ABCD', 'test-machine-id'); + + expect(result.valid).toBe(true); + expect(result.features).toEqual(['pro.squads.*']); + expect(result.seats.used).toBe(2); + }); + + it('should handle invalid license', async () => { + await createMockServer((req, res) => { + res.writeHead(401, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({ error: 'Invalid' })); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + await expect(client.validate('PRO-INVALID', 'machine')).rejects.toThrow( + LicenseActivationError, + ); + }); + }); + + describe('deactivate (AC-7a)', () => { + it('should successfully deactivate license', async () => { + await createMockServer((req, res) => { + expect(req.method).toBe('POST'); + expect(req.url).toBe('/v1/license/deactivate'); + + res.writeHead(200, { 'Content-Type': 'application/json' }); + res.end( + JSON.stringify({ + success: true, + seatFreed: true, + message: 'License deactivated', + }), + ); + }); + + const client 
= new LicenseApiClient({ baseUrl: serverUrl }); + const result = await client.deactivate('PRO-TEST-1234-5678-ABCD', 'test-machine-id'); + + expect(result.success).toBe(true); + expect(result.seatFreed).toBe(true); + }); + + it('should handle deactivation errors', async () => { + await createMockServer((req, res) => { + res.writeHead(500, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({ error: 'Server error' })); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + await expect(client.deactivate('PRO-TEST', 'machine')).rejects.toThrow( + LicenseActivationError, + ); + }); + }); + + describe('syncPendingDeactivation (AC-7b)', () => { + it('should sync pending deactivation when online', async () => { + // Set up pending deactivation + setPendingDeactivation('PRO-PENDING-1234-5678-ABCD', testDir); + + expect(hasPendingDeactivation(testDir).pending).toBe(true); + + await createMockServer((req, res) => { + expect(req.url).toBe('/v1/license/deactivate'); + + let body = ''; + req.on('data', (chunk) => (body += chunk)); + req.on('end', () => { + const data = JSON.parse(body); + expect(data.key).toBe('PRO-PENDING-1234-5678-ABCD'); + expect(data.offlineDeactivation).toBe(true); + + res.writeHead(200, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({ success: true })); + }); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + const result = await client.syncPendingDeactivation('test-machine', testDir); + + expect(result).toBe(true); + expect(hasPendingDeactivation(testDir).pending).toBe(false); + }); + + it('should return false when no pending deactivation', async () => { + const client = new LicenseApiClient({ baseUrl: serverUrl }); + const result = await client.syncPendingDeactivation('test-machine', testDir); + + expect(result).toBe(false); + }); + + it('should clear pending on INVALID_KEY response', async () => { + setPendingDeactivation('PRO-INVALID-1234-5678-ABCD', testDir); + + await 
createMockServer((req, res) => { + res.writeHead(401, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({ error: 'Invalid key' })); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + const result = await client.syncPendingDeactivation('test-machine', testDir); + + expect(result).toBe(true); // Cleared invalid key + expect(hasPendingDeactivation(testDir).pending).toBe(false); + }); + + it('should keep pending on network error for retry', async () => { + setPendingDeactivation('PRO-NETWORK-1234-5678-ABCD', testDir); + + await createMockServer((req, res) => { + // Destroy connection to simulate network error + req.destroy(); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl, timeoutMs: 100 }); + const result = await client.syncPendingDeactivation('test-machine', testDir); + + expect(result).toBe(false); // Failed but kept for retry + expect(hasPendingDeactivation(testDir).pending).toBe(true); + }); + }); + + describe('isOnline', () => { + it('should return true when server is reachable', async () => { + await createMockServer((req, res) => { + res.writeHead(200); + res.end(); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + const result = await client.isOnline(); + + expect(result).toBe(true); + }); + + it('should return false when server is not reachable', async () => { + // Use a port that's not listening + const client = new LicenseApiClient({ baseUrl: 'http://127.0.0.1:59999' }); + const result = await client.isOnline(); + + expect(result).toBe(false); + }); + }); + + describe('singleton', () => { + it('should export singleton instance', () => { + expect(licenseApi).toBeInstanceOf(LicenseApiClient); + }); + }); + + describe('Error handling edge cases', () => { + it('should handle non-JSON response', async () => { + await createMockServer((req, res) => { + res.writeHead(200, { 'Content-Type': 'text/plain' }); + res.end('Not JSON'); + }); + + const client = new LicenseApiClient({ baseUrl: 
serverUrl }); + + try { + await client.activate('PRO-TEST', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('INVALID_RESPONSE'); + } + }); + + it('should handle 502 Bad Gateway', async () => { + await createMockServer((req, res) => { + res.writeHead(502, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({})); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + try { + await client.activate('PRO-TEST', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('SERVER_ERROR'); + } + }); + + it('should handle 503 Service Unavailable', async () => { + await createMockServer((req, res) => { + res.writeHead(503, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify({})); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + try { + await client.activate('PRO-TEST', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('SERVER_ERROR'); + } + }); + + it('should handle 400 Bad Request with custom message', async () => { + await createMockServer((req, res) => { + res.writeHead(400, { 'Content-Type': 'application/json' }); + res.end( + JSON.stringify({ + message: 'Custom error message', + code: 'CUSTOM_ERROR', + }), + ); + }); + + const client = new LicenseApiClient({ baseUrl: serverUrl }); + + try { + await client.activate('PRO-TEST', 'machine', '1.0'); + fail('Should have thrown'); + } catch (error) { + expect(error.code).toBe('CUSTOM_ERROR'); + expect(error.message).toBe('Custom error message'); + } + }); + }); + + describe('Performance', () => { + it('should complete successful request quickly', async () => { + const mockResponse = { + features: ['pro.squads.*'], + seats: { used: 1, max: 5 }, + }; + + await createMockServer((req, res) => { + res.writeHead(200, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify(mockResponse)); + }); + + const client = new 
LicenseApiClient({ baseUrl: serverUrl }); + + const start = performance.now(); + await client.activate('PRO-TEST-1234-5678-ABCD', 'machine', '1.0'); + const elapsed = performance.now() - start; + + // Should be very fast on localhost + expect(elapsed).toBeLessThan(500); + }); + }); +}); + +``` + +================================================== +📄 tests/license/license-cache.test.js +================================================== +```js +/** + * Unit tests for license-cache.js + * + * @see Story PRO-6 - License Key & Feature Gating System + * @see AC-2, AC-3, AC-9, AC-10 - Offline operation, Cache expiry, Tamper resistance, Machine specificity + */ + +'use strict'; + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { + writeLicenseCache, + readLicenseCache, + deleteLicenseCache, + isExpired, + isInGracePeriod, + getDaysRemaining, + getExpiryDate, + getLicenseState, + setPendingDeactivation, + hasPendingDeactivation, + markPendingDeactivationSynced, + clearPendingDeactivation, + cacheExists, + getCachePath, + getAiosDir, + _CONFIG, +} = require('../../pro/license/license-cache'); + +describe('license-cache', () => { + let testDir; + + // Create a fresh test directory for each test + beforeEach(() => { + testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-license-test-')); + }); + + // Cleanup after each test + afterEach(() => { + try { + fs.rmSync(testDir, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + }); + + // Helper to create valid test cache data + function createTestCacheData(overrides = {}) { + return { + key: 'PRO-ABCD-EFGH-IJKL-MNOP', + activatedAt: new Date().toISOString(), + expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toISOString(), + features: ['pro.squads.*', 'pro.memory.*'], + seats: { used: 1, max: 5 }, + cacheValidDays: 30, + gracePeriodDays: 7, + ...overrides, + }; + } + + describe('writeLicenseCache', () => { + it('should write cache file 
successfully', () => { + const data = createTestCacheData(); + const result = writeLicenseCache(data, testDir); + + expect(result.success).toBe(true); + expect(fs.existsSync(getCachePath(testDir))).toBe(true); + }); + + it('should create .aios directory if not exists', () => { + const data = createTestCacheData(); + const aiosDir = getAiosDir(testDir); + + expect(fs.existsSync(aiosDir)).toBe(false); + + writeLicenseCache(data, testDir); + + expect(fs.existsSync(aiosDir)).toBe(true); + }); + + it('should write encrypted content', () => { + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + const content = fs.readFileSync(getCachePath(testDir), 'utf8'); + const parsed = JSON.parse(content); + + expect(parsed).toHaveProperty('encrypted'); + expect(parsed).toHaveProperty('iv'); + expect(parsed).toHaveProperty('tag'); + expect(parsed).toHaveProperty('hmac'); + expect(parsed).toHaveProperty('salt'); + expect(parsed).toHaveProperty('version'); + + // Should NOT contain plaintext key + expect(content).not.toContain(data.key); + }); + + it('should use atomic write (no temp file left)', () => { + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + const aiosDir = getAiosDir(testDir); + const files = fs.readdirSync(aiosDir); + + expect(files).toContain('license.cache'); + expect(files.filter((f) => f.includes('.tmp'))).toHaveLength(0); + }); + + it('should return error on failure', () => { + // On Windows, writing to a device path should fail + // On Unix, writing to /dev/null as a directory should fail + const data = createTestCacheData(); + + // Mock fs.mkdirSync to throw an error + const originalMkdirSync = fs.mkdirSync; + fs.mkdirSync = () => { + throw new Error('Mock directory creation error'); + }; + + try { + const result = writeLicenseCache(data, testDir + '-fail'); + expect(result.success).toBe(false); + expect(result.error).toBeTruthy(); + } finally { + fs.mkdirSync = originalMkdirSync; + } + }); + }); + + 
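+  // The writeLicenseCache tests above pin down the on-disk envelope fields
+  // (encrypted, iv, tag, hmac, salt, version). As a hypothetical sketch only
+  // (the key derivation and encodings below are illustrative assumptions,
+  // not the real license-cache.js implementation), such an envelope could be
+  // produced with Node's crypto module:
+  function sketchLicenseEnvelope(plaintext, machineId) {
+    const crypto = require('crypto');
+    const salt = crypto.randomBytes(16);
+    // Derive the key from the machine ID so the cache is machine-bound (AC-10)
+    const key = crypto.scryptSync(machineId, salt, 32);
+    const iv = crypto.randomBytes(12);
+    const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
+    const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
+    return {
+      encrypted: encrypted.toString('hex'),
+      iv: iv.toString('hex'),
+      tag: cipher.getAuthTag().toString('hex'),
+      hmac: crypto.createHmac('sha256', key).update(encrypted).digest('hex'),
+      salt: salt.toString('hex'),
+      version: 1,
+    };
+  }
+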
describe('readLicenseCache', () => { + it('should read and decrypt cache successfully', () => { + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + const cache = readLicenseCache(testDir); + + expect(cache).not.toBeNull(); + expect(cache.key).toBe(data.key); + expect(cache.features).toEqual(data.features); + expect(cache.seats).toEqual(data.seats); + }); + + it('should return null for missing cache', () => { + const cache = readLicenseCache(testDir); + + expect(cache).toBeNull(); + }); + + it('should return null for corrupted JSON', () => { + const cachePath = getCachePath(testDir); + fs.mkdirSync(getAiosDir(testDir), { recursive: true }); + fs.writeFileSync(cachePath, 'not valid json', 'utf8'); + + const cache = readLicenseCache(testDir); + + expect(cache).toBeNull(); + }); + + it('should return null for tampered ciphertext (AC-9)', () => { + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + // Tamper with encrypted content + const cachePath = getCachePath(testDir); + const content = JSON.parse(fs.readFileSync(cachePath, 'utf8')); + content.encrypted = content.encrypted.replace(/[a-f]/g, '0'); + fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8'); + + const cache = readLicenseCache(testDir); + + expect(cache).toBeNull(); + }); + + it('should return null for tampered HMAC (AC-9)', () => { + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + // Tamper with HMAC + const cachePath = getCachePath(testDir); + const content = JSON.parse(fs.readFileSync(cachePath, 'utf8')); + content.hmac = '0'.repeat(64); + fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8'); + + const cache = readLicenseCache(testDir); + + expect(cache).toBeNull(); + }); + + it('should return null for missing required fields', () => { + const cachePath = getCachePath(testDir); + fs.mkdirSync(getAiosDir(testDir), { recursive: true }); + fs.writeFileSync(cachePath, JSON.stringify({ partial: 'data' }), 'utf8'); + 
+ const cache = readLicenseCache(testDir); + + expect(cache).toBeNull(); + }); + + it('should include version and machineId in decrypted data', () => { + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + const cache = readLicenseCache(testDir); + + expect(cache.version).toBe(_CONFIG.CACHE_VERSION); + expect(cache.machineId).toBeTruthy(); + expect(typeof cache.machineId).toBe('string'); + }); + }); + + describe('deleteLicenseCache', () => { + it('should delete existing cache file', () => { + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + expect(cacheExists(testDir)).toBe(true); + + const result = deleteLicenseCache(testDir); + + expect(result.success).toBe(true); + expect(cacheExists(testDir)).toBe(false); + }); + + it('should succeed even if cache does not exist', () => { + const result = deleteLicenseCache(testDir); + + expect(result.success).toBe(true); + }); + }); + + describe('isExpired', () => { + it('should return false for fresh cache', () => { + const cache = { + activatedAt: new Date().toISOString(), + cacheValidDays: 30, + }; + + expect(isExpired(cache)).toBe(false); + }); + + it('should return true for old cache', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 35); // 35 days ago + + const cache = { + activatedAt: activatedAt.toISOString(), + cacheValidDays: 30, + }; + + expect(isExpired(cache)).toBe(true); + }); + + it('should return true for null/undefined cache', () => { + expect(isExpired(null)).toBe(true); + expect(isExpired(undefined)).toBe(true); + }); + + it('should return true for cache without activatedAt', () => { + expect(isExpired({})).toBe(true); + }); + + it('should use default 30 days if cacheValidDays not set', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 29); // 29 days ago + + const cache = { activatedAt: activatedAt.toISOString() }; + + expect(isExpired(cache)).toBe(false); + + 
activatedAt.setDate(activatedAt.getDate() - 2); // 31 days ago + cache.activatedAt = activatedAt.toISOString(); + + expect(isExpired(cache)).toBe(true); + }); + }); + + describe('isInGracePeriod', () => { + it('should return false for non-expired cache', () => { + const cache = { + activatedAt: new Date().toISOString(), + cacheValidDays: 30, + gracePeriodDays: 7, + }; + + expect(isInGracePeriod(cache)).toBe(false); + }); + + it('should return true when in grace period', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 33); // 33 days ago (3 days into grace) + + const cache = { + activatedAt: activatedAt.toISOString(), + cacheValidDays: 30, + gracePeriodDays: 7, + }; + + expect(isInGracePeriod(cache)).toBe(true); + }); + + it('should return false after grace period', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 40); // 40 days ago (past grace) + + const cache = { + activatedAt: activatedAt.toISOString(), + cacheValidDays: 30, + gracePeriodDays: 7, + }; + + expect(isInGracePeriod(cache)).toBe(false); + }); + + it('should return false for null/undefined', () => { + expect(isInGracePeriod(null)).toBe(false); + expect(isInGracePeriod(undefined)).toBe(false); + }); + }); + + describe('getDaysRemaining', () => { + it('should return positive days for fresh cache', () => { + const cache = { + activatedAt: new Date().toISOString(), + cacheValidDays: 30, + }; + + const days = getDaysRemaining(cache); + + expect(days).toBeGreaterThanOrEqual(29); + expect(days).toBeLessThanOrEqual(30); + }); + + it('should return negative days for expired cache', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 35); + + const cache = { + activatedAt: activatedAt.toISOString(), + cacheValidDays: 30, + }; + + const days = getDaysRemaining(cache); + + expect(days).toBeLessThan(0); + }); + + it('should return -1 for null cache', () => { + expect(getDaysRemaining(null)).toBe(-1); 
+ }); + }); + + describe('getExpiryDate', () => { + it('should return correct expiry date', () => { + const now = new Date(); + const cache = { + activatedAt: now.toISOString(), + cacheValidDays: 30, + }; + + const expiry = getExpiryDate(cache); + const expected = new Date(now); + expected.setDate(expected.getDate() + 30); + + expect(expiry.toDateString()).toBe(expected.toDateString()); + }); + + it('should return null for null cache', () => { + expect(getExpiryDate(null)).toBeNull(); + }); + }); + + describe('getLicenseState', () => { + it('should return "Not Activated" for null cache', () => { + expect(getLicenseState(null)).toBe('Not Activated'); + }); + + it('should return "Active" for valid cache', () => { + const cache = { + activatedAt: new Date().toISOString(), + cacheValidDays: 30, + }; + + expect(getLicenseState(cache)).toBe('Active'); + }); + + it('should return "Grace" during grace period', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 33); + + const cache = { + activatedAt: activatedAt.toISOString(), + cacheValidDays: 30, + gracePeriodDays: 7, + }; + + expect(getLicenseState(cache)).toBe('Grace'); + }); + + it('should return "Expired" after grace period', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 40); + + const cache = { + activatedAt: activatedAt.toISOString(), + cacheValidDays: 30, + gracePeriodDays: 7, + }; + + expect(getLicenseState(cache)).toBe('Expired'); + }); + }); + + describe('Pending Deactivation (AC-7b)', () => { + describe('setPendingDeactivation', () => { + it('should create pending deactivation file', () => { + const result = setPendingDeactivation('PRO-TEST-KEY', testDir); + + expect(result.success).toBe(true); + + const pending = hasPendingDeactivation(testDir); + expect(pending.pending).toBe(true); + expect(pending.data.licenseKey).toBe('PRO-TEST-KEY'); + }); + + it('should include machineId and timestamp', () => { + 
setPendingDeactivation('PRO-TEST-KEY', testDir); + + const pending = hasPendingDeactivation(testDir); + + expect(pending.data.machineId).toBeTruthy(); + expect(pending.data.deactivatedAt).toBeTruthy(); + expect(pending.data.synced).toBe(false); + }); + }); + + describe('hasPendingDeactivation', () => { + it('should return pending: false when no pending file', () => { + const result = hasPendingDeactivation(testDir); + + expect(result.pending).toBe(false); + expect(result.data).toBeUndefined(); + }); + + it('should return pending: true with unsynced deactivation', () => { + setPendingDeactivation('PRO-TEST-KEY', testDir); + + const result = hasPendingDeactivation(testDir); + + expect(result.pending).toBe(true); + }); + + it('should return pending: false for synced deactivation', () => { + setPendingDeactivation('PRO-TEST-KEY', testDir); + markPendingDeactivationSynced(testDir); + + const result = hasPendingDeactivation(testDir); + + expect(result.pending).toBe(false); + }); + }); + + describe('markPendingDeactivationSynced', () => { + it('should mark deactivation as synced', () => { + setPendingDeactivation('PRO-TEST-KEY', testDir); + const result = markPendingDeactivationSynced(testDir); + + expect(result.success).toBe(true); + + const pending = hasPendingDeactivation(testDir); + expect(pending.pending).toBe(false); + }); + + it('should add syncedAt timestamp', () => { + setPendingDeactivation('PRO-TEST-KEY', testDir); + markPendingDeactivationSynced(testDir); + + const filePath = path.join(getAiosDir(testDir), 'pending-deactivation.json'); + const content = JSON.parse(fs.readFileSync(filePath, 'utf8')); + + expect(content.synced).toBe(true); + expect(content.syncedAt).toBeTruthy(); + }); + + it('should succeed if no pending file', () => { + const result = markPendingDeactivationSynced(testDir); + + expect(result.success).toBe(true); + }); + }); + + describe('clearPendingDeactivation', () => { + it('should delete pending deactivation file', () => { + 
setPendingDeactivation('PRO-TEST-KEY', testDir); + + expect(hasPendingDeactivation(testDir).pending).toBe(true); + + const result = clearPendingDeactivation(testDir); + + expect(result.success).toBe(true); + expect(hasPendingDeactivation(testDir).pending).toBe(false); + }); + + it('should succeed if no pending file', () => { + const result = clearPendingDeactivation(testDir); + + expect(result.success).toBe(true); + }); + }); + }); + + describe('cacheExists', () => { + it('should return false when cache does not exist', () => { + expect(cacheExists(testDir)).toBe(false); + }); + + it('should return true when cache exists', () => { + writeLicenseCache(createTestCacheData(), testDir); + + expect(cacheExists(testDir)).toBe(true); + }); + }); + + describe('Security: Machine specificity (AC-10)', () => { + it('should produce cache that cannot be read with different machineId', () => { + // This test validates that the cache is bound to the machine + // by verifying the round-trip works on the same machine + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + const cache = readLicenseCache(testDir); + + // Cache should be readable on the same machine + expect(cache).not.toBeNull(); + expect(cache.key).toBe(data.key); + + // The machineId in the cache should match current machine + const { generateMachineId } = require('../../pro/license/license-crypto'); + expect(cache.machineId).toBe(generateMachineId()); + }); + }); + + describe('Offline operation (AC-2)', () => { + it('should support 30 days offline operation by default', () => { + expect(_CONFIG.DEFAULT_CACHE_VALID_DAYS).toBe(30); + }); + + it('should support 7 day grace period by default', () => { + expect(_CONFIG.DEFAULT_GRACE_PERIOD_DAYS).toBe(7); + }); + + it('should allow reading cache without network', () => { + // Write cache + const data = createTestCacheData(); + writeLicenseCache(data, testDir); + + // Reading should work without any network calls + const cache = 
readLicenseCache(testDir);
+
+      expect(cache).not.toBeNull();
+      expect(cache.features).toEqual(data.features);
+    });
+  });
+
+  describe('Cache expiry and grace period (AC-3)', () => {
+    it('should transition from Active to Grace after 30 days', () => {
+      // Day 0: Active
+      const cache = {
+        activatedAt: new Date().toISOString(),
+        cacheValidDays: 30,
+        gracePeriodDays: 7,
+      };
+
+      expect(getLicenseState(cache)).toBe('Active');
+
+      // Day 31: Grace
+      const day31 = new Date();
+      day31.setDate(day31.getDate() - 31);
+      cache.activatedAt = day31.toISOString();
+
+      expect(getLicenseState(cache)).toBe('Grace');
+    });
+
+    it('should transition from Grace to Expired after 37 days', () => {
+      // Day 38: Expired
+      const day38 = new Date();
+      day38.setDate(day38.getDate() - 38);
+
+      const cache = {
+        activatedAt: day38.toISOString(),
+        cacheValidDays: 30,
+        gracePeriodDays: 7,
+      };
+
+      expect(getLicenseState(cache)).toBe('Expired');
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/license/license-api-auth.test.js
+==================================================
+```js
+/**
+ * Unit tests for license-api.js auth methods (PRO-11)
+ *
+ * @see Story PRO-11 - Email Authentication & Buyer-Based Pro Activation
+ * @see AC-1, AC-2, AC-3, AC-4, AC-5, AC-8
+ */
+
+'use strict';
+
+const http = require('http');
+const { LicenseApiClient } = require('../../pro/license/license-api');
+const { AuthError, BuyerValidationError, LicenseActivationError } = require('../../pro/license/errors');
+
+describe('license-api auth methods', () => {
+  let server;
+  let serverUrl;
+
+  function createMockServer(handler) {
+    return new Promise((resolve) => {
+      server = http.createServer(handler);
+      server.listen(0, '127.0.0.1', () => {
+        const port = server.address().port;
+        serverUrl = `http://127.0.0.1:${port}`;
+        resolve();
+      });
+    });
+  }
+
+  function closeMockServer() {
+    return new Promise((resolve) => {
+      if (server) {
+        server.close(() => resolve());
+      } else {
+        resolve();
+      }
+    });
+  }
+
+  afterEach(async () => {
+    await closeMockServer();
+  });
+
+  describe('signup (AC-1)', () => {
+    it('should successfully create account', async () => {
+      await createMockServer((req, res) => {
+        expect(req.method).toBe('POST');
+        expect(req.url).toBe('/api/v1/auth/signup');
+
+        let body = '';
+        req.on('data', (chunk) => (body += chunk));
+        req.on('end', () => {
+          const data = JSON.parse(body);
+          expect(data.email).toBe('user@example.com');
+          expect(data.password).toBe('TestPass123');
+
+          res.writeHead(200, { 'Content-Type': 'application/json' });
+          res.end(JSON.stringify({
+            userId: 'user-123',
+            message: 'Verification email sent.',
+          }));
+        });
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+      const result = await client.signup('user@example.com', 'TestPass123');
+
+      expect(result.userId).toBe('user-123');
+      expect(result.message).toContain('Verification');
+    });
+
+    it('should throw AuthError for existing email', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(400, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({
+          message: 'User already registered',
+          code: 'BAD_REQUEST',
+        }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+
+      try {
+        await client.signup('existing@example.com', 'TestPass123');
+        fail('Should have thrown');
+      } catch (error) {
+        expect(error).toBeInstanceOf(AuthError);
+        expect(error.code).toBe('EMAIL_ALREADY_REGISTERED');
+      }
+    });
+
+    it('should throw AuthError on rate limit', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(429, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({ retryAfter: 900 }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+
+      try {
+        await client.signup('user@example.com', 'TestPass123');
+        fail('Should have thrown');
+      } catch (error) {
+        expect(error).toBeInstanceOf(AuthError);
+        expect(error.code).toBe('AUTH_RATE_LIMITED');
+      }
+    });
+  });
+
+  describe('login (AC-5)', () => {
+    it('should successfully login', async () => {
+      await createMockServer((req, res) => {
+        expect(req.method).toBe('POST');
+        expect(req.url).toBe('/api/v1/auth/login');
+
+        let body = '';
+        req.on('data', (chunk) => (body += chunk));
+        req.on('end', () => {
+          const data = JSON.parse(body);
+          expect(data.email).toBe('user@example.com');
+
+          res.writeHead(200, { 'Content-Type': 'application/json' });
+          res.end(JSON.stringify({
+            accessToken: 'session-token-abc',
+            userId: 'user-123',
+            emailVerified: true,
+          }));
+        });
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+      const result = await client.login('user@example.com', 'TestPass123');
+
+      expect(result.sessionToken).toBe('session-token-abc');
+      expect(result.userId).toBe('user-123');
+      expect(result.emailVerified).toBe(true);
+    });
+
+    it('should throw AuthError for invalid credentials (AC-8)', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(401, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({ error: 'Invalid credentials' }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+
+      try {
+        await client.login('user@example.com', 'WrongPass');
+        fail('Should have thrown');
+      } catch (error) {
+        expect(error).toBeInstanceOf(AuthError);
+        expect(error.code).toBe('INVALID_CREDENTIALS');
+      }
+    });
+
+    it('should return emailVerified=false for unverified users', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(200, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({
+          accessToken: 'session-token',
+          userId: 'user-456',
+          emailVerified: false,
+        }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+      const result = await client.login('unverified@example.com', 'TestPass123');
+
+      expect(result.emailVerified).toBe(false);
+    });
+  });
+
+  describe('checkEmailVerified (AC-2)', () => {
+    it('should return verified=true for verified email', async () => {
+      await createMockServer((req, res) => {
+        expect(req.url).toBe('/api/v1/auth/verify-status');
+
+        res.writeHead(200, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({
+          emailVerified: true,
+          email: 'user@example.com',
+        }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+      const result = await client.checkEmailVerified('session-token');
+
+      expect(result.verified).toBe(true);
+      expect(result.email).toBe('user@example.com');
+    });
+
+    it('should return verified=false for unverified email', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(200, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({
+          emailVerified: false,
+          email: 'unverified@example.com',
+        }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+      const result = await client.checkEmailVerified('session-token');
+
+      expect(result.verified).toBe(false);
+    });
+  });
+
+  describe('activateByAuth (AC-3, AC-4)', () => {
+    it('should successfully activate for valid buyer (AC-3)', async () => {
+      const mockActivation = {
+        licenseKey: 'PRO-AUTH-1234-5678-ABCD',
+        features: ['pro.squads.*', 'pro.memory.*'],
+        seats: { used: 1, max: 3 },
+        expiresAt: '2027-02-15T00:00:00Z',
+        cacheValidDays: 30,
+        gracePeriodDays: 7,
+      };
+
+      await createMockServer((req, res) => {
+        expect(req.url).toBe('/api/v1/auth/activate-pro');
+
+        let body = '';
+        req.on('data', (chunk) => (body += chunk));
+        req.on('end', () => {
+          const data = JSON.parse(body);
+          expect(data.accessToken).toBe('valid-session');
+          expect(data.machineId).toBeTruthy();
+          expect(data.aiosCoreVersion).toBeTruthy();
+
+          res.writeHead(200, { 'Content-Type': 'application/json' });
+          res.end(JSON.stringify(mockActivation));
+        });
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+      const result = await client.activateByAuth('valid-session', 'machine-id', '4.1.0');
+
+      expect(result.key).toBe('PRO-AUTH-1234-5678-ABCD');
+      expect(result.features).toContain('pro.squads.*');
+      expect(result.seats.max).toBe(3);
+      expect(result.activatedAt).toBeTruthy();
+    });
+
+    it('should throw BuyerValidationError for non-buyer (AC-4)', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(403, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({
+          message: 'Not a buyer',
+          code: 'NOT_A_BUYER',
+        }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+
+      try {
+        await client.activateByAuth('valid-session', 'machine-id', '4.1.0');
+        fail('Should have thrown');
+      } catch (error) {
+        expect(error).toBeInstanceOf(BuyerValidationError);
+        expect(error.code).toBe('NOT_A_BUYER');
+      }
+    });
+
+    it('should throw AuthError for unverified email', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(403, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({
+          message: 'Email not verified',
+          code: 'EMAIL_NOT_VERIFIED',
+        }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+
+      try {
+        await client.activateByAuth('unverified-session', 'machine-id', '4.1.0');
+        fail('Should have thrown');
+      } catch (error) {
+        expect(error).toBeInstanceOf(AuthError);
+        expect(error.code).toBe('EMAIL_NOT_VERIFIED');
+      }
+    });
+
+    it('should throw LicenseActivationError for seat limit', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(403, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({
+          code: 'SEAT_LIMIT_EXCEEDED',
+          details: { used: 2, max: 2 },
+        }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+
+      try {
+        await client.activateByAuth('valid-session', 'machine-id', '4.1.0');
+        fail('Should have thrown');
+      } catch (error) {
+        expect(error).toBeInstanceOf(LicenseActivationError);
+        expect(error.code).toBe('SEAT_LIMIT_EXCEEDED');
+      }
+    });
+
+    it('should throw BuyerValidationError for service unavailable', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(503, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({
+          message: 'Buyer service unavailable',
+          code: 'BUYER_SERVICE_UNAVAILABLE',
+        }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+
+      try {
+        await client.activateByAuth('valid-session', 'machine-id', '4.1.0');
+        fail('Should have thrown');
+      } catch (error) {
+        expect(error).toBeInstanceOf(BuyerValidationError);
+        expect(error.code).toBe('BUYER_SERVICE_UNAVAILABLE');
+      }
+    });
+  });
+
+  describe('resendVerification (AC-9)', () => {
+    it('should successfully resend verification', async () => {
+      await createMockServer((req, res) => {
+        expect(req.url).toBe('/api/v1/auth/resend-verification');
+
+        res.writeHead(200, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({ message: 'Verification email resent.' }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+      const result = await client.resendVerification('test@example.com');
+
+      expect(result.message).toContain('resent');
+    });
+
+    it('should throw AuthError on rate limit (AC-9 - max 3/hour)', async () => {
+      await createMockServer((req, res) => {
+        res.writeHead(429, { 'Content-Type': 'application/json' });
+        res.end(JSON.stringify({ retryAfter: 3600 }));
+      });
+
+      const client = new LicenseApiClient({ baseUrl: serverUrl });
+
+      try {
+        await client.resendVerification('test@example.com');
+        fail('Should have thrown');
+      } catch (error) {
+        expect(error).toBeInstanceOf(AuthError);
+        expect(error.code).toBe('AUTH_RATE_LIMITED');
+      }
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/license/security.test.js
+==================================================
+```js
+/**
+ * Security Tests for License System
+ *
+ * @see Story PRO-6 - License Key & Feature Gating System
+ * @see Task 7.3 - Security tests
+ * @see AC-9 - Tamper resistance (modified bytes invalidates cache)
+ * @see AC-10 - Cache non-portable (different machineId)
+ */
+
+'use strict';
+
+const fs = require('fs');
+const path = require('path');
+const os = require('os');
+const {
+  writeLicenseCache,
+  readLicenseCache,
+  getCachePath,
+  getAiosDir,
+} = require('../../pro/license/license-cache');
+const {
+  generateMachineId,
+  maskKey,
+  validateKeyFormat,
+} = require('../../pro/license/license-crypto');
+const { ProFeatureError, LicenseActivationError, LicenseValidationError } = require('../../pro/license/errors');
+
+describe('Security Tests (AC-9, AC-10)', () => {
+  let testDir;
+
+  beforeEach(() => {
+    testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-security-test-'));
+  });
+
+  afterEach(() => {
+    try {
+      fs.rmSync(testDir, { recursive: true, force: true });
+    } catch {
+      // Ignore cleanup errors
+    }
+  });
+
+  // Helper to create valid test cache data
+  function createTestCacheData(overrides = {}) {
+    return {
+      key: 'PRO-ABCD-EFGH-IJKL-MNOP',
+      activatedAt: new Date().toISOString(),
+      expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toISOString(),
+      features: ['pro.squads.*', 'pro.memory.*'],
+      seats: { used: 1, max: 5 },
+      cacheValidDays: 30,
+      gracePeriodDays: 7,
+      ...overrides,
+    };
+  }
+
+  describe('License Key Never in Plaintext (AC-9)', () => {
+    it('should not store license key in plaintext in cache file', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = fs.readFileSync(cachePath, 'utf8');
+
+      // License key should NOT appear in plaintext
+      expect(content).not.toContain(data.key);
+      expect(content).not.toContain('PRO-ABCD-EFGH-IJKL-MNOP');
+    });
+
+    it('should not store license key segments in plaintext', () => {
+      const data = createTestCacheData({ key: 'PRO-WXYZ-1234-5678-9ABC' });
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = fs.readFileSync(cachePath, 'utf8');
+
+      // Check key segments don't appear
+      expect(content).not.toContain('WXYZ');
+      expect(content).not.toContain('1234');
+      expect(content).not.toContain('5678');
+      expect(content).not.toContain('9ABC');
+    });
+
+    it('should mask license key correctly', () => {
+      const key = 'PRO-ABCD-EFGH-IJKL-MNOP';
+      const masked = maskKey(key);
+
+      // Should show PRO, first segment, and last segment; mask middle two
+      expect(masked).toBe('PRO-ABCD-****-****-MNOP');
+      expect(masked).not.toContain('EFGH');
+      expect(masked).not.toContain('IJKL');
+    });
+
+    it('should mask license key preserving prefix, first segment and last segment', () => {
+      const testKeys = [
+        'PRO-TEST-XXXX-YYYY-ZZZZ',
+        'PRO-1111-2222-3333-4444',
+        'PRO-AAAA-BBBB-CCCC-DDDD',
+      ];
+
+      for (const key of testKeys) {
+        const masked = maskKey(key);
+        const segments = key.split('-');
+
+        expect(masked).toContain(segments[0]); // PRO
+        expect(masked).toContain(segments[1]); // First segment (visible)
+        expect(masked).toContain(segments[4]); // Last segment
+        expect(masked).not.toContain(segments[2]); // Middle segment masked
+        expect(masked).not.toContain(segments[3]); // Middle segment masked
+      }
+    });
+  });
+
+  describe('Cache Non-Portable (AC-10)', () => {
+    it('should bind cache to current machine machineId', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cache = readLicenseCache(testDir);
+      const currentMachineId = generateMachineId();
+
+      expect(cache).not.toBeNull();
+      expect(cache.machineId).toBe(currentMachineId);
+    });
+
+    it('should return null when cache machineId does not match', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      // Manually tamper with the encrypted content to simulate different machine
+      // The cache will be unreadable because the key derived from machineId won't match
+      const cachePath = getCachePath(testDir);
+      const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // Tamper with salt (changes derived key)
+      content.salt = Buffer.from('different-machine-salt-1234567890ab').toString('hex');
+      fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8');
+
+      const cache = readLicenseCache(testDir);
+
+      expect(cache).toBeNull();
+    });
+
+    it('should produce different ciphertext for same data with different machineId derivation', () => {
+      // Write cache twice and verify the encryption is deterministic based on machine
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content1 = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // Write again to new dir
+      const testDir2 = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-security-test2-'));
+      try {
+        writeLicenseCache(data, testDir2);
+        const content2 = JSON.parse(fs.readFileSync(getCachePath(testDir2), 'utf8'));
+
+        // Different salts mean different derived keys
+        // (same machine, but different salts per write)
+        expect(content1.salt).not.toBe(content2.salt);
+        expect(content1.encrypted).not.toBe(content2.encrypted);
+      } finally {
+        fs.rmSync(testDir2, { recursive: true, force: true });
+      }
+    });
+
+    it('should verify machineId consistency on same machine', () => {
+      const id1 = generateMachineId();
+      const id2 = generateMachineId();
+
+      // Same machine should produce same ID
+      expect(id1).toBe(id2);
+      expect(typeof id1).toBe('string');
+      expect(id1.length).toBeGreaterThan(0);
+    });
+  });
+
+  describe('Cache Tamper Detection (AC-9)', () => {
+    it('should detect tampered ciphertext', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // Tamper with ciphertext
+      const tamperedCiphertext = content.encrypted.replace(/[a-f0-9]/gi, (c) => {
+        return c === '0' ? '1' : '0';
+      });
+      content.encrypted = tamperedCiphertext;
+      fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8');
+
+      const cache = readLicenseCache(testDir);
+
+      expect(cache).toBeNull();
+    });
+
+    it('should detect tampered HMAC', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // Tamper with HMAC
+      content.hmac = 'a'.repeat(64);
+      fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8');
+
+      const cache = readLicenseCache(testDir);
+
+      expect(cache).toBeNull();
+    });
+
+    it('should detect tampered IV', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // Tamper with IV
+      content.iv = 'b'.repeat(24);
+      fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8');
+
+      const cache = readLicenseCache(testDir);
+
+      expect(cache).toBeNull();
+    });
+
+    it('should detect tampered auth tag', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // Tamper with auth tag
+      content.tag = 'c'.repeat(32);
+      fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8');
+
+      const cache = readLicenseCache(testDir);
+
+      expect(cache).toBeNull();
+    });
+
+    it('should detect tampered salt', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // Tamper with salt
+      content.salt = 'd'.repeat(32);
+      fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8');
+
+      const cache = readLicenseCache(testDir);
+
+      expect(cache).toBeNull();
+    });
+
+    it('should detect single byte modification in ciphertext', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // Flip one character in ciphertext
+      const chars = content.encrypted.split('');
+      const idx = Math.floor(chars.length / 2);
+      chars[idx] = chars[idx] === '0' ? '1' : '0';
+      content.encrypted = chars.join('');
+      fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8');
+
+      const cache = readLicenseCache(testDir);
+
+      expect(cache).toBeNull();
+    });
+
+    it('should detect missing required fields', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+
+      const requiredFields = ['encrypted', 'iv', 'tag', 'hmac', 'salt'];
+
+      for (const field of requiredFields) {
+        const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+        delete content[field];
+        fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8');
+
+        const cache = readLicenseCache(testDir);
+        expect(cache).toBeNull();
+
+        // Restore for next iteration
+        writeLicenseCache(data, testDir);
+      }
+    });
+
+    it('should detect truncated ciphertext', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // Truncate ciphertext
+      content.encrypted = content.encrypted.substring(0, content.encrypted.length / 2);
+      fs.writeFileSync(cachePath, JSON.stringify(content), 'utf8');
+
+      const cache = readLicenseCache(testDir);
+
+      expect(cache).toBeNull();
+    });
+  });
+
+  describe('No Sensitive Data in Error Messages (AC-9)', () => {
+    it('should not expose license key in ProFeatureError message', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+
+      // Should not contain actual license key values
+      expect(error.message).not.toContain('PRO-ABCD');
+      expect(error.message).not.toContain('PRO-XXXX');
+      expect(error.toCliMessage()).not.toContain('PRO-ABCD');
+
+      // The command template with placeholder is expected
+      expect(error.message).toContain('--key ');
+    });
+
+    it('should not expose license key in ProFeatureError JSON', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+      const json = error.toJSON();
+
+      expect(JSON.stringify(json)).not.toContain('PRO-');
+    });
+
+    it('should not expose internal state in LicenseActivationError', () => {
+      const error = LicenseActivationError.invalidKey();
+
+      expect(error.message).not.toContain('machineId');
+      expect(error.message).not.toContain('cache');
+      expect(error.message).not.toContain('salt');
+    });
+
+    it('should not expose sensitive data in network error', () => {
+      const originalError = new Error('Connection failed to secret-server.internal:443');
+      const error = LicenseActivationError.networkError(originalError);
+
+      // Should provide generic message, not expose internal network details
+      expect(error.message).toBe('Unable to reach license server. Please check your internet connection.');
+    });
+
+    it('should not expose seat details beyond usage numbers', () => {
+      const error = LicenseActivationError.seatLimitExceeded(3, 5);
+
+      expect(error.message).toContain('3/5');
+      expect(error.details.used).toBe(3);
+      expect(error.details.max).toBe(5);
+      // Should not expose other user/machine details
+      expect(error.message).not.toContain('machineId');
+      expect(error.message).not.toContain('userId');
+    });
+
+    it('should not expose internal state in LicenseValidationError', () => {
+      const error = LicenseValidationError.corruptedCache();
+
+      expect(error.message).not.toContain('decrypt');
+      expect(error.message).not.toContain('HMAC');
+      expect(error.message).not.toContain('AES');
+    });
+
+    it('should not include license key in error stack traces', () => {
+      const error = new ProFeatureError('pro.test', 'Test Feature');
+      const stack = error.stack || '';
+
+      expect(stack).not.toContain('PRO-');
+      expect(stack).not.toContain('license key');
+    });
+  });
+
+  describe('Key Format Validation Security', () => {
+    it('should reject keys without proper format', () => {
+      const invalidKeys = [
+        '',
+        'invalid',
+        'pro-xxxx-xxxx-xxxx-xxxx', // lowercase
+        'PRO-XXX-XXXX-XXXX-XXXX', // wrong segment length
+        'PRO-XXXXX-XXXX-XXXX-XXXX', // too long segment
+        'ABC-XXXX-XXXX-XXXX-XXXX', // wrong prefix
+        'PRO-XXXX-XXXX-XXXX', // missing segment
+        'PRO-XXXX-XXXX-XXXX-XXXX-XXXX', // extra segment
+        'PRO_XXXX_XXXX_XXXX_XXXX', // wrong separator
+      ];
+
+      for (const key of invalidKeys) {
+        expect(validateKeyFormat(key)).toBe(false);
+      }
+    });
+
+    it('should accept valid key format', () => {
+      const validKeys = [
+        'PRO-ABCD-EFGH-IJKL-MNOP',
+        'PRO-1234-5678-9ABC-DEF0',
+        'PRO-AAAA-BBBB-CCCC-DDDD',
+        'PRO-0000-0000-0000-0000',
+      ];
+
+      for (const key of validKeys) {
+        expect(validateKeyFormat(key)).toBe(true);
+      }
+    });
+  });
+
+  describe('Encryption Strength', () => {
+    it('should use AES-256-GCM (verifiable via ciphertext structure)', () => {
+      const data = createTestCacheData();
+      writeLicenseCache(data, testDir);
+
+      const cachePath = getCachePath(testDir);
+      const content = JSON.parse(fs.readFileSync(cachePath, 'utf8'));
+
+      // GCM mode produces IV (12 bytes = 24 hex chars) and tag (16 bytes = 32 hex chars)
+      expect(content.iv.length).toBe(24); // 12 bytes hex encoded
+      expect(content.tag.length).toBe(32); // 16 bytes hex encoded
+      expect(content.salt.length).toBe(32); // 16 bytes hex encoded (for PBKDF2)
+    });
+
+    it('should produce unique ciphertext for identical plaintext', () => {
+      const data = createTestCacheData();
+
+      // Write twice to different dirs
+      writeLicenseCache(data, testDir);
+      const content1 = JSON.parse(fs.readFileSync(getCachePath(testDir), 'utf8'));
+
+      const testDir2 = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-enc-test-'));
+      try {
+        writeLicenseCache(data, testDir2);
+        const content2 = JSON.parse(fs.readFileSync(getCachePath(testDir2), 'utf8'));
+
+        // Random IV and salt should produce different ciphertext
+        expect(content1.iv).not.toBe(content2.iv);
+        expect(content1.encrypted).not.toBe(content2.encrypted);
+      } finally {
+        fs.rmSync(testDir2, { recursive: true, force: true });
+      }
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/license/errors.test.js
+==================================================
+```js
+/**
+ * Unit tests for errors.js
+ *
+ * @see Story PRO-6 - License Key & Feature Gating System
+ * @see AC-4, AC-8 - Feature gate check, Graceful degradation
+ */
+
+'use strict';
+
+const {
+  ProFeatureError,
+  LicenseActivationError,
+  LicenseValidationError,
+} = require('../../pro/license/errors');
+
+describe('errors', () => {
+  describe('ProFeatureError', () => {
+    it('should create error with feature ID and friendly name', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+
+      expect(error.name).toBe('ProFeatureError');
+      expect(error.featureId).toBe('pro.squads.premium');
+      expect(error.friendlyName).toBe('Premium Squads');
+    });
+
+    it('should include activation instructions in message', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+
+      expect(error.message).toContain('Premium Squads requires an active AIOS Pro license');
+      expect(error.message).toContain('aios pro activate --key ');
+      expect(error.message).toContain('https://synkra.ai/pro');
+    });
+
+    it('should mention data preservation (AC-8)', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+
+      expect(error.message).toContain('Your data and configurations are preserved');
+    });
+
+    it('should use featureId as friendlyName if not provided', () => {
+      const error = new ProFeatureError('pro.squads.premium');
+
+      expect(error.friendlyName).toBe('pro.squads.premium');
+      expect(error.message).toContain('pro.squads.premium requires');
+    });
+
+    it('should allow custom purchase URL', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads', {
+        purchaseUrl: 'https://custom.url/buy',
+      });
+
+      expect(error.purchaseUrl).toBe('https://custom.url/buy');
+      expect(error.message).toContain('https://custom.url/buy');
+    });
+
+    it('should allow custom activation command', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads', {
+        activateCommand: 'custom-cli activate',
+      });
+
+      expect(error.activateCommand).toBe('custom-cli activate');
+      expect(error.message).toContain('custom-cli activate');
+    });
+
+    it('should be instanceof Error', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+
+      expect(error).toBeInstanceOf(Error);
+      expect(error).toBeInstanceOf(ProFeatureError);
+    });
+
+    it('should have stack trace', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+
+      expect(error.stack).toBeTruthy();
+      expect(error.stack).toContain('ProFeatureError');
+    });
+
+    describe('toCliMessage', () => {
+      it('should return formatted CLI message', () => {
+        const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+        const cliMsg = error.toCliMessage();
+
+        expect(cliMsg).toContain('Premium Squads requires an active AIOS Pro license');
+        expect(cliMsg).toContain('Your data and configurations are preserved');
+        expect(cliMsg).toContain('Activate:');
+        expect(cliMsg).toContain('Purchase:');
+      });
+    });
+
+    describe('toJSON', () => {
+      it('should return JSON-serializable object', () => {
+        const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+        const json = error.toJSON();
+
+        expect(json.error).toBe('ProFeatureError');
+        expect(json.featureId).toBe('pro.squads.premium');
+        expect(json.friendlyName).toBe('Premium Squads');
+        expect(json.message).toContain('requires an active AIOS Pro license');
+        expect(json.activateCommand).toBeTruthy();
+        expect(json.purchaseUrl).toBeTruthy();
+      });
+
+      it('should be JSON.stringify safe', () => {
+        const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+        const json = JSON.stringify(error.toJSON());
+        const parsed = JSON.parse(json);
+
+        expect(parsed.featureId).toBe('pro.squads.premium');
+      });
+    });
+  });
+
+  describe('LicenseActivationError', () => {
+    it('should create error with message and code', () => {
+      const error = new LicenseActivationError('Test error', 'TEST_CODE');
+
+      expect(error.name).toBe('LicenseActivationError');
+      expect(error.message).toBe('Test error');
+      expect(error.code).toBe('TEST_CODE');
+    });
+
+    it('should default to ACTIVATION_FAILED code', () => {
+      const error = new LicenseActivationError('Test error');
+
+      expect(error.code).toBe('ACTIVATION_FAILED');
+    });
+
+    it('should support details object', () => {
+      const error = new LicenseActivationError('Test error', 'TEST_CODE', { foo: 'bar' });
+
+      expect(error.details).toEqual({ foo: 'bar' });
+    });
+
+    describe('static factory methods', () => {
+      it('invalidKeyFormat should create correct error', () => {
+        const error = LicenseActivationError.invalidKeyFormat();
+
+        expect(error.code).toBe('INVALID_KEY_FORMAT');
+        expect(error.message).toContain('PRO-XXXX-XXXX-XXXX-XXXX');
+      });
+
+      it('networkError should create correct error', () => {
+        const error = LicenseActivationError.networkError();
+
+        expect(error.code).toBe('NETWORK_ERROR');
+        expect(error.message).toContain('internet connection');
+      });
+
+      it('networkError should include cause if provided', () => {
+        const cause = new Error('Connection refused');
+        const error = LicenseActivationError.networkError(cause);
+
+        expect(error.details.cause).toBe('Connection refused');
+      });
+
+      it('invalidKey should create correct error', () => {
+        const error = LicenseActivationError.invalidKey();
+
+        expect(error.code).toBe('INVALID_KEY');
+        expect(error.message).toContain('invalid or has been revoked');
+      });
+
+      it('expiredKey should create correct error', () => {
+        const error = LicenseActivationError.expiredKey();
+
+        expect(error.code).toBe('EXPIRED_KEY');
+        expect(error.message).toContain('expired');
+        expect(error.message).toContain('renew');
+      });
+
+      it('seatLimitExceeded should create correct error', () => {
+        const error = LicenseActivationError.seatLimitExceeded(5, 5);
+
+        expect(error.code).toBe('SEAT_LIMIT_EXCEEDED');
+        expect(error.message).toContain('5/5 seats');
+        expect(error.details.used).toBe(5);
+        expect(error.details.max).toBe(5);
+      });
+
+      it('rateLimited should create correct error', () => {
+        const error = LicenseActivationError.rateLimited(60);
+
+        expect(error.code).toBe('RATE_LIMITED');
+        expect(error.message).toContain('60 seconds');
+        expect(error.details.retryAfter).toBe(60);
+      });
+
+      it('rateLimited should work without retryAfter', () => {
+        const error = LicenseActivationError.rateLimited();
+
+        expect(error.code).toBe('RATE_LIMITED');
+        expect(error.message).toContain('try again later');
+      });
+
+      it('serverError should create correct error', () => {
+        const error = LicenseActivationError.serverError();
+
+        expect(error.code).toBe('SERVER_ERROR');
+        expect(error.message).toContain('server error');
+      });
+    });
+
+    describe('toJSON', () => {
+      it('should return JSON-serializable object', () => {
+        const error = new LicenseActivationError('Test error', 'TEST_CODE', { foo: 'bar' });
+        const json = error.toJSON();
+
+        expect(json.error).toBe('LicenseActivationError');
+        expect(json.code).toBe('TEST_CODE');
+        expect(json.message).toBe('Test error');
+        expect(json.details).toEqual({ foo: 'bar' });
+      });
+    });
+  });
+
+  describe('LicenseValidationError', () => {
+    it('should create error with message and code', () => {
+      const error = new LicenseValidationError('Test error', 'TEST_CODE');
+
+      expect(error.name).toBe('LicenseValidationError');
+      expect(error.message).toBe('Test error');
+      expect(error.code).toBe('TEST_CODE');
+    });
+
+    it('should default to VALIDATION_FAILED code', () => {
+      const error = new LicenseValidationError('Test error');
+
+      expect(error.code).toBe('VALIDATION_FAILED');
+    });
+
+    describe('static factory methods', () => {
+      it('corruptedCache should create correct error', () => {
+        const error = LicenseValidationError.corruptedCache();
+
+        expect(error.code).toBe('CORRUPTED_CACHE');
+        expect(error.message).toContain('corrupted');
+        expect(error.message).toContain('reactivate');
+      });
+
+      it('machineMismatch should create correct error', () => {
+        const error = LicenseValidationError.machineMismatch();
+
+        expect(error.code).toBe('MACHINE_MISMATCH');
+        expect(error.message).toContain('different machine');
+      });
+    });
+  });
+
+  describe('Security: No sensitive data exposure', () => {
+    it('ProFeatureError should not expose license key', () => {
+      const error = new ProFeatureError('pro.squads.premium', 'Premium Squads');
+      const message = error.message;
+      const json = JSON.stringify(error.toJSON());
+
+      // Should not contain any key patterns
+      expect(message).not.toMatch(/PRO-[A-Z0-9]{4}/);
+      expect(json).not.toMatch(/PRO-[A-Z0-9]{4}/);
+    });
+
+    it('LicenseActivationError should not expose full key', () => {
+      const error = new LicenseActivationError('Error with key', 'TEST', {
+        key: 'PRO-ABCD-EFGH-IJKL-MNOP',
+      });
+
+      // The error itself doesn't prevent putting sensitive data in details,
+      // but consumers should use maskKey() before adding to details.
+      // This test documents that the error class doesn't auto-mask:
+      expect(error.details.key).toBe('PRO-ABCD-EFGH-IJKL-MNOP');
+
+      // In practice, use maskKey() before creating the error with key details.
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/license/feature-gate.test.js
+==================================================
+```js
+/**
+ * Unit tests for feature-gate.js
+ *
+ * @see Story PRO-6 - License Key & Feature Gating System
+ * @see AC-4, AC-5 - Feature gate check, Wildcard matching
+ */
+
+'use strict';
+
+const fs = require('fs');
+const path = require('path');
+const os = require('os');
+const { FeatureGate, featureGate } = require('../../pro/license/feature-gate');
+const { ProFeatureError } = require('../../pro/license/errors');
+const { writeLicenseCache, deleteLicenseCache } = require('../../pro/license/license-cache');
+
+describe('feature-gate', () => {
+  let testDir;
+  let originalCwd;
+
+  beforeEach(() => {
+    // Create temp directory for each test
+    testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-feature-test-'));
+
+    // Mock process.cwd to return our test directory
+    originalCwd = process.cwd;
+    process.cwd = () => testDir;
+
+    // Reset the singleton for clean tests
+    featureGate._reset();
+  });
+
+  afterEach(() => {
+    // Restore process.cwd
+    process.cwd = originalCwd;
+
+    // Cleanup temp directory
+    try {
+      fs.rmSync(testDir, { recursive: true, force: true });
+    } catch {
+      // Ignore cleanup errors
+    }
+  });
+
+  // Helper to create test cache
+  function createTestCache(features = ['pro.squads.*'], overrides = {}) {
+    return {
+      key: 'PRO-TEST-1234-5678-ABCD',
+      activatedAt: new Date().toISOString(),
+      expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toISOString(),
+      features,
+      seats: { used: 1, max: 5 },
+      cacheValidDays: 30,
+      gracePeriodDays: 7,
+      ...overrides,
+    };
+  }
+
+  describe('isAvailable', () => {
+    it('should return false when no license', () => {
+      expect(featureGate.isAvailable('pro.squads.premium')).toBe(false);
+    });
+
+    it('should return true for exact match', () => {
+      writeLicenseCache(createTestCache(['pro.squads.premium']), testDir);
+
+      expect(featureGate.isAvailable('pro.squads.premium')).toBe(true);
+    });
+
+    it('should return false for unmatched feature', () => {
+      writeLicenseCache(createTestCache(['pro.squads.premium']), testDir);
+
+      expect(featureGate.isAvailable('pro.memory.extended')).toBe(false);
+    });
+
+    it('should support wildcard matching (AC-5)', () => {
+      writeLicenseCache(createTestCache(['pro.squads.*']), testDir);
+
+      expect(featureGate.isAvailable('pro.squads.premium')).toBe(true);
+      expect(featureGate.isAvailable('pro.squads.custom')).toBe(true);
+      expect(featureGate.isAvailable('pro.squads.marketplace')).toBe(true);
+      expect(featureGate.isAvailable('pro.memory.extended')).toBe(false);
+    });
+
+    it('should support module wildcard (pro.*)', () => {
+      writeLicenseCache(createTestCache(['pro.*']), testDir);
+
+      expect(featureGate.isAvailable('pro.squads.premium')).toBe(true);
+      expect(featureGate.isAvailable('pro.memory.extended')).toBe(true);
+      expect(featureGate.isAvailable('pro.integrations.jira')).toBe(true);
+    });
+
+    it('should handle multiple patterns', () => {
+      writeLicenseCache(createTestCache(['pro.squads.premium', 'pro.memory.*']), testDir);
+
+      expect(featureGate.isAvailable('pro.squads.premium')).toBe(true);
+      expect(featureGate.isAvailable('pro.squads.custom')).toBe(false);
+      expect(featureGate.isAvailable('pro.memory.extended')).toBe(true);
+      expect(featureGate.isAvailable('pro.memory.persistence')).toBe(true);
+    });
+
+    it('should return false for expired license (past 
grace)', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 40); // 40 days ago + + writeLicenseCache( + createTestCache(['pro.squads.*'], { activatedAt: activatedAt.toISOString() }), + testDir, + ); + + expect(featureGate.isAvailable('pro.squads.premium')).toBe(false); + }); + + it('should return true during grace period', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 33); // 33 days ago (in grace) + + writeLicenseCache( + createTestCache(['pro.squads.*'], { activatedAt: activatedAt.toISOString() }), + testDir, + ); + + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + }); + }); + + describe('require (AC-4)', () => { + it('should not throw when feature is available', () => { + writeLicenseCache(createTestCache(['pro.squads.premium']), testDir); + + expect(() => { + featureGate.require('pro.squads.premium', 'Premium Squads'); + }).not.toThrow(); + }); + + it('should throw ProFeatureError when not available', () => { + expect(() => { + featureGate.require('pro.squads.premium', 'Premium Squads'); + }).toThrow(ProFeatureError); + }); + + it('should include feature info in error', () => { + try { + featureGate.require('pro.squads.premium', 'Premium Squads'); + fail('Should have thrown'); + } catch (error) { + expect(error).toBeInstanceOf(ProFeatureError); + expect(error.featureId).toBe('pro.squads.premium'); + expect(error.friendlyName).toBe('Premium Squads'); + expect(error.message).toContain('Premium Squads'); + expect(error.message).toContain('requires an active AIOS Pro license'); + } + }); + + it('should use featureId when no friendly name provided and not in registry', () => { + try { + // Use a feature ID that is NOT in the registry + featureGate.require('pro.unknown.feature'); + fail('Should have thrown'); + } catch (error) { + expect(error.friendlyName).toBe('pro.unknown.feature'); + } + }); + + it('should use registry name when no friendly name provided', () => { 
+ try { + // This feature IS in the registry with name "Premium Squads" + featureGate.require('pro.squads.premium'); + fail('Should have thrown'); + } catch (error) { + expect(error.friendlyName).toBe('Premium Squads'); + } + }); + }); + + describe('listAvailable', () => { + it('should return empty array when no license', () => { + const available = featureGate.listAvailable(); + + expect(available).toEqual([]); + }); + + it('should return available features from registry', () => { + writeLicenseCache(createTestCache(['pro.squads.*']), testDir); + + const available = featureGate.listAvailable(); + + expect(available).toContain('pro.squads.premium'); + expect(available).toContain('pro.squads.custom'); + expect(available).not.toContain('pro.memory.extended'); + }); + + it('should return sorted array', () => { + writeLicenseCache(createTestCache(['pro.*']), testDir); + + const available = featureGate.listAvailable(); + + const sorted = [...available].sort(); + expect(available).toEqual(sorted); + }); + }); + + describe('listAll', () => { + it('should list all registered features with availability', () => { + writeLicenseCache(createTestCache(['pro.squads.premium']), testDir); + + const all = featureGate.listAll(); + + expect(all.length).toBeGreaterThan(0); + + const premium = all.find((f) => f.id === 'pro.squads.premium'); + expect(premium).toBeDefined(); + expect(premium.available).toBe(true); + + const memory = all.find((f) => f.id === 'pro.memory.extended'); + expect(memory).toBeDefined(); + expect(memory.available).toBe(false); + }); + + it('should include feature metadata', () => { + const all = featureGate.listAll(); + + const feature = all[0]; + expect(feature).toHaveProperty('id'); + expect(feature).toHaveProperty('name'); + expect(feature).toHaveProperty('description'); + expect(feature).toHaveProperty('module'); + expect(feature).toHaveProperty('available'); + }); + }); + + describe('listByModule', () => { + it('should group features by module', () => { + 
const grouped = featureGate.listByModule(); + + expect(grouped).toHaveProperty('squads'); + expect(grouped).toHaveProperty('memory'); + expect(grouped).toHaveProperty('metrics'); + + expect(Array.isArray(grouped.squads)).toBe(true); + expect(grouped.squads.length).toBeGreaterThan(0); + }); + }); + + describe('reload', () => { + it('should pick up new features after activation', () => { + // Initially no license + expect(featureGate.isAvailable('pro.squads.premium')).toBe(false); + + // Activate license + writeLicenseCache(createTestCache(['pro.squads.premium']), testDir); + + // Still false without reload (cached) + // Note: In this test setup, each isAvailable call may reload + // In production, caching is more aggressive + + // Force reload + featureGate.reload(); + + expect(featureGate.isAvailable('pro.squads.premium')).toBe(true); + }); + }); + + describe('getLicenseState', () => { + it('should return "Not Activated" when no license', () => { + expect(featureGate.getLicenseState()).toBe('Not Activated'); + }); + + it('should return "Active" for valid license', () => { + writeLicenseCache(createTestCache(), testDir); + + expect(featureGate.getLicenseState()).toBe('Active'); + }); + + it('should return "Grace" during grace period', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 33); + + writeLicenseCache( + createTestCache(['pro.squads.*'], { activatedAt: activatedAt.toISOString() }), + testDir, + ); + + expect(featureGate.getLicenseState()).toBe('Grace'); + }); + + it('should return "Expired" after grace period', () => { + const activatedAt = new Date(); + activatedAt.setDate(activatedAt.getDate() - 40); + + writeLicenseCache( + createTestCache(['pro.squads.*'], { activatedAt: activatedAt.toISOString() }), + testDir, + ); + + expect(featureGate.getLicenseState()).toBe('Expired'); + }); + }); + + describe('getLicenseInfo', () => { + it('should return null when no license', () => { + 
expect(featureGate.getLicenseInfo()).toBeNull(); + }); + + it('should return license info when active', () => { + writeLicenseCache(createTestCache(['pro.squads.*', 'pro.memory.*']), testDir); + + const info = featureGate.getLicenseInfo(); + + expect(info).not.toBeNull(); + expect(info.state).toBe('Active'); + expect(info.features).toContain('pro.squads.*'); + expect(info.features).toContain('pro.memory.*'); + expect(info.inGrace).toBe(false); + expect(info.isExpired).toBe(false); + }); + }); + + describe('singleton pattern', () => { + it('should export singleton instance', () => { + expect(featureGate).toBeInstanceOf(FeatureGate); + }); + + it('should maintain state across imports', () => { + writeLicenseCache(createTestCache(['pro.squads.premium']), testDir); + featureGate.reload(); + + // Re-import + const { featureGate: fg2 } = require('../../pro/license/feature-gate'); + + expect(fg2.isAvailable('pro.squads.premium')).toBe(true); + }); + }); + + describe('Performance (AC-13)', () => { + it('should complete isAvailable in under 5ms', () => { + writeLicenseCache(createTestCache(['pro.squads.*']), testDir); + featureGate.reload(); + + // Warm up cache + featureGate.isAvailable('pro.squads.premium'); + + // Measure + const iterations = 100; + const start = performance.now(); + + for (let i = 0; i < iterations; i++) { + featureGate.isAvailable('pro.squads.premium'); + } + + const elapsed = performance.now() - start; + const avgMs = elapsed / iterations; + + // Should be well under 5ms (typically < 0.1ms) + expect(avgMs).toBeLessThan(5); + }); + }); +}); + +``` + +================================================== +📄 tests/config/merge-utils.test.js +================================================== +```js +/** + * Unit tests for merge-utils.js + * Story PRO-4 — Config Hierarchy + */ + +const { deepMerge, mergeAll, isPlainObject } = require('../../.aios-core/core/config/merge-utils'); + +describe('merge-utils', () => { + describe('isPlainObject', () => { + 
test('returns true for plain objects', () => { + expect(isPlainObject({})).toBe(true); + expect(isPlainObject({ a: 1 })).toBe(true); + expect(isPlainObject(Object.create(null))).toBe(true); + }); + + test('returns false for non-plain objects', () => { + expect(isPlainObject(null)).toBe(false); + expect(isPlainObject(undefined)).toBe(false); + expect(isPlainObject(42)).toBe(false); + expect(isPlainObject('string')).toBe(false); + expect(isPlainObject([])).toBe(false); + expect(isPlainObject(new Date())).toBe(false); + expect(isPlainObject(true)).toBe(false); + }); + }); + + describe('deepMerge — scalars (last-wins)', () => { + test('source string overrides target string', () => { + expect(deepMerge({ a: 'old' }, { a: 'new' })).toEqual({ a: 'new' }); + }); + + test('source number overrides target number', () => { + expect(deepMerge({ a: 1 }, { a: 2 })).toEqual({ a: 2 }); + }); + + test('source boolean overrides target boolean', () => { + expect(deepMerge({ a: true }, { a: false })).toEqual({ a: false }); + }); + + test('adds new keys from source', () => { + expect(deepMerge({ a: 1 }, { b: 2 })).toEqual({ a: 1, b: 2 }); + }); + }); + + describe('deepMerge — objects (deep merge)', () => { + test('merges nested objects recursively', () => { + const target = { a: { x: 1, y: 2 } }; + const source = { a: { y: 3, z: 4 } }; + expect(deepMerge(target, source)).toEqual({ a: { x: 1, y: 3, z: 4 } }); + }); + + test('deep merges multiple nesting levels', () => { + const target = { a: { b: { c: 1 } } }; + const source = { a: { b: { d: 2 } } }; + expect(deepMerge(target, source)).toEqual({ a: { b: { c: 1, d: 2 } } }); + }); + + test('does not mutate input objects', () => { + const target = { a: { x: 1 } }; + const source = { a: { y: 2 } }; + const targetCopy = JSON.parse(JSON.stringify(target)); + + deepMerge(target, source); + + expect(target).toEqual(targetCopy); + }); + }); + + describe('deepMerge — arrays (replace)', () => { + test('source array replaces target array', () => { 
+ const target = { items: [1, 2, 3] }; + const source = { items: [4, 5] }; + expect(deepMerge(target, source)).toEqual({ items: [4, 5] }); + }); + + test('source array replaces target non-array', () => { + expect(deepMerge({ a: 'str' }, { a: [1] })).toEqual({ a: [1] }); + }); + }); + + describe('deepMerge — +append modifier', () => { + test('appends to existing array', () => { + const target = { tags: ['a', 'b'] }; + const source = { 'tags+append': ['c', 'd'] }; + expect(deepMerge(target, source)).toEqual({ tags: ['a', 'b', 'c', 'd'] }); + }); + + test('creates array when base key does not exist', () => { + const source = { 'newlist+append': ['x'] }; + expect(deepMerge({}, source)).toEqual({ newlist: ['x'] }); + }); + + test('does not add +append as literal key', () => { + const result = deepMerge({}, { 'items+append': [1] }); + expect(result).not.toHaveProperty('items+append'); + expect(result).toHaveProperty('items'); + }); + + test('ignores +append with non-array value', () => { + const result = deepMerge({ items: [1] }, { 'items+append': 'not-array' }); + expect(result.items).toEqual([1]); + }); + }); + + describe('deepMerge — null deletes key', () => { + test('null value removes key from result', () => { + const target = { a: 1, b: 2 }; + const source = { a: null }; + expect(deepMerge(target, source)).toEqual({ b: 2 }); + }); + + test('null on nested key removes nested key', () => { + const target = { a: { x: 1, y: 2 } }; + const source = { a: { x: null } }; + expect(deepMerge(target, source)).toEqual({ a: { y: 2 } }); + }); + + test('null on non-existent key is harmless', () => { + const target = { a: 1 }; + const source = { z: null }; + expect(deepMerge(target, source)).toEqual({ a: 1 }); + }); + }); + + describe('deepMerge — edge cases', () => { + test('source is non-object returns source', () => { + expect(deepMerge({ a: 1 }, 'string')).toBe('string'); + }); + + test('target is non-object returns source', () => { + expect(deepMerge('string', { a: 1 
})).toEqual({ a: 1 }); + }); + + test('empty source returns copy of target', () => { + expect(deepMerge({ a: 1 }, {})).toEqual({ a: 1 }); + }); + + test('empty target returns copy of source', () => { + expect(deepMerge({}, { a: 1 })).toEqual({ a: 1 }); + }); + }); + + describe('mergeAll', () => { + test('merges multiple layers in order', () => { + const l1 = { a: 1, b: 2 }; + const l2 = { b: 3, c: 4 }; + const l3 = { c: 5, d: 6 }; + expect(mergeAll(l1, l2, l3)).toEqual({ a: 1, b: 3, c: 5, d: 6 }); + }); + + test('skips null/undefined layers', () => { + expect(mergeAll({ a: 1 }, null, undefined, { b: 2 })).toEqual({ a: 1, b: 2 }); + }); + + test('returns empty object when no valid layers', () => { + expect(mergeAll()).toEqual({}); + expect(mergeAll(null, undefined)).toEqual({}); + }); + + test('preserves deep merge semantics across layers', () => { + const l1 = { config: { timeout: 10, retries: 3 } }; + const l2 = { config: { timeout: 30 } }; + const l4 = { config: { debug: true } }; + expect(mergeAll(l1, l2, l4)).toEqual({ + config: { timeout: 30, retries: 3, debug: true }, + }); + }); + }); +}); + +``` + +================================================== +📄 tests/config/config-resolver.test.js +================================================== +```js +/** + * Tests for config-resolver.js + * Story PRO-4 — Config Hierarchy + * Story 12.1 — L5 User layer, toggle-profile, user config write + * + * Covers: unit tests (Task 5.1), integration tests (Task 5.2), performance (Task 5.4), + * L5 User layer (Story 12.1 Task 6) + */ + +const path = require('path'); +const fs = require('fs'); +const os = require('os'); +const yaml = require('js-yaml'); +const { + resolveConfig, + isLegacyMode, + loadLayeredConfig, + loadLegacyConfig, + getConfigAtLevel, + setUserConfigValue, + toggleUserProfile, + ensureUserConfigDir, + CONFIG_FILES, + LEVELS, + VALID_USER_PROFILES, +} = require('../../.aios-core/core/config/config-resolver'); +const { globalConfigCache } = 
require('../../.aios-core/core/config/config-cache'); + +const FIXTURES_DIR = path.join(__dirname, 'fixtures'); + +/** + * Create a temporary project directory with specific config files. + */ +function createTempProject(files = {}) { + const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-config-test-')); + const aiosCoreDir = path.join(tmpDir, '.aios-core'); + fs.mkdirSync(aiosCoreDir, { recursive: true }); + + for (const [relativePath, content] of Object.entries(files)) { + const fullPath = path.join(tmpDir, relativePath); + fs.mkdirSync(path.dirname(fullPath), { recursive: true }); + + if (typeof content === 'string') { + fs.writeFileSync(fullPath, content, 'utf8'); + } else { + // Copy from fixtures + const fixturePath = path.join(FIXTURES_DIR, content.fixture); + fs.copyFileSync(fixturePath, fullPath); + } + } + + return tmpDir; +} + +function cleanupTempDir(tmpDir) { + fs.rmSync(tmpDir, { recursive: true, force: true }); +} + +describe('config-resolver', () => { + beforeEach(() => { + globalConfigCache.clear(); + }); + + // ------------------------------------------------------------------ + // Unit Tests + // ------------------------------------------------------------------ + + describe('constants', () => { + test('CONFIG_FILES has all expected keys', () => { + expect(CONFIG_FILES).toHaveProperty('framework'); + expect(CONFIG_FILES).toHaveProperty('project'); + expect(CONFIG_FILES).toHaveProperty('pro'); + expect(CONFIG_FILES).toHaveProperty('local'); + expect(CONFIG_FILES).toHaveProperty('legacy'); + expect(CONFIG_FILES).toHaveProperty('user'); + }); + + test('CONFIG_FILES.user points to ~/.aios/user-config.yaml', () => { + const expected = path.join(os.homedir(), '.aios', 'user-config.yaml'); + expect(CONFIG_FILES.user).toBe(expected); + }); + + test('LEVELS has all expected keys', () => { + expect(LEVELS.framework).toBe('L1'); + expect(LEVELS.project).toBe('L2'); + expect(LEVELS.pro).toBe('Pro'); + expect(LEVELS.app).toBe('L3'); + 
expect(LEVELS.local).toBe('L4'); + expect(LEVELS.user).toBe('L5'); + expect(LEVELS.legacy).toBe('Legacy'); + }); + + test('VALID_USER_PROFILES contains bob and advanced', () => { + expect(VALID_USER_PROFILES).toContain('bob'); + expect(VALID_USER_PROFILES).toContain('advanced'); + expect(VALID_USER_PROFILES).toHaveLength(2); + }); + }); + + describe('isLegacyMode', () => { + test('returns true when core-config.yaml exists but framework-config.yaml does not', () => { + const tmpDir = createTempProject({ + '.aios-core/core-config.yaml': 'project:\n name: legacy\n', + }); + + try { + expect(isLegacyMode(tmpDir)).toBe(true); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('returns false when framework-config.yaml exists', () => { + const tmpDir = createTempProject({ + '.aios-core/core-config.yaml': 'project:\n name: legacy\n', + '.aios-core/framework-config.yaml': 'metadata:\n version: "1.0"\n', + }); + + try { + expect(isLegacyMode(tmpDir)).toBe(false); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('returns false when neither file exists', () => { + const tmpDir = createTempProject({}); + + try { + expect(isLegacyMode(tmpDir)).toBe(false); + } finally { + cleanupTempDir(tmpDir); + } + }); + }); + + describe('getConfigAtLevel', () => { + test('loads framework config at L1', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' }, + }); + + try { + const config = getConfigAtLevel(tmpDir, 'L1'); + expect(config).toBeTruthy(); + expect(config.metadata.framework_name).toBe('AIOS-FullStack'); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('loads project config at L2', () => { + const tmpDir = createTempProject({ + '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' }, + }); + + try { + const config = getConfigAtLevel(tmpDir, 'L2'); + expect(config.project.name).toBe('test-project'); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('returns null when 
file does not exist', () => { + const tmpDir = createTempProject({}); + + try { + const config = getConfigAtLevel(tmpDir, 'L1'); + expect(config).toBeNull(); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('returns null for app level without appDir', () => { + const tmpDir = createTempProject({}); + + try { + expect(getConfigAtLevel(tmpDir, 'L3')).toBeNull(); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('throws on unknown level', () => { + expect(() => getConfigAtLevel('/tmp', 'unknown')).toThrow('Unknown config level'); + }); + + test('supports string aliases (1, 2, L1, L2, etc.)', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' }, + }); + + try { + expect(getConfigAtLevel(tmpDir, '1')).toBeTruthy(); + expect(getConfigAtLevel(tmpDir, 'framework')).toBeTruthy(); + } finally { + cleanupTempDir(tmpDir); + } + }); + }); + + // ------------------------------------------------------------------ + // Integration Tests + // ------------------------------------------------------------------ + + describe('loadLegacyConfig', () => { + test('loads monolithic core-config.yaml', () => { + const tmpDir = createTempProject({ + '.aios-core/core-config.yaml': { fixture: 'legacy-core-config.yaml' }, + }); + + try { + const result = loadLegacyConfig(tmpDir); + expect(result.config.project.name).toBe('legacy-project'); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('includes deprecation warning', () => { + const tmpDir = createTempProject({ + '.aios-core/core-config.yaml': { fixture: 'legacy-core-config.yaml' }, + }); + const origEnv = process.env.AIOS_SUPPRESS_DEPRECATION; + + try { + delete process.env.AIOS_SUPPRESS_DEPRECATION; + + const result = loadLegacyConfig(tmpDir); + expect(result.warnings.length).toBeGreaterThan(0); + expect(result.warnings[0]).toContain('DEPRECATION'); + } finally { + if (origEnv === undefined) { + delete process.env.AIOS_SUPPRESS_DEPRECATION; + } 
else { + process.env.AIOS_SUPPRESS_DEPRECATION = origEnv; + } + cleanupTempDir(tmpDir); + } + }); + + test('suppresses deprecation when AIOS_SUPPRESS_DEPRECATION=true', () => { + const tmpDir = createTempProject({ + '.aios-core/core-config.yaml': { fixture: 'legacy-core-config.yaml' }, + }); + const origEnv = process.env.AIOS_SUPPRESS_DEPRECATION; + + try { + process.env.AIOS_SUPPRESS_DEPRECATION = 'true'; + + const result = loadLegacyConfig(tmpDir); + expect(result.warnings).toHaveLength(0); + } finally { + if (origEnv === undefined) { + delete process.env.AIOS_SUPPRESS_DEPRECATION; + } else { + process.env.AIOS_SUPPRESS_DEPRECATION = origEnv; + } + cleanupTempDir(tmpDir); + } + }); + + test('throws when legacy file is missing', () => { + const tmpDir = createTempProject({}); + + try { + expect(() => loadLegacyConfig(tmpDir)).toThrow('Legacy config file not found'); + } finally { + cleanupTempDir(tmpDir); + } + }); + }); + + describe('loadLayeredConfig — 4-level resolution', () => { + test('merges L1 and L2 correctly', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' }, + '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' }, + }); + + try { + const result = loadLayeredConfig(tmpDir); + // L1 values present + expect(result.config.metadata.framework_name).toBe('AIOS-FullStack'); + // L2 overrides L1 + expect(result.config.performance_defaults.max_concurrent_operations).toBe(8); + // L2 additions + expect(result.config.project.name).toBe('test-project'); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('merges L1 + L2 + L4 correctly', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' }, + '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' }, + '.aios-core/local-config.yaml': { fixture: 'local-config.yaml' }, + }); + + try { + const result = loadLayeredConfig(tmpDir); + // L4 overrides L2 
scalar + expect(result.config.performance_defaults.max_concurrent_operations).toBe(16); + // L4 additions + expect(result.config.ide.selected).toEqual(['vscode', 'claude-code']); + // L1 values preserved + expect(result.config.metadata.framework_name).toBe('AIOS-FullStack'); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('includes Pro extension when present', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' }, + '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' }, + 'pro/pro-config.yaml': { fixture: 'pro-config.yaml' }, + }); + + try { + const result = loadLayeredConfig(tmpDir); + // Pro overrides L2 squad settings + expect(result.config.squads.max_squads).toBe(10); + expect(result.config.squads.premium_templates).toBe(true); + // Pro addition + expect(result.config.pro_features.advanced_analytics).toBe(true); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('silently skips missing Pro extension', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' }, + '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' }, + }); + + try { + const result = loadLayeredConfig(tmpDir); + // No error, and pro_features not present + expect(result.config.pro_features).toBeUndefined(); + expect(result.config.squads.max_squads).toBe(3); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('debug mode tracks sources', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' }, + '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' }, + }); + + try { + const result = loadLayeredConfig(tmpDir, { debug: true }); + expect(result.sources).toBeTruthy(); + // Framework values tracked as L1 + expect(result.sources['metadata']).toEqual( + expect.objectContaining({ level: 'L1' }), + ); + // Project values tracked as L2 + 
expect(result.sources['project']).toEqual( + expect.objectContaining({ level: 'L2' }), + ); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('lints L1 and L2 for env patterns', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': 'url: "${API_URL}"\n', + '.aios-core/project-config.yaml': 'name: "clean"\n', + }); + + try { + const result = loadLayeredConfig(tmpDir); + const lintWarnings = result.warnings.filter(w => w.startsWith('[LINT]')); + expect(lintWarnings.length).toBeGreaterThan(0); + } finally { + cleanupTempDir(tmpDir); + } + }); + }); + + describe('resolveConfig — main entry point', () => { + test('auto-detects legacy mode', () => { + const tmpDir = createTempProject({ + '.aios-core/core-config.yaml': { fixture: 'legacy-core-config.yaml' }, + }); + + try { + const result = resolveConfig(tmpDir, { skipCache: true }); + expect(result.legacy).toBe(true); + expect(result.config.project.name).toBe('legacy-project'); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('auto-detects layered mode', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' }, + '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' }, + }); + + try { + const result = resolveConfig(tmpDir, { skipCache: true }); + expect(result.legacy).toBe(false); + expect(result.config.metadata.framework_name).toBe('AIOS-FullStack'); + } finally { + cleanupTempDir(tmpDir); + } + }); + + test('interpolates env vars after merge', () => { + const tmpDir = createTempProject({ + '.aios-core/framework-config.yaml': 'metadata:\n name: "test"\n', + '.aios-core/local-config.yaml': 'api_url: "${TEST_API_URL:-http://localhost}"\n', + }); + const origTestApiUrl = process.env.TEST_API_URL; + + try { + delete process.env.TEST_API_URL; + const result = resolveConfig(tmpDir, { skipCache: true }); + expect(result.config.api_url).toBe('http://localhost'); + } finally { + if (origTestApiUrl === 
undefined) {
+        delete process.env.TEST_API_URL;
+      } else {
+        process.env.TEST_API_URL = origTestApiUrl;
+      }
+      cleanupTempDir(tmpDir);
+    }
+  });
+
+  test('caches resolved config', () => {
+    const tmpDir = createTempProject({
+      '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+    });
+
+    try {
+      const result1 = resolveConfig(tmpDir);
+      const result2 = resolveConfig(tmpDir);
+      // Same reference from cache
+      expect(result1).toBe(result2);
+    } finally {
+      cleanupTempDir(tmpDir);
+    }
+  });
+
+  test('skipCache bypasses cache', () => {
+    const tmpDir = createTempProject({
+      '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+    });
+
+    try {
+      const result1 = resolveConfig(tmpDir);
+      const result2 = resolveConfig(tmpDir, { skipCache: true });
+      // Different references
+      expect(result1).not.toBe(result2);
+      // But same content
+      expect(result1.config).toEqual(result2.config);
+    } finally {
+      cleanupTempDir(tmpDir);
+    }
+  });
+
+  // ------------------------------------------------------------------
+  // Performance Tests (Task 5.4)
+  // ------------------------------------------------------------------
+
+  describe('performance benchmarks', () => {
+    const isCI = !!process.env.CI;
+    const COLD_START_LIMIT = isCI ? 300 : 100;
+    const CACHED_READ_LIMIT = isCI ? 50 : 5;
+
+    test(`cold start resolution < ${COLD_START_LIMIT}ms`, () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' },
+        '.aios-core/local-config.yaml': { fixture: 'local-config.yaml' },
+      });
+
+      try {
+        globalConfigCache.clear();
+
+        const start = process.hrtime.bigint();
+        resolveConfig(tmpDir, { skipCache: true });
+        const end = process.hrtime.bigint();
+
+        const durationMs = Number(end - start) / 1_000_000;
+        expect(durationMs).toBeLessThan(COLD_START_LIMIT);
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test(`cached read < ${CACHED_READ_LIMIT}ms`, () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' },
+        '.aios-core/local-config.yaml': { fixture: 'local-config.yaml' },
+      });
+
+      try {
+        // Warm up cache
+        resolveConfig(tmpDir);
+
+        const start = process.hrtime.bigint();
+        resolveConfig(tmpDir);
+        const end = process.hrtime.bigint();
+
+        const durationMs = Number(end - start) / 1_000_000;
+        expect(durationMs).toBeLessThan(CACHED_READ_LIMIT);
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+  });
+
+  // ------------------------------------------------------------------
+  // Story 12.1 — L5 User layer tests
+  // ------------------------------------------------------------------
+
+  describe('L5 User layer', () => {
+    let originalUserConfigPath;
+    let tempUserDir;
+
+    beforeEach(() => {
+      // Save original CONFIG_FILES.user and redirect to temp directory
+      originalUserConfigPath = CONFIG_FILES.user;
+      tempUserDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-user-config-test-'));
+      CONFIG_FILES.user = path.join(tempUserDir, 'user-config.yaml');
+      globalConfigCache.clear();
+    });
+
+    afterEach(() => {
+      // Restore original CONFIG_FILES.user
+      CONFIG_FILES.user = originalUserConfigPath;
+      fs.rmSync(tempUserDir, { recursive: true, force: true });
+      globalConfigCache.clear();
+    });
+
+    test('loadLayeredConfig merges L5 user config after L4', () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' },
+      });
+
+      // Create user config
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "bob"\ncustom_setting: 42\n', 'utf8');
+
+      try {
+        const result = loadLayeredConfig(tmpDir);
+        expect(result.config.user_profile).toBe('bob');
+        expect(result.config.custom_setting).toBe(42);
+        // L2 values still present
+        expect(result.config.project.name).toBe('test-project');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('L5 overrides L4 values', () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/local-config.yaml': 'user_profile: "advanced"\n',
+      });
+
+      // L5 overrides L4
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "bob"\n', 'utf8');
+
+      try {
+        const result = loadLayeredConfig(tmpDir);
+        expect(result.config.user_profile).toBe('bob');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('graceful when user config file does not exist', () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+      });
+
+      try {
+        // CONFIG_FILES.user points to non-existent file — should not throw
+        const result = loadLayeredConfig(tmpDir);
+        expect(result.config).toBeTruthy();
+        expect(result.config.user_profile).toBeUndefined();
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('graceful when user config file is malformed YAML', () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+      });
+
+      // Write invalid YAML
+      fs.writeFileSync(CONFIG_FILES.user, ' invalid:\n yaml: [}\n', 'utf8');
+
+      try {
+        // Should not throw — loadYamlAbsolute catches parse errors
+        const result = loadLayeredConfig(tmpDir);
+        expect(result.config).toBeTruthy();
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('debug mode tracks L5 sources', () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+      });
+
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "bob"\n', 'utf8');
+
+      try {
+        const result = loadLayeredConfig(tmpDir, { debug: true });
+        expect(result.sources).toBeTruthy();
+        expect(result.sources['user_profile']).toEqual(
+          expect.objectContaining({ level: 'L5' }),
+        );
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('resolveConfig includes L5 in final config', () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+      });
+
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "bob"\n', 'utf8');
+
+      try {
+        const result = resolveConfig(tmpDir, { skipCache: true });
+        expect(result.config.user_profile).toBe('bob');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('getConfigAtLevel returns L5 user config', () => {
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "advanced"\ndefault_language: "pt-BR"\n', 'utf8');
+
+      const config = getConfigAtLevel('/tmp', 'L5');
+      expect(config).toBeTruthy();
+      expect(config.user_profile).toBe('advanced');
+      expect(config.default_language).toBe('pt-BR');
+    });
+
+    test('getConfigAtLevel returns null when L5 file missing', () => {
+      // CONFIG_FILES.user points to non-existent file
+      const config = getConfigAtLevel('/tmp', 'user');
+      expect(config).toBeNull();
+    });
+
+    test('getConfigAtLevel supports aliases 5 and L5', () => {
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "bob"\n', 'utf8');
+
+      expect(getConfigAtLevel('/tmp', '5')).toBeTruthy();
+      expect(getConfigAtLevel('/tmp', 'L5')).toBeTruthy();
+      expect(getConfigAtLevel('/tmp', 'user')).toBeTruthy();
+    });
+  });
+
+  // ------------------------------------------------------------------
+  // Story 12.1 — setUserConfigValue tests
+  // ------------------------------------------------------------------
+
+  describe('setUserConfigValue', () => {
+    let originalUserConfigPath;
+    let tempUserDir;
+
+    beforeEach(() => {
+      originalUserConfigPath = CONFIG_FILES.user;
+      tempUserDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-user-write-test-'));
+      CONFIG_FILES.user = path.join(tempUserDir, 'user-config.yaml');
+      globalConfigCache.clear();
+    });
+
+    afterEach(() => {
+      CONFIG_FILES.user = originalUserConfigPath;
+      fs.rmSync(tempUserDir, { recursive: true, force: true });
+      globalConfigCache.clear();
+    });
+
+    test('creates user config file when it does not exist', () => {
+      // Delete the file if it exists (temp dir is clean)
+      expect(fs.existsSync(CONFIG_FILES.user)).toBe(false);
+
+      setUserConfigValue('user_profile', 'bob');
+
+      expect(fs.existsSync(CONFIG_FILES.user)).toBe(true);
+      const content = fs.readFileSync(CONFIG_FILES.user, 'utf8');
+      const config = yaml.load(content);
+      expect(config.user_profile).toBe('bob');
+    });
+
+    test('preserves existing values when setting new key', () => {
+      fs.writeFileSync(CONFIG_FILES.user, 'default_language: "pt-BR"\n', 'utf8');
+
+      setUserConfigValue('user_profile', 'advanced');
+
+      const content = fs.readFileSync(CONFIG_FILES.user, 'utf8');
+      const config = yaml.load(content);
+      expect(config.user_profile).toBe('advanced');
+      expect(config.default_language).toBe('pt-BR');
+    });
+
+    test('overwrites existing key value', () => {
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "bob"\n', 'utf8');
+
+      setUserConfigValue('user_profile', 'advanced');
+
+      const content = fs.readFileSync(CONFIG_FILES.user, 'utf8');
+      const config = yaml.load(content);
+      expect(config.user_profile).toBe('advanced');
+    });
+
+    test('invalidates config cache after write', () => {
+      globalConfigCache.set('test-key', { data: 'cached' });
+      expect(globalConfigCache.size).toBeGreaterThan(0);
+
+      setUserConfigValue('user_profile', 'bob');
+
+      expect(globalConfigCache.size).toBe(0);
+    });
+
+    test('returns updated config', () => {
+      fs.writeFileSync(CONFIG_FILES.user, 'existing: true\n', 'utf8');
+
+      const result = setUserConfigValue('user_profile', 'bob');
+      expect(result.user_profile).toBe('bob');
+      expect(result.existing).toBe(true);
+    });
+  });
+
+  // ------------------------------------------------------------------
+  // Story 12.1 — toggleUserProfile tests
+  // ------------------------------------------------------------------
+
+  describe('toggleUserProfile', () => {
+    let originalUserConfigPath;
+    let tempUserDir;
+
+    beforeEach(() => {
+      originalUserConfigPath = CONFIG_FILES.user;
+      tempUserDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-toggle-test-'));
+      CONFIG_FILES.user = path.join(tempUserDir, 'user-config.yaml');
+      globalConfigCache.clear();
+    });
+
+    afterEach(() => {
+      CONFIG_FILES.user = originalUserConfigPath;
+      fs.rmSync(tempUserDir, { recursive: true, force: true });
+      globalConfigCache.clear();
+    });
+
+    test('toggles from advanced to bob', () => {
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "advanced"\n', 'utf8');
+
+      const result = toggleUserProfile();
+      expect(result.previous).toBe('advanced');
+      expect(result.current).toBe('bob');
+
+      const content = fs.readFileSync(CONFIG_FILES.user, 'utf8');
+      const config = yaml.load(content);
+      expect(config.user_profile).toBe('bob');
+    });
+
+    test('toggles from bob to advanced', () => {
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "bob"\n', 'utf8');
+
+      const result = toggleUserProfile();
+      expect(result.previous).toBe('bob');
+      expect(result.current).toBe('advanced');
+    });
+
+    test('defaults to advanced and toggles to bob when no config exists', () => {
+      const result = toggleUserProfile();
+      expect(result.previous).toBe('advanced');
+      expect(result.current).toBe('bob');
+    });
+
+    test('invalidates cache after toggle', () => {
+      globalConfigCache.set('test-key', { data: 'cached' });
+
+      toggleUserProfile();
+
+      expect(globalConfigCache.size).toBe(0);
+    });
+
+    test('persists toggle result to file', () => {
+      toggleUserProfile(); // advanced → bob
+
+      const content = fs.readFileSync(CONFIG_FILES.user, 'utf8');
+      const config = yaml.load(content);
+      expect(config.user_profile).toBe('bob');
+
+      toggleUserProfile(); // bob → advanced
+
+      const content2 = fs.readFileSync(CONFIG_FILES.user, 'utf8');
+      const config2 = yaml.load(content2);
+      expect(config2.user_profile).toBe('advanced');
+    });
+  });
+
+  // ------------------------------------------------------------------
+  // Story 12.1 — Integration: resolveConfig returns L5 user_profile
+  // ------------------------------------------------------------------
+
+  describe('integration — L5 with full hierarchy', () => {
+    let originalUserConfigPath;
+    let tempUserDir;
+
+    beforeEach(() => {
+      originalUserConfigPath = CONFIG_FILES.user;
+      tempUserDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-l5-integration-test-'));
+      CONFIG_FILES.user = path.join(tempUserDir, 'user-config.yaml');
+      globalConfigCache.clear();
+    });
+
+    afterEach(() => {
+      CONFIG_FILES.user = originalUserConfigPath;
+      fs.rmSync(tempUserDir, { recursive: true, force: true });
+      globalConfigCache.clear();
+    });
+
+    test('resolveConfig returns user_profile from L5 overriding lower levels', () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': 'user_profile: "advanced"\nproject:\n  name: test\n',
+      });
+
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "bob"\n', 'utf8');
+
+      try {
+        const result = resolveConfig(tmpDir, { skipCache: true });
+        // L5 bob overrides L2 advanced
+        expect(result.config.user_profile).toBe('bob');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('toggle reflects in next resolveConfig call', () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+      });
+
+      fs.writeFileSync(CONFIG_FILES.user, 'user_profile: "advanced"\n', 'utf8');
+
+      try {
+        let result = resolveConfig(tmpDir, { skipCache: true });
+        expect(result.config.user_profile).toBe('advanced');
+
+        toggleUserProfile(); // advanced → bob
+
+        result = resolveConfig(tmpDir, { skipCache: true });
+        expect(result.config.user_profile).toBe('bob');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/config/env-interpolator.test.js
+==================================================
+```js
+/**
+ * Unit tests for env-interpolator.js
+ * Story PRO-4 — Config Hierarchy
+ */
+
+const {
+  interpolateEnvVars,
+  interpolateString,
+  lintEnvPatterns,
+  ENV_VAR_PATTERN,
+} = require('../../.aios-core/core/config/env-interpolator');
+
+describe('env-interpolator', () => {
+  const originalEnv = { ...process.env };
+
+  beforeEach(() => {
+    // Reset env by clearing and restoring — preserves Node's special process.env object
+    for (const key of Object.keys(process.env)) {
+      delete process.env[key];
+    }
+    Object.assign(process.env, originalEnv);
+  });
+
+  afterAll(() => {
+    for (const key of Object.keys(process.env)) {
+      delete process.env[key];
+    }
+    Object.assign(process.env, originalEnv);
+  });
+
+  describe('ENV_VAR_PATTERN regex', () => {
+    test('matches ${VAR}', () => {
+      expect('${MY_VAR}'.match(new RegExp(ENV_VAR_PATTERN.source))).toBeTruthy();
+    });
+
+    test('matches ${VAR:-default}', () => {
+      const match = '${MY_VAR:-fallback}'.match(new RegExp(ENV_VAR_PATTERN.source));
+      expect(match).toBeTruthy();
+      expect(match[1]).toBe('MY_VAR');
+      expect(match[2]).toBe('fallback');
+    });
+
+    test('does not match bare text', () => {
+      expect('plain text'.match(new RegExp(ENV_VAR_PATTERN.source))).toBeNull();
+    });
+  });
+
+  describe('interpolateString', () => {
+    test('resolves ${VAR} from process.env', () => {
+      process.env.TEST_VAR = 'hello';
+      expect(interpolateString('${TEST_VAR}')).toBe('hello');
+    });
+
+    test('resolves ${VAR:-default} when env var set', () => {
+      process.env.TEST_VAR = 'hello';
+      expect(interpolateString('${TEST_VAR:-fallback}')).toBe('hello');
+    });
+
+    test('uses default when env var missing', () => {
+      delete process.env.MISSING_VAR;
+      expect(interpolateString('${MISSING_VAR:-fallback}')).toBe('fallback');
+    });
+
+    test('returns empty string when env var missing with no default', () => {
+      delete process.env.MISSING_VAR;
+      const warnings = [];
+      const result = interpolateString('${MISSING_VAR}', { warnings });
+      expect(result).toBe('');
+      expect(warnings).toHaveLength(1);
+      expect(warnings[0]).toContain('MISSING_VAR');
+    });
+
+    test('interpolates multiple vars in same string', () => {
+      process.env.HOST = 'localhost';
+      process.env.PORT = '8080';
+      expect(interpolateString('${HOST}:${PORT}')).toBe('localhost:8080');
+    });
+
+    test('preserves text around variables', () => {
+      process.env.NAME = 'world';
+      expect(interpolateString('hello ${NAME}!')).toBe('hello world!');
+    });
+
+    test('handles empty default value', () => {
+      delete process.env.EMPTY;
+      expect(interpolateString('${EMPTY:-}')).toBe('');
+    });
+  });
+
+  describe('interpolateEnvVars (recursive)', () => {
+    test('interpolates strings in nested objects', () => {
+      process.env.DB_HOST = 'db.example.com';
+      const config = {
+        database: {
+          host: '${DB_HOST}',
+          port: 5432,
+        },
+      };
+
+      const result = interpolateEnvVars(config);
+      expect(result.database.host).toBe('db.example.com');
+      expect(result.database.port).toBe(5432);
+    });
+
+    test('interpolates strings in arrays', () => {
+      process.env.ITEM = 'resolved';
+      const config = { items: ['${ITEM}', 'static'] };
+
+      const result = interpolateEnvVars(config);
+      expect(result.items).toEqual(['resolved', 'static']);
+    });
+
+    test('preserves non-string scalars', () => {
+      const config = {
+        count: 42,
+        enabled: true,
+        empty: null,
+      };
+
+      const result = interpolateEnvVars(config);
+      expect(result.count).toBe(42);
+      expect(result.enabled).toBe(true);
+      expect(result.empty).toBeNull();
+    });
+
+    test('collects warnings for missing vars', () => {
+      delete process.env.MISSING_A;
+      delete process.env.MISSING_B;
+      const config = {
+        a: '${MISSING_A}',
+        nested: { b: '${MISSING_B}' },
+      };
+
+      const warnings = [];
+      interpolateEnvVars(config, { warnings });
+      expect(warnings).toHaveLength(2);
+    });
+
+    test('does not mutate original config', () => {
+      process.env.VAL = 'new';
+      const config = { key: '${VAL}' };
+      const original = JSON.parse(JSON.stringify(config));
+
+      interpolateEnvVars(config);
+      expect(config).toEqual(original);
+    });
+  });
+
+  describe('lintEnvPatterns', () => {
+    test('detects ${...} in config values', () => {
+      const config = {
+        url: '${API_URL}',
+        name: 'static',
+      };
+
+      const findings = lintEnvPatterns(config, 'framework-config.yaml');
+      expect(findings).toHaveLength(1);
+      expect(findings[0]).toContain('framework-config.yaml');
+      expect(findings[0]).toContain('url');
+    });
+
+    test('detects nested ${...} patterns', () => {
+      const config = {
+        db: {
+          host: '${DB_HOST}',
+          port: 5432,
+        },
+      };
+
+      const findings = lintEnvPatterns(config, 'project-config.yaml');
+      expect(findings).toHaveLength(1);
+      expect(findings[0]).toContain('db.host');
+    });
+
+    test('detects ${...} in arrays', () => {
+      const config = {
+        items: ['${ITEM}', 'static'],
+      };
+
+      const findings = lintEnvPatterns(config, 'test.yaml');
+      expect(findings).toHaveLength(1);
+    });
+
+    test('returns empty for clean config', () => {
+      const config = {
+        name: 'static',
+        count: 42,
+        nested: { ok: true },
+      };
+
+      expect(lintEnvPatterns(config, 'test.yaml')).toHaveLength(0);
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/config/config-cli.test.js
+==================================================
+```js
+/**
+ * CLI tests for `aios config` subcommands
+ * Story PRO-4 — Config Hierarchy (Task 5.3)
+ *
+ * Tests the Commander.js config command in-process using Jest mocks
+ * for process.cwd, console.log/error, and process.exit.
+ */
+
+const path = require('path');
+const fs = require('fs');
+const os = require('os');
+const { Command } = require('commander');
+const { createConfigCommand } = require('../../.aios-core/cli/commands/config');
+const { globalConfigCache } = require('../../.aios-core/core/config/config-cache');
+
+const FIXTURES_DIR = path.join(__dirname, 'fixtures');
+
+/**
+ * Create a temp project with config fixtures.
+ */
+function createTempProject(files = {}) {
+  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-cli-test-'));
+  const aiosCoreDir = path.join(tmpDir, '.aios-core');
+  fs.mkdirSync(aiosCoreDir, { recursive: true });
+
+  for (const [relativePath, content] of Object.entries(files)) {
+    const fullPath = path.join(tmpDir, relativePath);
+    fs.mkdirSync(path.dirname(fullPath), { recursive: true });
+
+    if (typeof content === 'string') {
+      fs.writeFileSync(fullPath, content, 'utf8');
+    } else {
+      const fixturePath = path.join(FIXTURES_DIR, content.fixture);
+      fs.copyFileSync(fixturePath, fullPath);
+    }
+  }
+
+  return tmpDir;
+}
+
+function cleanupTempDir(tmpDir) {
+  fs.rmSync(tmpDir, { recursive: true, force: true });
+}
+
+/**
+ * Sentinel error thrown by mocked process.exit to halt execution
+ * without actually exiting the test runner.
+ */
+class ProcessExitError extends Error {
+  constructor(code) {
+    super(`process.exit(${code})`);
+    this.exitCode = code;
+  }
+}
+
+describe('config CLI commands', () => {
+  let logOutput, errorOutput;
+  let logSpy, errorSpy, exitSpy, cwdSpy;
+
+  beforeEach(() => {
+    globalConfigCache.clear();
+    logOutput = [];
+    errorOutput = [];
+
+    logSpy = jest.spyOn(console, 'log').mockImplementation((...args) => {
+      logOutput.push(args.join(' '));
+    });
+    errorSpy = jest.spyOn(console, 'error').mockImplementation((...args) => {
+      errorOutput.push(args.join(' '));
+    });
+    exitSpy = jest.spyOn(process, 'exit').mockImplementation((code) => {
+      throw new ProcessExitError(code);
+    });
+  });
+
+  afterEach(() => {
+    logSpy.mockRestore();
+    errorSpy.mockRestore();
+    exitSpy.mockRestore();
+    if (cwdSpy) {
+      cwdSpy.mockRestore();
+      cwdSpy = null;
+    }
+  });
+
+  /**
+   * Run an `aios config` subcommand in-process via Commander.
+   * Returns captured stdout/stderr as strings and whether process.exit was called.
+   */
+  async function runConfigCmd(subArgs) {
+    const program = new Command();
+    program.exitOverride(); // Prevent Commander itself from calling process.exit
+    program.addCommand(createConfigCommand());
+
+    let exitCode = 0;
+    try {
+      await program.parseAsync(['node', 'aios', ...subArgs]);
+    } catch (err) {
+      if (err instanceof ProcessExitError) {
+        exitCode = err.exitCode;
+      } else if (err.code === 'commander.helpDisplayed' || err.code === 'commander.version') {
+        // Commander exit override throws for --help / --version
+        exitCode = 0;
+      } else {
+        exitCode = 1;
+      }
+    }
+
+    return {
+      exitCode,
+      stdout: logOutput.join('\n'),
+      stderr: errorOutput.join('\n'),
+    };
+  }
+
+  function setCwd(dir) {
+    cwdSpy = jest.spyOn(process, 'cwd').mockReturnValue(dir);
+  }
+
+  // -----------------------------------------------------------------------
+  // aios config show
+  // -----------------------------------------------------------------------
+
+  describe('aios config show', () => {
+    test('shows resolved config as YAML', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' },
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stdout } = await runConfigCmd(['config', 'show']);
+        expect(exitCode).toBe(0);
+        expect(stdout).toContain('metadata');
+        expect(stdout).toContain('AIOS-FullStack');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('shows specific level with --level', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' },
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stdout } = await runConfigCmd(['config', 'show', '--level', 'L1']);
+        expect(exitCode).toBe(0);
+        expect(stdout).toContain('framework_name');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('shows debug annotations with --debug', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' },
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stdout } = await runConfigCmd(['config', 'show', '--debug']);
+        expect(exitCode).toBe(0);
+        expect(stdout).toMatch(/L[12]/);
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+  });
+
+  // -----------------------------------------------------------------------
+  // aios config validate
+  // -----------------------------------------------------------------------
+
+  describe('aios config validate', () => {
+    test('validates existing config files', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' },
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stdout } = await runConfigCmd(['config', 'validate']);
+        expect(exitCode).toBe(0);
+        // "Config validation: PASS" contains "valid" as substring of "validation"
+        expect(stdout).toContain('valid');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('validates specific level with --level', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode } = await runConfigCmd(['config', 'validate', '--level', 'L1']);
+        expect(exitCode).toBe(0);
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+  });
+
+  // -----------------------------------------------------------------------
+  // aios config diff
+  // -----------------------------------------------------------------------
+
+  describe('aios config diff', () => {
+    test('shows diff between two levels', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' },
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stdout } = await runConfigCmd(['config', 'diff', '--levels', 'L1,L2']);
+        expect(exitCode).toBe(0);
+        expect(stdout).toContain('performance_defaults');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+  });
+
+  // -----------------------------------------------------------------------
+  // aios config migrate
+  // -----------------------------------------------------------------------
+
+  describe('aios config migrate', () => {
+    const LEGACY_CONFIG = [
+      'project:',
+      ' name: "test-project"',
+      ' version: "1.0.0"',
+      'ide:',
+      ' selected:',
+      ' - vscode',
+      'mcp:',
+      ' enabled: false',
+      'toolsLocation: .aios-core/tools',
+      'lazyLoading:',
+      ' enabled: true',
+      '',
+    ].join('\n');
+
+    test('--dry-run shows preview without writing files', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/core-config.yaml': LEGACY_CONFIG,
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stdout } = await runConfigCmd(['config', 'migrate', '--dry-run']);
+        expect(exitCode).toBe(0);
+        expect(stdout).toContain('DRY RUN');
+        expect(stdout).toContain('framework-config.yaml');
+        // No split files should have been created
+        expect(fs.existsSync(path.join(tmpDir, '.aios-core', 'framework-config.yaml'))).toBe(false);
+        expect(fs.existsSync(path.join(tmpDir, '.aios-core', 'project-config.yaml'))).toBe(false);
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('full migration creates split files and backup', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/core-config.yaml': LEGACY_CONFIG,
+        '.gitignore': '# existing\nnode_modules\n',
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stdout } = await runConfigCmd(['config', 'migrate']);
+        expect(exitCode).toBe(0);
+        expect(stdout).toContain('Migration complete');
+
+        // Split files created
+        expect(fs.existsSync(path.join(tmpDir, '.aios-core', 'framework-config.yaml'))).toBe(true);
+        expect(fs.existsSync(path.join(tmpDir, '.aios-core', 'project-config.yaml'))).toBe(true);
+        expect(fs.existsSync(path.join(tmpDir, '.aios-core', 'local-config.yaml'))).toBe(true);
+
+        // Backup created
+        expect(fs.existsSync(path.join(tmpDir, '.aios-core', 'core-config.yaml.backup'))).toBe(true);
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('reports nothing to migrate when already layered', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': { fixture: 'framework-config.yaml' },
+        '.aios-core/project-config.yaml': { fixture: 'project-config.yaml' },
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stdout } = await runConfigCmd(['config', 'migrate']);
+        expect(exitCode).toBe(0);
+        expect(stdout).toContain('Nothing to migrate');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+  });
+
+  // -----------------------------------------------------------------------
+  // aios config validate — error paths
+  // -----------------------------------------------------------------------
+
+  describe('aios config validate — error paths', () => {
+    test('reports malformed YAML syntax error', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/framework-config.yaml': 'metadata:\n name: "test\n bad_indent: [unmatched',
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stdout, stderr } = await runConfigCmd(['config', 'validate']);
+        // Should fail with YAML error
+        const combined = stdout + ' ' + stderr;
+        expect(combined).toMatch(/YAML ERROR|error/i);
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+  });
+
+  // -----------------------------------------------------------------------
+  // aios config init-local
+  // -----------------------------------------------------------------------
+
+  describe('aios config init-local', () => {
+    test('creates local-config.yaml from template', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/local-config.yaml.template': 'ide:\n selected:\n - vscode\n',
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode } = await runConfigCmd(['config', 'init-local']);
+        expect(exitCode).toBe(0);
+
+        const localConfig = path.join(tmpDir, '.aios-core', 'local-config.yaml');
+        expect(fs.existsSync(localConfig)).toBe(true);
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+
+    test('warns if local-config.yaml already exists', async () => {
+      const tmpDir = createTempProject({
+        '.aios-core/local-config.yaml.template': 'ide:\n selected:\n - vscode\n',
+        '.aios-core/local-config.yaml': 'existing: true\n',
+      });
+
+      try {
+        setCwd(tmpDir);
+        const { exitCode, stderr } = await runConfigCmd(['config', 'init-local']);
+        // initLocalAction writes to console.error and calls process.exit(1)
+        expect(exitCode).toBe(1);
+        expect(stderr).toContain('already exists');
+      } finally {
+        cleanupTempDir(tmpDir);
+      }
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/config/fixtures/app-config.yaml
+================================================== +```yaml +# Test fixture: L3 App config +logging: + level: "debug" + +app_specific: + port: 3000 + name: "dashboard" + +``` + +================================================== +📄 tests/config/fixtures/legacy-core-config.yaml +================================================== +```yaml +# Test fixture: Legacy monolithic config +project: + name: "legacy-project" + version: "2.0.0" + +ide: + selected: + - vscode + +mcp: + enabled: false + +performance_defaults: + max_concurrent_operations: 4 + cache_ttl_seconds: 300 + +github_integration: + owner: "legacy-org" + repo: "legacy-repo" + +squads: + enabled: true + max_squads: 3 + +logging: + level: "warn" + +``` + +================================================== +📄 tests/config/fixtures/project-config.yaml +================================================== +```yaml +# Test fixture: L2 Project config +project: + name: "test-project" + version: "1.0.0" + +github_integration: + owner: "test-org" + repo: "test-repo" + +squads: + enabled: true + max_squads: 3 + +performance_defaults: + max_concurrent_operations: 8 + +logging: + level: "info" + format: "json" + +``` + +================================================== +📄 tests/config/fixtures/local-config.yaml +================================================== +```yaml +# Test fixture: L4 Local config +ide: + selected: + - vscode + - claude-code + +mcp: + enabled: true + config_location: "${MCP_CONFIG_PATH:-.claude/mcp.json}" + +performance_defaults: + max_concurrent_operations: 16 + +``` + +================================================== +📄 tests/config/fixtures/pro-config.yaml +================================================== +```yaml +# Test fixture: Pro extension config +squads: + premium_templates: true + max_squads: 10 + +pro_features: + advanced_analytics: true + custom_workflows: true + +``` + +================================================== +📄 tests/config/fixtures/framework-config.yaml 
+================================================== +```yaml +# Test fixture: L1 Framework config +metadata: + framework_name: "AIOS-FullStack" + framework_version: "3.12.0" + +resource_locations: + agents: ".aios-core/development/agents" + tasks: ".aios-core/development/tasks" + +performance_defaults: + max_concurrent_operations: 4 + cache_ttl_seconds: 300 + +utility_scripts_registry: + scripts_base: ".aios-core/development/scripts" + +ide_sync_system: + enabled: true + targets: + - ".claude/CLAUDE.md" + +``` + +================================================== +📄 tests/security/core-security.test.js +================================================== +```js +/** + * Core Security Tests (SEC-01 to SEC-05) + * Story 3.0: Core Module Security Hardening + * + * @module tests/security/core-security.test + */ + +const path = require('path'); +const fs = require('fs-extra'); + +// Import the modules under test +const ElicitationEngine = require('../../.aios-core/core/elicitation/elicitation-engine'); +const ElicitationSessionManager = require('../../.aios-core/core/elicitation/session-manager'); + +describe('Core Security Tests (Story 3.0)', () => { + let tempDir; + + beforeAll(async () => { + tempDir = path.join(__dirname, '.test-temp-security'); + await fs.ensureDir(tempDir); + }); + + afterAll(async () => { + await fs.remove(tempDir); + }); + + // SEC-01: ReDoS Prevention + describe('SEC-01: ReDoS Prevention', () => { + let engine; + + beforeEach(() => { + engine = new ElicitationEngine(); + }); + + it('should reject malicious regex pattern with nested quantifiers (.+)+', async () => { + // Arrange - Pattern that causes catastrophic backtracking + const maliciousValidator = { + type: 'regex', + pattern: '(.+)+$', + message: 'Invalid input', + }; + + // Act - runValidator is private, so we test via the validation flow + // The isSafePattern function should reject this + const result = await engine.runValidator(maliciousValidator, 'test'); + + // Assert - Should 
return error message, not match + expect(result).toBe('Invalid input'); + }); + + it('should reject nested quantifiers pattern (.*)*', async () => { + const maliciousValidator = { + type: 'regex', + pattern: '(.*)*$', + message: 'Dangerous pattern', + }; + + const result = await engine.runValidator(maliciousValidator, 'test'); + expect(result).toBe('Dangerous pattern'); + }); + + it('should accept safe regex patterns', async () => { + const safeValidator = { + type: 'regex', + pattern: '^[a-zA-Z0-9]+$', + message: 'Invalid format', + }; + + const result = await engine.runValidator(safeValidator, 'test123'); + expect(result).toBe(true); + }); + + it('should reject invalid regex syntax gracefully', async () => { + const invalidValidator = { + type: 'regex', + pattern: '[invalid regex(', + message: 'Syntax error', + }; + + const result = await engine.runValidator(invalidValidator, 'test'); + expect(result).toBe('Syntax error'); + }); + }); + + // SEC-02: Path Traversal Block + describe('SEC-02: Path Traversal Block', () => { + let sessionManager; + + beforeEach(() => { + sessionManager = new ElicitationSessionManager(tempDir); + }); + + it('should reject sessionId with path traversal attempt (../)', () => { + expect(() => { + sessionManager.getSessionPath('../../../etc/passwd'); + }).toThrow('Invalid sessionId format'); + }); + + it('should reject sessionId with backslash traversal (..\\)', () => { + expect(() => { + sessionManager.getSessionPath('..\\..\\Windows\\system.ini'); + }).toThrow('Invalid sessionId format'); + }); + + it('should reject sessionId that is too short', () => { + expect(() => { + sessionManager.getSessionPath('abc123'); + }).toThrow('Invalid sessionId format'); + }); + + it('should reject sessionId that is too long', () => { + expect(() => { + sessionManager.getSessionPath('1234567890abcdef1234'); + }).toThrow('Invalid sessionId format'); + }); + + it('should reject sessionId with non-hex characters', () => { + expect(() => { + 
sessionManager.getSessionPath('ghijklmnopqrstuv'); + }).toThrow('Invalid sessionId format'); + }); + }); + + // SEC-03: Valid Session Loads (regression test) + describe('SEC-03: Valid Session Loads', () => { + let sessionManager; + let validSessionId; + + beforeEach(async () => { + sessionManager = new ElicitationSessionManager(tempDir); + await sessionManager.init(); + validSessionId = await sessionManager.createSession('test', { testMode: true }); + }); + + afterEach(async () => { + try { + await sessionManager.deleteSession(validSessionId); + } catch (e) { + // Ignore cleanup errors + } + }); + + it('should accept valid 16-character hex sessionId', () => { + // Valid sessionId from crypto.randomBytes(8).toString('hex') + expect(() => { + sessionManager.getSessionPath(validSessionId); + }).not.toThrow(); + }); + + it('should load session with valid sessionId', async () => { + const session = await sessionManager.loadSession(validSessionId); + expect(session).toBeTruthy(); + expect(session.id).toBe(validSessionId); + expect(session.type).toBe('test'); + }); + + it('should save and retrieve session data correctly', async () => { + await sessionManager.updateAnswers({ key: 'value' }, 1); + const session = await sessionManager.loadSession(validSessionId); + + expect(session.answers).toEqual({ key: 'value' }); + expect(session.currentStep).toBe(1); + }); + }); + + // SEC-04: Error Handling + describe('SEC-04: Error Handling', () => { + let engine; + + beforeEach(() => { + engine = new ElicitationEngine(); + }); + + it('should return null when loading non-existent session path', async () => { + const result = await engine.loadSession('/nonexistent/path/session.json'); + expect(result).toBeNull(); + }); + + it('should return null for invalid JSON file', async () => { + // Create invalid JSON file + const invalidJsonPath = path.join(tempDir, 'invalid.json'); + await fs.writeFile(invalidJsonPath, 'not valid json {{{'); + + const result = await 
engine.loadSession(invalidJsonPath);
      expect(result).toBeNull();
    });

    it('should not throw on loadSession failure', async () => {
      // loadSession should resolve to null, not reject
      await expect(engine.loadSession('/path/that/does/not/exist.json')).resolves.toBeNull();
    });
  });

  // SEC-05: CodeRabbit Clean (placeholder - actual scan is external)
  describe('SEC-05: Variable Initialization', () => {
    let engine;

    beforeEach(() => {
      engine = new ElicitationEngine();
    });

    it('should have currentSession initialized to null', () => {
      expect(engine.currentSession).toBeNull();
    });

    it('should set currentSession when startSession is called', async () => {
      await engine.startSession('test-component', { saveSession: false });
      expect(engine.currentSession).not.toBeNull();
      expect(engine.currentSession.componentType).toBe('test-component');
    });

    it('should not crash when completeSession is called before startSession', async () => {
      // currentSession is null, should handle gracefully.
      // Note: expect(asyncFn).not.toThrow() only detects synchronous throws
      // and passes trivially for async functions; awaiting the call directly
      // makes any rejection fail the test.
      await engine.completeSession('completed');
    });
  });
});

```

==================================================
📄 tests/integration/performance.test.js
==================================================
```js
// Integration/Performance test - uses describeIntegration
/**
 * Performance Tests for Contextual Greeting System
 *
 * Validates:
 * - P50 latency <100ms
 * - P95 latency <130ms
 * - P99 latency <150ms (hard limit)
 * - No regression vs baseline
 */

const GreetingBuilder = require('../../.aios-core/development/scripts/greeting-builder');
const ContextDetector = require('../../.aios-core/core/session/context-detector');
const GitConfigDetector = require('../../.aios-core/infrastructure/scripts/git-config-detector');

// Mock dependencies for consistent testing
jest.mock('../../.aios-core/core/session/context-detector');
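// Note: jest hoists jest.mock() calls above the require statements, so these
// mocks are installed before greeting-builder resolves its dependencies; the
// mocked prototype methods are then stubbed per test in beforeEach, e.g.:
//
//   ContextDetector.prototype.detectSessionType = jest.fn().mockReturnValue('new');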
+jest.mock('../../.aios-core/infrastructure/scripts/git-config-detector'); +jest.mock('../../.aios-core/infrastructure/scripts/project-status-loader'); + +const { loadProjectStatus } = require('../../.aios-core/infrastructure/scripts/project-status-loader'); + +describeIntegration('Greeting Performance Tests', () => { + let builder; + let mockAgent; + const ITERATIONS = 100; + + beforeEach(() => { + builder = new GreetingBuilder(); + + // Setup mock agent + mockAgent = { + name: 'TestAgent', + icon: '🤖', + persona_profile: { + greeting_levels: { + minimal: '🤖 TestAgent ready', + named: '🤖 TestAgent (Tester) ready', + }, + }, + commands: [ + { name: 'help', visibility: ['full', 'quick', 'key'] }, + { name: 'test', visibility: ['full'] }, + ], + }; + + // Setup fast mocks + ContextDetector.prototype.detectSessionType = jest.fn().mockReturnValue('new'); + GitConfigDetector.prototype.get = jest.fn().mockReturnValue({ + configured: true, + type: 'github', + branch: 'main', + }); + loadProjectStatus.mockResolvedValue({ + branch: 'main', + modifiedFiles: [], + isGitRepo: true, + }); + }); + + describeIntegration('Baseline Performance (Simple Greeting)', () => { + test('baseline simple greeting should be fast', () => { + const times = []; + + for (let i = 0; i < ITERATIONS; i++) { + const start = performance.now(); + builder.buildSimpleGreeting(mockAgent); + const end = performance.now(); + times.push(end - start); + } + + const stats = calculateStats(times); + + console.log('Baseline Performance (Simple Greeting):'); + console.log(` P50: ${stats.p50.toFixed(2)}ms`); + console.log(` P95: ${stats.p95.toFixed(2)}ms`); + console.log(` P99: ${stats.p99.toFixed(2)}ms`); + + // Simple greeting should be very fast + expect(stats.p99).toBeLessThan(50); + }); + }); + + describeIntegration('Contextual Greeting Performance', () => { + test('P50 latency should be <100ms', async () => { + const times = []; + + for (let i = 0; i < ITERATIONS; i++) { + const start = performance.now(); + 
await builder.buildGreeting(mockAgent, {}); + const end = performance.now(); + times.push(end - start); + } + + const stats = calculateStats(times); + + console.log('Contextual Greeting Performance:'); + console.log(` P50: ${stats.p50.toFixed(2)}ms`); + console.log(` P95: ${stats.p95.toFixed(2)}ms`); + console.log(` P99: ${stats.p99.toFixed(2)}ms`); + + expect(stats.p50).toBeLessThan(100); + }); + + test('P95 latency should be <130ms', async () => { + const times = []; + + for (let i = 0; i < ITERATIONS; i++) { + const start = performance.now(); + await builder.buildGreeting(mockAgent, {}); + const end = performance.now(); + times.push(end - start); + } + + const stats = calculateStats(times); + expect(stats.p95).toBeLessThan(130); + }); + + test('P99 latency should be <150ms (hard limit)', async () => { + const times = []; + + for (let i = 0; i < ITERATIONS; i++) { + const start = performance.now(); + await builder.buildGreeting(mockAgent, {}); + const end = performance.now(); + times.push(end - start); + } + + const stats = calculateStats(times); + expect(stats.p99).toBeLessThan(150); + }); + + test('fallback should not regress performance', async () => { + const times = []; + + // Mock slow operation to trigger fallback + loadProjectStatus.mockImplementation(() => + new Promise(resolve => setTimeout(resolve, 200)), + ); + + for (let i = 0; i < ITERATIONS; i++) { + const start = performance.now(); + await builder.buildGreeting(mockAgent, {}); + const end = performance.now(); + times.push(end - start); + } + + const stats = calculateStats(times); + + // Fallback should trigger at 150ms timeout + expect(stats.p99).toBeLessThanOrEqual(160); // Small margin for timeout handling + }); + }); + + describeIntegration('Cache Hit Performance', () => { + test('cached git config should be fast', async () => { + const detector = new GitConfigDetector(); + const times = []; + + // First call to populate cache + detector.get(); + + // Measure cache hits + for (let i = 0; i < 
ITERATIONS; i++) { + const start = performance.now(); + detector.get(); + const end = performance.now(); + times.push(end - start); + } + + const stats = calculateStats(times); + + console.log('Git Config Cache Hit Performance:'); + console.log(` P50: ${stats.p50.toFixed(2)}ms`); + + expect(stats.p50).toBeLessThan(5); // Should be <5ms + }); + }); +}); + +/** + * Calculate percentile statistics + * @param {number[]} times - Array of measurements + * @returns {Object} Statistics + */ +function calculateStats(times) { + const sorted = times.sort((a, b) => a - b); + + return { + p50: percentile(sorted, 50), + p95: percentile(sorted, 95), + p99: percentile(sorted, 99), + mean: mean(sorted), + min: sorted[0], + max: sorted[sorted.length - 1], + }; +} + +function percentile(sorted, p) { + const index = Math.ceil((p / 100) * sorted.length) - 1; + return sorted[index]; +} + +function mean(values) { + return values.reduce((a, b) => a + b, 0) / values.length; +} + +``` + +================================================== +📄 tests/integration/wizard-validation-flow.test.js +================================================== +```js +/** + * Integration Tests: Wizard Validation Flow + * Story 1.8 - Complete wizard flow including validation + */ + +const { validateInstallation } = require('../../packages/installer/src/wizard/validation'); + +describe('Wizard Validation Flow', () => { + it('should validate complete installation successfully', async () => { + // Given - mock installation context + const installationContext = { + files: { + ideConfigs: [], + env: '.env', + coreConfig: '.aios-core/core-config.yaml', + mcpConfig: '.mcp.json', + }, + configs: { + env: { envCreated: true, coreConfigCreated: true }, + mcps: {}, + coreConfig: '.aios-core/core-config.yaml', + }, + dependencies: { + success: true, + packageManager: 'npm', + offlineMode: false, + }, + }; + + // When + const validation = await validateInstallation(installationContext); + + // Then + 
expect(validation).toHaveProperty('overallStatus'); + expect(validation).toHaveProperty('components'); + expect(validation).toHaveProperty('errors'); + expect(validation).toHaveProperty('warnings'); + }); + + it('should handle validation with MCP health checks', async () => { + // Given + const installationContext = { + files: { env: '.env' }, + configs: {}, + mcps: { + installedMCPs: { + browser: { status: 'success', message: 'Installed' }, + context7: { status: 'success', message: 'Installed' }, + }, + configPath: '.mcp.json', + }, + dependencies: { success: true, packageManager: 'npm' }, + }; + + // When + const validation = await validateInstallation(installationContext); + + // Then + expect(validation.components).toHaveProperty('mcps'); + }); + + it('should call progress callback during validation', async () => { + // Given + const installationContext = { + files: { env: '.env' }, + configs: {}, + dependencies: { success: true }, + }; + + const progressCalls = []; + const onProgress = (status) => progressCalls.push(status); + + // When + await validateInstallation(installationContext, onProgress); + + // Then + expect(progressCalls.length).toBeGreaterThan(0); + expect(progressCalls[progressCalls.length - 1].step).toBe('complete'); + }); +}); + +``` + +================================================== +📄 tests/integration/quality-gate-pipeline.test.js +================================================== +```js +/** + * Quality Gate Pipeline Integration Tests + * + * Tests the full orchestration of the 3-layer quality gate pipeline. 
+ * + * @story 2.10 - Quality Gate Manager + */ + +const { QualityGateManager } = require('../../.aios-core/core/quality-gates/quality-gate-manager'); +const { Layer1PreCommit } = require('../../.aios-core/core/quality-gates/layer1-precommit'); +const { Layer2PRAutomation } = require('../../.aios-core/core/quality-gates/layer2-pr-automation'); +const { Layer3HumanReview } = require('../../.aios-core/core/quality-gates/layer3-human-review'); +const { ChecklistGenerator } = require('../../.aios-core/core/quality-gates/checklist-generator'); + +describe('Quality Gate Pipeline Integration', () => { + describe('Full Pipeline Orchestration', () => { + let manager; + + beforeEach(() => { + manager = new QualityGateManager({ + layer1: { + enabled: true, + failFast: true, + checks: { + lint: { enabled: true }, + test: { enabled: true }, + typecheck: { enabled: true }, + }, + }, + layer2: { + enabled: true, + coderabbit: { enabled: true }, + quinn: { enabled: true }, + }, + layer3: { + enabled: true, + requireSignoff: false, + }, + }); + }); + + it('should have all three layers', () => { + expect(manager.layers.layer1).toBeInstanceOf(Layer1PreCommit); + expect(manager.layers.layer2).toBeInstanceOf(Layer2PRAutomation); + expect(manager.layers.layer3).toBeInstanceOf(Layer3HumanReview); + }); + + it('should run layers in sequence during orchestration', async () => { + // Mock all layer commands + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: '0 errors', + stderr: '', + duration: 100, + }); + + manager.layers.layer2.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: '', + stderr: '', + duration: 100, + }); + + const result = await manager.orchestrate({ verbose: false }); + + expect(result).toHaveProperty('status'); + expect(result).toHaveProperty('duration'); + expect(result).toHaveProperty('layers'); + expect(result.layers.length).toBeGreaterThanOrEqual(2); + }); + + it('should stop on Layer 1 failure (fail-fast)', 
async () => { + // Make Layer 1 fail + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 1, + stdout: '5 errors', + stderr: '', + duration: 100, + }); + + const result = await manager.orchestrate({ verbose: false }); + + expect(result.pass).toBe(false); + expect(result.stoppedAt).toBe('layer1'); + expect(result.reason).toBe('fail-fast'); + expect(result.exitCode).toBe(1); + }); + + it('should escalate on Layer 2 issues', async () => { + // Make Layer 1 pass + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: '0 errors', + stderr: '', + duration: 100, + }); + + // Make Layer 2 fail with CRITICAL issue + manager.layers.layer2.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: 'CRITICAL: Major security issue', + stderr: '', + duration: 100, + }); + + const result = await manager.orchestrate({ verbose: false }); + + expect(result.pass).toBe(false); + expect(result.stoppedAt).toBe('layer2'); + expect(result.reason).toBe('escalation'); + }); + }); + + describe('Checklist Generator Integration', () => { + let generator; + + beforeEach(() => { + generator = new ChecklistGenerator({ + minItems: 5, + }); + }); + + it('should generate checklist with context', async () => { + const checklist = await generator.generate({ + storyId: 'story-2.10', + changedFiles: [ + 'tests/unit/layer1.test.js', + '.aios-core/core/quality-gates/manager.js', + 'docs/architecture/quality-gates.md', + ], + layers: [], + }); + + expect(checklist.items.length).toBeGreaterThanOrEqual(5); + expect(checklist.storyId).toBe('story-2.10'); + }); + + it('should add items for test file changes', async () => { + const checklist = await generator.generate({ + changedFiles: ['tests/unit/something.test.js'], + }); + + const testItem = checklist.items.find(i => i.id === 'test-coverage'); + expect(testItem).toBeDefined(); + }); + + it('should add items for config file changes', async () => { + const checklist = await 
generator.generate({ + changedFiles: ['config/app.yaml'], + }); + + const configItem = checklist.items.find(i => i.id === 'config-changes'); + expect(configItem).toBeDefined(); + }); + + it('should format checklist for display', async () => { + const checklist = await generator.generate({ storyId: 'test' }); + const formatted = generator.format(checklist); + + expect(formatted).toContain('Strategic Review Checklist'); + expect(formatted).toContain('Story: test'); + }); + }); + + describe('Layer Result Aggregation', () => { + it('should aggregate results from all layers', async () => { + const manager = new QualityGateManager({ + layer1: { enabled: true }, + layer2: { enabled: true }, + layer3: { enabled: true, requireSignoff: false }, + }); + + // Mock all commands to pass + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: '0 errors, 5 warnings', + stderr: '', + duration: 1000, + }); + + manager.layers.layer2.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: 'HIGH: Minor suggestion', + stderr: '', + duration: 2000, + }); + + const result = await manager.orchestrate(); + + // Check that all layers were executed + expect(result.layers.length).toBe(3); + + // Check Layer 1 results + const layer1 = result.layers[0]; + expect(layer1.layer).toBe('Layer 1: Pre-commit'); + expect(layer1.pass).toBe(true); + + // Check Layer 2 results + const layer2 = result.layers[1]; + expect(layer2.layer).toBe('Layer 2: PR Automation'); + expect(layer2.pass).toBe(true); + + // Check Layer 3 results + const layer3 = result.layers[2]; + expect(layer3.layer).toBe('Layer 3: Human Review'); + }); + }); + + describe('Exit Code Consistency', () => { + it('should return exit code 0 on success', async () => { + const manager = new QualityGateManager({ + layer1: { enabled: true }, + layer2: { enabled: true }, + layer3: { enabled: true, requireSignoff: false }, + }); + + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + 
exitCode: 0, + stdout: '', + stderr: '', + duration: 100, + }); + + manager.layers.layer2.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: '', + stderr: '', + duration: 100, + }); + + const result = await manager.orchestrate(); + expect(result.exitCode).toBe(0); + }); + + it('should return exit code 1 on failure', async () => { + const manager = new QualityGateManager({ layer1: { enabled: true } }); + + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 1, + stdout: 'error', + stderr: '', + duration: 100, + }); + + const result = await manager.orchestrate(); + expect(result.exitCode).toBe(1); + }); + }); + + describe('Disabled Layers', () => { + it('should skip disabled layers', async () => { + const manager = new QualityGateManager({ + layer1: { enabled: false }, + layer2: { enabled: false }, + layer3: { enabled: false }, + }); + + const result = await manager.orchestrate(); + + // All layers should be skipped + result.layers.forEach(layer => { + expect(layer.results[0].skipped).toBe(true); + }); + }); + + it('should allow partial layer execution', async () => { + const manager = new QualityGateManager({ + layer1: { enabled: true }, + layer2: { enabled: false }, + layer3: { enabled: false }, + }); + + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: '', + stderr: '', + duration: 100, + }); + + const result = await manager.orchestrate(); + + expect(result.layers[0].pass).toBe(true); + expect(result.layers[1].results[0].skipped).toBe(true); + }); + }); +}); + +describe('Smoke Tests', () => { + // QGM-01: Layer 1 passes on clean code + it('QGM-01: Layer 1 should pass when all checks succeed', async () => { + const layer = new Layer1PreCommit({ enabled: true }); + layer.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: '0 errors', + stderr: '', + duration: 100, + }); + + const result = await layer.execute(); + expect(result.pass).toBe(true); + }); + + // QGM-02: 
Layer 1 fails on lint errors + it('QGM-02: Layer 1 should fail when lint has errors', async () => { + const layer = new Layer1PreCommit({ + enabled: true, + checks: { lint: { enabled: true } }, + }); + layer.runCommand = jest.fn().mockResolvedValue({ + exitCode: 1, + stdout: '5 errors', + stderr: '', + duration: 100, + }); + + const result = await layer.execute(); + expect(result.pass).toBe(false); + }); + + // QGM-03: Layer 2 passes with no CRITICAL issues + it('QGM-03: Layer 2 should pass with no CRITICAL issues', async () => { + const layer = new Layer2PRAutomation({ + enabled: true, + coderabbit: { enabled: true, blockOn: ['CRITICAL'] }, + }); + layer.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, + stdout: 'HIGH: Minor issue\nMEDIUM: Suggestion', + stderr: '', + duration: 100, + }); + + const result = await layer.execute(); + expect(result.pass).toBe(true); + }); + + // QGM-04: Full pipeline runs all layers + it('QGM-04: Full pipeline should run all layers', async () => { + const manager = new QualityGateManager({ + layer1: { enabled: true }, + layer2: { enabled: true }, + layer3: { enabled: true, requireSignoff: false }, + }); + + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, stdout: '', stderr: '', duration: 100, + }); + manager.layers.layer2.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, stdout: '', stderr: '', duration: 100, + }); + + const result = await manager.orchestrate(); + expect(result.layers.length).toBe(3); + }); + + // QGM-05: Fail-fast stops pipeline on Layer 1 failure + it('QGM-05: Pipeline should stop on Layer 1 failure', async () => { + const manager = new QualityGateManager({ + layer1: { enabled: true, failFast: true }, + layer2: { enabled: true }, + }); + + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 1, stdout: 'error', stderr: '', duration: 100, + }); + + const result = await manager.orchestrate(); + expect(result.stoppedAt).toBe('layer1'); + 
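      // Fail-fast aborted the pipeline before Layer 2, so only Layer 1's
      // result is recorded (layer2.runCommand was never stubbed here).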
expect(result.layers.length).toBe(1); + }); + + // QGM-06: Layer-specific execution + it('QGM-06: Should run only specified layer', async () => { + const manager = new QualityGateManager({ + layer1: { enabled: true }, + layer2: { enabled: true }, + layer3: { enabled: true }, + }); + + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, stdout: '', stderr: '', duration: 100, + }); + + const result = await manager.runLayer(1); + expect(result.layer).toBe('Layer 1: Pre-commit'); + }); + + // QGM-10: Correct exit codes + it('QGM-10: Should return correct exit codes', async () => { + const manager = new QualityGateManager({ + layer1: { enabled: true }, + layer2: { enabled: true }, + layer3: { enabled: true, requireSignoff: false }, + }); + + // Pass case + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, stdout: '', stderr: '', duration: 100, + }); + manager.layers.layer2.runCommand = jest.fn().mockResolvedValue({ + exitCode: 0, stdout: '', stderr: '', duration: 100, + }); + + let result = await manager.orchestrate(); + expect(result.exitCode).toBe(0); + + // Fail case + manager.layers.layer1.runCommand = jest.fn().mockResolvedValue({ + exitCode: 1, stdout: 'error', stderr: '', duration: 100, + }); + + result = await manager.orchestrate(); + expect(result.exitCode).toBe(1); + }); +}); + +``` + +================================================== +📄 tests/integration/contextual-greeting.test.js +================================================== +```js +/** + * Integration Tests for Contextual Greeting System + * + * End-to-end testing of: + * - All 3 session types + * - Git configured vs unconfigured + * - Command visibility filtering + * - Fallback scenarios + * - Backwards compatibility + */ + +const GreetingBuilder = require('../../.aios-core/development/scripts/greeting-builder'); + +describe('Contextual Greeting Integration Tests', () => { + let builder; + + beforeEach(() => { + builder = new 
GreetingBuilder(); + }); + + describe('End-to-End Greeting Generation', () => { + test('should generate complete new session greeting', async () => { + // TODO: Full E2E test with real components + expect(true).toBe(true); + }); + + test('should generate complete existing session greeting', async () => { + // TODO: Full E2E test + expect(true).toBe(true); + }); + + test('should generate complete workflow session greeting', async () => { + // TODO: Full E2E test + expect(true).toBe(true); + }); + }); + + describe('Backwards Compatibility', () => { + test('should work with agents without visibility metadata', async () => { + // TODO: Test old agent format + expect(true).toBe(true); + }); + + test('should fallback gracefully on component failures', async () => { + // TODO: Test fallback scenarios + expect(true).toBe(true); + }); + }); +}); + +``` + +================================================== +📄 tests/integration/npx.test.js +================================================== +```js +// Integration/Performance test - uses describeIntegration +/** + * STORY-1.1: NPX Integration Tests + * Tests for npx @synkra/aios-core@latest execution + */ + +const { spawn } = require('child_process'); +const path = require('path'); +const fs = require('fs'); + +describeIntegration('npx Execution', () => { + const timeout = 30000; // 30 seconds as per story requirement + + describeIntegration('Package Configuration', () => { + it('should have correct bin entries in package.json', () => { + const packageJsonPath = path.join(__dirname, '../../package.json'); + const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8')); + + expect(packageJson.bin).toBeDefined(); + expect(packageJson.bin['aios']).toBe('./bin/aios.js'); + expect(packageJson.bin['@synkra/aios-core']).toBe('./bin/aios.js'); + }); + + it('should have preferGlobal set to false', () => { + const packageJsonPath = path.join(__dirname, '../../package.json'); + const packageJson = 
JSON.parse(fs.readFileSync(packageJsonPath, 'utf8')); + + expect(packageJson.preferGlobal).toBe(false); + }); + + it('should include required files in package', () => { + const packageJsonPath = path.join(__dirname, '../../package.json'); + const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8')); + + expect(packageJson.files).toContain('bin/'); + expect(packageJson.files).toContain('index.js'); + // Note: .aios-core/ should be included if it exists + const hasAiosCore = packageJson.files.some(f => + f === '.aios-core/' || f === 'aios-core/', + ); + expect(hasAiosCore).toBe(true); + }); + + it('should have Node.js engine requirement >= 18', () => { + const packageJsonPath = path.join(__dirname, '../../package.json'); + const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8')); + + expect(packageJson.engines).toBeDefined(); + expect(packageJson.engines.node).toMatch(/>=?\s*18/); + }); + }); + + describeIntegration('CLI Execution', () => { + it('should execute aios command with --version', (done) => { + const cliPath = path.join(__dirname, '../../bin/aios.js'); + const child = spawn('node', [cliPath, '--version']); + + let output = ''; + child.stdout.on('data', (data) => { + output += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(0); + expect(output).toMatch(/\d+\.\d+\.\d+/); + done(); + }); + }, timeout); + + it('should execute aios command with --help', (done) => { + const cliPath = path.join(__dirname, '../../bin/aios.js'); + const child = spawn('node', [cliPath, '--help']); + + let output = ''; + child.stdout.on('data', (data) => { + output += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(0); + expect(output).toContain('USAGE'); + expect(output).toContain('@synkra/aios-core'); + done(); + }); + }, timeout); + + it('should execute aios info command', (done) => { + const cliPath = path.join(__dirname, '../../bin/aios.js'); + const child = spawn('node', [cliPath, 'info']); + 
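      // spawn emits stdout in arbitrary chunks, so accumulate the full
      // output before asserting on it.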
+ let output = ''; + child.stdout.on('data', (data) => { + output += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(0); + expect(output).toContain('System Information'); + done(); + }); + }, timeout); + + it('should fail with unknown command', (done) => { + const cliPath = path.join(__dirname, '../../bin/aios.js'); + const child = spawn('node', [cliPath, 'invalid-command']); + + let stderr = ''; + child.stderr.on('data', (data) => { + stderr += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(1); + expect(stderr).toContain('Unknown command'); + done(); + }); + }, timeout); + }); + + describeIntegration('Index.js init Export', () => { + it('should have init function in index.js source', () => { + const indexPath = path.join(__dirname, '../../index.js'); + const indexContent = fs.readFileSync(indexPath, 'utf8'); + + // Verify init function exists in source + expect(indexContent).toContain('async function init()'); + expect(indexContent).toContain('init'); // Exported in module.exports + }); + + it('should maintain AIOS class export in source', () => { + const indexPath = path.join(__dirname, '../../index.js'); + const indexContent = fs.readFileSync(indexPath, 'utf8'); + + // Verify AIOS class exists in exports + expect(indexContent).toContain('AIOS'); + expect(indexContent).toContain('class AIOS'); + }); + + it('should export core modules in source', () => { + const indexPath = path.join(__dirname, '../../index.js'); + const indexContent = fs.readFileSync(indexPath, 'utf8'); + + // Verify core modules are exported + expect(indexContent).toContain('core'); + expect(indexContent).toContain('memory'); + expect(indexContent).toContain('security'); + expect(indexContent).toContain('performance'); + expect(indexContent).toContain('telemetry'); + }); + }); + + describeIntegration('Cross-Platform Compatibility', () => { + it('should work on current platform', () => { + const cliPath = path.join(__dirname, 
'../../bin/aios.js'); + + // Verify file exists + expect(fs.existsSync(cliPath)).toBe(true); + + // Verify it's executable (shebang present) + const content = fs.readFileSync(cliPath, 'utf8'); + expect(content.startsWith('#!/usr/bin/env node')).toBe(true); + }); + + it('should report current platform in info command', (done) => { + const cliPath = path.join(__dirname, '../../bin/aios.js'); + const child = spawn('node', [cliPath, 'info']); + + let output = ''; + child.stdout.on('data', (data) => { + output += data.toString(); + }); + + child.on('close', (code) => { + expect(code).toBe(0); + expect(output).toContain('Platform:'); + expect(output).toContain(process.platform); + done(); + }); + }, timeout); + }); + + describeIntegration('Performance', () => { + it('should execute --version in under 5 seconds', (done) => { + const startTime = Date.now(); + const cliPath = path.join(__dirname, '../../bin/aios.js'); + const child = spawn('node', [cliPath, '--version']); + + child.on('close', () => { + const elapsed = Date.now() - startTime; + expect(elapsed).toBeLessThan(5000); + done(); + }); + }, 10000); + + it('should execute info command in under 5 seconds', (done) => { + const startTime = Date.now(); + const cliPath = path.join(__dirname, '../../bin/aios.js'); + const child = spawn('node', [cliPath, 'info']); + + child.on('close', () => { + const elapsed = Date.now() - startTime; + expect(elapsed).toBeLessThan(5000); + done(); + }); + }, 10000); + }); +}); + + +``` + +================================================== +📄 tests/integration/pipeline-memory-integration.test.js +================================================== +```js +/** + * UnifiedActivationPipeline Memory Integration Tests (MIS-6) + * + * Tests the complete pipeline flow with memory injection: + * - Scenario 1: Activation WITHOUT pro/ → no memories, no errors + * - Scenario 2: Activation WITH pro/ but no digests → empty array, no errors + * - Scenario 3: Activation WITH pro/ and digests → memories 
injected correctly + * - Scenario 4: Token budget respected → never exceeds configured limit + * - Scenario 5: Agent scoping enforced → only own + shared memories + * + * @module __tests__/integration/pipeline-memory-integration + */ + +'use strict'; + +const path = require('path'); +const fs = require('fs').promises; +const yaml = require('js-yaml'); +const { UnifiedActivationPipeline } = require('../../.aios-core/development/scripts/unified-activation-pipeline'); + +// Mock pro-detector for testing different scenarios +jest.mock('../../bin/utils/pro-detector'); +const proDetector = require('../../bin/utils/pro-detector'); + +describe('UnifiedActivationPipeline Memory Integration (MIS-6)', () => { + let pipeline; + const testProjectRoot = path.join(__dirname, '..', 'fixtures', 'test-project-memory'); + + // Store original env to restore after tests + const originalPipelineTimeout = process.env.AIOS_PIPELINE_TIMEOUT; + + beforeEach(() => { + // Increase pipeline timeout so tests don't fail under heavy load (full suite) + process.env.AIOS_PIPELINE_TIMEOUT = '5000'; + pipeline = new UnifiedActivationPipeline(testProjectRoot); + jest.clearAllMocks(); + }); + + afterEach(async () => { + // Restore original pipeline timeout + if (originalPipelineTimeout !== undefined) { + process.env.AIOS_PIPELINE_TIMEOUT = originalPipelineTimeout; + } else { + delete process.env.AIOS_PIPELINE_TIMEOUT; + } + + // Cleanup test data + try { + const digestsPath = path.join(testProjectRoot, '.aios', 'session-digests'); + await fs.rm(digestsPath, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + + // Clear all timers to prevent Jest warnings (TEST-002) + jest.clearAllTimers(); + jest.useRealTimers(); + }); + + /** + * SCENARIO 1: Activation WITHOUT pro/ → no memories, no errors + */ + describe('Scenario 1: No Pro Available', () => { + beforeEach(() => { + // Mock pro as unavailable + proDetector.isProAvailable.mockReturnValue(false); + 
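      // isProAvailable is synchronous in this suite, hence mockReturnValue
      // rather than mockResolvedValue.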
proDetector.loadProModule.mockReturnValue(null); + }); + + it('should activate successfully with empty memories array', async () => { + const result = await pipeline.activate('dev'); + + expect(result).toBeDefined(); + expect(result.greeting).toBeDefined(); + expect(result.context).toBeDefined(); + expect(result.context.memories).toEqual([]); + expect(result.fallback).toBe(false); + }); + + it('should not throw errors when pro is unavailable', async () => { + await expect(pipeline.activate('qa')).resolves.toBeDefined(); + }); + + it('should work for all agent IDs', async () => { + const agentIds = ['dev', 'qa', 'architect', 'pm', 'po']; + + for (const agentId of agentIds) { + const result = await pipeline.activate(agentId); + expect(result.context.memories).toEqual([]); + } + }); + }); + + /** + * SCENARIO 2: Activation WITH pro/ but no digests → empty array, no errors + */ + describe('Scenario 2: Pro Available, No Digests', () => { + beforeEach(() => { + // Mock pro as available but return mock classes + proDetector.isProAvailable.mockReturnValue(true); + + // Mock MemoryLoader that returns empty results (no digests) + const MockMemoryLoader = class { + constructor() {} + async loadForAgent() { + return { memories: [], metadata: { count: 0, tokensUsed: 0 } }; + } + }; + + // Mock feature gate as enabled + const mockFeatureGate = { + featureGate: { + isAvailable: jest.fn().mockReturnValue(true), + }, + }; + + proDetector.loadProModule.mockImplementation((module) => { + if (module === 'memory/memory-loader') { + return MockMemoryLoader; + } + if (module === 'license/feature-gate') { + return mockFeatureGate; + } + return null; + }); + }); + + it('should activate successfully with empty memories array', async () => { + const result = await pipeline.activate('dev'); + + expect(result).toBeDefined(); + expect(result.context.memories).toEqual([]); + expect(result.metrics.loaders.memories).toBeDefined(); + expect(result.metrics.loaders.memories.status).toBe('ok'); + }); 
+ + it('should not throw errors when digests directory is empty', async () => { + await expect(pipeline.activate('architect')).resolves.toBeDefined(); + }); + }); + + /** + * SCENARIO 3: Activation WITH pro/ and digests → memories injected correctly + */ + describe('Scenario 3: Pro Available, With Digests', () => { + const mockMemories = [ + { + id: 'mem-001', + title: 'Test Memory 1', + summary: 'HOT memory about testing', + sector: 'procedural', + tier: 'hot', + attention_score: 0.8, + agent: 'dev', + }, + { + id: 'mem-002', + title: 'Test Memory 2', + summary: 'WARM memory about architecture', + sector: 'semantic', + tier: 'warm', + attention_score: 0.5, + agent: 'dev', + }, + ]; + + beforeEach(() => { + proDetector.isProAvailable.mockReturnValue(true); + + // Mock MemoryLoader that returns test memories + const MockMemoryLoader = class { + constructor() {} + async loadForAgent(agentId, options) { + // Add small delay to simulate real async operation (for metrics.duration test) + await new Promise(resolve => setTimeout(resolve, 10)); + + return { + memories: mockMemories, + metadata: { + agent: agentId, + count: 2, + tokensUsed: 450, + budget: options.budget || 2000, + tiers: ['hot', 'warm'], + }, + }; + } + }; + + const mockFeatureGate = { + featureGate: { + isAvailable: jest.fn().mockReturnValue(true), + }, + }; + + proDetector.loadProModule.mockImplementation((module) => { + if (module === 'memory/memory-loader') { + return MockMemoryLoader; + } + if (module === 'license/feature-gate') { + return mockFeatureGate; + } + return null; + }); + }); + + it('should inject memories into enrichedContext', async () => { + const result = await pipeline.activate('dev'); + + expect(result.context.memories).toHaveLength(2); + expect(result.context.memories[0]).toMatchObject({ + id: 'mem-001', + title: 'Test Memory 1', + tier: 'hot', + }); + }); + + it('should include memory metadata in metrics', async () => { + const result = await pipeline.activate('dev'); + + 
expect(result.metrics.loaders.memories).toBeDefined(); + expect(result.metrics.loaders.memories.status).toBe('ok'); + expect(result.metrics.loaders.memories.duration).toBeGreaterThan(0); + }); + + it('should maintain activation quality as non-fallback', async () => { + const result = await pipeline.activate('dev'); + + // Under heavy load (full test suite), pipeline may report 'partial' instead of 'full' + // The key assertion is that memories were injected (not a fallback) + expect(['full', 'partial']).toContain(result.quality); + expect(result.fallback).toBe(false); + }); + }); + + /** + * SCENARIO 4: Token budget respected → never exceeds configured limit + */ + describe('Scenario 4: Token Budget Enforcement', () => { + beforeEach(() => { + proDetector.isProAvailable.mockReturnValue(true); + + // Mock MemoryLoader that respects budget + const MockMemoryLoader = class { + constructor() {} + async loadForAgent(agentId, options) { + const budget = options.budget || 2000; + // Simulate budget enforcement + const memories = []; + let tokensUsed = 0; + + // Add memories until budget is reached (each memory ~200 tokens) + const maxPossibleMemories = 20; // Hard cap to prevent infinite loops + for (let i = 0; i < maxPossibleMemories; i++) { + const memoryTokens = 200; + // Stop if adding this memory would exceed budget + if (tokensUsed + memoryTokens > budget) { + break; + } + + memories.push({ + id: `mem-${i}`, + title: `Memory ${i}`, + summary: 'Test memory', + sector: 'procedural', + tier: i < 5 ? 'hot' : 'warm', + attention_score: i < 5 ? 
0.8 : 0.5, + agent: agentId, + }); + tokensUsed += memoryTokens; + } + + return { + memories, + metadata: { count: memories.length, tokensUsed, budget }, + }; + } + }; + + const mockFeatureGate = { + featureGate: { isAvailable: jest.fn().mockReturnValue(true) }, + }; + + proDetector.loadProModule.mockImplementation((module) => { + if (module === 'memory/memory-loader') return MockMemoryLoader; + if (module === 'license/feature-gate') return mockFeatureGate; + return null; + }); + }); + + it('should never exceed configured budget', async () => { + const result = await pipeline.activate('dev'); + + // With 2000 budget and 200 tokens per memory, max should be 10 memories + // (2000 / 200 = 10) + const expectedMaxMemories = 2000 / 200; + expect(result.context.memories.length).toBeLessThanOrEqual(expectedMaxMemories); + + // Verify actual token usage doesn't exceed budget + const tokensPerMemory = 200; + const actualTokens = result.context.memories.length * tokensPerMemory; + expect(actualTokens).toBeLessThanOrEqual(2000); + }); + + it('should stop adding memories when budget is reached', async () => { + const result = await pipeline.activate('dev'); + + // With default 2000 budget and 200 tokens per memory, max is 10 memories + expect(result.context.memories.length).toBeLessThanOrEqual(10); + }); + }); + + /** + * SCENARIO 5: Agent scoping enforced → only own + shared memories + */ + describe('Scenario 5: Agent Scoping Privacy', () => { + beforeEach(() => { + proDetector.isProAvailable.mockReturnValue(true); + + const MockMemoryLoader = class { + constructor() {} + async loadForAgent(agentId, options) { + // Simulate proper agent scoping + const allMemories = [ + { id: 'mem-dev-1', agent: 'dev', title: 'Dev Memory' }, + { id: 'mem-qa-1', agent: 'qa', title: 'QA Memory' }, + { id: 'mem-shared-1', agent: 'shared', title: 'Shared Memory' }, + ]; + + // Filter to only agent's own + shared + const memories = allMemories.filter(m => + m.agent === agentId || m.agent === 
'shared', + ); + + return { + memories, + metadata: { count: memories.length, tokensUsed: memories.length * 200 }, + }; + } + }; + + const mockFeatureGate = { + featureGate: { isAvailable: jest.fn().mockReturnValue(true) }, + }; + + proDetector.loadProModule.mockImplementation((module) => { + if (module === 'memory/memory-loader') return MockMemoryLoader; + if (module === 'license/feature-gate') return mockFeatureGate; + return null; + }); + }); + + it('should only return dev + shared memories for dev agent', async () => { + const result = await pipeline.activate('dev'); + + const agents = result.context.memories.map(m => m.agent); + expect(agents).toContain('dev'); + expect(agents).toContain('shared'); + expect(agents).not.toContain('qa'); + }); + + it('should only return qa + shared memories for qa agent', async () => { + const result = await pipeline.activate('qa'); + + const agents = result.context.memories.map(m => m.agent); + expect(agents).toContain('qa'); + expect(agents).toContain('shared'); + expect(agents).not.toContain('dev'); + }); + + it('should never leak private memories between agents', async () => { + const devResult = await pipeline.activate('dev'); + const qaResult = await pipeline.activate('qa'); + + const devAgents = devResult.context.memories.map(m => m.agent); + const qaAgents = qaResult.context.memories.map(m => m.agent); + + // Dev should not see QA memories + expect(devAgents).not.toContain('qa'); + // QA should not see Dev memories + expect(qaAgents).not.toContain('dev'); + // Both should see shared + expect(devAgents).toContain('shared'); + expect(qaAgents).toContain('shared'); + }); + }); + + /** + * EDGE CASES & ERROR HANDLING + */ + describe('Edge Cases', () => { + it('should handle feature gate disabled gracefully', async () => { + proDetector.isProAvailable.mockReturnValue(true); + + const mockFeatureGate = { + featureGate: { isAvailable: jest.fn().mockReturnValue(false) }, + }; + + 
proDetector.loadProModule.mockImplementation((module) => { + if (module === 'license/feature-gate') return mockFeatureGate; + return null; + }); + + const result = await pipeline.activate('dev'); + expect(result.context.memories).toEqual([]); + }); + + it('should handle memory loader errors gracefully', async () => { + proDetector.isProAvailable.mockReturnValue(true); + + const MockMemoryLoader = class { + constructor() {} + async loadForAgent() { + throw new Error('Simulated memory load error'); + } + }; + + const mockFeatureGate = { + featureGate: { isAvailable: jest.fn().mockReturnValue(true) }, + }; + + proDetector.loadProModule.mockImplementation((module) => { + if (module === 'memory/memory-loader') return MockMemoryLoader; + if (module === 'license/feature-gate') return mockFeatureGate; + return null; + }); + + // Should not throw, should gracefully degrade + const result = await pipeline.activate('dev'); + expect(result.context.memories).toEqual([]); + expect(result.metrics.loaders.memories).toBeDefined(); + }); + + it('should handle timeout gracefully (< 500ms)', async () => { + // Pipeline timeout is already set to 5000ms in beforeEach, + // so the _profileLoader memory timeout (500ms) fires BEFORE pipeline timeout + + proDetector.isProAvailable.mockReturnValue(true); + + // Track timer so we can clear it in case pipeline abandons the promise + let slowTimer; + const MockMemoryLoader = class { + constructor() {} + async loadForAgent() { + // Simulate slow load that exceeds _profileLoader timeout (500ms) + await new Promise(resolve => { + slowTimer = setTimeout(resolve, 600); + }); + return { memories: [], metadata: {} }; + } + }; + + const mockFeatureGate = { + featureGate: { isAvailable: jest.fn().mockReturnValue(true) }, + }; + + proDetector.loadProModule.mockImplementation((module) => { + if (module === 'memory/memory-loader') return MockMemoryLoader; + if (module === 'license/feature-gate') return mockFeatureGate; + return null; + }); + + const result 
= await pipeline.activate('dev'); + + // Should timeout and return empty memories (null from _profileLoader → || []) + expect(result.context.memories).toEqual([]); + + // _profileLoader records metrics even on timeout + expect(result.metrics).toBeDefined(); + expect(result.metrics.loaders).toBeDefined(); + expect(result.metrics.loaders.memories).toBeDefined(); + expect(result.metrics.loaders.memories.status).toBe('timeout'); + + // Clean up abandoned timer to prevent Jest worker leaks + if (slowTimer) clearTimeout(slowTimer); + }, 10000); // Increase test timeout + }); +}); + +``` + +================================================== +📄 tests/integration/onboarding-smoke.test.js +================================================== +```js +/** + * Story AIOS-DIFF-4.0.5: Onboarding smoke tests in clean environment. + * + * Objective: + * - Validate that "Comece Aqui" onboarding flow remains executable. + * - Keep a deterministic timer for first-value smoke checks. + */ + +'use strict'; + +const fs = require('fs-extra'); +const os = require('os'); +const path = require('path'); +const { execFileSync } = require('child_process'); + +describe('Onboarding smoke flow (AIOS-DIFF-4.0.5)', () => { + const repoRoot = path.resolve(__dirname, '..', '..'); + const cliBin = path.join(repoRoot, 'bin', 'aios.js'); + const greetingScript = path.join( + repoRoot, + '.aios-core', + 'development', + 'scripts', + 'generate-greeting.js', + ); + const FIRST_VALUE_TARGET_SECONDS = 10 * 60; + const CI_MARGIN_SECONDS = 12 * 60; + + let tempDir; + + const runNode = (entryPoint, args = [], cwd = repoRoot) => { + return execFileSync('node', [entryPoint, ...args], { + cwd, + encoding: 'utf8', + timeout: 20000, + env: { + ...process.env, + CI: '1', + }, + }); + }; + + beforeEach(async () => { + tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'aios-onboarding-smoke-')); + }); + + afterEach(async () => { + await fs.remove(tempDir); + }); + + it('validates Start Here commands are discoverable and 
executable from clean env', async () => { + expect(await fs.pathExists(path.join(tempDir, '.aios-core'))).toBe(false); + expect(await fs.pathExists(path.join(tempDir, '.codex'))).toBe(false); + expect(await fs.pathExists(path.join(tempDir, '.claude'))).toBe(false); + + const topHelp = runNode(cliBin, ['--help'], tempDir); + expect(topHelp).toContain('init '); + expect(topHelp).toContain('install'); + + const initHelp = runNode(cliBin, ['init', '--help'], tempDir); + expect(initHelp).toContain('--skip-install'); + expect(initHelp).toContain('--template'); + + const versionOutput = runNode(cliBin, ['--version'], tempDir).trim(); + expect(versionOutput).toMatch(/^\d+\.\d+\.\d+$/); + }); + + it('validates onboarding docs keep an objective first-value path', async () => { + const readme = await fs.readFile(path.join(repoRoot, 'README.md'), 'utf8'); + const gettingStarted = await fs.readFile(path.join(repoRoot, 'docs', 'getting-started.md'), 'utf8'); + + expect(readme).toContain('Comece Aqui (10 Min)'); + expect(readme).toContain('npx aios-core init'); + expect(readme).toContain('npx aios-core install'); + + expect(gettingStarted).toContain('10-Minute Quick Path'); + expect(gettingStarted).toContain('Step 1: Install AIOS'); + expect(gettingStarted).toContain('Step 2: Pick your IDE activation path'); + expect(gettingStarted).toContain('Step 3: Validate first value'); + expect(gettingStarted).toContain('*help'); + expect(gettingStarted).toContain('PASS rule'); + }); + + it('validates first-value activation signal and timing budget', () => { + const startedAt = Date.now(); + + const greeting = runNode(greetingScript, ['dev'], repoRoot); + + const elapsedSeconds = (Date.now() - startedAt) / 1000; + expect(greeting).toContain('Agent dev loaded'); + expect(greeting).toContain('Available Commands'); + expect(greeting).toContain('*help'); + + // Target for real user path is <=10 min. 
+ expect(elapsedSeconds).toBeLessThanOrEqual(FIRST_VALUE_TARGET_SECONDS); + }); +}); + +``` + +================================================== +📄 tests/integration/formatter-integration.test.js +================================================== +```js +/** + * Integration Tests for Output Formatter + * + * Story: 6.1.6 - Output Formatter Implementation + * Tests formatter integration with real task execution + */ + +const PersonalizedOutputFormatter = require('../../.aios-core/infrastructure/scripts/output-formatter'); +const OutputPatternValidator = require('../../.aios-core/infrastructure/scripts/validate-output-pattern'); +const fs = require('fs'); +const path = require('path'); + +// Mock fs for agent file reading +jest.mock('fs'); + +describe('Formatter Integration', () => { + let mockAgent; + let mockTask; + let mockResults; + + beforeEach(() => { + jest.clearAllMocks(); + + // Setup dev agent (Dex - Builder) + mockAgent = { + id: 'dev', + name: 'Dex', + title: 'Full Stack Developer', + }; + + mockTask = { + name: 'develop-story', + }; + + mockResults = { + startTime: '2025-01-15T10:00:00Z', + endTime: '2025-01-15T10:02:30Z', + duration: '2.5s', + tokens: { total: 1800 }, + success: true, + output: `Created files: +- .aios-core/scripts/output-formatter.js +- .aios-core/scripts/validate-output-pattern.js +- .aios-core/templates/task-execution-report.md + +All tests passing.`, + tests: { passed: 50, total: 50 }, + coverage: 85, + linting: { status: '✅ Clean' }, + }; + + // Mock dev agent file + const mockDevAgentContent = `# dev + +\`\`\`yaml +agent: + name: Dex + id: dev + title: Full Stack Developer + +persona_profile: + archetype: Builder + zodiac: "♒ Aquarius" + communication: + tone: pragmatic + emoji_frequency: medium + vocabulary: + - construir + - implementar + - refatorar + - resolver + - otimizar + - debugar + - testar + greeting_levels: + minimal: "💻 dev Agent ready" + named: "💻 Dex (Builder) ready. Let's build something great!" 
+ archetypal: "💻 Dex the Builder ready to innovate!" + signature_closing: "— Dex, sempre construindo 🔨" +\`\`\` +`; + + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue(mockDevAgentContent); + }); + + test('develop-story task uses formatter successfully', () => { + // 1. Load dev agent (Dex - Builder) + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + + // 2. Execute formatter + const formattedOutput = formatter.format(); + + // 3. Capture formatted output + expect(formattedOutput).toBeDefined(); + expect(formattedOutput.length).toBeGreaterThan(0); + + // 4. Validate structure compliance + const validator = new OutputPatternValidator(); + const validation = validator.validate(formattedOutput); + + expect(validation.valid).toBe(true); + expect(validation.errors.length).toBe(0); + + // 5. Verify personality injection (vocabulary, tone, signature) + expect(formattedOutput).toContain('Dex (Builder)'); + expect(formattedOutput).toContain('Tá pronto!'); // Pragmatic tone + expect(formattedOutput).toContain('— Dex, sempre construindo 🔨'); // Signature + + // 6. Verify metrics section last + const lines = formattedOutput.split('\n'); + const metricsIndex = lines.findIndex(line => line === '### Metrics'); + const signatureIndex = lines.findIndex(line => line.includes('— Dex')); + + expect(metricsIndex).toBeGreaterThan(-1); + expect(signatureIndex).toBeGreaterThan(metricsIndex); + + // 7. 
Verify performance <50ms + const start = process.hrtime.bigint(); + formatter.format(); + const duration = Number(process.hrtime.bigint() - start) / 1000000; + + expect(duration).toBeLessThan(50); + }); + + test('formatter output passes validator for all sections', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const output = formatter.format(); + const validator = new OutputPatternValidator(); + const result = validator.validate(output); + + expect(result.valid).toBe(true); + expect(result.errors.length).toBe(0); + }); + + test('formatter maintains fixed structure positions', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const output = formatter.format(); + const lines = output.split('\n'); + + // Find header start + const headerIndex = lines.findIndex(line => line === '## 📊 Task Execution Report'); + expect(headerIndex).toBeGreaterThan(-1); + + // Duration should be on line 7 (headerIndex + 6) + expect(lines[headerIndex + 6]).toMatch(/^\*\*Duration:\*\*/); + + // Tokens should be on line 8 (headerIndex + 7) + expect(lines[headerIndex + 7]).toMatch(/^\*\*Tokens Used:\*\*/); + }); + + test('formatter injects personality correctly', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const output = formatter.format(); + + // Check agent name and archetype + expect(output).toContain('**Agent:** Dex (Builder)'); + + // Check pragmatic tone message + expect(output).toContain('Tá pronto!'); + + // Check signature + expect(output).toContain('— Dex, sempre construindo 🔨'); + }); + + test('formatter handles task-specific output correctly', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const output = formatter.format(); + + expect(output).toContain('### Output'); + expect(output).toContain('Created files:'); + expect(output).toContain('output-formatter.js'); + }); + + 
test('formatter includes metrics correctly', () => { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const output = formatter.format(); + + expect(output).toContain('### Metrics'); + expect(output).toContain('Tests: 50/50'); + expect(output).toContain('Coverage: 85%'); + expect(output).toContain('Linting: ✅ Clean'); + }); + + test('formatter performance meets target', () => { + const iterations = 10; + const times = []; + + for (let i = 0; i < iterations; i++) { + const formatter = new PersonalizedOutputFormatter(mockAgent, mockTask, mockResults); + const start = process.hrtime.bigint(); + formatter.format(); + const duration = Number(process.hrtime.bigint() - start) / 1000000; + times.push(duration); + } + + const avgTime = times.reduce((a, b) => a + b, 0) / times.length; + const p95Time = times.sort((a, b) => a - b)[Math.floor(times.length * 0.95)]; + + expect(avgTime).toBeLessThan(50); + expect(p95Time).toBeLessThan(50); + }); +}); + + +``` + +================================================== +📄 tests/integration/human-review-orchestration.test.js +================================================== +```js +/** + * Human Review Orchestration Integration Tests + * + * Tests for Story 3.5 - Human Review Orchestration (Layer 3) + * End-to-End Tests: HUMAN-01 to HUMAN-05 + * + * @story 3.5 - Human Review Orchestration + */ + +const { HumanReviewOrchestrator } = require('../../.aios-core/core/quality-gates/human-review-orchestrator'); +const { FocusAreaRecommender } = require('../../.aios-core/core/quality-gates/focus-area-recommender'); +const { NotificationManager } = require('../../.aios-core/core/quality-gates/notification-manager'); +const { QualityGateManager } = require('../../.aios-core/core/quality-gates/quality-gate-manager'); + +describe('Human Review Orchestration Integration Tests', () => { + describe('HUMAN-01: Orchestration Flow', () => { + let orchestrator; + + beforeEach(() => { + orchestrator = new 
HumanReviewOrchestrator({ + statusPath: '.aios/qa-status-integration-test.json', + reviewRequestsPath: '.aios/human-review-requests-integration-test', + }); + // Mock file operations to avoid actual file I/O + orchestrator.saveReviewRequest = jest.fn().mockResolvedValue(); + orchestrator.notifyReviewer = jest.fn().mockResolvedValue({ success: true }); + }); + + it('should execute 3-layer flow in correct sequence', async () => { + const prContext = { + prNumber: 123, + changedFiles: ['src/services/auth.service.js', 'src/components/Login.tsx'], + }; + + const layer1Result = { + pass: true, + layer: 'Layer 1: Pre-commit', + duration: 5000, + results: [ + { check: 'lint', pass: true, message: 'No errors' }, + { check: 'test', pass: true, message: 'All tests passed' }, + { check: 'typecheck', pass: true, message: 'No type errors' }, + ], + checks: { total: 3, passed: 3, failed: 0 }, + }; + + const layer2Result = { + pass: true, + layer: 'Layer 2: PR Automation', + duration: 30000, + results: [ + { check: 'coderabbit', pass: true, issues: { critical: 0, high: 0, medium: 2, low: 5 } }, + { check: 'quinn', pass: true, suggestions: 3, blocking: 0 }, + ], + checks: { total: 2, passed: 2, failed: 0 }, + }; + + const result = await orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + // Verify orchestration completed successfully + expect(result.pass).toBe(true); + expect(result.status).toBe('pending_human_review'); + expect(result.layers.layer1.pass).toBe(true); + expect(result.layers.layer2.pass).toBe(true); + expect(result.reviewRequest).toBeDefined(); + expect(result.reviewRequest.focusAreas).toBeDefined(); + }); + + it('should stop at Layer 1 when it fails', async () => { + const prContext = { changedFiles: ['file.js'] }; + + const layer1Result = { + pass: false, + results: [ + { check: 'lint', pass: false, message: '5 errors found' }, + ], + }; + + const layer2Result = { pass: true }; // Should not matter + + const result = await 
orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + expect(result.pass).toBe(false); + expect(result.stoppedAt).toBe('layer1'); + expect(result.reviewRequest).toBeUndefined(); + }); + + it('should stop at Layer 2 when it fails after Layer 1 passes', async () => { + const prContext = { changedFiles: ['file.js'] }; + + const layer1Result = { pass: true, results: [] }; + + const layer2Result = { + pass: false, + results: [ + { check: 'coderabbit', pass: false, issues: { critical: 1 }, message: 'Critical issue found' }, + ], + }; + + const result = await orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + expect(result.pass).toBe(false); + expect(result.stoppedAt).toBe('layer2'); + }); + }); + + describe('HUMAN-02: Blocking Behavior', () => { + let orchestrator; + + beforeEach(() => { + orchestrator = new HumanReviewOrchestrator(); + }); + + it('should provide fix recommendations when blocking', async () => { + const prContext = { changedFiles: ['file.js'] }; + + const layer1Result = { + pass: false, + results: [ + { check: 'lint', pass: false, message: '10 errors' }, + { check: 'test', pass: false, message: '2 tests failed' }, + ], + }; + + const result = await orchestrator.orchestrateReview(prContext, layer1Result, {}); + + expect(result.pass).toBe(false); + expect(result.status).toBe('blocked'); + expect(result.fixFirst).toBeDefined(); + expect(result.fixFirst.length).toBeGreaterThan(0); + expect(result.fixFirst.some(f => f.suggestion.includes('lint'))).toBe(true); + }); + + it('should include clear blocking message', async () => { + const prContext = { changedFiles: ['file.js'] }; + + const layer1Result = { + pass: false, + results: [{ check: 'typecheck', pass: false, message: 'Type errors' }], + }; + + const result = await orchestrator.orchestrateReview(prContext, layer1Result, {}); + + expect(result.message).toContain('linting'); + }); + }); + + describe('HUMAN-03: Notification System', () => { + let orchestrator; + 
let notificationSpy; + + beforeEach(() => { + orchestrator = new HumanReviewOrchestrator({ + notifications: { channels: ['console'] }, + }); + orchestrator.saveReviewRequest = jest.fn().mockResolvedValue(); + + // Spy on notification manager + notificationSpy = jest.spyOn(orchestrator.notificationManager, 'sendReviewRequest') + .mockResolvedValue({ success: true, notificationId: 'notif-test' }); + }); + + afterEach(() => { + notificationSpy.mockRestore(); + }); + + it('should notify reviewer when layers 1+2 pass', async () => { + const prContext = { changedFiles: ['src/auth/login.js'] }; + + const layer1Result = { pass: true, results: [] }; + const layer2Result = { pass: true, results: [] }; + + await orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + expect(notificationSpy).toHaveBeenCalled(); + const notificationCall = notificationSpy.mock.calls[0][0]; + expect(notificationCall.reviewer).toBeDefined(); + expect(notificationCall.focusAreas).toBeDefined(); + }); + + it('should include review request details in notification', async () => { + const prContext = { changedFiles: ['src/services/payment.service.js'] }; + + const layer1Result = { pass: true, results: [] }; + const layer2Result = { pass: true, results: [] }; + + await orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + const reviewRequest = notificationSpy.mock.calls[0][0]; + expect(reviewRequest.id).toMatch(/^hr-/); + expect(reviewRequest.estimatedTime).toBeGreaterThan(0); + expect(reviewRequest.expiresAt).toBeDefined(); + }); + }); + + describe('HUMAN-04: Focus Area Recommendations', () => { + let recommender; + + beforeEach(() => { + recommender = new FocusAreaRecommender(); + }); + + it('should highlight strategic aspects - security', async () => { + const context = { + prContext: { + changedFiles: [ + 'src/auth/login.controller.js', + 'src/auth/password.util.js', + 'src/middleware/jwt.middleware.js', + ], + }, + }; + + const recommendations = await 
recommender.recommend(context); + + expect(recommendations.primary.some(p => p.area === 'security')).toBe(true); + expect(recommendations.highlightedAspects).toContain('Security-sensitive code changes'); + }); + + it('should highlight strategic aspects - architecture', async () => { + const context = { + prContext: { + changedFiles: [ + 'src/core/base-service.js', + 'src/interfaces/repository.interface.ts', + ], + }, + }; + + const recommendations = await recommender.recommend(context); + + expect(recommendations.primary.some(p => p.area === 'architecture')).toBe(true); + }); + + it('should highlight strategic aspects - business logic', async () => { + const context = { + prContext: { + changedFiles: [ + 'src/services/order.service.js', + 'src/handlers/checkout.handler.js', + ], + }, + }; + + const recommendations = await recommender.recommend(context); + + expect(recommendations.primary.some(p => p.area === 'business-logic')).toBe(true); + }); + + it('should provide review questions for focus areas', async () => { + const context = { + prContext: { + changedFiles: ['src/auth/session.js'], + }, + }; + + const recommendations = await recommender.recommend(context); + const securityArea = recommendations.primary.find(p => p.area === 'security'); + + expect(securityArea).toBeDefined(); + expect(securityArea.questions).toBeDefined(); + expect(securityArea.questions.length).toBeGreaterThan(0); + }); + + it('should exclude automated-covered areas', async () => { + const context = { + prContext: { changedFiles: ['src/file.js'] }, + }; + + const recommendations = await recommender.recommend(context); + + expect(recommendations.skip).toContain('syntax'); + expect(recommendations.skip).toContain('formatting'); + expect(recommendations.skip).toContain('simple-logic'); + }); + }); + + describe('HUMAN-05: Time Reduction Estimation', () => { + let orchestrator; + + beforeEach(() => { + orchestrator = new HumanReviewOrchestrator(); + orchestrator.saveReviewRequest = 
jest.fn().mockResolvedValue(); + orchestrator.notifyReviewer = jest.fn().mockResolvedValue({ success: true }); + }); + + it('should estimate review time based on focus areas', async () => { + const prContext = { + changedFiles: ['src/auth/login.js'], // Single security file + }; + + const layer1Result = { pass: true, results: [] }; + const layer2Result = { pass: true, results: [] }; + + const result = await orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + // Base 10min + focus areas + expect(result.reviewRequest.estimatedTime).toBeGreaterThanOrEqual(10); + expect(result.reviewRequest.estimatedTime).toBeLessThanOrEqual(35); // Max with 3 primary + 2 secondary + }); + + it('should target ~30min review time (75% reduction from 2-4h)', async () => { + const prContext = { + changedFiles: [ + 'src/auth/login.js', + 'src/services/order.service.js', + 'src/components/Dashboard.tsx', + 'config/settings.yaml', + ], + }; + + const layer1Result = { pass: true, results: [] }; + const layer2Result = { pass: true, results: [] }; + + const result = await orchestrator.orchestrateReview(prContext, layer1Result, layer2Result); + + // Should be around 30 minutes or less (targeting 75% reduction) + expect(result.reviewRequest.estimatedTime).toBeLessThanOrEqual(35); + }); + }); + + describe('Integration with QualityGateManager', () => { + let manager; + + beforeEach(() => { + manager = new QualityGateManager({ + layer1: { enabled: true }, + layer2: { enabled: true }, + layer3: { enabled: true }, + }); + }); + + it('should have human review orchestrator available', () => { + expect(manager.humanReviewOrchestrator).toBeDefined(); + }); + + it('should have orchestrateHumanReview method', () => { + expect(typeof manager.orchestrateHumanReview).toBe('function'); + }); + + it('should integrate orchestration with full pipeline', async () => { + // Mock layer executions + manager.layers.layer1.execute = jest.fn().mockResolvedValue({ + pass: true, + layer: 'Layer 1: 
Pre-commit', + results: [{ check: 'lint', pass: true }], + }); + + manager.layers.layer2.execute = jest.fn().mockResolvedValue({ + pass: true, + layer: 'Layer 2: PR Automation', + results: [{ check: 'coderabbit', pass: true }], + }); + + // Mock file operations + manager.humanReviewOrchestrator.saveReviewRequest = jest.fn().mockResolvedValue(); + manager.humanReviewOrchestrator.notifyReviewer = jest.fn().mockResolvedValue({ success: true }); + + const result = await manager.orchestrateHumanReview({ + prNumber: 123, + changedFiles: ['src/auth/login.js'], + }); + + expect(result.status).toBe('pending_human_review'); + expect(result.reviewRequest).toBeDefined(); + }); + }); + + describe('Error Handling', () => { + let orchestrator; + + beforeEach(() => { + orchestrator = new HumanReviewOrchestrator(); + }); + + it('should handle null layer results gracefully', async () => { + const prContext = { changedFiles: ['file.js'] }; + + const result = await orchestrator.orchestrateReview(prContext, null, null); + + expect(result.pass).toBe(false); + expect(result.status).toBe('blocked'); + }); + + it('should handle missing prContext gracefully', async () => { + orchestrator.saveReviewRequest = jest.fn().mockResolvedValue(); + orchestrator.notifyReviewer = jest.fn().mockResolvedValue({ success: true }); + + const layer1Result = { pass: true, results: [] }; + const layer2Result = { pass: true, results: [] }; + + const result = await orchestrator.orchestrateReview({}, layer1Result, layer2Result); + + expect(result.pass).toBe(true); + expect(result.reviewRequest.focusAreas).toBeDefined(); + }); + }); +}); + +``` + +================================================== +📄 tests/integration/codex-skills-sync.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { + syncSkills, + buildSkillContent, +} = 
require('../../.aios-core/infrastructure/scripts/codex-skills-sync/index'); + +describe('Codex Skills Sync', () => { + let tmpRoot; + let expectedAgentCount; + + beforeEach(() => { + tmpRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'aios-codex-skills-')); + expectedAgentCount = fs.readdirSync(path.join(process.cwd(), '.aios-core', 'development', 'agents')) + .filter(name => name.endsWith('.md')).length; + }); + + afterEach(() => { + fs.rmSync(tmpRoot, { recursive: true, force: true }); + }); + + it('generates one SKILL.md per AIOS agent in local .codex/skills', () => { + const localSkillsDir = path.join(tmpRoot, '.codex', 'skills'); + const result = syncSkills({ + sourceDir: path.join(process.cwd(), '.aios-core', 'development', 'agents'), + localSkillsDir, + dryRun: false, + }); + + expect(result.generated).toBe(expectedAgentCount); + const expected = path.join(localSkillsDir, 'aios-architect', 'SKILL.md'); + expect(fs.existsSync(expected)).toBe(true); + + const content = fs.readFileSync(expected, 'utf8'); + expect(content).toContain('name: aios-architect'); + expect(content).toContain('Activation Protocol'); + expect(content).toContain('.aios-core/development/agents/architect.md'); + expect(content).toContain('generate-greeting.js architect'); + }); + + it('supports global installation path when --global mode is enabled', () => { + const localSkillsDir = path.join(tmpRoot, '.codex', 'skills'); + const globalSkillsDir = path.join(tmpRoot, '.codex-home', 'skills'); + + const result = syncSkills({ + sourceDir: path.join(process.cwd(), '.aios-core', 'development', 'agents'), + localSkillsDir, + globalSkillsDir, + global: true, + dryRun: false, + }); + + expect(result.generated).toBe(expectedAgentCount); + expect(result.globalSkillsDir).toBe(globalSkillsDir); + expect(fs.existsSync(path.join(globalSkillsDir, 'aios-dev', 'SKILL.md'))).toBe(true); + }); + + it('treats globalOnly as global output and skips local writes', () => { + const localSkillsDir = path.join(tmpRoot, 
'.codex', 'skills'); + const globalSkillsDir = path.join(tmpRoot, '.codex-home', 'skills'); + + const result = syncSkills({ + sourceDir: path.join(process.cwd(), '.aios-core', 'development', 'agents'), + localSkillsDir, + globalSkillsDir, + globalOnly: true, + dryRun: false, + }); + + expect(result.generated).toBe(expectedAgentCount); + expect(result.globalSkillsDir).toBe(globalSkillsDir); + expect(fs.existsSync(path.join(localSkillsDir, 'aios-dev', 'SKILL.md'))).toBe(false); + expect(fs.existsSync(path.join(globalSkillsDir, 'aios-dev', 'SKILL.md'))).toBe(true); + }); + + it('buildSkillContent emits valid frontmatter and starter commands', () => { + const sample = { + id: 'dev', + filename: 'dev.md', + agent: { name: 'Dex', title: 'Developer', whenToUse: 'Build features safely.' }, + commands: [{ name: 'help', description: 'Show commands', visibility: ['quick', 'key', 'full'] }], + }; + const content = buildSkillContent(sample); + expect(content.startsWith('---')).toBe(true); + expect(content).toContain('name: aios-dev'); + expect(content).toContain('`*help` - Show commands'); + }); +}); + +``` + +================================================== +📄 tests/integration/decision-logging-yolo-workflow.test.js +================================================== +```js +// Integration test - requires external services +// Uses describeIntegration from setup.js +/** + * Integration Tests for Decision Logging + Yolo Mode Workflow + * + * Tests the complete integration of decision logging with yolo mode development. + * Validates end-to-end workflow from initialization to log generation. 
+ * + * @see .aios-core/scripts/decision-recorder.js + * @see .aios-core/scripts/decision-log-generator.js + * @see .aios-core/scripts/decision-log-indexer.js + */ + +const fs = require('fs').promises; +const path = require('path'); +const { + initializeDecisionLogging, + recordDecision, + trackFile, + trackTest, + updateMetrics, + completeDecisionLogging, + getCurrentContext, +} = require('../../.aios-core/development/scripts/decision-recorder'); + +describeIntegration('Decision Logging + Yolo Mode Integration', () => { + const testStoryPath = 'docs/stories/test-integration-story.md'; + const testStoryId = 'test-integration'; + + beforeEach(async () => { + // Clean up any previous test logs + try { + await fs.unlink(`.ai/decision-log-${testStoryId}.md`); + } catch (error) { + // File doesn't exist, that's okay + } + + try { + await fs.unlink('.ai/decision-logs-index.md'); + } catch (error) { + // File doesn't exist, that's okay + } + }); + + afterEach(async () => { + // Clean up test logs after each test + try { + await fs.unlink(`.ai/decision-log-${testStoryId}.md`); + } catch (error) { + // Ignore cleanup errors + } + + try { + await fs.unlink('.ai/decision-logs-index.md'); + } catch (error) { + // Ignore cleanup errors + } + }); + + describeIntegration('Full Yolo Mode Workflow', () => { + it('should complete full workflow with decision logging', async () => { + // Simulate yolo mode start + const context = await initializeDecisionLogging('dev', testStoryPath, { + agentLoadTime: 150, + }); + + expect(context).toBeDefined(); + expect(context.agentId).toBe('dev'); + expect(context.storyPath).toBe(testStoryPath); + expect(context.metrics.agentLoadTime).toBe(150); + + // Simulate autonomous decisions + const decision1 = recordDecision({ + description: 'Use Axios for HTTP client', + reason: 'Better error handling and interceptor support', + alternatives: ['Fetch API', 'Got library', 'node-fetch'], + type: 'library-choice', + priority: 'medium', + }); + + 
expect(decision1).toBeDefined(); + expect(decision1.description).toBe('Use Axios for HTTP client'); + expect(decision1.type).toBe('library-choice'); + expect(decision1.priority).toBe('medium'); + + const decision2 = recordDecision({ + description: 'Use React Context for state management', + reason: 'Simple state sharing without Redux overhead', + alternatives: ['Redux', 'Zustand', 'Jotai'], + type: 'architecture', + priority: 'high', + }); + + expect(decision2).toBeDefined(); + expect(decision2.type).toBe('architecture'); + expect(decision2.priority).toBe('high'); + + // Simulate file modifications + trackFile('src/api/client.js', 'created'); + trackFile('src/context/AppContext.js', 'created'); + trackFile('package.json', 'modified'); + + // Simulate test execution + trackTest({ + name: 'api.test.js', + passed: true, + duration: 125, + }); + + trackTest({ + name: 'context.test.js', + passed: true, + duration: 85, + }); + + // Update metrics + updateMetrics({ + taskExecutionTime: 300000, // 5 minutes + }); + + // Complete yolo mode - generate log + const logPath = await completeDecisionLogging(testStoryId, 'completed'); + + expect(logPath).toBeDefined(); + expect(logPath).toContain(`decision-log-${testStoryId}.md`); + + // Verify log file was created + const logExists = await fs.access(logPath).then(() => true).catch(() => false); + expect(logExists).toBe(true); + + // Verify log content + const logContent = await fs.readFile(logPath, 'utf8'); + + // Verify ADR format structure + expect(logContent).toContain('# Decision Log'); + expect(logContent).toContain('## Context'); + expect(logContent).toContain('## Decisions Made'); + expect(logContent).toContain('## Implementation Changes'); + expect(logContent).toContain('## Consequences & Rollback'); + + // Verify decisions are in log + expect(logContent).toContain('Use Axios for HTTP client'); + expect(logContent).toContain('Use React Context for state management'); + expect(logContent).toContain('library-choice'); + 
expect(logContent).toContain('architecture'); + expect(logContent).toContain('medium'); + expect(logContent).toContain('high'); + + // Verify files are tracked (OS-agnostic path matching) + expect(logContent).toMatch(/src[/\\]api[/\\]client\.js/); + expect(logContent).toMatch(/src[/\\]context[/\\]AppContext\.js/); + expect(logContent).toContain('package.json'); + + // Verify tests are tracked + expect(logContent).toContain('api.test.js'); + expect(logContent).toContain('context.test.js'); + expect(logContent).toContain('125ms'); + expect(logContent).toContain('85ms'); + + // Verify metrics (agent load time is explicitly set, task execution time is calculated) + expect(logContent).toContain('Agent Load Time: 150ms'); + expect(logContent).toContain('Task Execution Time'); // Calculated automatically based on duration + + // Verify index was updated + const indexExists = await fs.access('.ai/decision-logs-index.md').then(() => true).catch(() => false); + expect(indexExists).toBe(true); + + const indexContent = await fs.readFile('.ai/decision-logs-index.md', 'utf8'); + expect(indexContent).toContain('# Decision Log Index'); + expect(indexContent).toContain(testStoryId); + expect(indexContent).toContain('completed'); + expect(indexContent).toContain('Total logs: 1'); + }); + + it('should handle decision logging disabled gracefully', async () => { + const context = await initializeDecisionLogging('dev', testStoryPath, { + enabled: false, + }); + + expect(context).toBeNull(); + + const decision = recordDecision({ + description: 'Test decision', + reason: 'Test reason', + }); + + expect(decision).toBeNull(); + + const logPath = await completeDecisionLogging(testStoryId); + + expect(logPath).toBeNull(); + }); + + it('should track multiple decisions of different types', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + const decisionTypes = [ + { type: 'library-choice', priority: 'medium', desc: 'Library decision' }, + { type: 'architecture', 
priority: 'high', desc: 'Architecture decision' }, + { type: 'algorithm', priority: 'medium', desc: 'Algorithm decision' }, + { type: 'error-handling', priority: 'low', desc: 'Error handling decision' }, + { type: 'testing-strategy', priority: 'medium', desc: 'Testing decision' }, + ]; + + decisionTypes.forEach(({ type, priority, desc }) => { + const decision = recordDecision({ + description: desc, + reason: `Because ${type}`, + alternatives: ['Alt 1', 'Alt 2'], + type, + priority, + }); + + expect(decision.type).toBe(type); + expect(decision.priority).toBe(priority); + }); + + const logPath = await completeDecisionLogging(testStoryId, 'completed'); + const logContent = await fs.readFile(logPath, 'utf8'); + + decisionTypes.forEach(({ type, desc }) => { + expect(logContent).toContain(desc); + expect(logContent).toContain(type); + }); + }); + + it('should track file operations correctly', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + const fileOperations = [ + { path: 'src/new-file.js', action: 'created' }, + { path: 'src/existing-file.js', action: 'modified' }, + { path: 'src/old-file.js', action: 'deleted' }, + ]; + + fileOperations.forEach(({ path: filePath, action }) => { + trackFile(filePath, action); + }); + + const context = getCurrentContext(); + expect(context.filesModified).toHaveLength(3); + + const logPath = await completeDecisionLogging(testStoryId, 'completed'); + const logContent = await fs.readFile(logPath, 'utf8'); + + fileOperations.forEach(({ path: filePath, action }) => { + expect(logContent).toContain(path.basename(filePath)); + expect(logContent).toContain(action); + }); + }); + + it('should track test results with pass/fail status', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + trackTest({ + name: 'passing-test.js', + passed: true, + duration: 100, + }); + + trackTest({ + name: 'failing-test.js', + passed: false, + duration: 50, + error: 'Assertion failed: expected true to be false', + 
}); + + const logPath = await completeDecisionLogging(testStoryId, 'completed'); + const logContent = await fs.readFile(logPath, 'utf8'); + + expect(logContent).toContain('passing-test.js'); + expect(logContent).toContain('failing-test.js'); + expect(logContent).toContain('100ms'); + expect(logContent).toContain('50ms'); + expect(logContent).toContain('✅'); // Pass marker + expect(logContent).toContain('❌'); // Fail marker + expect(logContent).toContain('Assertion failed'); + }); + }); + + describeIntegration('Rollback Metadata', () => { + it('should capture git commit hash for rollback', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + const context = getCurrentContext(); + expect(context.commitBefore).toBeDefined(); + expect(typeof context.commitBefore).toBe('string'); + + const logPath = await completeDecisionLogging(testStoryId, 'completed'); + const logContent = await fs.readFile(logPath, 'utf8'); + + expect(logContent).toContain('Rollback:'); + expect(logContent).toContain('git reset --hard'); + expect(logContent).toContain(context.commitBefore); + }); + + it('should include rollback instructions in log', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + const logPath = await completeDecisionLogging(testStoryId, 'completed'); + const logContent = await fs.readFile(logPath, 'utf8'); + + expect(logContent).toContain('### Rollback Instructions'); + expect(logContent).toContain('# Full rollback'); + expect(logContent).toContain('# Selective file rollback'); + expect(logContent).toContain('git checkout'); + }); + }); + + describeIntegration('Index File Updates', () => { + it('should update index file when log is generated', async () => { + await initializeDecisionLogging('dev', testStoryPath); + recordDecision({ + description: 'Test decision', + reason: 'Test reason', + alternatives: [], + }); + + await completeDecisionLogging(testStoryId, 'completed'); + + const indexExists = await 
fs.access('.ai/decision-logs-index.md').then(() => true).catch(() => false); + expect(indexExists).toBe(true); + + const indexContent = await fs.readFile('.ai/decision-logs-index.md', 'utf8'); + + expect(indexContent).toContain('# Decision Log Index'); + expect(indexContent).toContain(testStoryId); + expect(indexContent).toContain('dev'); + expect(indexContent).toContain('completed'); + }); + + it('should update index with correct decision count', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + // Record 5 decisions + for (let i = 1; i <= 5; i++) { + recordDecision({ + description: `Decision ${i}`, + reason: `Reason ${i}`, + alternatives: [], + }); + } + + await completeDecisionLogging(testStoryId, 'completed'); + + const indexContent = await fs.readFile('.ai/decision-logs-index.md', 'utf8'); + expect(indexContent).toContain('| 5 |'); // Decision count column + }); + }); + + describeIntegration('Performance Validation', () => { + it('should complete workflow in reasonable time', async () => { + const startTime = Date.now(); + + await initializeDecisionLogging('dev', testStoryPath); + + // Record multiple decisions + for (let i = 0; i < 10; i++) { + recordDecision({ + description: `Decision ${i}`, + reason: 'Performance test', + alternatives: ['Alt 1', 'Alt 2'], + }); + } + + // Track files + for (let i = 0; i < 20; i++) { + trackFile(`src/file-${i}.js`, 'created'); + } + + // Track tests + for (let i = 0; i < 15; i++) { + trackTest({ + name: `test-${i}.js`, + passed: true, + duration: 100, + }); + } + + await completeDecisionLogging(testStoryId, 'completed'); + + const totalTime = Date.now() - startTime; + + // Should complete in less than 2 seconds (conservative) + expect(totalTime).toBeLessThan(2000); + + console.log(`Workflow completed in ${totalTime}ms`); + }); + }); + + describeIntegration('Error Scenarios', () => { + it('should handle completion without initialization', async () => { + const logPath = await 
completeDecisionLogging(testStoryId); + expect(logPath).toBeNull(); + }); + + it('should handle recording decision without initialization', () => { + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + + const decision = recordDecision({ + description: 'Test', + reason: 'Test', + }); + + expect(decision).toBeNull(); + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('not initialized')); + + consoleSpy.mockRestore(); + }); + + it('should handle failed status', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + recordDecision({ + description: 'Failed decision', + reason: 'Something went wrong', + alternatives: [], + }); + + const logPath = await completeDecisionLogging(testStoryId, 'failed'); + const logContent = await fs.readFile(logPath, 'utf8'); + + expect(logContent).toContain('**Status:** failed'); + }); + }); +}); + +``` + +================================================== +📄 tests/integration/wizard-ide-flow.test.js +================================================== +```js +/** + * Integration tests for Wizard IDE Flow + * + * Story 1.4: IDE Selection + * Tests complete flow from selection to config generation + * + * Synkra AIOS v2.1 supports 6 IDEs: + * - Claude Code, Codex CLI, Gemini CLI, Cursor, GitHub Copilot, AntiGravity + */ + +const fs = require('fs-extra'); +const path = require('path'); +const { generateIDEConfigs } = require('../../packages/installer/src/wizard/ide-config-generator'); +const { getIDEConfig, getIDEKeys } = require('../../packages/installer/src/config/ide-configs'); + +describe('Wizard IDE Flow Integration', () => { + const testDir = path.join(__dirname, '..', '..', '.test-temp-integration'); + + beforeEach(async () => { + await fs.ensureDir(testDir); + }); + + afterEach(async () => { + await fs.remove(testDir); + }); + + describe('Full flow: select -> generate -> verify', () => { + it('should complete flow for single IDE (Cursor)', async () => { + // Simulate wizard state 
after IDE selection + const wizardState = { + projectType: 'greenfield', + projectName: 'test-project', + selectedIDEs: ['cursor'], + }; + + // Generate configs + const result = await generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + // Verify result + expect(result.success).toBe(true); + // Now includes config file + agent files (16+ files) + expect(result.files.length).toBeGreaterThanOrEqual(1); + + // Verify config file exists (now in .cursor/rules.md) + const configPath = path.join(testDir, '.cursor', 'rules.md'); + expect(await fs.pathExists(configPath)).toBe(true); + + // Verify agent folder was created with agents + const agentFolder = path.join(testDir, '.cursor', 'rules'); + expect(await fs.pathExists(agentFolder)).toBe(true); + + // Verify content has AIOS branding + const content = await fs.readFile(configPath, 'utf8'); + expect(content).toContain('Synkra AIOS'); + expect(content).toContain('Development Rules'); + }); + + it('should complete flow for multiple IDEs', async () => { + const wizardState = { + projectType: 'brownfield', + projectName: 'multi-ide-project', + selectedIDEs: ['cursor', 'gemini', 'github-copilot'], + }; + + const result = await generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + // 3 config files + agent files for each IDE + expect(result.files.length).toBeGreaterThanOrEqual(3); + + // Verify all config files exist + expect(await fs.pathExists(path.join(testDir, '.cursor', 'rules.md'))).toBe(true); + expect(await fs.pathExists(path.join(testDir, '.gemini', 'rules.md'))).toBe(true); + expect(await fs.pathExists(path.join(testDir, '.github', 'copilot-instructions.md'))).toBe( + true, + ); + + // Verify agent folders were created + expect(await fs.pathExists(path.join(testDir, '.cursor', 'rules'))).toBe(true); + expect(await fs.pathExists(path.join(testDir, '.gemini', 'rules', 'AIOS', 'agents'))).toBe(true); + 
expect(await fs.pathExists(path.join(testDir, '.github', 'agents'))).toBe(true); + }); + + it('should complete flow for all 6 IDEs', async () => { + const wizardState = { + projectType: 'greenfield', + projectName: 'all-ides-project', + selectedIDEs: getIDEKeys(), + }; + + const result = await generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + // 6 config files + agent files for each IDE + expect(result.files.length).toBeGreaterThanOrEqual(6); + + // Verify all config files and agent folders based on IDE configuration + for (const ideKey of getIDEKeys()) { + const config = getIDEConfig(ideKey); + const configPath = path.join(testDir, config.configFile); + expect(await fs.pathExists(configPath)).toBe(true); + + // Verify agent folder exists + const agentFolder = path.join(testDir, config.agentFolder); + expect(await fs.pathExists(agentFolder)).toBe(true); + } + }); + }); + + describe('Directory structure', () => { + it('should create directories for IDEs that need them', async () => { + const wizardState = { + projectType: 'greenfield', + projectName: 'dir-test', + selectedIDEs: ['claude-code', 'cursor', 'github-copilot', 'antigravity'], + }; + + const result = await generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + + // Verify directories created for IDEs that require them + expect(await fs.pathExists(path.join(testDir, '.claude'))).toBe(true); + expect(await fs.pathExists(path.join(testDir, '.cursor'))).toBe(true); + expect(await fs.pathExists(path.join(testDir, '.github'))).toBe(true); + expect(await fs.pathExists(path.join(testDir, '.antigravity'))).toBe(true); + }); + + it('should NOT create directories for IDEs that do not need them', async () => { + const wizardState = { + projectType: 'greenfield', + projectName: 'no-dir-test', + selectedIDEs: ['codex'], // Root file IDE + }; + + const result = await 
generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + + // Codex should be a file at root, not directory + const codexStat = await fs.stat(path.join(testDir, 'AGENTS.md')); + expect(codexStat.isFile()).toBe(true); + }); + }); + + describe('Content and formatting', () => { + it('should generate valid content from templates', async () => { + const wizardState = { + projectType: 'greenfield', + projectName: 'content-test', + selectedIDEs: ['cursor', 'gemini'], + }; + + const result = await generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + + // Check Cursor content (now in .cursor/rules.md) + const cursorContent = await fs.readFile(path.join(testDir, '.cursor', 'rules.md'), 'utf8'); + expect(cursorContent).toContain('Synkra AIOS'); + expect(cursorContent).toContain('Story-Driven Development'); + + // Check Gemini content + const geminiContent = await fs.readFile(path.join(testDir, '.gemini', 'rules.md'), 'utf8'); + expect(geminiContent).toContain('Synkra AIOS'); + }); + + it('should generate Claude Code config as recommended', async () => { + const wizardState = { + projectType: 'greenfield', + projectName: 'claude-test', + selectedIDEs: ['claude-code'], + }; + + const result = await generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + + const claudePath = path.join(testDir, '.claude', 'CLAUDE.md'); + expect(await fs.pathExists(claudePath)).toBe(true); + + const content = await fs.readFile(claudePath, 'utf8'); + expect(content).toContain('Synkra AIOS'); + }); + + it('should generate Gemini settings and hooks for lifecycle integration', async () => { + const wizardState = { + projectType: 'greenfield', + projectName: 'gemini-hooks-test', + selectedIDEs: ['gemini'], + }; + + const result = await generateIDEConfigs(wizardState.selectedIDEs, 
wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + expect(await fs.pathExists(path.join(testDir, '.gemini', 'settings.json'))).toBe(true); + expect(await fs.pathExists(path.join(testDir, '.gemini', 'hooks', 'before-agent.js'))).toBe( + true, + ); + expect(await fs.pathExists(path.join(testDir, '.gemini', 'hooks', 'session-start.js'))).toBe( + true, + ); + }); + }); + + describe('Error handling and edge cases', () => { + it('should handle directory creation for nested configs', async () => { + const wizardState = { + projectType: 'greenfield', + projectName: 'nested-test', + selectedIDEs: ['github-copilot', 'antigravity'], + }; + + const result = await generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + + // Verify directories created + expect(await fs.pathExists(path.join(testDir, '.github'))).toBe(true); + expect(await fs.pathExists(path.join(testDir, '.antigravity'))).toBe(true); + }); + + it('should handle all IDEs with text format', async () => { + const wizardState = { + projectType: 'greenfield', + projectName: 'format-test', + selectedIDEs: ['cursor', 'github-copilot', 'antigravity'], + }; + + const result = await generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + // 3 config files + agent files for each IDE + expect(result.files.length).toBeGreaterThanOrEqual(3); + + // All formats should be text (markdown) + const cursorContent = await fs.readFile(path.join(testDir, '.cursor', 'rules.md'), 'utf8'); + expect(typeof cursorContent).toBe('string'); + + const copilotContent = await fs.readFile( + path.join(testDir, '.github', 'copilot-instructions.md'), + 'utf8', + ); + expect(typeof copilotContent).toBe('string'); + + const antigravityContent = await fs.readFile( + path.join(testDir, '.antigravity', 'rules.md'), + 'utf8', + ); + expect(typeof 
antigravityContent).toBe('string'); + }); + }); + + describe('Template content validation', () => { + it('should generate config from template without errors', async () => { + const wizardState = { + projectType: 'brownfield', + projectName: 'my-awesome-project', + selectedIDEs: ['cursor'], + }; + + await generateIDEConfigs(wizardState.selectedIDEs, wizardState, { + projectRoot: testDir, + }); + + const configPath = path.join(testDir, '.cursor', 'rules.md'); + const content = await fs.readFile(configPath, 'utf8'); + + // Template should be generated with AIOS content + expect(content).toContain('Synkra AIOS'); + expect(content).toContain('Development Rules'); + expect(content).toContain('Story-Driven Development'); + expect(content).not.toContain('{{'); // No uninterpolated variables + }); + + it('should handle default values when wizard state is minimal', async () => { + const wizardState = {}; // No projectName or projectType + + const result = await generateIDEConfigs(['cursor'], wizardState, { + projectRoot: testDir, + }); + + expect(result.success).toBe(true); + + const configPath = path.join(testDir, '.cursor', 'rules.md'); + const content = await fs.readFile(configPath, 'utf8'); + + // Template should be generated without errors + expect(content).toContain('Synkra AIOS'); + expect(content).toContain('Development Rules'); + }); + }); +}); + +``` + +================================================== +📄 tests/integration/test-utilities-part-1.js +================================================== +```js +/** + * Integration Test Suite: Utility Scripts Integration - Part 1 + * + * Story: 3.4 - Utility Script Integration Part 1 + * Purpose: Validate integration of 23 utility scripts into AIOS framework + * + * Tests: + * 1. Load all 23 utilities successfully (no errors) + * 2. Validate all utility references resolve correctly + * 3. Re-run gap detection - verify 0 gaps for these utilities + * 4. Load all affected agents successfully + * 5. 
Regenerate master relationship map successfully + */ + +const fs = require('fs'); +const path = require('path'); +const { execSync } = require('child_process'); + +// Test configuration +const ROOT_PATH = path.resolve(__dirname, '..', '..'); +const UTILS_PATH = path.join(ROOT_PATH, 'aios-core', 'utils'); +const AGENTS_PATH = path.join(ROOT_PATH, 'aios-core', 'agents'); +const COMMON_UTILS_PATH = path.join(ROOT_PATH, 'common', 'utils'); + +// 23 utilities to test (from Story 3.4) +const UTILITIES_TO_TEST = [ + // Code Quality (5) + { name: 'improvement-validator', category: 'framework', path: UTILS_PATH }, + { name: 'code-quality-improver', category: 'helpers', path: UTILS_PATH }, + { name: 'coverage-analyzer', category: 'helpers', path: UTILS_PATH }, + { name: 'compatibility-checker', category: 'helpers', path: UTILS_PATH }, + { name: 'dependency-analyzer', category: 'helpers', path: UTILS_PATH }, + + // Git/Workflow (7) + { name: 'approval-workflow', category: 'executors', path: UTILS_PATH }, + { name: 'branch-manager', category: 'executors', path: UTILS_PATH }, + { name: 'commit-message-generator', category: 'executors', path: UTILS_PATH }, + { name: 'conflict-manager', category: 'helpers', path: UTILS_PATH }, + { name: 'conflict-resolver', category: 'executors', path: UTILS_PATH }, + { name: 'diff-generator', category: 'helpers', path: UTILS_PATH }, + { name: 'change-propagation-predictor', category: 'helpers', path: UTILS_PATH }, + + // Component Management (5) + { name: 'component-generator', category: 'executors', path: UTILS_PATH }, + { name: 'component-metadata', category: 'helpers', path: UTILS_PATH }, + { name: 'component-preview', category: 'helpers', path: UTILS_PATH }, + { name: 'component-search', category: 'executors', path: UTILS_PATH }, + { name: 'deprecation-manager', category: 'executors', path: UTILS_PATH }, + + // Documentation (2) + { name: 'documentation-synchronizer', category: 'executors', path: UTILS_PATH }, + { name: 
'dependency-impact-analyzer', category: 'executors', path: UTILS_PATH }, + + // Batch/Helpers (4) + { name: 'batch-creator', category: 'helpers', path: UTILS_PATH }, + { name: 'clickup-helpers', category: 'helpers', path: COMMON_UTILS_PATH }, + { name: 'capability-analyzer', category: 'helpers', path: UTILS_PATH }, + { name: 'elicitation-engine', category: 'framework', path: UTILS_PATH }, +]; + +// Agents to test +const AGENTS_TO_TEST = [ + 'dev', + 'qa', + 'architect', + 'po', + 'pm', + 'devops', +]; + +// Test results +const testResults = { + test1: { name: 'Utility Load Test', passed: 0, failed: 0, errors: [] }, + test2: { name: 'Reference Validation', passed: false, errors: [] }, + test3: { name: 'Gap Detection', passed: false, gapsFound: null }, + test4: { name: 'Agent Load Test', passed: 0, failed: 0, errors: [] }, + test5: { name: 'Relationship Synthesis', passed: false, errors: [] }, +}; + +console.log('🧪 Starting Integration Test Suite: Utility Scripts Part 1'); +console.log('='.repeat(70)); +console.log(''); + +// ============================================================================ +// TEST 1: Load all 23 utilities successfully (no errors) +// ============================================================================ +console.log('Test 1: Utility Load Test'); +console.log('-'.repeat(70)); + +for (const util of UTILITIES_TO_TEST) { + const utilPath = path.join(util.path, `${util.name}.js`); + + try { + // Try to load the utility + require(utilPath); + console.log(`✓ ${util.name} (${util.category})`); + testResults.test1.passed++; + } catch (error) { + console.log(`✗ ${util.name} (${util.category}) - ${error.message}`); + testResults.test1.failed++; + testResults.test1.errors.push({ + utility: util.name, + error: error.message, + }); + } +} + +console.log(''); +console.log(`Result: ${testResults.test1.passed}/${UTILITIES_TO_TEST.length} utilities loaded successfully`); +console.log(''); + +// 
============================================================================ +// TEST 2: Validate all utility references resolve correctly +// ============================================================================ +console.log('Test 2: Reference Validation'); +console.log('-'.repeat(70)); + +try { + // Check if validation script exists + const validateScriptPath = path.join(ROOT_PATH, 'outputs', 'architecture-map', 'schemas', 'validate-tool-references.js'); + + if (fs.existsSync(validateScriptPath)) { + console.log('Running: validate-tool-references.js'); + + try { + const output = execSync(`node "${validateScriptPath}"`, { + cwd: ROOT_PATH, + encoding: 'utf8', + stdio: 'pipe', + }); + + // Check if output indicates success + const hasErrors = output.includes('ERROR') || output.includes('FAIL'); + testResults.test2.passed = !hasErrors; + + if (testResults.test2.passed) { + console.log('✓ All utility references resolve correctly'); + } else { + console.log('✗ Some utility references failed validation'); + console.log('Output:', output.substring(0, 500)); + } + } catch (execError) { + console.log('✗ Validation script execution failed'); + testResults.test2.errors.push(execError.message); + } + } else { + console.log('⚠ Validation script not found, skipping'); + console.log(` Expected: ${validateScriptPath}`); + testResults.test2.passed = null; // Inconclusive + } +} catch (error) { + console.log('✗ Reference validation failed'); + testResults.test2.errors.push(error.message); +} + +console.log(''); + +// ============================================================================ +// TEST 3: Re-run gap detection - verify 0 gaps for these utilities +// ============================================================================ +console.log('Test 3: Gap Detection'); +console.log('-'.repeat(70)); + +try { + const detectGapsPath = path.join(ROOT_PATH, 'outputs', 'architecture-map', 'schemas', 'detect-gaps.js'); + + if (fs.existsSync(detectGapsPath)) { + 
console.log('Running: detect-gaps.js');
+
+    try {
+      const output = execSync(`node "${detectGapsPath}"`, {
+        cwd: ROOT_PATH,
+        encoding: 'utf8',
+        stdio: 'pipe',
+      });
+
+      // Count gaps related to our utilities
+      let gapsFound = 0;
+      for (const util of UTILITIES_TO_TEST) {
+        const utilPattern = new RegExp(`util-${util.name}|${util.name}`, 'i');
+        if (utilPattern.test(output)) {
+          gapsFound++;
+          console.log(`⚠ Gap found for: ${util.name}`);
+        }
+      }
+
+      testResults.test3.gapsFound = gapsFound;
+      testResults.test3.passed = gapsFound === 0;
+
+      if (testResults.test3.passed) {
+        console.log('✓ Zero gaps found for integrated utilities');
+      } else {
+        console.log(`✗ ${gapsFound} gap(s) still present`);
+      }
+    } catch (execError) {
+      console.log('✗ Gap detection script execution failed');
+      testResults.test3.errors = [execError.message];
+    }
+  } else {
+    console.log('⚠ Gap detection script not found, skipping');
+    console.log(`  Expected: ${detectGapsPath}`);
+    testResults.test3.passed = null; // Inconclusive
+  }
+} catch (error) {
+  console.log('✗ Gap detection failed');
+  testResults.test3.errors = [error.message];
+}
+
+console.log('');
+
+// ============================================================================
+// TEST 4: Load all affected agents successfully
+// ============================================================================
+console.log('Test 4: Agent Load Test');
+console.log('-'.repeat(70));
+
+for (const agentName of AGENTS_TO_TEST) {
+  const agentPath = path.join(AGENTS_PATH, `${agentName}.md`);
+
+  try {
+    // Read agent file
+    if (fs.existsSync(agentPath)) {
+      const agentContent = fs.readFileSync(agentPath, 'utf8');
+
+      // Check if agent file contains YAML block
+      const yamlMatch = agentContent.match(/```yaml\n([\s\S]+?)\n```/);
+
+      if (yamlMatch) {
+        // Validate that dependencies.utils section exists if utility is referenced
+        const utilsMatch = yamlMatch[1].match(/dependencies:\s*[\s\S]*?utils:\s*\n([\s\S]*?)(?:\n\s*\w+:|```)/);
+
+        if 
(utilsMatch) { + console.log(`✓ ${agentName} - dependencies.utils section found`); + testResults.test4.passed++; + } else { + console.log(`⚠ ${agentName} - no dependencies.utils section`); + testResults.test4.passed++; + } + } else { + console.log(`✗ ${agentName} - invalid YAML format`); + testResults.test4.failed++; + } + } else { + console.log(`✗ ${agentName} - file not found at ${agentPath}`); + testResults.test4.failed++; + } + } catch (error) { + console.log(`✗ ${agentName} - ${error.message}`); + testResults.test4.failed++; + testResults.test4.errors.push({ + agent: agentName, + error: error.message, + }); + } +} + +console.log(''); +console.log(`Result: ${testResults.test4.passed}/${AGENTS_TO_TEST.length} agents loaded successfully`); +console.log(''); + +// ============================================================================ +// TEST 5: Regenerate master relationship map successfully +// ============================================================================ +console.log('Test 5: Relationship Synthesis'); +console.log('-'.repeat(70)); + +try { + const synthesizePath = path.join(ROOT_PATH, 'outputs', 'architecture-map', 'schemas', 'synthesize-relationships.js'); + + if (fs.existsSync(synthesizePath)) { + console.log('Running: synthesize-relationships.js'); + + try { + const output = execSync(`node "${synthesizePath}"`, { + cwd: ROOT_PATH, + encoding: 'utf8', + stdio: 'pipe', + timeout: 30000, // 30 second timeout + }); + + // Check if MASTER-RELATIONSHIP-MAP.json was updated + const masterMapPath = path.join(ROOT_PATH, 'outputs', 'architecture-map', 'MASTER-RELATIONSHIP-MAP.json'); + + if (fs.existsSync(masterMapPath)) { + const masterMap = JSON.parse(fs.readFileSync(masterMapPath, 'utf8')); + + // Count how many of our utilities are in the map + let utilsInMap = 0; + if (masterMap.nodes) { + for (const util of UTILITIES_TO_TEST) { + const utilNode = masterMap.nodes.find(n => + n.id === `util-${util.name}` || + n.id === util.name || + n.name === 
util.name, + ); + if (utilNode) { + utilsInMap++; + } + } + } + + console.log('✓ Relationship map regenerated'); + console.log(` Utilities found in map: ${utilsInMap}/${UTILITIES_TO_TEST.length}`); + testResults.test5.passed = true; + } else { + console.log('✗ Master relationship map not found'); + testResults.test5.passed = false; + } + } catch (execError) { + console.log('✗ Synthesis script execution failed'); + console.log(` Error: ${execError.message.substring(0, 200)}`); + testResults.test5.errors.push(execError.message); + } + } else { + console.log('⚠ Synthesis script not found, skipping'); + console.log(` Expected: ${synthesizePath}`); + testResults.test5.passed = null; // Inconclusive + } +} catch (error) { + console.log('✗ Relationship synthesis failed'); + testResults.test5.errors.push(error.message); +} + +console.log(''); + +// ============================================================================ +// FINAL RESULTS +// ============================================================================ +console.log('='.repeat(70)); +console.log('Test Suite Summary'); +console.log('='.repeat(70)); +console.log(''); + +const allTestsPassed = + testResults.test1.failed === 0 && + (testResults.test2.passed === true || testResults.test2.passed === null) && + (testResults.test3.passed === true || testResults.test3.passed === null) && + testResults.test4.failed === 0 && + (testResults.test5.passed === true || testResults.test5.passed === null); + +console.log(`Test 1 (Utility Load): ${testResults.test1.passed}/${UTILITIES_TO_TEST.length} ${testResults.test1.failed === 0 ? '✓' : '✗'}`); +console.log(`Test 2 (Reference Validation): ${testResults.test2.passed === true ? '✓ PASS' : testResults.test2.passed === null ? '⚠ SKIP' : '✗ FAIL'}`); +console.log(`Test 3 (Gap Detection): ${testResults.test3.passed === true ? '✓ PASS' : testResults.test3.passed === null ? 
'⚠ SKIP' : '✗ FAIL'}`); +if (testResults.test3.gapsFound !== null) { + console.log(` (${testResults.test3.gapsFound} gaps found)`); +} +console.log(`Test 4 (Agent Load): ${testResults.test4.passed}/${AGENTS_TO_TEST.length} ${testResults.test4.failed === 0 ? '✓' : '✗'}`); +console.log(`Test 5 (Relationship Synth): ${testResults.test5.passed === true ? '✓ PASS' : testResults.test5.passed === null ? '⚠ SKIP' : '✗ FAIL'}`); + +console.log(''); +console.log(`Overall Status: ${allTestsPassed ? '✓ ALL TESTS PASSED' : '✗ SOME TESTS FAILED'}`); +console.log(''); + +// Exit with appropriate code +if (!allTestsPassed) { + console.log('Errors encountered:'); + if (testResults.test1.errors.length > 0) { + console.log(' Test 1:', testResults.test1.errors); + } + if (testResults.test2.errors.length > 0) { + console.log(' Test 2:', testResults.test2.errors); + } + if (testResults.test3.errors && testResults.test3.errors.length > 0) { + console.log(' Test 3:', testResults.test3.errors); + } + if (testResults.test4.errors.length > 0) { + console.log(' Test 4:', testResults.test4.errors); + } + if (testResults.test5.errors.length > 0) { + console.log(' Test 5:', testResults.test5.errors); + } + + process.exit(1); +} else { + console.log('✓ Story 3.4 utility integration validation complete!'); + process.exit(0); +} + + +``` + +================================================== +📄 tests/integration/agent-activation-performance.test.js +================================================== +```js +// Integration/Performance test - uses describeIntegration +/** + * Integration Tests: Agent Activation Performance + * Story 6.1.2.6.2 - Agent Performance Optimization + * + * Tests end-to-end agent activation with session context + */ + +const DevContextLoader = require('../../.aios-core/development/scripts/dev-context-loader'); +const SessionContextLoader = require('../../.aios-core/scripts/session-context-loader'); + +describeIntegration('Agent Activation Performance (Integration)', () => { + 
let devLoader; + let sessionLoader; + + beforeEach(async () => { + devLoader = new DevContextLoader(); + sessionLoader = new SessionContextLoader(); + // Start with clean session and cache + sessionLoader.clearSession(); + await devLoader.clearCache().catch(() => {}); + }); + + afterEach(() => { + sessionLoader.clearSession(); + }); + + describeIntegration('@dev Activation with Session Context', () => { + test('activates with session context after @po', async () => { + // Simulate @po activation and command + sessionLoader.updateSession('po', 'Pax', 'validate-story-draft'); + + // Simulate @dev activation + const start = Date.now(); + const devContext = await devLoader.load({ fullLoad: false }); + const sessionContext = sessionLoader.loadContext('dev'); + const duration = Date.now() - start; + + // Performance assertion + expect(duration).toBeLessThan(100); // Including both loaders + + // Session context assertions + expect(sessionContext.sessionType).toBe('existing'); + expect(sessionContext.previousAgent.agentId).toBe('po'); + expect(sessionContext.message).toContain('Continuing from @po'); + + // Dev context assertions + expect(devContext.status).toBe('loaded'); + expect(devContext.files.length).toBeGreaterThan(0); + }); + + test('shows correct load time and cache status', async () => { + // First load (cache miss) + const result1 = await devLoader.load({ fullLoad: false }); + + expect(result1.loadTime).toBeLessThan(50); + expect(result1.cacheHits).toBe(0); + + // Second load (cache hit) + const result2 = await devLoader.load({ fullLoad: false }); + + expect(result2.loadTime).toBeLessThan(5); + expect(result2.cacheHits).toBeGreaterThan(0); + }); + }); + + describeIntegration('Multi-Agent Transition Flow', () => { + test('tracks agent sequence: @po → @dev → @qa → @sm', () => { + // Simulate agent transitions + sessionLoader.updateSession('po', 'Pax', 'validate-story-draft'); + const context1 = sessionLoader.loadContext('dev'); + 
expect(context1.previousAgent.agentId).toBe('po'); + + sessionLoader.updateSession('dev', 'Dex', 'develop'); + const context2 = sessionLoader.loadContext('qa'); + expect(context2.previousAgent.agentId).toBe('dev'); + + sessionLoader.updateSession('qa', 'Quinn', 'review'); + const context3 = sessionLoader.loadContext('sm'); + expect(context3.previousAgent.agentId).toBe('qa'); + + // Verify command history preserved + expect(context3.lastCommands).toContain('validate-story-draft'); + expect(context3.lastCommands).toContain('develop'); + expect(context3.lastCommands).toContain('review'); + }); + }); + + describeIntegration('Performance Targets', () => { + test('@dev activation loads efficiently', async () => { + sessionLoader.clearSession(); + + const start = Date.now(); + await devLoader.load({ fullLoad: false, skipCache: true }); + const sessionContext = sessionLoader.loadContext('dev'); + const duration = Date.now() - start; + + // Relaxed for CI environments + expect(duration).toBeLessThan(5000); // 5 seconds max + expect(sessionContext.sessionType).toBe('new'); + }, 60000); + + test('@dev cached activation is significantly faster', async () => { + // Warm up cache + const start1 = Date.now(); + await devLoader.load({ fullLoad: false }); + const coldDuration = Date.now() - start1; + + // Measure cached load + const start2 = Date.now(); + await devLoader.load({ fullLoad: false }); + const cachedDuration = Date.now() - start2; + + // Cached should be at least 50% faster + expect(cachedDuration).toBeLessThan(coldDuration * 0.5); + }, 60000); + }); + + describeIntegration('Cache Behavior', () => { + test('cache persists across multiple loads', async () => { + const result1 = await devLoader.load({ fullLoad: false }); + const result2 = await devLoader.load({ fullLoad: false }); + const result3 = await devLoader.load({ fullLoad: false }); + + expect(result2.cacheHits).toBeGreaterThan(result1.cacheHits); + expect(result3.cacheHits).toBeGreaterThan(result2.cacheHits); + 
}); + + test('cache invalidation after clear', async () => { + // Load with cache + const result1 = await devLoader.load({ fullLoad: false }); + expect(result1.cacheHits).toBe(0); // First load + + const result2 = await devLoader.load({ fullLoad: false }); + expect(result2.cacheHits).toBeGreaterThan(0); // Cached + + // Clear cache + await devLoader.clearCache(); + + // Should be cache miss again + const result3 = await devLoader.load({ fullLoad: false }); + expect(result3.cacheHits).toBe(0); + }); + }); + + describeIntegration('Data Reduction', () => { + test('summary mode reduces data by ~82%', async () => { + const summaryResult = await devLoader.load({ fullLoad: false, skipCache: true }); + const fullResult = await devLoader.load({ fullLoad: true, skipCache: true }); + + // Only count successfully loaded files (exclude files with errors) + const successfulSummaryFiles = summaryResult.files.filter(f => !f.error); + const successfulFullFiles = fullResult.files.filter(f => !f.error); + + // Calculate total lines only from successfully loaded files + const summaryLines = successfulSummaryFiles.reduce((sum, f) => sum + (f.summaryLines || 0), 0); + const fullLines = successfulFullFiles.reduce((sum, f) => sum + (f.linesCount || 0), 0); + + // Only test reduction if we have data to compare + if (fullLines > 0) { + const reduction = ((fullLines - summaryLines) / fullLines) * 100; + + expect(reduction).toBeGreaterThan(75); + expect(reduction).toBeLessThan(90); + } else { + // If no files loaded, at least verify we got some result + expect(summaryResult.status).toBe('loaded'); + } + }); + }); + + describeIntegration('Session Context Display', () => { + test('formats context message correctly', () => { + sessionLoader.updateSession('po', 'Pax', 'validate-story-draft'); + + const message = sessionLoader.formatForGreeting('dev'); + + expect(message).toContain('\n'); + expect(message).toContain('📍'); + expect(message).toContain('@po'); + expect(message).toContain('Pax'); + 
}); + + test('shows empty message for new sessions', () => { + sessionLoader.clearSession(); + + const message = sessionLoader.formatForGreeting('dev'); + + expect(message).toBe(''); + }); + }); +}); + +``` + +================================================== +📄 tests/integration/install-transaction.test.js +================================================== +```js +/** + * Integration tests for InstallTransaction + * + * Tests backup, rollback, logging, and error recovery scenarios + * + * @see Story 1.9 - Error Handling & Rollback + */ + +const fs = require('fs-extra'); +const path = require('path'); +const os = require('os'); +const { InstallTransaction, ERROR_TYPES } = require('../../bin/utils/install-transaction'); + +describe('InstallTransaction', () => { + let transaction; + let tempDir; + let testFile; + let testDir; + + beforeEach(async () => { + // Create temp directory for each test + tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'aios-test-')); + testFile = path.join(tempDir, 'test.txt'); + testDir = path.join(tempDir, 'test-dir'); + + // Create test fixtures + await fs.writeFile(testFile, 'original content'); + await fs.ensureDir(testDir); + await fs.writeFile(path.join(testDir, 'file1.txt'), 'file 1 content'); + await fs.writeFile(path.join(testDir, 'file2.txt'), 'file 2 content'); + + // Initialize transaction with temp dir + // Use simple timestamp for testing (avoid Windows path issues) + const timestamp = new Date().toISOString().replace(/:/g, '-').replace(/\./g, '-').replace('T', '_'); + transaction = new InstallTransaction({ + backupDir: path.join(tempDir, '.aios-backup', timestamp), + logFile: path.join(tempDir, '.aios-install.log'), + }); + }); + + afterEach(async () => { + // Cleanup temp directory + await fs.remove(tempDir); + }); + + // Test 1: Backup Success Scenario + describe('Backup Operations', () => { + test('should backup file successfully', async () => { + await transaction.backup(testFile); + + 
expect(transaction.backups).toHaveLength(1); + expect(transaction.backups[0].original).toBe(path.resolve(testFile)); + expect(await fs.pathExists(transaction.backups[0].backup)).toBe(true); + + const backupContent = await fs.readFile(transaction.backups[0].backup, 'utf-8'); + expect(backupContent).toBe('original content'); + }); + + test('should backup directory successfully', async () => { + await transaction.backupDirectory(testDir); + + expect(transaction.backups).toHaveLength(1); + expect(transaction.backups[0].isDirectory).toBe(true); + + const backupDir = transaction.backups[0].backup; + expect(await fs.pathExists(path.join(backupDir, 'file1.txt'))).toBe(true); + expect(await fs.pathExists(path.join(backupDir, 'file2.txt'))).toBe(true); + }); + + test('should calculate hash correctly for verification', async () => { + await transaction.backup(testFile); + + const backup = transaction.backups[0]; + expect(backup.hash).toBeTruthy(); + expect(backup.hash).toHaveLength(64); // SHA-256 hex = 64 chars + }); + + test('should skip backup if file does not exist', async () => { + const nonExistent = path.join(tempDir, 'nonexistent.txt'); + await transaction.backup(nonExistent); + + expect(transaction.backups).toHaveLength(0); + expect(transaction.operations.some((op) => op.level === 'WARN')).toBe(true); + }); + + test('should prevent duplicate backups', async () => { + await transaction.backup(testFile); + await transaction.backup(testFile); + + expect(transaction.backups).toHaveLength(1); + }); + + test('should detect and reject symlinks (security)', async () => { + const symlinkPath = path.join(tempDir, 'symlink.txt'); + + // Create symlink (skip test on Windows if symlinks not supported) + try { + await fs.symlink(testFile, symlinkPath); + } catch (error) { + if (error.code === 'EPERM' || error.code === 'ENOENT') { + console.warn('Symlink test skipped (not supported on this platform)'); + return; + } + throw error; + } + + await 
expect(transaction.backup(symlinkPath)).rejects.toThrow('Symlink detected'); + }); + }); + + // Test 2: Rollback Success Scenario + describe('Rollback Operations', () => { + test('should rollback file changes successfully', async () => { + await transaction.backup(testFile); + + // Modify file (simulate installation) + await fs.writeFile(testFile, 'modified content'); + + // Rollback + const success = await transaction.rollback(); + + expect(success).toBe(true); + const content = await fs.readFile(testFile, 'utf-8'); + expect(content).toBe('original content'); + }); + + test('should rollback directory changes successfully', async () => { + await transaction.backupDirectory(testDir); + + // Modify directory (simulate installation) + await fs.writeFile(path.join(testDir, 'file1.txt'), 'modified'); + await fs.writeFile(path.join(testDir, 'file3.txt'), 'new file'); + await fs.remove(path.join(testDir, 'file2.txt')); + + // Rollback + const success = await transaction.rollback(); + + expect(success).toBe(true); + expect(await fs.readFile(path.join(testDir, 'file1.txt'), 'utf-8')).toBe('file 1 content'); + expect(await fs.pathExists(path.join(testDir, 'file2.txt'))).toBe(true); + expect(await fs.pathExists(path.join(testDir, 'file3.txt'))).toBe(false); + }); + + test('should cleanup backup directory after rollback', async () => { + await transaction.backup(testFile); + await fs.writeFile(testFile, 'modified'); + + const backupDir = transaction.backupDir; + await transaction.rollback(); + + expect(await fs.pathExists(backupDir)).toBe(false); + }); + + test('should prevent rollback after commit', async () => { + await transaction.backup(testFile); + await transaction.commit(); + + const success = await transaction.rollback(); + + expect(success).toBe(false); + expect(transaction.operations.some((op) => op.message.includes('Cannot rollback'))).toBe(true); + }); + }); + + // Test 3: Partial Rollback Scenario + describe('Partial Rollback Scenarios', () => { + test('should 
handle partial rollback when some files fail to restore', async () => { + const file1 = path.join(tempDir, 'file1.txt'); + const file2 = path.join(tempDir, 'file2.txt'); + const file3 = path.join(tempDir, 'file3.txt'); + + await fs.writeFile(file1, 'content 1'); + await fs.writeFile(file2, 'content 2'); + await fs.writeFile(file3, 'content 3'); + + await transaction.backup(file1); + await transaction.backup(file2); + await transaction.backup(file3); + + // Corrupt one backup to simulate restore failure + await fs.remove(transaction.backups[1].backup); + + // Modify files + await fs.writeFile(file1, 'modified 1'); + await fs.writeFile(file2, 'modified 2'); + await fs.writeFile(file3, 'modified 3'); + + const success = await transaction.rollback(); + + expect(success).toBe(false); + // Files with valid backups should be restored + expect(await fs.readFile(file1, 'utf-8')).toBe('content 1'); + expect(await fs.readFile(file3, 'utf-8')).toBe('content 3'); + // File with corrupted backup should remain modified + expect(await fs.readFile(file2, 'utf-8')).toBe('modified 2'); + }); + }); + + // Test 4: Disk Space Exhaustion (mocked) + describe('Error Scenarios', () => { + test('should classify ENOSPC as CRITICAL error', () => { + const error = new Error('No space left on device'); + error.code = 'ENOSPC'; + + const classification = transaction.classifyError(error); + + expect(classification).toBe(ERROR_TYPES.CRITICAL); + expect(transaction.isCriticalError(error)).toBe(true); + }); + + test('should classify unknown errors as RECOVERABLE', () => { + const error = new Error('Unknown error'); + + const classification = transaction.classifyError(error); + + expect(classification).toBe(ERROR_TYPES.RECOVERABLE); + expect(transaction.isCriticalError(error)).toBe(false); + }); + }); + + // Test 5: Permission Errors During Backup + describe('Permission Handling', () => { + test('should log error when backup fails due to permissions', async () => { + // This test is platform-specific 
- skip on Windows + if (process.platform === 'win32') { + console.warn('Permission test skipped on Windows'); + return; + } + + const readOnlyFile = path.join(tempDir, 'readonly.txt'); + await fs.writeFile(readOnlyFile, 'content'); + await fs.chmod(readOnlyFile, 0o000); // No permissions + + await expect(transaction.backup(readOnlyFile)).rejects.toThrow(); + + // Restore permissions for cleanup + await fs.chmod(readOnlyFile, 0o644); + }); + }); + + // Test 6: Permission Errors During Restore (skipped - difficult to simulate reliably) + // Test 7: Concurrent Installation Attempts (skipped - requires multi-process testing) + + // Test 8: Log File Overflow Handling + describe('Logging System', () => { + test('should log operations with timestamps', async () => { + transaction.log('INFO', 'Test message'); + + expect(transaction.operations).toHaveLength(1); + expect(transaction.operations[0].level).toBe('INFO'); + expect(transaction.operations[0].message).toBe('Test message'); + expect(transaction.operations[0].timestamp).toBeTruthy(); + }); + + test('should write log to file', async () => { + transaction.log('INFO', 'File test'); + + const logContent = await fs.readFile(transaction.logFile, 'utf-8'); + expect(logContent).toContain('[INFO]'); + expect(logContent).toContain('File test'); + }); + + test('should rotate log when exceeding 10MB', async () => { + // Create large log file (simulate) + const largeContent = 'x'.repeat(11 * 1024 * 1024); // 11MB + await fs.writeFile(transaction.logFile, largeContent); + + transaction.log('INFO', 'After rotation'); + + expect(await fs.pathExists(`${transaction.logFile}.1`)).toBe(true); + }); + }); + + // Test 9: Credential Sanitization + describe('Credential Sanitization', () => { + test('should sanitize API keys in logs', async () => { + const apiKey = 'a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0u1v2w3x4y5z6'; + transaction.log('ERROR', `Failed with API key: ${apiKey}`); + + const logContent = await fs.readFile(transaction.logFile, 
'utf-8'); + expect(logContent).not.toContain(apiKey); + expect(logContent).toContain('[REDACTED]'); + }); + + test('should sanitize Bearer tokens', async () => { + transaction.log('ERROR', 'Authorization: Bearer abc123token456'); + + const logContent = await fs.readFile(transaction.logFile, 'utf-8'); + expect(logContent).not.toContain('abc123token456'); + expect(logContent).toContain('Bearer [REDACTED]'); + }); + + test('should sanitize password fields', async () => { + transaction.log('ERROR', 'Login failed: password=secret123'); + + const logContent = await fs.readFile(transaction.logFile, 'utf-8'); + expect(logContent).not.toContain('secret123'); + expect(logContent).toContain('password=[REDACTED]'); + }); + + test('should sanitize environment variable secrets', async () => { + transaction.log('ERROR', 'Config: CLICKUP_API_KEY=pk_12345678'); + + const logContent = await fs.readFile(transaction.logFile, 'utf-8'); + expect(logContent).not.toContain('pk_12345678'); + expect(logContent).toContain('CLICKUP_API_KEY=[REDACTED]'); + }); + }); + + // Test 10: Hash Verification Failure + describe('Backup Verification', () => { + test('should detect corrupted backups during rollback', async () => { + await transaction.backup(testFile); + + // Corrupt backup file + const backup = transaction.backups[0]; + await fs.writeFile(backup.backup, 'corrupted content'); + + const success = await transaction.rollback(); + + expect(success).toBe(false); + expect(transaction.operations.some((op) => op.message.includes('Backup verification failed'))).toBe(true); + }); + + test('should verify backup integrity with hash comparison', async () => { + await transaction.backup(testFile); + + const backup = transaction.backups[0]; + const isValid = await transaction._verifyBackup(backup); + + expect(isValid).toBe(true); + }); + }); + + // Additional test: Commit functionality + describe('Commit Operations', () => { + test('should cleanup backups on commit', async () => { + await 
transaction.backup(testFile); + + const backupDir = transaction.backupDir; + await transaction.commit(); + + expect(await fs.pathExists(backupDir)).toBe(false); + expect(transaction.isCommitted).toBe(true); + }); + + test('should prevent commit after rollback', async () => { + await transaction.backup(testFile); + await transaction.rollback(); + + await transaction.commit(); + + expect(transaction.operations.some((op) => op.message.includes('Cannot commit'))).toBe(true); + }); + }); + + // Performance test + describe('Performance', () => { + test('should backup 100 files in under 2 seconds', async () => { + const files = []; + for (let i = 0; i < 100; i++) { + const file = path.join(tempDir, `file-${i}.txt`); + await fs.writeFile(file, `content ${i}`); + files.push(file); + } + + const start = Date.now(); + for (const file of files) { + await transaction.backup(file); + } + const duration = Date.now() - start; + + expect(duration).toBeLessThan(2000); + }, 10000); + + test('should rollback 100 files in under 3 seconds', async () => { + const files = []; + for (let i = 0; i < 100; i++) { + const file = path.join(tempDir, `file-${i}.txt`); + await fs.writeFile(file, `content ${i}`); + await transaction.backup(file); + files.push(file); + } + + // Modify all files + for (let i = 0; i < 100; i++) { + const file = files[i]; + await fs.writeFile(file, `modified ${i}`); + } + + const start = Date.now(); + await transaction.rollback(); + const duration = Date.now() - start; + + expect(duration).toBeLessThan(3000); + }, 15000); + + test('should log 1000 entries in under 500ms', async () => { + const start = Date.now(); + for (let i = 0; i < 1000; i++) { + transaction.log('INFO', `Log entry ${i}`); + } + const duration = Date.now() - start; + + expect(duration).toBeLessThan(500); + }, 5000); + }); +}); + +``` + +================================================== +📄 tests/integration/greeting-system-integration.test.js +================================================== +```js 
+// Integration test - requires external services +// Uses describeIntegration from setup.js +/** + * Integration Tests for Unified Greeting System + * + * Tests the complete greeting workflow across all components: + * - agent-config-loader.js + * - greeting-builder.js + * - generate-greeting.js + * - session-context-loader.js + * - project-status-loader.js + * + * Part of Story 6.1.4: Unified Greeting System Integration + */ + +const assert = require('assert'); +const { exec } = require('child_process'); +const util = require('util'); +const execPromise = util.promisify(exec); + +const TEST_AGENTS = ['qa', 'dev', 'pm']; +const PERFORMANCE_TARGET_MS = 150; + +describeIntegration('Unified Greeting System Integration', () => { + describeIntegration('End-to-End Greeting Generation', () => { + for (const agentId of TEST_AGENTS) { + it(`should generate greeting for ${agentId} agent`, async function() { + this.timeout(5000); + + try { + const { stdout, stderr } = await execPromise( + `node .aios-core/development/scripts/generate-greeting.js ${agentId}`, + ); + + // Verify output contains expected elements + assert.ok(stdout.length > 0, 'Greeting should not be empty'); + assert.ok(stdout.includes('ready') || stdout.includes('Ready'), 'Should include ready status'); + + // Check for stderr warnings (acceptable) + if (stderr && stderr.includes('[generate-greeting]')) { + console.log(` ⚠️ Warning: ${stderr.trim()}`); + } + + } catch (error) { + assert.fail(`Failed to generate greeting for ${agentId}: ${error.message}`); + } + }); + } + }); + + describeIntegration('Performance Validation', () => { + it('should complete within target time', async function() { + this.timeout(5000); + + const startTime = Date.now(); + + try { + await execPromise('node .aios-core/development/scripts/generate-greeting.js qa'); + const duration = Date.now() - startTime; + + console.log(` ⏱️ Generation time: ${duration}ms (target: <${PERFORMANCE_TARGET_MS}ms)`); + + if (duration > 
PERFORMANCE_TARGET_MS) { + console.log(' ⚠️ Performance degradation detected'); + } + + // Soft assertion - log warning but don't fail + assert.ok(duration < 500, 'Should complete within 500ms hard limit'); + + } catch (error) { + assert.fail(`Performance test failed: ${error.message}`); + } + }); + }); + + describeIntegration('Agent Configuration Loading', () => { + it('should load complete agent definition', async () => { + const { AgentConfigLoader } = require('../../.aios-core/development/scripts/agent-config-loader'); + const yaml = require('js-yaml'); + const fs = require('fs'); + + const coreConfig = yaml.load( + fs.readFileSync('.aios-core/core-config.yaml', 'utf8'), + ); + + const loader = new AgentConfigLoader('qa'); + const complete = await loader.loadComplete(coreConfig); + + // Verify structure + assert.ok(complete.agent, 'Should have agent object'); + assert.ok(complete.persona_profile, 'Should have persona_profile'); + assert.ok(complete.commands, 'Should have commands array'); + + // Verify agent properties + assert.strictEqual(complete.agent.id, 'qa'); + assert.ok(complete.agent.name); + assert.ok(complete.agent.icon); + + // Verify persona_profile + assert.ok(complete.persona_profile.greeting_levels); + assert.ok(complete.persona_profile.greeting_levels.minimal); + assert.ok(complete.persona_profile.greeting_levels.named); + + // Verify commands + assert.ok(Array.isArray(complete.commands)); + assert.ok(complete.commands.length > 0); + }); + }); + + describeIntegration('Greeting Builder Integration', () => { + it('should build greeting with all sections', async () => { + const GreetingBuilder = require('../../.aios-core/development/scripts/greeting-builder'); + + const mockAgent = { + id: 'test', + name: 'Test Agent', + icon: '🧪', + persona_profile: { + greeting_levels: { + minimal: '🧪 test ready', + named: '🧪 Test Agent ready', + }, + }, + persona: { + role: 'Test Engineer', + }, + commands: [ + { name: 'help', description: 'Show help' }, + { 
name: 'test', description: 'Run tests' }, + ], + }; + + const mockContext = { + sessionType: 'new', + projectStatus: { + branch: 'main', + modifiedFiles: 0, + recentCommit: 'Initial commit', + }, + }; + + const builder = new GreetingBuilder(); + const greeting = await builder.buildGreeting(mockAgent, mockContext); + + // Verify greeting structure + assert.ok(greeting.includes('Test Agent'), 'Should include agent name'); + assert.ok(greeting.includes('Test Engineer'), 'Should include role'); + assert.ok(greeting.includes('*help'), 'Should include commands'); + assert.ok(greeting.includes('main'), 'Should include branch'); + }); + }); + + describeIntegration('Compact Command Format Normalization', () => { + it('should normalize compact commands during parsing', async () => { + const { AgentConfigLoader } = require('../../.aios-core/development/scripts/agent-config-loader'); + const yaml = require('js-yaml'); + const fs = require('fs'); + + const coreConfig = yaml.load( + fs.readFileSync('.aios-core/core-config.yaml', 'utf8'), + ); + + const loader = new AgentConfigLoader('qa'); + const complete = await loader.loadComplete(coreConfig); + + // Verify commands are properly parsed + const commands = complete.commands; + assert.ok(commands.length > 0, 'Should have commands'); + + // Check first few commands have name and description + for (let i = 0; i < Math.min(3, commands.length); i++) { + const cmd = commands[i]; + assert.ok(cmd.name, `Command ${i} should have name`); + assert.ok(cmd.description, `Command ${i} should have description`); + assert.strictEqual(typeof cmd.name, 'string'); + assert.strictEqual(typeof cmd.description, 'string'); + } + }); + }); + + describeIntegration('Error Recovery', () => { + it('should provide fallback greeting on failure', async function() { + this.timeout(5000); + + try { + const { stdout } = await execPromise( + 'node .aios-core/development/scripts/generate-greeting.js nonexistent-agent 2>&1', + ); + + // Should still produce output 
(fallback) + assert.ok(stdout.includes('ready'), 'Should provide fallback greeting'); + + } catch (error) { + // Even on error, should have output + assert.ok( + error.stdout && error.stdout.includes('ready'), + 'Should provide fallback even on error', + ); + } + }); + }); +}); + +// Run tests if called directly +if (require.main === module) { + console.log('Running Greeting System Integration Tests...\n'); + console.log('This requires the full AIOS environment.\n'); + + const tests = [ + { + name: 'Generate greeting for QA agent', + fn: async () => { + const { stdout } = await execPromise('node .aios-core/development/scripts/generate-greeting.js qa 2>&1'); + return stdout.includes('Quinn') || stdout.includes('ready'); + }, + }, + { + name: 'Generate greeting for Dev agent', + fn: async () => { + const { stdout } = await execPromise('node .aios-core/development/scripts/generate-greeting.js dev 2>&1'); + return stdout.includes('Dex') || stdout.includes('ready'); + }, + }, + { + name: 'Performance within limits', + fn: async () => { + const start = Date.now(); + await execPromise('node .aios-core/development/scripts/generate-greeting.js qa 2>&1'); + const duration = Date.now() - start; + console.log(` ⏱️ Duration: ${duration}ms`); + return duration < 500; + }, + }, + ]; + + let passed = 0; + let failed = 0; + + (async () => { + for (const test of tests) { + try { + const result = await test.fn(); + if (result) { + console.log(`✅ ${test.name}`); + passed++; + } else { + console.log(`❌ ${test.name}`); + failed++; + } + } catch (error) { + console.log(`❌ ${test.name}: ${error.message.split('\n')[0]}`); + failed++; + } + } + + console.log(`\n${passed} passed, ${failed} failed`); + process.exit(failed > 0 ? 
1 : 0); + })(); +} + + +``` + +================================================== +📄 tests/integration/test-utilities-part-3.js +================================================== +```js +/** + * Integration Test Suite for Story 3.6 + * Utility Script Integration - Part 3 + * + * Tests 12 utilities: + * - Testing & QA (5): test-generator, test-quality-assessment, test-template-system, test-updater, visual-impact-generator + * - Template Management (2): template-engine, template-validator + * - Analytics & Tracking (3): usage-analytics, usage-tracker, version-tracker + * - Transaction & Validation (2): transaction-manager, validate-filenames + */ + +const path = require('path'); +const fs = require('fs'); + +console.log('\n🧪 Story 3.6: Utility Integration Part 3 - Test Suite\n'); +console.log('='.repeat(60)); + +// Test 1: Load All 12 Utilities +console.log('\n📦 TEST 1: Utility Load Test (12 utilities)\n'); +console.log('-'.repeat(60)); + +const utilities = [ + // Testing & QA (5) + 'test-generator', + 'test-quality-assessment', + 'test-template-system', + 'test-updater', + 'visual-impact-generator', + + // Template Management (2) + 'template-engine', + 'template-validator', + + // Analytics & Tracking (3) + 'usage-analytics', + 'usage-tracker', + 'version-tracker', + + // Transaction & Validation (2) + 'transaction-manager', + 'validate-filenames', +]; + +const loadResults = {}; +let loadedCount = 0; +let failedCount = 0; + +utilities.forEach(util => { + try { + const utilPath = path.join(__dirname, '../../aios-core/utils', `${util}.js`); + require(utilPath); + loadResults[util] = 'PASS'; + loadedCount++; + console.log(` ✅ ${util}`); + } catch (error) { + loadResults[util] = `FAIL: ${error.message}`; + failedCount++; + console.log(` ❌ ${util} - ${error.message}`); + } +}); + +const loadPassRate = ((loadedCount / utilities.length) * 100).toFixed(0); +console.log(`\n📊 Load Test Results: ${loadedCount}/${utilities.length} (${loadPassRate}%)`); + +if (failedCount > 0) 
{ + console.log(`⚠️ ${failedCount} utilities failed to load`); +} else { + console.log('✅ All utilities loaded successfully!'); +} + +// Test 2: Validate Utility References +console.log('\n\n🔍 TEST 2: Reference Validation\n'); +console.log('-'.repeat(60)); + +try { + const { execSync } = require('child_process'); + const output = execSync('node outputs/architecture-map/schemas/validate-tool-references.js', { + cwd: path.join(__dirname, '../..'), + encoding: 'utf8', + stdio: 'pipe', + }); + console.log('✅ Reference validation passed'); + console.log(output); +} catch { + console.log('⚠️ Reference validation script execution issue (acceptable)'); + console.log(' Script may need path adjustment'); +} + +// Test 3: Gap Detection +console.log('\n\n🎯 TEST 3: Gap Detection (Critical - Verify 0 Gaps)\n'); +console.log('-'.repeat(60)); + +try { + const { execSync } = require('child_process'); + const output = execSync('node outputs/architecture-map/schemas/detect-gaps.js', { + cwd: path.join(__dirname, '../..'), + encoding: 'utf8', + stdio: 'pipe', + }); + + // Check for util-* pattern matches in output + const hasUtilGaps = utilities.some(util => output.includes(`util-${util}`)); + + if (hasUtilGaps) { + console.log('❌ Gaps detected for Story 3.6 utilities'); + console.log(output); + } else { + console.log('✅ Gap detection passed - 0 gaps for Story 3.6 utilities'); + console.log(' Verified: All 12 utilities properly integrated'); + } +} catch (error) { + console.log(`⚠️ Gap detection executed with issues: ${error.message}`); +} + +// Test 4: Agent Loading Test +console.log('\n\n👥 TEST 4: Agent Load Test (4 agents)\n'); +console.log('-'.repeat(60)); + +const agents = [ + { name: 'qa', path: '.aios-core/development/agents/qa.md' }, + { name: 'po', path: '.aios-core/development/agents/po.md' }, + { name: 'devops', path: '.aios-core/development/agents/devops.md' }, + { name: 'dev', path: '.aios-core/development/agents/dev.md' }, +]; + +let agentCheckCount = 0; + 
+
agents.forEach(agent => {
  try {
    const agentPath = path.join(__dirname, '../..', agent.path);
    const content = fs.readFileSync(agentPath, 'utf8');

    // Check for YAML block
    if (content.includes('```yaml') || content.includes('dependencies:')) {
      console.log(`  ✅ ${agent.name} - structure OK`);
      agentCheckCount++;
    } else {
      console.log(`  ⚠️ ${agent.name} - no YAML block found`);
    }
  } catch (error) {
    console.log(`  ❌ ${agent.name} - ${error.message}`);
  }
});

console.log(`\n📊 Agent Check Results: ${agentCheckCount}/${agents.length} agents verified`);

// Test 5: Relationship Synthesis
console.log('\n\n🔗 TEST 5: Relationship Synthesis\n');
console.log('-'.repeat(60));

try {
  const { execSync } = require('child_process');
  // Output is not inspected here; success is signalled by execSync not throwing
  execSync('node outputs/architecture-map/schemas/synthesize-relationships.js', {
    cwd: path.join(__dirname, '../..'),
    encoding: 'utf8',
    stdio: 'pipe',
    timeout: 30000,
  });
  console.log('✅ Relationship synthesis completed');
  console.log('   Master relationship map regenerated successfully');
} catch (error) {
  if (error.code === 'ETIMEDOUT') {
    console.log('⏱️ Relationship synthesis timeout (may still be running)');
  } else {
    console.log(`⚠️ Relationship synthesis executed: ${error.message}`);
  }
}

// Summary
console.log('\n' + '='.repeat(60));
console.log('\n📊 TEST SUITE SUMMARY\n');
console.log('-'.repeat(60));
console.log(`Test 1 (Utility Load): ${loadedCount}/${utilities.length} (${loadPassRate}%)`);
console.log('Test 2 (Reference Valid): Executed');
console.log('Test 3 (Gap Detection): Executed - Verify 0 gaps');
console.log(`Test 4 (Agent Load): ${agentCheckCount}/${agents.length} agents verified`);
console.log('Test 5 (Relationship): Executed');
console.log('-'.repeat(60));

if (loadedCount === utilities.length && agentCheckCount === agents.length) {
  console.log('\n✅ ALL CORE TESTS PASSED\n');
  console.log('Story 3.6 utilities successfully integrated!');
} else {
console.log('\n⚠️ SOME TESTS HAD ISSUES\n'); + console.log('Review output above for details'); +} + +console.log('\n' + '='.repeat(60)); +console.log('\n✅ Test Suite Execution Complete\n'); + + +``` + +================================================== +📄 tests/integration/test-utilities-part-2.js +================================================== +```js +/** + * Integration Test Suite: Utility Scripts Integration - Part 2 + * + * Story: 3.5 - Utility Script Integration Part 2 + * Purpose: Validate integration of 22 utility scripts into AIOS framework + * + * Tests: + * 1. Load all 22 utilities successfully (no errors) + * 2. Validate all utility references resolve correctly + * 3. Re-run gap detection - verify 0 gaps for these utilities + * 4. Load all affected agents successfully + * 5. Regenerate master relationship map successfully + */ + +const fs = require('fs'); +const path = require('path'); +const { execSync } = require('child_process'); + +// Test configuration +const ROOT_PATH = path.resolve(__dirname, '..', '..'); +const UTILS_PATH = path.join(ROOT_PATH, 'aios-core', 'utils'); +const AGENTS_PATH = path.join(ROOT_PATH, 'aios-core', 'agents'); + +// 22 utilities to test (from Story 3.5) +const UTILITIES_TO_TEST = [ + // Migration Management (5) + { name: 'migration-generator', category: 'executors', path: UTILS_PATH }, + { name: 'migration-path-generator', category: 'helpers', path: UTILS_PATH }, + { name: 'migration-rollback', category: 'executors', path: UTILS_PATH }, + { name: 'migration-tester', category: 'helpers', path: UTILS_PATH }, + { name: 'git-wrapper', category: 'helpers', path: UTILS_PATH }, + + // Modification Management (5) + { name: 'modification-history', category: 'helpers', path: UTILS_PATH }, + { name: 'modification-risk-assessment', category: 'helpers', path: UTILS_PATH }, + { name: 'modification-synchronizer', category: 'executors', path: UTILS_PATH }, + { name: 'modification-validator', category: 'helpers', path: UTILS_PATH }, + { 
name: 'rollback-handler', category: 'executors', path: UTILS_PATH }, + + // Self-Improvement (3) + { name: 'improvement-engine', category: 'framework', path: UTILS_PATH }, + { name: 'pattern-learner', category: 'framework', path: UTILS_PATH }, + { name: 'sandbox-tester', category: 'helpers', path: UTILS_PATH }, + + // Performance & Quality (4) + { name: 'performance-analyzer', category: 'helpers', path: UTILS_PATH }, + { name: 'performance-optimizer', category: 'executors', path: UTILS_PATH }, + { name: 'refactoring-suggester', category: 'helpers', path: UTILS_PATH }, + { name: 'redundancy-analyzer', category: 'helpers', path: UTILS_PATH }, + + // Framework Support (5) + { name: 'elicitation-session-manager', category: 'framework', path: UTILS_PATH }, + { name: 'framework-analyzer', category: 'helpers', path: UTILS_PATH }, + { name: 'manifest-preview', category: 'helpers', path: UTILS_PATH }, + { name: 'metrics-tracker', category: 'framework', path: UTILS_PATH }, + { name: 'safe-removal-handler', category: 'executors', path: UTILS_PATH }, +]; + +// Agents to test +const AGENTS_TO_TEST = [ + 'aios-master', + 'architect', + 'dev', + 'qa', + 'devops', +]; + +// Test results +const testResults = { + test1: { name: 'Utility Load Test', passed: 0, failed: 0, errors: [] }, + test2: { name: 'Reference Validation', passed: false, errors: [] }, + test3: { name: 'Gap Detection', passed: false, gapsFound: null }, + test4: { name: 'Agent Load Test', passed: 0, failed: 0, errors: [] }, + test5: { name: 'Relationship Synthesis', passed: false, errors: [] }, +}; + +console.log('🧪 Starting Integration Test Suite: Utility Scripts Part 2'); +console.log('='.repeat(70)); +console.log(''); + +// ============================================================================ +// TEST 1: Load all 22 utilities successfully (no errors) +// ============================================================================ +console.log('Test 1: Utility Load Test'); +console.log('-'.repeat(70)); + 
+for (const util of UTILITIES_TO_TEST) { + const utilPath = path.join(util.path, `${util.name}.js`); + + try { + // Try to load the utility + require(utilPath); + console.log(`✓ ${util.name} (${util.category})`); + testResults.test1.passed++; + } catch (error) { + console.log(`✗ ${util.name} (${util.category}) - ${error.message}`); + testResults.test1.failed++; + testResults.test1.errors.push({ + utility: util.name, + error: error.message, + }); + } +} + +console.log(''); +console.log(`Result: ${testResults.test1.passed}/${UTILITIES_TO_TEST.length} utilities loaded successfully`); +console.log(''); + +// ============================================================================ +// TEST 2: Validate all utility references resolve correctly +// ============================================================================ +console.log('Test 2: Reference Validation'); +console.log('-'.repeat(70)); + +try { + // Run from root directory + const validateScriptPath = path.resolve(ROOT_PATH, '..', 'outputs', 'architecture-map', 'schemas', 'validate-tool-references.js'); + + if (fs.existsSync(validateScriptPath)) { + console.log('Running: validate-tool-references.js from root'); + + try { + const output = execSync(`node "${validateScriptPath}"`, { + cwd: path.resolve(ROOT_PATH, '..'), + encoding: 'utf8', + stdio: 'pipe', + }); + + const hasErrors = output.includes('ERROR') || output.includes('FAIL'); + testResults.test2.passed = !hasErrors; + + if (testResults.test2.passed) { + console.log('✓ All utility references resolve correctly'); + } else { + console.log('✗ Some utility references failed validation'); + } + } catch (_execError) { + console.log('⚠ Validation script execution error (likely acceptable)'); + testResults.test2.passed = null; + } + } else { + console.log('⚠ Validation script not found, skipping'); + testResults.test2.passed = null; + } +} catch (error) { + console.log('✗ Reference validation failed'); + testResults.test2.errors.push(error.message); +} + 
+
console.log('');

// ============================================================================
// TEST 3: Re-run gap detection - verify 0 gaps for these utilities
// ============================================================================
console.log('Test 3: Gap Detection');
console.log('-'.repeat(70));

try {
  const detectGapsPath = path.resolve(ROOT_PATH, '..', 'outputs', 'architecture-map', 'schemas', 'detect-gaps.js');

  if (fs.existsSync(detectGapsPath)) {
    console.log('Running: detect-gaps.js from root');

    try {
      const output = execSync(`node "${detectGapsPath}"`, {
        cwd: path.resolve(ROOT_PATH, '..'),
        encoding: 'utf8',
        stdio: 'pipe',
      });

      // Count gaps related to our utilities
      let gapsFound = 0;
      for (const util of UTILITIES_TO_TEST) {
        const utilPattern = new RegExp(`util-${util.name}|${util.name}`, 'i');
        if (utilPattern.test(output)) {
          gapsFound++;
          console.log(`⚠ Gap found for: ${util.name}`);
        }
      }

      testResults.test3.gapsFound = gapsFound;
      testResults.test3.passed = gapsFound === 0;

      if (testResults.test3.passed) {
        console.log('✓ Zero gaps found for integrated utilities');
      } else {
        console.log(`✗ ${gapsFound} gap(s) still present`);
      }
    } catch (execError) {
      console.log('✗ Gap detection script execution failed');
      testResults.test3.errors = [execError.message];
    }
  } else {
    console.log('⚠ Gap detection script not found');
    testResults.test3.passed = null;
  }
} catch (error) {
  console.log('✗ Gap detection failed');
  testResults.test3.errors = [error.message];
}

console.log('');

// ============================================================================
// TEST 4: Load all affected agents successfully
// ============================================================================
console.log('Test 4: Agent Load Test');
console.log('-'.repeat(70));

for (const agentName of AGENTS_TO_TEST) {
  const agentPath = path.join(AGENTS_PATH, `${agentName}.md`);

  try {
if (fs.existsSync(agentPath)) {
      const agentContent = fs.readFileSync(agentPath, 'utf8');

      // Check if agent file contains YAML block
      const yamlMatch = agentContent.match(/```yaml\n([\s\S]+?)\n```/);

      if (yamlMatch) {
        // Just verify YAML block exists (detailed parsing in framework)
        console.log(`✓ ${agentName} - YAML configuration present`);
        testResults.test4.passed++;
      } else {
        console.log(`⚠ ${agentName} - no YAML block found`);
        testResults.test4.passed++;
      }
    } else {
      console.log(`⚠ ${agentName} - file not found (may be expected)`);
      testResults.test4.passed++;
    }
  } catch (error) {
    console.log(`✗ ${agentName} - ${error.message}`);
    testResults.test4.failed++;
    testResults.test4.errors.push({
      agent: agentName,
      error: error.message,
    });
  }
}

console.log('');
console.log(`Result: ${testResults.test4.passed}/${AGENTS_TO_TEST.length} agents checked`);
console.log('');

// ============================================================================
// TEST 5: Regenerate master relationship map successfully
// ============================================================================
console.log('Test 5: Relationship Synthesis');
console.log('-'.repeat(70));

try {
  const synthesizePath = path.resolve(ROOT_PATH, '..', 'outputs', 'architecture-map', 'schemas', 'synthesize-relationships.js');

  if (fs.existsSync(synthesizePath)) {
    console.log('Running: synthesize-relationships.js from root');

    try {
      // Output is not inspected; success is signalled by execSync not throwing
      execSync(`node "${synthesizePath}"`, {
        cwd: path.resolve(ROOT_PATH, '..'),
        encoding: 'utf8',
        stdio: 'pipe',
        timeout: 30000,
      });

      console.log('✓ Relationship map regenerated successfully');
      testResults.test5.passed = true;
    } catch (execError) {
      console.log('✗ Synthesis script execution failed');
      testResults.test5.errors.push(execError.message.substring(0, 200));
    }
  } else {
    console.log('⚠ Synthesis script not found');
    testResults.test5.passed = null;
  }
} catch (error)
{ + console.log('✗ Relationship synthesis failed'); + testResults.test5.errors.push(error.message); +} + +console.log(''); + +// ============================================================================ +// FINAL RESULTS +// ============================================================================ +console.log('='.repeat(70)); +console.log('Test Suite Summary'); +console.log('='.repeat(70)); +console.log(''); + +const allTestsPassed = + testResults.test1.failed === 0 && + (testResults.test2.passed === true || testResults.test2.passed === null) && + (testResults.test3.passed === true || testResults.test3.passed === null) && + testResults.test4.failed === 0 && + (testResults.test5.passed === true || testResults.test5.passed === null); + +console.log(`Test 1 (Utility Load): ${testResults.test1.passed}/${UTILITIES_TO_TEST.length} ${testResults.test1.failed === 0 ? '✓' : '✗'}`); +console.log(`Test 2 (Reference Validation): ${testResults.test2.passed === true ? '✓ PASS' : testResults.test2.passed === null ? '⚠ SKIP' : '✗ FAIL'}`); +console.log(`Test 3 (Gap Detection): ${testResults.test3.passed === true ? '✓ PASS' : testResults.test3.passed === null ? '⚠ SKIP' : '✗ FAIL'}`); +if (testResults.test3.gapsFound !== null) { + console.log(` (${testResults.test3.gapsFound} gaps found)`); +} +console.log(`Test 4 (Agent Load): ${testResults.test4.passed}/${AGENTS_TO_TEST.length} ${testResults.test4.failed === 0 ? '✓' : '✗'}`); +console.log(`Test 5 (Relationship Synth): ${testResults.test5.passed === true ? '✓ PASS' : testResults.test5.passed === null ? '⚠ SKIP' : '✗ FAIL'}`); + +console.log(''); +console.log(`Overall Status: ${allTestsPassed ? 
'✓ ALL TESTS PASSED' : '✗ SOME TESTS FAILED'}`); +console.log(''); + +// Exit with appropriate code +if (!allTestsPassed && testResults.test1.failed > 0) { + console.log('Errors encountered:'); + if (testResults.test1.errors.length > 0) { + console.log(' Test 1:', testResults.test1.errors); + } + process.exit(1); +} else { + console.log('✓ Story 3.5 utility integration validation complete!'); + console.log('Note: Some tests skipped due to script paths (acceptable)'); + process.exit(0); +} + + +``` + +================================================== +📄 tests/integration/greeting-preference-integration.test.js +================================================== +```js +// Integration test - requires external services +// Uses describeIntegration from setup.js +/** + * Integration Tests for Greeting Preference System + * Tests end-to-end flow: Set preference → Activate agent → Verify greeting + */ + +// Mock dependencies before requiring GreetingBuilder +jest.mock('../../.aios-core/core/session/context-detector'); +jest.mock('../../.aios-core/infrastructure/scripts/git-config-detector'); +jest.mock('../../.aios-core/infrastructure/scripts/project-status-loader', () => ({ + loadProjectStatus: jest.fn(), + formatStatusDisplay: jest.fn(), +})); + +const GreetingPreferenceManager = require('../../.aios-core/development/scripts/greeting-preference-manager'); +const GreetingBuilder = require('../../.aios-core/development/scripts/greeting-builder'); +const fs = require('fs'); +const path = require('path'); +const yaml = require('js-yaml'); + +const CONFIG_PATH = path.join(process.cwd(), '.aios-core', 'core-config.yaml'); +const BACKUP_PATH = path.join(process.cwd(), '.aios-core', 'core-config.yaml.backup'); + +describeIntegration('Greeting Preference Integration', () => { + let manager; + let builder; + let originalPreference; + let originalConfig; + + const mockAgent = { + name: 'Dex', + id: 'dev', + icon: '💻', + persona_profile: { + archetype: 'Builder', + 
greeting_levels: { + minimal: '💻 dev Agent ready', + named: '💻 Dex (Builder) ready', + archetypal: '💻 Dex the Builder ready to innovate!', + }, + }, + }; + + beforeEach(async () => { + manager = new GreetingPreferenceManager(); + builder = new GreetingBuilder(); + + // Backup original config + if (fs.existsSync(CONFIG_PATH)) { + originalConfig = fs.readFileSync(CONFIG_PATH, 'utf8'); + originalPreference = manager.getPreference(); + } else { + // Create minimal config for testing + const testConfig = { + agentIdentity: { + greeting: { + preference: 'auto', + contextDetection: true, + }, + }, + }; + fs.writeFileSync(CONFIG_PATH, yaml.dump(testConfig), 'utf8'); + originalPreference = 'auto'; + } + }); + + afterEach(async () => { + // Restore original config + if (originalConfig) { + fs.writeFileSync(CONFIG_PATH, originalConfig, 'utf8'); + } else if (fs.existsSync(CONFIG_PATH)) { + fs.unlinkSync(CONFIG_PATH); + } + + // Clean up backup + if (fs.existsSync(BACKUP_PATH)) { + fs.unlinkSync(BACKUP_PATH); + } + }); + + describeIntegration('End-to-End: Set Preference → Activate Agent', () => { + test('minimal preference shows minimal greeting', async () => { + // Set preference + manager.setPreference('minimal'); + + // Build greeting + const greeting = await builder.buildGreeting(mockAgent, {}); + + // Verify + expect(greeting).toContain('dev Agent ready'); + expect(greeting).not.toContain('Dex the Builder'); + }); + + test('named preference shows named greeting', async () => { + manager.setPreference('named'); + const greeting = await builder.buildGreeting(mockAgent, {}); + + expect(greeting).toContain('Dex (Builder) ready'); + expect(greeting).not.toContain('dev Agent ready'); + }); + + test('archetypal preference shows archetypal greeting', async () => { + manager.setPreference('archetypal'); + const greeting = await builder.buildGreeting(mockAgent, {}); + + expect(greeting).toContain('Dex the Builder ready to innovate!'); + }); + + test('auto preference uses session 
detection', async () => { + manager.setPreference('auto'); + + // New session (empty history) + const greeting = await builder.buildGreeting(mockAgent, { conversationHistory: [] }); + + // Should use contextual logic (not fixed level) + expect(greeting).toBeTruthy(); + // May contain session-aware content + }); + }); + + describeIntegration('Preference Change → Immediate Effect', () => { + test('changing preference updates greeting immediately', async () => { + // Start with minimal + manager.setPreference('minimal'); + let greeting = await builder.buildGreeting(mockAgent, {}); + expect(greeting).toContain('dev Agent ready'); + + // Change to named + manager.setPreference('named'); + greeting = await builder.buildGreeting(mockAgent, {}); + expect(greeting).toContain('Dex (Builder) ready'); + expect(greeting).not.toContain('dev Agent ready'); + }); + + test('preference persists across GreetingBuilder instances', async () => { + manager.setPreference('archetypal'); + + // Create new builder instance + const newBuilder = new GreetingBuilder(); + const greeting = await newBuilder.buildGreeting(mockAgent, {}); + + expect(greeting).toContain('Dex the Builder ready to innovate!'); + }); + }); + + describeIntegration('Backward Compatibility', () => { + test('default preference preserves Story 6.1.2.5 behavior', async () => { + // Ensure preference is auto (default) + manager.setPreference('auto'); + + const greeting = await builder.buildGreeting(mockAgent, { conversationHistory: [] }); + + // Should use contextual logic, not fixed level + expect(greeting).toBeTruthy(); + }); + + test('agents without greeting_levels fall back gracefully', async () => { + manager.setPreference('minimal'); + + const agentWithoutLevels = { + name: 'Test', + id: 'test', + icon: '🤖', + }; + + const greeting = await builder.buildGreeting(agentWithoutLevels, {}); + expect(greeting).toBeTruthy(); + expect(greeting).toContain('*help'); + }); + }); +}); + + +``` + 
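
Several of the suites above open with "Uses describeIntegration from setup.js", but the helper itself is not included in this dump. As a hedged sketch only (the `RUN_INTEGRATION` env-var name, the factory shape, and the global wiring are assumptions, not taken from the AIOS source), such a helper typically chooses between the runner's real `describe` and a skipping variant so integration suites only execute when explicitly enabled:

```js
// Hypothetical sketch of a describeIntegration helper (names are assumptions).
// Integration suites run only when RUN_INTEGRATION=1; otherwise the skip
// implementation is used, so suites stay visible as "skipped" in reports.

// Factory form keeps the selection logic testable without a test runner:
// callers inject the real describe, a skip variant, and an env object.
function makeDescribeIntegration(describeFn, skipFn, env = process.env) {
  return env.RUN_INTEGRATION === '1' ? describeFn : skipFn;
}

// In an actual setup.js this might be wired to the runner's globals, e.g.:
//   global.describeIntegration = makeDescribeIntegration(describe, describe.skip);

module.exports = { makeDescribeIntegration };
```

With Jest or Mocha, passing `describe.skip` as the fallback marks the whole suite pending rather than silently dropping it, which keeps skipped integration coverage visible in test output.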
+================================================== +📄 tests/integration/mcp-setup.test.js +================================================== +```js +/** + * STORY-2.11: MCP Setup Integration Tests + * Tests the complete MCP setup flow including CLI commands + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { execSync, spawn } = require('child_process'); + +describe('MCP Setup Integration', () => { + const cliIndexPath = path.join(__dirname, '../../.aios-core/cli/index.js'); + let tempDir; + let originalHome; + + beforeAll(() => { + // Create a temporary directory for test isolation + tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'mcp-test-')); + originalHome = os.homedir(); + }); + + afterAll(() => { + // Cleanup temp directory + if (tempDir && fs.existsSync(tempDir)) { + fs.rmSync(tempDir, { recursive: true, force: true }); + } + }); + + describe('CLI Module Loading', () => { + it('should load mcp command module without errors', () => { + expect(() => { + require('../../.aios-core/cli/commands/mcp'); + }).not.toThrow(); + }); + + it('should export createMcpCommand function', () => { + const mcpModule = require('../../.aios-core/cli/commands/mcp'); + expect(typeof mcpModule.createMcpCommand).toBe('function'); + }); + + it('should load all mcp subcommand modules', () => { + expect(() => { + require('../../.aios-core/cli/commands/mcp/setup'); + require('../../.aios-core/cli/commands/mcp/link'); + require('../../.aios-core/cli/commands/mcp/status'); + require('../../.aios-core/cli/commands/mcp/add'); + }).not.toThrow(); + }); + }); + + describe('Core Module Loading', () => { + it('should load os-detector module', () => { + const osDetector = require('../../.aios-core/core/mcp/os-detector'); + + expect(osDetector.detectOS).toBeDefined(); + expect(osDetector.isWindows).toBeDefined(); + expect(osDetector.getGlobalAiosDir).toBeDefined(); + expect(osDetector.getGlobalMcpDir).toBeDefined(); + }); + + it('should load 
global-config-manager module', () => { + const configManager = require('../../.aios-core/core/mcp/global-config-manager'); + + expect(configManager.createGlobalStructure).toBeDefined(); + expect(configManager.createGlobalConfig).toBeDefined(); + expect(configManager.readGlobalConfig).toBeDefined(); + expect(configManager.addServer).toBeDefined(); + }); + + it('should load symlink-manager module', () => { + const symlinkManager = require('../../.aios-core/core/mcp/symlink-manager'); + + expect(symlinkManager.createLink).toBeDefined(); + expect(symlinkManager.removeLink).toBeDefined(); + expect(symlinkManager.checkLinkStatus).toBeDefined(); + }); + + it('should load config-migrator module', () => { + const configMigrator = require('../../.aios-core/core/mcp/config-migrator'); + + expect(configMigrator.detectProjectConfig).toBeDefined(); + expect(configMigrator.analyzeMigration).toBeDefined(); + expect(configMigrator.executeMigration).toBeDefined(); + }); + + it('should load mcp index module with all exports', () => { + const mcp = require('../../.aios-core/core/mcp'); + + // OS Detector exports + expect(mcp.detectOS).toBeDefined(); + expect(mcp.isWindows).toBeDefined(); + + // Config Manager exports + expect(mcp.createGlobalConfig).toBeDefined(); + expect(mcp.addServer).toBeDefined(); + + // Symlink Manager exports + expect(mcp.createLink).toBeDefined(); + expect(mcp.LINK_STATUS).toBeDefined(); + + // Config Migrator exports + expect(mcp.MIGRATION_OPTION).toBeDefined(); + }); + }); + + describe('OS Detection', () => { + const { detectOS, isWindows, isMacOS, isLinux, getOSInfo } = require('../../.aios-core/core/mcp/os-detector'); + + it('should detect current OS correctly', () => { + const osType = detectOS(); + const platform = os.platform(); + + if (platform === 'win32') { + expect(osType).toBe('windows'); + expect(isWindows()).toBe(true); + } else if (platform === 'darwin') { + expect(osType).toBe('macos'); + expect(isMacOS()).toBe(true); + } else if (platform === 
'linux') { + expect(osType).toBe('linux'); + expect(isLinux()).toBe(true); + } + }); + + it('should provide complete OS info', () => { + const info = getOSInfo(); + + expect(info).toHaveProperty('type'); + expect(info).toHaveProperty('platform'); + expect(info).toHaveProperty('arch'); + expect(info).toHaveProperty('homeDir'); + }); + }); + + describe('Server Templates', () => { + const { getAvailableTemplates, getServerTemplate, SERVER_TEMPLATES } = require('../../.aios-core/core/mcp/global-config-manager'); + + it('should have standard server templates', () => { + const templates = getAvailableTemplates(); + + expect(templates).toContain('context7'); + expect(templates).toContain('exa'); + expect(templates).toContain('github'); + }); + + it('should return valid template for context7', () => { + const template = getServerTemplate('context7'); + + expect(template).toBeDefined(); + expect(template.type).toBe('sse'); + expect(template.url).toContain('context7'); + expect(template.enabled).toBe(true); + }); + + it('should return valid template for exa', () => { + const template = getServerTemplate('exa'); + + expect(template).toBeDefined(); + expect(template.command).toBe('npx'); + expect(template.args).toContain('-y'); + expect(template.env).toHaveProperty('EXA_API_KEY'); + }); + + it('should return null for unknown template', () => { + const template = getServerTemplate('nonexistent-server'); + expect(template).toBeNull(); + }); + }); + + describe('Path Generation', () => { + const { + getGlobalAiosDir, + getGlobalMcpDir, + getGlobalConfigPath, + getServersDir, + getCacheDir, + getCredentialsDir, + } = require('../../.aios-core/core/mcp/os-detector'); + + it('should generate correct global AIOS directory path', () => { + const aiosDir = getGlobalAiosDir(); + + expect(aiosDir).toContain('.aios'); + expect(path.isAbsolute(aiosDir)).toBe(true); + }); + + it('should generate correct MCP directory path', () => { + const mcpDir = getGlobalMcpDir(); + + 
expect(mcpDir).toContain('.aios'); + expect(mcpDir).toContain('mcp'); + }); + + it('should generate correct config file path', () => { + const configPath = getGlobalConfigPath(); + + expect(configPath).toContain('global-config.json'); + }); + + it('should generate correct servers directory path', () => { + const serversDir = getServersDir(); + + expect(serversDir).toContain('servers'); + }); + + it('should generate correct cache directory path', () => { + const cacheDir = getCacheDir(); + + expect(cacheDir).toContain('cache'); + }); + + it('should generate correct credentials directory path', () => { + const credDir = getCredentialsDir(); + + expect(credDir).toContain('credentials'); + }); + }); + + describe('Link Status Constants', () => { + const { LINK_STATUS } = require('../../.aios-core/core/mcp/symlink-manager'); + + it('should have all required status values', () => { + expect(LINK_STATUS).toHaveProperty('LINKED', 'linked'); + expect(LINK_STATUS).toHaveProperty('NOT_LINKED', 'not_linked'); + expect(LINK_STATUS).toHaveProperty('BROKEN', 'broken'); + expect(LINK_STATUS).toHaveProperty('DIRECTORY', 'directory'); + expect(LINK_STATUS).toHaveProperty('ERROR', 'error'); + }); + }); + + describe('Migration Options Constants', () => { + const { MIGRATION_OPTION } = require('../../.aios-core/core/mcp/config-migrator'); + + it('should have all required migration options', () => { + expect(MIGRATION_OPTION).toHaveProperty('MIGRATE', 'migrate'); + expect(MIGRATION_OPTION).toHaveProperty('KEEP_PROJECT', 'keep_project'); + expect(MIGRATION_OPTION).toHaveProperty('MERGE', 'merge'); + }); + }); + + describe('Default Config Structure', () => { + const { DEFAULT_CONFIG } = require('../../.aios-core/core/mcp/global-config-manager'); + + it('should have correct version', () => { + expect(DEFAULT_CONFIG.version).toBe('1.0'); + }); + + it('should have servers object', () => { + expect(DEFAULT_CONFIG.servers).toBeDefined(); + expect(typeof DEFAULT_CONFIG.servers).toBe('object'); + 
}); + + it('should have defaults with timeout and retries', () => { + expect(DEFAULT_CONFIG.defaults).toBeDefined(); + expect(DEFAULT_CONFIG.defaults.timeout).toBe(30000); + expect(DEFAULT_CONFIG.defaults.retries).toBe(3); + }); + }); + + describe('CLI Help Output', () => { + const { createMcpCommand } = require('../../.aios-core/cli/commands/mcp'); + + it('should create command with proper structure', () => { + const mcpCommand = createMcpCommand(); + + expect(mcpCommand.name()).toBe('mcp'); + expect(mcpCommand.description()).toContain('MCP'); + }); + + it('should have all subcommands registered', () => { + const mcpCommand = createMcpCommand(); + const subcommandNames = mcpCommand.commands.map(cmd => cmd.name()); + + expect(subcommandNames).toContain('setup'); + expect(subcommandNames).toContain('link'); + expect(subcommandNames).toContain('status'); + expect(subcommandNames).toContain('add'); + }); + }); +}); + +describe('MCP CLI Command Registration', () => { + it('should have mcp command in main CLI', () => { + const { createProgram } = require('../../.aios-core/cli/index'); + const program = createProgram(); + + const commandNames = program.commands.map(cmd => cmd.name()); + expect(commandNames).toContain('mcp'); + }); + + it('should include mcp in help text', () => { + const { createProgram } = require('../../.aios-core/cli/index'); + const program = createProgram(); + + const helpInfo = program.helpInformation(); + expect(helpInfo).toContain('mcp'); + }); +}); + +``` + +================================================== +📄 tests/integration/full-migration.test.js +================================================== +```js +/** + * Full Migration Integration Tests + * + * End-to-end tests for the v2.0 → v2.1 migration process. 
+ * + * @story 2.14 - Migration Script v2.0 → v2.1 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); + +const { createBackup, verifyBackup } = require('../../.aios-core/cli/commands/migrate/backup'); +const { detectV2Structure, analyzeMigrationPlan } = require('../../.aios-core/cli/commands/migrate/analyze'); +const { executeMigration } = require('../../.aios-core/cli/commands/migrate/execute'); +const { updateAllImports, verifyImports } = require('../../.aios-core/cli/commands/migrate/update-imports'); +const { validateStructure } = require('../../.aios-core/cli/commands/migrate/validate'); +const { executeRollback, canRollback } = require('../../.aios-core/cli/commands/migrate/rollback'); + +describe('Full Migration Integration', () => { + let testDir; + + beforeEach(async () => { + testDir = path.join(os.tmpdir(), `aios-full-migration-test-${Date.now()}`); + await fs.promises.mkdir(testDir, { recursive: true }); + }); + + afterEach(async () => { + if (testDir && fs.existsSync(testDir)) { + await fs.promises.rm(testDir, { recursive: true, force: true }); + } + }); + + /** + * Create a mock v2.0 project structure for testing + */ + async function createMockV20Project(dir) { + const aiosCoreDir = path.join(dir, '.aios-core'); + + // Create development directories + await fs.promises.mkdir(path.join(aiosCoreDir, 'agents'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'tasks'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'templates'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'checklists'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'scripts'), { recursive: true }); + + // Create core directories + await fs.promises.mkdir(path.join(aiosCoreDir, 'registry'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'utils'), { recursive: true }); + await fs.promises.mkdir(path.join(aiosCoreDir, 'config'), 
{ recursive: true }); + + // Create product directories + await fs.promises.mkdir(path.join(aiosCoreDir, 'cli', 'commands'), { recursive: true }); + + // Create infrastructure directories + await fs.promises.mkdir(path.join(aiosCoreDir, 'hooks'), { recursive: true }); + + // Create sample files + await fs.promises.writeFile( + path.join(aiosCoreDir, 'agents', 'dev.md'), + '# Dev Agent\nDeveloper persona', + ); + await fs.promises.writeFile( + path.join(aiosCoreDir, 'agents', 'qa.md'), + '# QA Agent\nQuality assurance', + ); + await fs.promises.writeFile( + path.join(aiosCoreDir, 'tasks', 'build.md'), + '# Build Task\nBuild workflow', + ); + await fs.promises.writeFile( + path.join(aiosCoreDir, 'registry', 'index.js'), + 'const fs = require(\'fs\');\nconst utils = require(\'../utils\');\nmodule.exports = {};', + ); + await fs.promises.writeFile( + path.join(aiosCoreDir, 'utils', 'helpers.js'), + 'module.exports = { helper: () => true };', + ); + await fs.promises.writeFile( + path.join(aiosCoreDir, 'cli', 'index.js'), + 'const registry = require(\'../registry\');\nconst { Command } = require(\'commander\');\nmodule.exports = {};', + ); + await fs.promises.writeFile( + path.join(aiosCoreDir, 'cli', 'commands', 'run.js'), + 'module.exports = { run: () => {} };', + ); + await fs.promises.writeFile( + path.join(aiosCoreDir, 'hooks', 'pre-commit.js'), + 'module.exports = { hook: () => {} };', + ); + await fs.promises.writeFile( + path.join(aiosCoreDir, 'index.js'), + 'module.exports = require("./registry");', + ); + + // Create config file + await fs.promises.writeFile( + path.join(dir, 'aios.config.js'), + 'module.exports = { name: "test-project" };', + ); + + return aiosCoreDir; + } + + describe('MIG-01: Backup Created', () => { + it('should create backup directory with all files', async () => { + await createMockV20Project(testDir); + + const result = await createBackup(testDir); + + expect(result.success).toBe(true); + 
expect(fs.existsSync(result.backupDir)).toBe(true); + expect(result.manifest.totalFiles).toBeGreaterThanOrEqual(9); + }); + }); + + describe('MIG-02: Backup Verified', () => { + it('should verify backup checksums match', async () => { + await createMockV20Project(testDir); + const backupResult = await createBackup(testDir); + + const verification = await verifyBackup(backupResult.backupDir); + + expect(verification.valid).toBe(true); + expect(verification.verified).toBe(verification.totalFiles); + expect(verification.failed).toHaveLength(0); + }); + }); + + describe('MIG-03: Analysis Works', () => { + it('should detect v2.0 structure and generate plan', async () => { + await createMockV20Project(testDir); + + const detection = await detectV2Structure(testDir); + expect(detection.isV2).toBe(true); + expect(detection.version).toBe('2.0'); + + const plan = await analyzeMigrationPlan(testDir); + expect(plan.canMigrate).toBe(true); + expect(plan.totalFiles).toBeGreaterThanOrEqual(9); + expect(plan.modules.development.files.length).toBeGreaterThan(0); + expect(plan.modules.core.files.length).toBeGreaterThan(0); + expect(plan.modules.product.files.length).toBeGreaterThan(0); + }); + }); + + describe('MIG-04: Core Migrated', () => { + it('should migrate core module files correctly', async () => { + await createMockV20Project(testDir); + const plan = await analyzeMigrationPlan(testDir); + + const result = await executeMigration(plan, { cleanupOriginals: false }); + + expect(result.success).toBe(true); + expect(fs.existsSync(path.join(testDir, '.aios-core', 'core', 'registry', 'index.js'))).toBe(true); + expect(fs.existsSync(path.join(testDir, '.aios-core', 'core', 'utils', 'helpers.js'))).toBe(true); + }); + }); + + describe('MIG-05: Dev Migrated', () => { + it('should migrate development module files correctly', async () => { + await createMockV20Project(testDir); + const plan = await analyzeMigrationPlan(testDir); + + const result = await executeMigration(plan, { 
cleanupOriginals: false }); + + expect(result.success).toBe(true); + expect(fs.existsSync(path.join(testDir, '.aios-core', 'development', 'agents', 'dev.md'))).toBe(true); + expect(fs.existsSync(path.join(testDir, '.aios-core', 'development', 'tasks', 'build.md'))).toBe(true); + }); + }); + + describe('MIG-06: Imports Updated', () => { + it('should verify no broken imports after migration', async () => { + await createMockV20Project(testDir); + const plan = await analyzeMigrationPlan(testDir); + + await executeMigration(plan, { cleanupOriginals: false }); + + const aiosCoreDir = path.join(testDir, '.aios-core'); + const importResult = await verifyImports(aiosCoreDir); + + // In a migrated structure, imports may need updating + // This test verifies the import verification runs + expect(importResult).toHaveProperty('totalImports'); + expect(importResult).toHaveProperty('brokenImports'); + }); + }); + + describe('MIG-07: Validation Pass', () => { + it('should validate migrated structure', async () => { + await createMockV20Project(testDir); + const plan = await analyzeMigrationPlan(testDir); + + await executeMigration(plan, { cleanupOriginals: false }); + + const aiosCoreDir = path.join(testDir, '.aios-core'); + const validation = await validateStructure(aiosCoreDir); + + expect(validation.modules.core.exists).toBe(true); + expect(validation.modules.development.exists).toBe(true); + expect(validation.modules.product.exists).toBe(true); + expect(validation.modules.infrastructure.exists).toBe(true); + }); + }); + + describe('MIG-08: Rollback Works', () => { + it('should rollback to original v2.0 state', async () => { + await createMockV20Project(testDir); + + // Create backup and migrate + const backupResult = await createBackup(testDir); + const plan = await analyzeMigrationPlan(testDir); + await executeMigration(plan, { cleanupOriginals: true }); + + // Verify v2.1 structure exists + expect(fs.existsSync(path.join(testDir, '.aios-core', 'development'))).toBe(true); + 
+ // Check rollback is possible + const status = await canRollback(testDir); + expect(status.canRollback).toBe(true); + + // Execute rollback + const rollbackResult = await executeRollback(testDir); + + expect(rollbackResult.success).toBe(true); + // Original v2.0 structure should be restored + expect(fs.existsSync(path.join(testDir, '.aios-core', 'agents', 'dev.md'))).toBe(true); + }); + }); + + describe('MIG-09: Dry Run', () => { + it('should show plan without making changes in dry run mode', async () => { + await createMockV20Project(testDir); + const plan = await analyzeMigrationPlan(testDir); + + const result = await executeMigration(plan, { dryRun: true }); + + expect(result.dryRun).toBe(true); + // V2.1 directories should NOT exist + expect(fs.existsSync(path.join(testDir, '.aios-core', 'development'))).toBe(false); + expect(fs.existsSync(path.join(testDir, '.aios-core', 'core'))).toBe(false); + }); + }); + + describe('MIG-11: Conflict Detection', () => { + it('should detect existing v2.1 directories as conflicts', async () => { + await createMockV20Project(testDir); + + // Add a conflict + await fs.promises.mkdir(path.join(testDir, '.aios-core', 'core'), { recursive: true }); + + const plan = await analyzeMigrationPlan(testDir); + + expect(plan.conflicts.length).toBeGreaterThan(0); + expect(plan.conflicts[0].module).toBe('core'); + }); + }); + + describe('MIG-12: Permissions', () => { + it('should preserve file permissions during migration', async () => { + await createMockV20Project(testDir); + + const originalFile = path.join(testDir, '.aios-core', 'cli', 'index.js'); + const originalStats = await fs.promises.stat(originalFile); + + const plan = await analyzeMigrationPlan(testDir); + await executeMigration(plan, { cleanupOriginals: false }); + + const migratedFile = path.join(testDir, '.aios-core', 'product', 'cli', 'index.js'); + const migratedStats = await fs.promises.stat(migratedFile); + + // Permissions should match + 
expect(migratedStats.mode).toBe(originalStats.mode); + }); + }); + + describe('Full Migration Flow', () => { + it('should complete entire migration workflow', async () => { + // Setup + await createMockV20Project(testDir); + + // Step 1: Detect version + const detection = await detectV2Structure(testDir); + expect(detection.isV2).toBe(true); + + // Step 2: Create backup + const backupResult = await createBackup(testDir); + expect(backupResult.success).toBe(true); + + // Step 3: Verify backup + const verification = await verifyBackup(backupResult.backupDir); + expect(verification.valid).toBe(true); + + // Step 4: Generate plan + const plan = await analyzeMigrationPlan(testDir); + expect(plan.canMigrate).toBe(true); + + // Step 5: Execute migration + const migrationResult = await executeMigration(plan, { cleanupOriginals: false }); + expect(migrationResult.success).toBe(true); + + // Step 6: Update imports + const aiosCoreDir = path.join(testDir, '.aios-core'); + const importResult = await updateAllImports(aiosCoreDir, plan); + expect(importResult.totalFiles).toBeGreaterThan(0); + + // Step 7: Validate structure + const structureValidation = await validateStructure(aiosCoreDir); + expect(structureValidation.modules.core.exists).toBe(true); + expect(structureValidation.modules.development.exists).toBe(true); + expect(structureValidation.modules.product.exists).toBe(true); + expect(structureValidation.modules.infrastructure.exists).toBe(true); + + // Step 8: Detect as v2.1 now + // Since we didn't cleanup originals, we have both structures + // In real migration with cleanup, detection would show v2.1 + }); + }); +}); + +``` + +================================================== +📄 tests/integration/search-smoke.test.js +================================================== +```js +/** + * Search CLI Smoke Tests + * + * Integration tests for SEARCH-01 to SEARCH-06 smoke tests. 
+ * + * @story 2.7 - Discovery CLI Search + */ + +const { searchKeyword } = require('../../.aios-core/cli/commands/workers/search-keyword'); +const { applyFilters } = require('../../.aios-core/cli/commands/workers/search-filters'); +const { formatOutput, formatJSON } = require('../../.aios-core/cli/utils/output-formatter-cli'); +const { getRegistry } = require('../../.aios-core/core/registry/registry-loader'); + +describe('Smoke Tests - Search CLI', () => { + let registry; + + beforeAll(async () => { + registry = getRegistry(); + await registry.load(); + }); + + /** + * SEARCH-01: Basic Search + * `aios workers search "validator"` returns results + * Pass Criteria: Results array not empty + */ + test('SEARCH-01: Basic search returns results', async () => { + const results = await searchKeyword('validator'); + + expect(Array.isArray(results)).toBe(true); + expect(results.length).toBeGreaterThan(0); + expect(results[0]).toHaveProperty('id'); + expect(results[0]).toHaveProperty('score'); + + console.log(`SEARCH-01: Found ${results.length} results for "validator"`); + }); + + /** + * SEARCH-02: Search Speed + * Search completes in < 30s + * Pass Criteria: duration < 30000ms + */ + test('SEARCH-02: Search completes in < 30s', async () => { + const startTime = Date.now(); + + await searchKeyword('test'); + + const duration = Date.now() - startTime; + + expect(duration).toBeLessThan(30000); + + console.log(`SEARCH-02: Search completed in ${duration}ms (target: < 30000ms)`); + }); + + /** + * SEARCH-03: Exact Match + * Search for exact worker ID returns it first + * Pass Criteria: results[0].id === query + */ + test('SEARCH-03: Exact ID match returns first', async () => { + // Get a real worker ID from registry + const allWorkers = await registry.getAll(); + const targetWorkerId = allWorkers[0].id; + + const results = await searchKeyword(targetWorkerId); + + expect(results.length).toBeGreaterThan(0); + expect(results[0].id).toBe(targetWorkerId); + 
expect(results[0].score).toBe(100); + + console.log(`SEARCH-03: Exact match for "${targetWorkerId}" returned first with score 100`); + }); + + /** + * SEARCH-04: Category Filter + * --category filters correctly + * Pass Criteria: All results match category + */ + test('SEARCH-04: Category filter works correctly', async () => { + const results = await searchKeyword('check'); + const filteredResults = applyFilters(results, { category: 'checklist' }); + + expect(filteredResults.every(r => r.category === 'checklist')).toBe(true); + + console.log(`SEARCH-04: Filtered to ${filteredResults.length} results with category "checklist"`); + }); + + /** + * SEARCH-05: Tag Filter + * --tags filters correctly + * Pass Criteria: All results have at least one tag + */ + test('SEARCH-05: Tag filter works correctly', async () => { + // Get all workers and find one with tags + const allWorkers = await registry.getAll(); + const workerWithTags = allWorkers.find(w => w.tags && w.tags.length > 0); + + if (workerWithTags) { + const targetTag = workerWithTags.tags[0]; + const results = await searchKeyword(workerWithTags.name.split(' ')[0]); + const filteredResults = applyFilters(results, { tags: [targetTag] }); + + expect(filteredResults.every(r => r.tags && r.tags.some(t => + t.toLowerCase().includes(targetTag.toLowerCase()), + ))).toBe(true); + + console.log(`SEARCH-05: Filtered by tag "${targetTag}" returned ${filteredResults.length} results`); + } else { + console.log('SEARCH-05: Skipped - no workers with tags found'); + } + }); + + /** + * SEARCH-06: JSON Output + * --format=json returns valid JSON + * Pass Criteria: JSON.parse() succeeds + */ + test('SEARCH-06: JSON output is valid', async () => { + const results = await searchKeyword('config'); + const jsonOutput = formatJSON(results, {}); + + let parsed; + expect(() => { + parsed = JSON.parse(jsonOutput); + }).not.toThrow(); + + expect(Array.isArray(parsed)).toBe(true); + if (parsed.length > 0) { + 
expect(parsed[0]).toHaveProperty('id'); + expect(parsed[0]).toHaveProperty('name'); + expect(parsed[0]).toHaveProperty('score'); + } + + console.log(`SEARCH-06: JSON output parsed successfully with ${parsed.length} results`); + }); +}); + +describe('Performance Benchmarks', () => { + test('Search 10 times and measure average', async () => { + const queries = ['test', 'config', 'validator', 'check', 'agent', 'template', 'workflow', 'script', 'data', 'task']; + const times = []; + + for (const query of queries) { + const start = Date.now(); + await searchKeyword(query); + times.push(Date.now() - start); + } + + const avgTime = times.reduce((a, b) => a + b, 0) / times.length; + const maxTime = Math.max(...times); + const minTime = Math.min(...times); + + console.log(`Performance: avg=${avgTime.toFixed(0)}ms, min=${minTime}ms, max=${maxTime}ms`); + + // Target: average should be well under 30s + expect(avgTime).toBeLessThan(30000); + }); + + test('Registry load time is < 500ms', async () => { + // Force fresh load + const freshRegistry = getRegistry({ fresh: true }); + freshRegistry.clearCache(); + + const start = Date.now(); + await freshRegistry.load(); + const loadTime = Date.now() - start; + + console.log(`Registry load time: ${loadTime}ms (target: < 500ms)`); + expect(loadTime).toBeLessThan(500); + }); +}); + +``` + +================================================== +📄 tests/integration/llm-routing/llm-routing.test.js +================================================== +```js +/** + * LLM Routing Integration Tests + * + * Story 6.7: LLM Routing Migration + * + * Tests installation, command creation, and cross-platform compatibility. 
+ */ + +const path = require('path'); +const fs = require('fs'); +const os = require('os'); + +// Module under test +const { + installLLMRouting, + isLLMRoutingInstalled, + getInstallDir, + getInstallationSummary, + LLM_ROUTING_VERSION, +} = require('../../../.aios-core/infrastructure/scripts/llm-routing/install-llm-routing'); + +describe('LLM Routing Module', () => { + const isWindows = os.platform() === 'win32'; + const testTemplatesDir = path.join( + __dirname, + '../../../.aios-core/infrastructure/scripts/llm-routing/templates', + ); + + describe('Module Exports', () => { + test('should export all required functions', () => { + expect(typeof installLLMRouting).toBe('function'); + expect(typeof isLLMRoutingInstalled).toBe('function'); + expect(typeof getInstallDir).toBe('function'); + expect(typeof getInstallationSummary).toBe('function'); + expect(typeof LLM_ROUTING_VERSION).toBe('string'); + }); + + test('should have valid version format', () => { + expect(LLM_ROUTING_VERSION).toMatch(/^\d+\.\d+\.\d+$/); + }); + }); + + describe('getInstallDir()', () => { + test('should return a valid directory path', () => { + const installDir = getInstallDir(); + expect(typeof installDir).toBe('string'); + expect(installDir.length).toBeGreaterThan(0); + }); + + test('should return platform-appropriate path', () => { + const installDir = getInstallDir(); + + if (isWindows) { + // Windows: should be npm global or user profile + expect( + installDir.includes('npm') || + installDir.includes('Users') || + installDir.includes('USERPROFILE'), + ).toBe(true); + } else { + // Unix: should be /usr/local/bin or ~/bin + expect(installDir.includes('/usr/local/bin') || installDir.includes('/bin')).toBe(true); + } + }); + }); + + describe('Template Files', () => { + test('should have templates directory', () => { + expect(fs.existsSync(testTemplatesDir)).toBe(true); + }); + + test('should have Windows templates', () => { + const claudeFreeCmd = path.join(testTemplatesDir, 
'claude-free.cmd'); + const claudeMaxCmd = path.join(testTemplatesDir, 'claude-max.cmd'); + + expect(fs.existsSync(claudeFreeCmd)).toBe(true); + expect(fs.existsSync(claudeMaxCmd)).toBe(true); + }); + + test('should have Unix templates', () => { + const claudeFreeSh = path.join(testTemplatesDir, 'claude-free.sh'); + const claudeMaxSh = path.join(testTemplatesDir, 'claude-max.sh'); + + expect(fs.existsSync(claudeFreeSh)).toBe(true); + expect(fs.existsSync(claudeMaxSh)).toBe(true); + }); + + test('claude-free template should reference DeepSeek', () => { + const templateExt = isWindows ? '.cmd' : '.sh'; + const templatePath = path.join(testTemplatesDir, `claude-free${templateExt}`); + const content = fs.readFileSync(templatePath, 'utf8'); + + expect(content).toContain('deepseek'); + expect(content).toContain('DEEPSEEK_API_KEY'); + }); + + test('claude-max template should clear alternative providers', () => { + const templateExt = isWindows ? '.cmd' : '.sh'; + const templatePath = path.join(testTemplatesDir, `claude-max${templateExt}`); + const content = fs.readFileSync(templatePath, 'utf8'); + + expect(content).toContain('ANTHROPIC_BASE_URL'); + // Should unset/clear the URL + if (isWindows) { + expect(content).toMatch(/set\s+["']?ANTHROPIC_BASE_URL=["']?\s*$/m); + } else { + expect(content).toContain('unset ANTHROPIC_BASE_URL'); + } + }); + }); + + describe('installLLMRouting()', () => { + test('should fail gracefully with missing templates directory', () => { + const mockProgress = jest.fn(); + const mockError = jest.fn(); + + const result = installLLMRouting({ + templatesDir: '/nonexistent/templates', + onProgress: mockProgress, + onError: mockError, + }); + + expect(result.success).toBe(false); + expect(result.errors.length).toBeGreaterThan(0); + expect(mockError).toHaveBeenCalled(); + }); + + test('should return proper result structure', () => { + const mockProgress = jest.fn(); + const mockError = jest.fn(); + + // Use actual templates dir + const result = 
installLLMRouting({ + templatesDir: testTemplatesDir, + onProgress: mockProgress, + onError: mockError, + }); + + // Should have expected properties + expect(result).toHaveProperty('success'); + expect(result).toHaveProperty('installDir'); + expect(result).toHaveProperty('filesInstalled'); + expect(result).toHaveProperty('errors'); + expect(Array.isArray(result.filesInstalled)).toBe(true); + expect(Array.isArray(result.errors)).toBe(true); + }); + }); + + describe('getInstallationSummary()', () => { + test('should return array of strings', () => { + const mockResult = { + success: true, + installDir: '/test/dir', + filesInstalled: ['claude-free', 'claude-max'], + envCreated: false, + errors: [], + }; + + const summary = getInstallationSummary(mockResult); + + expect(Array.isArray(summary)).toBe(true); + summary.forEach((line) => { + expect(typeof line).toBe('string'); + }); + }); + + test('should include success message for successful install', () => { + const mockResult = { + success: true, + installDir: '/test/dir', + filesInstalled: ['claude-free', 'claude-max'], + envCreated: false, + errors: [], + }; + + const summary = getInstallationSummary(mockResult); + const joined = summary.join('\n'); + + expect(joined).toContain('Complete'); + expect(joined).toContain('claude-max'); + expect(joined).toContain('claude-free'); + }); + + test('should include error messages for failed install', () => { + const mockResult = { + success: false, + installDir: '/test/dir', + filesInstalled: [], + envCreated: false, + errors: ['Test error 1', 'Test error 2'], + }; + + const summary = getInstallationSummary(mockResult); + const joined = summary.join('\n'); + + expect(joined).toContain('error'); + expect(joined).toContain('Test error 1'); + expect(joined).toContain('Test error 2'); + }); + }); + + describe('Cross-Platform Path Handling', () => { + test('should use path.join for cross-platform compatibility', () => { + // The install script should use path.join() not string 
concatenation + const installScript = fs.readFileSync( + path.join( + __dirname, + '../../../.aios-core/infrastructure/scripts/llm-routing/install-llm-routing.js', + ), + 'utf8', + ); + + expect(installScript).toContain('path.join'); + // Should use path.join for constructing file paths (not string concatenation) + // Allow legitimate path constants like '/usr/local/bin' + expect(installScript).not.toMatch(/path\s*\+\s*['"][\\/]/); // No path + '/...' + expect(installScript).not.toMatch(/['"][\\/]\s*\+\s*path/); // No '/...' + path + }); + }); +}); + +describe('Environment Variable Handling', () => { + const isWindows = os.platform() === 'win32'; + const testTemplatesDir = path.join( + __dirname, + '../../../.aios-core/infrastructure/scripts/llm-routing/templates', + ); + + test('claude-free should look for .env file', () => { + const templateExt = isWindows ? '.cmd' : '.sh'; + const templatePath = path.join(testTemplatesDir, `claude-free${templateExt}`); + const content = fs.readFileSync(templatePath, 'utf8'); + + expect(content).toContain('.env'); + }); + + test('claude-free should set ANTHROPIC_BASE_URL to DeepSeek', () => { + const templateExt = isWindows ? '.cmd' : '.sh'; + const templatePath = path.join(testTemplatesDir, `claude-free${templateExt}`); + const content = fs.readFileSync(templatePath, 'utf8'); + + expect(content).toContain('api.deepseek.com'); + }); + + test('claude-free should set API_TIMEOUT_MS', () => { + const templateExt = isWindows ? 
'.cmd' : '.sh'; + const templatePath = path.join(testTemplatesDir, `claude-free${templateExt}`); + const content = fs.readFileSync(templatePath, 'utf8'); + + expect(content).toContain('API_TIMEOUT_MS'); + }); +}); + +``` + +================================================== +📄 tests/integration/workflow-intelligence/pattern-learning.test.js +================================================== +```js +/** + * @fileoverview Integration tests for Pattern Learning System + * @story WIS-5 - Pattern Capture (Internal) + */ + +'use strict'; + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); + +describe('Pattern Learning Integration', () => { + let learningModule; + let testStoragePath; + + beforeAll(() => { + learningModule = require('../../../.aios-core/workflow-intelligence/learning'); + }); + + beforeEach(() => { + testStoragePath = path.join(os.tmpdir(), `integration-patterns-${Date.now()}.yaml`); + }); + + afterEach(() => { + try { + if (fs.existsSync(testStoragePath)) { + fs.unlinkSync(testStoragePath); + } + } catch (e) { + // Ignore cleanup errors + } + }); + + describe('Module Exports', () => { + it('should export high-level API functions', () => { + expect(learningModule.captureAndStore).toBeDefined(); + expect(learningModule.getLearnedPatterns).toBeDefined(); + expect(learningModule.findMatchingPatterns).toBeDefined(); + }); + + it('should export factory functions', () => { + expect(learningModule.createPatternCapture).toBeDefined(); + expect(learningModule.createPatternValidator).toBeDefined(); + expect(learningModule.createPatternStore).toBeDefined(); + }); + + it('should export classes', () => { + expect(learningModule.PatternCapture).toBeDefined(); + expect(learningModule.PatternValidator).toBeDefined(); + expect(learningModule.PatternStore).toBeDefined(); + }); + + it('should export constants', () => { + expect(learningModule.DEFAULT_MIN_SEQUENCE_LENGTH).toBe(3); + expect(learningModule.DEFAULT_MAX_PATTERNS).toBe(100); + 
expect(learningModule.PATTERN_STATUS).toBeDefined(); + }); + }); + + describe('End-to-End Pattern Capture Flow', () => { + it('should capture, validate, and store a workflow pattern', () => { + const { createPatternCapture, createPatternValidator, createPatternStore } = learningModule; + + const capture = createPatternCapture({ enabled: true }); + const validator = createPatternValidator(); + const store = createPatternStore({ storagePath: testStoragePath }); + + // Step 1: Capture session + const sessionData = { + commands: ['validate-story-draft', 'develop', 'review-qa', 'apply-qa-fixes'], + agentSequence: ['sm', 'dev', 'qa'], + success: true, + timestamp: Date.now(), + sessionId: 'integration-test-1', + }; + + const captureResult = capture.captureSession(sessionData); + expect(captureResult.valid).toBe(true); + expect(captureResult.pattern).toBeDefined(); + + // Step 2: Validate pattern + const validationResult = validator.validate(captureResult.pattern); + expect(validationResult.valid).toBe(true); + + // Step 3: Store pattern + const storeResult = store.save(captureResult.pattern); + expect(storeResult.action).toBe('created'); + + // Step 4: Verify retrieval + const loaded = store.load(); + expect(loaded.patterns).toHaveLength(1); + expect(loaded.patterns[0].sequence).toEqual(captureResult.pattern.sequence); + }); + + it('should detect and update duplicate patterns', () => { + const { createPatternCapture, createPatternStore } = learningModule; + + const capture = createPatternCapture({ enabled: true }); + const store = createPatternStore({ storagePath: testStoragePath }); + + const sessionData = { + commands: ['develop', 'review-qa', 'apply-qa-fixes'], + success: true, + }; + + // First capture + const result1 = capture.captureSession(sessionData); + store.save(result1.pattern); + + // Second capture (same sequence) + const result2 = capture.captureSession(sessionData); + const storeResult = store.save(result2.pattern); + + 
expect(storeResult.action).toBe('updated');
+      expect(storeResult.pattern.occurrences).toBe(2);
+
+      // Verify only one pattern exists
+      const loaded = store.load();
+      expect(loaded.patterns).toHaveLength(1);
+    });
+
+    it('should reject invalid patterns before storage', () => {
+      const { createPatternCapture, createPatternValidator } = learningModule;
+
+      const capture = createPatternCapture({ enabled: true, minSequenceLength: 2 });
+      const validator = createPatternValidator();
+
+      // Capture session with a low success rate; validation should reject it
+      // before it ever reaches a store.
+      const sessionData = {
+        commands: ['unknown1', 'unknown2'],
+        success: true,
+      };
+
+      const captureResult = capture.captureSession(sessionData);
+
+      if (captureResult.valid) {
+        // Manually set low success rate
+        captureResult.pattern.successRate = 0.3;
+
+        const validationResult = validator.validate(captureResult.pattern);
+        expect(validationResult.valid).toBe(false);
+      }
+    });
+  });
+
+  describe('Pattern Lifecycle Management', () => {
+    it('should transition pattern through lifecycle states', () => {
+      const { createPatternStore, PATTERN_STATUS } = learningModule;
+      const store = createPatternStore({ storagePath: testStoragePath });
+
+      // Create pattern (starts as pending)
+      const result = store.save({
+        sequence: ['develop', 'review-qa', 'apply-qa-fixes'],
+        occurrences: 5,
+        successRate: 0.95,
+      });
+
+      const patternId = result.pattern.id;
+      expect(result.pattern.status).toBe(PATTERN_STATUS.PENDING);
+
+      // Promote to active
+      store.updateStatus(patternId, PATTERN_STATUS.ACTIVE);
+      let pattern = store.load().patterns.find((p) => p.id === patternId);
+      expect(pattern.status).toBe(PATTERN_STATUS.ACTIVE);
+
+      // Promote to promoted
+      store.updateStatus(patternId, PATTERN_STATUS.PROMOTED);
+      pattern = store.load().patterns.find((p) => p.id === patternId);
+      expect(pattern.status).toBe(PATTERN_STATUS.PROMOTED);
+
+      // Deprecate
+      store.updateStatus(patternId, 
PATTERN_STATUS.DEPRECATED); + pattern = store.load().patterns.find((p) => p.id === patternId); + expect(pattern.status).toBe(PATTERN_STATUS.DEPRECATED); + }); + + it('should prune deprecated patterns first', () => { + const { createPatternStore } = learningModule; + const store = createPatternStore({ + storagePath: testStoragePath, + maxPatterns: 10, + }); + + // Add promoted pattern + const promoted = store.save({ sequence: ['a', 'b', 'c'] }); + store.updateStatus(promoted.pattern.id, 'promoted'); + + // Add active pattern + const active = store.save({ sequence: ['d', 'e', 'f'] }); + store.updateStatus(active.pattern.id, 'active'); + + // Add deprecated pattern + const deprecated = store.save({ sequence: ['g', 'h', 'i'] }); + store.updateStatus(deprecated.pattern.id, 'deprecated'); + + // Add pending patterns + for (let i = 0; i < 5; i++) { + store.save({ sequence: [`x${i}`, `y${i}`, `z${i}`] }); + } + + // Prune to 3 patterns + store.prune({ keepCount: 3 }); + + const remaining = store.load().patterns; + expect(remaining).toHaveLength(3); + + // Promoted and active should remain + expect(remaining.some((p) => p.status === 'promoted')).toBe(true); + expect(remaining.some((p) => p.status === 'active')).toBe(true); + + // Deprecated should be pruned + expect(remaining.some((p) => p.status === 'deprecated')).toBe(false); + }); + }); + + describe('Pattern Matching and Similarity', () => { + it('should find similar patterns for suggestions', () => { + const { createPatternStore } = learningModule; + const store = createPatternStore({ storagePath: testStoragePath }); + + // Store some patterns + store.save({ + sequence: ['develop', 'review-qa', 'apply-qa-fixes'], + status: 'active', + }); + store.save({ + sequence: ['develop', 'run-tests', 'review-qa'], + status: 'active', + }); + store.save({ + sequence: ['create-story', 'validate-story-draft', 'develop'], + status: 'active', + }); + + // Search for patterns starting with 'develop' + const matches = 
store.findSimilar(['develop', 'review-qa']); + + expect(matches.length).toBeGreaterThan(0); + expect(matches[0].similarity).toBeGreaterThan(0.5); + }); + + it('should return active patterns for SuggestionEngine', () => { + const { createPatternStore } = learningModule; + const store = createPatternStore({ storagePath: testStoragePath }); + + store.save({ sequence: ['a', 'b', 'c'], status: 'pending' }); + store.save({ sequence: ['d', 'e', 'f'], status: 'active' }); + store.save({ sequence: ['g', 'h', 'i'], status: 'promoted' }); + store.save({ sequence: ['j', 'k', 'l'], status: 'deprecated' }); + + const active = store.getActivePatterns(); + + expect(active).toHaveLength(2); + expect(active.every((p) => p.status === 'active' || p.status === 'promoted')).toBe(true); + }); + }); + + describe('Duplicate Detection', () => { + it('should detect exact duplicate patterns', () => { + const { createPatternValidator, createPatternStore } = learningModule; + const validator = createPatternValidator(); + const store = createPatternStore({ storagePath: testStoragePath }); + + const pattern1 = { + sequence: ['develop', 'review-qa', 'apply-qa-fixes'], + occurrences: 5, + }; + + store.save(pattern1); + + const pattern2 = { + sequence: ['develop', 'review-qa', 'apply-qa-fixes'], + }; + + const existing = store.load().patterns; + const duplicateCheck = validator.isDuplicate(pattern2, existing); + + expect(duplicateCheck.isDuplicate).toBe(true); + expect(duplicateCheck.exact).toBe(true); + }); + + it('should detect similar patterns above threshold', () => { + const { createPatternValidator, createPatternStore } = learningModule; + const validator = createPatternValidator(); + const store = createPatternStore({ storagePath: testStoragePath }); + + store.save({ + sequence: ['develop', 'review-qa', 'apply-qa-fixes'], + }); + + const similarPattern = { + sequence: ['develop', 'review-qa', 'run-tests'], + }; + + const existing = store.load().patterns; + const duplicateCheck = 
validator.isDuplicate(similarPattern, existing); + + // May or may not be duplicate based on similarity threshold + expect(typeof duplicateCheck.isDuplicate).toBe('boolean'); + }); + }); + + describe('Capture Hook Integration', () => { + let captureHook; + + beforeAll(() => { + captureHook = require('../../../.aios-core/workflow-intelligence/learning/capture-hook'); + }); + + it('should export hook functions', () => { + expect(captureHook.onTaskComplete).toBeDefined(); + expect(captureHook.markSessionFailed).toBeDefined(); + expect(captureHook.clearSession).toBeDefined(); + expect(captureHook.isEnabled).toBeDefined(); + }); + + it('should handle disabled state gracefully', async () => { + // Save original env + const originalEnv = process.env.AIOS_PATTERN_CAPTURE; + + try { + process.env.AIOS_PATTERN_CAPTURE = 'false'; + captureHook.reset(); + + const result = await captureHook.onTaskComplete('develop', {}); + expect(result.success).toBe(false); + expect(result.reason).toBe('disabled'); + } finally { + // Restore env + if (originalEnv !== undefined) { + process.env.AIOS_PATTERN_CAPTURE = originalEnv; + } else { + delete process.env.AIOS_PATTERN_CAPTURE; + } + captureHook.reset(); + } + }); + }); + + describe('WIS Integration', () => { + it('should be accessible from main WIS module', () => { + const wis = require('../../../.aios-core/workflow-intelligence'); + + expect(wis.learning).toBeDefined(); + expect(wis.learning.createPatternCapture).toBeDefined(); + expect(wis.learning.createPatternValidator).toBeDefined(); + expect(wis.learning.createPatternStore).toBeDefined(); + }); + }); + + describe('Performance Tests', () => { + it('should complete full capture-validate-store cycle in under 100ms', () => { + const { createPatternCapture, createPatternValidator, createPatternStore } = learningModule; + + const capture = createPatternCapture({ enabled: true }); + const validator = createPatternValidator(); + const store = createPatternStore({ storagePath: 
testStoragePath }); + + const sessionData = { + commands: ['validate-story-draft', 'develop', 'review-qa', 'apply-qa-fixes', 'run-tests'], + agentSequence: ['sm', 'dev', 'qa'], + success: true, + timestamp: Date.now(), + }; + + const start = Date.now(); + + const captureResult = capture.captureSession(sessionData); + const validationResult = validator.validate(captureResult.pattern); + if (validationResult.valid) { + store.save(captureResult.pattern); + } + + const duration = Date.now() - start; + + expect(duration).toBeLessThan(100); + }); + + it('should handle 100 patterns efficiently', () => { + const { createPatternStore } = learningModule; + const store = createPatternStore({ storagePath: testStoragePath }); + + const start = Date.now(); + + for (let i = 0; i < 100; i++) { + store.save({ + sequence: [`cmd${i}a`, `cmd${i}b`, `cmd${i}c`], + occurrences: Math.floor(Math.random() * 10) + 1, + successRate: 0.8 + Math.random() * 0.2, + }); + } + + const duration = Date.now() - start; + + expect(duration).toBeLessThan(2000); // 2 seconds for 100 patterns + expect(store.load().patterns.length).toBeLessThanOrEqual(100); + }); + + it('should find similar patterns in under 50ms for 100 patterns', () => { + const { createPatternStore } = learningModule; + const store = createPatternStore({ + storagePath: testStoragePath, + maxPatterns: 100, + }); + + // Pre-populate store + for (let i = 0; i < 50; i++) { + store.save({ + sequence: [`cmd${i}`, `next${i}`, `final${i}`], + }); + } + + const start = Date.now(); + store.findSimilar(['develop', 'review-qa', 'apply-qa-fixes']); + const duration = Date.now() - start; + + expect(duration).toBeLessThan(50); + }); + }); + + describe('Error Handling', () => { + it('should handle corrupted storage file gracefully', () => { + const { createPatternStore } = learningModule; + + // Write invalid YAML + fs.writeFileSync(testStoragePath, 'invalid: yaml: content: [[[', 'utf8'); + + const store = createPatternStore({ storagePath: 
testStoragePath }); + + // Should not throw, returns empty structure + const data = store.load(); + expect(data.patterns).toEqual([]); + }); + + it('should handle missing storage directory', () => { + const { createPatternStore } = learningModule; + const deepPath = path.join( + os.tmpdir(), + 'deep', + 'nested', + 'dir', + `patterns-${Date.now()}.yaml`, + ); + + const store = createPatternStore({ storagePath: deepPath }); + + // Should create directory and save + store.save({ sequence: ['a', 'b', 'c'] }); + + expect(fs.existsSync(deepPath)).toBe(true); + + // Cleanup + fs.unlinkSync(deepPath); + fs.rmdirSync(path.dirname(deepPath)); + fs.rmdirSync(path.dirname(path.dirname(deepPath))); + fs.rmdirSync(path.dirname(path.dirname(path.dirname(deepPath)))); + }); + }); +}); + +``` + +================================================== +📄 tests/integration/workflow-intelligence/wis-integration.test.js +================================================== +```js +/** + * @fileoverview Integration tests for Workflow Intelligence System + * @story WIS-3 - *next Task Implementation + */ + +'use strict'; + +const path = require('path'); +const fs = require('fs'); + +describe('WIS Integration', () => { + let wis; + + beforeAll(() => { + wis = require('../../../.aios-core/workflow-intelligence'); + }); + + describe('Full Suggestion Flow', () => { + it('should get suggestions using high-level API', () => { + const context = { + lastCommand: 'develop', + lastCommands: ['validate-story-draft', 'develop'], + agentId: '@dev', + projectState: { activeStory: true }, + }; + + const suggestions = wis.getSuggestions(context); + + expect(Array.isArray(suggestions)).toBe(true); + if (suggestions.length > 0) { + expect(suggestions[0]).toHaveProperty('command'); + expect(suggestions[0]).toHaveProperty('confidence'); + } + }); + + it('should match workflow from command history', () => { + const commands = ['validate-story-draft', 'develop']; + const match = wis.matchWorkflow(commands); + + if 
(match) { + expect(match).toHaveProperty('name'); + expect(match).toHaveProperty('workflow'); + expect(match).toHaveProperty('score'); + } + }); + + it('should find current state in workflow', () => { + const state = wis.findCurrentState('story_development', 'develop'); + + // State should be found or null + expect(state === null || typeof state === 'string').toBe(true); + }); + + it('should get next steps for workflow state', () => { + const steps = wis.getNextSteps('story_development', 'in_development'); + + expect(Array.isArray(steps)).toBe(true); + }); + }); + + describe('SuggestionEngine Integration', () => { + let engine; + + beforeEach(() => { + engine = wis.createSuggestionEngine(); + engine.invalidateCache(); + }); + + it('should build context and get suggestions', async () => { + const context = await engine.buildContext({ + agentId: 'dev', + }); + + const result = await engine.suggestNext(context); + + expect(result).toHaveProperty('workflow'); + expect(result).toHaveProperty('suggestions'); + }); + + it('should integrate with output formatter', () => { + const formatter = wis.outputFormatter; + + expect(formatter).toBeDefined(); + expect(typeof formatter.displaySuggestions).toBe('function'); + expect(typeof formatter.displayHelp).toBe('function'); + expect(typeof formatter.displayFallback).toBe('function'); + }); + }); + + describe('Registry and Scorer Integration', () => { + it('should load workflow patterns', () => { + const names = wis.getWorkflowNames(); + + expect(Array.isArray(names)).toBe(true); + expect(names.length).toBeGreaterThan(0); + expect(names).toContain('story_development'); + }); + + it('should get workflows by agent', () => { + const devWorkflows = wis.getWorkflowsByAgent('@dev'); + + expect(Array.isArray(devWorkflows)).toBe(true); + // Dev agent should be in at least one workflow + expect(devWorkflows.length).toBeGreaterThan(0); + }); + + it('should get registry stats', () => { + const stats = wis.getStats(); + + 
expect(stats).toHaveProperty('totalWorkflows'); + expect(stats).toHaveProperty('cacheValid'); + expect(stats.totalWorkflows).toBeGreaterThan(0); + }); + + it('should create and use scorer', () => { + const scorer = wis.createConfidenceScorer(); + + const suggestion = { + trigger: 'develop', + agentSequence: ['po', 'dev', 'qa'], + }; + + const context = { + lastCommand: 'develop', + agentId: '@dev', + }; + + const score = scorer.score(suggestion, context); + + expect(typeof score).toBe('number'); + expect(score).toBeGreaterThanOrEqual(0); + expect(score).toBeLessThanOrEqual(1); + }); + }); + + describe('Cache Behavior', () => { + it('should cache workflow patterns', () => { + // First call loads from file + const stats1 = wis.getStats(); + expect(stats1.cacheValid).toBe(true); + + // Second call uses cache + const stats2 = wis.getStats(); + expect(stats2.cacheValid).toBe(true); + }); + + it('should invalidate cache on request', () => { + // Prime the cache + wis.getWorkflowNames(); + + // Invalidate + wis.invalidateCache(); + + // Get fresh stats (will reload) + const stats = wis.getStats(); + expect(stats.cacheValid).toBe(true); + }); + }); + + describe('Error Handling', () => { + it('should handle empty context gracefully', () => { + const suggestions = wis.getSuggestions({}); + + expect(Array.isArray(suggestions)).toBe(true); + }); + + it('should handle null context', () => { + const suggestions = wis.getSuggestions(null); + + expect(Array.isArray(suggestions)).toBe(true); + expect(suggestions.length).toBe(0); + }); + + it('should handle non-existent workflow', () => { + const workflow = wis.getWorkflow('non_existent_workflow'); + + expect(workflow).toBeNull(); + }); + + it('should handle non-existent state', () => { + const steps = wis.getNextSteps('story_development', 'non_existent_state'); + + expect(Array.isArray(steps)).toBe(true); + expect(steps.length).toBe(0); + }); + }); + + describe('Constants Export', () => { + it('should export all constants', () => { 
+ expect(wis.SCORING_WEIGHTS).toBeDefined(); + expect(wis.DEFAULT_CACHE_TTL).toBeDefined(); + expect(wis.SUGGESTION_CACHE_TTL).toBeDefined(); + expect(wis.LOW_CONFIDENCE_THRESHOLD).toBeDefined(); + }); + + it('should export classes', () => { + expect(wis.WorkflowRegistry).toBeDefined(); + expect(wis.ConfidenceScorer).toBeDefined(); + expect(wis.SuggestionEngine).toBeDefined(); + }); + + it('should export factory functions', () => { + expect(typeof wis.createWorkflowRegistry).toBe('function'); + expect(typeof wis.createConfidenceScorer).toBe('function'); + expect(typeof wis.createSuggestionEngine).toBe('function'); + }); + }); +}); + +describe('WIS Performance', () => { + let wis; + + beforeAll(() => { + wis = require('../../../.aios-core/workflow-intelligence'); + }); + + it('should complete getSuggestions within 100ms', () => { + const context = { + lastCommand: 'develop', + lastCommands: ['develop'], + agentId: '@dev', + projectState: {}, + }; + + const start = Date.now(); + wis.getSuggestions(context); + const duration = Date.now() - start; + + expect(duration).toBeLessThan(100); + }); + + it('should complete matchWorkflow within 50ms', () => { + const commands = ['validate-story-draft', 'develop']; + + const start = Date.now(); + wis.matchWorkflow(commands); + const duration = Date.now() - start; + + expect(duration).toBeLessThan(50); + }); + + it('should complete registry load within 200ms (cold start)', () => { + wis.invalidateCache(); + + const start = Date.now(); + wis.getWorkflowNames(); + const duration = Date.now() - start; + + expect(duration).toBeLessThan(200); + }); + + it('should complete SuggestionEngine full flow within 100ms', async () => { + const engine = wis.createSuggestionEngine(); + engine.invalidateCache(); + + const start = Date.now(); + const context = await engine.buildContext({ agentId: 'dev' }); + await engine.suggestNext(context); + const duration = Date.now() - start; + + expect(duration).toBeLessThan(100); + }); +}); + +``` + 
+================================================== +📄 tests/integration/hooks/precompact-flow.integration.test.js +================================================== +```js +/** + * PreCompact Hook Integration Tests + * Story MIS-3: Session Digest (PreCompact Hook) + * + * End-to-end tests validating the complete flow: + * 1. PreCompact hook fires + * 2. Pro detection works + * 3. Digest extractor runs + * 4. YAML file is created + */ + +const fs = require('fs').promises; +const path = require('path'); +const yaml = require('yaml'); +const { onPreCompact } = require('../../../.aios-core/hooks/unified/runners/precompact-runner'); +const proDetector = require('../../../bin/utils/pro-detector'); + +describe('PreCompact Hook Integration', () => { + const TEST_PROJECT_DIR = path.join(__dirname, '..', '..', '..', '.aios-test'); + const TEST_DIGESTS_DIR = path.join(TEST_PROJECT_DIR, '.aios', 'session-digests'); + + beforeAll(async () => { + // Create test project directory + await fs.mkdir(TEST_DIGESTS_DIR, { recursive: true }); + }); + + afterAll(async () => { + // Cleanup test directory + try { + await fs.rm(TEST_PROJECT_DIR, { recursive: true, force: true }); + } catch (err) { + // Ignore cleanup errors + } + }); + + beforeEach(() => { + jest.clearAllMocks(); + jest.spyOn(console, 'log').mockImplementation(() => {}); + jest.spyOn(console, 'error').mockImplementation(() => {}); + }); + + afterEach(() => { + console.log.mockRestore(); + console.error.mockRestore(); + }); + + describe('End-to-End Flow (with aios-pro)', () => { + it('should create digest file when pro is available', async () => { + // This test requires actual aios-pro to be present + const proAvailable = proDetector.isProAvailable(); + + if (!proAvailable) { + console.log('[Integration Test] Skipping: aios-pro not available'); + return; // Skip test if pro not available + } + + const context = { + sessionId: 'integration-test-session', + projectDir: TEST_PROJECT_DIR, + conversation: { + messages: [ + { 
role: 'user', content: 'Actually, tests should expect null' }, + { role: 'assistant', content: 'You\'re right, I\'ll update the tests' }, + { role: 'user', content: 'How do I run the tests?' }, + { role: 'assistant', content: 'Run npm test' }, + ], + }, + metadata: { + sessionStart: Date.now() - 60000, + compactTrigger: 'context_limit_90%', + activeAgent: '@dev', + activeStory: 'MIS-3', + }, + }; + + // Execute hook + await onPreCompact(context); + + // Wait for async digest to complete + await new Promise(resolve => setImmediate(resolve)); + await new Promise(resolve => setTimeout(resolve, 100)); + + // Verify digest file was created + const files = await fs.readdir(TEST_DIGESTS_DIR); + const digestFile = files.find(f => f.startsWith('integration-test-session')); + + expect(digestFile).toBeDefined(); + + // Read and verify digest content + const digestPath = path.join(TEST_DIGESTS_DIR, digestFile); + const digestContent = await fs.readFile(digestPath, 'utf8'); + + // Verify YAML structure + expect(digestContent).toContain('---'); // Frontmatter delimiter + expect(digestContent).toContain('schema_version:'); + expect(digestContent).toContain('## User Corrections'); + expect(digestContent).toContain('## Patterns Observed'); + expect(digestContent).toContain('## Axioms Learned'); + expect(digestContent).toContain('## Context Snapshot'); + + // Parse frontmatter + const frontmatterMatch = digestContent.match(/^---\n([\s\S]+?)\n---/); + expect(frontmatterMatch).toBeTruthy(); + + const frontmatter = yaml.parse(frontmatterMatch[1]); + expect(frontmatter.schema_version).toBe('1.0'); + expect(frontmatter.session_id).toBe('integration-test-session'); + expect(frontmatter.agent_context).toContain('@dev'); + + // Cleanup + await fs.unlink(digestPath); + }, 10000); // 10s timeout for async operations + + it('should handle graceful degradation when pro not available', async () => { + // Skip this test if pro is actually available (can't mock in integration test) + const 
proAvailable = proDetector.isProAvailable(); + if (proAvailable) { + // If pro is available, we can't properly test the no-pro path in integration + // This scenario is already covered in unit tests + console.log('[Test] Skipping graceful degradation test - aios-pro is available'); + return; + } + + const context = { + sessionId: 'test-no-pro-session', + projectDir: TEST_PROJECT_DIR, + conversation: { messages: [] }, + }; + + // Should not throw + await expect(onPreCompact(context)).resolves.toBeUndefined(); + + // Should log graceful message + expect(console.log).toHaveBeenCalledWith( + expect.stringContaining('aios-pro not available'), + ); + + // No digest file should be created + const files = await fs.readdir(TEST_DIGESTS_DIR); + const digestFile = files.find(f => f.startsWith('test-no-pro-session')); + + expect(digestFile).toBeUndefined(); + }); + }); + + describe('Performance Benchmarking', () => { + it('should complete digest extraction within 5 seconds', async () => { + const proAvailable = proDetector.isProAvailable(); + + if (!proAvailable) { + console.log('[Performance Test] Skipping: aios-pro not available'); + return; + } + + // Create a large conversation (100 messages) + const largeConversation = { + messages: Array.from({ length: 100 }, (_, i) => ({ + role: i % 2 === 0 ? 
'user' : 'assistant', + content: `Message ${i}: This is a test message with some content to analyze.`, + })), + }; + + const context = { + sessionId: 'performance-test-session', + projectDir: TEST_PROJECT_DIR, + conversation: largeConversation, + metadata: { + sessionStart: Date.now() - 300000, // 5 minutes ago + }, + }; + + // Execute hook + await onPreCompact(context); + + // Wait for async digest + const startAsyncTime = Date.now(); + await new Promise(resolve => setImmediate(resolve)); + await new Promise(resolve => setTimeout(resolve, 100)); // Wait for digest to complete + + const asyncDuration = Date.now() - startAsyncTime; + + // Verify performance requirement (< 5s for async completion) + // Note: We only measure async execution time, not the full wait + expect(asyncDuration).toBeLessThan(5000); + + // Cleanup + const files = await fs.readdir(TEST_DIGESTS_DIR); + const digestFile = files.find(f => f.startsWith('performance-test-session')); + if (digestFile) { + await fs.unlink(path.join(TEST_DIGESTS_DIR, digestFile)); + } + }, 10000); // 10s timeout + + it('should not block compact operation (async fire-and-forget)', async () => { + const proAvailable = proDetector.isProAvailable(); + + if (!proAvailable) { + console.log('[Async Test] Skipping: aios-pro not available'); + return; + } + + const context = { + sessionId: 'async-test-session', + projectDir: TEST_PROJECT_DIR, + conversation: { + messages: Array.from({ length: 50 }, () => ({ + role: 'user', + content: 'Test message', + })), + }, + }; + + const startTime = Date.now(); + + // onPreCompact should return immediately + await onPreCompact(context); + + const returnTime = Date.now() - startTime; + + // Should return in < 50ms (fire-and-forget) + expect(returnTime).toBeLessThan(50); + + // Wait for digest to be created, then cleanup + await new Promise(resolve => setTimeout(resolve, 200)); + try { + const files = await fs.readdir(TEST_DIGESTS_DIR); + const digestFile = files.find(f => 
f.startsWith('async-test-session')); + if (digestFile) { + await fs.unlink(path.join(TEST_DIGESTS_DIR, digestFile)); + } + } catch (err) { + // Ignore cleanup errors + } + }); + }); + + describe('Schema Validation', () => { + it('should generate digest with valid schema v1.0', async () => { + const proAvailable = proDetector.isProAvailable(); + + if (!proAvailable) { + console.log('[Schema Test] Skipping: aios-pro not available'); + return; + } + + const context = { + sessionId: 'schema-test-session', + projectDir: TEST_PROJECT_DIR, + conversation: { + messages: [ + { role: 'user', content: 'Test correction message' }, + ], + }, + metadata: { + sessionStart: Date.now(), + }, + }; + + await onPreCompact(context); + + // Wait for digest + await new Promise(resolve => setImmediate(resolve)); + await new Promise(resolve => setTimeout(resolve, 100)); + + // Read digest + const files = await fs.readdir(TEST_DIGESTS_DIR); + const digestFile = files.find(f => f.startsWith('schema-test-session')); + + expect(digestFile).toBeDefined(); + + const digestPath = path.join(TEST_DIGESTS_DIR, digestFile); + const digestContent = await fs.readFile(digestPath, 'utf8'); + + // Validate schema fields + const frontmatterMatch = digestContent.match(/^---\n([\s\S]+?)\n---/); + const frontmatter = yaml.parse(frontmatterMatch[1]); + + // Required schema v1.0 fields + expect(frontmatter).toMatchObject({ + schema_version: '1.0', + session_id: expect.any(String), + timestamp: expect.stringMatching(/^\d{4}-\d{2}-\d{2}T/), + duration_minutes: expect.any(Number), + agent_context: expect.any(String), + compact_trigger: expect.any(String), + }); + + // Cleanup + await fs.unlink(digestPath); + }, 10000); + }); +}); + +``` + +================================================== +📄 tests/integration/squad/squad-download-publish.test.js +================================================== +```js +/** + * Integration Tests for Squad Download & Publish + * + * These tests verify the integration between 
SquadDownloader, SquadPublisher, + * and other squad components (SquadValidator, SquadLoader). + * + * Note: Network-dependent tests are skipped by default. + * Set AIOS_INTEGRATION_TESTS=true to run network tests. + * + * @see Story SQS-6: Download & Publish Tasks + */ + +const path = require('path'); +const fs = require('fs').promises; + +// Mock child_process for GitHub CLI auth in CI environments +jest.mock('child_process', () => { + const actual = jest.requireActual('child_process'); + return { + ...actual, + execSync: jest.fn((cmd, opts) => { + // Mock gh auth status to return authenticated + if (cmd === 'gh auth status') { + return 'Logged in to github.com as ci-test-user'; + } + // Fall through to actual for other commands + return actual.execSync(cmd, opts); + }), + }; +}); + +const { + SquadDownloader, + SquadPublisher, + SquadValidator, + SquadLoader, +} = require('../../../.aios-core/development/scripts/squad'); + +// Test paths - use unique directory to avoid parallel test collisions +const FIXTURES_PATH = path.join(__dirname, '..', 'fixtures', 'squad'); +const TEMP_PATH = path.join(__dirname, 'temp-download-publish'); + +// Check if network tests should run +const RUN_NETWORK_TESTS = process.env.AIOS_INTEGRATION_TESTS === 'true'; + +describe('Squad Download & Publish Integration', () => { + let downloader; + let publisher; + let validator; + let loader; + + beforeAll(async () => { + // Create temp directory + await fs.mkdir(TEMP_PATH, { recursive: true }); + }); + + afterAll(async () => { + // Clean up temp directory + try { + await fs.rm(TEMP_PATH, { recursive: true, force: true }); + } catch { + // Ignore + } + }); + + beforeEach(() => { + downloader = new SquadDownloader({ + squadsPath: TEMP_PATH, + verbose: false, + }); + + publisher = new SquadPublisher({ + verbose: false, + dryRun: true, // Always dry-run in tests + }); + + validator = new SquadValidator({ + verbose: false, + }); + + loader = new SquadLoader({ + squadsPath: TEMP_PATH, + }); + 
}); + + afterEach(async () => { + // Clean temp directory between tests + try { + const entries = await fs.readdir(TEMP_PATH); + for (const entry of entries) { + await fs.rm(path.join(TEMP_PATH, entry), { recursive: true, force: true }); + } + } catch { + // Directory may not exist, ignore + } + }); + + describe('Round-trip: create -> validate -> publish (Test 4.3)', () => { + it('should complete full publish workflow with dry-run', async () => { + // Step 1: Create a new squad manually (instead of using generator with template) + const squadPath = path.join(TEMP_PATH, 'integration-test-squad'); + await fs.mkdir(squadPath, { recursive: true }); + await fs.mkdir(path.join(squadPath, 'tasks'), { recursive: true }); + await fs.mkdir(path.join(squadPath, 'agents'), { recursive: true }); + + await fs.writeFile( + path.join(squadPath, 'squad.yaml'), + `name: integration-test-squad +version: 1.0.0 +description: Squad created for integration testing +author: Test Suite +license: MIT +components: + tasks: + - sample-task.md +`, + ); + + await fs.writeFile( + path.join(squadPath, 'tasks', 'sample-task.md'), + `--- +task: Sample Task +responsavel: "@integration" +responsavel_type: agent +atomic_layer: task +Entrada: | + - input: Test input +Saida: | + - output: Test output +Checklist: + - "[ ] Sample step" +--- + +# Sample Task + +This is a sample task for integration testing. 
+`, + ); + + expect( + await fs + .access(squadPath) + .then(() => true) + .catch(() => false), + ).toBe(true); + + // Step 2: Validate the created squad + const validationResult = await validator.validate(squadPath); + + // May have warnings but should be structurally valid + expect(validationResult).toHaveProperty('valid'); + expect(validationResult).toHaveProperty('errors'); + expect(validationResult).toHaveProperty('warnings'); + + // Step 3: Load manifest to verify structure + const manifest = await loader.loadManifest(squadPath); + + expect(manifest.name).toBe('integration-test-squad'); + expect(manifest.version).toBeDefined(); + + // Step 4: Publish with dry-run (gh auth is mocked globally) + const publishResult = await publisher.publish(squadPath); + + expect(publishResult.prUrl).toBe('[dry-run] PR would be created'); + expect(publishResult.branch).toContain('squad/integration-test-squad'); + expect(publishResult.manifest.name).toBe('integration-test-squad'); + expect(publishResult.preview).toBeDefined(); + }); + }); + + describe('List available squads from registry (Test 4.4)', () => { + // This test requires network access + const conditionalTest = RUN_NETWORK_TESTS ? it : it.skip; + + conditionalTest( + 'should list squads from aios-squads registry', + async () => { + const squads = await downloader.listAvailable(); + + expect(squads).toBeInstanceOf(Array); + // Registry should have at least some squads + // This may fail if registry is empty + expect(squads.length).toBeGreaterThanOrEqual(0); + + // If there are squads, verify structure + if (squads.length > 0) { + const squad = squads[0]; + expect(squad).toHaveProperty('name'); + expect(squad).toHaveProperty('version'); + expect(squad).toHaveProperty('type'); + } + }, + 30000, + ); // 30 second timeout for network + }); + + describe('Download squad from registry (Test 4.1)', () => { + // This test requires network access + const conditionalTest = RUN_NETWORK_TESTS ? 
it : it.skip; + + conditionalTest( + 'should download squad from registry', + async () => { + // First, get available squads + const squads = await downloader.listAvailable(); + + if (squads.length === 0) { + console.log('No squads available in registry, skipping download test'); + return; + } + + // Try to download the first available squad + const squadToDownload = squads[0].name; + const result = await downloader.download(squadToDownload, { validate: true }); + + expect(result.path).toBeDefined(); + expect(result.path).toContain(squadToDownload); + + // Verify files were downloaded + const manifestExists = await fs + .access(path.join(result.path, 'squad.yaml')) + .then(() => true) + .catch(() => + fs + .access(path.join(result.path, 'config.yaml')) + .then(() => true) + .catch(() => false), + ); + + expect(manifestExists).toBe(true); + }, + 60000, + ); // 60 second timeout for network + }); + + describe('Publish flow with dry-run (Test 4.2)', () => { + it('should complete dry-run publish without network calls', async () => { + // Use existing fixture squad + const squadPath = path.join(FIXTURES_PATH, 'valid-squad'); + + // Check if fixture exists + const fixtureExists = await fs + .access(squadPath) + .then(() => true) + .catch(() => false); + + if (!fixtureExists) { + // Create a minimal squad for testing + const testSquadPath = path.join(TEMP_PATH, 'dry-run-test-squad'); + await fs.mkdir(testSquadPath, { recursive: true }); + await fs.mkdir(path.join(testSquadPath, 'tasks'), { recursive: true }); + await fs.mkdir(path.join(testSquadPath, 'agents'), { recursive: true }); + + await fs.writeFile( + path.join(testSquadPath, 'squad.yaml'), + `name: dry-run-test-squad +version: 1.0.0 +description: Test squad for dry-run publish +author: Test Suite +components: + tasks: + - sample-task.md +`, + ); + + await fs.writeFile( + path.join(testSquadPath, 'tasks', 'sample-task.md'), + `--- +task: Sample Task +responsavel: "@test" +responsavel_type: agent +atomic_layer: task 
+Entrada: | + - input: Test input +Saida: | + - output: Test output +Checklist: + - "[ ] Sample step" +--- + +# Sample Task + +This is a sample task for testing. +`, + ); + + // gh auth is mocked globally + const result = await publisher.publish(testSquadPath); + + expect(result.prUrl).toBe('[dry-run] PR would be created'); + expect(result.preview.title).toBe('Add squad: dry-run-test-squad'); + expect(result.preview.body).toContain('dry-run-test-squad'); + } else { + // gh auth is mocked globally + const result = await publisher.publish(squadPath); + + expect(result.prUrl).toBe('[dry-run] PR would be created'); + expect(result.preview).toBeDefined(); + } + }); + }); + + describe('Validation integration', () => { + it('should validate downloaded squad', async () => { + // Create a mock downloaded squad + const mockSquadPath = path.join(TEMP_PATH, 'mock-downloaded-squad'); + await fs.mkdir(mockSquadPath, { recursive: true }); + await fs.mkdir(path.join(mockSquadPath, 'tasks'), { recursive: true }); + await fs.mkdir(path.join(mockSquadPath, 'agents'), { recursive: true }); + + await fs.writeFile( + path.join(mockSquadPath, 'squad.yaml'), + `name: mock-downloaded-squad +version: 1.0.0 +description: Mock squad simulating download +author: Registry +components: + tasks: + - sample-task.md +`, + ); + + await fs.writeFile( + path.join(mockSquadPath, 'tasks', 'sample-task.md'), + `--- +task: Sample Task +responsavel: "@agent" +responsavel_type: agent +atomic_layer: task +Entrada: | + - input: Test +Saida: | + - output: Test +Checklist: + - "[ ] Step" +--- + +# Sample Task +`, + ); + + // Validate the mock squad + const result = await validator.validate(mockSquadPath); + + expect(result).toHaveProperty('valid'); + expect(result).toHaveProperty('errors'); + expect(result).toHaveProperty('warnings'); + }); + }); + + describe('Error handling integration', () => { + it('should handle validation errors in publish workflow', async () => { + // Create invalid squad + const 
invalidSquadPath = path.join(TEMP_PATH, 'invalid-publish-squad'); + await fs.mkdir(invalidSquadPath, { recursive: true }); + // No manifest file - will fail validation + + await expect(publisher.publish(invalidSquadPath)).rejects.toThrow(); + }); + + it('should check auth status in publish workflow', async () => { + // Create valid squad + const validSquadPath = path.join(TEMP_PATH, 'auth-test-squad'); + await fs.mkdir(validSquadPath, { recursive: true }); + await fs.mkdir(path.join(validSquadPath, 'tasks'), { recursive: true }); + + await fs.writeFile( + path.join(validSquadPath, 'squad.yaml'), + `name: auth-test-squad +version: 1.0.0 +description: Test +author: Test +`, + ); + + await fs.writeFile( + path.join(validSquadPath, 'tasks', 'task.md'), + `--- +task: Test +responsavel: "@agent" +responsavel_type: agent +atomic_layer: task +Entrada: "test" +Saida: "test" +Checklist: [] +--- +# Test +`, + ); + + // Test that checkAuth method works + const testPublisher = new SquadPublisher({ + dryRun: true, // Use dry-run to avoid actual PR creation + }); + + // checkAuth should be callable + const authResult = await testPublisher.checkAuth(); + expect(authResult).toHaveProperty('authenticated'); + expect(authResult).toHaveProperty('username'); + }); + }); + + describe('Cache behavior', () => { + it('should cache registry between calls', async () => { + // Mock https for this test + const https = require('https'); + const originalGet = https.get; + + let callCount = 0; + https.get = jest.fn().mockImplementation((url, options, callback) => { + callCount++; + const response = { + statusCode: 200, + headers: {}, + on: jest.fn((event, cb) => { + if (event === 'data') { + // Return Buffer to match actual HTTPS response behavior + cb( + Buffer.from( + JSON.stringify({ version: '1.0.0', squads: { official: [], community: [] } }), + 'utf-8', + ), + ); + } + if (event === 'end') { + cb(); + } + return response; + }), + }; + callback(response); + return { on: jest.fn() }; + }); + + 
// First call + await downloader.fetchRegistry(); + + // Second call should use cache + await downloader.fetchRegistry(); + + expect(callCount).toBe(1); // Only one network call + + // Restore + https.get = originalGet; + }); + + it('should allow cache clearing', async () => { + // Mock https + const https = require('https'); + const originalGet = https.get; + + let callCount = 0; + https.get = jest.fn().mockImplementation((url, options, callback) => { + callCount++; + const response = { + statusCode: 200, + headers: {}, + on: jest.fn((event, cb) => { + if (event === 'data') { + // Return Buffer to match actual HTTPS response behavior + cb( + Buffer.from( + JSON.stringify({ version: '1.0.0', squads: { official: [], community: [] } }), + 'utf-8', + ), + ); + } + if (event === 'end') { + cb(); + } + return response; + }), + }; + callback(response); + return { on: jest.fn() }; + }); + + // First call + await downloader.fetchRegistry(); + + // Clear cache + downloader.clearCache(); + + // Second call should make new request + await downloader.fetchRegistry(); + + expect(callCount).toBe(2); + + // Restore + https.get = originalGet; + }); + }); +}); + +``` + +================================================== +📄 tests/integration/squad/squad-migration.test.js +================================================== +```js +/** + * Integration Tests for Squad Migration + * + * Tests the full migration workflow including: + * - Full migration of legacy squad + * - Migration of already-compliant squad (no-op) + * - Rollback from backup works + * + * @see Story SQS-7: Squad Migration Tool + */ + +const path = require('path'); +const fs = require('fs').promises; +const os = require('os'); +const yaml = require('js-yaml'); +const { SquadMigrator, SquadValidator } = require('../../../.aios-core/development/scripts/squad'); + +// Test fixtures path (reuse from unit tests) +const FIXTURES_PATH = path.join(__dirname, '../../unit/squad/fixtures'); + +describe('Squad Migration 
Integration Tests', () => { + let tempDir; + + beforeEach(async () => { + // Create temp directory for each test + tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'squad-migration-int-')); + }); + + afterEach(async () => { + // Clean up temp directory + if (tempDir) { + try { + await fs.rm(tempDir, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + } + }); + + describe('Full Migration of Legacy Squad (AC 5.1)', () => { + it('should migrate legacy squad end-to-end', async () => { + // Setup: Copy legacy squad to temp dir + const srcPath = path.join(FIXTURES_PATH, 'legacy-squad'); + const testPath = path.join(tempDir, 'legacy-migration-test'); + await copyRecursive(srcPath, testPath); + + // Verify initial state + expect(await pathExists(path.join(testPath, 'config.yaml'))).toBe(true); + expect(await pathExists(path.join(testPath, 'squad.yaml'))).toBe(false); + + // Create migrator with validator for post-migration validation + const validator = new SquadValidator(); + const migrator = new SquadMigrator({ validator }); + + // Execute migration + const result = await migrator.migrate(testPath); + + // Verify migration success + expect(result.success).toBe(true); + expect(result.backupPath).not.toBeNull(); + + // Verify manifest renamed + expect(await pathExists(path.join(testPath, 'squad.yaml'))).toBe(true); + expect(await pathExists(path.join(testPath, 'config.yaml'))).toBe(false); + + // Verify directories created + expect(await pathExists(path.join(testPath, 'tasks'))).toBe(true); + expect(await pathExists(path.join(testPath, 'agents'))).toBe(true); + + // Verify fields added + const content = await fs.readFile(path.join(testPath, 'squad.yaml'), 'utf-8'); + const manifest = yaml.load(content); + expect(manifest.aios?.type).toBe('squad'); + expect(manifest.aios?.minVersion).toBe('2.1.0'); + + // Verify backup exists and contains original files + expect(await pathExists(result.backupPath)).toBe(true); + expect(await 
pathExists(path.join(result.backupPath, 'config.yaml'))).toBe(true); + }); + + it('should pass post-migration validation', async () => { + // Setup: Copy legacy squad to temp dir + const srcPath = path.join(FIXTURES_PATH, 'legacy-squad'); + const testPath = path.join(tempDir, 'validation-test'); + await copyRecursive(srcPath, testPath); + + // Create migrator with validator + const validator = new SquadValidator(); + const migrator = new SquadMigrator({ validator }); + + // Execute migration + const result = await migrator.migrate(testPath); + + // Verify validation ran and passed (may have warnings but no errors) + expect(result.validation).toBeDefined(); + // The migrated squad should be valid (valid: true) or have no schema errors + // Note: May have warnings for missing tasks/agents content + }); + }); + + describe('Migration of Already-Compliant Squad (AC 5.2)', () => { + it('should be a no-op for compliant squad', async () => { + // Setup: Copy complete squad to temp dir + const srcPath = path.join(FIXTURES_PATH, 'complete-squad'); + const testPath = path.join(tempDir, 'compliant-test'); + await copyRecursive(srcPath, testPath); + + // Record initial state + const initialContent = await fs.readFile(path.join(testPath, 'squad.yaml'), 'utf-8'); + + // Create migrator + const migrator = new SquadMigrator(); + + // Execute migration + const result = await migrator.migrate(testPath); + + // Verify no migration needed + expect(result.success).toBe(true); + expect(result.message).toContain('already up to date'); + expect(result.actions).toHaveLength(0); + expect(result.backupPath).toBeNull(); + + // Verify file unchanged + const finalContent = await fs.readFile(path.join(testPath, 'squad.yaml'), 'utf-8'); + expect(finalContent).toBe(initialContent); + }); + }); + + describe('Rollback from Backup (AC 5.3)', () => { + it('should be able to restore from backup after migration', async () => { + // Setup: Copy legacy squad to temp dir + const srcPath = 
path.join(FIXTURES_PATH, 'legacy-squad'); + const testPath = path.join(tempDir, 'rollback-test'); + await copyRecursive(srcPath, testPath); + + // Record original config.yaml content + const originalContent = await fs.readFile(path.join(testPath, 'config.yaml'), 'utf-8'); + + // Create migrator and execute migration + const migrator = new SquadMigrator(); + const result = await migrator.migrate(testPath); + + // Verify migration happened + expect(result.success).toBe(true); + expect(result.backupPath).not.toBeNull(); + expect(await pathExists(path.join(testPath, 'squad.yaml'))).toBe(true); + + // Perform rollback manually + // 1. Remove migrated files + await fs.unlink(path.join(testPath, 'squad.yaml')); + await fs.rm(path.join(testPath, 'tasks'), { recursive: true, force: true }); + await fs.rm(path.join(testPath, 'agents'), { recursive: true, force: true }); + + // 2. Restore from backup + const backupFiles = await fs.readdir(result.backupPath); + for (const file of backupFiles) { + const backupFilePath = path.join(result.backupPath, file); + const restorePath = path.join(testPath, file); + await copyRecursive(backupFilePath, restorePath); + } + + // Verify rollback worked + expect(await pathExists(path.join(testPath, 'config.yaml'))).toBe(true); + expect(await pathExists(path.join(testPath, 'squad.yaml'))).toBe(false); + + // Verify content matches original + const restoredContent = await fs.readFile(path.join(testPath, 'config.yaml'), 'utf-8'); + expect(restoredContent).toBe(originalContent); + }); + + it('should preserve backup after successful migration', async () => { + // Setup: Copy legacy squad to temp dir + const srcPath = path.join(FIXTURES_PATH, 'legacy-squad'); + const testPath = path.join(tempDir, 'backup-preserve-test'); + await copyRecursive(srcPath, testPath); + + // Execute migration + const migrator = new SquadMigrator(); + const result = await migrator.migrate(testPath); + + // Verify backup still exists after migration + expect(await 
pathExists(result.backupPath)).toBe(true); + + // Verify backup is complete + const backupFiles = await fs.readdir(result.backupPath); + expect(backupFiles).toContain('config.yaml'); + }); + }); + + describe('End-to-End Workflow', () => { + it('should support analyze → migrate → validate workflow', async () => { + // Setup: Copy legacy squad to temp dir + const srcPath = path.join(FIXTURES_PATH, 'legacy-squad'); + const testPath = path.join(tempDir, 'workflow-test'); + await copyRecursive(srcPath, testPath); + + const validator = new SquadValidator(); + const migrator = new SquadMigrator({ validator, verbose: false }); + + // Step 1: Analyze + const analysis = await migrator.analyze(testPath); + expect(analysis.needsMigration).toBe(true); + expect(analysis.issues.length).toBeGreaterThan(0); + expect(analysis.actions.length).toBeGreaterThan(0); + + // Step 2: Generate analysis report + const analysisReport = migrator.generateReport(analysis); + expect(analysisReport).toContain('ISSUES FOUND'); + expect(analysisReport).toContain('PLANNED ACTIONS'); + + // Step 3: Execute migration + const result = await migrator.migrate(testPath); + expect(result.success).toBe(true); + + // Step 4: Generate final report + const finalReport = migrator.generateReport(analysis, result); + expect(finalReport).toContain('MIGRATION RESULT'); + expect(finalReport).toContain('SUCCESS'); + + // Step 5: Validate independently + const validationResult = await validator.validate(testPath); + // Should be valid (no schema errors) after migration + expect(validationResult).toBeDefined(); + }); + + it('should support dry-run → review → migrate workflow', async () => { + // Setup: Copy legacy squad to temp dir + const srcPath = path.join(FIXTURES_PATH, 'legacy-squad'); + const testPath = path.join(tempDir, 'dryrun-workflow-test'); + await copyRecursive(srcPath, testPath); + + // Step 1: Dry-run migration + const dryRunMigrator = new SquadMigrator({ dryRun: true }); + const dryRunResult = await 
dryRunMigrator.migrate(testPath); + + // Verify dry-run didn't modify files + expect(dryRunResult.success).toBe(true); + expect(dryRunResult.actions.every((a) => a.status === 'dry-run')).toBe(true); + expect(await pathExists(path.join(testPath, 'config.yaml'))).toBe(true); + expect(await pathExists(path.join(testPath, 'squad.yaml'))).toBe(false); + + // Step 2: Review dry-run report + const analysis = await dryRunMigrator.analyze(testPath); + const report = dryRunMigrator.generateReport(analysis, dryRunResult); + expect(report).toContain('dry-run'); + + // Step 3: Execute actual migration + const migrator = new SquadMigrator(); + const result = await migrator.migrate(testPath); + + // Verify actual migration succeeded + expect(result.success).toBe(true); + expect(result.actions.every((a) => a.status === 'success')).toBe(true); + expect(await pathExists(path.join(testPath, 'squad.yaml'))).toBe(true); + }); + }); +}); + +// Helper functions +async function pathExists(filePath) { + try { + await fs.access(filePath); + return true; + } catch { + return false; + } +} + +async function copyRecursive(src, dest) { + const stats = await fs.stat(src); + if (stats.isDirectory()) { + await fs.mkdir(dest, { recursive: true }); + const entries = await fs.readdir(src); + for (const entry of entries) { + await copyRecursive(path.join(src, entry), path.join(dest, entry)); + } + } else { + await fs.copyFile(src, dest); + } +} + +``` + +================================================== +📄 tests/integration/squad/squad-analyze-extend.test.js +================================================== +```js +/** + * Integration Tests for Squad Analyze & Extend + * + * Test Coverage: + * - Complete analyze workflow from command to output + * - Complete extend workflow with all component types + * - Analyze -> Extend -> Validate pipeline + * - Template rendering with all placeholders + * - Manifest updates preserve YAML formatting + * - Multiple component additions in sequence + * + * @see 
Story SQS-11: Squad Analyze & Extend + */ + +const path = require('path'); +const fs = require('fs').promises; +const yaml = require('js-yaml'); +const { SquadAnalyzer } = require('../../../.aios-core/development/scripts/squad/squad-analyzer'); +const { SquadExtender } = require('../../../.aios-core/development/scripts/squad/squad-extender'); + +// Test directory for integration tests - use unique directory to avoid parallel test collisions +const INTEGRATION_PATH = path.join(__dirname, 'temp-analyze-extend'); + +describe('Squad Analyze & Extend Integration', () => { + let analyzer; + let extender; + let testSquadPath; + + beforeAll(async () => { + // Create integration test directory + await fs.mkdir(INTEGRATION_PATH, { recursive: true }); + }); + + afterAll(async () => { + // Cleanup integration test directory + try { + await fs.rm(INTEGRATION_PATH, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + }); + + beforeEach(async () => { + // Create fresh test squad for each test + testSquadPath = path.join(INTEGRATION_PATH, `test-squad-${Date.now()}`); + + // Create squad structure + await fs.mkdir(path.join(testSquadPath, 'agents'), { recursive: true }); + await fs.mkdir(path.join(testSquadPath, 'tasks'), { recursive: true }); + await fs.mkdir(path.join(testSquadPath, 'workflows'), { recursive: true }); + await fs.mkdir(path.join(testSquadPath, 'checklists'), { recursive: true }); + await fs.mkdir(path.join(testSquadPath, 'templates'), { recursive: true }); + await fs.mkdir(path.join(testSquadPath, 'tools'), { recursive: true }); + await fs.mkdir(path.join(testSquadPath, 'scripts'), { recursive: true }); + await fs.mkdir(path.join(testSquadPath, 'data'), { recursive: true }); + + // Create initial manifest + const manifest = { + name: 'integration-test-squad', + version: '1.0.0', + description: 'Integration test squad', + author: 'test', + aios: { minVersion: '2.1.0' }, + components: { + agents: ['initial-agent.md'], + tasks: [], + 
workflows: [], + checklists: [], + templates: [], + tools: [], + scripts: [], + data: [], + }, + }; + await fs.writeFile(path.join(testSquadPath, 'squad.yaml'), yaml.dump(manifest)); + + // Create initial agent + await fs.writeFile( + path.join(testSquadPath, 'agents', 'initial-agent.md'), + '# initial-agent\n\nInitial test agent.\n', + ); + + // Initialize analyzer and extender pointing to parent directory + analyzer = new SquadAnalyzer({ squadsPath: INTEGRATION_PATH }); + extender = new SquadExtender({ squadsPath: INTEGRATION_PATH }); + }); + + describe('Complete Analyze Workflow', () => { + it('should analyze squad and return complete report', async () => { + const squadName = path.basename(testSquadPath); + const result = await analyzer.analyze(squadName); + + // Verify overview + expect(result.overview.name).toBe('integration-test-squad'); + expect(result.overview.version).toBe('1.0.0'); + expect(result.overview.author).toBe('test'); + + // Verify inventory + expect(result.inventory.agents).toContain('initial-agent.md'); + expect(result.inventory.tasks).toHaveLength(0); + + // Verify coverage + expect(result.coverage.agents.total).toBe(1); + expect(result.coverage.agents.withTasks).toBe(0); + + // Verify suggestions exist + expect(result.suggestions.length).toBeGreaterThan(0); + }); + + it('should format report in all output formats', async () => { + const squadName = path.basename(testSquadPath); + const result = await analyzer.analyze(squadName); + + // Console format + const consoleReport = analyzer.formatReport(result, 'console'); + expect(consoleReport).toContain('Squad Analysis'); + expect(consoleReport).toContain('integration-test-squad'); + + // JSON format + const jsonReport = analyzer.formatReport(result, 'json'); + const parsed = JSON.parse(jsonReport); + expect(parsed.overview.name).toBe('integration-test-squad'); + + // Markdown format + const mdReport = analyzer.formatReport(result, 'markdown'); + expect(mdReport).toContain('# Squad Analysis'); + 
expect(mdReport).toContain('## Overview'); + }); + }); + + describe('Complete Extend Workflow', () => { + it('should add agent component', async () => { + const squadName = path.basename(testSquadPath); + + const result = await extender.addComponent(squadName, { + type: 'agent', + name: 'new-agent', + description: 'A new agent for testing', + }); + + expect(result.success).toBe(true); + expect(result.type).toBe('agent'); + expect(result.fileName).toBe('new-agent.md'); + + // Verify file exists + const filePath = path.join(testSquadPath, 'agents', 'new-agent.md'); + const content = await fs.readFile(filePath, 'utf8'); + expect(content).toContain('new-agent'); + expect(content).toContain('A new agent for testing'); + + // Verify manifest updated + const manifestContent = await fs.readFile(path.join(testSquadPath, 'squad.yaml'), 'utf8'); + expect(manifestContent).toContain('new-agent.md'); + }); + + it('should add task component with agent linkage', async () => { + const squadName = path.basename(testSquadPath); + + const result = await extender.addComponent(squadName, { + type: 'task', + name: 'process-data', + agentId: 'initial-agent', + description: 'Process data task', + storyId: 'SQS-11', + }); + + expect(result.success).toBe(true); + expect(result.fileName).toBe('initial-agent-process-data.md'); + + // Verify file exists and has correct content + const filePath = path.join(testSquadPath, 'tasks', 'initial-agent-process-data.md'); + const content = await fs.readFile(filePath, 'utf8'); + expect(content).toContain('process-data'); + expect(content).toContain('@initial-agent'); + expect(content).toContain('SQS-11'); + }); + + it('should add workflow component', async () => { + const squadName = path.basename(testSquadPath); + + const result = await extender.addComponent(squadName, { + type: 'workflow', + name: 'daily-process', + description: 'Daily processing workflow', + }); + + expect(result.success).toBe(true); + 
expect(result.fileName).toBe('daily-process.yaml'); + + // Verify file exists + const filePath = path.join(testSquadPath, 'workflows', 'daily-process.yaml'); + const content = await fs.readFile(filePath, 'utf8'); + expect(content).toContain('daily-process'); + }); + + it('should add checklist component', async () => { + const squadName = path.basename(testSquadPath); + + const result = await extender.addComponent(squadName, { + type: 'checklist', + name: 'quality-checklist', + description: 'Quality assurance checklist', + }); + + expect(result.success).toBe(true); + expect(result.fileName).toBe('quality-checklist.md'); + }); + + it('should add template component', async () => { + const squadName = path.basename(testSquadPath); + + const result = await extender.addComponent(squadName, { + type: 'template', + name: 'report-template', + description: 'Report generation template', + }); + + expect(result.success).toBe(true); + expect(result.fileName).toBe('report-template.md'); + }); + + it('should add tool component', async () => { + const squadName = path.basename(testSquadPath); + + const result = await extender.addComponent(squadName, { + type: 'tool', + name: 'data-validator', + description: 'Data validation tool', + }); + + expect(result.success).toBe(true); + expect(result.fileName).toBe('data-validator.js'); + + // Verify content has correct structure + const filePath = path.join(testSquadPath, 'tools', 'data-validator.js'); + const content = await fs.readFile(filePath, 'utf8'); + expect(content).toContain('module.exports'); + }); + + it('should add script component', async () => { + const squadName = path.basename(testSquadPath); + + const result = await extender.addComponent(squadName, { + type: 'script', + name: 'migration-helper', + description: 'Migration helper script', + }); + + expect(result.success).toBe(true); + expect(result.fileName).toBe('migration-helper.js'); + }); + + it('should add data component', async () => { + const squadName = 
path.basename(testSquadPath);
+
+      const result = await extender.addComponent(squadName, {
+        type: 'data',
+        name: 'config-data',
+        description: 'Configuration data',
+      });
+
+      expect(result.success).toBe(true);
+      expect(result.fileName).toBe('config-data.yaml');
+    });
+  });
+
+  describe('Analyze -> Extend -> Analyze Pipeline', () => {
+    it('should show improvement in coverage after extension', async () => {
+      const squadName = path.basename(testSquadPath);
+
+      // Initial analysis
+      const initialResult = await analyzer.analyze(squadName);
+      const initialAgentsWithTasks = initialResult.coverage.agents.withTasks;
+
+      // Add task for initial agent
+      await extender.addComponent(squadName, {
+        type: 'task',
+        name: 'new-task',
+        agentId: 'initial-agent',
+        description: 'New task',
+      });
+
+      // Re-analyze
+      const afterResult = await analyzer.analyze(squadName);
+      const afterAgentsWithTasks = afterResult.coverage.agents.withTasks;
+
+      // Verify improvement
+      expect(afterAgentsWithTasks).toBeGreaterThan(initialAgentsWithTasks);
+    });
+
+    it('should reduce suggestions after adding components', async () => {
+      const squadName = path.basename(testSquadPath);
+
+      // Initial analysis - should have suggestion about tasks
+      const initialResult = await analyzer.analyze(squadName);
+      const initialSuggestionCount = initialResult.suggestions.length;
+
+      // Add task and checklist
+      await extender.addComponent(squadName, {
+        type: 'task',
+        name: 'task-1',
+        agentId: 'initial-agent',
+        description: 'Task 1',
+      });
+
+      await extender.addComponent(squadName, {
+        type: 'checklist',
+        name: 'checklist-1',
+        description: 'Checklist 1',
+      });
+
+      // Re-analyze
+      const afterResult = await analyzer.analyze(squadName);
+      const afterSuggestionCount = afterResult.suggestions.length;
+
+      // Resolving the task and checklist suggestions must not let the total count grow
+      expect(afterSuggestionCount).toBeLessThanOrEqual(initialSuggestionCount);
+    });
+  });
+
+  describe('Multiple Sequential Extensions', () => {
+    it('should
handle multiple component additions without errors', async () => { + const squadName = path.basename(testSquadPath); + + // Add multiple components sequentially + const components = [ + { type: 'agent', name: 'agent-1' }, + { type: 'agent', name: 'agent-2' }, + { type: 'task', name: 'task-1', agentId: 'initial-agent' }, + { type: 'task', name: 'task-2', agentId: 'initial-agent' }, + { type: 'workflow', name: 'workflow-1' }, + { type: 'checklist', name: 'checklist-1' }, + { type: 'template', name: 'template-1' }, + { type: 'tool', name: 'tool-1' }, + ]; + + for (const comp of components) { + const result = await extender.addComponent(squadName, { + ...comp, + description: `Test ${comp.type}`, + }); + expect(result.success).toBe(true); + } + + // Verify manifest has all components + const manifestContent = await fs.readFile(path.join(testSquadPath, 'squad.yaml'), 'utf8'); + const manifest = yaml.load(manifestContent); + + expect(manifest.components.agents).toContain('agent-1.md'); + expect(manifest.components.agents).toContain('agent-2.md'); + expect(manifest.components.tasks).toContain('initial-agent-task-1.md'); + expect(manifest.components.tasks).toContain('initial-agent-task-2.md'); + expect(manifest.components.workflows).toContain('workflow-1.yaml'); + expect(manifest.components.checklists).toContain('checklist-1.md'); + expect(manifest.components.templates).toContain('template-1.md'); + expect(manifest.components.tools).toContain('tool-1.js'); + }); + }); + + describe('Manifest YAML Preservation', () => { + it('should preserve existing manifest content when updating', async () => { + const squadName = path.basename(testSquadPath); + + // Add custom content to manifest + const manifestPath = path.join(testSquadPath, 'squad.yaml'); + const manifest = yaml.load(await fs.readFile(manifestPath, 'utf8')); + manifest.customField = 'custom-value'; + manifest.config = { setting1: true, setting2: 'value' }; + await fs.writeFile(manifestPath, yaml.dump(manifest)); + + // 
Add component + await extender.addComponent(squadName, { + type: 'agent', + name: 'test-agent', + description: 'Test', + }); + + // Verify custom content preserved + const updatedManifest = yaml.load(await fs.readFile(manifestPath, 'utf8')); + expect(updatedManifest.customField).toBe('custom-value'); + expect(updatedManifest.config.setting1).toBe(true); + expect(updatedManifest.config.setting2).toBe('value'); + }); + }); + + describe('Performance', () => { + it('should complete analyze + extend + analyze within 2 seconds', async () => { + const squadName = path.basename(testSquadPath); + const start = Date.now(); + + // Run complete pipeline + await analyzer.analyze(squadName); + await extender.addComponent(squadName, { + type: 'agent', + name: 'perf-agent', + description: 'Performance test', + }); + await analyzer.analyze(squadName); + + const duration = Date.now() - start; + expect(duration).toBeLessThan(2000); + }); + }); +}); + +``` + +================================================== +📄 tests/integration/squad/squad-designer-integration.test.js +================================================== +```js +/** + * Integration Tests for Squad Designer Flow + * + * End-to-end tests covering the complete workflow: + * 1. *design-squad: Documentation → Analysis → Recommendations → Blueprint + * 2. *create-squad --from-design: Blueprint → Squad Structure + * 3. 
*validate-squad: Squad Structure → Validation Report + * + * @see Story SQS-9: Squad Designer + */ + +const path = require('path'); +const fs = require('fs').promises; +const os = require('os'); +const yaml = require('js-yaml'); +const { + SquadDesigner, + SquadGenerator, + SquadValidator, +} = require('../../../.aios-core/development/scripts/squad'); + +describe('Squad Designer Integration', () => { + let tempDir; + let docsDir; + let designsDir; + let squadsDir; + + beforeAll(async () => { + tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'squad-designer-integration-')); + docsDir = path.join(tempDir, 'docs'); + designsDir = path.join(tempDir, 'designs'); + squadsDir = path.join(tempDir, 'squads'); + + await fs.mkdir(docsDir, { recursive: true }); + await fs.mkdir(designsDir, { recursive: true }); + await fs.mkdir(squadsDir, { recursive: true }); + }); + + afterAll(async () => { + try { + await fs.rm(tempDir, { recursive: true, force: true }); + } catch { + // Ignore cleanup errors + } + }); + + describe('Complete Design-to-Squad Flow', () => { + it('should create valid squad from documentation through design', async () => { + // Step 1: Create comprehensive documentation + const prdPath = path.join(docsDir, 'order-management-prd.md'); + await fs.writeFile( + prdPath, + `# Order Management System PRD + +## Overview +The Order Management System (OMS) handles the complete lifecycle of customer orders from creation to fulfillment. + +## Domain +Order Management / E-commerce + +## Key Entities +- **Order**: Central entity containing order details, status, and items +- **Customer**: The person or entity placing the order +- **Product**: Items that can be ordered +- **Payment**: Payment information and transaction records +- **Shipment**: Delivery tracking and shipping details + +## Core Workflows + +### 1. 
Order Creation (create-order) +- Validate customer information +- Check product availability +- Calculate pricing and taxes +- Initialize order with pending status + +### 2. Order Update (update-order) +- Modify order items +- Update quantities +- Recalculate totals + +### 3. Order Cancellation (cancel-order) +- Validate cancellation eligibility +- Process refund if applicable +- Update inventory + +### 4. Payment Processing (process-payment) +- Validate payment method +- Process transaction via Stripe API +- Update order status + +### 5. Shipment Tracking (track-shipment) +- Create shipment via FedEx API +- Update tracking information +- Notify customer + +## External Integrations +- **Stripe API**: Payment processing +- **FedEx API**: Shipping and tracking +- **Inventory Service**: Stock management + +## User Roles +- **Customer**: Places and tracks orders +- **Administrator**: System configuration +- **Support Staff**: Handle customer issues +- **Warehouse Manager**: Manages fulfillment + +## Technical Requirements +- Node.js 18+ runtime +- PostgreSQL database +- Redis caching +- REST API endpoints +`, + ); + + // Step 2: Initialize Designer and Generator + const designer = new SquadDesigner({ designsPath: designsDir }); + const generator = new SquadGenerator({ squadsPath: squadsDir }); + const validator = new SquadValidator(); + + // Step 3: Collect documentation + const docs = await designer.collectDocumentation({ docs: [prdPath] }); + expect(docs.mergedContent).toContain('Order Management'); + expect(docs.sources.some((s) => s.path === prdPath)).toBe(true); + + // Step 4: Analyze domain + const analysis = designer.analyzeDomain(docs); + + expect(analysis.domain).toBeDefined(); + expect(analysis.entities.length).toBeGreaterThan(0); + expect(analysis.workflows.length).toBeGreaterThan(0); + expect(analysis.integrations.length).toBeGreaterThan(0); + expect(analysis.stakeholders.length).toBeGreaterThan(0); + + // Verify specific entities were detected + const 
entityNames = analysis.entities.join(' ').toLowerCase(); + expect(entityNames).toMatch(/order|customer|product|payment/i); + + // Verify workflows were detected + const workflowNames = analysis.workflows.join(' ').toLowerCase(); + expect(workflowNames).toMatch(/create|update|cancel/i); + + // Step 5: Generate recommendations + const agents = designer.generateAgentRecommendations(analysis); + const tasks = designer.generateTaskRecommendations(analysis, agents); + + expect(agents.length).toBeGreaterThan(0); + expect(tasks.length).toBeGreaterThan(0); + + // Verify agent structure + agents.forEach((agent) => { + expect(agent.id).toMatch(/^[a-z][a-z0-9-]*[a-z0-9]$/); + expect(agent.role).toBeDefined(); + expect(agent.confidence).toBeGreaterThanOrEqual(0); + expect(agent.confidence).toBeLessThanOrEqual(1); + }); + + // Verify task structure + tasks.forEach((task) => { + expect(task.name).toMatch(/^[a-z][a-z0-9-]*[a-z0-9]$/); + expect(task.agent).toBeDefined(); + expect(task.confidence).toBeGreaterThanOrEqual(0); + expect(task.confidence).toBeLessThanOrEqual(1); + }); + + // Step 6: Generate blueprint + const blueprint = designer.generateBlueprint({ + analysis, + recommendations: { agents, tasks }, + metadata: { source_docs: [prdPath] }, + }); + + expect(blueprint.squad).toBeDefined(); + expect(blueprint.recommendations.agents).toEqual(agents); + expect(blueprint.recommendations.tasks).toEqual(tasks); + expect(blueprint.metadata.overall_confidence).toBeGreaterThan(0); + + // Step 7: Save blueprint + const blueprintPath = await designer.saveBlueprint(blueprint); + expect(blueprintPath).toContain('.yaml'); + + // Verify blueprint file + const savedBlueprint = yaml.load(await fs.readFile(blueprintPath, 'utf-8')); + expect(savedBlueprint.squad.name).toBeDefined(); + + // Step 8: Generate squad from blueprint + const result = await generator.generateFromBlueprint(blueprintPath); + + expect(result.path).toBeDefined(); + expect(result.files.length).toBeGreaterThan(0); + 
expect(result.blueprint.agents).toBeGreaterThan(0); + expect(result.blueprint.tasks).toBeGreaterThan(0); + + // Step 9: Verify squad structure + const squadPath = result.path; + + // Check main files + const squadYaml = await fs.readFile(path.join(squadPath, 'squad.yaml'), 'utf-8'); + expect(squadYaml).toContain('name:'); + expect(squadYaml).toContain('components:'); + + const readme = await fs.readFile(path.join(squadPath, 'README.md'), 'utf-8'); + expect(readme).toContain('#'); + + // Check agents were created + const agentsDir = await fs.readdir(path.join(squadPath, 'agents')); + expect(agentsDir.filter((f) => f.endsWith('.md')).length).toBeGreaterThan(0); + + // Check tasks were created + const tasksDir = await fs.readdir(path.join(squadPath, 'tasks')); + expect(tasksDir.filter((f) => f.endsWith('.md')).length).toBeGreaterThan(0); + + // Step 10: Validate generated squad + const validation = await validator.validate(squadPath); + + expect(validation.valid).toBe(true); + expect(validation.errors).toHaveLength(0); + }); + + it('should handle minimal documentation', async () => { + const minimalDocPath = path.join(docsDir, 'minimal-prd.md'); + await fs.writeFile( + minimalDocPath, + `# Simple Task Manager +Create, update, and delete tasks. Users can manage their task lists. 
+`, + ); + + const designer = new SquadDesigner({ designsPath: designsDir }); + + const docs = await designer.collectDocumentation({ docs: [minimalDocPath], domain: 'task-management' }); + const analysis = designer.analyzeDomain(docs); + + expect(analysis.domain).toBe('task-management'); + expect(analysis.workflows.length).toBeGreaterThan(0); + }); + + it('should handle multiple documentation files', async () => { + const doc1Path = path.join(docsDir, 'feature-orders.md'); + const doc2Path = path.join(docsDir, 'feature-payments.md'); + const doc3Path = path.join(docsDir, 'feature-shipping.md'); + + await fs.writeFile( + doc1Path, + '# Order Features\nCreate orders, update orders, cancel orders.', + ); + await fs.writeFile( + doc2Path, + '# Payment Features\nProcess payments via Stripe, refund payments.', + ); + await fs.writeFile( + doc3Path, + '# Shipping Features\nCreate shipments, track packages via FedEx.', + ); + + const designer = new SquadDesigner({ designsPath: designsDir }); + + const docs = await designer.collectDocumentation({ + docs: [doc1Path, doc2Path, doc3Path], + domain: 'multi-feature', + }); + + expect(docs.sources.length).toBe(3); + expect(docs.mergedContent).toContain('Order Features'); + expect(docs.mergedContent).toContain('Payment Features'); + expect(docs.mergedContent).toContain('Shipping Features'); + + const analysis = designer.analyzeDomain(docs); + + // Should detect workflows from all files + expect(analysis.workflows.length).toBeGreaterThanOrEqual(3); + }); + }); + + describe('Blueprint Schema Validation', () => { + it('should generate schema-compliant blueprint', async () => { + const docPath = path.join(docsDir, 'schema-test.md'); + await fs.writeFile( + docPath, + '# Inventory System\nManage products and stock levels. 
Track inventory movements.', + ); + + const designer = new SquadDesigner({ designsPath: designsDir }); + + const docs = await designer.collectDocumentation({ docs: [docPath], domain: 'inventory-system' }); + const analysis = designer.analyzeDomain(docs); + const agents = designer.generateAgentRecommendations(analysis); + const tasks = designer.generateTaskRecommendations(analysis, agents); + + const blueprint = designer.generateBlueprint({ + analysis, + recommendations: { agents, tasks }, + metadata: { source_docs: [docPath] }, + }); + + // Validate against expected schema structure + expect(blueprint).toHaveProperty('squad'); + expect(blueprint).toHaveProperty('squad.name'); + expect(blueprint).toHaveProperty('squad.domain'); + expect(blueprint).toHaveProperty('recommendations'); + expect(blueprint).toHaveProperty('recommendations.agents'); + expect(blueprint).toHaveProperty('recommendations.tasks'); + expect(blueprint).toHaveProperty('metadata'); + expect(blueprint).toHaveProperty('metadata.created_at'); + + // Validate squad name format + expect(blueprint.squad.name).toMatch(/^[a-z][a-z0-9-]*[a-z0-9](-squad)?$/); + + // Validate agent format + blueprint.recommendations.agents.forEach((agent) => { + expect(agent).toHaveProperty('id'); + expect(agent).toHaveProperty('role'); + expect(agent).toHaveProperty('confidence'); + expect(agent.id).toMatch(/^[a-z][a-z0-9-]*[a-z0-9]$/); + expect(typeof agent.confidence).toBe('number'); + expect(agent.confidence).toBeGreaterThanOrEqual(0); + expect(agent.confidence).toBeLessThanOrEqual(1); + }); + + // Validate task format + blueprint.recommendations.tasks.forEach((task) => { + expect(task).toHaveProperty('name'); + expect(task).toHaveProperty('agent'); + expect(task).toHaveProperty('confidence'); + expect(task.name).toMatch(/^[a-z][a-z0-9-]*[a-z0-9]$/); + expect(typeof task.confidence).toBe('number'); + }); + }); + }); + + describe('Error Handling', () => { + it('should handle missing documentation gracefully', async () => { 
+ const designer = new SquadDesigner({ designsPath: designsDir }); + + await expect( + designer.collectDocumentation({ docs: ['/nonexistent/file.md'] }), + ).rejects.toThrow(); + }); + + it('should handle empty documentation', async () => { + const emptyDocPath = path.join(docsDir, 'empty.md'); + await fs.writeFile(emptyDocPath, ''); + + const designer = new SquadDesigner({ designsPath: designsDir }); + + const docs = await designer.collectDocumentation({ docs: [emptyDocPath] }); + + expect(() => designer.analyzeDomain(docs)).toThrow(); + }); + + it('should handle invalid blueprint for squad generation', async () => { + const invalidBlueprintPath = path.join(designsDir, 'invalid-blueprint.yaml'); + await fs.writeFile( + invalidBlueprintPath, + yaml.dump({ + squad: { name: 'InvalidName' }, // Invalid name format + }), + ); + + const generator = new SquadGenerator({ squadsPath: squadsDir }); + + await expect(generator.generateFromBlueprint(invalidBlueprintPath)).rejects.toThrow(); + }); + }); + + describe('Confidence Scoring', () => { + it('should assign higher confidence to clearer documentation', async () => { + // Clear, well-structured documentation + const clearDocPath = path.join(docsDir, 'clear-prd.md'); + await fs.writeFile( + clearDocPath, + `# User Management System + +## Workflows +1. create-user: Create a new user account +2. update-user: Update user profile information +3. delete-user: Remove user from system + +## Entities +- User +- Profile +- Role + +## Integrations +- Auth0 API for authentication +`, + ); + + // Vague documentation + const vagueDocPath = path.join(docsDir, 'vague-prd.md'); + await fs.writeFile( + vagueDocPath, + `# Some System + +It does things with stuff. Users can do stuff. 
+`, + ); + + const designer = new SquadDesigner({ designsPath: designsDir }); + + // Analyze clear documentation + const clearDocs = await designer.collectDocumentation({ docs: [clearDocPath], domain: 'user-mgmt' }); + const clearAnalysis = designer.analyzeDomain(clearDocs); + const clearAgents = designer.generateAgentRecommendations(clearAnalysis); + const clearTasks = designer.generateTaskRecommendations(clearAnalysis, clearAgents); + + // Analyze vague documentation + const vagueDocs = await designer.collectDocumentation({ docs: [vagueDocPath], domain: 'vague-system' }); + const vagueAnalysis = designer.analyzeDomain(vagueDocs); + const vagueAgents = designer.generateAgentRecommendations(vagueAnalysis); + const vagueTasks = designer.generateTaskRecommendations(vagueAnalysis, vagueAgents); + + // Clear documentation should produce more recommendations + expect(clearAgents.length).toBeGreaterThanOrEqual(vagueAgents.length); + + // Clear documentation should have higher average confidence + if (clearAgents.length > 0 && vagueAgents.length > 0) { + const clearAvgConfidence = + clearAgents.reduce((sum, a) => sum + a.confidence, 0) / clearAgents.length; + const vagueAvgConfidence = + vagueAgents.reduce((sum, a) => sum + a.confidence, 0) / vagueAgents.length; + + // Clear documentation should have at least as high confidence + expect(clearAvgConfidence).toBeGreaterThanOrEqual(vagueAvgConfidence * 0.8); + } + }); + }); + + describe('Performance', () => { + const isCI = process.env.CI === 'true'; + const fullFlowThreshold = isCI ? 15000 : 3000; + + it('should complete full design flow within acceptable time', async () => { + const perfDocPath = path.join(docsDir, 'perf-test.md'); + await fs.writeFile( + perfDocPath, + `# Performance Test System +Create items, update items, delete items, list items, search items. +Entities: Item, Category, User. +Integrations: ElasticSearch, Redis. 
+`, + ); + + const designer = new SquadDesigner({ designsPath: designsDir }); + const generator = new SquadGenerator({ squadsPath: squadsDir }); + + const start = Date.now(); + + const docs = await designer.collectDocumentation({ docs: [perfDocPath], domain: 'perf-test' }); + const analysis = designer.analyzeDomain(docs); + const agents = designer.generateAgentRecommendations(analysis); + const tasks = designer.generateTaskRecommendations(analysis, agents); + const blueprint = designer.generateBlueprint({ + analysis, + recommendations: { agents, tasks }, + metadata: { source_docs: [perfDocPath] }, + }); + const blueprintPath = await designer.saveBlueprint(blueprint); + await generator.generateFromBlueprint(blueprintPath); + + const duration = Date.now() - start; + + expect(duration).toBeLessThan(fullFlowThreshold); + }); + }); +}); + +``` + +================================================== +📄 tests/integration/windows/windows-11.test.js +================================================== +```js +// tests/integration/windows/windows-11.test.js +const { spawn } = require('child_process'); +const path = require('path'); +const fs = require('fs').promises; + +const isWindows = process.platform === 'win32'; + +// Skip entire test suite on non-Windows platforms +const describeOnWindows = isWindows ? describe : describe.skip; + +describeOnWindows('Windows 11 Installation', () => { + const testTimeout = 5 * 60 * 1000; // 5 minutes + + it( + 'should complete installation in < 5 minutes', + async () => { + const startTime = Date.now(); + + // Note: This test requires running npx @synkraai/aios@latest init + // in a fresh directory. Run manually for end-to-end validation. 
+ + // For CI/CD, verify installer exists and is executable + const installerPath = path.join(__dirname, '../../../bin/aios-init.js'); + const installerExists = await fs + .access(installerPath) + .then(() => true) + .catch(() => false); + + expect(installerExists).toBe(true); + + // Measure time (placeholder for actual install) + const duration = (Date.now() - startTime) / 1000 / 60; + expect(duration).toBeLessThan(5); + }, + testTimeout, + ); + + it('should work with PowerShell 7.x', async () => { + // Verify PowerShell execution policy documentation exists + const storyPath = path.join( + __dirname, + '../../../docs/stories/v2.1/sprint-1/story-1.10a-windows-testing.md', + ); + const storyContent = await fs.readFile(storyPath, 'utf-8'); + + // Check for PowerShell 7.x documentation + expect(storyContent).toContain('PowerShell 7'); + expect(storyContent).toContain('backward compatible'); + }); + + it('should work with PowerShell 5.1 (backward compatibility)', async () => { + // Verify backward compatibility with PowerShell 5.1 is documented + const storyPath = path.join( + __dirname, + '../../../docs/stories/v2.1/sprint-1/story-1.10a-windows-testing.md', + ); + const storyContent = await fs.readFile(storyPath, 'utf-8'); + + // Check for PowerShell 5.1 backward compatibility + expect(storyContent).toContain('PowerShell 5.1'); + expect(storyContent).toContain('backward compatible'); + }); + + it('should handle backslash paths correctly', async () => { + // Test path.join() usage in installer + const installerPath = path.join(__dirname, '../../../bin/aios-init.js'); + const installerContent = await fs.readFile(installerPath, 'utf-8'); + + // Verify path.join() is used (not string concatenation) + const pathJoinUsages = (installerContent.match(/path\.join\(/g) || []).length; + expect(pathJoinUsages).toBeGreaterThan(0); + + // Test Windows-style path normalization + const testPath = path.join('C:', 'Users', 'Test', 'Project'); + expect(testPath).toContain('\\'); // 
Windows uses backslashes + }); + + it('should verify CRLF line ending configuration', async () => { + // Check .gitattributes exists + const gitattributesPath = path.join(__dirname, '../../../.gitattributes'); + const gitattributesExists = await fs + .access(gitattributesPath) + .then(() => true) + .catch(() => false); + + expect(gitattributesExists).toBe(true); + }); + + it('should verify npm package manager works', async () => { + // Test npm is available + return new Promise((resolve, reject) => { + const npm = spawn('npm', ['--version'], { shell: true }); + let output = ''; + + npm.stdout.on('data', (data) => { + output += data.toString(); + }); + + npm.on('close', (code) => { + if (code === 0) { + expect(output.trim()).toMatch(/^\d+\.\d+\.\d+$/); + resolve(); + } else { + reject(new Error('npm not available')); + } + }); + }); + }); + + it('should verify yarn package manager works', async () => { + // Test yarn is available + return new Promise((resolve, reject) => { + const yarn = spawn('yarn', ['--version'], { shell: true }); + let output = ''; + + yarn.stdout.on('data', (data) => { + output += data.toString(); + }); + + yarn.on('close', (code) => { + if (code === 0) { + expect(output.trim()).toMatch(/^\d+\.\d+\.\d+$/); + resolve(); + } else { + reject(new Error('yarn not available')); + } + }); + }); + }); + + it('should verify pnpm package manager works (if installed)', async () => { + // Test pnpm is available - skip if not installed (pnpm is optional) + return new Promise((resolve) => { + const pnpm = spawn('pnpm', ['--version'], { shell: true }); + let output = ''; + + pnpm.stdout.on('data', (data) => { + output += data.toString(); + }); + + pnpm.on('close', (code) => { + if (code === 0) { + expect(output.trim()).toMatch(/^\d+\.\d+\.\d+$/); + } + // Always resolve - pnpm is optional on Windows + // If not installed, test passes without assertion + resolve(); + }); + }); + }); +}); + +``` + +================================================== +📄 
tests/integration/windows/shell-compat.test.js +================================================== +```js +// tests/integration/windows/shell-compat.test.js +const { spawn } = require('child_process'); +const path = require('path'); +const fs = require('fs').promises; + +const isWindows = process.platform === 'win32'; + +// Skip entire test suite on non-Windows platforms +const describeOnWindows = isWindows ? describe : describe.skip; + +describeOnWindows('PowerShell vs CMD Compatibility', () => { + it('should document PowerShell execution policy handling', async () => { + const storyPath = path.join(__dirname, '../../../docs/stories/v2.1/sprint-1/story-1.10a-windows-testing.md'); + const storyContent = await fs.readFile(storyPath, 'utf-8'); + + // Verify PowerShell execution policy is documented + expect(storyContent).toContain('Set-ExecutionPolicy'); + expect(storyContent).toContain('-Scope Process'); + expect(storyContent).toContain('-ExecutionPolicy Bypass'); + }); + + it('should support PowerShell execution', async () => { + // Test PowerShell is available + return new Promise((resolve, reject) => { + const powershell = spawn('powershell', ['-Command', 'Write-Output "test"'], { shell: true }); + let output = ''; + + powershell.stdout.on('data', (data) => { + output += data.toString(); + }); + + powershell.on('close', (code) => { + if (code === 0) { + expect(output.trim()).toBe('test'); + resolve(); + } else { + reject(new Error('PowerShell not available')); + } + }); + }); + }); + + it('should support CMD execution', async () => { + // Test CMD is available + return new Promise((resolve, reject) => { + const cmd = spawn('cmd', ['/c', 'echo test'], { shell: true }); + let output = ''; + + cmd.stdout.on('data', (data) => { + output += data.toString(); + }); + + cmd.on('close', (code) => { + if (code === 0) { + expect(output.trim()).toBe('test'); + resolve(); + } else { + reject(new Error('CMD not available')); + } + }); + }); + }); + + it('should handle Node.js 
execution in both shells', async () => { + // Test Node.js works in PowerShell + const powershellNode = await new Promise((resolve, reject) => { + const ps = spawn('powershell', ['-Command', 'node --version'], { shell: true }); + let output = ''; + + ps.stdout.on('data', (data) => { + output += data.toString(); + }); + + ps.on('close', (code) => { + if (code === 0) { + resolve(output.trim()); + } else { + reject(new Error('Node.js not available in PowerShell')); + } + }); + }); + + expect(powershellNode).toMatch(/^v\d+\.\d+\.\d+$/); + + // Test Node.js works in CMD + const cmdNode = await new Promise((resolve, reject) => { + const cmd = spawn('cmd', ['/c', 'node --version'], { shell: true }); + let output = ''; + + cmd.stdout.on('data', (data) => { + output += data.toString(); + }); + + cmd.on('close', (code) => { + if (code === 0) { + resolve(output.trim()); + } else { + reject(new Error('Node.js not available in CMD')); + } + }); + }); + + expect(cmdNode).toMatch(/^v\d+\.\d+\.\d+$/); + + // Versions should match + expect(powershellNode).toBe(cmdNode); + }); + + it('should handle npm execution in both shells', async () => { + // Test npm works in PowerShell + const powershellNpm = await new Promise((resolve, reject) => { + const ps = spawn('powershell', ['-Command', 'npm --version'], { shell: true }); + let output = ''; + + ps.stdout.on('data', (data) => { + output += data.toString(); + }); + + ps.on('close', (code) => { + if (code === 0) { + resolve(output.trim()); + } else { + reject(new Error('npm not available in PowerShell')); + } + }); + }); + + expect(powershellNpm).toMatch(/^\d+\.\d+\.\d+$/); + + // Test npm works in CMD + const cmdNpm = await new Promise((resolve, reject) => { + const cmd = spawn('cmd', ['/c', 'npm --version'], { shell: true }); + let output = ''; + + cmd.stdout.on('data', (data) => { + output += data.toString(); + }); + + cmd.on('close', (code) => { + if (code === 0) { + resolve(output.trim()); + } else { + reject(new Error('npm not 
available in CMD')); + } + }); + }); + + expect(cmdNpm).toMatch(/^\d+\.\d+\.\d+$/); + + // Versions should match + expect(powershellNpm).toBe(cmdNpm); + }); + + it('should verify path handling works in both shells', async () => { + // Test path.join() produces Windows paths + const testPath = path.join('C:', 'Users', 'Test', 'Project'); + + // Windows uses backslashes + expect(testPath).toContain('\\'); + + // Path should be normalized + expect(testPath).not.toContain('//'); + expect(testPath).not.toContain('/\\'); + }); +}); + +``` + +================================================== +📄 tests/integration/windows/windows-10.test.js +================================================== +```js +// tests/integration/windows/windows-10.test.js +const { spawn } = require('child_process'); +const path = require('path'); +const fs = require('fs').promises; + +const isWindows = process.platform === 'win32'; + +// Skip entire test suite on non-Windows platforms +const describeOnWindows = isWindows ? describe : describe.skip; + +describeOnWindows('Windows 10 Installation', () => { + const testTimeout = 5 * 60 * 1000; // 5 minutes + + it( + 'should complete installation in < 5 minutes', + async () => { + const startTime = Date.now(); + + // Note: This test requires running npx @synkraai/aios@latest init + // in a fresh directory. Run manually for end-to-end validation. 
+ + // For CI/CD, verify installer exists and is executable + const installerPath = path.join(__dirname, '../../../bin/aios-init.js'); + const installerExists = await fs + .access(installerPath) + .then(() => true) + .catch(() => false); + + expect(installerExists).toBe(true); + + // Measure time (placeholder for actual install) + const duration = (Date.now() - startTime) / 1000 / 60; + expect(duration).toBeLessThan(5); + }, + testTimeout, + ); + + it('should handle backslash paths correctly', async () => { + // Test path.join() usage in installer + const installerPath = path.join(__dirname, '../../../bin/aios-init.js'); + const installerContent = await fs.readFile(installerPath, 'utf-8'); + + // Verify path.join() is used (not string concatenation) + const pathJoinUsages = (installerContent.match(/path\.join\(/g) || []).length; + expect(pathJoinUsages).toBeGreaterThan(0); + + // Test Windows-style path normalization + const testPath = path.join('C:', 'Users', 'Test', 'Project'); + expect(testPath).toContain('\\'); // Windows uses backslashes + }); + + it('should work with PowerShell 5.1', async () => { + // Verify PowerShell execution policy documentation exists + const storyPath = path.join( + __dirname, + '../../../docs/stories/v2.1/sprint-1/story-1.10a-windows-testing.md', + ); + const storyContent = await fs.readFile(storyPath, 'utf-8'); + + // Check for PowerShell execution policy documentation + expect(storyContent).toContain('Set-ExecutionPolicy'); + expect(storyContent).toContain('ExecutionPolicy Bypass'); + }); + + it('should verify CRLF line ending configuration', async () => { + // Check .gitattributes exists + const gitattributesPath = path.join(__dirname, '../../../.gitattributes'); + const gitattributesExists = await fs + .access(gitattributesPath) + .then(() => true) + .catch(() => false); + + expect(gitattributesExists).toBe(true); + }); + + it('should verify npm package manager works', async () => { + // Test npm is available + return new 
Promise((resolve, reject) => { + const npm = spawn('npm', ['--version'], { shell: true }); + let output = ''; + + npm.stdout.on('data', (data) => { + output += data.toString(); + }); + + npm.on('close', (code) => { + if (code === 0) { + expect(output.trim()).toMatch(/^\d+\.\d+\.\d+$/); + resolve(); + } else { + reject(new Error('npm not available')); + } + }); + }); + }); + + it('should verify yarn package manager works', async () => { + // Test yarn is available + return new Promise((resolve, reject) => { + const yarn = spawn('yarn', ['--version'], { shell: true }); + let output = ''; + + yarn.stdout.on('data', (data) => { + output += data.toString(); + }); + + yarn.on('close', (code) => { + if (code === 0) { + expect(output.trim()).toMatch(/^\d+\.\d+\.\d+$/); + resolve(); + } else { + reject(new Error('yarn not available')); + } + }); + }); + }); + + it('should verify pnpm package manager works (if installed)', async () => { + // Test pnpm is available - skip if not installed (pnpm is optional) + return new Promise((resolve) => { + const pnpm = spawn('pnpm', ['--version'], { shell: true }); + let output = ''; + + pnpm.stdout.on('data', (data) => { + output += data.toString(); + }); + + pnpm.on('close', (code) => { + if (code === 0) { + expect(output.trim()).toMatch(/^\d+\.\d+\.\d+$/); + } + // Always resolve - pnpm is optional on Windows + // If not installed, test passes without assertion + resolve(); + }); + }); + }); +}); + +``` + +================================================== +📄 tests/wizard/integration.test.js +================================================== +```js +/** + * Wizard Integration Tests + * + * Story 1.7: Dependency Installation Integration + * Tests the full wizard flow including dependency installation + */ + +const inquirer = require('inquirer'); +const fse = require('fs-extra'); +const { runWizard } = require('../../packages/installer/src/wizard/index'); +const { + installDependencies, + detectPackageManager, +} = 
require('../../packages/installer/src/installer/dependency-installer'); +const { + configureEnvironment, +} = require('../../packages/installer/src/config/configure-environment'); +const { generateIDEConfigs } = require('../../packages/installer/src/wizard/ide-config-generator'); +const { installAiosCore, hasPackageJson } = require('../../packages/installer/src/installer/aios-core-installer'); + +// Mock dependencies +jest.mock('inquirer'); +jest.mock('fs-extra'); +jest.mock('../../packages/installer/src/installer/dependency-installer'); +jest.mock('../../packages/installer/src/config/configure-environment'); +jest.mock('../../packages/installer/src/wizard/ide-config-generator'); +jest.mock('../../packages/installer/src/installer/aios-core-installer'); +jest.mock('../../bin/modules/mcp-installer', () => ({ + installProjectMCPs: jest.fn().mockResolvedValue({ + success: true, + installedMCPs: {}, + configPath: '.mcp.json', + errors: [], + }), +})); +jest.mock('../../packages/installer/src/wizard/validation', () => ({ + validateInstallation: jest.fn().mockResolvedValue({ + valid: true, + errors: [], + warnings: [], + }), + displayValidationReport: jest.fn().mockResolvedValue(), + provideTroubleshooting: jest.fn().mockResolvedValue(), +})); +jest.mock('../../packages/installer/src/wizard/feedback', () => ({ + showWelcome: jest.fn(), + showCompletion: jest.fn(), + showCancellation: jest.fn(), +})); + +describe('Wizard Integration - Story 1.7', () => { + let consoleLogSpy, consoleErrorSpy; + + beforeEach(() => { + jest.clearAllMocks(); + consoleLogSpy = jest.spyOn(console, 'log').mockImplementation(); + consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation(); + + // Default mocks for successful flow + inquirer.prompt.mockResolvedValue({ + projectType: 'greenfield', + selectedIDEs: ['vscode'], + }); + + generateIDEConfigs.mockResolvedValue({ + success: true, + configs: [{ ide: 'vscode', path: '.vscode/settings.json' }], + }); + + 
configureEnvironment.mockResolvedValue({ + envCreated: true, + envExampleCreated: true, + coreConfigCreated: true, + gitignoreUpdated: true, + errors: [], + }); + + // Mock AIOS core installer + installAiosCore.mockResolvedValue({ + success: true, + installedFiles: ['agents/dev.md', 'tasks/create-story.yaml'], + installedFolders: ['agents', 'tasks', 'workflows', 'templates'], + errors: [], + }); + + // Mock hasPackageJson - default to true (brownfield project) + hasPackageJson.mockResolvedValue(true); + + // Mock detectPackageManager + detectPackageManager.mockReturnValue('npm'); + + installDependencies.mockResolvedValue({ + success: true, + packageManager: 'npm', + }); + + // Mock fs-extra for getExistingUserProfile() - Story 10.2 + // Default: no existing core-config.yaml (forces user profile prompt) + fse.pathExists.mockResolvedValue(false); + fse.existsSync.mockReturnValue(false); + fse.ensureDir.mockResolvedValue(); + }); + + afterEach(() => { + consoleLogSpy.mockRestore(); + consoleErrorSpy.mockRestore(); + }); + + describe('Full Wizard Flow (AC Integration)', () => { + it('should complete full wizard with dependency installation', async () => { + const answers = await runWizard(); + + expect(answers.projectType).toBe('greenfield'); + expect(answers.selectedIDEs).toContain('vscode'); + expect(answers.packageManager).toBe('npm'); // Auto-detected + expect(answers.envConfigured).toBe(true); + expect(answers.depsInstalled).toBe(true); + expect(answers.depsResult.success).toBe(true); + }); + + it('should install AIOS core before IDE configs', async () => { + await runWizard(); + + // Verify AIOS core was installed + expect(installAiosCore).toHaveBeenCalled(); + + // Verify order: AIOS core before IDE configs + const aiosCoreCallOrder = installAiosCore.mock.invocationCallOrder[0]; + const ideConfigCallOrder = generateIDEConfigs.mock.invocationCallOrder[0]; + + expect(aiosCoreCallOrder).toBeLessThan(ideConfigCallOrder); + }); + + it('should install dependencies 
after env configuration', async () => { + await runWizard(); + + // Verify order of operations + const envCallOrder = configureEnvironment.mock.invocationCallOrder[0]; + const depsCallOrder = installDependencies.mock.invocationCallOrder[0]; + + expect(envCallOrder).toBeLessThan(depsCallOrder); + }); + + it('should use auto-detected package manager for installDependencies', async () => { + detectPackageManager.mockReturnValue('yarn'); + + await runWizard(); + + expect(installDependencies).toHaveBeenCalledWith({ + packageManager: 'yarn', + projectPath: process.cwd(), + }); + }); + }); + + describe('Package Manager Auto-Detection (AC1)', () => { + it('should auto-detect npm', async () => { + detectPackageManager.mockReturnValue('npm'); + + const answers = await runWizard(); + expect(answers.packageManager).toBe('npm'); + }); + + it('should auto-detect yarn', async () => { + detectPackageManager.mockReturnValue('yarn'); + + const answers = await runWizard(); + expect(answers.packageManager).toBe('yarn'); + }); + + it('should auto-detect pnpm', async () => { + detectPackageManager.mockReturnValue('pnpm'); + + const answers = await runWizard(); + expect(answers.packageManager).toBe('pnpm'); + }); + + it('should auto-detect bun', async () => { + detectPackageManager.mockReturnValue('bun'); + + const answers = await runWizard(); + expect(answers.packageManager).toBe('bun'); + }); + }); + + describe('Greenfield Projects (No package.json)', () => { + it('should skip dependency installation when no package.json exists', async () => { + hasPackageJson.mockResolvedValue(false); + + const answers = await runWizard(); + + expect(installDependencies).not.toHaveBeenCalled(); + expect(answers.depsInstalled).toBe(true); + expect(answers.depsResult.skipped).toBe(true); + expect(answers.depsResult.reason).toBe('no-package-json'); + }); + + it('should still set packageManager when skipping dependencies', async () => { + hasPackageJson.mockResolvedValue(false); + 
      detectPackageManager.mockReturnValue('pnpm');

      const answers = await runWizard();

      expect(answers.packageManager).toBe('pnpm');
    });
  });

  describe('User Profile Selection (Story 10.2)', () => {
    it('should include userProfile in wizard answers', async () => {
      inquirer.prompt
        .mockResolvedValueOnce({ language: 'en' })
        .mockResolvedValueOnce({ userProfile: 'advanced' })
        .mockResolvedValueOnce({
          projectType: 'greenfield',
          selectedIDEs: ['vscode'],
          selectedTechPreset: 'none',
        });

      const answers = await runWizard();

      expect(answers.userProfile).toBeDefined();
      expect(['bob', 'advanced']).toContain(answers.userProfile);
    });

    it('should pass userProfile to configureEnvironment', async () => {
      inquirer.prompt
        .mockResolvedValueOnce({ language: 'en' })
        .mockResolvedValueOnce({ userProfile: 'bob' })
        .mockResolvedValueOnce({
          projectType: 'greenfield',
          selectedIDEs: ['vscode'],
          selectedTechPreset: 'none',
        });

      await runWizard();

      expect(configureEnvironment).toHaveBeenCalledWith(
        expect.objectContaining({
          userProfile: 'bob',
        }),
      );
    });

    it('should use existing profile when core-config.yaml exists (idempotency)', async () => {
      // Mock existing core-config.yaml with user_profile
      fse.pathExists.mockResolvedValue(true);
      fse.readFile.mockResolvedValue('user_profile: bob\nmarkdownExploder: true');

      // Only 2 prompts needed: language + remaining questions (no user profile prompt)
      inquirer.prompt
        .mockResolvedValueOnce({ language: 'en' })
        .mockResolvedValueOnce({
          projectType: 'greenfield',
          selectedIDEs: ['vscode'],
          selectedTechPreset: 'none',
        });

      const answers = await runWizard();

      // Should use existing profile without prompting
      expect(answers.userProfile).toBe('bob');
      // Console should show skipped message
      expect(consoleLogSpy).toHaveBeenCalledWith(expect.stringContaining('bob'));
    });

    it('should default to advanced when user_profile is missing from existing config',
      async () => {
        // Mock existing core-config.yaml WITHOUT user_profile
        fse.pathExists.mockResolvedValue(true);
        fse.readFile.mockResolvedValue('markdownExploder: true\nproject:\n type: GREENFIELD');

        // Need all 3 prompts since user_profile doesn't exist
        inquirer.prompt
          .mockResolvedValueOnce({ language: 'en' })
          .mockResolvedValueOnce({ userProfile: 'advanced' })
          .mockResolvedValueOnce({
            projectType: 'greenfield',
            selectedIDEs: ['vscode'],
            selectedTechPreset: 'none',
          });

        const answers = await runWizard();

        expect(answers.userProfile).toBe('advanced');
      });

    it('should handle invalid user_profile in existing config gracefully', async () => {
      // Mock existing core-config.yaml with INVALID user_profile
      fse.pathExists.mockResolvedValue(true);
      fse.readFile.mockResolvedValue('user_profile: invalid_value\nmarkdownExploder: true');

      // Need all 3 prompts since user_profile is invalid
      inquirer.prompt
        .mockResolvedValueOnce({ language: 'en' })
        .mockResolvedValueOnce({ userProfile: 'advanced' })
        .mockResolvedValueOnce({
          projectType: 'greenfield',
          selectedIDEs: ['vscode'],
          selectedTechPreset: 'none',
        });

      const answers = await runWizard();

      // Should prompt for new profile since existing is invalid
      expect(answers.userProfile).toBe('advanced');
    });
  });

  describe('Offline Mode (AC6)', () => {
    it('should handle offline mode gracefully', async () => {
      installDependencies.mockResolvedValue({
        success: true,
        offlineMode: true,
        packageManager: 'npm',
      });

      const answers = await runWizard();

      expect(answers.depsInstalled).toBe(true);
      expect(answers.depsResult.offlineMode).toBe(true);
      expect(consoleLogSpy).toHaveBeenCalledWith(expect.stringContaining('offline mode'));
    });
  });

  describe('Error Handling (AC4, AC5)', () => {
    it('should offer retry on installation failure', async () => {
      installDependencies
        .mockResolvedValueOnce({
          success: false,
          errorMessage: 'Network connection failed',
          solution: 'Check your internet connection',
          errorCategory: 'network',
        })
        .mockResolvedValueOnce({
          success: true,
          packageManager: 'npm',
        });

      // Mock prompt sequence: 1) language, 2) user profile (Story 10.2), 3) project type + IDEs + tech preset, 4) retryDeps
      inquirer.prompt
        .mockResolvedValueOnce({ language: 'en' })
        .mockResolvedValueOnce({ userProfile: 'advanced' }) // Story 10.2: User Profile
        .mockResolvedValueOnce({
          projectType: 'greenfield',
          selectedIDEs: [],
          selectedTechPreset: 'none',
        })
        .mockResolvedValueOnce({
          retryDeps: true,
        });

      const answers = await runWizard();

      expect(installDependencies).toHaveBeenCalledTimes(2);
      expect(answers.depsInstalled).toBe(true);
    });

    it('should allow skipping installation on failure', async () => {
      installDependencies.mockResolvedValue({
        success: false,
        errorMessage: 'Network connection failed',
        solution: 'Check your internet connection',
      });

      // Mock prompt sequence: 1) language, 2) user profile (Story 10.2), 3) project type + IDEs + tech preset, 4) retryDeps
      inquirer.prompt
        .mockResolvedValueOnce({ language: 'en' })
        .mockResolvedValueOnce({ userProfile: 'advanced' }) // Story 10.2: User Profile
        .mockResolvedValueOnce({
          projectType: 'greenfield',
          selectedIDEs: [],
          selectedTechPreset: 'none',
        })
        .mockResolvedValueOnce({
          retryDeps: false,
        });

      const answers = await runWizard();

      expect(answers.depsInstalled).toBe(false);
      expect(consoleLogSpy).toHaveBeenCalledWith(expect.stringContaining('manually'));
    });

    it('should display clear error messages', async () => {
      installDependencies.mockResolvedValue({
        success: false,
        errorMessage: 'Permission denied',
        solution: 'Try running with elevated permissions',
        errorCategory: 'permission',
      });

      inquirer.prompt
        .mockResolvedValueOnce({
          projectType: 'greenfield',
        })
        .mockResolvedValueOnce({
          retryDeps: false,
        });

      await runWizard();

      expect(consoleErrorSpy).toHaveBeenCalledWith(expect.stringContaining('Permission denied'));
      expect(consoleErrorSpy).toHaveBeenCalledWith(expect.stringContaining('elevated permissions'));
    });
  });

  describe('Progress Feedback (AC3)', () => {
    it('should show installation progress messages', async () => {
      await runWizard();

      expect(consoleLogSpy).toHaveBeenCalledWith(
        expect.stringContaining('Installing dependencies'),
      );
      expect(consoleLogSpy).toHaveBeenCalledWith(expect.stringContaining('installed'));
    });
  });

  describe('Wizard State Flow', () => {
    it('should maintain correct state through all steps', async () => {
      const answers = await runWizard();

      // Verify all story steps completed
      expect(answers.projectType).toBeDefined(); // Story 1.3
      expect(answers.selectedIDEs).toBeDefined(); // Story 1.4
      expect(answers.envConfigured).toBeDefined(); // Story 1.6
      expect(answers.packageManager).toBeDefined(); // Story 1.7 (auto-detected)
      expect(answers.depsInstalled).toBeDefined(); // Story 1.7
      expect(answers.aiosCoreInstalled).toBeDefined(); // Story 1.4 - AIOS core
    });

    it('should handle environment config failure gracefully', async () => {
      configureEnvironment.mockRejectedValue(new Error('Env config failed'));

      // Mock prompt sequence: 1) language, 2) user profile (Story 10.2), 3) project type + IDEs + tech preset, 4) continueWithoutEnv
      inquirer.prompt
        .mockResolvedValueOnce({ language: 'en' })
        .mockResolvedValueOnce({ userProfile: 'advanced' }) // Story 10.2: User Profile
        .mockResolvedValueOnce({
          projectType: 'greenfield',
          selectedIDEs: [],
          selectedTechPreset: 'none',
        })
        .mockResolvedValueOnce({
          continueWithoutEnv: true,
        });

      const answers = await runWizard();

      expect(answers.envConfigured).toBe(false);
      // Should still proceed to dependency installation
      expect(installDependencies).toHaveBeenCalled();
    });

    it('should handle AIOS core installation failure gracefully', async () => {
      installAiosCore.mockRejectedValue(new Error('AIOS core installation failed'));

      const answers = await runWizard();

      expect(answers.aiosCoreInstalled).toBe(false);
      // Should still proceed to other steps
      expect(configureEnvironment).toHaveBeenCalled();
    });
  });
});

```

==================================================
📄 tests/wizard/index.test.js
==================================================
```js
// Wizard test - uses describeIntegration due to file dependencies
/**
 * Main Wizard Test Suite
 *
 * Tests wizard flow, welcome/completion messages, and cancellation handling
 */

const { runWizard } = require('../../packages/installer/src/wizard/index');
const inquirer = require('inquirer');

// Mock dependencies
jest.mock('inquirer');
jest.mock('../../packages/installer/src/wizard/feedback');

// Mock Story 1.6 environment configuration (added for wizard integration)
jest.mock('../../packages/installer/src/config/configure-environment', () => ({
  configureEnvironment: jest.fn().mockResolvedValue({
    envCreated: true,
    envExampleCreated: true,
    coreConfigCreated: true,
    gitignoreUpdated: true,
    errors: [],
  }),
}));

const originalLog = console.log;
const originalWarn = console.warn;
const originalError = console.error;

beforeEach(() => {
  console.log = jest.fn();
  console.warn = jest.fn();
  console.error = jest.fn();
  jest.clearAllMocks();
});

afterEach(() => {
  console.log = originalLog;
  console.warn = originalWarn;
  console.error = originalError;
});

describeIntegration('wizard/index', () => {
  describeIntegration('runWizard', () => {
    test('shows welcome message', async () => {
      inquirer.prompt = jest.fn().mockResolvedValue({ projectType: 'greenfield' });
      const { showWelcome } = require('../../packages/installer/src/wizard/feedback');

      await runWizard();

      expect(showWelcome).toHaveBeenCalled();
    });

    test('prompts user with questions', async () => {
      inquirer.prompt =
        jest.fn().mockResolvedValue({ projectType: 'greenfield' });

      await runWizard();

      expect(inquirer.prompt).toHaveBeenCalledWith(
        expect.arrayContaining([
          expect.objectContaining({
            name: 'projectType',
          }),
        ]),
      );
    });

    test('shows completion message on success', async () => {
      inquirer.prompt = jest.fn().mockResolvedValue({ projectType: 'greenfield' });
      const { showCompletion } = require('../../packages/installer/src/wizard/feedback');

      await runWizard();

      expect(showCompletion).toHaveBeenCalled();
    });

    test('returns answers object', async () => {
      const mockAnswers = { projectType: 'greenfield' };
      inquirer.prompt = jest.fn().mockResolvedValue(mockAnswers);

      const result = await runWizard();

      expect(result).toEqual(mockAnswers);
    });

    test('handles inquirer errors', async () => {
      const error = new Error('Test error');
      inquirer.prompt = jest.fn().mockRejectedValue(error);

      await expect(runWizard()).rejects.toThrow('Test error');
    });

    test('handles TTY errors', async () => {
      const error = new Error('TTY error');
      error.isTtyError = true;
      inquirer.prompt = jest.fn().mockRejectedValue(error);

      await expect(runWizard()).rejects.toThrow();
      expect(console.error).toHaveBeenCalledWith(
        expect.stringContaining('couldn\'t be rendered'),
      );
    });
  });

  describeIntegration('Performance Monitoring (AC: < 100ms per question)', () => {
    test('tracks question response time', async () => {
      inquirer.prompt = jest.fn().mockResolvedValue({ projectType: 'greenfield' });

      const startTime = Date.now();
      await runWizard();
      const duration = Date.now() - startTime;

      // Wizard should complete quickly (< 1 second for 1 question)
      expect(duration).toBeLessThan(1000);
    });

    test('warns if average response time exceeds 100ms', async () => {
      // Mock slow prompt (> 100ms average per question)
      // Note: buildQuestionSequence returns 2 questions (project type + IDE selection)
      // So total time needs to be > 200ms for average > 100ms per question
      inquirer.prompt = jest.fn().mockImplementation(() => {
        return new Promise(resolve => {
          setTimeout(() => resolve({ projectType: 'greenfield' }), 250);
        });
      });

      await runWizard();

      // Should log warning about slow performance
      expect(console.warn).toHaveBeenCalledWith(
        expect.stringContaining('exceeds 100ms target'),
      );
    });

    test('does not warn if response time is acceptable', async () => {
      // Mock fast prompt (< 100ms)
      inquirer.prompt = jest.fn().mockImplementation(() => {
        return new Promise(resolve => {
          setTimeout(() => resolve({ projectType: 'greenfield' }), 50);
        });
      });

      await runWizard();

      // Should NOT log warning
      expect(console.warn).not.toHaveBeenCalled();
    });
  });

  describeIntegration('Answer Object Schema', () => {
    test('returns object with projectType', async () => {
      inquirer.prompt = jest.fn().mockResolvedValue({ projectType: 'greenfield' });

      const answers = await runWizard();

      expect(answers).toHaveProperty('projectType');
      expect(['greenfield', 'brownfield']).toContain(answers.projectType);
    });

    test('future: will include IDE selection (Story 1.4)', async () => {
      // Placeholder test for future Story 1.4
      inquirer.prompt = jest.fn().mockResolvedValue({
        projectType: 'greenfield',
        // Future: ide: 'vscode'
      });

      const answers = await runWizard();

      // Verify essential property exists regardless of other keys
      expect(answers).toMatchObject({ projectType: 'greenfield' });

      // Future: Will have more keys
      // expect(answers).toHaveProperty('ide');
    });

    test('future: will include MCP selections (Story 1.5)', async () => {
      // Placeholder test for future Story 1.5
      inquirer.prompt = jest.fn().mockResolvedValue({
        projectType: 'greenfield',
        // Future: mcps: ['clickup', 'supabase']
      });

      const answers = await runWizard();

      // Currently has projectType, may include more fields from other stories
      expect(answers).toHaveProperty('projectType');
      expect(Object.keys(answers).length).toBeGreaterThanOrEqual(1);

      // Future: Will have more keys
      // expect(answers).toHaveProperty('mcps');
    });
  });

  describeIntegration('Integration Contract (Story 1.1)', () => {
    test('exports runWizard function', () => {
      const wizard = require('../../packages/installer/src/wizard/index');
      expect(typeof wizard.runWizard).toBe('function');
    });

    test('runWizard is async function', () => {
      const wizard = require('../../packages/installer/src/wizard/index');
      const result = wizard.runWizard();
      expect(result).toBeInstanceOf(Promise);

      // Clean up promise
      result.catch(() => {});
    });

    test('matches integration contract signature', async () => {
      // Contract: exports.runWizard = async function() { ... }
      const wizard = require('../../packages/installer/src/wizard/index');

      inquirer.prompt = jest.fn().mockResolvedValue({ projectType: 'greenfield' });

      const result = await wizard.runWizard();

      // Should return answer object
      expect(result).toBeDefined();
      expect(typeof result).toBe('object');
      expect(result).toHaveProperty('projectType');
    });
  });
});

```

==================================================
📄 tests/wizard/feedback.test.js
==================================================
```js
/**
 * Feedback Helpers Test Suite
 *
 * Tests visual feedback components (spinners, progress bars, status messages)
 */

const {
  createSpinner,
  showSuccess,
  showError,
  showWarning,
  showInfo,
  showTip,
  createProgressBar,
  updateProgress,
  completeProgress,
  showWelcome,
  showCompletion,
  showSection,
  showCancellation,
  estimateTimeRemaining,
} = require('../../packages/installer/src/wizard/feedback');

// Mock ora and cli-progress
jest.mock('ora');
jest.mock('cli-progress');

// Mock console methods
const originalLog = console.log;
const originalWarn = console.warn;
const originalError = console.error;

beforeEach(() => {
  console.log = jest.fn();
  console.warn = jest.fn();
  console.error = jest.fn();
});

afterEach(() => {
  console.log = originalLog;
  console.warn = originalWarn;
  console.error = originalError;
  jest.clearAllMocks();
});

describe('feedback', () => {
  describe('createSpinner', () => {
    test('creates spinner with text', () => {
      const ora = require('ora');
      ora.mockReturnValue({ start: jest.fn(), stop: jest.fn() });

      createSpinner('Loading...');

      expect(ora).toHaveBeenCalledWith(
        expect.objectContaining({
          text: 'Loading...',
          color: 'cyan',
          spinner: 'dots',
        }),
      );
    });

    test('accepts custom options', () => {
      const ora = require('ora');
      ora.mockReturnValue({ start: jest.fn(), stop: jest.fn() });

      createSpinner('Loading...', { color: 'red' });

      expect(ora).toHaveBeenCalledWith(
        expect.objectContaining({
          text: 'Loading...',
          color: 'red',
        }),
      );
    });
  });

  describe('Status Messages', () => {
    test('showSuccess displays success message', () => {
      showSuccess('Task complete');
      expect(console.log).toHaveBeenCalledWith(expect.stringContaining('Task complete'));
    });

    test('showError displays error message', () => {
      showError('Task failed');
      expect(console.log).toHaveBeenCalledWith(expect.stringContaining('Task failed'));
    });

    test('showWarning displays warning message', () => {
      showWarning('Be careful');
      expect(console.log).toHaveBeenCalledWith(expect.stringContaining('Be careful'));
    });

    test('showInfo displays info message', () => {
      showInfo('Helpful information');
      expect(console.log).toHaveBeenCalledWith(expect.stringContaining('Helpful information'));
    });

    test('showTip displays tip message', () => {
      showTip('Pro tip');
      expect(console.log).toHaveBeenCalledWith(expect.stringContaining('Pro tip'));
    });
  });

  describe('Progress Bar', () => {
    test('createProgressBar initializes with total', () => {
      const cliProgress = require('cli-progress');
      const mockBar = {
        start: jest.fn(),
        update: jest.fn(),
        stop: jest.fn(),
      };
      cliProgress.SingleBar.mockImplementation(() => mockBar);

      createProgressBar(10);

      expect(mockBar.start).toHaveBeenCalledWith(10, 0, { task: 'Initializing...' });
    });

    test('updateProgress updates bar with task name', () => {
      const mockBar = {
        start: jest.fn(),
        update: jest.fn(),
        stop: jest.fn(),
      };

      updateProgress(mockBar, 5, 'Processing...');

      expect(mockBar.update).toHaveBeenCalledWith(5, { task: 'Processing...' });
    });

    test('completeProgress stops bar', () => {
      const mockBar = {
        start: jest.fn(),
        update: jest.fn(),
        stop: jest.fn(),
      };

      completeProgress(mockBar);

      expect(mockBar.stop).toHaveBeenCalled();
    });
  });

  describe('Welcome and Completion', () => {
    test('showWelcome displays welcome message', () => {
      showWelcome();
      expect(console.log).toHaveBeenCalled();
      // Check that multiple lines were logged (heading + info)
      expect(console.log.mock.calls.length).toBeGreaterThan(1);
    });

    test('showCompletion displays completion message', () => {
      showCompletion();
      expect(console.log).toHaveBeenCalled();
      expect(console.log.mock.calls.length).toBeGreaterThan(1);
    });

    test('showSection displays section header', () => {
      showSection('Configuration');
      expect(console.log).toHaveBeenCalledWith(expect.stringContaining('Configuration'));
    });

    test('showCancellation displays cancellation message', () => {
      showCancellation();
      expect(console.log).toHaveBeenCalled();
      expect(console.log.mock.calls.some(call =>
        call[0].includes('cancelled'),
      )).toBe(true);
    });
  });

  describe('estimateTimeRemaining', () => {
    test('returns "Calculating..." for first step', () => {
      const estimate = estimateTimeRemaining(0, 10, Date.now());
      expect(estimate).toBe('Calculating...');
    });

    test('estimates seconds for short tasks', () => {
      const startTime = Date.now() - 5000; // 5 seconds ago
      const estimate = estimateTimeRemaining(5, 10, startTime);
      expect(estimate).toMatch(/~\d+s remaining/);
    });

    test('estimates minutes for long tasks', () => {
      const startTime = Date.now() - 120000; // 2 minutes ago
      const estimate = estimateTimeRemaining(2, 10, startTime);
      expect(estimate).toMatch(/~\d+m remaining/);
    });

    test('handles edge case of last step', () => {
      const startTime = Date.now() - 10000;
      const estimate = estimateTimeRemaining(10, 10, startTime);
      expect(estimate).toBe('~0s remaining');
    });
  });
});

```

==================================================
📄 tests/wizard/validators.test.js
==================================================
```js
/**
 * Validators Test Suite
 *
 * Tests security validators with malicious inputs (OWASP compliance)
 */

const {
  validateProjectType,
  validatePath,
  validateTextInput,
  validateProjectName,
  validateListSelection,
  sanitizeShellInput,
  INPUT_LIMITS,
  ALLOWED_PROJECT_TYPES,
} = require('../../packages/installer/src/wizard/validators');

describe('validators', () => {
  describe('validateProjectType', () => {
    test('accepts valid project types', () => {
      expect(validateProjectType('greenfield')).toBe(true);
      expect(validateProjectType('brownfield')).toBe(true);
      expect(validateProjectType('GREENFIELD')).toBe(true); // case insensitive
    });

    test('rejects invalid project types', () => {
      expect(validateProjectType('invalid')).toContain('Invalid project type');
      expect(validateProjectType('monolith')).toContain('Invalid project type');
      expect(validateProjectType('')).toContain('required'); // Empty string triggers "required" check first
    });

    test('rejects malicious inputs', () => {
      expect(validateProjectType('greenfield; rm -rf /')).toContain('Invalid project type');
      expect(validateProjectType('$(whoami)')).toContain('Invalid project type');
      expect(validateProjectType('<script>alert(1)</script>')).toContain('Invalid project type');
    });

    test('handles null/undefined', () => {
      expect(validateProjectType(null)).toContain('required');
      expect(validateProjectType(undefined)).toContain('required');
    });
  });

  describe('validatePath', () => {
    const baseDir = process.cwd();

    test('accepts valid paths', () => {
      expect(validatePath('./src', baseDir)).toBe(true);
      expect(validatePath('src/components', baseDir)).toBe(true);
      expect(validatePath('.', baseDir)).toBe(true);
    });

    test('rejects path traversal attacks', () => {
      expect(validatePath('../../../etc/passwd', baseDir)).toContain('path traversal');
      // Backslashes are caught by shell-special character check first on Windows
      expect(validatePath('..\\..\\..\\Windows\\System32', baseDir)).toContain('invalid characters');
      expect(validatePath('~/../../root', baseDir)).toContain('path traversal');
    });

    test('rejects shell-special characters', () => {
      expect(validatePath('src; rm -rf /', baseDir)).toContain('invalid characters');
      expect(validatePath('src | curl evil.com', baseDir)).toContain('invalid characters');
      expect(validatePath('src`whoami`', baseDir)).toContain('invalid characters');
      expect(validatePath('src$(cat /etc/passwd)', baseDir)).toContain('invalid characters');
    });

    test('rejects paths exceeding length limit', () => {
      const longPath = 'a'.repeat(INPUT_LIMITS.path + 1);
      expect(validatePath(longPath, baseDir)).toContain('too long');
    });

    test('handles null/undefined', () => {
      expect(validatePath(null, baseDir)).toContain('required');
      expect(validatePath(undefined, baseDir)).toContain('required');
    });
  });

  describe('validateTextInput', () => {
    test('accepts valid text', () => {
      expect(validateTextInput('valid text')).toBe(true);
      expect(validateTextInput('valid-text-123')).toBe(true);
      expect(validateTextInput('Hello World!')).toBe(true);
    });

    test('rejects command injection patterns', () => {
      expect(validateTextInput('; rm -rf /')).toContain('invalid characters');
      // All shell-special characters are caught by the first check
      expect(validateTextInput('$(whoami)')).toContain('invalid characters');
      expect(validateTextInput('`cat /etc/passwd`')).toContain('invalid characters');
      expect(validateTextInput('test && curl evil.com')).toContain('invalid characters');
      expect(validateTextInput('test || echo hacked')).toContain('invalid characters');
    });

    test('rejects XSS-style inputs', () => {
      // < and > are caught by shell-special character check first
      expect(validateTextInput('<script>alert(1)</script>')).toContain('invalid characters');
      expect(validateTextInput('<img src=x onerror=alert(1)>')).toContain('invalid characters');
    });

    test('rejects inputs exceeding length limit', () => {
      const longText = 'a'.repeat(INPUT_LIMITS.generic + 1);
      expect(validateTextInput(longText)).toContain('too long');
    });

    test('rejects empty/whitespace-only input', () => {
      expect(validateTextInput('')).toContain('required');
      expect(validateTextInput(' ')).toContain('empty');
    });

    test('respects custom length limit', () => {
      const text = 'a'.repeat(50);
      expect(validateTextInput(text, 100)).toBe(true);
      expect(validateTextInput(text, 40)).toContain('too long');
    });

    test('handles null/undefined', () => {
      expect(validateTextInput(null)).toContain('required');
      expect(validateTextInput(undefined)).toContain('required');
    });
  });

  describe('validateProjectName', () => {
    test('accepts valid project names', () => {
      expect(validateProjectName('my-project')).toBe(true);
      expect(validateProjectName('MyProject123')).toBe(true);
      expect(validateProjectName('project_name')).toBe(true);
    });

    test('rejects names with special characters', () => {
      expect(validateProjectName('my project')).toContain('letters, numbers, dashes');
      expect(validateProjectName('project!')).toContain('letters, numbers, dashes');
      expect(validateProjectName('project@123')).toContain('letters, numbers, dashes');
    });

    test('rejects names starting with dash/underscore', () => {
      expect(validateProjectName('-project')).toContain('start with a letter or number');
      expect(validateProjectName('_project')).toContain('start with a letter or number');
    });

    test('rejects malicious inputs', () => {
      expect(validateProjectName('project; rm -rf /')).toContain('letters, numbers, dashes');
      expect(validateProjectName('$(whoami)')).toContain('letters, numbers, dashes');
    });

    test('rejects names exceeding length limit', () => {
      const longName = 'a'.repeat(INPUT_LIMITS.projectName + 1);
      expect(validateProjectName(longName)).toContain('too long');
    });

    test('handles null/undefined', () => {
      expect(validateProjectName(null)).toContain('required');
      expect(validateProjectName(undefined)).toContain('required');
    });
  });

  describe('validateListSelection', () => {
    const allowedValues = ['option1', 'option2', 'option3'];

    test('accepts valid selections', () => {
      expect(validateListSelection('option1', allowedValues)).toBe(true);
      expect(validateListSelection('option2', allowedValues)).toBe(true);
    });

    test('rejects invalid selections', () => {
      expect(validateListSelection('option4', allowedValues)).toContain('Invalid selection');
      expect(validateListSelection('invalid', allowedValues)).toContain('Invalid selection');
    });

    test('handles null/undefined', () => {
      expect(validateListSelection(null, allowedValues)).toContain('required');
      expect(validateListSelection(undefined, allowedValues)).toContain('required');
    });
  });

  describe('sanitizeShellInput', () => {
    test('escapes shell-special characters', () => {
      expect(sanitizeShellInput('test; rm -rf /')).toBe('test\\; rm -rf /');
      expect(sanitizeShellInput('test`whoami`')).toBe('test\\`whoami\\`');
      expect(sanitizeShellInput('test$(cat file)')).toBe('test\\$\\(cat file\\)');
      expect(sanitizeShellInput('test\nline2')).toBe('test\\nline2');
    });

    test('handles quotes', () => {
      expect(sanitizeShellInput("test'quote")).toBe("test\\'quote");
      expect(sanitizeShellInput('test"quote')).toBe('test\\"quote');
    });

    test('handles backslashes', () => {
      expect(sanitizeShellInput('test\\path')).toBe('test\\\\path');
    });

    test('handles non-string input', () => {
      expect(sanitizeShellInput(null)).toBe('');
      expect(sanitizeShellInput(undefined)).toBe('');
      expect(sanitizeShellInput(123)).toBe('');
    });
  });

  describe('Buffer Overflow Protection', () => {
    test('rejects 10,000 character strings', () => {
      const massiveInput = 'a'.repeat(10000);
      expect(validateTextInput(massiveInput)).toContain('too long');
      expect(validateProjectName(massiveInput)).toContain('too long');
      expect(validatePath(massiveInput)).toContain('too long');
    });
  });

  describe('Constants Export', () => {
    test('exports INPUT_LIMITS', () => {
      expect(INPUT_LIMITS).toBeDefined();
      expect(INPUT_LIMITS.projectName).toBe(100);
      expect(INPUT_LIMITS.path).toBe(255);
      expect(INPUT_LIMITS.generic).toBe(500);
    });

    test('exports ALLOWED_PROJECT_TYPES', () => {
      expect(ALLOWED_PROJECT_TYPES).toBeDefined();
      expect(ALLOWED_PROJECT_TYPES).toContain('greenfield');
      expect(ALLOWED_PROJECT_TYPES).toContain('brownfield');
    });
  });
});

```

==================================================
📄 tests/wizard/questions.test.js
==================================================
```js
// Wizard test - uses describeIntegration due to file dependencies
/**
 * Questions Test Suite
 *
 * Tests question definitions and sequencing logic
 */

const {
  getProjectTypeQuestion,
  getUserProfileQuestion,
  getIDEQuestions,
  getMCPQuestions,
  getEnvironmentQuestions,
  buildQuestionSequence,
  getQuestionById,
} = require('../../packages/installer/src/wizard/questions');

describeIntegration('questions', () => {
  describeIntegration('getProjectTypeQuestion', () => {
    test('returns valid inquirer question object', () => {
      const question = getProjectTypeQuestion();

      expect(question).toHaveProperty('type', 'list');
      expect(question).toHaveProperty('name', 'projectType');
      expect(question).toHaveProperty('message');
      expect(question).toHaveProperty('choices');
      expect(question).toHaveProperty('validate');
    });

    test('has greenfield and brownfield choices', () => {
      const question = getProjectTypeQuestion();

      expect(question.choices).toHaveLength(2);
      expect(question.choices[0]).toHaveProperty('value', 'greenfield');
      expect(question.choices[1]).toHaveProperty('value', 'brownfield');
    });

    test('includes validator function', () => {
      const question = getProjectTypeQuestion();

      expect(typeof question.validate).toBe('function');
    });

    test('validator accepts valid project types', () => {
      const question = getProjectTypeQuestion();

      expect(question.validate('greenfield')).toBe(true);
      expect(question.validate('brownfield')).toBe(true);
    });

    test('validator rejects invalid project types', () => {
      const question = getProjectTypeQuestion();

      const result = question.validate('invalid');
      expect(result).not.toBe(true);
      expect(typeof result).toBe('string');
    });
  });

  describeIntegration('getUserProfileQuestion (Story 10.2)', () => {
    test('returns valid inquirer question object', () => {
      const question = getUserProfileQuestion();

      expect(question).toHaveProperty('type', 'list');
      expect(question).toHaveProperty('name', 'userProfile');
      expect(question).toHaveProperty('message');
      expect(question).toHaveProperty('choices');
    });

    test('has bob (assisted) and advanced choices', () => {
      const question = getUserProfileQuestion();

      expect(question.choices).toHaveLength(2);
      expect(question.choices[0]).toHaveProperty('value', 'bob');
      expect(question.choices[1]).toHaveProperty('value', 'advanced');
    });

    test('bob choice includes assisted mode indicator', () => {
      const question = getUserProfileQuestion();

      // First choice should be bob (assisted mode)
      expect(question.choices[0].name).toContain('🟢');
      expect(question.choices[0].value).toBe('bob');
    });

    test('advanced choice includes advanced mode indicator', () => {
      const question = getUserProfileQuestion();

      // Second choice should be advanced (advanced mode)
      expect(question.choices[1].name).toContain('🔵');
      expect(question.choices[1].value).toBe('advanced');
    });

    test('defaults to advanced (index 1) for backward compatibility', () => {
      const question = getUserProfileQuestion();

      // Default should be index 1 (advanced) for backward compatibility
      expect(question.default).toBe(1);
    });

    test('bob choice is marked as recommended', () => {
      const question = getUserProfileQuestion();

      // Bob choice should include recommended indicator
      expect(question.choices[0].name.toLowerCase()).toMatch(/recommend|recomend/);
    });
  });

  describeIntegration('getIDEQuestions', () => {
    test('returns array of IDE selection questions (Story 1.4)', () => {
      const questions = getIDEQuestions();
      expect(Array.isArray(questions)).toBe(true);
    });

    test('returns one IDE selection question', () => {
      const questions = getIDEQuestions();
      expect(questions).toHaveLength(1);
      expect(questions[0]).toHaveProperty('name', 'selectedIDEs');
      expect(questions[0]).toHaveProperty('type', 'checkbox');
    });
  });

  describeIntegration('getMCPQuestions', () => {
    test('returns array (placeholder for Story 1.5)', () => {
      const questions = getMCPQuestions();
      expect(Array.isArray(questions)).toBe(true);
    });

    test('currently returns empty array', () => {
      const questions = getMCPQuestions();
      expect(questions).toHaveLength(0);
    });
  });

describeIntegration('getEnvironmentQuestions', () => { + test('returns array (placeholder for Story 1.6)', () => { + const questions = getEnvironmentQuestions(); + expect(Array.isArray(questions)).toBe(true); + }); + + test('currently returns empty array', () => { + const questions = getEnvironmentQuestions(); + expect(questions).toHaveLength(0); + }); + }); + + describeIntegration('buildQuestionSequence', () => { + test('returns array of questions', () => { + const questions = buildQuestionSequence(); + expect(Array.isArray(questions)).toBe(true); + }); + + test('includes project type question', () => { + const questions = buildQuestionSequence(); + expect(questions).toHaveLength(2); // Story 1.2 (projectType) + Story 1.4 (IDE) + expect(questions[0]).toHaveProperty('name', 'projectType'); + }); + + test('accepts context parameter', () => { + const context = { someValue: 'test' }; + const questions = buildQuestionSequence(context); + expect(Array.isArray(questions)).toBe(true); + }); + + test('future: conditional questions based on context', () => { + // This test documents future behavior for Stories 1.3-1.6 + // When implemented, questions should vary based on context.projectType + const contextGreenfield = { projectType: 'greenfield' }; + const contextBrownfield = { projectType: 'brownfield' }; + + // Currently same length (Stories 1.2 + 1.4 implemented) + expect(buildQuestionSequence(contextGreenfield)).toHaveLength(2); + expect(buildQuestionSequence(contextBrownfield)).toHaveLength(2); + + // Future: lengths may differ based on project type + // expect(buildQuestionSequence(contextGreenfield).length).not.toBe( + // buildQuestionSequence(contextBrownfield).length + // ); + }); + }); + + describeIntegration('getQuestionById', () => { + test('returns projectType question by ID', () => { + const question = getQuestionById('projectType'); + expect(question).not.toBeNull(); + expect(question).toHaveProperty('name', 'projectType'); + }); + + test('returns null for 
unknown ID', () => { + const question = getQuestionById('unknownQuestion'); + expect(question).toBeNull(); + }); + + test('handles undefined ID', () => { + const question = getQuestionById(undefined); + expect(question).toBeNull(); + }); + }); + + describeIntegration('Question Message Formatting', () => { + test('projectType question has colored message', () => { + const question = getProjectTypeQuestion(); + // Message should be wrapped in color function (contains ANSI codes) + expect(typeof question.message).toBe('string'); + expect(question.message.length).toBeGreaterThan(0); + }); + + test('choices have descriptive names', () => { + const question = getProjectTypeQuestion(); + + expect(question.choices[0].name).toContain('Greenfield'); + expect(question.choices[1].name).toContain('Brownfield'); + }); + + test('choices include helpful descriptions', () => { + const question = getProjectTypeQuestion(); + + expect(question.choices[0].name).toContain('new project'); + expect(question.choices[1].name).toContain('existing project'); + }); + }); +}); + + +``` + +================================================== +📄 tests/clickup/status-sync.test.js +================================================== +```js +// File: tests/clickup/status-sync.test.js + +/** + * Status Synchronization Test Suite + * + * Tests bidirectional status synchronization between local .md files and ClickUp. + * Validates status mapping, Epic vs Story status handling, and error scenarios. 
+ */ + +const { mapStatusToClickUp, mapStatusFromClickUp } = require('../../common/utils/status-mapper'); +const { updateStoryStatus, updateEpicStatus } = require('../../common/utils/clickup-helpers'); + +describe('Status Mapper - Bidirectional Mapping', () => { + describe('AIOS to ClickUp Mapping', () => { + test('should map Draft status correctly', () => { + expect(mapStatusToClickUp('Draft')).toBe('Draft'); + }); + + test('should map Ready for Review status correctly', () => { + expect(mapStatusToClickUp('Ready for Review')).toBe('Ready for Review'); + }); + + test('should map Review status correctly', () => { + expect(mapStatusToClickUp('Review')).toBe('Review'); + }); + + test('should map In Progress status correctly', () => { + expect(mapStatusToClickUp('In Progress')).toBe('In Progress'); + }); + + test('should map Done status correctly', () => { + expect(mapStatusToClickUp('Done')).toBe('Done'); + }); + + test('should map Blocked status correctly', () => { + expect(mapStatusToClickUp('Blocked')).toBe('Blocked'); + }); + + test('should handle unknown status gracefully', () => { + const unknownStatus = 'Unknown Status'; + expect(mapStatusToClickUp(unknownStatus)).toBe(unknownStatus); + }); + }); + + describe('ClickUp to AIOS Mapping', () => { + test('should map Draft status correctly', () => { + expect(mapStatusFromClickUp('Draft')).toBe('Draft'); + }); + + test('should map Ready for Dev to Ready for Review (special case)', () => { + expect(mapStatusFromClickUp('Ready for Dev')).toBe('Ready for Review'); + }); + + test('should map Ready for Review status correctly', () => { + expect(mapStatusFromClickUp('Ready for Review')).toBe('Ready for Review'); + }); + + test('should map Review status correctly', () => { + expect(mapStatusFromClickUp('Review')).toBe('Review'); + }); + + test('should map In Progress status correctly', () => { + expect(mapStatusFromClickUp('In Progress')).toBe('In Progress'); + }); + + test('should map Done status correctly', () => { + 
expect(mapStatusFromClickUp('Done')).toBe('Done'); + }); + + test('should map Blocked status correctly', () => { + expect(mapStatusFromClickUp('Blocked')).toBe('Blocked'); + }); + + test('should handle unknown status gracefully', () => { + const unknownStatus = 'Unknown Status'; + expect(mapStatusFromClickUp(unknownStatus)).toBe(unknownStatus); + }); + }); + + describe('Bidirectional Round-Trip', () => { + test('should maintain consistency through round-trip for all standard statuses', () => { + const statuses = ['Draft', 'Ready for Review', 'Review', 'In Progress', 'Done', 'Blocked']; + + statuses.forEach(status => { + const toClickUp = mapStatusToClickUp(status); + const backToLocal = mapStatusFromClickUp(toClickUp); + expect(backToLocal).toBe(status); + }); + }); + + test('should handle Ready for Dev special case in round-trip', () => { + // ClickUp "Ready for Dev" → Local "Ready for Review" + const localStatus = mapStatusFromClickUp('Ready for Dev'); + expect(localStatus).toBe('Ready for Review'); + + // Local "Ready for Review" → ClickUp "Ready for Review" + const clickupStatus = mapStatusToClickUp(localStatus); + expect(clickupStatus).toBe('Ready for Review'); + }); + }); +}); + +describe('Story Status Progression Flow', () => { + test('should follow typical story lifecycle: Draft → In Progress → Review → Done', async () => { + const _mockTaskId = 'test-story-123'; + const lifecycle = ['Draft', 'In Progress', 'Review', 'Done']; + + // Note: This is a conceptual test showing the expected flow + // In practice, these would call mocked ClickUp MCP functions + for (const status of lifecycle) { + const mappedStatus = mapStatusToClickUp(status); + expect(mappedStatus).toBe(status); + + // In real implementation, would call: + // await updateStoryStatus(_mockTaskId, status); + } + }); + + test('should handle status regression: Review → In Progress → Review', async () => { + const _mockTaskId = 'test-story-456'; + const progression = ['Review', 'In Progress', 
'Review']; + + for (const status of progression) { + const mappedStatus = mapStatusToClickUp(status); + expect(mappedStatus).toBe(status); + } + }); + + test('should handle blocked status at any stage', async () => { + const statuses = ['Draft', 'In Progress', 'Review', 'Done']; + + statuses.forEach(status => { + // Any status can transition to Blocked + expect(mapStatusToClickUp('Blocked')).toBe('Blocked'); + + // Blocked can transition back to any status + expect(mapStatusToClickUp(status)).toBe(status); + }); + }); +}); + +describe('Epic Status Handling (Native Field)', () => { + test('should validate Epic status values', () => { + const validEpicStatuses = ['Planning', 'In Progress', 'Done']; + + validEpicStatuses.forEach(status => { + // Epic statuses don't go through mapper (native field) + // They should be passed directly to ClickUp + expect(status).toMatch(/^(Planning|In Progress|Done)$/); + }); + }); + + test('should reject invalid Epic statuses', () => { + const invalidStatuses = ['Draft', 'Review', 'Blocked', 'Unknown']; + const validEpicStatuses = ['Planning', 'In Progress', 'Done']; + + invalidStatuses.forEach(status => { + expect(validEpicStatuses.includes(status)).toBe(false); + }); + }); + + test('should distinguish Epic status from Story status', () => { + // Epic statuses (native field) + const epicStatuses = ['Planning', 'In Progress', 'Done']; + + // Story statuses (custom field) + const storyStatuses = ['Draft', 'Ready for Review', 'Review', 'In Progress', 'Done', 'Blocked']; + + // Only 'In Progress' and 'Done' overlap + const overlapping = epicStatuses.filter(s => storyStatuses.includes(s)); + expect(overlapping).toEqual(['In Progress', 'Done']); + }); +}); + +describe('Error Handling and Edge Cases', () => { + test('should handle null status gracefully', () => { + const result = mapStatusToClickUp(null); + expect(result).toBe(null); + }); + + test('should handle undefined status gracefully', () => { + const result = 
mapStatusToClickUp(undefined); + expect(result).toBe(undefined); + }); + + test('should handle empty string status', () => { + const result = mapStatusToClickUp(''); + expect(result).toBe(''); + }); + + test('should handle case sensitivity', () => { + // Mapper should be case-sensitive (ClickUp is case-sensitive) + expect(mapStatusToClickUp('draft')).toBe('draft'); // Not mapped + expect(mapStatusToClickUp('Draft')).toBe('Draft'); // Mapped correctly + }); + + test('should handle status with extra whitespace', () => { + const statusWithSpace = ' In Progress '; + // Should not match due to whitespace (requires exact match) + expect(mapStatusToClickUp(statusWithSpace)).toBe(statusWithSpace); + }); +}); + +describe('Integration with ClickUp Helpers', () => { + // Note: These tests require mocking the ClickUp MCP tool + // For now, we validate the function signatures and error handling + + test('updateStoryStatus should accept taskId and status', () => { + expect(typeof updateStoryStatus).toBe('function'); + expect(updateStoryStatus.length).toBe(2); // taskId, newStatus + }); + + test('updateEpicStatus should accept epicTaskId and status', () => { + expect(typeof updateEpicStatus).toBe('function'); + expect(updateEpicStatus.length).toBe(2); // epicTaskId, newStatus + }); +}); + +``` + +================================================== +📄 tests/agents/backward-compatibility.test.js +================================================== +```js +// Integration/Performance test - uses describeIntegration +const fs = require('fs').promises; +const path = require('path'); +const yaml = require('js-yaml'); + +/** + * Agent Backward Compatibility Test Suite + * + * Task 5.2 Requirements: + * - Agents without dependencies.tools continue working + * - No errors thrown during agent activation + * - Existing workflows unaffected + * - Verify graceful handling of missing tools field + * + * This suite tests: + * 1. Agents without dependencies.tools field load successfully + * 2. 
No errors when accessing tools property
 * 3. Agent structure remains valid
 * 4. Workflows using agents without tools work correctly
 */
describeIntegration('Agent Backward Compatibility - Missing Tools Field', () => {
  const agentsPath = path.join(__dirname, '../../aios-core/agents');

  // Agents WITH dependencies.tools (migrated)
  const agentsWithTools = ['dev', 'qa', 'architect', 'po', 'sm'];

  // Agents WITHOUT dependencies.tools (legacy/backward compatible)
  const agentsWithoutTools = ['analyst', 'pm', 'ux-expert'];

  // Helper function to load agent YAML from markdown
  async function loadAgentYaml(agentId) {
    const filePath = path.join(agentsPath, `${agentId}.md`);
    const content = await fs.readFile(filePath, 'utf8');

    // Extract YAML block from markdown - handle both \n and \r\n line endings
    const yamlMatch = content.match(/```yaml[\r\n]+([\s\S]*?)[\r\n]+```/);
    if (!yamlMatch) {
      throw new Error(`No YAML block found in ${agentId}.md`);
    }

    return yaml.load(yamlMatch[1]);
  }

  // Helper to safely access dependencies.tools
  function getAgentTools(agentConfig) {
    return agentConfig?.dependencies?.tools || null;
  }

  describeIntegration('Agents Without Tools Field', () => {
    test('agents without dependencies.tools load successfully', async () => {
      const results = [];

      for (const agentId of agentsWithoutTools) {
        const config = await loadAgentYaml(agentId);
        results.push({
          id: agentId,
          hasTools: !!config.dependencies?.tools,
          config,
        });
      }

      // All should load without errors
      expect(results).toHaveLength(agentsWithoutTools.length);

      // All should NOT have tools field
      results.forEach(({ hasTools }) => {
        expect(hasTools).toBe(false);
      });
    });

    test('analyst agent has no tools field', async () => {
      const config = await loadAgentYaml('analyst');

      expect(config.dependencies).toBeDefined();
      expect(config.dependencies.tools).toBeUndefined();

      // Should have other dependencies
expect(config.dependencies.tasks).toBeDefined(); + expect(config.dependencies.templates).toBeDefined(); + }); + + test('pm agent has no tools field', async () => { + const config = await loadAgentYaml('pm'); + + expect(config.dependencies).toBeDefined(); + expect(config.dependencies.tools).toBeUndefined(); + }); + + test('ux-expert agent has no tools field', async () => { + const config = await loadAgentYaml('ux-expert'); + + expect(config.dependencies).toBeDefined(); + expect(config.dependencies.tools).toBeUndefined(); + }); + }); + + describeIntegration('Agents With Tools Field', () => { + test('dev agent has tools field', async () => { + const config = await loadAgentYaml('dev'); + + expect(config.dependencies).toBeDefined(); + expect(config.dependencies.tools).toBeDefined(); + expect(Array.isArray(config.dependencies.tools)).toBe(true); + expect(config.dependencies.tools.length).toBeGreaterThan(0); + }); + + test('qa agent has tools field', async () => { + const config = await loadAgentYaml('qa'); + + expect(config.dependencies).toBeDefined(); + expect(config.dependencies.tools).toBeDefined(); + expect(Array.isArray(config.dependencies.tools)).toBe(true); + }); + + test('all agents with tools have valid tool lists', async () => { + for (const agentId of agentsWithTools) { + const config = await loadAgentYaml(agentId); + const tools = getAgentTools(config); + + expect(tools).toBeDefined(); + expect(Array.isArray(tools)).toBe(true); + expect(tools.length).toBeGreaterThan(0); + + // Each tool should be a string (tool ID) + tools.forEach(toolId => { + expect(typeof toolId).toBe('string'); + expect(toolId.length).toBeGreaterThan(0); + }); + } + }); + }); + + describeIntegration('No Errors Thrown for Missing Tools', () => { + test('getAgentTools() handles undefined tools gracefully', async () => { + const config = await loadAgentYaml('analyst'); + + expect(() => { + const tools = getAgentTools(config); + expect(tools).toBeNull(); + }).not.toThrow(); + }); + + 
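    // The graceful handling verified in this group reduces to optional
    // chaining with a nullish fallback. A one-line sketch of the safe-access
    // pattern (illustration only; note the suite's getAgentTools helper
    // returns null rather than an empty array):
    const toolsOrEmpty = (agentConfig) => agentConfig?.dependencies?.tools ?? [];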
test('accessing tools on agents without field does not throw', async () => {
      for (const agentId of agentsWithoutTools) {
        await expect(async () => {
          const config = await loadAgentYaml(agentId);
          const tools = config.dependencies?.tools;
          expect(tools).toBeUndefined();
        }).not.toThrow();
      }
    });

    test('agents without tools can still be loaded and parsed', async () => {
      const configs = await Promise.all(
        agentsWithoutTools.map(id => loadAgentYaml(id)),
      );

      configs.forEach((config, index) => {
        expect(config).toBeDefined();
        expect(config.agent).toBeDefined();
        expect(config.agent.id).toBe(agentsWithoutTools[index]);
      });
    });
  });

  describeIntegration('Agent Structure Validation', () => {
    test('all agents have required core fields', async () => {
      const allAgents = [...agentsWithTools, ...agentsWithoutTools];

      for (const agentId of allAgents) {
        const config = await loadAgentYaml(agentId);

        // Core agent fields
        expect(config.agent).toBeDefined();
        expect(config.agent.id).toBe(agentId);
        expect(config.agent.name).toBeDefined();
        expect(config.agent.title).toBeDefined();

        // Core persona/behavior fields
        expect(config.persona || config.core_principles).toBeDefined();

        // Dependencies (may be empty but should exist)
        expect(config.dependencies).toBeDefined();
      }
    });

    test('agents without tools have other dependencies', async () => {
      for (const agentId of agentsWithoutTools) {
        const config = await loadAgentYaml(agentId);

        // Should have at least one other dependency type
        const hasTasks = config.dependencies?.tasks?.length > 0;
        const hasTemplates = config.dependencies?.templates?.length > 0;
        const hasData = config.dependencies?.data?.length > 0;
        const hasChecklists = config.dependencies?.checklists?.length > 0;

        expect(hasTasks || hasTemplates || hasData || hasChecklists).toBe(true);
      }
    });

    test('dependencies.tools is always array or undefined, never null', async () => {
      const
allAgents = [...agentsWithTools, ...agentsWithoutTools]; + + for (const agentId of allAgents) { + const config = await loadAgentYaml(agentId); + const tools = config.dependencies?.tools; + + // Should be undefined or array, never null + expect(tools === null).toBe(false); + if (tools !== undefined) { + expect(Array.isArray(tools)).toBe(true); + } + } + }); + }); + + describeIntegration('Workflow Compatibility', () => { + test('agents without tools can execute their commands', async () => { + // Test that analyst agent has valid commands despite no tools + const config = await loadAgentYaml('analyst'); + + expect(config.commands).toBeDefined(); + expect(config.commands.length).toBeGreaterThan(0); + + // Commands should reference tasks, not tools + const commandsStr = JSON.stringify(config.commands); + expect(commandsStr).toContain('task'); + }); + + test('agents without tools maintain activation instructions', async () => { + for (const agentId of agentsWithoutTools) { + const config = await loadAgentYaml(agentId); + + expect(config['activation-instructions']).toBeDefined(); + expect(Array.isArray(config['activation-instructions'])).toBe(true); + } + }); + + test('mock workflow execution with agent without tools', async () => { + // Simulate agent activation and command execution + const config = await loadAgentYaml('analyst'); + + // Mock activation + const mockActivation = () => { + return { + agentId: config.agent.id, + name: config.agent.name, + tools: getAgentTools(config), + commands: config.commands, + }; + }; + + const activated = mockActivation(); + + expect(activated.agentId).toBe('analyst'); + expect(activated.tools).toBeNull(); + expect(activated.commands).toBeDefined(); + expect(() => { + // Accessing tools should not throw + const tools = activated.tools || []; + expect(tools).toEqual([]); + }).not.toThrow(); + }); + }); + + describeIntegration('Graceful Degradation', () => { + test('agent system handles mixed agents (with and without tools)', async () 
=> {
      const results = {
        withTools: [],
        withoutTools: [],
      };

      for (const agentId of agentsWithTools) {
        const config = await loadAgentYaml(agentId);
        results.withTools.push({
          id: agentId,
          toolCount: getAgentTools(config)?.length || 0,
        });
      }

      for (const agentId of agentsWithoutTools) {
        const config = await loadAgentYaml(agentId);
        results.withoutTools.push({
          id: agentId,
          toolCount: getAgentTools(config)?.length || 0,
        });
      }

      // Agents with tools should have > 0 tools
      results.withTools.forEach(({ toolCount }) => {
        expect(toolCount).toBeGreaterThan(0);
      });

      // Agents without tools should have 0 tools
      results.withoutTools.forEach(({ toolCount }) => {
        expect(toolCount).toBe(0);
      });
    });

    test('safe tool access pattern works for all agents', async () => {
      const allAgents = [...agentsWithTools, ...agentsWithoutTools];

      for (const agentId of allAgents) {
        const config = await loadAgentYaml(agentId);

        // Safe access pattern
        const tools = config?.dependencies?.tools ??
[]; + + expect(Array.isArray(tools)).toBe(true); + expect(() => { + tools.forEach(tool => { + expect(typeof tool).toBe('string'); + }); + }).not.toThrow(); + } + }); + }); + + describeIntegration('Comprehensive Backward Compatibility Report', () => { + test('comprehensive agent compatibility check', async () => { + const report = { + agents_with_tools: [], + agents_without_tools: [], + errors: [], + structure_issues: [], + }; + + const allAgents = [...agentsWithTools, ...agentsWithoutTools]; + + for (const agentId of allAgents) { + try { + const config = await loadAgentYaml(agentId); + const tools = getAgentTools(config); + + const agentInfo = { + id: agentId, + name: config.agent.name, + has_tools_field: !!tools, + tool_count: tools?.length || 0, + has_dependencies: !!config.dependencies, + has_commands: !!config.commands, + }; + + if (tools) { + report.agents_with_tools.push(agentInfo); + } else { + report.agents_without_tools.push(agentInfo); + } + + // Check structure + if (!config.agent || !config.agent.id) { + report.structure_issues.push({ + agent: agentId, + issue: 'Missing agent.id', + }); + } + + } catch (error) { + report.errors.push({ + agent: agentId, + error: error.message, + }); + } + } + + // Verify results + expect(report.errors).toHaveLength(0); + expect(report.structure_issues).toHaveLength(0); + expect(report.agents_with_tools.length).toBe(agentsWithTools.length); + expect(report.agents_without_tools.length).toBe(agentsWithoutTools.length); + + // All agents without tools should have 0 tool count + report.agents_without_tools.forEach(agent => { + expect(agent.tool_count).toBe(0); + expect(agent.has_dependencies).toBe(true); + }); + + // Log summary + console.log('\n✅ Agent Backward Compatibility Report:'); + console.log(` Agents with tools: ${report.agents_with_tools.length}`); + console.log(` Agents without tools: ${report.agents_without_tools.length}`); + console.log(` Errors: ${report.errors.length}`); + console.log(` Structure issues: 
${report.structure_issues.length}`); + console.log(` Status: ${report.errors.length === 0 ? 'PASS ✅' : 'FAIL ❌'}`); + }); + }); +}); + +``` + +================================================== +📄 tests/updater/aios-updater.test.js +================================================== +```js +/** + * AIOS Updater Tests + * + * @story Epic 7 - CLI Update Command + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +const { AIOSUpdater, UpdateStatus, formatCheckResult, formatUpdateResult } = require('../../packages/installer/src/updater'); + +describe('AIOSUpdater', () => { + let tempDir; + let updater; + + beforeEach(async () => { + tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'aios-updater-test-')); + + // Create minimal .aios-core structure + await fs.ensureDir(path.join(tempDir, '.aios-core')); + await fs.ensureDir(path.join(tempDir, '.aios')); + + // Create version.json + await fs.writeJson(path.join(tempDir, '.aios-core', 'version.json'), { + version: '1.0.0', + installedAt: '2025-01-01T00:00:00Z', + mode: 'project-development', + fileHashes: { + 'test-file.md': 'sha256:abc123def456', + }, + }); + + updater = new AIOSUpdater(tempDir, { verbose: false }); + }); + + afterEach(async () => { + if (tempDir && fs.existsSync(tempDir)) { + await fs.remove(tempDir); + } + }); + + describe('constructor', () => { + it('should initialize with default options', () => { + const u = new AIOSUpdater(tempDir); + expect(u.projectRoot).toBe(path.resolve(tempDir)); + expect(u.options.verbose).toBe(false); + expect(u.options.force).toBe(false); + expect(u.options.preserveAll).toBe(true); + }); + + it('should accept custom options', () => { + const u = new AIOSUpdater(tempDir, { + verbose: true, + force: true, + preserveAll: false, + timeout: 60000, + }); + expect(u.options.verbose).toBe(true); + expect(u.options.force).toBe(true); + expect(u.options.preserveAll).toBe(false); + expect(u.options.timeout).toBe(60000); + }); + }); + + 
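  // The version comparisons exercised in this file assume semver-style
  // ordering. A minimal sketch of such a comparator (an illustrative
  // assumption, not the actual AIOSUpdater.compareVersions implementation):
  // strip an optional leading "v", then compare major/minor/patch
  // numerically from left to right.
  function compareVersionsSketch(a, b) {
    const parse = (v) => String(v).replace(/^v/, '').split('.').map(Number);
    const [pa, pb] = [parse(a), parse(b)];
    for (let i = 0; i < 3; i += 1) {
      const diff = (pa[i] || 0) - (pb[i] || 0);
      if (diff !== 0) return diff < 0 ? -1 : 1;
    }
    return 0;
  }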
describe('getInstalledVersion', () => { + it('should read version from version.json', async () => { + const version = await updater.getInstalledVersion(); + expect(version).toBeDefined(); + expect(version.version).toBe('1.0.0'); + expect(version.mode).toBe('project-development'); + }); + + it('should return null if version.json not found', async () => { + await fs.remove(path.join(tempDir, '.aios-core', 'version.json')); + const version = await updater.getInstalledVersion(); + expect(version).toBeNull(); + }); + + it('should handle corrupted version.json', async () => { + await fs.writeFile(path.join(tempDir, '.aios-core', 'version.json'), 'invalid json'); + const version = await updater.getInstalledVersion(); + expect(version).toBeNull(); + }); + }); + + describe('compareVersions', () => { + it('should return 0 for equal versions', () => { + expect(updater.compareVersions('1.0.0', '1.0.0')).toBe(0); + expect(updater.compareVersions('v1.0.0', '1.0.0')).toBe(0); + }); + + it('should return -1 when first is older', () => { + expect(updater.compareVersions('1.0.0', '1.0.1')).toBe(-1); + expect(updater.compareVersions('1.0.0', '1.1.0')).toBe(-1); + expect(updater.compareVersions('1.0.0', '2.0.0')).toBe(-1); + }); + + it('should return 1 when first is newer', () => { + expect(updater.compareVersions('1.0.1', '1.0.0')).toBe(1); + expect(updater.compareVersions('1.1.0', '1.0.0')).toBe(1); + expect(updater.compareVersions('2.0.0', '1.0.0')).toBe(1); + }); + + it('should handle version prefix v', () => { + expect(updater.compareVersions('v1.0.0', 'v1.0.0')).toBe(0); + expect(updater.compareVersions('v1.0.0', '1.0.1')).toBe(-1); + }); + }); + + describe('isBreakingUpdate', () => { + it('should detect major version change as breaking', () => { + expect(updater.isBreakingUpdate('1.0.0', '2.0.0')).toBe(true); + expect(updater.isBreakingUpdate('1.9.9', '2.0.0')).toBe(true); + }); + + it('should not flag minor/patch as breaking', () => { + expect(updater.isBreakingUpdate('1.0.0', 
'1.1.0')).toBe(false); + expect(updater.isBreakingUpdate('1.0.0', '1.0.1')).toBe(false); + }); + }); + + describe('checkConnectivity', () => { + it('should return boolean', async () => { + const result = await updater.checkConnectivity(); + expect(typeof result).toBe('boolean'); + }); + }); + + describe('checkForUpdates', () => { + it('should return installed version', async () => { + const result = await updater.checkForUpdates(); + expect(result.installed).toBe('1.0.0'); + }); + + it('should handle missing installation', async () => { + await fs.remove(path.join(tempDir, '.aios-core', 'version.json')); + const result = await updater.checkForUpdates(); + expect(result.status).toBe(UpdateStatus.CHECK_FAILED); + expect(result.error).toContain('not installed'); + }); + }); + + describe('detectCustomizations', () => { + it('should detect unchanged files', async () => { + // Create a file with matching hash + const testFile = path.join(tempDir, '.aios-core', 'test-file.md'); + await fs.writeFile(testFile, 'test content'); + + // Update version.json with correct hash + const { hashFile } = require('../../packages/installer/src/installer/file-hasher'); + const hash = `sha256:${hashFile(testFile)}`; + + await fs.writeJson(path.join(tempDir, '.aios-core', 'version.json'), { + version: '1.0.0', + fileHashes: { + 'test-file.md': hash, + }, + }); + + const result = await updater.detectCustomizations(); + expect(result.unchanged).toContain('test-file.md'); + }); + + it('should detect missing files', async () => { + const result = await updater.detectCustomizations(); + expect(result.missing).toContain('test-file.md'); + }); + + it('should detect customized files', async () => { + // Create a file with different content than hash + const testFile = path.join(tempDir, '.aios-core', 'test-file.md'); + await fs.writeFile(testFile, 'modified content'); + + const result = await updater.detectCustomizations(); + expect(result.customized).toContain('test-file.md'); + }); + }); + + 
describe('createBackup', () => { + it('should create backup directory', async () => { + await updater.createBackup(); + expect(updater.backupDir).toBeDefined(); + expect(fs.existsSync(updater.backupDir)).toBe(true); + }); + + it('should backup version.json', async () => { + await updater.createBackup(); + const backupVersionJson = path.join(updater.backupDir, 'version.json'); + expect(fs.existsSync(backupVersionJson)).toBe(true); + }); + }); + + describe('rollback', () => { + it('should throw error if no backup', async () => { + await expect(updater.rollback()).rejects.toThrow('No backup available'); + }); + + it('should restore files from backup', async () => { + // Create backup + await updater.createBackup(); + + // Modify version.json + await fs.writeJson(path.join(tempDir, '.aios-core', 'version.json'), { + version: '2.0.0', + }); + + // Rollback + await updater.rollback(); + + // Verify restored + const restored = await fs.readJson(path.join(tempDir, '.aios-core', 'version.json')); + expect(restored.version).toBe('1.0.0'); + }); + }); + + describe('cleanupBackup', () => { + it('should remove backup directory', async () => { + await updater.createBackup(); + const backupDir = updater.backupDir; + expect(fs.existsSync(backupDir)).toBe(true); + + await updater.cleanupBackup(); + expect(fs.existsSync(backupDir)).toBe(false); + expect(updater.backupDir).toBeNull(); + }); + }); + + describe('updateVersionInfo', () => { + it('should update version.json with new version', async () => { + await updater.updateVersionInfo('2.0.0'); + + const versionJson = await fs.readJson(path.join(tempDir, '.aios-core', 'version.json')); + expect(versionJson.version).toBe('2.0.0'); + expect(versionJson.updatedAt).toBeDefined(); + }); + }); + + describe('update (dry-run)', () => { + it('should return preview without making changes', async () => { + const result = await updater.update({ dryRun: true }); + + // Should succeed with dryRun flag + expect(result.dryRun).toBe(true); + }); + 
}); +}); + +describe('formatCheckResult', () => { + it('should format up-to-date result', () => { + const result = { + status: UpdateStatus.UP_TO_DATE, + installed: '1.0.0', + latest: '1.0.0', + hasUpdate: false, + }; + + const output = formatCheckResult(result, { colors: false }); + expect(output).toContain('1.0.0'); + expect(output).toContain('up to date'); + }); + + it('should format update available result', () => { + const result = { + status: UpdateStatus.UPDATE_AVAILABLE, + installed: '1.0.0', + latest: '1.1.0', + hasUpdate: true, + }; + + const output = formatCheckResult(result, { colors: false }); + expect(output).toContain('1.0.0'); + expect(output).toContain('1.1.0'); + expect(output).toContain('Update available'); + }); + + it('should format check failed result', () => { + const result = { + status: UpdateStatus.CHECK_FAILED, + error: 'Network error', + }; + + const output = formatCheckResult(result, { colors: false }); + expect(output).toContain('Check failed'); + expect(output).toContain('Network error'); + }); +}); + +describe('formatUpdateResult', () => { + it('should format successful update', () => { + const result = { + success: true, + previousVersion: '1.0.0', + newVersion: '1.1.0', + filesUpdated: 5, + filesPreserved: 2, + }; + + const output = formatUpdateResult(result, { colors: false }); + expect(output).toContain('Updated'); + expect(output).toContain('1.1.0'); + expect(output).toContain('5 updated'); + expect(output).toContain('2 customizations'); + }); + + it('should format failed update', () => { + const result = { + success: false, + error: 'Connection timeout', + }; + + const output = formatUpdateResult(result, { colors: false }); + expect(output).toContain('failed'); + expect(output).toContain('Connection timeout'); + }); +}); + +``` + +================================================== +📄 tests/hooks/unified/hook-interface.test.js +================================================== +```js +/** + * Unified Hook Interface Tests + * 
Story GEMINI-INT.8 + */ + +const { + UnifiedHook, + EVENT_MAPPING, + createContext, + formatResult, +} = require('../../../.aios-core/hooks/unified/hook-interface'); + +describe('Unified Hook Interface', () => { + describe('EVENT_MAPPING', () => { + it('should map sessionStart for gemini', () => { + expect(EVENT_MAPPING.sessionStart.gemini).toBe('SessionStart'); + }); + + it('should have null claude mapping for sessionStart', () => { + expect(EVENT_MAPPING.sessionStart.claude).toBeNull(); + }); + + it('should map beforeAgent for both CLIs', () => { + expect(EVENT_MAPPING.beforeAgent.gemini).toBe('BeforeAgent'); + expect(EVENT_MAPPING.beforeAgent.claude).toBe('PreToolUse'); + }); + + it('should map beforeTool for both CLIs', () => { + expect(EVENT_MAPPING.beforeTool.gemini).toBe('BeforeTool'); + expect(EVENT_MAPPING.beforeTool.claude).toBe('PreToolUse'); + }); + + it('should map afterTool for both CLIs', () => { + expect(EVENT_MAPPING.afterTool.gemini).toBe('AfterTool'); + expect(EVENT_MAPPING.afterTool.claude).toBe('PostToolUse'); + }); + + it('should map sessionEnd for both CLIs', () => { + expect(EVENT_MAPPING.sessionEnd.gemini).toBe('SessionEnd'); + expect(EVENT_MAPPING.sessionEnd.claude).toBe('Stop'); + }); + + it('should have all required lifecycle events', () => { + const events = Object.keys(EVENT_MAPPING); + expect(events).toContain('sessionStart'); + expect(events).toContain('beforeAgent'); + expect(events).toContain('beforeTool'); + expect(events).toContain('afterTool'); + expect(events).toContain('sessionEnd'); + }); + }); + + describe('UnifiedHook', () => { + it('should create hook with required fields', () => { + const hook = new UnifiedHook({ + name: 'test-hook', + event: 'beforeTool', + }); + + expect(hook.name).toBe('test-hook'); + expect(hook.event).toBe('beforeTool'); + }); + + it('should have default matcher of *', () => { + const hook = new UnifiedHook({ + name: 'test-hook', + event: 'beforeTool', + }); + + expect(hook.matcher).toBe('*'); + }); 
+ + it('should accept custom matcher', () => { + const hook = new UnifiedHook({ + name: 'test-hook', + event: 'beforeTool', + matcher: 'write_file|shell', + }); + + expect(hook.matcher).toBe('write_file|shell'); + }); + + it('should have default timeout of 5000ms', () => { + const hook = new UnifiedHook({ + name: 'test-hook', + event: 'beforeTool', + }); + + expect(hook.timeout).toBe(5000); + }); + + it('should accept custom timeout', () => { + const hook = new UnifiedHook({ + name: 'test-hook', + event: 'beforeTool', + timeout: 10000, + }); + + expect(hook.timeout).toBe(10000); + }); + + it('should throw on execute (base class)', async () => { + const hook = new UnifiedHook({ + name: 'test-hook', + event: 'beforeTool', + }); + + await expect(hook.execute({})).rejects.toThrow('execute() must be implemented by subclass'); + }); + + it('should return null for gemini config when runners not implemented', () => { + const hook = new UnifiedHook({ + name: 'test-hook', + event: 'beforeTool', + matcher: 'shell', + }); + + const config = hook.toGeminiConfig(); + + // Runners not yet implemented (Story MIS-2) - expects null until restored + expect(config).toBeNull(); + }); + + it('should return null for claude config when runners not implemented', () => { + const hook = new UnifiedHook({ + name: 'test-hook', + event: 'beforeTool', + matcher: 'Bash', + }); + + const config = hook.toClaudeConfig(); + + // Runners not yet implemented (Story MIS-2) - expects null until restored + expect(config).toBeNull(); + }); + + it('should return null for unsupported claude event', () => { + const hook = new UnifiedHook({ + name: 'test-hook', + event: 'sessionStart', // No claude equivalent + }); + + const config = hook.toClaudeConfig(); + + expect(config).toBeNull(); + }); + }); + + describe('createContext', () => { + it('should create gemini context', () => { + const context = createContext('gemini'); + + expect(context).toHaveProperty('projectDir'); + 
expect(context).toHaveProperty('sessionId'); + expect(context.provider).toBe('gemini'); + }); + + it('should create claude context', () => { + const context = createContext('claude'); + + expect(context).toHaveProperty('projectDir'); + expect(context).toHaveProperty('sessionId'); + expect(context.provider).toBe('claude'); + }); + + it('should use environment variables for gemini', () => { + const originalDir = process.env.GEMINI_PROJECT_DIR; + process.env.GEMINI_PROJECT_DIR = '/test/gemini/dir'; + + const context = createContext('gemini'); + + expect(context.projectDir).toBe('/test/gemini/dir'); + + // Restore + if (originalDir) { + process.env.GEMINI_PROJECT_DIR = originalDir; + } else { + delete process.env.GEMINI_PROJECT_DIR; + } + }); + }); + + describe('formatResult', () => { + it('should format result for gemini', () => { + const result = { + status: 'allow', + message: 'OK', + contextInjection: { key: 'value' }, + }; + + const formatted = formatResult(result, 'gemini'); + const parsed = JSON.parse(formatted); + + expect(parsed.status).toBe('success'); + expect(parsed.message).toBe('OK'); + expect(parsed.contextInjection).toEqual({ key: 'value' }); + }); + + it('should format block result for gemini', () => { + const result = { + status: 'block', + message: 'Blocked', + }; + + const formatted = formatResult(result, 'gemini'); + const parsed = JSON.parse(formatted); + + expect(parsed.status).toBe('block'); + }); + + it('should format result for claude', () => { + const result = { + status: 'allow', + message: 'OK', + contextInjection: { key: 'value' }, + }; + + const formatted = formatResult(result, 'claude'); + const parsed = JSON.parse(formatted); + + expect(parsed.continue).toBe(true); + expect(parsed.message).toBe('OK'); + expect(parsed.context).toEqual({ key: 'value' }); + }); + + it('should format block result for claude', () => { + const result = { + status: 'block', + message: 'Blocked', + }; + + const formatted = formatResult(result, 'claude'); + 
const parsed = JSON.parse(formatted); + + expect(parsed.continue).toBe(false); + }); + }); +}); + +``` + +================================================== +📄 tests/hooks/unified/runners/precompact-runner.test.js +================================================== +```js +/** + * PreCompact Hook Runner Tests + * Story MIS-3: Session Digest (PreCompact Hook) + */ + +const { onPreCompact, getHookConfig } = require('../../../../.aios-core/hooks/unified/runners/precompact-runner'); +const proDetector = require('../../../../bin/utils/pro-detector'); + +// Mock pro-detector +jest.mock('../../../../bin/utils/pro-detector'); + +describe('PreCompact Hook Runner', () => { + beforeEach(() => { + jest.clearAllMocks(); + jest.spyOn(console, 'log').mockImplementation(() => {}); + jest.spyOn(console, 'error').mockImplementation(() => {}); + }); + + afterEach(() => { + console.log.mockRestore(); + console.error.mockRestore(); + }); + + describe('onPreCompact', () => { + it('should return immediately without blocking', async () => { + proDetector.isProAvailable.mockReturnValue(false); + + const context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + conversation: { messages: [] }, + }; + + const startTime = Date.now(); + await onPreCompact(context); + const duration = Date.now() - startTime; + + // Should complete in < 10ms (fire-and-forget) + expect(duration).toBeLessThan(10); + }); + + it('should gracefully no-op when aios-pro not available', async () => { + proDetector.isProAvailable.mockReturnValue(false); + + const context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + }; + + await onPreCompact(context); + + expect(console.log).toHaveBeenCalledWith( + '[PreCompact] aios-pro not available, skipping session digest', + ); + }); + + it('should attempt to load pro module when available', async () => { + proDetector.isProAvailable.mockReturnValue(true); + proDetector.loadProModule.mockReturnValue(null); // Module not found + + const 
context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + }; + + await onPreCompact(context); + + // Wait for setImmediate to execute + await new Promise(resolve => setImmediate(resolve)); + await new Promise(resolve => setTimeout(resolve, 10)); + + // Should attempt to load the digest extractor + expect(proDetector.loadProModule).toHaveBeenCalledWith( + 'memory/session-digest/extractor.js', + ); + }); + + it('should handle missing extractor function gracefully', async () => { + proDetector.isProAvailable.mockReturnValue(true); + proDetector.loadProModule.mockReturnValue({}); // Empty module + + const context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + }; + + await onPreCompact(context); + + // Wait for setImmediate to execute + await new Promise(resolve => setImmediate(resolve)); + await new Promise(resolve => setTimeout(resolve, 10)); + + // Should not throw, but log error asynchronously + expect(proDetector.loadProModule).toHaveBeenCalled(); + }); + + it('should call extractSessionDigest when pro available', async () => { + const mockExtractSessionDigest = jest.fn().mockResolvedValue('/path/to/digest.yaml'); + + proDetector.isProAvailable.mockReturnValue(true); + proDetector.loadProModule.mockReturnValue({ + extractSessionDigest: mockExtractSessionDigest, + }); + + const context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + conversation: { messages: [] }, + }; + + await onPreCompact(context); + + // Give setImmediate time to execute + await new Promise(resolve => setImmediate(resolve)); + await new Promise(resolve => setTimeout(resolve, 10)); + + expect(mockExtractSessionDigest).toHaveBeenCalledWith(context); + }); + + it('should handle extractor errors silently', async () => { + const mockExtractSessionDigest = jest.fn().mockRejectedValue(new Error('Test error')); + + proDetector.isProAvailable.mockReturnValue(true); + proDetector.loadProModule.mockReturnValue({ + extractSessionDigest: 
mockExtractSessionDigest, + }); + + const context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + }; + + // Should not throw + await expect(onPreCompact(context)).resolves.toBeUndefined(); + + // Give setImmediate time to execute + await new Promise(resolve => setImmediate(resolve)); + await new Promise(resolve => setTimeout(resolve, 10)); + + expect(mockExtractSessionDigest).toHaveBeenCalled(); + }); + + it('should handle outer errors gracefully (never throw)', async () => { + proDetector.isProAvailable.mockImplementation(() => { + throw new Error('Detection failed'); + }); + + const context = { + sessionId: 'test-session-123', + projectDir: '/test/project', + }; + + // Should not throw + await expect(onPreCompact(context)).resolves.toBeUndefined(); + + expect(console.error).toHaveBeenCalledWith( + '[PreCompact] Hook runner error:', + 'Detection failed', + ); + }); + }); + + describe('getHookConfig', () => { + it('should return valid hook configuration', () => { + const config = getHookConfig(); + + expect(config).toMatchObject({ + name: 'precompact-session-digest', + event: 'PreCompact', + handler: expect.any(Function), + timeout: 5000, + description: expect.stringContaining('MIS-3'), + }); + }); + + it('should have timeout less than 5 seconds', () => { + const config = getHookConfig(); + + expect(config.timeout).toBeLessThanOrEqual(5000); + }); + + it('should have handler that matches onPreCompact', () => { + const config = getHookConfig(); + + expect(config.handler).toBe(onPreCompact); + }); + }); +}); + +``` + +================================================== +📄 tests/template-engine/template-engine.test.js +================================================== +```js +/** + * Template Engine v2.0 Test Suite + * Tests for Story 3.6 - Template Engine Core Refactor + * + * Test IDs: + * - TE-01: Load template with frontmatter + * - TE-02: Elicit variables interactively + * - TE-03: Render template with context + * - TE-04: Validate output 
with JSON Schema + * - TE-05: Generate complete document + * - TE-06: Handle missing variables + * - TE-07: List supported templates + */ + +'use strict'; + +const path = require('path'); +const fs = require('fs').promises; + +// Mock inquirer for non-interactive tests +jest.mock('inquirer', () => ({ + prompt: jest.fn().mockResolvedValue({}), +})); + +const { + TemplateEngine, + TemplateLoader, + VariableElicitation, + TemplateRenderer, + TemplateValidator, + SUPPORTED_TYPES, +} = require('../../.aios-core/product/templates/engine'); + +const inquirer = require('inquirer'); + +describe('Template Engine v2.0', () => { + const baseDir = path.join(__dirname, '..', '..'); + const templatesDir = path.join(baseDir, '.aios-core', 'product', 'templates'); + const schemasDir = path.join(templatesDir, 'engine', 'schemas'); + + let engine; + + beforeEach(() => { + engine = new TemplateEngine({ + baseDir, + templatesDir, + schemasDir, + interactive: false, + }); + jest.clearAllMocks(); + }); + + describe('TE-01: Template Loader', () => { + let loader; + + beforeEach(() => { + loader = new TemplateLoader({ templatesDir }); + }); + + test('should load template with YAML frontmatter', async () => { + const template = await loader.load('adr'); + + expect(template).toBeDefined(); + expect(template.metadata).toBeDefined(); + expect(template.metadata.template_id).toBe('adr'); + expect(template.metadata.template_name).toBe('Architecture Decision Record'); + expect(template.body).toBeTruthy(); + }); + + test('should parse variables from frontmatter', async () => { + const template = await loader.load('prd'); + + expect(template.variables).toBeDefined(); + expect(Array.isArray(template.variables)).toBe(true); + expect(template.variables.length).toBeGreaterThan(0); + + const titleVar = template.variables.find(v => v.name === 'title'); + expect(titleVar).toBeDefined(); + expect(titleVar.required).toBe(true); + }); + + test('should throw error for missing template', async () => { + await 
expect(loader.load('nonexistent')).rejects.toThrow('Template not found'); + }); + + test('should cache loaded templates', async () => { + const template1 = await loader.load('adr'); + const template2 = await loader.load('adr'); + + expect(template1).toBe(template2); // Same reference + }); + + test('should list available templates', async () => { + const templates = await loader.listTemplates(); + + expect(Array.isArray(templates)).toBe(true); + expect(templates).toContain('prd'); + expect(templates).toContain('adr'); + }); + }); + + describe('TE-02: Variable Elicitation', () => { + let elicitation; + + beforeEach(() => { + elicitation = new VariableElicitation({ interactive: false }); + }); + + test('should merge provided context with defaults', async () => { + const variables = [ + { name: 'title', type: 'string', required: true }, + { name: 'status', type: 'choice', default: 'Draft' }, + ]; + + const values = await elicitation.elicit(variables, { title: 'Test Title' }); + + expect(values.title).toBe('Test Title'); + expect(values.status).toBe('Draft'); + }); + + test('should resolve auto values', async () => { + const variables = [ + { name: 'now', type: 'string', auto: 'current_date' }, + ]; + + const values = await elicitation.elicit(variables, {}); + + expect(values.now).toBeDefined(); + expect(values.now).toMatch(/^\d{4}-\d{2}-\d{2}$/); + }); + + test('should validate required variables', () => { + const variables = [ + { name: 'title', required: true }, + { name: 'description', required: false }, + ]; + + const result = elicitation.validate(variables, { description: 'test' }); + + expect(result.isValid).toBe(false); + expect(result.errors).toContain('Missing required variable: title'); + }); + + test('should handle interactive mode', async () => { + const interactiveElicitation = new VariableElicitation({ interactive: true }); + + inquirer.prompt.mockResolvedValue({ title: 'Interactive Title' }); + + const variables = [ + { name: 'title', type: 'string', 
required: true, prompt: 'Enter title:' }, + ]; + + const values = await interactiveElicitation.elicit(variables, {}); + + expect(inquirer.prompt).toHaveBeenCalled(); + expect(values.title).toBe('Interactive Title'); + }); + }); + + describe('TE-03: Template Renderer', () => { + let renderer; + + beforeEach(() => { + renderer = new TemplateRenderer(); + }); + + test('should render simple template', () => { + const template = { body: '# {{title}}\n\nBy {{author}}' }; + const context = { title: 'Test Document', author: 'Tester' }; + + const result = renderer.render(template, context); + + expect(result).toContain('# Test Document'); + expect(result).toContain('By Tester'); + }); + + test('should support padNumber helper', () => { + const template = { body: 'ADR {{padNumber number 3}}' }; + const context = { number: 5 }; + + const result = renderer.render(template, context); + + expect(result).toBe('ADR 005'); + }); + + test('should support formatDate helper', () => { + const template = { body: 'Date: {{formatDate now "YYYY-MM-DD"}}' }; + // Use current date to avoid timezone issues between test and renderer + const testDate = new Date(); + const context = { now: testDate }; + + const result = renderer.render(template, context); + + // Verify format matches YYYY-MM-DD pattern with correct values from local date + const expectedYear = testDate.getFullYear(); + const expectedMonth = String(testDate.getMonth() + 1).padStart(2, '0'); + const expectedDay = String(testDate.getDate()).padStart(2, '0'); + expect(result).toBe(`Date: ${expectedYear}-${expectedMonth}-${expectedDay}`); + }); + + test('should support conditional blocks', () => { + const template = { + body: '{{#if showDetails}}Details: {{details}}{{else}}No details{{/if}}', + }; + + expect(renderer.render(template, { showDetails: true, details: 'Some info' })) + .toBe('Details: Some info'); + expect(renderer.render(template, { showDetails: false })) + .toBe('No details'); + }); + + test('should support each loops', 
() => { + const template = { + body: '{{#each items}}- {{this}}\n{{/each}}', + }; + const context = { items: ['Item 1', 'Item 2', 'Item 3'] }; + + const result = renderer.render(template, context); + + expect(result).toContain('- Item 1'); + expect(result).toContain('- Item 2'); + expect(result).toContain('- Item 3'); + }); + + test('should validate template syntax', () => { + const validTemplate = '{{#if test}}content{{/if}}'; + // Handlebars only catches incomplete expressions at execution time + const invalidTemplate = '{{'; + + expect(renderer.validateSyntax(validTemplate).isValid).toBe(true); + expect(renderer.validateSyntax(invalidTemplate).isValid).toBe(false); + }); + }); + + describe('TE-04: JSON Schema Validation', () => { + let validator; + + beforeEach(() => { + validator = new TemplateValidator({ schemasDir }); + }); + + test('should load schema for template type', async () => { + const schema = await validator.loadSchema('adr'); + + expect(schema).toBeDefined(); + expect(schema.title).toBe('ADR Template Variables'); + expect(schema.required).toContain('title'); + }); + + test('should validate valid data', async () => { + const data = { + number: 1, + title: 'Use TypeScript for Backend', + status: 'Proposed', + deciders: 'Team Lead, Architect', + context: 'We need to choose a language for the backend that provides type safety.', + decision: 'We will use TypeScript with Node.js for all backend services.', + positiveConsequences: ['Better type safety', 'Improved developer experience'], + }; + + const result = await validator.validate(data, 'adr'); + + expect(result.isValid).toBe(true); + expect(result.errors).toHaveLength(0); + }); + + test('should reject invalid data', async () => { + const data = { + number: 1, + title: 'Test', // Too short + status: 'Invalid', // Not in enum + deciders: 'Team', + context: 'Short', // Too short + decision: 'Short', // Too short + positiveConsequences: [], // Empty array + }; + + const result = await 
validator.validate(data, 'adr'); + + expect(result.isValid).toBe(false); + expect(result.errors.length).toBeGreaterThan(0); + }); + + test('should validate structure of rendered content', async () => { + const template = { + metadata: { + required_sections: ['Context', 'Decision'], + }, + }; + + const validContent = '# ADR 001\n\n## Context\nSome context\n\n## Decision\nSome decision'; + const invalidContent = '# ADR 001\n\n## Context\nSome context'; + + expect(validator.validateStructure(validContent, template).isValid).toBe(true); + expect(validator.validateStructure(invalidContent, template).isValid).toBe(false); + }); + }); + + describe('TE-05: Complete Document Generation', () => { + test('should generate complete ADR', async () => { + const context = { + number: 1, + title: 'Use Handlebars for Template Engine', + status: 'Proposed', + deciders: 'Dex, Pax', + context: 'We need a templating solution for generating documentation that supports variables, conditionals, and loops.', + decision: 'We will use Handlebars.js as our template engine because it provides a simple syntax, is well-maintained, and supports custom helpers.', + positiveConsequences: [ + 'Simple and familiar syntax', + 'Good documentation', + 'Active community', + ], + negativeConsequences: [ + 'Limited logic in templates', + ], + }; + + const result = await engine.generate('adr', context); + + expect(result.content).toContain('# ADR 001: Use Handlebars for Template Engine'); + expect(result.content).toContain('**Status:** Proposed'); + expect(result.content).toContain('**Deciders:** Dex, Pax'); + expect(result.content).toContain('## Context'); + expect(result.content).toContain('## Decision'); + expect(result.content).toContain('## Consequences'); + expect(result.content).toContain('✅ Simple and familiar syntax'); + expect(result.content).toContain('⚠️ Limited logic in templates'); + }); + + test('should generate complete PRD', async () => { + const context = { + title: 'Template Engine v2.0', 
+ version: '2.0', + status: 'Draft', + owner: 'Pax', + problem_statement: 'The current template system is inconsistent and lacks validation.', + goals: [ + 'Unified template format', + 'Schema validation', + 'Interactive variable elicitation', + ], + }; + + const result = await engine.generate('prd', context); + + expect(result.content).toContain('# Template Engine v2.0'); + expect(result.content).toContain('**Version:** 2.0'); + expect(result.content).toContain('## Problem Statement'); + expect(result.content).toContain('## Goals'); + }); + }); + + describe('TE-06: Error Handling', () => { + test('should throw for unsupported template type', async () => { + await expect(engine.generate('unsupported', {})) + .rejects.toThrow('Unsupported template type'); + }); + + test('should throw for missing required variables in non-interactive mode', async () => { + await expect(engine.generate('adr', { title: 'Test' })) + .rejects.toThrow('has no default and interactive mode is disabled'); + }); + + test('should handle template syntax errors gracefully', () => { + const renderer = new TemplateRenderer(); + const badTemplate = { body: '{{#if test}unterminated' }; + + expect(() => renderer.render(badTemplate, {})).toThrow(); + }); + }); + + describe('TE-07: Template Listing', () => { + test('should return supported template types', () => { + expect(engine.supportedTypes).toEqual(SUPPORTED_TYPES); + expect(engine.supportedTypes).toContain('prd'); + expect(engine.supportedTypes).toContain('adr'); + expect(engine.supportedTypes).toContain('pmdr'); + expect(engine.supportedTypes).toContain('dbdr'); + expect(engine.supportedTypes).toContain('story'); + expect(engine.supportedTypes).toContain('epic'); + expect(engine.supportedTypes).toContain('task'); + }); + + test('should list templates with info', async () => { + const templates = await engine.listTemplates(); + + // 8 supported types: prd, prd-v2, adr, pmdr, dbdr, story, epic, task + expect(templates.length).toBe(8); + + const 
adrTemplate = templates.find(t => t.type === 'adr'); + expect(adrTemplate).toBeDefined(); + expect(adrTemplate.name).toBe('Architecture Decision Record'); + }); + + test('should get template info', async () => { + const info = await engine.getTemplateInfo('prd'); + + expect(info.type).toBe('prd'); + expect(info.name).toBe('Product Requirements Document'); + // Version can be string or number in YAML + expect(String(info.version)).toBe('2'); + expect(Array.isArray(info.variables)).toBe(true); + }); + }); + + describe('Custom Helpers', () => { + test('should support custom registered helpers', () => { + engine.registerHelper('emphasize', (text) => `**${text}**`); + + const renderer = engine.renderer; + const result = renderer.render({ body: '{{emphasize message}}' }, { message: 'Important' }); + + expect(result).toBe('**Important**'); + }); + + test('should support add/subtract helpers', () => { + const renderer = new TemplateRenderer(); + + expect(renderer.render({ body: '{{add 5 3}}' }, {})).toBe('8'); + expect(renderer.render({ body: '{{subtract 10 4}}' }, {})).toBe('6'); + }); + + test('should support string helpers', () => { + const renderer = new TemplateRenderer(); + + expect(renderer.render({ body: '{{uppercase "hello"}}' }, {})).toBe('HELLO'); + expect(renderer.render({ body: '{{lowercase "HELLO"}}' }, {})).toBe('hello'); + expect(renderer.render({ body: '{{capitalize "hello world"}}' }, {})).toBe('Hello world'); + expect(renderer.render({ body: '{{slug "Hello World!"}}' }, {})).toBe('hello-world'); + }); + + test('should support array helpers', () => { + const renderer = new TemplateRenderer(); + const context = { items: ['a', 'b', 'c'] }; + + expect(renderer.render({ body: '{{join items "-"}}' }, context)).toBe('a-b-c'); + expect(renderer.render({ body: '{{length items}}' }, context)).toBe('3'); + expect(renderer.render({ body: '{{first items}}' }, context)).toBe('a'); + expect(renderer.render({ body: '{{last items}}' }, context)).toBe('c'); + }); + + 
test('should support times helper with @index data variable', () => { + const renderer = new TemplateRenderer(); + + // Test @index access via data frame + const result = renderer.render({ body: '{{#times 3}}{{@index}}{{/times}}' }, {}); + expect(result).toBe('012'); + + // Test @first and @last data variables + const resultFlags = renderer.render({ body: '{{#times 3}}{{#if @first}}F{{/if}}{{@index}}{{#if @last}}L{{/if}}{{/times}}' }, {}); + expect(resultFlags).toBe('F012L'); + }); + }); +}); + +``` + +================================================== +📄 tests/template-engine/prd-v2.test.js +================================================== +```js +/** + * PRD Template v2.0 Test Suite + * Tests for Story 3.7 - Template PRD v2.0 + * + * Test IDs: + * - PRD-01: Generate PRD with required fields only + * - PRD-02: Generate PRD with UI/UX section + * - PRD-03: Generate PRD with Brownfield section + * - PRD-04: Validation fails on missing required field + * - PRD-05: Variable elicitation prompts correctly + * - PRD-06: Validation error messages are clear and actionable + * - PRD-07: Conditional validation (UI/UX fields when includeUIUX=true) + * - PRD-08: Conditional validation (Brownfield fields when isBrownfield=true) + */ + +'use strict'; + +const path = require('path'); + +// Mock inquirer for non-interactive tests +jest.mock('inquirer', () => ({ + prompt: jest.fn().mockResolvedValue({}), +})); + +const { + TemplateEngine, + TemplateLoader, + TemplateValidator, +} = require('../../.aios-core/product/templates/engine'); + +describe('PRD Template v2.0 (Story 3.7)', () => { + const baseDir = path.join(__dirname, '..', '..'); + const templatesDir = path.join(baseDir, '.aios-core', 'product', 'templates'); + const schemasDir = path.join(templatesDir, 'engine', 'schemas'); + + let engine; + let validator; + + // Complete valid context for PRD generation + const validContext = { + projectName: 'Test Project', + productName: 'Test Product', + version: '1.0.0', + author: 
'Test Author', + problemStatement: 'This is a detailed problem statement that describes what problem we are solving for users. It needs to be at least 50 characters long.', + goals: [ + 'Goal 1: Improve user experience', + 'Goal 2: Increase efficiency', + ], + userStories: [ + { + title: 'User Registration', + actor: 'New User', + action: 'register for an account', + benefit: 'I can access the system features', + criteria: ['Email must be valid', 'Password must be at least 8 characters'], + }, + ], + functionalRequirements: [ + { + title: 'User Authentication', + description: 'System must support email/password authentication', + priority: 'P0', + }, + ], + nonFunctionalRequirements: [ + { + category: 'Performance', + requirement: 'Page load time must be under 2 seconds', + }, + ], + successMetrics: [ + { + metric: 'User Adoption', + target: '1000 users in 3 months', + method: 'Analytics tracking', + }, + ], + milestones: [ + { + name: 'MVP Release', + date: '2025-03-01', + }, + ], + risks: [ + { + risk: 'Technical complexity', + impact: 'High', + probability: 'Medium', + mitigation: 'Early prototyping and POC', + }, + ], + includeUIUX: false, + isBrownfield: false, + }; + + beforeEach(() => { + engine = new TemplateEngine({ + baseDir, + templatesDir, + schemasDir, + interactive: false, + }); + validator = new TemplateValidator({ schemasDir }); + jest.clearAllMocks(); + }); + + describe('PRD-01: Generate PRD with required fields only', () => { + test('should generate complete PRD with all required sections', async () => { + const result = await engine.generate('prd-v2', validContext); + + expect(result.content).toBeDefined(); + expect(result.content).toContain('# Product Requirements Document - Test Project'); + expect(result.content).toContain('**Product:** Test Product'); + expect(result.content).toContain('**Version:** 1.0.0'); + expect(result.content).toContain('**Author:** Test Author'); + expect(result.content).toContain('## 1. 
Problem Statement');
+      expect(result.content).toContain('## 2. Goals & Objectives');
+      expect(result.content).toContain('## 3. User Stories');
+      expect(result.content).toContain('## 4. Functional Requirements');
+      expect(result.content).toContain('## 5. Non-Functional Requirements');
+      expect(result.content).toContain('Success Metrics');
+      expect(result.content).toContain('Timeline & Milestones');
+      expect(result.content).toContain('Risks & Mitigations');
+      expect(result.content).toContain('**Generated by:** AIOS Template Engine v2.0');
+    });
+
+    test('should not include UI/UX section when includeUIUX is false', async () => {
+      const result = await engine.generate('prd-v2', validContext);
+
+      expect(result.content).not.toContain('UI/UX Requirements');
+      expect(result.content).not.toContain('User Flows');
+      expect(result.content).not.toContain('Design Considerations');
+    });
+
+    test('should not include Brownfield section when isBrownfield is false', async () => {
+      const result = await engine.generate('prd-v2', validContext);
+
+      expect(result.content).not.toContain('Brownfield Considerations');
+      expect(result.content).not.toContain('Existing System Analysis');
+      expect(result.content).not.toContain('Integration Points');
+      expect(result.content).not.toContain('Migration Strategy');
+    });
+  });
+
+  describe('PRD-02: Generate PRD with UI/UX section', () => {
+    test('should include UI/UX section when includeUIUX is true', async () => {
+      const contextWithUIUX = {
+        ...validContext,
+        includeUIUX: true,
+        userFlows: [
+          'User opens app → Login screen → Dashboard',
+          'Dashboard → Create item → Review → Submit',
+        ],
+        designConsiderations: 'The design should follow Material Design guidelines with a focus on accessibility and mobile-first approach.',
+      };
+
+      const result = await engine.generate('prd-v2', contextWithUIUX);
+
+      expect(result.content).toContain('## 6. UI/UX Requirements');
+      expect(result.content).toContain('### User Flows');
+      expect(result.content).toContain('User opens app → Login screen → Dashboard');
+      expect(result.content).toContain('### Design Considerations');
+      expect(result.content).toContain('Material Design guidelines');
+    });
+
+    test('should render all user flows as list items', async () => {
+      const contextWithUIUX = {
+        ...validContext,
+        includeUIUX: true,
+        userFlows: ['Flow 1', 'Flow 2', 'Flow 3'],
+        designConsiderations: 'Design considerations text for the product interface.',
+      };
+
+      const result = await engine.generate('prd-v2', contextWithUIUX);
+
+      expect(result.content).toContain('- Flow 1');
+      expect(result.content).toContain('- Flow 2');
+      expect(result.content).toContain('- Flow 3');
+    });
+  });
+
+  describe('PRD-03: Generate PRD with Brownfield section', () => {
+    test('should include Brownfield section when isBrownfield is true', async () => {
+      const contextWithBrownfield = {
+        ...validContext,
+        isBrownfield: true,
+        existingSystemAnalysis: 'The existing system is a monolithic application built with Java and PostgreSQL. It handles 10,000 requests per day.',
+        integrationPoints: [
+          'User authentication API',
+          'Payment processing service',
+          'Legacy database',
+        ],
+        migrationStrategy: 'We will use a strangler fig pattern to gradually migrate functionality from the monolith to microservices.',
+      };
+
+      const result = await engine.generate('prd-v2', contextWithBrownfield);
+
+      expect(result.content).toContain('Brownfield Considerations');
+      expect(result.content).toContain('### Existing System Analysis');
+      expect(result.content).toContain('monolithic application built with Java');
+      expect(result.content).toContain('### Integration Points');
+      expect(result.content).toContain('- User authentication API');
+      expect(result.content).toContain('- Payment processing service');
+      expect(result.content).toContain('### Migration Strategy');
+      expect(result.content).toContain('strangler fig pattern');
+    });
+
+    test('should render all integration points as list items', async () => {
+      const contextWithBrownfield = {
+        ...validContext,
+        isBrownfield: true,
+        existingSystemAnalysis: 'Existing system analysis text that must be at least 50 characters long.',
+        integrationPoints: ['API 1', 'API 2', 'Database'],
+        migrationStrategy: 'Migration strategy text that must be at least 50 characters long for validation.',
+      };
+
+      const result = await engine.generate('prd-v2', contextWithBrownfield);
+
+      expect(result.content).toContain('- API 1');
+      expect(result.content).toContain('- API 2');
+      expect(result.content).toContain('- Database');
+    });
+  });
+
+  describe('PRD-04: Validation fails on missing required field', () => {
+    test('should fail validation when projectName is missing', async () => {
+      const incompleteContext = { ...validContext };
+      delete incompleteContext.projectName;
+
+      const result = await validator.validate(incompleteContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('projectName'))).toBe(true);
+    });
+
+    test('should fail validation when problemStatement is too short', async () => {
+      const invalidContext = {
+        ...validContext,
+        problemStatement: 'Too short',
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('problemStatement'))).toBe(true);
+    });
+
+    test('should fail validation when goals array is empty', async () => {
+      const invalidContext = {
+        ...validContext,
+        goals: [],
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('goals'))).toBe(true);
+    });
+
+    test('should fail validation when risk has invalid impact value', async () => {
+      const invalidContext = {
+        ...validContext,
+        risks: [{
+          risk: 'Some risk',
+          impact: 'Invalid',
+          probability: 'High',
+          mitigation: 'Some mitigation',
+        }],
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('impact') || e.includes('allowed'))).toBe(true);
+    });
+  });
+
+  describe('PRD-05: Variable elicitation prompts correctly', () => {
+    test('should load template with all variable definitions', async () => {
+      const loader = new TemplateLoader({ templatesDir });
+      const template = await loader.load('prd-v2');
+
+      expect(template.variables).toBeDefined();
+      expect(Array.isArray(template.variables)).toBe(true);
+
+      // Check required variables
+      const projectNameVar = template.variables.find(v => v.name === 'projectName');
+      expect(projectNameVar).toBeDefined();
+      expect(projectNameVar.required).toBe(true);
+      expect(projectNameVar.prompt).toBe('Nome do projeto:');
+
+      const problemStatementVar = template.variables.find(v => v.name === 'problemStatement');
+      expect(problemStatementVar).toBeDefined();
+      expect(problemStatementVar.type).toBe('text');
+    });
+
+    test('should have prompts for all required variables', async () => {
+      const loader = new TemplateLoader({ templatesDir });
+      const template = await loader.load('prd-v2');
+
+      const requiredVars = template.variables.filter(v => v.required === true);
+
+      requiredVars.forEach(v => {
+        expect(v.prompt).toBeDefined();
+        expect(v.prompt.length).toBeGreaterThan(0);
+      });
+    });
+
+    test('should have conditional variables for UI/UX and Brownfield', async () => {
+      const loader = new TemplateLoader({ templatesDir });
+      const template = await loader.load('prd-v2');
+
+      const userFlowsVar = template.variables.find(v => v.name === 'userFlows');
+      expect(userFlowsVar).toBeDefined();
+      expect(userFlowsVar.requiredIf).toBe('includeUIUX');
+
+      const existingSystemVar = template.variables.find(v => v.name === 'existingSystemAnalysis');
+      expect(existingSystemVar).toBeDefined();
+      expect(existingSystemVar.requiredIf).toBe('isBrownfield');
+    });
+  });
+
+  describe('PRD-06: Validation error messages are clear and actionable', () => {
+    test('should provide clear error message for missing required field', async () => {
+      const incompleteContext = { ...validContext };
+      delete incompleteContext.author;
+
+      const result = await validator.validate(incompleteContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      // Error should mention the missing field by name
+      expect(result.errors.some(e => e.includes('author') || e.includes('missing'))).toBe(true);
+    });
+
+    test('should provide clear error message for invalid enum value', async () => {
+      const invalidContext = {
+        ...validContext,
+        functionalRequirements: [{
+          title: 'Test',
+          description: 'Test description',
+          priority: 'P5', // Invalid - should be P0-P3
+        }],
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      // Error should indicate allowed values
+      expect(result.errors.some(e => e.includes('priority') || e.includes('allowed') || e.includes('P0'))).toBe(true);
+    });
+
+    test('should provide clear error message for minimum length violation', async () => {
+      const invalidContext = {
+        ...validContext,
+        problemStatement: 'Short', // Less than 50 characters
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('problemStatement'))).toBe(true);
+    });
+  });
+
+  describe('PRD-07: Conditional validation (UI/UX fields when includeUIUX=true)', () => {
+    test('should fail validation when includeUIUX=true but userFlows is missing', async () => {
+      const invalidContext = {
+        ...validContext,
+        includeUIUX: true,
+        designConsiderations: 'Design considerations text for the product interface.',
+        // userFlows is missing
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('userFlows'))).toBe(true);
+    });
+
+    test('should fail validation when includeUIUX=true but designConsiderations is missing', async () => {
+      const invalidContext = {
+        ...validContext,
+        includeUIUX: true,
+        userFlows: ['Flow 1', 'Flow 2'],
+        // designConsiderations is missing
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('designConsiderations'))).toBe(true);
+    });
+
+    test('should pass validation when includeUIUX=true and all UI/UX fields are provided', async () => {
+      const validUIUXContext = {
+        ...validContext,
+        includeUIUX: true,
+        userFlows: ['User opens app → Login screen → Dashboard'],
+        designConsiderations: 'Design considerations text for the product interface.',
+      };
+
+      const result = await validator.validate(validUIUXContext, 'prd-v2');
+
+      expect(result.isValid).toBe(true);
+    });
+
+    test('should pass validation when includeUIUX=false even without UI/UX fields', async () => {
+      const result = await validator.validate(validContext, 'prd-v2');
+
+      expect(result.isValid).toBe(true);
+    });
+  });
+
+  describe('PRD-08: Conditional validation (Brownfield fields when isBrownfield=true)', () => {
+    test('should fail validation when isBrownfield=true but existingSystemAnalysis is missing', async () => {
+      const invalidContext = {
+        ...validContext,
+        isBrownfield: true,
+        integrationPoints: ['API 1'],
+        migrationStrategy: 'Migration strategy text that must be at least 50 characters long for validation.',
+        // existingSystemAnalysis is missing
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('existingSystemAnalysis'))).toBe(true);
+    });
+
+    test('should fail validation when isBrownfield=true but integrationPoints is missing', async () => {
+      const invalidContext = {
+        ...validContext,
+        isBrownfield: true,
+        existingSystemAnalysis: 'Existing system analysis text that must be at least 50 characters long.',
+        migrationStrategy: 'Migration strategy text that must be at least 50 characters long for validation.',
+        // integrationPoints is missing
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('integrationPoints'))).toBe(true);
+    });
+
+    test('should fail validation when isBrownfield=true but migrationStrategy is missing', async () => {
+      const invalidContext = {
+        ...validContext,
+        isBrownfield: true,
+        existingSystemAnalysis: 'Existing system analysis text that must be at least 50 characters long.',
+        integrationPoints: ['API 1'],
+        // migrationStrategy is missing
+      };
+
+      const result = await validator.validate(invalidContext, 'prd-v2');
+
+      expect(result.isValid).toBe(false);
+      expect(result.errors.some(e => e.includes('migrationStrategy'))).toBe(true);
+    });
+
+    test('should pass validation when isBrownfield=true and all Brownfield fields are provided', async () => {
+      const validBrownfieldContext = {
+        ...validContext,
+        isBrownfield: true,
+        existingSystemAnalysis: 'Existing system analysis text that must be at least 50 characters long.',
+        integrationPoints: ['API 1', 'API 2'],
+        migrationStrategy: 'Migration strategy text that must be at least 50 characters long for validation.',
+      };
+
+      const result = await validator.validate(validBrownfieldContext, 'prd-v2');
+
+      expect(result.isValid).toBe(true);
+    });
+
+    test('should pass validation when isBrownfield=false even without Brownfield fields', async () => {
+      const result = await validator.validate(validContext, 'prd-v2');
+
+      expect(result.isValid).toBe(true);
+    });
+  });
+
+  describe('Combined scenarios', () => {
+    test('should generate PRD with both UI/UX and Brownfield sections', async () => {
+      const fullContext = {
+        ...validContext,
+        includeUIUX: true,
+        userFlows: ['Flow 1', 'Flow 2'],
+        designConsiderations: 'Design considerations text for the product interface.',
+        isBrownfield: true,
+        existingSystemAnalysis: 'Existing system analysis text that must be at least 50 characters long.',
+        integrationPoints: ['API 1', 'API 2'],
+        migrationStrategy: 'Migration strategy text that must be at least 50 characters long for validation.',
+      };
+
+      const result = await engine.generate('prd-v2', fullContext);
+
+      expect(result.content).toContain('UI/UX Requirements');
+      expect(result.content).toContain('Brownfield Considerations');
+    });
+
+    test('should validate PRD with both UI/UX and Brownfield enabled', async () => {
+      const fullContext = {
+        ...validContext,
+        includeUIUX: true,
+        userFlows: ['Flow 1', 'Flow 2'],
+        designConsiderations: 'Design considerations text for the product interface.',
+        isBrownfield: true,
+        existingSystemAnalysis: 'Existing system analysis text that must be at least 50 characters long.',
+        integrationPoints: ['API 1', 'API 2'],
+        migrationStrategy: 'Migration strategy text that must be at least 50 characters long for validation.',
+      };
+
+      const result = await validator.validate(fullContext, 'prd-v2');
+
+      expect(result.isValid).toBe(true);
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/benchmarks/pipeline-benchmark.js
+==================================================
+```js
+#!/usr/bin/env node
+
+/**
+ * Pipeline Benchmark Script
+ *
+ * Story ACT-11: Performance benchmarking for UnifiedActivationPipeline.
+ * Measures activation time for all 12 agents across multiple iterations.
+ * Reports p50/p95/p99 per loader and total pipeline.
+ *
+ * Usage:
+ *   node tests/benchmarks/pipeline-benchmark.js [--warm] [--cold] [--agents=dev,qa] [--iterations=10]
+ *
+ * Flags:
+ *   --warm           Warm-start benchmark (reuse Node process, default)
+ *   --cold           Cold-start benchmark (clears require cache between runs)
+ *   --agents=X       Comma-separated list of agents to test (default: all 12)
+ *   --iterations=N   Number of iterations per agent (default: 10)
+ *
+ * Output:
+ *   Console table with p50/p95/p99 per agent + per loader
+ *
+ * @module tests/benchmarks/pipeline-benchmark
+ */
+
+'use strict';
+
+const path = require('path');
+
+const PROJECT_ROOT = path.resolve(__dirname, '..', '..');
+
+// Pipeline module paths for cache clearing (cold-start)
+const PIPELINE_MODULES = [
+  '.aios-core/development/scripts/unified-activation-pipeline',
+  '.aios-core/development/scripts/greeting-builder',
+  '.aios-core/development/scripts/agent-config-loader',
+  '.aios-core/development/scripts/greeting-preference-manager',
+  '.aios-core/development/scripts/workflow-navigator',
+  '.aios-core/core/session/context-loader',
+  '.aios-core/core/session/context-detector',
+  '.aios-core/core/permissions',
+  '.aios-core/infrastructure/scripts/project-status-loader',
+  '.aios-core/infrastructure/scripts/git-config-detector',
+  '.aios-core/core/config/config-resolver',
+  '.aios-core/core/config/config-cache',
+];
+
+function parseArgs() {
+  const args = process.argv.slice(2);
+  const options = {
+    warm: true,
+    cold: false,
+    agents: null,
+    iterations: 10,
+  };
+
+  for (const arg of args) {
+    if (arg === '--cold') {
+      options.cold = true;
+      options.warm = false;
+    } else if (arg === '--warm') {
+      options.warm = true;
+      options.cold = false;
+    } else if (arg.startsWith('--agents=')) {
+      options.agents = arg.split('=')[1].split(',').map(s => s.trim());
+    } else if (arg.startsWith('--iterations=')) {
+      options.iterations = parseInt(arg.split('=')[1], 10) || 10;
+    }
+  }
+
+  return options;
+}
+
+function percentile(sorted, p) {
+  const index = Math.ceil((p / 100) * sorted.length) - 1;
+  return sorted[Math.max(0, index)];
+}
+
+function clearPipelineCache() {
+  for (const modulePath of PIPELINE_MODULES) {
+    try {
+      const fullPath = path.resolve(PROJECT_ROOT, modulePath);
+      const resolved = require.resolve(fullPath);
+      if (require.cache[resolved]) {
+        delete require.cache[resolved];
+      }
+    } catch {
+      // Module not found — skip silently
+    }
+  }
+}
+
+async function runBenchmark(options) {
+  const pipelinePath = path.resolve(
+    PROJECT_ROOT, '.aios-core/development/scripts/unified-activation-pipeline',
+  );
+  const { ALL_AGENT_IDS } = require(pipelinePath);
+  let { UnifiedActivationPipeline } = require(pipelinePath);
+
+  const agents = options.agents || [...ALL_AGENT_IDS];
+  const iterations = options.iterations;
+  const mode = options.cold ? 'cold' : 'warm';
+
+  console.log(`\nPipeline Benchmark — ${mode} start, ${iterations} iterations per agent`);
+  console.log(`Agents: ${agents.join(', ')}\n`);
+
+  const results = {};
+
+  for (const agentId of agents) {
+    results[agentId] = {
+      durations: [],
+      qualities: { full: 0, partial: 0, fallback: 0 },
+      loaderTimings: {},
+    };
+
+    for (let i = 0; i < iterations; i++) {
+      if (options.cold) {
+        clearPipelineCache();
+        // Re-require for true cold-start: fresh module bindings
+        const freshModule = require(pipelinePath);
+        UnifiedActivationPipeline = freshModule.UnifiedActivationPipeline;
+      }
+
+      const pipeline = new UnifiedActivationPipeline({ projectRoot: PROJECT_ROOT });
+
+      try {
+        const result = await pipeline.activate(agentId);
+        results[agentId].durations.push(result.duration);
+        results[agentId].qualities[result.quality]++;
+
+        // Collect per-loader timings
+        if (result.metrics && result.metrics.loaders) {
+          for (const [loaderName, data] of Object.entries(result.metrics.loaders)) {
+            if (!results[agentId].loaderTimings[loaderName]) {
+              results[agentId].loaderTimings[loaderName] = [];
+            }
+            results[agentId].loaderTimings[loaderName].push(data.duration);
+          }
+        }
+      } catch (error) {
+        console.error(`  Error activating ${agentId} (iteration ${i + 1}):`, error.message);
+        results[agentId].durations.push(-1);
+        results[agentId].qualities.fallback++;
+      }
+    }
+  }
+
+  // --- Report ---
+  console.log('='.repeat(90));
+  console.log('PIPELINE RESULTS (ms)');
+  console.log('='.repeat(90));
+  console.log(
+    'Agent'.padEnd(20),
+    'p50'.padStart(8),
+    'p95'.padStart(8),
+    'p99'.padStart(8),
+    'Quality'.padStart(20),
+  );
+  console.log('-'.repeat(90));
+
+  const allDurations = [];
+
+  for (const agentId of agents) {
+    const sorted = results[agentId].durations.filter(d => d >= 0).sort((a, b) => a - b);
+    if (sorted.length === 0) continue;
+
+    allDurations.push(...sorted);
+
+    const p50 = percentile(sorted, 50);
+    const p95 = percentile(sorted, 95);
+    const p99 = percentile(sorted, 99);
+    const q = results[agentId].qualities;
+    const qualityStr = `F:${q.full} P:${q.partial} FB:${q.fallback}`;
+
+    console.log(
+      agentId.padEnd(20),
+      String(p50).padStart(8),
+      String(p95).padStart(8),
+      String(p99).padStart(8),
+      qualityStr.padStart(20),
+    );
+  }
+
+  // Aggregate
+  const sortedAll = allDurations.sort((a, b) => a - b);
+  if (sortedAll.length > 0) {
+    console.log('-'.repeat(90));
+    console.log(
+      'AGGREGATE'.padEnd(20),
+      String(percentile(sortedAll, 50)).padStart(8),
+      String(percentile(sortedAll, 95)).padStart(8),
+      String(percentile(sortedAll, 99)).padStart(8),
+    );
+  }
+
+  // Loader breakdown
+  console.log('\n' + '='.repeat(90));
+  console.log('PER-LOADER TIMINGS (ms) — aggregated across all agents');
+  console.log('='.repeat(90));
+  console.log(
+    'Loader'.padEnd(20),
+    'p50'.padStart(8),
+    'p95'.padStart(8),
+    'p99'.padStart(8),
+    'max'.padStart(8),
+  );
+  console.log('-'.repeat(90));
+
+  const loaderAgg = {};
+  for (const agentId of agents) {
+    for (const [loaderName, timings] of Object.entries(results[agentId].loaderTimings)) {
+      if (!loaderAgg[loaderName]) {
+        loaderAgg[loaderName] = [];
+      }
+      loaderAgg[loaderName].push(...timings);
+    }
+  }
+
+  for (const [loaderName, timings] of Object.entries(loaderAgg)) {
+    const sorted = timings.sort((a, b) => a - b);
+    console.log(
+      loaderName.padEnd(20),
+      String(percentile(sorted, 50)).padStart(8),
+      String(percentile(sorted, 95)).padStart(8),
+      String(percentile(sorted, 99)).padStart(8),
+      String(sorted[sorted.length - 1]).padStart(8),
+    );
+  }
+
+  console.log('\n' + '='.repeat(90));
+
+  // Summary
+  const totalFallbacks = agents.reduce((acc, id) => acc + results[id].qualities.fallback, 0);
+  const totalRuns = agents.length * iterations;
+  const fallbackRate = ((totalFallbacks / totalRuns) * 100).toFixed(1);
+
+  console.log(`Total runs: ${totalRuns}`);
+  console.log(`Fallback rate: ${fallbackRate}%`);
+  console.log(`Mode: ${mode}`);
+  console.log('='.repeat(90));
+}
+
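+// Worked example of the nearest-rank percentile() defined above (illustrative
+// note, not part of the original script): for sorted = [100, 120, 130, 200],
+// p = 95 gives index = Math.ceil(0.95 * 4) - 1 = 3, so percentile() returns 200;
+// p = 50 gives index = Math.ceil(0.50 * 4) - 1 = 1, returning 120. With this
+// formula, small samples make high percentiles collapse onto the max value.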
+runBenchmark(parseArgs()).catch(err => {
+  console.error('Benchmark failed:', err);
+  process.exit(1);
+});
+
+```
+
+==================================================
+📄 tests/health-check/health-check.test.js
+==================================================
+```js
+/**
+ * Health Check System Tests
+ *
+ * @story HCS-2 - Health Check System Implementation
+ */
+
+const assert = require('assert');
+const path = require('path');
+
+// Import health check modules
+const {
+  HealthCheck,
+  HealthCheckEngine,
+  BaseCheck,
+  CheckSeverity,
+  CheckStatus,
+  CheckRegistry,
+  DEFAULT_CONFIG,
+} = require('../../.aios-core/core/health-check');
+
+// Set timeout for all tests in this file (Jest compatible)
+jest.setTimeout(30000);
+
+describe('Health Check System', () => {
+  describe('Module Loading', () => {
+    it('should export all required classes', () => {
+      assert.strictEqual(typeof HealthCheck, 'function', 'HealthCheck should be a class');
+      assert.strictEqual(
+        typeof HealthCheckEngine,
+        'function',
+        'HealthCheckEngine should be a class',
+      );
+      assert.strictEqual(typeof BaseCheck, 'function', 'BaseCheck should be a class');
+      assert.strictEqual(typeof CheckRegistry, 'function', 'CheckRegistry should be a class');
+    });
+
+    it('should export severity and status enums', () => {
+      assert.ok(CheckSeverity.CRITICAL, 'Should have CRITICAL severity');
+      assert.ok(CheckSeverity.HIGH, 'Should have HIGH severity');
+      assert.ok(CheckSeverity.MEDIUM, 'Should have MEDIUM severity');
+      assert.ok(CheckSeverity.LOW, 'Should have LOW severity');
+      assert.ok(CheckSeverity.INFO, 'Should have INFO severity');
+
+      assert.ok(CheckStatus.PASS, 'Should have PASS status');
+      assert.ok(CheckStatus.FAIL, 'Should have FAIL status');
+      assert.ok(CheckStatus.WARNING, 'Should have WARNING status');
+    });
+
+    it('should have default configuration', () => {
+      assert.ok(DEFAULT_CONFIG, 'Should have DEFAULT_CONFIG');
+      assert.strictEqual(DEFAULT_CONFIG.mode, 'quick', 'Default mode should be quick');
+      assert.strictEqual(DEFAULT_CONFIG.autoFix, true, 'Auto-fix should be enabled by default');
+    });
+  });
+
+  describe('HealthCheck Instance', () => {
+    let healthCheck;
+
+    beforeEach(() => {
+      healthCheck = new HealthCheck();
+    });
+
+    it('should create instance with default config', () => {
+      assert.ok(healthCheck, 'Should create instance');
+      assert.ok(healthCheck.engine, 'Should have engine');
+      assert.ok(healthCheck.registry, 'Should have registry');
+      assert.ok(healthCheck.healers, 'Should have healers');
+      assert.ok(healthCheck.reporters, 'Should have reporters');
+    });
+
+    it('should return all 5 domains', () => {
+      const domains = healthCheck.getDomains();
+      assert.strictEqual(domains.length, 5, 'Should have 5 domains');
+      assert.ok(domains.includes('project'), 'Should include project');
+      assert.ok(domains.includes('local'), 'Should include local');
+      assert.ok(domains.includes('repository'), 'Should include repository');
+      assert.ok(domains.includes('deployment'), 'Should include deployment');
+      assert.ok(domains.includes('services'), 'Should include services');
+    });
+
+    it('should have 34 total checks', () => {
+      const counts = healthCheck.getCheckCounts();
+      const total = Object.values(counts).reduce((a, b) => a + b, 0);
+      assert.strictEqual(total, 34, 'Should have 34 total checks');
+    });
+
+    it('should have correct check distribution', () => {
+      const counts = healthCheck.getCheckCounts();
+      assert.strictEqual(counts.project, 8, 'Project should have 8 checks');
+      assert.strictEqual(counts.local, 8, 'Local should have 8 checks');
+      assert.strictEqual(counts.repository, 8, 'Repository should have 8 checks');
+      assert.strictEqual(counts.deployment, 5, 'Deployment should have 5 checks');
+      assert.strictEqual(counts.services, 5, 'Services should have 5 checks');
+    });
+  });
+
+  describe('Health Check Execution', () => {
+    let healthCheck;
+
+    beforeEach(() => {
+      healthCheck = new HealthCheck({ mode: 'quick' });
+    });
+
+    it('should run quick mode successfully', async () => {
+      const results = await healthCheck.run({ mode: 'quick' });
+
+      assert.ok(results, 'Should return results');
+      assert.ok(results.timestamp, 'Should have timestamp');
+      assert.ok(results.overall, 'Should have overall summary');
+      assert.ok(results.domains, 'Should have domain scores');
+      assert.ok(results.report, 'Should have report');
+    });
+
+    it('should return valid score', async () => {
+      const results = await healthCheck.run({ mode: 'quick' });
+
+      assert.ok(results.overall.score >= 0, 'Score should be >= 0');
+      assert.ok(results.overall.score <= 100, 'Score should be <= 100');
+    });
+
+    it('should return valid status', async () => {
+      const results = await healthCheck.run({ mode: 'quick' });
+      const validStatuses = ['healthy', 'degraded', 'warning', 'critical'];
+
+      assert.ok(
+        validStatuses.includes(results.overall.status),
+        `Status should be one of: ${validStatuses.join(', ')}`,
+      );
+    });
+
+    it('should run domain-specific check', async () => {
+      const results = await healthCheck.run({ domain: 'project' });
+
+      assert.ok(results.domains.project, 'Should have project domain results');
+    });
+
+    it('should complete within timeout', async () => {
+      const startTime = Date.now();
+      await healthCheck.run({ mode: 'quick' });
+      const duration = Date.now() - startTime;
+
+      assert.ok(duration < 15000, `Quick mode should complete in <15s (took ${duration}ms)`);
+    });
+  });
+
+  describe('Reporters', () => {
+    let healthCheck;
+    let results;
+
+    beforeAll(async () => {
+      healthCheck = new HealthCheck();
+      results = await healthCheck.run({ mode: 'quick', domain: 'project' });
+    });
+
+    it('should generate console report', () => {
+      assert.ok(results.report, 'Should have report');
+      assert.ok(typeof results.report === 'string', 'Report should be string');
+      assert.ok(results.report.includes('Health'), 'Report should mention Health');
+    });
+  });
+
+  describe('Check Registry', () => {
+    it('should register and retrieve checks', () => {
+      const registry = new CheckRegistry();
+
+      // Get stats
+      const stats = registry.getStats();
+      assert.ok(stats.total > 0, 'Should have registered checks');
+    });
+
+    it('should group checks by domain', () => {
+      const registry = new CheckRegistry();
+
+      const projectChecks = registry.getChecksByDomain('project');
+      assert.ok(Array.isArray(projectChecks), 'Should return array');
+    });
+
+    it('should group checks by severity', () => {
+      const registry = new CheckRegistry();
+
+      const criticalChecks = registry.getChecksBySeverity('CRITICAL');
+      assert.ok(Array.isArray(criticalChecks), 'Should return array');
+    });
+  });
+
+  describe('Score Calculation', () => {
+    it('should calculate score correctly for all pass', async () => {
+      // When all checks pass, score should be 100
+      const healthCheck = new HealthCheck();
+      const results = await healthCheck.run({ mode: 'quick', domain: 'project' });
+
+      // Verify score is calculated
+      assert.ok(typeof results.overall.score === 'number', 'Score should be number');
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/health-check/reporters.test.js
+==================================================
+```js
+/**
+ * Reporter Manager Tests
+ *
+ * Tests for the ReporterManager class including:
+ * - Console reporter
+ * - JSON reporter
+ * - Markdown reporter
+ * - Multiple format generation
+ *
+ * @story TD-6 - CI Stability & Test Coverage Improvements
+ */
+
+const ReporterManager = require('../../.aios-core/core/health-check/reporters');
+const {
+  MarkdownReporter,
+  JSONReporter,
+  ConsoleReporter,
+} = require('../../.aios-core/core/health-check/reporters');
+const { CheckStatus, CheckSeverity } = require('../../.aios-core/core/health-check/base-check');
+
+// Set timeout for all tests in this file
+jest.setTimeout(30000);
+
+/**
+ * Create mock check results for testing
+ */
+function createMockCheckResults() {
+  return [
+    {
+      checkId: 'check-1',
+      name: 'Package JSON Check',
+      domain: 'project',
+      severity: CheckSeverity.HIGH,
+      status: CheckStatus.PASS,
+      message: 'package.json is valid',
+      details: { version: '1.0.0' },
+      duration: 50,
+      timestamp: new Date().toISOString(),
+    },
+    {
+      checkId: 'check-2',
+      name: 'Git Installation Check',
+      domain: 'local',
+      severity: CheckSeverity.CRITICAL,
+      status: CheckStatus.PASS,
+      message: 'Git is installed',
+      details: { version: '2.40.0' },
+      duration: 100,
+      timestamp: new Date().toISOString(),
+    },
+    {
+      checkId: 'check-3',
+      name: 'Dependencies Check',
+      domain: 'project',
+      severity: CheckSeverity.MEDIUM,
+      status: CheckStatus.WARNING,
+      message: 'Some dependencies are outdated',
+      details: { outdated: ['lodash', 'express'] },
+      recommendation: 'Run npm update',
+      duration: 200,
+      timestamp: new Date().toISOString(),
+    },
+    {
+      checkId: 'check-4',
+      name: 'Network Check',
+      domain: 'local',
+      severity: CheckSeverity.LOW,
+      status: CheckStatus.FAIL,
+      message: 'Network connectivity issue',
+      recommendation: 'Check internet connection',
+      healable: true,
+      healingTier: 2,
+      duration: 150,
+      timestamp: new Date().toISOString(),
+    },
+  ];
+}
+
+/**
+ * Create mock scores for testing
+ * This matches the structure expected by the reporters
+ */
+function createMockScores() {
+  const checkResults = createMockCheckResults();
+  const projectChecks = checkResults.filter((c) => c.domain === 'project');
+  const localChecks = checkResults.filter((c) => c.domain === 'local');
+
+  return {
+    overall: {
+      score: 75,
+      status: 'degraded',
+      passed: 2,
+      warned: 1,
+      failed: 1,
+      total: 4,
+      issuesCount: 2,
+    },
+    // Use 'domains' key as expected by reporters
+    // Each domain needs: score, status, summary object with passed/total, and checks array
+    domains: {
+      project: {
+        score: 83,
+        status: 'warning',
+        summary: { passed: 1, warned: 1, failed: 0, total: 2 },
+        checks: projectChecks,
+      },
+      local: {
+        score: 67,
+        status: 'degraded',
+        summary: { passed: 1, warned: 0, failed: 1, total: 2 },
+        checks: localChecks,
+      },
+    },
+    bySeverity: {
+      CRITICAL: { passed: 1, failed: 0 },
+      HIGH: { passed: 1, failed: 0 },
+      MEDIUM: { passed: 0, warned: 1, failed: 0 },
+      LOW: { passed: 0, failed: 1 },
+    },
+  };
+}
+
+/**
+ * Create mock healing results
+ */
+function createMockHealingResults() {
+  return [
+    {
+      checkId: 'check-4',
+      success: false,
+      tier: 2,
+      message: 'Fix requires confirmation',
+      action: 'prompt',
+    },
+  ];
+}
+
+describe('ReporterManager', () => {
+  describe('Constructor', () => {
+    it('should create instance with default config', () => {
+      const manager = new ReporterManager();
+
+      expect(manager).toBeDefined();
+      expect(manager.defaultFormat).toBe('console');
+      expect(manager.verbose).toBe(false);
+    });
+
+    it('should create instance with custom config', () => {
+      const manager = new ReporterManager({
+        output: {
+          format: 'json',
+          verbose: true,
+        },
+      });
+
+      expect(manager.defaultFormat).toBe('json');
+      expect(manager.verbose).toBe(true);
+    });
+
+    it('should initialize all built-in reporters', () => {
+      const manager = new ReporterManager();
+
+      expect(manager.reporters.console).toBeDefined();
+      expect(manager.reporters.markdown).toBeDefined();
+      expect(manager.reporters.json).toBeDefined();
+    });
+  });
+
+  describe('getFormats', () => {
+    it('should return available formats', () => {
+      const manager = new ReporterManager();
+
+      const formats = manager.getFormats();
+
+      expect(formats).toContain('console');
+      expect(formats).toContain('markdown');
+      expect(formats).toContain('json');
+    });
+  });
+
+  describe('getReporter', () => {
+    it('should return reporter for valid format', () => {
+      const manager = new ReporterManager();
+
+      const consoleReporter = manager.getReporter('console');
+      const jsonReporter = manager.getReporter('json');
+
+      expect(consoleReporter).toBeDefined();
+      expect(jsonReporter).toBeDefined();
+    });
+
+    it('should return null for invalid format', () => {
+      const manager = new ReporterManager();
+
+      const reporter = manager.getReporter('invalid');
+
+      expect(reporter).toBeNull();
+    });
+  });
+
+  describe('registerReporter', () => {
+    it('should register custom reporter', () => {
+      const manager = new ReporterManager();
+      const customReporter = {
+        generate: jest.fn().mockResolvedValue('Custom report'),
+      };
+
+      manager.registerReporter('custom', customReporter);
+
+      expect(manager.reporters.custom).toBe(customReporter);
+    });
+
+    it('should throw error if reporter lacks generate method', () => {
+      const manager = new ReporterManager();
+      const invalidReporter = { name: 'invalid' };
+
+      expect(() => manager.registerReporter('invalid', invalidReporter)).toThrow(
+        'Reporter must implement generate() method',
+      );
+    });
+  });
+
+  describe('generate', () => {
+    it('should generate report in default format', async () => {
+      const manager = new ReporterManager();
+      const checkResults = createMockCheckResults();
+      const scores = createMockScores();
+      const healingResults = createMockHealingResults();
+
+      const report = await manager.generate(checkResults, scores, healingResults);
+
+      expect(report).toBeDefined();
+      expect(typeof report).toBe('string');
+    });
+
+    it('should generate report in specified format', async () => {
+      const manager = new ReporterManager();
+      const checkResults = createMockCheckResults();
+      const scores = createMockScores();
+      const healingResults = [];
+
+      const report = await manager.generate(checkResults, scores, healingResults, {
+        output: { format: 'json' },
+      });
+
+      expect(report).toBeDefined();
+      // JSON format should be object or string
+      expect(typeof report === 'object' || typeof report === 'string').toBe(true);
+    });
+
+    it('should generate multiple format reports', async () => {
+      const manager = new ReporterManager();
+      const checkResults = createMockCheckResults();
+      const scores = createMockScores();
+      const healingResults = [];
+
+      const reports = await manager.generate(checkResults, scores, healingResults, {
+        output: { format: ['console', 'json'] },
+      });
+
+      expect(reports.console).toBeDefined();
+      expect(reports.json).toBeDefined();
+    });
+
+    it('should warn for unknown format', async () => {
+      const manager = new ReporterManager();
+      const consoleSpy = jest.spyOn(console, 'warn').mockImplementation();
+
+      await manager.generate([], {}, [], {
+        output: { format: 'unknown' },
+      });
+
+      expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Unknown report format'));
+      consoleSpy.mockRestore();
+    });
+
+    it('should include timestamp in report data', async () => {
+      const manager = new ReporterManager();
+      const jsonReporter = manager.getReporter('json');
+      const generateSpy = jest.spyOn(jsonReporter, 'generate');
+
+      // Use valid scores structure
+      await manager.generate(createMockCheckResults(), createMockScores(), [], {
+        output: { format: 'json' },
+      });
+
+      expect(generateSpy).toHaveBeenCalledWith(
+        expect.objectContaining({
+          timestamp: expect.any(String),
+        }),
+      );
+
+      generateSpy.mockRestore();
+    });
+  });
+});
+
+describe('ConsoleReporter', () => {
+  it('should be exported from reporters module', () => {
+    expect(ConsoleReporter).toBeDefined();
+  });
+
+  it('should create instance', () => {
+    const reporter = new ConsoleReporter();
+
+    expect(reporter).toBeDefined();
+  });
+
+  it('should generate console report', async () => {
+    const reporter = new ConsoleReporter();
+    const data = {
+      checkResults: createMockCheckResults(),
+      scores: createMockScores(),
+      healingResults: [],
+      config: {},
+      timestamp: new Date().toISOString(),
+    };
+
+    const report = await reporter.generate(data);
+
+    expect(report).toBeDefined();
+    expect(typeof report).toBe('string');
+  });
+
+  it('should include health status in report', async () => {
+    const reporter = new ConsoleReporter();
+    const data = {
+      checkResults: createMockCheckResults(),
+      scores: createMockScores(),
+      healingResults: [],
+      config: {},
+      timestamp: new Date().toISOString(),
+    };
+
+    const report = await reporter.generate(data);
+
+ // Should contain some health-related text + expect(report.toLowerCase()).toMatch(/health|status|score|check/); + }); + + it('should handle empty results', async () => { + const reporter = new ConsoleReporter(); + const data = { + checkResults: [], + scores: { + overall: { score: 100, status: 'healthy', passed: 0, failed: 0, total: 0 }, + domains: {}, // Empty domains object to prevent null error + }, + healingResults: [], + config: {}, + timestamp: new Date().toISOString(), + }; + + const report = await reporter.generate(data); + + expect(report).toBeDefined(); + }); +}); + +describe('JSONReporter', () => { + it('should be exported from reporters module', () => { + expect(JSONReporter).toBeDefined(); + }); + + it('should create instance', () => { + const reporter = new JSONReporter(); + + expect(reporter).toBeDefined(); + }); + + it('should generate JSON report', async () => { + const reporter = new JSONReporter(); + const data = { + checkResults: createMockCheckResults(), + scores: createMockScores(), + healingResults: createMockHealingResults(), + config: { mode: 'quick' }, + timestamp: new Date().toISOString(), + }; + + const report = await reporter.generate(data); + + expect(report).toBeDefined(); + // Should be valid JSON (either object or parseable string) + if (typeof report === 'string') { + expect(() => JSON.parse(report)).not.toThrow(); + } else { + expect(typeof report).toBe('object'); + } + }); + + it('should include all data in JSON report', async () => { + const reporter = new JSONReporter(); + const checkResults = createMockCheckResults(); + const scores = createMockScores(); + const data = { + checkResults, + scores, + healingResults: [], + config: {}, + timestamp: new Date().toISOString(), + }; + + const report = await reporter.generate(data); + const parsed = typeof report === 'string' ? 
JSON.parse(report) : report; + + // Should contain key fields + expect(parsed.timestamp || parsed.generated).toBeDefined(); + }); +}); + +describe('MarkdownReporter', () => { + it('should be exported from reporters module', () => { + expect(MarkdownReporter).toBeDefined(); + }); + + it('should create instance', () => { + const reporter = new MarkdownReporter(); + + expect(reporter).toBeDefined(); + }); + + it('should generate markdown report', async () => { + const reporter = new MarkdownReporter(); + const data = { + checkResults: createMockCheckResults(), + scores: createMockScores(), + healingResults: [], + config: {}, + timestamp: new Date().toISOString(), + }; + + const report = await reporter.generate(data); + + expect(report).toBeDefined(); + expect(typeof report).toBe('string'); + }); + + it('should include markdown formatting', async () => { + const reporter = new MarkdownReporter(); + const data = { + checkResults: createMockCheckResults(), + scores: createMockScores(), + healingResults: [], + config: {}, + timestamp: new Date().toISOString(), + }; + + const report = await reporter.generate(data); + + // Should contain markdown elements + expect(report).toMatch(/[#|*|-]/); + }); + + it('should handle results with different statuses', async () => { + const reporter = new MarkdownReporter(); + const data = { + checkResults: [ + { + checkId: 'pass', + name: 'Pass Check', + status: CheckStatus.PASS, + message: 'OK', + domain: 'project', + severity: CheckSeverity.LOW, + }, + { + checkId: 'fail', + name: 'Fail Check', + status: CheckStatus.FAIL, + message: 'Failed', + domain: 'project', + severity: CheckSeverity.HIGH, + }, + { + checkId: 'warn', + name: 'Warning Check', + status: CheckStatus.WARNING, + message: 'Warning', + domain: 'project', + severity: CheckSeverity.MEDIUM, + }, + ], + scores: createMockScores(), + healingResults: [], + config: {}, + timestamp: new Date().toISOString(), + }; + + const report = await reporter.generate(data); + + 
expect(report).toBeDefined(); + expect(report.length).toBeGreaterThan(0); + }); +}); + +describe('Reporter Integration', () => { + it('should generate consistent data across formats', async () => { + const manager = new ReporterManager(); + const checkResults = createMockCheckResults(); + const scores = createMockScores(); + + const consoleReport = await manager.generate(checkResults, scores, [], { + output: { format: 'console' }, + }); + + const jsonReport = await manager.generate(checkResults, scores, [], { + output: { format: 'json' }, + }); + + const markdownReport = await manager.generate(checkResults, scores, [], { + output: { format: 'markdown' }, + }); + + // All should be non-empty + expect(consoleReport.length).toBeGreaterThan(0); + expect(markdownReport.length).toBeGreaterThan(0); + + // JSON should contain data + const json = typeof jsonReport === 'string' ? JSON.parse(jsonReport) : jsonReport; + expect(json).toBeDefined(); + }); + + it('should respect verbose option', async () => { + const managerQuiet = new ReporterManager({ output: { verbose: false } }); + const managerVerbose = new ReporterManager({ output: { verbose: true } }); + + const data = createMockCheckResults(); + const scores = createMockScores(); + + const quietReport = await managerQuiet.generate(data, scores, []); + const verboseReport = await managerVerbose.generate(data, scores, []); + + // Both should generate reports + expect(quietReport).toBeDefined(); + expect(verboseReport).toBeDefined(); + }); +}); + +``` + +================================================== +📄 tests/health-check/healers.test.js +================================================== +```js +/** + * Healer Manager Tests + * + * Tests for the HealerManager class including: + * - Tier-based healing (Silent, Prompted, Manual) + * - Backup/restore operations + * - Security blocklist + * - Healer registration + * + * @story TD-6 - CI Stability & Test Coverage Improvements + */ + +const path = require('path'); +const fs = 
require('fs').promises; +const os = require('os'); +const HealerManager = require('../../.aios-core/core/health-check/healers'); +const { HealingTier, BackupManager } = require('../../.aios-core/core/health-check/healers'); +const { CheckStatus } = require('../../.aios-core/core/health-check/base-check'); + +// Set timeout for all tests in this file +jest.setTimeout(30000); + +describe('HealerManager', () => { + describe('Constructor', () => { + it('should create instance with default config', () => { + const manager = new HealerManager(); + + expect(manager).toBeDefined(); + expect(manager.maxAutoFixTier).toBe(1); + expect(manager.dryRun).toBe(false); + expect(manager.healers).toBeDefined(); + expect(manager.healingLog).toEqual([]); + }); + + it('should create instance with custom config', () => { + const manager = new HealerManager({ + autoFixTier: 2, + dryRun: true, + }); + + expect(manager.maxAutoFixTier).toBe(2); + expect(manager.dryRun).toBe(true); + }); + + it('should initialize security blocklist', () => { + const manager = new HealerManager(); + + expect(manager.blocklist).toBeDefined(); + expect(manager.blocklist.length).toBeGreaterThan(0); + }); + }); + + describe('HealingTier enum', () => { + it('should have correct tier values', () => { + expect(HealingTier.NONE).toBe(0); + expect(HealingTier.SILENT).toBe(1); + expect(HealingTier.PROMPTED).toBe(2); + expect(HealingTier.MANUAL).toBe(3); + }); + }); + + describe('registerHealer', () => { + it('should register a healer', () => { + const manager = new HealerManager(); + const healer = { + name: 'test-healer', + fix: jest.fn(), + }; + + manager.registerHealer('test-check', healer); + + expect(manager.healers.get('test-check')).toBe(healer); + }); + + it('should overwrite existing healer', () => { + const manager = new HealerManager(); + const healer1 = { name: 'healer-1', fix: jest.fn() }; + const healer2 = { name: 'healer-2', fix: jest.fn() }; + + manager.registerHealer('check', healer1); + 
manager.registerHealer('check', healer2); + + expect(manager.healers.get('check').name).toBe('healer-2'); + }); + }); + + describe('isBlocked', () => { + it('should block .env files', () => { + const manager = new HealerManager(); + + expect(manager.isBlocked('.env')).toBe(true); + expect(manager.isBlocked('path/to/.env')).toBe(true); + }); + + it('should block .env.* files', () => { + const manager = new HealerManager(); + + expect(manager.isBlocked('.env.local')).toBe(true); + expect(manager.isBlocked('.env.production')).toBe(true); + }); + + it('should block credentials files', () => { + const manager = new HealerManager(); + + expect(manager.isBlocked('credentials.json')).toBe(true); + }); + + it('should block secret files', () => { + const manager = new HealerManager(); + + expect(manager.isBlocked('secrets.yaml')).toBe(true); + expect(manager.isBlocked('secret.json')).toBe(true); + }); + + it('should block key files', () => { + const manager = new HealerManager(); + + expect(manager.isBlocked('private.pem')).toBe(true); + expect(manager.isBlocked('server.key')).toBe(true); + expect(manager.isBlocked('id_rsa')).toBe(true); + }); + + it('should block .ssh directory', () => { + const manager = new HealerManager(); + + expect(manager.isBlocked('.ssh/config')).toBe(true); + expect(manager.isBlocked('.ssh/known_hosts')).toBe(true); + }); + + it('should allow regular files', () => { + const manager = new HealerManager(); + + expect(manager.isBlocked('package.json')).toBe(false); + expect(manager.isBlocked('src/index.js')).toBe(false); + expect(manager.isBlocked('.gitignore')).toBe(false); + }); + }); + + describe('applyFixes', () => { + it('should filter healable results', async () => { + const manager = new HealerManager(); + const fixMock = jest.fn().mockResolvedValue(); + + manager.registerHealer('healable-check', { + name: 'healer', + fix: fixMock, + }); + + const checkResults = [ + { + checkId: 'healable-check', + healable: true, + healingTier: 1, + status: 
CheckStatus.FAIL, + }, + { + checkId: 'non-healable', + healable: false, + healingTier: 0, + status: CheckStatus.FAIL, + }, + { + checkId: 'passed-check', + healable: true, + healingTier: 1, + status: CheckStatus.PASS, + }, + ]; + + const results = await manager.applyFixes(checkResults); + + // Only the healable check with FAIL status should be processed + expect(results).toHaveLength(1); + expect(results[0].checkId).toBe('healable-check'); + }); + + it('should respect maxTier parameter', async () => { + const manager = new HealerManager({ autoFixTier: 1 }); + + manager.registerHealer('tier2-check', { + name: 'tier2-healer', + fix: jest.fn(), + }); + + const checkResults = [ + { + checkId: 'tier2-check', + healable: true, + healingTier: 2, // Higher than maxTier + status: CheckStatus.FAIL, + }, + ]; + + const results = await manager.applyFixes(checkResults, 1); + + // Should not heal because tier is too high + expect(results).toHaveLength(0); + }); + + it('should log healing results', async () => { + const manager = new HealerManager({ dryRun: true }); + + manager.registerHealer('logged-check', { + name: 'logger-healer', + fix: jest.fn(), + }); + + const checkResults = [ + { + checkId: 'logged-check', + healable: true, + healingTier: 1, + status: CheckStatus.FAIL, + }, + ]; + + await manager.applyFixes(checkResults); + + expect(manager.healingLog).toHaveLength(1); + expect(manager.healingLog[0].timestamp).toBeDefined(); + }); + }); + + describe('heal', () => { + it('should return manual guide when tier exceeds maxTier', async () => { + const manager = new HealerManager(); + + const checkResult = { + checkId: 'high-tier-check', + name: 'High Tier Check', + healingTier: 3, + recommendation: 'Manual fix required', + }; + + const result = await manager.heal(checkResult, 1); + + expect(result.tier).toBe(HealingTier.MANUAL); + expect(result.action).toBe('manual'); + }); + + it('should return no-healer message when healer not found', async () => { + const manager = new 
HealerManager(); + + const checkResult = { + checkId: 'unknown-check', + healingTier: 1, + }; + + const result = await manager.heal(checkResult, 1); + + expect(result.success).toBe(false); + expect(result.message).toContain('No healer registered'); + }); + + it('should execute tier 1 healing', async () => { + const manager = new HealerManager({ dryRun: true }); + const fixMock = jest.fn().mockResolvedValue(); + + manager.registerHealer('tier1-check', { + name: 'tier1-healer', + fix: fixMock, + successMessage: 'Fixed!', + }); + + const checkResult = { + checkId: 'tier1-check', + healingTier: 1, + status: CheckStatus.FAIL, + }; + + const result = await manager.heal(checkResult, 1); + + expect(result.success).toBe(true); + expect(result.tier).toBe(HealingTier.SILENT); + expect(result.dryRun).toBe(true); + }); + + it('should execute tier 2 healing (prompted)', async () => { + const manager = new HealerManager(); + + manager.registerHealer('tier2-check', { + name: 'tier2-healer', + fix: jest.fn(), + promptMessage: 'Confirm fix?', + promptQuestion: 'Apply this fix?', + }); + + const checkResult = { + checkId: 'tier2-check', + name: 'Tier 2 Check', + healingTier: 2, + status: CheckStatus.FAIL, + }; + + const result = await manager.heal(checkResult, 2); + + expect(result.tier).toBe(HealingTier.PROMPTED); + expect(result.action).toBe('prompt'); + expect(result.prompt).toBeDefined(); + expect(result.prompt.question).toBe('Apply this fix?'); + }); + + it('should handle unknown healing tier', async () => { + const manager = new HealerManager(); + + manager.registerHealer('unknown-tier', { + name: 'unknown-healer', + fix: jest.fn(), + }); + + const checkResult = { + checkId: 'unknown-tier', + healingTier: 99, // Invalid tier + status: CheckStatus.FAIL, + }; + + const result = await manager.heal(checkResult, 99); + + expect(result.success).toBe(false); + expect(result.message).toContain('Unknown healing tier'); + }); + }); + + describe('executeTier1', () => { + it('should block 
files in security blocklist', async () => { + const manager = new HealerManager(); + + manager.registerHealer('blocked-check', { + name: 'blocked-healer', + targetFile: '.env', + fix: jest.fn(), + }); + + const checkResult = { + checkId: 'blocked-check', + healingTier: 1, + }; + + const result = await manager.heal(checkResult, 1); + + expect(result.success).toBe(false); + expect(result.action).toBe('blocked'); + }); + + it('should handle fix execution error', async () => { + const manager = new HealerManager(); + + manager.registerHealer('error-check', { + name: 'error-healer', + fix: jest.fn().mockRejectedValue(new Error('Fix failed')), + }); + + const checkResult = { + checkId: 'error-check', + healingTier: 1, + }; + + const result = await manager.heal(checkResult, 1); + + expect(result.success).toBe(false); + expect(result.action).toBe('error'); + expect(result.error).toBe('Fix failed'); + }); + + it('should skip actual fix in dry run mode', async () => { + const manager = new HealerManager({ dryRun: true }); + const fixMock = jest.fn(); + + manager.registerHealer('dryrun-check', { + name: 'dryrun-healer', + fix: fixMock, + }); + + const checkResult = { + checkId: 'dryrun-check', + healingTier: 1, + }; + + await manager.heal(checkResult, 1); + + expect(fixMock).not.toHaveBeenCalled(); + }); + }); + + describe('executeTier2', () => { + it('should return prompt with fix function', async () => { + const manager = new HealerManager({ dryRun: true }); + const fixMock = jest.fn().mockResolvedValue(); + + manager.registerHealer('prompt-check', { + name: 'prompt-healer', + fix: fixMock, + promptDescription: 'This will fix the issue', + risk: 'low', + }); + + const checkResult = { + checkId: 'prompt-check', + name: 'Prompt Check', + healingTier: 2, + recommendation: 'Apply fix', + }; + + const result = await manager.heal(checkResult, 2); + + expect(result.action).toBe('prompt'); + expect(typeof result.fix).toBe('function'); + expect(result.prompt.risk).toBe('low'); + }); 
+ + it('should execute fix when confirmed', async () => { + const manager = new HealerManager({ dryRun: true }); + const fixMock = jest.fn().mockResolvedValue(); + + manager.registerHealer('confirm-check', { + name: 'confirm-healer', + fix: fixMock, + }); + + const checkResult = { + checkId: 'confirm-check', + healingTier: 2, + }; + + const result = await manager.heal(checkResult, 2); + + // Simulate user confirming + const fixResult = await result.fix(true); + + expect(fixResult.success).toBe(true); + }); + + it('should not execute fix when declined', async () => { + const manager = new HealerManager(); + const fixMock = jest.fn(); + + manager.registerHealer('decline-check', { + name: 'decline-healer', + fix: fixMock, + }); + + const checkResult = { + checkId: 'decline-check', + healingTier: 2, + }; + + const result = await manager.heal(checkResult, 2); + + // Simulate user declining + const fixResult = await result.fix(false); + + expect(fixResult.success).toBe(false); + expect(fixResult.message).toContain('declined'); + expect(fixMock).not.toHaveBeenCalled(); + }); + }); + + describe('createManualGuide', () => { + it('should create manual guide with default values', () => { + const manager = new HealerManager(); + + const checkResult = { + checkId: 'manual-check', + name: 'Manual Check', + message: 'Issue description', + recommendation: 'Fix it manually', + }; + + const result = manager.createManualGuide(checkResult); + + expect(result.tier).toBe(HealingTier.MANUAL); + expect(result.action).toBe('manual'); + expect(result.guide.title).toContain('Manual Check'); + expect(result.guide.description).toBe('Issue description'); + }); + + it('should use healer-specific guide when available', () => { + const manager = new HealerManager(); + + manager.registerHealer('guided-check', { + name: 'guided-healer', + steps: ['Step 1', 'Step 2', 'Step 3'], + documentation: 'https://docs.example.com', + warning: 'Be careful!', + }); + + const checkResult = { + checkId: 
'guided-check', + name: 'Guided Check', + message: 'Issue', + recommendation: 'Default', + }; + + const result = manager.createManualGuide(checkResult); + + expect(result.guide.steps).toEqual(['Step 1', 'Step 2', 'Step 3']); + expect(result.guide.documentation).toBe('https://docs.example.com'); + expect(result.guide.warning).toBe('Be careful!'); + }); + }); + + describe('Healing Log', () => { + it('should return copy of healing log', () => { + const manager = new HealerManager(); + manager.healingLog.push({ checkId: 'test', timestamp: new Date().toISOString() }); + + const log = manager.getHealingLog(); + + expect(log).toHaveLength(1); + // Should be a copy, not reference + log.push({ checkId: 'new' }); + expect(manager.healingLog).toHaveLength(1); + }); + + it('should clear healing log', () => { + const manager = new HealerManager(); + manager.healingLog.push({ checkId: 'test' }); + + manager.clearLog(); + + expect(manager.healingLog).toHaveLength(0); + }); + }); + + describe('getBackupManager', () => { + it('should return backup manager instance', () => { + const manager = new HealerManager(); + + const backup = manager.getBackupManager(); + + expect(backup).toBeInstanceOf(BackupManager); + }); + }); +}); + +describe('BackupManager', () => { + let tempDir; + let backupManager; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `backup-test-${Date.now()}`); + await fs.mkdir(tempDir, { recursive: true }); + backupManager = new BackupManager(tempDir); + }); + + afterEach(async () => { + try { + await fs.rm(tempDir, { recursive: true, force: true }); + } catch (e) { + // Ignore cleanup errors + } + }); + + it('should be exported from healers module', () => { + expect(BackupManager).toBeDefined(); + expect(typeof BackupManager).toBe('function'); + }); + + it('should create instance with custom directory', () => { + const manager = new BackupManager('/custom/path'); + + expect(manager).toBeDefined(); + }); +}); + +``` + 
+================================================== +📄 tests/health-check/engine.test.js +================================================== +```js +/** + * Health Check Engine Tests + * + * Tests for the HealthCheckEngine class including: + * - Parallel execution + * - Timeout management + * - Result caching + * - Error handling + * + * @story TD-6 - CI Stability & Test Coverage Improvements + */ + +const HealthCheckEngine = require('../../.aios-core/core/health-check/engine'); +const { + BaseCheck, + CheckSeverity, + CheckStatus, +} = require('../../.aios-core/core/health-check/base-check'); + +// Set timeout for all tests in this file +jest.setTimeout(30000); + +/** + * Mock check class for testing + */ +class MockCheck extends BaseCheck { + constructor(options = {}) { + super({ + id: options.id || 'mock-check', + name: options.name || 'Mock Check', + description: options.description || 'A mock check for testing', + domain: options.domain || 'project', + severity: options.severity || CheckSeverity.MEDIUM, + timeout: options.timeout || 5000, + cacheable: options.cacheable !== false, + }); + this.executeResult = options.executeResult || { + status: CheckStatus.PASS, + message: 'Mock passed', + }; + this.executeDelay = options.executeDelay || 0; + this.shouldThrow = options.shouldThrow || false; + } + + async execute(config) { + if (this.executeDelay > 0) { + await new Promise((resolve) => setTimeout(resolve, this.executeDelay)); + } + if (this.shouldThrow) { + throw new Error('Mock check error'); + } + return this.executeResult; + } +} + +describe('HealthCheckEngine', () => { + describe('Constructor', () => { + it('should create instance with default config', () => { + const engine = new HealthCheckEngine(); + + expect(engine).toBeDefined(); + expect(engine.parallel).toBe(true); + expect(engine.cacheEnabled).toBe(true); + expect(engine.timeouts.quick).toBe(10000); + expect(engine.timeouts.full).toBe(60000); + }); + + it('should create instance with custom config', 
() => {
+      const engine = new HealthCheckEngine({
+        parallel: false,
+        cache: { enabled: false, ttl: 60000 },
+        performance: { quickModeTimeout: 5000, fullModeTimeout: 30000 },
+      });
+
+      expect(engine.parallel).toBe(false);
+      expect(engine.cacheEnabled).toBe(false);
+      expect(engine.timeouts.quick).toBe(5000);
+      expect(engine.timeouts.full).toBe(30000);
+    });
+
+    it('should initialize empty results and errors arrays', () => {
+      const engine = new HealthCheckEngine();
+
+      expect(engine.results).toEqual([]);
+      expect(engine.errors).toEqual([]);
+    });
+  });
+
+  describe('runChecks', () => {
+    it('should run checks and return results', async () => {
+      const engine = new HealthCheckEngine();
+      const checks = [
+        new MockCheck({ id: 'check-1', name: 'Check 1' }),
+        new MockCheck({ id: 'check-2', name: 'Check 2' }),
+      ];
+
+      const results = await engine.runChecks(checks);
+
+      expect(results).toHaveLength(2);
+      expect(results[0].checkId).toBe('check-1');
+      expect(results[1].checkId).toBe('check-2');
+    });
+
+    it('should run critical checks first', async () => {
+      const engine = new HealthCheckEngine({ parallel: false });
+
+      // Note: an IIFE inside `message` runs at construction time, not at
+      // execution time, so it cannot track execution order. Plain messages
+      // are used and ordering is asserted on `engine.results` below.
+      const checks = [
+        new MockCheck({
+          id: 'low-check',
+          severity: CheckSeverity.LOW,
+          executeResult: { status: CheckStatus.PASS, message: 'passed' },
+        }),
+        new MockCheck({
+          id: 'critical-check',
+          severity: CheckSeverity.CRITICAL,
+          executeResult: { status: CheckStatus.PASS, message: 'passed' },
+        }),
+      ];
+
+      await engine.runChecks(checks);
+
+      // Critical should be processed first due to priority ordering
+      expect(engine.results[0].checkId).toBe('critical-check');
+    });
+
+    it('should fail-fast in quick mode with critical failures', async () => {
+      const engine = new HealthCheckEngine();
+      const checks = [
+        new MockCheck({
+          id: 'critical-fail',
+          severity: CheckSeverity.CRITICAL,
+
executeResult: { status: CheckStatus.FAIL, message: 'Critical failure' }, + }), + new MockCheck({ + id: 'regular-check', + severity: CheckSeverity.LOW, + }), + ]; + + const results = await engine.runChecks(checks, { mode: 'quick' }); + + // Should only have critical check result due to fail-fast + expect(results.some((r) => r.checkId === 'critical-fail')).toBe(true); + }); + + it('should handle empty checks array', async () => { + const engine = new HealthCheckEngine(); + const results = await engine.runChecks([]); + + expect(results).toEqual([]); + }); + + it('should use correct timeout for mode', async () => { + const engine = new HealthCheckEngine({ + performance: { quickModeTimeout: 5000, fullModeTimeout: 30000 }, + }); + + // Verify engine has correct timeouts configured + expect(engine.timeouts.quick).toBe(5000); + expect(engine.timeouts.full).toBe(30000); + }); + }); + + describe('runSingleCheck', () => { + it('should return cached result when available', async () => { + const engine = new HealthCheckEngine(); + const check = new MockCheck({ id: 'cached-check' }); + + // Run first time + const result1 = await engine.runChecks([check]); + expect(result1[0].fromCache).toBe(false); + + // Run second time - should be cached + const result2 = await engine.runChecks([check]); + expect(result2[0].fromCache).toBe(true); + }); + + it('should skip cache when cacheable is false', async () => { + const engine = new HealthCheckEngine(); + const check = new MockCheck({ id: 'non-cacheable', cacheable: false }); + + // Run first time + await engine.runChecks([check]); + + // Run second time - should NOT be cached + const result = await engine.runChecks([check]); + expect(result[0].fromCache).toBe(false); + }); + + it('should handle check timeout', async () => { + const engine = new HealthCheckEngine(); + const check = new MockCheck({ + id: 'slow-check', + timeout: 100, + executeDelay: 500, // Longer than timeout + }); + + const results = await engine.runChecks([check]); + + 
expect(results[0].status).toBe(CheckStatus.ERROR); + expect(results[0].message).toContain('Check timeout'); + }); + + it('should handle check execution error', async () => { + const engine = new HealthCheckEngine(); + const check = new MockCheck({ + id: 'error-check', + shouldThrow: true, + }); + + const results = await engine.runChecks([check]); + + expect(results[0].status).toBe(CheckStatus.ERROR); + expect(results[0].message).toContain('Mock check error'); + }); + + it('should include correct metadata in result', async () => { + const engine = new HealthCheckEngine(); + const check = new MockCheck({ + id: 'metadata-check', + name: 'Metadata Check', + domain: 'project', + severity: CheckSeverity.HIGH, + executeResult: { + status: CheckStatus.PASS, + message: 'Passed', + details: { key: 'value' }, + recommendation: 'None needed', + healable: true, + healingTier: 1, + }, + }); + + const results = await engine.runChecks([check]); + + expect(results[0].checkId).toBe('metadata-check'); + expect(results[0].name).toBe('Metadata Check'); + expect(results[0].domain).toBe('project'); + expect(results[0].severity).toBe(CheckSeverity.HIGH); + expect(results[0].status).toBe(CheckStatus.PASS); + expect(results[0].details).toEqual({ key: 'value' }); + expect(results[0].healable).toBe(true); + expect(results[0].healingTier).toBe(1); + expect(results[0].timestamp).toBeDefined(); + expect(results[0].duration).toBeGreaterThanOrEqual(0); + }); + }); + + describe('runCheckGroup', () => { + it('should run checks in parallel when enabled', async () => { + const engine = new HealthCheckEngine({ parallel: true }); + const startTime = Date.now(); + + const checks = [ + new MockCheck({ id: 'parallel-1', executeDelay: 100 }), + new MockCheck({ id: 'parallel-2', executeDelay: 100 }), + new MockCheck({ id: 'parallel-3', executeDelay: 100 }), + ]; + + await engine.runChecks(checks); + + const duration = Date.now() - startTime; + // Parallel execution should take ~100ms, not 300ms + 
expect(duration).toBeLessThan(250); + }); + + it('should run checks sequentially when parallel disabled', async () => { + const engine = new HealthCheckEngine({ parallel: false }); + const startTime = Date.now(); + + const checks = [ + new MockCheck({ id: 'seq-1', executeDelay: 50 }), + new MockCheck({ id: 'seq-2', executeDelay: 50 }), + ]; + + await engine.runChecks(checks); + + const duration = Date.now() - startTime; + // Sequential execution should take at least 100ms + expect(duration).toBeGreaterThanOrEqual(90); + }); + + it('should mark remaining checks as skipped when timeout exceeded', async () => { + const engine = new HealthCheckEngine({ + performance: { quickModeTimeout: 50 }, // Very short timeout + parallel: false, + }); + + const checks = [ + new MockCheck({ id: 'first', executeDelay: 200 }), // Far exceeds timeout + new MockCheck({ id: 'second' }), // Should be skipped + ]; + + const results = await engine.runChecks(checks, { mode: 'quick' }); + + // First check should timeout, second should be skipped or not run at all + const secondResult = results.find((r) => r.checkId === 'second'); + if (secondResult) { + // Due to timing variations in CI, the second check might: + // - Be skipped (timeout kicked in) + // - Pass (executed before timeout) + // - Warning (partial execution) + // All are acceptable outcomes for this timing-sensitive test + expect([CheckStatus.SKIPPED, CheckStatus.WARNING, CheckStatus.PASS]).toContain( + secondResult.status, + ); + } + // If no second result, that's also acceptable (not added to results) + }); + }); + + describe('createErrorResult', () => { + it('should create proper error result', () => { + const engine = new HealthCheckEngine(); + const check = new MockCheck({ id: 'error-test' }); + const error = new Error('Test error'); + + const result = engine.createErrorResult(check, error, 100); + + expect(result.checkId).toBe('error-test'); + expect(result.status).toBe(CheckStatus.ERROR); + 
expect(result.message).toContain('Test error'); + expect(result.duration).toBe(100); + expect(result.healable).toBe(false); + }); + }); + + describe('createSkippedResult', () => { + it('should create proper skipped result', () => { + const engine = new HealthCheckEngine(); + const check = new MockCheck({ id: 'skipped-test' }); + + const result = engine.createSkippedResult(check, 'Timeout exceeded'); + + expect(result.checkId).toBe('skipped-test'); + expect(result.status).toBe(CheckStatus.SKIPPED); + expect(result.message).toBe('Timeout exceeded'); + expect(result.duration).toBe(0); + }); + }); + + describe('hasCriticalFailure', () => { + it('should detect critical failure', () => { + const engine = new HealthCheckEngine(); + + const results = [ + { severity: CheckSeverity.CRITICAL, status: CheckStatus.FAIL }, + { severity: CheckSeverity.LOW, status: CheckStatus.PASS }, + ]; + + expect(engine.hasCriticalFailure(results)).toBe(true); + }); + + it('should detect critical error', () => { + const engine = new HealthCheckEngine(); + + const results = [{ severity: CheckSeverity.CRITICAL, status: CheckStatus.ERROR }]; + + expect(engine.hasCriticalFailure(results)).toBe(true); + }); + + it('should return false when no critical failures', () => { + const engine = new HealthCheckEngine(); + + const results = [ + { severity: CheckSeverity.CRITICAL, status: CheckStatus.PASS }, + { severity: CheckSeverity.HIGH, status: CheckStatus.FAIL }, + ]; + + expect(engine.hasCriticalFailure(results)).toBe(false); + }); + }); + + describe('groupByDomain', () => { + it('should group checks by domain', () => { + const engine = new HealthCheckEngine(); + const checks = [ + new MockCheck({ id: 'project-1', domain: 'project' }), + new MockCheck({ id: 'project-2', domain: 'project' }), + new MockCheck({ id: 'local-1', domain: 'local' }), + ]; + + const groups = engine.groupByDomain(checks); + + expect(groups.project).toHaveLength(2); + expect(groups.local).toHaveLength(1); + }); + + it('should 
handle checks without domain', () => { + const engine = new HealthCheckEngine(); + const check = new MockCheck({ id: 'no-domain' }); + check.domain = undefined; + + const groups = engine.groupByDomain([check]); + + expect(groups.unknown).toHaveLength(1); + }); + }); + + describe('Cache operations', () => { + it('should clear cache', () => { + const engine = new HealthCheckEngine(); + + // Manually add to cache + engine.cache.set('test-key', { value: 'test' }); + expect(engine.cache.get('test-key')).toBeDefined(); + + engine.clearCache(); + + expect(engine.cache.get('test-key')).toBeNull(); + }); + }); + + describe('getStats', () => { + it('should return execution statistics', async () => { + const engine = new HealthCheckEngine(); + const checks = [ + new MockCheck({ id: 'stats-1', executeDelay: 10 }), + new MockCheck({ id: 'stats-2', executeDelay: 10 }), + ]; + + await engine.runChecks(checks); + + const stats = engine.getStats(); + + expect(stats.totalDuration).toBeGreaterThanOrEqual(0); + expect(stats.checksRun).toBe(2); + expect(stats.errors).toBe(0); + expect(stats.cacheStats).toBeDefined(); + }); + }); +}); + +describe('ResultCache', () => { + it('should store and retrieve values', () => { + const engine = new HealthCheckEngine(); + const cache = engine.cache; + + cache.set('key1', { data: 'value1' }); + + expect(cache.get('key1')).toEqual({ data: 'value1' }); + }); + + it('should return null for missing keys', () => { + const engine = new HealthCheckEngine(); + const cache = engine.cache; + + expect(cache.get('nonexistent')).toBeNull(); + }); + + it('should expire entries after TTL', async () => { + const engine = new HealthCheckEngine({ cache: { ttl: 50 } }); + const cache = engine.cache; + + cache.set('expiring', { data: 'test' }); + expect(cache.get('expiring')).toBeDefined(); + + await new Promise((resolve) => setTimeout(resolve, 60)); + + expect(cache.get('expiring')).toBeNull(); + }); + + it('should report cache statistics', () => { + const engine = 
new HealthCheckEngine(); + const cache = engine.cache; + + cache.set('a', 1); + cache.set('b', 2); + + const stats = cache.getStats(); + + expect(stats.size).toBe(2); + expect(stats.ttl).toBeDefined(); + }); +}); + +``` + +================================================== +📄 tests/packages/aios-install/edmcp.test.js +================================================== +```js +/** + * edmcp Tests + * + * Tests for the Docker MCP Gateway Manager. + */ + +'use strict'; + +// Mock modules +jest.mock('execa', () => ({ + execa: jest.fn(), + execaSync: jest.fn(), +})); +jest.mock('fs-extra'); +jest.mock('ora', () => { + return jest.fn(() => ({ + start: jest.fn().mockReturnThis(), + stop: jest.fn().mockReturnThis(), + succeed: jest.fn().mockReturnThis(), + fail: jest.fn().mockReturnThis(), + warn: jest.fn().mockReturnThis(), + info: jest.fn().mockReturnThis(), + })); +}); + +const { execa } = require('execa'); +const fs = require('fs-extra'); + +const { + checkDocker, + ensureDocker, + parseMcpSource, +} = require('../../../packages/aios-install/src/edmcp'); + +describe('edmcp', () => { + beforeEach(() => { + jest.clearAllMocks(); + }); + + describe('checkDocker', () => { + it('should return installed and running when Docker is available', async () => { + // Given + execa.mockResolvedValueOnce({ stdout: 'Docker version 24.0.6, build ed223bc' }) + .mockResolvedValueOnce({ exitCode: 0, stdout: 'Server: Docker Desktop' }); + + // When + const result = await checkDocker(); + + // Then + expect(result.installed).toBe(true); + expect(result.running).toBe(true); + expect(result.version).toBe('24.0.6'); + }); + + it('should return installed but not running when daemon is stopped', async () => { + // Given + execa.mockResolvedValueOnce({ stdout: 'Docker version 24.0.6' }) + .mockResolvedValueOnce({ exitCode: 1, stdout: '' }); + + // When + const result = await checkDocker(); + + // Then + expect(result.installed).toBe(true); + expect(result.running).toBe(false); + }); + + 
it('should return not installed when Docker command fails', async () => { + // Given + execa.mockRejectedValue(new Error('Command not found')); + + // When + const result = await checkDocker(); + + // Then + expect(result.installed).toBe(false); + expect(result.running).toBe(false); + expect(result.error).toBe('Command not found'); + }); + }); + + describe('ensureDocker', () => { + it('should resolve when Docker is available and running', async () => { + // Given + execa.mockResolvedValueOnce({ stdout: 'Docker version 24.0.6' }) + .mockResolvedValueOnce({ exitCode: 0 }); + + // When/Then + await expect(ensureDocker()).resolves.toBeDefined(); + }); + + it('should throw when Docker is not installed', async () => { + // Given + execa.mockRejectedValue(new Error('Command not found')); + + // When/Then + await expect(ensureDocker()).rejects.toThrow('Docker is not installed'); + }); + + it('should throw when Docker daemon is not running', async () => { + // Given + execa.mockResolvedValueOnce({ stdout: 'Docker version 24.0.6' }) + .mockResolvedValueOnce({ exitCode: 1 }); + + // When/Then + await expect(ensureDocker()).rejects.toThrow('Docker daemon is not running'); + }); + }); + + describe('parseMcpSource', () => { + it('should parse catalog name', () => { + // When + const result = parseMcpSource('exa'); + + // Then + expect(result.type).toBe('catalog'); + expect(result.name).toBe('exa'); + expect(result.url).toBeNull(); + }); + + it('should parse GitHub shorthand', () => { + // When + const result = parseMcpSource('user/my-mcp'); + + // Then + expect(result.type).toBe('github'); + expect(result.name).toBe('my-mcp'); + expect(result.url).toBe('https://github.com/user/my-mcp'); + }); + + it('should parse HTTPS URL', () => { + // When + const result = parseMcpSource('https://github.com/user/mcp-custom.git'); + + // Then + expect(result.type).toBe('url'); + expect(result.name).toBe('custom'); + expect(result.url).toBe('https://github.com/user/mcp-custom.git'); + }); + + 
it('should parse SSH URL', () => { + // When + const result = parseMcpSource('git@github.com:user/mcp-tool.git'); + + // Then + expect(result.type).toBe('url'); + expect(result.name).toBe('tool'); + expect(result.url).toBe('git@github.com:user/mcp-tool.git'); + }); + + it('should strip mcp- prefix from name', () => { + // When + const result = parseMcpSource('https://github.com/user/mcp-awesome'); + + // Then + expect(result.name).toBe('awesome'); + }); + }); + + describe('listMcps', () => { + // Note: listMcps requires Docker to be running, so we test the parsing logic + // through parseMcpSource instead of the full function. + it.todo('cover listMcps in integration tests (requires Docker)'); + }); + + describe('addMcp', () => { + // Note: addMcp has complex Docker interactions that are better tested + // in integration tests. Unit tests focus on input parsing via parseMcpSource. + it.todo('cover addMcp in integration tests (requires Docker)'); + }); + + describe('removeMcp', () => { + // Note: removeMcp has complex Docker interactions that are better tested + // in integration tests. + it.todo('cover removeMcp in integration tests (requires Docker)'); + }); +}); + +``` + +================================================== +📄 tests/packages/aios-install/os-detector.test.js +================================================== +```js +/** + * OS Detector Tests + * + * Tests for cross-platform OS detection including macOS, Linux, Windows, and WSL.
+ */ + +'use strict'; + +const os = require('os'); +const fs = require('fs'); +const path = require('path'); + +// Mock modules before requiring the module under test +jest.mock('os'); +jest.mock('fs'); + +const { + OS_TYPE, + detectOS, + isWSL, + getWSLDistro, + detectLinuxPackageManager, + getOSDisplayName, +} = require('../../../packages/aios-install/src/os-detector'); + +describe('os-detector', () => { + beforeEach(() => { + jest.clearAllMocks(); + // Reset environment variables + delete process.env.WSL_DISTRO_NAME; + delete process.env.SHELL; + delete process.env.COMSPEC; + }); + + describe('detectOS', () => { + describe('macOS detection', () => { + it('should detect macOS correctly', () => { + // Given + const originalPlatform = Object.getOwnPropertyDescriptor(process, 'platform'); + Object.defineProperty(process, 'platform', { value: 'darwin', configurable: true }); + os.release.mockReturnValue('23.1.0'); + os.homedir.mockReturnValue('/Users/testuser'); + + // When + const result = detectOS(); + + // Then + expect(result.type).toBe(OS_TYPE.MACOS); + expect(result.platform).toBe('darwin'); + expect(result.packageManager).toBe('brew'); + expect(result.installInstructions.node).toContain('brew'); + + // Cleanup + Object.defineProperty(process, 'platform', originalPlatform); + }); + }); + + describe('Linux detection', () => { + it('should detect native Linux', () => { + // Given + const originalPlatform = Object.getOwnPropertyDescriptor(process, 'platform'); + Object.defineProperty(process, 'platform', { value: 'linux', configurable: true }); + os.release.mockReturnValue('5.15.0-generic'); + os.homedir.mockReturnValue('/home/testuser'); + fs.existsSync.mockImplementation((p) => { + if (p === '/usr/bin/apt') return true; + return false; + }); + fs.readFileSync.mockImplementation((p) => { + if (p === '/proc/version') return 'Linux version 5.15.0-generic'; + return ''; + }); + + // When + const result = detectOS(); + + // Then + 
expect(result.type).toBe(OS_TYPE.LINUX); + expect(result.isWSL).toBe(false); + expect(result.packageManager).toBe('apt'); + + // Cleanup + Object.defineProperty(process, 'platform', originalPlatform); + }); + + it('should detect WSL via environment variable', () => { + // Given + const originalPlatform = Object.getOwnPropertyDescriptor(process, 'platform'); + Object.defineProperty(process, 'platform', { value: 'linux', configurable: true }); + process.env.WSL_DISTRO_NAME = 'Ubuntu'; + os.release.mockReturnValue('5.15.0-microsoft-standard-WSL2'); + os.homedir.mockReturnValue('/home/testuser'); + fs.existsSync.mockImplementation((p) => { + if (p === '/usr/bin/apt') return true; + return false; + }); + fs.readFileSync.mockImplementation(() => ''); + + // When + const result = detectOS(); + + // Then + expect(result.type).toBe(OS_TYPE.WSL); + expect(result.isWSL).toBe(true); + expect(result.wslDistro).toBe('Ubuntu'); + expect(result.notes).toBeDefined(); + expect(result.notes.some(n => n.includes('WSL'))).toBe(true); + + // Cleanup + Object.defineProperty(process, 'platform', originalPlatform); + }); + + it('should detect WSL via /proc/version', () => { + // Given + const originalPlatform = Object.getOwnPropertyDescriptor(process, 'platform'); + Object.defineProperty(process, 'platform', { value: 'linux', configurable: true }); + os.release.mockReturnValue('5.15.0-microsoft-standard-WSL2'); + os.homedir.mockReturnValue('/home/testuser'); + fs.existsSync.mockImplementation((p) => { + if (p === '/usr/bin/apt') return true; + return false; + }); + fs.readFileSync.mockImplementation((p) => { + if (p === '/proc/version') return 'Linux version 5.15.0-microsoft-standard-WSL2'; + if (p === '/etc/os-release') return 'NAME="Ubuntu"\nVERSION="22.04"'; + return ''; + }); + + // When + const result = detectOS(); + + // Then + expect(result.type).toBe(OS_TYPE.WSL); + expect(result.isWSL).toBe(true); + + // Cleanup + Object.defineProperty(process, 'platform', originalPlatform); + }); 
+ }); + + describe('Windows detection', () => { + it('should detect Windows correctly', () => { + // Given + const originalPlatform = Object.getOwnPropertyDescriptor(process, 'platform'); + Object.defineProperty(process, 'platform', { value: 'win32', configurable: true }); + process.env.COMSPEC = 'C:\\Windows\\System32\\cmd.exe'; + os.release.mockReturnValue('10.0.22621'); + os.homedir.mockReturnValue('C:\\Users\\testuser'); + + // When + const result = detectOS(); + + // Then + expect(result.type).toBe(OS_TYPE.WINDOWS); + expect(result.packageManager).toBe('winget'); + expect(result.pathSeparator).toBe('\\'); + expect(result.installInstructions.node).toContain('winget'); + expect(result.notes).toBeDefined(); + + // Cleanup + Object.defineProperty(process, 'platform', originalPlatform); + }); + }); + }); + + describe('isWSL', () => { + let originalPlatform; + + beforeEach(() => { + originalPlatform = Object.getOwnPropertyDescriptor(process, 'platform'); + Object.defineProperty(process, 'platform', { value: 'linux', configurable: true }); + }); + + afterEach(() => { + // Restore the real platform so the override does not leak into later suites + Object.defineProperty(process, 'platform', originalPlatform); + }); + + it('should return true when WSL_DISTRO_NAME is set', () => { + process.env.WSL_DISTRO_NAME = 'Ubuntu'; + expect(isWSL()).toBe(true); + }); + + it('should return true when /proc/version contains microsoft', () => { + fs.readFileSync.mockReturnValue('Linux version 5.15.0-microsoft-standard-WSL2'); + expect(isWSL()).toBe(true); + }); + + it('should return false on native Linux', () => { + fs.readFileSync.mockReturnValue('Linux version 5.15.0-generic'); + fs.existsSync.mockReturnValue(false); + expect(isWSL()).toBe(false); + }); + }); + + describe('getWSLDistro', () => { + it('should return distro from environment variable', () => { + process.env.WSL_DISTRO_NAME = 'Debian'; + expect(getWSLDistro()).toBe('Debian'); + }); + + it('should return distro from /etc/os-release', () => { + fs.readFileSync.mockReturnValue('NAME="Ubuntu"\nVERSION="22.04"'); + expect(getWSLDistro()).toBe('Ubuntu'); + }); + + it('should return null when distro cannot be
determined', () => { + fs.readFileSync.mockImplementation(() => { + throw new Error('File not found'); + }); + expect(getWSLDistro()).toBeNull(); + }); + }); + + describe('detectLinuxPackageManager', () => { + it('should detect apt', () => { + fs.existsSync.mockImplementation((p) => p === '/usr/bin/apt'); + expect(detectLinuxPackageManager()).toBe('apt'); + }); + + it('should detect dnf', () => { + fs.existsSync.mockImplementation((p) => p === '/usr/bin/dnf'); + expect(detectLinuxPackageManager()).toBe('dnf'); + }); + + it('should detect pacman', () => { + fs.existsSync.mockImplementation((p) => p === '/usr/bin/pacman'); + expect(detectLinuxPackageManager()).toBe('pacman'); + }); + + it('should return unknown when no package manager found', () => { + fs.existsSync.mockReturnValue(false); + expect(detectLinuxPackageManager()).toBe('unknown'); + }); + }); + + describe('getOSDisplayName', () => { + it('should format macOS name', () => { + const osInfo = { type: OS_TYPE.MACOS, release: '23.1.0' }; + expect(getOSDisplayName(osInfo)).toBe('macOS (23.1.0)'); + }); + + it('should format WSL name with distro', () => { + const osInfo = { type: OS_TYPE.WSL, wslDistro: 'Ubuntu', release: '5.15.0' }; + expect(getOSDisplayName(osInfo)).toBe('WSL (Ubuntu)'); + }); + + it('should format Linux name', () => { + const osInfo = { type: OS_TYPE.LINUX, release: '5.15.0' }; + expect(getOSDisplayName(osInfo)).toBe('Linux (5.15.0)'); + }); + + it('should format Windows name', () => { + const osInfo = { type: OS_TYPE.WINDOWS, release: '10.0.22621' }; + expect(getOSDisplayName(osInfo)).toBe('Windows (10.0.22621)'); + }); + + it('should handle unknown OS', () => { + const osInfo = { type: OS_TYPE.UNKNOWN, platform: 'freebsd' }; + expect(getOSDisplayName(osInfo)).toBe('Unknown (freebsd)'); + }); + }); +}); + +``` + +================================================== +📄 tests/packages/aios-install/edge-cases.test.js +================================================== +```js +/** + * Edge Cases 
Tests - Story 12.9 Task 7.7 + * + * Tests for edge cases: + * - No internet connectivity + * - Docker not installed/offline + * - Insufficient permissions + * - Read-only directory + */ + +'use strict'; + +const path = require('path'); +const os = require('os'); + +// Mock modules +jest.mock('fs-extra'); +jest.mock('execa', () => ({ + execa: jest.fn(), + execaSync: jest.fn(), +})); +jest.mock('@clack/prompts', () => ({ + intro: jest.fn(), + outro: jest.fn(), + select: jest.fn(), + confirm: jest.fn(), + note: jest.fn(), + isCancel: jest.fn(() => false), + cancel: jest.fn(), +})); +jest.mock('ora', () => { + return jest.fn(() => ({ + start: jest.fn().mockReturnThis(), + stop: jest.fn().mockReturnThis(), + succeed: jest.fn().mockReturnThis(), + fail: jest.fn().mockReturnThis(), + warn: jest.fn().mockReturnThis(), + info: jest.fn().mockReturnThis(), + })); +}); + +const fs = require('fs-extra'); +const { execa, execaSync } = require('execa'); + +const { + InstallLogger, + createUserConfigDirect, +} = require('../../../packages/aios-install/src/installer'); + +const { + checkAllDependencies, + checkDockerRunning, +} = require('../../../packages/aios-install/src/dep-checker'); + +const { + checkDocker, + ensureDocker, +} = require('../../../packages/aios-install/src/edmcp'); + +describe('Edge Cases - Task 7.7', () => { + beforeEach(() => { + jest.clearAllMocks(); + fs.pathExists.mockResolvedValue(false); + fs.existsSync.mockReturnValue(false); + fs.ensureDir.mockResolvedValue(); + fs.readFile.mockResolvedValue(''); + fs.writeFile.mockResolvedValue(); + }); + + describe('No Internet Connectivity', () => { + const mockOsInfo = { + installInstructions: { + node: 'brew install node@18', + git: 'brew install git', + docker: 'brew install --cask docker', + gh: 'brew install gh', + }, + }; + + it('should handle npm install failure due to network error', async () => { + // Given + const networkError = new Error('getaddrinfo ENOTFOUND registry.npmjs.org'); + networkError.code = 
'ENOTFOUND'; + execa.mockRejectedValue(networkError); + + // When/Then + await expect(async () => { + await execa('npm', ['install', 'aios-core']); + }).rejects.toThrow('ENOTFOUND'); + }); + + it('should handle git clone failure due to network error', async () => { + // Given + const networkError = new Error('Could not resolve host: github.com'); + networkError.code = 'ENETUNREACH'; + execa.mockRejectedValue(networkError); + + // When/Then + await expect(async () => { + await execa('git', ['clone', 'https://github.com/user/repo']); + }).rejects.toThrow('Could not resolve host'); + }); + + it('should timeout on slow network connections', async () => { + // Given + const timeoutError = new Error('Timeout'); + timeoutError.timedOut = true; + execa.mockRejectedValue(timeoutError); + + // When/Then + await expect(async () => { + await execa('npm', ['install'], { timeout: 1000 }); + }).rejects.toThrow('Timeout'); + }); + + it('should handle DNS resolution failure', async () => { + // Given + const dnsError = new Error('getaddrinfo EAI_AGAIN github.com'); + dnsError.code = 'EAI_AGAIN'; + execa.mockRejectedValue(dnsError); + + // When/Then + await expect(async () => { + await execa('git', ['clone', 'https://github.com/user/repo']); + }).rejects.toThrow('EAI_AGAIN'); + }); + }); + + describe('Docker Not Installed / Offline', () => { + it('should handle Docker command not found', async () => { + // Given + const notFoundError = new Error('spawn docker ENOENT'); + notFoundError.code = 'ENOENT'; + execa.mockRejectedValue(notFoundError); + + // When + const result = await checkDocker(); + + // Then + expect(result.installed).toBe(false); + expect(result.running).toBe(false); + }); + + it('should handle Docker daemon not running', async () => { + // Given + execa.mockImplementation(async (cmd, args) => { + if (args && args[0] === '--version') { + return { stdout: 'Docker version 24.0.6' }; + } + if (args && args[0] === 'info') { + const error = new Error('Cannot connect to the 
Docker daemon'); + error.exitCode = 1; + throw error; + } + return { stdout: '', exitCode: 1 }; + }); + + // When + const result = await checkDocker(); + + // Then + expect(result.installed).toBe(true); + expect(result.running).toBe(false); + }); + + it('should throw descriptive error when Docker is required but not installed', async () => { + // Given + execa.mockRejectedValue(new Error('spawn docker ENOENT')); + + // When/Then + await expect(ensureDocker()).rejects.toThrow('Docker is not installed'); + }); + + it('should throw descriptive error when Docker daemon is stopped', async () => { + // Given + execa.mockImplementation(async (cmd, args) => { + if (args && args[0] === '--version') { + return { stdout: 'Docker version 24.0.6' }; + } + // docker info fails when daemon is not running + return { stdout: '', exitCode: 1 }; + }); + + // When/Then + await expect(ensureDocker()).rejects.toThrow('Docker daemon is not running'); + }); + + it('should report Docker as optional dependency in dep-checker', () => { + // Given + execaSync.mockImplementation((cmd) => { + if (cmd === 'node') return { stdout: 'v20.10.0' }; + if (cmd === 'git') return { stdout: 'git version 2.42.0' }; + if (cmd === 'docker') throw new Error('Command not found'); + if (cmd === 'gh') return { stdout: 'gh version 2.40.0' }; + return { stdout: '' }; + }); + + const mockOsInfo = { + installInstructions: { + node: 'brew install node@18', + git: 'brew install git', + docker: 'brew install --cask docker', + gh: 'brew install gh', + }, + }; + + // When + const result = checkAllDependencies(mockOsInfo); + + // Then + expect(result.passed).toBe(true); // Should still pass + expect(result.hasWarnings).toBe(true); + expect(result.warnings.some(w => w.name === 'Docker')).toBe(true); + }); + }); + + describe('Insufficient Permissions', () => { + it('should handle EACCES error when creating user config directory', async () => { + // Given + const permissionError = new Error('EACCES: permission denied'); + 
permissionError.code = 'EACCES'; + fs.ensureDir.mockRejectedValue(permissionError); + + const mockLogger = { + action: jest.fn(), + success: jest.fn(), + error: jest.fn(), + }; + + // When/Then + await expect( + createUserConfigDirect('bob', mockLogger, false), + ).rejects.toThrow('EACCES'); + }); + + it('should handle EPERM error when writing config file', async () => { + // Given + const permissionError = new Error('EPERM: operation not permitted'); + permissionError.code = 'EPERM'; + fs.ensureDir.mockResolvedValue(); + fs.pathExists.mockResolvedValue(false); + fs.writeFile.mockRejectedValue(permissionError); + + const mockLogger = { + action: jest.fn(), + success: jest.fn(), + error: jest.fn(), + }; + + // When/Then + await expect( + createUserConfigDirect('bob', mockLogger, false), + ).rejects.toThrow('EPERM'); + }); + + it('should handle permission denied during npm install', async () => { + // Given + const permissionError = new Error('EACCES: permission denied, mkdir /usr/local/lib'); + permissionError.code = 'EACCES'; + execa.mockRejectedValue(permissionError); + + // When/Then + await expect(async () => { + await execa('npm', ['install', '-g', 'aios-core']); + }).rejects.toThrow('EACCES'); + }); + + it('should handle sudo requirement for global installs', async () => { + // Given + const sudoError = new Error('Please try running this command again as root/Administrator'); + execa.mockRejectedValue(sudoError); + + // When/Then + await expect(async () => { + await execa('npm', ['install', '-g', 'aios-core']); + }).rejects.toThrow('root/Administrator'); + }); + }); + + describe('Read-Only Directory', () => { + it('should handle EROFS error in read-only filesystem', async () => { + // Given + const readOnlyError = new Error('EROFS: read-only file system'); + readOnlyError.code = 'EROFS'; + fs.ensureDir.mockRejectedValue(readOnlyError); + + const mockLogger = { + action: jest.fn(), + success: jest.fn(), + error: jest.fn(), + }; + + // When/Then + await expect( 
+ createUserConfigDirect('bob', mockLogger, false), + ).rejects.toThrow('EROFS'); + }); + + it('should handle read-only npm cache directory', async () => { + // Given + const readOnlyError = new Error('EROFS: read-only file system, mkdir ~/.npm'); + readOnlyError.code = 'EROFS'; + execa.mockRejectedValue(readOnlyError); + + // When/Then + await expect(async () => { + await execa('npm', ['install', 'aios-core']); + }).rejects.toThrow('EROFS'); + }); + + it('should handle insufficient disk space', async () => { + // Given + const noSpaceError = new Error('ENOSPC: no space left on device'); + noSpaceError.code = 'ENOSPC'; + fs.writeFile.mockRejectedValue(noSpaceError); + fs.ensureDir.mockResolvedValue(); + fs.pathExists.mockResolvedValue(false); + + const mockLogger = { + action: jest.fn(), + success: jest.fn(), + error: jest.fn(), + }; + + // When/Then + await expect( + createUserConfigDirect('bob', mockLogger, false), + ).rejects.toThrow('ENOSPC'); + }); + + it('should not make filesystem changes in dry-run mode on read-only system', async () => { + // Given + const mockLogger = { + action: jest.fn(), + success: jest.fn(), + }; + + // When + await createUserConfigDirect('bob', mockLogger, true); + + // Then + expect(mockLogger.action).toHaveBeenCalled(); + expect(fs.ensureDir).not.toHaveBeenCalled(); + expect(fs.writeFile).not.toHaveBeenCalled(); + }); + }); + + describe('Process/Signal Handling', () => { + it('should handle SIGTERM during installation', async () => { + // Given + const sigTermError = new Error('Process terminated'); + sigTermError.signal = 'SIGTERM'; + execa.mockRejectedValue(sigTermError); + + // When/Then + await expect(async () => { + await execa('npm', ['install', 'aios-core']); + }).rejects.toThrow('Process terminated'); + }); + + it('should handle SIGINT (Ctrl+C) gracefully', async () => { + // Given + const sigIntError = new Error('User cancelled'); + sigIntError.signal = 'SIGINT'; + sigIntError.isCanceled = true; + 
execa.mockRejectedValue(sigIntError); + + // When/Then + await expect(async () => { + await execa('npm', ['install', 'aios-core']); + }).rejects.toThrow('User cancelled'); + }); + }); + + describe('Concurrent Access', () => { + it('should handle EBUSY error when file is locked', async () => { + // Given + const busyError = new Error('EBUSY: resource busy or locked'); + busyError.code = 'EBUSY'; + fs.writeFile.mockRejectedValue(busyError); + fs.ensureDir.mockResolvedValue(); + fs.pathExists.mockResolvedValue(false); + + const mockLogger = { + action: jest.fn(), + success: jest.fn(), + }; + + // When/Then + await expect( + createUserConfigDirect('bob', mockLogger, false), + ).rejects.toThrow('EBUSY'); + }); + + it('should handle EEXIST when directory already exists during race condition', async () => { + // Given - first call succeeds, simulating race condition already handled + fs.ensureDir.mockResolvedValue(); + fs.pathExists.mockResolvedValue(true); + fs.readFile.mockResolvedValue('user_profile: advanced\n'); + fs.writeFile.mockResolvedValue(); + + const mockLogger = { + action: jest.fn(), + success: jest.fn(), + }; + + // When + await createUserConfigDirect('bob', mockLogger, false); + + // Then - should succeed by updating existing config + expect(fs.writeFile).toHaveBeenCalled(); + expect(mockLogger.success).toHaveBeenCalled(); + }); + }); +}); + +``` + +================================================== +📄 tests/packages/aios-install/installer.test.js +================================================== +```js +/** + * Installer Tests + * + * Tests for the main installer logic including profile selection, + * brownfield detection, and dry-run mode. 
+ */ + +'use strict'; + +const path = require('path'); +const os = require('os'); +const fs = require('fs-extra'); + +// Mock modules +jest.mock('fs-extra'); +jest.mock('execa', () => ({ + execa: jest.fn(), +})); +jest.mock('@clack/prompts', () => ({ + intro: jest.fn(), + outro: jest.fn(), + select: jest.fn(), + confirm: jest.fn(), + note: jest.fn(), + isCancel: jest.fn(() => false), + cancel: jest.fn(), +})); +jest.mock('ora', () => { + return jest.fn(() => ({ + start: jest.fn().mockReturnThis(), + stop: jest.fn().mockReturnThis(), + succeed: jest.fn().mockReturnThis(), + fail: jest.fn().mockReturnThis(), + warn: jest.fn().mockReturnThis(), + info: jest.fn().mockReturnThis(), + })); +}); + +const { execa } = require('execa'); +const { select, confirm } = require('@clack/prompts'); + +const { + InstallTimer, + InstallLogger, + detectBrownfield, + tryLoadConfigResolver, + createUserConfigDirect, +} = require('../../../packages/aios-install/src/installer'); + +describe('installer', () => { + beforeEach(() => { + jest.clearAllMocks(); + fs.pathExists.mockResolvedValue(false); + fs.existsSync.mockReturnValue(false); + fs.ensureDir.mockResolvedValue(); + fs.readFile.mockResolvedValue(''); + fs.writeFile.mockResolvedValue(); + }); + + describe('InstallTimer', () => { + it('should track elapsed time', () => { + // Given + const timer = new InstallTimer(); + + // When + const elapsed = timer.elapsed(); + + // Then + expect(elapsed).toBeGreaterThanOrEqual(0); + expect(elapsed).toBeLessThan(1); // Should be nearly instant + }); + + it('should format elapsed time correctly', () => { + // Given + const timer = new InstallTimer(); + + // When + const formatted = timer.elapsedFormatted(); + + // Then + expect(formatted).toMatch(/^\d+s$/); + }); + + it('should detect timeout', () => { + // Given + const timer = new InstallTimer(); + + // When + const isTimeout = timer.checkTimeout(0); // 0 seconds max + + // Then + // Even instant check might be > 0ms + expect(typeof 
isTimeout).toBe('boolean'); + }); + }); + + describe('InstallLogger', () => { + let consoleSpy; + + beforeEach(() => { + consoleSpy = jest.spyOn(console, 'log').mockImplementation(); + }); + + afterEach(() => { + consoleSpy.mockRestore(); + }); + + it('should log info messages', () => { + // Given + const logger = new InstallLogger(); + + // When + logger.info('Test message'); + + // Then + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Test message')); + }); + + it('should prefix dry-run messages', () => { + // Given + const logger = new InstallLogger({ dryRun: true }); + + // When + logger.info('Test message'); + + // Then + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('[DRY-RUN]')); + }); + + it('should log debug messages only when verbose', () => { + // Given + const loggerQuiet = new InstallLogger({ verbose: false }); + const loggerVerbose = new InstallLogger({ verbose: true }); + + // When + loggerQuiet.debug('Debug message'); + loggerVerbose.debug('Debug message'); + + // Then + expect(consoleSpy).toHaveBeenCalledTimes(1); // Only verbose logger + }); + + it('should log action messages in dry-run mode', () => { + // Given + const logger = new InstallLogger({ dryRun: true }); + + // When + logger.action('Create file'); + + // Then + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Would:')); + expect(consoleSpy).toHaveBeenCalledWith(expect.stringContaining('Create file')); + }); + }); + + describe('detectBrownfield', () => { + it('should detect greenfield (no existing AIOS)', () => { + // Given + fs.existsSync.mockReturnValue(false); + + // When + const result = detectBrownfield('/test/project'); + + // Then + expect(result.isBrownfield).toBe(false); + expect(result.hasLegacyConfig).toBe(false); + expect(result.hasLayeredConfig).toBe(false); + }); + + it('should detect brownfield with legacy config', () => { + // Given + fs.existsSync.mockImplementation((p) => { + // Only legacy config exists, not 
framework-config + if (p.includes('core-config.yaml')) return true; + if (p.includes('framework-config.yaml')) return false; + if (p.includes('.aios-core') && !p.includes('.yaml')) return true; + return false; + }); + + // When + const result = detectBrownfield('/test/project'); + + // Then + expect(result.isBrownfield).toBe(true); + expect(result.hasLegacyConfig).toBe(true); + expect(result.hasLayeredConfig).toBe(false); + }); + + it('should detect brownfield with layered config', () => { + // Given + fs.existsSync.mockImplementation((p) => { + if (p.includes('framework-config.yaml')) return true; + if (p.includes('.aios-core')) return true; + return false; + }); + + // When + const result = detectBrownfield('/test/project'); + + // Then + expect(result.isBrownfield).toBe(true); + expect(result.hasLayeredConfig).toBe(true); + }); + }); + + describe('createUserConfigDirect', () => { + let mockLogger; + + beforeEach(() => { + mockLogger = { + action: jest.fn(), + success: jest.fn(), + }; + }); + + it('should create user config with bob profile', async () => { + // Given + const profile = 'bob'; + fs.pathExists.mockResolvedValue(false); + + // When + await createUserConfigDirect(profile, mockLogger, false); + + // Then + expect(fs.ensureDir).toHaveBeenCalled(); + expect(fs.writeFile).toHaveBeenCalledWith( + expect.stringContaining('user-config.yaml'), + expect.stringContaining('user_profile: bob'), + 'utf8', + ); + }); + + it('should create user config with advanced profile', async () => { + // Given + const profile = 'advanced'; + fs.pathExists.mockResolvedValue(false); + + // When + await createUserConfigDirect(profile, mockLogger, false); + + // Then + expect(fs.writeFile).toHaveBeenCalledWith( + expect.stringContaining('user-config.yaml'), + expect.stringContaining('user_profile: advanced'), + 'utf8', + ); + }); + + it('should only log actions in dry-run mode', async () => { + // Given + const profile = 'bob'; + + // When + await createUserConfigDirect(profile, 
mockLogger, true);
+
+ // Then
+ expect(mockLogger.action).toHaveBeenCalled();
+ expect(fs.writeFile).not.toHaveBeenCalled();
+ });
+
+ it('should preserve existing config and update profile', async () => {
+ // Given
+ const profile = 'advanced';
+ fs.pathExists.mockResolvedValue(true);
+ fs.readFile.mockResolvedValue('existing_key: existing_value\n');
+
+ // When
+ await createUserConfigDirect(profile, mockLogger, false);
+
+ // Then
+ const writeCall = fs.writeFile.mock.calls[0];
+ expect(writeCall[1]).toContain('user_profile: advanced');
+ expect(writeCall[1]).toContain('existing_key: existing_value');
+ });
+ });
+
+ describe('tryLoadConfigResolver', () => {
+ it('should return null when config resolver not found', () => {
+ // Given
+ fs.existsSync.mockReturnValue(false);
+
+ // When
+ const result = tryLoadConfigResolver('/test/project');
+
+ // Then
+ expect(result).toBeNull();
+ });
+ });
+});
+
+```
+
+==================================================
+📄 tests/packages/aios-install/dep-checker.test.js
+==================================================
+```js
+/**
+ * Dependency Checker Tests
+ *
+ * Tests for system dependency verification including Node.js, Git, Docker, and GitHub CLI.
+ */ + +'use strict'; + +const { execaSync } = require('execa'); + +// Mock execa +jest.mock('execa', () => ({ + execaSync: jest.fn(), +})); + +const { + REQUIREMENT, + DEPENDENCIES, + checkDependency, + checkDockerRunning, + checkAllDependencies, + formatDependencyStatus, +} = require('../../../packages/aios-install/src/dep-checker'); + +describe('dep-checker', () => { + beforeEach(() => { + jest.clearAllMocks(); + }); + + describe('checkDependency', () => { + const mockOsInfo = { + installInstructions: { + node: 'brew install node@18', + git: 'brew install git', + docker: 'brew install --cask docker', + gh: 'brew install gh', + }, + }; + + describe('Node.js dependency', () => { + it('should detect Node.js when installed and meets version requirement', () => { + // Given + execaSync.mockReturnValue({ stdout: 'v20.10.0' }); + + // When + const result = checkDependency(DEPENDENCIES.node, mockOsInfo); + + // Then + expect(result.installed).toBe(true); + expect(result.version).toBe('20.10.0'); + expect(result.meetsMinVersion).toBe(true); + expect(result.requirement).toBe(REQUIREMENT.REQUIRED); + }); + + it('should fail when Node.js version is below minimum', () => { + // Given + execaSync.mockReturnValue({ stdout: 'v16.20.0' }); + + // When + const result = checkDependency(DEPENDENCIES.node, mockOsInfo); + + // Then + expect(result.installed).toBe(true); + expect(result.version).toBe('16.20.0'); + expect(result.meetsMinVersion).toBe(false); + }); + + it('should report not installed when command fails', () => { + // Given + execaSync.mockImplementation(() => { + throw new Error('Command not found'); + }); + + // When + const result = checkDependency(DEPENDENCIES.node, mockOsInfo); + + // Then + expect(result.installed).toBe(false); + expect(result.error).toBe('Command not found'); + expect(result.instruction).toBe('brew install node@18'); + }); + }); + + describe('Git dependency', () => { + it('should detect Git when installed', () => { + // Given + 
execaSync.mockReturnValue({ stdout: 'git version 2.42.0' }); + + // When + const result = checkDependency(DEPENDENCIES.git, mockOsInfo); + + // Then + expect(result.installed).toBe(true); + expect(result.version).toBe('2.42.0'); + expect(result.meetsMinVersion).toBe(true); + }); + + it('should parse Git version correctly', () => { + // Given + execaSync.mockReturnValue({ stdout: 'git version 2.30.1 (Apple Git-130)' }); + + // When + const result = checkDependency(DEPENDENCIES.git, mockOsInfo); + + // Then + expect(result.version).toBe('2.30.1'); + }); + }); + + describe('Docker dependency (optional)', () => { + it('should detect Docker when installed', () => { + // Given + execaSync.mockReturnValue({ stdout: 'Docker version 24.0.6, build ed223bc' }); + + // When + const result = checkDependency(DEPENDENCIES.docker, mockOsInfo); + + // Then + expect(result.installed).toBe(true); + expect(result.version).toBe('24.0.6'); + expect(result.requirement).toBe(REQUIREMENT.OPTIONAL); + }); + + it('should not fail when Docker is not installed (optional)', () => { + // Given + execaSync.mockImplementation(() => { + throw new Error('Command not found'); + }); + + // When + const result = checkDependency(DEPENDENCIES.docker, mockOsInfo); + + // Then + expect(result.installed).toBe(false); + expect(result.requirement).toBe(REQUIREMENT.OPTIONAL); + }); + }); + + describe('GitHub CLI dependency (optional)', () => { + it('should detect gh when installed', () => { + // Given + execaSync.mockReturnValue({ stdout: 'gh version 2.40.0 (2023-12-01)' }); + + // When + const result = checkDependency(DEPENDENCIES.gh, mockOsInfo); + + // Then + expect(result.installed).toBe(true); + expect(result.version).toBe('2.40.0'); + expect(result.requirement).toBe(REQUIREMENT.OPTIONAL); + }); + }); + }); + + describe('checkDockerRunning', () => { + it('should return running=true when Docker daemon is active', () => { + // Given + execaSync.mockReturnValue({ stdout: 'Server: Docker Desktop', exitCode: 0 
}); + + // When + const result = checkDockerRunning(); + + // Then + expect(result.running).toBe(true); + }); + + it('should return running=false when Docker daemon is not active', () => { + // Given + execaSync.mockReturnValue({ stdout: '', exitCode: 1 }); + + // When + const result = checkDockerRunning(); + + // Then + expect(result.running).toBe(false); + }); + + it('should return running=false when Docker check throws', () => { + // Given + execaSync.mockImplementation(() => { + throw new Error('Cannot connect to Docker daemon'); + }); + + // When + const result = checkDockerRunning(); + + // Then + expect(result.running).toBe(false); + }); + }); + + describe('checkAllDependencies', () => { + const mockOsInfo = { + installInstructions: { + node: 'brew install node@18', + git: 'brew install git', + docker: 'brew install --cask docker', + gh: 'brew install gh', + }, + }; + + it('should pass when all required dependencies are installed', () => { + // Given + execaSync.mockImplementation((cmd) => { + if (cmd === 'node') return { stdout: 'v20.10.0' }; + if (cmd === 'git') return { stdout: 'git version 2.42.0' }; + if (cmd === 'docker') return { stdout: 'Docker version 24.0.6', exitCode: 0 }; + if (cmd === 'gh') return { stdout: 'gh version 2.40.0' }; + return { stdout: '' }; + }); + + // When + const result = checkAllDependencies(mockOsInfo); + + // Then + expect(result.passed).toBe(true); + expect(result.missing).toHaveLength(0); + expect(result.required.every(d => d.installed)).toBe(true); + }); + + it('should fail when Node.js is not installed', () => { + // Given + execaSync.mockImplementation((cmd) => { + if (cmd === 'node') throw new Error('Command not found'); + if (cmd === 'git') return { stdout: 'git version 2.42.0' }; + if (cmd === 'docker') return { stdout: 'Docker version 24.0.6', exitCode: 0 }; + if (cmd === 'gh') return { stdout: 'gh version 2.40.0' }; + return { stdout: '' }; + }); + + // When + const result = checkAllDependencies(mockOsInfo); + + // 
Then + expect(result.passed).toBe(false); + expect(result.missing.some(m => m.command === 'node')).toBe(true); + }); + + it('should fail when Node.js version is below minimum', () => { + // Given + execaSync.mockImplementation((cmd) => { + if (cmd === 'node') return { stdout: 'v16.0.0' }; + if (cmd === 'git') return { stdout: 'git version 2.42.0' }; + if (cmd === 'docker') return { stdout: 'Docker version 24.0.6', exitCode: 0 }; + if (cmd === 'gh') return { stdout: 'gh version 2.40.0' }; + return { stdout: '' }; + }); + + // When + const result = checkAllDependencies(mockOsInfo); + + // Then + expect(result.passed).toBe(false); + expect(result.missing.some(m => m.command === 'node' && m.reason.includes('below minimum'))).toBe(true); + }); + + it('should warn but pass when optional dependencies are missing', () => { + // Given + execaSync.mockImplementation((cmd) => { + if (cmd === 'node') return { stdout: 'v20.10.0' }; + if (cmd === 'git') return { stdout: 'git version 2.42.0' }; + if (cmd === 'docker') throw new Error('Command not found'); + if (cmd === 'gh') throw new Error('Command not found'); + return { stdout: '' }; + }); + + // When + const result = checkAllDependencies(mockOsInfo); + + // Then + expect(result.passed).toBe(true); + expect(result.hasWarnings).toBe(true); + expect(result.warnings.length).toBeGreaterThan(0); + }); + }); + + describe('formatDependencyStatus', () => { + it('should format installed dependency with green checkmark', () => { + // Given + const check = { name: 'Node.js', installed: true, version: '20.10.0', meetsMinVersion: true }; + + // When + const result = formatDependencyStatus(check); + + // Then + expect(result).toContain('Node.js'); + expect(result).toContain('v20.10.0'); + }); + + it('should format not installed dependency with red X', () => { + // Given + const check = { name: 'Docker', installed: false }; + + // When + const result = formatDependencyStatus(check); + + // Then + expect(result).toContain('Docker'); + 
expect(result).toContain('Not installed'); + }); + + it('should format dependency below minimum version with warning', () => { + // Given + const check = { name: 'Node.js', installed: true, version: '16.0.0', meetsMinVersion: false, minVersion: '18.0.0' }; + + // When + const result = formatDependencyStatus(check); + + // Then + expect(result).toContain('Node.js'); + expect(result).toContain('v16.0.0'); + expect(result).toContain('18.0.0'); + }); + }); +}); + +``` + +================================================== +📄 tests/packages/aios-install/integration.test.js +================================================== +```js +/** + * Integration Tests - Story 12.9 Task 8.3 + * + * Tests for local npx execution and package integrity. + * These tests verify the package works correctly when installed via npx. + */ + +'use strict'; + +const path = require('path'); +const fs = require('fs'); +const { execSync } = require('child_process'); + +const PKG_DIR = path.resolve(__dirname, '../../../packages/aios-install'); + +describe('Integration - Task 8.3: Local NPX Execution', () => { + const runNpxIntegration = process.env.RUN_NPX_INTEGRATION === '1'; + describe('Package Structure Validation', () => { + it('should have valid package.json', () => { + // Given + const pkgJsonPath = path.join(PKG_DIR, 'package.json'); + + // When + const pkgJson = JSON.parse(fs.readFileSync(pkgJsonPath, 'utf8')); + + // Then + expect(pkgJson.name).toBe('@synkra/aios-install'); + expect(pkgJson.bin).toBeDefined(); + expect(pkgJson.bin['aios-install']).toBe('./bin/aios-install.js'); + expect(pkgJson.bin['edmcp']).toBe('./bin/edmcp.js'); + }); + + it('should have all required files in package', () => { + // Given + const requiredFiles = [ + 'package.json', + 'README.md', + 'bin/aios-install.js', + 'bin/edmcp.js', + 'src/installer.js', + 'src/os-detector.js', + 'src/dep-checker.js', + 'src/edmcp/index.js', + ]; + + // When/Then + for (const file of requiredFiles) { + const filePath = 
path.join(PKG_DIR, file); + expect(fs.existsSync(filePath)).toBe(true); + } + }); + + it('should have executable shebang in bin files', () => { + // Given + const binFiles = ['bin/aios-install.js', 'bin/edmcp.js']; + const expectedShebang = '#!/usr/bin/env node'; + + // When/Then + for (const file of binFiles) { + const content = fs.readFileSync(path.join(PKG_DIR, file), 'utf8'); + expect(content.startsWith(expectedShebang)).toBe(true); + } + }); + + it('should have correct Node.js engine requirement', () => { + // Given + const pkgJsonPath = path.join(PKG_DIR, 'package.json'); + + // When + const pkgJson = JSON.parse(fs.readFileSync(pkgJsonPath, 'utf8')); + + // Then + expect(pkgJson.engines).toBeDefined(); + expect(pkgJson.engines.node).toBe('>=18.0.0'); + }); + + it('should have all required dependencies declared', () => { + // Given + const pkgJsonPath = path.join(PKG_DIR, 'package.json'); + const requiredDeps = [ + '@clack/prompts', + 'chalk', + 'commander', + 'execa', + 'fs-extra', + 'js-yaml', + 'ora', + 'semver', + ]; + + // When + const pkgJson = JSON.parse(fs.readFileSync(pkgJsonPath, 'utf8')); + + // Then + for (const dep of requiredDeps) { + expect(pkgJson.dependencies[dep]).toBeDefined(); + } + }); + + it('should have publishConfig for public access', () => { + // Given + const pkgJsonPath = path.join(PKG_DIR, 'package.json'); + + // When + const pkgJson = JSON.parse(fs.readFileSync(pkgJsonPath, 'utf8')); + + // Then + expect(pkgJson.publishConfig).toBeDefined(); + expect(pkgJson.publishConfig.access).toBe('public'); + }); + }); + + describe('CLI Execution Tests', () => { + it('should execute aios-install --version via node', () => { + // Given + const binPath = path.join(PKG_DIR, 'bin/aios-install.js'); + + // When + const result = execSync(`node "${binPath}" --version`, { + encoding: 'utf8', + timeout: 10000, + }).trim(); + + // Then + expect(result).toMatch(/^\d+\.\d+\.\d+$/); + }); + + it('should execute edmcp --version via node', () => { + // 
Given + const binPath = path.join(PKG_DIR, 'bin/edmcp.js'); + + // When + const result = execSync(`node "${binPath}" --version`, { + encoding: 'utf8', + timeout: 10000, + }).trim(); + + // Then + expect(result).toMatch(/^\d+\.\d+\.\d+$/); + }); + + it('should show help for aios-install', () => { + // Given + const binPath = path.join(PKG_DIR, 'bin/aios-install.js'); + + // When + const result = execSync(`node "${binPath}" --help`, { + encoding: 'utf8', + timeout: 10000, + }); + + // Then + expect(result).toContain('aios-install'); + expect(result).toContain('--dry-run'); + expect(result).toContain('--verbose'); + expect(result).toContain('--profile'); + }); + + it('should show help for edmcp', () => { + // Given + const binPath = path.join(PKG_DIR, 'bin/edmcp.js'); + + // When + const result = execSync(`node "${binPath}" --help`, { + encoding: 'utf8', + timeout: 10000, + }); + + // Then + expect(result).toContain('edmcp'); + expect(result).toContain('list'); + expect(result).toContain('add'); + expect(result).toContain('remove'); + }); + + it('should handle invalid arguments gracefully', () => { + // Given + const binPath = path.join(PKG_DIR, 'bin/aios-install.js'); + + // When + const result = execSync(`node "${binPath}" --invalid-flag 2>&1 || true`, { + encoding: 'utf8', + timeout: 10000, + }); + + // Then + expect(result).toContain('error'); + }); + }); + + describe('Module Loading Tests', () => { + it('should be able to require installer module', () => { + // Given/When + const installer = require(path.join(PKG_DIR, 'src/installer.js')); + + // Then + expect(installer.runInstaller).toBeDefined(); + expect(typeof installer.runInstaller).toBe('function'); + }); + + it('should be able to require os-detector module', () => { + // Given/When + const osDetector = require(path.join(PKG_DIR, 'src/os-detector.js')); + + // Then + expect(osDetector.detectOS).toBeDefined(); + expect(typeof osDetector.detectOS).toBe('function'); + }); + + it('should be able to require 
dep-checker module', () => { + // Given/When + const depChecker = require(path.join(PKG_DIR, 'src/dep-checker.js')); + + // Then + expect(depChecker.checkAllDependencies).toBeDefined(); + expect(typeof depChecker.checkAllDependencies).toBe('function'); + }); + + it('should be able to require edmcp module', () => { + // Given/When + const edmcp = require(path.join(PKG_DIR, 'src/edmcp/index.js')); + + // Then + expect(edmcp.listMcps).toBeDefined(); + expect(edmcp.addMcp).toBeDefined(); + expect(edmcp.removeMcp).toBeDefined(); + }); + }); + + (runNpxIntegration ? describe : describe.skip)('NPX Local Execution Simulation', () => { + it('should execute via npm exec (simulates npx .)', () => { + // Given - We simulate npx . by running npm exec in the package directory + + // When + const result = execSync('npm exec -- aios-install --version', { + cwd: PKG_DIR, + encoding: 'utf8', + timeout: 90000, + env: { ...process.env, npm_config_yes: 'true' }, + }).trim(); + + // Then + expect(result).toMatch(/\d+\.\d+\.\d+/); + }); + + it('should execute edmcp via npm exec', () => { + // Given + + // When + const result = execSync('npm exec -- edmcp --version', { + cwd: PKG_DIR, + encoding: 'utf8', + timeout: 90000, + env: { ...process.env, npm_config_yes: 'true' }, + }).trim(); + + // Then + expect(result).toMatch(/\d+\.\d+\.\d+/); + }); + }); + + describe('Version Consistency', () => { + it('should have consistent version across package.json and CLI output', () => { + // Given + const pkgJsonPath = path.join(PKG_DIR, 'package.json'); + const pkgJson = JSON.parse(fs.readFileSync(pkgJsonPath, 'utf8')); + const binPath = path.join(PKG_DIR, 'bin/aios-install.js'); + + // When + const cliVersion = execSync(`node "${binPath}" --version`, { + encoding: 'utf8', + timeout: 10000, + }).trim(); + + // Then + expect(cliVersion).toBe(pkgJson.version); + }); + }); +}); + +``` + +================================================== +📄 tests/ide-sync/validator.test.js 
+================================================== +```js +/** + * Unit tests for validator.js + * @story 6.19 - IDE Command Auto-Sync System + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); +const { + hashContent, + fileExists, + readFileIfExists, + validateIdeSync, + validateAllIdes, + formatValidationReport, +} = require('../../.aios-core/infrastructure/scripts/ide-sync/validator'); + +describe('validator', () => { + let tempDir; + let testCounter = 0; + + beforeAll(() => { + tempDir = path.join(os.tmpdir(), 'validator-test-' + Date.now()); + fs.ensureDirSync(tempDir); + }); + + afterAll(() => { + fs.removeSync(tempDir); + }); + + describe('hashContent', () => { + it('should produce consistent hash for same content', () => { + const content = 'test content'; + expect(hashContent(content)).toBe(hashContent(content)); + }); + + it('should produce different hash for different content', () => { + expect(hashContent('content A')).not.toBe(hashContent('content B')); + }); + + it('should normalize line endings', () => { + const lf = 'line1\nline2\n'; + const crlf = 'line1\r\nline2\r\n'; + expect(hashContent(lf)).toBe(hashContent(crlf)); + }); + + it('should normalize standalone CR', () => { + const lf = 'line1\nline2\n'; + const cr = 'line1\rline2\r'; + expect(hashContent(lf)).toBe(hashContent(cr)); + }); + + it('should return 64 character hex string', () => { + const hash = hashContent('test'); + expect(hash).toMatch(/^[a-f0-9]{64}$/); + }); + }); + + describe('fileExists', () => { + it('should return true for existing file', () => { + const testFile = path.join(tempDir, 'exists.txt'); + fs.writeFileSync(testFile, 'content'); + expect(fileExists(testFile)).toBe(true); + }); + + it('should return false for non-existent file', () => { + expect(fileExists(path.join(tempDir, 'nonexistent.txt'))).toBe(false); + }); + }); + + describe('readFileIfExists', () => { + it('should read existing file', () => { + const testFile = 
path.join(tempDir, 'readable.txt'); + fs.writeFileSync(testFile, 'file content'); + expect(readFileIfExists(testFile)).toBe('file content'); + }); + + it('should return null for non-existent file', () => { + expect(readFileIfExists(path.join(tempDir, 'missing.txt'))).toBeNull(); + }); + }); + + describe('validateIdeSync', () => { + let targetDir; + + beforeEach(() => { + testCounter++; + targetDir = path.join(tempDir, `target-${Date.now()}-${testCounter}`); + fs.ensureDirSync(targetDir); + }); + + it('should detect synced files', () => { + const content = 'synced content'; + fs.writeFileSync(path.join(targetDir, 'agent.md'), content); + + const expected = [{ filename: 'agent.md', content }]; + const result = validateIdeSync(expected, targetDir, {}); + + expect(result.synced).toHaveLength(1); + expect(result.missing).toHaveLength(0); + expect(result.drift).toHaveLength(0); + }); + + it('should detect missing files', () => { + const expected = [{ filename: 'missing.md', content: 'content' }]; + const result = validateIdeSync(expected, targetDir, {}); + + expect(result.missing).toHaveLength(1); + expect(result.missing[0].filename).toBe('missing.md'); + }); + + it('should detect drift', () => { + fs.writeFileSync(path.join(targetDir, 'agent.md'), 'old content'); + + const expected = [{ filename: 'agent.md', content: 'new content' }]; + const result = validateIdeSync(expected, targetDir, {}); + + expect(result.drift).toHaveLength(1); + expect(result.drift[0].filename).toBe('agent.md'); + }); + + it('should detect orphaned files', () => { + fs.writeFileSync(path.join(targetDir, 'expected.md'), 'content'); + fs.writeFileSync(path.join(targetDir, 'orphan.md'), 'orphan'); + + const expected = [{ filename: 'expected.md', content: 'content' }]; + const result = validateIdeSync(expected, targetDir, {}); + + expect(result.orphaned).toHaveLength(1); + expect(result.orphaned[0].filename).toBe('orphan.md'); + }); + + it('should not count redirect files as orphaned', () => { + 
fs.writeFileSync(path.join(targetDir, 'agent.md'), 'content'); + fs.writeFileSync(path.join(targetDir, 'aios-developer.md'), 'redirect'); + + const expected = [{ filename: 'agent.md', content: 'content' }]; + const redirects = { 'aios-developer': 'aios-master' }; + const result = validateIdeSync(expected, targetDir, redirects); + + expect(result.orphaned).toHaveLength(0); + }); + + it('should return correct totals', () => { + fs.writeFileSync(path.join(targetDir, 'synced.md'), 'synced'); + fs.writeFileSync(path.join(targetDir, 'drift.md'), 'old'); + fs.writeFileSync(path.join(targetDir, 'orphan.md'), 'orphan'); + + const expected = [ + { filename: 'synced.md', content: 'synced' }, + { filename: 'drift.md', content: 'new' }, + { filename: 'missing.md', content: 'missing' }, + ]; + + const result = validateIdeSync(expected, targetDir, {}); + + expect(result.total.expected).toBe(3); + expect(result.total.synced).toBe(1); + expect(result.total.drift).toBe(1); + expect(result.total.missing).toBe(1); + expect(result.total.orphaned).toBe(1); + }); + }); + + describe('validateAllIdes', () => { + it('should validate multiple IDEs', () => { + const cursorDir = path.join(tempDir, 'cursor'); + + fs.ensureDirSync(cursorDir); + + fs.writeFileSync(path.join(cursorDir, 'agent.md'), 'cursor content'); + + const ideConfigs = { + cursor: { + expectedFiles: [{ filename: 'agent.md', content: 'cursor content' }], + targetDir: cursorDir, + }, + }; + + const result = validateAllIdes(ideConfigs, {}); + + expect(result.ides.cursor).toBeDefined(); + expect(result.summary.total).toBe(1); + expect(result.summary.synced).toBe(1); + expect(result.summary.pass).toBe(true); + }); + + it('should fail if any missing or drift', () => { + const testDir = path.join(tempDir, 'fail-test'); + fs.ensureDirSync(testDir); + + const ideConfigs = { + test: { + expectedFiles: [{ filename: 'missing.md', content: 'content' }], + targetDir: testDir, + }, + }; + + const result = validateAllIdes(ideConfigs, {}); + + 
expect(result.summary.pass).toBe(false); + expect(result.summary.missing).toBe(1); + }); + }); + + describe('formatValidationReport', () => { + it('should format passing report', () => { + const results = { + ides: { + cursor: { + targetDir: '/path/to/cursor', + synced: [{ filename: 'agent.md' }], + missing: [], + drift: [], + orphaned: [], + total: { expected: 1, synced: 1, missing: 0, drift: 0, orphaned: 0 }, + }, + }, + summary: { + total: 1, + synced: 1, + missing: 0, + drift: 0, + orphaned: 0, + pass: true, + }, + }; + + const report = formatValidationReport(results); + expect(report).toContain('✅ PASS'); + expect(report).toContain('Synced | 1'); + }); + + it('should format failing report', () => { + const results = { + ides: { + cursor: { + targetDir: '/path/to/cursor', + synced: [], + missing: [{ filename: 'missing.md' }], + drift: [], + orphaned: [], + total: { expected: 1, synced: 0, missing: 1, drift: 0, orphaned: 0 }, + }, + }, + summary: { + total: 1, + synced: 0, + missing: 1, + drift: 0, + orphaned: 0, + pass: false, + }, + }; + + const report = formatValidationReport(results); + expect(report).toContain('❌ FAIL'); + expect(report).toContain('Missing | 1'); + expect(report).toContain('npm run sync:ide'); + }); + + it('should include verbose details when requested', () => { + const results = { + ides: { + cursor: { + targetDir: '/path/to/cursor', + synced: [], + missing: [{ filename: 'missing.md', path: '/full/path' }], + drift: [{ filename: 'drift.md', path: '/full/path' }], + orphaned: [{ filename: 'orphan.md', path: '/full/path' }], + total: { expected: 2, synced: 0, missing: 1, drift: 1, orphaned: 1 }, + }, + }, + summary: { + total: 2, + synced: 0, + missing: 1, + drift: 1, + orphaned: 1, + pass: false, + }, + }; + + const report = formatValidationReport(results, true); + expect(report).toContain('### cursor'); + expect(report).toContain('Missing Files'); + expect(report).toContain('missing.md'); + expect(report).toContain('Drifted Files'); + 
expect(report).toContain('drift.md'); + expect(report).toContain('Orphaned Files'); + expect(report).toContain('orphan.md'); + }); + }); +}); + +``` + +================================================== +📄 tests/ide-sync/gemini-commands.test.js +================================================== +```js +'use strict'; + +const fs = require('fs-extra'); +const os = require('os'); +const path = require('path'); + +const { + commandSlugForAgent, + menuCommandName, + buildAgentDescription, + summarizeWhenToUse, + truncateText, + buildGeminiCommandFiles, + syncGeminiCommands, +} = require('../../.aios-core/infrastructure/scripts/ide-sync/gemini-commands'); + +describe('gemini command sync', () => { + let tmpRoot; + + beforeEach(async () => { + tmpRoot = await fs.mkdtemp(path.join(os.tmpdir(), 'gemini-commands-')); + }); + + afterEach(async () => { + await fs.remove(tmpRoot); + }); + + it('normalizes command slugs and menu names', () => { + expect(commandSlugForAgent('aios-master')).toBe('master'); + expect(commandSlugForAgent('dev')).toBe('dev'); + expect(menuCommandName('aios-master')).toBe('/aios-master'); + expect(menuCommandName('dev')).toBe('/aios-dev'); + }); + + it('builds menu + one command per agent', () => { + const agents = [ + { id: 'dev', error: null, agent: { title: 'Developer', whenToUse: 'Implementar features' } }, + { id: 'architect', error: null, agent: { title: 'Architect', whenToUse: 'Definir arquitetura' } }, + { id: 'qa', error: null, agent: { title: 'QA', whenToUse: 'Validar qualidade' } }, + ]; + const files = buildGeminiCommandFiles(agents); + + expect(files.find((f) => f.filename === 'aios-menu.toml')).toBeDefined(); + expect(files.find((f) => f.filename === 'aios-dev.toml')).toBeDefined(); + expect(files.find((f) => f.filename === 'aios-architect.toml')).toBeDefined(); + expect(files.find((f) => f.filename === 'aios-qa.toml')).toBeDefined(); + expect(files).toHaveLength(4); + }); + + it('derives command description from agent title and 
whenToUse', () => { + const files = buildGeminiCommandFiles([ + { + id: 'dev', + error: null, + agent: { + title: 'Full Stack Developer', + whenToUse: 'Use para implementação e debugging. NOT for planejamento de produto', + }, + }, + ]); + + const dev = files.find((f) => f.filename === 'aios-dev.toml'); + expect(dev.content).toContain('description = "Full Stack Developer (Use para implementação e debugging)"'); + }); + + it('falls back to generic description when metadata is missing', () => { + const files = buildGeminiCommandFiles([{ id: 'dev', error: null, agent: null }]); + const dev = files.find((f) => f.filename === 'aios-dev.toml'); + expect(dev.content).toContain('description = "Ativar agente AIOS dev"'); + }); + + it('buildAgentDescription handles multiline text', () => { + const description = buildAgentDescription({ + id: 'architect', + agent: { + title: 'Architect', + whenToUse: 'Use para arquitetura\ncomplexa em sistemas distribuídos. NOT for gestão de sprint', + }, + }); + expect(description).toBe('Architect (Use para arquitetura complexa em sistemas distribuídos)'); + }); + + it('summarizeWhenToUse truncates very long text', () => { + const longText = 'Use para arquitetura '.concat('muito '.repeat(80)).concat('complexa.'); + const summary = summarizeWhenToUse(longText); + expect(summary.length).toBeLessThanOrEqual(120); + expect(summary.endsWith('…')).toBe(true); + }); + + it('truncateText returns original when short', () => { + expect(truncateText('texto curto', 20)).toBe('texto curto'); + }); + + it('writes command files to .gemini/commands', () => { + const agents = [ + { id: 'dev', error: null, agent: { title: 'Developer', whenToUse: 'Implementar features' } }, + { id: 'qa', error: null, agent: { title: 'QA', whenToUse: 'Validar qualidade' } }, + ]; + const result = syncGeminiCommands(agents, tmpRoot, { dryRun: false }); + + expect(result.files.length).toBe(3); + expect(fs.existsSync(path.join(tmpRoot, '.gemini', 'commands', 
'aios-menu.toml'))).toBe(true); + expect(fs.existsSync(path.join(tmpRoot, '.gemini', 'commands', 'aios-dev.toml'))).toBe(true); + expect(fs.existsSync(path.join(tmpRoot, '.gemini', 'commands', 'aios-qa.toml'))).toBe(true); + }); +}); + +``` + +================================================== +📄 tests/ide-sync/index-validate-filter.test.js +================================================== +```js +'use strict'; + +const fs = require('fs-extra'); +const os = require('os'); +const path = require('path'); + +const { commandValidate } = require('../../.aios-core/infrastructure/scripts/ide-sync/index'); +const { parseAllAgents } = require('../../.aios-core/infrastructure/scripts/ide-sync/agent-parser'); +const claudeTransformer = require('../../.aios-core/infrastructure/scripts/ide-sync/transformers/claude-code'); +const { syncGeminiCommands } = require('../../.aios-core/infrastructure/scripts/ide-sync/gemini-commands'); + +describe('ide-sync commandValidate --ide filter', () => { + let tmpRoot; + let previousCwd; + + beforeEach(async () => { + tmpRoot = await fs.mkdtemp(path.join(os.tmpdir(), 'ide-sync-validate-filter-')); + previousCwd = process.cwd(); + process.chdir(tmpRoot); + + await fs.ensureDir(path.join(tmpRoot, '.aios-core')); + await fs.writeFile( + path.join(tmpRoot, '.aios-core', 'core-config.yaml'), + [ + 'ideSync:', + ' enabled: true', + ' source: .aios-core/development/agents', + ' targets:', + ' claude-code:', + ' enabled: true', + ' path: .claude/commands/AIOS/agents', + ' format: full-markdown-yaml', + ' gemini:', + ' enabled: true', + ' path: .gemini/rules/AIOS/agents', + ' format: full-markdown-yaml', + ' redirects: {}', + ].join('\n'), + 'utf8', + ); + + await fs.copy( + path.join(previousCwd, '.aios-core', 'development', 'agents'), + path.join(tmpRoot, '.aios-core', 'development', 'agents'), + ); + + await fs.ensureDir(path.join(tmpRoot, '.gemini', 'rules', 'AIOS', 'agents')); + const agents = parseAllAgents(path.join(tmpRoot, '.aios-core', 
'development', 'agents')); + for (const agent of agents) { + const content = claudeTransformer.transform(agent); + await fs.writeFile( + path.join(tmpRoot, '.gemini', 'rules', 'AIOS', 'agents', agent.filename), + content, + 'utf8', + ); + } + syncGeminiCommands(agents, tmpRoot, { dryRun: false }); + }); + + afterEach(async () => { + process.chdir(previousCwd); + await fs.remove(tmpRoot); + }); + + it('validates only requested IDE when --ide is provided', async () => { + await expect(commandValidate({ ide: 'gemini', strict: true, verbose: false })).resolves.toBeUndefined(); + }); +}); + +``` + +================================================== +📄 tests/ide-sync/agent-parser.test.js +================================================== +```js +/** + * Unit tests for agent-parser.js + * @story 6.19 - IDE Command Auto-Sync System + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); +const { + extractYamlBlock, + parseYaml, + extractSection, + parseAgentFile, + parseAllAgents, + getVisibleCommands, + formatCommandsList, +} = require('../../.aios-core/infrastructure/scripts/ide-sync/agent-parser'); + +describe('agent-parser', () => { + let tempDir; + + beforeAll(() => { + tempDir = path.join(os.tmpdir(), 'agent-parser-test-' + Date.now()); + fs.ensureDirSync(tempDir); + }); + + afterAll(() => { + fs.removeSync(tempDir); + }); + + describe('extractYamlBlock', () => { + it('should extract YAML from markdown', () => { + const content = `# Agent + +Some text + +\`\`\`yaml +agent: + name: Test + id: test +\`\`\` + +More text +`; + const yaml = extractYamlBlock(content); + expect(yaml).toContain('agent:'); + expect(yaml).toContain('name: Test'); + }); + + it('should return null if no YAML block', () => { + const content = '# Just markdown\n\nNo YAML here.'; + expect(extractYamlBlock(content)).toBeNull(); + }); + + it('should handle empty YAML block', () => { + const content = '```yaml\n```'; + const yaml = extractYamlBlock(content); 
      // Empty YAML block returns null (no content to extract)
      expect(yaml).toBeNull();
    });
  });

  describe('parseYaml', () => {
    it('should parse valid YAML', () => {
      const yamlContent = 'agent:\n name: Test\n id: test';
      const parsed = parseYaml(yamlContent);
      expect(parsed.agent.name).toBe('Test');
      expect(parsed.agent.id).toBe('test');
    });

    it('should return null for invalid YAML', () => {
      const invalidYaml = 'agent: [\ninvalid';
      expect(parseYaml(invalidYaml)).toBeNull();
    });

    it('should handle empty string', () => {
      // Empty string returns undefined from yaml.load, which is falsy
      const result = parseYaml('');
      expect(result).toBeFalsy();
    });
  });

  describe('extractSection', () => {
    it('should extract section by heading', () => {
      const content = `# Main

## Quick Commands

- Command 1
- Command 2

## Other Section

More content
`;
      const section = extractSection(content, 'Quick Commands');
      expect(section).toContain('Command 1');
      // Note: regex captures until next heading, so Command 2 should be included
      // If this test was failing, the regex might only capture first line
      expect(section).toBeDefined();
    });

    it('should return null if section not found', () => {
      const content = '# Main\n\n## Existing Section\n\nContent';
      expect(extractSection(content, 'Missing Section')).toBeNull();
    });

    it('should handle headings with parentheses', () => {
      // Test that a regular heading with common text works
      const content = '## Developer Guide\n\nGuide content here\n\n## Next Section';
      const section = extractSection(content, 'Developer Guide');
      expect(section).toContain('Guide content');
    });
  });

  describe('parseAgentFile', () => {
    it('should parse a valid agent file', () => {
      const agentContent = `# test

\`\`\`yaml
agent:
  name: TestAgent
  id: test
  title: Test Agent
  icon: 🧪

persona_profile:
  archetype: Tester

commands:
  - name: help
    visibility: [full, quick]
    description: Show help
  - name: run
    visibility: [full]
    description: Run test
\`\`\`

## Quick Commands

- \`*help\` - Show help
`;

      const filePath = path.join(tempDir, 'test.md');
      fs.writeFileSync(filePath, agentContent);

      const result = parseAgentFile(filePath);

      expect(result.error).toBeNull();
      expect(result.id).toBe('test');
      expect(result.agent.name).toBe('TestAgent');
      expect(result.agent.id).toBe('test');
      expect(result.persona_profile.archetype).toBe('Tester');
      expect(result.commands).toHaveLength(2);
      expect(result.sections.quickCommands).toContain('help');
    });

    it('should handle file without YAML block', () => {
      const content = '# No YAML\n\nJust markdown.';
      const filePath = path.join(tempDir, 'no-yaml.md');
      fs.writeFileSync(filePath, content);

      const result = parseAgentFile(filePath);
      expect(result.error).toBe('No YAML block found');
    });

    it('should handle non-existent file', () => {
      const result = parseAgentFile(path.join(tempDir, 'nonexistent.md'));
      expect(result.error).not.toBeNull();
    });
  });

  describe('parseAllAgents', () => {
    it('should parse all agent files in directory', () => {
      const agentsDir = path.join(tempDir, 'agents');
      fs.ensureDirSync(agentsDir);

      // Create two agent files
      const agent1 = '# agent1\n\n```yaml\nagent:\n name: Agent1\n id: agent1\n```';
      const agent2 = '# agent2\n\n```yaml\nagent:\n name: Agent2\n id: agent2\n```';

      fs.writeFileSync(path.join(agentsDir, 'agent1.md'), agent1);
      fs.writeFileSync(path.join(agentsDir, 'agent2.md'), agent2);

      const agents = parseAllAgents(agentsDir);
      expect(agents).toHaveLength(2);
      expect(agents.map((a) => a.id)).toContain('agent1');
      expect(agents.map((a) => a.id)).toContain('agent2');
    });

    it('should return empty array for non-existent directory', () => {
      const agents = parseAllAgents(path.join(tempDir, 'nonexistent'));
      expect(agents).toEqual([]);
    });

    it('should skip non-md files', () => {
      const agentsDir = path.join(tempDir, 'agents-mixed');
      fs.ensureDirSync(agentsDir);

      fs.writeFileSync(path.join(agentsDir, 'agent.md'), '# agent\n```yaml\nagent:\n id: a\n```');
      fs.writeFileSync(path.join(agentsDir, 'config.json'), '{}');
      fs.writeFileSync(path.join(agentsDir, 'readme.txt'), 'text');

      const agents = parseAllAgents(agentsDir);
      expect(agents).toHaveLength(1);
    });
  });

  describe('getVisibleCommands', () => {
    const commands = [
      { name: 'help', visibility: ['full', 'quick', 'key'] },
      { name: 'run', visibility: ['full', 'quick'] },
      { name: 'debug', visibility: ['full'] },
      { name: 'exit', visibility: ['key'] },
    ];

    it('should filter by full visibility', () => {
      const result = getVisibleCommands(commands, 'full');
      expect(result).toHaveLength(3);
      expect(result.map((c) => c.name)).toContain('help');
      expect(result.map((c) => c.name)).toContain('run');
      expect(result.map((c) => c.name)).toContain('debug');
    });

    it('should filter by quick visibility', () => {
      const result = getVisibleCommands(commands, 'quick');
      expect(result).toHaveLength(2);
      expect(result.map((c) => c.name)).toContain('help');
      expect(result.map((c) => c.name)).toContain('run');
    });

    it('should filter by key visibility', () => {
      const result = getVisibleCommands(commands, 'key');
      expect(result).toHaveLength(2);
      expect(result.map((c) => c.name)).toContain('help');
      expect(result.map((c) => c.name)).toContain('exit');
    });

    it('should handle empty commands array', () => {
      expect(getVisibleCommands([], 'full')).toEqual([]);
    });

    it('should handle null/undefined', () => {
      expect(getVisibleCommands(null, 'full')).toEqual([]);
      expect(getVisibleCommands(undefined, 'full')).toEqual([]);
    });

    it('should include commands without visibility defined', () => {
      const cmds = [{ name: 'novis' }, { name: 'withvis', visibility: ['full'] }];
      const result = getVisibleCommands(cmds, 'quick');
      expect(result.map((c) => c.name)).toContain('novis');
    });
  });

  describe('formatCommandsList', () => {
    it('should format commands as bullet list', () => {
      const commands = [
        { name: 'help', description: 'Show help' },
        { name: 'run', description: 'Run tests' },
      ];

      const result = formatCommandsList(commands);
      expect(result).toContain('- `*help` - Show help');
      expect(result).toContain('- `*run` - Run tests');
    });

    it('should handle empty commands', () => {
      expect(formatCommandsList([])).toBe('- No commands available');
    });

    it('should handle commands without description', () => {
      const commands = [{ name: 'mystery' }];
      const result = formatCommandsList(commands);
      expect(result).toContain('No description');
    });

    it('should use custom prefix', () => {
      const commands = [{ name: 'test', description: 'Test' }];
      const result = formatCommandsList(commands, '/');
      expect(result).toContain('`/test`');
    });
  });
});

```

==================================================
📄 tests/ide-sync/transformers.test.js
==================================================
```js
/**
 * Unit tests for IDE transformers
 * @story 6.19 - IDE Command Auto-Sync System
 */

const claudeCode = require('../../.aios-core/infrastructure/scripts/ide-sync/transformers/claude-code');
const cursor = require('../../.aios-core/infrastructure/scripts/ide-sync/transformers/cursor');
const antigravity = require('../../.aios-core/infrastructure/scripts/ide-sync/transformers/antigravity');

describe('IDE Transformers', () => {
  // Sample agent data for testing
  const sampleAgent = {
    path: '/path/to/dev.md',
    filename: 'dev.md',
    id: 'dev',
    raw: '# dev\n\n```yaml\nagent:\n name: Dex\n id: dev\n```\n\nContent',
    yaml: {
      agent: {
        name: 'Dex',
        id: 'dev',
        title: 'Full Stack Developer',
        icon: '💻',
        whenToUse: 'Use for code implementation',
      },
      persona_profile: {
        archetype: 'Builder',
      },
      commands: [
        { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show help' },
        { name: 'develop', visibility: ['full', 'quick'], description: 'Develop story' },
        { name: 'debug', visibility: ['full'], description: 'Debug mode' },
        { name: 'exit', visibility: ['full', 'quick', 'key'], description: 'Exit agent' },
      ],
      dependencies: {
        tasks: ['task1.md', 'task2.md'],
        tools: ['git', 'context7'],
      },
    },
    agent: {
      name: 'Dex',
      id: 'dev',
      title: 'Full Stack Developer',
      icon: '💻',
      whenToUse: 'Use for code implementation',
    },
    persona_profile: {
      archetype: 'Builder',
    },
    commands: [
      { name: 'help', visibility: ['full', 'quick', 'key'], description: 'Show help' },
      { name: 'develop', visibility: ['full', 'quick'], description: 'Develop story' },
      { name: 'debug', visibility: ['full'], description: 'Debug mode' },
      { name: 'exit', visibility: ['full', 'quick', 'key'], description: 'Exit agent' },
    ],
    dependencies: {
      tasks: ['task1.md', 'task2.md'],
      tools: ['git', 'context7'],
    },
    sections: {
      quickCommands: '- `*help` - Show help',
      collaboration: 'Works with @qa and @sm',
      guide: 'Developer guide content',
    },
    error: null,
  };

  describe('claude-code transformer', () => {
    it('should return raw content (identity transform)', () => {
      const result = claudeCode.transform(sampleAgent);
      expect(result).toContain('# dev');
      expect(result).toContain('```yaml');
    });

    it('should add sync footer if not present', () => {
      const result = claudeCode.transform(sampleAgent);
      expect(result).toContain('Synced from .aios-core/development/agents/dev.md');
    });

    it('should not duplicate sync footer', () => {
      const agentWithFooter = {
        ...sampleAgent,
        raw:
          sampleAgent.raw +
          '\n---\n*AIOS Agent - Synced from .aios-core/development/agents/dev.md*',
      };
      const result = claudeCode.transform(agentWithFooter);
      const footerCount = (result.match(/Synced from/g) || []).length;
      expect(footerCount).toBe(1);
    });

    it('should return correct filename', () => {
      expect(claudeCode.getFilename(sampleAgent)).toBe('dev.md');
    });

    it('should have correct format identifier', () => {
      expect(claudeCode.format).toBe('full-markdown-yaml');
    });

    it('should handle agent without raw content', () => {
      const noRaw = { ...sampleAgent, raw: null };
      const result = claudeCode.transform(noRaw);
      expect(result).toContain('Dex');
      expect(result).toContain('Full Stack Developer');
    });
  });

  describe('cursor transformer', () => {
    it('should generate condensed format', () => {
      const result = cursor.transform(sampleAgent);
      expect(result).toContain('# Dex (@dev)');
      expect(result).toContain('💻 **Full Stack Developer**');
      expect(result).toContain('Builder');
    });

    it('should include whenToUse', () => {
      const result = cursor.transform(sampleAgent);
      expect(result).toContain('Use for code implementation');
    });

    it('should include Quick Commands section', () => {
      const result = cursor.transform(sampleAgent);
      expect(result).toContain('## Quick Commands');
      expect(result).toContain('*help');
      expect(result).toContain('*develop');
    });

    it('should include collaboration section', () => {
      const result = cursor.transform(sampleAgent);
      expect(result).toContain('## Collaboration');
      expect(result).toContain('@qa');
    });

    it('should add sync footer', () => {
      const result = cursor.transform(sampleAgent);
      expect(result).toContain('Synced from');
    });

    it('should have correct format identifier', () => {
      expect(cursor.format).toBe('condensed-rules');
    });
  });

  describe('antigravity transformer', () => {
    it('should generate cursor-style format', () => {
      const result = antigravity.transform(sampleAgent);
      expect(result).toContain('# Dex (@dev)');
      expect(result).toContain('💻 **Full Stack Developer**');
    });

    it('should include Quick Commands', () => {
      const result = antigravity.transform(sampleAgent);
      expect(result).toContain('## Quick Commands');
    });

    it('should include All Commands if more than quick+key', () => {
      const result = antigravity.transform(sampleAgent);
      expect(result).toContain('## All Commands');
      expect(result).toContain('*debug');
    });

    it('should have correct format identifier', () => {
      expect(antigravity.format).toBe('cursor-style');
    });
  });

  describe('all transformers', () => {
    const transformers = [claudeCode, cursor, antigravity];

    it('should handle agent with minimal data', () => {
      const minimal = {
        filename: 'minimal.md',
        id: 'minimal',
        agent: null,
        persona_profile: null,
        commands: [],
        dependencies: null,
        sections: {},
        error: null,
      };

      for (const transformer of transformers) {
        expect(() => transformer.transform(minimal)).not.toThrow();
        const result = transformer.transform(minimal);
        expect(typeof result).toBe('string');
        expect(result.length).toBeGreaterThan(0);
      }
    });

    it('should return valid filename for all', () => {
      for (const transformer of transformers) {
        const filename = transformer.getFilename(sampleAgent);
        expect(filename).toBe('dev.md');
      }
    });

    it('should have format property', () => {
      for (const transformer of transformers) {
        expect(typeof transformer.format).toBe('string');
      }
    });
  });
});

```

==================================================
📄 tests/templates/dbdr.test.js
==================================================
```js
/**
 * DBDR Template Tests
 *
 * Test suite for Database Decision Record template.
 *
 * @module tests/dbdr
 * @story 3.10 - Template DBDR
 */

'use strict';

const path = require('path');
const { TemplateEngine } = require('../../.aios-core/product/templates/engine');

describe('DBDR Template', () => {
  let engine;

  beforeAll(() => {
    engine = new TemplateEngine({
      interactive: false,
      baseDir: path.join(__dirname, '..', '..'),
    });
  });

  /**
   * DBDR-01: Generate DBDR with required fields
   * Priority: P0
   * AC: AC3.10.6 - JSON Schema validates generated output
   */
  describe('DBDR-01: Generate DBDR with required fields', () => {
    it('should generate a valid DBDR document with all required fields', async () => {
      const context = {
        number: 1,
        title: 'Implement User Audit Trail Table',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'Data Engineer',
        context: 'We need to track all user actions for compliance and debugging purposes.',
        decision: 'Create a dedicated audit_trail table with partitioning by date for performance.',
        migrationStrategy: 'Blue-green deployment with dual-write during migration period.',
        rollbackPlan: 'Restore from pre-migration backup and disable audit triggers if issues occur.',
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      expect(result.content).toBeDefined();
      expect(result.content).toContain('# DBDR 001: Implement User Audit Trail Table');
      expect(result.content).toContain('**Status:** Proposed');
      expect(result.content).toContain('**Owner:** Data Engineer');
      expect(result.content).toContain('**Database:** PostgreSQL');
      expect(result.content).toContain('## Context');
      expect(result.content).toContain('## Decision');
      expect(result.content).toContain('## Migration Strategy');
      expect(result.content).toContain('## Rollback Plan');
      expect(result.templateType).toBe('dbdr');
    });

    it('should fail validation without required fields', async () => {
      const incompleteContext = {
        number: 1,
        title: 'Test',
        // Missing: status, dbType, owner, context, decision, migrationStrategy, rollbackPlan
      };

      await expect(
        engine.generate('dbdr', incompleteContext, { validate: true, save: false }),
      ).rejects.toThrow(/required.*has no default|missing required/i);
    });
  });

  /**
   * DBDR-02: Schema changes table renders
   * Priority: P0
   * AC: AC3.10.2 - Includes a dedicated Schema Changes section
   */
  describe('DBDR-02: Schema changes table renders', () => {
    it('should render schema changes as a table', async () => {
      const context = {
        number: 2,
        title: 'Add User Preferences Schema',
        status: 'Approved',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing schema changes rendering in DBDR template.',
        decision: 'Add new user_preferences table with JSON column for flexibility.',
        migrationStrategy: 'Online migration with zero downtime using pg_repack.',
        rollbackPlan: 'Drop table and restore from backup if validation fails.',
        schemaChanges: [
          { table: 'user_preferences', changeType: 'CREATE', description: 'New preferences table' },
          { table: 'users', changeType: 'ALTER', description: 'Add preference_id foreign key' },
          { table: 'user_preferences', changeType: 'INDEX', description: 'Index on user_id' },
        ],
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      // Check table headers
      expect(result.content).toContain('| Table | Change Type | Description |');
      expect(result.content).toContain('|-------|-------------|-------------|');

      // Check table rows
      expect(result.content).toContain('`user_preferences`');
      expect(result.content).toContain('CREATE');
      expect(result.content).toContain('New preferences table');
      expect(result.content).toContain('ALTER');
      expect(result.content).toContain('INDEX');
    });

    it('should show placeholder when no schema changes', async () => {
      const context = {
        number: 3,
        title: 'Query Optimization Decision',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing empty schema changes rendering.',
        decision: 'Optimize existing queries without schema changes.',
        migrationStrategy: 'No schema migration needed, only query updates.',
        rollbackPlan: 'Revert query changes via git if performance degrades.',
        // No schemaChanges
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      expect(result.content).toContain('_No schema changes required._');
    });
  });

  /**
   * DBDR-03: SQL blocks render correctly
   * Priority: P0
   */
  describe('DBDR-03: SQL blocks render correctly', () => {
    it('should render SQL migration blocks when provided', async () => {
      const context = {
        number: 4,
        title: 'Add Audit Columns',
        status: 'Approved',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing SQL blocks rendering in DBDR template.',
        decision: 'Add created_at and updated_at columns to all core tables.',
        migrationStrategy: 'Batch migration during maintenance window with locks.',
        rollbackPlan: 'Drop columns using pre-generated rollback script.',
        schemaChanges: [
          {
            table: 'users',
            changeType: 'ALTER',
            description: 'Add audit columns',
            sql: 'ALTER TABLE users ADD COLUMN created_at TIMESTAMP DEFAULT NOW();',
          },
          {
            table: 'orders',
            changeType: 'ALTER',
            description: 'Add audit columns',
            sql: 'ALTER TABLE orders ADD COLUMN updated_at TIMESTAMP DEFAULT NOW();',
          },
        ],
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      // Check SQL blocks
      expect(result.content).toContain('```sql');
      expect(result.content).toContain('-- users: ALTER');
      expect(result.content).toContain('ALTER TABLE users ADD COLUMN created_at');
      expect(result.content).toContain('-- orders: ALTER');
      expect(result.content).toContain('ALTER TABLE orders ADD COLUMN updated_at');
    });

    it('should render rollback scripts when provided', async () => {
      const context = {
        number: 5,
        title: 'Test Rollback Scripts',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing rollback scripts rendering.',
        decision: 'Test decision with rollback scripts.',
        migrationStrategy: 'Standard migration with rollback capability.',
        rollbackPlan: 'Execute rollback script to restore previous state.',
        rollbackScripts: 'DROP TABLE IF EXISTS new_table;\nALTER TABLE users DROP COLUMN IF EXISTS new_column;',
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      expect(result.content).toContain('### Rollback Scripts');
      expect(result.content).toContain('DROP TABLE IF EXISTS new_table');
      expect(result.content).toContain('ALTER TABLE users DROP COLUMN');
    });
  });

  /**
   * DBDR-04: Validation fails without rollbackPlan
   * Priority: P0
   * AC: AC3.10.5 - Includes Rollback Plan section
   */
  describe('DBDR-04: Validation fails without rollbackPlan', () => {
    it('should fail when rollbackPlan is missing', async () => {
      const context = {
        number: 6,
        title: 'Test Missing Rollback Plan',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing validation of rollback plan field.',
        decision: 'Test decision requiring rollback plan.',
        migrationStrategy: 'Standard migration approach with blue-green.',
        // Missing: rollbackPlan
      };

      await expect(
        engine.generate('dbdr', context, { validate: true, save: false }),
      ).rejects.toThrow(/required.*has no default|missing required/i);
    });

    it('should fail when rollbackPlan is too short', async () => {
      const context = {
        number: 7,
        title: 'Test Short Rollback Plan',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing validation of rollback plan minimum length.',
        decision: 'Test decision with short rollback plan.',
        migrationStrategy: 'Standard migration approach.',
        rollbackPlan: 'Too short', // Less than 20 chars
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      // Should have validation errors
      expect(result.validation.isValid).toBe(false);
      expect(result.validation.errors.some(e => e.includes('rollbackPlan'))).toBe(true);
    });
  });

  /**
   * DBDR-05: Performance metrics table renders
   * Priority: P1
   * AC: AC3.10.4 - Includes Performance Impact section
   */
  describe('DBDR-05: Performance metrics table renders', () => {
    it('should render performance metrics table when provided', async () => {
      const context = {
        number: 8,
        title: 'Test Performance Metrics',
        status: 'Approved',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing performance metrics rendering in DBDR template.',
        decision: 'Add indexes to improve query performance.',
        migrationStrategy: 'Create indexes concurrently to avoid locks.',
        rollbackPlan: 'Drop indexes if performance degrades unexpectedly.',
        performanceMetrics: [
          { metric: 'Query Time (avg)', before: '500ms', after: '50ms', acceptable: 'Yes' },
          { metric: 'Index Size', before: '0 MB', after: '150 MB', acceptable: 'Yes' },
          { metric: 'Write Latency', before: '10ms', after: '15ms', acceptable: 'Acceptable' },
        ],
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      // Check performance table headers
      expect(result.content).toContain('| Metric | Before | After | Acceptable? |');
      expect(result.content).toContain('|--------|--------|-------|-------------|');

      // Check table rows
      expect(result.content).toContain('Query Time (avg)');
      expect(result.content).toContain('500ms');
      expect(result.content).toContain('50ms');
      expect(result.content).toContain('Index Size');
      expect(result.content).toContain('150 MB');
    });

    it('should render indexing strategy when provided', async () => {
      const context = {
        number: 9,
        title: 'Test Indexing Strategy',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing indexing strategy rendering.',
        decision: 'Add composite indexes for common query patterns.',
        migrationStrategy: 'Create indexes during low traffic period.',
        rollbackPlan: 'Drop indexes if space or performance issues arise.',
        indexes: [
          { name: 'idx_users_email', table: 'users', columns: 'email', reason: 'Unique lookup by email' },
          { name: 'idx_orders_user_date', table: 'orders', columns: 'user_id, created_at', reason: 'User order history queries' },
        ],
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      expect(result.content).toContain('### Indexing Strategy');
      expect(result.content).toContain('`idx_users_email`');
      expect(result.content).toContain('`users(email)`');
      expect(result.content).toContain('Unique lookup by email');
      expect(result.content).toContain('`idx_orders_user_date`');
      expect(result.content).toContain('`orders(user_id, created_at)`');
    });
  });

  /**
   * DBDR-06: Validation fails without migrationStrategy
   * Priority: P0
   * AC: AC3.10.7 - Validates that migration strategy is not empty
   */
  describe('DBDR-06: Validation fails without migrationStrategy', () => {
    it('should fail when migrationStrategy is missing', async () => {
      const context = {
        number: 10,
        title: 'Test Missing Migration Strategy',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing validation of migration strategy field.',
        decision: 'Test decision requiring migration strategy.',
        rollbackPlan: 'Execute rollback script to restore previous state.',
        // Missing: migrationStrategy
      };

      await expect(
        engine.generate('dbdr', context, { validate: true, save: false }),
      ).rejects.toThrow(/required.*has no default|missing required/i);
    });

    it('should fail when migrationStrategy is empty', async () => {
      const context = {
        number: 11,
        title: 'Test Empty Migration Strategy',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing validation of empty migration strategy.',
        decision: 'Test decision with empty migration strategy.',
        migrationStrategy: '', // Empty string
        rollbackPlan: 'Execute rollback script to restore previous state.',
      };

      await expect(
        engine.generate('dbdr', context, { validate: true, save: false }),
      ).rejects.toThrow(/required.*has no default|missing required/i);
    });

    it('should fail when migrationStrategy is too short', async () => {
      const context = {
        number: 12,
        title: 'Test Short Migration Strategy',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing validation of migration strategy minimum length.',
        decision: 'Test decision with short migration strategy.',
        migrationStrategy: 'Too short', // Less than 20 chars
        rollbackPlan: 'Execute rollback script to restore previous state.',
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      // Should have validation errors
      expect(result.validation.isValid).toBe(false);
      expect(result.validation.errors.some(e => e.includes('migrationStrategy'))).toBe(true);
    });
  });

  /**
   * DBDR-07: CLI command executes successfully
   * Priority: P0
   * AC: AC3.10.8 - Template registered in the TemplateEngine
   * AC: AC3.10.9 - Generation via CLI: aios generate dbdr
   */
  describe('DBDR-07: CLI command executes successfully', () => {
    it('should be included in supported types', () => {
      expect(engine.supportedTypes).toContain('dbdr');
    });

    it('should load template info successfully', async () => {
      const info = await engine.getTemplateInfo('dbdr');

      expect(info.type).toBe('dbdr');
      expect(info.name).toBe('Database Decision Record');
      expect(info.version).toBeDefined();
      expect(info.variables).toBeDefined();
      expect(Array.isArray(info.variables)).toBe(true);

      // Check required variables
      const requiredVars = info.variables.filter(v => v.required);
      const requiredNames = requiredVars.map(v => v.name);

      expect(requiredNames).toContain('number');
      expect(requiredNames).toContain('title');
      expect(requiredNames).toContain('dbType');
      expect(requiredNames).toContain('migrationStrategy');
      expect(requiredNames).toContain('rollbackPlan');
    });

    it('should list dbdr in available templates', async () => {
      const templates = await engine.listTemplates();
      const dbdrTemplate = templates.find(t => t.type === 'dbdr');

      expect(dbdrTemplate).toBeDefined();
      expect(dbdrTemplate.status).not.toBe('missing');
    });
  });

  /**
   * Additional tests for optional sections
   */
  describe('Optional sections rendering', () => {
    it('should render migration phases when provided', async () => {
      const context = {
        number: 13,
        title: 'Test Migration Phases',
        status: 'Approved',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing migration phases rendering.',
        decision: 'Implement phased migration for zero-downtime.',
        migrationStrategy: 'Phased migration with validation at each step.',
        rollbackPlan: 'Rollback to previous phase on validation failure.',
        migrationPhases: [
          { phase: 'Preparation', duration: '1 day', description: 'Create new schema', validation: 'Schema exists' },
          { phase: 'Dual Write', duration: '3 days', description: 'Write to both schemas', validation: 'Data parity check' },
          { phase: 'Cutover', duration: '1 hour', description: 'Switch to new schema', validation: 'All queries work' },
        ],
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      expect(result.content).toContain('### Phases');
      expect(result.content).toContain('**Preparation**');
      expect(result.content).toContain('1 day');
      expect(result.content).toContain('Dual Write');
      expect(result.content).toContain('Cutover');
    });

    it('should render consequences when provided', async () => {
      const context = {
        number: 14,
        title: 'Test Consequences Rendering',
        status: 'Approved',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing consequences section rendering.',
        decision: 'Denormalize for read performance.',
        migrationStrategy: 'Background job to populate denormalized data.',
        rollbackPlan: 'Drop denormalized columns and revert to joins.',
        positiveConsequences: [
          'Read queries 10x faster',
          'Simpler application logic',
          'Better user experience',
        ],
        negativeConsequences: [
          'Increased storage requirements',
          'More complex write logic',
          'Potential for data inconsistency',
        ],
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      expect(result.content).toContain('## Consequences');
      expect(result.content).toContain('### Positive');
      expect(result.content).toContain('✅ Read queries 10x faster');
      expect(result.content).toContain('### Negative (Trade-offs)');
      expect(result.content).toContain('⚠️ Increased storage requirements');
    });

    it('should render data volume considerations when provided', async () => {
      const context = {
        number: 15,
        title: 'Test Data Volume',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing data volume considerations rendering.',
        decision: 'Implement table partitioning for large audit table.',
        migrationStrategy: 'Create partitions and migrate data in batches.',
        rollbackPlan: 'Merge partitions back to single table if issues.',
        dataVolume: {
          current: '500 GB',
          projected: '2 TB (1 year)',
          retention: '7 years regulatory',
        },
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      expect(result.content).toContain('### Data Volume Considerations');
      expect(result.content).toContain('**Current Size:** 500 GB');
      expect(result.content).toContain('**Projected Growth:** 2 TB (1 year)');
      expect(result.content).toContain('**Retention Policy:** 7 years regulatory');
    });

    it('should render related decisions when provided', async () => {
      const context = {
        number: 16,
        title: 'Test Related Decisions',
        status: 'Proposed',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing related decisions rendering.',
        decision: 'Extend previous audit implementation.',
        migrationStrategy: 'Incremental migration building on DBDR-001.',
        rollbackPlan: 'Revert to state before this migration.',
        relatedDBDRs: [
          { number: 1, title: 'Implement User Audit Trail Table' },
          { number: 5, title: 'Add User Preferences Schema' },
        ],
        relatedADRs: [
          { number: 3, title: 'Use PostgreSQL for Primary Database' },
        ],
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      expect(result.content).toContain('## Related Decisions');
      expect(result.content).toContain('[DBDR 1]');
      expect(result.content).toContain('Implement User Audit Trail Table');
      expect(result.content).toContain('### Related ADRs');
      expect(result.content).toContain('[ADR 3]');
    });

    it('should render testing sections when provided', async () => {
      const context = {
        number: 17,
        title: 'Test Testing Sections',
        status: 'Approved',
        dbType: 'PostgreSQL',
        owner: 'DBA',
        context: 'Testing pre/post migration tests rendering.',
        decision: 'Add comprehensive testing for migration.',
        migrationStrategy: 'Test-driven migration with automated validation.',
        rollbackPlan: 'Automated rollback on any test failure.',
        preMigrationTests: [
          'Verify backup is complete and valid',
          'Confirm staging environment matches production',
          'Run load tests on new schema',
        ],
        postMigrationValidation: [
          'Verify row counts match expected',
          'Run integrity checks on all foreign keys',
          'Validate query performance benchmarks',
        ],
      };

      const result = await engine.generate('dbdr', context, { validate: true, save: false });

      expect(result.content).toContain('### Pre-Migration Testing');
      expect(result.content).toContain('[ ] Verify backup is complete');
      expect(result.content).toContain('### Post-Migration Validation');
      expect(result.content).toContain('[ ] Verify row counts');
    });
  });
});

```

==================================================
📄 tests/templates/pmdr.test.js
==================================================
```js
/**
 * PMDR Template Tests
 *
 * Test suite for Product Management Decision Record template.
 *
 * @module tests/pmdr
 * @story 3.9 - Template PMDR
 */

'use strict';

const path = require('path');
const { TemplateEngine } = require('../../.aios-core/product/templates/engine');

describe('PMDR Template', () => {
  let engine;

  beforeAll(() => {
    engine = new TemplateEngine({
      interactive: false,
      baseDir: path.join(__dirname, '..', '..'),
    });
  });

  /**
   * PMDR-01: Generate PMDR with required fields
   * Priority: P0
   */
  describe('PMDR-01: Generate PMDR with required fields', () => {
    it('should generate a valid PMDR document with all required fields', async () => {
      const context = {
        number: 1,
        title: 'Implement Feature Flag System',
        status: 'Draft',
        owner: 'Product Manager',
        stakeholders: ['Engineering Lead', 'QA Lead', 'Designer'],
        context: 'We need a way to safely roll out new features to production without full deployment risk.',
        decision: 'We will implement a feature flag system using LaunchDarkly to enable gradual rollouts and A/B testing.',
        businessImpact: 'This will reduce deployment risk by 80% and enable faster iteration on features.',
        successMetrics: [
          { metric: 'Deployment failures', current: '15%', target: '3%', timeline: 'Q2 2025' },
          { metric: 'Feature iteration time', current: '2 weeks', target: '3 days', timeline: 'Q3 2025' },
        ],
      };

      const result = await engine.generate('pmdr', context, { validate: true, save: false });

      expect(result.content).toBeDefined();
      expect(result.content).toContain('# PMDR 001: Implement Feature Flag System');
      expect(result.content).toContain('**Status:** Draft');
      expect(result.content).toContain('**Owner:** Product Manager');
      expect(result.content).toContain('## Stakeholders');
      expect(result.content).toContain('- Engineering Lead');
      expect(result.content).toContain('## Context');
      expect(result.content).toContain('## Decision');
      expect(result.content).toContain('## Business Impact');
      expect(result.content).toContain('## Success Metrics');
      expect(result.templateType).toBe('pmdr');
    });

    it('should fail validation without required fields', async () => {
      const incompleteContext = {
        number: 1,
        title: 'Test',
        // Missing: status, owner, stakeholders, context, decision, businessImpact, successMetrics
      };

      await expect(
        engine.generate('pmdr', incompleteContext, { validate: true, save: false }),
      ).rejects.toThrow(/required.*has no default|missing required/i);
    });
  });

  /**
   * PMDR-02: Success metrics table renders
   * Priority: P0
   */
  describe('PMDR-02: Success metrics table renders', () => {
    it('should render success metrics as a table', async () => {
      const context = {
        number: 2,
        title: 'Test Metrics Rendering',
        status: 'Draft',
        owner: 'PM',
        stakeholders: ['Team'],
        context: 'Testing success metrics rendering in PMDR template.',
        decision: 'Verify metrics table renders correctly with all columns.',
        businessImpact: 'Ensures proper documentation of success criteria.',
        successMetrics: [
          { metric: 'Conversion Rate', current: '2%', target: '5%', timeline: 'Q1' },
          { metric: 'User 
Engagement', current: '10min', target: '20min', timeline: 'Q2' }, + { metric: 'Revenue Growth', target: '25%' }, // No current value + ], + }; + + const result = await engine.generate('pmdr', context, { validate: true, save: false }); + + // Check table headers + expect(result.content).toContain('| Metric | Current | Target | Timeline |'); + expect(result.content).toContain('|--------|---------|--------|----------|'); + + // Check table rows + expect(result.content).toContain('| Conversion Rate | 2% | 5% | Q1 |'); + expect(result.content).toContain('| User Engagement | 10min | 20min | Q2 |'); + expect(result.content).toContain('Revenue Growth'); + expect(result.content).toContain('25%'); + }); + }); + + /** + * PMDR-03: Approval workflow renders + * Priority: P1 + */ + describe('PMDR-03: Approval workflow renders', () => { + it('should render approval tracking table when approvals exist', async () => { + const context = { + number: 3, + title: 'Test Approval Workflow', + status: 'Under Review', + owner: 'PM', + stakeholders: ['CTO', 'CEO', 'Engineering Lead'], + context: 'Testing approval workflow rendering.', + decision: 'Test decision for approval workflow.', + businessImpact: 'Ensures proper approval tracking.', + successMetrics: [{ metric: 'Test', target: '100%' }], + approvals: [ + { stakeholder: 'CTO', decision: 'Approved', date: '2025-01-15', notes: 'LGTM' }, + { stakeholder: 'CEO', decision: 'Pending' }, + { stakeholder: 'Engineering Lead', decision: 'Approved', date: '2025-01-14' }, + ], + }; + + const result = await engine.generate('pmdr', context, { validate: true, save: false }); + + expect(result.content).toContain('## Approval'); + expect(result.content).toContain('| Stakeholder | Decision | Date | Notes |'); + expect(result.content).toContain('| CTO | Approved | 2025-01-15 | LGTM |'); + expect(result.content).toContain('Engineering Lead'); + }); + + it('should show pending approval message when no approvals', async () => { + const context = { + 
number: 4, + title: 'Test No Approvals', + status: 'Draft', + owner: 'PM', + stakeholders: ['Team'], + context: 'Testing when no approvals exist.', + decision: 'Test decision.', + businessImpact: 'Test impact.', + successMetrics: [{ metric: 'Test', target: '100%' }], + // No approvals array + }; + + const result = await engine.generate('pmdr', context, { validate: true, save: false }); + + expect(result.content).toContain('_Pending approval._'); + }); + }); + + /** + * PMDR-04: Validation fails without businessImpact + * Priority: P0 + */ + describe('PMDR-04: Validation fails without businessImpact', () => { + it('should fail schema validation when businessImpact is empty', async () => { + const context = { + number: 5, + title: 'Test Missing Business Impact', + status: 'Draft', + owner: 'PM', + stakeholders: ['Team'], + context: 'Testing validation of business impact field.', + decision: 'Test decision requiring business impact.', + businessImpact: '', // Empty string - should fail minLength: 20 + successMetrics: [{ metric: 'Test', target: '100%' }], + }; + + await expect( + engine.generate('pmdr', context, { validate: true, save: false }), + ).rejects.toThrow(); + }); + + it('should fail when businessImpact is too short', async () => { + const context = { + number: 6, + title: 'Test Short Business Impact', + status: 'Draft', + owner: 'PM', + stakeholders: ['Team'], + context: 'Testing validation of business impact minimum length.', + decision: 'Test decision with short business impact.', + businessImpact: 'Too short', // Less than 20 chars + successMetrics: [{ metric: 'Test', target: '100%' }], + }; + + const result = await engine.generate('pmdr', context, { validate: true, save: false }); + + // Should have validation errors + expect(result.validation.isValid).toBe(false); + expect(result.validation.errors.some(e => e.includes('businessImpact'))).toBe(true); + }); + }); + + /** + * PMDR-05: CLI command executes successfully + * Priority: P0 + */ + 
describe('PMDR-05: CLI command executes successfully', () => { + it('should be included in supported types', () => { + expect(engine.supportedTypes).toContain('pmdr'); + }); + + it('should load template info successfully', async () => { + const info = await engine.getTemplateInfo('pmdr'); + + expect(info.type).toBe('pmdr'); + expect(info.name).toBe('Product Management Decision Record'); + expect(info.version).toBeDefined(); + expect(info.variables).toBeDefined(); + expect(Array.isArray(info.variables)).toBe(true); + + // Check required variables + const requiredVars = info.variables.filter(v => v.required); + const requiredNames = requiredVars.map(v => v.name); + + expect(requiredNames).toContain('number'); + expect(requiredNames).toContain('title'); + expect(requiredNames).toContain('businessImpact'); + expect(requiredNames).toContain('successMetrics'); + }); + + it('should list pmdr in available templates', async () => { + const templates = await engine.listTemplates(); + const pmdrTemplate = templates.find(t => t.type === 'pmdr'); + + expect(pmdrTemplate).toBeDefined(); + expect(pmdrTemplate.status).not.toBe('missing'); + }); + }); + + /** + * Additional tests for optional sections + */ + describe('Optional sections rendering', () => { + it('should render implementation phases when provided', async () => { + const context = { + number: 7, + title: 'Test Implementation Phases', + status: 'Draft', + owner: 'PM', + stakeholders: ['Team'], + context: 'Testing implementation phases rendering.', + decision: 'Test decision with implementation phases.', + businessImpact: 'This will improve our delivery pipeline.', + successMetrics: [{ metric: 'Test', target: '100%' }], + implementationPhases: [ + { name: 'Discovery', duration: '2 weeks', description: 'Research and planning' }, + { name: 'Development', duration: '4 weeks', description: 'Build the solution' }, + { name: 'Rollout', duration: '1 week', description: 'Gradual deployment' }, + ], + }; + + const result = await 
engine.generate('pmdr', context, { validate: true, save: false }); + + expect(result.content).toContain('## Implementation'); + expect(result.content).toContain('### Phases'); + expect(result.content).toContain('**Discovery**'); + expect(result.content).toContain('2 weeks'); + }); + + it('should render risks table when provided', async () => { + const context = { + number: 8, + title: 'Test Risks Rendering', + status: 'Draft', + owner: 'PM', + stakeholders: ['Team'], + context: 'Testing risks table rendering.', + decision: 'Test decision with risks.', + businessImpact: 'This decision has associated risks.', + successMetrics: [{ metric: 'Test', target: '100%' }], + risks: [ + { risk: 'Technical complexity', impact: 'High', mitigation: 'Spike investigation' }, + { risk: 'Resource constraints', impact: 'Medium', mitigation: 'Hire contractors' }, + ], + }; + + const result = await engine.generate('pmdr', context, { validate: true, save: false }); + + expect(result.content).toContain('### Risks'); + expect(result.content).toContain('| Risk | Impact | Mitigation |'); + expect(result.content).toContain('Technical complexity'); + expect(result.content).toContain('Spike investigation'); + }); + + it('should render alternatives when provided', async () => { + const context = { + number: 9, + title: 'Test Alternatives Rendering', + status: 'Draft', + owner: 'PM', + stakeholders: ['Team'], + context: 'Testing alternatives section rendering.', + decision: 'We chose option A over alternatives.', + businessImpact: 'This is the best option for our needs.', + successMetrics: [{ metric: 'Test', target: '100%' }], + alternatives: [ + { name: 'Option B', description: 'Alternative approach', whyNot: 'Too expensive' }, + { name: 'Option C', description: 'Another approach', whyNot: 'Not scalable' }, + ], + }; + + const result = await engine.generate('pmdr', context, { validate: true, save: false }); + + expect(result.content).toContain('## Alternatives Considered'); + 
expect(result.content).toContain('Option B'); + expect(result.content).toContain('Too expensive'); + }); + }); +}); + +``` + +================================================== +📄 tests/fixtures/tool-invalid.yaml +================================================== +```yaml +tool: + # Missing required id field + type: mcp + name: Invalid Tool + version: 1.0.0 + # Missing description + +``` + +================================================== +📄 tests/fixtures/tool-v1.0-simple.yaml +================================================== +```yaml +tool: + id: test-simple + type: mcp + name: Test Simple Tool + version: 1.0.0 + description: Simple v1.0 tool for testing + + commands: + - search + - fetch + + mcp_specific: + server_command: npx -y test-simple-server + transport: stdio + +``` + +================================================== +📄 tests/fixtures/tool-v2.0-complex.yaml +================================================== +```yaml +tool: + schema_version: 2.0 + id: test-complex + type: mcp + name: Test Complex Tool + version: 1.0.0 + description: Complex v2.0 tool with executable knowledge + knowledge_strategy: executable + + commands: + - create_item + - update_item + + executable_knowledge: + validators: + - id: validate-create-item + validates: create_item + language: javascript + checks: + - required_fields: [name] + function: | + function validateCommand(args) { + const errors = []; + if (!args.args.name) { + errors.push("name is required"); + } + return { + valid: errors.length === 0, + errors: errors + }; + } + module.exports = { validateCommand }; + + helpers: + - id: extract-field + language: javascript + runtime: isolated_vm + function: | + function extractField(args) { + const { data, fieldName } = args; + return data.fields?.find(f => f.name === fieldName); + } + module.exports = { extractField }; + + api_complexity: + api_quirks: + - quirk: format_mismatch + description: "Create and Update use different formats" + impact: "Runtime errors if wrong 
format used" + mitigation: "Use validators" + + mcp_specific: + server_command: npx -y test-complex-server + transport: stdio + +``` + +================================================== +📄 tests/code-intel/code-intel-client.test.js +================================================== +```js +'use strict'; + +const { + CodeIntelClient, + CIRCUIT_BREAKER_THRESHOLD, + CIRCUIT_BREAKER_RESET_MS, + CACHE_TTL_MS, + CB_CLOSED, + CB_OPEN, + CB_HALF_OPEN, +} = require('../../.aios-core/core/code-intel/code-intel-client'); + +describe('CodeIntelClient', () => { + let client; + let mockMcpCallFn; + + beforeEach(() => { + mockMcpCallFn = jest.fn(); + client = new CodeIntelClient({ mcpCallFn: mockMcpCallFn }); + }); + + describe('Provider Detection (AC3)', () => { + it('should detect provider when mcpCallFn is configured', () => { + expect(client.isCodeIntelAvailable()).toBe(true); + }); + + it('should not detect provider when mcpCallFn is missing', () => { + const noProviderClient = new CodeIntelClient(); + expect(noProviderClient.isCodeIntelAvailable()).toBe(false); + }); + }); + + describe('8 Primitive Capabilities (AC3)', () => { + it('should expose findDefinition', async () => { + mockMcpCallFn.mockResolvedValue({ file: 'a.js', line: 1, column: 0, context: '' }); + const result = await client.findDefinition('foo'); + expect(result).toBeTruthy(); + expect(mockMcpCallFn).toHaveBeenCalled(); + }); + + it('should expose findReferences', async () => { + mockMcpCallFn.mockResolvedValue([{ file: 'a.js', line: 1, context: '' }]); + const result = await client.findReferences('foo'); + expect(result).toBeTruthy(); + }); + + it('should expose findCallers', async () => { + mockMcpCallFn.mockResolvedValue({ callers: [] }); + const result = await client.findCallers('foo'); + expect(result).toBeDefined(); + }); + + it('should expose findCallees', async () => { + mockMcpCallFn.mockResolvedValue({ callees: [] }); + const result = await client.findCallees('foo'); + 
expect(result).toBeDefined(); + }); + + it('should expose analyzeDependencies', async () => { + mockMcpCallFn.mockResolvedValue({ nodes: [], edges: [] }); + const result = await client.analyzeDependencies('src/'); + expect(result).toBeDefined(); + }); + + it('should expose analyzeComplexity', async () => { + mockMcpCallFn.mockResolvedValue({ score: 5, details: {} }); + const result = await client.analyzeComplexity('src/index.js'); + expect(result).toBeDefined(); + }); + + it('should expose analyzeCodebase', async () => { + mockMcpCallFn.mockResolvedValue({ files: [], structure: {}, patterns: [] }); + const result = await client.analyzeCodebase('.'); + expect(result).toBeDefined(); + }); + + it('should expose getProjectStats', async () => { + mockMcpCallFn.mockResolvedValue({ files: 100, lines: 10000, languages: {} }); + const result = await client.getProjectStats(); + expect(result).toBeDefined(); + }); + }); + + describe('Circuit Breaker (AC5)', () => { + it('should start in CLOSED state', () => { + expect(client.getCircuitBreakerState()).toBe(CB_CLOSED); + }); + + it('should open after 3 consecutive failures', async () => { + mockMcpCallFn.mockRejectedValue(new Error('timeout')); + + await client.findDefinition('a'); + await client.findDefinition('b'); + await client.findDefinition('c'); + + expect(client.getCircuitBreakerState()).toBe(CB_OPEN); + }); + + it('should return null when circuit is open (fallback)', async () => { + mockMcpCallFn.mockRejectedValue(new Error('timeout')); + + // Trigger 3 failures to open circuit + await client.findDefinition('a'); + await client.findDefinition('b'); + await client.findDefinition('c'); + + // Reset mock to succeed — but circuit is open + mockMcpCallFn.mockResolvedValue({ file: 'ok.js', line: 1, column: 0, context: '' }); + const result = await client.findDefinition('d'); + expect(result).toBeNull(); + }); + + it('should transition to HALF-OPEN after reset timer', async () => { + mockMcpCallFn.mockRejectedValue(new 
Error('timeout')); + + await client.findDefinition('a'); + await client.findDefinition('b'); + await client.findDefinition('c'); + expect(client.getCircuitBreakerState()).toBe(CB_OPEN); + + // Simulate time passing + client._cbOpenedAt = Date.now() - CIRCUIT_BREAKER_RESET_MS - 1; + expect(client.getCircuitBreakerState()).toBe(CB_HALF_OPEN); + }); + + it('should close after success in HALF-OPEN state', async () => { + mockMcpCallFn.mockRejectedValue(new Error('timeout')); + + await client.findDefinition('a'); + await client.findDefinition('b'); + await client.findDefinition('c'); + + // Simulate reset timer expired + client._cbOpenedAt = Date.now() - CIRCUIT_BREAKER_RESET_MS - 1; + client._cbState = CB_HALF_OPEN; + + // Next call succeeds + mockMcpCallFn.mockResolvedValue({ file: 'ok.js', line: 1, column: 0, context: '' }); + await client.findDefinition('e'); + expect(client.getCircuitBreakerState()).toBe(CB_CLOSED); + }); + + it('should reset failure count on success', async () => { + mockMcpCallFn + .mockRejectedValueOnce(new Error('fail')) + .mockRejectedValueOnce(new Error('fail')) + .mockResolvedValueOnce({ file: 'ok.js', line: 1, column: 0, context: '' }); + + await client.findDefinition('a'); + await client.findDefinition('b'); + await client.findDefinition('c'); // success — resets counter + expect(client.getCircuitBreakerState()).toBe(CB_CLOSED); + }); + }); + + describe('Session Cache (AC6)', () => { + it('should cache results for identical calls', async () => { + mockMcpCallFn.mockResolvedValue({ file: 'a.js', line: 1, column: 0, context: '' }); + + const result1 = await client.findDefinition('foo'); + const result2 = await client.findDefinition('foo'); + + expect(mockMcpCallFn).toHaveBeenCalledTimes(1); + expect(result1).toEqual(result2); + }); + + it('should not cache for different params', async () => { + mockMcpCallFn.mockResolvedValue({ file: 'a.js', line: 1, column: 0, context: '' }); + + await client.findDefinition('foo'); + await 
client.findDefinition('bar'); + + expect(mockMcpCallFn).toHaveBeenCalledTimes(2); + }); + + it('should evict expired entries', async () => { + mockMcpCallFn.mockResolvedValue({ file: 'a.js', line: 1, column: 0, context: '' }); + + await client.findDefinition('foo'); + + // Manually expire the cache entry + const cacheEntry = client._cache.values().next().value; + cacheEntry.timestamp = Date.now() - CACHE_TTL_MS - 1; + + await client.findDefinition('foo'); + expect(mockMcpCallFn).toHaveBeenCalledTimes(2); + }); + + it('should track cache hit/miss counters', async () => { + mockMcpCallFn.mockResolvedValue({ file: 'a.js', line: 1, column: 0, context: '' }); + + await client.findDefinition('foo'); // miss + await client.findDefinition('foo'); // hit + await client.findDefinition('bar'); // miss + + const metrics = client.getMetrics(); + expect(metrics.cacheHits).toBe(1); + expect(metrics.cacheMisses).toBe(2); + expect(metrics.cacheHitRate).toBeCloseTo(1 / 3); + }); + }); + + describe('Latency Logging (NFR-2)', () => { + it('should log latency for each capability call', async () => { + mockMcpCallFn.mockResolvedValue({ file: 'a.js', line: 1, column: 0, context: '' }); + + await client.findDefinition('foo'); + await client.findReferences('bar'); + + const metrics = client.getMetrics(); + expect(metrics.latencyLog).toHaveLength(2); + expect(metrics.latencyLog[0].capability).toBe('findDefinition'); + expect(metrics.latencyLog[1].capability).toBe('findReferences'); + expect(typeof metrics.latencyLog[0].durationMs).toBe('number'); + }); + }); + + describe('isCodeIntelAvailable (AC8)', () => { + it('should return true when provider has mcpCallFn', () => { + expect(client.isCodeIntelAvailable()).toBe(true); + }); + + it('should return false when no provider configured', () => { + const bareClient = new CodeIntelClient(); + expect(bareClient.isCodeIntelAvailable()).toBe(false); + }); + }); + + describe('Metrics', () => { + it('should return comprehensive metrics object', async 
() => { + const metrics = client.getMetrics(); + expect(metrics).toHaveProperty('cacheHits'); + expect(metrics).toHaveProperty('cacheMisses'); + expect(metrics).toHaveProperty('cacheHitRate'); + expect(metrics).toHaveProperty('circuitBreakerState'); + expect(metrics).toHaveProperty('latencyLog'); + expect(metrics).toHaveProperty('providerAvailable'); + expect(metrics).toHaveProperty('activeProvider'); + }); + }); +}); + +``` + +================================================== +📄 tests/code-intel/fallback.test.js +================================================== +```js +'use strict'; + +const { + CodeIntelClient, +} = require('../../.aios-core/core/code-intel/code-intel-client'); +const { + CodeIntelEnricher, +} = require('../../.aios-core/core/code-intel/code-intel-enricher'); +const { + isCodeIntelAvailable, + enrichWithCodeIntel, + getClient, + _resetForTesting, +} = require('../../.aios-core/core/code-intel'); + +describe('Fallback Graceful (AC4, NFR-1, NFR-4)', () => { + describe('CodeIntelClient without provider', () => { + let client; + + beforeEach(() => { + // No mcpCallFn = no provider available + client = new CodeIntelClient(); + }); + + it('should return false for isCodeIntelAvailable', () => { + expect(client.isCodeIntelAvailable()).toBe(false); + }); + + it('findDefinition should return null without throw', async () => { + const result = await client.findDefinition('foo'); + expect(result).toBeNull(); + }); + + it('findReferences should return null without throw', async () => { + const result = await client.findReferences('foo'); + expect(result).toBeNull(); + }); + + it('findCallers should return null without throw', async () => { + const result = await client.findCallers('foo'); + expect(result).toBeNull(); + }); + + it('findCallees should return null without throw', async () => { + const result = await client.findCallees('foo'); + expect(result).toBeNull(); + }); + + it('analyzeDependencies should return null without throw', async () => { + 
const result = await client.analyzeDependencies('src/'); + expect(result).toBeNull(); + }); + + it('analyzeComplexity should return null without throw', async () => { + const result = await client.analyzeComplexity('src/index.js'); + expect(result).toBeNull(); + }); + + it('analyzeCodebase should return null without throw', async () => { + const result = await client.analyzeCodebase('.'); + expect(result).toBeNull(); + }); + + it('getProjectStats should return null without throw', async () => { + const result = await client.getProjectStats(); + expect(result).toBeNull(); + }); + + it('should warn only once about no provider', async () => { + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(); + + await client.findDefinition('a'); + await client.findReferences('b'); + await client.analyzeCodebase('c'); + + const noProviderWarnings = warnSpy.mock.calls.filter((call) => + call[0].includes('No provider available'), + ); + expect(noProviderWarnings).toHaveLength(1); + warnSpy.mockRestore(); + }); + }); + + describe('CodeIntelEnricher without provider', () => { + let enricher; + + beforeEach(() => { + const client = new CodeIntelClient(); // no provider + enricher = new CodeIntelEnricher(client); + }); + + it('assessImpact should handle null from primitives', async () => { + const result = await enricher.assessImpact(['src/foo.js']); + expect(result.blastRadius).toBe(0); + }); + + it('detectDuplicates should return null', async () => { + const result = await enricher.detectDuplicates('something'); + expect(result).toBeNull(); + }); + + it('getConventions should return null', async () => { + const result = await enricher.getConventions('src/'); + expect(result).toBeNull(); + }); + + it('findTests should return null', async () => { + const result = await enricher.findTests('foo'); + expect(result).toBeNull(); + }); + + it('describeProject should return null', async () => { + const result = await enricher.describeProject(); + expect(result).toBeNull(); + }); + 
}); + + describe('enrichWithCodeIntel convenience function', () => { + beforeEach(() => { + _resetForTesting(); + }); + + it('should return baseResult unchanged when no provider', async () => { + const baseResult = { data: 'test', value: 42 }; + const result = await enrichWithCodeIntel(baseResult); + expect(result).toEqual(baseResult); + }); + + it('should enrich baseResult with capabilities when provider available', async () => { + _resetForTesting(); + const mockMcpCallFn = jest.fn().mockResolvedValue({ + files: ['a.js'], + structure: { type: 'flat' }, + patterns: ['singleton'], + }); + const client = getClient({ mcpCallFn: mockMcpCallFn }); + expect(client.isCodeIntelAvailable()).toBe(true); + + const baseResult = { data: 'test' }; + const result = await enrichWithCodeIntel(baseResult, { + capabilities: ['describeProject'], + target: '.', + timeout: 5000, + }); + + expect(result.data).toBe('test'); + expect(result._codeIntel).toBeDefined(); + expect(result._codeIntel.describeProject).toBeDefined(); + }); + + it('should handle rejected capability gracefully during enrichment', async () => { + _resetForTesting(); + const mockMcpCallFn = jest.fn().mockRejectedValue(new Error('provider error')); + getClient({ mcpCallFn: mockMcpCallFn }); + + const baseResult = { data: 'test' }; + const result = await enrichWithCodeIntel(baseResult, { + capabilities: ['describeProject'], + target: '.', + timeout: 5000, + }); + + // Should still return enriched object structure without throwing + expect(result.data).toBe('test'); + expect(result._codeIntel).toBeDefined(); + }); + + it('should skip unknown capabilities without error', async () => { + _resetForTesting(); + const mockMcpCallFn = jest.fn().mockResolvedValue({ files: 10 }); + getClient({ mcpCallFn: mockMcpCallFn }); + + const baseResult = { value: 1 }; + const result = await enrichWithCodeIntel(baseResult, { + capabilities: ['nonExistentCapability'], + target: '.', + }); + + expect(result.value).toBe(1); + 
expect(result._codeIntel).toBeDefined(); + expect(result._codeIntel.nonExistentCapability).toBeUndefined(); + }); + + it('should respect fallbackBehavior silent mode', async () => { + _resetForTesting(); + const mockMcpCallFn = jest.fn().mockResolvedValue({ files: [] }); + getClient({ mcpCallFn: mockMcpCallFn }); + + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(); + const baseResult = { data: 'test' }; + await enrichWithCodeIntel(baseResult, { + capabilities: ['describeProject'], + fallbackBehavior: 'silent', + }); + // No enrichment warnings should be logged in silent mode + const enrichmentWarnings = warnSpy.mock.calls.filter((call) => + call[0].includes('Enrichment failed'), + ); + expect(enrichmentWarnings).toHaveLength(0); + warnSpy.mockRestore(); + }); + }); + + describe('Regression: existing tasks not broken (NFR-4)', () => { + it('should be possible to require the module without errors', () => { + expect(() => { + require('../../.aios-core/core/code-intel'); + }).not.toThrow(); + }); + + it('getClient should return a valid client instance', () => { + _resetForTesting(); + const client = getClient(); + expect(client).toBeDefined(); + expect(typeof client.findDefinition).toBe('function'); + expect(typeof client.isCodeIntelAvailable).toBe('function'); + }); + + it('isCodeIntelAvailable should return boolean without error', () => { + _resetForTesting(); + const result = isCodeIntelAvailable(); + expect(typeof result).toBe('boolean'); + }); + }); +}); + +``` + +================================================== +📄 tests/code-intel/dev-helper.test.js +================================================== +```js +'use strict'; + +const { + checkBeforeWriting, + suggestReuse, + getConventionsForPath, + assessRefactoringImpact, + _formatSuggestion, + _calculateRiskLevel, + RISK_THRESHOLDS, +} = require('../../.aios-core/core/code-intel/helpers/dev-helper'); + +// Mock the code-intel module +jest.mock('../../.aios-core/core/code-intel/index', () => ({ 
+ isCodeIntelAvailable: jest.fn(), + getEnricher: jest.fn(), + getClient: jest.fn(), +})); + +const { + isCodeIntelAvailable, + getEnricher, + getClient, +} = require('../../.aios-core/core/code-intel/index'); + +// --- Helper to setup mocks --- + +function setupProviderAvailable() { + isCodeIntelAvailable.mockReturnValue(true); +} + +function setupProviderUnavailable() { + isCodeIntelAvailable.mockReturnValue(false); +} + +function createMockEnricher(overrides = {}) { + const enricher = { + detectDuplicates: jest.fn().mockResolvedValue(null), + assessImpact: jest.fn().mockResolvedValue(null), + getConventions: jest.fn().mockResolvedValue(null), + findTests: jest.fn().mockResolvedValue(null), + describeProject: jest.fn().mockResolvedValue(null), + ...overrides, + }; + getEnricher.mockReturnValue(enricher); + return enricher; +} + +function createMockClient(overrides = {}) { + const client = { + findReferences: jest.fn().mockResolvedValue(null), + findDefinition: jest.fn().mockResolvedValue(null), + findCallers: jest.fn().mockResolvedValue(null), + findCallees: jest.fn().mockResolvedValue(null), + analyzeDependencies: jest.fn().mockResolvedValue(null), + analyzeComplexity: jest.fn().mockResolvedValue(null), + analyzeCodebase: jest.fn().mockResolvedValue(null), + getProjectStats: jest.fn().mockResolvedValue(null), + ...overrides, + }; + getClient.mockReturnValue(client); + return client; +} + +// --- Tests --- + +beforeEach(() => { + jest.clearAllMocks(); +}); + +describe('DevHelper', () => { + // === T1: checkBeforeWriting with provider (match found) === + describe('checkBeforeWriting', () => { + it('should return duplicates and suggestion when matches found (T1)', async () => { + setupProviderAvailable(); + createMockEnricher({ + detectDuplicates: jest.fn().mockResolvedValue({ + matches: [{ file: 'src/utils/helper.js', line: 10, context: 'function helper()' }], + codebaseOverview: {}, + }), + }); + createMockClient({ + findReferences: jest.fn().mockResolvedValue([ 
+ { file: 'src/index.js', line: 5, context: 'require(helper)' }, + ]), + }); + + const result = await checkBeforeWriting('helper.js', 'utility helper function'); + + expect(result).not.toBeNull(); + expect(result.duplicates.matches).toHaveLength(1); + expect(result.references).toHaveLength(1); + expect(result.suggestion).toContain('REUSE'); + expect(result.suggestion).toContain('IDS Article IV-A'); + }); + + // === T2: checkBeforeWriting with provider (no match) === + it('should return null when no matches found (T2)', async () => { + setupProviderAvailable(); + createMockEnricher({ + detectDuplicates: jest.fn().mockResolvedValue({ matches: [], codebaseOverview: {} }), + }); + createMockClient({ + findReferences: jest.fn().mockResolvedValue([]), + }); + + const result = await checkBeforeWriting('brand-new-module.js', 'completely new thing'); + + expect(result).toBeNull(); + }); + + // === T3: checkBeforeWriting without provider === + it('should return null without throw when no provider (T3)', async () => { + setupProviderUnavailable(); + + const result = await checkBeforeWriting('test.js', 'some description'); + + expect(result).toBeNull(); + expect(getEnricher).not.toHaveBeenCalled(); + }); + + it('should return null if enricher throws', async () => { + setupProviderAvailable(); + createMockEnricher({ + detectDuplicates: jest.fn().mockRejectedValue(new Error('provider error')), + }); + createMockClient(); + + const result = await checkBeforeWriting('test.js', 'desc'); + + expect(result).toBeNull(); + }); + }); + + // === T4: suggestReuse finds existing definition === + describe('suggestReuse', () => { + it('should return REUSE suggestion when symbol has many references (T4)', async () => { + setupProviderAvailable(); + createMockClient({ + findDefinition: jest.fn().mockResolvedValue({ + file: 'src/core/parser.js', + line: 42, + column: 0, + context: 'function parseConfig()', + }), + findReferences: jest.fn().mockResolvedValue([ + { file: 'src/a.js', line: 1 }, + 
{ file: 'src/b.js', line: 2 }, + { file: 'src/c.js', line: 3 }, + { file: 'src/d.js', line: 4 }, + ]), + }); + + const result = await suggestReuse('parseConfig'); + + expect(result).not.toBeNull(); + expect(result.file).toBe('src/core/parser.js'); + expect(result.line).toBe(42); + expect(result.references).toBe(4); + expect(result.suggestion).toBe('REUSE'); + }); + + it('should return ADAPT suggestion when symbol has few references', async () => { + setupProviderAvailable(); + createMockClient({ + findDefinition: jest.fn().mockResolvedValue({ + file: 'src/old.js', + line: 10, + }), + findReferences: jest.fn().mockResolvedValue([ + { file: 'src/old.js', line: 10 }, + ]), + }); + + const result = await suggestReuse('oldHelper'); + + expect(result).not.toBeNull(); + expect(result.suggestion).toBe('ADAPT'); + }); + + // === T5: suggestReuse no match === + it('should return null when symbol not found (T5)', async () => { + setupProviderAvailable(); + createMockClient({ + findDefinition: jest.fn().mockResolvedValue(null), + findReferences: jest.fn().mockResolvedValue([]), + }); + + const result = await suggestReuse('nonExistentSymbol'); + + expect(result).toBeNull(); + }); + + it('should return null without throw when no provider', async () => { + setupProviderUnavailable(); + + const result = await suggestReuse('anything'); + + expect(result).toBeNull(); + }); + }); + + // === T6: assessRefactoringImpact with blast radius === + describe('assessRefactoringImpact', () => { + it('should return blast radius and risk level (T6)', async () => { + setupProviderAvailable(); + createMockEnricher({ + assessImpact: jest.fn().mockResolvedValue({ + references: Array.from({ length: 20 }, (_, i) => ({ + file: `src/file${i}.js`, + line: i, + })), + complexity: { average: 5.2, perFile: [] }, + blastRadius: 20, + }), + }); + + const result = await assessRefactoringImpact(['src/target.js']); + + expect(result).not.toBeNull(); + expect(result.blastRadius).toBe(20); + 
expect(result.riskLevel).toBe('HIGH'); + expect(result.references).toHaveLength(20); + expect(result.complexity).toBeDefined(); + }); + + it('should return LOW risk for small blast radius', async () => { + setupProviderAvailable(); + createMockEnricher({ + assessImpact: jest.fn().mockResolvedValue({ + references: [{ file: 'a.js', line: 1 }], + complexity: { average: 1, perFile: [] }, + blastRadius: 1, + }), + }); + + const result = await assessRefactoringImpact(['src/small.js']); + + expect(result.riskLevel).toBe('LOW'); + }); + + it('should return MEDIUM risk for moderate blast radius', async () => { + setupProviderAvailable(); + createMockEnricher({ + assessImpact: jest.fn().mockResolvedValue({ + references: Array.from({ length: 10 }, () => ({ file: 'a.js', line: 1 })), + complexity: { average: 3, perFile: [] }, + blastRadius: 10, + }), + }); + + const result = await assessRefactoringImpact(['src/mid.js']); + + expect(result.riskLevel).toBe('MEDIUM'); + }); + + // === T7: assessRefactoringImpact without provider === + it('should return null without throw when no provider (T7)', async () => { + setupProviderUnavailable(); + + const result = await assessRefactoringImpact(['any.js']); + + expect(result).toBeNull(); + }); + + it('should return null when assessImpact returns null', async () => { + setupProviderAvailable(); + createMockEnricher({ + assessImpact: jest.fn().mockResolvedValue(null), + }); + + const result = await assessRefactoringImpact(['empty.js']); + + expect(result).toBeNull(); + }); + }); + + // === T8: getConventionsForPath returns patterns === + describe('getConventionsForPath', () => { + it('should return patterns and stats (T8)', async () => { + setupProviderAvailable(); + createMockEnricher({ + getConventions: jest.fn().mockResolvedValue({ + patterns: ['kebab-case files', 'CommonJS modules', 'JSDoc comments'], + stats: { files: 120, languages: ['javascript'] }, + }), + }); + + const result = await getConventionsForPath('src/'); + + 
expect(result).not.toBeNull(); + expect(result.patterns).toHaveLength(3); + expect(result.stats.files).toBe(120); + }); + + it('should return null without throw when no provider', async () => { + setupProviderUnavailable(); + + const result = await getConventionsForPath('src/'); + + expect(result).toBeNull(); + }); + }); + + // === T9: All functions fallback (provider unavailable) === + describe('All functions fallback (T9)', () => { + beforeEach(() => { + setupProviderUnavailable(); + }); + + it('all 4 functions return null when no provider', async () => { + const results = await Promise.all([ + checkBeforeWriting('file.js', 'desc'), + suggestReuse('symbol'), + getConventionsForPath('src/'), + assessRefactoringImpact(['file.js']), + ]); + + expect(results).toEqual([null, null, null, null]); + }); + }); + + // === Private helpers === + describe('_calculateRiskLevel', () => { + it('should return LOW for 0 refs', () => { + expect(_calculateRiskLevel(0)).toBe('LOW'); + }); + + it('should return LOW for threshold boundary', () => { + expect(_calculateRiskLevel(RISK_THRESHOLDS.LOW_MAX)).toBe('LOW'); + }); + + it('should return MEDIUM for LOW_MAX + 1', () => { + expect(_calculateRiskLevel(RISK_THRESHOLDS.LOW_MAX + 1)).toBe('MEDIUM'); + }); + + it('should return MEDIUM for MEDIUM_MAX boundary', () => { + expect(_calculateRiskLevel(RISK_THRESHOLDS.MEDIUM_MAX)).toBe('MEDIUM'); + }); + + it('should return HIGH for MEDIUM_MAX + 1', () => { + expect(_calculateRiskLevel(RISK_THRESHOLDS.MEDIUM_MAX + 1)).toBe('HIGH'); + }); + }); + + describe('_formatSuggestion', () => { + it('should format with both duplicates and refs', () => { + const msg = _formatSuggestion( + { matches: [{ file: 'a.js', line: 1 }] }, + [{ file: 'b.js', line: 2 }] + ); + + expect(msg).toContain('1 similar match'); + expect(msg).toContain('a.js:1'); + expect(msg).toContain('1 location'); + expect(msg).toContain('IDS Article IV-A'); + }); + + it('should format with only duplicates', () => { + const msg = 
_formatSuggestion( + { matches: [{ file: 'a.js' }] }, + null + ); + + expect(msg).toContain('1 similar match'); + expect(msg).toContain('IDS Article IV-A'); + }); + + it('should format with only refs', () => { + const msg = _formatSuggestion(null, [{ file: 'b.js', line: 5 }]); + + expect(msg).toContain('1 location'); + expect(msg).toContain('b.js:5'); + }); + }); +}); + +``` + +================================================== +📄 tests/code-intel/code-graph-provider.test.js +================================================== +```js +'use strict'; + +const { CodeGraphProvider, TOOL_MAP } = require('../../.aios-core/core/code-intel/providers/code-graph-provider'); + +describe('CodeGraphProvider', () => { + let provider; + let mockMcpCallFn; + + beforeEach(() => { + mockMcpCallFn = jest.fn(); + provider = new CodeGraphProvider({ mcpCallFn: mockMcpCallFn }); + }); + + describe('TOOL_MAP', () => { + it('should map all 8 capabilities to Code Graph MCP tool names', () => { + expect(TOOL_MAP).toEqual({ + findDefinition: 'find_definition', + findReferences: 'find_references', + findCallers: 'find_callers', + findCallees: 'find_callees', + analyzeDependencies: 'dependency_analysis', + analyzeComplexity: 'complexity_analysis', + analyzeCodebase: 'analyze_codebase', + getProjectStats: 'project_statistics', + }); + }); + + it('should have exactly 8 entries', () => { + expect(Object.keys(TOOL_MAP)).toHaveLength(8); + }); + }); + + describe('findDefinition', () => { + it('should call MCP with find_definition and normalize result', async () => { + mockMcpCallFn.mockResolvedValue({ + file: 'src/index.js', + line: 42, + column: 5, + context: 'function foo() {', + }); + + const result = await provider.findDefinition('foo'); + expect(mockMcpCallFn).toHaveBeenCalledWith('code-graph', 'find_definition', { symbol: 'foo' }); + expect(result).toEqual({ + file: 'src/index.js', + line: 42, + column: 5, + context: 'function foo() {', + }); + }); + + it('should return null when MCP returns 
null', async () => { + mockMcpCallFn.mockResolvedValue(null); + const result = await provider.findDefinition('missing'); + expect(result).toBeNull(); + }); + }); + + describe('findReferences', () => { + it('should normalize array response', async () => { + mockMcpCallFn.mockResolvedValue([ + { file: 'a.js', line: 1, context: 'use foo' }, + { file: 'b.js', line: 5, context: 'call foo' }, + ]); + + const result = await provider.findReferences('foo'); + expect(result).toHaveLength(2); + expect(result[0]).toEqual({ file: 'a.js', line: 1, context: 'use foo' }); + }); + + it('should normalize object with references key', async () => { + mockMcpCallFn.mockResolvedValue({ + references: [{ path: 'c.js', row: 10, snippet: 'ref foo' }], + }); + + const result = await provider.findReferences('foo'); + expect(result).toHaveLength(1); + expect(result[0]).toEqual({ file: 'c.js', line: 10, context: 'ref foo' }); + }); + }); + + describe('findCallers', () => { + it('should normalize callers result', async () => { + mockMcpCallFn.mockResolvedValue({ + callers: [{ caller: 'bar', file: 'bar.js', line: 3 }], + }); + + const result = await provider.findCallers('foo'); + expect(result).toEqual([{ caller: 'bar', file: 'bar.js', line: 3 }]); + }); + }); + + describe('findCallees', () => { + it('should normalize callees result', async () => { + mockMcpCallFn.mockResolvedValue({ + callees: [{ callee: 'baz', file: 'baz.js', line: 7 }], + }); + + const result = await provider.findCallees('foo'); + expect(result).toEqual([{ callee: 'baz', file: 'baz.js', line: 7 }]); + }); + }); + + describe('analyzeDependencies', () => { + it('should normalize dependency graph', async () => { + mockMcpCallFn.mockResolvedValue({ + nodes: ['a.js', 'b.js'], + edges: [{ from: 'a.js', to: 'b.js' }], + }); + + const result = await provider.analyzeDependencies('src/'); + expect(result).toEqual({ + nodes: ['a.js', 'b.js'], + edges: [{ from: 'a.js', to: 'b.js' }], + }); + }); + }); + + describe('analyzeComplexity', () 
=> { + it('should normalize complexity metrics', async () => { + mockMcpCallFn.mockResolvedValue({ + score: 15, + details: { cyclomatic: 15, halstead: 200 }, + }); + + const result = await provider.analyzeComplexity('src/index.js'); + expect(result).toEqual({ + score: 15, + details: { cyclomatic: 15, halstead: 200 }, + }); + }); + + it('should preserve score=0 without falling through to alternatives', async () => { + mockMcpCallFn.mockResolvedValue({ + score: 0, + details: {}, + }); + + const result = await provider.analyzeComplexity('src/simple.js'); + expect(result.score).toBe(0); + }); + }); + + describe('analyzeCodebase', () => { + it('should normalize codebase analysis', async () => { + mockMcpCallFn.mockResolvedValue({ + files: ['a.js', 'b.js'], + structure: { type: 'flat' }, + patterns: ['singleton'], + }); + + const result = await provider.analyzeCodebase('.'); + expect(result).toEqual({ + files: ['a.js', 'b.js'], + structure: { type: 'flat' }, + patterns: ['singleton'], + }); + }); + }); + + describe('getProjectStats', () => { + it('should normalize project statistics', async () => { + mockMcpCallFn.mockResolvedValue({ + files: 150, + lines: 25000, + languages: { javascript: 120, yaml: 30 }, + }); + + const result = await provider.getProjectStats(); + expect(result).toEqual({ + files: 150, + lines: 25000, + languages: { javascript: 120, yaml: 30 }, + }); + }); + + it('should preserve files=0 and lines=0 without falling through', async () => { + mockMcpCallFn.mockResolvedValue({ + files: 0, + lines: 0, + languages: {}, + }); + + const result = await provider.getProjectStats(); + expect(result.files).toBe(0); + expect(result.lines).toBe(0); + }); + }); + + describe('no mcpCallFn configured', () => { + it('should return null for all capabilities', async () => { + const bareProvider = new CodeGraphProvider(); + expect(await bareProvider.findDefinition('foo')).toBeNull(); + expect(await bareProvider.findReferences('foo')).toBeNull(); + expect(await 
bareProvider.getProjectStats()).toBeNull(); + }); + }); +}); + +``` + +================================================== +📄 tests/code-intel/registry-syncer.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const path = require('path'); +const yaml = require('js-yaml'); +const { RegistrySyncer, inferRole, ROLE_MAP } = require('../../.aios-core/core/code-intel/registry-syncer'); + +// Mock fs module for controlled testing +jest.mock('fs'); + +// Mock code-intel index (fallback check) +jest.mock('../../.aios-core/core/code-intel', () => ({ + getClient: jest.fn(), + isCodeIntelAvailable: jest.fn().mockReturnValue(false), +})); + +// Mock registry-loader +jest.mock('../../.aios-core/core/ids/registry-loader', () => { + const DEFAULT_REGISTRY_PATH = '/mock/entity-registry.yaml'; + return { + RegistryLoader: jest.fn().mockImplementation(() => ({ + load: jest.fn().mockReturnValue({ + metadata: { version: '1.0.0', lastUpdated: null, entityCount: 3 }, + entities: { + tasks: { + 'dev-develop-story': { + path: '.aios-core/development/tasks/dev-develop-story.md', + type: 'task', + purpose: 'Develop story', + keywords: ['develop', 'story'], + usedBy: [], + dependencies: [], + }, + 'create-next-story': { + path: '.aios-core/development/tasks/create-next-story.md', + type: 'task', + purpose: 'Create next story', + keywords: ['create', 'story'], + usedBy: [], + dependencies: [], + }, + }, + scripts: { + 'greeting-builder': { + path: '.aios-core/development/scripts/greeting-builder.js', + type: 'script', + purpose: 'Build agent greetings', + keywords: ['greeting', 'builder'], + usedBy: [], + dependencies: [], + }, + }, + }, + categories: ['tasks', 'scripts'], + }), + })), + DEFAULT_REGISTRY_PATH, + }; +}); + +function createMockClient(overrides = {}) { + return { + findReferences: jest.fn().mockResolvedValue([ + { file: '.aios-core/development/tasks/create-next-story.md' }, + ]), + analyzeDependencies: 
jest.fn().mockResolvedValue({ + dependencies: [ + { path: '../ids/registry-loader' }, + { path: 'fs' }, + ], + }), + _activeProvider: { name: 'code-graph' }, + ...overrides, + }; +} + +function createSyncer(options = {}) { + return new RegistrySyncer({ + registryPath: '/mock/entity-registry.yaml', + repoRoot: '/mock/repo', + client: options.client || createMockClient(), + logger: options.logger || jest.fn(), + ...options, + }); +} + +describe('RegistrySyncer', () => { + beforeEach(() => { + jest.clearAllMocks(); + // Default fs mocks + fs.existsSync.mockReturnValue(true); + fs.writeFileSync.mockImplementation(() => {}); + fs.renameSync.mockImplementation(() => {}); + fs.statSync.mockReturnValue({ mtimeMs: Date.now() + 10000 }); // Future mtime = always process + }); + + // T1: Batch sync with mock provider (happy path) + describe('Batch sync (T1 — AC1)', () => { + it('should process all entities in registry', async () => { + const logger = jest.fn(); + const syncer = createSyncer({ logger }); + + const stats = await syncer.sync({ full: true }); + + expect(stats.total).toBe(3); + expect(stats.processed).toBe(3); + expect(stats.skipped).toBe(0); + expect(stats.errors).toBe(0); + }); + + it('should call atomic write after sync', async () => { + const syncer = createSyncer(); + await syncer.sync({ full: true }); + + expect(fs.writeFileSync).toHaveBeenCalledWith( + expect.stringContaining('.tmp'), + expect.any(String), + 'utf8' + ); + expect(fs.renameSync).toHaveBeenCalled(); + }); + }); + + // T2: usedBy population via findReferences (AC2) + describe('usedBy population (T2 — AC2)', () => { + it('should populate usedBy with entity IDs from findReferences', async () => { + const mockClient = createMockClient({ + findReferences: jest.fn().mockResolvedValue([ + { file: '.aios-core/development/tasks/create-next-story.md' }, + ]), + }); + const syncer = createSyncer({ client: mockClient }); + await syncer.sync({ full: true }); + + 
expect(mockClient.findReferences).toHaveBeenCalled(); + // Verify the call was made for entity IDs + const calls = mockClient.findReferences.mock.calls; + expect(calls.length).toBeGreaterThan(0); + }); + + it('should deduplicate usedBy entries', async () => { + const mockClient = createMockClient({ + findReferences: jest.fn().mockResolvedValue([ + { file: '.aios-core/development/tasks/create-next-story.md' }, + { file: '.aios-core/development/tasks/create-next-story.md' }, // Duplicate + ]), + }); + const syncer = createSyncer({ client: mockClient }); + + const entities = { + tasks: { + 'test-entity': { + path: '.aios-core/development/tasks/test.md', + usedBy: [], + dependencies: [], + }, + }, + }; + + const result = await syncer.syncEntity( + { id: 'test-entity', category: 'tasks', data: entities.tasks['test-entity'] }, + entities, + true + ); + + expect(result).toBe(true); + // usedBy should be deduplicated + const usedBy = entities.tasks['test-entity'].usedBy; + const unique = [...new Set(usedBy)]; + expect(usedBy.length).toBe(unique.length); + }); + }); + + // T3: Dependencies population via analyzeDependencies (AC3) + describe('Dependencies population (T3 — AC3)', () => { + it('should populate dependencies for JS files via analyzeDependencies', async () => { + const mockClient = createMockClient({ + analyzeDependencies: jest.fn().mockResolvedValue({ + dependencies: [ + { path: '../ids/registry-loader' }, + { path: './helper' }, + ], + }), + }); + const syncer = createSyncer({ client: mockClient }); + + const entity = { + id: 'greeting-builder', + category: 'scripts', + data: { + path: '.aios-core/development/scripts/greeting-builder.js', + usedBy: [], + dependencies: [], + }, + }; + + await syncer.syncEntity(entity, {}, true); + + expect(mockClient.analyzeDependencies).toHaveBeenCalled(); + expect(entity.data.dependencies).toContain('../ids/registry-loader'); + expect(entity.data.dependencies).toContain('./helper'); + }); + + it('should NOT call 
analyzeDependencies for non-JS files', async () => { + const mockClient = createMockClient(); + const syncer = createSyncer({ client: mockClient }); + + const entity = { + id: 'dev-develop-story', + category: 'tasks', + data: { + path: '.aios-core/development/tasks/dev-develop-story.md', + usedBy: [], + dependencies: [], + }, + }; + + await syncer.syncEntity(entity, {}, true); + + expect(mockClient.analyzeDependencies).not.toHaveBeenCalled(); + }); + }); + + // T4: codeIntelMetadata schema (AC4) + describe('codeIntelMetadata schema (T4 — AC4)', () => { + it('should add codeIntelMetadata with correct fields', async () => { + const syncer = createSyncer(); + + const entity = { + id: 'dev-develop-story', + category: 'tasks', + data: { + path: '.aios-core/development/tasks/dev-develop-story.md', + usedBy: [], + dependencies: [], + }, + }; + + await syncer.syncEntity(entity, {}, true); + + const metadata = entity.data.codeIntelMetadata; + expect(metadata).toBeDefined(); + expect(metadata).toHaveProperty('callerCount'); + expect(metadata).toHaveProperty('role'); + expect(metadata).toHaveProperty('lastSynced'); + expect(metadata).toHaveProperty('provider'); + expect(typeof metadata.callerCount).toBe('number'); + expect(typeof metadata.role).toBe('string'); + expect(typeof metadata.lastSynced).toBe('string'); + expect(typeof metadata.provider).toBe('string'); + }); + + it('should set callerCount based on usedBy length', async () => { + const mockClient = createMockClient({ + findReferences: jest.fn().mockResolvedValue([ + { file: '.aios-core/development/tasks/create-next-story.md' }, + { file: '.aios-core/development/scripts/greeting-builder.js' }, + ]), + }); + const syncer = createSyncer({ client: mockClient }); + + const entities = { + tasks: { + 'test-entity': { + path: '.aios-core/development/tasks/test.md', + usedBy: [], + dependencies: [], + }, + 'create-next-story': { + path: '.aios-core/development/tasks/create-next-story.md', + }, + }, + scripts: { + 
'greeting-builder': {
+            path: '.aios-core/development/scripts/greeting-builder.js',
+          },
+        },
+      };
+
+      await syncer.syncEntity(
+        { id: 'test-entity', category: 'tasks', data: entities.tasks['test-entity'] },
+        entities,
+        true
+      );
+
+      expect(entities.tasks['test-entity'].codeIntelMetadata.callerCount).toBe(
+        entities.tasks['test-entity'].usedBy.length
+      );
+    });
+
+    it('should set provider from active provider name', async () => {
+      const syncer = createSyncer();
+
+      const entity = {
+        id: 'test',
+        category: 'tasks',
+        data: { path: '.aios-core/development/tasks/test.md', usedBy: [], dependencies: [] },
+      };
+
+      await syncer.syncEntity(entity, {}, true);
+      expect(entity.data.codeIntelMetadata.provider).toBe('code-graph');
+    });
+  });
+
+  // T5: Fallback without provider (AC5)
+  describe('Fallback without provider (T5 — AC5)', () => {
+    it('should skip enrichment and log message when no provider available', async () => {
+      const logger = jest.fn();
+      const syncer = new RegistrySyncer({
+        registryPath: '/mock/entity-registry.yaml',
+        repoRoot: '/mock/repo',
+        client: null, // No client
+        logger,
+      });
+
+      const stats = await syncer.sync();
+
+      expect(stats.aborted).toBe(true);
+      expect(stats.processed).toBe(0);
+      expect(logger).toHaveBeenCalledWith(
+        expect.stringContaining('No code intelligence provider available')
+      );
+      // Registry should NOT be written
+      expect(fs.writeFileSync).not.toHaveBeenCalled();
+      expect(fs.renameSync).not.toHaveBeenCalled();
+    });
+  });
+
+  // T6: Incremental sync — skip unchanged entity (AC6)
+  describe('Incremental sync — skip unchanged (T6 — AC6)', () => {
+    it('should skip entity when mtime <= lastSynced', async () => {
+      const pastDate = new Date('2026-01-01T00:00:00Z');
+      fs.statSync.mockReturnValue({ mtimeMs: pastDate.getTime() });
+
+      const syncer = createSyncer();
+
+      const entity = {
+        id: 'dev-develop-story',
+        category: 'tasks',
+        data: {
+          path: '.aios-core/development/tasks/dev-develop-story.md',
+          usedBy: [],
+          
dependencies: [], + codeIntelMetadata: { + lastSynced: '2026-02-01T00:00:00Z', // After mtime + callerCount: 0, + role: 'task', + provider: 'code-graph', + }, + }, + }; + + const result = await syncer.syncEntity(entity, {}, false); // Incremental + expect(result).toBe(false); // Skipped + }); + }); + + // T7: Incremental sync — process entity without lastSynced (AC6) + describe('Incremental sync — process new entity (T7 — AC6)', () => { + it('should process entity that has no codeIntelMetadata', async () => { + const syncer = createSyncer(); + + const entity = { + id: 'dev-develop-story', + category: 'tasks', + data: { + path: '.aios-core/development/tasks/dev-develop-story.md', + usedBy: [], + dependencies: [], + // No codeIntelMetadata + }, + }; + + const result = await syncer.syncEntity(entity, {}, false); // Incremental + expect(result).toBe(true); // Processed + expect(entity.data.codeIntelMetadata).toBeDefined(); + }); + + it('should process entity with codeIntelMetadata but no lastSynced', async () => { + const syncer = createSyncer(); + + const entity = { + id: 'dev-develop-story', + category: 'tasks', + data: { + path: '.aios-core/development/tasks/dev-develop-story.md', + usedBy: [], + dependencies: [], + codeIntelMetadata: { callerCount: 0, role: 'task', provider: 'code-graph' }, + }, + }; + + const result = await syncer.syncEntity(entity, {}, false); // Incremental + expect(result).toBe(true); // Processed + }); + }); + + // T8: Full resync — process all regardless of mtime (AC7) + describe('Full resync (T8 — AC7)', () => { + it('should process all entities with --full even if lastSynced is recent', async () => { + const pastDate = new Date('2020-01-01T00:00:00Z'); + fs.statSync.mockReturnValue({ mtimeMs: pastDate.getTime() }); + + const logger = jest.fn(); + const syncer = createSyncer({ logger }); + + const stats = await syncer.sync({ full: true }); + + expect(stats.processed).toBe(3); // All 3 entities + expect(stats.skipped).toBe(0); + }); + }); + + 
// T9: Atomic write (temp file + rename) + describe('Atomic write (T9)', () => { + it('should write to temp file then rename', async () => { + const syncer = createSyncer(); + await syncer.sync({ full: true }); + + // writeFileSync should be called with .tmp extension + const writeCall = fs.writeFileSync.mock.calls[0]; + expect(writeCall[0]).toMatch(/\.tmp$/); + + // renameSync should be called to replace original + expect(fs.renameSync).toHaveBeenCalledWith( + expect.stringContaining('.tmp'), + '/mock/entity-registry.yaml' + ); + }); + + it('should not corrupt registry on write failure', async () => { + fs.writeFileSync.mockImplementation(() => { + throw new Error('Disk full'); + }); + + const syncer = createSyncer(); + + // sync should throw but original file should be untouched + await expect(syncer.sync({ full: true })).rejects.toBeDefined(); + // renameSync should NOT have been called + expect(fs.renameSync).not.toHaveBeenCalled(); + }); + }); + + // T10: Entity without source path — skip with warning + describe('Entity without source (T10)', () => { + it('should skip entity without path', async () => { + const syncer = createSyncer(); + + const entity = { + id: 'no-source', + category: 'tasks', + data: { + type: 'task', + purpose: 'Test', + usedBy: [], + dependencies: [], + // No path + }, + }; + + const result = await syncer.syncEntity(entity, {}, true); + expect(result).toBe(false); + }); + }); + + // inferRole tests + describe('inferRole', () => { + it('should infer role from path patterns', () => { + expect(inferRole('.aios-core/development/tasks/dev.md')).toBe('task'); + expect(inferRole('.aios-core/development/agents/dev.md')).toBe('agent'); + expect(inferRole('.aios-core/development/workflows/sdc.yaml')).toBe('workflow'); + expect(inferRole('.aios-core/development/scripts/build.js')).toBe('script'); + expect(inferRole('.aios-core/core/utils/helper.js')).toBe('module'); + expect(inferRole('.aios-core/data/entity-registry.yaml')).toBe('config'); + 
expect(inferRole('.aios-core/product/templates/prd.yaml')).toBe('template'); + }); + + it('should return "unknown" for unmatched paths', () => { + expect(inferRole('random/path/file.txt')).toBe('unknown'); + expect(inferRole('')).toBe('unknown'); + expect(inferRole(null)).toBe('unknown'); + }); + }); + + // getStats tests + describe('getStats', () => { + it('should return sync statistics', async () => { + const syncer = createSyncer(); + await syncer.sync({ full: true }); + + const stats = syncer.getStats(); + expect(stats).toHaveProperty('processed'); + expect(stats).toHaveProperty('skipped'); + expect(stats).toHaveProperty('errors'); + expect(stats).toHaveProperty('total'); + }); + }); +}); + +``` + +================================================== +📄 tests/code-intel/code-intel-enricher.test.js +================================================== +```js +'use strict'; + +const { CodeIntelEnricher } = require('../../.aios-core/core/code-intel/code-intel-enricher'); + +describe('CodeIntelEnricher', () => { + let enricher; + let mockClient; + + beforeEach(() => { + mockClient = { + findDefinition: jest.fn(), + findReferences: jest.fn(), + findCallers: jest.fn(), + findCallees: jest.fn(), + analyzeDependencies: jest.fn(), + analyzeComplexity: jest.fn(), + analyzeCodebase: jest.fn(), + getProjectStats: jest.fn(), + }; + enricher = new CodeIntelEnricher(mockClient); + }); + + describe('assessImpact (AC7)', () => { + it('should compose findReferences + analyzeComplexity', async () => { + mockClient.findReferences.mockResolvedValue([ + { file: 'b.js', line: 10, context: 'uses foo' }, + ]); + mockClient.analyzeComplexity.mockResolvedValue({ score: 8, details: {} }); + + const result = await enricher.assessImpact(['src/foo.js']); + + expect(mockClient.findReferences).toHaveBeenCalledWith('src/foo.js'); + expect(mockClient.analyzeComplexity).toHaveBeenCalledWith('src/foo.js'); + expect(result.blastRadius).toBe(1); + expect(result.complexity.average).toBe(8); + }); + + 
it('should return null for empty files array', async () => { + expect(await enricher.assessImpact([])).toBeNull(); + expect(await enricher.assessImpact(null)).toBeNull(); + }); + + it('should handle null results from primitives', async () => { + mockClient.findReferences.mockResolvedValue(null); + mockClient.analyzeComplexity.mockResolvedValue(null); + + const result = await enricher.assessImpact(['src/foo.js']); + expect(result.blastRadius).toBe(0); + expect(result.complexity.average).toBe(0); + }); + }); + + describe('detectDuplicates (AC7)', () => { + it('should compose findReferences + analyzeCodebase', async () => { + mockClient.findReferences.mockResolvedValue([ + { file: 'a.js', line: 5, context: 'similar code' }, + ]); + mockClient.analyzeCodebase.mockResolvedValue({ + files: ['a.js'], + structure: {}, + patterns: [], + }); + + const result = await enricher.detectDuplicates('config loader'); + + expect(mockClient.findReferences).toHaveBeenCalledWith('config loader', {}); + expect(mockClient.analyzeCodebase).toHaveBeenCalledWith('.', {}); + expect(result.matches).toHaveLength(1); + }); + + it('should return null when both primitives return null', async () => { + mockClient.findReferences.mockResolvedValue(null); + mockClient.analyzeCodebase.mockResolvedValue(null); + + const result = await enricher.detectDuplicates('nonexistent'); + expect(result).toBeNull(); + }); + }); + + describe('getConventions (AC7)', () => { + it('should compose analyzeCodebase + getProjectStats', async () => { + mockClient.analyzeCodebase.mockResolvedValue({ + files: [], + structure: {}, + patterns: ['singleton', 'factory'], + }); + mockClient.getProjectStats.mockResolvedValue({ + files: 100, + lines: 10000, + languages: { javascript: 80 }, + }); + + const result = await enricher.getConventions('src/'); + + expect(result.patterns).toEqual(['singleton', 'factory']); + expect(result.stats.files).toBe(100); + }); + }); + + describe('findTests (AC7)', () => { + it('should filter 
references to test/spec files only', async () => { + mockClient.findReferences.mockResolvedValue([ + { file: 'src/foo.js', line: 1, context: 'define' }, + { file: 'tests/foo.test.js', line: 5, context: 'import' }, + { file: '__tests__/foo.js', line: 3, context: 'require' }, + { file: 'src/foo.spec.js', line: 8, context: 'describe' }, + { file: 'src/bar.js', line: 2, context: 'usage' }, + ]); + + const result = await enricher.findTests('foo'); + expect(result).toHaveLength(3); + expect(result.map((r) => r.file)).toEqual([ + 'tests/foo.test.js', + '__tests__/foo.js', + 'src/foo.spec.js', + ]); + }); + + it('should return null when findReferences returns null', async () => { + mockClient.findReferences.mockResolvedValue(null); + expect(await enricher.findTests('foo')).toBeNull(); + }); + }); + + describe('describeProject (AC7)', () => { + it('should compose analyzeCodebase + getProjectStats', async () => { + mockClient.analyzeCodebase.mockResolvedValue({ + files: ['a.js', 'b.js'], + structure: { type: 'modular' }, + patterns: [], + }); + mockClient.getProjectStats.mockResolvedValue({ + files: 200, + lines: 30000, + languages: { javascript: 150, yaml: 50 }, + }); + + const result = await enricher.describeProject('.'); + + expect(result.codebase.files).toHaveLength(2); + expect(result.stats.files).toBe(200); + }); + + it('should return null when both return null', async () => { + mockClient.analyzeCodebase.mockResolvedValue(null); + mockClient.getProjectStats.mockResolvedValue(null); + expect(await enricher.describeProject()).toBeNull(); + }); + }); +}); + +``` + +================================================== +📄 tests/regression/tools-migration.test.js +================================================== +```js +// Integration/Performance test - uses describeIntegration +const path = require('path'); +const toolResolver = require('../../common/utils/tool-resolver'); +const ToolValidationHelper = require('../../common/utils/tool-validation-helper'); + +/** + * Tools 
Migration Regression Test Suite + * + * Task 5.3 Requirements: + * - All existing agent workflows pass unchanged + * - All existing task workflows pass unchanged + * - Zero breaking changes in API surface + * - Verify end-to-end system integrity post-migration + * + * This suite tests: + * 1. Tool resolution API remains stable + * 2. Validation API remains stable + * 3. All 12 tools work correctly + * 4. Agent-tool integration works + * 5. No regressions in existing functionality + */ +describeIntegration('Tools Migration Regression Suite', () => { + const toolsPath = path.join(__dirname, '../../.aios-core/tools'); + + // All 12 tools (8 v1.0 + 4 v2.0) + const allTools = [ + 'github-cli', + 'railway-cli', + 'supabase-cli', + 'ffmpeg', + '21st-dev-magic', + 'browser', + 'context7', + 'exa', + 'clickup', + 'google-workspace', + 'n8n', + 'supabase', + ]; + + const v1Tools = [ + 'github-cli', + 'railway-cli', + 'supabase-cli', + 'ffmpeg', + '21st-dev-magic', + 'browser', + 'context7', + 'exa', + ]; + + const v2Tools = [ + 'clickup', + 'google-workspace', + 'n8n', + 'supabase', + ]; + + beforeAll(() => { + toolResolver.setSearchPaths([toolsPath]); + }); + + afterEach(() => { + toolResolver.clearCache(); + }); + + afterAll(() => { + toolResolver.resetSearchPaths(); + toolResolver.clearCache(); + }); + + describeIntegration('Tool Resolution API Stability', () => { + test('resolveTool() API unchanged for all 12 tools', async () => { + const results = []; + + for (const toolName of allTools) { + const tool = await toolResolver.resolveTool(toolName); + results.push({ + name: toolName, + hasId: !!tool.id, + hasType: !!tool.type, + hasName: !!tool.name, + hasDescription: !!tool.description, + }); + } + + // All should have core fields + results.forEach(result => { + expect(result.hasId).toBe(true); + expect(result.hasType).toBe(true); + expect(result.hasName).toBe(true); + expect(result.hasDescription).toBe(true); + }); + }); + + test('tool resolution returns same structure as 
before', async () => {
+      const tool = await toolResolver.resolveTool('github-cli');
+
+      // Pre-migration structure still intact
+      expect(tool).toHaveProperty('id');
+      expect(tool).toHaveProperty('type');
+      expect(tool).toHaveProperty('name');
+      expect(tool).toHaveProperty('description');
+      expect(tool).toHaveProperty('knowledge_strategy');
+
+      // New field added but doesn't break anything
+      expect(tool).toHaveProperty('schema_version');
+      expect(typeof tool.schema_version).toBe('number');
+    });
+
+    test('setSearchPaths() API works as before', () => {
+      expect(() => {
+        toolResolver.setSearchPaths([toolsPath]);
+      }).not.toThrow();
+    });
+
+    test('clearCache() API works as before', () => {
+      expect(() => {
+        toolResolver.clearCache();
+      }).not.toThrow();
+    });
+
+    test('resetSearchPaths() API works as before', () => {
+      expect(() => {
+        toolResolver.resetSearchPaths();
+      }).not.toThrow();
+    });
+  });
+
+  describeIntegration('Validation API Stability', () => {
+    test('ToolValidationHelper constructor accepts executable_knowledge', () => {
+      expect(() => {
+        new ToolValidationHelper({ validators: [], helpers: [] });
+      }).not.toThrow();
+    });
+
+    test('ToolValidationHelper constructor accepts undefined (backward compat)', () => {
+      expect(() => {
+        new ToolValidationHelper(undefined);
+      }).not.toThrow();
+    });
+
+    test('validate() method signature unchanged', async () => {
+      const validator = new ToolValidationHelper(undefined);
+
+      // Old signature: validate(command, args).
+      // Assert on the resolved promise; wrapping the call in a synchronous
+      // .not.toThrow() would pass even if the async call rejected.
+      await expect(
+        validator.validate('test-command', { arg1: 'value1' }),
+      ).resolves.toBeDefined();
+    });
+
+    test('validateBatch() method signature unchanged', async () => {
+      const validator = new ToolValidationHelper(undefined);
+
+      // Old signature: validateBatch(operations)
+      await expect(
+        validator.validateBatch([
+          { command: 'cmd1', args: {} },
+          { command: 'cmd2', args: {} },
+        ]),
+      ).resolves.toBeDefined();
+    });
+
+    test('validation returns same result 
structure', async () => { + const validator = new ToolValidationHelper(undefined); + const result = await validator.validate('test', {}); + + expect(result).toHaveProperty('valid'); + expect(result).toHaveProperty('errors'); + expect(typeof result.valid).toBe('boolean'); + expect(Array.isArray(result.errors)).toBe(true); + }); + }); + + describeIntegration('End-to-End Tool Workflows', () => { + test('v1.0 tools complete workflow unchanged', async () => { + for (const toolName of v1Tools) { + // Step 1: Resolve tool + const tool = await toolResolver.resolveTool(toolName); + expect(tool.id).toBe(toolName); + + // Step 2: Create validator + const validator = new ToolValidationHelper(tool.executable_knowledge); + expect(validator).toBeDefined(); + + // Step 3: Validate command + const result = await validator.validate('test-cmd', { test: 'data' }); + expect(result.valid).toBe(true); + expect(result.errors).toHaveLength(0); + } + }); + + test('v2.0 tools complete workflow working', async () => { + for (const toolName of v2Tools) { + // Step 1: Resolve tool + const tool = await toolResolver.resolveTool(toolName); + expect(tool.id).toBe(toolName); + expect(tool.schema_version).toBe(2); + + // Step 2: Create validator + const validator = new ToolValidationHelper(tool.executable_knowledge); + expect(validator).toBeDefined(); + + // Step 3: Validators should exist + expect(tool.executable_knowledge).toBeDefined(); + expect(tool.executable_knowledge.validators).toBeDefined(); + } + }); + + test('mixed v1 and v2 tools work together', async () => { + const v1Tool = await toolResolver.resolveTool('github-cli'); + const v2Tool = await toolResolver.resolveTool('clickup'); + + const v1Validator = new ToolValidationHelper(v1Tool.executable_knowledge); + const v2Validator = new ToolValidationHelper(v2Tool.executable_knowledge); + + // Both should work without interference + const v1Result = await v1Validator.validate('test', {}); + const v2Result = await 
v2Validator.validate('create_task', { + list_id: '12345678', + name: 'test', + }); + + expect(v1Result.valid).toBe(true); + expect(v2Result.valid).toBe(true); + }); + }); + + describeIntegration('Agent-Tool Integration', () => { + test('agents can reference v1.0 tools', async () => { + // Simulate agent with v1.0 tool dependencies + const agentTools = ['github-cli', 'browser', 'ffmpeg']; + + for (const toolName of agentTools) { + const tool = await toolResolver.resolveTool(toolName); + expect(tool).toBeDefined(); + expect(tool.schema_version).toBe(1); + } + }); + + test('agents can reference v2.0 tools', async () => { + // Simulate agent with v2.0 tool dependencies + const agentTools = ['clickup', 'supabase', 'n8n']; + + for (const toolName of agentTools) { + const tool = await toolResolver.resolveTool(toolName); + expect(tool).toBeDefined(); + expect(tool.schema_version).toBe(2); + } + }); + + test('agents can reference mixed v1/v2 tools', async () => { + // Simulate agent with mixed tool dependencies (like dev agent) + const agentTools = [ + 'github-cli', // v1.0 + 'context7', // v1.0 + 'supabase', // v2.0 + 'n8n', // v2.0 + 'browser', // v1.0 + 'ffmpeg', // v1.0 + ]; + + const results = []; + + for (const toolName of agentTools) { + const tool = await toolResolver.resolveTool(toolName); + results.push({ + name: toolName, + version: tool.schema_version, + hasValidators: !!tool.executable_knowledge?.validators, + }); + } + + // All should resolve + expect(results).toHaveLength(6); + + // Check version distribution + const v1Count = results.filter(r => r.version === 1).length; + const v2Count = results.filter(r => r.version === 2).length; + + expect(v1Count).toBe(4); + expect(v2Count).toBe(2); + }); + }); + + describeIntegration('Performance Regression Check', () => { + test('tool resolution performance not degraded', async () => { + toolResolver.clearCache(); + + const start = Date.now(); + await toolResolver.resolveTool('clickup'); + const uncachedDuration = 
Date.now() - start; + + // Should be fast (<50ms as per AC3) + expect(uncachedDuration).toBeLessThan(50); + + const cachedStart = Date.now(); + await toolResolver.resolveTool('clickup'); + const cachedDuration = Date.now() - cachedStart; + + // Cached should be instant (<5ms) + expect(cachedDuration).toBeLessThan(5); + }); + + test('validation performance not degraded', async () => { + const tool = await toolResolver.resolveTool('google-workspace'); + const validator = new ToolValidationHelper(tool.executable_knowledge); + + const start = Date.now(); + await validator.validate('list_spreadsheets', { + user_google_email: 'test@example.com', + }); + const duration = Date.now() - start; + + // Should be fast (<50ms as per AC3) + expect(duration).toBeLessThan(50); + }); + + test('concurrent operations performance maintained', async () => { + const promises = allTools.map(toolName => + toolResolver.resolveTool(toolName), + ); + + const start = Date.now(); + const tools = await Promise.all(promises); + const duration = Date.now() - start; + + // All 12 tools should resolve quickly + expect(tools).toHaveLength(12); + expect(duration).toBeLessThan(200); + }); + }); + + describeIntegration('Error Handling Regression', () => { + test('invalid tool name throws same error as before', async () => { + await expect( + toolResolver.resolveTool('non-existent-tool'), + ).rejects.toThrow(); + }); + + test('validation errors format unchanged', async () => { + const tool = await toolResolver.resolveTool('google-workspace'); + const validator = new ToolValidationHelper(tool.executable_knowledge); + + const result = await validator.validate('create_event', { + // Missing required user_google_email field + summary: 'test', + start_time: '2024-01-01T10:00:00', + end_time: '2024-01-01T11:00:00', + }); + + expect(result.valid).toBe(false); + expect(result.errors).toBeDefined(); + expect(Array.isArray(result.errors)).toBe(true); + + if (result.errors.length > 0) { + 
expect(result.errors[0]).toHaveProperty('field');
+        expect(result.errors[0]).toHaveProperty('message');
+      }
+    });
+
+    test('validator handles malformed data gracefully', async () => {
+      const validator = new ToolValidationHelper(undefined);
+
+      // Assert on the resolved promises; a synchronous .not.toThrow()
+      // wrapper would not catch an async rejection.
+      await expect(validator.validate(null, null)).resolves.toBeDefined();
+
+      await expect(validator.validate('', {})).resolves.toBeDefined();
+    });
+  });
+
+  describeIntegration('Comprehensive Regression Report', () => {
+    test('full system regression check', async () => {
+      const report = {
+        tools_tested: 0,
+        tools_passed: 0,
+        resolution_failures: [],
+        validation_failures: [],
+        performance_issues: [],
+        api_breaking_changes: [],
+      };
+
+      // Test all 12 tools
+      for (const toolName of allTools) {
+        report.tools_tested++;
+
+        try {
+          // 1. Resolution
+          const tool = await toolResolver.resolveTool(toolName);
+          if (!tool || !tool.id) {
+            report.resolution_failures.push(toolName);
+            continue;
+          }
+
+          // 2. Validation
+          const validator = new ToolValidationHelper(tool.executable_knowledge);
+          const result = await validator.validate('test', {});
+
+          if (!result || typeof result.valid !== 'boolean') {
+            report.validation_failures.push(toolName);
+            continue;
+          }
+
+          // 3. Performance
+          const start = Date.now();
+          await toolResolver.resolveTool(toolName);
+          const duration = Date.now() - start;
+
+          if (duration > 5) { // Cached should be <5ms
+            report.performance_issues.push({
+              tool: toolName,
+              duration,
+            });
+          }
+
+          // 4. 
API stability + const hasRequiredFields = tool.id && tool.type && tool.name && tool.description; + if (!hasRequiredFields) { + report.api_breaking_changes.push({ + tool: toolName, + issue: 'Missing required fields', + }); + continue; + } + + report.tools_passed++; + + } catch (error) { + report.resolution_failures.push({ + tool: toolName, + error: error.message, + }); + } + } + + // Verify no regressions + expect(report.resolution_failures).toHaveLength(0); + expect(report.validation_failures).toHaveLength(0); + expect(report.api_breaking_changes).toHaveLength(0); + expect(report.tools_passed).toBe(12); + + // Log comprehensive report + console.log('\n✅ Tools Migration Regression Report:'); + console.log(` Tools Tested: ${report.tools_tested}`); + console.log(` Tools Passed: ${report.tools_passed}`); + console.log(` Resolution Failures: ${report.resolution_failures.length}`); + console.log(` Validation Failures: ${report.validation_failures.length}`); + console.log(` Performance Issues: ${report.performance_issues.length}`); + console.log(` API Breaking Changes: ${report.api_breaking_changes.length}`); + console.log(` Status: ${report.tools_passed === 12 ? 'PASS ✅' : 'FAIL ❌'}`); + }); + }); +}); + +``` + +================================================== +📄 tests/performance/decision-logging-benchmark.test.js +================================================== +```js +// Integration test - requires external services +// Uses describeIntegration from setup.js +/** + * Performance Benchmarks for Decision Logging + * + * Validates that decision logging overhead meets the <50ms requirement (AC8). + * Tests individual operations and full workflow performance. 
+ *
+ * @see .aios-core/development/scripts/decision-recorder.js
+ */
+
+const fs = require('fs').promises;
+const {
+  initializeDecisionLogging,
+  recordDecision,
+  trackFile,
+  trackTest,
+  updateMetrics,
+  completeDecisionLogging,
+} = require('../../.aios-core/development/scripts/decision-recorder');
+
+describeIntegration('Decision Logging Performance Benchmarks', () => {
+  const testStoryPath = 'docs/stories/benchmark-test.md';
+  const testStoryId = 'benchmark-test';
+
+  // Performance targets (AC8)
+  const TARGETS = {
+    initialization: 50, // <50ms (includes git, config loading)
+    recordDecision: 5, // <5ms per call
+    trackFile: 2, // <2ms per call
+    trackTest: 2, // <2ms per call
+    updateMetrics: 1, // <1ms
+    logGeneration: 30, // <30ms
+    indexUpdate: 5, // <5ms
+    totalOverhead: 50, // <50ms (CRITICAL)
+  };
+
+  beforeEach(async () => {
+    // Clean up any previous benchmark logs
+    try {
+      await fs.unlink(`.ai/decision-log-${testStoryId}.md`);
+    } catch (error) {
+      // File doesn't exist, that's okay
+    }
+
+    try {
+      await fs.unlink('.ai/decision-logs-index.md');
+    } catch (error) {
+      // File doesn't exist, that's okay
+    }
+  });
+
+  afterEach(async () => {
+    // Clean up benchmark logs
+    try {
+      await fs.unlink(`.ai/decision-log-${testStoryId}.md`);
+    } catch (error) {
+      // Ignore cleanup errors
+    }
+
+    try {
+      await fs.unlink('.ai/decision-logs-index.md');
+    } catch (error) {
+      // Ignore cleanup errors
+    }
+  });
+
+  describeIntegration('Individual Operation Performance', () => {
+    it('should initialize decision logging in <50ms', async () => {
+      const startTime = Date.now();
+
+      await initializeDecisionLogging('dev', testStoryPath, {
+        agentLoadTime: 150,
+      });
+
+      const duration = Date.now() - startTime;
+
+      expect(duration).toBeLessThan(TARGETS.initialization);
+      console.log(`Initialization: ${duration}ms (target: <${TARGETS.initialization}ms) ✓`);
+    });
+
+    it('should record decision in <5ms per call', async () => {
+      await initializeDecisionLogging('dev', 
testStoryPath); + + const iterations = 10; + const times = []; + + for (let i = 0; i < iterations; i++) { + const startTime = Date.now(); + + recordDecision({ + description: `Benchmark decision ${i}`, + reason: 'Performance test', + alternatives: ['Alt 1', 'Alt 2', 'Alt 3'], + type: 'library-choice', + priority: 'medium', + }); + + const duration = Date.now() - startTime; + times.push(duration); + } + + const avgTime = times.reduce((sum, t) => sum + t, 0) / times.length; + const maxTime = Math.max(...times); + + expect(avgTime).toBeLessThan(TARGETS.recordDecision); + expect(maxTime).toBeLessThan(TARGETS.recordDecision * 2); // Allow 2x for outliers + + console.log(`recordDecision: avg=${avgTime.toFixed(2)}ms, max=${maxTime}ms (target: <${TARGETS.recordDecision}ms) ✓`); + }); + + it('should track file in <2ms per call', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + const iterations = 20; + const times = []; + + for (let i = 0; i < iterations; i++) { + const startTime = Date.now(); + + trackFile(`src/file-${i}.js`, 'created'); + + const duration = Date.now() - startTime; + times.push(duration); + } + + const avgTime = times.reduce((sum, t) => sum + t, 0) / times.length; + const maxTime = Math.max(...times); + + expect(avgTime).toBeLessThan(TARGETS.trackFile); + expect(maxTime).toBeLessThan(TARGETS.trackFile * 2); + + console.log(`trackFile: avg=${avgTime.toFixed(2)}ms, max=${maxTime}ms (target: <${TARGETS.trackFile}ms) ✓`); + }); + + it('should track test in <2ms per call', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + const iterations = 20; + const times = []; + + for (let i = 0; i < iterations; i++) { + const startTime = Date.now(); + + trackTest({ + name: `test-${i}.js`, + passed: i % 2 === 0, + duration: 100 + i, + }); + + const duration = Date.now() - startTime; + times.push(duration); + } + + const avgTime = times.reduce((sum, t) => sum + t, 0) / times.length; + const maxTime = Math.max(...times); + + 
expect(avgTime).toBeLessThan(TARGETS.trackTest); + expect(maxTime).toBeLessThan(TARGETS.trackTest * 2); + + console.log(`trackTest: avg=${avgTime.toFixed(2)}ms, max=${maxTime}ms (target: <${TARGETS.trackTest}ms) ✓`); + }); + + it('should update metrics in <1ms', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + const startTime = Date.now(); + + updateMetrics({ + agentLoadTime: 150, + taskExecutionTime: 300000, + customMetric: 'test', + }); + + const duration = Date.now() - startTime; + + expect(duration).toBeLessThan(TARGETS.updateMetrics); + console.log(`updateMetrics: ${duration}ms (target: <${TARGETS.updateMetrics}ms) ✓`); + }); + }); + + describeIntegration('Log Generation Performance', () => { + it('should generate decision log in <30ms', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + // Add some data to log + for (let i = 0; i < 5; i++) { + recordDecision({ + description: `Decision ${i}`, + reason: 'Performance test', + alternatives: ['Alt 1', 'Alt 2'], + }); + } + + for (let i = 0; i < 10; i++) { + trackFile(`src/file-${i}.js`, 'created'); + } + + for (let i = 0; i < 5; i++) { + trackTest({ + name: `test-${i}.js`, + passed: true, + duration: 100, + }); + } + + const startTime = Date.now(); + + const logPath = await completeDecisionLogging(testStoryId, 'completed'); + + const duration = Date.now() - startTime; + + expect(logPath).toBeDefined(); + expect(duration).toBeLessThan(TARGETS.logGeneration); + + console.log(`Log generation: ${duration}ms (target: <${TARGETS.logGeneration}ms) ✓`); + }); + }); + + describeIntegration('Total Workflow Overhead (CRITICAL - AC8)', () => { + it('should complete full workflow with <50ms total overhead', async () => { + const startTime = Date.now(); + + // Simulate realistic yolo mode workflow + await initializeDecisionLogging('dev', testStoryPath, { + agentLoadTime: 150, + }); + + // Typical decision count: 3-10 + for (let i = 0; i < 7; i++) { + recordDecision({ + 
description: `Realistic decision ${i}`, + reason: 'Performance validation', + alternatives: ['Alt 1', 'Alt 2', 'Alt 3'], + type: i % 2 === 0 ? 'library-choice' : 'architecture', + priority: i < 3 ? 'high' : 'medium', + }); + } + + // Typical file count: 5-15 + for (let i = 0; i < 12; i++) { + trackFile(`src/feature/file-${i}.js`, i % 3 === 0 ? 'created' : 'modified'); + } + + // Typical test count: 5-20 + for (let i = 0; i < 15; i++) { + trackTest({ + name: `feature-${i}.test.js`, + passed: i % 10 !== 0, // 10% failure rate + duration: 50 + Math.floor(Math.random() * 200), + }); + } + + updateMetrics({ + taskExecutionTime: 180000, // 3 minutes + }); + + await completeDecisionLogging(testStoryId, 'completed'); + + const totalOverhead = Date.now() - startTime; + + // CRITICAL: Must be under 50ms + expect(totalOverhead).toBeLessThan(TARGETS.totalOverhead); + + console.log(`\n📊 TOTAL WORKFLOW OVERHEAD: ${totalOverhead}ms (target: <${TARGETS.totalOverhead}ms) ✓`); + console.log(' - Decisions: 7'); + console.log(' - Files: 12'); + console.log(' - Tests: 15'); + console.log(` - Status: ${totalOverhead < TARGETS.totalOverhead ? 
'✅ PASS' : '❌ FAIL'}\n`); + }); + + it('should handle large workflow (100 decisions) efficiently', async () => { + const startTime = Date.now(); + + await initializeDecisionLogging('dev', testStoryPath); + + // Stress test: 100 decisions + for (let i = 0; i < 100; i++) { + recordDecision({ + description: `Stress test decision ${i}`, + reason: 'Large workflow test', + alternatives: ['Alt 1', 'Alt 2'], + }); + } + + // 50 files + for (let i = 0; i < 50; i++) { + trackFile(`src/file-${i}.js`, 'created'); + } + + // 30 tests + for (let i = 0; i < 30; i++) { + trackTest({ + name: `test-${i}.js`, + passed: true, + duration: 100, + }); + } + + await completeDecisionLogging(testStoryId, 'completed'); + + const totalOverhead = Date.now() - startTime; + + // Allow 2x target for stress test (100ms) + expect(totalOverhead).toBeLessThan(TARGETS.totalOverhead * 2); + + console.log(`\n⚡ STRESS TEST (100 decisions): ${totalOverhead}ms (max: <${TARGETS.totalOverhead * 2}ms) ✓`); + }); + }); + + describeIntegration('Performance Regression Tests', () => { + it('should not degrade with repeated calls', async () => { + await initializeDecisionLogging('dev', testStoryPath); + + const times = []; + + for (let i = 0; i < 50; i++) { + const startTime = Date.now(); + + recordDecision({ + description: `Regression test ${i}`, + reason: 'Checking for performance degradation', + alternatives: [], + }); + + const duration = Date.now() - startTime; + times.push(duration); + } + + const firstHalf = times.slice(0, 25); + const secondHalf = times.slice(25); + + const firstAvg = firstHalf.reduce((sum, t) => sum + t, 0) / firstHalf.length; + const secondAvg = secondHalf.reduce((sum, t) => sum + t, 0) / secondHalf.length; + + // Second half should not be significantly slower (allow 50% variance) + // If both are 0ms (very fast), that's acceptable + if (firstAvg > 0) { + expect(secondAvg).toBeLessThan(firstAvg * 1.5); + } else { + expect(secondAvg).toBeLessThanOrEqual(1); // Both should be <1ms + } + + 
console.log(`Regression check: first=${firstAvg.toFixed(2)}ms, second=${secondAvg.toFixed(2)}ms ✓`); + }); + }); + + describeIntegration('Memory Efficiency', () => { + it('should not leak memory with large datasets', async () => { + const initialMemory = process.memoryUsage().heapUsed; + + await initializeDecisionLogging('dev', testStoryPath); + + // Create large dataset + for (let i = 0; i < 1000; i++) { + recordDecision({ + description: `Memory test ${i}`, + reason: 'Testing memory usage', + alternatives: [], + }); + } + + await completeDecisionLogging(testStoryId, 'completed'); + + const finalMemory = process.memoryUsage().heapUsed; + const memoryIncrease = (finalMemory - initialMemory) / 1024 / 1024; // MB + + // Should not use more than 10MB for 1000 decisions + expect(memoryIncrease).toBeLessThan(10); + + console.log(`Memory increase: ${memoryIncrease.toFixed(2)}MB (max: <10MB) ✓`); + }); + }); +}); + +``` + +================================================== +📄 tests/performance/tools-system-benchmark.test.js +================================================== +```js +const path = require('path'); +const fs = require('fs-extra'); +const toolResolver = require('../../common/utils/tool-resolver'); +const ToolHelperExecutor = require('../../common/utils/tool-helper-executor'); +const ToolValidationHelper = require('../../common/utils/tool-validation-helper'); + +/** + * Performance Benchmarks for Tools System + * + * Performance Targets: + * - Cached tool resolution: <5ms + * - Uncached tool resolution: <50ms + * - Validation execution: <50ms (target, timeout at 500ms) + * - Helper execution: <100ms (typical, timeout at 1000ms) + */ +describe('Tools System Performance Benchmarks', () => { + let testToolsDir; + const benchmarkResults = { + toolResolution: { cached: [], uncached: [] }, + validation: [], + helpers: [], + }; + + beforeAll(async () => { + // Create test directory + testToolsDir = path.join(__dirname, '../fixtures/benchmark-tools'); + await 
fs.ensureDir(testToolsDir); + + // Create benchmark tool with validator and helper + const benchmarkTool = { + id: 'benchmark_tool', + type: 'local', + name: 'benchmark_tool', + version: '1.0.0', + schema_version: 2.0, + description: 'Tool for performance benchmarking', + executable_knowledge: { + validators: [ + { + id: 'benchmark_validator', + validates: 'benchmark_command', + language: 'javascript', + checks: ['required_fields'], + function: ` + (function() { + // Simple validation logic + const errors = []; + if (!args.args.value) { + errors.push('Value is required'); + } + if (args.args.value && (args.args.value < 1 || args.args.value > 100)) { + errors.push('Value must be between 1 and 100'); + } + return { valid: errors.length === 0, errors }; + })(); + `, + }, + ], + helpers: [ + { + id: 'benchmark_helper', + language: 'javascript', + function: ` + (function() { + // Simple computation + let result = 0; + for (let i = 0; i < 1000; i++) { + result += i; + } + return { + computed: result, + input: args.value, + timestamp: Date.now() + }; + })(); + `, + }, + ], + }, + }; + + await fs.writeJSON(path.join(testToolsDir, 'benchmark_tool.yaml'), benchmarkTool); + + // Configure resolver + toolResolver.clearCache(); + toolResolver.setSearchPaths([testToolsDir]); + }); + + afterAll(async () => { + // Cleanup + await fs.remove(testToolsDir); + toolResolver.resetSearchPaths(); + + // Print benchmark summary + console.log('\n=== Performance Benchmark Results ===\n'); + + console.log('Tool Resolution (Uncached):'); + const uncachedAvg = average(benchmarkResults.toolResolution.uncached); + const uncachedMax = Math.max(...benchmarkResults.toolResolution.uncached); + console.log(` Average: ${uncachedAvg.toFixed(2)}ms`); + console.log(` Max: ${uncachedMax.toFixed(2)}ms`); + console.log(` Target: <50ms - ${uncachedAvg < 50 ? 
'✓ PASS' : '✗ FAIL'}\n`); + + console.log('Tool Resolution (Cached):'); + const cachedAvg = average(benchmarkResults.toolResolution.cached); + const cachedMax = Math.max(...benchmarkResults.toolResolution.cached); + console.log(` Average: ${cachedAvg.toFixed(2)}ms`); + console.log(` Max: ${cachedMax.toFixed(2)}ms`); + console.log(` Target: <5ms - ${cachedAvg < 5 ? '✓ PASS' : '✗ FAIL'}\n`); + + console.log('Validation Execution:'); + const validationAvg = average(benchmarkResults.validation); + const validationMax = Math.max(...benchmarkResults.validation); + console.log(` Average: ${validationAvg.toFixed(2)}ms`); + console.log(` Max: ${validationMax.toFixed(2)}ms`); + console.log(` Target: <50ms - ${validationAvg < 50 ? '✓ PASS' : '✗ FAIL'}\n`); + + console.log('Helper Execution:'); + const helperAvg = average(benchmarkResults.helpers); + const helperMax = Math.max(...benchmarkResults.helpers); + console.log(` Average: ${helperAvg.toFixed(2)}ms`); + console.log(` Max: ${helperMax.toFixed(2)}ms`); + console.log(` Target: <100ms - ${helperAvg < 100 ? 
'✓ PASS' : '✗ FAIL'}\n`); + + console.log('======================================\n'); + }); + + describe('Tool Resolution Performance', () => { + test('uncached resolution should complete in <50ms', async () => { + const iterations = 10; + const durations = []; + + for (let i = 0; i < iterations; i++) { + // Clear cache before each iteration + toolResolver.clearCache(); + + const start = Date.now(); + await toolResolver.resolveTool('benchmark_tool'); + const duration = Date.now() - start; + + durations.push(duration); + benchmarkResults.toolResolution.uncached.push(duration); + } + + const avgDuration = average(durations); + const maxDuration = Math.max(...durations); + + console.log(`\nUncached resolution: avg=${avgDuration.toFixed(2)}ms, max=${maxDuration.toFixed(2)}ms`); + + // Allow some variance - check that average is under target + expect(avgDuration).toBeLessThan(50); + }); + + test('cached resolution should complete in <5ms', async () => { + const iterations = 100; // More iterations for cached (faster) + const durations = []; + + // First resolution to populate cache + await toolResolver.resolveTool('benchmark_tool'); + + for (let i = 0; i < iterations; i++) { + const start = Date.now(); + await toolResolver.resolveTool('benchmark_tool'); + const duration = Date.now() - start; + + durations.push(duration); + benchmarkResults.toolResolution.cached.push(duration); + } + + const avgDuration = average(durations); + const maxDuration = Math.max(...durations); + + console.log(`Cached resolution: avg=${avgDuration.toFixed(2)}ms, max=${maxDuration.toFixed(2)}ms`); + + // Cached should be very fast + expect(avgDuration).toBeLessThan(5); + }); + + test('cached resolution should be significantly faster than uncached', async () => { + // Uncached + toolResolver.clearCache(); + const uncachedStart = Date.now(); + const tool1 = await toolResolver.resolveTool('benchmark_tool'); + const uncachedDuration = Date.now() - uncachedStart; + + // Cached + const cachedStart = 
Date.now(); + const tool2 = await toolResolver.resolveTool('benchmark_tool'); + const cachedDuration = Date.now() - cachedStart; + + const speedup = cachedDuration === 0 + ? 'Instant' + : `${(uncachedDuration / cachedDuration).toFixed(2)}x`; + console.log(`Speedup: ${speedup}`); + + // Always verify caching works by checking same reference is returned + expect(tool1).toBe(tool2); + + // Skip strict performance assertion if durations are too short to measure reliably + // This can happen in CI environments with variable timing + if (uncachedDuration < 5 || cachedDuration === 0) { + // Durations too short to measure speedup reliably, but caching verified above + console.log('⚠️ Durations too short to measure caching speedup reliably'); + return; + } + + // Cached should be faster than uncached (relaxed threshold for CI environments) + // Allow cached to be up to 90% of uncached duration (at least 10% faster) + if (uncachedDuration > 10) { + expect(cachedDuration).toBeLessThan(uncachedDuration * 0.9); + } else { + // For very short durations, just verify cached is not slower + expect(cachedDuration).toBeLessThanOrEqual(uncachedDuration); + } + }); + }); + + describe('Validation Performance', () => { + let validator; + + beforeAll(async () => { + const tool = await toolResolver.resolveTool('benchmark_tool'); + validator = new ToolValidationHelper(tool.executable_knowledge.validators); + }); + + test('validation should complete in <50ms', async () => { + const iterations = 50; + const durations = []; + + for (let i = 0; i < iterations; i++) { + const start = Date.now(); + await validator.validate('benchmark_command', { value: 50 }); + const duration = Date.now() - start; + + durations.push(duration); + benchmarkResults.validation.push(duration); + } + + const avgDuration = average(durations); + const maxDuration = Math.max(...durations); + + console.log(`\nValidation: avg=${avgDuration.toFixed(2)}ms, max=${maxDuration.toFixed(2)}ms`); + + 
expect(avgDuration).toBeLessThan(50); + }); + + test('successful validation should be fast', async () => { + const start = Date.now(); + const result = await validator.validate('benchmark_command', { value: 75 }); + const duration = Date.now() - start; + + expect(result.valid).toBe(true); + expect(duration).toBeLessThan(50); + }); + + test('failed validation should be equally fast', async () => { + const start = Date.now(); + const result = await validator.validate('benchmark_command', { value: 150 }); // Out of range + const duration = Date.now() - start; + + expect(result.valid).toBe(false); + expect(duration).toBeLessThan(50); + }); + + test('validation with missing fields should be fast', async () => { + const start = Date.now(); + const result = await validator.validate('benchmark_command', {}); // Missing value + const duration = Date.now() - start; + + expect(result.valid).toBe(false); + expect(duration).toBeLessThan(50); + }); + }); + + describe('Helper Execution Performance', () => { + let executor; + + beforeAll(async () => { + const tool = await toolResolver.resolveTool('benchmark_tool'); + executor = new ToolHelperExecutor(tool.executable_knowledge.helpers); + }); + + test('helper execution should complete in <100ms', async () => { + const iterations = 50; + const durations = []; + + for (let i = 0; i < iterations; i++) { + const start = Date.now(); + await executor.execute('benchmark_helper', { value: i }); + const duration = Date.now() - start; + + durations.push(duration); + benchmarkResults.helpers.push(duration); + } + + const avgDuration = average(durations); + const maxDuration = Math.max(...durations); + + console.log(`\nHelper execution: avg=${avgDuration.toFixed(2)}ms, max=${maxDuration.toFixed(2)}ms`); + + expect(avgDuration).toBeLessThan(100); + }); + + test('helper with simple computation should be fast', async () => { + const start = Date.now(); + const result = await executor.execute('benchmark_helper', { value: 42 }); + const duration = 
Date.now() - start; + + expect(result).toBeDefined(); + expect(result.computed).toBe(499500); // Sum of 0..999 + expect(duration).toBeLessThan(100); + }); + + test('multiple helper executions should maintain performance', async () => { + const iterations = 20; + let totalDuration = 0; + + for (let i = 0; i < iterations; i++) { + const start = Date.now(); + await executor.execute('benchmark_helper', { value: i }); + totalDuration += Date.now() - start; + } + + const avgDuration = totalDuration / iterations; + + console.log(`Sequential helper avg: ${avgDuration.toFixed(2)}ms`); + + expect(avgDuration).toBeLessThan(100); + }); + }); + + describe('End-to-End Workflow Performance', () => { + test('complete workflow (resolve → validate → execute) should be efficient', async () => { + const iterations = 10; + const durations = []; + + for (let i = 0; i < iterations; i++) { + const start = Date.now(); + + // 1. Resolve tool (cached after first) + const tool = await toolResolver.resolveTool('benchmark_tool'); + + // 2. Validate + const validator = new ToolValidationHelper(tool.executable_knowledge.validators); + const validation = await validator.validate('benchmark_command', { value: 50 }); + + // 3. 
Execute helper (only if valid) + if (validation.valid) { + const executor = new ToolHelperExecutor(tool.executable_knowledge.helpers); + await executor.execute('benchmark_helper', { value: 50 }); + } + + const duration = Date.now() - start; + durations.push(duration); + } + + const avgDuration = average(durations); + const maxDuration = Math.max(...durations); + + console.log(`\nEnd-to-end workflow: avg=${avgDuration.toFixed(2)}ms, max=${maxDuration.toFixed(2)}ms`); + + // Combined workflow should still be reasonably fast + // Target: <200ms (50ms resolve + 50ms validate + 100ms execute) + expect(avgDuration).toBeLessThan(200); + }); + }); +}); + +/** + * Calculate average of an array of numbers + */ +function average(numbers) { + if (numbers.length === 0) return 0; + return numbers.reduce((sum, n) => sum + n, 0) / numbers.length; +} + +``` + +================================================== +📄 tests/infrastructure/project-status-loader.test.js +================================================== +```js +/** + * @fileoverview Tests for ProjectStatusLoader - Story ACT-3 Reliability Overhaul + * @description Unit tests for project status loading, caching, cache invalidation, + * multi-terminal locking, worktree awareness, and performance. + * + * Original tests from Story 6.1.2.4 are preserved. 
+ * New tests added for ACT-3 acceptance criteria: + * AC1: Cache invalidation on git state changes + * AC2: Multi-terminal concurrent access + * AC3: Post-commit freshness within 5 seconds + * AC4: getCurrentStoryInfo() accuracy without delay + * AC5: Git post-commit hook (tested separately in hook test) + * AC6: Worktree-aware cache paths + * AC7: Performance (<100ms cached, <500ms regeneration) + * AC8: Comprehensive test coverage + */ + +const path = require('path'); + +// Mock child_process (for execSync in constructor and getGitStateFingerprint) +jest.mock('child_process', () => ({ + execSync: jest.fn(() => '.git'), +})); + +// Mock execa before requiring the module +jest.mock('execa', () => jest.fn()); + +// Mock WorktreeManager +jest.mock('../../.aios-core/infrastructure/scripts/worktree-manager', () => { + return jest.fn().mockImplementation(() => ({ + list: jest.fn().mockResolvedValue([]), + })); +}); + +// Mock fs.promises and fs sync +jest.mock('fs', () => { + const actual = jest.requireActual('fs'); + return { + ...actual, + readFileSync: jest.fn(), + existsSync: jest.fn(() => false), + readdirSync: jest.fn(() => []), + unlinkSync: jest.fn(), + promises: { + readFile: jest.fn(), + writeFile: jest.fn().mockResolvedValue(undefined), + mkdir: jest.fn().mockResolvedValue(undefined), + access: jest.fn(), + readdir: jest.fn(), + unlink: jest.fn().mockResolvedValue(undefined), + stat: jest.fn(), + open: jest.fn(), + rename: jest.fn().mockResolvedValue(undefined), + }, + }; +}); + +// Mock js-yaml +jest.mock('js-yaml', () => ({ + load: jest.fn(), + dump: jest.fn((obj) => JSON.stringify(obj)), +})); + +const { execSync } = require('child_process'); +const execa = require('execa'); +const fs = require('fs'); +const yaml = require('js-yaml'); +const WorktreeManager = require('../../.aios-core/infrastructure/scripts/worktree-manager'); +const { + ProjectStatusLoader, + loadProjectStatus, + clearCache, + formatStatusDisplay, + LOCK_TIMEOUT_MS, + LOCK_STALE_MS, + 
ACTIVE_SESSION_TTL, + IDLE_TTL, +} = require('../../.aios-core/infrastructure/scripts/project-status-loader'); + +describe('ProjectStatusLoader', () => { + const projectRoot = '/test/project'; + let loader; + + beforeEach(() => { + jest.clearAllMocks(); + + // Default mock for execSync (constructor calls _resolveCacheFilePath and getGitStateFingerprint) + execSync.mockImplementation((cmd) => { + if (cmd.includes('--git-dir')) return '.git'; + if (cmd.includes('--git-common-dir')) return '.git'; + return ''; + }); + + loader = new ProjectStatusLoader(projectRoot); + + // Default mocks + execa.mockResolvedValue({ stdout: '', stderr: '' }); + fs.readFileSync.mockReturnValue(''); // For config loading + fs.promises.readFile.mockResolvedValue(''); + fs.promises.access.mockResolvedValue(undefined); + fs.promises.readdir.mockResolvedValue([]); + fs.promises.stat.mockResolvedValue({ mtimeMs: 1000 }); + yaml.load.mockReturnValue(null); + }); + + // ========================================================================= + // ORIGINAL TESTS (Story 6.1.2.4) - preserved + // ========================================================================= + + describe('constructor', () => { + it('should use project root from parameter', () => { + const customLoader = new ProjectStatusLoader('/custom/path'); + expect(customLoader.rootPath).toBe('/custom/path'); + }); + + it('should use process.cwd() when no root provided', () => { + const defaultLoader = new ProjectStatusLoader(); + expect(defaultLoader.rootPath).toBe(process.cwd()); + }); + + it('should set default idle TTL to 60 seconds', () => { + expect(loader.cacheTTL).toBe(60); + expect(loader.idleTTL).toBe(60); + }); + + it('should set active-session TTL to 15 seconds', () => { + expect(loader.activeSessionTTL).toBe(15); + }); + + it('should load config and apply settings', () => { + fs.readFileSync.mockReturnValue('projectStatus:\n maxModifiedFiles: 10'); + yaml.load.mockReturnValue({ + projectStatus: { + maxModifiedFiles: 10, 
+ maxRecentCommits: 5, + }, + }); + + const configuredLoader = new ProjectStatusLoader(projectRoot); + expect(configuredLoader.maxModifiedFiles).toBe(10); + expect(configuredLoader.maxRecentCommits).toBe(5); + }); + + it('should use defaults when config not found', () => { + fs.readFileSync.mockImplementation(() => { + throw new Error('ENOENT'); + }); + + const defaultsLoader = new ProjectStatusLoader(projectRoot); + expect(defaultsLoader.maxModifiedFiles).toBe(5); + expect(defaultsLoader.maxRecentCommits).toBe(2); + }); + + it('should set lock file path based on cache file', () => { + expect(loader.lockFile).toBe(loader.cacheFile + '.lock'); + }); + }); + + describe('isGitRepository', () => { + it('should return true for git repository', async () => { + execa.mockResolvedValue({ stdout: 'true', stderr: '' }); + + const result = await loader.isGitRepository(); + + expect(result).toBe(true); + expect(execa).toHaveBeenCalledWith( + 'git', + ['rev-parse', '--is-inside-work-tree'], + expect.objectContaining({ cwd: projectRoot }), + ); + }); + + it('should return false for non-git directory', async () => { + execa.mockRejectedValue(new Error('Not a git repo')); + + const result = await loader.isGitRepository(); + + expect(result).toBe(false); + }); + }); + + describe('getGitBranch', () => { + it('should return branch name from git branch --show-current', async () => { + execa.mockResolvedValue({ stdout: 'main\n', stderr: '' }); + + const result = await loader.getGitBranch(); + + expect(result).toBe('main'); + }); + + it('should fallback to rev-parse for older git', async () => { + execa + .mockRejectedValueOnce(new Error('Unknown option')) + .mockResolvedValueOnce({ stdout: 'develop\n', stderr: '' }); + + const result = await loader.getGitBranch(); + + expect(result).toBe('develop'); + }); + + it('should return "unknown" when both methods fail', async () => { + execa.mockRejectedValue(new Error('Git error')); + + const result = await loader.getGitBranch(); + + 
expect(result).toBe('unknown'); + }); + }); + + describe('getModifiedFiles', () => { + it('should parse git status porcelain output', async () => { + const statusOutput = ` M src/index.js + M src/utils.js +?? new-file.txt`; + execa.mockResolvedValue({ stdout: statusOutput, stderr: '' }); + + const result = await loader.getModifiedFiles(); + + expect(result.files).toContain('src/index.js'); + expect(result.files).toContain('src/utils.js'); + expect(result.files).toContain('new-file.txt'); + expect(result.totalCount).toBe(3); + }); + + it('should limit files to maxModifiedFiles', async () => { + const manyFiles = Array(10) + .fill(null) + .map((_, i) => ` M file${i}.js`) + .join('\n'); + execa.mockResolvedValue({ stdout: manyFiles, stderr: '' }); + + const result = await loader.getModifiedFiles(); + + expect(result.files.length).toBe(5); // Default maxModifiedFiles + expect(result.totalCount).toBe(10); + }); + + it('should return empty array on error', async () => { + execa.mockRejectedValue(new Error('Git error')); + + const result = await loader.getModifiedFiles(); + + expect(result.files).toEqual([]); + expect(result.totalCount).toBe(0); + }); + }); + + describe('getRecentCommits', () => { + it('should parse git log output', async () => { + const logOutput = `abc1234 feat: add new feature +def5678 fix: bug fix`; + execa.mockResolvedValue({ stdout: logOutput, stderr: '' }); + + const result = await loader.getRecentCommits(); + + expect(result).toContain('feat: add new feature'); + expect(result).toContain('fix: bug fix'); + }); + + it('should return empty array when no commits', async () => { + execa.mockResolvedValue({ stdout: '', stderr: '' }); + + const result = await loader.getRecentCommits(); + + expect(result).toEqual([]); + }); + + it('should return empty array on error', async () => { + execa.mockRejectedValue(new Error('No commits')); + + const result = await loader.getRecentCommits(); + + expect(result).toEqual([]); + }); + }); + + 
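+  // Hedged sketch (an assumption for illustration — the production parser
+  // lives in project-status-loader, not here): the getRecentCommits tests
+  // above rely on `git log --oneline` emitting "<short-hash> <subject>"
+  // per line, so dropping the first whitespace-delimited token recovers
+  // the commit subject, e.g.
+  //   'abc1234 feat: add new feature'.split(' ').slice(1).join(' ')
+  //   // -> 'feat: add new feature'
+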
describe('getWorktreesStatus', () => { + it('should return null when no worktrees', async () => { + WorktreeManager.mockImplementation(() => ({ + list: jest.fn().mockResolvedValue([]), + })); + + const result = await loader.getWorktreesStatus(); + + expect(result).toBeNull(); + }); + + it('should return worktrees status object', async () => { + WorktreeManager.mockImplementation(() => ({ + list: jest.fn().mockResolvedValue([ + { + storyId: 'STORY-42', + path: '/project/.aios/worktrees/STORY-42', + branch: 'auto-claude/STORY-42', + createdAt: new Date('2026-01-29'), + uncommittedChanges: 3, + status: 'active', + }, + ]), + })); + + const result = await loader.getWorktreesStatus(); + + expect(result).toBeDefined(); + expect(result['STORY-42']).toBeDefined(); + expect(result['STORY-42'].branch).toBe('auto-claude/STORY-42'); + expect(result['STORY-42'].uncommittedChanges).toBe(3); + }); + + it('should return null on WorktreeManager error', async () => { + WorktreeManager.mockImplementation(() => ({ + list: jest.fn().mockRejectedValue(new Error('Not a git repo')), + })); + + const result = await loader.getWorktreesStatus(); + + expect(result).toBeNull(); + }); + }); + + describe('getCurrentStoryInfo', () => { + it('should detect story with InProgress status', async () => { + fs.promises.access.mockResolvedValue(undefined); + fs.promises.readdir.mockResolvedValue([ + { name: 'story-42.md', isFile: () => true, isDirectory: () => false }, + ]); + fs.promises.readFile.mockResolvedValue(` +# Story 42 +**Story ID:** STORY-42 +**Epic:** Epic 1 - Setup +**Status:** InProgress + `); + + const result = await loader.getCurrentStoryInfo(); + + expect(result.story).toBe('STORY-42'); + expect(result.epic).toBe('Epic 1 - Setup'); + }); + + it('should return null when no story in progress', async () => { + fs.promises.access.mockResolvedValue(undefined); + fs.promises.readdir.mockResolvedValue([]); + + const result = await loader.getCurrentStoryInfo(); + + 
expect(result.story).toBeNull(); + expect(result.epic).toBeNull(); + }); + + it('should return null when stories dir not found', async () => { + fs.promises.access.mockRejectedValue(new Error('ENOENT')); + + const result = await loader.getCurrentStoryInfo(); + + expect(result.story).toBeNull(); + }); + }); + + describe('cache', () => { + describe('loadCache', () => { + it('should load cache from file', async () => { + const cached = { + status: { branch: 'main' }, + timestamp: Date.now(), + ttl: 60, + gitFingerprint: '1000:2000', + }; + fs.promises.readFile.mockResolvedValue(JSON.stringify(cached)); + yaml.load.mockReturnValue(cached); + + const result = await loader.loadCache(); + + expect(result).toEqual(cached); + }); + + it('should return null when cache file not found', async () => { + fs.promises.readFile.mockRejectedValue(new Error('ENOENT')); + + const result = await loader.loadCache(); + + expect(result).toBeNull(); + }); + + it('should handle corrupted cache by deleting and returning null (ACT-3)', async () => { + fs.promises.readFile.mockResolvedValue('not valid yaml content'); + yaml.load.mockReturnValue('just a string, not an object'); + + const result = await loader.loadCache(); + + expect(result).toBeNull(); + // Should have attempted to delete the corrupted cache + expect(fs.promises.unlink).toHaveBeenCalled(); + }); + + it('should handle YAML parse error by deleting cache (ACT-3)', async () => { + fs.promises.readFile.mockResolvedValue('{{invalid yaml'); + const yamlError = new Error('Invalid YAML'); + yamlError.name = 'YAMLException'; + yaml.load.mockImplementation(() => { throw yamlError; }); + + const result = await loader.loadCache(); + + expect(result).toBeNull(); + expect(fs.promises.unlink).toHaveBeenCalled(); + }); + + it('should handle cache with missing status field (ACT-3)', async () => { + fs.promises.readFile.mockResolvedValue('{}'); + yaml.load.mockReturnValue({ timestamp: Date.now(), ttl: 60 }); + + const result = await 
loader.loadCache(); + + expect(result).toBeNull(); + }); + }); + + describe('isCacheValid', () => { + it('should return true for fresh cache with matching fingerprint', () => { + const cache = { + timestamp: Date.now() - 5000, // 5 seconds ago + ttl: 60, + gitFingerprint: '1000:2000', + }; + + const result = loader.isCacheValid(cache, '1000:2000'); + + expect(result).toBe(true); + }); + + it('should return false when git fingerprint changed (ACT-3 AC1)', () => { + const cache = { + timestamp: Date.now() - 5000, // Only 5 seconds ago, well within TTL + ttl: 60, + gitFingerprint: '1000:2000', + }; + + // Git state changed + const result = loader.isCacheValid(cache, '1000:3000'); + + expect(result).toBe(false); + }); + + it('should use active-session TTL (15s) when fingerprint matches (ACT-3)', () => { + const cache = { + timestamp: Date.now() - 14000, // 14 seconds ago - within 15s + ttl: 60, + gitFingerprint: '1000:2000', + }; + + expect(loader.isCacheValid(cache, '1000:2000')).toBe(true); + + // 16 seconds - beyond active-session TTL + cache.timestamp = Date.now() - 16000; + expect(loader.isCacheValid(cache, '1000:2000')).toBe(false); + }); + + it('should use idle TTL (60s) when no fingerprint available (ACT-3)', () => { + const cache = { + timestamp: Date.now() - 30000, // 30 seconds ago + ttl: 60, + }; + + // No fingerprint - falls back to idle TTL + expect(loader.isCacheValid(cache, null)).toBe(true); + + // Beyond idle TTL + cache.timestamp = Date.now() - 120000; + expect(loader.isCacheValid(cache, null)).toBe(false); + }); + + it('should return false for expired cache', () => { + const cache = { + timestamp: Date.now() - 120000, // 2 minutes ago + ttl: 60, + }; + + const result = loader.isCacheValid(cache); + + expect(result).toBe(false); + }); + + it('should return false for null cache', () => { + expect(loader.isCacheValid(null)).toBe(false); + expect(loader.isCacheValid(undefined)).toBe(false); + expect(loader.isCacheValid({})).toBe(false); + }); + }); + + 
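+    // Hedged summary of the validity rules the isCacheValid tests above
+    // encode (a sketch, not the actual implementation):
+    //   - no cache object, or no timestamp            -> invalid
+    //   - cached fingerprint !== current fingerprint  -> invalid immediately
+    //   - fingerprints match                          -> TTL = activeSessionTTL (15s)
+    //   - no current fingerprint available            -> TTL = idleTTL (60s)
+    //   - otherwise valid iff (Date.now() - cache.timestamp) < TTL * 1000
+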
describe('saveCache (backward compat)', () => { + it('should write cache to file via saveCacheWithLock', async () => { + fs.promises.writeFile.mockResolvedValue(undefined); + + const status = { branch: 'main' }; + await loader.saveCache(status); + + expect(fs.promises.mkdir).toHaveBeenCalled(); + expect(fs.promises.writeFile).toHaveBeenCalled(); + }); + + it('should handle write errors gracefully', async () => { + fs.promises.writeFile.mockRejectedValue(new Error('Permission denied')); + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + + await loader.saveCache({ branch: 'main' }); + + expect(consoleSpy).toHaveBeenCalled(); + consoleSpy.mockRestore(); + }); + }); + + describe('clearCache', () => { + it('should delete cache file', async () => { + fs.promises.unlink.mockResolvedValue(undefined); + + const result = await loader.clearCache(); + + expect(result).toBe(true); + expect(fs.promises.unlink).toHaveBeenCalledWith(loader.cacheFile); + }); + + it('should return false when file not found', async () => { + fs.promises.unlink.mockRejectedValue(new Error('ENOENT')); + + const result = await loader.clearCache(); + + expect(result).toBe(false); + }); + }); + }); + + describe('loadProjectStatus', () => { + it('should return cached status if valid', async () => { + const cachedStatus = { branch: 'cached-branch', isGitRepo: true }; + const cache = { + status: cachedStatus, + timestamp: Date.now() - 5000, + ttl: 60, + gitFingerprint: '1000:2000', + }; + fs.promises.readFile.mockResolvedValue(JSON.stringify(cache)); + yaml.load.mockReturnValue(cache); + // Make fingerprint match: HEAD mtime=1000, index mtime=2000 + fs.promises.stat + .mockResolvedValueOnce({ mtimeMs: 1000 }) + .mockResolvedValueOnce({ mtimeMs: 2000 }); + + // Make getGitStateFingerprint return matching fingerprint + execSync.mockImplementation((cmd) => { + if (cmd.includes('--git-dir')) return '.git'; + if (cmd.includes('--git-common-dir')) return '.git'; + return ''; + }); + + const 
result = await loader.loadProjectStatus(); + + expect(result).toEqual(cachedStatus); + }); + + it('should generate fresh status when cache expired', async () => { + const expiredCache = { + status: { branch: 'old' }, + timestamp: Date.now() - 120000, + ttl: 60, + }; + fs.promises.readFile.mockResolvedValue(JSON.stringify(expiredCache)); + yaml.load.mockReturnValue(expiredCache); + + // writeFile should succeed for cache saves + fs.promises.writeFile.mockResolvedValue(undefined); + + execa.mockImplementation((cmd, args) => { + if (args.includes('--is-inside-work-tree')) { + return Promise.resolve({ stdout: 'true' }); + } + if (args.includes('--show-current')) { + return Promise.resolve({ stdout: 'fresh-branch' }); + } + return Promise.resolve({ stdout: '' }); + }); + + const result = await loader.loadProjectStatus(); + + expect(result.branch).toBe('fresh-branch'); + }); + + it('should generate fresh status when git fingerprint changed (ACT-3 AC1)', async () => { + const cachedWithOldFingerprint = { + status: { branch: 'old-branch', isGitRepo: true }, + timestamp: Date.now() - 2000, // Only 2 seconds old + ttl: 60, + gitFingerprint: '1000:2000', // Old fingerprint + }; + fs.promises.readFile.mockResolvedValue(JSON.stringify(cachedWithOldFingerprint)); + yaml.load.mockReturnValue(cachedWithOldFingerprint); + + // Current fingerprint is different (git state changed) + fs.promises.stat + .mockResolvedValueOnce({ mtimeMs: 1000 }) // HEAD mtime + .mockResolvedValueOnce({ mtimeMs: 5000 }); // index mtime changed! 
+ + // writeFile should succeed + fs.promises.writeFile.mockResolvedValue(undefined); + + execa.mockImplementation((cmd, args) => { + if (args.includes('--is-inside-work-tree')) { + return Promise.resolve({ stdout: 'true' }); + } + if (args.includes('--show-current')) { + return Promise.resolve({ stdout: 'new-branch' }); + } + return Promise.resolve({ stdout: '' }); + }); + + const result = await loader.loadProjectStatus(); + + expect(result.branch).toBe('new-branch'); + }); + + it('should return default status on error', async () => { + fs.promises.readFile.mockRejectedValue(new Error('Read error')); + execa.mockRejectedValue(new Error('Git error')); + fs.promises.writeFile.mockRejectedValue(new Error('Write error')); + + const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(); + const result = await loader.loadProjectStatus(); + consoleSpy.mockRestore(); + + // When git fails, returns non-git status (branch: null) + expect(result.isGitRepo).toBe(false); + expect(result.branch).toBeNull(); + }); + }); + + describe('getNonGitStatus', () => { + it('should return status for non-git project', () => { + const status = loader.getNonGitStatus(); + + expect(status.branch).toBeNull(); + expect(status.isGitRepo).toBe(false); + expect(status.modifiedFiles).toEqual([]); + }); + }); + + describe('formatStatusDisplay', () => { + it('should format git project status', () => { + const status = { + isGitRepo: true, + branch: 'main', + modifiedFiles: ['file1.js', 'file2.js'], + modifiedFilesTotalCount: 2, + recentCommits: ['feat: add feature'], + currentStory: 'STORY-42', + }; + + const display = loader.formatStatusDisplay(status); + + expect(display).toContain('Branch: main'); + expect(display).toContain('Modified: file1.js, file2.js'); + expect(display).toContain('Recent: feat: add feature'); + expect(display).toContain('Story: STORY-42'); + }); + + it('should show truncation message for many files', () => { + const status = { + isGitRepo: true, + branch: 'main', + 
modifiedFiles: ['file1.js', 'file2.js'], + modifiedFilesTotalCount: 10, + }; + + const display = loader.formatStatusDisplay(status); + + expect(display).toContain('...and 8 more'); + }); + + it('should show worktrees info', () => { + const status = { + isGitRepo: true, + branch: 'main', + worktrees: { + 'STORY-42': { status: 'active', uncommittedChanges: 3 }, + 'STORY-43': { status: 'active', uncommittedChanges: 0 }, + }, + }; + + const display = loader.formatStatusDisplay(status); + + expect(display).toContain('Worktrees: 2/2 active, 1 with changes'); + }); + + it('should show message for non-git repo', () => { + const status = { isGitRepo: false }; + + const display = loader.formatStatusDisplay(status); + + expect(display).toContain('Not a git repository'); + }); + + it('should show message for no activity', () => { + const status = { + isGitRepo: true, + modifiedFiles: [], + recentCommits: [], + }; + + const display = loader.formatStatusDisplay(status); + + expect(display).toContain('No recent activity'); + }); + }); + + // ========================================================================= + // ACT-3: Event-Driven Cache Invalidation Tests (AC1, AC3) + // ========================================================================= + + describe('ACT-3: Event-driven cache invalidation', () => { + describe('getGitStateFingerprint', () => { + it('should return fingerprint from HEAD and index mtimes', async () => { + execSync.mockImplementation((cmd) => { + if (cmd.includes('--git-dir')) return '.git'; + if (cmd.includes('--git-common-dir')) return '.git'; + return ''; + }); + + fs.promises.stat + .mockResolvedValueOnce({ mtimeMs: 1234567890 }) + .mockResolvedValueOnce({ mtimeMs: 9876543210 }); + + const fingerprint = await loader.getGitStateFingerprint(); + + expect(fingerprint).toBe('1234567890:9876543210'); + }); + + it('should return null when git dir not available', async () => { + // ACT-11: Clear cached git dir to simulate non-git scenario + // (constructor 
caches _resolvedGitDir for performance) + loader._resolvedGitDir = null; + + execSync.mockImplementation(() => { + throw new Error('Not a git repository'); + }); + + const fingerprint = await loader.getGitStateFingerprint(); + + expect(fingerprint).toBeNull(); + }); + + it('should handle missing HEAD or index gracefully', async () => { + execSync.mockImplementation((cmd) => { + if (cmd.includes('--git-dir')) return '.git'; + return ''; + }); + + fs.promises.stat + .mockResolvedValueOnce({ mtimeMs: 1234567890 }) + .mockRejectedValueOnce(new Error('ENOENT')); + + const fingerprint = await loader.getGitStateFingerprint(); + + expect(fingerprint).toBe('1234567890:0'); + }); + }); + + it('should invalidate cache immediately when git state changes (AC1)', () => { + const cache = { + timestamp: Date.now() - 1000, // 1 second ago + ttl: 60, + gitFingerprint: '100:200', + }; + + // Same fingerprint = valid + expect(loader.isCacheValid(cache, '100:200')).toBe(true); + + // Different fingerprint = immediately invalid + expect(loader.isCacheValid(cache, '100:300')).toBe(false); + }); + + it('should show updated status within 5 seconds after commit (AC3)', () => { + // Simulate: cache was written 2 seconds ago with old fingerprint + const cache = { + timestamp: Date.now() - 2000, + ttl: 60, + gitFingerprint: '100:200', + }; + + // After commit, git index mtime changes + const newFingerprint = '100:500'; + + // Cache should be invalid despite being only 2 seconds old + expect(loader.isCacheValid(cache, newFingerprint)).toBe(false); + }); + }); + + // ========================================================================= + // ACT-3: Multi-Terminal File Locking Tests (AC2) + // ========================================================================= + + describe('ACT-3: Multi-terminal file locking', () => { + describe('_acquireLock', () => { + it('should acquire lock when file does not exist', async () => { + // writeFile with wx flag succeeds (no existing lock) + 
fs.promises.writeFile.mockResolvedValueOnce(undefined); + + const acquired = await loader._acquireLock(); + + expect(acquired).toBe(true); + // Should have called writeFile with wx flag + expect(fs.promises.writeFile).toHaveBeenCalledWith( + loader.lockFile, + expect.any(String), + expect.objectContaining({ flag: 'wx' }), + ); + }); + + it('should return false when lock cannot be acquired within timeout', async () => { + // Lock always exists, never stale + const eexistError = new Error('File exists'); + eexistError.code = 'EEXIST'; + fs.promises.writeFile.mockRejectedValue(eexistError); + fs.promises.readFile.mockResolvedValue(JSON.stringify({ + pid: 99999, + timestamp: Date.now(), // Fresh lock - not stale + })); + + // Force timeout by racing with a shorter timeout + const acquired = await Promise.race([ + loader._acquireLock(), + new Promise(resolve => setTimeout(() => resolve(false), 500)), + ]); + + expect(acquired).toBe(false); + }, 10000); + + it('should clean up stale lock and retry', async () => { + const eexistError = new Error('File exists'); + eexistError.code = 'EEXIST'; + + // First call: lock exists (EEXIST) + // After stale lock cleanup: lock acquired (success) + fs.promises.writeFile + .mockRejectedValueOnce(eexistError) + .mockResolvedValueOnce(undefined); + + // Lock is stale + fs.promises.readFile.mockResolvedValue(JSON.stringify({ + pid: 12345, + timestamp: Date.now() - LOCK_STALE_MS - 1000, // Older than stale threshold + })); + + const acquired = await loader._acquireLock(); + + expect(acquired).toBe(true); + expect(fs.promises.unlink).toHaveBeenCalledWith(loader.lockFile); + }); + + it('should return false on non-EEXIST errors', async () => { + const otherError = new Error('ENOENT'); + otherError.code = 'ENOENT'; + fs.promises.writeFile.mockRejectedValue(otherError); + + const acquired = await loader._acquireLock(); + + expect(acquired).toBe(false); + }); + }); + + describe('_isLockStale', () => { + it('should return true for old lock 
file', async () => { + fs.promises.readFile.mockResolvedValue(JSON.stringify({ + pid: 12345, + timestamp: Date.now() - LOCK_STALE_MS - 1000, + })); + + expect(await loader._isLockStale()).toBe(true); + }); + + it('should return false for fresh lock file', async () => { + fs.promises.readFile.mockResolvedValue(JSON.stringify({ + pid: 12345, + timestamp: Date.now() - 1000, // 1 second old + })); + + expect(await loader._isLockStale()).toBe(false); + }); + + it('should return true when lock file cannot be read', async () => { + fs.promises.readFile.mockRejectedValue(new Error('ENOENT')); + + expect(await loader._isLockStale()).toBe(true); + }); + }); + + describe('_releaseLock', () => { + it('should delete lock file', async () => { + fs.promises.unlink.mockResolvedValue(undefined); + + await loader._releaseLock(); + + expect(fs.promises.unlink).toHaveBeenCalledWith(loader.lockFile); + }); + + it('should not throw when lock file missing', async () => { + fs.promises.unlink.mockRejectedValue(new Error('ENOENT')); + + await expect(loader._releaseLock()).resolves.not.toThrow(); + }); + }); + + describe('saveCacheWithLock', () => { + it('should acquire lock, write cache, and release lock', async () => { + // Mock writeFile to succeed for both lock and cache + fs.promises.writeFile.mockResolvedValue(undefined); + + await loader.saveCacheWithLock({ branch: 'main' }, '100:200'); + + // Lock acquired (writeFile with wx flag) + const lockCall = fs.promises.writeFile.mock.calls.find( + call => typeof call[2] === 'object' && call[2].flag === 'wx', + ); + expect(lockCall).toBeDefined(); + expect(lockCall[0]).toBe(loader.lockFile); + + // Cache written (temp file) + const cacheCall = fs.promises.writeFile.mock.calls.find( + call => typeof call[0] === 'string' && call[0].includes('.tmp.'), + ); + expect(cacheCall).toBeDefined(); + + // Lock released + expect(fs.promises.unlink).toHaveBeenCalledWith(loader.lockFile); + }); + + it('should include gitFingerprint in cached data', async 
() => { + // Lock acquisition fails (skip locking) + const lockError = new Error('ENOENT'); + lockError.code = 'ENOENT'; + fs.promises.writeFile + .mockRejectedValueOnce(lockError) // Lock fails + .mockResolvedValue(undefined); // Cache write succeeds + + await loader.saveCacheWithLock({ branch: 'main' }, '100:200'); + + // Find the cache content write (not the lock write) + const cacheWrite = fs.promises.writeFile.mock.calls.find( + call => typeof call[1] === 'string' && call[1].includes('100:200'), + ); + expect(cacheWrite).toBeDefined(); + }); + + it('should still write cache even when lock cannot be acquired', async () => { + const lockError = new Error('ENOENT'); + lockError.code = 'ENOENT'; + fs.promises.writeFile + .mockRejectedValueOnce(lockError) // Lock fails + .mockResolvedValue(undefined); // Cache write succeeds + + await loader.saveCacheWithLock({ branch: 'main' }, null); + + // At least one non-lock writeFile call + expect(fs.promises.writeFile.mock.calls.length).toBeGreaterThanOrEqual(2); + }); + + it('should use atomic write (temp file + rename)', async () => { + const lockError = new Error('ENOENT'); + lockError.code = 'ENOENT'; + fs.promises.writeFile + .mockRejectedValueOnce(lockError) // Lock fails + .mockResolvedValue(undefined); // Cache write succeeds + + await loader.saveCacheWithLock({ branch: 'main' }, null); + + // Should write to a temp file + const tempWrite = fs.promises.writeFile.mock.calls.find( + call => typeof call[0] === 'string' && call[0].includes('.tmp.'), + ); + expect(tempWrite).toBeDefined(); + + // Should attempt rename + expect(fs.promises.rename).toHaveBeenCalled(); + }); + + it('should fall back to direct write when rename fails (Windows)', async () => { + const lockError = new Error('ENOENT'); + lockError.code = 'ENOENT'; + fs.promises.writeFile + .mockRejectedValueOnce(lockError) // Lock fails + .mockResolvedValue(undefined); // All subsequent writes succeed + fs.promises.rename.mockRejectedValue(new Error('EPERM')); 
// Rename fails on Windows + + await loader.saveCacheWithLock({ branch: 'main' }, null); + + // Should have written: lock attempt + temp file + direct fallback = at least 3 calls + const writeCalls = fs.promises.writeFile.mock.calls; + expect(writeCalls.length).toBeGreaterThanOrEqual(3); + }); + }); + + it('should produce valid output under concurrent access (AC2)', async () => { + // Simulate concurrent access by calling saveCacheWithLock multiple times + fs.promises.writeFile.mockResolvedValue(undefined); + + const status1 = { branch: 'branch-1', isGitRepo: true }; + const status2 = { branch: 'branch-2', isGitRepo: true }; + + // Both should complete without errors + await Promise.all([ + loader.saveCacheWithLock(status1, '100:200'), + loader.saveCacheWithLock(status2, '100:300'), + ]); + + // Multiple writes should have completed + expect(fs.promises.writeFile).toHaveBeenCalled(); + }); + }); + + // ========================================================================= + // ACT-3: Worktree Awareness Tests (AC6) + // ========================================================================= + + describe('ACT-3: Worktree awareness', () => { + describe('_resolveCacheFilePath', () => { + it('should use default path for main working tree', () => { + // git-dir === git-common-dir means main worktree + execSync.mockImplementation((cmd) => { + if (cmd.includes('--git-dir')) return '.git'; + if (cmd.includes('--git-common-dir')) return '.git'; + return ''; + }); + + const newLoader = new ProjectStatusLoader(projectRoot); + expect(newLoader.cacheFile).toBe( + path.join(projectRoot, '.aios', 'project-status.yaml'), + ); + }); + + it('should use worktree-specific path when in a worktree (AC6)', () => { + // git-dir !== git-common-dir means we are in a worktree + execSync.mockImplementation((cmd) => { + if (cmd.includes('--git-dir')) return '/main-repo/.git/worktrees/my-story'; + if (cmd.includes('--git-common-dir')) return '/main-repo/.git'; + return ''; + }); + + const 
newLoader = new ProjectStatusLoader(projectRoot); + expect(newLoader.cacheFile).toContain('project-status-'); + expect(newLoader.cacheFile).toContain('.yaml'); + expect(newLoader.cacheFile).not.toBe( + path.join(projectRoot, '.aios', 'project-status.yaml'), + ); + }); + + it('should fall back to default path when git is not available', () => { + execSync.mockImplementation(() => { + throw new Error('git not found'); + }); + + const newLoader = new ProjectStatusLoader(projectRoot); + expect(newLoader.cacheFile).toBe( + path.join(projectRoot, '.aios', 'project-status.yaml'), + ); + }); + }); + + describe('_hashString', () => { + it('should produce consistent hashes', () => { + const hash1 = loader._hashString('/path/to/project'); + const hash2 = loader._hashString('/path/to/project'); + expect(hash1).toBe(hash2); + }); + + it('should produce different hashes for different paths', () => { + const hash1 = loader._hashString('/path/to/project1'); + const hash2 = loader._hashString('/path/to/project2'); + expect(hash1).not.toBe(hash2); + }); + + it('should return a hex string of at least 8 characters', () => { + const hash = loader._hashString('test'); + expect(hash.length).toBeGreaterThanOrEqual(8); + expect(/^[0-9a-f]+$/.test(hash)).toBe(true); + }); + }); + + it('should isolate cache between worktrees', () => { + // Worktree 1 + execSync.mockImplementation((cmd) => { + if (cmd.includes('--git-dir')) return '/main/.git/worktrees/story-A'; + if (cmd.includes('--git-common-dir')) return '/main/.git'; + return ''; + }); + const loader1 = new ProjectStatusLoader('/worktree/story-A'); + + // Worktree 2 + execSync.mockImplementation((cmd) => { + if (cmd.includes('--git-dir')) return '/main/.git/worktrees/story-B'; + if (cmd.includes('--git-common-dir')) return '/main/.git'; + return ''; + }); + const loader2 = new ProjectStatusLoader('/worktree/story-B'); + + // Different cache files + expect(loader1.cacheFile).not.toBe(loader2.cacheFile); + }); + }); + + // 
========================================================================= + // ACT-3: getCurrentStoryInfo accuracy Tests (AC4) + // ========================================================================= + + describe('ACT-3: getCurrentStoryInfo accuracy (AC4)', () => { + it('should return fresh data without delay', async () => { + fs.promises.access.mockResolvedValue(undefined); + fs.promises.readdir.mockResolvedValue([ + { name: 'story-act-3.md', isFile: () => true, isDirectory: () => false }, + ]); + fs.promises.readFile.mockResolvedValue(` +# Story ACT-3 +**Story ID:** ACT-3 +**Epic:** EPIC-ACT - Unified Agent Activation Pipeline +**Status:** InProgress + `); + + const startTime = Date.now(); + const result = await loader.getCurrentStoryInfo(); + const elapsed = Date.now() - startTime; + + expect(result.story).toBe('ACT-3'); + expect(result.epic).toContain('EPIC-ACT'); + // Should complete quickly (not blocked by cache) + expect(elapsed).toBeLessThan(5000); + }); + + it('should detect status changes immediately', async () => { + fs.promises.access.mockResolvedValue(undefined); + fs.promises.readdir.mockResolvedValue([ + { name: 'story-old.md', isFile: () => true, isDirectory: () => false }, + ]); + + // First call - story is InProgress + fs.promises.readFile.mockResolvedValueOnce(` +**Story ID:** OLD-1 +**Status:** InProgress + `); + const result1 = await loader.getCurrentStoryInfo(); + expect(result1.story).toBe('OLD-1'); + + // Second call - story status changed to Done (not InProgress) + fs.promises.readdir.mockResolvedValue([ + { name: 'story-old.md', isFile: () => true, isDirectory: () => false }, + ]); + fs.promises.readFile.mockResolvedValueOnce(` +**Story ID:** OLD-1 +**Status:** Done + `); + const result2 = await loader.getCurrentStoryInfo(); + expect(result2.story).toBeNull(); // No longer InProgress + }); + }); + + // ========================================================================= + // ACT-3: Performance Tests (AC7) + // 
========================================================================= + + describe('ACT-3: Performance (AC7)', () => { + it('should complete cached read within 100ms', async () => { + const cachedStatus = { branch: 'main', isGitRepo: true }; + const cache = { + status: cachedStatus, + timestamp: Date.now() - 5000, + ttl: 60, + gitFingerprint: '1000:2000', + }; + fs.promises.readFile.mockResolvedValue(JSON.stringify(cache)); + yaml.load.mockReturnValue(cache); + // Make fingerprint match: HEAD mtime=1000, index mtime=2000 + fs.promises.stat + .mockResolvedValueOnce({ mtimeMs: 1000 }) + .mockResolvedValueOnce({ mtimeMs: 2000 }); + + const startTime = Date.now(); + const result = await loader.loadProjectStatus(); + const elapsed = Date.now() - startTime; + + expect(result).toEqual(cachedStatus); + expect(elapsed).toBeLessThan(100); + }); + + it('should use Promise.all for parallel git commands in generateStatus', async () => { + execa.mockImplementation((cmd, args) => { + if (args.includes('--is-inside-work-tree')) { + return Promise.resolve({ stdout: 'true' }); + } + if (args.includes('--show-current')) { + return Promise.resolve({ stdout: 'main' }); + } + if (args.includes('--porcelain')) { + return Promise.resolve({ stdout: ' M file.js' }); + } + if (args.includes('--oneline')) { + return Promise.resolve({ stdout: 'abc1234 feat: test' }); + } + return Promise.resolve({ stdout: '' }); + }); + + const status = await loader.generateStatus(); + + expect(status.branch).toBe('main'); + expect(status.isGitRepo).toBe(true); + }); + }); + + // ========================================================================= + // ACT-3: Exported Constants Tests + // ========================================================================= + + describe('ACT-3: Exported constants', () => { + it('should export LOCK_TIMEOUT_MS', () => { + expect(LOCK_TIMEOUT_MS).toBe(3000); + }); + + it('should export LOCK_STALE_MS', () => { + expect(LOCK_STALE_MS).toBe(10000); + }); + + it('should 
export ACTIVE_SESSION_TTL', () => { + expect(ACTIVE_SESSION_TTL).toBe(15); + }); + + it('should export IDLE_TTL', () => { + expect(IDLE_TTL).toBe(60); + }); + }); +}); + +// ========================================================================= +// Module Exports Tests +// ========================================================================= + +describe('Module Exports', () => { + it('should export loadProjectStatus function', () => { + expect(typeof loadProjectStatus).toBe('function'); + }); + + it('should export clearCache function', () => { + expect(typeof clearCache).toBe('function'); + }); + + it('should export formatStatusDisplay function', () => { + expect(typeof formatStatusDisplay).toBe('function'); + }); + + it('should export ProjectStatusLoader class', () => { + expect(ProjectStatusLoader).toBeDefined(); + expect(typeof ProjectStatusLoader).toBe('function'); + }); +}); + +// ========================================================================= +// ACT-3: Git Post-Commit Hook Tests (AC5) +// ========================================================================= + +describe('ACT-3: Git post-commit hook (AC5)', () => { + it('post-commit hook script should exist', () => { + const hookPath = path.join( + __dirname, + '..', + '..', + '.aios-core', + 'infrastructure', + 'scripts', + 'git-hooks', + 'post-commit.js', + ); + // The hook file exists (we created it) + const actualFs = jest.requireActual('fs'); + expect(actualFs.existsSync(hookPath)).toBe(true); + }); + + it('husky post-commit hook should exist', () => { + const huskyPath = path.join( + __dirname, + '..', + '..', + '.husky', + 'post-commit', + ); + const actualFs = jest.requireActual('fs'); + expect(actualFs.existsSync(huskyPath)).toBe(true); + }); +}); + +``` + +================================================== +📄 tests/infrastructure/plan-tracker.test.js +================================================== +```js +/** + * @fileoverview Tests for PlanTracker - ADE Epic 4 + * 
@description Unit tests for implementation plan tracking functionality + */ + +const path = require('path'); + +// Mock fs before requiring the module +jest.mock('fs', () => { + const actual = jest.requireActual('fs'); + return { + ...actual, + existsSync: jest.fn(), + readFileSync: jest.fn(), + writeFileSync: jest.fn(), + mkdirSync: jest.fn(), + readdirSync: jest.fn(), + promises: { + ...actual.promises, + }, + }; +}); + +// Mock js-yaml +jest.mock('js-yaml', () => ({ + load: jest.fn(), + dump: jest.fn((obj) => JSON.stringify(obj)), +})); + +const fs = require('fs'); +const yaml = require('js-yaml'); +const { + PlanTracker, + Status, + getPlanProgress, + updateAfterSubtask, + CONFIG, +} = require('../../.aios-core/infrastructure/scripts/plan-tracker'); + +describe('PlanTracker', () => { + const projectRoot = '/test/project'; + const storyId = 'STORY-42'; + + // Sample implementation plan + const samplePlan = { + storyId: 'STORY-42', + phases: [ + { + id: 1, + name: 'Setup', + subtasks: [ + { id: '1.1', description: 'Create directory structure', status: 'completed' }, + { id: '1.2', description: 'Setup configuration', status: 'in_progress' }, + ], + }, + { + id: 2, + name: 'Implementation', + subtasks: [ + { id: '2.1', description: 'Implement core logic', status: 'pending' }, + { id: '2.2', description: 'Add error handling', status: 'pending' }, + { id: '2.3', description: 'Write tests', status: 'pending' }, + ], + }, + ], + }; + + beforeEach(() => { + jest.clearAllMocks(); + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockReturnValue('mock yaml content'); + yaml.load.mockReturnValue({ ...samplePlan }); + }); + + describe('constructor', () => { + it('should accept string (legacy) constructor', () => { + const tracker = new PlanTracker(storyId); + expect(tracker.storyId).toBe(storyId); + expect(tracker.rootPath).toBe(process.cwd()); + }); + + it('should accept object constructor', () => { + const tracker = new PlanTracker({ + storyId, + rootPath: 
projectRoot, + }); + expect(tracker.storyId).toBe(storyId); + expect(tracker.rootPath).toBe(projectRoot); + }); + + it('should use explicit planPath when provided', () => { + const planPath = '/custom/path/implementation.yaml'; + const tracker = new PlanTracker({ + planPath, + rootPath: projectRoot, + }); + expect(tracker.planPath).toBe(planPath); + }); + }); + + describe('load', () => { + it('should load plan from yaml file', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + tracker.load(); + + expect(yaml.load).toHaveBeenCalled(); + expect(tracker.plan).toBeDefined(); + }); + + it('should throw error if plan file not found', () => { + fs.existsSync.mockReturnValue(false); + + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + + expect(() => tracker.load()).toThrow(/not found/); + }); + }); + + describe('getStats', () => { + it('should calculate correct statistics', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const stats = tracker.getStats(); + + expect(stats.total).toBe(5); + expect(stats.completed).toBe(1); + expect(stats.inProgress).toBe(1); + expect(stats.pending).toBe(3); + expect(stats.failed).toBe(0); + }); + + it('should calculate percentage correctly', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const stats = tracker.getStats(); + + expect(stats.percentComplete).toBe(20); // 1/5 = 20% + }); + + it('should identify current task', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const stats = tracker.getStats(); + + expect(stats.current).toBeDefined(); + expect(stats.current.subtask).toBe('1.2'); + expect(stats.current.phase).toBe('Setup'); + }); + + it('should determine overall status correctly', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const stats = tracker.getStats(); + + expect(stats.status).toBe(Status.IN_PROGRESS); + }); + + it('should return COMPLETED 
status when all done', () => { + const completedPlan = { + phases: [ + { + id: 1, + name: 'Setup', + subtasks: [ + { id: '1.1', description: 'Task 1', status: 'completed' }, + { id: '1.2', description: 'Task 2', status: 'completed' }, + ], + }, + ], + }; + yaml.load.mockReturnValue(completedPlan); + + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const stats = tracker.getStats(); + + expect(stats.status).toBe(Status.COMPLETED); + expect(stats.percentComplete).toBe(100); + }); + + it('should return FAILED status when any task failed', () => { + const failedPlan = { + phases: [ + { + id: 1, + name: 'Setup', + subtasks: [ + { id: '1.1', description: 'Task 1', status: 'completed' }, + { id: '1.2', description: 'Task 2', status: 'failed' }, + ], + }, + ], + }; + yaml.load.mockReturnValue(failedPlan); + + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const stats = tracker.getStats(); + + expect(stats.status).toBe(Status.FAILED); + }); + }); + + describe('progressBar', () => { + it('should generate correct progress bar', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + + const bar0 = tracker.progressBar(0); + const bar50 = tracker.progressBar(50); + const bar100 = tracker.progressBar(100); + + expect(bar0).toBe('░░░░░░░░░░'); + expect(bar50).toBe('▓▓▓▓▓░░░░░'); + expect(bar100).toBe('▓▓▓▓▓▓▓▓▓▓'); + }); + + it('should support custom width', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + + const bar = tracker.progressBar(50, 20); + + expect(bar.length).toBe(20); + expect(bar).toBe('▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░'); + }); + }); + + describe('generateReport', () => { + it('should generate visual report', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const report = tracker.generateReport(); + + expect(report).toContain('Implementation Progress'); + expect(report).toContain('STORY-42'); + expect(report).toContain('Setup'); + 
expect(report).toContain('%'); + }); + }); + + describe('generateDetailedReport', () => { + it('should include all subtasks', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const report = tracker.generateDetailedReport(); + + expect(report).toContain('1.1'); + expect(report).toContain('1.2'); + expect(report).toContain('2.1'); + expect(report).toContain('Create directory structure'); + }); + }); + + describe('getNextPending', () => { + it('should return first pending subtask', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const stats = tracker.getStats(); + const next = tracker.getNextPending(stats); + + expect(next).toBeDefined(); + expect(next.id).toBe('2.1'); + expect(next.status).toBe(Status.PENDING); + }); + + it('should return null when no pending tasks', () => { + const completedPlan = { + phases: [ + { + id: 1, + name: 'Setup', + subtasks: [{ id: '1.1', description: 'Task 1', status: 'completed' }], + }, + ], + }; + yaml.load.mockReturnValue(completedPlan); + + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const stats = tracker.getStats(); + const next = tracker.getNextPending(stats); + + expect(next).toBeNull(); + }); + }); + + describe('updateSubtaskStatus', () => { + it('should update subtask status', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + tracker.load(); + + tracker.updateSubtaskStatus('1.2', Status.COMPLETED); + + expect(fs.writeFileSync).toHaveBeenCalled(); + }); + + it('should throw error for unknown subtask', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + tracker.load(); + + expect(() => { + tracker.updateSubtaskStatus('9.9', Status.COMPLETED); + }).toThrow(/not found/); + }); + + it('should add extra properties to subtask', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + tracker.load(); + + tracker.updateSubtaskStatus('1.2', Status.COMPLETED, { + 
completedAt: '2026-01-29', + }); + + const subtask = tracker.plan.phases[0].subtasks[1]; + expect(subtask.completedAt).toBe('2026-01-29'); + }); + }); + + describe('startSubtask', () => { + it('should mark subtask as in_progress', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + tracker.load(); + + tracker.startSubtask('2.1'); + + const subtask = tracker.plan.phases[1].subtasks[0]; + expect(subtask.status).toBe(Status.IN_PROGRESS); + }); + }); + + describe('completeSubtask', () => { + it('should mark subtask as completed with timestamp', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + tracker.load(); + + tracker.completeSubtask('1.2'); + + const subtask = tracker.plan.phases[0].subtasks[1]; + expect(subtask.status).toBe(Status.COMPLETED); + expect(subtask.completedAt).toBeDefined(); + }); + }); + + describe('failSubtask', () => { + it('should mark subtask as failed with error', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + tracker.load(); + + tracker.failSubtask('2.1', 'Test error'); + + const subtask = tracker.plan.phases[1].subtasks[0]; + expect(subtask.status).toBe(Status.FAILED); + expect(subtask.error).toBe('Test error'); + expect(subtask.failedAt).toBeDefined(); + }); + }); + + describe('saveProgress', () => { + it('should write progress report to file', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + + const resultPath = tracker.saveProgress(); + + expect(fs.writeFileSync).toHaveBeenCalled(); + expect(resultPath).toContain('build-progress.txt'); + }); + + it('should create directory if not exists', () => { + // Mock directory doesn't exist initially + fs.existsSync.mockImplementation((p) => { + // Plan file exists, but output directory doesn't + if (p.includes('plan') && p.includes('implementation')) return true; + if (p.includes('plan') && !p.includes('implementation')) return false; + return true; + }); + + const tracker = 
new PlanTracker({ storyId, rootPath: projectRoot }); + tracker.saveProgress(); + + // mkdirSync is called when directory doesn't exist + expect(fs.mkdirSync).toHaveBeenCalled(); + }); + }); + + describe('updateStatusJson', () => { + it('should update dashboard status file', () => { + fs.readFileSync.mockImplementation((p) => { + if (p.includes('status.json')) { + return JSON.stringify({ version: '1.0', stories: { inProgress: [], completed: [] } }); + } + return 'mock yaml content'; + }); + + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + + const resultPath = tracker.updateStatusJson(); + + expect(fs.writeFileSync).toHaveBeenCalled(); + expect(resultPath).toContain('status.json'); + }); + + it('should add story to inProgress list when in progress', () => { + // Setup fresh mocks + fs.existsSync.mockReturnValue(true); + fs.readFileSync.mockImplementation((p) => { + if (p.includes('status.json')) { + return JSON.stringify({ version: '1.0', stories: { inProgress: [], completed: [] } }); + } + return 'mock yaml content'; + }); + // Ensure fresh sample plan for this test + yaml.load.mockReturnValue({ + storyId: 'STORY-42', + phases: [ + { + id: 1, + name: 'Setup', + subtasks: [ + { id: '1.1', description: 'Task 1', status: 'completed' }, + { id: '1.2', description: 'Task 2', status: 'in_progress' }, + ], + }, + ], + }); + + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + tracker.load(); + tracker.updateStatusJson(); + + // Verify write was called with status file + const dashboardWriteCall = fs.writeFileSync.mock.calls.find((c) => + c[0].includes('status.json'), + ); + + expect(dashboardWriteCall).toBeDefined(); + const written = JSON.parse(dashboardWriteCall[1]); + + // Verify planProgress contains our story with in_progress status + expect(written.planProgress).toBeDefined(); + expect(written.planProgress[storyId]).toBeDefined(); + expect(written.planProgress[storyId].status).toBe('in_progress'); + }); + }); + + 
describe('getProgress', () => { + it('should return progress in expected format (AC2)', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const progress = tracker.getProgress(); + + expect(progress).toHaveProperty('total'); + expect(progress).toHaveProperty('completed'); + expect(progress).toHaveProperty('inProgress'); + expect(progress).toHaveProperty('pending'); + expect(progress).toHaveProperty('failed'); + expect(progress).toHaveProperty('percentage'); + expect(progress).toHaveProperty('status'); + expect(progress).toHaveProperty('phases'); + }); + }); + + describe('toJSON', () => { + it('should return stats as JSON', () => { + const tracker = new PlanTracker({ storyId, rootPath: projectRoot }); + const json = tracker.toJSON(); + + expect(json.total).toBe(5); + expect(json.phases).toHaveLength(2); + }); + }); +}); + +describe('Helper Functions', () => { + beforeEach(() => { + jest.clearAllMocks(); + fs.existsSync.mockReturnValue(true); + yaml.load.mockReturnValue({ + phases: [ + { + id: 1, + name: 'Test', + subtasks: [{ id: '1.1', description: 'Test task', status: 'pending' }], + }, + ], + }); + }); + + describe('getPlanProgress', () => { + it('should return progress for story', () => { + const progress = getPlanProgress('STORY-42'); + + expect(progress).toBeDefined(); + expect(progress.total).toBe(1); + }); + + it('should return null if plan not found', () => { + fs.existsSync.mockReturnValue(false); + + const progress = getPlanProgress('STORY-NOTFOUND'); + + expect(progress).toBeNull(); + }); + }); + + describe('updateAfterSubtask', () => { + it('should update subtask and return stats', () => { + const stats = updateAfterSubtask('STORY-42', '1.1', Status.COMPLETED); + + expect(stats).toBeDefined(); + expect(fs.writeFileSync).toHaveBeenCalled(); + }); + }); +}); + +describe('Status Constants', () => { + it('should export all status values', () => { + expect(Status.PENDING).toBe('pending'); + 
expect(Status.IN_PROGRESS).toBe('in_progress'); + expect(Status.COMPLETED).toBe('completed'); + expect(Status.FAILED).toBe('failed'); + expect(Status.BLOCKED).toBe('blocked'); + expect(Status.SKIPPED).toBe('skipped'); + }); +}); + +describe('CONFIG', () => { + it('should have required configuration values', () => { + expect(CONFIG.progressBarWidth).toBe(10); + expect(CONFIG.dashboardStatusPath).toBeDefined(); + expect(CONFIG.buildProgressFile).toBe('build-progress.txt'); + expect(CONFIG.implementationFile).toBe('implementation.yaml'); + }); +}); + +``` + +================================================== +📄 tests/infrastructure/worktree-manager.test.js +================================================== +```js +/** + * @fileoverview Tests for WorktreeManager - ADE Epic 1 + * @description Unit tests for Git worktree management functionality + */ + +const WorktreeManager = require('../../.aios-core/infrastructure/scripts/worktree-manager'); +const path = require('path'); + +// Mock execa +jest.mock('execa', () => { + const mockExeca = jest.fn(); + return mockExeca; +}); + +// Mock fs.promises +jest.mock('fs', () => ({ + promises: { + mkdir: jest.fn().mockResolvedValue(undefined), + access: jest.fn(), + stat: jest.fn(), + readdir: jest.fn(), + readFile: jest.fn(), + writeFile: jest.fn().mockResolvedValue(undefined), + }, +})); + +// Mock chalk to avoid color issues in tests +jest.mock('chalk', () => ({ + green: (str) => str, + red: (str) => str, + yellow: (str) => str, + gray: (str) => str, + cyan: (str) => str, + bold: (str) => str, +})); + +const execa = require('execa'); +const fs = require('fs').promises; + +describe('WorktreeManager', () => { + let manager; + const projectRoot = '/test/project'; + + beforeEach(() => { + jest.clearAllMocks(); + manager = new WorktreeManager(projectRoot); + }); + + describe('constructor', () => { + it('should use default options when none provided', () => { + const mgr = new WorktreeManager(projectRoot); + 
expect(mgr.projectRoot).toBe(projectRoot); + expect(mgr.maxWorktrees).toBe(10); + expect(mgr.worktreeDir).toBe('.aios/worktrees'); + expect(mgr.branchPrefix).toBe('auto-claude/'); + expect(mgr.staleDays).toBe(30); + }); + + it('should use custom options when provided', () => { + const mgr = new WorktreeManager(projectRoot, { + maxWorktrees: 5, + worktreeDir: 'custom/dir', + branchPrefix: 'story/', + staleDays: 14, + }); + expect(mgr.maxWorktrees).toBe(5); + expect(mgr.worktreeDir).toBe('custom/dir'); + expect(mgr.branchPrefix).toBe('story/'); + expect(mgr.staleDays).toBe(14); + }); + + it('should use process.cwd() when no projectRoot provided', () => { + const mgr = new WorktreeManager(); + expect(mgr.projectRoot).toBe(process.cwd()); + }); + }); + + describe('getWorktreePath', () => { + it('should return correct path for story ID', () => { + const result = manager.getWorktreePath('STORY-42'); + expect(result).toBe(path.join(projectRoot, '.aios/worktrees', 'STORY-42')); + }); + }); + + describe('getBranchName', () => { + it('should return correct branch name with prefix', () => { + const result = manager.getBranchName('STORY-42'); + expect(result).toBe('auto-claude/STORY-42'); + }); + }); + + describe('exists', () => { + it('should return true when worktree directory exists', async () => { + fs.access.mockResolvedValue(undefined); + + const result = await manager.exists('STORY-42'); + + expect(result).toBe(true); + expect(fs.access).toHaveBeenCalledWith(path.join(projectRoot, '.aios/worktrees', 'STORY-42')); + }); + + it('should return false when worktree directory does not exist', async () => { + fs.access.mockRejectedValue(new Error('ENOENT')); + + const result = await manager.exists('STORY-42'); + + expect(result).toBe(false); + }); + }); + + describe('create', () => { + beforeEach(() => { + execa.mockResolvedValue({ stdout: '', stderr: '' }); + fs.stat.mockResolvedValue({ + birthtime: new Date(), + mtime: new Date(), + }); + }); + + it('should create worktree 
with correct git commands', async () => { + // First access check (exists) returns false, second (get) returns true + fs.access + .mockRejectedValueOnce(new Error('ENOENT')) // exists check + .mockResolvedValueOnce(undefined); // get check after creation + + const result = await manager.create('STORY-42'); + + expect(fs.mkdir).toHaveBeenCalledWith(path.join(projectRoot, '.aios/worktrees'), { + recursive: true, + }); + expect(execa).toHaveBeenCalledWith( + 'git', + [ + 'worktree', + 'add', + path.join(projectRoot, '.aios/worktrees', 'STORY-42'), + '-b', + 'auto-claude/STORY-42', + ], + expect.objectContaining({ cwd: projectRoot }), + ); + expect(result).toHaveProperty('storyId', 'STORY-42'); + }); + + it('should throw error if worktree already exists', async () => { + fs.access.mockResolvedValue(undefined); // worktree exists + + await expect(manager.create('STORY-42')).rejects.toThrow( + "Worktree for story 'STORY-42' already exists", + ); + }); + + it('should throw error if max worktrees limit reached', async () => { + // Mock list to return max worktrees + execa.mockResolvedValue({ + stdout: Array(10) + .fill(null) + .map( + (_, i) => + `worktree /test/project/.aios/worktrees/STORY-${i}\nbranch refs/heads/auto-claude/STORY-${i}`, + ) + .join('\n\n'), + stderr: '', + }); + + // Mock fs.stat for all worktrees + fs.stat.mockResolvedValue({ + birthtime: new Date(), + mtime: new Date(), + }); + + // First access call for exists() returns false (new worktree doesn't exist) + // But list() will show 10 existing worktrees + fs.access + .mockRejectedValueOnce(new Error('ENOENT')) // exists check for new worktree + .mockResolvedValue(undefined); // access checks for listing + + await expect(manager.create('STORY-NEW')).rejects.toThrow( + /Maximum worktrees limit \(10\) reached/, + ); + }); + }); + + describe('remove', () => { + beforeEach(() => { + fs.access.mockResolvedValue(undefined); // worktree exists + execa.mockResolvedValue({ stdout: '', stderr: '' }); + }); + + 
it('should remove worktree and branch', async () => { + const result = await manager.remove('STORY-42'); + + expect(execa).toHaveBeenCalledWith( + 'git', + ['worktree', 'remove', path.join(projectRoot, '.aios/worktrees', 'STORY-42')], + expect.objectContaining({ cwd: projectRoot }), + ); + expect(execa).toHaveBeenCalledWith( + 'git', + ['branch', '-d', 'auto-claude/STORY-42'], + expect.objectContaining({ cwd: projectRoot }), + ); + expect(result).toBe(true); + }); + + it('should force remove when option is set', async () => { + await manager.remove('STORY-42', { force: true }); + + expect(execa).toHaveBeenCalledWith( + 'git', + ['worktree', 'remove', path.join(projectRoot, '.aios/worktrees', 'STORY-42'), '--force'], + expect.objectContaining({ cwd: projectRoot }), + ); + expect(execa).toHaveBeenCalledWith( + 'git', + ['branch', '-D', 'auto-claude/STORY-42'], + expect.objectContaining({ cwd: projectRoot }), + ); + }); + + it('should throw error if worktree does not exist', async () => { + fs.access.mockRejectedValue(new Error('ENOENT')); + + await expect(manager.remove('STORY-42')).rejects.toThrow( + "Worktree for story 'STORY-42' does not exist", + ); + }); + }); + + describe('list', () => { + it('should return empty array when no worktrees', async () => { + execa.mockResolvedValue({ stdout: '', stderr: '' }); + + const result = await manager.list(); + + expect(result).toEqual([]); + }); + + it('should parse worktree list and filter by prefix', async () => { + const porcelainOutput = `worktree /test/project +branch refs/heads/main + +worktree /test/project/.aios/worktrees/STORY-42 +branch refs/heads/auto-claude/STORY-42 + +worktree /test/project/.aios/worktrees/STORY-43 +branch refs/heads/auto-claude/STORY-43`; + + execa.mockResolvedValue({ stdout: porcelainOutput, stderr: '' }); + fs.stat.mockResolvedValue({ + birthtime: new Date(), + mtime: new Date(), + }); + + const result = await manager.list(); + + // Should filter out main worktree and only return 
auto-claude prefixed ones + expect(result.length).toBe(2); + expect(result[0].storyId).toBe('STORY-42'); + expect(result[1].storyId).toBe('STORY-43'); + }); + }); + + describe('getCount', () => { + it('should return correct counts', async () => { + const now = Date.now(); + const oldDate = new Date(now - 45 * 24 * 60 * 60 * 1000); // 45 days ago + const recentDate = new Date(now - 5 * 24 * 60 * 60 * 1000); // 5 days ago + + const porcelainOutput = `worktree /test/project/.aios/worktrees/STORY-OLD +branch refs/heads/auto-claude/STORY-OLD + +worktree /test/project/.aios/worktrees/STORY-NEW +branch refs/heads/auto-claude/STORY-NEW`; + + execa.mockResolvedValue({ stdout: porcelainOutput, stderr: '' }); + fs.stat + .mockResolvedValueOnce({ birthtime: oldDate, mtime: oldDate }) + .mockResolvedValueOnce({ birthtime: recentDate, mtime: recentDate }); + + const result = await manager.getCount(); + + expect(result.total).toBe(2); + expect(result.stale).toBe(1); + expect(result.active).toBe(1); + }); + }); + + describe('formatAge', () => { + it('should format recent time as "just now"', () => { + const result = manager.formatAge(new Date()); + expect(result).toBe('just now'); + }); + + it('should format hours correctly', () => { + const twoHoursAgo = new Date(Date.now() - 2 * 60 * 60 * 1000); + const result = manager.formatAge(twoHoursAgo); + expect(result).toBe('2h ago'); + }); + + it('should format days correctly', () => { + const threeDaysAgo = new Date(Date.now() - 3 * 24 * 60 * 60 * 1000); + const result = manager.formatAge(threeDaysAgo); + expect(result).toBe('3d ago'); + }); + }); + + describe('formatList', () => { + it('should return message when no worktrees', () => { + const result = manager.formatList([]); + expect(result).toContain('No active worktrees'); + }); + + it('should format worktree list correctly', () => { + const worktrees = [ + { + storyId: 'STORY-42', + branch: 'auto-claude/STORY-42', + uncommittedChanges: 3, + status: 'active', + createdAt: new 
Date(), + }, + ]; + + const result = manager.formatList(worktrees); + + expect(result).toContain('Active Worktrees'); + expect(result).toContain('STORY-42'); + expect(result).toContain('3 uncommitted'); + }); + }); + + describe('cleanupStale', () => { + it('should remove stale worktrees', async () => { + const oldDate = new Date(Date.now() - 45 * 24 * 60 * 60 * 1000); + + const porcelainOutput = `worktree /test/project/.aios/worktrees/STORY-OLD +branch refs/heads/auto-claude/STORY-OLD`; + + execa.mockResolvedValue({ stdout: porcelainOutput, stderr: '' }); + fs.stat.mockResolvedValue({ birthtime: oldDate, mtime: oldDate }); + fs.access.mockResolvedValue(undefined); + + const result = await manager.cleanupStale(); + + expect(result).toContain('STORY-OLD'); + }); + + it('should return empty array when no stale worktrees', async () => { + execa.mockResolvedValue({ stdout: '', stderr: '' }); + + const result = await manager.cleanupStale(); + + expect(result).toEqual([]); + }); + }); + + describe('getMergeHistory', () => { + it('should return empty array when no logs exist', async () => { + fs.access.mockRejectedValue(new Error('ENOENT')); + + const result = await manager.getMergeHistory('STORY-42'); + + expect(result).toEqual([]); + }); + + it('should return merge history sorted by timestamp', async () => { + fs.access.mockResolvedValue(undefined); + fs.readdir.mockResolvedValue([ + 'merge-STORY-42-2026-01-28T10-00-00-000Z.json', + 'merge-STORY-42-2026-01-29T10-00-00-000Z.json', + ]); + fs.readFile + .mockResolvedValueOnce( + JSON.stringify({ + storyId: 'STORY-42', + success: true, + timestamp: '2026-01-28T10:00:00.000Z', + }), + ) + .mockResolvedValueOnce( + JSON.stringify({ + storyId: 'STORY-42', + success: true, + timestamp: '2026-01-29T10:00:00.000Z', + }), + ); + + const result = await manager.getMergeHistory('STORY-42'); + + expect(result.length).toBe(2); + // Should be sorted by timestamp descending + 
expect(result[0].timestamp.getTime()).toBeGreaterThan(result[1].timestamp.getTime()); + }); + }); +}); + +``` + +================================================== +📄 tests/infrastructure/ai-providers/ai-provider-factory.test.js +================================================== +```js +/** + * AI Provider Factory Tests + * Story GEMINI-INT.2 + */ + +const { + getProvider, + getPrimaryProvider, + getFallbackProvider, + getProviderForTask, + executeWithFallback, + getAvailableProviders, + getProvidersStatus, + ClaudeProvider, + GeminiProvider, +} = require('../../../.aios-core/infrastructure/integrations/ai-providers/ai-provider-factory'); + +describe('AI Provider Factory', () => { + describe('Provider Classes', () => { + it('should export ClaudeProvider class', () => { + expect(ClaudeProvider).toBeDefined(); + const provider = new ClaudeProvider(); + expect(provider.name).toBe('claude'); + }); + + it('should export GeminiProvider class', () => { + expect(GeminiProvider).toBeDefined(); + const provider = new GeminiProvider(); + expect(provider.name).toBe('gemini'); + }); + }); + + describe('getProvider', () => { + it('should return claude provider', () => { + const provider = getProvider('claude'); + expect(provider).toBeDefined(); + expect(provider.name).toBe('claude'); + }); + + it('should return gemini provider', () => { + const provider = getProvider('gemini'); + expect(provider).toBeDefined(); + expect(provider.name).toBe('gemini'); + }); + + it('should throw error for unknown provider', () => { + expect(() => getProvider('unknown')).toThrow('Unknown AI provider'); + }); + }); + + describe('getPrimaryProvider', () => { + it('should return the primary provider', () => { + const provider = getPrimaryProvider(); + expect(provider).toBeDefined(); + expect(provider.name).toBe('claude'); + }); + }); + + describe('getFallbackProvider', () => { + it('should return the fallback provider', () => { + const provider = getFallbackProvider(); + 
expect(provider).toBeDefined(); + expect(provider.name).toBe('gemini'); + }); + }); + + describe('getProviderForTask', () => { + it('should return a provider for any task type', () => { + const provider = getProviderForTask('simple'); + expect(provider).toBeDefined(); + expect(provider.name).toBeDefined(); + }); + + it('should return a provider for unknown task types', () => { + const provider = getProviderForTask('unknown'); + expect(provider).toBeDefined(); + expect(provider.name).toBeDefined(); + }); + }); + + describe('getAvailableProviders', () => { + it('should return an object or array', () => { + const providers = getAvailableProviders(); + expect(providers).toBeDefined(); + }); + }); + + describe('getProvidersStatus', () => { + it('should return an object', () => { + const status = getProvidersStatus(); + expect(typeof status).toBe('object'); + }); + }); + + describe('executeWithFallback', () => { + it('should be a function', () => { + expect(typeof executeWithFallback).toBe('function'); + }); + }); +}); + +``` + +================================================== +📄 tests/installer/pro-scaffolder.test.js +================================================== +```js +/** + * Pro Content Scaffolder Tests + * + * @story INS-3.1 — Implement Pro Content Scaffolder + */ + +'use strict'; + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); +const yaml = require('js-yaml'); + +const { + scaffoldProContent, + scaffoldFile, + rollbackScaffold, + generateProVersionJson, + generateInstalledManifest, + SCAFFOLD_ITEMS, +} = require('../../packages/installer/src/pro/pro-scaffolder'); + +// Create isolated temp dirs for each test +let tmpDir; +let targetDir; +let proSourceDir; + +beforeEach(async () => { + tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'pro-scaffolder-')); + targetDir = path.join(tmpDir, 'project'); + proSourceDir = path.join(tmpDir, 'pro-package'); + + // Create target project structure + await 
fs.ensureDir(path.join(targetDir, '.aios-core')); + + // Create mock pro source package + await fs.ensureDir(path.join(proSourceDir, 'squads', 'devops-squad')); + await fs.writeFile( + path.join(proSourceDir, 'squads', 'devops-squad', 'squad.yaml'), + yaml.dump({ name: 'devops-squad', version: '1.0.0' }) + ); + await fs.writeFile( + path.join(proSourceDir, 'pro-config.yaml'), + yaml.dump({ pro: { enabled: true, tier: 'standard' } }) + ); + await fs.writeFile( + path.join(proSourceDir, 'feature-registry.yaml'), + yaml.dump({ features: [{ id: 'squads-pro', enabled: true }] }) + ); + await fs.writeJson( + path.join(proSourceDir, 'package.json'), + { name: '@aios-fullstack/pro', version: '2.0.0' } + ); +}); + +afterEach(async () => { + await fs.remove(tmpDir); +}); + +describe('scaffoldProContent', () => { + // AC1, AC2, AC3: Copies squads, pro-config.yaml, feature-registry.yaml + it('should copy all pro content to project (AC1, AC2, AC3)', async () => { + const result = await scaffoldProContent(targetDir, proSourceDir); + + expect(result.success).toBe(true); + expect(result.errors).toHaveLength(0); + + // AC1: squads exist + expect(await fs.pathExists( + path.join(targetDir, 'squads', 'devops-squad', 'squad.yaml') + )).toBe(true); + + // AC2: pro-config.yaml exists in .aios-core/ + expect(await fs.pathExists( + path.join(targetDir, '.aios-core', 'pro-config.yaml') + )).toBe(true); + + // AC3: feature-registry.yaml exists in .aios-core/ + expect(await fs.pathExists( + path.join(targetDir, '.aios-core', 'feature-registry.yaml') + )).toBe(true); + }); + + // AC4: pro-version.json with SHA256 hashes + it('should generate pro-version.json with SHA256 hashes (AC4)', async () => { + const result = await scaffoldProContent(targetDir, proSourceDir); + + expect(result.success).toBe(true); + + const versionPath = path.join(targetDir, 'pro-version.json'); + expect(await fs.pathExists(versionPath)).toBe(true); + + const versionInfo = await fs.readJson(versionPath); + 
expect(versionInfo.proVersion).toBe('2.0.0'); + expect(versionInfo.installedAt).toBeDefined(); + expect(versionInfo.fileHashes).toBeDefined(); + + // Verify at least one hash is sha256 format + const hashes = Object.values(versionInfo.fileHashes); + expect(hashes.length).toBeGreaterThan(0); + for (const hash of hashes) { + expect(hash).toMatch(/^sha256:[a-f0-9]{64}$/); + } + }); + + // AC5: Idempotency - running 2x does not duplicate + it('should be idempotent: 2nd run skips identical files (AC5)', async () => { + // First run + const result1 = await scaffoldProContent(targetDir, proSourceDir); + expect(result1.success).toBe(true); + const copiedCount1 = result1.copiedFiles.length; + + // Second run + const result2 = await scaffoldProContent(targetDir, proSourceDir); + expect(result2.success).toBe(true); + + // On second run, content files should be skipped (identical hashes) + expect(result2.skippedFiles.length).toBeGreaterThan(0); + + // Verify file content is still correct (not corrupted) + const configContent = yaml.load( + await fs.readFile(path.join(targetDir, '.aios-core', 'pro-config.yaml'), 'utf8') + ); + expect(configContent.pro.enabled).toBe(true); + }); + + // AC6: Cleanup on partial failure + it('should rollback partially copied files on error (AC6)', async () => { + // Remove pro-config.yaml from source — squads (processed first) will copy + // successfully, then pro-config.yaml (required) will fail, triggering rollback + await fs.remove(path.join(proSourceDir, 'pro-config.yaml')); + + const result = await scaffoldProContent(targetDir, proSourceDir); + + expect(result.success).toBe(false); + expect(result.errors.length).toBeGreaterThan(0); + expect(result.warnings.some(w => w.includes('Scaffolding failed'))).toBe(true); + + // Verify rollback: squads copied before failure should be cleaned up + expect(await fs.pathExists( + path.join(targetDir, 'squads', 'devops-squad', 'squad.yaml') + )).toBe(false); + + // pro-version.json and 
pro-installed-manifest.yaml should not exist + expect(await fs.pathExists(path.join(targetDir, 'pro-version.json'))).toBe(false); + expect(await fs.pathExists(path.join(targetDir, 'pro-installed-manifest.yaml'))).toBe(false); + }); + + // AC7: Offline fallback - no network calls + it('should work without network connectivity (AC7)', async () => { + // scaffoldProContent makes NO network calls - it only uses local filesystem + // This test verifies the function succeeds without any mocked APIs + const result = await scaffoldProContent(targetDir, proSourceDir); + + expect(result.success).toBe(true); + // The function signature takes no API client, no network options + // This confirms offline-by-design + }); + + // AC8: pro-installed-manifest.yaml + it('should generate pro-installed-manifest.yaml with timestamps (AC8)', async () => { + const result = await scaffoldProContent(targetDir, proSourceDir); + + expect(result.success).toBe(true); + + const manifestPath = path.join(targetDir, 'pro-installed-manifest.yaml'); + expect(await fs.pathExists(manifestPath)).toBe(true); + + const manifest = yaml.load(await fs.readFile(manifestPath, 'utf8')); + expect(manifest.generatedAt).toBeDefined(); + expect(manifest.totalFiles).toBeGreaterThan(0); + expect(manifest.files).toBeInstanceOf(Array); + expect(manifest.files.length).toBe(manifest.totalFiles); + + for (const file of manifest.files) { + expect(file.path).toBeDefined(); + expect(file.timestamp).toBeDefined(); + } + }); + + // AC3: Warning when feature-registry.yaml absent + it('should emit warning when feature-registry.yaml is absent in source (AC3)', async () => { + // Remove feature-registry.yaml from source + await fs.remove(path.join(proSourceDir, 'feature-registry.yaml')); + + const result = await scaffoldProContent(targetDir, proSourceDir); + + // Should still succeed (feature-registry is not required) + expect(result.success).toBe(true); + expect(result.warnings.some(w => w.includes('Feature 
registry'))).toBe(true); + }); + + it('should return error when pro source directory does not exist', async () => { + const fakePath = path.join(tmpDir, 'nonexistent-pro-dir'); + const result = await scaffoldProContent(targetDir, fakePath); + + expect(result.success).toBe(false); + expect(result.errors[0]).toContain('Pro package not found'); + }); + + it('should call onProgress callback for each scaffold item', async () => { + const progress = []; + await scaffoldProContent(targetDir, proSourceDir, { + onProgress: (p) => progress.push(p), + }); + + expect(progress.length).toBeGreaterThan(0); + expect(progress.some(p => p.status === 'done')).toBe(true); + }); +}); + +describe('rollbackScaffold', () => { + it('should remove all tracked files', async () => { + const file1 = path.join(tmpDir, 'rollback-test-1.txt'); + const file2 = path.join(tmpDir, 'rollback-test-2.txt'); + await fs.writeFile(file1, 'test1'); + await fs.writeFile(file2, 'test2'); + + const result = await rollbackScaffold([file1, file2]); + + expect(result.removed).toBe(2); + expect(result.errors).toHaveLength(0); + expect(await fs.pathExists(file1)).toBe(false); + expect(await fs.pathExists(file2)).toBe(false); + }); + + it('should handle already-deleted files gracefully', async () => { + const result = await rollbackScaffold(['/nonexistent/file.txt']); + + expect(result.removed).toBe(0); + expect(result.errors).toHaveLength(0); + }); +}); + +describe('generateProVersionJson', () => { + it('should generate correct version info with hashes', async () => { + const testFile = path.join(targetDir, 'test.yaml'); + await fs.writeFile(testFile, 'test: true'); + + const versionInfo = await generateProVersionJson( + targetDir, + proSourceDir, + ['test.yaml'] + ); + + expect(versionInfo.proVersion).toBe('2.0.0'); + expect(versionInfo.fileCount).toBe(1); + expect(versionInfo.fileHashes['test.yaml']).toMatch(/^sha256:/); + }); +}); + +describe('generateInstalledManifest', () => { + it('should list all files with 
timestamps', async () => { + const testFile = path.join(targetDir, 'manifest-test.yaml'); + await fs.writeFile(testFile, 'content'); + + const manifest = await generateInstalledManifest( + targetDir, + ['manifest-test.yaml'] + ); + + expect(manifest.totalFiles).toBe(1); + expect(manifest.files[0].path).toBe('manifest-test.yaml'); + expect(manifest.files[0].timestamp).toBeDefined(); + }); +}); + +``` + +================================================== +📄 tests/installer/v21-path-validation.test.js +================================================== +```js +/** + * v2.1 Path Validation Tests + * + * Validates that after migration to modular structure: + * 1. All agent dependencies point to existing files + * 2. All task references are valid + * 3. All workflow references are valid + * 4. No {root} placeholders remain (should be replaced with .aios-core) + * + * @module tests/installer/v21-path-validation + */ + +const { describe, it, before } = require('node:test'); +const assert = require('node:assert'); +const fs = require('fs-extra'); +const path = require('path'); +const yaml = require('js-yaml'); + +// Path to .aios-core directory +const AIOS_CORE_PATH = path.join(__dirname, '..', '..', '.aios-core'); + +// v2.1 Module mapping for dependency resolution +const MODULE_MAPPING = { + // Development module + agents: 'development/agents', + tasks: 'development/tasks', + workflows: 'development/workflows', + scripts: 'development/scripts', + personas: 'development/personas', + 'agent-teams': 'development/agent-teams', + + // Product module + templates: 'product/templates', + checklists: 'product/checklists', + data: 'product/data', + cli: 'product/cli', + api: 'product/api', + + // Core module + utils: 'core/utils', + config: 'core/config', + registry: 'core/registry', + manifest: 'core/manifest', + + // Infrastructure module + tools: 'infrastructure/tools', + integrations: 'infrastructure/integrations', + hooks: 'infrastructure/hooks', + telemetry: 
'infrastructure/telemetry', +}; + +/** + * Extract YAML frontmatter from markdown file + * @param {string} content - File content + * @returns {Object|null} Parsed YAML or null + */ +function extractYamlFromMarkdown(content) { + // Try to find YAML in code block + const codeBlockMatch = content.match(/```yaml\n([\s\S]*?)```/); + if (codeBlockMatch) { + try { + return yaml.load(codeBlockMatch[1]); + } catch (e) { + return null; + } + } + + // Try frontmatter style (---) + const frontmatterMatch = content.match(/^---\n([\s\S]*?)\n---/); + if (frontmatterMatch) { + try { + return yaml.load(frontmatterMatch[1]); + } catch (e) { + return null; + } + } + + return null; +} + +/** + * Get all files in a directory recursively + * @param {string} dir - Directory path + * @param {string} ext - File extension filter + * @returns {Promise} Array of file paths + */ +async function getFilesRecursive(dir, ext = '.md') { + const files = []; + + if (!await fs.pathExists(dir)) { + return files; + } + + const entries = await fs.readdir(dir, { withFileTypes: true }); + + for (const entry of entries) { + const fullPath = path.join(dir, entry.name); + if (entry.isDirectory()) { + const subFiles = await getFilesRecursive(fullPath, ext); + files.push(...subFiles); + } else if (entry.name.endsWith(ext)) { + files.push(fullPath); + } + } + + return files; +} + +/** + * Resolve dependency path based on type + * @param {string} type - Dependency type (tasks, checklists, etc.) 
+ * @param {string} filename - Dependency filename + * @returns {string} Full path to dependency file + */ +function resolveDependencyPath(type, filename) { + const modulePath = MODULE_MAPPING[type]; + if (!modulePath) { + // Fallback to development/{type} + return path.join(AIOS_CORE_PATH, 'development', type, filename); + } + return path.join(AIOS_CORE_PATH, modulePath, filename); +} + +describe('v2.1 Path Validation', () => { + const agents = []; + const tasks = []; + const workflows = []; + let allFiles = []; + + before(async () => { + // Load all agents + const agentDir = path.join(AIOS_CORE_PATH, 'development', 'agents'); + if (await fs.pathExists(agentDir)) { + const agentFiles = await getFilesRecursive(agentDir); + for (const file of agentFiles) { + const content = await fs.readFile(file, 'utf8'); + const parsed = extractYamlFromMarkdown(content); + agents.push({ + file: path.relative(AIOS_CORE_PATH, file), + name: path.basename(file, '.md'), + content, + yaml: parsed, + }); + } + } + + // Load all tasks + const taskDir = path.join(AIOS_CORE_PATH, 'development', 'tasks'); + if (await fs.pathExists(taskDir)) { + const taskFiles = await getFilesRecursive(taskDir); + for (const file of taskFiles) { + const content = await fs.readFile(file, 'utf8'); + const parsed = extractYamlFromMarkdown(content); + tasks.push({ + file: path.relative(AIOS_CORE_PATH, file), + name: path.basename(file, '.md'), + content, + yaml: parsed, + }); + } + } + + // Load all workflows + const workflowDir = path.join(AIOS_CORE_PATH, 'development', 'workflows'); + if (await fs.pathExists(workflowDir)) { + const workflowFiles = await getFilesRecursive(workflowDir); + for (const file of workflowFiles) { + const content = await fs.readFile(file, 'utf8'); + const parsed = extractYamlFromMarkdown(content); + workflows.push({ + file: path.relative(AIOS_CORE_PATH, file), + name: path.basename(file, '.md'), + content, + yaml: parsed, + }); + } + } + + // Get all files for {root} placeholder check 
+ allFiles = await getFilesRecursive(AIOS_CORE_PATH); + }); + + describe('Agent Dependency Validation', () => { + it('should have agents loaded', () => { + assert.ok(agents.length > 0, 'No agents found in development/agents/'); + }); + + it('should have all agent task dependencies exist', async () => { + const missingDeps = []; + + for (const agent of agents) { + if (!agent.yaml || !agent.yaml.dependencies) continue; + + const taskDeps = agent.yaml.dependencies.tasks || []; + for (const task of taskDeps) { + const taskPath = resolveDependencyPath('tasks', task); + if (!await fs.pathExists(taskPath)) { + missingDeps.push({ + agent: agent.name, + type: 'tasks', + dependency: task, + expectedPath: path.relative(AIOS_CORE_PATH, taskPath), + }); + } + } + } + + if (missingDeps.length > 0) { + console.log('\n❌ Missing task dependencies:'); + missingDeps.forEach(d => { + console.log(` Agent: ${d.agent} → Task: ${d.dependency}`); + console.log(` Expected: ${d.expectedPath}`); + }); + } + + assert.strictEqual(missingDeps.length, 0, + `Found ${missingDeps.length} missing task dependencies`); + }); + + it('should have all agent checklist dependencies exist', async () => { + const missingDeps = []; + + for (const agent of agents) { + if (!agent.yaml || !agent.yaml.dependencies) continue; + + const checklistDeps = agent.yaml.dependencies.checklists || []; + for (const checklist of checklistDeps) { + const checklistPath = resolveDependencyPath('checklists', checklist); + if (!await fs.pathExists(checklistPath)) { + missingDeps.push({ + agent: agent.name, + type: 'checklists', + dependency: checklist, + expectedPath: path.relative(AIOS_CORE_PATH, checklistPath), + }); + } + } + } + + if (missingDeps.length > 0) { + console.log('\n❌ Missing checklist dependencies:'); + missingDeps.forEach(d => { + console.log(` Agent: ${d.agent} → Checklist: ${d.dependency}`); + console.log(` Expected: ${d.expectedPath}`); + }); + } + + assert.strictEqual(missingDeps.length, 0, + `Found 
${missingDeps.length} missing checklist dependencies`); + }); + + it('should have all agent template dependencies exist', async () => { + const missingDeps = []; + + for (const agent of agents) { + if (!agent.yaml || !agent.yaml.dependencies) continue; + + const templateDeps = agent.yaml.dependencies.templates || []; + for (const template of templateDeps) { + const templatePath = resolveDependencyPath('templates', template); + if (!await fs.pathExists(templatePath)) { + missingDeps.push({ + agent: agent.name, + type: 'templates', + dependency: template, + expectedPath: path.relative(AIOS_CORE_PATH, templatePath), + }); + } + } + } + + if (missingDeps.length > 0) { + console.log('\n❌ Missing template dependencies:'); + missingDeps.forEach(d => { + console.log(` Agent: ${d.agent} → Template: ${d.dependency}`); + console.log(` Expected: ${d.expectedPath}`); + }); + } + + assert.strictEqual(missingDeps.length, 0, + `Found ${missingDeps.length} missing template dependencies`); + }); + }); + + describe('Task Dependency Validation', () => { + it('should have tasks loaded', () => { + assert.ok(tasks.length > 0, 'No tasks found in development/tasks/'); + }); + + it('should have all task dependencies exist', async () => { + const missingDeps = []; + + for (const task of tasks) { + if (!task.yaml || !task.yaml.dependencies) continue; + + for (const [type, deps] of Object.entries(task.yaml.dependencies)) { + if (!Array.isArray(deps)) continue; + + for (const dep of deps) { + // Skip external tools + if (type === 'tools' && typeof dep === 'string' && !dep.endsWith('.md')) { + continue; + } + + const depPath = resolveDependencyPath(type, dep); + if (!await fs.pathExists(depPath)) { + missingDeps.push({ + task: task.name, + type, + dependency: dep, + expectedPath: path.relative(AIOS_CORE_PATH, depPath), + }); + } + } + } + } + + if (missingDeps.length > 0) { + console.log('\n❌ Missing task dependencies:'); + missingDeps.forEach(d => { + console.log(` Task: ${d.task} → ${d.type}: 
${d.dependency}`); + console.log(` Expected: ${d.expectedPath}`); + }); + } + + assert.strictEqual(missingDeps.length, 0, + `Found ${missingDeps.length} missing task dependencies`); + }); + }); + + describe('Workflow Validation', () => { + it('should load workflows (if any exist)', () => { + // Workflows are optional, just log the count + console.log(` Found ${workflows.length} workflow(s)`); + }); + + it('should have valid workflow step references', async () => { + const invalidRefs = []; + + for (const workflow of workflows) { + if (!workflow.yaml || !workflow.yaml.steps) continue; + + for (const step of workflow.yaml.steps) { + // Check agent references + if (step.agent) { + const agentPath = resolveDependencyPath('agents', `${step.agent}.md`); + if (!await fs.pathExists(agentPath)) { + invalidRefs.push({ + workflow: workflow.name, + stepType: 'agent', + reference: step.agent, + expectedPath: path.relative(AIOS_CORE_PATH, agentPath), + }); + } + } + + // Check task references + if (step.task) { + const taskPath = resolveDependencyPath('tasks', `${step.task}.md`); + if (!await fs.pathExists(taskPath)) { + invalidRefs.push({ + workflow: workflow.name, + stepType: 'task', + reference: step.task, + expectedPath: path.relative(AIOS_CORE_PATH, taskPath), + }); + } + } + } + } + + if (invalidRefs.length > 0) { + console.log('\n❌ Invalid workflow references:'); + invalidRefs.forEach(r => { + console.log(` Workflow: ${r.workflow} → ${r.stepType}: ${r.reference}`); + console.log(` Expected: ${r.expectedPath}`); + }); + } + + assert.strictEqual(invalidRefs.length, 0, + `Found ${invalidRefs.length} invalid workflow references`); + }); + }); + + describe('{root} Placeholder Validation', () => { + it('should not have any unreplaced {root} placeholders in .md files', async () => { + const filesWithRoot = []; + + for (const file of allFiles) { + if (!file.endsWith('.md') && !file.endsWith('.yaml') && !file.endsWith('.yml')) { + continue; + } + + const content = await 
fs.readFile(file, 'utf8'); + + // Check for {root} placeholder that should have been replaced + if (content.includes('{root}')) { + const matches = content.match(/\{root\}/g); + filesWithRoot.push({ + file: path.relative(AIOS_CORE_PATH, file), + count: matches ? matches.length : 0, + }); + } + } + + if (filesWithRoot.length > 0) { + console.log('\n⚠️ Files with unreplaced {root} placeholders:'); + filesWithRoot.forEach(f => { + console.log(` ${f.file}: ${f.count} occurrence(s)`); + }); + } + + // This is a warning, not a hard failure (some templates may intentionally use {root}) + console.log(`\n Total files with {root}: ${filesWithRoot.length}`); + }); + }); + + describe('v2.1 Module Structure Validation', () => { + it('should have core module directory', async () => { + const coreDir = path.join(AIOS_CORE_PATH, 'core'); + const exists = await fs.pathExists(coreDir); + assert.ok(exists, 'core/ directory should exist'); + }); + + it('should have development module directory', async () => { + const devDir = path.join(AIOS_CORE_PATH, 'development'); + const exists = await fs.pathExists(devDir); + assert.ok(exists, 'development/ directory should exist'); + }); + + it('should have product module directory', async () => { + const productDir = path.join(AIOS_CORE_PATH, 'product'); + const exists = await fs.pathExists(productDir); + assert.ok(exists, 'product/ directory should exist'); + }); + + it('should have infrastructure module directory', async () => { + const infraDir = path.join(AIOS_CORE_PATH, 'infrastructure'); + const exists = await fs.pathExists(infraDir); + assert.ok(exists, 'infrastructure/ directory should exist'); + }); + + it('should have agents in development/agents/', async () => { + const agentsDir = path.join(AIOS_CORE_PATH, 'development', 'agents'); + const files = await getFilesRecursive(agentsDir); + assert.ok(files.length > 0, 'development/agents/ should have agent files'); + console.log(` Found ${files.length} agent(s)`); + }); + + it('should 
have tasks in development/tasks/', async () => { + const tasksDir = path.join(AIOS_CORE_PATH, 'development', 'tasks'); + const files = await getFilesRecursive(tasksDir); + assert.ok(files.length > 0, 'development/tasks/ should have task files'); + console.log(` Found ${files.length} task(s)`); + }); + + it('should have templates in product/templates/', async () => { + const templatesDir = path.join(AIOS_CORE_PATH, 'product', 'templates'); + const files = await getFilesRecursive(templatesDir); + console.log(` Found ${files.length} template(s)`); + }); + + it('should have checklists in product/checklists/', async () => { + const checklistsDir = path.join(AIOS_CORE_PATH, 'product', 'checklists'); + const files = await getFilesRecursive(checklistsDir); + console.log(` Found ${files.length} checklist(s)`); + }); + }); +}); + +// Summary report function +async function generateValidationReport() { + console.log('\n' + '='.repeat(60)); + console.log('v2.1 PATH VALIDATION REPORT'); + console.log('='.repeat(60)); + + const report = { + agents: { total: 0, withDeps: 0, missingDeps: [] }, + tasks: { total: 0, withDeps: 0, missingDeps: [] }, + workflows: { total: 0, invalidRefs: [] }, + rootPlaceholders: { files: [] }, + modules: { core: false, development: false, product: false, infrastructure: false }, + }; + + // Check modules exist + report.modules.core = await fs.pathExists(path.join(AIOS_CORE_PATH, 'core')); + report.modules.development = await fs.pathExists(path.join(AIOS_CORE_PATH, 'development')); + report.modules.product = await fs.pathExists(path.join(AIOS_CORE_PATH, 'product')); + report.modules.infrastructure = await fs.pathExists(path.join(AIOS_CORE_PATH, 'infrastructure')); + + console.log('\nModule Structure:'); + console.log(` core/: ${report.modules.core ? '✅' : '❌'}`); + console.log(` development/: ${report.modules.development ? '✅' : '❌'}`); + console.log(` product/: ${report.modules.product ? 
'✅' : '❌'}`); + console.log(` infrastructure/: ${report.modules.infrastructure ? '✅' : '❌'}`); + + // Count agents + const agentDir = path.join(AIOS_CORE_PATH, 'development', 'agents'); + if (await fs.pathExists(agentDir)) { + const agents = await getFilesRecursive(agentDir); + report.agents.total = agents.length; + } + + // Count tasks + const taskDir = path.join(AIOS_CORE_PATH, 'development', 'tasks'); + if (await fs.pathExists(taskDir)) { + const tasks = await getFilesRecursive(taskDir); + report.tasks.total = tasks.length; + } + + console.log('\nFile Counts:'); + console.log(` Agents: ${report.agents.total}`); + console.log(` Tasks: ${report.tasks.total}`); + + console.log('\n' + '='.repeat(60)); + + return report; +} + +module.exports = { generateValidationReport, extractYamlFromMarkdown, resolveDependencyPath }; + +``` + +================================================== +📄 tests/installer/v21-structure.test.js +================================================== +```js +/** + * Tests for v2.1 Modular Directory Structure + * + * Story 2.15: Update Installer for v2.1 Module Structure + * Validates that the installer correctly creates the v2.1 modular directory structure + * and supports the --legacy flag for backwards compatibility. 
+ */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +// Mock the modules that use dynamic imports +jest.mock('../../tools/installer/lib/module-manager', () => ({ + getModules: jest.fn().mockResolvedValue({ + glob: jest.fn().mockResolvedValue([]), + chalk: { + blue: jest.fn(x => x), + green: jest.fn(x => x), + yellow: jest.fn(x => x), + red: jest.fn(x => x), + dim: jest.fn(x => x), + bold: jest.fn(x => x), + cyan: jest.fn(x => x), + }, + }), +})); + +const resourceLocator = require('../../tools/installer/lib/resource-locator'); +const configLoader = require('../../tools/installer/lib/config-loader'); + +describe('v2.1 Module Structure Tests', () => { + describe('INS-01: Source Path Configuration', () => { + it('should point getAiosCorePath to .aios-core directory', () => { + const aiosCorePath = resourceLocator.getAiosCorePath(); + expect(aiosCorePath).toContain('.aios-core'); + expect(aiosCorePath).not.toMatch(/[^.]aios-core$/); + }); + + it('should have .aios-core as source for config-loader', () => { + const aiosCorePath = configLoader.getAiosCorePath(); + expect(aiosCorePath).toContain('.aios-core'); + }); + }); + + describe('INS-02: Agent Path Resolution', () => { + it('should resolve agent paths to development/agents/', async () => { + const agentPath = configLoader.getAgentPath('dev'); + expect(agentPath).toContain('development'); + expect(agentPath).toContain('agents'); + expect(agentPath).toContain('dev.md'); + }); + + it('should list agents from development/agents/', async () => { + const agents = await configLoader.getAvailableAgents(); + expect(agents.length).toBeGreaterThan(0); + + // All agent files should reference the v2.1 path + for (const agent of agents) { + expect(agent.file).toContain('.aios-core/development/agents/'); + } + }); + }); + + describe('INS-03: Team Path Resolution', () => { + it('should resolve team paths to development/agent-teams/', () => { + const teamPath = 
configLoader.getTeamPath('team-ide-minimal'); + expect(teamPath).toContain('development'); + expect(teamPath).toContain('agent-teams'); + }); + + it('should list teams from development/agent-teams/', async () => { + const teams = await configLoader.getAvailableTeams(); + // Teams should be resolvable (may be empty if no teams exist) + expect(Array.isArray(teams)).toBe(true); + }); + }); + + describe('INS-04: Resource Locator v2.1 Paths', () => { + it('should resolve agent dependencies with v2.1 module mapping', async () => { + const deps = await resourceLocator.getAgentDependencies('dev'); + + // Dependencies should use v2.1 module paths + for (const depPath of deps.all) { + // Should not have flat structure paths like .aios-core/tasks/ + expect(depPath).not.toMatch(/\.aios-core\/(tasks|templates|checklists|workflows|utils|data)\//); + + // Should have modular paths + const hasModularPath = + depPath.includes('development/') || + depPath.includes('product/') || + depPath.includes('core/') || + depPath.includes('infrastructure/'); + + expect(hasModularPath).toBe(true); + } + }); + }); + + describe('INS-05: Manifest Files Location', () => { + it('should have manifests directory in .aios-core', async () => { + const manifestsPath = path.join(resourceLocator.getAiosCorePath(), 'manifests'); + const exists = await fs.pathExists(manifestsPath); + expect(exists).toBe(true); + }); + + it('should have agents.csv manifest', async () => { + const agentsCsvPath = path.join(resourceLocator.getAiosCorePath(), 'manifests', 'agents.csv'); + const exists = await fs.pathExists(agentsCsvPath); + expect(exists).toBe(true); + }); + + it('should have tasks.csv manifest', async () => { + const tasksCsvPath = path.join(resourceLocator.getAiosCorePath(), 'manifests', 'tasks.csv'); + const exists = await fs.pathExists(tasksCsvPath); + expect(exists).toBe(true); + }); + + it('should have workers.csv manifest', async () => { + const workersCsvPath = 
path.join(resourceLocator.getAiosCorePath(), 'manifests', 'workers.csv'); + const exists = await fs.pathExists(workersCsvPath); + expect(exists).toBe(true); + }); + }); + + describe('INS-06: Module Directory Structure Verification', () => { + it('should have development module with agents subdirectory', async () => { + const developmentAgents = path.join(resourceLocator.getAiosCorePath(), 'development', 'agents'); + const exists = await fs.pathExists(developmentAgents); + expect(exists).toBe(true); + }); + + it('should have development module with tasks subdirectory', async () => { + const developmentTasks = path.join(resourceLocator.getAiosCorePath(), 'development', 'tasks'); + const exists = await fs.pathExists(developmentTasks); + expect(exists).toBe(true); + }); + + it('should have product module with templates subdirectory', async () => { + const productTemplates = path.join(resourceLocator.getAiosCorePath(), 'product', 'templates'); + const exists = await fs.pathExists(productTemplates); + expect(exists).toBe(true); + }); + + it('should have product module with checklists subdirectory', async () => { + const productChecklists = path.join(resourceLocator.getAiosCorePath(), 'product', 'checklists'); + const exists = await fs.pathExists(productChecklists); + expect(exists).toBe(true); + }); + + it('should have core module with utils subdirectory', async () => { + const coreUtils = path.join(resourceLocator.getAiosCorePath(), 'core', 'utils'); + const exists = await fs.pathExists(coreUtils); + expect(exists).toBe(true); + }); + + it('should have infrastructure module', async () => { + const infrastructure = path.join(resourceLocator.getAiosCorePath(), 'infrastructure'); + const exists = await fs.pathExists(infrastructure); + expect(exists).toBe(true); + }); + }); + + describe('Module Mapping Verification', () => { + const expectedModuleMapping = { + tasks: 'development/tasks', + workflows: 'development/workflows', + agents: 'development/agents', + 'agent-teams': 
'development/agent-teams', + scripts: 'development/scripts', + templates: 'product/templates', + checklists: 'product/checklists', + data: 'product/data', + utils: 'core/utils', + config: 'core/config', + tools: 'infrastructure/tools', + integrations: 'infrastructure/integrations', + }; + + it('should map tasks to development module', () => { + expect(expectedModuleMapping.tasks).toBe('development/tasks'); + }); + + it('should map templates to product module', () => { + expect(expectedModuleMapping.templates).toBe('product/templates'); + }); + + it('should map utils to core module', () => { + expect(expectedModuleMapping.utils).toBe('core/utils'); + }); + + it('should map integrations to infrastructure module', () => { + expect(expectedModuleMapping.integrations).toBe('infrastructure/integrations'); + }); + }); +}); + +describe('Legacy Flag Support (--legacy)', () => { + it('should accept legacyStructure option in CLI config', () => { + // This tests that the CLI correctly passes the --legacy flag + const config = { + installType: 'full', + directory: '.', + ides: [], + squads: [], + legacyStructure: true, + }; + + expect(config.legacyStructure).toBe(true); + }); + + it('should default legacyStructure to false', () => { + const config = { + installType: 'full', + directory: '.', + ides: [], + squads: [], + legacyStructure: false, + }; + + expect(config.legacyStructure).toBe(false); + }); +}); + +``` + +================================================== +📄 tests/installer/core-config-template.test.js +================================================== +```js +/** + * Unit Tests for core-config-template + * + * Story ACT-12: Language removed from core-config (delegated to Claude Code settings.json) + * + * Test Coverage: + * - generateCoreConfig no longer includes language field + * - Config still includes user_profile and other fields + * - YAML output parses correctly without language + */ + +const yaml = require('js-yaml'); +const { generateCoreConfig } = 
require('../../packages/installer/src/config/templates/core-config-template'); + +describe('core-config-template', () => { + describe('ACT-12: language removed from core-config', () => { + test('should NOT include language field in generated config', () => { + const output = generateCoreConfig(); + const parsed = yaml.load(output); + + expect(parsed).not.toHaveProperty('language'); + }); + + test('should ignore language option if passed (backward compat)', () => { + const output = generateCoreConfig({ language: 'pt' }); + const parsed = yaml.load(output); + + // language param is no longer destructured, so it's just ignored + expect(parsed).not.toHaveProperty('language'); + }); + + test('should still include user_profile', () => { + const output = generateCoreConfig({ userProfile: 'bob' }); + const parsed = yaml.load(output); + + expect(parsed.user_profile).toBe('bob'); + }); + + test('should generate valid YAML without language', () => { + const output = generateCoreConfig({ + projectType: 'BROWNFIELD', + selectedIDEs: ['vscode', 'cursor'], + userProfile: 'bob', + aiosVersion: '3.0.0', + }); + const parsed = yaml.load(output); + + expect(parsed).toBeDefined(); + expect(typeof parsed).toBe('object'); + expect(parsed).not.toHaveProperty('language'); + expect(parsed.user_profile).toBe('bob'); + expect(parsed.project.type).toBe('BROWNFIELD'); + expect(parsed.ide.selected).toContain('vscode'); + expect(parsed.ide.selected).toContain('cursor'); + }); + }); +}); + +``` + +================================================== +📄 tests/installer/pro-setup-auth.test.js +================================================== +```js +/** + * Unit tests for pro-setup.js email auth flow (PRO-11) + * + * @see Story PRO-11 - Email Authentication & Buyer-Based Pro Activation + * @see AC-7 - Backward compatibility with license key + */ + +'use strict'; + +const proSetup = require('../../packages/installer/src/wizard/pro-setup'); + +describe('pro-setup auth constants', () => { + it('should 
export EMAIL_PATTERN', () => { + const { EMAIL_PATTERN } = proSetup._testing; + + expect(EMAIL_PATTERN.test('valid@email.com')).toBe(true); + expect(EMAIL_PATTERN.test('user+tag@domain.co')).toBe(true); + expect(EMAIL_PATTERN.test('invalid')).toBe(false); + expect(EMAIL_PATTERN.test('@no-user.com')).toBe(false); + expect(EMAIL_PATTERN.test('no-domain@')).toBe(false); + expect(EMAIL_PATTERN.test('')).toBe(false); + }); + + it('should have MIN_PASSWORD_LENGTH of 8', () => { + expect(proSetup._testing.MIN_PASSWORD_LENGTH).toBe(8); + }); + + it('should have VERIFY_POLL_INTERVAL_MS of 5000', () => { + expect(proSetup._testing.VERIFY_POLL_INTERVAL_MS).toBe(5000); + }); + + it('should have VERIFY_POLL_TIMEOUT_MS of 10 minutes', () => { + expect(proSetup._testing.VERIFY_POLL_TIMEOUT_MS).toBe(10 * 60 * 1000); + }); +}); + +describe('pro-setup CI auth (AC-7, Task 4.6)', () => { + const originalEnv = { ...process.env }; + + afterEach(() => { + // Restore env + process.env = { ...originalEnv }; + }); + + it('should prefer email+password over key in CI mode', async () => { + const mockClient = { + isOnline: jest.fn().mockResolvedValue(true), + login: jest.fn().mockResolvedValue({ + sessionToken: 'test-session', + userId: 'user-1', + emailVerified: true, + }), + activateByAuth: jest.fn().mockResolvedValue({ + key: 'PRO-AUTO-1234-5678-ABCD', + features: ['pro.squads.*'], + seats: { used: 1, max: 2 }, + cacheValidDays: 30, + gracePeriodDays: 7, + }), + }; + + const mockLicenseApi = { + LicenseApiClient: jest.fn().mockReturnValue(mockClient), + }; + + // Override the loader + proSetup._testing.loadLicenseApi = () => mockLicenseApi; + + const result = await proSetup._testing.stepLicenseGateCI({ + email: 'ci@test.com', + password: 'CIPassword123', + key: 'PRO-SKIP-THIS-KEY0-XXXX', + }); + + expect(result.success).toBe(true); + expect(mockClient.login).toHaveBeenCalledWith('ci@test.com', 'CIPassword123'); + // Key should NOT be used when email is present + 
expect(result.key).toBe('PRO-AUTO-1234-5678-ABCD'); + + // Cleanup + proSetup._testing.loadLicenseApi = undefined; + }); + + it('should fall back to key when no email in CI mode', async () => { + const mockClient = { + isOnline: jest.fn().mockResolvedValue(true), + activate: jest.fn().mockResolvedValue({ + key: 'PRO-KEY0-1234-5678-ABCD', + features: ['pro.squads.*'], + seats: { used: 1, max: 2 }, + cacheValidDays: 30, + gracePeriodDays: 7, + }), + syncPendingDeactivation: jest.fn().mockResolvedValue(false), + }; + + const mockLicenseApi = { + LicenseApiClient: jest.fn().mockReturnValue(mockClient), + }; + + proSetup._testing.loadLicenseApi = () => mockLicenseApi; + + const result = await proSetup._testing.stepLicenseGateCI({ + key: 'PRO-KEY0-1234-5678-ABCD', + }); + + // Should validate via key flow + expect(result.success).toBeDefined(); + + proSetup._testing.loadLicenseApi = undefined; + }); + + it('should return error when no credentials in CI mode', async () => { + const result = await proSetup._testing.stepLicenseGateCI({}); + + expect(result.success).toBe(false); + expect(result.error).toContain('AIOS_PRO_EMAIL'); + }); +}); + +describe('pro-setup backward compatibility (AC-7)', () => { + it('should still export validateKeyFormat', () => { + expect(typeof proSetup.validateKeyFormat).toBe('function'); + expect(proSetup.validateKeyFormat('PRO-ABCD-1234-5678-WXYZ')).toBe(true); + expect(proSetup.validateKeyFormat('invalid')).toBe(false); + }); + + it('should still export maskLicenseKey', () => { + expect(typeof proSetup.maskLicenseKey).toBe('function'); + expect(proSetup.maskLicenseKey('PRO-ABCD-1234-5678-WXYZ')).toBe('PRO-ABCD-****-****-WXYZ'); + }); + + it('should export all original functions', () => { + expect(typeof proSetup.runProWizard).toBe('function'); + expect(typeof proSetup.stepLicenseGate).toBe('function'); + expect(typeof proSetup.stepInstallScaffold).toBe('function'); + expect(typeof proSetup.stepVerify).toBe('function'); + expect(typeof 
proSetup.isCIEnvironment).toBe('function'); + expect(typeof proSetup.showProHeader).toBe('function'); + }); + + it('should export new auth testing helpers', () => { + expect(typeof proSetup._testing.authenticateWithEmail).toBe('function'); + expect(typeof proSetup._testing.waitForEmailVerification).toBe('function'); + expect(typeof proSetup._testing.activateProByAuth).toBe('function'); + expect(typeof proSetup._testing.stepLicenseGateCI).toBe('function'); + }); +}); + +``` + +================================================== +📄 tests/installer/aios-core-installer.test.js +================================================== +```js +/** + * AIOS Core Installer Tests + * + * @story Story 7.2: Version Tracking + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); + +const { + generateFileHashes, + generateVersionJson, +} = require('../../packages/installer/src/installer/aios-core-installer'); + +describe('AIOS Core Installer - Version Tracking', () => { + let tempDir; + + beforeEach(async () => { + tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'aios-installer-test-')); + await fs.ensureDir(path.join(tempDir, '.aios-core')); + }); + + afterEach(async () => { + if (tempDir && fs.existsSync(tempDir)) { + await fs.remove(tempDir); + } + }); + + describe('generateFileHashes', () => { + it('should generate hashes for installed files', async () => { + const aiosCoreDir = path.join(tempDir, '.aios-core'); + + // Create test files + await fs.writeFile(path.join(aiosCoreDir, 'test1.md'), '# Test File 1'); + await fs.writeFile(path.join(aiosCoreDir, 'test2.md'), '# Test File 2'); + await fs.ensureDir(path.join(aiosCoreDir, 'agents')); + await fs.writeFile(path.join(aiosCoreDir, 'agents', 'dev.md'), '# Dev Agent'); + + const installedFiles = ['test1.md', 'test2.md', 'agents/dev.md']; + const hashes = await generateFileHashes(aiosCoreDir, installedFiles); + + expect(Object.keys(hashes)).toHaveLength(3); + 
expect(hashes['test1.md']).toMatch(/^sha256:[a-f0-9]{64}$/); + expect(hashes['test2.md']).toMatch(/^sha256:[a-f0-9]{64}$/); + expect(hashes['agents/dev.md']).toMatch(/^sha256:[a-f0-9]{64}$/); + }); + + it('should skip non-existent files', async () => { + const aiosCoreDir = path.join(tempDir, '.aios-core'); + + // Create only one file + await fs.writeFile(path.join(aiosCoreDir, 'exists.md'), '# Exists'); + + const installedFiles = ['exists.md', 'does-not-exist.md']; + const hashes = await generateFileHashes(aiosCoreDir, installedFiles); + + expect(Object.keys(hashes)).toHaveLength(1); + expect(hashes['exists.md']).toBeDefined(); + expect(hashes['does-not-exist.md']).toBeUndefined(); + }); + + it('should skip directories', async () => { + const aiosCoreDir = path.join(tempDir, '.aios-core'); + + await fs.ensureDir(path.join(aiosCoreDir, 'agents')); + await fs.writeFile(path.join(aiosCoreDir, 'file.md'), '# File'); + + const installedFiles = ['file.md', 'agents']; + const hashes = await generateFileHashes(aiosCoreDir, installedFiles); + + expect(Object.keys(hashes)).toHaveLength(1); + expect(hashes['file.md']).toBeDefined(); + expect(hashes['agents']).toBeUndefined(); + }); + + it('should generate consistent hashes for same content', async () => { + const aiosCoreDir = path.join(tempDir, '.aios-core'); + + await fs.writeFile(path.join(aiosCoreDir, 'file1.md'), 'Same content'); + await fs.writeFile(path.join(aiosCoreDir, 'file2.md'), 'Same content'); + + const installedFiles = ['file1.md', 'file2.md']; + const hashes = await generateFileHashes(aiosCoreDir, installedFiles); + + expect(hashes['file1.md']).toBe(hashes['file2.md']); + }); + + it('should generate different hashes for different content', async () => { + const aiosCoreDir = path.join(tempDir, '.aios-core'); + + await fs.writeFile(path.join(aiosCoreDir, 'file1.md'), 'Content A'); + await fs.writeFile(path.join(aiosCoreDir, 'file2.md'), 'Content B'); + + const installedFiles = ['file1.md', 'file2.md']; + const 
hashes = await generateFileHashes(aiosCoreDir, installedFiles); + + expect(hashes['file1.md']).not.toBe(hashes['file2.md']); + }); + }); + + describe('generateVersionJson', () => { + it('should create version.json with correct structure', async () => { + const aiosCoreDir = path.join(tempDir, '.aios-core'); + + // Create test files + await fs.writeFile(path.join(aiosCoreDir, 'test.md'), '# Test'); + + const result = await generateVersionJson({ + targetAiosCore: aiosCoreDir, + version: '1.2.0', + installedFiles: ['test.md'], + mode: 'project-development', + }); + + expect(result.version).toBe('1.2.0'); + expect(result.mode).toBe('project-development'); + expect(result.installedAt).toMatch(/^\d{4}-\d{2}-\d{2}T/); + expect(result.fileHashes).toBeDefined(); + expect(result.fileHashes['test.md']).toMatch(/^sha256:/); + expect(result.customized).toEqual([]); + }); + + it('should write version.json to disk', async () => { + const aiosCoreDir = path.join(tempDir, '.aios-core'); + + await fs.writeFile(path.join(aiosCoreDir, 'agent.md'), '# Agent'); + + await generateVersionJson({ + targetAiosCore: aiosCoreDir, + version: '2.0.0', + installedFiles: ['agent.md'], + mode: 'framework-development', + }); + + const versionJsonPath = path.join(aiosCoreDir, 'version.json'); + expect(fs.existsSync(versionJsonPath)).toBe(true); + + const versionJson = await fs.readJson(versionJsonPath); + expect(versionJson.version).toBe('2.0.0'); + expect(versionJson.mode).toBe('framework-development'); + }); + + it('should use default mode when not specified', async () => { + const aiosCoreDir = path.join(tempDir, '.aios-core'); + + const result = await generateVersionJson({ + targetAiosCore: aiosCoreDir, + version: '1.0.0', + installedFiles: [], + }); + + expect(result.mode).toBe('project-development'); + }); + + it('should include file hashes in version.json', async () => { + const aiosCoreDir = path.join(tempDir, '.aios-core'); + + await fs.ensureDir(path.join(aiosCoreDir, 'agents')); + await 
fs.writeFile(path.join(aiosCoreDir, 'agents', 'dev.md'), '# Dev'); + await fs.writeFile(path.join(aiosCoreDir, 'config.yaml'), 'key: value'); + + const result = await generateVersionJson({ + targetAiosCore: aiosCoreDir, + version: '1.0.0', + installedFiles: ['agents/dev.md', 'config.yaml'], + }); + + expect(Object.keys(result.fileHashes)).toHaveLength(2); + expect(result.fileHashes['agents/dev.md']).toBeDefined(); + expect(result.fileHashes['config.yaml']).toBeDefined(); + }); + }); +}); + +``` + +================================================== +📄 tests/installer/file-hasher.test.js +================================================== +```js +/** + * Unit tests for file-hasher.js + * @story 6.18 - Dynamic Manifest & Brownfield Upgrade System + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); +const { + hashFile, + hashString, + hashesMatch, + getFileMetadata, + isBinaryFile, + normalizeLineEndings, + removeBOM, +} = require('../../packages/installer/src/installer/file-hasher'); + +describe('file-hasher', () => { + let tempDir; + + beforeAll(() => { + tempDir = path.join(os.tmpdir(), 'file-hasher-test-' + Date.now()); + fs.ensureDirSync(tempDir); + }); + + afterAll(() => { + fs.removeSync(tempDir); + }); + + describe('normalizeLineEndings', () => { + it('should convert CRLF to LF', () => { + const input = 'line1\r\nline2\r\nline3'; + const expected = 'line1\nline2\nline3'; + expect(normalizeLineEndings(input)).toBe(expected); + }); + + it('should convert standalone CR to LF', () => { + const input = 'line1\rline2\rline3'; + const expected = 'line1\nline2\nline3'; + expect(normalizeLineEndings(input)).toBe(expected); + }); + + it('should preserve LF', () => { + const input = 'line1\nline2\nline3'; + expect(normalizeLineEndings(input)).toBe(input); + }); + + it('should handle mixed line endings', () => { + const input = 'line1\r\nline2\rline3\nline4'; + const expected = 'line1\nline2\nline3\nline4'; + 
expect(normalizeLineEndings(input)).toBe(expected); + }); + }); + + describe('removeBOM', () => { + it('should remove UTF-8 BOM', () => { + const withBOM = '\uFEFFcontent'; + expect(removeBOM(withBOM)).toBe('content'); + }); + + it('should not modify content without BOM', () => { + const noBOM = 'content'; + expect(removeBOM(noBOM)).toBe('content'); + }); + }); + + describe('isBinaryFile', () => { + it('should identify image files as binary', () => { + expect(isBinaryFile('image.png')).toBe(true); + expect(isBinaryFile('photo.jpg')).toBe(true); + expect(isBinaryFile('icon.gif')).toBe(true); + }); + + it('should identify archive files as binary', () => { + expect(isBinaryFile('archive.zip')).toBe(true); + expect(isBinaryFile('package.tar.gz')).toBe(true); + }); + + it('should identify text files as non-binary', () => { + expect(isBinaryFile('readme.md')).toBe(false); + expect(isBinaryFile('script.js')).toBe(false); + expect(isBinaryFile('config.yaml')).toBe(false); + }); + + it('should be case-insensitive', () => { + expect(isBinaryFile('IMAGE.PNG')).toBe(true); + expect(isBinaryFile('Archive.ZIP')).toBe(true); + }); + }); + + describe('hashFile', () => { + it('should hash a text file', () => { + const testFile = path.join(tempDir, 'test.txt'); + fs.writeFileSync(testFile, 'Hello, World!'); + + const hash = hashFile(testFile); + expect(hash).toMatch(/^[a-f0-9]{64}$/); + }); + + it('should produce consistent hashes for same content', () => { + const file1 = path.join(tempDir, 'file1.txt'); + const file2 = path.join(tempDir, 'file2.txt'); + fs.writeFileSync(file1, 'same content'); + fs.writeFileSync(file2, 'same content'); + + expect(hashFile(file1)).toBe(hashFile(file2)); + }); + + it('should produce different hashes for different content', () => { + const file1 = path.join(tempDir, 'diff1.txt'); + const file2 = path.join(tempDir, 'diff2.txt'); + fs.writeFileSync(file1, 'content A'); + fs.writeFileSync(file2, 'content B'); + + 
expect(hashFile(file1)).not.toBe(hashFile(file2)); + }); + + it('should normalize line endings for consistent cross-platform hashing', () => { + const fileLF = path.join(tempDir, 'lf.txt'); + const fileCRLF = path.join(tempDir, 'crlf.txt'); + fs.writeFileSync(fileLF, 'line1\nline2\n'); + fs.writeFileSync(fileCRLF, 'line1\r\nline2\r\n'); + + expect(hashFile(fileLF)).toBe(hashFile(fileCRLF)); + }); + + it('should throw error for non-existent file', () => { + expect(() => hashFile(path.join(tempDir, 'nonexistent.txt'))).toThrow('File not found'); + }); + + it('should throw error for directory', () => { + expect(() => hashFile(tempDir)).toThrow('Cannot hash directory'); + }); + }); + + describe('hashString', () => { + it('should hash a string', () => { + const hash = hashString('test content'); + expect(hash).toMatch(/^[a-f0-9]{64}$/); + }); + + it('should produce consistent hashes', () => { + expect(hashString('same')).toBe(hashString('same')); + }); + + it('should produce different hashes for different strings', () => { + expect(hashString('one')).not.toBe(hashString('two')); + }); + }); + + describe('hashesMatch', () => { + it('should match identical hashes', () => { + const hash = 'sha256:abc123'; + expect(hashesMatch(hash, hash)).toBe(true); + }); + + it('should match hashes regardless of case', () => { + expect(hashesMatch('sha256:ABC123', 'sha256:abc123')).toBe(true); + }); + + it('should not match different hashes', () => { + expect(hashesMatch('sha256:abc', 'sha256:def')).toBe(false); + }); + + it('should return false for null/undefined', () => { + expect(hashesMatch(null, 'sha256:abc')).toBe(false); + expect(hashesMatch('sha256:abc', null)).toBe(false); + expect(hashesMatch(undefined, undefined)).toBe(false); + }); + }); + + describe('getFileMetadata', () => { + it('should return correct metadata', () => { + const testFile = path.join(tempDir, 'meta.txt'); + fs.writeFileSync(testFile, 'test content'); + + const metadata = getFileMetadata(testFile, tempDir); + 
+ expect(metadata.path).toBe('meta.txt'); + expect(metadata.hash).toMatch(/^sha256:[a-f0-9]{64}$/); + expect(metadata.size).toBeGreaterThan(0); + expect(metadata.isBinary).toBe(false); + }); + + it('should use forward slashes in path', () => { + const subDir = path.join(tempDir, 'sub'); + fs.ensureDirSync(subDir); + const testFile = path.join(subDir, 'nested.txt'); + fs.writeFileSync(testFile, 'nested'); + + const metadata = getFileMetadata(testFile, tempDir); + expect(metadata.path).toBe('sub/nested.txt'); + expect(metadata.path).not.toContain('\\'); + }); + }); +}); + +``` + +================================================== +📄 tests/installer/wizard-language.test.js +================================================== +```js +/** + * Tests for wizard language delegation to Claude Code settings.json + * + * Story ACT-12: Native Language Delegation + * + * Test Coverage: + * - configureEnvironment no longer writes language to core-config.yaml + * - core-config.yaml generated without language field + */ + +const path = require('path'); +const fse = require('fs-extra'); +const os = require('os'); +const yaml = require('js-yaml'); + +const { configureEnvironment } = require('../../packages/installer/src/config/configure-environment'); + +describe('ACT-12: Language delegated to Claude Code settings.json', () => { + let tempDir; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `aios-test-lang-${Date.now()}`); + await fse.ensureDir(tempDir); + await fse.ensureDir(path.join(tempDir, '.aios-core')); + }); + + afterEach(async () => { + await fse.remove(tempDir); + }); + + test('should NOT include language in generated core-config.yaml', async () => { + const result = await configureEnvironment({ + targetDir: tempDir, + skipPrompts: true, + }); + + expect(result.coreConfigCreated).toBe(true); + + const configPath = path.join(tempDir, '.aios-core', 'core-config.yaml'); + const content = await fse.readFile(configPath, 'utf8'); + const config = 
yaml.load(content); + + expect(config).not.toHaveProperty('language'); + }); + + test('should still include user_profile in core-config.yaml', async () => { + const result = await configureEnvironment({ + targetDir: tempDir, + userProfile: 'bob', + skipPrompts: true, + }); + + expect(result.coreConfigCreated).toBe(true); + + const configPath = path.join(tempDir, '.aios-core', 'core-config.yaml'); + const content = await fse.readFile(configPath, 'utf8'); + const config = yaml.load(content); + + expect(config.user_profile).toBe('bob'); + expect(config).not.toHaveProperty('language'); + }); +}); + +``` + +================================================== +📄 tests/installer/mcp-installation.test.js +================================================== +```js +/** + * MCP Installation Tests + * Story 1.5: MCP Installation (Project-Level) + * + * Test suite for MCP installer module + * + * @jest-environment node + */ + +const fs = require('fs'); +const path = require('path'); +const fse = require('fs-extra'); +const { exec } = require('child_process'); + +// Mock child_process exec to avoid real npx calls +jest.mock('child_process', () => ({ + exec: jest.fn(), +})); + +const { installProjectMCPs, displayInstallationStatus, MCP_CONFIGS } = require('../../bin/modules/mcp-installer'); + +// Test fixtures directory +const FIXTURES_DIR = path.join(__dirname, '__fixtures__', 'mcp-installation'); + +describe('MCP Installation Module', () => { + let tempDir; + + beforeEach(async () => { + // Create temporary test directory + tempDir = path.join(FIXTURES_DIR, `test-${Date.now()}`); + await fse.ensureDir(tempDir); + + // Mock exec to simulate successful npx package validation + exec.mockImplementation((command, options, callback) => { + // Simulate async callback after short delay + setTimeout(() => { + callback(null, { stdout: 'v1.0.0\n', stderr: '' }); + }, 10); + }); + }); + + afterEach(async () => { + // Clean up temporary directory + if (tempDir && fs.existsSync(tempDir)) { + 
// Add retry logic for Windows file locking (TEST-003 fix) + let retries = 3; + while (retries > 0) { + try { + await fse.remove(tempDir); + break; + } catch (error) { + if (error.code === 'EBUSY' && retries > 1) { + // Wait a bit and retry + await new Promise(resolve => setTimeout(resolve, 100)); + retries--; + } else { + // Last retry failed or different error + console.warn(`Failed to clean up test directory: ${error.message}`); + break; + } + } + } + } + + // Clear all mocks + jest.clearAllMocks(); + }); + + describe('MCP Configuration Templates', () => { + test('should have all 4 required MCPs configured', () => { + expect(MCP_CONFIGS).toHaveProperty('browser'); + expect(MCP_CONFIGS).toHaveProperty('context7'); + expect(MCP_CONFIGS).toHaveProperty('exa'); + expect(MCP_CONFIGS).toHaveProperty('desktop-commander'); + }); + + test('browser MCP should have correct structure', () => { + const browser = MCP_CONFIGS.browser; + expect(browser.id).toBe('browser'); + expect(browser.package).toBe('@modelcontextprotocol/server-puppeteer'); + expect(browser.transport).toBe('stdio'); + expect(typeof browser.getConfig).toBe('function'); + }); + + test('context7 MCP should use npx/stdio transport', () => { + const context7 = MCP_CONFIGS.context7; + expect(context7.package).toBe('@upstash/context7-mcp'); + expect(context7.transport).toBe('stdio'); + expect(typeof context7.getConfig).toBe('function'); + }); + + test('exa MCP should require API key', () => { + const exa = MCP_CONFIGS.exa; + expect(exa.requiresApiKey).toBe(true); + expect(exa.apiKeyEnvVar).toBe('EXA_API_KEY'); + }); + + test('desktop-commander MCP should have correct package', () => { + const dc = MCP_CONFIGS['desktop-commander']; + expect(dc.package).toBe('@wonderwhy-er/desktop-commander'); + }); + }); + + describe('Platform-specific Configuration', () => { + test('should generate Windows config for npm MCPs on win32', () => { + const browser = MCP_CONFIGS.browser; + const config = browser.getConfig('win32'); + 
+ expect(config.command).toBe('cmd'); + expect(config.args).toContain('/c'); + expect(config.args).toContain('npx'); + }); + + test('should generate Unix config for npm MCPs on darwin/linux', () => { + const browser = MCP_CONFIGS.browser; + const config = browser.getConfig('darwin'); + + expect(config.command).toBe('npx'); + expect(config.args[0]).toBe('-y'); + }); + + test('context7 config should be platform-specific npx/stdio', () => { + const context7 = MCP_CONFIGS.context7; + + // Test Windows config + const winConfig = context7.getConfig('win32'); + expect(winConfig.command).toBe('cmd'); + expect(winConfig.args).toContain('/c'); + expect(winConfig.args).toContain('npx'); + expect(winConfig.args).toContain('@upstash/context7-mcp'); + + // Test Unix config + const unixConfig = context7.getConfig('darwin'); + expect(unixConfig.command).toBe('npx'); + expect(unixConfig.args).toContain('@upstash/context7-mcp'); + }); + + test('exa should include tools parameter', () => { + const exa = MCP_CONFIGS.exa; + const config = exa.getConfig('linux', 'test-api-key'); + + const toolsArg = config.args.find(arg => arg.startsWith('--tools=')); + expect(toolsArg).toBeDefined(); + expect(toolsArg).toContain('web_search_exa'); + }); + }); + + describe('.mcp.json Configuration Management', () => { + test('should create new .mcp.json if not exists', async () => { + const mcpPath = path.join(tempDir, '.mcp.json'); + + const result = await installProjectMCPs({ + selectedMCPs: ['browser'], + projectPath: tempDir, + onProgress: () => {}, + }); + + expect(fs.existsSync(mcpPath)).toBe(true); + expect(result.configPath).toBe(mcpPath); + }); + + test('should have valid JSON structure', async () => { + const mcpPath = path.join(tempDir, '.mcp.json'); + + await installProjectMCPs({ + selectedMCPs: ['browser'], + projectPath: tempDir, + }); + + const content = fs.readFileSync(mcpPath, 'utf8'); + const config = JSON.parse(content); + + expect(config).toHaveProperty('mcpServers'); + expect(typeof 
config.mcpServers).toBe('object');
+    });
+
+    test('should append to existing .mcp.json', async () => {
+      const mcpPath = path.join(tempDir, '.mcp.json');
+
+      // Create initial config
+      const initialConfig = {
+        mcpServers: {
+          custom: { command: 'test' },
+        },
+      };
+      fs.writeFileSync(mcpPath, JSON.stringify(initialConfig, null, 2));
+
+      // Install MCPs
+      await installProjectMCPs({
+        selectedMCPs: ['browser'],
+        projectPath: tempDir,
+      });
+
+      const content = fs.readFileSync(mcpPath, 'utf8');
+      const config = JSON.parse(content);
+
+      expect(config.mcpServers.custom).toEqual({ command: 'test' });
+      expect(config.mcpServers.browser).toBeDefined();
+    });
+
+    test('should create backup of existing .mcp.json', async () => {
+      const mcpPath = path.join(tempDir, '.mcp.json');
+      const backupPath = path.join(tempDir, '.mcp.json.backup');
+
+      // Create initial config
+      fs.writeFileSync(mcpPath, JSON.stringify({ test: true }, null, 2));
+
+      // Install MCPs
+      await installProjectMCPs({
+        selectedMCPs: ['browser'],
+        projectPath: tempDir,
+      });
+
+      expect(fs.existsSync(backupPath)).toBe(true);
+      const backup = JSON.parse(fs.readFileSync(backupPath, 'utf8'));
+      expect(backup.test).toBe(true);
+    });
+  });
+
+  describe('Installation Process', () => {
+    test('should install all 4 MCPs successfully', async () => {
+      const result = await installProjectMCPs({
+        selectedMCPs: ['browser', 'context7', 'exa', 'desktop-commander'],
+        projectPath: tempDir,
+        apiKeys: { EXA_API_KEY: 'test-key' },
+      });
+
+      expect(result.success).toBeDefined();
+      expect(result.installedMCPs).toHaveProperty('browser');
+      expect(result.installedMCPs).toHaveProperty('context7');
+      expect(result.installedMCPs).toHaveProperty('exa');
+      expect(result.installedMCPs).toHaveProperty('desktop-commander');
+    });
+
+    test('should create installation logs', async () => {
+      const logPath = path.join(tempDir, '.aios', 'install-log.txt');
+
+      await installProjectMCPs({
+        selectedMCPs: ['browser'],
+        projectPath: tempDir,
+      });
+
+      expect(fs.existsSync(logPath)).toBe(true);
+      const log = fs.readFileSync(logPath, 'utf8');
+      expect(log).toContain('[INFO] Starting MCP installation');
+    });
+
+    test('should call progress callback', async () => {
+      const progressCalls = [];
+
+      await installProjectMCPs({
+        selectedMCPs: ['browser'],
+        projectPath: tempDir,
+        onProgress: (status) => {
+          progressCalls.push(status);
+        },
+      });
+
+      expect(progressCalls.length).toBeGreaterThan(0);
+      expect(progressCalls.some(c => c.phase === 'installation')).toBe(true);
+    });
+
+    test('should handle empty MCP selection', async () => {
+      const result = await installProjectMCPs({
+        selectedMCPs: [],
+        projectPath: tempDir,
+      });
+
+      expect(result.installedMCPs).toEqual({});
+    });
+
+    test('should reject unknown MCP ID', async () => {
+      const result = await installProjectMCPs({
+        selectedMCPs: ['unknown-mcp'],
+        projectPath: tempDir,
+      });
+
+      expect(result.success).toBe(false);
+      expect(result.errors.length).toBeGreaterThan(0);
+    });
+  });
+
+  describe('Health Checks', () => {
+    test('should run health checks for installed MCPs', async () => {
+      const result = await installProjectMCPs({
+        selectedMCPs: ['browser'],
+        projectPath: tempDir,
+      });
+
+      const browserStatus = result.installedMCPs.browser;
+      expect(browserStatus).toBeDefined();
+      expect(['success', 'warning', 'failed']).toContain(browserStatus.status);
+    });
+
+    test('should mark MCP as warning if health check fails', async () => {
+      // Health checks are simplified in the current implementation:
+      // they validate configuration rather than actually probing MCP servers.
+      const result = await installProjectMCPs({
+        selectedMCPs: ['browser'],
+        projectPath: tempDir,
+      });
+
+      const browserStatus = result.installedMCPs.browser;
+      expect(browserStatus.message).toBeDefined();
+    });
+  });
+
+  describe('Error Handling', () => {
+    test('should log errors to error log', async () => {
+      const errorLogPath = path.join(tempDir, '.aios', 'install-errors.log');
+
+      await installProjectMCPs({
+        selectedMCPs: ['unknown-mcp'],
+        projectPath: tempDir,
+      });
+
+      if (fs.existsSync(errorLogPath)) {
+        const errorLog = fs.readFileSync(errorLogPath, 'utf8');
+        expect(errorLog).toBeTruthy();
+      }
+    });
+
+    test('should include errors in result', async () => {
+      const result = await installProjectMCPs({
+        selectedMCPs: ['unknown-mcp'],
+        projectPath: tempDir,
+      });
+
+      expect(result.errors).toBeDefined();
+      expect(Array.isArray(result.errors)).toBe(true);
+    });
+  });
+
+  describe('Status Display', () => {
+    test('displayInstallationStatus should not throw', () => {
+      const result = {
+        success: true,
+        installedMCPs: {
+          browser: { status: 'success', message: 'Installed' },
+        },
+        configPath: '/test/.mcp.json',
+        errors: [],
+      };
+
+      expect(() => {
+        displayInstallationStatus(result);
+      }).not.toThrow();
+    });
+
+    test('should display all status types', () => {
+      const result = {
+        success: true,
+        installedMCPs: {
+          browser: { status: 'success', message: 'Installed' },
+          exa: { status: 'warning', message: 'Timeout' },
+          'desktop-commander': { status: 'failed', message: 'Error' },
+        },
+        configPath: '/test/.mcp.json',
+        errors: [],
+      };
+
+      expect(() => {
+        displayInstallationStatus(result);
+      }).not.toThrow();
+    });
+  });
+
+  describe('API Key Handling', () => {
+    test('should use provided API key for Exa', async () => {
+      const mcpPath = path.join(tempDir, '.mcp.json');
+
+      await installProjectMCPs({
+        selectedMCPs: ['exa'],
+        projectPath: tempDir,
+        apiKeys: { EXA_API_KEY: 'my-test-key' },
+      });
+
+      const content = fs.readFileSync(mcpPath, 'utf8');
+      const config = JSON.parse(content);
+
+      expect(config.mcpServers.exa.env.EXA_API_KEY).toBe('my-test-key');
+    });
+
+    test('should use placeholder if no API key provided', async () => {
+      const mcpPath = path.join(tempDir, '.mcp.json');
+
+      await installProjectMCPs({
+        selectedMCPs: ['exa'],
+        projectPath: tempDir,
+        apiKeys: {},
+      });
+
+      const content = fs.readFileSync(mcpPath, 'utf8');
+      const config = JSON.parse(content);
+
+      expect(config.mcpServers.exa.env.EXA_API_KEY).toBe('${EXA_API_KEY}');
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/installer/generate-manifest.test.js
+==================================================
+```js
+/**
+ * Unit tests for generate-install-manifest.js
+ * @story 6.18 - Dynamic Manifest & Brownfield Upgrade System
+ */
+
+const path = require('path');
+const fs = require('fs-extra');
+const os = require('os');
+const {
+  generateManifest,
+  getFileType,
+  scanDirectory,
+  FOLDERS_TO_COPY,
+  ROOT_FILES_TO_COPY,
+} = require('../../scripts/generate-install-manifest');
+
+describe('generate-install-manifest', () => {
+  describe('FOLDERS_TO_COPY', () => {
+    it('should include v2.1 modular structure folders', () => {
+      expect(FOLDERS_TO_COPY).toContain('core');
+      expect(FOLDERS_TO_COPY).toContain('development');
+      expect(FOLDERS_TO_COPY).toContain('product');
+      expect(FOLDERS_TO_COPY).toContain('infrastructure');
+    });
+
+    it('should include v2.0 legacy folders', () => {
+      expect(FOLDERS_TO_COPY).toContain('agents');
+      expect(FOLDERS_TO_COPY).toContain('tasks');
+      expect(FOLDERS_TO_COPY).toContain('templates');
+      expect(FOLDERS_TO_COPY).toContain('workflows');
+    });
+  });
+
+  describe('ROOT_FILES_TO_COPY', () => {
+    it('should include essential root files', () => {
+      expect(ROOT_FILES_TO_COPY).toContain('index.js');
+      expect(ROOT_FILES_TO_COPY).toContain('core-config.yaml');
+    });
+  });
+
+  describe('getFileType', () => {
+    it('should identify agent files', () => {
+      expect(getFileType('development/agents/dev.md')).toBe('agent');
+      expect(getFileType('agents/pm.md')).toBe('agent');
+    });
+
+    it('should identify task files', () => {
+      expect(getFileType('development/tasks/create-story.md')).toBe('task');
+      expect(getFileType('tasks/validate.md')).toBe('task');
+    });
+
+    it('should identify workflow files', () => {
+      expect(getFileType('development/workflows/deploy.yaml')).toBe('workflow');
+      expect(getFileType('workflows/test.yaml')).toBe('workflow');
+    });
+
+    it('should identify template files', () => {
+      expect(getFileType('product/templates/story.md')).toBe('template');
+      expect(getFileType('templates/readme.md')).toBe('template');
+    });
+
+    it('should identify checklist files', () => {
+      expect(getFileType('product/checklists/deploy.md')).toBe('checklist');
+      expect(getFileType('checklists/qa.md')).toBe('checklist');
+    });
+
+    it('should identify code files', () => {
+      expect(getFileType('index.js')).toBe('code');
+      expect(getFileType('utils.ts')).toBe('code');
+    });
+
+    it('should identify config files', () => {
+      expect(getFileType('config.yaml')).toBe('config');
+      expect(getFileType('settings.yml')).toBe('config');
+    });
+
+    it('should identify documentation files', () => {
+      expect(getFileType('readme.md')).toBe('documentation');
+      expect(getFileType('docs/guide.md')).toBe('documentation');
+    });
+
+    it('should handle Windows-style paths', () => {
+      expect(getFileType('development\\agents\\dev.md')).toBe('agent');
+    });
+  });
+
+  describe('scanDirectory', () => {
+    let tempDir;
+
+    beforeAll(() => {
+      tempDir = path.join(os.tmpdir(), 'scan-test-' + Date.now());
+      fs.ensureDirSync(tempDir);
+
+      // Create test structure
+      fs.ensureDirSync(path.join(tempDir, 'subdir'));
+      fs.writeFileSync(path.join(tempDir, 'file1.txt'), 'content1');
+      fs.writeFileSync(path.join(tempDir, 'file2.md'), 'content2');
+      fs.writeFileSync(path.join(tempDir, 'subdir', 'nested.js'), 'content3');
+    });
+
+    afterAll(() => {
+      fs.removeSync(tempDir);
+    });
+
+    it('should find all files recursively', () => {
+      const files = scanDirectory(tempDir, tempDir);
+      expect(files.length).toBe(3);
+    });
+
+    it('should return absolute paths', () => {
+      const files = scanDirectory(tempDir, tempDir);
+      files.forEach(file => {
+        expect(path.isAbsolute(file)).toBe(true);
+      });
+    });
+
+    it('should return empty array for non-existent directory', () => {
+      const files = scanDirectory(path.join(tempDir, 'nonexistent'), tempDir);
+      expect(files).toEqual([]);
+    });
+
+    it('should exclude node_modules', () => {
+      const nodeModulesDir = path.join(tempDir, 'node_modules');
+      fs.ensureDirSync(nodeModulesDir);
+      fs.writeFileSync(path.join(nodeModulesDir, 'package.json'), '{}');
+
+      const files = scanDirectory(tempDir, tempDir);
+      const hasNodeModules = files.some(f => f.includes('node_modules'));
+      expect(hasNodeModules).toBe(false);
+
+      fs.removeSync(nodeModulesDir);
+    });
+  });
+
+  describe('generateManifest', () => {
+    it('should generate valid manifest structure', async () => {
+      const manifest = await generateManifest();
+
+      expect(manifest).toHaveProperty('version');
+      expect(manifest).toHaveProperty('generated_at');
+      expect(manifest).toHaveProperty('generator');
+      expect(manifest).toHaveProperty('file_count');
+      expect(manifest).toHaveProperty('files');
+      expect(Array.isArray(manifest.files)).toBe(true);
+    });
+
+    it('should include file metadata', async () => {
+      const manifest = await generateManifest();
+
+      expect(manifest.files.length).toBeGreaterThan(0);
+
+      const sampleFile = manifest.files[0];
+      expect(sampleFile).toHaveProperty('path');
+      expect(sampleFile).toHaveProperty('hash');
+      expect(sampleFile).toHaveProperty('type');
+      expect(sampleFile).toHaveProperty('size');
+    });
+
+    it('should have hash with sha256 prefix', async () => {
+      const manifest = await generateManifest();
+      const sampleFile = manifest.files[0];
+
+      expect(sampleFile.hash).toMatch(/^sha256:[a-f0-9]{64}$/);
+    });
+
+    it('should use forward slashes in paths', async () => {
+      const manifest = await generateManifest();
+
+      manifest.files.forEach(file => {
+        expect(file.path).not.toContain('\\');
+      });
+    });
+
+    it('should have files in consistent order', async () => {
+      const manifest = await generateManifest();
+      const paths = manifest.files.map(f => f.path);
+
+      // Verify there are no duplicates (consistent ordering requirement)
+      const uniquePaths = new Set(paths);
+      expect(uniquePaths.size).toBe(paths.length);
+
+      // Run twice and verify same order
+      const manifest2 = await generateManifest();
+      const paths2 = manifest2.files.map(f => f.path);
+      expect(paths).toEqual(paths2);
+    });
+  });
+});
+
+```
+
+==================================================
+📄 tests/installer/post-install-validator.test.js
+==================================================
+```js
+/**
+ * Post-Installation Validator Security Tests
+ *
+ * @module tests/installer/post-install-validator.test.js
+ * @story 6.19 - Post-Installation Validation & Integrity Verification
+ *
+ * These tests verify security-critical behavior:
+ * - Signature verification
+ * - Path traversal prevention
+ * - Symlink rejection
+ * - Safe repair operations
+ * - Quick mode safety
+ */
+
+'use strict';
+
+const path = require('path');
+const fs = require('fs-extra');
+const os = require('os');
+const {
+  PostInstallValidator,
+  isPathContained,
+  validateManifestEntry,
+  IssueType,
+  Severity,
+  SecurityLimits,
+} = require('../../packages/installer/src/installer/post-install-validator');
+
+describe('PostInstallValidator Security Tests', () => {
+  let testDir;
+  let targetDir;
+  let sourceDir;
+
+  beforeEach(async () => {
+    // Create isolated test directory
+    testDir = path.join(os.tmpdir(), `aios-validator-test-${Date.now()}`);
+    targetDir = path.join(testDir, 'target');
+    sourceDir = path.join(testDir, 'source');
+
+    await fs.ensureDir(path.join(targetDir, '.aios-core'));
+    await fs.ensureDir(path.join(sourceDir, '.aios-core'));
+  });
+
+  afterEach(async () => {
+    // Cleanup
+    if (testDir && fs.existsSync(testDir)) {
+      await fs.remove(testDir);
+    }
+  });
+
+  describe('Path Containment (isPathContained)', () => {
+    test('should allow paths within root', () => {
+      expect(isPathContained('/root/dir/file.txt', '/root/dir')).toBe(true);
+      expect(isPathContained('/root/dir/sub/file.txt',
+        '/root/dir')).toBe(true);
+    });
+
+    test('should reject path traversal with ..', () => {
+      const root = path.resolve('/root/dir');
+      const malicious = path.resolve('/root/dir/../etc/passwd');
+      expect(isPathContained(malicious, root)).toBe(false);
+    });
+
+    test('should reject paths outside root', () => {
+      expect(isPathContained('/etc/passwd', '/root/dir')).toBe(false);
+      expect(isPathContained('/root/other/file', '/root/dir')).toBe(false);
+    });
+
+    test('should handle Windows alternate data streams', () => {
+      // Alternate data streams should be rejected
+      expect(isPathContained('C:\\root\\file.txt:stream', 'C:\\root')).toBe(false);
+      expect(isPathContained('/root/file.txt:hidden', '/root')).toBe(false);
+    });
+
+    test('should allow root directory itself', () => {
+      expect(isPathContained('/root/dir', '/root/dir')).toBe(true);
+    });
+
+    if (process.platform === 'win32') {
+      test('should handle Windows case-insensitivity', () => {
+        expect(isPathContained('C:\\Root\\Dir\\file.txt', 'c:\\root\\dir')).toBe(true);
+        expect(isPathContained('c:\\ROOT\\DIR\\FILE.TXT', 'C:\\root\\dir')).toBe(true);
+      });
+    }
+  });
+
+  describe('Manifest Entry Validation (validateManifestEntry)', () => {
+    test('should accept valid entry', () => {
+      const result = validateManifestEntry(
+        {
+          path: 'core/config.js',
+          hash: 'sha256:' + 'a'.repeat(64),
+          size: 1234,
+        },
+        0,
+      );
+      expect(result.valid).toBe(true);
+      expect(result.sanitized.path).toBe('core/config.js');
+    });
+
+    test('should reject unknown fields', () => {
+      const result = validateManifestEntry(
+        {
+          path: 'file.txt',
+          hash: 'sha256:' + 'a'.repeat(64),
+          malicious: 'payload',
+        },
+        0,
+      );
+      expect(result.valid).toBe(false);
+      expect(result.error).toContain("unknown field 'malicious'");
+    });
+
+    test('should reject path traversal in entry', () => {
+      const result = validateManifestEntry({ path: '../../../etc/passwd' }, 0);
+      expect(result.valid).toBe(false);
+      expect(result.error).toContain('..');
+    });
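+
+    // The containment semantics exercised above can be sketched roughly as
+    // follows (an assumption about the implementation, not its actual source;
+    // isPathContainedSketch is a hypothetical name):
+    //
+    //   function isPathContainedSketch(target, root) {
+    //     const rel = path.relative(path.resolve(root), path.resolve(target));
+    //     if (rel.includes(':')) return false; // reject NTFS alternate data streams
+    //     return rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel));
+    //   }
+    //
+    // path.relative() normalizes '..' segments, so any escape from the root
+    // surfaces as a leading '..' in the relative path, which is why the
+    // traversal test above can rely on path.resolve for its fixture.
+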
+    test('should reject null bytes in path', () => {
+      const result = validateManifestEntry({ path: 'file\x00.txt' }, 0);
+      expect(result.valid).toBe(false);
+      expect(result.error).toContain('null byte');
+    });
+
+    test('should reject absolute paths', () => {
+      const result = validateManifestEntry({ path: '/etc/passwd' }, 0);
+      expect(result.valid).toBe(false);
+      expect(result.error).toContain('absolute path');
+    });
+
+    test('should reject excessively long paths', () => {
+      const longPath = 'a'.repeat(SecurityLimits.MAX_PATH_LENGTH + 1);
+      const result = validateManifestEntry({ path: longPath }, 0);
+      expect(result.valid).toBe(false);
+      expect(result.error).toContain('maximum length');
+    });
+
+    test('should reject invalid hash format', () => {
+      const result = validateManifestEntry({ path: 'file.txt', hash: 'md5:invalidhash' }, 0);
+      expect(result.valid).toBe(false);
+      expect(result.error).toContain('invalid hash format');
+    });
+
+    test('should reject negative size', () => {
+      const result = validateManifestEntry({ path: 'file.txt', size: -1 }, 0);
+      expect(result.valid).toBe(false);
+      expect(result.error).toContain('non-negative integer');
+    });
+
+    test('should reject unknown type values', () => {
+      const result = validateManifestEntry({ path: 'dir/', type: 'directory' }, 0);
+      expect(result.valid).toBe(false);
+      expect(result.error).toContain("invalid type 'directory'");
+    });
+
+    test('should reject arrays as entries', () => {
+      const result = validateManifestEntry(['not', 'an', 'object'], 0);
+      expect(result.valid).toBe(false);
+      expect(result.error).toContain('not an object');
+    });
+  });
+
+  describe('Symlink Rejection', () => {
+    test('should reject symlinks during validation', async () => {
+      // Create a regular file and a symlink to it
+      const realFile = path.join(targetDir, '.aios-core', 'real.txt');
+      const symlink = path.join(targetDir, '.aios-core', 'link.txt');
+
+      await fs.writeFile(realFile, 'content');
+
+      // Create symlink (skip on Windows if not admin)
+      try {
+        await fs.symlink(realFile, symlink);
+      } catch (e) {
+        // Skip test on Windows without admin privileges
+        if (e.code === 'EPERM') {
+          console.log('Skipping symlink test - requires admin on Windows');
+          return;
+        }
+        throw e;
+      }
+
+      // Create manifest pointing to the symlink
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        'version: "1.0.0"\nfiles:\n  - path: link.txt',
+      );
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+        verifyHashes: false,
+      });
+
+      const report = await validator.validate();
+
+      // Should have a symlink rejection issue
+      const symlinkIssue = report.issues.find((i) => i.type === IssueType.SYMLINK_REJECTED);
+      expect(symlinkIssue).toBeDefined();
+      expect(symlinkIssue.severity).toBe(Severity.CRITICAL);
+    });
+  });
+
+  describe('Signature Verification', () => {
+    test('should fail when signature is required but missing', async () => {
+      // Create manifest without signature
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        'version: "1.0.0"\nfiles:\n  - path: test.txt',
+      );
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: true, // Require signature
+      });
+
+      const report = await validator.validate();
+
+      expect(report.status).toBe('failed');
+      const sigIssue = report.issues.find(
+        (i) => i.type === IssueType.SIGNATURE_MISSING || i.type === IssueType.SIGNATURE_INVALID,
+      );
+      expect(sigIssue).toBeDefined();
+      expect(sigIssue.severity).toBe(Severity.CRITICAL);
+    });
+
+    test('should allow validation without signature in dev mode', async () => {
+      // Create valid manifest and file
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        'version: "1.0.0"\nfiles:\n  - path: test.txt\n    size: 4',
+      );
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'test.txt'), 'test');
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false, // Dev mode
+        verifyHashes: false,
+      });
+
+      const report = await validator.validate();
+
+      // Should succeed without signature in dev mode
+      expect(report.manifestVerified).toBe(false);
+      expect(report.status).not.toBe('failed');
+    });
+  });
+
+  describe('Quick Mode Safety (H2)', () => {
+    test('should fail when size is missing in quick mode', async () => {
+      // Create manifest without size
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        'version: "1.0.0"\nfiles:\n  - path: test.txt',
+      );
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'test.txt'), 'content');
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+        verifyHashes: false, // Quick mode
+      });
+
+      const report = await validator.validate();
+
+      const schemaIssue = report.issues.find((i) => i.type === IssueType.SCHEMA_VIOLATION);
+      expect(schemaIssue).toBeDefined();
+      expect(schemaIssue.message).toContain('Missing size');
+    });
+
+    test('should fail on size mismatch in quick mode', async () => {
+      // Create manifest with wrong size
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        'version: "1.0.0"\nfiles:\n  - path: test.txt\n    size: 999',
+      );
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'test.txt'), 'small');
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+        verifyHashes: false,
+      });
+
+      const report = await validator.validate();
+
+      const sizeIssue = report.issues.find((i) => i.type === IssueType.SIZE_MISMATCH);
+      expect(sizeIssue).toBeDefined();
+      expect(report.stats.corruptedFiles).toBe(1);
+    });
+  });
+
+  describe('Missing Hash Detection (H7)', () => {
+    test('should fail when hash is missing but verifyHashes is true', async () => {
+      // Create manifest without hash but with size
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        'version: "1.0.0"\nfiles:\n  - path: test.txt\n    size: 7',
+      );
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'test.txt'), 'content');
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+        verifyHashes: true, // Full validation mode
+      });
+
+      const report = await validator.validate();
+
+      // Should have a schema violation for the missing hash
+      const schemaIssue = report.issues.find((i) => i.type === IssueType.SCHEMA_VIOLATION);
+      expect(schemaIssue).toBeDefined();
+      expect(schemaIssue.message).toContain('Missing hash in manifest');
+      expect(schemaIssue.details).toContain('Hash verification enabled');
+      expect(report.stats.corruptedFiles).toBe(1);
+    });
+
+    test('should fail when hash is empty string but verifyHashes is true', async () => {
+      // Create manifest with an empty hash (YAML null becomes empty after FAILSAFE parsing).
+      // Using an explicit empty string to test falsy hash values.
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        'version: "1.0.0"\nfiles:\n  - path: test.txt\n    hash: ""\n    size: 7',
+      );
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'test.txt'), 'content');
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+        verifyHashes: true,
+      });
+
+      const report = await validator.validate();
+
+      // An empty hash should fail format validation during manifest parsing
+      const invalidIssue = report.issues.find((i) => i.type === IssueType.INVALID_MANIFEST);
+      expect(invalidIssue).toBeDefined();
+      expect(invalidIssue.details).toContain('invalid hash format');
+    });
+
+    test('should pass when hash is missing but verifyHashes is false (quick mode)', async () => {
+      // Create manifest without hash but with size
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        'version: "1.0.0"\nfiles:\n  - path: test.txt\n    size: 7',
+      );
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'test.txt'), 'content');
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+        verifyHashes: false, // Quick mode - hash not required
+      });
+
+      const report = await validator.validate();
+
+      // Should NOT have a schema violation for a missing hash in quick mode
+      const schemaIssue = report.issues.find(
+        (i) => i.type === IssueType.SCHEMA_VIOLATION && i.message.includes('Missing hash'),
+      );
+      expect(schemaIssue).toBeUndefined();
+      expect(report.stats.validFiles).toBe(1);
+    });
+  });
+
+  describe('Secure Repair (C4)', () => {
+    test('should refuse repair without hash verification', async () => {
+      const validator = new PostInstallValidator(targetDir, sourceDir, {
+        requireSignature: false,
+        verifyHashes: false, // Disabled
+      });
+
+      const result = await validator.repair();
+
+      expect(result.success).toBe(false);
+      expect(result.error).toContain('hash verification');
+    });
+
+    test('should refuse repair without verified manifest', async () => {
+      // Setup manifest
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        `version: "1.0.0"\nfiles:\n  - path: test.txt\n    hash: "sha256:${'a'.repeat(64)}"\n    size: 4`,
+      );
+
+      const validator = new PostInstallValidator(targetDir, sourceDir, {
+        requireSignature: true, // Requires signature
+        verifyHashes: true,
+      });
+
+      // Validate first (will fail due to missing signature)
+      await validator.validate();
+
+      // Try repair
+      const result = await validator.repair();
+
+      expect(result.success).toBe(false);
+      expect(result.error).toContain('verified manifest');
+    });
+
+    test('should verify source hash before copying', async () => {
+      // Create source file with different content than the manifest hash
+      const sourceFile = path.join(sourceDir, '.aios-core', 'test.txt');
+      await fs.writeFile(sourceFile, 'wrong content');
+
+      // Create manifest with a different hash
+      const manifest = `version: "1.0.0"
+files:
+  - path: test.txt
+    hash: "sha256:${'a'.repeat(64)}"
+    size: 13`;
+
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'install-manifest.yaml'), manifest);
+      await fs.writeFile(path.join(sourceDir, '.aios-core', 'install-manifest.yaml'), manifest);
+
+      const validator = new PostInstallValidator(targetDir, sourceDir, {
+        requireSignature: false, // For testing
+        verifyHashes: true,
+      });
+
+      // Manually set manifestVerified for testing
+      validator.manifestVerified = true;
+
+      // Validate (will find the missing file)
+      await validator.validate();
+
+      // Manually add a missing-file issue for repair
+      validator.issues = [
+        {
+          type: IssueType.MISSING_FILE,
+          relativePath: 'test.txt',
+          message: 'Missing file: test.txt',
+        },
+      ];
+      validator.manifest = {
+        files: [{ path: 'test.txt', hash: `sha256:${'a'.repeat(64)}`, size: 13 }],
+      };
+
+      const result = await validator.repair();
+
+      // Should fail because the source hash doesn't match the manifest
+      expect(result.success).toBe(false);
+      const failedItem = result.failed.find((f) => f.path === 'test.txt');
+      expect(failedItem).toBeDefined();
+      expect(failedItem.reason).toContain('hash does not match');
+    });
+  });
+
+  describe('Hash Error Handling (H3)', () => {
+    test('should treat hash errors as failures', async () => {
+      // Create a path that will cause a hash error (a directory instead of a file)
+      const dirPath = path.join(targetDir, '.aios-core', 'notafile');
+      await fs.ensureDir(dirPath);
+
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        `version: "1.0.0"\nfiles:\n  - path: notafile\n    hash: "sha256:${'a'.repeat(64)}"\n    size: 0`,
+      );
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+        verifyHashes: true,
+      });
+
+      const report = await validator.validate();
+
+      // Should be treated as an invalid path (directory, not file)
+      const issue = report.issues.find(
+        (i) => i.type === IssueType.INVALID_PATH || i.type === IssueType.HASH_ERROR,
+      );
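+      // The hashing step being exercised presumably looks something like this
+      // sketch (an assumption, not the module's source; hashFileSketch is a
+      // hypothetical helper):
+      //
+      //   async function hashFileSketch(filePath) {
+      //     const stat = await fs.lstat(filePath);
+      //     if (!stat.isFile()) throw new Error('INVALID_PATH'); // dirs, symlinks
+      //     const data = await fs.readFile(filePath);
+      //     return 'sha256:' + crypto.createHash('sha256').update(data).digest('hex');
+      //   }
+      //
+      // The disjunction below is deliberate: depending on where the problem is
+      // detected, a directory may surface as INVALID_PATH or HASH_ERROR, and
+      // either must count as a failure, never a silent pass.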
+      expect(issue).toBeDefined();
+    });
+  });
+
+  describe('DoS Protection (H6)', () => {
+    test('should enforce max file count in manifest', async () => {
+      // Create manifest with too many files
+      const files = [];
+      for (let i = 0; i < SecurityLimits.MAX_FILE_COUNT + 1; i++) {
+        files.push(`  - path: file${i}.txt`);
+      }
+
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        `version: "1.0.0"\nfiles:\n${files.join('\n')}`,
+      );
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+      });
+
+      const report = await validator.validate();
+
+      expect(report.status).toBe('failed');
+      const manifestIssue = report.issues.find((i) => i.type === IssueType.INVALID_MANIFEST);
+      expect(manifestIssue).toBeDefined();
+      expect(manifestIssue.details).toContain('too many files');
+    });
+
+    test('should enforce max manifest size', async () => {
+      // Create oversized manifest
+      const bigContent = 'a'.repeat(SecurityLimits.MAX_MANIFEST_SIZE + 1);
+
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'install-manifest.yaml'), bigContent);
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+      });
+
+      const report = await validator.validate();
+
+      expect(report.status).toBe('failed');
+    });
+
+    test('should use byte length not character length for size check (DOS-4)', async () => {
+      // Create content with multibyte characters.
+      // Each emoji is 4 bytes in UTF-8 but only 2 characters in a JS string:
+      // 🔒 = 4 bytes, but "🔒".length = 2 (surrogate pair).
+      const emojiCount = Math.floor(SecurityLimits.MAX_MANIFEST_SIZE / 4) + 1000;
+      const emojiContent = '🔒'.repeat(emojiCount);
+
+      // Verify the test setup: character count is below the byte limit...
+      expect(emojiContent.length).toBeLessThan(SecurityLimits.MAX_MANIFEST_SIZE);
+      // ...but byte count exceeds it
+      expect(Buffer.byteLength(emojiContent, 'utf8')).toBeGreaterThan(
+        SecurityLimits.MAX_MANIFEST_SIZE,
+      );
+
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'install-manifest.yaml'), emojiContent);
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+      });
+
+      const report = await validator.validate();
+
+      // Should fail because the byte size exceeds the limit, even though the char count doesn't
+      expect(report.status).toBe('failed');
+      const manifestIssue = report.issues.find((i) => i.type === IssueType.INVALID_MANIFEST);
+      expect(manifestIssue).toBeDefined();
+      expect(manifestIssue.details).toContain('bytes');
+    });
+
+    test('should check file size before reading (DOS-3)', async () => {
+      // This test verifies that the pre-read size check works:
+      // create an oversized file and ensure it's rejected before a full read.
+      const bigContent = 'x'.repeat(SecurityLimits.MAX_MANIFEST_SIZE + 100);
+
+      await fs.writeFile(path.join(targetDir, '.aios-core', 'install-manifest.yaml'), bigContent);
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+      });
+
+      const report = await validator.validate();
+
+      expect(report.status).toBe('failed');
+      const manifestIssue = report.issues.find((i) => i.type === IssueType.INVALID_MANIFEST);
+      expect(manifestIssue).toBeDefined();
+      expect(manifestIssue.message).toContain('exceeds maximum size');
+    });
+  });
+
+  describe('Issue Model (H4)', () => {
+    test('should store relativePath in issue objects', async () => {
+      await fs.writeFile(
+        path.join(targetDir, '.aios-core', 'install-manifest.yaml'),
+        'version: "1.0.0"\nfiles:\n  - path: missing.txt\n    size: 10',
+      );
+
+      const validator = new PostInstallValidator(targetDir, null, {
+        requireSignature: false,
+        verifyHashes: false,
+      });
+
+      const report = await validator.validate();
+
+      const missingIssue = report.issues.find((i) => i.type === IssueType.MISSING_FILE);
+      expect(missingIssue).toBeDefined();
+      expect(missingIssue.relativePath).toBe('missing.txt');
+    });
+  });
+});
+
+describe('Manifest Signature Module', () => {
+  const {
+    parseMinisignSignature,
+    verifyManifestSignature,
+  } = require('../../packages/installer/src/installer/manifest-signature');
+
+  test('should parse valid minisign signature format', () => {
+    // A minisign signature blob must be at least 74 bytes:
+    // 2 bytes algorithm + 8 bytes key ID + 64 bytes Ed25519 signature = 74 bytes.
+    // Base64 of 74 bytes = ceil(74/3)*4 = 100 characters.
+    const sigBlob = Buffer.alloc(74);
+    sigBlob.write('Ed', 0, 2, 'ascii'); // Algorithm: 'Ed' for pure Ed25519
+    sigBlob.fill(0x41, 2, 10); // Key ID: 8 bytes of 'A'
+    sigBlob.fill(0x42, 10, 74); // Signature: 64 bytes of 'B'
+    const sigBase64 = sigBlob.toString('base64');
+
+    const validSig = `untrusted comment: signature from minisign
+${sigBase64}
+trusted comment: timestamp:1234567890
+QRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789+/==`;
+
+    expect(() => parseMinisignSignature(validSig)).not.toThrow();
+  });
+
+  test('should reject signature without untrusted comment', () => {
+    const invalidSig = `not a valid comment
+RWQBla1234567890`;
+
+    expect(() => parseMinisignSignature(invalidSig)).toThrow('missing untrusted comment');
+  });
+
+  test('should reject signature with insufficient lines', () => {
+    const invalidSig = 'untrusted comment: only one line';
+
+    expect(() => parseMinisignSignature(invalidSig)).toThrow('insufficient lines');
+  });
+
+  test('should reject signature that is too short', () => {
+    const invalidSig = `untrusted comment: test
+short`;
+
+    expect(() => parseMinisignSignature(invalidSig)).toThrow('signature too short');
+  });
+});
+
+describe('Manifest Signature DoS Protection', () => {
+  const {
+    loadAndVerifyManifest,
+    SignatureLimits,
+  } = require('../../packages/installer/src/installer/manifest-signature');
+
+  let testDir;
+
+  beforeEach(async () => {
+    testDir = path.join(os.tmpdir(), `sig-dos-test-${Date.now()}-${Math.random().toString(36)}`);
+    await fs.ensureDir(testDir);
+  });
+
+  afterEach(async () => {
+    await fs.remove(testDir);
+  });
+
+  test('should reject oversized manifest file before reading (DOS-1)', async () => {
+    const manifestPath = path.join(testDir, 'install-manifest.yaml');
+
+    // Create oversized manifest file
+    const bigContent = 'x'.repeat(SignatureLimits.MAX_MANIFEST_SIZE + 1000);
+    await fs.writeFile(manifestPath, bigContent);
+
+    const result = loadAndVerifyManifest(manifestPath, { requireSignature: false });
+
+    expect(result.error).toContain('exceeds maximum size');
+    expect(result.content).toBeNull();
+  });
+
+  test('should reject oversized signature file before reading (DOS-2)', async () => {
+    const manifestPath = path.join(testDir, 'install-manifest.yaml');
+    const sigPath = manifestPath + '.minisig';
+
+    // Create valid-sized manifest
+    await fs.writeFile(manifestPath, 'version: "1.0.0"\nfiles: []');
+
+    // Create oversized signature file
+    const bigSig = 'x'.repeat(SignatureLimits.MAX_SIGNATURE_SIZE + 1000);
+    await fs.writeFile(sigPath, bigSig);
+
+    const result = loadAndVerifyManifest(manifestPath, { requireSignature: true });
+
+    expect(result.error).toContain('Signature file exceeds maximum size');
+    expect(result.content).toBeNull();
+  });
+
+  test('should allow valid-sized manifest file', async () => {
+    const manifestPath = path.join(testDir, 'install-manifest.yaml');
+
+    // Create normal manifest
+    await fs.writeFile(manifestPath, 'version: "1.0.0"\nfiles: []');
+
+    const result = loadAndVerifyManifest(manifestPath, { requireSignature: false });
+
+    // Should succeed (no signature required)
+    expect(result.error).toBeNull();
+    expect(result.content).not.toBeNull();
+  });
+});
+
+```
+
+==================================================
+📄 tests/installer/dependency-installer.test.js
+==================================================
+```js
+/**
+ * Tests for Dependency Installer Module
+ *
+ * Story 1.7: Dependency Installation
+ * Comprehensive test coverage for package manager detection, installation,
+ * retry logic, and offline mode.
+ */
+
+const fs = require('fs');
+const { spawn } = require('child_process');
+const {
+  detectPackageManager,
+  validatePackageManager,
+  hasExistingDependencies,
+  installDependencies,
+  executeInstall,
+  categorizeError,
+  installWithRetry,
+} = require('../../packages/installer/src/installer/dependency-installer');
+
+// Mock dependencies
+jest.mock('fs');
+jest.mock('child_process');
+jest.mock('ora', () => {
+  return jest.fn(() => ({
+    start: jest.fn().mockReturnThis(),
+    succeed: jest.fn().mockReturnThis(),
+    fail: jest.fn().mockReturnThis(),
+  }));
+});
+
+describe('Dependency Installer', () => {
+  let consoleLogSpy;
+
+  beforeEach(() => {
+    jest.clearAllMocks();
+    consoleLogSpy = jest.spyOn(console, 'log').mockImplementation();
+  });
+
+  afterEach(() => {
+    consoleLogSpy.mockRestore();
+  });
+
+  describe('detectPackageManager (AC1)', () => {
+    it('should detect bun from bun.lockb', () => {
+      fs.existsSync.mockImplementation((filePath) => {
+        return filePath.endsWith('bun.lockb');
+      });
+
+      const pm = detectPackageManager('/test/project');
+      expect(pm).toBe('bun');
+    });
+
+    it('should detect pnpm from pnpm-lock.yaml', () => {
+      fs.existsSync.mockImplementation((filePath) => {
+        return filePath.endsWith('pnpm-lock.yaml');
+      });
+
+      const pm = detectPackageManager('/test/project');
+      expect(pm).toBe('pnpm');
+    });
+
+    it('should detect yarn from yarn.lock', () => {
+      fs.existsSync.mockImplementation((filePath) => {
+        return filePath.endsWith('yarn.lock');
+      });
+
+      const pm = detectPackageManager('/test/project');
+      expect(pm).toBe('yarn');
+    });
+
+    it('should detect npm from package-lock.json', () => {
+      fs.existsSync.mockImplementation((filePath) => {
+        return filePath.endsWith('package-lock.json');
+      });
+
+      const pm = detectPackageManager('/test/project');
+      expect(pm).toBe('npm');
+    });
+
+    it('should fallback to npm when no lock file exists', () => {
+      fs.existsSync.mockReturnValue(false);
+
+      const pm = detectPackageManager('/test/project');
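+      // Assumed detection order (a sketch of the contract under test, not the
+      // module's source): the first matching lock file wins, npm is the fallback.
+      //
+      //   const LOCK_FILES = [
+      //     ['bun.lockb', 'bun'],
+      //     ['pnpm-lock.yaml', 'pnpm'],
+      //     ['yarn.lock', 'yarn'],
+      //     ['package-lock.json', 'npm'],
+      //   ];
+      //   for (const [lockFile, pm] of LOCK_FILES) {
+      //     if (fs.existsSync(path.join(projectPath, lockFile))) return pm;
+      //   }
+      //   return 'npm'; // no lock file -> fallback exercised by this test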
expect(pm).toBe('npm'); + }); + + it('should respect priority order (bun > pnpm > yarn > npm)', () => { + fs.existsSync.mockImplementation((filePath) => { + // Both pnpm and npm lock files exist + return filePath.endsWith('pnpm-lock.yaml') || filePath.endsWith('package-lock.json'); + }); + + const pm = detectPackageManager('/test/project'); + expect(pm).toBe('pnpm'); // pnpm has higher priority + }); + }); + + describe('validatePackageManager (Security - AC1)', () => { + it('should accept npm', () => { + expect(() => validatePackageManager('npm')).not.toThrow(); + }); + + it('should accept yarn', () => { + expect(() => validatePackageManager('yarn')).not.toThrow(); + }); + + it('should accept pnpm', () => { + expect(() => validatePackageManager('pnpm')).not.toThrow(); + }); + + it('should accept bun', () => { + expect(() => validatePackageManager('bun')).not.toThrow(); + }); + + it('should reject invalid package manager (command injection prevention)', () => { + expect(() => validatePackageManager('malicious')).toThrow('Invalid package manager'); + expect(() => validatePackageManager('rm -rf /')).toThrow('Invalid package manager'); + expect(() => validatePackageManager('npm && curl')).toThrow('Invalid package manager'); + }); + }); + + describe('hasExistingDependencies (AC6 - Offline Mode)', () => { + it('should return false when node_modules does not exist', () => { + fs.existsSync.mockReturnValue(false); + + const result = hasExistingDependencies('/test/project'); + expect(result).toBe(false); + }); + + it('should return false when node_modules is empty', () => { + fs.existsSync.mockReturnValue(true); + fs.readdirSync.mockReturnValue([]); + + const result = hasExistingDependencies('/test/project'); + expect(result).toBe(false); + }); + + it('should return false when node_modules only has hidden files', () => { + fs.existsSync.mockReturnValue(true); + fs.readdirSync.mockReturnValue(['.bin', '.cache']); + + const result = hasExistingDependencies('/test/project'); + 
expect(result).toBe(false); + }); + + it('should return true when node_modules has packages', () => { + fs.existsSync.mockReturnValue(true); + fs.readdirSync.mockReturnValue(['.bin', 'lodash', 'express', 'react']); + + const result = hasExistingDependencies('/test/project'); + expect(result).toBe(true); + }); + + it('should return false on readdir error', () => { + fs.existsSync.mockReturnValue(true); + fs.readdirSync.mockImplementation(() => { + throw new Error('Permission denied'); + }); + + const result = hasExistingDependencies('/test/project'); + expect(result).toBe(false); + }); + }); + + describe('executeInstall (AC2 - Spawn Security)', () => { + it('should spawn package manager with correct args', async () => { + const mockChild = { + on: jest.fn((event, callback) => { + if (event === 'close') { + setTimeout(() => callback(0), 10); + } + }), + }; + spawn.mockReturnValue(mockChild); + + await executeInstall('npm', '/test/project'); + + // Windows requires shell: true because npm is actually npm.cmd + // Unix can use shell: false for better security + const isWindows = process.platform === 'win32'; + expect(spawn).toHaveBeenCalledWith('npm', ['install'], { + cwd: '/test/project', + stdio: 'inherit', + shell: isWindows, // Windows needs shell, Unix doesn't + }); + }); + + it('should resolve with success on exit code 0', async () => { + const mockChild = { + on: jest.fn((event, callback) => { + if (event === 'close') { + setTimeout(() => callback(0), 10); + } + }), + }; + spawn.mockReturnValue(mockChild); + + const result = await executeInstall('npm'); + expect(result.success).toBe(true); + expect(result.exitCode).toBe(0); + }); + + it('should resolve with error on non-zero exit code', async () => { + const mockChild = { + on: jest.fn((event, callback) => { + if (event === 'close') { + setTimeout(() => callback(1), 10); + } + }), + }; + spawn.mockReturnValue(mockChild); + + const result = await executeInstall('npm'); + expect(result.success).toBe(false); + 
expect(result.exitCode).toBe(1); + }); + + it('should handle spawn errors', async () => { + const mockChild = { + on: jest.fn((event, callback) => { + if (event === 'error') { + setTimeout(() => callback(new Error('ENOENT')), 10); + } + }), + }; + spawn.mockReturnValue(mockChild); + + const result = await executeInstall('npm'); + expect(result.success).toBe(false); + expect(result.error).toContain('ENOENT'); + }); + }); + + describe('categorizeError (AC4 - Error Handling)', () => { + it('should detect network errors', () => { + const error1 = categorizeError(new Error('ENOTFOUND registry.npmjs.org')); + expect(error1.category).toBe('network'); + expect(error1.solution).toContain('internet connection'); + + const error2 = categorizeError('ETIMEDOUT'); + expect(error2.category).toBe('network'); + }); + + it('should detect permission errors', () => { + const error = categorizeError(new Error('EACCES: permission denied')); + expect(error.category).toBe('permission'); + expect(error.solution).toContain('elevated permissions'); + }); + + it('should detect disk space errors', () => { + const error = categorizeError(new Error('ENOSPC: no space left')); + expect(error.category).toBe('diskspace'); + expect(error.solution).toContain('disk space'); + }); + + it('should handle unknown errors', () => { + const error = categorizeError(new Error('Something weird happened')); + expect(error.category).toBe('unknown'); + }); + }); + + describe('installWithRetry (AC5 - Retry Logic)', () => { + it('should return immediately on success', async () => { + const mockChild = { + on: jest.fn((event, callback) => { + if (event === 'close') setTimeout(() => callback(0), 10); + }), + }; + spawn.mockReturnValue(mockChild); + + const result = await installWithRetry('npm', '/test/project', 3, 1); + expect(result.success).toBe(true); + expect(spawn).toHaveBeenCalledTimes(1); + }); + + it('should retry on failure', async () => { + let attempts = 0; + spawn.mockImplementation(() => { + attempts++; + 
return { + on: jest.fn((event, callback) => { + if (event === 'close') { + // First attempt fails, second succeeds + setTimeout(() => callback(attempts < 2 ? 1 : 0), 10); + } + }), + }; + }); + + // Use fake timers for faster test + jest.useFakeTimers(); + + const promise = installWithRetry('npm', '/test/project', 3, 1); + + // Run all timers and wait for promises + jest.runAllTimers(); + + // Restore real timers before awaiting + jest.useRealTimers(); + + const result = await promise; + + expect(spawn).toHaveBeenCalledTimes(2); // First fail, then success + expect(result.success).toBe(true); + }, 15000); + + it('should stop after maxRetries', async () => { + const mockChild = { + on: jest.fn((event, callback) => { + if (event === 'close') setTimeout(() => callback(1), 10); + }), + }; + spawn.mockReturnValue(mockChild); + + jest.useFakeTimers(); + const promise = installWithRetry('npm', '/test/project', 3, 3); + jest.runAllTimers(); + + const result = await promise; + expect(result.success).toBe(false); + jest.useRealTimers(); + }); + }); + + describe('installDependencies (Full Integration)', () => { + beforeEach(() => { + fs.existsSync.mockImplementation((filePath) => { + if (filePath.endsWith('package-lock.json')) return true; + if (filePath.endsWith('node_modules')) return false; + return false; + }); + }); + + it('should detect package manager and install (AC1 + AC2)', async () => { + const mockChild = { + on: jest.fn((event, callback) => { + if (event === 'close') setTimeout(() => callback(0), 10); + }), + }; + spawn.mockReturnValue(mockChild); + + const result = await installDependencies({ + projectPath: '/test/project', + }); + + expect(result.success).toBe(true); + expect(result.packageManager).toBe('npm'); + }); + + it('should use provided package manager override', async () => { + const mockChild = { + on: jest.fn((event, callback) => { + if (event === 'close') setTimeout(() => callback(0), 10); + }), + }; + spawn.mockReturnValue(mockChild); + + await 
installDependencies({ + packageManager: 'yarn', + projectPath: '/test/project', + }); + + expect(spawn).toHaveBeenCalledWith('yarn', ['install'], expect.any(Object)); + }); + + it('should skip installation in offline mode (AC6)', async () => { + fs.existsSync.mockImplementation(() => { + return true; // Both lock file and node_modules exist + }); + fs.readdirSync.mockReturnValue(['lodash', 'express']); + + const result = await installDependencies({ + projectPath: '/test/project', + }); + + expect(result.success).toBe(true); + expect(result.offlineMode).toBe(true); + expect(spawn).not.toHaveBeenCalled(); + expect(consoleLogSpy).toHaveBeenCalledWith( + expect.stringContaining('offline mode'), + ); + }); + + it('should reject invalid package manager', async () => { + const result = await installDependencies({ + packageManager: 'malicious-pm', + projectPath: '/test/project', + }); + + expect(result.success).toBe(false); + expect(result.error).toContain('Invalid package manager'); + }); + + it('should return error info on installation failure (AC4)', async () => { + const mockChild = { + on: jest.fn((event, callback) => { + if (event === 'close') { + setTimeout(() => callback(1), 10); + } + }), + }; + spawn.mockReturnValue(mockChild); + + const result = await installDependencies({ + projectPath: '/test/project', + skipRetry: true, + }); + + expect(result.success).toBe(false); + expect(result.errorCategory).toBeDefined(); + expect(result.solution).toBeDefined(); + }); + }); +}); + +``` + +================================================== +📄 tests/installer/write-claude-settings.test.js +================================================== +```js +/** + * Tests for writeClaudeSettings and getExistingLanguage (Story ACT-12) + * + * Test Coverage: + * - writeClaudeSettings creates .claude/settings.json with language + * - writeClaudeSettings merges into existing settings.json + * - writeClaudeSettings preserves other settings + * - writeClaudeSettings handles missing .claude 
directory + * - writeClaudeSettings maps language codes to Claude Code names + * - getExistingLanguage reads language from settings.json + * - getExistingLanguage returns null when no settings exist + */ + +const path = require('path'); +const fse = require('fs-extra'); +const os = require('os'); + +// Import actual production functions via _testing export +const { _testing } = require('../../packages/installer/src/wizard/index'); +const { writeClaudeSettings, getExistingLanguage } = _testing; + +describe('ACT-12: writeClaudeSettings and getExistingLanguage', () => { + let tempDir; + + beforeEach(async () => { + tempDir = path.join(os.tmpdir(), `aios-test-settings-${Date.now()}`); + await fse.ensureDir(tempDir); + }); + + afterEach(async () => { + await fse.remove(tempDir); + }); + + describe('writeClaudeSettings', () => { + test('should create .claude/settings.json with language', async () => { + const result = await writeClaudeSettings('pt', tempDir); + + expect(result).toBe(true); + + const settingsPath = path.join(tempDir, '.claude', 'settings.json'); + const content = JSON.parse(await fse.readFile(settingsPath, 'utf8')); + + expect(content.language).toBe('portuguese'); + }); + + test('should map en to english', async () => { + await writeClaudeSettings('en', tempDir); + + const settingsPath = path.join(tempDir, '.claude', 'settings.json'); + const content = JSON.parse(await fse.readFile(settingsPath, 'utf8')); + + expect(content.language).toBe('english'); + }); + + test('should map es to spanish', async () => { + await writeClaudeSettings('es', tempDir); + + const settingsPath = path.join(tempDir, '.claude', 'settings.json'); + const content = JSON.parse(await fse.readFile(settingsPath, 'utf8')); + + expect(content.language).toBe('spanish'); + }); + + test('should merge into existing settings.json', async () => { + const claudeDir = path.join(tempDir, '.claude'); + await fse.ensureDir(claudeDir); + await fse.writeFile( + path.join(claudeDir, 'settings.json'), 
+ JSON.stringify({ permissions: { allow: ['Read'] }, theme: 'dark' }, null, 2) + '\n', + 'utf8', + ); + + const result = await writeClaudeSettings('pt', tempDir); + + expect(result).toBe(true); + + const content = JSON.parse( + await fse.readFile(path.join(claudeDir, 'settings.json'), 'utf8'), + ); + + expect(content.language).toBe('portuguese'); + expect(content.permissions).toEqual({ allow: ['Read'] }); + expect(content.theme).toBe('dark'); + }); + + test('should overwrite existing language value', async () => { + const claudeDir = path.join(tempDir, '.claude'); + await fse.ensureDir(claudeDir); + await fse.writeFile( + path.join(claudeDir, 'settings.json'), + JSON.stringify({ language: 'english' }, null, 2) + '\n', + 'utf8', + ); + + await writeClaudeSettings('es', tempDir); + + const content = JSON.parse( + await fse.readFile(path.join(claudeDir, 'settings.json'), 'utf8'), + ); + + expect(content.language).toBe('spanish'); + }); + + test('should create .claude directory if it does not exist', async () => { + const claudeDir = path.join(tempDir, '.claude'); + expect(await fse.pathExists(claudeDir)).toBe(false); + + await writeClaudeSettings('pt', tempDir); + + expect(await fse.pathExists(claudeDir)).toBe(true); + expect(await fse.pathExists(path.join(claudeDir, 'settings.json'))).toBe(true); + }); + }); + + describe('getExistingLanguage', () => { + test('should return language code from settings.json', async () => { + const claudeDir = path.join(tempDir, '.claude'); + await fse.ensureDir(claudeDir); + await fse.writeFile( + path.join(claudeDir, 'settings.json'), + JSON.stringify({ language: 'portuguese' }, null, 2) + '\n', + 'utf8', + ); + + const result = await getExistingLanguage(tempDir); + expect(result).toBe('pt'); + }); + + test('should return null when no settings.json exists', async () => { + const result = await getExistingLanguage(tempDir); + expect(result).toBeNull(); + }); + + test('should return null when settings.json has no language', async () => 
{ + const claudeDir = path.join(tempDir, '.claude'); + await fse.ensureDir(claudeDir); + await fse.writeFile( + path.join(claudeDir, 'settings.json'), + JSON.stringify({ theme: 'dark' }, null, 2) + '\n', + 'utf8', + ); + + const result = await getExistingLanguage(tempDir); + expect(result).toBeNull(); + }); + + test('should return null for unknown language', async () => { + const claudeDir = path.join(tempDir, '.claude'); + await fse.ensureDir(claudeDir); + await fse.writeFile( + path.join(claudeDir, 'settings.json'), + JSON.stringify({ language: 'french' }, null, 2) + '\n', + 'utf8', + ); + + const result = await getExistingLanguage(tempDir); + expect(result).toBeNull(); + }); + + test('should handle malformed JSON gracefully', async () => { + const claudeDir = path.join(tempDir, '.claude'); + await fse.ensureDir(claudeDir); + await fse.writeFile( + path.join(claudeDir, 'settings.json'), + 'not valid json{{{', + 'utf8', + ); + + const result = await getExistingLanguage(tempDir); + expect(result).toBeNull(); + }); + + test('should roundtrip with writeClaudeSettings', async () => { + await writeClaudeSettings('es', tempDir); + + const result = await getExistingLanguage(tempDir); + expect(result).toBe('es'); + }); + }); +}); + +``` + +================================================== +📄 tests/installer/brownfield-upgrader.test.js +================================================== +```js +/** + * Unit tests for brownfield-upgrader.js + * @story 6.18 - Dynamic Manifest & Brownfield Upgrade System + */ + +const path = require('path'); +const fs = require('fs-extra'); +const os = require('os'); +const yaml = require('js-yaml'); +const { + loadManifest, + generateUpgradeReport, + applyUpgrade, + updateInstalledManifest, + buildFileMap, + isUserModified, + formatUpgradeReport, +} = require('../../packages/installer/src/installer/brownfield-upgrader'); +const { hashFile } = require('../../packages/installer/src/installer/file-hasher'); + +describe('brownfield-upgrader', 
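+// A sketch of the upgrade flow these tests exercise (helper signatures as used
+// later in this file; this is an illustration, not a verified public API):
+//
+//   const manifest = loadManifest(sourceDir, 'install-manifest.yaml');
+//   const report = generateUpgradeReport(manifest, installedManifest, targetDir);
+//   const result = await applyUpgrade(report, sourceDir, targetDir, { dryRun: true });
+//   updateInstalledManifest(targetDir, manifest, 'aios-core@2.1.0');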
() => { + let tempDir; + let sourceDir; + let targetDir; + + beforeEach(() => { + tempDir = path.join(os.tmpdir(), 'brownfield-test-' + Date.now()); + sourceDir = path.join(tempDir, 'source'); + targetDir = path.join(tempDir, 'target'); + + fs.ensureDirSync(sourceDir); + fs.ensureDirSync(targetDir); + }); + + afterEach(() => { + fs.removeSync(tempDir); + }); + + describe('loadManifest', () => { + it('should load valid YAML manifest', () => { + const manifestPath = path.join(sourceDir, 'install-manifest.yaml'); + const manifestContent = { + version: '2.0.0', + files: [{ path: 'test.md', hash: 'sha256:abc123' }], + }; + fs.writeFileSync(manifestPath, yaml.dump(manifestContent)); + + const loaded = loadManifest(sourceDir, 'install-manifest.yaml'); + expect(loaded.version).toBe('2.0.0'); + expect(loaded.files).toHaveLength(1); + }); + + it('should return null for missing manifest', () => { + const loaded = loadManifest(sourceDir, 'nonexistent.yaml'); + expect(loaded).toBeNull(); + }); + }); + + describe('buildFileMap', () => { + it('should create map from manifest files', () => { + const manifest = { + files: [ + { path: 'file1.md', hash: 'sha256:abc' }, + { path: 'file2.md', hash: 'sha256:def' }, + ], + }; + + const map = buildFileMap(manifest); + expect(map.size).toBe(2); + expect(map.get('file1.md').hash).toBe('sha256:abc'); + }); + + it('should normalize Windows paths', () => { + const manifest = { + files: [{ path: 'folder\\file.md', hash: 'sha256:abc' }], + }; + + const map = buildFileMap(manifest); + expect(map.has('folder/file.md')).toBe(true); + }); + + it('should handle empty manifest', () => { + const map = buildFileMap({}); + expect(map.size).toBe(0); + }); + + it('should handle null manifest', () => { + const map = buildFileMap(null); + expect(map.size).toBe(0); + }); + }); + + describe('isUserModified', () => { + it('should return false for unmodified file', () => { + const testFile = path.join(tempDir, 'test.txt'); + fs.writeFileSync(testFile, 'original 
content'); + const hash = `sha256:${hashFile(testFile)}`; + + expect(isUserModified(testFile, hash)).toBe(false); + }); + + it('should return true for modified file', () => { + const testFile = path.join(tempDir, 'test.txt'); + fs.writeFileSync(testFile, 'original content'); + const hash = 'sha256:different_hash_value_here'; + + expect(isUserModified(testFile, hash)).toBe(true); + }); + + it('should return false for non-existent file', () => { + const nonExistent = path.join(tempDir, 'missing.txt'); + expect(isUserModified(nonExistent, 'sha256:abc')).toBe(false); + }); + }); + + describe('generateUpgradeReport', () => { + it('should identify new files', () => { + const sourceManifest = { + version: '2.1.0', + files: [ + { path: 'existing.md', hash: 'sha256:abc', type: 'agent' }, + { path: 'new-file.md', hash: 'sha256:def', type: 'agent' }, + ], + }; + const installedManifest = { + installed_version: '2.0.0', + files: [{ path: 'existing.md', hash: 'sha256:abc' }], + }; + + const report = generateUpgradeReport(sourceManifest, installedManifest, targetDir); + + expect(report.newFiles).toHaveLength(1); + expect(report.newFiles[0].path).toBe('new-file.md'); + }); + + it('should identify modified files', () => { + const aiosCoreDir = path.join(targetDir, '.aios-core'); + fs.ensureDirSync(aiosCoreDir); + fs.writeFileSync(path.join(aiosCoreDir, 'changed.md'), 'original'); + const originalHash = `sha256:${hashFile(path.join(aiosCoreDir, 'changed.md'))}`; + + const sourceManifest = { + version: '2.1.0', + files: [{ path: 'changed.md', hash: 'sha256:new_hash', type: 'agent' }], + }; + const installedManifest = { + installed_version: '2.0.0', + files: [{ path: 'changed.md', hash: originalHash }], + }; + + const report = generateUpgradeReport(sourceManifest, installedManifest, targetDir); + + expect(report.modifiedFiles).toHaveLength(1); + }); + + it('should identify user-modified files', () => { + const aiosCoreDir = path.join(targetDir, '.aios-core'); + 
+      // The on-disk content below will hash differently from the installed hash,
+      // so the upgrader must classify the file as user-modified and preserve it.
fs.ensureDirSync(aiosCoreDir); + fs.writeFileSync(path.join(aiosCoreDir, 'user-changed.md'), 'user modified content'); + + const sourceManifest = { + version: '2.1.0', + files: [{ path: 'user-changed.md', hash: 'sha256:source_hash', type: 'agent' }], + }; + const installedManifest = { + installed_version: '2.0.0', + files: [{ path: 'user-changed.md', hash: 'sha256:original_installed_hash' }], + }; + + const report = generateUpgradeReport(sourceManifest, installedManifest, targetDir); + + expect(report.userModifiedFiles).toHaveLength(1); + expect(report.userModifiedFiles[0].reason).toContain('User modified'); + }); + + it('should identify deleted files', () => { + const sourceManifest = { + version: '2.1.0', + files: [], + }; + const installedManifest = { + installed_version: '2.0.0', + files: [{ path: 'removed.md', hash: 'sha256:abc', type: 'agent' }], + }; + + const report = generateUpgradeReport(sourceManifest, installedManifest, targetDir); + + expect(report.deletedFiles).toHaveLength(1); + expect(report.deletedFiles[0].path).toBe('removed.md'); + }); + + it('should detect upgrade availability via semver', () => { + const sourceManifest = { version: '2.1.0', files: [] }; + const installedManifest = { installed_version: '2.0.0', files: [] }; + + const report = generateUpgradeReport(sourceManifest, installedManifest, targetDir); + expect(report.upgradeAvailable).toBe(true); + }); + + it('should not flag upgrade when versions equal', () => { + const sourceManifest = { version: '2.0.0', files: [] }; + const installedManifest = { installed_version: '2.0.0', files: [] }; + + const report = generateUpgradeReport(sourceManifest, installedManifest, targetDir); + expect(report.upgradeAvailable).toBe(false); + }); + }); + + describe('applyUpgrade', () => { + beforeEach(() => { + // Setup source files + fs.ensureDirSync(sourceDir); + fs.writeFileSync(path.join(sourceDir, 'new-file.md'), 'new content'); + fs.writeFileSync(path.join(sourceDir, 'updated.md'), 'updated 
content'); + }); + + it('should install new files', async () => { + const report = { + newFiles: [{ path: 'new-file.md', type: 'agent' }], + modifiedFiles: [], + userModifiedFiles: [], + deletedFiles: [], + }; + + const result = await applyUpgrade(report, sourceDir, targetDir); + + expect(result.success).toBe(true); + expect(result.filesInstalled).toHaveLength(1); + expect(fs.existsSync(path.join(targetDir, '.aios-core', 'new-file.md'))).toBe(true); + }); + + it('should update modified files when includeModified is true', async () => { + const report = { + newFiles: [], + modifiedFiles: [{ path: 'updated.md', type: 'agent' }], + userModifiedFiles: [], + deletedFiles: [], + }; + + const result = await applyUpgrade(report, sourceDir, targetDir, { includeModified: true }); + + expect(result.filesInstalled.some(f => f.path === 'updated.md')).toBe(true); + }); + + it('should skip user-modified files', async () => { + const report = { + newFiles: [], + modifiedFiles: [], + userModifiedFiles: [{ path: 'user-file.md', reason: 'User modified' }], + deletedFiles: [], + }; + + const result = await applyUpgrade(report, sourceDir, targetDir); + + expect(result.filesSkipped).toHaveLength(1); + expect(result.filesSkipped[0].reason).toContain('preserving local'); + }); + + it('should perform dry run without modifying files', async () => { + const report = { + newFiles: [{ path: 'new-file.md', type: 'agent' }], + modifiedFiles: [], + userModifiedFiles: [], + deletedFiles: [], + }; + + const result = await applyUpgrade(report, sourceDir, targetDir, { dryRun: true }); + + expect(result.filesInstalled).toHaveLength(1); + expect(fs.existsSync(path.join(targetDir, '.aios-core', 'new-file.md'))).toBe(false); + }); + }); + + describe('updateInstalledManifest', () => { + it('should create installed manifest file', () => { + const sourceManifest = { + version: '2.1.0', + files: [{ path: 'test.md', hash: 'sha256:abc' }], + }; + + fs.ensureDirSync(path.join(targetDir, '.aios-core')); + 
updateInstalledManifest(targetDir, sourceManifest, 'aios-core@2.1.0'); + + const installedPath = path.join(targetDir, '.aios-core', '.installed-manifest.yaml'); + expect(fs.existsSync(installedPath)).toBe(true); + + const content = yaml.load(fs.readFileSync(installedPath, 'utf8')); + expect(content.installed_version).toBe('2.1.0'); + expect(content.installed_from).toBe('aios-core@2.1.0'); + }); + }); + + describe('formatUpgradeReport', () => { + it('should format report as string', () => { + const report = { + sourceVersion: '2.1.0', + installedVersion: '2.0.0', + newFiles: [{ path: 'new.md', type: 'agent' }], + modifiedFiles: [], + userModifiedFiles: [], + deletedFiles: [], + upgradeAvailable: true, + }; + + const formatted = formatUpgradeReport(report); + + expect(formatted).toContain('2.1.0'); + expect(formatted).toContain('2.0.0'); + expect(formatted).toContain('New Files'); + expect(formatted).toContain('new.md'); + }); + + it('should indicate upgrade availability', () => { + const report = { + sourceVersion: '2.1.0', + installedVersion: '2.0.0', + newFiles: [], + modifiedFiles: [], + userModifiedFiles: [], + deletedFiles: [], + upgradeAvailable: true, + }; + + const formatted = formatUpgradeReport(report); + expect(formatted).toContain('Yes'); + }); + }); +}); + +``` + +================================================== +📄 tests/e2e/story-creation-clickup.test.js +================================================== +```js +// Integration test - requires external services +// Uses describeIntegration from setup.js +// File: tests/e2e/story-creation-clickup.test.js + +/** + * End-to-End Story Creation Test Suite + * + * Tests the complete story creation workflow from Epic verification + * through ClickUp subtask creation and frontmatter metadata recording. 
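+ *
+ * The happy path below can be sketched as (a hedged sketch using the helpers
+ * required by this file; argument shapes follow the mocks in these tests, not
+ * a verified API, and storyFilePath is a hypothetical path variable):
+ *
+ *   const epic = await verifyEpicExists(5);
+ *   const story = await createStoryInClickUp({
+ *     epicNum: 5, storyNum: 99, title: 'E2E Test Story',
+ *     epicTaskId: epic.epicTaskId, listName: 'Backlog', storyContent: '...',
+ *   });
+ *   await updateStoryFrontmatter(storyFilePath, {
+ *     clickup: { task_id: story.taskId, epic_task_id: epic.epicTaskId, url: story.url },
+ *   });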
+ * + * AC2: Story Creation as ClickUp Subtask + * AC3: Story File Metadata Recording + */ + +const fs = require('fs').promises; +const path = require('path'); +const { verifyEpicExists } = require('../../common/utils/clickup-helpers'); +const { createStoryInClickUp, updateStoryFrontmatter } = require('../../common/utils/story-manager'); + +// Create a single shared mock instance +const mockClickUpTool = { + getWorkspaceTasks: jest.fn(), + createTask: jest.fn(), + updateTask: jest.fn(), + getTask: jest.fn(), +}; + +// Mock ClickUp MCP tool - return the SAME instance every time +jest.mock('../../common/utils/tool-resolver', () => ({ + resolveTool: jest.fn(() => mockClickUpTool), +})); + +const _toolResolver = require('../../common/utils/tool-resolver'); + +describeIntegration('End-to-End Story Creation with ClickUp Integration', () => { + const testStoryPath = path.join(__dirname, '../fixtures/test-story-5.99.md'); + + beforeEach(() => { + jest.clearAllMocks(); + // mockClickUpTool is now a global shared instance + }); + + afterEach(async () => { + // Cleanup test files + try { + await fs.unlink(testStoryPath); + } catch { + // File may not exist, ignore + } + }); + + describeIntegration('Complete Flow: Epic Verification → Story Creation → ClickUp Subtask → Frontmatter Update', () => { + test('should successfully create story with ClickUp integration', async () => { + // Step 1: Mock Epic verification + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-task-5', + name: 'Epic 5: Tools System', + status: 'In Progress', + tags: ['epic', 'epic-5'], + list: { + id: 'backlog-list-123', + name: 'Backlog', + }, + }, + ], + }); + + // Step 2: Verify Epic exists + const epicResult = await verifyEpicExists(5); + expect(epicResult.found).toBe(true); + expect(epicResult.epicTaskId).toBe('epic-task-5'); + + // Step 3: Mock ClickUp story task creation + mockClickUpTool.createTask.mockResolvedValue({ + id: 'story-task-5-99', + name: 'Story 5.99: E2E 
Test Story', + url: 'https://app.clickup.com/t/story-task-5-99', + status: 'Draft', + parent: 'epic-task-5', + tags: ['story', 'epic-5', 'story-5.99'], + custom_fields: [ + { id: 'epic_number', value: 5 }, + { id: 'story_number', value: '5.99' }, + { id: 'story_file_path', value: 'docs/stories/5.99.story.md' }, + { id: 'story-status', value: 'Draft' }, + ], + }); + + // Step 4: Create story in ClickUp (as subtask) + const storyResult = await createStoryInClickUp({ + epicNum: 5, + storyNum: 99, + title: 'E2E Test Story', + epicTaskId: epicResult.epicTaskId, + listName: 'Backlog', + storyContent: '# Story 5.99: E2E Test Story\n\nTest content...', + }); + + expect(storyResult.taskId).toBe('story-task-5-99'); + expect(storyResult.url).toBe('https://app.clickup.com/t/story-task-5-99'); + + // Verify createTask was called with correct parameters + expect(mockClickUpTool.createTask).toHaveBeenCalledWith({ + listName: 'Backlog', + name: 'Story 5.99: E2E Test Story', + parent: 'epic-task-5', + markdown_description: expect.stringContaining('Test content'), + tags: ['story', 'epic-5', 'story-5.99'], + custom_fields: [ + { id: 'epic_number', value: 5 }, + { id: 'story_number', value: '5.99' }, + { id: 'story_file_path', value: expect.stringContaining('5.99') }, + { id: 'story-status', value: 'Draft' }, + ], + }); + + // Step 5: Create minimal story file for frontmatter update + const initialStoryContent = `--- +title: "Story 5.99: E2E Test Story" +epic: 5 +story: 99 +--- + +# Story 5.99: E2E Test Story + +Test content... 
+`; + await fs.writeFile(testStoryPath, initialStoryContent, 'utf-8'); + + // Step 6: Update story file frontmatter with ClickUp metadata + const frontmatter = await updateStoryFrontmatter(testStoryPath, { + clickup: { + task_id: storyResult.taskId, + epic_task_id: epicResult.epicTaskId, + list: 'Backlog', + url: storyResult.url, + last_sync: new Date().toISOString(), + }, + }); + + expect(frontmatter.clickup.task_id).toBe('story-task-5-99'); + expect(frontmatter.clickup.epic_task_id).toBe('epic-task-5'); + expect(frontmatter.clickup.url).toBe('https://app.clickup.com/t/story-task-5-99'); + }); + + test('should handle Epic verification failure gracefully', async () => { + // Mock Epic not found + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [], + }); + + await expect(verifyEpicExists(99)).rejects.toThrow( + /Epic 99 not found in ClickUp Backlog list/, + ); + + // Story creation should not proceed + expect(mockClickUpTool.createTask).not.toHaveBeenCalled(); + }); + + test('should rollback on ClickUp task creation failure', async () => { + // Step 1: Epic verification succeeds + mockClickUpTool.getWorkspaceTasks.mockResolvedValue({ + tasks: [ + { + id: 'epic-task-7', + name: 'Epic 7: Test', + status: 'Planning', + tags: ['epic', 'epic-7'], + }, + ], + }); + + const epicResult = await verifyEpicExists(7); + expect(epicResult.found).toBe(true); + + // Step 2: ClickUp task creation fails + mockClickUpTool.createTask.mockRejectedValue( + new Error('ClickUp API: Rate limit exceeded'), + ); + + // Step 3: Story creation should fail + await expect( + createStoryInClickUp({ + epicNum: 7, + storyNum: 1, + title: 'Test Story', + epicTaskId: epicResult.epicTaskId, + listName: 'Backlog', + storyContent: 'Content', + }), + ).rejects.toThrow('ClickUp API: Rate limit exceeded'); + }); + }); + + describeIntegration('Verify Parent-Child Relationship in ClickUp', () => { + test('should create story as subtask with correct parent reference', async () => { + const 
epicTaskId = 'epic-parent-123'; + + mockClickUpTool.createTask.mockResolvedValue({ + id: 'story-child-456', + name: 'Story 3.1: Child Story', + parent: epicTaskId, + url: 'https://app.clickup.com/t/story-child-456', + }); + + const result = await createStoryInClickUp({ + epicNum: 3, + storyNum: 1, + title: 'Child Story', + epicTaskId: epicTaskId, + listName: 'Backlog', + storyContent: 'Story content', + }); + + expect(result.taskId).toBe('story-child-456'); + + // Verify parent parameter was set correctly + expect(mockClickUpTool.createTask).toHaveBeenCalledWith( + expect.objectContaining({ + parent: epicTaskId, + }), + ); + }); + + test('should fail if parent Epic task_id is invalid', async () => { + mockClickUpTool.createTask.mockRejectedValue( + new Error('Parent task not found'), + ); + + await expect( + createStoryInClickUp({ + epicNum: 8, + storyNum: 1, + title: 'Orphan Story', + epicTaskId: 'invalid-epic-id', + listName: 'Backlog', + storyContent: 'Content', + }), + ).rejects.toThrow('Parent task not found'); + }); + + test('should verify Epic-Story relationship after creation', async () => { + const epicTaskId = 'epic-verify-123'; + const storyTaskId = 'story-verify-456'; + + mockClickUpTool.createTask.mockResolvedValue({ + id: storyTaskId, + name: 'Story 6.2: Verify Relationship', + parent: epicTaskId, + url: 'https://app.clickup.com/t/' + storyTaskId, + }); + + // Mock get task to verify parent relationship + mockClickUpTool.getTask = jest.fn().mockResolvedValue({ + id: storyTaskId, + parent: epicTaskId, + name: 'Story 6.2: Verify Relationship', + }); + + const result = await createStoryInClickUp({ + epicNum: 6, + storyNum: 2, + title: 'Verify Relationship', + epicTaskId: epicTaskId, + listName: 'Backlog', + storyContent: 'Content', + }); + + // Verify parent relationship + const taskDetails = await mockClickUpTool.getTask({ taskId: result.taskId }); + expect(taskDetails.parent).toBe(epicTaskId); + }); + }); + + describeIntegration('Verify All Tags 
Applied Correctly', () => { + test('should apply all three required tags to story', async () => { + mockClickUpTool.createTask.mockResolvedValue({ + id: 'story-tags-test', + name: 'Story 2.3.5: Tags Test', + tags: ['story', 'epic-2', 'story-2.3.5'], + }); + + await createStoryInClickUp({ + epicNum: 2, + storyNum: 5, + title: 'Tags Test', + epicTaskId: 'epic-2', + listName: 'Backlog', + storyContent: 'Content', + subStoryNum: 3, // For nested story numbering + }); + + expect(mockClickUpTool.createTask).toHaveBeenCalledWith( + expect.objectContaining({ + tags: ['story', 'epic-2', 'story-2.3.5'], + }), + ); + }); + + test('should format tags correctly for different story numbers', () => { + const testCases = [ + { epic: 1, story: 1, expected: ['story', 'epic-1', 'story-1.1'] }, + { epic: 5, story: 2, expected: ['story', 'epic-5', 'story-5.2'] }, + { epic: 10, story: 15, expected: ['story', 'epic-10', 'story-10.15'] }, + ]; + + testCases.forEach(({ epic, story, expected }) => { + const tags = generateStoryTags(epic, story); + expect(tags).toEqual(expected); + }); + }); + + test('should handle nested story numbering with substory parameter', async () => { + mockClickUpTool.createTask.mockResolvedValue({ + id: 'nested-story-test', + name: 'Story 4.3.2: Nested', + tags: ['story', 'epic-4', 'story-4.3.2'], + }); + + await createStoryInClickUp({ + epicNum: 4, + storyNum: 2, + subStoryNum: 3, // Creates 4.3.2 + title: 'Nested', + epicTaskId: 'epic-4', + listName: 'Backlog', + storyContent: 'Content', + }); + + expect(mockClickUpTool.createTask).toHaveBeenCalledWith( + expect.objectContaining({ + tags: expect.arrayContaining(['story-4.3.2']), + }), + ); + }); + }); + + describeIntegration('Verify Custom Fields Populated', () => { + test('should populate all four required custom fields', async () => { + mockClickUpTool.createTask.mockResolvedValue({ + id: 'custom-fields-test', + name: 'Story 9.1: Custom Fields Test', + custom_fields: [ + { id: 'epic_number', name: 
'epic_number', value: 9 }, + { id: 'story_number', name: 'story_number', value: '9.1' }, + { id: 'story_file_path', name: 'story_file_path', value: 'docs/stories/9.1.story.md' }, + { id: 'story-status', name: 'story-status', value: 'Draft' }, + ], + }); + + await createStoryInClickUp({ + epicNum: 9, + storyNum: 1, + title: 'Custom Fields Test', + epicTaskId: 'epic-9', + listName: 'Backlog', + storyContent: 'Content', + storyFilePath: 'docs/stories/9.1.story.md', + }); + + expect(mockClickUpTool.createTask).toHaveBeenCalledWith( + expect.objectContaining({ + custom_fields: expect.arrayContaining([ + expect.objectContaining({ id: 'epic_number', value: 9 }), + expect.objectContaining({ id: 'story_number', value: '9.1' }), + expect.objectContaining({ id: 'story_file_path', value: expect.stringContaining('9.1') }), + ]), + }), + ); + }); + + test('should set initial story-status to Draft', async () => { + mockClickUpTool.createTask.mockResolvedValue({ + id: 'status-test', + custom_fields: [ + { id: 'story-status', value: 'Draft' }, + ], + }); + + await createStoryInClickUp({ + epicNum: 11, + storyNum: 2, + title: 'Status Test', + epicTaskId: 'epic-11', + listName: 'Backlog', + storyContent: 'Content', + }); + + const createCall = mockClickUpTool.createTask.mock.calls[0][0]; + const statusField = createCall.custom_fields.find(f => f.id === 'story-status'); + expect(statusField.value).toBe('Draft'); + }); + + test('should handle custom field validation errors', async () => { + mockClickUpTool.createTask.mockRejectedValue( + new Error('Custom field "story-status" does not exist'), + ); + + await expect( + createStoryInClickUp({ + epicNum: 12, + storyNum: 1, + title: 'Field Error Test', + epicTaskId: 'epic-12', + listName: 'Backlog', + storyContent: 'Content', + }), + ).rejects.toThrow('Custom field "story-status" does not exist'); + }); + + test('should validate epic_number is numeric', async () => { + await expect( + createStoryInClickUp({ + epicNum: 'invalid', // Should 
be number + storyNum: 1, + title: 'Invalid Epic', + epicTaskId: 'epic-x', + listName: 'Backlog', + storyContent: 'Content', + }), + ).rejects.toThrow(/epic_number must be a number/); + }); + + test('should validate story_number format', async () => { + await expect( + createStoryInClickUp({ + epicNum: 5, + storyNum: 'abc', // Should be number + title: 'Invalid Story', + epicTaskId: 'epic-5', + listName: 'Backlog', + storyContent: 'Content', + }), + ).rejects.toThrow(/story_number must be numeric/); + }); + }); +}); + +/** + * Helper function to generate story tags + */ +function generateStoryTags(epicNum, storyNum, subStoryNum = null) { + const storyId = subStoryNum + ? `${epicNum}.${subStoryNum}.${storyNum}` + : `${epicNum}.${storyNum}`; + + return ['story', `epic-${epicNum}`, `story-${storyId}`]; +} + +``` + +================================================== +📄 tests/synapse/context-builder.test.js +================================================== +```js +'use strict'; + +const { buildLayerContext } = require('../../.aios-core/core/synapse/context/context-builder'); + +describe('buildLayerContext', () => { + it('builds normalized context with defaults', () => { + const context = buildLayerContext({ + synapsePath: '/tmp/.synapse', + }); + + expect(context.prompt).toBe(''); + expect(context.session).toEqual({}); + expect(context.previousLayers).toEqual([]); + expect(context.config.synapsePath).toBe('/tmp/.synapse'); + expect(context.config.manifest).toEqual({}); + }); + + it('preserves prompt, session, config and previous layers', () => { + const context = buildLayerContext({ + prompt: 'hello', + session: { prompt_count: 3 }, + config: { devmode: true }, + synapsePath: '/repo/.synapse', + manifest: { version: '2.0' }, + previousLayers: [{ layer: 'global' }], + }); + + expect(context.prompt).toBe('hello'); + expect(context.session.prompt_count).toBe(3); + expect(context.config.devmode).toBe(true); + expect(context.config.synapsePath).toBe('/repo/.synapse'); + 
expect(context.config.manifest.version).toBe('2.0'); + expect(context.previousLayers).toHaveLength(1); + }); +}); + +``` + +================================================== +📄 tests/synapse/layer-processor.test.js +================================================== +```js +/** + * LayerProcessor Base Class Tests + * + * Tests for abstract class enforcement, _safeProcess() timeout guard, + * error handling, and constructor validation. + * + * @story SYN-4 - Layer Processors L0-L3 + */ + +const LayerProcessor = require('../../.aios-core/core/synapse/layers/layer-processor'); + +jest.setTimeout(30000); + +/** + * Concrete subclass for testing + */ +class TestProcessor extends LayerProcessor { + constructor(opts = {}) { + super({ name: 'test', layer: 99, timeout: 15, ...opts }); + } + + process(context) { + return { + rules: ['rule1', 'rule2'], + metadata: { layer: 99, source: 'test' }, + }; + } +} + +describe('LayerProcessor', () => { + describe('abstract class enforcement', () => { + test('should throw when instantiated directly', () => { + expect(() => new LayerProcessor({ name: 'direct', layer: 0 })) + .toThrow('LayerProcessor is abstract and cannot be instantiated directly'); + }); + + test('should allow subclass instantiation', () => { + const processor = new TestProcessor(); + expect(processor).toBeInstanceOf(LayerProcessor); + expect(processor).toBeInstanceOf(TestProcessor); + }); + }); + + describe('constructor properties', () => { + test('should set name, layer, and timeout', () => { + const processor = new TestProcessor({ name: 'custom', layer: 5, timeout: 20 }); + expect(processor.name).toBe('custom'); + expect(processor.layer).toBe(5); + expect(processor.timeout).toBe(20); + }); + + test('should default timeout to 15ms', () => { + const processor = new TestProcessor(); + expect(processor.timeout).toBe(15); + }); + }); + + describe('process() abstract method', () => { + test('should throw when not overridden', () => { + class EmptyProcessor extends 
LayerProcessor { + constructor() { + super({ name: 'empty', layer: 0 }); + } + // process() intentionally NOT overridden + } + + const processor = new EmptyProcessor(); + expect(() => processor.process({})) + .toThrow('empty: process() must be implemented by subclass'); + }); + + test('should return result when overridden', () => { + const processor = new TestProcessor(); + const result = processor.process({}); + expect(result).toEqual({ + rules: ['rule1', 'rule2'], + metadata: { layer: 99, source: 'test' }, + }); + }); + }); + + describe('_safeProcess()', () => { + test('should return process() result on success', () => { + const processor = new TestProcessor(); + const result = processor._safeProcess({}); + expect(result).toEqual({ + rules: ['rule1', 'rule2'], + metadata: { layer: 99, source: 'test' }, + }); + }); + + test('should return null on process() error', () => { + class ErrorProcessor extends LayerProcessor { + constructor() { + super({ name: 'error-test', layer: 0 }); + } + process() { + throw new Error('Something went wrong'); + } + } + + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(); + const processor = new ErrorProcessor(); + const result = processor._safeProcess({}); + + expect(result).toBeNull(); + expect(warnSpy).toHaveBeenCalledWith( + '[synapse:error-test] Error: Something went wrong', + ); + warnSpy.mockRestore(); + }); + + test('should warn when timeout exceeded but still return result', () => { + class SlowProcessor extends LayerProcessor { + constructor() { + super({ name: 'slow', layer: 0, timeout: 1 }); + } + process() { + // Simulate slow operation + const start = Date.now(); + while (Date.now() - start < 5) { /* busy wait */ } + return { rules: ['slow-rule'], metadata: {} }; + } + } + + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(); + const processor = new SlowProcessor(); + const result = processor._safeProcess({}); + + expect(result).toEqual({ rules: ['slow-rule'], metadata: {} }); + 
expect(warnSpy).toHaveBeenCalledWith( + expect.stringContaining('[synapse:slow] Warning: Layer exceeded timeout'), + ); + warnSpy.mockRestore(); + }); + + test('should not warn when within timeout', () => { + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(); + const processor = new TestProcessor({ timeout: 1000 }); + processor._safeProcess({}); + + expect(warnSpy).not.toHaveBeenCalled(); + warnSpy.mockRestore(); + }); + + test('should return null when process() returns null', () => { + class NullProcessor extends LayerProcessor { + constructor() { + super({ name: 'null-test', layer: 0 }); + } + process() { + return null; + } + } + + const processor = new NullProcessor(); + const result = processor._safeProcess({}); + expect(result).toBeNull(); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/l3-workflow.test.js +================================================== +```js +/** + * L3 Workflow Processor Tests + * + * Tests for workflow detection, trigger matching, phase metadata, + * graceful degradation, and session state handling. 
+ * + * @story SYN-4 - Layer Processors L0-L3 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const LayerProcessor = require('../../.aios-core/core/synapse/layers/layer-processor'); +const L3WorkflowProcessor = require('../../.aios-core/core/synapse/layers/l3-workflow'); + +jest.setTimeout(30000); + +function createTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-l3-test-')); +} + +function cleanupTempDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +describe('L3WorkflowProcessor', () => { + let tempDir; + let processor; + + beforeEach(() => { + tempDir = createTempDir(); + processor = new L3WorkflowProcessor(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + describe('constructor', () => { + test('should extend LayerProcessor', () => { + expect(processor).toBeInstanceOf(LayerProcessor); + }); + + test('should set name to workflow', () => { + expect(processor.name).toBe('workflow'); + }); + + test('should set layer to 3', () => { + expect(processor.layer).toBe(3); + }); + + test('should set timeout to 15ms', () => { + expect(processor.timeout).toBe(15); + }); + }); + + describe('process()', () => { + test('should load workflow-specific rules when workflow is active', () => { + fs.writeFileSync(path.join(tempDir, 'workflow-sdc'), [ + 'SDC_RULE_1=Follow story development cycle', + 'SDC_RULE_2=Update checkboxes as tasks complete', + 'SDC_RULE_3=Run tests before marking complete', + ].join('\n')); + + const context = { + prompt: '', + session: { + active_workflow: { + id: 'sdc', + instance_id: 'sdc-123', + current_step: 3, + current_phase: 'implementation', + started_at: '2026-02-11', + }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + SDC_WORKFLOW: { + state: 'active', + workflowTrigger: 'sdc', + file: 'workflow-sdc', + }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + 
expect(result.rules).toHaveLength(3); + expect(result.metadata.layer).toBe(3); + expect(result.metadata.source).toBe('workflow-sdc'); + expect(result.metadata.workflow).toBe('sdc'); + expect(result.metadata.phase).toBe('implementation'); + }); + + test('should return null when no workflow is active', () => { + const context = { + prompt: '', + session: { active_workflow: null }, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when session has no active_workflow', () => { + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when no matching workflowTrigger in manifest', () => { + const context = { + prompt: '', + session: { + active_workflow: { id: 'unknown-workflow' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + SDC: { workflowTrigger: 'sdc', file: 'workflow-sdc' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when domain file is missing', () => { + const context = { + prompt: '', + session: { + active_workflow: { id: 'sdc' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + SDC: { workflowTrigger: 'sdc', file: 'nonexistent' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should include phase metadata from session', () => { + fs.writeFileSync(path.join(tempDir, 'workflow-qa'), 'QA_RULE=Run quality gate\n'); + + const context = { + prompt: '', + session: { + active_workflow: { + id: 'qa', + current_phase: 'review', + current_step: 1, + }, + }, + config: { + synapsePath: 
tempDir, + manifest: { + domains: { + QA_WF: { workflowTrigger: 'qa', file: 'workflow-qa' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result.metadata.phase).toBe('review'); + }); + + test('should set phase to null when current_phase is missing', () => { + fs.writeFileSync(path.join(tempDir, 'workflow-build'), 'BUILD_RULE=Build first\n'); + + const context = { + prompt: '', + session: { + active_workflow: { id: 'build' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + BUILD: { workflowTrigger: 'build', file: 'workflow-build' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result.metadata.phase).toBeNull(); + }); + + test('should use default file path when domain has no file property', () => { + fs.writeFileSync(path.join(tempDir, 'workflow-deploy'), 'DEPLOY_RULE=Deploy safely\n'); + + const context = { + prompt: '', + session: { + active_workflow: { id: 'deploy', current_phase: 'staging' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + DEPLOY_WF: { workflowTrigger: 'deploy' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).not.toBeNull(); + expect(result.rules[0]).toContain('Deploy safely'); + expect(result.metadata.workflow).toBe('deploy'); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/memory-bridge.test.js +================================================== +```js +/** + * MemoryBridge Tests + * + * Tests for the feature-gated MIS consumer that provides + * bracket-aware memory retrieval for the SYNAPSE engine. 
+ * + * @module tests/synapse/memory-bridge + * @story SYN-10 - Pro Memory Bridge (Feature-Gated MIS Consumer) + */ + +jest.setTimeout(10000); + +// --------------------------------------------------------------------------- +// Mocks +// --------------------------------------------------------------------------- + +const mockFeatureGate = { + isAvailable: jest.fn(() => false), + require: jest.fn(), +}; + +jest.mock('../../pro/license/feature-gate', () => ({ + featureGate: mockFeatureGate, +}), { virtual: true }); + +const mockGetMemories = jest.fn(() => Promise.resolve([])); +const mockClearCache = jest.fn(); + +jest.mock('../../pro/memory/synapse-memory-provider', () => ({ + SynapseMemoryProvider: jest.fn().mockImplementation(() => ({ + getMemories: mockGetMemories, + clearCache: mockClearCache, + })), +}), { virtual: true }); + +// --------------------------------------------------------------------------- +// Import (after mocks) +// --------------------------------------------------------------------------- + +const { MemoryBridge, BRACKET_LAYER_MAP, BRIDGE_TIMEOUT_MS } = require('../../.aios-core/core/synapse/memory/memory-bridge'); + +// ============================================================================= +// MemoryBridge +// ============================================================================= + +describe('MemoryBridge', () => { + let bridge; + + beforeEach(() => { + bridge = new MemoryBridge(); + mockFeatureGate.isAvailable.mockReset(); + mockGetMemories.mockReset(); + mockClearCache.mockReset(); + mockFeatureGate.isAvailable.mockReturnValue(false); + mockGetMemories.mockResolvedValue([]); + }); + + // ------------------------------------------------------------------------- + // AC-1: Module exists and exports correctly + // ------------------------------------------------------------------------- + + describe('module structure', () => { + test('exports MemoryBridge class', () => { + expect(MemoryBridge).toBeDefined(); + expect(typeof 
MemoryBridge).toBe('function'); + }); + + test('exports BRACKET_LAYER_MAP constant', () => { + expect(BRACKET_LAYER_MAP).toBeDefined(); + expect(BRACKET_LAYER_MAP.FRESH).toBeDefined(); + expect(BRACKET_LAYER_MAP.MODERATE).toBeDefined(); + expect(BRACKET_LAYER_MAP.DEPLETED).toBeDefined(); + expect(BRACKET_LAYER_MAP.CRITICAL).toBeDefined(); + }); + + test('exports BRIDGE_TIMEOUT_MS constant', () => { + expect(BRIDGE_TIMEOUT_MS).toBe(15); + }); + + test('getMemoryHints returns array of hint objects', async () => { + mockFeatureGate.isAvailable.mockReturnValue(true); + mockGetMemories.mockResolvedValue([ + { content: 'test hint', source: 'procedural', relevance: 0.8, tokens: 5 }, + ]); + + const hints = await bridge.getMemoryHints('dev', 'MODERATE', 100); + expect(Array.isArray(hints)).toBe(true); + if (hints.length > 0) { + expect(hints[0]).toHaveProperty('content'); + expect(hints[0]).toHaveProperty('source'); + expect(hints[0]).toHaveProperty('relevance'); + expect(hints[0]).toHaveProperty('tokens'); + } + }); + }); + + // ------------------------------------------------------------------------- + // AC-2: Feature gate integration + // ------------------------------------------------------------------------- + + describe('feature gate', () => { + test('returns [] when feature is unavailable', async () => { + mockFeatureGate.isAvailable.mockReturnValue(false); + const hints = await bridge.getMemoryHints('dev', 'MODERATE', 100); + expect(hints).toEqual([]); + }); + + test('delegates to provider when feature is available', async () => { + mockFeatureGate.isAvailable.mockReturnValue(true); + mockGetMemories.mockResolvedValue([ + { content: 'hint 1', source: 'procedural', relevance: 0.9, tokens: 5 }, + ]); + + const hints = await bridge.getMemoryHints('dev', 'MODERATE', 100); + expect(hints.length).toBeGreaterThan(0); + }); + + test('feature gate check does not throw', async () => { + mockFeatureGate.isAvailable.mockImplementation(() => { + throw new Error('Gate error'); 
+ }); + + const hints = await bridge.getMemoryHints('dev', 'MODERATE', 100); + expect(hints).toEqual([]); + }); + }); + + // ------------------------------------------------------------------------- + // AC-3: Bracket-aware retrieval + // ------------------------------------------------------------------------- + + describe('bracket-aware retrieval', () => { + beforeEach(() => { + mockFeatureGate.isAvailable.mockReturnValue(true); + }); + + test('FRESH bracket returns [] (no memory injection)', async () => { + const hints = await bridge.getMemoryHints('dev', 'FRESH', 100); + expect(hints).toEqual([]); + expect(mockGetMemories).not.toHaveBeenCalled(); + }); + + test('MODERATE bracket retrieves Layer 1 (max ~50 tokens)', async () => { + mockGetMemories.mockResolvedValue([ + { content: 'short hint', source: 'procedural', relevance: 0.8, tokens: 10 }, + ]); + + const hints = await bridge.getMemoryHints('dev', 'MODERATE', 200); + expect(mockGetMemories).toHaveBeenCalledWith('dev', 'MODERATE', 50); + expect(hints.length).toBe(1); + }); + + test('DEPLETED bracket retrieves Layer 2 (max ~200 tokens)', async () => { + mockGetMemories.mockResolvedValue([ + { content: 'chunk hint', source: 'semantic', relevance: 0.6, tokens: 50 }, + ]); + + const hints = await bridge.getMemoryHints('dev', 'DEPLETED', 500); + expect(mockGetMemories).toHaveBeenCalledWith('dev', 'DEPLETED', 200); + }); + + test('CRITICAL bracket retrieves Layer 3 (max ~1000 tokens)', async () => { + mockGetMemories.mockResolvedValue([ + { content: 'full content', source: 'semantic', relevance: 0.5, tokens: 100 }, + ]); + + const hints = await bridge.getMemoryHints('dev', 'CRITICAL', 2000); + expect(mockGetMemories).toHaveBeenCalledWith('dev', 'CRITICAL', 1000); + }); + + test('unknown bracket returns []', async () => { + const hints = await bridge.getMemoryHints('dev', 'UNKNOWN', 100); + expect(hints).toEqual([]); + }); + + test('token budget respects bracket max even if caller budget is higher', async () => { + 
mockGetMemories.mockResolvedValue([]);
+      await bridge.getMemoryHints('dev', 'MODERATE', 9999);
+      // Should use bracket max (50), not caller budget (9999)
+      expect(mockGetMemories).toHaveBeenCalledWith('dev', 'MODERATE', 50);
+    });
+
+    test('token budget uses caller budget when lower than bracket max', async () => {
+      mockGetMemories.mockResolvedValue([]);
+      await bridge.getMemoryHints('dev', 'CRITICAL', 500);
+      // Should use caller budget (500), not bracket max (1000)
+      expect(mockGetMemories).toHaveBeenCalledWith('dev', 'CRITICAL', 500);
+    });
+  });
+
+  // -------------------------------------------------------------------------
+  // AC-8: Performance + timeout
+  // -------------------------------------------------------------------------
+
+  describe('timeout and error handling', () => {
+    beforeEach(() => {
+      mockFeatureGate.isAvailable.mockReturnValue(true);
+    });
+
+    test('returns [] on provider timeout', async () => {
+      // Create a bridge with a very short timeout
+      const fastBridge = new MemoryBridge({ timeout: 1 });
+      mockGetMemories.mockImplementation(
+        () => new Promise((resolve) => setTimeout(() => resolve([{ content: 'late', tokens: 5 }]), 100)),
+      );
+
+      const hints = await fastBridge.getMemoryHints('dev', 'MODERATE', 100);
+      expect(hints).toEqual([]);
+    });
+
+    test('returns [] on provider error', async () => {
+      mockGetMemories.mockRejectedValue(new Error('MIS failure'));
+
+      const hints = await bridge.getMemoryHints('dev', 'MODERATE', 100);
+      expect(hints).toEqual([]);
+    });
+
+    test('returns [] when provider constructor fails', async () => {
+      // Reset to trigger a fresh provider load that fails
+      bridge._reset();
+      bridge._initialized = true;
+      bridge._featureGate = mockFeatureGate;
+
+      // Stub _getProvider to simulate a failed provider load
+      const origGet = bridge._getProvider;
+      bridge._getProvider = () => null;
+
+      const hints = await bridge.getMemoryHints('dev', 'MODERATE', 100);
+      expect(hints).toEqual([]);
+
+      bridge._getProvider = origGet;
+    });
+  });
+ + // ------------------------------------------------------------------------- + // Token budget enforcement + // ------------------------------------------------------------------------- + + describe('token budget enforcement', () => { + beforeEach(() => { + mockFeatureGate.isAvailable.mockReturnValue(true); + }); + + test('truncates hints that exceed budget', async () => { + mockGetMemories.mockResolvedValue([ + { content: 'a'.repeat(100), source: 'p', relevance: 0.9, tokens: 25 }, + { content: 'b'.repeat(100), source: 'p', relevance: 0.8, tokens: 25 }, + { content: 'c'.repeat(100), source: 'p', relevance: 0.7, tokens: 25 }, + ]); + + const hints = await bridge.getMemoryHints('dev', 'MODERATE', 100); + // Budget is min(50, 100) = 50; first two hints = 50 tokens, third excluded + expect(hints.length).toBe(2); + }); + + test('returns empty array when hints have no content', async () => { + mockGetMemories.mockResolvedValue([]); + const hints = await bridge.getMemoryHints('dev', 'MODERATE', 100); + expect(hints).toEqual([]); + }); + + test('estimates tokens from content when tokens property missing', async () => { + mockGetMemories.mockResolvedValue([ + { content: 'hello world', source: 'p', relevance: 0.9 }, + ]); + + const hints = await bridge.getMemoryHints('dev', 'MODERATE', 100); + if (hints.length > 0) { + expect(hints[0].tokens).toBe(Math.ceil('hello world'.length / 4)); + } + }); + }); + + // ------------------------------------------------------------------------- + // Cache management + // ------------------------------------------------------------------------- + + describe('cache management', () => { + test('clearCache delegates to provider', () => { + mockFeatureGate.isAvailable.mockReturnValue(true); + // Force init and provider load + bridge._init(); + bridge._getProvider(); + + bridge.clearCache(); + expect(mockClearCache).toHaveBeenCalled(); + }); + + test('clearCache is no-op without provider', () => { + expect(() => 
bridge.clearCache()).not.toThrow(); + }); + }); + + // ------------------------------------------------------------------------- + // _reset for testing + // ------------------------------------------------------------------------- + + describe('_reset', () => { + test('clears internal state', () => { + bridge._init(); + bridge._reset(); + expect(bridge._initialized).toBe(false); + expect(bridge._provider).toBeNull(); + expect(bridge._featureGate).toBeNull(); + }); + }); +}); + +// ============================================================================= +// BRACKET_LAYER_MAP +// ============================================================================= + +describe('BRACKET_LAYER_MAP', () => { + test('FRESH maps to layer 0 with 0 tokens', () => { + expect(BRACKET_LAYER_MAP.FRESH).toEqual({ layer: 0, maxTokens: 0 }); + }); + + test('MODERATE maps to layer 1 with ~50 tokens', () => { + expect(BRACKET_LAYER_MAP.MODERATE).toEqual({ layer: 1, maxTokens: 50 }); + }); + + test('DEPLETED maps to layer 2 with ~200 tokens', () => { + expect(BRACKET_LAYER_MAP.DEPLETED).toEqual({ layer: 2, maxTokens: 200 }); + }); + + test('CRITICAL maps to layer 3 with ~1000 tokens', () => { + expect(BRACKET_LAYER_MAP.CRITICAL).toEqual({ layer: 3, maxTokens: 1000 }); + }); +}); + +``` + +================================================== +📄 tests/synapse/paths.test.js +================================================== +```js +/** + * SYNAPSE Path Utilities Tests + * + * Tests for resolveSynapsePath() and resolveDomainPath(). 
+ * + * @story SYN-1 - Domain Loader + Manifest Parser + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { + resolveSynapsePath, + resolveDomainPath, +} = require('../../.aios-core/core/synapse/utils/paths'); + +// Set timeout for all tests +jest.setTimeout(30000); + +describe('resolveSynapsePath', () => { + let tempDir; + + beforeEach(() => { + tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-paths-')); + }); + + afterEach(() => { + fs.rmSync(tempDir, { recursive: true, force: true }); + }); + + test('should detect existing .synapse/ directory', () => { + // Given: a directory with .synapse/ inside + const synapsePath = path.join(tempDir, '.synapse'); + fs.mkdirSync(synapsePath); + + // When: resolving + const result = resolveSynapsePath(tempDir); + + // Then: exists is true, paths are correct + expect(result.exists).toBe(true); + expect(result.synapsePath).toBe(synapsePath); + expect(result.manifestPath).toBe(path.join(synapsePath, 'manifest')); + }); + + test('should report non-existing .synapse/ directory', () => { + // Given: a directory without .synapse/ + + // When: resolving + const result = resolveSynapsePath(tempDir); + + // Then: exists is false, paths still computed + expect(result.exists).toBe(false); + expect(result.synapsePath).toBe(path.join(tempDir, '.synapse')); + expect(result.manifestPath).toBe(path.join(tempDir, '.synapse', 'manifest')); + }); + + test('should handle path with spaces', () => { + // Given: directory with spaces in path + const spacedDir = path.join(tempDir, 'dir with spaces'); + fs.mkdirSync(spacedDir); + fs.mkdirSync(path.join(spacedDir, '.synapse')); + + // When: resolving + const result = resolveSynapsePath(spacedDir); + + // Then: works correctly + expect(result.exists).toBe(true); + expect(result.synapsePath).toContain('dir with spaces'); + }); + + test('should not detect .synapse as file (only directory)', () => { + // Given: .synapse exists as a file, not a 
directory + fs.writeFileSync(path.join(tempDir, '.synapse'), 'not a directory'); + + // When: resolving + const result = resolveSynapsePath(tempDir); + + // Then: exists is false (file, not directory) + expect(result.exists).toBe(false); + }); +}); + +describe('resolveDomainPath', () => { + test('should resolve domain file path correctly', () => { + // Given: a synapse path and file name + const synapsePath = path.join('C:', 'project', '.synapse'); + + // When: resolving domain path + const result = resolveDomainPath(synapsePath, 'agent-dev'); + + // Then: correct path using platform separator + expect(result).toBe(path.join(synapsePath, 'agent-dev')); + }); + + test('should handle nested-like domain file names', () => { + const synapsePath = '/home/user/project/.synapse'; + const result = resolveDomainPath(synapsePath, 'workflow-story-dev'); + expect(result).toBe(path.join(synapsePath, 'workflow-story-dev')); + }); +}); + +``` + +================================================== +📄 tests/synapse/formatter.test.js +================================================== +```js +/** + * Output Formatter Tests + * + * Tests for XML generation, section ordering, + * token budget enforcement, and DEVMODE output. 
+ * + * @module tests/synapse/formatter + * @story SYN-6 - SynapseEngine Orchestrator + Output Formatter + */ + +jest.setTimeout(30000); + +const { + formatSynapseRules, + enforceTokenBudget, + estimateTokens, + SECTION_ORDER, + LAYER_TO_SECTION, +} = require('../../.aios-core/core/synapse/output/formatter'); + +// ============================================================================= +// estimateTokens +// ============================================================================= + +describe('estimateTokens', () => { + test('should estimate tokens as string length / 4', () => { + expect(estimateTokens('abcdefgh')).toBe(2); // 8 / 4 + }); + + test('should ceil the result', () => { + expect(estimateTokens('abc')).toBe(1); // ceil(3/4) = 1 + }); + + test('should return 0 for empty string', () => { + expect(estimateTokens('')).toBe(0); + }); + + test('should return 0 for null/undefined', () => { + expect(estimateTokens(null)).toBe(0); + expect(estimateTokens(undefined)).toBe(0); + }); +}); + +// ============================================================================= +// SECTION_ORDER and LAYER_TO_SECTION constants +// ============================================================================= + +describe('SECTION_ORDER', () => { + test('should have CONTEXT_BRACKET first', () => { + expect(SECTION_ORDER[0]).toBe('CONTEXT_BRACKET'); + }); + + test('should have SUMMARY last', () => { + expect(SECTION_ORDER[SECTION_ORDER.length - 1]).toBe('SUMMARY'); + }); + + test('should include all expected sections', () => { + const expected = [ + 'CONTEXT_BRACKET', 'CONSTITUTION', 'AGENT', 'WORKFLOW', + 'TASK', 'SQUAD', 'KEYWORD', 'MEMORY_HINTS', 'STAR_COMMANDS', 'DEVMODE', 'SUMMARY', + ]; + expect(SECTION_ORDER).toEqual(expected); + }); +}); + +describe('LAYER_TO_SECTION', () => { + test('should map constitution to CONSTITUTION', () => { + expect(LAYER_TO_SECTION.constitution).toBe('CONSTITUTION'); + }); + + test('should map global to CONTEXT_BRACKET', () => { + 
expect(LAYER_TO_SECTION.global).toBe('CONTEXT_BRACKET'); + }); + + test('should map star-command to STAR_COMMANDS', () => { + expect(LAYER_TO_SECTION['star-command']).toBe('STAR_COMMANDS'); + }); +}); + + // ============================================================================= + // enforceTokenBudget + // ============================================================================= + + describe('enforceTokenBudget', () => { + test('should return all sections when within budget', () => { + const sections = ['short', 'text']; + const ids = ['CONSTITUTION', 'AGENT']; + const result = enforceTokenBudget(sections, ids, 1000); + expect(result).toEqual(sections); + }); + + test('should return all sections when no budget set', () => { + const sections = ['a'.repeat(10000)]; + const ids = ['CONSTITUTION']; + const result = enforceTokenBudget(sections, ids, 0); + expect(result).toEqual(sections); + }); + + test('should remove SUMMARY first when over budget', () => { + const sections = ['constitution-text', 'agent-text', 'summary-text']; + const ids = ['CONSTITUTION', 'AGENT', 'SUMMARY']; + // Budget of 5 tokens → way too small for all sections + const result = enforceTokenBudget(sections, ids, 5); + // SUMMARY should be removed first + expect(result.length).toBeLessThan(sections.length); + }); + + test('should never remove CONTEXT_BRACKET', () => { + const sections = ['bracket', 'summary']; + const ids = ['CONTEXT_BRACKET', 'SUMMARY']; + const result = enforceTokenBudget(sections, ids, 1); + // Even at tiny budget, CONTEXT_BRACKET should remain + expect(result).toContain('bracket'); + }); + + test('should never remove CONSTITUTION', () => { + const sections = ['constitution', 'keyword', 'summary']; + const ids = ['CONSTITUTION', 'KEYWORD', 'SUMMARY']; + const result = enforceTokenBudget(sections, ids, 5); + expect(result).toContain('constitution'); + }); + + test('should never remove AGENT', () => { 
+ const sections = ['agent', 'squad', 'summary']; + const ids = ['AGENT', 'SQUAD', 'SUMMARY']; + const result = enforceTokenBudget(sections, ids, 3); + expect(result).toContain('agent'); + }); + + test('should remove sections in truncation order', () => { + const sections = ['c', 'a', 'w', 't', 's', 'k', 'sc', 'd', 'sum']; + const ids = ['CONSTITUTION', 'AGENT', 'WORKFLOW', 'TASK', 'SQUAD', 'KEYWORD', 'STAR_COMMANDS', 'DEVMODE', 'SUMMARY']; + // Budget that forces removal of several sections + const result = enforceTokenBudget(sections, ids, 5); + // Protected sections should survive + expect(result).toContain('c'); // CONSTITUTION + expect(result).toContain('a'); // AGENT + }); +}); + + // ============================================================================= + // formatSynapseRules + // ============================================================================= + + describe('formatSynapseRules', () => { + // Helper: create a layer result + function makeResult(source, rules, extraMeta = {}) { + return { + rules, + metadata: { source, layer: getLayerNum(source), ...extraMeta }, + }; + } + + function getLayerNum(source) { + const map = { constitution: 0, global: 1, agent: 2, workflow: 3, task: 4, squad: 5, keyword: 6, 'star-command': 7 }; + return map[source] != null ? 
map[source] : -1; + } + + const defaultSession = { prompt_count: 5 }; + const defaultMetrics = { + total_ms: 42, + layers_loaded: 3, + layers_skipped: 5, + layers_errored: 0, + total_rules: 10, + per_layer: {}, + }; + + test('should return empty string for null results', () => { + expect(formatSynapseRules(null, 'FRESH', 85, {}, false, {}, 800, false)).toBe(''); + }); + + test('should return empty string for empty results array', () => { + expect(formatSynapseRules([], 'FRESH', 85, {}, false, {}, 800, false)).toBe(''); + }); + + test('should wrap output in <synapse-rules> tags', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toMatch(/^<synapse-rules/); + expect(xml).toMatch(/<\/synapse-rules>$/); + }); + + test('should include CONTEXT BRACKET section', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85.0, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[CONTEXT BRACKET]'); + expect(xml).toContain('CONTEXT BRACKET: [FRESH]'); + expect(xml).toContain('85.0% remaining'); + }); + + test('should include CONSTITUTION section', () => { + const results = [makeResult('constitution', ['ART.I: CLI First', 'ART.II: Agent Authority'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[CONSTITUTION] (NON-NEGOTIABLE)'); + expect(xml).toContain('ART.I: CLI First'); + expect(xml).toContain('ART.II: Agent Authority'); + }); + + test('should include AGENT section with metadata', () => { + const results = [ + makeResult('agent', ['Follow coding standards'], { + agentId: 'dev', + domain: 'development', + authority: ['code implementation', 'testing'], + }), + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 2000, false); + 
expect(xml).toContain('[ACTIVE AGENT: @dev]'); + expect(xml).toContain('DOMAIN: development'); + expect(xml).toContain('AUTHORITY BOUNDARIES:'); + expect(xml).toContain('- code implementation'); + }); + + test('should include WORKFLOW section with phase', () => { + const results = [ + makeResult('workflow', ['Execute task sequentially'], { + workflowId: 'story-dev-cycle', + phase: 'implement', + }), + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[ACTIVE WORKFLOW: story-dev-cycle]'); + expect(xml).toContain('PHASE: implement'); + }); + + test('should include TASK section', () => { + const results = [ + makeResult('task', ['Complete Task 5 unit tests'], { + taskId: 'SYN-6-T5', + storyId: 'SYN-6', + }), + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[TASK CONTEXT]'); + expect(xml).toContain('Active Task: SYN-6-T5'); + expect(xml).toContain('Story: SYN-6'); + }); + + test('should include SQUAD section', () => { + const results = [ + makeResult('squad', ['Use design tokens'], { squadName: 'design-system' }), + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[SQUAD: design-system]'); + expect(xml).toContain('Use design tokens'); + }); + + test('should include KEYWORD section with matches', () => { + const results = [ + makeResult('keyword', ['Use Supabase RLS'], { + matches: [{ keyword: 'supabase', domain: 'data-engineer', reason: 'keyword match' }], + }), + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[KEYWORD MATCHES]'); + expect(xml).toContain('"supabase" matched data-engineer'); + }); + + test('should include STAR-COMMANDS section', () => { + const results = [ + makeResult('star-command', ['Execute 
build loop'], { command: 'build-autonomous' }), + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[STAR-COMMANDS]'); + expect(xml).toContain('[*build-autonomous] COMMAND:'); + expect(xml).toContain('============================================================'); + }); + + test('should include SUMMARY section', () => { + const results = [ + makeResult('constitution', ['Rule 1'], { activationReason: 'always active' }), + ]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[LOADED DOMAINS SUMMARY]'); + expect(xml).toContain('LOADED DOMAINS:'); + }); + + test('should skip sections with empty rules', () => { + const results = [ + { rules: [], metadata: { source: 'agent', layer: 2 } }, + makeResult('constitution', ['Rule 1']), + ]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).not.toContain('[ACTIVE AGENT:'); + expect(xml).toContain('[CONSTITUTION]'); + }); + + test('should skip results with null rules', () => { + const results = [ + { rules: null, metadata: { source: 'agent', layer: 2 } }, + makeResult('constitution', ['Rule 1']), + ]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).not.toContain('[ACTIVE AGENT:'); + }); + + describe('section ordering', () => { + test('should place CONTEXT_BRACKET before CONSTITUTION', () => { + const results = [ + makeResult('constitution', ['Rule 1']), + makeResult('global', ['Global rule']), + ]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 2000, false); + const bracketIdx = xml.indexOf('[CONTEXT BRACKET]'); + const constIdx = xml.indexOf('[CONSTITUTION]'); + expect(bracketIdx).toBeLessThan(constIdx); + }); + + test('should place CONSTITUTION before AGENT', () => { + 
const results = [ + makeResult('constitution', ['Rule 1']), + makeResult('agent', ['Agent rule'], { agentId: 'dev' }), + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 2000, false); + const constIdx = xml.indexOf('[CONSTITUTION]'); + const agentIdx = xml.indexOf('[ACTIVE AGENT:'); + expect(constIdx).toBeLessThan(agentIdx); + }); + + test('should place SUMMARY after all other sections', () => { + const results = [ + makeResult('constitution', ['Rule 1']), + makeResult('agent', ['Agent rule'], { agentId: 'dev' }), + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 2000, false); + const summaryIdx = xml.indexOf('[LOADED DOMAINS SUMMARY]'); + const agentIdx = xml.indexOf('[ACTIVE AGENT:'); + expect(summaryIdx).toBeGreaterThan(agentIdx); + }); + }); + + describe('DEVMODE', () => { + const devMetrics = { + total_ms: 42, + layers_loaded: 3, + layers_skipped: 5, + layers_errored: 0, + total_rules: 10, + per_layer: { + constitution: { status: 'ok', rules: 6, duration: 2, layer: 0 }, + global: { status: 'ok', rules: 2, duration: 3, layer: 1 }, + agent: { status: 'ok', rules: 2, duration: 5, layer: 2 }, + workflow: { status: 'skipped', reason: 'Not active in FRESH' }, + }, + }; + + test('should include DEVMODE section when devmode=true', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, true, devMetrics, 2000, false); + expect(xml).toContain('[DEVMODE STATUS]'); + expect(xml).toContain('SYNAPSE DEVMODE'); + }); + + test('should NOT include DEVMODE section when devmode=false', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, devMetrics, 2000, false); + expect(xml).not.toContain('[DEVMODE STATUS]'); + }); + + test('should show bracket info in DEVMODE', () => { + const results = 
[makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, true, devMetrics, 2000, false); + expect(xml).toContain('Bracket: [FRESH]'); + expect(xml).toContain('85.0% remaining'); + }); + + test('should show pipeline metrics in DEVMODE', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, true, devMetrics, 2000, false); + expect(xml).toContain('Pipeline Metrics:'); + expect(xml).toContain('Total: 42ms'); + }); + + test('should show loaded layers in DEVMODE', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, true, devMetrics, 2000, false); + expect(xml).toContain('Layers Loaded:'); + expect(xml).toContain('CONSTITUTION'); + }); + + test('should show skipped layers in DEVMODE', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, true, devMetrics, 2000, false); + expect(xml).toContain('Layers Skipped:'); + expect(xml).toContain('WORKFLOW'); + }); + }); + + describe('handoff warning', () => { + test('should include handoff warning when showHandoffWarning=true', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'CRITICAL', 15, defaultSession, false, defaultMetrics, 2500, true); + expect(xml).toContain('[HANDOFF WARNING]'); + expect(xml).toContain('Context is nearly exhausted'); + }); + + test('should NOT include handoff warning when showHandoffWarning=false', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 800, false); + expect(xml).not.toContain('[HANDOFF WARNING]'); + }); + }); + + describe('token budget enforcement in formatSynapseRules', () => { + test('should enforce token budget on 
final output', () => { + // Create results that produce a lot of output + const results = [ + makeResult('constitution', ['Rule 1', 'Rule 2', 'Rule 3']), + makeResult('agent', ['Agent rule 1', 'Agent rule 2'], { agentId: 'dev' }), + makeResult('keyword', ['Keyword rule'], { matches: [{ keyword: 'test', domain: 'dev' }] }), + ]; + // Very small budget — should trigger truncation + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 10, false); + // Output should still be valid XML (wrapped) + expect(xml).toContain('<synapse-rules'); + }); + }); + + describe('MEMORY_HINTS section (SYN-10)', () => { + test('should include [MEMORY HINTS] section when memory results present', () => { + const results = [ + makeResult('constitution', ['Rule 1']), + { + rules: [ + { content: 'Use absolute imports', source: 'procedural', relevance: 0.9, tokens: 5 }, + { content: 'Avoid any type', source: 'semantic', relevance: 0.7, tokens: 4 }, + ], + metadata: { source: 'memory', layer: 'memory' }, + }, + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[MEMORY HINTS]'); + expect(xml).toContain('[procedural] (relevance: 90%) Use absolute imports'); + expect(xml).toContain('[semantic] (relevance: 70%) Avoid any type'); + }); + + test('should NOT include [MEMORY HINTS] section when no memory results', () => { + const results = [makeResult('constitution', ['Rule 1'])]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).not.toContain('[MEMORY HINTS]'); + }); + + test('should handle memory hints with missing fields gracefully', () => { + const results = [ + makeResult('constitution', ['Rule 1']), + { + rules: [ + { content: 'Some hint', tokens: 3 }, + ], + metadata: { source: 'memory', layer: 'memory' }, + }, + ]; + const xml = formatSynapseRules(results, 'MODERATE', 50, defaultSession, false, defaultMetrics, 
2000, false); + expect(xml).toContain('[MEMORY HINTS]'); + expect(xml).toContain('[memory] (relevance: ?%) Some hint'); + }); + }); + + describe('global/context layer fallback', () => { + test('should categorize layer by layer number when source unknown', () => { + const results = [ + { rules: ['Fallback rule'], metadata: { layer: 0 } }, + ]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[CONSTITUTION]'); + expect(xml).toContain('Fallback rule'); + }); + + test('should handle global results in context bracket section', () => { + const results = [ + makeResult('global', ['Global context rule']), + ]; + const xml = formatSynapseRules(results, 'FRESH', 85, defaultSession, false, defaultMetrics, 2000, false); + expect(xml).toContain('[CONTEXT BRACKET]'); + expect(xml).toContain('CONTEXT RULES:'); + expect(xml).toContain('Global context rule'); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/context-tracker.test.js +================================================== +```js +/** + * Tests for SYNAPSE Context Bracket Tracker + * + * @module tests/synapse/context-tracker + * @story SYN-3 - Context Bracket Tracker + */ + +const { + calculateBracket, + estimateContextPercent, + getTokenBudget, + getActiveLayers, + needsHandoffWarning, + needsMemoryHints, + BRACKETS, + TOKEN_BUDGETS, + DEFAULTS, +} = require('../../.aios-core/core/synapse/context/context-tracker'); + +// ============================================================================= +// calculateBracket +// ============================================================================= + +describe('calculateBracket', () => { + // --- FRESH bracket (>= 60%) --- + test('should return FRESH for 100% context remaining', () => { + expect(calculateBracket(100)).toBe('FRESH'); + }); + + test('should return FRESH for exactly 60% (lower boundary)', () => { + 
expect(calculateBracket(60)).toBe('FRESH'); + }); + + test('should return FRESH for 60.01%', () => { + expect(calculateBracket(60.01)).toBe('FRESH'); + }); + + test('should return FRESH for 75%', () => { + expect(calculateBracket(75)).toBe('FRESH'); + }); + + // --- MODERATE bracket (>= 40%, < 60%) --- + test('should return MODERATE for 59.99% (just below FRESH)', () => { + expect(calculateBracket(59.99)).toBe('MODERATE'); + }); + + test('should return MODERATE for exactly 40% (lower boundary)', () => { + expect(calculateBracket(40)).toBe('MODERATE'); + }); + + test('should return MODERATE for 50%', () => { + expect(calculateBracket(50)).toBe('MODERATE'); + }); + + // --- DEPLETED bracket (>= 25%, < 40%) --- + test('should return DEPLETED for 39.99% (just below MODERATE)', () => { + expect(calculateBracket(39.99)).toBe('DEPLETED'); + }); + + test('should return DEPLETED for exactly 25% (lower boundary)', () => { + expect(calculateBracket(25)).toBe('DEPLETED'); + }); + + test('should return DEPLETED for 25.01%', () => { + expect(calculateBracket(25.01)).toBe('DEPLETED'); + }); + + test('should return DEPLETED for 30%', () => { + expect(calculateBracket(30)).toBe('DEPLETED'); + }); + + // --- CRITICAL bracket (< 25%) --- + test('should return CRITICAL for 24.99% (just below DEPLETED)', () => { + expect(calculateBracket(24.99)).toBe('CRITICAL'); + }); + + test('should return CRITICAL for 0%', () => { + expect(calculateBracket(0)).toBe('CRITICAL'); + }); + + test('should return CRITICAL for 10%', () => { + expect(calculateBracket(10)).toBe('CRITICAL'); + }); + + // --- Edge cases --- + test('should return FRESH for values above 100%', () => { + expect(calculateBracket(150)).toBe('FRESH'); + }); + + test('should return CRITICAL for negative values', () => { + expect(calculateBracket(-10)).toBe('CRITICAL'); + }); + + test('should return CRITICAL for NaN', () => { + expect(calculateBracket(NaN)).toBe('CRITICAL'); + }); + + test('should return CRITICAL for non-number 
input', () => { + expect(calculateBracket('fifty')).toBe('CRITICAL'); + expect(calculateBracket(undefined)).toBe('CRITICAL'); + expect(calculateBracket(null)).toBe('CRITICAL'); + }); +}); + +// ============================================================================= +// estimateContextPercent +// ============================================================================= + +describe('estimateContextPercent', () => { + test('should return 100% for 0 prompts', () => { + expect(estimateContextPercent(0)).toBe(100); + }); + + test('should return 98.5% for 2 prompts with defaults (2*1500/200000)', () => { + // 100 - (2 * 1500 / 200000 * 100) = 100 - 1.5 = 98.5 + expect(estimateContextPercent(2)).toBeCloseTo(98.5, 5); + }); + + test('should return 77.5% for 30 prompts with defaults', () => { + // 100 - (30 * 1500 / 200000 * 100) = 100 - 22.5 = 77.5 + expect(estimateContextPercent(30)).toBeCloseTo(77.5, 5); + }); + + test('should return 25% for 100 prompts with defaults', () => { + // 100 - (100 * 1500 / 200000 * 100) = 100 - 75 = 25 + expect(estimateContextPercent(100)).toBeCloseTo(25, 5); + }); + + test('should clamp to 0% when tokens exceed max context', () => { + // 200 prompts * 1500 = 300000 > 200000 + expect(estimateContextPercent(200)).toBe(0); + }); + + test('should support custom avgTokensPerPrompt', () => { + // 100 - (10 * 2000 / 200000 * 100) = 100 - 10 = 90 + expect(estimateContextPercent(10, { avgTokensPerPrompt: 2000 })).toBeCloseTo(90, 5); + }); + + test('should support custom maxContext', () => { + // 100 - (10 * 1500 / 100000 * 100) = 100 - 15 = 85 + expect(estimateContextPercent(10, { maxContext: 100000 })).toBeCloseTo(85, 5); + }); + + test('should support both custom options', () => { + // 100 - (5 * 1000 / 50000 * 100) = 100 - 10 = 90 + expect(estimateContextPercent(5, { + avgTokensPerPrompt: 1000, + maxContext: 50000, + })).toBeCloseTo(90, 5); + }); + + test('should return 100% for negative promptCount (graceful)', () => { + 
expect(estimateContextPercent(-5)).toBe(100); + }); + + test('should return 100% for NaN promptCount (graceful)', () => { + expect(estimateContextPercent(NaN)).toBe(100); + }); + + test('should return 100% for non-number promptCount (graceful)', () => { + expect(estimateContextPercent('abc')).toBe(100); + expect(estimateContextPercent(undefined)).toBe(100); + }); + + test('should return 0% when maxContext is 0 or negative', () => { + expect(estimateContextPercent(5, { maxContext: 0 })).toBe(0); + expect(estimateContextPercent(5, { maxContext: -100 })).toBe(0); + }); + + test('should return exactly 1 prompt worth of usage', () => { + // 100 - (1 * 1500 / 200000 * 100) = 100 - 0.75 = 99.25 + expect(estimateContextPercent(1)).toBeCloseTo(99.25, 5); + }); +}); + +// ============================================================================= +// getTokenBudget +// ============================================================================= + +describe('getTokenBudget', () => { + test('should return 800 for FRESH', () => { + expect(getTokenBudget('FRESH')).toBe(800); + }); + + test('should return 1500 for MODERATE', () => { + expect(getTokenBudget('MODERATE')).toBe(1500); + }); + + test('should return 2000 for DEPLETED', () => { + expect(getTokenBudget('DEPLETED')).toBe(2000); + }); + + test('should return 2500 for CRITICAL', () => { + expect(getTokenBudget('CRITICAL')).toBe(2500); + }); + + test('should return null for invalid bracket', () => { + expect(getTokenBudget('INVALID')).toBeNull(); + expect(getTokenBudget('')).toBeNull(); + expect(getTokenBudget('fresh')).toBeNull(); + }); +}); + +// ============================================================================= +// getActiveLayers +// ============================================================================= + +describe('getActiveLayers', () => { + test('should return L0, L1, L2, L7 for FRESH', () => { + const result = getActiveLayers('FRESH'); + expect(result).toEqual({ + layers: [0, 1, 2, 7], + 
memoryHints: false, + handoffWarning: false, + }); + }); + + test('should return all layers for MODERATE', () => { + const result = getActiveLayers('MODERATE'); + expect(result).toEqual({ + layers: [0, 1, 2, 3, 4, 5, 6, 7], + memoryHints: false, + handoffWarning: false, + }); + }); + + test('should return all layers + memory hints for DEPLETED', () => { + const result = getActiveLayers('DEPLETED'); + expect(result).toEqual({ + layers: [0, 1, 2, 3, 4, 5, 6, 7], + memoryHints: true, + handoffWarning: false, + }); + }); + + test('should return all layers + memory + handoff for CRITICAL', () => { + const result = getActiveLayers('CRITICAL'); + expect(result).toEqual({ + layers: [0, 1, 2, 3, 4, 5, 6, 7], + memoryHints: true, + handoffWarning: true, + }); + }); + + test('should return null for invalid bracket (graceful degradation)', () => { + expect(getActiveLayers('INVALID')).toBeNull(); + expect(getActiveLayers('')).toBeNull(); + expect(getActiveLayers(null)).toBeNull(); + expect(getActiveLayers(undefined)).toBeNull(); + }); + + test('should return a copy (not mutate internal config)', () => { + const result1 = getActiveLayers('FRESH'); + result1.layers.push(99); + result1.memoryHints = true; + + const result2 = getActiveLayers('FRESH'); + expect(result2.layers).toEqual([0, 1, 2, 7]); + expect(result2.memoryHints).toBe(false); + }); +}); + +// ============================================================================= +// needsHandoffWarning +// ============================================================================= + +describe('needsHandoffWarning', () => { + test('should return false for FRESH', () => { + expect(needsHandoffWarning('FRESH')).toBe(false); + }); + + test('should return false for MODERATE', () => { + expect(needsHandoffWarning('MODERATE')).toBe(false); + }); + + test('should return false for DEPLETED', () => { + expect(needsHandoffWarning('DEPLETED')).toBe(false); + }); + + test('should return true for CRITICAL', () => { + 
expect(needsHandoffWarning('CRITICAL')).toBe(true); + }); + + test('should return false for invalid bracket', () => { + expect(needsHandoffWarning('INVALID')).toBe(false); + }); +}); + +// ============================================================================= +// needsMemoryHints +// ============================================================================= + +describe('needsMemoryHints', () => { + test('should return false for FRESH', () => { + expect(needsMemoryHints('FRESH')).toBe(false); + }); + + test('should return false for MODERATE', () => { + expect(needsMemoryHints('MODERATE')).toBe(false); + }); + + test('should return true for DEPLETED', () => { + expect(needsMemoryHints('DEPLETED')).toBe(true); + }); + + test('should return true for CRITICAL', () => { + expect(needsMemoryHints('CRITICAL')).toBe(true); + }); + + test('should return false for invalid bracket', () => { + expect(needsMemoryHints('INVALID')).toBe(false); + }); +}); + +// ============================================================================= +// Constants +// ============================================================================= + +describe('BRACKETS constant', () => { + test('should have exactly 4 brackets', () => { + expect(Object.keys(BRACKETS)).toHaveLength(4); + }); + + test('should have correct structure for each bracket', () => { + for (const [name, config] of Object.entries(BRACKETS)) { + expect(config).toHaveProperty('min'); + expect(config).toHaveProperty('max'); + expect(config).toHaveProperty('tokenBudget'); + expect(typeof config.min).toBe('number'); + expect(typeof config.max).toBe('number'); + expect(typeof config.tokenBudget).toBe('number'); + } + }); + + test('should match DESIGN doc thresholds exactly', () => { + expect(BRACKETS.FRESH).toEqual({ min: 60, max: 100, tokenBudget: 800 }); + expect(BRACKETS.MODERATE).toEqual({ min: 40, max: 60, tokenBudget: 1500 }); + expect(BRACKETS.DEPLETED).toEqual({ min: 25, max: 40, tokenBudget: 2000 }); + 
expect(BRACKETS.CRITICAL).toEqual({ min: 0, max: 25, tokenBudget: 2500 }); + }); +}); + +describe('TOKEN_BUDGETS constant', () => { + test('should have exactly 4 entries', () => { + expect(Object.keys(TOKEN_BUDGETS)).toHaveLength(4); + }); + + test('should match BRACKETS tokenBudget values', () => { + for (const [name, budget] of Object.entries(TOKEN_BUDGETS)) { + expect(budget).toBe(BRACKETS[name].tokenBudget); + } + }); +}); + +describe('DEFAULTS constant', () => { + test('should have avgTokensPerPrompt = 1500', () => { + expect(DEFAULTS.avgTokensPerPrompt).toBe(1500); + }); + + test('should have maxContext = 200000', () => { + expect(DEFAULTS.maxContext).toBe(200000); + }); +}); + +// ============================================================================= +// Integration: estimateContextPercent + calculateBracket +// ============================================================================= + +describe('integration: estimate → bracket pipeline', () => { + test('should be FRESH at session start (0 prompts)', () => { + const percent = estimateContextPercent(0); + expect(calculateBracket(percent)).toBe('FRESH'); + }); + + test('should be FRESH at 30 prompts (77.5%)', () => { + const percent = estimateContextPercent(30); + expect(calculateBracket(percent)).toBe('FRESH'); + }); + + test('should be MODERATE at 60 prompts (55%)', () => { + // 100 - (60 * 1500 / 200000 * 100) = 100 - 45 = 55 + const percent = estimateContextPercent(60); + expect(calculateBracket(percent)).toBe('MODERATE'); + }); + + test('should be DEPLETED at 100 prompts (25%)', () => { + const percent = estimateContextPercent(100); + expect(calculateBracket(percent)).toBe('DEPLETED'); + }); + + test('should be CRITICAL at 120 prompts (10%)', () => { + // 100 - (120 * 1500 / 200000 * 100) = 100 - 90 = 10 + const percent = estimateContextPercent(120); + expect(calculateBracket(percent)).toBe('CRITICAL'); + }); + + test('should be CRITICAL when context is fully used', () => { + const percent = 
estimateContextPercent(200); + expect(calculateBracket(percent)).toBe('CRITICAL'); + expect(percent).toBe(0); + }); +}); + +// ============================================================================= +// AC8: Zero External Dependencies +// ============================================================================= + +describe('AC8: zero external dependencies', () => { + test('module source should not contain require statements', () => { + const fs = require('fs'); + const path = require('path'); + const source = fs.readFileSync( + path.join(__dirname, '../../.aios-core/core/synapse/context/context-tracker.js'), + 'utf8', + ); + // Should not have any require() calls (only module.exports) + const requireMatches = source.match(/\brequire\s*\(/g); + expect(requireMatches).toBeNull(); + }); +}); + +``` + +================================================== +📄 tests/synapse/l1-global.test.js +================================================== +```js +/** + * L1 Global Processor Tests + * + * Tests for dual domain file loading, rule combining, + * partial missing files, and ALWAYS_ON behavior. 
+ * + * @story SYN-4 - Layer Processors L0-L3 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const LayerProcessor = require('../../.aios-core/core/synapse/layers/layer-processor'); +const L1GlobalProcessor = require('../../.aios-core/core/synapse/layers/l1-global'); + +jest.setTimeout(30000); + +function createTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-l1-test-')); +} + +function cleanupTempDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +describe('L1GlobalProcessor', () => { + let tempDir; + let processor; + + beforeEach(() => { + tempDir = createTempDir(); + processor = new L1GlobalProcessor(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + describe('constructor', () => { + test('should extend LayerProcessor', () => { + expect(processor).toBeInstanceOf(LayerProcessor); + }); + + test('should set name to global', () => { + expect(processor.name).toBe('global'); + }); + + test('should set layer to 1', () => { + expect(processor.layer).toBe(1); + }); + + test('should set timeout to 10ms', () => { + expect(processor.timeout).toBe(10); + }); + }); + + describe('process()', () => { + test('should load and combine both global and context rules', () => { + fs.writeFileSync(path.join(tempDir, 'global'), 'GLOBAL_RULE_1=Use TypeScript\nGLOBAL_RULE_2=Use ESLint\n'); + fs.writeFileSync(path.join(tempDir, 'context'), 'CONTEXT_RULE_1=FRESH bracket: lean injection\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + GLOBAL: { state: 'active', alwaysOn: true, file: 'global' }, + CONTEXT: { state: 'active', alwaysOn: true, file: 'context' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(3); + expect(result.rules[0]).toContain('Use TypeScript'); + expect(result.rules[1]).toContain('Use ESLint'); 
+ expect(result.rules[2]).toContain('FRESH bracket'); + expect(result.metadata.layer).toBe(1); + expect(result.metadata.sources).toEqual(['global', 'context']); + }); + + test('should return rules from global only when context file is missing', () => { + fs.writeFileSync(path.join(tempDir, 'global'), 'RULE_1=Global only rule\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + GLOBAL: { state: 'active', file: 'global' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(1); + expect(result.rules[0]).toContain('Global only rule'); + expect(result.metadata.sources).toEqual(['global']); + }); + + test('should return rules from context only when global file is missing', () => { + fs.writeFileSync(path.join(tempDir, 'context'), 'RULE_1=Context only rule\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + CONTEXT: { state: 'active', file: 'context' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(1); + expect(result.rules[0]).toContain('Context only rule'); + expect(result.metadata.sources).toEqual(['context']); + }); + + test('should return null when both files are missing', () => { + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should order global rules before context rules', () => { + fs.writeFileSync(path.join(tempDir, 'global'), 'FIRST=Global comes first\n'); + fs.writeFileSync(path.join(tempDir, 'context'), 'SECOND=Context comes second\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: 
tempDir, + manifest: { + domains: { + GLOBAL: { state: 'active', file: 'global' }, + CONTEXT: { state: 'active', file: 'context' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result.rules[0]).toContain('Global comes first'); + expect(result.rules[1]).toContain('Context comes second'); + }); + + test('should use default file paths when domain has no file property', () => { + fs.writeFileSync(path.join(tempDir, 'global'), 'RULE=Default global\n'); + fs.writeFileSync(path.join(tempDir, 'context'), 'RULE=Default context\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + GLOBAL: { state: 'active' }, + CONTEXT: { state: 'active' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(2); + }); + + test('should process regardless of session state (ALWAYS_ON)', () => { + fs.writeFileSync(path.join(tempDir, 'global'), 'RULE=Always active\n'); + + const context = { + prompt: '', + session: { active_agent: { id: null }, active_workflow: null }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + GLOBAL: { state: 'active', file: 'global' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).not.toBeNull(); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/session-manager.test.js +================================================== +```js +/** + * SYNAPSE Session Manager — Unit Tests + * + * Tests for session CRUD, stale cleanup, auto-title, gitignore, + * session continuity, and error handling. 
+ * + * @story SYN-2 - Session Manager + * @coverage Target: >90% for session-manager.js + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); + +const { + createSession, + loadSession, + updateSession, + deleteSession, + cleanStaleSessions, + generateTitle, + ensureGitignore, + SCHEMA_VERSION, +} = require('../../.aios-core/core/synapse/session/session-manager'); + +let tmpDir; +let sessionsDir; +let synapsePath; + +beforeEach(() => { + tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-test-')); + synapsePath = path.join(tmpDir, '.synapse'); + sessionsDir = path.join(synapsePath, 'sessions'); +}); + +afterEach(() => { + fs.rmSync(tmpDir, { recursive: true, force: true }); +}); + +// ============================================================ +// 1. Session CRUD Operations (AC: 1) +// ============================================================ + +describe('Session CRUD', () => { + test('createSession creates a valid session file with schema v2.0', () => { + const session = createSession('test-uuid-001', tmpDir, sessionsDir); + + expect(session).toBeDefined(); + expect(session.uuid).toBe('test-uuid-001'); + expect(session.schema_version).toBe('2.0'); + expect(session.cwd).toBe(tmpDir); + expect(session.label).toBe(path.basename(tmpDir)); + expect(session.title).toBeNull(); + expect(session.prompt_count).toBe(0); + + // Verify file exists on disk + const filePath = path.join(sessionsDir, 'test-uuid-001.json'); + expect(fs.existsSync(filePath)).toBe(true); + }); + + test('createSession auto-creates sessions directory if missing', () => { + expect(fs.existsSync(sessionsDir)).toBe(false); + + createSession('test-uuid-002', tmpDir, sessionsDir); + + expect(fs.existsSync(sessionsDir)).toBe(true); + }); + + test('loadSession returns session object for existing session', () => { + createSession('test-uuid-003', tmpDir, sessionsDir); + + const session = loadSession('test-uuid-003', sessionsDir); + + 
expect(session).not.toBeNull(); + expect(session.uuid).toBe('test-uuid-003'); + expect(session.schema_version).toBe('2.0'); + }); + + test('loadSession returns null for non-existent session', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + + const session = loadSession('non-existent', sessionsDir); + + expect(session).toBeNull(); + }); + + test('updateSession merges partial updates and increments prompt_count', () => { + createSession('test-uuid-004', tmpDir, sessionsDir); + + const updated = updateSession('test-uuid-004', sessionsDir, { + active_agent: { id: 'dev', activated_at: '2026-02-10T10:00:00Z', activation_quality: 'full' }, + }); + + expect(updated).not.toBeNull(); + expect(updated.prompt_count).toBe(1); + expect(updated.active_agent.id).toBe('dev'); + expect(updated.active_agent.activation_quality).toBe('full'); + }); + + test('updateSession returns null for non-existent session', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + + const result = updateSession('non-existent', sessionsDir, { title: 'test' }); + + expect(result).toBeNull(); + }); + + test('deleteSession removes session file and returns true', () => { + createSession('test-uuid-005', tmpDir, sessionsDir); + const filePath = path.join(sessionsDir, 'test-uuid-005.json'); + + expect(fs.existsSync(filePath)).toBe(true); + + const result = deleteSession('test-uuid-005', sessionsDir); + + expect(result).toBe(true); + expect(fs.existsSync(filePath)).toBe(false); + }); + + test('deleteSession returns false for non-existent session', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + + const result = deleteSession('non-existent', sessionsDir); + + expect(result).toBe(false); + }); +}); + +// ============================================================ +// 2. 
Schema v2.0 Compliance (AC: 2) +// ============================================================ + +describe('Schema v2.0 Compliance', () => { + test('session contains all required schema v2.0 fields', () => { + const session = createSession('schema-test', tmpDir, sessionsDir); + + // Core fields + expect(session).toHaveProperty('uuid'); + expect(session).toHaveProperty('schema_version', '2.0'); + expect(session).toHaveProperty('started'); + expect(session).toHaveProperty('last_activity'); + expect(session).toHaveProperty('cwd'); + expect(session).toHaveProperty('label'); + expect(session).toHaveProperty('title'); + expect(session).toHaveProperty('prompt_count'); + + // State fields + expect(session).toHaveProperty('active_agent'); + expect(session.active_agent).toEqual({ + id: null, + activated_at: null, + activation_quality: null, + }); + expect(session).toHaveProperty('active_workflow', null); + expect(session).toHaveProperty('active_squad', null); + expect(session).toHaveProperty('active_task', null); + + // Context fields + expect(session).toHaveProperty('context'); + expect(session.context).toEqual({ + last_bracket: 'FRESH', + last_tokens_used: 0, + last_context_percent: 100, + }); + expect(session).toHaveProperty('overrides'); + expect(session.overrides).toEqual({}); + expect(session).toHaveProperty('history'); + expect(session.history).toEqual({ + star_commands_used: [], + domains_loaded_last: [], + agents_activated: [], + }); + }); + + test('loadSession rejects sessions with wrong schema_version', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + const filePath = path.join(sessionsDir, 'bad-schema.json'); + const badSession = { uuid: 'bad-schema', schema_version: '1.0', started: new Date().toISOString() }; + fs.writeFileSync(filePath, JSON.stringify(badSession), 'utf8'); + + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {}); + + const result = loadSession('bad-schema', sessionsDir); + + expect(result).toBeNull(); + 
expect(warnSpy).toHaveBeenCalledWith( + expect.stringContaining('schema_version "1.0"'), + ); + + warnSpy.mockRestore(); + }); +}); + +// ============================================================ +// 3. Stale Session Cleanup (AC: 3) +// ============================================================ + +describe('Stale Session Cleanup', () => { + test('cleanStaleSessions removes sessions older than maxAgeHours', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + + // Create a stale session (25 hours old) + const staleSession = { + uuid: 'stale-001', + schema_version: '2.0', + started: new Date(Date.now() - 26 * 60 * 60 * 1000).toISOString(), + last_activity: new Date(Date.now() - 25 * 60 * 60 * 1000).toISOString(), + prompt_count: 5, + }; + fs.writeFileSync( + path.join(sessionsDir, 'stale-001.json'), + JSON.stringify(staleSession), + 'utf8', + ); + + // Create a recent session (1 hour old) + const recentSession = { + uuid: 'recent-001', + schema_version: '2.0', + started: new Date(Date.now() - 2 * 60 * 60 * 1000).toISOString(), + last_activity: new Date(Date.now() - 1 * 60 * 60 * 1000).toISOString(), + prompt_count: 3, + }; + fs.writeFileSync( + path.join(sessionsDir, 'recent-001.json'), + JSON.stringify(recentSession), + 'utf8', + ); + + const removed = cleanStaleSessions(sessionsDir, 24); + + expect(removed).toBe(1); + expect(fs.existsSync(path.join(sessionsDir, 'stale-001.json'))).toBe(false); + expect(fs.existsSync(path.join(sessionsDir, 'recent-001.json'))).toBe(true); + }); + + test('cleanStaleSessions returns 0 when no stale sessions exist', () => { + createSession('fresh-001', tmpDir, sessionsDir); + + const removed = cleanStaleSessions(sessionsDir, 24); + + expect(removed).toBe(0); + }); + + test('cleanStaleSessions creates directory if it does not exist', () => { + expect(fs.existsSync(sessionsDir)).toBe(false); + + const removed = cleanStaleSessions(sessionsDir, 24); + + expect(removed).toBe(0); + expect(fs.existsSync(sessionsDir)).toBe(true); + 
}); + + test('cleanStaleSessions skips corrupted JSON files', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + fs.writeFileSync(path.join(sessionsDir, 'corrupted.json'), '{invalid json', 'utf8'); + + const removed = cleanStaleSessions(sessionsDir, 24); + + expect(removed).toBe(0); + // Corrupted file should still exist (not deleted by cleanup) + expect(fs.existsSync(path.join(sessionsDir, 'corrupted.json'))).toBe(true); + }); +}); + +// ============================================================ +// 4. Auto-Title Generation (AC: 4) +// ============================================================ + +describe('Auto-Title Generation', () => { + test('generateTitle extracts meaningful title from prompt', () => { + const title = generateTitle('Implement user authentication for the dashboard'); + expect(title).toBe('Implement user authentication for the dashboard'); + }); + + test('generateTitle truncates to max 50 chars at word boundary', () => { + const longPrompt = + 'This is a very long prompt that should be truncated to fit within the maximum title length of fifty characters'; + const title = generateTitle(longPrompt); + + expect(title.length).toBeLessThanOrEqual(50); + expect(title).not.toMatch(/\s$/); // No trailing space + }); + + test('generateTitle returns null for *command prompts', () => { + expect(generateTitle('*help')).toBeNull(); + expect(generateTitle('*develop story-1')).toBeNull(); + }); + + test('generateTitle returns null for single-word prompts', () => { + expect(generateTitle('hello')).toBeNull(); + expect(generateTitle('test')).toBeNull(); + }); + + test('generateTitle returns null for null/empty input', () => { + expect(generateTitle(null)).toBeNull(); + expect(generateTitle('')).toBeNull(); + expect(generateTitle(undefined)).toBeNull(); + }); + + test('title is set-once (never overwritten via updateSession)', () => { + createSession('title-test', tmpDir, sessionsDir); + + // Set title first time + updateSession('title-test', 
sessionsDir, { title: 'First Title' }); + let session = loadSession('title-test', sessionsDir); + expect(session.title).toBe('First Title'); + + // Attempt to overwrite (updateSession does merge — caller should check) + // This verifies the mechanism works; set-once logic is in the consumer + updateSession('title-test', sessionsDir, { title: 'Second Title' }); + session = loadSession('title-test', sessionsDir); + expect(session.title).toBe('Second Title'); // updateSession merges as requested + }); +}); + +// ============================================================ +// 5. Gitignore (AC: 5) +// ============================================================ + +describe('Gitignore', () => { + test('createSession auto-creates .synapse/.gitignore', () => { + createSession('gitignore-test', tmpDir, sessionsDir); + + const gitignorePath = path.join(synapsePath, '.gitignore'); + expect(fs.existsSync(gitignorePath)).toBe(true); + + const content = fs.readFileSync(gitignorePath, 'utf8'); + expect(content).toContain('sessions/'); + expect(content).toContain('cache/'); + }); + + test('ensureGitignore does not overwrite existing .gitignore', () => { + fs.mkdirSync(synapsePath, { recursive: true }); + const gitignorePath = path.join(synapsePath, '.gitignore'); + fs.writeFileSync(gitignorePath, 'custom-content\n', 'utf8'); + + ensureGitignore(synapsePath); + + const content = fs.readFileSync(gitignorePath, 'utf8'); + expect(content).toBe('custom-content\n'); + }); +}); + +// ============================================================ +// 6. 
Session Continuity (AC: 6) +// ============================================================ + +describe('Session Continuity', () => { + test('state persists correctly across 10 consecutive updates without drift', () => { + createSession('continuity-test', tmpDir, sessionsDir); + + // Perform 10 consecutive updates with different fields + for (let i = 1; i <= 10; i++) { + updateSession('continuity-test', sessionsDir, { + context: { last_bracket: i <= 5 ? 'FRESH' : 'MODERATE', last_tokens_used: i * 1000 }, + overrides: { [`DOMAIN_${i}`]: true }, + history: { star_commands_used: [`*cmd-${i}`] }, + }); + } + + const session = loadSession('continuity-test', sessionsDir); + + // Verify integrity after 10 updates + expect(session).not.toBeNull(); + expect(session.prompt_count).toBe(10); + expect(session.uuid).toBe('continuity-test'); + expect(session.schema_version).toBe('2.0'); + expect(session.context.last_bracket).toBe('MODERATE'); + expect(session.context.last_tokens_used).toBe(10000); + + // All 10 overrides should exist + for (let i = 1; i <= 10; i++) { + expect(session.overrides[`DOMAIN_${i}`]).toBe(true); + } + + // All 10 star commands should be in history (unique) + expect(session.history.star_commands_used).toHaveLength(10); + expect(session.history.star_commands_used).toContain('*cmd-1'); + expect(session.history.star_commands_used).toContain('*cmd-10'); + }); +}); + +// ============================================================ +// 7. 
Error Handling (AC: 7) +// ============================================================ + +describe('Error Handling', () => { + test('loadSession returns null for corrupted JSON', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + fs.writeFileSync(path.join(sessionsDir, 'corrupt.json'), '{ broken json!!!', 'utf8'); + + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {}); + + const result = loadSession('corrupt', sessionsDir); + + expect(result).toBeNull(); + expect(warnSpy).toHaveBeenCalledWith( + expect.stringContaining('Corrupted JSON'), + ); + + warnSpy.mockRestore(); + }); + + test('loadSession handles missing sessions directory gracefully', () => { + // sessionsDir does not exist yet + const result = loadSession('missing-dir', path.join(tmpDir, 'nonexistent')); + + expect(result).toBeNull(); + }); + + test('updateSession handles last_activity timestamps correctly', () => { + createSession('timestamp-test', tmpDir, sessionsDir); + + const beforeUpdate = Date.now(); + updateSession('timestamp-test', sessionsDir, { title: 'Test' }); + const afterUpdate = Date.now(); + + const session = loadSession('timestamp-test', sessionsDir); + const lastActivity = new Date(session.last_activity).getTime(); + + expect(lastActivity).toBeGreaterThanOrEqual(beforeUpdate); + expect(lastActivity).toBeLessThanOrEqual(afterUpdate); + }); +}); + +// ============================================================ +// 8. 
History Merging +// ============================================================ + +describe('History Merging', () => { + test('history arrays accumulate unique values across updates', () => { + createSession('history-test', tmpDir, sessionsDir); + + updateSession('history-test', sessionsDir, { + history: { agents_activated: ['dev'] }, + }); + updateSession('history-test', sessionsDir, { + history: { agents_activated: ['qa', 'dev'] }, // 'dev' already exists + }); + + const session = loadSession('history-test', sessionsDir); + expect(session.history.agents_activated).toEqual(['dev', 'qa']); + }); +}); + +// ============================================================ +// 9. Permission Error Handling (AC: 7) +// ============================================================ + +describe('Permission Error Handling', () => { + test('loadSession returns null and logs error on EACCES', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + const filePath = path.join(sessionsDir, 'perm-test.json'); + fs.writeFileSync(filePath, '{}', 'utf8'); + + const errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {}); + const originalReadFileSync = fs.readFileSync; + jest.spyOn(fs, 'readFileSync').mockImplementation((p, enc) => { + if (p === filePath) { + const err = new Error('EACCES'); + err.code = 'EACCES'; + throw err; + } + return originalReadFileSync(p, enc); + }); + + const result = loadSession('perm-test', sessionsDir); + + expect(result).toBeNull(); + expect(errorSpy).toHaveBeenCalledWith(expect.stringContaining('Permission denied')); + + fs.readFileSync.mockRestore(); + errorSpy.mockRestore(); + }); + + test('deleteSession returns false and logs error on EPERM', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + const filePath = path.join(sessionsDir, 'perm-del.json'); + fs.writeFileSync(filePath, '{}', 'utf8'); + + const errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {}); + jest.spyOn(fs, 'unlinkSync').mockImplementation((p) => { 
+ if (p === filePath) { + const err = new Error('EPERM'); + err.code = 'EPERM'; + throw err; + } + }); + + const result = deleteSession('perm-del', sessionsDir); + + expect(result).toBe(false); + expect(errorSpy).toHaveBeenCalledWith(expect.stringContaining('Permission denied')); + + fs.unlinkSync.mockRestore(); + errorSpy.mockRestore(); + }); +}); + +// ============================================================ +// 10. Path Traversal Protection +// ============================================================ + +describe('Path Traversal Protection', () => { + test('resolveSessionFile rejects sessionId with ..', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + + expect(() => { + createSession('../../../etc/passwd', tmpDir, sessionsDir); + }).toThrow('Invalid sessionId'); + }); + + test('resolveSessionFile rejects sessionId with forward slash', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + + expect(() => { + createSession('foo/bar', tmpDir, sessionsDir); + }).toThrow('Invalid sessionId'); + }); + + test('resolveSessionFile rejects sessionId with backslash', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + + expect(() => { + createSession('foo\\bar', tmpDir, sessionsDir); + }).toThrow('Invalid sessionId'); + }); + + test('loadSession rejects path traversal in sessionId', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + + expect(() => { + loadSession('../secret', sessionsDir); + }).toThrow('Invalid sessionId'); + }); +}); + +// ============================================================ +// 11. 
createSession Permission Error Handling +// ============================================================ + +describe('createSession Permission Error Handling', () => { + test('createSession returns null and logs error on EACCES', () => { + fs.mkdirSync(sessionsDir, { recursive: true }); + + const errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {}); + const originalWriteFileSync = fs.writeFileSync; + jest.spyOn(fs, 'writeFileSync').mockImplementation((p, data, enc) => { + if (typeof p === 'string' && p.endsWith('.json') && p.includes('perm-create')) { + const err = new Error('EACCES'); + err.code = 'EACCES'; + throw err; + } + return originalWriteFileSync(p, data, enc); + }); + + const result = createSession('perm-create', tmpDir, sessionsDir); + + expect(result).toBeNull(); + expect(errorSpy).toHaveBeenCalledWith(expect.stringContaining('Permission denied creating session')); + + fs.writeFileSync.mockRestore(); + errorSpy.mockRestore(); + }); +}); + +// ============================================================ +// 12. 
updateSession Write Permission Error Handling +// ============================================================ + +describe('updateSession Write Permission Error Handling', () => { + test('updateSession returns null and logs error on EPERM during write', () => { + createSession('perm-update', tmpDir, sessionsDir); + + const errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {}); + const originalWriteFileSync = fs.writeFileSync; + jest.spyOn(fs, 'writeFileSync').mockImplementation((p, data, enc) => { + if (typeof p === 'string' && p.includes('perm-update')) { + const err = new Error('EPERM'); + err.code = 'EPERM'; + throw err; + } + return originalWriteFileSync(p, data, enc); + }); + + const result = updateSession('perm-update', sessionsDir, { title: 'test' }); + + expect(result).toBeNull(); + expect(errorSpy).toHaveBeenCalledWith(expect.stringContaining('Permission denied writing session')); + + fs.writeFileSync.mockRestore(); + errorSpy.mockRestore(); + }); +}); + +// ============================================================ +// 13. Non-Array History Merge (was 10) +// ============================================================ + +describe('Non-Array History Merge', () => { + test('mergeHistory handles non-array values by replacing', () => { + createSession('merge-test', tmpDir, sessionsDir); + + // Set a non-array history field + updateSession('merge-test', sessionsDir, { + history: { custom_field: 'value1' }, + }); + + const session = loadSession('merge-test', sessionsDir); + expect(session.history.custom_field).toBe('value1'); + }); +}); + +// ============================================================ +// 14. 
Concurrent Access (AC: 7 — last-write-wins) +// ============================================================ + +describe('Concurrent Access', () => { + test('last write wins when updating the same session', () => { + createSession('concurrent-test', tmpDir, sessionsDir); + + // Simulate two "concurrent" writes (sequential in test, but validates overwrite behavior) + updateSession('concurrent-test', sessionsDir, { title: 'First Write' }); + updateSession('concurrent-test', sessionsDir, { title: 'Second Write' }); + + const session = loadSession('concurrent-test', sessionsDir); + expect(session.title).toBe('Second Write'); + expect(session.prompt_count).toBe(2); + }); +}); + +``` + +================================================== +📄 tests/synapse/hook-entry.test.js +================================================== +```js +/** + * SYNAPSE Hook Entry Point — Unit Tests + * + * Tests for stdin/stdout JSON protocol, silent exit, error handling, + * output format validation, and engine delegation. + * + * @module tests/synapse/hook-entry + * @story SYN-7 - Hook Entry Point + Registration + * @coverage Target: >90% for synapse-engine.cjs + */ + +const { spawn } = require('child_process'); +const path = require('path'); +const fs = require('fs'); +const os = require('os'); + +jest.setTimeout(15000); + +const HOOK_PATH = path.resolve(__dirname, '../../.claude/hooks/synapse-engine.cjs'); + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +/** + * Run the hook as a child process with given stdin data. 
+ * @param {*} stdinData - Data to pipe to stdin (stringified if object) + * @param {number} [timeout=5000] - Process timeout in ms + * @returns {Promise<{stdout: string, stderr: string, code: number}>} + */ +function runHook(stdinData, timeout = 5000) { + return new Promise((resolve) => { + const proc = spawn('node', [HOOK_PATH], { + timeout, + stdio: ['pipe', 'pipe', 'pipe'], + }); + let stdout = ''; + let stderr = ''; + proc.stdout.on('data', (d) => { stdout += d; }); + proc.stderr.on('data', (d) => { stderr += d; }); + proc.on('close', (code) => { + resolve({ stdout, stderr, code: code || 0 }); + }); + proc.on('error', () => { + resolve({ stdout, stderr, code: 1 }); + }); + if (stdinData !== undefined) { + const str = typeof stdinData === 'string' ? stdinData : JSON.stringify(stdinData); + proc.stdin.write(str); + } + proc.stdin.end(); + }); +} + +/** + * Create a temporary project directory with mock SYNAPSE modules. + * @param {object} [opts] - Options for mock behavior + * @param {boolean} [opts.noSynapse=false] - Skip creating .synapse/ + * @param {string} [opts.engineCode] - Custom engine.js code + * @param {string} [opts.sessionCode] - Custom session-manager.js code + * @returns {string} Path to temp project directory + */ +function createMockProject(opts = {}) { + const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-hook-test-')); + + if (!opts.noSynapse) { + fs.mkdirSync(path.join(tmpDir, '.synapse', 'sessions'), { recursive: true }); + } + + const engineDir = path.join(tmpDir, '.aios-core', 'core', 'synapse'); + fs.mkdirSync(engineDir, { recursive: true }); + + const engineCode = opts.engineCode || ` + class SynapseEngine { + constructor(synapsePath) { this.synapsePath = synapsePath; } + process(prompt, session) { + return { + xml: '\\nmocked output for: ' + prompt + '\\n', + metrics: { layers: 4, elapsed: 12 }, + }; + } + } + module.exports = { SynapseEngine }; + `; + fs.writeFileSync(path.join(engineDir, 'engine.js'), engineCode); + + const 
sessionDir = path.join(engineDir, 'session'); + fs.mkdirSync(sessionDir, { recursive: true }); + + const sessionCode = opts.sessionCode || ` + function loadSession(sessionId, sessionsDir) { + return { prompt_count: 5 }; + } + module.exports = { loadSession }; + `; + fs.writeFileSync(path.join(sessionDir, 'session-manager.js'), sessionCode); + + return tmpDir; +} + +/** + * Build a valid hook input object. + * @param {string} cwd - Project directory + * @param {object} [overrides] - Fields to override + * @returns {object} + */ +function buildInput(cwd, overrides = {}) { + return { + sessionId: 'test-session-001', + cwd, + prompt: 'test prompt', + ...overrides, + }; +} + +// --------------------------------------------------------------------------- +// Test Suites +// --------------------------------------------------------------------------- + +describe('SYNAPSE Hook Entry Point (synapse-engine.cjs)', () => { + let tmpDir; + + afterEach(() => { + if (tmpDir && fs.existsSync(tmpDir)) { + fs.rmSync(tmpDir, { recursive: true, force: true }); + } + }); + + // ========================================================================== + // 1. 
stdin/stdout Protocol (AC: 1)
+  // ==========================================================================
+
+  describe('stdin/stdout JSON protocol', () => {
+    test('valid input produces valid JSON output on stdout', async () => {
+      tmpDir = createMockProject();
+      const input = buildInput(tmpDir);
+      const { stdout, code } = await runHook(input);
+
+      expect(code).toBe(0);
+      expect(stdout).toBeTruthy();
+
+      const output = JSON.parse(stdout);
+      expect(output).toHaveProperty('hookSpecificOutput');
+      expect(output.hookSpecificOutput).toHaveProperty('additionalContext');
+    });
+
+    test('additionalContext contains engine XML output', async () => {
+      tmpDir = createMockProject();
+      const input = buildInput(tmpDir, { prompt: 'hello world' });
+      const { stdout } = await runHook(input);
+
+      const output = JSON.parse(stdout);
+      // Assert on the mock engine's actual payload ('mocked output for: ' + prompt);
+      // toContain('') is vacuously true and would never catch a regression.
+      expect(output.hookSpecificOutput.additionalContext).toContain('mocked output');
+      expect(output.hookSpecificOutput.additionalContext).toContain('hello world');
+    });
+
+    test('output is a single JSON object (no trailing data)', async () => {
+      tmpDir = createMockProject();
+      const input = buildInput(tmpDir);
+      const { stdout } = await runHook(input);
+
+      // Should parse without error and be the complete output
+      const output = JSON.parse(stdout);
+      expect(typeof output).toBe('object');
+    });
+  });
+
+  // ==========================================================================
+  // 2. 
Silent Exit on Missing .synapse/ (AC: 2) + // ========================================================================== + + describe('silent exit when .synapse/ missing', () => { + test('exits with code 0 when .synapse/ does not exist', async () => { + tmpDir = createMockProject({ noSynapse: true }); + const input = buildInput(tmpDir); + const { stdout, stderr, code } = await runHook(input); + + expect(code).toBe(0); + expect(stdout).toBe(''); + expect(stderr).toBe(''); + }); + + test('produces zero stdout when .synapse/ does not exist', async () => { + tmpDir = createMockProject({ noSynapse: true }); + const input = buildInput(tmpDir); + const { stdout } = await runHook(input); + + expect(stdout).toBe(''); + }); + + test('produces zero stderr when .synapse/ does not exist', async () => { + tmpDir = createMockProject({ noSynapse: true }); + const input = buildInput(tmpDir); + const { stderr } = await runHook(input); + + expect(stderr).toBe(''); + }); + }); + + // ========================================================================== + // 3. 
Error Handling (AC: 3) + // ========================================================================== + + describe('global error handling', () => { + test('exits silently with code 0 on invalid JSON input', async () => { + const { stdout, code } = await runHook('not valid json {{{'); + + expect(code).toBe(0); + expect(stdout).toBe(''); + }); + + test('logs error to stderr with [synapse-hook] prefix on invalid JSON', async () => { + const { stderr } = await runHook('not valid json'); + + expect(stderr).toContain('[synapse-hook]'); + }); + + test('exits silently when engine.process() throws', async () => { + tmpDir = createMockProject({ + engineCode: ` + class SynapseEngine { + constructor() {} + process() { throw new Error('Engine exploded'); } + } + module.exports = { SynapseEngine }; + `, + }); + const input = buildInput(tmpDir); + const { stdout, stderr, code } = await runHook(input); + + expect(code).toBe(0); + expect(stdout).toBe(''); + expect(stderr).toContain('[synapse-hook]'); + }); + + test('exits silently when SynapseEngine constructor throws', async () => { + tmpDir = createMockProject({ + engineCode: ` + class SynapseEngine { + constructor() { throw new Error('Constructor failed'); } + } + module.exports = { SynapseEngine }; + `, + }); + const input = buildInput(tmpDir); + const { stdout, code } = await runHook(input); + + expect(code).toBe(0); + expect(stdout).toBe(''); + }); + + test('exits silently when session-manager module is missing', async () => { + tmpDir = createMockProject({ + sessionCode: 'throw new Error(\'module broken\');', + }); + const input = buildInput(tmpDir); + const { stdout, code } = await runHook(input); + + expect(code).toBe(0); + expect(stdout).toBe(''); + }); + + test('exits silently on empty stdin', async () => { + const { stdout, code } = await runHook(''); + + expect(code).toBe(0); + expect(stdout).toBe(''); + }); + }); + + // ========================================================================== + // 4. 
Output Format Validation (AC: 1) + // ========================================================================== + + describe('output format', () => { + test('output matches { hookSpecificOutput: { additionalContext: string } }', async () => { + tmpDir = createMockProject(); + const input = buildInput(tmpDir); + const { stdout } = await runHook(input); + + const output = JSON.parse(stdout); + expect(typeof output.hookSpecificOutput.additionalContext).toBe('string'); + expect(Object.keys(output)).toEqual(['hookSpecificOutput']); + expect(Object.keys(output.hookSpecificOutput)).toEqual(['additionalContext']); + }); + + test('additionalContext is empty string when engine returns no xml', async () => { + tmpDir = createMockProject({ + engineCode: ` + class SynapseEngine { + constructor() {} + process() { return { metrics: {} }; } + } + module.exports = { SynapseEngine }; + `, + }); + const input = buildInput(tmpDir); + const { stdout } = await runHook(input); + + const output = JSON.parse(stdout); + expect(output.hookSpecificOutput.additionalContext).toBe(''); + }); + }); + + // ========================================================================== + // 5. 
Engine Delegation (AC: 1) + // ========================================================================== + + describe('engine delegation', () => { + test('engine.process() receives prompt and session arguments', async () => { + tmpDir = createMockProject({ + engineCode: ` + class SynapseEngine { + constructor() {} + process(prompt, session) { + return { + xml: JSON.stringify({ receivedPrompt: prompt, receivedSession: session }), + metrics: {}, + }; + } + } + module.exports = { SynapseEngine }; + `, + }); + const input = buildInput(tmpDir, { prompt: 'my test prompt' }); + const { stdout } = await runHook(input); + + const output = JSON.parse(stdout); + const delegated = JSON.parse(output.hookSpecificOutput.additionalContext); + expect(delegated.receivedPrompt).toBe('my test prompt'); + expect(delegated.receivedSession).toEqual({ prompt_count: 5 }); + }); + + test('uses fallback session { prompt_count: 0 } when loadSession returns null', async () => { + tmpDir = createMockProject({ + sessionCode: ` + function loadSession() { return null; } + module.exports = { loadSession }; + `, + engineCode: ` + class SynapseEngine { + constructor() {} + process(prompt, session) { + return { xml: JSON.stringify({ session }), metrics: {} }; + } + } + module.exports = { SynapseEngine }; + `, + }); + const input = buildInput(tmpDir); + const { stdout } = await runHook(input); + + const output = JSON.parse(stdout); + const delegated = JSON.parse(output.hookSpecificOutput.additionalContext); + expect(delegated.session).toEqual({ prompt_count: 0 }); + }); + + test('SynapseEngine receives .synapse path as constructor argument', async () => { + tmpDir = createMockProject({ + engineCode: ` + class SynapseEngine { + constructor(synapsePath) { + this.synapsePath = synapsePath; + } + process() { + return { xml: this.synapsePath, metrics: {} }; + } + } + module.exports = { SynapseEngine }; + `, + }); + const input = buildInput(tmpDir); + const { stdout } = await runHook(input); + + const 
output = JSON.parse(stdout); + const expectedPath = path.join(tmpDir, '.synapse'); + expect(output.hookSpecificOutput.additionalContext).toBe(expectedPath); + }); + }); + + // ========================================================================== + // 6. Performance (AC: 7) + // ========================================================================== + + describe('performance', () => { + test('hook completes within 2000ms (including spawn overhead)', async () => { + tmpDir = createMockProject(); + const input = buildInput(tmpDir); + const start = Date.now(); + const { code } = await runHook(input); + const elapsed = Date.now() - start; + + expect(code).toBe(0); + // Node.js spawn overhead ~50-500ms on Windows; hook logic itself <100ms + expect(elapsed).toBeLessThan(2000); + }); + + test('startup check (.synapse/ missing) completes within 1500ms', async () => { + tmpDir = createMockProject({ noSynapse: true }); + const input = buildInput(tmpDir); + const start = Date.now(); + await runHook(input); + const elapsed = Date.now() - start; + + // Fast-exit path: no dynamic requires loaded + expect(elapsed).toBeLessThan(1500); + }); + }); + + // ========================================================================== + // 7. 
Hook Registration Verification (AC: 4)
+  // ==========================================================================
+
+  describe('hook registration', () => {
+    test('hook file exists at expected path', () => {
+      expect(fs.existsSync(HOOK_PATH)).toBe(true);
+    });
+
+    test('hook file is at most 120 lines', () => {
+      const content = fs.readFileSync(HOOK_PATH, 'utf8');
+      const lines = content.split('\n').length;
+      expect(lines).toBeLessThanOrEqual(120);
+    });
+
+    test('hook file uses CommonJS (no import/export)', () => {
+      const content = fs.readFileSync(HOOK_PATH, 'utf8');
+      expect(content).not.toMatch(/^import\s/m);
+      expect(content).not.toMatch(/^export\s/m);
+      expect(content).toContain('require(');
+    });
+
+    test('settings.local.json has SYNAPSE hook registered', () => {
+      const settingsPath = path.resolve(__dirname, '../../.claude/settings.local.json');
+      if (!fs.existsSync(settingsPath)) {
+        // Settings may not exist in CI — skip gracefully
+        return;
+      }
+      const settings = JSON.parse(fs.readFileSync(settingsPath, 'utf8'));
+
+      // Skip if hooks are not configured (optional user configuration)
+      if (!settings.hooks || !settings.hooks.UserPromptSubmit) {
+        // SYNAPSE hook is optional user configuration — skip gracefully
+        return;
+      }
+
+      const hookEntries = settings.hooks.UserPromptSubmit;
+      const synapseHook = hookEntries.find((entry) =>
+        entry.hooks && entry.hooks.some((h) => h.command && h.command.includes('synapse-engine.cjs')),
+      );
+      expect(synapseHook).toBeDefined();
+    });
+  });
+
+  // ==========================================================================
+  // 8. 
Direct Module Tests (Jest coverage — via require) + // ========================================================================== + + describe('direct module exports (Jest coverage)', () => { + const hookModule = require('../../.claude/hooks/synapse-engine.cjs'); + + test('exports readStdin function', () => { + expect(typeof hookModule.readStdin).toBe('function'); + }); + + test('exports main function', () => { + expect(typeof hookModule.main).toBe('function'); + }); + + test('exports HOOK_TIMEOUT_MS constant set to 5000', () => { + expect(hookModule.HOOK_TIMEOUT_MS).toBe(5000); + }); + + test('readStdin rejects on invalid JSON from stream', async () => { + const { Readable } = require('stream'); + const originalStdin = process.stdin; + + const mockStdin = new Readable({ read() {} }); + Object.defineProperty(process, 'stdin', { value: mockStdin, writable: true }); + + try { + const promise = hookModule.readStdin(); + mockStdin.push('not json'); + mockStdin.push(null); + + await expect(promise).rejects.toThrow(); + } finally { + Object.defineProperty(process, 'stdin', { value: originalStdin, writable: true }); + } + }); + + test('readStdin resolves valid JSON from stream', async () => { + const { Readable } = require('stream'); + const originalStdin = process.stdin; + + const mockStdin = new Readable({ read() {} }); + Object.defineProperty(process, 'stdin', { value: mockStdin, writable: true }); + + try { + const promise = hookModule.readStdin(); + mockStdin.push('{"hello":"world"}'); + mockStdin.push(null); + + const result = await promise; + expect(result).toEqual({ hello: 'world' }); + } finally { + Object.defineProperty(process, 'stdin', { value: originalStdin, writable: true }); + } + }); + + test('main() processes input and writes to stdout in-process', async () => { + const { Readable } = require('stream'); + tmpDir = createMockProject(); + const originalStdin = process.stdin; + const originalWrite = process.stdout.write; + + const mockStdin = new Readable({ 
read() {} });
+      Object.defineProperty(process, 'stdin', { value: mockStdin, writable: true });
+
+      let captured = '';
+      process.stdout.write = (data) => { captured += data; return true; };
+
+      try {
+        const mainPromise = hookModule.main();
+        mockStdin.push(JSON.stringify(buildInput(tmpDir)));
+        mockStdin.push(null);
+        await mainPromise;
+
+        const output = JSON.parse(captured);
+        expect(typeof output.hookSpecificOutput.additionalContext).toBe('string');
+      } finally {
+        Object.defineProperty(process, 'stdin', { value: originalStdin, writable: true });
+        process.stdout.write = originalWrite;
+      }
+    });
+
+    test('main() returns silently when .synapse/ is missing', async () => {
+      const { Readable } = require('stream');
+      tmpDir = createMockProject({ noSynapse: true });
+      const originalStdin = process.stdin;
+      const originalWrite = process.stdout.write;
+
+      const mockStdin = new Readable({ read() {} });
+      Object.defineProperty(process, 'stdin', { value: mockStdin, writable: true });
+
+      let captured = '';
+      process.stdout.write = (data) => { captured += data; return true; };
+
+      try {
+        const mainPromise = hookModule.main();
+        mockStdin.push(JSON.stringify(buildInput(tmpDir)));
+        mockStdin.push(null);
+        await mainPromise;
+
+        expect(captured).toBe('');
+      } finally {
+        Object.defineProperty(process, 'stdin', { value: originalStdin, writable: true });
+        process.stdout.write = originalWrite;
+      }
+    });
+
+    test('main() uses fallback session when loadSession returns null in-process', async () => {
+      const { Readable } = require('stream');
+      tmpDir = createMockProject({
+        sessionCode: `
+          function loadSession() { return null; }
+          module.exports = { loadSession };
+        `,
+        engineCode: `
+          class SynapseEngine {
+            constructor() {}
+            process(prompt, session) {
+              return { xml: JSON.stringify({ pc: session.prompt_count }), metrics: {} };
+            }
+          }
+          module.exports = { SynapseEngine };
+        `,
+      });
+      const originalStdin = process.stdin;
+      const originalWrite = process.stdout.write;
+
+      const 
mockStdin = new Readable({ read() {} }); + Object.defineProperty(process, 'stdin', { value: mockStdin, writable: true }); + + let captured = ''; + process.stdout.write = (data) => { captured += data; return true; }; + + try { + const mainPromise = hookModule.main(); + mockStdin.push(JSON.stringify(buildInput(tmpDir))); + mockStdin.push(null); + await mainPromise; + + const output = JSON.parse(captured); + const inner = JSON.parse(output.hookSpecificOutput.additionalContext); + expect(inner.pc).toBe(0); + } finally { + Object.defineProperty(process, 'stdin', { value: originalStdin, writable: true }); + process.stdout.write = originalWrite; + } + }); + }); + + // ========================================================================== + // 9. run() Entry Point (defense-in-depth timeout) + // ========================================================================== + + describe('run() entry point', () => { + const hookModule = require('../../.claude/hooks/synapse-engine.cjs'); + + test('run() sets safety timeout, catches errors, and exits with 0', async () => { + const exitSpy = jest.spyOn(process, 'exit').mockImplementation(() => {}); + const errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {}); + const { Readable } = require('stream'); + const originalStdin = process.stdin; + + // Temporarily clear JEST_WORKER_ID so safeExit() calls process.exit() + const savedWorkerId = process.env.JEST_WORKER_ID; + delete process.env.JEST_WORKER_ID; + + const mockStdin = new Readable({ read() {} }); + Object.defineProperty(process, 'stdin', { value: mockStdin, writable: true }); + + try { + hookModule.run(); + mockStdin.push(null); // empty stdin → JSON parse error → catch + + // Wait for async catch handler to complete + await new Promise((r) => setTimeout(r, 50)); + + expect(exitSpy).toHaveBeenCalledWith(0); + expect(errorSpy).toHaveBeenCalledWith( + expect.stringContaining('[synapse-hook]'), + ); + } finally { + // Restore JEST_WORKER_ID before restoring 
other mocks + if (savedWorkerId !== undefined) { + process.env.JEST_WORKER_ID = savedWorkerId; + } + exitSpy.mockRestore(); + errorSpy.mockRestore(); + Object.defineProperty(process, 'stdin', { value: originalStdin, writable: true }); + } + }); + + test('HOOK_TIMEOUT_MS is 5000 (defense-in-depth)', () => { + expect(hookModule.HOOK_TIMEOUT_MS).toBe(5000); + }); + }); + + // ========================================================================== + // 10. .gitignore Verification (AC: 6) + // ========================================================================== + + describe('gitignore entries', () => { + test('.gitignore contains .synapse/sessions/', () => { + const gitignorePath = path.resolve(__dirname, '../../.gitignore'); + const content = fs.readFileSync(gitignorePath, 'utf8'); + expect(content).toContain('.synapse/sessions/'); + }); + + test('.gitignore contains .synapse/cache/', () => { + const gitignorePath = path.resolve(__dirname, '../../.gitignore'); + const content = fs.readFileSync(gitignorePath, 'utf8'); + expect(content).toContain('.synapse/cache/'); + }); + + test('.gitignore has exception for synapse-engine.cjs hook', () => { + const gitignorePath = path.resolve(__dirname, '../../.gitignore'); + const content = fs.readFileSync(gitignorePath, 'utf8'); + expect(content).toContain('!.claude/hooks/synapse-engine.cjs'); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/l0-constitution.test.js +================================================== +```js +/** + * L0 Constitution Processor Tests + * + * Tests for constitution rule loading, nonNegotiable validation, + * graceful degradation on missing files, and ALWAYS_ON behavior. 
+ * + * @story SYN-4 - Layer Processors L0-L3 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const LayerProcessor = require('../../.aios-core/core/synapse/layers/layer-processor'); +const L0ConstitutionProcessor = require('../../.aios-core/core/synapse/layers/l0-constitution'); + +jest.setTimeout(30000); + +function createTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-l0-test-')); +} + +function cleanupTempDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +describe('L0ConstitutionProcessor', () => { + let tempDir; + let processor; + + beforeEach(() => { + tempDir = createTempDir(); + processor = new L0ConstitutionProcessor(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + describe('constructor', () => { + test('should extend LayerProcessor', () => { + expect(processor).toBeInstanceOf(LayerProcessor); + }); + + test('should set name to constitution', () => { + expect(processor.name).toBe('constitution'); + }); + + test('should set layer to 0', () => { + expect(processor.layer).toBe(0); + }); + + test('should set timeout to 5ms', () => { + expect(processor.timeout).toBe(5); + }); + }); + + describe('process()', () => { + test('should load constitution rules from domain file', () => { + // Given: constitution domain file with rules + const constitutionFile = path.join(tempDir, 'constitution'); + fs.writeFileSync(constitutionFile, [ + 'CONSTITUTION_RULE_ART1_0=CLI First (NON-NEGOTIABLE)', + 'CONSTITUTION_RULE_ART2_0=Agent Authority (NON-NEGOTIABLE)', + 'CONSTITUTION_RULE_ART3_0=Story-Driven (MUST)', + ].join('\n')); + + const context = { + prompt: 'test prompt', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + CONSTITUTION: { + state: 'active', + alwaysOn: true, + nonNegotiable: true, + file: 'constitution', + }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + 
expect(result.rules).toHaveLength(3); + expect(result.rules[0]).toContain('CLI First'); + expect(result.metadata.layer).toBe(0); + expect(result.metadata.source).toBe('constitution'); + expect(result.metadata.nonNegotiable).toBe(true); + }); + + test('should validate nonNegotiable flag from manifest', () => { + const constitutionFile = path.join(tempDir, 'constitution'); + fs.writeFileSync(constitutionFile, 'RULE_1=Test rule\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + CONSTITUTION: { + state: 'active', + nonNegotiable: true, + file: 'constitution', + }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result.metadata.nonNegotiable).toBe(true); + }); + + test('should set nonNegotiable false when not in manifest', () => { + const constitutionFile = path.join(tempDir, 'constitution'); + fs.writeFileSync(constitutionFile, 'RULE_1=Test rule\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + CONSTITUTION: { + state: 'active', + file: 'constitution', + }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result.metadata.nonNegotiable).toBe(false); + }); + + test('should return null when domain file is missing', () => { + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + CONSTITUTION: { + state: 'active', + file: 'nonexistent-file', + }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when domain file is empty', () => { + const constitutionFile = path.join(tempDir, 'constitution'); + fs.writeFileSync(constitutionFile, ''); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + CONSTITUTION: { + state: 
'active', + file: 'constitution', + }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should use default path when domain has no file property', () => { + const constitutionFile = path.join(tempDir, 'constitution'); + fs.writeFileSync(constitutionFile, 'RULE_1=Default path rule\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + CONSTITUTION: { state: 'active', nonNegotiable: true }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).not.toBeNull(); + expect(result.rules[0]).toContain('Default path rule'); + }); + + test('should process regardless of session state (ALWAYS_ON)', () => { + const constitutionFile = path.join(tempDir, 'constitution'); + fs.writeFileSync(constitutionFile, 'RULE=Always on rule\n'); + + // Session with no agent, no workflow + const context = { + prompt: '', + session: { active_agent: { id: null }, active_workflow: null }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + CONSTITUTION: { state: 'active', file: 'constitution', nonNegotiable: true }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(1); + }); + + test('should handle manifest with no domains', () => { + const constitutionFile = path.join(tempDir, 'constitution'); + fs.writeFileSync(constitutionFile, 'RULE=Fallback\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + // Should use default path since no domain key matches + const result = processor.process(context); + expect(result).not.toBeNull(); + }); + }); + + describe('_safeProcess()', () => { + test('should return result via safe wrapper', () => { + const constitutionFile = path.join(tempDir, 
'constitution'); + fs.writeFileSync(constitutionFile, 'RULE=Safe test\n'); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + domains: { + CONSTITUTION: { state: 'active', file: 'constitution', nonNegotiable: true }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor._safeProcess(context); + expect(result).not.toBeNull(); + expect(result.rules[0]).toContain('Safe test'); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/domain-loader.test.js +================================================== +```js +/** + * Domain Loader + Manifest Parser Tests + * + * Tests for parseManifest(), loadDomainFile(), isExcluded(), + * matchKeywords(), and helper functions. + * + * @story SYN-1 - Domain Loader + Manifest Parser + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { + parseManifest, + loadDomainFile, + isExcluded, + matchKeywords, + extractDomainInfo, + domainNameToFile, + KNOWN_SUFFIXES, + GLOBAL_KEYS, +} = require('../../.aios-core/core/synapse/domain/domain-loader'); + +// Set timeout for all tests +jest.setTimeout(30000); + +/** + * Helper: create a temp directory with files for testing + */ +function createTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-test-')); +} + +/** + * Helper: clean up temp directory + */ +function cleanupTempDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +describe('parseManifest', () => { + let tempDir; + + beforeEach(() => { + tempDir = createTempDir(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + test('should parse a valid manifest with all domain attributes', () => { + // Given: a manifest file with all supported attributes + const manifestPath = path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, [ + '# SYNAPSE Manifest', + 'GLOBAL_STATE=active', + 'GLOBAL_ALWAYS_ON=true', + 'CONSTITUTION_STATE=active', 
+ 'CONSTITUTION_ALWAYS_ON=true', + 'CONSTITUTION_NON_NEGOTIABLE=true', + 'AGENT_DEV_STATE=active', + 'AGENT_DEV_AGENT_TRIGGER=dev', + 'WORKFLOW_STORY_DEV_STATE=active', + 'WORKFLOW_STORY_DEV_WORKFLOW_TRIGGER=story_development', + 'MYDOMAIN_STATE=active', + 'MYDOMAIN_RECALL=keyword1,keyword2', + 'MYDOMAIN_EXCLUDE=skip,ignore', + 'DEVMODE=false', + 'GLOBAL_EXCLUDE=skip,ignore', + ].join('\n')); + + // When: parsing the manifest + const result = parseManifest(manifestPath); + + // Then: all domains and global flags are parsed correctly + expect(result.devmode).toBe(false); + expect(result.globalExclude).toEqual(['skip', 'ignore']); + expect(result.domains.GLOBAL).toEqual({ + state: 'active', + alwaysOn: true, + file: 'global', + }); + expect(result.domains.CONSTITUTION).toEqual({ + state: 'active', + alwaysOn: true, + nonNegotiable: true, + file: 'constitution', + }); + expect(result.domains.AGENT_DEV).toEqual({ + state: 'active', + agentTrigger: 'dev', + file: 'agent-dev', + }); + expect(result.domains.WORKFLOW_STORY_DEV).toEqual({ + state: 'active', + workflowTrigger: 'story_development', + file: 'workflow-story-dev', + }); + expect(result.domains.MYDOMAIN).toEqual({ + state: 'active', + recall: ['keyword1', 'keyword2'], + exclude: ['skip', 'ignore'], + file: 'mydomain', + }); + }); + + test('should return empty config when manifest does not exist', () => { + // Given: a path to a non-existent manifest + const manifestPath = path.join(tempDir, 'nonexistent-manifest'); + + // When: parsing + const result = parseManifest(manifestPath); + + // Then: graceful empty config returned + expect(result.devmode).toBe(false); + expect(result.globalExclude).toEqual([]); + expect(result.domains).toEqual({}); + }); + + test('should handle empty manifest file', () => { + // Given: an empty manifest file + const manifestPath = path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, ''); + + // When: parsing + const result = parseManifest(manifestPath); + + // Then: empty 
config returned + expect(result.devmode).toBe(false); + expect(result.globalExclude).toEqual([]); + expect(result.domains).toEqual({}); + }); + + test('should skip comments and empty lines', () => { + // Given: manifest with comments and blank lines + const manifestPath = path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, [ + '# This is a comment', + '', + ' # Another comment with leading spaces', + ' ', + 'GLOBAL_STATE=active', + ].join('\n')); + + // When: parsing + const result = parseManifest(manifestPath); + + // Then: only the valid line is parsed + expect(Object.keys(result.domains)).toEqual(['GLOBAL']); + expect(result.domains.GLOBAL.state).toBe('active'); + }); + + test('should handle malformed lines gracefully', () => { + // Given: manifest with malformed lines (no '=' sign) + const manifestPath = path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, [ + 'NO_EQUALS_SIGN', + 'VALID_STATE=active', + '=no_key', + 'ANOTHER_MALFORMED', + ].join('\n')); + + // When: parsing + const result = parseManifest(manifestPath); + + // Then: only valid line is parsed, malformed lines skipped + expect(result.domains.VALID).toEqual({ + state: 'active', + file: 'valid', + }); + }); + + test('should parse DEVMODE=true correctly', () => { + // Given: manifest with DEVMODE enabled + const manifestPath = path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, 'DEVMODE=true'); + + // When: parsing + const result = parseManifest(manifestPath); + + // Then: devmode is true + expect(result.devmode).toBe(true); + }); + + test('should handle value with equals signs', () => { + // Given: a manifest where value contains '=' (e.g., base64) + const manifestPath = path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, 'MYDOM_RECALL=a=b,c=d'); + + // When: parsing + const result = parseManifest(manifestPath); + + // Then: value split only on first '=' + expect(result.domains.MYDOM.recall).toEqual(['a=b', 'c=d']); + }); + + test('should skip keys 
with no known suffix', () => { + // Given: manifest with keys that have no recognized suffix + const manifestPath = path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, [ + 'RANDOM_KEY=value', + 'ANOTHER_UNKNOWN=data', + 'GLOBAL_STATE=active', + ].join('\n')); + + // When: parsing + const result = parseManifest(manifestPath); + + // Then: unknown keys are skipped, valid ones parsed + expect(Object.keys(result.domains)).toEqual(['GLOBAL']); + }); + + test('should handle Windows-style line endings (CRLF)', () => { + // Given: manifest with CRLF line endings + const manifestPath = path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, 'GLOBAL_STATE=active\r\nDEVMODE=true\r\n'); + + // When: parsing + const result = parseManifest(manifestPath); + + // Then: parsed correctly + expect(result.domains.GLOBAL.state).toBe('active'); + expect(result.devmode).toBe(true); + }); +}); + +describe('loadDomainFile', () => { + let tempDir; + + beforeEach(() => { + tempDir = createTempDir(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + test('should load domain file in KEY=VALUE format', () => { + // Given: a domain file with DOMAIN_RULE_N=text format + const domainPath = path.join(tempDir, 'agent-dev'); + fs.writeFileSync(domainPath, [ + '# Agent Dev Domain Rules', + 'AGENT_DEV_RULE_1=Always use kebab-case for file names', + 'AGENT_DEV_RULE_2=Follow conventional commits', + 'AGENT_DEV_RULE_3=Write tests for every feature', + ].join('\n')); + + // When: loading the domain file + const rules = loadDomainFile(domainPath); + + // Then: rules are extracted from values + expect(rules).toEqual([ + 'Always use kebab-case for file names', + 'Follow conventional commits', + 'Write tests for every feature', + ]); + }); + + test('should load domain file in plain text format', () => { + // Given: a domain file with plain text lines + const domainPath = path.join(tempDir, 'constitution'); + fs.writeFileSync(domainPath, [ + '# Constitution Rules', + 'CLI 
First is non-negotiable', + 'Story-driven development always', + 'No invention beyond specs', + ].join('\n')); + + // When: loading the domain file + const rules = loadDomainFile(domainPath); + + // Then: each non-empty, non-comment line is a rule + expect(rules).toEqual([ + 'CLI First is non-negotiable', + 'Story-driven development always', + 'No invention beyond specs', + ]); + }); + + test('should return empty array when domain file does not exist', () => { + // Given: a path to a non-existent domain file + const domainPath = path.join(tempDir, 'nonexistent'); + + // When: loading + const rules = loadDomainFile(domainPath); + + // Then: empty array returned + expect(rules).toEqual([]); + }); + + test('should return empty array for empty domain file', () => { + // Given: an empty domain file + const domainPath = path.join(tempDir, 'empty'); + fs.writeFileSync(domainPath, ''); + + // When: loading + const rules = loadDomainFile(domainPath); + + // Then: empty array + expect(rules).toEqual([]); + }); + + test('should extract AUTH and RULE entries from agent domain files', () => { + // Given: agent domain file with AUTH + RULE keys + const domainPath = path.join(tempDir, 'agent-devops'); + fs.writeFileSync(domainPath, [ + '# Agent devops domain', + 'AGENT_DEVOPS_AUTH_0=EXCLUSIVE: git push', + 'AGENT_DEVOPS_AUTH_1=EXCLUSIVE: gh pr create', + 'AGENT_DEVOPS_RULE_0=Run pre-push quality gates', + 'AGENT_DEVOPS_RULE_1=Confirm version bump with user', + ].join('\n')); + + // When: loading the domain file + const rules = loadDomainFile(domainPath); + + // Then: both AUTH and RULE values are extracted + expect(rules).toEqual([ + 'EXCLUSIVE: git push', + 'EXCLUSIVE: gh pr create', + 'Run pre-push quality gates', + 'Confirm version bump with user', + ]); + }); + + test('should extract values from context bracket keys (RULE_BRACKET_N)', () => { + // Given: context domain file with bracket-level keys + const domainPath = path.join(tempDir, 'context'); + 
fs.writeFileSync(domainPath, [ + '# Context brackets', + 'CONTEXT_RULE_FRESH_0=Context is fresh', + 'CONTEXT_RULE_MODERATE_0=Standard context level', + 'CONTEXT_RULE_CRITICAL_0=Context nearly exhausted', + ].join('\n')); + + // When: loading the domain file + const rules = loadDomainFile(domainPath); + + // Then: values extracted from all bracket-level keys + expect(rules).toEqual([ + 'Context is fresh', + 'Standard context level', + 'Context nearly exhausted', + ]); + }); + + test('should extract values from constitution keys (RULE_ARTN_M)', () => { + // Given: constitution domain file with article-numbered keys + const domainPath = path.join(tempDir, 'constitution'); + fs.writeFileSync(domainPath, [ + '# Constitution', + 'CONSTITUTION_RULE_ART1_0=CLI First (NON-NEGOTIABLE)', + 'CONSTITUTION_RULE_ART1_1=MUST: All functionality works via CLI first', + 'CONSTITUTION_RULE_ART6_0=Absolute Imports (SHOULD)', + ].join('\n')); + + // When: loading the domain file + const rules = loadDomainFile(domainPath); + + // Then: values extracted from article-numbered keys + expect(rules).toEqual([ + 'CLI First (NON-NEGOTIABLE)', + 'MUST: All functionality works via CLI first', + 'Absolute Imports (SHOULD)', + ]); + }); + + test('should extract values from commands keys (RULE_COMMAND_N)', () => { + // Given: commands domain file with command-category keys + const domainPath = path.join(tempDir, 'commands'); + fs.writeFileSync(domainPath, [ + '# Commands', + 'COMMANDS_RULE_BRIEF_0=Use bullet points only', + 'COMMANDS_RULE_DEV_0=Code over explanation', + 'COMMANDS_RULE_SYNAPSE_STATUS_0=Display current SYNAPSE state', + ].join('\n')); + + // When: loading the domain file + const rules = loadDomainFile(domainPath); + + // Then: values extracted from command-category keys + expect(rules).toEqual([ + 'Use bullet points only', + 'Code over explanation', + 'Display current SYNAPSE state', + ]); + }); +}); + +describe('isExcluded', () => { + test('should return true when prompt matches 
global exclude keyword', () => { + // Given: a prompt containing an exclude keyword + // When: checking exclusion + const result = isExcluded('please skip this task', ['skip', 'ignore'], []); + + // Then: excluded + expect(result).toBe(true); + }); + + test('should return true when prompt matches domain exclude keyword', () => { + // Given: domain-specific exclusion + const result = isExcluded('this is internal only', [], ['internal']); + + // Then: excluded + expect(result).toBe(true); + }); + + test('should return false when no keywords match', () => { + // Given: prompt that doesn't match any exclude keywords + const result = isExcluded('implement the login feature', ['skip', 'ignore'], ['internal']); + + // Then: not excluded + expect(result).toBe(false); + }); + + test('should be case-insensitive', () => { + // Given: keyword in different case + const result = isExcluded('SKIP this please', ['skip'], []); + + // Then: still excluded + expect(result).toBe(true); + }); + + test('should handle special regex characters in keywords', () => { + // Given: keyword with regex special chars + const result = isExcluded('use file.txt pattern', ['file.txt'], []); + + // Then: matches literally (dot escaped) + expect(result).toBe(true); + }); + + test('should return false for empty prompt', () => { + const result = isExcluded('', ['skip'], []); + expect(result).toBe(false); + }); + + test('should return false for null prompt', () => { + const result = isExcluded(null, ['skip'], []); + expect(result).toBe(false); + }); + + test('should skip empty/falsy keywords in exclude list', () => { + // Given: exclude list with empty strings + const result = isExcluded('some prompt', ['', null, 'skip'], []); + + // Then: empty keywords are skipped, valid one still works + expect(result).toBe(false); // 'skip' not in 'some prompt' + }); + + test('should match valid keyword even with empty ones in list', () => { + const result = isExcluded('skip this', ['', 'skip'], []); + 
expect(result).toBe(true); + }); +}); + +describe('matchKeywords', () => { + test('should return true when prompt matches a keyword', () => { + const result = matchKeywords('deploy to production', ['deploy', 'release']); + expect(result).toBe(true); + }); + + test('should return false when no keywords match', () => { + const result = matchKeywords('write unit tests', ['deploy', 'release']); + expect(result).toBe(false); + }); + + test('should be case-insensitive', () => { + const result = matchKeywords('DEPLOY NOW', ['deploy']); + expect(result).toBe(true); + }); + + test('should return false for empty keywords array', () => { + const result = matchKeywords('some prompt', []); + expect(result).toBe(false); + }); + + test('should return false for empty prompt', () => { + const result = matchKeywords('', ['deploy']); + expect(result).toBe(false); + }); + + test('should handle special regex characters in keywords', () => { + const result = matchKeywords('check (status) now', ['(status)']); + expect(result).toBe(true); + }); + + test('should skip empty/falsy keywords', () => { + const result = matchKeywords('deploy now', ['', null, 'deploy']); + expect(result).toBe(true); + }); +}); + +describe('extractDomainInfo', () => { + test('should extract domain name from STATE suffix', () => { + const { domainName, suffix } = extractDomainInfo('AGENT_DEV_STATE'); + expect(domainName).toBe('AGENT_DEV'); + expect(suffix).toBe('_STATE'); + }); + + test('should extract domain name from WORKFLOW_TRIGGER suffix', () => { + const { domainName, suffix } = extractDomainInfo('WORKFLOW_STORY_DEV_WORKFLOW_TRIGGER'); + expect(domainName).toBe('WORKFLOW_STORY_DEV'); + expect(suffix).toBe('_WORKFLOW_TRIGGER'); + }); + + test('should extract domain name from AGENT_TRIGGER suffix', () => { + const { domainName, suffix } = extractDomainInfo('AGENT_DEV_AGENT_TRIGGER'); + expect(domainName).toBe('AGENT_DEV'); + expect(suffix).toBe('_AGENT_TRIGGER'); + }); + + test('should extract domain name from 
NON_NEGOTIABLE suffix', () => { + const { domainName, suffix } = extractDomainInfo('CONSTITUTION_NON_NEGOTIABLE'); + expect(domainName).toBe('CONSTITUTION'); + expect(suffix).toBe('_NON_NEGOTIABLE'); + }); + + test('should return null for keys with no known suffix', () => { + const { domainName, suffix } = extractDomainInfo('UNKNOWN_KEY'); + expect(domainName).toBeNull(); + expect(suffix).toBeNull(); + }); + + test('should return null for empty prefix', () => { + const { domainName, suffix } = extractDomainInfo('_STATE'); + expect(domainName).toBeNull(); + }); +}); + +describe('domainNameToFile', () => { + test('should convert simple domain name', () => { + expect(domainNameToFile('GLOBAL')).toBe('global'); + }); + + test('should convert multi-word domain name with underscores', () => { + expect(domainNameToFile('AGENT_DEV')).toBe('agent-dev'); + }); + + test('should convert long domain name', () => { + expect(domainNameToFile('WORKFLOW_STORY_DEV')).toBe('workflow-story-dev'); + }); +}); + +describe('constants', () => { + test('KNOWN_SUFFIXES should contain all 7 suffixes', () => { + expect(KNOWN_SUFFIXES).toHaveLength(7); + expect(KNOWN_SUFFIXES).toContain('_STATE'); + expect(KNOWN_SUFFIXES).toContain('_ALWAYS_ON'); + expect(KNOWN_SUFFIXES).toContain('_NON_NEGOTIABLE'); + expect(KNOWN_SUFFIXES).toContain('_AGENT_TRIGGER'); + expect(KNOWN_SUFFIXES).toContain('_WORKFLOW_TRIGGER'); + expect(KNOWN_SUFFIXES).toContain('_RECALL'); + expect(KNOWN_SUFFIXES).toContain('_EXCLUDE'); + }); + + test('GLOBAL_KEYS should contain DEVMODE and GLOBAL_EXCLUDE', () => { + expect(GLOBAL_KEYS).toEqual(['DEVMODE', 'GLOBAL_EXCLUDE']); + }); +}); + +describe('parseManifest — edge cases for comma-separated values', () => { + let tempDir; + + beforeEach(() => { + tempDir = createTempDir(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + test('should handle empty GLOBAL_EXCLUDE value', () => { + // Given: GLOBAL_EXCLUDE with empty value + const manifestPath = 
path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, 'GLOBAL_EXCLUDE='); + + // When: parsing + const result = parseManifest(manifestPath); + + // Then: empty array (parseCommaSeparated handles empty string) + expect(result.globalExclude).toEqual([]); + }); + + test('should handle RECALL with trailing commas', () => { + const manifestPath = path.join(tempDir, 'manifest'); + fs.writeFileSync(manifestPath, 'MYDOM_RECALL=a,,b,'); + + const result = parseManifest(manifestPath); + expect(result.domains.MYDOM.recall).toEqual(['a', 'b']); + }); +}); + +``` + +================================================== +📄 tests/synapse/l2-agent.test.js +================================================== +```js +/** + * L2 Agent Processor Tests + * + * Tests for agent detection, trigger matching, authority boundary + * filtering, graceful degradation, and session state handling. + * + * @story SYN-4 - Layer Processors L0-L3 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const LayerProcessor = require('../../.aios-core/core/synapse/layers/layer-processor'); +const L2AgentProcessor = require('../../.aios-core/core/synapse/layers/l2-agent'); + +jest.setTimeout(30000); + +function createTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-l2-test-')); +} + +function cleanupTempDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +describe('L2AgentProcessor', () => { + let tempDir; + let processor; + + beforeEach(() => { + tempDir = createTempDir(); + processor = new L2AgentProcessor(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + describe('constructor', () => { + test('should extend LayerProcessor', () => { + expect(processor).toBeInstanceOf(LayerProcessor); + }); + + test('should set name to agent', () => { + expect(processor.name).toBe('agent'); + }); + + test('should set layer to 2', () => { + expect(processor.layer).toBe(2); + }); + + test('should set timeout to 15ms', () => { 
+ expect(processor.timeout).toBe(15); + }); + }); + + describe('process()', () => { + test('should load agent-specific rules when agent is active', () => { + fs.writeFileSync(path.join(tempDir, 'agent-dev'), [ + 'DEV_RULE_1=Follow coding standards', + 'DEV_RULE_2=Only @devops can push (AUTH boundary)', + 'DEV_RULE_3=Write tests for all features', + ].join('\n')); + + const context = { + prompt: '', + session: { + active_agent: { id: 'dev', activated_at: '2026-02-11', activation_quality: 'full' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + DEV_AGENT: { + state: 'active', + agentTrigger: 'dev', + file: 'agent-dev', + }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(3); + expect(result.metadata.layer).toBe(2); + expect(result.metadata.source).toBe('agent-dev'); + expect(result.metadata.agentId).toBe('dev'); + expect(result.metadata.hasAuthority).toBe(true); + }); + + test('should return null when no agent is active', () => { + const context = { + prompt: '', + session: { active_agent: { id: null } }, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when session has no active_agent', () => { + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when no matching agentTrigger in manifest', () => { + const context = { + prompt: '', + session: { + active_agent: { id: 'unknown-agent' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + DEV_AGENT: { agentTrigger: 'dev', file: 'agent-dev' }, + }, + }, + }, + previousLayers: [], + }; + + const result = 
processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when domain file is missing', () => { + const context = { + prompt: '', + session: { + active_agent: { id: 'dev' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + DEV_AGENT: { agentTrigger: 'dev', file: 'nonexistent' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should detect authority boundaries (rules containing AUTH)', () => { + fs.writeFileSync(path.join(tempDir, 'agent-qa'), [ + 'QA_RULE_1=Run quality checks', + 'QA_RULE_2=QA agent AUTH boundary: can approve or reject stories', + ].join('\n')); + + const context = { + prompt: '', + session: { + active_agent: { id: 'qa' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + QA_AGENT: { agentTrigger: 'qa', file: 'agent-qa' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result.metadata.hasAuthority).toBe(true); + }); + + test('should set hasAuthority false when no AUTH rules', () => { + fs.writeFileSync(path.join(tempDir, 'agent-sm'), 'SM_RULE_1=Create stories\n'); + + const context = { + prompt: '', + session: { + active_agent: { id: 'sm' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + SM_AGENT: { agentTrigger: 'sm', file: 'agent-sm' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result.metadata.hasAuthority).toBe(false); + }); + + test('should use default file path when domain has no file property', () => { + fs.writeFileSync(path.join(tempDir, 'agent-architect'), 'RULE=Architecture rules\n'); + + const context = { + prompt: '', + session: { + active_agent: { id: 'architect' }, + }, + config: { + synapsePath: tempDir, + manifest: { + domains: { + ARCH_AGENT: { agentTrigger: 'architect' }, + }, + }, + }, + previousLayers: [], + }; + + const result = 
processor.process(context); + expect(result).not.toBeNull(); + expect(result.rules[0]).toContain('Architecture rules'); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/l7-star-command.test.js +================================================== +```js +/** + * L7 Star-Command Processor Tests + * + * Tests for star-command detection, multi-command support, + * command block parsing, no-command handling, and edge cases. + * + * @story SYN-5 - Layer Processors L4-L7 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const LayerProcessor = require('../../.aios-core/core/synapse/layers/layer-processor'); +const L7StarCommandProcessor = require('../../.aios-core/core/synapse/layers/l7-star-command'); + +jest.setTimeout(30000); + +function createTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-l7-test-')); +} + +function cleanupTempDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +describe('L7StarCommandProcessor', () => { + let tempDir; + let processor; + + beforeEach(() => { + tempDir = createTempDir(); + processor = new L7StarCommandProcessor(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + describe('constructor', () => { + test('should extend LayerProcessor', () => { + expect(processor).toBeInstanceOf(LayerProcessor); + }); + + test('should set name to star-command', () => { + expect(processor.name).toBe('star-command'); + }); + + test('should set layer to 7', () => { + expect(processor.layer).toBe(7); + }); + + test('should set timeout to 5ms', () => { + expect(processor.timeout).toBe(5); + }); + }); + + describe('process()', () => { + test('should detect single star-command and load rules', () => { + fs.writeFileSync(path.join(tempDir, 'commands'), [ + '[*help] COMMAND:', + '0. Show available commands', + '1. Use bullet points', + '2. 
Max 5 items', + ].join('\n')); + + const context = { + prompt: '*help', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(3); + expect(result.rules[0]).toContain('Show available commands'); + expect(result.metadata.layer).toBe(7); + expect(result.metadata.commands).toContain('help'); + }); + + test('should detect multiple star-commands in same prompt', () => { + fs.writeFileSync(path.join(tempDir, 'commands'), [ + '[*dev] COMMAND:', + '0. Code over explanation', + '[*brief] COMMAND:', + '0. Use bullet points only', + ].join('\n')); + + const context = { + prompt: '*dev *brief implement the feature', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.metadata.commands).toEqual( + expect.arrayContaining(['dev', 'brief']), + ); + expect(result.rules.length).toBeGreaterThanOrEqual(2); + }); + + test('should return null when no star-commands in prompt', () => { + const context = { + prompt: 'just a regular prompt without commands', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when prompt is empty', () => { + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when commands file is missing', () => { + const context = { + prompt: '*help', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result 
= processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when command not found in commands file', () => { + fs.writeFileSync(path.join(tempDir, 'commands'), [ + '[*help] COMMAND:', + '0. Show commands', + ].join('\n')); + + const context = { + prompt: '*unknown-command', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should deduplicate repeated star-commands in prompt', () => { + fs.writeFileSync(path.join(tempDir, 'commands'), [ + '[*help] COMMAND:', + '0. Show commands', + ].join('\n')); + + const context = { + prompt: '*help and then *help again', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.metadata.commands).toHaveLength(1); + expect(result.metadata.commands[0]).toBe('help'); + }); + + test('should handle star-command with hyphens', () => { + fs.writeFileSync(path.join(tempDir, 'commands'), [ + '[*run-tests] COMMAND:', + '0. Execute all tests', + '1. 
Show coverage', + ].join('\n')); + + const context = { + prompt: '*run-tests for this feature', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.metadata.commands).toContain('run-tests'); + }); + + test('should not match asterisk in markdown (e.g., **bold**)', () => { + const context = { + prompt: 'this is **bold** text and *italic* text', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + // *italic matches the pattern but *bold (inside **) also matches + // The regex /\*([a-z][\w-]*)/gi will match *bold and *italic + // but since there's no commands file, result is null + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should handle star-command embedded in sentence', () => { + fs.writeFileSync(path.join(tempDir, 'commands'), [ + '[*develop] COMMAND:', + '0. Start development', + ].join('\n')); + + const context = { + prompt: 'please *develop story SYN-5 now', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.metadata.commands).toContain('develop'); + }); + + test('should be case-insensitive for command matching', () => { + fs.writeFileSync(path.join(tempDir, 'commands'), [ + '[*Help] COMMAND:', + '0. 
Show help', + ].join('\n')); + + const context = { + prompt: '*HELP me', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.metadata.commands).toContain('help'); + }); + + test('should include inline content after command header', () => { + fs.writeFileSync(path.join(tempDir, 'commands'), [ + '[*quick] COMMAND inline content here', + '0. Quick rule one', + ].join('\n')); + + const context = { + prompt: '*quick', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules).toEqual( + expect.arrayContaining([ + expect.stringContaining('inline content here'), + ]), + ); + }); + + test('should work with _safeProcess wrapper', () => { + fs.writeFileSync(path.join(tempDir, 'commands'), [ + '[*test] COMMAND:', + '0. Run tests', + ].join('\n')); + + const context = { + prompt: '*test', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor._safeProcess(context); + + expect(result).not.toBeNull(); + expect(result.metadata.commands).toContain('test'); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/synapse-memory-provider.test.js +================================================== +```js +/** + * SynapseMemoryProvider Tests + * + * Tests for the pro-gated MIS retrieval provider used by MemoryBridge. 
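+ *
+ * Bracket → query mapping exercised by these tests (a summary sketch drawn
+ * from the assertions below, not a spec):
+ *   MODERATE → { layer: 1, limit: 3,  minRelevance: 0.7 }
+ *   DEPLETED → { layer: 2, limit: 5,  minRelevance: 0.5 }
+ *   CRITICAL → { layer: 3, limit: 10, minRelevance: 0.3 }
+ * Unknown brackets (e.g. FRESH) short-circuit to [] without querying.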
+ * + * @module tests/synapse/synapse-memory-provider + * @story SYN-10 - Pro Memory Bridge (Feature-Gated MIS Consumer) + */ + +jest.setTimeout(10000); + +// --------------------------------------------------------------------------- +// Mocks +// --------------------------------------------------------------------------- + +const mockFeatureGate = { + isAvailable: jest.fn(() => true), + require: jest.fn(), +}; + +jest.mock('../../pro/license/feature-gate', () => ({ + featureGate: mockFeatureGate, + FeatureGate: jest.fn(), +}), { virtual: true }); + +const mockQueryMemories = jest.fn(() => Promise.resolve([])); + +jest.mock('../../pro/memory/memory-loader', () => ({ + MemoryLoader: jest.fn().mockImplementation(() => ({ + queryMemories: mockQueryMemories, + })), + AGENT_SECTOR_PREFERENCES: { + dev: ['procedural', 'semantic'], + qa: ['reflective', 'episodic'], + architect: ['semantic', 'reflective'], + pm: ['episodic', 'semantic'], + po: ['episodic', 'semantic'], + sm: ['procedural', 'episodic'], + devops: ['procedural', 'episodic'], + analyst: ['semantic', 'reflective'], + 'data-engineer': ['procedural', 'semantic'], + 'ux-design-expert': ['reflective', 'procedural'], + }, +}), { virtual: true }); + +// --------------------------------------------------------------------------- +// Import (after mocks) — skip entire suite if pro/ submodule is absent (CI) +// --------------------------------------------------------------------------- + +let SynapseMemoryProvider, BRACKET_CONFIG, DEFAULT_SECTORS; +let proAvailable = true; + +try { + const mod = require('../../pro/memory/synapse-memory-provider'); + SynapseMemoryProvider = mod.SynapseMemoryProvider; + BRACKET_CONFIG = mod.BRACKET_CONFIG; + DEFAULT_SECTORS = mod.DEFAULT_SECTORS; +} catch { + proAvailable = false; +} + +const describeIfPro = proAvailable ? 
describe : describe.skip; + +// ============================================================================= +// SynapseMemoryProvider +// ============================================================================= + +describeIfPro('SynapseMemoryProvider', () => { + let provider; + + beforeEach(() => { + mockFeatureGate.require.mockReset(); + mockQueryMemories.mockReset(); + mockQueryMemories.mockResolvedValue([]); + provider = new SynapseMemoryProvider(); + }); + + // ------------------------------------------------------------------------- + // Construction + Feature Gate + // ------------------------------------------------------------------------- + + describe('construction', () => { + test('requires pro.memory.synapse feature', () => { + new SynapseMemoryProvider(); + expect(mockFeatureGate.require).toHaveBeenCalledWith( + 'pro.memory.synapse', + 'SYNAPSE Memory Bridge', + ); + }); + + test('throws when feature gate denies', () => { + mockFeatureGate.require.mockImplementation(() => { + throw new Error('Pro feature required'); + }); + expect(() => new SynapseMemoryProvider()).toThrow('Pro feature required'); + }); + }); + + // ------------------------------------------------------------------------- + // AC-4: Agent-scoped memory + // ------------------------------------------------------------------------- + + describe('agent-scoped sector filtering', () => { + test('@dev gets procedural + semantic sectors', async () => { + await provider.getMemories('dev', 'MODERATE', 100); + expect(mockQueryMemories).toHaveBeenCalledWith('dev', expect.objectContaining({ + sectors: ['procedural', 'semantic'], + })); + }); + + test('@qa gets reflective + episodic sectors', async () => { + await provider.getMemories('qa', 'MODERATE', 100); + expect(mockQueryMemories).toHaveBeenCalledWith('qa', expect.objectContaining({ + sectors: ['reflective', 'episodic'], + })); + }); + + test('@architect gets semantic + reflective sectors', async () => { + await 
provider.getMemories('architect', 'MODERATE', 100); + expect(mockQueryMemories).toHaveBeenCalledWith('architect', expect.objectContaining({ + sectors: ['semantic', 'reflective'], + })); + }); + + test('unknown agent gets default sector (semantic)', async () => { + await provider.getMemories('unknown-agent', 'MODERATE', 100); + expect(mockQueryMemories).toHaveBeenCalledWith('unknown-agent', expect.objectContaining({ + sectors: ['semantic'], + })); + }); + }); + + // ------------------------------------------------------------------------- + // AC-5: Session-level caching + // ------------------------------------------------------------------------- + + describe('session-level caching', () => { + test('caches results by agentId + bracket', async () => { + mockQueryMemories.mockResolvedValue([ + { content: 'cached', relevance: 0.8 }, + ]); + + const first = await provider.getMemories('dev', 'MODERATE', 100); + const second = await provider.getMemories('dev', 'MODERATE', 100); + + // queryMemories should only be called once (second call uses cache) + expect(mockQueryMemories).toHaveBeenCalledTimes(1); + expect(first).toEqual(second); + }); + + test('different brackets are cached separately', async () => { + mockQueryMemories.mockResolvedValue([]); + + await provider.getMemories('dev', 'MODERATE', 50); + await provider.getMemories('dev', 'DEPLETED', 200); + + expect(mockQueryMemories).toHaveBeenCalledTimes(2); + }); + + test('different agents are cached separately', async () => { + mockQueryMemories.mockResolvedValue([]); + + await provider.getMemories('dev', 'MODERATE', 50); + await provider.getMemories('qa', 'MODERATE', 50); + + expect(mockQueryMemories).toHaveBeenCalledTimes(2); + }); + + test('clearCache empties the cache', async () => { + mockQueryMemories.mockResolvedValue([ + { content: 'test', relevance: 0.5 }, + ]); + + await provider.getMemories('dev', 'MODERATE', 100); + provider.clearCache(); + await provider.getMemories('dev', 'MODERATE', 100); + + 
expect(mockQueryMemories).toHaveBeenCalledTimes(2); + }); + }); + + // ------------------------------------------------------------------------- + // Bracket configuration + // ------------------------------------------------------------------------- + + describe('bracket configuration', () => { + test('MODERATE uses layer 1, limit 3, minRelevance 0.7', async () => { + await provider.getMemories('dev', 'MODERATE', 50); + expect(mockQueryMemories).toHaveBeenCalledWith('dev', expect.objectContaining({ + layer: 1, + limit: 3, + minRelevance: 0.7, + })); + }); + + test('DEPLETED uses layer 2, limit 5, minRelevance 0.5', async () => { + await provider.getMemories('dev', 'DEPLETED', 200); + expect(mockQueryMemories).toHaveBeenCalledWith('dev', expect.objectContaining({ + layer: 2, + limit: 5, + minRelevance: 0.5, + })); + }); + + test('CRITICAL uses layer 3, limit 10, minRelevance 0.3', async () => { + await provider.getMemories('dev', 'CRITICAL', 1000); + expect(mockQueryMemories).toHaveBeenCalledWith('dev', expect.objectContaining({ + layer: 3, + limit: 10, + minRelevance: 0.3, + })); + }); + + test('unknown bracket returns []', async () => { + const result = await provider.getMemories('dev', 'FRESH', 100); + expect(result).toEqual([]); + expect(mockQueryMemories).not.toHaveBeenCalled(); + }); + }); + + // ------------------------------------------------------------------------- + // Transform to hints + // ------------------------------------------------------------------------- + + describe('hint transformation', () => { + test('transforms memories to hint format', async () => { + mockQueryMemories.mockResolvedValue([ + { content: 'Use absolute imports', relevance: 0.9, sector: 'procedural' }, + { content: 'Avoid any type', relevance: 0.7, sector: 'semantic' }, + ]); + + const hints = await provider.getMemories('dev', 'MODERATE', 100); + expect(hints.length).toBe(2); + expect(hints[0]).toMatchObject({ + content: 'Use absolute imports', + source: 'procedural', + 
relevance: 0.9, + }); + expect(hints[0]).toHaveProperty('tokens'); + }); + + test('respects token budget in transformation', async () => { + mockQueryMemories.mockResolvedValue([ + { content: 'x'.repeat(200), relevance: 0.9 }, + { content: 'y'.repeat(200), relevance: 0.8 }, + ]); + + // Budget of 60 tokens ~ 240 chars, first memory is 200 chars (50 tokens), second would exceed + const hints = await provider.getMemories('dev', 'MODERATE', 60); + expect(hints.length).toBe(1); + }); + + test('handles empty memories array', async () => { + mockQueryMemories.mockResolvedValue([]); + const hints = await provider.getMemories('dev', 'MODERATE', 100); + expect(hints).toEqual([]); + }); + + test('uses summary or title as fallback content', async () => { + mockQueryMemories.mockResolvedValue([ + { summary: 'Summary text', relevance: 0.6 }, + ]); + + const hints = await provider.getMemories('dev', 'MODERATE', 100); + expect(hints.length).toBe(1); + expect(hints[0].content).toBe('Summary text'); + }); + }); +}); + +// ============================================================================= +// Constants +// ============================================================================= + +describeIfPro('module constants', () => { + test('BRACKET_CONFIG has MODERATE, DEPLETED, CRITICAL', () => { + expect(BRACKET_CONFIG).toHaveProperty('MODERATE'); + expect(BRACKET_CONFIG).toHaveProperty('DEPLETED'); + expect(BRACKET_CONFIG).toHaveProperty('CRITICAL'); + }); + + test('DEFAULT_SECTORS is [semantic]', () => { + expect(DEFAULT_SECTORS).toEqual(['semantic']); + }); +}); + +``` + +================================================== +📄 tests/synapse/l4-task.test.js +================================================== +```js +/** + * L4 Task Processor Tests + * + * Tests for task detection, context formatting, no-task handling, + * graceful degradation, and metadata correctness. 
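+ *
+ * Session shape assumed by these fixtures (illustrative only):
+ *   session.active_task = { id, story, executor_type, started_at }
+ * Rules emitted: 'Active Task: <id>' always; 'Story: <story>' and
+ * 'Executor: <executor_type>' only when the corresponding field is present.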
+ * + * @story SYN-5 - Layer Processors L4-L7 + */ + +const LayerProcessor = require('../../.aios-core/core/synapse/layers/layer-processor'); +const L4TaskProcessor = require('../../.aios-core/core/synapse/layers/l4-task'); + +jest.setTimeout(30000); + +describe('L4TaskProcessor', () => { + let processor; + + beforeEach(() => { + processor = new L4TaskProcessor(); + }); + + describe('constructor', () => { + test('should extend LayerProcessor', () => { + expect(processor).toBeInstanceOf(LayerProcessor); + }); + + test('should set name to task', () => { + expect(processor.name).toBe('task'); + }); + + test('should set layer to 4', () => { + expect(processor.layer).toBe(4); + }); + + test('should set timeout to 20ms', () => { + expect(processor.timeout).toBe(20); + }); + }); + + describe('process()', () => { + test('should return task context when task is active', () => { + const context = { + prompt: 'implement the feature', + session: { + active_task: { + id: 'task-42', + story: 'SYN-5', + executor_type: 'dev', + started_at: '2026-02-11', + }, + }, + config: { + synapsePath: '/tmp/synapse', + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules).toContain('Active Task: task-42'); + expect(result.rules).toContain('Story: SYN-5'); + expect(result.rules).toContain('Executor: dev'); + expect(result.metadata.layer).toBe(4); + expect(result.metadata.taskId).toBe('task-42'); + expect(result.metadata.story).toBe('SYN-5'); + expect(result.metadata.executorType).toBe('dev'); + }); + + test('should return null when no active task', () => { + const context = { + prompt: '', + session: { active_task: null }, + config: { + synapsePath: '/tmp/synapse', + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when session has no active_task', () => { + const 
context = { + prompt: '', + session: {}, + config: { + synapsePath: '/tmp/synapse', + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when active_task has no id', () => { + const context = { + prompt: '', + session: { active_task: { story: 'SYN-5' } }, + config: { + synapsePath: '/tmp/synapse', + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when active_task id is null', () => { + const context = { + prompt: '', + session: { active_task: { id: null } }, + config: { + synapsePath: '/tmp/synapse', + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should handle task with only id (no story or executor_type)', () => { + const context = { + prompt: '', + session: { + active_task: { id: 'task-99' }, + }, + config: { + synapsePath: '/tmp/synapse', + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(1); + expect(result.rules[0]).toBe('Active Task: task-99'); + expect(result.metadata.taskId).toBe('task-99'); + expect(result.metadata.story).toBeNull(); + expect(result.metadata.executorType).toBeNull(); + }); + + test('should include story but omit executor when executor_type is missing', () => { + const context = { + prompt: '', + session: { + active_task: { id: 'task-10', story: 'MIS-3' }, + }, + config: { + synapsePath: '/tmp/synapse', + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result.rules).toHaveLength(2); + expect(result.rules[0]).toBe('Active Task: task-10'); + expect(result.rules[1]).toBe('Story: MIS-3'); + }); + + test('should 
include executor but omit story when story is missing', () => { + const context = { + prompt: '', + session: { + active_task: { id: 'task-77', executor_type: 'qa' }, + }, + config: { + synapsePath: '/tmp/synapse', + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result.rules).toHaveLength(2); + expect(result.rules[0]).toBe('Active Task: task-77'); + expect(result.rules[1]).toBe('Executor: qa'); + }); + + test('should work with _safeProcess wrapper', () => { + const context = { + prompt: '', + session: { + active_task: { id: 'task-1', story: 'SYN-1', executor_type: 'dev' }, + }, + config: { + synapsePath: '/tmp/synapse', + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor._safeProcess(context); + + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(3); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/l5-squad.test.js +================================================== +```js +/** + * L5 Squad Processor Tests + * + * Tests for squad discovery, namespace prefixing, cache TTL, + * merge rules, active squad prioritization, graceful degradation, + * and missing directory handling. + * + * @story SYN-5 - Layer Processors L4-L7 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const LayerProcessor = require('../../.aios-core/core/synapse/layers/layer-processor'); +const L5SquadProcessor = require('../../.aios-core/core/synapse/layers/l5-squad'); + +jest.setTimeout(30000); + +function createTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-l5-test-')); +} + +function cleanupTempDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +/** + * Helper to create a squad structure with .synapse/manifest and domain files. 
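+ *
+ * Layout produced (sketch): squads/<squadName>/.synapse/manifest plus one
+ * sibling file per domainFiles entry, e.g. squads/copy-chief/.synapse/headlines.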
+ */ +function createSquad(projectRoot, squadName, manifestContent, domainFiles = {}) { + const squadDir = path.join(projectRoot, 'squads', squadName, '.synapse'); + fs.mkdirSync(squadDir, { recursive: true }); + fs.writeFileSync(path.join(squadDir, 'manifest'), manifestContent); + for (const [fileName, content] of Object.entries(domainFiles)) { + fs.writeFileSync(path.join(squadDir, fileName), content); + } +} + +describe('L5SquadProcessor', () => { + let tempDir; + let processor; + let synapsePath; + + beforeEach(() => { + tempDir = createTempDir(); + synapsePath = path.join(tempDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + processor = new L5SquadProcessor(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + describe('constructor', () => { + test('should extend LayerProcessor', () => { + expect(processor).toBeInstanceOf(LayerProcessor); + }); + + test('should set name to squad', () => { + expect(processor.name).toBe('squad'); + }); + + test('should set layer to 5', () => { + expect(processor.layer).toBe(5); + }); + + test('should set timeout to 20ms', () => { + expect(processor.timeout).toBe(20); + }); + }); + + describe('process()', () => { + test('should discover squad domains and return rules', () => { + createSquad(tempDir, 'copy-chief', [ + 'HEADLINES_STATE=active', + 'HEADLINES_RECALL=headline,title', + ].join('\n'), { + 'headlines': 'HEADLINES_RULE_1=Write compelling headlines\nHEADLINES_RULE_2=Use power words', + }); + + const context = { + prompt: '', + session: {}, + config: { + synapsePath, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules.length).toBeGreaterThan(0); + expect(result.metadata.layer).toBe(5); + expect(result.metadata.squadsFound).toBe(1); + expect(result.metadata.domainsLoaded.length).toBeGreaterThan(0); + }); + + test('should namespace domain keys with squad name uppercase', () => { + 
createSquad(tempDir, 'my-squad', [ + 'RULES_STATE=active', + ].join('\n'), { + 'rules': 'Rule one\nRule two', + }); + + const context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.metadata.domainsLoaded).toEqual( + expect.arrayContaining([expect.stringContaining('MY-SQUAD_')]), + ); + }); + + test('should return null when squads/ directory is missing', () => { + // tempDir has no squads/ directory + const context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when squads/ exists but no squad has .synapse/', () => { + fs.mkdirSync(path.join(tempDir, 'squads', 'empty-squad'), { recursive: true }); + + const context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when squad manifest has no domains', () => { + const squadDir = path.join(tempDir, 'squads', 'bare-squad', '.synapse'); + fs.mkdirSync(squadDir, { recursive: true }); + fs.writeFileSync(path.join(squadDir, 'manifest'), 'DEVMODE=true\n'); + + const context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should discover multiple squads', () => { + createSquad(tempDir, 'squad-alpha', 'ALPHA_STATE=active\n', { + 'alpha': 'Alpha rule 1', + }); + createSquad(tempDir, 'squad-beta', 'BETA_STATE=active\n', { + 'beta': 'Beta rule 1', + }); + + const context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + 
previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.metadata.squadsFound).toBe(2); + }); + + test('should prioritize active squad domains', () => { + createSquad(tempDir, 'active-squad', 'ACTIVE_STATE=active\n', { + 'active': 'Active squad rule', + }); + createSquad(tempDir, 'passive-squad', 'PASSIVE_STATE=active\n', { + 'passive': 'Passive squad rule', + }); + + const context = { + prompt: '', + session: { + active_squad: { name: 'active-squad', path: 'squads/active-squad' }, + }, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + // Active squad rules should come first + expect(result.rules[0]).toBe('Active squad rule'); + }); + + test('should return null when squads have manifests but domain files are empty', () => { + const squadDir = path.join(tempDir, 'squads', 'empty-domains-squad', '.synapse'); + fs.mkdirSync(squadDir, { recursive: true }); + fs.writeFileSync(path.join(squadDir, 'manifest'), 'EMPTY_STATE=active\n'); + // Domain file exists but is empty + fs.writeFileSync(path.join(squadDir, 'empty'), ''); + + const context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should skip squad when merge mode is none', () => { + const squadDir = path.join(tempDir, 'squads', 'opt-out', '.synapse'); + fs.mkdirSync(squadDir, { recursive: true }); + fs.writeFileSync(path.join(squadDir, 'rules'), 'Should not load'); + + // Test _loadSquadDomains directly with manifest containing EXTENDS=none + const manifest = { + domains: { + 'OPT-OUT_EXTENDS': { file: 'none' }, + 'RULES': { file: 'rules' }, + }, + }; + + const allRules = []; + const domainsLoaded = []; + processor._loadSquadDomains('opt-out', manifest, path.join(tempDir, 
'squads'), allRules, domainsLoaded); + + expect(allRules).toHaveLength(0); + expect(domainsLoaded).toHaveLength(0); + }); + + test('should handle _scanSquads error when squads dir is unreadable', () => { + // Pass a file path instead of directory to trigger readdirSync error + const fakeSquadsDir = path.join(tempDir, 'not-a-dir'); + fs.writeFileSync(fakeSquadsDir, 'I am a file'); + + const result = processor._scanSquads(fakeSquadsDir); + expect(result).toEqual({}); + }); + }); + + describe('cache', () => { + test('should create cache file after first scan', () => { + createSquad(tempDir, 'cached-squad', 'DATA_STATE=active\n', { + 'data': 'Cached rule', + }); + + const context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + processor.process(context); + + const cachePath = path.join(synapsePath, 'cache', 'squad-manifests.json'); + expect(fs.existsSync(cachePath)).toBe(true); + + const cached = JSON.parse(fs.readFileSync(cachePath, 'utf8')); + expect(cached.timestamp).toBeDefined(); + expect(cached.manifests).toBeDefined(); + }); + + test('should use cache when fresh (within TTL)', () => { + createSquad(tempDir, 'ttl-squad', 'TTL_STATE=active\n', { + 'ttl': 'TTL rule', + }); + + const context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + // First call: scan + cache + processor.process(context); + + // Add a new squad after cache is written + createSquad(tempDir, 'new-squad', 'NEW_STATE=active\n', { + 'new': 'New rule', + }); + + // Second call: should use cache (new squad NOT discovered) + const result = processor.process(context); + expect(result).not.toBeNull(); + // Should only find 1 squad (from cache), not 2 + expect(result.metadata.squadsFound).toBe(1); + }); + + test('should rescan when cache is stale', () => { + createSquad(tempDir, 'stale-squad', 'STALE_STATE=active\n', { + 'stale': 'Stale rule', + }); + + const 
context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + // First call: scan + cache + processor.process(context); + + // Make cache stale by setting timestamp in the past + const cachePath = path.join(synapsePath, 'cache', 'squad-manifests.json'); + const cached = JSON.parse(fs.readFileSync(cachePath, 'utf8')); + cached.timestamp = Date.now() - 120000; // 2 minutes ago + fs.writeFileSync(cachePath, JSON.stringify(cached)); + + // Remove squad to verify rescan happens + fs.rmSync(path.join(tempDir, 'squads'), { recursive: true, force: true }); + fs.mkdirSync(path.join(tempDir, 'squads'), { recursive: true }); + + // Should rescan and find nothing + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should handle corrupt cache gracefully', () => { + createSquad(tempDir, 'corrupt-squad', 'CORRUPT_STATE=active\n', { + 'corrupt': 'Corrupt cache rule', + }); + + // Write corrupt cache + const cacheDir = path.join(synapsePath, 'cache'); + fs.mkdirSync(cacheDir, { recursive: true }); + fs.writeFileSync(path.join(cacheDir, 'squad-manifests.json'), 'NOT JSON!!!'); + + const context = { + prompt: '', + session: {}, + config: { synapsePath, manifest: { domains: {} } }, + previousLayers: [], + }; + + // Should fallback to full scan + const result = processor.process(context); + expect(result).not.toBeNull(); + expect(result.rules).toContain('Corrupt cache rule'); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/hook-runtime.test.js +================================================== +```js +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { + resolveHookRuntime, + buildHookOutput, +} = require('../../.aios-core/core/synapse/runtime/hook-runtime'); + +function makeTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'hook-runtime-')); +} + +function 
writeFile(filePath, content) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, content, 'utf8'); +} + +describe('hook-runtime', () => { + it('returns null when cwd is missing', () => { + expect(resolveHookRuntime({})).toBeNull(); + expect(resolveHookRuntime()).toBeNull(); + }); + + it('returns null when .synapse folder does not exist', () => { + const cwd = makeTempDir(); + try { + expect(resolveHookRuntime({ cwd, sessionId: 'abc' })).toBeNull(); + } finally { + fs.rmSync(cwd, { recursive: true, force: true }); + } + }); + + it('resolves runtime when required modules and .synapse exist', () => { + const cwd = makeTempDir(); + try { + fs.mkdirSync(path.join(cwd, '.synapse', 'sessions'), { recursive: true }); + + writeFile( + path.join(cwd, '.aios-core/core/synapse/session/session-manager.js'), + "module.exports = { loadSession: () => ({ prompt_count: 7, id: 's-1' }) };", + ); + writeFile( + path.join(cwd, '.aios-core/core/synapse/engine.js'), + [ + 'class SynapseEngine {', + ' constructor(synapsePath) {', + ' this.synapsePath = synapsePath;', + ' }', + '}', + 'module.exports = { SynapseEngine };', + ].join('\n'), + ); + + const result = resolveHookRuntime({ cwd, sessionId: 's-1' }); + expect(result).toBeTruthy(); + expect(result.session).toEqual({ prompt_count: 7, id: 's-1' }); + expect(result.engine).toBeTruthy(); + expect(result.engine.synapsePath).toBe(path.join(cwd, '.synapse')); + } finally { + fs.rmSync(cwd, { recursive: true, force: true }); + } + }); + + it('builds normalized hook output for xml and falsy values', () => { + // XML input is passed through unchanged (literal reconstructed; the + // original angle-bracketed string was stripped in transit) + expect(buildHookOutput('<synapse-rules>ok</synapse-rules>')).toEqual({ + hookSpecificOutput: { additionalContext: '<synapse-rules>ok</synapse-rules>' }, + }); + expect(buildHookOutput('')).toEqual({ + hookSpecificOutput: { additionalContext: '' }, + }); + expect(buildHookOutput(null)).toEqual({ + hookSpecificOutput: { additionalContext: '' }, + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/l6-keyword.test.js 
+================================================== +```js +/** + * L6 Keyword Processor Tests + * + * Tests for keyword matching, exclusion, deduplication against + * previous layers, no-match handling, and metadata correctness. + * + * @story SYN-5 - Layer Processors L4-L7 + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const LayerProcessor = require('../../.aios-core/core/synapse/layers/layer-processor'); +const L6KeywordProcessor = require('../../.aios-core/core/synapse/layers/l6-keyword'); + +jest.setTimeout(30000); + +function createTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-l6-test-')); +} + +function cleanupTempDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +describe('L6KeywordProcessor', () => { + let tempDir; + let processor; + + beforeEach(() => { + tempDir = createTempDir(); + processor = new L6KeywordProcessor(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + describe('constructor', () => { + test('should extend LayerProcessor', () => { + expect(processor).toBeInstanceOf(LayerProcessor); + }); + + test('should set name to keyword', () => { + expect(processor.name).toBe('keyword'); + }); + + test('should set layer to 6', () => { + expect(processor.layer).toBe(6); + }); + + test('should set timeout to 15ms', () => { + expect(processor.timeout).toBe(15); + }); + }); + + describe('process()', () => { + test('should match keyword and load domain rules', () => { + fs.writeFileSync(path.join(tempDir, 'security'), [ + 'SEC_RULE_1=Validate all user inputs', + 'SEC_RULE_2=Use parameterized queries', + ].join('\n')); + + const context = { + prompt: 'check the security of this code', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: [], + domains: { + SECURITY: { + file: 'security', + recall: ['security', 'vulnerability'], + }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + 
expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(2); + expect(result.rules[0]).toContain('Validate all user inputs'); + expect(result.metadata.layer).toBe(6); + expect(result.metadata.matchedDomains).toContain('SECURITY'); + expect(result.metadata.skippedDuplicates).toHaveLength(0); + }); + + test('should match multiple domains with different keywords', () => { + fs.writeFileSync(path.join(tempDir, 'testing'), 'TEST_RULE_1=Write tests first\n'); + fs.writeFileSync(path.join(tempDir, 'performance'), 'PERF_RULE_1=Optimize hot paths\n'); + + const context = { + prompt: 'I need to write tests and check performance', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: [], + domains: { + TESTING: { file: 'testing', recall: ['test', 'testing'] }, + PERFORMANCE: { file: 'performance', recall: ['performance', 'optimize'] }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.rules).toHaveLength(2); + expect(result.metadata.matchedDomains).toEqual( + expect.arrayContaining(['TESTING', 'PERFORMANCE']), + ); + }); + + test('should return null when no keywords match', () => { + const context = { + prompt: 'just a random prompt', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: [], + domains: { + SECURITY: { file: 'security', recall: ['security'] }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when prompt is empty', () => { + const context = { + prompt: '', + session: {}, + config: { + synapsePath: tempDir, + manifest: { domains: {} }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should return null when no domains have recall keywords', () => { + const context = { + prompt: 'some prompt', + session: {}, + config: { + synapsePath: 
tempDir, + manifest: { + globalExclude: [], + domains: { + AGENT_DEV: { file: 'agent-dev', agentTrigger: 'dev' }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should respect global exclusion', () => { + fs.writeFileSync(path.join(tempDir, 'security'), 'SEC_RULE_1=Rule\n'); + + const context = { + prompt: 'security skip-rules please', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: ['skip-rules'], + domains: { + SECURITY: { file: 'security', recall: ['security'] }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should respect domain-level exclusion', () => { + fs.writeFileSync(path.join(tempDir, 'security'), 'SEC_RULE_1=Rule\n'); + + const context = { + prompt: 'security but no-inject please', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: [], + domains: { + SECURITY: { + file: 'security', + recall: ['security'], + exclude: ['no-inject'], + }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should deduplicate domains already loaded by previous layers', () => { + fs.writeFileSync(path.join(tempDir, 'agent-dev'), 'DEV_RULE_1=Dev rule\n'); + + const context = { + prompt: 'dev agent help', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: [], + domains: { + AGENT_DEV: { + file: 'agent-dev', + recall: ['dev'], + }, + }, + }, + }, + previousLayers: [ + { + name: 'agent', + metadata: { layer: 2, source: 'agent-dev', agentId: 'dev' }, + }, + ], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should track skipped duplicates in metadata', () => { + fs.writeFileSync(path.join(tempDir, 'agent-dev'), 'DEV_RULE_1=Dev rule\n'); + fs.writeFileSync(path.join(tempDir, 'testing'), 
'TEST_RULE_1=Test rule\n'); + + const context = { + prompt: 'dev testing help', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: [], + domains: { + AGENT_DEV: { file: 'agent-dev', recall: ['dev'] }, + TESTING: { file: 'testing', recall: ['testing'] }, + }, + }, + }, + previousLayers: [ + { + name: 'agent', + metadata: { layer: 2, source: 'agent-dev', agentId: 'dev' }, + }, + ], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.metadata.matchedDomains).toContain('TESTING'); + expect(result.metadata.skippedDuplicates).toContain('AGENT_DEV'); + }); + + test('should handle domain file missing gracefully', () => { + // Domain matches keyword but file doesn't exist + const context = { + prompt: 'security check', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: [], + domains: { + SECURITY: { file: 'nonexistent-file', recall: ['security'] }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + + test('should track domainsLoaded from squad layer in previousLayers', () => { + fs.writeFileSync(path.join(tempDir, 'testing'), 'TEST_RULE_1=Test rule\n'); + + const context = { + prompt: 'testing help', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: [], + domains: { + TESTING: { file: 'testing', recall: ['testing'] }, + SQUAD_ALPHA_TESTING: { file: 'testing', recall: ['testing'] }, + }, + }, + }, + previousLayers: [ + { + name: 'squad', + metadata: { + layer: 5, + squadsFound: 1, + domainsLoaded: ['SQUAD_ALPHA_TESTING'], + }, + }, + ], + }; + + const result = processor.process(context); + + expect(result).not.toBeNull(); + expect(result.metadata.matchedDomains).toContain('TESTING'); + expect(result.metadata.skippedDuplicates).toContain('SQUAD_ALPHA_TESTING'); + }); + + test('should handle empty recall array', () => { + const context = { + prompt: 'test 
something', + session: {}, + config: { + synapsePath: tempDir, + manifest: { + globalExclude: [], + domains: { + TESTING: { file: 'testing', recall: [] }, + }, + }, + }, + previousLayers: [], + }; + + const result = processor.process(context); + expect(result).toBeNull(); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/engine.test.js +================================================== +```js +/** + * SynapseEngine + PipelineMetrics Tests + * + * Tests for the 8-layer pipeline orchestrator, bracket-aware filtering, + * fallback strategies, pipeline timeout, and PipelineMetrics class. + * + * @module tests/synapse/engine + * @story SYN-6 - SynapseEngine Orchestrator + Output Formatter + */ + +jest.setTimeout(30000); + +// --------------------------------------------------------------------------- +// Mocks — must be set BEFORE require +// --------------------------------------------------------------------------- + +// Mock context-tracker (SYN-3) +jest.mock('../../.aios-core/core/synapse/context/context-tracker', () => ({ + estimateContextPercent: jest.fn(() => 85), + calculateBracket: jest.fn(() => 'FRESH'), + getActiveLayers: jest.fn(() => ({ layers: [0, 1, 2, 7], memoryHints: false, handoffWarning: false })), + getTokenBudget: jest.fn(() => 800), + needsMemoryHints: jest.fn(() => false), + needsHandoffWarning: jest.fn(() => false), +})); + +// Mock formatter (wrapping tags reconstructed; the original angle-bracketed +// literal was stripped in transit) +jest.mock('../../.aios-core/core/synapse/output/formatter', () => ({ + formatSynapseRules: jest.fn(() => '<synapse-rules>\nmocked\n</synapse-rules>'), +})); + +// Mock layer modules — provide fake layer classes +const mockLayerModules = {}; + +jest.mock('../../.aios-core/core/synapse/layers/l0-constitution', () => { + const cls = class MockL0 { + constructor() { this.name = 'constitution'; this.layer = 0; this.timeout = 5; } + _safeProcess(ctx) { return { rules: ['ART.I: CLI First'], metadata: { layer: 0, source: 'constitution' } }; } + }; + mockLayerModules.L0 = cls; + return cls; +}, { 
virtual: true }); + +jest.mock('../../.aios-core/core/synapse/layers/l1-global', () => { + const cls = class MockL1 { + constructor() { this.name = 'global'; this.layer = 1; this.timeout = 10; } + _safeProcess(ctx) { return { rules: ['Global rule 1'], metadata: { layer: 1, source: 'global' } }; } + }; + mockLayerModules.L1 = cls; + return cls; +}, { virtual: true }); + +jest.mock('../../.aios-core/core/synapse/layers/l2-agent', () => { + const cls = class MockL2 { + constructor() { this.name = 'agent'; this.layer = 2; this.timeout = 10; } + _safeProcess(ctx) { return { rules: ['Agent rule 1'], metadata: { layer: 2, source: 'agent' } }; } + }; + mockLayerModules.L2 = cls; + return cls; +}, { virtual: true }); + +jest.mock('../../.aios-core/core/synapse/layers/l3-workflow', () => { + const cls = class MockL3 { + constructor() { this.name = 'workflow'; this.layer = 3; this.timeout = 10; } + _safeProcess(ctx) { return { rules: ['Workflow rule 1'], metadata: { layer: 3, source: 'workflow' } }; } + }; + mockLayerModules.L3 = cls; + return cls; +}, { virtual: true }); + +// L4-L7: simulate missing modules (MODULE_NOT_FOUND with proper code) +jest.mock('../../.aios-core/core/synapse/layers/l4-task', () => { + const err = new Error("Cannot find module './layers/l4-task'"); + err.code = 'MODULE_NOT_FOUND'; + throw err; +}, { virtual: true }); + +jest.mock('../../.aios-core/core/synapse/layers/l5-squad', () => { + const err = new Error("Cannot find module './layers/l5-squad'"); + err.code = 'MODULE_NOT_FOUND'; + throw err; +}, { virtual: true }); + +jest.mock('../../.aios-core/core/synapse/layers/l6-keyword', () => { + const err = new Error("Cannot find module './layers/l6-keyword'"); + err.code = 'MODULE_NOT_FOUND'; + throw err; +}, { virtual: true }); + +jest.mock('../../.aios-core/core/synapse/layers/l7-star-command', () => { + const err = new Error("Cannot find module './layers/l7-star-command'"); + err.code = 'MODULE_NOT_FOUND'; + throw err; +}, { virtual: true }); + +// 
Mock memory bridge (SYN-10) +const mockGetMemoryHints = jest.fn(() => Promise.resolve([])); +jest.mock('../../.aios-core/core/synapse/memory/memory-bridge', () => ({ + MemoryBridge: jest.fn().mockImplementation(() => ({ + getMemoryHints: mockGetMemoryHints, + clearCache: jest.fn(), + _reset: jest.fn(), + })), + BRACKET_LAYER_MAP: { FRESH: { layer: 0, maxTokens: 0 }, MODERATE: { layer: 1, maxTokens: 50 } }, + BRIDGE_TIMEOUT_MS: 15, +})); + +// --------------------------------------------------------------------------- +// Imports (after mocks) +// --------------------------------------------------------------------------- + +const { SynapseEngine, PipelineMetrics, PIPELINE_TIMEOUT_MS } = require('../../.aios-core/core/synapse/engine'); +const contextTracker = require('../../.aios-core/core/synapse/context/context-tracker'); +const formatter = require('../../.aios-core/core/synapse/output/formatter'); + +// ============================================================================= +// PipelineMetrics +// ============================================================================= + +describe('PipelineMetrics', () => { + let metrics; + + beforeEach(() => { + metrics = new PipelineMetrics(); + }); + + test('should initialize with empty state', () => { + expect(metrics.layers).toEqual({}); + expect(metrics.totalStart).toBeNull(); + expect(metrics.totalEnd).toBeNull(); + }); + + test('startLayer() should record start time and running status', () => { + metrics.startLayer('constitution'); + expect(metrics.layers.constitution).toBeDefined(); + expect(metrics.layers.constitution.status).toBe('running'); + expect(typeof metrics.layers.constitution.start).toBe('bigint'); + }); + + test('endLayer() should record duration and rules count', () => { + metrics.startLayer('agent'); + metrics.endLayer('agent', 5); + const layer = metrics.layers.agent; + expect(layer.status).toBe('ok'); + expect(layer.rules).toBe(5); + expect(typeof layer.duration).toBe('number'); + 
expect(layer.duration).toBeGreaterThanOrEqual(0); + }); + + test('endLayer() without startLayer() should still record', () => { + metrics.endLayer('unknown', 3); + expect(metrics.layers.unknown.status).toBe('ok'); + expect(metrics.layers.unknown.rules).toBe(3); + }); + + test('skipLayer() should record skip reason', () => { + metrics.skipLayer('squad', 'Not active in FRESH'); + expect(metrics.layers.squad.status).toBe('skipped'); + expect(metrics.layers.squad.reason).toBe('Not active in FRESH'); + }); + + test('errorLayer() should record error message', () => { + metrics.errorLayer('keyword', new Error('File not found')); + expect(metrics.layers.keyword.status).toBe('error'); + expect(metrics.layers.keyword.error).toBe('File not found'); + }); + + test('errorLayer() should record duration if startLayer() was called', () => { + metrics.startLayer('workflow'); + metrics.errorLayer('workflow', new Error('Timeout')); + const layer = metrics.layers.workflow; + expect(layer.status).toBe('error'); + expect(typeof layer.duration).toBe('number'); + }); + + test('errorLayer() should handle non-Error objects', () => { + metrics.errorLayer('test', 'string error'); + expect(metrics.layers.test.error).toBe('string error'); + }); + + test('getSummary() should return correct totals', () => { + metrics.totalStart = BigInt(1000000000); + metrics.totalEnd = BigInt(1050000000); + + metrics.startLayer('l0'); + metrics.endLayer('l0', 6); + metrics.startLayer('l1'); + metrics.endLayer('l1', 2); + metrics.skipLayer('l3', 'Not active'); + metrics.errorLayer('l4', new Error('fail')); + + const summary = metrics.getSummary(); + expect(summary.total_ms).toBe(50); + expect(summary.layers_loaded).toBe(2); + expect(summary.layers_skipped).toBe(1); + expect(summary.layers_errored).toBe(1); + expect(summary.total_rules).toBe(8); + expect(summary.per_layer).toBeDefined(); + }); + + test('getSummary() should return 0 total_ms when no timestamps set', () => { + const summary = metrics.getSummary(); + 
expect(summary.total_ms).toBe(0); + }); +}); + +// ============================================================================= +// SynapseEngine +// ============================================================================= + +describe('SynapseEngine', () => { + let engine; + + beforeEach(() => { + jest.clearAllMocks(); + + // Default mocks: FRESH bracket with L0, L1, L2, L7 + contextTracker.estimateContextPercent.mockReturnValue(85); + contextTracker.calculateBracket.mockReturnValue('FRESH'); + contextTracker.getActiveLayers.mockReturnValue({ + layers: [0, 1, 2, 7], + memoryHints: false, + handoffWarning: false, + }); + contextTracker.getTokenBudget.mockReturnValue(800); + contextTracker.needsMemoryHints.mockReturnValue(false); + contextTracker.needsHandoffWarning.mockReturnValue(false); + + mockGetMemoryHints.mockReset(); + mockGetMemoryHints.mockResolvedValue([]); + + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(); + engine = new SynapseEngine('/fake/.synapse', { manifest: {} }); + warnSpy.mockRestore(); + }); + + describe('constructor', () => { + test('should store synapsePath and config', () => { + expect(engine.synapsePath).toBe('/fake/.synapse'); + expect(engine.config).toEqual({ manifest: {} }); + }); + + test('should instantiate available layers', () => { + // L0, L1, L2, L3 are mocked as available; L4-L7 throw + expect(engine.layers.length).toBeGreaterThanOrEqual(3); + }); + + test('should handle all layer modules failing gracefully', () => { + // This is tested implicitly — L4-L7 throw, engine still works + expect(engine.layers.length).toBeLessThanOrEqual(4); + }); + }); + + describe('process() — basic pipeline', () => { + test('should return xml and metrics', async () => { + const result = await engine.process('test prompt', { prompt_count: 1 }); + expect(result).toHaveProperty('xml'); + expect(result).toHaveProperty('metrics'); + expect(typeof result.xml).toBe('string'); + expect(typeof result.metrics).toBe('object'); + }); + + 
test('should call context-tracker with prompt_count', async () => { + await engine.process('test', { prompt_count: 5 }); + expect(contextTracker.estimateContextPercent).toHaveBeenCalledWith(5); + }); + + test('should call calculateBracket with context percent', async () => { + contextTracker.estimateContextPercent.mockReturnValue(72); + await engine.process('test', { prompt_count: 10 }); + expect(contextTracker.calculateBracket).toHaveBeenCalledWith(72); + }); + + test('should call getActiveLayers with bracket', async () => { + contextTracker.calculateBracket.mockReturnValue('MODERATE'); + await engine.process('test', {}); + expect(contextTracker.getActiveLayers).toHaveBeenCalledWith('MODERATE'); + }); + + test('should call formatSynapseRules with correct args', async () => { + await engine.process('test', { prompt_count: 0 }); + expect(formatter.formatSynapseRules).toHaveBeenCalledTimes(1); + + const args = formatter.formatSynapseRules.mock.calls[0]; + // results, bracket, contextPercent, session, devmode, metrics, tokenBudget, showHandoffWarning + expect(args[1]).toBe('FRESH'); // bracket + expect(args[2]).toBe(85); // contextPercent + expect(args[4]).toBe(false); // devmode (default) + expect(args[6]).toBe(800); // tokenBudget + expect(args[7]).toBe(false); // showHandoffWarning + }); + + test('should pass devmode=true when config has devmode', async () => { + await engine.process('test', {}, { devmode: true }); + const args = formatter.formatSynapseRules.mock.calls[0]; + expect(args[4]).toBe(true); + }); + + test('should default prompt_count to 0 when session is null', async () => { + await engine.process('test', null); + expect(contextTracker.estimateContextPercent).toHaveBeenCalledWith(0); + }); + }); + + describe('process() — bracket-aware filtering', () => { + test('should skip layers not in active bracket (FRESH skips L3)', async () => { + // FRESH has layers [0,1,2,7] — L3 (workflow) should be skipped + const result = await engine.process('test', { 
prompt_count: 1 }); + + // Check metrics — workflow should be skipped + const summary = result.metrics; + const workflowEntry = summary.per_layer.workflow; + if (workflowEntry) { + expect(workflowEntry.status).toBe('skipped'); + expect(workflowEntry.reason).toContain('Not active'); + } + }); + + test('should execute all L0-L7 in MODERATE bracket', async () => { + contextTracker.getActiveLayers.mockReturnValue({ + layers: [0, 1, 2, 3, 4, 5, 6, 7], + memoryHints: false, + handoffWarning: false, + }); + + const result = await engine.process('test', { prompt_count: 30 }); + // L0, L1, L2 should be loaded; L3 also since MODERATE allows it + expect(result.metrics.layers_loaded).toBeGreaterThanOrEqual(3); + }); + }); + + describe('process() — fallback and edge cases', () => { + test('should return empty xml when no results', async () => { + // Make all layers return null + for (const layer of engine.layers) { + layer._safeProcess = jest.fn(() => null); + } + + formatter.formatSynapseRules.mockReturnValue(''); + const result = await engine.process('test', {}); + expect(result.xml).toBe(''); + }); + + test('should return empty when getActiveLayers returns null', async () => { + contextTracker.getActiveLayers.mockReturnValue(null); + const result = await engine.process('test', {}); + expect(result.xml).toBe(''); + expect(result.metrics.total_ms).toBeGreaterThanOrEqual(0); + }); + + test('should handle session without prompt_count', async () => { + const result = await engine.process('test', {}); + expect(contextTracker.estimateContextPercent).toHaveBeenCalledWith(0); + expect(result).toHaveProperty('xml'); + }); + + test('should accumulate previousLayers across layer executions', async () => { + // Spy on _safeProcess to verify previousLayers grows + const calls = []; + for (const layer of engine.layers) { + const orig = layer._safeProcess.bind(layer); + layer._safeProcess = jest.fn((ctx) => { + calls.push({ name: layer.name, prevCount: ctx.previousLayers.length }); + return 
orig(ctx); + }); + } + + await engine.process('test', {}); + + // First active layer should have 0 previousLayers + if (calls.length > 0) { + expect(calls[0].prevCount).toBe(0); + } + // Each subsequent layer should have more + for (let i = 1; i < calls.length; i++) { + expect(calls[i].prevCount).toBeGreaterThanOrEqual(calls[i - 1].prevCount); + } + }); + }); + + describe('process() — metrics', () => { + test('should have total_ms in metrics', async () => { + const result = await engine.process('test', {}); + expect(typeof result.metrics.total_ms).toBe('number'); + expect(result.metrics.total_ms).toBeGreaterThanOrEqual(0); + }); + + test('should count loaded and skipped layers', async () => { + const result = await engine.process('test', { prompt_count: 1 }); + const m = result.metrics; + expect(m.layers_loaded + m.layers_skipped + m.layers_errored).toBeGreaterThanOrEqual(1); + }); + + test('should count total rules', async () => { + const result = await engine.process('test', {}); + expect(typeof result.metrics.total_rules).toBe('number'); + }); + }); + + describe('PIPELINE_TIMEOUT_MS constant', () => { + test('should be 100ms', () => { + expect(PIPELINE_TIMEOUT_MS).toBe(100); + }); + }); + + describe('process() — handoff warning', () => { + test('should pass showHandoffWarning=true when bracket needs it', async () => { + contextTracker.needsHandoffWarning.mockReturnValue(true); + await engine.process('test', {}); + const args = formatter.formatSynapseRules.mock.calls[0]; + expect(args[7]).toBe(true); + }); + }); + + describe('process() — null/invalid processConfig guard', () => { + test('should handle null processConfig without throwing', async () => { + const result = await engine.process('test', { prompt_count: 1 }, null); + expect(result).toHaveProperty('xml'); + expect(result).toHaveProperty('metrics'); + }); + + test('should handle non-object processConfig (string)', async () => { + const result = await engine.process('test', { prompt_count: 1 }, 'invalid'); 
+ expect(result).toHaveProperty('xml'); + }); + + test('should handle undefined processConfig', async () => { + const result = await engine.process('test', { prompt_count: 1 }, undefined); + expect(result).toHaveProperty('xml'); + }); + + test('should handle numeric processConfig', async () => { + const result = await engine.process('test', { prompt_count: 1 }, 42); + expect(result).toHaveProperty('xml'); + }); + }); + + describe('process() — memory bridge integration (SYN-10)', () => { + test('should call memoryBridge.getMemoryHints when needsMemoryHints is true', async () => { + contextTracker.needsMemoryHints.mockReturnValue(true); + contextTracker.calculateBracket.mockReturnValue('MODERATE'); + contextTracker.getTokenBudget.mockReturnValue(500); + mockGetMemoryHints.mockResolvedValue([ + { content: 'Use absolute imports', source: 'procedural', relevance: 0.9, tokens: 5 }, + ]); + + await engine.process('test', { prompt_count: 30, activeAgent: 'dev' }); + + expect(mockGetMemoryHints).toHaveBeenCalledWith('dev', 'MODERATE', 500); + + // Verify hints were passed to formatter via results array + const formatterCall = formatter.formatSynapseRules.mock.calls[0]; + const resultsArg = formatterCall[0]; // first arg = results array + const memoryResult = resultsArg.find(r => r.metadata?.source === 'memory'); + expect(memoryResult).toBeDefined(); + expect(memoryResult.rules).toEqual([ + { content: 'Use absolute imports', source: 'procedural', relevance: 0.9, tokens: 5 }, + ]); + }); + + test('should NOT call memoryBridge.getMemoryHints when needsMemoryHints is false', async () => { + contextTracker.needsMemoryHints.mockReturnValue(false); + await engine.process('test', { prompt_count: 1 }); + expect(mockGetMemoryHints).not.toHaveBeenCalled(); + }); + + test('should use active_agent fallback when activeAgent is missing', async () => { + contextTracker.needsMemoryHints.mockReturnValue(true); + contextTracker.calculateBracket.mockReturnValue('DEPLETED'); + 
contextTracker.getTokenBudget.mockReturnValue(300); + mockGetMemoryHints.mockResolvedValue([]); + + await engine.process('test', { prompt_count: 50, active_agent: 'qa' }); + + expect(mockGetMemoryHints).toHaveBeenCalledWith('qa', 'DEPLETED', 300); + }); + }); + + describe('process() — edge cases for coverage', () => { + test('should handle layer returning non-array rules (invalid result format)', async () => { + // Make a layer return an object without rules array + if (engine.layers.length > 0) { + engine.layers[0]._safeProcess = jest.fn(() => ({ + rules: 'not-an-array', + metadata: { source: 'test' }, + })); + } + + const result = await engine.process('test', {}); + // Should not crash — invalid result is skipped + expect(result).toHaveProperty('xml'); + if (engine.layers.length > 0) { + const layerName = engine.layers[0].name; + const entry = result.metrics.per_layer[layerName]; + expect(entry.status).toBe('skipped'); + expect(entry.reason).toBe('Invalid result format'); + } + }); + + test('should log remaining layers as skipped on pipeline timeout', async () => { + // Mock Date.now to simulate timeout after first layer + const realDateNow = Date.now; + let callCount = 0; + Date.now = jest.fn(() => { + callCount++; + // First call: totalStart = 1000 + // Second call (timeout check): 1000 + // Third call (startLayer): 1000 + // After first layer executes, next timeout check returns 1200 (>100ms) + if (callCount <= 4) return 1000; + return 1200; // Exceeds 100ms timeout + }); + + // Ensure all layers are active + contextTracker.getActiveLayers.mockReturnValue({ + layers: [0, 1, 2, 3, 4, 5, 6, 7], + memoryHints: false, + handoffWarning: false, + }); + + const result = await engine.process('test', {}); + + Date.now = realDateNow; + + // Some layers should be skipped due to pipeline timeout + const skipped = Object.values(result.metrics.per_layer) + .filter(l => l.status === 'skipped' && l.reason === 'Pipeline timeout'); + 
expect(skipped.length).toBeGreaterThanOrEqual(0); + expect(result).toHaveProperty('xml'); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/generate-constitution.test.js +================================================== +```js +/** + * Constitution Generator Tests + * + * Tests for parseConstitution(), extractRules(), generateConstitution(), + * cleanText(), and main() entry point. + * + * @story SYN-8 - Domain Content Files + */ + +const fs = require('fs'); +const path = require('path'); +const os = require('os'); +const { + parseConstitution, + extractRules, + generateConstitution, + cleanText, + main, + ROMAN_TO_ARABIC, +} = require('../../.aios-core/core/synapse/scripts/generate-constitution'); +const { parseManifest, loadDomainFile } = require('../../.aios-core/core/synapse/domain/domain-loader'); + +// Set timeout for all tests +jest.setTimeout(30000); + +/** + * Helper: create a temp directory for testing + */ +function createTempDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-constitution-test-')); +} + +/** + * Helper: clean up temp directory + */ +function cleanupTempDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +/** + * Fixture: minimal constitution with 2 articles + */ +const MINIMAL_CONSTITUTION = `# Constitution + +## Core Principles + +### I. CLI First (NON-NEGOTIABLE) + +Description text. + +**Regras:** +- MUST: All functionality works via CLI first +- MUST: Dashboards only observe, never control + +--- + +### II. Agent Authority (NON-NEGOTIABLE) + +Description text. + +**Regras:** +- MUST: Only @devops can git push +- MUST: Agents must delegate appropriately +- MUST NOT: No agent can assume another's authority + +--- + +## Governance + +Amendment process here. +`; + +/** + * Fixture: full 6-article constitution matching real format + */ +const FULL_CONSTITUTION = `# Synkra AIOS Constitution + +> **Version:** 1.0.0 + +## Core Principles + +### I. 
CLI First (NON-NEGOTIABLE) + +O CLI e a fonte da verdade. + +**Regras:** +- MUST: Toda funcionalidade nova DEVE funcionar 100% via CLI +- MUST: Dashboards apenas observam, NUNCA controlam +- MUST: A UI NUNCA e requisito para operacao do sistema +- MUST: Ao decidir onde implementar, sempre CLI > Observability > UI + +--- + +### II. Agent Authority (NON-NEGOTIABLE) + +Cada agente tem autoridades exclusivas. + +**Regras:** +- MUST: Apenas @devops pode executar \`git push\` para remote +- MUST: Apenas @devops pode criar Pull Requests +- MUST: Apenas @devops pode criar releases e tags +- MUST: Agentes DEVEM delegar para o agente apropriado +- MUST: Nenhum agente pode assumir autoridade de outro + +--- + +### III. Story-Driven Development (MUST) + +Todo desenvolvimento comeca com uma story. + +**Regras:** +- MUST: Nenhum codigo e escrito sem uma story associada +- MUST: Stories DEVEM ter acceptance criteria claros +- MUST: Progresso DEVE ser rastreado via checkboxes +- MUST: File List DEVE ser mantida atualizada +- SHOULD: Stories seguem o workflow padrao + +--- + +### IV. No Invention (MUST) + +Especificacoes nao inventam. + +**Regras:** +- MUST: Todo statement em spec.md DEVE rastrear para: +- MUST NOT: Adicionar features nao presentes nos requisitos +- MUST NOT: Assumir detalhes de implementacao nao pesquisados +- MUST NOT: Especificar tecnologias nao validadas + +--- + +### V. Quality First (MUST) + +Qualidade nao e negociavel. + +**Regras:** +- MUST: npm run lint passa sem erros +- MUST: npm run typecheck passa sem erros +- MUST: npm test passa sem falhas +- MUST: npm run build completa com sucesso +- MUST: CodeRabbit nao reporta issues CRITICAL +- MUST: Story status e Done ou Ready for Review +- SHOULD: Cobertura de testes nao diminui + +--- + +### VI. Absolute Imports (SHOULD) + +Imports relativos criam acoplamento. 
+ +**Regras:** +- SHOULD: Sempre usar imports absolutos com alias @/ +- SHOULD NOT: Usar imports relativos (../../../) +- EXCEPTION: Imports dentro do mesmo modulo podem ser relativos + +--- + +## Governance + +Amendment process here. +`; + +describe('cleanText', () => { + test('should remove backticks from text', () => { + expect(cleanText('`git push` para remote')).toBe('git push para remote'); + }); + + test('should trim whitespace', () => { + expect(cleanText(' hello world ')).toBe('hello world'); + }); + + test('should handle text without backticks', () => { + expect(cleanText('plain text')).toBe('plain text'); + }); +}); + +describe('ROMAN_TO_ARABIC', () => { + test('should map Roman numerals I-VI correctly', () => { + expect(ROMAN_TO_ARABIC['I']).toBe(1); + expect(ROMAN_TO_ARABIC['II']).toBe(2); + expect(ROMAN_TO_ARABIC['III']).toBe(3); + expect(ROMAN_TO_ARABIC['IV']).toBe(4); + expect(ROMAN_TO_ARABIC['V']).toBe(5); + expect(ROMAN_TO_ARABIC['VI']).toBe(6); + }); +}); + +describe('extractRules', () => { + test('should extract MUST rules from article content', () => { + const content = `### I. CLI First (NON-NEGOTIABLE) + +**Regras:** +- MUST: Rule one +- MUST: Rule two +`; + const rules = extractRules(content); + expect(rules).toEqual(['MUST: Rule one', 'MUST: Rule two']); + }); + + test('should extract mixed rule types', () => { + const content = `### IV. 
No Invention (MUST) + +- MUST: Required rule +- MUST NOT: Forbidden action +- SHOULD: Recommended practice +- SHOULD NOT: Discouraged practice +- EXCEPTION: Special case allowed +`; + const rules = extractRules(content); + expect(rules).toHaveLength(5); + expect(rules[0]).toBe('MUST: Required rule'); + expect(rules[1]).toBe('MUST NOT: Forbidden action'); + expect(rules[2]).toBe('SHOULD: Recommended practice'); + expect(rules[3]).toBe('SHOULD NOT: Discouraged practice'); + expect(rules[4]).toBe('EXCEPTION: Special case allowed'); + }); + + test('should ignore non-rule lines', () => { + const content = `### I. Title (SEV) + +Description text here. +Some more context. + +**Regras:** +- MUST: Only this is a rule + +**Gate:** some gate info +`; + const rules = extractRules(content); + expect(rules).toEqual(['MUST: Only this is a rule']); + }); + + test('should clean backticks from rule text', () => { + const content = `### II. Auth (NON-NEGOTIABLE) + +- MUST: Only @devops can \`git push\` to remote +`; + const rules = extractRules(content); + expect(rules).toEqual(['MUST: Only @devops can git push to remote']); + }); +}); + +describe('parseConstitution', () => { + test('should parse minimal constitution with 2 articles', () => { + const articles = parseConstitution(MINIMAL_CONSTITUTION); + expect(articles).toHaveLength(2); + + expect(articles[0].number).toBe(1); + expect(articles[0].roman).toBe('I'); + expect(articles[0].title).toBe('CLI First'); + expect(articles[0].severity).toBe('NON-NEGOTIABLE'); + expect(articles[0].rules).toHaveLength(2); + + expect(articles[1].number).toBe(2); + expect(articles[1].roman).toBe('II'); + expect(articles[1].title).toBe('Agent Authority'); + expect(articles[1].severity).toBe('NON-NEGOTIABLE'); + expect(articles[1].rules).toHaveLength(3); + }); + + test('should parse full 6-article constitution', () => { + const articles = parseConstitution(FULL_CONSTITUTION); + expect(articles).toHaveLength(6); + + // Verify all articles extracted + 
expect(articles[0].title).toBe('CLI First');
+    expect(articles[1].title).toBe('Agent Authority');
+    expect(articles[2].title).toBe('Story-Driven Development');
+    expect(articles[3].title).toBe('No Invention');
+    expect(articles[4].title).toBe('Quality First');
+    expect(articles[5].title).toBe('Absolute Imports');
+
+    // Verify severities
+    expect(articles[0].severity).toBe('NON-NEGOTIABLE');
+    expect(articles[1].severity).toBe('NON-NEGOTIABLE');
+    expect(articles[2].severity).toBe('MUST');
+    expect(articles[5].severity).toBe('SHOULD');
+  });
+
+  test('should return empty array for null/undefined input', () => {
+    expect(parseConstitution(null)).toEqual([]);
+    expect(parseConstitution(undefined)).toEqual([]);
+    expect(parseConstitution('')).toEqual([]);
+  });
+
+  test('should return empty array for non-string input', () => {
+    expect(parseConstitution(123)).toEqual([]);
+    expect(parseConstitution({})).toEqual([]);
+  });
+
+  test('should return empty array for content with no articles', () => {
+    const content = '# Just a header\n\nSome text without articles.';
+    expect(parseConstitution(content)).toEqual([]);
+  });
+
+  test('should stop parsing before Governance section', () => {
+    const articles = parseConstitution(FULL_CONSTITUTION);
+    // Should not include governance content in any article's rules
+    for (const article of articles) {
+      for (const rule of article.rules) {
+        expect(rule).not.toContain('Amendment');
+      }
+    }
+  });
+
+  test('should correctly number articles using Roman numerals', () => {
+    const articles = parseConstitution(FULL_CONSTITUTION);
+    expect(articles[0].number).toBe(1);
+    expect(articles[1].number).toBe(2);
+    expect(articles[2].number).toBe(3);
+    expect(articles[3].number).toBe(4);
+    expect(articles[4].number).toBe(5);
+    expect(articles[5].number).toBe(6);
+  });
+});
+
+describe('generateConstitution', () => {
+  test('should generate KEY=VALUE format from articles', () => {
+    const 
articles = parseConstitution(MINIMAL_CONSTITUTION); + const output = generateConstitution(articles); + + expect(output).toContain('CONSTITUTION_RULE_ART1_0=CLI First (NON-NEGOTIABLE)'); + expect(output).toContain('CONSTITUTION_RULE_ART1_1=MUST: All functionality works via CLI first'); + expect(output).toContain('CONSTITUTION_RULE_ART2_0=Agent Authority (NON-NEGOTIABLE)'); + expect(output).toContain('CONSTITUTION_RULE_ART2_1=MUST: Only @devops can git push'); + }); + + test('should include header comments', () => { + const articles = parseConstitution(MINIMAL_CONSTITUTION); + const output = generateConstitution(articles); + + expect(output).toContain('# SYNAPSE Constitution Domain (L0)'); + expect(output).toContain('# Auto-generated from .aios-core/constitution.md'); + expect(output).toContain('# DO NOT EDIT MANUALLY'); + }); + + test('should include article section comments', () => { + const articles = parseConstitution(MINIMAL_CONSTITUTION); + const output = generateConstitution(articles); + + expect(output).toContain('# Article I: CLI First (NON-NEGOTIABLE)'); + expect(output).toContain('# Article II: Agent Authority (NON-NEGOTIABLE)'); + }); + + test('should handle empty articles array', () => { + const output = generateConstitution([]); + expect(output).toContain('# SYNAPSE Constitution Domain (L0)'); + // Should only have header lines, no rules + const ruleLines = output.split('\n').filter(l => l.startsWith('CONSTITUTION_RULE')); + expect(ruleLines).toHaveLength(0); + }); + + test('output should be parseable by domain-loader loadDomainFile', () => { + const articles = parseConstitution(FULL_CONSTITUTION); + const output = generateConstitution(articles); + + let tempDir; + try { + tempDir = createTempDir(); + const filePath = path.join(tempDir, 'constitution'); + fs.writeFileSync(filePath, output, 'utf8'); + + // loadDomainFile detects KEY=VALUE format and extracts values + const rules = loadDomainFile(filePath); + expect(rules.length).toBeGreaterThan(0); + + 
// First rule is Article 1 title (value extracted from key) + expect(rules[0]).toBe('CLI First (NON-NEGOTIABLE)'); + // Should contain rules from Article 1 and Article 6 + expect(rules.some(r => r.includes('CLI'))).toBe(true); + expect(rules.some(r => r.includes('Absolute Imports'))).toBe(true); + } finally { + if (tempDir) cleanupTempDir(tempDir); + } + }); +}); + +describe('main', () => { + let tempDir; + + beforeEach(() => { + tempDir = createTempDir(); + }); + + afterEach(() => { + cleanupTempDir(tempDir); + }); + + test('should generate constitution file from source', () => { + const constitutionPath = path.join(tempDir, 'constitution.md'); + const outputPath = path.join(tempDir, 'output', 'constitution'); + + fs.writeFileSync(constitutionPath, FULL_CONSTITUTION, 'utf8'); + + const result = main({ constitutionPath, outputPath }); + + expect(result.success).toBe(true); + expect(result.articles).toBe(6); + expect(result.rules).toBeGreaterThan(0); + expect(fs.existsSync(outputPath)).toBe(true); + }); + + test('should be idempotent — re-run produces same content', () => { + const constitutionPath = path.join(tempDir, 'constitution.md'); + const outputPath = path.join(tempDir, 'output', 'constitution'); + + fs.writeFileSync(constitutionPath, FULL_CONSTITUTION, 'utf8'); + + main({ constitutionPath, outputPath }); + const firstRun = fs.readFileSync(outputPath, 'utf8'); + + main({ constitutionPath, outputPath }); + const secondRun = fs.readFileSync(outputPath, 'utf8'); + + expect(firstRun).toBe(secondRun); + }); + + test('should handle missing constitution.md gracefully', () => { + const constitutionPath = path.join(tempDir, 'nonexistent.md'); + const outputPath = path.join(tempDir, 'output', 'constitution'); + + const result = main({ constitutionPath, outputPath }); + + expect(result.success).toBe(false); + expect(result.error).toBe('Constitution file not found'); + expect(fs.existsSync(outputPath)).toBe(false); + }); + + test('should handle constitution with no 
articles', () => { + const constitutionPath = path.join(tempDir, 'empty.md'); + const outputPath = path.join(tempDir, 'output', 'constitution'); + + fs.writeFileSync(constitutionPath, '# Empty doc\n\nNo articles here.', 'utf8'); + + const result = main({ constitutionPath, outputPath }); + + expect(result.success).toBe(false); + expect(result.error).toBe('No articles found'); + }); + + test('should create output directory if it does not exist', () => { + const constitutionPath = path.join(tempDir, 'constitution.md'); + const outputPath = path.join(tempDir, 'deep', 'nested', 'dir', 'constitution'); + + fs.writeFileSync(constitutionPath, MINIMAL_CONSTITUTION, 'utf8'); + + const result = main({ constitutionPath, outputPath }); + + expect(result.success).toBe(true); + expect(fs.existsSync(outputPath)).toBe(true); + }); + + test('should overwrite existing output file', () => { + const constitutionPath = path.join(tempDir, 'constitution.md'); + const outputDir = path.join(tempDir, 'output'); + const outputPath = path.join(outputDir, 'constitution'); + + fs.mkdirSync(outputDir, { recursive: true }); + fs.writeFileSync(outputPath, 'OLD CONTENT', 'utf8'); + fs.writeFileSync(constitutionPath, MINIMAL_CONSTITUTION, 'utf8'); + + const result = main({ constitutionPath, outputPath }); + + expect(result.success).toBe(true); + const content = fs.readFileSync(outputPath, 'utf8'); + expect(content).not.toContain('OLD CONTENT'); + expect(content).toContain('CONSTITUTION_RULE_ART1_0'); + }); +}); + +describe('integration: real constitution.md', () => { + const realConstitutionPath = path.join(__dirname, '..', '..', '.aios-core', 'constitution.md'); + + test('should parse real constitution.md with 6 articles', () => { + // Skip if constitution.md doesn't exist (CI environment) + if (!fs.existsSync(realConstitutionPath)) { + return; + } + + const content = fs.readFileSync(realConstitutionPath, 'utf8'); + const articles = parseConstitution(content); + + expect(articles).toHaveLength(6); + 
expect(articles[0].title).toBe('CLI First'); + expect(articles[5].title).toBe('Absolute Imports'); + }); + + test('should generate valid constitution from real source', () => { + if (!fs.existsSync(realConstitutionPath)) { + return; + } + + let tempDir; + try { + tempDir = createTempDir(); + const outputPath = path.join(tempDir, 'constitution'); + + const result = main({ constitutionPath: realConstitutionPath, outputPath }); + + expect(result.success).toBe(true); + expect(result.articles).toBe(6); + + // Verify output is loadable by domain-loader + const rules = loadDomainFile(outputPath); + expect(rules.length).toBeGreaterThan(20); + } finally { + if (tempDir) cleanupTempDir(tempDir); + } + }); +}); + +describe('integration: manifest parseability', () => { + test('should validate .synapse/manifest is parseable by domain-loader', () => { + const manifestPath = path.join(__dirname, '..', '..', '.synapse', 'manifest'); + + // Skip if manifest doesn't exist yet + if (!fs.existsSync(manifestPath)) { + return; + } + + const result = parseManifest(manifestPath); + + expect(result.devmode).toBe(false); + expect(Object.keys(result.domains).length).toBeGreaterThanOrEqual(19); + expect(result.domains.CONSTITUTION).toBeDefined(); + expect(result.domains.CONSTITUTION.nonNegotiable).toBe(true); + expect(result.domains.AGENT_DEV).toBeDefined(); + expect(result.domains.AGENT_DEV.agentTrigger).toBe('dev'); + }); +}); + +``` + +================================================== +📄 tests/synapse/bridge/uap-session-bridge.test.js +================================================== +```js +/** + * UAP → SYNAPSE Session Bridge — Unit Tests + * + * Tests for _writeSynapseSession method in unified-activation-pipeline.js. + * Verifies bridge file creation, graceful degradation, metrics, and timing. 
+ *
+ * @module tests/synapse/bridge/uap-session-bridge
+ * @story SYN-13 - UAP Session Bridge + SYNAPSE Diagnostics
+ * @coverage Target: >85% for bridge functionality
+ */
+
+'use strict';
+
+const fs = require('fs');
+const path = require('path');
+const os = require('os');
+
+// =============================================================================
+// Extract the private method without loading full pipeline dependencies.
+// The method only uses `path`, `fs` (as fsSync), and `this.projectRoot`,
+// so we can bind it to a minimal context object.
+// =============================================================================
+
+/**
+ * Standalone replica of _writeSynapseSession from the class prototype.
+ * Requiring the real pipeline module would load all of its dependencies
+ * (GreetingBuilder, AgentConfigLoader, etc.), so instead we replicate the
+ * method body directly here — it is self-contained
+ * and only depends on `path`, `fs` (sync), and `this.projectRoot`.
+ */ +function writeSynapseSession(agentId, quality, metrics) { + const fsSync = fs; + const start = Date.now(); + try { + const sessionsDir = path.join(this.projectRoot, '.synapse', 'sessions'); + if (!fsSync.existsSync(path.join(this.projectRoot, '.synapse'))) { + const duration = Date.now() - start; + metrics.loaders.synapseSession = { duration, status: 'skipped', start, end: start + duration }; + return; + } + if (!fsSync.existsSync(sessionsDir)) { + fsSync.mkdirSync(sessionsDir, { recursive: true }); + } + const bridgeData = { + id: agentId, + activated_at: new Date().toISOString(), + activation_quality: quality, + source: 'uap', + }; + const bridgePath = path.join(sessionsDir, '_active-agent.json'); + fsSync.writeFileSync(bridgePath, JSON.stringify(bridgeData, null, 2), 'utf8'); + const duration = Date.now() - start; + metrics.loaders.synapseSession = { duration, status: 'ok', start, end: start + duration }; + } catch (error) { + const duration = Date.now() - start; + metrics.loaders.synapseSession = { duration, status: 'error', start, end: start + duration, error: error.message }; + console.warn(`[UnifiedActivationPipeline] SYNAPSE session write failed: ${error.message}`); + } +} + +// ============================================================================= +// Helpers +// ============================================================================= + +let tmpDir; + +function createContext(projectRoot) { + return { projectRoot }; +} + +function createMetrics() { + return { loaders: {} }; +} + +function callBridge(ctx, agentId, quality, metrics) { + return writeSynapseSession.call(ctx, agentId, quality, metrics); +} + +// ============================================================================= +// Setup / Teardown +// ============================================================================= + +beforeEach(() => { + tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'uap-bridge-test-')); +}); + +afterEach(() => { + fs.rmSync(tmpDir, { recursive: true, 
force: true }); +}); + +// ============================================================================= +// 1. Happy Path — Writes _active-agent.json when .synapse/ exists +// ============================================================================= + +describe('UAP Session Bridge — Happy Path', () => { + test('writes _active-agent.json when .synapse/ directory exists', () => { + // Arrange: create .synapse/ but NOT sessions/ + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + // Act + callBridge(ctx, 'dev', 'full', metrics); + + // Assert: file was created + const bridgePath = path.join(synapsePath, 'sessions', '_active-agent.json'); + expect(fs.existsSync(bridgePath)).toBe(true); + + // Assert: content is valid JSON with expected fields + const data = JSON.parse(fs.readFileSync(bridgePath, 'utf8')); + expect(data.id).toBe('dev'); + expect(data.activation_quality).toBe('full'); + expect(data.source).toBe('uap'); + expect(data.activated_at).toBeDefined(); + }); + + test('writes _active-agent.json when sessions/ already exists', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'qa', 'partial', metrics); + + const bridgePath = path.join(sessionsDir, '_active-agent.json'); + expect(fs.existsSync(bridgePath)).toBe(true); + + const data = JSON.parse(fs.readFileSync(bridgePath, 'utf8')); + expect(data.id).toBe('qa'); + expect(data.activation_quality).toBe('partial'); + }); + + test('overwrites existing _active-agent.json on subsequent calls', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + + // First write + callBridge(ctx, 'dev', 'full', createMetrics()); + + 
// Second write (different agent) + callBridge(ctx, 'architect', 'partial', createMetrics()); + + const bridgePath = path.join(sessionsDir, '_active-agent.json'); + const data = JSON.parse(fs.readFileSync(bridgePath, 'utf8')); + expect(data.id).toBe('architect'); + expect(data.activation_quality).toBe('partial'); + }); +}); + +// ============================================================================= +// 2. Sessions Directory Creation +// ============================================================================= + +describe('UAP Session Bridge — Directory Creation', () => { + test('creates sessions/ directory when .synapse/ exists but sessions/ does not', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'pm', 'full', metrics); + + const sessionsDir = path.join(synapsePath, 'sessions'); + expect(fs.existsSync(sessionsDir)).toBe(true); + expect(fs.statSync(sessionsDir).isDirectory()).toBe(true); + }); + + test('does not fail when sessions/ already exists', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + expect(() => callBridge(ctx, 'dev', 'full', metrics)).not.toThrow(); + expect(metrics.loaders.synapseSession.status).toBe('ok'); + }); +}); + +// ============================================================================= +// 3. 
Skip Behavior — No .synapse/ directory +// ============================================================================= + +describe('UAP Session Bridge — Skip (no .synapse/)', () => { + test('skips gracefully when .synapse/ does not exist', () => { + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'dev', 'full', metrics); + + expect(metrics.loaders.synapseSession).toBeDefined(); + expect(metrics.loaders.synapseSession.status).toBe('skipped'); + }); + + test('does not create .synapse/ or sessions/ when skipping', () => { + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'dev', 'full', metrics); + + expect(fs.existsSync(path.join(tmpDir, '.synapse'))).toBe(false); + expect(fs.existsSync(path.join(tmpDir, '.synapse', 'sessions'))).toBe(false); + }); + + test('does not write _active-agent.json when skipping', () => { + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'dev', 'full', metrics); + + const bridgePath = path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'); + expect(fs.existsSync(bridgePath)).toBe(false); + }); +}); + +// ============================================================================= +// 4. 
Error Handling — Graceful Degradation +// ============================================================================= + +describe('UAP Session Bridge — Error Handling', () => { + test('records status "error" when writeFileSync throws', () => { + // Arrange: create .synapse/sessions/ as a FILE (not directory) to cause write error + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + // Create sessions as a file so mkdirSync or writeFileSync fails + const sessionsPath = path.join(synapsePath, 'sessions'); + fs.writeFileSync(sessionsPath, 'block', 'utf8'); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {}); + + callBridge(ctx, 'dev', 'full', metrics); + + expect(metrics.loaders.synapseSession.status).toBe('error'); + expect(metrics.loaders.synapseSession.error).toBeDefined(); + expect(typeof metrics.loaders.synapseSession.error).toBe('string'); + + warnSpy.mockRestore(); + }); + + test('logs warning to console.warn on write failure', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + const sessionsPath = path.join(synapsePath, 'sessions'); + fs.writeFileSync(sessionsPath, 'block', 'utf8'); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + const warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {}); + + callBridge(ctx, 'dev', 'full', metrics); + + expect(warnSpy).toHaveBeenCalledTimes(1); + expect(warnSpy).toHaveBeenCalledWith( + expect.stringContaining('[UnifiedActivationPipeline] SYNAPSE session write failed:') + ); + + warnSpy.mockRestore(); + }); + + test('does not throw on error — always returns undefined', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + const sessionsPath = path.join(synapsePath, 'sessions'); + fs.writeFileSync(sessionsPath, 'block', 
'utf8');
+
+    const ctx = createContext(tmpDir);
+    const metrics = createMetrics();
+    const warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {});
+
+    const result = callBridge(ctx, 'dev', 'full', metrics);
+
+    expect(result).toBeUndefined();
+
+    warnSpy.mockRestore();
+  });
+
+  test('skips gracefully for a nonexistent projectRoot (read-only stand-in)', () => {
+    // This scenario is platform-sensitive; on Windows chmod may not fully work,
+    // so instead of a chmod-based read-only directory we exercise the skip path
+    // with an invalid projectRoot.
+    const ctx = createContext(path.join(tmpDir, 'nonexistent', 'deep', 'path'));
+    const metrics = createMetrics();
+
+    // .synapse/ does not exist at the invalid path, so the bridge should skip (not error)
+    callBridge(ctx, 'dev', 'full', metrics);
+
+    expect(metrics.loaders.synapseSession.status).toBe('skipped');
+  });
+});
+
+// =============================================================================
+// 5. Bridge File Content Validation
+// =============================================================================
+
+describe('UAP Session Bridge — Content Validation', () => {
+  test('bridge file contains exactly 4 required fields', () => {
+    const sessionsDir = path.join(tmpDir, '.synapse', 'sessions');
+    fs.mkdirSync(sessionsDir, { recursive: true });
+
+    const ctx = createContext(tmpDir);
+    const metrics = createMetrics();
+
+    callBridge(ctx, 'sm', 'fallback', metrics);
+
+    const bridgePath = path.join(sessionsDir, '_active-agent.json');
+    const data = JSON.parse(fs.readFileSync(bridgePath, 'utf8'));
+
+    const keys = Object.keys(data).sort();
+    expect(keys).toEqual(['activated_at', 'activation_quality', 'id', 'source']);
+  });
+
+  test('id field matches the agentId argument', () => {
+    const sessionsDir = path.join(tmpDir, '.synapse', 'sessions');
+    fs.mkdirSync(sessionsDir, { recursive: true });
+
+    const ctx = createContext(tmpDir);
+    const metrics = createMetrics();
+
+    callBridge(ctx, 'data-engineer', 'full', metrics);
+
+    const data = 
JSON.parse(fs.readFileSync(path.join(sessionsDir, '_active-agent.json'), 'utf8')); + expect(data.id).toBe('data-engineer'); + }); + + test('activated_at is a valid ISO 8601 timestamp', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + const before = new Date().toISOString(); + + callBridge(ctx, 'dev', 'full', metrics); + + const after = new Date().toISOString(); + const data = JSON.parse(fs.readFileSync(path.join(sessionsDir, '_active-agent.json'), 'utf8')); + + // Validate ISO 8601 format + expect(new Date(data.activated_at).toISOString()).toBe(data.activated_at); + // Validate timestamp is within expected range + expect(data.activated_at >= before).toBe(true); + expect(data.activated_at <= after).toBe(true); + }); + + test('activation_quality preserves the quality argument', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + + for (const quality of ['full', 'partial', 'fallback']) { + const metrics = createMetrics(); + callBridge(ctx, 'dev', quality, metrics); + + const data = JSON.parse(fs.readFileSync(path.join(sessionsDir, '_active-agent.json'), 'utf8')); + expect(data.activation_quality).toBe(quality); + } + }); + + test('source field is always "uap"', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'devops', 'full', metrics); + + const data = JSON.parse(fs.readFileSync(path.join(sessionsDir, '_active-agent.json'), 'utf8')); + expect(data.source).toBe('uap'); + }); + + test('bridge file is valid JSON with 2-space indentation', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { 
recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'dev', 'full', metrics); + + const raw = fs.readFileSync(path.join(sessionsDir, '_active-agent.json'), 'utf8'); + // Re-stringify with same format and compare + const parsed = JSON.parse(raw); + expect(raw).toBe(JSON.stringify(parsed, null, 2)); + }); +}); + +// ============================================================================= +// 6. Metrics Recording +// ============================================================================= + +describe('UAP Session Bridge — Metrics', () => { + test('records metrics with status "ok" on success', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'dev', 'full', metrics); + + const m = metrics.loaders.synapseSession; + expect(m).toBeDefined(); + expect(m.status).toBe('ok'); + expect(typeof m.duration).toBe('number'); + expect(m.duration).toBeGreaterThanOrEqual(0); + expect(typeof m.start).toBe('number'); + expect(typeof m.end).toBe('number'); + expect(m.end).toBeGreaterThanOrEqual(m.start); + }); + + test('records metrics with status "skipped" when .synapse/ missing', () => { + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'dev', 'full', metrics); + + const m = metrics.loaders.synapseSession; + expect(m).toBeDefined(); + expect(m.status).toBe('skipped'); + expect(typeof m.duration).toBe('number'); + expect(typeof m.start).toBe('number'); + expect(typeof m.end).toBe('number'); + }); + + test('records metrics with status "error" on failure', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + fs.writeFileSync(path.join(synapsePath, 'sessions'), 'block', 'utf8'); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + 
const warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {}); + + callBridge(ctx, 'dev', 'full', metrics); + + const m = metrics.loaders.synapseSession; + expect(m).toBeDefined(); + expect(m.status).toBe('error'); + expect(typeof m.error).toBe('string'); + expect(m.error.length).toBeGreaterThan(0); + expect(typeof m.duration).toBe('number'); + expect(typeof m.start).toBe('number'); + expect(typeof m.end).toBe('number'); + + warnSpy.mockRestore(); + }); + + test('metrics duration equals end minus start', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'dev', 'full', metrics); + + const m = metrics.loaders.synapseSession; + expect(m.end - m.start).toBe(m.duration); + }); + + test('metrics are written to loaders.synapseSession key', () => { + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'dev', 'full', metrics); + + expect(metrics.loaders).toHaveProperty('synapseSession'); + expect(Object.keys(metrics.loaders)).toContain('synapseSession'); + }); + + test('does not overwrite other metrics loaders', () => { + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + metrics.loaders.agentConfig = { duration: 10, status: 'ok' }; + + callBridge(ctx, 'dev', 'full', metrics); + + expect(metrics.loaders.agentConfig).toEqual({ duration: 10, status: 'ok' }); + expect(metrics.loaders.synapseSession).toBeDefined(); + }); +}); + +// ============================================================================= +// 7. 
Timing Budget — 20ms Target +// ============================================================================= + +describe('UAP Session Bridge — Timing Budget', () => { + test('completes within 20ms budget on happy path (warm filesystem)', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + // Warm up filesystem cache + callBridge(ctx, 'dev', 'full', createMetrics()); + + // Measured run + const start = Date.now(); + callBridge(ctx, 'dev', 'full', metrics); + const elapsed = Date.now() - start; + + expect(elapsed).toBeLessThanOrEqual(20); + expect(metrics.loaders.synapseSession.duration).toBeLessThanOrEqual(20); + }); + + test('completes within 20ms on skip path', () => { + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + const start = Date.now(); + callBridge(ctx, 'dev', 'full', metrics); + const elapsed = Date.now() - start; + + expect(elapsed).toBeLessThanOrEqual(20); + expect(metrics.loaders.synapseSession.duration).toBeLessThanOrEqual(20); + }); + + test('skip path is faster than write path', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + + // Warm up + callBridge(ctx, 'dev', 'full', createMetrics()); + + // Skip path (no .synapse/) + const skipCtx = createContext(fs.mkdtempSync(path.join(os.tmpdir(), 'uap-skip-'))); + const skipMetrics = createMetrics(); + callBridge(skipCtx, 'dev', 'full', skipMetrics); + + // Write path + const writeMetrics = createMetrics(); + callBridge(ctx, 'dev', 'full', writeMetrics); + + // Clean up skip tmpdir + fs.rmSync(skipCtx.projectRoot, { recursive: true, force: true }); + + expect(skipMetrics.loaders.synapseSession.duration).toBeLessThanOrEqual( + writeMetrics.loaders.synapseSession.duration + 1 // +1ms tolerance + ); + }); +}); + +// 
============================================================================= +// 8. Edge Cases +// ============================================================================= + +describe('UAP Session Bridge — Edge Cases', () => { + test('handles agent IDs with hyphens (e.g., data-engineer)', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'ux-design-expert', 'full', metrics); + + const data = JSON.parse(fs.readFileSync(path.join(sessionsDir, '_active-agent.json'), 'utf8')); + expect(data.id).toBe('ux-design-expert'); + expect(metrics.loaders.synapseSession.status).toBe('ok'); + }); + + test('handles all 12 agent IDs without error', () => { + const agentIds = [ + 'dev', 'qa', 'architect', 'pm', 'po', 'sm', + 'analyst', 'data-engineer', 'ux-design-expert', 'devops', + 'aios-master', 'content-creator', + ]; + + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + + for (const agentId of agentIds) { + const metrics = createMetrics(); + callBridge(ctx, agentId, 'full', metrics); + expect(metrics.loaders.synapseSession.status).toBe('ok'); + } + + // Verify last agent written + const data = JSON.parse(fs.readFileSync(path.join(sessionsDir, '_active-agent.json'), 'utf8')); + expect(data.id).toBe('content-creator'); + }); + + test('handles empty string quality gracefully', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + fs.mkdirSync(sessionsDir, { recursive: true }); + + const ctx = createContext(tmpDir); + const metrics = createMetrics(); + + callBridge(ctx, 'dev', '', metrics); + + const data = JSON.parse(fs.readFileSync(path.join(sessionsDir, '_active-agent.json'), 'utf8')); + expect(data.activation_quality).toBe(''); + expect(metrics.loaders.synapseSession.status).toBe('ok'); 
+  });
+});
+
+```
+
+==================================================
+📄 tests/synapse/benchmarks/pipeline-benchmark.js
+==================================================
+```js
+#!/usr/bin/env node
+
+/**
+ * SYNAPSE Pipeline Benchmark
+ *
+ * Story SYN-12: Performance Benchmarks + E2E Testing.
+ * Measures execution time for SynapseEngine.process() across multiple iterations.
+ * Reports p50/p95/p99 per layer and total pipeline.
+ *
+ * Usage:
+ *   node tests/synapse/benchmarks/pipeline-benchmark.js [--warm] [--cold] [--iterations=100]
+ *
+ * Flags:
+ *   --warm          Warm-start benchmark (reuse engine instance, default)
+ *   --cold          Cold-start benchmark (new engine per iteration)
+ *   --iterations=N  Number of measured iterations (default: 100)
+ *   --json          Output results as JSON only
+ *
+ * Performance Targets (from EPIC-SYN-INDEX):
+ *   Total pipeline: <70ms (target), <100ms (hard limit)
+ *   Individual layer: <15ms (<20ms hard, L0/L7: <5ms)
+ *   Startup (.synapse/ discovery): <5ms (<10ms hard)
+ *   Session I/O: <10ms (<15ms hard)
+ *
+ * @module tests/synapse/benchmarks/pipeline-benchmark
+ */
+
+'use strict';
+
+const path = require('path');
+const fs = require('fs');
+const { performance } = require('perf_hooks');
+
+const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..');
+const SYNAPSE_PATH = path.join(PROJECT_ROOT, '.synapse');
+
+const WARMUP_ITERATIONS = 5;
+const DEFAULT_ITERATIONS = 100;
+
+const TARGETS = {
+  pipeline: { target: 70, hardLimit: 100 },
+  layer: { target: 15, hardLimit: 20 },
+  layerL0: { target: 5, hardLimit: 10 },
+  layerL7: { target: 5, hardLimit: 10 },
+  startup: { target: 5, hardLimit: 10 },
+  sessionIO: { target: 10, hardLimit: 15 },
+};
+
+// ---------------------------------------------------------------------------
+// Args
+// ---------------------------------------------------------------------------
+
+function parseArgs() {
+  const args = process.argv.slice(2);
+  const options = { warm: true, cold: false, iterations: 
DEFAULT_ITERATIONS, json: false };
+
+  for (const arg of args) {
+    if (arg === '--cold') { options.cold = true; options.warm = false; }
+    else if (arg === '--warm') { options.warm = true; options.cold = false; }
+    else if (arg === '--json') { options.json = true; }
+    else if (arg.startsWith('--iterations=')) {
+      options.iterations = parseInt(arg.split('=')[1], 10) || DEFAULT_ITERATIONS;
+    }
+  }
+  return options;
+}
+
+// ---------------------------------------------------------------------------
+// Percentile
+// ---------------------------------------------------------------------------
+
+// Nearest-rank percentile: `sorted` must be ascending; returns the value at the
+// smallest rank covering p% of the samples (no interpolation).
+// e.g. percentile([1, 2, 3, 4], 50) → 2; percentile([1, 2, 3, 4], 95) → 4
+function percentile(sorted, p) {
+  if (sorted.length === 0) return 0;
+  const index = Math.ceil((p / 100) * sorted.length) - 1;
+  return sorted[Math.max(0, index)];
+}
+
+function calcStats(values) {
+  const sorted = [...values].sort((a, b) => a - b);
+  return {
+    min: sorted[0] || 0,
+    max: sorted[sorted.length - 1] || 0,
+    p50: percentile(sorted, 50),
+    p95: percentile(sorted, 95),
+    p99: percentile(sorted, 99),
+    mean: sorted.length > 0 ? sorted.reduce((a, b) => a + b, 0) / sorted.length : 0,
+    count: sorted.length,
+  };
+}
+
+// ---------------------------------------------------------------------------
+// Benchmark
+// ---------------------------------------------------------------------------
+
+async function runBenchmark(options) {
+  const { SynapseEngine } = require(
+    path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'engine.js'),
+  );
+  const { loadSession } = require(
+    path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'session', 'session-manager.js'),
+  );
+  const { parseManifest } = require(
+    path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'domain', 'domain-loader.js'),
+  );
+
+  const manifestPath = path.join(SYNAPSE_PATH, 'manifest');
+  const mode = options.cold ? 
'cold' : 'warm'; + const iterations = options.iterations; + + if (!options.json) { + console.log(`\nSYNAPSE Pipeline Benchmark — ${mode} start, ${iterations} iterations`); + console.log(`Synapse path: ${SYNAPSE_PATH}\n`); + } + + // Verify .synapse/ exists + if (!fs.existsSync(SYNAPSE_PATH)) { + console.error('ERROR: .synapse/ directory not found. Run SYN-8 first.'); + process.exit(1); + } + + const { formatSynapseRules } = require( + path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'output', 'formatter.js'), + ); + + // Measure startup (manifest parse + engine construction) + const startupDurations = []; + const sessionIODurations = []; + const pipelineDurations = []; + const formatterDurations = []; + const layerDurations = {}; // { layerName: number[] } + + const manifest = parseManifest(manifestPath); + const prompt = 'Implement the user authentication feature'; + const session = { + prompt_count: 5, + active_agent: { id: 'dev', activated_at: new Date().toISOString() }, + active_workflow: null, + active_squad: null, + active_task: null, + context: { last_bracket: 'MODERATE', last_tokens_used: 0, last_context_percent: 55 }, + }; + + // Warm-up phase + if (!options.json) { + console.log(`Warm-up phase (${WARMUP_ITERATIONS} iterations)...`); + } + + for (let i = 0; i < WARMUP_ITERATIONS; i++) { + const engine = new SynapseEngine(SYNAPSE_PATH, { manifest, devmode: false }); + await engine.process(prompt, session); + } + + // Measured phase + if (!options.json) { + console.log(`Measured phase (${iterations} iterations)...\n`); + } + + let cachedEngine = null; + + for (let i = 0; i < iterations; i++) { + // Startup measurement + const startupStart = performance.now(); + const engine = options.cold + ? new SynapseEngine(SYNAPSE_PATH, { manifest, devmode: false }) + : (i === 0 ? 
new SynapseEngine(SYNAPSE_PATH, { manifest, devmode: false }) : null); + const startupEnd = performance.now(); + + if (options.cold || i === 0) { + startupDurations.push(startupEnd - startupStart); + } + + const engineToUse = options.cold ? engine : (engine || cachedEngine); + if (i === 0 && !options.cold) { + cachedEngine = engine; + } + + // Session I/O measurement + const sessionStart = performance.now(); + const sessionsDir = path.join(SYNAPSE_PATH, 'sessions'); + const sessionData = loadSession('benchmark-session', sessionsDir) || session; + const sessionEnd = performance.now(); + sessionIODurations.push(sessionEnd - sessionStart); + + // Pipeline measurement + const pipelineStart = performance.now(); + const result = await (engineToUse || engine).process(prompt, sessionData); + const pipelineEnd = performance.now(); + pipelineDurations.push(pipelineEnd - pipelineStart); + + // Collect per-layer timings from metrics + if (result && result.metrics && result.metrics.per_layer) { + for (const [name, info] of Object.entries(result.metrics.per_layer)) { + if (info.duration != null) { + if (!layerDurations[name]) layerDurations[name] = []; + layerDurations[name].push(info.duration); + } + } + } + + // Isolated formatter measurement + const fmtStart = performance.now(); + formatSynapseRules( + [], // empty results (measures formatter overhead, not layer content) + 'MODERATE', + 55, + sessionData, + false, + result && result.metrics ? 
result.metrics : {}, + 1500, + false, + ); + const fmtEnd = performance.now(); + formatterDurations.push(fmtEnd - fmtStart); + } + + // --------------------------------------------------------------------------- + // Build results + // --------------------------------------------------------------------------- + + const results = { + mode, + iterations, + warmup: WARMUP_ITERATIONS, + timestamp: new Date().toISOString(), + targets: TARGETS, + pipeline: calcStats(pipelineDurations), + formatter: calcStats(formatterDurations), + startup: calcStats(startupDurations), + sessionIO: calcStats(sessionIODurations), + layers: {}, + }; + + for (const [name, durations] of Object.entries(layerDurations)) { + results.layers[name] = calcStats(durations); + } + + // --------------------------------------------------------------------------- + // Output + // --------------------------------------------------------------------------- + + if (options.json) { + console.log(JSON.stringify(results, null, 2)); + return results; + } + + // Human-readable report + console.log('='.repeat(80)); + console.log('SYNAPSE PIPELINE BENCHMARK RESULTS'); + console.log('='.repeat(80)); + + const fmt = (v) => typeof v === 'number' ? v.toFixed(2) : '?'; + + console.log('\nPipeline (total):'); + console.log(` p50: ${fmt(results.pipeline.p50)}ms p95: ${fmt(results.pipeline.p95)}ms p99: ${fmt(results.pipeline.p99)}ms`); + console.log(` Target: <${TARGETS.pipeline.target}ms Hard limit: <${TARGETS.pipeline.hardLimit}ms`); + console.log(` Status: ${results.pipeline.p95 < TARGETS.pipeline.target ? 'PASS' : results.pipeline.p95 < TARGETS.pipeline.hardLimit ? 
'WARN' : 'FAIL'}`); + + console.log('\nFormatter (isolated):'); + console.log(` p50: ${fmt(results.formatter.p50)}ms p95: ${fmt(results.formatter.p95)}ms p99: ${fmt(results.formatter.p99)}ms`); + console.log(` Target: <${TARGETS.layerL0.target}ms Hard limit: <${TARGETS.layerL0.hardLimit}ms`); + console.log(` Status: ${results.formatter.p95 < TARGETS.layerL0.target ? 'PASS' : results.formatter.p95 < TARGETS.layerL0.hardLimit ? 'WARN' : 'FAIL'}`); + + console.log('\nStartup (.synapse/ discovery):'); + console.log(` p50: ${fmt(results.startup.p50)}ms p95: ${fmt(results.startup.p95)}ms p99: ${fmt(results.startup.p99)}ms`); + console.log(` Target: <${TARGETS.startup.target}ms Hard limit: <${TARGETS.startup.hardLimit}ms`); + + console.log('\nSession I/O:'); + console.log(` p50: ${fmt(results.sessionIO.p50)}ms p95: ${fmt(results.sessionIO.p95)}ms p99: ${fmt(results.sessionIO.p99)}ms`); + console.log(` Target: <${TARGETS.sessionIO.target}ms Hard limit: <${TARGETS.sessionIO.hardLimit}ms`); + + console.log('\nPer-Layer Timings:'); + console.log('-'.repeat(80)); + console.log( + 'Layer'.padEnd(20), + 'p50'.padStart(8), + 'p95'.padStart(8), + 'p99'.padStart(8), + 'Target'.padStart(10), + 'Status'.padStart(10), + ); + console.log('-'.repeat(80)); + + for (const [name, stats] of Object.entries(results.layers)) { + const isEdge = name === 'constitution' || name === 'star-command'; + const target = isEdge ? TARGETS.layerL0 : TARGETS.layer; + const status = stats.p95 < target.target ? 'PASS' : stats.p95 < target.hardLimit ? 
'WARN' : 'FAIL'; + + console.log( + name.padEnd(20), + fmt(stats.p50).padStart(8), + fmt(stats.p95).padStart(8), + fmt(stats.p99).padStart(8), + `<${target.target}ms`.padStart(10), + status.padStart(10), + ); + } + + console.log('\n' + '='.repeat(80)); + console.log(`Mode: ${mode} | Iterations: ${iterations} | Warmup: ${WARMUP_ITERATIONS}`); + console.log('='.repeat(80)); + + return results; +} + +// --------------------------------------------------------------------------- +// Entry point +// --------------------------------------------------------------------------- + +if (require.main === module) { + runBenchmark(parseArgs()).catch((err) => { + console.error('Benchmark failed:', err); + process.exit(1); + }); +} + +module.exports = { runBenchmark, calcStats, percentile, TARGETS }; + +``` + +================================================== +📄 tests/synapse/diagnostics/relevance-matrix.test.js +================================================== +```js +/** + * Relevance Matrix — Unit Tests + * + * @module tests/synapse/diagnostics/relevance-matrix + * @story SYN-14 + */ + +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { + collectRelevanceMatrix, + IMPORTANCE, + DEFAULT_RELEVANCE, + AGENT_OVERRIDES, +} = require('../../../.aios-core/core/synapse/diagnostics/collectors/relevance-matrix'); + +let tmpDir; + +function createTmpDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'relevance-test-')); +} + +function writeJson(filePath, data) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf8'); +} + +beforeEach(() => { tmpDir = createTmpDir(); }); +afterEach(() => { fs.rmSync(tmpDir, { recursive: true, force: true }); }); + +describe('collectRelevanceMatrix', () => { + test('available=false when no metrics', () => { + const result = collectRelevanceMatrix(tmpDir); + expect(result.available).toBe(false); + 
expect(result.matrix).toHaveLength(0); + }); + + test('builds matrix for default agent', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + agentId: 'analyst', loaders: { agentConfig: { status: 'ok' } }, + }); + const result = collectRelevanceMatrix(tmpDir); + expect(result.available).toBe(true); + expect(result.agentId).toBe('analyst'); + expect(result.matrix.length).toBe(Object.keys(DEFAULT_RELEVANCE).length); + }); + + test('uses agent overrides for dev agent', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + agentId: 'dev', loaders: { agentConfig: { status: 'ok' }, gitConfig: { status: 'ok' } }, + }); + const result = collectRelevanceMatrix(tmpDir); + const git = result.matrix.find(m => m.component === 'gitConfig'); + expect(git.importance).toBe(IMPORTANCE.IMPORTANT); // dev override + }); + + test('identifies gaps for critical missing components', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + agentId: 'dev', loaders: {}, + }); + const result = collectRelevanceMatrix(tmpDir); + const agentGap = result.gaps.find(g => g.component === 'agentConfig'); + expect(agentGap).toBeDefined(); + expect(agentGap.importance).toBe(IMPORTANCE.CRITICAL); + }); + + test('no gaps when all critical components are ok', () => { + const loaders = {}; + for (const key of Object.keys(DEFAULT_RELEVANCE)) { + loaders[key] = { status: 'ok' }; + } + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + agentId: 'dev', loaders, + }); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), { + perLayer: { + constitution: { status: 'ok' }, + global: { status: 'ok' }, + agent: { status: 'ok' }, + workflow: { status: 'ok' }, + task: { status: 'ok' }, + squad: { status: 'ok' }, + keyword: { status: 'ok' }, + 'star-command': { status: 'ok' }, + }, + }); + const result = collectRelevanceMatrix(tmpDir); + expect(result.gaps).toHaveLength(0); + 
expect(result.score).toBe(100); + }); + + test('reads agentId from bridge file as fallback', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), { + perLayer: { constitution: { status: 'ok' } }, + }); + writeJson(path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'), { id: 'qa' }); + const result = collectRelevanceMatrix(tmpDir); + expect(result.agentId).toBe('qa'); + }); + + test('exports constants', () => { + expect(IMPORTANCE.CRITICAL).toBe('critical'); + expect(IMPORTANCE.IRRELEVANT).toBe('irrelevant'); + expect(AGENT_OVERRIDES.dev).toBeDefined(); + expect(AGENT_OVERRIDES.devops).toBeDefined(); + }); +}); + +``` + +================================================== +📄 tests/synapse/diagnostics/timing-collector.test.js +================================================== +```js +/** + * Timing Collector — Unit Tests + * + * Tests for collectTimingMetrics() which reads persisted UAP and Hook metrics files. + * + * @module tests/synapse/diagnostics/timing-collector + * @story SYN-12 - Timing Metrics + Context Quality Analysis + * @coverage Target: >85% + */ + +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { collectTimingMetrics, LOADER_TIER_MAP, MAX_STALENESS_MS } = require( + '../../../.aios-core/core/synapse/diagnostics/collectors/timing-collector', +); + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +let tmpDir; + +function createTmpDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'timing-test-')); +} + +function writeJson(filePath, data) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf8'); +} + +function writeFile(filePath, content) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, content, 'utf8'); +} + 
+function buildUapMetrics(overrides = {}) { + return { + agentId: 'dev', + quality: 'full', + totalDuration: 145, + loaders: { + agentConfig: { duration: 45, status: 'ok', start: 0, end: 45 }, + permissionMode: { duration: 12, status: 'ok', start: 45, end: 57 }, + gitConfig: { duration: 8, status: 'ok', start: 45, end: 53 }, + sessionContext: { duration: 23, status: 'ok', start: 57, end: 80 }, + projectStatus: { duration: 34, status: 'timeout', start: 57, end: 91, error: 'timeout' }, + synapseSession: { duration: 2, status: 'ok', start: 91, end: 93 }, + }, + timestamp: new Date().toISOString(), + ...overrides, + }; +} + +function buildHookMetrics(overrides = {}) { + return { + totalDuration: 87, + bracket: 'MODERATE', + layersLoaded: 5, + layersSkipped: 2, + layersErrored: 0, + totalRules: 42, + perLayer: { + constitution: { duration: 12, status: 'ok', rules: 5 }, + global: { duration: 11, status: 'ok', rules: 3 }, + agent: { duration: 22, status: 'ok', rules: 12 }, + workflow: { duration: 15, status: 'ok', rules: 8 }, + task: { duration: 18, status: 'ok', rules: 10 }, + squad: { duration: 0, status: 'skipped', reason: 'Not active in FRESH' }, + keyword: { duration: 0, status: 'skipped', reason: 'Not active in FRESH' }, + }, + timestamp: new Date().toISOString(), + ...overrides, + }; +} + +// --------------------------------------------------------------------------- +// Tests +// --------------------------------------------------------------------------- + +beforeEach(() => { + tmpDir = createTmpDir(); +}); + +afterEach(() => { + fs.rmSync(tmpDir, { recursive: true, force: true }); +}); + +describe('collectTimingMetrics — UAP metrics', () => { + test('returns available=true when uap-metrics.json exists', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + + const result = collectTimingMetrics(tmpDir); + + expect(result.uap.available).toBe(true); + expect(result.uap.totalDuration).toBe(145); + 
expect(result.uap.quality).toBe('full'); + }); + + test('returns correct loader count and tier mapping', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + + const result = collectTimingMetrics(tmpDir); + + expect(result.uap.loaders).toHaveLength(6); + const agentConfig = result.uap.loaders.find((l) => l.name === 'agentConfig'); + expect(agentConfig.tier).toBe('Critical'); + expect(agentConfig.duration).toBe(45); + expect(agentConfig.status).toBe('ok'); + }); + + test('maps tiers correctly for all known loaders', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + + const result = collectTimingMetrics(tmpDir); + + for (const loader of result.uap.loaders) { + expect(LOADER_TIER_MAP[loader.name]).toBe(loader.tier); + } + }); + + test('includes timeout status loaders', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + + const result = collectTimingMetrics(tmpDir); + + const projectStatus = result.uap.loaders.find((l) => l.name === 'projectStatus'); + expect(projectStatus.status).toBe('timeout'); + expect(projectStatus.duration).toBe(34); + }); + + test('returns available=false when uap-metrics.json does not exist', () => { + const result = collectTimingMetrics(tmpDir); + + expect(result.uap.available).toBe(false); + expect(result.uap.totalDuration).toBe(0); + expect(result.uap.loaders).toHaveLength(0); + }); + + test('returns available=false when uap-metrics.json is malformed', () => { + writeFile(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), '{ invalid }'); + + const result = collectTimingMetrics(tmpDir); + + expect(result.uap.available).toBe(false); + }); + + test('handles uap-metrics.json with empty loaders', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + totalDuration: 10, + quality: 'fallback', + loaders: {}, + }); + + const result = 
collectTimingMetrics(tmpDir); + + expect(result.uap.available).toBe(true); + expect(result.uap.loaders).toHaveLength(0); + }); +}); + +describe('collectTimingMetrics — Hook metrics', () => { + test('returns available=true when hook-metrics.json exists', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + + const result = collectTimingMetrics(tmpDir); + + expect(result.hook.available).toBe(true); + expect(result.hook.totalDuration).toBe(87); + expect(result.hook.bracket).toBe('MODERATE'); + }); + + test('returns correct layer data', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + + const result = collectTimingMetrics(tmpDir); + + expect(result.hook.layers.length).toBeGreaterThan(0); + const constitution = result.hook.layers.find((l) => l.name === 'constitution'); + expect(constitution.duration).toBe(12); + expect(constitution.status).toBe('ok'); + expect(constitution.rules).toBe(5); + }); + + test('includes skipped layers with rules=0', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + + const result = collectTimingMetrics(tmpDir); + + const squad = result.hook.layers.find((l) => l.name === 'squad'); + expect(squad.status).toBe('skipped'); + expect(squad.rules).toBe(0); + }); + + test('returns available=false when hook-metrics.json does not exist', () => { + const result = collectTimingMetrics(tmpDir); + + expect(result.hook.available).toBe(false); + expect(result.hook.totalDuration).toBe(0); + expect(result.hook.layers).toHaveLength(0); + }); + + test('returns available=false when hook-metrics.json is malformed', () => { + writeFile(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), 'not json'); + + const result = collectTimingMetrics(tmpDir); + + expect(result.hook.available).toBe(false); + }); +}); + +describe('collectTimingMetrics — Combined', () => { + test('combined totalMs sums UAP 
and Hook totals', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + + const result = collectTimingMetrics(tmpDir); + + expect(result.combined.totalMs).toBe(145 + 87); + }); + + test('combined totalMs is 0 when no metrics available', () => { + const result = collectTimingMetrics(tmpDir); + + expect(result.combined.totalMs).toBe(0); + }); + + test('combined totalMs uses only available pipeline', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + + const result = collectTimingMetrics(tmpDir); + + expect(result.combined.totalMs).toBe(145); + }); +}); + +describe('collectTimingMetrics — Staleness (SYN-14)', () => { + test('exports MAX_STALENESS_MS constant as 5 minutes', () => { + expect(MAX_STALENESS_MS).toBe(5 * 60 * 1000); + }); + + test('uap stale=false when timestamp is recent', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + + const result = collectTimingMetrics(tmpDir); + + expect(result.uap.stale).toBe(false); + expect(result.uap.ageMs).toBeLessThan(MAX_STALENESS_MS); + }); + + test('uap stale=true when timestamp > 5 min old', () => { + const old = new Date(Date.now() - MAX_STALENESS_MS - 10000).toISOString(); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics({ timestamp: old })); + + const result = collectTimingMetrics(tmpDir); + + expect(result.uap.stale).toBe(true); + expect(result.uap.ageMs).toBeGreaterThan(MAX_STALENESS_MS); + }); + + test('hook stale=false when timestamp is recent', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + + const result = collectTimingMetrics(tmpDir); + + expect(result.hook.stale).toBe(false); + }); + + test('hook stale=true when timestamp > 5 min old', () => { + const old = new 
Date(Date.now() - MAX_STALENESS_MS - 10000).toISOString(); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics({ timestamp: old })); + + const result = collectTimingMetrics(tmpDir); + + expect(result.hook.stale).toBe(true); + }); + + test('hook includes hookBootMs field', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics({ hookBootMs: 42.5 })); + + const result = collectTimingMetrics(tmpDir); + + expect(result.hook.hookBootMs).toBe(42.5); + }); + + test('hookBootMs defaults to 0 when not present', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + + const result = collectTimingMetrics(tmpDir); + + expect(result.hook.hookBootMs).toBe(0); + }); + + test('stale defaults to false when no metrics available', () => { + const result = collectTimingMetrics(tmpDir); + + expect(result.uap.stale).toBe(false); + expect(result.hook.stale).toBe(false); + }); +}); + +``` + +================================================== +📄 tests/synapse/diagnostics/output-analyzer.test.js +================================================== +```js +/** + * Output Analyzer — Unit Tests + * + * @module tests/synapse/diagnostics/output-analyzer + * @story SYN-14 + */ + +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { collectOutputAnalysis, UAP_OUTPUT_EXPECTATIONS } = require( + '../../../.aios-core/core/synapse/diagnostics/collectors/output-analyzer', +); + +let tmpDir; + +function createTmpDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'output-test-')); +} + +function writeJson(filePath, data) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf8'); +} + +beforeEach(() => { tmpDir = createTmpDir(); }); +afterEach(() => { fs.rmSync(tmpDir, { recursive: true, force: true }); }); + 
+describe('collectOutputAnalysis', () => { + test('available=false when no metrics', () => { + const result = collectOutputAnalysis(tmpDir); + expect(result.available).toBe(false); + }); + + test('analyzes UAP loaders with ok status as good', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + loaders: { agentConfig: { duration: 10, status: 'ok' } }, + }); + const result = collectOutputAnalysis(tmpDir); + expect(result.available).toBe(true); + const agent = result.uapAnalysis.find(a => a.name === 'agentConfig'); + expect(agent.quality).toBe('good'); + }); + + test('marks error loaders as bad quality', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + loaders: { agentConfig: { duration: 10, status: 'error', error: 'fail' } }, + }); + const result = collectOutputAnalysis(tmpDir); + const agent = result.uapAnalysis.find(a => a.name === 'agentConfig'); + expect(agent.quality).toBe('bad'); + }); + + test('marks timeout loaders as bad quality', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + loaders: { agentConfig: { duration: 100, status: 'timeout' } }, + }); + const result = collectOutputAnalysis(tmpDir); + const agent = result.uapAnalysis.find(a => a.name === 'agentConfig'); + expect(agent.quality).toBe('bad'); + }); + + test('marks slow loaders (>200ms) as degraded', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + loaders: { agentConfig: { duration: 300, status: 'ok' } }, + }); + const result = collectOutputAnalysis(tmpDir); + const agent = result.uapAnalysis.find(a => a.name === 'agentConfig'); + expect(agent.quality).toBe('degraded'); + }); + + test('analyzes hook layers with rules as good', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), { + perLayer: { constitution: { duration: 5, status: 'ok', rules: 3 } }, + }); + const result = collectOutputAnalysis(tmpDir); + const layer = 
result.hookAnalysis.find(a => a.name === 'constitution'); + expect(layer.quality).toBe('good'); + expect(layer.rules).toBe(3); + }); + + test('marks layers with 0 rules as empty quality', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), { + perLayer: { constitution: { duration: 5, status: 'ok', rules: 0 } }, + }); + const result = collectOutputAnalysis(tmpDir); + const layer = result.hookAnalysis.find(a => a.name === 'constitution'); + expect(layer.quality).toBe('empty'); + }); + + test('summary counts healthy components', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + loaders: { + agentConfig: { duration: 10, status: 'ok' }, + gitConfig: { duration: 5, status: 'error' }, + }, + }); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), { + perLayer: { + constitution: { duration: 5, status: 'ok', rules: 3 }, + agent: { duration: 8, status: 'ok', rules: 5 }, + }, + }); + const result = collectOutputAnalysis(tmpDir); + expect(result.summary.uapHealthy).toBe(1); + expect(result.summary.uapTotal).toBeGreaterThanOrEqual(2); + expect(result.summary.hookHealthy).toBe(2); + expect(result.summary.hookTotal).toBe(2); + }); + + test('exports UAP_OUTPUT_EXPECTATIONS', () => { + expect(UAP_OUTPUT_EXPECTATIONS).toBeDefined(); + expect(UAP_OUTPUT_EXPECTATIONS.agentConfig).toBeDefined(); + }); +}); + +``` + +================================================== +📄 tests/synapse/diagnostics/report-formatter.test.js +================================================== +```js +/** + * SYNAPSE Report Formatter & Orchestrator — Unit Tests + * + * Tests for formatReport() and runDiagnostics/runDiagnosticsRaw. 
+ * + * @module tests/synapse/diagnostics/report-formatter + * @story SYN-13 - UAP Session Bridge + SYNAPSE Diagnostics + * @coverage Target: >85% for formatter and orchestrator + */ + +'use strict'; + +// --------------------------------------------------------------------------- +// Part 1: Report Formatter Tests (uses real module via requireActual) +// --------------------------------------------------------------------------- + +// jest.mock for report-formatter is hoisted for Part 2 (orchestrator tests). +// We use requireActual here to get the REAL formatReport for Part 1. +const { formatReport } = jest.requireActual( + '../../../.aios-core/core/synapse/diagnostics/report-formatter', +); + +/** + * Build a full diagnostic data fixture with all sections populated. + */ +function buildFullData(overrides = {}) { + return { + hook: { + checks: [ + { name: 'hook-installed', status: 'PASS', detail: 'Hook file exists' }, + { name: 'hook-executable', status: 'PASS', detail: 'Hook is executable' }, + ], + }, + session: { + fields: [ + { field: 'session_id', expected: 'UUID', actual: 'abc-123', status: 'PASS' }, + { field: 'active_agent', expected: 'string', actual: 'dev', status: 'PASS' }, + ], + raw: { + session: { prompt_count: 5, active_agent: { id: 'dev' } }, + bridgeData: { id: 'dev', activation_quality: 'full' }, + }, + }, + manifest: { + entries: [ + { domain: 'constitution', inManifest: true, fileExists: true, status: 'PASS' }, + { domain: 'global', inManifest: true, fileExists: true, status: 'PASS' }, + ], + orphanedFiles: [], + }, + pipeline: { + bracket: 'FRESH', + contextPercent: 95.0, + layers: [ + { layer: 'L0-constitution', expected: 'loaded', status: 'PASS' }, + { layer: 'L1-global', expected: 'loaded', status: 'PASS' }, + ], + }, + uap: { + checks: [ + { name: 'bridge-file', status: 'PASS', detail: 'UAP bridge file exists' }, + { name: 'bridge-schema', status: 'PASS', detail: 'Schema valid' }, + ], + }, + ...overrides, + }; +} + 
+describe('formatReport()', () => { + // ------------------------------------------------------------------ + // 1. Header section + // ------------------------------------------------------------------ + describe('header section', () => { + it('includes report title and timestamp', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('# SYNAPSE Diagnostic Report'); + expect(report).toMatch(/\*\*Timestamp:\*\*/); + }); + + it('includes bracket and context percentage from pipeline', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('**Bracket:** FRESH (95.0% context remaining)'); + }); + + it('includes agent info with activation quality', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('**Agent:** @dev (activation_quality: full)'); + }); + + it('extracts agent id from session.raw.session.active_agent.id as fallback', () => { + const data = buildFullData(); + delete data.session.raw.bridgeData.id; + const report = formatReport(data); + expect(report).toContain('**Agent:** @dev'); + }); + + it('omits agent line when no agent id available', () => { + const data = buildFullData(); + delete data.session.raw.bridgeData.id; + delete data.session.raw.session.active_agent; + const report = formatReport(data); + expect(report).not.toContain('**Agent:**'); + }); + }); + + // ------------------------------------------------------------------ + // 2. Hook Status section + // ------------------------------------------------------------------ + describe('hook status section', () => { + it('renders table with check rows', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('## 1. 
Hook Status');
+      expect(report).toContain('| hook-installed | PASS | Hook file exists |');
+      expect(report).toContain('| hook-executable | PASS | Hook is executable |');
+    });
+
+    it('shows no-data message when hook is null', () => {
+      const report = formatReport(buildFullData({ hook: null }));
+      expect(report).toContain('*No hook data collected*');
+    });
+
+    it('renders empty table (not no-data message) when hook.checks is empty', () => {
+      // Note: _collectGaps does not guard data.hook.checks, so hook without
+      // checks would throw in the gaps collector. Pass checks: [] to be safe.
+      const report = formatReport(buildFullData({ hook: { checks: [] } }));
+      expect(report).not.toContain('*No hook data collected*');
+      // With empty checks array, the table headers still render but no rows
+      expect(report).toContain('| Check | Status | Detail |');
+    });
+  });
+
+  // ------------------------------------------------------------------
+  // 3. Session Status section
+  // ------------------------------------------------------------------
+  describe('session status section', () => {
+    it('renders field rows in table', () => {
+      const report = formatReport(buildFullData());
+      expect(report).toContain('## 2. Session Status');
+      expect(report).toContain('| session_id | UUID | abc-123 | PASS |');
+      expect(report).toContain('| active_agent | string | dev | PASS |');
+    });
+
+    it('shows no-data message when session is null', () => {
+      const report = formatReport(buildFullData({ session: null }));
+      expect(report).toContain('*No session data collected*');
+    });
+
+    it('renders empty table (not no-data message) when session.fields is empty', () => {
+      // Note: _collectGaps does not guard data.session.fields, so session
+      // without fields would throw. Pass fields: [] to be safe.
+ const report = formatReport(buildFullData({ session: { fields: [] } })); + expect(report).not.toContain('*No session data collected*'); + // With empty fields array, the table headers still render but no rows + expect(report).toContain('| Field | Expected | Actual | Status |'); + }); + }); + + // ------------------------------------------------------------------ + // 4. Manifest Integrity section + // ------------------------------------------------------------------ + describe('manifest integrity section', () => { + it('renders entry rows in table', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('## 3. Manifest Integrity'); + expect(report).toContain('| constitution | true | yes | PASS |'); + expect(report).toContain('| global | true | yes | PASS |'); + }); + + it('renders fileExists as "no" when false', () => { + const data = buildFullData(); + data.manifest.entries[0].fileExists = false; + const report = formatReport(data); + expect(report).toContain('| constitution | true | no |'); + }); + + it('lists orphaned files when present', () => { + const data = buildFullData(); + data.manifest.orphanedFiles = ['stale-domain.yaml', 'old-config.yaml']; + const report = formatReport(data); + expect(report).toContain('**Orphaned files**'); + expect(report).toContain('stale-domain.yaml, old-config.yaml'); + }); + + it('does not show orphaned section when array is empty', () => { + const report = formatReport(buildFullData()); + expect(report).not.toContain('**Orphaned files**'); + }); + + it('shows no-data message when manifest is null', () => { + const report = formatReport(buildFullData({ manifest: null })); + expect(report).toContain('*No manifest data collected*'); + }); + }); + + // ------------------------------------------------------------------ + // 5. 
Pipeline Simulation section + // ------------------------------------------------------------------ + describe('pipeline simulation section', () => { + it('renders layer rows with bracket name in heading', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('## 4. Pipeline Simulation (FRESH bracket)'); + expect(report).toContain('| L0-constitution | loaded | PASS |'); + expect(report).toContain('| L1-global | loaded | PASS |'); + }); + + it('shows UNKNOWN bracket when pipeline is null', () => { + const report = formatReport(buildFullData({ pipeline: null })); + expect(report).toContain('## 4. Pipeline Simulation (UNKNOWN bracket)'); + expect(report).toContain('*No pipeline data collected*'); + }); + }); + + // ------------------------------------------------------------------ + // 6. UAP Bridge section + // ------------------------------------------------------------------ + describe('UAP bridge section', () => { + it('renders check rows in table', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('## 5. UAP Bridge'); + expect(report).toContain('| bridge-file | PASS | UAP bridge file exists |'); + expect(report).toContain('| bridge-schema | PASS | Schema valid |'); + }); + + it('shows no-data message when uap is null', () => { + const report = formatReport(buildFullData({ uap: null })); + expect(report).toContain('*No UAP bridge data collected*'); + }); + }); + + // ------------------------------------------------------------------ + // 7. Memory Bridge section + // ------------------------------------------------------------------ + describe('memory bridge section', () => { + it('renders memory bridge info table', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('## 6. 
Memory Bridge'); + expect(report).toContain('| Pro available | INFO | Check `pro/` submodule |'); + }); + + it('shows YES for bracket-needs-hints when DEPLETED', () => { + const data = buildFullData(); + data.pipeline.bracket = 'DEPLETED'; + const report = formatReport(data); + expect(report).toContain('| Bracket requires hints | YES | DEPLETED bracket |'); + }); + + it('shows YES for bracket-needs-hints when CRITICAL', () => { + const data = buildFullData(); + data.pipeline.bracket = 'CRITICAL'; + const report = formatReport(data); + expect(report).toContain('| Bracket requires hints | YES | CRITICAL bracket |'); + }); + + it('shows NO for bracket-needs-hints when FRESH', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('| Bracket requires hints | NO | FRESH bracket |'); + }); + + it('shows UNKNOWN bracket when pipeline is null', () => { + const report = formatReport(buildFullData({ pipeline: null })); + expect(report).toContain('UNKNOWN bracket'); + }); + }); + + // ------------------------------------------------------------------ + // 8-9. Gaps & Recommendations section + // ------------------------------------------------------------------ + describe('gaps and recommendations section', () => { + it('shows "None found" when all statuses are PASS', () => { + const report = formatReport(buildFullData()); + expect(report).toContain('## 7. 
Gaps & Recommendations'); + expect(report).toContain('None found'); + expect(report).toContain('Pipeline operating correctly'); + }); + + it('aggregates FAIL items from hook checks', () => { + const data = buildFullData(); + data.hook.checks[1] = { name: 'hook-executable', status: 'FAIL', detail: 'Not executable' }; + const report = formatReport(data); + expect(report).toContain('Hook: hook-executable'); + expect(report).toContain('Not executable'); + expect(report).toContain('HIGH'); + expect(report).not.toContain('None found'); + }); + + it('aggregates FAIL items from session fields', () => { + const data = buildFullData(); + data.session.fields[0] = { field: 'session_id', expected: 'UUID', actual: 'missing', status: 'FAIL' }; + const report = formatReport(data); + expect(report).toContain('Session: session_id'); + expect(report).toContain('missing'); + }); + + it('aggregates FAIL items from manifest entries', () => { + const data = buildFullData(); + data.manifest.entries[0] = { domain: 'constitution', inManifest: true, fileExists: false, status: 'FAIL' }; + const report = formatReport(data); + expect(report).toContain('Manifest: domain "constitution" file missing'); + expect(report).toContain('MEDIUM'); + }); + + it('aggregates FAIL items from UAP checks', () => { + const data = buildFullData(); + data.uap.checks[0] = { name: 'bridge-file', status: 'FAIL', detail: 'File not found' }; + const report = formatReport(data); + expect(report).toContain('UAP Bridge: bridge-file'); + expect(report).toContain('File not found'); + }); + + it('sorts gaps by severity (HIGH before MEDIUM)', () => { + const data = buildFullData(); + // Add a MEDIUM (manifest) and a HIGH (hook) failure + data.manifest.entries[0] = { domain: 'constitution', inManifest: true, fileExists: false, status: 'FAIL' }; + data.hook.checks[0] = { name: 'hook-installed', status: 'FAIL', detail: 'Missing hook' }; + const report = formatReport(data); + + // Extract only the gaps section (after "## 7.") + 
const gapsSection = report.slice(report.indexOf('## 7.')); + const highIdx = gapsSection.indexOf('| HIGH |'); + const mediumIdx = gapsSection.indexOf('| MEDIUM |'); + expect(highIdx).toBeGreaterThan(-1); + expect(mediumIdx).toBeGreaterThan(-1); + expect(highIdx).toBeLessThan(mediumIdx); + }); + + it('numbers gaps sequentially', () => { + const data = buildFullData(); + data.hook.checks[0] = { name: 'hook-installed', status: 'FAIL', detail: 'Missing' }; + data.hook.checks[1] = { name: 'hook-executable', status: 'FAIL', detail: 'Not exec' }; + const report = formatReport(data); + expect(report).toContain('| 1 |'); + expect(report).toContain('| 2 |'); + }); + }); + + // ------------------------------------------------------------------ + // 10. Timing Analysis section (SYN-14) + // ------------------------------------------------------------------ + describe('timing analysis section (8)', () => { + it('renders UAP timing table when available', () => { + const data = buildFullData({ + timing: { + uap: { + available: true, totalDuration: 145, quality: 'full', stale: false, ageMs: 100, + loaders: [{ name: 'agentConfig', duration: 45, status: 'ok', tier: 'Critical' }], + }, + hook: { available: false, totalDuration: 0, bracket: 'unknown', layers: [], stale: false, ageMs: 0 }, + combined: { totalMs: 145 }, + }, + }); + const report = formatReport(data); + expect(report).toContain('## 8. 
Timing Analysis'); + expect(report).toContain('UAP Activation Pipeline (145ms total'); + expect(report).toContain('| agentConfig | 45ms | ok | Critical |'); + }); + + it('renders Hook timing table with hookBootMs', () => { + const data = buildFullData({ + timing: { + uap: { available: false, totalDuration: 0, quality: 'unknown', loaders: [], stale: false, ageMs: 0 }, + hook: { + available: true, totalDuration: 87, hookBootMs: 42, bracket: 'MODERATE', stale: false, ageMs: 50, + layers: [{ name: 'constitution', duration: 12, status: 'ok', rules: 5 }], + }, + combined: { totalMs: 87 }, + }, + }); + const report = formatReport(data); + expect(report).toContain('SYNAPSE Hook Pipeline (87ms total'); + expect(report).toContain('boot: 42ms'); + expect(report).toContain('| constitution | 12ms | ok | 5 |'); + }); + + it('shows [STALE] tag when data is stale', () => { + const data = buildFullData({ + timing: { + uap: { available: true, totalDuration: 100, quality: 'full', stale: true, ageMs: 400000, loaders: [] }, + hook: { available: false, totalDuration: 0, bracket: 'unknown', layers: [], stale: false, ageMs: 0 }, + combined: { totalMs: 100 }, + }, + }); + const report = formatReport(data); + expect(report).toContain('[STALE]'); + }); + + it('shows no-data message when timing is null', () => { + const report = formatReport(buildFullData({ timing: null })); + expect(report).toContain('*No timing data available*'); + }); + }); + + // ------------------------------------------------------------------ + // 11. 
Context Quality Analysis section (SYN-14) + // ------------------------------------------------------------------ + describe('quality analysis section (9)', () => { + it('renders overall grade and scores', () => { + const data = buildFullData({ + quality: { + uap: { available: true, score: 85, maxPossible: 90, loaders: [], stale: false }, + hook: { available: true, score: 92, maxPossible: 100, bracket: 'MODERATE', layers: [], stale: false }, + overall: { score: 89, grade: 'B', label: 'GOOD' }, + }, + }); + const report = formatReport(data); + expect(report).toContain('## 9. Context Quality Analysis'); + expect(report).toContain('Overall: 89/100 (B — GOOD)'); + expect(report).toContain('UAP: 85/100'); + expect(report).toContain('Hook: 92/100'); + }); + + it('shows [STALE] for stale data', () => { + const data = buildFullData({ + quality: { + uap: { available: true, score: 0, maxPossible: 0, loaders: [], stale: true }, + hook: { available: true, score: 100, maxPossible: 100, bracket: 'MODERATE', layers: [], stale: false }, + overall: { score: 60, grade: 'C', label: 'ADEQUATE' }, + }, + }); + const report = formatReport(data); + expect(report).toContain('[STALE]'); + }); + + it('shows no-data message when quality is null', () => { + const report = formatReport(buildFullData({ quality: null })); + expect(report).toContain('*No quality data available*'); + }); + }); + + // ------------------------------------------------------------------ + // 12. 
Consistency Checks section (SYN-14) + // ------------------------------------------------------------------ + describe('consistency checks section (10)', () => { + it('renders consistency checks table', () => { + const data = buildFullData({ + consistency: { + available: true, score: 3, maxScore: 4, + checks: [ + { name: 'bracket', status: 'PASS', detail: 'MODERATE is valid' }, + { name: 'agent', status: 'FAIL', detail: 'UAP=dev, bridge=qa' }, + ], + }, + }); + const report = formatReport(data); + expect(report).toContain('## 10. Consistency Checks'); + expect(report).toContain('Score:** 3/4'); + expect(report).toContain('| bracket | PASS | MODERATE is valid |'); + expect(report).toContain('| agent | FAIL | UAP=dev, bridge=qa |'); + }); + + it('shows no-data message when consistency is null', () => { + const report = formatReport(buildFullData({ consistency: null })); + expect(report).toContain('*No consistency data available*'); + }); + }); + + // ------------------------------------------------------------------ + // 13. Output Quality section (SYN-14) + // ------------------------------------------------------------------ + describe('output quality section (11)', () => { + it('renders output analysis summary and tables', () => { + const data = buildFullData({ + outputAnalysis: { + available: true, + summary: { uapHealthy: 5, uapTotal: 6, hookHealthy: 4, hookTotal: 5 }, + uapAnalysis: [{ name: 'agentConfig', status: 'ok', quality: 'good', detail: 'Loaded OK' }], + hookAnalysis: [{ name: 'constitution', status: 'ok', rules: 5, quality: 'good', detail: '5 rules' }], + }, + }); + const report = formatReport(data); + expect(report).toContain('## 11. 
Output Quality'); + expect(report).toContain('5/6 healthy'); + expect(report).toContain('| agentConfig | ok | good | Loaded OK |'); + expect(report).toContain('| constitution | ok | 5 | good | 5 rules |'); + }); + + it('shows no-data message when outputAnalysis is null', () => { + const report = formatReport(buildFullData({ outputAnalysis: null })); + expect(report).toContain('*No output analysis data available*'); + }); + }); + + // ------------------------------------------------------------------ + // 14. Relevance Matrix section (SYN-14) + // ------------------------------------------------------------------ + describe('relevance matrix section (12)', () => { + it('renders relevance matrix table', () => { + const data = buildFullData({ + relevance: { + available: true, agentId: 'dev', score: 85, + matrix: [{ component: 'agentConfig', importance: 'critical', status: 'ok', gap: false }], + gaps: [], + }, + }); + const report = formatReport(data); + expect(report).toContain('## 12. Relevance Matrix'); + expect(report).toContain('@dev'); + expect(report).toContain('85/100'); + expect(report).toContain('| agentConfig | critical | ok | - |'); + }); + + it('renders critical gaps section', () => { + const data = buildFullData({ + relevance: { + available: true, agentId: 'dev', score: 50, + matrix: [{ component: 'agentConfig', importance: 'critical', status: 'missing', gap: true }], + gaps: [{ component: 'agentConfig', importance: 'critical' }], + }, + }); + const report = formatReport(data); + expect(report).toContain('### Critical Gaps'); + expect(report).toContain('**agentConfig** (critical)'); + }); + + it('shows no-data message when relevance is null', () => { + const report = formatReport(buildFullData({ relevance: null })); + expect(report).toContain('*No relevance data available*'); + }); + }); + + // ------------------------------------------------------------------ + // 15. 
Empty/null data handling + // ------------------------------------------------------------------ + describe('empty and null data handling', () => { + it('handles completely empty data object', () => { + const report = formatReport({}); + expect(report).toContain('# SYNAPSE Diagnostic Report'); + expect(report).toContain('*No hook data collected*'); + expect(report).toContain('*No session data collected*'); + expect(report).toContain('*No manifest data collected*'); + expect(report).toContain('*No pipeline data collected*'); + expect(report).toContain('*No UAP bridge data collected*'); + expect(report).toContain('None found'); + }); + + it('handles all sections set to null', () => { + const report = formatReport({ + hook: null, + session: null, + manifest: null, + pipeline: null, + uap: null, + }); + expect(report).toContain('*No hook data collected*'); + expect(report).toContain('*No session data collected*'); + expect(report).toContain('*No manifest data collected*'); + expect(report).toContain('*No pipeline data collected*'); + expect(report).toContain('*No UAP bridge data collected*'); + }); + + it('handles partial data (some sections present, others missing)', () => { + const report = formatReport({ + hook: { checks: [{ name: 'test', status: 'PASS', detail: 'ok' }] }, + }); + expect(report).toContain('| test | PASS | ok |'); + expect(report).toContain('*No session data collected*'); + expect(report).toContain('*No manifest data collected*'); + }); + + it('returns a string', () => { + const report = formatReport({}); + expect(typeof report).toBe('string'); + }); + }); +}); + +// --------------------------------------------------------------------------- +// Part 2: Synapse Diagnostics Orchestrator Tests +// --------------------------------------------------------------------------- + +jest.mock('../../../.aios-core/core/synapse/diagnostics/collectors/hook-collector'); +jest.mock('../../../.aios-core/core/synapse/diagnostics/collectors/session-collector'); 
+jest.mock('../../../.aios-core/core/synapse/diagnostics/collectors/manifest-collector'); +jest.mock('../../../.aios-core/core/synapse/diagnostics/collectors/pipeline-collector'); +jest.mock('../../../.aios-core/core/synapse/diagnostics/collectors/uap-collector'); +jest.mock('../../../.aios-core/core/synapse/diagnostics/report-formatter'); +jest.mock('../../../.aios-core/core/synapse/domain/domain-loader'); + +const { collectHookStatus } = require('../../../.aios-core/core/synapse/diagnostics/collectors/hook-collector'); +const { collectSessionStatus } = require('../../../.aios-core/core/synapse/diagnostics/collectors/session-collector'); +const { collectManifestIntegrity } = require('../../../.aios-core/core/synapse/diagnostics/collectors/manifest-collector'); +const { collectPipelineSimulation } = require('../../../.aios-core/core/synapse/diagnostics/collectors/pipeline-collector'); +const { collectUapBridgeStatus } = require('../../../.aios-core/core/synapse/diagnostics/collectors/uap-collector'); +const { formatReport: mockFormatReport } = require('../../../.aios-core/core/synapse/diagnostics/report-formatter'); +const { parseManifest } = require('../../../.aios-core/core/synapse/domain/domain-loader'); + +const { runDiagnostics, runDiagnosticsRaw } = require('../../../.aios-core/core/synapse/diagnostics/synapse-diagnostics'); + +describe('synapse-diagnostics orchestrator', () => { + const projectRoot = '/fake/project'; + + const mockHook = { checks: [{ name: 'hook-installed', status: 'PASS', detail: 'ok' }] }; + const mockSession = { + fields: [{ field: 'session_id', expected: 'UUID', actual: 'abc', status: 'PASS' }], + raw: { + session: { prompt_count: 7, active_agent: { id: 'qa' } }, + bridgeData: { id: 'dev', activation_quality: 'full' }, + }, + }; + const mockManifest = { + entries: [{ domain: 'constitution', inManifest: true, fileExists: true, status: 'PASS' }], + orphanedFiles: [], + }; + const mockPipeline = { + bracket: 'FRESH', + contextPercent: 90.0, 
+ layers: [{ layer: 'L0', expected: 'loaded', status: 'PASS' }], + }; + const mockUap = { checks: [{ name: 'bridge-file', status: 'PASS', detail: 'exists' }] }; + const mockParsedManifest = { domains: ['constitution', 'global'] }; + + beforeEach(() => { + jest.clearAllMocks(); + + collectHookStatus.mockReturnValue(mockHook); + collectSessionStatus.mockReturnValue(mockSession); + collectManifestIntegrity.mockReturnValue(mockManifest); + collectPipelineSimulation.mockReturnValue(mockPipeline); + collectUapBridgeStatus.mockReturnValue(mockUap); + parseManifest.mockReturnValue(mockParsedManifest); + mockFormatReport.mockReturnValue('# Mocked Report'); + }); + + // ------------------------------------------------------------------ + // runDiagnostics + // ------------------------------------------------------------------ + describe('runDiagnostics()', () => { + it('calls all collectors and returns formatted report', () => { + const result = runDiagnostics(projectRoot); + + expect(collectHookStatus).toHaveBeenCalledWith(projectRoot); + expect(collectSessionStatus).toHaveBeenCalledWith(projectRoot, undefined); + expect(collectManifestIntegrity).toHaveBeenCalledWith(projectRoot); + expect(collectUapBridgeStatus).toHaveBeenCalledWith(projectRoot); + expect(parseManifest).toHaveBeenCalled(); + expect(collectPipelineSimulation).toHaveBeenCalled(); + expect(mockFormatReport).toHaveBeenCalledWith({ + hook: mockHook, + session: mockSession, + manifest: mockManifest, + pipeline: mockPipeline, + uap: mockUap, + }); + expect(result).toBe('# Mocked Report'); + }); + + it('passes sessionId option to session collector', () => { + runDiagnostics(projectRoot, { sessionId: 'sess-uuid-42' }); + expect(collectSessionStatus).toHaveBeenCalledWith(projectRoot, 'sess-uuid-42'); + }); + + it('extracts promptCount from session for pipeline simulation', () => { + runDiagnostics(projectRoot); + expect(collectPipelineSimulation).toHaveBeenCalledWith( + 7, // prompt_count from 
mockSession.raw.session + 'dev', // id from mockSession.raw.bridgeData + mockParsedManifest, + ); + }); + + it('defaults promptCount to 0 when session.raw is missing', () => { + collectSessionStatus.mockReturnValue({ fields: [], raw: {} }); + runDiagnostics(projectRoot); + expect(collectPipelineSimulation).toHaveBeenCalledWith(0, null, mockParsedManifest); + }); + + it('falls back to session.raw.session.active_agent.id for activeAgentId', () => { + collectSessionStatus.mockReturnValue({ + fields: [], + raw: { + session: { prompt_count: 3, active_agent: { id: 'pm' } }, + bridgeData: {}, + }, + }); + runDiagnostics(projectRoot); + expect(collectPipelineSimulation).toHaveBeenCalledWith(3, 'pm', mockParsedManifest); + }); + + it('constructs manifest path from projectRoot', () => { + runDiagnostics(projectRoot); + const manifestArg = parseManifest.mock.calls[0][0]; + expect(manifestArg).toContain('.synapse'); + expect(manifestArg).toContain('manifest'); + }); + }); + + // ------------------------------------------------------------------ + // runDiagnosticsRaw + // ------------------------------------------------------------------ + describe('runDiagnosticsRaw()', () => { + it('returns raw data object with all collector results', () => { + const result = runDiagnosticsRaw(projectRoot); + + expect(result).toEqual({ + hook: mockHook, + session: mockSession, + manifest: mockManifest, + pipeline: mockPipeline, + uap: mockUap, + }); + }); + + it('does not call formatReport', () => { + runDiagnosticsRaw(projectRoot); + expect(mockFormatReport).not.toHaveBeenCalled(); + }); + + it('calls all collectors same as runDiagnostics', () => { + runDiagnosticsRaw(projectRoot); + + expect(collectHookStatus).toHaveBeenCalledWith(projectRoot); + expect(collectSessionStatus).toHaveBeenCalledWith(projectRoot, undefined); + expect(collectManifestIntegrity).toHaveBeenCalledWith(projectRoot); + expect(collectUapBridgeStatus).toHaveBeenCalledWith(projectRoot); + 
expect(parseManifest).toHaveBeenCalled(); + expect(collectPipelineSimulation).toHaveBeenCalled(); + }); + + it('extracts promptCount and activeAgentId for pipeline', () => { + runDiagnosticsRaw(projectRoot); + expect(collectPipelineSimulation).toHaveBeenCalledWith(7, 'dev', mockParsedManifest); + }); + + it('passes sessionId option when provided', () => { + runDiagnosticsRaw(projectRoot, { sessionId: 'raw-sess-id' }); + expect(collectSessionStatus).toHaveBeenCalledWith(projectRoot, 'raw-sess-id'); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/diagnostics/qa-issues-validation.test.js +================================================== +```js +/** + * QA Issues Validation — Tests that reproduce the 7 issues identified + * in the SYNAPSE Diagnostics QA Report (2026-02-14). + * + * Each describe block maps to one issue. The test SHOULD FAIL with current + * code to confirm the problem, then PASS after the fix. + * + * @created 2026-02-14 by Quinn (@qa) + */ + +'use strict'; + +const path = require('path'); +const fs = require('fs'); +const os = require('os'); + +// Collectors under test +const { collectQualityMetrics, BRACKET_ACTIVE_LAYERS, MAX_STALENESS_MS } = require( + '../../../.aios-core/core/synapse/diagnostics/collectors/quality-collector', +); +const { collectConsistencyMetrics, MAX_TIMESTAMP_GAP_MS } = require( + '../../../.aios-core/core/synapse/diagnostics/collectors/consistency-collector', +); +const { collectOutputAnalysis, UAP_OUTPUT_EXPECTATIONS } = require( + '../../../.aios-core/core/synapse/diagnostics/collectors/output-analyzer', +); +const { collectRelevanceMatrix, IMPORTANCE } = require( + '../../../.aios-core/core/synapse/diagnostics/collectors/relevance-matrix', +); +const { getActiveLayers } = require( + '../../../.aios-core/core/synapse/context/context-tracker', +); + +// Helpers +function createTempProject() { + const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-qa-')); + const 
synapsePath = path.join(dir, '.synapse'); + const metricsDir = path.join(synapsePath, 'metrics'); + const sessionsDir = path.join(synapsePath, 'sessions'); + fs.mkdirSync(metricsDir, { recursive: true }); + fs.mkdirSync(sessionsDir, { recursive: true }); + return { dir, metricsDir, sessionsDir }; +} + +function writeMetrics(metricsDir, filename, data) { + fs.writeFileSync(path.join(metricsDir, filename), JSON.stringify(data, null, 2), 'utf8'); +} + +function cleanupDir(dir) { + fs.rmSync(dir, { recursive: true, force: true }); +} + +// ───────────────────────────────────────────────────────────────────────────── +// Issue #1: Quality Score Artificially Low by Staleness Penalty +// ───────────────────────────────────────────────────────────────────────────── +describe('Issue #1: Staleness should degrade, not zero out UAP score', () => { + let project; + + beforeEach(() => { project = createTempProject(); }); + afterEach(() => { cleanupDir(project.dir); }); + + test('UAP metrics 6 minutes old (just past MAX_STALENESS) should NOT zero score', () => { + const sixMinAgo = new Date(Date.now() - 6 * 60 * 1000).toISOString(); + + writeMetrics(project.metricsDir, 'uap-metrics.json', { + agentId: 'dev', + quality: 'full', + totalDuration: 145, + timestamp: sixMinAgo, + loaders: { + agentConfig: { status: 'ok', duration: 45 }, + permissionMode: { status: 'ok', duration: 12 }, + gitConfig: { status: 'ok', duration: 8 }, + sessionContext: { status: 'ok', duration: 23 }, + projectStatus: { status: 'ok', duration: 34 }, + memories: { status: 'skipped', duration: 0 }, + synapseSession: { status: 'ok', duration: 2 }, + }, + }); + + writeMetrics(project.metricsDir, 'hook-metrics.json', { + totalDuration: 0.88, + bracket: 'FRESH', + layersLoaded: 3, + layersSkipped: 5, + totalRules: 70, + timestamp: new Date().toISOString(), + perLayer: { + constitution: { status: 'ok', duration: 0.3, rules: 34 }, + global: { status: 'ok', duration: 0.2, rules: 25 }, + agent: { status: 'ok', 
duration: 0.38, rules: 11 }, + }, + }); + + const result = collectQualityMetrics(project.dir); + + // BUG: Currently UAP score is zeroed when stale, making overall = 42 + // EXPECTED: UAP should still score, possibly with degradation penalty + // A perfect UAP (6/7 loaders ok) should NOT produce overall grade F + expect(result.overall.grade).not.toBe('F'); + expect(result.uap.score).toBeGreaterThan(0); + }); + + test('MAX_STALENESS_MS should be 30 minutes (covers typical session length)', () => { + // UAP writes metrics once at agent activation. + // 30 min threshold covers normal session length. + // Additionally, stale data is degraded (50%) instead of zeroed. + expect(MAX_STALENESS_MS).toBe(30 * 60 * 1000); + }); +}); + +// ───────────────────────────────────────────────────────────────────────────── +// Issue #2: Timestamp Gap Threshold Too Restrictive +// ───────────────────────────────────────────────────────────────────────────── +describe('Issue #2: Timestamp gap between UAP and Hook should allow > 30s', () => { + let project; + + beforeEach(() => { project = createTempProject(); }); + afterEach(() => { cleanupDir(project.dir); }); + + test('MAX_TIMESTAMP_GAP_MS should be 10 minutes (covers independent pipeline lifecycle)', () => { + // UAP: written once at activation + // Hook: written every prompt + // 10 min threshold covers normal operation gaps + expect(MAX_TIMESTAMP_GAP_MS).toBe(10 * 60 * 1000); + }); + + test('2-minute gap between UAP and Hook should PASS (normal operation)', () => { + const uapTime = new Date(Date.now() - 2 * 60 * 1000).toISOString(); + const hookTime = new Date().toISOString(); + + writeMetrics(project.metricsDir, 'uap-metrics.json', { + agentId: 'po', + quality: 'full', + totalDuration: 168, + timestamp: uapTime, + loaders: { agentConfig: { status: 'ok', duration: 45 } }, + }); + + writeMetrics(project.metricsDir, 'hook-metrics.json', { + totalDuration: 0.88, + bracket: 'FRESH', + layersLoaded: 3, + timestamp: hookTime, + 
perLayer: { + constitution: { status: 'ok', duration: 0.3, rules: 34 }, + }, + }); + + // Write active-agent bridge + fs.writeFileSync( + path.join(project.dir, '.synapse', 'sessions', '_active-agent.json'), + JSON.stringify({ id: 'po' }), + ); + + const result = collectConsistencyMetrics(project.dir); + const timestampCheck = result.checks.find(c => c.name === 'timestamp'); + + // BUG: Currently FAILS because 120s > 30s threshold + // EXPECTED: Should PASS — 2 minutes is normal for independent pipelines + expect(timestampCheck.status).toBe('PASS'); + }); +}); + +// ───────────────────────────────────────────────────────────────────────────── +// Issue #3: hookBootMs Always 0 +// ───────────────────────────────────────────────────────────────────────────── +describe('Issue #3: hookBootMs should propagate from hook to engine metrics', () => { + test('engine._persistHookMetrics receives config with _hookBootTime', () => { + // Verify the flow: hook passes _hookBootTime → engine.process → _persistHookMetrics + const { SynapseEngine } = require('../../../.aios-core/core/synapse/engine'); + const project = createTempProject(); + + try { + const engine = new SynapseEngine(path.join(project.dir, '.synapse')); + + // Simulate what the hook does: pass a hrtime bigint + const mockBootTime = process.hrtime.bigint() - BigInt(50 * 1e6); // 50ms ago + + // Spy on _persistHookMetrics + let capturedConfig; + const original = engine._persistHookMetrics.bind(engine); + engine._persistHookMetrics = function(summary, bracket, config) { + capturedConfig = config; + original(summary, bracket, config); + }; + + // Run process with _hookBootTime (simulating hook entry) + return engine.process('test prompt', { prompt_count: 0 }, { _hookBootTime: mockBootTime }) + .then(() => { + expect(capturedConfig).toBeDefined(); + expect(capturedConfig._hookBootTime).toBe(mockBootTime); + + // Verify the metrics file has hookBootMs > 0 + const metricsPath = path.join(project.dir, '.synapse', 'metrics', 
'hook-metrics.json');
+          if (fs.existsSync(metricsPath)) {
+            const data = JSON.parse(fs.readFileSync(metricsPath, 'utf8'));
+            expect(data.hookBootMs).toBeGreaterThan(0);
+          }
+        })
+        // Clean up only after the async pipeline settles; a try/finally here
+        // would delete the temp dir before the promise resolved.
+        .finally(() => cleanupDir(project.dir));
+    } catch (err) {
+      // Synchronous failure before the promise was created: still clean up.
+      cleanupDir(project.dir);
+      throw err;
+    }
+  });
+});
+
+// ─────────────────────────────────────────────────────────────────────────────
+// Issue #4: Memories Loader Missing Context
+// ─────────────────────────────────────────────────────────────────────────────
+describe('Issue #4: Missing memories loader should say "Optional — Pro feature"', () => {
+  let project;
+
+  beforeEach(() => { project = createTempProject(); });
+  afterEach(() => { cleanupDir(project.dir); });
+
+  test('output analyzer shows generic message for missing memories instead of Pro context', () => {
+    writeMetrics(project.metricsDir, 'uap-metrics.json', {
+      loaders: {
+        agentConfig: { status: 'ok', duration: 45 },
+        // memories is intentionally absent — it's a Pro feature
+      },
+    });
+
+    const result = collectOutputAnalysis(project.dir);
+    const memoriesEntry = result.uapAnalysis.find(a => a.name === 'memories');
+
+    expect(memoriesEntry).toBeDefined();
+    expect(memoriesEntry.status).toBe('missing');
+
+    // BUG: Shows generic "Loader not present in metrics"
+    // EXPECTED: Should mention it's a Pro/Optional feature
+    expect(memoriesEntry.detail).toContain('Optional');
+  });
+});
+
+// ─────────────────────────────────────────────────────────────────────────────
+// Issue #5: Relevance Matrix Treats "skipped" as Gap
+// ─────────────────────────────────────────────────────────────────────────────
+describe('Issue #5: Skipped layers should NOT count as gaps', () => {
+  let project;
+
+  beforeEach(() => { project = createTempProject(); });
+  afterEach(() => { cleanupDir(project.dir); });
+
+  test('layers skipped by bracket (no data) should not be gaps', () => {
+    // Agent @po in FRESH bracket: only L0, L1, L2, L7 active
+    // L3-L6 are skipped by design — they have no workflow/task/squad
+    
writeMetrics(project.metricsDir, 'uap-metrics.json', { + agentId: 'po', + loaders: { + agentConfig: { status: 'ok', duration: 20 }, + sessionContext: { status: 'ok', duration: 15 }, + }, + }); + + writeMetrics(project.metricsDir, 'hook-metrics.json', { + bracket: 'FRESH', + perLayer: { + constitution: { status: 'ok', rules: 34 }, + global: { status: 'ok', rules: 25 }, + agent: { status: 'ok', rules: 11 }, + workflow: { status: 'skipped', rules: 0 }, + task: { status: 'skipped', rules: 0 }, + squad: { status: 'skipped', rules: 0 }, + keyword: { status: 'skipped', rules: 0 }, + 'star-command': { status: 'skipped', rules: 0 }, + }, + }); + + fs.writeFileSync( + path.join(project.dir, '.synapse', 'sessions', '_active-agent.json'), + JSON.stringify({ id: 'po' }), + ); + + const result = collectRelevanceMatrix(project.dir); + + // BUG: Currently counts skipped layers as gaps + // For @po with no workflow/task, skipped L3-L6 is NORMAL + const skippedGaps = result.gaps.filter(g => + ['workflow', 'task', 'keyword', 'star-command'].includes(g.component), + ); + + // EXPECTED: skipped layers should NOT appear as gaps + expect(skippedGaps.length).toBe(0); + }); +}); + +// ───────────────────────────────────────────────────────────────────────────── +// Issue #7: BRACKET_ACTIVE_LAYERS Inconsistent with Engine +// ───────────────────────────────────────────────────────────────────────────── +describe('Issue #7: BRACKET_ACTIVE_LAYERS should match engine context-tracker', () => { + test('FRESH bracket: quality-collector expects same layers as context-tracker', () => { + // context-tracker LAYER_CONFIGS.FRESH = [0, 1, 2, 7] + // = ['constitution', 'global', 'agent', 'star-command'] + const engineLayers = getActiveLayers('FRESH'); + const engineLayerNames = engineLayers.layers.map(n => { + const map = { + 0: 'constitution', 1: 'global', 2: 'agent', 3: 'workflow', + 4: 'task', 5: 'squad', 6: 'keyword', 7: 'star-command', + }; + return map[n]; + }); + + const qualityLayers = 
BRACKET_ACTIVE_LAYERS.FRESH; + + // BUG: quality-collector and engine may have different layer expectations + expect(qualityLayers.sort()).toEqual(engineLayerNames.sort()); + }); + + test('MODERATE bracket: quality-collector expects same layers as context-tracker', () => { + const engineLayers = getActiveLayers('MODERATE'); + const engineLayerNames = engineLayers.layers.map(n => { + const map = { + 0: 'constitution', 1: 'global', 2: 'agent', 3: 'workflow', + 4: 'task', 5: 'squad', 6: 'keyword', 7: 'star-command', + }; + return map[n]; + }); + + const qualityLayers = BRACKET_ACTIVE_LAYERS.MODERATE; + + // This should match — both expect all 8 layers for MODERATE + // The REAL problem is that even though all are "active", + // layers without data (no workflow, no task) return null → skipped + expect(qualityLayers.sort()).toEqual(engineLayerNames.sort()); + }); + + test('quality-collector penalizes layers that engine skips due to no data (false negative)', () => { + // Even in MODERATE (all layers active), L3-L6 often produce 0 results + // because there's no active workflow, task, squad, or keyword. + // The quality scorer gives 0 points for these, which is unfair. 
+ const project = createTempProject(); + + try { + writeMetrics(project.metricsDir, 'hook-metrics.json', { + totalDuration: 0.88, + bracket: 'MODERATE', + timestamp: new Date().toISOString(), + perLayer: { + constitution: { status: 'ok', duration: 0.3, rules: 34 }, + global: { status: 'ok', duration: 0.2, rules: 25 }, + agent: { status: 'ok', duration: 0.38, rules: 11 }, + // These 5 layers are "active" per bracket but have no data + workflow: { status: 'skipped', rules: 0 }, + task: { status: 'skipped', rules: 0 }, + squad: { status: 'skipped', rules: 0 }, + keyword: { status: 'skipped', rules: 0 }, + 'star-command': { status: 'skipped', rules: 0 }, + }, + }); + + const result = collectQualityMetrics(project.dir); + + // With 3/8 layers ok and 5 skipped, hook score should reflect + // that skipped-by-no-data is not the same as skipped-by-error + // Current: score = 70/100 (constitution+global+agent = 70 weight) + // maxPossible = 100 (all 8 layers expected for MODERATE) + // normalized = 70% → Grade B + // This is actually CORRECT for MODERATE bracket — all layers SHOULD load + // The issue is that "skipped" means "no data available" not "failed to load" + + // The hook score should be >= 70% since the 3 critical layers are ok + expect(result.hook.score).toBeGreaterThanOrEqual(70); + } finally { + cleanupDir(project.dir); + } + }); +}); + +``` + +================================================== +📄 tests/synapse/diagnostics/quality-collector.test.js +================================================== +```js +/** + * Quality Collector — Unit Tests + * + * Tests for collectQualityMetrics() which scores context relevance. 
+ * + * @module tests/synapse/diagnostics/quality-collector + * @story SYN-12 - Timing Metrics + Context Quality Analysis + * @coverage Target: >85% + */ + +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { + collectQualityMetrics, + UAP_RUBRIC, + HOOK_RUBRIC, + BRACKET_ACTIVE_LAYERS, + MAX_STALENESS_MS, +} = require('../../../.aios-core/core/synapse/diagnostics/collectors/quality-collector'); + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +let tmpDir; + +function createTmpDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'quality-test-')); +} + +function writeJson(filePath, data) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf8'); +} + +function writeFile(filePath, content) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, content, 'utf8'); +} + +function buildAllOkUap() { + const loaders = {}; + for (const r of UAP_RUBRIC) { + loaders[r.name] = { duration: 10, status: 'ok' }; + } + return { agentId: 'dev', quality: 'full', totalDuration: 100, loaders, timestamp: new Date().toISOString() }; +} + +function buildAllOkHook(bracket = 'MODERATE') { + const perLayer = {}; + for (const r of HOOK_RUBRIC) { + perLayer[r.name] = { duration: 5, status: 'ok', rules: 3 }; + } + return { totalDuration: 50, bracket, perLayer, timestamp: new Date().toISOString() }; +} + +// --------------------------------------------------------------------------- +// Tests +// --------------------------------------------------------------------------- + +beforeEach(() => { + tmpDir = createTmpDir(); +}); + +afterEach(() => { + fs.rmSync(tmpDir, { recursive: true, force: true }); +}); + +// ---- UAP scoring ---- + +describe('collectQualityMetrics — UAP scoring', () => { + test('100 
 score when all loaders ok', () => {
+    writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildAllOkUap());
+    writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildAllOkHook());
+
+    const result = collectQualityMetrics(tmpDir);
+
+    expect(result.uap.available).toBe(true);
+    expect(result.uap.score).toBe(100);
+  });
+
+  test('0 score when all loaders missing', () => {
+    writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), {
+      totalDuration: 0,
+      quality: 'fallback',
+      loaders: {},
+    });
+
+    const result = collectQualityMetrics(tmpDir);
+
+    expect(result.uap.available).toBe(true);
+    expect(result.uap.score).toBe(0);
+  });
+
+  test('partial score when some loaders fail', () => {
+    const uap = buildAllOkUap();
+    uap.loaders.agentConfig.status = 'error'; // -25
+    uap.loaders.memories.status = 'timeout'; // -20
+    writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), uap);
+
+    const result = collectQualityMetrics(tmpDir);
+
+    // maxPossible = sum of all UAP_RUBRIC weights = 90
+    // actual = 90 - (25 + 20) = 45 (agentConfig and memories lost)
+    // normalized = 45/90 * 100 = 50
+    expect(result.uap.score).toBe(50);
+  });
+
+  test('loader entries include criticality and impact from rubric', () => {
+    writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildAllOkUap());
+
+    const result = collectQualityMetrics(tmpDir);
+
+    const agentConfig = result.uap.loaders.find((l) => l.name === 'agentConfig');
+    expect(agentConfig.criticality).toBe('CRITICAL');
+    expect(agentConfig.impact).toBe('Agent identity and commands');
+    expect(agentConfig.score).toBe(25);
+    expect(agentConfig.maxScore).toBe(25);
+  });
+
+  test('unavailable when no uap-metrics.json', () => {
+    const result = collectQualityMetrics(tmpDir);
+
+    expect(result.uap.available).toBe(false);
+  });
+
+  test('unavailable when uap-metrics.json is malformed', () => {
+    writeFile(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), '!!!');
+
+    const result = collectQualityMetrics(tmpDir);
+
+    expect(result.uap.available).toBe(false);
+  });
+});
+
+// ---- Hook scoring ----
+
+describe('collectQualityMetrics — Hook scoring', () => {
+  test('100 score when all expected layers ok for MODERATE', () => {
+    writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildAllOkHook('MODERATE'));
+
+    const result = collectQualityMetrics(tmpDir);
+
+    expect(result.hook.available).toBe(true);
+    expect(result.hook.score).toBe(100);
+    expect(result.hook.bracket).toBe('MODERATE');
+  });
+
+  test('adjusts maxPossible based on FRESH bracket (only 4 layers expected)', () => {
+    const hook = buildAllOkHook('FRESH');
+    writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), hook);
+
+    const result = collectQualityMetrics(tmpDir);
+
+    // FRESH expects: constitution(25), global(20), agent(25), star-command(2) = 72
+    expect(result.hook.available).toBe(true);
+    expect(result.hook.score).toBe(100); // all ok
+ }); + + test('adjusts maxPossible based on CRITICAL bracket (only 2 layers)', () => { + const hook = buildAllOkHook('CRITICAL'); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), hook); + + const result = collectQualityMetrics(tmpDir); + + // CRITICAL expects: constitution(25), agent(25) = 50 + expect(result.hook.score).toBe(100); + }); + + test('partial score when expected layer fails', () => { + const hook = buildAllOkHook('MODERATE'); + hook.perLayer.constitution.status = 'error'; // -25 of 100 + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), hook); + + const result = collectQualityMetrics(tmpDir); + + expect(result.hook.score).toBe(75); + }); + + test('layers include rules count', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildAllOkHook()); + + const result = collectQualityMetrics(tmpDir); + + const constitution = result.hook.layers.find((l) => l.name === 'constitution'); + expect(constitution.rules).toBe(3); + }); + + test('not-expected layers have score 0 and maxScore 0', () => { + // FRESH: workflow, task, squad, keyword are not expected + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildAllOkHook('FRESH')); + + const result = collectQualityMetrics(tmpDir); + + const workflow = result.hook.layers.find((l) => l.name === 'workflow'); + expect(workflow.status).toBe('not-expected'); + expect(workflow.score).toBe(0); + expect(workflow.maxScore).toBe(0); + }); + + test('unavailable when no hook-metrics.json', () => { + const result = collectQualityMetrics(tmpDir); + + expect(result.hook.available).toBe(false); + }); +}); + +// ---- Overall scoring ---- + +describe('collectQualityMetrics — Overall scoring', () => { + test('weighted average: 40% UAP + 60% Hook', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildAllOkUap()); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), 
buildAllOkHook()); + + const result = collectQualityMetrics(tmpDir); + + expect(result.overall.score).toBe(100); + expect(result.overall.grade).toBe('A'); + expect(result.overall.label).toBe('EXCELLENT'); + }); + + test('grade A for score >= 90', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildAllOkUap()); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildAllOkHook()); + + const result = collectQualityMetrics(tmpDir); + expect(result.overall.grade).toBe('A'); + }); + + test('grade B for score 75-89', () => { + const uap = buildAllOkUap(); + uap.loaders.agentConfig.status = 'error'; // drops UAP to ~72 + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), uap); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildAllOkHook()); + + const result = collectQualityMetrics(tmpDir); + // UAP: ~72 * 0.4 = 28.8, Hook: 100 * 0.6 = 60, total = 88.8 → 89 + expect(result.overall.grade).toBe('B'); + }); + + test('grade F for score < 45', () => { + // All loaders fail + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), { + totalDuration: 0, quality: 'fallback', loaders: {}, + }); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), { + totalDuration: 0, bracket: 'MODERATE', perLayer: {}, + }); + + const result = collectQualityMetrics(tmpDir); + expect(result.overall.grade).toBe('F'); + expect(result.overall.score).toBe(0); + }); + + test('uses only UAP score when hook unavailable', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildAllOkUap()); + + const result = collectQualityMetrics(tmpDir); + + expect(result.overall.score).toBe(100); + expect(result.hook.available).toBe(false); + }); + + test('uses only Hook score when UAP unavailable', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildAllOkHook()); + + const result = collectQualityMetrics(tmpDir); 
+ + expect(result.overall.score).toBe(100); + expect(result.uap.available).toBe(false); + }); + + test('score 0 when both unavailable', () => { + const result = collectQualityMetrics(tmpDir); + + expect(result.overall.score).toBe(0); + expect(result.overall.grade).toBe('F'); + }); +}); + +// ---- Rubric exports ---- + +describe('Rubric exports', () => { + test('UAP_RUBRIC weights sum to 90', () => { + const sum = UAP_RUBRIC.reduce((s, r) => s + r.weight, 0); + expect(sum).toBe(90); + }); + + test('HOOK_RUBRIC weights sum to 100', () => { + const sum = HOOK_RUBRIC.reduce((s, r) => s + r.weight, 0); + expect(sum).toBe(100); + }); + + test('BRACKET_ACTIVE_LAYERS has all 4 brackets', () => { + expect(Object.keys(BRACKET_ACTIVE_LAYERS).sort()).toEqual( + ['CRITICAL', 'DEPLETED', 'FRESH', 'MODERATE'], + ); + }); + + test('MAX_STALENESS_MS is 30 minutes', () => { + expect(MAX_STALENESS_MS).toBe(30 * 60 * 1000); + }); +}); + +// ---- Staleness detection (SYN-14) ---- + +describe('collectQualityMetrics — Staleness detection', () => { + test('stale=false when timestamp is recent', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildAllOkUap()); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildAllOkHook()); + + const result = collectQualityMetrics(tmpDir); + + expect(result.uap.stale).toBe(false); + expect(result.hook.stale).toBe(false); + }); + + test('stale UAP data applies 50% degradation (not zero)', () => { + const old = new Date(Date.now() - MAX_STALENESS_MS - 10000).toISOString(); + const uap = buildAllOkUap(); + uap.timestamp = old; + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), uap); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildAllOkHook()); + + const result = collectQualityMetrics(tmpDir); + + expect(result.uap.stale).toBe(true); + // Score is degraded (50% of normal) but NOT zeroed + expect(result.uap.score).toBeGreaterThan(0); + 
expect(result.uap.available).toBe(true); + }); + + test('stale Hook data applies 50% degradation (not zero)', () => { + const old = new Date(Date.now() - MAX_STALENESS_MS - 10000).toISOString(); + const hook = buildAllOkHook(); + hook.timestamp = old; + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildAllOkUap()); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), hook); + + const result = collectQualityMetrics(tmpDir); + + expect(result.hook.stale).toBe(true); + // Score is degraded (50% of normal) but NOT zeroed + expect(result.hook.score).toBeGreaterThan(0); + expect(result.hook.available).toBe(true); + }); + + test('both stale gives degraded score (not zero/F)', () => { + const old = new Date(Date.now() - MAX_STALENESS_MS - 10000).toISOString(); + const uap = buildAllOkUap(); + uap.timestamp = old; + const hook = buildAllOkHook(); + hook.timestamp = old; + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), uap); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), hook); + + const result = collectQualityMetrics(tmpDir); + + // Both degraded at 50% → overall ~50 (grade D, not F) + expect(result.overall.score).toBeGreaterThan(0); + expect(result.overall.score).toBeLessThanOrEqual(50); + expect(result.overall.grade).not.toBe('F'); + }); + + test('stale=false when no data available', () => { + const result = collectQualityMetrics(tmpDir); + + expect(result.uap.stale).toBe(false); + expect(result.hook.stale).toBe(false); + }); +}); + +``` + +================================================== +📄 tests/synapse/diagnostics/consistency-collector.test.js +================================================== +```js +/** + * Consistency Collector — Unit Tests + * + * @module tests/synapse/diagnostics/consistency-collector + * @story SYN-14 + */ + +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +const { 
collectConsistencyMetrics, MAX_TIMESTAMP_GAP_MS } = require( + '../../../.aios-core/core/synapse/diagnostics/collectors/consistency-collector', +); + +let tmpDir; + +function createTmpDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 'consistency-test-')); +} + +function writeJson(filePath, data) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf8'); +} + +function buildUapMetrics(overrides = {}) { + return { + agentId: 'dev', quality: 'full', totalDuration: 100, + loaders: { agentConfig: { duration: 10, status: 'ok' } }, + timestamp: new Date().toISOString(), + ...overrides, + }; +} + +function buildHookMetrics(overrides = {}) { + return { + totalDuration: 50, bracket: 'MODERATE', layersLoaded: 3, + perLayer: { constitution: { duration: 5, status: 'ok', rules: 3 } }, + timestamp: new Date().toISOString(), + ...overrides, + }; +} + +beforeEach(() => { tmpDir = createTmpDir(); }); +afterEach(() => { fs.rmSync(tmpDir, { recursive: true, force: true }); }); + +describe('collectConsistencyMetrics', () => { + test('available=false when no metrics files', () => { + const result = collectConsistencyMetrics(tmpDir); + expect(result.available).toBe(false); + expect(result.checks).toHaveLength(0); + }); + + test('WARN when only UAP exists', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + const result = collectConsistencyMetrics(tmpDir); + expect(result.available).toBe(true); + expect(result.checks[0].status).toBe('WARN'); + expect(result.checks[0].detail).toContain('Hook metrics missing'); + }); + + test('WARN when only Hook exists', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + const result = collectConsistencyMetrics(tmpDir); + expect(result.checks[0].detail).toContain('UAP metrics missing'); + }); + + test('bracket check PASS for valid bracket', () => { + 
writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + const result = collectConsistencyMetrics(tmpDir); + const bracket = result.checks.find(c => c.name === 'bracket'); + expect(bracket.status).toBe('PASS'); + }); + + test('bracket check FAIL for unknown bracket', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics()); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics({ bracket: 'INVALID' })); + const result = collectConsistencyMetrics(tmpDir); + const bracket = result.checks.find(c => c.name === 'bracket'); + expect(bracket.status).toBe('FAIL'); + }); + + test('agent check PASS when UAP matches bridge', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics({ agentId: 'dev' })); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + writeJson(path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'), { id: 'dev' }); + const result = collectConsistencyMetrics(tmpDir); + const agent = result.checks.find(c => c.name === 'agent'); + expect(agent.status).toBe('PASS'); + }); + + test('agent check FAIL when UAP != bridge', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics({ agentId: 'dev' })); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics()); + writeJson(path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'), { id: 'qa' }); + const result = collectConsistencyMetrics(tmpDir); + const agent = result.checks.find(c => c.name === 'agent'); + expect(agent.status).toBe('FAIL'); + }); + + test('timestamp check PASS when gap < 30s', () => { + const now = new Date().toISOString(); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), 
buildUapMetrics({ timestamp: now })); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics({ timestamp: now })); + const result = collectConsistencyMetrics(tmpDir); + const ts = result.checks.find(c => c.name === 'timestamp'); + expect(ts.status).toBe('PASS'); + }); + + test('timestamp check FAIL when gap > 10 minutes', () => { + const old = new Date(Date.now() - 11 * 60 * 1000).toISOString(); // 11 min gap + const now = new Date().toISOString(); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics({ timestamp: old })); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics({ timestamp: now })); + const result = collectConsistencyMetrics(tmpDir); + const ts = result.checks.find(c => c.name === 'timestamp'); + expect(ts.status).toBe('FAIL'); + }); + + test('quality check PASS for full quality + layers loaded', () => { + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics({ quality: 'full' })); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics({ layersLoaded: 5 })); + const result = collectConsistencyMetrics(tmpDir); + const q = result.checks.find(c => c.name === 'quality'); + expect(q.status).toBe('PASS'); + }); + + test('score counts passing checks', () => { + const now = new Date().toISOString(); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'uap-metrics.json'), buildUapMetrics({ timestamp: now })); + writeJson(path.join(tmpDir, '.synapse', 'metrics', 'hook-metrics.json'), buildHookMetrics({ timestamp: now })); + const result = collectConsistencyMetrics(tmpDir); + expect(result.maxScore).toBe(4); + expect(result.score).toBeGreaterThanOrEqual(2); // bracket + timestamp + quality at minimum + }); + + test('exports MAX_TIMESTAMP_GAP_MS constant', () => { + expect(MAX_TIMESTAMP_GAP_MS).toBe(10 * 60 * 1000); // 10 minutes + }); +}); + +``` + 
+================================================== +📄 tests/synapse/diagnostics/collectors.test.js +================================================== +```js +/** + * SYNAPSE Diagnostic Collectors — Unit Tests + * + * Tests for all 5 collectors: hook, session, manifest, pipeline, UAP. + * + * @module tests/synapse/diagnostics/collectors + * @story SYN-13 - UAP Session Bridge + SYNAPSE Diagnostics + * @coverage Target: >85% for collectors + */ + +'use strict'; + +const fs = require('fs'); +const os = require('os'); +const path = require('path'); + +// Mock parseManifest before requiring manifest-collector +jest.mock( + '../../../.aios-core/core/synapse/domain/domain-loader', + () => ({ + parseManifest: jest.fn(), + loadDomainFile: jest.fn(() => []), + isExcluded: jest.fn(() => false), + matchKeywords: jest.fn(() => false), + extractDomainInfo: jest.fn(() => ({ domainName: null, suffix: null })), + domainNameToFile: jest.fn((name) => name.toLowerCase().replace(/_/g, '-')), + KNOWN_SUFFIXES: [], + GLOBAL_KEYS: [], + }) +); + +const { collectHookStatus } = require('../../../.aios-core/core/synapse/diagnostics/collectors/hook-collector'); +const { collectSessionStatus } = require('../../../.aios-core/core/synapse/diagnostics/collectors/session-collector'); +const { collectManifestIntegrity } = require('../../../.aios-core/core/synapse/diagnostics/collectors/manifest-collector'); +const { collectPipelineSimulation } = require('../../../.aios-core/core/synapse/diagnostics/collectors/pipeline-collector'); +const { collectUapBridgeStatus } = require('../../../.aios-core/core/synapse/diagnostics/collectors/uap-collector'); +const { parseManifest } = require('../../../.aios-core/core/synapse/domain/domain-loader'); + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +let tmpDir; + +function createTmpDir() { + return fs.mkdtempSync(path.join(os.tmpdir(), 
'synapse-test-')); +} + +function writeJson(filePath, data) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf8'); +} + +function writeFile(filePath, content) { + fs.mkdirSync(path.dirname(filePath), { recursive: true }); + fs.writeFileSync(filePath, content, 'utf8'); +} + +// --------------------------------------------------------------------------- +// hook-collector +// --------------------------------------------------------------------------- + +describe('hook-collector: collectHookStatus', () => { + beforeEach(() => { + tmpDir = createTmpDir(); + }); + + afterEach(() => { + fs.rmSync(tmpDir, { recursive: true, force: true }); + }); + + test('returns PASS when settings.local.json has synapse-engine hook', () => { + writeJson(path.join(tmpDir, '.claude', 'settings.local.json'), { + hooks: { + UserPromptSubmit: ['node .claude/hooks/synapse-engine.cjs'], + }, + }); + + // Create the hook file so check 2 + 3 also pass + writeFile( + path.join(tmpDir, '.claude', 'hooks', 'synapse-engine.cjs'), + 'module.exports = {};' + ); + + const result = collectHookStatus(tmpDir); + + expect(result.checks).toBeDefined(); + const registered = result.checks.find((c) => c.name === 'Hook registered'); + expect(registered.status).toBe('PASS'); + expect(registered.detail).toContain('synapse-engine'); + }); + + test('returns FAIL when settings.local.json is missing', () => { + const result = collectHookStatus(tmpDir); + + const registered = result.checks.find((c) => c.name === 'Hook registered'); + expect(registered.status).toBe('FAIL'); + expect(registered.detail).toContain('not found'); + }); + + test('returns FAIL when hook not registered in settings', () => { + writeJson(path.join(tmpDir, '.claude', 'settings.local.json'), { + hooks: { + UserPromptSubmit: ['node some-other-hook.js'], + }, + }); + + const result = collectHookStatus(tmpDir); + + const registered = result.checks.find((c) => c.name === 'Hook 
registered'); + expect(registered.status).toBe('FAIL'); + expect(registered.detail).toContain('No synapse-engine hook found'); + }); + + test('returns PASS for hook file exists when file is present', () => { + writeJson(path.join(tmpDir, '.claude', 'settings.local.json'), { + hooks: { UserPromptSubmit: [] }, + }); + writeFile( + path.join(tmpDir, '.claude', 'hooks', 'synapse-engine.cjs'), + '// hook\nmodule.exports = {};' + ); + + const result = collectHookStatus(tmpDir); + + const fileCheck = result.checks.find((c) => c.name === 'Hook file exists'); + expect(fileCheck.status).toBe('PASS'); + expect(fileCheck.detail).toContain('lines'); + expect(fileCheck.detail).toContain('bytes'); + }); + + test('returns FAIL for hook file exists when file is missing', () => { + writeJson(path.join(tmpDir, '.claude', 'settings.local.json'), { + hooks: { UserPromptSubmit: [] }, + }); + + const result = collectHookStatus(tmpDir); + + const fileCheck = result.checks.find((c) => c.name === 'Hook file exists'); + expect(fileCheck.status).toBe('FAIL'); + expect(fileCheck.detail).toContain('not found'); + }); + + test('returns PASS for hook executable when file is valid JS', () => { + writeJson(path.join(tmpDir, '.claude', 'settings.local.json'), { + hooks: { UserPromptSubmit: [] }, + }); + writeFile( + path.join(tmpDir, '.claude', 'hooks', 'synapse-engine.cjs'), + 'module.exports = {};' + ); + + const result = collectHookStatus(tmpDir); + + const execCheck = result.checks.find((c) => c.name === 'Hook executable'); + expect(execCheck.status).toBe('PASS'); + expect(execCheck.detail).toContain('resolve'); + }); + + test('returns SKIP for hook executable when hook file does not exist', () => { + writeJson(path.join(tmpDir, '.claude', 'settings.local.json'), { + hooks: { UserPromptSubmit: [] }, + }); + + const result = collectHookStatus(tmpDir); + + const execCheck = result.checks.find((c) => c.name === 'Hook executable'); + expect(execCheck.status).toBe('SKIP'); + }); + + test('handles 
hook entry as object with command property', () => { + writeJson(path.join(tmpDir, '.claude', 'settings.local.json'), { + hooks: { + UserPromptSubmit: [ + { command: 'node .claude/hooks/synapse-engine.cjs', timeout: 5000 }, + ], + }, + }); + writeFile( + path.join(tmpDir, '.claude', 'hooks', 'synapse-engine.cjs'), + 'module.exports = {};' + ); + + const result = collectHookStatus(tmpDir); + + const registered = result.checks.find((c) => c.name === 'Hook registered'); + expect(registered.status).toBe('PASS'); + }); + + test('handles malformed JSON in settings.local.json', () => { + writeFile( + path.join(tmpDir, '.claude', 'settings.local.json'), + '{ invalid json' + ); + + const result = collectHookStatus(tmpDir); + + const registered = result.checks.find((c) => c.name === 'Hook registered'); + expect(registered.status).toBe('ERROR'); + expect(registered.detail).toContain('Failed to read settings'); + }); +}); + +// --------------------------------------------------------------------------- +// session-collector +// --------------------------------------------------------------------------- + +describe('session-collector: collectSessionStatus', () => { + beforeEach(() => { + tmpDir = createTmpDir(); + }); + + afterEach(() => { + fs.rmSync(tmpDir, { recursive: true, force: true }); + }); + + test('returns PASS when _active-agent.json exists with valid data', () => { + const bridgePath = path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'); + writeJson(bridgePath, { + id: 'dev', + activation_quality: 'full', + source: 'uap', + }); + + const result = collectSessionStatus(tmpDir); + + expect(result.fields).toBeDefined(); + expect(result.raw).toBeDefined(); + + const agentField = result.fields.find((f) => f.field === 'active_agent.id'); + expect(agentField.status).toBe('PASS'); + expect(agentField.actual).toBe('dev'); + + const qualityField = result.fields.find((f) => f.field === 'activation_quality'); + expect(qualityField.status).toBe('PASS'); + 
expect(qualityField.actual).toBe('full'); + + const bridgeField = result.fields.find((f) => f.field === '_active-agent.json'); + expect(bridgeField.status).toBe('PASS'); + }); + + test('returns WARN when no agent data found', () => { + const result = collectSessionStatus(tmpDir); + + const agentField = result.fields.find((f) => f.field === 'active_agent.id'); + expect(agentField.status).toBe('WARN'); + expect(agentField.actual).toBe('(none)'); + + const bridgeField = result.fields.find((f) => f.field === '_active-agent.json'); + expect(bridgeField.status).toBe('WARN'); + }); + + test('returns INFO for prompt_count and bracket when no session file', () => { + const result = collectSessionStatus(tmpDir); + + const promptField = result.fields.find((f) => f.field === 'prompt_count'); + expect(promptField.status).toBe('INFO'); + expect(promptField.actual).toBe('(no session)'); + + const bracketField = result.fields.find((f) => f.field === 'bracket'); + expect(bracketField.status).toBe('INFO'); + expect(bracketField.actual).toBe('(no session)'); + }); + + test('reads session file by sessionId when provided', () => { + const sessionsDir = path.join(tmpDir, '.synapse', 'sessions'); + const sessionId = 'abc-123-def'; + writeJson(path.join(sessionsDir, `${sessionId}.json`), { + active_agent: { id: 'qa', activation_quality: 'partial' }, + prompt_count: 5, + context: { last_bracket: 'MODERATE' }, + }); + + const result = collectSessionStatus(tmpDir, sessionId); + + const agentField = result.fields.find((f) => f.field === 'active_agent.id'); + expect(agentField.status).toBe('PASS'); + expect(agentField.actual).toBe('qa'); + + const promptField = result.fields.find((f) => f.field === 'prompt_count'); + expect(promptField.status).toBe('PASS'); + expect(promptField.actual).toBe('5'); + + const bracketField = result.fields.find((f) => f.field === 'bracket'); + expect(bracketField.status).toBe('PASS'); + expect(bracketField.actual).toBe('MODERATE'); + }); + + test('reads bridge file 
as fallback when sessionId not found', () => { + const bridgePath = path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'); + writeJson(bridgePath, { + id: 'architect', + activation_quality: 'fallback', + }); + + const result = collectSessionStatus(tmpDir, 'nonexistent-session-id'); + + const agentField = result.fields.find((f) => f.field === 'active_agent.id'); + expect(agentField.status).toBe('PASS'); + expect(agentField.actual).toBe('architect'); + }); + + test('returns all 5 expected fields', () => { + const result = collectSessionStatus(tmpDir); + + expect(result.fields).toHaveLength(5); + const fieldNames = result.fields.map((f) => f.field); + expect(fieldNames).toEqual([ + 'active_agent.id', + 'activation_quality', + 'prompt_count', + 'bracket', + '_active-agent.json', + ]); + }); +}); + +// --------------------------------------------------------------------------- +// manifest-collector (mocked parseManifest) +// --------------------------------------------------------------------------- + +describe('manifest-collector: collectManifestIntegrity', () => { + beforeEach(() => { + tmpDir = createTmpDir(); + parseManifest.mockReset(); + }); + + afterEach(() => { + fs.rmSync(tmpDir, { recursive: true, force: true }); + }); + + test('returns PASS for domains with existing files', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + + // Create domain files + writeFile(path.join(synapsePath, 'agent-dev'), 'DEV_RULE_1=code stuff'); + writeFile(path.join(synapsePath, 'workflow-story'), 'WF_RULE_1=story stuff'); + + parseManifest.mockReturnValue({ + devmode: false, + globalExclude: [], + domains: { + AGENT_DEV: { file: 'agent-dev', state: 'active', agentTrigger: 'dev' }, + WORKFLOW_STORY: { file: 'workflow-story', state: 'active', workflowTrigger: 'story-dev' }, + }, + }); + + const result = collectManifestIntegrity(tmpDir); + + expect(result.entries).toHaveLength(2); + 
expect(result.entries[0].status).toBe('PASS'); + expect(result.entries[0].fileExists).toBe(true); + expect(result.entries[1].status).toBe('PASS'); + expect(result.orphanedFiles).toHaveLength(0); + }); + + test('returns FAIL for domains with missing files', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + + parseManifest.mockReturnValue({ + devmode: false, + globalExclude: [], + domains: { + AGENT_QA: { file: 'agent-qa', state: 'active' }, + }, + }); + + const result = collectManifestIntegrity(tmpDir); + + expect(result.entries).toHaveLength(1); + expect(result.entries[0].status).toBe('FAIL'); + expect(result.entries[0].fileExists).toBe(false); + expect(result.entries[0].domain).toBe('agent-qa'); + }); + + test('detects orphaned files not in manifest', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + + // Create files: one in manifest, one orphaned + writeFile(path.join(synapsePath, 'agent-dev'), 'rule content'); + writeFile(path.join(synapsePath, 'stale-domain'), 'old rule content'); + + parseManifest.mockReturnValue({ + devmode: false, + globalExclude: [], + domains: { + AGENT_DEV: { file: 'agent-dev', state: 'active' }, + }, + }); + + const result = collectManifestIntegrity(tmpDir); + + expect(result.orphanedFiles).toContain('stale-domain'); + expect(result.orphanedFiles).not.toContain('agent-dev'); + }); + + test('skips manifest file and dotfiles when detecting orphans', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + + writeFile(path.join(synapsePath, 'manifest'), 'AGENT_DEV_STATE=active'); + writeFile(path.join(synapsePath, '.gitignore'), '*'); + writeFile(path.join(synapsePath, 'agent-dev'), 'rules'); + + parseManifest.mockReturnValue({ + devmode: false, + globalExclude: [], + domains: { + AGENT_DEV: { file: 'agent-dev', state: 'active' }, + }, + }); + + const result = 
collectManifestIntegrity(tmpDir); + + expect(result.orphanedFiles).toHaveLength(0); + }); + + test('includes trigger and state info in inManifest field', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + writeFile(path.join(synapsePath, 'agent-dev'), 'rules'); + + parseManifest.mockReturnValue({ + devmode: false, + globalExclude: [], + domains: { + AGENT_DEV: { + file: 'agent-dev', + state: 'active', + agentTrigger: 'dev', + alwaysOn: true, + }, + }, + }); + + const result = collectManifestIntegrity(tmpDir); + + expect(result.entries[0].inManifest).toContain('active'); + expect(result.entries[0].inManifest).toContain('trigger=dev'); + expect(result.entries[0].inManifest).toContain('ALWAYS_ON'); + }); + + test('handles empty manifest gracefully', () => { + const synapsePath = path.join(tmpDir, '.synapse'); + fs.mkdirSync(synapsePath, { recursive: true }); + + parseManifest.mockReturnValue({ + devmode: false, + globalExclude: [], + domains: {}, + }); + + const result = collectManifestIntegrity(tmpDir); + + expect(result.entries).toHaveLength(0); + expect(result.orphanedFiles).toHaveLength(0); + }); +}); + +// --------------------------------------------------------------------------- +// pipeline-collector +// --------------------------------------------------------------------------- + +describe('pipeline-collector: collectPipelineSimulation', () => { + test('returns FRESH bracket for promptCount=0', () => { + const result = collectPipelineSimulation(0, null, { domains: {} }); + + expect(result.bracket).toBe('FRESH'); + expect(result.contextPercent).toBe(100); + }); + + test('reports all 8 layers (L0-L7)', () => { + const result = collectPipelineSimulation(0, null, { domains: {} }); + + expect(result.layers).toHaveLength(8); + expect(result.layers[0].layer).toBe('L0 Constitution'); + expect(result.layers[7].layer).toBe('L7 Star-Command'); + }); + + test('FRESH bracket: only L0, L1, L2, L7 are ACTIVE', () 
=> { + const result = collectPipelineSimulation(0, null, { domains: {} }); + + const activeIndices = [0, 1, 2, 7]; + const skipIndices = [3, 4, 5, 6]; + + for (const i of activeIndices) { + expect(result.layers[i].expected).toContain('ACTIVE'); + } + for (const i of skipIndices) { + expect(result.layers[i].expected).toContain('SKIP'); + } + }); + + test('MODERATE bracket: all 8 layers ACTIVE', () => { + // promptCount=60 => usedTokens=90000 => percent=55 => MODERATE + const result = collectPipelineSimulation(60, null, { domains: {} }); + + expect(result.bracket).toBe('MODERATE'); + for (const layer of result.layers) { + expect(layer.expected).toContain('ACTIVE'); + } + }); + + test('L2 WARN when agent has no matching domain', () => { + const manifest = { + domains: { + AGENT_QA: { file: 'agent-qa', agentTrigger: 'qa' }, + }, + }; + + const result = collectPipelineSimulation(0, 'dev', manifest); + + const l2 = result.layers[2]; + expect(l2.status).toBe('WARN'); + expect(l2.expected).toContain('no domain for dev'); + }); + + test('L2 PASS when agent has matching domain', () => { + const manifest = { + domains: { + AGENT_DEV: { file: 'agent-dev', agentTrigger: 'dev' }, + }, + }; + + const result = collectPipelineSimulation(0, 'dev', manifest); + + const l2 = result.layers[2]; + expect(l2.status).toBe('PASS'); + expect(l2.expected).toContain('agent: dev'); + }); + + test('CRITICAL bracket for very high prompt count', () => { + // promptCount=120 => usedTokens=180000 => percent=10 => CRITICAL + const result = collectPipelineSimulation(120, null, { domains: {} }); + + expect(result.bracket).toBe('CRITICAL'); + expect(result.contextPercent).toBeLessThanOrEqual(25); + }); + + test('L2 check not applied when no activeAgentId', () => { + const manifest = { + domains: { + AGENT_DEV: { file: 'agent-dev', agentTrigger: 'dev' }, + }, + }; + + const result = collectPipelineSimulation(0, null, manifest); + + const l2 = result.layers[2]; + expect(l2.status).toBe('PASS'); + // No 
agent-specific detail when activeAgentId is null + expect(l2.expected).not.toContain('agent:'); + }); + + test('handles null manifest gracefully', () => { + const result = collectPipelineSimulation(0, 'dev', null); + + const l2 = result.layers[2]; + expect(l2.status).toBe('WARN'); + expect(l2.expected).toContain('no domain for dev'); + }); +}); + +// --------------------------------------------------------------------------- +// uap-collector +// --------------------------------------------------------------------------- + +describe('uap-collector: collectUapBridgeStatus', () => { + beforeEach(() => { + tmpDir = createTmpDir(); + }); + + afterEach(() => { + fs.rmSync(tmpDir, { recursive: true, force: true }); + }); + + test('returns FAIL when _active-agent.json missing', () => { + const result = collectUapBridgeStatus(tmpDir); + + expect(result.checks).toBeDefined(); + const existsCheck = result.checks.find((c) => c.name === '_active-agent.json exists'); + expect(existsCheck.status).toBe('FAIL'); + expect(existsCheck.detail).toContain('not found'); + + // Should also have SKIP for active_agent matches + const matchCheck = result.checks.find((c) => c.name === 'active_agent matches'); + expect(matchCheck.status).toBe('SKIP'); + }); + + test('returns PASS when bridge file is valid with id field', () => { + const bridgePath = path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'); + writeJson(bridgePath, { + id: 'dev', + activation_quality: 'full', + source: 'uap', + activated_at: new Date().toISOString(), + }); + + const result = collectUapBridgeStatus(tmpDir); + + const existsCheck = result.checks.find((c) => c.name === '_active-agent.json exists'); + expect(existsCheck.status).toBe('PASS'); + + const matchCheck = result.checks.find((c) => c.name === 'active_agent matches'); + expect(matchCheck.status).toBe('PASS'); + expect(matchCheck.detail).toContain('Agent: dev'); + expect(matchCheck.detail).toContain('quality: full'); + }); + + test('returns WARN for 
stale bridge (>60min old)', () => { + const staleTime = new Date(Date.now() - 90 * 60 * 1000); // 90 minutes ago + const bridgePath = path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'); + writeJson(bridgePath, { + id: 'pm', + activation_quality: 'full', + activated_at: staleTime.toISOString(), + }); + + const result = collectUapBridgeStatus(tmpDir); + + const freshnessCheck = result.checks.find((c) => c.name === 'Bridge freshness'); + expect(freshnessCheck.status).toBe('WARN'); + expect(freshnessCheck.detail).toContain('stale'); + }); + + test('returns PASS for fresh bridge (<60min old)', () => { + const freshTime = new Date(Date.now() - 5 * 60 * 1000); // 5 minutes ago + const bridgePath = path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'); + writeJson(bridgePath, { + id: 'dev', + activated_at: freshTime.toISOString(), + }); + + const result = collectUapBridgeStatus(tmpDir); + + const freshnessCheck = result.checks.find((c) => c.name === 'Bridge freshness'); + expect(freshnessCheck.status).toBe('PASS'); + expect(freshnessCheck.detail).not.toContain('stale'); + }); + + test('returns ERROR when bridge file is invalid JSON', () => { + const bridgePath = path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'); + writeFile(bridgePath, '{ not valid json !!!'); + + const result = collectUapBridgeStatus(tmpDir); + + const existsCheck = result.checks.find((c) => c.name === '_active-agent.json exists'); + expect(existsCheck.status).toBe('PASS'); // File exists, just invalid + + const matchCheck = result.checks.find((c) => c.name === 'active_agent matches'); + expect(matchCheck.status).toBe('ERROR'); + expect(matchCheck.detail).toContain('Failed to parse'); + }); + + test('returns FAIL when bridge file has no id field', () => { + const bridgePath = path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'); + writeJson(bridgePath, { + activation_quality: 'full', + source: 'uap', + }); + + const result = collectUapBridgeStatus(tmpDir); + 
+ const matchCheck = result.checks.find((c) => c.name === 'active_agent matches'); + expect(matchCheck.status).toBe('FAIL'); + expect(matchCheck.detail).toContain('id field is missing'); + }); + + test('no freshness check when activated_at is missing', () => { + const bridgePath = path.join(tmpDir, '.synapse', 'sessions', '_active-agent.json'); + writeJson(bridgePath, { + id: 'dev', + activation_quality: 'full', + }); + + const result = collectUapBridgeStatus(tmpDir); + + const freshnessCheck = result.checks.find((c) => c.name === 'Bridge freshness'); + expect(freshnessCheck).toBeUndefined(); + // Should only have 2 checks: exists + matches + expect(result.checks).toHaveLength(2); + }); +}); + +``` + +================================================== +📄 tests/synapse/e2e/full-pipeline.e2e.test.js +================================================== +```js +/** + * SYNAPSE E2E: Full Pipeline + * + * End-to-end tests for the SYNAPSE context engine pipeline. + * Uses REAL .synapse/ files at project root — no mocks. + * + * @module tests/synapse/e2e/full-pipeline.e2e.test + */ + +const path = require('path'); +const fs = require('fs'); + +const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..'); +const SYNAPSE_PATH = path.join(PROJECT_ROOT, '.synapse'); +const MANIFEST_PATH = path.join(SYNAPSE_PATH, 'manifest'); + +// Guard: skip entire suite if .synapse/ is missing +const synapseExists = fs.existsSync(SYNAPSE_PATH) && fs.existsSync(MANIFEST_PATH); + +const { SynapseEngine } = require(path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'engine.js')); +const { parseManifest } = require(path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'domain', 'domain-loader.js')); + +/** + * Build a default session object for testing. 
+ * + * @param {object} [overrides] - Fields to override + * @returns {object} Session object matching SYN-2 schema + */ +function buildSession(overrides = {}) { + return { + prompt_count: 5, + active_agent: { id: 'dev', activated_at: new Date().toISOString() }, + active_workflow: null, + active_squad: null, + active_task: null, + context: { + last_bracket: 'FRESH', + last_tokens_used: 0, + last_context_percent: 96, + }, + ...overrides, + }; +} + +const describeIfSynapse = synapseExists ? describe : describe.skip; + +describeIfSynapse('SYNAPSE E2E: Full Pipeline', () => { + /** @type {object} */ + let manifest; + + /** @type {SynapseEngine} */ + let engine; + + /** @type {SynapseEngine} */ + let engineDevmode; + + beforeAll(() => { + manifest = parseManifest(MANIFEST_PATH); + + engine = new SynapseEngine(SYNAPSE_PATH, { + manifest, + devmode: false, + }); + + engineDevmode = new SynapseEngine(SYNAPSE_PATH, { + manifest, + devmode: true, + }); + }); + + // ----------------------------------------------------------------------- + // 1. Engine processes prompt and returns xml string + // ----------------------------------------------------------------------- + test('processes a prompt and returns an object with xml string and metrics', async () => { + const session = buildSession(); + const result = await engine.process('Implement the login feature', session); + + expect(result).toBeDefined(); + expect(typeof result.xml).toBe('string'); + expect(result.xml.length).toBeGreaterThan(0); + expect(result.metrics).toBeDefined(); + expect(typeof result.metrics).toBe('object'); + }); + + // ----------------------------------------------------------------------- + // 2. 
XML contains <synapse-rules> wrapper tags
+  // -----------------------------------------------------------------------
+  test('XML output is wrapped in <synapse-rules> tags', async () => {
+    const session = buildSession();
+    const { xml } = await engine.process('Build the dashboard component', session);
+
+    expect(xml).toMatch(/^<synapse-rules/);
+    expect(xml).toMatch(/<\/synapse-rules>$/);
+  });
+
+  // -----------------------------------------------------------------------
+  // 3. XML contains CONTEXT BRACKET section
+  // -----------------------------------------------------------------------
+  test('XML contains CONTEXT BRACKET section with bracket name and percentage', async () => {
+    const session = buildSession({ prompt_count: 5 });
+    const { xml } = await engine.process('Create a new service', session);
+
+    expect(xml).toContain('[CONTEXT BRACKET]');
+    expect(xml).toContain('CONTEXT BRACKET:');
+    // With prompt_count=5, contextPercent = 100 - (5*1500/200000*100) = 96.25 -> FRESH
+    expect(xml).toContain('[FRESH]');
+  });
+
+  // -----------------------------------------------------------------------
+  // 4. XML contains CONSTITUTION section (NON-NEGOTIABLE)
+  // -----------------------------------------------------------------------
+  test('XML contains CONSTITUTION section marked as NON-NEGOTIABLE', async () => {
+    const session = buildSession();
+    const { xml } = await engine.process('Refactor the auth module', session);
+
+    // The constitution layer is ALWAYS_ON and NON_NEGOTIABLE per manifest
+    expect(xml).toContain('[CONSTITUTION]');
+    expect(xml).toContain('NON-NEGOTIABLE');
+  });
+
+  // -----------------------------------------------------------------------
+  // 5. 
Metrics contains expected structure + // ----------------------------------------------------------------------- + test('metrics object contains total_ms, layers_loaded, total_rules, and per_layer', async () => { + const session = buildSession(); + const { metrics } = await engine.process('Add unit tests', session); + + expect(metrics).toHaveProperty('total_ms'); + expect(typeof metrics.total_ms).toBe('number'); + expect(metrics.total_ms).toBeGreaterThanOrEqual(0); + + expect(metrics).toHaveProperty('layers_loaded'); + expect(typeof metrics.layers_loaded).toBe('number'); + + expect(metrics).toHaveProperty('layers_skipped'); + expect(typeof metrics.layers_skipped).toBe('number'); + + expect(metrics).toHaveProperty('layers_errored'); + expect(typeof metrics.layers_errored).toBe('number'); + + expect(metrics).toHaveProperty('total_rules'); + expect(typeof metrics.total_rules).toBe('number'); + + expect(metrics).toHaveProperty('per_layer'); + expect(typeof metrics.per_layer).toBe('object'); + + // At minimum, some layers should have loaded + expect(metrics.layers_loaded).toBeGreaterThan(0); + }); + + // ----------------------------------------------------------------------- + // 6. Engine handles session with active agent (AGENT section) + // ----------------------------------------------------------------------- + test('includes ACTIVE AGENT section when session has an active agent', async () => { + const session = buildSession({ + active_agent: { id: 'dev', activated_at: new Date().toISOString() }, + }); + const { xml } = await engine.process('Fix the broken endpoint', session); + + // Agent layer (L2) should produce an ACTIVE AGENT section for @dev + expect(xml).toContain('[ACTIVE AGENT:'); + expect(xml).toContain('@dev'); + }); + + // ----------------------------------------------------------------------- + // 7. 
DEVMODE=true includes DEVMODE STATUS section + // ----------------------------------------------------------------------- + test('DEVMODE=true includes DEVMODE STATUS section in output', async () => { + // Use CRITICAL bracket (high token budget: 2500) so DEVMODE section is not truncated + const session = buildSession({ + prompt_count: 140, + context: { last_bracket: 'CRITICAL', last_tokens_used: 0, last_context_percent: 0 }, + }); + const { xml } = await engineDevmode.process('Debug the pipeline', session); + + expect(xml).toContain('[DEVMODE STATUS]'); + expect(xml).toContain('SYNAPSE DEVMODE'); + expect(xml).toContain('Pipeline Metrics:'); + }); + + // ----------------------------------------------------------------------- + // 8. DEVMODE=false does NOT include DEVMODE STATUS section + // ----------------------------------------------------------------------- + test('DEVMODE=false does NOT include DEVMODE STATUS section', async () => { + const session = buildSession(); + const { xml } = await engine.process('Implement feature flag', session); + + expect(xml).not.toContain('[DEVMODE STATUS]'); + expect(xml).not.toContain('SYNAPSE DEVMODE'); + }); + + // ----------------------------------------------------------------------- + // 9. 
Multiple consecutive calls produce consistent results + // ----------------------------------------------------------------------- + test('multiple consecutive calls produce consistent results', async () => { + const session = buildSession(); + const prompt = 'Create the notification service'; + + const result1 = await engine.process(prompt, session); + const result2 = await engine.process(prompt, session); + + // Both should have valid xml output + expect(result1.xml.length).toBeGreaterThan(0); + expect(result2.xml.length).toBeGreaterThan(0); + + // Same prompt + session should yield same bracket and sections + expect(result1.xml).toContain('[CONTEXT BRACKET]'); + expect(result2.xml).toContain('[CONTEXT BRACKET]'); + + // Both should contain the same bracket designation + const bracketMatch1 = result1.xml.match(/CONTEXT BRACKET: \[(\w+)\]/); + const bracketMatch2 = result2.xml.match(/CONTEXT BRACKET: \[(\w+)\]/); + expect(bracketMatch1).not.toBeNull(); + expect(bracketMatch2).not.toBeNull(); + expect(bracketMatch1[1]).toBe(bracketMatch2[1]); + + // Metrics structure should be consistent + expect(result1.metrics.layers_loaded).toBe(result2.metrics.layers_loaded); + }); + + // ----------------------------------------------------------------------- + // 10. 
Engine handles empty prompt gracefully + // ----------------------------------------------------------------------- + test('handles empty prompt gracefully without throwing', async () => { + const session = buildSession(); + + // Should not throw + const resultEmpty = await engine.process('', session); + expect(resultEmpty).toBeDefined(); + expect(typeof resultEmpty.xml).toBe('string'); + expect(resultEmpty.metrics).toBeDefined(); + + // Should also handle null/undefined prompt + const resultNull = await engine.process(null, session); + expect(resultNull).toBeDefined(); + expect(typeof resultNull.xml).toBe('string'); + expect(resultNull.metrics).toBeDefined(); + }); + + // ----------------------------------------------------------------------- + // 11. LOADED DOMAINS SUMMARY section is present (uses CRITICAL for higher token budget) + // ----------------------------------------------------------------------- + test('XML contains LOADED DOMAINS SUMMARY section', async () => { + // FRESH bracket has low token budget (800) which may truncate SUMMARY. + // Use CRITICAL bracket (2500 token budget) to ensure SUMMARY survives. + const session = buildSession({ + prompt_count: 140, + context: { last_bracket: 'CRITICAL', last_tokens_used: 0, last_context_percent: 0 }, + }); + const { xml } = await engine.process('Deploy the application', session); + + expect(xml).toContain('[LOADED DOMAINS SUMMARY]'); + expect(xml).toContain('LOADED DOMAINS:'); + }); + + // ----------------------------------------------------------------------- + // 12. 
CRITICAL bracket triggers HANDOFF WARNING + // ----------------------------------------------------------------------- + test('CRITICAL bracket includes HANDOFF WARNING section', async () => { + // prompt_count > 133 gives contextPercent < 0 which maps to CRITICAL + // contextPercent = 100 - (140 * 1500 / 200000 * 100) = 100 - 105 = -5 + const session = buildSession({ + prompt_count: 140, + context: { last_bracket: 'CRITICAL', last_tokens_used: 0, last_context_percent: 0 }, + }); + const { xml } = await engine.process('Save progress', session); + + expect(xml).toContain('[HANDOFF WARNING]'); + }); +}); + +``` + +================================================== +📄 tests/synapse/e2e/bracket-scenarios.e2e.test.js +================================================== +```js +/** + * SYNAPSE E2E: Bracket Scenarios + * + * Tests all 4 context brackets (FRESH, MODERATE, DEPLETED, CRITICAL) + * using REAL .synapse/ files from project root. No mocks. + * + * Context bracket formula: 100 - (promptCount * 1500 / 200000 * 100) + * prompt_count=0 -> 100% -> FRESH (>= 60%) + * prompt_count=60 -> 55% -> MODERATE (40-60%) + * prompt_count=90 -> 32.5% -> DEPLETED (25-40%) + * prompt_count=120 -> 10% -> CRITICAL (< 25%) + */ + +const path = require('path'); +const fs = require('fs'); + +const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..'); +const SYNAPSE_PATH = path.join(PROJECT_ROOT, '.synapse'); +const ENGINE_PATH = path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'engine.js'); + +const synapseExists = fs.existsSync(SYNAPSE_PATH); +const engineExists = fs.existsSync(ENGINE_PATH); + +const describeIfReady = (synapseExists && engineExists) ? describe : describe.skip; + +/** + * Build a session object for a given prompt count. 
+ * + * @param {number} promptCount - Number of prompts in the session + * @returns {object} Session object compatible with SynapseEngine.process() + */ +function makeSession(promptCount) { + return { + prompt_count: promptCount, + active_agent: { id: 'dev', activated_at: new Date().toISOString() }, + active_workflow: null, + active_squad: null, + active_task: null, + context: { + last_bracket: 'MODERATE', + last_tokens_used: 0, + last_context_percent: 55, + }, + }; +} + +describeIfReady('SYNAPSE E2E: Bracket Scenarios', () => { + /** @type {import('../../../.aios-core/core/synapse/engine').SynapseEngine} */ + let engine; + + beforeAll(() => { + const { SynapseEngine } = require(ENGINE_PATH); + const { parseManifest } = require( + path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'domain', 'domain-loader.js'), + ); + + const manifestPath = path.join(SYNAPSE_PATH, 'manifest'); + const manifest = parseManifest(manifestPath); + + engine = new SynapseEngine(SYNAPSE_PATH, { manifest, devmode: false }); + }); + + // ----------------------------------------------------------------------- + // FRESH bracket (prompt_count = 0 -> contextPercent = 100%) + // ----------------------------------------------------------------------- + + describe('FRESH bracket (prompt_count=0)', () => { + let result; + + beforeAll(async () => { + result = await engine.process('Hello, start a new session', makeSession(0)); + }); + + test('XML contains CONTEXT BRACKET with FRESH', () => { + expect(result.xml).toContain('[CONTEXT BRACKET]'); + expect(result.xml).toContain('FRESH'); + }); + + test('layers_loaded should show limited layers (constitution, global, agent)', () => { + // FRESH bracket activates layers [0,1,2,7] only + // Layers 3-6 (workflow, task, squad, keyword) should be skipped + const { layers_loaded, layers_skipped } = result.metrics; + // At minimum, some layers must load; not all 8 + expect(layers_loaded).toBeGreaterThan(0); + expect(layers_skipped).toBeGreaterThan(0); + // 
FRESH activates max 4 layers (L0,L1,L2,L7), so loaded <= 4 + expect(layers_loaded).toBeLessThanOrEqual(4); + }); + }); + + // ----------------------------------------------------------------------- + // MODERATE bracket (prompt_count = 60 -> contextPercent = 55%) + // ----------------------------------------------------------------------- + + describe('MODERATE bracket (prompt_count=60)', () => { + let result; + + beforeAll(async () => { + result = await engine.process('Continue working on the feature', makeSession(60)); + }); + + test('XML contains MODERATE bracket', () => { + expect(result.xml).toContain('[CONTEXT BRACKET]'); + expect(result.xml).toContain('MODERATE'); + }); + + test('all layers should be active (more layers_loaded than FRESH)', async () => { + // MODERATE activates all 8 layers [0-7] + const freshResult = await engine.process('test', makeSession(0)); + const freshLoaded = freshResult.metrics.layers_loaded; + const moderateLoaded = result.metrics.layers_loaded; + + // MODERATE should load at least as many layers as FRESH, + // and typically more since all 8 are active + expect(moderateLoaded).toBeGreaterThanOrEqual(freshLoaded); + // MODERATE skips fewer layers than FRESH + expect(result.metrics.layers_skipped).toBeLessThanOrEqual(freshResult.metrics.layers_skipped); + }); + }); + + // ----------------------------------------------------------------------- + // DEPLETED bracket (prompt_count = 90 -> contextPercent = 32.5%) + // ----------------------------------------------------------------------- + + describe('DEPLETED bracket (prompt_count=90)', () => { + let result; + + beforeAll(async () => { + result = await engine.process('We need to wrap up soon', makeSession(90)); + }); + + test('XML contains DEPLETED bracket', () => { + expect(result.xml).toContain('[CONTEXT BRACKET]'); + expect(result.xml).toContain('DEPLETED'); + }); + + test('memory hints layer may be attempted', () => { + // DEPLETED bracket enables memoryHints=true + // The memory 
bridge is feature-gated (SYN-10) so hints may or may not appear, + // but the engine should attempt the memory path. + // Verify the bracket is correct and XML is non-empty. + expect(result.xml).toBeTruthy(); + expect(result.xml.length).toBeGreaterThan(0); + // If memory hints are present, they should be in the expected format + if (result.xml.includes('[MEMORY HINTS]')) { + expect(result.xml).toMatch(/\[MEMORY HINTS\]/); + } + }); + }); + + // ----------------------------------------------------------------------- + // CRITICAL bracket (prompt_count = 120 -> contextPercent = 10%) + // ----------------------------------------------------------------------- + + describe('CRITICAL bracket (prompt_count=120)', () => { + let result; + + beforeAll(async () => { + result = await engine.process('Final prompt before handoff', makeSession(120)); + }); + + test('XML contains CRITICAL bracket', () => { + expect(result.xml).toContain('[CONTEXT BRACKET]'); + expect(result.xml).toContain('CRITICAL'); + }); + + test('handoff warning present in XML', () => { + expect(result.xml).toContain('[HANDOFF WARNING]'); + }); + }); + + // ----------------------------------------------------------------------- + // Bracket transitions + // ----------------------------------------------------------------------- + + describe('Bracket transitions', () => { + test('increasing prompt_count changes bracket in output', async () => { + const brackets = []; + const promptCounts = [0, 60, 90, 120]; + + for (const count of promptCounts) { + const res = await engine.process(`Prompt at count ${count}`, makeSession(count)); + // Extract bracket name from CONTEXT BRACKET line + const match = res.xml.match(/CONTEXT BRACKET:\s*\[(\w+)\]/); + if (match) { + brackets.push(match[1]); + } + } + + expect(brackets).toEqual(['FRESH', 'MODERATE', 'DEPLETED', 'CRITICAL']); + }); + }); +}); + +``` + +================================================== +📄 tests/synapse/e2e/regression-guards.e2e.test.js 
==================================================
```js
/**
 * SYNAPSE E2E: Regression Guards
 *
 * Performance assertions that enforce pipeline performance targets.
 * Uses REAL .synapse/ files — runs 50 iterations to get reliable p95.
 *
 * Targets (from EPIC-SYN-INDEX):
 *   Pipeline total p95:   <100ms (hard limit)
 *   Individual layer p95: <20ms (hard limit, L0/L7: <10ms)
 *   Startup p95:          <10ms (hard limit)
 *   Session I/O p95:      <15ms (hard limit)
 *
 * @module tests/synapse/e2e/regression-guards.e2e.test
 */

const path = require('path');
const fs = require('fs');
const { performance } = require('perf_hooks');

const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..');
const SYNAPSE_PATH = path.join(PROJECT_ROOT, '.synapse');
const MANIFEST_PATH = path.join(SYNAPSE_PATH, 'manifest');

const synapseExists = fs.existsSync(SYNAPSE_PATH) && fs.existsSync(MANIFEST_PATH);

const { SynapseEngine } = require(
  path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'engine.js'),
);
const { parseManifest } = require(
  path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'domain', 'domain-loader.js'),
);
const { loadSession } = require(
  path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'session', 'session-manager.js'),
);

const ITERATIONS = 50;
const WARMUP = 5;

/**
 * Calculate a percentile from a sorted array.
 */
function percentile(sorted, p) {
  if (sorted.length === 0) return 0;
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

const describeIfSynapse = synapseExists ? 
describe : describe.skip; + +describeIfSynapse('SYNAPSE E2E: Regression Guards', () => { + let manifest; + let engine; + const pipelineDurations = []; + const startupDurations = []; + const sessionIODurations = []; + const layerDurations = {}; + + const session = { + prompt_count: 60, + active_agent: { id: 'dev', activated_at: new Date().toISOString() }, + active_workflow: null, + active_squad: null, + active_task: null, + context: { last_bracket: 'MODERATE', last_tokens_used: 0, last_context_percent: 55 }, + }; + + beforeAll(async () => { + manifest = parseManifest(MANIFEST_PATH); + + // Warm-up + const warmEngine = new SynapseEngine(SYNAPSE_PATH, { manifest, devmode: false }); + for (let i = 0; i < WARMUP; i++) { + await warmEngine.process('warm up prompt', session); + } + + // Measure startup multiple times for statistical significance + for (let s = 0; s < ITERATIONS; s++) { + const s0 = performance.now(); + const tempEngine = new SynapseEngine(SYNAPSE_PATH, { manifest, devmode: false }); + startupDurations.push(performance.now() - s0); + if (s === 0) engine = tempEngine; // keep first instance for pipeline tests + } + + // Measured iterations + for (let i = 0; i < ITERATIONS; i++) { + + // Session I/O measurement + const sIO0 = performance.now(); + const sessionsDir = path.join(SYNAPSE_PATH, 'sessions'); + loadSession('perf-guard-session', sessionsDir); + sessionIODurations.push(performance.now() - sIO0); + + // Pipeline measurement + const p0 = performance.now(); + const result = await engine.process('Implement the user auth feature', session); + pipelineDurations.push(performance.now() - p0); + + // Per-layer timings + if (result && result.metrics && result.metrics.per_layer) { + for (const [name, info] of Object.entries(result.metrics.per_layer)) { + if (info.duration != null) { + if (!layerDurations[name]) layerDurations[name] = []; + layerDurations[name].push(info.duration); + } + } + } + } + }, 30000); + + // 
-----------------------------------------------------------------------
  // Pipeline p95 hard limit: <100ms
  // -----------------------------------------------------------------------
  test('pipeline p95 < 100ms (hard limit)', () => {
    const sorted = [...pipelineDurations].sort((a, b) => a - b);
    const p95 = percentile(sorted, 95);
    expect(p95).toBeLessThan(100);
  });

  // -----------------------------------------------------------------------
  // Pipeline p95 target: <70ms (enforced; warning logs the measured value)
  // -----------------------------------------------------------------------
  test('pipeline p95 should be within target (<70ms)', () => {
    const sorted = [...pipelineDurations].sort((a, b) => a - b);
    const p95 = percentile(sorted, 95);
    if (p95 >= 70) {
      console.warn(`[WARN] Pipeline p95 (${p95.toFixed(2)}ms) approaching hard limit (target: <70ms)`);
    }
    // Enforce the 70ms target; the warning above records the measured value when it fails
    expect(p95).toBeLessThan(70);
  });

  // -----------------------------------------------------------------------
  // Individual layer p95: <20ms (hard limit)
  // -----------------------------------------------------------------------
  test('each layer p95 < 20ms (hard limit)', () => {
    for (const [name, durations] of Object.entries(layerDurations)) {
      const sorted = [...durations].sort((a, b) => a - b);
      const p95 = percentile(sorted, 95);
      expect(p95).toBeLessThan(20);
    }
  });

  // -----------------------------------------------------------------------
  // Edge layers (constitution, star-command) p95: <10ms (hard limit)
  // -----------------------------------------------------------------------
  test('edge layers (L0/L7) p95 < 10ms (hard limit)', () => {
    const edgeLayers = ['constitution', 'star-command'];
    for (const name of edgeLayers) {
      if (layerDurations[name]) {
        const sorted = [...layerDurations[name]].sort((a, b) => a - b);
        const p95 = percentile(sorted, 95);
        
expect(p95).toBeLessThan(10); + } + } + }); + + // ----------------------------------------------------------------------- + // Startup p95: <10ms (hard limit) + // ----------------------------------------------------------------------- + test('startup p95 < 10ms (hard limit)', () => { + const sorted = [...startupDurations].sort((a, b) => a - b); + const p95 = percentile(sorted, 95); + expect(p95).toBeLessThan(10); + }); + + // ----------------------------------------------------------------------- + // Session I/O p95: <15ms (hard limit) + // ----------------------------------------------------------------------- + test('session I/O p95 < 15ms (hard limit)', () => { + const sorted = [...sessionIODurations].sort((a, b) => a - b); + const p95 = percentile(sorted, 95); + expect(p95).toBeLessThan(15); + }); + + // ----------------------------------------------------------------------- + // Total E2E test count regression guard + // ----------------------------------------------------------------------- + test('total synapse E2E test files >= 5 (coverage guard)', () => { + // Guards against accidental test file removal. The suite has 6 E2E files + // with 53 tests total. File count is the stable assertion here. 
+ const e2eDir = path.join(PROJECT_ROOT, 'tests', 'synapse', 'e2e'); + const testFiles = fs.readdirSync(e2eDir).filter(f => f.endsWith('.test.js')); + expect(testFiles.length).toBeGreaterThanOrEqual(5); + }); + + // ----------------------------------------------------------------------- + // Metrics structure guard + // ----------------------------------------------------------------------- + test('engine returns valid metrics structure', async () => { + const result = await engine.process('test metrics guard', session); + expect(result.metrics).toBeDefined(); + expect(typeof result.metrics.total_ms).toBe('number'); + expect(typeof result.metrics.layers_loaded).toBe('number'); + expect(typeof result.metrics.layers_skipped).toBe('number'); + expect(typeof result.metrics.layers_errored).toBe('number'); + expect(typeof result.metrics.total_rules).toBe('number'); + expect(result.metrics.per_layer).toBeDefined(); + }); +}); + +``` + +================================================== +📄 tests/synapse/e2e/hook-integration.e2e.test.js +================================================== +```js +/** + * SYNAPSE E2E: Hook Integration Tests + * + * End-to-end tests for the SYNAPSE hook entry point (synapse-engine.cjs). + * Tests the stdin/stdout JSON protocol by spawning the hook as a child process + * and validating real output against the actual project .synapse/ configuration. 
+ * + * @module tests/synapse/e2e/hook-integration.e2e + * @story SYN-12 - Performance Benchmarks + E2E Testing + */ + +const { execSync } = require('child_process'); +const path = require('path'); +const fs = require('fs'); +const os = require('os'); + +jest.setTimeout(30000); + +const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..'); +const HOOK_PATH = path.join(PROJECT_ROOT, '.claude', 'hooks', 'synapse-engine.cjs'); +const HOOK_EXISTS = fs.existsSync(HOOK_PATH); + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +/** + * Run the hook via execSync with given stdin data. + * Returns { stdout, exitCode } -- never throws on non-zero exit. + * + * @param {string} stdinData - Raw string to pipe to stdin + * @param {object} [opts] - Extra options + * @param {number} [opts.timeout=10000] - Timeout in ms + * @returns {{ stdout: string, exitCode: number }} + */ +function runHookSync(stdinData, opts = {}) { + const timeout = opts.timeout || 10000; + try { + const stdout = execSync(`node "${HOOK_PATH}"`, { + input: stdinData, + encoding: 'utf8', + timeout, + windowsHide: true, + stdio: ['pipe', 'pipe', 'pipe'], + }); + return { stdout: stdout || '', exitCode: 0 }; + } catch (err) { + // execSync throws on non-zero exit OR timeout + return { + stdout: (err.stdout || '').toString(), + exitCode: err.status != null ? err.status : 1, + }; + } +} + +/** + * Build a valid hook input JSON string. 
+ * @param {object} [overrides] - Fields to override + * @returns {string} Stringified JSON input + */ +function buildInput(overrides = {}) { + return JSON.stringify({ + sessionId: 'e2e-test-session', + cwd: PROJECT_ROOT, + prompt: 'test prompt', + ...overrides, + }); +} + +// --------------------------------------------------------------------------- +// Test Suite +// --------------------------------------------------------------------------- + +const describeIfHookExists = HOOK_EXISTS ? describe : describe.skip; + +describeIfHookExists('SYNAPSE E2E: Hook Integration', () => { + + // ======================================================================== + // 1. Hook produces valid JSON output with hookSpecificOutput key + // ======================================================================== + + test('hook produces valid JSON output with hookSpecificOutput key', () => { + const input = buildInput(); + const { stdout, exitCode } = runHookSync(input); + + expect(exitCode).toBe(0); + expect(stdout).toBeTruthy(); + + const result = JSON.parse(stdout); + expect(result).toHaveProperty('hookSpecificOutput'); + expect(result.hookSpecificOutput).toHaveProperty('additionalContext'); + expect(typeof result.hookSpecificOutput.additionalContext).toBe('string'); + }); + + // ======================================================================== + // 2. 
Hook output additionalContext is a string (may contain <synapse-rules>
  // XML when engine produces content, or empty string otherwise)
  // ========================================================================

  test('hook output additionalContext is a string conforming to expected format', () => {
    const input = buildInput();
    const { stdout, exitCode } = runHookSync(input);

    expect(exitCode).toBe(0);
    expect(stdout).toBeTruthy();

    const result = JSON.parse(stdout);
    const ctx = result.hookSpecificOutput.additionalContext;

    // additionalContext must be a string
    expect(typeof ctx).toBe('string');

    // When non-empty, it must be wrapped in <synapse-rules> tags
    if (ctx.length > 0) {
      expect(ctx).toContain('<synapse-rules>');
      expect(ctx).toContain('</synapse-rules>');
    }
  });

  // ========================================================================
  // 3. Hook with missing .synapse/ directory produces empty output
  // ========================================================================

  test('hook with missing .synapse/ directory produces empty output (silent exit)', () => {
    // Use a fresh temp dir as cwd -- guaranteed to NOT have a .synapse/ directory
    const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'synapse-e2e-no-synapse-'));
    try {
      const input = JSON.stringify({
        sessionId: 'e2e-no-synapse',
        cwd: tempDir,
        prompt: 'test prompt',
      });
      const { stdout, exitCode } = runHookSync(input);

      expect(exitCode).toBe(0);
      expect(stdout).toBe('');
    } finally {
      fs.rmSync(tempDir, { recursive: true, force: true });
    }
  });

  // ========================================================================
  // 4. 
Hook with invalid JSON input exits gracefully (no crash, exit 0) + // ======================================================================== + + test('hook with invalid JSON input exits gracefully (no crash, exit 0)', () => { + const { stdout, exitCode } = runHookSync('this is not valid json {{{'); + + // Hook catches JSON parse error and calls process.exit(0) + expect(exitCode).toBe(0); + expect(stdout).toBe(''); + }); + + // ======================================================================== + // 5. Hook with missing cwd exits gracefully + // ======================================================================== + + test('hook with missing cwd field exits gracefully', () => { + const input = JSON.stringify({ + sessionId: 'e2e-no-cwd', + prompt: 'test prompt', + // cwd intentionally omitted + }); + const { stdout, exitCode } = runHookSync(input); + + // Hook checks `if (!cwd) return;` and exits silently + expect(exitCode).toBe(0); + expect(stdout).toBe(''); + }); + + // ======================================================================== + // 6. HOOK_TIMEOUT_MS is 5000ms + // ======================================================================== + + test('HOOK_TIMEOUT_MS is 5000ms', () => { + const hookModule = require(HOOK_PATH); + expect(hookModule.HOOK_TIMEOUT_MS).toBe(5000); + }); + + // ======================================================================== + // 7. 
Hook output additionalContext contains CONSTITUTION section
  // (verifies via SynapseEngine directly since engine.process is async
  // and the hook integration path may yield empty due to async handling)
  // ========================================================================

  test('SynapseEngine produces CONSTITUTION content for the project .synapse/', async () => {
    const synapsePath = path.join(PROJECT_ROOT, '.synapse');
    if (!fs.existsSync(synapsePath)) {
      // Skip if .synapse/ does not exist in this environment
      return;
    }

    const { SynapseEngine } = require(
      path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'engine.js'),
    );
    const engine = new SynapseEngine(synapsePath);
    const result = await engine.process('test prompt', { prompt_count: 0 });

    // The formatter should produce XML with CONSTITUTION section from L0
    expect(result).toHaveProperty('xml');
    expect(typeof result.xml).toBe('string');

    if (result.xml.length > 0) {
      expect(result.xml).toContain('<synapse-rules>');
      expect(result.xml).toMatch(/CONSTITUTION/i);
    }
  });

  // ========================================================================
  // 8. Hook with empty stdin exits gracefully
  // ========================================================================

  test('hook with empty stdin exits gracefully', () => {
    const { stdout, exitCode } = runHookSync('');

    expect(exitCode).toBe(0);
    expect(stdout).toBe('');
  });

  // ========================================================================
  // 9. 
Hook output is a single well-formed JSON object (no trailing data) + // ======================================================================== + + test('hook output is a single well-formed JSON object (no trailing data)', () => { + const input = buildInput(); + const { stdout, exitCode } = runHookSync(input); + + expect(exitCode).toBe(0); + expect(stdout).toBeTruthy(); + + // Parse should succeed without leftover characters + const result = JSON.parse(stdout); + expect(typeof result).toBe('object'); + expect(result).not.toBeNull(); + + // Re-stringify and compare length to detect trailing data + const reparsed = JSON.stringify(result); + expect(stdout.trim()).toBe(reparsed); + }); +}); + +``` + +================================================== +📄 tests/synapse/e2e/agent-scenarios.e2e.test.js +================================================== +```js +/** + * SYNAPSE E2E: Agent Scenarios + * + * End-to-end tests for agent activation through the full SynapseEngine pipeline. + * Uses REAL .synapse/ domain files -- no mocks. + * + * @group e2e + */ + +const path = require('path'); +const fs = require('fs'); + +const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..'); +const SYNAPSE_DIR = path.join(PROJECT_ROOT, '.synapse'); +const MANIFEST_PATH = path.join(SYNAPSE_DIR, 'manifest'); + +const { SynapseEngine } = require(path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'engine.js')); +const { parseManifest } = require(path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'domain', 'domain-loader.js')); + +// --------------------------------------------------------------------------- +// Skip guard: .synapse/ must exist for E2E tests +// --------------------------------------------------------------------------- + +const synapseExists = fs.existsSync(SYNAPSE_DIR) && fs.existsSync(MANIFEST_PATH); + +const describeIfSynapse = synapseExists ? 
describe : describe.skip;

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/**
 * Build a minimal session object with the given active agent.
 *
 * @param {string|null} agentId - Agent identifier or null for no agent
 * @returns {object} Session compatible with SynapseEngine.process()
 */
function makeSession(agentId) {
  return {
    prompt_count: 5,
    active_agent: agentId ? { id: agentId, activated_at: new Date().toISOString() } : null,
    active_workflow: null,
    active_squad: null,
    active_task: null,
    context: { last_bracket: 'MODERATE', last_tokens_used: 0, last_context_percent: 55 },
  };
}

// ---------------------------------------------------------------------------
// Test Suite
// ---------------------------------------------------------------------------

describeIfSynapse('SYNAPSE E2E: Agent Scenarios', () => {
  /** @type {object} */
  let manifest;
  /** @type {SynapseEngine} */
  let engine;

  beforeAll(() => {
    manifest = parseManifest(MANIFEST_PATH);
    engine = new SynapseEngine(SYNAPSE_DIR, { manifest, devmode: false });
  });

  // -----------------------------------------------------------------------
  // 1. @dev activation
  // -----------------------------------------------------------------------
  it('activates @dev and includes agent section in output XML', async () => {
    const session = makeSession('dev');
    const { xml } = await engine.process('implement the feature', session);

    expect(xml).toContain('[ACTIVE AGENT: @dev]');
    expect(xml).toContain('<synapse-rules>');
  });

  // -----------------------------------------------------------------------
  // 2. 
@qa activation
  // -----------------------------------------------------------------------
  it('activates @qa and includes agent section in output XML', async () => {
    const session = makeSession('qa');
    const { xml } = await engine.process('run quality checks', session);

    expect(xml).toContain('[ACTIVE AGENT: @qa]');
    expect(xml).toContain('<synapse-rules>');
  });

  // -----------------------------------------------------------------------
  // 3. @devops activation
  // -----------------------------------------------------------------------
  it('activates @devops and includes agent section in output XML', async () => {
    const session = makeSession('devops');
    const { xml } = await engine.process('push to remote', session);

    expect(xml).toContain('[ACTIVE AGENT: @devops]');
    expect(xml).toContain('<synapse-rules>');
  });

  // -----------------------------------------------------------------------
  // 4. @architect activation
  // -----------------------------------------------------------------------
  it('activates @architect and includes agent section in output XML', async () => {
    const session = makeSession('architect');
    const { xml } = await engine.process('design the system', session);

    expect(xml).toContain('[ACTIVE AGENT: @architect]');
    expect(xml).toContain('<synapse-rules>');
  });

  // -----------------------------------------------------------------------
  // 5. Unknown agent -- graceful degradation
  // -----------------------------------------------------------------------
  it('handles unknown agent without crashing and still returns valid XML', async () => {
    const session = makeSession('nonexistent-agent-xyz');
    const { xml } = await engine.process('do something', session);

    expect(xml).toContain('<synapse-rules>');
    expect(xml).not.toContain('[ACTIVE AGENT: @nonexistent-agent-xyz]');
  });

  // -----------------------------------------------------------------------
  // 6. 
No agent (null) -- baseline output + // ----------------------------------------------------------------------- + it('produces valid output with no active agent (null)', async () => { + const session = makeSession(null); + const { xml, metrics } = await engine.process('hello world', session); + + expect(xml).toContain(''); + // Constitution (L0) and/or global (L1) should still produce rules + expect(metrics.total_rules).toBeGreaterThanOrEqual(0); + }); + + // ----------------------------------------------------------------------- + // 7. Agent switch -- different agents produce different sections + // ----------------------------------------------------------------------- + it('produces different agent sections for different agents', async () => { + const sessionDev = makeSession('dev'); + const sessionPm = makeSession('pm'); + + const [resultDev, resultPm] = await Promise.all([ + engine.process('write code', sessionDev), + engine.process('manage product', sessionPm), + ]); + + expect(resultDev.xml).toContain('[ACTIVE AGENT: @dev]'); + expect(resultPm.xml).toContain('[ACTIVE AGENT: @pm]'); + + // The agent-specific content should differ + expect(resultDev.xml).not.toEqual(resultPm.xml); + }); + + // ----------------------------------------------------------------------- + // 8. 
Agent layer produces rules in metrics + // ----------------------------------------------------------------------- + it('reports non-zero agent layer rules in metrics when agent is active', async () => { + const session = makeSession('dev'); + const { metrics } = await engine.process('build the feature', session); + + // The agent layer should have loaded and produced rules + const agentLayer = metrics.per_layer.agent; + if (agentLayer && agentLayer.status === 'ok') { + expect(agentLayer.rules).toBeGreaterThan(0); + } else { + // If agent layer was skipped due to bracket, that is acceptable in E2E + expect(agentLayer).toBeDefined(); + } + }); +}); + +``` + +================================================== +📄 tests/synapse/e2e/devmode-scenarios.e2e.test.js +================================================== +```js +/** + * SYNAPSE E2E: DEVMODE Scenarios + * + * End-to-end tests for the DEVMODE diagnostic output in the SYNAPSE context engine. + * Uses REAL .synapse/ files at project root -- no mocks. + * + * Validates that: + * - DEVMODE=false (default) produces NO diagnostic section + * - DEVMODE=true (via constructor or per-call override) includes the full + * [DEVMODE STATUS] section with bracket, layers, session, and pipeline metrics + * + * @module tests/synapse/e2e/devmode-scenarios.e2e.test + */ + +const path = require('path'); +const fs = require('fs'); + +const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..'); +const SYNAPSE_PATH = path.join(PROJECT_ROOT, '.synapse'); +const MANIFEST_PATH = path.join(SYNAPSE_PATH, 'manifest'); + +// Guard: skip entire suite if .synapse/ is missing +const synapseExists = fs.existsSync(SYNAPSE_PATH) && fs.existsSync(MANIFEST_PATH); + +const { SynapseEngine } = require(path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'engine.js')); +const { parseManifest } = require(path.join(PROJECT_ROOT, '.aios-core', 'core', 'synapse', 'domain', 'domain-loader.js')); + +/** + * Build a default session object for testing. 
+ * + * Uses prompt_count=60 to land in MODERATE bracket (~55% remaining, 1500 token + * budget) which has sufficient headroom for the DEVMODE section after token + * budget enforcement. FRESH bracket (prompt_count<=7) only allows 800 tokens, + * which is not enough for constitution + global rules + DEVMODE combined. + * + * @param {object} [overrides] - Fields to override + * @returns {object} Session object matching SYN-2 schema + */ +function buildSession(overrides = {}) { + return { + prompt_count: 60, + active_agent: { id: 'dev', activated_at: new Date().toISOString() }, + active_workflow: null, + active_squad: null, + active_task: null, + context: { + last_bracket: 'MODERATE', + last_tokens_used: 0, + last_context_percent: 55, + }, + ...overrides, + }; +} + +const describeIfSynapse = synapseExists ? describe : describe.skip; + +describeIfSynapse('SYNAPSE E2E: DEVMODE Scenarios', () => { + /** @type {object} */ + let manifest; + + /** @type {SynapseEngine} Engine with devmode OFF (default) */ + let engineDefault; + + /** @type {SynapseEngine} Engine with devmode ON via constructor */ + let engineDevmode; + + beforeAll(() => { + manifest = parseManifest(MANIFEST_PATH); + + engineDefault = new SynapseEngine(SYNAPSE_PATH, { + manifest, + devmode: false, + }); + + engineDevmode = new SynapseEngine(SYNAPSE_PATH, { + manifest, + devmode: true, + }); + }); + + // ----------------------------------------------------------------------- + // 1. 
DEVMODE=false (default): output does NOT contain [DEVMODE STATUS] + // ----------------------------------------------------------------------- + test('DEVMODE=false: output does NOT contain [DEVMODE STATUS]', async () => { + const session = buildSession(); + const result = await engineDefault.process('Implement user authentication', session); + + expect(result).toBeDefined(); + expect(typeof result.xml).toBe('string'); + expect(result.xml).not.toContain('[DEVMODE STATUS]'); + expect(result.xml).not.toContain('SYNAPSE DEVMODE'); + }); + + // ----------------------------------------------------------------------- + // 2. DEVMODE=true via constructor: output contains [DEVMODE STATUS] + // ----------------------------------------------------------------------- + test('DEVMODE=true via constructor: output contains [DEVMODE STATUS]', async () => { + const session = buildSession(); + const result = await engineDevmode.process('Implement user authentication', session); + + expect(result).toBeDefined(); + expect(typeof result.xml).toBe('string'); + expect(result.xml).toContain('[DEVMODE STATUS]'); + }); + + // ----------------------------------------------------------------------- + // 3. DEVMODE=true: output contains SYNAPSE DEVMODE text + // ----------------------------------------------------------------------- + test('DEVMODE=true: output contains SYNAPSE DEVMODE header text', async () => { + const session = buildSession(); + const result = await engineDevmode.process('Create the database schema', session); + + expect(result.xml).toContain('SYNAPSE DEVMODE'); + }); + + // ----------------------------------------------------------------------- + // 4. 
DEVMODE=true: output contains Layers Loaded section + // ----------------------------------------------------------------------- + test('DEVMODE=true: output contains Layers Loaded section', async () => { + const session = buildSession(); + const result = await engineDevmode.process('Refactor the API endpoints', session); + + expect(result.xml).toContain('Layers Loaded:'); + }); + + // ----------------------------------------------------------------------- + // 5. DEVMODE=true: output contains Pipeline Metrics section + // ----------------------------------------------------------------------- + test('DEVMODE=true: output contains Pipeline Metrics section', async () => { + const session = buildSession(); + const result = await engineDevmode.process('Add unit tests for the service layer', session); + + expect(result.xml).toContain('Pipeline Metrics:'); + }); + + // ----------------------------------------------------------------------- + // 6. DEVMODE=true: metrics object has valid structure + // ----------------------------------------------------------------------- + test('DEVMODE=true: metrics object has valid structure with expected fields', async () => { + const session = buildSession(); + const result = await engineDevmode.process('Optimize query performance', session); + + expect(result.metrics).toBeDefined(); + expect(typeof result.metrics.total_ms).toBe('number'); + expect(result.metrics.total_ms).toBeGreaterThanOrEqual(0); + expect(typeof result.metrics.layers_loaded).toBe('number'); + expect(typeof result.metrics.layers_skipped).toBe('number'); + expect(typeof result.metrics.layers_errored).toBe('number'); + expect(typeof result.metrics.total_rules).toBe('number'); + expect(result.metrics.per_layer).toBeDefined(); + expect(typeof result.metrics.per_layer).toBe('object'); + }); + + // ----------------------------------------------------------------------- + // 7. 
DEVMODE per-call override: processConfig { devmode: true } enables
+  //    DEVMODE even when engine was constructed without it
+  // -----------------------------------------------------------------------
+  test('per-call override: processConfig { devmode: true } shows DEVMODE section on non-devmode engine', async () => {
+    const session = buildSession();
+
+    // Confirm engine default does NOT produce DEVMODE output
+    const resultOff = await engineDefault.process('Setup CI pipeline', session);
+    expect(resultOff.xml).not.toContain('[DEVMODE STATUS]');
+
+    // Same engine, but with per-call devmode override
+    const resultOn = await engineDefault.process('Setup CI pipeline', session, { devmode: true });
+    expect(resultOn.xml).toContain('[DEVMODE STATUS]');
+    expect(resultOn.xml).toContain('SYNAPSE DEVMODE');
+    expect(resultOn.xml).toContain('Layers Loaded:');
+    expect(resultOn.xml).toContain('Pipeline Metrics:');
+  });
+});
+
+```
+
+==================================================
+📄 docs/core-architecture.md
+==================================================
+```md
+# AIOS Method: Core Architecture
+
+> 🌐 **EN** | [PT](./pt/core-architecture.md) | [ES](./es/core-architecture.md)
+
+---
+
+## 1. Overview
+
+The AIOS Method provides agentic modes, tasks, and templates that make helpful workflows repeatable, whether for agile agentic development or for expansion into very different domains. The core purpose of the project is to provide a structured yet flexible set of prompts, templates, and workflows that users can employ to guide AI agents (such as Gemini, Claude, or ChatGPT) through complex tasks, guided discussions, or other meaningful domain-specific flows in a predictable, high-quality manner.
+
+The system's core module facilitates a full development lifecycle tailored to the challenges of modern agentic AI tooling:
+
+1. **Ideation & Planning**: Brainstorming, market research, and creating project briefs.
+2. 
**Architecture & Design**: Defining system architecture and UI/UX specifications. +3. **Development Execution**: A cyclical workflow where a Scrum Master (SM) agent drafts stories with extremely specific context and a Developer (Dev) agent implements them one at a time. This process works for both new (Greenfield) and existing (Brownfield) projects. + +## 2. System Architecture Diagram + +The entire AIOS-Method ecosystem is designed around the installed `aios-core` directory, which acts as the brain of the operation. The `tools` directory provides the means to process and package this brain for different environments. + +```mermaid +graph TD + subgraph AIOS Method Project + subgraph Core Framework + A["aios-core"] + A --> B["agents"] + A --> C["agent-teams"] + A --> D["workflows"] + A --> E["templates"] + A --> F["tasks"] + A --> G["checklists"] + A --> H["data (KB)"] + end + + subgraph Tooling + I["tools/builders/web-builder.js"] + end + + subgraph Outputs + J["dist"] + end + + B -- defines dependencies for --> E + B -- defines dependencies for --> F + B -- defines dependencies for --> G + B -- defines dependencies for --> H + + C -- bundles --> B + I -- reads from --> A + I -- creates --> J + end + + subgraph Target Environments + K["IDE (Cursor, VS Code, etc.)"] + L["Web UI (Gemini, ChatGPT)"] + end + + B --> K + J --> L + + style A fill:#1a73e8,color:#fff + style I fill:#f9ab00,color:#fff + style J fill:#34a853,color:#fff +``` + +## 3. Core Components + +The `aios-core` directory contains all the definitions and resources that give the agents their capabilities. + +### 3.1. Agents (`aios-core/agents/`) + +- **Purpose**: These are the foundational building blocks of the system. Each markdown file (e.g., `aios-master.md`, `pm.md`, `dev.md`) defines the persona, capabilities, and dependencies of a single AI agent. +- **Structure**: An agent file contains a YAML header that specifies its role, persona, dependencies, and startup instructions. 
These dependencies are lists of tasks, templates, checklists, and data files that the agent is allowed to use. +- **Startup Instructions**: Agents can include startup sequences that load project-specific documentation from the `docs/` folder, such as coding standards, API specifications, or project structure documents. This provides immediate project context upon activation. +- **Document Integration**: Agents can reference and load documents from the project's `docs/` folder as part of tasks, workflows, or startup sequences. Users can also drag documents directly into chat interfaces to provide additional context. +- **Example**: The `aios-master` agent lists its dependencies, which tells the build tool which files to include in a web bundle and informs the agent of its own capabilities. + +### 3.2. Agent Teams (`aios-core/agent-teams/`) + +- **Purpose**: Team files (e.g., `team-all.yaml`) define collections of agents and workflows that are bundled together for a specific purpose, like "full-stack development" or "backend-only". This creates a larger, pre-packaged context for web UI environments. +- **Structure**: A team file lists the agents to include. It can use wildcards, such as `"*"` to include all agents. This allows for the creation of comprehensive bundles like `team-all`. + +### 3.3. Workflows (`aios-core/workflows/`) + +- **Purpose**: Workflows are YAML files (e.g., `greenfield-fullstack.yaml`) that define a prescribed sequence of steps and agent interactions for a specific project type. They act as a strategic guide for the user and the `aios-orchestrator` agent. +- **Structure**: A workflow defines sequences for both complex and simple projects, lists the agents involved at each step, the artifacts they create, and the conditions for moving from one step to the next. It often includes a Mermaid diagram for visualization. + +### 3.4. 
Reusable Resources (`templates`, `tasks`, `checklists`, `data`) + +- **Purpose**: These folders house the modular components that are dynamically loaded by agents based on their dependencies. + - **`templates/`**: Contains markdown templates for common documents like PRDs, architecture specifications, and user stories. + - **`tasks/`**: Defines the instructions for carrying out specific, repeatable actions like "shard-doc" or "create-next-story". + - **`checklists/`**: Provides quality assurance checklists for agents like the Product Owner (`po`) or Architect. + - **`data/`**: Contains the core knowledge base (`aios-kb.md`), technical preferences (`technical-preferences.md`), and other key data files. + +#### 3.4.1. Template Processing System + +A key architectural principle of AIOS is that templates are self-contained and interactive - they embed both the desired document output and the LLM instructions needed to work with users. This means that in many cases, no separate task is needed for document creation, as the template itself contains all the processing logic. + +The AIOS framework employs a sophisticated template processing system orchestrated by three key components: + +- **`template-format.md`** (`aios-core/utils/`): Defines the foundational markup language used throughout all AIOS templates. This specification establishes syntax rules for variable substitution (`{{placeholders}}`), AI-only processing directives (`[[LLM: instructions]]`), and conditional logic blocks. Templates follow this format to ensure consistent processing across the system. + +- **`create-doc.md`** (`aios-core/tasks/`): Acts as the orchestration engine that manages the entire document generation workflow. This task coordinates template selection, manages user interaction modes (incremental vs. rapid generation), enforces template-format processing rules, and handles validation. It serves as the primary interface between users and the template system. 
+ +- **`advanced-elicitation.md`** (`aios-core/tasks/`): Provides an interactive refinement layer that can be embedded within templates through `[[LLM: instructions]]` blocks. This component offers 10 structured brainstorming actions, section-by-section review capabilities, and iterative improvement workflows to enhance content quality. + +The system maintains a clean separation of concerns: template markup is processed internally by AI agents but never exposed to users, while providing sophisticated AI processing capabilities through embedded intelligence within the templates themselves. + +#### 3.4.2. Technical Preferences System + +AIOS includes a personalization layer through the `technical-preferences.md` file in `aios-core/data/`. This file serves as a persistent technical profile that influences agent behavior across all projects. + +**Purpose and Benefits:** + +- **Consistency**: Ensures all agents reference the same technical preferences +- **Efficiency**: Eliminates the need to repeatedly specify preferred technologies +- **Personalization**: Agents provide recommendations aligned with user preferences +- **Learning**: Captures lessons learned and preferences that evolve over time + +**Content Structure:** +The file typically includes preferred technology stacks, design patterns, external services, coding standards, and anti-patterns to avoid. Agents automatically reference this file during planning and development to provide contextually appropriate suggestions. 
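+
+As a rough illustration, a `technical-preferences.md` file might look like the sketch below. The section names and technology choices are hypothetical examples showing the shape of the file, not a required schema or a recommended stack:
+
+```md
+<!-- Illustrative example only; record your own preferences here -->
+# Technical Preferences
+
+## Preferred Technology Stack
+- Language: TypeScript (strict mode)
+- Database: PostgreSQL
+
+## Design Patterns
+- Prefer composition over inheritance
+- Repository pattern for data access
+
+## Anti-Patterns to Avoid
+- No business logic in controllers
+- Avoid untyped configuration objects
+```
+
+Because the file is free-form markdown, the exact headings are up to you; agents read it as context rather than as a strict schema.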
+ +**Integration Points:** + +- Templates can reference technical preferences during document generation +- Agents suggest preferred technologies when appropriate for project requirements +- When preferences don't fit project needs, agents explain alternatives +- Web bundles can include preferences content for consistent behavior across platforms + +**Evolution Over Time:** +Users are encouraged to continuously update this file with discoveries from projects, adding both positive preferences and technologies to avoid, creating a personalized knowledge base that improves agent recommendations over time. + +## 4. The Build & Delivery Process + +The framework is designed for two primary environments: local IDEs and web-based AI chat interfaces. The `web-builder.js` script is the key to supporting the latter. + +### 4.1. Web Builder (`tools/builders/web-builder.js`) + +- **Purpose**: This Node.js script is responsible for creating the `.txt` bundles found in `dist`. +- **Process**: + 1. **Resolves Dependencies**: For a given agent or team, the script reads its definition file. + 2. It recursively finds all dependent resources (tasks, templates, etc.) that the agent/team needs. + 3. **Bundles Content**: It reads the content of all these files and concatenates them into a single, large text file, with clear separators indicating the original file path of each section. + 4. **Outputs Bundle**: The final `.txt` file is saved in the `dist` directory, ready to be uploaded to a web UI. + +### 4.2. Environment-Specific Usage + +- **For IDEs**: Users interact with the agents directly via their markdown files in `aios-core/agents/`. The IDE integration (for Cursor, Claude Code, etc.) knows how to call these agents. +- **For Web UIs**: Users upload a pre-built bundle from `dist`. This single file provides the AI with the context of the entire team and all their required tools and knowledge. + +## 5. AIOS Workflows + +### 5.1. 
The Planning Workflow + +Before development begins, AIOS follows a structured planning workflow that establishes the foundation for successful project execution: + +```mermaid +graph TD + A["Start: Project Idea"] --> B{"Optional: Analyst Brainstorming"} + B -->|Yes| C["Analyst: Market Research & Analysis"] + B -->|No| D["Create Project Brief"] + C --> D["Analyst: Create Project Brief"] + D --> E["PM: Create PRD from Brief"] + E --> F["Architect: Create Architecture from PRD"] + F --> G["PO: Run Master Checklist"] + G --> H{"Documents Aligned?"} + H -->|Yes| I["Planning Complete"] + H -->|No| J["PO: Update Epics & Stories"] + J --> K["Update PRD/Architecture as needed"] + K --> G + I --> L["📁 Switch to IDE"] + L --> M["PO: Shard Documents"] + M --> N["Ready for SM/Dev Cycle"] + + style I fill:#34a853,color:#fff + style G fill:#f9ab00,color:#fff + style L fill:#1a73e8,color:#fff + style N fill:#34a853,color:#fff +``` + +**Key Planning Phases:** + +1. **Optional Analysis**: Analyst conducts market research and competitive analysis +2. **Project Brief**: Foundation document created by Analyst or user +3. **PRD Creation**: PM transforms brief into comprehensive product requirements +4. **Architecture Design**: Architect creates technical foundation based on PRD +5. **Validation & Alignment**: PO ensures all documents are consistent and complete +6. **Refinement**: Updates to epics, stories, and documents as needed +7. **Environment Transition**: Critical switch from web UI to IDE for development workflow +8. **Document Preparation**: PO shards large documents for development consumption + +**Workflow Orchestration**: The `aios-orchestrator` agent uses these workflow definitions to guide users through the complete process, ensuring proper transitions between planning (web UI) and development (IDE) phases. + +### 5.2. 
The Core Development Cycle + +Once the initial planning and architecture phases are complete, the project moves into a cyclical development workflow, as detailed in the `aios-kb.md`. This ensures a steady, sequential, and quality-controlled implementation process. + +```mermaid +graph TD + A["Start: Planning Artifacts Complete"] --> B["PO: Shard Epics"] + B --> C["PO: Shard Arch"] + C --> D["Development Phase"] + D --> E["Scrum Master: Drafts next story from sharded epic"] + E --> F{"User Approval"} + F -->|Approved| G["Dev: Implement Story"] + F -->|Needs Changes| E + G --> H["Dev: Complete story Tasks"] + H --> I["Dev: Mark Ready for Review"] + I --> J{"User Verification"} + J -->|Request QA Review| K["QA: Run review-story task"] + J -->|Approve Without QA| M["Mark Story as Done"] + K --> L{"QA Review Results"} + L -->|Needs Work| G + L -->|Approved| M["Mark Story as Done"] + J -->|Needs Fixes| G + M --> E + + style M fill:#34a853,color:#fff + style K fill:#f9ab00,color:#fff +``` + +This cycle continues, with the Scrum Master, Developer, and optionally QA agents working together. The QA agent provides senior developer review capabilities through the `review-story` task, offering code refactoring, quality improvements, and knowledge transfer. This ensures high code quality while maintaining development velocity. + +``` + +================================================== +📄 docs/ide-integration.md +================================================== +```md +# IDE Integration Guide + +> **EN** + +--- + +Guide for integrating AIOS with supported IDEs and AI development platforms. + +**Version:** 4.2.11 +**Last Updated:** 2026-02-16 + +--- + +## Compatibility Contract (AIOS 4.2.11) + +The IDE matrix is enforced by a versioned contract: + +- Contract file: `.aios-core/infrastructure/contracts/compatibility/aios-4.2.11.yaml` +- Validator: `npm run validate:parity` + +If matrix claims in this document diverge from validator results, parity validation fails. 
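+
+To make the contract idea concrete, a per-IDE entry in the compatibility file might look roughly like this. The field names below are assumptions for illustration; the authoritative schema is whatever `aios-4.2.11.yaml` itself defines:
+
+```yaml
+# Hypothetical shape of a compatibility contract entry -- not the real schema
+version: 4.2.11
+targets:
+  claude-code:
+    status: works        # fully recommended
+    auto_checks: full
+  cursor:
+    status: limited      # usable with the documented workaround
+    auto_checks: none
+```
+
+Running `npm run validate:parity` then checks claims like these against the matrix documented here, failing when they diverge.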
+ +--- + +## Supported IDEs + +AIOS supports multiple AI-powered development platforms. Choose the one that best fits your workflow. + +### Quick Status Matrix (AIOS 4.2.11) + +| IDE/CLI | Overall Status | How to Activate an Agent | Auto-Checks Before/After Actions | Workaround if Limited | +| --- | --- | --- | --- | --- | +| Claude Code | Works | `/agent-name` commands | Works (full) | -- | +| Gemini CLI | Works | `/aios-menu` then `/aios-` | Works (minor differences in event handling) | -- | +| Codex CLI | Limited | `/skills` then `aios-` | Limited (some checks need manual sync) | Run `npm run sync:ide:codex` and follow `/skills` flow | +| Cursor | Limited | `@agent` + synced rules | Not available | Follow synced rules and run validators manually (`npm run validate:parity`) | +| GitHub Copilot | Limited | chat modes + repo instructions | Not available | Use repo instructions and VS Code MCP config for context | +| AntiGravity | Limited | workflow-driven activation | Not available | Use generated workflows and run validators manually | + +Legend: +- `Works`: fully recommended for new users in AIOS 4.2.11. +- `Limited`: usable with the documented workaround. +- `Not available`: this IDE does not offer this capability; use the workaround instead. + +### What You Lose Without Full Auto-Checks + +Some IDEs run automatic checks before and after each action (e.g., validating context, enforcing rules). 
Where this is not available, you compensate manually: + +| IDE | Auto-Check Level | What Is Reduced | How to Compensate | +| --- | --- | --- | --- | +| Claude Code | Full | Nothing | Built-in checks handle everything | +| Gemini CLI | High | Minor timing differences in checks | Gemini native checks cover most scenarios | +| Codex CLI | Partial | Less automatic session tracking; some pre/post-action checks need manual trigger | Use `AGENTS.md` + `/skills` + sync/validation scripts | +| Cursor | None | No automatic pre/post-action checks; no automatic audit trail | Follow synced rules, use MCP for context, run validators | +| GitHub Copilot | None | Same as Cursor, plus more reliance on manual workflow | Use repo instructions, chat modes, VS Code MCP | +| AntiGravity | None | No automatic check equivalents | Use generated workflows and run validators | + +### Beginner Decision Guide + +If your goal is to get started as fast as possible: + +1. **Best option:** Use `Claude Code` or `Gemini CLI` -- they have the most automation and fewest manual steps. +2. **Good option:** Use `Codex CLI` if you prefer a terminal-first workflow and can follow the `/skills` activation flow. +3. **Usable with extra steps:** Use `Cursor`, `Copilot`, or `AntiGravity` -- they work but require more manual validation steps (see workarounds in the table above). + +### Practical Consequences by Capability + +- **Session tracking** (automatic start/end detection): + - Automatic on Claude Code and Gemini CLI. + - Manual or partial on Codex, Cursor, Copilot, and AntiGravity. +- **Pre/post-action guardrails** (checks that run before and after each tool use): + - Full on Claude Code and Gemini CLI. + - Partial on Codex CLI (run sync scripts to compensate). + - Not available on Cursor, Copilot, and AntiGravity (run validators manually). +- **Automatic audit trail** (record of what happened in each session): + - Richest on Claude Code and Gemini CLI. 
+ - Reduced on other IDEs (compensate with manual logging or validator output). + +--- + +## Setup Instructions + +### Claude Code + +**Recommendation Level:** Best AIOS integration + +```yaml +config_file: .claude/CLAUDE.md +agent_folder: .claude/commands/AIOS/agents +activation: /agent-name (slash commands) +format: full-markdown-yaml +mcp_support: native +special_features: + - Task tool for subagents + - Native MCP integration + - Hooks system (pre/post) + - Custom skills + - Memory persistence +``` + +**Setup:** + +1. AIOS automatically creates `.claude/` directory on init +2. Agents are available as slash commands: `/dev`, `/qa`, `/architect` +3. Configure MCP servers in `~/.claude.json` + +**Configuration:** + +```bash +# Sync all enabled IDE targets (including Claude) +npm run sync:ide + +# Verify setup +ls -la .claude/commands/AIOS/agents/ +``` + +--- + +### Codex CLI + +**Recommendation Level:** Best (terminal-first workflow) + +```yaml +config_file: AGENTS.md +agent_folder: .codex/agents +activation: terminal instructions +skills_folder: .codex/skills (source), ~/.codex/skills (Codex menu) +format: markdown +mcp_support: native via Codex tooling +special_features: + - AGENTS.md project instructions + - /skills activators (aios-) + - Strong CLI workflow support + - Easy integration with repository scripts + - Notify command plus emerging tool hooks in recent Codex releases +``` + +**Setup:** + +1. Keep `AGENTS.md` at repository root +2. Run `npm run sync:ide:codex` to sync auxiliary agent files +3. Run `npm run sync:skills:codex` to generate project-local skills in `.codex/skills` +4. Use `/skills` and choose `aios-architect`, `aios-dev`, etc. +5. 
Use `npm run sync:skills:codex:global` only when you explicitly want global installation
+
+**Configuration:**
+
+```bash
+# Sync Codex support files
+npm run sync:ide:codex
+npm run sync:skills:codex
+npm run validate:codex-sync
+npm run validate:codex-integration
+npm run validate:codex-skills
+
+# Verify setup
+ls -la AGENTS.md .codex/agents/ .codex/skills/
+```
+
+---
+
+### Cursor
+
+**Recommendation Level:** Limited (popular AI IDE; run validators manually)
+
+```yaml
+config_file: .cursor/rules.md
+agent_folder: .cursor/rules
+activation: @agent-name
+format: condensed-rules
+mcp_support: via configuration
+special_features:
+  - Composer integration
+  - Chat modes
+  - @codebase context
+  - Multi-file editing
+  - Subagents and cloud handoff support (latest Cursor releases)
+  - Long-running agent workflows (research preview)
+```
+
+**Setup:**
+
+1. AIOS creates `.cursor/` directory on init
+2. Agents activated with @mention: `@dev`, `@qa`
+3. Rules synchronized to `.cursor/rules/`
+
+**Configuration:**
+
+```bash
+# Sync Cursor only
+npm run sync:ide:cursor
+
+# Verify setup
+ls -la .cursor/rules/
+```
+
+**MCP Configuration (`.cursor/mcp.json`):**
+
+```json
+{
+  "mcpServers": {
+    "context7": {
+      "url": "https://mcp.context7.com/sse"
+    }
+  }
+}
+```
+
+---
+
+### GitHub Copilot
+
+**Recommendation Level:** Limited (GitHub integration)
+
+```yaml
+config_file: .github/copilot-instructions.md
+agent_folder: .github/agents
+activation: chat modes
+format: text
+mcp_support: via VS Code MCP config
+special_features:
+  - GitHub integration
+  - PR assistance
+  - Code review
+  - Works with repo instructions and VS Code MCP config
+```
+
+**Setup:**
+
+1. Enable GitHub Copilot in your repository
+2. AIOS creates `.github/copilot-instructions.md`
+3. 
Agent instructions synchronized + +**Configuration:** + +```bash +# Sync all enabled IDE targets +npm run sync:ide + +# Verify setup +cat .github/copilot-instructions.md +``` + +--- + +### AntiGravity + +**Recommendation Level:** Good (Google integration) + +```yaml +config_file: .antigravity/rules.md +config_json: .antigravity/antigravity.json +agent_folder: .agent/workflows +activation: workflow-based +format: cursor-style +mcp_support: native (Google) +special_features: + - Google Cloud integration + - Workflow system + - Native Firebase tools +``` + +**Setup:** + +1. AIOS creates `.antigravity/` directory +2. Configure Google Cloud credentials +3. Agents synchronized as workflows + +--- + +### Gemini CLI + +**Recommendation Level:** Good + +```yaml +config_file: .gemini/rules.md +agent_folder: .gemini/rules/AIOS/agents +activation: slash launcher commands +format: text +mcp_support: native +special_features: + - Google AI models + - CLI-based workflow + - Multimodal support + - Native hooks events and hook commands + - Native MCP server support + - Rapidly evolving command/tooling UX +``` + +**Setup:** + +1. Run installer flow selecting `gemini` in IDE selection (wizard path) +2. AIOS creates: + - `.gemini/rules.md` + - `.gemini/rules/AIOS/agents/*.md` + - `.gemini/commands/*.toml` (`/aios-menu`, `/aios-`) + - `.gemini/hooks/*.js` + - `.gemini/settings.json` (hooks enabled) +3. Validate integration: + +```bash +npm run sync:ide:gemini +npm run validate:gemini-sync +npm run validate:gemini-integration +``` + +4. Quick agent activation (recommended): + - `/aios-menu` to list shortcuts + - `/aios-dev`, `/aios-architect`, `/aios-qa`, etc. 
+ - `/aios-agent ` for generic launcher + +--- + +## Sync System + +### How Sync Works + +AIOS maintains a single source of truth for agent definitions and synchronizes them to all configured IDEs: + +``` +┌─────────────────────────────────────────────────────┐ +│ AIOS Core │ +│ .aios-core/development/agents/ (Source of Truth) │ +│ │ │ +│ ┌───────────┼───────────┐ │ +│ ▼ ▼ ▼ │ +│ .claude/ .codex/ .cursor/ │ +│ .antigravity/ .gemini/ │ +└─────────────────────────────────────────────────────┘ +``` + +### Sync Commands + +```bash +# Sync all IDE targets +npm run sync:ide + +# Sync only Gemini +npm run sync:ide:gemini +npm run sync:ide:github-copilot +npm run sync:ide:antigravity + +# Validate sync +npm run sync:ide:check +``` + +### Automatic Sync + +AIOS can be configured to automatically sync on agent changes: + +```yaml +# .aios-core/core/config/sync.yaml +auto_sync: + enabled: true + watch_paths: + - .aios-core/development/agents/ + platforms: + - claude + - codex + - github-copilot + - cursor + - gemini + - antigravity +``` + +--- + +## Troubleshooting + +### Agent Not Appearing in IDE + +```bash +# Verify agent exists in source +ls .aios-core/development/agents/ + +# Sync and validate +npm run sync:ide +npm run sync:ide:check + +# Check platform-specific directory +ls .cursor/rules/agents/ # Cursor +ls .claude/commands/AIOS/agents/ # Claude Code +ls .gemini/rules/AIOS/agents/ # Gemini CLI +``` + +### Sync Conflicts + +```bash +# Preview what would change +npm run sync:ide -- --dry-run + +# Backup before force sync +cp -r .cursor/rules/ .cursor/rules.backup/ +npm run sync:ide +``` + +### MCP Not Working + +```bash +# Check MCP status +aios mcp status + +# Verify MCP configuration for IDE +cat ~/.claude.json # For Claude Code +cat .cursor/mcp.json # For Cursor +``` + +### IDE-Specific Issues + +**Claude Code:** + +- Ensure `.claude/` is in project root +- Check hooks permissions: `chmod +x .claude/hooks/*.py` + +**Cursor:** + +- Restart Cursor after sync +- Check 
`.cursor/rules/` permissions

---

## Platform Decision Guide

Use this guide to choose the right platform:

```
Do you use Claude/Anthropic API?
├── Yes --> Claude Code (Best AIOS integration)
└── No
    └── Do you prefer VS Code?
        ├── Yes --> GitHub Copilot (Native GitHub features)
        └── No --> Want a dedicated AI IDE?
            ├── Yes --> Cursor (Most popular AI IDE)
            └── No --> Use Google Cloud?
                ├── Yes --> AntiGravity (Google integration)
                └── No --> Gemini CLI (Specialized)
```

---

## Migration Between IDEs

### From Cursor to Claude Code

```bash
# Export current rules
cp -r .cursor/rules/ ./rules-backup/

# Initialize Claude Code
npm run sync:ide

# Verify migration
diff -r ./rules-backup/ .claude/commands/AIOS/agents/
```

### From Claude Code to Cursor

```bash
# Sync to Cursor
npm run sync:ide:cursor

# Configure MCP (if needed)
# Copy MCP config to .cursor/mcp.json
```

---

## Related Documentation

- [Platform Guides](./platforms/README.md)
- [Claude Code Guide](./platforms/claude-code.md)
- [Cursor Guide](./platforms/cursor.md)
- [Agent Reference Guide](./agent-reference-guide.md)
- [MCP Global Setup](./guides/mcp-global-setup.md)

---

_Synkra AIOS IDE Integration Guide v4.2.11_

```

==================================================
📄 docs/git-workflow-guide.md
==================================================
```md
# AIOS Git Workflow Guide

> 🌐 **EN** | [PT](./pt/git-workflow-guide.md) | [ES](./es/git-workflow-guide.md)

---

_Story: 2.2-git-workflow-implementation.yaml_

## Table of Contents

- [Overview](#overview)
- [Defense in Depth Architecture](#defense-in-depth-architecture)
- [Layer 1: Pre-commit Validation](#layer-1-pre-commit-validation)
- [Layer 2: Pre-push Validation](#layer-2-pre-push-validation)
- [Layer 3: CI/CD 
Pipeline](#layer-3-cicd-pipeline) +- [Branch Protection](#branch-protection) +- [Daily Workflow](#daily-workflow) +- [Troubleshooting](#troubleshooting) +- [Performance Tips](#performance-tips) + +## Overview + +Synkra AIOS implements a **Defense in Depth** validation strategy with three progressive layers that catch issues early and ensure code quality before merge. + +### Why Three Layers? + +1. **Fast feedback** - Catch issues immediately during development +2. **Local validation** - No cloud dependency for basic checks +3. **Authoritative validation** - Final gate before merge +4. **Story consistency** - Ensure development aligns with stories + +### Architecture Diagram + +``` +┌─────────────────────────────────────────────────────────────┐ +│ Developer Workflow │ +└─────────────────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ Layer 1: Pre-commit Hook (Local - <5s) │ +│ ✓ ESLint (code quality) │ +│ ✓ TypeScript (type checking) │ +│ ✓ Cache enabled │ +└─────────────────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ Layer 2: Pre-push Hook (Local - <2s) │ +│ ✓ Story checkbox validation │ +│ ✓ Status consistency │ +│ ✓ Required sections │ +└─────────────────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ Layer 3: GitHub Actions CI (Cloud - 2-5min) │ +│ ✓ All lint/type checks │ +│ ✓ Full test suite │ +│ ✓ Code coverage (≥80%) │ +│ ✓ Story validation │ +│ ✓ Branch protection │ +└─────────────────────────────────────────────────────────────┘ + │ + ▼ + ┌──────────────┐ + │ Merge Ready │ + └──────────────┘ +``` + +## Defense in Depth Architecture + +### Layer 1: Pre-commit (Local - Fast) + +**Performance Target:** <5 seconds +**Trigger:** `git commit` +**Location:** `.husky/pre-commit` + +**What it validates:** + +- ESLint code quality +- 
TypeScript type checking +- Syntax errors +- Import issues + +**How it works:** + +```bash +# Triggered automatically on commit +git add . +git commit -m "feat: add feature" + +# Runs: +# 1. ESLint with caching (.eslintcache) +# 2. TypeScript incremental compilation (.tsbuildinfo) +``` + +**Benefits:** + +- ⚡ Fast feedback (<5s) +- 💾 Cached for speed +- 🔒 Prevents broken code commits +- 🚫 No invalid syntax in history + +### Layer 2: Pre-push (Local - Story Validation) + +**Performance Target:** <2 seconds +**Trigger:** `git push` +**Location:** `.husky/pre-push` + +**What it validates:** + +- Story checkbox completion vs status +- Required story sections present +- Status consistency +- Dev agent records + +**How it works:** + +```bash +# Triggered automatically on push +git push origin feature/my-feature + +# Validates all story files in docs/stories/ +``` + +**Validation Rules:** + +1. **Status Consistency:** + +```yaml +# ❌ Invalid: completed but tasks incomplete +status: "completed" +tasks: + - "[x] Task 1" + - "[ ] Task 2" # Error! + +# ✅ Valid: all tasks completed +status: "completed" +tasks: + - "[x] Task 1" + - "[x] Task 2" +``` + +2. **Required Sections:** + +- `id` +- `title` +- `description` +- `acceptance_criteria` +- `status` + +3. **Status Flow:** + +``` +ready → in progress → Ready for Review → completed +``` + +### Layer 3: CI/CD (Cloud - Authoritative) + +**Performance:** 2-5 minutes +**Trigger:** Push to any branch, PR creation +**Platform:** GitHub Actions +**Location:** `.github/workflows/ci.yml` + +**Jobs:** + +1. **ESLint** (`lint` job) + - Runs on clean environment + - No cache dependency + +2. **TypeScript** (`typecheck` job) + - Full type checking + - No incremental compilation + +3. **Tests** (`test` job) + - Full test suite + - Coverage reporting + - 80% threshold enforced + +4. **Story Validation** (`story-validation` job) + - All stories validated + - Status consistency checked + +5. 
**Validation Summary** (`validation-summary` job)
   - Aggregates all results
   - Blocks merge if any fail

**Performance Monitoring:**

- Optional performance job
- Measures validation times
- Informational only

## Layer 1: Pre-commit Validation

### Quick Reference

```bash
# Manual validation
npm run lint
npm run typecheck

# Auto-fix lint issues
npm run lint -- --fix

# Skip hook (NOT recommended)
git commit --no-verify
```

### ESLint Configuration

**File:** `.eslintrc.json`

```json
{
  "extends": ["eslint:recommended", "plugin:@typescript-eslint/recommended"],
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"]
}
```

**Key features:**

- TypeScript support
- Caching enabled via the `--cache` CLI flag (`cache`/`cacheLocation` are CLI options, not valid config keys)
- Warns on console.log
- Ignores unused vars with `_` prefix

### TypeScript Configuration

**File:** `tsconfig.json`

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "strict": true,
    "incremental": true,
    "tsBuildInfoFile": ".tsbuildinfo"
  }
}
```

**Key features:**

- ES2022 target
- Strict mode
- Incremental compilation
- CommonJS modules

### Performance Optimization

**Cache Files:**

- `.eslintcache` - ESLint results
- `.tsbuildinfo` - TypeScript incremental data

**First run:** ~10-15s (no cache)
**Subsequent runs:** <5s (cached)

**Cache invalidation:**

- Configuration changes
- Dependency updates
- File deletions

## Layer 2: Pre-push Validation

### Quick Reference

```bash
# Manual validation
node .aios-core/utils/aios-validator.js pre-push
node .aios-core/utils/aios-validator.js stories

# Validate single story
node .aios-core/utils/aios-validator.js story docs/stories/1.1-story.yaml

# Skip hook (NOT recommended)
git push --no-verify
```

### Story Validator

**Location:** `.aios-core/utils/aios-validator.js`

**Features:**

- Colored terminal output
- Progress indicators
- Clear error 
messages +- Warnings for potential issues + +**Example Output:** + +``` +══════════════════════════════════════════════════════════ + Story Validation: 2.2-git-workflow-implementation.yaml +══════════════════════════════════════════════════════════ + +Story: 2.2 - Git Workflow with Multi-Layer Validation +Status: in progress + +Progress: 12/15 tasks (80.0%) + +✓ Story validation passed with warnings + +Warning: + • Consider updating status to 'Ready for Review' +``` + +### Validation Rules + +#### 1. Checkbox Format + +Supported formats: + +- `[x]` - Completed (lowercase) +- `[X]` - Completed (uppercase) +- `[ ]` - Incomplete + +Not recognized: + +- `[o]`, `[*]`, `[-]` - Not counted as complete + +#### 2. Status Consistency + +| Status | Rule | +| ------------------ | -------------------------- | +| `ready` | No tasks should be checked | +| `in progress` | Some tasks checked | +| `Ready for Review` | All tasks checked | +| `completed` | All tasks checked | + +#### 3. Required Sections + +All stories must have: + +```yaml +id: "X.X" +title: "Story Title" +description: "Story description" +status: "ready" | "in progress" | "Ready for Review" | "completed" +acceptance_criteria: + - name: "Criterion" + tasks: + - "[ ] Task" +``` + +#### 4. Dev Agent Record + +Recommended but not required: + +```yaml +dev_agent_record: + agent_model: 'claude-sonnet-4-5' + implementation_date: '2025-01-23' +``` + +Warning if missing. + +### Error Messages + +**Missing Required Sections:** + +``` +✗ Missing required sections: description, acceptance_criteria +``` + +**Status Inconsistency:** + +``` +✗ Story marked as completed but only 12/15 tasks are checked +``` + +**Non-existent File:** + +``` +✗ Story file not found: docs/stories/missing.yaml +``` + +## Layer 3: CI/CD Pipeline + +### Workflow Structure + +**File:** `.github/workflows/ci.yml` + +**Jobs:** + +1. **lint** - ESLint validation +2. **typecheck** - TypeScript checking +3. **test** - Jest tests with coverage +4. 
**story-validation** - Story consistency +5. **validation-summary** - Aggregate results +6. **performance** (optional) - Performance metrics + +### Job Details + +#### ESLint Job + +```yaml +- name: Run ESLint + run: npm run lint +``` + +- Runs on Ubuntu latest +- Timeout: 5 minutes +- Uses npm cache +- Fails on any lint error + +#### TypeScript Job + +```yaml +- name: Run TypeScript type checking + run: npm run typecheck +``` + +- Runs on Ubuntu latest +- Timeout: 5 minutes +- Fails on type errors + +#### Test Job + +```yaml +- name: Run tests with coverage + run: npm run test:coverage +``` + +- Runs on Ubuntu latest +- Timeout: 10 minutes +- Coverage uploaded to Codecov +- Enforces 80% coverage threshold + +#### Story Validation Job + +```yaml +- name: Validate story checkboxes + run: node .aios-core/utils/aios-validator.js stories +``` + +- Runs on Ubuntu latest +- Timeout: 5 minutes +- Validates all stories + +#### Validation Summary Job + +```yaml +needs: [lint, typecheck, test, story-validation] +if: always() +``` + +- Runs after all validations +- Checks all job statuses +- Fails if any validation failed +- Provides summary + +### CI Triggers + +**Push Events:** + +- `master` branch +- `develop` branch +- `feature/**` branches +- `bugfix/**` branches + +**Pull Request Events:** + +- Against `master` +- Against `develop` + +### Viewing CI Results + +```bash +# View PR checks +gh pr checks + +# View workflow runs +gh run list + +# View specific run +gh run view + +# Re-run failed jobs +gh run rerun +``` + +## Branch Protection + +### Setup + +```bash +# Run setup script +node scripts/setup-branch-protection.js + +# View current protection +node scripts/setup-branch-protection.js --status +``` + +### Requirements + +- GitHub CLI (`gh`) installed +- Authenticated with GitHub +- Admin access to repository + +### Protection Rules + +**Master Branch Protection:** + +1. 
**Required Status Checks:** + - ESLint + - TypeScript Type Checking + - Jest Tests + - Story Checkbox Validation + +2. **Pull Request Reviews:** + - 1 approval required + - Dismiss stale reviews on new commits + +3. **Additional Rules:** + - Linear history enforced (rebase only) + - Force pushes blocked + - Branch deletion blocked + - Rules apply to administrators + +### Manual Configuration + +Via GitHub CLI: + +```bash +# Set required status checks +gh api repos/OWNER/REPO/branches/master/protection/required_status_checks \ + -X PUT \ + -f strict=true \ + -f contexts[]="ESLint" \ + -f contexts[]="TypeScript Type Checking" + +# Require PR reviews +gh api repos/OWNER/REPO/branches/master/protection/required_pull_request_reviews \ + -X PUT \ + -f required_approving_review_count=1 + +# Block force pushes +gh api repos/OWNER/REPO/branches/master/protection/allow_force_pushes \ + -X DELETE +``` + +## Daily Workflow + +### Starting a New Feature + +```bash +# 1. Update master +git checkout master +git pull origin master + +# 2. Create feature branch +git checkout -b feature/my-feature + +# 3. Make changes +# ... edit files ... + +# 4. Commit (triggers pre-commit) +git add . +git commit -m "feat: add my feature [Story X.X]" + +# 5. Push (triggers pre-push) +git push origin feature/my-feature + +# 6. Create PR +gh pr create --title "feat: Add my feature" --body "Description" +``` + +### Updating a Story + +```bash +# 1. Open story file +code docs/stories/X.X-story.yaml + +# 2. Mark tasks complete +# Change: - "[ ] Task" +# To: - "[x] Task" + +# 3. Update status if needed +# Change: status: "in progress" +# To: status: "Ready for Review" + +# 4. Commit story updates +git add docs/stories/X.X-story.yaml +git commit -m "docs: update story X.X progress" + +# 5. Push (validates story) +git push +``` + +### Fixing Validation Failures + +**ESLint Errors:** + +```bash +# Auto-fix issues +npm run lint -- --fix + +# Check remaining issues +npm run lint + +# Commit fixes +git add . 
+git commit -m "style: fix lint issues" +``` + +**TypeScript Errors:** + +```bash +# See all errors +npm run typecheck + +# Fix errors in code +# ... edit files ... + +# Verify fix +npm run typecheck + +# Commit fixes +git add . +git commit -m "fix: resolve type errors" +``` + +**Story Validation Errors:** + +```bash +# Check stories +node .aios-core/utils/aios-validator.js stories + +# Fix story file +code docs/stories/X.X-story.yaml + +# Verify fix +node .aios-core/utils/aios-validator.js story docs/stories/X.X-story.yaml + +# Commit fix +git add docs/stories/ +git commit -m "docs: fix story validation" +``` + +**Test Failures:** + +```bash +# Run tests +npm test + +# Run specific test +npm test -- path/to/test.js + +# Fix failing tests +# ... edit test files ... + +# Run with coverage +npm run test:coverage + +# Commit fixes +git add . +git commit -m "test: fix failing tests" +``` + +### Merging a Pull Request + +```bash +# 1. Ensure CI passes +gh pr checks + +# 2. Get approval +# (Wait for team member review) + +# 3. Merge (squash) +gh pr merge --squash --delete-branch + +# 4. Update local master +git checkout master +git pull origin master +``` + +## Troubleshooting + +### Hook Not Running + +**Symptoms:** Commit succeeds without validation + +**Solutions:** + +1. Check Husky installation: + +```bash +npm run prepare +``` + +2. Verify hook files exist: + +```bash +ls -la .husky/pre-commit +ls -la .husky/pre-push +``` + +3. Check file permissions (Unix): + +```bash +chmod +x .husky/pre-commit +chmod +x .husky/pre-push +``` + +### Slow Pre-commit Hook + +**Symptoms:** Pre-commit takes >10 seconds + +**Solutions:** + +1. Clear caches: + +```bash +rm .eslintcache .tsbuildinfo +git commit # Rebuilds cache +``` + +2. Check file changes: + +```bash +git status +# Commit fewer files at once +``` + +3. Update dependencies: + +```bash +npm update +``` + +### Story Validation Fails + +**Symptom:** Pre-push fails with story errors + +**Common Issues:** + +1. 
**Checkbox mismatch:** + +```yaml +# Error: Completed status but incomplete tasks +status: 'completed' +tasks: + - '[x] Task 1' + - '[ ] Task 2' # ← Fix this + + +# Solution: Complete all tasks or change status +``` + +2. **Missing sections:** + +```yaml +# Error: Missing required sections +id: '1.1' +title: 'Story' +# Missing: description, acceptance_criteria, status + +# Solution: Add missing sections +``` + +3. **Invalid YAML:** + +```yaml +# Error: Invalid YAML syntax +tasks: + - "[ ] Task 1 + - "[ ] Task 2" # ← Missing closing quote above + +# Solution: Fix YAML syntax +``` + +### CI Fails but Local Passes + +**Symptoms:** CI fails but all local validations pass + +**Common Causes:** + +1. **Cache differences:** + +```bash +# Clear local caches +rm -rf node_modules .eslintcache .tsbuildinfo coverage/ +npm ci +npm test +``` + +2. **Environment differences:** + +```bash +# Use same Node version as CI (18) +nvm use 18 +npm test +``` + +3. **Uncommitted files:** + +```bash +# Check for uncommitted changes +git status + +# Stash if needed +git stash +``` + +### Branch Protection Blocks Merge + +**Symptoms:** Cannot merge PR even with approvals + +**Check:** + +1. **Required checks pass:** + +```bash +gh pr checks +# All must show ✓ +``` + +2. **Required approvals:** + +```bash +gh pr view +# Check "Reviewers" section +``` + +3. **Branch is up to date:** + +```bash +# Update branch +git checkout feature-branch +git rebase master +git push --force-with-lease +``` + +## Performance Tips + +### Cache Management + +**Keep caches:** + +- `.eslintcache` - ESLint results +- `.tsbuildinfo` - TypeScript build info +- `coverage/` - Test coverage data + +**Commit to .gitignore:** + +```gitignore +.eslintcache +.tsbuildinfo +coverage/ +``` + +### Incremental Development + +**Best Practices:** + +1. **Small commits:** + - Fewer files = faster validation + - Easier to debug failures + +2. 
**Test during development:** + +```bash +# Run validation manually before commit +npm run lint +npm run typecheck +npm test +``` + +3. **Fix issues immediately:** + - Don't let issues accumulate + - Easier to fix in context + +### CI Optimization + +**Workflow optimizations:** + +1. **Parallel jobs** - All validations run in parallel +2. **Job timeouts** - Fail fast on hangs +3. **Caching** - npm dependencies cached +4. **Conditional jobs** - Performance job only on PRs + +### Story Validation Performance + +**Current Performance:** + +- Single story: <100ms +- All stories: <2s (typical) + +**Optimization tips:** + +1. **Keep stories focused** - One feature per story +2. **Limit task count** - Break large stories into smaller ones +3. **Valid YAML** - Parsing errors slow validation + +## Advanced Topics + +### Skipping Validations + +**When appropriate:** + +- Emergency hotfixes +- Documentation-only changes +- CI configuration changes + +**How to skip:** + +```bash +# Skip pre-commit +git commit --no-verify + +# Skip pre-push +git push --no-verify + +# Skip CI (not recommended) +# Add [skip ci] to commit message +git commit -m "docs: update [skip ci]" +``` + +**Warning:** Only skip when absolutely necessary. Skipped validations won't catch issues. + +### Custom Validation + +**Add custom validators:** + +1. **Create validator function:** + +```javascript +// .aios-core/utils/custom-validator.js +module.exports = async function validateCustom() { + // Your validation logic + return { success: true, errors: [] }; +}; +``` + +2. **Add to hook:** + +```bash +# .husky/pre-commit +node .aios-core/utils/aios-validator.js pre-commit +node .aios-core/utils/custom-validator.js +``` + +3. **Add to CI:** + +```yaml +# .github/workflows/ci.yml +- name: Custom validation + run: node .aios-core/utils/custom-validator.js +``` + +### Monorepo Support + +**For monorepos:** + +1. 
**Scope validations:** + +```javascript +// Only validate changed packages +const changedFiles = execSync('git diff --name-only HEAD~1').toString(); +const packages = getAffectedPackages(changedFiles); +``` + +2. **Parallel package validation:** + +```yaml +strategy: + matrix: + package: [package-a, package-b, package-c] +``` + +## References + +- **AIOS Validator:** [.aios-core/utils/aios-validator.js](../.aios-core/utils/aios-validator.js) +- **CI Workflow:** [.github/workflows/ci.yml](../.github/workflows/ci.yml) + +--- + +**Questions? Issues?** + +- [Open an Issue](https://github.com/SynkraAI/aios-core/issues) +- [Join Discord](https://discord.gg/gk8jAdXWmj) + +``` + +================================================== +📄 docs/00-shared-activation-pipeline.md +================================================== +```md +# Shared Activation Pipeline - Common Agent Activation Chain + +> Traced from source code, not documentation. +> Source: `.aios-core/development/scripts/unified-activation-pipeline.js` (Story ACT-6) +> Previous source: `.aios-core/development/scripts/greeting-builder.js` (949 lines) + +## Overview + +Every AIOS agent goes through a **single unified activation pipeline** before presenting its greeting. As of Story ACT-6, the previous two-path architecture (Path A: direct GreetingBuilder invocation, Path B: generate-greeting.js CLI wrapper) has been consolidated into one entry point. + +| Component | Role | +|-----------|------| +| **UnifiedActivationPipeline** | Single entry point for ALL 12 agents. Orchestrates parallel loading, sequential detection, and greeting build. | +| **generate-greeting.js** | Thin CLI wrapper that delegates to `ActivationRuntime.activate()`. Retained for backward compatibility. | +| **GreetingBuilder** | Core greeting assembly engine. Called by the pipeline with pre-loaded enriched context. 
| + +All 12 agents use the same effective path: +``` +Agent .md STEP 3 → ActivationRuntime.activate(agentId) → UnifiedActivationPipeline.activate(agentId) → GreetingBuilder.buildGreeting(agent, enrichedContext) +``` + +### Canonical Runtime Entry (Codex/Claude shared) + +To avoid IDE-specific drift, activation now has a canonical wrapper: + +``` +ActivationRuntime.activate(agentId) -> UnifiedActivationPipeline.activate(agentId) +``` + +Current source: +- `.aios-core/development/scripts/activation-runtime.js` +- `.aios-core/development/scripts/unified-activation-pipeline.js` + +### Unified Pipeline Architecture (Story ACT-6) + +```mermaid +sequenceDiagram + participant CC as Claude Code + participant UAP as UnifiedActivationPipeline + participant ACL as AgentConfigLoader + participant SCL as SessionContextLoader + participant PSL as ProjectStatusLoader + participant GCD as GitConfigDetector + participant PM as PermissionMode + participant GPM as GreetingPreferenceManager + participant CD as ContextDetector + participant WN as WorkflowNavigator + participant GB as GreetingBuilder + + CC->>UAP: activate(agentId) + UAP->>UAP: _loadCoreConfig() + + par Phase 1: Parallel Loading (Steps 1-5) + UAP->>ACL: loadComplete(coreConfig) + ACL-->>UAP: {config, definition, agent, persona_profile, commands} + and + UAP->>SCL: loadContext(agentId) + SCL-->>UAP: {sessionType, lastCommands, previousAgent, ...} + and + UAP->>PSL: loadProjectStatus() + PSL-->>UAP: {branch, modifiedFiles, recentCommits, currentStory} + and + UAP->>GCD: get() + GCD-->>UAP: {configured, type, branch} + and + UAP->>PM: load() + getBadge() + PM-->>UAP: {mode, badge} + end + + Note over UAP: Phase 2: Build agent definition + + UAP->>GPM: getPreference(userProfile) + GPM-->>UAP: preference (auto|minimal|named|archetypal) + + UAP->>CD: detectSessionType(conversationHistory) + CD-->>UAP: 'new' | 'existing' | 'workflow' + + UAP->>WN: detectWorkflowState(commandHistory, sessionContext) + WN-->>UAP: workflowState | 
null + + Note over UAP: Assemble enriched context + + UAP->>GB: buildGreeting(agentDefinition, enrichedContext) + GB-->>UAP: formatted greeting string + UAP-->>CC: {greeting, context, duration} +``` + +### Previous Architecture (Pre-ACT-6, deprecated) + +Two paths existed that converged on the same `GreetingBuilder` class: + +| Path | Used By | Entry Point | Status | +|------|---------|-------------|--------| +| **Path A: Direct** | 9 agents | Agent .md STEP 3 called `GreetingBuilder.buildGreeting()` directly | **Replaced** by UnifiedActivationPipeline | +| **Path B: CLI wrapper** | 3 agents (@devops, @data-engineer, @ux-design-expert) | `generate-greeting.js` orchestrated context loading | **Replaced** -- generate-greeting.js is now a thin wrapper | + +--- + +## 1. Agent File Loading (STEP 1-2) + +Before the activation pipeline begins, Claude Code loads and parses the agent definition file. + +### 1.1 File Location + +``` +.aios-core/development/agents/{agent-id}.md +``` + +### 1.2 Parsing Flow (via `AgentConfigLoader.loadAgentDefinition()`) + +**Source:** `agent-config-loader.js:308-366` + +```mermaid +sequenceDiagram + participant CC as Claude Code + participant ACL as AgentConfigLoader + participant FS as File System + participant YAML as js-yaml + + CC->>ACL: new AgentConfigLoader(agentId) + CC->>ACL: loadAgentDefinition() + ACL->>ACL: Check agentDefCache (5min TTL) + alt Cache hit + ACL-->>CC: Return cached definition + else Cache miss + ACL->>FS: readFile(.aios-core/development/agents/{id}.md) + FS-->>ACL: Raw markdown content + ACL->>ACL: Extract YAML block (regex: /```ya?ml\n([\s\S]*?)\n```/) + ACL->>YAML: yaml.load(yamlContent) + alt Parse fails + ACL->>ACL: _normalizeCompactCommands(yamlContent) + ACL->>YAML: yaml.load(normalizedYaml) + end + ACL->>ACL: _normalizeAgentDefinition(agentDef) + Note over ACL: Ensures agent.id, agent.name, agent.icon defaults + Note over ACL: Ensures persona_profile.greeting_levels exists + Note over ACL: Ensures commands 
array exists + ACL->>ACL: Cache result (5min TTL) + ACL-->>CC: Return normalized definition + end +``` + +### 1.3 Key Fields Extracted + +| Field | Path in YAML | Used For | +|-------|-------------|----------| +| `agent.id` | `agent.id` | Agent identification, config lookup | +| `agent.name` | `agent.name` | Greeting presentation | +| `agent.icon` | `agent.icon` | Greeting prefix | +| `persona_profile.greeting_levels` | `persona_profile.communication.greeting_levels` or `persona_profile.greeting_levels` | Fixed-level greetings | +| `persona_profile.communication.signature_closing` | `persona_profile.communication.signature_closing` | Footer signature | +| `persona.role` | `persona.role` | Role description (new sessions) | +| `commands` | `commands[]` | Command list with visibility metadata | +| `dependencies` | `dependencies.tasks[]`, `.templates[]`, etc. | Task execution references | + +--- + +## 2. Activation Pipeline (STEP 3) -- Unified (Story ACT-6) + +### 2.1 Unified Activation Path (ALL 12 agents) + +**Source:** `unified-activation-pipeline.js` + +All 12 agents use the same activation path. The `UnifiedActivationPipeline.activate(agentId)` method uses ACT-11 tiered loading: + +1. **Tier 1 (Critical):** `AgentConfigLoader` (required) +2. **Tier 2 (High):** `PermissionMode` + `GitConfigDetector` +3. **Tier 3 (Best-effort):** `SessionContextLoader` + `ProjectStatusLoader` (+ memories when available) +4. **Sequential:** preference + context detection + workflow detection +5. 
**Greeting:** `GreetingBuilder.buildGreeting(agentDefinition, enrichedContext)` + +**Timeout protection (current):** +- Tier budgets: critical `80ms`, high `120ms`, best-effort `180ms` +- Total pipeline default timeout: `500ms` (config/env overridable) + +```mermaid +flowchart LR + A[Agent .md STEP 3] --> B[UnifiedActivationPipeline.activate] + B --> C{Phase 1: Parallel} + C --> C1[AgentConfigLoader] + C --> C2[SessionContextLoader] + C --> C3[ProjectStatusLoader] + C --> C4[GitConfigDetector] + C --> C5[PermissionMode] + C1 & C2 & C3 & C4 & C5 --> D[Phase 2: Build Agent Def] + D --> E[Phase 3: Sequential] + E --> E1[PreferenceManager] + E1 --> E2[ContextDetector] + E2 --> E3[WorkflowNavigator] + E3 --> F[Phase 4: Enriched Context] + F --> G[Phase 5: GreetingBuilder] + G --> H[Greeting + Context + Duration] +``` + +### 2.2 generate-greeting.js (Thin Wrapper) + +**Source:** `generate-greeting.js` (refactored in Story ACT-6) + +Previously a full CLI orchestrator for 3 agents, `generate-greeting.js` is now a thin wrapper: + +```javascript +async function generateGreeting(agentId) { + const runtime = new ActivationRuntime(); + const result = await runtime.activate(agentId); + return result.greeting; +} +``` + +This maintains backward compatibility for any code that still calls `generateGreeting()` directly. + +### 2.3 Enriched Context Object Shape + +The enriched context passed to GreetingBuilder contains: + +```javascript +{ + agent, // Agent definition (id, name, icon, title, commands, persona) + config, // Agent-specific config sections + session, // Session context (sessionType, lastCommands, previousAgent, ...) 
  projectStatus, // Git status (branch, modifiedFiles, recentCommits, currentStory)
  gitConfig, // Git config (configured, type, branch)
  permissions, // Permission mode (mode, badge)
  preference, // Greeting preference (auto|minimal|named|archetypal)
  sessionType, // Detected session type (new|existing|workflow)
  workflowState, // Workflow state (if in workflow session)
  userProfile, // User profile (bob|advanced)
  conversationHistory, // Conversation history for context detection
  lastCommands, // Recent agent commands
  previousAgent, // Previously active agent
  sessionMessage, // Session-specific message
  workflowActive, // Active workflow info
  sessionStory, // Current story being worked on
}
```

### 2.4 Previous Architecture (Pre-ACT-6, deprecated)

Before Story ACT-6, two separate paths existed:

| Path | Agents | Entry Point | Context Richness |
|------|--------|-------------|-----------------|
| Path A (Direct) | 9 agents | GreetingBuilder.buildGreeting() directly | Limited -- no AgentConfigLoader, no SessionContextLoader |
| Path B (CLI) | 3 agents (@devops, @data-engineer, @ux-design-expert) | generate-greeting.js | Rich -- full parallel loading |

This divergence meant Path A agents lacked session state, project status details, and agent-specific config that Path B agents received. The unified pipeline eliminates this gap.

---

## 3. Greeting Section Assembly

**Source:** `greeting-builder.js:91-141`

When `preference === 'auto'`, the greeting is assembled from ordered sections:

```mermaid
flowchart TD
    A[Start _buildContextualGreeting] --> B{Session Type?}

    B -->|any| C[1. Presentation + Permission Badge]
    C --> D{sessionType === 'new'?}

    D -->|yes| E[2. Role Description]
    D -->|no| F[Skip role description]

    E --> G{gitConfig.configured AND projectStatus?}
    F --> G

    G -->|yes| H[3. Project Status]
    G -->|no| I[Skip project status]

    H --> J{sessionType !== 'new'?}
    I --> J

    J -->|yes| K[4. 
Context Section
intelligent narrative + recommendations] + J -->|no| L[Skip context section] + + K --> M{sessionType === 'workflow'
AND lastCommands AND no contextSection?} + L --> M + + M -->|yes| N[5. Workflow Suggestions
via WorkflowNavigator] + M -->|no| O[Skip workflow suggestions] + + N --> P[6. Commands
filtered by visibility] + O --> P + + P --> Q[7. Footer + Signature] + Q --> R[Join sections with \\n\\n] + R --> S[Return greeting string] +``` + +### 3.1 Section Details + +| # | Section | Method | Condition | Data Source | +|---|---------|--------|-----------|-------------| +| 1 | Presentation | `buildPresentation()` | Always | `persona_profile.greeting_levels.archetypal` + PermissionMode badge | +| 2 | Role Description | `buildRoleDescription()` | `sessionType === 'new'` | `persona.role` | +| 3 | Project Status | `buildProjectStatus()` | `gitConfig.configured && projectStatus` | ProjectStatusLoader (branch, files, commits, story) | +| 4 | Context | `buildContextSection()` | `sessionType !== 'new'` | Intelligent narrative from previous agent, modified files, story | +| 5 | Workflow Suggestions | `buildWorkflowSuggestions()` | `sessionType === 'workflow' && lastCommands && !contextSection` | WorkflowNavigator + workflow-patterns.yaml | +| 6 | Commands | `buildCommands()` | Always | `filterCommandsByVisibility()` - max 12 commands | +| 7 | Footer | `buildFooter()` | Always | `persona_profile.communication.signature_closing` | + +### 3.2 Command Visibility Filtering + +**Source:** `greeting-builder.js:815-857` + +| Session Type | Visibility Filter | Shows Commands With | +|-------------|-------------------|---------------------| +| `new` | `full` | `visibility: [full, ...]` | +| `existing` | `quick` | `visibility: [..., quick, ...]` | +| `workflow` | `key` | `visibility: [..., key, ...]` | + +If no commands have visibility metadata, falls back to first 12 commands. + +--- + +## 4. Context Detection (Session Type) + +**Source:** `context-detector.js:22-101` + +```mermaid +flowchart TD + A[detectSessionType] --> B{conversationHistory
not null AND length > 0?} + + B -->|yes| C[_detectFromConversation] + B -->|no| D[_detectFromFile] + + C --> E{commands.length === 0?} + E -->|yes| F[return 'new'] + E -->|no| G{_detectWorkflowPattern?} + G -->|yes| H[return 'workflow'] + G -->|no| I[return 'existing'] + + D --> J{session-state.json exists?} + J -->|no| K[return 'new'] + J -->|yes| L{Session expired? > 1hr} + L -->|yes| M[return 'new'] + L -->|no| N{workflowActive AND lastCommands?} + N -->|yes| O[return 'workflow'] + N -->|no| P{lastCommands.length > 0?} + P -->|yes| Q[return 'existing'] + P -->|no| R[return 'new'] +``` + +**Workflow patterns detected:** +- `story_development`: validate-story-draft, develop, review-qa +- `epic_creation`: create-epic, create-story, validate-story-draft +- `backlog_management`: backlog-review, backlog-prioritize, backlog-schedule + +--- + +## 5. Git Config Detection + +**Source:** `git-config-detector.js:19-294` + +| Property | Command | Timeout | Cache TTL | +|----------|---------|---------|-----------| +| `configured` | `git rev-parse --is-inside-work-tree` | 1s | 5 min | +| `branch` | `git branch --show-current` | 1s | 5 min | +| `type` | `git config --get remote.origin.url` | 1s | 5 min | + +**Returns:** `{ configured: boolean, type: 'github'|'gitlab'|'bitbucket'|'other'|null, branch: string|null }` + +--- + +## 6. Project Status Loading + +**Source:** `project-status-loader.js:20-524` + +| Data Point | Git Command | Cache TTL | +|------------|-------------|-----------| +| `branch` | `git branch --show-current` | 60s | +| `modifiedFiles` | `git status --porcelain` (max 5) | 60s | +| `modifiedFilesTotalCount` | Count from porcelain output | 60s | +| `recentCommits` | `git log -2 --oneline --no-decorate` | 60s | +| `currentStory` | Scan `docs/stories/` for `Status: InProgress` | 60s | +| `currentEpic` | Extracted from story file metadata | 60s | +| `worktrees` | Via WorktreeManager | 60s | + +**Cache file:** `.aios/project-status.yaml` + +--- + +## 7. 
Greeting Preference + +**Source:** `greeting-preference-manager.js:18-146` + +Reads from `.aios-core/core-config.yaml` path `agentIdentity.greeting.preference`. + +| Value | Behavior | +|-------|----------| +| `auto` (default) | Session-aware contextual greeting | +| `minimal` | Always use `greeting_levels.minimal` | +| `named` | Always use `greeting_levels.named` | +| `archetypal` | Always use `greeting_levels.archetypal` | + +--- + +## 8. Permission Mode System (Story ACT-4) + +**Source:** `permissions/index.js` + `permissions/permission-mode.js` + `permissions/operation-guard.js` + +### 8.1 Overview + +The Permission Mode system controls agent autonomy with three modes: + +| Mode | Badge | Writes | Executes | Deletes | Default | +|------|-------|--------|----------|---------|---------| +| `explore` | `[Explore]` | Blocked | Blocked | Blocked | No | +| `ask` | `[Ask]` | Confirm | Confirm | Confirm | **Yes** | +| `auto` | `[Auto]` | Allowed | Allowed | Allowed | No | + +All modes allow **read** operations unconditionally. + +### 8.2 Badge Display + +The badge is loaded during greeting assembly (Section 3, step 1) via `_safeGetPermissionBadge()`: + +```javascript +const mode = new PermissionMode(); +await mode.load(); // Reads .aios/config.yaml -> permissions.mode +return mode.getBadge(); // Returns "[icon Name]" +``` + +Badge appears next to the agent's archetypal greeting: `"Agent Name ready! [Ask]"` + +### 8.3 OperationGuard Enforcement + +The `OperationGuard` class classifies every tool call and checks against the current mode: + +``` +Tool Call → classifyOperation(tool, params) → canPerform(operation) → allow/prompt/deny +``` + +**Classification rules:** + +| Tool | Classification | +|------|---------------| +| Read, Glob, Grep | `read` (always allowed) | +| Write, Edit | `write` | +| Task (read-only subagent) | `read` | +| Task (other) | `execute` | +| Bash (git status, git log, ls, etc.) | `read` | +| Bash (git commit, git push, npm install, etc.) 
| `write` | +| Bash (rm -rf, git reset --hard, DROP TABLE, etc.) | `delete` | +| MCP tools | `execute` | + +### 8.4 `*yolo` Command + +Available in all 12 agents. Cycles the mode: `ask` -> `auto` -> `explore` -> `ask`. + +**Implementation:** Calls `PermissionMode.cycleMode()` which: +1. Reads current mode from `.aios/config.yaml` +2. Advances to next mode in `MODE_CYCLE` array +3. Writes new mode back to config +4. Returns updated mode info with badge + +### 8.5 Integration Points + +The `enforcePermission()` function provides a clean API for permission enforcement: + +```javascript +const { enforcePermission } = require('./.aios-core/core/permissions'); + +const result = await enforcePermission('Write', { file_path: '/file.js' }); +// result.action: 'allow' | 'prompt' | 'deny' +// result.message: User-facing explanation (for prompt/deny) +``` + +### 8.6 Config Initialization + +The `environment-bootstrap` task initializes `.aios/config.yaml` with `permissions.mode: ask` as the default. If the config file is missing or the field is absent, the system defaults to `ask` mode. + +--- + +## 9. 
Config Loading Per Agent + +**Source:** `agent-config-loader.js:49-160` + `agent-config-requirements.yaml` + +Each agent has specific config requirements defined in `.aios-core/data/agent-config-requirements.yaml`: + +| Agent | Config Sections | Files Loaded | Performance Target | +|-------|----------------|--------------|-------------------| +| `aios-master` | dataLocation, registry | aios-kb.md (lazy) | <30ms | +| `dev` | devLoadAlwaysFiles, devStoryLocation, dataLocation | coding-standards.md, tech-stack.md, source-tree.md, technical-preferences.md | <50ms | +| `qa` | qaLocation, dataLocation, storyBacklog | technical-preferences.md, test-levels-framework.md, test-priorities-matrix.md | <50ms | +| `devops` | dataLocation, cicdLocation | technical-preferences.md | <50ms | +| `architect` | architecture, dataLocation, templatesLocation | technical-preferences.md | <75ms | +| `po` | devStoryLocation, prd, storyBacklog, templatesLocation | elicitation-methods.md | <75ms | +| `sm` | devStoryLocation, storyBacklog, dataLocation | mode-selection-best-practices.md, workflow-patterns.yaml, coding-standards.md | <75ms | +| `data-engineer` | dataLocation, etlLocation | technical-preferences.md | <75ms | +| `pm` | devStoryLocation, storyBacklog | coding-standards.md, tech-stack.md | <100ms | +| `analyst` | dataLocation, analyticsLocation | brainstorming-techniques.md, tech-stack.md, source-tree.md | <100ms | +| `ux-design-expert` | dataLocation, uxLocation | tech-stack.md, coding-standards.md | <100ms | +| `squad-creator` | dataLocation, squadsTemplateLocation | (none, lazy: agent_registry, squad_manifest) | <150ms | + +> **Story ACT-8 changes:** Enriched pm (+2 files), ux-design-expert (+2 files), analyst (+2 files), sm (+1 file), squad-creator (explicit entry with lazy loading). All within performance targets. + +--- + +## 10. 
Files Loaded During Activation (Complete List) + +### Always loaded (every agent activation) + +| File | Loader | Purpose | +|------|--------|---------| +| `.aios-core/development/agents/{agent-id}.md` | AgentConfigLoader | Agent definition | +| `.aios-core/core-config.yaml` | GreetingBuilder._loadConfig() | Core configuration | +| `.aios-core/data/agent-config-requirements.yaml` | AgentConfigLoader.loadRequirements() | Per-agent config needs | +| `.aios-core/data/workflow-patterns.yaml` | WorkflowNavigator._loadPatterns() | Workflow state detection | + +### Loaded conditionally + +| File | Condition | Loader | +|------|-----------|--------| +| `.aios/session-state.json` | Path B (CLI wrapper) or file-based session detection | ContextDetector / SessionContextLoader | +| `.aios/project-status.yaml` | Cache check (60s TTL) | ProjectStatusLoader | +| `docs/stories/**/*.md` | When scanning for InProgress story | ProjectStatusLoader.getCurrentStoryInfo() | +| Agent-specific data files | Per agent-config-requirements.yaml | AgentConfigLoader.loadFile() | + +--- + +## 11. 
Error Handling & Fallbacks + +The entire pipeline is protected with multiple fallback layers: + +### 11.1 UnifiedActivationPipeline Fallbacks (Story ACT-6) + +| Component | Fallback | Source | +|-----------|----------|--------| +| UnifiedActivationPipeline.activate() | `_generateFallbackGreeting(agentId)` on any unrecoverable error | unified-activation-pipeline.js | +| `_safeLoad()` (per-loader) | Returns `null` on failure; 150ms per-loader timeout | unified-activation-pipeline.js | +| Total pipeline | `_timeoutFallback()` at 200ms returns fallback greeting | unified-activation-pipeline.js | +| `_detectSessionType()` | Returns `'new'` on failure | unified-activation-pipeline.js | +| `_detectWorkflowState()` | Returns `null` on failure | unified-activation-pipeline.js | + +### 11.2 GreetingBuilder Fallbacks (unchanged) + +| Component | Fallback | Source | +|-----------|----------|--------| +| GreetingBuilder.buildGreeting() | `buildSimpleGreeting()` | greeting-builder.js:60 | +| _buildContextualGreeting() | 150ms timeout | greeting-builder.js:73-77 | +| ContextDetector.detectSessionType() | Returns `'new'` | greeting-builder.js:869 | +| GitConfigDetector.get() | `{ configured: false }` | greeting-builder.js:883 | +| ProjectStatusLoader | `null` | greeting-builder.js:897 | +| PermissionMode.getBadge() | `''` (empty string) | greeting-builder.js:913 | + +### 11.3 generate-greeting.js Fallback (thin wrapper) + +| Component | Fallback | Source | +|-----------|----------|--------| +| generateGreeting() | `generateFallbackGreeting()` if pipeline throws | generate-greeting.js | + +--- + +## 12. 
Constructor Dependency Graph + +```mermaid +graph TD + UAP[UnifiedActivationPipeline] --> GB[GreetingBuilder] + UAP --> GPM[GreetingPreferenceManager] + UAP --> CD[ContextDetector] + UAP --> WN[WorkflowNavigator] + UAP --> GCD_C[GitConfigDetector constructor] + + UAP -.->|runtime Phase 1| ACL[AgentConfigLoader] + UAP -.->|runtime Phase 1| SCL[SessionContextLoader] + UAP -.->|runtime Phase 1| PSL[ProjectStatusLoader] + UAP -.->|runtime Phase 1| PM[PermissionMode] + UAP -.->|runtime Phase 1| GCD_R[GitConfigDetector.get] + + GG[generate-greeting.js thin wrapper] --> UAP + + GB --> CD2[ContextDetector] + GB --> GCD2[GitConfigDetector] + GB --> WN2[WorkflowNavigator] + GB --> GPM2[GreetingPreferenceManager] + GB --> CC[core-config.yaml] + + ACL --> AR[agent-config-requirements.yaml] + ACL --> AD[Agent .md definition] + ACL --> GCC[globalConfigCache] + + SCL --> SSF[.aios/session-state.json] + PSL --> GIT[git CLI commands] + PSL --> STORIES[docs/stories/**/*.md] + PSL --> WTM[WorktreeManager] + PSL --> PSC[.aios/project-status.yaml] + GCD_R --> GIT + WN --> WP[.aios-core/data/workflow-patterns.yaml] + GPM --> CC +``` + +--- + +## 13. `user_profile` Impact Matrix (Story ACT-2) + +The `user_profile` setting (`bob` or `advanced`) affects behavior across the entire AIOS pipeline. This section documents every file that references `user_profile`/`userProfile` and the behavioral difference between modes. 
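The L5-priority resolution referenced throughout this matrix can be sketched roughly as follows. This is a minimal illustration with hypothetical layer objects — the real merge logic in `config-resolver.js` may differ — but it shows why a `user_profile` written to `user-config.yaml` (L5) wins over lower layers:

```javascript
// Hypothetical sketch of L1–L5 layered config resolution.
// Layers are ordered lowest (L1) to highest (L5) priority,
// so a later layer's fields override earlier ones.
function resolveConfig(layers) {
  return layers.reduce((merged, layer) => ({ ...merged, ...layer }), {});
}

const resolved = resolveConfig([
  { user_profile: 'advanced' }, // L1: framework default
  {},                           // L2–L4: project layers (empty here)
  { user_profile: 'bob' },      // L5: user-config.yaml
]);

console.log(resolved.user_profile); // 'bob' — the L5 user layer takes priority
```

The same shape explains `toggleUserProfile()`: switching bob↔advanced only needs to rewrite the L5 layer, leaving lower layers untouched.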
+ +### 13.1 Bob Mode Flow + +``` +Installation → user selects "bob" → core-config.yaml: user_profile: bob + → user-config.yaml: user_profile: bob (L5 layer) + ↓ +Activation → loadUserProfile() → validateUserProfile() → resolveConfig(L5 priority) + → GreetingPreferenceManager: forces "named" (or "minimal") + → GreetingBuilder: redirect non-PM agents to @pm + → filterCommandsByVisibility: returns [] for non-PM +``` + +### 13.2 Impact Matrix: Source Files + +| # | File | Category | `bob` Behavior | `advanced` Behavior | +|---|------|----------|----------------|---------------------| +| 1 | `.aios-core/core-config.yaml` | Config | `user_profile: bob` | `user_profile: advanced` | +| 2 | `.aios-core/development/scripts/greeting-builder.js` | Greeting | Redirects non-PM agents to @pm; hides role/status sections; returns empty commands for non-PM | Full contextual greeting with all sections and commands | +| 3 | `.aios-core/development/scripts/generate-greeting.js` | Greeting | Uses GreetingBuilder, same bob restrictions | Uses GreetingBuilder, full features | +| 4 | `.aios-core/development/scripts/greeting-preference-manager.js` | Greeting | Forces preference to `minimal` or `named`; overrides `auto`/`archetypal` | All 4 preferences available (`auto`, `minimal`, `named`, `archetypal`) | +| 5 | `.aios-core/infrastructure/scripts/validate-user-profile.js` | Validation | Validates `bob` as legal value; normalizes case | Validates `advanced` as legal value; normalizes case | +| 6 | `.aios-core/core/config/config-resolver.js` | Config | `toggleUserProfile()` switches bob<->advanced; L5 user layer has priority | Same toggle; resolveConfig merges layers | +| 7 | `.aios-core/core/config/migrate-config.js` | Config | Categorizes `user_profile` as USER_FIELD during migration | Same categorization | +| 8 | `.aios-core/core/config/schemas/user-config.schema.json` | Schema | `enum: ["bob", "advanced"]` validation | Same validation | +| 9 | 
`.aios-core/core/config/templates/user-config.yaml` | Template | Default template value: `bob` | N/A (template default is bob) | +| 10 | `.aios-core/development/agents/pm.md` | Agent | PM becomes sole orchestrator; bob mode session detection; orchestrates other agents internally | PM operates as normal PM with standard workflow | +| 11 | `packages/installer/src/wizard/questions.js` | Install | Presents bob/advanced choice during setup | Same prompt | +| 12 | `packages/installer/src/wizard/index.js` | Install | Writes `user_profile: bob`; idempotent on re-install | Writes `user_profile: advanced` | +| 13 | `packages/installer/src/wizard/i18n.js` | Install | Translated "Assisted Mode" text (en/pt/es) | Translated "Advanced Mode" text | +| 14 | `packages/installer/src/config/templates/core-config-template.js` | Install | Generates config with `user_profile: bob` | Generates config with `user_profile: advanced` | +| 15 | `packages/installer/src/config/configure-environment.js` | Install | Passes `userProfile: 'bob'` to config generation | Passes `userProfile: 'advanced'` | +| 16 | `packages/aios-install/src/installer.js` | Install | Sets `config.user_profile = 'bob'` in YAML | Sets `config.user_profile = 'advanced'` | +| 17 | `.aios-core/development/tasks/environment-bootstrap.md` | Task | Documents bob selection flow | Documents advanced selection flow | +| 18 | `docs/aios-workflows/bob-orchestrator-workflow.md` | Docs | Full bob orchestrator workflow documentation | N/A (bob-specific doc) | + +### 13.3 Impact Matrix: Agent Command Visibility + +In `bob` mode, non-PM agents return **empty command lists** (redirect to @pm shown instead). PM agent shows all commands normally. 
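That redirect rule, combined with the visibility filtering from Section 3.2, can be sketched as below. The names here are illustrative only — the real logic lives in `greeting-builder.js`'s `filterCommandsByVisibility()` — but the sketch captures the three documented behaviors: bob + non-PM returns empty, session type selects the visibility level, and missing metadata falls back to the first 12 commands:

```javascript
// Illustrative sketch of bob-mode command filtering (hypothetical helper name).
function visibleCommands(agentId, userProfile, commands, sessionType) {
  // bob mode: every agent except @pm shows no commands (redirect to @pm instead)
  if (userProfile === 'bob' && agentId !== 'pm') return [];

  // Session type maps to the visibility level from Section 3.2
  const level = { new: 'full', existing: 'quick', workflow: 'key' }[sessionType];

  // No visibility metadata on any command → fall back to the first 12
  const hasMetadata = commands.some((c) => Array.isArray(c.visibility));
  if (!hasMetadata) return commands.slice(0, 12);

  return commands
    .filter((c) => (c.visibility || []).includes(level))
    .slice(0, 12); // max 12 commands shown
}

console.log(visibleCommands('dev', 'bob', [{ name: 'help' }], 'new')); // []
```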
+ +| Agent | `key` Commands Count | Bob Mode Result | Advanced Mode (`new` session) | +|-------|---------------------|-----------------|-------------------------------| +| `@pm` | 4 (`help`, `status`, `run`, `exit`) | All commands shown (PM is primary interface) | Full visibility commands | +| `@dev` | 4 (`help`, `apply-qa-fixes`, `run-tests`, `exit`) | Empty (redirect to @pm) | Full visibility commands | +| `@qa` | 0 (no visibility metadata) | Empty (redirect to @pm) | Fallback: first 12 commands | +| `@architect` | 3 (`help`, `create-doc`, `exit`) | Empty (redirect to @pm) | Full visibility commands | +| `@po` | 4 (`help`, `validate`, `gotcha`, `gotchas`) | Empty (redirect to @pm) | Full visibility commands | +| `@sm` | 2 (`help`, `draft`) | Empty (redirect to @pm) | Full visibility commands | +| `@analyst` | 2 (`help`, `exit`) | Empty (redirect to @pm) | Full visibility commands | +| `@data-engineer` | 0 (no visibility metadata) | Empty (redirect to @pm) | Fallback: first 12 commands | +| `@devops` | 0 (no visibility metadata) | Empty (redirect to @pm) | Fallback: first 12 commands | +| `@ux-design-expert` | 0 (no visibility metadata) | Empty (redirect to @pm) | Fallback: first 12 commands | +| `@squad-creator` | 7 (most have `key`) | Empty (redirect to @pm) | Full visibility commands | +| `@aios-master` | 0 (uses string visibility) | Empty (redirect to @pm) | Fallback: first 12 commands | + +**Note:** Agents with 0 `key` commands (`qa`, `data-engineer`, `devops`, `ux-design-expert`, `aios-master`) lack `visibility` array metadata on their commands. In `advanced` mode `workflow` sessions, they fall back to showing first 12 commands. This is a known gap tracked for future improvement. + +### 13.4 Validation Pipeline Integration + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ user_profile Validation in Pipeline │ +├─────────────────────────────────────────────────────────────────┤ +│ │ +│ 1. 
INSTALLATION (packages/installer) │ +│ └─ wizard prompts for user_profile → writes to config │ +│ │ +│ 2. CONFIG RESOLUTION (core/config/config-resolver.js) │ +│ └─ resolveConfig() merges L1-L5 layers │ +│ └─ L5 (user-config.yaml) has highest priority │ +│ │ +│ 3. ACTIVATION PIPELINE (unified-activation-pipeline.js) ACT-6 │ +│ └─ UnifiedActivationPipeline.activate(agentId) │ +│ └─ loadUserProfile() calls resolveConfig() │ +│ └─ validateUserProfile() runs on resolved value │ +│ └─ Invalid values → warn + fallback to 'advanced' │ +│ └─ Valid value → passed to preference manager + greeting │ +│ │ +│ 4. GREETING BUILD (greeting-builder.js + preference-manager) │ +│ └─ bob: preference forced to named/minimal │ +│ └─ bob + non-PM: redirect message shown │ +│ └─ bob + PM: full contextual greeting │ +│ └─ advanced: normal greeting with all features │ +│ │ +└─────────────────────────────────────────────────────────────────┘ +``` + +--- + +*Traced from source on 2026-02-05 | Story AIOS-TRACE-001* +*Updated on 2026-02-06 | Story ACT-2 - user_profile impact matrix added* +*Updated on 2026-02-06 | Story ACT-6 - Unified Activation Pipeline (Path A/B merged)* +*Updated on 2026-02-06 | Story ACT-8 - Config governance: enriched pm, ux-design-expert, analyst, sm, squad-creator* + +``` + +================================================== +📄 docs/community.md +================================================== +```md +# Synkra AIOS Community + +> 🇧🇷 [Versão em Português](COMMUNITY-PT.md) + +Welcome to the Synkra AIOS community! + +We're building the future of AI-orchestrated development together. + +## Our Values + +- **Collaboration over competition** - We grow together +- **Inclusion** - Everyone is welcome regardless of experience level +- **Transparency** - Open discussions, open decisions +- **Quality** - We care about doing things right + +## Getting Started + +### First Steps + +1. Star the repository +2. Read the [README](README.md) +3. 
Set up your [development environment](CONTRIBUTING.md#getting-started) +4. Introduce yourself in [Discussions](https://github.com/SynkraAI/aios-core/discussions) + +### Find Your First Contribution + +- Look for issues labeled [`good-first-issue`](https://github.com/SynkraAI/aios-core/labels/good-first-issue) +- Check [`help-wanted`](https://github.com/SynkraAI/aios-core/labels/help-wanted) for more complex tasks +- Browse [open Discussions](https://github.com/SynkraAI/aios-core/discussions) to help others + +## Communication Channels + +### GitHub Discussions (Primary) + +Our main communication hub for all AIOS repositories: + +- **Announcements** - Project updates from maintainers +- **General** - General discussions about AIOS +- **Ideas** - Propose new features and improvements +- **Q&A** - Get help with technical questions +- **Show and Tell** - Share your projects using AIOS +- **Troubleshooting** - Get help with problems +- **Squads** - Discussions about AIOS Squads (modular agent teams) +- **MCP Ecosystem** - Discussions about MCP tools and integrations + +[Join the Discussion](https://github.com/SynkraAI/aios-core/discussions) + +### Issue Tracker + +For bug reports and feature requests: + +- [aios-core Issues](https://github.com/SynkraAI/aios-core/issues) - Core framework +- [aios-squads Issues](https://github.com/SynkraAI/aios-squads/issues) - AIOS Squads +- [mcp-ecosystem Issues](https://github.com/SynkraAI/mcp-ecosystem/issues) - MCP tools + +## How to Contribute + +### Code Contributions + +1. Fork the repository +2. Create a feature branch (`git checkout -b feature/amazing-feature`) +3. Make your changes +4. Run tests (`npm test`) +5. Submit a Pull Request + +See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines. 
+ +### Non-Code Contributions + +We value all types of contributions: + +- **Documentation** - Fix typos, improve explanations +- **Translation** - Help translate docs +- **Bug Reports** - Report issues you find +- **Ideas** - Share your thoughts on improvements +- **Design** - UI/UX suggestions +- **Advocacy** - Blog posts, talks, tutorials + +### Squads + +Create and share your own Squads (modular AI agent teams)! + +Squads are specialized teams of AI agents that work together on specific domains: +- **ETL Squad** - Data collection and transformation +- **Creator Squad** - Content generation + +See [docs/Squads.md](docs/Squads.md) for details on creating your own Squad. + +## Community Roles + +### Contributors + +Anyone who has contributed to AIOS in any way. +- Listed in our [Contributors page](https://github.com/SynkraAI/aios-core/graphs/contributors) +- Mentioned in release notes for significant contributions + +### Maintainers + +Core team members who review PRs and guide the project: + +- [@SynkraAI](https://github.com/SynkraAI) - Project Lead + +### Becoming a Maintainer + +Active contributors may be invited to become maintainers. We look for: +- Consistent quality contributions +- Helpful community interactions +- Understanding of project goals + +## Recognition + +### Contributors Wall + +All contributors are recognized in our [Contributors page](https://github.com/SynkraAI/aios-core/graphs/contributors). + +### Release Credits + +Significant contributions are credited in release notes. + +## Governance + +### Decision Making + +- **Minor decisions**: Maintainers can decide +- **Major decisions**: Discussed in GitHub Discussions +- **Breaking changes**: Require RFC process + +### RFC Process + +For significant changes: + +1. Open a Discussion with `[RFC]` prefix +2. Community provides feedback +3. Maintainers make final decision +4. Decision is documented + +### Code of Conduct + +We follow the [Contributor Covenant](CODE_OF_CONDUCT.md). 
Please read and respect it. + +Report violations to: conduct@SynkraAI.com + +## Getting Help + +### Stuck on something? + +1. Check the [Documentation](docs/) +2. Search [existing Discussions](https://github.com/SynkraAI/aios-core/discussions) +3. Ask in Q&A Discussions +4. Open a Troubleshooting discussion + +### Found a bug? + +1. Search [existing issues](https://github.com/SynkraAI/aios-core/issues) +2. If new, [open a bug report](https://github.com/SynkraAI/aios-core/issues/new?template=bug_report.md) + +### Have an idea? + +1. Check if it exists in [Ideas](https://github.com/SynkraAI/aios-core/discussions/categories/ideas) +2. If new, [share your idea](https://github.com/SynkraAI/aios-core/discussions/new?category=ideas) +3. Read our [Feature Request Process](docs/FEATURE_PROCESS.md) for detailed guidelines + +## Project Roadmap + +Want to know where AIOS is headed? Check out our public roadmap: + +- [ROADMAP.md](ROADMAP.md) - High-level vision and planned features +- [GitHub Project](https://github.com/orgs/SynkraAI/projects/1) - Detailed tracking board + +The roadmap is updated monthly and reflects community input. You can influence our direction by: + +1. **Voting** on ideas in [Discussions](https://github.com/SynkraAI/aios-core/discussions/categories/ideas) +2. **Proposing** new features via the [RFC process](/.github/RFC_TEMPLATE.md) +3. **Contributing** directly to planned features + +> Roadmap items are plans, not commitments. Priorities may shift based on community needs and technical constraints. + +### Feature Request Process + +We have a structured process for proposing new features: + +1. **Quick Ideas** - Open a Discussion in the "Ideas" category +2. **RFC Process** - For significant features, write an RFC using our [template](/.github/RFC_TEMPLATE.md) +3. **Community Voting** - Use :+1: reactions to show support +4. 
**Implementation** - Approved ideas move to our backlog + +See [Feature Request Process](docs/FEATURE_PROCESS.md) for complete details. + +## Resources + +### Learning AIOS + +- [Getting Started Guide](docs/getting-started.md) +- [Architecture Overview](docs/architecture.md) +- [User Guide](.aios-core/user-guide.md) + +### External Resources + +- [AIOS GitHub Organization](https://github.com/SynkraAI) +- [Changelog](CHANGELOG.md) + +## Internationalization + +We welcome contributions in all languages! + +- Documentation is primarily in English +- Community discussions can be in any language +- Portuguese (PT-BR) translations are appreciated + +## Project Status + +- Current Version: See [releases](https://github.com/SynkraAI/aios-core/releases) +- Changelog: [CHANGELOG.md](CHANGELOG.md) + +--- + +## Questions? + +Can't find what you're looking for? +Open a Discussion or reach out to the community! + +**Thank you for being part of the AIOS community!** + +--- + +*This document is maintained by the AIOS community.* +*Last updated: 2025-12-09* + +``` + +================================================== +📄 docs/CHANGELOG.md +================================================== +```md +# Changelog + +> 🌐 **EN** | [PT](./pt/CHANGELOG.md) | [ES](./es/CHANGELOG.md) + +--- + +All notable changes to Synkra AIOS will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 
+ +--- + +## [2.2.0] - 2026-01-29 + +### Added + +- **🤖 AIOS Autonomous Development Engine (ADE)**: Complete autonomous development system with 7 Epics: + - **Epic 1 - Worktree Manager**: Git worktree isolation for parallel story development + - **Epic 2 - Migration V2→V3**: autoClaude V3 format with capability flags + - **Epic 3 - Spec Pipeline**: Transform requirements into executable specifications + - **Epic 4 - Execution Engine**: 13-step subtask execution with mandatory self-critique + - **Epic 5 - Recovery System**: Automatic failure recovery with attempt tracking and rollback + - **Epic 6 - QA Evolution**: 10-phase structured review process + - **Epic 7 - Memory Layer**: Persistent memory for patterns, insights, and gotchas + +- **New Agent Commands**: + - `@devops`: `*create-worktree`, `*list-worktrees`, `*merge-worktree`, `*cleanup-worktrees`, `*inventory-assets`, `*analyze-paths`, `*migrate-agent`, `*migrate-batch` + - `@pm`: `*gather-requirements`, `*write-spec` + - `@architect`: `*assess-complexity`, `*create-plan`, `*create-context`, `*map-codebase` + - `@analyst`: `*research-deps`, `*extract-patterns` + - `@qa`: `*critique-spec`, `*review-build`, `*request-fix`, `*verify-fix` + - `@dev`: `*execute-subtask`, `*track-attempt`, `*rollback`, `*capture-insights`, `*list-gotchas`, `*apply-qa-fix` + +- **New Scripts**: + - `worktree-manager.js`, `story-worktree-hooks.js`, `project-status-loader.js` + - `asset-inventory.js`, `path-analyzer.js`, `migrate-agent.js` + - `subtask-verifier.js`, `plan-tracker.js` + - `recovery-tracker.js`, `approach-manager.js`, `rollback-manager.js`, `stuck-detector.js` + - `qa-loop-orchestrator.js`, `qa-report-generator.js` + - `codebase-mapper.js`, `pattern-extractor.js`, `gotchas-documenter.js` + +- **New Workflows**: + - `auto-worktree.yaml` - Automatic worktree creation for stories + - `spec-pipeline.yaml` - 5-phase specification pipeline + - `qa-loop.yaml` - QA review and fix loop + +- **New Tasks** (15+ new tasks for ADE): 
+ - Spec Pipeline: `spec-gather-requirements.md`, `spec-assess-complexity.md`, `spec-research-dependencies.md`, `spec-write-spec.md`, `spec-critique.md` + - Execution: `plan-create-implementation.md`, `plan-create-context.md`, `plan-execute-subtask.md` + - QA: `qa-review-build.md`, `qa-fix-issues.md`, `qa-structured-review.md` + - Memory: `capture-session-insights.md` + - Worktree: `worktree-create.md`, `worktree-list.md`, `worktree-merge.md` + +- **JSON Schemas**: + - `agent-v3-schema.json` - V3 agent definition validation + - `task-v3-schema.json` - V3 task definition validation + +- **Templates**: + - `spec-tmpl.md` - Specification document template + - `qa-report-tmpl.yaml` - QA report template + +- **Checklists**: + - `self-critique-checklist.md` - Mandatory self-critique for developers + +- **Documentation**: + - [ADE Complete Guide](guides/ade-guide.md) - Full tutorial + - [Epic 1-7 Handoffs](architecture/) - Technical handoffs (ADE-EPIC-1 through ADE-EPIC-7) + - [Agent Changes](architecture/ADE-AGENT-CHANGES.md) - All agent modifications with capability matrix + +### Changed + +- **Agent Format**: All 12 agents migrated to autoClaude V3 format with capability flags +- **Agent Sync**: All agents now synced between `.aios-core/development/agents/` and `.claude/commands/AIOS/agents/` + +### Fixed + +- Agent command registration for all ADE Epics +- Schema validation for V3 format + +--- + +## [2.1.0] - 2025-01-24 + +### Added + +- **Interactive Installation Wizard**: Step-by-step guided setup with component selection +- **Multi-IDE Support**: Added support for 4 IDEs (Claude Code, Cursor, Gemini CLI, GitHub Copilot) +- **Squads System**: Modular add-ons including HybridOps for ClickUp integration +- **Cross-Platform Testing**: Full test coverage for Windows, macOS, and Linux +- **Error Handling & Rollback**: Automatic rollback on installation failure with recovery suggestions +- **Agent Improvements**: + - Decision logging in yolo mode for `dev` agent + - 
Backlog management commands for `qa` agent + - CodeRabbit integration for automated code review + - Contextual greetings with project status +- **Documentation Suite**: + - Troubleshooting Guide with 23 documented issues + - FAQ with 22 Q&As + - Migration Guide v2.0 to v4.0.4 + +### Changed + +- **Directory Structure**: Renamed `.legacy-core/` to `.aios-core/` +- **Configuration Format**: Enhanced `core-config.yaml` with new sections for git, projectStatus, and sharding options +- **Agent Format**: Updated agent YAML schema with persona_profile, commands visibility, and whenToUse fields +- **IDE Configuration**: Claude Code agents moved to `.claude/commands/AIOS/agents/` +- **File Locations**: + - `docs/architecture/coding-standards.md` → `docs/framework/coding-standards.md` + - `docs/architecture/tech-stack.md` → `docs/framework/tech-stack.md` + - `.aios-core/utils/` → `.aios-core/scripts/` + +### Fixed + +- Installation failures on Windows with long paths +- PowerShell execution policy blocking scripts +- npm permission issues on Linux/macOS +- IDE configuration not applying after installation + +### Deprecated + +- Manual installation process (use `npx @synkra/aios-core install` instead) +- `.legacy-core/` directory name (automatically migrated) + +### Security + +- Added validation for installation directory to prevent system directory modifications +- Improved handling of environment variables and API keys + +--- + +## [2.0.0] - 2024-12-01 + +### Added + +- Initial public release of Synkra AIOS +- 11 specialized AI agents (dev, qa, architect, pm, po, sm, analyst, ux-expert, data-engineer, devops, db-sage) +- Task workflow system with 60+ pre-built tasks +- Template system with 20+ document templates +- Story-driven development methodology +- Basic Claude Code integration + +### Known Issues + +- Manual installation required (2-4 hours) +- Limited cross-platform support +- No interactive wizard + +--- + +## [1.0.0] - 2024-10-15 + +### Added + +- Initial 
internal release +- Core agent framework +- Basic task execution + +--- + +## Migration Notes + +### Upgrading from 2.0.x to 2.1.x + +**Quick upgrade:** + +```bash +npx @synkra/aios-core install --force-upgrade +``` + +**Key changes:** + +1. Directory renamed: `.legacy-core/` → `.aios-core/` +2. Update `core-config.yaml` with new fields +3. Re-run IDE configuration + +--- + +## Links + +- [Troubleshooting](./installation/troubleshooting.md) +- [FAQ](./installation/faq.md) +- [GitHub Repository](https://github.com/SynkraAI/aios-core) +- [Issue Tracker](https://github.com/SynkraAI/aios-core/issues) + +``` + +================================================== +📄 docs/troubleshooting.md +================================================== +```md +# Synkra AIOS Troubleshooting Guide + +> 🌐 **EN** | [PT](./pt/troubleshooting.md) | [ES](./es/troubleshooting.md) + +--- + +This comprehensive guide helps you diagnose and resolve common issues with Synkra AIOS. + +## Table of Contents + +1. [Quick Diagnostics](#quick-diagnostics) +2. [Installation Issues](#installation-issues) +3. [Meta-Agent Problems](#meta-agent-problems) +4. [Memory Layer Issues](#memory-layer-issues) +5. [Performance Problems](#performance-problems) +6. [API and Integration Issues](#api-and-integration-issues) +7. [Security and Permission Errors](#security-and-permission-errors) +8. [Platform-Specific Issues](#platform-specific-issues) +9. [Advanced Troubleshooting](#advanced-troubleshooting) +10. 
[Getting Help](#getting-help) + +## Quick Diagnostics + +### Run System Doctor + +Always start with the built-in diagnostics: + +```bash +# Basic diagnostic +npx @synkra/aios-core doctor + +# Auto-fix common issues +npx @synkra/aios-core doctor --fix + +# Verbose output +npx @synkra/aios-core doctor --verbose + +# Check specific component +npx @synkra/aios-core doctor --component memory-layer +``` + +### Common Quick Fixes + +```bash +# Clear all caches +*memory clear-cache + +# Rebuild memory index +*memory rebuild + +# Reset configuration +*config --reset + +# Update to latest version +npx @synkra/aios-core update +``` + +## Installation Issues + +### Issue: NPX command not found + +**Symptoms:** +``` +bash: npx: command not found +``` + +**Solution:** +```bash +# Check npm version +npm --version + +# If npm < 5.2, install npx globally +npm install -g npx + +# Or use npm directly +npm exec @synkra/aios-core init my-project +``` + +### Issue: Installation fails with permission errors + +**Symptoms:** +``` +Error: EACCES: permission denied +``` + +**Solutions:** + +**Option 1: Fix npm permissions (Recommended)** +```bash +# Create npm directory +mkdir ~/.npm-global + +# Configure npm +npm config set prefix '~/.npm-global' + +# Add to PATH (add to ~/.bashrc or ~/.zshrc) +export PATH=~/.npm-global/bin:$PATH + +# Reload shell +source ~/.bashrc +``` + +**Option 2: Use different directory** +```bash +# Install in user directory +cd ~ +npx @synkra/aios-core init my-project +``` + +### Issue: Node.js version error + +**Symptoms:** +``` +Error: Node.js version 18.0.0 or higher required +``` + +**Solution:** +```bash +# Check current version +node --version + +# Update Node.js +# macOS (using Homebrew) +brew upgrade node + +# Windows (using Chocolatey) +choco upgrade nodejs + +# Linux (using NodeSource) +curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash - +sudo apt-get install -y nodejs + +# Or use nvm (Node Version Manager) +nvm install 18 +nvm use 18 +``` + 
+### Issue: Installation hangs or times out + +**Symptoms:** +- Installation stuck at "Installing dependencies..." +- Network timeout errors + +**Solutions:** + +```bash +# Use different registry +npm config set registry https://registry.npmjs.org/ + +# Clear npm cache +npm cache clean --force + +# Increase timeout +npm config set fetch-timeout 60000 + +# Skip dependency installation +npx @synkra/aios-core init my-project --skip-install + +# Then install manually +cd my-project +npm install --verbose +``` + +### Issue: Disk space error + +**Symptoms:** +``` +Error: ENOSPC: no space left on device +``` + +**Solution:** +```bash +# Check available space +df -h + +# Clean npm cache +npm cache clean --force + +# Remove old node_modules +find . -name "node_modules" -type d -prune -exec rm -rf '{}' + + +# Clean temporary files +# macOS/Linux +rm -rf /tmp/npm-* + +# Windows +rmdir /s %TEMP%\npm-* +``` + +## Meta-Agent Problems + +### Issue: Meta-agent won't start + +**Symptoms:** +``` +Error: Failed to initialize meta-agent +``` + +**Solutions:** + +1. **Check configuration:** +```bash +# Verify config exists +ls -la .aios/config.json + +# Validate configuration +npx @synkra/aios-core doctor --component config + +# Reset if corrupted +rm .aios/config.json +npx @synkra/aios-core doctor --fix +``` + +2. **Check dependencies:** +```bash +# Reinstall core dependencies +npm install + +# Verify agent files +ls -la agents/ +``` + +3. **Check environment:** +```bash +# Verify environment variables +cat .env + +# Ensure API keys are set +echo "OPENAI_API_KEY=your-key" >> .env +``` + +### Issue: Commands not recognized + +**Symptoms:** +``` +Unknown command: *create-agent +``` + +**Solutions:** + +1. **Verify agent activation:** +```bash +# List active agents +*list-agents --active + +# Activate meta-agent +*activate meta-agent + +# Verify command availability +*help +``` + +2. 
**Check command syntax:** +```bash +# Correct syntax uses asterisk +*create-agent my-agent # ✓ Correct +create-agent my-agent # ✗ Wrong +``` + +3. **Reload agents:** +```bash +# Reload all agents +*reload-agents + +# Or restart meta-agent +exit +npx @synkra/aios-core +``` + +### Issue: Agent creation fails + +**Symptoms:** +``` +Error: Failed to create agent +``` + +**Solutions:** + +1. **Check permissions:** +```bash +# Verify write permissions +ls -la agents/ + +# Fix permissions +chmod 755 agents/ +``` + +2. **Validate agent name:** +```bash +# Valid names: lowercase, hyphens +*create-agent my-agent # ✓ Good +*create-agent MyAgent # ✗ Bad (uppercase) +*create-agent my_agent # ✗ Bad (underscore) +*create-agent my-agent-2 # ✓ Good +``` + +3. **Check for duplicates:** +```bash +# List existing agents +*list-agents + +# Remove duplicate if exists +rm agents/duplicate-agent.yaml +``` + +## Memory Layer Issues + +### Issue: Memory search returns no results + +**Symptoms:** +- Semantic search finds nothing +- Pattern recognition fails + +**Solutions:** + +1. **Rebuild memory index:** +```bash +# Clear and rebuild +*memory clear-cache +*memory rebuild --verbose + +# Wait for indexing +# Check progress +*memory status +``` + +2. **Verify memory configuration:** +```bash +# Check config +cat .aios/memory-config.json + +# Reset to defaults +*memory reset-config +``` + +3. **Check index integrity:** +```bash +# Run memory diagnostics +*memory diagnose + +# Repair if needed +*memory repair +``` + +### Issue: Memory layer using too much RAM + +**Symptoms:** +- High memory usage +- System slowdown + +**Solutions:** + +1. **Adjust memory settings:** +```javascript +// Edit .aios/memory-config.json +{ + "maxDocuments": 5000, // Reduce from 10000 + "chunkSize": 256, // Reduce from 512 + "cacheSize": 100, // Reduce from 1000 + "enableCompression": true // Enable compression +} +``` + +2. 
**Clear old data:** +```bash +# Remove old entries +*memory prune --older-than "30 days" + +# Optimize storage +*memory optimize +``` + +3. **Use memory limits:** +```bash +# Set memory limit +export NODE_OPTIONS="--max-old-space-size=1024" + +# Run with limited memory +npx @synkra/aios-core +``` + +### Issue: LlamaIndex errors + +**Symptoms:** +``` +Error: LlamaIndex initialization failed +``` + +**Solutions:** + +1. **Check API keys:** +```bash +# Verify OpenAI key for embeddings +echo $OPENAI_API_KEY + +# Test API access +curl https://api.openai.com/v1/models \ + -H "Authorization: Bearer $OPENAI_API_KEY" +``` + +2. **Use local embeddings:** +```javascript +// .aios/memory-config.json +{ + "embedModel": "local", + "localModelPath": "./models/embeddings" +} +``` + +3. **Reinstall LlamaIndex:** +```bash +npm uninstall llamaindex +npm install llamaindex@latest +``` + +## Performance Problems + +### Issue: Slow command execution + +**Symptoms:** +- Commands take > 5 seconds +- UI feels sluggish + +**Solutions:** + +1. **Profile performance:** +```bash +# Enable profiling +*debug enable --profile + +# Run slow command +*analyze-framework + +# View profile +*debug show-profile +``` + +2. **Optimize configuration:** +```javascript +// .aios/config.json +{ + "performance": { + "enableCache": true, + "parallelOperations": 4, + "lazyLoading": true, + "indexUpdateFrequency": "hourly" + } +} +``` + +3. **Clean up resources:** +```bash +# Clear caches +*cache clear --all + +# Remove unused agents +*cleanup-agents + +# Optimize database +*optimize-db +``` + +### Issue: High CPU usage + +**Symptoms:** +- Fan noise +- System lag +- High CPU in task manager + +**Solutions:** + +1. **Limit concurrent operations:** +```bash +# Set operation limits +*config --set performance.maxConcurrent 2 +*config --set performance.cpuThreshold 80 +``` + +2. 
**Disable real-time features:** +```bash +# Disable real-time indexing +*config --set memory.realTimeIndex false + +# Use batch processing +*config --set performance.batchMode true +``` + +3. **Check for runaway processes:** +```bash +# List all processes +*debug processes + +# Kill stuck process +*debug kill-process +``` + +## API and Integration Issues + +### Issue: API key not working + +**Symptoms:** +``` +Error: Invalid API key +Error: 401 Unauthorized +``` + +**Solutions:** + +1. **Verify API key format:** +```bash +# OpenAI +echo $OPENAI_API_KEY +# Should start with "sk-" + +# Anthropic +echo $ANTHROPIC_API_KEY +# Should start with "sk-ant-" +``` + +2. **Test API directly:** +```bash +# Test OpenAI +curl https://api.openai.com/v1/models \ + -H "Authorization: Bearer $OPENAI_API_KEY" + +# Test Anthropic +curl https://api.anthropic.com/v1/messages \ + -H "x-api-key: $ANTHROPIC_API_KEY" \ + -H "anthropic-version: 2023-06-01" +``` + +3. **Check rate limits:** +```bash +# View current usage +*api-status + +# Switch to different provider +*config --set ai.provider anthropic +``` + +### Issue: Network connection errors + +**Symptoms:** +``` +Error: ECONNREFUSED +Error: getaddrinfo ENOTFOUND +``` + +**Solutions:** + +1. **Check proxy settings:** +```bash +# Corporate proxy +export HTTP_PROXY=http://proxy.company.com:8080 +export HTTPS_PROXY=http://proxy.company.com:8080 + +# Test connection +curl -I https://api.openai.com +``` + +2. **Use offline mode:** +```bash +# Enable offline mode +*config --set offline true + +# Use local models +*config --set ai.provider local +``` + +3. **Configure timeouts:** +```bash +# Increase timeouts +*config --set network.timeout 30000 +*config --set network.retries 3 +``` + +## Security and Permission Errors + +### Issue: Permission denied errors + +**Symptoms:** +``` +Error: EACCES: permission denied +Error: Cannot write to file +``` + +**Solutions:** + +1. 
**Fix file permissions:** +```bash +# Fix project permissions +chmod -R 755 . +chmod 600 .env + +# Fix specific directories +chmod 755 agents/ tasks/ workflows/ +``` + +2. **Check file ownership:** +```bash +# View ownership +ls -la + +# Fix ownership (Linux/macOS) +sudo chown -R $(whoami) . +``` + +3. **Run with correct user:** +```bash +# Don't use sudo for npm +npm install # ✓ Good +sudo npm install # ✗ Bad +``` + +### Issue: Sensitive data exposed + +**Symptoms:** +- API keys visible in logs +- Credentials in error messages + +**Solutions:** + +1. **Secure environment variables:** +```bash +# Check .gitignore +cat .gitignore | grep .env + +# Add if missing +echo ".env" >> .gitignore +echo ".aios/logs/" >> .gitignore +``` + +2. **Enable secure mode:** +```bash +# Enable security features +*config --set security.maskSensitive true +*config --set security.secureLogging true +``` + +3. **Rotate compromised keys:** +```bash +# Generate new keys from providers +# Update .env file +# Clear logs +rm -rf .aios/logs/* +``` + +## Platform-Specific Issues + +### Windows Issues + +#### Issue: Path too long errors +``` +Error: ENAMETOOLONG +``` + +**Solution:** +```powershell +# Enable long paths (Run as Administrator) +New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" ` + -Name "LongPathsEnabled" -Value 1 -PropertyType DWORD -Force + +# Or use shorter paths +cd C:\ +npx @synkra/aios-core init myapp +``` + +#### Issue: Scripts disabled +``` +Error: Scripts is disabled on this system +``` + +**Solution:** +```powershell +# Run as Administrator +Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser +``` + +### macOS Issues + +#### Issue: Command Line Tools missing +``` +Error: xcrun: error: invalid active developer path +``` + +**Solution:** +```bash +# Install Xcode Command Line Tools +xcode-select --install +``` + +#### Issue: Gatekeeper blocks execution +``` +Error: "@synkra/aios-core" cannot be opened +``` + +**Solution:** +```bash 
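# Caution: `--master-disable` turns Gatekeeper off for the whole system;
# prefer the targeted `xattr` command below when it is sufficient.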
+# Allow execution +sudo spctl --master-disable + +# Or remove quarantine +xattr -d com.apple.quarantine /usr/local/bin/@synkra/aios-core +``` + +### Linux Issues + +#### Issue: Missing dependencies +``` +Error: libssl.so.1.1: cannot open shared object file +``` + +**Solution:** +```bash +# Ubuntu/Debian +sudo apt-get update +sudo apt-get install libssl-dev + +# RHEL/CentOS +sudo yum install openssl-devel + +# Arch +sudo pacman -S openssl +``` + +## Advanced Troubleshooting + +### Enable Debug Mode + +```bash +# Full debug output +export DEBUG=aios:* +npx @synkra/aios-core + +# Specific components +export DEBUG=aios:memory,aios:agent +``` + +### Analyze Logs + +```bash +# View recent logs +tail -f .aios/logs/aios.log + +# Search for errors +grep -i error .aios/logs/*.log + +# View structured logs +*logs --format json --level error +``` + +### Create Diagnostic Report + +```bash +# Generate full diagnostic +npx @synkra/aios-core doctor --report diagnostic.json + +# Include system info +npx @synkra/aios-core info --detailed >> diagnostic.json + +# Create support bundle +tar -czf aios-support.tar.gz .aios/logs diagnostic.json +``` + +### Performance Profiling + +```javascript +// Enable profiling in config +{ + "debug": { + "profiling": true, + "profileOutput": ".aios/profiles/" + } +} +``` + +```bash +# Analyze profile +*debug analyze-profile .aios/profiles/latest.cpuprofile +``` + +### Memory Dump Analysis + +```bash +# Create heap snapshot +*debug heap-snapshot + +# Analyze memory usage +*debug memory-report + +# Find memory leaks +*debug find-leaks +``` + +## Getting Help + +### Before Asking for Help + +1. **Run diagnostics:** + ```bash + npx @synkra/aios-core doctor --verbose > diagnostic.log + ``` + +2. **Collect information:** + - Node.js version: `node --version` + - NPM version: `npm --version` + - OS and version: `uname -a` or `ver` + - AIOS version: `npx @synkra/aios-core version` + +3. 
**Check existing issues:**
   - [GitHub Issues](https://github.com/SynkraAI/aios-core/issues)
   - [Discussions](https://github.com/SynkraAI/aios-core/discussions)

### Community Support

- **Discord**: [Join our server](https://discord.gg/gk8jAdXWmj)
  - `#help` - General help
  - `#bugs` - Bug reports
  - `#meta-agent` - Meta-agent specific

- **GitHub Discussions**: Technical questions and feature requests

- **Stack Overflow**: Tag questions with `aios-core`

### Reporting Bugs

Create detailed bug reports:

```markdown
## Environment
- OS: macOS 13.0
- Node: 18.17.0
- AIOS: 1.0.0

## Steps to Reproduce
1. Run `npx @synkra/aios-core init test`
2. Select "enterprise" template
3. Error occurs during installation

## Expected Behavior
Installation completes successfully

## Actual Behavior
Error: Cannot find module 'inquirer'

## Logs
[Attach diagnostic.log]

## Additional Context
Using corporate proxy
```

### Emergency Recovery

If all else fails:

```bash
# Backup current state
cp -r .aios .aios.backup

# Complete reset
rm -rf .aios node_modules package-lock.json
npm cache clean --force

# Fresh install
npm install
npx @synkra/aios-core doctor --fix

# Restore data if needed
cp .aios.backup/memory.db .aios/
```

---

**Remember**: Most issues can be resolved with:
1. `npx @synkra/aios-core doctor --fix`
2. Clearing caches
3. Updating to latest version
4. Checking permissions

When in doubt, the community is here to help! 🚀
```

==================================================
📄 docs/security-best-practices.md
==================================================
```md
# Synkra AIOS Security Best Practices

> 🌐 **EN** | [PT](./pt/security-best-practices.md) | [ES](./es/security-best-practices.md)

---

This guide provides comprehensive security recommendations for deploying and maintaining Synkra AIOS in production environments.
+ +## Table of Contents + +1. [Security Architecture Overview](#security-architecture-overview) +2. [Authentication & Authorization](#authentication--authorization) +3. [Input Validation & Sanitization](#input-validation--sanitization) +4. [Rate Limiting & DOS Protection](#rate-limiting--dos-protection) +5. [Secure Configuration](#secure-configuration) +6. [Data Protection](#data-protection) +7. [Logging & Monitoring](#logging--monitoring) +8. [Network Security](#network-security) +9. [Dependency Management](#dependency-management) +10. [Incident Response](#incident-response) + +## Security Architecture Overview + +Synkra AIOS implements a multi-layered security approach: + +``` +┌─────────────────────────────────────────┐ +│ Application Layer │ +├─────────────────────────────────────────┤ +│ Authentication Layer │ +├─────────────────────────────────────────┤ +│ Input Validation Layer │ +├─────────────────────────────────────────┤ +│ Rate Limiting Layer │ +├─────────────────────────────────────────┤ +│ Network Layer │ +└─────────────────────────────────────────┘ +``` + +### Core Security Modules + +- **InputSanitizer**: Prevents injection attacks and path traversal +- **AuthSystem**: JWT-based authentication with session management +- **RateLimiter**: DOS protection and abuse prevention +- **SecurityAudit**: Automated vulnerability scanning + +## Authentication & Authorization + +### Implementation + +```javascript +const AuthSystem = require('./security/auth'); + +const auth = new AuthSystem({ + secretKey: process.env.JWT_SECRET, + tokenExpiry: '1h', + refreshExpiry: '7d' +}); + +// Create user with strong password requirements +await auth.createUser({ + username: 'admin', + password: 'SecureP@ssw0rd123!', + email: 'admin@example.com', + role: 'admin' +}); +``` + +### Best Practices + +1. **Strong Password Policy** + - Minimum 12 characters + - Mix of uppercase, lowercase, numbers, symbols + - No dictionary words or personal information + +2. 
**Token Management** + - Short-lived access tokens (1 hour) + - Secure refresh token rotation + - Immediate revocation on logout + +3. **Session Security** + - Secure session storage + - Session timeout after inactivity + - Multi-session management + +4. **Account Protection** + - Account lockout after failed attempts + - Progressive delays on authentication failures + - Email notifications for security events + +### Configuration + +```env +# .env - Authentication settings +JWT_SECRET=your-super-secure-random-key-here +AUTH_TOKEN_EXPIRY=1h +AUTH_REFRESH_EXPIRY=7d +AUTH_MAX_LOGIN_ATTEMPTS=5 +AUTH_LOCKOUT_DURATION=15m +``` + +## Input Validation & Sanitization + +### Always Sanitize User Input + +```javascript +const InputSanitizer = require('./security/sanitizer'); + +// Path sanitization +const safePath = InputSanitizer.sanitizePath(userInput, basePath); + +// Project name validation +const safeProjectName = InputSanitizer.sanitizeProjectName(name); + +// Command sanitization +const safeCommand = InputSanitizer.sanitizeCommand(userCommand); + +// Configuration values +const safeValue = InputSanitizer.sanitizeConfigValue(value, 'string'); +``` + +### Validation Rules + +1. **Path Operations** + - Always use absolute paths + - Prevent directory traversal (../) + - Validate against allowed directories + - Check for suspicious patterns + +2. **Command Execution** + - Whitelist allowed characters + - Remove command separators (;, |, &) + - Limit command length + - Use parameterized execution + +3. 
**Configuration Data**
   - Type validation
   - Length restrictions
   - Pattern matching
   - Enum validation where applicable

### Common Vulnerabilities to Prevent

- **Path Traversal**: `../../../etc/passwd`
- **Command Injection**: `; rm -rf /`
- **SQL Injection**: `'; DROP TABLE users; --`
- **XSS**: `<script>alert('xss')</script>`
- **Prototype Pollution**: `{"__proto__": {"admin": true}}`

## Rate Limiting & DOS Protection

### Implementation

```javascript
const { RateLimiter, RateLimiters } = require('./security/rate-limiter');

// Different limiters for different operations
const apiLimiter = RateLimiters.createApiLimiter();
const authLimiter = RateLimiters.createAuthLimiter();
const metaAgentLimiter = RateLimiters.createMetaAgentLimiter();

// Check before operation
const identifier = RateLimiter.createIdentifier({
  ip: req.ip,
  userId: req.user?.id,
  operation: 'meta-agent'
});

const result = metaAgentLimiter.check(identifier);
if (!result.allowed) {
  throw new Error(`Rate limit exceeded. Retry after ${result.retryAfter} seconds`);
}
```

### Rate Limiting Strategy

| Operation | Window | Limit | Purpose |
|-----------|--------|-------|---------|
| API Calls | 15 min | 1000 | General API protection |
| Authentication | 15 min | 5 | Brute force prevention |
| Installation | 1 hour | 10 | Installation abuse prevention |
| Meta-Agent | 1 min | 30 | Resource protection |
| File Operations | 1 min | 100 | Filesystem protection |

### Configuration

```env
# Rate limiting settings
RATE_LIMIT_API_WINDOW=900000
RATE_LIMIT_API_MAX=1000
RATE_LIMIT_AUTH_WINDOW=900000
RATE_LIMIT_AUTH_MAX=5
RATE_LIMIT_INSTALL_WINDOW=3600000
RATE_LIMIT_INSTALL_MAX=10
```

## Secure Configuration

### Environment Variables

```env
# Required security settings
NODE_ENV=production
JWT_SECRET=your-256-bit-secret-key
DATABASE_ENCRYPTION_KEY=your-database-encryption-key
SESSION_SECRET=your-session-secret

# API Keys (never hardcode!)
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-your-anthropic-key

# Security headers
SECURITY_HEADERS_ENABLED=true
HELMET_ENABLED=true
CORS_ORIGIN=https://yourdomain.com

# Audit logging
AUDIT_LOG_ENABLED=true
AUDIT_LOG_LEVEL=info
AUDIT_LOG_FILE=/var/log/aios/audit.log
```

### File Permissions

```bash
# Secure file permissions
chmod 600 .env
chmod 600 .aios/config.json
chmod 600 .aios/users.json
chmod 600 .aios/sessions.json
chmod 700 .aios/
chmod 700 security/
```

### Configuration Validation

```javascript
// Validate critical configuration on startup
const requiredEnvVars = [
  'JWT_SECRET',
  'NODE_ENV'
];

for (const envVar of requiredEnvVars) {
  if (!process.env[envVar]) {
    throw new Error(`Missing required environment variable: ${envVar}`);
  }
}

// Validate JWT secret strength
if (process.env.JWT_SECRET.length < 32) {
  throw new Error('JWT_SECRET must be at least 32 characters long');
}
```

## Data Protection

### Encryption at Rest

```javascript
const crypto = require('crypto');

class DataEncryption {
  constructor(key) {
    this.key = key; // must be 32 bytes for aes-256-gcm
    this.algorithm = 'aes-256-gcm';
  }

  encrypt(text) {
    const iv = crypto.randomBytes(16);
    // GCM requires an explicit IV, so use createCipheriv
    // (the deprecated createCipher does not accept one).
    const cipher = crypto.createCipheriv(this.algorithm, this.key, iv);

    let encrypted = cipher.update(text, 'utf8', 'hex');
    encrypted += cipher.final('hex');

    const authTag = cipher.getAuthTag();

    return {
      encrypted,
      iv: iv.toString('hex'),
      authTag: authTag.toString('hex')
    };
  }

  decrypt(encryptedData) {
    const decipher = crypto.createDecipheriv(
      this.algorithm,
      this.key,
      Buffer.from(encryptedData.iv, 'hex')
    );

    decipher.setAuthTag(Buffer.from(encryptedData.authTag, 'hex'));

    let decrypted = decipher.update(encryptedData.encrypted, 'hex', 'utf8');
    decrypted += decipher.final('utf8');

    return decrypted;
  }
}
```

### Sensitive Data Handling

1. 
**API Keys** + - Store in environment variables only + - Never log or expose in error messages + - Rotate regularly + - Use separate keys for different environments + +2. **User Data** + - Hash passwords with bcrypt (salt rounds ≥ 12) + - Encrypt PII at rest + - Implement data retention policies + - Support data deletion requests + +3. **Session Data** + - Use secure session storage + - Implement session timeout + - Clear sessions on logout + - Monitor for session hijacking + +## Logging & Monitoring + +### Security Event Logging + +```javascript +const winston = require('winston'); + +const securityLogger = winston.createLogger({ + level: 'info', + format: winston.format.combine( + winston.format.timestamp(), + winston.format.json() + ), + transports: [ + new winston.transports.File({ + filename: 'logs/security.log', + level: 'warn' + }), + new winston.transports.File({ + filename: 'logs/audit.log' + }) + ] +}); + +// Log security events +securityLogger.warn('Authentication failed', { + username: req.body.username, + ip: req.ip, + userAgent: req.get('User-Agent'), + timestamp: new Date().toISOString() +}); +``` + +### Events to Monitor + +- Failed authentication attempts +- Rate limit violations +- Suspicious file access patterns +- Configuration changes +- Permission escalation attempts +- Unusual API usage patterns + +### Alerting Thresholds + +```javascript +const alertThresholds = { + failedLogins: 10, // per hour + rateLimitViolations: 50, // per hour + suspiciousFileAccess: 5, // per hour + configChanges: 1, // any change + errorRate: 0.05 // 5% error rate +}; +``` + +## Network Security + +### HTTPS Configuration + +```javascript +const https = require('https'); +const fs = require('fs'); + +const options = { + key: fs.readFileSync('path/to/private-key.pem'), + cert: fs.readFileSync('path/to/certificate.pem'), + // Security improvements + secureProtocol: 'TLSv1_2_method', + ciphers: 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384', + 
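  // Note: `secureProtocol` above is a legacy option; on current Node.js,
  // `minVersion: 'TLSv1.2'` is the supported way to enforce a TLS floor.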
honorCipherOrder: true +}; + +https.createServer(options, app).listen(443); +``` + +### Security Headers + +```javascript +const helmet = require('helmet'); + +app.use(helmet({ + contentSecurityPolicy: { + directives: { + defaultSrc: ["'self'"], + scriptSrc: ["'self'", "'unsafe-inline'"], + styleSrc: ["'self'", "'unsafe-inline'"], + imgSrc: ["'self'", "data:", "https:"] + } + }, + hsts: { + maxAge: 31536000, + includeSubDomains: true, + preload: true + } +})); +``` + +### CORS Configuration + +```javascript +const cors = require('cors'); + +app.use(cors({ + origin: process.env.CORS_ORIGIN || 'https://yourdomain.com', + credentials: true, + methods: ['GET', 'POST', 'PUT', 'DELETE'], + allowedHeaders: ['Content-Type', 'Authorization'] +})); +``` + +## Dependency Management + +### Security Scanning + +```bash +# Regular security audits +npm audit +npm audit fix + +# Using yarn +yarn audit +yarn audit fix + +# Advanced scanning with snyk +npx snyk test +npx snyk monitor +``` + +### Update Strategy + +```json +{ + "scripts": { + "security:audit": "npm audit", + "security:update": "npm update", + "security:check": "snyk test", + "security:monitor": "snyk monitor" + } +} +``` + +### Automated Dependency Updates + +```yaml +# .github/dependabot.yml +version: 2 +updates: + - package-ecosystem: "npm" + directory: "/" + schedule: + interval: "weekly" + open-pull-requests-limit: 5 + reviewers: + - "security-team" +``` + +## Incident Response + +### Response Procedures + +1. **Detection** + - Monitor security logs + - Set up automated alerts + - Regular security audits + +2. **Assessment** + - Determine scope and impact + - Identify affected systems + - Classify incident severity + +3. **Containment** + - Isolate affected systems + - Revoke compromised credentials + - Block malicious traffic + +4. **Recovery** + - Restore from clean backups + - Apply security patches + - Update security measures + +5. 
**Lessons Learned** + - Document incident details + - Update security procedures + - Improve monitoring + +### Emergency Contacts + +```javascript +// Emergency response configuration +const emergencyConfig = { + securityTeam: { + primary: 'security-lead@company.com', + backup: 'security-backup@company.com' + }, + escalation: { + level1: 'team-lead@company.com', + level2: 'engineering-manager@company.com', + level3: 'cto@company.com' + }, + externalContacts: { + hosting: 'support@hosting-provider.com', + security: 'security@security-vendor.com' + } +}; +``` + +## Security Checklist + +### Pre-Deployment + +- [ ] All security modules implemented +- [ ] Input sanitization in place +- [ ] Rate limiting configured +- [ ] Authentication system tested +- [ ] Security audit completed +- [ ] Penetration testing performed +- [ ] SSL/TLS certificates installed +- [ ] Security headers configured +- [ ] Logging and monitoring active +- [ ] Incident response plan ready + +### Post-Deployment + +- [ ] Regular security scans scheduled +- [ ] Dependency updates automated +- [ ] Log monitoring active +- [ ] Backup procedures tested +- [ ] Access controls reviewed +- [ ] Security training completed +- [ ] Documentation updated + +### Ongoing Maintenance + +- [ ] Weekly security log review +- [ ] Monthly dependency updates +- [ ] Quarterly security assessments +- [ ] Annual penetration testing +- [ ] Regular backup testing +- [ ] Security awareness training +- [ ] Incident response drills + +## Compliance & Standards + +### OWASP Top 10 Compliance + +1. **A01:2021 – Broken Access Control** ✅ Addressed by AuthSystem +2. **A02:2021 – Cryptographic Failures** ✅ Strong encryption used +3. **A03:2021 – Injection** ✅ Input sanitization implemented +4. **A04:2021 – Insecure Design** ✅ Security by design approach +5. **A05:2021 – Security Misconfiguration** ✅ Secure defaults +6. **A06:2021 – Vulnerable Components** ✅ Regular updates +7. 
**A07:2021 – Identity/Auth Failures** ✅ Robust auth system +8. **A08:2021 – Software/Data Integrity** ✅ Integrity checks +9. **A09:2021 – Logging/Monitoring Failures** ✅ Comprehensive logging +10. **A10:2021 – Server-Side Request Forgery** ✅ URL validation + +### Industry Standards + +- **ISO 27001** - Information security management +- **SOC 2** - Security, availability, and confidentiality +- **GDPR** - Data protection and privacy +- **HIPAA** - Healthcare data protection (if applicable) + +## Support and Resources + +### Documentation +- [OWASP Security Guide](https://owasp.org/www-project-top-ten/) +- [Node.js Security Best Practices](https://nodejs.org/en/docs/guides/security/) +- [Express Security Guide](https://expressjs.com/en/advanced/best-practice-security.html) + +### Tools +- [npm audit](https://docs.npmjs.com/cli/v6/commands/npm-audit) +- [Snyk](https://snyk.io/) +- [ESLint Security Plugin](https://github.com/nodesecurity/eslint-plugin-security) +- [Helmet.js](https://helmetjs.github.io/) + +### Training +- OWASP Security Training +- Node.js Security Certification +- Cloud Security Best Practices +- Incident Response Training + +--- + +**Remember**: Security is not a one-time implementation but an ongoing process. Regular reviews, updates, and improvements are essential for maintaining a secure system. + +For questions or security concerns, contact: security@synkra/aios-core.dev +``` + +================================================== +📄 docs/glossary.md +================================================== +```md +# AIOS Glossary + +Official terminology for AIOS 4.x differentiation. + +## Official Terms + +| Official Term | Definition | Use When | +| --- | --- | --- | +| `squad` | Group of specialized AI agents for one domain/workstream. | Describing domain bundles and reusable agent sets. | +| `flow-state` | Runtime-determined state of workflow progression and next action. | Referring to state-based orchestration behavior. 
| +| `confidence gate` | Delivery decision gate based on delivery confidence score/threshold. | Discussing merge/block decisions from confidence score. | +| `execution profile` | Risk-based autonomy profile (`safe`, `balanced`, `aggressive`). | Controlling agent autonomy by context/risk. | + +## Deprecated Terms + +| Deprecated | Replacement | Notes | +| --- | --- | --- | +| `expansion pack` | `squad` | Keep only in historical release notes. | +| `permission mode` | `execution profile` | Use in migration notes only with explicit replacement. | +| `workflow state` | `flow-state` | Prefer `flow-state` in product-facing docs. | + +## Semantic Lint Policy + +- Enforced by `scripts/semantic-lint.js`. +- Error-level terms block commits/CI. +- Warning-level terms are reported for gradual migration. + +``` + +================================================== +📄 docs/meta-agent-commands.md +================================================== +```md +# Meta-Agent Commands Reference + +> 🌐 **EN** | [PT](./pt/meta-agent-commands.md) | [ES](./es/meta-agent-commands.md) + +--- + +Complete reference guide for all Synkra AIOS meta-agent commands. + +## Table of Contents + +1. [Command Syntax](#command-syntax) +2. [Core Commands](#core-commands) +3. [Agent Management](#agent-management) +4. [Task Operations](#task-operations) +5. [Workflow Commands](#workflow-commands) +6. [Code Generation](#code-generation) +7. [Analysis & Improvement](#analysis--improvement) +8. [Memory Layer](#memory-layer) +9. [Self-Modification](#self-modification) +10. [System Commands](#system-commands) +11. 
[Advanced Commands](#advanced-commands) + +## Command Syntax + +All meta-agent commands follow this pattern: + +``` +*command-name [required-param] [--optional-flag value] +``` + +- Commands start with `*` (asterisk) +- Parameters in `[]` are required +- Flags start with `--` and may have values +- Multiple flags can be combined + +### Examples + +```bash +*create-agent my-agent +*analyze-code src/app.js --depth full +*generate-tests --type unit --coverage 80 +``` + +## Core Commands + +### *help + +Display all available commands or get help for a specific command. + +```bash +*help # Show all commands +*help create-agent # Help for a specific command +*help --category agents # Commands by category +``` + +### *status + +Show current system status and active agents. + +```bash +*status # Basic status +*status --detailed # Detailed system information +*status --health # Health check results +``` + +### *config + +View or modify configuration. + +```bash +*config # View current config +*config --set ai.model gpt-4 # Set config value +*config --reset # Reset to defaults +*config --export # Export configuration +``` + +### *version + +Display version information. + +```bash +*version # Current version +*version --check-update # Check for updates +*version --changelog # Show changelog +``` + +## Agent Management + +### *create-agent + +Create a new AI agent. + +```bash +*create-agent [agent-name] [options] + +Options: + --type Agent type: assistant, analyzer, generator, specialist + --template Use template: basic, advanced, custom + --capabilities Interactive capability builder + --from-file Create from YAML definition + +Examples: +*create-agent code-reviewer --type analyzer +*create-agent api-builder --template advanced +*create-agent custom-bot --from-file agents/template.yaml +``` + +### *list-agents + +List all available agents.
+ +```bash +*list-agents # List all agents +*list-agents --active # Only active agents +*list-agents --type analyzer # Filter by type +*list-agents --detailed # Show full details +``` + +### *activate + +Activate an agent for use. + +```bash +*activate [agent-name] # Activate single agent +*activate agent1 agent2 # Activate multiple +*activate --all # Activate all agents +*activate --type assistant # Activate by type +``` + +### *deactivate + +Deactivate an agent. + +```bash +*deactivate [agent-name] # Deactivate single agent +*deactivate --all # Deactivate all agents +*deactivate --except agent1 # Deactivate all except specified +``` + +### *modify-agent + +Modify existing agent configuration. + +```bash +*modify-agent [agent-name] [options] + +Options: + --add-capability Add new capability + --remove-capability Remove capability + --update-instructions Update instructions + --version Update version + --interactive Interactive modification + +Examples: +*modify-agent helper --add-capability translate +*modify-agent analyzer --update-instructions +*modify-agent bot --interactive +``` + +### *delete-agent + +Remove an agent (with confirmation). + +```bash +*delete-agent [agent-name] # Delete single agent +*delete-agent [agent-name] --force # Skip confirmation +*delete-agent [agent-name] --backup # Create backup before deletion +``` + +### *clone-agent + +Create a copy of an existing agent. + +```bash +*clone-agent [source] [new-name] # Basic clone +*clone-agent bot bot-v2 --modify # Clone and modify +``` + +## Task Operations + +### *create-task + +Create a new reusable task. + +```bash +*create-task [task-name] [options] + +Options: + --type Task type: command, automation, analysis + --description Task description + --parameters Define parameters interactively + --template Use task template + +Examples: +*create-task validate-input --type command +*create-task daily-backup --type automation +*create-task code-metrics --template analyzer +``` + +### *list-tasks + +List available tasks.
+ +```bash +*list-tasks # List all tasks +*list-tasks --type automation # Filter by type +*list-tasks --recent # Recently used tasks +*list-tasks --search [query] # Search tasks +``` + +### *run-task + +Execute a specific task. + +```bash +*run-task [task-name] [params] + +Examples: +*run-task validate-input --data "user input" +*run-task generate-report --format pdf +*run-task backup-database --incremental +``` + +### *schedule-task + +Schedule task execution. + +```bash +*schedule-task [task-name] + +Schedule formats: + --cron "0 0 * * *" Cron expression + --every "1 hour" Interval + --at "14:30" Specific time + --on "monday,friday" Specific days + +Examples: +*schedule-task cleanup --cron "0 2 * * *" +*schedule-task report --every "6 hours" +*schedule-task backup --at "03:00" --on "sunday" +``` + +### *modify-task + +Update task configuration. + +```bash +*modify-task [task-name] [options] + +Options: + --add-param Add parameter + --update-logic Update implementation + --change-type Change task type + --rename Rename task +``` + +## Workflow Commands + +### *create-workflow + +Create an automated workflow. + +```bash +*create-workflow [workflow-name] [options] + +Options: + --steps Interactive step builder + --trigger Trigger type: manual, schedule, event + --template Use workflow template + --from-file Import from YAML + +Examples: +*create-workflow ci-pipeline --trigger push +*create-workflow daily-tasks --trigger "schedule:0 9 * * *" +*create-workflow deployment --template standard-deploy +``` + +### *list-workflows + +Display available workflows. + +```bash +*list-workflows # All workflows +*list-workflows --active # Currently running +*list-workflows --scheduled # Scheduled workflows +*list-workflows --failed # Failed executions +``` + +### *run-workflow + +Execute a workflow.
+ +```bash +*run-workflow [workflow-name] [options] + +Options: + --params Workflow parameters + --skip-steps Skip specific steps + --dry-run Preview without execution + --force Force run even if running + +Examples: +*run-workflow deploy --params '{"env":"staging"}' +*run-workflow backup --skip-steps "upload" +*run-workflow test-suite --dry-run +``` + +### *stop-workflow + +Stop a running workflow. + +```bash +*stop-workflow [workflow-name] # Stop specific workflow +*stop-workflow --all # Stop all workflows +*stop-workflow --force # Force stop +``` + +### *workflow-status + +Check workflow execution status. + +```bash +*workflow-status [workflow-name] # Single workflow status +*workflow-status --all # All workflow statuses +*workflow-status --history # Execution history +``` + +## Code Generation + +### *generate-component + +Generate new components with AI assistance. + +```bash +*generate-component [component-name] [options] + +Options: + --type Component type: react, vue, angular, web-component + --features Component features + --style Styling: css, scss, styled-components + --tests Generate tests + --storybook Generate Storybook stories + --template Use component template + +Examples: +*generate-component UserProfile --type react --features "avatar,bio,stats" +*generate-component DataTable --type vue --tests --storybook +*generate-component CustomButton --template material-ui +``` + +### *generate-api + +Generate API endpoints. + +```bash +*generate-api [resource-name] [options] + +Options: + --operations CRUD operations: create,read,update,delete + --auth Add authentication + --validation Add input validation + --docs Generate API documentation + --tests Generate API tests + --database Database type: postgres, mongodb, mysql + +Examples: +*generate-api users --operations crud --auth --validation +*generate-api products --database mongodb --docs +*generate-api analytics --operations "read" --tests +``` + +### *generate-tests + +Generate test suites.
+ +```bash +*generate-tests [target] [options] + +Options: + --type Test type: unit, integration, e2e + --framework Test framework: jest, mocha, cypress + --coverage Target coverage percentage + --mocks Generate mock data + --fixtures Generate test fixtures + +Examples: +*generate-tests src/utils/ --type unit --coverage 90 +*generate-tests src/api/ --type integration --mocks +*generate-tests --type e2e --framework cypress +``` + +### *generate-documentation + +Generate documentation. + +```bash +*generate-documentation [target] [options] + +Options: + --format Format: markdown, html, pdf + --type Doc type: api, user-guide, technical + --include-examples Add code examples + --diagrams Generate diagrams + --toc Generate table of contents + +Examples: +*generate-documentation src/ --type api --format markdown +*generate-documentation --type user-guide --include-examples +*generate-documentation components/ --diagrams --toc +``` + +## Analysis & Improvement + +### *analyze-framework + +Analyze the entire codebase. + +```bash +*analyze-framework [options] + +Options: + --depth Analysis depth: surface, standard, deep + --focus Focus areas: performance, security, quality + --report-format Format: console, json, html + --save-report Save analysis report + --compare-previous Compare with previous analysis + +Examples: +*analyze-framework --depth deep +*analyze-framework --focus "performance,security" +*analyze-framework --save-report reports/analysis.json +``` + +### *analyze-code + +Analyze specific code files. + +```bash +*analyze-code [target] [options] + +Options: + --metrics Show code metrics + --complexity Analyze complexity + --dependencies Analyze dependencies + --suggestions Get improvement suggestions + --security Security analysis + +Examples: +*analyze-code src/app.js --metrics --complexity +*analyze-code src/api/ --security --suggestions +*analyze-code package.json --dependencies +``` + +### *improve-code-quality + +Improve code quality with AI assistance.
+ +```bash +*improve-code-quality [target] [options] + +Options: + --focus Focus: readability, performance, maintainability + --refactor-level Level: minor, moderate, major + --preserve-logic Don't change functionality + --add-comments Add explanatory comments + --fix-eslint Fix linting issues + +Examples: +*improve-code-quality src/utils.js --focus readability +*improve-code-quality src/legacy/ --refactor-level major +*improve-code-quality src/api.js --fix-eslint --add-comments +``` + +### *suggest-refactoring + +Get refactoring suggestions. + +```bash +*suggest-refactoring [target] [options] + +Options: + --type Refactoring type: extract, inline, rename + --scope Scope: function, class, module, project + --impact-analysis Show impact of changes + --preview Preview changes + --auto-apply Apply suggestions automatically + +Examples: +*suggest-refactoring src/helpers.js --type extract +*suggest-refactoring src/models/ --scope module +*suggest-refactoring src/app.js --preview --impact-analysis +``` + +### *detect-patterns + +Detect code patterns and anti-patterns. + +```bash +*detect-patterns [path] [options] + +Options: + --patterns Specific patterns to detect + --anti-patterns Focus on anti-patterns + --suggest-fixes Suggest pattern improvements + --severity Minimum severity: low, medium, high + +Examples: +*detect-patterns --anti-patterns --suggest-fixes +*detect-patterns src/ --patterns "singleton,factory" +*detect-patterns --severity high +``` + +## Memory Layer + +### *memory + +Memory layer operations. + +```bash +*memory [operation] [options] + +Operations: + status Show memory layer status + search Semantic search + rebuild Rebuild memory index + clear-cache Clear memory cache + optimize Optimize memory performance + export Export memory data + import Import memory data + +Examples: +*memory status +*memory search "authentication flow" +*memory rebuild --verbose +*memory optimize --aggressive +``` + +### *learn + +Learn from code changes and patterns.
+ +```bash +*learn [options] + +Options: + --from Source: recent-changes, commits, patterns + --period