
feat(execution): Predictive Pipeline — intelligent outcome prediction #589

Open
nikolasdehor wants to merge 1 commit into SynkraAI:main from
nikolasdehor:feat/predictive-pipeline

Conversation

@nikolasdehor
Contributor

@nikolasdehor nikolasdehor commented Mar 12, 2026

Summary

  • Module that predicts task outcomes before execution
  • k-NN with cosine similarity to find similar past executions
  • EWMA for duration estimation, risk assessment, and a recommendation engine
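As a rough illustration of the two techniques named above (a sketch, not the PR's actual code; all names are illustrative):

```javascript
// Cosine similarity between two equal-length feature vectors.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  if (normA === 0 || normB === 0) return 0; // a zero vector has no direction
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Exponentially weighted moving average over duration samples:
// recent samples dominate as alpha approaches 1.
function ewma(samples, alpha = 0.3) {
  return samples.reduce((acc, x) => alpha * x + (1 - alpha) * acc);
}
```

For example, `ewma([10, 20], 0.3)` yields `0.3 * 20 + 0.7 * 10 = 13`.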

Tests

  • 89 unit tests passing

Reopens PR #575 (closed accidentally)

Summary by CodeRabbit

  • New Features
    • Introduced a predictive pipeline system for intelligent task execution featuring k-nearest neighbors matching, weighted prediction capabilities, and anomaly detection.
    • Added batch prediction and similarity search functionality for pattern analysis.
    • Included risk assessment framework with confidence scoring and agent/strategy recommendations.
    • Added event-based observability for monitoring pipeline activities.

…ados de tasks

Predictive pipeline that estimates outcomes before execution using
historical patterns. Implements weighted k-NN over feature vectors,
EWMA for duration estimation, anomaly detection, risk assessment, and
an agent/strategy recommendation engine.

89 unit tests covering all scenarios.
@vercel

vercel bot commented Mar 12, 2026

@nikolasdehor is attempting to deploy a commit to Pedro Valério Lopez's projects Team on Vercel.

A member of the Team first needs to authorize it.

@github-actions github-actions bot added the following labels on Mar 12, 2026:
  • area: agents (Agent system related)
  • area: workflows (Workflow system related)
  • squad
  • mcp
  • type: test (Test coverage and quality)
  • area: core (Core framework (.aios-core/core/))
  • area: installer (Installer and setup (packages/installer/))
  • area: synapse (SYNAPSE context engine)
  • area: cli (CLI tools (bin/, packages/aios-pro-cli/))
  • area: pro (Pro features (pro/))
  • area: health-check (Health check system)
  • area: docs (Documentation (docs/))
  • area: devops (CI/CD, GitHub Actions (.github/))
@coderabbitai

coderabbitai bot commented Mar 12, 2026

Walkthrough

Introduces a new PredictivePipeline class that implements a k-NN-based predictive system for task execution with multi-stage orchestration, persistent data storage, risk assessment, and event-based observability. Includes a backward-compatible wrapper module and comprehensive test suite covering all public APIs and edge cases.

Changes

Cohort / File(s) Summary
Core Predictive Pipeline Implementation
.aiox-core/core/execution/predictive-pipeline.js
Adds 1264-line PredictivePipeline class with k-NN matching, five-stage pipeline (preprocess, match, predict, score, recommend), lazy-loading persistence layer with serialized write chains, outcome recording with auto-pruning, feature extraction/similarity computation, risk assessment framework, batch prediction, and event-based observability (prediction, anomaly-detected, high-risk-detected, model-retrained, outcome-recorded).
Backward-Compatibility Wrapper
.aios-core/core/execution/predictive-pipeline.js
Adds 2-line re-export module that preserves existing import paths by delegating to canonical implementation at .aiox-core/core/execution/predictive-pipeline.js.
Installation Manifest Update
.aiox-core/install-manifest.yaml
Updates manifest entries: adds core/execution/predictive-pipeline.js file entry, removes development/tasks/review-prs.md entry, adjusts template sizes and associated metadata across multiple entries.
Test Suite
tests/core/execution/predictive-pipeline.test.js
Adds 1137-line comprehensive test suite validating: constants exposure, constructor behavior, outcome recording with validation/persistence/pruning, prediction with anomaly/risk detection, batch prediction, similarity search, pattern strength, risk assessment, agent/strategy recommendations, stage metrics aggregation, model accuracy/retraining, persistence across restarts, confidence scoring, and utility functions (EWMA, similarity metrics, deep cloning).

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Pipeline as PredictivePipeline
    participant Preprocess
    participant Match
    participant Predict
    participant Score
    participant Recommend
    participant Storage as Persistence Layer
    participant Emitter as EventEmitter

    Client->>Pipeline: predict(taskType, complexity, ...)
    Pipeline->>Preprocess: extractFeatures()
    Preprocess-->>Pipeline: normalized features

    Pipeline->>Storage: loadPersistedOutcomes()
    Storage-->>Pipeline: historical outcomes

    Pipeline->>Match: findKNearestNeighbors()
    Match->>Match: compute similarity scores
    Match-->>Pipeline: k similar tasks

    Pipeline->>Predict: computeWeightedPrediction()
    Predict->>Predict: aggregate success probability<br/>duration EWMA, resources
    Predict-->>Pipeline: prediction data

    Pipeline->>Score: validateConfidence()<br/>detectAnomalies()
    Score->>Score: assess risk factors
    Score-->>Pipeline: confidence + risk level

    Pipeline->>Recommend: selectBestAgent()<br/>selectBestStrategy()
    Recommend-->>Pipeline: recommendations

    Pipeline->>Emitter: emit('prediction', result)
    Emitter-->>Client: observable event

    Client->>Pipeline: recordOutcome(result)
    Pipeline->>Storage: persist outcome + stats
    Storage-->>Pipeline: write complete
    Pipeline->>Emitter: emit('outcome-recorded')
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately describes the main change: introducing a Predictive Pipeline module for intelligent outcome prediction before execution.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (1)
tests/core/execution/predictive-pipeline.test.js (1)

12-17: Please cover the .aios-core compatibility path with one smoke test.

Every new test loads the canonical .aiox-core module directly, so .aios-core/core/execution/predictive-pipeline.js can regress without any red signal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/execution/predictive-pipeline.test.js` around lines 12 - 17, Add a
smoke test to cover the .aios-core compatibility path by requiring the same
exported symbols from the alternate module path and exercising a minimal API
call: import PredictivePipeline, PipelineStage, RiskLevel, DEFAULTS from
'../../../.aios-core/core/execution/predictive-pipeline' (mirroring the existing
import), instantiate a PredictivePipeline (or call a small method) and assert
that the key exports exist and behave as expected (e.g., typeof
PredictivePipeline === 'function', PipelineStage/RiskLevel enums present,
DEFAULTS has expected keys) to ensure the compatibility entrypoint doesn't
regress.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e628c9ab-e22c-4de9-a73c-a89bd2d0f9f3

📥 Commits

Reviewing files that changed from the base of the PR and between f74e3e7 and e473e86.

📒 Files selected for processing (4)
  • .aios-core/core/execution/predictive-pipeline.js
  • .aiox-core/core/execution/predictive-pipeline.js
  • .aiox-core/install-manifest.yaml
  • tests/core/execution/predictive-pipeline.test.js

Comment on lines +1 to +2
// Retrocompatible wrapper — canonical source in .aiox-core/
module.exports = require('../../../.aiox-core/core/execution/predictive-pipeline');

⚠️ Potential issue | 🟡 Minor

Use the repo's absolute import path in the compat shim.

This wrapper is a long-lived compatibility entry point, so the ../../../.aiox-core/... hop is brittle and breaks the project import rule.

As per coding guidelines, "**/*.{js,jsx,ts,tsx}: Use absolute imports instead of relative imports in all code."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aios-core/core/execution/predictive-pipeline.js around lines 1 - 2, Replace
the brittle relative require in the compat shim with the repo's absolute import
path: update the line in predictive-pipeline.js that currently does
module.exports =
require('../../../.aiox-core/core/execution/predictive-pipeline'); to use the
absolute import (for example module.exports =
require('.aiox-core/core/execution/predictive-pipeline') or the project's
configured package alias), leaving module.exports intact so the file remains a
retrocompat wrapper.

Comment on lines +71 to +81
constructor(projectRoot, options = {}) {
  super();

  this.projectRoot = projectRoot ?? process.cwd();
  this.kNeighbors = options.kNeighbors ?? DEFAULTS.kNeighbors;
  this.minSamplesForPrediction = options.minSamplesForPrediction ?? DEFAULTS.minSamplesForPrediction;
  this.anomalyThreshold = options.anomalyThreshold ?? DEFAULTS.anomalyThreshold;
  this.ewmaAlpha = options.ewmaAlpha ?? DEFAULTS.ewmaAlpha;
  this.highRiskThreshold = options.highRiskThreshold ?? DEFAULTS.highRiskThreshold;
  this.maxOutcomes = options.maxOutcomes ?? DEFAULTS.maxOutcomes;
  this.confidenceSampleCap = options.confidenceSampleCap ?? DEFAULTS.confidenceSampleCap;

🛠️ Refactor suggestion | 🟠 Major

Validate constructor options before storing them.

This public entry point accepts any numeric values today. Values like kNeighbors < 0, confidenceSampleCap <= 0, or weights outside [0, 1] silently degrade the model instead of failing fast.

As per coding guidelines, "Check for proper input validation on public API methods."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/execution/predictive-pipeline.js around lines 71 - 81, The
constructor currently assigns numeric options directly (kNeighbors,
minSamplesForPrediction, anomalyThreshold, ewmaAlpha, highRiskThreshold,
maxOutcomes, confidenceSampleCap) without validation; add input validation in
the constructor to verify types and ranges (e.g., kNeighbors and
minSamplesForPrediction and maxOutcomes and confidenceSampleCap are positive
integers >=1, anomalyThreshold/ewmaAlpha/highRiskThreshold are numbers in [0,1])
and throw a clear Error if any check fails so invalid values fail-fast; retain
use of DEFAULTS when an option is undefined but reject out-of-range or
non-numeric values before storing to the instance fields.
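A minimal sketch of the fail-fast validation this prompt describes (helper names are illustrative, not from the PR):

```javascript
// Throws on non-integers and values below 1; falls back to the default
// only when the option is undefined or null.
function positiveInt(name, value, fallback) {
  const v = value ?? fallback;
  if (!Number.isInteger(v) || v < 1) {
    throw new Error(`${name} must be a positive integer >= 1, got ${value}`);
  }
  return v;
}

// Throws on non-numbers and values outside [0, 1].
function unitInterval(name, value, fallback) {
  const v = value ?? fallback;
  if (typeof v !== 'number' || Number.isNaN(v) || v < 0 || v > 1) {
    throw new Error(`${name} must be a number in [0, 1], got ${value}`);
  }
  return v;
}

// Each constructor assignment would then become, e.g.:
//   this.kNeighbors = positiveInt('kNeighbors', options.kNeighbors, DEFAULTS.kNeighbors);
//   this.ewmaAlpha = unitInterval('ewmaAlpha', options.ewmaAlpha, DEFAULTS.ewmaAlpha);
```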

Comment on lines +129 to +150
_loadSync() {
  try {
    if (fs.existsSync(this._outcomesPath)) {
      const raw = fs.readFileSync(this._outcomesPath, 'utf8');
      const parsed = JSON.parse(raw);
      this._outcomes = Array.isArray(parsed) ? parsed : [];
    }
  } catch {
    this._outcomes = [];
  }

  try {
    if (fs.existsSync(this._modelPath)) {
      const raw = fs.readFileSync(this._modelPath, 'utf8');
      const parsed = JSON.parse(raw);
      if (parsed && typeof parsed === 'object') {
        this._model = { ...this._emptyModel(), ...parsed };
      }
    }
  } catch {
    this._model = this._emptyModel();
  }

⚠️ Potential issue | 🟠 Major

Differentiate parse recovery from real I/O failures.

These bare catch blocks reset to empty state for every error. Malformed JSON may be recoverable, but permission/read errors should surface with path context; otherwise the next write can overwrite valid persisted history.

As per coding guidelines, "Verify error handling is comprehensive with proper try/catch and error context."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/execution/predictive-pipeline.js around lines 129 - 150, The
_loadSync method currently swallows all errors when reading/parsing
this._outcomesPath and this._modelPath; change the catch logic in both
read/parse blocks (inside _loadSync) to distinguish JSON parse errors from real
I/O errors: if the thrown error is a SyntaxError (or JSON parse failure) treat
it as recoverable and reset this._outcomes or this._model (via
this._emptyModel()), but for other errors (ENOENT aside if you want missing-file
treated as empty) rethrow or throw a new Error that includes the path
(this._outcomesPath or this._modelPath) and the original error info so
permission/read errors surface with context rather than being silently ignored.

Comment on lines +177 to +181
_enqueueWrite(writeFn) {
  this._writeChain = this._writeChain.then(() => writeFn()).catch((err) => {
    this._emitSafeError({ type: 'persistence', error: err });
  });
  return this._writeChain;

⚠️ Potential issue | 🟠 Major

Propagate persistence failures back to callers.

Line 178 converts a failed write into a resolved promise. recordOutcome(), retrain(), and prune() can therefore report success even when nothing reached disk.

🛠️ Suggested fix
  _enqueueWrite(writeFn) {
-    this._writeChain = this._writeChain.then(() => writeFn()).catch((err) => {
-      this._emitSafeError({ type: 'persistence', error: err });
-    });
-    return this._writeChain;
+    const writePromise = this._writeChain.then(() => writeFn());
+    this._writeChain = writePromise.catch((err) => {
+      this._emitSafeError({ type: 'persistence', error: err });
+    });
+    return writePromise;
  }
As per coding guidelines, "Verify error handling is comprehensive with proper try/catch and error context."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/execution/predictive-pipeline.js around lines 177 - 181, The
_enqueueWrite method swallows write errors by catching them and not propagating
them, causing callers like recordOutcome(), retrain(), and prune() to observe
success even when persistence failed; change the catch handler on
this._writeChain so after calling this._emitSafeError({ type: 'persistence',
error: err }) it rethrows the error (or returns a rejected promise) so the
returned promise remains rejected and callers receive the failure; update
_enqueueWrite (and any similar write-queue logic using _writeChain) to propagate
the error instead of resolving it.

Comment on lines +305 to +312
// Auto-prune if exceeding max
if (this._outcomes.length > this.maxOutcomes) {
const excess = this._outcomes.length - this.maxOutcomes;
this._outcomes.splice(0, excess);
}

await this._persistOutcomes();
await this._persistModel();

⚠️ Potential issue | 🟠 Major

Keep _model consistent when auto-pruning.

This branch drops records from _outcomes after their stats were already accumulated. After the first overflow, getModelAccuracy(), assessRisk(), recommendations, and persisted model.json are all overstated until a later retrain() or explicit prune().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/execution/predictive-pipeline.js around lines 305 - 312,
When auto-pruning _outcomes to enforce maxOutcomes, also update the in-memory
_model so its aggregated stats remain consistent (currently splicing _outcomes
then persisting causes getModelAccuracy(), assessRisk(), recommendations and
model.json to remain overstated). Fix by applying the same pruning logic to
_model before calling _persistModel(): either call the existing prune() routine
(or a new helper like _updateModelOnRemove/ _recalculateModel) after computing
excess and before persisting, or iterate the removed outcome entries and
decrement/remove their contributions from _model so that _model, _outcomes,
_persistOutcomes, and _persistModel remain in sync (affecting methods
getModelAccuracy, assessRisk, retrain, and prune).
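The simplest consistent fix is to rebuild the aggregates from whatever survives the prune; the field names below are illustrative, not the PR's actual model shape:

```javascript
// Recompute aggregate stats from the retained outcomes so the model can
// never drift from the outcome list, regardless of what was pruned.
function rebuildModel(outcomes) {
  const model = { total: 0, successes: 0, totalDurationMs: 0 };
  for (const o of outcomes) {
    model.total += 1;
    if (o.success) model.successes += 1;
    model.totalDurationMs += o.durationMs ?? 0;
  }
  return model;
}

// After the splice, the sketch would slot in as:
//   this._outcomes.splice(0, excess);
//   this._model = rebuildModel(this._outcomes); // keep aggregates in sync
//   await this._persistOutcomes();
//   await this._persistModel();
```

A full rebuild is O(n) per overflow; decrementing only the removed entries' contributions is cheaper but easier to get wrong.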

Comment on lines +509 to +513
_stagePredict(neighbors, features) {
  return this._runStage(PipelineStage.PREDICT, () => {
    if (neighbors.length === 0) {
      return this._defaultPrediction(features);
    }

⚠️ Potential issue | 🟠 Major

minSamplesForPrediction never gates predictions.

The option is documented and exposed, but this only falls back when there are zero neighbors. With one or two matches, callers still receive a full prediction even though the configured minimum has not been met.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/execution/predictive-pipeline.js around lines 509 - 513,
_stagePredict currently only falls back to _defaultPrediction when
neighbors.length === 0, so the configured minSamplesForPrediction option never
gates predictions; update the logic in _stagePredict to check the configured
threshold (this.options.minSamplesForPrediction or similar) and call
this._defaultPrediction(features) whenever neighbors.length is less than that
threshold (including the zero case), ensuring the option is honored before
running the normal prediction path in _runStage(PipelineStage.PREDICT, ...).
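A sketch of the gate the prompt asks for (names mirror the snippet above; `predictFn` and `defaultFn` are illustrative stand-ins for the real stage logic):

```javascript
// Falls back whenever the neighbor count is below the configured minimum,
// covering the zero-neighbor case as a special case of the general one.
function predictOrDefault(neighbors, minSamplesForPrediction, predictFn, defaultFn) {
  if (neighbors.length < minSamplesForPrediction) {
    return defaultFn();
  }
  return predictFn(neighbors);
}
```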
