
test: add comprehensive unit tests for condition-evaluator#224

Closed
nikolasdehor wants to merge 1 commit into SynkraAI:main from nikolasdehor:test/condition-evaluator-coverage

Conversation

@nikolasdehor (Contributor) commented Feb 18, 2026

Summary

  • Add 74 unit tests for .aios-core/core/orchestration/condition-evaluator.js (379 lines)
  • Covers all public methods and built-in condition evaluators
  • Tests complex expression parsing (AND, OR, mixed, negation, dot notation)

Test Coverage Highlights

Area                   Tests  Description
Tech stack conditions    5    database, frontend, backend, typescript, tests
Database conditions      5    supabase, RLS, migrations
Frontend conditions      4    react, vue, tailwind
Workflow state          11    QA approval, phase completion, composite
Operators                9    AND, OR, mixed, negation
Dot notation             4    boolean access, equality checks, missing paths
shouldExecutePhase       3    no condition, met, not met
getFailedConditions      8    single, AND, OR, mixed operators
getSkipExplanation       7    known explanations, generic fallback
evaluateAllPhases        4    categorization, details, edge cases
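
The operator and dot-notation behaviors listed above can be illustrated with a small standalone sketch (this is not the code in condition-evaluator.js; evaluateCondition and getPath are hypothetical names chosen for illustration):

```javascript
// Resolve 'database.hasRLS'-style dot notation; returns undefined for missing paths.
function getPath(obj, path) {
  return path.split('.').reduce((o, key) => (o == null ? undefined : o[key]), obj);
}

// Evaluate an OR-of-ANDs condition string with optional '!' negation against a profile.
// Splitting on '||' first means 'a && b || c' is read as (a && b) || c.
function evaluateCondition(expr, profile) {
  return expr.split('||').some((group) =>
    group.split('&&').every((term) => {
      let t = term.trim();
      let negated = false;
      if (t.startsWith('!')) {
        negated = true;
        t = t.slice(1).trim();
      }
      const value = Boolean(getPath(profile, t));
      return negated ? !value : value;
    })
  );
}

const profile = { hasDatabase: true, hasFrontend: false, database: { hasRLS: true } };
console.log(evaluateCondition('hasDatabase && database.hasRLS', profile)); // true
console.log(evaluateCondition('hasFrontend || hasDatabase', profile));     // true
console.log(evaluateCondition('!hasFrontend && hasDatabase', profile));    // true
console.log(evaluateCondition('hasFrontend && hasDatabase', profile));     // false
```

Missing paths coerce to false rather than throwing, which matches the "missing paths" row in the dot-notation tests.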

Closes #223

Test plan

  • All 74 tests pass with npx jest tests/core/orchestration/condition-evaluator.test.js
  • No external dependencies (pure unit tests)
  • Tests run in < 1s

cc @Pedrovaleriolopez

Summary by CodeRabbit

  • Tests
    • Added a comprehensive test suite for condition evaluation and phase orchestration, covering profile scenarios, QA approval flows, database/frontend/workflow condition semantics, composite logic (AND/OR/negation), dot-notation and equality checks, handling of unknown or missing data, phase execution decisions, skip explanations, and end-to-end phase evaluation outcomes.

Copilot AI review requested due to automatic review settings February 18, 2026 18:26

vercel bot commented Feb 18, 2026

@nikolasdehor is attempting to deploy a commit to Pedro Valério Lopez's projects team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Feb 18, 2026

Walkthrough

Adds a comprehensive Jest test suite for the ConditionEvaluator module, covering constructor behavior, state setters, evaluation of tech-stack, database, frontend, workflow-state and composite conditions, helper methods (shouldExecutePhase, getFailedConditions, getSkipExplanation), and evaluateAllPhases.
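
For orientation, the summary shape that the evaluateAllPhases tests exercise might look roughly like the following sketch. The field names (applicable, skipped, details, shouldExecute, reason) come from assertions quoted later in this review; the implementation itself is an assumption, not the actual module code:

```javascript
// Hypothetical sketch of phase evaluation: each phase is keyed by `phase`,
// falling back to `step` when `phase` is unset, and sorted into applicable
// or skipped depending on whether its condition is met.
function evaluateAllPhases(phases, evaluate) {
  const summary = { applicable: [], skipped: [], details: {} };
  for (const phase of phases) {
    const key = phase.phase ?? phase.step;
    const met = phase.condition ? evaluate(phase.condition) : true;
    summary.details[key] = {
      shouldExecute: met,
      reason: phase.condition ? (met ? 'condition_met' : 'condition_not_met') : 'no_condition',
    };
    (met ? summary.applicable : summary.skipped).push(key);
  }
  return summary;
}

const summary = evaluateAllPhases(
  [{ step: 'init' }, { phase: 2, condition: 'project_has_database' }],
  (cond) => cond !== 'project_has_database' // stub evaluator: the database condition fails
);
console.log(summary.applicable); // ['init']
console.log(summary.skipped);    // [2]
```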

Changes

ConditionEvaluator Test Suite (tests/core/orchestration/condition-evaluator.test.js):
Adds a ~462-line Jest test file with fixtures, spying on console.warn, tests for the constructor and state setters, extensive evaluate() coverage (tech-stack, database, frontend, QA/workflow, composite logic, negation, dot notation, unknown conditions), and tests for shouldExecutePhase, getFailedConditions, getSkipExplanation, and evaluateAllPhases.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

core, tests

🚥 Pre-merge checks: ✅ 5 passed

  • Description Check: Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: Passed. The title 'test: add comprehensive unit tests for condition-evaluator' directly and accurately describes the main change: adding 74 unit tests to the condition-evaluator module.
  • Linked Issues Check: Passed. The PR addresses all coding objectives from #223: 74 tests cover all public methods, test all built-in condition evaluators including tech-stack/database/frontend conditions, test complex expressions (AND/OR/negation/dot-notation), and confirm all tests pass.
  • Out of Scope Changes Check: Passed. All changes are within scope: a single test file is added with no modifications to production code or unrelated files, fully aligned with the objective of adding unit test coverage to condition-evaluator.js.
  • Docstring Coverage: Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/core/orchestration/condition-evaluator.test.js (2)

104-132: Prefer local evaluator variables over mid-test reassignment.

Reassigning the outer evaluator mid-it (e.g., Lines 106, 112, 118, 124, 130) conflates two logical test cases in one block and makes it harder to pinpoint which state triggers a failure. The same pattern appears throughout the database, frontend, and composite sections. Prefer separate it blocks or a local constant.

♻️ Example refactor for one representative test
-  it('should evaluate project_has_database from profile', () => {
-    expect(evaluator.evaluate('project_has_database')).toBe(false);
-    evaluator = new ConditionEvaluator(createProfile({ hasDatabase: true }));
-    expect(evaluator.evaluate('project_has_database')).toBe(true);
-  });
+  it('should return false for project_has_database when profile has no database', () => {
+    expect(evaluator.evaluate('project_has_database')).toBe(false);
+  });
+
+  it('should return true for project_has_database when profile has database', () => {
+    const e = new ConditionEvaluator(createProfile({ hasDatabase: true }));
+    expect(e.evaluate('project_has_database')).toBe(true);
+  });

35-38: Move the console.warn spy before evaluator construction.

Currently the spy is installed after new ConditionEvaluator(createProfile()), so any warning emitted by the constructor would not be captured. Swap the two lines for defensive correctness.

♻️ Proposed fix
 beforeEach(() => {
+  jest.spyOn(console, 'warn').mockImplementation(() => {});
   evaluator = new ConditionEvaluator(createProfile());
-  jest.spyOn(console, 'warn').mockImplementation(() => {});
 });

Comment on lines +402 to +411
    const result = evaluator.shouldExecutePhase({ phase: 1, condition: 'project_has_database' });
    expect(result.shouldExecute).toBe(true);
    expect(result.reason).toBe('condition_met');
  });

  it('should return false with condition_not_met reason when condition fails', () => {
    const result = evaluator.shouldExecutePhase({ phase: 1, condition: 'project_has_database' });
    expect(result.shouldExecute).toBe(false);
    expect(result.reason).toContain('condition_not_met');
  });


⚠️ Potential issue | 🟡 Minor

Align assertion style for reason between the positive and negative paths.

Line 404 uses toBe('condition_met') (exact match) while Line 410 uses toContain('condition_not_met') (substring). If reason is an exact enum string in both cases, use toBe consistently. If the failure path appends extra context (e.g., 'condition_not_met: project_has_database'), document the intent and consider asserting on the extra text too.

🛠️ Proposed fix (if `reason` is a bare enum value in both paths)
-    expect(result.reason).toContain('condition_not_met');
+    expect(result.reason).toBe('condition_not_met');


Copilot AI left a comment


Pull request overview

This pull request adds comprehensive unit test coverage for the ConditionEvaluator module, which is responsible for evaluating workflow conditions based on the detected tech stack profile. The module performs deterministic evaluation (no AI involvement) to determine which workflow phases should execute.

Changes:

  • Added 74 unit tests covering all public methods and built-in condition evaluators
  • Tests cover tech stack conditions, database conditions, frontend conditions, workflow state, operators (AND/OR/mixed), negation, dot notation, and all helper methods
  • Includes helper function createProfile() to create mock tech stack profiles for testing


    evaluator = new ConditionEvaluator(createProfile({ hasFrontend: true }));
    expect(evaluator.evaluate('has_any_data_to_analyze')).toBe(true);
  });


Copilot AI Feb 18, 2026


The test coverage for the has_any_data_to_analyze composite condition is missing a test case for when only hasBackend is true. The implementation checks hasDatabase || hasFrontend || hasBackend, but only the database and frontend branches are tested. Consider adding a test case like: it('should evaluate has_any_data_to_analyze with backend', () => { evaluator = new ConditionEvaluator(createProfile({ hasBackend: true })); expect(evaluator.evaluate('has_any_data_to_analyze')).toBe(true); });

Suggested change
  it('should evaluate has_any_data_to_analyze with backend', () => {
    evaluator = new ConditionEvaluator(createProfile({ hasBackend: true }));
    expect(evaluator.evaluate('has_any_data_to_analyze')).toBe(true);
  });

Comment on lines +11 to +30
function createProfile(overrides = {}) {
  return {
    hasDatabase: false,
    hasFrontend: false,
    hasBackend: false,
    hasTypeScript: false,
    hasTests: false,
    database: {
      type: null,
      envVarsConfigured: false,
      hasRLS: false,
      hasMigrations: false,
    },
    frontend: {
      framework: null,
      styling: null,
    },
    ...overrides,
  };
}

Copilot AI Feb 18, 2026


The createProfile helper uses shallow object spreading with ...overrides, which means that when passing nested objects like database or frontend, all properties must be specified or they will be lost. For example, createProfile({ database: { type: 'supabase' } }) would lose the envVarsConfigured, hasRLS, and hasMigrations properties. While current tests correctly provide all properties, consider using deep merging to make the helper more robust and prevent future issues. You could use a deep merge utility or restructure the helper to handle nested overrides separately.
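
A deep-merging variant of the helper could look like the following sketch (illustrative only; deepMerge is a hypothetical helper, not part of the file under review, and the defaults are abbreviated):

```javascript
// Recursively merge plain-object overrides into defaults so that partial
// nested overrides like { database: { type: 'supabase' } } keep sibling keys.
function deepMerge(base, overrides) {
  const out = { ...base };
  for (const [key, value] of Object.entries(overrides)) {
    if (value && typeof value === 'object' && !Array.isArray(value)
        && base[key] && typeof base[key] === 'object') {
      out[key] = deepMerge(base[key], value);
    } else {
      out[key] = value;
    }
  }
  return out;
}

function createProfile(overrides = {}) {
  const defaults = {
    hasDatabase: false,
    hasFrontend: false,
    database: { type: null, envVarsConfigured: false, hasRLS: false, hasMigrations: false },
    frontend: { framework: null, styling: null },
  };
  return deepMerge(defaults, overrides);
}

// Nested defaults survive a partial override:
const p = createProfile({ database: { type: 'supabase' } });
console.log(p.database.type);   // 'supabase'
console.log(p.database.hasRLS); // false (preserved)
```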

      const result = evaluator.getFailedConditions({
        condition: 'project_has_database && project_has_frontend || project_has_backend',
      });
      expect(result.length).toBeGreaterThan(0);

Copilot AI Feb 18, 2026


The assertion expect(result.length).toBeGreaterThan(0) is too weak and doesn't adequately verify the behavior. According to the implementation, when all groups in a mixed operator condition fail, it should return an array of the failed group strings. Consider using a more specific assertion like expect(result).toEqual(['project_has_database && project_has_frontend', 'project_has_backend']) to verify the exact expected behavior.

Suggested change
-      expect(result.length).toBeGreaterThan(0);
+      expect(result).toEqual(['project_has_database && project_has_frontend', 'project_has_backend']);

Comment on lines +53 to +59
  it('should initialize qaApproved as false', () => {
    expect(evaluator._qaApproved).toBe(false);
  });

  it('should initialize empty phase outputs', () => {
    expect(evaluator._phaseOutputs).toEqual({});
  });

Copilot AI Feb 18, 2026


The tests directly access private properties _qaApproved and _phaseOutputs (indicated by underscore prefix), which couples the tests to implementation details. Consider testing these through their public interfaces instead. For example, instead of checking evaluator._qaApproved, you could verify the behavior through evaluator.evaluate('qa_review_approved'). However, if these properties are meant to be tested directly for initialization verification, consider whether they should be private or if getter methods should be provided.

Suggested change
-  it('should initialize qaApproved as false', () => {
-    expect(evaluator._qaApproved).toBe(false);
-  });
-
-  it('should initialize empty phase outputs', () => {
-    expect(evaluator._phaseOutputs).toEqual({});
-  });
+  it('should have QA review not approved by default', () => {
+    // Use the public evaluate API instead of checking the private _qaApproved field
+    expect(evaluator.evaluate('qa_review_approved')).toBe(false);
+  });

Add 51 tests covering ConditionEvaluator class:
- Tech stack conditions: database, frontend, backend, typescript, tests
- Database-specific: supabase_configured, RLS, migrations
- Frontend-specific: react, vue, tailwind
- Workflow state: QA approval, phase completion, collection phases
- Logical operators: AND, OR, mixed AND/OR (OR-of-ANDs)
- Dot notation access and equality checks
- Negation, unknown conditions, composite conditions
- shouldExecutePhase, getFailedConditions, getSkipExplanation
- evaluateAllPhases summary
@nikolasdehor force-pushed the test/condition-evaluator-coverage branch from d522817 to 88fb5ea on February 18, 2026 at 19:39

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (4)
tests/core/orchestration/condition-evaluator.test.js (4)

456-461: uses step when phase not set only asserts key existence — verify execution outcome too.

toBeDefined() confirms the key was created but doesn't validate that the step is treated as applicable (no condition → should execute) or that shouldExecute is true.

♻️ Proposed fix
    test('uses step when phase not set', () => {
      const phases = [{ step: 'init' }];
      const summary = evaluator.evaluateAllPhases(phases);
-     expect(summary.details['init']).toBeDefined();
+     expect(summary.details['init']).toBeDefined();
+     expect(summary.details['init'].shouldExecute).toBe(true);
+     expect(summary.applicable).toContain('init');
+     expect(summary.skipped).not.toContain('init');
    });

401-418: getSkipExplanation is under-tested — 3 tests vs. 7 claimed.

The current 3 tests exercise only the happy path and two known-string conditions. Missing scenarios that could hide real bugs:

  • Phase with no condition field (should it return an "always execute" message or empty?)
  • Unknown/future condition (does it fall back gracefully?)
  • Negated condition (e.g., '!project_has_database' when database is present)
  • Complex AND/OR expression (e.g., 'project_has_database && project_has_frontend' with partial failure)
♻️ Example additions
+   test('returns execute message when no condition is set', () => {
+     const explanation = evaluator.getSkipExplanation({ phase: 1 });
+     expect(explanation).toContain('should execute');
+   });
+
+   test('handles negated condition correctly', () => {
+     const explanation = evaluator.getSkipExplanation({ condition: '!project_has_database' });
+     // evaluator has FULL_PROFILE so !project_has_database is false → skip
+     expect(explanation).toBeDefined();
+     expect(typeof explanation).toBe('string');
+   });
+
+   test('handles unknown condition gracefully', () => {
+     const explanation = evaluator.getSkipExplanation({ condition: 'future_condition' });
+     // unknown conditions default to true → should execute
+     expect(explanation).toContain('should execute');
+   });
+
+   test('handles AND/OR partial failure explanation', () => {
+     const ev = new ConditionEvaluator(EMPTY_PROFILE);
+     const explanation = ev.getSkipExplanation({
+       condition: 'project_has_database && project_has_frontend',
+     });
+     expect(explanation).toBeDefined();
+     expect(typeof explanation).toBe('string');
+   });

107-117: Missing false-path coverage for 6 built-in condition evaluators.

project_has_backend, project_has_typescript, project_has_tests (lines 107–117), database_has_rls, database_has_migrations (lines 137–143), frontend_has_react, and frontend_has_tailwind (lines 150–161) are each tested only against FULL_PROFILE. A regression in any of these evaluators that makes them always return true would go undetected. The acceptance criterion ("test all built-in condition evaluators") is partially unmet.

♻️ Suggested additions (representative; apply same pattern to the others)
    test('project_has_backend', () => {
      expect(evaluator.evaluate('project_has_backend')).toBe(true);
+     const ev = new ConditionEvaluator(EMPTY_PROFILE);
+     expect(ev.evaluate('project_has_backend')).toBe(false);
    });

    test('project_has_typescript', () => {
      expect(evaluator.evaluate('project_has_typescript')).toBe(true);
+     const ev = new ConditionEvaluator(EMPTY_PROFILE);
+     expect(ev.evaluate('project_has_typescript')).toBe(false);
    });

    test('project_has_tests', () => {
      expect(evaluator.evaluate('project_has_tests')).toBe(true);
+     const ev = new ConditionEvaluator(EMPTY_PROFILE);
+     expect(ev.evaluate('project_has_tests')).toBe(false);
    });
    test('database_has_rls', () => {
      expect(evaluator.evaluate('database_has_rls')).toBe(true);
+     const ev = new ConditionEvaluator(EMPTY_PROFILE);
+     expect(ev.evaluate('database_has_rls')).toBe(false);
    });

    test('database_has_migrations', () => {
      expect(evaluator.evaluate('database_has_migrations')).toBe(true);
+     const ev = new ConditionEvaluator(EMPTY_PROFILE);
+     expect(ev.evaluate('database_has_migrations')).toBe(false);
    });
    test('frontend_has_react', () => {
      expect(evaluator.evaluate('frontend_has_react')).toBe(true);
+     const ev = new ConditionEvaluator(EMPTY_PROFILE);
+     expect(ev.evaluate('frontend_has_react')).toBe(false);
    });

    test('frontend_has_tailwind', () => {
      expect(evaluator.evaluate('frontend_has_tailwind')).toBe(true);
+     const ev = new ConditionEvaluator(EMPTY_PROFILE);
+     expect(ev.evaluate('frontend_has_tailwind')).toBe(false);
    });

Also applies to: 137-143, 150-161


382-388: Weak assertion on mixed AND/OR failure — prefer specific conditions.

toBeGreaterThan(0) would pass even if the wrong conditions are reported as failed. Every other getFailedConditions test in this block asserts specific condition names.

♻️ Proposed fix
    test('handles mixed AND/OR', () => {
      const ev = new ConditionEvaluator(EMPTY_PROFILE);
      const failed = ev.getFailedConditions({
        condition: 'project_has_database && project_has_frontend || project_has_backend',
      });
-     expect(failed.length).toBeGreaterThan(0);
+     expect(failed).toContain('project_has_database');
+     expect(failed).toContain('project_has_frontend');
+     expect(failed).toContain('project_has_backend');
    });

@nikolasdehor (Contributor, Author) commented:

Consolidated into #426



Development

Successfully merging this pull request may close these issues.

test: add unit test coverage for condition-evaluator

2 participants