test: add comprehensive unit tests for condition-evaluator #224

nikolasdehor wants to merge 1 commit into SynkraAI:main
Conversation
@nikolasdehor is attempting to deploy a commit to Pedro Valério Lopez's projects Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

Adds a comprehensive Jest test suite for the ConditionEvaluator module, covering constructor behavior, state setters, evaluation of tech-stack, database, frontend, workflow-state and composite conditions, helper methods (shouldExecutePhase, getFailedConditions, getSkipExplanation), and evaluateAllPhases.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/core/orchestration/condition-evaluator.test.js (2)
104-132: Prefer local `evaluator` variables over mid-test reassignment.

Reassigning the outer `evaluator` mid-`it` (e.g., Lines 106, 112, 118, 124, 130) conflates two logical test cases in one block and makes it harder to pinpoint which state triggers a failure. The same pattern appears throughout the database, frontend, and composite sections. Prefer separate `it` blocks or a local constant.

♻️ Example refactor for one representative test
```diff
-      it('should evaluate project_has_database from profile', () => {
-        expect(evaluator.evaluate('project_has_database')).toBe(false);
-        evaluator = new ConditionEvaluator(createProfile({ hasDatabase: true }));
-        expect(evaluator.evaluate('project_has_database')).toBe(true);
-      });
+      it('should return false for project_has_database when profile has no database', () => {
+        expect(evaluator.evaluate('project_has_database')).toBe(false);
+      });
+
+      it('should return true for project_has_database when profile has database', () => {
+        const e = new ConditionEvaluator(createProfile({ hasDatabase: true }));
+        expect(e.evaluate('project_has_database')).toBe(true);
+      });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/orchestration/condition-evaluator.test.js` around lines 104 - 132, Split each test so you don't reassign the outer evaluator inside an it block: instead of reusing the shared evaluator variable and calling evaluator = new ConditionEvaluator(...), create a new local constant for the specific scenario (e.g., const evaluator = new ConditionEvaluator(createProfile({ hasDatabase: true })) ) or move the second assertion into its own it block; update the tests that reference evaluator and ConditionEvaluator (project_has_database, project_has_frontend, project_has_backend, project_has_typescript, project_has_tests) to use locally scoped evaluators created via createProfile to keep each it single-purpose and avoid mid-test reassignment.
35-38: Move the `console.warn` spy before evaluator construction.

Currently the spy is installed after `new ConditionEvaluator(createProfile())`, so any warning emitted by the constructor would not be captured. Swap the two lines for defensive correctness.

♻️ Proposed fix
```diff
 beforeEach(() => {
+  jest.spyOn(console, 'warn').mockImplementation(() => {});
   evaluator = new ConditionEvaluator(createProfile());
-  jest.spyOn(console, 'warn').mockImplementation(() => {});
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/orchestration/condition-evaluator.test.js` around lines 35 - 38, Install the console.warn spy before constructing the ConditionEvaluator so constructor-emitted warnings are captured: in the beforeEach block, call jest.spyOn(console, 'warn').mockImplementation(() => {}) prior to creating evaluator with new ConditionEvaluator(createProfile()); this ensures any warnings from ConditionEvaluator's constructor are intercepted during tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/core/orchestration/condition-evaluator.test.js`:
- Around line 402-411: The two assertions for result.reason must use the same
style: update the negative-path assertion on the result from
evaluator.shouldExecutePhase({ phase: 1, condition: 'project_has_database' }) so
it uses an exact match like the positive path
(expect(result.reason).toBe('condition_not_met')) if reason is an enum string;
if the implementation appends context (e.g. 'condition_not_met:
project_has_database') instead, either assert the full expected string exactly
or change the positive-path assertion to a substring check
(expect(result.reason).toContain('condition_met')) so both tests use the same
intent.
```js
  const result = evaluator.shouldExecutePhase({ phase: 1, condition: 'project_has_database' });
  expect(result.shouldExecute).toBe(true);
  expect(result.reason).toBe('condition_met');
});

it('should return false with condition_not_met reason when condition fails', () => {
  const result = evaluator.shouldExecutePhase({ phase: 1, condition: 'project_has_database' });
  expect(result.shouldExecute).toBe(false);
  expect(result.reason).toContain('condition_not_met');
});
```
Align assertion style for `reason` between the positive and negative paths.

Line 404 uses `toBe('condition_met')` (exact match) while Line 410 uses `toContain('condition_not_met')` (substring). If `reason` is an exact enum string in both cases, use `toBe` consistently. If the failure path appends extra context (e.g., `'condition_not_met: project_has_database'`), document the intent and consider asserting on the extra text too.
🛠️ Proposed fix (if `reason` is a bare enum value in both paths)
```diff
-    expect(result.reason).toContain('condition_not_met');
+    expect(result.reason).toBe('condition_not_met');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/core/orchestration/condition-evaluator.test.js` around lines 402 - 411,
The two assertions for result.reason must use the same style: update the
negative-path assertion on the result from evaluator.shouldExecutePhase({ phase:
1, condition: 'project_has_database' }) so it uses an exact match like the
positive path (expect(result.reason).toBe('condition_not_met')) if reason is an
enum string; if the implementation appends context (e.g. 'condition_not_met:
project_has_database') instead, either assert the full expected string exactly
or change the positive-path assertion to a substring check
(expect(result.reason).toContain('condition_met')) so both tests use the same
intent.
Pull request overview
This pull request adds comprehensive unit test coverage for the ConditionEvaluator module, which is responsible for evaluating workflow conditions based on the detected tech stack profile. The module performs deterministic evaluation (no AI involvement) to determine which workflow phases should execute.
Changes:
- Added 74 unit tests covering all public methods and built-in condition evaluators
- Tests cover tech stack conditions, database conditions, frontend conditions, workflow state, operators (AND/OR/mixed), negation, dot notation, and all helper methods
- Includes helper function `createProfile()` to create mock tech stack profiles for testing
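For orientation, the deterministic name-to-check mapping described above can be sketched as follows. This is a hypothetical miniature, not the real module's API; only the negation syntax and the "unknown conditions default to true" behavior are taken from the review discussion below.

```javascript
// Hypothetical minimal sketch of a deterministic condition evaluator:
// condition names map to boolean checks against a tech-stack profile.
class MiniConditionEvaluator {
  constructor(profile) {
    this.profile = profile;
    this.checks = {
      project_has_database: (p) => !!p.hasDatabase,
      project_has_frontend: (p) => !!p.hasFrontend,
    };
  }

  evaluate(condition) {
    // Negation: '!name' inverts the named check.
    if (condition.startsWith('!')) return !this.evaluate(condition.slice(1));
    const check = this.checks[condition];
    // Unknown conditions default to true so new phases are not silently skipped.
    return check ? check(this.profile) : true;
  }
}

const ev = new MiniConditionEvaluator({ hasDatabase: true, hasFrontend: false });
console.log(ev.evaluate('project_has_database')); // true
console.log(ev.evaluate('!project_has_frontend')); // true
```

No AI is involved: each evaluation is a pure function of the profile, which is what makes the module unit-testable with plain mock profiles.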
```js
  evaluator = new ConditionEvaluator(createProfile({ hasFrontend: true }));
  expect(evaluator.evaluate('has_any_data_to_analyze')).toBe(true);
});
```
The test coverage for the `has_any_data_to_analyze` composite condition is missing a test case for when only `hasBackend` is true. The implementation checks `hasDatabase || hasFrontend || hasBackend`, but only the database and frontend branches are tested. Consider adding a test case:

```js
it('should evaluate has_any_data_to_analyze with backend', () => {
  evaluator = new ConditionEvaluator(createProfile({ hasBackend: true }));
  expect(evaluator.evaluate('has_any_data_to_analyze')).toBe(true);
});
```
```js
function createProfile(overrides = {}) {
  return {
    hasDatabase: false,
    hasFrontend: false,
    hasBackend: false,
    hasTypeScript: false,
    hasTests: false,
    database: {
      type: null,
      envVarsConfigured: false,
      hasRLS: false,
      hasMigrations: false,
    },
    frontend: {
      framework: null,
      styling: null,
    },
    ...overrides,
  };
}
```
The `createProfile` helper uses shallow object spreading with `...overrides`, which means that when passing nested objects like `database` or `frontend`, all properties must be specified or they will be lost. For example, `createProfile({ database: { type: 'supabase' } })` would lose the `envVarsConfigured`, `hasRLS`, and `hasMigrations` properties. While current tests correctly provide all properties, consider using deep merging to make the helper more robust and prevent future issues. You could use a deep merge utility or restructure the helper to handle nested overrides separately.
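One way the suggested deep-merge restructuring could look, sketched as an illustration (the `deepMerge` helper is not part of the PR; it is a minimal merge for plain objects, not arrays or class instances):

```javascript
// Minimal deep merge for plain objects: nested override objects are merged
// key-by-key instead of replacing the whole default sub-object.
function deepMerge(target, source) {
  const result = { ...target };
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      result[key] = deepMerge(target[key] || {}, value);
    } else {
      result[key] = value;
    }
  }
  return result;
}

function createProfile(overrides = {}) {
  const defaults = {
    hasDatabase: false,
    hasFrontend: false,
    hasBackend: false,
    hasTypeScript: false,
    hasTests: false,
    database: { type: null, envVarsConfigured: false, hasRLS: false, hasMigrations: false },
    frontend: { framework: null, styling: null },
  };
  return deepMerge(defaults, overrides);
}

// A partial nested override no longer drops sibling defaults:
const profile = createProfile({ database: { type: 'supabase' } });
console.log(profile.database.type); // 'supabase'
console.log(profile.database.envVarsConfigured); // false (preserved from defaults)
```

With this shape, tests only need to state the properties they care about, which keeps each mock profile small and intention-revealing.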
```js
const result = evaluator.getFailedConditions({
  condition: 'project_has_database && project_has_frontend || project_has_backend',
});
expect(result.length).toBeGreaterThan(0);
```
The assertion `expect(result.length).toBeGreaterThan(0)` is too weak and doesn't adequately verify the behavior. According to the implementation, when all groups in a mixed operator condition fail, it should return an array of the failed group strings. Consider a more specific assertion to verify the exact expected behavior:

```diff
-expect(result.length).toBeGreaterThan(0);
+expect(result).toEqual(['project_has_database && project_has_frontend', 'project_has_backend']);
```
```js
it('should initialize qaApproved as false', () => {
  expect(evaluator._qaApproved).toBe(false);
});

it('should initialize empty phase outputs', () => {
  expect(evaluator._phaseOutputs).toEqual({});
});
```
The tests directly access private properties `_qaApproved` and `_phaseOutputs` (indicated by underscore prefix), which couples the tests to implementation details. Consider testing these through their public interfaces instead. For example, instead of checking `evaluator._qaApproved`, you could verify the behavior through `evaluator.evaluate('qa_review_approved')`. However, if these properties are meant to be tested directly for initialization verification, consider whether they should be private or if getter methods should be provided.

```diff
-it('should initialize qaApproved as false', () => {
-  expect(evaluator._qaApproved).toBe(false);
-});
-
-it('should initialize empty phase outputs', () => {
-  expect(evaluator._phaseOutputs).toEqual({});
-});
+it('should have QA review not approved by default', () => {
+  // Use the public evaluate API instead of checking the private _qaApproved field
+  expect(evaluator.evaluate('qa_review_approved')).toBe(false);
+});
```
Add 51 tests covering ConditionEvaluator class:
- Tech stack conditions: database, frontend, backend, typescript, tests
- Database-specific: supabase_configured, RLS, migrations
- Frontend-specific: react, vue, tailwind
- Workflow state: QA approval, phase completion, collection phases
- Logical operators: AND, OR, mixed AND/OR (OR-of-ANDs)
- Dot notation access and equality checks
- Negation, unknown conditions, composite conditions
- shouldExecutePhase, getFailedConditions, getSkipExplanation
- evaluateAllPhases summary
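The mixed AND/OR ("OR-of-ANDs") semantics listed above can be sketched as follows. This is a hypothetical parse, assuming `||` binds looser than `&&` as the test names suggest; the actual parsing in condition-evaluator.js is not shown in this PR.

```javascript
// Hypothetical OR-of-ANDs evaluation: split on '||' into groups,
// where a group passes only if every one of its '&&' terms passes.
function evaluateExpression(expr, evalOne) {
  return expr
    .split('||')
    .map((group) => group.split('&&').map((s) => s.trim()))
    .some((terms) => terms.every((term) => evalOne(term)));
}

// Only the backend condition holds in this example profile.
const truthy = new Set(['project_has_backend']);
const result = evaluateExpression(
  'project_has_database && project_has_frontend || project_has_backend',
  (name) => truthy.has(name)
);
console.log(result); // true: the second OR-group passes on its own
```

Under this reading, the expression in the review comment above fails only when both the AND-group and the lone backend condition fail, which is exactly the scenario the `getFailedConditions` tests probe.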
Force-pushed d522817 to 88fb5ea
🧹 Nitpick comments (4)
tests/core/orchestration/condition-evaluator.test.js (4)
456-461: `uses step when phase not set` only asserts key existence — verify execution outcome too.

`toBeDefined()` confirms the key was created but doesn't validate that the step is treated as applicable (no condition → should execute) or that `shouldExecute` is `true`.

♻️ Proposed fix
```diff
 test('uses step when phase not set', () => {
   const phases = [{ step: 'init' }];
   const summary = evaluator.evaluateAllPhases(phases);
-  expect(summary.details['init']).toBeDefined();
+  expect(summary.details['init']).toBeDefined();
+  expect(summary.details['init'].shouldExecute).toBe(true);
+  expect(summary.applicable).toContain('init');
+  expect(summary.skipped).not.toContain('init');
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/orchestration/condition-evaluator.test.js` around lines 456 - 461, The test "uses step when phase not set" currently only checks the presence of the 'init' key; update the assertion to verify the step was treated as executable by asserting summary.details['init'].shouldExecute === true (and optionally validate any execution result flag or status the evaluator sets) after calling evaluator.evaluateAllPhases(phases). Locate the test block using the test name and the call to evaluator.evaluateAllPhases and replace or augment the expect(summary.details['init']).toBeDefined() with an assertion that inspects summary.details['init'].shouldExecute (and/or the evaluator's execution result property) to confirm the step actually runs when no phase is set.
401-418: `getSkipExplanation` is under-tested — 3 tests vs. 7 claimed.

The current 3 tests exercise only the happy path and two known-string conditions. Missing scenarios that could hide real bugs:

- Phase with no `condition` field (should it return an "always execute" message or empty?)
- Unknown/future condition (does it fall back gracefully?)
- Negated condition (e.g., `'!project_has_database'` when database is present)
- Complex AND/OR expression (e.g., `'project_has_database && project_has_frontend'` with partial failure)

♻️ Example additions
```diff
+  test('returns execute message when no condition is set', () => {
+    const explanation = evaluator.getSkipExplanation({ phase: 1 });
+    expect(explanation).toContain('should execute');
+  });
+
+  test('handles negated condition correctly', () => {
+    const explanation = evaluator.getSkipExplanation({ condition: '!project_has_database' });
+    // evaluator has FULL_PROFILE so !project_has_database is false → skip
+    expect(explanation).toBeDefined();
+    expect(typeof explanation).toBe('string');
+  });
+
+  test('handles unknown condition gracefully', () => {
+    const explanation = evaluator.getSkipExplanation({ condition: 'future_condition' });
+    // unknown conditions default to true → should execute
+    expect(explanation).toContain('should execute');
+  });
+
+  test('handles AND/OR partial failure explanation', () => {
+    const ev = new ConditionEvaluator(EMPTY_PROFILE);
+    const explanation = ev.getSkipExplanation({
+      condition: 'project_has_database && project_has_frontend',
+    });
+    expect(explanation).toBeDefined();
+    expect(typeof explanation).toBe('string');
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/orchestration/condition-evaluator.test.js` around lines 401 - 418, Add the missing unit tests for getSkipExplanation to cover edge cases: add a test that calls evaluator.getSkipExplanation with a phase object missing the condition field (expect an "always execute" or empty/execute message consistent with existing behavior), a test using an unknown condition string (e.g., 'unknown_condition') to assert it falls back to a generic graceful message, a test for a negated condition string (e.g., '!project_has_database') using ConditionEvaluator(EMPTY_PROFILE) or a profile where the condition is true to assert the negation is handled, and a test for a complex boolean expression (e.g., 'project_has_database && project_has_frontend') with a profile where only one subcondition is true to assert the combined expression explanation describes partial failure; reference getSkipExplanation, ConditionEvaluator, EMPTY_PROFILE and the existing evaluator instance to add these cases.
107-117: Missing false-path coverage for 7 built-in condition evaluators.

`project_has_backend`, `project_has_typescript`, `project_has_tests` (lines 107–117), `database_has_rls`, `database_has_migrations` (lines 137–143), `frontend_has_react`, and `frontend_has_tailwind` (lines 150–161) are each tested only against `FULL_PROFILE`. A regression in any of these evaluators that makes them always return `true` would go undetected. The acceptance criterion ("test all built-in condition evaluators") is partially unmet.

♻️ Suggested additions (representative; apply same pattern to the others)
```diff
 test('project_has_backend', () => {
   expect(evaluator.evaluate('project_has_backend')).toBe(true);
+  const ev = new ConditionEvaluator(EMPTY_PROFILE);
+  expect(ev.evaluate('project_has_backend')).toBe(false);
 });

 test('project_has_typescript', () => {
   expect(evaluator.evaluate('project_has_typescript')).toBe(true);
+  const ev = new ConditionEvaluator(EMPTY_PROFILE);
+  expect(ev.evaluate('project_has_typescript')).toBe(false);
 });

 test('project_has_tests', () => {
   expect(evaluator.evaluate('project_has_tests')).toBe(true);
+  const ev = new ConditionEvaluator(EMPTY_PROFILE);
+  expect(ev.evaluate('project_has_tests')).toBe(false);
 });
```

```diff
 test('database_has_rls', () => {
   expect(evaluator.evaluate('database_has_rls')).toBe(true);
+  const ev = new ConditionEvaluator(EMPTY_PROFILE);
+  expect(ev.evaluate('database_has_rls')).toBe(false);
 });

 test('database_has_migrations', () => {
   expect(evaluator.evaluate('database_has_migrations')).toBe(true);
+  const ev = new ConditionEvaluator(EMPTY_PROFILE);
+  expect(ev.evaluate('database_has_migrations')).toBe(false);
 });
```

```diff
 test('frontend_has_react', () => {
   expect(evaluator.evaluate('frontend_has_react')).toBe(true);
+  const ev = new ConditionEvaluator(EMPTY_PROFILE);
+  expect(ev.evaluate('frontend_has_react')).toBe(false);
 });

 test('frontend_has_tailwind', () => {
   expect(evaluator.evaluate('frontend_has_tailwind')).toBe(true);
+  const ev = new ConditionEvaluator(EMPTY_PROFILE);
+  expect(ev.evaluate('frontend_has_tailwind')).toBe(false);
 });
```

Also applies to: 137-143, 150-161
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/orchestration/condition-evaluator.test.js` around lines 107 - 117, Add negative-path tests for each built-in evaluator currently only checked against FULL_PROFILE by calling evaluator.evaluate with a minimal/empty profile (e.g., EMPTY_PROFILE or a profile object lacking the feature flags) and asserting false; specifically update tests in tests/core/orchestration/condition-evaluator.test.js to add counterparts for project_has_backend, project_has_typescript, project_has_tests, database_has_rls, database_has_migrations, frontend_has_react, and frontend_has_tailwind that call evaluator.evaluate('...') with the empty/minimal profile and expect(false) so regressions forcing always-true results are caught.
382-388: Weak assertion on mixed AND/OR failure — prefer specific conditions.

`toBeGreaterThan(0)` would pass even if the wrong conditions are reported as failed. Every other `getFailedConditions` test in this block asserts specific condition names.

♻️ Proposed fix
```diff
 test('handles mixed AND/OR', () => {
   const ev = new ConditionEvaluator(EMPTY_PROFILE);
   const failed = ev.getFailedConditions({
     condition: 'project_has_database && project_has_frontend || project_has_backend',
   });
-  expect(failed.length).toBeGreaterThan(0);
+  expect(failed).toContain('project_has_database');
+  expect(failed).toContain('project_has_frontend');
+  expect(failed).toContain('project_has_backend');
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/orchestration/condition-evaluator.test.js` around lines 382 - 388, Replace the weak length assertion in the "handles mixed AND/OR" test: call ConditionEvaluator.getFailedConditions and assert the exact failed condition names instead of just checking length; for example map failed to their names and use expect(...).toEqual(expect.arrayContaining(['project_has_database','project_has_frontend','project_has_backend'])) (or the exact set expected for EMPTY_PROFILE) so the test verifies the specific conditions returned by getFailedConditions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/core/orchestration/condition-evaluator.test.js`:
- Around line 332-340: The test is asserting reason inconsistently; change the
assertion in the "condition not met" test that currently uses
expect(result.reason).toContain('condition_not_met') to an exact-match assertion
to mirror the other test (use expect(result.reason).toBe('condition_not_met'))
so the suite consistently checks exact reasons returned by
ConditionEvaluator.shouldExecutePhase.
Consolidated into #426
Summary

`.aios-core/core/orchestration/condition-evaluator.js` (379 lines)

Test Coverage Highlights
Closes #223
Test plan

`npx jest tests/core/orchestration/condition-evaluator.test.js`

cc @Pedrovaleriolopez