test: add deep coverage for execution modules (P1 priority) #467
nikolasdehor wants to merge 3 commits into SynkraAI:main from
Conversation
Expand parallel-executor tests with edge cases for all 5 modes:
- RACE: gemini fallback when claude fails
- CONSENSUS: no-consensus detection, stat tracking, provider failure fallback
- BEST_OF: single-provider success, both-fail handling
- MERGE: single output when one provider fails
- FALLBACK: fallbacksUsed stat tracking
- _calculateSimilarity: identity, disjoint, null, case insensitivity
- _scoreOutput: null, length scoring, code block scoring
- getStats: computed rates, zero-execution edge case

Add wave-executor test suite (26 tests):
- Constructor defaults and custom config
- Single/multi-wave execution with and without WaveAnalyzer
- Critical task failure abort vs non-critical continuation
- Task chunking (chunkArray)
- Metrics calculation (success rate, wall time, efficiency)
- EventEmitter lifecycle (execution/wave/task started/completed/failed)
- Status reporting and formatStatus
- cancelAll behavior
- defaultExecutor placeholder

All 60 tests pass locally with Jest 30.
@nikolasdehor is attempting to deploy a commit to Pedro Valério Lopez's projects Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

Adds extensive unit tests for parallel and wave executors, covering execution modes, deep-mode scenarios, lifecycle events, failure/fallback behavior, metrics/status reporting, and internal utility validations.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/core/execution/parallel-executor.test.js (1)
1-9: ⚠️ Potential issue | 🟡 Minor

Add `'use strict'` for consistency with the rest of the test suite.

`wave-executor.test.js` declares `'use strict'` at line 1, but this file omits it.

🔧 Proposed fix

```diff
+'use strict';
+
 /**
  * Parallel Executor Tests
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/parallel-executor.test.js` around lines 1 - 9, Add the missing 'use strict' directive at the top of the test file to match the rest of the suite; open the file that imports ParallelExecutor and ParallelMode (the test declaring those symbols) and insert 'use strict' as the first statement so the module runs in strict mode consistently with other tests.
🧹 Nitpick comments (4)
tests/core/execution/wave-executor.test.js (2)
32-34: Flush pending timers before switching back to real timers in `afterEach`.

It's important to call `runOnlyPendingTimers` before switching to real timers to flush all pending timers; if you don't, scheduled tasks won't get executed and you'll get unexpected behavior.

🔧 Proposed fix

```diff
 afterEach(() => {
+  jest.runOnlyPendingTimers();
   jest.useRealTimers();
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/wave-executor.test.js` around lines 32 - 34, The afterEach teardown currently calls jest.useRealTimers() without flushing pending timers; update the afterEach to first call jest.runOnlyPendingTimers() (or jest.runOnlyPendingTimers?.() for safety) to execute any scheduled tasks, then call jest.useRealTimers(); locate the afterEach that references jest.useRealTimers and insert the runOnlyPendingTimers call immediately before switching back to real timers.
281-294: Document the expected event count in `cancelAll`.

`toHaveLength(3)` is implicitly 2 × task_cancelled + 1 × execution_cancelled. Without a comment, this magic number is opaque and fragile.

💬 Suggested clarification

```diff
+// 2 × task_cancelled (one per task) + 1 × execution_cancelled
 expect(events).toHaveLength(3);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/wave-executor.test.js` around lines 281 - 294, The test uses a magic number for emitted events after executor.cancelAll(): replace the opaque expect(events).toHaveLength(3) with an explicit, documented expectation—compute a named expectedEvents count (e.g., const expectedEvents = 2 /* task_cancelled for t1,t2 */ + 1 /* execution_cancelled */) and assert expect(events).toHaveLength(expectedEvents); reference the test case and symbols executor.cancelAll, events, 'task_cancelled' and 'execution_cancelled' so the rationale is clear and resilient to future changes.

tests/core/execution/parallel-executor.test.js (2)
207-216: Clarify that `success: false` is distinct from a rejected promise in RACE mode.

The mock at Line 209 resolves (not rejects) with `{ success: false }`. The test relies on the RACE implementation inspecting the `success` flag to skip this result rather than just taking the first settled promise. A brief inline comment would make this behavioural contract explicit for future readers.

💬 Suggested clarification

```diff
-const claude = jest.fn().mockResolvedValue({ success: false, error: 'down' });
+// claude resolves with success: false (not a rejection) — RACE mode should skip it
+const claude = jest.fn().mockResolvedValue({ success: false, error: 'down' });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/parallel-executor.test.js` around lines 207 - 216, The test relies on ParallelExecutor.execute in ParallelMode.RACE treating a resolved value with success: false differently from a rejected promise, so add a concise inline comment above the claude mock in tests/core/execution/parallel-executor.test.js explaining that claude.mockResolvedValue({ success: false, ... }) intentionally returns a settled (resolved) response with success:false (not a rejected promise) to verify that ParallelExecutor.inspect(succeeded flag) skips that provider in RACE mode; reference ParallelExecutor, execute, and ParallelMode.RACE in the comment so future readers understand the contract being tested.
229-232: Brittle string assertion on warning message.

`toContain('did not reach consensus')` is tightly coupled to the implementation's exact warning text. Prefer asserting the semantic result (`result.consensus === false`) and treat the warning string as supplementary.

💬 Suggested change

```diff
 expect(result.consensus).toBe(false);
-expect(result.warning).toContain('did not reach consensus');
+expect(result.warning).toBeDefined();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/parallel-executor.test.js` around lines 229 - 232, The test currently uses a brittle string assertion expect(result.warning).toContain('did not reach consensus'); instead rely on the semantic assertion already present (expect(result.consensus).toBe(false)) and either remove the exact-warning check or loosen it (e.g., assert that result.warning is a non-empty string or matches a minimal regex like /consensus/i if you must check content). Update the assertions around the result object in parallel-executor.test.js to drop the hardcoded substring check on result.warning (or replace it with a generic non-empty or case-insensitive match) so the test asserts behavior, not exact wording.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/core/execution/wave-executor.test.js`:
- Around line 238-244: The test uses
expect(metrics.successRate).toBeCloseTo(66.67, 0) which gives too-loose
precision; update the matcher to use at least one decimal place (e.g., change
the second argument from 0 to 1) so the assertion meaningfully checks that
successRate is approximately 66.67 in the wave-executor.test (look for the
expect(metrics.successRate).toBeCloseTo call) and run tests.
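For context, Jest's `toBeCloseTo(expected, precision)` passes when `|actual − expected| < 0.5 × 10^(−precision)`, so precision 0 tolerates ±0.5 while precision 1 tolerates ±0.05. A plain-JS sketch of that tolerance rule (the `closeTo` name is ours, not Jest's):

```javascript
// Sketch of Jest's toBeCloseTo tolerance: |actual - expected| < 0.5 * 10^(-precision)
function closeTo(actual, expected, precision = 2) {
  return Math.abs(actual - expected) < Math.pow(10, -precision) / 2;
}

console.log(closeTo(66.5, 66.67, 0));   // true:  precision 0 tolerates +/- 0.5
console.log(closeTo(66.5, 66.67, 1));   // false: precision 1 tolerates +/- 0.05
console.log(closeTo(66.666, 66.67, 1)); // true
```

This is why the reviewer asks for precision 1: with precision 0, `successRate` could be off by almost half a point and still pass.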
---
Outside diff comments:
In `@tests/core/execution/parallel-executor.test.js`:
- Around line 1-9: Add the missing 'use strict' directive at the top of the test
file to match the rest of the suite; open the file that imports ParallelExecutor
and ParallelMode (the test declaring those symbols) and insert 'use strict' as
the first statement so the module runs in strict mode consistently with other
tests.
---
Nitpick comments:
In `@tests/core/execution/parallel-executor.test.js`:
- Around line 207-216: The test relies on ParallelExecutor.execute in
ParallelMode.RACE treating a resolved value with success: false differently from
a rejected promise, so add a concise inline comment above the claude mock in
tests/core/execution/parallel-executor.test.js explaining that
claude.mockResolvedValue({ success: false, ... }) intentionally returns a
settled (resolved) response with success:false (not a rejected promise) to
verify that ParallelExecutor.inspect(succeeded flag) skips that provider in RACE
mode; reference ParallelExecutor, execute, and ParallelMode.RACE in the comment
so future readers understand the contract being tested.
- Around line 229-232: The test currently uses a brittle string assertion
expect(result.warning).toContain('did not reach consensus'); instead rely on the
semantic assertion already present (expect(result.consensus).toBe(false)) and
either remove the exact-warning check or loosen it (e.g., assert that
result.warning is a non-empty string or matches a minimal regex like
/consensus/i if you must check content). Update the assertions around the result
object in parallel-executor.test.js to drop the hardcoded substring check on
result.warning (or replace it with a generic non-empty or case-insensitive
match) so the test asserts behavior, not exact wording.
In `@tests/core/execution/wave-executor.test.js`:
- Around line 32-34: The afterEach teardown currently calls jest.useRealTimers()
without flushing pending timers; update the afterEach to first call
jest.runOnlyPendingTimers() (or jest.runOnlyPendingTimers?.() for safety) to
execute any scheduled tasks, then call jest.useRealTimers(); locate the
afterEach that references jest.useRealTimers and insert the runOnlyPendingTimers
call immediately before switching back to real timers.
- Around line 281-294: The test uses a magic number for emitted events after
executor.cancelAll(): replace the opaque expect(events).toHaveLength(3) with an
explicit, documented expectation—compute a named expectedEvents count (e.g.,
const expectedEvents = 2 /* task_cancelled for t1,t2 */ + 1 /*
execution_cancelled */) and assert expect(events).toHaveLength(expectedEvents);
reference the test case and symbols executor.cancelAll, events, 'task_cancelled'
and 'execution_cancelled' so the rationale is clear and resilient to future
changes.
The wave-analyzer and rate-limit-manager modules exist in the repo,
so { virtual: true } is incorrect and causes mock interception to fail
on Node 18, leading to TypeError: WaveAnalyzer is not a constructor.
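As a sketch of the distinction (the first mock mirrors the test file under review; the second module path is hypothetical):

```js
// Real module on disk: a plain factory mock intercepts it normally.
jest.mock('../../../.aios-core/workflow-intelligence/engine/wave-analyzer', () => null);

// { virtual: true } is intended only for module paths that do NOT exist on disk;
// applying it to a real module can bypass normal resolution and break interception.
jest.mock('some-nonexistent-module', () => ({}), { virtual: true });
```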
Pull request overview
This PR adds comprehensive test coverage for execution modules (parallel-executor.js and wave-executor.js) as part of Priority 1 improvements from issue #52 to address coverage gaps after the ADE implementation. The PR adds 44 new tests across two files, targeting all 5 execution modes of ParallelExecutor and comprehensive WaveExecutor functionality.
Changes:
- Added 18 new tests to `parallel-executor.test.js` covering edge cases for all 5 execution modes (RACE, CONSENSUS, BEST_OF, MERGE, FALLBACK) plus internal methods `_calculateSimilarity`, `_scoreOutput`, and `getStats`
- Added new test suite `wave-executor.test.js` with 26 tests covering constructor, wave execution, critical failure handling, utilities, status reporting, event emissions, and cancellation
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| tests/core/execution/parallel-executor.test.js | Extends existing test suite with 18 deep coverage tests for execution mode edge cases, similarity calculation, output scoring, and stats computation |
| tests/core/execution/wave-executor.test.js | New comprehensive test suite (26 tests) covering wave-based task execution, chunking, metrics, events, and cancellation |
```js
    it('returns gemini when claude fails in race', async () => {
      const raceExec = new ParallelExecutor({ mode: ParallelMode.RACE });
      const claude = jest.fn().mockResolvedValue({ success: false, error: 'down' });
      const gemini = jest.fn().mockResolvedValue({ success: true, output: 'gemini wins' });

      const result = await raceExec.execute(claude, gemini);

      expect(result.selectedProvider).toBe('gemini');
      expect(result.mode).toBe('race');
    });
  });

  describe('CONSENSUS mode - edge cases', () => {
    it('reports no consensus for very different outputs', async () => {
      const consensusExec = new ParallelExecutor({
        mode: ParallelMode.CONSENSUS,
        consensusSimilarity: 0.95,
      });
      const claude = jest.fn().mockResolvedValue({ success: true, output: 'apples oranges bananas' });
      const gemini = jest.fn().mockResolvedValue({ success: true, output: 'zzz xxx yyy completely different' });

      const result = await consensusExec.execute(claude, gemini);

      expect(result.consensus).toBe(false);
      expect(result.warning).toContain('did not reach consensus');
    });

    it('falls back when gemini fails', async () => {
      const consensusExec = new ParallelExecutor({ mode: ParallelMode.CONSENSUS });
      const claude = jest.fn().mockResolvedValue({ success: true, output: 'ok' });
      const gemini = jest.fn().mockRejectedValue(new Error('down'));

      const result = await consensusExec.execute(claude, gemini);

      expect(result.selectedProvider).toBe('claude');
      expect(result.mode).toBe('fallback');
    });

    it('increments consensusAgreements on agreement', async () => {
      const consensusExec = new ParallelExecutor({
        mode: ParallelMode.CONSENSUS,
        consensusSimilarity: 0.5,
      });
      const claude = jest.fn().mockResolvedValue({ success: true, output: 'same words here' });
      const gemini = jest.fn().mockResolvedValue({ success: true, output: 'same words here' });

      await consensusExec.execute(claude, gemini);

      expect(consensusExec.stats.consensusAgreements).toBe(1);
    });
  });

  describe('BEST_OF mode - edge cases', () => {
    it('picks claude when only claude succeeds', async () => {
      const bestExec = new ParallelExecutor({ mode: ParallelMode.BEST_OF });
      const claude = jest.fn().mockResolvedValue({ success: true, output: 'ok' });
      const gemini = jest.fn().mockResolvedValue({ success: false });

      const result = await bestExec.execute(claude, gemini);

      expect(result.selectedProvider).toBe('claude');
    });

    it('picks gemini when only gemini succeeds', async () => {
      const bestExec = new ParallelExecutor({ mode: ParallelMode.BEST_OF });
      const claude = jest.fn().mockResolvedValue({ success: false });
      const gemini = jest.fn().mockResolvedValue({ success: true, output: 'ok' });

      const result = await bestExec.execute(claude, gemini);

      expect(result.selectedProvider).toBe('gemini');
    });

    it('returns failure when both fail', async () => {
      const bestExec = new ParallelExecutor({ mode: ParallelMode.BEST_OF });
      const claude = jest.fn().mockResolvedValue({ success: false });
      const gemini = jest.fn().mockResolvedValue({ success: false });

      const result = await bestExec.execute(claude, gemini);

      expect(result.success).toBe(false);
    });
  });

  describe('MERGE mode - edge cases', () => {
    it('returns single output when claude fails', async () => {
      const mergeExec = new ParallelExecutor({ mode: ParallelMode.MERGE });
      const claude = jest.fn().mockResolvedValue({ success: false });
      const gemini = jest.fn().mockResolvedValue({ success: true, output: 'gemini only' });

      const result = await mergeExec.execute(claude, gemini);

      expect(result.mode).toBe('merge');
      expect(result.output).toBe('gemini only');
    });
  });

  describe('FALLBACK mode - stats tracking', () => {
    it('increments fallbacksUsed when claude fails', async () => {
      const claude = jest.fn().mockRejectedValue(new Error('down'));
      const gemini = jest.fn().mockResolvedValue({ success: true, output: 'ok' });

      await executor.execute(claude, gemini);

      expect(executor.stats.fallbacksUsed).toBe(1);
    });
  });

  describe('_calculateSimilarity', () => {
    it('returns 1 for identical strings', () => {
      expect(executor._calculateSimilarity('hello world', 'hello world')).toBe(1);
    });

    it('returns 0 for completely different strings', () => {
      expect(executor._calculateSimilarity('aaa', 'bbb')).toBe(0);
    });

    it('returns 0 when either input is null', () => {
      expect(executor._calculateSimilarity(null, 'hello')).toBe(0);
      expect(executor._calculateSimilarity('hello', null)).toBe(0);
    });

    it('is case insensitive', () => {
      expect(executor._calculateSimilarity('Hello World', 'hello world')).toBe(1);
    });
  });

  describe('_scoreOutput', () => {
    it('returns 0 for null/empty', () => {
      expect(executor._scoreOutput(null)).toBe(0);
      expect(executor._scoreOutput('')).toBe(0);
    });

    it('scores length > 100 higher', () => {
      const short = 'a'.repeat(50);
      const medium = 'a'.repeat(150);
      expect(executor._scoreOutput(medium)).toBeGreaterThan(executor._scoreOutput(short));
    });

    it('scores code blocks higher', () => {
      const withCode = 'x'.repeat(101) + ' ```code``` ';
      const withoutCode = 'x'.repeat(101);
      expect(executor._scoreOutput(withCode)).toBeGreaterThan(executor._scoreOutput(withoutCode));
    });
  });

  describe('getStats computed rates', () => {
    it('computes consensusRate and fallbackRate', () => {
      executor.stats.executions = 10;
      executor.stats.consensusAgreements = 3;
      executor.stats.fallbacksUsed = 2;

      const stats = executor.getStats();

      expect(stats.consensusRate).toBeCloseTo(0.3);
      expect(stats.fallbackRate).toBeCloseTo(0.2);
    });

    it('returns 0 rates when no executions', () => {
      const stats = executor.getStats();
      expect(stats.consensusRate).toBe(0);
      expect(stats.fallbackRate).toBe(0);
    });
  });
```
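For orientation, the private helpers exercised above can be read against a hypothetical implementation that satisfies the same assertions. This is a sketch of the asserted behavior (Jaccard word overlap and a simple scoring heuristic), not the actual ParallelExecutor code:

```javascript
// Hypothetical sketch consistent with the test assertions above,
// not the real ParallelExecutor implementation.
function calculateSimilarity(a, b) {
  if (!a || !b) return 0; // null/empty input -> 0
  const words = (s) => new Set(s.toLowerCase().split(/\s+/)); // case-insensitive
  const wa = words(a);
  const wb = words(b);
  const intersection = [...wa].filter((w) => wb.has(w)).length;
  const union = new Set([...wa, ...wb]).size;
  return union === 0 ? 0 : intersection / union; // identical -> 1, disjoint -> 0
}

function scoreOutput(output) {
  if (!output) return 0;                 // null/empty -> 0
  let score = 1;
  if (output.length > 100) score += 1;   // longer outputs score higher
  if (output.includes('```')) score += 1; // code blocks score higher
  return score;
}

console.log(calculateSimilarity('Hello World', 'hello world')); // 1
console.log(calculateSimilarity('aaa', 'bbb'));                 // 0
```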
Test naming is inconsistent within this file. The new tests added starting from line 207 drop the "should" prefix (e.g., "returns gemini when claude fails in race"), while the existing tests above line 203 use the "should" prefix (e.g., "should execute both providers in parallel"). For consistency, all test names should follow the same convention. Based on the existing tests in this file, the "should" prefix convention appears to be established.
```js
'use strict';

/**
 * Unit tests for WaveExecutor
 *
 * Covers wave-based parallel task execution: single/multi-wave workflows,
 * task chunking, timeout handling, critical task failure abort,
 * EventEmitter lifecycle, metrics calculation, status reporting, and cancelAll.
 *
 * Refs #52
 */

jest.mock('../../../.aios-core/workflow-intelligence/engine/wave-analyzer', () => null);
jest.mock('../../../.aios-core/core/execution/rate-limit-manager', () => null);

const WaveExecutor = require('../../../.aios-core/core/execution/wave-executor');

describe('WaveExecutor', () => {
  let executor;

  beforeEach(() => {
    jest.resetAllMocks();
    jest.useFakeTimers();
    executor = new WaveExecutor({
      taskExecutor: (task) => Promise.resolve({ success: true, output: `done-${task.id}` }),
      waveAnalyzer: null,
      rateLimitManager: null,
      taskTimeout: 5000,
    });
  });

  afterEach(() => {
    jest.useRealTimers();
  });

  // ── Constructor ──────────────────────────────────────────────────

  describe('constructor', () => {
    test('defaults maxParallel to 4', () => {
      const ex = new WaveExecutor();
      expect(ex.maxParallel).toBe(4);
    });

    test('defaults taskTimeout to 10 minutes', () => {
      const ex = new WaveExecutor();
      expect(ex.taskTimeout).toBe(10 * 60 * 1000);
    });

    test('defaults continueOnNonCriticalFailure to true', () => {
      const ex = new WaveExecutor();
      expect(ex.continueOnNonCriticalFailure).toBe(true);
    });

    test('accepts custom config', () => {
      const ex = new WaveExecutor({ maxParallel: 2, taskTimeout: 1000 });
      expect(ex.maxParallel).toBe(2);
      expect(ex.taskTimeout).toBe(1000);
    });

    test('extends EventEmitter', () => {
      expect(typeof executor.on).toBe('function');
      expect(typeof executor.emit).toBe('function');
    });

    test('initializes empty state', () => {
      expect(executor.activeExecutions.size).toBe(0);
      expect(executor.completedWaves).toEqual([]);
      expect(executor.currentWaveIndex).toBe(0);
    });
  });

  // ── executeWaves ─────────────────────────────────────────────────

  describe('executeWaves', () => {
    test('returns success with empty waves', async () => {
      jest.useRealTimers();
      const ex = new WaveExecutor({ waveAnalyzer: { analyze: () => ({ waves: [] }) } });

      const result = await ex.executeWaves('wf-1');

      expect(result.success).toBe(true);
      expect(result.waves).toEqual([]);
      expect(result.message).toBe('No waves to execute');
    });

    test('executes single wave with tasks', async () => {
      jest.useRealTimers();
      const ex = new WaveExecutor({
        taskExecutor: (task) => Promise.resolve({ success: true, output: task.id }),
      });

      const tasks = [
        { id: 'task-1', description: 'First' },
        { id: 'task-2', description: 'Second' },
      ];
      const result = await ex.executeWaves('wf-1', { tasks });

      expect(result.success).toBe(true);
      expect(result.workflowId).toBe('wf-1');
      expect(result.waves).toHaveLength(1);
      expect(result.waves[0].results).toHaveLength(2);
      expect(result.waves[0].allSucceeded).toBe(true);
    });

    test('executes multiple waves from analyzer', async () => {
      jest.useRealTimers();
      const ex = new WaveExecutor({
        taskExecutor: () => Promise.resolve({ success: true }),
        waveAnalyzer: {
          analyze: () => ({
            waves: [
              { index: 1, tasks: [{ id: 't1', description: 'a' }] },
              { index: 2, tasks: [{ id: 't2', description: 'b' }] },
            ],
          }),
        },
      });

      const result = await ex.executeWaves('wf-2');

      expect(result.waves).toHaveLength(2);
      expect(result.success).toBe(true);
      expect(result.metrics).toBeDefined();
    });
  });

  // ── Critical task failure ────────────────────────────────────────

  describe('critical task failure', () => {
    test('aborts when critical task fails and continueOnNonCriticalFailure is false', async () => {
      jest.useRealTimers();
      const ex = new WaveExecutor({
        taskExecutor: (task) => {
          if (task.id === 'critical-fail') return Promise.resolve({ success: false });
          return Promise.resolve({ success: true });
        },
        continueOnNonCriticalFailure: false,
        waveAnalyzer: {
          analyze: () => ({
            waves: [
              { index: 1, tasks: [{ id: 'critical-fail', description: 'will fail', critical: true }] },
              { index: 2, tasks: [{ id: 'should-skip', description: 'skipped' }] },
            ],
          }),
        },
      });

      const result = await ex.executeWaves('wf-abort');

      expect(result.aborted).toBe(true);
      expect(result.success).toBe(false);
      expect(result.waves).toHaveLength(1);
    });

    test('continues when non-critical task fails', async () => {
      jest.useRealTimers();
      const ex = new WaveExecutor({
        taskExecutor: (task) => {
          if (task.id === 'noncrit-fail') return Promise.resolve({ success: false });
          return Promise.resolve({ success: true });
        },
        waveAnalyzer: {
          analyze: () => ({
            waves: [
              { index: 1, tasks: [{ id: 'noncrit-fail', description: 'fails', critical: false }] },
              { index: 2, tasks: [{ id: 'runs', description: 'runs' }] },
            ],
          }),
        },
      });

      const result = await ex.executeWaves('wf-continue');

      expect(result.aborted).toBe(false);
      expect(result.waves).toHaveLength(2);
    });
  });

  // ── executeWave ──────────────────────────────────────────────────

  describe('executeWave', () => {
    test('returns empty array for wave with no tasks', async () => {
      jest.useRealTimers();
      const result = await executor.executeWave({ tasks: [], index: 1 }, {});
      expect(result).toEqual([]);
    });

    test('handles task executor rejection', async () => {
      jest.useRealTimers();
      const ex = new WaveExecutor({
        taskExecutor: () => Promise.reject(new Error('boom')),
      });

      const result = await ex.executeWave(
        { tasks: [{ id: 'fail-task', description: 'will fail' }], index: 1 },
        {},
      );

      expect(result).toHaveLength(1);
      expect(result[0].success).toBe(false);
    });
  });

  // ── chunkArray ───────────────────────────────────────────────────

  describe('chunkArray', () => {
    test('splits array into chunks of given size', () => {
      const result = executor.chunkArray([1, 2, 3, 4, 5], 2);
      expect(result).toEqual([[1, 2], [3, 4], [5]]);
    });

    test('returns single chunk when array is smaller than size', () => {
      const result = executor.chunkArray([1, 2], 5);
      expect(result).toEqual([[1, 2]]);
    });

    test('returns empty array for empty input', () => {
      const result = executor.chunkArray([], 3);
      expect(result).toEqual([]);
    });
  });

  // ── calculateMetrics ─────────────────────────────────────────────

  describe('calculateMetrics', () => {
    test('computes correct metrics', () => {
      const waveResults = [
        {
          results: [
            { success: true, duration: 100 },
            { success: true, duration: 200 },
            { success: false, duration: 50 },
          ],
        },
      ];

      const metrics = executor.calculateMetrics(waveResults);

      expect(metrics.totalTasks).toBe(3);
      expect(metrics.successful).toBe(2);
      expect(metrics.failed).toBe(1);
      expect(metrics.successRate).toBeCloseTo(66.67, 0);
      expect(metrics.totalDuration).toBe(350);
      expect(metrics.wallTime).toBe(200);
      expect(metrics.totalWaves).toBe(1);
    });

    test('returns 100% success rate for empty results', () => {
      const metrics = executor.calculateMetrics([]);
      expect(metrics.successRate).toBe(100);
      expect(metrics.totalTasks).toBe(0);
    });
  });

  // ── getStatus / formatStatus ─────────────────────────────────────

  describe('getStatus', () => {
    test('returns current state', () => {
      executor.currentWaveIndex = 3;
      const status = executor.getStatus();

      expect(status.currentWave).toBe(3);
      expect(status.activeExecutions).toEqual([]);
      expect(status.completedWaves).toBe(0);
    });
  });

  describe('formatStatus', () => {
    test('returns formatted string', () => {
      const output = executor.formatStatus();

      expect(output).toContain('Wave Executor Status');
      expect(output).toContain('Current Wave');
      expect(output).toContain('Active Executions');
    });
  });

  // ── cancelAll ────────────────────────────────────────────────────

  describe('cancelAll', () => {
    test('marks all active executions as cancelled', () => {
      executor.activeExecutions.set('t1', { task: { id: 't1' }, status: 'running', startTime: Date.now() });
      executor.activeExecutions.set('t2', { task: { id: 't2' }, status: 'running', startTime: Date.now() });

      const events = [];
      executor.on('task_cancelled', (data) => events.push(data));
      executor.on('execution_cancelled', (data) => events.push(data));

      executor.cancelAll();

      expect(executor.activeExecutions.get('t1').status).toBe('cancelled');
      expect(executor.activeExecutions.get('t2').status).toBe('cancelled');
      expect(events).toHaveLength(3);
    });
  });

  // ── Events ───────────────────────────────────────────────────────

  describe('events', () => {
    test('emits execution_started with workflow info', async () => {
      jest.useRealTimers();
      const events = [];
      const ex = new WaveExecutor({
        taskExecutor: () => Promise.resolve({ success: true }),
      });
      ex.on('execution_started', (data) => events.push(data));

      await ex.executeWaves('wf-events', { tasks: [{ id: 't1', description: 'a' }] });

      expect(events).toHaveLength(1);
      expect(events[0].workflowId).toBe('wf-events');
      expect(events[0].totalWaves).toBe(1);
    });

    test('emits wave_started and wave_completed', async () => {
      jest.useRealTimers();
      const started = [];
      const completed = [];
      const ex = new WaveExecutor({
        taskExecutor: () => Promise.resolve({ success: true }),
      });
      ex.on('wave_started', (data) => started.push(data));
      ex.on('wave_completed', (data) => completed.push(data));

      await ex.executeWaves('wf-wave', { tasks: [{ id: 't1', description: 'a' }] });

      expect(started).toHaveLength(1);
      expect(completed).toHaveLength(1);
      expect(completed[0].success).toBe(true);
    });

    test('emits task_completed for each task', async () => {
      jest.useRealTimers();
      const events = [];
      const ex = new WaveExecutor({
        taskExecutor: () => Promise.resolve({ success: true }),
      });
      ex.on('task_completed', (data) => events.push(data));

      await ex.executeWaves('wf-tc', {
        tasks: [
          { id: 't1', description: 'a' },
          { id: 't2', description: 'b' },
        ],
      });

      expect(events).toHaveLength(2);
    });

    test('emits wave_failed on critical failure', async () => {
      jest.useRealTimers();
      const events = [];
      const ex = new WaveExecutor({
        taskExecutor: () => Promise.resolve({ success: false }),
        waveAnalyzer: {
          analyze: () => ({
            waves: [{ index: 1, tasks: [{ id: 'crit', description: 'x', critical: true }] }],
          }),
        },
      });
      ex.on('wave_failed', (data) => events.push(data));

      await ex.executeWaves('wf-fail');

      expect(events).toHaveLength(1);
      expect(events[0].reason).toBe('critical_task_failed');
    });
  });

  // ── defaultExecutor ──────────────────────────────────────────────

  describe('defaultExecutor', () => {
    test('returns success with default message', async () => {
      jest.useRealTimers();
      const ex = new WaveExecutor();
      const consoleSpy = jest.spyOn(console, 'log').mockImplementation();

      const result = await ex.defaultExecutor({ id: 'test-task' }, {});

      expect(result.success).toBe(true);
      expect(result.output).toContain('Default executor');
      consoleSpy.mockRestore();
    });
  });
});
```
The jest.config.js file at lines 80-81 explicitly excludes '.aios-core/core/execution/**' from coverage collection with the comment "Exclude templates, generated files, and legacy scripts". However, this PR adds extensive test coverage for execution modules to address issue #52. This creates a discrepancy where tests are being added for code that is excluded from coverage metrics. Either: (1) the exclusion in jest.config.js should be removed if execution modules should have coverage, or (2) these tests may not contribute to the coverage metrics mentioned in the PR goals. This should be clarified to ensure the tests achieve their intended purpose.
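As a sketch of option (1), dropping the exclusion glob from `collectCoverageFrom` would let these tests count toward coverage. The surrounding patterns below are assumptions for illustration, not the repository's actual config:

```javascript
// Hypothetical jest.config.js fragment: if execution modules should count
// toward coverage, the exclusion glob can simply be removed. The other
// patterns here are placeholders, not the real config contents.
const coverageConfig = {
  collectCoverageFrom: [
    '.aios-core/**/*.js',
    // '!.aios-core/core/execution/**',  // removed so execution modules are covered
    '!**/node_modules/**',
  ],
};

const excludesExecution = coverageConfig.collectCoverageFrom.includes(
  '!.aios-core/core/execution/**',
);
console.log(excludesExecution); // false
```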
|
|
||
| // ── Deep mode coverage (P1 requested by maintainer) ────────────── | ||
|
|
||
| describe('RACE mode - edge cases', () => { | ||
| it('returns gemini when claude fails in race', async () => { | ||
| const raceExec = new ParallelExecutor({ mode: ParallelMode.RACE }); | ||
| const claude = jest.fn().mockResolvedValue({ success: false, error: 'down' }); | ||
| const gemini = jest.fn().mockResolvedValue({ success: true, output: 'gemini wins' }); | ||
|
|
||
| const result = await raceExec.execute(claude, gemini); | ||
|
|
||
| expect(result.selectedProvider).toBe('gemini'); | ||
| expect(result.mode).toBe('race'); | ||
| }); | ||
| }); | ||
|
|
||
| describe('CONSENSUS mode - edge cases', () => { | ||
| it('reports no consensus for very different outputs', async () => { | ||
| const consensusExec = new ParallelExecutor({ | ||
| mode: ParallelMode.CONSENSUS, | ||
| consensusSimilarity: 0.95, | ||
| }); | ||
| const claude = jest.fn().mockResolvedValue({ success: true, output: 'apples oranges bananas' }); | ||
| const gemini = jest.fn().mockResolvedValue({ success: true, output: 'zzz xxx yyy completely different' }); | ||
|
|
||
| const result = await consensusExec.execute(claude, gemini); | ||
|
|
||
| expect(result.consensus).toBe(false); | ||
| expect(result.warning).toContain('did not reach consensus'); | ||
| }); | ||
|
|
||
| it('falls back when gemini fails', async () => { | ||
| const consensusExec = new ParallelExecutor({ mode: ParallelMode.CONSENSUS }); | ||
| const claude = jest.fn().mockResolvedValue({ success: true, output: 'ok' }); | ||
| const gemini = jest.fn().mockRejectedValue(new Error('down')); | ||
|
|
||
| const result = await consensusExec.execute(claude, gemini); | ||
|
|
||
| expect(result.selectedProvider).toBe('claude'); | ||
| expect(result.mode).toBe('fallback'); | ||
| }); | ||
|
|
||
| it('increments consensusAgreements on agreement', async () => { | ||
| const consensusExec = new ParallelExecutor({ | ||
| mode: ParallelMode.CONSENSUS, | ||
| consensusSimilarity: 0.5, | ||
| }); | ||
| const claude = jest.fn().mockResolvedValue({ success: true, output: 'same words here' }); | ||
| const gemini = jest.fn().mockResolvedValue({ success: true, output: 'same words here' }); | ||
|
|
||
| await consensusExec.execute(claude, gemini); | ||
|
|
||
| expect(consensusExec.stats.consensusAgreements).toBe(1); | ||
| }); | ||
| }); | ||
|
|
||
| describe('BEST_OF mode - edge cases', () => { | ||
| it('picks claude when only claude succeeds', async () => { | ||
| const bestExec = new ParallelExecutor({ mode: ParallelMode.BEST_OF }); | ||
| const claude = jest.fn().mockResolvedValue({ success: true, output: 'ok' }); | ||
| const gemini = jest.fn().mockResolvedValue({ success: false }); | ||
|
|
||
| const result = await bestExec.execute(claude, gemini); | ||
|
|
||
| expect(result.selectedProvider).toBe('claude'); | ||
| }); | ||
|
|
||
| it('picks gemini when only gemini succeeds', async () => { | ||
| const bestExec = new ParallelExecutor({ mode: ParallelMode.BEST_OF }); | ||
| const claude = jest.fn().mockResolvedValue({ success: false }); | ||
| const gemini = jest.fn().mockResolvedValue({ success: true, output: 'ok' }); | ||
|
|
||
| const result = await bestExec.execute(claude, gemini); | ||
|
|
||
| expect(result.selectedProvider).toBe('gemini'); | ||
| }); | ||
|
|
||
| it('returns failure when both fail', async () => { | ||
| const bestExec = new ParallelExecutor({ mode: ParallelMode.BEST_OF }); | ||
| const claude = jest.fn().mockResolvedValue({ success: false }); | ||
| const gemini = jest.fn().mockResolvedValue({ success: false }); | ||
|
|
||
| const result = await bestExec.execute(claude, gemini); | ||
|
|
||
| expect(result.success).toBe(false); | ||
| }); | ||
| }); | ||
|
|
||
| describe('MERGE mode - edge cases', () => { | ||
| it('returns single output when claude fails', async () => { | ||
| const mergeExec = new ParallelExecutor({ mode: ParallelMode.MERGE }); | ||
| const claude = jest.fn().mockResolvedValue({ success: false }); | ||
| const gemini = jest.fn().mockResolvedValue({ success: true, output: 'gemini only' }); | ||
|
|
||
| const result = await mergeExec.execute(claude, gemini); | ||
|
|
||
| expect(result.mode).toBe('merge'); | ||
| expect(result.output).toBe('gemini only'); | ||
| }); | ||
| }); | ||
|
|
||
| describe('FALLBACK mode - stats tracking', () => { | ||
| it('increments fallbacksUsed when claude fails', async () => { | ||
| const claude = jest.fn().mockRejectedValue(new Error('down')); | ||
| const gemini = jest.fn().mockResolvedValue({ success: true, output: 'ok' }); | ||
|
|
||
| await executor.execute(claude, gemini); | ||
|
|
||
| expect(executor.stats.fallbacksUsed).toBe(1); | ||
| }); | ||
| }); | ||
|
|
||
| describe('_calculateSimilarity', () => { | ||
| it('returns 1 for identical strings', () => { | ||
| expect(executor._calculateSimilarity('hello world', 'hello world')).toBe(1); | ||
| }); | ||
|
|
||
| it('returns 0 for completely different strings', () => { | ||
| expect(executor._calculateSimilarity('aaa', 'bbb')).toBe(0); | ||
| }); | ||
|
|
||
| it('returns 0 when either input is null', () => { | ||
| expect(executor._calculateSimilarity(null, 'hello')).toBe(0); | ||
| expect(executor._calculateSimilarity('hello', null)).toBe(0); | ||
| }); | ||
|
|
||
| it('is case insensitive', () => { | ||
| expect(executor._calculateSimilarity('Hello World', 'hello world')).toBe(1); | ||
| }); | ||
| }); | ||
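The assertions above pin down the contract of `_calculateSimilarity` without showing its implementation. A word-overlap (Jaccard) sketch that satisfies all four cases, purely for illustration since the real implementation is not part of this diff:

```javascript
// Hedged sketch: case-insensitive word-set Jaccard similarity with a null
// guard, consistent with the test expectations above. The actual
// _calculateSimilarity in parallel-executor.js may be implemented differently.
function calculateSimilarity(a, b) {
  if (!a || !b) return 0;
  const wordsA = new Set(a.toLowerCase().split(/\s+/));
  const wordsB = new Set(b.toLowerCase().split(/\s+/));
  const intersection = [...wordsA].filter((w) => wordsB.has(w)).length;
  const union = new Set([...wordsA, ...wordsB]).size;
  return union === 0 ? 0 : intersection / union;
}

console.log(calculateSimilarity('Hello World', 'hello world')); // 1
console.log(calculateSimilarity('aaa', 'bbb')); // 0
console.log(calculateSimilarity(null, 'hello')); // 0
```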
|
|
||
| describe('_scoreOutput', () => { | ||
| it('returns 0 for null/empty', () => { | ||
| expect(executor._scoreOutput(null)).toBe(0); | ||
| expect(executor._scoreOutput('')).toBe(0); | ||
| }); | ||
|
|
||
| it('scores length > 100 higher', () => { | ||
| const short = 'a'.repeat(50); | ||
| const medium = 'a'.repeat(150); | ||
| expect(executor._scoreOutput(medium)).toBeGreaterThan(executor._scoreOutput(short)); | ||
| }); | ||
|
|
||
| it('scores code blocks higher', () => { | ||
| const withCode = 'x'.repeat(101) + ' ```code``` '; | ||
| const withoutCode = 'x'.repeat(101); | ||
| expect(executor._scoreOutput(withCode)).toBeGreaterThan(executor._scoreOutput(withoutCode)); | ||
| }); | ||
| }); | ||
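Likewise, a minimal scoring function consistent with these `_scoreOutput` assertions might look like the sketch below; the actual weights in the implementation are unknown, and the +1 bonuses here are assumptions:

```javascript
// Hypothetical scoring sketch matching the assertions above; the real
// _scoreOutput and its weights are not shown in this PR.
const FENCE = '`'.repeat(3); // avoids embedding a literal triple backtick

function scoreOutput(output) {
  if (!output) return 0; // null/empty scores 0
  let score = 1;
  if (output.length > 100) score += 1; // longer outputs score higher
  if (output.includes(FENCE)) score += 1; // fenced code blocks score higher
  return score;
}

console.log(scoreOutput(null)); // 0
console.log(scoreOutput('a'.repeat(150)) > scoreOutput('a'.repeat(50))); // true
```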
|
|
||
| describe('getStats computed rates', () => { | ||
| it('computes consensusRate and fallbackRate', () => { | ||
| executor.stats.executions = 10; | ||
| executor.stats.consensusAgreements = 3; | ||
| executor.stats.fallbacksUsed = 2; | ||
|
|
||
| const stats = executor.getStats(); | ||
|
|
||
| expect(stats.consensusRate).toBeCloseTo(0.3); | ||
| expect(stats.fallbackRate).toBeCloseTo(0.2); | ||
| }); | ||
|
|
||
| it('returns 0 rates when no executions', () => { | ||
| const stats = executor.getStats(); | ||
| expect(stats.consensusRate).toBe(0); | ||
| expect(stats.fallbackRate).toBe(0); | ||
| }); | ||
| }); | ||
| }); |
The jest.config.js file at lines 80-81 explicitly excludes '.aios-core/core/execution/**' from coverage collection. However, this PR adds extensive test coverage for execution modules to address issue #52's coverage gaps. This creates a discrepancy where tests are being added for code that is excluded from coverage metrics. Either: (1) the exclusion in jest.config.js should be removed if execution modules should have coverage, or (2) these tests may not contribute to the coverage metrics mentioned in the PR goals. This should be clarified to ensure the tests achieve their intended purpose.
| 'use strict'; | ||
|
|
||
| /** | ||
| * Unit tests for WaveExecutor | ||
| * | ||
| * Covers wave-based parallel task execution: single/multi-wave workflows, | ||
| * task chunking, timeout handling, critical task failure abort, | ||
| * EventEmitter lifecycle, metrics calculation, status reporting, and cancelAll. | ||
| * | ||
| * Refs #52 | ||
| */ | ||
|
|
||
| jest.mock('../../../.aios-core/workflow-intelligence/engine/wave-analyzer', () => null); | ||
| jest.mock('../../../.aios-core/core/execution/rate-limit-manager', () => null); | ||
|
|
||
| const WaveExecutor = require('../../../.aios-core/core/execution/wave-executor'); | ||
|
|
||
| describe('WaveExecutor', () => { | ||
| let executor; | ||
|
|
||
| beforeEach(() => { | ||
| jest.resetAllMocks(); | ||
| jest.useFakeTimers(); | ||
| executor = new WaveExecutor({ | ||
| taskExecutor: (task) => Promise.resolve({ success: true, output: `done-${task.id}` }), | ||
| waveAnalyzer: null, | ||
| rateLimitManager: null, | ||
| taskTimeout: 5000, | ||
| }); | ||
| }); | ||
|
|
||
| afterEach(() => { | ||
| jest.useRealTimers(); | ||
| }); | ||
|
|
||
| // ── Constructor ────────────────────────────────────────────────── | ||
|
|
||
| describe('constructor', () => { | ||
| test('defaults maxParallel to 4', () => { | ||
| const ex = new WaveExecutor(); | ||
| expect(ex.maxParallel).toBe(4); | ||
| }); | ||
|
|
||
| test('defaults taskTimeout to 10 minutes', () => { | ||
| const ex = new WaveExecutor(); | ||
| expect(ex.taskTimeout).toBe(10 * 60 * 1000); | ||
| }); | ||
|
|
||
| test('defaults continueOnNonCriticalFailure to true', () => { | ||
| const ex = new WaveExecutor(); | ||
| expect(ex.continueOnNonCriticalFailure).toBe(true); | ||
| }); | ||
|
|
||
| test('accepts custom config', () => { | ||
| const ex = new WaveExecutor({ maxParallel: 2, taskTimeout: 1000 }); | ||
| expect(ex.maxParallel).toBe(2); | ||
| expect(ex.taskTimeout).toBe(1000); | ||
| }); | ||
|
|
||
| test('extends EventEmitter', () => { | ||
| expect(typeof executor.on).toBe('function'); | ||
| expect(typeof executor.emit).toBe('function'); | ||
| }); | ||
|
|
||
| test('initializes empty state', () => { | ||
| expect(executor.activeExecutions.size).toBe(0); | ||
| expect(executor.completedWaves).toEqual([]); | ||
| expect(executor.currentWaveIndex).toBe(0); | ||
| }); | ||
| }); | ||
|
|
||
| // ── executeWaves ───────────────────────────────────────────────── | ||
|
|
||
| describe('executeWaves', () => { | ||
| test('returns success with empty waves', async () => { | ||
| jest.useRealTimers(); | ||
| const ex = new WaveExecutor({ waveAnalyzer: { analyze: () => ({ waves: [] }) } }); | ||
|
|
||
| const result = await ex.executeWaves('wf-1'); | ||
|
|
||
| expect(result.success).toBe(true); | ||
| expect(result.waves).toEqual([]); | ||
| expect(result.message).toBe('No waves to execute'); | ||
| }); | ||
|
|
||
| test('executes single wave with tasks', async () => { | ||
| jest.useRealTimers(); | ||
| const ex = new WaveExecutor({ | ||
| taskExecutor: (task) => Promise.resolve({ success: true, output: task.id }), | ||
| }); | ||
|
|
||
| const tasks = [ | ||
| { id: 'task-1', description: 'First' }, | ||
| { id: 'task-2', description: 'Second' }, | ||
| ]; | ||
| const result = await ex.executeWaves('wf-1', { tasks }); | ||
|
|
||
| expect(result.success).toBe(true); | ||
| expect(result.workflowId).toBe('wf-1'); | ||
| expect(result.waves).toHaveLength(1); | ||
| expect(result.waves[0].results).toHaveLength(2); | ||
| expect(result.waves[0].allSucceeded).toBe(true); | ||
| }); | ||
|
|
||
| test('executes multiple waves from analyzer', async () => { | ||
| jest.useRealTimers(); | ||
| const ex = new WaveExecutor({ | ||
| taskExecutor: () => Promise.resolve({ success: true }), | ||
| waveAnalyzer: { | ||
| analyze: () => ({ | ||
| waves: [ | ||
| { index: 1, tasks: [{ id: 't1', description: 'a' }] }, | ||
| { index: 2, tasks: [{ id: 't2', description: 'b' }] }, | ||
| ], | ||
| }), | ||
| }, | ||
| }); | ||
|
|
||
| const result = await ex.executeWaves('wf-2'); | ||
|
|
||
| expect(result.waves).toHaveLength(2); | ||
| expect(result.success).toBe(true); | ||
| expect(result.metrics).toBeDefined(); | ||
| }); | ||
| }); | ||
|
|
||
| // ── Critical task failure ──────────────────────────────────────── | ||
|
|
||
| describe('critical task failure', () => { | ||
| test('aborts when critical task fails and continueOnNonCriticalFailure is false', async () => { | ||
| jest.useRealTimers(); | ||
| const ex = new WaveExecutor({ | ||
| taskExecutor: (task) => { | ||
| if (task.id === 'critical-fail') return Promise.resolve({ success: false }); | ||
| return Promise.resolve({ success: true }); | ||
| }, | ||
| continueOnNonCriticalFailure: false, | ||
| waveAnalyzer: { | ||
| analyze: () => ({ | ||
| waves: [ | ||
| { index: 1, tasks: [{ id: 'critical-fail', description: 'will fail', critical: true }] }, | ||
| { index: 2, tasks: [{ id: 'should-skip', description: 'skipped' }] }, | ||
| ], | ||
| }), | ||
| }, | ||
| }); | ||
|
|
||
| const result = await ex.executeWaves('wf-abort'); | ||
|
|
||
| expect(result.aborted).toBe(true); | ||
| expect(result.success).toBe(false); | ||
| expect(result.waves).toHaveLength(1); | ||
| }); | ||
|
|
||
| test('continues when non-critical task fails', async () => { | ||
| jest.useRealTimers(); | ||
| const ex = new WaveExecutor({ | ||
| taskExecutor: (task) => { | ||
| if (task.id === 'noncrit-fail') return Promise.resolve({ success: false }); | ||
| return Promise.resolve({ success: true }); | ||
| }, | ||
| waveAnalyzer: { | ||
| analyze: () => ({ | ||
| waves: [ | ||
| { index: 1, tasks: [{ id: 'noncrit-fail', description: 'fails', critical: false }] }, | ||
| { index: 2, tasks: [{ id: 'runs', description: 'runs' }] }, | ||
| ], | ||
| }), | ||
| }, | ||
| }); | ||
|
|
||
| const result = await ex.executeWaves('wf-continue'); | ||
|
|
||
| expect(result.aborted).toBe(false); | ||
| expect(result.waves).toHaveLength(2); | ||
| }); | ||
| }); | ||
|
|
||
| // ── executeWave ────────────────────────────────────────────────── | ||
|
|
||
| describe('executeWave', () => { | ||
| test('returns empty array for wave with no tasks', async () => { | ||
| jest.useRealTimers(); | ||
| const result = await executor.executeWave({ tasks: [], index: 1 }, {}); | ||
| expect(result).toEqual([]); | ||
| }); | ||
|
|
||
| test('handles task executor rejection', async () => { | ||
| jest.useRealTimers(); | ||
| const ex = new WaveExecutor({ | ||
| taskExecutor: () => Promise.reject(new Error('boom')), | ||
| }); | ||
|
|
||
| const result = await ex.executeWave( | ||
| { tasks: [{ id: 'fail-task', description: 'will fail' }], index: 1 }, | ||
| {}, | ||
| ); | ||
|
|
||
| expect(result).toHaveLength(1); | ||
| expect(result[0].success).toBe(false); | ||
| }); | ||
| }); | ||
|
|
||
| // ── chunkArray ─────────────────────────────────────────────────── | ||
|
|
||
| describe('chunkArray', () => { | ||
| test('splits array into chunks of given size', () => { | ||
| const result = executor.chunkArray([1, 2, 3, 4, 5], 2); | ||
| expect(result).toEqual([[1, 2], [3, 4], [5]]); | ||
| }); | ||
|
|
||
| test('returns single chunk when array is smaller than size', () => { | ||
| const result = executor.chunkArray([1, 2], 5); | ||
| expect(result).toEqual([[1, 2]]); | ||
| }); | ||
|
|
||
| test('returns empty array for empty input', () => { | ||
| const result = executor.chunkArray([], 3); | ||
| expect(result).toEqual([]); | ||
| }); | ||
| }); | ||
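A chunking helper matching these three expectations can be sketched in a few lines (the real `chunkArray` lives in `wave-executor.js` and is not shown in this PR):

```javascript
// Sketch of a slice-based chunking helper consistent with the assertions
// above; the actual implementation may differ.
function chunkArray(array, size) {
  const chunks = [];
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size)); // last chunk may be shorter
  }
  return chunks;
}

console.log(JSON.stringify(chunkArray([1, 2, 3, 4, 5], 2))); // [[1,2],[3,4],[5]]
console.log(JSON.stringify(chunkArray([], 3))); // []
```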
|
|
||
| // ── calculateMetrics ───────────────────────────────────────────── | ||
|
|
||
| describe('calculateMetrics', () => { | ||
| test('computes correct metrics', () => { | ||
| const waveResults = [ | ||
| { | ||
| results: [ | ||
| { success: true, duration: 100 }, | ||
| { success: true, duration: 200 }, | ||
| { success: false, duration: 50 }, | ||
| ], | ||
| }, | ||
| ]; | ||
|
|
||
| const metrics = executor.calculateMetrics(waveResults); | ||
|
|
||
| expect(metrics.totalTasks).toBe(3); | ||
| expect(metrics.successful).toBe(2); | ||
| expect(metrics.failed).toBe(1); | ||
| expect(metrics.successRate).toBeCloseTo(66.67, 0); | ||
| expect(metrics.totalDuration).toBe(350); | ||
| expect(metrics.wallTime).toBe(200); | ||
| expect(metrics.totalWaves).toBe(1); | ||
| }); | ||
|
|
||
| test('returns 100% success rate for empty results', () => { | ||
| const metrics = executor.calculateMetrics([]); | ||
| expect(metrics.successRate).toBe(100); | ||
| expect(metrics.totalTasks).toBe(0); | ||
| }); | ||
| }); | ||
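These assertions imply a specific metrics shape: `totalDuration` sums every task's duration, while `wallTime` takes each wave's longest task, because tasks within a wave run in parallel. A hedged reconstruction, assuming that shape:

```javascript
// Sketch of a metrics calculation consistent with the test expectations
// above (the real calculateMetrics in wave-executor.js is not shown here).
function calculateMetrics(waveResults) {
  const all = waveResults.flatMap((w) => w.results);
  const successful = all.filter((r) => r.success).length;
  const totalTasks = all.length;
  return {
    totalTasks,
    successful,
    failed: totalTasks - successful,
    successRate: totalTasks === 0 ? 100 : (successful / totalTasks) * 100,
    totalDuration: all.reduce((sum, r) => sum + r.duration, 0), // serial sum
    wallTime: waveResults.reduce(
      (sum, w) => sum + Math.max(0, ...w.results.map((r) => r.duration)),
      0, // parallel tasks overlap, so each wave costs only its slowest task
    ),
    totalWaves: waveResults.length,
  };
}

const m = calculateMetrics([
  {
    results: [
      { success: true, duration: 100 },
      { success: true, duration: 200 },
      { success: false, duration: 50 },
    ],
  },
]);
console.log(m.totalTasks, m.successful, m.totalDuration, m.wallTime); // 3 2 350 200
```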
|
|
||
| // ── getStatus / formatStatus ───────────────────────────────────── | ||
|
|
||
| describe('getStatus', () => { | ||
| test('returns current state', () => { | ||
| executor.currentWaveIndex = 3; | ||
| const status = executor.getStatus(); | ||
|
|
||
| expect(status.currentWave).toBe(3); | ||
| expect(status.activeExecutions).toEqual([]); | ||
| expect(status.completedWaves).toBe(0); | ||
| }); | ||
| }); | ||
|
|
||
| describe('formatStatus', () => { | ||
| test('returns formatted string', () => { | ||
| const output = executor.formatStatus(); | ||
|
|
||
| expect(output).toContain('Wave Executor Status'); | ||
| expect(output).toContain('Current Wave'); | ||
| expect(output).toContain('Active Executions'); | ||
| }); | ||
| }); | ||
|
|
||
| // ── cancelAll ──────────────────────────────────────────────────── | ||
|
|
||
| describe('cancelAll', () => { | ||
| test('marks all active executions as cancelled', () => { | ||
| executor.activeExecutions.set('t1', { task: { id: 't1' }, status: 'running', startTime: Date.now() }); | ||
| executor.activeExecutions.set('t2', { task: { id: 't2' }, status: 'running', startTime: Date.now() }); | ||
|
|
||
| const events = []; | ||
| executor.on('task_cancelled', (data) => events.push(data)); | ||
| executor.on('execution_cancelled', (data) => events.push(data)); | ||
|
|
||
| executor.cancelAll(); | ||
|
|
||
| expect(executor.activeExecutions.get('t1').status).toBe('cancelled'); | ||
| expect(executor.activeExecutions.get('t2').status).toBe('cancelled'); | ||
| expect(events).toHaveLength(3); | ||
| }); | ||
| }); | ||
|
|
||
| // ── Events ─────────────────────────────────────────────────────── | ||
|
|
||
| describe('events', () => { | ||
| test('emits execution_started with workflow info', async () => { | ||
| jest.useRealTimers(); | ||
| const events = []; | ||
| const ex = new WaveExecutor({ | ||
| taskExecutor: () => Promise.resolve({ success: true }), | ||
| }); | ||
| ex.on('execution_started', (data) => events.push(data)); | ||
|
|
||
| await ex.executeWaves('wf-events', { tasks: [{ id: 't1', description: 'a' }] }); | ||
|
|
||
| expect(events).toHaveLength(1); | ||
| expect(events[0].workflowId).toBe('wf-events'); | ||
| expect(events[0].totalWaves).toBe(1); | ||
| }); | ||
|
|
||
| test('emits wave_started and wave_completed', async () => { | ||
| jest.useRealTimers(); | ||
| const started = []; | ||
| const completed = []; | ||
| const ex = new WaveExecutor({ | ||
| taskExecutor: () => Promise.resolve({ success: true }), | ||
| }); | ||
| ex.on('wave_started', (data) => started.push(data)); | ||
| ex.on('wave_completed', (data) => completed.push(data)); | ||
|
|
||
| await ex.executeWaves('wf-wave', { tasks: [{ id: 't1', description: 'a' }] }); | ||
|
|
||
| expect(started).toHaveLength(1); | ||
| expect(completed).toHaveLength(1); | ||
| expect(completed[0].success).toBe(true); | ||
| }); | ||
|
|
||
| test('emits task_completed for each task', async () => { | ||
| jest.useRealTimers(); | ||
| const events = []; | ||
| const ex = new WaveExecutor({ | ||
| taskExecutor: () => Promise.resolve({ success: true }), | ||
| }); | ||
| ex.on('task_completed', (data) => events.push(data)); | ||
|
|
||
| await ex.executeWaves('wf-tc', { | ||
| tasks: [ | ||
| { id: 't1', description: 'a' }, | ||
| { id: 't2', description: 'b' }, | ||
| ], | ||
| }); | ||
|
|
||
| expect(events).toHaveLength(2); | ||
| }); | ||
|
|
||
| test('emits wave_failed on critical failure', async () => { | ||
| jest.useRealTimers(); | ||
| const events = []; | ||
| const ex = new WaveExecutor({ | ||
| taskExecutor: () => Promise.resolve({ success: false }), | ||
| waveAnalyzer: { | ||
| analyze: () => ({ | ||
| waves: [{ index: 1, tasks: [{ id: 'crit', description: 'x', critical: true }] }], | ||
| }), | ||
| }, | ||
| }); | ||
| ex.on('wave_failed', (data) => events.push(data)); | ||
|
|
||
| await ex.executeWaves('wf-fail'); | ||
|
|
||
| expect(events).toHaveLength(1); | ||
| expect(events[0].reason).toBe('critical_task_failed'); | ||
| }); | ||
| }); | ||
|
|
||
| // ── defaultExecutor ────────────────────────────────────────────── | ||
|
|
||
| describe('defaultExecutor', () => { | ||
| test('returns success with default message', async () => { | ||
| jest.useRealTimers(); | ||
| const ex = new WaveExecutor(); | ||
| const consoleSpy = jest.spyOn(console, 'log').mockImplementation(); | ||
|
|
||
| const result = await ex.defaultExecutor({ id: 'test-task' }, {}); | ||
|
|
||
| expect(result.success).toBe(true); | ||
| expect(result.output).toContain('Default executor'); | ||
| consoleSpy.mockRestore(); | ||
| }); | ||
| }); | ||
| }); |
This test file creates duplicate test coverage for WaveExecutor, as there is already a comprehensive test suite at tests/core/wave-executor.test.js (584 lines) that tests the same WaveExecutor class from .aios-core/core/execution/wave-executor.js. Both files cover similar functionality including constructor, executeWaves, executeWave, chunkArray, calculateMetrics, and event emissions. Having two separate test files for the same module creates maintenance burden and potential confusion. Consider either: (1) consolidating the tests into one file, (2) clearly documenting why both files exist, or (3) removing the old file if this is meant to replace it as part of a reorganization to match directory structure (tests/core/execution/ to mirror .aios-core/core/execution/).
| beforeEach(() => { | ||
| jest.resetAllMocks(); | ||
| jest.useFakeTimers(); | ||
| executor = new WaveExecutor({ | ||
| taskExecutor: (task) => Promise.resolve({ success: true, output: `done-${task.id}` }), | ||
| waveAnalyzer: null, | ||
| rateLimitManager: null, | ||
| taskTimeout: 5000, | ||
| }); | ||
| }); | ||
|
|
||
| afterEach(() => { | ||
| jest.useRealTimers(); | ||
| }); |
The test setup uses jest.useFakeTimers() in beforeEach but many individual tests then call jest.useRealTimers(). This creates inconsistent timer behavior across tests. Since the WaveExecutor implementation uses setTimeout (for cleanup in executeTaskWithTimeout at line 276), mixing fake and real timers can lead to unpredictable test behavior and potential race conditions. Consider either: (1) removing jest.useFakeTimers() from beforeEach and only using it in specific tests that need to control time, or (2) ensuring all tests consistently use fake timers with proper jest.runAllTimers() or jest.advanceTimersByTime() calls where needed.
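The timeout concern is concrete: a `Promise.race`-based guard like the one described only fires if `setTimeout` actually runs, which fake timers suppress unless the test advances the clock. A plain-Node stand-in (simplified, not the real `executeTaskWithTimeout`) shows the behavior real timers give for free:

```javascript
// Simplified stand-in for a timeout-guarded task runner: races the task
// against a setTimeout-backed timeout and cleans up the timer either way.
// This is an illustration, not the implementation from wave-executor.js.
function executeTaskWithTimeout(taskPromise, ms) {
  let timer;
  const timeout = new Promise((resolve) => {
    timer = setTimeout(() => resolve({ success: false, error: 'timeout' }), ms);
  });
  return Promise.race([taskPromise, timeout]).finally(() => clearTimeout(timer));
}

(async () => {
  const fast = await executeTaskWithTimeout(Promise.resolve({ success: true }), 50);
  const slow = await executeTaskWithTimeout(new Promise(() => {}), 50); // never settles
  console.log(fast.success, slow.success); // true false
})();
```

Under fake timers the second `await` would hang forever unless the test explicitly advanced the clock, which is exactly the race condition the comment warns about.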
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/core/execution/wave-executor.test.js (2)
21-34: `jest.useFakeTimers()` in `beforeEach` is unused and forces every async test to opt out manually.

No test advances or inspects fake timers; all async tests immediately call `jest.useRealTimers()` at their top, and the synchronous tests don't rely on timer APIs either. Removing fake timers from `beforeEach` / `afterEach` eliminates the per-test boilerplate without losing any coverage.

♻️ Proposed refactor

```diff
 beforeEach(() => {
   jest.resetAllMocks();
-  jest.useFakeTimers();
   executor = new WaveExecutor({
     taskExecutor: (task) => Promise.resolve({ success: true, output: `done-${task.id}` }),
     waveAnalyzer: null,
     rateLimitManager: null,
     taskTimeout: 5000,
   });
 });

 afterEach(() => {
-  jest.useRealTimers();
 });
```

Each test that currently opens with `jest.useRealTimers()` can drop that line as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/wave-executor.test.js` around lines 21 - 34, The beforeEach/afterEach setup calls jest.useFakeTimers() and jest.useRealTimers(), but no tests advance or inspect timers; remove the fake/real timers from the shared setup (delete jest.useFakeTimers() from beforeEach and jest.useRealTimers() from afterEach) and then remove the per-test jest.useRealTimers() calls at the start of individual tests that were compensating; keep jest.resetAllMocks() and the WaveExecutor initialization intact (refer to the WaveExecutor constructor usage in the test file) so tests run with real timers by default.
13-14: Consider using structured mocks instead of `null` factories.

Mocking both modules to `null` is fragile: if `wave-executor.js` ever accesses a property or calls a method on either import at module load time, the test will throw a `TypeError` rather than a meaningful test failure. A minimal `{}` or a jest factory with the relevant shape is easier to evolve safely.

♻️ Suggested alternative mock shape

```diff
-jest.mock('../../../.aios-core/workflow-intelligence/engine/wave-analyzer', () => null);
-jest.mock('../../../.aios-core/core/execution/rate-limit-manager', () => null);
+jest.mock('../../../.aios-core/workflow-intelligence/engine/wave-analyzer', () => ({}));
+jest.mock('../../../.aios-core/core/execution/rate-limit-manager', () => ({}));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/wave-executor.test.js` around lines 13 - 14, The tests currently mock '../../../.aios-core/workflow-intelligence/engine/wave-analyzer' and '../../../.aios-core/core/execution/rate-limit-manager' to null via the two jest.mock calls, which can cause TypeErrors if the module is accessed at load time; change those mocks to return a safe shaped object (e.g., an empty object or a minimal factory exposing the methods/properties the code expects) instead of null so require/import in wave-executor.js won’t blow up — update the two jest.mock(...) invocations in tests/core/execution/wave-executor.test.js to use {} or a jest.fn factory that provides the necessary stubs for the wave-analyzer and rate-limit-manager symbols used by WaveExecutor.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/core/execution/wave-executor.test.js`:
- Around line 376-383: The test creates a console mock via jest.spyOn(console,
'log') assigned to consoleSpy but restores it only after assertions, which can
leak the mock if assertions throw; wrap the invocation and assertions of
defaultExecutor (the call to ex.defaultExecutor and the expect(result...) lines)
in a try/finally so consoleSpy.mockRestore() is always executed; locate the
consoleSpy setup (jest.spyOn(console, 'log')), the call to ex.defaultExecutor,
and the two expect(...) assertions and move the mockRestore into the finally
block to guarantee cleanup.
---
Duplicate comments:
In `@tests/core/execution/wave-executor.test.js`:
- Line 242: The test assertion uses toBeCloseTo with precision 0 which is too
loose; update the assertion in tests/core/execution/wave-executor.test.js (the
expect(metrics.successRate).toBeCloseTo(...) call) to use a tighter precision
(e.g., change the second argument from 0 to 2 or a suitable number of decimal
places) or adjust the expected value to a more precise number so the successRate
is asserted with appropriate precision.
---
Nitpick comments:
In `@tests/core/execution/wave-executor.test.js`:
- Around line 21-34: The beforeEach/afterEach setup calls jest.useFakeTimers()
and jest.useRealTimers(), but no tests advance or inspect timers; remove the
fake/real timers from the shared setup (delete jest.useFakeTimers() from
beforeEach and jest.useRealTimers() from afterEach) and then remove the per-test
jest.useRealTimers() calls at the start of individual tests that were
compensating; keep jest.resetAllMocks() and the WaveExecutor initialization
intact (refer to the WaveExecutor constructor usage in the test file) so tests
run with real timers by default.
- Around line 13-14: The tests currently mock
'../../../.aios-core/workflow-intelligence/engine/wave-analyzer' and
'../../../.aios-core/core/execution/rate-limit-manager' to null via the two
jest.mock calls, which can cause TypeErrors if the module is accessed at load
time; change those mocks to return a safe shaped object (e.g., an empty object
or a minimal factory exposing the methods/properties the code expects) instead
of null so require/import in wave-executor.js won’t blow up — update the two
jest.mock(...) invocations in tests/core/execution/wave-executor.test.js to use
{} or a jest.fn factory that provides the necessary stubs for the wave-analyzer
and rate-limit-manager symbols used by WaveExecutor.
- Tighten toBeCloseTo precision from 0 to 1 digit
- Wrap consoleSpy.mockRestore() in try/finally to prevent leak
- Add 'use strict' to parallel-executor.test.js for consistency
- Document magic number in cancelAll event count assertion
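The try/finally change in the commit above can be illustrated with a plain-Node stand-in for the jest spy; the point is that restoration happens even when an assertion throws:

```javascript
// Plain-Node stand-in for jest.spyOn(console, 'log').mockImplementation():
// the restore lives in finally, so it runs even when an assertion throws.
const original = console.log;
console.log = () => {}; // "spy" installed
let caught = false;
try {
  throw new Error('assertion failed'); // simulate a failing expect()
} catch (e) {
  caught = true; // a test runner would record the failure here
} finally {
  console.log = original; // always restored, mirroring consoleSpy.mockRestore()
}
console.log(caught, console.log === original); // true true
```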
🧹 Nitpick comments (1)
tests/core/execution/wave-executor.test.js (1)
21-34: Invert the fake/real timer default to reduce fragility.

`jest.useFakeTimers()` is set in `beforeEach` but every async test immediately overrides it with `jest.useRealTimers()`. No test in the suite actually relies on fake timers (even `cancelAll` at line 282 only reads `Date.now()` for a `startTime` field that isn't asserted). This creates a silent trap for anyone adding a new async test: forgetting the override causes the test to hang when Promises fail to settle under fake timers.

♻️ Suggested approach

Remove `jest.useFakeTimers()` / `jest.useRealTimers()` from the global lifecycle hooks and the individual tests that override them, since no test currently needs controlled timers:

```diff
 beforeEach(() => {
   jest.resetAllMocks();
-  jest.useFakeTimers();
   executor = new WaveExecutor({
     taskExecutor: (task) => Promise.resolve({ success: true, output: `done-${task.id}` }),
     waveAnalyzer: null,
     rateLimitManager: null,
     taskTimeout: 5000,
   });
 });

 afterEach(() => {
-  jest.useRealTimers();
 });
```

Then remove the `jest.useRealTimers()` calls at the top of each individual test that currently has them. If a future test genuinely needs fake timers, opt in locally.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/wave-executor.test.js` around lines 21 - 34, The test suite currently enables fake timers in the global beforeEach (jest.useFakeTimers()) and restores them in afterEach, but individual async tests repeatedly call jest.useRealTimers() and no tests actually require fake timers; remove the global jest.useFakeTimers() call from beforeEach and the matching jest.useRealTimers() from afterEach, and delete the per-test jest.useRealTimers() calls (e.g., in the test referencing cancelAll) so tests run with real timers by default; if a future test needs fake timers, opt-in locally using jest.useFakeTimers() inside that test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/core/execution/wave-executor.test.js`:
- Around line 21-34: The test suite currently enables fake timers in the global
beforeEach (jest.useFakeTimers()) and restores them in afterEach, but individual
async tests repeatedly call jest.useRealTimers() and no tests actually require
fake timers; remove the global jest.useFakeTimers() call from beforeEach and the
matching jest.useRealTimers() from afterEach, and delete the per-test
jest.useRealTimers() calls (e.g., in the test referencing cancelAll) so tests
run with real timers by default; if a future test needs fake timers, opt-in
locally using jest.useFakeTimers() inside that test.
|
@Pedrovaleriolopez @oalanicolas, we have a batch of test-coverage PRs (#467, #468, #478, #501, #502, #503, #518, #534) waiting for review since February. They are tests only: zero regression risk, they only increase the project's coverage. I can consolidate them into fewer PRs if that makes review easier. |
|
@Pedrovaleriolopez this PR has already been approved by CodeRabbit. Could you merge it? Thank you! 🙏 |
Summary
Test plan