Revert "Run multiple handles in parallel" #238
Conversation
This reverts commit ec3e88a.
Walkthrough
This pull request reverts multi-handle and multi-template support across the CLI and library functions. Changes include removing the 1.52.0 changelog entry, downgrading the version to 1.51.0, and simplifying the test-running functions back to single-handle execution.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
bin/cli.js (1)
465-488: Missing `await` on async function calls may cause incorrect exit behavior.

The action callback was changed from `async` to synchronous, but `runTestsStatusOnly` and `runTestsWithOutput` are both async functions. Calling them without `await` means:

- The CLI may exit before tests complete
- Unhandled promise rejections can occur
- Exit codes won't reflect actual test results (critical for `--status` mode used in CI/GitHub Actions)

Proposed fix: restore the async callback and await the calls

-    .action((options) => {
+    .action(async (options) => {
       if (!options.handle && !options.accountTemplate) {
         consola.error("You need to specify either a reconciliation handle or an account template");
         process.exit(1);
       }
       if (options.test && options.pattern) {
         consola.error("You cannot use both --test and --pattern options at the same time");
         process.exit(1);
       }
       const templateType = options.handle ? "reconciliationText" : "accountTemplate";
       const templateName = options.handle ? options.handle : options.accountTemplate;
       if (options.status) {
-        liquidTestRunner.runTestsStatusOnly(options.firm, templateType, templateName, options.test, options.pattern);
-      } else {
+        await liquidTestRunner.runTestsStatusOnly(options.firm, templateType, templateName, options.test, options.pattern);
+      } else {
         if (options.previewOnly && !options.htmlInput && !options.htmlPreview) {
           consola.info(`When using "--preview-only" you need to specify at least one of the following options: "--html-input", "--html-preview"`);
           process.exit(1);
         }
-        liquidTestRunner.runTestsWithOutput(options.firm, templateType, templateName, options.test, options.previewOnly, options.htmlInput, options.htmlPreview, options.pattern);
+        await liquidTestRunner.runTestsWithOutput(options.firm, templateType, templateName, options.test, options.previewOnly, options.htmlInput, options.htmlPreview, options.pattern);
       }
     });
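For illustration only, a minimal standalone sketch of the failure mode described above; `fakeTestRun` is a hypothetical stand-in for the async test runners and is not code from this repository:

```js
// fakeTestRun stands in for runTestsStatusOnly / runTestsWithOutput: async work
// that may reject and whose outcome should drive the process exit code.
async function fakeTestRun() {
  throw new Error("tests failed"); // simulate a failing test run
}

// Fire-and-forget: the callback returns before the promise settles, so the
// rejection surfaces as an unhandled promise rejection and the exit code is
// never set by the code that knows the test outcome.
function actionWithoutAwait() {
  fakeTestRun();
}

// Awaited: the caller observes the result and sets the exit code explicitly,
// which is what CI (e.g. the --status flow) relies on.
async function actionWithAwait() {
  try {
    await fakeTestRun();
  } catch (err) {
    console.error(err.message);
    process.exitCode = 1;
  }
}

actionWithAwait();
```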
🤖 Fix all issues with AI agents
In @lib/liquidTestRunner.js:
- Around line 472-499: runTestsStatusOnly currently treats a falsy testResult as
PASSED which hides failures from runTests (e.g., missing config/test/empty
YAML). Update the logic so a falsy testResult does not auto-pass: either (A)
change runTests to return an explicit sentinel (e.g., { noTests: true }) for "no
tests to run" and update runTestsStatusOnly to treat that sentinel as a PASS
with a clear log message, or (B) keep runTests as-is and in runTestsStatusOnly
detect undefined/null, log a descriptive error via consola.error("runTests
failed: <reason>") and return "FAILED". Reference runTestsStatusOnly and
runTests when making the change so the function returns FAILED on real failures
but still allows an explicit no-tests sentinel to be treated as success with
explicit logging.
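As a rough sketch of option (B) only, not the committed change, assuming the module's existing `runTests`, `consola` and `checkAllTestsErrorsPresent` helpers are in scope (the function name and error wording below are illustrative):

```js
async function runTestsStatusOnlySketch(firmId, templateType, handle, testName = "", pattern = "") {
  const testResult = await runTests(firmId, templateType, handle, testName, false, "none", pattern);

  // Option (B): a missing result means runTests bailed out (missing config,
  // missing test file, empty YAML, ...), so report it instead of auto-passing.
  if (testResult == null) {
    consola.error("runTests failed: no test result was produced");
    return "FAILED";
  }

  const testRun = testResult.testRun;
  if (testRun && testRun.status === "completed") {
    const errorsPresent = checkAllTestsErrorsPresent(testRun.tests);
    if (errorsPresent === false) {
      consola.success("PASSED");
      return "PASSED";
    }
  }

  consola.error("FAILED");
  return "FAILED";
}
```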
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (4)
- CHANGELOG.md
- bin/cli.js
- lib/liquidTestRunner.js
- package.json
💤 Files with no reviewable changes (1)
- CHANGELOG.md
🧰 Additional context used
🧬 Code graph analysis (1)
bin/cli.js (3)
- lib/liquidTestRunner.js (1): options (16-16)
- lib/cli/utils.js (1): options (89-89)
- lib/cli/devMode.js (1): liquidTestRunner (4-4)
🔇 Additional comments (3)
package.json (1)
3-3: Version downgrade aligns with the revert.
The version rollback from 1.52.0 to 1.51.0 is appropriate for reverting the parallel handles feature introduced in PR #233.
lib/liquidTestRunner.js (1)
487-498: Test completion logic is correct.
The logic properly checks that `testRun` exists, has a "completed" status, and contains no errors before returning "PASSED". The fallback to "FAILED" for all other cases is appropriate.

bin/cli.js (1)
456-457: Option definitions correctly reverted to single-value.
The options now accept single values (`<handle>`, `<name>`) instead of variadic arrays, which aligns with the reverted single-template workflow.
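For reference, a small sketch of that difference, assuming the CLI is built on commander (as the `.option()`/`.action()` calls suggest); the command name and flag below are illustrative, not the exact definitions in bin/cli.js:

```js
const { Command } = require("commander");

const program = new Command();

program
  .command("liquid-test")
  // Single-value form (current): options.handle is one string.
  .option("--handle <handle>", "reconciliation handle")
  // The reverted variadic form would instead be:
  //   .option("--handle <handles...>", "one or more reconciliation handles")
  // and options.handle would then be an array of strings.
  .action((options) => {
    console.log(options.handle); // "my_reconciliation"
  });

program.parse(["node", "cli", "liquid-test", "--handle", "my_reconciliation"]);
```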
Diff under review in lib/liquidTestRunner.js (runTestsStatusOnly):

 async function runTestsStatusOnly(firmId, templateType, handle, testName = "", pattern = "") {
   if (templateType !== "reconciliationText" && templateType !== "accountTemplate") {
     consola.error(`Template type is missing or invalid`);
     process.exit(1);
   }

-  const runSingleHandle = async (singleHandle) => {
-    let status = "FAILED";
-    const failedTestNames = [];
-    const testResult = await runTests(firmId, templateType, singleHandle, testName, false, "none", pattern);
+  let status = "FAILED";
+  const testResult = await runTests(firmId, templateType, handle, testName, false, "none", pattern);

-    if (!testResult) {
-      status = "PASSED";
-    } else {
-      const testRun = testResult?.testRun;
+  if (!testResult) {
+    status = "PASSED";
+    consola.success(status);
+    return status;
+  }

-      if (testRun && testRun?.status === "completed") {
-        const errorsPresent = checkAllTestsErrorsPresent(testRun.tests);
-        if (errorsPresent === false) {
-          status = "PASSED";
-        } else {
-          // Extract failed test names
-          const testNames = Object.keys(testRun.tests).sort();
-          testNames.forEach((testName) => {
-            const testErrorsPresent = checkTestErrorsPresent(testName, testRun.tests);
-            if (testErrorsPresent) {
-              failedTestNames.push(testName);
-            }
-          });
-        }
-      }
-    }
+  const testRun = testResult?.testRun;

-    if (status === "PASSED") {
-      consola.log(`${singleHandle}: ${status}`);
-    } else {
-      consola.log(`${singleHandle}: ${status}`);
-      // Display failed test names
-      failedTestNames.forEach((testName) => {
-        consola.log(`  ${testName}: FAILED`);
-      });
-    }
+  if (testRun && testRun?.status === "completed") {
+    const errorsPresent = checkAllTestsErrorsPresent(testRun.tests);
+    if (errorsPresent === false) {
+      status = "PASSED";
+      consola.success(status);
+      return status;
+    }
+  }

-    return { handle: singleHandle, status, failedTestNames };
-  };

-  const results = await Promise.all(handles.map(runSingleHandle));

-  const overallStatus = results.every((result) => result.status === "PASSED") ? "PASSED" : "FAILED";

-  return overallStatus;
-}
+  consola.error(status);
+  return status;
+}
Returning "PASSED" when no test result is produced may mask failures.
When runTests returns a falsy value (lines 481-485), the function returns "PASSED". However, runTests returns undefined in failure scenarios such as:
- Config file not found (line 110)
- Test file not found (line 119)
- Empty YAML file (line 127)
Returning "PASSED" in these cases could silently mask configuration issues in CI/GitHub Actions workflows where this function is used.
Proposed fix
   if (!testResult) {
-    status = "PASSED";
-    consola.success(status);
+    // No test result likely means config/test file issues - already logged by runTests
     return status;
   }

Alternatively, if the intent is to treat "no tests to run" as success, consider adding explicit logging to clarify why it passed.
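A hypothetical sketch of that alternative is below; it assumes `runTests` would be changed to return a `{ noTests: true }` sentinel, which it does not do today, and the helper name is made up:

```js
// Interprets the result of runTests under the assumed { noTests: true } sentinel.
// Only the falsy/no-tests branches are covered; completed-run error checks stay
// in the existing code.
function preliminaryStatus(testResult, log = console) {
  if (!testResult) {
    // Real failure: missing config, missing test file, empty YAML, ...
    log.error("runTests produced no result");
    return "FAILED";
  }
  if (testResult.noTests) {
    // Explicitly "nothing to run": pass, but say why so CI logs stay readable.
    log.info("No liquid tests found for this template; treating as PASSED");
    return "PASSED";
  }
  return null; // defer to the existing completed/errors checks
}
```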
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
async function runTestsStatusOnly(firmId, templateType, handle, testName = "", pattern = "") {
  if (templateType !== "reconciliationText" && templateType !== "accountTemplate") {
    consola.error(`Template type is missing or invalid`);
    process.exit(1);
  }
  let status = "FAILED";
  const testResult = await runTests(firmId, templateType, handle, testName, false, "none", pattern);
  if (!testResult) {
    // No test result likely means config/test file issues - already logged by runTests
    return status;
  }
  const testRun = testResult?.testRun;
  if (testRun && testRun?.status === "completed") {
    const errorsPresent = checkAllTestsErrorsPresent(testRun.tests);
    if (errorsPresent === false) {
      status = "PASSED";
      consola.success(status);
      return status;
    }
  }
  consola.error(status);
  return status;
}
🤖 Prompt for AI Agents
In @lib/liquidTestRunner.js around lines 472 - 499, runTestsStatusOnly currently
treats a falsy testResult as PASSED which hides failures from runTests (e.g.,
missing config/test/empty YAML). Update the logic so a falsy testResult does not
auto-pass: either (A) change runTests to return an explicit sentinel (e.g., {
noTests: true }) for "no tests to run" and update runTestsStatusOnly to treat
that sentinel as a PASS with a clear log message, or (B) keep runTests as-is and
in runTestsStatusOnly detect undefined/null, log a descriptive error via
consola.error("runTests failed: <reason>") and return "FAILED". Reference
runTestsStatusOnly and runTests when making the change so the function returns
FAILED on real failures but still allows an explicit no-tests sentinel to be
treated as success with explicit logging.
Reverts #233