⚡ Bolt: Optimize workflow execution loop #138
Conversation
Replaced the O(N) `Promise.race(executingPromises)` wait mechanism with a signal-based O(1) approach to reduce overhead in high-concurrency scenarios. Removed redundant calls to `processLoop` from the main execution loop, relying on the task `.finally` block to drive scheduling.

Impact:
- Reduces main loop wake-up cost from O(Concurrency) to O(1).
- Reduces scheduling checks by ~50%.
- Verified with a 100k-task benchmark.

Co-authored-by: thalesraymond <32554150+thalesraymond@users.noreply.github.com>
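The signal-based wait described above might look roughly like the following sketch. This is a minimal illustration only; the class and member names (`Executor`, `signal`, `resetSignal`) are assumptions for the example, not the actual `WorkflowExecutor` code.

```typescript
// Minimal sketch of the O(1) completion-signal pattern, assuming a simple
// task queue; the real WorkflowExecutor differs in structure and naming.
class Executor {
  private executing = new Set<Promise<void>>();
  private signal!: Promise<void>;
  private resolveSignal!: () => void;

  constructor(private concurrency: number) {
    this.resetSignal();
  }

  // Pre-create a single promise that any finishing task can resolve.
  private resetSignal(): void {
    this.signal = new Promise<void>((res) => (this.resolveSignal = res));
  }

  async run(tasks: Array<() => Promise<void>>): Promise<void> {
    const queue = [...tasks];
    while (queue.length > 0 || this.executing.size > 0) {
      // Schedule new tasks up to the concurrency limit.
      while (queue.length > 0 && this.executing.size < this.concurrency) {
        const task = queue.shift()!;
        const p: Promise<void> = task().finally(() => {
          this.executing.delete(p);
          // Wake the main loop in O(1) instead of racing N promises.
          this.resolveSignal();
        });
        this.executing.add(p);
      }
      if (this.executing.size > 0) {
        await this.signal;  // single listener on one pre-created promise
        this.resetSignal(); // re-arm for the next completion
      }
    }
  }
}
```

Each completed task resolves the shared signal in its `.finally` handler, so the main loop attaches exactly one listener per wake-up regardless of how many tasks are in flight.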
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Summary of Changes

Hello @thalesraymond, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces significant performance enhancements to the workflow execution engine by refining how concurrent tasks are managed. The changes streamline the task scheduling and completion detection process, leading to a more efficient and less resource-intensive operation, particularly under heavy load. The overall impact is a faster and more robust workflow execution system with reduced internal overhead.
Code Review
This pull request introduces a significant performance optimization to the workflow execution loop by replacing `Promise.race` with a more efficient manual completion signal. This change reduces wait-step overhead from O(N) to O(1), which is particularly beneficial for workflows with high concurrency. The removal of a redundant `processLoop` call further streamlines the execution logic. The accompanying changes involve widespread formatting updates across documentation, tests, and configuration files, which improve code consistency and readability. The core logic change in `WorkflowExecutor.ts` is well-implemented and robust, correctly handling potential race conditions while simplifying the main loop. Overall, this is an excellent pull request that enhances both performance and maintainability.
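For contrast, the pattern being replaced behaves roughly like this sketch. It is illustrative only; `waitOldStyle` and the timer-backed demo tasks are invented for the example, not taken from the repository.

```typescript
// Illustrative sketch of the O(N) wait the PR removes: every wake-up
// spreads the Set into a fresh array, and Promise.race attaches a new
// listener to each of the N in-flight promises.
async function waitOldStyle(executing: Set<Promise<void>>): Promise<void> {
  while (executing.size > 0) {
    await Promise.race([...executing]); // O(N) work on every iteration
  }
}

// Demo: timer-backed tasks that remove themselves from the set on completion.
(async () => {
  const executing = new Set<Promise<void>>();
  for (const ms of [5, 10, 15]) {
    const p: Promise<void> = new Promise<void>((res) => setTimeout(res, ms)).then(
      () => void executing.delete(p),
    );
    executing.add(p);
  }
  await waitOldStyle(executing);
})();
```

Because the array spread and listener attachment repeat on every loop iteration, the cost of each wake-up grows with the number of in-flight tasks, which is exactly the overhead the completion-signal approach avoids.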
⚡ Bolt: Optimize workflow execution loop
💡 What:
- Replaced `Promise.race(executingPromises)` with a manually managed "completion signal" promise.
- Removed the redundant `this.processLoop` call in the main `while` loop.

🎯 Why:
- `Promise.race` on a Set of promises requires creating an array and attaching listeners to all promises every time the loop iterates. For high concurrency (e.g., 100+ tasks), this adds unnecessary overhead.
- The `processLoop` method (which schedules new tasks) is already called in the `.finally` block of every task. Calling it again in the main loop after a task finishes is redundant.

📊 Impact:
- Main loop wake-up cost drops from O(Concurrency) to O(1).
- Scheduling checks are reduced by ~50%.
🔬 Measurement:
- Benchmarked with `tests/benchmark_race.test.ts` (100,000 tasks, 100 concurrency). Performance remains excellent (~1.1s for 100k tasks).

PR created automatically by Jules for task 7192801152092314253 started by @thalesraymond