chore(release): prepare 0.10.0 ci and packaging flow #3

Merged
marlon-costa-dc merged 13 commits into main from 0.10.0-dev on Feb 20, 2026
Conversation

@marlon-costa-dc (Contributor) commented Feb 20, 2026

Summary

  • align repository CI workflow with workspace canonical ci.yml
  • prune non-canonical workflows for the 0.10.0 release line
  • keep branch 0.10.0-dev ready for package generation and release checks

Summary by cubic

Aligns this repo’s CI with the canonical workspace flow and finalizes 0.10.0 packaging for Python 3.13. Adds workflow sync/lint tooling and hardens dependency sync; setup/check/test/validate are advisory for easier rollout.

  • New Features

    • Canonical CI added (.github/workflows/ci.yml) using shared base.mk; installs system build deps and gate toolchain (mypy, pyright, pyrefly, ruff, bandit, pip-audit, markdownlint-cli); setup/check/test/validate run in advisory mode (continue-on-error); removed external base.mk bootstrap.
    • New workflow sync tool (scripts/github/sync_workflows.py) with dry-run/apply, prune, and JSON report at .reports/workflows/sync.json.
    • Workflow lint tool (scripts/github/lint_workflows.py) writes actionlint results to .reports/workflows/actionlint.json; strict mode optional.
    • Dependency sync supports FLEXT_WORKSPACE_ROOT, infers GitHub owner to synthesize a repo map when standalone, project discovery adds --format json, and now validates git refs/repo URLs and cleans target dirs before clone.
  • Refactors

    • Packaging/metadata set for 0.10.0 (Python 3.13 only; classifiers adjusted); dev tooling bumps (autoflake 2.3.3, isort 8.0.0, pylint 4.0.5, ruff 0.15.2, stevedore 5.7.0).
    • Reports moved to .reports/; README simplified and kept at Alpha status.
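The hardened dependency sync above validates git refs before cloning. As an illustration only (the helper name and the exact rules are assumptions, loosely following git-check-ref-format; the real script may differ), a minimal sketch:

```python
import re


def is_valid_git_ref(ref: str) -> bool:
    # Reject a few patterns git-check-ref-format forbids: empty refs,
    # "..", trailing "." or ".lock", and characters outside a safe set.
    if not ref or ".." in ref or ref.endswith(".") or ref.endswith(".lock"):
        return False
    return re.fullmatch(r"[A-Za-z0-9._/\-]+", ref) is not None
```

Validating the ref before passing it to `git clone --branch` keeps malformed or injected ref names from reaching the shell.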

Written for commit 1708183. Summary will update on new commits.

@coderabbitai bot commented Feb 20, 2026

Warning

Rate limit exceeded

@marlon-costa-dc has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 2 minutes and 35 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.


@gemini-code-assist

Summary of Changes

Hello @marlon-costa-dc, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines the development and release processes by standardizing CI workflows, updating core development dependencies, and refining internal tooling. It introduces a new mechanism for synchronizing GitHub workflows across projects, ensuring a consistent and maintainable CI/CD pipeline. Additionally, the project's documentation has been updated to provide clearer guidance in Portuguese, reflecting its current development status and usage recommendations.

Highlights

  • CI/CD Workflow Standardization: The pull request introduces a new script to synchronize GitHub CI workflows across all projects in the workspace, ensuring consistency and aligning with a canonical ci.yml.
  • Dependency Management Updates: Several development dependencies, including autoflake, isort, pylint, ruff, and stevedore, have been updated to their latest versions in poetry.lock and pyproject.toml.
  • Improved Dependency Synchronization Script: The sync_internal_deps.py script has been enhanced with more robust workspace root detection and the ability to infer repository ownership for standalone dependency resolution.
  • README Update (Portuguese): The README.md file has been completely rewritten in Portuguese, providing a detailed description of the project's purpose, operational context, current status, and usage guidelines.
  • Release Preparation: Changes are made to prepare the 0.10.0-dev branch for package generation and release checks, including pruning non-canonical workflows.
Changelog
  • README.md
    • Content completely rewritten in Portuguese, detailing project purpose, operational context, current status, and usage guidelines.
  • poetry.lock
    • Updated autoflake from 2.3.2 to 2.3.3.
    • Updated isort from 7.0.0 to 8.0.0 and adjusted its dependency range for pylint.
    • Updated pylint from 4.0.4 to 4.0.5 and adjusted its dependency range for isort.
    • Updated ruff from 0.15.1 to 0.15.2.
    • Updated stevedore from 5.6.0 to 5.7.0.
  • pyproject.toml
    • Updated poetry-core build system requirement to include poetry-core>=2.0.
    • Adjusted Development Status classifier order, effectively maintaining 'Production/Stable' while adding 'Beta'.
    • Added a trailing comma to the typer dependency.
    • Standardized the format for packages definition.
    • Reordered extend_exclude and known_first_party within [tool.deptry] configuration.
    • Added a trailing comma to pydantic-core in per_rule_ignores.DEP002.
    • Reordered run.source within [tool.coverage].
    • Reordered overrides and added project-excludes to [tool.pyrefly] configuration.
  • scripts/core/generate_scripts_inventory.py
    • Refactored _artifact_path function to simplify its signature and use the .reports directory for artifacts.
    • Updated calls to _artifact_path to reflect the new signature.
  • scripts/dependencies/sync_internal_deps.py
    • Added new helper functions _is_relative_to, _workspace_root_from_env, and _workspace_root_from_parents for more robust workspace root detection.
    • Introduced _owner_from_remote_url, _infer_owner_from_origin, and _synthesized_repo_map to infer repository owner and synthesize repo maps for standalone dependency resolution.
    • Updated _is_workspace_mode to leverage the new workspace root detection helpers.
    • Modified the _main function to use a synthesized standalone repo map if flext-repo-map.toml is missing and the GitHub owner can be inferred.
  • scripts/github/sync_workflows.py
    • Added a new script to synchronize GitHub workflows across projects.
    • Implemented functionality to discover projects, render canonical workflow templates, and apply or prune workflow files based on configuration.
  • scripts/maintenance/_discover.py
    • Added json as a new output format option for project discovery.
    • Implemented logic to output project details in JSON format when requested.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/ci.yml
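The `--format json` option added to project discovery pairs with the workflow sync script, which reads `name` and `path` fields per project. A minimal sketch of that output shape (the helper below is hypothetical; only the two field names come from the scripts in this PR):

```python
import json


def render_projects_json(projects: list[tuple[str, str]]) -> str:
    # Emit one object per project with the "name"/"path" keys that
    # the workflow sync tool looks up on each item.
    return json.dumps(
        [{"name": name, "path": path} for name, path in projects],
        indent=2,
    )
```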
Activity
  • The pull request aims to align the repository's CI workflow with a canonical ci.yml file, suggesting a move towards standardized CI practices.
  • Non-canonical workflows are being pruned for the 0.10.0 release line, indicating a cleanup and consolidation effort.
  • The 0.10.0-dev branch is being prepared for package generation and release checks, signifying progress towards a new release.
  • Dependency updates and new scripting for workflow synchronization point to active development and maintenance of the project's infrastructure.

@gemini-code-assist bot left a comment

Code Review

The pull request successfully aligns the repository's CI and packaging flow for the 0.10.0 release. It introduces a useful workflow synchronization script and improves internal dependency management. However, there are some inconsistencies in the pyproject.toml file regarding build requirements and project classifiers that should be addressed to ensure metadata correctness.

Diff context in pyproject.toml ([build-system]):

    -requires = [
    -  "poetry-core>=2",
    -]
    +requires = [ "poetry-core>=2.0", "poetry-core>=2" ]

medium

The requires list contains redundant entries for poetry-core. Both poetry-core>=2.0 and poetry-core>=2 cover the same version range. It is cleaner to keep only the more specific one.

Suggested change:

    -requires = [ "poetry-core>=2.0", "poetry-core>=2" ]
    +requires = [ "poetry-core>=2.0" ]

Comment on lines +23 to +30
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed",
"Development Status :: 5 - Production/Stable",


medium

The classifiers list contains contradictory development status entries: both 4 - Beta (line 23) and 5 - Production/Stable (line 30). A project should typically have only one development status classifier. Given the current version is 0.10.0-dev, you should likely remove the Production/Stable entry.

@cubic-dev-ai bot left a comment

2 issues found across 8 files

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="pyproject.toml">

<violation number="1" location="pyproject.toml:23">
P1: Contradictory Development Status classifiers: the list now includes both `4 - Beta` and `5 - Production/Stable`. These are mutually exclusive trove classifiers. For a `0.10.0-dev` version, keep only `4 - Beta` and remove `5 - Production/Stable`.</violation>
</file>

<file name="scripts/github/sync_workflows.py">

<violation number="1" location="scripts/github/sync_workflows.py:50">
P2: Relative project paths from the discovery subprocess would be resolved against CWD, not `workspace_root`. If `path_value` is relative, `Path(path_value).resolve()` uses the process's current working directory. Safer to resolve against `workspace_root` to handle both relative and absolute paths correctly.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

requires-python = ">=3.13,<3.14"
classifiers = [
"Development Status :: 5 - Production/Stable",
"Development Status :: 4 - Beta",
@cubic-dev-ai bot commented Feb 20, 2026

P1: Contradictory Development Status classifiers: the list now includes both 4 - Beta and 5 - Production/Stable. These are mutually exclusive trove classifiers. For a 0.10.0-dev version, keep only 4 - Beta and remove 5 - Production/Stable.


path_value = item.get("path")
if not isinstance(name, str) or not isinstance(path_value, str):
continue
projects.append((name, Path(path_value).resolve()))
@cubic-dev-ai bot commented Feb 20, 2026

P2: Relative project paths from the discovery subprocess would be resolved against CWD, not workspace_root. If path_value is relative, Path(path_value).resolve() uses the process's current working directory. Safer to resolve against workspace_root to handle both relative and absolute paths correctly.

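Cubic's suggestion amounts to anchoring relative paths at the workspace root. A minimal sketch of that fix, assuming the discovery loop quoted above (the helper name is hypothetical; `workspace_root` comes from the original script):

```python
from pathlib import Path


def resolve_project_path(path_value: str, workspace_root: Path) -> Path:
    # Anchor relative paths at the workspace root rather than the
    # process CWD; absolute paths are resolved as-is.
    candidate = Path(path_value)
    if candidate.is_absolute():
        return candidate.resolve()
    return (workspace_root / candidate).resolve()
```

In the loop, `projects.append((name, resolve_project_path(path_value, workspace_root)))` would then behave the same regardless of where the script is invoked from.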

@cubic-dev-ai bot left a comment

1 issue found across 1 file (changes from recent commits).



<file name=".github/workflows/ci.yml">

<violation number="1" location=".github/workflows/ci.yml:60">
P2: All CI quality gate steps (`check`, `test`, `validate`) are set to `continue-on-error: true`, which means this CI pipeline can never fail — it will always report green regardless of broken tests, type errors, lint violations, or security findings. This effectively disables CI as a quality gate.

If this is intentional for bootstrapping, consider adding a tracking issue or TODO comment with a target date/milestone to re-enable enforcement. At minimum, consider keeping `make test` as a hard failure so regressions are caught.</violation>
</file>


@@ -0,0 +1,69 @@
# Generated by scripts/github/sync_workflows.py - DO NOT EDIT
@cubic-dev-ai bot commented Feb 20, 2026

P2: All CI quality gate steps (check, test, validate) are set to continue-on-error: true, which means this CI pipeline can never fail — it will always report green regardless of broken tests, type errors, lint violations, or security findings. This effectively disables CI as a quality gate.

If this is intentional for bootstrapping, consider adding a tracking issue or TODO comment with a target date/milestone to re-enable enforcement. At minimum, consider keeping make test as a hard failure so regressions are caught.


@cubic-dev-ai bot left a comment

1 issue found across 1 file (changes from recent commits).



<file name=".github/workflows/ci.yml">

<violation number="1" location=".github/workflows/ci.yml:51">
P2: Making the Setup step `continue-on-error: true` undermines the value of all downstream advisory steps. If `make setup` fails, the environment is broken and Check/Test/Validate will produce unreliable results (or fail for the wrong reasons). Consider keeping Setup as a hard failure so that advisory feedback from subsequent steps is at least trustworthy when it does run.</violation>
</file>


- name: Install CI gate toolchain
run: |
python -m pip install --upgrade pip
python -m pip install mypy pyright pyrefly ruff bandit pip-audit
@cubic-dev-ai bot commented Feb 20, 2026

P2: Making the Setup step continue-on-error: true undermines the value of all downstream advisory steps. If make setup fails, the environment is broken and Check/Test/Validate will produce unreliable results (or fail for the wrong reasons). Consider keeping Setup as a hard failure so that advisory feedback from subsequent steps is at least trustworthy when it does run.


@cubic-dev-ai bot left a comment

6 issues found across 9 files (changes from recent commits).



<file name="scripts/release/notes.py">

<violation number="1" location="scripts/release/notes.py:73">
P2: Potential off-by-one: `len(projects) + 2` doesn't match the 3 hardcoded names in the "Projects impacted" list below. If "root" is intentionally excluded from the packaged count (because it's the workspace root, not a distributable package), please add a brief comment explaining this so future maintainers don't 'fix' the mismatch. If "root" should be counted, change the offset to `+ 3`.</violation>
</file>

<file name="scripts/release/version.py">

<violation number="1" location="scripts/release/version.py:19">
P2: `content.replace(old, new)` replaces **all** occurrences in the file, which could unintentionally modify dependency version pins or other sections. Use `content.replace(old, new, 1)` to limit to a single replacement, consistent with the fallback branch that only replaces the first match.</violation>

<violation number="2" location="scripts/release/version.py:80">
P1: `--check` mode always exits with 0, defeating its purpose as a CI gate. When `changed > 0` and `--apply` is not set, the script should return non-zero to signal that files are out of date.</violation>
</file>

<file name="scripts/release/changelog.py">

<violation number="1" location="scripts/release/changelog.py:34">
P2: Idempotency check compares the full section including the dynamically generated date, so re-running on a different day with the same version/tag will produce a duplicate changelog entry. Check against the version header instead.</violation>

<violation number="2" location="scripts/release/changelog.py:50">
P2: `notes_path` is read unconditionally (line 50), but `notes_text` is only consumed inside `if args.apply`. In dry-run mode this is wasted I/O and will crash with `FileNotFoundError` if the notes file doesn't yet exist. Move the read inside the `if args.apply:` block.</violation>
</file>

<file name="scripts/release/run.py">

<violation number="1" location="scripts/release/run.py:87">
P2: Use `sys.executable` instead of hardcoded `"python"` for subprocess calls. In virtual environments, `"python"` may resolve to a different interpreter than the one running this script. The sibling module `shared.py` already uses `sys.executable` for the same purpose in `discover_projects()`. This same issue applies to lines 106, 126, and 142.</violation>
</file>


if args.check:
_ = print(f"checked_version={args.version}")
_ = print(f"files_changed={changed}")
return 0
@cubic-dev-ai bot commented Feb 20, 2026

P1: --check mode always exits with 0, defeating its purpose as a CI gate. When changed > 0 and --apply is not set, the script should return non-zero to signal that files are out of date.

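The fix cubic suggests boils down to returning a failing status from `--check` whenever files would change. A sketch (function name hypothetical, extracted from the logic described above):

```python
def check_exit_code(changed: int, apply: bool) -> int:
    # --check must signal out-of-date files via the exit status,
    # otherwise CI can never fail on a stale version.
    if not apply and changed > 0:
        return 1
    return 0
```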

"## Scope",
"",
f"- Workspace release version: {version}",
f"- Projects packaged: {len(projects) + 2}",
@cubic-dev-ai bot commented Feb 20, 2026

P2: Potential off-by-one: len(projects) + 2 doesn't match the 3 hardcoded names in the "Projects impacted" list below. If "root" is intentionally excluded from the packaged count (because it's the workspace root, not a distributable package), please add a brief comment explaining this so future maintainers don't 'fix' the mismatch. If "root" should be counted, change the offset to + 3.


old = 'version = "0.10.0-dev"'
new = f'version = "{version}"'
if old in content:
return content.replace(old, new), True
@cubic-dev-ai bot commented Feb 20, 2026

P2: content.replace(old, new) replaces all occurrences in the file, which could unintentionally modify dependency version pins or other sections. Use content.replace(old, new, 1) to limit to a single replacement, consistent with the fallback branch that only replaces the first match.

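`str.replace` takes an optional count argument; limiting it to one occurrence implements cubic's suggestion. A sketch (helper name hypothetical; the `old` literal comes from the diff above):

```python
def pin_version(content: str, version: str) -> tuple[str, bool]:
    # Limit to a single replacement so any other occurrence of the
    # 0.10.0-dev literal (e.g. a dependency pin) is left untouched.
    old = 'version = "0.10.0-dev"'
    new = f'version = "{version}"'
    if old in content:
        return content.replace(old, new, 1), True
    return content, False
```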

tagged_notes_path = root / "docs" / "releases" / f"{args.tag}.md"
notes_path = args.notes if args.notes.is_absolute() else root / args.notes

notes_text = notes_path.read_text(encoding="utf-8")
@cubic-dev-ai bot commented Feb 20, 2026

P2: notes_path is read unconditionally (line 50), but notes_text is only consumed inside if args.apply. In dry-run mode this is wasted I/O and will crash with FileNotFoundError if the notes file doesn't yet exist. Move the read inside the if args.apply: block.


"- Status: Alpha, non-production\n\n"
f"Full notes: `docs/releases/{tag}.md`\n\n"
)
if section in existing:
@cubic-dev-ai bot commented Feb 20, 2026

P2: Idempotency check compares the full section including the dynamically generated date, so re-running on a different day with the same version/tag will produce a duplicate changelog entry. Check against the version header instead.

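Keying the duplicate check off the version header alone makes re-runs stable across days. A sketch (the header format below is an assumption for illustration, not taken from the script):

```python
def section_already_present(existing: str, tag: str) -> bool:
    # Compare only the version header: the full section embeds the
    # generation date, which differs between runs.
    header = f"## {tag}"
    return header in existing
```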


def _phase_version(root: Path, version: str, dry_run: bool) -> None:
command = [
"python",
@cubic-dev-ai bot commented Feb 20, 2026

P2: Use sys.executable instead of hardcoded "python" for subprocess calls. In virtual environments, "python" may resolve to a different interpreter than the one running this script. The sibling module shared.py already uses sys.executable for the same purpose in discover_projects(). This same issue applies to lines 106, 126, and 142.

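The `shared.py` pattern cubic references can be applied uniformly to the subprocess commands. A sketch (helper name hypothetical):

```python
import sys


def release_command(script: str, *args: str) -> list[str]:
    # sys.executable is the interpreter running this process, so the
    # subprocess reuses the same virtualenv instead of whatever
    # "python" happens to resolve to on PATH.
    return [sys.executable, script, *args]
```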

@marlon-costa-dc marlon-costa-dc merged commit 27e8114 into main Feb 20, 2026
2 checks passed