chore(workspace): propagate unified make and release automation #4
marlon-costa-dc merged 4 commits into main
Conversation
Summary of Changes (Gemini Code Assist)

This pull request enhances the workspace's maintainability and consistency by centralizing Python version enforcement and streamlining release automation. It ensures that all projects adhere to a specific Python version (3.13) and provides more granular control over the release process by allowing specific projects to be targeted. These changes reduce potential environment-related issues and improve the overall reliability of the build and release pipelines.
Code Review
This pull request introduces a new script to enforce Python 3.13 across the workspace, ensuring consistency by creating .python-version files and injecting version guards into conftest.py. Additionally, it refactors the release automation scripts to leverage a centralized project resolution mechanism and support project filtering during the release process. Key improvements include enhanced robustness in parsing pyproject.toml files through the use of tomllib and regular expressions, and increased security in command execution by adopting shlex.join. Overall, the changes contribute to better maintainability, correctness, and security of the workspace automation.
```python
def _inject_guard(content: str) -> str:
    """Inject version guard after the module docstring, before other imports."""
    # Remove any existing guard first
    content = _remove_existing_guard(content)

    # Find insertion point: after module docstring, before first import
    # Strategy: find the end of the docstring block, insert guard there
    lines = content.split("\n")
    insert_idx = 0

    # Skip shebang
    if lines and lines[0].startswith("#!"):
        insert_idx = 1

    # Skip leading comments
    while insert_idx < len(lines) and lines[insert_idx].startswith("#"):
        insert_idx += 1

    # Skip blank lines
    while insert_idx < len(lines) and not lines[insert_idx].strip():
        insert_idx += 1

    # Skip docstring (triple-quoted)
    if insert_idx < len(lines):
        line = lines[insert_idx].strip()
        if line.startswith('"""') or line.startswith("'''"):
            quote = line[:3]
            # Check if single-line docstring
            if line.count(quote) >= 2 and len(line) > 3:
                insert_idx += 1
            else:
                # Multi-line docstring: find closing quotes
                insert_idx += 1
                while insert_idx < len(lines) and quote not in lines[insert_idx]:
                    insert_idx += 1
                if insert_idx < len(lines):
                    insert_idx += 1

    # Skip blank lines after docstring
    while insert_idx < len(lines) and not lines[insert_idx].strip():
        insert_idx += 1

    # Skip __future__ imports (must come before guard)
    while insert_idx < len(lines) and lines[insert_idx].strip().startswith(
        "from __future__"
    ):
        insert_idx += 1

    # Skip blank lines after __future__
    while insert_idx < len(lines) and not lines[insert_idx].strip():
        insert_idx += 1

    # Insert guard
    before = "\n".join(lines[:insert_idx])
    after = "\n".join(lines[insert_idx:])

    if before and not before.endswith("\n"):
        before += "\n"

    return f"{before}{GUARD_BLOCK}\n{after}"
```
The _inject_guard function's logic for finding the insertion point is quite detailed and relies on specific patterns (shebang, comments, docstrings, __future__ imports). While it appears to handle various scenarios, this manual parsing could be brittle if the conftest.py structure evolves in unexpected ways. Consider adding comprehensive unit tests for this function to ensure its resilience against different conftest.py layouts.
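To make the suggestion concrete, here is a minimal, self-contained sketch of the kind of unit tests that could pin down the insertion-point logic. `find_insert_idx` is a hypothetical standalone copy of the index-finding steps described above, not the actual repository function:

```python
# Hypothetical standalone copy of the insertion-point logic, so the
# layouts below can be exercised without importing the script itself.
def find_insert_idx(content: str) -> int:
    lines = content.split("\n")
    idx = 0
    if lines and lines[0].startswith("#!"):  # shebang
        idx = 1
    while idx < len(lines) and lines[idx].startswith("#"):  # leading comments
        idx += 1
    while idx < len(lines) and not lines[idx].strip():  # blank lines
        idx += 1
    if idx < len(lines):  # module docstring
        line = lines[idx].strip()
        if line.startswith(('"""', "'''")):
            quote = line[:3]
            if line.count(quote) >= 2 and len(line) > 3:  # single-line
                idx += 1
            else:  # multi-line: advance past the closing quotes
                idx += 1
                while idx < len(lines) and quote not in lines[idx]:
                    idx += 1
                if idx < len(lines):
                    idx += 1
    while idx < len(lines) and not lines[idx].strip():  # blanks after docstring
        idx += 1
    while idx < len(lines) and lines[idx].strip().startswith("from __future__"):
        idx += 1
    while idx < len(lines) and not lines[idx].strip():  # blanks after __future__
        idx += 1
    return idx

# Layouts a test suite should cover: bare import, docstring only, and the
# full shebang/comment/docstring/__future__ stack.
assert find_insert_idx("import os\n") == 0
assert find_insert_idx('"""Module docstring."""\n\nimport os\n') == 2
full = (
    "#!/usr/bin/env python\n"
    "# comment\n"
    "\n"
    '"""Doc.\n'
    "\n"
    "More.\n"
    '"""\n'
    "\n"
    "from __future__ import annotations\n"
    "\n"
    "import os\n"
)
assert find_insert_idx(full) == 10  # lands on the "import os" line
```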
```python
content = pyproject.read_bytes()
data = tomllib.loads(content.decode("utf-8"))
project = data.get("project")
if not isinstance(project, dict):
    raise RuntimeError("unable to detect [project] section from pyproject.toml")
version = project.get("version")
if not isinstance(version, str) or not version:
    raise RuntimeError("unable to detect version from pyproject.toml")
return version.removesuffix("-dev")
```
```python
def _replace_version(content: str, version: str) -> tuple[str, bool]:
    project_match = re.search(r"(?ms)^\[project\]\n(?P<body>.*?)(?:^\[|\Z)", content)
    if not project_match:
        return content, False

    body = project_match.group("body")
    version_match = re.search(r'(?m)^version\s*=\s*"(?P<value>[^"]+)"\s*$', body)
    if not version_match:
        return content, False

    current = version_match.group("value")
    current_clean = current.removesuffix("-dev")
    _ = parse_semver(current_clean)
    if current == version:
        return content, False

    replacement = f'version = "{version}"'
    updated_body = re.sub(
        r'(?m)^version\s*=\s*"[^"]+"\s*$',
        replacement,
        body,
        count=1,
    )
    start, end = project_match.span("body")
    updated = content[:start] + updated_body + content[end:]
    return updated, updated != content
```
The refactoring of _replace_version to use regular expressions for precisely locating and updating the version within the [project] section of pyproject.toml is a substantial improvement. This approach is more targeted and less prone to errors than generic string replacements, especially in complex TOML structures.
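To illustrate why scoping the search to the `[project]` table body matters, here is a small self-contained sketch (the sample TOML and version strings are made up): a `version` key in an unrelated table is left untouched, which a plain string replacement could not guarantee.

```python
import re

TOML = """\
[project]
name = "demo"
version = "0.10.0-dev"

[tool.other]
version = "9.9.9"
"""

# Scope the match to the [project] table body, as the refactor does.
m = re.search(r"(?ms)^\[project\]\n(?P<body>.*?)(?:^\[|\Z)", TOML)
assert m is not None
body = m.group("body")

# Replace only the version line inside that body.
new_body = re.sub(
    r'(?m)^version\s*=\s*"[^"]+"\s*$', 'version = "1.2.3"', body, count=1
)
start, end = m.span("body")
updated = TOML[:start] + new_body + TOML[end:]

assert 'version = "1.2.3"' in updated   # [project] version updated
assert 'version = "9.9.9"' in updated   # [tool.other] version untouched
```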
2 issues found across 10 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="scripts/github/pr_manager.py">
<violation number="1" location="scripts/github/pr_manager.py:156">
P2: The `--merge-method` argument lacks a `choices` constraint, so a typo (e.g., `reabse`) silently defaults to `--squash` via the `.get()` fallback in `_merge_pr`. Add `choices=["merge", "rebase", "squash"]` to catch invalid values at argument-parsing time, consistent with how `--action` is handled.</violation>
</file>
<file name="scripts/maintenance/enforce_python_version.py">
<violation number="1" location="scripts/maintenance/enforce_python_version.py:185">
P2: Guard staleness bug: `_ensure_conftest_guard` checks only for marker presence, not content correctness. If `REQUIRED_MINOR` changes, existing guards with the old Python version won't be detected as stale or updated. Consider comparing the actual guard block content (or at least the version within it) rather than just checking for the marker string.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```python
_ = parser.add_argument("--title", default="")
_ = parser.add_argument("--body", default="")
_ = parser.add_argument("--draft", type=int, default=0)
_ = parser.add_argument("--merge-method", default="squash")
_ = parser.add_argument("--auto", type=int, default=0)
_ = parser.add_argument("--delete-branch", type=int, default=0)
```

P2 (scripts/github/pr_manager.py:156): The `--merge-method` argument lacks a `choices` constraint, so a typo (e.g., `reabse`) silently defaults to `--squash` via the `.get()` fallback in `_merge_pr`. Add `choices=["merge", "rebase", "squash"]` to catch invalid values at argument-parsing time, consistent with how `--action` is handled.
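A minimal sketch of the suggested fix, assuming nothing about the rest of `pr_manager.py`:

```python
import argparse

parser = argparse.ArgumentParser()
# Constrain values at parse time instead of silently falling back to
# squash inside _merge_pr.
_ = parser.add_argument(
    "--merge-method",
    default="squash",
    choices=["merge", "rebase", "squash"],
)

args = parser.parse_args(["--merge-method", "rebase"])
assert args.merge_method == "rebase"

# A typo now fails fast at parse time (argparse exits with code 2).
rejected = False
try:
    _ = parser.parse_args(["--merge-method", "reabse"])
except SystemExit as exc:
    rejected = exc.code == 2
assert rejected
```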
```python
content = conftest.read_text(encoding="utf-8")

if _has_guard(content):
    if verbose:
        print(f" ✓ conftest.py guard OK: {project.name}")
```

P2 (scripts/maintenance/enforce_python_version.py:185): Guard staleness bug: `_ensure_conftest_guard` checks only for marker presence, not content correctness. If `REQUIRED_MINOR` changes, existing guards with the old Python version won't be detected as stale or updated. Consider comparing the actual guard block content (or at least the version within it) rather than just checking for the marker string.
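One possible shape for the suggested fix is to compare the full expected guard block rather than the marker alone; the marker string and guard content below are hypothetical stand-ins, not the repository's actual `GUARD_BLOCK`:

```python
REQUIRED_MINOR = 13  # the workspace-wide required Python minor version

# Hypothetical marker and guard body; the real GUARD_BLOCK lives in
# enforce_python_version.py.
GUARD_MARKER = "# --- python-version-guard ---"


def guard_block(minor: int) -> str:
    return (
        f"{GUARD_MARKER}\n"
        "import sys\n"
        f'assert sys.version_info[:2] == (3, {minor}), "Python 3.{minor} required"\n'
    )


def guard_is_current(content: str) -> bool:
    # Compare the full expected block, not just the marker, so a guard
    # written for an older REQUIRED_MINOR is reported as stale.
    return guard_block(REQUIRED_MINOR) in content


assert guard_is_current(guard_block(13))
assert not guard_is_current(guard_block(12))  # stale guard detected
```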
1 issue found across 2 files (changes from recent commits).
<file name="scripts/maintenance/_discover.py">
<violation number="1" location="scripts/maintenance/_discover.py:14">
P0: Missing `libs.discovery` module: `discover_projects` import will fail at runtime with `ModuleNotFoundError`. The `libs/` directory does not exist in this repository. This may be intended as a workspace-level shared library that hasn't been added to this repo yet, but as-is, the script is broken.</violation>
</file>
```diff
-        kind = "submodule" if entry.name in submodules else "external"
-        projects.append(ProjectInfo(path=entry, name=entry.name, kind=kind))
-    return projects
+from libs.discovery import discover_projects
```

P0 (scripts/maintenance/_discover.py:14): Missing `libs.discovery` module: the `discover_projects` import will fail at runtime with `ModuleNotFoundError`. The `libs/` directory does not exist in this repository. This may be intended as a workspace-level shared library that hasn't been added to this repo yet, but as-is, the script is broken.
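A sketch of the try/except gating the reviews suggest, with a deliberately minimal, hypothetical fallback (the real `discover_projects` is expected to return richer project objects with `.name` and `.kind`):

```python
# Gate the shared-library import so the script degrades gracefully until
# libs/ is propagated into this repository.
try:
    from libs.discovery import discover_projects  # workspace shared lib
except ModuleNotFoundError:
    from pathlib import Path

    def discover_projects(root):
        # Hypothetical fallback: treat each child directory containing a
        # pyproject.toml as a project, returning names only.
        return sorted(
            p.name
            for p in Path(root).iterdir()
            if p.is_dir() and (p / "pyproject.toml").exists()
        )
```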
2 issues found across 6 files (changes from recent commits).
<file name="scripts/dependencies/dependency_detection.py">
<violation number="1" location="scripts/dependencies/dependency_detection.py:26">
P0: Module `libs.selection` does not exist in the repository. This top-level import will raise `ModuleNotFoundError` at runtime, making the entire `dependency_detection` module unimportable. If this module is expected to come from a separate PR or workspace propagation step, consider gating the import with a try/except or ensuring it's added atomically with this change.</violation>
</file>
<file name="scripts/core/skill_validate.py">
<violation number="1" location="scripts/core/skill_validate.py:18">
P0: Import from non-existent module `libs.discovery` — the module does not exist anywhere in the repository. This will cause a `ModuleNotFoundError` at import time, making the entire script unusable. The old inline `discover_projects` implementation was removed, so there is no fallback. Ensure `libs/discovery.py` (with a `discover_projects` function returning objects with `.name` and `.kind` attributes) is added to the repository, or revert to the previous inline implementation.</violation>
</file>
```diff
+if str(Path(__file__).resolve().parents[2]) not in sys.path:
+    sys.path.insert(0, str(Path(__file__).resolve().parents[2]))
+
+from libs.selection import resolve_projects
+
 # Mypy output patterns for typing library detection (aligned with stub_supply_chain)
```

P0 (scripts/dependencies/dependency_detection.py:26): Module `libs.selection` does not exist in the repository. This top-level import will raise `ModuleNotFoundError` at runtime, making the entire `dependency_detection` module unimportable. If this module is expected to come from a separate PR or workspace propagation step, consider gating the import with a try/except or ensuring it's added atomically with this change.
```diff
+if str(Path(__file__).resolve().parents[2]) not in sys.path:
+    sys.path.insert(0, str(Path(__file__).resolve().parents[2]))
+
+from libs.discovery import discover_projects as ssot_discover_projects
+
 try:
```

P0 (scripts/core/skill_validate.py:18): Import from non-existent module `libs.discovery` — the module does not exist anywhere in the repository. This will cause a `ModuleNotFoundError` at import time, making the entire script unusable. The old inline `discover_projects` implementation was removed, so there is no fallback. Ensure `libs/discovery.py` (with a `discover_projects` function returning objects with `.name` and `.kind` attributes) is added to the repository, or revert to the previous inline implementation.