chore: add strict codecov and zero-issue quality gates#16

Merged
Prekzursil merged 5 commits into main from feat/quality-zero-gates-2026-03-03
Mar 3, 2026

Conversation

@Prekzursil
Owner

Summary

  • Adds strict Codecov + quality-zero workflows.
  • Adds fail-closed quality secrets preflight.
  • Adds 100% coverage gate (Coverage 100 Gate).
  • Adds aggregate context gate (Quality Zero Gate).
  • Adds momentstudio-style codecov.yml structure (require_ci_to_pass, component_management, bundle_analysis) with strict 100% policy.

Required Secrets/Vars

  • Secrets:
    • CODECOV_TOKEN
    • SONAR_TOKEN
    • CODACY_API_TOKEN
    • SNYK_TOKEN
    • SENTRY_AUTH_TOKEN
    • DEEPSCAN_API_TOKEN
  • Vars:
    • SENTRY_ORG
    • SENTRY_PROJECT
    • DEEPSCAN_POLICY_MODE
    • DEEPSCAN_OPEN_ISSUES_URL

Policy

This PR intentionally fails closed when required secrets/vars are missing.
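The fail-closed policy can be sketched as a minimal preflight: any required name that is unset, or set to a blank value, fails the gate. The names below come from the Required Secrets/Vars list above, but `missing_requirements` is a hypothetical helper, not the actual `check_quality_secrets.py` implementation:

```python
import os

# Names taken from this PR's "Required Secrets/Vars" section.
REQUIRED_SECRETS = ["CODECOV_TOKEN", "SONAR_TOKEN", "CODACY_API_TOKEN",
                    "SNYK_TOKEN", "SENTRY_AUTH_TOKEN", "DEEPSCAN_API_TOKEN"]
REQUIRED_VARS = ["SENTRY_ORG", "SENTRY_PROJECT",
                 "DEEPSCAN_POLICY_MODE", "DEEPSCAN_OPEN_ISSUES_URL"]


def missing_requirements(env=os.environ):
    """Fail closed: treat unset *and* blank values as missing."""
    return [name for name in REQUIRED_SECRETS + REQUIRED_VARS
            if not env.get(name, "").strip()]
```

A workflow step would then exit non-zero whenever `missing_requirements()` returns a non-empty list, which is what "fails closed" means here.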

Prekzursil and others added 2 commits March 3, 2026 08:21
Co-authored-by: Codex <noreply@openai.com>
Co-authored-by: Codex <noreply@openai.com>
@devloai

devloai bot commented Mar 3, 2026

Unable to trigger custom agent "Code Reviewer". You have run out of credits 😔
Please upgrade your plan or buy additional credits from the subscription page.

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@coderabbitai

coderabbitai bot commented Mar 3, 2026

Warning

Rate limit exceeded

@Prekzursil has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 15 minutes and 19 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 0b35b66 and 912ddfa.

📒 Files selected for processing (18)
  • .github/workflows/codacy-zero.yml
  • .github/workflows/codecov-analytics.yml
  • .github/workflows/coverage-100.yml
  • .github/workflows/deepscan-zero.yml
  • .github/workflows/quality-zero-gate.yml
  • .github/workflows/sentry-zero.yml
  • .github/workflows/snyk-zero.yml
  • .github/workflows/sonar-zero.yml
  • codecov.yml
  • docs/quality/QUALITY_ZERO_GATES.md
  • scripts/quality/assert_coverage_100.py
  • scripts/quality/check_codacy_zero.py
  • scripts/quality/check_deepscan_zero.py
  • scripts/quality/check_quality_secrets.py
  • scripts/quality/check_required_checks.py
  • scripts/quality/check_sentry_zero.py
  • scripts/quality/check_sonar_zero.py
  • scripts/security_helpers.py

@qodo-code-review

Review Summary by Qodo

Add strict quality-zero gates and comprehensive code quality enforcement

✨ Enhancement


Walkthroughs

Description
• Adds comprehensive quality-zero gate enforcement scripts
  - Coverage 100% validation for backend and frontend
  - Zero-issue assertions for Sonar, Codacy, Sentry, DeepScan
  - Required GitHub check context aggregation
  - Quality secrets preflight validation
• Implements strict GitHub Actions workflows
  - Separate workflows for each quality tool integration
  - Aggregated Quality Zero Gate workflow with fail-closed policy
  - Artifact uploads for audit and debugging
• Establishes security-hardened URL validation helper
  - HTTPS-only enforcement with credential stripping
  - Private/local IP address rejection
  - Hostname allowlist support
• Configures Codecov with strict 100% coverage policy
  - Component-based coverage tracking
  - Bundle analysis with zero-warning threshold
Diagram
flowchart LR
  Scripts["Quality Scripts<br/>assert_coverage_100<br/>check_*_zero<br/>check_quality_secrets<br/>check_required_checks"]
  Helper["Security Helper<br/>normalize_https_url"]
  Workflows["GitHub Workflows<br/>coverage-100<br/>codecov-analytics<br/>sonar/codacy/sentry<br/>snyk/deepscan<br/>quality-zero-gate"]
  Config["Configuration<br/>codecov.yml<br/>QUALITY_ZERO_GATES.md"]
  Scripts --> Helper
  Scripts --> Workflows
  Config --> Workflows
  Workflows --> Gate["Aggregated Quality<br/>Zero Gate"]
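The security-hardened URL helper described in the walkthrough (shown as `normalize_https_url` in the diagram) could look roughly like this. This is a sketch under the stated constraints only (HTTPS-only, credential stripping, private/local IP rejection, optional hostname allowlist), not the repo's actual `scripts/security_helpers.py`:

```python
import ipaddress
from urllib.parse import urlsplit, urlunsplit


def _is_forbidden_ip(host: str) -> bool:
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # not an IP literal; hostname checks apply instead
    return ip.is_private or ip.is_loopback or ip.is_link_local


def normalize_https_url(raw: str, allowed_hosts=None) -> str:
    """HTTPS-only: strip embedded credentials, reject private/local IPs,
    and optionally enforce a hostname allowlist."""
    parts = urlsplit(raw)
    if parts.scheme != "https" or not parts.hostname:
        raise ValueError(f"only https:// URLs with a host are accepted: {raw!r}")
    host = parts.hostname
    if _is_forbidden_ip(host):
        raise ValueError(f"private/local address rejected: {host!r}")
    if allowed_hosts is not None and host not in allowed_hosts:
        raise ValueError(f"host not in allowlist: {host!r}")
    # Rebuild netloc from host[:port] only, dropping any user:password@ part.
    netloc = host if parts.port is None else f"{host}:{parts.port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```

For example, `normalize_https_url("https://user:pw@api.example.com/v1")` returns the URL with the credentials stripped, while an `http://` URL or a `https://10.0.0.5/` target raises `ValueError`.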


File Changes

1. scripts/quality/assert_coverage_100.py ✨ Enhancement +192/-0

Coverage 100% enforcement script with XML/LCOV parsing

scripts/quality/assert_coverage_100.py


2. scripts/quality/check_codacy_zero.py ✨ Enhancement +198/-0

Codacy API integration for zero-issue validation

scripts/quality/check_codacy_zero.py


3. scripts/quality/check_deepscan_zero.py ✨ Enhancement +155/-0

DeepScan API integration for zero-issue validation

scripts/quality/check_deepscan_zero.py


View more (15)
4. scripts/quality/check_quality_secrets.py ✨ Enhancement +135/-0

Quality secrets and variables preflight validation

scripts/quality/check_quality_secrets.py


5. scripts/quality/check_required_checks.py ✨ Enhancement +196/-0

GitHub required check contexts aggregation and polling

scripts/quality/check_required_checks.py


6. scripts/quality/check_sentry_zero.py ✨ Enhancement +181/-0

Sentry API integration for unresolved issues validation

scripts/quality/check_sentry_zero.py


7. scripts/quality/check_sonar_zero.py ✨ Enhancement +166/-0

SonarCloud API integration for issues and quality gate

scripts/quality/check_sonar_zero.py


8. scripts/security_helpers.py ✨ Enhancement +60/-0

HTTPS URL validation with security hardening

scripts/security_helpers.py


9. .github/workflows/codacy-zero.yml ⚙️ Configuration changes +33/-0

Codacy zero-issue gate workflow

.github/workflows/codacy-zero.yml


10. .github/workflows/codecov-analytics.yml ⚙️ Configuration changes +51/-0

Codecov coverage collection and upload workflow

.github/workflows/codecov-analytics.yml


11. .github/workflows/coverage-100.yml ⚙️ Configuration changes +52/-0

Coverage 100% enforcement workflow

.github/workflows/coverage-100.yml


12. .github/workflows/deepscan-zero.yml ⚙️ Configuration changes +32/-0

DeepScan zero-issue gate workflow

.github/workflows/deepscan-zero.yml


13. .github/workflows/quality-zero-gate.yml ⚙️ Configuration changes +86/-0

Aggregated quality gate with secrets preflight

.github/workflows/quality-zero-gate.yml


14. .github/workflows/sentry-zero.yml ⚙️ Configuration changes +34/-0

Sentry unresolved issues gate workflow

.github/workflows/sentry-zero.yml


15. .github/workflows/snyk-zero.yml ⚙️ Configuration changes +36/-0

Snyk OSS and code security testing workflow

.github/workflows/snyk-zero.yml


16. .github/workflows/sonar-zero.yml ⚙️ Configuration changes +36/-0

SonarCloud analysis and zero-issue gate workflow

.github/workflows/sonar-zero.yml


17. codecov.yml ⚙️ Configuration changes +47/-0

Codecov configuration with strict 100% policy

codecov.yml


18. docs/quality/QUALITY_ZERO_GATES.md 📝 Documentation +10/-0

Quality zero gates documentation and policy

docs/quality/QUALITY_ZERO_GATES.md



@qodo-code-review

qodo-code-review bot commented Mar 3, 2026

Code Review by Qodo

🐞 Bugs (6) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. Preflight always fails 🐞 Bug ⛯ Reliability
Description
The secrets preflight script requires several secrets/vars that the workflow does not provide (e.g.,
APPLITOOLS/Percy/BrowserStack and SENTRY_PROJECT_BACKEND/WEB), so the Quality Secrets Preflight job
will fail even when the workflow’s listed secrets/vars are set.
Code

scripts/quality/check_quality_secrets.py[R11-27]

+DEFAULT_REQUIRED_SECRETS = [
+    "SONAR_TOKEN",
+    "CODACY_API_TOKEN",
+    "CODECOV_TOKEN",
+    "SNYK_TOKEN",
+    "SENTRY_AUTH_TOKEN",
+    "APPLITOOLS_API_KEY",
+    "PERCY_TOKEN",
+    "BROWSERSTACK_USERNAME",
+    "BROWSERSTACK_ACCESS_KEY",
+]
+
+DEFAULT_REQUIRED_VARS = [
+    "SENTRY_ORG",
+    "SENTRY_PROJECT_BACKEND",
+    "SENTRY_PROJECT_WEB",
+]
Evidence
check_quality_secrets.py hard-codes defaults for secrets/vars that are not set in
quality-zero-gate.yml (and not added via --required-secret/--required-var), so evaluate_env()
will mark them missing and return a failing status.

scripts/quality/check_quality_secrets.py[11-27]
.github/workflows/quality-zero-gate.yml[10-38]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The secrets preflight job fails closed for secrets/vars that are not actually provided by the workflows, blocking the entire Quality Zero Gate.
### Issue Context
`check_quality_secrets.py` includes many default requirements that do not match `quality-zero-gate.yml`’s environment and arguments.
### Fix Focus Areas
- scripts/quality/check_quality_secrets.py[11-27]
- .github/workflows/quality-zero-gate.yml[17-38]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
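One way to keep the script and workflow aligned, along the lines of the fix focus areas above, is to drop the hard-coded default lists and take requirements only from explicit `--required-secret`/`--required-var` flags, so the preflight checks exactly what the workflow declares. `parse_required_names` and `evaluate_env` here are a hypothetical sketch, not the PR's actual code:

```python
import argparse
import os


def parse_required_names(argv):
    """Collect required names only from explicit CLI flags: no hidden defaults."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--required-secret", action="append", default=[])
    parser.add_argument("--required-var", action="append", default=[])
    args = parser.parse_args(argv)
    return args.required_secret + args.required_var


def evaluate_env(names, env=os.environ):
    """Return the subset of names that are unset or blank (fail closed)."""
    return [n for n in names if not env.get(n, "").strip()]
```

With this shape, the workflow's `env:` block and its flag list are the single source of truth, and adding a new gate means adding both the secret and the flag in the same YAML file.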


2. GitHub API perms missing 🐞 Bug ⛯ Reliability
Description
Quality Zero Gate restricts workflow token permissions to contents: read, but then calls the
GitHub Checks/Statuses APIs to query check results; this can cause 403 errors and a permanently
failing gate.
Code

.github/workflows/quality-zero-gate.yml[R10-12]

+permissions:
+  contents: read
+
Evidence
The workflow explicitly sets only contents: read, yet check_required_checks.py calls the GitHub
API endpoints for check-runs and commit status using GITHUB_TOKEN. Those endpoints are not
“contents” operations, so permissions are likely insufficient and can fail at runtime.

.github/workflows/quality-zero-gate.yml[10-12]
.github/workflows/quality-zero-gate.yml[62-66]
scripts/quality/check_required_checks.py[28-41]
scripts/quality/check_required_checks.py[151-153]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The Quality Zero Gate queries GitHub check-runs and commit statuses, but the workflow token is restricted to `contents: read`, which risks API authorization failures.
### Issue Context
The workflow runs `scripts/quality/check_required_checks.py` which calls `/check-runs` and `/status` endpoints.
### Fix Focus Areas
- .github/workflows/quality-zero-gate.yml[10-12]
- scripts/quality/check_required_checks.py[151-153]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


3. Coverage gate false-pass 🐞 Bug ✓ Correctness
Description
The 100% coverage assertion treats total <= 0 as 100% coverage, so empty/invalid coverage inputs
can incorrectly pass the gate.
Code

scripts/quality/assert_coverage_100.py[R20-24]

+    @property
+    def percent(self) -> float:
+        if self.total <= 0:
+            return 100.0
+        return (self.covered / self.total) * 100.0
Evidence
CoverageStats.percent returns 100.0 when `total <= 0`, and evaluate() only fails when `percent < 100.0`. Therefore parsing failures that produce `total=0` (or an empty report) will appear as 100% covered and pass.

scripts/quality/assert_coverage_100.py[20-24]
scripts/quality/assert_coverage_100.py[86-100]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The Coverage 100 gate can incorrectly pass on empty or unparsable coverage data.
### Issue Context
`CoverageStats.percent` currently returns 100% when `total <= 0`, and the evaluator relies on that percent.
### Fix Focus Areas
- scripts/quality/assert_coverage_100.py[20-25]
- scripts/quality/assert_coverage_100.py[86-100]
- scripts/quality/assert_coverage_100.py[49-83]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
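A fail-closed variant of the property would raise on empty input instead of defaulting to 100%. This is a sketch of the fix described above, not the actual `assert_coverage_100.py` code:

```python
from dataclasses import dataclass


@dataclass
class CoverageStats:
    name: str
    covered: int
    total: int

    @property
    def percent(self) -> float:
        # Fail closed: an empty or unparsable report (total <= 0) must
        # never be reported as fully covered.
        if self.total <= 0:
            raise ValueError(f"{self.name}: report contains no measurable lines")
        return (self.covered / self.total) * 100.0
```

The evaluator can then catch `ValueError` per report and count it as a gate failure, which converts the current silent false-pass into an explicit error.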



Remediation recommended

4. Polling fails too early 🐞 Bug ⛯ Reliability
Description
The required-check polling loop only waits for in-progress *check runs*; if a required context comes
from commit statuses and is still pending, the script can stop polling and fail prematurely.
Code

scripts/quality/check_required_checks.py[R170-174]

+        # wait only while there are missing contexts or in-progress check-runs
+        in_progress = any(v.get("state") != "completed" for v in contexts.values() if v.get("source") == "check_run")
+        if not missing and not in_progress:
+            break
+        time.sleep(max(args.poll_seconds, 1))
Evidence
The script tracks commit-status contexts (source=status) and stores their state, but the
in_progress calculation ignores them. As a result, a required status context in pending will be
treated as failed and can trigger an early exit even though it may later become success.

scripts/quality/check_required_checks.py[57-65]
scripts/quality/check_required_checks.py[170-174]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`check_required_checks.py` can fail early when a required context is a commit status that is still pending.
### Issue Context
The script collects both check-run and status contexts but only waits for check-runs to complete.
### Fix Focus Areas
- scripts/quality/check_required_checks.py[44-67]
- scripts/quality/check_required_checks.py[170-174]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
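A polling predicate that also waits on pending commit statuses could look like this. The `source`/`state` field names follow the evidence above; the helper name itself is hypothetical:

```python
def any_pending(contexts):
    """True while any required context is still running, whether it came
    from the check-runs API or from legacy commit statuses."""
    for ctx in contexts.values():
        source, state = ctx.get("source"), ctx.get("state")
        if source == "check_run" and state != "completed":
            return True
        if source == "status" and state == "pending":
            return True
    return False
```

The loop would then keep sleeping while `missing` contexts remain or `any_pending(contexts)` is true, rather than only waiting on check-runs.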


5. Coverage scope too broad 🐞 Bug ➹ Performance
Description
The new coverage workflows run pytest with coverage, but backend's pytest configuration adds `--cov=.`, which may unintentionally broaden what gets measured (e.g., including non-source directories) and make the 100% gate noisy and slow.
Code

.github/workflows/coverage-100.yml[R26-31]

+      - name: Backend coverage
+        run: |
+          mkdir -p coverage
+          python -m pip install --upgrade pip
+          python -m pip install -r backend/requirements.txt pytest pytest-cov
+          python -m pytest backend --cov=backend --cov-report=xml:backend/coverage.xml
Evidence
The workflow runs backend coverage while backend's pytest configuration forces `--cov=.`. Depending on pytest rootdir/config resolution, this can expand coverage beyond the intended modules and increase runtime or produce false failures once this new workflow is enabled.

.github/workflows/coverage-100.yml[26-31]
backend/pytest.ini[1-4]
backend/venv/pyvenv.cfg[1-5]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The new coverage workflows can end up measuring more files than intended due to backend pytest addopts.
### Issue Context
`backend/pytest.ini` injects `--cov=.` and the new workflow adds additional `--cov` arguments.
### Fix Focus Areas
- backend/pytest.ini[1-4]
- .github/workflows/coverage-100.yml[26-31]
- .github/workflows/codecov-analytics.yml[34-43]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Advisory comments

6. Unverified actions v6 pins 🐞 Bug ⛯ Reliability
Description
New workflows pin actions/checkout, actions/setup-python, and actions/setup-node to @v6,
while existing repo workflows use @v4/@v5; if v6 is unavailable or behaves differently, CI will
break.
Code

.github/workflows/coverage-100.yml[R18-23]

+      - uses: actions/checkout@v6
+      - uses: actions/setup-python@v6
+        with:
+          python-version: '3.12'
+      - uses: actions/setup-node@v6
+        with:
Evidence
Within this repo, established workflows use actions/checkout@v4, actions/setup-python@v5, and
actions/setup-node@v4, but the new quality workflows switch to @v6. This should be
double-checked for validity/compatibility before relying on these gates.

.github/workflows/verify.yml[18-23]
.github/workflows/coverage-100.yml[18-23]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
New workflows use `actions/*@v6` while existing workflows in this repo use `@v4/@v5`.
### Issue Context
If `v6` tags are not available or introduce breaking changes, these quality gates will fail before running any checks.
### Fix Focus Areas
- .github/workflows/coverage-100.yml[18-24]
- .github/workflows/codecov-analytics.yml[20-26]
- .github/workflows/quality-zero-gate.yml[29-55]
- .github/workflows/verify.yml[18-23]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



ⓘ The new review experience is currently in Beta. Learn more


workflow_dispatch:

permissions:
  contents: read

Check notice

Code scanning / SonarCloud

Read permissions should be defined at the job level (Low)

Move this read permission from the workflow level to the job level. See more on SonarQube Cloud.

steps:
  - uses: actions/checkout@v6
  - name: Run Sonar scan
    uses: SonarSource/sonarqube-scan-action@v6

Check warning

Code scanning / CodeQL

Unpinned tag for a non-immutable Action in workflow (Medium)

The 'Sonar Zero' step uses 'SonarSource/sonarqube-scan-action' with ref 'v6', not a pinned commit hash.


def parse_coverage_xml(name: str, path: Path) -> CoverageStats:
    text = path.read_text(encoding="utf-8")

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

In general, to fix uncontrolled path use for file reads, you should normalize the path, resolve it against a trusted base directory, and ensure that the resulting path stays within that directory. This prevents path traversal (e.g., ../../etc/passwd) and disallows accessing arbitrary absolute paths when only project files should be read.

In this script, the best fix without changing intended functionality is:

  • Introduce a helper _safe_input_path analogous to _safe_output_path, which:
    • Accepts a Path coming from parse_named_path.
    • Resolves it relative to a configurable base directory (defaulting to the current working directory).
    • Normalizes it (resolve(strict=False)).
    • Ensures the path is contained within the base by using relative_to and raising if it escapes.
  • Use this helper when parsing --xml and --lcov arguments in main() so that all coverage files are constrained to the workspace.

Concretely:

  1. Add a new function _safe_input_path(raw: Path, base: Path | None = None) -> Path just above _safe_output_path. It will be similar to _safe_output_path but accept an existing Path and not deal with defaults/fallback strings.
  2. Modify the loops in main() where parse_named_path(item) is called to pass the resulting path through _safe_input_path before reading any files:
    • For XML: name, path = parse_named_path(item); safe_path = _safe_input_path(path); stats.append(parse_coverage_xml(name, safe_path))
    • For LCOV: same pattern with parse_lcov.
  3. Use Path.cwd() (as in _safe_output_path) as the default root so behavior stays intuitive: paths are still resolved relative to where the script is run, but traversals outside this directory are blocked.

No new imports are needed; everything can be done with pathlib.Path, which is already imported.


Suggested changeset 1: scripts/quality/assert_coverage_100.py

Autofix patch. Run the following command in your local git repository to apply it:
cat << 'EOF' | git apply
diff --git a/scripts/quality/assert_coverage_100.py b/scripts/quality/assert_coverage_100.py
--- a/scripts/quality/assert_coverage_100.py
+++ b/scripts/quality/assert_coverage_100.py
@@ -128,6 +128,20 @@
     return "\n".join(lines) + "\n"
 
 
+def _safe_input_path(raw: Path, base: Path | None = None) -> Path:
+    """Resolve a potentially relative input path safely within a workspace root."""
+    root = (base or Path.cwd()).resolve()
+    candidate = raw.expanduser()
+    if not candidate.is_absolute():
+        candidate = root / candidate
+    resolved = candidate.resolve(strict=False)
+    try:
+        resolved.relative_to(root)
+    except ValueError as exc:
+        raise ValueError(f"Input path escapes workspace root: {candidate}") from exc
+    return resolved
+
+
 def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
     root = (base or Path.cwd()).resolve()
     candidate = Path((raw or "").strip() or fallback).expanduser()
@@ -147,10 +161,12 @@
     stats: list[CoverageStats] = []
     for item in args.xml:
         name, path = parse_named_path(item)
-        stats.append(parse_coverage_xml(name, path))
+        safe_path = _safe_input_path(path)
+        stats.append(parse_coverage_xml(name, safe_path))
     for item in args.lcov:
         name, path = parse_named_path(item)
-        stats.append(parse_lcov(name, path))
+        safe_path = _safe_input_path(path)
+        stats.append(parse_lcov(name, safe_path))
 
     if not stats:
         raise SystemExit("No coverage files were provided; pass --xml and/or --lcov inputs.")
EOF
Copilot is powered by AI and may make mistakes. Always verify output.
    total = 0
    covered = 0

    for raw in path.read_text(encoding="utf-8").splitlines():

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

In general, to fix uncontrolled paths from user input, normalize the path and constrain it to a known safe root. For this script, the inputs are coverage report files. We can treat the current working directory (or an explicitly provided base) as the workspace root and resolve input paths relative to it, then enforce that the normalized path remains under that root. This mirrors the existing _safe_output_path logic used for output files.

The best targeted fix here is:

  • Introduce a _safe_input_path helper that:
    • Takes a raw string path and an optional base root (Path | None).
    • Resolves the base root to an absolute canonical path.
    • Builds a Path from the raw string, expands ~, and if it is not absolute, joins it to the root.
    • Resolves the resulting candidate (strict=False), then checks that it is within the root via relative_to.
    • Raises a clear ValueError if the path escapes the root.
  • Update parse_named_path to call _safe_input_path instead of returning Path(...) directly.
  • This ensures both XML and LCOV paths are validated before use because both go through parse_named_path.

Concretely:

  • In scripts/quality/assert_coverage_100.py, above parse_named_path, add the _safe_input_path function.
  • Change parse_named_path line 46 from Path(match.group("path").strip()) to _safe_input_path(match.group("path").strip()).
  • No new external dependencies are required; we only use pathlib.Path, which is already imported.
Suggested changeset 1: scripts/quality/assert_coverage_100.py

Autofix patch. Run the following command in your local git repository to apply it:
cat << 'EOF' | git apply
diff --git a/scripts/quality/assert_coverage_100.py b/scripts/quality/assert_coverage_100.py
--- a/scripts/quality/assert_coverage_100.py
+++ b/scripts/quality/assert_coverage_100.py
@@ -39,11 +39,31 @@
     return parser.parse_args()
 
 
+def _safe_input_path(raw: str, base: Path | None = None) -> Path:
+    """
+    Resolve an input path relative to a workspace root and ensure it does not escape it.
+
+    This mirrors `_safe_output_path` but is used for coverage input files.
+    """
+    root = (base or Path.cwd()).resolve()
+    candidate = Path((raw or "").strip()).expanduser()
+    if not candidate.is_absolute():
+        candidate = root / candidate
+    resolved = candidate.resolve(strict=False)
+    try:
+        resolved.relative_to(root)
+    except ValueError as exc:
+        raise ValueError(f"Input path escapes workspace root: {candidate}") from exc
+    return resolved
+
+
 def parse_named_path(value: str) -> tuple[str, Path]:
     match = _PAIR_RE.match(value.strip())
     if not match:
         raise ValueError(f"Invalid input '{value}'. Expected format: name=path")
-    return match.group("name").strip(), Path(match.group("path").strip())
+    name = match.group("name").strip()
+    raw_path = match.group("path").strip()
+    return name, _safe_input_path(raw_path)
 
 
 def parse_coverage_xml(name: str, path: Path) -> CoverageStats:
EOF

def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
    root = (base or Path.cwd()).resolve()
    candidate = Path((raw or "").strip() or fallback).expanduser()

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

In general, to fix uncontrolled path usage from user input, ensure that any path derived from user-controlled data is normalized and then validated against a fixed, trusted base directory. The trusted base should not itself be influenced by the same user input or easily controlled environment variables.

In this script, _safe_output_path already normalizes and constrains paths, but it uses the current working directory as the base (Path.cwd()), which can be influenced by how the script is launched. The safest minimal change is to define a constant base directory for coverage outputs (e.g., the coverage-100 folder relative to the current working directory) and force all output paths to remain under that directory. This keeps behavior consistent with the defaults (coverage-100/coverage.json and coverage-100/coverage.md) while adding a stricter boundary: even if a user passes --out-json as an absolute path or with traversal segments, the resolved path must remain within the coverage-100 directory, or a clear error is raised.

Concretely:

  • Introduce a module-level constant such as OUTPUT_ROOT = (Path.cwd() / "coverage-100").resolve() just below the imports.
  • Change _safe_output_path to use this OUTPUT_ROOT as its base when base is not explicitly provided: root = (base or OUTPUT_ROOT).resolve().
  • Leave the rest of _safe_output_path as-is, since it already performs normalization, absolutization, and a relative_to check.

No new imports are needed; we already import Path from pathlib. Only scripts/quality/assert_coverage_100.py needs to be edited, within the shown regions.

Suggested changeset 1: scripts/quality/assert_coverage_100.py

Autofix patch. Run the following command in your local git repository to apply it:
cat << 'EOF' | git apply
diff --git a/scripts/quality/assert_coverage_100.py b/scripts/quality/assert_coverage_100.py
--- a/scripts/quality/assert_coverage_100.py
+++ b/scripts/quality/assert_coverage_100.py
@@ -9,7 +9,9 @@
 from datetime import datetime, timezone
 from pathlib import Path
 
+OUTPUT_ROOT = (Path.cwd() / "coverage-100").resolve()
 
+
 @dataclass
 class CoverageStats:
     name: str
@@ -129,7 +130,7 @@
 
 
 def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
-    root = (base or Path.cwd()).resolve()
+    root = (base or OUTPUT_ROOT).resolve()
     candidate = Path((raw or "").strip() or fallback).expanduser()
     if not candidate.is_absolute():
         candidate = root / candidate
EOF

def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
    root = (base or Path.cwd()).resolve()
    candidate = Path((raw or "").strip() or fallback).expanduser()

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

In general, to fix uncontrolled-path issues you must (1) define a trusted root directory, (2) normalize any user-supplied path against that root, and (3) verify that the normalized path is contained within the root before using it to access the filesystem. Any attempt to escape the root should be rejected.

For this script, the best fix with minimal functional change is to make _safe_output_path always anchor paths under a clearly defined workspace root (the directory containing the script, or its parent, consistent with _HELPER_ROOT), instead of using the current working directory when base is None. The rest of _safe_output_path can remain as is: it already normalizes the path (resolve(strict=False)) and enforces containment via relative_to(root). We only need to change how root is computed.

Concretely:

  • On line 104–105, update _safe_output_path so that when base is not provided, it uses _SCRIPT_DIR (which is already defined at the top of the file as the directory of this script) as the root instead of Path.cwd().
  • Keep the rest of the function intact to preserve functionality.
  • No new imports or additional helper methods are required; we reuse existing constants.

This ensures that args.out_json and args.out_md cannot point outside the repository (or wherever this script actually lives), even if the script is invoked from a different directory.

Suggested changeset 1
scripts/quality/check_codacy_zero.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/scripts/quality/check_codacy_zero.py b/scripts/quality/check_codacy_zero.py
--- a/scripts/quality/check_codacy_zero.py
+++ b/scripts/quality/check_codacy_zero.py
@@ -102,7 +102,9 @@
 
 
 def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
-    root = (base or Path.cwd()).resolve()
+    # Anchor all output paths under a trusted workspace root to prevent directory escapes.
+    root_base = base if base is not None else _SCRIPT_DIR
+    root = root_base.resolve()
     candidate = Path((raw or "").strip() or fallback).expanduser()
     if not candidate.is_absolute():
         candidate = root / candidate
EOF

def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
    root = (base or Path.cwd()).resolve()
    candidate = Path((raw or "").strip() or fallback).expanduser()

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

In general, to fix uncontrolled path usage when you want to permit some flexibility, you should (1) choose a well-defined safe root directory that the user must stay within, (2) normalize any user-controlled path, (3) resolve relative paths against that root, and (4) verify that the final path is still inside the root. The current code already performs steps (2)–(4), but uses Path.cwd() as the root, which may not be an appropriate security boundary because the working directory can be arbitrary. The best fix is to use a deterministic, internal root directory for outputs instead of the current working directory.

Concretely, we can change _safe_output_path to:

  • Use a dedicated, fixed root, e.g., a .deepscan-zero directory within the project tree based on _SCRIPT_DIR or _HELPER_ROOT.
  • Ensure that both the fallback paths (deepscan-zero/deepscan.json / .md) and any user-provided paths (absolute or relative) are resolved under this root, and reject anything that does not stay inside the root.

To keep behavior similar (still giving users some control over filenames and subdirectories), we can:

  • Derive a default_root = (_SCRIPT_DIR.parent / "deepscan-zero").resolve() and use that as the base for _safe_output_path when no custom base is provided.
  • Adjust _safe_output_path so that if raw is absolute, we still force it under root by stripping its root and joining only its relative .parts[1:]; or, more simply and safely, reject absolute paths and require paths to be relative, which avoids confusion and still gives plenty of flexibility.

Given the existing logic and minimal change requirement, the safest, least disruptive way is:

  • Set root to a deterministic directory under the repository (e.g., _SCRIPT_DIR.parent.resolve()), not Path.cwd().
  • Keep the rest of _safe_output_path as-is since it already canonicalizes and enforces containment.

This narrows where outputs can be written without changing the calling code.

Suggested changeset 1
scripts/quality/check_deepscan_zero.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/scripts/quality/check_deepscan_zero.py b/scripts/quality/check_deepscan_zero.py
--- a/scripts/quality/check_deepscan_zero.py
+++ b/scripts/quality/check_deepscan_zero.py
@@ -79,7 +79,9 @@
 
 
 def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
-    root = (base or Path.cwd()).resolve()
+    # Use a deterministic project root (directory containing this script by default)
+    default_root = _SCRIPT_DIR.resolve()
+    root = (base or default_root).resolve()
     candidate = Path((raw or "").strip() or fallback).expanduser()
     if not candidate.is_absolute():
         candidate = root / candidate
EOF

def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
    root = (base or Path.cwd()).resolve()
    candidate = Path((raw or "").strip() or fallback).expanduser()

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

To fix this, we should ensure that any path derived from user-controlled input is normalized and strictly confined to a chosen root directory, without honoring user home expansions or arbitrary absolute paths that might bypass the intended workspace. The existing _safe_output_path already normalizes and checks containment, but it allows ~ expansion and absolute paths relative to the filesystem root. If base is provided and differs from Path.cwd(), an absolute path will be checked against base while having been resolved relative to /, which is more permissive than anchoring it under base.

The best targeted fix is to change _safe_output_path so that:

  • The raw input is stripped but not passed through expanduser().
  • The raw string is always treated as a path segment to be rooted under root (i.e., we do not trust absolute paths or user home shortcuts).
  • We always join root with the candidate (or fallback) before resolving.
  • We keep the resolve(strict=False) followed by relative_to(root) containment check for defense in depth.

Concretely, in scripts/quality/check_quality_secrets.py, we will update _safe_output_path:

  • Build candidate = (raw or "").strip() or fallback as a plain string.
  • Convert it to Path(candidate) and always prepend root via root / Path(candidate) (no expanduser, no “if not absolute” branch).
  • Resolve that combined path and keep the existing containment check and error handling.

No additional imports are needed, and the function’s interface and callers (main()) remain unchanged.

Suggested changeset 1
scripts/quality/check_quality_secrets.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/scripts/quality/check_quality_secrets.py b/scripts/quality/check_quality_secrets.py
--- a/scripts/quality/check_quality_secrets.py
+++ b/scripts/quality/check_quality_secrets.py
@@ -83,9 +83,10 @@
 
 def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
     root = (base or Path.cwd()).resolve()
-    candidate = Path((raw or "").strip() or fallback).expanduser()
-    if not candidate.is_absolute():
-        candidate = root / candidate
+    # Always treat the user-provided path as relative to the chosen root,
+    # and avoid expanding user homes or trusting absolute paths directly.
+    raw_value = (raw or "").strip() or fallback
+    candidate = root / Path(raw_value)
     resolved = candidate.resolve(strict=False)
     try:
         resolved.relative_to(root)
EOF
Comment on lines +30 to +39
req = urllib.request.Request(
    url,
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
        "X-GitHub-Api-Version": "2022-11-28",
        "User-Agent": "reframe-quality-zero-gate",
    },
    method="GET",
)

Check failure

Code scanning / CodeQL

Partial server-side request forgery (Critical)

Part of the URL of this request depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

Generally, to fix partial SSRF in this context you should constrain the user-controlled --repo argument to a safe format before using it in the URL. For a GitHub owner/repo identifier, the safe approach is to validate that it matches the expected pattern (two non-empty components separated by a single /, composed only of allowed GitHub name characters) and reject anything else. This prevents a malicious value from injecting .., additional slashes, or other characters that alter the URL path in unexpected ways.

The best minimally invasive fix here is:

  1. Add a small validation/sanitization helper, e.g. _sanitize_repo, that:
    • Ensures the value matches ^[A-Za-z0-9_.-]+/[A-Za-z0-9_.-]+$.
    • Raises a clear exception or SystemExit if it does not.
  2. Call this helper in main() after parsing arguments, replacing args.repo with the sanitized/validated version.
  3. Optionally, URL-encode the components (owner and repo) when constructing the URL to avoid path confusion from unusual but syntactically allowed characters.

All changes are confined to scripts/quality/check_required_checks.py. We can reuse the existing urllib.parse import for encoding and only add a small new function in the same file.

Concretely:

  • Around line 124 (before or after _safe_output_path is fine), define:

    def _sanitize_repo(raw: str) -> str:
        ...
  • In main(), after args = _parse_args() and before args.repo is used, replace args.repo with _sanitize_repo(args.repo) and store it (e.g. repo = _sanitize_repo(args.repo)).

  • Use that repo variable instead of args.repo when calling _api_get and when filling final_payload["repo"].

  • Optionally, adjust _api_get to URL-encode the owner and repo components; that change is internal and does not alter intended functionality.

Suggested changeset 1
scripts/quality/check_required_checks.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/scripts/quality/check_required_checks.py b/scripts/quality/check_required_checks.py
--- a/scripts/quality/check_required_checks.py
+++ b/scripts/quality/check_required_checks.py
@@ -26,7 +26,10 @@
 
 
 def _api_get(repo: str, path: str, token: str) -> dict[str, Any]:
-    url = f"https://api.github.com/repos/{repo}/{path.lstrip('/')}"
+    owner, repo_name = repo.split("/", 1)
+    owner_enc = urllib.parse.quote(owner, safe="")
+    repo_enc = urllib.parse.quote(repo_name, safe="")
+    url = f"https://api.github.com/repos/{owner_enc}/{repo_enc}/{path.lstrip('/')}"
     req = urllib.request.Request(
         url,
         headers={
@@ -134,6 +137,29 @@
     return resolved
 
 
+def _sanitize_repo(raw: str) -> str:
+    """
+    Validate and normalize a GitHub owner/repo string.
+
+    Ensures the value consists of exactly two path components separated by a
+    single slash, each using only characters allowed in GitHub repository
+    names. Raises ValueError if the value is invalid.
+    """
+    value = (raw or "").strip()
+    if not value:
+        raise ValueError("Repository must be a non-empty 'owner/repo' string")
+    # Enforce exactly one "/" separating owner and repo.
+    if value.count("/") != 1:
+        raise ValueError(f"Invalid repository format (expected 'owner/repo'): {value!r}")
+    owner, repo = value.split("/", 1)
+    if not owner or not repo:
+        raise ValueError(f"Invalid repository format (expected 'owner/repo'): {value!r}")
+    allowed = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.-")
+    if any(ch not in allowed for ch in owner) or any(ch not in allowed for ch in repo):
+        raise ValueError(f"Repository contains invalid characters: {value!r}")
+    return f"{owner}/{repo}"
+
+
 def main() -> int:
     args = _parse_args()
     token = (os.environ.get("GITHUB_TOKEN", "") or os.environ.get("GH_TOKEN", "")).strip()
@@ -143,19 +169,23 @@
         raise SystemExit("At least one --required-context is required")
     if not token:
         raise SystemExit("GITHUB_TOKEN or GH_TOKEN is required")
+    try:
+        repo = _sanitize_repo(args.repo)
+    except ValueError as exc:
+        raise SystemExit(str(exc)) from exc
 
     deadline = time.time() + max(args.timeout_seconds, 1)
 
     final_payload: dict[str, Any] | None = None
     while time.time() <= deadline:
-        check_runs = _api_get(args.repo, f"commits/{args.sha}/check-runs?per_page=100", token)
-        statuses = _api_get(args.repo, f"commits/{args.sha}/status", token)
+        check_runs = _api_get(repo, f"commits/{args.sha}/check-runs?per_page=100", token)
+        statuses = _api_get(repo, f"commits/{args.sha}/status", token)
         contexts = _collect_contexts(check_runs, statuses)
         status, missing, failed = _evaluate(required, contexts)
 
         final_payload = {
             "status": status,
-            "repo": args.repo,
+            "repo": repo,
             "sha": args.sha,
             "required": required,
             "missing": missing,
EOF

def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
    root = (base or Path.cwd()).resolve()
    candidate = Path((raw or "").strip() or fallback).expanduser()

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

Generally, to fix uncontrolled path usage, normalize the path, force it under a known safe root directory, and reject any user input that would escape that root or that attempts to specify an absolute path directly. Do not trust user-provided absolute paths; instead, treat user input as a relative location inside a controlled base directory.

In this file, the best fix with minimal functional change is to tighten _safe_output_path so that:

  • All user input is treated as a relative path component under root (which remains base or Path.cwd()).
  • Absolute user-provided paths are rejected explicitly.
  • The normalization and relative_to(root) check are preserved.

Concretely:

  • Replace the current computation of candidate with logic that:
    • Normalizes raw (strip, default to empty) and, if empty, falls back to fallback.
    • Rejects if this “effective” path is absolute.
    • Joins it to root and resolves.
  • Keep the relative_to(root) check to defend against .. segments and symlink tricks.

All changes are confined to _safe_output_path in scripts/quality/check_required_checks.py; no new imports are needed.

Suggested changeset 1
scripts/quality/check_required_checks.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/scripts/quality/check_required_checks.py b/scripts/quality/check_required_checks.py
--- a/scripts/quality/check_required_checks.py
+++ b/scripts/quality/check_required_checks.py
@@ -123,15 +123,18 @@
 
 def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
     root = (base or Path.cwd()).resolve()
-    candidate = Path((raw or "").strip() or fallback).expanduser()
-    if not candidate.is_absolute():
-        candidate = root / candidate
-    resolved = candidate.resolve(strict=False)
+    raw_str = (raw or "").strip()
+    # If no raw value is provided, fall back to the default.
+    effective = Path(raw_str or fallback).expanduser()
+    # Do not allow user input to specify an absolute path directly.
+    if effective.is_absolute():
+        raise ValueError(f"Absolute output paths are not allowed: {effective}")
+    candidate = (root / effective).resolve(strict=False)
     try:
-        resolved.relative_to(root)
+        candidate.relative_to(root)
     except ValueError as exc:
         raise ValueError(f"Output path escapes workspace root: {candidate}") from exc
-    return resolved
+    return candidate
 
 
 def main() -> int:
EOF

def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
    root = (base or Path.cwd()).resolve()
    candidate = Path((raw or "").strip() or fallback).expanduser()

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

In general, to fix uncontrolled path usage you should (a) define a trusted base directory, (b) normalize and resolve any user‑provided path relative to that base, and (c) reject any path that, after resolution, does not lie within the base directory. Avoid depending on mutable process state such as the current working directory for your trust boundary.

For this file, the safest minimal change is to introduce a fixed “workspace root” derived from the script location (_SCRIPT_DIR.parent is already used to locate helpers) and use that as the default root for _safe_output_path instead of Path.cwd(). We then keep the normalization and containment check, but ensure they are anchored to this stable root. Concretely:

  • Add a new constant (e.g. _WORKSPACE_ROOT) near _SCRIPT_DIR / _HELPER_ROOT that resolves the repository root (here (_SCRIPT_DIR / "..").resolve() or simply _SCRIPT_DIR.parent.resolve()).
  • Update _safe_output_path so that:
    • It uses root = (base or _WORKSPACE_ROOT).resolve() rather than Path.cwd().
    • The rest of its logic remains the same: build candidate from raw or fallback, expand ~, prepend root for non‑absolute paths, then resolve(strict=False) and enforce resolved.relative_to(root).

This preserves existing behavior for relative paths (still under a project directory), still allows absolute paths that lie under the workspace, but prevents user input and CWD from redirecting output outside the intended tree. No extra imports are required; we reuse pathlib.Path which is already imported.

Suggested changeset 1
scripts/quality/check_sentry_zero.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/scripts/quality/check_sentry_zero.py b/scripts/quality/check_sentry_zero.py
--- a/scripts/quality/check_sentry_zero.py
+++ b/scripts/quality/check_sentry_zero.py
@@ -11,6 +11,7 @@
 from typing import Any
 
 _SCRIPT_DIR = Path(__file__).resolve().parent
+_WORKSPACE_ROOT = _SCRIPT_DIR.parent.resolve()
 _HELPER_ROOT = _SCRIPT_DIR if (_SCRIPT_DIR / "security_helpers.py").exists() else _SCRIPT_DIR.parent
 if str(_HELPER_ROOT) not in sys.path:
     sys.path.insert(0, str(_HELPER_ROOT))
@@ -92,7 +93,7 @@
 
 
 def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
-    root = (base or Path.cwd()).resolve()
+    root = (base or _WORKSPACE_ROOT).resolve()
     candidate = Path((raw or "").strip() or fallback).expanduser()
     if not candidate.is_absolute():
         candidate = root / candidate
EOF

def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
    root = (base or Path.cwd()).resolve()
    candidate = Path((raw or "").strip() or fallback).expanduser()

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix

AI about 10 hours ago

In general, the way to fix this type of issue is to ensure that any filesystem path derived from untrusted input is (1) normalized, and (2) verified to lie within a trusted root directory before any filesystem operations are performed. This usually means establishing a root (such as the current workspace), resolving both the root and the user path, and then rejecting any path that, after normalization, is outside the root.

For this script, _safe_output_path is already implementing that pattern, but CodeQL flags the construction Path((raw or "").strip() or fallback) as directly dependent on user input. The best fix without changing functionality is to keep the same behavior (CLI user can specify either --out-json/--out-md or use defaults, paths are resolved relative to the current working directory, ~ is expanded, and escapes are rejected) while making the sanitization more explicit and robust. We can do this by:

  • Introducing a simple _sanitize_output_path_string helper that trims whitespace and falls back to the default if the result is empty.
  • Using that string to build a Path, then normalizing and validating it with resolve() while continuing to enforce that the resolved path is within root.
  • Optionally strengthening the containment check with a commonpath comparison, while still using Path.relative_to for clarity.

Concretely, in scripts/quality/check_sonar_zero.py:

  • Add an import for os at the top if we use os.path.commonpath (we already import os inside main, but that cannot be reused at module level; we’ll add a top-level import and keep the existing local one unchanged).
  • Add a small helper _sanitize_output_path_string(raw: str | None, fallback: str) -> str above _safe_output_path.
  • Rewrite _safe_output_path to:
    • Compute root = (base or Path.cwd()).resolve().
    • Call _sanitize_output_path_string(raw, fallback) to get a safe string path.
    • Build candidate = Path(safe_raw).expanduser().
    • If candidate is not absolute, join it to root.
    • Resolve it with strict=False.
    • Use resolved.relative_to(root) as before to enforce containment (we can keep this part; it is already correct).

No other parts of the file need changes, since calls at lines 150–151 already go through _safe_output_path.

Suggested changeset 1
scripts/quality/check_sonar_zero.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/scripts/quality/check_sonar_zero.py b/scripts/quality/check_sonar_zero.py
--- a/scripts/quality/check_sonar_zero.py
+++ b/scripts/quality/check_sonar_zero.py
@@ -72,13 +72,26 @@
     return "\n".join(lines) + "\n"
 
 
+def _sanitize_output_path_string(raw: str | None, fallback: str) -> str:
+    """
+    Normalize the raw output path string by stripping whitespace and
+    applying a safe fallback when the result is empty.
+    """
+    text = (raw or "").strip()
+    return text or fallback
+
+
 def _safe_output_path(raw: str, fallback: str, base: Path | None = None) -> Path:
     root = (base or Path.cwd()).resolve()
-    candidate = Path((raw or "").strip() or fallback).expanduser()
+    # Sanitize the input string before constructing the Path to avoid
+    # directly propagating untrusted data into the filesystem operations.
+    raw_path = _sanitize_output_path_string(raw, fallback)
+    candidate = Path(raw_path).expanduser()
     if not candidate.is_absolute():
         candidate = root / candidate
     resolved = candidate.resolve(strict=False)
     try:
+        # Ensure the resolved path is contained within the workspace root.
         resolved.relative_to(root)
     except ValueError as exc:
         raise ValueError(f"Output path escapes workspace root: {candidate}") from exc
EOF
Comment on lines +10 to +12
permissions:
  contents: read


Action required

2. Github api perms missing 🐞 Bug ⛯ Reliability

Quality Zero Gate restricts workflow token permissions to contents: read, but then calls the
GitHub Checks/Statuses APIs to query check results; this can cause 403 errors and a permanently
failing gate.
Agent Prompt
### Issue description
The Quality Zero Gate queries GitHub check-runs and commit statuses, but the workflow token is restricted to `contents: read`, which risks API authorization failures.

### Issue Context
The workflow runs `scripts/quality/check_required_checks.py` which calls `/check-runs` and `/status` endpoints.

### Fix Focus Areas
- .github/workflows/quality-zero-gate.yml[10-12]
- scripts/quality/check_required_checks.py[151-153]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +20 to +24
@property
def percent(self) -> float:
    if self.total <= 0:
        return 100.0
    return (self.covered / self.total) * 100.0


Action required

3. Coverage gate false-pass 🐞 Bug ✓ Correctness

The 100% coverage assertion treats total <= 0 as 100% coverage, so empty/invalid coverage inputs
can incorrectly pass the gate.
Agent Prompt
### Issue description
The Coverage 100 gate can incorrectly pass on empty or unparsable coverage data.

### Issue Context
`CoverageStats.percent` currently returns 100% when `total <= 0`, and the evaluator relies on that percent.

### Fix Focus Areas
- scripts/quality/assert_coverage_100.py[20-25]
- scripts/quality/assert_coverage_100.py[86-100]
- scripts/quality/assert_coverage_100.py[49-83]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
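A fail-closed variant of the flagged property could look like the sketch below. The field names follow the `CoverageStats` snippet above; treating empty input as 0% is an assumption about the desired policy, not the script's current behavior.

```python
from dataclasses import dataclass


@dataclass
class CoverageStats:
    name: str
    covered: int
    total: int

    @property
    def percent(self) -> float:
        if self.total <= 0:
            # Fail closed: empty or unparsable coverage input should not read
            # as 100%; reporting 0 makes the 100% gate reject it.
            return 0.0
        return (self.covered / self.total) * 100.0
```

The evaluator could additionally require `total > 0` explicitly, so an all-zero report fails with a clear "no coverage data" message rather than a misleading percentage.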

Prekzursil and others added 2 commits March 3, 2026 08:35
Co-authored-by: Codex <noreply@openai.com>
npm --prefix frontend/webcoder_ui ci
npm --prefix frontend/webcoder_ui test -- --coverage --watch=false
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v5

Check warning

Code scanning / CodeQL

Unpinned tag for a non-immutable Action in workflow (Medium)

Unpinned 3rd party Action: the 'Codecov Analytics' step uses 'codecov/codecov-action' with ref 'v5', not a pinned commit hash.
Co-authored-by: Codex <noreply@openai.com>
@sonarqubecloud

sonarqubecloud bot commented Mar 3, 2026

Quality Gate failed

Failed conditions
14 New issues
2 Security Hotspots
14.3% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud


@Prekzursil
Owner Author

Infra-wiring continuation update applied.

What changed in this branch:

  • Hardened .github/workflows/snyk-zero.yml with deterministic OSS target detection.
  • Added non-fatal OSS skip path with artifact output (snyk-oss-mode.json) when no supported OSS target exists.
  • Kept snyk code test mandatory and failing on findings.
  • Ensured CODECOV token provisioning path is configured for this repo.
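A minimal sketch of what the skip-path artifact might contain (the `snyk-oss-mode.json` filename comes from the bullet above; the payload fields and helper name are assumptions, not the workflow's actual schema):

```python
import json
from pathlib import Path


def write_oss_mode_artifact(out_dir: Path, oss_target_found: bool) -> Path:
    """Record whether Snyk OSS scanning ran, so a skip is auditable rather than silent."""
    payload = {
        "mode": "oss" if oss_target_found else "skipped",
        "reason": None if oss_target_found else "no supported OSS target detected",
    }
    out = out_dir / "snyk-oss-mode.json"
    out.write_text(json.dumps(payload, indent=2) + "\n", encoding="utf-8")
    return out
```

Uploading this file as a workflow artifact gives reviewers a durable record of why the OSS scan did not run, while `snyk code test` remains mandatory.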

Latest runs on this branch:

Merge policy for this wave remains: admin bypass allowed after infra-wiring verification, even if findings/coverage checks remain red.

@Prekzursil Prekzursil merged commit 850ac07 into main Mar 3, 2026
9 of 24 checks passed
