chore: add strict codecov and zero-issue quality gates#98

Merged
Prekzursil merged 5 commits into main from
feat/quality-zero-gates-2026-03-03
Mar 3, 2026
Conversation


@Prekzursil Prekzursil commented Mar 3, 2026

Summary

  • Adds strict Codecov + quality-zero workflows.
  • Adds fail-closed quality secrets preflight.
  • Adds 100% coverage gate (Coverage 100 Gate).
  • Adds aggregate context gate (Quality Zero Gate).
  • Adds momentstudio-style codecov.yml structure (require_ci_to_pass, component_management, bundle_analysis) with strict 100% policy.

Required Secrets/Vars

  • Secrets:
    • CODECOV_TOKEN
    • SONAR_TOKEN
    • CODACY_API_TOKEN
    • SNYK_TOKEN
    • SENTRY_AUTH_TOKEN
    • DEEPSCAN_API_TOKEN
  • Vars:
    • SENTRY_ORG
    • SENTRY_PROJECT
    • DEEPSCAN_POLICY_MODE
    • DEEPSCAN_OPEN_ISSUES_URL

Policy

This PR intentionally fails closed when required secrets/vars are missing.

Summary by CodeRabbit

  • New Features

    • Added automated CI quality gates enforcing 100% test coverage and zero open findings across multiple scanning tools, plus Codecov analytics and orchestrated required-check validation.
    • CI now emits and uploads standardized JSON/Markdown artifacts summarizing gate results.
  • Documentation

    • Added guidance describing the repository's strict quality gates and required tokens/configuration.
  • Chores

    • Introduced helper utilities for URL validation and quality-check tooling.

Prekzursil and others added 2 commits March 3, 2026 08:20
Co-authored-by: Codex <noreply@openai.com>
Co-authored-by: Codex <noreply@openai.com>

devloai bot commented Mar 3, 2026

Unable to trigger custom agent "Code Reviewer". You have run out of credits 😔
Please upgrade your plan or buy additional credits from the subscription page.

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.


coderabbitai bot commented Mar 3, 2026

📝 Walkthrough


Adds multiple GitHub Actions workflows and supporting Python scripts to enforce "Quality Zero" gates (100% coverage and zero-open findings) across Codacy, Codecov, Sonar, DeepScan, Sentry, and Snyk; includes artifact uploads, secret preflight checks, URL normalization helper, and documentation.

Changes

Cohort / File(s) Summary
Orchestration & CI Gate Workflows
.github/workflows/quality-zero-gate.yml, .github/workflows/coverage-100.yml, .github/workflows/codecov-analytics.yml
New workflows to coordinate quality gates: secrets preflight, required-checks aggregation, enforce 100% coverage, run tests and upload coverage to Codecov.
Tool-specific Zero Checks (Workflows)
.github/workflows/codacy-zero.yml, .github/workflows/deepscan-zero.yml, .github/workflows/sentry-zero.yml, .github/workflows/snyk-zero.yml, .github/workflows/sonar-zero.yml
Workflows that invoke scripts to query external services (Codacy, DeepScan, Sentry, Snyk, Sonar) and upload per-tool JSON/MD artifacts.
Quality-check Scripts
scripts/quality/assert_coverage_100.py, scripts/quality/check_codacy_zero.py, scripts/quality/check_deepscan_zero.py, scripts/quality/check_quality_secrets.py, scripts/quality/check_required_checks.py, scripts/quality/check_sentry_zero.py, scripts/quality/check_sonar_zero.py
New Python CLI utilities that parse inputs, query APIs or coverage outputs, evaluate pass/fail, render JSON and Markdown reports, and enforce safe output paths.
Security helper & config
scripts/security_helpers.py, codecov.yml
Added URL normalization and validation helper (strict HTTPS, allowlist, IP checks) and a Codecov config enforcing 100% coverage targets and component rules.
Docs
docs/quality/QUALITY_ZERO_GATES.md
Documentation describing the Quality Zero gate requirements, secrets preflight, and enforcement behavior.
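The "enforce safe output paths" behavior noted for the quality-check scripts above can be illustrated with a short sketch. This is hypothetical code: the actual helper name and structure inside scripts/quality/ are not shown in this walkthrough.

```python
from pathlib import Path

# Hypothetical sketch of "safe output paths" enforcement; the real helper in
# scripts/quality/ may be named and structured differently.
def safe_output_path(base_dir: str, relative: str) -> Path:
    """Resolve `relative` under `base_dir`, rejecting path traversal."""
    base = Path(base_dir).resolve()
    candidate = (base / relative).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"output path escapes {base}: {relative!r}")
    return candidate
```

A caller passing an --out-json value like "quality-secrets/secrets.json" would get a path inside the workspace, while "../secrets.json" would fail loudly, which matches the fail-closed posture the PR describes.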

Sequence Diagram(s)

sequenceDiagram
    participant GH as GitHub Actions
    participant Script as Quality Script
    participant API as External Service API
    participant Store as Artifact Store

    GH->>Script: checkout + set env & secrets
    GH->>Script: run specific check script (e.g. check_sentry_zero)
    Script->>API: authenticated HTTP request
    API-->>Script: issues/metrics payload
    Script->>Script: evaluate pass/fail, render JSON & MD
    Script->>Store: upload artifacts (always)
    GH-->>GH: aggregate required-checks (quality-zero-gate job)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

area:ci, area:tooling, needs-reviewer, risk:medium

Poem

🐰 I hopped through workflows, pipes, and code,
I checked each API down the road.
Zero findings, coverage bright,
Artifacts saved, the gates held tight.
A happy rabbit guards the node! 🥕

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check (⚠️ Warning): The PR description provides a clear summary of changes, lists all required secrets and variables, and explains the fail-closed policy, but it deviates from the template structure and omits required sections such as Risk, Affected Profiles, Reliability Evidence, Rollback, and Scope Guard. Resolution: add the missing template sections, including Risk (level, regression surface, security impact), Affected Profiles checkboxes, Reliability Evidence with test commands/results, Rollback steps, and Scope Guard verification items.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 2.13%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (1 passed)

  • Title check (✅ Passed): The title clearly and concisely describes the main change, adding strict Codecov and zero-issue quality gates, and is accurately reflected in the changeset.




Review Summary by Qodo

Add strict quality zero-issue gates with comprehensive enforcement workflows

✨ Enhancement


Walkthroughs

Description
• Adds comprehensive quality gate enforcement scripts for zero-issue policies
• Implements strict 100% code coverage validation with XML/LCOV parsing
• Adds quality checks for Sonar, Codacy, Sentry, DeepScan, and Snyk integrations
• Creates GitHub Actions workflows for automated quality gate enforcement
• Includes fail-closed secrets preflight validation requiring all tokens/vars
• Adds codecov.yml with strict 100% coverage policy and component management
Diagram
flowchart LR
  Scripts["Quality Gate Scripts"]
  Workflows["GitHub Actions Workflows"]
  Config["Configuration Files"]
  Docs["Documentation"]
  
  Scripts -->|Coverage validation| Coverage["Coverage 100 Gate"]
  Scripts -->|API checks| Sonar["Sonar Zero"]
  Scripts -->|API checks| Codacy["Codacy Zero"]
  Scripts -->|API checks| Sentry["Sentry Zero"]
  Scripts -->|API checks| DeepScan["DeepScan Zero"]
  Scripts -->|Secrets validation| Preflight["Secrets Preflight"]
  Scripts -->|Context aggregation| Required["Required Checks"]
  
  Workflows -->|Runs scripts| Coverage
  Workflows -->|Runs scripts| Sonar
  Workflows -->|Runs scripts| Codacy
  Workflows -->|Runs scripts| Sentry
  Workflows -->|Runs scripts| DeepScan
  Workflows -->|Runs scripts| Preflight
  Workflows -->|Aggregates results| Required
  
  Config -->|Codecov policy| Coverage
  Docs -->|Quality standards| Workflows


File Changes

1. scripts/quality/assert_coverage_100.py ✨ Enhancement +192/-0

100% coverage validation with XML/LCOV parsing

scripts/quality/assert_coverage_100.py


2. scripts/quality/check_codacy_zero.py ✨ Enhancement +198/-0

Codacy API integration for zero-issue enforcement

scripts/quality/check_codacy_zero.py


3. scripts/quality/check_deepscan_zero.py ✨ Enhancement +155/-0

DeepScan API integration for zero-issue enforcement

scripts/quality/check_deepscan_zero.py


4. scripts/quality/check_quality_secrets.py ✨ Enhancement +135/-0

Fail-closed secrets and variables validation preflight

scripts/quality/check_quality_secrets.py


5. scripts/quality/check_required_checks.py ✨ Enhancement +196/-0

GitHub context aggregation and required checks assertion

scripts/quality/check_required_checks.py


6. scripts/quality/check_sentry_zero.py ✨ Enhancement +181/-0

Sentry API integration for unresolved issues enforcement

scripts/quality/check_sentry_zero.py


7. scripts/quality/check_sonar_zero.py ✨ Enhancement +166/-0

SonarCloud API integration for zero-issue enforcement

scripts/quality/check_sonar_zero.py


8. scripts/security_helpers.py ✨ Enhancement +60/-0

HTTPS URL validation with security allowlist checks

scripts/security_helpers.py
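The strict HTTPS, allowlist, and IP checks attributed to this helper might look roughly like the sketch below. This is hedged illustration only: the function name, the allowlist contents, and the exact rejection rules in scripts/security_helpers.py are assumptions, not the PR's actual code.

```python
import ipaddress
from urllib.parse import urlparse

# Illustrative allowlist; the real helper's allowed hosts are not shown in this summary.
ALLOWED_HOSTS = {"sonarcloud.io", "api.codacy.com", "sentry.io", "deepscan.io"}

def _is_ip_literal(host: str) -> bool:
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

def normalize_https_url(url: str) -> str:
    """Validate that `url` is HTTPS, not an IP literal, and on the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"only https URLs are accepted: {url!r}")
    host = (parsed.hostname or "").lower()
    if _is_ip_literal(host):
        raise ValueError(f"IP-literal hosts are rejected: {url!r}")
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"host not in allowlist: {host!r}")
    return url
```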


9. .github/workflows/codacy-zero.yml ⚙️ Configuration changes +33/-0

Codacy zero-issue gate workflow automation

.github/workflows/codacy-zero.yml


10. .github/workflows/codecov-analytics.yml ⚙️ Configuration changes +49/-0

Codecov coverage upload and analytics workflow

.github/workflows/codecov-analytics.yml


11. .github/workflows/coverage-100.yml ⚙️ Configuration changes +49/-0

Coverage 100% gate workflow with dotnet test execution

.github/workflows/coverage-100.yml


12. .github/workflows/deepscan-zero.yml ⚙️ Configuration changes +32/-0

DeepScan zero-issue gate workflow automation

.github/workflows/deepscan-zero.yml


13. .github/workflows/quality-zero-gate.yml ⚙️ Configuration changes +86/-0

Aggregate quality gate with secrets preflight and context validation

.github/workflows/quality-zero-gate.yml


14. .github/workflows/sentry-zero.yml ⚙️ Configuration changes +34/-0

Sentry zero-issue gate workflow automation

.github/workflows/sentry-zero.yml


15. .github/workflows/snyk-zero.yml ⚙️ Configuration changes +36/-0

Snyk security scanning workflow automation

.github/workflows/snyk-zero.yml


16. .github/workflows/sonar-zero.yml ⚙️ Configuration changes +36/-0

SonarCloud analysis and zero-issue gate workflow

.github/workflows/sonar-zero.yml


17. codecov.yml ⚙️ Configuration changes +42/-0

Strict 100% coverage policy with component management

codecov.yml


18. docs/quality/QUALITY_ZERO_GATES.md 📝 Documentation +10/-0

Quality zero-gates enforcement documentation

docs/quality/QUALITY_ZERO_GATES.md




qodo-code-review bot commented Mar 3, 2026

Code Review by Qodo

🐞 Bugs (5) 📘 Rule violations (2) 📎 Requirement gaps (0)



Action required

1. Tooling changes lack test evidence 📘 Rule violation ⛯ Reliability
Description
This PR introduces new CI/quality tooling but does not include deterministic test evidence artifacts
for these changes nor an explicit, documented skip justification in-repo. This reduces
auditability/reproducibility of the claimed quality enforcement behavior.
Code

.github/workflows/quality-zero-gate.yml[R30-38]

+      - name: Run quality secrets preflight
+        run: |
+          python3 scripts/quality/check_quality_secrets.py \
+            --required-secret DEEPSCAN_API_TOKEN \
+            --required-var SENTRY_PROJECT \
+            --required-var DEEPSCAN_POLICY_MODE \
+            --required-var DEEPSCAN_OPEN_ISSUES_URL \
+            --out-json quality-secrets/secrets.json \
+            --out-md quality-secrets/secrets.md
Evidence
PR Compliance ID 1 requires deterministic test evidence artifacts or an explicit documented skip
when tooling/tests are changed. The PR adds new quality workflows/scripts (tooling change) and the
added documentation describes the policy but does not document a test-evidence skip justification.

AGENTS.md
.github/workflows/quality-zero-gate.yml[30-38]
docs/quality/QUALITY_ZERO_GATES.md[1-10]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The PR adds new CI/tooling workflows and scripts, but does not include deterministic test evidence artifacts for these changes and does not include an explicit documented skip justification in the repository.
## Issue Context
Compliance requires that runtime/tooling/test-affecting changes include reproducible evidence (e.g., captured logs/results committed under an agreed location) OR a documented justification for skipping such evidence.
## Fix Focus Areas
- .github/workflows/quality-zero-gate.yml[30-38]
- docs/quality/QUALITY_ZERO_GATES.md[1-10]



2. CoverageStats.percent treats zero as 100 📘 Rule violation ⛯ Reliability
Description
The coverage gate treats total <= 0 as 100%, which can allow empty/unparseable coverage reports
to pass without failing loudly. This violates the requirement to not report success when required
artifacts/outputs are missing or invalid.
Code

scripts/quality/assert_coverage_100.py[R21-24]

+    def percent(self) -> float:
+        if self.total <= 0:
+            return 100.0
+        return (self.covered / self.total) * 100.0
Evidence
PR Compliance ID 4 requires missing/invalid required artifacts to produce a clear error/failed
status. The new coverage gate explicitly returns 100.0 when total <= 0, which can mask
missing/invalid coverage content as a passing result.

AGENTS.md
scripts/quality/assert_coverage_100.py[21-24]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The coverage enforcement script currently treats `total <= 0` as `100%`, which can silently pass when coverage output is empty or could not be parsed.
## Issue Context
The quality gate is intended to be fail-closed; empty or invalid coverage inputs should fail loudly with a clear message.
## Fix Focus Areas
- scripts/quality/assert_coverage_100.py[21-24]

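A fail-closed variant of the flagged method could look like the following sketch. The CoverageStats fields are assumed from the diff above; the real fix in assert_coverage_100.py may differ.

```python
from dataclasses import dataclass

@dataclass
class CoverageStats:
    covered: int
    total: int

    def percent(self) -> float:
        # Fail loudly on empty/unparseable coverage instead of reporting 100%.
        if self.total <= 0:
            raise ValueError("coverage report contains no measurable lines")
        return (self.covered / self.total) * 100.0
```

With this change, an empty Cobertura/LCOV report surfaces as an error in the gate rather than a spurious pass.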


3. Linux runner for Windows tests 🐞 Bug ✓ Correctness
Description
The new coverage workflows run dotnet test on ubuntu-latest, but the test project targets
net8.0-windows, so these jobs are likely to fail to build/run and will block merges.
Code

.github/workflows/coverage-100.yml[R14-34]

+  coverage-100:
+    name: Coverage 100 Gate
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v6
+      - uses: actions/setup-python@v6
+        with:
+          python-version: '3.12'
+      - uses: actions/setup-node@v6
+        with:
+          node-version: '20'
+
+      - uses: actions/setup-dotnet@v4
+        with:
+          dotnet-version: '8.0.x'
+      - name: Run tests with coverage
+        run: |
+          mkdir -p coverage
+          dotnet restore
+          dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release \
+            /p:CollectCoverage=true \
+            /p:CoverletOutput=./TestResults/coverage.cobertura.xml \
+            /p:CoverletOutputFormat=cobertura
+
Evidence
coverage-100 and codecov-analytics jobs run on Ubuntu, but the test project is Windows-targeted
(net8.0-windows). The existing CI uses windows-latest for tests, reinforcing that tests are
expected to run on Windows runners.

.github/workflows/coverage-100.yml[14-17]
.github/workflows/codecov-analytics.yml[14-17]
tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj[1-6]
.github/workflows/ci.yml[18-22]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Coverage workflows run on `ubuntu-latest` but execute tests from a `net8.0-windows` test project, which is expected to fail on Linux.
## Issue Context
The repo’s main CI already runs tests on `windows-latest`, indicating Windows is required.
## Fix Focus Areas
- .github/workflows/coverage-100.yml[14-34]
- .github/workflows/codecov-analytics.yml[14-41]
- (optional) tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj[1-6]



4. Coverlet coverage not wired 🐞 Bug ⛯ Reliability
Description
The workflows pass MSBuild Coverlet properties (CollectCoverage, CoverletOutput), but the test
project/repo does not reference any Coverlet package/config, so the coverage file likely won’t be
generated and downstream steps will fail.
Code

.github/workflows/codecov-analytics.yml[R37-46]

+      - name: Run tests with coverage
+        run: |
+          mkdir -p coverage
+          dotnet restore
+          dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release \
+            /p:CollectCoverage=true \
+            /p:CoverletOutput=./TestResults/coverage.cobertura.xml \
+            /p:CoverletOutputFormat=cobertura
+      - name: Upload coverage to Codecov
+        uses: codecov/codecov-action@v5
+        with:
+          token: ${{ secrets.CODECOV_TOKEN }}
+          files: tests/SwfocTrainer.Tests/TestResults/coverage.cobertura.xml
Evidence
The workflow expects Coverlet MSBuild integration, yet the test project only references xUnit and
the test SDK—no coverlet.* package. Repo-wide build props also contain no coverage tooling
configuration, so /p:CollectCoverage=true is unlikely to produce the expected Cobertura output,
making the later Codecov upload and 100% gate fail.

.github/workflows/codecov-analytics.yml[37-46]
tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj[7-16]
Directory.Build.props[1-12]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Coverage workflows configure Coverlet output but the repo has no Coverlet dependency/configuration, so the expected Cobertura file likely won’t be produced.
## Issue Context
Both the Codecov upload step and the `Coverage 100 Gate` script rely on `tests/SwfocTrainer.Tests/TestResults/coverage.cobertura.xml`.
## Fix Focus Areas
- tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj[7-16]
- .github/workflows/codecov-analytics.yml[37-46]
- .github/workflows/coverage-100.yml[29-40]



5. Secrets preflight always fails 🐞 Bug ⛯ Reliability
Description
check_quality_secrets.py always requires extra secrets/vars (e.g., Applitools/Percy/BrowserStack
and SENTRY_PROJECT_BACKEND/WEB) because defaults are *appended* to CLI requirements; the workflow
does not provide these, so the preflight will fail closed unintentionally.
Code

scripts/quality/check_quality_secrets.py[R102-109]

+def main() -> int:
+    args = _parse_args()
+    required_secrets = _dedupe(DEFAULT_REQUIRED_SECRETS + list(args.required_secret or []))
+    required_vars = _dedupe(DEFAULT_REQUIRED_VARS + list(args.required_var or []))
+
+    result = evaluate_env(required_secrets, required_vars)
+    status = "pass" if not result["missing_secrets"] and not result["missing_vars"] else "fail"
+    payload = {
Evidence
The script defines broad default requirements and then *adds* CLI-provided requirements to them,
rather than replacing them. The workflow only sets SENTRY_PROJECT (not
SENTRY_PROJECT_BACKEND/WEB) and does not set Applitools/Percy/BrowserStack secrets, so the job
will report missing secrets/vars and fail.

scripts/quality/check_quality_secrets.py[11-27]
scripts/quality/check_quality_secrets.py[102-109]
.github/workflows/quality-zero-gate.yml[17-38]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Secrets preflight will fail even when the PR’s documented secrets/vars are configured, because the script always enforces additional default secrets/vars and appends CLI-required items instead of replacing defaults.
## Issue Context
The workflow currently only supplies `SENTRY_PROJECT` and a small set of DeepScan vars; it does not supply Applitools/Percy/BrowserStack or `SENTRY_PROJECT_BACKEND/WEB`.
## Fix Focus Areas
- scripts/quality/check_quality_secrets.py[11-27]
- scripts/quality/check_quality_secrets.py[102-109]
- .github/workflows/quality-zero-gate.yml[17-38]

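One possible remediation, sketched under the assumption that explicitly supplied CLI requirements should replace the defaults rather than extend them (the default list below is an illustrative subset, not the script's real DEFAULT_REQUIRED_SECRETS):

```python
DEFAULT_REQUIRED_SECRETS = ["CODECOV_TOKEN", "SONAR_TOKEN"]  # illustrative subset

def resolve_required(cli_items, defaults):
    """Use CLI-provided requirements verbatim when any are given; otherwise defaults.

    Replaces the append-to-defaults behavior flagged above, so a workflow that
    passes an explicit --required-secret list is not forced to also satisfy
    every default. dict.fromkeys gives order-preserving de-duplication.
    """
    items = list(cli_items or [])
    return list(dict.fromkeys(items)) if items else list(defaults)
```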


6. Sonar context name mismatch 🐞 Bug ✓ Correctness
Description
Quality Zero Gate waits for a required context named "SonarCloud Code Analysis", but this
repository’s Sonar workflow job is named sonarcloud; the required-checks poller will never observe
the expected context and will fail.
Code

.github/workflows/quality-zero-gate.yml[R64-76]

+          python3 scripts/quality/check_required_checks.py \
+            --repo "${GITHUB_REPOSITORY}" \
+            --sha "${GITHUB_SHA}" \
+            --required-context "Coverage 100 Gate" \
+            --required-context "Codecov Analytics" \
+            --required-context "Sonar Zero" \
+            --required-context "Codacy Zero" \
+            --required-context "Snyk Zero" \
+            --required-context "Sentry Zero" \
+            --required-context "DeepScan Zero" \
+            --required-context "SonarCloud Code Analysis" \
+            --required-context "Codacy Static Code Analysis" \
+            --required-context "DeepScan" \
Evidence
The aggregate gate checks check-run names returned by GitHub’s API. In this repo, the Sonar workflow
defines a job sonarcloud, so the check-run context name will be sonarcloud, not "SonarCloud Code
Analysis".

.github/workflows/quality-zero-gate.yml[64-76]
.github/workflows/sonarcloud.yml[1-16]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The aggregate required-checks gate requires a Sonar context name that doesn’t match any workflow/job in this repo.
## Issue Context
GitHub check-run context names generally follow the workflow job name (e.g., `sonarcloud`).
## Fix Focus Areas
- .github/workflows/quality-zero-gate.yml[64-76]
- .github/workflows/sonarcloud.yml[14-16]

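The mismatch is easy to reproduce with a minimal matcher. This is hypothetical code; the actual matching logic inside check_required_checks.py is not shown in this review, and it may match exactly rather than case-insensitively.

```python
def missing_contexts(required, observed):
    """Return required check contexts with no case-insensitive match among observed check-run names."""
    observed_lower = {name.lower() for name in observed}
    return [ctx for ctx in required if ctx.lower() not in observed_lower]
```

Even with lenient case-insensitive matching, "SonarCloud Code Analysis" never matches a check run named "sonarcloud", so the poller would report it missing until the context names are aligned.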



Remediation recommended

7. Actions pinned to v6 🐞 Bug ⛯ Reliability
Description
New workflows pin core GitHub actions to @v6, while the repo’s existing workflows consistently use
@v4/@v5; this inconsistency increases the risk of CI breakage or unexpected behavior.
Code

.github/workflows/codecov-analytics.yml[R20-26]

+      - uses: actions/checkout@v6
+      - uses: actions/setup-python@v6
+        with:
+          python-version: '3.12'
+      - uses: actions/setup-node@v6
+        with:
+          node-version: '20'
Evidence
The new workflows introduce a different major version pinning strategy than the rest of the repo.
Aligning versions reduces the likelihood of “works in CI but fails in quality gates” drift.

.github/workflows/codecov-analytics.yml[20-26]
.github/workflows/ci.yml[21-32]
.github/workflows/duplication-check.yml[18-24]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The new quality workflows pin actions to `@v6`, diverging from the repo’s existing `@v4/@v5` pins.
## Issue Context
Consistency reduces risk and simplifies maintenance/incident response.
## Fix Focus Areas
- .github/workflows/codecov-analytics.yml[20-26]
- .github/workflows/coverage-100.yml[18-24]
- .github/workflows/quality-zero-gate.yml[29-56]
- .github/workflows/snyk-zero.yml[20-24]
- .github/workflows/sonar-zero.yml[18-23]
- .github/workflows/sentry-zero.yml[22-24]
- .github/workflows/deepscan-zero.yml[21-23]
- .github/workflows/codacy-zero.yml[18-20]




@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

🧹 Nitpick comments (4)
.github/workflows/snyk-zero.yml (1)

25-26: Pin the Snyk CLI version for deterministic CI.

Line 26 installs the latest Snyk CLI every run, which can cause unexpected failures when upstream releases change behavior.

Suggested change
-      - name: Install Snyk CLI
-        run: npm install -g snyk
+      - name: Install Snyk CLI
+        run: npm install -g snyk@<tested-version>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/snyk-zero.yml around lines 25 - 26, Change the workflow
step named "Install Snyk CLI" to install a pinned Snyk CLI release instead of
the floating latest; update the run command that currently says `npm install -g
snyk` to install a specific version (e.g., `npm install -g snyk@<VERSION>`) or
reference an environment variable like `SNYK_VERSION` so CI is deterministic;
ensure the step name "Install Snyk CLI" and the run command are updated
accordingly and document/update the pinned version value.
.github/workflows/codacy-zero.yml (1)

18-18: Apply SHA pinning for workflow actions here as well.

Line 18 and Line 30 still use mutable tags; use full commit SHAs to keep this gate immutable and consistent with strict quality posture.

Also applies to: 30-30

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/codacy-zero.yml at line 18, Replace the mutable action
tags with immutable commit SHAs: locate the uses: entries such as "uses:
actions/checkout@v6" and the other uses: entry at the referenced line (line 30)
and pin each to the corresponding full commit SHA for the action repository
(e.g., actions/checkout@<full-commit-sha>) so the workflow is immutable and
consistent with the existing SHA-pinning policy; update both occurrences and
verify the SHAs match the upstream action releases you intend to lock to.
.github/workflows/quality-zero-gate.yml (1)

29-29: Pin checkout action to full SHA in both jobs.

Line 29 and Line 55 use a mutable action tag (@v6). Pinning removes retagging risk and improves gate reproducibility.

Also applies to: 55-55

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/quality-zero-gate.yml at line 29, Replace the mutable
action tag "uses: actions/checkout@v6" with the corresponding full commit SHA
for the actions/checkout v6 release in both job occurrences; locate every
instance of the string "uses: actions/checkout@v6" (e.g., the two occurrences
flagged) and update them to the SHA form "uses: actions/checkout@<commit-sha> # v6"
so the workflow is pinned to an
immutable commit.
scripts/quality/check_sentry_zero.py (1)

117-130: Align fallback project env handling with SENTRY_PROJECT.

Current fallback ignores SENTRY_PROJECT unless --project is passed. Supporting it directly would make CLI behavior consistent with workflow/preflight conventions.

Suggested refactor
-        for env_name in ("SENTRY_PROJECT_BACKEND", "SENTRY_PROJECT_WEB"):
+        for env_name in ("SENTRY_PROJECT", "SENTRY_PROJECT_BACKEND", "SENTRY_PROJECT_WEB"):
             value = str(os.environ.get(env_name, "")).strip()
             if value:
                 projects.append(value)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/quality/check_sentry_zero.py` around lines 117 - 130, The code only
collects SENTRY_PROJECT_BACKEND and SENTRY_PROJECT_WEB into projects and ignores
SENTRY_PROJECT; update the projects-building logic (the loop adding to projects)
to also consider SENTRY_PROJECT (e.g., read os.environ.get("SENTRY_PROJECT"))
and add it if present and not already in projects, but do not override an
explicit CLI --project option (check the parsed CLI variable named project or
similar) — this ensures projects includes the generic SENTRY_PROJECT as a
fallback while preserving explicit --project behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/codecov-analytics.yml:
- Around line 20-25: Replace mutable action tags with immutable full-length
commit SHAs for each used action (e.g., actions/checkout@v6,
actions/setup-python@v6, actions/setup-node@v6 and any other @v5/@v6 uses) by
finding those occurrences and changing them to the corresponding 40-character
commit SHA for the action repository; also add a trailing comment on each step
with the human-readable tag (e.g., # actions/checkout@v6) for auditability and
enable Dependabot to update SHAs automatically. Ensure you update every instance
(including the uses at lines showing actions/checkout, actions/setup-python,
actions/setup-node and any other mutable tags) and keep the step keys (uses:)
and input blocks unchanged.

In @.github/workflows/coverage-100.yml:
- Around line 22-24: Remove the unused Node setup step (the
actions/setup-node@v6 step with node-version: '20') from the workflow; locate
the step that references actions/setup-node@v6 and delete that entire step block
so the job only runs the dotnet test and python3-related steps, ensuring no
other steps depend on Node before committing the change.

In @.github/workflows/deepscan-zero.yml:
- Line 21: Replace mutable GitHub Actions version tags with immutable full
commit SHAs for the uses entries (e.g., replace actions/checkout@v6 and
actions/upload-artifact@v4) across the workflow; update each "uses:" line to
reference the corresponding full commit SHA and append a trailing comment with
the human-friendly release tag (e.g., "# v6" or "# v4") so intent remains clear
while preventing retargeting.

In @.github/workflows/quality-zero-gate.yml:
- Around line 10-12: The workflow grants only contents: read so the GITHUB_TOKEN
lacks scopes needed by check_required_checks.py; update the permissions block in
the workflow to explicitly add checks: read and statuses: read (in addition to
contents: read) so the script can call the check-runs and commit status
endpoints; ensure the permissions YAML includes these entries and that no other
steps override the token permissions.

In @.github/workflows/snyk-zero.yml:
- Around line 20-24: Replace the mutable action refs "uses: actions/checkout@v6"
and "uses: actions/setup-node@v6" with immutable pins to the full commit SHAs
for those actions (e.g., checkout and setup-node) and add a short comment
preserving the human-readable tag (e.g., # was @v6) for context; update the two
occurrences so both "actions/checkout" and "actions/setup-node" use their
respective commit SHA refs to ensure supply-chain stability.

In @.github/workflows/sonar-zero.yml:
- Around line 19-30: The Sonar scan step (uses:
SonarSource/sonarqube-scan-action@v6 / step name "Run Sonar scan") currently
fires analysis but the next step runs check_sonar_zero.py immediately causing a
race; add a synchronous wait between the scan and the "Assert Sonar zero-open
gate" step by inserting a new step that polls SonarCloud analysis/ce task or the
project analysis status until processing completes and the quality gate is
computed (or timeout), then proceed to run scripts/quality/check_sonar_zero.py;
ensure the polling logic checks Sonar Cloud's CE/task or
api/qualitygates/project_status for the given project key
"Prekzursil_SWFOC-Mod-Menu" and only exits when analysis state is finished or
quality gate is available.

In `@scripts/quality/assert_coverage_100.py`:
- Around line 20-24: The percent property currently returns 100.0 when
self.total <= 0 which lets empty/malformed reports pass; change the percent
property in the class (the percent getter) to "fail closed" by returning 0.0 (or
raising a ValueError) when total <= 0 so zero/negative totals do not count as
full coverage; also update the other identical check elsewhere in the file (the
second occurrence of the total <= 0 check around lines 92-99) to the same
behavior so all coverage calculations treat missing/zero totals as failing
rather than 100%.
- Around line 28-31: The three compiled regexes _XML_LINES_VALID_RE,
_XML_LINES_COVERED_RE and _XML_LINE_HITS_RE are using over-escaped sequences
(e.g. "\\b" and "\\.") inside raw strings which produce literal backslashes and
break the fallback hit-counter parser; update those patterns to use proper regex
escapes (use \b and \. in the pattern text, not double-escaped) so they match
XML attributes correctly, verify the fallback parser that uses _XML_LINE_HITS_RE
returns actual hit counts instead of zero, and run the percent calculation (the
percent property) to confirm uncovered code no longer reports 100.0.

In `@scripts/quality/check_deepscan_zero.py`:
- Around line 30-44: The extract_total_open function currently returns the first
matching TOTAL_KEYS found by recursive iteration which can be non-deterministic;
update extract_total_open to first check known pagination containers
("pagination", "page", "meta") inside payload dicts for any TOTAL_KEYS (and
return that value if present), then fall back to the existing recursive scan
across values and lists; keep using the same TOTAL_KEYS constant and function
name extract_total_open so callers remain unchanged.
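
A deterministic version of that lookup might look like the sketch below; the exact `TOTAL_KEYS` values and container names are assumptions, since the real constants live in the script:

```python
TOTAL_KEYS = ("totalOpen", "total", "openCount")          # illustrative key set
PAGINATION_CONTAINERS = ("pagination", "page", "meta")

def extract_total_open(payload):
    """Prefer totals in known pagination containers, then fall back to a scan."""
    if isinstance(payload, dict):
        # Deterministic first pass: check pagination containers in fixed order.
        for container in PAGINATION_CONTAINERS:
            inner = payload.get(container)
            if isinstance(inner, dict):
                for key in TOTAL_KEYS:
                    if isinstance(inner.get(key), int):
                        return inner[key]
        # Then top-level keys, then the original recursive scan.
        for key in TOTAL_KEYS:
            if isinstance(payload.get(key), int):
                return payload[key]
        for value in payload.values():
            found = extract_total_open(value)
            if found is not None:
                return found
    elif isinstance(payload, list):
        for item in payload:
            found = extract_total_open(item)
            if found is not None:
                return found
    return None
```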

In `@scripts/quality/check_quality_secrets.py`:
- Around line 11-27: The DEFAULT_REQUIRED_SECRETS and DEFAULT_REQUIRED_VARS
arrays include keys that aren’t part of the gate contract and cause false
preflight failures; update the constants by removing APPLITOOLS_API_KEY,
PERCY_TOKEN, BROWSERSTACK_USERNAME, BROWSERSTACK_ACCESS_KEY from
DEFAULT_REQUIRED_SECRETS and remove SENTRY_PROJECT_BACKEND and
SENTRY_PROJECT_WEB from DEFAULT_REQUIRED_VARS so the lists only contain the
tokens actually exported by the gate (keep SONAR_TOKEN, CODACY_API_TOKEN,
CODECOV_TOKEN, SNYK_TOKEN, SENTRY_AUTH_TOKEN and SENTRY_ORG as applicable).

In `@scripts/quality/check_required_checks.py`:
- Line 139: The token fallback currently evaluates the `or` before `.strip()`,
so a whitespace-only GITHUB_TOKEN bypasses the GH_TOKEN fallback; change the
logic to strip each environment value before applying the fallback (i.e., call
.strip() on GITHUB_TOKEN and GH_TOKEN individually) so token =
(strip(GITHUB_TOKEN) or strip(GH_TOKEN)), ensuring whitespace-only values are
treated as empty; update the assignment where `token` is defined to strip both
env reads (reference the `token` variable assignment in
check_required_checks.py).
- Around line 170-174: The loop only treats in-progress "check_run" contexts as
blocking, so pending commit-status contexts (source "status") can be ignored and
cause premature exit; update the in_progress computation in the function that
builds/waits on contexts so it also considers any status contexts with state ==
"pending" (e.g., set in_progress = any(v.get("state") != "completed" for v in
contexts.values() if v.get("source") == "check_run") or any(v.get("state") ==
"pending" for v in contexts.values() if v.get("source") == "status")), keep the
existing missing check, and continue to sleep using args.poll_seconds as before.
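
Both fixes for this script can be sketched as small helpers (function names are illustrative; the real script inlines this logic):

```python
import os

def resolve_token() -> str:
    """Strip each candidate before the fallback, so a whitespace-only
    GITHUB_TOKEN no longer shadows a valid GH_TOKEN."""
    return (
        os.environ.get("GITHUB_TOKEN", "").strip()
        or os.environ.get("GH_TOKEN", "").strip()
    )

def still_in_progress(contexts: dict) -> bool:
    """Treat pending commit statuses as blocking, not just running check-runs."""
    running_checks = any(
        v.get("state") != "completed"
        for v in contexts.values()
        if v.get("source") == "check_run"
    )
    pending_statuses = any(
        v.get("state") == "pending"
        for v in contexts.values()
        if v.get("source") == "status"
    )
    return running_checks or pending_statuses
```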

In `@scripts/quality/check_sonar_zero.py`:
- Around line 117-119: The code currently sets open_issues =
int(paging.get("total") or 0) which silently treats a missing or malformed
paging.total as zero; update the logic in the function that reads issues_payload
(look for variables issues_payload, paging, open_issues) to explicitly validate
that paging is a dict and that "total" exists and is an integer-like value—if
paging is missing or "total" is absent/invalid, raise a clear exception or log
an error and fail fast instead of defaulting to 0 so the Sonar gate does not
silently pass on malformed API responses.
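
A strict reader for that field might look like this (function name illustrative; `paging.total` follows the shape of Sonar's issues-search response):

```python
def require_open_total(issues_payload: dict) -> int:
    """Fail fast on a malformed Sonar response instead of defaulting to 0."""
    paging = issues_payload.get("paging")
    if not isinstance(paging, dict):
        raise ValueError("Sonar response is missing the 'paging' object")
    total = paging.get("total")
    # Reject bools explicitly: isinstance(True, int) is True in Python.
    if isinstance(total, bool) or not isinstance(total, int):
        raise ValueError(f"Sonar 'paging.total' is not an integer: {total!r}")
    return total
```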

---

Nitpick comments:
In @.github/workflows/codacy-zero.yml:
- Line 18: Replace the mutable action tags with immutable commit SHAs: locate
the uses: entries such as "uses: actions/checkout@v6" and the other uses: entry
at the referenced line (line 30) and pin each to the corresponding full commit
SHA for the action repository (e.g., actions/checkout@<full-commit-sha>) so the
workflow is immutable and consistent with the existing SHA-pinning policy;
update both occurrences and verify the SHAs match the upstream action releases
you intend to lock to.

In @.github/workflows/quality-zero-gate.yml:
- Line 29: Replace the mutable action tag "uses: actions/checkout@v6" with the
corresponding full commit SHA for the actions/checkout v6 release in both job
occurrences; locate every instance of the string "uses: actions/checkout@v6"
(e.g., the two occurrences flagged) and update them to the SHA form "uses:
actions/checkout@<commit-sha> # v6" so the workflow is pinned to an immutable
commit.

In @.github/workflows/snyk-zero.yml:
- Around line 25-26: Change the workflow step named "Install Snyk CLI" to
install a pinned Snyk CLI release instead of the floating latest; update the run
command that currently says `npm install -g snyk` to install a specific version
(e.g., `npm install -g snyk@<VERSION>`) or reference an environment variable
like `SNYK_VERSION` so CI is deterministic; ensure the step name "Install Snyk
CLI" and the run command are updated accordingly and document/update the pinned
version value.

In `@scripts/quality/check_sentry_zero.py`:
- Around line 117-130: The code only collects SENTRY_PROJECT_BACKEND and
SENTRY_PROJECT_WEB into projects and ignores SENTRY_PROJECT; update the
projects-building logic (the loop adding to projects) to also consider
SENTRY_PROJECT (e.g., read os.environ.get("SENTRY_PROJECT")) and add it if
present and not already in projects, but do not override an explicit CLI
--project option (check the parsed CLI variable named project or similar) — this
ensures projects includes the generic SENTRY_PROJECT as a fallback while
preserving explicit --project behavior.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e114f29 and a63ce4a.

📒 Files selected for processing (18)
  • .github/workflows/codacy-zero.yml
  • .github/workflows/codecov-analytics.yml
  • .github/workflows/coverage-100.yml
  • .github/workflows/deepscan-zero.yml
  • .github/workflows/quality-zero-gate.yml
  • .github/workflows/sentry-zero.yml
  • .github/workflows/snyk-zero.yml
  • .github/workflows/sonar-zero.yml
  • codecov.yml
  • docs/quality/QUALITY_ZERO_GATES.md
  • scripts/quality/assert_coverage_100.py
  • scripts/quality/check_codacy_zero.py
  • scripts/quality/check_deepscan_zero.py
  • scripts/quality/check_quality_secrets.py
  • scripts/quality/check_required_checks.py
  • scripts/quality/check_sentry_zero.py
  • scripts/quality/check_sonar_zero.py
  • scripts/security_helpers.py

Comment on lines +20 to +25
      - uses: actions/checkout@v6
      - uses: actions/setup-python@v6
        with:
          python-version: '3.12'
      - uses: actions/setup-node@v6
        with:

⚠️ Potential issue | 🟠 Major

❓ Verification inconclusive

Script executed:

cat .github/workflows/codecov-analytics.yml

Repository: Prekzursil/SWFOC-Mod-Menu


Exit code: 0

stdout:

name: Codecov Analytics

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]
  workflow_dispatch:

permissions:
  contents: read

jobs:
  codecov-analytics:
    name: Codecov Analytics
    runs-on: ubuntu-latest
    env:
      CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-python@v6
        with:
          python-version: '3.12'
      - uses: actions/setup-node@v6
        with:
          node-version: '20'
      - name: Validate Codecov token
        run: |
          if [ -z "${CODECOV_TOKEN}" ]; then
            echo "Missing CODECOV_TOKEN" >&2
            exit 1
          fi

      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - name: Run tests with coverage
        run: |
          mkdir -p coverage
          dotnet restore
dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release \
  /p:CollectCoverage=true \
  /p:CoverletOutput=./TestResults/coverage.cobertura.xml \
  /p:CoverletOutputFormat=cobertura
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v5
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: tests/SwfocTrainer.Tests/TestResults/coverage.cobertura.xml
          flags: dotnet
          fail_ci_if_error: true
          verbose: true

🌐 Web query:

GitHub Actions pinning best practices security mutable tags SHA

💡 Result:


GitHub Actions pinning (security) — best practices

  • Prefer pinning every third‑party action to a full‑length commit SHA (40 chars) in uses: to make the reference immutable and reduce supply‑chain risk from moved tags/branches. Example: uses: actions/checkout@<sha> # vX.Y.Z. [1][2]
  • Don’t rely on mutable tags like @v1 / @main (or even @v1.2.3 if it’s not protected by immutability), because tags can be moved/deleted if a repo is compromised. [1]
  • If you use tags, prefer “immutable releases/tags” (GitHub’s immutable releases lock the release tag and assets so they can’t be changed). [3][4]
  • Pin to SHAs that correspond to a tagged release (not an arbitrary commit) and record the human version in a comment—this makes review/auditing easier and helps automation propose safe upgrades. [2]
  • Enable Dependabot “github-actions” updates so SHA-pinned actions can be regularly bumped via PRs. [5][6]
  • Enforce pinning via policy (enterprise/org/repo): GitHub Actions policy can require actions to be pinned to a full-length SHA; noncompliant workflows fail. [7][8]
  • Remember: SHA pinning isn’t enough if the action has mutable dependencies (e.g., unpinned container images, downloaded scripts, nested actions not pinned). Audit for hidden mutability. [2]

Sources

[1] GitHub Docs – Security hardening for GitHub Actions (third‑party actions, SHA vs tags) (docs.github.com)
[2] GitHub Well‑Architected – Actions security recommendations (SHA pinning, comments, mutable deps, enforcement) (wellarchitected.github.com)
[3] GitHub Docs – Immutable releases (what they protect) (docs.github.com)
[4] GitHub Docs – Using immutable releases and tags for actions (docs.github.com)
[5] GitHub Docs – Keeping your actions up to date with Dependabot (docs.github.com)
[6] GitHub Changelog – Dependabot updates support reusable workflows (github.blog)
[7] GitHub Docs (Enterprise) – Policies: “Require actions to be pinned to a full-length commit SHA” (docs.github.com)
[8] GitHub Changelog (2025‑08‑15) – Policy supports SHA pinning enforcement (github.blog)


Pin all GitHub Actions to immutable commit SHAs.

Lines 20, 21, 24, and 43 use mutable version tags (@v6, @v5), which can be moved, deleted, or compromised. According to GitHub security hardening guidance, pin to full-length commit SHAs (40 chars) and add a comment with the human-readable version for maintainability and auditability. Use Dependabot to automate SHA updates.

Example hardening pattern
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@abc123def456... # v6
-      - uses: actions/setup-python@v6
+      - uses: actions/setup-python@abc123def456... # v6
-      - uses: actions/setup-node@v6
+      - uses: actions/setup-node@abc123def456... # v6
-        uses: codecov/codecov-action@v5
+        uses: codecov/codecov-action@abc123def456... # v5
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/codecov-analytics.yml around lines 20 - 25, Replace
mutable action tags with immutable full-length commit SHAs for each used action
(e.g., actions/checkout@v6, actions/setup-python@v6, actions/setup-node@v6 and
any other @v5/@v6 uses) by finding those occurrences and changing them to the
corresponding 40-character commit SHA for the action repository; also add a
trailing comment on each step with the human-readable tag (e.g., #
actions/checkout@v6) for auditability and enable Dependabot to update SHAs
automatically. Ensure you update every instance (including the uses at lines
showing actions/checkout, actions/setup-python, actions/setup-node and any other
mutable tags) and keep the step keys (uses:) and input blocks unchanged.

Comment on lines +22 to +24
- uses: actions/setup-node@v6
with:
node-version: '20'

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat .github/workflows/coverage-100.yml

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 1476


🏁 Script executed:

rg -n -C2 -e 'setup-node|node-version|npm|pnpm|yarn|\\bnode\\b' .github/workflows/coverage-100.yml

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 254


Remove unused Node setup from this workflow.

Line 22 adds actions/setup-node@v6, but this job does not execute any Node tooling. The workflow runs dotnet test and python3 scripts only. Removing it will reduce gate runtime and CI surface area.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/coverage-100.yml around lines 22 - 24, Remove the unused
Node setup step (the actions/setup-node@v6 step with node-version: '20') from
the workflow; locate the step that references actions/setup-node@v6 and delete
that entire step block so the job only runs the dotnet test and python3-related
steps, ensuring no other steps depend on Node before committing the change.

      DEEPSCAN_API_TOKEN: ${{ secrets.DEEPSCAN_API_TOKEN }}
      DEEPSCAN_OPEN_ISSUES_URL: ${{ vars.DEEPSCAN_OPEN_ISSUES_URL }}
    steps:
      - uses: actions/checkout@v6

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the actual content of the workflow file
cat -n .github/workflows/deepscan-zero.yml | head -40

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 1108


🏁 Script executed:

# Run the suggested verification command to find mutable action references
rg -n '^\s*-\s*uses:\s*[^@]+@v[0-9]+' .github/workflows

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 1502


🌐 Web query:

GitHub Actions security best practices pinning version tags vs commit SHA

💡 Result:

Recommendation (security-first)

Pin third‑party actions to a full-length commit SHA (e.g., actions/checkout@6929…) because it’s the most reliable immutable reference and prevents silently picking up malicious changes if a tag/branch is moved. GitHub’s own guidance calls full SHAs “the most reliable” approach and notes “immutable tags” can be disabled by repo owners. [1]

When version tags are acceptable

If you need human-readable versions, pin to a tag that is part of an immutable release (e.g., @v1.2.3 where the release is immutable). With immutable releases enabled, the release’s associated tag can’t be moved/deleted and assets can’t be modified, reducing the “tag retargeting” supply-chain risk. [2]

Practical best practice pattern (security + maintainability)

  • Pin to a SHA that corresponds to a known release tag, and add the tag as a comment so you can track intent and let automation propose updates: [1]
    uses: actions/checkout@<full_sha> # v4.1.7
  • Use Dependabot version updates (or similar tooling) to periodically bump the pinned SHAs (GitHub explicitly recommends this pairing). [1]

Org/enterprise enforcement

GitHub Actions policies can now enforce SHA pinning (workflows fail if an action isn’t pinned to a full commit SHA) and also support blocking specific actions/versions. [3]

Sources

[1] GitHub Well-Architected Framework – “Securing GitHub Actions Workflows” (Pin versions of actions) (wellarchitected.github.com)
[2] GitHub Docs – “Immutable releases” (docs.github.com)
[3] GitHub Changelog (2025-08-15) – Actions policy supports blocking and SHA pinning (github.blog)


Pin GitHub Actions to full commit SHAs instead of mutable version tags.

Lines 21 and 29 use mutable version tags (actions/checkout@v6, actions/upload-artifact@v4). Per GitHub's official security guidance, full commit SHAs are immutable and prevent supply-chain attacks where tags can be silently retargeted. Pin to the full SHA with a comment noting the release tag for readability: uses: actions/checkout@<full_sha> # v6.

This pattern is used throughout the repository's workflows and should be addressed consistently.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deepscan-zero.yml at line 21, Replace mutable GitHub
Actions version tags with immutable full commit SHAs for the uses entries (e.g.,
replace actions/checkout@v6 and actions/upload-artifact@v4) across the workflow;
update each "uses:" line to reference the corresponding full commit SHA and
append a trailing comment with the human-friendly release tag (e.g., "# v6" or
"# v4") so intent remains clear while preventing retargeting.

Comment on lines +10 to +12
permissions:
  contents: read


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n .github/workflows/quality-zero-gate.yml

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 3700


🏁 Script executed:

cat -n scripts/quality/check_required_checks.py

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 8355


🌐 Web query:

GitHub API check-runs endpoint permissions requirements

💡 Result:

GitHub REST API “check-runs” endpoints — required permissions

Key rule: Write access to Checks (creating/updating/rerequesting check runs) is only available to GitHub Apps; OAuth apps and user tokens can view check runs but can’t create/update them. [1]

| Endpoint / operation | Fine-grained token / GitHub App permission | PAT (classic) / OAuth app notes |
|---|---|---|
| Create a check run (POST /repos/{owner}/{repo}/check-runs) | Checks: write [1][2] | Must be a GitHub App (not OAuth / classic PAT) [1] |
| Update a check run (PATCH /repos/{owner}/{repo}/check-runs/{check_run_id}) | Checks: write [1][2] | OAuth apps and classic PATs cannot use this endpoint [1] |
| Get a check run (GET /repos/{owner}/{repo}/check-runs/{check_run_id}) | Checks: read [1][2] | For private repos, OAuth / classic PAT needs repo scope [1] |
| List check run annotations (GET /repos/{owner}/{repo}/check-runs/{check_run_id}/annotations) | Checks: read [1][2] | For private repos, OAuth / classic PAT needs repo scope [1] |
| Rerequest a check run (POST /repos/{owner}/{repo}/check-runs/{check_run_id}/rerequest) | Checks: write [1][2] | Write action → GitHub App [1] |
| List check runs in a check suite (GET /repos/{owner}/{repo}/check-suites/{check_suite_id}/check-runs) | Checks: read [1][2] | For private repos, OAuth / classic PAT needs repo scope [1] |
| List check runs for a Git ref (list-for-ref endpoint) | Checks: read [1] | For private repos, OAuth / classic PAT needs repo scope [1] |

Tip: GitHub also returns X-Accepted-GitHub-Permissions in REST responses to tell you the required permission(s) for the endpoint you just called. [2]

Sources: GitHub REST “Check runs” docs [1]; “Permissions required for GitHub Apps” matrix [2].


🌐 Web query:

GitHub API status endpoint commits statuses permissions requirements

💡 Result:

GitHub Status endpoint (service health)

  • GitHub’s public Statuspage API endpoints are:
    • https://www.githubstatus.com/api/v2/status.json
    • https://www.githubstatus.com/api/v2/summary.json (and related /components.json, /incidents.json, etc.) [1]
  • These are public (no GitHub token/scopes required) because they’re not part of the GitHub REST API for repositories/users—it's the status site’s API. [1]

GitHub REST API commit statuses (per-commit CI/status contexts)

Key REST endpoints

  • Create a status: POST /repos/{owner}/{repo}/statuses/{sha} [2]
  • Read combined status: GET /repos/{owner}/{repo}/commits/{ref}/status [2]
  • List statuses: GET /repos/{owner}/{repo}/commits/{ref}/statuses [2]

Permission requirements

  • OAuth app / classic PAT scopes: repo:status is the targeted scope for read/write commit statuses (or repo which also grants code access). [4]
  • Fine-grained PAT: repository permission “Commit statuses”
    • write for POST .../statuses/{sha}
    • read for the two GET endpoints [3]
  • GitHub App: repository permission “Commit statuses”
    • write for POST .../statuses/{sha}
    • read for the two GET endpoints [5]

Repo access note

  • The “combined status” and “list statuses” endpoints are accessible to users with at least pull access for private repos, and can be accessed without auth when only public resources are involved. [2]

Sources: [1] GitHub Status API docs, [2] Commit Statuses REST endpoints, [3] Fine-grained PAT permissions table, [4] OAuth scopes (repo:status), [5] GitHub App permissions required table.


Grant explicit read scopes for checks and status APIs.

Lines 10-12 only grant contents: read, but the workflow's check_required_checks.py script calls the GitHub API endpoints for check-runs and commit statuses (lines 151-152). These endpoints require checks: read and statuses: read permissions. Without explicit grants, the GITHUB_TOKEN will lack the necessary scopes and the script will fail.

Suggested fix
 permissions:
   contents: read
+  checks: read
+  statuses: read
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 permissions:
   contents: read
+  checks: read
+  statuses: read
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/quality-zero-gate.yml around lines 10 - 12, The workflow
grants only contents: read so the GITHUB_TOKEN lacks scopes needed by
check_required_checks.py; update the permissions block in the workflow to
explicitly add checks: read and statuses: read (in addition to contents: read)
so the script can call the check-runs and commit status endpoints; ensure the
permissions YAML includes these entries and that no other steps override the
token permissions.

Comment on lines +20 to +24
      - uses: actions/checkout@v6
      - name: Set up Node
        uses: actions/setup-node@v6
        with:
          node-version: '20'

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

GitHub Actions best practices version pinning checkout setup-node

💡 Result:

Version pinning (supply-chain safety)

  • Prefer pinning every third‑party action to a full commit SHA (immutable), ideally the SHA that corresponds to a tagged release; add a comment with the tag for readability. This mitigates tag/branch retargeting attacks. [1][2][6]
  • If you can’t SHA‑pin everywhere, pin at least to a major/minor tag you trust, and use “Verified creator” actions where possible—but tags can still be moved. [1][6]
  • Automate updates with Dependabot/Renovate to regularly bump pinned SHAs; note that Dependabot security alerts for Actions rely on semver, while SHA-pinned actions generally get version updates but not the same alerting. [2]
  • For org/enterprise governance, GitHub policies can enforce SHA pinning (and even block specific actions/versions). [5]

actions/checkout best practices

  • Use least privilege token permissions; for checkout, GitHub recommends:
    permissions: contents: read [3]
  • Default checkout fetches a single commit; use fetch-depth: 0 only if you truly need full history/tags (e.g., git describe, full diff tooling). [3]
  • The auth token is persisted in git config by default; set persist-credentials: false if you don’t need authenticated git operations later in the job. [3]

actions/setup-node best practices

  • Always specify the Node version (don’t rely on whatever is on the runner image). [4]
  • Prefer node-version-file (e.g., .nvmrc, .node-version, package.json) to keep CI and local dev aligned. [4]
  • Use built-in dependency caching intentionally (npm/yarn/pnpm) and be cautious with automatic caching in higher-privilege workflows. [4]

Example: SHA-pinned checkout + setup-node (recommended pattern)

permissions:
  contents: read

steps:
  - uses: actions/checkout@<FULL_SHA> # vX.Y.Z
    with:
      persist-credentials: false
      fetch-depth: 1

  - uses: actions/setup-node@<FULL_SHA> # vX.Y.Z
    with:
      node-version-file: .nvmrc
      cache: npm

  - run: npm ci
  - run: npm test

Sources: GitHub security hardening guidance on SHA pinning/auditing actions [1], GitHub Well‑Architected Actions security recommendations (pinning + Dependabot nuance) [2], actions/checkout docs (permissions/fetch-depth/persist-credentials) [3], actions/setup-node docs (specify Node version/caching) [4], GitHub policy enforcement for SHA pinning [5], OpenSSF discussion of pinning modes/risks [6].


🏁 Script executed:

# Check if the file exists and view the relevant lines
if [ -f ".github/workflows/snyk-zero.yml" ]; then
  cat -n .github/workflows/snyk-zero.yml | sed -n '15,30p'
fi

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 622


🌐 Web query:

actions/checkout v6 version tag stability GitHub Actions

💡 Result:

  • actions/checkout@v6 is a floating “major-version” tag: it’s intended to move to the newest compatible v6.x.y release over time (so your workflow can change when new v6 patch/minor releases ship). GitHub’s guidance for action maintainers is to keep major (vN) and minor (vN.N) tags updated to the latest appropriate commit. [1]
  • Tags are not inherently immutable: Git tags can be force-moved (retagged). GitHub Docs explicitly describes updating/moving major/minor tags to point at newer refs, and shows force-pushing tags. [2]
  • Current state (as of Mar 3, 2026): the latest actions/checkout v6 release is v6.0.2 (Jan 9, 2026). So actions/checkout@v6 will resolve to whatever commit the repository currently points the v6 tag at (typically the latest v6 release). [3]
  • If you need stability/security, GitHub recommends pinning actions to a full commit SHA (ideally the SHA for a tagged release), rather than a floating tag like v6. [4]

Sources: GitHub Docs on releasing/maintaining tags [1], GitHub Docs on immutable releases/tags and moving tags [2], actions/checkout releases showing v6.0.2 [3], GitHub Well-Architected Actions security guidance on SHA pinning [4].


🌐 Web query:

GitHub Actions major version tags immutable or mutable best practices

💡 Result:

Are GitHub Actions “major version” tags (e.g., @v1) immutable?

No—by default they’re mutable “moving tags.” Action maintainers commonly update v1 to point at the latest compatible v1.x.y release, and Git tags can be moved (e.g., via force-push) unless protected by additional controls. (docs.github.com)

They become effectively immutable only when you pin to an immutable reference (best: a full commit SHA) or when a repository uses GitHub’s immutable releases feature for release tags. (docs.github.com)

Best practices (security + stability)

  1. Pin third-party actions to a full-length commit SHA (most secure / truly immutable). (docs.github.com)
  2. If you use tags for convenience, prefer immutable release tags (e.g., v1.2.3 from an immutable release) over moving tags like v1, and only do this when you trust the publisher. (docs.github.com)
  3. Use Dependabot (or similar) to keep pinned SHAs updated to newer trusted releases. (docs.github.com)
  4. For orgs/enterprises: enforce SHA pinning via GitHub Actions policy to prevent workflows from using unpinned tags/branches. (github.blog)

Practical compromise pattern

Pin to SHA, and add a comment with the human-readable version:

- uses: actions/checkout@<full_commit_sha> # v4.1.7

This is explicitly recommended as a reliable way to get immutability while keeping reviews understandable. (wellarchitected.github.com)


Pin action refs to commit SHAs to ensure supply-chain stability.

Lines 20 and 22 use floating major-version tags (@v6). These tags are mutable and will automatically resolve to newer v6.x.y releases, creating non-determinism and risking unexpected breaking changes. GitHub's security hardening guidance recommends pinning to immutable full-length commit SHAs instead (with a comment noting the version for readability).

Suggested fix
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@<full-length-commit-sha> # v6.0.2
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@<full-length-commit-sha> # v6.0.2
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/snyk-zero.yml around lines 20 - 24, Replace the mutable
action refs "uses: actions/checkout@v6" and "uses: actions/setup-node@v6" with
immutable pins to the full commit SHAs for those actions (e.g., checkout and
setup-node) and add a short YAML comment preserving the human-readable tag (e.g.,
`# was @v6`) for context; update the two occurrences so both "actions/checkout" and
"actions/setup-node" use their respective commit SHA refs to ensure supply-chain
stability.

Comment on lines +30 to +44
def extract_total_open(payload: Any) -> int | None:
    if isinstance(payload, dict):
        for key, value in payload.items():
            if key in TOTAL_KEYS and isinstance(value, (int, float)):
                return int(value)
        for nested in payload.values():
            total = extract_total_open(nested)
            if total is not None:
                return total
    elif isinstance(payload, list):
        for nested in payload:
            total = extract_total_open(nested)
            if total is not None:
                return total
    return None

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n scripts/quality/check_deepscan_zero.py | head -50

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 2246


🌐 Web query:

DeepScan API documentation open issues endpoint response schema

💡 Result:

DeepScan doesn’t appear to publish a public REST “open issues” JSON endpoint (and thus no JSON response schema) in its documentation; what is publicly documented/used are badge endpoints that return an image (SVG by default, PNG optional), not a JSON payload. [1]

“Open issues” badge endpoint (as exposed via Badgen)

Badgen’s DeepScan integration exposes an “issues” badge (unresolved/open issues count) at:

  • GET /deepscan/issues/team/{teamId}/project/{projectId}/branch/{branchId}

Response schema: image/svg+xml (badge SVG). [2]

DeepScan’s own badge URLs (image responses)

DeepScan’s docs describe badges as SVG by default (switch to PNG by replacing .svg with .png). [1] A known DeepScan badge URL format in the wild is, e.g.:

  • GET https://deepscan.io/api/teams/{teamId}/projects/{projectId}/branches/{branchId}/badge/grade.svg

Response schema: image/svg+xml (SVG image). [3]

If you meant a different “DeepScan” product/API, share the base URL (or the exact endpoint path you’re calling) and I’ll pin down the exact response schema.


🏁 Script executed:

cat -n scripts/quality/check_deepscan_zero.py | tail -50

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 2321


🏁 Script executed:

rg -n "deepscan\|_request_json\|http" scripts/quality/check_deepscan_zero.py

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 51


🏁 Script executed:

grep -r "DEEPSCAN_OPEN_ISSUES_URL" scripts/ --include="*.py" --include="*.md" --include="*.txt" -B 2 -A 2

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 843


🏁 Script executed:

find . -name "*.md" -o -name "*.txt" | xargs grep -l "deepscan\|DeepScan" 2>/dev/null | head -5

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 105


🏁 Script executed:

cat ./docs/quality/QUALITY_ZERO_GATES.md | grep -A 20 -B 5 -i "deepscan"

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 430


🏁 Script executed:

find . -type f \( -name "*test*" -o -name "*deepscan*" \) -name "*.py" -o -name "*.json" | grep -i deepscan

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 109


🏁 Script executed:

rg -A 10 "DEEPSCAN_OPEN_ISSUES_URL|extract_total_open" . --type py

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 7249


Align DeepScan extract_total_open with Codacy's deterministic extraction pattern.

The current implementation recursively searches for the first matching total/count/hits field in the JSON tree. With multiple counters in the payload, iteration order becomes non-deterministic, risking extraction of an incorrect field value. The similar extract_total_open in check_codacy_zero.py already implements a more robust approach: checking common pagination containers (pagination, page, meta) before falling back to full recursion.

Apply the same deterministic pattern here:

Suggested fix
 def extract_total_open(payload: Any) -> int | None:
     if isinstance(payload, dict):
         for key, value in payload.items():
             if key in TOTAL_KEYS and isinstance(value, (int, float)):
                 return int(value)
+
+        # Check common pagination structures before full recursion
+        for key in ("pagination", "page", "meta"):
+            nested = payload.get(key)
+            total = extract_total_open(nested)
+            if total is not None:
+                return total
+
         for nested in payload.values():
             total = extract_total_open(nested)
             if total is not None:
                 return total
     elif isinstance(payload, list):
🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis

[warning] 30-30: scripts/quality/check_deepscan_zero.py#L30
Method extract_total_open has a cyclomatic complexity of 10 (limit is 8)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/quality/check_deepscan_zero.py` around lines 30 - 44, The
extract_total_open function currently returns the first matching TOTAL_KEYS
found by recursive iteration which can be non-deterministic; update
extract_total_open to first check known pagination containers ("pagination",
"page", "meta") inside payload dicts for any TOTAL_KEYS (and return that value
if present), then fall back to the existing recursive scan across values and
lists; keep using the same TOTAL_KEYS constant and function name
extract_total_open so callers remain unchanged.


def main() -> int:
    args = _parse_args()
    token = (os.environ.get("GITHUB_TOKEN", "") or os.environ.get("GH_TOKEN", "")).strip()

⚠️ Potential issue | 🟡 Minor

Token fallback can be bypassed by whitespace in GITHUB_TOKEN.

On line 139, the `or` is evaluated before `.strip()`: the raw values are combined first and only the combined result is stripped. If GITHUB_TOKEN is whitespace-only (e.g. " ") while GH_TOKEN holds a real token, the truthy whitespace wins the `or`, GH_TOKEN is ignored, and the stripped result is empty, so the script exits as if no token exists.

Proposed fix
-    token = (os.environ.get("GITHUB_TOKEN", "") or os.environ.get("GH_TOKEN", "")).strip()
+    token = os.environ.get("GITHUB_TOKEN", "").strip() or os.environ.get("GH_TOKEN", "").strip()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
token = (os.environ.get("GITHUB_TOKEN", "") or os.environ.get("GH_TOKEN", "")).strip()
token = os.environ.get("GITHUB_TOKEN", "").strip() or os.environ.get("GH_TOKEN", "").strip()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/quality/check_required_checks.py` at line 139, The token fallback
currently evaluates the `or` before `.strip()`, so a whitespace-only
GITHUB_TOKEN bypasses the GH_TOKEN fallback; change the logic to strip each
environment value before applying the fallback (i.e., call .strip() on
GITHUB_TOKEN and GH_TOKEN individually) so token = (strip(GITHUB_TOKEN) or
strip(GH_TOKEN)), ensuring whitespace-only values are treated as empty; update
the assignment where `token` is defined to strip both env reads (reference the
`token` variable assignment in check_required_checks.py).
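The difference is easiest to see side by side. This sketch reproduces both orderings against a plain dict standing in for os.environ (function names are illustrative):

```python
def resolve_token_buggy(env: dict) -> str:
    # Original order of operations: `or` first, then strip the combined result.
    return (env.get("GITHUB_TOKEN", "") or env.get("GH_TOKEN", "")).strip()


def resolve_token_fixed(env: dict) -> str:
    # Strip each candidate before the fallback decision,
    # so whitespace-only values are treated as empty.
    return env.get("GITHUB_TOKEN", "").strip() or env.get("GH_TOKEN", "").strip()
```

Given GITHUB_TOKEN=" " and a real GH_TOKEN, the buggy version yields an empty string while the fixed version falls through to GH_TOKEN.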

Comment on lines +170 to +174
        # wait only while there are missing contexts or in-progress check-runs
        in_progress = any(v.get("state") != "completed" for v in contexts.values() if v.get("source") == "check_run")
        if not missing and not in_progress:
            break
        time.sleep(max(args.poll_seconds, 1))

⚠️ Potential issue | 🟠 Major

Wait logic misses pending commit-status contexts.

Line 171 only waits on in-progress check_runs. If a required status context is pending, the loop exits early on Line 172 and fails immediately instead of polling until it resolves.

Proposed fix
-        # wait only while there are missing contexts or in-progress check-runs
-        in_progress = any(v.get("state") != "completed" for v in contexts.values() if v.get("source") == "check_run")
-        if not missing and not in_progress:
+        # wait while required contexts are still non-terminal (check-runs in progress or statuses pending)
+        pending_required = any(
+            (obs := contexts.get(ctx))
+            and (
+                (obs.get("source") == "check_run" and obs.get("state") != "completed")
+                or (obs.get("source") == "status" and obs.get("conclusion") == "pending")
+            )
+            for ctx in required
+        )
+        if not missing and not pending_required:
             break
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

        # wait only while there are missing contexts or in-progress check-runs
        in_progress = any(v.get("state") != "completed" for v in contexts.values() if v.get("source") == "check_run")
        if not missing and not in_progress:
            break
        time.sleep(max(args.poll_seconds, 1))

        # wait while required contexts are still non-terminal (check-runs in progress or statuses pending)
        pending_required = any(
            (obs := contexts.get(ctx))
            and (
                (obs.get("source") == "check_run" and obs.get("state") != "completed")
                or (obs.get("source") == "status" and obs.get("conclusion") == "pending")
            )
            for ctx in required
        )
        if not missing and not pending_required:
            break
        time.sleep(max(args.poll_seconds, 1))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/quality/check_required_checks.py` around lines 170 - 174, The loop
only treats in-progress "check_run" contexts as blocking, so pending
commit-status contexts (source "status") can be ignored and cause premature
exit; update the in_progress computation in the function that builds/waits on
contexts so it also considers any status contexts with state == "pending" (e.g.,
set in_progress = any(v.get("state") != "completed" for v in contexts.values()
if v.get("source") == "check_run") or any(v.get("state") == "pending" for v in
contexts.values() if v.get("source") == "status")), keep the existing missing
check, and continue to sleep using args.poll_seconds as before.
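A small, self-contained sketch of the broadened predicate. Field names follow the shapes quoted in this review; in the GitHub commit-status API the pending value lives in the status's state field, and the real script may store it under a different key:

```python
def has_pending_required(contexts: dict, required: list) -> bool:
    """True while any required context is non-terminal:
    a check-run not yet completed, or a commit status still pending."""
    for ctx in required:
        obs = contexts.get(ctx)
        if obs is None:
            continue  # missing contexts are tracked separately by the caller
        if obs.get("source") == "check_run" and obs.get("state") != "completed":
            return True
        if obs.get("source") == "status" and obs.get("state") == "pending":
            return True
    return False
```

The poll loop would then keep sleeping while `missing or has_pending_required(contexts, required)` holds, instead of exiting as soon as check-runs alone are done.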

Comment on lines +117 to +119
paging = issues_payload.get("paging") or {}
open_issues = int(paging.get("total") or 0)


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n scripts/quality/check_sonar_zero.py | head -130 | tail -20

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 1205


🌐 Web query:

SonarCloud SonarQube API issues/search endpoint response schema paging total

💡 Result:

/api/issues/search returns item-count paging, not page-count paging.

Response paging schema (relevant part)

{
  "paging": {
    "pageIndex": 1,
    "pageSize": 100,
    "total": 53105
  }
}
  • paging.pageIndex: current page number (1-based)
  • paging.pageSize: items per page (parameter ps; default is often 100; max is 500)
  • paging.total: total number of issues matching the query (items), not “total pages” (so it can be 53105, etc.). [1] (community.sonarsource.com)

So total pages is ceil(paging.total / paging.pageSize).

Notes / gotchas

  • Some SonarQube versions also include top-level total, p, ps deprecated since 9.8; they mean the same thing as paging.total, paging.pageIndex, paging.pageSize. [1] (community.sonarsource.com)
  • There’s a hard limit of 10,000 issues retrievable via /api/issues/search (even if you paginate); workaround is to add more filters (e.g., by date/facet) to keep each query under 10k. [2] (community.sonarsource.com)
  • On SonarQube Server (self-hosted), a reported bypass for retrieving all issues is using the internal endpoint /api/issues/pull (available since SQ v9.5). [1] (community.sonarsource.com)

Sources

  1. Sonar Community: “Number of pages in Rest API response is broken” (explains paging.total is item count; notes deprecated fields; mentions /api/issues/pull). (community.sonarsource.com)
  2. Sonar Community: “Sonarcloud Web API - Issues/Search endpoint record limit” (10k limit + filtering workaround). (community.sonarsource.com)


Add explicit error handling when Sonar issue totals are missing from API response.

Line 118 defaults missing totals to 0, which silently passes the zero-issues gate if the payload structure is unexpected. While Sonar's standard /api/issues/search endpoint always returns paging.total, defensive error handling is warranted.

Suggested fix
             issues_payload = _request_json(issues_url, auth)
             paging = issues_payload.get("paging") or {}
-            open_issues = int(paging.get("total") or 0)
+            total = paging.get("total") or issues_payload.get("total")
+            if total is None:
+                findings.append("Sonar issues payload missing total count.")
+            else:
+                open_issues = int(total)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/quality/check_sonar_zero.py` around lines 117 - 119, The code
currently sets open_issues = int(paging.get("total") or 0) which silently treats
a missing or malformed paging.total as zero; update the logic in the function
that reads issues_payload (look for variables issues_payload, paging,
open_issues) to explicitly validate that paging is a dict and that "total"
exists and is an integer-like value—if paging is missing or "total" is
absent/invalid, raise a clear exception or log an error and fail fast instead of
defaulting to 0 so the Sonar gate does not silently pass on malformed API
responses.
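A hedged sketch of the fail-fast variant; the function name is illustrative, since the real script inlines this logic:

```python
def extract_open_issues(issues_payload: dict) -> int:
    """Fail fast when paging.total is absent instead of defaulting to 0,
    so a malformed Sonar response cannot silently pass the zero-issues gate."""
    paging = issues_payload.get("paging")
    if not isinstance(paging, dict):
        raise ValueError("Sonar issues payload missing 'paging' object")
    total = paging.get("total")
    if not isinstance(total, int):
        raise ValueError("Sonar issues payload missing integer 'paging.total'")
    return total
```

A genuinely clean project still passes (total == 0 is a valid value); only a structurally unexpected payload raises and fails the gate.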

workflow_dispatch:

permissions:
  contents: read
Check notice

Code scanning / SonarCloud

Read permissions should be defined at the job level Low

Move this read permission from workflow level to job level. See more on SonarQube Cloud
Prekzursil and others added 2 commits March 3, 2026 08:36
Co-authored-by: Codex <noreply@openai.com>
Co-authored-by: Codex <noreply@openai.com>
@sonarqubecloud

sonarqubecloud bot commented Mar 3, 2026

Quality Gate failed

Failed conditions
14 New issues
3 Security Hotspots
14.3% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

Catch issues before they fail your Quality Gate with our IDE extension SonarQube for IDE


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
.github/workflows/quality-zero-gate.yml (1)

10-12: ⚠️ Potential issue | 🟠 Major

quality-zero-gate permissions are too narrow for required-check context polling.

Line 10 to Line 12 only grants contents: read. The required-check gate typically needs checks: read and statuses: read; otherwise API polling can fail with permission errors. Also, defining these at job scope addresses the static-analysis note.

Suggested fix
-permissions:
-  contents: read
-
 jobs:
   secrets-preflight:
     name: Quality Secrets Preflight
     runs-on: ubuntu-latest
+    permissions:
+      contents: read
@@
   quality-zero-gate:
     name: Quality Zero Gate
     if: always()
     runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      checks: read
+      statuses: read
     needs:
       - secrets-preflight

Use this read-only check to confirm endpoint usage and current permission block:

#!/bin/bash
set -euo pipefail

echo "== Workflow permissions =="
sed -n '1,40p' .github/workflows/quality-zero-gate.yml

echo
echo "== check_required_checks.py GitHub API usage candidates =="
rg -n -C2 --type=py 'check-runs|/commits/.*/status|/statuses' scripts/quality/check_required_checks.py

Expected: matches for check-runs and/or commit-status endpoints; if present, checks: read and statuses: read should be granted to the job.

Also applies to: 42-50

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/quality-zero-gate.yml around lines 10 - 12, The workflow
"quality-zero-gate" currently only grants "contents: read"; update the
workflow/job-level permissions to also include "checks: read" and "statuses:
read" (e.g., add these keys alongside contents: read under the permissions block
for the quality-zero-gate job) so the required-check context can poll check-runs
and commit status endpoints; after changing, run the suggested check (searching
for "check-runs" or "/commits/.*/status" in
scripts/quality/check_required_checks.py) to confirm the endpoints used match
the granted permissions.
scripts/quality/check_quality_secrets.py (1)

11-22: ⚠️ Potential issue | 🔴 Critical

Default preflight contract is still misaligned with the declared gate requirements.

From Line 11 to Line 22, the defaults still require APPLITOOLS_API_KEY but do not require CODECOV_TOKEN, DEEPSCAN_API_TOKEN, DEEPSCAN_POLICY_MODE, or DEEPSCAN_OPEN_ISSUES_URL. This breaks the intended fail-closed contract for this PR.

Suggested fix
 DEFAULT_REQUIRED_SECRETS = [
     "SONAR_TOKEN",
     "CODACY_API_TOKEN",
+    "CODECOV_TOKEN",
     "SNYK_TOKEN",
     "SENTRY_AUTH_TOKEN",
-    "APPLITOOLS_API_KEY",
+    "DEEPSCAN_API_TOKEN",
 ]
 
 DEFAULT_REQUIRED_VARS = [
     "SENTRY_ORG",
     "SENTRY_PROJECT",
+    "DEEPSCAN_POLICY_MODE",
+    "DEEPSCAN_OPEN_ISSUES_URL",
 ]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/quality/check_quality_secrets.py` around lines 11 - 22, The defaults
in DEFAULT_REQUIRED_SECRETS and DEFAULT_REQUIRED_VARS are out of sync with the
gate contract: update DEFAULT_REQUIRED_SECRETS (replace/remove
"APPLITOOLS_API_KEY" and add "CODECOV_TOKEN" and "DEEPSCAN_API_TOKEN") and
update DEFAULT_REQUIRED_VARS (add "DEEPSCAN_POLICY_MODE" and
"DEEPSCAN_OPEN_ISSUES_URL") so the constants DEFAULT_REQUIRED_SECRETS and
DEFAULT_REQUIRED_VARS reflect the required gate keys.
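A minimal sketch of a fail-closed preflight over the required list this PR declares. The names mirror the PR description; how check_quality_secrets.py actually receives its configuration is not reproduced here:

```python
REQUIRED_SECRETS = [
    "CODECOV_TOKEN", "SONAR_TOKEN", "CODACY_API_TOKEN",
    "SNYK_TOKEN", "SENTRY_AUTH_TOKEN", "DEEPSCAN_API_TOKEN",
]
REQUIRED_VARS = [
    "SENTRY_ORG", "SENTRY_PROJECT",
    "DEEPSCAN_POLICY_MODE", "DEEPSCAN_OPEN_ISSUES_URL",
]


def missing_names(env: dict, names: list) -> list:
    # Whitespace-only values count as missing, so the gate fails closed.
    return [n for n in names if not env.get(n, "").strip()]


def preflight(env: dict) -> int:
    """Return a non-zero exit code if any required secret or var is absent."""
    missing = missing_names(env, REQUIRED_SECRETS + REQUIRED_VARS)
    for name in missing:
        print(f"missing required setting: {name}")
    return 1 if missing else 0
```

In CI this would be driven by something like `sys.exit(preflight(dict(os.environ)))`, so an unset or blank entry fails the job rather than silently downgrading the gate.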
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In @.github/workflows/quality-zero-gate.yml:
- Around line 10-12: The workflow "quality-zero-gate" currently only grants
"contents: read"; update the workflow/job-level permissions to also include
"checks: read" and "statuses: read" (e.g., add these keys alongside contents:
read under the permissions block for the quality-zero-gate job) so the
required-check context can poll check-runs and commit status endpoints; after
changing, run the suggested check (searching for "check-runs" or
"/commits/.*/status" in scripts/quality/check_required_checks.py) to confirm the
endpoints used match the granted permissions.

In `@scripts/quality/check_quality_secrets.py`:
- Around line 11-22: The defaults in DEFAULT_REQUIRED_SECRETS and
DEFAULT_REQUIRED_VARS are out of sync with the gate contract: update
DEFAULT_REQUIRED_SECRETS (replace/remove "APPLITOOLS_API_KEY" and add
"CODECOV_TOKEN" and "DEEPSCAN_API_TOKEN") and update DEFAULT_REQUIRED_VARS (add
"DEEPSCAN_POLICY_MODE" and "DEEPSCAN_OPEN_ISSUES_URL") so the constants
DEFAULT_REQUIRED_SECRETS and DEFAULT_REQUIRED_VARS reflect the required gate
keys.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a63ce4a and 3eaafc5.

📒 Files selected for processing (5)
  • .github/workflows/codecov-analytics.yml
  • .github/workflows/deepscan-zero.yml
  • .github/workflows/quality-zero-gate.yml
  • .github/workflows/snyk-zero.yml
  • scripts/quality/check_quality_secrets.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • .github/workflows/snyk-zero.yml
  • .github/workflows/codecov-analytics.yml

@Prekzursil
Owner Author

Infra-wiring continuation update applied.

What changed in this branch:

  • Hardened .github/workflows/snyk-zero.yml with deterministic OSS target detection.
  • Added non-fatal OSS skip path with artifact output (snyk-oss-mode.json) when no supported OSS target exists.
  • Kept snyk code test mandatory and failing on findings.
  • Ensured CODECOV token provisioning path is configured for this repo.

Latest runs on this branch:

Merge policy for this wave remains: admin bypass allowed after infra-wiring verification, even if findings/coverage checks remain red.

@Prekzursil Prekzursil merged commit 2f91aea into main Mar 3, 2026
11 of 25 checks passed
@qodo-code-review
Copy link


CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: Codecov Analytics

Failed stage: Run tests with coverage [❌]

Failed test name: ""

Failure summary:

The action failed during the test step because the test runner could not start testhost.dll on the Linux GitHub Actions runner.

  • The test assembly was built for net8.0-windows and requires the Microsoft.WindowsDesktop.App runtime (8.0.0), as shown by Framework: 'Microsoft.WindowsDesktop.App', version '8.0.0'.
  • On the Ubuntu runner, that Windows Desktop framework is not available (No frameworks were found), so vstest aborts the run and exits with code 1.

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

210:  SwfocTrainer.Meg -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.Meg/bin/Release/net8.0/SwfocTrainer.Meg.dll
211:  SwfocTrainer.Core -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.Core/bin/Release/net8.0/SwfocTrainer.Core.dll
212:  SwfocTrainer.DataIndex -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.DataIndex/bin/Release/net8.0/SwfocTrainer.DataIndex.dll
213:  SwfocTrainer.Flow -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.Flow/bin/Release/net8.0/SwfocTrainer.Flow.dll
214:  SwfocTrainer.Runtime -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.Runtime/bin/Release/net8.0-windows/SwfocTrainer.Runtime.dll
215:  SwfocTrainer.Profiles -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.Profiles/bin/Release/net8.0/SwfocTrainer.Profiles.dll
216:  SwfocTrainer.Saves -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.Saves/bin/Release/net8.0/SwfocTrainer.Saves.dll
217:  SwfocTrainer.Helper -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.Helper/bin/Release/net8.0/SwfocTrainer.Helper.dll
218:  SwfocTrainer.Catalog -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.Catalog/bin/Release/net8.0/SwfocTrainer.Catalog.dll
219:  SwfocTrainer.App -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/src/SwfocTrainer.App/bin/Release/net8.0-windows/SwfocTrainer.App.dll
220:  SwfocTrainer.Tests -> /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/tests/SwfocTrainer.Tests/bin/Release/net8.0-windows/SwfocTrainer.Tests.dll
221:  Test run for /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/tests/SwfocTrainer.Tests/bin/Release/net8.0-windows/SwfocTrainer.Tests.dll (.NETCoreApp,Version=v8.0)
222:  VSTest version 17.11.1 (x64)
223:  Starting test execution, please wait...
224:  A total of 1 test files matched the specified pattern.
225:  Testhost process for source(s) '/home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/tests/SwfocTrainer.Tests/bin/Release/net8.0-windows/SwfocTrainer.Tests.dll' exited with error: You must install or update .NET to run this application.
226:  App: /home/runner/work/SWFOC-Mod-Menu/SWFOC-Mod-Menu/tests/SwfocTrainer.Tests/bin/Release/net8.0-windows/testhost.dll
227:  Architecture: x64
228:  Framework: 'Microsoft.WindowsDesktop.App', version '8.0.0' (x64)
229:  .NET location: /usr/share/dotnet/
230:  No frameworks were found.
231:  Learn more:
232:  https://aka.ms/dotnet/app-launch-failed
233:  To install missing framework, download:
234:  https://aka.ms/dotnet-core-applaunch?framework=Microsoft.WindowsDesktop.App&framework_version=8.0.0&arch=x64&rid=linux-x64&os=ubuntu.24.04
235:  . Please check the diagnostic logs for more information.
236:  Test Run Aborted.
237:  ##[error]Process completed with exit code 1.
238:  Post job cleanup.

