
Fix tekton test image extraction to use Konflux mirrors#3984

Open
geetikakay wants to merge 1 commit into RedHatQE:main from geetikakay:tekton_brew_konflux

Conversation

@geetikakay
Contributor

@geetikakay geetikakay commented Feb 26, 2026

Short description:

The tekton test image kubevirt-tekton-tasks-tests was previously pulled from Brew. With the move to Konflux builds, the Brew path no longer hosts these images, causing Tekton pipeline tests to fail during image extraction. These images carry the pipeline definitions.

More details:

We don't ship the test NVRs, so they can never be part of an external registry. Accessing them through Konflux is the only solution, as discussed with the CI team.

What this PR does / why we need it:

Small change to dynamically resolve the right Konflux builds from the ImageDigestMirrorSet resources on the cluster.

Which issue(s) this PR fixes:
Special notes for reviewer:

This will fix all tier3 lanes.

jira-ticket:

Summary by CodeRabbit

Release Notes

  • Tests
    • Refactored core test fixtures to enhance Tekton infrastructure reliability and accuracy in the test suite
    • Added automatic image reference construction from cluster annotations and mirror path configurations
    • Improved resource discovery mechanisms for cluster-specific settings in test environments
    • Updated Tekton resource extraction processes with enhanced error handling and diagnostic messaging

@openshift-virtualization-qe-bot-5

Report bugs in Issues

Welcome! 🎉

This pull request will be automatically processed with the following features:

🔄 Automatic Actions

  • Reviewer Assignment: Reviewers are automatically assigned based on the OWNERS file in the repository root
  • Size Labeling: PR size labels (XS, S, M, L, XL, XXL) are automatically applied based on changes
  • Issue Creation: A tracking issue is created for this PR and will be closed when the PR is merged or closed
  • Branch Labeling: Branch-specific labels are applied to track the target branch
  • Auto-verification: Auto-verified users have their PRs automatically marked as verified
  • Labels: Enabled categories: branch, can-be-merged, cherry-pick, has-conflicts, hold, needs-rebase, size, verified, wip

📋 Available Commands

PR Status Management

  • /wip - Mark PR as work in progress (adds WIP: prefix to title)
  • /wip cancel - Remove work in progress status
  • /hold - Block PR merging (approvers only)
  • /hold cancel - Unblock PR merging
  • /verified - Mark PR as verified
  • /verified cancel - Remove verification status
  • /reprocess - Trigger complete PR workflow reprocessing (useful if webhook failed or configuration changed)
  • /regenerate-welcome - Regenerate this welcome message

Review & Approval

  • /lgtm - Approve changes (looks good to me)
  • /approve - Approve PR (approvers only)
  • /assign-reviewers - Assign reviewers based on OWNERS file
  • /assign-reviewer @username - Assign specific reviewer
  • /check-can-merge - Check if PR meets merge requirements

Testing & Validation

  • /retest tox - Run Python test suite with tox
  • /retest build-container - Rebuild and test container image
  • /retest verify-bugs-are-open - Verify that bugs referenced by tests are still open
  • /retest all - Run all available tests

Container Operations

  • /build-and-push-container - Build and push container image (tagged with PR number)
    • Supports additional build arguments: /build-and-push-container --build-arg KEY=value

Cherry-pick Operations

  • /cherry-pick <branch> - Schedule cherry-pick to target branch when PR is merged
    • Multiple branches: /cherry-pick branch1 branch2 branch3

Label Management

  • /<label-name> - Add a label to the PR
  • /<label-name> cancel - Remove a label from the PR

✅ Merge Requirements

This PR will be automatically approved when the following conditions are met:

  1. Approval: /approve from at least one approver
  2. LGTM Count: Minimum 2 /lgtm from reviewers
  3. Status Checks: All required status checks must pass
  4. No Blockers: No WIP, hold, conflict labels
  5. Verified: PR must be marked as verified (if verification is enabled)

📊 Review Process

Approvers and Reviewers

Approvers:

  • dshchedr
  • myakove
  • rnetser
  • vsibirsk

Reviewers:

  • RoniKishner
  • dshchedr
  • geetikakay
  • rnetser
  • vsibirsk
Available Labels
  • hold
  • verified
  • wip
  • lgtm
  • approve

💡 Tips

  • WIP Status: Use /wip when your PR is not ready for review
  • Verification: The verified label is automatically removed on each new commit
  • Cherry-picking: Cherry-pick labels are processed when the PR is merged
  • Container Builds: Container images are automatically tagged with the PR number
  • Permission Levels: Some commands require approver permissions
  • Auto-verified Users: Certain users have automatic verification and merge privileges

For more information, please refer to the project documentation or contact the maintainers.

@coderabbitai
Contributor

coderabbitai bot commented Feb 26, 2026

📝 Walkthrough

Walkthrough

Test fixture infrastructure was refactored to decompose image reference resolution. The extracted_tekton_test_image fixture was replaced with three new fixtures: tekton_test_image_name_and_digest (extracts from CSV annotations), konflux_base_path (discovers the mirror base from ImageDigestMirrorSet), and tekton_test_image (composes the full reference). The dependent fixture extracted_kubevirt_tekton_resources was updated to use the new composed fixture.

Changes

Cohort / File(s) Summary
Tekton Test Fixture Refactoring
tests/infrastructure/tekton/conftest.py
Replaced extracted_tekton_test_image with modular fixture composition: new tekton_test_image_name_and_digest extracts image digest from CSV annotations; new konflux_base_path resolves Konflux mirror base from ImageDigestMirrorSet; new tekton_test_image builds complete reference. Updated extracted_kubevirt_tekton_resources signature to depend on tekton_test_image. Removed BREW_REGISTERY_SOURCE import. Updated error messaging to reflect CSV annotation lookup.
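The decomposition described above can be sketched as three small steps. The functions below are illustrative stand-ins for the pytest fixtures; the real fixtures run against cluster resources, and the data shapes here (comma-separated NVR annotation, konflux-builds mirror entries) are assumptions, not repository code.

```python
# Happy-path sketch of the three-fixture chain; plain functions stand in
# for the pytest fixtures, and the data shapes are assumed.

def name_and_digest_from_csv(annotations: dict, wanted: str) -> str:
    # step 1: pick 'name@sha256:...' from the CSV test-images-nvrs annotation
    return next(i.strip() for i in annotations["test-images-nvrs"].split(",") if wanted in i)

def mirror_base_from_idms(mirrors: list) -> str:
    # step 2: strip the trailing image name off a konflux-builds mirror entry
    return next(m for m in mirrors if "konflux-builds" in m).rsplit("/", 1)[0]

def full_test_image(base: str, name_and_digest: str) -> str:
    # step 3: compose the complete pull reference
    return f"{base}/{name_and_digest}"
```

In the real conftest these would be session-scoped fixtures, with the third depending on the first two.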

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Reasoning: Single file with interconnected fixture logic changes. Requires verification that: (1) CSV annotation parsing correctly extracts image digest, (2) ImageDigestMirrorSet lookup properly resolves Konflux base path, (3) fixture composition produces valid image references, and (4) dependent fixture integration works as intended. Moderate logic density with fixture dependency chains demands careful validation but limited scope.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The title clearly and specifically describes the main change: updating tekton test image extraction to use Konflux mirrors instead of Brew, which is the core objective of this PR.
Description check ✅ Passed The PR description covers all required sections with clear context about the Brew-to-Konflux migration issue and the solution provided.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Comment @coderabbitai help to get the list of available commands and usage tips.

@geetikakay
Contributor Author

/verified

@openshift-virtualization-qe-bot-3
Contributor

@coderabbitai

Test execution plan request details

CRITICAL: You MUST post an inline review comment on the first changed line of the first file.
The inline comment should contain the Test Execution Plan with the smoke tests recommendation.
Do NOT submit a formal review - just post the inline comment directly.

As an expert software testing engineer, analyze all modified files in this PR and create a targeted test execution plan.
You will post an inline review comment with the test execution plan on the first changed file.
If you fail to run or post a comment, retry.

Analysis Requirements:

  1. Examine code changes in each modified file
  2. Identify affected code paths, functions, and classes
  3. Analyze pytest-specific elements: fixtures (scope, dependencies), parametrization, markers, conftest changes
  4. Trace test dependencies through imports, shared utilities, and fixture inheritance
  5. Detect new tests introduced in the PR
  6. Smoke Test Impact Analysis: Determine if any changes could affect smoke tests by checking:
    • Changes to files/functions used by tests marked with @pytest.mark.smoke
    • Changes to fixtures or utilities imported by smoke tests
    • Changes to conftest.py files that may affect smoke test execution
    • Changes to core infrastructure code (utilities/, libs/) that smoke tests depend on

Your deliverable:
Your change request comment will be based on the following requirements:

Test Execution Plan

  • Run smoke tests: True / False (Based on your smoke test impact analysis - True if any changes may affect smoke tests)
  • path/to/test_file.py - When the entire test file needs verification
  • path/to/test_file.py::TestClass::test_method - When specific test(s) needed
  • path/to/test_file.py::test_function - When specific test(s) needed
  • -m marker - When specific marker(s) can be used to cover multiple cases.

Guidelines:

  • Include only tests directly affected by the changes
  • Use a full file path only if ALL tests in that file require verification
  • Use file path + test name if only specific tests are needed
  • If a test marker can cover multiple files/tests, provide the marker
  • Balance coverage vs over-testing - Keep descriptions minimal

Hardware-Related Checks (SR-IOV, GPU, DPDK):

When PR modifies fixtures for hardware-specific resources:

  • Collection Safety: Fixtures MUST have existence checks (return None when hardware unavailable)
  • Test Plan: MUST verify both WITH and WITHOUT hardware:
    • Run affected tests on cluster WITH hardware
    • Verify collection succeeds on cluster WITHOUT hardware
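The collection-safety requirement above can be sketched as follows (hypothetical helper name; in conftest.py this logic would sit inside a @pytest.fixture): the lookup returns None when the hardware is absent instead of raising, so test collection still succeeds on clusters without the hardware.

```python
# Sketch of the collection-safe hardware fixture pattern; the node shape
# (a dict with an 'sriov-capable' key) is an assumption for illustration.

def sriov_capable_nodes(nodes: list):
    """Return SR-IOV capable nodes, or None when the cluster has none."""
    capable = [node for node in nodes if node.get("sriov-capable")]
    # None (not an exception) lets dependent tests detect and skip cleanly
    return capable or None
```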

CRITICAL WORKFLOW COMPLETION RULES:

When responding to this test execution plan request, you MUST follow these rules EXACTLY:

  1. YOUR ONLY DELIVERABLE: Post an inline review comment containing the test execution plan on the first changed line
  2. THEN STOP IMMEDIATELY - Do NOT generate any additional response
  3. FALLBACK ONLY: If submitting the review fails after retrying, post as a regular PR comment
  4. SILENCE = SUCCESS: After successfully submitting the review, your task is complete. No confirmation needed.

ABSOLUTE PROHIBITIONS (violating these creates empty/meaningless reviews):

  • ❌ Do NOT post acknowledgment messages like "Test execution plan posted", "Review posted successfully", "I've successfully posted"
  • ❌ Do NOT mention review IDs, URLs, or confirmation of posting in the PR thread
  • ❌ Do NOT add any follow-up comments after submitting the review
  • ❌ Do NOT reply to confirm task completion
  • ❌ Do NOT explain what you did - just do it and stop

Remember: The pull request review is visible to users. Additional comments are redundant noise.

CRITICAL: Before setting "Run smoke tests: True", you MUST use shell scripts to:
- Trace the actual fixture dependency chain from smoke tests to changed fixtures
- Verify that smoke tests actually import/use changed utilities or functions
- Confirm the dependency path exists; do NOT assume based on scope or semantics
- Be conservative: Session-scoped fixtures or infrastructure-sounding names do NOT
automatically mean smoke test impact. Only flag smoke test impact when you can
demonstrate a concrete dependency path.
WRONG: "This session-scoped storage fixture might affect smoke tests"
RIGHT: "Smoke test X uses fixture Y, which depends on the changed fixture Z"

@coderabbitai
Contributor

coderabbitai bot commented Feb 26, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Fetch PR details to see what files were changed
gh pr view 3984 --json files --jq '.files[] | "\(.path) +\(.additions) -\(.deletions)"'

Length of output: 46


🏁 Script executed:

#!/bin/bash
# Get the actual diff to understand the changes
gh pr diff 3984

Length of output: 2947


🏁 Script executed:

#!/bin/bash
# Find all smoke tests to check for potential impact
rg "pytest.mark.smoke" --type=py -B2 | grep "def test_" | head -20

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check if any smoke tests use tekton-related code
rg "tekton" --type=py -l tests/ | head -20

Length of output: 281


🏁 Script executed:

#!/bin/bash
# Search for smoke tests more comprehensively
rg "@pytest\.mark\.smoke" --type=py -A2 | head -40

Length of output: 3062


🏁 Script executed:

#!/bin/bash
# Find tests that use the modified fixtures
rg "extracted_tekton_test_image|tekton_test_image|extracted_kubevirt_tekton_resources" --type=py -n -C2

Length of output: 2658


🏁 Script executed:

#!/bin/bash
# Check what tests exist in the tekton directory
fd "test_.*\.py$" tests/infrastructure/tekton/

Length of output: 118


🏁 Script executed:

#!/bin/bash
# Check for pytest.mark.smoke specifically in tekton tests
rg "pytest\.mark\.smoke" tests/infrastructure/tekton/ --type=py -n

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check if any smoke tests import or use infrastructure/tekton code
rg "from tests.infrastructure.tekton|import.*tekton" --type=py -l | xargs -I {} rg "pytest\.mark\.smoke" {} 2>/dev/null

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Post inline review comment with test execution plan on the first changed line
gh api \
  --method POST \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  /repos/RedHatQE/openshift-virtualization-tests/pulls/3984/reviews \
  -f event='COMMENT' \
  -f body='' \
  -f comments[][path]='tests/infrastructure/tekton/conftest.py' \
  -f comments[][position]=1 \
  -f comments[][body]='## Test Execution Plan

**Run smoke tests: False**

**Rationale**: No smoke tests depend on Tekton fixtures. Verified that smoke tests are located in storage/cdi_upload, storage/cdi_clone, virt/cluster/common_templates, and install_upgrade_operators/csv directories, with no imports or usage of infrastructure/tekton code.

**Tests to Execute**:
- `tests/infrastructure/tekton/test_tekton_custom_ns.py` - Uses `extracted_kubevirt_tekton_resources` fixture which depends on the refactored image extraction logic
- `tests/infrastructure/tekton/test_tekton_pipeline_disk_uploader.py` - Uses `extracted_kubevirt_tekton_resources` fixture which depends on the refactored image extraction logic

**Why**: This PR refactors Tekton test image extraction from Brew registry to Konflux mirrors. The changes affect:
1. Image source resolution (new `konflux_base_path` fixture queries ImageDigestMirrorSet)
2. Image name extraction (new `tekton_test_image_name_and_digest` fixture)
3. Final image path construction (`tekton_test_image` fixture)

Both test files depend on `extracted_kubevirt_tekton_resources` which uses the new image extraction path. These tests validate pipeline functionality that requires the test image to be correctly extracted.'

Length of output: 278


🏁 Script executed:

#!/bin/bash
# Post inline review comment with test execution plan - fix position to be numeric
gh api \
  --method POST \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  /repos/RedHatQE/openshift-virtualization-tests/pulls/3984/reviews \
  -f event='COMMENT' \
  -f body='' \
  -f comments[][path]='tests/infrastructure/tekton/conftest.py' \
  -F comments[][position]=1 \
  -f comments[][body]='## Test Execution Plan

**Run smoke tests: False**

**Rationale**: No smoke tests depend on Tekton fixtures. Verified that smoke tests are located in storage/cdi_upload, storage/cdi_clone, virt/cluster/common_templates, and install_upgrade_operators/csv directories, with no imports or usage of infrastructure/tekton code.

**Tests to Execute**:
- `tests/infrastructure/tekton/test_tekton_custom_ns.py` - Uses `extracted_kubevirt_tekton_resources` fixture which depends on the refactored image extraction logic
- `tests/infrastructure/tekton/test_tekton_pipeline_disk_uploader.py` - Uses `extracted_kubevirt_tekton_resources` fixture which depends on the refactored image extraction logic

**Why**: This PR refactors Tekton test image extraction from Brew registry to Konflux mirrors. The changes affect:
1. Image source resolution (new `konflux_base_path` fixture queries ImageDigestMirrorSet)
2. Image name extraction (new `tekton_test_image_name_and_digest` fixture)
3. Final image path construction (`tekton_test_image` fixture)

Both test files depend on `extracted_kubevirt_tekton_resources` which uses the new image extraction path. These tests validate pipeline functionality that requires the test image to be correctly extracted.'

Length of output: 1735



🧠 Learnings used
Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 0
File: :0-0
Timestamp: 2025-12-16T10:28:54.212Z
Learning: In the openshift-virtualization-tests repository, when determining smoke test impact for test execution plans, only set "Run smoke tests: True" if there is a verified, traceable dependency path from smoke tests to the changed code. Session-scoped fixtures or infrastructure-sounding changes do NOT automatically imply smoke test impact - the dependency chain must be explicitly verified using shell scripts before recommending smoke test execution.

Learnt from: jpeimer
Repo: RedHatQE/openshift-virtualization-tests PR: 0
File: :0-0
Timestamp: 2025-12-22T15:56:00.157Z
Learning: In the openshift-virtualization-tests repository, when responding to test execution plan requests from openshift-virtualization-qe-bot-3, do NOT use "REQUEST_CHANGES" review type if the PR author has already marked the PR as verified (e.g., with `/verified` command). Test execution plans are informational guides, not blocking requirements. Use COMMENT event for informational test plans, or only REQUEST_CHANGES if there are actual code issues that need to be addressed before merging.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 0
File: :0-0
Timestamp: 2025-12-16T14:00:59.076Z
Learning: In the openshift-virtualization-tests repository, when responding to test execution plan requests from openshift-virtualization-qe-bot-3, CodeRabbit must post ONLY an inline review comment on the Files Changed tab and then stop immediately without generating any follow-up comments in the PR discussion thread. No acknowledgment messages, no confirmation of posting, no explanation - silence after posting the inline review equals success. Additional comments create empty/meaningless reviews that clutter the PR.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 3062
File: conftest.py:333-333
Timestamp: 2025-12-16T20:11:03.645Z
Learning: In the openshift-virtualization-tests repository, when determining smoke test impact for changes affecting py_config["os_login_param"], follow this verification methodology: (1) Find all smoke tests: `rg "pytest.mark.smoke" --type=py -B2 | grep "def test_"`, (2) For each smoke test file, search for VM creation patterns: `rg "VirtualMachineForTests|running_vm|VirtualMachineForTestsFromTemplate|wait_for_ssh|check_ssh_connectivity"`, (3) Trace the dependency chain: smoke test → VirtualMachineForTests/running_vm() → wait_for_ssh_connectivity() (default enabled) → vm.login_params property → py_config["os_login_param"][vm.os_flavor], (4) Check utilities/virt.py for login_params usage: `rg "os_login_param|login_params" utilities/virt.py -C3`. Any smoke test creating VMs with default SSH connectivity checks (running_vm with check_ssh_connectivity=True) depends on os_login_param, even if the test doesn't directly reference it.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 3062
File: conftest.py:333-333
Timestamp: 2025-12-16T15:09:49.597Z
Learning: In the openshift-virtualization-tests repository, when conftest.py or utilities/bitwarden.py changes affect py_config["os_login_param"], smoke test impact must be determined by: (1) finding all smoke tests using `rg "pytest.mark.smoke"`, (2) checking each for VM creation patterns (VirtualMachineForTests, running_vm, VirtualMachineForTestsFromTemplate), (3) tracing whether running_vm is called with default check_ssh_connectivity=True, which accesses vm.login_params property that reads py_config["os_login_param"][vm.os_flavor]. The dependency chain is: smoke test → VM creation → running_vm → wait_for_ssh_connectivity → vm.login_params → os_login_param. Any smoke test creating VMs with SSH connectivity (the default) depends on os_login_param.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 0
File: :0-0
Timestamp: 2025-12-16T14:06:22.391Z
Learning: In the openshift-virtualization-tests repository, when posting test execution plan inline review comments using GitHub API, the full test execution plan content must go in the `comments[].body` field (which appears on Files Changed tab), NOT in the top-level `body` field (which appears in PR discussion thread). The top-level `body` field should be omitted or left empty to avoid posting redundant comments in the PR discussion thread.

Learnt from: yossisegev
Repo: RedHatQE/openshift-virtualization-tests PR: 3358
File: tests/network/sriov/test_sriov.py:21-21
Timestamp: 2026-01-07T09:52:22.771Z
Learning: For PRs that only remove or modify pytest markers (like removing pytest.mark.post_upgrade) without changing test logic in openshift-virtualization-tests, collection verification using `pytest --collect-only` is sufficient. Full test execution is not required to verify marker-only changes.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 2469
File: utilities/sanity.py:139-142
Timestamp: 2025-11-08T07:36:57.616Z
Learning: In the openshift-virtualization-tests repository, user rnetser prefers to keep refactoring PRs (like PR `#2469`) strictly focused on moving/organizing code into more granular modules without adding new functionality, error handling, or behavioral changes. Such improvements should be handled in separate PRs.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 0
File: :0-0
Timestamp: 2025-09-29T19:05:24.987Z
Learning: The test execution plan for PR `#1904` focuses on cluster-type conditional logic where nmstate functionality is bypassed on cloud clusters (Azure/AWS) but fully functional on bare-metal/PSI clusters, requiring different test strategies for each environment type.

Learnt from: yossisegev
Repo: RedHatQE/openshift-virtualization-tests PR: 3196
File: tests/network/upgrade/test_upgrade_network.py:4-4
Timestamp: 2025-12-22T16:27:44.327Z
Learning: For PRs that remove test cases (especially redundant test cleanup PRs in openshift-virtualization-tests), test collection verification (pytest --collect-only showing selected/deselected counts) is sufficient to confirm the removal was clean and the test module remains functional. Full test execution is not required for test deletion PRs.

Learnt from: EdDev
Repo: RedHatQE/openshift-virtualization-tests PR: 3612
File: tests/network/user_defined_network/ip_specification/test_ip_specification.py:10-10
Timestamp: 2026-01-26T20:29:54.623Z
Learning: In the RedHatQE/openshift-virtualization-tests repository, when introducing STD (Standard Test Definition) PRs that define test case structure without implementation, it's acceptable and intentional to use `__test__ = False` to block pytest collection until the tests are fully implemented in a follow-up PR. The STD workflow involves: (1) defining test structure, markers, and documentation first, (2) implementing the actual test logic and removing the collection blocker in a subsequent PR.

Learnt from: servolkov
Repo: RedHatQE/openshift-virtualization-tests PR: 1776
File: libs/net/node_network.py:25-31
Timestamp: 2025-08-20T23:43:28.117Z
Learning: In the RedHatQE/openshift-virtualization-tests project, servolkov's team always uses bare metal (BM) clusters with IPv4 setup in their testing environment, making defensive checks for IPv4 data presence potentially redundant in their networking code.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 3371
File: scripts/tests_analyzer/compare_coderabbit_decisions.py:199-289
Timestamp: 2026-01-13T10:06:14.822Z
Learning: In the openshift-virtualization-tests repository, user rnetser prefers to keep pagination loops inline rather than extracting them into generic helper functions when the loops have different URL patterns and unique post-processing logic, as the inline approach improves readability and makes each endpoint's behavior more explicit.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 3228
File: .coderabbit.yaml:30-41
Timestamp: 2026-01-05T10:33:55.037Z
Learning: In the openshift-virtualization-tests repository, user rnetser prefers minimal pre-merge checks in CodeRabbit configuration: only docstrings enforcement (80% threshold) is needed, not title or description checks.

Learnt from: jpeimer
Repo: RedHatQE/openshift-virtualization-tests PR: 1160
File: tests/storage/storage_migration/test_mtc_storage_class_migration.py:165-176
Timestamp: 2025-06-17T07:45:37.776Z
Learning: In the openshift-virtualization-tests repository, user jpeimer prefers explicit fixture parameters over composite fixtures in test methods, even when there are many parameters, as they find this approach more readable and maintainable for understanding test dependencies.

Learnt from: jpeimer
Repo: RedHatQE/openshift-virtualization-tests PR: 3571
File: tests/storage/storage_migration/utils.py:158-167
Timestamp: 2026-01-25T13:18:26.819Z
Learning: In the openshift-virtualization-tests repository, user jpeimer prefers to avoid nitpicky style changes (e.g., removing `.keys()` from dict membership checks) because verifying every change is expensive. Be cautious about suggesting low-impact stylistic improvements that require verification overhead.

Learnt from: akri3i
Repo: RedHatQE/openshift-virtualization-tests PR: 1210
File: tests/virt/cluster/general/mass_machine_type_transition_tests/conftest.py:24-64
Timestamp: 2025-06-23T19:28:20.281Z
Learning: In OpenShift Virtualization mass machine type transition tests, the machine type glob pattern "pc-q35-rhel8.*.*" is intentionally hard-coded in the kubevirt_api_lifecycle_automation_job as it's used only once for this specific test case, with plans to update it in the future if the job needs to support other machine types.

Learnt from: RoniKishner
Repo: RedHatQE/openshift-virtualization-tests PR: 1411
File: utilities/os_utils.py:246-279
Timestamp: 2025-07-22T17:13:59.166Z
Learning: In the RedHatQE/openshift-virtualization-tests repository, CentOS preferences follow the format "centos-stream<version>" (e.g., "centos-stream9", "centos-stream10"). The generate_instance_type_centos_os_matrix function correctly uses regex to extract numeric versions and constructs the latest version string in the same format as the input preferences for proper comparison.

Learnt from: yossisegev
Repo: RedHatQE/openshift-virtualization-tests PR: 3873
File: tests/network/localnet/test_non_udn_localnet.py:7-9
Timestamp: 2026-02-25T10:52:23.378Z
Learning: In the RedHatQE/openshift-virtualization-tests repository, networking infrastructure requirements such as nmstate, localnet bridge mappings, and NIC availability are NOT considered special/platform-specific infrastructure that would warrant a tier3 marker. These are standard test environment capabilities in this repository's test environment, and tests requiring them do not automatically qualify as tier3 unless they involve other truly platform-specific/complex/time-consuming characteristics beyond standard network test infrastructure.

Learnt from: akri3i
Repo: RedHatQE/openshift-virtualization-tests PR: 1210
File: tests/virt/cluster/general/mass_machine_type_transition_tests/conftest.py:83-97
Timestamp: 2025-06-23T19:19:31.961Z
Learning: In OpenShift Virtualization mass machine type transition tests, the kubevirt_api_lifecycle_automation_job requires cluster-admin privileges to function properly, as confirmed by the test maintainer akri3i.

Learnt from: servolkov
Repo: RedHatQE/openshift-virtualization-tests PR: 2838
File: .github/workflows/net-utils-builder-staging.yml:37-37
Timestamp: 2025-11-25T01:56:54.902Z
Learning: In the openshift-virtualization-tests repository, when renaming container images that are built and used by GitHub Actions workflows, the changes must be done sequentially: first merge the workflow files (.github/workflows/) that update the image name in the CI/CD pipelines, then update the code references (like constants.py and manifest files) in a follow-up PR. This prevents the old workflow from running with mismatched image names during the transition.

Learnt from: vsibirsk
Repo: RedHatQE/openshift-virtualization-tests PR: 2045
File: tests/virt/cluster/vm_lifecycle/conftest.py:46-47
Timestamp: 2025-09-15T06:49:53.478Z
Learning: In the openshift-virtualization-tests repo, large fixture refactoring efforts like the golden image data source migration are handled incrementally by directory/team ownership. The virt/cluster directory is handled separately from virt/node, tests/infra, tests/storage, etc., with each area managed by relevant teams in follow-up PRs.

Learnt from: vamsikrishna-siddu
Repo: RedHatQE/openshift-virtualization-tests PR: 2199
File: tests/storage/test_online_resize.py:108-113
Timestamp: 2025-09-28T14:43:07.181Z
Learning: In the openshift-virtualization-tests repo, PR `#2199` depends on PR `#2139` which adds the OS_FLAVOR attribute to the Images.Cirros class, making Images.Cirros.OS_FLAVOR available for conditional logic in tests.

Learnt from: vamsikrishna-siddu
Repo: RedHatQE/openshift-virtualization-tests PR: 2199
File: tests/storage/test_online_resize.py:108-113
Timestamp: 2025-09-28T14:43:07.181Z
Learning: In the openshift-virtualization-tests repo, PR `#2199` depends on PR `#2139` which adds architecture-specific OS_FLAVOR attributes to the Images.Cirros class (OS_FLAVOR_CIRROS for x86_64/ARM64, OS_FLAVOR_FEDORA for s390x), enabling conditional logic based on the underlying OS flavor in tests.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/infrastructure/tekton/conftest.py`:
- Around line 119-123: Guard access to the CSV annotation before splitting:
instead of directly indexing
csv_scope_session.instance.metadata.annotations["test-images-nvrs"], fetch the
annotations dict and use .get("test-images-nvrs") (or check for presence) and
raise a clear ValueError if it's missing or empty so tests fail with a helpful
message; then split and iterate the returned string as before and match against
KUBEVIRT_TEKTON_AVAILABLE_TASKS_TEST. Ensure you update the logic near
csv_scope_session and the loop that references
KUBEVIRT_TEKTON_AVAILABLE_TASKS_TEST so an absent or empty annotation produces a
deterministic, descriptive error rather than a KeyError.
- Around line 127-132: The current loop in ImageDigestMirrorSet.get(...) returns
the first mirror containing "konflux-builds", which is nondeterministic;
instead, gather all candidate mirrors from idms.instance.spec.imageDigestMirrors
that contain "konflux-builds", derive their base paths (using the same
mirror.rsplit("/", 1)[0] logic), deduplicate them, sort the list to ensure a
deterministic order, and then select the first entry from that sorted list (or
raise the existing ValueError if empty). Update the code around
ImageDigestMirrorSet.get, the mirror extraction logic, and the return to use
this deterministic selection.
- Around line 118-137: Add concise Google-style docstrings to the three new
fixtures tekton_test_image_name_and_digest, konflux_base_path, and
tekton_test_image: for each fixture include a one-line summary, an Args section
describing any fixture parameters (e.g., csv_scope_session, admin_client) and
what input the fixture reads, and a Returns section describing the exact return
format (e.g., "str: image name and digest in 'name@sha256:...' format", "str:
base mirror path", "str: full pullspec
'<registry>/<path>/<image>@sha256:<digest>'"). Place the docstrings immediately
above each fixture definition and keep them in Google docstring style so
downstream tests/users understand the contract and return formats.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 547377b and 8841bb5.

📒 Files selected for processing (1)
  • tests/infrastructure/tekton/conftest.py

Comment on lines +118 to +137
def tekton_test_image_name_and_digest(csv_scope_session):
    for image in csv_scope_session.instance.metadata.annotations["test-images-nvrs"].split(","):
        if KUBEVIRT_TEKTON_AVAILABLE_TASKS_TEST in image:
            return image.strip()
    raise ValueError(f"{KUBEVIRT_TEKTON_AVAILABLE_TASKS_TEST} not found in CSV 'test-images-nvrs' annotation.")


@pytest.fixture(scope="session")
def konflux_base_path(admin_client):
    for idms in ImageDigestMirrorSet.get(client=admin_client):
        for mirror_entry in idms.instance.spec.imageDigestMirrors:
            for mirror in mirror_entry.get("mirrors", []):
                if "konflux-builds" in mirror:
                    return mirror.rsplit("/", 1)[0]
    raise ValueError("Konflux mirror not found in ImageDigestMirrorSet.")


@pytest.fixture(scope="session")
def tekton_test_image(tekton_test_image_name_and_digest, konflux_base_path):
    return f"{konflux_base_path}/{tekton_test_image_name_and_digest}"
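The contract these three fixtures compose can be sketched as plain functions. This is an illustrative, standalone version only: the helper names and the sample annotation and mirror values are made up for the sketch, not taken from the real cluster or CSV.

```python
# Hypothetical sketch of the fixtures' contract: pick the matching test-image
# NVR from the CSV 'test-images-nvrs' annotation, then compose the final
# pullspec from a discovered Konflux mirror base path.
TEKTON_TESTS_MARKER = "kubevirt-tekton-tasks-tests"


def pick_test_image(test_images_nvrs: str) -> str:
    """Return the matching 'name@sha256:...' entry from a comma-separated annotation."""
    for image in test_images_nvrs.split(","):
        if TEKTON_TESTS_MARKER in image:
            return image.strip()
    raise ValueError(f"{TEKTON_TESTS_MARKER} not found in annotation")


def compose_pullspec(base_path: str, name_and_digest: str) -> str:
    """Join the mirror base path with the image name@digest."""
    return f"{base_path}/{name_and_digest}"


# Sample values for illustration only.
annotation = "foo@sha256:aaa, kubevirt-tekton-tasks-tests@sha256:bbb"
image = pick_test_image(annotation)
print(compose_pullspec("quay.io/redhat-user-workloads/konflux-builds", image))
# → quay.io/redhat-user-workloads/konflux-builds/kubevirt-tekton-tasks-tests@sha256:bbb
```

The session scope mirrors the fixtures above: the annotation and mirror set are cluster-level state, so one lookup per test session is enough.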

🛠️ Refactor suggestion | 🟠 Major

MEDIUM: Add Google-style docstrings for new fixtures

These fixtures introduce non-obvious contracts (name@digest, discovered mirror base path, and composed pullspec). Short Google-format docstrings help prevent misuse in downstream fixtures/tests.

Example pattern
@pytest.fixture(scope="session")
def tekton_test_image(...):
    """Compose the full Konflux pullspec for Tekton test resources.

    Args:
        ...

    Returns:
        str: Full pullspec in `<registry>/<path>/<image>@sha256:<digest>` format.
    """

As per coding guidelines, "Google-format docstrings REQUIRED for all public functions with non-obvious return values OR side effects".

🧰 Tools
🪛 Ruff (0.15.2)

[warning] 122-122: Avoid specifying long messages outside the exception class

(TRY003)


[warning] 132-132: Avoid specifying long messages outside the exception class

(TRY003)


Comment on lines +119 to +123
    for image in csv_scope_session.instance.metadata.annotations["test-images-nvrs"].split(","):
        if KUBEVIRT_TEKTON_AVAILABLE_TASKS_TEST in image:
            return image.strip()
    raise ValueError(f"{KUBEVIRT_TEKTON_AVAILABLE_TASKS_TEST} not found in CSV 'test-images-nvrs' annotation.")


⚠️ Potential issue | 🟠 Major

HIGH: Guard CSV annotation access before splitting

Line 119 dereferences ["test-images-nvrs"] directly. If the annotation is missing or empty, this fails with a KeyError instead of a clear setup error, which makes CI triage harder.

Suggested fix
 @pytest.fixture(scope="session")
 def tekton_test_image_name_and_digest(csv_scope_session):
-    for image in csv_scope_session.instance.metadata.annotations["test-images-nvrs"].split(","):
-        if KUBEVIRT_TEKTON_AVAILABLE_TASKS_TEST in image:
-            return image.strip()
+    annotations = csv_scope_session.instance.metadata.annotations or {}
+    test_images_nvrs = annotations.get("test-images-nvrs")
+    if not test_images_nvrs:
+        raise ValueError("CSV annotation 'test-images-nvrs' is missing or empty.")
+
+    for test_image in test_images_nvrs.split(","):
+        if KUBEVIRT_TEKTON_AVAILABLE_TASKS_TEST in test_image:
+            return test_image.strip()
     raise ValueError(f"{KUBEVIRT_TEKTON_AVAILABLE_TASKS_TEST} not found in CSV 'test-images-nvrs' annotation.")
🧰 Tools
🪛 Ruff (0.15.2)

[warning] 122-122: Avoid specifying long messages outside the exception class

(TRY003)


Comment on lines +127 to +132
    for idms in ImageDigestMirrorSet.get(client=admin_client):
        for mirror_entry in idms.instance.spec.imageDigestMirrors:
            for mirror in mirror_entry.get("mirrors", []):
                if "konflux-builds" in mirror:
                    return mirror.rsplit("/", 1)[0]
    raise ValueError("Konflux mirror not found in ImageDigestMirrorSet.")

⚠️ Potential issue | 🟠 Major

HIGH: Make Konflux mirror selection deterministic

Line 131 returns the first konflux-builds mirror found. If multiple mirror entries exist, ordering can pick the wrong base path and break the final pullspec composed on Line 137 (and extraction on Line 153).

Suggested fix
 @pytest.fixture(scope="session")
 def konflux_base_path(admin_client):
-    for idms in ImageDigestMirrorSet.get(client=admin_client):
-        for mirror_entry in idms.instance.spec.imageDigestMirrors:
+    konflux_base_paths = set()
+    for image_digest_mirror_set in ImageDigestMirrorSet.get(client=admin_client):
+        for mirror_entry in image_digest_mirror_set.instance.spec.imageDigestMirrors or []:
             for mirror in mirror_entry.get("mirrors", []):
                 if "konflux-builds" in mirror:
-                    return mirror.rsplit("/", 1)[0]
-    raise ValueError("Konflux mirror not found in ImageDigestMirrorSet.")
+                    konflux_base_paths.add(mirror.rsplit("/", 1)[0])
+
+    if len(konflux_base_paths) != 1:
+        raise ValueError(
+            f"Expected exactly one Konflux mirror base path, found: {sorted(konflux_base_paths)}"
+        )
+    return next(iter(konflux_base_paths))
🧰 Tools
🪛 Ruff (0.15.2)

[warning] 132-132: Avoid specifying long messages outside the exception class

(TRY003)

