Update get_tests_cluster_markers to collect markers by type #3907
rnetser wants to merge 6 commits into RedHatQE:main from
Conversation
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Report bugs in Issues. Welcome! 🎉 This pull request will be automatically processed with the following features:
🔄 Automatic Actions
📋 Available Commands: PR status management, review & approval, testing & validation, container operations, cherry-pick operations, label management
✅ Merge Requirements: this PR will be automatically approved when the required conditions are met
📊 Review Process: approvers and reviewers, available labels
💡 Tips: for more information, please refer to the project documentation or contact the maintainers.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@utilities/pytest_utils.py`:
- Around line 297-313: After parsing, add a post-parse check that inspects
section_headers and markers_by_section and emits a warning or raises an
assertion if any expected section (keys from section_headers values) has an
empty list in markers_by_section before building result; specifically reference
the existing variables section_headers and markers_by_section (and the resulting
dict `result`) and log a clear warning indicating which sections collected zero
markers so a drift in the exact pytest.ini comment strings is detected rather
than silently returning an empty result.
- Line 266: The function signature for get_tests_cluster_markers uses an
unparameterized list (items: list) which breaks mypy strict; update the
signature to use a concrete item type (e.g., items: list[pytest.Item]) and
ensure pytest is imported in the module (or referenced via typing if needed) so
the annotation resolves under strict mode; keep filepath: str | None as-is (or
change to Optional[str] if you prefer consistent typing style) and run mypy to
verify no other missing generic type parameters in the file.
- Line 292: The call open("pytest.ini") in utilities/pytest_utils.py is
CWD-dependent; change it to open the file using a path anchored to the module
file (use Path(__file__).parent / "pytest.ini") so the file is located relative
to pytest_utils.py regardless of working directory—e.g., construct anchored_path
= Path(__file__).parent / "pytest.ini" and open(anchored_path, "r") (optionally
specify encoding) wherever the current open("pytest.ini") is used.
- Add proper type annotation: items: list[pytest.Item]
- Use pathlib to anchor pytest.ini path to module location
- Add warning log when no markers found in any section
- Fix unit test assertion for dict return type
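The three review findings above could be sketched together as one function. This is a hedged sketch, not the repository's actual code: the pytest.ini header strings ("## Configuration markers", "## Hardware markers"), the section names, and the "marker: description" line format are assumptions made for illustration.

```python
from __future__ import annotations

import logging
from pathlib import Path
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import pytest  # only needed for the type annotation

LOGGER = logging.getLogger(__name__)

# Anchor pytest.ini to this module's directory instead of the current working dir.
PYTEST_INI_PATH = Path(__file__).parent / "pytest.ini"


def get_tests_cluster_markers(
    items: list[pytest.Item], filepath: str | None = None
) -> dict[str, list[str]]:
    # Assumed comment headers; the real strings live in the repo's pytest.ini.
    section_headers = {
        "## Configuration markers": "configuration",
        "## Hardware markers": "hardware",
    }
    markers_by_section: dict[str, list[str]] = {name: [] for name in section_headers.values()}

    path = Path(filepath) if filepath else PYTEST_INI_PATH
    current_section: str | None = None
    for raw_line in path.read_text(encoding="utf-8").splitlines():
        stripped = raw_line.strip()
        if stripped in section_headers:
            current_section = section_headers[stripped]
        elif not stripped or stripped.startswith("##"):
            # Only blank lines or double-hash headers terminate a section.
            current_section = None
        elif current_section and not stripped.startswith("#"):
            # "marker_name: description" lines register the marker for the section.
            markers_by_section[current_section].append(stripped.split(":", 1)[0].strip())

    # Post-parse check: warn when an expected section collected zero markers,
    # so drift in the pytest.ini comment strings is detected rather than
    # silently returning an empty result.
    for section, markers in markers_by_section.items():
        if not markers:
            LOGGER.warning("No markers collected for section %r; check pytest.ini headers", section)

    # Keep only markers that actually appear on the collected test items.
    test_markers = {marker.name for item in items for marker in item.iter_markers()}
    return {
        section: [m for m in markers if m in test_markers]
        for section, markers in markers_by_section.items()
    }
```

With a pytest.ini containing the assumed headers, calling this during collection would return, e.g., `{"configuration": ["ipv4"], "hardware": ["gpu"]}` for items carrying those markers.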
…zation-tests into markers-operator
/verified (with marker cmdline option)
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@ Coverage Diff @@
## main #3907 +/- ##
==========================================
+ Coverage 98.56% 98.57% +0.01%
==========================================
Files 25 25
Lines 2297 2318 +21
==========================================
+ Hits 2264 2285 +21
Misses 33 33
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Test execution plan request details

CRITICAL: You MUST post an inline review comment on the first changed line of the first file. As an expert software testing engineer, analyze all modified files in this PR and create a targeted test execution plan.

Analysis Requirements:
Your deliverable: Test Execution Plan
Guidelines:
Hardware-Related Checks (SR-IOV, GPU, DPDK): when the PR modifies fixtures for hardware-specific resources.
CRITICAL WORKFLOW COMPLETION RULES: when responding to this test execution plan request, you MUST follow these rules EXACTLY.
ABSOLUTE PROHIBITIONS (violating these creates empty/meaningless reviews).
Remember: the pull request review is visible to users; additional comments are redundant noise.
CRITICAL: Before setting "Run smoke tests: True", you MUST use shell scripts to:
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
utilities/unittests/test_pytest_utils.py (1)
886-916: 🧹 Nitpick | 🔵 Trivial | LOW: Tests validate markers via log output strings rather than the return value.
The refactored get_tests_cluster_markers now returns a structured dict[str, list[str]], but all tests in TestGetTestsClusterMarkers assert against stringified logger output (e.g., "ipv4" in logged_markers). This means the tests don't verify:
- That markers land in the correct section key (e.g., "configuration" vs "hardware")
- The actual return value structure
Since the function now has a meaningful return value, asserting directly on it would be more robust and readable:

```python
result = get_tests_cluster_markers(items)
assert "configuration" in result, "Expected 'configuration' section in result"
assert "ipv4" in result["configuration"], "Expected 'ipv4' in configuration markers"
```

Not blocking; the existing tests do catch regressions in marker detection. But direct return-value assertions would catch section-assignment bugs that string matching cannot.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@utilities/unittests/test_pytest_utils.py` around lines 886 - 916, Update the test case test_get_tests_cluster_markers_success in TestGetTestsClusterMarkers to assert on the actual return value of get_tests_cluster_markers instead of inspecting LOGGER output: call get_tests_cluster_markers(items), capture the returned dict, then assert that expected section keys (e.g., "configuration" and "hardware") exist and that markers like "ipv4", "dpdk", and "gpu" appear under their correct section lists; keep the mocked file content and marker setup but remove or avoid relying on mock_logger.info string matching so the test verifies structure and section assignment directly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@utilities/pytest_utils.py`:
- Around line 299-312: The section-termination check is too broad—change the
condition that clears current_section from testing stripped.startswith("#") to
stripped.startswith("##") so only double-hash section headers end a section;
update the logic around the loop that uses section_headers, current_section,
stripped, test_markers, and markers_by_section to use if not stripped or
stripped.startswith("##") instead of the current single-# check to allow
single-# comments inside marker sections while still ending on "##" headers.
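The suggested "##"-only termination rule can be shown in isolation. In this hedged sketch the section header strings, section names, and the "marker: description" line format are illustrative assumptions, not the repository's actual values:

```python
def parse_marker_sections(lines: list[str], section_headers: dict[str, str]) -> dict[str, list[str]]:
    """Collect marker names under each "## ..." section header.

    Only blank lines or double-hash headers end a section; single-# comments
    inside a section are allowed and simply skipped.
    """
    markers_by_section: dict[str, list[str]] = {name: [] for name in section_headers.values()}
    current_section: str | None = None
    for line in lines:
        stripped = line.strip()
        if stripped in section_headers:
            current_section = section_headers[stripped]
        elif not stripped or stripped.startswith("##"):
            # Double-hash header or blank line terminates the current section.
            current_section = None
        elif current_section and not stripped.startswith("#"):
            # "marker_name: description" — keep only the name.
            markers_by_section[current_section].append(stripped.split(":", 1)[0].strip())
    return markers_by_section
```

A single-# comment between marker lines no longer ends the section, while a blank line or the next "##" header still does.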
---
Outside diff comments:
In `@utilities/unittests/test_pytest_utils.py`:
- Around line 886-916: Update the test case
test_get_tests_cluster_markers_success in TestGetTestsClusterMarkers to assert
on the actual return value of get_tests_cluster_markers instead of inspecting
LOGGER output: call get_tests_cluster_markers(items), capture the returned dict,
then assert that expected section keys (e.g., "configuration" and "hardware")
exist and that markers like "ipv4", "dpdk", and "gpu" appear under their correct
section lists; keep the mocked file content and marker setup but remove or avoid
relying on mock_logger.info string matching so the test verifies structure and
section assignment directly.
review the pr; all comments addressed
🧠 Learnings used. ✅ Actions performed: review triggered.
/build-and-push-container --build-arg OPENSHIFT_PYTHON_WRAPPER_COMMIT=a85d7c99e8a76a598d90aace2a0787398735c524

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-3907 published

/verified ran with the marker collector on a few options
/approve
Short description:
Group test-required configs by marker types
Assisted-by: Claude code
More details:
What this PR does / why we need it:
Which issue(s) this PR fixes:
Special notes for reviewer:
jira-ticket:
Summary by CodeRabbit
New Features
Chores
Tests