
added test cases for requirements to add coverage #251

Draft
Saumya-R wants to merge 4 commits into eclipse-score:main from qorix-group:saumya_cit_adding_missing_cases_2

Conversation

@Saumya-R
Contributor

No description provided.

@Saumya-R Saumya-R marked this pull request as draft February 24, 2026 13:31
@github-actions

github-actions bot commented Feb 24, 2026

License Check Results

🚀 The license check job ran with the Bazel command:

bazel run //:license-check

Status: ⚠️ Needs Review

Click to expand output
[License Check Output]
2026/02/25 18:39:09 Downloading https://releases.bazel.build/8.4.2/release/bazel-8.4.2-linux-x86_64...
Extracting Bazel installation...
Starting local Bazel server (8.4.2) and connecting to it...
INFO: Invocation ID: 7eb884d6-71cf-453b-a54f-96bd02fbba33
Computing main repo mapping:
WARNING: For repository 'rules_python', the root module requires module version rules_python@1.4.1, but got rules_python@1.8.3 in the resolved dependency graph. Please update the version in your MODULE.bazel or set --check_direct_dependencies=off
WARNING: For repository 'score_rust_policies', the root module requires module version score_rust_policies@0.0.4, but got score_rust_policies@0.0.5 in the resolved dependency graph. Please update the version in your MODULE.bazel or set --check_direct_dependencies=off
WARNING: For repository 'rules_cc', the root module requires module version rules_cc@0.1.2, but got rules_cc@0.2.16 in the resolved dependency graph. Please update the version in your MODULE.bazel or set --check_direct_dependencies=off
WARNING: For repository 'googletest', the root module requires module version googletest@1.17.0.bcr.1, but got googletest@1.17.0.bcr.2 in the resolved dependency graph. Please update the version in your MODULE.bazel or set --check_direct_dependencies=off
WARNING: For repository 'google_benchmark', the root module requires module version google_benchmark@1.9.4, but got google_benchmark@1.9.5 in the resolved dependency graph. Please update the version in your MODULE.bazel or set --check_direct_dependencies=off
Loading: 3 packages loaded
Analyzing: target //:license-check (4 packages loaded, 0 targets configured)
Analyzing: target //:license-check (71 packages loaded, 10 targets configured)
Analyzing: target //:license-check (91 packages loaded, 10 targets configured)
Analyzing: target //:license-check (147 packages loaded, 1082 targets configured)
Analyzing: target //:license-check (160 packages loaded, 2315 targets configured)
Analyzing: target //:license-check (170 packages loaded, 5482 targets configured)
Analyzing: target //:license-check (172 packages loaded, 5510 targets configured)
Analyzing: target //:license-check (197 packages loaded, 5714 targets configured)
[12 / 17] Executing genrule //:filtered_cargo_lock; 0s disk-cache, processwrapper-sandbox
INFO: From Generating Dash formatted dependency file ...:
INFO: Successfully converted 66 packages from Cargo.lock to bazel-out/k8-fastbuild/bin/formatted.txt
Analyzing: target //:license-check (199 packages loaded, 5722 targets configured)
[14 / 17] JavaToolchainCompileClasses external/rules_java+/toolchains/platformclasspath_classes; 1s disk-cache, processwrapper-sandbox
[16 / 17] [Prepa] Building license.check.license_check.jar ()
[17 / 17] no actions running
Analyzing: target //:license-check (200 packages loaded, 10056 targets configured)
INFO: Analyzed target //:license-check (202 packages loaded, 10170 targets configured).
INFO: Found 1 target...
Target //:license.check.license_check up-to-date:
  bazel-bin/license.check.license_check
  bazel-bin/license.check.license_check.jar
INFO: Elapsed time: 47.684s, Critical Path: 3.99s
INFO: 17 processes: 12 internal, 4 processwrapper-sandbox, 1 worker.
INFO: Build completed successfully, 17 total actions
INFO: Running command line: bazel-bin/license.check.license_check ./formatted.txt <args omitted>
usage: org.eclipse.dash.licenses.cli.Main [-batch <int>] [-cd <url>]
       [-confidence <int>] [-ef <url>] [-excludeSources <sources>] [-help] [-lic
       <url>] [-project <shortname>] [-repo <url>] [-review] [-summary <file>]
       [-timeout <seconds>] [-token <token>]

@github-actions

The created documentation from the pull request is available at: docu-html

Copilot AI left a comment

Pull request overview

This PR expands the CIT (Component Integration Test) suite for both Rust and C++ KVS implementations, adding new scenarios and Python test cases intended to improve requirements coverage (constraints, snapshot behaviors, value length, etc.), plus a requirements coverage summary document.

Changes:

  • Added new CIT scenarios for constraints (compile/runtime + permission behavior) and additional snapshot behaviors in both Rust and C++.
  • Added supported datatype “value length” scenarios and new/updated Python requirement-mapped tests.
  • Added REQUIREMENTS_COVERAGE.md summarizing requirement-to-test mapping.

Reviewed changes

Copilot reviewed 15 out of 15 changed files in this pull request and generated 17 comments.

Show a summary per file
File Description
tests/test_scenarios/rust/src/cit/supported_datatypes.rs Adds ValueLength Rust scenario and registers it in supported_datatypes group.
tests/test_scenarios/rust/src/cit/snapshots.rs Adds Rust snapshot scenarios: ID assignment, deletion/rotation, no-versioning inspection.
tests/test_scenarios/rust/src/cit/mod.rs Registers new constraints scenario group in Rust CIT root group.
tests/test_scenarios/rust/src/cit/constraints.rs New Rust constraints/permission scenarios.
tests/test_scenarios/cpp/src/cit/supported_datatypes.cpp Adds C++ ValueLength scenario and JSON parsing helpers.
tests/test_scenarios/cpp/src/cit/snapshots.cpp Adds C++ snapshot scenarios: ID assignment, deletion, no-versioning file inspection.
tests/test_scenarios/cpp/src/cit/constraints.hpp Declares new C++ constraints scenario group.
tests/test_scenarios/cpp/src/cit/constraints.cpp Implements C++ constraints/permission scenarios.
tests/test_scenarios/cpp/src/cit/cit.cpp Registers C++ constraints group in CIT root group.
tests/test_cases/tests/test_cit_supported_datatypes.py Updates requirement mapping and adds value-length boundary tests.
tests/test_cases/tests/test_cit_snapshots.py Updates requirement mapping and adds snapshot ID/deletion/no-versioning tests.
tests/test_cases/tests/test_cit_persistency.py Adds skipped placeholders for additional persistency requirements.
tests/test_cases/tests/test_cit_default_values.py Adds many new default-values tests and expands requirement annotations.
tests/test_cases/tests/test_cit_constraints.py New Python test suite for constraints + permission requirements.
tests/test_cases/REQUIREMENTS_COVERAGE.md New documentation summarizing requirements coverage.


Comment on lines +684 to +687
# After setting to same value as default: should NOT be default
# Setting the value explicitly to 111.1 (same as default)
# The test scenario sets it to 432.1, but we're testing the concept
assert logs[1].value_is_default == "Ok(false)"
Copilot AI Feb 24, 2026

This test is described as “setting a value explicitly to the same value as its default”, but the underlying cit.default_values.default_values scenario sets test_number to 432.1 (see scenario implementation), not to the provided default 111.1. As written, the test doesn’t actually validate the described behavior; it only checks value_is_default becomes false for any set. Consider adding a dedicated scenario that sets the value equal to the default, or adjust the scenario/test accordingly.

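A dedicated "set the value equal to its default" scenario could be exercised by a test along these lines. This is a hedged sketch: the `LogEntry` shape models only the `value_is_default` field the existing tests assert on, and the two-phase log layout (entry before the set, entry after) is an assumption, not the suite's actual API.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the suite's log entries; only the field the
# existing tests assert on (value_is_default) is modeled here.
@dataclass
class LogEntry:
    value_is_default: str

def check_set_equal_to_default(logs: list[LogEntry]) -> list[str]:
    """Intent: after explicitly setting a key to the SAME value as its
    default, the KVS should report value_is_default == Ok(false)."""
    assert len(logs) >= 2, "expected a log entry before and after the set"
    assert logs[0].value_is_default == "Ok(true)"   # untouched key
    assert logs[1].value_is_default == "Ok(false)"  # after set-to-default
    return [e.value_is_default for e in logs[:2]]
```

A scenario that sets `test_number` to its default `111.1` (rather than `432.1`) would make the second assertion actually test the described behavior instead of any arbitrary set.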


@add_test_properties(
fully_verifies=["comp_req__persistency__value_length_v2"],
Copilot AI Feb 24, 2026

TestValueLength is marked as fully_verifies value-length enforcement, but the assertions later accept either outcome for values >1024 bytes. Either enforce rejection for >1024 in the test (and/or scenario logs), or downgrade this to partial/xfail until the implementation enforces the limit.

Suggested change
fully_verifies=["comp_req__persistency__value_length_v2"],
partially_verifies=["comp_req__persistency__value_length_v2"],

Comment on lines +623 to +626
if len(logs) > 0:
# Verify default value is accessible
assert logs[0].value_is_default == "Ok(true)"
assert "Err" not in logs[0].default_value
Copilot AI Feb 24, 2026

The if len(logs) > 0 guard means this loop can silently do no assertions if the scenario never logs these keys. Since cit.default_values.default_values currently only operates on test_number/test_number_*, this test likely doesn’t validate the intended data types at all. Either extend the scenario to query these keys, or assert that the expected logs are present.

Suggested change
if len(logs) > 0:
# Verify default value is accessible
assert logs[0].value_is_default == "Ok(true)"
assert "Err" not in logs[0].default_value
# Ensure we actually observed the key in the logs
assert logs, f"Expected at least one log entry for key '{key}'"
# Verify default value is accessible
assert logs[0].value_is_default == "Ok(true)"
assert "Err" not in logs[0].default_value

Comment on lines +684 to +687
# After setting to same value as default: should NOT be default
# Setting the value explicitly to 111.1 (same as default)
# The test scenario sets it to 432.1, but we're testing the concept
assert logs[1].value_is_default == "Ok(false)"
Copilot AI Feb 24, 2026

The test intent is “set value equal to default”, but the underlying cit.default_values.default_values scenario sets test_number to 432.1, not to the provided default 111.1. As written, the test only proves that setting any non-default value makes value_is_default false. Consider adding a dedicated scenario that sets the value to the default, or adjust the existing scenario/test.

Comment on lines +865 to +868
if len(logs) > 0:
# Should successfully load these values
assert logs[0].value_is_default == "Ok(true)"
assert "Err" not in logs[0].default_value
Copilot AI Feb 24, 2026

This loop can become non-asserting due to the if len(logs) > 0 guard. If the scenario doesn’t actually read these keys, the test will pass without verifying special numeric defaults. Prefer asserting log presence (and/or extend the scenario to query these keys).

Suggested change
if len(logs) > 0:
# Should successfully load these values
assert logs[0].value_is_default == "Ok(true)"
assert "Err" not in logs[0].default_value
# Ensure that each special numeric key was actually logged
assert logs, f"Expected at least one log entry for key '{key}'"
# Should successfully load these values
assert logs[0].value_is_default == "Ok(true)"
assert "Err" not in logs[0].default_value

"comp_req__persistency__value_data_types_v2",
"comp_req__persistency__key_uniqueness_v2",
],
fully_verifies=["comp_req__persistency__value_data_types_v2"],
Copilot AI Feb 24, 2026

TestSupportedDatatypesKeys is marked as fully_verifies=["comp_req__persistency__value_data_types_v2"], but this test only exercises key handling and never stores/reads typed values. This requirement mapping looks incorrect and may skew coverage; consider removing value_data_types_v2 here.

Suggested change
fully_verifies=["comp_req__persistency__value_data_types_v2"],
fully_verifies=[],

@@ -56,7 +57,9 @@ def test_ok(self, results: ScenarioResult, logs_info_level: LogContainer) -> Non
partially_verifies=[
"comp_req__persistency__key_encoding_v2",
"comp_req__persistency__value_data_types_v2",
Copilot AI Feb 24, 2026

TestSupportedDatatypesValues lists comp_req__persistency__value_data_types_v2 in both partially_verifies and fully_verifies. This is inconsistent for traceability; it should only appear in one list (likely fully_verifies only).

Suggested change
"comp_req__persistency__value_data_types_v2",

Comment on lines 46 to 49
"comp_req__persistency__snapshot_id_v2",
"comp_req__persistency__snapshot_rotate_v2",
"comp_req__persistency__snapshot_restore_v2",
"comp_req__persistency__snapshot_delete_v2"
Copilot AI Feb 24, 2026

The partially_verifies list here includes snapshot ID/rotate/restore/delete requirements, but this test class only validates snapshot count behavior. Narrow the requirement list to what is actually exercised to avoid overstating coverage.

Suggested change
"comp_req__persistency__snapshot_id_v2",
"comp_req__persistency__snapshot_rotate_v2",
"comp_req__persistency__snapshot_restore_v2",
"comp_req__persistency__snapshot_delete_v2"

| comp_req__persistency__key_length_v2 | Limit key length to 32 bytes | *None (excluded by design)* | Not covered | Excluded from automated testing |
| comp_req__persistency__value_data_types_v2 | Accept only permitted value types | TestSupportedDatatypesValues_* | Fully | All supported types tested |
| comp_req__persistency__value_serialize_v2 | Serialize/deserialize as JSON | TestSupportedDatatypesValues, test_cit_persistency.py | Fully | JSON serialization/deserialization tested |
| comp_req__persistency__value_length_v2 | Limit value length to 1024 bytes | TestSupportedDatatypesValues, TestValueLength | Fully | Max length tested |
Copilot AI Feb 24, 2026

The table marks comp_req__persistency__value_length_v2 as “Fully” covered, but TestValueLength currently allows either accept or reject for values >1024 bytes. Either strengthen the test/scenario to assert rejection beyond 1024, or downgrade this entry to partial/not covered to keep the document accurate.

Suggested change
| comp_req__persistency__value_length_v2 | Limit value length to 1024 bytes | TestSupportedDatatypesValues, TestValueLength | Fully | Max length tested |
| comp_req__persistency__value_length_v2 | Limit value length to 1024 bytes | TestSupportedDatatypesValues, TestValueLength | Partially | Max length boundary not strictly enforced; values >1024 bytes may be accepted or rejected |

Comment on lines 56 to 60
# Note: The following tests verify requirements but need extended scenarios
# They are marked as TODO until scenario implementations are added

@pytest.mark.skip(reason="Requires scenario implementation - comp_req__persistency__pers_data_csum_v2")
@add_test_properties(
Copilot AI Feb 24, 2026

These placeholder tests are marked fully_verifies=[...] but are unconditionally skipped. If requirement coverage is derived from test metadata, this can inflate reported coverage despite zero execution. Consider removing fully_verifies from skipped placeholders until scenarios are implemented.

Copilot AI left a comment

Pull request overview

Copilot reviewed 19 out of 19 changed files in this pull request and generated 6 comments.



Comment on lines +86 to +88
// Check that files exist on filesystem (proof of filesystem usage)
let dir_path = params.dir.expect("No directory specified");
let uses_filesystem = std::path::Path::new(&dir_path).exists();
Copilot AI Feb 25, 2026

uses_filesystem is currently derived from whether the configured directory exists, but that directory (e.g. a temp dir) can exist even if KVS never touched the filesystem. This makes the signal always-true and the test ineffective.

Instead, after flush() check that the expected KVS output files exist (e.g. kvs_<instance_id>_0.json / .hash) in params.dir so the scenario actually proves filesystem usage.

Suggested change
// Check that files exist on filesystem (proof of filesystem usage)
let dir_path = params.dir.expect("No directory specified");
let uses_filesystem = std::path::Path::new(&dir_path).exists();
// Check that expected KVS output files exist on filesystem (proof of filesystem usage)
let dir_path = params.dir.expect("No directory specified");
let instance_id = kvs.instance_id();
let base_name = format!("kvs_{}_0", instance_id);
let dir = std::path::Path::new(&dir_path);
let json_path = dir.join(format!("{}.json", base_name));
let hash_path = dir.join(format!("{}.hash", base_name));
let uses_filesystem = json_path.exists() && hash_path.exists();

Comment on lines +39 to +45
| comp_req__persistency__persist_data_com_v2 | Use file API, JSON format | test_cit_persistency.py | Fully | File and JSON format tested |
| comp_req__persistency__pers_data_csum_v2 | Generate checksum for data file | test_cit_persistency.py | Fully | Checksum generation tested |
| comp_req__persistency__pers_data_csum_vrfy_v2 | Verify checksum on load | test_cit_persistency.py | Fully | Checksum verification tested |
| comp_req__persistency__pers_data_store_bnd_v2 | Use file API to persist data | test_cit_persistency.py | Fully | File API tested |
| comp_req__persistency__pers_data_store_fmt_v2 | Use JSON format | test_cit_persistency.py | Fully | JSON format tested |
| comp_req__persistency__pers_data_version_v2 | No built-in versioning | TestNoBuiltInVersioning | Fully | Explicitly checked |
| comp_req__persistency__pers_data_schema_v2 | JSON format enables versioning | test_cit_persistency.py | Fully | JSON format and schema logic tested |
Copilot AI Feb 25, 2026

These entries claim checksum generation/verification and storage-format requirements are fully covered by test_cit_persistency.py, but the corresponding tests are currently marked @pytest.mark.skip as TODOs. This makes the stated "Fully" coverage inaccurate.

Either remove/adjust the claims here or implement the missing scenarios and unskip the tests so the coverage report reflects reality.

Suggested change
| comp_req__persistency__persist_data_com_v2 | Use file API, JSON format | test_cit_persistency.py | Fully | File and JSON format tested |
| comp_req__persistency__pers_data_csum_v2 | Generate checksum for data file | test_cit_persistency.py | Fully | Checksum generation tested |
| comp_req__persistency__pers_data_csum_vrfy_v2 | Verify checksum on load | test_cit_persistency.py | Fully | Checksum verification tested |
| comp_req__persistency__pers_data_store_bnd_v2 | Use file API to persist data | test_cit_persistency.py | Fully | File API tested |
| comp_req__persistency__pers_data_store_fmt_v2 | Use JSON format | test_cit_persistency.py | Fully | JSON format tested |
| comp_req__persistency__pers_data_version_v2 | No built-in versioning | TestNoBuiltInVersioning | Fully | Explicitly checked |
| comp_req__persistency__pers_data_schema_v2 | JSON format enables versioning | test_cit_persistency.py | Fully | JSON format and schema logic tested |
| comp_req__persistency__persist_data_com_v2 | Use file API, JSON format | test_cit_persistency.py | Not covered | Planned in test_cit_persistency.py, but relevant tests are currently skipped (TODO) |
| comp_req__persistency__pers_data_csum_v2 | Generate checksum for data file | test_cit_persistency.py | Not covered | Planned in test_cit_persistency.py, but checksum tests are currently skipped (TODO) |
| comp_req__persistency__pers_data_csum_vrfy_v2 | Verify checksum on load | test_cit_persistency.py | Not covered | Planned in test_cit_persistency.py, but checksum verification tests are currently skipped (TODO) |
| comp_req__persistency__pers_data_store_bnd_v2 | Use file API to persist data | test_cit_persistency.py | Not covered | Planned in test_cit_persistency.py, but file API persistence tests are currently skipped (TODO) |
| comp_req__persistency__pers_data_store_fmt_v2 | Use JSON format | test_cit_persistency.py | Not covered | Planned in test_cit_persistency.py, but JSON format persistence tests are currently skipped (TODO) |
| comp_req__persistency__pers_data_version_v2 | No built-in versioning | TestNoBuiltInVersioning | Fully | Explicitly checked |
| comp_req__persistency__pers_data_schema_v2 | JSON format enables versioning | test_cit_persistency.py | Not covered | Planned in test_cit_persistency.py, but JSON schema/versioning tests are currently skipped (TODO) |

Comment on lines +738 to +740
if len(logs) > 0:
assert logs[0].value_is_default == "Ok(true)"
assert logs[0].default_value == "Ok(F64(111.1))"
Copilot AI Feb 25, 2026

This assertion block is guarded by if len(logs) > 0, so the test can pass even if no log entry is produced (e.g., if the scenario never queried/logged this key). That makes the test non-verifying.

Either assert that the expected log(s) exist (e.g. assert len(logs) > 0) or change the scenario to emit deterministic logs for this case.

Suggested change
if len(logs) > 0:
assert logs[0].value_is_default == "Ok(true)"
assert logs[0].default_value == "Ok(F64(111.1))"
assert len(logs) > 0
assert logs[0].value_is_default == "Ok(true)"
assert logs[0].default_value == "Ok(F64(111.1))"

Comment on lines +54 to +60
// Test that compile-time constraints exist
// In Rust, there's no hardcoded constant like C++ KVS_MAX_SNAPSHOTS,
// but we can verify that the default behavior exists
let compile_time_max = 3; // This matches the C++ KVS_MAX_SNAPSHOTS
info!(compile_time_max, "Compile-time max");

let compile_time_constraint_exists = true; // Constants are defined in source
Copilot AI Feb 25, 2026

In the Rust scenario's compile_time branch, the test result is hard-coded (compile_time_max = 3 and compile_time_constraint_exists = true) rather than derived from the KVS implementation. This can pass even if the default/compile-time constraint changes, so it doesn't actually verify the requirement.

Consider creating a KVS instance in this branch with the runtime snapshot_max_count unset (so the backend default is used) and log/compare kvs.snapshot_max_count() against the expected default instead of logging constants.

Suggested change
// Test that compile-time constraints exist
// In Rust, there's no hardcoded constant like C++ KVS_MAX_SNAPSHOTS,
// but we can verify that the default behavior exists
let compile_time_max = 3; // This matches the C++ KVS_MAX_SNAPSHOTS
info!(compile_time_max, "Compile-time max");
let compile_time_constraint_exists = true; // Constants are defined in source
// Test that compile-time constraints exist by inspecting the backend default.
// We create a KVS instance (with no explicit runtime snapshot_max_count override)
// and compare its effective snapshot_max_count against the expected default.
let kvs = kvs_instance(params).expect("Failed to create KVS instance");
let compile_time_max = kvs.snapshot_max_count();
info!(compile_time_max, "Compile-time max");
let expected_max = constraint_value;
let compile_time_constraint_exists = compile_time_max == expected_max;

Comment on lines +148 to +150
// Check that files exist on filesystem (proof of filesystem usage)
std::string dir_path = params.dir.value();
int uses_filesystem = std::filesystem::exists(dir_path) ? 1 : 0;
Copilot AI Feb 25, 2026

uses_filesystem is set based on std::filesystem::exists(dir_path), which will be true as long as the directory exists (e.g. the provided temp directory), regardless of whether KVS actually persisted anything. This makes the scenario/test unable to distinguish an in-memory backend from a filesystem backend.

After flush(), check for the presence of the expected persisted files (e.g. kvs_<instance_id>_0.json and its hash) inside dir_path instead.

Suggested change
// Check that files exist on filesystem (proof of filesystem usage)
std::string dir_path = params.dir.value();
int uses_filesystem = std::filesystem::exists(dir_path) ? 1 : 0;
// Check that persisted files exist on the filesystem (proof of filesystem usage)
std::string dir_path = params.dir.value();
int uses_filesystem = 0;
if (std::filesystem::exists(dir_path) && std::filesystem::is_directory(dir_path))
{
for (const auto& entry : std::filesystem::directory_iterator(dir_path))
{
if (entry.is_regular_file())
{
uses_filesystem = 1;
break;
}
}
}

Comment on lines +386 to +390
# We should have at least (snapshot_max_count - 1) snapshots
# The exact behavior may vary slightly due to rotation timing
assert len(existing_snapshot_ids) >= snapshot_max_count - 1, (
f"Expected at least {snapshot_max_count - 1} snapshots, "
f"found {len(existing_snapshot_ids)}: {existing_snapshot_ids}"
Copilot AI Feb 25, 2026

Despite the docstring claiming to verify snapshot ID assignment ("newest=1, older IDs increment"), the test only checks that some snapshot files exist and does not validate ordering or mapping of IDs to stored content. This can pass even if IDs are assigned incorrectly.

To actually verify ID assignment, consider reading the JSON contents of kvs_1_0.json, kvs_1_1.json, ... and asserting that the counter value corresponds to the expected flush order for each snapshot ID (or otherwise assert on a deterministic property of the ID scheme).
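A hedged sketch of the ID check described above, assuming snapshot files named `kvs_<instance>_<id>.json` and a single `counter` key whose value records the flush order; both the file naming and the `counter` key are illustrative assumptions, not the project's confirmed on-disk layout.

```python
import json
from pathlib import Path

def verify_snapshot_ids(kvs_dir: Path, instance_id: int,
                        last_counter: int, count: int) -> list[int]:
    """Check that snapshot ID 1 is the newest flush and higher IDs are
    progressively older, by reading each snapshot's counter value."""
    verified = []
    for snapshot_id in range(1, count + 1):
        path = kvs_dir / f"kvs_{instance_id}_{snapshot_id}.json"
        data = json.loads(path.read_text())
        # Snapshot N should hold the state from N flushes before the last one.
        expected = last_counter - snapshot_id
        assert data["counter"] == expected, (
            f"snapshot {snapshot_id}: expected counter {expected}, "
            f"got {data['counter']}"
        )
        verified.append(snapshot_id)
    return verified
```

Asserting on a monotonic counter per flush pins the ID-to-content mapping, so the test fails if IDs are rotated or assigned out of order rather than merely missing.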

Contributor

remove file, traceability statistics are in docs-as-code pages

//
// SPDX-License-Identifier: Apache-2.0
// *******************************************************************************
// Copyright (c) 2025 Qorix
Contributor

remove

#
# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************
# Copyright (c) 2025 Qorix
Contributor

remove

*
* SPDX-License-Identifier: Apache-2.0
********************************************************************************/
// Copyright (c) 2025 Qorix
Contributor

remove

*
* SPDX-License-Identifier: Apache-2.0
********************************************************************************/
// Copyright (c) 2025 Qorix
Contributor

remove

Comment on lines +53 to +60
} else if constraint_type == "compile_time" {
// Test that compile-time constraints exist
// In Rust, there's no hardcoded constant like C++ KVS_MAX_SNAPSHOTS,
// but we can verify that the default behavior exists
let compile_time_max = 3; // This matches the C++ KVS_MAX_SNAPSHOTS
info!(compile_time_max, "Compile-time max");

let compile_time_constraint_exists = true; // Constants are defined in source
Contributor

this doesn't prove anything; it will never detect a change in the source code

def build_tools(self, version: str) -> BuildTools:
    assert version in ("cpp", "rust")
    return BazelTools(option_prefix=version, config="per-x86_64-linux")
    return BazelTools(option_prefix=version, command_timeout=60.0)
Contributor

why remove config? and explicit pass to build?

Contributor

there must have been an incorrect conflict resolution


import pytest
from common import CommonScenario, ResultCode
from .common import CommonScenario, ResultCode
Contributor

pythonpaths have been updated on main so the dot is no longer needed

Comment on lines +56 to +61
# Note: The following tests verify requirements but need extended scenarios
# They are marked as TODO until scenario implementations are added


@pytest.mark.skip(reason="Requires scenario implementation - comp_req__persistency__pers_data_csum_v2")
@add_test_properties(
Contributor

we can't do that. That will establish traceability to an empty test and give fake statistics
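One way to keep the TODO visible without feeding the statistics is to skip the placeholder and leave the requirement metadata off entirely. A minimal sketch; the requirement ID mirrors the suite's naming, and omitting `@add_test_properties` (rather than listing `fully_verifies`) is the point:

```python
import pytest

# Placeholder kept skipped and WITHOUT @add_test_properties, so no
# requirement traceability is recorded until the scenario exists.
@pytest.mark.skip(
    reason="TODO: scenario for comp_req__persistency__pers_data_csum_v2 "
           "not implemented yet"
)
def test_pers_data_csum_placeholder() -> None:
    raise NotImplementedError("waiting on checksum scenario")
```

Since coverage is derived from the `@add_test_properties` metadata, a skipped test with no requirement lists contributes nothing to the reported statistics.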

3 participants