chore: speed-up of CI tests #46
base: main
Conversation
📝 Walkthrough
This PR reorganizes CI into separate unit/integration/e2e pipelines with image caching and enhanced log collection; migrates async Mongo usage from Motor to PyMongo's async client across code and tests; updates test fixtures and scopes (session-scoped app, integration cleanup); and normalizes Mongo aggregation and list-index usage to await cursors.
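For orientation on the Motor-to-PyMongo portion of the walkthrough, here is a hedged sketch of the awaited-cursor style (illustrative only, not code from this PR; it assumes PyMongo >= 4.9's async API and a reachable local MongoDB, and the database/collection names are made up):

```python
# Illustrative sketch of PyMongo's async client (assumes PyMongo >= 4.9).
# aggregate() is awaited to obtain a cursor, and close() is a coroutine.
import asyncio

from pymongo import AsyncMongoClient


async def failed_event_count(uri: str = "mongodb://localhost:27017") -> int:
    client: AsyncMongoClient = AsyncMongoClient(uri)
    try:
        events = client["example_db"]["events"]
        cursor = await events.aggregate([{"$match": {"status": "failed"}}, {"$count": "n"}])
        docs = [doc async for doc in cursor]
        return docs[0]["n"] if docs else 0
    finally:
        await client.close()  # close() must be awaited with the async client


if __name__ == "__main__":
    print(asyncio.run(failed_event_count()))
```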
Sequence Diagram(s)
sequenceDiagram
participant GH as GitHub Actions
participant Cache as Docker Image Cache Action
participant Compose as Docker Compose (CI)
participant K3s as K3s / Kubernetes
participant Tests as Test Runners
participant Codecov as Codecov
GH->>Cache: request/cache images (mongo, redis, kafka, schema-registry)
Cache-->>GH: images loaded or saved
GH->>Compose: start infra (integration job)
Compose->>Tests: run integration tests (with _cleanup)
Tests->>Compose: on failure collect docker-compose logs
Tests->>Codecov: upload integration coverage (flag)
GH->>K3s: bootstrap k3s (e2e job)
GH->>Cache: ensure images for k8s
K3s->>Tests: run k8s e2e tests
Tests->>K3s: on failure collect k8s events/logs
Tests->>Codecov: upload e2e coverage (flag)
GH->>Tests: run unit tests (isolated env)
Tests->>Codecov: upload unit coverage (flag)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Codecov Report
❌ Patch coverage is
Flags with carried forward coverage won't be shown.
... and 100 files with indirect coverage changes
Actionable comments posted: 1
🧹 Nitpick comments (4)
docker-compose.ci.yaml (2)
117-124: Consider adding a healthcheck to the shared-ca service.
The `shared-ca` service is a dependency for `cert-generator` with `condition: service_completed_successfully`. However, the `sleep 1` command means it completes almost immediately. While this works, adding an explicit healthcheck or using a more deterministic completion signal would be more robust.
56-88: Consider using a specific Kafka image tag for reproducibility.
The tag `bitnami/kafka:3.6` is valid and points to the latest patch in the 3.6 series. However, for consistent CI builds, use an explicit full tag like `bitnami/kafka:3.6.2-debian-12-r5` to avoid potential manifest issues across different architectures or future tag changes.
```diff
 kafka:
-  image: bitnami/kafka:3.6
+  image: bitnami/kafka:3.6.2-debian-12-r5
   container_name: kafka
```
.github/workflows/frontend-ci.yml (1)
117-120: Consider consolidating redundant log capture.
Line 117 captures all compose logs, while lines 118-120 capture individual service logs. The individual service logs are subsets of the full log. Consider if you need both, or if filtering could be applied during analysis instead.
🔎 Alternative: Keep only full logs or add filtering
```diff
 mkdir -p logs
 docker compose -f docker-compose.ci.yaml logs > logs/docker-compose.log 2>&1
-docker compose -f docker-compose.ci.yaml logs backend > logs/backend.log 2>&1
-docker compose -f docker-compose.ci.yaml logs frontend > logs/frontend.log 2>&1
-docker compose -f docker-compose.ci.yaml logs kafka > logs/kafka.log 2>&1
+# Individual logs can be extracted from docker-compose.log if needed
 kubectl get events --sort-by='.metadata.creationTimestamp' -A > logs/k8s-events.log 2>&1 || true
```
Alternatively, keep individual logs if you prefer separate files for easier debugging but note the duplication.
.github/workflows/backend-ci.yml (1)
159-160: Consider adding Kafka and Schema Registry logs for k8s job failures.
The non-k8s integration job collects Kafka and Schema Registry logs on failure (lines 79-81), but the k8s-integration job only collects the general compose log. For consistency and easier debugging, consider adding service-specific logs.
🔎 Proposed enhancement
```diff
 run: |
   mkdir -p logs
   docker compose -f docker-compose.ci.yaml logs > logs/docker-compose.log 2>&1
+  docker compose -f docker-compose.ci.yaml logs kafka > logs/kafka.log 2>&1
+  docker compose -f docker-compose.ci.yaml logs schema-registry > logs/schema-registry.log 2>&1
   kubectl get events --sort-by='.metadata.creationTimestamp' -A > logs/k8s-events.log 2>&1 || true
   kubectl describe pods -A > logs/k8s-describe-pods.log 2>&1 || true
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- .github/actions/setup-ci-compose/action.yml
- .github/workflows/backend-ci.yml
- .github/workflows/frontend-ci.yml
- docker-compose.ci.yaml
💤 Files with no reviewable changes (1)
- .github/actions/setup-ci-compose/action.yml
🧰 Additional context used
🪛 Checkov (3.2.334)
docker-compose.ci.yaml
[medium] 160-161: Basic Auth Credentials
(CKV_SECRET_4)
.github/workflows/backend-ci.yml
[medium] 50-51: Basic Auth Credentials
(CKV_SECRET_4)
🪛 GitHub Actions: Backend CI
docker-compose.ci.yaml
[error] 1-1: Command 'docker compose -f docker-compose.ci.yaml up -d --wait --wait-timeout 120' failed: manifest for bitnami/kafka:3.6 not found (manifest unknown).
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: E2E Tests
🔇 Additional comments (10)
docker-compose.ci.yaml (3)
126-147: Network configuration conflict in cert-generator.
The service declares `network_mode: host` (line 144) while also having a `depends_on` relationship with `shared-ca`, which uses `ci-network`. When using `network_mode: host`, the container cannot communicate with containers on the custom bridge network by their service names. This may cause issues if cert-generator needs to reach services on `ci-network`.
Additionally, `network_mode: host` and `networks:` are mutually exclusive in Docker Compose - the service correctly omits `networks:`, but the dependency on `shared-ca` (which is on `ci-network`) might not work as expected.
Verify that the cert-generator can successfully complete its task without needing to communicate with other services on the `ci-network`. The volume mount approach appears to be the intended communication method via `/shared_ca`.
21-39: Good CI optimizations for infrastructure services.
The configuration demonstrates thoughtful CI optimizations:
- tmpfs for MongoDB (`/data/db`) eliminates disk I/O
- Redis memory limits and disabled persistence (`--save ""`)
- KRaft mode Kafka with reduced heap and single-node replication factors
- Appropriate health check intervals for CI speed
Also applies to: 56-88, 90-110
27-29: Hardcoded credentials are acceptable for CI-only configuration.
The static analysis tool flagged hardcoded credentials. These are intentionally hardcoded for a CI test environment and are not used in production. The file header and naming (`docker-compose.ci.yaml`) clearly indicate this is CI-specific. No action needed.
Also applies to: 160-161
.github/workflows/frontend-ci.yml (3)
84-89: Simplified kubeconfig creation approach looks good.
The sed-based approach to replace the server URL is cleaner than maintaining a separate heredoc template. The `chmod 644` is appropriate for read access by Docker containers.
91-94: Appropriate use of `--wait` flag with generous timeout.
The `--build --wait --wait-timeout 300` combination ensures services are built and healthy before proceeding. The 5-minute timeout is reasonable given the full stack build (cert-generator, backend, frontend).
113-121: Comprehensive log collection on failure.
Good addition of Kafka logs and Kubernetes events. The `|| true` on kubectl ensures log collection doesn't fail if k8s setup was the issue.
.github/workflows/backend-ci.yml (4)
18-89: Well-structured non-k8s integration job.
The job correctly:
- Sets up UV with caching for faster dependency installation
- Starts only infrastructure services (no `--profile full`)
- Explicitly ignores k8s tests with `--ignore=tests/integration/k8s`
- Uses appropriate environment variables for local service access
- Sets `COVERAGE_CORE: sysmon` for faster coverage collection per PR objectives
90-168: K8s integration job properly isolated.
Good separation of k8s-specific tests into a parallel job with:
- Dedicated k8s setup with namespace creation
- Appropriate KUBECONFIG and K8S_NAMESPACE environment variables
- Distinct coverage flags (`backend-k8s`) for Codecov
46-55: Hardcoded test credentials are acceptable for CI.
The static analysis flagged credentials at lines 50-51. These are intentionally hardcoded for the CI test environment, matching the `docker-compose.ci.yaml` configuration. This is standard practice for CI pipelines.
Also applies to: 127-137
114-122: This concern is not valid.
The kubeconfig file is already owned by the runner user because the shell redirection (`>`) is processed by the runner's shell before sudo executes. This means `sudo k3s kubectl config view --raw > /home/runner/.kube/config` creates a runner-owned file, not a root-owned one. With `chmod 600`, the runner user has read and write access, which is correct. The tests can access the file without issues.
Likely an incorrect or invalid review comment.
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/backend-ci.yml (1)
18-24: Fix the invalid Kafka image tag in the environment variable.
The `KAFKA_IMAGE` environment variable uses `bitnami/kafka:3.6`, which doesn't exist. This needs to match the fix in `docker-compose.ci.yaml`.
🔎 Proposed fix
```diff
 env:
   MONGO_IMAGE: mongo:8.0
   REDIS_IMAGE: redis:7-alpine
-  KAFKA_IMAGE: bitnami/kafka:3.6
+  KAFKA_IMAGE: bitnami/kafka:3.6.2
   SCHEMA_REGISTRY_IMAGE: confluentinc/cp-schema-registry:7.5.0
```
🧹 Nitpick comments (2)
docker-compose.ci.yaml (1)
118-125: Consider a more restrictive chmod for shared CA directory.
`chmod 777` grants full permissions to everyone. While acceptable for CI, `chmod 755` would be slightly more secure while still allowing the necessary access.
🔎 Proposed fix
```diff
 shared-ca:
   image: alpine:latest
   profiles: ["full"]
   volumes:
     - shared_ca:/shared_ca
-  command: sh -c "mkdir -p /shared_ca && chmod 777 /shared_ca && sleep 1"
+  command: sh -c "mkdir -p /shared_ca && chmod 755 /shared_ca && sleep 1"
   networks:
     - ci-network
```
.github/workflows/backend-ci.yml (1)
145-184: Caching logic is duplicated across jobs.
The Docker image caching steps are identical in both `integration` and `k8s-integration` jobs. Consider extracting this to a reusable workflow or composite action to reduce duplication.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/backend-ci.yml
- docker-compose.ci.yaml
🧰 Additional context used
🪛 Checkov (3.2.334)
.github/workflows/backend-ci.yml
[medium] 98-99: Basic Auth Credentials
(CKV_SECRET_4)
docker-compose.ci.yaml
[medium] 161-162: Basic Auth Credentials
(CKV_SECRET_4)
🪛 GitHub Actions: Backend CI
docker-compose.ci.yaml
[error] 1-1: Docker image not found: bitnami/kafka:3.6 (manifest unknown). Docker could not pull the image required by docker-compose. Command failed: 'docker compose -f docker-compose.ci.yaml up -d --wait --wait-timeout 120'.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: E2E Tests
🔇 Additional comments (11)
docker-compose.ci.yaml (6)
21-39: Good CI optimization with tmpfs; credentials are acceptable for CI-only use.
The `tmpfs` mount for MongoDB data is a smart choice for faster CI cycles. The hardcoded credentials flagged by static analysis (CKV_SECRET_4) are acceptable here since this configuration is explicitly for CI testing and not production.
41-55: LGTM!
Redis configuration is appropriately tuned for CI with memory limits and LRU eviction policy.
91-111: LGTM!
Schema-registry is properly configured with health checks and correct dependency on Kafka.
127-148: LGTM!
The cert-generator is properly configured as a one-shot container with correct dependency ordering on `shared-ca`.
150-194: LGTM!
Backend service has proper health checks and dependency chain (cert-generator → infra services). The explicitly named `ci-test-secret-key` makes it clear this is for testing only.
195-228: LGTM!
Frontend service is correctly configured with dependency on backend health and proper TLS certificate mounts.
.github/workflows/backend-ci.yml (5)
33-72: Good caching strategy with parallel operations and zstd compression.
The Docker image caching approach is well-designed:
- Parallel pulls/saves reduce wall-clock time
- zstd compression balances speed vs size
- Cache key includes all image versions for proper invalidation
86-121: LGTM!
Good use of `--wait` flag with timeout for infrastructure startup. The `COVERAGE_CORE: sysmon` setting aligns with the PR objective for faster CI tests.
123-136: LGTM!
Comprehensive log collection on failure will aid debugging CI issues.
203-231: LGTM!
The k3s setup is well-designed:
- Disables traefik (not needed for tests)
- Uses idempotent namespace creation (`dry-run | apply`)
- Proper timeout for cluster readiness
- Environment variables correctly configured for k8s tests
244-257: LGTM!
Good defensive logging with `|| true` to ensure log collection doesn't fail the step, maximizing debug information availability.
Actionable comments posted: 0
🧹 Nitpick comments (2)
docker-compose.ci.yaml (1)
56-89: Kafka configuration successfully addresses the past image issue.
The switch from `bitnami/kafka:3.6` to `apache/kafka:3.9.0` resolves the invalid image tag issue flagged in previous reviews. The KRaft mode configuration is appropriate for CI, eliminating the Zookeeper dependency and speeding up startup.
The health check uses 30 retries (60 seconds total with 2s intervals). While this provides a safety margin for Kafka's startup, you might optionally reduce it to 20-25 retries if you find Kafka consistently starts faster in your CI environment.
.github/workflows/backend-ci.yml (1)
145-184: Consider extracting Docker caching into a reusable composite action.
The Docker image caching logic (lines 145-184) is identical to the integration job (lines 33-72). This duplication creates maintenance overhead and increases the risk of inconsistencies.
Consider creating a reusable composite action (e.g., `.github/actions/cache-docker-images/action.yml`) that both jobs can consume. This would centralize the caching logic and make future updates easier.
Example structure for a composite action
Create `.github/actions/cache-docker-images/action.yml`:
```yaml
name: Cache Docker Images
description: Cache and load Docker images for CI
runs:
  using: composite
  steps:
    - name: Cache Docker images
      uses: actions/cache@v5
      id: docker-cache
      with:
        path: /tmp/docker-cache
        key: docker-${{ runner.os }}-${{ env.MONGO_IMAGE }}-${{ env.REDIS_IMAGE }}-${{ env.KAFKA_IMAGE }}-${{ env.SCHEMA_REGISTRY_IMAGE }}
    # ... rest of the caching logic
```
Then in both jobs:
```yaml
- uses: ./.github/actions/cache-docker-images
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/backend-ci.yml
- docker-compose.ci.yaml
🧰 Additional context used
🪛 Checkov (3.2.334)
.github/workflows/backend-ci.yml
[medium] 98-99: Basic Auth Credentials
(CKV_SECRET_4)
docker-compose.ci.yaml
[medium] 161-162: Basic Auth Credentials
(CKV_SECRET_4)
🔇 Additional comments (11)
docker-compose.ci.yaml (5)
1-15: LGTM! Clear documentation of CI compose usage.
The header comments provide helpful context on the two usage modes (infra-only vs full stack) and clearly explain the key differences from the main docker-compose.yaml file.
21-39: LGTM! MongoDB optimized for CI with tmpfs.
Using tmpfs for `/data/db` is an excellent optimization for CI speed, avoiding disk I/O overhead. The health check configuration is appropriate for the CI environment.
117-149: LGTM! Certificate generation flow correctly sequenced.
The dependency chain ensures proper startup order: `shared-ca` → `cert-generator` → backend/frontend. Using `host` network mode for `cert-generator` is appropriate for certificate propagation in CI.
150-193: LGTM! Backend service properly configured with all dependencies.
The backend service correctly depends on:
- `cert-generator` (completion) for TLS setup
- All infrastructure services (health) for runtime dependencies
The health check configuration with a 30s start period and 20 retries is appropriate given the backend's initialization requirements.
195-221: LGTM! Frontend service correctly depends on backend health.
The frontend service properly waits for backend health before starting, ensuring the API is available for E2E tests. The health check configuration is appropriate.
.github/workflows/backend-ci.yml (6)
18-23: LGTM! Image version pinning ensures cache stability.
Pinning image versions in environment variables provides consistent cache keys across CI runs, improving cache hit rates and reducing CI time.
33-72: LGTM! Efficient Docker image caching with parallel operations.
The caching strategy is well-implemented:
- Parallel image pulls and saves with background jobs (`&` and `wait`)
- zstd compression with multi-threading (`-T0`) and reasonable compression level (3)
- Cache hit logic properly loads images in parallel
This should significantly speed up CI runs after the initial cache population.
86-110: LGTM! Test execution aligns with PR objectives.
The changes successfully implement the PR objectives:
- ✅ Uses `--wait` instead of curl polling for service startup (line 88)
- ✅ Sets `COVERAGE_CORE: sysmon` for improved coverage collection (line 104)
- Correctly excludes k8s tests with `--ignore=tests/integration/k8s`
123-136: LGTM! Comprehensive log collection on failure.
The failure path collects relevant logs (docker-compose, kafka, schema-registry) with proper stderr redirection (`2>&1`). The artifact name `backend-logs` clearly distinguishes it from the k8s job's logs.
203-232: LGTM! k3s setup and test execution properly configured.
The k8s-integration job is well-structured:
- k3s installation with traefik disabled (appropriate for test environment)
- Idempotent namespace creation with `--dry-run=client`
- Correct test filtering for k8s-specific tests
- ✅ `COVERAGE_CORE: sysmon` set per PR objectives (line 226)
- Separate coverage flag (`backend-k8s`) distinguishes k8s coverage from non-k8s
244-257: LGTM! k8s-specific log collection on failure.
The failure path appropriately collects Kubernetes events and pod descriptions alongside docker-compose logs. The artifact name `k8s-logs` clearly distinguishes it from the non-k8s job's logs.
Actionable comments posted: 0
🧹 Nitpick comments (1)
docker-compose.ci.yaml (1)
125-133: Consider using a more robust completion signal.
The `sleep 1` pattern ensures the shared directory exists but is fragile. If the directory setup takes longer on slow CI runners, dependent services might fail.
🔎 Alternative approach using a touch file
```diff
 shared-ca:
   image: alpine:latest
   profiles: ["full"]
   volumes:
     - shared_ca:/shared_ca
-  command: sh -c "mkdir -p /shared_ca && chmod 777 /shared_ca && sleep 1"
+  command: sh -c "mkdir -p /shared_ca && chmod 777 /shared_ca && touch /shared_ca/.ready && tail -f /dev/null"
   networks:
     - ci-network
+  healthcheck:
+    test: ["CMD", "test", "-f", "/shared_ca/.ready"]
+    interval: 1s
+    timeout: 1s
+    retries: 5
```
Then update cert-generator to depend on health instead:
```diff
 depends_on:
   shared-ca:
-    condition: service_completed_successfully
+    condition: service_healthy
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
docker-compose.ci.yaml
🧰 Additional context used
🪛 Checkov (3.2.334)
docker-compose.ci.yaml
[medium] 171-172: Basic Auth Credentials
(CKV_SECRET_4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
- GitHub Check: K8s Integration Tests
🔇 Additional comments (9)
docker-compose.ci.yaml (9)
1-15: LGTM! Clear CI documentation.
The header comments provide excellent context on usage patterns and key differences from the production compose file.
27-29: Hardcoded credentials are acceptable for CI.
The MongoDB credentials are hardcoded, which is flagged by static analysis. For CI environments, this is acceptable since they're ephemeral and not exposed to production.
41-54: LGTM! Redis optimized for CI.
The memory limits, disabled persistence (`--save ""`), and tight healthcheck intervals are appropriate for ephemeral CI workloads.
91-111: LGTM! Schema Registry properly configured.
The service correctly depends on Kafka health, uses internal listener (`kafka:29092`), and has appropriate resource limits for CI.
135-156: Review the necessity of host networking for cert-generator.
Using `network_mode: host` bypasses Docker network isolation. This might be required for certificate generation that needs to access the host, but it increases the attack surface in CI.
Verify whether host networking is strictly necessary or if the same functionality can be achieved with the standard bridge network and volume mounts.
158-205: Hardcoded secrets are acceptable for CI environments.
Lines 171 and 178 contain hardcoded MongoDB credentials and SECRET_KEY, which static analysis correctly flags. For ephemeral CI environments, this is acceptable since:
- The environment is temporary and isolated
- No production data is involved
- Credentials are not exposed outside the CI runner
The explicit `TESTING=true` flag (line 170) clearly marks this as a test configuration.
207-233: LGTM! Frontend properly configured for E2E testing.
The service correctly:
- Depends on backend health before starting
- Uses `NODE_EXTRA_CA_CERTS` to trust the self-signed CA
- Has appropriate healthcheck with `-k` flag for self-signed certificates
235-239: LGTM! Standard volume and network configuration.
The `shared_ca` volume enables certificate sharing across services, and the bridge network provides appropriate isolation for CI.
56-89: Apache Kafka image `apache/kafka:3.9.0` exists and is stable.
The official Apache Kafka image version 3.9.0 is available on Docker Hub and confirmed as a stable release.
Actionable comments posted: 1
🧹 Nitpick comments (1)
.github/workflows/backend-ci.yml (1)
178-224: Docker caching logic is duplicated across jobs.
The Docker caching steps (lines 185-224) are nearly identical to the integration job (lines 73-112). While this duplication is necessary since jobs run on separate runners, consider extracting this into a composite action for maintainability if the caching logic needs to evolve.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/backend-ci.yml
- README.md
🧰 Additional context used
🪛 Checkov (3.2.334)
.github/workflows/backend-ci.yml
[medium] 138-139: Basic Auth Credentials
(CKV_SECRET_4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: E2E Tests
- GitHub Check: K8s Integration Tests
- GitHub Check: Unit Tests
- GitHub Check: Integration Tests
🔇 Additional comments (7)
.github/workflows/backend-ci.yml (7)
18-23: LGTM!
Pinning image versions at the workflow level ensures cache key consistency and reproducible builds across all jobs.
26-64: LGTM!
The unit test job is well-structured with proper UV caching, coverage collection using `sysmon` core for speedup, and Codecov upload with distinct `backend-unit` flag.
73-112: Well-designed Docker image caching with parallel operations.
The caching strategy is efficient: parallel pulls/loads with `&` and `wait`, and zstd compression for optimal speed-to-size ratio. The cache key correctly includes all image versions for proper invalidation.
One consideration: if an individual `docker load` fails silently in the background, it won't be caught until compose starts. The failure would still be detected, but the error message might be less clear.
131-151: LGTM!
The test configuration correctly excludes k8s tests for the non-k8s integration path. The MongoDB credentials flagged by static analysis (CKV_SECRET_4) are CI-only test fixtures for ephemeral containers, not production secrets - this is a false positive.
163-176: LGTM!
Targeted log collection for Kafka and Schema Registry alongside the general docker-compose logs will aid debugging of integration test failures.
243-251: LGTM!
The k3s setup is well-structured with proper kubeconfig permissions, timeout for cluster readiness, and idempotent namespace creation using the dry-run pattern.
253-297: LGTM!
The k8s test configuration is well-structured with proper environment setup, distinct Codecov flag (`backend-k8s`), and comprehensive failure logging including Kubernetes events and pod descriptions.
Actionable comments posted: 0
🧹 Nitpick comments (1)
.github/workflows/backend-ci.yml (1)
73-112: Consider extracting Docker caching logic to a reusable workflow.
The Docker image caching implementation is excellent and will significantly speed up CI runs. However, this 40-line block is duplicated identically in the `k8s-integration` job (lines 185-224).
Consider extracting it into a composite action or reusable workflow to improve maintainability and ensure consistency across both jobs.
💡 Example: Create a composite action
Create `.github/actions/docker-cache/action.yml`:
```yaml
name: 'Cache Docker Images'
description: 'Cache and restore Docker images with zstd compression'
runs:
  using: 'composite'
  steps:
    - name: Cache Docker images
      uses: actions/cache@v5
      id: docker-cache
      with:
        path: /tmp/docker-cache
        key: docker-${{ runner.os }}-${{ env.MONGO_IMAGE }}-${{ env.REDIS_IMAGE }}-${{ env.KAFKA_IMAGE }}-${{ env.SCHEMA_REGISTRY_IMAGE }}
    - name: Load cached Docker images
      if: steps.docker-cache.outputs.cache-hit == 'true'
      shell: bash
      run: |
        echo "Loading cached images..."
        for f in /tmp/docker-cache/*.tar.zst; do
          zstd -d -c "$f" | docker load &
        done
        wait
        docker images
    - name: Pull and save Docker images
      if: steps.docker-cache.outputs.cache-hit != 'true'
      shell: bash
      run: |
        mkdir -p /tmp/docker-cache
        echo "Pulling images in parallel..."
        docker pull $MONGO_IMAGE &
        docker pull $REDIS_IMAGE &
        docker pull $KAFKA_IMAGE &
        docker pull $SCHEMA_REGISTRY_IMAGE &
        wait
        echo "Saving images with zstd compression..."
        docker save $MONGO_IMAGE | zstd -T0 -3 > /tmp/docker-cache/mongo.tar.zst &
        docker save $REDIS_IMAGE | zstd -T0 -3 > /tmp/docker-cache/redis.tar.zst &
        docker save $KAFKA_IMAGE | zstd -T0 -3 > /tmp/docker-cache/kafka.tar.zst &
        docker save $SCHEMA_REGISTRY_IMAGE | zstd -T0 -3 > /tmp/docker-cache/schema-registry.tar.zst &
        wait
        echo "Cache size:"
        du -sh /tmp/docker-cache/
```
Then replace both caching sections with:
```yaml
- uses: ./.github/actions/docker-cache
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/backend-ci.yml
🧰 Additional context used
🪛 Checkov (3.2.334)
.github/workflows/backend-ci.yml
[medium] 138-139: Basic Auth Credentials
(CKV_SECRET_4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: E2E Tests
- GitHub Check: K8s Integration Tests
- GitHub Check: Unit Tests
- GitHub Check: Integration Tests
🔇 Additional comments (8)
.github/workflows/backend-ci.yml (8)
26-64: Well-implemented unit test job with performance optimizations.The unit test job is cleanly separated from integration tests, includes proper UV caching, and leverages
COVERAGE_CORE=sysmonfor faster coverage collection. The 5-minute timeout is appropriate for unit tests.
126-129: Excellent improvement: replaced polling with--wait.The use of
docker compose --wait --wait-timeout 120is a clean replacement for the previous curl polling approach mentioned in the PR objectives. This makes the startup process more reliable and easier to maintain.
134-142: Note: Hardcoded credentials are acceptable for CI.The static analysis tool flagged the hardcoded MongoDB credentials. While hardcoded credentials are generally a security concern, these are specifically for ephemeral CI environments and pose no security risk. The credentials are:
- Only used in isolated CI containers
- Never exposed to production
- Reset with each test run
131-161: Well-structured integration test execution with proper coverage separation.The non-k8s integration tests are properly isolated using
--ignore=tests/integration/k8s, and theCOVERAGE_CORE=sysmonsetting aligns with the PR's speed-up objectives. The coverage upload with thebackend-integrationflag enables proper tracking in Codecov.
243-251: Verify that the 90-second timeout for k3s cluster readiness is sufficient.The k3s setup looks well-structured with proper kubeconfig handling and namespace creation. However, the 90-second timeout for
kubectl cluster-infomight be tight on slower runners or during resource contention.Consider monitoring CI run times to ensure this timeout is consistently adequate. If you observe intermittent failures, consider increasing to 120 seconds to match the docker-compose wait timeout.
253-282: K8s integration tests properly configured with distinct coverage tracking.The k8s integration tests are cleanly separated with their own job, use the
backend-e2ecoverage flag (matching the Codecov comment structure in the PR), and properly configure the Kubernetes environment variables. TheCOVERAGE_CORE=sysmonsetting maintains consistency with the other test jobs for performance.
284-297: Comprehensive failure logging for k8s troubleshooting.The log collection includes both docker-compose logs and Kubernetes-specific artifacts (events and pod descriptions), which will be invaluable for debugging k8s-related test failures.
18-23: All pinned Docker image versions are valid and actively maintained.The images resolve correctly:
mongo:8.0,redis:7-alpine,apache/kafka:3.9.0, andconfluentinc/cp-schema-registry:7.5.0are all official images from their respective registries. The Kafka version is current (released November 2024), addressing the prior concern about invalid image tags.
Actionable comments posted: 0
🧹 Nitpick comments (1)
backend/tests/unit/schemas_pydantic/test_events_schemas.py (1)
77-77: Consider moving import to module level for consistency.
The import of `EventMetadataResponse` is local to this function, while all other imports are at the module level. Moving it to the top would improve consistency and make the dependency more discoverable.
🔎 Proposed refactor
Move the import to lines 9-22 where other event schema imports are located:
```diff
 from app.schemas_pydantic.events import (
     EventAggregationRequest,
     EventBase,
     EventFilterRequest,
     EventInDB,
     EventListResponse,
+    EventMetadataResponse,
     EventProjection,
     EventQuery,
     EventResponse,
     EventStatistics,
     PublishEventRequest,
     PublishEventResponse,
     ResourceUsage,
 )
```
And remove the local import:
```diff
 # Minimal list response compose/decompose
-from app.schemas_pydantic.events import EventMetadataResponse
 er = EventResponse(
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (28)
- backend/tests/conftest.py
- backend/tests/integration/app/__init__.py
- backend/tests/integration/app/test_main_app.py
- backend/tests/integration/conftest.py
- backend/tests/integration/core/test_dishka_lifespan.py
- backend/tests/integration/db/repositories/__init__.py
- backend/tests/integration/db/repositories/test_admin_events_repository.py
- backend/tests/integration/db/repositories/test_admin_settings_repository.py
- backend/tests/integration/db/repositories/test_admin_user_repository.py
- backend/tests/integration/db/repositories/test_dlq_repository.py
- backend/tests/integration/db/repositories/test_event_repository.py
- backend/tests/integration/db/repositories/test_execution_repository.py
- backend/tests/integration/db/repositories/test_notification_repository.py
- backend/tests/integration/db/repositories/test_replay_repository.py
- backend/tests/integration/db/repositories/test_saga_repository.py
- backend/tests/integration/db/repositories/test_saved_script_repository.py
- backend/tests/integration/db/repositories/test_sse_repository.py
- backend/tests/integration/db/repositories/test_user_repository.py
- backend/tests/integration/db/repositories/test_user_settings_repository.py
- backend/tests/integration/events/test_admin_utils.py
- backend/tests/integration/services/sse/__init__.py
- backend/tests/integration/services/sse/test_redis_bus.py
- backend/tests/unit/.env.unit
- backend/tests/unit/conftest.py
- backend/tests/unit/db/repositories/__init__.py
- backend/tests/unit/events/test_admin_utils.py
- backend/tests/unit/infrastructure/mappers/test_replay_api_mapper.py
- backend/tests/unit/schemas_pydantic/test_events_schemas.py
💤 Files with no reviewable changes (2)
- backend/tests/unit/infrastructure/mappers/test_replay_api_mapper.py
- backend/tests/unit/events/test_admin_utils.py
✅ Files skipped from review due to trivial changes (2)
- backend/tests/integration/services/sse/__init__.py
- backend/tests/integration/app/__init__.py
🧰 Additional context used
🧬 Code graph analysis (2)
backend/tests/integration/events/test_admin_utils.py (1)
backend/app/events/admin_utils.py (3)
`AdminUtils` (10-62), `ensure_topics_exist` (50-58), `check_topic_exists` (27-34)
backend/tests/unit/schemas_pydantic/test_events_schemas.py (2)
backend/app/schemas_pydantic/events.py (2)
`EventMetadataResponse` (26-37), `EventResponse` (40-52)
backend/app/domain/events/event_models.py (1)
`correlation_id` (93-94)
🪛 dotenv-linter (4.0.0)
backend/tests/unit/.env.unit
[warning] 2-2: [UnorderedKey] The SECRET_KEY key should go before the TESTING key
(UnorderedKey)
[warning] 3-3: [UnorderedKey] The ENABLE_TRACING key should go before the SECRET_KEY key
(UnorderedKey)
[warning] 4-4: [UnorderedKey] The OTEL_SDK_DISABLED key should go before the SECRET_KEY key
(UnorderedKey)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
- GitHub Check: K8s Integration Tests
🔇 Additional comments (9)
backend/tests/unit/schemas_pydantic/test_events_schemas.py (1)
78-87: Excellent improvement to test data accuracy.
The switch from an empty dict to a properly structured `EventMetadataResponse` instance makes the test more accurate and aligns with the actual schema definition. All required fields (`service_name`, `service_version`, `correlation_id`) are correctly provided, making the test both realistic and maintainable.
backend/tests/conftest.py (1)
189-191: LGTM! Clear documentation of test fixture reorganization.
The comments effectively explain the separation of concerns: integration tests handle cleanup via their own conftest, while unit tests use guards to prevent infrastructure access.
backend/tests/integration/conftest.py (1)
1-24: LGTM! Solid integration test isolation strategy.
The autouse cleanup fixture ensures each integration test runs with a clean MongoDB and Redis state, filtering out system collections appropriately.
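For illustration, the cleanup pattern described here boils down to something like the following hedged sketch (not the conftest's literal code; it assumes PyMongo's async database type and redis-py's asyncio client):

```python
# Minimal sketch of the per-test cleanup pattern (assumption: not the
# project's actual fixture body). Drops non-system Mongo collections and
# flushes Redis so every integration test starts from a clean state.
from pymongo.asynchronous.database import AsyncDatabase
from redis.asyncio import Redis


async def reset_state(db: AsyncDatabase, redis_client: Redis) -> None:
    for name in await db.list_collection_names():
        if not name.startswith("system."):
            await db.drop_collection(name)
    await redis_client.flushdb()
```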
backend/tests/integration/events/test_admin_utils.py (2)
19-20: LGTM!
The topic existence check properly validates that the topic was created/exists.
15-17: Weak assertion doesn't validate anything meaningful.
The assertion `assert res.get(topic) in (True, False)` will pass even if `res.get(topic)` returns `None`. This doesn't verify that the topic was actually processed by `ensure_topics_exist`.
🔎 Proposed fix
```diff
-# Ensure topic exists (idempotent)
-res = await au.ensure_topics_exist([(topic, 1)])
-assert res.get(topic) in (True, False)  # Some clusters may report exists
+# Ensure topic exists (idempotent)
+res = await au.ensure_topics_exist([(topic, 1)])
+assert topic in res, f"Topic {topic} should be in results"
```
Likely an incorrect or invalid review comment.
backend/tests/unit/.env.unit (1)
1-4: LGTM! Appropriate unit test environment configuration.
The environment variables correctly configure unit test mode and disable tracing/telemetry.
backend/tests/unit/conftest.py (3)
1-9: LGTM! Proper unit test environment setup.
Loading `.env.unit` with `override=True` ensures unit test configuration takes precedence.
12-15: LGTM! No-op cleanup provides fixture compatibility.
The `autouse=False` no-op `_cleanup` fixture allows tests that explicitly depend on `_cleanup` to work in unit test context without performing unnecessary cleanup operations.
18-35: LGTM! Excellent enforcement of unit test boundaries.
The guard fixtures correctly prevent unit tests from accessing infrastructure dependencies, with clear error messages directing developers to use mocks or integration tests.
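As an illustration of the guard pattern (a hedged sketch, not the repository's actual backend/tests/unit/conftest.py), overriding the infrastructure fixtures so they fail fast looks roughly like:

```python
# Hypothetical guard fixtures: any unit test that requests db or redis_client
# fails immediately with a pointer to the correct fix. Error messages are
# illustrative, not the project's actual wording.
import pytest


@pytest.fixture
def db() -> None:
    raise RuntimeError("Unit tests must not touch MongoDB; use mocks or move the test to tests/integration.")


@pytest.fixture
def redis_client() -> None:
    raise RuntimeError("Unit tests must not touch Redis; use mocks or move the test to tests/integration.")
```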
Actionable comments posted: 0
🧹 Nitpick comments (2)
backend/tests/unit/services/pod_monitor/test_monitor.py (2)
202-202: Consider whether substring check is intentional.
The assertion changed from an exact match to a case-insensitive substring check. While this makes the test more flexible, it could pass for unexpected state values like "RUNNING_IDLE" or "PENDING_IDLE".
If the state value format is stable, an exact comparison might catch regressions more reliably:
```python
assert status["state"] == "IDLE"  # or whatever the exact expected value is
```
560-560: Tests verify graceful handling without assertions.
These tests validate that methods handle edge cases (already running/stopped, malformed input) without raising exceptions. The lack of explicit assertions is intentional: the test passes if no exception occurs.
For clarity, you could optionally add brief comments:
```python
await pm.start()  # Should not raise when already running
```
Also applies to: 569-569, 605-605
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
backend/tests/unit/services/pod_monitor/test_monitor.py
🧰 Additional context used
🧬 Code graph analysis (1)
backend/tests/unit/services/pod_monitor/test_monitor.py (1)
backend/app/services/pod_monitor/monitor.py (3)
`PodMonitor` (132-558), `create_pod_monitor` (562-574), `_update_resource_version` (322-328)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E Tests
- GitHub Check: K8s Integration Tests
- GitHub Check: Integration Tests
🔇 Additional comments (3)
backend/tests/unit/services/pod_monitor/test_monitor.py (3)
13-43: Good centralization of K8s stubs.
The shared stub classes effectively reduce duplication across tests. The minimal implementations are appropriate for unit testing where the focus is on the monitor's logic rather than K8s client behavior.
74-81: Excellent refactoring of K8s patching logic.
The `_patch_k8s` helper centralizes monkeypatching with sensible defaults while allowing test-specific overrides. This significantly reduces boilerplate and makes test setup more maintainable.
109-109: Consistent use of the new patching helper.
The tests correctly leverage `_patch_k8s` with appropriate overrides where needed, maintaining clear test intent while reducing setup boilerplate.
Also applies to: 541-541, 650-650, 675-675
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
backend/tests/integration/db/repositories/test_user_repository.py (1)
12-12: Fix the MyPy type checking failure for the untyped `db` parameter.
The pipeline failure indicates MyPy's strict mode is rejecting the untyped `db` parameter. The `# type: ignore[valid-type]` comment is likely using the wrong error code for this scenario.
Given the commit message mentions "motor -> pymongo", ensure you're using the correct type annotation. Consider one of these solutions:
Add proper type annotation (preferred):
```python
from pymongo.database import Database

async def test_create_get_update_delete_user(db: Database) -> None:
```
If the type is complex or fixture-dependent, use the correct mypy error code:
```python
async def test_create_get_update_delete_user(db) -> None:  # type: ignore[no-untyped-def]
```
backend/workers/run_saga_orchestrator.py (1)
114-114: Critical: `close()` must be awaited.
MyPy correctly identifies that `db_client.close()` returns a coroutine that is not being awaited, which means the database connection may not close properly.
🔎 Proposed fix
```diff
-db_client.close()
+await db_client.close()
```
backend/app/services/k8s_worker/worker.py (1)
580-580: Use `stack.push_async_callback()` instead of `stack.callback()` for async MongoDB client cleanup.
`AsyncMongoClient.close()` is an async coroutine in PyMongo 4.9.2 and must be awaited. The code at line 580 uses `stack.callback()` (for synchronous functions), but should use `stack.push_async_callback()` to properly handle the async cleanup, consistent with the other async callbacks on lines 578-579.
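To make the `callback` vs. `push_async_callback` distinction concrete, here is a self-contained sketch (illustrative only; `FakeAsyncClient` stands in for the async Mongo client and is not the worker's actual code):

```python
# contextlib.AsyncExitStack: callback() registers a sync callable, so passing
# an async close() would only create an un-awaited coroutine at unwind time;
# push_async_callback() awaits it when the stack exits.
import asyncio
from contextlib import AsyncExitStack


class FakeAsyncClient:
    async def close(self) -> None:
        print("closed (awaited on stack unwind)")


async def main() -> None:
    client = FakeAsyncClient()
    async with AsyncExitStack() as stack:
        stack.push_async_callback(client.close)  # awaited when the stack unwinds


asyncio.run(main())
```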
backend/workers/run_event_replay.py (1)
71-71: Critical: Use `push_async_callback` instead of `callback` for async close().
PyMongo's `AsyncMongoClient.close()` is now a coroutine that must be awaited. Using `stack.callback()` will not await the coroutine, causing a resource leak and MyPy error.
🔎 Proposed fix
```diff
-stack.callback(db_client.close)
+stack.push_async_callback(db_client.close)
```
backend/tests/fixtures/real_services.py (2)
96-101: Critical: Missing await on `client.close()`.
PyMongo's `AsyncMongoClient.close()` is a coroutine that must be awaited. The synchronous call will cause a resource leak and MyPy error.
🔎 Proposed fix
```diff
 # Drop test MongoDB database
 if self.mongo_client:
     await self.mongo_client.drop_database(self.db_name)
-    self.mongo_client.close()
+    await self.mongo_client.close()
```
314-320: Critical: Missing await on `client.close()`.
PyMongo's `AsyncMongoClient.close()` is a coroutine that must be awaited.
🔎 Proposed fix
```diff
 client = AsyncIOMotorClient(
     "mongodb://root:rootpassword@localhost:27017", serverSelectionTimeoutMS=5000
 )
 await client.admin.command("ping")
-client.close()
+await client.close()
```
backend/scripts/seed_users.py (1)
107-107: Critical: Missing await on `client.close()`.
PyMongo's `AsyncMongoClient.close()` returns a coroutine that must be awaited. The pipeline failure explicitly flags this: "Value of type 'Coroutine[Any, Any, None]' must be used. Are you missing an await?"
🔎 Proposed fix
```diff
-client.close()
+await client.close()
```
backend/app/core/database_context.py (1)
158-172: Critical: Remove incorrect await on `start_session()`.
PyMongo's `client.start_session()` returns an `AsyncClientSession` context manager directly, not a coroutine. The pipeline failure explicitly flags this: "Incompatible types in 'await' (actual type 'AsyncClientSession', expected type 'Awaitable[Any]')".
🔎 Proposed fix
```diff
-async with await self.client.start_session() as session:
+async with self.client.start_session() as session:
     async with session.start_transaction():
         yield session
```
backend/app/services/coordinator/coordinator.py (1)
543-548: Critical: Use `push_async_callback` instead of `callback` for async close().
PyMongo's `AsyncMongoClient.close()` is a coroutine that must be awaited. Using `stack.callback()` will not await the coroutine, causing a resource leak and contributing to the MyPy type checking failure.
🔎 Proposed fix
```diff
 await stack.enter_async_context(coordinator)
 stack.push_async_callback(idem_manager.close)
 stack.push_async_callback(r.aclose)
-stack.callback(db_client.close)
+stack.push_async_callback(db_client.close)
```
🧹 Nitpick comments (4)
backend/tests/integration/events/test_event_store.py (1)
5-10: Import ordering: third-party import after local imports.
The `pymongo` import at line 10 is placed after the `app.*` imports. Standard Python convention (PEP 8) groups imports as: stdlib → third-party → local. Consider moving the pymongo import before the app imports for consistency.
🔎 Suggested reordering
```diff
 from datetime import datetime, timezone, timedelta
 import pytest
+from pymongo.asynchronous.database import AsyncDatabase as AsyncIOMotorDatabase
 from app.events.event_store import EventStore
 from app.events.schema.schema_registry import SchemaRegistryManager
 from app.infrastructure.kafka.events.metadata import AvroEventMetadata
 from app.infrastructure.kafka.events.pod import PodCreatedEvent
 from app.infrastructure.kafka.events.user import UserLoggedInEvent
-from pymongo.asynchronous.database import AsyncDatabase as AsyncIOMotorDatabase
```
backend/tests/integration/conftest.py (1)
10-24: Consider extracting duplicate cleanup logic.
The pre-test (lines 11-15) and post-test (lines 20-24) cleanup logic is identical. Extracting to a helper improves maintainability.
🔎 Suggested refactor
```diff
+async def _do_cleanup(db: AsyncIOMotorDatabase, redis_client: redis.Redis) -> None:
+    collections = await db.list_collection_names()
+    for name in collections:
+        if not name.startswith("system."):
+            await db.drop_collection(name)
+    await redis_client.flushdb()
+
+
 @pytest_asyncio.fixture(scope="function", autouse=True)
 async def _cleanup(db: AsyncIOMotorDatabase, redis_client: redis.Redis):
     """Clean DB and Redis before/after each integration test."""
-    # Pre-test cleanup
-    collections = await db.list_collection_names()
-    for name in collections:
-        if not name.startswith("system."):
-            await db.drop_collection(name)
-    await redis_client.flushdb()
+    await _do_cleanup(db, redis_client)
     yield
-    # Post-test cleanup
-    collections = await db.list_collection_names()
-    for name in collections:
-        if not name.startswith("system."):
-            await db.drop_collection(name)
-    await redis_client.flushdb()
+    await _do_cleanup(db, redis_client)
```
.github/workflows/backend-ci.yml (2)
73-112: Consider extracting Docker caching to a composite action.
The Docker image caching logic (cache restore, parallel pull, zstd save) is duplicated between the `integration` and `e2e` jobs. Extracting this to a composite action would reduce duplication and simplify maintenance.
Also applies to: 185-224
85-88: Background `docker load` errors may be silently ignored.
When loading images in parallel with `&`, if any `docker load` command fails, the error is not captured before `wait`. Consider adding error handling or using `set -e` at the script start to fail on first error.
🔎 Suggested fix
```diff
 - name: Load cached Docker images
   if: steps.docker-cache.outputs.cache-hit == 'true'
   run: |
+    set -e
     echo "Loading cached images..."
+    pids=()
     for f in /tmp/docker-cache/*.tar.zst; do
-      zstd -d -c "$f" | docker load &
+      zstd -d -c "$f" | docker load &
+      pids+=($!)
     done
-    wait
+    for pid in "${pids[@]}"; do
+      wait "$pid" || exit 1
+    done
     docker images
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`backend/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (36)
- .github/workflows/backend-ci.yml
- backend/app/core/database_context.py
- backend/app/services/coordinator/coordinator.py
- backend/app/services/k8s_worker/worker.py
- backend/pyproject.toml
- backend/scripts/seed_users.py
- backend/tests/conftest.py
- backend/tests/fixtures/real_services.py
- backend/tests/integration/app/test_main_app.py
- backend/tests/integration/conftest.py
- backend/tests/integration/core/test_container.py
- backend/tests/integration/core/test_database_context.py
- backend/tests/integration/db/repositories/test_admin_events_repository.py
- backend/tests/integration/db/repositories/test_admin_settings_repository.py
- backend/tests/integration/db/repositories/test_admin_user_repository.py
- backend/tests/integration/db/repositories/test_dlq_repository.py
- backend/tests/integration/db/repositories/test_event_repository.py
- backend/tests/integration/db/repositories/test_notification_repository.py
- backend/tests/integration/db/repositories/test_replay_repository.py
- backend/tests/integration/db/repositories/test_saga_repository.py
- backend/tests/integration/db/repositories/test_saved_script_repository.py
- backend/tests/integration/db/repositories/test_sse_repository.py
- backend/tests/integration/db/repositories/test_user_repository.py
- backend/tests/integration/db/repositories/test_user_settings_repository.py
- backend/tests/integration/events/test_event_store.py
- backend/tests/integration/events/test_event_store_consumer.py
- backend/tests/integration/events/test_event_store_consumer_flush_e2e.py
- backend/tests/integration/events/test_event_store_e2e.py
- backend/tests/integration/k8s/test_k8s_worker_create_pod.py
- backend/tests/integration/result_processor/test_result_processor.py
- backend/tests/integration/services/admin/test_admin_user_service.py
- backend/tests/integration/services/saved_script/test_saved_script_service.py
- backend/tests/integration/services/sse/test_redis_bus.py
- backend/workers/dlq_processor.py
- backend/workers/run_event_replay.py
- backend/workers/run_saga_orchestrator.py
💤 Files with no reviewable changes (1)
- backend/pyproject.toml
✅ Files skipped from review due to trivial changes (1)
- backend/tests/integration/db/repositories/test_admin_user_repository.py
🧰 Additional context used
🧬 Code graph analysis (13)
backend/tests/integration/events/test_event_store_consumer.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/integration/events/test_event_store.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/integration/services/sse/test_redis_bus.py (3)
backend/app/schemas_pydantic/sse.py (2)
`RedisSSEMessage` (63-68), `RedisNotificationMessage` (102-112)
backend/app/dlq/models.py (1)
`event_type` (93-95)
backend/app/services/coordinator/queue_manager.py (1)
`execution_id` (30-31)
backend/tests/integration/events/test_event_store_consumer_flush_e2e.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/integration/events/test_event_store_e2e.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/integration/k8s/test_k8s_worker_create_pod.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/integration/conftest.py (3)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/conftest.py (3)
`scope` (172-174), `db` (178-180), `redis_client` (184-186)
backend/tests/unit/conftest.py (3)
`_cleanup` (13-15), `db` (19-20), `redis_client` (24-25)
backend/tests/integration/services/admin/test_admin_user_service.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/integration/result_processor/test_result_processor.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/conftest.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/integration/core/test_database_context.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/integration/core/test_container.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
backend/tests/fixtures/real_services.py (1)
backend/app/core/database_context.py (3)
`database` (67-69), `database` (146-149), `database` (201-202)
🪛 Checkov (3.2.334)
.github/workflows/backend-ci.yml
[medium] 138-139: Basic Auth Credentials
(CKV_SECRET_4)
🪛 GitHub Actions: MyPy Type Checking
backend/tests/integration/db/repositories/test_sse_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/events/test_event_store_consumer.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_user_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_saved_script_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/events/test_event_store.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
.github/workflows/backend-ci.yml
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/services/sse/test_redis_bus.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_admin_events_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/events/test_event_store_consumer_flush_e2e.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_event_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_admin_settings_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/workers/run_saga_orchestrator.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
[error] 114-114: Value of type "Coroutine[Any, Any, None]" must be used [unused-coroutine]. Are you missing an await?
backend/tests/integration/app/test_main_app.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/workers/run_event_replay.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/app/core/database_context.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
[error] 170-170: Incompatible types in "await" (actual type "AsyncClientSession", expected type "Awaitable[Any]") [misc]
backend/app/services/coordinator/coordinator.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_replay_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/events/test_event_store_e2e.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/k8s/test_k8s_worker_create_pod.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/conftest.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/services/admin/test_admin_user_service.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_user_settings_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/result_processor/test_result_processor.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/scripts/seed_users.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
[error] 107-107: Value of type "Coroutine[Any, Any, None]" must be used [unused-coroutine]. Are you missing an await?
backend/tests/conftest.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_dlq_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/services/saved_script/test_saved_script_service.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/core/test_database_context.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/app/services/k8s_worker/worker.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/core/test_container.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_notification_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/workers/dlq_processor.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/fixtures/real_services.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
backend/tests/integration/db/repositories/test_saga_repository.py
[error] 1-1: mypy check failed in CI step: 'uv run mypy --config-file pyproject.toml --strict .'
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Scan Backend
- GitHub Check: Scan Frontend
- GitHub Check: E2E Tests
- GitHub Check: E2E Tests
🔇 Additional comments (46)
backend/tests/integration/db/repositories/test_user_repository.py (1)
8-8: LGTM! Correct reclassification from unit to integration test.This change appropriately reflects that the test performs actual database operations, making it an integration test rather than a unit test.
backend/tests/integration/db/repositories/test_saga_repository.py (2)
9-9: LGTM! Test marker updated appropriately. The change from `pytest.mark.unit` to `pytest.mark.integration` correctly reflects that this test requires database fixtures and tests repository integration logic.
27-28: No changes needed. The return type change from list to `SagaListResult` is properly implemented with a correct return type annotation, and all usages across the codebase have been consistently updated to access `.sagas`. The assertion change at line 28 is correct.
backend/tests/integration/db/repositories/test_notification_repository.py (2)
15-15: LGTM! Correct test categorization. The marker change from `unit` to `integration` is appropriate. These tests interact with a database through the `db` fixture and perform actual I/O operations on MongoDB collections, which clearly qualifies them as integration tests rather than unit tests.
1-1: The `datetime.UTC` import is valid for the target Python version. The import `from datetime import datetime, UTC, timedelta` is syntactically correct for Python 3.12+ (`UTC` was added to the datetime module in Python 3.11). The usage of `UTC` in the code is also correct (e.g., `datetime.now(UTC)` on lines 50, 67, 90). If mypy is reporting a failure in this file, the issue is likely not related to this import statement. Please share the specific mypy error message from the CI pipeline to investigate further.
backend/tests/integration/k8s/test_k8s_worker_create_pod.py (2)
36-43: No action needed. The `KubernetesWorker` constructor already correctly accepts `Database` (which is `AsyncDatabase[MongoDocument]` from pymongo's async API). The type is properly defined and matches the test fixture being passed.
Likely an incorrect or invalid review comment.
29-29: No action required. The test file correctly retrieves the database from the dependency injection container. All integration tests consistently use this pattern (`scope.get(AsyncIOMotorDatabase)`), including the existing `db` fixture in conftest.py, which confirms that the DI container properly handles both the type alias and generic type resolution. The retrieval is functional and consistent with the codebase.
backend/tests/integration/db/repositories/test_admin_settings_repository.py (1)
6-6: LGTM! Test categorization is correct. Reclassifying this as an integration test is appropriate since it interacts with a real database instance via the `db` fixture.
backend/tests/integration/db/repositories/test_saved_script_repository.py (1)
6-6: LGTM! Correct test categorization.The integration marker properly reflects that these tests require database connectivity.
backend/tests/integration/events/test_event_store_e2e.py (1)
4-4: LGTM! Clean migration to PyMongo's async API.The import alias maintains backward compatibility while migrating to PyMongo's asynchronous database type.
backend/tests/integration/services/sse/test_redis_bus.py (2)
7-7: LGTM! Appropriate test categorization.Integration marker correctly reflects the Redis-based testing scope.
80-80: LGTM! Cleanup of redundant timeout arguments. Removing the explicit `timeout` parameter is fine since `_FakePubSub.get_message()` defaults to `timeout=0.5` on line 33, maintaining the same behavior.
Also applies to: 86-86, 114-114
backend/workers/run_saga_orchestrator.py (1)
20-20: LGTM! Consistent migration to PyMongo's async API. The import and instantiation of `AsyncMongoClient` align with the repository-wide migration from Motor to PyMongo's asynchronous API.
Also applies to: 30-30
backend/tests/integration/db/repositories/test_replay_repository.py (1)
11-11: LGTM! Correct test categorization.Reclassifying as integration is appropriate for tests that interact with the database and event collections.
backend/app/services/k8s_worker/worker.py (1)
12-12: LGTM! Consistent PyMongo async client migration. The import and instantiation properly migrate to PyMongo's `AsyncMongoClient`.
Also applies to: 524-524
backend/workers/dlq_processor.py (2)
11-11: LGTM! Proper PyMongo async client migration. The import and instantiation correctly use PyMongo's `AsyncMongoClient`.
Also applies to: 103-107
136-136: Verify async close() handling in AsyncExitStack. Similar to `worker.py` and the critical issue in `run_saga_orchestrator.py`, using `stack.callback()` for `db_client.close()` may be incorrect if `close()` is async and returns a coroutine. Consider using `stack.push_async_callback()` instead if the verification confirms `close()` is async.
backend/workers/run_event_replay.py (2)
15-15: LGTM! Migration to PyMongo async API.The import change from Motor to PyMongo's asynchronous API is correct.
39-39: LGTM! Client instantiation is correct. The `AsyncMongoClient` instantiation preserves the correct connection parameters.
backend/tests/integration/db/repositories/test_user_settings_repository.py (1)
9-9: LGTM! Correct test marker. The pytest marker correctly reflects that this is an integration test, aligning with the file's location in `tests/integration/`.
backend/tests/integration/services/admin/test_admin_user_service.py (1)
4-4: LGTM! Clean migration with aliasing. The import change migrates to PyMongo's `AsyncDatabase` while preserving the `AsyncIOMotorDatabase` symbol for backward compatibility in tests.
backend/tests/fixtures/real_services.py (1)
12-13: LGTM! Clean migration with aliasing. The import changes migrate to PyMongo's asynchronous API while preserving Motor-compatible symbol names through aliasing.
backend/tests/integration/events/test_event_store_consumer_flush_e2e.py (1)
backend/tests/integration/events/test_event_store_consumer_flush_e2e.py (1)
5-5: LGTM! Clean migration with aliasing.The import change migrates to PyMongo's
AsyncDatabasewhile preserving theAsyncIOMotorDatabasesymbol for backward compatibility.backend/scripts/seed_users.py (3)
22-23: LGTM! Migration to PyMongo async API.The import changes correctly migrate to PyMongo's asynchronous API.
29-29: LGTM! Type signature updated correctly.The function signature correctly uses PyMongo's
AsyncDatabasetype.
77-77: LGTM! Client instantiation is correct.The
AsyncMongoClientinstantiation is correct.backend/app/core/database_context.py (5)
7-11: LGTM! Migration to PyMongo async API.The import changes correctly migrate to PyMongo's asynchronous API modules.
17-23: LGTM! Type aliases updated correctly.The type aliases are correctly updated to use PyMongo's asynchronous types while maintaining clear naming.
107-118: LGTM! Client initialization is correct.The comment accurately reflects PyMongo's automatic event loop usage, and the client instantiation preserves all necessary connection parameters.
120-127: LGTM! Error handling correctly awaits close().The error path correctly awaits
client.close()since PyMongo's close method is a coroutine.
132-137: LGTM! Disconnect correctly awaits close().The disconnect method correctly awaits
self._client.close().backend/app/services/coordinator/coordinator.py (2)
9-9: LGTM! Migration to PyMongo async API.The import change correctly migrates to PyMongo's asynchronous
AsyncMongoClient.
503-503: LGTM! Client instantiation is correct.The
AsyncMongoClientinstantiation preserves the correct connection parameters.backend/tests/integration/db/repositories/test_event_repository.py (1)
9-9: LGTM!The marker reclassification from unit to integration aligns with the test reorganization objectives.
backend/tests/integration/db/repositories/test_dlq_repository.py (1)
9-9: LGTM!The marker reclassification from unit to integration is consistent with the broader test suite reorganization.
backend/tests/integration/db/repositories/test_sse_repository.py (1)
6-6: LGTM!The marker change appropriately reclassifies this test as integration-level.
backend/tests/integration/app/test_main_app.py (1)
15-15: LGTM!The marker update correctly categorizes this test as integration-level, consistent with the test suite reorganization.
backend/tests/integration/result_processor/test_result_processor.py (1)
6-6: Remove or verify with actual MyPy output.The
AsyncIOMotorDatabasealias is consistently used across 12 test files (conftest.py, multiple integration test files, and fixtures). Without concrete MyPy error output showing that this alias causes type checking failures, the claim that it "correlates with reported MyPy failures" cannot be verified. If MyPy does report errors specifically on this alias, provide those error messages; otherwise, this pattern appears to be intentional and working.backend/tests/integration/core/test_database_context.py (1)
4-4: The import aliasAsyncDatabase as AsyncIOMotorDatabaseis correct and does not cause type-checking issues. The mypy configuration inpyproject.tomlexplicitly excludes all test files (exclude = '(^tests/|/tests/')), so test files are not type-checked by the pipeline. The alias is straightforward and used consistently across the test suite without complications.Likely an incorrect or invalid review comment.
backend/tests/integration/events/test_event_store_consumer.py (1)
6-6: The import alias is not causing any MyPy failures. MyPy's configuration explicitly excludes all test files (exclude = '(^tests/|/tests/)'), so this file is not analyzed by MyPy at all. The importfrom pymongo.asynchronous.database import AsyncDatabase as AsyncIOMotorDatabaseis valid, and the alias is used correctly and consistently throughout the file. No changes are needed.Likely an incorrect or invalid review comment.
backend/tests/integration/db/repositories/test_admin_events_repository.py (1)
12-12: LGTM!The test marker correctly reflects that this file is an integration test (located under
tests/integration/). This aligns with the PR's goal of properly categorizing tests.backend/tests/conftest.py (2)
14-14: LGTM!The import migration from motor to pymongo's async API is consistent with the broader codebase migration.
189-191: Good documentation of the cleanup migration.Clear comments explaining that autouse cleanup moved to
tests/integration/conftest.pyand unit tests usetests/unit/conftest.py. This helps future maintainers understand the test architecture..github/workflows/backend-ci.yml (3)
134-144: Hardcoded credentials are acceptable for CI test infrastructure.The static analysis tool flagged the MongoDB credentials (CKV_SECRET_4). These are local test infrastructure credentials used only in CI, which is a common and acceptable pattern. No action needed.
243-251: k3s setup looks solid.The Kubernetes setup with timeout, namespace creation using dry-run with apply (idempotent), and proper KUBECONFIG handling is well-designed for CI reliability.
284-296: Good failure diagnostics collection. Collecting docker-compose logs, k8s events, and pod descriptions on failure provides excellent debugging context. The `|| true` prevents log collection failures from masking the actual test failure.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/tests/fixtures/real_services.py (2)
101-101: Missing await on AsyncMongoClient.close(). Line 101 calls `self.mongo_client.close()` without `await`. Since the type changed from Motor's `AsyncIOMotorClient` to PyMongo's `AsyncMongoClient` (line 26, 34), the `close()` method now returns a coroutine that must be awaited.
🔎 Proposed fix
```diff
-        self.mongo_client.close()
+        await self.mongo_client.close()
```
168-169: Fix database_context.py to follow PyMongo async API conventions. The patterns differ because `backend/app/core/database_context.py` (lines 170-171) uses an incorrect async pattern:
```python
async with self.client.start_session() as session:
    async with await session.start_transaction():
```
This violates PyMongo async API requirements: `start_session()` must be awaited, but `start_transaction()` is not awaitable. The correct pattern (already used in `backend/tests/fixtures/real_services.py` lines 168-169) is:
```python
async with await client.start_session() as session:
    async with session.start_transaction():
```
Update `database_context.py` to match this correct pattern.
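For reference, a minimal self-contained sketch of the corrected pattern, assuming a locally reachable MongoDB and made-up database/collection names (none of these identifiers come from this repository); transactions additionally require a replica set or mongos:
```python
from pymongo import AsyncMongoClient


async def transfer_example() -> None:
    # Assumption: MongoDB runs as a replica set (required for transactions).
    client: AsyncMongoClient = AsyncMongoClient("mongodb://localhost:27017")
    try:
        # start_session() is a coroutine in PyMongo's async API, so it is awaited;
        # start_transaction() is a plain (non-awaitable) context manager.
        async with await client.start_session() as session:
            async with session.start_transaction():
                db = client["example_db"]
                await db["orders"].insert_one({"sku": "abc", "qty": 1}, session=session)
                await db["audit"].insert_one({"action": "order_created"}, session=session)
                # Leaving the block normally commits; an exception aborts the transaction.
    finally:
        await client.close()
```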
♻️ Duplicate comments (1)
backend/tests/integration/k8s/test_k8s_worker_create_pod.py (1)
12-12: LGTM! Test fixture correctly updated to Database type. The migration from `AsyncIOMotorDatabase` to `Database` aligns with the DI container's type registrations and the broader codebase migration to PyMongo async types.
Also applies to: 29-29
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (21)
backend/app/core/database_context.pybackend/app/db/repositories/admin/admin_events_repository.pybackend/app/db/repositories/dlq_repository.pybackend/app/db/repositories/event_repository.pybackend/app/db/repositories/execution_repository.pybackend/app/db/repositories/notification_repository.pybackend/app/db/repositories/saga_repository.pybackend/app/dlq/manager.pybackend/app/events/event_store.pybackend/tests/conftest.pybackend/tests/fixtures/real_services.pybackend/tests/integration/conftest.pybackend/tests/integration/core/test_container.pybackend/tests/integration/core/test_database_context.pybackend/tests/integration/events/test_event_store.pybackend/tests/integration/events/test_event_store_consumer.pybackend/tests/integration/events/test_event_store_consumer_flush_e2e.pybackend/tests/integration/events/test_event_store_e2e.pybackend/tests/integration/k8s/test_k8s_worker_create_pod.pybackend/tests/integration/result_processor/test_result_processor.pybackend/tests/integration/services/admin/test_admin_user_service.py
✅ Files skipped from review due to trivial changes (1)
- backend/app/db/repositories/notification_repository.py
🚧 Files skipped from review as they are similar to previous changes (4)
- backend/tests/integration/events/test_event_store.py
- backend/tests/integration/conftest.py
- backend/tests/integration/events/test_event_store_consumer_flush_e2e.py
- backend/tests/integration/core/test_container.py
🧰 Additional context used
🧬 Code graph analysis (9)
backend/tests/integration/services/admin/test_admin_user_service.py (1)
backend/tests/conftest.py (3)
app(130-138)db(178-180)scope(172-174)
backend/app/db/repositories/admin/admin_events_repository.py (2)
backend/app/domain/events/query_builders.py (3)
EventStatsAggregation(59-125)build_hourly_events_pipeline(94-101)build_top_users_pipeline(104-110)backend/app/domain/events/event_models.py (1)
HourlyEventCount(168-170)
backend/tests/integration/events/test_event_store_consumer.py (1)
backend/tests/conftest.py (3)
app(130-138)db(178-180)scope(172-174)
backend/tests/conftest.py (2)
backend/tests/unit/conftest.py (2)
app(34-35)db(19-20)backend/app/core/database_context.py (3)
database(67-69)database(146-149)database(201-202)
backend/tests/integration/events/test_event_store_e2e.py (1)
backend/tests/conftest.py (3)
app(130-138)db(178-180)scope(172-174)
backend/tests/integration/result_processor/test_result_processor.py (1)
backend/tests/conftest.py (3)
app(130-138)db(178-180)scope(172-174)
backend/tests/fixtures/real_services.py (2)
backend/tests/conftest.py (4)
app(130-138)redis_client(184-186)db(178-180)client(150-161)backend/app/core/database_context.py (6)
db_name(72-74)db_name(152-153)db_name(205-206)client(62-64)client(140-143)client(197-198)
backend/app/db/repositories/execution_repository.py (2)
backend/app/domain/execution/models.py (2)
DomainExecution(13-26)ResourceUsageDomain(43-64)backend/app/services/coordinator/queue_manager.py (1)
execution_id(30-31)
backend/tests/integration/core/test_database_context.py (1)
backend/tests/conftest.py (3)
app(130-138)db(178-180)scope(172-174)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E Tests
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (27)
backend/app/db/repositories/execution_repository.py (2)
133-133: LGTM: Variable extraction improves readability.Extracting
resource_usage_datato a local variable is a good refactoring that improves code clarity and matches the pattern used inget_execution(line 58).
136-136: Defensivestr()coercion is acceptable.The explicit
str()coercion ensures type consistency forexecution_id, which is appropriate given the Motor → PyMongo migration context. While the fallback""is already a string, this defensive approach guards against unexpected types from the database.backend/tests/integration/result_processor/test_result_processor.py (1)
6-6: LGTM! Clean migration to the Database abstraction.The import and type annotation changes correctly align with the project-wide migration from Motor's
AsyncIOMotorDatabaseto the internalDatabaseabstraction. The DI resolution is updated appropriately.Also applies to: 33-33
backend/app/dlq/manager.py (1)
337-339: LGTM! Correct PyMongo async aggregation pattern. The change correctly awaits the `aggregate()` call before iterating over the cursor. This aligns with PyMongo's async API, where `aggregate()` returns a coroutine that resolves to a cursor, unlike Motor's direct cursor return.
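As an illustration of the pattern being approved here, a small hedged sketch (the collection name and pipeline are invented for the example and are not taken from `manager.py`):
```python
from pymongo import AsyncMongoClient


async def count_by_status() -> dict[str, int]:
    # Assumption: local MongoDB; "example_db"/"messages" are placeholders.
    client: AsyncMongoClient = AsyncMongoClient("mongodb://localhost:27017")
    try:
        collection = client["example_db"]["messages"]
        pipeline = [{"$group": {"_id": "$status", "count": {"$sum": 1}}}]

        # aggregate() is a coroutine that resolves to a cursor, so await it first,
        # then iterate the cursor asynchronously.
        cursor = await collection.aggregate(pipeline)
        counts = {doc["_id"]: doc["count"] async for doc in cursor}

        # Two-step variant used elsewhere in this PR:
        # cursor = await collection.aggregate(pipeline)
        # docs = await cursor.to_list(length=None)
        return counts
    finally:
        await client.close()
```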
4-4: LGTM! Type migration correctly applied.The import and DI resolution changes properly migrate to the
Databaseabstraction. The test logic remains intact.Also applies to: 15-15
backend/tests/integration/core/test_database_context.py (1)
4-4: LGTM! Database abstraction correctly integrated.The test properly validates the
Databasetype resolution from the DI container, aligning with the project-wide migration.Also applies to: 13-13
backend/tests/integration/events/test_event_store_consumer.py (1)
6-6: LGTM! Type migration consistent with the broader changes.The import and DI resolution correctly migrate to the
Databaseabstraction.Also applies to: 28-28
backend/app/events/event_store.py (1)
304-304: LGTM! Correct async aggregation pattern.The change to await
aggregate()before iterating aligns with PyMongo's async API whereaggregate()returns a coroutine that resolves to a cursor.backend/app/db/repositories/saga_repository.py (3)
96-97: LGTM! Consistent async aggregation pattern.The change correctly awaits
aggregate()before iterating, matching the PyMongo async API pattern applied throughout this PR.
126-127: LGTM! Async aggregation pattern correctly applied.
137-139: LGTM! Async aggregation pattern correctly applied.backend/app/db/repositories/dlq_repository.py (5)
38-39: LGTM! Async aggregation pattern correctly applied.The change awaits
aggregate()before iterating over the cursor, following PyMongo's async API pattern.
61-64: LGTM! Async aggregation pattern correctly applied.
74-76: LGTM! Async aggregation pattern correctly applied.
97-99: LGTM! Two-step async aggregation pattern correctly applied.The change splits the operation into awaiting
aggregate()to get the cursor, then awaitingto_list()to convert results. This is the correct pattern for PyMongo's async API.
152-167: LGTM! Async aggregation pattern correctly applied.The change awaits
aggregate()before iterating, consistent with the PyMongo async migration throughout this PR.backend/app/db/repositories/event_repository.py (3)
237-238: LGTM! Aggregation cursor handling correctly updated for PyMongo async.The explicit cursor handling pattern (
cursor = await self._collection.aggregate(pipeline)followed byawait cursor.to_list(length=1)) correctly adapts to PyMongo's async API whereaggregate()must be awaited to obtain a cursor.Also applies to: 300-301
326-328: LGTM! Change stream usage correctly updated.The pattern
async with await self._collection.watch(...)correctly reflects PyMongo's async API wherewatch()returns an awaitable that yields an async context manager.
445-445: LGTM! Async iteration over aggregation cursors correctly implemented.The pattern
async for doc in await self._collection.aggregate(pipeline)properly awaits the aggregation to get the cursor before iterating.Also applies to: 458-458
backend/app/db/repositories/admin/admin_events_repository.py (2)
123-124: LGTM! Aggregation patterns correctly standardized.The explicit cursor handling (
cursor = await self.events_collection.aggregate(pipeline)followed byawait cursor.to_list(...)) correctly implements PyMongo's async API and aligns with patterns used in other repository files.Also applies to: 144-145, 180-181
150-162: LGTM! Async iteration and defensive filtering.The async iteration over aggregation cursors is correctly implemented, and the filter on line 161 to exclude
Noneuser_ids is good defensive coding.backend/tests/integration/events/test_event_store_e2e.py (1)
4-4: LGTM! Test correctly migrated to Database abstraction.The import and type annotation updates align with the Motor → PyMongo migration, using the new
Databasetype alias fromapp.core.database_context.Also applies to: 18-18
backend/tests/conftest.py (2)
14-14: LGTM! Test fixture correctly migrated to Database abstraction.The
dbfixture now returnsDatabaseinstead ofAsyncIOMotorDatabase, aligning with the DI container's provider and the PyMongo migration.Also applies to: 178-179
189-191: LGTM! Cleanup responsibility clarified.Moving the autouse cleanup to
tests/integration/conftest.pyappropriately restricts database cleanup to integration tests only, avoiding unnecessary DB requirements for unit tests.backend/app/core/database_context.py (2)
7-23: LGTM! Type aliases correctly migrated to PyMongo async types.The migration from Motor types to PyMongo async equivalents is comprehensive and consistent:
DBClient:AsyncMongoClient[MongoDocument]Database:AsyncDatabase[MongoDocument]Collection:AsyncCollection[MongoDocument]Cursor:AsyncCursor[MongoDocument]DBSession:AsyncClientSession
107-108: LGTM! Connection lifecycle correctly updated for PyMongo async.The removal of explicit
io_loopbinding (PyMongo async uses the current event loop implicitly) and the addition ofawaitforclient.close()calls correctly adapt to PyMongo's async API.Also applies to: 126-126, 135-135
backend/tests/fixtures/real_services.py (1)
15-15: LGTM! Test fixtures correctly migrated to PyMongo async types.The migration from Motor to PyMongo's
AsyncMongoClientand the use ofDatabase/DBClienttype aliases align with the broader codebase migration.Also applies to: 17-17, 26-26, 32-42
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/tests/fixtures/real_services.py (1)
337-340: Resource leak: Redis client created but never closed in retry lambda. Similar to the MongoDB lambda above, this creates a new `redis.Redis` instance on each retry without closing it, causing a resource leak.
🔎 Proposed fix
Create a helper function that properly closes the Redis client:
```diff
+async def _check_redis_async(host: str, port: int):
+    """Helper to check Redis connectivity."""
+    client = redis.Redis(host=host, port=port, socket_connect_timeout=5)
+    try:
+        await client.execute_command("PING")
+    finally:
+        await client.aclose()
+
 # Check Redis
 try:
     r = redis.Redis(host="localhost", port=6379, socket_connect_timeout=5)
     await r.execute_command("PING")
     await r.aclose()
 except Exception:
     print("Starting Redis...")
     subprocess.run(["docker-compose", "up", "-d", "redis"], check=False)
     await wait_for_service(
-        lambda: redis.Redis(host="localhost", port=6379).execute_command("PING"),
+        _check_redis_async("localhost", 6379),
         service_name="Redis"
     )
```
♻️ Duplicate comments (1)
backend/tests/fixtures/real_services.py (1)
324-327: Resource leak: AsyncMongoClient created but never closed in retry lambda. The lambda creates a new `AsyncMongoClient` instance on each retry but never closes it. While the wait loop is in a session-scoped fixture (limited impact), it's still a resource leak.
🔎 Proposed fix
Create a helper function that properly closes the client:
```diff
+async def _check_mongo_async(url: str):
+    """Helper to check MongoDB connectivity."""
+    client = AsyncMongoClient(url, serverSelectionTimeoutMS=5000)
+    try:
+        await client.admin.command("ping")
+    finally:
+        await client.close()
+
 # Check MongoDB
 try:
     client = AsyncMongoClient(
         "mongodb://root:rootpassword@localhost:27017",
         serverSelectionTimeoutMS=5000
     )
     await client.admin.command("ping")
     await client.close()
 except Exception:
     print("Starting MongoDB...")
     subprocess.run(["docker-compose", "up", "-d", "mongo"], check=False)
     await wait_for_service(
-        lambda: AsyncMongoClient("mongodb://root:rootpassword@localhost:27017").admin.command("ping"),
+        _check_mongo_async("mongodb://root:rootpassword@localhost:27017"),
         service_name="MongoDB"
     )
```
🧹 Nitpick comments (2)
.github/workflows/backend-ci.yml (2)
73-112: Consider extracting Docker caching logic to reduce duplication.This Docker image caching block (lines 73-112) is duplicated in the e2e job (lines 185-224). Since both jobs use identical caching logic, consider extracting it into a composite action or reusable workflow to improve maintainability.
💡 Example: composite action structure
Create
.github/actions/cache-docker-images/action.yml:name: Cache Docker Images description: Cache and load Docker images for CI runs: using: composite steps: - name: Cache Docker images uses: actions/cache@v5 id: docker-cache with: path: /tmp/docker-cache key: docker-${{ runner.os }}-${{ env.MONGO_IMAGE }}-${{ env.REDIS_IMAGE }}-${{ env.KAFKA_IMAGE }}-${{ env.SCHEMA_REGISTRY_IMAGE }} - name: Load cached Docker images if: steps.docker-cache.outputs.cache-hit == 'true' shell: bash run: | echo "Loading cached images..." for f in /tmp/docker-cache/*.tar.zst; do zstd -d -c "$f" | docker load & done wait docker images - name: Pull and save Docker images if: steps.docker-cache.outputs.cache-hit != 'true' shell: bash run: | mkdir -p /tmp/docker-cache echo "Pulling images in parallel..." docker pull $MONGO_IMAGE & docker pull $REDIS_IMAGE & docker pull $KAFKA_IMAGE & docker pull $SCHEMA_REGISTRY_IMAGE & wait echo "Saving images with zstd compression..." docker save $MONGO_IMAGE | zstd -T0 -3 > /tmp/docker-cache/mongo.tar.zst & docker save $REDIS_IMAGE | zstd -T0 -3 > /tmp/docker-cache/redis.tar.zst & docker save $KAFKA_IMAGE | zstd -T0 -3 > /tmp/docker-cache/kafka.tar.zst & docker save $SCHEMA_REGISTRY_IMAGE | zstd -T0 -3 > /tmp/docker-cache/schema-registry.tar.zst & wait echo "Cache size:" du -sh /tmp/docker-cache/Then use it in both jobs:
- uses: ./.github/actions/cache-docker-images
243-251: Consider pinning the k3s version for consistency.The k3s installation uses the latest version from
get.k3s.io, which could lead to unexpected behavior changes or flakiness if k3s releases a breaking update. Consider pinning to a specific version for consistency with the Docker image version pinning strategy.🔎 Example: pin k3s version
-curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh - +curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.28.5+k3s1" INSTALL_K3S_EXEC="--disable=traefik" sh -To find the latest stable version:
curl -s https://api.github.com/repos/k3s-io/k3s/releases/latest | jq -r .tag_name
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
.github/workflows/backend-ci.ymlbackend/scripts/seed_users.pybackend/tests/fixtures/real_services.pybackend/workers/run_saga_orchestrator.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/scripts/seed_users.py
🧰 Additional context used
🧬 Code graph analysis (2)
backend/workers/run_saga_orchestrator.py (1)
backend/app/services/event_bus.py (1)
close(349-354)
backend/tests/fixtures/real_services.py (2)
backend/tests/conftest.py (4)
app(130-138)redis_client(184-186)db(178-180)client(150-161)backend/app/core/database_context.py (6)
db_name(72-74)db_name(152-153)db_name(205-206)client(62-64)client(140-143)client(197-198)
🪛 Checkov (3.2.334)
.github/workflows/backend-ci.yml
[medium] 138-139: Basic Auth Credentials
(CKV_SECRET_4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Scan Backend
- GitHub Check: Scan Frontend
- GitHub Check: E2E Tests
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (14)
backend/workers/run_saga_orchestrator.py (2)
114-114: Critical: Properly await async close. Adding `await` to `db_client.close()` is essential for PyMongo's async client. Without it, the database connection may not close properly, leading to resource leaks. This change correctly aligns with async cleanup patterns used elsewhere in the finally block.
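A short sketch of both cleanup styles discussed in this thread (explicit `await ... close()` in a `finally` block, and `AsyncExitStack.push_async_callback`), assuming a locally reachable MongoDB; this is illustrative, not the worker's actual shutdown code:
```python
from contextlib import AsyncExitStack

from pymongo import AsyncMongoClient


async def run_once() -> None:
    # Style 1: close explicitly in a finally block.
    db_client: AsyncMongoClient = AsyncMongoClient("mongodb://localhost:27017")
    try:
        await db_client.admin.command("ping")
    finally:
        # close() is a coroutine on the async client, so it must be awaited.
        await db_client.close()

    # Style 2: register the async close with an AsyncExitStack.
    async with AsyncExitStack() as stack:
        client: AsyncMongoClient = AsyncMongoClient("mongodb://localhost:27017")
        # push_async_callback (not callback) guarantees the coroutine is awaited on exit.
        stack.push_async_callback(client.close)
        await client.admin.command("ping")
```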
20-20: The migration from Motor's `AsyncIOMotorClient` to PyMongo's native `AsyncMongoClient` is properly implemented. PyMongo's `AsyncMongoClient` supports both `tz_aware` and `serverSelectionTimeoutMS` constructor arguments, and the usage patterns in this file (subscript access for database selection, `admin.command()` calls, and `close()` cleanup with await) are all compatible.
backend/tests/fixtures/real_services.py (7)
15-17: LGTM! Import changes align with PyMongo migration.The imports correctly reference the new PyMongo async client and the Database/DBClient abstractions from the updated database context.
26-26: Type annotation correctly updated to DBClient.The change from
AsyncIOMotorClienttoDBClientaligns with the new database abstractions.
32-42: Method correctly migrated to PyMongo async API.The signature returns
Database, usesAsyncMongoClientwith appropriate timeouts and connection limits, and properly awaits the ping command.
96-102: Cleanup correctly handles async client closure.The addition of
await self.mongo_client.close()on line 101 properly releases MongoDB client resources.
133-139: Fixture return type correctly updated.The
real_mongodbfixture now returnsDatabase, consistent with the updatedconnect_mongodbmethod signature.
161-175: Transaction context manager correctly uses PyMongo async session API. The parameter type is updated to `Database`, and the pattern `async with await client.start_session()` correctly handles PyMongo's async session lifecycle.
194-250: TestDataFactory correctly migrated to Database abstraction.All methods now accept
Databaseparameters and use the correct PyMongo async collection access pattern (e.g.,db.users.insert_one)..github/workflows/backend-ci.yml (5)
18-23: LGTM! Image version pinning ensures cache consistency.Pinning the Docker image versions at the workflow level ensures that the cache keys remain stable across runs, which directly supports the CI speed-up objectives.
45-63: PR objective achieved: COVERAGE_CORE set to sysmon.The unit test configuration correctly implements the PR objective by setting
COVERAGE_CORE: sysmonon line 48, which should improve coverage performance. The test isolation (no infrastructure dependencies) and 5-minute timeout are appropriate for unit tests.
126-129: PR objective achieved: curl polling replaced with--waitflag.Line 128 correctly uses
docker compose up -d --wait --wait-timeout 120to replace manual curl polling, which should improve CI reliability and speed. The 120-second timeout is reasonable for the infrastructure services (MongoDB, Redis, Kafka, Schema Registry).
131-150: LGTM! Integration test configuration is correct.The test execution correctly:
- Excludes k8s tests with
--ignore=tests/integration/k8sfor this non-k8s job- Sets
COVERAGE_CORE: sysmon(PR objective)- Provides appropriate environment variables for all services
- Uses a reasonable 10-minute timeout
253-282: LGTM! E2E test execution properly configured.The E2E test execution correctly:
- Targets only k8s tests (
tests/integration/k8s)- Sets
COVERAGE_CORE: sysmonconsistently with other jobs (PR objective)- Provides k8s-specific environment variables (KUBECONFIG, K8S_NAMESPACE)
- Uses distinct coverage flag
backend-e2efor proper Codecov segmentation
Actionable comments posted: 1
♻️ Duplicate comments (1)
backend/tests/fixtures/real_services.py (1)
314-329: Past issue resolved: async helper properly implemented. The `check_mongo` helper correctly addresses the issue flagged in the previous review. The function is now properly async, awaits the ping command, and closes the client in a finally block.
Optional: Add connectTimeoutMS for consistency
For consistency with `connect_mongodb` (line 37), consider adding `connectTimeoutMS=5000`:
```diff
 async def check_mongo() -> None:
     client = AsyncMongoClient(
         "mongodb://root:rootpassword@localhost:27017",
-        serverSelectionTimeoutMS=5000
+        serverSelectionTimeoutMS=5000,
+        connectTimeoutMS=5000
     )
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
backend/app/db/repositories/execution_repository.pybackend/tests/conftest.pybackend/tests/fixtures/real_services.pybackend/tests/integration/db/schema/test_schema_manager.pybackend/tests/integration/services/saved_script/test_saved_script_service.py
💤 Files with no reviewable changes (1)
- backend/tests/integration/services/saved_script/test_saved_script_service.py
🧰 Additional context used
🧬 Code graph analysis (3)
backend/tests/conftest.py (2)
backend/tests/unit/conftest.py (2)
app(34-35)db(19-20)backend/app/core/database_context.py (3)
database(67-69)database(146-149)database(201-202)
backend/app/db/repositories/execution_repository.py (1)
backend/app/domain/execution/models.py (2)
DomainExecution(13-26)ResourceUsageDomain(43-64)
backend/tests/integration/db/schema/test_schema_manager.py (1)
backend/tests/conftest.py (1)
db(178-180)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
- GitHub Check: E2E Tests
🔇 Additional comments (12)
backend/tests/integration/db/schema/test_schema_manager.py (1)
31-32: LGTM! Explicit cursor handling aligns with PyMongo async migration. The two-step pattern (await `list_indexes()` to obtain the cursor, then await `cursor.to_list()`) correctly follows PyMongo async conventions and is consistent with the repo-wide migration from Motor to PyMongo async types.
backend/app/db/repositories/execution_repository.py (1)
133-133: LGTM! Consistent resource_usage handling.The local variable extraction at line 133 improves readability and eliminates duplicate
doc.get("resource_usage")calls. Lines 146-147 now match the pattern at line 70 inget_execution, creating consistency across both methods.Also applies to: 146-147
backend/tests/fixtures/real_services.py (5)
32-42: LGTM: MongoDB connection properly implemented.The async connection logic with timeout settings and ping verification is correct. The
maxPoolSize=10is reasonable for integration tests, though a lower value like 5 could reduce resource usage if multiple test processes run concurrently.
101-101: LGTM: Proper async cleanup.Correctly awaits the async
close()method for PyMongo's async client.
134-134: LGTM: Type annotations consistently updated.All function signatures and return types have been correctly updated to use
Databasefrom the new PyMongo async client instead of Motor'sAsyncIOMotorDatabase.Also applies to: 162-162, 195-195, 215-215, 234-234
332-344: LGTM: Redis readiness check properly implemented.The
check_redishelper correctly uses the async Redis client fromredis.asyncio, properly awaits the ping command, and closes the connection in a finally block. This follows the same pattern ascheck_mongo.
15-17: The PyMongo async migration is correctly implemented. The type aliasesDBClientandDatabaseare properly defined inapp.core.database_contextasAsyncMongoClient[MongoDocument]andAsyncDatabase[MongoDocument]respectively, and the fixture's usage is consistent with the codebase patterns. No compatibility issues exist.backend/tests/conftest.py (5)
14-14: LGTM: Database abstraction import.The migration from motor's
AsyncIOMotorDatabaseto the customDatabaseabstraction is appropriate for the test infrastructure refactoring.
141-145: LGTM: Consistent session scope for app_container.The session-scoped lifecycle correctly matches the app fixture dependency.
129-138: Session-scoped app fixture is appropriate and safe.The fixture eliminates repeated app creation and resolves Pydantic schema issues. Tests do not mutate app state (dependency overrides, middleware, or router configuration), so the session scope introduces no test isolation risk. The container is properly cleaned up on teardown.
189-191: Good separation: cleanup moved to integration tests.Removing the autouse cleanup from the base conftest.py is appropriate since unit tests should not access DB/Redis (enforced by unit/conftest.py raising RuntimeError). The integration/conftest.py properly implements the per-test cleanup fixture with autouse=True and function scope, handling both pre-test and post-test cleanup of MongoDB collections and Redis.
178-180: Database type migration to PyMongo verified; test compatibility confirmed.The
dbfixture now resolves PyMongo'sAsyncDatabase[MongoDocument]type (aliased asDatabase) instead of Motor'sAsyncIOMotorDatabase. Integration tests already use only standard PyMongo async methods (find().to_list(),list_collection_names(),drop_collection()), which are fully compatible with the new type.
Actionable comments posted: 1
🧹 Nitpick comments (1)
backend/tests/conftest.py (1)
225-259: Consider the performance impact of function-scoped user fixtures.These fixtures changed from
scope="session"toscope="function", meaning each test that uses them will perform user registration and login HTTP requests. Given the comment on line 228 ("DB is wiped after each test"), this re-creation is necessary for correctness.However, if many tests depend on these fixtures, consider:
- Grouping tests that share user state into test classes with class-scoped users
- Using session-scoped users for read-only tests that don't modify user state
- Implementing a user fixture cache with lazy DB cleanup only when needed
The current approach prioritizes test isolation over speed, which is appropriate for "Chill" mode if test runtime is acceptable.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
backend/tests/conftest.pybackend/tests/fixtures/real_services.pybackend/tests/integration/conftest.pybackend/tests/integration/test_user_settings_routes.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/tests/integration/conftest.py
🧰 Additional context used
🧬 Code graph analysis (2)
backend/tests/conftest.py (2)
backend/tests/unit/conftest.py (4)
app(34-35)db(19-20)redis_client(24-25)client(29-30)backend/app/core/database_context.py (6)
database(67-69)database(146-149)database(201-202)client(62-64)client(140-143)client(197-198)
backend/tests/fixtures/real_services.py (1)
backend/app/core/database_context.py (6)
db_name(72-74)db_name(152-153)db_name(205-206)client(62-64)client(140-143)client(197-198)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E Tests
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (14)
backend/tests/fixtures/real_services.py (6)
15-17: LGTM: Migration to PyMongo async types.The migration from Motor to PyMongo's AsyncMongoClient and the use of Database/DBClient types from the database context aligns with the broader refactoring mentioned in the PR summary and relevant code snippets.
Also applies to: 26-26
32-42: LGTM: MongoDB connection setup is correct.The connection parameters, ping verification, and database access pattern are appropriate. The return type aligns with the new Database abstraction.
96-112: LGTM: Proper async cleanup.Line 101 correctly awaits
mongo_client.close(), which is required for AsyncMongoClient.
161-175: LGTM: Transaction management correctly migrated to Database type.The
db.clientaccess is valid (as confirmed by the database_context.py snippets), and the transaction logic properly handles commits and rollbacks.
195-195: LGTM: TestDataFactory methods correctly updated to Database type.The signature changes from
AsyncIOMotorDatabasetoDatabasealign with the migration, and the database access patterns remain correct.Also applies to: 215-215, 234-234
314-345: LGTM: Previous async operation issue resolved.The service readiness checks now use proper async functions (
check_mongoat lines 314-322 andcheck_redisat lines 332-337) that correctly await operations and close connections. This addresses the concern raised in the previous review about synchronous lambdas not awaiting operations.backend/tests/conftest.py (6)
129-134: Good: Session-scoped app fixture prevents schema crashes.The explicit
scope="session"combined withloop_scope="session"correctly ensures the FastAPI app is created once per test worker, avoiding Pydantic schema registration conflicts while using the session-level event loop.
141-145: LGTM: Container fixture scope matches app fixture.The session scope for
app_containeris consistent with theappfixture and appropriate for the Dishka DI container.
149-161: Good: Function-scoped client ensures test isolation.The
scope="function"ensures each test gets a fresh HTTP client with isolated cookies, which is essential for auth testing. Theloop_scope="session"correctly reuses the event loop for performance.
189-191: Cleanup fixture correctly relocated to integration/conftest.py.The autouse
_cleanupfixture exists intests/integration/conftest.pyand properly handles per-test DB and Redis cleanup, confirming the architecture described in the comment is accurate.
14-14: Type migration verified as complete.The Database type has been properly migrated from
AsyncIOMotorDatabaseto a type aliasDatabase = AsyncDatabase[MongoDocument]defined inapp.core.database_context. All test files consistently use the new type with the correct dependency injection pattern (await scope.get(Database)), and no remaining Motor-specific type references exist in the codebase. All usage patterns (collection access, query methods) are compatible with the new type.
177-180: TheDatabasetype is properly registered in the Dishka container viaDatabaseProvider.get_database()method, which providesAsyncDatabase[MongoDocument](aliased asDatabase). The fixture correctly callsscope.get(Database)and will receive the proper database instance. This change is valid.backend/tests/integration/test_user_settings_routes.py (2)
20-49: The original review misunderstands pytest-asyncio's `loop_scope` parameter. In pytest-asyncio 1.3.0, `loop_scope` controls which event loop instance is used, not the fixture scope. The `test_user` fixture only specifies `loop_scope="session"` without an explicit `scope` parameter, which means it defaults to `scope="function"`. This creates a fresh fixture instance for each test, not a session-wide singleton. Additionally, the integration conftest.py includes an autouse `_cleanup` fixture with `scope="function"` that clears all database collections and flushes Redis before and after each test. This prevents any state pollution between tests. The comment "Create a fresh user for each test" is accurate, and the CSRF token concerns are unfounded since a new token is fetched for each test.
Likely an incorrect or invalid review comment.
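A compact illustration of the distinction, as a hypothetical conftest snippet (fixture names are invented for the example):
```python
import pytest_asyncio


# Default fixture scope is "function": a fresh value is built for every test.
# loop_scope="session" only selects which event loop the fixture runs on.
@pytest_asyncio.fixture(loop_scope="session")
async def fresh_value() -> dict[str, str]:
    return {"kind": "per-test"}


# An explicit scope="session" is what would actually make the fixture a singleton.
@pytest_asyncio.fixture(scope="session", loop_scope="session")
async def shared_value() -> dict[str, str]:
    return {"kind": "per-session"}
```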
52-81: The fixture is function-scoped and works correctly with automatic database cleanup. The `loop_scope="session"` parameter only controls the async event loop scope, not the fixture lifecycle. Without an explicit `scope=` parameter, pytest defaults to `scope="function"`, meaning the fixture runs fresh for each test. The autouse `_cleanup` fixture (in `integration/conftest.py`) wipes the database before and after each test, ensuring user state is not carried over. The docstring "Create a second fresh user for isolation tests" is accurate, and no state pollution occurs.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/tests/fixtures/real_services.py (1)
308-308: Add `loop_scope="session"` to match `scope="session"`. The `ensure_services_running` fixture specifies `scope="session"` but lacks `loop_scope="session"`. This mismatch can cause the fixture to use a function-scoped event loop with session-scoped resources, potentially leading to connection problems or event loop closure issues.
🔎 Proposed fix
```diff
-@pytest_asyncio.fixture(scope="session")
+@pytest_asyncio.fixture(scope="session", loop_scope="session")
 async def ensure_services_running():
```
backend/tests/conftest.py (1)
24-24: Fix typo in comment.There's a trailing "y" character at the end of the comment.
🔎 Proposed fix
```diff
-# We import lazily inside the fixture after test env vars are set.y
+# We import lazily inside the fixture after test env vars are set.
```
♻️ Duplicate comments (2)
.github/workflows/backend-ci.yml (2)
81-89: Parallel Docker load operations may mask failures.This concern was previously raised - the bare
waitcommand only captures the exit status of the last background job. If any intermediatezstd | docker loadoperation fails, the step will still succeed.
91-112: Parallel Docker pull/save operations share the same error handling concern.As noted in the previous review, the
waitcommands at lines 101 and 108 may mask failures from parallel operations.
🧹 Nitpick comments (4)
.github/workflows/backend-ci.yml (1)
193-224: Consider extracting Docker caching into a reusable workflow or composite action.The Docker image caching logic (lines 186-224) is duplicated verbatim from the integration job (lines 73-112). For maintainability, consider extracting this into a reusable workflow or composite action.
This is a nice-to-have improvement and can be deferred.
backend/tests/conftest.py (3)
67-114: Consider consolidating environment setup logic.The environment setup appears in two places: module-level (lines 28-60) and in the
_test_envfixture (lines 67-114). While the duplication provides defensive guarantees, the different approaches (e.g., line 77-80 using direct assignment vs. module-level checking"MONGODB_URL" not in os.environ) could lead to confusion during maintenance.Consider documenting why both levels are needed or consolidating to a single authoritative setup point.
225-244: Fixture naming is misleading given function scope.The fixtures
shared_userandshared_adminhave names suggesting shared/session scope, but they default to function scope and recreate users via HTTP for each test. This ensures isolation after per-test DB cleanup but incurs the cost of registration + login HTTP calls on every test invocation.Consider:
- Renaming to
test_userandtest_adminto reflect their function scope- Documenting the performance trade-off in comments
129-139: Clarify the rationale for session-scoped app fixture.The
appfixture is session-scoped with a comment stating "to avoid Pydantic schema crashes," but this reason is unclear and not documented. While the function-scopedclientfixture mitigates isolation concerns by providing a fresh HTTP client per test, the shared app instance should have its usage constraints documented.Recommend:
- Clarify what "Pydantic schema crashes" refers to and link to any related issue if this was a workaround for a known problem
- Document that tests must not modify
app.stateor registered routes after creation- Consider adding an assertion in fixture cleanup to detect unintended state mutations if this becomes a concern
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
.github/workflows/backend-ci.ymlbackend/pyproject.tomlbackend/tests/conftest.pybackend/tests/fixtures/real_services.pybackend/tests/integration/conftest.py
🚧 Files skipped from review as they are similar to previous changes (2)
- backend/pyproject.toml
- backend/tests/integration/conftest.py
🧰 Additional context used
🧬 Code graph analysis (1)
backend/tests/fixtures/real_services.py (1)
backend/app/core/database_context.py (6)
db_name(72-74)db_name(152-153)db_name(205-206)client(62-64)client(140-143)client(197-198)
🪛 Checkov (3.2.334)
.github/workflows/backend-ci.yml
[medium] 138-139: Basic Auth Credentials
(CKV_SECRET_4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Scan Backend
- GitHub Check: Scan Frontend
- GitHub Check: E2E Tests
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (11)
.github/workflows/backend-ci.yml (5)
18-24: LGTM! Good practice pinning image versions for cache key consistency.The centralized environment variables for Docker images ensure consistent caching across jobs and make version updates easier to manage.
26-65: LGTM! Clean unit test job setup.The
COVERAGE_CORE: sysmonsetting aligns with the PR objective to speed up CI tests, and the 5-minute timeout is reasonable for unit tests.
131-151: CI test credentials are acceptable for ephemeral local services.The static analysis tool flagged the MongoDB credentials at line 138. These are intentional test credentials for CI-only ephemeral containers and not production secrets. The approach is standard for CI integration tests.
The
--waitand--wait-timeout 120flags on docker compose are a good improvement over polling, addressing the PR objective.
243-272: K8s setup and E2E test execution look solid.The k3s installation with timeout-based readiness check and the idempotent namespace creation pattern (
--dry-run=client -o yaml | kubectl apply -f -) are well designed. The KUBECONFIG environment variable is properly set for subsequent test execution.
284-297: Good log collection strategy for debugging failures.The defensive
|| trueon kubectl commands ensures log collection doesn't fail even if the cluster is in a bad state. Separating intok8s-logsartifact distinguishes these from the integration job'sbackend-logs.backend/tests/fixtures/real_services.py (3)
314-329: Previous async operation concern has been resolved.The earlier issue about the lambda not properly awaiting MongoDB operations has been fixed. The
check_mongofunction is now a proper async function that awaits the ping command and ensures the client is closed in a finally block.
15-17: LGTM: Clean migration from Motor to PyMongo AsyncMongoClient.The migration from
AsyncIOMotorClienttoAsyncMongoClientwith the newDatabaseandDBClienttype aliases is implemented consistently throughout the file. The connection setup includes appropriate timeouts and pool settings, proper ping verification, and correct cleanup withawait client.close().Also applies to: 26-26, 32-42, 101-101, 134-134, 162-162, 167-167, 195-195, 215-215, 234-234
332-344: LGTM: Redis readiness check properly implemented.The
check_redisfunction correctly implements async operations with proper resource cleanup in the finally block, matching the pattern used for MongoDB.backend/tests/conftest.py (3)
177-181: LGTM!The migration from Motor's
AsyncIOMotorDatabaseto the customDatabasetype is implemented correctly, and function-scoping ensures proper test isolation.
183-187: Inconsistency between code and AI summary.The AI summary indicates
loop_scope="session"was added to the Redis fixture, but this parameter is not present in the actual code. The fixture is function-scoped, which is appropriate for test isolation.
189-191: Cleanup implementation verified and correct. The `tests/integration/conftest.py` file contains a properly implemented `_cleanup` fixture that drops all non-system database collections and flushes Redis before each integration test. Unit tests are appropriately isolated with a no-op fixture and explicit RuntimeError raises for infrastructure fixtures, preventing inadvertent DB/Redis access.
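For readers who don't have the repository open, a hedged sketch of what such an autouse fixture can look like, assuming the `db` and `redis_client` fixtures from `tests/conftest.py`; the real implementation in `tests/integration/conftest.py` may differ in detail:
```python
from collections.abc import AsyncIterator

import pytest_asyncio
from redis.asyncio import Redis

from app.core.database_context import Database


@pytest_asyncio.fixture(autouse=True, loop_scope="session")
async def _cleanup(db: Database, redis_client: Redis) -> AsyncIterator[None]:
    # Wipe MongoDB collections and Redis before the test so it starts clean.
    for name in await db.list_collection_names():
        if not name.startswith("system."):
            await db.drop_collection(name)
    await redis_client.flushdb()
    yield
```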
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/pyproject.toml (1)
188-210: Missing `coverage_core=sysmon` configuration mentioned in PR objectives. The PR description states "Set coverage_core=sysmon" but this setting is absent from the configuration. It should be added to the `[tool.coverage.run]` section in `backend/pyproject.toml` to enable faster coverage measurement via Python 3.12's sys.monitoring hooks instead of the default tracing core.
Add:
```toml
[tool.coverage.run]
core = "sysmon"
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
backend/pyproject.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
- GitHub Check: E2E Tests
🔇 Additional comments (2)
backend/pyproject.toml (2)
6-125: Motor-to-PyMongo migration is complete. Verification confirms the removal of `motor==3.6.0` from dependencies and no remaining Motor imports in the codebase. The migration to PyMongo's native async API is successful, with no runtime errors expected.
205-206: The session-scoped event loop configuration is acceptable as designed. The codebase already implements robust test isolation through unique MongoDB database names per session/worker (e.g.,integr8scode_test_{session_id}_{worker_id}) and separate Redis databases per worker, so tests do not share mutable state despite sharing the event loop. Additionally, the integration test cleanup strategy inintegration/conftest.pydemonstrates that your team is already aware of potential event loop lifecycle issues and has intentionally designed around them by using pre-test cleanup only—a sound approach that avoids "Event loop is closed" errors when streaming/SSE tests hold connections across loop boundaries. All async fixtures properly use context managers or yield patterns, ensuring resource cleanup at appropriate lifecycle points. No further action needed.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/tests/integration/test_replay_routes.py (1)
394-438: Potential issue: test_admin fixture may not contain 'user_id' field. Line 419 references `test_admin.get("user_id")`, but the `test_admin` fixture (defined in `backend/tests/conftest.py:244-252`) returns a dict with keys: `username`, `password`, `csrf_token`, and `headers`. It doesn't include `user_id`. This will likely result in `None` being passed to the `user_id` filter, which may cause the test to behave unexpectedly or fail.
🔎 Verify if user_id should be retrieved separately
If `user_id` is needed, you may need to fetch it from the user profile or authentication response after login, rather than from the fixture dict.
backend/tests/integration/test_execution_routes.py (1)
526-528: Remove duplicate assertion. Line 527 duplicates the assertion from line 526.
🔎 Proposed fix

```diff
 # Use idempotency header on both requests to guarantee keying
 r1 = await client.post("/api/v1/execute", json=execution_request, headers=headers)
 assert r1.status_code == 200
-assert r1.status_code == 200
 e1 = r1.json()["execution_id"]
```
🧹 Nitpick comments (8)
backend/tests/integration/test_events_routes.py (6)
112-118: Suggest removing redundant login call. The `test_user` fixture already handles authentication and returns a dictionary with `csrf_token` and `headers`. The manual login call here is redundant and adds unnecessary overhead to the test execution time, which contradicts the PR's goal of speeding up CI tests.
🔎 Proposed refactor to remove redundant login

```diff
 async def test_get_execution_events(self, client: AsyncClient, test_user: Dict[str, str]) -> None:
     """Test getting events for a specific execution."""
-    # Login first
-    login_data = {
-        "username": test_user["username"],
-        "password": test_user["password"]
-    }
-    login_response = await client.post("/api/v1/auth/login", data=login_data)
-    assert login_response.status_code == 200
-
     # Create an execution
     execution_request = {
```
152-158: Suggest removing redundant login call. The `test_user` fixture already authenticates the client. This manual re-login is unnecessary and slows down test execution.
🔎 Proposed refactor

```diff
 async def test_query_events_advanced(self, client: AsyncClient, test_user: Dict[str, str]) -> None:
     """Test advanced event querying with filters."""
-    # Login first
-    login_data = {
-        "username": test_user["username"],
-        "password": test_user["password"]
-    }
-    login_response = await client.post("/api/v1/auth/login", data=login_data)
-    assert login_response.status_code == 200
-
     # Query events with multiple filters
```
196-202: Remove redundant login calls across multiple test methods. These test methods manually re-authenticate even though the `test_user` fixture already handles authentication. Removing these redundant calls would improve test execution speed, aligning with the PR's CI speed-up objective.
Affected methods:
- `test_get_events_by_correlation_id` (lines 196-202)
- `test_get_current_request_events` (lines 236-242)
- `test_get_event_statistics` (lines 258-264)
- `test_get_single_event` (lines 294-300)
- `test_get_nonexistent_event` (lines 325-331)
- `test_list_event_types` (lines 345-351)
- `test_publish_custom_event_requires_admin` (lines 375-382)
- `test_aggregate_events` (lines 439-445)
- `test_delete_event_requires_admin` (lines 474-480)
- `test_replay_aggregate_events_requires_admin` (lines 491-497)
- `test_event_pagination` (lines 548-554)
Also applies to: 236-242, 258-264, 294-300, 325-331, 345-351, 375-382, 439-445, 474-480, 491-497, 548-554
403-408: Remove redundant login call in admin test. The `test_admin` fixture already handles authentication. This manual re-login is unnecessary.
🔎 Proposed refactor

```diff
 async def test_publish_custom_event_as_admin(self, client: AsyncClient, test_admin: Dict[str, str]) -> None:
     """Test publishing custom events as admin."""
-    # Login as admin
-    login_data = {
-        "username": test_admin["username"],
-        "password": test_admin["password"]
-    }
-    login_response = await client.post("/api/v1/auth/login", data=login_data)
-    assert login_response.status_code == 200
-
     # Publish custom event (requires Kafka); skip if not available
```
507-513: Remove redundant admin login call. The `test_admin` fixture already authenticates the client.
🔎 Proposed refactor

```diff
 async def test_replay_aggregate_events_dry_run(self, client: AsyncClient, test_admin: Dict[str, str]) -> None:
     """Test replaying events in dry-run mode."""
-    # Login as admin
-    login_data = {
-        "username": test_admin["username"],
-        "password": test_admin["password"]
-    }
-    login_response = await client.post("/api/v1/auth/login", data=login_data)
-    assert login_response.status_code == 200
-
     # Get an existing aggregate ID from events
```
589-608: Remove redundant login calls for both users. Both the `test_user` and `test_admin` fixtures are already authenticated. These manual logins are unnecessary.
🔎 Proposed refactor

```diff
 async def test_events_isolation_between_users(self, client: AsyncClient, test_user: Dict[str, str], test_admin: Dict[str, str]) -> None:
     """Test that events are properly isolated between users."""
     # Get events as regular user
-    user_login_data = {
-        "username": test_user["username"],
-        "password": test_user["password"]
-    }
-    user_login_response = await client.post("/api/v1/auth/login", data=user_login_data)
-    assert user_login_response.status_code == 200
-
     user_events_response = await client.get("/api/v1/events/user?limit=10")
     assert user_events_response.status_code == 200
     user_events = user_events_response.json()
     user_event_ids = [e["event_id"] for e in user_events["events"]]

     # Get events as admin (without include_all_users flag)
-    admin_login_data = {
-        "username": test_admin["username"],
-        "password": test_admin["password"]
-    }
-    admin_login_response = await client.post("/api/v1/auth/login", data=admin_login_data)
-    assert admin_login_response.status_code == 200
-
     admin_events_response = await client.get("/api/v1/events/user?limit=10")
```
backend/tests/integration/test_execution_routes.py (2)
58-64: Remove redundant login call. The `test_user` fixture already handles authentication. This manual re-login is unnecessary and adds overhead to test execution, which contradicts the PR's CI speed-up objective.
🔎 Proposed refactor

```diff
 async def test_execute_simple_python_script(self, client: AsyncClient, test_user: Dict[str, str]) -> None:
     """Test executing a simple Python script."""
-    # Login first
-    login_data = {
-        "username": test_user["username"],
-        "password": test_user["password"]
-    }
-    login_response = await client.post("/api/v1/auth/login", data=login_data)
-    assert login_response.status_code == 200
-
     # Execute script
```
102-108: Remove redundant login calls across multiple test methods. These test methods manually re-authenticate even though the `test_user` fixture already handles authentication. Removing these redundant calls would significantly improve test execution speed.
Affected methods:
- `test_get_execution_result` (lines 102-108)
- `test_execute_with_error` (lines 143-149)
- `test_execute_with_resource_tracking` (lines 169-175)
- `test_execute_with_different_language_versions` (lines 212-218)
- `test_execute_with_large_output` (lines 246-252)
- `test_cancel_running_execution` (lines 285-291)
- `test_execution_with_timeout` (lines 337-343)
- `test_sandbox_restrictions` (lines 370-376)
- `test_concurrent_executions_by_same_user` (lines 427-433)
Also applies to: 143-149, 169-175, 212-218, 246-252, 285-291, 337-343, 370-376, 427-433
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- .github/actions/docker-cache/action.yml
- .github/workflows/backend-ci.yml
- backend/pyproject.toml
- backend/tests/conftest.py
- backend/tests/integration/test_admin_routes.py
- backend/tests/integration/test_dlq_routes.py
- backend/tests/integration/test_events_routes.py
- backend/tests/integration/test_execution_routes.py
- backend/tests/integration/test_health_routes.py
- backend/tests/integration/test_notifications_routes.py
- backend/tests/integration/test_replay_routes.py
- backend/tests/integration/test_saga_routes.py
- backend/tests/integration/test_saved_scripts_routes.py
- backend/tests/integration/test_sse_routes.py
✅ Files skipped from review due to trivial changes (1)
- backend/tests/integration/test_admin_routes.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/pyproject.toml
🧰 Additional context used
🧬 Code graph analysis (10)
backend/tests/integration/test_saga_routes.py (1)
backend/tests/conftest.py (3)
`client` (158-169), `test_user` (234-241), `another_user` (256-268)
backend/tests/integration/test_sse_routes.py (3)
backend/tests/conftest.py (2)
`client` (158-169), `test_user` (234-241)
backend/tests/integration/test_user_settings_routes.py (1)
`test_user` (21-49)
backend/app/schemas_pydantic/sse.py (1)
`SSEHealthResponse` (126-136)
backend/tests/integration/test_health_routes.py (1)
backend/tests/conftest.py (2)
`client` (158-169), `test_user` (234-241)
backend/tests/integration/test_notifications_routes.py (1)
backend/tests/conftest.py (2)
`client` (158-169), `test_admin` (245-252)
backend/tests/integration/test_saved_scripts_routes.py (1)
backend/tests/conftest.py (3)
`client` (158-169), `test_user` (234-241), `test_admin` (245-252)
backend/tests/integration/test_events_routes.py (1)
backend/tests/conftest.py (3)
`client` (158-169), `test_user` (234-241), `test_admin` (245-252)
backend/tests/integration/test_replay_routes.py (1)
backend/tests/conftest.py (3)
`client` (158-169), `test_user` (234-241), `test_admin` (245-252)
backend/tests/conftest.py (2)
backend/tests/unit/conftest.py (4)
`app` (34-35), `db` (19-20), `redis_client` (24-25), `client` (29-30)
backend/app/core/database_context.py (6)
`database` (67-69), `database` (146-149), `database` (201-202), `client` (62-64), `client` (140-143), `client` (197-198)
backend/tests/integration/test_dlq_routes.py (1)
backend/tests/conftest.py (2)
`client` (158-169), `test_user` (234-241)
backend/tests/integration/test_execution_routes.py (1)
backend/tests/conftest.py (2)
`client` (158-169), `test_user` (234-241)
🪛 Checkov (3.2.334)
.github/workflows/backend-ci.yml
[medium] 100-101: Basic Auth Credentials
(CKV_SECRET_4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Scan Backend
- GitHub Check: E2E Tests
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (18)
.github/actions/docker-cache/action.yml (1)
12-18: LGTM on cache key generation. The cache key derivation using sorted images and md5 hashing ensures consistent cache hits regardless of input ordering.
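For illustration, the same key-derivation idea expressed in Python rather than the action's shell (the `docker-images` prefix and function name are assumptions, not the action's actual output):

```python
# Conceptual Python equivalent; the action itself derives the key in shell, and the
# "docker-images" prefix here is an assumption, not the action's real output.
import hashlib


def docker_cache_key(images: list[str], prefix: str = "docker-images") -> str:
    normalized = "\n".join(sorted(images))                  # sorting makes the key order-independent
    digest = hashlib.md5(normalized.encode()).hexdigest()   # stable fingerprint of the image set
    return f"{prefix}-{digest}"


print(docker_cache_key(["mongo:7", "redis:7", "bitnami/kafka:3.6"]))
```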
.github/workflows/backend-ci.yml (6)
26-62: LGTM on Unit Tests job. The job is well-structured with proper UV caching, timeout, coverage collection, and Codecov upload with `if: always()` to ensure coverage is reported even on test failures.
88-91: Good use of `--wait` flag for service readiness. Using `docker compose up --wait --wait-timeout 120` ensures services are healthy before tests run, which is more reliable than curl polling.
96-105: Test credentials are appropriately scoped. The static analysis hint (CKV_SECRET_4) about Basic Auth credentials is a false positive: these are ephemeral CI test credentials with no production impact. The environment variables are properly scoped to the test step only.
168-176: Verify kubectl access for log collection step. The k3s setup configures kubeconfig at `/home/runner/.kube/config`, but line 174's `export KUBECONFIG=...` won't persist beyond this step. The test step (lines 189-190) correctly sets `KUBECONFIG` in its env block, but the log collection step (lines 213-214) doesn't set `KUBECONFIG`, which may cause kubectl commands to fail.
🔎 Proposed fix to ensure kubectl works in log collection

```diff
       - name: Collect logs
         if: failure()
+        env:
+          KUBECONFIG: /home/runner/.kube/config
         run: |
           mkdir -p logs
           docker compose -f docker-compose.ci.yaml logs > logs/docker-compose.log 2>&1
           kubectl get events --sort-by='.metadata.creationTimestamp' -A > logs/k8s-events.log 2>&1 || true
           kubectl describe pods -A > logs/k8s-describe-pods.log 2>&1 || true
```

The `|| true` guards prevent step failure, but logs won't be captured if kubectl can't find the kubeconfig.
107-111: Good separation of test scopes. Integration tests correctly ignore `tests/integration/k8s` while E2E tests specifically target that directory. This ensures proper coverage separation without overlap.
71-74: Docker cache action integration is clean. The action properly receives pinned image versions from workflow-level env vars, ensuring cache key consistency.
Note: Error handling for parallel operations should be addressed in the docker-cache action itself (see comments on that file).
backend/tests/integration/test_saga_routes.py (1)
28-427: LGTM: Fixture renamings are consistent. All fixture references have been correctly updated from `shared_user` to `test_user`, and the login flows properly use the new fixture's credentials. The changes are mechanical and maintain test functionality.
Note: Some tests comment "Already authenticated via test_user fixture" but then explicitly call `/api/v1/auth/login` again (e.g., lines 111-117, 137-143). This is redundant but harmless; the explicit login refreshes the session.
backend/tests/integration/test_saved_scripts_routes.py (1)
36-482: LGTM: Fixture updates are correct. All fixture references have been systematically updated from `shared_user` to `test_user` and `shared_admin` to `test_admin`. The test logic remains unchanged, and authentication flows correctly reference the new fixtures.
backend/tests/conftest.py (3)
233-252: Clarify DB cleanup scope in fixture docstrings. The docstrings state "Recreated each test (DB wiped between tests)" but the cleanup is implemented in `backend/tests/integration/conftest.py` (per lines 197-199). Unit tests using these fixtures won't have DB cleanup. However, this is correct by design: `backend/tests/unit/conftest.py` blocks DB access entirely, so unit tests can't use these fixtures.
129-153: Session-scoped app fixture design is sound and properly safeguarded. Verification confirms:
- No tests modify `app.state` or registered routes
- No background tasks or event listeners are registered in tests
- Client fixture is function-scoped, providing clean cookies and isolation per test
- App cleanup executes properly in fixture teardown
The session scope is justified to avoid Pydantic schema validator memory issues with pytest-xdist, and tests follow the documented constraints that prevent state pollution.
186-188: Database fixture migration is compatible with test code. The `db` fixture yields `AsyncDatabase[MongoDocument]` from PyMongo (via `app.core.database_context`), which provides the same interface as Motor's `AsyncIOMotorDatabase`. All test operations, including `.find()`, `.find_one()`, `.insert_one()`, `.get_collection()`, indexing syntax, and cursor methods, are natively supported by PyMongo's async driver. No compatibility issues found.
backend/tests/integration/test_dlq_routes.py (1)
35-479: LGTM: Consistent fixture updates. All occurrences of `shared_user` have been replaced with `test_user`, and login data correctly references the new fixture's credentials.
backend/tests/integration/test_health_routes.py (1)
51-86: LGTM: Fixture update is correct. The fixture parameter and login data have been correctly updated to use `test_user`.
backend/tests/integration/test_sse_routes.py (1)
41-206: LGTM: Fixture parameters updated correctly. All test signatures have been updated to use `test_user` instead of `shared_user`.
backend/tests/integration/test_replay_routes.py (1)
26-485: LGTM: Fixture renamings are consistent (aside from user_id issue). All fixture references have been updated from `shared_user`/`shared_admin` to `test_user`/`test_admin`, and login flows correctly use the new fixture credentials.
backend/tests/integration/test_notifications_routes.py (1)
34-503: LGTM: Fixture updates are correct and consistent. All fixture references have been systematically updated to use `test_user` and `test_admin`, with corresponding updates to login data throughout the file.
backend/tests/integration/test_execution_routes.py (1)
495-498: LGTM: Concise inline dictionary construction. Lines 496 and 512 use inline dictionary construction for login data, which is more concise and readable compared to the multi-line format used in other tests.
Also applies to: 511-514
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/tests/integration/k8s/test_execution_routes.py (1)
497-500: Remove duplicate assertion. Line 499 contains a duplicate assertion `assert r1.status_code == 200` that appears immediately after the same check on line 498. This is likely a copy-paste error.
🔎 Proposed fix

```diff
 r1 = await client.post("/api/v1/execute", json=execution_request, headers=headers)
 assert r1.status_code == 200
-assert r1.status_code == 200
 e1 = r1.json()["execution_id"]
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- .github/actions/docker-cache/action.yml
- backend/app/infrastructure/kafka/events/user.py
- backend/app/services/user_settings_service.py
- backend/tests/integration/k8s/test_execution_routes.py
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/actions/docker-cache/action.yml
🧰 Additional context used
🧬 Code graph analysis (1)
backend/app/services/user_settings_service.py (1)
backend/app/domain/user/settings_models.py (2)
`DomainNotificationSettings` (13-18), `DomainEditorSettings` (22-28)
🪛 GitHub Actions: MyPy Type Checking
backend/app/infrastructure/kafka/events/user.py
[error] 1-1: mypy check failed for the backend: 1 error in 1 file (checked 242 source files). Command to reproduce: uv run mypy --config-file pyproject.toml --strict .
backend/app/services/user_settings_service.py
[error] 396-396: Mypy error: Name "channels" already defined on line 346 [no-redef]. Step: 'mypy --config-file pyproject.toml --strict .'
backend/tests/integration/k8s/test_execution_routes.py
[error] 1-1: mypy check failed for the backend: 1 error in 1 file (checked 242 source files). Command to reproduce: uv run mypy --config-file pyproject.toml --strict .
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E Tests
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (8)
backend/app/services/user_settings_service.py (4)
186-189: Avro compatibility stringification looks correct. The approach of JSON-encoding nested dicts while stringifying primitive values ensures Avro schema compatibility. The type annotation `dict[str, str]` matches the `updated` field in `UserSettingsUpdatedEvent`.
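A small sketch of the stringify/parse round-trip being reviewed here, under the assumption that the payload field must stay a flat `dict[str, str]`; the helper names are illustrative rather than the service's actual ones:

```python
# Sketch of the stringify/parse round-trip; helper names are illustrative, not the service's.
import json
from typing import Any


def stringify_updates(updates: dict[str, Any]) -> dict[str, str]:
    # Nested dicts become JSON strings, everything else becomes str,
    # keeping the Avro field a flat map<string, string>.
    return {k: json.dumps(v) if isinstance(v, dict) else str(v) for k, v in updates.items()}


def parse_value(value: Any) -> Any:
    # Accept both legacy dict payloads and the new stringified form.
    if isinstance(value, str):
        try:
            return json.loads(value)
        except ValueError:
            return value
    return value


payload = stringify_updates({"editor": {"theme": "dark"}, "timezone": "UTC"})
assert parse_value(payload["editor"]) == {"theme": "dark"}
assert parse_value(payload["timezone"]) == "UTC"
```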
196-200: Payload key rename is consistent with schema. The `updated` key aligns with the renamed field in `UserSettingsUpdatedEvent`.
343-355: Handler for notification settings events is well-structured. The logic correctly extracts settings and channels from the payload, builds the domain model, and updates the timestamp.
374-381: `parse_value` helper handles JSON-string payloads gracefully. The helper correctly parses stringified JSON or returns the value as-is, supporting both legacy and new payload formats. Defining it inside `_apply_event` keeps it scoped to where it's needed.
backend/app/infrastructure/kafka/events/user.py (1)
49-55: Field rename is correctly implemented and aligned with service layer changes. The `changes` to `updated` rename on line 54 is properly reflected throughout the event publishing flow. Event payloads correctly pass the `updated` field, and no broken references to the old field name remain in the codebase.
backend/tests/integration/k8s/test_execution_routes.py (3)
17-17: LGTM: Clean test organization. The class-level `@pytest.mark.k8s` marker is a good approach to designate all tests in this class as requiring Kubernetes infrastructure, replacing the previous per-test skip logic.
39-44: LGTM: Consistent fixture migration. All test methods have been consistently updated to use the `test_user` fixture instead of `shared_user`, and login data construction correctly derives credentials from the new fixture.
Also applies to: 82-87, 122-127, 147-152, 188-194, 222-227, 260-265, 306-316, 343-348, 399-404, 466-469, 481-484
1-507: Test file is excluded from mypy checking. The file `backend/tests/integration/k8s/test_execution_routes.py` is explicitly excluded from mypy type checking via the configuration `exclude = '(^tests/|/tests/)'` in `backend/pyproject.toml`. Mypy does not analyze test files, so no type checking errors are reported for this file. The type annotation `Dict[str, str]` for the `test_user` parameter is incomplete (the fixture returns additional keys like `"csrf_token"` and `"headers"`), but this cannot be caught by mypy since the file is excluded. The config comment notes `# TODO: REMOVE NEXT LINE`, indicating the exclusion is temporary.
Actionable comments posted: 2
🧹 Nitpick comments (2)
backend/tests/conftest.py (2)
236-244: Clarify status 400 handling and verify state consistency. Two concerns:
1. Status 400 treated as success: Line 241 skips the test only if status is not in `(200, 201, 400)`. Including 400 as acceptable might be intentional (user already exists from a previous test), but could also mask validation errors. Consider checking the response body to distinguish "user exists" from other 400 errors.
2. App state vs DB state: The docstring mentions "DB wiped between tests", but with the session-scoped app, any app-level caches or in-memory state won't be reset. Verify that user registration and authentication don't rely on stale app state after DB cleanup.
🔎 Proposed improvement to distinguish 400 error types
```diff
 async def test_user(client: httpx.AsyncClient, test_user_credentials):
     """Function-scoped authenticated user. Recreated each test (DB wiped between tests)."""
     creds = test_user_credentials
     r = await client.post("/api/v1/auth/register", json=creds)
-    if r.status_code not in (200, 201, 400):
+    if r.status_code not in (200, 201):
+        if r.status_code == 400 and "already exists" in r.text.lower():
+            pass  # User exists, continue with login
+        else:
-        pytest.skip(f"Cannot create test user (status {r.status_code}).")
+            pytest.skip(f"Cannot create test user (status {r.status_code}): {r.text}")
     csrf = await _http_login(client, creds["username"], creds["password"])
     return {**creds, "csrf_token": csrf, "headers": {"X-CSRF-Token": csrf}}
```
247-255: Same status 400 handling concern as test_user. The `test_admin` fixture has the same status code handling pattern as `test_user` (line 252). Consider applying the same refinement to distinguish "user exists" from validation errors, and verify app state consistency with DB cleanup.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- backend/app/services/user_settings_service.py
- backend/tests/conftest.py
- backend/tests/integration/conftest.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/tests/integration/conftest.py
🧰 Additional context used
🧬 Code graph analysis (2)
backend/app/services/user_settings_service.py (1)
backend/app/domain/user/settings_models.py (2)
`DomainNotificationSettings` (13-18), `DomainEditorSettings` (22-28)
backend/tests/conftest.py (2)
backend/tests/unit/conftest.py (4)
`app` (34-35), `db` (19-20), `redis_client` (24-25), `client` (29-30)
backend/app/core/database_context.py (6)
`database` (67-69), `database` (146-149), `database` (201-202), `client` (62-64), `client` (140-143), `client` (197-198)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Scan Backend
- GitHub Check: E2E Tests
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (9)
backend/app/services/user_settings_service.py (3)
186-189: Good approach for Avro compatibility. The stringification logic correctly handles nested dictionaries by converting them to JSON strings while preserving simple values as strings. This ensures compatibility with Avro schema requirements.
343-368: LGTM! Clean refactoring of specific event type handling. The code correctly handles `USER_NOTIFICATION_SETTINGS_UPDATED` and `USER_EDITOR_SETTINGS_UPDATED` events with proper fallback logic and early returns. The settings are properly reconstructed from event payloads with sensible defaults.
370-421: The event field rename from "changes" to "updated" is correctly implemented throughout the codebase. The event schema defines the field as `updated: dict[str, str]`, and the handler code properly accesses it via `event.payload.get("updated")`. No references to the old "changes" field remain. The `parse_value` helper is appropriate for handling both string and dict formats in nested fields, providing defensive handling for payload deserialization edge cases.
backend/tests/conftest.py (6)
14-14: LGTM: Database import aligns with the Motor → PyMongo migration. The import correctly references the new `Database` type from `app.core.database_context`, consistent with the broader migration described in the PR objectives.
152-156: LGTM: Container scope aligns with app fixture. Session-scoping the `app_container` is consistent with the session-scoped `app` fixture, since the container is retrieved from `app.state.dishka_container`.
160-172: LGTM: Function-scoped fixtures provide test isolation. The function-scoped `client`, `scope`, `db`, and `redis_client` fixtures correctly provide per-test isolation despite the session-scoped app. The `Database` type change aligns with the Motor → PyMongo migration.
Also applies to: 182-191, 194-197
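For context, a function-scoped client over a session-scoped ASGI app commonly looks like the following hedged sketch (using `httpx.ASGITransport`; this is not the repository's exact fixture):

```python
# Hedged sketch (not the repo's fixture): fresh client + cookie jar per test, app reused.
import httpx
import pytest_asyncio


@pytest_asyncio.fixture
async def client(app):
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://testserver") as c:
        yield c
```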
215-222: LGTM: Credentials fixtures renamed and properly scoped. The renaming from `shared_*_credentials` to `test_*_credentials` improves clarity. Session scope is appropriate since credentials don't change during the test session, and using `PYTEST_SESSION_ID` ensures uniqueness across workers.
Also applies to: 226-233
258-270: LGTM: another_user fixture correctly creates unique users. The fixture properly generates unique users per test using UUID, avoiding conflicts. Implementation is straightforward and correct.
200-202: No further action needed. The `tests/integration/conftest.py` file contains the `_cleanup` fixture marked with `@pytest_asyncio.fixture(autouse=True)` that properly cleans both the database (dropping all non-system collections) and Redis before each integration test. The separation of cleanup logic between unit and integration test configurations is correctly implemented.
| "user_id": user_id, | ||
| "settings_type": SettingsType.PREFERENCES, | ||
| "changes": {"restored_to": timestamp.isoformat()}, | ||
| "updated": {"restored_to": timestamp.isoformat()}, |
Inconsistent payload format for the same event type.
This line publishes USER_SETTINGS_UPDATED with a dict in the "updated" field, while lines 186-196 stringify all values for the same event type. Although the parse_value helper (lines 375-381) can handle both formats, this inconsistency reduces the predictability of the event contract.
🔎 Proposed fix: Apply the same stringification logic
- "updated": {"restored_to": timestamp.isoformat()},
+ "updated": {"restored_to": str(timestamp.isoformat())},Or better yet, apply the same stringification pattern used in lines 186-189:
+ restoration_data = {"restored_to": timestamp.isoformat()}
+ updated_stringified: dict[str, str] = {
+ k: json.dumps(v) if isinstance(v, dict) else str(v) for k, v in restoration_data.items()
+ }
await self.event_service.publish_event(
event_type=EventType.USER_SETTINGS_UPDATED,
aggregate_id=f"user_settings_{user_id}",
payload={
"user_id": user_id,
"settings_type": SettingsType.PREFERENCES,
- "updated": {"restored_to": timestamp.isoformat()},
+ "updated": updated_stringified,
},
metadata=None,
)Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In backend/app/services/user_settings_service.py around line 301, the published
USER_SETTINGS_UPDATED event uses {"restored_to": timestamp.isoformat()} (a raw
datetime string) which is inconsistent with the earlier USER_SETTINGS_UPDATED
messages at lines 186-196 that stringify all values; change this payload to
apply the same stringification pattern used on lines 186-189 (i.e., convert the
value to a string in the same way the other updated keys are stringified) so the
"updated" field always contains string values matching the event contract.
```python
@pytest_asyncio.fixture(scope="session")
async def app(_test_env):  # type: ignore[valid-type]
    """Create FastAPI app once per session/worker.
    Session-scoped to avoid Pydantic schema validator memory issues when
    FastAPI recreates OpenAPI schemas hundreds of times with pytest-xdist.
    See: https://github.com/pydantic/pydantic/issues/1864
    Depends on _test_env to ensure env vars (REDIS_DB, DATABASE_NAME, etc.)
    are set before the app/Settings are created.
    Note: Tests must not modify app.state or registered routes.
    Use function-scoped `client` fixture for test isolation.
    """
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for potential stateful patterns that could leak between tests
echo "=== Checking for app.state modifications in tests ==="
rg -n "app\.state\." --type=py -g 'tests/**/*.py' -C2
echo -e "\n=== Checking for module-level caches or singletons in app code ==="
rg -n "@lru_cache|@cache|@functools" --type=py -g 'backend/app/**/*.py' -C2
echo -e "\n=== Checking for persistent caches in dependencies ==="
rg -n "cache|Cache" --type=py -g 'backend/app/**/*.py' -l | head -20Repository: HardMax71/Integr8sCode
Length of output: 4257
🏁 Script executed:
#!/bin/bash
# Examine the conftest.py to understand the fixture setup and cleanup
echo "=== Full conftest.py around app fixture ==="
sed -n '120,160p' backend/tests/conftest.py
echo -e "\n=== Checking how client fixture relates to app ==="
rg -n "def client" backend/tests/conftest.py -A10
echo -e "\n=== Checking _test_env fixture ==="
rg -n "def _test_env" backend/tests/conftest.py -A20Repository: HardMax71/Integr8sCode
Length of output: 3224
🏁 Script executed:
#!/bin/bash
# Look for app factory or app creation logic
echo "=== Checking app creation logic ==="
rg -n "FastAPI\(\)" backend --type=py -C2 | head -50
echo -e "\n=== Checking for background tasks or async state ==="
rg -n "@app\.(on_event|lifespan)" backend/app --type=py -C5Repository: HardMax71/Integr8sCode
Length of output: 467
🏁 Script executed:
#!/bin/bash
# Check if tests modify environment variables expecting Settings to change
echo "=== Checking for environment variable modifications in tests ==="
rg -n "os\.environ\[" backend/tests --type=py -C2 | head -50
echo -e "\n=== Check if get_settings is used/called in tests ==="
rg -n "get_settings" backend/tests --type=py
echo -e "\n=== Verify create_test_app() source ==="
grep -n "def create_test_app" backend/tests/conftest.py -A15Repository: HardMax71/Integr8sCode
Length of output: 4471
🏁 Script executed:
#!/bin/bash
# Check app.main to understand app creation
echo "=== Check app.main and create_app function ==="
head -100 backend/app/main.py
echo -e "\n=== Look for Settings usage in app initialization ==="
rg -n "Settings\|get_settings" backend/app/main.py -C3Repository: HardMax71/Integr8sCode
Length of output: 3124
Verify other @lru_caches don't cause cross-test pollution.
The session-scoped app is safe from most state leaks. The Settings cache is correctly cleared when the app is created (line 121 in create_test_app()), and the function-scoped client fixture properly isolates request state.
However, verify that cached functions in the app don't cause test interdependencies:
- `schema_registry.py`: `_get_event_class_mapping()`, `_get_all_event_classes()`, and `_get_event_type_to_class_mapping()` all use `@lru_cache(maxsize=1)` and are never cleared
- `kafka/mappings.py`: `get_event_class_for_type()` and `get_topic_for_event()` use `@lru_cache(maxsize=128)` and are never cleared



wait instead of curl polling, + coverage_core=sysmon
Summary by CodeRabbit
New Features
Infrastructure
Bug Fixes
✏️ Tip: You can customize this high-level summary in your review settings.