
feat: Wire end-to-end pipeline — Dashboard ↔ Gateway ↔ Intelligence#27

Open
rsalus wants to merge 7 commits into main from e2e-wiring

Conversation


@rsalus rsalus commented Feb 21, 2026

Summary

Connects the three independently-working services into a complete end-to-end pipeline:

  • IntelligenceClient HTTP — Replaced the stub with a real HttpClient implementation that calls the Python Intelligence service. Includes an AnalyzeRequestDto serialization bridge (C# PascalCase → Python snake_case) and DI wiring with IntelligenceOptions.
  • ProcessPARequest mutation — Wired through real FHIR data aggregation (Athena sandbox) → Intelligence analysis → result mapping. Evidence items map to criteria (MET/NOT_MET/UNCLEAR), and the confidence score scales to 0-100.
  • Intelligence generic policy — Removed the hard 400 rejection for unsupported procedure codes; now falls back to a generic policy with medical necessity, diagnosis, and documentation criteria.
  • MockDataService — Added an ApplyAnalysisResult method to store real analysis results (status, confidence, clinical summary, criteria).
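
The PascalCase → snake_case serialization bridge mentioned in the first bullet can be sketched in a few lines. Python is used here purely for illustration; the real bridge is C# System.Text.Json configuration, and the helper names below are hypothetical:

```python
import re

def to_snake_case(name: str) -> str:
    """Convert a PascalCase property name to snake_case."""
    # Insert an underscore before each interior uppercase letter, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def bridge_keys(dto: dict) -> dict:
    """Rewrite C#-style DTO keys into the snake_case shape the Python service expects."""
    return {to_snake_case(key): value for key, value in dto.items()}

request = bridge_keys({"PatientId": "pt-1", "ProcedureCode": "27447"})
# request now carries the keys "patient_id" and "procedure_code"
```

The same naming policy has to be applied symmetrically when deserializing the service's response, which is exactly what one of the review comments later in this thread flags.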

Data Flow (Target State)

Dashboard → GraphQL mutation → Gateway
  → FhirDataAggregator (Athena FHIR R4)
  → IntelligenceClient (POST /api/analyze)
  → Map PAFormData → PARequestModel
  → MockDataService.ApplyAnalysisResult
  → Dashboard receives updated model
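
The result-mapping step in this flow (evidence items → criteria, confidence → 0-100) can be sketched as follows. This is an illustrative Python sketch, not the actual C# mutation code; field names such as criterion_id are assumptions:

```python
VALID_STATUSES = {"MET", "NOT_MET", "UNCLEAR"}

def map_evidence_to_criteria(evidence_items: list[dict]) -> list[dict]:
    """Map analysis evidence items to criterion rows, defaulting unknowns to UNCLEAR."""
    criteria = []
    for item in evidence_items:
        status = item.get("status", "UNCLEAR")
        criteria.append({
            "id": item["criterion_id"],
            # Anything outside the known status set is treated as UNCLEAR.
            "status": status if status in VALID_STATUSES else "UNCLEAR",
        })
    return criteria

def scale_confidence(score: float) -> int:
    """Scale a 0-1 confidence score to the 0-100 range the dashboard displays."""
    return round(max(0.0, min(1.0, score)) * 100)
```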

Test Plan

  • 37 Intelligence tests pass (generic policy + analyze fallback)
  • 437 Gateway tests pass (serialization, HTTP, mutation, DI, MockDataService, Alba integration)
  • 226 Dashboard tests pass (EvidencePanel, PARequestCard, graphqlService)
  • Spec compliance review: PASS
  • Code quality review: PASS

🤖 Generated with Claude Code

rsalus and others added 7 commits February 20, 2026 23:45
…dure codes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ce integration

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…apes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ntation

Replace the hardcoded stub IntelligenceClient with a real HTTP client that
calls the Python Intelligence service at /api/analyze. Add snake_case DTO
serialization bridge, DI wiring with HttpClientFactory, and mock the
IntelligenceClient in integration test bootstraps.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ence pipeline

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor Author

rsalus commented Feb 21, 2026

This stack of pull requests is managed by Graphite.

@coderabbitai

coderabbitai bot commented Feb 21, 2026

📝 Walkthrough

Walkthrough

This PR integrates the Intelligence service HTTP client into the PA request processing workflow, replacing stubs with real service calls across the Gateway API. It adds comprehensive test coverage for ProcessPARequest mutation including FHIR data aggregation and intelligence analysis, and updates the Python intelligence service to support generic policy fallbacks for unsupported procedure codes.

Changes

  • Dashboard GraphQL Service (apps/dashboard/src/api/__tests__/graphqlService.test.ts): Adds tests for the useProcessPARequest hook's mutation invocation and validates cache invalidation of four related query keys on success.
  • Dashboard Components (apps/dashboard/src/components/__tests__/EvidencePanel.test.tsx, apps/dashboard/src/components/__tests__/PARequestCard.test.tsx): Adds test suites for component rendering behavior, including criterion label display, status badges, confidence indicators, workflow states, and callback handlers.
  • Gateway API Dependency Injection (apps/gateway/Gateway.API/DependencyExtensions.cs, apps/gateway/Gateway.API.Tests/DependencyExtensionsTests.cs): Replaces the Intelligence client stub registration with real HTTP client configuration using IntelligenceOptions (BaseUrl, TimeoutSeconds) and applies a caching decorator; adds verification tests.
  • Gateway API Intelligence Client (apps/gateway/Gateway.API/Services/IntelligenceClient.cs, apps/gateway/Gateway.API.Tests/Services/IntelligenceClientTests.cs, apps/gateway/Gateway.API.Tests/Services/IntelligenceClientSerializationTests.cs): Transitions from a stub to a real HTTP POST client calling the /api/analyze endpoint, with snake_case request serialization and PAFormData response deserialization; includes unit tests for HTTP interactions and DTO serialization.
  • Gateway API Mutation Processing (apps/gateway/Gateway.API/GraphQL/Mutations/Mutation.cs, apps/gateway/Gateway.API.Tests/GraphQL/ProcessPARequestMutationTests.cs): Updates the ProcessPARequest mutation to orchestrate the multi-step pipeline: fetch PA request → aggregate clinical data via FHIR → analyze via the Intelligence client → map evidence to criteria → apply results; adds comprehensive unit tests covering the happy path, error cases, and cancellation handling.
  • Gateway API MockDataService (apps/gateway/Gateway.API/Services/MockDataService.cs, apps/gateway/Gateway.API.Tests/Services/MockDataServiceTests.cs): Introduces the ApplyAnalysisResult method to update PA request status, confidence, clinical summary, and criteria atomically; adds tests validating state transitions and field updates.
  • Gateway Integration Testing (apps/gateway/Gateway.API.Tests/Integration/ProcessPARequestAlbaBootstrap.cs, apps/gateway/Gateway.API.Tests/Integration/ProcessPARequestIntegrationTests.cs, apps/gateway/Gateway.API.Tests/Integration/EncounterProcessingAlbaBootstrap.cs, apps/gateway/Gateway.API.Tests/Integration/GatewayAlbaBootstrap.cs): Introduces Alba-based test bootstraps with mocked FHIR and Intelligence dependencies; adds end-to-end integration tests verifying GraphQL mutation behavior, null handling, and criteria mapping across the real ASP.NET pipeline.
  • Python Intelligence Service (apps/intelligence/src/policies/generic_policy.py, apps/intelligence/src/api/analyze.py, apps/intelligence/src/tests/test_generic_policy.py, apps/intelligence/src/tests/test_analyze.py): Adds a generic policy fallback builder for unsupported procedure codes; updates analyze endpoints to use centralized policy resolution instead of hard validation; renames and extends tests to verify generic policy behavior.
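
The generic-policy fallback can be sketched from the criterion ids and fields that appear in the review comments later in this thread (medical_necessity, diagnosis_present, clinical_documentation, each with description, evidence_patterns, and required). The descriptions and patterns for the last two criteria below are illustrative, not the actual source:

```python
def build_generic_policy(procedure_code: str) -> dict:
    """Build a fallback policy for procedure codes without a dedicated policy."""
    return {
        "procedure_code": procedure_code,
        "criteria": [
            {
                "id": "medical_necessity",
                "description": ("The requested procedure is medically necessary "
                                "based on the clinical documentation"),
                "evidence_patterns": [r"medically necessary", r"clinically indicated"],
                "required": True,
            },
            {
                # Illustrative wording; only the id is taken from the tests.
                "id": "diagnosis_present",
                "description": "A supporting diagnosis is documented",
                "evidence_patterns": [r"diagnos"],
                "required": True,
            },
            {
                # Illustrative wording; only the id is taken from the tests.
                "id": "clinical_documentation",
                "description": "Relevant clinical documentation is attached",
                "evidence_patterns": [r"clinical note", r"report"],
                "required": True,
            },
        ],
    }
```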

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client / GraphQL
    participant Gateway as Gateway API
    participant FHIR as FHIR Aggregator
    participant Intelligence as Intelligence Service
    participant DB as MockDataService

    Client->>Gateway: ProcessPARequest(id)
    activate Gateway
    
    Gateway->>DB: GetPARequest(id)
    DB-->>Gateway: PARequest
    
    Gateway->>FHIR: AggregatePatientData(patientId)
    activate FHIR
    FHIR-->>Gateway: ClinicalBundle
    deactivate FHIR
    
    Gateway->>Intelligence: AnalyzeAsync(bundle, procedureCode)
    activate Intelligence
    Intelligence->>Intelligence: BuildAnalyzeRequest()
    Intelligence->>Intelligence: POST /api/analyze
    Intelligence-->>Intelligence: Deserialize PAFormData
    Intelligence-->>Gateway: PAFormData
    deactivate Intelligence
    
    Gateway->>Gateway: MapEvidenceToCriteria(formData)
    Gateway->>Gateway: ComputeConfidence(score)
    
    Gateway->>DB: ApplyAnalysisResult(id, summary, confidence, criteria)
    DB-->>Gateway: Updated PARequestModel
    
    Gateway-->>Client: PARequestModel
    deactivate Gateway

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🧠✨ From stubs to real, the Intelligence now speaks,
Aggregating FHIR through logical peaks,
Criteria mapped, confidence scored with care,
Generic policies catch what's not there!

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 warning

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 30.26%, which is below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title clearly summarizes the main objective: wiring an end-to-end pipeline connecting the Dashboard, Gateway, and Intelligence services into a complete system.
  • Description check: ✅ Passed. The description provides comprehensive context, detailing the data flow, the specific implementations (HTTP client, mutation wiring, generic policy), and test results across all three services.



@rsalus changed the title from "feat(intelligence): add generic policy fallback for unsupported procedure codes" to "feat: Wire end-to-end pipeline — Dashboard ↔ Gateway ↔ Intelligence" on Feb 21, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (6)
apps/intelligence/src/tests/test_analyze.py (1)

55-68: Consider consolidating duplicate tests for unsupported procedure fallback.

Both test_analyze_unsupported_procedure_uses_generic_fallback (line 55) and test_analyze_unsupported_procedure_uses_generic_policy (line 99) verify the same behavior—that unsupported procedure codes return 200 with the correct procedure code. You could use @pytest.mark.parametrize to test multiple codes in one test.

Also applies to: 98-117
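
A consolidated, parameterized version could look roughly like this sketch; analyze_procedure_code is a hypothetical stand-in for the endpoint call, which the real tests exercise with mocked chat_completion functions:

```python
import pytest

# Hypothetical stand-in for calling the analyze endpoint; the real tests
# mock chat_completion in evidence_extractor and form_generator instead.
def analyze_procedure_code(procedure_code: str) -> dict:
    return {"status_code": 200, "procedure_code": procedure_code}

@pytest.mark.parametrize("procedure_code", ["99999", "00000"])
def test_analyze_unsupported_procedures_use_generic_fallback(procedure_code):
    """Unsupported procedure codes fall back to the generic policy instead of a 400."""
    result = analyze_procedure_code(procedure_code)
    assert result["status_code"] == 200
    assert result["procedure_code"] == procedure_code
```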

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/intelligence/src/tests/test_analyze.py` around lines 55 - 68, Two tests,
test_analyze_unsupported_procedure_uses_generic_fallback and
test_analyze_unsupported_procedure_uses_generic_policy, duplicate the same
assertion; consolidate them by converting one into a parameterized test using
pytest.mark.parametrize (e.g., parametrize procedure_code values like "99999"
and another code) and call analyze(request) once per parameter; update the test
function name to reflect parameterization (e.g.,
test_analyze_unsupported_procedures_use_generic_fallback), keep the same mocking
of src.reasoning.evidence_extractor.chat_completion and
src.reasoning.form_generator.chat_completion, and assert result.procedure_code
equals the parameterized procedure_code.
apps/intelligence/src/tests/test_generic_policy.py (1)

23-27: Consider adding test for clinical_documentation criterion.

Tests cover medical_necessity and diagnosis_present, but the third criterion clinical_documentation lacks explicit coverage. While test_build_generic_policy_criteria_have_required_fields validates all criteria have the required fields, a dedicated assertion would ensure the criterion exists.

Suggested addition
def test_build_generic_policy_includes_clinical_documentation_criterion():
    """Generic policy always includes clinical documentation criterion."""
    policy = build_generic_policy("27447")
    criterion_ids = [c["id"] for c in policy["criteria"]]
    assert "clinical_documentation" in criterion_ids
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/intelligence/src/tests/test_generic_policy.py` around lines 23 - 27, Add
a new unit test asserting the "clinical_documentation" criterion is present in
policies built by build_generic_policy; create a test named
test_build_generic_policy_includes_clinical_documentation_criterion() that calls
build_generic_policy("27447"), collects criterion ids from policy["criteria"]
(like the other tests), and asserts "clinical_documentation" is in that list to
ensure the criterion exists.
apps/gateway/Gateway.API.Tests/Integration/GatewayAlbaBootstrap.cs (1)

126-157: Mock setup duplicated with EncounterProcessingAlbaBootstrap.

This mock configuration is nearly identical to EncounterProcessingAlbaBootstrap.cs lines 127-158. Consider extracting a shared helper (e.g., TestMocks.CreateIntelligenceClientMock()) to reduce duplication. Not blocking—test infrastructure often tolerates some repetition for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/gateway/Gateway.API.Tests/Integration/GatewayAlbaBootstrap.cs` around
lines 126 - 157, The mock setup for IIntelligenceClient in
GatewayAlbaBootstrap.cs duplicates the same configuration in
EncounterProcessingAlbaBootstrap.cs; refactor by extracting the shared setup
into a helper such as TestMocks.CreateIntelligenceClientMock() that returns a
configured Substitute<IIntelligenceClient>, then replace the inline mock
creation in both GatewayAlbaBootstrap and EncounterProcessingAlbaBootstrap with
calls to that helper and register its result
(services.RemoveAll<IIntelligenceClient>(); services.AddSingleton(...)). Ensure
the helper encapsulates the AnalyzeAsync Returns(...) behavior and preserves the
PAFormData contents and Argument usage (callInfo.ArgAt<string>(1)).
apps/gateway/Gateway.API.Tests/Services/IntelligenceClientSerializationTests.cs (1)

11-11: Missing sealed modifier per coding guidelines.

Test classes should also follow the "sealed by default" guideline for consistency and to prevent unintended inheritance.

♻️ Proposed fix
-public class IntelligenceClientSerializationTests
+public sealed class IntelligenceClientSerializationTests

As per coding guidelines: "Sealed by default"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/gateway/Gateway.API.Tests/Services/IntelligenceClientSerializationTests.cs`
at line 11, The test class IntelligenceClientSerializationTests should be
declared sealed to follow the "sealed by default" guideline; update the class
declaration for IntelligenceClientSerializationTests to add the sealed modifier
(i.e., change "public class IntelligenceClientSerializationTests" to "public
sealed class IntelligenceClientSerializationTests") so the test class cannot be
inherited.
apps/dashboard/src/components/__tests__/EvidencePanel.test.tsx (1)

198-228: Assert the color thresholds you describe in the test names.

Right now these tests only check the percentage text, so regressions in warning/destructive styling would slip through.

[recommended change]

♻️ Add explicit class assertions
-      expect(screen.getByText(/35%/)).toBeInTheDocument();
+      const confidence = screen.getByText(/35%/);
+      expect(confidence).toBeInTheDocument();
+      expect(confidence.className).toMatch(/destructive/);
...
-      expect(screen.getByText(/50%/)).toBeInTheDocument();
+      const confidence = screen.getByText(/50%/);
+      expect(confidence).toBeInTheDocument();
+      expect(confidence.className).toMatch(/warning/);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/components/__tests__/EvidencePanel.test.tsx` around lines
198 - 228, The tests EvidencePanel_WithLowConfidence_ShowsWarningColor and
EvidencePanel_WithBorderlineConfidence_ShowsCorrectColor only assert percentage
text; update them to also assert the rendered confidence element has the
expected CSS class for color (e.g., destructive/warning) so styling regressions
fail the test. Locate the confidence label within EvidencePanel (use the same
getByText(/35%/) and getByText(/50%/) or a data-testid if available) and add
assertions like expect(confidenceElement).toHaveClass('destructive') for 35% and
expect(confidenceElement).toHaveClass('warning' or the exact warning class used)
for 50%; if no test id exists, add a stable data-testid on the confidence node
in EvidencePanel to target in the tests.
apps/dashboard/src/components/__tests__/PARequestCard.test.tsx (1)

80-90: Scope the “no confidence badge” assertion to badges to avoid false positives.

If the card ever displays other percentages (progress, SLA, etc.), this test will fail even when the confidence badge is absent.

[recommended change]

♻️ Limit the query to badge elements
-      expect(screen.queryByText(/\d+%/)).not.toBeInTheDocument();
+      expect(
+        screen.queryByText(/\d+%/, { selector: '[data-slot="badge"]' })
+      ).not.toBeInTheDocument();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/components/__tests__/PARequestCard.test.tsx` around lines
80 - 90, The current test
PARequestCard_WithUndefinedConfidence_ShowsNoConfidenceBadge uses
screen.queryByText(/\d+%/) which can catch unrelated percentage text; instead
scope the assertion to badge elements only by selecting elements with
data-slot="badge" from the rendered output (e.g., using
container.querySelectorAll('[data-slot="badge"]') or
screen.getAllByRole/attribute for the badge) and assert that none of those badge
elements' textContent matches /\d+%/; keep the existing check that no badge with
data-slot='badge' named "Review" exists (confidenceBadge) and remove or replace
the global percentage query accordingly so the test only fails when a confidence
badge (data-slot="badge") shows a percent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/intelligence/src/tests/test_analyze.py`:
- Line 111: The mock return string assigned to mock_llm (AsyncMock) is too long;
split the long literal into shorter concatenated parts or use an implicitly
joined parenthesized string so the assignment to mock_llm =
AsyncMock(return_value=...) stays under 100 chars. Update the
AsyncMock(return_value="...") in the test_analyze.py test to use either multiple
quoted segments joined by + or a parenthesized multi-line string for the same
content, keeping the symbol name mock_llm and AsyncMock call unchanged.

In `@apps/intelligence/src/tests/test_generic_policy.py`:
- Around line 1-3: Remove the unused pytest import from the test module: in
apps/intelligence/src/tests/test_generic_policy.py delete the line "import
pytest" so the file only imports build_generic_policy from
src.policies.generic_policy; ensure no other pytest-specific features are
referenced and run linters/tests to confirm the F401/I001 warnings are resolved.


Comment on lines +98 to +100
const callArgs = mockRequest.mock.calls[0];
expect(callArgs[0]).toContain('processPARequest');
expect(callArgs[1]).toEqual({ id: 'PA-001' });

⚠️ Potential issue | 🔴 Critical

Fix TypeScript tuple access error causing pipeline failure.

The pipeline reports TS2493: Tuple type '[options: RequestOptions<object, unknown>]' of length '1' has no element at index '1'. The mock type inference doesn't recognize the second argument. Apply the same workaround used in useDenyPARequest test at line 74.

🐛 Proposed fix
-    const callArgs = mockRequest.mock.calls[0];
+    const callArgs = mockRequest.mock.calls[0] as unknown[];
     expect(callArgs[0]).toContain('processPARequest');
     expect(callArgs[1]).toEqual({ id: 'PA-001' });
🧰 Tools
🪛 GitHub Actions: CI

[error] 100-100: TS2493: Tuple type '[options: RequestOptions<object, unknown>]' of length '1' has no element at index '1'.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/api/__tests__/graphqlService.test.ts` around lines 98 -
100, The TypeScript tuple error comes from accessing
mockRequest.mock.calls[0][1]; fix it by asserting the call tuple as a flexible
array like the other test: change the assignment of callArgs to cast the mock
call to an any[] (e.g., const callArgs = mockRequest.mock.calls[0] as unknown as
any[]), then keep the checks expect(callArgs[0]).toContain('processPARequest')
and expect(callArgs[1]).toEqual({ id: 'PA-001' }) so TypeScript no longer
complains when accessing index 1.

Comment on lines +122 to +124
// Remove Redis (not available in tests)
services.RemoveAll<IConnectionMultiplexer>();
services.AddSingleton<IConnectionMultiplexer?>(sp => null);

⚠️ Potential issue | 🟡 Minor

Fix nullable type parameter constraint violation.

Static analysis correctly flags line 124: IConnectionMultiplexer? doesn't satisfy the class constraint on AddSingleton<TService>. The nullable annotation creates a type mismatch.

🔧 Proposed fix
 // Remove Redis (not available in tests)
 services.RemoveAll<IConnectionMultiplexer>();
-services.AddSingleton<IConnectionMultiplexer?>(sp => null);
+services.AddSingleton<IConnectionMultiplexer>(sp => null!);

Alternatively, if downstream code needs to handle null gracefully, consider registering a mock that returns defaults rather than injecting null directly.

🧰 Tools
🪛 GitHub Check: Gateway Build & Test

[warning] 124-124:
The type 'StackExchange.Redis.IConnectionMultiplexer?' cannot be used as type parameter 'TService' in the generic type or method 'ServiceCollectionServiceExtensions.AddSingleton(IServiceCollection, Func<IServiceProvider, TService>)'. Nullability of type argument 'StackExchange.Redis.IConnectionMultiplexer?' doesn't match 'class' constraint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/gateway/Gateway.API.Tests/Integration/ProcessPARequestAlbaBootstrap.cs`
around lines 122 - 124, The nullable type parameter on AddSingleton
(services.AddSingleton<IConnectionMultiplexer?>(sp => null)) violates the
generic class constraint; change the registration to use the non-nullable
service type and return a null-forgiving value or a test double instead — e.g.
replace with services.AddSingleton<IConnectionMultiplexer>(sp => null!) if you
intentionally want a null instance, or better register a test double/mock
(services.AddSingleton<IConnectionMultiplexer>(sp =>
Mock.Of<IConnectionMultiplexer>())) so downstream code can safely consume
IConnectionMultiplexer; adjust the factory accordingly for
RemoveAll/IConnectionMultiplexer and any tests that expect null behavior.


response.EnsureSuccessStatusCode();

var result = await response.Content.ReadFromJsonAsync<PAFormData>(cancellationToken: cancellationToken);

⚠️ Potential issue | 🔴 Critical

Deserialization missing snake_case options — will fail with Python service response.

The request is serialized with SnakeCaseSerializerOptions (line 44), but ReadFromJsonAsync on line 49 uses default options. If the Python service returns snake_case keys (e.g., confidence_score, clinical_summary), they won't map to PAFormData properties like ConfidenceScore and ClinicalSummary.

🐛 Proposed fix: Pass the same serializer options for deserialization
-        var result = await response.Content.ReadFromJsonAsync<PAFormData>(cancellationToken: cancellationToken);
+        var result = await response.Content.ReadFromJsonAsync<PAFormData>(SnakeCaseSerializerOptions, cancellationToken);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/gateway/Gateway.API/Services/IntelligenceClient.cs` at line 49,
ReadFromJsonAsync is called without the SnakeCaseSerializerOptions, so
snake_case JSON from the Python service won't map to PAFormData (e.g.,
confidence_score → ConfidenceScore); update the deserialization to pass the same
SnakeCaseSerializerOptions used during serialization (use
response.Content.ReadFromJsonAsync<PAFormData>(options:
SnakeCaseSerializerOptions, cancellationToken: cancellationToken) or equivalent)
so response.Content deserializes into PAFormData correctly.

Comment on lines +28 to +38
{
"id": "medical_necessity",
"description": "The requested procedure is medically necessary based on the clinical documentation",
"evidence_patterns": [
r"medically necessary",
r"medical necessity",
r"clinically indicated",
r"recommended.*procedure",
r"required.*treatment",
],
"required": True,

⚠️ Potential issue | 🟡 Minor

Fix E501: Line 30 exceeds 100 characters.

The CI pipeline flagged this line as 116 characters. Split the description string to comply with the line length limit.

Proposed fix
             {
                 "id": "medical_necessity",
-                "description": "The requested procedure is medically necessary based on the clinical documentation",
+                "description": (
+                    "The requested procedure is medically necessary "
+                    "based on the clinical documentation"
+                ),
                 "evidence_patterns": [
🧰 Tools
🪛 GitHub Actions: CI

[error] 30-30: E501 Line too long (116 > 100).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/intelligence/src/policies/generic_policy.py` around lines 28 - 38, the
description string in the policy dict with "id": "medical_necessity" exceeds 100
characters; split it into shorter adjacent string literals (implicit
concatenation inside parentheses) so no line exceeds 100 characters while
keeping the wording identical.
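The split is behavior-preserving: Python concatenates adjacent string literals at compile time, so the parenthesized fragments yield a value identical to the single long literal. A quick sketch:

```python
# The flagged single-literal form
long_form = (
    "The requested procedure is medically necessary based on the clinical documentation"
)

# Adjacent literals inside parentheses are joined at compile time,
# so this is byte-for-byte identical to long_form
split_form = (
    "The requested procedure is medically necessary "
    "based on the clinical documentation"
)
```

The same technique applies to the other E501 finding below, so the exact wording of every string can be preserved while satisfying the line-length limit.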

"procedures": [],
},
)
mock_llm = AsyncMock(return_value="The criterion is MET based on the clinical evidence provided.")
⚠️ Potential issue | 🟡 Minor

Fix E501: Line 111 exceeds 100 characters.

Split the mock return value string to comply with line length limits.

Proposed fix
-    mock_llm = AsyncMock(return_value="The criterion is MET based on the clinical evidence provided.")
+    mock_llm = AsyncMock(
+        return_value="The criterion is MET based on the clinical evidence provided."
+    )
🧰 Tools
🪛 GitHub Actions: CI

[error] 111-111: E501 Line too long (102 > 100).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/intelligence/src/tests/test_analyze.py` at line 111, the return string
passed to AsyncMock pushes the line past 100 characters; wrap the
AsyncMock(return_value="...") call across multiple lines (or split the literal
into implicitly joined fragments) so every line stays under 100 characters,
keeping the mock_llm name and the string content unchanged.

Comment on lines +1 to +3
"""Tests for generic policy builder."""
import pytest
from src.policies.generic_policy import build_generic_policy
⚠️ Potential issue | 🟡 Minor

Remove unused pytest import to fix F401 and I001 warnings.

The CI flagged that pytest is imported but never used. Since these tests don't use any pytest-specific features (no fixtures, markers, or raises), the import can be removed entirely.

Proposed fix
 """Tests for generic policy builder."""
-import pytest
 from src.policies.generic_policy import build_generic_policy
🧰 Tools
🪛 GitHub Actions: CI

[warning] 2-2: I001 Import block is un-sorted or un-formatted


[warning] 2-2: F401 'pytest' imported but unused

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/intelligence/src/tests/test_generic_policy.py` around lines 1 - 3,
delete the unused `import pytest` line so the module only imports
build_generic_policy from src.policies.generic_policy, then re-run the linters
to confirm the F401 and I001 warnings are resolved.
