feat: Wire end-to-end pipeline — Dashboard ↔ Gateway ↔ Intelligence (#27)
Commits:
- …dure codes
- …ce integration
- …apes
- …ntation: Replace the hardcoded stub IntelligenceClient with a real HTTP client that calls the Python Intelligence service at /api/analyze. Add snake_case DTO serialization bridge, DI wiring with HttpClientFactory, and mock the IntelligenceClient in integration test bootstraps.
- …ence pipeline

All commits: Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

This PR integrates the Intelligence service HTTP client into the PA request processing workflow, replacing stubs with real service calls across the Gateway API. It adds comprehensive test coverage for the ProcessPARequest mutation, including FHIR data aggregation and intelligence analysis, and updates the Python intelligence service to support generic policy fallbacks for unsupported procedure codes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client / GraphQL
    participant Gateway as Gateway API
    participant FHIR as FHIR Aggregator
    participant Intelligence as Intelligence Service
    participant DB as MockDataService
    Client->>Gateway: ProcessPARequest(id)
    activate Gateway
    Gateway->>DB: GetPARequest(id)
    DB-->>Gateway: PARequest
    Gateway->>FHIR: AggregatePatientData(patientId)
    activate FHIR
    FHIR-->>Gateway: ClinicalBundle
    deactivate FHIR
    Gateway->>Intelligence: AnalyzeAsync(bundle, procedureCode)
    activate Intelligence
    Intelligence->>Intelligence: BuildAnalyzeRequest()
    Intelligence->>Intelligence: POST /api/analyze
    Intelligence-->>Intelligence: Deserialize PAFormData
    Intelligence-->>Gateway: PAFormData
    deactivate Intelligence
    Gateway->>Gateway: MapEvidenceToCriteria(formData)
    Gateway->>Gateway: ComputeConfidence(score)
    Gateway->>DB: ApplyAnalysisResult(id, summary, confidence, criteria)
    DB-->>Gateway: Updated PARequestModel
    Gateway-->>Client: PARequestModel
    deactivate Gateway
```
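The BuildAnalyzeRequest → POST /api/analyze hop in the diagram can be sketched as a small request builder. This is an illustrative Python sketch only: the endpoint path comes from this PR, but the body field names (`clinical_bundle`, `procedure_code`) are assumptions, not the real `AnalyzeRequestDto` contract.

```python
import json


def build_analyze_request(bundle: dict, procedure_code: str) -> bytes:
    """Serialize the analyze payload the way the Python service expects:
    snake_case keys, JSON-encoded, ready to POST to /api/analyze."""
    payload = {
        "clinical_bundle": bundle,        # assumed field name
        "procedure_code": procedure_code, # assumed field name
    }
    return json.dumps(payload).encode("utf-8")


# Example: build the body the Gateway would send for a knee-replacement code.
body = build_analyze_request({"conditions": []}, "27447")
decoded = json.loads(body)
```

The real client does this via `HttpClientFactory` with snake_case `JsonSerializerOptions`; the sketch just makes the wire shape concrete.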
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 6
🧹 Nitpick comments (6)
apps/intelligence/src/tests/test_analyze.py (1)
55-68: Consider consolidating duplicate tests for unsupported procedure fallback.

Both `test_analyze_unsupported_procedure_uses_generic_fallback` (line 55) and `test_analyze_unsupported_procedure_uses_generic_policy` (line 99) verify the same behavior: that unsupported procedure codes return 200 with the correct procedure code. You could use `@pytest.mark.parametrize` to test multiple codes in one test.

Also applies to: 98-117
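A consolidated version might look like the sketch below. It is illustrative only: the stub `analyze()` stands in for the real endpoint call, which the actual tests drive through mocked `chat_completion` functions.

```python
import pytest


def analyze(procedure_code: str) -> dict:
    """Hypothetical stand-in for posting an AnalyzeRequest to the service;
    the real tests mock src.reasoning.*.chat_completion instead."""
    return {"status_code": 200, "procedure_code": procedure_code}


# One parameterized test replaces the two near-duplicate tests.
@pytest.mark.parametrize("procedure_code", ["99999", "00000"])
def test_analyze_unsupported_procedures_use_generic_fallback(procedure_code):
    result = analyze(procedure_code)
    assert result["status_code"] == 200
    assert result["procedure_code"] == procedure_code
```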
apps/intelligence/src/tests/test_generic_policy.py (1)
23-27: Consider adding a test for the `clinical_documentation` criterion.

Tests cover `medical_necessity` and `diagnosis_present`, but the third criterion, `clinical_documentation`, lacks explicit coverage. While `test_build_generic_policy_criteria_have_required_fields` validates that all criteria have the required fields, a dedicated assertion would ensure the criterion exists.

Suggested addition
```python
def test_build_generic_policy_includes_clinical_documentation_criterion():
    """Generic policy always includes clinical documentation criterion."""
    policy = build_generic_policy("27447")
    criterion_ids = [c["id"] for c in policy["criteria"]]
    assert "clinical_documentation" in criterion_ids
```

apps/gateway/Gateway.API.Tests/Integration/GatewayAlbaBootstrap.cs (1)
126-157: Mock setup duplicated with EncounterProcessingAlbaBootstrap.

This mock configuration is nearly identical to `EncounterProcessingAlbaBootstrap.cs` lines 127-158. Consider extracting a shared helper (e.g., `TestMocks.CreateIntelligenceClientMock()`) to reduce duplication. Not blocking: test infrastructure often tolerates some repetition for clarity.

apps/gateway/Gateway.API.Tests/Services/IntelligenceClientSerializationTests.cs (1)
11-11: Missing `sealed` modifier per coding guidelines.

Test classes should also follow the "sealed by default" guideline for consistency and to prevent unintended inheritance.

♻️ Proposed fix

```diff
-public class IntelligenceClientSerializationTests
+public sealed class IntelligenceClientSerializationTests
```

As per coding guidelines: "Sealed by default"

apps/dashboard/src/components/__tests__/EvidencePanel.test.tsx (1)
198-228: Assert the color thresholds you describe in the test names.

Right now these tests only check the percentage text, so regressions in warning/destructive styling would slip through.

[recommended change]

♻️ Add explicit class assertions

```diff
-    expect(screen.getByText(/35%/)).toBeInTheDocument();
+    const confidence = screen.getByText(/35%/);
+    expect(confidence).toBeInTheDocument();
+    expect(confidence.className).toMatch(/destructive/);
 ...
-    expect(screen.getByText(/50%/)).toBeInTheDocument();
+    const confidence = screen.getByText(/50%/);
+    expect(confidence).toBeInTheDocument();
+    expect(confidence.className).toMatch(/warning/);
```

apps/dashboard/src/components/__tests__/PARequestCard.test.tsx (1)
80-90: Scope the "no confidence badge" assertion to badges to avoid false positives.

If the card ever displays other percentages (progress, SLA, etc.), this test will fail even when the confidence badge is absent.

[recommended change]

♻️ Limit the query to badge elements

```diff
-    expect(screen.queryByText(/\d+%/)).not.toBeInTheDocument();
+    expect(
+      screen.queryByText(/\d+%/, { selector: '[data-slot="badge"]' })
+    ).not.toBeInTheDocument();
```
In `apps/dashboard/src/api/__tests__/graphqlService.test.ts`:

```ts
const callArgs = mockRequest.mock.calls[0];
expect(callArgs[0]).toContain('processPARequest');
expect(callArgs[1]).toEqual({ id: 'PA-001' });
```
Fix TypeScript tuple access error causing pipeline failure.
The pipeline reports TS2493: Tuple type '[options: RequestOptions<object, unknown>]' of length '1' has no element at index '1'. The mock type inference doesn't recognize the second argument. Apply the same workaround used in useDenyPARequest test at line 74.
🐛 Proposed fix

```diff
-    const callArgs = mockRequest.mock.calls[0];
+    const callArgs = mockRequest.mock.calls[0] as unknown[];
     expect(callArgs[0]).toContain('processPARequest');
     expect(callArgs[1]).toEqual({ id: 'PA-001' });
```
🧰 Tools
🪛 GitHub Actions: CI
[error] 100-100: TS2493: Tuple type '[options: RequestOptions<object, unknown>]' of length '1' has no element at index '1'.
In `apps/gateway/Gateway.API.Tests/Integration/ProcessPARequestAlbaBootstrap.cs`:

```csharp
// Remove Redis (not available in tests)
services.RemoveAll<IConnectionMultiplexer>();
services.AddSingleton<IConnectionMultiplexer?>(sp => null);
```
Fix nullable type parameter constraint violation.
Static analysis correctly flags line 124: IConnectionMultiplexer? doesn't satisfy the class constraint on AddSingleton<TService>. The nullable annotation creates a type mismatch.
🔧 Proposed fix

```diff
 // Remove Redis (not available in tests)
 services.RemoveAll<IConnectionMultiplexer>();
-services.AddSingleton<IConnectionMultiplexer?>(sp => null);
+services.AddSingleton<IConnectionMultiplexer>(sp => null!);
```

Alternatively, if downstream code needs to handle null gracefully, consider registering a mock that returns defaults rather than injecting null directly.
🧰 Tools
🪛 GitHub Check: Gateway Build & Test
[warning] 124-124:
The type 'StackExchange.Redis.IConnectionMultiplexer?' cannot be used as type parameter 'TService' in the generic type or method 'ServiceCollectionServiceExtensions.AddSingleton(IServiceCollection, Func<IServiceProvider, TService>)'. Nullability of type argument 'StackExchange.Redis.IConnectionMultiplexer?' doesn't match 'class' constraint.
In `apps/gateway/Gateway.API/Services/IntelligenceClient.cs`:

```csharp
response.EnsureSuccessStatusCode();

var result = await response.Content.ReadFromJsonAsync<PAFormData>(cancellationToken: cancellationToken);
```
Deserialization missing snake_case options — will fail with Python service response.
The request is serialized with SnakeCaseSerializerOptions (line 44), but ReadFromJsonAsync on line 49 uses default options. If the Python service returns snake_case keys (e.g., confidence_score, clinical_summary), they won't map to PAFormData properties like ConfidenceScore and ClinicalSummary.
🐛 Proposed fix: Pass the same serializer options for deserialization

```diff
-var result = await response.Content.ReadFromJsonAsync<PAFormData>(cancellationToken: cancellationToken);
+var result = await response.Content.ReadFromJsonAsync<PAFormData>(SnakeCaseSerializerOptions, cancellationToken);
```
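To see why the options matter, here is a small Python illustration of the key mismatch: the service emits snake_case keys while the C# `PAFormData` properties are PascalCase, so a deserialize with default options finds nothing to bind. The field names come from the comment above; the converter itself is illustrative.

```python
def snake_to_pascal(key: str) -> str:
    """Map a snake_case JSON key to the PascalCase property name the
    C# side would look for without a snake_case naming policy."""
    return "".join(part.capitalize() for part in key.split("_"))


# Example service response keys and the C# property names they must reach.
payload = {"confidence_score": 0.82, "clinical_summary": "Meets criteria"}
mapped = {snake_to_pascal(k): v for k, v in payload.items()}
# mapped == {"ConfidenceScore": 0.82, "ClinicalSummary": "Meets criteria"}
```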
In `apps/intelligence/src/policies/generic_policy.py`:

```python
{
    "id": "medical_necessity",
    "description": "The requested procedure is medically necessary based on the clinical documentation",
    "evidence_patterns": [
        r"medically necessary",
        r"medical necessity",
        r"clinically indicated",
        r"recommended.*procedure",
        r"required.*treatment",
    ],
    "required": True,
```
Fix E501: Line 30 exceeds 100 characters.
The CI pipeline flagged this line as 116 characters. Split the description string to comply with the line length limit.
Proposed fix

```diff
 {
     "id": "medical_necessity",
-    "description": "The requested procedure is medically necessary based on the clinical documentation",
+    "description": (
+        "The requested procedure is medically necessary "
+        "based on the clinical documentation"
+    ),
     "evidence_patterns": [
```
🧰 Tools
🪛 GitHub Actions: CI
[error] 30-30: E501 Line too long (116 > 100).
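As a quick sanity check on the fix above, Python's implicit adjacent-literal concatenation yields exactly the same single string while keeping each source line short:

```python
# Adjacent string literals inside parentheses are joined at compile time.
description = (
    "The requested procedure is medically necessary "
    "based on the clinical documentation"
)
one_line = "The requested procedure is medically necessary based on the clinical documentation"
assert description == one_line
```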
In `apps/intelligence/src/tests/test_analyze.py`:

```python
        "procedures": [],
    },
)
mock_llm = AsyncMock(return_value="The criterion is MET based on the clinical evidence provided.")
```
Fix E501: Line 111 exceeds 100 characters.
Split the mock return value string to comply with line length limits.
Proposed fix

```diff
-    mock_llm = AsyncMock(return_value="The criterion is MET based on the clinical evidence provided.")
+    mock_llm = AsyncMock(
+        return_value="The criterion is MET based on the clinical evidence provided."
+    )
```
🧰 Tools
🪛 GitHub Actions: CI
[error] 111-111: E501 Line too long (102 > 100).
In `apps/intelligence/src/tests/test_generic_policy.py`:

```python
"""Tests for generic policy builder."""
import pytest
from src.policies.generic_policy import build_generic_policy
```
Remove unused pytest import to fix F401 and I001 warnings.
The CI flagged that pytest is imported but never used. Since these tests don't use any pytest-specific features (no fixtures, markers, or raises), the import can be removed entirely.
Proposed fix

```diff
 """Tests for generic policy builder."""
-import pytest
 from src.policies.generic_policy import build_generic_policy
```
🧰 Tools
🪛 GitHub Actions: CI
[warning] 2-2: I001 Import block is un-sorted or un-formatted
[warning] 2-2: F401 'pytest' imported but unused

Summary

Connects the three independently-working services into a complete end-to-end pipeline:

- A real `HttpClient` implementation that calls the Python Intelligence service. Includes an `AnalyzeRequestDto` serialization bridge (C# PascalCase → Python snake_case) and DI wiring with `IntelligenceOptions`
- An `ApplyAnalysisResult` method to store real analysis results (status, confidence, clinical summary, criteria)

Data Flow (Target State)

Test Plan
🤖 Generated with Claude Code