Labels
priority:high (Address soon), scope:intelligence (Intelligence (Python AI/ML)), scope:llm (LLM and AI model integration), type:feature (New feature or enhancement)
Description
Note for implementers: The design outlined here is a guideline, not a strict specification. Feel free to pivot from the suggested approach if you discover a better solution during implementation. Update the issue with your rationale when deviating significantly.
Summary
Implement LLM-based evidence extraction that evaluates clinical data against policy criteria. This is the core reasoning component of the Intelligence Service that determines whether each policy requirement is met, not met, or unclear based on the clinical documentation.
Context
What This Component Does
The evidence extractor sits in the middle of the Intelligence Service reasoning pipeline:
ClinicalBundle + Policy → [Evidence Extractor] → list[EvidenceItem]
Inputs:
- ClinicalBundle (`src/models/clinical_bundle.py`) - structured FHIR data containing patient, conditions, observations, procedures, and document_texts
- Policy (`src/policies/example_policy.py`) - dict with criteria, evidence_patterns, and thresholds
Output:
- `list[EvidenceItem]` - one item per criterion with status (MET/NOT_MET/UNCLEAR), evidence text, source, and confidence
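The actual `EvidenceItem` model lives in `src/models/pa_form.py`; as a rough sketch of the shape described above (the field names here are assumptions, not the real model):

```python
from dataclasses import dataclass
from enum import Enum


class CriterionStatus(str, Enum):
    """Possible outcomes for a single policy criterion."""
    MET = "MET"
    NOT_MET = "NOT_MET"
    UNCLEAR = "UNCLEAR"


@dataclass
class EvidenceItem:
    """One evaluation result per policy criterion (illustrative fields)."""
    criterion_id: str
    status: CriterionStatus
    evidence_text: str   # excerpt from the clinical documentation
    source: str          # e.g. an observation ID or a document reference
    confidence: float    # 0.0-1.0, see Confidence Scoring below
```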
Where It Fits
This component is called by the /analyze endpoint (#9) and its output feeds into the form generator (#7).
Tasks
- Create prompt template using design doc §4.4
- Format clinical data for prompt (helper function)
- Format policy criteria for prompt (helper function)
- Call LLM via `llm_client.chat_completion()`
- Parse JSON response and map to EvidenceItem list
- Implement fallback for LLM unavailability (pattern matching)
- Add unit tests
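The tasks above could wire together roughly as follows. The real `EVIDENCE_EXTRACTION_PROMPT` comes from design doc §4.4; the prompt text, helper shapes, dict keys, and JSON response format below are illustrative assumptions, not the spec:

```python
import json

# Hypothetical stand-in; the real template is defined in design doc §4.4.
EVIDENCE_EXTRACTION_PROMPT = (
    "Evaluate each criterion against the clinical data.\n"
    "Clinical data:\n{clinical_data}\n"
    "Criteria:\n{criteria}\n"
    'Respond as JSON: {{"criteria": [{{"id": ..., "status": ..., '
    '"evidence": ..., "source": ..., "confidence": ...}}]}}'
)


def extract_evidence(bundle: dict, policy: dict, llm_client) -> list[dict]:
    """Return one result dict per criterion (shapes here are assumptions)."""
    prompt = EVIDENCE_EXTRACTION_PROMPT.format(
        clinical_data=json.dumps(bundle),
        criteria=json.dumps(policy["criteria"]),
    )
    try:
        raw = llm_client.chat_completion(prompt)
        return json.loads(raw)["criteria"]
    except Exception:
        # Fallback when the LLM is unavailable or returns malformed JSON:
        # simple substring matching against the policy's evidence_patterns.
        notes = " ".join(bundle.get("document_texts", [])).lower()
        results = []
        for criterion in policy["criteria"]:
            patterns = policy["evidence_patterns"].get(criterion["id"], [])
            hit = next((p for p in patterns if p.lower() in notes), None)
            results.append({
                "id": criterion["id"],
                "status": "MET" if hit else "UNCLEAR",
                "evidence": hit or "",
                "source": "document_texts" if hit else "",
                # Pattern hits are weaker than LLM reasoning, so cap confidence.
                "confidence": 0.6 if hit else 0.0,
            })
        return results
```

Note that the fallback never emits NOT_MET: absence of a pattern match is treated as UNCLEAR, since pattern matching cannot distinguish "not documented" from "contradicted".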
Business Logic
Criterion Status Determination
| Status | When to Use |
|---|---|
| MET | Clinical data clearly satisfies the requirement |
| NOT_MET | Clinical data contradicts or fails to meet requirement |
| UNCLEAR | Insufficient information or ambiguous documentation |
Confidence Scoring
- High (0.8-1.0): Clear, explicit documentation
- Medium (0.5-0.79): Implicit evidence requiring interpretation
- Low (0.0-0.49): Weak or ambiguous evidence
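The bands above could be enforced with a tiny helper (a sketch; the function name is an assumption):

```python
def confidence_band(score: float) -> str:
    """Map a 0.0-1.0 confidence score to the high/medium/low bands."""
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"confidence out of range: {score}")
    if score >= 0.8:
        return "high"    # clear, explicit documentation
    if score >= 0.5:
        return "medium"  # implicit evidence requiring interpretation
    return "low"         # weak or ambiguous evidence
```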
Files
| File | Action |
|---|---|
| apps/intelligence/src/reasoning/evidence_extractor.py | Modify |
| apps/intelligence/src/tests/test_evidence_extractor.py | Modify |
| apps/intelligence/src/llm_client.py | Reference |
| apps/intelligence/src/models/pa_form.py | Reference |
Dependencies
| Issue | Relationship |
|---|---|
| #5 | Soft dependency - determines policy criteria |
| #8 | Provides policy structure |
| #7 | Downstream - uses EvidenceItem output |
| #9 | Integration point - calls this function |
Design References
- §4.1 Clinical Reasoning Architecture
- §4.3 Policy Definition
- §4.4 LLM Reasoning Chain - contains `EVIDENCE_EXTRACTION_PROMPT`