Labels
`priority:high` (Address soon) · `scope:intelligence` (Intelligence, Python AI/ML) · `scope:llm` (LLM and AI model integration) · `type:feature` (New feature or enhancement)
Description
Note for implementers: The design outlined here is a guideline, not a strict specification. Feel free to pivot from the suggested approach if you discover a better solution during implementation. Update the issue with your rationale when deviating significantly.
Summary
Generate PA form field mappings and clinical summary from extracted evidence.
Context
This function is the final step in the Intelligence Service reasoning pipeline. It receives:
- ClinicalBundle - structured FHIR data
- EvidenceItems - output from the evidence extractor (#6: feat(intelligence): implement evidence extraction with LLM)
- Policy - the matched policy definition (#8: feat(intelligence): implement policy matching for demo procedure)
It produces a complete PAFormResponse ready for PDF stamping by the Gateway.
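As a rough sketch of the output shape, here is what a minimal `PAFormResponse` might look like. The field names below are assumptions for illustration; the real model lives in `apps/intelligence/src/models/pa_form.py` and is presumably a Pydantic model per §4.5 (a plain dataclass is used here only to keep the sketch dependency-free):

```python
from dataclasses import dataclass, field

@dataclass
class PAFormResponse:
    """Hypothetical shape of the final PA form payload (field names assumed)."""
    recommendation: str      # "APPROVE" | "NEED_INFO" | "MANUAL_REVIEW"
    confidence_score: float  # 0.0 - 1.0
    clinical_summary: str    # LLM-generated narrative for the form
    field_mappings: dict[str, str] = field(default_factory=dict)  # PDF field name -> value

resp = PAFormResponse(
    recommendation="APPROVE",
    confidence_score=0.87,
    clinical_summary="Patient meets all required criteria for the procedure.",
    field_mappings={"patient_name": "Jane Doe"},
)
```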
Tasks
- Create prompt template for clinical summary generation (see §4.4)
- Implement recommendation calculation logic (see below)
- Generate field_mappings from clinical bundle + policy
- Call LLM via `llm_client.chat_completion()`
- Parse the LLM response and build `PAFormResponse`
- Add unit tests
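The prompt-assembly step in the task list can be sketched as a small pure function, which keeps it easy to unit-test without calling the LLM. The template wording and helper name below are placeholders, not the real `FORM_GENERATION_PROMPT` from §4.4:

```python
# Placeholder template; the real wording is FORM_GENERATION_PROMPT in design doc §4.4.
FORM_GENERATION_PROMPT = (
    "You are generating a prior-authorization clinical summary.\n"
    "Evidence:\n{evidence}\n"
    "Respond with JSON matching the PAFormResponse schema."
)

def build_form_generation_prompt(evidence_lines: list[str]) -> str:
    """Fill the template with one bullet per evidence item (hypothetical helper)."""
    bullets = "\n".join(f"- {line}" for line in evidence_lines)
    return FORM_GENERATION_PROMPT.format(evidence=bullets)

prompt = build_form_generation_prompt(["MRI completed: MET (0.92)"])
```

Keeping prompt construction separate from the `llm_client.chat_completion()` call means the unit tests in the last task can assert on the prompt string directly.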
Recommendation Logic
```python
def calculate_recommendation(evidence: list[EvidenceItem]) -> tuple[str, float]:
    """
    Returns (recommendation, confidence_score).

    Rules:
    - APPROVE: all required criteria MET and average confidence >= 0.8
    - NEED_INFO: any required criterion UNCLEAR
    - MANUAL_REVIEW: any required criterion NOT_MET, or average confidence < 0.8
    """
    met = [e for e in evidence if e.status == "MET"]
    not_met = [e for e in evidence if e.status == "NOT_MET"]
    unclear = [e for e in evidence if e.status == "UNCLEAR"]
    if not_met:
        return ("MANUAL_REVIEW", 0.5)
    if unclear:
        return ("NEED_INFO", 0.7)
    avg_confidence = sum(e.confidence for e in met) / len(met) if met else 0.0
    if avg_confidence >= 0.8:
        return ("APPROVE", avg_confidence)
    return ("MANUAL_REVIEW", avg_confidence)
```

Files
| File | Action |
|---|---|
| apps/intelligence/src/reasoning/form_generator.py | Modify |
| apps/intelligence/src/llm_client.py | Reference |
| apps/intelligence/src/models/pa_form.py | Reference |
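As a sanity check, the recommendation rules above can be exercised end to end with a minimal `EvidenceItem` stand-in (the real model comes from #6):

```python
from dataclasses import dataclass

@dataclass
class EvidenceItem:  # minimal stand-in for the real model from #6
    status: str      # "MET" | "NOT_MET" | "UNCLEAR"
    confidence: float

def calculate_recommendation(evidence: list[EvidenceItem]) -> tuple[str, float]:
    """Same rules as the issue body: NOT_MET and UNCLEAR short-circuit first."""
    met = [e for e in evidence if e.status == "MET"]
    if any(e.status == "NOT_MET" for e in evidence):
        return ("MANUAL_REVIEW", 0.5)
    if any(e.status == "UNCLEAR" for e in evidence):
        return ("NEED_INFO", 0.7)
    avg = sum(e.confidence for e in met) / len(met) if met else 0.0
    return ("APPROVE", avg) if avg >= 0.8 else ("MANUAL_REVIEW", avg)

# All criteria MET with high confidence -> APPROVE
assert calculate_recommendation(
    [EvidenceItem("MET", 0.9), EvidenceItem("MET", 0.85)]
)[0] == "APPROVE"
# Any UNCLEAR -> NEED_INFO
assert calculate_recommendation([EvidenceItem("UNCLEAR", 0.5)]) == ("NEED_INFO", 0.7)
# Any NOT_MET -> MANUAL_REVIEW
assert calculate_recommendation([EvidenceItem("NOT_MET", 0.9)]) == ("MANUAL_REVIEW", 0.5)
```

These cases would make a reasonable starting point for the "Add unit tests" task.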
Dependencies
| Issue | Relationship |
|---|---|
| #6 | Provides EvidenceItem input |
| #8 | Provides policy with field mappings |
| #5 | Determines PDF field names |
Design References
- §4.4 LLM Reasoning Chain - contains FORM_GENERATION_PROMPT
- §4.5 Structured Output Enforcement - defines the PAFormResponse model
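A §4.5-style structured-output check could look like the following sketch: parse the LLM's JSON reply and fail loudly on schema violations before building the response object. The key names are assumptions, and the real project presumably delegates this to the PAFormResponse model's own validation:

```python
import json

# Assumed key set; the authoritative schema is the PAFormResponse model (§4.5).
REQUIRED_KEYS = {"recommendation", "confidence_score", "clinical_summary", "field_mappings"}

def parse_pa_form_response(raw: str) -> dict:
    """Validate the LLM's JSON reply against the expected keys (hypothetical helper)."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"LLM response missing keys: {sorted(missing)}")
    if data["recommendation"] not in {"APPROVE", "NEED_INFO", "MANUAL_REVIEW"}:
        raise ValueError(f"unexpected recommendation: {data['recommendation']}")
    return data

reply = (
    '{"recommendation": "NEED_INFO", "confidence_score": 0.7, '
    '"clinical_summary": "...", "field_mappings": {}}'
)
parsed = parse_pa_form_response(reply)
```

Rejecting malformed replies here keeps bad data from ever reaching the Gateway's PDF stamping step.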