FastAPI-based AI Agent Service for intelligent call handling, customer information extraction, and service scheduling.
The AI Service is the brain of the DispatchAI platform, handling all AI-powered interactions with customers during phone calls. It uses LangGraph-based multi-step agents to collect customer information, understand service requests, and schedule appointments.
```
apps/ai/
├── app/
│   ├── main.py                     # FastAPI application entry point
│   ├── config.py                   # Configuration & settings
│   ├── api/                        # API endpoints
│   │   ├── health.py               # Health check
│   │   ├── chat.py                 # Chat endpoints
│   │   ├── call.py                 # Call conversation handling
│   │   ├── summary.py              # Call summaries
│   │   ├── email.py                # Email sending
│   │   └── dispatch.py             # Service dispatch & calendar
│   ├── services/                   # Core business logic
│   │   ├── call_handler.py         # Main conversation orchestrator
│   │   ├── dialog_manager.py       # Multi-turn conversation state
│   │   ├── llm_service.py          # LLM integration
│   │   ├── llm_speech_corrector.py # Speech-to-text correction
│   │   ├── call_summary.py         # Post-call summaries
│   │   ├── redis_service.py        # Redis interactions
│   │   ├── ses_email.py            # Email service (AWS SES)
│   │   ├── ics_lib.py              # ICS calendar file generation
│   │   └── retrieve/               # Information extractors
│   │       ├── customer_info_extractors.py
│   │       └── time_extractor.py
│   ├── models/                     # Data models
│   │   ├── call.py                 # CallSkeleton, Message, UserInfo
│   │   └── chat.py                 # Chat models
│   ├── client/                     # External service clients
│   │   └── mcp_client.py           # MCP (Model Context Protocol) client
│   ├── custom_types/               # Custom type definitions
│   │   └── customer_service_types.py
│   ├── infrastructure/             # Infrastructure clients
│   │   └── redis_client.py         # Redis async client
│   └── utils/                      # Utilities
│       ├── mcp_parse.py            # MCP response parsing
│       ├── prompts/                # LLM prompts
│       └── validators/             # Data validators
├── tests/                          # Test suite
├── pyproject.toml                  # Python dependencies (uv)
├── Dockerfile.dev                  # Development Docker image
├── Dockerfile.uat                  # UAT Docker image
└── Makefile                        # Build automation
```
The AI service implements an 8-step LangGraph workflow to handle customer service conversations:
1. Name Collection → Collect the customer's full name
2. Phone Collection → Verify the contact phone number
3. Address Collection → Get the service address, with address validation
4. Service Selection → Help the customer choose from available services
5. Time Selection → Schedule the preferred appointment time
6. Booking Confirmation → Confirm all details
7. Dispatch → Send the confirmation email + calendar invite
8. Completion → Wrap up the conversation
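The step sequence above can be sketched as a simple enum-driven progression. This is an illustrative sketch, not the service's actual graph: only the `collect_name` step name is confirmed by the API response example in this README, and the real LangGraph workflow can loop back on failed validation rather than always advancing.

```python
from enum import Enum

class Step(str, Enum):
    # The eight workflow steps; only "collect_name" appears verbatim in the
    # API docs -- the other names are illustrative assumptions.
    COLLECT_NAME = "collect_name"
    COLLECT_PHONE = "collect_phone"
    COLLECT_ADDRESS = "collect_address"
    SELECT_SERVICE = "select_service"
    SELECT_TIME = "select_time"
    CONFIRM_BOOKING = "confirm_booking"
    DISPATCH = "dispatch"
    COMPLETE = "complete"

ORDER = list(Step)  # Enum members iterate in definition order

def next_step(current: Step) -> Step:
    # Advance linearly; the real graph may branch or retry instead
    i = ORDER.index(current)
    return ORDER[min(i + 1, len(ORDER) - 1)]
```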
The agent maintains conversation state in Redis using CallSkeleton format:
```json
{
  "callSid": "CAxxx...",    # Unique call ID
  "company": {...},         # Company info
  "user": {
    "userInfo": {           # Collected customer info
      "name": "...",
      "phone": "...",
      "email": "...",
      "address": "..."
    },
    "service": {...}        # Selected service
  },
  "services": [...],        # Available services list
  "history": [...]          # Message history
}
```

Base URL: `http://localhost:8000/api`
Main conversation endpoint for call handling.
Request:
```json
{
  "callSid": "CA1234567890",
  "customerMessage": {
    "message": "Hi, I need a cleaning service",
    "speaker": "customer",
    "timestamp": "2024-01-01T12:00:00Z"
  }
}
```

Response:
```json
{
  "assistantMessage": {
    "message": "I'd be happy to help you with a cleaning service. May I get your name please?",
    "speaker": "assistant",
    "timestamp": "2024-01-01T12:00:01Z"
  },
  "userInfo": {...},              # Updated customer info
  "conversationComplete": false,  # Booking status
  "currentStep": "collect_name"   # Current workflow step
}
```

Send email with calendar integration (Google/Outlook/iCal).
Request:
```json
{
  "to": "customer@example.com",
  "subject": "Service Booking Confirmation",
  "body": "Your appointment is confirmed...",
  "summary": "Cleaning Service",
  "start": "2024-01-15T10:00:00+10:00",
  "end": "2024-01-15T12:00:00+10:00",
  "calendarapp": "google",    # "none", "google", "outlook"
  "access_token": "...",      # OAuth token
  "calendar_id": "primary"
}
```

Generate an AI-powered call summary after the conversation ends.
Send a simple email without a calendar attachment.
Main orchestrator for conversation workflow.
Key Methods:
- `process_conversation(state, message)` - Process a user message through the workflow
- `_collect_name()` - Name collection logic
- `_collect_phone()` - Phone validation & collection
- `_collect_address()` - Address extraction with validation
- `_select_service()` - Service recommendation & selection
- `_select_time()` - Time slot scheduling
- `_complete_booking()` - Final confirmation & dispatch
Features:
- Multi-attempt collection (max 3 attempts per field)
- Speech correction for addresses
- Service price & description display
- Natural conversation flow
- Context-aware responses
Specialized extractors for structured data:
- `customer_info_extractors.py`: Name, phone, email, and address extraction
- `time_extractor.py`: Natural language time parsing with timezone handling
Usage:
```python
from services.retrieve.customer_info_extractors import (
    extract_name_from_conversation,
    extract_phone_from_conversation,
    extract_address_from_conversation,
)

name = extract_name_from_conversation(message_history)
phone = extract_phone_from_conversation(message_history)
address = extract_address_from_conversation(message_history)
```

Abstraction layer for OpenAI LLM calls.
Features:
- OpenAI GPT-4 integration
- Streaming support
- Custom system prompts
- Context management
- Error handling & retries
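A sketch of how `llm_service.py` might assemble a request. The helper name is hypothetical; the defaults mirror the `OPENAI_MODEL`, `OPENAI_TEMPERATURE`, and `OPENAI_MAX_TOKENS` settings documented below. The actual network call (`client.chat.completions.create(**payload)` with the OpenAI SDK) is omitted so the sketch stays self-contained.

```python
def build_chat_payload(system_prompt: str, history: list[dict],
                       model: str = "gpt-4o-mini",
                       temperature: float = 0.0,
                       max_tokens: int = 2500) -> dict:
    # Assemble keyword arguments for the OpenAI chat completions API:
    # a system prompt followed by the running conversation history.
    return {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [{"role": "system", "content": system_prompt}, *history],
    }
```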
Corrects common speech-to-text errors for addresses.
Example:
Input: "1 twenty five Johnson street"
Output: "1/25 Johnson Street"
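The module name suggests the real corrector uses an LLM; a rule-based equivalent of the example above might look like this (the function name and number tables are illustrative assumptions):

```python
ONES = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
        "six": 6, "seven": 7, "eight": 8, "nine": 9}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}

def correct_unit_address(text: str) -> str:
    # "1 twenty five Johnson street" -> "1/25 Johnson Street":
    # treat a leading digit as a unit number, join spelled-out tens/ones
    # into the street number, and title-case the street name.
    tokens = text.split()
    if not tokens or not tokens[0].isdigit():
        return text
    i, number = 1, 0
    if i < len(tokens) and tokens[i].lower() in TENS:
        number += TENS[tokens[i].lower()]
        i += 1
    if i < len(tokens) and tokens[i].lower() in ONES:
        number += ONES[tokens[i].lower()]
        i += 1
    if number == 0:
        return text  # nothing spelled out; leave the address unchanged
    street = " ".join(t.capitalize() for t in tokens[i:])
    return f"{tokens[0]}/{number} {street}"
```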
Generates structured summaries of completed calls:
- Key information extracted
- Service requested
- Booking details
- Customer sentiment
AWS SES integration for sending emails:
- Plain text & HTML emails
- ICS calendar attachments
- OAuth-based calendar integration
- Email templates
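A sketch of how `ses_email.py` might assemble an email with an ICS attachment using the stdlib `email` package. The function name is hypothetical; the actual send would be a single boto3 SES call, noted in the docstring and omitted here to keep the sketch self-contained.

```python
from email.message import EmailMessage

def build_booking_email(to: str, subject: str, body: str, ics: str) -> EmailMessage:
    """Build a MIME message with an ICS calendar attachment.

    Sending it is one boto3 call (not shown):
      boto3.client("ses").send_raw_email(
          Source=sender, Destinations=[to],
          RawMessage={"Data": msg.as_bytes()})
    """
    msg = EmailMessage()
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)  # plain-text body
    msg.add_attachment(ics.encode("utf-8"), maintype="text",
                       subtype="calendar", filename="invite.ics")
    return msg
```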
```shell
cd apps/ai

# Install dependencies
make sync

# Run all tests with coverage
make test

# Run tests in verbose mode
uv run pytest tests/ -v

# Run specific test file
uv run pytest tests/test_smoke.py -v

# Coverage report
uv run pytest --cov=app --cov-report=html
```

```shell
# Lint with Ruff
make lint

# Auto-fix linting issues
make lint-fix

# Format code
make format

# Type check with MyPy
make typecheck

# Run all checks
make check-all
```

```shell
# Start AI service
docker compose up ai

# Test health endpoint
curl http://localhost:8000/api/health

# View API docs
open http://localhost:8000/docs
```

Required:
```shell
OPENAI_API_KEY=sk-...            # OpenAI API key
OPENAI_MODEL=gpt-4o-mini         # LLM model name
REDIS_HOST=redis                 # Redis host
REDIS_PORT=6379                  # Redis port
```

Optional:

```shell
REDIS_URL=redis://localhost:6379 # Full Redis URL
API_PREFIX=/api                  # API prefix
DEBUG=true                       # Debug mode
MAX_ATTEMPTS=3                   # Max collection attempts
OPENAI_MAX_TOKENS=2500           # Max response tokens
OPENAI_TEMPERATURE=0.0           # Temperature (0 = deterministic)
CORS_ORIGINS=["*"]               # CORS allowed origins
```

Services are configured via `config.py`:
```python
supported_services = [
    "clean", "cleaning",
    "garden", "gardening",
    "plumber", "plumbing",
    "electric", "electrical",
    "repair",
]
```

Natural language time parsing:
```python
supported_time_keywords = [
    "tomorrow", "morning", "afternoon", "evening",
    "monday", "tuesday", "wednesday", "thursday",
    "friday", "saturday", "sunday",
]
```
1. Install dependencies:

   ```shell
   cd apps/ai
   make sync
   ```

2. Run locally:

   ```shell
   # With uv
   uv run uvicorn app.main:app --reload --port 8000

   # Or with Docker
   docker compose up ai
   ```

3. Access the service:
   - API: http://localhost:8000
   - Docs: http://localhost:8000/docs
   - Health: http://localhost:8000/api/health
The AI service uses uv for fast Python package management:
```shell
# Install a new dependency
uv add package-name

# Add a dev dependency
uv add --dev package-name

# Update dependencies
uv sync --upgrade

# Remove a dependency
uv remove package-name
```
1. Create the extractor in `app/services/retrieve/customer_info_extractors.py`:

   ```python
   def extract_new_field_from_conversation(message_history: list) -> str:
       # Extraction logic
       return extracted_value
   ```

2. Import it in `call_handler.py`:

   ```python
   from .retrieve.customer_info_extractors import extract_new_field_from_conversation
   ```

3. Use it in a workflow step:

   ```python
   value = extract_new_field_from_conversation(state["message_history"])
   ```
1. Add the step to `CustomerServiceState` in `custom_types/customer_service_types.py`
2. Implement the step handler in `call_handler.py`
3. Add the step transition logic in `process_conversation()`
4. Update prompts in `utils/prompts/`
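The pieces above might fit together like this. The field, step names, and handler are hypothetical placeholders; the real `CustomerServiceState` lives in `custom_types/customer_service_types.py`.

```python
from typing import TypedDict

class CustomerServiceState(TypedDict, total=False):
    # Illustrative subset of the real state shape
    current_step: str
    message_history: list
    new_field: str  # the field your new step collects

def handle_new_step(state: CustomerServiceState) -> str:
    # Step handler: extract, store, then advance. In the real service the
    # transition logic lives in process_conversation(), and the extraction
    # would call extract_new_field_from_conversation(...).
    value = "extracted-value"  # placeholder for the real extractor call
    state["new_field"] = value
    state["current_step"] = "next_step"
    return f"Got it: {value}"
```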
```shell
# View AI service logs
docker logs dispatchai-ai -f

# Filter logs
docker logs dispatchai-ai 2>&1 | grep "CONVERSATION"
```

```shell
# Connect to Redis
docker exec -it dispatchai-redis redis-cli

# List all call skeletons
KEYS callskeleton:*

# Get a specific call skeleton
GET callskeleton:CA1234567890

# Check message history
LRANGE history:CA1234567890 0 -1
```

Add debug logs in `call_handler.py`:
```python
print(f"[STEP] {state['current_step']}")
print(f"[INFO] Name: {state['name']}, Phone: {state['phone']}")
print(f"[COMPLETE] All fields: {all_complete}")
```

- Redis: CallSkeleton storage & message history
  (`from services.redis_service import get_call_skeleton, update_user_info`)
- MongoDB: Not accessed directly; all access goes via the Backend API
- HTTP: The frontend calls `/api/ai/conversation` with user messages
- WebSocket/SSE: Not currently used (could be added for real-time)
- OpenAI: LLM inference
- AWS SES: Email sending
- Google Calendar: OAuth + Calendar API
- Outlook Calendar: Microsoft Graph API
- Cause: Redis doesn't have the call data. Fix: Ensure the Backend created the CallSkeleton before the AI service is called.
- Cause: Invalid address format or a speech-to-text error. Fix: Check the speech corrector and add more patterns.
- Cause: Customer is requesting an unsupported service. Fix: Update `supported_services` in the config or add a service mapping.
- Cause: Slow response from OpenAI or a network issue. Fix: Increase the timeout, add retry logic, and check the API key.
- FastAPI Docs: https://fastapi.tiangolo.com
- LangGraph Docs: https://langchain-ai.github.io/langgraph/
- OpenAI API: https://platform.openai.com/docs
- Redis Python: https://redis.readthedocs.io
- uv Package Manager: https://github.com/astral-sh/uv
When adding new features:
- Update tests in `tests/`
- Add type hints
- Run `make check-all`
- Update this README
- Document new API endpoints