Self-hosted AI-powered platform for analyzing customer discovery calls to extract pain points, feature requests, and objections. Built with React, AWS Lambda, and OpenAI GPT-4.
🚀 Quick Start: Installation Guide • 🤝 Contributing: Guidelines • 📖 Docs: Technical Details
Full Platform Demo - See the entire workflow from upload to insights:
Call.Analyzer.demo.1.mov
Chatbot Query Demo - Watch how the AI chatbot answers questions across all calls:
Call.Analyser.chatbot.demo.mov
Dashboard - View call stats and recent analysis at a glance:
Upload & Analyze Calls - Simple interface for uploading transcripts and LinkedIn screenshots:
AI-Extracted Insights - Comprehensive analysis with supporting quotes and confidence scores:
Detailed Call Analysis - Deep dive into individual calls:
I got tired of manually analyzing customer discovery calls. Every time I needed to find a pattern or recall "that thing someone said about feature X," I'd waste hours digging through transcripts.
There are plenty of tools that solve this—Gong, Chorus, etc.—but they're expensive. As a developer, I realized: why pay for something I can build myself?
If you're tired of:
- 📝 Manually combing through call transcripts
- 🔍 Searching for "that one thing someone said about X"
- 💸 Paying $100+/month for enterprise tools you barely use
- ⏰ Wasting hours when you could be shipping
...then this is for you. Clone it. Deploy it. Own your data. 🚀
This platform helps product teams and founders analyze customer discovery calls at scale:
- Upload call transcripts (text files) along with optional LinkedIn profile screenshots
- AI automatically extracts:
- Pain points with severity ratings and supporting quotes
- Feature requests with priority levels
- Objections and concerns with context
- Participant information from LinkedIn (role, company, experience)
- Query across all calls using a natural language chatbot interface
- View insights on a dashboard with filtering and search capabilities
Use Cases:
- Validate product-market fit by identifying recurring pain points
- Prioritize feature development based on customer requests
- Track objections and competitive insights
- Build a searchable knowledge base of customer conversations
- Automated Analysis: Upload call transcripts and get AI-extracted insights using GPT-4
- LinkedIn Enrichment: Extract profile data from screenshots using GPT-4 Vision
- Intelligent Chatbot: Query insights across all calls with natural language (powered by embeddings)
- Dashboard: View stats and recent calls at a glance
- Detailed Insights: See pain points, feature requests, and objections with supporting quotes and confidence scores (example below)
- Serverless Architecture: Cost-effective AWS Lambda backend that scales automatically
- Fully Self-Hosted: Your data stays in your AWS account
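For illustration, a single analyzed call yields an insight structure along these lines (the field names here are hypothetical, not the exact schema the Lambdas produce):

{
  "participant": { "role": "Head of Product", "company": "Acme Corp" },
  "painPoints": [
    {
      "description": "Onboarding new engineers is slow",
      "severity": "high",
      "quote": "It takes three weeks before a new hire can actually ship anything.",
      "confidence": 0.9
    }
  ],
  "featureRequests": [
    { "description": "Slack alerts for new insights", "priority": "medium" }
  ],
  "objections": [
    { "description": "Wants data to stay in their own AWS account", "context": "security review" }
  ]
}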
Before you begin, ensure you have:
- Node.js 18+ and npm (Download)
- AWS Account with appropriate permissions (Lambda, S3, API Gateway, Systems Manager)
- AWS CLI configured with credentials (Installation Guide)
- AWS SAM CLI installed (Installation Guide)
- OpenAI API Key (Get one here)
- Git for version control
- Basic familiarity with AWS services
- Understanding of serverless architecture
For Users (deploying to your own AWS account):
git clone https://github.com/thrishma/analyze-calls.git
cd analyze-calls

For Contributors (making code changes):
- Fork this repository on GitHub (click "Fork" button)
- Clone your fork:
git clone https://github.com/YOUR-USERNAME/analyze-calls.git
cd analyze-calls

- See CONTRIBUTING.md for the development workflow
# Replace with your actual OpenAI API key
aws ssm put-parameter \
--name /call-analysis/dev/openai-api-key \
--value "sk-your-openai-api-key-here" \
--type SecureString \
--region us-east-2
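To double-check that the key was stored correctly before deploying, you can read it back (this prints the secret to your terminal, so only run it locally):

aws ssm get-parameter \
  --name /call-analysis/dev/openai-api-key \
  --with-decryption \
  --region us-east-2 \
  --query "Parameter.Value" \
  --output text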
cd backend

# Copy the example configuration
cp samconfig.toml.example samconfig.toml

# Edit samconfig.toml to set your preferred:
# - stack_name (e.g., "my-call-analysis")
# - region (e.g., "us-west-2")
# - AWS profile (if not using default)

# Install dependencies for each Lambda function
cd lambda/processCall && npm install && cd ../..
cd lambda/chatbotQuery && npm install && cd ../..
cd lambda/getCalls && npm install && cd ../..

# Build the SAM application
sam build
# Deploy (first time - will prompt for configuration)
sam deploy --guided
# Follow the prompts:
# - Stack Name: call-analysis-dev (or your choice)
# - AWS Region: us-east-2 (or your choice)
# - Parameter Stage: dev
# - Confirm changes before deploy: Y
# - Allow SAM CLI IAM role creation: Y
# - Save arguments to configuration file: Y

After deployment completes, you'll see the API Gateway URL:
CloudFormation outputs from deployed stack
--------------------------------------------------------------------------
Outputs
--------------------------------------------------------------------------
Key ApiUrl
Description API Gateway endpoint URL
Value https://abc123xyz.execute-api.us-east-2.amazonaws.com/dev
--------------------------------------------------------------------------
📋 Copy this URL - you'll need it in Step 3a below.
To retrieve it later:
aws cloudformation describe-stacks \
--stack-name call-analysis-dev \
--query "Stacks[0].Outputs[?OutputKey=='ApiUrl'].OutputValue" \
--output text

cd ../frontend
# Copy the example environment file
cp .env.example .env
# Edit .env and replace with your API Gateway URL from Step 2d
# VITE_API_BASE_URL=https://your-api-id.execute-api.us-east-2.amazonaws.com/dev

# Install dependencies
npm install
# Run development server
npm run dev

Visit http://localhost:5173 in your browser. You should see the dashboard!
- Click "Upload Call" in the navigation
- Upload a sample transcript file (plain text format; see the example after this list)
- Optionally upload a LinkedIn profile screenshot
- Click "Analyze Call"
- View the extracted insights on the Call Detail page
- Try querying across calls using the Chatbot page
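Any plain-text transcript works; here is a short, entirely hypothetical example of the kind of file you might upload:

Interviewer: What's the most frustrating part of your current workflow?
Customer: Honestly, reporting. I lose every Friday afternoon copying numbers into a spreadsheet.
Interviewer: How much time does that take each week?
Customer: Two or three hours. I'd happily pay for something that automates it.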
┌─────────────┐    HTTPS     ┌──────────────┐          ┌──────────────┐
│    React    │─────────────▶│ API Gateway  │          │    OpenAI    │
│  Frontend   │◀─────────────│    (REST)    │          │  GPT-4 API   │
│   (Vite)    │    JSON      └──────┬───────┘          └──────▲───────┘
└─────────────┘                     │                         │
                                    ▼                         │
                             ┌─────────────┐                  │
                             │   Lambda    │──── API Calls ───┘
                             │  Functions  │
                             │             │
                             │ • Process   │
                             │ • GetCalls  │
                             │ • Chatbot   │
                             │ • Delete    │
                             └──────┬──────┘
                                    │
                      ┌─────────────┴─────────────┐
                      ▼                           ▼
               ┌─────────────┐             ┌─────────────┐
               │     S3      │             │  Parameter  │
               │   Bucket    │             │    Store    │
               │ (Call Data) │             │ (API Keys)  │
               └─────────────┘             └─────────────┘
Tech Stack:
- Frontend: React 18 + Vite + Chakra UI + React Router
- Backend: AWS Lambda (Node.js 20.x) + API Gateway
- Storage: AWS S3
- AI: OpenAI GPT-4 Turbo + GPT-4 Vision + text-embedding-3-large
- Infrastructure: AWS SAM (Serverless Application Model)
1. Upload & Analysis:
- User uploads transcript + LinkedIn info → API Gateway → ProcessCall Lambda
- Lambda retrieves OpenAI API key from Parameter Store
- Sends transcript to GPT-4 for analysis (pain points, features, objections)
- Stores results and chunks in S3 bucket
2. Call Retrieval:
- GetCalls Lambda fetches metadata from S3
- Returns call list or detailed insights to frontend
3. Chatbot Queries (see the sketch after this list):
- User query → ChatbotQuery Lambda
- Generates embedding with OpenAI text-embedding-3-large
- Searches call chunks for semantic matches
- GPT-4 synthesizes answer with source citations
4. Data Flow:
- All API calls go through API Gateway (CORS enabled)
- Lambda functions auto-scale based on demand
- S3 stores all call data (transcripts, metadata, chunks)
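The chatbot flow is the most involved piece, so here is a rough sketch of the retrieval-and-synthesis step in Node.js. It is illustrative only, not the actual ChatbotQuery Lambda: the chunk fields (callId, text, embedding) and the top-k of 5 are assumptions.

// Illustrative sketch of the chatbot retrieval step (not the repo's Lambda code).
// Assumes each stored chunk already carries the embedding saved at upload time.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const cosineSimilarity = (a, b) => {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

export async function answerQuestion(question, chunks) {
  // 1. Embed the question with the same model used when the call was processed
  const { data } = await openai.embeddings.create({
    model: "text-embedding-3-large",
    input: question,
  });
  const queryEmbedding = data[0].embedding;

  // 2. Rank stored chunks by cosine similarity and keep the best matches
  const topChunks = chunks
    .map((chunk) => ({ ...chunk, score: cosineSimilarity(queryEmbedding, chunk.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 5);

  // 3. Ask GPT-4 to synthesize an answer that cites the source calls
  const completion = await openai.chat.completions.create({
    model: "gpt-4-turbo",
    messages: [
      { role: "system", content: "Answer using only the provided call excerpts and cite the call each fact came from." },
      { role: "user", content: `Question: ${question}\n\nExcerpts:\n${topChunks.map((c) => `[${c.callId}] ${c.text}`).join("\n")}` },
    ],
  });

  return completion.choices[0].message.content;
}

In the deployed Lambda, the chunks and their embeddings come from the S3 bucket and the API key from Parameter Store rather than from the environment.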
This project is structured as a monorepo (it contains both backend/ and frontend/ folders), so AWS Amplify requires special configuration to deploy only the frontend.
1. Deploy Backend First & Get API Gateway URL
Before deploying the frontend, deploy your backend to get the API Gateway URL:
cd backend
sam build
sam deploy --guided

After deployment completes, SAM will display outputs including your API Gateway URL:
CloudFormation outputs from deployed stack
--------------------------------------------------------------------------
Outputs
--------------------------------------------------------------------------
Key ApiUrl
Description API Gateway endpoint URL
Value https://abc123xyz.execute-api.us-east-2.amazonaws.com/dev
--------------------------------------------------------------------------
📋 Copy this URL - you'll need it in step 5 below.
To retrieve it later:
# If you need to find your API URL again:
aws cloudformation describe-stacks \
--stack-name call-analysis-dev \
--query "Stacks[0].Outputs[?OutputKey=='ApiUrl'].OutputValue" \
--output text

2. Push Code to Git Repository
git add .
git commit -m "Prepare for Amplify deployment"
git push origin main

3. Create Amplify App
- Go to AWS Amplify Console
- Click "New app" → "Host web app"
- Choose your Git provider (GitHub, GitLab, Bitbucket, etc.)
- Authorize AWS Amplify to access your repository
- Select the repository: analyze-calls
- Select the branch: main (or your default branch)
4. Configure Monorepo Build Settings
This repository includes an amplify.yml file at the root that configures the monorepo setup. Amplify will automatically detect this file.
The amplify.yml file specifies:
- appRoot: frontend - points to the frontend directory
- Build commands: uses npm install --legacy-peer-deps (required for React 19 compatibility)
- Artifacts: output from the dist folder
- Cache: caches node_modules for faster builds
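The file ships with the repository, so you normally won't need to edit it. For orientation, an Amplify monorepo buildspec of this shape looks roughly like the following (the real amplify.yml may differ in detail):

version: 1
applications:
  - appRoot: frontend
    frontend:
      phases:
        preBuild:
          commands:
            - npm install --legacy-peer-deps
        build:
          commands:
            - npm run build
      artifacts:
        baseDirectory: dist
        files:
          - "**/*"
      cache:
        paths:
          - node_modules/**/*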
5. Add Environment Variables
In the Amplify Console, add the following environment variable:
| Key | Value |
|---|---|
| VITE_API_BASE_URL | Your API Gateway URL from Step 1 (e.g., https://abc123xyz.execute-api.us-east-2.amazonaws.com/dev) |
How to add environment variables in Amplify:
- In Amplify Console → Select your app
- Go to "App settings" → "Environment variables"
- Click "Manage variables"
- Click "Add variable"
- Enter key: VITE_API_BASE_URL
- Enter value: Paste your API Gateway URL from Step 1
- Click "Save"
Example:
Key: VITE_API_BASE_URL
Value: https://abc123xyz.execute-api.us-east-2.amazonaws.com/dev
6. Review and Deploy
- Review the build settings (should use amplify.yml)
- Click "Save and deploy"
- Wait for the build to complete (usually 2-5 minutes)
- Your app will be available at: https://[app-id].amplifyapp.com
7. (Optional) Set Up Password Protection
To protect your app with basic authentication (username/password):
- In Amplify Console → Select your app
- Go to "App settings" → "Access control"
- Click "Manage access"
- Select the branch you want to protect (e.g., main)
- Toggle "Enable access control" to ON
- Choose "Restrict access with username and password"
- Enter a username (e.g., admin)
- Enter a strong password
- Click "Save"
Now when anyone visits your app, they'll need to enter the username and password. This is useful for:
- Protecting internal tools from public access
- Sharing with a small team without building full authentication
- Testing in production before going public
Note: This is basic HTTP authentication. For production apps with multiple users, consider implementing proper authentication with AWS Cognito or a third-party service.
8. Automatic Deployments
Amplify will automatically deploy on every git push to your main branch:
- Push changes → Amplify detects changes → Builds → Deploys
- View build logs in the Amplify Console
Build fails with "npm ci" error:
- Solution: The amplify.yml uses npm install --legacy-peer-deps to handle React 19 peer dependencies
- If you modified the file, ensure it uses npm install --legacy-peer-deps instead of npm ci
Monorepo not detected:
- Ensure amplify.yml is at the repository root
- The file must use the applications key with appRoot: frontend
- Check the Amplify Console build logs to verify it's using the correct directory
Environment variables not working:
- Vite requires variables to be prefixed with VITE_
- Ensure you added VITE_API_BASE_URL (not just API_BASE_URL)
- Rebuild the app after adding environment variables
CORS errors after deployment:
- Verify your API Gateway URL is correct in Amplify environment variables
- Ensure your backend Lambda functions return proper CORS headers (already configured in this template)
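For context, with an API Gateway proxy integration the Lambda itself has to return the CORS headers on every response, including error paths. A minimal sketch of that shape (illustrative only, not the repository's actual handler code):

// Illustrative sketch: every response, success or error, carries CORS headers,
// otherwise the browser blocks the frontend's requests.
export const handler = async (event) => {
  const corsHeaders = {
    "Access-Control-Allow-Origin": "*", // or restrict to your Amplify domain
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
  };

  try {
    const body = { ok: true }; // ...real work happens here
    return { statusCode: 200, headers: corsHeaders, body: JSON.stringify(body) };
  } catch (err) {
    return { statusCode: 500, headers: corsHeaders, body: JSON.stringify({ error: err.message }) };
  }
};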
If you prefer to host the frontend yourself instead of using AWS Amplify:
cd frontend
npm run build

This creates a dist/ folder with optimized static files (HTML, CSS, JS).
Option A: nginx (Ubuntu/Debian VPS)
# Copy built files to nginx web root
sudo cp -r dist/* /var/www/html/
# Configure nginx to handle React Router (optional but recommended)
sudo nano /etc/nginx/sites-available/default
# Add this inside the server block:
# location / {
# try_files $uri $uri/ /index.html;
# }
sudo systemctl reload nginx

Option B: Vercel
# Install Vercel CLI
npm i -g vercel
# Deploy
cd frontend
vercel --prod

Option C: Netlify
# Install Netlify CLI
npm i -g netlify-cli
# Deploy
cd frontend
netlify deploy --prod --dir=dist

Option D: Docker
# Dockerfile
FROM nginx:alpine
COPY dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]docker build -t call-analysis-frontend .
docker run -p 80:80 call-analysis-frontendImportant: Don't forget to set the VITE_API_BASE_URL environment variable before building!
The backend is already deployed to AWS via SAM. For production:
# Deploy to production stage
sam deploy --parameter-overrides Stage=prod
# Remember to create production Parameter Store key:
aws ssm put-parameter \
--name /call-analysis/prod/openai-api-key \
--value "sk-your-key" \
--type SecureString

Frontend (.env):
VITE_API_BASE_URL=https://your-api-gateway-url.amazonaws.com/dev

Backend (AWS Parameter Store):
- /call-analysis/{stage}/openai-api-key - Your OpenAI API key
The SAM template creates:
- S3 Bucket: call-analysis-data-{stage}-{random} for storing transcripts and metadata
- Lambda Functions:
  - ProcessCallFunction - Analyzes call transcripts
  - ChatbotQueryFunction - Handles chatbot queries
  - GetCallsFunction - Retrieves call data
  - DeleteCallFunction - Deletes calls
- API Gateway: REST API with CORS enabled
- IAM Roles: For Lambda execution with S3 and Parameter Store access
Monorepo Layout - This project contains both backend and frontend in a single repository.
analyze-calls/
├── backend/ # AWS Lambda backend
│ ├── template.yaml # AWS SAM Infrastructure as Code
│ ├── samconfig.toml.example # SAM deployment config template
│ └── lambda/
│ ├── processCall/ # Analyzes call transcripts with GPT-4
│ ├── chatbotQuery/ # Handles chatbot queries
│ ├── getCalls/ # Retrieves call data from S3
│ └── deleteCall/ # Deletes calls from S3
├── frontend/ # React frontend application
│ ├── src/
│ │ ├── pages/
│ │ │ ├── Home.jsx # Dashboard with stats
│ │ │ ├── UploadCall.jsx # Upload & analyze calls
│ │ │ ├── CallDetail.jsx # View call insights
│ │ │ └── Chatbot.jsx # Natural language query interface
│ │ ├── components/
│ │ │ └── Layout.jsx # Navigation and layout
│ │ └── api/
│ │ └── client.js # Axios API client
│ ├── .env.example # Environment variable template
│ └── vite.config.js # Vite configuration
├── amplify.yml # 🔧 AWS Amplify monorepo config (important!)
├── CLAUDE.md # Detailed technical documentation
├── CONTRIBUTING.md # Contribution guidelines
├── LICENSE # MIT License
└── README.md # This file
Key files:

- amplify.yml - Configures AWS Amplify to deploy only the frontend from this monorepo. Sets appRoot: frontend to point to the frontend directory.
- backend/template.yaml - AWS SAM template defining Lambda functions, API Gateway, and S3 bucket
- backend/samconfig.toml - SAM CLI configuration (not tracked in git)
- frontend/.env - Frontend environment variables (not tracked in git)
- .gitignore - Excludes sensitive files like .env, samconfig.toml, and node_modules
For ~100 calls/month with moderate usage:
| Service | Estimated Cost |
|---|---|
| AWS Lambda | ~$5/month |
| AWS S3 Storage | ~$1/month |
| AWS API Gateway | ~$3.50/month |
| OpenAI API (GPT-4) | ~$10-20/month |
| Total | ~$20-30/month |
Notes:
- Costs scale with usage (more calls = higher OpenAI costs)
- AWS Free Tier can cover Lambda/S3/API Gateway for first 12 months
- OpenAI costs depend on transcript length and query volume
- Consider prompt caching strategies to reduce OpenAI costs
Lambda timeout on large transcripts:
- Increase timeout in backend/template.yaml (default: 120s, max: 900s)
- Check CloudWatch Logs for specific errors
OpenAI API errors:
- Verify API key is correct in Parameter Store
- Check OpenAI account has sufficient credits
- Review CloudWatch Logs for detailed error messages
CORS errors:
- Ensure CORS is properly configured in template.yaml
- Verify Lambda functions return CORS headers
Can't connect to API:
- Verify VITE_API_BASE_URL in .env is correct
- Check browser console for specific errors
- Ensure API Gateway is publicly accessible
Build errors:
- Delete node_modules and run npm install again
- Ensure Node.js version is 18+
SAM deploy fails:
- Ensure AWS CLI is configured: aws configure
- Check IAM permissions (need Lambda, S3, API Gateway, CloudFormation)
- Verify region in samconfig.toml matches Parameter Store region
S3 bucket creation fails:
- Bucket names must be globally unique
- SAM adds random suffix to ensure uniqueness
Frontend:
cd frontend
npm run dev

Backend (Local API):
cd backend
sam build
sam local start-api --port 3001

Update the frontend .env to point to http://localhost:3001.
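For local development, that means frontend/.env contains a single line:

VITE_API_BASE_URL=http://localhost:3001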
See CLAUDE.md for detailed testing instructions and development workflows.
This project uses GitHub Actions for CI/CD. On every push and pull request to main:
✅ Frontend checks:
- Linting with ESLint
- Build verification
- Artifact validation
✅ Backend checks:
- SAM template validation
- Lambda dependency installation
- SAM build verification
✅ Documentation checks:
- Markdown linting (warnings only)
✅ Security checks:
- Secret scanning with TruffleHog
Status: Check the badge at the top of this README for current build status.
Potential future enhancements:
- Vector database integration for better semantic search
- Batch processing for multiple transcripts
- PDF/CSV export of insights
- Integration with Gong/Chorus/Fireflies APIs
- Multi-user authentication with AWS Cognito
- Advanced analytics and pattern detection
- Real-time collaboration features
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
MIT License - see LICENSE file for details
- Built with OpenAI GPT-4
- Powered by AWS Serverless
- UI components from Chakra UI
- Issues: GitHub Issues
- Documentation: See CLAUDE.md for detailed technical docs
- Questions: Open a GitHub Discussion or Issue
Made with ❤️ (and a bit of frustration) for product teams who want to own their data.