A Retrieval-Augmented Generation (RAG) bot that integrates with GPT-OSS via Hugging Face and provides access via Slack and WhatsApp. Optimized for Raspberry Pi deployment.
- 🤖 RAG System: Upload documents and query them using natural language
- 💬 Slack Integration: Access the bot directly from Slack
- 📱 WhatsApp Integration: Query via WhatsApp using Twilio
- 🔍 Vector Search: Powered by ChromaDB for efficient document retrieval
- 🧠 AI Responses: Uses GPT-OSS via Hugging Face for intelligent answers
- 🍓 Raspberry Pi Ready: Optimized for low-resource environments
Run the setup script:

```bash
chmod +x setup.sh
./setup.sh
```

Edit the `.env` file with your credentials:

```bash
cp .env.example .env
nano .env
```

Required configuration:
- `HUGGINGFACE_API_TOKEN`: Your Hugging Face API token

Optional (for integrations):

- Slack: `SLACK_BOT_TOKEN`, `SLACK_SIGNING_SECRET`
- WhatsApp: `TWILIO_ACCOUNT_SID`, `TWILIO_AUTH_TOKEN`, `TWILIO_PHONE_NUMBER`
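How the project actually loads these values lives in its own code, but a minimal sketch of reading them from environment variables might look like this (`load_config` is an illustrative helper, not a function from this repo):

```python
import os

def load_config() -> dict:
    """Read required and optional credentials from environment variables."""
    token = os.environ.get("HUGGINGFACE_API_TOKEN")
    if not token:
        # The Hugging Face token is the only hard requirement.
        raise RuntimeError("HUGGINGFACE_API_TOKEN is required")
    return {
        "huggingface_token": token,
        # Optional integrations default to None when unset.
        "slack_bot_token": os.environ.get("SLACK_BOT_TOKEN"),
        "slack_signing_secret": os.environ.get("SLACK_SIGNING_SECRET"),
        "twilio_account_sid": os.environ.get("TWILIO_ACCOUNT_SID"),
        "twilio_auth_token": os.environ.get("TWILIO_AUTH_TOKEN"),
        "twilio_phone_number": os.environ.get("TWILIO_PHONE_NUMBER"),
    }
```

Failing fast on the missing required token keeps misconfiguration errors close to startup rather than surfacing mid-request.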
Start the bot:

```bash
./start.sh
```

API endpoints:

- `GET /` - API information
- `GET /health` - Health check
- `POST /upload` - Upload documents (PDF or text)
- `POST /query` - Query the RAG system
- `GET /stats` - Get knowledge base statistics
- `POST /slack/events` - Slack webhook
- `POST /whatsapp/webhook` - WhatsApp webhook
Upload a document:

```bash
curl -X POST "http://localhost:8000/upload" \
  -H "accept: application/json" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@document.pdf"
```

Query the knowledge base:

```bash
curl -X POST "http://localhost:8000/query" \
  -H "Content-Type: application/json" \
  -d '{"question": "What is machine learning?"}'
```

Slack setup:

- Create a Slack app at https://api.slack.com/apps
- Add bot token scopes: `app_mentions:read`, `chat:write`, `files:read`
- Enable events: `app_mention`, `file_shared`
- Set event request URL: `https://your-domain.com/slack/events`
- Install the app to your workspace
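Slack signs every request it sends to the events URL using the signing secret, and a handler behind `/slack/events` should verify that signature before processing the payload. A minimal sketch of Slack's documented v0 signing scheme (the function name is illustrative, not taken from this repo):

```python
import hashlib
import hmac
import time

def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: str, received_sig: str) -> bool:
    """Check a request against Slack's v0 request-signing scheme."""
    # Reject stale requests to limit replay attacks (5-minute window).
    if abs(time.time() - int(timestamp)) > 60 * 5:
        return False
    basestring = f"v0:{timestamp}:{body}".encode()
    expected = "v0=" + hmac.new(signing_secret.encode(), basestring,
                                hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, received_sig)
```

The timestamp and signature arrive in the `X-Slack-Request-Timestamp` and `X-Slack-Signature` headers; the body must be the raw request bytes, not re-serialized JSON.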
WhatsApp setup:

- Create a Twilio account at https://www.twilio.com
- Set up the WhatsApp sandbox or get an approved number
- Configure the webhook URL: `https://your-domain.com/whatsapp/webhook`
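Twilio expects the webhook to answer with TwiML; for a text reply that is a `<Response>` element wrapping a `<Message>`. A small sketch of building that reply (`twiml_reply` is an illustrative helper, not from this repo):

```python
from xml.sax.saxutils import escape

def twiml_reply(text: str) -> str:
    """Wrap an answer in the TwiML Twilio expects back from a webhook."""
    # escape() keeps characters like < and & from breaking the XML.
    return f"<Response><Message>{escape(text)}</Message></Response>"
```

The endpoint should return this string with a `text/xml` content type; the official `twilio` package offers a `MessagingResponse` builder that does the same thing.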
Hardware requirements:

- Raspberry Pi 4 (4GB+ RAM recommended)
- Python 3.8+
- 16GB+ SD card

Performance tips:

- Use an SSD instead of an SD card for better I/O
- Increase swap space if needed
- Monitor temperature and use cooling
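On Raspberry Pi OS, swap is managed by `dphys-swapfile`, so increasing it is a config edit plus a re-setup (the 2 GB size below is illustrative, not a project recommendation):

```shell
# Raise the swap file size to 2 GB (illustrative value).
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile
sudo dphys-swapfile setup    # re-create the swap file at the new size
sudo dphys-swapfile swapon   # enable it
```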
Create a systemd service:

```bash
sudo nano /etc/systemd/system/ragbot.service
```

```ini
[Unit]
Description=RAG Bot Service
After=network.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/ragbot
ExecStart=/home/pi/ragbot/venv/bin/python main.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable and start the service:

```bash
sudo systemctl enable ragbot.service
sudo systemctl start ragbot.service
```

Architecture:

```
┌─────────────────┐    ┌─────────────────┐    ┌──────────────────┐
│  Slack/WhatsApp │    │   FastAPI App   │    │ Hugging Face API │
│                 │───▶│                 │───▶│                  │
│   User Input    │    │   RAG System    │    │   GPT-OSS-120B   │
└─────────────────┘    └─────────────────┘    └──────────────────┘
                                │
                                ▼
                       ┌─────────────────┐
                       │    ChromaDB     │
                       │  Vector Store   │
                       └─────────────────┘
```
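The retrieval step in the diagram can be illustrated with a toy in-memory index. The real system uses ChromaDB with proper embeddings; the bag-of-words "embedding" and cosine scoring below are only stand-ins to show the flow:

```python
import math
from collections import Counter
from typing import List

def embed(text: str) -> Counter:
    """Toy 'embedding': word counts (a stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(docs: List[str], query: str, k: int = 2) -> List[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(embed(d), q), reverse=True)
    return ranked[:k]
```

The retrieved chunks are then prepended to the prompt sent to GPT-OSS, which is what turns plain generation into retrieval-augmented generation.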
Troubleshooting:

- Memory issues on Pi: reduce `CHUNK_SIZE` in `config.py`
- Slow responses: check the network connection to Hugging Face
- ChromaDB errors: ensure write permissions on the `chroma_db` directory
- Import errors: activate the virtual environment before running
- GPT-OSS API errors: verify your Hugging Face token has access to the model
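`CHUNK_SIZE` matters on the Pi because every chunk is embedded and held in the vector store, so smaller chunks shrink the working set. A sketch of the kind of splitter such a setting controls (`chunk_text` is illustrative, not the repo's actual code):

```python
from typing import List

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> List[str]:
    """Split text into overlapping fixed-size character chunks for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # advance less than a full chunk to overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side; halving `chunk_size` roughly halves per-chunk memory at the cost of more chunks.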
Check the application logs:

```bash
tail -f logs/ragbot.log
```

Check the health endpoint:

```bash
curl http://localhost:8000/health
```

Contributing:

- Fork the repository
- Create feature branch
- Make changes
- Test on Raspberry Pi
- Submit pull request
MIT License - see LICENSE file for details.