A sophisticated AI-powered chatbot that assists Product Managers with RFE (Request for Enhancement) creation, validation, and JIRA integration. Built for the Red Hat Enterprise context but adaptable to other organizations. Features a Streamlit frontend and a FastAPI backend, deployed on OpenShift.
Try it now - no setup required! (Live access available until Friday, August 15th, 2025)
- Streamlit Web App: https://pm-chatbot.apps.cluster-znvdr.znvdr.sandbox203.opentlc.com
- API Endpoint: https://pm-chatbot-api.apps.cluster-znvdr.znvdr.sandbox203.opentlc.com
- API Documentation: https://pm-chatbot-api.apps.cluster-znvdr.znvdr.sandbox203.opentlc.com/docs
- RFE Generation: Create well-structured Request for Enhancement documents
- Template Validation: Ensure RFEs follow proper guidelines and formats
- JIRA Integration: Seamlessly create and manage JIRA tickets
- Document Search: Query Red Hat AI documentation with vector-based search
- REST API: Programmatic access to all functionality
- MCP Support: Model Context Protocol integration for AI agents and tools
- Cloud Ready: Containerized and deployed on OpenShift
- Visit the Streamlit Web App
- Start chatting with the PM assistant
- Ask for help with RFE creation, JIRA integration, or Red Hat AI documentation
- Check out the API Documentation
- Test endpoints directly in the browser
Model Context Protocol (MCP) allows AI agents and tools to interact with PMBot programmatically. Perfect for integrating with Claude Desktop, VS Code extensions, or custom AI workflows.
Add this to your MCP client configuration (e.g., Claude Desktop's mcp.json):
```json
{
  "pm-chatbot-production": {
    "command": "mcp-proxy",
    "args": [
      "https://pm-chatbot-api.apps.cluster-znvdr.znvdr.sandbox203.opentlc.com/mcp"
    ],
    "env": {
      "AUTHORIZATION": "Bearer pmbot-production-token"
    }
  }
}
```
Available MCP tools:
- `mcp_rfe_generate` (POST /mcp/rfe/generate) - Generate RFEs from natural language requests
- `mcp_rfe_validate` (POST /mcp/rfe/validate) - Validate RFE content against guidelines
- `mcp_search_documents` (POST /mcp/documents/search) - Search Red Hat AI documentation
- `mcp_get_models` (GET /mcp/models) - List available AI models
- `mcp_create_jira_issue` (POST /mcp/jira/issues) - Create JIRA tickets
- `mcp_update_jira_issue` (PUT /mcp/jira/issues/{issue_key}) - Update existing JIRA tickets
Note: For direct API access, use the /api/v1/ endpoints shown in the API examples below.
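For scripted access outside an MCP client, the same call pattern can be sketched in Python. This is a minimal sketch assuming only the endpoint path and Bearer-token header shown above; the demo URL and token come from this README's live deployment and may have expired:

```python
import json
import urllib.request

BASE_URL = "https://pm-chatbot-api.apps.cluster-znvdr.znvdr.sandbox203.opentlc.com"

def build_mcp_search_request(query: str, limit: int = 5,
                             token: str = "pmbot-production-token") -> urllib.request.Request:
    """Build an authenticated POST request for the document-search MCP endpoint."""
    payload = json.dumps({"query": query, "limit": limit}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/mcp/documents/search",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_mcp_search_request("Red Hat AI installation requirements")
# Send it with: urllib.request.urlopen(req, timeout=30)
```

The network call is left commented out so the sketch runs without the live demo being up.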
Once configured, AI agents can use natural language like:
- "Generate an RFE for improving API rate limiting"
- "Validate this RFE draft against Red Hat guidelines"
- "Search for Red Hat AI installation requirements"
- "What models are available for RFE generation?"
- "Create a JIRA ticket for this enhancement request"
- "Update JIRA issue RHOAI-123 with additional details"
MCP Endpoint: https://pm-chatbot-api.apps.cluster-znvdr.znvdr.sandbox203.opentlc.com/mcp
```bash
# With authentication token (required for deployed API)
curl -X POST "https://pm-chatbot-api.apps.cluster-znvdr.znvdr.sandbox203.opentlc.com/api/v1/rfe/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer pmbot-production-token" \
  --max-time 120 \
  -d '{
    "prompt": "Create an RFE for improving AI model performance monitoring",
    "context": "Red Hat AI platform needs better monitoring capabilities for deployed models",
    "selected_product": "Red Hat AI"
  }'
```
```bash
curl -X POST "https://pm-chatbot-api.apps.cluster-znvdr.znvdr.sandbox203.opentlc.com/mcp/documents/search" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer pmbot-production-token" \
  -d '{
    "query": "Red Hat AI installation requirements",
    "limit": 5
  }'
```
```
                     ┌─────────────────┐
                     │    OpenShift    │
           ┌─────────│   Deployment    │─────────┐
           │         └─────────────────┘         │
           ▼                                     ▼
 ┌─────────────────┐                   ┌─────────────────┐
 │    Streamlit    │                   │     FastAPI     │
 │    Frontend     │──────────────────▶│     Backend     │
 │                 │                   │  + MCP Server   │
 └─────────────────┘                   └─────────────────┘
           │                                     │
           ▼                                     ▼
 ┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
 │    User Chat    │   │    Vector DB    │   │    JIRA API     │
 │    Interface    │──▶│     (FAISS)     │──▶│   Integration   │
 └─────────────────┘   └─────────────────┘   └─────────────────┘
           │                     │
           ▼                     ▼
 ┌─────────────────┐   ┌─────────────────┐
 │   MCP Clients   │   │    AI Agents    │
 │ (Claude, Tools) │   │  & Extensions   │
 └─────────────────┘   └─────────────────┘
```
```
PMBot/
├── Core Components
│   ├── pm_chatbot_main.py          # Streamlit frontend application
│   ├── api_server.py               # FastAPI backend server
│   ├── atlassian_client.py         # JIRA/Confluence integration
│   ├── rfe_manager.py              # RFE creation and validation
│   ├── document_processor.py       # Document processing and indexing
│   ├── vector_database.py          # Vector database operations
│   └── auth.py                     # Authentication utilities
│
├── Documentation & Data
│   ├── documents/                  # Source documentation (PDFs)
│   ├── document_cache/             # Processed document cache (JSON)
│   ├── vector_db/                  # FAISS vector database files
│   └── docs/                       # Project documentation
│       └── RFE_Issue_Description_Guidelines.md
│
├── Deployment & Infrastructure
│   ├── deployment/
│   │   ├── docker/                 # Container definitions
│   │   ├── openshift/              # OpenShift deployment configs
│   │   └── scripts/                # Deployment scripts
│   ├── .github/                    # GitHub Actions workflows
│   └── deploy-to-cluster.sh        # Main deployment script
│
├── Configuration
│   ├── config/
│   │   ├── requirements.txt        # Core Python dependencies
│   │   ├── requirements.api.txt    # API-specific dependencies
│   │   ├── requirements.full.txt   # Complete dependency list
│   │   └── nginx.conf              # Nginx configuration
│   ├── .env.template               # Environment variables template
│   └── .streamlit/                 # Streamlit configuration
│
└── Utilities & Scripts
    ├── scripts/
    │   └── setup-github-secrets.sh # GitHub secrets setup
    └── utils/
        ├── local-app.sh            # Local Streamlit app launcher
        └── local-backend.sh        # Local API server launcher
```
- Python 3.11+ installed
- Git installed
- JIRA account with personal access token
- MaaS (Model-as-a-Service) API credentials
macOS:
```bash
# Install SWIG (required for the FAISS vector database)
brew install swig
```
Ubuntu/Debian:
```bash
# Install SWIG and build tools
sudo apt-get update
sudo apt-get install swig build-essential
```
RHEL/CentOS/Fedora:
```bash
# Install SWIG and development tools
sudo yum install swig gcc-c++ python3-devel
# OR for newer versions:
sudo dnf install swig gcc-c++ python3-devel
```
```bash
# Clone the repository
git clone https://github.com/jhuang2026/PMBot.git
cd PMBot
```
```bash
# Copy the template
cp .env.template .env
# Edit with your credentials
nano .env  # or use your preferred editor
```
JIRA Personal Access Token
- Go to https://issues.redhat.com/secure/ViewProfile.jspa?selectedTab=com.atlassian.pats.pats-plugin:jira-user-personal-access-tokens
- Create a new token
- Copy it to `JIRA_PERSONAL_TOKEN` in your `.env` file
MaaS API Credentials: Red Hat users can visit the Red Hat MaaS Platform to get their credentials.
Contact your AI service provider for:
- API Key
- Base URL endpoint
- Model name
Add at least ONE model configuration to your .env file.
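To confirm your configuration is complete after loading `.env` into the environment, a small sketch like the following can report which models have all three variables set. It assumes only the `MAAS_<MODEL>_API_KEY` / `_BASE_URL` / `_MODEL_NAME` naming pattern shown in this README:

```python
import os

REQUIRED_SUFFIXES = ("_API_KEY", "_BASE_URL", "_MODEL_NAME")

def complete_maas_models(env: dict) -> list:
    """Return MaaS model prefixes that have all three required variables set."""
    # Collect every candidate prefix, e.g. MAAS_PHI_4 from MAAS_PHI_4_API_KEY.
    prefixes = {
        key[: -len(suffix)]
        for key in env
        for suffix in REQUIRED_SUFFIXES
        if key.startswith("MAAS_") and key.endswith(suffix)
    }
    # Keep only prefixes where all three variables are present and non-empty.
    return sorted(
        p for p in prefixes
        if all(env.get(p + s) for s in REQUIRED_SUFFIXES)
    )

print(complete_maas_models(dict(os.environ)))
```

An empty list means the app will fail with "No MaaS models configured" (see Troubleshooting below).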
```bash
# Start the application (installs dependencies automatically)
./utils/local-app.sh

# Note: If you get FAISS installation errors, install SWIG first:
# macOS: brew install swig
# Ubuntu: sudo apt-get install swig build-essential
# RHEL/CentOS: sudo yum install swig gcc-c++ python3-devel
```
The app will be available at: http://localhost:8501
```bash
# In a new terminal, start the API server
./utils/local-backend.sh
# Note: This installs additional API dependencies from requirements.api.txt
```
API will be available at: http://localhost:8000
API docs at: http://localhost:8000/docs
```
# Minimum required for basic functionality
MAAS_PHI_4_API_KEY=your_api_key
MAAS_PHI_4_BASE_URL=https://your-endpoint.com/v1
MAAS_PHI_4_MODEL_NAME=microsoft/phi-4
JIRA_URL=https://issues.redhat.com/
JIRA_PERSONAL_TOKEN=your_token
```
- OpenShift/Kubernetes cluster with admin access
- Container registry access (Quay.io recommended)
- Environment configuration (see below)
The deployment configuration uses quay.io/rh-ee-jashuang as the default registry. Set up your own registry to avoid pushing images to someone else's account.
Option 1: Quay.io (Recommended)
```bash
# 1. Create an account at https://quay.io
# 2. Create repository: quay.io/your-username/pm-chatbot-streamlit
# 3. Create repository: quay.io/your-username/pm-chatbot-api
# 4. Make the repositories public for easier access
# 5. Log in locally
podman login quay.io
```
Option 2: Docker Hub
```bash
# 1. Create an account at https://hub.docker.com
# 2. Create repositories for the streamlit and api images
# 3. Log in locally
podman login docker.io
```
The deploy-to-cluster.sh script is a comprehensive deployment tool that handles the whole process automatically, making it well suited to first-time users. By default it pushes to the quay.io/rh-ee-jashuang registry; change this to your own registry before deploying so you do not push to someone else's account.
Basic Usage (Simplest):
```bash
# Clone the repository
git clone https://github.com/jhuang2026/PMBot.git
cd PMBot

# Configure your environment
cp .env.template .env
# Edit .env with your credentials (see Step 3 above for credential sources)

# IMPORTANT: Use your own registry (recommended)
./deploy-to-cluster.sh -r quay.io/your-username

# OR: Deploy with the default registry (uses rh-ee-jashuang - not recommended)
./deploy-to-cluster.sh
```
Advanced Usage Options:
```bash
# Deploy to a different cluster (specify API URL)
./deploy-to-cluster.sh -a https://api.your-cluster.com:6443

# Use your own container registry
./deploy-to-cluster.sh -r quay.io/your-username

# Deploy to a custom namespace
./deploy-to-cluster.sh -n my-custom-namespace

# Combine multiple options
./deploy-to-cluster.sh -a https://api.new-cluster.com:6443 -r quay.io/myorg -n production

# See all available options
./deploy-to-cluster.sh --help
```
What the Script Does Automatically:
- Cluster Detection - Works with both new and existing OpenShift clusters
- Smart Building - Only rebuilds images when source code changes (saves time!)
- Container Management - Handles Podman setup, registry login, and image pushing
- Security Setup - Creates all required secrets from your `.env` file
- Deployment - Applies all OpenShift configurations and waits for readiness
- Cache Upload - Uploads the pre-built RAG cache for faster startup
- Verification - Checks all services and provides the final URLs
First-Time User Checklist:
Before running the script, ensure you have:
- OpenShift cluster access (log in with `oc login`)
- A container registry account (Quay.io or Docker Hub)
- Podman installed (the script will try to start it automatically)
- Environment configured (`.env` file with your credentials)
Performance Features:
The script includes several optimizations for faster deployments:
- Smart caching - Remembers previous deployments and skips unchanged components
- Parallel operations - Builds and deploys multiple components simultaneously
- Layer reuse - Docker layers are cached between builds
- Incremental updates - Only updates what actually changed
Troubleshooting:
If deployment fails:
```bash
# Check your cluster connection
oc whoami
oc get nodes

# Verify Podman is working
podman info

# Check container registry login
podman login quay.io  # or your registry

# See detailed script help
./deploy-to-cluster.sh --help
```
Deployment Scenarios:
```bash
# Scenario 1: RECOMMENDED - Deploy with your own registry
./deploy-to-cluster.sh -r quay.io/your-username

# Scenario 2: Deploy to a different OpenShift cluster with your registry
./deploy-to-cluster.sh -a https://api.cluster-abc.sandbox.com:6443 -r quay.io/your-username

# Scenario 3: Use Docker Hub instead of Quay
./deploy-to-cluster.sh -r docker.io/your-username

# Scenario 4: Deploy to a custom namespace with your registry
./deploy-to-cluster.sh -r quay.io/your-username -n my-custom-namespace

# Scenario 5: Production deployment (custom cluster + your registry + namespace)
./deploy-to-cluster.sh \
  -a https://api.prod-cluster.company.com:6443 \
  -r quay.io/your-company \
  -n pmbot-production

# Scenario 6: Development environment
./deploy-to-cluster.sh \
  -a https://api.dev-cluster.sandbox.com:6443 \
  -r quay.io/dev-team \
  -n pmbot-dev

# NOT RECOMMENDED: Deploy with the default registry (uses rh-ee-jashuang)
# ./deploy-to-cluster.sh
```
Pro Tips:
- The script remembers your last deployment settings for faster re-deployments
- Use the `-a` flag to easily switch between different OpenShift clusters
- Each deployment creates cache files to speed up subsequent deployments
- The script works with any OpenShift-compatible cluster (OKD, RHOKS, etc.)
This repository includes automated GitHub Actions workflows for building and deploying to OpenShift. This builds on the manual deployment foundation above. IMPORTANT: You must set up GitHub Secrets before deploying.
This repository requires sensitive credentials (API keys, tokens) that must NOT be stored in the code. Instead, they should be configured as GitHub Secrets for secure deployment.
- Go to your GitHub repository
- Click Settings (in the repository menu)
- In the left sidebar, click Secrets and variables → Actions
- Click New repository secret
You need to add the following secrets (use the values from your .env file):
- `JIRA_URL` - Your JIRA instance URL (e.g., https://issues.redhat.com/)
- `JIRA_PERSONAL_TOKEN` - Your personal JIRA API token
For each model you want to use, add these secrets:
DeepSeek R1 Qwen 14B:
- `MAAS_DEEPSEEK_R1_QWEN_14B_API_KEY`
- `MAAS_DEEPSEEK_R1_QWEN_14B_BASE_URL`
- `MAAS_DEEPSEEK_R1_QWEN_14B_MODEL_NAME`

Phi-4:
- `MAAS_PHI_4_API_KEY`
- `MAAS_PHI_4_BASE_URL`
- `MAAS_PHI_4_MODEL_NAME`

Granite 3.3 8B Instruct:
- `MAAS_GRANITE_3_3_8B_INSTRUCT_API_KEY`
- `MAAS_GRANITE_3_3_8B_INSTRUCT_BASE_URL`
- `MAAS_GRANITE_3_3_8B_INSTRUCT_MODEL_NAME`

Llama 4 Scout 17B:
- `MAAS_LLAMA_4_SCOUT_17B_API_KEY`
- `MAAS_LLAMA_4_SCOUT_17B_BASE_URL`
- `MAAS_LLAMA_4_SCOUT_17B_MODEL_NAME`

Mistral Small 24B:
- `MAAS_MISTRAL_SMALL_24B_API_KEY`
- `MAAS_MISTRAL_SMALL_24B_BASE_URL`
- `MAAS_MISTRAL_SMALL_24B_MODEL_NAME`
Use the provided script to set up all secrets easily:
```bash
# Install GitHub CLI if not already installed
# See: https://cli.github.com/

# Authenticate with GitHub
gh auth login

# Run the setup script
./scripts/setup-github-secrets.sh
```
After adding all secrets:
- The GitHub Actions workflow will automatically use them during deployment
- Check deployment logs to ensure there are no authentication errors
- Visit your deployed application to verify it's working
- Never commit credentials to version control
- Rotate API keys regularly
- Use least-privilege access for service accounts
- Monitor secret usage in logs and audit trails
- Remove unused secrets promptly
If deployment fails with authentication errors:
- Verify all required secrets are added to GitHub
- Check secret names match exactly (case-sensitive)
- Ensure API keys are valid and not expired
- Check base URLs are accessible from your deployment environment
- Review GitHub Actions logs for specific error messages
Once GitHub Secrets are configured, simply push to the main branch:
```bash
git add .
git commit -m "Configure deployment"
git push origin main
```
The GitHub Actions workflow will automatically:
- Build container images
- Push to your container registry
- Deploy to OpenShift
- Set up networking and routes
"No MaaS models configured"
- Check that you have at least one complete MaaS model configuration (API_KEY, BASE_URL, MODEL_NAME)
- Verify environment variables are loaded: `echo $MAAS_PHI_4_API_KEY`
"JIRA connection failed"
- Verify JIRA_URL and JIRA_PERSONAL_TOKEN are correct
- Test JIRA token: https://issues.redhat.com/rest/api/2/myself
- Check token permissions in JIRA settings
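The token test can also be scripted. This is a minimal sketch using only the `/rest/api/2/myself` endpoint mentioned above; substitute your own token before sending the request:

```python
import json
import urllib.request

def build_myself_request(jira_url: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for JIRA's /rest/api/2/myself endpoint."""
    return urllib.request.Request(
        jira_url.rstrip("/") + "/rest/api/2/myself",
        headers={"Authorization": f"Bearer {token}"},
    )

# Example (requires a valid personal access token):
# req = build_myself_request("https://issues.redhat.com/", "your_personal_token")
# with urllib.request.urlopen(req, timeout=30) as resp:
#     print(json.load(resp).get("displayName"))
```

A 401 response here means the token is wrong or expired; a 403 points at missing permissions.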
"Module not found" errors
- Ensure you're in the virtual environment: `source venv/bin/activate`
- Reinstall dependencies: `pip install -r config/requirements.txt`
- Try the automated setup: `./utils/local-app.sh`
FAISS installation issues
- Error "Microsoft Visual C++ 14.0 is required" (Windows): install Visual Studio Build Tools
- Error "swig executable not found" (all platforms): install SWIG first
  - macOS: `brew install swig`
  - Ubuntu/Debian: `sudo apt-get install swig build-essential`
  - RHEL/CentOS: `sudo yum install swig gcc-c++ python3-devel`
GitHub Actions Deployment Failures
- Verify all required GitHub Secrets are set (see Security Setup section)
- Check GitHub Actions logs for specific error messages
- Ensure secret names match exactly (case-sensitive)
- Verify API keys are valid and not expired
- Never commit credentials to version control
- Use GitHub Secrets for all sensitive values in CI/CD
- Rotate API keys regularly
- Use least-privilege access for service accounts
- Monitor secret usage in logs and audit trails
- Keep credentials in the `.env` file (gitignored)
- Use development-specific API keys when possible
- Regularly update dependencies for security patches
- Create an RFE: Use the chat interface to describe your enhancement request
- Validate RFE Format: Check if your RFE follows proper guidelines
- Search Documentation: Find relevant Red Hat AI documentation
- JIRA Integration: Create and manage JIRA tickets directly
- API Automation: Integrate RFE creation into your development workflow
- MCP Integration: Connect AI agents (Claude Desktop, VS Code) for automated workflows
- Streamlit App: No authentication required - visit the web interface directly
- Interactive Testing: Use the web interface for easy testing without tokens
- Production API: Requires an authentication token in the header: `Authorization: Bearer pmbot-production-token`
- Local Development: May not require authentication depending on configuration
- Token Format: Use `Bearer` followed by your token
- Contact: Repository maintainer for production API tokens
- Local Setup: Configure your own tokens in the `.env` file for local development
- JIRA Integration: Requires a separate JIRA personal access token
Ready to get started? Try the live demo or explore the API!