```
██╗    ██╗██╗██████╗
██║    ██║██║██╔══██╗
██║ █╗ ██║██║██████╔╝
██║███╗██║██║██╔═══╝
╚███╔███╔╝██║██║
 ╚══╝╚══╝  ╚═╝╚═╝
```
This open source version is experimental and not thoroughly tested. It's provided for research purposes and academic review. Use at your own risk in production environments.
- Prompts and LLM logic: Check `backend/utils/llm.py` for the actual prompt engineering
- Business logic: Core retrieval and generation logic in `backend/business_logic/business_logic.py`
- Research purpose: This version supports academic research and reproducibility
This is the open source research version of the Frickly Assistant, an AI-powered tool for generating technical project specifications and requirements documentation. This version supports the research paper [Your Paper Title Here] and has been made available for reviewers and researchers to examine and reproduce the results.
The Frickly Assistant uses Retrieval-Augmented Generation (RAG) to help generate structured project documentation by combining the following (a minimal sketch follows the list):
- Vector database search through technical documents
- Large Language Model reasoning and synthesis
- Structured output generation following engineering templates
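
To make the combination concrete, here is a minimal sketch of one such round trip using ChromaDB and the OpenAI client. The collection name, model, and prompt wording are assumptions for illustration only; the project's actual pipeline lives in `backend/business_logic/business_logic.py`.

```python
# Minimal RAG sketch: retrieve context from a vector database, then synthesize
# an answer with an LLM. Names and prompts below are illustrative assumptions,
# not the project's real configuration.
import chromadb
from openai import OpenAI

chroma = chromadb.PersistentClient(path="chroma_db")
collection = chroma.get_or_create_collection("technical_docs")
llm = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer(question: str) -> str:
    # 1. Vector database search through technical documents
    hits = collection.query(query_texts=[question], n_results=3)
    context = "\n\n".join(hits["documents"][0])

    # 2. LLM reasoning and synthesis over the retrieved context
    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Draft structured specification sections from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nRequest: {question}"},
        ],
    )

    # 3. Structured output following an engineering template
    return response.choices[0].message.content
```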
Key Features:
- 📋 Automated technical specification generation
- 🔍 Intelligent document retrieval and context integration
- 🤖 Multiple AI retrieval strategies with metadata analysis
- 💾 Conversation logging for research analysis
- 🌐 Web-based interface for interactive use
- Python 3.8+ - Download from python.org
- Node.js 18+ - Download from nodejs.org
- OpenAI API key - Get from platform.openai.com
```bash
# 1. Clone the repository
git clone https://github.com/frickly-systems/reqrag.git
cd reqrag

# 2. Add your OpenAI API key
cp .env.example .env
# Edit .env and add: OPENAI_API_KEY=your_key_here

# 3. Run setup
python setup.py
```

```bash
# Terminal 1: Backend
cd backend && ./start.sh

# Terminal 2: Frontend
cd frontend && npm run dev

# Open: http://localhost:3000
```

The research version includes sanitized sample documents in the `sample-documents/` directory:
- `sample-project-brief.md` - Example project description
- `sample-technical-spec.md` - Example technical specification
These demonstrate the application's capabilities without exposing proprietary data.
This system implements several features for research analysis:
- Multiple Retrieval Strategies: Three different retrieval approaches (Old, Metadata, Structured)
- Conversation Logging: All interactions logged in `backend/logs/JSON/` for analysis
- Performance Metrics: Token usage and generation time tracking (see the sketch after this list)
- Configurable Processing: Customizable text splitting and embedding strategies
- Sample Data: Reproducible results using the provided sample documents
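
As an example of the performance-metrics idea, here is a rough way to capture generation time and token estimates around any generation call. The field names are assumptions for illustration, not the project's actual log schema.

```python
# Sketch: wrap a generation call to record wall-clock time and rough token
# counts via tiktoken. Field names are illustrative only.
import time
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")


def with_metrics(generate, prompt: str) -> dict:
    start = time.perf_counter()
    completion = generate(prompt)
    return {
        "generation_time_s": round(time.perf_counter() - start, 3),
        "prompt_tokens_est": len(enc.encode(prompt)),
        "completion_tokens_est": len(enc.encode(completion)),
        "completion": completion,
    }
```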
To connect your own Confluence instance:
1. Add credentials to the `.env` file:

   ```
   CONFLUENCE_API_KEY=your_api_token
   CONFLUENCE_USER=your_email@company.com
   CONFLUENCE_URL=https://company.atlassian.net/wiki
   ```

2. Enable it in `backend/config/vector_database_config.yaml`:

   ```yaml
   confluence:
     enabled: true  # Change from false to true
     spaces:
       - key: "YOUR_SPACE"
         name: "Your Space Name"
   ```
The system is pre-configured to work with sample documents. All settings are in backend/config/vector_database_config.yaml.
For advanced options, see backend/config/README.md.
```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│    Frontend     │    │     Backend     │    │    Vector DB    │
│   (Next.js)     │◄──►│     (Flask)     │◄──►│   (ChromaDB)    │
│                 │    │                 │    │                 │
│ • Chat Interface│    │ • RAG Pipeline  │    │ • Document      │
│ • Session Mgmt  │    │ • Multiple      │    │   Embeddings    │
│                 │    │   Retrievers    │    │                 │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```
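
A stripped-down sketch of what the Flask side of this picture can look like; the route name and payload shape are hypothetical, not the project's actual API.

```python
# Hypothetical chat endpoint in front of a RAG pipeline. Route and JSON fields
# are assumptions for illustration; see the backend code for the real API.
from flask import Flask, jsonify, request

app = Flask(__name__)


def answer(question: str) -> str:
    # Placeholder; a RAG pipeline like the one sketched earlier goes here.
    return f"(generated specification text for: {question})"


@app.post("/api/chat")
def chat():
    payload = request.get_json(force=True)
    return jsonify({"reply": answer(payload.get("message", ""))})


if __name__ == "__main__":
    app.run(port=5000)
```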
The system implements several retrieval approaches (a short comparison sketch follows the list):
- Metadata-Retriever: Analyzes document metadata for relevance
- Old-Retriever: Basic similarity search
- Structured-Retriever: Template-aware retrieval
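
To illustrate how a metadata-aware retriever can differ from plain similarity search, ChromaDB queries accept a `where` filter over stored metadata. The `doc_type` key below is an assumed metadata field, not necessarily what this project stores.

```python
# Sketch: plain similarity search vs. metadata-filtered retrieval in ChromaDB.
# The "doc_type" metadata key is an assumption for illustration.
import chromadb

collection = chromadb.PersistentClient(path="chroma_db").get_or_create_collection("technical_docs")

# Old-Retriever style: pure similarity search over all documents
basic = collection.query(query_texts=["power supply requirements"], n_results=5)

# Metadata-Retriever style: narrow the candidate set with a metadata filter first
filtered = collection.query(
    query_texts=["power supply requirements"],
    n_results=5,
    where={"doc_type": "technical_spec"},
)
```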
All interactions are logged in structured JSON format for research analysis (a loading sketch follows the list):
- User prompts and AI responses
- Retrieved document contexts
- Retrieval strategy metadata
- Token usage estimates
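
For analysis, the logs can be loaded like any other JSON corpus. A small sketch, assuming one JSON file per conversation and illustrative column names; check the actual files in `backend/logs/JSON/` for the real schema.

```python
# Sketch: load the JSON conversation logs into a pandas DataFrame for analysis.
# One-file-per-conversation layout and column names are assumptions.
import json
from pathlib import Path

import pandas as pd

records = []
for log_file in Path("backend/logs/JSON").glob("*.json"):
    with open(log_file, encoding="utf-8") as f:
        records.append(json.load(f))

df = pd.json_normalize(records)
print(df.head())  # e.g. columns such as prompt, response, retriever, token_usage
```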
- Start a new conversation
- Upload or describe your project requirements and/or project description
- Review and iterate on the generated document
- Add sample documents to the `sample-documents/` folder (optional)
- For Confluence: Configure the `.env` file and enable it in the config (see above)
- Run: `cd backend && python createVectorDatabase.py` (a conceptual sketch follows)
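
Conceptually, building the vector database means chunking the source documents and adding the chunks to a ChromaDB collection. The sketch below uses a naive fixed-size splitter and made-up metadata; the actual splitting and embedding strategy is configured in `backend/config/vector_database_config.yaml` and implemented in `createVectorDatabase.py`.

```python
# Sketch: read the sample documents, split them into overlapping chunks, and
# store the chunks in ChromaDB. Chunk sizes and metadata are illustrative.
from pathlib import Path

import chromadb

collection = chromadb.PersistentClient(path="chroma_db").get_or_create_collection("technical_docs")


def chunk(text: str, size: int = 1000, overlap: int = 200):
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]


for doc in Path("sample-documents").glob("*.md"):
    pieces = chunk(doc.read_text(encoding="utf-8"))
    collection.add(
        documents=pieces,
        ids=[f"{doc.stem}-{i}" for i in range(len(pieces))],
        metadatas=[{"source": doc.name} for _ in pieces],
    )
```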