
██╗    ██╗██╗██████╗ 
██║    ██║██║██╔══██╗
██║ █╗ ██║██║██████╔╝
██║███╗██║██║██╔═══╝ 
╚███╔███╔╝██║██║     
 ╚══╝╚══╝ ╚═╝╚═╝     

Work in Progress

Frickly Assistant - Research Version

License: MIT

⚠️ EXPERIMENTAL DISCLAIMER

This open source version is experimental and not thoroughly tested. It's provided for research purposes and academic review. Use at your own risk in production environments.

  • Prompts and LLM logic: Check backend/utils/llm.py for the actual prompt engineering
  • Business logic: Core retrieval and generation logic in backend/business_logic/business_logic.py
  • Research purpose: This version supports academic research and reproducibility

This is the open source research version of the Frickly Assistant, an AI-powered tool for generating technical project specifications and requirements documentation. It accompanies the research paper [Your Paper Title Here] and is made available so reviewers and researchers can examine the system and reproduce its results.

🎯 About This Research Version

The Frickly Assistant uses Retrieval-Augmented Generation (RAG) to help generate structured project documentation by combining the following (a minimal sketch appears after the list):

  • Vector database search through technical documents
  • Large Language Model reasoning and synthesis
  • Structured output generation following engineering templates
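
To make the combination concrete, here is a minimal sketch of the general RAG pattern in Python. It is not the project's actual pipeline (that lives in backend/business_logic/business_logic.py and backend/utils/llm.py); the storage path, collection name, model choice, and prompt wording are assumptions.

# Minimal RAG sketch: retrieve context from the vector store, then ask an LLM
# to draft a structured specification. Illustrative only.
import chromadb
from openai import OpenAI

chroma = chromadb.PersistentClient(path="backend/chroma")   # path is an assumption
collection = chroma.get_or_create_collection("documents")   # name is an assumption
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question: str) -> str:
    # 1. Vector database search: fetch the most similar document chunks
    hits = collection.query(query_texts=[question], n_results=3)
    context = "\n\n".join(hits["documents"][0])

    # 2. LLM reasoning and synthesis over the retrieved context
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {"role": "system", "content": "Draft a structured technical specification using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nRequest:\n{question}"},
        ],
    )

    # 3. Structured output following an engineering template
    return response.choices[0].message.content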

Key Features:

  • 📋 Automated technical specification generation
  • 🔍 Intelligent document retrieval and context integration
  • 🤖 Multiple AI retrieval strategies with metadata analysis
  • 💾 Conversation logging for research analysis
  • 🌐 Web-based interface for interactive use

🚀 Quick Start

Prerequisites

As implied by the setup steps below, you will need:

  • Python 3 (for the Flask backend and setup script)
  • Node.js and npm (for the Next.js frontend)
  • An OpenAI API key

Setup

# 1. Clone the repository
git clone https://github.com/frickly-systems/reqrag.git
cd reqrag

# 2. Add your OpenAI API key
cp .env.example .env
# Edit .env and add: OPENAI_API_KEY=your_key_here

# 3. Run setup
python setup.py
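
How the backend picks up the key is not spelled out here; the snippet below is a minimal sketch assuming python-dotenv, a common way for a Flask backend to read a .env file.

# Sketch: load OPENAI_API_KEY from .env (assumes python-dotenv is installed)
import os
from dotenv import load_dotenv

load_dotenv()  # copies key=value pairs from .env into the process environment
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is missing; add it to your .env file")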

Starting the Application

# Terminal 1: Backend
cd backend && ./start.sh

# Terminal 2: Frontend  
cd frontend && npm run dev

# Open: http://localhost:3000

📚 Sample Documents

The research version includes sanitized sample documents in the sample-documents/ directory:

  • sample-project-brief.md - Example project description
  • sample-technical-spec.md - Example technical specification

These demonstrate the application's capabilities without exposing proprietary data.

🔬 Research Features

This system implements several features for research analysis:

  • Multiple Retrieval Strategies: Three different retrieval approaches (Old, Metadata, Structured)
  • Conversation Logging: All interactions logged in backend/logs/JSON/ for analysis
  • Performance Metrics: Token usage and generation time tracking
  • Configurable Processing: Customizable text splitting and embedding strategies
  • Sample Data: Reproducible results using the provided sample documents

🔧 Configuration Options

Confluence Integration (Optional)

To connect your own Confluence instance (a small API sketch follows these steps):

  1. Add credentials to .env file:

    CONFLUENCE_API_KEY=your_api_token
    CONFLUENCE_USER=your_email@company.com
    CONFLUENCE_URL=https://company.atlassian.net/wiki
  2. Enable in backend/config/vector_database_config.yaml:

    confluence:
      enabled: true          # Change from false to true
      spaces:
        - key: "YOUR_SPACE"
          name: "Your Space Name"

Configuration

The system is pre-configured to work with sample documents. All settings are in backend/config/vector_database_config.yaml.

For advanced options, see backend/config/README.md.
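
If you want to inspect settings programmatically, the file is ordinary YAML; the sketch below assumes PyYAML and uses the confluence.enabled key shown above (other keys may differ).

# Sketch: read the vector database configuration (assumes PyYAML is installed)
import yaml

with open("backend/config/vector_database_config.yaml") as f:
    config = yaml.safe_load(f)

print(config.get("confluence", {}).get("enabled"))  # False in the default setup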

🏗️ Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Frontend      │    │   Backend       │    │  Vector DB      │
│   (Next.js)     │◄──►│   (Flask)       │◄──►│  (ChromaDB)     │
│                 │    │                 │    │                 │
│ • Chat Interface│    │ • RAG Pipeline  │    │ • Document      │
│ • Session Mgmt  │    │ • Multiple      │    │   Embeddings    │
│                 │    │   Retrievers    │    │                 │
└─────────────────┘    └─────────────────┘    └─────────────────┘
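
To illustrate how the pieces talk to each other, here is a minimal sketch of a Flask route sitting between the Next.js chat interface and the RAG pipeline. The /api/chat path, payload shape, and port are hypothetical, not the project's actual API.

# Sketch: a hypothetical Flask route connecting the chat frontend to the RAG pipeline
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_reply(message: str) -> str:
    # placeholder for the real pipeline (retrieval + LLM synthesis)
    return f"(draft specification for: {message})"

@app.post("/api/chat")  # route name is an assumption
def chat():
    payload = request.get_json()
    return jsonify({"reply": generate_reply(payload["message"])})

if __name__ == "__main__":
    app.run(port=5000)  # port is an assumption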

🔬 Research Features

Multiple Retrieval Strategies

The system implements several retrieval approaches (a short contrast sketch follows the list):

  1. Metadata-Retriever: Analyzes document metadata for relevance
  2. Old-Retriever: Basic similarity search
  3. Structured-Retriever: Template-aware retrieval
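
The sketch below contrasts a plain similarity search with a metadata-filtered query in ChromaDB. It is only meant to illustrate the difference in flavour; the project's actual retrievers may work differently, and the "section" metadata key is an assumption.

# Sketch: plain similarity search vs. metadata-filtered query in ChromaDB
import chromadb

collection = chromadb.PersistentClient(path="backend/chroma").get_or_create_collection("documents")

# "Old" style: pure vector similarity over all chunks
plain = collection.query(query_texts=["authentication requirements"], n_results=5)

# "Metadata" style: similarity restricted by document metadata
filtered = collection.query(
    query_texts=["authentication requirements"],
    n_results=5,
    where={"section": "requirements"},  # metadata key is an assumption
)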

Conversation Logging

All interactions are logged in structured JSON format for research analysis (a sample record sketch follows the list):

  • User prompts and AI responses
  • Retrieved document contexts
  • Retrieval strategy metadata
  • Token usage estimates
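
The sketch below shows what one such record might look like; the field names mirror the list above, but the actual schema in backend/logs/JSON/ may differ.

# Sketch: append one interaction record as a JSON file for later analysis
import json
import time
from pathlib import Path

log_dir = Path("backend/logs/JSON")
log_dir.mkdir(parents=True, exist_ok=True)

record = {
    "timestamp": time.time(),
    "prompt": "Generate a spec for a telemetry gateway",
    "response": "...",  # AI response text
    "retrieved_contexts": ["chunk 1", "chunk 2"],
    "retrieval_strategy": "Metadata-Retriever",
    "token_usage_estimate": 1234,
}

with open(log_dir / f"{int(record['timestamp'])}.json", "w") as f:
    json.dump(record, f, indent=2)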

📖 Usage Examples

Basic Project Specification Generation

  1. Start a new conversation
  2. Upload or describe your project requirements and/or project description
  3. Review and iterate on the generated document

Update Vector Database

  1. Add sample documents to sample-documents/ folder (optional)
  2. For Confluence: Configure .env file and enable in config (see above)
  3. Run: cd backend && python createVectorDatabase.py (a sketch of the indexing idea follows)
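
As a rough sketch of what indexing the sample documents into ChromaDB can look like (not the contents of createVectorDatabase.py; the chunk size, collection name, and paths are assumptions):

# Sketch: index markdown files from sample-documents/ into a ChromaDB collection
import chromadb
from pathlib import Path

collection = chromadb.PersistentClient(path="backend/chroma").get_or_create_collection("documents")

for doc in Path("sample-documents").glob("*.md"):  # adjust the path if running from backend/
    text = doc.read_text(encoding="utf-8")
    # naive fixed-size chunking; the project uses configurable text splitting
    chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]
    if not chunks:
        continue
    collection.add(
        documents=chunks,
        ids=[f"{doc.stem}-{i}" for i in range(len(chunks))],
        metadatas=[{"source": doc.name} for _ in chunks],
    )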
