
# AI Workforce Research Platform

**🤖 AI Workforce of the Future**

*Comparative Analysis of GraphRAG-Enhanced Multi-Agent Systems for Qualitative Organizational Research*

**5 Autonomy Levels · 12 Agent Types · 98% Memory Retention**

`TypeScript` `React` `Python` `FastAPI` `Neo4j` `OpenAI` `TailwindCSS` `Vite`

**Status:** Research · **License:** All Rights Reserved · **Affiliation:** Aalto University



## 🎯 Overview

This research platform is a comprehensive tool for comparing AI autonomy levels in organizational analysis contexts. It implements a progression from simple reactive chatbots to sophisticated GraphRAG-enhanced multi-agent systems, enabling rigorous comparative analysis of AI capabilities for qualitative research tasks.

> **🎓 Academic Project**
> Developed as part of a Master's thesis at Aalto University School of Business, focusing on how different AI autonomy levels and multi-agent architectures can enhance qualitative organizational research.


## 📚 Research Context

### Thesis Information

| Field | Details |
|---|---|
| Title | Enhancing Qualitative Organizational Analysis through Integrated GraphRAG and Multi-Agent AI Systems |
| Author | Ali Amaan |
| Programme | Master's Programme in Economics & Business Administration |
| Major | International Design Business Management (IDBM) |
| Institution | Aalto University School of Business |
| Partner | Department of Management Studies, Aalto University |
| Expected Completion | December 2025 |

### Research Questions

1. How do different AI autonomy levels compare in supporting qualitative organizational analysis?
2. What role does GraphRAG play in enhancing multi-agent system memory and context retention?
3. How can human-in-the-loop (HIL) mechanisms improve AI-assisted research workflows?

### Keywords

`GraphRAG` · `Multi-Agent Systems` · `Knowledge Graphs` · `Qualitative Analysis` · `Organizational Research` · `Agentic AI` · `Human-in-the-Loop` · `LLM Orchestration`


## ✨ Features

### 🧠 Intelligent Analysis

- **5 Autonomy Levels** — From reactive chatbot to autonomous multi-agent systems
- **12 Specialized Agents** — Manager, Researcher, Analyst, Visualizer, Engineer, and more
- **Code Interpreter** — Execute Python for data analysis and visualization
- **Web Search** — Real-time information retrieval via DuckDuckGo
- **Image Generation** — AI-powered visual content creation

### 📊 GraphRAG Integration

- **Knowledge Graph Memory** — Persistent entity and relationship storage
- **Semantic Search** — Find relevant facts and node summaries
- **Episode Management** — Add, retrieve, and manage conversation episodes
- **Entity Extraction** — Automatic extraction of entities from conversations
- **Temporal Awareness** — Track when information was learned
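The memory concepts above (episodes, extracted entities, temporal awareness) can be illustrated with a minimal in-memory sketch. This is a toy illustration of the ideas only, not the Graphiti API: real GraphRAG stores entities in Neo4j and searches them semantically.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Episode:
    """One conversation episode plus the entities extracted from it."""
    text: str
    entities: list[str]
    learned_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class EpisodeStore:
    """Toy episodic memory: add episodes, look them up by entity."""

    def __init__(self) -> None:
        self.episodes: list[Episode] = []

    def add(self, text: str, entities: list[str]) -> Episode:
        ep = Episode(text, entities)
        self.episodes.append(ep)
        return ep

    def search(self, entity: str) -> list[Episode]:
        # Real GraphRAG uses semantic/vector search; exact match here.
        return [e for e in self.episodes if entity in e.entities]

store = EpisodeStore()
store.add("Aalto partners with the Management Studies department.",
          ["Aalto University", "Department of Management Studies"])
hits = store.search("Aalto University")
print(len(hits))  # 1
```

Each episode carries a `learned_at` timestamp, which is the essence of the "temporal awareness" feature: the system can reason about when a fact entered its memory.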

### 🎨 Modern UI/UX

- **Real-time Streaming** — WebSocket-based response streaming
- **Neo4j Visualization** — Interactive knowledge graph explorer
- **Plotly Charts** — Rich data visualizations
- **Dark Theme** — Beautiful, researcher-friendly interface
- **Responsive Design** — Works on desktop and tablet

### 🔧 Developer Experience

- **Type-Safe** — Full TypeScript frontend, Pydantic backend
- **Hot Reload** — Vite for frontend, Uvicorn for backend
- **Modular Architecture** — Clean separation of concerns
- **Trace Logging** — Detailed agent execution traces
- **Session Management** — Persistent analysis sessions

## 🎚️ Autonomy Stages

The platform implements a Sheridan Scale-inspired progression of AI autonomy levels:

```mermaid
graph LR
    subgraph "Problem Space"
        S1[🗨️ Stage 1<br/>Reactive Baseline]
        S2[🔧 Stage 2<br/>Tool-Augmented]
    end

    subgraph "Collaboration Space"
        S3[👥 Stage 3<br/>HIL Single-Agent]
    end

    subgraph "Autonomy Space"
        S4[🤖 Stage 4<br/>Autonomous MAS]
        S5[🌐 Stage 5<br/>HIL Multi-Agent]
    end

    S1 --> S2 --> S3 --> S4 --> S5

    style S1 fill:#3B82F6,stroke:#1E40AF,color:#fff
    style S2 fill:#F59E0B,stroke:#B45309,color:#fff
    style S3 fill:#F97316,stroke:#C2410C,color:#fff
    style S4 fill:#EF4444,stroke:#B91C1C,color:#fff
    style S5 fill:#8B5CF6,stroke:#6D28D9,color:#fff
```

| Stage | Name | Description | Sheridan Level | Capabilities |
|---|---|---|---|---|
| 1 | Reactive Baseline | Stateless LLM with minimal context | Level 1 | Basic Q&A, no memory |
| 2 | Tool-Augmented Agent | Single agent with tools | Level 5-6 | Code Interpreter, Web Search |
| 3 | HIL Single-Agent | Human-supervised single agent | Level 5-6 | Human approval, transparent reasoning |
| 4 | Autonomous MAS | Self-directed multi-agent system | Level 8-10 | Parallel agent execution, quality loops |
| 5 | HIL Multi-Agent | Human-in-the-loop multi-agent | Level 5-7 | Team collaboration, human oversight |

## 🤝 Agent Pool

The multi-agent stages (4 & 5) leverage a pool of 12 specialized agents that work together using an agents-as-tools pattern:

```mermaid
graph TB
    subgraph Manager["👔 Manager (Orchestrator)"]
        M[Project Manager]
    end

    subgraph Specialists["Specialist Agents"]
        R[🔍 Researcher<br/>Web Search]
        A[📊 Data Analyst<br/>Code Interpreter]
        V[📈 Visualizer<br/>Charts & Plots]
        G[🎨 Graphic Artist<br/>Image Generation]
        E[💻 Engineer<br/>Code Interpreter]
        W[✍️ Writer<br/>Documentation]
        C[🔎 Critic<br/>Quality Review]
        S[🎯 Strategist<br/>Business Analysis]
        D[🎓 Domain Expert<br/>Subject Matter]
        AS[🤝 Assistant<br/>General Support]
    end

    M --> R
    M --> A
    M --> V
    M --> G
    M --> E
    M --> W
    M --> C
    M --> S
    M --> D
    M --> AS

    style M fill:#3B82F6,stroke:#1E40AF,color:#fff
    style R fill:#10B981,stroke:#047857,color:#fff
    style A fill:#F59E0B,stroke:#B45309,color:#fff
    style V fill:#8B5CF6,stroke:#6D28D9,color:#fff
    style G fill:#E91E63,stroke:#AD1457,color:#fff
    style E fill:#EC4899,stroke:#BE185D,color:#fff
    style W fill:#06B6D4,stroke:#0E7490,color:#fff
    style C fill:#EF4444,stroke:#B91C1C,color:#fff
    style S fill:#F97316,stroke:#C2410C,color:#fff
    style D fill:#84CC16,stroke:#4D7C0F,color:#fff
    style AS fill:#64748B,stroke:#475569,color:#fff
```

### Agent Capabilities

| Agent | Role | Tools | Specialization |
|---|---|---|---|
| 👔 Manager | Orchestrator | | Task decomposition, delegation, synthesis |
| 🔍 Researcher | Research Analyst | Web Search | Information gathering, source evaluation |
| 📊 Data Analyst | Quantitative Analyst | Code Interpreter | Statistical analysis, pattern detection |
| 📈 Visualizer | Visualization Specialist | Code Interpreter, Image Gen | Charts, graphs, dashboards |
| 🎨 Graphic Artist | Visual Concept Artist | Image Generation | Illustrations, concept art |
| 💻 Engineer | Software Engineer | Code Interpreter | Algorithms, data pipelines |
| ✍️ Writer | Technical Writer | | Reports, documentation |
| 🔎 Critic | Quality Reviewer | | Error detection, improvements |
| 🎯 Strategist | Business Strategist | Web Search | Strategic insights, recommendations |
| 🎓 Domain Expert | Subject Matter Expert | Web Search, File Search | Domain knowledge, validation |
| 🤝 Assistant | General Assistant | Web Search, Code Interpreter | Flexible support tasks |
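The agents-as-tools pattern above, in which the manager invokes each specialist as if it were a callable tool, can be sketched as follows. This is a hypothetical simplification with stub functions; in the platform, each callable wraps an LLM agent via the OpenAI Agents SDK.

```python
from typing import Callable

# Each specialist is exposed to the manager as a plain callable "tool".
# These stubs stand in for real LLM-backed agents.
def researcher(query: str) -> str:
    return f"[research notes on: {query}]"

def data_analyst(query: str) -> str:
    return f"[statistical summary for: {query}]"

def writer(query: str) -> str:
    return f"[draft report about: {query}]"

AGENT_POOL: dict[str, Callable[[str], str]] = {
    "researcher": researcher,
    "data_analyst": data_analyst,
    "writer": writer,
}

def manager(task: str, plan: list[str]) -> list[str]:
    """Decompose a task into a plan and delegate each step to a specialist."""
    results = []
    for agent_name in plan:
        results.append(AGENT_POOL[agent_name](task))
    return results

outputs = manager("employee survey themes",
                  ["researcher", "data_analyst", "writer"])
print(outputs[0])  # [research notes on: employee survey themes]
```

The key design property is that specialists share a uniform call signature, so the manager can plan over agent names without knowing how each one works internally.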

## 🏗️ System Architecture

```mermaid
graph TB
    subgraph Frontend["🖥️ Frontend (React + TypeScript)"]
        UI[UI Components]
        WS[WebSocket Client]
        Store[Zustand Store]
        GraphViz[Neo4j Visualizer]
    end

    subgraph Backend["⚙️ Backend (FastAPI + Python)"]
        API[REST API]
        WSS[WebSocket Server]
        Stages[Stage Implementations]
        Tools[Tool Layer]
    end

    subgraph GraphitiMCP["🧠 Graphiti MCP Server"]
        MCP[MCP Protocol Handler]
        GraphCore[Graphiti Core]
        Episodes[Episode Management]
        Search[Semantic Search]
    end

    subgraph Database["💾 Neo4j Database"]
        Nodes[(Entity Nodes)]
        Edges[(Relationships)]
        Embeddings[(Vector Embeddings)]
    end

    subgraph External["☁️ External Services"]
        OpenAI[OpenAI API<br/>GPT-5]
        DDG[DuckDuckGo<br/>Web Search]
    end

    UI --> Store
    Store --> WS
    WS <--> WSS
    UI --> API
    API --> Stages
    Stages --> Tools
    Tools --> OpenAI
    Tools --> DDG
    Stages --> MCP
    MCP --> GraphCore
    GraphCore --> Nodes
    GraphCore --> Edges
    GraphCore --> Embeddings
    GraphViz --> Nodes

    style Frontend fill:#1E293B,stroke:#475569,color:#fff
    style Backend fill:#1E293B,stroke:#475569,color:#fff
    style GraphitiMCP fill:#1E293B,stroke:#475569,color:#fff
    style Database fill:#1E293B,stroke:#475569,color:#fff
    style External fill:#1E293B,stroke:#475569,color:#fff
```

## 🛠️ Technology Stack

### Frontend

| Technology | Version | Purpose |
|---|---|---|
| React | 19.2.0 | UI framework |
| TypeScript | 5.9.3 | Type safety |
| Vite | 7.2.4 | Build tool & dev server |
| TailwindCSS | 3.4.14 | Styling |
| Framer Motion | 12.23.26 | Animations |
| Zustand | 5.0.9 | State management |
| React Router | 7.11.0 | Navigation |
| `@neo4j-nvl/react` | 1.0.0 | Graph visualization |
| Plotly.js | 3.3.1 | Data visualization |

### Backend

| Technology | Version | Purpose |
|---|---|---|
| Python | 3.10+ | Runtime |
| FastAPI | 0.115.5 | API framework |
| Uvicorn | 0.32.0 | ASGI server |
| OpenAI SDK | 2.9.0+ | LLM integration |
| OpenAI Agents SDK | 0.0.16+ | Multi-agent orchestration |
| Pandas | 2.2.3 | Data processing |
| Pydantic | 2.12.3+ | Data validation |
| WebSockets | 12.0 | Real-time communication |

### Knowledge Graph

| Technology | Version | Purpose |
|---|---|---|
| Graphiti Core | Latest | GraphRAG framework |
| Graphiti MCP Server | Latest | Model Context Protocol |
| Neo4j | 5.26+ | Graph database |

## 🚀 Getting Started

### Prerequisites

Before you begin, ensure you have the following installed:

- **Node.js 20+** and npm (or yarn/pnpm)
- **Python 3.10+** with pip or uv
- **Docker** and Docker Compose (for Neo4j)
- **OpenAI API key** with GPT-5 access

> ⚠️ **Important:** This platform is designed to work with GPT-5 for optimal performance. Lower-tier models may not support all features (reasoning effort, extended thinking, etc.).

### Backend Setup

1. Navigate to the backend directory:

   ```bash
   cd backend
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Configure environment variables:

   ```bash
   cp env.example .env
   ```

   Edit `.env` with your configuration:

   ```bash
   # Required
   OPENAI_API_KEY=sk-your-api-key-here
   OPENAI_MODEL=gpt-5

   # Optional GPT-5 tuning
   OPENAI_REASONING_EFFORT=high
   OPENAI_VERBOSITY=high

   # Session & Data
   SESSION_DIR=./backend/sessions
   DATASET_DIR=./backend/datasets
   LOG_LEVEL=INFO
   ENABLE_AGENT_TRACES=1

   # Graphiti Knowledge Graph
   GRAPHITI_MCP_URL=http://localhost:8000/mcp/
   GRAPHITI_ENABLED=1

   # Neo4j Database
   NEO4J_URI=bolt://localhost:7687
   NEO4J_USER=neo4j
   NEO4J_PASSWORD=your_secure_password
   ```

5. Start the backend server:

   ```bash
   uvicorn app.main:app --host 0.0.0.0 --port 8080 --reload
   ```

The API will be available at `http://localhost:8080`.

### Graphiti MCP & Neo4j Setup

1. Navigate to the Graphiti MCP directory:

   ```bash
   cd graphiti/mcp_server
   ```

2. Start Neo4j and the MCP server with Docker Compose:

   ```bash
   # Using Neo4j (recommended for this project)
   docker compose -f docker/docker-compose-neo4j.yml up -d
   ```

   Or, if you prefer FalkorDB:

   ```bash
   docker compose up -d
   ```

3. Verify the services are running:

   ```bash
   # Check MCP Server health
   curl http://localhost:8000/health

   # Access Neo4j Browser
   open http://localhost:7474
   ```

4. Configure your `.env` file in `mcp_server/`:

   ```bash
   OPENAI_API_KEY=sk-your-api-key-here
   NEO4J_URI=bolt://localhost:7687
   NEO4J_USER=neo4j
   NEO4J_PASSWORD=your_secure_password
   ```

### Frontend Setup

1. Navigate to the frontend directory:

   ```bash
   cd frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Start the development server:

   ```bash
   npm run dev
   ```

The application will be available at `http://localhost:5173`.

### Quick Start (All Services)

For convenience, you can start all three services together:

```bash
# Terminal 1: Neo4j + Graphiti MCP
cd graphiti/mcp_server && docker compose -f docker/docker-compose-neo4j.yml up

# Terminal 2: Backend API
cd backend && uvicorn app.main:app --host 0.0.0.0 --port 8080 --reload

# Terminal 3: Frontend
cd frontend && npm run dev
```

## ⚙️ Configuration

### Environment Variables Reference

| Variable | Required | Default | Description |
|---|---|---|---|
| `OPENAI_API_KEY` | Yes | | Your OpenAI API key |
| `OPENAI_MODEL` | No | `gpt-5` | Model to use for LLM calls |
| `OPENAI_REASONING_EFFORT` | No | `high` | Reasoning depth (low/medium/high) |
| `OPENAI_VERBOSITY` | No | `high` | Response verbosity |
| `SESSION_DIR` | No | `./backend/sessions` | Session storage path |
| `DATASET_DIR` | No | `./backend/datasets` | Dataset storage path |
| `LOG_LEVEL` | No | `INFO` | Logging level |
| `ENABLE_AGENT_TRACES` | No | `1` | Enable detailed agent traces |
| `GRAPHITI_MCP_URL` | No | `http://localhost:8000/mcp/` | Graphiti MCP endpoint |
| `GRAPHITI_ENABLED` | No | `1` | Enable/disable GraphRAG |
| `NEO4J_URI` | No | `bolt://localhost:7687` | Neo4j Bolt URI |
| `NEO4J_USER` | No | `neo4j` | Neo4j username |
| `NEO4J_PASSWORD` | Yes | | Neo4j password |
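A backend might load this configuration roughly as follows. This is a sketch using `os.environ` with the defaults from the table, not the project's actual loader; the variable names and defaults are taken from the reference above.

```python
import os

# Defaults taken from the environment variables reference table.
DEFAULTS = {
    "OPENAI_MODEL": "gpt-5",
    "OPENAI_REASONING_EFFORT": "high",
    "SESSION_DIR": "./backend/sessions",
    "LOG_LEVEL": "INFO",
    "GRAPHITI_ENABLED": "1",
    "NEO4J_URI": "bolt://localhost:7687",
    "NEO4J_USER": "neo4j",
}

def load_config() -> dict:
    """Merge environment variables over defaults; require the secrets."""
    cfg = {key: os.environ.get(key, default)
           for key, default in DEFAULTS.items()}
    # These two have no safe default and must be provided explicitly.
    for required in ("OPENAI_API_KEY", "NEO4J_PASSWORD"):
        value = os.environ.get(required)
        if not value:
            raise RuntimeError(f"{required} must be set")
        cfg[required] = value
    return cfg

# Example (requires OPENAI_API_KEY and NEO4J_PASSWORD in the environment):
#   cfg = load_config()
#   print(cfg["NEO4J_URI"])
```

Failing fast on the two required secrets at startup is generally preferable to discovering a missing key mid-session.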

### Graphiti MCP Configuration

The Graphiti MCP server can be configured via `config.yaml`:

```yaml
server:
  transport: "http"  # Options: http, stdio

llm:
  provider: "openai"  # Options: openai, anthropic, gemini, groq
  model: "gpt-5"

database:
  provider: "neo4j"  # Options: neo4j, falkordb
  providers:
    neo4j:
      uri: "${NEO4J_URI:bolt://localhost:7687}"
      username: "${NEO4J_USER:neo4j}"
      password: "${NEO4J_PASSWORD}"

graphiti:
  entity_types:
    - name: "Preference"
      description: "User preferences, choices, opinions"
    - name: "Organization"
      description: "Companies, institutions, groups"
    - name: "Topic"
      description: "Subjects of conversation or interest"
```
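The `${VAR:default}` placeholders in the snippet above resolve to the environment variable when it is set and fall back to the default otherwise. A resolver for that syntax might look like this; note this is a sketch of the common convention, assuming Graphiti's substitution rules match it.

```python
import os
import re

# Matches ${NAME} or ${NAME:default}
_PLACEHOLDER = re.compile(
    r"\$\{(?P<name>[A-Z0-9_]+)(?::(?P<default>[^}]*))?\}"
)

def resolve(value: str) -> str:
    """Expand ${VAR:default}-style placeholders from the environment."""
    def _sub(match: re.Match) -> str:
        name = match.group("name")
        default = match.group("default") or ""
        return os.environ.get(name, default)
    return _PLACEHOLDER.sub(_sub, value)

# Falls back to the default when NEO4J_URI is not set in the environment.
print(resolve("${NEO4J_URI:bolt://localhost:7687}"))
```

This is why `password: "${NEO4J_PASSWORD}"` has no default: a missing password resolves to an empty string and the connection fails loudly rather than silently using a guessable value.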

## 📡 API Reference

### REST Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | Health check |
| POST | `/session/new` | Create a new session |
| DELETE | `/session/{id}` | Delete a session |
| POST | `/session/{id}/reset` | Reset a session |
| GET | `/session/{id}/suggested_tasks` | Get AI-suggested tasks |
| POST | `/datasets` | Upload a dataset |
| GET | `/datasets` | List all datasets |
| GET | `/api/graph/status` | Check knowledge graph status |
| GET | `/api/graph/data` | Get graph visualization data |
| GET | `/api/agents/pool` | Get available agents |
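As an illustration, the endpoints above can be addressed with Python's standard library. The sketch below assumes the backend is running on port 8080, as in the setup section; it only builds the requests and does not send them.

```python
import urllib.request

BASE = "http://localhost:8080"

def build_request(method: str, path: str) -> urllib.request.Request:
    """Construct (but do not send) a request against the platform API."""
    return urllib.request.Request(BASE + path, method=method)

health = build_request("GET", "/health")
new_session = build_request("POST", "/session/new")

# Sending a request requires the backend to be up:
#   with urllib.request.urlopen(health) as resp:
#       print(resp.status)

print(health.full_url)  # http://localhost:8080/health
```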

### WebSocket Protocol

Connect to `/ws/{session_id}` for real-time communication:

```typescript
// Message types
type WSMessage = {
  type: "start" | "continue" | "cancel";
  task?: string;
  stage?: number;
  reasoning_effort?: string;
};

// Response events
type WSEvent = {
  type: "text" | "thinking" | "tool_call" | "file" | "error" | "done";
  content?: string;
  metadata?: object;
};
```
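On the wire these are JSON objects. A client-side sketch of building a `start` message and dispatching response events might look like this (Python for brevity; the field names follow the types above, and the dispatch logic is a hypothetical simplification):

```python
import json

def make_start_message(task: str, stage: int,
                       reasoning_effort: str = "high") -> str:
    """Serialize a WSMessage of type 'start' for /ws/{session_id}."""
    return json.dumps({
        "type": "start",
        "task": task,
        "stage": stage,
        "reasoning_effort": reasoning_effort,
    })

def handle_event(raw: str) -> str:
    """Minimal dispatch over the WSEvent 'type' field."""
    event = json.loads(raw)
    if event["type"] == "text":
        return event.get("content", "")
    if event["type"] == "error":
        raise RuntimeError(event.get("content", "unknown error"))
    return ""  # thinking / tool_call / file / done handled elsewhere

msg = make_start_message("Summarize interview themes", stage=4)
print(json.loads(msg)["type"])  # start
```

Because responses stream as a sequence of typed events, a client accumulates `text` events until it receives `done`, rather than waiting for a single response body.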

## 🤝 Contributing

We welcome contributions that align with the research goals of this project. See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.

> ⚠️ **Note:** This is an academic research project. All contributions must comply with the licensing terms and academic integrity requirements.


## 📄 License

© 2024-2025 Ali Amaan. All Rights Reserved.

This project is proprietary software developed as part of academic research at Aalto University. See [LICENSE](LICENSE) for full terms.

Restrictions:

- ❌ No redistribution without explicit permission
- ❌ No commercial use
- ❌ No derivative works without attribution
- ✅ Academic citation permitted with proper attribution

## 🎓 Academic Attribution

If you reference this work in academic publications, please use the following citation:

```bibtex
@mastersthesis{amaan2025graphrag,
  author       = {Ali Amaan},
  title        = {Enhancing Qualitative Organizational Analysis through Integrated GraphRAG and Multi-Agent AI Systems},
  school       = {Aalto University School of Business},
  year         = {2025},
  type         = {Master's Thesis},
  address      = {Espoo, Finland},
  month        = {December},
  note         = {International Design Business Management (IDBM) Programme}
}
```

Built with ❤️ at Aalto University School of Business
