A comprehensive AI-powered system that combines N8N's visual workflow builder with PostgreSQL's vector database capabilities and local Ollama models to create intelligent agents and document processing workflows.
This project demonstrates the integration of three powerful technologies:
- N8N: Visual AI workflow builder for creating sophisticated AI agents and processing pipelines
- PostgreSQL: Vector database for efficient document storage and semantic search
- Ollama: Local LLM runtime for privacy-focused AI processing
The flows/ directory contains pre-built N8N workflows for various AI tasks:
- File: Document Q&A.json
  - Purpose: Question-answering system for uploaded documents
  - Components: Chat input, prompt templates, Ollama model integration
  - Use Case: Interactive document querying with natural language
- File: Docling Processing.json
  - Purpose: Advanced document parsing and processing using Docling
  - Components: Document ingestion, text splitting, export functionality
  - Use Case: Structured data extraction from various document formats
- File: News and Web Search.json
  - Purpose: AI-powered web content aggregation and analysis
  - Components: News search, web scraping, agent-based processing
  - Use Case: Real-time information gathering and summarization
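Because each workflow is exported as plain JSON, it can be inspected programmatically before importing. A minimal Python sketch (the top-level `nodes` key and its `name`/`type` fields are assumptions about the n8n export format):

```python
import json

def list_nodes(path):
    """Return (name, type) pairs for every node in an exported workflow file."""
    with open(path, encoding="utf-8") as f:
        workflow = json.load(f)
    # n8n exports are assumed to keep their nodes under a top-level "nodes" key
    return [(n.get("name"), n.get("type")) for n in workflow.get("nodes", [])]
```

Running this against a file such as `flows/Document Q&A.json` gives a quick overview of which components a workflow wires together.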
- Visual Workflow Builder: Drag-and-drop interface for creating AI pipelines
- Multi-Agent Support: Orchestrate complex AI interactions
- Component Library: Pre-built components for common AI tasks
- API Deployment: Export workflows as RESTful endpoints
- Code Access: Full Python customization capabilities
- Vector Extension: pgvector for high-dimensional similarity search
- Document Storage: Efficient storage of document embeddings
- Semantic Search: Fast cosine similarity queries
- Scalable Architecture: IVFFlat indexes for production performance
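The IVFFlat index and cosine-similarity queries above can be sketched in standard pgvector syntax (the `lists` value and the `:query_embedding` parameter name are illustrative, not this project's actual settings):

```sql
-- Approximate nearest-neighbour index; tune 'lists' to the table size
CREATE INDEX ON document_chunks
    USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100);

-- Five most similar chunks; <=> is pgvector's cosine-distance operator
SELECT content, 1 - (embedding <=> :query_embedding) AS similarity
FROM document_chunks
ORDER BY embedding <=> :query_embedding
LIMIT 5;
```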
- Local Models: Run LLMs locally without API costs
- Privacy First: No data sent to external services
- Model Flexibility: Support for Llama, Mistral, and custom models
- Embedding Generation: Snowflake Arctic Embed for semantic search
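The embedding step can be sketched in a few lines of Python, assuming Ollama's local `/api/embeddings` endpoint and the model name pulled during setup; the cosine helper mirrors what pgvector computes on the database side:

```python
import json
import math
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # assumed default Ollama address

def embed(text, model="snowflake-arctic-embed2"):
    """Request an embedding vector from a locally running Ollama server."""
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/embeddings",
        data=json.dumps({"model": model, "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine_similarity(a, b):
    """Cosine similarity between two vectors, as used for semantic ranking."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```

`embed()` requires a running Ollama server; `cosine_similarity()` is pure and can be tested standalone.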
```
ai_workflows/
├── flows/                        # N8N workflow definitions
│   ├── Docling Processing.json   # Document processing pipeline
│   ├── Document Q&A.json         # Question-answering workflow
│   └── News and Web Search.json  # Web content aggregation
├── postgres/                     # Database schema and setup
└── ollama/                       # Ollama Docker files
```
The PostgreSQL database includes optimized tables for document storage and retrieval:
```sql
-- Documents table with vector embeddings
CREATE TABLE documents (
    id SERIAL PRIMARY KEY,
    filename TEXT NOT NULL,
    content TEXT NOT NULL,
    metadata JSONB NOT NULL,
    embedding vector(1024),  -- snowflake-arctic-embed2 output dimension
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

-- Document chunks for improved retrieval
CREATE TABLE document_chunks (
    id SERIAL PRIMARY KEY,
    document_id INTEGER REFERENCES documents(id) ON DELETE CASCADE,
    chunk_index INTEGER NOT NULL,
    content TEXT NOT NULL,
    embedding vector(1024) NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
```

- Multi-format Support: PDF, text, and web content processing
- Intelligent Chunking: Optimal text segmentation for embedding
- Metadata Extraction: Automatic document metadata capture
- Vector Embeddings: High-quality semantic representations
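The chunking step above can be sketched as a simple overlapping splitter (the size and overlap values are illustrative defaults, not the workflow's actual settings):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping chunks so context survives chunk borders."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk repeats the last `overlap` characters of its predecessor, so a sentence cut at a boundary still appears whole in at least one chunk before embedding.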
- Natural Language Understanding: Advanced query comprehension
- Contextual Responses: Relevant answer generation
- Multi-turn Conversations: Maintained conversation state
- Tool Integration: External API and database access
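Contextual response generation typically stitches retrieved chunks and prior conversation turns into a single prompt before it reaches the model; a minimal sketch (the template wording is illustrative, not the workflow's actual prompt):

```python
def build_prompt(question, chunks, history=()):
    """Assemble a retrieval-augmented prompt from chunks and prior turns."""
    context = "\n\n".join(chunks)
    turns = "\n".join(f"{role}: {text}" for role, text in history)
    parts = ["Answer using only the context below.", f"Context:\n{context}"]
    if turns:
        parts.append(f"Conversation so far:\n{turns}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)
```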
- Intuitive Upload: Drag-and-drop document ingestion
- Real-time Search: Instant semantic query results
- Responsive Design: Mobile-friendly interface
- Progress Tracking: Visual feedback for processing status
- PostgreSQL 14+ with pgvector extension
- Node.js 18+ (for N8N, if running outside Docker)
- Ollama installed locally
- Docker (optional, for containerized deployment)
- Clone the repository

  ```bash
  git clone <repository-url>
  cd ai_workflows
  ```

- Set up PostgreSQL

  ```bash
  # Create database
  createdb vectordb
  # Enable vector extension
  psql vectordb -c "CREATE EXTENSION IF NOT EXISTS vector;"
  ```

- Install Ollama

  ```bash
  # Download from https://ollama.ai
  # Pull required models
  ollama pull llama3.2
  ollama pull snowflake-arctic-embed2
  ```

- Install N8N

- Configure environment variables

  ```bash
  DATABASE_URL=postgresql://postgres:postgres@localhost/vectordb
  OLLAMA_HOST=http://localhost:11434
  ```

- Start PostgreSQL

  ```bash
  # Using Docker
  docker-compose up -d postgres
  ```

- Start Ollama

  ```bash
  ollama serve  # Runs at http://localhost:11434
  ```

- Launch N8N

  ```bash
  docker compose up -d
  # Access at http://localhost:5678
  ```
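Helper scripts can consume the same two environment variables; a minimal sketch, assuming these variable names and the local defaults shown in the configuration step:

```python
import os

def load_config(env=os.environ):
    """Read connection settings, falling back to the local defaults."""
    return {
        "database_url": env.get(
            "DATABASE_URL", "postgresql://postgres:postgres@localhost/vectordb"
        ),
        "ollama_host": env.get("OLLAMA_HOST", "http://localhost:11434"),
    }
```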
- Workflows expose RESTful endpoints automatically
- Ollama provides local model inference
- Real-time processing with streaming responses
- Error handling and retry mechanisms
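The retry behaviour can be sketched as a small wrapper around any workflow call (the attempt count and backoff values are illustrative):

```python
import time

def with_retries(fn, attempts=3, delay=0.5, backoff=2.0):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay *= backoff
```

A webhook invocation would be wrapped as `with_retries(lambda: call_workflow(payload))`, so transient network failures do not surface to the caller.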
```bash
# Build and run all services
docker-compose up -d
```

- Database Scaling: Connection pooling and read replicas
- Caching Strategy: Redis for session and result caching
- Load Balancing: Nginx for traffic distribution
- Monitoring: Application performance and error tracking
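A result cache along the lines of the caching strategy above can be sketched in-process before reaching for Redis (the TTL value is illustrative):

```python
import time

class TTLCache:
    """Tiny time-bounded cache for query results (stand-in for Redis)."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # evict stale entries lazily on read
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```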
- Multi-modal Processing: Image and audio document support
- Advanced Analytics: Usage metrics and insights
- Workflow Templates: Pre-built industry-specific solutions
- Collaboration Features: Multi-user workflow sharing
- Enhanced Security: Authentication and authorization
Built with ❤️ using N8N, PostgreSQL, and Ollama