# Montscan

Automated scanner document processor with Vision AI, AI naming, and Nextcloud upload! ✨

## Features
- 📡 **FTP Server** - Receives documents from network scanners
- 🖼️ **Vision AI Processing** - Analyzes scanned documents using Ollama vision models
- 🤖 **AI-Powered Naming** - Generates descriptive filenames in French using Ollama
- ☁️ **Nextcloud Integration** - Automatically uploads processed documents via WebDAV
- 🎨 **Colorful CLI** - Beautiful startup banner with configuration overview
- 🐳 **Docker Support** - Easy deployment with Docker Compose
## Requirements

- Python 3.14+
- Poppler - for PDF-to-image conversion
- Ollama (https://ollama.ai/) with a vision model (e.g., `llava`, `llama3.2-vision`)
- Nextcloud instance (optional) - for cloud storage integration
## Installation

1. Clone the repository

   ```shell
   git clone https://github.com/SystemVll/Montscan.git
   cd montscan
   ```

2. Install dependencies using uv

   ```shell
   pip install uv
   uv sync
   ```

3. Install Poppler

   - Windows: Download from GitHub Releases
   - Linux: `sudo apt-get install poppler-utils`
   - macOS: `brew install poppler`

4. Set up Ollama with a vision model

   ```shell
   # Install Ollama from https://ollama.ai/
   ollama pull llava  # or any other vision-capable model
   ```
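The vision step talks to Ollama over HTTP. As an illustrative sketch (not Montscan's actual code; the function name and prompt are hypothetical), a request to Ollama's `/api/generate` endpoint with a base64-encoded page image could be built like this:

```python
import base64

# Default Ollama endpoint (matches the OLLAMA_HOST default below)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_vision_request(model: str, image_bytes: bytes, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    Vision models accept pages as base64-encoded images in the "images"
    field; "stream": False requests a single JSON response.
    """
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

# Example (stand-in bytes; real code would pass a rendered PDF page):
payload = build_vision_request(
    "llava",
    b"\x89PNG...",
    "Décris ce document en une courte phrase, en français.",
)
# POST `payload` as JSON to OLLAMA_URL to get the model's description.
```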
## Configuration

| Variable | Description | Default |
|---|---|---|
| `FTP_HOST` | FTP server host address | `0.0.0.0` |
| `FTP_PORT` | FTP server port | `21` |
| `FTP_USERNAME` | FTP authentication username | Must be set |
| `FTP_PASSWORD` | FTP authentication password | Must be set |
| `FTP_UPLOAD_DIR` | Local directory for uploaded files | `./scans` |
| `NEXTCLOUD_URL` | Nextcloud instance URL | - |
| `NEXTCLOUD_USERNAME` | Nextcloud username | - |
| `NEXTCLOUD_PASSWORD` | Nextcloud password | - |
| `NEXTCLOUD_UPLOAD_PATH` | Upload path in Nextcloud | `/Documents/Scanned` |
| `OLLAMA_HOST` | Ollama service URL | `http://localhost:11434` |
| `OLLAMA_MODEL` | Ollama vision model to use | `ministral-3:3b-instruct-2512-q4_K_M` |
## Usage

```shell
# Activate virtual environment (if using uv)
source .venv/bin/activate  # Linux/macOS
.venv\Scripts\activate     # Windows

# Run the application
python src/main.py
```

You should see a colorful startup banner:
```
╔════════════════════════════════════════════════════════════════════╗
║       🖨️  MONTSCAN - Scanner Document Processing System  🖨️        ║
╚════════════════════════════════════════════════════════════════════╝
📡 FTP Server Configuration:
├─ Host: 0.0.0.0
├─ Port: 21
├─ Username: your-username
└─ Upload Directory: /path/to/scans
☁️ Nextcloud Integration:
└─ URL: https://your-nextcloud.com
🤖 AI Processing (Ollama):
└─ Host: http://localhost:11434
══════════════════════════════════════════════════════════════════════
✅ All systems initialized - Ready to process documents!
══════════════════════════════════════════════════════════════════════
🚀 Server is now running! Press Ctrl+C to stop.
```
## Scanner Setup

- Configure your network scanner to send scans via FTP
- Set the FTP server address to your Montscan instance
- Use the credentials from your `.env` file
- Scan a document - it will be automatically processed!
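Each scan is then renamed from the vision model's French description. Montscan's actual naming logic isn't shown here; as a hypothetical sketch, turning a description into a filesystem-safe name could look like this:

```python
import re
import unicodedata

def to_safe_filename(description: str, ext: str = ".pdf") -> str:
    """Turn an AI-generated French description into a safe filename:
    strip accents, drop unsafe characters, join words with dashes."""
    # Decompose accented characters (é -> e + combining accent),
    # then drop the non-ASCII combining marks
    ascii_text = (
        unicodedata.normalize("NFKD", description)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    words = re.findall(r"[A-Za-z0-9]+", ascii_text)
    return "-".join(words).lower() + ext

print(to_safe_filename("Facture d'électricité, mars 2024"))
# -> facture-d-electricite-mars-2024.pdf
```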
## Docker Deployment

### Using Docker Compose

1. Update environment variables in `docker-compose.yml`

2. Build and start the container

   ```shell
   docker-compose up -d
   ```

3. View logs

   ```shell
   docker-compose logs -f
   ```

4. Stop the container

   ```shell
   docker-compose down
   ```
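For reference, a minimal `docker-compose.yml` matching the settings above might look like this (the service layout is illustrative; adapt it to the repository's actual compose file):

```yaml
services:
  montscan:
    build: .
    ports:
      - "21:21"          # FTP control port
    volumes:
      - ./scans:/app/scans
    env_file:
      - .env
```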
### Building Manually

```shell
# Build the image
docker build -t montscan .

# Run the container
docker run -d \
  -p 21:21 \
  -v ./scans:/app/scans \
  --env-file .env \
  --name montscan \
  montscan
```

## Troubleshooting

**Scanner cannot connect to the FTP server**

- Solution: Check that the FTP port (default 21) is not blocked by a firewall
- On Windows, you may need to allow Python through the firewall
**AI processing fails**

- Solution: Verify Ollama is running and a vision model is downloaded
- Test with `ollama list` and ensure you have a vision-capable model (e.g., `ministral-3:3b-instruct-2512-q4_K_M`, `llama3.2-vision`)
**Nextcloud upload fails**

- Solution: Check Nextcloud credentials and URL
- Ensure the upload path exists in Nextcloud
- Verify WebDAV is enabled on your Nextcloud instance
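When debugging uploads, it can help to test the WebDAV endpoint directly. Nextcloud serves user files under `remote.php/dav/files/<username>/`; a small sketch (function name is illustrative, not Montscan's code):

```python
from urllib.parse import quote

def webdav_url(base_url: str, username: str, remote_path: str) -> str:
    """Build the Nextcloud WebDAV URL for a file.

    Nextcloud exposes each user's files under remote.php/dav/files/<user>/.
    """
    path = quote(remote_path.strip("/"))
    return f"{base_url.rstrip('/')}/remote.php/dav/files/{username}/{path}"

# Uploading is then a single authenticated PUT, e.g. with requests:
# import requests
# with open(local_file, "rb") as fh:
#     requests.put(
#         webdav_url(url, user, "Documents/Scanned/facture.pdf"),
#         data=fh,
#         auth=(user, password),
#     ).raise_for_status()
```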
**PDF conversion fails**

- Solution: Install Poppler and ensure it's in your system PATH
- Windows: Add Poppler's `bin` folder to the PATH environment variable
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- pyftpdlib - Python FTP server library
- Ollama - Local AI vision model runner
- Nextcloud - Self-hosted cloud storage
- pdf2image - PDF-to-image conversion
For questions or support, please open an issue on GitHub.
