- View running models
- List all installed models
- Pull new models
- Update models (manually via the update button)
- Delete old models
- Chat / generate with any model
```bash
bash start.sh [options]
```

Options:

- `-h, --host HOST` - Ollama server host (default: `localhost`)
- `-p, --port PORT` - Ollama server port (default: `11434`)
- `--help` - Show help message
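As a rough sketch, the flag handling above could be implemented inside `start.sh` along these lines (the `HOST`/`PORT` variable names and error handling are assumptions, not taken from the actual script):

```shell
# Hypothetical sketch of start.sh option parsing; variable names are assumed.
HOST=localhost   # default Ollama server host
PORT=11434       # default Ollama server port

while [ $# -gt 0 ]; do
  case "$1" in
    -h|--host) HOST="$2"; shift 2 ;;   # override server host
    -p|--port) PORT="$2"; shift 2 ;;   # override server port
    --help)    echo "Usage: bash start.sh [-h HOST] [-p PORT]"; exit 0 ;;
    *)         echo "Unknown option: $1" >&2; exit 1 ;;
  esac
done

echo "Using Ollama server at $HOST:$PORT"
```

With no arguments the defaults are kept, so the script targets `localhost:11434`.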
The script will:
- Create a Python virtual environment (if it doesn't exist)
- Install dependencies from requirements.txt
- Start the UI server on http://localhost:5000
```bash
docker-compose up -d --build
```

Change the environment variables in `docker-compose.yaml` if needed:

- `OLLAMA_URL` - URL of the Ollama server (default: `http://localhost:11434`)
- `LOG_LEVEL` - Logging verbosity (default: `INFO`)
  - Options: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
  - `INFO` is recommended for production (shows important operations without noise)
  - `DEBUG` is for troubleshooting (shows all operations, including frequent fetches)
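For reference, a minimal `docker-compose.yaml` with these variables might look like the following (the service name, build context, and port mapping are assumptions, not taken from the actual file):

```yaml
# Hypothetical compose file; service name and build context are assumed.
services:
  ollama-ui:
    build: .
    ports:
      - "5000:5000"                           # UI served at http://localhost:5000
    environment:
      - OLLAMA_URL=http://localhost:11434     # Ollama server to connect to
      - LOG_LEVEL=INFO                        # DEBUG|INFO|WARNING|ERROR|CRITICAL
```

Note that inside a container, `localhost` refers to the container itself; if Ollama runs on the host machine, `OLLAMA_URL` typically needs to point at the host's address (e.g. `host.docker.internal` on Docker Desktop) instead.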
Access the UI at http://localhost:5000
Feel free to open issues or submit pull requests.
