A web-based tool for backing up, restoring, and synchronizing Dispatcharr configuration between instances. Perfect for migrating from a development/test instance to production, creating backups, or keeping multiple Dispatcharr installations in sync.
- Backup: Create a complete backup of your Dispatcharr configuration to a downloadable ZIP file
- Restore: Restore a previously created backup to a Dispatcharr instance
- Sync: Two-way synchronization between a source and destination instance with granular control over what gets synced
- Job Scheduler: Automate recurring backups and syncs with flexible scheduling options
- Backup Retention: Automatically clean up old backups, keeping only the last X backups per schedule
- Connection Management: Save and manage multiple Dispatcharr instance connections
- Job Tracking: Real-time progress monitoring with detailed logs for all operations
- Dry Run Mode: Preview what changes would be made before committing them
- Dark Mode: Light, dark, or auto theme (follows system preference)
- Update Notifications: Automatic check for new releases with dismissible banner
| Category | Export | Import | Sync |
|---|---|---|---|
| Channel Groups | Yes | Yes | Yes |
| Channel Profiles | Yes | Yes | Yes |
| Channels | Yes | Yes | Yes |
| M3U Sources | Yes | Yes | Yes |
| Stream Profiles | Yes | Yes | Yes |
| User Agents | Yes | Yes | Yes |
| Core Settings | Yes | Yes | Yes |
| EPG Sources | Yes | Yes | Yes |
| Plugins | Yes | Yes | Yes |
| DVR Rules | Yes | Yes | Yes |
| Comskip Config | Yes | Yes | Yes |
| Users | Yes | Yes | Yes |
| Logos | Yes | Yes | Yes |
DBAS runs entirely in Docker containers; no local installation is required. It can run on any machine with network access to your Dispatcharr instances (e.g., a home server, a NAS, or a machine other than the one running Dispatcharr).
```yaml
services:
  backend:
    image: ghcr.io/motwakorb/dispatcharr-backup-sync-backend:latest
    container_name: dispatcharr-manager-backend
    restart: unless-stopped
    ports:
      - "6002:6002"
    environment:
      - NODE_ENV=production
      - PORT=6002
      - DATA_DIR=/data
    volumes:
      - backend-data:/data
      - ./backups:/data/backup # Optional: mount local directory for backups

  frontend:
    image: ghcr.io/motwakorb/dispatcharr-backup-sync-frontend:latest
    container_name: dispatcharr-manager-frontend
    restart: unless-stopped
    ports:
      - "6001:6001"
    depends_on:
      - backend

volumes:
  backend-data:
```

```shell
docker compose up -d
```

Open http://<docker-host>:6001 in your browser (e.g., http://192.168.1.100:6001 or http://myserver:6001).
```shell
curl http://<docker-host>:6002/health
```

- Navigate to the Settings tab
- Click Create Connection
- Enter your Dispatcharr instance URL (e.g., http://dispatcharr:5000)
- Enter your admin username and password
- Click Test Connection to verify connectivity
- Save the connection for use in Backup, Restore, and Sync operations
- Go to the Backup tab
- Select or enter your source Dispatcharr instance
- Choose which configuration areas to back up
- Optionally enable Dry Run to preview without creating a file
- Click Start Backup
- Once complete, download the ZIP file from the Jobs tab
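The same backup flow can be automated against the documented `/api/export` endpoints (start job, poll status, download). A standard-library sketch; the request payload and the response fields (`jobId`, `status`) and status values are assumptions, not a documented contract:

```python
import json
import time
import urllib.request

TERMINAL_STATES = {"completed", "failed", "cancelled"}  # assumed status values

def is_terminal(status):
    """True once a job can no longer make progress."""
    return status in TERMINAL_STATES

def api(url, payload=None):
    """GET (payload is None) or POST JSON to the DBAS backend."""
    data = None if payload is None else json.dumps(payload).encode()
    req = urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"} if data else {},
        method="POST" if data else "GET",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def run_backup(dbas, payload, poll_seconds=2.0):
    """Start an export job, poll until it finishes, return the download URL."""
    job_id = api(f"{dbas}/api/export", payload)["jobId"]  # assumed response field
    while not is_terminal(api(f"{dbas}/api/export/status/{job_id}").get("status", "")):
        time.sleep(poll_seconds)
    return f"{dbas}/api/export/download/{job_id}"
```

The export payload itself (which connection and which configuration areas) is hypothetical here; inspect a request from the web UI to see the real shape.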
- Go to the Restore tab
- Select or enter your destination Dispatcharr instance
- Upload a previously created backup ZIP file
- Choose which configuration areas to restore
- Optionally enable Dry Run to preview changes
- Click Start Restore
- Go to the Sync tab
- Configure your Source instance (where config comes from)
- Configure your Destination instance (where config goes to)
- Select which configuration areas to sync
- Optionally enable Dry Run to preview changes
- Click Start Sync
The Jobs tab shows:
- Currently running jobs with real-time progress
- Job history with completion status
- Detailed logs for each job (click a job row to view)
- Download links for completed backups
The Schedules tab allows you to automate backups and syncs:
- Click Create Schedule
- Give your schedule a descriptive name
- Select the job type (Backup or Sync)
- Choose the source connection (and destination for sync jobs)
- Configure the schedule frequency:
- Hourly: Run every hour at a specific minute
- Daily: Run once a day at a specific time
- Weekly: Run on a specific day and time each week
- Monthly: Run on a specific day of the month
- Custom: Select multiple days of the week
- Choose which configuration areas to include
- For backup jobs, optionally set a retention count to automatically delete old backups (e.g., keep only the last 5 backups)
- Enable the schedule and save
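Retention pruning amounts to sorting a schedule's backups newest-first and deleting everything past the configured count. A standalone sketch of that idea (not DBAS's actual code; the `*.zip` pattern is an assumption):

```python
from pathlib import Path

def prune_backups(backup_dir, keep, pattern="*.zip"):
    """Delete all but the newest `keep` backups; return the deleted paths."""
    backups = sorted(
        Path(backup_dir).glob(pattern),
        key=lambda p: p.stat().st_mtime,  # newest first
        reverse=True,
    )
    deleted = []
    for old in backups[keep:]:
        old.unlink()
        deleted.append(old)
    return deleted
```

With a retention count of 5, `prune_backups("/data/backup", keep=5)` would remove everything but the five most recent archives.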
Scheduled jobs will run automatically at the configured times. You can:
- Run Now: Manually trigger a scheduled job
- View History: See past runs and their status
- Enable/Disable: Temporarily pause a schedule without deleting it
- Edit: Modify schedule settings at any time
Note: The scheduler uses your configured timezone from the Settings tab.
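DBAS's scheduler internals aren't documented here, but the frequency options above map naturally onto standard cron expressions. A hypothetical helper showing that mapping (illustrative only; cron weekdays run 0 = Sunday through 6 = Saturday):

```python
def to_cron(freq, minute=0, hour=0, day_of_month=1, weekdays=(0,)):
    """Translate the schedule options above into a cron expression."""
    if freq == "hourly":
        return f"{minute} * * * *"
    if freq == "daily":
        return f"{minute} {hour} * * *"
    if freq == "weekly":
        return f"{minute} {hour} * * {weekdays[0]}"
    if freq == "monthly":
        return f"{minute} {hour} {day_of_month} * *"
    if freq == "custom":
        return f"{minute} {hour} * * {','.join(str(d) for d in weekdays)}"
    raise ValueError(f"unknown frequency: {freq}")
```

For example, a daily backup at 02:30 corresponds to `to_cron("daily", minute=30, hour=2)`, i.e. `30 2 * * *`.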
```shell
# Clone the repository
git clone https://github.com/motwakorb/dispatcharr-backup-sync.git
cd dispatcharr-backup-sync

# Build images
docker build -t dispatcharr-backup-sync-backend:latest -f docker/backend.Dockerfile .
docker build -t dispatcharr-backup-sync-frontend:latest -f docker/frontend.Dockerfile .

# Start with local images
docker compose up -d
```

Run development servers in containers (no local Node.js installation required):
```shell
# Backend dev server (port 6002)
docker run --rm -it -v ${PWD}/backend:/app -w /app -p 6002:6002 node:20-alpine sh -c "npm install && npm run dev"

# Frontend dev server (port 6001)
docker run --rm -it -v ${PWD}/frontend:/app -w /app -p 6001:3000 node:20-alpine sh -c "npm install && npm run dev -- --host --port 3000"
```

| Variable | Description | Default |
|---|---|---|
| `NODE_ENV` | Node environment | `production` |
| `PORT` | Backend API port | `6002` |
| `DATA_DIR` | Persistent data directory | `/data` |
| `ALLOW_PRIVATE_URLS` | Allow connections to private IP addresses (192.168.x.x, 10.x.x.x, 172.16-31.x.x) | `true` |
| `TZ` | Timezone for scheduled jobs | `UTC` |
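The ranges listed for `ALLOW_PRIVATE_URLS` are the RFC 1918 private blocks, which Python's `ipaddress` module can classify directly. A sketch of the kind of guard the flag implies (an assumption about DBAS's behavior, not its actual code):

```python
import ipaddress
from urllib.parse import urlparse

def is_private_target(url):
    """True when the URL points at a literal private (RFC 1918) address.

    Note: ip_address(...).is_private also covers loopback and link-local
    addresses, which a real guard would likely treat the same way.
    """
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        return ipaddress.ip_address(host).is_private
    except ValueError:
        # Hostname rather than a literal IP; DNS resolution would be
        # needed to decide, which this sketch does not attempt.
        return False
```

With the default `ALLOW_PRIVATE_URLS=true`, URLs for which such a check returns true (e.g., `http://192.168.1.100:5000`) are still accepted.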
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Backend health check |
| `/api/connections/test` | POST | Test Dispatcharr connection |
| `/api/connections/info` | POST | Get instance info |
| `/api/export` | POST | Start export job |
| `/api/export/status/:jobId` | GET | Get export job status |
| `/api/export/download/:jobId` | GET | Download exported file |
| `/api/import` | POST | Start import job |
| `/api/import/status/:jobId` | GET | Get import job status |
| `/api/sync` | POST | Start sync job |
| `/api/sync/status/:jobId` | GET | Get sync job status |
| `/api/jobs` | GET | List active jobs |
| `/api/jobs/history/list` | GET | List completed jobs |
| `/api/jobs/:jobId/logs` | GET | Get job logs |
| `/api/jobs/:jobId/cancel` | POST | Cancel running job |
| `/api/saved-connections` | CRUD | Manage saved connections |
| `/api/schedules` | CRUD | Manage scheduled jobs |
| `/api/schedules/:id/run` | POST | Trigger manual run |
| `/api/schedules/:id/toggle` | POST | Enable/disable schedule |
| `/api/schedules/:id/history` | GET | Get schedule run history |
| `/api/settings` | GET/PUT | Manage app settings (timezone, time format, theme) |
| `/api/notifications/providers` | CRUD | Manage notification providers |
| `/api/notifications/settings` | GET/PUT | Manage notification settings |
| `/api/info` | GET | Get version info and update availability |
If no comskip configuration exists on the source instance, it will be reported as "skipped" rather than synced. This is expected behavior - there's nothing to sync if no config exists.
- Job state and history are persisted to the `/data` volume
- Saved connections are stored in `/data/connections.json`
- Backup files are stored in `/data/backup/`; mount a local directory (e.g., `./backups:/data/backup`) for easy access to backup files from your host machine
```shell
cd backend && npm test
```

The E2E test suite covers key workflows including navigation, backup/restore, sync, jobs, and schedules.

```shell
# Install dependencies
cd tests && npm install

# Run tests (requires frontend and backend running)
npm test

# Run with UI for debugging
npm run test:ui
```

Run a quick smoke test against a running Docker stack:
```shell
docker run --rm --network dispatcharr-backup-sync_dispatcharr-manager \
  -v ${PWD}/tests:/work -w /work node:20 sh -c "\
  npm install playwright@1.48.2 --no-save --no-package-lock && \
  npx playwright install --with-deps chromium && \
  node smoke.playwright.mjs"
```

- Logo Sync: Full logo sync support between instances
- URL-based logos synced via API reference (no redundant downloads)
- File-based logos properly transferred between instances
- Fixed pagination bug that limited logo sync to 50 items
- Logos correctly assigned to channels after sync
- Sync Refactor: Complete rewrite of sync to use export/import internally
- Guarantees feature parity between backup/restore and sync operations
- Any fix to backup/restore automatically applies to sync
- Reduced codebase by ~1500 lines of duplicated logic
- More reliable and maintainable architecture
- Frontend Error Handling: Added error boundaries and loading state components for better UX
- Performance: Large backup compression now uses worker threads to avoid blocking the event loop
- Job Management: Improved job data cleanup with archiving strategy for jobs older than 30 days
- E2E Testing: Added Playwright test suite for key workflows (navigation, backup/restore, sync, jobs, schedules)
- Dependency Management: Added Dependabot for automated security and dependency updates
- Backend Improvements:
- Environment variable validation at startup
- Request correlation IDs for distributed tracing
- Exponential backoff retry for network operations
- Centralized constants for maintainability
- UI Improvements: Added Logs button to active jobs table, auto-refresh logs modal for running jobs
- Visual Polish: Softer notification colors (sky blue instead of yellow) for plugin mismatch notices
- Button Order Fix: History section buttons now show Download, Logos, then Logs (rightmost)
- Logo Backup/Restore: Full support for backing up and restoring logos
- URL-based logos stored as mappings (no redundant downloads)
- Local file logos downloaded and uploaded as needed
- Improved Performance: Significantly faster backups when most logos are URL-based
- Dark Mode: Light, dark, and auto theme support (follows system preference)
- Version Display: Shows current version in header
- Update Notifications: Automatic check for new releases with dismissible banner
- Job Scheduler: Schedule recurring backup and sync jobs
- Notification System: Alerts via Discord, Email, Slack, and Telegram
- Backup Retention: Automatically clean up old backups
- Job Scheduler: Schedule recurring sync and backup jobs to run automatically (Added in v1.1.0)
- Notification System: Alert on job success or failure via Discord, Email, Slack, and Telegram (Added in v1.1.0)
- Dark Mode: Light, dark, and auto theme support (Added in v1.2.0)
- Version Display & Update Notifications: Show current version and notify when updates are available (Added in v1.2.0)
- Logo Backup/Restore: Backup and restore channel logos (Added in v1.3.0)
- Logo Sync: Sync logos between instances (Added in v1.4.1)
- External Storage Export: Export backups to common filesystems such as SMB shares, NAS shares, or object storage (S3, etc.)
MIT