A highly scalable, production-ready distributed task queue system built with Go and Redis Streams. This project implements several industry-standard patterns, including Priority Queues, Scheduled Tasks, Consumer Groups, and Graceful Shutdown.
- Redis Streams & Consumer Groups: Uses Redis Streams for message persistence and consumer groups for load balancing across multiple worker instances.
- Multi-Level Priority Queues: Support for `high`, `medium`, and `low` priority tasks. High-priority tasks are always processed first.
- Delayed & Scheduled Tasks: Ability to schedule tasks to run at a specific future timestamp using Redis Sorted Sets (ZSETs).
- Dead Letter Queue (DLQ): Tasks that fail after all retry attempts are automatically moved to a DLQ list for later inspection and recovery.
- Graceful Shutdown: Handles OS termination signals (`SIGINT`, `SIGTERM`) so that workers finish active tasks before exiting, preventing data loss.
- Retry Mechanism: Configurable retry logic to handle transient failures automatically.
- Real-time Observability: Built-in `/metrics` HTTP endpoint for monitoring queue length, success rates, and failure statistics.
The system consists of two main components:
- Producer:
  - Exposes a REST API (`/enqueue`) to accept tasks.
  - Routes tasks to the appropriate Redis Stream based on priority.
  - Runs a background Scheduler that promotes delayed tasks from a ZSET to the active Streams when they are due.
- Consumer (Workers):
- Dynamically polls multiple priority streams (High -> Medium -> Low).
- Uses Consumer Groups to ensure each message is processed by only one worker.
- Handles task execution, error logging, retries, and DLQ management.
```mermaid
graph TD
A[Client] -->|POST /enqueue| B(Producer)
B -->|Immediate| C{Priority Router}
B -->|Delayed| D[ZSET Delayed Tasks]
D -->|Scheduler| C
C -->|High| E[task_stream_high]
C -->|Medium| F[task_stream_medium]
C -->|Low| G[task_stream_low]
E & F & G --> H[Worker Group]
H -->|Success| I[Log SUCCESS]
H -->|Fail & Retry| C
H -->|Final Fail| J[Dead Letter Queue]
```
- Go 1.21+
- Redis (running on port 4500 by default, or configured via `.env`)
- Clone the repository.
- Set up your environment:

  ```sh
  cp .env.sample .env  # Update REDIS_URL if necessary
  ```

- Run Redis via Docker:

  ```sh
  docker-compose up -d
  ```
Start the Producer:

```sh
go run cmd/producer/main.go
```

Start the Consumer (Workers):

```sh
go run cmd/consumer/main.go
```

POST http://localhost:8080/enqueue
Payload Examples:

- Standard Task:

  ```json
  {
    "type": "send_email",
    "priority": "high",
    "retries": 3,
    "payload": { "to": "user@example.com", "subject": "Hello!" }
  }
  ```

- Scheduled Task (runs at the given future Unix timestamp):

  ```json
  {
    "type": "resize_image",
    "scheduled_at": 1736920000,
    "payload": { "new_x": 800, "new_y": 600 }
  }
  ```
GET http://localhost:8081/metrics
- Safety: Uses `XREADGROUP` and `XACK` so that a message is acknowledged only after it has been processed, ensuring no message is forgotten.
- Scale: Add more consumer instances to the same consumer group to increase throughput near-linearly.
- Visibility: A dedicated `worker.log` captures the full lifecycle of every task.