NOTE: This worker is for learning purposes only and can serve as a foundation for bigger projects; it is not a complete or drop-in replacement for Laravel's queue workers.
Scaling Laravel Queues with Go: A High-Performance Alternative to php artisan queue:work
The Problem with Traditional Laravel Queue Workers
Laravel's built-in queue system uses php artisan queue:work to process jobs, which operates as a single-threaded process. While this works well for small to medium applications, it presents several performance bottlenecks:
- Single-threaded execution: Only one job can be processed at a time per worker
- Resource inefficiency: Running multiple queue:work processes consumes significant memory and CPU
- Limited concurrency: Scaling requires spawning multiple heavy PHP processes
The Solution: Go Powered Concurrent Workers
To address these limitations, I developed a Go-based queue worker that maintains persistent Laravel processes and distributes jobs efficiently using goroutines. This hybrid approach combines Go's concurrency strengths with Laravel's job-processing capabilities.
Architecture Overview
The system consists of three main components:
- Go Worker Pool: Started with a single PHP process, then scaled to manage multiple concurrent PHP processes
- Custom Laravel Command: queue:run-job accepts jobs via STDIN
- Redis Integration: Pulls jobs from Laravel's queue system
Key Features
Persistent PHP Processes
// Each worker maintains a persistent PHP process
worker.cmd = exec.CommandContext(ctx, "php", "laravel/artisan", "queue:run-job")
The breakthrough came from passing job data to a PHP worker from the pool without waiting for the result: another goroutine listens on the worker's stdout for results instead. This fire-and-forget technique let me execute far more jobs per second. Laravel's own queue worker, by contrast, blocks until a job completes before processing the next one.
Concurrent Job Distribution
// Round-robin job distribution across workers
worker := pool.workers[workerIndex]
workerIndex = (workerIndex + 1) % len(pool.workers)
After proving the concept with a single worker, scaling to multiple concurrent processes was straightforward. Jobs are distributed using a round-robin algorithm across all available workers, with 6 concurrent PHP processes showing excellent performance gains.
Real-time Performance Monitoring
fmt.Printf("Stats => Processed: %d, Failed: %d\n",
	atomic.LoadInt64(&pool.processedJobs),
	atomic.LoadInt64(&pool.failedJobs))
Development Evolution
Phase 1: Single Worker Proof of Concept
The initial implementation used just one persistent PHP process managed by Go. This simple change alone delivered a 10.8x performance improvement (13.99s → 1.29s for 1,000 jobs), thanks to the stdin/stdout separation (fire and forget).
Phase 2: Concurrent Worker Pool
Building on the single-worker success, I implemented concurrent processing with 6 PHP workers. This delivered an additional 1.8x improvement (11.37s → 6.41s for 10,000 jobs), demonstrating excellent scaling characteristics.
class RunJob extends Command
{
    protected $signature = 'queue:run-job';
    protected $description = 'Run jobs continuously from STDIN (fed by Go pool)';

    public function handle(Container $container, RedisQueue $redisQueue)
    {
        $stdin = fopen('php://stdin', 'r');

        // Block on STDIN: the Go pool writes one JSON-encoded job per line
        while (($line = fgets($stdin)) !== false) {
            $data = json_decode(trim($line), true);
            // Process job using Laravel's worker infrastructure
        }
    }
}
This approach maintains full compatibility with Laravel's job system while enabling concurrent processing.
Example Job Implementation
class ProcessPodcast implements ShouldQueue
{
    use Queueable;

    public $timeout = 5;

    // The payload is injected when the job is dispatched
    public function __construct(public array $payload = []) {}

    public function handle(): void
    {
        // Report the result back to the Go pool on STDOUT
        echo json_encode([
            'uuid' => $this->payload['uuid'] ?? null,
            'status' => 'done',
            'success' => true,
        ]) . "\n";
        fflush(STDOUT);
    }
}
Jobs remain unchanged and work exactly as they would with standard Laravel queue workers.
Performance Results
Benchmark Comparison
The performance improvements are dramatic when compared to traditional Laravel queue workers:
1,000 Jobs Test
- Traditional Laravel worker: 13.99 seconds
- Go worker (single PHP process): 1.29 seconds
- Performance improvement: 10.8x faster
10,000 Jobs Test
- Traditional Laravel worker: 139.71 seconds (2 minutes 20 seconds)
- Go worker (single PHP process): 11.37 seconds
- Go worker (6 concurrent PHP processes): 6.41 seconds
- Performance improvement: 21.8x faster with 6 workers
Scaling Analysis
The results show clear performance scaling patterns:
- Single Process Optimization: Even with just one PHP process, the Go wrapper achieved a ~10x performance improvement
- Concurrency Scaling: Adding 6 concurrent PHP workers nearly doubled performance again (11.37s → 6.41s)
- Linear Scaling Potential: The 6-worker configuration suggests near-linear scaling with worker count
Note: These benchmarks focused purely on execution time. CPU usage and memory consumption comparisons were not measured in this initial testing phase, which would be important metrics for production evaluation.
Technical Implementation Details
Worker Pool Management
func NewConcurrentWorkerPool(workerCount int) *ConcurrentWorkerPool {
	pool := &ConcurrentWorkerPool{}

	// Start one persistent Laravel process per worker slot
	for i := 0; i < workerCount; i++ {
		worker := pool.startLaravelWorker(i)
		pool.workers = append(pool.workers, worker)
	}
	return pool
}
Job Timing and Metrics
type JobResult struct {
	JobID      string
	Success    bool
	Error      error
	Output     string
	Duration   time.Duration
	DurationMs int64
}
Use Cases and Applications
When to Use This Approach
- High-volume queue processing: Applications with thousands of jobs per minute
- Development and testing: Learning concurrent programming concepts
When to Stick with Traditional Workers
- Small applications: Low job volume doesn't justify the complexity
- Simple deployments: Standard Laravel hosting without custom infrastructure
- Team familiarity: When team expertise lies primarily in PHP
Future Possibilities
This experiment opens several interesting avenues for Laravel scaling:
- Production ready implementation: Enhanced error handling, graceful shutdowns, and monitoring
- Dynamic scaling: Auto-adjust worker count based on queue depth
- Multi-queue support: Handle different queue types with specialized workers
- Distributed processing: Extend across multiple servers for massive scale
Key Takeaways
- Laravel's per-process bootstrap overhead is the real bottleneck: persistent processes alone delivered a 10x improvement
- Concurrency scales well: Near linear performance gains with additional workers
- Simple is powerful: Even the single worker implementation drastically outperforms traditional approaches
- Incremental adoption: Start with one persistent worker, scale as needed
While this started as a learning experiment, the dramatic performance improvements suggest real production potential. The approach maintains Laravel's familiar job-processing patterns (not all of them: error handling, re-queueing, and timeout management are still missing) while delivering performance that rivals dedicated queue systems.
This project showcases how different programming languages can complement each other, with Go handling the concurrency challenges while Laravel manages the business logic and job processing.