Go Concurrency Patterns in Practice
Real-world patterns for building fast and reliable concurrent systems in Go using goroutines, channels, and contexts.
August 13, 2025
Go's concurrency model, built around goroutines and channels, offers powerful primitives for building scalable systems. Drawing on patterns used in production at companies like Netflix and Google, here are the essential concurrency patterns for building fast, reliable Go services in 2025.
Worker Pool Pattern
The worker pool pattern is essential for controlling resource usage and preventing system overload. Instead of spawning unlimited goroutines, you maintain a fixed pool of workers.
func WorkerPool(jobs <-chan Job, results chan<- Result, numWorkers int) {
    var wg sync.WaitGroup

    // Start fixed number of workers
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobs {
                result := processJob(job)
                results <- result
            }
        }()
    }

    wg.Wait()
    close(results)
}
Best Practices for 2025:
- For CPU-bound work, use runtime.NumCPU() workers (see the sizing sketch after this list)
- For I/O-bound work, benchmark to find the optimal pool size (typically 10-100x the number of CPU cores)
- Use buffered channels for smooth async flow: make(chan Job, numWorkers*2)
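As a rough sketch of how these guidelines fit together, here is one way a caller might wire up the WorkerPool function above. The sizing is a starting point rather than a rule, and pendingJobs and handleResult are placeholder names:

func runJobs(pendingJobs []Job) {
    numWorkers := runtime.NumCPU()        // CPU-bound: roughly one worker per core
    jobs := make(chan Job, numWorkers*2)  // buffered so the producer rarely blocks
    results := make(chan Result, numWorkers*2)

    // Producer: feed jobs, then close the channel so workers can exit
    go func() {
        defer close(jobs)
        for _, j := range pendingJobs {
            jobs <- j
        }
    }()

    // WorkerPool closes results once every worker has finished
    go WorkerPool(jobs, results, numWorkers)

    for r := range results {
        handleResult(r) // placeholder consumer
    }
}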
Context-Driven Cancellation
Context propagation is fundamental to building resilient systems. Google's internal guidelines require context as the first parameter in every function on the request path.
func FetchWithTimeout(ctx context.Context, urls []string) ([]Result, error) {
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()

    g, ctx := errgroup.WithContext(ctx)
    results := make([]Result, len(urls))

    for i, url := range urls {
        i, url := i, url // capture loop variables (unnecessary as of Go 1.22)
        g.Go(func() error {
            req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
            if err != nil {
                return err
            }

            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                return err
            }
            defer resp.Body.Close()

            // Process response...
            results[i] = processResponse(resp)
            return nil
        })
    }

    return results, g.Wait()
}
Production Impact: Netflix's engineering team reported a 30% reduction in tail latency after implementing proper context cancellation for downstream requests.
Errgroup for Structured Concurrency
Errgroup provides first-error cancellation and simplified coordination. It's particularly powerful for parallel data fetching where any failure should cancel remaining work.
import "golang.org/x/sync/errgroup" func ParallelDataFetch(ctx context.Context, ids []string) (*CombinedResult, error) { g, ctx := errgroup.WithContext(ctx) var userProfile UserProfile var preferences Preferences var analytics Analytics // Fetch user profile g.Go(func() error { profile, err := fetchUserProfile(ctx, ids[0]) if err != nil { return fmt.Errorf("user profile: %w", err) } userProfile = profile return nil }) // Fetch preferences g.Go(func() error { prefs, err := fetchPreferences(ctx, ids[0]) if err != nil { return fmt.Errorf("preferences: %w", err) } preferences = prefs return nil }) // Fetch analytics g.Go(func() error { data, err := fetchAnalytics(ctx, ids[0]) if err != nil { return fmt.Errorf("analytics: %w", err) } analytics = data return nil }) if err := g.Wait(); err != nil { return nil, err } return &CombinedResult{ Profile: userProfile, Preferences: preferences, Analytics: analytics, }, nil }
Key Advantages:
- Automatic context cancellation on first error (see the sketch after this list)
- Clean error propagation with context
- No manual WaitGroup management
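To make the first point concrete, here is a minimal sketch (illustrative names, not production code) showing the group's context being cancelled for a sibling goroutine as soon as one goroutine fails:

func firstErrorCancels(ctx context.Context) error {
    g, ctx := errgroup.WithContext(ctx)

    // This goroutine fails almost immediately...
    g.Go(func() error {
        return errors.New("boom")
    })

    // ...so this one sees the cancelled context instead of waiting 10 seconds.
    g.Go(func() error {
        select {
        case <-time.After(10 * time.Second):
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    })

    // Wait returns the first non-nil error: "boom".
    return g.Wait()
}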
Fan-Out/Fan-In Pattern
This pattern transforms parallelizable operations from unstructured goroutine spawning into controlled, efficient pipelines.
func FanOutFanIn(input <-chan Task) <-chan Result {
    // Fan-out: distribute work across multiple workers
    const numWorkers = 5
    workers := make([]<-chan Result, numWorkers)

    for i := 0; i < numWorkers; i++ {
        worker := make(chan Result)
        workers[i] = worker
        go func(tasks <-chan Task, results chan<- Result) {
            defer close(results)
            for task := range tasks {
                // Process task...
                result := processTask(task)
                results <- result
            }
        }(input, worker)
    }

    // Fan-in: merge results from all workers
    return merge(workers...)
}

func merge(channels ...<-chan Result) <-chan Result {
    var wg sync.WaitGroup
    out := make(chan Result)

    output := func(c <-chan Result) {
        defer wg.Done()
        for result := range c {
            out <- result
        }
    }

    wg.Add(len(channels))
    for _, c := range channels {
        go output(c)
    }

    go func() {
        wg.Wait()
        close(out)
    }()

    return out
}
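A short usage sketch for the pipeline above (loadTasks is a hypothetical helper that returns a []Task):

func consumeResults() {
    tasks := make(chan Task)

    // Producer: send tasks, then close the channel so the workers can drain and exit
    go func() {
        defer close(tasks)
        for _, t := range loadTasks() {
            tasks <- t
        }
    }()

    // Results arrive in completion order, not submission order
    for result := range FanOutFanIn(tasks) {
        fmt.Println(result)
    }
}

Note that the merged channel preserves no ordering; if order matters, carry an index on each Task and reassemble at the end.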
Backpressure and Load Shedding
Bounded channels prevent memory exhaustion and enable graceful degradation under load.
type RateLimitedProcessor struct {
    semaphore chan struct{}
    timeout   time.Duration
}

func NewRateLimitedProcessor(maxConcurrent int, timeout time.Duration) *RateLimitedProcessor {
    return &RateLimitedProcessor{
        semaphore: make(chan struct{}, maxConcurrent),
        timeout:   timeout,
    }
}

func (p *RateLimitedProcessor) Process(ctx context.Context, job Job) error {
    // Try to acquire semaphore with timeout
    select {
    case p.semaphore <- struct{}{}:
        defer func() { <-p.semaphore }() // Release semaphore
    case <-time.After(p.timeout):
        return ErrRateLimited
    case <-ctx.Done():
        return ctx.Err()
    }

    // Process job...
    return processJob(ctx, job)
}
Load Shedding Strategies:
- Drop strategy: Reject new requests when at capacity (see the sketch after this list)
- Buffer strategy: Queue requests with bounded buffer
- Timeout strategy: Fail requests that can't be processed quickly
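As a sketch of the drop strategy, a non-blocking send on the same semaphore rejects work immediately once the pool is saturated (TryProcess is an illustrative name, not part of the code above):

func (p *RateLimitedProcessor) TryProcess(ctx context.Context, job Job) error {
    select {
    case p.semaphore <- struct{}{}:
        defer func() { <-p.semaphore }() // got a slot; release it when done
    default:
        return ErrRateLimited // at capacity: shed load instead of queuing or waiting
    }
    return processJob(ctx, job)
}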
Graceful Shutdown Pattern
Production services need clean shutdown to avoid data corruption and connection leaks.
func GracefulServer() {
    server := &http.Server{Addr: ":8080"}

    // Setup signal handling
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)

    // Start server
    go func() {
        if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatal("Server failed:", err)
        }
    }()

    // Wait for shutdown signal
    <-sigChan
    log.Println("Shutting down server...")

    // Create context with timeout for graceful shutdown
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    // Shutdown server gracefully
    if err := server.Shutdown(ctx); err != nil {
        log.Printf("Server forced to shutdown: %v", err)
    }

    log.Println("Server exited")
}
Performance Considerations for 2025
Goroutine Lifecycle Management:
- Even though goroutines are lightweight (2KB initial stack), spawning tens of thousands unnecessarily stresses the runtime
- Use worker pools for CPU-bound work
- Benchmark to find optimal parallelism (typically between runtime.NumCPU() and runtime.NumCPU() * 2 for CPU-bound tasks); a benchmark sketch follows this list
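One way to run that benchmark is a table-driven benchmark with the standard testing package. This is only a sketch: runPool is a hypothetical helper that pushes the same fixed batch of jobs through a pool of n workers.

func BenchmarkWorkerPoolSizes(b *testing.B) {
    for _, n := range []int{runtime.NumCPU(), runtime.NumCPU() * 2, runtime.NumCPU() * 4} {
        b.Run(fmt.Sprintf("workers=%d", n), func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                runPool(n) // hypothetical: process the same batch with n workers
            }
        })
    }
}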
Channel Best Practices:
- Buffered channels reduce goroutine blocking: make(chan T, capacity)
- Use select with default for non-blocking operations (see the sketch after this list)
- Always close channels from the sender side
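A minimal sketch of non-blocking channel operations with select and default (Sample is an illustrative type):

// trySend attempts a non-blocking send; it drops the value if the buffer is full.
func trySend(ch chan<- Sample, s Sample) bool {
    select {
    case ch <- s:
        return true
    default:
        return false // buffer full: shed rather than block the hot path
    }
}

// tryRecv attempts a non-blocking receive; ok is false if nothing is ready.
func tryRecv(ch <-chan Sample) (s Sample, ok bool) {
    select {
    case s = <-ch:
        return s, true
    default:
        return s, false
    }
}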
Memory and GC Impact:
- Unbounded channel growth can lead to memory pressure
- Consider using sync.Pool for frequent allocations in hot paths (a minimal sketch follows this list)
- Profile with go tool pprof to identify goroutine leaks
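A minimal sync.Pool sketch for reusing buffers on a hot path (the buffer type and handler are illustrative):

var bufPool = sync.Pool{
    // New runs only when the pool has no free buffer to hand out
    New: func() any { return new(bytes.Buffer) },
}

func handlePayload(payload []byte) {
    buf := bufPool.Get().(*bytes.Buffer)
    buf.Reset()            // clear whatever the previous user left behind
    defer bufPool.Put(buf) // return the buffer for reuse

    buf.Write(payload)
    // ... process buf without allocating a fresh buffer on every call ...
}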
Real-World Production Metrics
A 2024 survey by the Go team found:
- 76% of production Go services use structured error handling with errgroup
- Services implementing proper context handling see 20-30% reduction in tail latency
- Worker pool patterns reduce CPU usage by 15-25% compared to unlimited goroutine spawning
These patterns form the foundation of scalable Go applications. Start with simple worker pools and context propagation, then gradually introduce more sophisticated patterns as your system grows. The key is measuring performance impact and iterating based on real metrics from your specific workload.