5 Common Go Concurrency Mistakes That’ll Trip You Up

Explore 5 common Go concurrency mistakes – from forgotten goroutines to race conditions and channel deadlocks. Learn simple fixes.

So, you’re diving into Go and its concurrency magic.

Goroutines, channels, the whole shebang.

It’s lightweight, it’s fast, and it feels like you’re wielding some superpower.

But here’s the catch—concurrency in Go is deceptively simple.

You think you’ve got it, and then bam, your program starts acting like a haunted house.

Random crashes, deadlocks, or just plain wrong results.

I’ve been there, and I’m betting you have too.

Let’s walk through five common concurrency mistakes in Go that’ll sneak up on you—and how to dodge them.

1. Forgetting to Wait for Goroutines

You fire off a goroutine with go, and it’s off to the races. But here’s the thing—if the main function exits, it doesn’t care if your goroutines are still chugging along. They get killed midstride. Rookie move, but we’ve all done it.

What Goes Wrong

Imagine this:

package main

import "fmt"

func printHello() {
    fmt.Println("Hello from goroutine!")
}

func main() {
    go printHello()
    fmt.Println("Done")
}

Run it. What do you see? Almost certainly just "Done". The goroutine never got a chance to say hello. Why? Main exited before it could even run.

Fix It

Use a sync.WaitGroup. It’s like a bouncer that won’t let the party end until everyone’s accounted for.

package main

import (
    "fmt"
    "sync"
)

func printHello(wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("Hello from goroutine!")
}

func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    go printHello(&wg)
    wg.Wait()
    fmt.Println("Done")
}

Output:

Hello from goroutine!
Done

wg.Add(1) says “one goroutine’s coming.” wg.Done() says “it’s finished.” wg.Wait() holds the line until everyone checks out. Simple, effective.
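
The same pattern scales to more than one goroutine. Here's a minimal sketch (the loop count of three is just for illustration) where each launch gets its own Add and each goroutine its own Done:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1) // register before launching, never inside the goroutine
        go func(n int) {
            defer wg.Done() // runs even if the goroutine panics
            fmt.Println("Worker", n, "finished")
        }(i)
    }
    wg.Wait() // blocks until every Add has a matching Done
    fmt.Println("Done")
}

Passing i in as an argument also sidesteps the old loop-variable capture gotcha (Go 1.22 made loop variables per-iteration, but the explicit parameter still reads clearly).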

2. Sharing Memory Without Protection

Goroutines are cheap, so you spin up a bunch and let them mess with the same variables. Sounds fun until you realize they’re stepping all over each other. Race conditions are a nightmare—your data turns into gibberish because multiple goroutines write to it at the same time.

What Goes Wrong

Check this out:

package main

import "fmt"

func main() {
    counter := 0
    for i := 0; i < 1000; i++ {
        go func() {
            counter++
        }()
    }
    fmt.Println(counter)
}

You’d think counter hits 1000, right? Nope. Run it a few times. You might get 842, 917, 789, or some other random number. Two things are going wrong: the goroutines increment counter with no synchronization, so updates trample each other and get lost, and main doesn’t wait for them, so some never even run.

Fix It

Lock that shared memory with a sync.Mutex. It’s like a “one at a time” sign.

package main

import (
    "fmt"
    "sync"
)

func main() {
    var mu sync.Mutex
    counter := 0
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.Lock()
            counter++
            mu.Unlock()
        }()
    }
    wg.Wait()
    fmt.Println(counter)
}

Output:

1000

mu.Lock() grabs the key. Only one goroutine messes with counter at a time. mu.Unlock() hands the key off. Add the WaitGroup so main doesn’t bail early. Now you’re golden.
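
Two side notes: Go’s race detector (go run -race) will flag the broken version immediately, and for a plain counter the standard library’s sync/atomic package is a lock-free alternative to the mutex. A minimal sketch of that variant:

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var counter int64
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            atomic.AddInt64(&counter, 1) // safe concurrent increment, no lock needed
        }()
    }
    wg.Wait()
    fmt.Println(atomic.LoadInt64(&counter)) // always 1000
}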

3. Blocking Forever with Channels

Channels are Go’s bread and butter for concurrency. They’re awesome—until you misuse them and your program freezes like a deer in headlights. Sending or receiving on a channel with no one on the other end? That’s a deadlock waiting to happen.

What Goes Wrong

Here’s a classic:

package main

import "fmt"

func main() {
    ch := make(chan int)
    ch <- 42
    fmt.Println("Sent it!")
}

Run it. It never prints a thing; the runtime kills it with fatal error: all goroutines are asleep - deadlock!. Why? You’re sending on ch, but no one’s listening. A send on an unbuffered channel blocks until someone receives, so main blocks forever.

Fix It

Either have a goroutine ready to receive, or use a buffered channel.

Option 1—Receiver goroutine:

package main

import "fmt"

func main() {
    ch := make(chan int)
    done := make(chan struct{})
    go func() {
        value := <-ch
        fmt.Println("Got:", value)
        close(done)
    }()
    ch <- 42
    fmt.Println("Sent it!")
    <-done // don't repeat mistake #1: wait for the receiver to finish printing
}

Option 2—Buffered channel:

package main

import "fmt"

func main() {
    ch := make(chan int, 1) // Buffer of 1
    ch <- 42
    fmt.Println("Sent it!")
    value := <-ch
    fmt.Println("Got:", value)
}

Output (both options; in Option 1 the two lines can appear in either order):

Sent it!
Got: 42

Buffered channels let you send without an immediate receiver—up to the buffer size. Unbuffered? You need a buddy on the other end.
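
And if blocking at all is unacceptable, a select with a default case lets you attempt a send and move on when nobody’s ready. A quick sketch:

package main

import "fmt"

func main() {
    ch := make(chan int, 1)
    ch <- 1 // fill the one-slot buffer

    select {
    case ch <- 2:
        fmt.Println("Sent 2")
    default:
        fmt.Println("Channel full, skipping") // this branch runs
    }
}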

4. Closing Channels Too Soon (or Not At All)

Channels are great, but closing them is tricky. Close too early, and you’ll panic trying to send. Never close, and receivers might hang indefinitely waiting for more data.

What Goes Wrong

Early close:

package main

import "fmt"

func main() {
    ch := make(chan int)
    close(ch)
    ch <- 42 // Panic!
    fmt.Println("Sent it!")
}

Run it—panic: send on closed channel. Sending to a closed channel is a no-go.

Now, never closing:

package main

import "fmt"

func main() {
    ch := make(chan int)
    go func() {
        for i := 0; i < 3; i++ {
            ch <- i
        }
        // Forgot to close ch!
    }()
    for v := range ch {
        fmt.Println(v)
    }
}

This prints 0, 1, 2 and then dies with fatal error: all goroutines are asleep - deadlock!. The range loop keeps waiting for more values, but the sender is finished and never closed the channel, so nothing is left to wake main up. (In a bigger program with other goroutines still running, you’d get a silent hang and a leaked goroutine instead of a crash.)

Fix It

Close the channel when you’re done sending, and only from the sender.

package main

import "fmt"

func main() {
    ch := make(chan int)
    go func() {
        for i := 0; i < 3; i++ {
            ch <- i
        }
        close(ch)
    }()
    for v := range ch {
        fmt.Println(v)
    }
}

Output:

0
1
2

close(ch) signals “no more data.” The range loop exits cleanly. Rule of thumb: sender closes, receivers read until closed.
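
If you’re receiving outside a range loop, the two-value receive tells you whether the channel has been drained and closed. A small sketch:

package main

import "fmt"

func main() {
    ch := make(chan int, 2)
    ch <- 1
    ch <- 2
    close(ch)

    for {
        v, ok := <-ch
        if !ok {
            fmt.Println("Channel closed, no more data")
            break
        }
        fmt.Println("Got:", v)
    }
}

A closed channel still hands out whatever is left in its buffer; ok only flips to false once it’s empty.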

5. Overusing Goroutines Like They’re Free

Goroutines are cheap, but not free. Spin up too many, and you’ll choke your CPU or blow out your memory. I’ve seen folks launch a goroutine for every tiny task, thinking it’s efficient. Spoiler: it’s not.

What Goes Wrong

Look at this:

package main

import "fmt"

func main() {
    for i := 0; i < 1000000; i++ {
        go func() {
            fmt.Println("Hi")
        }()
    }
    fmt.Println("Started all")
}

Run it. It might work, it might crawl, or it might fall over with an out-of-memory error. A million goroutines? Each one starts with a small stack (around 2KB), but that adds up, and the scheduler is juggling like crazy. And since main doesn’t wait for any of them (mistake #1 again), most of those "Hi"s never get printed anyway.

Fix It

Pool them or batch the work. Use a worker pattern with a fixed number of goroutines.

package main

import (
    "fmt"
    "sync"
)

func worker(id int, jobs <-chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    for job := range jobs {
        fmt.Printf("Worker %d: Job %d\n", id, job)
    }
}

func main() {
    jobs := make(chan int, 100)
    var wg sync.WaitGroup
    numWorkers := 5

    for i := 1; i <= numWorkers; i++ {
        wg.Add(1)
        go worker(i, jobs, &wg)
    }

    for i := 0; i < 20; i++ {
        jobs <- i
    }
    close(jobs)
    wg.Wait()
    fmt.Println("All done")
}

Output (varies by scheduling):

Worker 1: Job 0
Worker 2: Job 1
Worker 3: Job 2
...
All done

Five workers handle 20 jobs. Channels distribute the load. No resource explosion. Scale numWorkers based on your machine, not to infinity.
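
If you don’t want a long-lived pool, another common way to cap concurrency is a buffered channel used as a semaphore. Here’s a rough sketch (the limits of 5 and 20 are arbitrary):

package main

import (
    "fmt"
    "sync"
)

func main() {
    sem := make(chan struct{}, 5) // at most 5 goroutines in flight
    var wg sync.WaitGroup

    for i := 0; i < 20; i++ {
        wg.Add(1)
        sem <- struct{}{} // acquire a slot; blocks while 5 are already running
        go func(n int) {
            defer wg.Done()
            defer func() { <-sem }() // release the slot
            fmt.Println("Processing job", n)
        }(i)
    }
    wg.Wait()
    fmt.Println("All done")
}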

In conclusion...

Concurrency in Go is a blast, but it’s got sharp edges.

Forgetting to wait for goroutines, sloppy memory sharing, channel mishaps, and goroutine spam—these are the traps I’ve fallen into, and I bet you’ve hit a few too. Keep sync.WaitGroup and Mutex in your toolkit, respect channels, and use goroutines judiciously.

Next time your concurrent program is acting weird, you’ll know where to look.

Got a concurrency war story? Drop it below—I'm all ears!