12 min read
Dillon Browne

Shrinking Go Binaries by 70%

Practical techniques to reduce Go binary sizes by 70%+ in production: build flags, symbol stripping, UPX compression, and dependency optimization.


Why Go Binary Size Matters in Production

A 100MB Go binary doesn’t sound like much until you’re deploying it 500 times per day across multiple regions. I’ve been deploying Go services to production for years, and binary size became a critical optimization when we started running hundreds of microservices across our Kubernetes clusters. Those extra megabytes add up fast when pulling container images across regions during rapid scaling events.

In my experience, there are three main reasons to care about binary size:

Cold start performance: Smaller binaries mean faster container startup times. When autoscaling kicks in during traffic spikes, every second counts.

Network transfer costs: Pulling 100MB images across AWS regions costs real money. Multiply that by thousands of deployments per day, and you’re looking at significant bandwidth bills.

Storage efficiency: Container registries charge for storage. Reducing binary sizes from 80MB to 20MB across hundreds of images can save thousands in registry costs.

Optimize Go Build Flags for Smaller Binaries

The Go compiler gives us several build flags, but not all of them provide meaningful size reductions. I’ve tested these extensively in production environments.

Basic Build Optimization

Start with the -ldflags approach. This is the most straightforward optimization:

go build -ldflags="-s -w" -o myapp main.go

The -s flag strips the symbol table and debugging information. The -w flag removes DWARF debugging data. Together, they typically reduce binary size by 20-30%.

Here’s what I see in a real microservice:

# Standard build
$ go build -o myapp main.go
$ ls -lh myapp
-rwxr-xr-x  1 user  staff    82M Feb 25 10:00 myapp

# Optimized build
$ go build -ldflags="-s -w" -o myapp main.go
$ ls -lh myapp
-rwxr-xr-x  1 user  staff    56M Feb 25 10:01 myapp

That’s a 32% reduction with zero code changes.

Advanced Build Optimization

For more aggressive optimization, I add the -trimpath flag, a go build option (not a linker flag) that removes file system paths from the compiled binary:

go build -ldflags="-s -w" -trimpath -o myapp main.go

This removes absolute file paths embedded in the binary, which:

  • Reduces size by another 2-5%
  • Improves build reproducibility
  • Enhances security by not leaking your directory structure
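To see what these flags buy you on your own project, a quick comparison script helps. This is a sketch: the binary names are illustrative, and it assumes a Go module in the current directory.

```shell
#!/bin/sh
# compare-builds.sh - measure the savings from -ldflags and -trimpath
go build -o app-default .
go build -ldflags="-s -w" -trimpath -o app-small .

default=$(wc -c < app-default)
small=$(wc -c < app-small)
echo "default:   ${default} bytes"
echo "optimized: ${small} bytes"
echo "saved:     $(( (default - small) * 100 / default ))%"

# Spot-check that local paths are gone (expect 0 matches after -trimpath)
strings app-small | grep -c "$PWD" || true
```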

Compress Go Binaries with UPX

UPX (Ultimate Packer for eXecutables) can compress Go binaries by an additional 50-70%, but it comes with tradeoffs I’ve learned the hard way.

When UPX Works Well

UPX is excellent for CLI tools and batch jobs where startup time isn’t critical:

# Build and compress
go build -ldflags="-s -w" -trimpath -o myapp main.go
upx --best --lzma myapp

# Results
$ ls -lh myapp
-rwxr-xr-x  1 user  staff    18M Feb 25 10:05 myapp

That’s a 78% total reduction from the original 82MB binary.

When UPX Causes Problems

I’ve encountered issues with UPX in production:

Memory decompression overhead: The binary decompresses itself into memory at startup. For a 50MB compressed binary, you might need 150MB of RAM during startup.

Security scanners: Some container security tools flag UPX-compressed binaries as suspicious or potentially malicious. I’ve had to whitelist our own services in Falco and other runtime security tools.

Startup latency: Decompression adds 100-500ms to startup time. For Lambda functions or serverless environments where cold start is critical, this can be a dealbreaker.

My rule of thumb: Use UPX for internal tools and CLIs, avoid it for latency-sensitive microservices.
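Before adopting UPX for any given service, measure the cost directly rather than guessing. A rough sketch, assuming upx is installed and the binary supports a fast-exiting flag such as --version:

```shell
#!/bin/sh
# upx-tradeoff.sh - compare startup latency before and after compression
cp myapp myapp-upx
upx --best --lzma myapp-upx

# Each run of the packed binary decompresses itself into memory first
time ./myapp --version
time ./myapp-upx --version

# UPX is reversible if the tradeoff doesn't pay off
upx -d myapp-upx
```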

Reduce Go Dependencies and Eliminate Dead Code

The biggest wins often come from reducing dependencies, not just optimizing the build.

Analyzing Your Dependencies

I use go mod graph combined with a custom script to identify heavy dependencies:

#!/bin/bash
# analyze-deps.sh - Find large dependencies

go build -o /tmp/myapp .
# List the 20 largest symbols; with -size, column 2 is the symbol size in bytes
go tool nm -size /tmp/myapp | sort -k2 -n | tail -20

This shows the largest symbols in your binary. In one project, I discovered we were importing the entire AWS SDK when we only needed S3. Switching to the modular v2 SDK reduced our binary by 15MB.
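Another quick check I find useful: Go embeds build metadata in every binary, so you can list exactly which module versions a build pulled in (requires Go 1.18+; the binary path is illustrative):

```shell
# Show the module versions compiled into a binary
go build -o /tmp/myapp .
go version -m /tmp/myapp
```

Anything in that list you don't recognize is a candidate for removal or replacement.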

Removing Unused Code

Go’s linker automatically removes unused functions, but it can’t eliminate entire packages if any function is referenced. I’ve found these patterns help:

Use build tags for optional features:

//go:build metrics
// +build metrics

package monitoring

// This code only compiles when built with -tags=metrics
func InitMetrics() {
    // Prometheus, OpenTelemetry, etc.
}

This lets you ship lightweight binaries for development while keeping full observability in production.
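With the tag in place, the two build variants look like this (output names are illustrative):

```shell
# Development build: the tagged monitoring files are excluded entirely
go build -o myapp-dev .

# Production build: compiles the metrics code in
go build -tags=metrics -o myapp-prod .
```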

Build Static Go Binaries Without CGO

CGO can bloat binaries significantly. I disable it whenever possible:

CGO_ENABLED=0 go build -ldflags="-s -w" -trimpath -o myapp main.go

This produces a fully static binary with no external dependencies. Benefits:

  • Smaller final size (no dynamic library references)
  • Easier container builds (can use FROM scratch)
  • Better portability across different Linux distributions
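It's worth verifying the result is actually static before shipping it. A quick check (exact output wording varies by platform):

```shell
# "statically linked" should appear in the file(1) output
file myapp

# ldd has nothing to resolve for a static binary
# (on glibc systems it prints "not a dynamic executable")
ldd myapp || true
```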

Here’s my standard Dockerfile pattern:

# Build stage
FROM golang:1.22-alpine AS builder
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -trimpath -o myapp .

# Runtime stage
FROM scratch
COPY --from=builder /build/myapp /myapp
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
ENTRYPOINT ["/myapp"]

This produces container images under 25MB for most services.
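Building and checking the result looks like this (the image tag is illustrative):

```shell
# Build the two-stage image and report its final size
docker build -t myapp:slim .
docker images myapp:slim --format '{{.Size}}'
```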

Monitor Go Binary Size in CI/CD

I measure binary size as part of our CI/CD pipeline:

#!/bin/bash
# ci-size-check.sh

BINARY="./myapp"
MAX_SIZE_MB=30

go build -ldflags="-s -w" -trimpath -o "$BINARY" .
SIZE_BYTES=$(stat -f%z "$BINARY" 2>/dev/null || stat -c%s "$BINARY")
SIZE_MB=$((SIZE_BYTES / 1024 / 1024))

if [ "$SIZE_MB" -gt "$MAX_SIZE_MB" ]; then
    echo "Binary size ${SIZE_MB}MB exceeds limit ${MAX_SIZE_MB}MB"
    exit 1
fi

echo "Binary size: ${SIZE_MB}MB (limit: ${MAX_SIZE_MB}MB)"

This prevents accidental binary bloat from sneaking into production.

Measuring Container Image Impact

For containerized services, I track the full image size:

docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" | grep myapp

Our typical image progression:

  • Before optimization: 180MB
  • After build flags: 120MB
  • After dependency cleanup: 80MB
  • After switching to scratch base: 25MB

Binary Optimization Tradeoffs in Production

Binary optimization isn’t free. Here are the costs I’ve encountered:

Debugging production issues: Stripped binaries make it harder to debug crashes. I keep debug builds in our artifact repository for post-mortem analysis.

Build time increases: Aggressive optimization can add 10-20% to build times. For our CI/CD pipeline, this means slightly longer deployment cycles.

Platform compatibility: Static binaries compiled with CGO_ENABLED=0 won’t work if you need C libraries. I maintain separate build configurations for services that require database drivers with CGO.

Conclusion

Optimizing Go binary size in production environments delivers measurable results. In my deployments, I’ve achieved:

  • 70-80% Go binary size reduction across microservices
  • 40% faster container startup times
  • Significant cost savings on container registry storage and network transfer

The key is understanding your constraints. For latency-sensitive services, I optimize for startup time over maximum compression. For batch jobs and CLIs, I push compression as far as possible.

Start with build flags and dependency management. Those give you the best return on investment with minimal risk. Save UPX and aggressive optimization for specific use cases where you’ve measured the tradeoffs.

Most importantly, measure everything. Binary size should be monitored just like any other production metric. If you’re working on optimizing your cloud infrastructure or need help with Go deployment strategies, I’d love to discuss your specific challenges.
