12 min read
Dillon Browne

Accelerate Docker Builds with BuildKit

BuildKit slashes Docker build times by 70% with parallel execution, smart caching, and secure secrets. Real production patterns for modern container pipelines.

docker devops performance infrastructure

When I first discovered BuildKit hiding inside Docker, I was skeptical. Another build tool? But after rebuilding our CI/CD pipeline around it, I saw build times drop from 12 minutes to under 4 minutes. BuildKit isn’t just faster—it fundamentally changes how container builds work.

Understanding BuildKit’s Core Architecture

BuildKit is Docker’s next-generation build engine. Unlike the legacy builder, it treats your Dockerfile as a dependency graph rather than a linear script. This means independent stages run in parallel, caching is smarter, and stages the final target doesn’t need are skipped entirely.

I started using BuildKit when our monorepo builds became unbearable. We had 15+ microservices sharing common base images, and every code change triggered full rebuilds. The legacy Docker builder would rebuild everything sequentially, even when nothing changed.

Enable BuildKit in Docker

The easiest way to use BuildKit is setting an environment variable:

export DOCKER_BUILDKIT=1
docker build -t myapp:latest .

For permanent enablement, I add this to /etc/docker/daemon.json:

{
  "features": {
    "buildkit": true
  }
}

After restarting Docker (sudo systemctl restart docker), BuildKit becomes the default. You’ll immediately notice the different build output—it’s more structured and shows parallel stages clearly.
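Whichever way you enable it, it’s also worth pinning the Dockerfile frontend at the top of each Dockerfile. The syntax directive tells BuildKit which feature set to use, is updated independently of the Docker engine, and is required for features like RUN --mount on older engines:

```dockerfile
# syntax=docker/dockerfile:1
# Pins the current stable BuildKit Dockerfile frontend
FROM alpine:3.18
```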

Optimize Multi-Stage Builds with Parallelization

Multi-stage builds are where BuildKit shines. Here’s a pattern I use constantly:

# Build stage 1: Dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Build stage 2: Build assets
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Build stage 3: Runtime tests
FROM deps AS test
COPY --from=builder /app/dist ./dist
RUN npm test

# Final stage: Production image
FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY package.json ./
USER node
CMD ["node", "dist/index.js"]

With legacy Docker, deps, builder, and test would run sequentially. BuildKit starts deps and builder in parallel immediately, then executes test once both complete (it builds FROM deps and copies from builder). The final runtime stage pulls from both completed stages.

This parallelism cut our build time from 8 minutes to 3 minutes for a typical Node.js service.
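The stage graph above is easy to reason about as a scheduling problem. As a rough illustration of the idea (not BuildKit’s actual scheduler), Python’s graphlib can compute which stages are ready to run together:

```python
from graphlib import TopologicalSorter

# Stage dependencies from the Dockerfile above:
# deps and builder have no stage dependencies;
# test builds FROM deps and COPYs --from=builder;
# runtime COPYs from both deps and builder.
stages = {
    "deps": set(),
    "builder": set(),
    "test": {"deps", "builder"},
    "runtime": {"deps", "builder"},
}

ts = TopologicalSorter(stages)
ts.prepare()
waves = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # stages whose dependencies are done
    waves.append(ready)
    ts.done(*ready)

print(waves)  # [['builder', 'deps'], ['runtime', 'test']]
```

Two waves instead of four sequential stages is exactly where the wall-clock savings come from.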

Maximize Performance with Advanced Layer Caching

BuildKit’s cache is remarkably intelligent. It doesn’t just cache layers—it understands content hashes and can mount external caches.

Inline Cache Export

I use inline caching to share build caches across CI runners:

docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t myapp:latest \
  --push \
  .

This embeds cache metadata in the image. Later builds can reuse it:

docker build \
  --cache-from myapp:latest \
  -t myapp:latest \
  .

Our GitLab CI runners pull the previous image and reuse unchanged layers. This works across different machines, which is impossible with local cache only.

Registry Cache Backend

For larger teams, I set up dedicated registry caches:

docker buildx build \
  --cache-to type=registry,ref=registry.example.com/myapp:buildcache \
  --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
  -t myapp:latest \
  .

The cache lives separately from your images. Multiple teams can share it, and you control cache expiration through registry policies. We saw cache hit rates jump from 40% to 85% after implementing this.

Secure Docker Secrets Management Without Leaks

The traditional approach to build secrets is dangerous:

# BAD: Secret ends up in layer history
ARG GITHUB_TOKEN
RUN git clone https://${GITHUB_TOKEN}@github.com/private/repo.git

Even if you delete files later, the secret remains in the image history. BuildKit’s secret mounts solve this:

# GOOD: Secret never enters layer history
RUN --mount=type=secret,id=github_token \
  git clone https://$(cat /run/secrets/github_token)@github.com/private/repo.git

Build with:

docker build --secret id=github_token,src=$HOME/.github_token .

The secret mounts temporarily during the RUN command, then disappears. No trace in the final image. I use this pattern for:

  • Private npm registry tokens
  • AWS credentials for S3 artifact downloads
  • SSH keys for private git dependencies
  • Database connection strings for integration tests
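The npm case, for example, might look like this (a sketch; the registry URL and the npm_token id are assumptions, adapt them to your setup). Everything happens inside a single RUN, so the temporary .npmrc never lands in a layer:

```dockerfile
# Sketch: authenticate to a registry via a BuildKit secret
COPY package*.json ./
RUN --mount=type=secret,id=npm_token \
    echo "//registry.npmjs.org/:_authToken=$(cat /run/secrets/npm_token)" > .npmrc \
    && npm ci \
    && rm .npmrc
```

Build it with --secret id=npm_token,src=... as shown above, or pull the value from an environment variable with --secret id=npm_token,env=NPM_TOKEN.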

Accelerate Builds with Cache Mounts

Cache mounts persist directories across builds. This is huge for package managers:

FROM golang:1.21-alpine AS builder
WORKDIR /app

# Mount Go module cache
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=bind,source=go.sum,target=go.sum \
    --mount=type=bind,source=go.mod,target=go.mod \
    go mod download

COPY . .

# Mount both module cache and build cache
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build -o /app/server ./cmd/server

The first time this runs, it downloads all Go modules. Subsequent builds reuse /go/pkg/mod and /root/.cache/go-build, even if you blow away the container. This reduced our Go service builds from 6 minutes to 45 seconds.

Python example:

FROM python:3.11-slim AS builder

RUN --mount=type=cache,target=/root/.cache/pip \
    --mount=type=bind,source=requirements.txt,target=requirements.txt \
    pip install --user -r requirements.txt

The pip cache persists across builds. Rebuilds skip package downloads entirely.
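The same idea works for npm, assuming the default cache location of /root/.npm when the build runs as root:

```dockerfile
# Sketch: persist npm's download cache across builds
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
```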

Configure SSH Forwarding for Private Repositories

Copying SSH keys into images is a security nightmare. BuildKit forwards your SSH agent:

FROM alpine:3.18
RUN apk add --no-cache git openssh-client

# Use host SSH agent
RUN --mount=type=ssh \
  git clone git@github.com:private/repo.git /app

Build with:

docker build --ssh default .

BuildKit forwards your local SSH agent into the build. The private key never touches the image. This works seamlessly in CI with forwarded agents or SSH key files:

docker build --ssh default=$SSH_AUTH_SOCK .

Build Multi-Platform Docker Images

Building ARM images from x86 machines used to require QEMU and patience. BuildKit with buildx makes it trivial:

docker buildx create --name multiplatform --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myapp:latest \
  --push \
  .

This creates native ARM64 and AMD64 images in one command. I use this for deploying to:

  • AWS Graviton instances (ARM64)
  • Traditional x86 EC2 instances
  • Apple Silicon development machines
  • Raspberry Pi edge devices

The same Dockerfile produces optimized binaries for each architecture. BuildKit handles cross-compilation transparently.
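For compiled languages you can avoid QEMU emulation entirely by cross-compiling on the native build platform. A sketch for Go, using BuildKit’s automatic platform build args (the ./cmd/server path mirrors the earlier example and is an assumption):

```dockerfile
# syntax=docker/dockerfile:1
# Build stage runs on the host's native platform, targeting $TARGETARCH
FROM --platform=$BUILDPLATFORM golang:1.21-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH \
    go build -o /out/server ./cmd/server

FROM alpine:3.18
COPY --from=build /out/server /usr/local/bin/server
CMD ["server"]
```

Because the build stage always runs natively, an amd64 runner produces the arm64 binary at full speed; only the tiny runtime stage differs per platform.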

Integrate BuildKit into CI/CD Pipelines

Here’s my GitLab CI template using BuildKit features:

build:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_BUILDKIT: 1
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
  script:
    - |
      docker build \
        --cache-from $CI_REGISTRY_IMAGE:latest \
        --build-arg BUILDKIT_INLINE_CACHE=1 \
        --secret id=npm_token,env=NPM_TOKEN \
        -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA \
        -t $CI_REGISTRY_IMAGE:latest \
        --push \
        .

Key elements:

  • DOCKER_BUILDKIT=1 enables BuildKit
  • --cache-from pulls previous image for layer reuse
  • --build-arg BUILDKIT_INLINE_CACHE=1 embeds cache metadata
  • --secret injects CI secrets safely
  • Tags both commit SHA and latest for easy rollbacks

This pipeline runs in 3-4 minutes for most services, down from 10-15 minutes with legacy builds.

Debug Failed BuildKit Builds

When builds fail, BuildKit’s output is more helpful than legacy Docker:

docker build --progress=plain .

This shows full command output instead of abbreviated logs. For interactive debugging, buildx v0.12+ also ships an experimental debugger:

BUILDX_EXPERIMENTAL=1 docker buildx debug build .

This drops you into an interactive monitor at failure points. You can inspect the failing layer’s filesystem and environment.

Practical Gotchas

BuildKit behavior differs from legacy Docker in subtle ways:

  1. .dockerignore is stricter: BuildKit respects .dockerignore more aggressively. Ignored files won’t be available to COPY . . at all. I learned this when builds failed because test fixtures were ignored.

  2. Cache invalidation is smarter: Changing unrelated files won’t invalidate layers, because BuildKit tracks file content rather than timestamps. The flip side: merely touching a file won’t force a rebuild.

  3. Parallel stage outputs: Multi-stage builds can produce confusing logs when stages run in parallel. Use --progress=plain to see sequential output.

  4. Resource usage spikes: BuildKit can use significant CPU and memory during parallel builds. I set --cpu-quota and --memory limits on CI runners to prevent resource exhaustion.
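Gotcha 2 is easy to demonstrate outside Docker: a content-addressed cache keys on bytes, not metadata. A minimal sketch of the principle:

```python
import hashlib
import os
import tempfile
import time

def cache_key(path: str) -> str:
    # A content-based key: only the bytes matter, not the mtime
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("COPY . /app\n")
    path = f.name

before = cache_key(path)
os.utime(path, (time.time() + 3600,) * 2)  # "touch": new mtime, same bytes
after = cache_key(path)

print(before == after)  # True: touching a file leaves its cache key unchanged
```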

Measuring the Impact

Before BuildKit, our CI/CD spent 45% of time on builds. After implementing BuildKit with registry caching and parallel stages, build time dropped to 15% of total pipeline duration. This translates to:

  • Build time reduction: 70% average across services
  • Cache hit rate: 85% (up from 40%)
  • CI/CD throughput: 3x more deploys per day
  • Developer feedback: PR checks complete in 4 minutes vs 12 minutes

When Not to Use BuildKit

BuildKit isn’t always the answer:

  • Very simple Dockerfiles: Single-stage, linear builds see minimal improvement
  • Legacy Docker versions: BuildKit requires Docker 18.09+, and some features need 20.10+
  • Extremely constrained environments: BuildKit uses more memory than legacy builder during builds

But for any non-trivial Dockerfile, BuildKit delivers measurable improvements.

Getting Started

Start with these three changes:

  1. Enable BuildKit: Set DOCKER_BUILDKIT=1 in your environment
  2. Add cache mounts: Insert --mount=type=cache for package manager directories
  3. Use inline cache: Add --build-arg BUILDKIT_INLINE_CACHE=1 to CI builds

These give you 50-60% of BuildKit’s benefits with minimal effort. Then explore secrets, SSH forwarding, and multi-platform builds as needed.

BuildKit transformed our build pipeline from a bottleneck to a strength. It’s not just about speed—it’s about making Docker builds predictable, secure, and maintainable at scale.
