Customize Terraform Modules Without Forking
Master proven patterns for customizing Terraform modules without maintenance nightmares. Stop forking and start composing your infrastructure code.
Forking Terraform modules is a maintenance nightmare. I learned this the hard way after inheriting a codebase with 23 forked modules, each pinned to versions 2-3 years out of date. Security patches required days of merge conflict resolution. Upstream improvements sat unused because upgrading meant re-applying our customizations to new code. The forks became technical debt that compounded with every release we skipped.
The problem isn’t unique to my experience. Every infrastructure team hits the same wall: you need to customize a third-party module, but the customization options don’t quite match your requirements. You need to add an extra tag, modify a security group rule, or inject an IAM policy that the module author didn’t anticipate. The module is 95% perfect, but that last 5% forces a fork.
I’ve spent the last three years finding alternatives to forking. The patterns I’ll share eliminate the need for maintaining custom module versions while giving you the flexibility to adapt upstream modules to your infrastructure standards. These aren’t theoretical solutions—they’re battle-tested approaches from production environments running thousands of Terraform resources across multi-cloud deployments.
Understand the Real Cost of Forking
Before diving into alternatives, let’s understand why forking creates problems beyond the obvious maintenance burden.
When you fork a module, you immediately diverge from the upstream project. The module author continues development, adds features, fixes bugs, and patches security vulnerabilities. Your fork sits frozen at the moment you created it. Each upstream change requires manual integration into your fork.
This divergence creates risk that scales with time. A security vulnerability in the AWS VPC module affects your infrastructure, but the patch requires merging upstream changes into your 18-month-old fork. You’re not just applying a patch—you’re resolving conflicts between your customizations and 50+ commits of upstream development.
The maintenance tax extends beyond security updates. New AWS features, provider improvements, and optimization opportunities all require integration work. Teams often skip upgrades entirely, accepting the growing gap between their infrastructure and current best practices.
Pattern 1: Compose Terraform Modules for Flexibility
The most reliable alternative to forking is wrapping the upstream module in your own module that adds the customizations you need.
Instead of forking the terraform-aws-vpc module to add custom tags, create a wrapper module that calls the upstream module and supplements it with your requirements:
# modules/our-vpc/main.tf
module "base_vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0"

  name = var.name
  cidr = var.cidr

  azs             = var.availability_zones
  private_subnets = var.private_subnet_cidrs
  public_subnets  = var.public_subnet_cidrs

  enable_nat_gateway = var.enable_nat
  enable_vpn_gateway = var.enable_vpn
}

resource "aws_ec2_tag" "vpc_tags" {
  for_each = var.additional_tags

  resource_id = module.base_vpc.vpc_id
  key         = each.key
  value       = each.value
}

# Note: the aws_iam_role.flow_logs and aws_cloudwatch_log_group.flow_logs
# resources referenced below must be defined separately
resource "aws_flow_log" "vpc_flow_logs" {
  count = var.enable_flow_logs ? 1 : 0

  iam_role_arn    = aws_iam_role.flow_logs[0].arn
  log_destination = aws_cloudwatch_log_group.flow_logs[0].arn
  traffic_type    = "ALL"
  vpc_id          = module.base_vpc.vpc_id
}

output "vpc_id" {
  value = module.base_vpc.vpc_id
}

output "private_subnet_ids" {
  value = module.base_vpc.private_subnets
}
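The wrapper references flow-log IAM and CloudWatch resources it doesn't define. A minimal sketch of those supporting resources, with illustrative names and retention settings, might look like this:

```hcl
# modules/our-vpc/flow_logs.tf (illustrative)
resource "aws_cloudwatch_log_group" "flow_logs" {
  count = var.enable_flow_logs ? 1 : 0

  name              = "/vpc/flow-logs/${var.name}"
  retention_in_days = 90
}

resource "aws_iam_role" "flow_logs" {
  count = var.enable_flow_logs ? 1 : 0

  name = "${var.name}-flow-logs"
  # Allow the VPC Flow Logs service to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "vpc-flow-logs.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "flow_logs" {
  count = var.enable_flow_logs ? 1 : 0

  name = "${var.name}-flow-logs"
  role = aws_iam_role.flow_logs[0].id
  # Permissions the flow log needs to write into CloudWatch Logs
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ]
      Resource = "*"
    }]
  })
}
```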
This wrapper approach gives you several advantages:
- Upstream compatibility: The base module upgrades independently of your customizations
- Separation of concerns: Your organizational requirements live in your wrapper, not interleaved with upstream code
- Testing isolation: You can test upstream module changes without immediately impacting your custom logic
I use this pattern extensively for adding organizational standards to community modules. Our security team requires VPC flow logs on all networks. Rather than forking every network module to add flow logs, our wrapper adds them automatically. When the upstream module adds new features or fixes bugs, we upgrade the version without touching our flow log configuration.
The composition pattern works best when your customizations are additive—adding resources, tags, or policies rather than modifying the module’s internal behavior.
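Consumers then call the wrapper instead of the upstream module directly. A sketch, assuming the wrapper lives at `modules/our-vpc` and exposes the variables shown above (all values here are illustrative):

```hcl
module "vpc" {
  source = "../modules/our-vpc"

  name                 = "platform-vpc"
  cidr                 = "10.20.0.0/16"
  availability_zones   = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnet_cidrs = ["10.20.1.0/24", "10.20.2.0/24", "10.20.3.0/24"]
  public_subnet_cidrs  = ["10.20.101.0/24", "10.20.102.0/24", "10.20.103.0/24"]
  enable_nat           = true
  enable_vpn           = false
  enable_flow_logs     = true

  # Organizational tags applied on top of whatever the base module tags
  additional_tags = {
    CostCenter = "platform"
    Compliance = "pci"
  }
}
```

Because callers depend only on the wrapper's interface, a future upstream version bump is a one-line change inside `modules/our-vpc` that no consumer has to know about.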
Pattern 2: Deploy Overlay Resources for Modification
Sometimes you need to modify resources the module creates, not just add new ones. This is where the composition pattern breaks down. You can’t easily “wrap” a security group rule into the module’s security group from outside the module.
The overlay pattern addresses this by operating on the resources the module creates after the fact—for example, with terraform_data resources and provisioners:
module "rds" {
  source  = "terraform-aws-modules/rds/aws"
  version = "6.4.0"

  identifier        = var.db_identifier
  engine            = "postgres"
  allocated_storage = 100
  instance_class    = "db.t3.medium"
}

# Overlay: Add custom parameter to parameter group
resource "terraform_data" "add_custom_parameters" {
  input = {
    parameter_group = module.rds.db_parameter_group_id
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws rds modify-db-parameter-group \
        --db-parameter-group-name ${module.rds.db_parameter_group_id} \
        --parameters "ParameterName=log_statement,ParameterValue=all,ApplyMethod=immediate"
    EOT
  }
}
I’m not advocating for local-exec provisioners in production infrastructure—they’re fragile and hide state outside Terraform. But this illustrates the concept: you can supplement module behavior by operating on the resources it creates.
A more robust approach uses data sources to reference module outputs and creates additional resources that modify behavior:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.21.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.28"
  vpc_id          = var.vpc_id
  subnet_ids      = var.subnet_ids
}

# Overlay: Add custom security group rule
resource "aws_security_group_rule" "custom_access" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = var.admin_cidr_blocks
  security_group_id = module.eks.cluster_security_group_id
  description       = "Custom admin access"
}

# Overlay: Add IAM policy to node role
# Note: aws_iam_policy.custom_node_policy must be defined separately
resource "aws_iam_role_policy_attachment" "custom_node_policy" {
  role       = module.eks.eks_managed_node_groups["main"].iam_role_name
  policy_arn = aws_iam_policy.custom_node_policy.arn
}
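For completeness, the referenced aws_iam_policy.custom_node_policy sits alongside the overlay. The policy content below is purely illustrative—substitute whatever permissions your nodes actually need:

```hcl
resource "aws_iam_policy" "custom_node_policy" {
  name = "${var.cluster_name}-custom-node-policy"

  # Illustrative only: grant nodes read access to a hypothetical config bucket
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = "arn:aws:s3:::example-node-config/*"
    }]
  })
}
```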
This pattern works when you need to modify resources the module creates but don’t want to fork the entire module to change one security group rule. The module handles the complex orchestration; your overlay handles your specific requirements.
Pattern 3: Configure Dynamic Locals for Computed Values
Some customizations require computing values based on module outputs before creating additional resources. This is common when you’re standardizing security policies or compliance controls across multiple modules.
module "s3_buckets" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "3.15.1"

  for_each = var.buckets

  bucket = each.value.name
  # Note: use the aws_s3_bucket_acl resource separately for AWS provider 4.0+
}
locals {
  # Compute additional S3 bucket policy settings based on module outputs
  bucket_policies = {
    for key, bucket in module.s3_buckets : key => {
      bucket_id          = bucket.s3_bucket_id
      bucket_arn         = bucket.s3_bucket_arn
      enforce_ssl        = true
      require_mfa_delete = contains(var.production_buckets, key)
      enable_versioning  = contains(var.critical_data_buckets, key)
    }
  }

  # SSL enforcement statement template (per-bucket Resource is merged in below)
  ssl_policy_statement = {
    Sid       = "EnforceSSLOnly"
    Effect    = "Deny"
    Principal = "*"
    Action    = "s3:*"
    Condition = {
      Bool = {
        "aws:SecureTransport" = "false"
      }
    }
  }

  # MFA delete statement template (per-bucket Resource is merged in below)
  mfa_policy_statement = {
    Sid       = "RequireMFADelete"
    Effect    = "Deny"
    Principal = "*"
    Action    = "s3:DeleteObject"
    Condition = {
      BoolIfExists = {
        "aws:MultiFactorAuthPresent" = "false"
      }
    }
  }
}

resource "aws_s3_bucket_policy" "enforce_standards" {
  for_each = local.bucket_policies

  bucket = each.value.bucket_id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = concat(
      # A bucket policy's Resource must reference the bucket it is attached to,
      # so each template gets the bucket's own ARN merged in
      each.value.enforce_ssl ? [
        merge(local.ssl_policy_statement, {
          Resource = [each.value.bucket_arn, "${each.value.bucket_arn}/*"]
        })
      ] : [],
      each.value.require_mfa_delete ? [
        merge(local.mfa_policy_statement, {
          Resource = ["${each.value.bucket_arn}/*"]
        })
      ] : []
    )
  })
}
I use this pattern when organizational policy varies based on resource classification. Production databases get different backup policies than development databases. Customer data buckets get different encryption requirements than logging buckets. The module creates the base resources; locals compute the appropriate policies based on resource metadata.
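The enable_versioning flag computed in the locals above can drive additional overlay resources in the same way. A sketch:

```hcl
# Enable versioning only on buckets flagged as critical data
resource "aws_s3_bucket_versioning" "critical_data" {
  for_each = {
    for key, policy in local.bucket_policies : key => policy
    if policy.enable_versioning
  }

  bucket = each.value.bucket_id

  versioning_configuration {
    status = "Enabled"
  }
}
```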
Pattern 4: Transform Configuration Before Deployment
Sometimes the limitation isn’t the module itself but how you need to transform input data before passing it to the module. This comes up frequently when integrating Terraform with existing systems or API responses.
#!/usr/bin/env python3
import json
import sys


def transform_vpc_config(input_config):
    """
    Transform API-provided VPC configuration to Terraform module format.

    Our service discovery API returns VPC configs in a different schema
    than the terraform-aws-vpc module expects. Rather than fork the module,
    we transform the input.
    """
    terraform_vars = {
        "name": input_config["vpc_name"],
        "cidr": input_config["cidr_block"],
        "availability_zones": input_config["azs"],
        "private_subnet_cidrs": [
            subnet["cidr"]
            for subnet in input_config["subnets"]
            if subnet["type"] == "private"
        ],
        "public_subnet_cidrs": [
            subnet["cidr"]
            for subnet in input_config["subnets"]
            if subnet["type"] == "public"
        ],
        "enable_nat_gateway": input_config.get("nat_gateway", {}).get("enabled", True),
        "single_nat_gateway": input_config.get("nat_gateway", {}).get("single", False),
    }
    return terraform_vars


if __name__ == "__main__":
    input_data = json.load(sys.stdin)
    output_data = transform_vpc_config(input_data)
    print(json.dumps(output_data, indent=2))
You can invoke this preprocessing step in your CI/CD pipeline before running Terraform:
# Transform API config to Terraform variables
curl -s https://api.internal/vpc-configs/${VPC_ID} | \
python3 scripts/transform_vpc_config.py > terraform.tfvars.json
# Apply Terraform with transformed variables
terraform apply -var-file=terraform.tfvars.json
This pattern shines when you’re integrating Terraform into larger automation systems. Your internal tooling might represent infrastructure differently than the module expects. Rather than forking the module to match your internal schema, transform the data externally and keep the module unchanged.
Pattern 5: Deploy Multi-Account Infrastructure with Provider Aliases
A subtle but powerful customization technique uses provider aliases to deploy the same module across different AWS accounts or regions without modification:
provider "aws" {
  alias  = "production"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/TerraformAdmin"
  }
}

provider "aws" {
  alias  = "disaster_recovery"
  region = "us-west-2"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/TerraformAdmin"
  }
}

module "primary_vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0"

  providers = {
    aws = aws.production
  }

  name = "production-vpc"
  cidr = "10.0.0.0/16"
  # ... other config
}

module "dr_vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0"

  providers = {
    aws = aws.disaster_recovery
  }

  name = "dr-vpc"
  cidr = "10.1.0.0/16"
  # ... other config
}
This isn’t exactly customization in the traditional sense, but it solves a common problem that drives teams toward forking: deploying the same infrastructure across multiple environments that require different provider configurations.
Decide When Forking Makes Sense
These patterns cover most customization needs, but forking still makes sense in specific scenarios:
- The module is unmaintained: If the upstream project has been abandoned and you need critical bug fixes or features, forking gives you ownership
- Fundamental architectural differences: When your requirements diverge so significantly from the module’s design that wrapping or overlaying becomes more complex than forking
- Compliance requirements prohibit external dependencies: Some organizations require source code review and cannot use external modules directly
Even when forking is necessary, consider these practices to reduce the maintenance burden:
- Maintain a clean diff between your fork and upstream (automate merge tracking)
- Document every customization with GitHub issues linking to upstream
- Regularly rebase on upstream releases rather than merging to keep history clean
- Consider contributing your changes back upstream if they’re generally useful
Implement Your Migration Strategy
Don’t refactor all your forked modules at once. I’ve seen teams attempt this and burn weeks of engineering time with nothing deployed. Instead, pick the most painful fork—the one that breaks most often or requires the most frequent upstream merges—and apply one of these patterns as a proof of concept.
Start with composition (Pattern 1) since it’s the most straightforward. If that doesn’t work, try overlays (Pattern 2). The more complex patterns (3-5) make sense once you have experience with the basics and understand the trade-offs.
Track metrics before and after:
- Time spent on module maintenance
- Frequency of upstream version updates
- Incidents related to module configuration drift
These metrics justify the refactoring work and guide which modules to tackle next.
Build Evolving Infrastructure with Terraform Modules
Module customization patterns aren’t just about avoiding forks. They’re about building infrastructure that evolves with the ecosystem rather than against it. When you fork a module, you’re betting that your customizations are more valuable than every future improvement from the upstream maintainers. That’s rarely true.
The patterns I’ve shared keep you connected to the upstream project while giving you the flexibility to meet your organization’s requirements. You get security patches automatically. You benefit from new features without integration work. Your infrastructure improves as the ecosystem improves.
Most importantly, you spend less time maintaining infrastructure code and more time building the systems that deliver value to your users. That’s the ultimate goal of any infrastructure automation effort.