Terraform is the leading infrastructure-as-code tool. Following proven patterns keeps infrastructure deployments maintainable, scalable, and reliable.
Project Structure Patterns
1. Flat Structure (Small Projects)
For simple, single-environment projects.
terraform/
├── main.tf
├── variables.tf
├── outputs.tf
├── providers.tf
├── terraform.tfvars
└── .terraform.lock.hcl
Best For
- Learning/experimentation
- Single environment
- Small teams
- Simple infrastructure
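In this layout everything lives in a single root module. A minimal sketch of what providers.tf and main.tf might contain (the provider version, project variable, and example resource are illustrative):

```hcl
# providers.tf
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.region
}

# main.tf — one example resource; real projects add more here
resource "aws_s3_bucket" "artifacts" {
  bucket = "${var.project}-artifacts"
}
```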
2. Environment-Based Structure
Separate directories per environment.
terraform/
├── dev/
│ ├── main.tf
│ ├── variables.tf
│ ├── terraform.tfvars
│ └── backend.tf
├── staging/
│ ├── main.tf
│ ├── variables.tf
│ ├── terraform.tfvars
│ └── backend.tf
├── prod/
│ ├── main.tf
│ ├── variables.tf
│ ├── terraform.tfvars
│ └── backend.tf
└── modules/
├── vpc/
├── eks/
└── rds/
Best For
- Multiple environments
- Environment-specific configurations
- Clear separation of concerns
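The point of this layout is that each environment has its own isolated state. A sketch of what dev/backend.tf might look like (bucket and table names are placeholders):

```hcl
# dev/backend.tf
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # placeholder bucket name
    key            = "dev/terraform.tfstate" # unique key per environment
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
```

Staging and prod use the same block with their own `key`, so a mistake applied in one environment cannot touch another environment's state.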
3. Component-Based Structure
Organize by infrastructure component.
terraform/
├── networking/
│ ├── vpc/
│ ├── security-groups/
│ └── load-balancers/
├── compute/
│ ├── ec2/
│ ├── ecs/
│ └── lambda/
├── data/
│ ├── rds/
│ ├── s3/
│ └── dynamodb/
└── modules/
Best For
- Large infrastructure
- Multiple teams
- Component ownership
- Microservices architecture
4. Terragrunt Structure
Using Terragrunt for DRY configurations.
infrastructure/
├── terragrunt.hcl # Root config
├── _envcommon/ # Shared configs
│ ├── vpc.hcl
│ ├── eks.hcl
│ └── rds.hcl
├── dev/
│ ├── terragrunt.hcl
│ ├── vpc/
│ │ └── terragrunt.hcl
│ ├── eks/
│ │ └── terragrunt.hcl
│ └── rds/
│ └── terragrunt.hcl
├── prod/
│ └── ...
└── modules/
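A leaf configuration typically includes the root config and points at a shared module. A minimal sketch of dev/vpc/terragrunt.hcl (the module path and inputs are illustrative):

```hcl
# dev/vpc/terragrunt.hcl
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../modules//vpc"
}

inputs = {
  name       = "dev"
  cidr_block = "10.10.0.0/16"
}
```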
Module Patterns
1. Basic Module Structure
modules/vpc/
├── main.tf # Resources
├── variables.tf # Input variables
├── outputs.tf # Output values
├── versions.tf # Provider versions
└── README.md # Documentation
Example: VPC Module
modules/vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block           = var.cidr_block
  enable_dns_hostnames = var.enable_dns_hostnames
  enable_dns_support   = var.enable_dns_support

  tags = merge(
    var.tags,
    {
      Name = var.name
    }
  )
}

resource "aws_subnet" "public" {
  count = length(var.public_subnet_cidrs)

  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = merge(
    var.tags,
    {
      Name = "${var.name}-public-${count.index + 1}"
      Tier = "Public"
    }
  )
}
modules/vpc/variables.tf
variable "name" {
  description = "Name prefix for VPC resources"
  type        = string
}

variable "cidr_block" {
  description = "CIDR block for VPC"
  type        = string

  validation {
    condition     = can(cidrhost(var.cidr_block, 0))
    error_message = "Must be a valid CIDR block."
  }
}

variable "public_subnet_cidrs" {
  description = "List of public subnet CIDR blocks"
  type        = list(string)
  default     = []
}

variable "availability_zones" {
  description = "List of availability zones"
  type        = list(string)
}

variable "tags" {
  description = "Additional tags for resources"
  type        = map(string)
  default     = {}
}

variable "enable_dns_hostnames" {
  description = "Enable DNS hostnames in VPC"
  type        = bool
  default     = true
}

variable "enable_dns_support" {
  description = "Enable DNS support in VPC"
  type        = bool
  default     = true
}
modules/vpc/outputs.tf
output "vpc_id" {
  description = "ID of the VPC"
  value       = aws_vpc.main.id
}

output "vpc_cidr" {
  description = "CIDR block of the VPC"
  value       = aws_vpc.main.cidr_block
}

output "public_subnet_ids" {
  description = "IDs of public subnets"
  value       = aws_subnet.public[*].id
}
2. Composition Pattern
Compose complex infrastructure from smaller modules.
# root/main.tf
module "vpc" {
  source = "./modules/vpc"

  name                = "production"
  cidr_block          = "10.0.0.0/16"
  public_subnet_cidrs = ["10.0.1.0/24", "10.0.2.0/24"]
  availability_zones  = ["us-east-1a", "us-east-1b"]
}

module "eks" {
  source = "./modules/eks"

  cluster_name    = "production-cluster"
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.public_subnet_ids
  cluster_version = "1.28"
}

module "rds" {
  source = "./modules/rds"

  identifier     = "production-db"
  engine         = "postgres"
  engine_version = "15.3"
  vpc_id         = module.vpc.vpc_id
  subnet_ids     = module.vpc.private_subnet_ids
}
3. Feature Toggle Pattern
Enable/disable features with variables.
variable "enable_vpc_flow_logs" {
  description = "Enable VPC Flow Logs"
  type        = bool
  default     = false
}

variable "enable_nat_gateway" {
  description = "Enable NAT Gateway for private subnets"
  type        = bool
  default     = true
}

resource "aws_flow_log" "vpc" {
  count = var.enable_vpc_flow_logs ? 1 : 0

  vpc_id          = aws_vpc.main.id
  traffic_type    = "ALL"
  iam_role_arn    = aws_iam_role.flow_logs[0].arn
  log_destination = aws_cloudwatch_log_group.flow_logs[0].arn
}

resource "aws_nat_gateway" "main" {
  count = var.enable_nat_gateway ? length(var.public_subnet_cidrs) : 0

  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
}
State Management Patterns
1. Remote State Backend
S3 Backend with DynamoDB Locking
backend.tf
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "production/vpc/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"

    # Keep backend safety checks enabled (these are the defaults)
    skip_region_validation      = false
    skip_credentials_validation = false
    skip_metadata_api_check     = false
  }
}
Terraform Cloud Backend
terraform {
  cloud {
    organization = "my-org"

    workspaces {
      name = "production-infrastructure"
    }
  }
}
2. Remote State Data Source
Reference state from other Terraform projects.
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"
    key    = "production/vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.medium"

  # Reference outputs from remote state
  vpc_security_group_ids = [data.terraform_remote_state.vpc.outputs.app_security_group_id]
  subnet_id              = data.terraform_remote_state.vpc.outputs.private_subnet_ids[0]
}
3. State Locking
Prevent concurrent modifications.
# DynamoDB table for state locking
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name = "Terraform State Lock Table"
  }
}
Variable Patterns
1. Default Values with Validation
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"

  validation {
    condition     = contains(["t3.micro", "t3.small", "t3.medium"], var.instance_type)
    error_message = "Instance type must be t3.micro, t3.small, or t3.medium."
  }
}

variable "environment" {
  description = "Environment name"
  type        = string

  validation {
    condition     = can(regex("^(dev|staging|prod)$", var.environment))
    error_message = "Environment must be dev, staging, or prod."
  }
}
2. Complex Variable Types
variable "security_groups" {
  description = "Security group configurations"
  type = map(object({
    description = string
    ingress = list(object({
      from_port   = number
      to_port     = number
      protocol    = string
      cidr_blocks = list(string)
    }))
  }))
}

# Usage
security_groups = {
  web = {
    description = "Web server security group"
    ingress = [
      {
        from_port   = 80
        to_port     = 80
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      },
      {
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    ]
  }
}
3. Local Values for Computation
locals {
  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    Project     = var.project_name
  }

  # Computed values
  az_count        = length(data.aws_availability_zones.available.names)
  private_subnets = [for i in range(local.az_count) : cidrsubnet(var.vpc_cidr, 8, i)]
  public_subnets  = [for i in range(local.az_count) : cidrsubnet(var.vpc_cidr, 8, i + 100)]

  # Conditional logic (the expression is already a bool; no ternary needed)
  create_nat_gateway = var.environment == "prod"
}

resource "aws_subnet" "private" {
  count = local.az_count

  vpc_id            = aws_vpc.main.id
  cidr_block        = local.private_subnets[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = merge(
    local.common_tags,
    {
      Name = "${var.project_name}-private-${count.index + 1}"
    }
  )
}
Resource Patterns
1. Count vs For_Each
Count - For Identical Resources
resource "aws_instance" "web" {
  count         = 3
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name = "web-${count.index + 1}"
  }
}
For_Each - For Named Resources
variable "instances" {
  type = map(object({
    instance_type = string
    ami           = string
  }))
}

resource "aws_instance" "app" {
  for_each = var.instances

  ami           = each.value.ami
  instance_type = each.value.instance_type

  tags = {
    Name = each.key
  }
}

# Usage
instances = {
  web1 = {
    instance_type = "t3.micro"
    ami           = "ami-12345678"
  }
  api1 = {
    instance_type = "t3.small"
    ami           = "ami-87654321"
  }
}
2. Dynamic Blocks
Generate repeating nested blocks.
resource "aws_security_group" "app" {
  name        = "app-sg"
  description = "Application security group"
  vpc_id      = var.vpc_id

  dynamic "ingress" {
    for_each = var.ingress_rules

    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
      description = ingress.value.description
    }
  }
}

# Usage
ingress_rules = [
  {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTP"
  },
  {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTPS"
  }
]
3. Lifecycle Rules
Control resource behavior.
resource "aws_instance" "app" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type

  lifecycle {
    # Create new before destroying old
    create_before_destroy = true

    # Prevent accidental deletion
    prevent_destroy = true

    # Ignore changes to these attributes
    ignore_changes = [
      ami,
      user_data,
      tags["LastModified"],
    ]
  }
}
Data Source Patterns
1. Dynamic Data Lookup
# Latest AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Available AZs
data "aws_availability_zones" "available" {
  state = "available"
}

# Current AWS region
data "aws_region" "current" {}

# Current AWS account
data "aws_caller_identity" "current" {}
2. External Data Source
data "external" "git_info" {
  program = ["bash", "${path.module}/scripts/git-info.sh"]
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    GitCommit = data.external.git_info.result.commit
    GitBranch = data.external.git_info.result.branch
  }
}
scripts/git-info.sh
#!/bin/bash
echo "{\"commit\":\"$(git rev-parse HEAD)\",\"branch\":\"$(git rev-parse --abbrev-ref HEAD)\"}"
Workspace Patterns
1. Environment-Based Workspaces
# Variables change based on workspace
locals {
  environment_config = {
    dev = {
      instance_type  = "t3.micro"
      instance_count = 1
    }
    staging = {
      instance_type  = "t3.small"
      instance_count = 2
    }
    prod = {
      instance_type  = "t3.medium"
      instance_count = 3
    }
  }

  config = local.environment_config[terraform.workspace]
}

resource "aws_instance" "app" {
  count         = local.config.instance_count
  ami           = data.aws_ami.ubuntu.id
  instance_type = local.config.instance_type

  tags = {
    Name        = "app-${terraform.workspace}-${count.index + 1}"
    Environment = terraform.workspace
  }
}
2. Workspace Commands
# Create workspace
terraform workspace new dev
# List workspaces
terraform workspace list
# Switch workspace
terraform workspace select prod
# Show current workspace
terraform workspace show
# Delete workspace
terraform workspace delete dev
Testing Patterns
1. Terraform Validate
# Validate configuration
terraform validate
# Format check
terraform fmt -check -recursive
# Plan without applying
terraform plan -out=tfplan
2. Terratest (Go Testing)
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

func TestVPCCreation(t *testing.T) {
	terraformOptions := &terraform.Options{
		TerraformDir: "../modules/vpc",
		Vars: map[string]interface{}{
			"name":       "test-vpc",
			"cidr_block": "10.0.0.0/16",
		},
	}

	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	vpcID := terraform.Output(t, terraformOptions, "vpc_id")
	assert.NotEmpty(t, vpcID)
}
3. Kitchen-Terraform
.kitchen.yml
driver:
  name: terraform
provisioner:
  name: terraform
platforms:
  - name: aws
suites:
  - name: default
    driver:
      root_module_directory: test/fixtures/default
    verifier:
      name: terraform
      systems:
        - name: default
          backend: aws
CI/CD Patterns
1. GitHub Actions Workflow
.github/workflows/terraform.yml
name: Terraform

on:
  pull_request:
    paths:
      - 'terraform/**'
  push:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.6.0

      - name: Terraform Init
        run: terraform init
        working-directory: terraform

      - name: Terraform Format
        run: terraform fmt -check
        working-directory: terraform

      - name: Terraform Validate
        run: terraform validate
        working-directory: terraform

      - name: Terraform Plan
        run: terraform plan -out=tfplan
        working-directory: terraform
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main'
        run: terraform apply tfplan
        working-directory: terraform
        env:
          # Apply also needs credentials, not just plan
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
2. Atlantis Pull Request Automation
atlantis.yaml
version: 3
automerge: false
delete_source_branch_on_merge: true

projects:
  - name: production-vpc
    dir: terraform/production/vpc
    workspace: default
    terraform_version: v1.6.0
    autoplan:
      when_modified: ["*.tf", "*.tfvars"]
      enabled: true
    apply_requirements: [approved, mergeable]
    workflow: default

workflows:
  default:
    plan:
      steps:
        - init
        - plan:
            extra_args: ["-lock=false"]
    apply:
      steps:
        - apply
Security Patterns
1. Secrets Management
Using AWS Secrets Manager
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "production/database/password"
}

resource "aws_db_instance" "main" {
  identifier = "production-db"
  engine     = "postgres"
  username   = "admin"
  password   = data.aws_secretsmanager_secret_version.db_password.secret_string
  # ... other config
}
Using Environment Variables
variable "db_password" {
  description = "Database password"
  type        = string
  sensitive   = true
}

# Set via environment variable:
# export TF_VAR_db_password="secret"
2. Sensitive Values
variable "api_key" {
  description = "API key for external service"
  type        = string
  sensitive   = true
}

output "database_endpoint" {
  description = "Database endpoint"
  value       = aws_db_instance.main.endpoint
}

output "database_password" {
  description = "Database password"
  value       = aws_db_instance.main.password
  sensitive   = true # Hidden from CLI output
}
Performance Patterns
1. Targeted Operations
# Target specific resource
terraform apply -target=aws_instance.web
# Target module
terraform apply -target=module.vpc
# Destroy specific resource
terraform destroy -target=aws_instance.temp
2. Parallelism Control
# Increase parallelism (default: 10)
terraform apply -parallelism=20
# Reduce for rate-limited APIs
terraform apply -parallelism=5
3. Resource Graph
# Generate dependency graph
terraform graph | dot -Tpng > graph.png
# Show resource dependencies
terraform state list
terraform state show aws_instance.web
Best Practices
- Use Modules: Reusable, tested, versioned components
- Remote State: Never commit state files to Git
- State Locking: Prevent concurrent modifications
- Pin Versions: Lock provider and module versions
- Validate Input: Use variable validation
- Sensitive Data: Mark sensitive variables
- Consistent Naming: Follow naming conventions
- Tagging Strategy: Tag all resources consistently
- Documentation: Document modules and complex logic
- Testing: Validate before apply, test in non-prod first
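"Pin Versions" in practice usually means a versions.tf plus exact module versions. A sketch (the version numbers and the registry module shown are illustrative):

```hcl
# versions.tf — constrain Terraform and provider versions
terraform {
  required_version = ">= 1.5.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # allow minor/patch updates, never a new major
    }
  }
}

# Pin an exact version for registry modules
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"
  # ...
}
```

The .terraform.lock.hcl file generated by terraform init records the exact provider versions selected and should be committed alongside these constraints.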
Common Anti-Patterns to Avoid
- Hardcoded Values: Use variables instead
- No State Backend: Always use remote state
- Manual State Edits: Use terraform state commands
- No Module Versioning: Pin module versions
- Monolithic Configs: Break into smaller modules
- No Workspace Strategy: Plan for multiple environments
- Ignored Drift: Regularly check for drift
- No Destroy Plan: Review before destroying
- Inadequate Testing: Test changes in lower environments
- Poor Documentation: Document “why” not just “what”
Conclusion
Effective Terraform patterns lead to:
- Maintainable infrastructure code
- Reliable deployments
- Team collaboration
- Reduced errors
- Faster iterations
Start simple, adopt patterns as needed, and continuously refine based on team feedback and project requirements.