Terraform Tutorial: Infrastructure as Code for Beginners
Learn Terraform from scratch. Build and manage cloud infrastructure with code using practical AWS examples, modules, and best practices.
Moshiour Rahman
What is Terraform?
Terraform is an open-source Infrastructure as Code (IaC) tool by HashiCorp that lets you define cloud resources in human-readable configuration files. You can version, reuse, and share your infrastructure configurations.
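For a sense of what that looks like, here is a minimal illustration of the declarative style (the bucket name is hypothetical, and the AWS provider is assumed to be configured):
# You declare the desired end state; Terraform works out the API calls needed to reach it.
resource "aws_s3_bucket" "example" {
  bucket = "my-unique-example-bucket"
}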
Why Terraform?
| Feature | Benefit |
|---|---|
| Declarative | Define what you want, not how |
| Provider agnostic | Works with AWS, Azure, GCP, etc. |
| State management | Tracks resource changes |
| Plan before apply | Preview changes safely |
| Modular | Reusable components |
Getting Started
Installation
# macOS
brew install terraform
# Ubuntu/Debian
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
# Verify
terraform version
AWS Configuration
# Install AWS CLI
brew install awscli
# Configure credentials
aws configure
# Enter: Access Key ID, Secret Access Key, Region, Output format
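Terraform's AWS provider reads credentials from the standard AWS locations: environment variables, the shared credentials file written by aws configure, or an attached IAM role. If you keep several named profiles, you can point the provider at one explicitly; a small sketch (the profile name is hypothetical):
provider "aws" {
  region  = "us-east-1"
  profile = "my-dev-profile" # hypothetical named profile from ~/.aws/credentials
}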
Your First Terraform Configuration
Project Structure
my-infrastructure/
├── main.tf # Main configuration
├── variables.tf # Input variables
├── outputs.tf # Output values
├── providers.tf # Provider configuration
└── terraform.tfvars # Variable values
providers.tf
terraform {
required_version = ">= 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = var.aws_region
default_tags {
tags = {
Environment = var.environment
Project = var.project_name
ManagedBy = "Terraform"
}
}
}
variables.tf
variable "aws_region" {
description = "AWS region"
type = string
default = "us-east-1"
}
variable "environment" {
description = "Environment name"
type = string
default = "dev"
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Environment must be dev, staging, or prod."
}
}
variable "project_name" {
description = "Project name"
type = string
}
variable "vpc_cidr" {
description = "VPC CIDR block"
type = string
default = "10.0.0.0/16"
}
variable "instance_type" {
description = "EC2 instance type"
type = string
default = "t3.micro"
}
main.tf
# VPC
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "${var.project_name}-vpc"
}
}
# Internet Gateway
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "${var.project_name}-igw"
}
}
# Public Subnet
resource "aws_subnet" "public" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
map_public_ip_on_launch = true
tags = {
Name = "${var.project_name}-public-${count.index + 1}"
Type = "public"
}
}
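# cidrsubnet() carves smaller networks out of the VPC block: with the default
# 10.0.0.0/16 and 8 extra bits, index 0 yields 10.0.0.0/24 and index 1 yields 10.0.1.0/24.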
# Private Subnet
resource "aws_subnet" "private" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index + 10)
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "${var.project_name}-private-${count.index + 1}"
Type = "private"
}
}
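# Note: these private subnets have no route to the internet in this example.
# Add a NAT gateway and a private route table if instances there need outbound access.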
# Route Table
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = "${var.project_name}-public-rt"
}
}
# Route Table Association
resource "aws_route_table_association" "public" {
count = length(aws_subnet.public)
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
# Security Group
resource "aws_security_group" "web" {
name = "${var.project_name}-web-sg"
description = "Security group for web servers"
vpc_id = aws_vpc.main.id
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTPS"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Restrict in production!
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.project_name}-web-sg"
}
}
# EC2 Instance
resource "aws_instance" "web" {
ami = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
subnet_id = aws_subnet.public[0].id
vpc_security_group_ids = [aws_security_group.web.id]
user_data = <<-EOF
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello from Terraform!</h1>" > /var/www/html/index.html
EOF
tags = {
Name = "${var.project_name}-web-server"
}
}
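# The user_data script runs once at first boot via cloud-init; re-running
# "terraform apply" does not re-execute it on an existing instance.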
# Data Sources
data "aws_availability_zones" "available" {
state = "available"
}
data "aws_ami" "amazon_linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm-*-x86_64-gp2"]
}
}
outputs.tf
output "vpc_id" {
description = "VPC ID"
value = aws_vpc.main.id
}
output "public_subnet_ids" {
description = "Public subnet IDs"
value = aws_subnet.public[*].id
}
output "web_server_public_ip" {
description = "Web server public IP"
value = aws_instance.web.public_ip
}
output "web_server_url" {
description = "Web server URL"
value = "http://${aws_instance.web.public_ip}"
}
terraform.tfvars
aws_region = "us-east-1"
environment = "dev"
project_name = "my-app"
vpc_cidr = "10.0.0.0/16"
instance_type = "t3.micro"
Terraform Commands
# Initialize working directory
terraform init
# Format code
terraform fmt
# Validate configuration
terraform validate
# Plan changes (dry run)
terraform plan
# Apply changes
terraform apply
# Apply without confirmation
terraform apply -auto-approve
# Destroy infrastructure
terraform destroy
# Show current state
terraform show
# List resources in state
terraform state list
# Import existing resource
terraform import aws_instance.web i-1234567890abcdef0
Terraform Modules
Creating a Module
modules/
└── vpc/
├── main.tf
├── variables.tf
└── outputs.tf
# modules/vpc/variables.tf
variable "vpc_cidr" {
type = string
}
variable "project_name" {
type = string
}
variable "public_subnet_count" {
type = number
default = 2
}
# modules/vpc/main.tf
resource "aws_vpc" "this" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true
tags = {
Name = "${var.project_name}-vpc"
}
}
resource "aws_subnet" "public" {
count = var.public_subnet_count
vpc_id = aws_vpc.this.id
cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index)
map_public_ip_on_launch = true
tags = {
Name = "${var.project_name}-public-${count.index + 1}"
}
}
# modules/vpc/outputs.tf
output "vpc_id" {
value = aws_vpc.this.id
}
output "public_subnet_ids" {
value = aws_subnet.public[*].id
}
Using a Module
# main.tf
module "vpc" {
source = "./modules/vpc"
vpc_cidr = "10.0.0.0/16"
project_name = "my-app"
public_subnet_count = 3
}
# Use module outputs
resource "aws_instance" "web" {
subnet_id = module.vpc.public_subnet_ids[0]
# ...
}
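Modules do not have to live in your repository; the source argument also accepts Git URLs and the public Terraform Registry. For example, the community VPC module can be pulled in like this (a sketch; check the module's documentation for its current inputs and supported versions):
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "my-app"
  cidr = "10.0.0.0/16"
}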
State Management
Remote State with S3
# backend.tf
terraform {
backend "s3" {
bucket = "my-terraform-state-bucket"
key = "prod/terraform.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-locks"
}
}
Create the S3 bucket and DynamoDB table in a separate configuration and apply it first, since the backend must exist before terraform init can use it:
# state-resources/main.tf
resource "aws_s3_bucket" "terraform_state" {
bucket = "my-terraform-state-bucket"
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_versioning" "terraform_state" {
bucket = aws_s3_bucket.terraform_state.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
bucket = aws_s3_bucket.terraform_state.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
}
}
}
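# Optionally (a common hardening step, not part of the original example):
# block all public access to the state bucket.
resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket                  = aws_s3_bucket.terraform_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}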
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
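Other configurations can read this state's outputs through the terraform_remote_state data source; a sketch, assuming the bucket and key match the backend above:
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state-bucket"
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
# Then reference an output from that state, e.g.
# data.terraform_remote_state.network.outputs.vpc_id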
Workspaces
Manage multiple environments:
# Create workspace
terraform workspace new staging
terraform workspace new production
# List workspaces
terraform workspace list
# Switch workspace
terraform workspace select staging
# Show current workspace
terraform workspace show
# Use workspace in configuration
locals {
env_config = {
default = {
instance_type = "t3.micro"
instance_count = 1
}
staging = {
instance_type = "t3.small"
instance_count = 2
}
production = {
instance_type = "t3.medium"
instance_count = 3
}
}
config = local.env_config[terraform.workspace]
}
resource "aws_instance" "web" {
count = local.config.instance_count
instance_type = local.config.instance_type
# ...
}
Best Practices
1. Use Variables and Locals
locals {
common_tags = {
Environment = var.environment
Project = var.project_name
ManagedBy = "Terraform"
}
}
resource "aws_instance" "web" {
tags = merge(local.common_tags, {
Name = "web-server"
})
}
2. Use Data Sources
# Instead of hardcoding AMI IDs
data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"] # Canonical
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
}
}
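Then reference it instead of a hardcoded ID (the resource name here is illustrative):
resource "aws_instance" "app" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
}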
3. Use Lifecycle Rules
resource "aws_instance" "web" {
# ...
lifecycle {
create_before_destroy = true
prevent_destroy = false
ignore_changes = [tags]
}
}
4. Sensitive Variables
variable "db_password" {
type = string
sensitive = true
}
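Values marked sensitive are redacted in plan and apply output, but they are still stored in plain text in the state file, so protect the state (for example with the encrypted S3 backend above). Supply the value outside of version control, e.g. via the TF_VAR_db_password environment variable. If you expose it as an output, mark the output sensitive as well:
output "db_password" {
  value     = var.db_password
  sensitive = true
}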
Summary
| Command | Purpose |
|---|---|
| terraform init | Initialize directory |
| terraform plan | Preview changes |
| terraform apply | Apply changes |
| terraform destroy | Remove resources |
| terraform fmt | Format code |
| terraform validate | Check syntax |
Terraform enables you to manage infrastructure reliably and consistently. Start with simple configurations and gradually adopt modules and remote state as your needs grow.