Docker Compose Tutorial: Building Multi-Container Applications
Master Docker Compose for multi-container applications. Learn to define, configure, and run complex application stacks with practical examples including web apps with databases.
Moshiour Rahman
What is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. Instead of running multiple docker run commands, you define your entire application stack in a single YAML file.
Why Docker Compose?
| Without Compose | With Compose |
|---|---|
| Multiple docker run commands | Single docker compose up |
| Manual network creation | Automatic networking |
| Complex volume management | Declarative volumes |
| Hard to reproduce | Version-controlled config |
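To make the contrast concrete, here is roughly what the manual docker run approach looks like for even a small two-container stack (the container names, network name, and images below are illustrative, not the app built later in this tutorial):
# Manual approach: create a network, then start and wire up each container by hand
docker network create app-network
docker run -d --name db --network app-network \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data postgres:15-alpine
docker run -d --name web --network app-network -p 80:80 nginx:alpine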
Docker Compose Basics
Installation
Docker Compose comes bundled with Docker Desktop. For Linux:
# Install Docker Compose plugin
sudo apt-get update
sudo apt-get install docker-compose-plugin
# Verify installation
docker compose version
Your First docker-compose.yml
version: '3.8'
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
# Start services
docker compose up -d
# View logs
docker compose logs -f
# Stop services
docker compose down
Building a Full-Stack Application
Let’s build a complete application with a React frontend, a Node.js API, a PostgreSQL database, and a Redis cache.
Project Structure
my-app/
├── docker-compose.yml
├── frontend/
│   ├── Dockerfile
│   └── src/
├── backend/
│   ├── Dockerfile
│   └── src/
└── nginx/
    └── nginx.conf
Complete docker-compose.yml
version: '3.8'

services:
  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    container_name: app-postgres
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: ${DB_USER:-postgres}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-secret}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  # Redis Cache
  redis:
    image: redis:7-alpine
    container_name: app-redis
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD:-redis123}
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  # Node.js Backend API
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: development
      args:
        NODE_ENV: development
    container_name: app-backend
    environment:
      NODE_ENV: ${NODE_ENV:-development}
      PORT: 3000
      DATABASE_URL: postgresql://${DB_USER:-postgres}:${DB_PASSWORD:-secret}@postgres:5432/myapp
      REDIS_URL: redis://:${REDIS_PASSWORD:-redis123}@redis:6379
      JWT_SECRET: ${JWT_SECRET:-your-secret-key}
    volumes:
      - ./backend:/app
      - /app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - app-network
    restart: unless-stopped

  # React Frontend
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: development
    container_name: app-frontend
    environment:
      REACT_APP_API_URL: http://localhost:3000
    volumes:
      - ./frontend:/app
      - /app/node_modules
    ports:
      - "3001:3000"
    depends_on:
      - backend
    networks:
      - app-network

  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    container_name: app-nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - backend
      - frontend
    networks:
      - app-network
    restart: unless-stopped

  # Adminer - Database Management
  adminer:
    image: adminer:latest
    container_name: app-adminer
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    networks:
      - app-network
    profiles:
      - debug

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local

networks:
  app-network:
    driver: bridge
Backend Dockerfile
# backend/Dockerfile
FROM node:20-alpine AS base
WORKDIR /app
# Development stage
FROM base AS development
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
# Build stage
FROM base AS build
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build
# Production stage
FROM base AS production
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY package*.json ./
EXPOSE 3000
USER node
CMD ["node", "dist/index.js"]
Frontend Dockerfile
# frontend/Dockerfile
FROM node:20-alpine AS base
WORKDIR /app
# Development
FROM base AS development
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
# Build
FROM base AS build
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production with Nginx
FROM nginx:alpine AS production
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Nginx Configuration
# nginx/nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream backend {
        server backend:3000;
    }

    upstream frontend {
        server frontend:3000;
    }

    server {
        listen 80;
        server_name localhost;

        # API routes
        location /api/ {
            proxy_pass http://backend/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_cache_bypass $http_upgrade;
        }

        # Frontend routes
        location / {
            proxy_pass http://frontend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
Environment Variables
Create a .env file:
# .env
NODE_ENV=development
DB_USER=postgres
DB_PASSWORD=supersecretpassword
REDIS_PASSWORD=redispassword123
JWT_SECRET=your-jwt-secret-key
Docker Compose automatically loads .env files:
services:
  backend:
    environment:
      - DATABASE_URL=postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/myapp
Essential Commands
# Start all services
docker compose up -d
# Start specific service
docker compose up -d backend
# View logs
docker compose logs -f backend
# Rebuild and start
docker compose up -d --build
# Stop all services
docker compose down
# Stop and remove volumes
docker compose down -v
# List running services
docker compose ps
# Execute command in container
docker compose exec backend npm run migrate
# Scale service
docker compose up -d --scale backend=3
# View running processes in each container
docker compose top
Advanced Features
Health Checks
services:
  api:
    image: my-api
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
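Note that the test command runs inside the container, so curl must exist in the image. Once a health check is defined, docker compose ps shows the service as healthy or unhealthy, and you can read the raw status directly (substitute your container name):
# Inspect the current health status of a container
docker inspect --format='{{.State.Health.Status}}' <container-name>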
Resource Limits
services:
  api:
    image: my-api
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
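To verify that the caps actually apply in your setup, compare live usage against the limit column:
# Live per-container usage; the MEM USAGE / LIMIT column reflects the configured cap
docker stats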
Profiles for Optional Services
services:
  # Always starts
  api:
    image: my-api

  # Only with --profile debug
  debug-tools:
    image: debug-tools
    profiles:
      - debug

  # Only with --profile monitoring
  prometheus:
    image: prom/prometheus
    profiles:
      - monitoring
# Start with debug profile
docker compose --profile debug up -d
# Start with multiple profiles
docker compose --profile debug --profile monitoring up -d
Multiple Compose Files
# Base configuration
docker-compose.yml
# Override for development
docker-compose.override.yml
# Override for production
docker-compose.prod.yml
# Development (auto-loads override)
docker compose up -d
# Production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
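As a sketch of how overrides merge, a development override usually just layers extra settings on top of the base definition; later files win for conflicting keys. The debugger port below is illustrative, not part of the stack built earlier:
# docker-compose.override.yml — loaded automatically by docker compose up
services:
  backend:
    command: npm run dev
    ports:
      - "9229:9229"   # expose the Node.js inspector (illustrative)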
Wait for Dependencies
services:
  backend:
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
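If a dependency's image does not define a health check, a common fallback is a small wait loop in the dependent service's command. This is a sketch assuming nc is available in the image (it is on Alpine-based images):
services:
  backend:
    command: sh -c "until nc -z postgres 5432; do echo 'waiting for postgres'; sleep 1; done && npm run dev"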
Production Best Practices
1. Use Specific Image Tags
# Bad
image: postgres
# Good
image: postgres:15.4-alpine
2. Don’t Store Secrets in the Compose File
services:
  api:
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
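Compose mounts each file-based secret into the container as a read-only file under /run/secrets/, so the application reads the value from that path rather than from an environment variable. You can confirm it from the host:
# The secret is exposed inside the container as a file, not an env var
docker compose exec api cat /run/secrets/db_password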
3. Use Named Volumes
volumes:
  postgres_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/postgres
4. Logging Configuration
services:
  api:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
Common Patterns
Database Initialization
services:
  postgres:
    volumes:
      - ./init-scripts:/docker-entrypoint-initdb.d:ro
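The postgres image runs these scripts only when the data directory is empty, i.e. on the first start of a fresh volume. To re-run them, drop the volume first:
# Re-run init scripts by recreating the data volume (this deletes existing data)
docker compose down -v
docker compose up -d postgres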
Hot Reload for Development
services:
  backend:
    volumes:
      - ./backend:/app
      - /app/node_modules  # Prevent overwrite
    command: npm run dev
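Recent Compose releases also support file watching as an alternative to bind mounts. This is a sketch of that approach, assuming a Compose version new enough to provide docker compose watch:
services:
  backend:
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - node_modules/
Start it with docker compose watch (or docker compose up --watch).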
Production Build
# docker-compose.prod.yml
services:
  backend:
    build:
      context: ./backend
      target: production
    restart: always

  frontend:
    build:
      context: ./frontend
      target: production
Debugging Tips
# Check configuration
docker compose config
# View container details
docker compose ps -a
# Check logs for errors
docker compose logs --tail=100 backend
# Interactive shell
docker compose exec backend sh
# Check network
docker network ls
docker network inspect myapp_app-network
Summary
| Feature | Purpose |
|---|---|
| services | Define containers |
| volumes | Persistent data |
| networks | Container communication |
| depends_on | Start order |
| healthcheck | Service health |
| profiles | Optional services |
| secrets | Sensitive data |
Docker Compose simplifies multi-container development and makes your application stack reproducible. Start with simple configurations and gradually add complexity as needed.