If you've ever dealt with the classic "it works on my machine" problem, Docker is the definitive solution. In this hands-on guide, we'll cover Docker from fundamental concepts to advanced configurations with Docker Compose, multi-stage builds, and debugging — all with examples you can run right now in your terminal.
I've been using Docker in production projects with Laravel, Node.js, and Python for over 3 years, and once you integrate it into your workflow, there's no going back.
## What is Docker and why should you use it?
Docker is a containerization platform that packages your application along with all its dependencies (operating system, libraries, configurations) into a portable unit called a container. Unlike virtual machines, containers share the host OS kernel, making them extremely lightweight and fast.
### Docker vs Virtual Machines
| Feature | Docker | Virtual Machine |
|---|---|---|
| Startup time | Seconds | Minutes |
| RAM usage | Minimal (shares kernel) | High (full OS) |
| Disk size | MBs | GBs |
| Isolation | Process-level | Full (virtualized hardware) |
| Portability | Excellent | Good |
## Core concepts
- Image: A read-only template with instructions to create a container. Think of it as a "class" in OOP.
- Container: A running instance of an image — the "object" of that class. You can run multiple containers from the same image.
- Dockerfile: A text file with step-by-step instructions to build an image. Your "recipe".
- Docker Compose: A tool for defining and running multi-container applications (e.g., app + database + cache).
- Volume: A mechanism to persist data beyond the container lifecycle. Without volumes, data is lost when you remove a container.
## Installation

Windows/macOS: Download Docker Desktop from the official website.

Linux (Ubuntu): Follow the official installation guide:

```bash
# Add Docker's official GPG key
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add Docker's official repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install Docker Engine and the Compose plugin
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add your user to the docker group (avoid using sudo)
sudo usermod -aG docker $USER
newgrp docker

# Verify installation
docker run hello-world
```
## Your first container

```bash
# Pull and run Nginx
docker run -d --name my-web -p 8080:80 nginx:alpine

# Verify it's running
docker ps
```
Open http://localhost:8080 and you'll see the Nginx welcome page.
## Essential daily commands

```bash
# List running containers
docker ps

# List ALL containers (including stopped)
docker ps -a

# Stop / start / restart
docker stop my-web
docker start my-web
docker restart my-web

# View logs (follow mode)
docker logs -f my-web

# Execute a command inside the container
docker exec -it my-web sh

# Stop and remove a container
docker stop my-web && docker rm my-web

# Clean up everything unused (stopped containers, unused images, networks)
docker system prune -a
```
## Building your own Dockerfile
Let's create a real Node.js API and containerize it. First, the project:
```bash
mkdir docker-api && cd docker-api
npm init -y && npm install express
```
Create `server.js`:

```js
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.use(express.json());

let tasks = [
  { id: 1, title: 'Learn Docker', done: false },
  { id: 2, title: 'Build a Dockerfile', done: false },
];

app.get('/api/tasks', (req, res) => {
  res.json({ total: tasks.length, data: tasks });
});

app.post('/api/tasks', (req, res) => {
  const task = { id: tasks.length + 1, title: req.body.title, done: false };
  tasks.push(task);
  res.status(201).json(task);
});

app.listen(PORT, () => console.log(`API running on port ${PORT}`));
```
Now the Dockerfile:
```dockerfile
FROM node:20-alpine

WORKDIR /app

# Copy dependency files first (leverages Docker's layer cache)
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

EXPOSE 3000

# Run as non-root user for security
USER node

CMD ["node", "server.js"]
```
And `.dockerignore`:

```
node_modules
.git
.env
Dockerfile
docker-compose.yml
README.md
```
```bash
# Build and run
docker build -t my-node-api .
docker run -d --name api -p 3000:3000 my-node-api
curl http://localhost:3000/api/tasks
```
## Multi-stage builds: smaller, more secure images
In real projects, your dev image has tools (compilers, devDependencies) you don't need in production. Multi-stage builds solve this:
```dockerfile
# --- Stage 1: Build ---
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# RUN npm run build   # For TypeScript/React projects

# --- Stage 2: Production ---
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/server.js ./
USER node
EXPOSE 3000
CMD ["node", "server.js"]
```
The final image only contains production dependencies. For TypeScript or React projects, the difference can be 800MB vs 150MB.
## Volumes: data persistence
Containers are ephemeral — delete them and all data is gone. Volumes persist data beyond the container lifecycle:
```bash
# Named volume for PostgreSQL
docker volume create pg-data

docker run -d \
  --name my-postgres \
  -e POSTGRES_PASSWORD=secret123 \
  -e POSTGRES_DB=myapp \
  -v pg-data:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16-alpine

# Data survives even if you remove the container
docker stop my-postgres && docker rm my-postgres

# Create a new container with the same volume — data is still there
docker run -d \
  --name my-postgres \
  -e POSTGRES_PASSWORD=secret123 \
  -e POSTGRES_DB=myapp \
  -v pg-data:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16-alpine
```
### Bind mounts for development
```bash
# Mount current directory so code changes reflect instantly;
# the anonymous volume shields the container's node_modules
# from being hidden by the bind mount
docker run -d --name api-dev -p 3000:3000 \
  -v "$(pwd)":/app \
  -v /app/node_modules \
  my-node-api
```
## Docker Compose: multi-container applications
Real applications aren't a single container. Docker Compose orchestrates everything with one YAML file:
```yaml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - MONGO_URI=mongodb://mongo:27017/myapp
      - REDIS_URL=redis://redis:6379
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      mongo:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  mongo:
    image: mongo:7
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db
    healthcheck:
      test: mongosh --eval "db.adminCommand('ping')" --quiet
      interval: 10s
      timeout: 5s
      retries: 3

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

volumes:
  mongo-data:
  redis-data:
```
```bash
# Essential Compose commands
docker compose up -d          # Start all services
docker compose logs -f       # Follow all logs
docker compose logs -f api   # Follow one service
docker compose down          # Stop and remove
docker compose up -d --build # Rebuild images
docker compose ps            # Check status
```
## Debugging: when things go wrong
```bash
# 1. Check container logs
docker logs my-container --tail 50

# 2. Get a shell inside the container
docker exec -it my-container sh

# 3. Inspect container configuration
docker inspect my-container

# 4. Monitor resource usage
docker stats

# 5. Copy files from/to a container
docker cp my-container:/app/error.log ./error.log
```
## Common errors and fixes

### "port is already allocated"

```bash
lsof -i :3000  # Find what's using the port
# Solution: change the host port or stop the conflicting process
```

### "no space left on device"

```bash
docker system df                  # Check Docker disk usage
docker system prune -a --volumes  # Reclaim space
```
## Production best practices
- Use Alpine images: `node:20-alpine` is ~50MB vs ~350MB for `node:20`. See available tags on Docker Hub.
- Don't run as root: Always add `USER node` in your Dockerfile.
- One process per container: Don't put your app, database, and Redis in one container.
- Use `.dockerignore`: Exclude `node_modules`, `.git`, `.env`, and test files.
- Pin image versions: Use `node:20.11-alpine` instead of `node:latest`.
- Scan for vulnerabilities: Run `docker scout quickview`. See Docker Scout docs.
- Use healthchecks: Let Docker know if your app is actually working, not just if the process is alive.
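The healthcheck advice can live in the Dockerfile itself. A sketch for the API built earlier, assuming the base image ships BusyBox `wget` (`node:20-alpine` does):

```dockerfile
# Mark the container unhealthy when the API stops answering
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/api/tasks || exit 1
```

With this in place, `docker ps` shows `healthy`/`unhealthy` next to the container status, and Compose `depends_on: condition: service_healthy` can wait on it.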
## Next steps
- Docker in CI/CD: Integrate with GitHub Actions or GitLab CI.
- Kubernetes: When you need to orchestrate hundreds of containers.
- Compose Profiles: Manage different configs (dev, test, prod) in one file.
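As a taste of profiles, a sketch of marking a debug-only service (the service and profile names are illustrative):

```yaml
services:
  api:
    build: .
  mongo-express:
    image: mongo-express
    profiles: ["debug"]  # only started when this profile is requested
```

`docker compose up -d` starts only `api`; `docker compose --profile debug up -d` also starts `mongo-express`.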
Docker fundamentally changed how we develop and deploy software. What used to take hours of manual configuration now comes down to `docker compose up`. If you're just starting, my advice: containerize your next project from day one.