Docker Tutorial for Beginners — Build, Run, and Deploy Your First Container
Docker packages your application and all its dependencies into a single image that runs identically on any machine. To get started: install Docker, write a Dockerfile that defines your environment, run docker build to create the image, and docker run to start a container. This guide covers everything from your first container to production-ready multi-stage builds.
The classic developer excuse — "it works on my machine" — exists because every machine has different versions of Node, Python, system libraries, and configurations. Docker eliminates this entirely. You define exactly what your application needs in a Dockerfile, build it into an image, and that image runs the same way everywhere — your laptop, your colleague's laptop, a CI/CD pipeline, and production servers.
What Is Docker (In Simple Terms)
Think of Docker as a lightweight virtual machine, except it is not a virtual machine. A VM runs an entire operating system on top of your OS. Docker containers share the host OS kernel and only package your application code and its dependencies. This makes containers start in seconds instead of minutes and use a fraction of the resources.
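You can see the shared kernel directly. A quick sketch, assuming Docker is installed on a Linux host (on Docker Desktop for Mac/Windows, the container reports the kernel of Docker's own Linux VM, not your host OS); the guard line makes it exit quietly on machines without a running Docker daemon:

```shell
# Containers reuse the host kernel — compare the two outputs.
command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1 || exit 0

uname -r                          # the host's kernel version
docker run --rm alpine uname -r  # the same kernel, reported from inside a container
```

There is no second OS booting anywhere, which is why the container starts in well under a second.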
- Image — a read-only blueprint. It contains your code, runtime (Node, Python, Java), system libraries, and configuration. Like a class in programming.
- Container — a running instance of an image. You can run multiple containers from the same image. Like an object created from a class.
- Dockerfile — a text file with instructions to build an image. It is the recipe.
- Registry — where images are stored and shared. Docker Hub is public. Amazon ECR is private.
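The image/container distinction is easy to see on the command line. A minimal sketch (assumes Docker is installed and the daemon is running; the guard makes it exit quietly anywhere else):

```shell
# One image, two independent containers.
command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1 || exit 0

docker pull nginx                            # fetch the image once
docker run -d --name web1 -p 8081:80 nginx   # container 1
docker run -d --name web2 -p 8082:80 nginx   # container 2, from the same image
docker ps --filter "name=web"                # both running, same IMAGE column
docker rm -f web1 web2                       # clean up
```

Two "objects", one "class": both containers came from the same nginx image, but each has its own filesystem, process, and port.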
Install Docker
Mac or Windows: Download and install Docker Desktop. It includes everything.
Linux (Ubuntu/Debian):
# Install Docker Engine
sudo apt-get update
sudo apt-get install -y docker.io
# Start Docker and enable on boot
sudo systemctl start docker
sudo systemctl enable docker
# Add your user to the docker group (so you don't need sudo)
sudo usermod -aG docker $USER
# Log out and back in, then verify
docker --version
Your First Container
Before writing any Dockerfile, let us run an existing container to see how Docker works:
# Pull and run the official Nginx web server
docker run -d -p 8080:80 --name my-nginx nginx
Open http://localhost:8080 in your browser. You will see the Nginx welcome page. That is a web server running inside a container on your machine.
What just happened:
- docker run — creates and starts a container
- -d — runs in the background (detached mode)
- -p 8080:80 — maps port 8080 on your machine to port 80 inside the container
- --name my-nginx — gives the container a name
- nginx — the image to use (pulled from Docker Hub automatically)
# See running containers
docker ps
# Stop the container
docker stop my-nginx
# Remove the container
docker rm my-nginx
Write Your First Dockerfile
Let us dockerize a simple Node.js application. Create these files:
# app.js
const http = require('http');
const port = process.env.PORT || 3000;
const server = http.createServer((req, res) => {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ message: 'Hello from Docker!', timestamp: new Date() }));
});
server.listen(port, () => {
console.log(`Server running on port ${port}`);
});
# package.json
{
"name": "docker-demo",
"version": "1.0.0",
"main": "app.js",
"scripts": {
"start": "node app.js"
}
}
Now the Dockerfile:
# Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package.json ./
RUN npm install --production
COPY app.js ./
EXPOSE 3000
CMD ["node", "app.js"]
Line by line:
- FROM node:20-alpine — start from the official Node.js 20 image, Alpine variant (small, ~50MB instead of ~350MB)
- WORKDIR /app — set the working directory inside the container
- COPY package.json ./ — copy package.json first (for caching — explained below)
- RUN npm install --production — install only production dependencies
- COPY app.js ./ — copy your application code
- EXPOSE 3000 — document which port the app uses
- CMD — the command that runs when the container starts
Build and Run
# Build the image (the dot means "use current directory for context")
docker build -t my-app:v1 .
# Run it
docker run -d -p 3000:3000 --name my-app my-app:v1
# Test it
curl http://localhost:3000
# {"message":"Hello from Docker!","timestamp":"2026-04-03T..."}
Your application is now running inside a container. Anyone with Docker installed who has this image can run docker run my-app:v1 and get the exact same result — no "works on my machine" problems.
The .dockerignore File
Just like .gitignore, .dockerignore tells Docker which files to skip when copying:
# .dockerignore
node_modules
npm-debug.log
.git
.env
Dockerfile
docker-compose.yml
README.md
Without this, Docker copies your entire node_modules folder into the build context, making it slow and bloated. The RUN npm install inside the Dockerfile creates a fresh node_modules anyway.
Layer Caching — Why Order Matters
Docker builds images in layers. Each instruction in your Dockerfile creates a layer. Docker caches layers and reuses them if nothing changed.
This is why we copy package.json and run npm install BEFORE copying the rest of the code:
# Good order — npm install is cached unless package.json changes
COPY package.json ./
RUN npm install --production
COPY . ./
# Bad order — npm install runs on EVERY build even if only code changed
COPY . ./
RUN npm install --production
With the good order, changing your application code only rebuilds the last COPY layer. The npm install layer is cached because package.json did not change. This turns a 2-minute build into a 5-second build.
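You can watch the cache at work with a throwaway project — a sketch, where cache-demo is a hypothetical image tag and sleep stands in for a slow npm install (the guard skips machines without Docker):

```shell
command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1 || exit 0

dir=$(mktemp -d)
cat > "$dir/Dockerfile" <<'EOF'
FROM alpine:3.20
RUN echo "pretend this is a slow npm install" && sleep 3
COPY msg.txt /msg.txt
EOF

echo "v1" > "$dir/msg.txt"
docker build -t cache-demo "$dir"   # first build: the RUN layer executes (3s pause)

echo "v2" > "$dir/msg.txt"
docker build -t cache-demo "$dir"   # second build: only COPY reruns, RUN shows CACHED

docker rmi cache-demo
rm -rf "$dir"
```

The second build finishes almost instantly: changing msg.txt invalidates only the COPY layer, and every layer above it is served from cache.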
Docker Compose — Multiple Services
Most applications need more than one container — a web app plus a database, for example. Docker Compose lets you define and run them together:
# docker-compose.yml
version: '3.8'  # optional — Compose v2 ignores this field
services:
app:
build: .
ports:
- "3000:3000"
environment:
- DATABASE_URL=postgres://user:pass@db:5432/myapp
depends_on:
- db
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
POSTGRES_DB: myapp
volumes:
- pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
pgdata:
# Start everything
docker compose up -d
# See logs
docker compose logs -f app
# Stop everything
docker compose down
# Stop and remove volumes (deletes database data)
docker compose down -v
Docker Compose creates a network automatically. Your app container can reach the database at db:5432 — the service name becomes the hostname. No IP addresses to manage.
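You can verify the built-in DNS yourself once the stack above is up. A sketch (assumes docker compose up -d has been run from the project directory; ping here is the busybox applet shipped in the alpine-based app image):

```shell
# Reach the database by service name from inside the app container.
# Skips cleanly when Docker or the Compose stack is unavailable.
command -v docker >/dev/null 2>&1 && docker compose ps >/dev/null 2>&1 || exit 0

docker compose exec app ping -c 1 db   # "db" resolves via Compose's internal DNS
```

The hostname db works only inside the Compose network; from your host machine you would still connect to localhost:5432 through the published port.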
Multi-Stage Builds — Production Images
Your development image has build tools, dev dependencies, and debug utilities. Your production image should have none of that — only the compiled code and runtime dependencies. Multi-stage builds solve this:
# Dockerfile (multi-stage)
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json ./
EXPOSE 3000
USER node
CMD ["node", "dist/index.js"]
The first stage installs everything and builds. The second stage only copies the build output and production node_modules. Build tools, source code, and dev dependencies are left behind. The final image is dramatically smaller.
The USER node line is important — it runs the application as a non-root user inside the container. Running as root is a security risk because if someone exploits your app, they have root access to the container.
Essential Docker Commands
# Images
docker build -t my-app:v1 . # Build an image
docker images # List images
docker rmi my-app:v1 # Remove an image
# Containers
docker run -d -p 3000:3000 my-app # Run in background
docker ps # List running containers
docker ps -a # List all containers (including stopped)
docker stop my-app # Stop a container
docker rm my-app # Remove a container
docker logs my-app # View logs
docker logs -f my-app # Follow logs (live)
# Debugging
docker exec -it my-app sh # Open a shell inside the container
docker inspect my-app # View container details
# Cleanup
docker system prune # Remove unused images, containers, networks
docker system prune -a # Remove everything unused (reclaim disk space)
Push to a Registry
To deploy your image, push it to a container registry. Here is how with Amazon ECR:
# Authenticate with ECR
aws ecr get-login-password --region ap-south-1 | \
docker login --username AWS --password-stdin 123456789.dkr.ecr.ap-south-1.amazonaws.com
# Tag the image for ECR
docker tag my-app:v1 123456789.dkr.ecr.ap-south-1.amazonaws.com/my-app:v1
# Push
docker push 123456789.dkr.ecr.ap-south-1.amazonaws.com/my-app:v1
Once in ECR, you can deploy it to ECS Fargate or any container orchestrator.
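The tag-then-push pattern generalizes to any registry. A hedged sketch using the same placeholder account ID and region as above (substitute your own; the docker steps run only when the local image actually exists):

```shell
# Compose the fully qualified image reference once, reuse it everywhere.
REGISTRY="123456789.dkr.ecr.ap-south-1.amazonaws.com"   # placeholder account/region
IMAGE="my-app"
TAG="v1"
REMOTE="$REGISTRY/$IMAGE:$TAG"
echo "$REMOTE"

# Tag and push only when Docker and the local image are present.
if command -v docker >/dev/null 2>&1 \
   && docker image inspect "$IMAGE:$TAG" >/dev/null 2>&1; then
  docker tag "$IMAGE:$TAG" "$REMOTE"
  docker push "$REMOTE"
fi
```

Driving the tag from a variable makes it easy to swap in a git SHA in CI instead of a hand-typed version.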
Common Mistakes
- Using the latest tag in production. docker run my-app:latest does not mean the latest version — it means whatever was last tagged as "latest." Use specific version tags like my-app:v1.2.3 or my-app:abc123 (git SHA).
- Running as root. Add USER node (or another non-root user) in your Dockerfile. Otherwise, if your container is compromised, the attacker gets root access inside it.
- Using full base images. node:20 is ~350MB. node:20-alpine is ~50MB. Always use -alpine or -slim variants unless you need specific system libraries.
- Not using .dockerignore. Without it, Docker copies node_modules, .git, and other junk into the build context, making builds slow and images large.
- Installing dev dependencies in production images. Use npm ci --omit=dev (the modern replacement for --production) or multi-stage builds to keep dev tools out of the final image.
Frequently Asked Questions
What is the difference between a Docker image and a container?
An image is a read-only blueprint containing your code and dependencies. A container is a running instance of that image. You can run multiple containers from one image. Think of an image as a class and a container as an object.
What is the difference between CMD and ENTRYPOINT?
CMD sets the default command and is easy to override: docker run my-app /bin/sh replaces it entirely. ENTRYPOINT always runs and can only be replaced with the --entrypoint flag. Use ENTRYPOINT for the main process and CMD for its default arguments.
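A sketch of the two working together (hypothetical file names):

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY app.js other.js ./
# ENTRYPOINT is the fixed process; CMD supplies its default arguments.
ENTRYPOINT ["node"]
CMD ["app.js"]
# docker run my-image            -> runs: node app.js
# docker run my-image other.js   -> runs: node other.js
```

Arguments after the image name replace CMD but are appended to ENTRYPOINT, so the container always runs node, just with different scripts.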
Why is my Docker image so large?
Common causes: full base image instead of alpine/slim, no multi-stage build, missing .dockerignore, not combining RUN commands. A Node.js app on node:alpine can be under 100MB versus 900MB+ on the full image.
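To find which layers are eating the space, docker history shows the size of every instruction's layer. A sketch against the my-app:v1 image built earlier (the guard skips cleanly when Docker or that image is unavailable):

```shell
command -v docker >/dev/null 2>&1 \
  && docker image inspect my-app:v1 >/dev/null 2>&1 || exit 0

docker history my-app:v1   # per-layer sizes, newest layer first
docker images my-app       # total image size
```

A huge RUN or COPY layer near the top of the history output usually points straight at the culprit.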
What is Docker Compose?
Docker Compose defines and runs multiple containers together using a YAML file. Use it when your app needs a database, cache, or other services alongside it. One docker compose up starts everything.
Should I use Docker in production?
Yes, but not with docker run. Use a container orchestrator like ECS Fargate or Kubernetes. Docker is the packaging format. The orchestrator handles scaling, health checks, and recovery.
What Comes After Docker
You have dockerized your app. The next steps:
- Set up CI/CD with GitHub Actions — automate building and pushing images on every commit
- Deploy to ECS Fargate — run your container in production without managing servers
- Set up a production VPC — the network your containers will run in