OverlayFS
A Docker image is a stack of read-only layers. When you run a container, a writable layer is added on top. Read a file, and it comes from the topmost layer that contains it. Write a file, and it is first copied up to the writable layer. This copy-on-write design means 100 containers can share the same image layers without 100 copies of the data.
How OverlayFS Works
What are the lowerdir, upperdir, workdir, and merged directories?
OverlayFS has four key directories: lowerdir = the read-only layers (image layers; there can be several); upperdir = the writable layer (per-container changes); workdir = internal scratch space (required by the kernel for atomic operations); merged = the unified view presented to the container. Reads through merged resolve upperdir first, then each lowerdir in order. Writes go to upperdir via copy-on-write.
Image with 3 layers:
Layer 1 (base): /etc/os-release, /bin/sh, /lib/*
Layer 2 (app): /app/server.py, /app/requirements.txt
Layer 3 (config): /etc/app/config.json (would shadow the same path in any lower layer)
OverlayFS mount:
lowerdir = layer3:layer2:layer1 (read-only; leftmost entry is topmost and wins on conflicts)
upperdir = /var/lib/docker/overlay2/CONTAINER-ID/diff (writable)
workdir = /var/lib/docker/overlay2/CONTAINER-ID/work (internal)
merged = /var/lib/docker/overlay2/CONTAINER-ID/merged (unified view)
When container reads /etc/app/config.json:
Check upperdir: not there (unchanged)
Check layer3: FOUND → return it
When container writes /etc/app/config.json:
Copy file from layer3 to upperdir (copy-on-write)
Modify copy in upperdir
Future reads see upperdir version (modified)
Original layer3 file unchanged (other containers unaffected)
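The same mechanics can be reproduced by hand with a plain `mount -t overlay` call, outside Docker entirely. A minimal sketch (requires root; directory names are arbitrary):

```shell
# Build a tiny overlay by hand (run as root).
mkdir -p lower upper work merged
echo "from the image" > lower/config.json

# Mount the unified view: lowerdir is read-only, writes land in upperdir.
mount -t overlay overlay \
  -o lowerdir=lower,upperdir=upper,workdir=work merged

cat merged/config.json              # served from lower (upperdir has no copy yet)
echo "changed" > merged/config.json # copy-up: the file now lives in upperdir
cat upper/config.json               # the modified copy
cat lower/config.json               # original is untouched

umount merged
```

After the write, `upper/config.json` exists and shadows the lower copy, which is exactly what happens in a container's UpperDir.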
Inspecting Layers on Disk
# Docker image layers location:
ls /var/lib/docker/overlay2/
# abc123sha... def456sha... l/
# 'l' directory contains short IDs (symlinks to avoid path length limits)
ls /var/lib/docker/overlay2/l/
# Inspect a running container's overlay mount:
docker inspect myapp --format '{{json .GraphDriver.Data}}'
# {
# "LowerDir": "/var/lib/docker/overlay2/abc.../diff:
# /var/lib/docker/overlay2/def.../diff",
# "MergedDir": "/var/lib/docker/overlay2/xyz.../merged",
# "UpperDir": "/var/lib/docker/overlay2/xyz.../diff",
# "WorkDir": "/var/lib/docker/overlay2/xyz.../work"
# }
# See the actual mount:
mount | grep overlay
# overlay on /var/lib/docker/overlay2/xyz.../merged type overlay
# (rw,lowerdir=.../diff:.../diff,upperdir=.../diff,workdir=.../work)
# What's in the writable layer (changes made by container):
ls /var/lib/docker/overlay2/CONTAINERID/diff/
# etc/ (if /etc files were modified)
# tmp/ (if /tmp was written to)
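Docker can also summarize the writable layer directly, without digging through /var/lib/docker. `docker diff` lists every path added (A), changed (C), or deleted (D) relative to the image (the container name `myapp` and the paths shown are illustrative):

```shell
# Show what the container changed relative to its image layers
docker diff myapp
# C /etc
# C /etc/app/config.json   <- copied up and modified
# A /tmp/scratch.log       <- created by the container
```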
Docker Image Layers — Build Cache
# Each Dockerfile instruction creates a layer
# FROM ubuntu:22.04 - base layer
# RUN apt-get update - new layer (apt cache files)
# RUN apt-get install -y curl - new layer (curl binary)
# COPY app.py /app/ - new layer (your file)
# CMD ["python", "/app/app.py"] - metadata only, no new layer
# Layers are content-addressed: same content = same hash = shared
# If 10 images all use ubuntu:22.04, only 1 copy on disk
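Instruction order matters for the cache: a changed instruction invalidates its layer and every layer after it. A common pattern, sketched below with illustrative file names, is to copy only the dependency list before installing, so edits to application code rebuild just the final layers and not the expensive install step:

```dockerfile
FROM ubuntu:22.04

# Changes rarely -> cached across most rebuilds
RUN apt-get update && apt-get install -y python3 python3-pip

# Copy only the dependency list first, so editing app code
# does not invalidate the install layer below
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# Changes often -> only this layer (and later ones) rebuild
COPY app.py /app/

CMD ["python3", "/app/app.py"]
```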
# View layers of an image:
docker history nginx
# IMAGE CREATED CREATED BY SIZE
# abc123 2 days ago CMD ["nginx" "-g" "daemon off;"] 0B
# def456 2 days ago EXPOSE 80 0B
# ghi789 2 days ago COPY ... 10.7MB
# ...
# Layer sharing between containers:
docker pull ubuntu:22.04
docker pull myapp:latest # built FROM ubuntu:22.04
# ubuntu base layer is shared — downloaded once, used by both
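Layer sharing is visible in image metadata: an image built FROM a base reports the base's layer digests as a prefix of its own list, and identical digests are stored once on disk. A sketch (assumes `myapp:latest` exists locally):

```shell
# Compare content-addressed layer digests of two images
docker image inspect ubuntu:22.04 --format '{{json .RootFS.Layers}}'
docker image inspect myapp:latest --format '{{json .RootFS.Layers}}'
# myapp's list begins with ubuntu's sha256 digests, then adds its own layers
```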
Container Writable Layer vs Volumes
Should I write application data to the container filesystem?
No. The container's writable layer (UpperDir) disappears when the container is removed. Writing through OverlayFS is also slower than writing to a volume, because the first write to an existing file triggers a copy-up of the whole file into upperdir. Volumes and bind mounts bypass OverlayFS entirely: they are host directories mounted directly into the container, so writes are fast and persistent. Use volumes for any data that needs to survive container removal.
# Volume: bypasses overlayfs, data persists
docker run -v /data/myapp:/app/data myimage
# Bind mount: specific host path mounted in container
docker run -v $(pwd)/config:/etc/app/config:ro myimage
# Named volume (Docker manages the path):
docker volume create mydata
docker run -v mydata:/app/data myimage
# tmpfs: RAM-backed, fast, no persistence
docker run --tmpfs /tmp:size=100m myimage
# Check disk usage by containers (writable layers):
docker system df
# TYPE TOTAL ACTIVE SIZE RECLAIMABLE
# Images 15 5 3.2GB 1.8GB
# Containers 5 3 127MB 89MB (writable layers)
# Volumes 8 3 2.1GB 0B