Buffer & Page Cache

Linux uses spare RAM as a disk cache. When you read a file, the kernel keeps a copy in memory. Next time anything reads that file, no disk access needed — the data comes from RAM instantly. This is why a Linux system with "no free memory" often runs faster than one with lots of free RAM.
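This effect is easy to feel with a quick timing test. A minimal sketch (the path and size are placeholders; note that writing the file already warms the cache, so for a truly cold first read you'd drop the cache beforehand):

```shell
# Create a 256 MB scratch file (path is arbitrary)
dd if=/dev/zero of=/tmp/cachetest bs=1M count=256 status=none

# First read — may hit disk if the file isn't cached
time cat /tmp/cachetest > /dev/null

# Second read — served from the page cache, typically far faster
time cat /tmp/cachetest > /dev/null

rm /tmp/cachetest
```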

What Is the Page Cache?

Why does Linux use so much RAM for cache? Disk reads are 100,000x slower than RAM reads. The kernel caches every file it reads. When the same file is accessed again — by any process — it comes from RAM. This dramatically speeds up everything from web servers serving the same files repeatedly to databases reading indexes. The cache is automatically evicted when processes need more RAM.
```shell
# See page cache in action
free -h
#        total   used   free   buff/cache   available
# Mem:     16G     4G     1G          11G         11G
#                               ^^ this is your page cache
# "free" column is misleadingly low — system isn't starved
# "available" = real usable RAM (cache can be reclaimed)

# How much is cached right now
grep -E "Cached|Buffers|Dirty" /proc/meminfo
# Buffers:     256000 kB   ← filesystem metadata
# Cached:    10000000 kB   ← file data (page cache)
# Dirty:       123456 kB   ← modified, not yet on disk
```

Buffer Cache vs Page Cache

What's the difference between Buffers and Cached in /proc/meminfo? Historically these were separate. Today they're unified in the page cache, but the split in /proc/meminfo remains for compatibility. "Cached" = file data pages. "Buffers" = filesystem metadata (superblocks, directory entries, inode tables). Both are reclaimable under memory pressure.
|              | Cached (page cache)                       | Buffers                                         |
|--------------|-------------------------------------------|-------------------------------------------------|
| Contains     | File data — contents of files you've read | Filesystem metadata — inodes, superblocks, dirs |
| Examples     | A 100MB log file you just cat'd           | The directory listing of /etc                   |
| Reclaimable? | Yes — evicted when RAM needed             | Yes — evicted when RAM needed                   |
| In free -h   | Shown together in "buff/cache"            | Shown together in "buff/cache"                  |
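Both pools can be summed straight out of /proc/meminfo. A small sketch (values are in kB; note that "Cached" also counts tmpfs/shmem pages, which aren't reclaimable without swap):

```shell
# Sum Buffers + Cached (kB) and report in GiB
awk '/^(Buffers|Cached):/ { sum += $2 }
     END { printf "reclaimable cache: %.2f GiB\n", sum / 1024 / 1024 }' /proc/meminfo
```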

Dirty Pages — Writes in Flight

When you write a file, does it go to disk immediately? No. Linux writes to the page cache first (a fast RAM write), marks that page "dirty", and returns immediately to your program. Background kernel writeback threads (historically pdflush, now per-device flusher threads) flush dirty pages to disk asynchronously. This makes writes feel instant — but the data isn't safe until it's flushed. A power loss before the flush means the data is lost.
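The dirty state is easy to watch. A sketch (the path is a placeholder; the numbers will vary with your system):

```shell
# Write 200 MB with no fsync — lands in the page cache, marked dirty
dd if=/dev/zero of=/tmp/dirtytest bs=1M count=200 status=none
grep Dirty /proc/meminfo    # Dirty jumps up

# Force writeback now
sync
grep Dirty /proc/meminfo    # Dirty falls back toward zero

rm /tmp/dirtytest
```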
```shell
# Writeback tuning parameters (in /proc/sys/vm/)
cat /proc/sys/vm/dirty_ratio               # 20   ← when dirty pages hit 20% of RAM, processes must write synchronously
cat /proc/sys/vm/dirty_background_ratio    # 10   ← background writeback starts at 10% dirty
cat /proc/sys/vm/dirty_expire_centisecs    # 3000 ← dirty pages older than 30 seconds MUST be written
cat /proc/sys/vm/dirty_writeback_centisecs # 500  ← writeback thread wakes every 5 seconds

# For databases (reduce data loss window):
echo 5 > /proc/sys/vm/dirty_background_ratio
echo 10 > /proc/sys/vm/dirty_ratio
```

Read-Ahead — Predicting Sequential Reads

When the kernel sees sequential file reads, it pre-fetches upcoming pages into the cache before you ask for them. Next read hits cache instead of waiting for disk. This is why reading a large file the second time is much faster — and why even first reads of sequential data are faster than random reads.

```shell
# The kernel detects sequential access patterns automatically
# You can also hint: posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL)

# See read-ahead setting per device:
cat /sys/block/sda/queue/read_ahead_kb
# 128 (kB, default)

# Increase for large sequential reads (video, backups):
echo 4096 > /sys/block/sda/queue/read_ahead_kb

# Disable for random access patterns (databases):
echo 0 > /sys/block/sda/queue/read_ahead_kb
```
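The same knob is also exposed through blockdev(8), which works in 512-byte sectors rather than kB (the device name here is an example):

```shell
# Current read-ahead in 512-byte sectors (256 sectors = 128 kB)
blockdev --getra /dev/sda

# Set read-ahead to 8192 sectors (4 MB) — needs root
blockdev --setra 8192 /dev/sda
```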

Dropping the Cache

When would you ever want to drop the cache? Mainly when benchmarking disk performance, where you need cold-cache reads — or to free RAM in an emergency. Normally, don't: the cache speeds things up, and the kernel evicts it automatically when processes need RAM. Never drop the cache on a production system unless you understand the performance hit.
```shell
# sync first — flush dirty pages before dropping
sync

# Drop page cache only (safest)
echo 1 > /proc/sys/vm/drop_caches

# Drop dentries and inodes (filesystem metadata cache)
echo 2 > /proc/sys/vm/drop_caches

# Drop everything (page cache + dentries + inodes)
echo 3 > /proc/sys/vm/drop_caches

# Non-destructive — only clean pages are dropped; dirty pages
# are skipped, not written, which is why you sync first
```
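Putting the pieces together, a typical cold-vs-warm disk benchmark looks like this (a sketch; needs root, and /var/log/syslog is just an example of a large existing file):

```shell
# Warm run — file is likely already cached
time cat /var/log/syslog > /dev/null

# Flush dirty pages, then drop the cache
sync
echo 3 > /proc/sys/vm/drop_caches

# Cold run — forced to read from disk
time cat /var/log/syslog > /dev/null
```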

tmpfs — RAM as a Filesystem

tmpfs is a filesystem that lives entirely in the page cache — no disk backing. Reads and writes go directly to RAM. Used for /tmp, /run, /dev/shm. Files disappear on reboot. Useful for fast temporary storage: compiling, shared memory IPC, ramdisks.

```shell
# Check what's mounted as tmpfs
df -h | grep tmpfs
# tmpfs   16G  1.2G   15G   8% /dev/shm
# tmpfs  3.1G  2.1M  3.1G   1% /run
# tmpfs   16G   56M   16G   1% /tmp

# Mount your own tmpfs
mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk

# Use for fast builds (example: Go module cache)
export GOPATH=/mnt/ramdisk/go
```
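To make such a mount come back after reboot (the mount point, not its contents), an /etc/fstab line works; the size and mode below are illustrative:

```shell
# /etc/fstab entry for a 2 GB ramdisk:
#   tmpfs  /mnt/ramdisk  tmpfs  size=2G,mode=1777,noatime  0  0

# Apply immediately without rebooting (needs root):
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2G,mode=1777,noatime tmpfs /mnt/ramdisk
df -h /mnt/ramdisk
```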
