I was wrong! zswap IS better than zram

Rule of thumb / TL;DR:
If your system only swaps occasionally and a zram device sized at roughly 20–30% of your physical RAM covers your swap demand, zram is the simpler and more effective option. But if swap use regularly pushes far beyond that, is unpredictable, or your system has fast storage (NVMe), zswap is the better choice: it dynamically compresses and caches hot pages in RAM, evicts cold ones to disk swap, and delivers smoother performance under heavy pressure.

For years I’ve championed zram as the best swap‑related feature you could enable on a Linux desktop. When your system starts paging out memory, traditional swap writes whole pages to disk, which is painfully slow compared to RAM. Back in 2017 through 2020, I wrote several posts that explained how to configure zram on servers, desktops, and even Raspberry Pi boards.

After removing ZRAM and just using a 32 GB swap file for a while.

Although zram had been promoted out of the kernel’s staging area in 2014, most mainstream Linux distributions did not enable it by default until the early 2020s. You still had to load the zram module and set up the block device yourself using tools like zram-config or zram-tools. Those posts walked readers through exactly those steps.

Over time, more Linux distros have shipped with zram and zswap already installed, some with one of them enabled by default.

By compressing memory pages and storing them in a block device in RAM, zram provided the breathing room I needed on many systems low on free memory. On servers with slow spinning hard disks (HDDs) and desktops with budget or older SSDs, enabling ZRAM felt like magic.

Fast‑forward a few years. My daily workstation still has only 16 GB of RAM, but my workloads have grown. Running virtual machines, dozens of browser tabs split across multiple workspaces, containers, editors, Remmina, Termius, Gimp, Spotify, etc., means I routinely chew through 20 GB of memory. Back in July 2023, I upgraded the storage on my desktop and laptop to very fast Hynix NVMe SSDs [Amazon affiliate link].

In that context, my old love affair with zram started to show cracks. After some experimentation and a lot of research, I’ve concluded that zswap is often a better choice than zram on modern desktops. This post revisits my earlier articles, explains why zswap may outperform zram, and shares the lessons learned from a few years of real‑world use.

Revisiting zram: why I loved it

Also read: Raspberry Pi Performance: Add ZRAM and These Kernel Parameters
Screenshot from the article: Raspberry Pi Performance: Add ZRAM and These Kernel Parameters

To understand why I once pushed zram so hard, it helps to recall what it does and why it made sense then. The Linux kernel includes a zram module, but it is not enabled or configured automatically by most distributions. When you load it and initialize a block device with zramctl, it creates a compressed block device in RAM that you can format like a disk.
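
For reference, the manual steps look roughly like this; the 4 GB size and zstd algorithm are example values, and tools like zram-tools simply automate the same sequence:

sudo modprobe zram
dev=$(sudo zramctl --find --size 4G --algorithm zstd)   # allocates a free /dev/zramX and prints its name
sudo mkswap "$dev"
sudo swapon --priority 100 "$dev"   # prefer zram over any disk-backed swap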

When you use that device as swap, pages being swapped out are compressed to roughly 30–50% of their original size before being stored in memory (RAM) instead of being written to disk. Because the device resides in RAM, accessing it is much faster than reading from or writing to a physical drive. The Linux kernel documentation describes zram as creating “RAM‑based block devices… pages written to these disks are compressed and stored in memory itself”. This reduces disk wear and can significantly increase effective memory capacity.

In my earlier posts I recommended allocating zram at roughly 25% of your physical RAM. On a 16 GB system, letting zram use about 4 GB of RAM at a typical 2:1 compression ratio gave me about 8 GB of compressed swap capacity. Together with the remaining 12 GB of RAM, that setup effectively provided about 20 GB of usable memory.

When running my workloads on slow SSDs, the machine felt responsive despite heavy swapping. In those posts I stressed that zram is particularly attractive when you don’t have a disk‑backed swap device, because it can be set up entirely in memory and does not require any preexisting partition.

You can follow the timeline of my workstation setup in the forum comments for the blog article Running Out of RAM on Linux? Add Zram Before Upgrading!

Where zram started to fall apart

Also read: Running Out of RAM on Linux? Add Zram Before Upgrading!
Screenshot from the article: Running Out of RAM on Linux? Add Zram Before Upgrading!

The honeymoon with zram ended when my workloads grew beyond a few gigabytes of swap usage. I eventually found myself using more than 10 GB of swap routinely on top of 16 GB of physical memory. Clear case of buy more RAM, right? Well, I don’t plan to invest any more in this rig. I would like to build a new rig next year and prefer to put 32 GB of RAM there. So until then I need my workstation to survive my workload.

Following my earlier formula, I bumped the zram device to 8 GB (50% of RAM), hoping that the additional compressed space would keep me off the disk. At first it worked; compression around 2:1 effectively provided 16 GB of compressed swap capacity. But I quickly noticed, well, felt, the catch: under sustained swapping, up to half my memory was now consumed by the zram block!

That left only 8 GB of normal RAM for active processes, which meant the system was under memory pressure sooner than before. When the zram device filled up and the kernel had to fall back to the disk swap partition, performance nosedived.

I also ran into an edge case: with zram able to fill half of my RAM, suspending the system to RAM (sleep) and resuming occasionally caused complete lockups. While there isn’t an official bug report that I can point to, the kernel documentation warns that zram has limited scalability: the compressed device is bounded by available RAM and incurs CPU overhead for every compression and decompression.

In short, allocating too much of your memory to a fixed zram device can starve the system of the working memory it needs to handle suspend/resume, I/O, file caches, and other kernel tasks.

Enter zswap: a smarter compressed cache

After disabling zram I experimented with zswap, another memory‑compression feature in the Linux kernel. Unlike zram, zswap is not a separate block device. The kernel documentation describes it as “a lightweight compressed cache for swap pages.”

On Kali Linux, zswap is built into the kernel, but it’s not always enabled by default. You just have to enable it:

Enable Zswap

First, check which compressor zswap is currently configured to use:

cat /sys/module/zswap/parameters/compressor

Checking which compression method is configured on your system.

Debian and derivatives usually default to lzo, Arch Linux to zstd; it varies by distribution.
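
To see which compression algorithms the running kernel actually has loaded, you can filter /proc/crypto; the one-liner below is just one way to do it, and its output depends on which crypto modules are loaded (you may need something like sudo modprobe zstd first):

awk '/^name/{n=$3} /^type/ && ($3=="scomp" || $3=="compression"){print n}' /proc/crypto | sort -u

You can also switch the zswap compressor at runtime without rebooting:

echo zstd | sudo tee /sys/module/zswap/parameters/compressor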

To enable permanently (at boot), edit your GRUB defaults so the kernel boots with zswap enabled:

sudo nano /etc/default/grub

Find the line starting with:

GRUB_CMDLINE_LINUX_DEFAULT="..."

Add:

zswap.enabled=1 zswap.compressor=lzo zswap.max_pool_percent=25

For example, as shown in my screenshot just above:

GRUB_CMDLINE_LINUX_DEFAULT="quiet zswap.enabled=1 zswap.compressor=lzo zswap.max_pool_percent=25"

Save, then update grub:

sudo update-grub

Reboot. After reboot:

cat /sys/module/zswap/parameters/enabled

Show "Y".

Check if Zswap is enabled.

Also see wiki.archlinux.org/title/Zswap, fedoraproject.org/wiki/Zswap and wiki.debian.org/Zswap.

Rather than pre‑allocating RAM for a swap device, zswap intercepts pages as they are being swapped out and compresses them into a dynamic memory pool. If the page compresses well, it stays in RAM; if not, it’s written straight to your swap file or partition. When memory pressure eases and the page is freed, zswap releases the associated RAM. The pool grows and shrinks on demand, and it evicts the least recently used compressed pages to disk when it reaches its configured limit.

This design has several advantages:

  • Dynamic memory usage: The zswap pool isn’t pre‑allocated. It expands when you start swapping and shrinks when the pressure subsides. That means your system always has as much physical RAM as possible available for active processes, file caches, and kernel tasks. You set an upper limit (20% of RAM by default) to prevent zswap from consuming too much memory.
  • No separate swap device required: zswap augments your existing swap file or partition rather than replacing it. It is enabled with a simple kernel parameter (zswap.enabled=1) or at runtime via /sys/module/zswap/parameters/enabled (see the sketch after this list). All swap operations continue to use your normal swap device; zswap just compresses pages before they hit the disk.
  • Fewer disk writes: Because zswap stores compressible pages in RAM and evicts them on an LRU basis only when the pool is full, many pages will never touch the disk. The Arch Wiki notes that pages are decompressed and written to disk only when the pool is full or RAM is exhausted. On systems with SSDs or NVMe drives, reducing writes can extend device life and improve responsiveness.
  • Lower CPU overhead: Compression costs CPU cycles. Baeldung’s comparison points out that zswap typically has lower CPU overhead than zram because it doesn’t compress every page by default and avoids repeated recompression when the pool fills.
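
As a sketch of that runtime control, the sysfs writes below apply immediately but do not survive a reboot, so keep the GRUB parameters above for a permanent setup:

echo Y | sudo tee /sys/module/zswap/parameters/enabled
echo 25 | sudo tee /sys/module/zswap/parameters/max_pool_percent
grep -R . /sys/module/zswap/parameters/   # review all current zswap settings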

My experience with zswap

With zswap enabled on my workstation (and a 16 GB NVMe‑backed swap file), I rarely notice any slowdowns, even when using the equivalent of my previous 16 GB of swap. I set the maximum zswap pool size to 25% of RAM (zswap.max_pool_percent=25) and chose the zstd compressor (when available; the system falls back to another method if not) for a good balance between compression ratio and speed.

The dynamic nature of zswap means the kernel always has access to the full 16 GB of RAM unless memory pressure demands more. Swapping happens in the background, and because NVMe write latency is only a few microseconds, even uncompressed pages written to the swap file come back quickly. Importantly, suspend/resume now works reliably because there’s no giant pre‑allocated zram device hogging half the memory.

From a performance‑versus‑complexity standpoint, zswap feels like the best of both worlds. When the swap usage is moderate (say, a few gigabytes), the compressed cache keeps the hottest pages in RAM and avoids disk entirely. When swap usage spikes well beyond the physical RAM size, zswap gracefully evicts cold pages to disk.
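
If you want to verify what the compressed cache is doing on your own system, the kernel exposes zswap counters under /sys/kernel/debug/zswap/ (root and a mounted debugfs required; the exact files vary by kernel version). A rough sketch, assuming 4 KiB pages:

sudo sh -c '
pool=$(cat /sys/kernel/debug/zswap/pool_total_size)   # bytes of RAM the compressed pool uses
pages=$(cat /sys/kernel/debug/zswap/stored_pages)     # swapped-out pages currently held in RAM
echo "compressed pool: $((pool / 1048576)) MiB"
echo "original data:   $((pages * 4096 / 1048576)) MiB"
'

Comparing the two numbers gives the effective compression ratio of your workload.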

The only downside is that if your storage is really slow (e.g., a mechanical HDD), those evictions will still hurt. In that scenario, zram might still provide a snappier experience because it never touches the disk until it has to.

Comparing zram and zswap

The best way to summarize the trade‑offs is to lay out the facts side by side:

| Feature | ZRAM | Zswap |
|---|---|---|
| Primary use case | Small or steady swap; slow storage | Large or spiky swap; fast SSD/NVMe |
| How it works | Compressed RAM-backed swap device | Compressed RAM cache in front of swap |
| Visible in swapon? | Yes (/dev/zramX) | No; only the underlying swap device shows |
| Is it a swap device? | Yes (block device) | No (cache) |
| Works without disk swap? | Yes | No |
| Memory behavior | Fixed cap; compresses all swapped pages | Dynamic; grows under pressure, shrinks when idle |
| Disk I/O | Avoids disk until the device fills | Reduces writes; evicts cold pages to disk |
| CPU cost | Higher (compresses every swapped page) | Often lower (cache + eviction) |
| OOM behavior | Delays OOM until the device fills | Delays OOM; evicts to disk when the pool hits its cap |
| Tuning knobs | Device size, priority, compressor | max_pool_percent, compressor |
| Rule of thumb | Small, steady swap on slow storage | Large or unpredictable swap, or fast NVMe |

Both technologies trade CPU time for reduced disk I/O. If your workload rarely touches swap or your swap needs are comfortably served by compressing a few gigabytes, zram is an elegant, easy solution. But when swap usage is large and unpredictable, or you have a fast NVMe drive, zswap’s write‑behind cache and dynamic pool offer a smoother experience.

Why I now prefer zswap

So why the title of this post? Simply put, my real‑world needs and hardware have changed. In recent years, all the systems I work with have fast storage: SSDs or better. Beyond that, here is why I flipped my stance:

  • Swap demand beyond what zram can comfortably handle: My workstation often uses 10–20 GB of swap. Allocating a fixed zram device big enough to absorb that would mean dedicating half of my RAM to compressed swap, leaving too little for everything else. Zswap lets me use only as much RAM as needed for the cache and gracefully spills to my NVMe swap file when the working set grows.
  • NVMe swap is fast: As I mentioned, with an NVMe drive, the latency penalty of writing pages to disk is tiny compared to HDDs or older SATA SSDs. Zswap still reduces writes by caching compressible pages, but if a page is incompressible or the cache is full, writing it to NVMe is not the nightmare it once was.
  • Better behavior during suspend/resume: Without a giant pre‑allocated zram device, the kernel has more free memory for device drivers and kernel work queues during suspend and resume. In my tests the system no longer hangs when waking from sleep.
  • Simpler management: Enabling zswap is a matter of setting zswap.enabled=1 at boot. No need to create and format block devices or fiddle with priorities. You can adjust the pool size and compression algorithm through sysfs on the fly.
  • Reduced risk of LRU inversion: Because zswap evicts the least recently used compressed pages to disk and keeps the most recent pages in RAM, it avoids the scenario where old data sits in fast memory while new pages go to slow storage. This makes swap behavior more predictable when the workload changes.

Situations where Zram still shines

I’m not declaring zram obsolete. There are scenarios where zram remains the right tool:

  • Small systems with no swap device: Many embedded boards, Raspberry Pi boards, and VMs don’t have swap partitions and sometimes run on SD cards or flash with limited write endurance. In those cases, zram provides fast, wear‑free swap right out of RAM. Several of my earlier articles targeted these devices, and the advice still stands.
  • Systems with extremely slow storage: If your only storage is a 5400 rpm HDD or old/slow SATA SSD, the cost of writing uncompressed pages to disk is high. Zram can avoid those writes entirely until the compressed device is full. On a system with 2–4 GB of RAM that seldom swaps more than a gigabyte or two, zram will feel snappier than zswap.
  • Transient workloads needing a temporary RAM disk: Because zram creates block devices, you can format them as ext4 or XFS and use them as /tmp or /var/cache to speed up builds and tests. The kernel docs also point this out.
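
For instance, a throwaway scratch filesystem might look like the sketch below; the 2 GB size, ext4, and the /mnt/scratch mount point are all example values, and the contents vanish on reboot:

dev=$(sudo zramctl --find --size 2G --algorithm zstd)
sudo mkfs.ext4 -q "$dev"
sudo mkdir -p /mnt/scratch
sudo mount "$dev" /mnt/scratch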

Lessons learned and final setup

So what’s my current configuration? I’ve disabled zram entirely on my desktop. My /etc/default/grub includes zswap.enabled=1 zswap.compressor=lzo zswap.max_pool_percent=25. I reduced my 32 GB swap file on my NVMe drive to 16 GB. The system remains responsive even under heavy loads and hasn’t frozen during suspend/resume since making the change.

For servers I continue to deploy zram, but only at 20% of RAM. If a server’s workload pushes beyond that, it is usually a sign that I need to add more physical memory or tune applications. It goes without saying, but under no circumstances should you allocate 50% of RAM to zram.

Conclusion

Changing one’s mind can be humbling, especially after writing a string of blog posts proclaiming something as the “best.” However, those articles are almost a decade old, and a lot has changed since then, with my distros now including zram and zswap by default. Unlike a decade ago, the Linux kernel now offers multiple memory‑compression tools, and the right one depends on your hardware and workload. Zstd in particular has become a sensible staple.

My early enthusiasm for zram came from running resource‑constrained systems and hardware with slow storage. It remains a fantastic solution for that scenario. But on a modern system, zswap’s dynamic compressed cache provides smoother performance, better suspend/resume behavior, and less tinkering.

Don’t be afraid to experiment with both; monitor your swap usage (swapon --show, free -h), try different pool sizes and compressors, and, most importantly, don’t forget the easiest fix of all: adding more RAM.

To recap! My Linux work PC experience with Zram vs. Zswap

Vanilla Linux partition post Kali Linux install.

Almost three years ago, when I built my workstation/gaming rig, I started with the default 1 GB of disk swap that came with the system, then moved to ZRAM. At first, it gave me noticeable performance improvements, but as my memory usage grew and I kept increasing the percentage of RAM allocated to ZRAM, performance started to suffer, especially with the bug that prevented resuming from sleep. My original NVMe was a budget purchase about a third the speed of my current one.

My next step was to disable ZRAM entirely and create a large 32 GB swap file. With plenty of free disk space and a fast NVMe, performance stayed smooth even under heavy memory pressure. I ran like this for a few months, no issues.

The only reason I’ve now enabled Zswap is to reduce unnecessary NVMe writes and wear from ongoing, heavily used swap. On a fast NVMe drive, the benefit is more about longevity and best practice than responsiveness. Overall, it was a lesson learned that ZRAM wasn’t as fit for this workload, or for many others, as I once believed. Today, slow storage is mostly found in Raspberry Pi boards and other budget devices; it has become uncommon on everyday desktops and servers.

