Optimizing dd Performance on Linux
The dd command can feel sluggish because it defaults to a small block size of 512 bytes. You can dramatically improve throughput by increasing the bs (block size) parameter.
Basic optimization
Instead of:
dd if=/dev/sda2 of=./sda2.bak
Use:
dd if=/dev/sda2 of=./sda2.bak bs=1M
The bs parameter controls how much data dd reads and writes in each operation. Larger block sizes mean fewer read() and write() system calls, and per-call overhead is where most of the time goes at the default size. A block size of 1M (1 megabyte) is a solid starting point for most workloads and can easily be 10-50x faster than the default 512-byte size.
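You can feel the per-call overhead in isolation by copying between /dev/zero and /dev/null, where no real storage is involved and only syscall cost is measured. The counts below are illustrative; both runs move the same 64 MiB:

```shell
# Same 64 MiB, two block sizes; only the number of system calls differs,
# since /dev/zero and /dev/null touch no actual device.
time dd if=/dev/zero of=/dev/null bs=512 count=131072   # 131072 * 512 B = 64 MiB
time dd if=/dev/zero of=/dev/null bs=1M count=64        # 64 * 1 MiB   = 64 MiB
```

The first run makes roughly 2,000x as many syscalls as the second, and the timing difference reflects almost nothing but that.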
Finding the optimal block size
While 1M is a solid general-purpose choice, you can tune further based on your hardware:
- SSDs and fast storage: try bs=4M or even bs=16M
- Slow USB drives: bs=512K balances throughput with memory usage
- Network storage (NFS, iSCSI): bs=2M or bs=4M often works best
- Large sequential reads: test with bs=64M if you have spare RAM
Test different sizes against your actual hardware:
dd if=/dev/zero of=/tmp/test bs=1M count=1000 oflag=direct
Use oflag=direct to bypass the page cache and get realistic measurements, and write the test file to the filesystem you actually plan to use (note that direct I/O is not supported on tmpfs, which often backs /tmp).
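A small loop can automate the comparison. This is a sketch: /tmp/ddtest is a placeholder scratch path (point it at the filesystem you care about), and numfmt from coreutils converts each size suffix into bytes so every run writes the same total:

```shell
# Write 256 MiB at each candidate block size and print dd's summary line.
# Reminder: oflag=direct fails on tmpfs, so use a disk-backed path.
TESTFILE=/tmp/ddtest
TOTAL=$((256 * 1024 * 1024))
for bs in 64K 512K 1M 4M; do
    printf 'bs=%-5s ' "$bs"
    dd if=/dev/zero of="$TESTFILE" bs="$bs" \
       count=$(( TOTAL / $(numfmt --from=iec "$bs") )) \
       oflag=direct 2>&1 | tail -n 1
done
rm -f "$TESTFILE"
```

Each iteration prints dd's final statistics line, so you can read the throughput for each block size directly off the output.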
Additional performance improvements
Disable buffering for more predictable behavior:
dd if=/dev/sda2 of=./sda2.bak bs=1M oflag=direct iflag=direct
The direct flags tell dd to open the input and output with O_DIRECT, bypassing the page cache on both sides. This keeps a large copy from filling RAM with dirty pages (and stalling when the kernel flushes them), and from evicting other applications' cached data. Note that direct I/O generally requires the block size to be a multiple of the device's sector size.
Monitor progress:
dd if=/dev/sda2 of=./sda2.bak bs=1M status=progress
The status=progress option (available in GNU coreutils 8.24+) displays real-time throughput and completion percentage.
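On coreutils releases older than 8.24, status=progress is unavailable, but GNU dd will print its current statistics to stderr when sent SIGUSR1. A self-contained sketch, using /dev/zero and /dev/null so it runs anywhere; substitute your real source and target:

```shell
# Start a copy in the background, then poke it for a progress report.
dd if=/dev/zero of=/dev/null bs=1M count=16384 &
DD_PID=$!
sleep 1
kill -USR1 "$DD_PID" 2>/dev/null   # dd prints records copied and throughput
wait "$DD_PID"
```

You can send SIGUSR1 repeatedly; dd reports and keeps running each time. (BSD dd uses SIGINFO for this instead.)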
Combine optimizations:
dd if=/dev/sda2 of=./sda2.bak bs=4M oflag=direct iflag=direct status=progress
When to use alternatives
For very large backups or clones, consider faster tools:
- ddrescue (GNU ddrescue): better error handling and resume capability
- pv (pipe viewer): wraps dd and adds progress reporting
- rsync: for file-level backups with checksums
- zstd or pigz: pipe through these for simultaneous compression
Example with pv:
pv -tpreb /dev/sda2 | dd of=./sda2.bak bs=1M
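The compression route looks like this. A sketch using gzip so it runs anywhere (pigz is a parallel drop-in replacement, and zstd typically compresses better at similar speed); the partition path here is replaced by a scratch file purely so the example is self-contained:

```shell
# SRC stands in for the partition you are imaging (e.g. /dev/sda2);
# a scratch file is used here so the sketch is runnable as-is.
SRC=/tmp/fake_partition
dd if=/dev/zero of="$SRC" bs=1M count=8 2>/dev/null    # stand-in data

dd if="$SRC" bs=1M | gzip -1 > sda2.bak.gz             # image + compress
gunzip -c sda2.bak.gz | cmp - "$SRC" && echo "round trip OK"
rm -f "$SRC" sda2.bak.gz
```

gzip -1 favors speed over ratio, which suits a streaming backup; to restore onto a real device you would run the decompression side of the pipe into dd of=/dev/sdXN.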
Practical numbers
On typical modern hardware:
- Default bs=512: 20-40 MB/s
- bs=1M: 100-400 MB/s
- bs=4M with oflag=direct: 200-800 MB/s (varies by device)
The gain depends entirely on your storage device’s characteristics. SSDs benefit more from larger block sizes than mechanical drives, but even mechanical drives see substantial improvements.
Start with bs=1M and adjust upward if you see room for improvement. Always test with small counts before running full backups.
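A dry run along those lines might look like this: read a small slice at the candidate block size and discard it, so nothing is written anywhere. /dev/zero is used here so the sketch runs anywhere; point if= at your real source (e.g. /dev/sda2) to measure its actual read speed:

```shell
# Read 256 MiB at the candidate block size; of=/dev/null discards the data.
dd if=/dev/zero of=/dev/null bs=4M count=64 status=progress
```

If the reported throughput looks sane, rerun the same command with your real if= and of= and without the count limit.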
Troubleshooting Common Issues
When encountering problems on Linux systems, follow a systematic approach. Check system logs first using journalctl for systemd-based distributions. Verify service status with systemctl before attempting restarts. For network issues, use ip addr and ss -tulpn to diagnose connectivity problems.
Package management issues often stem from stale caches. Run dnf clean all on Fedora or apt clean on Ubuntu before retrying failed installations. If a package has unmet dependencies, apt --fix-broken install can usually repair them on Debian-based systems; on Fedora, dnf distro-sync often resolves version mismatches.
Related System Commands
These commands are frequently used alongside the tools discussed in this article:
- systemctl status service-name – Check if a service is running
- journalctl -u service-name -f – Follow service logs in real time
- rpm -qi package-name – Query installed package information
- dnf history – View package transaction history
- top or htop – Monitor system resource usage