
Erase your disk: secure and fast.

It’s easy to recover deleted files from a freshly formatted disk: formatting only rewrites the partition table and filesystem index, not the data itself. I’m a bit paranoid about my data, so I decided to really wipe a flash drive I wanted to sell.

Securely erasing disks is an old topic and there are plenty of tools for it. Still, writing your own shell script for a simple task like this is a good thing: it’s portable, simple, readable, and you will probably learn something along the way.

We can use /dev/zero as a data source to write zeros over the whole disk. In theory, the data should then be impossible to recover with software alone. But there is a more thorough way to destroy the data: using /dev/urandom as the data source.
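For example, assuming the flash drive shows up as /dev/disk2 on macOS (the device name here is only an illustration; check yours with diskutil list, and double-check it, because this overwrites the whole drive), a single zero-fill pass looks like:

diskutil unmountDisk /dev/disk2
sudo dd if=/dev/zero of=/dev/rdisk2 bs=1m

/dev/rdisk2 is the raw variant of /dev/disk2; it bypasses the buffer cache and is usually faster for big sequential writes.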

Comparing /dev/zero and /dev/urandom speed

I’m on a 13-inch MacBook Pro (2010), Core 2 Duo, with a 120 GB Vertex 3 SSD. First, let’s create a 5 GB file from /dev/zero:

dd if=/dev/zero of=out.txt bs=1m count=5000 conv=noerror

About 24 seconds. Not bad: 206 MB/sec. This is essentially the raw disk write speed, because generating zeros costs no CPU, no RAM and no disk read access. The only bottleneck is the disk write throughput.
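On macOS, dd prints these statistics when it finishes; you can also wrap the command in time, or press Ctrl+T (SIGINFO) while it runs to see progress:

time dd if=/dev/zero of=out.txt bs=1m count=5000 conv=noerror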

Let’s try with /dev/urandom:

dd if=/dev/urandom of=out.txt bs=1m count=5000 conv=noerror

575 seconds. That’s 23x slower: 8.72 MB/sec.

This is a normal and expected result. dd used 99% of my CPU during the test, because random data generation is CPU-bound.

Some math

To securely erase a 100 GB disk:

  • With /dev/zero (206 MB/sec): 8 min.
  • With /dev/urandom (8.7 MB/sec): around 3h.
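
These estimates are just size divided by throughput. A quick sanity check with bc, counting 100 GB as 100,000 MB:

echo "100000 / 206 / 60" | bc -l    # /dev/zero: ~8.1 minutes
echo "100000 / 8.7 / 3600" | bc -l  # /dev/urandom: ~3.2 hours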

How can we speed up the process?

The goal is to fill the disk with random data and, if possible, make multiple passes. /dev/urandom is a good source, but it’s slow: 3h for a single pass on a 100 GB disk. As you have probably read on security blogs, 7 passes over a disk is considered a strong erasing option. So, 21h for that disk? Recently, I tried a way to keep using /dev/urandom data, but faster. The trick is simple:

Infinite loop:

  • generate a random block of size S (S is random)
  • copy it onto the disk N times (N is random)

This speeds up the process because we generate much less random data, while keeping a decent level of security for erasing the disk. Here is a simple bash implementation of the concept:


#!/bin/bash

# CONFIGURE:
dir="/tmp/seed" # or /path/to/my/usb
# ============================================
# ============================================
mkdir -p "$dir"

n=0 # copy counter, incremented across loops

function generate () {
    # remove the previous base block (-f so the first run doesn't fail)
    rm -f "$dir/base"

    # generate a block between 100 and 200 MB.
    # Use just $RANDOM for a bigger interval.
    b=$(( RANDOM % 100 + 100 ))
    echo "generating base: $b MB"

    # bs=1m is BSD/macOS dd syntax; use bs=1M with GNU dd
    dd if=/dev/urandom of="$dir/base" bs=1m count=$b conv=noerror
}

function copy () {
    # increment n so each copy gets a new file name
    n=$(( n + 1 ))
    # stop once the disk is full and cp starts failing
    cp "$dir/base" "$dir/copy-$n" || { echo "Disk full after $n copies."; exit 0; }
}

# ============================================
# LOOP
while true; do
    generate

    # copy the base N times, where N is random between 100 and 200
    a=$(( RANDOM % 100 + 100 ))
    echo "LOOP $a times for copy ..."

    for (( i = 0; i < a; i++ )); do
        copy
        echo "Copy $i / $a."
    done
done
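
To run it against a real drive, point dir at the drive’s mount point (on macOS, something like /Volumes/MYUSB, depending on how your drive is named), let the script run until the disk is full, then delete the generated files and run it again for each extra pass you want.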


Results

  • With /dev/zero (206 MB/sec): 8 min.
  • With /dev/urandom and the loop (110 MB/sec): around 15 min.
  • With /dev/urandom (8.7 MB/sec): around 3h.

The previous script let me fill the disk at an average of 110 MB/sec. That’s about 2x slower than /dev/zero, but 13x faster than “pure” /dev/urandom. And the security level stays good, since all the written data comes from /dev/urandom. It’s not perfect, but I just wanted to share this little script with you.

Damian Le Nouaille