On Unix, device drivers for hardware (such as hard disks) and special device files (such as /dev/zero and /dev/random) appear in the file system just like normal files;
dd can also read from and write to these files, provided the function is implemented in their respective drivers. As a result,
dd can be used for tasks such as backing up the boot sector of a hard drive, and obtaining a fixed amount of random data. The
dd program can also perform conversions on the data as it is copied, including byte order swapping and conversion to and from the ASCII and EBCDIC text encodings.
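These conversions can be sketched directly (file names here are illustrative): conv=ebcdic and conv=ascii translate between the two encodings using inverse tables, and conv=swab swaps each pair of input bytes:

```shell
# Translate an ASCII file to EBCDIC and back again; because the two
# conversion tables are inverses, the round trip restores the original.
printf 'HELLO, WORLD' > ascii.txt
dd if=ascii.txt of=ebcdic.txt conv=ebcdic 2>/dev/null
dd if=ebcdic.txt of=roundtrip.txt conv=ascii 2>/dev/null

# Swap each pair of input bytes: "abcd" becomes "badc".
printf 'abcd' | dd conv=swab 2>/dev/null
```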
dd may be an allusion to the DD statement found in IBM's Job Control Language (JCL), where the initials stand for "Data Description." The command's syntax resembles the JCL statement more than it does other Unix commands, so the syntax may have been a joke. Another explanation for the command's name is that "cc" (for "convert and copy", as in the command's description) was already taken by the C compiler. It is also jokingly said that
dd stands for "disk destroyer" or "delete data", since when used for low-level operations on hard disks, a small mistake such as reversing the input file and output file parameters could result in the loss of some or all data on a disk.
The command line syntax of
dd differs from that of many other Unix programs, in that it uses the syntax
option=value for its command line options, rather than the more-standard
--option value or
-option=value formats. By default,
dd reads from STDIN and writes to STDOUT, but these can be changed by using the
if (input file) and
of (output file) options.
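A minimal sketch of this operand style (the file name is illustrative):

```shell
# option=value syntax: read 16 bytes from /dev/zero into a file.
dd if=/dev/zero of=zeros.bin bs=16 count=1 2>/dev/null

# With no if= or of= given, dd copies standard input to standard output.
printf 'hello' | dd 2>/dev/null
```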
Usage varies across different operating systems. Also, certain features of
dd will depend on the computer system capabilities, such as
dd's ability to implement an option for direct memory access. Sending a SIGINFO signal (or a USR1 signal on Linux) to a running
dd process makes it print I/O statistics to standard error once and then continue copying (note that signals may terminate the process on OS X).
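On Linux this can be sketched as follows (the copy shown and the log file name are illustrative):

```shell
# Start a long-running copy in the background, then request progress
# statistics with SIGUSR1; dd reports to standard error and keeps copying.
# (On BSD and macOS the signal to send is SIGINFO instead.)
dd if=/dev/zero of=/dev/null bs=1M count=1000000 2>progress.log &
pid=$!
sleep 1
kill -USR1 "$pid" 2>/dev/null || true
sleep 1
kill "$pid" 2>/dev/null || true
```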
dd can read standard input from the keyboard. When end-of-file (EOF) is reached,
dd will exit. Which key sequences produce signals and EOF is determined by the software environment. For example, Unix tools ported to Windows vary as to the EOF character: Cygwin uses <ctrl-d> (the usual Unix EOF) and MKS Toolkit uses <ctrl-z> (the usual Windows EOF).
Following the Unix philosophy of developing small yet capable software,
dd does one thing and contains no logic other than that required to implement the low-level decisions based on user-specified command-line options. Often, the options are changed for each run of
dd in a multi-step process to empirically produce desired results.
The GNU variant of
dd as supplied with coreutils on Linux does not describe the format of the messages it prints on standard error on completion. However, these are described by other implementations, e.g. that with BSD.
Each of the "Records in" and "Records out" lines shows the number of complete blocks transferred + the number of partial blocks, e.g. because the physical medium ended before a complete block was read, or a physical error prevented reading the complete block.
A block is a unit measuring the number of bytes that are read, written, or converted at one time. Command line options can specify a different block size for input/reading (
ibs) compared to output/writing (
obs), though the block size (
bs) option overrides both
ibs and
obs. The default value for both input and output block sizes is 512 bytes (the traditional block size of disks, and the POSIX-mandated size of "a block"). The
count option for copying is measured in blocks, as are both the
skip count for reading and
seek count for writing. Conversion operations are also affected by the "conversion block size" (
cbs).
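A sketch with separate input and output block sizes (file names illustrative); the copied data is unchanged, only the sizes of the individual reads and writes differ:

```shell
# Create 3 KiB of input, then copy it using 512-byte reads and
# 1024-byte writes.
dd if=/dev/zero of=input.bin bs=1024 count=3 2>/dev/null
dd if=input.bin of=output.bin ibs=512 obs=1024 2>/dev/null
```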
For some uses of the
dd command, block size can have an effect on performance. For example, when recovering data from a hard disk, a small block size will generally cause the most bytes to be recovered, but issuing many small reads carries overhead and can slow execution. For greater speed during copy operations, a larger block size may be used. However, because the number of bytes to copy is given by bs×count, it is impossible to copy a prime number of bytes in a single
dd command without making one of two bad choices,
bs=N count=1 (memory use) or
bs=1 count=N (read request overhead). Alternative programs (see below) permit specifying bytes rather than blocks. When
dd is used for network transfers, the block size may have also an impact on packet size, depending on the network protocol used.
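GNU dd offers a way around the prime-number limitation: the iflag=count_bytes extension (a GNU-specific flag, not in POSIX) interprets count as a byte count rather than a block count, so an exact (even prime) number of bytes can be copied with a large block size:

```shell
# Copy exactly 100003 bytes (a prime) using 64 KiB reads; without the
# flag this would require bs=1 count=100003 or bs=100003 count=1.
dd if=/dev/zero of=prime.bin bs=64K count=100003 iflag=count_bytes 2>/dev/null
```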
The value provided for block size options is interpreted as a decimal (base 10) integer and can also include suffixes to indicate multiplication. The suffix
w means multiplication by 2,
b means 512,
k means 1024,
M means 1024 × 1024,
G means 1024 × 1024 × 1024, and so on. Additionally, some implementations understand the
x character as a multiplication operator for both block size and count parameters.
For example, a block size such as
bs=2x80x18b is interpreted as 2 × 80 × 18 × 512 = 1474560 bytes, the exact size of a 1440 KiB floppy disk.
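With an implementation that understands these suffixes and the x operator (GNU dd does), the arithmetic can be verified directly; the file name here is illustrative:

```shell
# One 2x80x18b = 1474560-byte block of zeros: an empty image with the
# exact size of a 1440 KiB floppy disk.
dd if=/dev/zero of=floppy.img bs=2x80x18b count=1 2>/dev/null
```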
The
dd command can be used for a variety of purposes.
dd can duplicate data across files, devices, partitions and volumes. Data may be read from or written to any of these, though there are important differences concerning the output when writing to a partition. During the transfer, the data can also be modified using the
conv options to suit the medium.
An attempt to copy the entire disk using
cp may omit the final block if it is of an unexpected length; whereas
dd may succeed. The destination disk must be at least as large as the source.
dd if=/dev/sr0 of=myCD.iso bs=2048 conv=noerror,sync
    Creates an ISO disk image from a CD-ROM; in some cases the created ISO image may not be the same as the one which was used to burn the CD-ROM.
dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=noerror
    Clones one partition to another.
dd if=/dev/ad0 of=/dev/ad1 bs=1M conv=noerror
    Clones a hard disk "ad0" to "ad1".
The
noerror option means to keep going if there is an error, while the
sync option pads each input block that is shorter than the input block size with null bytes.
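The padding can be seen with a short input (the file name is illustrative): conv=sync fills out each short input block to the full input block size:

```shell
# Three bytes of input become one full 512-byte output block:
# 'abc' followed by 509 NUL bytes.
printf 'abc' | dd of=padded.bin ibs=512 conv=sync 2>/dev/null
```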
A master boot record can be backed up to a repair file and restored from it later.
To duplicate the first two sectors of a floppy disk:
dd if=/dev/fd0 of=MBRboot.img bs=512 count=2
To create an image of the entire master boot record (including the partition table):
dd if=/dev/sda of=MBR.img bs=512 count=1
To create an image of only the boot code of the master boot record (without the partition table):
dd if=/dev/sda of=MBR_boot.img bs=446 count=1
dd can modify data in place.
Overwrite the first 512 bytes of a file with null bytes:
dd if=/dev/zero of=path/to/file bs=512 count=1 conv=notrunc
notrunc conversion option means do not truncate the output file — that is, if the output file already exists, just replace the specified bytes and leave the rest of the output file alone. Without this option,
dd would create an output file 512 bytes long.
To duplicate a disk partition as a disk image file on a different partition:
dd if=/dev/sdb2 of=partition.image bs=4096 conv=noerror
For security reasons, it is sometimes necessary to have a disk wipe of a discarded device.
To wipe a disk by writing zeros to it,
dd can be used this way:
dd if=/dev/zero of=/dev/sda bs=4k
Another approach could be to wipe a disk by writing random data to it:
dd if=/dev/urandom of=/dev/sda bs=4k
bs=4k option makes
dd read and write four kibibytes at a time. On modern systems, an even larger block size may be faster given the greater transfer capacity of the hardware (RAID arrays, for example). Note that filling the drive with random data always takes much longer than zeroing it, because the random data must first be produced by the CPU and/or a hardware random number generator, and different designs have different performance characteristics (the PRNG behind /dev/urandom may be slower than libc's). On most relatively modern drives, zeroing renders any data the drive contains irrecoverable by software, though it may still be recoverable by special laboratory techniques.
The shred program provides an alternative for the same task. Finally, the wipe program present in many Linux distributions is a more elaborate tool offering many ways of clearing data; in the sense of the Unix philosophy mentioned before, it is the tool that does this one thing "well".
The history of open-source software (OSS) for data recovery and restoration of files, drives, and partitions started with GNU
dd in 1984, with one block size per
dd process, and no recovery algorithm other than the user's interactive session running one form of
dd after another. Then, a C program was authored October 1999 called
dd_rescue. It has two block sizes in its algorithm. But the author of the 2003 shell script
dd_rhelp that enhances
dd_rescue's data recovery algorithm, now recommends
GNU ddrescue, a C++ program that was initially released in 2004 and is now in most Linux distributions. GNU
ddrescue has the most sophisticated block-size-changing algorithm available in OSS. To help distinguish the newer GNU program from the older script, alternate names are sometimes used for GNU
ddrescue:
addrescue (the name on freecode.com and freshmeat.net),
gddrescue (the Debian package name), and
gnu_ddrescue (the openSUSE package name).
ddrescue is stable and safe.
Another open source program called
savehd7 uses a sophisticated algorithm, but it also requires the installation of its own programming-language interpreter.
To benchmark a drive, analyzing the sequential (and usually single-threaded) system read and write performance for 1024-byte blocks:
dd if=/dev/zero bs=1024 count=1000000 of=file_1GB
dd if=file_1GB of=/dev/null bs=1024
To make a file of 100 random bytes using the kernel random driver:
dd if=/dev/urandom of=myrandom bs=100 count=1
To convert a file to uppercase:
dd if=filename of=filename1 conv=ucase
Create a 1 GiB sparse file, or resize an existing file to 1 GiB without overwriting:
dd if=/dev/zero of=mytestfile.out bs=1 count=0 seek=1G
(More modern tools for this are truncate, shipped with GNU coreutils, and fallocate, shipped with util-linux.)
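The sparseness can be checked with GNU stat (assumed available): the apparent size is 1 GiB, while almost no blocks are actually allocated:

```shell
# Extend (or create) the file to 1 GiB without writing any data, then
# compare its apparent size with the number of allocated 512-byte blocks.
dd if=/dev/zero of=mytestfile.out bs=1 count=0 seek=1G 2>/dev/null
stat -c 'apparent size: %s bytes, allocated blocks: %b' mytestfile.out
```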
Seagate documentation warns, "Certain disc utilities, such as DD, which depend on low-level disc access may not support 48-bit LBAs until they are updated". Addressing ATA hard drives larger than 128 GiB requires 48-bit LBA. However, in Linux,
dd uses the kernel to read or write to raw device files. Support for 48-bit LBA has been present since version 2.4.23 of the kernel, released in 2003.
dcfldd is a fork of dd, an enhanced version developed by Nick Harbour, who at the time was working for the United States Department of Defense Computer Forensics Lab. Compared to dd, dcfldd allows more than one output file, supports simultaneous calculation of multiple checksums, provides a verification mode for file matching, and can display the percentage progress of an operation.