On Unix, device drivers for hardware (such as hard disks) and special device files (such as /dev/zero and /dev/random) appear in the file system just like normal files; dd can also read from and write to these files, provided the function is implemented in their respective drivers. As a result, dd can be used for tasks such as backing up the boot sector of a hard drive or obtaining a fixed amount of random data. The dd program can also perform conversions on the data as it is copied, including byte-order swapping and conversion to and from the ASCII and EBCDIC text encodings.
The name dd may be an allusion to the DD statement found in IBM's Job Control Language (JCL), where the initials stand for "Data Description." The command's syntax resembles the JCL statement more than it does other Unix commands, so the syntax may have been a joke. Another explanation for the command's name is that "cc" (for "convert and copy", as in the command's description) was already taken by the C compiler. It is also jokingly said that dd stands for "disk destroyer" or "delete data", since when used for low-level operations on hard disks, a small mistake such as reversing the input file and output file parameters could result in the loss of some or all data on a disk.
The command line syntax of dd differs from many other Unix programs, in that it uses the syntax option=value for its command line options, rather than the more-standard --option value or -option=value formats. By default, dd reads from stdin and writes to stdout, but these can be changed by using the if (input file) and of (output file) options.
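A minimal invocation shows the option=value style (input.bin and output.bin are hypothetical file names used for illustration):

```shell
# Create a small test file, then copy it with dd's option=value syntax:
# "if" names the input file, "of" the output file.
head -c 1024 /dev/urandom > input.bin
dd if=input.bin of=output.bin
```

Because dd defaults to stdin and stdout, `dd < input.bin > output.bin` would produce the same result via shell redirection.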
Usage varies across operating systems, and certain features of dd depend on the capabilities of the computer system, such as dd's ability to implement an option for direct memory access. Sending a SIGINFO signal (or a USR1 signal on Linux) to a running dd process makes it print I/O statistics to standard error once and then continue copying (note that on OS X such signals may terminate the process). dd can read standard input from the keyboard; when end-of-file (EOF) is reached, dd exits. Signals and EOF are determined by the software: for example, Unix tools ported to Windows vary as to which key sequence signals EOF. Cygwin uses Ctrl+D (the usual Unix EOF) and MKS Toolkit uses Ctrl+Z (the usual Windows EOF).
Following the Unix philosophy of developing small yet capable software, dd does one thing and contains no logic other than that required to implement the low-level decisions based on user-specified command-line options. Often, the options are changed for each run of dd in a multi-step process to empirically produce desired results.
The GNU variant of dd as supplied with coreutils on Linux does not describe the format of the messages displayed on standard output on completion. However, these are described by other implementations, e.g. the one shipped with BSD.
Each of the "Records in" and "Records out" lines shows the number of complete blocks transferred plus the number of partial blocks. A partial block occurs, for example, when the physical medium ends before a complete block is read, or when a physical error prevents reading the complete block.
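The counting can be demonstrated with a file whose size is not a multiple of the block size (sample.bin is a hypothetical name). With GNU or BSD dd, a 1636-byte input read in 512-byte blocks yields 3 complete blocks plus 1 partial block:

```shell
# 1636 = 3 x 512 + 100, so dd reads 3 complete blocks and 1 partial block
# and reports "3+1 records in" / "3+1 records out" on standard error.
head -c 1636 /dev/zero > sample.bin
dd if=sample.bin of=/dev/null bs=512
```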
A block is a unit measuring the number of bytes that are read, written, or converted at one time. Command line options can specify a different block size for input/reading (ibs) compared to output/writing (obs), though the block size (bs) option will override both ibs and obs. The default value for both input and output block sizes is 512 bytes (the traditional block size of disks, and POSIX-mandated size of "a block"). The count option for copying is measured in blocks, as are both the skip count for reading and seek count for writing. Conversion operations are also affected by the "conversion block size" (cbs).
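The interplay of these options can be seen in a short sketch (disk.img and out.img are hypothetical names): skip and count are measured in input blocks (ibs), while seek is measured in output blocks (obs).

```shell
# Make a 2048-byte source, then skip the first 512-byte input block and
# copy the next two, re-blocking the output into 1024-byte writes.
head -c 2048 /dev/urandom > disk.img
dd if=disk.img of=out.img ibs=512 obs=1024 skip=1 count=2
```

The result is a 1024-byte file containing bytes 512 through 1535 of the source.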
For some uses of the dd command, block size has an effect on performance. For example, when recovering data from a hard disk, a small block size will generally cause the most bytes to be recovered, but issuing many small reads carries overhead and can hurt performance. For greater speed during copy operations, a larger block size may be used. However, because the number of bytes to copy is given by bs×count, it is impossible to copy a prime number of bytes in a single dd command without making one of two bad choices: bs=N count=1 (high memory use) or bs=1 count=N (read-request overhead). Alternative programs (see below) permit specifying bytes rather than blocks. When dd is used for network transfers, the block size may also have an impact on packet size, depending on the network protocol used.
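A prime byte count can still be copied without either bad choice by splitting the work across two dd runs, in the multi-step spirit described above. This sketch copies exactly 7919 bytes (a prime); src and dst are hypothetical file names:

```shell
# Hypothetical 10000-byte source file.
head -c 10000 /dev/urandom > src
# Bulk of the data in large blocks: 7 x 1024 = 7168 bytes.
dd if=src of=dst bs=1024 count=7
# Remaining 751 bytes one at a time, resuming where the first run stopped
# (with bs=1, skip and seek are effectively byte offsets).
dd if=src of=dst bs=1 count=751 skip=7168 seek=7168 conv=notrunc
```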
The value provided for block size options is interpreted as a decimal (base 10) integer and can also include suffixes to indicate multiplication. The suffix w means multiplication by 2, b means 512, k means 1024, M means 1024 × 1024, G means 1024 × 1024 × 1024, and so on. Additionally, some implementations understand the x character as a multiplication operator for both block size and count parameters.
For example, a block size such as bs=2x80x18b is interpreted as 2 × 80 × 18 × 512 = 1474560 bytes, the exact size of a 1440 KiB floppy disk.
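Assuming an implementation that supports the x multiplier (GNU dd does), the arithmetic can be checked by writing a single block of that size to a file (floppy.img is a hypothetical name):

```shell
# 2x80x18b = 2 x 80 x 18 x 512 = 1474560 bytes, the size of a 1440 KiB floppy.
dd if=/dev/zero of=floppy.img bs=2x80x18b count=1
```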
The dd command can be used for a variety of purposes.
dd can duplicate data across files, devices, partitions and volumes. The data may be read from or written to any of these, though there are important differences concerning the output when going to a partition. During the transfer, the data can also be modified using the conv options to suit the medium.
An attempt to copy an entire disk using cp may omit the final block if it is of an unexpected length, whereas dd may succeed. The source and destination disks should be the same size.
dd if=/dev/sr0 of=myCD.iso bs=2048 conv=noerror,sync
Creates an ISO disk image from a CD-ROM; in some cases the created ISO image may not be the same as the one that was used to burn the CD-ROM.
dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=noerror
Clones one partition to another.
dd if=/dev/ad0 of=/dev/ad1 bs=1M conv=noerror
Clones hard disk "ad0" to "ad1".
The noerror option means to keep going if there is an error, while the sync option causes short input blocks to be padded with null bytes to the full input block size.
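The padding can be observed directly on a small file (short.bin and padded.bin are hypothetical names): with conv=sync, a short input block is filled out to ibs with null bytes.

```shell
# A single 100-byte input block is padded with 412 null bytes to ibs (512).
head -c 100 /dev/urandom > short.bin
dd if=short.bin of=padded.bin bs=512 conv=sync
```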
A master boot record can be backed up and restored: it can be transferred to and from a repair file.
To duplicate the first two sectors of a floppy drive:
dd if=/dev/fd0 of=MBRboot.img bs=512 count=2
To back up the full 512-byte master boot record of a disk (bootstrap code plus partition table):
dd if=/dev/sda of=MBR.img bs=512 count=1
To back up only the 446-byte bootstrap code of the master boot record, leaving the partition table untouched:
dd if=/dev/sda of=MBR_boot.img bs=446 count=1
dd can modify data in place. For example, this overwrites the first 512 bytes of a file with null bytes:
dd if=/dev/zero of=path/to/file bs=512 count=1 conv=notrunc
The notrunc conversion option means do not truncate the output file — that is, if the output file already exists, just replace the specified bytes and leave the rest of the output file alone. Without this option, dd would create an output file 512 bytes long.
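The difference can be checked on a throwaway file (file.bin is a hypothetical name): after the command, the file keeps its original length, with only the first 512 bytes replaced.

```shell
# Make a 1000-byte file of 'A' characters, then zero its first 512 bytes
# in place; conv=notrunc leaves the remaining 488 bytes untouched.
head -c 1000 /dev/zero | tr '\0' 'A' > file.bin
dd if=/dev/zero of=file.bin bs=512 count=1 conv=notrunc
```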
To duplicate a disk partition as a disk image file on a different partition:
dd if=/dev/sdb2 of=partition.image bs=4096 conv=noerror
For security reasons, it is sometimes necessary to wipe the disk of a discarded device.
To wipe a disk by writing zeros to it, dd can be used this way:
dd if=/dev/zero of=/dev/sda bs=4k
Another approach could be to wipe a disk by writing random data to it:
dd if=/dev/urandom of=/dev/sda bs=4k
The bs=4k option makes dd read and write 4 kibibytes at a time. On modern systems, an even larger block size may be beneficial given the transfer capacity of the hardware (RAID systems, for example). Note that filling the drive with random data will always take much longer than zeroing it, because the random data must first be generated by the CPU and/or a hardware random number generator, and different designs have different performance characteristics. (The PRNG behind /dev/urandom may be slower than libc's.) On most relatively modern drives, zeroing the drive will render any data it contains irrecoverable by software; however, it may still be recoverable by special laboratory techniques.
The shred program provides an alternative method for the same task. Finally, the wipe program present in many Linux distributions provides an elaborate tool with many ways of clearing (one that, in the terms of the Unix philosophy mentioned earlier, does this one thing "well").
The early history of open-source software for data recovery and restoration of files, drives and partitions included the GNU dd, whose copyright notice starts in 1985, with one block size per dd process, and no recovery algorithm other than the user's interactive session running one form of dd after another. Then, a C program called dd_rescue was written in October 1999, having two block sizes in its algorithm. However, the author of the 2003 shell script dd_rhelp, which enhances dd_rescue's data recovery algorithm, recommends GNU ddrescue, a data recovery program unrelated to dd that was initially released in 2004.
To help distinguish the newer GNU program from the older script, alternate names are sometimes used for GNU's ddrescue, including addrescue (the name on freecode.com and freshmeat.net), gddrescue (Debian package name), and gnu_ddrescue (openSUSE package name). Another open-source program called savehd7 uses a sophisticated algorithm, but it also requires the installation of its own programming-language interpreter.
To benchmark a drive and analyze the sequential (and usually single-threaded) read and write performance of the system for 1024-byte blocks:
dd if=/dev/zero of=file_1GB bs=1024 count=1000000
dd if=file_1GB of=/dev/null bs=1024
To make a file of 100 random bytes using the kernel random driver:
dd if=/dev/urandom of=myrandom bs=100 count=1
To convert a file to uppercase:
dd if=filename of=filename1 conv=ucase
As stated in documentation provided by Seagate, "certain disc [sic] utilities, such as DD, which depend on low-level disc [sic] access may not support 48-bit LBAs until they are updated". Using ATA hard disk drives over 128 GiB in size requires 48-bit LBA support from the system; however, in Linux, dd uses the kernel to read from or write to raw device files rather than accessing the hardware directly. At the same time, support for 48-bit LBA has been present in the kernel since version 2.4.23, released in 2003.
dcfldd is a fork of dd, an enhanced version developed by Nick Harbour, who at the time was working for the United States Department of Defense Computer Forensics Lab. Compared to dd, dcfldd allows more than one output file, supports simultaneous multiple checksum calculations, provides a verification mode for file matching, and can display the percentage progress of an operation.
As the author of dd_rhelp notes: "For some times, dd_rhelp was the only tool (AFAIK) that did this type of job, but since a few years, it is not true anymore: Antonio Diaz did write an ideal replacement for my tool: GNU 'ddrescue'."