TZWorks LLC
System Analysis and Programming
www.tzworks.com
(Version 0.48)
TZWorks LLC software and related documentation ("Software") is governed by separate licenses issued from TZWorks LLC. The User Agreement, Disclaimer, and/or Software may change from time to time. By continuing to use the Software after those changes become effective, you agree to be bound by all such changes. Permission to use the Software is granted provided that (1) use of such Software is in accordance with the license issued to you and (2) the Software is not resold, transferred or distributed to any other person or entity. Refer to the specific EULA issued to you for the terms and conditions. There are 3 types of licenses available: (i) for educational purposes, (ii) for demonstration and testing purposes and (iii) business and/or commercial purposes. Contact TZWorks LLC (info@tzworks.com) for more information regarding licensing and/or to obtain a license. To redistribute the Software, prior approval in writing is required from TZWorks LLC. The terms in your specific EULA do not give the user any rights in intellectual property or technology, but only a limited right to use the Software in accordance with the license issued to you. TZWorks LLC retains all rights to ownership of this Software.
The Software is subject to U.S. export control laws, including the U.S. Export Administration Act and its associated regulations. The Export Control Classification Number (ECCN) for the Software is 5D002, subparagraph C.1. The user shall not, directly or indirectly, export, re-export or release the Software to, or make the Software accessible from, any jurisdiction or country to which export, re-export or release is prohibited by law, rule or regulation. The user shall comply with all applicable U.S. federal laws, regulations and rules, and complete all required undertakings (including obtaining any necessary export license or other governmental approval), prior to exporting, re-exporting, releasing, or otherwise making the Software available outside the U.S.
The user agrees that this Software made available by TZWorks LLC is experimental in nature and use of the Software is at user's sole risk. The Software could include technical inaccuracies or errors. Changes are periodically added to the information herein, and TZWorks LLC may make improvements and/or changes to Software and related documentation at any time. TZWorks LLC makes no representations about the accuracy or usability of the Software for any purpose.
ALL SOFTWARE ARE PROVIDED "AS IS" AND "WHERE IS" WITHOUT WARRANTY OF ANY KIND INCLUDING ALL IMPLIED WARRANTIES AND CONDITIONS OF MERCHANTABILITY, FITNESS FOR ANY PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL TZWORKS LLC BE LIABLE FOR ANY KIND OF DAMAGE RESULTING FROM ANY CAUSE OR REASON, ARISING OUT OF IT IN CONNECTION WITH THE USE OR PERFORMANCE OF INFORMATION AVAILABLE FROM THIS SOFTWARE, INCLUDING BUT NOT LIMITED TO ANY DAMAGES FROM ANY INACCURACIES, ERRORS, OR VIRUSES, FROM OR DURING THE USE OF THE SOFTWARE.
The Software is the original work of TZWorks LLC. However, to be in compliance with the Digital Millennium Copyright Act of 1998 ("DMCA"), we agree to investigate and disable any material reported for copyright infringement. Contact TZWorks LLC at email address: info@tzworks.com, regarding any DMCA concerns.
dup is a command line tool designed for incident response: extracting master boot records, reading raw clusters, or imaging a portion of a drive. Added to the functionality of the tool is the ability to copy files and/or folders. If targeting NTFS type volumes, the tool can use its NTFS engine to copy files that are locked down by the operating system by accessing their underlying data clusters. Originally architected for Windows, the tool also has compiled versions for Linux and macOS.
To use this tool, an enterprise license is required; the license file must reside in the same directory as the binary in order for the tool to run.
The dup tool is flexible in that it allows one to: (a) generate disk statistics, (b) pull out master boot record (MBR) data, (c) image a drive or volume, (d) copy files or folders, and (e) merge and combine files with a few utilities. The tool makes use of the zlib v.1.2.11 library from Jean-loup Gailly and Mark Adler for general compression. Below is the menu of various options. Details of each option can be found in the sections that follow.
Usage:

 Disk stats
   -scandrives
   -scan_mountpts
   -diskstats <#>
   -imagestats <dd image path/file>
   -vmdk "file1 | file2 | .."

 MBR stats/data
   -mbr [-disk <#> | -file <MBR file>]
   -mbr_compare -disk1 <disk#1> -disk2 <disk#2>
   -mbr_compare -disk1 <disk#> [-file <MBR expected> | -image <dd img>]

 Image drive or volume
   -copydrive <drv #> -out <dst> [image options]
   -copyvolume <vol letter> -out <dst> [image options]
   [image options]
     -offset <byte offset>    = rounded up to a sector boundary
     -size <# bytes>          = rounded up to a sector boundary
     -read_timeout <# secs>   = 2 secs or above [only for bad drives]
     -retries <num>           = # attempts after cluster read failure
     -gzip                    = compress the output
     [-md5 | -sha1]           = compute MD5 or SHA1 hash of the output
     -force_volread           = *** only avail for [-copyvolume]

 Copy Volume Shadow [raw data]
   -copysnaps <vol or image> -out <dir> [-gzip]

 Copy files from a volume (*** means experimental)
   -copyfile [<file1|file2|..> | -pipe] -out <dst> [file opts]
   -copydir <folder> -out <dst> [-level <#>] [file opts]
   -copyscript <script> [file opts]
   -copygroup [grp switches] -out <dst> [file opts] [-vss <#>]
   [file/folder copy options]
     -image <image file>              = only for unmounted raw NTFS 'dd' images
     -filter <*partial*|*.ext>        = filter files
     -filter_sqlite                   = *** filter on SQLite files
     -filter_plist                    = *** filter on plist files
     -filter_esedb                    = *** filter on ESE db files
     -filter_start_date <yyyy-mm-dd>  = *** files at/greater than date
     -filter_stop_date <yyyy-mm-dd>   = *** files at/less than date
     -ntfsraw                         = *** force raw cluster reads
     -ads_rename                      = for ADS replace colon with underscore
     -gzip                            = compress results
     -incl_indx                       = *** incl INDX data (NTFS only)
   [-copygroup switches (for Windows)]
     -pull_sysfiles            = pull $MFT, $UsnJrnl:$J files
     -pull_reghives            = pull reg hives
     -pull_evtlogs             = pull event and other logs
     -pull_lnks                = pull LNK files and JumpLists
     -pull_pfs                 = pull prefetch files
     -pull_systrash            = pull recycle bin from system vol
     -pull_userdbs             = pull various user acct dbs
     -pull_browsers            = pull browser artifacts
     -pull_all                 = pull all the above
     -image <path to image>    = only for unmounted raw NTFS 'dd' images
     -sysvol <path to mountpt> = only for mounted volumes understood by OS
   [still experimental (for Windows)]
     -pull_livestats           = pull network and process/task stats
   [-copygroup addon volume shadow copy options]
     -vss <index>              = target files in specified Vol Shadow
     -vssall <partition>       = target files all Vol Shadows in partition

 File Utilities (*** means experimental)
   -expand <file> -out <dst>  = decompress file that used -gzip
   -merge -fromfile <f1> -tofile <f2> -tofile_offset <#> [-skip_null_sectors]
   -combine_files <file1|file2|...> -out <dst file> = appends to dst file
   -md5 [<file> | -pipe]      = compute md5 hash of file or files
   -sha1 [<file> | -pipe]     = compute sha1 hash of file or files
   -sqlite_log <file>         = *** creates a SQLite log file
   -wipefile <file>           = *** erase file
   -wipedir <folder>          = *** erase dir/all subdirs
The first category of options is used to gather disk statistics. One can use the -scandrives option to enumerate all the disks attached to a computer or target a specific disk via the -diskstats <drive number> option. The report generated shows the drive and its associated partitions, listed by disk offset and size.
The second category is for extracting and analyzing the MBR. With the advent of malware using the boot record as an injection vector as well as a persistence mechanism to compromise your computer, analysis of this artifact should be part of a normal investigation. The dup tool has some rudimentary capabilities to assist in this area. It allows one to output the MBR for quick analysis (-mbr) or compare the content of the MBR with a known good version (-mbr_compare). The quick analysis functionality will look for abnormal branches from the MBR code to other locations. This simple behavior is indicative of a number of malware techniques.
The third category is for imaging disks, volumes, or specific sections of a drive or volume. While there are many other tools available to do this, adding this functionality to dup required minimal additional code.
The fourth category, in some cases duplicative of one of our other tools (ntfscopy), is the ability to copy files. Unlike ntfscopy, dup offers the ability to: (a) target files in volumes that are not NTFS, (b) target groups of files, (c) use a script to target custom sets of files, and (d) compress the results during the copy operation. ntfscopy, on the other hand, focuses only on NTFS volumes and on the internals of each file copied, and therefore offers the option to extract the metadata for each file copy. So while there is overlap between these two tools, there are areas of file copy functionality specific to each. The dup tool is geared toward the incident responder, allowing the responder to quickly target and gather specific groups of files in a seamless way.
Finally, the fifth category was added to aid in the reconstruction of partial images/files and/or resulting collections. The reconstruction option -merge takes a partial image and merges it into a larger image. This is useful when imaging a drive with bad sectors, which may yield a piecemeal set of images across sections of the drive. Using the -merge option, one can place each partial fragment at its proper offset and create a semi-complete replica of a bad drive. The -combine_files option takes a number of files and concatenates them into a final output file. Lastly, the -untar and -expand options allow one to use the dup tool to undo any tar (tape archive) packing and/or gzip compression.
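The merge semantics described above can be sketched in a few lines of Python. This is only an illustration of the behavior (write each source sector at a destination offset, optionally skipping all-null source sectors so they never clobber good data), not dup's actual implementation; the 512-byte sector size is an assumption.

```python
SECTOR = 512  # assumed sector size for this sketch

def merge(fromfile, tofile, tofile_offset, skip_null_sectors=False):
    """Write fromfile's bytes into tofile starting at tofile_offset.
    With skip_null_sectors, destination sectors are left untouched
    wherever the corresponding source sector is all zeros."""
    with open(fromfile, "rb") as src, open(tofile, "r+b") as dst:
        pos = tofile_offset
        while True:
            sector = src.read(SECTOR)
            if not sector:
                break
            if not (skip_null_sectors and sector == b"\x00" * len(sector)):
                dst.seek(pos)
                dst.write(sector)
            pos += len(sector)
```

The skip-null-sectors behavior matters when a fragment was read from a failing drive: unreadable regions often come back as zeros, and blindly writing them would overwrite data already recovered in the destination image.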
If desiring to review the MBR, one can quickly scan and extract it using the -mbr option. This option does a quick-look analysis of the master boot data and determines if any strange jumps occur. The output includes a hex dump of the MBR data along with the primary partitions.
In the user's guide there is an example of a boot record infected by an MBR bootkit, and how dup displays this type of data when it is detected.
For companies with a standard base image on their computers, one can extract the MBR from this known good image and compare it against the MBR of the endpoint being analyzed. With two versions of the MBR (a known good version and one from the endpoint), this comparison can be done with any reasonable hex editor. If desiring to do it live on a target box, one can use dup via the -mbr_compare option.
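The comparison itself is straightforward, since an MBR occupies the first 512 bytes of the disk. The layout offsets below (boot code through 0x1BD, four partition entries at 0x1BE-0x1FD, 0x55AA signature at the end) are standard MBR facts; the code is an illustrative diff, not dup's analysis engine.

```python
def diff_mbr(good: bytes, suspect: bytes):
    """Return the byte offsets at which two 512-byte MBR dumps differ."""
    assert len(good) == len(suspect) == 512
    return [i for i in range(512) if good[i] != suspect[i]]

def classify(offset: int) -> str:
    """Name the MBR region a differing offset falls in."""
    if offset < 0x1BE:
        return "boot code"
    if offset < 0x1FE:
        return "partition table"
    return "signature"
```

A difference in the boot-code region of an otherwise identical MBR is exactly the kind of anomaly that warrants a closer look for bootkit activity.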
dup can image a device by specifying either a volume letter or a disk number, using -copyvolume <letter> or -copydrive <disk number>, respectively. The imaging algorithm uses a minimum of two threads (one for reading the device and one for writing the output). The default option outputs the data in 'dd' format, which is just a straight bit-for-bit copy of the data on the device. The tool also allows one to compress the imaging results via the -gzip option. The gzip functionality is made available by the authors of the zlib library [ref 1], which is statically compiled into the dup binary. When using gzip, additional worker threads are spawned to service the compression and speed up the overall imaging time.
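The two-thread read/write pipeline described above can be sketched with a bounded queue: one thread reads the device in chunks, the other drains the queue to the output, optionally through a gzip stream. The chunk size and queue depth here are arbitrary choices for the sketch, not dup's internals.

```python
import gzip
import queue
import threading

CHUNK = 1 << 20   # 1 MiB reads; arbitrary for this sketch
DONE = object()   # sentinel marking end of stream

def image(src_path, dst_path, compress=False):
    """Reader thread fills a bounded queue from src_path; the calling
    thread drains it to dst_path, gzip-compressing if requested."""
    q = queue.Queue(maxsize=8)

    def reader():
        with open(src_path, "rb") as src:
            while chunk := src.read(CHUNK):
                q.put(chunk)
        q.put(DONE)

    t = threading.Thread(target=reader)
    t.start()
    opener = gzip.open if compress else open
    with opener(dst_path, "wb") as dst:
        while (chunk := q.get()) is not DONE:
            dst.write(chunk)
    t.join()
```

The bounded queue is the key design point: it lets the slower side (disk read on a bad drive, or compression on a fast drive) throttle the other without unbounded memory growth.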
Other sub-options allow the user to specify: (a) an offset relative to the volume (or disk) to start the imaging from and (b) the number of bytes to capture relative to the starting offset. If these sub-options are not used, the default is to image from the volume start (for -copyvolume) or drive offset 0 (for -copydrive), and to copy the entire space defined by the volume or drive.
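Per the usage screen, -offset and -size values are rounded up to a sector boundary. The rounding is simple integer arithmetic; a 512-byte sector size is assumed here for illustration.

```python
SECTOR = 512  # assumed sector size

def round_up_to_sector(n: int, sector: int = SECTOR) -> int:
    """Round n up to the next multiple of the sector size."""
    return -(-n // sector) * sector   # ceiling division, then scale
```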
If not wishing to pull an entire image of a volume, one can just target the clusters associated with the Volume Shadow Snapshots. With this option (-copysnaps), the snapshots are enumerated along with their cluster runs, so dup can extract each snapshot in sequence.
One other use case is available if one wishes to copy an unencrypted version of volume data that is BitLocker encrypted: the -force_volread option. This tells dup to copy using volume semantics versus the default disk semantics. Using 'volume' semantics allows dup to read unencrypted data (as seen from the opened volume's perspective), whereas the default 'disk' semantics causes dup to blindly copy the actual raw data stored on disk (at the volume offset/size), which, if BitLocker encrypted, yields an encrypted copy of the data. Depending on the requirement of the analyst, either option is available. This option has only been tested on Windows.
Targeting a group of files for copying during an incident response can range from easy to hard, depending on the tools available to you. If the target machine can be analyzed without taking it down, that is always a plus, but it depends on whether the tools you have available can copy files that have been locked down by the operating system. Furthermore, if one has a scripted set of instructions covering which files need to be copied, and an appropriate tool to handle the script, the task becomes very easy.
For the more common file types, the tool has the -copygroup option, which can take a number of sub-options to identify which file groups to extract. There are groups for registry hives, event logs, prefetch files, LNK/JumpList files, trash entries, and system files. One can use one or more of the sub-options in a single session. The tool discerns which operating system it is running on so the proper directories are targeted for the selected file groups. The other nice aspect of this option is that it spawns multiple instances of the dup tool to go after the specified groups; on computers with multiple cores, this results in a faster copy. Results can either be tar'd into a packed archive file or gzip'd into a compressed archive file. Below are the sub-options currently available:
Group Options | Files Targeted |
---|---|
-pull_sysfiles | [Windows] $MFT, $Boot, $LogFile, $Bitmap, $BadClus:$Bad, $UsnJrnl:$J and Shim db files [uses -ads_rename internally] [Linux] certain /etc and /proc files, first level folder files [each user] [macOS] fseventsd folders, first level folder files [each user], bash and zsh sessions [each user] |
-pull_reghives | [Windows only] User and OS level (system, software, security, etc) registry hives |
-pull_evtlogs | [Windows] Event, setupapi and other logs [Linux] /var/log folder [macOS] /var/log folder |
-pull_lnks | [Windows only] LNK and JumpList files |
-pull_pfs | [Windows only] prefetch files |
-pull_systrash | [Windows] Recycle Bin directory on the system drive [Linux] .local/share/Trash folder [each user] [macOS] .Trash [each user] |
-pull_userdbs | [Windows] ActivitiesCache DB, Push Notifications DB, Outlook, Thunderbird, and the main top user-level folder contents, such as: Desktop, Documents, Downloads, Pictures, and Videos [each user] [Linux] configuration data and terminal history data [each user] [macOS] configuration data and terminal history data [each user] |
-pull_browsers | [Windows 7 and later] Browser artifacts: WebCache DB, Firefox, Edge, Chrome, Brave, Vivaldi and Opera [each user] [Linux] Browser artifacts: Firefox, Edge, Chrome, Brave, Vivaldi and Opera [each user] [macOS] Browser artifacts: Safari, Firefox, Edge, Chrome, Brave, Vivaldi and Opera [each user] |
-pull_all | [Windows/Linux/macOS] Invokes all the pre-defined groups above. Does not invoke the -pull_livestats option, so if that is required, one needs to explicitly include it. |
-pull_livestats | [Windows/Linux/macOS] Pulls the host’s network statistics and running tasks/processes. Therefore, it is not necessarily useful when extracting artifacts from an image or mounted volume that is not the host system volume. |
Other options for copying include targeting a directory and its subdirectories, and a -copyscript option for more complex copy instructions. While dup was designed to target mounted partitions during the copy, it can also target 'dd' images using the -image syntax. If not desiring to mount the image, dup can only handle raw NTFS volumes; if one is able to mount the image, dup can handle Windows, Linux and macOS volume types.
One can use the asterisk character '*' as a wildcard for a folder, making the set of directories to be scanned more generic. A good example is copying all the Google Chrome files from the various user accounts. Without knowing the names of the individual accounts beforehand, one can use a wildcard in the path, like this:
dup -copydir 'c:\users\*\AppData\Local\Google\Chrome\User Data' -level 8 -out c:\google_results -gzip
In this way, all the Google Chrome data will be archived in one gzip'd file for all the accounts on that computer.
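The wildcard expansion behaves like ordinary path globbing: the '*' segment fans out to one hit per matching account directory, and every file beneath each hit becomes a copy candidate. A hedged Python illustration of that expansion (the directory layout in the test is made up; this is not dup's traversal code):

```python
import glob
import os

def files_under_wildcard(pattern: str):
    """List every file below each directory matched by a wildcard
    pattern -- the candidate set a wildcard folder copy would process."""
    hits = []
    for root in sorted(glob.glob(pattern)):
        for dirpath, _dirs, files in os.walk(root):
            hits += [os.path.join(dirpath, f) for f in files]
    return hits
```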
The normal logging option is simply redirecting stdout into a file, which produces a list of the files that were copied. For a more detailed log, one can use the -sqlite_log <log file> option. This generates a SQLite database of all the files copied, along with some metadata for each file, including the path, file size, inode (if applicable), and any timestamps associated with the file.
If targeting NTFS folders as well as files, one can ask dup to also pull the INDX data associated with each folder that was traversed. The resulting INDX data will be archived in the SQLite log file. To do this, use -incl_indx along with the -sqlite_log <log file> option.
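Once a run is logged, the database can be inspected with any SQLite client or Python's sqlite3 module. The snippet below builds and queries a small log using a made-up schema; the table and column names are purely illustrative stand-ins, since dup's actual log schema is not documented here.

```python
import sqlite3

# Hypothetical schema standing in for dup's -sqlite_log output.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE copied_files (
                    path  TEXT,
                    size  INTEGER,
                    inode INTEGER,
                    mtime TEXT)""")
rows = [("/Windows/System32/config/SYSTEM", 16777216, 1234,
         "2023-05-01 10:22:33"),
        ("/Users/alice/NTUSER.DAT", 4194304, 5678,
         "2023-05-02 08:00:00")]
conn.executemany("INSERT INTO copied_files VALUES (?,?,?,?)", rows)

# e.g. review the collection largest-file-first
largest = conn.execute(
    "SELECT path, size FROM copied_files ORDER BY size DESC").fetchall()
```

The point of a SQLite log over a flat stdout listing is exactly this kind of after-the-fact querying: sorting by size, filtering by timestamp range, or joining against other case data.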
For starters, to access Volume Shadow copies, one needs to be running with administrator privileges. It is also important to note that Volume Shadow copies, as discussed here, only apply to Windows Vista, Win7, Win8, and beyond; they do not apply to Windows XP.
The syntax for specifying volume shadows varies depending on whether one is specifying a discrete path. For example, when using a -copyfile or -copydir command, one would use %vss%1 to specify \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1, %vss%2 to specify \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy2, etc. Thus, to copy the System hive from volume shadow copy 1, one could use the following syntax:
dup -copyfile %vss%1\Windows\System32\Config\System -out c:\test\system.bin
Alternatively, if one was using the -copygroup options, where a discrete path is not passed as an argument, one would instead append the option -vss <vol shadow index> to the copygroup command. As an example, if one wanted to pull all the registry hives from volume shadow copy 1, one would use this command:
dup -copygroup -pull_reghives -vss "1" -out c:\test\vss_reghives -gzip
Also added with the -copygroup options is the case where one might want to copy all the registry hives from all the volume shadows. To do this, we added the -vssall <partition> option. Therefore, to copy all the registry hives in all the volume shadows stored on the C volume, one would do the following:
dup -copygroup -pull_reghives -vssall "c" -out c:\test\vss_reghives -gzip
If desiring to use dup to extract artifacts on Linux or macOS, one can do this on either: (a) the live Linux or macOS box, or (b) from a mounted image, where the mounted volume can be either Windows, Linux, or macOS.
If targeting the live system, one can use the artifact extraction options: -copyfile, -copydir, -copyscript, or -copygroup. The sub-options -gzip and -sqlite_log work as well, if desiring to compress the artifacts and/or create a SQLite log file, respectively.
If targeting an offline 'dd' image, one would first need to mount the volume on the host OS. If the host operating system is Linux, this can be done by using the native host mount command. A typical example might be:
> sudo mount -o loop,ro <location of the image to be mounted> <location of the mount point>

If the format of the image cannot be determined using the above command, then use the option -t <type of format> to explicitly identify the format type during the mounting process.
Alternatively, if using macOS to mount a raw 'dd' image, one can use the native hdiutil attach command if the format of the 'dd' image is recognized by macOS; otherwise, one needs to use a 3rd party tool to mount and later dismount the volume. Here is an example of mounting an NTFS volume on macOS using hdiutil attach:
hdiutil attach -noverify -noautofsck <path to image> -mountpoint <pathname> [-shadow | -readonly]

The option -noverify skips the verification process to save time. The option -noautofsck skips the file system checks if an unclean image is detected, and the -shadow option creates a shadow file to allow a locked forensic image to mount as if it were read/write (all changes are redirected to the shadow file). This latter option allows Spotlight to index the image (without actually writing to the original image). If indexing is not wanted or required, just use the -readonly option. The other useful option is -mountpoint, which specifies where the image is mounted; without it, the system will mount the image under the root Volumes path.
Once the image is mounted, one can use dup's -sysvol <location of the mount point> option to target artifacts from the specified mounted image. When the artifacts have been collected, the volume can be unmounted via:
> umount <location of the mount point>

The above is typical for unmounting devices that were mounted using the 'mount' command. Alternatively, if the image was attached with hdiutil on macOS, one can just use hdiutil detach <path of the mountpoint>.
When mounting an NTFS raw 'dd' system image using the macOS utility hdiutil, for some reason the root system files (the $MFT, $Boot, etc. files) do not show up. For this NTFS case, since dup can handle a monolithic native NTFS 'dd' image directly, one can resort to -image <path of the raw NTFS image> to pull out the artifacts.
Option | Description |
---|---|
-scandrives | Scan the drives attached to the computer and report the stats on each drive found. |
-scan_mountpts | Scan each volume and the associated mount point for the session. |
-diskstats | For a specified drive, return the statistics of the drive. The syntax is -diskstats <drive number>. |
-imagestats | For a specified 'dd' image, return the statistics of that drive. The syntax is -imagestats <dd image path/file>. |
-vmdk | For a specified VMWare VMDK monolithic image, return the statistics of that drive. The syntax is -vmdk <file1 | file2 | ...>. |
-mbr | Pull the Master Boot Record (MBR) of the drive. The syntax is -mbr -disk <drive number>, or -mbr -file <MBR file> to analyze an extracted MBR. |
-mbr_compare | Given two Master Boot Records, compare them with each other and report differences. There are a number of variations: one can compare two disks' MBRs, a disk MBR with a file containing an MBR, or a disk MBR with an MBR in a 'dd' image. The available syntaxes are: -mbr_compare -disk1 <disk#1> -disk2 <disk#2>; -mbr_compare -disk1 <disk#> -mbrfile <MBR expected data>; -mbr_compare -disk1 <disk#> -image <dd image> |
-copydrive | Used to image a drive or a portion thereof. The syntax is -copydrive <drive number> -out <dst> [-offset <#>] [-size <#>]. The -out <dst> specifies where to store the image and its name. The -offset <#> allows one to start the image at a particular offset, and the -size <#> allows one to limit the number of bytes to copy. The offset and size values should be rounded to the sector size. For those unusual cases where a size is needed that is not on a sector boundary, one can use the switch -force_non_sector_size, which allows one to copy a value less than a sector size. If the drive has bad sectors, one can use the options -retries <#> and/or -read_timeout <# secs>; on a sector read failure, these tell dup to retry reading the bad sector after the specified timeout period. |
-copyvolume | Used to image a volume or a portion thereof. The syntax is -copyvolume <partition letter> -out <dst> [-offset <#>] [-size <#>] [-force_volread]. The -out <dst> specifies where to store the image and its name. The -offset <#> allows one to start the image at a particular offset, and the -size <#> allows one to limit the number of bytes to copy. The offset and size values should be rounded to the sector size. The last option, -force_volread, tells dup to copy using volume semantics versus the default disk semantics. This option is useful if the volume is BitLocker encrypted (or other encryption is present): 'volume' semantics allow the copy operation to retrieve unencrypted data (as seen from the opened volume's perspective), whereas 'disk' semantics cause dup to blindly copy the actual raw data stored on disk (at the volume offset/size), which, if BitLocker encrypted, yields an encrypted copy of the data. Depending on the requirement of the analyst, this option is available. It has only been tested on Windows. |
-copysnaps | Used to pull the raw data associated with the Volume Shadow snapshots. This option can pull the snapshots from either a mounted volume or a 'dd' image. |
-copyfile | Used to copy a specified file. The syntax is -copyfile <filename>. One also needs to define where to copy the file, via the parameter -out <dst>. Other parameters can be added, such as -tar, -gzip, -image, or -filter; see the respective option for an explanation of its purpose. If desiring to copy files from standard input, use the -pipe syntax. Here is an example: dir c:\users\*lnk /b /s /a | dup -copyfile -pipe -out results |
-copydir | Used to copy a specified folder. The syntax is -copydir <folder>. One also needs to define where to copy the file; one can use -out <dst>. One can use the -level <# of levels> to define how many subfolders to copy as well. Other parameters can be added, such as -tar, -gzip, -image, or -filter. See the respective option for an explanation of their purpose. |
-copyscript | Used to copy files defined in a script. The syntax is -copyscript <file containing the script> . The script should contain all the relevant arguments including where to copy the files, using -out <dst>. Other parameters can be added, such as -tar, -gzip, -image, or -filter. See the respective option for an explanation of their purpose. |
-copygroup | Used to copy a predefined collection of files from the system volume. One also needs to define where to copy the files, using -out <dst>. Other parameters can be added, such as -tar, -gzip, -image, or -filter; see the respective option for an explanation of its purpose. There are separate sub-options for each predefined collection: -pull_sysfiles collects these files: $MFT, $Boot, $LogFile, $Bitmap, $BadClus:$Bad, $UsnJrnl:$J files and Shim DB (*.sdb) files. -pull_reghives collects both user and system level registry hives. -pull_evtlogs collects event, setupapi, and diagnosis logs. -pull_lnks collects LNK and JumpList files. -pull_pfs collects prefetch files. -pull_systrash collects the Recycle Bin directory on the system drive. -pull_userdbs collects various DBs including: ActivitiesCache DBs, Push Notification, Outlook, Thunderbird DB, and the main top user-level folder contents, such as: Desktop, Documents, Downloads, Pictures, and Videos. -pull_browsers (Win7 and later) collects the following browser artifacts: WebCache DB, Firefox, Edge, Chrome, Safari, Brave, Vivaldi and Opera. -pull_all invokes all the previous options except the -pull_livestats option. -sysvol <volume letter> tells dup to target a specified mounted system volume versus the default live system volume. While the default is to target the system volume, one can target a volume snapshot of the system volume using the -vss <index> option, or all the volume snapshots using -vssall <partition>. -pull_livestats collects the network stats and running tasks/processes (this option is not included in the -pull_all category). |
-merge | This option merges one file into another file, allowing the user to dictate where (offset-wise) the file is to be merged. The purpose of this option is to handle situations where partial images were collected and need to be merged into a larger image, with the offsets of the images preserved. The syntax is -merge -fromfile <file1> -tofile <file2> -tofile_offset <#>. There is an optional switch, -skip_null_sectors, which tells the operation to only write sectors that have data. This becomes important during a merge when you do not want to overwrite existing data with null data. |
-combine_files | To concatenate files in a sequence, use this option. The files to be concatenated are pipe delimited. The syntax is as follows: -combine_files "file1|file2|file3|..." -out <dst> |
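Concatenation in this sense is a plain append of each file's bytes in the listed order, and per the usage screen the destination is appended to rather than truncated. A minimal sketch of those semantics (illustrative, not dup's code):

```python
def combine_files(paths, out_path):
    """Append each input file's bytes to out_path, in order."""
    with open(out_path, "ab") as dst:   # append mode, per the description
        for p in paths:
            with open(p, "rb") as src:
                dst.write(src.read())
```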
-untar | To unpack a file that was packed with the -tar option, use -untar. The syntax is: -untar <file> -out <dst> |
-expand | Decompresses a file that was compressed with the -gzip option. The syntax is: -expand <file> -out <dst> |
-image | Sub-option to specify to target a raw image of an NTFS disk or volume during a copy operation. If targeting a disk image, one would need to also supply the offset of the system volume using the -offset <#> option. |
-filter | Sub-option to filter filenames that are passed in via stdin via one of the copy options. The syntax is -filter <"*.ext | *partialname* | ...">. The wildcard character '*' is restricted to either before the name or after the name. |
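Because the '*' wildcard is restricted to the beginning and/or end of the name, the filter reduces to ordinary prefix/suffix/substring matching. A hedged Python model of that behavior (an illustration of the stated rule, not dup's matcher):

```python
def matches_filter(name: str, pattern: str) -> bool:
    """Match a filename against a pattern whose '*' may appear only
    at the beginning and/or end (e.g. '*.ext', 'pre*', '*partial*')."""
    core = pattern.strip("*")
    if pattern.startswith("*") and pattern.endswith("*"):
        return core in name          # '*partial*' -> substring
    if pattern.startswith("*"):
        return name.endswith(core)   # '*.ext'     -> suffix
    if pattern.endswith("*"):
        return name.startswith(core) # 'pre*'      -> prefix
    return name == pattern           # no wildcard -> exact match
```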
-filter_sig | Sub-option to filter file signatures that are passed in via stdin via one of the copy options. The syntax is -filter_sig <"hex bytes separated by spaces | offset">. This option is experimental. |
-filter_sqlite | Sub-option to filter SQLite files. This option looks at the file header to determine if the file is an SQLite file. The syntax is -filter_sqlite. |
-filter_plist | Sub-option to filter plist type files. This option looks at the file header to determine if the file is a plist file. The syntax is -filter_plist. |
-filter_esedb | Sub-option to filter ESE database type files. This option looks at the file header to determine if the file is an ESE database file. The syntax is -filter_esedb. |
-filter_start_date | Sub-option for copying files from a directory to only copy those files with a create or modify date at or greater than the date specified here. The syntax is -filter_start_date <yyyy-mm-dd>. |
-filter_stop_date | Sub-option for copying files from a directory to only copy those files with a create or modify date at or less than the date specified here. The syntax is -filter_stop_date <yyyy-mm-dd>. |
-sqlite_log | Option to generate a SQLite log database of the files copied by the dup tool. The syntax is -sqlite_log <log file>. |
-tar | Sub-option to collect all the files copied into one tar file. |
-gzip | Sub-option to tell the operation to compress the results. |
-md5 | Compute MD5 hash of file or files. To process multiple files, use the -pipe suboption and pass in the absolute path/filenames via stdin. To process a single file just pass in the filename as an argument to the -md5. |
-sha1 | Compute SHA1 hash of file or files. To process multiple files, use the -pipe suboption and pass in the absolute path/filenames via stdin. To process a single file just pass in the filename as an argument to the -sha1. |
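The hashing itself is standard MD5/SHA1 over the file's bytes. The equivalent using Python's hashlib, reading in chunks so large images never need to fit in memory:

```python
import hashlib

def file_hash(path: str, algo: str = "md5") -> str:
    """Hex digest of a file's contents, read in 1 MiB chunks.
    algo is any hashlib name, e.g. "md5" or "sha1"."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while chunk := f.read(1 << 20):
            h.update(chunk)
    return h.hexdigest()
```

This is the common way to verify a dup-produced image or copy against an independently computed hash.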
-wipefile | Experimental. Zero out the contents of the specified file and delete it. Tries (on a best effort basis) to remove as much metadata associated with the file as well. |
-wipedir | Experimental. Zero out the contents of all files in the specified folder and child subfolders and deletes them. Tries (on a best effort basis) to remove as much metadata associated with the files as well. |
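A best-effort wipe of this sort amounts to overwriting the file's bytes in place, flushing to disk, then renaming and unlinking so the original name is also obscured. The sketch below is a hedged illustration of that idea only; as the options above note, real metadata removal on a journaling filesystem like NTFS is far more involved, and traces may survive in journals and slack space.

```python
import os

def wipe_file(path: str):
    """Best-effort wipe: zero the file's contents in place, fsync,
    rename to a generic name, then delete. Filesystem journals may
    still retain traces -- this is illustrative, not forensic-grade."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())
    anon = os.path.join(os.path.dirname(path) or ".", "_wiped.tmp")
    os.replace(path, anon)   # obscure the original filename
    os.remove(anon)
```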
This tool has authentication built into the binary. The primary authentication mechanism is the digital X509 code signing certificate embedded into the binary (Windows and macOS).
The other mechanism is runtime authentication, which applies to all versions of the tool (Windows, Linux and macOS). The runtime authentication ensures that the tool has a valid license. The license needs to be in the same directory as the tool for it to authenticate. Furthermore, any modification to the license, either to its name or contents, will invalidate the license.