TZWorks LLC
System Analysis and Programming
www.tzworks.com
(Version 1.50)
TZWorks LLC software and related documentation ("Software") is governed by separate licenses issued from TZWorks LLC. The User Agreement, Disclaimer, and/or Software may change from time to time. By continuing to use the Software after those changes become effective, you agree to be bound by all such changes. Permission to use the Software is granted provided that (1) use of such Software is in accordance with the license issued to you and (2) the Software is not resold, transferred or distributed to any other person or entity. Refer to the specific EULA issued to you for your terms and conditions. There are 3 types of licenses available: (i) for educational purposes, (ii) for demonstration and testing purposes and (iii) business and/or commercial purposes. Contact TZWorks LLC (info@tzworks.com) for more information regarding licensing and/or to obtain a license. To redistribute the Software, prior approval in writing is required from TZWorks LLC. The terms in your specific EULA do not give the user any rights in intellectual property or technology, but only a limited right to use the Software in accordance with the license issued to you. TZWorks LLC retains all rights to ownership of this Software.
The Software is subject to U.S. export control laws, including the U.S. Export Administration Act and its associated regulations. The Export Control Classification Number (ECCN) for the Software is 5D002, subparagraph C.1. The user shall not, directly or indirectly, export, re-export or release the Software to, or make the Software accessible from, any jurisdiction or country to which export, re-export or release is prohibited by law, rule or regulation. The user shall comply with all applicable U.S. federal laws, regulations and rules, and complete all required undertakings (including obtaining any necessary export license or other governmental approval), prior to exporting, re-exporting, releasing, or otherwise making the Software available outside the U.S.
The user agrees that this Software made available by TZWorks LLC is experimental in nature and use of the Software is at user's sole risk. The Software could include technical inaccuracies or errors. Changes are periodically added to the information herein, and TZWorks LLC may make improvements and/or changes to Software and related documentation at any time. TZWorks LLC makes no representations about the accuracy or usability of the Software for any purpose.
ALL SOFTWARE IS PROVIDED "AS IS" AND "WHERE IS" WITHOUT WARRANTY OF ANY KIND INCLUDING ALL IMPLIED WARRANTIES AND CONDITIONS OF MERCHANTABILITY, FITNESS FOR ANY PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL TZWORKS LLC BE LIABLE FOR ANY KIND OF DAMAGE RESULTING FROM ANY CAUSE OR REASON, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF INFORMATION AVAILABLE FROM THIS SOFTWARE, INCLUDING BUT NOT LIMITED TO ANY DAMAGES FROM ANY INACCURACIES, ERRORS, OR VIRUSES, FROM OR DURING THE USE OF THE SOFTWARE.
The Software is the original work of TZWorks LLC. However, to be in compliance with the Digital Millennium Copyright Act of 1998 ("DMCA"), we agree to investigate and disable access to any material alleged to infringe copyright. Contact TZWorks LLC at email address: info@tzworks.com, regarding any DMCA concerns.
jp is a command line tool that targets NTFS change log journals. The change journal is a component of NTFS that will, when enabled, record changes made to files. The change journal is located in the $UsnJrnl MFT entry, and the journal entries are located in the alternate data stream $J. Each entry is of variable size and its internal structure is documented in the MSDN.
The change journal will record, amongst other things: (a) the time of the change, (b) the affected file/directory, and (c) the change type (delete, rename, size extend, etc.), which makes it a useful artifact when examining a computer forensically.
Microsoft provides tools to view/modify the change journal, as well as a published API to programmatically read/write from/to the change log. jp, however, doesn't make use of the Windows API, but does the parsing by traversing the raw structures. This allows jp to be compiled for use on other operating systems to parse the change journal as a component in a forensic toolkit.
Currently there are compiled versions for Windows, Linux and Mac OS-X.
To use this tool, an authentication file must reside in the same directory as the binary in order for the tool to run.
The change log Journal data is located at the [root]\$Extend\$UsnJrnl:$J alternate data stream. If present, examining the cluster run for the $J data stream will show the beginning clusters are sparse, meaning they are not backed by physical disk clusters, and any data is after these sparse clusters.
The data within the journal is a series of packed entries. The structure for each entry is defined in the Microsoft Software Development Kit (SDK). This structure, with its variants, is shown below as documented by the SDK:
typedef struct {
    DWORD RecordLength;
    WORD MajorVersion;
    WORD MinorVersion;
    DWORDLONG FileReferenceNumber;
    DWORDLONG ParentFileReferenceNumber;
    USN Usn;
    LARGE_INTEGER TimeStamp;
    DWORD Reason;
    DWORD SourceInfo;
    DWORD SecurityId;
    DWORD FileAttributes;
    WORD FileNameLength;
    WORD FileNameOffset;
    WCHAR FileName[1];
} USN_RECORD_V2, *PUSN_RECORD_V2, USN_RECORD, *PUSN_RECORD;

typedef struct {
    DWORD RecordLength;
    WORD MajorVersion;
    WORD MinorVersion;
    FILE_ID_128 FileReferenceNumber;
    FILE_ID_128 ParentFileReferenceNumber;
    USN Usn;
    LARGE_INTEGER TimeStamp;
    DWORD Reason;
    DWORD SourceInfo;
    DWORD SecurityId;
    DWORD FileAttributes;
    WORD FileNameLength;
    WORD FileNameOffset;
    WCHAR FileName[1];
} USN_RECORD_V3, *PUSN_RECORD_V3;

typedef struct {
    DWORD RecordLength;
    WORD MajorVersion;
    WORD MinorVersion;
    FILE_ID_128 FileReferenceNumber;
    FILE_ID_128 ParentFileReferenceNumber;
    USN Usn;
    DWORD Reason;
    DWORD SourceInfo;
    DWORD SecurityId;
    DWORD RemainingExtents;
    WORD NumberOfExtents;
    WORD ExtentSize;
    USN_RECORD_EXTENT Extents[1];
} USN_RECORD_V4, *PUSN_RECORD_V4;
While jp extracts all the data from each USN record, it only outputs the following fields: (a) FileReferenceNumber (also referred to as MFT entry or inode for the file or directory), (b) ParentFileReferenceNumber (which is the parent inode), (c) Usn number (which translates to the offset within the data stream), (d) TimeStamp (UTC date/time when the record was entered), (e) Reason (what change occurred to the file/dir that caused a journal entry), (f) FileAttributes and (g) Filename. One also has the option to have jp resolve the parent inode to the directory path.
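To make the record layout concrete, the fixed 60-byte portion of a version 2 record can be decoded directly from raw bytes. The sketch below is our own illustration and is not part of jp (the function name and returned field names are invented here); it assumes a little-endian buffer that starts at a record boundary:

```python
import struct
from datetime import datetime, timedelta, timezone

# Fixed portion of USN_RECORD_V2 (60 bytes); FileName follows at FileNameOffset
_V2_FIXED = struct.Struct('<IHHQQqqIIIIHH')

def parse_usn_record_v2(buf):
    """Decode one USN_RECORD_V2 from raw bytes (illustrative helper)."""
    (rec_len, major, minor, file_ref, parent_ref, usn, ts,
     reason, source_info, sec_id, attrs, name_len, name_off) = _V2_FIXED.unpack(buf[:60])
    # NTFS packs the MFT entry (inode) in the low 48 bits and the
    # sequence number in the high 16 bits of the reference number
    inode, seq = file_ref & 0xFFFFFFFFFFFF, file_ref >> 48
    # TimeStamp is a Windows FILETIME: 100-nanosecond ticks since 1601-01-01 UTC
    when = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ts // 10)
    # FileNameLength is in bytes; the name itself is UTF-16LE
    name = buf[name_off:name_off + name_len].decode('utf-16-le')
    return {'reclen': rec_len, 'ver': (major, minor), 'inode': inode, 'seq': seq,
            'parent_inode': parent_ref & 0xFFFFFFFFFFFF, 'usn': usn, 'time': when,
            'reason': reason, 'attrs': attrs, 'name': name}
```

Note how the file and parent reference numbers are split into inode and sequence number; this split is what allows jp's inode/sequence cross-check against the $MFT described later.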
For the USN Record variants, versions 2 and 3 have the same fields. Version 3, however, allows 16 bytes each for the FileReferenceNumber and ParentFileReferenceNumber fields. This additional space is needed to support ReFS. While NTFS doesn't need the additional bytes, one can run across this version being enabled on an NTFS volume. Parsers targeting Win8, Win10, Win2012, etc., therefore need to be aware of these other versions to parse the change log journal properly.
One should note that version 4 of the USN Record doesn't contain a timestamp like versions 2 and 3 do. I believe it is used for supplemental data. Currently, jp parses version 4 data records internally, but doesn't reflect the data in the output at this time.
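Since each record is variable length, a parser walks the $J data stream record by record using RecordLength, skipping any zero fill between records. A rough sketch of that traversal follows; this is our own illustration (not jp's actual code), and the 8-byte record alignment is an assumption taken from the SDK documentation:

```python
import struct

def iter_usn_records(data):
    """Yield (offset, major_version, raw_record) for each packed USN record."""
    off = 0
    while off + 8 <= len(data):
        rec_len, major = struct.unpack_from('<IH', data, off)
        if rec_len == 0:
            off += 8               # zero fill between records: step over it
            continue
        if rec_len < 8 or off + rec_len > len(data):
            break                  # malformed length: stop rather than loop forever
        if major in (2, 3):        # v4 records are supplemental and carry no timestamp
            yield off, major, data[off:off + rec_len]
        off += (rec_len + 7) & ~7  # records start on 8-byte boundaries (assumed)
```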
SDK reason name | jp output | jp log2timeline mapping |
---|---|---|
USN_REASON_BASIC_INFO_CHANGE | basic_info_changed | MACB |
USN_REASON_CLOSE | file_closed | - |
USN_REASON_COMPRESSION_CHANGE | compression_changed | MACB |
USN_REASON_DATA_EXTEND | data_appended | MACB |
USN_REASON_DATA_OVERWRITE | data_overwritten | MACB |
USN_REASON_DATA_TRUNCATION | data_truncated | MACB |
USN_REASON_EA_CHANGE | extended_attrib_changed | MACB |
USN_REASON_ENCRYPTION_CHANGE | encryption_changed | MACB |
USN_REASON_FILE_CREATE | file_created | MACB |
USN_REASON_FILE_DELETE | file_deleted | MACB |
USN_REASON_HARD_LINK_CHANGE | hardlink_changed | MACB |
USN_REASON_INDEXABLE_CHANGE | context_indexed_changed | MACB |
USN_REASON_NAMED_DATA_EXTEND | ads_data_appended | MACB |
USN_REASON_NAMED_DATA_OVERWRITE | ads_data_overwritten | MACB |
USN_REASON_NAMED_DATA_TRUNCATION | ads_data_truncated | MACB |
USN_REASON_OBJECT_ID_CHANGE | objid_changed | MACB |
USN_REASON_RENAME_NEW_NAME | file_new_name | MACB |
USN_REASON_RENAME_OLD_NAME | file_old_name | MACB |
USN_REASON_REPARSE_POINT_CHANGE | reparse_changed | MACC |
USN_REASON_SECURITY_CHANGE | access_changed | MACB |
USN_REASON_STREAM_CHANGE | ads_added_or_deleted | MACB |
In the above table, the mapping to MACB is defined differently than the normal MACB times used for file times. Since the 'A' label, which normally stands for 'access time', does not get used in the journaling flags, it has been redefined to mean 'data appended'. The 'M' and 'C' labels, which normally stand for 'modify time' and 'system metadata changed time', respectively, are now defined as follows: 'M' means 'data overwrite' (the file's content has completely changed), and 'C' covers the rest of the changes to the file (whether file data or system metadata). Breaking it out this way adds fidelity to the results when analyzing the MACB output for interesting changes.
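The redefinition can be illustrated with a small sketch. The reason-flag values below come from the Windows SDK (winioctl.h); the letter assignment is our reading of the paragraph above ('B' is left out, since the text does not redefine it), and the function is hypothetical, not jp's implementation:

```python
# Reason flag values as published in the Windows SDK (winioctl.h)
USN_REASON_DATA_OVERWRITE = 0x00000001
USN_REASON_DATA_EXTEND    = 0x00000002

def mac_flags(reason):
    """Map a USN reason mask to the redefined M/A/C labels (sketch)."""
    m = 'M' if reason & USN_REASON_DATA_OVERWRITE else '.'  # data overwritten
    a = 'A' if reason & USN_REASON_DATA_EXTEND else '.'     # data appended
    # 'C' covers every other change, whether file data or system metadata
    other = reason & ~(USN_REASON_DATA_OVERWRITE | USN_REASON_DATA_EXTEND)
    c = 'C' if other else '.'
    return m + a + c
```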
For live extraction and analysis, the jp tool requires one to run with administrator privileges; without them, one is restricted to examining only offline files. One can display the menu options by typing the executable name without parameters.
usage:
  jp -image <dd image> [-offset <offset>]
  jp -file <path/file>        = parse extracted USNJRNL
  jp -partition <drv letter>  = parse USNJRNL on live system
  jp -vss <num>               = Volume Shadow index is the source

Basic options
  -csv       = output is comma separated value format
  -xml       = output in xml format
  -bodyfile  = output in sleuth kit body-file format
  -csvl2t    = output in log2timeline format

Additional options
  -a                         = all records, not just those closed
  -base10                    = output numbers in base10 vice hex
  -mftfile <$mft file>       = use exported $mft file to resolve path
  -show_dir_path             = resolve dir path $mft in image or vol
  -pulltimes                 = use with -show_dir_path or -mftfile
  -dateformat mm/dd/yyyy     = "yyyy-mm-dd" is the default
  -timeformat hh:mm:ss       = "hh:mm:ss.xxx" is the default
  -pair_datetime             = combine date/time into 1 field for csv
  -no_whitespace             = not available for xml option
  -csv_separator "|"         = use a pipe character for csv separator

Experimental options
  -include_unalloc_clusters  = include unalloc clusters in rawscan
  -include_vss_clusters      = include VSS clusters in rawscan
  -include_slack_space       = include slack space
  -show_offset               = include offset of record in output
There are three data source options: (a) input from an extracted journal file, (b) a dd image of a volume or disk, and (c) a mounted partition of a live Windows machine. jp can handle each of these equally well.
jp can reconstruct the path of the journal entry either by an exported $MFT file or by using the internal $MFT of the specified volume (see options -show_dir_path and -mftfile in the options section). This is useful for easily identifying where the target file originated. In version 1.09, jp will check to ensure that both the inode and sequence number match the $MFT entry prior to reconstruction of the path. If for some reason it cannot, it will report in the output that the inode was reused in the $MFT.
One can also extract MACB timestamps from the $MFT file, and in some cases, from deleted files. To extract time information from deleted files, jp needs to analyze a volume where both the change log journal file and $MFT file are resident. It can then analyze the appropriate parent directory's slack INDX entries and see if any correspond to the journal entry. (see option -pulltimes from the options section).
There are four output format options available, ranging from: (a) the default CSV output, (b) XML format, (c) Log2Timeline format and (d) Body-file format from the Sleuth Kit. The default and XML options yield the most data per record. The Log2Timeline is geared for timeline analysis.
For verbosity, the default option is the most useful. It shows all the records that have been 'closed'. If desiring to see 'all' the records, one can use the -a switch. Using the -a switch gives one a lot of redundant data (eg, it will show each action before the 'closed' action as well as the 'closed' action itself). The actions that occurred before the 'closed' action are also shown in the final 'closed' record.
Starting with version 1.18, if one wants to scan all the unallocated clusters, one can issue the option: -include_unalloc_clusters, in combination with one of these options: -image, -partition or -vss options.
Using the -include_unalloc_clusters option, jp will first scan the normal $UsnJrnl:$J location and then proceed to scan all the unallocated clusters for old change log entries. The output will be annotated with another column titled "unalloc" to specify which change log entry was found in the unallocated cluster section and which was not.
Originally we thought this would be more difficult than it was, since the change log journal doesn't have a magic signature per se. So using some customized fuzzy logic, we added the option to scan unallocated clusters and pull out old change log journal entries. After some quick tests, surprisingly, there are a number of entries that are available and can be extracted successfully. We tried to tune the scanning to minimize false-positives at the expense of missing some valid entries. While this adds a useful option, it should be considered experimental.
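The kind of sanity checking involved can be illustrated with a few structural tests on a candidate version 2 record. This is our own simplified sketch of the idea, not TZWorks' actual heuristics; the field offsets come from the SDK structure shown earlier, and the bounds chosen here are assumptions:

```python
import struct

def looks_like_usn_v2(buf, off):
    """Heuristic sanity checks for a candidate V2 record (illustrative only)."""
    if off + 60 > len(buf):
        return False
    rec_len, major, minor = struct.unpack_from('<IHH', buf, off)
    # RecordLength must cover the fixed portion; 0x1000 cap is an assumed bound
    if not (60 <= rec_len <= 0x1000) or (major, minor) != (2, 0):
        return False
    # FileNameLength at offset 56, FileNameOffset at offset 58 per the SDK layout
    name_len, name_off = struct.unpack_from('<HH', buf, off + 56)
    if name_off != 60 or name_len == 0 or name_off + name_len > rec_len:
        return False
    # the filename bytes should decode cleanly as UTF-16LE
    try:
        buf[off + name_off:off + name_off + name_len].decode('utf-16-le')
    except UnicodeDecodeError:
        return False
    return True
```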
For starters, to access Volume Shadow copies, one needs to be running with administrator privileges. Also, it is important to note that Volume Shadow copies, as discussed here, only apply to Windows Vista, Win7, Win8, and beyond. They do not apply to Windows XP.
To make the syntax easier, we've built in a shortcut to more easily access a specific file in a specified Volume Shadow copy, via the %vss% keyword. This internally gets expanded into \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy. Thus, to access index 1 of the volume shadow copy, one would prepend the keyword and index (eg. %vss%1) to the normal path of the file. For example, to access the change log journal located on HarddiskVolumeShadowCopy1, the following syntax can be used:
jp -file %vss%1\$Extend\$UsnJrnl:$J > results.txt
The second option is much easier and uses the -vss <index of Volume Shadow> syntax. The command below yields the same result as the one above.
jp -vss 1 > results.txt
To access the change log journal on Volume Shadows, one uses the syntax -vss <index of volume shadow>.
To determine which indexes are available from the various Volume Shadows, one can use the Windows built-in utility vssadmin, as follows:
vssadmin list shadows

-- or to filter out extraneous detail --

vssadmin list shadows | find /i "volume"
While the amount of data from the above command can be voluminous, the keywords one needs to look for are names that look like this:
Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1
Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy2
...
From the example above, notice the number after the word HarddiskVolumeShadowCopy. This is the number that is passed as an argument to the previous options.
Open a cmd prompt via "Run as administrator" to ensure the cmd prompt process has admin privileges.
Parse the change log in NTFS partition c and redirect the output to the file output.txt. The data in output.txt will be comma-separated values (CSV):
jp -partition c > output.txt
Parse the change log in NTFS partition c and format the output into XML UTF-8 format to the file called output.xml
jp -partition c -xml > output.xml
Below are simple examples of parsing USNJRNL entries from an extracted $UsnJrnl:$J file and from a 'dd' image:
jp -file <extracted $UsnJrnl:$J file> > output.txt

jp -image <disk image> [-offset <volume offset>] -csv > output.csv
If one wanted to pull as much data as possible, one could extend the second example by telling jp to also look in the unallocated clusters, volume shadow clusters and slack space. The command to do this would be:
jp -image <disk image> -include_unalloc_clusters -include_vss_clusters -include_slack_space -csv > output.csv
Option | Description |
---|---|
-file | Extract artifacts from an extracted $UsnJrnl:$J file. |
-partition | Extract artifacts from a mounted Windows volume. The syntax is -partition <drive letter>. |
-image | Extract artifacts from a volume specified by an image and volume offset. The syntax is -image <filename> -offset <volume offset> |
-vss | Experimental. Extract USNJRNL artifacts from Volume Shadow. The syntax is -vss <index number of shadow copy>. Only applies to Windows Vista, Win7, Win8 and beyond. Does not apply to Windows XP. |
-xml | Outputs results in XML format. |
-csv | Outputs the data fields delimited by commas. Since filenames can have commas, to ensure the fields are uniquely separated, any commas in the filenames get converted to spaces. |
-csvl2t | Outputs the data fields in accordance with the log2timeline format. |
-bodyfile | Outputs the data fields in accordance with the 'body-file' version3 specified in the SleuthKit. The date/timestamp outputted to the body-file is in terms of UTC. So if using the body-file in conjunction with the mactime.pl utility, one needs to set the environment variable TZ=UTC. |
-a | Option to display 'all' records, not just those closed. The default is to display only entries that are closed. |
-mftfile | Use data in exported $MFT file to resolve the directory path. The syntax is -mftfile <exported $mft file> |
-show_dir_path | Option to resolve the directory path. This option will attempt to use the volume or image $MFT specified in the -partition or -image option. |
-pulltimes | If either the -show_dir_path or -mftfile options are specified, this option will extract the appropriate MACB times from the $MFT and report them with the journal entry parsed. As a special case, when the $MFT comes from a specified volume (via the -partition option) or from an image (via the -image option) and the journal entry is not found in the $MFT, it will scan the entry's parent directory slack INDX records for the MACB times. This data will be annotated with the acronym 'wisp' to identify that the data came from INDX slack space. |
-base10 | Ensure all size/address outputs are displayed in base-10 format versus hexadecimal format. Default is hexadecimal format. |
-no_whitespace | Used in conjunction with -csv option to remove any whitespace between the field value and the CSV separator. |
-csv_separator | Used in conjunction with the -csv option to change the CSV separator from the default comma to something else. Syntax is -csv_separator "|" to change the CSV separator to the pipe character. To use the tab as a separator, one can use the -csv_separator "tab" OR -csv_separator "\t" options. |
-dateformat | Output the date using the specified format. Default behavior is -dateformat "yyyy-mm-dd". Using this option allows one to adjust the format to mm/dd/yy, dd/mm/yy, etc. The restriction with this option is the forward slash (/) or dash (-) symbol needs to separate month, day and year and the month is in digit (1-12) form versus abbreviated name form. |
-timeformat | Output the time using the specified format. Default behavior is -timeformat "hh:mm:ss.xxx" One can adjust the format to microseconds, via "hh:mm:ss.xxxxxx" or nanoseconds, via "hh:mm:ss.xxxxxxxxx", or no fractional seconds, via "hh:mm:ss". The restrictions with this option is that a colon (:) symbol needs to separate hours, minutes and seconds, a period (.) symbol needs to separate the seconds and fractional seconds, and the repeating symbol 'x' is used to represent number of fractional seconds. (Note: the fractional seconds applies only to those time formats that have the appropriate precision available. The Windows internal filetime has, for example, 100 nsec unit precision available. The DOS time format and the UNIX 'time_t' format, however, have no fractional seconds). Some of the times represented by this tool may use a time format without fractional seconds and therefore will not show a greater precision beyond seconds when using this option. |
-pair_datetime | Output the date/time as 1 field vice 2 for csv option |
-include_unalloc_clusters | Experimental. Scan unallocated clusters as well. Results may include false positives. |
-include_vss_clusters | Experimental. Scan all Volume Shadow clusters as well. Results may include false positives. |
-include_slack_space | Experimental. Scan the MFT entries and examine any slack space for USNJRNL entries. Results may include false positives. |
-show_offset | Experimental. Output the offset the USNJRNL entry was found at. Useful to manually verify the results. |
-utf8_bom | All output is in Unicode UTF-8 format. If desired, one can prefix an UTF-8 byte order mark to the CSV output using this option. |
Field | Definition |
---|---|
usndate | Journal entry date |
time-UTC (after usndate) | Journal entry time |
MFT entry | Target's MFT entry |
seqnum (after MFT entry) | Target's MFT sequence number |
parent MFT | Target's parent MFT entry |
seqnum (after parent MFT) | Target's parent MFT sequence number |
usn# | USN number |
type | If using a combination of parsing options, which one found the entry |
offset | Offset of where the entry was found |
attributes | Target attributes |
filename | Target name (file or folder) |
type change | Target type of change that caused the journal entry |
MFT status | Target status: valid or deleted and category (entry, parent, or wisp) |
mdate | Target modify date |
time-UTC (after mdate) | Target modify time |
adate | Target access date |
time-UTC (after adate) | Target access time |
mftdate | Target MFT change date |
time-UTC (after mftdate) | Target MFT change time |
cdate | Target create date |
time-UTC (after cdate) | Target create time |
Path | Path of the journal entry |
This tool has authentication built into the binary. The primary authentication mechanism is the digital X509 code signing certificate embedded into the binary (Windows and macOS).
The other mechanism is the runtime authentication, which applies to all versions of the tools (Windows, Linux and macOS). The runtime authentication ensures that the tool has a valid license. The license needs to be in the same directory as the tool for it to authenticate. Furthermore, any modification to the license, either to its name or contents, will invalidate the license.
The tools from TZWorks will output header information about the tool's version and whether it is running in limited, demo or full mode. This is directly related to what version of a license the tool authenticates with. The limited and demo keywords indicate some functionality of the tool is not available, and the full keyword indicates all the functionality is available. The missing functionality in the limited or demo versions may mean one or all of the following: (a) certain options may not be available, (b) certain data may not be output in the parsed results, and (c) the license has a finite lifetime before expiring.