Hashcat-utils are a set of small utilities that are useful in advanced password cracking.
They are all packed into stand-alone binaries.
All of these utils are designed to execute only one specific function.
Since they all work with STDIN and STDOUT you can group them into chains.
The current version is 1.7.
The programs are available for Linux and Windows, on both 32-bit and 64-bit architectures. There are also binaries (.app) for 64-bit OSX. The project is released as MIT-licensed open source software.
hashcat-utils does not have a dedicated homepage, but here is a download link to the latest version:
Each of them is described in detail in the following sections.
Tool used to generate .hccapx files from network capture files (.cap or .pcap) to crack WPA/WPA2 authentications. The .hccapx files are used as input by the hash type -m 2500 = WPA/WPA2.
The additional options allow you to specify a network name (ESSID) to filter out unwanted networks and to give cap2hccapx a hint about the name of a network (ESSID) and MAC address of the access point (BSSID) if no beacon was captured.
$ ./cap2hccapx.bin
usage: ./cap2hccapx.bin input.pcap output.hccapx [filter by essid] [additional network essid:bssid]
$ ./combinator.bin
usage: ./combinator.bin file1 file2
This program is a stand-alone implementation of the Combinator Attack.
Each word from file2 is appended to each word from file1 and then printed to STDOUT.
Since the program is required to rewind the files multiple times it cannot work with STDIN and requires real files.
Another option would be to store the content of both files in memory. However, in hash cracking we usually work with huge files, so the size of the files we use does matter.
See Combinator Attack for examples.
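As an illustration, the core behavior of combinator can be sketched in Python. This is a hypothetical reimplementation for clarity, not the actual C code; the function name and list-based interface are assumptions:

```python
def combinator(file1_words, file2_words):
    # Append every word from the second list to every word from the
    # first list, preserving the order the files are read in.
    return [w1 + w2 for w1 in file1_words for w2 in file2_words]

# combinator(["pass", "word"], ["1", "2"])
# -> ["pass1", "pass2", "word1", "word2"]
```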
This program (new in hashcat-utils-0.6) is designed to cut up a wordlist (read from STDIN) for use in a Combinator attack. If you notice that passwords in a particular dump tend to have a common padding length at the beginning or end of the plaintext, this program will cut that specific prefix or suffix length off the existing words in the list and pass the result to STDOUT.
$ ./cutb.bin
usage: ./cutb.bin offset [length] < infile > outfile
Example wordlist file:
$ cat wordlist
apple1234
theman
fastcars
Example positive offset and fixed length (first 4 characters):
$ ./cutb.bin 0 4 < wordlist
appl
them
fast
Example positive offset, no length (returns remaining characters in string):
$ ./cutb.bin 4 < wordlist
e1234
an
cars
Example negative offset (last 4 characters in string):
$ ./cutb.bin -4 < wordlist
1234
eman
cars
Example negative offset, fixed length:
$ ./cutb.bin -5 3 < wordlist
e12
hem
tca
Remember to run sort -u on the output before using it in an attack!
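The offset/length semantics shown in the examples above can be sketched in Python. This is an assumed reimplementation for illustration, not the actual tool:

```python
def cutb(word, offset, length=None):
    # A negative offset counts from the end of the word;
    # an optional length then truncates the remaining piece.
    if offset < 0:
        word = word[offset:]
        offset = 0
    piece = word[offset:]
    if length is not None:
        piece = piece[:length]
    return piece

# cutb("apple1234", 0, 4)  -> "appl"
# cutb("apple1234", -5, 3) -> "e12"
```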
This program has no parameters to configure.
Each word going into STDIN is parsed and split into all its single chars, mutated and reconstructed and then sent to STDOUT.
There are a couple of reconstruction passes that generate all possible patterns of the input word.
Important: make sure you unique the output afterwards.
$ echo pass1 | ./expander.bin | sort -u
1
1p
1pas
1pass
a
as
ass
ass1
ass1p
p
pa
pas
pass
pass1
s
s1
s1p
s1pa
s1pas
ss
ss1
ss1p
ss1pa
This program is the heart of the Fingerprint Attack.
Each wordlist going into STDIN is parsed and split into equal sections and then passed to STDOUT based on the amount you specify. The reason for splitting is to distribute the workload that gets generated.
For example if you have an i7 CPU and want to use your dictionary with a program that is unable to handle multiple cores, you can use gate to split your dictionary into multiple smaller pieces and then run that program in multiple instances.
$ ./gate.bin
usage: ./gate.bin mod offset < infile > outfile
The two important parameters are “mod” and “offset”.
Here is an example input dictionary:
$ cat numbers
1
2
3
4
5
6
7
8
9
10
11
12
13
14
We want to split a dictionary into two equal dictionaries:
$ ./gate.bin 2 1 < numbers
2
4
6
8
10
12
14

$ ./gate.bin 2 0 < numbers
1
3
5
7
9
11
13
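The selection logic shown above can be sketched in Python; this is a hypothetical reimplementation, assuming lines are counted from zero:

```python
def gate(lines, mod, offset):
    # Keep every line whose 0-based position is congruent to offset
    # modulo mod. Running this once per offset value (0 .. mod-1)
    # splits the input into mod equal pieces.
    return [line for i, line in enumerate(lines) if i % mod == offset]
```

Running it for each offset in turn reproduces the two dictionaries from the example above.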
Tool used to generate .hcstat files for use with the statsprocessor.
usage: ./hcstatgen.bin out.hcstat < infile
There is not much to say here. Each output file will be exactly 32.1 MB.
Each word going into STDIN is parsed for its length and passed to STDOUT if it matches a specified word-length range.
usage: ./len.bin min max < infile > outfile
Here is an example input dictionary:
$ cat dict
1
123
test
pass
hello
world
We want only those words that have a length of 2, 3 or 4:
$ ./len.bin 2 4 < dict
123
test
pass
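The filter itself is trivial; a Python sketch (an assumed reimplementation, not the actual C code):

```python
def len_filter(words, min_len, max_len):
    # Pass a word through only if its length lies in [min_len, max_len].
    return [w for w in words if min_len <= len(w) <= max_len]

# len_filter(["1", "123", "test", "pass", "hello", "world"], 2, 4)
# -> ["123", "test", "pass"]
```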
Basically, morph generates insertion rules for the most frequent chains of characters in the dictionary that you provide, and it does so per position.
usage: ./morph.bin dictionary depth width pos_min pos_max
- Dictionary = Wordlist used for frequency analysis.
- Depth = Determines how many of the “top” chains you want. For example, 10 would give you the top 10 (in fact, it seems to start with the value 0, so 10 would give the top 11).
- Width = Maximum length of the chain. With 3, for example, you will get up to 3 rules per line for the most frequent 3-character chains.
- pos_min = Minimum position where the insertion rule will be generated. For example, 5 means rules are only generated to insert the string from position 5 and up.
- pos_max = Maximum position where the insertion rule will be generated. For example, 10 means rules are only generated where the inserted string ends at or before position 10.
This program is a stand-alone implementation of the Permutation Attack.
It has no parameters to configure.
Each word going into STDIN is parsed and run through “The Countdown QuickPerm Algorithm” by Phillip Paul Fuchs.
See Permutation Attack for examples.
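For illustration, the candidate set can be reproduced in Python with the standard library. Note that QuickPerm and itertools.permutations enumerate in different orders, but the resulting sets of candidates are identical:

```python
from itertools import permutations

def permute(word):
    # Enumerate every reordering of the word's characters and
    # collapse duplicates (relevant when characters repeat).
    return {"".join(p) for p in permutations(word)}

# permute("BCA") -> {"ABC", "ACB", "BAC", "BCA", "CAB", "CBA"}
```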
This program is made as a dictionary optimizer for the Permutation Attack.
Due to the nature of the permutation algorithm itself, the input words “BCA” and “CAB” would produce exactly the same password candidates.
$ echo BCA | ./permute.bin
BCA
CBA
ABC
BAC
CAB
ACB

$ echo CAB | ./permute.bin
CAB
ACB
BCA
CBA
ABC
BAC
The best way to sort out these “dupes” is to reconstruct the input word reordered by the ASCII value of each char of the word:
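A minimal Python sketch of that canonicalization (a hypothetical reimplementation of the idea, not the actual tool):

```python
def prepare(word):
    # Canonical form: characters reordered by their ASCII value, so any
    # two words that are permutations of each other become identical.
    return "".join(sorted(word))

# prepare("BCA") -> "ABC"
# prepare("CAB") -> "ABC"
```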
Now we can safely sort -u afterwards:
$ wc -l rockyou.txt
14344391 rockyou.txt
$ ./prepare.bin < rockyou.txt | sort -u > rockyou.txt.prep
$ wc -l rockyou.txt.prep
9375751 rockyou.txt.prep
This sorted out 4968640 words (34.6%) that would have produced duplicates in a permutation attack.
Each word going into STDIN is parsed and passed to STDOUT if it matches a specified password-group criterion.
Sometimes you know that a password must include a lower-case character, an upper-case character and a digit to pass a specific password policy.
Checking password candidates that do not match this policy will definitely not result in a cracked password, so we should skip them.
This program is not very complex and cannot fully match all common password-policy criteria, but it does provide a little help.
The following password groups are defined:
|LOWER||1||At least one lower-case character|
|UPPER||2||At least one upper-case character|
|DIGIT||4||At least one digit|
|OTHER||8||All others, not matching the above|
To configure a password group out of the single entries you just add the item numbers of all the single entries together.
For example, if you want to pass to STDOUT only the words that contain at least one lower-case character and at least one digit, you look up “lower” in the table, which is “1”, and “digit”, which is “4”, and add them together to get “5”.
$ echo hello | ./req.bin 5
$ echo hello1 | ./req.bin 5
hello1
$ echo Hello1 | ./req.bin 5
Hello1
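The bitmask check can be sketched in Python. This is an assumed reimplementation for clarity: lower=1 and digit=4 follow from the text above, while mapping upper-case to 2 and everything else to 8 is an assumption based on the power-of-two scheme:

```python
def req(word, mask):
    # Collect a bit per character class present in the word, then pass
    # the word only if every bit requested in mask is set.
    groups = 0
    for c in word:
        if c.islower():
            groups |= 1
        elif c.isupper():
            groups |= 2
        elif c.isdigit():
            groups |= 4
        else:
            groups |= 8
    return groups & mask == mask

# req("hello", 5)  -> False (no digit)
# req("hello1", 5) -> True
```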
rli compares a single file against one or more other files and removes all duplicates:
rli
usage: rli infile outfile removefiles...
Let's say we have two files. w1.txt contains:

password
123
cards
999
aceofspades
1234
veryfast

and w2.txt contains:

123
999
1234
If we run the following command:
rli w1.txt OUT_FiLE.txt w2.txt
OUT_FiLE.txt will have:
password
cards
aceofspades
veryfast
It also supports multiple remove files. If w3.txt has “password” in it and we run:
rli w1.txt OUT_FiLE.txt w2.txt w3.txt
OUT_FiLE.txt will now have:

cards
aceofspades
veryfast
rli can be very useful to clean your dicts and to have one unique set of dictionaries.
But the dictionary size cannot exceed the host memory size. Read about rli2 below for large files.
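The behavior can be sketched in Python; this hypothetical version makes the memory bound explicit, since all remove-words are held in one in-memory set:

```python
def rli(infile_words, *removefiles_words):
    # Build one set of all words to remove (this is why rli is bound
    # by host memory), then filter the input list against it.
    remove = set()
    for words in removefiles_words:
        remove.update(words)
    return [w for w in infile_words if w not in remove]
```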
Unlike rli, rli2 is not limited by available memory. But it requires the input files to be sorted and uniqued beforehand, otherwise it won't work as it should.
For example, using the w1.txt and w2.txt files from above, if we run:
rli2 w1.txt w2.txt
This will output:
password
123
cards
999
aceofspades
1234
veryfast
No change. But if we sort and unique both files first:

sort -u w1.txt > w1su.txt
sort -u w2.txt > w2su.txt
rli2 w1su.txt w2su.txt
the output is now correct:
aceofspades
cards
password
veryfast
Note that rli2 can't handle multiple remove files. And if you haven't already noticed, rli2 writes to STDOUT, not to a file. You can always pipe the output to a file to work around that.
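The reason rli2 needs sorted input is that it can do a merge-walk over the two files instead of building a set. A hypothetical Python sketch of that idea (the real tool streams the files rather than holding lists):

```python
def rli2(sorted_infile, sorted_removefile):
    # Walk both sorted, uniqued lists in lockstep and keep the words
    # that appear only in the first; no set is built, so memory use
    # stays constant regardless of file size.
    out, i, j = [], 0, 0
    while i < len(sorted_infile):
        if j < len(sorted_removefile) and sorted_removefile[j] < sorted_infile[i]:
            j += 1
        elif j < len(sorted_removefile) and sorted_removefile[j] == sorted_infile[i]:
            i += 1
        else:
            out.append(sorted_infile[i])
            i += 1
    return out
```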
This program is designed to be a dictionary optimizer for oclHashcat.
oclHashcat has a very specific way of loading dictionaries, unlike hashcat. The best way to organize your dictionaries for use with oclHashcat is to sort each word in your dictionary by its length into specific files, into a specific directory, and then to run oclHashcat in directory mode.
$ ./splitlen.bin
usage: ./splitlen.bin outdir < infile
All you need to do is to create a new directory, for example “ldicts”.
$ mkdir ldicts
$ ./splitlen.bin ldicts < rockyou.txt
$ ls -l ldicts/
total 129460
-rw-r--r-- 1 root root       90 Oct 12 15:54 01
-rw-r--r-- 1 root root     1005 Oct 12 15:54 02
-rw-r--r-- 1 root root     9844 Oct 12 15:54 03
-rw-r--r-- 1 root root    89495 Oct 12 15:54 04
-rw-r--r-- 1 root root  1555014 Oct 12 15:54 05
-rw-r--r-- 1 root root 13634586 Oct 12 15:54 06
-rw-r--r-- 1 root root 20050168 Oct 12 15:54 07
-rw-r--r-- 1 root root 26694333 Oct 12 15:54 08
-rw-r--r-- 1 root root 21910390 Oct 12 15:54 09
-rw-r--r-- 1 root root 22150645 Oct 12 15:54 10
-rw-r--r-- 1 root root 10392420 Oct 12 15:54 11
-rw-r--r-- 1 root root  7219550 Oct 12 15:54 12
-rw-r--r-- 1 root root  5098436 Oct 12 15:54 13
-rw-r--r-- 1 root root  3727905 Oct 12 15:54 14
-rw-r--r-- 1 root root        0 Oct 12 15:54 15
NOTE: splitlen does not append; it overwrites the files in the outdir. That's why you should use empty directories.
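The splitting can be sketched in Python. This is a hypothetical reimplementation; the two-digit file names and the 1..64 length bounds are taken from the examples and defines discussed below, and opening files with mode "w" mirrors the overwrite (not append) behavior:

```python
import os

def splitlen(outdir, words, len_min=1, len_max=64):
    # One output file per word length, named 01, 02, ...
    handles = {}
    try:
        for w in words:
            n = len(w)
            if not (len_min <= n <= len_max):
                continue
            if n not in handles:
                # Mode "w" truncates any existing file of the same name.
                handles[n] = open(os.path.join(outdir, "%02d" % n), "w")
            handles[n].write(w + "\n")
    finally:
        for h in handles.values():
            h.close()
```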
Some programs from hashcat-utils have a minimum and maximum allowed word-length range (like in “len” example).
E.g. see splitlen.c:
#define LEN_MIN 1
#define LEN_MAX 64
You can change them and then recompile hashcat-utils. However, we usually do not need plain words of greater length in password cracking.