oclHashcat v1.20
#1
Download here: https://hashcat.net/oclhashcat/

ATTENTION!

You will need to take some time to go through all of the release notes, as there are megatons of new features. Don't worry, it's mostly just additions, so you won't have to relearn oclHashcat's syntax all over again. However, many of the new features require an explanation. You should know what they do, how they work, and how you can use them -- or at least, how we think you can use them.

Our goal whenever we add these kinds of features is to nurture your creativity. You are not forced to use these features in exactly the way we suggest. On the contrary, we hope that some of the new features get your neurons firing and inspire new ideas for designing more efficient attacks, or simply make the task more comfortable.


Added algorithms


Here's a quick overview of the newly-added hash types:
  • Juniper Netscreen/SSG (ScreenOS)
  • MySQL323
  • MD5(SHA1())
  • Double SHA1
  • SHA1(MD5())
  • Cisco-ASA MD5
  • TrueCrypt 5.0+ PBKDF2 HMAC-RipeMD160 + AES + hidden-volume
  • TrueCrypt 5.0+ PBKDF2 HMAC-SHA512 + AES + hidden-volume
  • TrueCrypt 5.0+ PBKDF2 HMAC-Whirlpool + AES + hidden-volume
  • TrueCrypt 5.0+ PBKDF2 HMAC-RipeMD160 + AES + hidden-volume + boot-mode
  • IPMI2 RAKP HMAC-SHA1
  • Redmine
  • SAP CODVN B (BCODE)
  • SAP CODVN F/G (PASSCODE)
  • Drupal7
  • Sybase ASE
  • Citrix Netscaler
  • 1Password, cloudkeychain
  • DNSSEC (NSEC3)
  • WBB3, Woltlab Burning Board 3
  • RACF
You should take a close look at the SAP-B (BCODE) algorithm. AFAIK this hash is still in use in many enterprise installations. This algorithm is clearly broken! This is serious.

Also see Frank Dittrich's original writeup about the algorithm at http://www.revision-online.info/index.ph...Update.pdf
It does a decent job of explaining the weaknesses, but it was written at a time when there was no GPGPU-based cracking -- or at least, not for this algorithm.

SAP-B passwords are limited to a keyspace of 69^8. With oclHashcat v1.20, a single R9 290x can crack this hash type at a rate of 850 MH/s (the HD 7970 reaches 560 MH/s). Therefore, 8 x R9 290x can crack -every- possible SAP-B password in roughly 21 hours.
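
That claim is easy to verify yourself; a quick back-of-the-envelope check with bc (just arithmetic, no assumptions beyond the quoted hash rates):

Code:
echo '69^8 / (8 * 850000000) / 3600' | bc -l
# prints ~20.99, i.e. roughly 21 hours for the full 69^8 keyspace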

The worst part about it is that the reduced keyspace is not just a matter of uppercasing the password like LM does: it further replaces all characters outside the 0x20-0x80 ASCII range with 0xff. In other words, even if you use crazy keycodes in your password, it will be cracked within those 21 hours. It's hopeless.



AMD Catalyst v14.x (Mantle) driver


The Mantle drivers created some initial headaches for us. The main problem is that OpenCL kernel binaries compiled against the previous stable 13.x Catalyst drivers are incompatible with those compiled against the Mantle drivers. So it's not our fault that you are forced to update to Catalyst 14.x. More annoyingly, the 14.x drivers are also required if you are running Linux kernel 3.13+, so we really don't have a choice, do we?

There is an upside to upgrading to the Mantle drivers, though. The OpenCL JIT compiler was updated to produce more optimized low-level GPU instructions, which we as developers have no access to when using OpenCL. This means the JIT compiler is finally starting to become as optimized as our OpenCL kernels, which translates into a 23% performance gain for NTLM.



Improved distributed cracking support


There have been a lot of different third-party approaches to distributed cracking with oclHashcat. The basic idea is simple: as in all parallel computing environments, you need to find a way to distribute the load across a set of worker nodes.

At this time, the following ideas have been developed:
  • Split the dictionary into N pieces, distribute the pieces to worker nodes
  • Split the rules into N pieces, distribute the pieces to worker nodes
  • Split the mask into N pieces, distribute the pieces to worker nodes
  • Create offsets in .restore files and distribute the restore files to worker nodes
They all work, but they are all more or less suboptimal, simply because oclHashcat lacked a specific feature that developers needed to make distribution easier, faster, and better overall.

What we added are just two parameters: -s and -l. If you are at all familiar with hashcat, then you already know these parameters, as hashcat CPU, maskprocessor and statsprocessor have had them for quite a while. They are very simple to use, and they are all you need to integrate oclHashcat into your favourite distributed computing system like BOINC, or into your own solution.

The -s and -l parameters stand for "skip" and "limit", and allow you to define a range to search within your keyspace. Parameter -s allows you to set the offset, and parameter -l allows you to set the range length. Simply divide the keyspace by the number of nodes to find the range length, and increment the offset by the range length for each node.

Here's an example: say you have a 1000-word dictionary and four identical worker nodes. We divide the keyspace of 1000 by 4 nodes and get a range length of 250. The command line on each worker node will be as follows:

Code:
PC1: ./oclHashcat64.bin -s   0 -l 250 ...   // computes   0 - 249
PC2: ./oclHashcat64.bin -s 250 -l 250 ...   // computes 250 - 499
PC3: ./oclHashcat64.bin -s 500 -l 250 ...   // computes 500 - 749
PC4: ./oclHashcat64.bin -s 750 -l 250 ...   // computes 750 - 999

Now, this example only works well when all of the nodes are identical. But sometimes you have a heterogeneous mixture of devices, and not all nodes will be the same speed. Handling failures also complicates things: what do you do if a node suddenly drops off the network? And what if you want to add a new node while an attack is running?

To facilitate these scenarios we must take a different approach. We know the total keyspace is 1000, but this time we won't divide it by 4 because we don't know precisely how many nodes we have. Instead, we can simply use a fixed length for all nodes, and rely on the master node to keep track of the -s value. Then we can hand out work items to the nodes with a loop.

Here's an example of this approach using a fixed range length of 100.

Code:
keyspace=1000
limit=100

for ((skip = 0; skip < keyspace; skip += limit)); do
  # hand this work item to the next free node, e.g.:
  # PCxxxx: ./oclHashcat64.bin -s $skip -l $limit ...
done

This is a rudimentary and incomplete example, but it demonstrates that those two parameters are all you need to distribute work, even in more complicated environments.

Now, in the previous examples, calculating the keyspace was simple because we were using a dictionary attack: for dictionary attacks, the keyspace is simply the number of words in the dictionary. It is a bit more complicated to calculate the keyspace for more advanced attack modes. Therefore, we added another parameter called --keyspace that calculates the keyspace for any given attack. When using a mask attack, for example, you should use --keyspace instead of trying to calculate the keyspace yourself.

Here's an example of how to use the --keyspace parameter:

Code:
atom@sf:~/oclHashcat-1.20$ ./oclHashcat64.bin some.hash -a 3 ?d?d?d?d?d?d?d?d?d --keyspace
1000000
atom@sf:~/oclHashcat-1.20$ ./oclHashcat64.bin some.hash -a 3 ?d?d?d?d?d?d?d?d --keyspace
100000
atom@sf:~/oclHashcat-1.20$ ./oclHashcat64.bin some.hash -a 3 ?d?d?d?d?d?d?d --keyspace
10000
atom@sf:~/oclHashcat-1.20$ ./oclHashcat64.bin some.hash -a 3 ?d?d?d?d?d?d --keyspace
10000

Take a close look at the last two examples: the reported keyspace is not simply the product of the charset sizes, because oclHashcat factors out part of the mask and processes it on-GPU as an amplifier. That is why two different masks can report the same keyspace. So please make life easy on yourself and use --keyspace to calculate the keyspace for all of your distributed attacks.
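
Putting -s, -l and --keyspace together, a distribution wrapper can be sketched in a few lines of shell (a minimal sketch; node names, hash file and mask are assumptions, and a real wrapper needs error and failure handling):

Code:
NODES=4
MASK='?d?d?d?d?d?d?d?d'

# query the keyspace once, then split it evenly across the nodes
KEYSPACE=$(./oclHashcat64.bin some.hash -a 3 "$MASK" --keyspace)
LIMIT=$(( (KEYSPACE + NODES - 1) / NODES ))

for ((i = 0; i < NODES; i++)); do
  ssh "node$i" "cd ~/oclHashcat-1.20 && \
    ./oclHashcat64.bin some.hash -a 3 '$MASK' -s $((i * LIMIT)) -l $LIMIT" &
done
wait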

With all that said, we hope that adding better distributed support encourages people to build more third-party distributed wrappers for oclHashcat. A few beta testers have already started working on such solutions. Here's an example: http://www.youtube.com/watch?v=0K4mTG5jiR8



Added outfiles directory


Soon after beta testers realized that they were now able to distribute the workload, they ran into another problem: what about the results, the cracked hashes?

Usually this doesn't matter if you are running a brute-force attack or running against an unsalted hashlist, but it's different when you have a salted hashlist. If you have a hashlist with 100 salted hashes, the time to process a keyspace is 100 times longer than with a single salt. That should be clear, right?

oclHashcat has an optimization, as every good hash cracker should: once you crack all hashes bound to a specific salt, it removes that salt from the salt list so it is never checked again. But in a distributed environment, one node may finish off a specific salt by cracking all the hashes bound to it, while the other nodes know nothing about that and keep processing that salt unnecessarily.

We had the same problem a while back with oclHashcat-lite. It already supported -s and -l, and people were writing distributed wrappers around it. They raised the same question: if one node cracked a hash (oclHashcat-lite was single-hash), how do the other nodes know that they should stop working on it?

This eventually boiled down to the following question: how do you inform a running oclHashcat session that a hash it is trying to crack was already cracked by a different node?

After discussing this with beta testers, we came up with a very easy solution: just put the cracked hash into a file in a directory that we call "the outfile directory." oclHashcat periodically scans the outfile directory and reads all the files within it. It tries to match each line of each file against the internal hash table, which keeps track of which hashes and salts are cracked and which are not, and marks matches as cracked.

It's not required, but to automate that process completely, all you need is a shared directory, e.g. over NFS or CIFS, to which all your distributed nodes can write. Point each node to write into a file in that shared directory (protip: use a unique file for each node). Once a node cracks a hash, it writes it into its own outfile, and all other nodes learn about it since they are periodically scanning the same directory.
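
Here's a rough sketch of such a two-node setup (paths, hash mode and ranges are assumptions):

Code:
# both nodes share /mnt/cracked over NFS; -o writes this node's
# results, --outfile-check-dir watches everyone's results

# node1:
./oclHashcat64.bin -m 10 salted.hash dict.txt -s 0 -l 500000 \
  -o /mnt/cracked/node1.out --outfile-check-dir /mnt/cracked

# node2:
./oclHashcat64.bin -m 10 salted.hash dict.txt -s 500000 -l 500000 \
  -o /mnt/cracked/node2.out --outfile-check-dir /mnt/cracked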

There are some additional parameters to configure this behavior:
  • Parameter "--outfile-check-dir" specifies the directory to scan periodically. If you do not configure it, it defaults to $session.outfiles
  • Parameter "--outfile-check-timer" configures the period, in seconds, at which to rescan the outfile directory. The default is 5 seconds, and you can disable the check by setting it to 0.
As a bonus, this solution automatically stops all nodes once all hashes have been cracked, so you can save some energy.



Rewrote restore system from scratch


Sometimes oclHashcat is a bit pedantic. That was especially true when using --restore. It was so pedantic, I could barely use it myself. Restoring was only possible...
  • From the same computer. That means: same set of GPUs, same order on the PCI bus, etc. If your hardware broke, you were out of luck
  • From the same hashlist. If you got cracked hashes from external sources, there was no way to inform oclHashcat about them
  • From the same installation directory. If you moved the installation directory, it was unable to restore
In theory, none of this would concern you if you were simply restoring after a power failure. But in reality, there are more complex reasons that force you to try to restore a previous session, such as hardware failure.

What we wanted was a more transparent, flexible, error-resistant and robust restore. With the new approach, you are no longer limited by the above points. There is, for instance, no more binding to the hardware or the hashlist.

But this new oclHashcat version goes even further. For example, you can now manually change the restore point. That means if you lost a .restore file for whatever reason, but you remember roughly which position it was at, you can now set it manually. Also, .restore files are now guaranteed to stay small (somewhere under 2 KB).



Rewrote multihash structure


Not long ago, we announced that it was possible to load up to 25 million hashes at once. Of course, we were talking about unsalted hashes that can be cracked with multihash techniques, not salted ones. That was not bad, but now it's even better! In 1.20, you can load hashlists that contain up to 100 million hashes, and some beta testers have had success loading up to 150 million. For those of you who think this is senseless, here's why we do it: cracking huge unsalted hashlists is a great way to build new wordlists based on real passwords people use, originating from real hashdumps leaked on the Internet. Check out the compilation that KoreLogic did once; I think it was around 150 million unique MD5 hashes.

To accomplish this, we had to move away from the previous technique, where we transferred the password candidates used to crack a hash from GPU memory to host memory. Because there is no way to communicate between workgroups in OpenCL (only workitems can communicate), we were required to allocate one password buffer per unique hash on the GPU, i.e. the number of unique hashes multiplied by the size of the password buffer. As you can imagine, that took a lot of GPU memory that could not be used for actual hashes. By using a different technique that does not depend on allocating all those password buffers, we can now use this memory for hashes instead.

Another change: to speed up the cracking of huge hashlists, which is a very memory-intensive task, we increased the maximum bitmap size to 24 bits. The bitmaps are what enable us to check for the possible nonexistence of a hash in a hashlist before going into the costly search function. Increasing the size of the bitmap buffer decreases the number of unwanted collisions, which improves the overall efficiency of the bitmap system and, in turn, overall performance.

These huge bitmaps can affect your ability to load huge hashlists, because they require a lot of GPU memory. Therefore, we added a new parameter called --bitmap-max. Usually you will never need it, but if you want to load a huge hashlist and oclHashcat reports that it was unable to load it because the memory limit was reached, try decreasing the value (for example to 16) to save some GPU memory.
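
For example (hypothetical file names; a minimal illustration of the memory/efficiency trade-off):

Code:
# a huge unsalted hashlist fails to load due to GPU memory limits,
# so trade some bitmap efficiency for memory:
./oclHashcat64.bin -m 0 -a 0 100M_md5.hashes dict.txt --bitmap-max 16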



Added debugging support for rules


Most of you are already familiar with the debug parameters from hashcat CPU, and many of you wanted this feature in oclHashcat as well. Previously, it was not possible to implement, but thanks to the architecture changes described above, it now is.

There are a couple of new parameters to configure this feature:
  • Parameter --debug-mode configures what to write: the base word, the rule and/or the cracked password
  • Parameter --debug-file writes the debugging information to a file rather than to stdout


This feature is primarily aimed at generating new rules, but it's also useful for finding out which words in your dictionaries are effective, or which rules in your rulesets crack the most hashes. For this example, I'll focus only on the rule generator:

##
## 1. Crack some hashes with randomly generated rules and a small wordlist
##

Quote:
atom@ht:~/oclHashcat-1.20$ ./oclHashcat64.bin example0.hash example.dict --generate-rules 100 --debug-mode 3 --quiet
cf61d5aed48e2c5d68c5e3d2eab03241:alex999999999
alex99:Z5 Z2
a4bf29620bb32f40c3fc94ad1fc3537a:_hallo12
hallo12:^_
ba114384cc2dbf2f2e3230b803afce86:321654987Q
321654987:$Q
77719e24d4e842c8c87d91e73c7d1a8f:1123581322
1123581321:oAL *98 +8
e2a3f66b3de94593e2e0a6e5208b55af:anais20072007
anais2007:Y4
77108d6b734f4f4e06639fced921b1fe:1234qwerQ
1234qwer:$Q
66dec649460b9ebfdb3f513c2985525c:wrestlingg
wrestling:Z1
8c0d31cadefef386ed4ebb2daf1b80be:newports12
newports21:*98 p4

##
## 2. The above example is just to show the output; usually you would use --debug-file, which would then contain the following instead:
##

Quote:
atom@ht:~/oclHashcat-1.20$ cat debug.rules
alex99:Z5 Z2
hallo12:^_
321654987:$Q
1123581321:oAL *98 +8
anais2007:Y4
1234qwer:$Q
wrestling:Z1
newports21:*98 p4

##
## 3. Optimize rules with the new rule optimizer:
##

Quote:
atom@ht:~/oclHashcat-1.20$ tools/rules_optimize/rules_optimize.bin < debug.rules | sort -u
^_
*98 +8
*98 p4
$Q
Y4
Z1
Z5 Z2

What this did was remove the "oAL" function, since it wasn't necessary; this also lets sort -u pack the list better, as more duplicates collapse into one. The new rules optimizer is a standalone binary for use with debug-rules mode 3 output files, and can be found in the extra/ directory.

Over the last few days, I was running oclHashcat with the -g parameter in an endless loop, always with around 10k generated rules. In total, I collected around 50k new rules, each of which cracked at least one new hash. Then I re-ran those 50k rules against my full dictionaries, to great effect.
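
Such a loop could look roughly like this (a sketch; the hash and dictionary file names are assumptions, and --remove makes each pass count only new cracks):

Code:
for i in $(seq 1 100); do   # or "while true" -- stop whenever you have enough
  ./oclHashcat64.bin -m 0 leaked.hashes full.dict -g 10000 --remove \
    --debug-mode 3 --debug-file "debug.$i.rules" --quiet
done

# afterwards: optimize and deduplicate everything collected
cat debug.*.rules | tools/rules_optimize/rules_optimize.bin | sort -u > new.rules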

After a few days of letting this run in a loop, the beta testers collected a list of 600k new rules. Can you imagine that, 600k new rules, each of which actually cracked a previously-uncracked hash. We thought this was really cool and wanted to share it. We ran the list through the optimizer and sorted it by occurrence to put the best rules on top. We then removed all rules that did not crack at least -two- unique hashes, and the result is a list of 64k new rules sorted by occurrence. That file was named generated2.rule and added to the rules/ directory. Have fun!



Added support for $HEX[]


This addition basically goes back to the following trac ticket: https://hashcat.net/trac/ticket/148

The problem is with character encodings for various languages. To be completely honest, I really don't like this topic; there are many different encodings, languages and characters. What you need to know when it comes to encodings and hashes is that most, if not all, algorithms do not care about encoding at all. Hash algorithms just work on bytes. That means if you input a password that contains, for example, a German umlaut, the same unsalted algorithm can produce multiple different hashes: one each depending on whether you used ISO-8859-1, UTF-8 or UTF-16.
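
You can reproduce this on any Linux box (a small demonstration; assumes a UTF-8 terminal and iconv installed):

Code:
# same word, two encodings, two different MD5 digests:
printf 'müller' | md5sum
printf 'müller' | iconv -f UTF-8 -t ISO-8859-1 | md5sum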

We often have to deal with hashlists of unknown encoding. Therefore, the output encoding (in the shell or in the outfile) might not match the configured encoding of our shell or our editor. The result is weird characters, and users get confused. The worst case is when hashlists contain mixed encodings because the systems that generated the hashes had different encoding settings. That is what makes our case unique, and why we cannot simply output all plaintexts as UTF-8.

Then there is more drama. There are hashes in hashlist compilations that were put there by highly intelligent individuals who force a hash into a submission mask meant for a completely different hash-type. For example, the mask is for raw MD5, but they have a salted MD5; they simply remove the salt and force the system to accept it that way. In combination, the problem is that some admins simply use \n, \r or even null bytes as salts. When oclHashcat is configured to automatically generate random rules, it can then happen that the + or - functions crack those \n-salted hashes, which leads to a completely different problem.

The solution is what the trac ticket suggests: if the plaintext password contains at least one character outside the 0x20 - 0x80 ASCII range, we automatically switch the output format to $HEX[...] entirely. That is a bit like UTF-8, except we're not just converting the next character; we put the complete word into hex mode. Doing this, we work around problems with:

  • The potfile, because its format is very simple and works line by line. If there is a newline character in the password, your password, when verified, would not match against the hash if $HEX[] were not used
  • The outfile, because the output no longer shows up as weird characters when the encoding does not match your configured one. This should help avoid confusing inexperienced users
However, not everyone likes this feature, so we added a parameter "--outfile-autohex-disable" that makes oclHashcat output plains as previous versions did.

Also note that we've added support for reading $HEX[...] encoded words from your wordlists. So if you cracked some password that was converted to $HEX[...] and later merge it into your wordlists, you don't have to worry about it: oclHashcat recognizes the $HEX[...] encoding while reading wordlists and automatically converts those entries back to the original words.
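
If you ever want to inspect such an entry by hand, the round-trip is simple (a sketch; the example entry is made up, and sed/xxd are assumed to be available):

Code:
# decode a $HEX[...] line back to its raw bytes:
echo '$HEX[70617373776f7264310a]' | sed 's/.*\[\(.*\)\].*/\1/' | xxd -r -p
# prints "password1" followed by a newline byte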



Added tweaks for AMD OverDrive 6 and better fan speed control


This version of oclHashcat includes several changes to better support new AMD GPUs, i.e. OverDrive 6 enabled graphics cards. These new features range from simple detection of OverDrive 6 GPUs to better memory clock, core clock, powertune and fan speed control. OverDrive 6 GPUs behave very differently from previous AMD GPUs when it comes to performance tuning (i.e. the powertune threshold and many other settings need to be set to reach maximum performance), so many of you may have used epixoip's od6config tool over the last months, e.g. for R9 290x cards. We therefore decided that oclHashcat should include some basic tuning support itself, so that, for example, new users don't always have to run od6config before running oclHashcat on those cards.

Basically, this new version sets the core clock, memory clock and powertune threshold to reasonable values. The changes oclHashcat makes are always undone after oclHashcat quits, so you won't need to bother with all those tuning options or with resetting them later (perhaps because you want to save electricity). We also added a new switch called --powertune-disable. If it is set, oclHashcat skips all OverDrive 6 performance tuning steps; use it if you want to set different performance tuning options manually (e.g. with od6config) beforehand. We added all these powertuning changes to make things more convenient for the user, and to avoid users being shocked by the low performance of OverDrive 6 cards when the performance options have not been set manually.

While making all these changes, we discovered some problems with fan speed control and tried to improve that feature considerably. For instance, as mentioned in https://hashcat.net/trac/ticket/238 , with previous versions it could happen that oclHashcat exited without resetting the fan speed to a reasonable value (i.e. either the speed from before the run or the default value managed by the driver). For multi-GPU setups, we identified and fixed another strange behaviour in previous versions of oclHashcat: sometimes the fan speed showed N/A when it should have shown the current fan speed in percent. This unexpected behaviour was due to querying the wrong device within oclHashcat (read more about it here: https://hashcat.net/trac/ticket/231 ). As you can read there, the temperature value was also not accurate in some specific situations (multi-GPU, Windows, and not all GPUs set to "active").



Adding new password candidates on-the-fly


The idea of supporting a way to add new password candidates (e.g. dictionary words) on-the-fly goes back to a different request for a so-called loopback feature. Let me first explain what that loopback feature is.

The loopback feature only makes sense in straight mode with rules. Whenever oclHashcat cracks a hash, the matching plain is re-queued to run through the rule engine. So, when does this make sense?

Here's an example hashlist:

Quote:
7c6a180b36896a0a8c02787eeafb0e4c
1e5c2776cf544e213c3d279c40719643

... and we have the following wordlist with just a single word:

Quote:
password

... and a simple rule that appends a 1 to each word from the wordlist:

Quote:
$1

When I run this, it will crack one of the above hashes:

Quote:
7c6a180b36896a0a8c02787eeafb0e4c:password1

Now, with the loopback feature enabled, it will take "password1" as a new candidate and apply the rule $1 to it. It will now crack:

Quote:
1e5c2776cf544e213c3d279c40719643:password11

This goes on and on until no new hash is cracked and therefore no new password is re-added to the queue.

Where is this useful in real life? For example, when cracking millions of hashes at once to build your dictionaries. If you run it with many rules, chances are good that it will automatically pick up the patterns in that hashlist.
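
Enabling it is a single switch (a minimal invocation sketch; file names are assumptions):

Code:
./oclHashcat64.bin -m 0 million.hashes dict.txt -r rules/best64.rule --loopback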

Now we can get back to adding password candidates on-the-fly. When we thought about how to implement that request, we came up with the idea of the induction directory. This directory can be defined with the new parameter "--induction-dir"; if you don't specify it, oclHashcat defaults to $session.induct. oclHashcat creates that directory for you automatically (and removes it afterwards). While oclHashcat is running, you can put files into that directory, and they will be scanned by oclHashcat as soon as the current dictionary finishes.
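
In practice, that could look like this (paths and file names are assumptions):

Code:
# terminal 1: a long-running straight-mode attack with an explicit
# induction directory:
./oclHashcat64.bin -m 0 hashes.txt big.dict -r rules/best64.rule \
  --induction-dir /tmp/induct

# terminal 2: feed fresh candidates while it runs; they are picked
# up as soon as the current dictionary finishes:
cp fresh_words.txt /tmp/induct/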



Rewrote weak-hash check


This feature goes back to the following trac ticket: https://hashcat.net/trac/ticket/165

Note that our implementation is not exactly as it was requested in the ticket.

I'll explain: the goal of this feature is to notice when there is a hash whose plaintext is empty, that is, a 0-length password; typically, when you just hit enter. We call it a weak-hash check, even though it should really have been called a weak-password check, but there are simply too many weak passwords.

Previous versions supported this, but only for unsalted hashes. That was easy to implement, because for unsalted hashes the 0-length password always results in the same hash; by simply checking for that hash, it was possible to find out whether it was used. Things get more complicated when a salt is involved: we actually have to run the kernel and compute the 0-length password result with exactly that salt. That wasn't easy, because oclHashcat has different attack-modes, and depending on which attack-mode you choose, a different kernel is loaded. The attack parameters change accordingly, so we had to create a different 0-length password attack for each attack-mode a user can choose. And that's not all: there are also many differences when certain special parameters are set, and between slow hashes and fast hashes. Those were the problems we had to solve just to get it working. But that's done; no more headaches with this.

The next problem, however, is when your hashlist contains millions of salts. As explained above, we have to run a kernel for each salt. If you want to check for empty passwords over that many salts, you will have a very long initialization/startup time when running oclHashcat. To work around this problem, we added a parameter called "--weak-hash-threshold". With it, you can set a maximum number of salts up to which weak hashes should be checked on start. The default is 100; that means if you use a hashlist with 101 unique salts, it will not do a weak-hash check at all. Note that we are talking about unique salts, not unique hashes. Cracking unsalted hashes results in 1 unique salt (an empty one), which means that if you set the threshold to 0, you disable the check completely, even for unsalted hashes.
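
For example (hypothetical hashlist; -m 7300 is one of the salted modes added in this release):

Code:
# thousands of unique salts: skip the weak-hash check entirely
# to avoid the long startup delay
./oclHashcat64.bin -m 7300 rakp.hashes -a 3 '?a?a?a?a?a?a' --weak-hash-threshold 0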



Reload previously-cracked hashes from potfile


With this feature added, oclHashcat reads the potfile every time it starts and compares the content of the .pot file (the cracked hashes) with the hashes from the hashlist it is trying to crack. This is something that is present in JtR, and JtR users will already know how it works, but we've added it for a different reason.

When we rewrote the restore feature, we had the problem that, in case of a restore, oclHashcat did not know which hashes were already cracked in the previous run. Unless you use --remove, which automatically removes all cracked hashes from your hashlist in real time, it would start cracking the same hashes again, depending on your attack-type.

There's just one solution: you need to keep track of the hashes that have already been cracked, and compare them against the hashlist on every start. This is typically a very fast process, but if you have a lot of entries in your potfile, it can take some time. However, it is safe to remove the potfile if you don't need it any longer. The potfile name is $session.potfile. If you don't want to remove the potfile, you can also skip the loading delay by disabling this new feature completely with the "--potfile-disable" flag. But note that this also disables writing to the potfile; if you crack a hash, this will create confusion if you later want to restore the session. Make sure you know what you are doing.
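
If you go that route, keep your own record of cracks, e.g. with -o (a sketch; file names are assumptions):

Code:
./oclHashcat64.bin -m 0 hashes.txt dict.txt --potfile-disable -o cracked.out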

The way this feature compares and finds hashes is basically the same as when reading files from the outfile directories.


Full Changeset


Quote:
Type: Driver
File: Kernel
Desc: Added support for AMD Catalyst v14.4 (mantle) driver

Type: Driver
File: Kernel
Desc: Added support for new AMD GPUs: "Spectre", "Spooky", "Kalindi", "Hainan", "Iceland", "Tonga" and "Mullins"

Type: Driver
File: Kernel
Desc: Added support for NV ForceWare 331.67 driver

Type: Driver
File: Kernel
Desc: Added support for new NV GPUs: "sm_50" (Maxwell)

Type: Reimplementation
File: Kernel
Desc: Rewrote multihash structure, ex: 290x can now load up to 100,000,000+ MD5/NTLM hashes at once

Type: Reimplementation
File: Kernel and Host
Desc: Rewrote rule engines (CPU and GPU) and made them more robust by synchronizing error handling

Type: Reimplementation
File: Host
Desc: Rewrote restore system from scratch; no longer requires same system with same GPUs

Type: Reimplementation
File: Host
Desc: Restructured .restore file; no longer creates huge .restore files, stays < 2k in size

Type: Reimplementation
File: Host
Desc: Rewrote weak-hash check; support all algorithm types including salted ones
Trac: #165

Type: Reimplementation
File: Host
Desc: Rewrote workload dispatching when progress is near the keyspace end; acts more conservatively

Type: Reimplementation
File: Host
Desc: Rewrote mechanism to control the fan with AMD GPUs

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 22 = Juniper Netscreen/SSG (ScreenOS)
Trac: #235

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 200 = MySQL323
Trac: #377

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 1421 = hMailServer
Trac: #401

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 2410 = Cisco-ASA MD5
Trac: #365

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 4400 = md5(sha1($pass))
Trac: #198

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 4500 = Double SHA1
Trac: #390

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 4700 = sha1(md5($pass))
Trac: #198

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 4800 = MD5(Chap), iSCSI CHAP authentication
Trac: #214

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 6251 = TrueCrypt 5.0+ PBKDF2-HMAC-RipeMD160 + AES + hidden-volume
Trac: #378

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 6261 = TrueCrypt 5.0+ PBKDF2-HMAC-SHA512 + AES + hidden-volume
Trac: #378

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 6271 = TrueCrypt 5.0+ PBKDF2-HMAC-Whirlpool + AES + hidden-volume
Trac: #378

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 6281 = TrueCrypt 5.0+ PBKDF2-HMAC-RipeMD160 + AES + hidden-volume + boot-mode
Trac: #378

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 7300 = IPMI2 RAKP HMAC-SHA1
Trac: #233

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 7600 = Redmine Project Management Web App
Trac: #391

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 7700 = SAP CODVN B (BCODE)
Trac: #177

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 7800 = SAP CODVN F/G (PASSCODE)
Trac: #177

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 7900 = Drupal7
Trac: #326

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 8000 = Sybase ASE
Trac: #193

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 8100 = Citrix Netscaler
Trac: #369

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 8200 = 1Password, cloudkeychain
Trac: #126

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 8300 = DNSSEC (NSEC3)
Trac: #387

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 8400 = WBB3, Woltlab Burning Board 3
Trac: #181

Type: Feature
File: Kernel
Desc: Added support for algorithm -m 8500 = RACF
Trac: #192

Type: Feature
File: Kernel
Desc: Added support for $2y$ and $2a$ bcrypt signatures
Trac: #251

Type: Feature
File: Kernel
Desc: Added support for higher cost factors for -m 400 = phpass
Trac: #280

Type: Feature
File: Kernels
Desc: Increased support for username length up to 20 for -m 1100 = Domain Cached Credentials, mscash
Trac: #379

Type: Feature
File: Kernels
Desc: Increased support for username length up to 20 for -m 2100 = Domain Cached Credentials2, mscash2
Trac: #379

Type: Feature
File: Kernels
Desc: Added support for cracking mixed WPA and WPA2 at once; no more need to split
Trac: #388

Type: Feature
File: Host
Desc: Added support for Tesla Deployment Kit v5.319.85

Type: Feature
File: Host
Desc: Added parameter --workload-profile to give the user a convenient way to set the reduced, default or tuned performance tuning options

Type: Feature
File: Host
Desc: Added parameter -s for use in distributed computing, mark skip of range of keyspace

Type: Feature
File: Host
Desc: Added parameter -l for use in distributed computing, mark length of range of keyspace

Type: Feature
File: Host
Desc: Added parameter --keyspace for use in distributed computing, calculate keyspace

Type: Feature
File: Host
Desc: Load already cracked hashes from potfile on startup to avoid double cracking

Type: Feature
File: Host
Desc: Added inline induction directory that can be used for on-the-fly adding of new password candidates

Type: Feature
File: Host
Desc: Added switch --loopback to automatically write cracked plains into a file in the induction directory

Type: Feature
File: Host
Desc: Added debugging support for rules as in hashcat CPU; used for rule- and dictionary efficiency analysis

Type: Feature
File: Host
Desc: Added parameter --debug-mode and --debug-file to write found plains and/or rules as in hashcat CPU

Type: Feature
File: Host
Desc: Added --debug-mode 4 == original_plain:rule:modified_plain
Trac: #317

Type: Feature
File: Host
Desc: Added tweaks for AMD OverDrive 6 (powercontrol, core- and mem-clock profiles)

Type: Feature
File: Host
Desc: Added switch --powertune-disable to allow users to disable automatic power tuning for AMD OverDrive 6

Type: Feature
File: Host
Desc: Added --induction-dir to allow the users to specify the folder which will be used instead of the default induct folder

Type: Feature
File: Host
Desc: Added --outfile-check-dir to allow the users to specify the folder which should be monitored for cracked hashes

Type: Feature
File: Host
Desc: Added --outfile-check-timer to allow the users to control the outfile/potfile reading frequency (0 = disabled)

Type: Feature
File: Host
Desc: Added periodic outfile reading such that users can remove hashes while cracking by appending the hash[:salt]:plain to the file

Type: Feature
File: Host
Desc: Added support for automatic detection for hashfile-formats like pwdump, passwd, shadow, etc.
Trac: #393

Type: Feature
File: Host
Desc: Undo fan speed changes by oclHashcat after stopping/aborting
Trac: #238

Type: Feature
File: Host
Desc: Added support for loading $HEX[...] format from dictionaries

Type: Feature
File: Host
Desc: Added switch --outfile-autohex-disable to disable $HEX[...] format

Type: Feature
File: Host
Desc: Added switch --hex-wordlist to enable parsing words in wordlists given in hex

Type: Feature
File: Host
Desc: Increased maximum bitmap size to 24 bits to speed up cracking of huge hashlists at once

Type: Feature
File: Host
Desc: Added parameter --bitmap-max to help load huge hashlists on GPUs with small RAM

Type: Feature
File: Host
Desc: Added parameter --weak-hash-threshold to set a maximum number of salts for which weak hashes should be checked

Type: Feature
File: Host
Desc: Added parameter --remove-timer to set the frequency the hash-file should be updated when using --remove

Type: Feature
File: Host
Desc: Added bit for parameter --outfile-format to print the position of a candidate that cracked a hash

Type: Feature
File: Host
Desc: Added parameter --status-automat to let oclHashcat display the status view in a machine readable format
Trac: #406

Type: Feature
File: Host
Desc: Added column "Skipped" to the status display to show candidates skipped because of cracked salt(s)

Type: Feature
File: Host
Desc: Improved handling of signals and terminate events; SIGTERM support and windows cmd close handling
Trac: #143

Type: Feature
File: Host
Desc: Added ability to use restore files from previous versions in case the structure did not change

Type: Feature
File: Host
Desc: Set default retain and abort temperatures for AMD OverDrive6 GPUs according to the values reported by ADL
Trac: #225

Type: Feature
File: Host
Desc: Added parameter -v to display the version string (as -V does)
Trac: #252

Type: Feature
File: Host
Desc: Added support to load and save invalid salt characters used in descrypt
Trac: #269, #405

Type: Feature
File: Host
Desc: Added support for variable iteration number for -m 2100 = mscash2
Trac: #380

Type: Feature
File: Host
Desc: outfile-check and potfile remove (at startup) can now also be used together with hash mode 2500 = WPA/WPA2 and 6800 = Lastpass
Trac: #400

Type: Feature
File: Host
Desc: Added rules_optimizer standalone binary for use with debug-rules mode 3 output files

Type: Feature
File: Host
Desc: While parsing hashes on startup, inform the user about the progress

Type: Feature
File: Rules
Desc: Added InsidePro-HashManager.rule

Type: Feature
File: Rules
Desc: Added generated2.rule; each rule cracked a real hash, sorted by occurrence. Use head -XXXX to make a top XXXX
Cred: EvilMog

Type: Change
File: Rules
Desc: Renamed passwordspro.rule to InsidePro-PasswordsPro.rule

Type: Change
File: Host
Desc: Modified output plains to $HEX[...] format in case cracked password contains chars outside 0x20 - 0x80 ASCII range
Trac: #148

Type: Change
File: Host
Desc: Modified switch --potfile-disable to disable loading already cracked hashes from potfile on startup

Type: Change
File: Host
Desc: Save potfile and dicstat in the current working directory instead of installation directory
Trac: #281

Type: Change
File: Host
Desc: Change input hash format for -m 2100 = mscash2
Trac: #380

Type: Change
File: Host
Desc: Update tab completion for bash (in extra folder) to match up with new parameters

Type: Change
File: Host
Desc: Renamed switch --disable-potfile to --potfile-disable to match up parameter logic

Type: Change
File: Host
Desc: Renamed switch --disable-restore to --restore-disable to match up parameter logic

Type: Change
File: Docs
Desc: Help and docs update to underline that OSX 10.9 uses same format as 10.8
Trac: #236

Type: Change
File: Docs
Desc: Help and docs update to underline that MSSQL(2014) uses same format as MSSQL(2012)

Type: Change
File: Docs
Desc: Removed examples.txt; see wiki for more information
Trac: #236

Type: Change
File: Host
Desc: Renamed hash type Joomla to 'Joomla < 2.5.18'; -m 400 now also carries the note about MD5(Joomla)
Trac: #402

Type: Bug
File: Kernel
Desc: Raw whirlpool -m 6100 hashes could not be cracked in -a 1 combinator mode

Type: Bug
File: Host
Desc: If increment and masks were used in combination, status display needs reset to INIT after each iteration

Type: Bug
File: Host
Desc: In attack-mode 1 and 7, if at least one word in the right wordlist is exactly of length 31, memory corruption occurred over time

Type: Bug
File: Host
Desc: Status timer should be enabled by default when in stdin mode
Trac: #218

Type: Bug
File: Host
Desc: Improved reading of fan speed and temperature; It sometimes failed when using twin GPUs on windows
Trac: #231

Type: Bug
File: Host
Desc: File handling ('Permission denied' error) fixed when using --remove with -m 2500
Trac: #395

Type: Distribution
File: Packages
Desc: Created two packages for download: oclHashcat-* for AMD, cudaHashcat-* for CUDA

One last thing: with this update, you'll be able to load pwdump, passwd and shadow files unmodified. If you want other native formats added, please update this ticket: https://hashcat.net/trac/ticket/393

--
atom
#2
I am completely and utterly speechless. WOW. AMAZING.

You should do a 2-3 hour training session in Las Vegas to explain and demonstrate all the new features. :-)
#3
Absolutely fantastic !

Thank you very much ! Smile
#4
Thanks! Smile With so many new features added, it's quite likely that some bugs were introduced. But the beta testers and I did our best to find them before release. However, there's no way to check all functions/features in combination. Let's hope for the best.. Smile
#5
Awesome work dude. Sorry I have not been around, as I've been traveling again! Will get the GUI updated...
#6
Many thanks for this release :-)

Initial testing on the v14.4 drivers is running my twin 290x cards 5-8c hotter than usual, even when I downclock them to 925mhz core instead of the usual 1030mhz. So in turn I'm hitting the 90c limit in hashcat with a straightforward dict+rules run after 3-4 mins, even on the fastest algos like md5.

With v13.12 + od6config @1030, the temperatures never reached 90c under the same conditions, so I'm a bit bummed at that with amd.

I'd like to know how other 290x users fare with it.
#7
Amazing work, Atom!

Asap i'll try new features in my rig... mobo burned....

Congratulations!

best regards!
#8
(04-27-2014, 12:50 AM)Milzo Wrote: Many thanks for this release :-)

Initial testing on the v14.4 drivers is running my twin 290x cards 5-8c hotter than usual, even when I downclock them to 925mhz core instead of the usual 1030mhz. So in turn I'm hitting the 90c limit in hashcat with a straightforward dict+rules run after 3-4 mins, even on the fastest algos like md5.

With v13.12 + od6config @1030, the temperatures never reached 90c under the same conditions, so I'm a bit bummed at that with amd.

I'd like to know how other 290x users fare with it.

I don't know if you were having the same problem I was, but the fan speed was initially way lower than usual running oclHashcat 1.20 on my 7970s. One or two of my cards would end up hitting like 90c, and the fan would go nuts trying to bring the temperature down, but then spin back down to like 30 percent once the temperature dropped a bit. The end result was the fan spinning up and down every minute or so trying to keep the GPU under 90c.

I set the gpu-temp-retain option to like 75c and it seems to have stabilized for me.
#9
Fantastic work, gentlemen - can't wait to try this out.

I especially appreciate the changes with the restore file.
#10
Fantastic update with a lot of new features, many many thanks!!!!