oclHashcat v1.20 - Printable Version
hashcat Forum (https://hashcat.net/forum) - Thread: oclHashcat v1.20 (/thread-3323.html)
oclHashcat v1.20 - atom - 04-26-2014

Download here: https://hashcat.net/oclhashcat/

ACHTUNG! You will need to take some time to go through all of the release notes, as there are megatons of new features. Don't worry, it's mostly just additions, so you won't have to relearn oclHashcat's syntax all over again. However, many of the new features require an explanation. You should know what they do, how they work, and how you can use them -- or at least, how we think you can use them.

Our goal whenever we add these types of features is to nurture your creativity. You are not forced to use these features in exactly the way we suggest. Actually, we hope that some of the new features get your neurons firing and inspire you with new ideas for designing more efficient attacks, or simply make the task more comfortable.

Added algorithms

Here's a quick overview of the newly-added hash types:
Also see Frank Dittrich's original writeup about the algorithm at http://www.revision-online.info/index.php/Datei:SAP_Passwort_Update.pdf It does a decent job of explaining the weaknesses, but it was written at a time when there was no GPGPU-based cracking -- or at least, not for this algorithm.

SAP-B passwords are limited to a keyspace of 69^8. With oclHashcat v1.20, a single R9 290x can crack a hash of this type at a rate of 850 MH/s (the hd7970 is at 560 MH/s). Therefore, 8 x R9 290x can crack -every- possible SAP-B password in max. 20 hours (69^8 is about 5.1 * 10^14 candidates, searched at a combined 6.8 GH/s). The worst part is that the reduced keyspace is not just a matter of uppercasing the password like LM does: it further replaces all characters outside the 0x20-0x80 ASCII range with 0xff. In other words, even if you use crazy keycodes in your password, it will be cracked in max. 20 hours. It's hopeless.

AMD Catalyst v14.x (Mantle) driver

The Mantle drivers have created some initial headaches for us. The main problem is that OpenCL binary kernels compiled for the previous stable 13.x Catalyst drivers are incompatible with binary kernels compiled for the Mantle drivers. So it's not our fault that you are forced to update to Catalyst 14.x. More annoyingly, the 14.x drivers are also required if you are running Linux kernel 3.13+, so we really don't have a choice, do we?

There is an upside to upgrading to the Mantle drivers, though. The OpenCL JIT compiler was updated to produce more optimized low-level instructions for the GPU, which we as developers have no access to when using OpenCL. This means that the JIT compiler is finally starting to become as optimized as our OpenCL kernels, which translates into a 23% performance gain for NTLM.

Improved distributed cracking support

There have been a lot of different third-party approaches to distributed cracking with oclHashcat. The basic idea is simple: as in all parallel computing environments, you need to find a way to distribute the load across a set of worker nodes. At this time, the following ideas have been developed:
What we added are just two parameters: -s and -l. If you are at all familiar with hashcat, then you already know these parameters, as hashcat CPU, maskprocessor and statsprocessor have had them for quite a while. They are very simple to use, and they are all you need to integrate oclHashcat into your favourite distribution system like boinc, or into your own solution.

The -s and -l parameters stand for "skip" and "limit", and allow you to define a range to search within your keyspace. Parameter -s sets the offset, and parameter -l sets the range length. Simply divide the keyspace by the number of nodes to find the range length, and increment the offset by the range length for each node.

Here's an example: say you have a 1000-word dictionary and four identical worker nodes. We divide the keyspace of 1000 by 4 nodes, and we get a range of 250. Your command line on each worker node will be as follows:

Code:
PC1: ./oclHashcat64.bin -s 0 -l 250 ... // computes 0 - 249

Now, this example only works well when all of the nodes are identical. But sometimes you have a heterogeneous mixture of devices, and not all nodes will be the same speed. Handling failures also complicates things: what do you do if a node suddenly drops off the network? And what if you want to add a new node while an attack is running? To handle these scenarios we must take a different approach. We know the total keyspace is 1000, but this time we won't divide it by 4, because we don't know precisely how many nodes we have. Instead, we can simply use a fixed range length for all nodes, and rely on the master node to keep track of the -s value. Then we can hand out work items to the nodes with a loop. Here's an example of this approach using a fixed range length of 100 (a fuller sketch of such a master-node loop follows at the end of this section).

Code:
long keyspace = 1000

This is a rudimentary and incomplete example, but it serves to demonstrate that those two parameters are all you need to distribute work, even in more complicated environments.

Now, in the previous examples, calculating the keyspace was simple because we were using a dictionary attack. For dictionary attacks, the keyspace is simply the number of words in the dictionary. But it is a bit more complicated to calculate the keyspace when dealing with more advanced attack modes. Therefore, we have added another parameter called --keyspace that will calculate the keyspace for any given attack. When using a mask attack, for example, you should use --keyspace instead of trying to calculate the keyspace yourself. Here's an example of how to use the --keyspace parameter:

Code:
atom@sf:~/oclHashcat-1.20$ ./oclHashcat64.bin some.hash -a 3 ?d?d?d?d?d?d?d?d?d --keyspace

Take a close look at the last two examples. Please make life easy on yourself by using --keyspace to calculate the keyspace for all of your distributed attacks.

With all that said, we had hoped that adding more distributed support would encourage people to build more third-party distributed wrappers for oclHashcat. Already, a few beta testers have started working on such solutions. Here's an example: http://www.youtube.com/watch?v=0K4mTG5jiR8
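In essence, such a wrapper is just a loop around -s and -l on the master node. Here is a minimal shell sketch of that idea -- not from the original post; run_on_free_node is a hypothetical helper standing in for whatever dispatch mechanism (ssh, a job queue, boinc) your setup actually uses:

Code:
#!/bin/bash
# Hand out fixed-size chunks of the keyspace to worker nodes.
# run_on_free_node is a hypothetical helper that runs the command on the
# next idle worker; replace it with ssh, a job queue, etc.
keyspace=1000   # obtain the real value with --keyspace
chunk=100       # fixed range length handed to each node
skip=0
while [ "$skip" -lt "$keyspace" ]; do
  run_on_free_node ./oclHashcat64.bin some.hash wordlist.txt -s "$skip" -l "$chunk"
  skip=$((skip + chunk))
done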
Added outfiles directory

Soon after beta testers realized that they were now able to distribute the workload, they came up with another problem: what about the results, the cracked hashes? Usually this doesn't matter if you are running a brute-force attack or running against an unsalted hashlist, but it's different when you have a salted hashlist.

If you have a hashlist with 100 salted hashes, the time to process a keyspace is 100 times longer than with a single salt. That should be clear, right? oclHashcat has the optimization, as every good hash cracker should, that if you crack all hashes bound to a specific salt, that salt is removed from the salt list and never checked again. But in a distributed environment, one node may have completed a specific salt by cracking all the hashes bound to it, while the other nodes do not know about that and still process that salt unnecessarily.

We had the same problem a while back with oclHashcat-lite. It already supported -s and -l, and people were writing distributed wrappers around it. They raised the same question: if one node cracked a hash (oclHashcat-lite was single hash), how do the other nodes know that they should stop working on it? This eventually boiled down to the following question: how do you inform a running oclHashcat session that a hash it is trying to crack has already been cracked by a different node?

After discussing this with beta testers, we came up with a very easy solution: just put the cracked hash into a file in a directory that we call "the outfile directory". oclHashcat periodically scans the outfile directory and reads all the files within it. For each file, and for each line in the file, it tries to match the entry against the internal hash table that tracks which hashes and which salts are cracked and which are not, and then marks the hash as cracked.

This is not required, but to automate the process completely, all you need is a shared directory such as NFS or CIFS to which all your distributed nodes can write. Point all your nodes to write into a file in that shared directory (protip: use a unique file for each node). Once a node cracks a hash, it writes it into its own outfile, and all other nodes are informed about it, since they periodically scan the same directory. There are some additional parameters to configure this behavior:
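For example, a minimal two-node setup over a shared mount might look like the following. This is a hedged sketch, not from the original post: the paths and hash-mode are placeholders, and --outfile-check-dir is assumed to be the parameter that points oclHashcat at the directory to scan (check --help for the exact option names and their defaults):

Code:
# Node 1 writes its cracks to its own file and watches the shared directory.
./oclHashcat64.bin -m 400 hashes.txt wordlist.txt -s 0 -l 500000 \
  -o /mnt/shared/outfiles/node1.out --outfile-check-dir /mnt/shared/outfiles

# Node 2 does the same with its own outfile and its own keyspace range.
./oclHashcat64.bin -m 400 hashes.txt wordlist.txt -s 500000 -l 500000 \
  -o /mnt/shared/outfiles/node2.out --outfile-check-dir /mnt/shared/outfiles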
Rewrote restore system from scratch

Sometimes oclHashcat is a bit pedantic. That was especially true when using --restore. It was so pedantic that I could barely use it myself. For example, restoring was only possible...
What we wanted was a more transparent, flexible, error-resistant and robust restore. With the new approach, you are no longer limited by the above points. There is, for instance, no more binding to the hardware or to the hashlist. But this new oclHashcat version goes much further. For example, you can now manually change the restore point. That means if you lost a .restore file for whatever reason, but you remember the position where it was, you can now set it manually. Also, the size of the .restore files is now guaranteed to stay small (somewhere < 2k).

Rewrote multihash structure

Not long ago, we announced that it was possible to load up to 25 million hashes at once. Of course, we were talking about unsalted hashes that can be cracked with multihash techniques, not salted ones. That was not bad, but now it's even better! In 1.20, you are able to load hashlists that contain up to 100 million hashes, and some beta testers have had success loading up to 150 million hashes. For those of you who think this is senseless, here's why we do it: cracking huge unsalted hashlists is a great way to build new wordlists based on real passwords people use, originating from real hashdumps leaked on the Internet. Check out the compilation that KoreLogic did once; I think it was around 150 million unique MD5 hashes.

To accomplish this, we had to move away from the previous technique, where we transferred the password candidates used to crack a hash from GPU memory to host memory. Because there is no way to communicate between workgroups in OpenCL (only workitems can communicate), we had to allocate the full set of password buffers on the GPU: the number of unique hashes multiplied by the size of one password buffer. As you can imagine, that took a lot of GPU memory that could not be used for real hashes. By using a different technique that does not depend on allocating all of those password buffers, we can now use this memory for hashes instead.

Another change speeds up the process of cracking huge hashlists, which is a very memory-intensive task: we increased the maximum bitmap size to 24. The bitmaps are what enable us to check for the possible nonexistence of a hash in a hashlist before going into the costly search function. By increasing the size of the bitmap buffer, the number of unwanted collisions decreases. This increases the overall efficiency of the bitmap system, which results in an increase in overall performance. These huge bitmaps can affect your ability to load huge hashlists, because they require a lot of GPU memory. Therefore, there is a new parameter called --bitmap-max. Usually you will never need it, but if you want to load a huge hashlist and oclHashcat reports that it was unable to load it because the memory limit was reached, try decreasing the value (for example to 16) to save some GPU memory.
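As a rough illustration -- a hedged sketch, not from the original post; the hash-mode, mask and file name are placeholders:

Code:
# Trade a few bitmap bits for GPU memory so a very large unsalted MD5 list fits.
./oclHashcat64.bin -m 0 huge_md5_list.txt -a 3 ?a?a?a?a?a?a?a --bitmap-max 16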
Added debugging support for rules

Most of you are already familiar with the debug parameters from hashcat CPU, and many of you wanted this feature in oclHashcat as well. Previously, it was not possible to implement it, but due to the architecture changes described above, it now is. There are a couple of new parameters to configure this feature:

This feature is primarily aimed at generating new rules, but it's also useful if you want to find out which words in your dictionaries are efficient, or which rules in your rulesets crack the most hashes. For this example, though, I'll focus only on the rule generator:

##
## 1. Crack some hashes with randomly-generated rules and a small wordlist
##

Quote:

##
## 2. The above example just demonstrates the usage; usually you would use --debug-file, which would then contain the following information instead:
##

Quote:

##
## 3. Optimize the rules with the new rule optimizer:
##

Quote:

What this did was remove the "oAL" function, since it wasn't necessary; as a result, the sort -u packing rate increases. The new rule optimizer is a standalone binary for use with debug-rules mode 3 output files, and can be found in the extra/ directory.

Over the last few days, I was running oclHashcat with the -g parameter in an endless loop, always with around 10k generated rules. In total, I collected around 50k new rules, and each of them cracked at least one new hash. Then I re-ran those 50k rules against my full dictionaries, and it had a great effect.

After a few days of letting this run in a loop, the beta testers collected a list of 600k new rules. Can you imagine that? 600k new rules, each of which actually cracked a previously-uncracked hash. We thought this was really cool, and we wanted to share it. We ran it through the optimizer and sorted by occurrence to have the best rules on top. We then removed all rules that did not crack at least -two- unique hashes, and the result is a list of 64k new rules sorted by occurrence. That file was named generated2.rule and added to the rules/ directory. Have fun!
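Pulled together, a single iteration of that loop might look like the following. This is a hedged sketch, not from the original post: file names are placeholders, -g and --debug-file are as described above, and --debug-mode is assumed to be the mode selector, as in hashcat CPU:

Code:
# Generate ~10k random rules and, for every crack, record the original word
# together with the rule that produced it (mode 3 output, which is what the
# rule optimizer in extra/ expects).
./oclHashcat64.bin -m 0 hashes.txt small_wordlist.txt -g 10000 \
  --debug-mode 3 --debug-file found.debug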
Added support for $HEX[]

This addition basically goes back to the following trac ticket: https://hashcat.net/trac/ticket/148 The problem is with character encodings for various languages. To be completely honest, I really don't like this topic. There are many different encoding types, many languages, and many characters. What you need to know when it comes to encodings and hashes is that most, if not all, algorithms do not care about encoding at all. Hash algorithms just work on bytes. That means if you input a password that contains, for example, a German umlaut, this can result in multiple different hashes for the same unsalted algorithm: there are three different hashes depending on whether you used ISO-8859-1, UTF-8 or UTF-16.

We often have to deal with hashlists of unknown encoding. Therefore, the output encoding (in the shell or in the outfile) might not match the configured encoding of our shell or our editor. The result is weird characters, and users get confused. The worst case is when hashlists contain mixed encodings, because the systems that generated the hashes had different encoding settings. That is something that makes our case unique, and it is why we cannot simply output all plaintexts as UTF-8.

Then there is more drama. There are hashes that have been put into hashlist compilations by highly intelligent individuals who force a hash of a completely different hash-type through a submission mask. For example, the mask expects raw MD5 but they have a salted MD5, so they simply remove the salt and force the system to accept it that way. In combination with that, the problem is that some admins simply use \n, \r or even null bytes as the salt.

But then, when oclHashcat is configured to automatically generate random rules, it can happen that the + or - function cracks those \n-salted hashes, which leads to a completely different problem. The solution is as the trac ticket suggests: if the plaintext password contains at least one character outside the 0x20 - 0x80 ASCII range, we automatically switch the output format to $HEX[...] entirely. That is a bit like UTF-8, except we're not just converting the next character: we put the whole word into hex mode. Doing this, we work around problems with:

Also note that we've added support for reading $HEX[...] encoded words from your wordlist. That means if you cracked some password that was converted to $HEX[...] and you then merge that password into your wordlists, you don't have to worry about it. oclHashcat identifies $HEX[...] encoding while reading wordlists and automatically converts the entries back to what the words originally were.
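To give an idea of what such an entry looks like in practice -- a hedged sketch, not from the original post; the example word is made up:

Code:
# A cracked plain reported as $HEX[70c3a4737377c3b67274] is just the raw bytes
# in hex; xxd can decode it. Here the bytes happen to be UTF-8 for "pässwört".
echo 70c3a4737377c3b67274 | xxd -r -p; echo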
Added tweaks for AMD OverDrive 6 and better fan speed control

This version of oclHashcat includes several changes to better support new AMD GPUs, i.e. OverDrive 6 enabled graphics cards. These new features range from simple detection of OverDrive 6 GPUs to better memory clock, core clock, powertune and fan speed control. Since OverDrive 6 GPUs behave very differently from previous AMD GPUs when it comes to performance tuning (the powertune threshold and many other tuning settings need to be set to reach maximum performance), many of you may have used epixoip's od6config tool during the last months, e.g. for R9 290x graphics cards. Therefore, we decided that oclHashcat should include some basic tuning support, so that new users, for instance, don't always need to run od6config before running oclHashcat on those cards. Basically, this new version sets the core clock, memory clock and powertune threshold to reasonable values. The changes oclHashcat makes are always undone after oclHashcat quits, so you won't need to bother with all those tuning options and with resetting them afterwards (because maybe you want to save electricity).

We also added a new switch called --powertune-disable. If this switch is set, oclHashcat will skip all OverDrive 6 performance tuning steps. You can use it if you want to manually apply different performance tuning options (e.g. with od6config) beforehand. We added all these powertuning changes to make things more convenient for the user, and to avoid users being shocked by the low performance of OverDrive 6 cards when the performance options were not manually set.

While making these changes, we discovered some problems with fan speed control and tried to improve that feature considerably. For instance, as mentioned in https://hashcat.net/trac/ticket/238 , with previous versions it could happen that oclHashcat exited without resetting the fan speed to a reasonable value (i.e. either the speed it was at before the run, or the default value managed by the driver). For multi-GPU setups, we identified and fixed another strange behaviour in previous versions of oclHashcat: sometimes the fan speed showed N/A even though it should have shown the current fan speed as a percentage. The cause of this unexpected behaviour was querying the wrong device within oclHashcat (read more about it here: https://hashcat.net/trac/ticket/231 ). As you can read there, the temperature value was also not accurate in some specific situations (multi-GPU, Windows, and not all GPUs set to "active").
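If you prefer to keep tuning in your own hands, a run might look like this -- a minimal sketch, not from the original post; the hash-mode, mask and file name are just placeholders:

Code:
# Clocks/powertune were already set externally (e.g. with od6config), so tell
# oclHashcat to skip its own OverDrive 6 tuning steps.
./oclHashcat64.bin -m 1000 ntlm_hashes.txt -a 3 ?a?a?a?a?a?a?a --powertune-disable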
Adding new password candidates on-the-fly

The idea of supporting a way to add new password candidates (e.g. dictionary words) on-the-fly goes back to a different request that asked for a so-called loopback feature. Let me first explain what that loopback feature is. The loopback feature only makes sense in straight mode with rules. Whenever oclHashcat cracks a hash, the matching plain is re-queued to run through the rule engine. So, when does this make sense? Here's an example hashlist:

Quote:7c6a180b36896a0a8c02787eeafb0e4c ...

and we have the following wordlist with just a single word:

Quote:password

... and a simple rule that appends a 1 to each word from the wordlist:

Quote:$1

When I run this, it will crack one of the above hashes:

Quote:7c6a180b36896a0a8c02787eeafb0e4c:password1

Now, with the loopback feature enabled, it will take "password1" as a new candidate and apply the rule $1 again. It will now crack:

Quote:1e5c2776cf544e213c3d279c40719643:password11

This goes on and on until no new hash is cracked, and therefore no new password is re-added to the queue. Where is this useful in real life? For example, when cracking millions of hashes at once to build your dictionaries. If you run it with many rules, chances are good that it will automatically detect a pattern in that hashlist.

Now we can go back to adding password candidates on-the-fly. When we thought about how to implement that request, we came up with the idea of the induction directory. This directory can be set with the new parameter "--induction-dir", or you can skip specifying it and oclHashcat will define it as $session.induct. oclHashcat will create that directory for you automatically (and remove it afterwards). While oclHashcat is running, you can put files into that directory, and they will be scanned by oclHashcat as soon as the current dictionary finishes.

Rewrote weak-hash check

This feature goes back to the following trac ticket: https://hashcat.net/trac/ticket/165 Note that our implementation is not exactly what was requested in the ticket. I'll explain: the goal of this feature is to notice if there is a hash whose plaintext is empty, meaning a 0-length password -- typically when someone just hit enter. We call it a weak-hash check, even though it should really have been called a weak-password check, but there are simply too many weak passwords.

Previous versions did support this, but only for unsalted hashes. That was easy to implement, because for unsalted hashes the 0-length password always results in the same hash. By simply checking for that hash, it was possible to find out whether it was used. Things get more complicated when a salt is involved: we actually have to run the kernel and compute the 0-length password result with exactly that salt. That wasn't easy, because oclHashcat has different attack modes, and depending on which attack mode you choose, a different kernel is loaded. The attack parameters change, so we have to create a different 0-length password attack for each attack mode a user can choose. But that's not all: there are also many differences depending on whether special parameters are set, and between slow hashes and fast hashes. Those were the problems to solve just to get this working, but that's done now -- no more headaches with this.

The next problem, however, arises if your hashlist contains millions of salts. As explained above, we have to run a kernel for each salt. If you want to check for empty passwords across that many salts, you will get a very long initialization/startup time when running oclHashcat. To work around this problem, we added a parameter called "--weak-hash-threshold". With it, you can set the maximum number of salts for which weak hashes should be checked on start. The default is 100, which means that if you use a hashlist with 101 unique salts, it will not attempt the weak-hash check at all. Note that we are talking about unique salts, not unique hashes. Cracking unsalted hashes results in 1 unique salt (an empty one). That also means that if you set it to 0, you disable the check completely, even for unsalted hashes.
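As a quick sketch of the extreme case -- not from the original post; the hash-mode and file names are placeholders:

Code:
# A heavily salted list: skip the start-up weak-hash check entirely to avoid
# running one kernel per salt during initialization.
./oclHashcat64.bin -m 400 phpass_hashes.txt wordlist.txt --weak-hash-threshold 0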
Reload previously-cracked hashes from potfile

With this feature added, oclHashcat reads the potfile every time it starts and compares the content of the .pot file (the cracked hashes) with the hashes from the hashlist it is trying to crack. This is something that is present in JtR, and JtR users will already know how it works, but we've added it for a different reason. When we rewrote the restore feature, we had the problem that, in case of a restore, oclHashcat did not know which hashes were already cracked in the previous run. Unless you use --remove, which automatically removes all cracked hashes from your hashlist in real time, it would start cracking the same hashes again, depending on your attack type. There's just one solution: you need to keep track of the hashes that have already been cracked, and compare them against the hashlist on every start.

This is typically a very fast process, but if you have a lot of entries in your potfile, it can take some time. However, it is safe to remove the potfile if you don't need it any longer. The potfile name is $session.potfile. If you don't want to remove the potfile, you can also skip the loading delay by disabling this new feature completely with the "--potfile-disable" flag. But note that this also disables writing to it. If you crack a hash, that will create confusion when you later want to restore a session, so make sure you know what you are doing. The way this feature compares and finds hashes is basically the same as when reading files from the outfile directories.

Full Changeset

Quote:

One last thing: with this update you'll be able to load pwdump, passwd and shadow files unmodified. If you want other native formats added, please update this ticket: https://hashcat.net/trac/ticket/393

-- atom

RE: oclHashcat v1.20 - thorsheim - 04-26-2014

I am completely and utterly speechless. WOW. AMAZING. You should do a 2-3 hour training session to explain and demonstrate all the new features in Las Vegas. :-)

RE: oclHashcat v1.20 - Hash-IT - 04-26-2014

Absolutely fantastic! Thank you very much!

RE: oclHashcat v1.20 - atom - 04-26-2014

Thanks! With so many new features added, it's highly possible that some bugs were introduced. But the beta testers and I did our best to find them before release. However, there's no way to check all functions/features in combination. Let's hope for the best..

RE: oclHashcat v1.20 - blandyuk - 04-26-2014

Awesome work dude. Sorry I have not been around, as I've been traveling again! Will get the GUI updated...

RE: oclHashcat v1.20 - Milzo - 04-27-2014

Many thanks for this release :-) Initial testing on the v14.4 drivers is running my twin 290x cards 5-8C hotter, even when I downclock them to 925MHz core from the usual 1030MHz. So in turn I'm hitting the 90C limit in hashcat with a straightforward dict+rules run, even on the fastest algos like MD5, after 3-4 mins. With v13.12 + od6config @1030, the temperatures never reached 90C under the same conditions, so I'm a bit bummed at that with AMD. I'd like to know how other 290x users fare with it.

RE: oclHashcat v1.20 - stratomarco - 04-27-2014

Amazing work, Atom! ASAP I'll try the new features in my rig... mobo burned.... Congratulations! Best regards!

RE: oclHashcat v1.20 - jsp5107 - 04-27-2014

(04-27-2014, 12:50 AM)Milzo Wrote: Many thanks for this release :-)

I don't know if you were having the same problem I was, but the fan speed was initially way lower than usual running oclHashcat 1.20 on my 7970s.
One or two of my cards would end up hitting like 90C, and the fan would go nuts trying to bring the temperature down, but then spin back down to like 30 percent once the temperature dropped a bit. The end result was the fan spinning up and down every minute or so, trying to keep the GPU under 90C. I set the gpu-temp-retain option to like 75C and it seems to have stabilized for me.

RE: oclHashcat v1.20 - davejcb - 04-27-2014

Fantastic work, gentlemen -- can't wait to try this out. I especially appreciate the changes to the restore file.

RE: oclHashcat v1.20 - parco - 04-27-2014

Fantastic update with a lot of new features, many many thanks!!!!