Hashcat Development Report I
What's next in Hashcat development? Well, I am working on multiple battlefields :)

- I was working on a new attack method that is a mix of the Markov attack and a JtR-style increment mode. While doing this I found out that, using this algorithm, it is possible to "morph" substrings from one dictionary into another by creating rules with the "i" (insert) and "o" (overwrite) functions in combination with some statistics. I just added this morph rule generator to hashcat-utils v0.06, which will be released when I think it is worth it.
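To make the idea concrete, here is a minimal, hypothetical sketch (not the actual hashcat-utils implementation) of deriving rules from a word pair using only the "i" and "o" functions; the function names and the naive matching strategy are my own assumptions:

```python
# Hypothetical sketch: derive hashcat rules that "morph" one word into
# another using only the 'i' (insert) and 'o' (overwrite) rule functions.
# This is NOT the hashcat-utils morph generator, just an illustration.

def pos_char(n):
    """Encode a rule position: 0-9, then A-Z for positions 10-35."""
    return "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"[n]

def morph_rules(src, dst):
    """Return a rule string turning src into dst.

    Very naive: appends 'i' (insert) functions when dst is longer,
    then fixes every differing position with an 'o' (overwrite).
    Does not handle dst shorter than src (that would need deletes)."""
    rules = []
    # grow src with insert functions when dst is longer
    for n in range(len(src), len(dst)):
        rules.append("i" + pos_char(n) + dst[n])
    # fix remaining differences with overwrite functions
    for n in range(min(len(src), len(dst))):
        if src[n] != dst[n]:
            rules.append("o" + pos_char(n) + dst[n])
    return " ".join(rules) if rules else ":"

print(morph_rules("password", "passw0rd"))  # -> o50
```

The real generator additionally applies frequency statistics to decide which substring morphs are worth emitting as rules.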

- The CPU version of hashcat has not seen much development lately (just some bugfixes), but I bought myself an AMD Bulldozer CPU. When I am bored with GPGPU I will start adding some special XOP instructions to CPU hashcat, but there is also a small thought that says I should start adding CPU support to oclHashcat-plus. This way I could deprecate CPU hashcat, too, without losing any features. But maybe I will change my mind on this again. There is still the specific problem that CPU hashcat is extremely fast at quickly checking huge hashlists in simple dictionary mode, and I am not sure if it is possible to hold that speed with OpenCL. Also, I have no idea how I could port the table attack to GPGPU, because of the vector datatypes; it is simply incompatible. This, however, is no longer a problem with the new GCN architecture in the hd7xxx series, since it is a scalar architecture.

- The wiki pages are nearly complete. They have already proven to be a worthy investment of time, but the latest changes in oclHashcat-plus, for example, require updates to some wiki pages that still have to be done.

- The AMD APP SDK v2.6! The first impression was much better than with AMD APP SDK v2.5. The first thing I noticed was that they added a lot of new C++ code, which broke my offline compiler for the kernels. As a result I had to write a new offline compiler from scratch. Then I tested some long-awaited features. The first one was the mapping of bitselect() to BFI_INT. Well, it's still not done. OK, I don't care, I will continue to patch the binary kernel. Another thing I noticed is that they added support for the 7xxx series cards. The kernels compiled cleanly, but for some reason my binary patch for BFI_INT failed on them (changed opcodes?). So I did some more research and found out that bitselect() finally has been mapped to BFI_INT, but only on these cards. So I quickly added some macros for GCN and now fully support the mapped BFI_INT on GCN :) But SDK 2.6 also has some disadvantages. It looks like it produces slower code from the same codebase. But maybe I just have to rewrite some stuff to get the old speed back. We will see..
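For readers wondering why the bitselect/BFI_INT mapping matters: OpenCL's bitselect(a, b, c) picks each result bit from b where the mask bit in c is 1, and from a where it is 0. GCN's BFI_INT does that in a single instruction, whereas without the mapping the compiler emits a three-instruction and/andnot/or sequence. A small sketch of the semantics (the constant values are arbitrary examples):

```python
# Demonstrate the semantics of OpenCL bitselect() and why MD5 benefits
# from it being mapped to a single BFI_INT instruction on GCN.

MASK32 = 0xFFFFFFFF

def bitselect(a, b, c):
    # each result bit comes from b where c has a 1-bit, else from a
    return ((a & ~c) | (b & c)) & MASK32

def md5_f(x, y, z):
    # MD5's selection function F(x,y,z) = (x & y) | (~x & z)
    return ((x & y) | (~x & z)) & MASK32

# F(x,y,z) is exactly bitselect(z, y, x): one BFI_INT instead of 3 ops
x, y, z = 0x12345678, 0x9ABCDEF0, 0x0F0F0F0F
assert md5_f(x, y, z) == bitselect(z, y, x)
```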

- Catalyst 12.1 will be released soon. I am not sure if it will break all previous oclHashcat* versions; at least this specific preview driver did. That would be a nightmare! It also requires SDK 2.6, since it segfaults when trying to compile kernels with SDK 2.4. This means: if Catalyst 12.1 breaks the kernels -> I need to upgrade to SDK 2.6 -> which produces slower code. And there is nothing I can do about it :(

- Some days ago I found an optimized way to do the binary digest -> ASCII password candidate generation that is used in many PHP-based apps with their own hashing scheme, like vBulletin, IPB, MyBB, etc. Instead of using the binary result of the MD5 transformation, they convert it to ASCII hex and then transform that. This new optimized function is used in the -m 5 and -m 15 kernels and increased performance by ~5% on both AMD and NV, and in both oclHashcat-lite and oclHashcat-plus :)
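As an illustration of the digest-to-hex step the optimization targets: PHP's md5() returns a 32-character lowercase hex string by default, so these apps feed hex characters, not the 16 raw digest bytes, into the next MD5. A vBulletin-style scheme md5(md5($pass).$salt) might be modeled like this (a sketch for clarity, not the kernel code; the function names are mine):

```python
# Model of the PHP-style nested hashing that re-hashes the ASCII-hex
# form of the inner digest, as in vBulletin/IPB/MyBB-style schemes.
import hashlib

def md5_hex(data: bytes) -> str:
    """MD5 digest rendered as 32 lowercase ASCII hex chars,
    matching PHP's md5() default output."""
    return hashlib.md5(data).hexdigest()

def vbull_like(password: str, salt: str) -> str:
    inner = md5_hex(password.encode())            # 32 hex chars, not raw bytes
    return md5_hex(inner.encode() + salt.encode())

print(vbull_like("hashcat", "XY"))
```

On the GPU, the hot spot is converting the 16 digest bytes into those 32 hex characters inside the kernel, which is exactly the conversion that was optimized.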

- My latest experiment resulted in a ~2% improvement on nearly all algorithms on AMD cards. To make it short again: my hd5970 with stock clock settings now reaches 9950M/s on MD5 (before: 9780M/s). Just 50M/s more and it will finally break the 10B mark! Another nice result is NTLM, which has broken the 18000M mark (on the same system). Yay!
Atom. Making awesome software awesomer.

Seriously, thanks for letting us know what's happening. Feels good to know you're working. :)
A very interesting read!!

It's nice to hear about what's going on. I wonder if you would mind making a news report every month or so? :)

Don't spend too much time on a CPU-GPU combination like BarsWF. OK, some dual-Xeon guys with their 12/24/xxx cores would like it, but some more algos and some more hd7xxx optimizing is more important.
Again you show all hash-cracking guys how to make biz without waiting for much $$$$$$$$$$$$. Respect again.
ati6690: I think you got me wrong on this. The idea is not to do dual GPU + CPU cracking for more speed. Don't be afraid, I also think that's a bad idea. The idea was to deprecate CPU-based hashcat and replace it with an oclHashcat-plus that supports the CPU for cracking, just like we did with oclHashcat. The uber-goal is to have just one hashcat program, not 3 different versions with different features.

But this is not as easy as it sounds and requires time and a good migration plan. Also, it's not very high priority :)
The order in which you work on things is clearly your decision, but I was wondering if you would consider this.

How about making an effort to finish the hashcat utilities first? My reasoning is that they are useful for more than one purpose and everyone can use them.

Looking at the feature requests page, the requests for maskprocessor, for example, look to be well within your capabilities and probably won't take someone like you long to implement; you have probably already done them!! This would tidy up a few jobs in one go!

Also, it does seem an awful shame that the hashcat utilities are not promoted as much as your other software. They certainly deserve a webpage of their own!

I am really enjoying these news reports from the coder's side!! :D
I'm really interested to see what the morph dictionary utility does.

All the development on oclHashcat-* sounds great. I hope hashcat gets some attention too, though, because more Linux systems are defaulting to sha512-unix (Ubuntu, RedHat, etc.), so anything that can be done to speed that up would be excellent.

Thanks for all the fantastic work!