hashcat Forum
Performance Loss Numbers - Printable Version

+- hashcat Forum (https://hashcat.net/forum)
+-- Forum: Developer (https://hashcat.net/forum/forum-39.html)
+--- Forum: Beta Tester (https://hashcat.net/forum/forum-31.html)
+--- Thread: Performance Loss Numbers (/thread-2352.html)



Performance Loss Numbers - atom - 06-10-2013

So I've got the numbers people were waiting for.

Good news first, the supported slow hashes:
  • phpass, MD5(Wordpress), MD5(phpBB3)
  • md5crypt, MD5(Unix), FreeBSD MD5, Cisco-IOS MD5
  • md5apr1, MD5(APR), Apache MD5
  • sha512crypt, SHA512(Unix)
  • Domain Cached Credentials2, mscash2
  • WPA/WPA2
  • bcrypt, Blowfish(OpenBSD)
  • Password Safe SHA-256
  • TrueCrypt
  • 1Password
  • Lastpass

They will -not- drop in speed at all.

More good news: thanks to the new caching model, GPUs no longer require huge dictionaries or large amplifiers to be fully utilized, and that applies to all hashes.

Also, host memory requirements are less than before.

Bad news: the fast hashes like raw MD5 will drop. But that was to be expected, and I said so a couple of times.

Multi hash (500k hashes):

Brute-Force++/Mask: 2091 -> 1969 = 5.8%
Hybrid/Combinator: 1960 -> 1642 = 16.2%
Straight/Rules: 1081 -> 723 = 33.1%
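The percentages above are plain relative drops from the old speed to the new one (the post doesn't give units for the raw figures). A quick sanity check of the arithmetic, using the numbers as posted:

```python
def loss_pct(before, after):
    """Relative speed loss in percent, rounded to one decimal place."""
    return round((before - after) / before * 100, 1)

print(loss_pct(2091, 1969))  # Brute-Force++/Mask
print(loss_pct(1960, 1642))  # Hybrid/Combinator
print(loss_pct(1081, 723))   # Straight/Rules
```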

The single-hash numbers are a bit higher, but when we talk about fast hashes I think we're primarily talking about multi-hash cracking.

I've been working hard for the last week to keep the loss as low as possible.

Just one thing to note: there can be no "old" mode that sticks to the high speeds for passwords shorter than 15 characters, as many people have suggested.

The changes I made are too deep; the old and new code paths are no longer compatible, and I don't want to maintain two different tools.

I'm still experimenting with this new model; maybe there is something I can do that I haven't noticed yet.


RE: Performance Loss Numbers - thorsheim - 06-10-2013

Acceptable performance losses, considering the *massive* cries for len15+ support. :-)
Just buy more GPUs, period.


RE: Performance Loss Numbers - KT819GM - 06-10-2013

Also, keeping in mind that one of the most wanted hash algorithms for 15+ chars was WPA/WPA2, we have no loss there at all :) Very good news indeed; thank you atom for your hard work.


RE: Performance Loss Numbers - craiu - 06-10-2013

Thanks atom, that's awesome news!


RE: Performance Loss Numbers - atom - 06-10-2013

I just found a way to reduce the loss for rule-based cracking from 33% to 24% :)


RE: Performance Loss Numbers - philsmd - 06-10-2013

Awesome news, thx atom!!!


RE: Performance Loss Numbers - atom - 06-11-2013

One of the things causing the loss is the high number of rules available in the engine.

For example, there are rules like:
  • RULE_OP_MANGLE_SWITCH_FIRST
  • RULE_OP_MANGLE_SWITCH_LAST
  • RULE_OP_MANGLE_SWITCH_AT
  • RULE_OP_MANGLE_CHR_SHIFTL
  • RULE_OP_MANGLE_CHR_SHIFTR
  • RULE_OP_MANGLE_CHR_INCR
  • RULE_OP_MANGLE_CHR_DECR
  • RULE_OP_MANGLE_REPLACE_NP1
  • RULE_OP_MANGLE_REPLACE_NM1
  • RULE_OP_MANGLE_DUPEBLOCK_FIRST
  • RULE_OP_MANGLE_DUPEBLOCK_LAST
  • RULE_OP_MANGLE_TITLE

See https://hashcat.net/wiki/doku.php?id=rule_based_attack for details on what they do.

Those are supported only by hashcat; JtR and PWP do not support them. They are relatively inefficient in comparison to the standard ruleset. If I dropped them from the list of supported rules, the performance loss would be "only" around 13%. That is simply because the rule engine would no longer have to check whether the user wanted to execute them.
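For readers unfamiliar with these operations, here is a rough Python sketch of what a few of them do, going by their descriptions on the wiki page linked above. The function names are just illustrative; hashcat itself implements these as C/OpenCL kernel code, which is exactly why every extra rule op adds dispatch overhead there:

```python
def switch_first(word):          # RULE_OP_MANGLE_SWITCH_FIRST
    """Swap the first two characters."""
    return word[1] + word[0] + word[2:] if len(word) > 1 else word

def switch_last(word):           # RULE_OP_MANGLE_SWITCH_LAST
    """Swap the last two characters."""
    return word[:-2] + word[-1] + word[-2] if len(word) > 1 else word

def chr_incr(word, n):           # RULE_OP_MANGLE_CHR_INCR
    """Increment the character at position n by one ASCII value."""
    if n >= len(word):
        return word
    return word[:n] + chr(ord(word[n]) + 1) + word[n + 1:]

def title(word):                 # RULE_OP_MANGLE_TITLE
    """Lowercase the whole word, then uppercase the first letter
    of each space-separated part."""
    return ' '.join(p.capitalize() for p in word.lower().split(' '))
```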

Hard decision! Help me!


RE: Performance Loss Numbers - epixoip - 06-12-2013

i vote for flexibility over speed. speed is great, but having a more powerful and flexible engine gives you more control and more creativity, and can lead to finding more plains faster. and that's all that counts.


RE: Performance Loss Numbers - Rolf - 06-12-2013

My vote is to keep those rules.


RE: Performance Loss Numbers - atom - 06-18-2013

OK, latest version b31 is up:

- The kernels 0, 400, 500, 1600, 1800, 2500 and 6300 have been fully ported

- The kernels for NVidia are available as well

- The speed-o-meter now hits full speed nearly instantly. It was a bug in older versions that it did not; there is no speed loss involved here

- For Brute-Force, I was able to eliminate the loss of speed completely, for both single hash and multihash! I did that by internally switching to an "old-style" kernel for masks shorter than length 16. If you switch to a longer mask, it automatically switches to a new-style kernel, which will be a bit slower. I'm not sure if I can do that for all hash types, but for the raw ones it will work

- For Hybrid/Combinator, I was able to optimize the combinator function itself to run at higher speeds. That reduces the loss from 16.2% to just 8.5%

- For Straight/Rules, I was also able to optimize some of the rule functions, which resulted in higher speed and compensates for part of the original loss. Oh, and I kept the rules from above; it's still the same feature set. The loss is reduced from 33.1% to 17.4%
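The mask-length dispatch atom describes for Brute-Force can be pictured as a simple selection step. This is a hypothetical sketch; the real selection happens inside hashcat's host code, and the kernel names here are made up:

```python
def pick_kernel(mask_length, max_old_style_len=15):
    """Select a kernel variant based on mask length: an 'old-style'
    kernel keeps the original full speed for masks shorter than 16
    characters; a 'new-style' kernel supports length 15+ but is a
    bit slower (as described for the raw hash types)."""
    if mask_length <= max_old_style_len:
        return "kernel_old_style"   # full speed, passwords < 16 chars
    return "kernel_new_style"       # longer passwords, slightly slower

print(pick_kernel(8))
print(pick_kernel(32))
```

With this scheme, users who never exceed a length-15 mask pay no penalty at all, which is why the brute-force loss disappears for both single hash and multihash.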

:)