Combinator attack - hashcat internals
#1
Hello folks,

Based on some bcrypt cracking experiments comparing a combinator attack (with the same dictionary used as dict1 and dict2) against princeprocessor run on that same dictionary with "min/max elements in chains" = 2 (so it is basically the combinator attack done with pp, same keyspace), I got quite interesting numbers - the PRINCE attack is much faster than the combinator attack.


The PRINCE attack is performed the traditional way - "pp.bin | hashcat" - so the generation and cracking phases overlap; we don't need to wait for pp to write out a final big dictionary and only then run hashcat against it.
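For reference, the piped PRINCE run looked roughly like this - a sketch only, assuming the 64-bit princeprocessor binary name pp64.bin and the --elem-cnt-min/--elem-cnt-max options from its help output; adjust paths and file names to your setup:
Code:
./pp64.bin --elem-cnt-min=2 --elem-cnt-max=2 dict.txt | ./hashcat -m 3200 -a 0 -w 3 hash.txt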

BTW, is piping the candidates into hashcat the best approach here?

As the results for the combinator attack are worse, I am wondering how hashcat internally performs the combinator attack, and I would like to find an explanation for why the PRINCE attack is significantly faster here.

A) does it combine the dictionaries at the start and then distribute the "combined wordlist" to the GPU to test the candidate passwords?
B) does it combine passwords from both dictionaries on the CPU while it cracks them on the GPU at the same time (generation/cracking phases overlap)?
C) does it combine passwords directly on the GPU somehow?
D) ... ?
Reply
#2
What exactly are you comparing with?
Is it this command?
Code:
hashcat -m 3200 -a 1 -w 3 hash.txt dict1.txt dict2.txt

It also depends a lot on the sizes of the dicts and on the cost factor of the bcrypt hashes, e.g. $2a$05$... hashes vs $2a$12$... hashes etc.

you could also try to use combinator.bin from hashcat-utils (https://hashcat.net/wiki/doku.php?id=has...combinator) to kind of prove or rule out that your observation about prince is true:
Code:
./combinator.bin dict1.txt dict2.txt | ./hashcat -m 3200 -a 0 -w 3 hash.txt

(what I'm trying to say here is: if even this command with combinator.bin is faster, it just proves that prince is not really the "solution" to the problem here... the root of the problem could simply be that the GPUs only receive a small amount of input and therefore don't reach maximum acceleration... see link below)

also see https://hashcat.net/faq/morework , but the claims/strategies mentioned there are more relevant for fast hashes... therefore the cost factor and the dictionary size matter a lot.

It's also good to always state what you mean by "faster". Is it the H/s or the ETA (the overall time it takes)? For instance, just by adding rules on the right-hand side of the pipe you can often increase the speed by a lot, i.e. by giving the GPUs more work to do (better acceleration), but the more rules you add, the longer the attack takes.
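As a concrete sketch of "adding rules on the right-hand side of the pipe" (the rule file here is just an example from hashcat's bundled rules/ directory; any rule set works):
Code:
./combinator.bin dict1.txt dict2.txt | ./hashcat -m 3200 -a 0 -w 3 -r rules/best64.rule hash.txt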

BTW, just so we don't forget one important fact: hash types like scrypt and bcrypt normally run comparatively better on CPUs (especially with high cost factors)... these algorithms are designed to be difficult to accelerate with GPUs. So with high scrypt/bcrypt parameters you might also consider using a high-end CPU. There is one caveat to this rule, though, when considering high-cost-factor bcrypt hashes: you normally don't have more than 2 CPUs in one system, but it's very possible to have 8 GPUs in one system... such an 8-GPU system might be both more cost-effective and faster overall, even though you would normally prefer high-end CPUs for high-cost bcrypt hashes.
Reply