hashcat Forum
wordlist + rules VS large wordlist - Printable Version

+- hashcat Forum (https://hashcat.net/forum)
+-- Forum: Misc (https://hashcat.net/forum/forum-15.html)
+--- Forum: General Talk (https://hashcat.net/forum/forum-33.html)
+--- Thread: wordlist + rules VS large wordlist (/thread-9819.html)



wordlist + rules VS large wordlist - dupazonk - 01-25-2021

Hi guys

I am new to hashcat and decided to do a small test.
I checked whether I would get the same results from 'wordlist + rules' as from a large wordlist generated from that same 'wordlist + rules' combination.

Here is what I used and what does not add up:
hash - file with 50 million ntlm hashes
word - wordlist file with 257823994 lines
rule - file with 250 rules
words-stdout - file with 64198174506 lines, containing every candidate produced by --stdout from 'word + rule'
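Side note, derived only from the two counts above (not from any hashcat output): the keyspace divides the word count evenly, but the quotient is 249 rather than 250, which suggests hashcat rejected one of the 250 rules as invalid or unsupported and applied only 249 per word.

```shell
# 64198174506 candidates / 257823994 base words = rules actually applied
echo $((64198174506 / 257823994))   # → 249, not 250
```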

Test #1
word + rule = 64198174506 combinations and 37.75% recovered

Test #2
words-stdout = 64198174506 combinations and 43.40% recovered

The attack commands were:
#1 hashcat -a 0 -m 1000 w 3 --potfile-disable hash.txt word.txt -r rule.txt -o output.txt
#2 hashcat -a 0 -m 1000  w 3--potfile-disable hash.txt word.txt -o output.txt

I ran both tests twice and got the same results every time.
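Since both runs cover the same keyspace, comparing the two -o output files directly shows exactly which cracks differ. A minimal sketch with toy stand-in files (out1.txt/out2.txt are hypothetical; in practice they would be the output.txt from each attack, one hash:plain pair per line):

```shell
# Toy stand-ins for the two -o files from the attacks
printf 'aaa:pass1\nbbb:pass2\n' > out1.txt
printf 'aaa:pass1\nccc:pass3\n' > out2.txt

# comm needs sorted input; -3 suppresses common lines, leaving only
# entries cracked by one attack but not the other
sort out1.txt > out1.sorted
sort out2.txt > out2.sorted
comm -3 out1.sorted out2.sorted
```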

Can anyone explain why there is a difference?

Thanks


RE: wordlist + rules VS large wordlist - Etienereum - 01-26-2021

Hi @dupazonk,

This is great, but you did not describe your hardware setup.


RE: wordlist + rules VS large wordlist - dupazonk - 01-26-2021

2x 6 core CPU, 64GB RAM, 1x RX 580 8GB

I also ran Test #1 ('word + rule') on another system with a 16-core Ryzen Threadripper, 32GB RAM and 4x 2080 Ti.
Got the same keyspace of 64198174506, but 50.54% recovered.

How is that possible?


RE: wordlist + rules VS large wordlist - the_charm - 01-26-2021

Hi.

Hm... I don't see where Etienereum is going with this...
This should work correctly no matter the hardware.

Well, dupazonk, this definitely sounds weird.
For now I just noticed you didn't post the actual commands you used.
(The ones you posted contain a number of errors.)
Posting the real commands could help clarify the situation.
If you are paranoid, you can ofc remove the paths and replace the filenames with generic ones,
but then please make sure the same file gets the same generic name in all commands and different files get different names.
(For example, in your first post it looks like you used the same wordlist for both attacks.)
Also -totally unrelated and just for the sake of being a smart ass- your threadripper/4x2080ti rig is lacking RAM.
4x2080ti, that's 44GB of VRAM so you probably should have 64GB of host RAM.


RE: wordlist + rules VS large wordlist - dupazonk - 01-27-2021

Thank you for the answer.

Yes, I made a mistake with the command in Test #2.

If there are other errors, could you explain what is wrong?

The rig with 4x 2080 Ti is a rented GPU cloud solution - thanks for the info about RAM > VRAM.

Let's start again.

The same files have been used in both tests:
ntlm-50m.txt - 50 million ntlm hashes
250.rule - file with 250 rules
words.txt - file with passwords
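Before comparing runs across machines, it is worth confirming the input files are byte-identical everywhere; a checksum plus a line count catches truncated or re-encoded copies. A toy sketch (words-toy.txt is a hypothetical stand-in; on the real systems the same commands would be run on words.txt, 250.rule and ntlm-50m.txt and the sums compared across machines):

```shell
# Toy stand-in file; compare the printed checksum and line count
# across both systems before trusting cross-machine results
printf 'password\nletmein\n' > words-toy.txt
md5sum words-toy.txt
wc -l < words-toy.txt
```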

########### Test #1 ###########
Done on my system with the command below:
hashcat -a 0 -m 1000 -w 3 --potfile-disable /Volumes/hashcat/test/ntlm-50m.txt /Volumes/hashcat/test/words.txt -r /Volumes/hashcat/test/250.rule -o /Volumes/hashcat/test/output.txt

Dictionary cache built:
* Filename..: /Volumes/hashcat/test/words.txt
* Passwords.: 257823994
* Bytes.....: 2623698786
* Keyspace..: 64198174506

Hash.Name........: NTLM
Hash.Target......: /Volumes/hashcat/test/ntlm-50m.txt
Guess.Base.......: File (/Volumes/hashcat/test/words.txt)
Guess.Mod........: Rules (/Volumes/hashcat/test/250.rule)
Recovered........: 18874159/50000000 (37.75%) Digests
Progress.........: 64198174506/64198174506 (100.00%)
Rejected.........: 0/64198174506 (0.00%)


########### Test #2 ###########
Done on a rented rig with a 32-core Xeon, 128GB RAM and 4x 2080 Ti, with the command below:
hashcat -a 0 -m 1000 -w 3 --potfile-disable test/ntlm-50m.txt test/words.txt -r test/250.rule -o test/output.txt

Dictionary cache built:
* Filename..: test/words.txt
* Passwords.: 257823994
* Bytes.....: 2623698786
* Keyspace..: 64198174506

Hash.Name........: NTLM
Hash.Target......: test/ntlm-50m.txt
Guess.Base.......: File (test/words.txt)
Guess.Mod........: Rules (test/250.rule)
Recovered........: 25271830/50000000 (50.54%) Digests
Progress.........: 64198174506/64198174506 (100.00%)
Rejected.........: 0/64198174506 (0.00%)


Any idea why those results don't match?


RE: wordlist + rules VS large wordlist - undeath - 01-27-2021

This indeed looks like a bug. Such bugs are usually caused by buggy OpenCL runtimes. You can try installing a different runtime version and see if the problem persists.
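If swapping runtimes doesn't change anything, the divergence can be localized by bisecting the base wordlist with hashcat's --skip/--limit options and comparing recovered counts per slice on both systems. A sketch that only prints the per-slice commands (filenames taken from the thread; for -a 0 with rules, --skip/--limit count base words, with the rules applied on top):

```shell
# Sketch only: print the per-slice commands instead of running them.
# 257823994 / 4 leaves a remainder of 2 words; a real run would let the
# last slice drop --limit to pick up the tail.
WORDS=257823994
SLICES=4
STEP=$((WORDS / SLICES))
for i in 0 1 2 3; do
  echo "hashcat -a 0 -m 1000 -w 3 --potfile-disable --skip $((i * STEP)) --limit $STEP ntlm-50m.txt words.txt -r 250.rule -o out-slice-$i.txt"
done
```

The slice whose recovered count differs between the two machines is where to dig further.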