DCC2 - 2100 only using one GPU on multi GPU systems
#1
Interesting oddity here. I have multiple password cracking boxes, and I've noticed that DCC2-type hashes will only use one GPU. This is the case on two different four-GPU systems, and on a two-GPU system. This issue cropped up running Hashcat 6.1.1 on Ubuntu 18.04 LTS, but I've also observed it on 6.0.0 and 5.1.0. 

Other hash types (specifically 1000 - NTLM and 7300 - IPMI2) are happily using all the available GPUs. 

Has anyone else observed this behavior, or can you suggest some troubleshooting steps for me to look at? 

Thanks!
Reply
#2
What attack type (-a x)? What input? How large is the mask/dict(s)?

Maybe try using this:
Code:
-S

but speed can suffer a lot with -S.

I guess you could just optimize your attack, also read: https://hashcat.net/faq/morework
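The FAQ's "give the GPUs more work" point can be sketched numerically. A rough saturation check, where the `threads_per_gpu` figure is an illustrative assumption rather than a value queried from hashcat or the device:

```python
# Rough saturation check: does an attack produce enough candidates to
# keep every GPU busy? hashcat splits work across parallel work items;
# with fewer candidates than work items, some devices sit idle.
# threads_per_gpu below is an illustrative assumption.

def gpus_saturated(total_candidates, num_gpus, threads_per_gpu=1_000_000):
    return total_candidates >= num_gpus * threads_per_gpu

# A 30k wordlist with no rules on a 4-GPU rig: nowhere near enough work.
print(gpus_saturated(30_000, 4))            # False
# The same wordlist amplified by ~50k rules: plenty of work.
print(gpus_saturated(30_000 * 50_000, 4))   # True
```

The exact work-item count depends on the hash mode, kernel, and tuning options, so treat this only as a way to reason about orders of magnitude.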
Reply
#3
(08-02-2020, 11:34 PM)philsmd Wrote: What attack type (-a x)? What input? How large is the mask/dict(s)?

Maybe try using this:
Code:
-S

but speed can suffer a lot with -S.

I guess you could just optimize your attack, also read: https://hashcat.net/faq/morework

I had it running through a relatively small password list with the OnePasswordToRuleThemAll rule applied. Across the entire cluster, it was pushing around 1000 hashes/sec. 

After looking at it again, and eyeballing the portion of the FAQ you linked, I killed the job and started a -a 3 attack with ?a?a?a?a?a?a?a?a as the mask. The total hash rate jumped and all the GPUs are now fully utilized. 

Am I correct in inferring that I accidentally starved Hashcat of workload?
Reply
#4
Yes, using very small wordlists (less than a few thousand words) is a common reason for low workloads.
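The difference in available work is easy to see by comparing candidate counts: a small dictionary versus the ?a?a?a?a?a?a?a?a mask from the thread, where ?a covers the 95 printable ASCII characters (the 30,000-word dictionary size is an illustrative assumption):

```python
# Candidate counts: small wordlist vs. an 8-character ?a mask.
# ?a = 95 printable ASCII characters per position.
wordlist = 30_000        # illustrative small dict
mask = 95 ** 8           # 8 positions of ?a

print(mask)              # 6634204312890625
print(mask // wordlist)  # the mask offers ~221 billion times more work
```

With only 30,000 candidates and no amplifier (rules, masks, or combinator), there simply aren't enough candidates to spread across thousands of GPU work items, so utilization collapses.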
Reply
#5
I'm having a similar issue. In my case, I have a large number of hashes (300,000) but a small wordlist (30,000). It's allocating the entire job to only one GPU, though I have eight available. Is there a way to make hashcat split the hashes over all GPUs other than starting 8 instances of hashcat?
Reply
#6
Did you try -S?

Maybe a solution is to pre-compute a larger dict from the small dict. A pipe could also be an option, but to be honest, both -S and pipes are normally only used in very special cases (slow hashes) because they generally reduce speed by a lot. It depends on many factors, though: with 7 GPUs idle, it might even be faster with -S, but that is something you need to test.

So the general rule is to understand the problem (too little work, so no acceleration is possible) and solve it with more clever alternatives/attacks.
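The pre-computation idea can be sketched as a toy candidate amplifier: expand a small wordlist with simple mangling rules, emitting the kind of larger candidate stream you could write to disk or pipe into hashcat. The rules and suffixes below are illustrative assumptions; in practice `hashcat --stdout -r some.rule small.dict` does this job far more thoroughly:

```python
# Toy amplifier: turn a small wordlist into a much larger candidate
# stream via case variants and appended suffixes (both are toy
# examples, not real hashcat rules).

def amplify(words, suffixes=("", "1", "123", "!", "2020")):
    for w in words:
        for variant in (w, w.capitalize(), w.upper()):
            for s in suffixes:
                yield variant + s

words = ["password", "hashcat"]
candidates = list(amplify(words))
print(len(candidates))   # 2 words x 3 case variants x 5 suffixes = 30
```

Real rule sets apply tens of thousands of mangling rules per word, which is what turns a 30,000-word dict into enough work to occupy all eight GPUs.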
Reply