How are duplicate, but different, hashes for a small subset of unique users handled?
#1
So I am trying to figure out how hashcat handles duplicate hashes for a mode like NTLMv2 (-m 5600) when you have, say, 2500+ hashes covering only 12 unique users. My understanding was always that the GPUs run the algorithm and hash the candidates, and then the CPU takes over to compare them against the loaded hashes. So to me, having duplicates, as long as it is not an excessive amount like several hundred thousand, shouldn't really slow down progress, since I am currently hitting speeds of around 3300-3400 MH/s. Even if the GPU is doing all the work, the duplicates shouldn't slow things down by much.

However, my friend tried to disprove me, and his argument looks fairly convincing, so I wanted to see whether his test was flawed or whether someone who knows the platform better could help me understand what is happening under the hood.

I have attached two images: one run with 2664 hashes for 12 unique users, and one with only a single hash for each of the 12 unique users.
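For context, here is a minimal Python sketch of how I understand the NetNTLMv2 response (what -m 5600 cracks) is derived; the field names and sample values are illustrative, and the NT hash is passed in precomputed since MD4 support in hashlib varies between builds. The point is that each captured line embeds a fresh server challenge and client blob, so two captures for the same user and password still look like different hashes:

```python
import hashlib
import hmac


def ntlmv2_proof(nt_hash: bytes, user: str, domain: str,
                 server_challenge: bytes, blob: bytes) -> bytes:
    """Compute the NTProofStr for one captured challenge/response (sketch)."""
    # Per-user key: HMAC-MD5 keyed with the NT hash over upper-cased user + domain
    ntlmv2_key = hmac.new(nt_hash,
                          (user.upper() + domain).encode("utf-16-le"),
                          hashlib.md5).digest()
    # Per-capture proof: keyed over the server challenge and client blob,
    # which change on every authentication -- effectively the salt
    return hmac.new(ntlmv2_key, server_challenge + blob, hashlib.md5).digest()


if __name__ == "__main__":
    nt = bytes.fromhex("8846f7eaee8fb117ad06bdd830b7586c")  # NT hash of "password"
    for challenge in (b"\x01" * 8, b"\x02" * 8):            # two captures, same user
        print(ntlmv2_proof(nt, "user1", "CORP", challenge, b"example-blob").hex())
```

Running it prints two different proofs for the same user and password, which is why the 2664 lines don't collapse down to 12 targets.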


Attached Files
ksnip_20240517-113029.png
ksnip_20240517-113214.png
#2
Firstly, the comparison is done on the GPU too. Secondly, the number of salts matters a lot: if you have 50 salts, the run takes 50x as long as with 1 salt, even though the hashrate shouldn't change. So in your example, the second run is significantly shorter because it has 222x fewer salts (12 instead of 2664) and is therefore doing 222x less total work, despite showing a pretty similar overall hashrate.
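To put rough numbers on it, here's a back-of-the-envelope sketch in Python. The wordlist size is illustrative (rockyou.txt line count) and the rate is the ~3300 MH/s from your screenshots; this is not hashcat's internal accounting, just the salt-multiplier arithmetic:

```python
# Effective work for a salted mode is candidates * unique_salts,
# while the displayed hashrate stays roughly the same.
KEYSPACE = 14_344_385          # e.g. rockyou.txt candidates (illustrative)
RATE_HS = 3_300_000_000        # ~3300 MH/s

for salts in (2664, 12):
    seconds = KEYSPACE * salts / RATE_HS
    print(f"{salts:>4} salts -> ~{seconds:.2f} s for one pass of the wordlist")

# 2664 / 12 = 222, so the full hash list takes ~222x longer per pass
# even though the reported MH/s is nearly identical.
```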
#3
@PenguinKeeper,

Thank you so much for helping me understand. I wasn't thinking about the salts, which was a boneheaded mistake on my part, since I knew they were salted.