Discrepancy between benchmark numbers and actual numbers
#41
This screams "workload + tuning value mismatch" to me, and the fact that there are many different types of cards at play makes this far harder to fix. You have 900-series, 10-series, and 20-series cards all mixed in the same system. In my experience, that forces you onto non-optimal drivers to maintain cross-generation compatibility and can lead to weird tuning problems. Also, the use of x1 risers will absolutely destroy your card <-> host bandwidth, so attacks that rely on the host for candidates (stdin/pipe, dictionary, etc.) will be bottlenecked and thus incapable of proper utilization in a fair number of cases, even with rules used as amplifiers. There are a lot of problems here that need to be fixed or ruled out before I would settle on this being a bug in hashcat. Even the system RAM is slightly under-spec'd: 33 GB of aggregate VRAM against 32 GB of host RAM, which does not align with the VRAM <= host RAM rule we tend to suggest.
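To rule out the mixed-generation/driver angle, one thing worth doing is benchmarking each card in isolation and comparing the sum against the combined run. A rough sketch (the device IDs below are placeholders; use whatever hashcat actually enumerates on your system):

    hashcat -I                        # list the devices hashcat detects
    hashcat -b -m 1000 -d 1 -w 4      # benchmark NTLM on device #1 alone
    hashcat -b -m 1000 -d 2 -w 4      # then device #2, and so on
    hashcat -b -m 1000 -w 4           # all devices together, for comparison

If the per-card numbers look sane but the combined run falls apart, the cross-generation setup is the prime suspect.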
#42
Yeah, the choice of cards isn't ideal, but when you're resource-constrained...

So how can I measure RAM/VRAM? I'm not seeing anything in Task Manager that indicates that's an issue, and I'm able to get within 5% of most of my benchmark numbers by tuning the attacks.

I'm all game for testing, got any pointers I can run with?
#43
Resource Monitor and msinfo32 in Windows: https://answers.microsoft.com/en-us/wind...d39b979938

i.e. check virtual memory used vs. host main memory (RAM)
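If you want the same numbers from the command line, something like this should do it (these are the standard Windows performance counters, plus nvidia-smi for the VRAM side; adjust paths if the tools aren't on your PATH):

    nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
    powershell "Get-Counter '\Memory\Committed Bytes','\Memory\Available MBytes'"

The first line shows per-GPU VRAM usage, the second the host-side commit charge and free RAM.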
#44
(09-03-2019, 10:55 AM)philsmd Wrote: Resource Monitor and msinfo32 in Windows: https://answers.microsoft.com/en-us/wind...d39b979938

i.e. check virtual memory used vs. host main memory (RAM)

Now THAT is a very informative link. Thank you! Definitely saving that one for further digestion.

So, testing with Resource Monitor and Task Manager up, I never see more than 13.1 GB committed when running hashcat. Total available memory across pagefile and RAM is currently 41.4 GB, and even manually setting the pagefile to larger arbitrary sizes does not change memory consumption.
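For reference, one way to log the commit charge across a whole cracking run instead of eyeballing Task Manager is a sampled counter query like this (the counter name is the standard Windows one; the interval and sample count are arbitrary picks):

    powershell "Get-Counter '\Memory\Committed Bytes' -SampleInterval 5 -MaxSamples 120"

That samples every 5 seconds for 10 minutes, which should be long enough to show any growth during an attack.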

Interestingly, the -w2 performance issue in question only seems to impact MD5, LM, and NTLM hashes. I haven't tested MD4, but SHA2-256, LUKS, and other slow hashes at -w4 pretty much hit their benchmark numbers, or come in just under them by a negligible amount, when actually cracking vs. running a benchmark.

Very strange.
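One test I can think of to confirm whether it's candidate feeding rather than the kernels: run the same fast hash once with a mask (candidates generated on the GPU, no host traffic) and once with a dictionary plus rules (host-fed), and compare both against the benchmark. A sketch using hashcat's bundled example files:

    hashcat -b -m 0 -w 2                                                      # MD5 benchmark baseline
    hashcat -m 0 -a 3 -w 2 example0.hash ?a?a?a?a?a?a?a                       # on-device candidate generation
    hashcat -m 0 -a 0 -w 2 example0.hash example.dict -r rules/best64.rule    # host-fed + rules

If only the dictionary run falls short of the benchmark, that points at the x1 risers rather than a hashcat bug.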