Large SHA-1 hashfile
#1
Hello everyone,

I'm pretty new to this, so I have a few questions about large hashfiles and how they slow hashcat down.
I'm getting nowhere near the performance I see with ./hashcat --benchmark.

I have a hashfile with 5,000,000 SHA-1 hashes and a wordlist with 1,000,000,000 words. I use the best64 rule.

command:
./hashcat -a 0 -O -m 100 ".\hashes\5mil_small.hashes" ".\wordlists\generated_pass_1bil.dict" -r .\rules\best64.rule -o ".\cracked\passGAN_1Bil_small.cracked" --status --status-timer 10

In my benchmark I get an SHA-1 performance of 22,237 MH/s.
In my actual run I get about 3,897 kH/s, which strangely increases over time.

In Task Manager there is almost no load on my GPU.

Do you have any idea what my problem could be and how to solve it? Thanks in advance.


This is the output when starting hashcat with the command shown above.

Code:
hashcat (v6.2.5) starting

* Device #1: WARNING! Kernel exec timeout is not disabled.
            This may cause "CL_OUT_OF_RESOURCES" or related errors.
            To disable the timeout, see: https://hashcat.net/q/timeoutpatch
* Device #2: WARNING! Kernel exec timeout is not disabled.
            This may cause "CL_OUT_OF_RESOURCES" or related errors.
            To disable the timeout, see: https://hashcat.net/q/timeoutpatch
CUDA API (CUDA 11.6)
====================
* Device #1: NVIDIA GeForce RTX 3090, 23336/24575 MB, 82MCU

OpenCL API (OpenCL 3.0 CUDA 11.6.110) - Platform #1 [NVIDIA Corporation]
========================================================================
* Device #2: NVIDIA GeForce RTX 3090, skipped

Minimum password length supported by kernel: 0
Maximum password length supported by kernel: 31

Bitmap table overflowed at 18 bits.
This typically happens with too many hashes and reduces your performance.
You can increase the bitmap table size with --bitmap-max, but
this creates a trade-off between L2-cache and bitmap efficiency.
It is therefore not guaranteed to restore full performance.

Hashes: 5000000 digests; 5000000 unique digests, 1 unique salts
Bitmaps: 18 bits, 262144 entries, 0x0003ffff mask, 1048576 bytes, 5/13 rotates
Rules: 77

Optimizers applied:
* Optimized-Kernel
* Zero-Byte
* Precompute-Init
* Early-Skip
* Not-Salted
* Not-Iterated
* Single-Salt
* Raw-Hash

Watchdog: Temperature abort trigger set to 90c

Host memory required for this attack: 1462 MB

Dictionary cache hit:
* Filename..: .\wordlists\generated_pass_1bil.dict
* Passwords.: 999999488
* Bytes.....: 9461779463
* Keyspace..: 76999960576

Cracking performance lower than expected?

* Append -w 3 to the commandline.
  This can cause your screen to lag.

* Append -S to the commandline.
  This has a drastic speed impact but can be better for specific attacks.
  Typical scenarios are a small wordlist but a large ruleset.

* Update your backend API runtime / driver the right way:
  https://hashcat.net/faq/wrongdriver

* Create more work items to make use of your parallelization power:
  https://hashcat.net/faq/morework
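
For reference, following the output's own suggestion to append -w 3, the command above would become something like this (only a sketch based on that hint, nothing I can vouch for yet):

Code:
./hashcat -a 0 -O -m 100 -w 3 ".\hashes\5mil_small.hashes" ".\wordlists\generated_pass_1bil.dict" -r .\rules\best64.rule -o ".\cracked\passGAN_1Bil_small.cracked" --status --status-timer 10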
#2
Please read the errors in your output.


Code:
Bitmap table overflowed at 18 bits.
This typically happens with too many hashes and reduces your performance.
You can increase the bitmap table size with --bitmap-max, but
this creates a trade-off between L2-cache and bitmap efficiency.
It is therefore not guaranteed to restore full performance.
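
As a rough illustration (not a guaranteed fix), raising the bitmap size would look something like this, reusing your own paths. The exact --bitmap-max value is just an example; you will have to experiment with it, keeping the L2-cache trade-off mentioned above in mind:

Code:
./hashcat -a 0 -O -m 100 --bitmap-max 26 ".\hashes\5mil_small.hashes" ".\wordlists\generated_pass_1bil.dict" -r .\rules\best64.rule -o ".\cracked\passGAN_1Bil_small.cracked" --status --status-timer 10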
#3
(07-22-2022, 05:36 PM)Chick3nman Wrote: Please read the errors in your output.


Code:
Bitmap table overflowed at 18 bits.
This typically happens with too many hashes and reduces your performance.
You can increase the bitmap table size with --bitmap-max, but
this creates a trade-off between L2-cache and bitmap efficiency.
It is therefore not guaranteed to restore full performance.

Hmmm, I've read that and already halved the number of hashes from 10,000,000. What is an appropriate number of hashes to process at a time? How do you still process all of them? Do you just split the file and write a bash script that runs hashcat on every file separately, or is there a more elegant solution?
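
To illustrate what I mean by the naive approach, a minimal sketch (assuming a Unix-like shell with GNU split available; the chunk size, paths and file names are only examples):

Code:
#!/usr/bin/env bash
# Rough sketch: split a large hashfile into chunks, then run hashcat on each chunk.
# Chunk size, paths and file names are example values only.
split -l 1000000 ./hashes/10mil.hashes ./hashes/chunk_

for f in ./hashes/chunk_*; do
    ./hashcat -a 0 -O -m 100 "$f" ./wordlists/generated_pass_1bil.dict \
        -r ./rules/best64.rule \
        -o "./cracked/$(basename "$f").cracked" \
        --status --status-timer 10
done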