hashcat Forum

Full Version: Big hash list problem
Hey guys,

Just updated to 0.15

and now when trying to brute-force a big list
(5M+ hashes) it gives an error and exits.
(This worked fine in 0.14, but I updated because I liked the support for passwords longer than 15 characters.)

===
oclHashcat-plus v0.15 by atom starting...

Hashes: 5743744 total, 1 unique salts, 5743744 unique digests
Bitmaps: 21 bits, 1048576 entries, 0x000fffff mask, 4194304 bytes
Rules: 1
Workload: 128 loops, 80 accel
Watchdog: Temperature abort trigger set to 80c
Watchdog: Temperature retain trigger set to 80c
Device #1: BeaverCreek, 512MB, 444Mhz, 4MCU
Device #2: Caicos, 1024MB, 444Mhz, 2MCU
Device #1: Kernel ./kernels/4098/m0100_a0.BeaverCreek_1124.2_1124.2 (VM).kernel (982864 bytes)
Device #1: Kernel ./kernels/4098/bzero.BeaverCreek_1124.2_1124.2 (VM).kernel (33872 bytes)
ERROR: clCreateBuffer() -61
Reduce the number of hashes per file: split the hashset into two files and everything will work fine.
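The split itself is straightforward; a minimal sketch with GNU `split`, assuming one hash per line (the filenames and the tiny stand-in list are illustrative, not from the thread):

```shell
# Stand-in for the real 5.7M-line hash list (one hash per line)
printf '%s\n' a1 b2 c3 d4 > hashes.txt

# Split into parts of at most 2 lines each, with numeric suffixes (-d).
# For the real list you would use something like -l 2900000 to get two halves.
split -l 2 -d hashes.txt hashes_part_

ls hashes_part_*   # hashes_part_00 hashes_part_01
# Then run oclHashcat-plus once per part and merge the cracked results.
```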
Sure, but then I have to brute-force twice (if I split it into two) or three times if I split it into three files.
It will take me much, much longer (days of added scanning time).
I use a large hash list for testing my dicts, rulesets and Markov chains.
And before 0.15 it worked fine. Still, no harm done, just pointing it out.
GPU #1 does not have enough video memory to handle that many hashes with the new version.
I thought the new version generally uses less memory?
less host memory, not less device memory.
Thanks for the answer. So I need a better card... lol.
Too bad it won't use both cards' memory combined.
It does use the memory on both cards, but it does so independently, of course.