CL_INVALID_BUFFER_SIZE
#1
I have a 20GB NTLM hash file that I am testing with and I get the error CL_INVALID_BUFFER_SIZE.  Is there a maximum number of hashes that can be processed in one job?
#2
Yes, it depends on your system's host memory and your GPU memory as well, so there is no fixed answer. Try splitting the list until you find a size that works. :)
#3
I have dual 8-core Xeons, 128GB RAM, and dual R9 290X video cards. Any suggestion on a good starting point for the split?
#4
Start with the smallest limit, which is the GPU memory of a single R9 290X.
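As a rough sketch of the splitting itself (file names and the line count are illustrative, not from the thread): an NTLM hash is 32 hex characters, so one hash per line is about 33 bytes, and GNU split can cut the 20GB file into fixed-size pieces you can grow or shrink until one fits.

# cut the list into numbered chunks of ~90 million lines (roughly 3GB each);
# adjust -l up or down until a chunk loads without CL_INVALID_BUFFER_SIZE
split -l 90000000 -d ntlm_hashes.txt ntlm_chunk_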
#5
It accepted the file at 3GB.

I do have a question though. If I do a benchmark on my system, it shows 43,000,000 H/s. When I processed my 3GB file, the speed dropped to about 3,000 H/s. I then broke the file down to 1 million lines and am running at about 4,000,000 H/s. Are there any parameters I can set to fix that, or is it just the way it works? I used -w 3 -O, but it didn't seem to help.
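For reference, the two numbers being compared come from invocations along these lines (hash mode 1000 is NTLM; the wordlist and file names are placeholders, not from the thread):

# synthetic benchmark, no real hash list loaded
hashcat -b -m 1000
# actual dictionary run against one chunk, workload profile 3, optimized kernels
hashcat -m 1000 -a 0 -w 3 -O ntlm_chunk_00 rockyou.txt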
#6
(08-16-2019, 10:49 PM)slawson Wrote: It accepted the file at 3GB. [...]

Hashcat gets slow when it is cracking a lot of hashes at once.

You should run mdxfind on the full list first; it handles a large number of cracks much better. Then use mdsplit to split out what was found, and run the remainder through hashcat.
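For the last step of that workflow, the leftover list goes back through hashcat as usual; a minimal sketch with placeholder file names (the exact mdxfind/mdsplit invocations are documented with those tools and aren't shown here):

# run the uncracked remainder; --remove deletes hashes from the file as they
# crack, so the working set keeps shrinking
hashcat -m 1000 -a 0 -w 3 -O --remove ntlm_remainder.txt rockyou.txt -r rules/best64.rule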
#7
Thanks for that info. Is there a sweet spot as far as the number of hashes that Hashcat can efficiently process at one time?
#8
(08-16-2019, 10:58 PM)slawson Wrote: Thanks for that info. Is there a sweet spot as far as the number of hashes that Hashcat can efficiently process at one time?

I'd load as many as possible, but not much should remain after mdxfind has run for a while.