CL_INVALID_BUFFER_SIZE - Printable Version
hashcat Forum (https://hashcat.net/forum) - Thread: CL_INVALID_BUFFER_SIZE (/thread-8569.html)
CL_INVALID_BUFFER_SIZE - slawson - 08-16-2019

I have a 20GB NTLM hash file that I am testing with, and I get the error CL_INVALID_BUFFER_SIZE. Is there a maximum number of hashes that can be processed in one job?

RE: CL_INVALID_BUFFER_SIZE - Xanadrel - 08-16-2019

Yes. It depends on your system's host memory as well as your GPU memory, so there is no fixed answer. Try splitting the list until you find the right size.

RE: CL_INVALID_BUFFER_SIZE - slawson - 08-16-2019

I have dual 8-core Xeons, 128GB RAM, and dual R9 290X video cards. Any suggestion on a good starting point for the split?

RE: CL_INVALID_BUFFER_SIZE - Xanadrel - 08-16-2019

Start with the smallest limit, which is the GPU memory of a single R9 290X.

RE: CL_INVALID_BUFFER_SIZE - slawson - 08-16-2019

It accepted a 3GB file size. I do have a question, though. A benchmark on my system shows 43,000,000 H/s, but when I processed my 3GB file the speed went down to about 3,000 H/s. I then broke the file down to 1 million lines and am now running at about 4,000,000 H/s. Are there any parameters I can set to fix that, or is it just the way it works? I used -w 3 -O, but it didn't seem to help.

RE: CL_INVALID_BUFFER_SIZE - dipeperon - 08-16-2019

(08-16-2019, 10:49 PM)slawson Wrote: It accepted a 3GB file size.

Hashcat gets slow when it is cracking a lot of hashes. You should run mdxfind on the full list first; it handles large numbers of cracks much better. Then use mdsplit to extract the remainder and run that through hashcat.

RE: CL_INVALID_BUFFER_SIZE - slawson - 08-16-2019

Thanks for that info. Is there a sweet spot for the number of hashes that hashcat can efficiently process at one time?

RE: CL_INVALID_BUFFER_SIZE - dipeperon - 08-16-2019

(08-16-2019, 10:58 PM)slawson Wrote: Thanks for that info. Is there a sweet spot for the number of hashes that hashcat can efficiently process at one time?

I'd load as many as possible. Not much should remain after a while of running mdxfind.
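
A minimal sketch of the split-and-run workflow described above, assuming GNU split, NTLM hashes (hashcat mode 1000), and a straight dictionary attack; the file names, chunk size, and wordlist are illustrative, not taken from the thread:

Code:
# Split the big list into chunks of 100 million lines each.
# At 33 bytes per NTLM hash line (32 hex chars + newline), that is
# roughly 3GB per chunk, about the size slawson's R9 290X accepted.
split -l 100000000 -d ntlm.txt ntlm_part_

# Run each chunk as its own job, with the flags mentioned in the
# thread (-w 3 workload profile, -O optimized kernels).
for f in ntlm_part_*; do
    hashcat -m 1000 -a 0 -w 3 -O "$f" wordlist.txt
done

The chunk size is the tuning knob: shrink the -l value until the resulting buffer fits in the memory of the smallest GPU in the system.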