For large tasks I split my hash list into several files. Each file contains roughly the maximum number of hashes my GPUs can handle, found by trial and error.
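For context, the splitting itself is easy to do with coreutils `split`; a minimal sketch (the chunk size, filenames, and sample data here are illustrative, not taken from my actual setup):

```shell
# Create a small sample hash list (illustrative; real lists are much larger)
printf 'hash%d\n' 1 2 3 4 5 > hashes.txt

# Split into chunks of at most 2 lines each, with numeric suffixes
# (hashes.part.00, hashes.part.01, ...). In practice the -l value would be
# the max hash count the GPUs can handle, e.g. -l 1000000.
split -l 2 -d hashes.txt hashes.part.
```

Each resulting part can then be fed to hashcat as a separate run. The hard part is knowing what `-l` value to pick, which is exactly what my question below is about.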
I've noticed that the number of hashes I can crack at once using v4 is lower than with v3.6 (I run v4 with -O, since it's slower with the new kernels): files sized for v3.6 don't fit in GPU memory on v4. So it seems GPU memory usage has increased.
Is there a way to easily determine the maximum number of hashes of a specific type that a specific hashcat version can handle at once? If not, might this be an interesting feature request? Something like hashcat64.bin -m100 --maxhashes? Thanks!