I have an 8-GPU cluster of GTX 980s. When I use a small wordlist, only a single GPU is used. I'm running large lists of hashes (100K or more), which takes about 36 minutes, and I'd like to use multiple GPUs to speed this up. I've read this thread:
https://hashcat.net/forum/thread-4161.html
I've tried the suggestion there to pipe in the wordlist, but it makes no difference: the hashing speed is the same and it still uses only one GPU. Is there a way to get cudaHashcat to use all GPUs when working with a small (10,000-word) dictionary?
The only option I can think of is to split the large hash list into smaller blocks, run 8 cudaHashcat processes, and use the -d option to pin each process to a different GPU to balance the load (roughly what I sketch below my current command line).
I'm using v1.35. Here's an example command line:
Code:
cat 10kpasswds.txt | cudaHashcat64.bin -m 400 -w 3 hashes.txt --session mysess1
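And here's roughly what I mean by splitting the hashes into blocks, in case someone can confirm whether it's a sensible approach. This is an untested sketch: the part file names are made up, it assumes GNU split with -n support, and it assumes each process can read the wordlist from stdin the same way my command above does.

Code:
# untested sketch: split the hash list into 8 chunks, one cudaHashcat process per GPU
split -n l/8 -d hashes.txt hashes.part.    # creates hashes.part.00 .. hashes.part.07
for i in 0 1 2 3 4 5 6 7; do
    cat 10kpasswds.txt | cudaHashcat64.bin -m 400 -w 3 \
        -d $((i+1)) --session mysess_gpu$((i+1)) hashes.part.0$i &
done
wait    # wait for all 8 background processes to finish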
Thanks very much for any help and thanks for an amazing piece of software.