08-06-2020, 06:00 AM
You said previously:
If the purpose of your 100GB wordlist is to optimize attack order, simply split the file into smaller chunks, and supply them to hashcat in order on the command line. The end result will be identical, but the dictionary cache building cost will be distributed across the number of chunks. If the wait time is larger than desired, increase the number of chunks.
So I did that, but I'm not seeing that benefit: hashcat still builds the dictionary cache for each chunk one by one instead of caching them all together, so the cost isn't distributed the way you described.
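For reference, this is roughly what I ran; the hash mode, file names, and chunk count are placeholders for my actual setup:

    # split the 100GB wordlist into 20 roughly equal chunks without breaking lines
    split -n l/20 wordlist.txt chunk_

    # supply the chunks to hashcat in order (the glob expands alphabetically: chunk_aa, chunk_ab, ...)
    hashcat -m 0 -a 0 hashes.txt chunk_*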