Copy and reuse dictionary cache
#11
By the way, I don't get any email notifications here, even though the option is enabled:

Code:
Subscribe and receive email notification of new replies
#12
The markov flag is unrelated to the dictionary.

I've used split -n l/3 in the past and it split properly. It's OK if the resulting files are not the same size, though they are usually close in my experience.
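
For example, a quick sketch (wordlist.txt and the chunk_ prefix are just placeholder names):

Code:
split -n l/3 wordlist.txt chunk_
ls -lh chunk_aa chunk_ab chunk_ac
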
~
#13
I'm so sorry, that was my mistake. I was opening different files.
#14
Hi Royce,

I'm trying to load multiple dictionaries into hashcat, but the dictionary cache building is not getting distributed across the number of chunks. Hashcat is loading the dictionary files one by one. Why?
#15
I don't know what you mean by the first sentence. As for hashcat loading dictionary files one by one, what should it be doing instead?
~
#16
You said previously:

If the purpose of your 100GB wordlist is to optimize attack order, simply split the file into smaller chunks, and supply them to hashcat in order on the command line. The end result will be identical, but the dictionary cache building cost will be distributed across the number of chunks. If the wait time is larger than desired, increase the number of chunks.
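
In other words, something roughly like this (the wordlist name, hashes.txt, and the -m 0 hash mode are placeholders, not part of the original advice):

Code:
split -n l/5 bigwordlist.txt chunk_
hashcat -a 0 -m 0 hashes.txt chunk_aa chunk_ab chunk_ac chunk_ad chunk_ae

hashcat accepts multiple wordlists on the command line for a straight (-a 0) attack and works through them in the order given.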

So I did that, but I didn't get the benefit you described, because hashcat is still loading the dictionaries one by one instead of caching them all together. There's no cost distribution like you said there would be.
#17
Any updates?
#18
Splitting the wordlist into smaller chunks doesn't change the *total* load or attack time. It just distributes the dictionary load time into smaller chunks as well.

If this isn't helpful, please restate your question more clearly using an example of how you would like it to work.
~
#19
Alright, here's an explanation:

Let's say I have a 20GB dictionary file, and hashcat needs 5 minutes to cache it at the beginning of the attack.

Now, if I split the file into 5 parts (20/5 = 4GB per file), I'm expecting hashcat to build the cache 5x faster (all of them in 1 minute), because instead of using 1 core it should use 5 cores and cache all 5 files in parallel at the same time. But that's not what happens: it loads the first file and runs the attack, then loads the second and runs the attack, and so on.

Thanks!
#20
The thing with the cache is that it's limited to what that single system has. When you split the dictionary, you gain the ability to use 5 separate nodes (if you want to parallelize the work), but at the same time you get to do the work in chunks, so the whole load doesn't need to be cached at once, which in theory makes it faster.

hashcat is already optimized to use multiple cores, and splitting the dictionary doesn't give you more cores to process the wordlist if it's all happening on a single system. Hope that makes sense.
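
For example, if you did have 5 separate machines, each one could take its own chunk (the hash mode and file names below are purely illustrative):

Code:
# node 1
hashcat -a 0 -m 0 hashes.txt chunk_aa
# node 2
hashcat -a 0 -m 0 hashes.txt chunk_ab
# ...and so on for chunk_ac, chunk_ad, chunk_ae

On a single machine, running the chunks back to back is the sequential behavior you're seeing, with the cache-building time spread across the chunks rather than reduced overall.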