Large dictionary
Let's say I have a 10 GB wordlist but only 8 GB of RAM. Does hashcat split the wordlist automatically, or do you have to do it manually?
I'm asking because the Hashcat-utils docs, under the splitlen section, say that "this optimization is no longer needed by modern hashcat."
Hashcat doesn't load the wordlist into memory at all; it streams it from disk. So even a 100 GB wordlist will work just fine. Only if you have a huge number of rules (millions) could things start filling up your VRAM, but that has nothing to do with wordlist size.
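To see why wordlist size doesn't matter, here's a minimal sketch (not hashcat's actual code) of the streaming idea: candidates are read one line at a time, so memory use stays constant no matter how big the file is. The `check_candidate` function is a hypothetical stand-in for the real hashing/compare work.

```python
import hashlib

def check_candidate(word: str, target_hash: str) -> bool:
    # Hypothetical stand-in: hash one candidate and compare to the target.
    return hashlib.md5(word.encode()).hexdigest() == target_hash

def crack(wordlist_path: str, target_hash: str):
    # Iterating over the file object streams it line by line;
    # only one candidate is held in memory at a time, so a 100 GB
    # wordlist uses no more RAM than a 1 KB one.
    with open(wordlist_path, "r", encoding="utf-8", errors="ignore") as f:
        for line in f:
            word = line.rstrip("\n")
            if check_candidate(word, target_hash):
                return word
    return None
```

This is why splitting the file manually gains you nothing: the bottleneck is GPU throughput, not RAM.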