Having explicitly said "This isn't a direct answer to your question" isn't exactly "completely ignoring" your question, is it?
The canonical solution to this problem is to not do what you're doing. Just because there are lists bigger than 100GB doesn't mean that it's a good practice. This may not be the advice you're looking for, but it may be the advice you need.
Mashing up all of your lists into a single list is rarely necessary, and has no inherent efficiency gain. Multiple lists, or an entire directory of lists, can be specified on the hashcat command line.
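For example, in a straight (-a 0) attack it looks something like this (the hash file, hash mode, and wordlist names are placeholders):

    # several wordlists passed as separate positional arguments
    hashcat -m 0 -a 0 hashes.txt rockyou.txt crackstation.txt custom.txt

    # or point hashcat at a whole directory of wordlists
    hashcat -m 0 -a 0 hashes.txt wordlists/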
If the purpose of your 100GB wordlist is deduplication, it is not necessary to do this via a single massive wordlist, and it is less efficient than the alternatives, such as using rli from the hashcat-utils suite to deduplicate across multiple wordlists.
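A rough sketch of that approach (the filenames are placeholders, and I'm going from memory on rli's argument order, so check its usage output first):

    # remove from list2.txt every line that already appears in list1.txt,
    # writing the survivors to list2.unique.txt
    rli list2.txt list2.unique.txt list1.txt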
If the purpose of your 100GB wordlist is to optimize attack order, simply split the file into smaller chunks and supply them to hashcat in order on the command line. The end result will be identical, but the dictionary-cache building cost will be spread across the chunks. If the wait time per chunk is still longer than desired, increase the number of chunks.
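Something like this, assuming a Linux shell and GNU split (the chunk size and filenames are arbitrary):

    # break the merged list into ~50M-line pieces named chunk_aa, chunk_ab, ...
    split -l 50000000 merged_wordlist.txt chunk_

    # the shell glob expands in the same order split created the chunks,
    # so the overall attack order is unchanged
    hashcat -m 0 -a 0 hashes.txt chunk_*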
But if you wish to persist in mashing up your wordlists, I'm not aware of a way to automatically distribute dictionary caches across installations. You could experiment with copying the file yourself, but I'm not sure how effective that will be.
On Linux, the dictstat2 file is in ~/.hashcat/. On Windows, it will be in the default hashcat directory, wherever that happens to be on your system.
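If you do experiment with copying it between boxes, it would look something like this (assuming the cache file is named hashcat.dictstat2 - check your ~/.hashcat/ directory for the exact name - and with no guarantee the destination install will accept it):

    # copy the dictionary cache from this machine to another hashcat install
    scp ~/.hashcat/hashcat.dictstat2 otherhost:~/.hashcat/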