05-07-2020, 10:25 AM
Here's a possible speed improvement for hashcat when using --restore or --skip:
I've been running a 3-4 day job against a 157 GB wordlist, and the initial restore/--skip phase takes 15+ minutes because hashcat has to READ through the whole wordlist up to the restore point (I'm currently at 82%, so that's A LOT of reading). I run it for ~10 hours every day; since I have solar panels it costs me nothing to run, but it does mean I have to restore the previous session each day.
This seems wasteful when the .restore file could also store the current byte position in the wordlist file; with that one simple change, restore would be practically instant.
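To illustrate the idea, here's a minimal sketch, NOT hashcat's actual restore code, the struct fields and function name are made up for illustration, of how a saved byte offset would let restore seek straight to the position instead of re-reading everything before it:

Code:
/* Hypothetical sketch only. fseeko() is POSIX and handles >2 GB files. */
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>

typedef struct
{
  uint64_t words_done;   /* candidates already processed (stored today)   */
  uint64_t byte_offset;  /* proposed: wordlist file position at that word */

} restore_info_t;

FILE *resume_wordlist (const char *path, const restore_info_t *r)
{
  FILE *fp = fopen (path, "rb");

  if (fp == NULL) return NULL;

  /* Instead of reading words_done lines from the start of a 157 GB
     file, jump straight to the saved position. */
  if (fseeko (fp, (off_t) r->byte_offset, SEEK_SET) != 0)
  {
    fclose (fp);
    return NULL;
  }

  return fp;
}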
Hashcat could even store a byte position every 5% in the .dictstat2 file. With --skip, it could then look up the nearest lower checkpoint, seek straight to it, and only read forward from there to the exact --skip value, easy and much FASTER! (Sketch below.)
Seem feasible?
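Again just a rough sketch of the checkpoint idea, the struct, field names, and CHECKPOINTS count are assumptions, not the real .dictstat2 format. The offsets could be recorded during the single pass hashcat already makes to count the dictionary's words:

Code:
/* Hypothetical sketch only, not the real .dictstat2 layout. */
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>

#define CHECKPOINTS 20  /* one entry per 5% of the word count */

typedef struct
{
  uint64_t total_words;
  uint64_t word_at[CHECKPOINTS];  /* word index at each checkpoint  */
  uint64_t byte_at[CHECKPOINTS];  /* byte offset at each checkpoint */

} dictstat_ckpt_t;

/* Resolve --skip: seek to the highest checkpoint <= skip, then read
   forward over at most 5% of the file instead of all of it. */
int seek_to_skip (FILE *fp, const dictstat_ckpt_t *d, uint64_t skip)
{
  int i = CHECKPOINTS - 1;

  while (i > 0 && d->word_at[i] > skip) i--;

  if (fseeko (fp, (off_t) d->byte_at[i], SEEK_SET) != 0) return -1;

  uint64_t word = d->word_at[i];

  /* fast-forward line by line; assumes each candidate fits in buf,
     a real implementation would handle longer lines */
  char buf[4096];

  while (word < skip && fgets (buf, sizeof (buf), fp) != NULL) word++;

  return (word == skip) ? 0 : -1;
}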
FYI, if you want to download the 157 GB wordlist, find it below in 7zip chunks:
http://share.blandyuk.co.uk/wordlists/huge/