I agree with @Karamba that an entropy check could be useful (btw: hashcat already does this internally https://github.com/hashcat/hashcat/blob/...#L222-L224 , but if you think it would reduce your list of chunks a lot, you could do it externally to avoid looking at too many "files").
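For such an external pre-filter, a minimal Python sketch of an entropy check could look like this (the 7.0 bits/byte threshold is my assumption, not hashcat's internal cutoff):

```python
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 for constant data, 8.0 for uniform)."""
    if not block:
        return 0.0
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in Counter(block).values())

def looks_encrypted(block: bytes, threshold: float = 7.0) -> bool:
    # 512 bytes = one TrueCrypt header candidate; the threshold is a guess,
    # hashcat applies its own internal cutoff
    return len(block) == 512 and shannon_entropy(block) >= threshold
```

Encrypted (and compressed) data sits close to 8.0 bits/byte, while text, zero-filled sectors etc. score much lower, so even a conservative threshold discards most garbage blocks before hashcat ever sees them.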
hashcat unfortunately only allows loading one TrueCrypt "hash" at a time... that's normally not a problem, but a bit of a bummer in your very specific situation (when you need to test a lot of different TrueCrypt containers "automatically").
I think it's a good strategy to pre-filter them (for instance, only use 512-byte blocks that have enough entropy, and maybe even rank/prioritize them depending on their location on the disk, the likelihood of being a good candidate block etc etc) and then just run a bash/shell/batch script over them, something like this:
Code:
for f in chunks/*; do for t in 6213 6223 6233; do ./hashcat -m "$t" "$f" dict.txt; done; done | tee always_track_progress.log
you could even exit the loops as soon as hashcat reports a success, e.g.
Code:
ret=$?
if [ "$ret" -eq 0 ]; then break; fi
all together:
Code:
for f in chunks/*; do
  for t in 6213 6223 6233; do
    ./hashcat -m "$t" "$f" dict.txt
    ret=$?
    if [ "$ret" -eq 0 ]; then break; fi
  done
  if [ "$ret" -eq 0 ]; then break; fi
done | tee always_track_progress.log
Also the loop over all hash types 6213, 6223 and 6233 is NOT needed if you know exactly which hashing algorithm was used when creating the TrueCrypt container (RIPEMD160 vs SHA512 vs Whirlpool).
This also assumes that no boot-mode was used (otherwise you would need to use -m 6243 instead).
Of course, this might not be the fastest strategy, because it needs to start and initialize hashcat over and over again... the time spent actually cracking might be much smaller than the time spent starting up and initializing the devices. To reduce this overhead you could also play around with other hashcat options, like -D 1 (only use the CPU), selecting a single device with -d 1, or --backend-ignore-cuda etc etc
In this very specific situation it could also be faster to develop and test a dedicated perl/python/php etc script that does everything for you (testing the entropy and also trying to decrypt the data, similar to what hashcat does internally), but this might involve much more work for you.
I think the ranked/prioritized testing of the blocks within a shell loop could be quite fast/feasible (especially because hashcat does an entropy check itself and therefore rejects a lot of garbage blocks etc).
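If you do go the scripting route, a minimal Python sketch of such a driver could look like this (the chunks/ directory, dict.txt wordlist and ./hashcat path are my assumptions; the actual decryption is still left to hashcat):

```python
import subprocess
from pathlib import Path

# RIPEMD160 / SHA512 / Whirlpool; drop this loop (or use "6243")
# if you already know the algorithm or a boot-mode volume is involved
MODES = ["6213", "6223", "6233"]

def crack_chunks(chunk_dir, wordlist, hashcat="./hashcat"):
    """Run hashcat on every candidate chunk until one cracks (exit status 0)."""
    for chunk in sorted(Path(chunk_dir).iterdir()):
        for mode in MODES:
            ret = subprocess.run([hashcat, "-m", mode, str(chunk), wordlist])
            if ret.returncode == 0:  # 0 == cracked, 1 == exhausted
                return chunk, mode
    return None  # nothing cracked

if __name__ == "__main__":
    print(crack_chunks("chunks", "dict.txt"))
```

Feeding the chunks in ranked order (best candidates first) is then just a matter of sorting the list differently before the loop.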