PKZip Length Question
#1
I have a zip file encrypted with PKZip's traditional encryption. It contains a single file with an unpacked length of 12,481,930 bytes and a packed length of 4,612,283 bytes. I cannot run the hash produced by zip2john (from JtR) through hashcat (with its new PKZip support) because, as I understand it, hashcat's PKZip support currently has a 320 kilobyte limit on the compressed data (https://github.com/hashcat/hashcat/pull/2053). Is there any way around this that I don't know about?
Reply
#2
short answer: not supported

long answer: it would actually be easy to support longer compressed data lengths with the current on-GPU inflate code (the decompressed length is even less of a problem, because we "only" need to compute the crc32 checksum over the output and never store the result). The remaining problems are:
- hash reading: the hash lines are currently read with fgetl (), which has a fixed/maximum line length, so very long hash lines are rejected; a binary file reading approach would be needed instead
- hash output: the same fixed maximum length applies when a cracked hash is printed or when hashes are displayed on the status screen
- stack variables: some code still stores the data in stack variables/arrays; it should use the heap everywhere, because some operating systems/compilers do not allow storing that much data on the stack (there is a maximum byte length for a single stack variable/array)

All of these problems are still unsolved, and as you can see they are already highlighted/mentioned within the same pull request you posted above.
Reply
#3
(06-22-2019, 09:57 AM)philsmd Wrote: short answer: not supported [...]

Thanks, I thought there might have been a trick I didn't know about.
Reply