PKZip Length Question - Printable Version

+- hashcat Forum (https://hashcat.net/forum)
+-- Forum: Support (https://hashcat.net/forum/forum-3.html)
+--- Forum: hashcat (https://hashcat.net/forum/forum-45.html)
+--- Thread: PKZip Length Question (/thread-8434.html)
PKZip Length Question - DGS - 06-19-2019

I have an encrypted zip (using PKZip's encryption) that contains a single file with an unpacked length of 12,481,930 bytes and a packed length of 4,612,283 bytes. I cannot run the hash produced by zip2john (from JtR) through hashcat (with its new PKZip support) because, as I understand it, hashcat's PKZip support currently has a limit of 320 kilobytes (https://github.com/hashcat/hashcat/pull/2053). Is there any way around this that I don't know about?


RE: PKZip Length Question - philsmd - 06-22-2019

Short answer: not supported.

Long answer: it would actually be easy to support longer compressed data lengths with the current on-GPU inflate code (the decompressed length is even less problematic, because we "only" need to compute the crc32 checksum and never store the result). The real problems are the hash reading (fixed/maximum line length), the hash output when a hash is cracked or shown on the status screen (the same fixed maximum length problem), and the use of stack variables in some parts of the code (these should be heap allocations everywhere, because some operating systems/compilers do not allow storing that much data on the stack, i.e. there is a maximum byte length for a single stack variable/array). None of these problems is solved yet, and as you can see they are already highlighted/mentioned within the same pull request you posted above (e.g. the binary file reading approach instead of reading the hash lines with fgetl ()).


RE: PKZip Length Question - DGS - 06-22-2019

(06-22-2019, 09:57 AM)philsmd Wrote: short answer: not supported

Thanks, I thought there might have been a trick I didn't know about.
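Editor's note: the following is a minimal CPU-side sketch (plain Python, not hashcat's on-GPU inflate code) illustrating two points from the thread: checking whether a zip entry's *compressed* size exceeds the 320 KB limit mentioned in the question, and computing a crc32 over the *decompressed* stream chunk by chunk without ever buffering the full output, which is why philsmd says the decompressed length is the less problematic part. The file name "archive.zip", the member name, the password, the ASSUMED_LIMIT constant, and the chunk size are hypothetical values for illustration only.

```python
import zipfile
import zlib

ASSUMED_LIMIT = 320 * 1024  # assumed max compressed data length, per the PR linked above


def report_sizes(path: str) -> None:
    """Print packed/unpacked sizes and flag entries over the assumed limit."""
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            verdict = "exceeds limit" if info.compress_size > ASSUMED_LIMIT else "within limit"
            print(f"{info.filename}: packed={info.compress_size} "
                  f"unpacked={info.file_size} ({verdict})")


def streaming_crc32(path: str, member: str, pwd: bytes = None) -> int:
    """crc32 of the decompressed data, computed in chunks so nothing is stored."""
    crc = 0
    with zipfile.ZipFile(path) as zf:
        with zf.open(member, pwd=pwd) as fh:
            while True:
                chunk = fh.read(64 * 1024)  # hypothetical chunk size
                if not chunk:
                    break
                crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF


if __name__ == "__main__":
    report_sizes("archive.zip")  # hypothetical file name
    # print(hex(streaming_crc32("archive.zip", "file.bin", pwd=b"password")))
```

This only sanity-checks sizes locally and demonstrates the streaming-crc32 idea; it does not work around the limit in hashcat itself, which (per the answer above) would require changes to the hash parsing, hash output, and stack/heap handling.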