7z2hashcat
#21
Hi!
 
Thank you! Excellent work!
 
Here are the results of version 0.9 on my file set (16 files):

14/16 are OK!

The wrong hashes and this error are gone! Wonderful!

Code:
Wide character in Compress::Raw::Lzma::Decoder::code input parameter at script/7z2hashcat.pl line 585.

2/16: these files still have a similar error:

Code:
WARNING: the file 'qwerty.7z' unfortunately can't be used with hashcat since the data length
in this particular case is too long (SIZE of the maximum allowed 8192 bytes) and it can't be truncated.
This should only happen in very rare cases.

One of these files is ~250 MB with around 8k archived files.
The other is ~500 MB with 500 archived files.
The SIZE value is very close to the whole archive size (smaller by 500 kB in the first case and by 8 kB in the second).


P.S. One of these files shows the same error with every version from 0.4 up to 0.9;
the other file shows this output:

Code:
7z2hashcat-0.4.exe qwerty.7z
Wide character in Compress::Raw::Lzma::Decoder::code input parameter at script/7z2hashcat.pl line 585.
 
7z2hashcat-0.5.exe qwerty.7z
WARNING: the LZMA header decompression for the file 'qwerty.7z' failed with status: 'Data is corrupt'
INFO: for some reasons, for large LZMA buffers, we sometimes get a 'Data is corrupt' error.
      This is a known issue of this tool and needs to be investigated.
      The problem might have to do with this small paragraph hidden in the 7z documentation (quote):
      'The reference LZMA Decoder ignores the value of the "Corrupted" variable.
       So it continues to decode the stream, even if the corruption can be detected
       in the Range Decoder. To provide the full compatibility with output of the
       reference LZMA Decoder, another LZMA Decoder implementation must also
       ignore the value of the "Corrupted" variable.'
      (taken from the DOC/lzma-specification.txt file of the 7z-SDK: see for instance:
       [url=https://github.com/jljusten/LZMA-SDK/blob/master/DOC/lzma-specification.txt#L343-L347]https://github.com/jljusten/LZMA-SDK/blob/master/DOC/lzma-specification.txt#L343-L347[/url])
 
7z2hashcat-0.7.exe qwerty.7z
WARNING: the file 'qwerty.7z' unfortunately can't be used with hashcat since the data length
in this particular case is too long (SIZE of the maximum allowed 384 bytes) and it can't be truncated.
This should only happen in very rare cases.
 
7z2hashcat-0.9.exe qwerty.7z
WARNING: the file 'qwerty.7z' unfortunately can't be used with hashcat since the data length
in this particular case is too long (SIZE of the maximum allowed 8192 bytes) and it can't be truncated.
This should only happen in very rare cases.


Thanks!
Reply
#22
Thanks for the feedback.

So if I understood you correctly, everything works correctly now.

The only "problem" you experienced, in a very rare situation, is the upper limit on the data size (8 KiB of compressed data). This seems to be a sane value, and if the data is compressible and of average file size, that limit should never be reached.

Anyway, an advanced user should be able to raise that limit above the current 8 KiB with a small source code fix, simply by increasing these values (at his/her own risk; the risk is that hashcat might use much more RAM, depending on the value chosen).

Modify the parsing function in src/interface.c:
Code:
https://github.com/hashcat/hashcat/blob/922fea7616255b3cc3603cebd5f3c0bb00654668/src/interface.c#L11394
Increase the memory buffer (divided by 4 because of the u32 data type) within the hook function in src/interface.c:
Code:
https://github.com/hashcat/hashcat/blob/922fea7616255b3cc3603cebd5f3c0bb00654668/src/interface.c#L14325
Increase the value of DISPLAY_LEN_MAX_11600 (times 2 because of the hex encoding) in include/interface.h:
Code:
https://github.com/hashcat/hashcat/blob/922fea7616255b3cc3603cebd5f3c0bb00654668/include/interface.h#L1104
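The arithmetic behind those three edits can be sketched as follows. This is not hashcat source; the dictionary keys below are hypothetical labels (DISPLAY_LEN_MAX_11600 is a real constant per the links above, the other name is just illustrative), and it only shows the "times 2 for hex" and "divided by 4 for u32" relationships:

```python
def derived_limits(data_limit_bytes):
    """Given a raw data limit in bytes, return the two values that
    would have to change together with it in hashcat's source."""
    return {
        # include/interface.h: the data part of the hash line is
        # hex-encoded, so it needs twice as many characters as bytes
        "DISPLAY_LEN_MAX_11600_data_part": data_limit_bytes * 2,
        # src/interface.c hook: the buffer is declared in u32 words,
        # i.e. the byte count divided by 4
        "hook_buffer_u32_words": data_limit_bytes // 4,
    }

# Doubling the current 8 KiB (8192 byte) limit to 16 KiB:
print(derived_limits(16384))
# {'DISPLAY_LEN_MAX_11600_data_part': 32768, 'hook_buffer_u32_words': 4096}
```

So whatever new byte limit you pick, keep the hex-length constant at 2x and the u32 buffer at 1/4 of it, or the parser and the hook will disagree.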

Thanks
Reply
#23
(02-22-2017, 05:49 PM)philsmd Wrote: Thanks for the feedback.

So if I understood you correctly, everything works correctly now.

The only "problem" you experienced, in a very rare situation, is the upper limit on the data size (8 KiB of compressed data). This seems to be a sane value, and if the data is compressible and of average file size, that limit should never be reached.

Anyway, an advanced user should be able to raise that limit above the current 8 KiB with a small source code fix, simply by increasing these values (at his/her own risk; the risk is that hashcat might use much more RAM, depending on the value chosen).

Modify the parsing function in src/interface.c:
Code:
https://github.com/hashcat/hashcat/blob/922fea7616255b3cc3603cebd5f3c0bb00654668/src/interface.c#L11394
Increase the memory buffer (divided by 4 because of the u32 data type) within the hook function in src/interface.c:
Code:
https://github.com/hashcat/hashcat/blob/922fea7616255b3cc3603cebd5f3c0bb00654668/src/interface.c#L14325
Increase the value of DISPLAY_LEN_MAX_11600 (times 2 because of the hex encoding) in include/interface.h:
Code:
https://github.com/hashcat/hashcat/blob/922fea7616255b3cc3603cebd5f3c0bb00654668/include/interface.h#L1104

Thanks

Not to dig up an ancient thread, but any luck with getting large files decrypted?

Meaning: I have a 92 megabyte file; when I check its contents, it's 810 megabytes uncompressed.
The unfortunate error I receive is:

$ ./7z2hashcat.pl QUEEN.7z 
WARNING: the file 'QUEEN.7z' unfortunately can't be used with hashcat since the data length
in this particular case is too long (92450992 of the maximum allowed 327528 bytes).

Or am I doing something wrong? Any help would be appreciated, or a pointer to the right spot.

I tried this both with the 64-bit Windows binary and a Cygwin Perl version. Both resulted in the same issue.

Thanks
Reply
#24
Is the encrypted blob necessary for using hashcat to discover the password used to encrypt? E.g., can I send off everything needed to attack the password without giving them the actual file? Or does 7-Zip hash the password *with* the file itself?

Thanks!
Reply
#25
The original 7-zip file (*.7z) is not needed to recover the password.
But the entire output of 7z2hashcat is needed (the whole hash).

Yes, as mentioned here: https://github.com/philsmd/7z2hashcat#se...ta-warning some hashes generated by 7z2hashcat could in theory contain (encrypted and sometimes compressed) sensitive data.
Unfortunately, this is how the algorithm used by 7-Zip works: it needs to create a CRC32 checksum of the data. In general, it's only the checksum of one file (usually "the first" file).

7z2hashcat tries to output the smallest amount of bytes possible (i.e. only those bytes that are really needed). The output of 7z2hashcat does not contain any extra bytes or data that could also be skipped/ignored (everything within the hash is needed for hashcat).
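Since the password check boils down to comparing a standard CRC32 of the decrypted first file's data against the stored checksum, the verification step can be illustrated with a minimal sketch (the file contents and the flow here are hypothetical, not 7-Zip's actual implementation):

```python
import zlib

# Hypothetical decrypted (and, if needed, decompressed) contents of
# the first archived file, as obtained after trying a password guess
decrypted = b"hello"

# 7-Zip stores the expected CRC32 of the uncompressed file data;
# a password guess is confirmed when the checksums match
expected_crc = 0x3610A686  # CRC32 of b"hello"

# mask to an unsigned 32-bit value for a portable comparison
assert zlib.crc32(decrypted) & 0xFFFFFFFF == expected_crc
print("password guess verified")
```

This is also why the hash can expose data: whoever cracks it necessarily decrypts the bytes whose checksum is being verified.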
Reply
#26
(11-21-2017, 12:30 PM)philsmd Wrote: The original 7-zip file (*.7z) is not needed to recover the password.
But the entire output of 7z2hashcat is needed (the whole hash).

Yes, as mentioned here: https://github.com/philsmd/7z2hashcat#se...ta-warning some hashes generated by 7z2hashcat could in theory contain (encrypted and sometimes compressed) sensitive data.
Unfortunately, this is how the algorithm used by 7-Zip works: it needs to create a CRC32 checksum of the data. In general, it's only the checksum of one file (usually "the first" file).

7z2hashcat tries to output the smallest amount of bytes possible (i.e. only those bytes that are really needed). The output of 7z2hashcat does not contain any extra bytes or data that could also be skipped/ignored (everything within the hash is needed for hashcat).

Thanks a lot for the explanation! So if it's one file that was compressed, then the contents of that file are accessible to anyone who cracks the hash, I guess. Good to know.
Reply
#27
What's the actual problem? Does hashcat not load the hash?
Reply
#28
Hello! I've got some error:
user@test111:~$ perl 7z2hashcat.pl SQ5_SQ5V_Electrical_System.7z
WARNING: the file 'SQ5_SQ5V_Electrical_System.7z' unfortunately can't be used with hashcat since the data length
in this particular case is too long (30196848 of the maximum allowed 327528 bytes).

There is my file: [removed] (28.8 MB)

What should I do? Thank you.
---------------------
Translated with Google Translate (c)
Reply
#29
Hi,

You could try to chop off the first 327528 bytes, as described in 7z2hashcat.pl,
and then use the script on the new *file*.


Code:
# This field is the first field after the hash signature (i.e. after "$7z$").
# Whenever the data was longer than the value of PASSWORD_RECOVERY_TOOL_DATA_LIMIT and the data could be truncated due to the padding attack,
# the value of this field will be set to 128.
#
# If no truncation is used:
# - the value will be 0 if the data doesn't need to be decompressed to check the CRC32 checksum
# - all values different from 128, but greater than 0, indicate that the data must be decompressed as follows:
#   - 1 means that the data must be decompressed using the LZMA1 decompressor
#   - 2 means that the data must be decompressed using the LZMA2 decompressor
#   - 3 means that the data must be decompressed using the PPMD decompressor
#   - 4 means that the data must be decompressed using the BCJ decompressor
#   - 5 means that the data must be decompressed using the BCJ2 decompressor
#   - 6 means that the data must be decompressed using the BZIP2 decompressor
#   - 7 means that the data must be decompressed using the DEFLATE decompressor

# Truncated data can only be verified using the padding attack, and therefore combinations of truncation + a compressor are not allowed.
# Therefore, whenever the value is 128 or 0, neither the coder attributes nor the length of the data for the CRC32 check are within the output.
# On the other hand, for all values greater than or equal to 1 and smaller than 128, both the coder attributes and the length for the CRC32 check are in the output.
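The field described above can be read mechanically from the hash line. Here is a small sketch that interprets it, following the comment block quoted above; the example hashes are made up and truncated (only the leading fields matter here), so don't take them as real 7z2hashcat output:

```python
# Mapping of the field value to the decompressor, per the comments above
DECOMPRESSORS = {
    1: "LZMA1", 2: "LZMA2", 3: "PPMD",
    4: "BCJ", 5: "BCJ2", 6: "BZIP2", 7: "DEFLATE",
}

def describe_data_field(hash_line):
    """Interpret the first field after the "$7z$" signature."""
    if not hash_line.startswith("$7z$"):
        raise ValueError("not a 7z2hashcat hash")
    # the fields are separated by '$'; take the first one after "$7z$"
    value = int(hash_line[len("$7z$"):].split("$", 1)[0])
    if value == 128:
        return "truncated (padding-attack verification only)"
    if value == 0:
        return "no decompression needed for the CRC32 check"
    if 1 <= value < 128:
        return "decompress with " + DECOMPRESSORS.get(value, "unknown coder")
    raise ValueError("unexpected field value: %d" % value)

print(describe_data_field("$7z$2$19$0$..."))    # decompress with LZMA2
print(describe_data_field("$7z$128$19$0$..."))  # truncated (padding-attack verification only)
```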
Reply