7z2hashcat Output | What is this?
#1
Hello,

After trying for days to get a hash out of an encrypted 7z file, I came across 7z2hashcat
(https://github.com/philsmd/7z2hashcat)

At first it was throwing a memory-limit error, so I decided to increase this limit in the .pl file (x4).

It worked and produced an output, but I have no idea what to do with it.

The output, when saved to a TXT file, is 1 MB.

Starts with:
94eb58af7a8d3df82f25416dbbbe767976d32b8363dc048cfbdecbf82b5f517273cc1019c42be0aa60d7b6a0b52378d70acc4a926145c0562281eb21f509296d0a2587b6d7f343deb67ec8d1710193d55b65bbd06dc85aab3eb53464d39daf8b1961995c6585057031a42275a7e2e4a76c38e9571a84b65d90f36a19d7945f6b166433796d8bab30c72f6e


And ends with:
a13e93965ae9a856a8cc2350a3bd5907006ab7ee85317a632f927ea64de6930b485f075152af3316ca1b8f2000ac5a41093586a26c0d4b3dd478cdf080f6647a9e07100e62e432d7a4ef08891d5ece45dad0922d070fb4e29f9cc7f8e86f4e$388795692$17

Did I do something wrong? I'm a total noob at this, just starting to learn what's going on.
Your help is highly appreciated!

Love to all, and respect to all of you who make the world a better place by breaking it :)

PS: I can upload the file, no problem, but I don't know if that's against the rules or anything.
#2
Well, the two data-length bounds in 7z2hashcat and hashcat are kept in sync... therefore, if you increase one (in the 7z2hashcat Perl file), you also need to increase the other (in the hashcat source code).
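
Conceptually, the situation looks like this (a sketch with made-up names and values, not the real identifiers from either project):

Code:
/* producer side: conceptually, the bound 7z2hashcat checks before it
   emits a hash line (in the real tool this lives in the Perl script) */
#define TOOL_MAX_DATA_LEN    (16 * 1024 * 1024)  /* hypothetical value */

/* consumer side: conceptually, the bound hashcat's -m 11600 parser
   enforces when it reads that hash line back in */
#define CRACKER_MAX_DATA_LEN (16 * 1024 * 1024)  /* must be raised together */

/* if only TOOL_MAX_DATA_LEN is raised, 7z2hashcat happily writes a hash
   line that hashcat then rejects with a line-length exception */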

Changing hashcat might not be that easy; I've addressed this dozens of times already, here in the forum and over on the 7z2hashcat and hashcat GitHub issue trackers, e.g. https://github.com/philsmd/7z2hashcat/is...ata+length+

But it's definitely possible up to some reasonable values (some dozens of MB should certainly be possible; beyond that it gets even more complicated, because of hashcat's design/architecture for reading hash lines, etc.).

Just have a look at the 7z2hashcat forum posts here, the hashcat issue, and the 7z2hashcat issue, and you should be able to figure out what the problem is and how to solve it.
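
By the way, the output you got is the hash line itself; it's just a very long one. Roughly, the field layout is the following (see the 7z2hashcat README for the authoritative description; your paste simply starts somewhere in the middle of the encrypted-data field):

Code:
$7z$[type]$[iterations]$[salt length]$[salt]$[iv length]$[iv]$[crc32]$[encrypted data length]$[decrypted data length]$[encrypted data as hex]$[crc data length]$[coder attributes]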
#3
(12-17-2019, 09:47 AM)philsmd Wrote: Well, the two data-length bounds in 7z2hashcat and hashcat are kept in sync... therefore, if you increase one (in the 7z2hashcat Perl file), you also need to increase the other (in the hashcat source code).

Changing hashcat might not be that easy; I've addressed this dozens of times already, here in the forum and over on the 7z2hashcat and hashcat GitHub issue trackers, e.g. https://github.com/philsmd/7z2hashcat/is...ata+length+

But it's definitely possible up to some reasonable values (some dozens of MB should certainly be possible; beyond that it gets even more complicated, because of hashcat's design/architecture for reading hash lines, etc.).

Just have a look at the 7z2hashcat forum posts here, the hashcat issue, and the 7z2hashcat issue, and you should be able to figure out what the problem is and how to solve it.

Thank you so much! I found your post that states the following:

Quote:Modify the parsing function in src/interface.c:
Code:
https://github.com/hashcat/hashcat/blob/...e.c#L11394
Increase the memory buffer (divided by 4 because of the u32 data type) within the hook function in src/interface.c:
Code:
https://github.com/hashcat/hashcat/blob/...e.c#L14325
Increase the value of DISPLAY_LEN_MAX_11600 (times 2 because of the hex encoding) in include/interface.h:
Code:
https://github.com/hashcat/hashcat/blob/...ce.h#L1104
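
So, if I pieced it together correctly, the three changes amount to something like the following (my own sketch; NEW_DATA_LIMIT is a placeholder for whatever value is needed, and the real code behind those links differs in detail):

Code:
/* include/interface.h: maximum accepted hash line length for -m 11600,
   counted in hex characters, hence 2x the binary data size */
#define DISPLAY_LEN_MAX_11600 (NEW_DATA_LIMIT * 2)

/* src/interface.c, parsing function: the bound that the data-length
   field of the hash line is checked against */
if (data_len > NEW_DATA_LIMIT) return (PARSER_HASH_LENGTH);

/* src/interface.c, hook function: the buffer holding the data is
   declared as u32 words, hence the division by 4 */
u32 data_buf[NEW_DATA_LIMIT / 4];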

I have no experience whatsoever with compiling from source, but I'll find my way around it. I've been dealing with this for a couple of days, and it feels nice to have some guidance.

If you have any other advice, or an already-compiled version for Windows with size "200000000" (yes, this is hex, a very close approximation of what I need), it would be really appreciated.

If you have a tutorial for noobs on how to build an executable from source on Windows, that would also save my life.

Thanks Phil, you're a hell of a programmer.
#4
I'm not sure what the 200000000 number means. I think you mean that this is double the binary data size (because of the hexadecimal encoding), i.e. 200000000 / 2 = 100000000 bytes should be accepted... but that is still very, very huge: almost 100 MB.
Normally, if you say 200000000 is hex, it would be 0x200000000 = 8 GB... that's a huge difference.

Both are very huge numbers... just think about it: a hash line with 100 MB of data is just an absurdly long line in a file (even if you have several JPG pictures, they won't be hundreds of MB... in one line).
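
To make the two readings concrete, here is a quick sanity check (just illustrative arithmetic, not part of any patch):

Code:
#include <stdio.h>
#include <inttypes.h>

int main (void)
{
  // reading "200000000" as a decimal count of hex characters:
  // two hex characters encode one byte of binary data
  uint64_t as_hex_chars = 200000000ULL / 2;  // 100,000,000 bytes, ~95 MiB

  // reading "200000000" as a hexadecimal value:
  uint64_t as_hex_value = 0x200000000ULL;    // 8,589,934,592 bytes, 8 GiB

  printf ("as hex characters: %" PRIu64 " bytes\n", as_hex_chars);
  printf ("as hex value:      %" PRIu64 " bytes\n", as_hex_value);

  return 0;
}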
#5
(12-18-2019, 09:53 AM)philsmd Wrote: I'm not sure what the 200000000 number means. I think you mean that this is double the binary data size (because of the hexadecimal encoding), i.e. 200000000 / 2 = 100000000 bytes should be accepted... but that is still very, very huge: almost 100 MB.
Normally, if you say 200000000 is hex, it would be 0x200000000 = 8 GB... that's a huge difference.

Both are very huge numbers... just think about it: a hash line with 100 MB of data is just an absurdly long line in a file (even if you have several JPG pictures, they won't be hundreds of MB... in one line).

This number was the minimum that 7z2hashcat accepted; I was trying bit by bit, and that was the first one that actually worked.

Now I tried again, and the minimum value admitted is close to 0xA2179A0 (about 170 million hex characters, i.e. roughly 85 MB of binary data).

I still need to edit hashcat to allow this size. I still don't know how, but I'll find my way around.

Thank you so much, Phil.
#6
Well, that patch you mentioned is pretty old; there weren't even hashcat modules in the hashcat source code back then (then again, a hashcat version with modules hasn't been released yet either, so the patch is both from a long time ago and still similar to the current releases, because the last hashcat release was a while back).

I think I have a more up-to-date "patch" somewhere, used as a proof of concept for another hashcat user (who PMed me a while back), but I don't intend to release it to the general public: it's probably not a very good idea to use it in general, or to merge it into hashcat, because that would mean much higher memory consumption etc. for the average hashcat user (much larger buffers, using heap instead of stack, etc.)... all just for some very, very rare/special scenarios to make -m 11600 work (which of course not every user is interested in or needs).
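
To illustrate the heap-instead-of-stack point (a sketch of the general problem, not actual code from my patch):

Code:
#include <stdlib.h>

#define BIG_DATA_LIMIT (100 * 1024 * 1024)  /* ~100 MB, hypothetical */

void hook_sketch (void)
{
  /* u32 data_buf[BIG_DATA_LIMIT / 4];
     a stack array this large would overflow the thread stack
     (typically only 1-8 MB) the moment the function is entered */

  unsigned int *data_buf = (unsigned int *) malloc (BIG_DATA_LIMIT);

  if (data_buf == NULL) return;

  /* ... process the decrypted 7-Zip data here ... */

  free (data_buf);
}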

If you can't figure it out yourself (I'd definitely advise using the latest code on git, with module support), I might help a little bit with my patch/POC.