Posts: 1
Threads: 1
Joined: Sep 2013
Hi,
I have been fiddling around with different character sets in hashcat. When I use characters from the ?h charset, hashcat returns incorrect password values.
As an example:
Cracking an NTLM password: the password is êêêêê and the resulting NTLM hash is fdc960a5a41047a551af345d9a273293. When I run this through cudaHashcat-lite with the following arguments:
cudaHashcat-lite64.exe fdc960a5a41047a551af345d9a273293 --hash-type=1000 --pw-min=5 --pw-max=5 -1 ?h ?1?1?1?1?1
It returns the password as follows:
fdc960a5a41047a551af345d9a273293:ΩΩΩΩΩ
I feel like I am overlooking something basic.
Posts: 2,936
Threads: 12
Joined: May 2012
It's just the way your terminal is representing those characters.
Posts: 143
Threads: 9
Joined: Dec 2012
09-04-2013, 08:26 PM
(This post was last modified: 09-04-2013, 08:27 PM by magnum.)
Here's how to get it straight in this case.
Code:
$ echo ΩΩΩΩΩ | iconv -t cp437 | iconv -f cp1252
êêêêê
I always wondered how you tell hashcat what codepage to assume for input when converting to Unicode. What if the password was ккккк (Russian) or κκκκκ (Greek)? They look the same on screen but they are totally different in UTF-16. And like ΩΩΩΩΩ and êêêêê, each can be represented by 0xEA in some 8-bit codepage.
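To make the codepage ambiguity concrete, here is a small Python sketch (the codepage names are the ones Python's codecs use; this is just an illustration, not anything hashcat does): the single byte 0xEA decodes to a different character depending on which legacy codepage you assume.
Code:
for codepage in ("cp437", "cp1252", "cp1251", "iso8859_7"):
    print(codepage, b"\xea".decode(codepage))
# cp437     Ω
# cp1252    ê
# cp1251    к
# iso8859_7 κ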
Posts: 5,185
Threads: 230
Joined: Apr 2010
hashcat cheats for Unicode algorithms like NTLM: it just inserts zero bytes.
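Purely as an illustration (this is not hashcat's kernel code), a minimal Python sketch of that shortcut, assuming hashlib's MD4 is available in the local OpenSSL build: append a zero byte to every candidate byte and run MD4 over the result. For five 0xEA bytes it should reproduce the hash from the first post.
Code:
import hashlib  # hashlib.new("md4") needs an OpenSSL build that still ships MD4

def ntlm_zero_insert(candidate: bytes) -> str:
    """NTLM the shortcut way: pad each 8-bit byte with a zero byte, then MD4."""
    utf16ish = b"".join(bytes([b, 0]) for b in candidate)
    return hashlib.new("md4", utf16ish).hexdigest()

# Five 0xEA bytes: ê in cp1252, Ω in cp437 -- same bytes, so the same hash.
print(ntlm_zero_insert(b"\xea" * 5))  # expected: fdc960a5a41047a551af345d9a273293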
Posts: 179
Threads: 13
Joined: Dec 2012
09-05-2013, 08:27 PM
(This post was last modified: 09-05-2013, 08:28 PM by Kuci.)
(09-05-2013, 02:46 PM)atom Wrote: hashcat cheats for Unicode algorithms like NTLM: it just inserts zero bytes.
Oou, that doesn't sound like a proper solution
Could you please explain why you made it this way?
Posts: 5,185
Threads: 230
Joined: Apr 2010
All GPGPU cracking tools that support fast hashes do it this way, because it does no harm to the ASCII-based 95-character charset. Doing it this way gives a lot more speed, but it's hard to put a number on how much.
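As an informal check of the "no harm for ASCII" claim, a Python sketch (the helper name zero_insert is mine, not hashcat's): for printable ASCII the zero-insert shortcut and a real UTF-16LE encode produce identical bytes, and they only diverge once the input bytes aren't single-byte Latin-1.
Code:
def zero_insert(raw: bytes) -> bytes:
    """The shortcut: give every 8-bit input byte a zero high byte."""
    return b"".join(bytes([b, 0]) for b in raw)

ascii_pw = "P@ssw0rd!"
assert zero_insert(ascii_pw.encode("ascii")) == ascii_pw.encode("utf-16-le")

print(zero_insert("ê".encode("cp1252")).hex())  # ea00 -- matches 'ê'.encode('utf-16-le')
print(zero_insert("ê".encode("utf-8")).hex())   # c300aa00 -- not valid UTF-16 for ê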
Posts: 143
Threads: 9
Joined: Dec 2012
(09-05-2013, 08:27 PM)Kuci Wrote: Oou, that doesn't sound like a proper solution
Well, it is 100% proper for converting the full 8-bit ISO-8859-1 range to UTF-16. Just not for any other 8-bit encoding.
FWIW, I have a proper UTF-8 -> UTF-16 implementation on GPU (as well as conversion from some other codepages) in the NTLMv2 format in JtR. It was mostly an experiment. The trick is that it's only used when needed. It's trivial code but I'm sure it's slow - right now that format is slow anyway because it lacks password generation on the GPU.
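To illustrate the difference being described (the actual JtR kernel is OpenCL and not shown here), a hedged Python sketch of a proper conversion next to the zero-insert shortcut:
Code:
def to_utf16le(raw: bytes, encoding: str = "utf-8") -> bytes:
    """Proper conversion: decode from the real input encoding, re-encode as UTF-16LE."""
    return raw.decode(encoding).encode("utf-16-le")

raw = "ккккк".encode("utf-8")                      # Cyrillic к is two bytes in UTF-8
print(to_utf16le(raw).hex())                       # 3a043a043a043a043a04 -- correct UTF-16LE
print(b"".join(bytes([b, 0]) for b in raw).hex())  # d000ba00... -- what the shortcut would hash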
Posts: 5,185
Threads: 230
Joined: Apr 2010
You can cheat a bit with NTLM by using MD4 instead of NTLM and applying the tricks explained in rurapenthe's latest blog post:
http://www.rurapenthe.me/2013/09/crackin...guage.html
Just note that you'd push zero bytes whenever required. It's the idea that counts.
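To spell out the equivalence that trick relies on, a hedged Python check (the hashcat command-line details are in the linked post, not reproduced here): NTLM is MD4 over the UTF-16LE encoding of the password, so raw MD4 mode finds the same digest as long as the candidates already carry the UTF-16LE byte layout.
Code:
import hashlib  # hashlib.new("md4") needs an OpenSSL build that still provides MD4

# Zero high bytes for Latin-1 characters, non-zero high bytes for e.g. Cyrillic.
for pw in ("êêêêê", "ккккк"):
    candidate = pw.encode("utf-16-le")
    print(pw, candidate.hex(), hashlib.new("md4", candidate).hexdigest())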