08-11-2023, 08:49 PM
(This post was last modified: 08-11-2023, 08:53 PM by rodrigo.Brasil.)
(08-11-2023, 06:35 PM)Snoopy Wrote: welcome to the hell of character encodings
your hex editor opened your file as UTF-8, which is why you see that hex. BUT take a look at this: NTLM uses UTF-16LE for character encoding, and here we go
Soo... The correct input was:
Code:
val = bytearray.fromhex("5000c300a10073007a0074006f0072005a007300320030003100")
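To convince myself of what Snoopy means, I made a tiny test (my own toy character, not the real value): the same text becomes completely different bytes depending on the encoding, so the hex editor view (UTF-8) and what NTLM actually hashes (UTF-16LE) don't line up.
Code:
# 'á' is U+00E1: two bytes in UTF-8, one 16-bit code unit in UTF-16LE
s = 'á'
print(s.encode('utf-8').hex())     # c3a1
print(s.encode('utf-16le').hex())  # e100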
Now I really don't understand. I know NTLM uses UTF-16LE, and yes, I was already encoding that way (I just didn't realize it in my first post). I posted only the core hash because of all these problems.
- From a UTF-8 input, does hashcat convert it to UTF-16LE, hash it and check?
- If it finds a match, does it convert back to UTF-8 and save that in the potfile?
- Did the hex editor change something? Does it show EXACTLY what I have in the file?
What do I need to know about encoding to not make mistakes? (One check I can do, sketched right below, is to read the raw bytes of the file myself instead of trusting the hex editor.)
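A minimal sketch of that check (the filename here is just a placeholder):
Code:
# binary mode, so the output does not depend on any editor's encoding choice
with open('wordlist.txt', 'rb') as f:   # placeholder filename
    print(f.read().hex())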
I ask because the same code didn't work for this case:
Code:
import hashlib
val = 'vascão.321'
hash = hashlib.new('md4', val.encode('utf-16le')).hexdigest()
print(hash)                                   # value I get
print("db4f8d140ddc16d9b77578d07f1e9782")     # value I want
So is this using another encoding? What am I missing?
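One thing I still want to check, though I'm not sure it's the cause: 'ã' can be stored precomposed (U+00E3) or as 'a' plus a combining tilde, and the two forms give different UTF-16LE bytes, so they produce different MD4/NTLM hashes. A small sketch of that check (my own debugging idea, nothing official):
Code:
import hashlib, unicodedata

val = 'vascão.321'
for form in ('NFC', 'NFD'):                 # precomposed vs decomposed 'ã'
    v = unicodedata.normalize(form, val)
    data = v.encode('utf-16le')             # the exact bytes MD4 sees
    print(form, data.hex(), hashlib.new('md4', data).hexdigest())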