As you can see by looking into the charsets folder of hashcat, Russian characters can be represented with different encodings (cp1251, ISO-8859-5, KOI8-R, etc).
So the first problem is that all of these input types (input encodings) need to be tested, not just one of them.
The second problem is that NTLM is kind of special since it uses UTF-16 characters within the hashing algorithm. One limitation of hashcat is that it cheats a little bit and doesn't do the UTF-16 conversion completely/correctly: it just sets the second byte to zero, since that is almost always correct for input bytes in the 0x00-0xff range. But as soon as we get input characters whose code points are above 0xff, this approach fails and hashcat can't crack them.
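To illustrate the shortcut (a minimal Python sketch of the idea, not hashcat's actual code; the word "пароль" is just a hypothetical Cyrillic candidate):

```python
# Sketch of the shortcut: pad each input byte with 0x00 instead of
# doing a real UTF-16 conversion.
def fake_utf16le(data: bytes) -> bytes:
    return b"".join(bytes([b, 0x00]) for b in data)

# For code points <= 0xff the shortcut matches real UTF-16LE:
ascii_pw = "secret"
assert fake_utf16le(ascii_pw.encode("latin-1")) == ascii_pw.encode("utf-16le")

# For Cyrillic input (code points > 0xff) it does not, so the
# resulting NTLM hash would never match:
ru_pw = "пароль"
assert fake_utf16le(ru_pw.encode("cp1251")) != ru_pw.encode("utf-16le")
```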
Here is a test showing how to crack this particular example anyway (hashes masked):
Code:
./hashcat -m 900 -a 3 -i --hex-charset -1 04354045 c27xxxa2172xxxcced3fdxxxxd8x19 ?1?1?1?1?1?1?1?1
c27xxxa2172xxxcced3fdxxxxd8x19:$HEX[450435044004]
I converted it like this:
Code:
echo d185d0b5d180 | xxd -r -p | iconv -f utf-8 -t utf-16le | xxd -p
450435044004
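The same conversion can be done in Python instead of iconv (a small sketch; "d185d0b5d180" is the UTF-8 hex of the candidate word, as in the command above):

```python
# UTF-8 hex -> UTF-16LE hex, equivalent to the iconv pipeline above
utf8_hex = "d185d0b5d180"
word = bytes.fromhex(utf8_hex).decode("utf-8")
utf16_hex = word.encode("utf-16le").hex()
print(utf16_hex)  # 450435044004
```

Going the other way, `bytes.fromhex("450435044004").decode("utf-16le")` recovers the plaintext from the $HEX[...] output.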