Some remarks and maybe an idea for improvement
#1
After the usual default, wordlist, rule, and hybrid approaches I turn to the mask attack. Since ?l (same with ?u) goes through all characters randomly, with (almost) no rule, I thought this was not the optimal approach. The reason is that when we fish for a password that resembles the English language (I have three languages to worry about), it follows at least *some* linguistic rule. For example, by cycling through all characters (a-z) one by one in rockyou (for example) with the command

fgrep -o a rockyou.txt | wc -l

it finds different occurrence frequencies for those characters. By sorting the characters by frequency, I found that they follow the order "aeionrlstmcdyhubkgpjvfwzxq", from most frequent to least frequent. With only this finding I could throw away the last 3 characters (zxq), which would exclude only about 1.9% of the character occurrences while shrinking 2.088 x 10^11 (26^8) combinations down to 7.83 x 10^10 (23^8). That means the keyspace, and with it the cracking time, is reduced by about 62%. Then I can turn to some other mask (with the same approach) and later, when the most frequent characters are exhausted and the hashes are still not cracked, turn to the least frequent characters (or probably not, as it may take too much time).
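
As a side note, the whole ranking can be produced in one pass instead of one fgrep per letter; a minimal sketch, assuming GNU grep:

# print every a-z occurrence on its own line, count and sort by frequency,
# then join the letters back into one ranked string
grep -o '[a-z]' rockyou.txt | sort | uniq -c | sort -rn | awk '{printf "%s", $2} END {print ""}'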

An important thing I quickly realized is that the first character and the second character follow different rules. So after cycling through all characters in the first position with the command

cat rockyou.txt | cut -c1-1 | fgrep -o a | wc -l

I found a different picture - "msacbljtdpkrnghiefwvoyzuxq", again from most frequent to least frequent. The second-character frequency order is "aoeiurlhnmsytcdbpkjwqvf".
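
The same one-pass trick works per position, since cut -cN picks the Nth character of each line; a sketch for the first two positions:

# ranked frequency of the first character
cut -c1 rockyou.txt | grep -o '[a-z]' | sort | uniq -c | sort -rn | awk '{printf "%s", $2} END {print ""}'

# ranked frequency of the second character
cut -c2 rockyou.txt | grep -o '[a-z]' | sort | uniq -c | sort -rn | awk '{printf "%s", $2} END {print ""}'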

To cut a long story short, I started using a cheesy tactic: masks with longer lengths but a truncated character set (starting even from the first 14 characters), going from the most frequent characters to the least. -1 is the charset for the first position, -2 for the second position, and -3 for all other positions (a sketch of such a run is below). Then, after exhausting the keyspace (or cracking some hashes), I started adding less frequent characters (up to a maximum of 18-23). This is all manual and sometimes exhausting (adding characters one by one to an 8-length mask, where -3 occupies 6 slots, means I have to go through all the combinations by hand), but the time saved is big.
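
For reference, a run with the first 14 characters of each ranking above would look roughly like this (hashes.txt and -m 0 are placeholders for the actual hash list and hash mode):

# -1 = first position, -2 = second position, -3 = the remaining six positions
hashcat -a 3 -m 0 hashes.txt -1 msacbljtdpkrng -2 aoeiurlhnmsytc -3 aeionrlstmcdyh '?1?2?3?3?3?3?3?3'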

So here is the question. Is there a way to do this more automatically, where hashcat adds characters to the mask one by one without hitting duplicates? Well, trying to answer it myself, I turned to the brain solution, which is a little more automatic but still doesn't solve everything, and I had to set --brain-client-features 3, because with the default setting the custom character sets don't work well ("-1 m" on the first char only, then "-1 s", then "-1 ms" should be rejected 100%, but isn't). The brain also slows down the speed, depending on how many hashes I have.
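
The manual widening could at least be scripted; a rough bash sketch (placeholder file names, assuming a brain server is already running - add --brain-host/--brain-password as needed):

# the ranked charsets from the counts above
first="msacbljtdpkrnghiefwvoyzuxq"
second="aoeiurlhnmsytcdbpkjwqvf"
rest="aeionrlstmcdyhubkgpjvfwzxq"

# widen all three charsets step by step; the brain rejects candidates it has already seen
for n in $(seq 14 23); do
  hashcat -a 3 -m 0 hashes.txt --brain-client --brain-client-features 3 \
    -1 "${first:0:n}" -2 "${second:0:n}" -3 "${rest:0:n}" '?1?2?3?3?3?3?3?3'
done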

This approach, from the most frequent character set to the least, is comparable to a mask attack where we run the most frequent masks (ullllldd or lllllldd) before the less frequent ones, as in the sketch below.
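
That ordering is what a .hcmask file does for whole masks; a minimal sketch:

# masks are tried top to bottom, so put the most likely ones first
cat > masks.hcmask <<'EOF'
?u?l?l?l?l?l?d?d
?l?l?l?l?l?l?d?d
EOF

hashcat -a 3 -m 0 hashes.txt masks.hcmask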

So, what do you think?
#2
hashcat uses Markov chains by default (it's not random); the shipped table was trained with rockyou.txt (update: see below).

You can use --markov-hcstat2 and hcstat2gen (see https://github.com/hashcat/hashcat-utils....c#L21-L22 ) to generate your own Markov hcstat2 file/chains.
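
For example, something like this (a sketch; hcstat2gen.bin comes from hashcat-utils, and as far as I know the raw output has to be LZMA-compressed before hashcat will load it, like the shipped hashcat.hcstat2):

# build a raw Markov table from your own wordlist (dict on stdin, output file as argument)
./hcstat2gen.bin my.hcstat2_raw < mywordlist.txt

# compress it the way hashcat expects
lzma --compress --stdout my.hcstat2_raw > my.hcstat2

# use the custom chain in a mask attack
hashcat -a 3 -m 0 --markov-hcstat2 my.hcstat2 hashes.txt ?l?l?l?l?l?l?l?l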

Update: I just wanted to make sure that my sentence about "trained with rockyou.txt" was actually accurate and asked atom about it ... and indeed he confirmed that the newer version was generated from a newer, larger public hash leak than rockyou.txt (though he wasn't quite sure himself what the original dict for the new hcstat2 was). It doesn't matter too much, because you can create your own anyway. I still think it should be mentioned somewhere, and I will try to find out. Maybe somebody else remembers the original input for the current hashcat.hcstat2 off the top of their head.