Raking writeup
Credit on this first goes to atom for doing the hard work of sorting through all the crap rules; this rule wouldn't have existed without him.

https://github.com/evilmog/evilmog/wiki/...ated2.rule

So I'm pretty sure this hasn't been talked about much, so I'm finally doing a writeup about it. Hashcat has this lovely mode, -g, which generates random rules. Originally when it came out we had no control over the seed, and this was pre-open-source hashcat, but I digress.
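For reference, a bare-bones invocation looks something like this (the hash file, hash mode, and wordlist are just placeholders; --generate-rules-seed is the newer knob for pinning the seed if you want reproducible runs):

Code:
# generate 100,000 random rules on the fly and run them against words.txt
hashcat -a 0 -m 1000 -g 100000 --generate-rules-seed=42 hashes.txt words.txt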
Raking is the art of generating random rules when all else fails, and it's normally an attack of last resort. That said, if you have a massive idle cluster, it's also handy as a research project for making highly effective rules. The process works something like this:


1) Take a big wordlist like hashes.org or the Troy Hunt Have I Been Pwned list, or anything else with a lot of hashes; it can even be a collection of Active Directory passwords, I don't care. This is what you'll 'train' the raking process against

2) Set up an NFS share for all your cluster nodes to send output to (see the NFS sketch after this list)

3) Load all your candidate dictionaries into a directory

4) Set up a shell script that continually executes hashcat in a loop with -g 100000 (you may want to play with this number), --debug-mode=4 with --debug-file pointing at nfs/debug/$nodename, --outfile-format=2 to output straight plains, --outfile pointing at nfs/induct/$hostname, --loopback to force it to run inductions on new plains cracked by other cluster nodes, --induction-dir set to nfs/induct, and --potfile-disable so we can catch weird hits. Then feed it dicts/* for that node's dictionaries, set -w for your workload profile, etc. (see the loop sketch after this list)

5) Harvest the debug files for rules and wordlists; meanwhile, induction feeds cracked plaintexts back into the system to be attacked with more generated rules
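For step 2, the share itself is nothing special; a minimal sketch, assuming a server exporting /srv/rake and nodes mounting it at /nfs (the server name and paths are made up):

Code:
# on the NFS server, in /etc/exports (then run exportfs -ra):
/srv/rake *(rw,sync,no_subtree_check)

# on each cluster node ("nfsbox" is a placeholder hostname):
mkdir -p /nfs
mount -t nfs nfsbox:/srv/rake /nfs
mkdir -p /nfs/debug /nfs/induct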
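And for step 4, here's a rough sketch of the per-node loop, assuming the share is mounted at /nfs, NTLM hashes in hashes.txt, and your dictionaries in dicts/ (all placeholders; tune -g and -w to taste):

Code:
#!/bin/bash
# rake.sh - run on each node; $HOSTNAME keeps the debug and outfile per node
while true; do
    for dict in dicts/*; do
        hashcat -a 0 -m 1000 -w 3 \
            -g 100000 \
            --debug-mode=4 --debug-file=/nfs/debug/$HOSTNAME.debug \
            --outfile-format=2 --outfile=/nfs/induct/$HOSTNAME \
            --loopback --induction-dir=/nfs/induct \
            --potfile-disable \
            hashes.txt "$dict"
    done
done

Each node's cracks land in the shared induction directory, so every other node picks them up as fresh candidates on its next pass.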

Once all of this is done you just need to clean up the debug files; you may also want to change the seed on the generated rules between runs, but that should work. The debug file format in this case is baseword:rule:processed word, which means you can collect the basewords, collect the rules, and even the effective processed words for things like cutb.
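Pulling the three fields apart is simple enough; a sketch (the output filenames are made up, and note a literal ':' can appear inside a rule, so field splitting is a rough cut):

Code:
# split --debug-mode=4 output into basewords, rules, and processed words
cat /nfs/debug/*.debug | awk -F: 'NF >= 3 {
    print $1  > "basewords.txt"   # field 1: baseword
    print $NF > "processed.txt"   # last field: processed word, handy for cutb
    rule = $2                      # fields 2..NF-1: the rule itself,
    for (i = 3; i < NF; i++)       # rejoined in case it contained ":"
        rule = rule FS $i
    print rule > "rules-raw.txt"
}'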

You will want to run these through optimize_rules from hashcat-utils and take the top N by count to make an effective ruleset.
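The "top N by count" part is plain shell plumbing; a sketch, assuming the rules-raw.txt from above and an arbitrary cutoff of 50000:

Code:
# count duplicate rules across all debug files and keep the biggest hitters
sort rules-raw.txt | uniq -c | sort -rn | head -n 50000 \
    | sed 's/^ *[0-9]* //' > top.rule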

This is exactly how generated2.rule was made: 6 months of solid raking, only occasionally taking a node out of the pool to do other work. You can also use some of these techniques, like induction, to make nodes work across attacks by using different rule files and wordlists while adding in passwords cracked by other nodes. Say you have PRINCE going on one node and something else with an insane ruleset or a -g 100000 on another, and you want to test the new inductions against those rules: you can make a password cracking cluster exhibit emergent behavior.
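As a sketch of what such a mixed node could look like, assuming the same shared /nfs/induct from the loop above and some big proven ruleset (dive.rule ships with hashcat; the rest of the names are placeholders):

Code:
# a non-raking node: different attack, same shared induction directory,
# so plains cracked by the raking nodes get re-attacked with this ruleset
hashcat -a 0 -m 1000 -w 3 \
    -r rules/dive.rule \
    --outfile-format=2 --outfile=/nfs/induct/$HOSTNAME \
    --loopback --induction-dir=/nfs/induct \
    --potfile-disable \
    hashes.txt dicts/big.txt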

In corporate-style engagements, or ones where you are going for 90%+ cracked, this is another tool to add to your list. It's extremely inefficient, but I call it the infinite monkey theorem of password cracking.

