Distributed hashcat
#1
I just love my hashcat, but I suffer from really slow hardware.

On one box (XP) I get around 5M/s, on the next one (Linux) 2.5M/s, and on the last one (Win 7) about 3M/s (plain old vanilla MD5). That gives me peaks of 10-10.5M/s in total, but most of the time just 8M/s.

So far I have played with more or less advanced rules and the same output file to "simulate" a distributed cracking cluster (actually, I split Deadone's 10,000-line rule file into equal parts).

Is there any way to make hashcat work as a distributed cluster for real?

The feature list for hashcat says: "Able to work in a distributed environment." Does that mean you must have a true distributed environment, or can I take some random hardware (like my slow and crappy boxes) and make it work in a distributed mode?

OR are there some more advanced rulesets for running more than one hashcat instance in parallel? Or even better, is there some way to make brute force cover just a part of the keyspace? In my case I would love to split the bf keyspace into 4 parts (2 for my fastest box and 1 each for the slower boxes).

As always, many (newbie) questions from me ;-)

And now I want to give some credit to the hashcat team. So far I've been able to crack more hashes with my 3 crappy boxes than I ever did with JtR (same hashfiles and dicts). On some hashfiles I got more than 80% in just a few hours with HC. With JtR I was lucky to get 50-60% in a month or so. The only difference is that I run my 3 crappy boxes in a pseudo-distributed mode and apply better rules.



(And yes, I'm saving money to build an NVIDIA-based box for oclHashcat, because I would really love to crack hashes at supercomputer speeds ;-)
#2
ksp, the -l option is simply a way of telling hashcat to do only so many words from a wordlist. It was added to give people the ability to script a distributed-style setup. Let's use your machine speeds as an example.

Here are your speeds:
#1 5M/s
#2 2.5M/s
#3 3M/s

Given that you want to use a dictionary of 100 words, each machine's share is its speed divided by the combined 10.5M/s:
#1 5 / 10.5 ≈ 47.6%
#2 2.5 / 10.5 ≈ 23.8%
#3 3 / 10.5 ≈ 28.6%

So to distribute the load evenly, so that they all finish at roughly the same time, you can calculate the -s and -l values for each machine:
#1 -l 48 <-- -s is not needed; it starts from the first word
#2 -s 48 -l 24 <-- skips the first 48 words, then does the next 24 (words 49-72)
#3 -s 72 <-- -l is not needed; it runs to the end at word 100

Basically, the -l option keeps a machine from doing work that another machine is already doing. In the example above, if #1 did not have -l 48 set, it would continue all the way to word 100 even though machines #2 and #3 are already covering that range. A crafty person could script the whole process.
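For anyone who wants to script it, here is a minimal sketch of that calculation in Python. It assumes -s skips a number of words and -l limits the number of words processed (a count, not a stop position); split_wordlist is just a made-up helper name, and the speeds are the example numbers from this thread.

    # Split a wordlist across machines in proportion to their speeds.
    # Assumption: hashcat's -s skips N words and -l processes N words.
    def split_wordlist(total_words, speeds):
        """Return one (skip, limit) pair per machine."""
        total_speed = sum(speeds)
        shares = []
        skip = 0
        for i, speed in enumerate(speeds):
            if i == len(speeds) - 1:
                # the last machine takes the remainder, so no word is missed
                limit = total_words - skip
            else:
                limit = round(total_words * speed / total_speed)
            shares.append((skip, limit))
            skip += limit
        return shares

    # machines #1-#3 at 5M/s, 2.5M/s and 3M/s, with a 100-word dictionary
    for n, (skip, limit) in enumerate(split_wordlist(100, [5.0, 2.5, 3.0]), 1):
        print(f"machine #{n}: -s {skip} -l {limit}")

With the thread's numbers this prints -s 0 -l 48, -s 48 -l 24 and -s 72 -l 28, matching the split above (any rounding remainder goes to the last machine).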
#3
Wow!!

Exactly what I need!!!


PS: your 10,000-line rule file really rocks!! Divided evenly across my slow boxes it does magic ;-) In just a few hours I'm at 70-80% on almost any random (plain old vanilla MD5) hashfile in my "collection".