Modifying hashcat's optimized kernel
So it sounds like, and correct me if I'm wrong here, you have a wordlist and some rules you are using for your attack. You want to limit the length of the passwords that get tested by modifying the kernel, instead of using the < or > rules, since those don't run on the GPU (or are really slow). In doing so, you would cut down on the total number of candidates you test, shrinking your keyspace and resulting in a faster attack overall. Right?

The problem with this is that you are assuming there is no penalty for kernel-side password rejections, or that it's significantly faster to reject a plain than it is to hash it. That assumption is incorrect. Rejections on the GPU are pretty much always going to be slow because the candidate has already made it into the buffer, so you will likely not save nearly as much time as you'd think by rejecting it at that stage. With the way threads are executed in parallel groups, any thread that rejects all of the candidates in its work chunk has to wait for the rest of the threads in its group to finish before it can get more work. You should be cutting out the words that are too long before they are passed to the GPU and placed in the buffer in the first place, as sketched below.
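To be concrete about what "cut them out beforehand" looks like, here is a minimal Python sketch, not anything hashcat ships, that trims a wordlist down to candidates that fit a length cap before hashcat ever loads it. The cap of 8 and the filenames are just placeholders for illustration:

    import sys

    MAX_LEN = 8  # assumed length cap for illustration; set it to whatever your target needs

    def filter_wordlist(src, dst, max_len=MAX_LEN):
        # Copy only the candidates that fit the cap so the GPU never sees the rest.
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            for line in fin:
                word = line.rstrip(b"\r\n")
                if len(word) <= max_len:
                    fout.write(word + b"\n")

    if __name__ == "__main__":
        # usage: python filter_wordlist.py wordlist.txt wordlist_short.txt
        filter_wordlist(sys.argv[1], sys.argv[2])

You then point hashcat at the trimmed list instead of the original, and no rejection ever has to happen on the device.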

Now, there's a problem with this method too, which I'm sure you've run into: processing all of those rules on the host before sending the candidates to the GPU is ALSO slow, otherwise you could just use -j to set a < or > rule. So you need to strike a balance. If your keyspace is small enough to store on disk, you could possibly save time by generating the whole thing, cutting out the words that are too long, and running that, but that's unlikely to help unless you are generating a LOT of words that are too long (the counting sketch below is one way to check). If your keyspace is not that small, then you will need to filter your wordlist and rules to cut down on passwords that are too long for your use case before running the attack, or accept the speed as it is. Hashcat already takes advantage of a speed boost for lists of words that are short enough. You can see that here: https://github.com/hashcat/hashcat/blob/...2981-L2991
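If you want to gauge whether pre-generating and trimming the keyspace is worth the disk space and time, a rough sketch like this (again, the cap of 8 is just an assumption) can count how much of the rule-expanded keyspace would get thrown away. Feed it the output of hashcat's --stdout mode with your rule file:

    import sys

    MAX_LEN = 8  # assumed length cap for illustration

    def main():
        # Reads expanded candidates from stdin and reports how many exceed the cap,
        # e.g.: hashcat --stdout -r my.rule wordlist.txt | python count_long.py
        total = kept = 0
        for line in sys.stdin.buffer:
            total += 1
            if len(line.rstrip(b"\r\n")) <= MAX_LEN:
                kept += 1
        rejected = total - kept
        pct = 100.0 * rejected / total if total else 0.0
        print(f"total={total} kept={kept} rejected={rejected} ({pct:.1f}% too long)")

    if __name__ == "__main__":
        main()

If only a small fraction comes back as too long, the pre-filtering work probably won't buy you much.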

In an ideal scenario, you could generate your whole keyspace, cut out the words that are too long, and order the remaining words into chunks that fit the buffer sizes hashcat uses, putting the faster candidates first in the keyspace. If you order and chunk your keyspace this way, starting with the shorter passwords, you will theoretically clear the entire keyspace faster than if the lengths were distributed randomly through a single file. This is what I meant by "keyspace ordering," but in a more general sense, because length is not the only feature you can use to increase overall attack speed. You can also order a keyspace by likelihood based on external metrics, cracking the majority of what you are going to crack earlier in the attack and reducing overall attack time. This is what we do with the markov chains used by mask attacks: more likely candidates go earlier in the keyspace, less likely ones at the end. Now, other bottlenecks exist, but the concept remains the same. Using rules that create more work without making the passwords significantly longer can be a great way to increase utilization AND recover significant speed, avoiding a number of slowdowns. A minimal length-ordering sketch follows.
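As a rough illustration of length-based ordering (not hashcat's internals, just a sketch that assumes the whole keyspace fits in memory), you could bucket the candidates by length and write the shorter ones out first:

    import sys
    from collections import defaultdict

    def order_by_length(src, dst):
        # Group candidates by length, then emit the shortest groups first so the
        # fastest chunks of the keyspace run at the start of the attack.
        buckets = defaultdict(list)
        with open(src, "rb") as fin:
            for line in fin:
                word = line.rstrip(b"\r\n")
                buckets[len(word)].append(word)
        with open(dst, "wb") as fout:
            for length in sorted(buckets):
                for word in buckets[length]:
                    fout.write(word + b"\n")

    if __name__ == "__main__":
        # usage: python order_by_length.py keyspace.txt keyspace_ordered.txt
        order_by_length(sys.argv[1], sys.argv[2])

For a keyspace that doesn't fit in memory, an external sort on length, or splitting into per-length files, does the same job.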

If I got my understanding of the problem wrong, then there may still be room for improvement in your attack via a kernel mod. But as far as using the kernel to reject candidates goes, that's already too late to gain significant speed, and your effort would be far better spent dealing with the keyspace _prior_ to loading. As for trying to speed up MD5 by hashing less data, hashcat is already doing that about as fast as makes sense, so there's not much room there either, at least within hashcat.