Custom mask generator
#1
For starters, I'm on a Mac, if that matters. I recently learned about masks, but I'm finding their usage a bit limiting. I know I can specify a min-max length for the passwords, but is there a way I can define something like:

1 - password length in range: 6,10
2 - number of lowercase letters: 4,8
3 - number of digits: 0,3
4 - number of uppercase letters: 0,1
5 - number of special characters: 0,2

I know that I can manually create permutations of mask lists that would produce something like that, but I can see two problems with this approach:

1 - with multiple masks, some strings could end up being repeated, and I'm not sure whether hashcat automatically skips repeated candidates
2 - the generated masks likely wouldn't be ordered from most to least likely to match real passwords, which would slow the process down a lot.

So is there a way to simply specify these parameters, or would I have to "brute force" my way through by creating as many custom masks as possible? Also, as a side question, is there a way to make hashcat pick the "most likely" passwords in order WHEN USING MULTIPLE MASKS? From what I understand, once a mask starts it runs until it's exhausted; I wonder whether there is a way to alternate between masks, trying the most likely candidates across the combined character space of all the masks.

Sorry if this was a bit confusing; I'm happy to clarify anything if need be. Thank you for your time (and Merry Christmas!)
#2
Most of what you're looking to do can be accomplished with the PACK toolkit:

https://github.com/iphelix/pack

The 'policygen' tool can help generate masks with arbitrary policies (and they won't overlap):

https://github.com/iphelix/pack/blob/mas...licygen.py

As to sorting them by likely frequency, that can also be accomplished against a given corpus using the 'statsgen' tool. Whether the stats from one corpus are useful against another corpus depends on the situation, YMMV, science it up.
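For the policy described in the opening post, a policygen run would look roughly like this (a sketch; the flag names are the ones documented in the PACK README, and the output file names are placeholders):

Code:
# generate non-overlapping masks for: length 6-10, 4-8 lowercase,
# 0-3 digits, 0-1 uppercase, 0-2 special characters
python policygen.py --minlength 6 --maxlength 10 \
    --minlower 4 --maxlower 8 --mindigit 0 --maxdigit 3 \
    --minupper 0 --maxupper 1 --minspecial 0 --maxspecial 2 \
    -o custom.hcmask

# gather per-mask frequency stats from an existing corpus
python statsgen.py rockyou.txt -o rockyou.masks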
~
#3
(12-26-2019, 03:45 AM)royce Wrote: Most of what you're looking to do can be accomplished with the PACK toolkit [...]


Thanks, that was super useful! The only thing I couldn't find in these tools is the ability to select masks matching a custom policy from masks generated from a file. For example, if I use statsgen with rockyou.txt, it outputs the best masks for that file, but I couldn't find a way to apply a policy filter to them the way PolicyGen does. By that I mean: "look for the best masks in this file that follow these policies". Since that is relatively easy to implement, I wrote a simple Python script that does exactly that, if anyone is interested. Just change the input/output file names to whatever suits you best.

EDIT: I just found out that, unfortunately, I can't attach Python programs, oh well. If anyone is interested, and it is within the rules, I can provide a GitHub link for it or something; just message me.
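For reference, a rough sketch of such a filter might look like the following (illustrative only, not the script mentioned above; the file arguments, the policy numbers, and the assumption that each input line holds a mask optionally followed by ",count" are placeholders):

Code:
#!/usr/bin/env python3
# filter_masks.py - keep only masks that satisfy a policy, preserving input order
# usage: python3 filter_masks.py rockyou.masks filtered.hcmask
import sys
from collections import Counter

MIN_LEN, MAX_LEN = 6, 10
# per-class (min, max) occurrences, e.g. 4-8 ?l, 0-3 ?d, 0-1 ?u, 0-2 ?s
POLICY = {"?l": (4, 8), "?d": (0, 3), "?u": (0, 1), "?s": (0, 2)}

def mask_ok(mask):
    symbols = [mask[i:i + 2] for i in range(0, len(mask), 2)]  # split into ?x tokens
    if not MIN_LEN <= len(symbols) <= MAX_LEN:
        return False
    counts = Counter(symbols)
    return all(lo <= counts.get(sym, 0) <= hi for sym, (lo, hi) in POLICY.items())

with open(sys.argv[1]) as fin, open(sys.argv[2], "w") as fout:
    for line in fin:
        mask = line.strip().split(",")[0]   # drop a trailing ",count" if present
        if mask and mask_ok(mask):
            fout.write(mask + "\n")

Masks that survive keep whatever order the input file had, so a frequency-sorted statsgen output stays frequency-sorted.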

As a last note, I'll bring up again a topic I briefly discussed in my initial post: is there any possibility of, or experimentation with, some kind of "mask-merging" feature? By that I mean, let's say there are two masks ordered by MaskGenerator:

1 - ?l?l?l?l
2 - ?d?d?d?d

If I run an attack like this, all the candidates from mask 1 will be tried before mask 2 is touched. But it stands to reason that 1234 is more likely to be the real password than xzwk. It would be amazing if there were some sort of ranking across the combined candidate space of all the masks, and the attack then ran over that entire ordered list at once, ignoring the mask order. I do understand that this seems REALLY complex, and maybe that's why this feature isn't out there yet, but it'd be awesome if it did exist.
#4
this "sort of ranking" is called markov and hashcat supports it... the custom charset+mask "-2 ?l?d ?2?2?2?2" will do exactly that... you can test with --stdout
#5
(12-26-2019, 11:34 AM)philsmd Wrote: This "sort of ranking" is called Markov and hashcat supports it [...]

That is great, but from what I understand this approach doesn't bring the benefits of PolicyGen, does it? With the mask "-2 ?l?d ?2?2?2?2", is it possible to test only candidates with at most 2 lowercase letters and at most 3 digits, for instance? In case I'm not being clear: my intent is to test the best candidates of a given length in order of probability, while applying constraints to the sample space such as minimum/maximum numbers of lowercase letters and digits.
#6
policygen generates a mask file which is then used in hashcat's mask attack, and is therefore subject to the Markov generator.
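In practice that is just something like the following (a sketch; hashes.txt and hash mode 0 are placeholders):

Code:
# -a 3 = mask attack; hashcat works through the masks in the .hcmask file,
# applying its Markov ordering within each mask
hashcat -a 3 -m 0 hashes.txt custom.hcmask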
#7
(12-26-2019, 05:06 PM)undeath Wrote: policygen generates a mask file which is then used in hashcat's mask attack [...]

I do understand that. The problem is that it applies Markov per mask, which means it will exhaust the first mask completely before moving on to the second. In the example I gave, where ?l?l?l?l comes before ?d?d?d?d, xyzk will be tested before 1234. I'm looking for a way (though I'm unsure one exists) to apply Markov to all masks at once. The overall idea would be something like:

Test for a password of 8 characters with at most 6 lowercase letters, at least 2 digits, and so on, based on Markov applied to all possibilities at once, not mask by mask. The output would be something like:

1 - 12345678
2 - john1234
3 - passwo12


It's almost like a ?a?a?a?a... mask, but restricting the candidates to the imposed constraints. This actually seems really easy to implement, since I imagine most of the computation time is spent hashing a string, not generating it. So with a ?a?a?a... mask I could theoretically check whether the conditions are met and, if not, just skip the candidate. But that would require changing the code of hashcat itself; I can't find a way to do it from the outside. Pseudocode for this would look something like:

1 - mask used: ?a?a?a?a
2 - generate string
3 - check requirements; if compliant, hash; else, skip.

This way, Markov would be applied to all possibilities at once, EVEN WITH THE RESTRICTIONS (which is the whole point of this, otherwise just using ?a?a... would solve the problem), making sure that 1234 is tested before zxat even if ?l?l?l?l is more likely than ?d?d?d?d.
#8
Ah, I understand. No, that's not possible with hashcat and I believe it's not as easy to implement as you expect.

One thing you could do is run each mask until 5% or 10% (you can probably use -s/-l here for scripting), stop the attack, save the restore file for resuming later and then continue with the next mask.
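A rough sketch of that approach, assuming a policygen-style file of plain masks (no custom-charset lines) and placeholder file names and hash mode:

Code:
#!/bin/sh
# run only the first ~10% of each mask's keyspace, then move to the next mask
while read -r mask; do
    ks=$(hashcat -a 3 --keyspace "$mask")     # size in keyspace units
    hashcat -a 3 -m 0 hashes.txt "$mask" \
        --limit $((ks / 10)) </dev/null       # /dev/null keeps hashcat off the mask list's stdin
done < custom.hcmask

Resuming the remaining 90% of each mask later would mean re-running with --skip set to the same offsets, or using the restore files as suggested.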
#9
(12-26-2019, 05:31 PM)undeath Wrote: Ah, I understand. No, that's not possible with hashcat [...]

Ah, that's unfortunate. As a last resort, even though I believe it is highly unlikely, is there a way to supply something like a Python generator (word by word, not stored in memory) instead of a wordlist? That way I could try to do what I mentioned before: create an ordered generator for the candidates that meet those conditions, basically cheating the system. What I want could theoretically be done by turning the desired candidates into a wordlist, but unfortunately I can't fit trillions of words in memory :/. But I'll certainly try your idea of skipping masks after a certain point, thanks for that!
#10
You can use hashcat's wordlist mode (-a 0) and pipe in words through stdin using whatever generator you like. However, on fast hash modes that will cost a lot of performance.
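A minimal sketch of that kind of pipeline (gen.py and the limits in it are made up for illustration): hashcat's --stdout produces candidates from a single combined-charset mask in Markov order, the script drops everything outside the policy, and the survivors are piped back into wordlist mode.

Code:
#!/usr/bin/env python3
# gen.py - pass through only candidates that meet the policy (placeholder limits)
# example pipeline:
#   hashcat --stdout -a 3 -2 '?l?d' '?2?2?2?2?2?2?2?2' | python3 gen.py | hashcat -a 0 -m 0 hashes.txt
import string
import sys

MAX_LOWER, MIN_DIGIT = 6, 2   # e.g. at most 6 lowercase letters, at least 2 digits

def ok(word):
    lower = sum(c in string.ascii_lowercase for c in word)
    digits = sum(c.isdigit() for c in word)
    return lower <= MAX_LOWER and digits >= MIN_DIGIT

for line in sys.stdin:
    word = line.rstrip("\n")
    if ok(word):
        sys.stdout.write(word + "\n")

This keeps the Markov ordering of the one big mask while enforcing the per-class limits, but every candidate has to pass through the pipe, which is exactly the performance cost mentioned above for fast hashes.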