Limiting the consecutive occurrence - Printable Version
Limiting the consecutive occurrence - Hash-IT - 05-23-2012

Limit the consecutive occurrence of a given character. I am playing with brute forcing and I am looking for a feature that hashcat-plus doesn't have, and I wondered if any of you clever chaps know of a workaround I can use. I am testing WPA, so I am using .hccap files. I know the password is 8 upper-case alpha characters. I know how to build a brute-force attack and mask to do this, but I thought of a way to optimise the approach.

Basically I am trying to think of a way to limit the number of times a character can appear consecutively. Ideally I would like to be able to change the limit for different tests. For example, a limit of 2 would mean that a character can only be next to at most one more copy of itself: AABCDEFG would be tested, but not AAACDEFG or AAAADEFG.

I would very much like this to be a feature of hashcat-plus somehow, but until I can convince atom this would be a good feature to implement, can anyone give me some advice / help to achieve this now? Thank you.

RE: Limiting the consecutive occurrence - M@LIK - 05-23-2012

I've already thought about this. The only solution I found is to generate words using maskprocessor and pipe them to sed, which does the filtering:

Code:
mp64 ?u?u?u?u?u?u?u?u | sed '/\([A-Z]\)\1\1/d'

You can either generate a filtered dictionary using the command above or pipe the command directly into hashcat. Of course, generating a filtered dictionary takes a long time and a huge amount of space, but it will be faster than re-running this customised brute force every time.

RE: Limiting the consecutive occurrence - Hash-IT - 05-23-2012

Hi M@LIK. Thank you very much for your suggestion. I have just checked back here before going to work and was surprised to see a reply so soon. I just wanted to write back and say thank you, as I won't be able to experiment until tonight, and I wanted you to know I am grateful for your help. It is going to drive me crazy waiting to come home to play with this new idea!

Just a quick note: as I am testing WPA, I am guessing I probably won't notice much of a speed reduction, since WPA is such a slow algorithm anyway? Also I am guessing (again) that the "\1\1" part is the number of repetitions? So would "\1\1\1" mean AAA?

This should really be a rule or feature of hashcat-plus so it can be performed on the GPU (much faster). I am trying to think of a way to convince atom it would be useful and popular.

...oh, one more question if you don't mind: am I able to do this on a Windows computer, the sed part?

Thanks again and I look forward to discussing this more with you.

RE: Limiting the consecutive occurrence - M@LIK - 05-23-2012

Hash-IT Wrote: Just a quick note: as I am testing WPA, I am guessing I probably won't notice much of a speed reduction, since WPA is such a slow algorithm anyway?

It's not about the algorithm, it's sed. It is probably the fastest stream editor there is, although the command I gave does seem too slow for it. I tried the following:

Code:
mp64 ?u?u?u?u?u?u?u?u | sed '/\([A-Z]\)\1\1/d' | hc64p -m2500 -n160 test.hccap

Hash-IT Wrote: Also I am guessing (again) that the "\1\1" part is the number of repetitions? So would "\1\1\1" mean AAA?

I'll explain the sed command: sed '/\([A-Z]\)\1\1/d'

'' = the script goes between these two quotes.
/ / = the delimiters around the pattern.
\([A-Z]\) = capture any single upper-case character.
\1 = another occurrence of that same captured character.
d = delete the whole line whenever the pattern matches.

So \([A-Z]\)\1\1 matches AAA, and \([A-Z]\)\1\1\1 will match AAAA. Note that the first (captured) character is the one that has to repeat.

Hash-IT Wrote: ...oh, one more question if you don't mind: am I able to do this on a Windows computer, the sed part?

Of course, I'm doing this on Windows. Just download the Cygwin binaries and add them to your PATH, and you can always use them.

Hash-IT Wrote: Thanks again and I look forward to discussing this more with you.

Me too, let me know how it goes.
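For reference, the repeat limit is set simply by how many \1 back-references follow the capture group. A few variants of the same pipeline (these assume the same mp64 and sed binary names used in the commands above; substitute whatever your builds are actually called):

Code:
# allow no adjacent repeats at all (delete any pair such as AA)
mp64 ?u?u?u?u?u?u?u?u | sed '/\([A-Z]\)\1/d'
# allow pairs, but delete AAA and longer runs (the command used above)
mp64 ?u?u?u?u?u?u?u?u | sed '/\([A-Z]\)\1\1/d'
# allow up to AAA, but delete AAAA and longer runs
mp64 ?u?u?u?u?u?u?u?u | sed '/\([A-Z]\)\1\1\1/d'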
RE: Limiting the consecutive occurrence - Hash-IT - 05-24-2012

Just to update this post for others in case they are interested in this subject. Everything new in this thread was worked out by M@LIK via PM; I am simply providing a follow-up for the forum.

This method does indeed work, but it is VERY slow. However, the idea would dramatically reduce the cracking time if pre-generated tables were made, or if we can interest atom in making this some sort of GPU rule-based option.

How to do it: download sed.exe for Windows, there is no need to install Cygwin. Place maskprocessor, sed.exe and hashcat-plus in the same directory and follow M@LIK's instructions above. To change the number of allowed repetitions of a given character, simply change the "\1" part. More back-references means less filtering, so \1\1\1\1\1\1 allows more repetition than \1\1, for example.

Thanks to M@LIK for all his hard work. If anyone else is interested in this method of optimising brute force, please let atom know you would like this feature and we may be able to interest him enough to consider adding it.

RE: Limiting the consecutive occurrence - Hash-IT - 05-24-2012

Another update to this. I have been trying to generate a dictionary using this method and it is so slow I don't think it is a viable option. M@LIK, do you have any suggestions?

I read the thread you linked to today regarding just random guesses, and atom seemed to like it. I hope this interest can be extended to this idea also. I like to think of it as pseudo brute force, or at least an educated guess!

In your example above:

Quote: mp64 ?u?u?u?u?u?u?u?u | sed '/\([A-Z]\)\1\1/d'

I can see that you have 8 characters. When I try to do the same, I can leave my computer for many hours and nothing is generated, although when I do 6 characters I get results. Could there be an issue where the Windows-based version cannot handle more than 7 characters?

Edit: I have a little more information on this problem with more than 7 characters. I can actually make 8 characters, but if I use fewer than three back-references (\1\1\1) nothing happens for hours. I guess the filter is incredibly slow, or there is something wrong, as your example has only \1\1!

Another update: if, and I mean a big if, I have my calculations correct (and I am more than open to the possibility I don't), then assuming the best case, I think that with this feature or pre-prepared dictionaries an attack on a WPA password of 8 upper-case alpha characters should be reduced by about 18%. That's pretty significant I think!

RE: Limiting the consecutive occurrence - M@LIK - 05-25-2012

Hey. I've been thinking about this lately, and the good news is that I have more than one possibly valid solution in mind. The problem is that I need more time for this, and time is what I don't have :( I have final exams at college approaching. I'll post what I have as soon as possible. Anyway, you keep trying too; I see good ideas there as well!

RE: Limiting the consecutive occurrence - Hash-IT - 05-25-2012

OK M@LIK, your exams come first!

I have just made a 75 GB file using atom's maskprocessor: 8 upper-case characters starting with A. ...So only 25 more to do! Ha ha! The idea for this is to write the files first and filter afterwards. I am just going to experiment with the first batch starting with "A". I must admit to being surprised at how large this file is; it shows in a very clear way the scale of a seemingly simple 8 character password.

I am thinking about regular expressions to sort this. I would love to learn of your ideas, but do your revision first!
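A note on sizes, for anyone trying the same split: each starting letter covers 26^7 (about 8 billion) candidates, roughly 72 GB as plain text with one password per line, a little more with Windows line endings, which is in line with the figure above. If the filter is applied in the same pass, the unfiltered file never has to be written at all. A sketch, reusing the mp64 / sed / hc64p names from M@LIK's commands (substitute whatever your binaries are called):

Code:
# write the "A" block already filtered, instead of filtering a ~75 GB file afterwards
mp64 A?u?u?u?u?u?u?u | sed '/\([A-Z]\)\1\1/d' > A-filtered.dict
# or skip the dictionary entirely and feed the candidates straight to the cracker
mp64 A?u?u?u?u?u?u?u | sed '/\([A-Z]\)\1\1/d' | hc64p -m2500 test.hccap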
RE: Limiting the consecutive occurrence - Pixel - 05-26-2012

Hello Hash-IT. I've been trying to do something similar to you, I think. I wanted to limit the number of repeated adjacent characters in a line as well as limit the total number of repeats in a line. Hope you get what I mean? Is this the same? Being able to do this with req.exe in hashcat-utils would be cool, as it already does something similar, which is matching passwords against a set of criteria.

Here are a couple of links you may be interested in: Tape's blog, which has some good sed commands (look at the adjacent-characters command, the second yellow one up from the comments at the bottom), and Gitsnik's blog, which has a Python script that is quicker than sed, although the output is a bit different. That second link is also on Tape's blog.

By the way, I've got the whole set of upper-case A-Z 8-character words if you want it. I've split each letter into 75 chunks of 1 GB so they are easier to manage, and all 75 chunks compress down to 25.6 MB per letter, so they don't take up all my hard drive. The complete set is 667 MB; talk about compression. Uncompressed it would be around 2 TB, though I don't know exactly, as I've only got a 500 GB hard drive.

RE: Limiting the consecutive occurrence - Hash-IT - 05-26-2012

Hi Pixel, welcome to the forum, and what a great first post! Thank you for your links; I am just going to read through them now and I will come back.

Again, thank you for the offer of your brute-force tables, that's very kind of you. At the moment I only have A... ha ha! They may be useful to members here if we can't convince atom to make this a feature in hashcat-plus. It does seem a little "old fashioned" storing huge lists like that for a specific attack when they could be generated on the fly. It's a very kind offer and I still might take you up on it; perhaps see if others here ask for them also. I would feel guilty about you uploading all that just for me! I am pleased you are interested in this approach and I will get back to you later today.

Update:

Those links were great, Pixel! I think I had better clarify what I personally believe we should be aiming for (open to suggestions, obviously). Ideally this filtering should be performed on the fly (at the time of cracking) using the GPU, as it is massively superior to the CPU. I remember atom telling me some time ago that I should try to include as many rules as possible whilst using hashcat-plus, as it ensures I am using the GPU to its maximum, so this is a good opportunity to do that. I think "on the fly" filtering is the only sensible way to go, as storing many TBs doesn't make sense.

There are 2 methods of filtering I think we should be interested in. These should be available independently or as a combination:

1. Limit the number of identical adjacent characters in any line to a user-specified amount.
2. Limit the number of times a character appears (in any position) in a given line.

Does everyone agree with those 2?
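For what it's worth, the second criterion can also be expressed in sed, at least with GNU sed, by back-referencing the captured character anywhere later in the line rather than only immediately after it. A sketch, with the same caveats as before about binary names; the \{n\} interval applied to a group may not work in every sed build:

Code:
# delete any line in which some character appears 3 or more times in total, in any positions
mp64 ?u?u?u?u?u?u?u?u | sed '/\([A-Z]\)\(.*\1\)\{2\}/d'
# both filters at once: at most 2 identical adjacent characters AND at most 3 of any one character in total
mp64 ?u?u?u?u?u?u?u?u | sed -e '/\([A-Z]\)\1\1/d' -e '/\([A-Z]\)\(.*\1\)\{3\}/d'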