Limiting the consecutive occurrence - Printable Version

+- hashcat Forum (https://hashcat.net/forum)
+-- Forum: Deprecated; Ancient Versions (https://hashcat.net/forum/forum-46.html)
+--- Forum: Very old oclHashcat-plus Support (https://hashcat.net/forum/forum-23.html)
+--- Thread: Limiting the consecutive occurrence (/thread-1201.html)
RE: Limiting the consecutive occurrence - Pixel - 05-26-2012

Glad you like the links. I know it seems "old fashioned"; they are only for testing out other tools that don't support rules and masks. Add to that the fact that my GPU is very weak, a GF 8400GS, and it gets me a massive 650 c/s, don't laugh. So I use my CPU (i7 2600K) with other tools and get around 7000 k/s. Looks like I'll have to get a new GPU sooner if this gets added to hashcat-plus, or I will be left out in the cold.

I'll upload those files somewhere so anyone can get them, and post the links here (probably mediafire) if that's allowed/OK.

RE: Limiting the consecutive occurrence - Hash-IT - 05-26-2012

(05-26-2012, 10:09 PM)Pixel Wrote: Glad you like the links. I know it seems "old fashioned"; they are only for testing out other tools that don't support rules and masks. Add to that the fact that my GPU is very weak, a GF 8400GS, and it gets me a massive 650 c/s, don't laugh. So I use my CPU (i7 2600K) with other tools and get around 7000 k/s.

Hey, don't worry about it, we've all been there. Look on the bright side: when you do upgrade you will really notice the difference. I hate upgrading hardware and not really appreciating the gain, so it is worth waiting as long as you can... however I must admit to giggling a little when you mentioned... 650 c/s.

(05-26-2012, 10:09 PM)Pixel Wrote: Looks like I'll have to get a new GPU sooner if this gets added to hashcat-plus, or I will be left out in the cold.

It certainly looks that way. :p

(05-26-2012, 10:09 PM)Pixel Wrote: I'll upload those files somewhere so anyone can get them, and post the links here (probably mediafire) if that's allowed/OK.

That's very kind of you, but I was wondering if it might be better to work out a way to filter them first; we could do it together, and perhaps M@lik might help also? If we work out how to filter these lists we can share the filtered (optimised) versions, which should be smaller. What do you think?
Anyway, I think this request is gaining momentum. I will beg and plead with atom when he returns from his holiday and see if I can get him interested in it; that way we won't need lists at all. I am sure it is OK to post links to files like that on the forum; the only thing we are not allowed to do is post hashes to crack etc.

RE: Limiting the consecutive occurrence - KT819GM - 05-26-2012

On a dict- or rule-based attack you have to pull data to the GPU, so you have plenty of its power to utilise. On brute force atom has already made optimisations to use all of the GPU to process as many characters as possible. On-the-fly filtering will slow down the calculation, just like any other process which uses GPGPU. Somehow it should be done with a template: like in the other thread for SL3, you always have the same dataset, so you could make a template of what not to check. But I really would like to see how many combinations would be "saved" by skipping 10+ identical characters in a row.

RE: Limiting the consecutive occurrence - Hash-IT - 05-26-2012

(05-26-2012, 10:19 PM)KT819GM Wrote: On a dict- or rule-based attack you have to pull data to the GPU, so you have plenty of its power to utilise. [...] But I really would like to see how many combinations would be "saved" by skipping 10+ identical characters in a row.

I understand what you are getting at... I think! It may be that atom could optimise the brute force, which is what I am hoping for. I did a very basic and simplistic calculation to try to work out the gain, and I "think" that if you had to search the entire keyspace using an optimised version it "should" possibly save you 18%.
I could be completely wrong though; it is based on 8 upper alpha, filtered to remove everything except consecutively unique characters.

RE: Limiting the consecutive occurrence - Pixel - 05-27-2012

(05-26-2012, 10:33 PM)Hash-IT Wrote: I did a very basic and simplistic calculation to try to work out the gain, and I "think" that if you had to search the entire keyspace using an optimised version it "should" possibly save you 18%.

I would have expected more. Is that only disallowing more than 2 of the same char in a row, e.g. AABAABAA? It should save more if you take it one step further: cap the repeats of any character and also limit the total amount of repeated characters. Say I want no more than 2 repeats per character and no more than 2 repeated characters in total:

AABAABAA — not allowed; it has six occurrences of "A"
AABDDCEE — not allowed; the total amount of repeats is three: one of "A", one of "D" and one of "E"
AABBCDEF — allowed; it has two occurrences of "A", two of "B", and the total amount of repeats is two: one of "A" and one of "B"
ABCABDEF — allowed; it has two occurrences of "A", two of "B", and the total amount of repeats is two: one of "A" and one of "B"

For the network I'm testing I would set character repeats to 3 and keep the total amount of repeats at 2. In the few keys I've seen there does seem to be a pattern, and I can't help thinking there is a password policy of some sort in the way the algorithm generates keys, which may allow or disallow a key if it spits out something like AAHAAEAA. Unlikely, but possible?

EDIT: Here are the full A-Z_UPPER_LEN_8 lists I generated some time ago. Bet atom is shaking his head at this. Oh well, hehe. Also found another old link I have, to a perl script; it's called "Powerful Word Generator" on the page. Here are some of the usage options that originally got my attention:
-o number: max number of occurrencies of a letter
-n number: max number of n-ple (AA, BBB, CCC, DDDD)
-r number: max number of repeatitions (ABCABABBCDBCD has 5 repeatitions: 3 reps of AB and 2 of BCD)

This almost does what I/we want, but I think it has a bug somewhere as it skips passwords unexpectedly. I'll give an example if needed. ULM (Unified List Manager) also has an option to limit consecutive character instances in its beta functions. I've been trying to do this for ages, so I may have a few other things on my PC; I'll post them here if I do.

RE: Limiting the consecutive occurrence - Hash-IT - 05-27-2012

(05-27-2012, 01:52 AM)Pixel Wrote: I would have expected more. Is that only disallowing more than 2 of the same char in a row, e.g. AABAABAA?

Hi Pixel. Unfortunately not; it was based on each character appearing only a single time consecutively. Also, I haven't worked out yet how to check for multiple entries in a line when they are not consecutive. I have asked someone about this and will get back ASAP. I was hoping M@lik might have a couple of tips for us, but he is revising for exams now I think. Saying that, an 18% reduction in keyspace is pretty significant to me!

(05-27-2012, 01:52 AM)Pixel Wrote: EDIT: Here are the full A-Z_UPPER_LEN_8 lists I generated some time ago. Bet atom is shaking his head at this. Oh well, hehe.

Thanks again for the links and the effort you have put into this request. Don't worry about putting those sorts of links up; atom just doesn't want warez, hacking, or hash requests etc. ULM is an old favourite of mine. I beta tested it so hard for Blazer when he was writing it that I think I made him regret starting it; I made requests on an almost daily basis and I must have driven him nuts!! I really liked Blazer, he was very helpful and tried his best to make things work as users requested. I wish he hadn't moved on to other things, as ULM was really coming on. It still has a few bugs, but it is by far the best password list manager there has ever been.
He wouldn't accept help and did everything on his own, which may have made things more difficult, but it was his hobby so that's how he did it. I wish he would come back to it one day, and if he does I am ready and waiting with requests and bug testing!!! If you ever get to talk with him, he is a nice chap and very helpful. The lack of progress we have made on this, as humble forum members, just goes to show that without atom things just wouldn't get done. I am sure he is already aware of this fact, but it makes me appreciate his work here more than ever. To quote Wayne's World, "We are not worthy"!!! Thanks for sticking with this Pixel, I am working on it!

RE: Limiting the consecutive occurrence - ntk - 05-27-2012

There are a few more thoughts, Hash-IT:
+ Limit the possibilities at the start of the password; e.g. when you have guessed it would be a 'd', why generate all the candidates starting with 'l' or 'u'?
+ Perhaps limit the possibilities of the second char too, so the total number of candidates to filter decreases by a considerable amount. I mentioned it in a post somewhere as "... pack a dragon by its head": grab the first 2 chars and make an educated guess.
+ When you have guessed it starts with 'a-m' or 'e-f', why generate the rest?
+ Use Perl-style grep, e.g. pcregrep or grep -P; it is faster than sed.
+ When you have all of these built in, you should run one machine forwards and a second machine backwards — one from 'a-z', the second from 'z-a' — so in the same run time you cover the most ground.
+ Networking, if you want to sort it out fast or recover passwords > 10 chars. This is the hardest part.

RE: Limiting the consecutive occurrence - Hash-IT - 05-27-2012

Hi ntk. Thank you for your interest, all good ideas. Good news!!! I think I might be able to get Tape interested in this thread, and he is very clever with this sort of thing. Hang in there, our problems may be solved very soon!
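[Editor's note: KT819GM's question about how many combinations would actually be "saved" can be sanity-checked with a short sketch. This is an illustration, not from any post in the thread; the `no_adjacent_repeat` helper is hypothetical, and it only models the simplest rule discussed here — no two identical adjacent characters — on 8 upper-alpha characters.]

```python
from itertools import product
from string import ascii_uppercase

def no_adjacent_repeat(word):
    """True if no character is immediately followed by itself."""
    return all(a != b for a, b in zip(word, word[1:]))

# Brute-force count on a small alphabet (k symbols, length n) so it
# runs quickly; survivors should match the closed form k * (k-1)^(n-1):
# k choices for the first char, k-1 for each later char.
k, n = 4, 5
alphabet = ascii_uppercase[:k]
survivors = sum(no_adjacent_repeat(w) for w in product(alphabet, repeat=n))
assert survivors == k * (k - 1) ** (n - 1)

# Same closed form for the thread's 8-char upper-alpha case:
total = 26 ** 8
kept = 26 * 25 ** 7
print(f"keyspace saving: {1 - kept / total:.1%}")
```

Under this rule the saving comes out at roughly 24%, in the same ballpark as (slightly above) the 18% estimate discussed earlier in the thread; Pixel's stricter rules would prune considerably more.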
RE: Limiting the consecutive occurrence - M@LIK - 05-27-2012

This is getting huge, Hash-IT, you should be proud :P I see a lot of good ideas; actually, every idea here gives me several new ideas xD

- sed commands are customisable in the ultimate way; almost everything you guys want can be done. BUT, it IS slow! However, if you define more specific conditions (e.g. work on the first two characters only), it might get a bit faster. And no, grep is much slower than sed (with Perl regex too).
- This might sound weird, but I've been thinking: if we can calculate all the combinations in a very accurate way, we can actually know the ranges where the repetition will happen, and thus skip those ranges using the -l and -s options in maskprocessor. I didn't try it though (could be all wrong! xD)
- I'm a bit new to programming, but maybe I can program a generator for this. I'll make it support GPU too... in the next few years! xD crunch-3.2 is a word generator, and its source is already available for download!

I can't think of anything else now; let me see what you have!

RE: Limiting the consecutive occurrence - Hash-IT - 05-27-2012

(05-27-2012, 03:30 PM)M@LIK Wrote: This is getting huge, Hash-IT, you should be proud :P

Ha ha, yes, it does seem popular. I am OK at coming up with ideas but not so good at the rest; however, I am more proud of the "multi-rules" thing, that was my finest moment I think... ha ha!!!!

Can you think of a way to delete any generated word from maskprocessor that has N identical characters anywhere in it? I can do the consecutive ones (thanks to you) but not the word as a whole. I have asked Tape about this and he said he will think about it. If atom isn't able to make this work on the fly then we will need to make our own tables up.
Pixel seems keen on this idea, so perhaps once we have worked out how to filter these lists we could each take a chunk, do the filtering on our own computers, and then swap the results between us so we end up with an optimised list. Perhaps we could also provide very simple instructions so more people could help and share the benefits.
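[Editor's note: Pixel's allow/deny examples earlier in the thread can be turned directly into a filter. The sketch below is hypothetical — it is not the perl script, ULM, or any tool mentioned here — and implements the two limits Pixel described: how many times any single character may occur, and how many distinct characters may occur more than once.]

```python
from collections import Counter

def passes(word, max_per_char=2, max_repeated_chars=2):
    """Pixel-style filter: reject a candidate if any character occurs
    more than max_per_char times, or if more than max_repeated_chars
    distinct characters occur more than once."""
    counts = Counter(word)
    if max(counts.values()) > max_per_char:
        return False
    repeated = sum(1 for n in counts.values() if n > 1)
    return repeated <= max_repeated_chars

# Pixel's four examples:
assert not passes("AABAABAA")  # "A" occurs six times
assert not passes("AABDDCEE")  # three characters repeat: A, D, E
assert passes("AABBCDEF")      # only A and B repeat, twice each
assert passes("ABCABDEF")      # only A and B repeat, twice each
```

Used as a post-filter on generator output (one candidate per line on stdin), something like this would discard the unwanted words before they reach the cracker — though, as KT819GM noted earlier, any CPU-side filtering in the pipeline costs throughput compared with skipping those ranges inside the generator itself.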