11-30-2019, 06:32 AM
(11-29-2019, 09:29 AM)philsmd Wrote: It's not a good idea to have dynamic buffers/memory/sizes which would reduce speed tremendously and make code much more complex.
A charset per se isn't fixed size, is it? So are we talking about a fixed number of pointers, or maybe GPU pipelines of some kind?
(11-29-2019, 09:29 AM)philsmd Wrote: so users should choose a more clever and advanced approach which doesn't involve very small charsets like -1 abc -2 348 -3 FOBAR -4 .-=+ ?1?1?1?2?3?4
Well, unlimited charsets would let us use just one attack type for many cases. Also, one-liners. Right now, if you hit the 4-charset limit, you have to combine attacks, possibly use some processor, edit external files and so on. You said it's quite easy to find a workaround, but in practice it's a pain. I'm not saying it's impossible, but having unlimited charsets would make it significantly easier for the end user.
Working on a hunch, I could edit just one .bat to run ten jobs and quickly test a few hundred million narrowed-down combinations. Expressed as rules, those ten jobs require either creating several, often non-reusable, files or accepting significant overhead: say, instead of the two delimiter sets that a local vendor seems to use in its password "policy", as in ...[_\-]{1}\d{4}[\.\s]{1}..., I would have to use one overlapping set in each of the ten jobs. There is nothing wrong with that when a 50% success rate counts as a good yield for a long-term bulk job, but it's a pain when working on a single target hash with a short TTL.
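Just to put a rough number on that overhead, here's a quick back-of-the-envelope calculation in Python. It only covers the [_\-]\d{4}[\.\s] fragment from the example above, ignores whatever surrounds it, and the sets themselves are just the ones I mentioned:

# Rough keyspace comparison for the [_\-]\d{4}[\.\s] fragment only;
# the surrounding parts of the mask are left out, and the exact sets
# are just the ones from the example above.

exact_first  = "_-"     # delimiters seen in the first position
exact_second = ". "     # delimiters seen in the second position
digits       = 10 ** 4  # \d{4}

# With separate per-position charsets (what more charset slots would allow):
separate = len(exact_first) * digits * len(exact_second)

# With a single merged, overlapping set reused at both delimiter positions
# (the workaround when you run out of -1..-4 slots across several jobs):
merged_set = set(exact_first) | set(exact_second)
merged = len(merged_set) * digits * len(merged_set)

print(separate)           # -> 40000 candidates for this fragment
print(merged)             # -> 160000 candidates for the same fragment
print(merged / separate)  # -> 4.0x overhead, before the rest of the mask

And that 4x is per fragment, per job; it multiplies across every position where a merged set has to stand in for a narrow one.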
OK, maybe it's not the best idea to trade performance for usability in the hashcat engine itself. But what about its processor? Personally, I would be more than happy to trade some speed for the ability to compose dictionaries from one-liner masks with an unlimited number of charsets. Any chance your team will go that way in the near future?
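To be clear about what I'm asking for, here is a toy sketch in Python of the kind of dictionary composition I mean. It is not maskprocessor and not hashcat syntax; the "list of charsets" input format is made up purely for illustration:

from itertools import product

# Toy illustration only -- not maskprocessor and not hashcat syntax.
# Each position of the "mask" refers to an entry in a plain Python list,
# so there is no built-in limit of four custom charsets.

def expand(mask_positions):
    """Yield every candidate for a mask given as a list of charset strings,
    one charset per position."""
    for combo in product(*mask_positions):
        yield "".join(combo)

# Roughly the quoted example (-1 abc -2 348 -3 FOBAR -4 .-=+ ?1?1?1?2?3?4),
# but nothing stops you from adding a fifth, sixth, ... set here:
charsets = ["abc", "abc", "abc", "348", "FOBAR", ".-=+"]

for candidate in expand(charsets):
    print(candidate)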