4-charset limit (mask attack)
#1
Hello again!

First, I'd like to thank you guys for brain! I've been using it for a while, great stuff! Brain solved most of my troubles with the dictionary intersections mentioned in my first thread, not to mention that a distributed client/server setup is always a good design. Y'all should've listened to lazy grandpa back in 2016 to perform better at CMIYC 2018, lol. No, srsly, thanks! Now not a single outdated GPU goes to waste, every one gets a place in the cold garage. Overhead? Pfffft! Totally worth it!

Today I've got yet another brilliant question you've never heard before: what exactly is the technological reason behind the limit on custom charsets (4)? Jumping through all those hoops with rules and processors to achieve something that could easily be done with a one-liner, if only you supported unlimited charsets, drives me crazy sometimes. If it's not too bold, I'd like to know the rationale behind this end-user-unfriendly decision. Thanks in advance for your replies!
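Just so we're on the same page, here's a rough sketch of what I mean (hash mode, file names and the sets themselves are made up):

    # works today: at most four custom charsets, -1 through -4
    hashcat -a 3 -m 0 hashes.txt -1 '_-' -2 '. ' '?u?l?l?l?1?d?d?d?d?2'
    # what I'm asking about: a fifth set and beyond in the same one-liner
    # (NOT valid syntax today, just illustrating the wish)
    # hashcat -a 3 -m 0 hashes.txt -1 ... -2 ... -3 ... -4 ... -5 ... '?1?2?3?4?5...'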

PS
I believe princeprocessor has an error in the help section related to --pw-min/--pw-max (those NUMs are inclusive, not exclusive).
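To illustrate the PS, a quick sanity check (assuming pp64.bin is the princeprocessor binary and words.txt is any small wordlist):

    # --pw-min/--pw-max are inclusive bounds: this emits candidates of
    # length 7 and length 8, i.e. both boundary values are produced
    pp64.bin --pw-min=7 --pw-max=8 < words.txt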
#2
Our rule engine works on the GPU to achieve full speed.
It's not a good idea to have dynamic buffers/memory/sizes, which would reduce speed tremendously and make the code much more complex. It's just a well-chosen upper bound (4) which honestly works in 99.99% of cases, and the remaining ones can easily be converted or worked around. Very small charsets (?1, ?2, ?3, ?4) would be bad for performance/acceleration anyway, so users should choose a more clever and advanced approach which doesn't involve very small charsets like -1 abc -2 348 -3 FOBAR -4 .-=+ ?1?1?1?2?3?4
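For example, just a sketch with made-up sets and a placeholder hash mode/file: a pattern that conceptually needs five custom sets can usually be folded down to four by merging two related sets into one union set, at the cost of a somewhat larger keyspace:

    # conceptually: ?1=?l?u  ?2=_-  ?3=.!  ?4=#@  ?5=%&   -- one set too many
    # workaround:   merge ?4 and ?5 into a single union set '#@%&'
    hashcat -a 3 -m 0 hashes.txt -1 '?l?u' -2 '_-' -3 '.!' -4 '#@%&' '?1?l?l?2?d?d?d?d?3?4'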
#3
(11-29-2019, 09:29 AM)philsmd Wrote: It's not a good idea to have dynamic buffers/memory/sizes which would reduce speed tremendously and make code much more complex.


A charset per se isn't fixed-size, is it? So are we talking about a fixed number of pointers, or maybe GPU pipelines of some kind?

(11-29-2019, 09:29 AM)philsmd Wrote: so users should choose a more clever and advanced approach which doesn't involve very small charsets like -1 abc -2 348 -3 FOBAR -4 .-=+ ?1?1?1?2?3?4


Well, unlimited charsets would allow us to use just one attack type for many cases. Also, one-liners. As it stands, if you hit the 4-charset limit, you have to combine attacks, possibly use some processor, edit outside files and so on. You said that it's quite easy to find a workaround, but actually it's a pita. I'm not saying it's impossible, but having unlimited charsets would make it significantly easier for the end user.

Working on a hunch, I could edit just one bat file to run ten jobs and quickly test several hundred million narrowed-down combinations. Expressed in rules, those 10 jobs require either creating several, often non-reusable, files or accepting significant overhead: say, instead of the two sets of delimiters the local vendor seems to use in its password "policy", something like ...[_\-]{1}\d{4}[\.\s]{1}..., I have to use one overlapping set for each of the 10 jobs. Nothing wrong with that when a 50% success rate is considered a good yield for a long-term bulk job, but it's a pita when working on a single target hash with a short TTL.
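To make that concrete, here's a rough sketch of the kind of splitting I mean; the word shapes, the sets and the vendor "policy" are made up, and a .hcmask file is just one way to batch the jobs (if I remember right, lines starting with # are treated as comments there):

    # jobs.hcmask -- one job per line, comma-separated fields:
    # each line defines its own ?1..?4 before the mask
    # job 1: 4-letter word, delimiter (_ or -), 4 digits, trailing dot or space
    _-,. ,?u?l?l?l?1?d?d?d?d?2
    # job 2: same tail, 5-letter word shape
    _-,. ,?u?l?l?l?l?1?d?d?d?d?2

and then the whole batch runs with something like: hashcat -a 3 -m 0 target.txt jobs.hcmask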

OK, maybe it's not the best idea to trade performance for usability in the hashcat engine itself. But what about its processor utility? Personally, I would be more than happy to trade some speed for the ability to compose dictionaries from one-liner masks with an unlimited number of charsets. Any chance your team will go that way in the near future?
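For instance, this is roughly what I do with maskprocessor today, still capped at four sets (mp64.bin, the sets and the mask below are just placeholders):

    # maskprocessor writes candidates to stdout; feed them to hashcat as a plain dictionary
    mp64.bin -1 '_-' -2 '.! ' '?u?l?l?l?1?d?d?d?d?2' | hashcat -a 0 -m 0 target.txt

If the processor alone supported more than four sets, that one pipe would replace the whole batch-of-jobs dance from my previous post.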