06-03-2013, 02:42 AM
What was fresher in my mind was that the same dictionary and ruleset took a few seconds to run as an attack, but several hours to output the mangles to a file.
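For what it's worth, this is roughly what I was doing to dump the mangles (the wordlist and rule file names are just placeholders, not my actual files):

hashcat --stdout -r rules/best64.rule wordlist.txt > mangled.txt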
As pointed out, stdout isn't really made for high-speed I/O, and atom has optimized hashcat for doing actual attacks, not for outputting mangles.
The hashcat utilities mp (maskprocessor) and sp (statsprocessor) let you specify an output file directly, and are very fast.
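With those you can write straight to a file, something like this (the mask is just an example, and the binary name depends on your platform):

mp64.bin -o candidates.txt ?u?l?l?l?l?d?d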
What might be needed, rather than fiddling with hashcat itself, is to pull just the rule engine out into a separate mp- or sp-style program, for the express purpose of generating the mangles.
(The big picture behind this question, and my JtR memory commands question in another thread, is running those cracked word lists against someone else's rules, rather than running my rules against normalized dictionaries. I suspect there are a lot of duplicates, and so a lot of wasted time, but I really can't "see" it unless I can get the actual mangles to examine.)
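If I do manage to dump the mangles, checking for that wasted work would be something along these lines (file names are placeholders) to count how many candidates get generated more than once:

hashcat --stdout -r someone_elses.rule cracked.txt | sort | uniq -d | wc -l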