hashcat Forum

Full Version: How to use named pipe with hashcat?
Hello,

I'm trying to use princeprocessor together with a mask. While I can get princeprocessor working alone through a normal pipe, I can't make it work with a named pipe, since it seems I can't combine stdin with a mask. As soon as hashcat reads the pipe, princeprocessor stops immediately. I read and tried what the Practical PRINCE: 1 CPU + 24 hours = 63% Linkedin hashes cracked, 100% automated post described (a named pipe without a mask) and it failed the same way.

Works:
Code:
$ princeprocessor < words_alpha.txt | hashcat -m 10900 -w4 10900.hash

Doesn't work:
Code:
$ mkfifo fifo
$ princeprocessor -o fifo < words_alpha.txt
$ hashcat -m 10900 -w4 10900.hash fifo

What I want to accomplish:
Code:
$ hashcat -a 6 -m 10900 -w4 10900.hash fifo ?u?d?s

Can anyone give me some advice on solving it? Thank you.
The problem with named pipes is that they act like a normal file but have no "file size", so hashcat reads through all of the input to count the number of password candidates (the "length" of the wordlist), and it can't seek back afterwards (seeking isn't allowed on named pipes).

You could use other techniques instead, like appending the mask characters with rules, etc.
Thanks for the advice. I generated the needed rules with maskprocessor, which eliminated the need for a named pipe. Still, I wish hashcat could take one as input.
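For other searchers, here's a minimal sketch of that rule-based route, assuming maskprocessor's binary is named mp64 (it may be mp32 or maskprocessor depending on the build) and reusing the filenames from the commands above. The $ characters in the mask are kept as literal rule characters, so each generated line is an append rule:

```shell
# Generate one append rule per ?u?d?s combination (26 * 10 * 33 rules);
# '$' passes through literally, ?u/?d/?s are expanded by maskprocessor:
mp64 -o append_uds.rule '$?u$?d$?s'

# Feed PRINCE candidates over a normal pipe and let the rules do the
# appending that -a 6 with a fifo could not:
princeprocessor < words_alpha.txt | hashcat -m 10900 -w4 -r append_uds.rule 10900.hash
```

Note the rule count grows multiplicatively with the mask length, so this only stays practical for short appended masks.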
For other web searchers ... you probably shouldn't do this unless you know why you need it ... but as a general workaround for the "hashcat named pipe" case, you can do this on Linux:

Code:
mkfifo hashpipe
tail -f hashpipe | hashcat --stdin-timeout-abort=[arbitrary high number] ...

... and then:

Code:
dosomestuff >hashpipe
otherstuff >hashpipe
evenmorestuff >hashpipe

It may not be super fast, depending on your use case (tail does some buffering, etc.), but for a flurry of small input sets against a large target (one that takes time to load before each attack), it may be worth the trade-off.
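The reason tail -f helps is a POSIX fifo detail: the reader gets EOF as soon as the last writer closes the pipe, so without tail -f holding the read side open, hashcat would exit right after dosomestuff finishes. A minimal sketch of that EOF behavior, with plain cat standing in for hashcat and hypothetical temp paths:

```shell
dir=$(mktemp -d)
mkfifo "$dir/p"

# Background reader, standing in for hashcat: it reads until EOF.
( cat "$dir/p"; echo "reader saw EOF" ) > "$dir/out" &

# First (and only) writer: once it closes the fifo, the reader hits
# EOF and exits -- a second writer would find no reader left.
echo "first batch" > "$dir/p"

wait
cat "$dir/out"   # "first batch" then "reader saw EOF"
rm -r "$dir"
```

Putting tail -f between the fifo and the consumer keeps a writer attached across those close/reopen cycles, which is exactly what the workaround above exploits.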

But you should *only* do this after exhausting other options, optimizing your pipeline throughput, etc.

In other words: you probably don't need this. :D