FIFO help
#1
Apologies for what is no doubt an easy question.  I don't know much about working with FIFOs, but I'm trying to learn. 

Despite reading forum posts, instruction pages and the like, I can't get a FIFO to work the way I would like.  It may not be possible to do what I'm thinking if I've misunderstood how FIFOs work, so I would welcome any constructive guidance that points me toward the best path forward.

Scenario:
I have a smallish dictionary (~450MB).  Running the following attack, combining the file with itself, worked relatively well, but took about 15 hours to complete:

hashcat64.bin -m 1000 -a 1 hashfile.txt small.dict small.dict -o results.txt

I'd like to use combinator.bin to combine the files into a new, larger dictionary and then apply rules against that dictionary, but the output from combinator results in a file size that is far too large to be practical.  Obviously, this isn't the best way to accomplish what I want to do.

I tried using mkfifo to create a fifo, dump combinator.bin output to the fifo, and then use the fifo as a source for hashcat, but that doesn't seem to work.

Example:
In terminal 1:
mkfifo myfifo && ./combinator.bin small.dict small.dict > myfifo

In terminal 2:
hashcat64.bin -m 1000 -a 0 hashfile.txt myfifo -o results.txt -r rule1

This just doesn't work: combinator.bin terminates and hashcat gets into a confused state, running but doing no work.

I tried doing the same as above, but this time combining the small.dict with the fifo, as follows:
In terminal 1:
mkfifo myfifo && ./combinator.bin small.dict small.dict > myfifo

In terminal 2:
hashcat64.bin -m 1000 -a 1 hashfile.txt myfifo small.dict -o results.txt

This leaves combinator.bin running (> myfifo), but hashcat immediately terminates with:

Generated bitmap tables...myfifo: Not a regular file.

I highly suspect I'm doing something obviously wrong (and can almost feel the experts rolling their collective eyes from here), so please go easy on me. :-)

Can you help point out the error of my ways?

Thank you in advance!
#2
If you are using just a few rules (possibly even just one), you can use -j/-k together with -a 1, running hashcat once for each rule:
Code:
hashcat -m 1000 -a 1 -w 4 -j '$1 $2 $3' hash.txt dict1.txt dict2.txt

otherwise you can just use pipes (either with combinator.bin or with hashcat's --stdout option):
Code:
combinator.bin dict1.txt dict2.txt | hashcat -m 1000 -a 0 -w 4 -r rules.txt hash.txt
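
the --stdout variant mentioned above looks like this (just a sketch of the same idea, with hashcat itself generating the combined candidates instead of combinator.bin):
Code:
hashcat --stdout -a 1 dict1.txt dict2.txt | hashcat -m 1000 -a 0 -w 4 -r rules.txt hash.txt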

of course hashcat now isn't able to know how much input it will get (it can't determine the input size from a pipe), therefore the regular status updates won't show the remaining time (ETA) or the total number of password candidates, i.e. you can't see how much "keyspace" is left
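
if you still want a rough idea of the total keyspace, you can calculate it yourself before starting - a sketch assuming GNU coreutils (total candidates = lines in dict1 x lines in dict2 x number of rules):
Code:
# total candidates = lines(dict1) * lines(dict2) * lines(rules)
echo $(( $(wc -l < dict1.txt) * $(wc -l < dict2.txt) * $(wc -l < rules.txt) ))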
#3
Thank you for the reply philsmd!

The pipe solution works, but isn't as efficient as I had hoped (I've got 4x 970 cards and they're running at around 6% each, compared to 95%+ with normal file-based input). 

I'm focused solely on NTLM and my goal is to find ways to get at the longer or more complex passwords that have otherwise eluded discovery.  In my current testing, I've just run through ~14000 hashes and am down to the last 1600... but these last 1600 are proving difficult and I'm getting very little ROI at this point.  Given these were all created by humans who aren't security experts, I suspect there are at least one or two more common patterns that will yield more fruit if I can find them.

I'm certain I'm going to find more passwords with three- or four-word n-grams, combined with common rule variability... I just need a good way to build the n-gram permutations while maintaining speed and reducing system requirements.
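
For example, I'm wondering if something like combinator3.bin from hashcat-utils (which takes three wordlists) could be piped straight into hashcat so the huge combined dictionary never hits the disk - an untested sketch:
Code:
./combinator3.bin small.dict small.dict small.dict | ./hashcat64.bin -m 1000 -a 0 -w 4 -r rule1 hashfile.txt -o results.txt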
#4
As said, the first approach (with -j/-k) should be used if you plan to use only a very few rules: you can start hashcat several times (one after the other in sequence, or in a loop if you prefer), each run using -a 1 together with a single rule; this should be very fast.
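
for example, something like this (a sketch - the three append rules are just placeholders for whatever few rules you actually want to run):
Code:
# one -a 1 run per rule; each rule is applied to the left dictionary via -j
for rule in '$1' '$2 $3' '$!'; do
    hashcat -m 1000 -a 1 -w 4 -j "$rule" hash.txt dict1.txt dict2.txt
done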

On the other hand, you should only use pipes when you can feed hashcat a lot of rules; otherwise you risk giving it far too little work (this depends, of course, on the tool on the other side of the pipe - and on the hash type - combinator.bin is quite fast, but hashcat is way faster, especially with NTLM)... so a lot of rules are needed to get the workload back up to about ~100%!
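
for example, with one of the larger rule sets (rules/dive.rule, which ships with hashcat, is just an example here - any big rule file will do):
Code:
combinator.bin dict1.txt dict2.txt | hashcat -m 1000 -a 0 -w 4 -r rules/dive.rule hash.txt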