I just tested with a hash file and a potfile of 1,000,000 entries each and it's very fast on my system.
Maybe you can do the same and report back.
This is how I generated the lists:
Code:
#!/usr/bin/env perl

use strict;
use warnings;

my $NUM_HASHES = 1000000;

for (my $i = 0; $i < $NUM_HASHES; $i++)
{
  my $hash = "";

  for (my $j = 0; $j < 16; $j++)
  {
    $hash .= chr (int (rand (256)));
  }

  print STDOUT "user_$i:" . unpack ("H*", $hash) . "\n"; # hash file
  print STDERR unpack ("H*", $hash) . ":a${i}b" . "\n";  # pot file
}
Run this script like this:
Code:
perl generate_rand_hash_pot_file.pl > tmp_hashes.txt 2> tmp_hashes.potfile
(where generate_rand_hash_pot_file.pl is the Perl script above, tmp_hashes.txt will be the hash file and tmp_hashes.potfile will be our pot file. Note: the "cracks" are of course not correct, i.e. the password in the potfile does not actually correspond to the MD5 hash, but hashcat doesn't verify this anyway)
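Just to show the expected format (the hex digits below are made-up placeholders, your random values will of course differ), the first line of each file should look roughly like this:
Code:
head -n 1 tmp_hashes.txt
user_0:00112233445566778899aabbccddeeff

head -n 1 tmp_hashes.potfile
00112233445566778899aabbccddeeff:a0b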
After that, run hashcat like this:
Code:
hashcat -m 0 --show --username --outfile-format 2 --potfile-path tmp_hashes.potfile -o outfile.txt tmp_hashes.txt
(btw, you could also shuffle the lines, e.g. with the Linux shuf command, before you run hashcat, but it shouldn't change the speed by a lot, because hashcat uses its own sorting internally, which is different from the output order of generate_rand_hash_pot_file.pl)
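If you do want to test the shuffled variant (assuming GNU coreutils shuf is available; tmp_hashes_shuffled.txt is just an arbitrary file name I chose here), it could look like this:
Code:
shuf tmp_hashes.txt > tmp_hashes_shuffled.txt
hashcat -m 0 --show --username --outfile-format 2 --potfile-path tmp_hashes.potfile -o outfile.txt tmp_hashes_shuffled.txt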
I can't really reproduce a speed problem here. Maybe your problem is a different one (a more specific one).