DES raw algorithm
I was playing around with hashcat, especially the DES mode. What made me curious: if I use generated test vectors with 5000, 10000 and 2000 random key/plain/cipher blocks,
on my test hardware (HD 7870) each takes around 3 years to complete, but the same is true for a set with 5x10^6 blocks.
The keyspace progress is also about the same.

From my understanding there should be a significant difference:
when testing a key against a plaintext or ciphertext (encrypt/decrypt), the key expansion and IP only need to be calculated
once and can be reused in every thread, but the 16 rounds of F need the plaintext or ciphertext itself, so we have to iterate them over every single block.
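A minimal cost-model sketch of the asymmetry described above (the per-operation weights are illustrative placeholders I chose, not hashcat's measured costs): the key schedule is paid once per candidate key, while IP plus the 16 Feistel rounds are paid once per key per block, so per-key work should grow roughly linearly with the number of blocks.

```python
# Rough cost model for brute-forcing DES against N known plain/cipher blocks.
# The unit costs are illustrative placeholders, not measured values.
COST_KEY_SCHEDULE = 16   # PC-1/PC-2 subkey derivation, computed once per key
COST_IP = 1              # initial permutation of one data block
COST_ROUND = 1           # one Feistel round (16 per block encryption)

def work_per_key(num_blocks: int) -> int:
    """Total unit cost to test one candidate key against num_blocks blocks."""
    shared = COST_KEY_SCHEDULE              # shared across all blocks
    per_block = COST_IP + 16 * COST_ROUND   # repeated for every block
    return shared + num_blocks * per_block

for n in (1, 3, 5000, 5_000_000):
    print(n, work_per_key(n))
```

Under this toy model, 5x10^6 blocks should cost roughly a million times more per key than a single block, which is exactly why the identical runtimes seem surprising.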

Even with many GPU cores and bitslicing, the difference is, to my knowledge, too big for the results to come out the same.

Can someone please point out how this is possible?

-m 14000 des.vector -o our.txt -a 3 -1 charsets/DES_full.charset --hex-charset ?1?1?1?1?1?1?1?1 -w 4
DES cracking is pretty fast with a decent GPU.

Quote:root@et:~/hashcat# ./hashcat -m 14000 hash --hex-charset -1 charsets/DES_full.charset -a 3 ?1?1?1?1?1?1?1?1
hashcat (v3.30-318-gdd55c1e+) starting...

OpenCL Platform #1: NVIDIA Corporation
* Device #1: GeForce GTX 1080, 2026/8107 MB allocatable, 20MCU
* Device #2: GeForce GTX 1080, 2028/8114 MB allocatable, 20MCU
* Device #3: GeForce GTX 1080, 2028/8114 MB allocatable, 20MCU
* Device #4: GeForce GTX 1080, 2028/8114 MB allocatable, 20MCU

OpenCL Platform #2: Intel(R) Corporation
* Device #5: Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz, skipped

Hashes: 1 digests; 1 unique digests, 1 unique salts
Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes, 5/13 rotates

Applicable Optimizers:
* Zero-Byte
* Precompute-Final-Permutation
* Not-Iterated
* Single-Hash
* Single-Salt
* Brute-Force

Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 75c

[s]tatus [p]ause [r]esume [b]ypass [c]heckpoint [q]uit => s

Session..........: hashcat
Status...........: Running
Hash.Type........: DES (PT = $salt, key = $pass)
Hash.Target......: 7f352797741b6cd2:7565528520513340
Time.Started.....: Mon Feb 20 13:09:24 2017 (2 secs)
Time.Estimated...: Fri Mar 3 16:48:17 2017 (11 days, 3 hours)

So, about 5.5 days on average (half the 11-day full-keyspace ETA).
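A quick sanity check of that figure (assuming the quoted ETA covers the full 2^56 keyspace on the four GTX 1080s):

```python
# Implied aggregate keyrate for the quoted run: full DES keyspace in the
# "11 days, 3 hours" shown in the status output.
KEYSPACE = 2 ** 56                        # 56-bit effective DES key
eta_seconds = 11 * 24 * 3600 + 3 * 3600
rate = KEYSPACE / eta_seconds
print(f"{rate / 1e9:.1f} GH/s aggregate")
print(f"expected time to hit a random key: {eta_seconds / 2 / 86400:.2f} days")
```

That works out to roughly 75 GH/s across the rig, and an expected hit time of half the ETA, which is where the ~5.5 days comes from.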
Thanks for your response. Of course this makes sense for a single hash, but in this case I generated multiple hashes
with unique cipher and plain blocks evenly distributed over the keyspace, and the speed was the same. I can see that the progress maximum goes up, so I guess the time estimate is just its upper end. Or is it an estimate of
when the next hash will be completed?

What I don't get is:
Hashes 3:       Progress.........: 759169024/216172782113783808 (0.00%)
Hashes 100:     Progress.........: 654311424/7205759403792793600 (0.00%)
Hashes 50000:   Progress.........: 654311424/5764607523034234880 (0.00%)
Hashes 1000000: Progress.........: 645922816/4611686018427387904 (0.00%) <- Overflow ???
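Those denominators are consistent with the progress total being the keyspace multiplied by the number of target hashes, held in an unsigned 64-bit counter that wraps once the product exceeds 2^64. A quick check (Python, so the big integers are exact):

```python
# Progress denominators from the status output: keyspace (2^56) times the
# number of target hashes, reduced modulo 2^64 like an unsigned 64-bit int.
KEYSPACE = 2 ** 56

def progress_total(num_hashes: int) -> int:
    return (num_hashes * KEYSPACE) % 2 ** 64   # uint64 wraparound

assert progress_total(3) == 216172782113783808
assert progress_total(100) == 7205759403792793600
assert progress_total(50000) == 5764607523034234880          # wrapped
assert progress_total(1000000) == 4611686018427387904        # == 2**62, wrapped
```

Since 2^56 * 2^8 = 2^64, only `num_hashes mod 256` survives the wrap, which is why the 1000000-hash total collapses to 64 * 2^56 = 2^62: the "Overflow ???" suspicion is correct.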

So does this mean that blocks with different plaintexts are checked consecutively, adding up to the overall work?

So basically: generate a key, encrypt each unique plaintext, check against a bloom filter (or similar)
for all hashes before going on to the next key?
Sure, the ETA goes up, but not because of the code (a bloom filter is already integrated); it is because the number of salts increases. Your attack, whatever you're doing, is somehow not cleanly designed. The best way to solve it is to patch the kernel.
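To make that concrete, here is a minimal sketch of the loop structure being described (hypothetical Python, not hashcat's actual kernel; `des_encrypt` is a stand-in parameter, and the toy cipher below exists only to show the control flow): in -m 14000 each unique plaintext behaves like a salt, so every candidate key is encrypted once per plaintext, and total work, and hence the ETA, scales as keys x salts.

```python
# Hypothetical multi-"salt" cracking loop: each unique plaintext acts as a
# salt, so every candidate key must be encrypted once per plaintext.
def crack(keys, targets, des_encrypt):
    """targets maps plaintext -> set of target ciphertexts; returns matches."""
    found = []
    for key in keys:                        # outer loop: the keyspace
        for pt, cts in targets.items():     # inner loop: one pass per "salt"
            if des_encrypt(key, pt) in cts:
                found.append((key, pt))
    return found

# Toy stand-in cipher purely for demonstration (NOT DES):
toy = lambda key, pt: (key * 31 + pt) % 997
hits = crack(range(1000), {5: {toy(123, 5)}, 9: {toy(777, 9)}}, toy)
print(hits)  # recovers the keys planted for each plaintext
```

The inner loop is why adding targets multiplies the runtime even though the per-key key schedule is shared, matching the ETA behaviour observed in the thread.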