09-18-2018, 07:55 PM
After reading a few articles on the Turing architecture, I can say for sure that hashcat can't use RT cores or Tensor cores.
RT cores are built for floating-point ray-intersection math, which is useless to hashcat, and Tensor cores operate on low-precision matrix data (INT4/INT8, plus FP16), which is equally useless: hash algorithms are built almost entirely on 32-bit integer operations.
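To make that concrete, here's a minimal sketch (not hashcat's actual kernel, just an illustrative MD5-style step) of the kind of work a cracking kernel does. Everything in it is 32-bit integer add/logic/rotate running on the ordinary CUDA ALUs, which is exactly the path RT and Tensor cores don't accelerate:

```cuda
// Minimal sketch, NOT hashcat's real code: one MD5-style round step,
// showing that hash cracking is pure 32-bit integer arithmetic.
#include <cstdio>
#include <cstdint>

__device__ __forceinline__ uint32_t rotl32(uint32_t x, uint32_t n)
{
    return (x << n) | (x >> (32u - n));  // 32-bit rotate, a core hash primitive
}

__global__ void md5_style_step(uint32_t *out, uint32_t a, uint32_t b,
                               uint32_t c, uint32_t d, uint32_t m, uint32_t k)
{
    // F(b,c,d) = (b & c) | (~b & d): pure 32-bit integer logic
    uint32_t f = (b & c) | (~b & d);
    // Every operand is a full 32-bit integer; INT4/INT8 tensor math and
    // FP ray-intersection hardware offer nothing for this workload.
    out[0] = b + rotl32(a + f + k + m, 7);
}

int main()
{
    uint32_t *d_out, h_out;
    cudaMalloc(&d_out, sizeof(uint32_t));
    // MD5 initial state and the first round constant, as an example input
    md5_style_step<<<1, 1>>>(d_out, 0x67452301u, 0xefcdab89u,
                             0x98badcfeu, 0x10325476u, 0u, 0xd76aa478u);
    cudaMemcpy(&h_out, d_out, sizeof(uint32_t), cudaMemcpyDeviceToHost);
    printf("step output: %08x\n", h_out);
    cudaFree(d_out);
    return 0;
}
```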
Regarding CUDA performance, I find your speculated speeds extremely exaggerated.
I looked at real gaming results from the Final Fantasy benchmark's online database, and the performance differences are roughly:
2080 Ti vs 1080 Ti ~ 30%
2080 vs 1080 ~ 30%
2080 vs 1080 Ti ~ 5%
So I expect the raw speed difference in hashcat to be even lower than this, since new architectures include optimizations aimed specifically at games that don't translate into raw execution speed; the rough estimate below points the same way.
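As a sanity check, here's a back-of-the-envelope estimate from the public CUDA core counts and reference boost clocks of these cards. It's only a naive cores-times-clock proxy, ignoring IPC changes, memory behavior, and kernel tuning, but it already lands below the ~30% gaming deltas:

```cuda
// Back-of-the-envelope sketch: estimate the raw 32-bit ALU throughput gap
// from CUDA core counts and reference boost clocks alone. These are public
// launch specs; real hashcat speeds also depend on IPC, memory, and kernel
// tuning, so treat this as a rough upper-bound guess, not a benchmark.
#include <cstdio>

int main()
{
    struct Gpu { const char *name; double cores, mhz; };
    Gpu p1080ti = { "GTX 1080 Ti", 3584, 1582 };
    Gpu p2080ti = { "RTX 2080 Ti", 4352, 1545 };
    Gpu p1080   = { "GTX 1080",    2560, 1733 };
    Gpu p2080   = { "RTX 2080",    2944, 1710 };

    // Naive throughput proxy: CUDA cores * boost clock
    double r_ti = (p2080ti.cores * p2080ti.mhz) / (p1080ti.cores * p1080ti.mhz);
    double r    = (p2080.cores   * p2080.mhz)   / (p1080.cores   * p1080.mhz);

    printf("2080 Ti vs 1080 Ti: ~%+.0f%%\n", (r_ti - 1.0) * 100.0);  // ~ +19%
    printf("2080    vs 1080   : ~%+.0f%%\n", (r    - 1.0) * 100.0);  // ~ +13%
    return 0;
}
```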
After all this, I maintain that the RTX cards are one of nVidia's biggest failures of all time, for two reasons:
Performance and price.
What a disastrous combination for RTX cards!