Nvidia GTX Titan X Benchmarks
#11
(04-22-2015, 12:12 PM)Rolf Wrote: Comparing stock Titan X with overclocked GTX 980

Thanks for this, Rolf! We're currently working on a page that will allow you to directly compare each GPU in different configurations for each algorithm (and I know dropdead is working on something like this as well).


(04-22-2015, 01:42 PM)Flomac Wrote: Here's a picture from a well-known hardware site, showing the back of a Titan X under full load

That's something I was going to comment on, but forgot: the Titan X doesn't have a backplate! Doesn't make sense to me that the 970 and 980 would have a backplate, but the Titan X wouldn't. Either way, you are correct that for hash cracking the memory shouldn't be getting anywhere near that hot.


(04-22-2015, 07:48 PM)Flomac Wrote: But I'm a bit disappointed that the Titan X is only 30% faster than the GTX 980.

I think you misunderstood. Rolf was comparing the Titan X at stock clocks to the GTX 980 with a 250 MHz overclock. Clock-for-clock, the Titan X is 50% faster than the GTX 980.
#12
(04-22-2015, 09:18 PM)epixoip Wrote:
(04-22-2015, 07:48 PM)Flomac Wrote: But I'm a bit disappointed that the Titan X is only 30% faster than the GTX 980.

I think you misunderstood. Rolf was comparing the Titan X at stock clocks to the GTX 980 with a 250 MHz overclock. Clock-for-clock, the Titan X is 50% faster than the GTX 980.

Hm, OK, it looked to me like they were running at the same clock speed (1215 MHz), so the +50% shaders should deliver +50% more performance.
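To spell out the arithmetic I had in mind (using the published shader counts, 3072 on the Titan X vs. 2048 on the GTX 980, and assuming throughput scales with shaders times clock):

\[ \frac{3072 \times 1215\ \text{MHz}}{2048 \times 1215\ \text{MHz}} = 1.5 \]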
#13
(04-22-2015, 09:53 PM)Flomac Wrote: Hm, OK, it looked to me like they were running at the same clock speed (1215 MHz), so the +50% shaders should deliver +50% more performance.

No, this is misleading because of the way PowerMizer works: the clock rate reported by hashcat will never be correct.
#14
FTR, those clock rates are the ones received from the CUDA runtime.
#15
(04-24-2015, 01:38 AM)atom Wrote: FTR, those clock rates are the ones received from the CUDA runtime.
And they correspond to a certain clock profile in the GPU BIOS, not to the current clocks.
Nvidia isn't in any hurry to fix this.
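If anyone wants to see the mismatch for themselves, here's a rough sketch (untested; assumes Linux with pycuda installed and nvidia-smi on the PATH, device 0):

Code:
# Compare the static clock attribute the CUDA runtime reports (what hashcat
# prints) with the SM clock the driver says the card is running right now.
import subprocess
import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)

# Static device attribute, in kHz -- this matches a BIOS clock profile.
cuda_khz = dev.get_attribute(cuda.device_attribute.CLOCK_RATE)
print("CUDA-reported clock: %d MHz" % (cuda_khz // 1000))

# Live SM clock in MHz, as seen by the driver at this moment.
smi = subprocess.check_output(
    ["nvidia-smi", "-i", "0",
     "--query-gpu=clocks.sm", "--format=csv,noheader,nounits"])
print("Current SM clock:    %s MHz" % smi.decode().strip())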
#16
Where did you buy the Titans? And was there a nice bulk discount?

(04-22-2015, 02:49 AM)epixoip Wrote: First single GPU to break 35 GH/s on NTLM. Can actually break 40 GH/s with a +275 MHz OC, but it's not stable for all algorithms at this clock rate.

How do you determine stability for different algorithms? I have a system where the GPU's overall stability and its heat output are irrelevant, so I want to OC it to the limit for various hash types: for example, one OC for MD5 and a separate OC for WPA. But what tools do you use to measure hash correctness under heavy load?
#17
We don't measure "hash correctness"; it isn't an issue. We measure stability by whether or not the GPU hangs at certain frequencies.

And you could easily measure "hash correctness" by running a list of hashes with known plaintext values and ensuring that 100% of them were cracked. But I've never seen a card stop finding plains because of overclocking.
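If someone really wants to check, a quick harness along these lines would do it (untested sketch; assumes the binary is invoked as hashcat and uses MD5, -m 0, as the test algorithm -- adjust names and paths for your setup):

Code:
# Build a hash list with known plaintexts, feed those same plaintexts back
# as the wordlist, and verify that every hash gets cracked.
import hashlib
import random
import string
import subprocess

plains = {"".join(random.choice(string.ascii_lowercase) for _ in range(8))
          for _ in range(100000)}

with open("known.hashes", "w") as f:
    f.writelines(hashlib.md5(p.encode()).hexdigest() + "\n" for p in plains)

with open("known.words", "w") as f:
    f.writelines(p + "\n" for p in plains)

# Straight attack; the wordlist contains every plaintext, so anything left
# uncracked points to a computation error (or a hang).
subprocess.call(["hashcat", "-m", "0", "-a", "0", "--potfile-disable",
                 "-o", "cracked.out", "known.hashes", "known.words"])

cracked = sum(1 for _ in open("cracked.out"))
print("cracked %d of %d known hashes" % (cracked, len(plains)))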
#18
(05-15-2015, 05:52 PM)epixoip Wrote: We don't measure "hash correctness"; it isn't an issue. We measure stability by whether or not the GPU hangs at certain frequencies.

And you could easily measure "hash correctness" by running a list of hashes with known plaintext values and ensuring that 100% of them were cracked. But I've never seen a card stop finding plains because of overclocking.

Well, I'm not really familiar with GPU architectures, but with a CPU that isn't running the operating system, if you push it too far it will start making mistakes with certain algorithms.

If the GPU is different, then that's great. But keep in mind that I did not imply the GPU would suddenly start getting all hashes wrong, only a fraction of them, so it could be insidious. I was wondering if there was a tool that, under heavy GPU load, would keep checking a series of hashes for correctness and alert you if it got so much as a single one wrong.
#19
Not going to say it's impossible, but this is not something that has been observed.
#20
(05-15-2015, 06:05 PM)epixoip Wrote: Not going to say it's impossible, but this is not something that has been observed.

Well, it would be somewhat difficult to observe, since it would manifest as the inability to crack some very small fraction of hashes. And the driver would likely crash well before someone could push a card to that point.

But what I'm hoping to do is build OC profiles for every hash type of interest, where the GPU is taken to its limit with quite a few driver overrides (roughly like the sketch at the end of this post). Perhaps it is as you say and the GPU will just start hanging well before it starts making hashing mistakes, but I was somewhat hoping there was a tool to be sure.

Different hash types exercise different pathways in the GPU, so it could technically be possible to OC a GPU further than anyone would imagine for some algorithms.

I have seen it done with some machine learning algorithms: the GPU was overclocked by over 80%, and it was stable and correct.
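For the record, this is roughly the kind of per-algorithm wrapper I have in mind (untested sketch; assumes Linux with Coolbits enabled so nvidia-settings exposes GPUGraphicsClockOffset, and the offsets and hash modes below are just placeholders, not tested values):

Code:
# Apply a per-algorithm core clock offset, then launch hashcat for that mode.
import subprocess

# Placeholder offsets in MHz, keyed by hashcat hash mode (-m).
PROFILES = {
    "0":    200,   # MD5: hypothetical +200 MHz core offset
    "2500": 150,   # WPA/WPA2: hypothetical +150 MHz core offset
}

def set_core_offset(mhz, gpu=0, perf_level=3):
    # GPUGraphicsClockOffset[<perf level>] is the driver's Linux OC knob;
    # the performance-level index differs from card to card.
    subprocess.check_call([
        "nvidia-settings", "-a",
        "[gpu:%d]/GPUGraphicsClockOffset[%d]=%d" % (gpu, perf_level, mhz)])

def run(mode, hashfile, wordlist):
    set_core_offset(PROFILES.get(mode, 0))
    subprocess.call(["hashcat", "-m", mode, "-a", "0", hashfile, wordlist])

run("0", "md5.hashes", "wordlist.txt")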