hashcat Forum

Full Version: GTX Titan X

So, VideoCardz confirms that the GTX Titan X will have 3072 shaders at 1.076 GHz.
A rough speed estimate (margin of error should be ~5%) between the GTX Titan X and the GTX 980: (3072 * 1.076) / (2048 * 1.190) = 1.3563.
This means a GTX Titan X should be around 35% faster than a GTX 980.
However, if the rumored price of $1000 proves true, the GTX Titan X will not beat the GTX 980 in terms of performance/$.
Still, it's the fastest single-GPU graphics card Nvidia has to offer.
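That estimate can be sketched as a quick calculation (a rough model assuming throughput scales linearly with shaders × clock; the ~5% margin of error covers everything this ignores):

```python
# Rough cross-card speed estimate from shader count and clock alone.
# Assumes performance scales linearly with cores * clock, which holds
# reasonably well within a single GPU architecture.

def scaling_estimate(cores_new, clock_new_ghz, cores_old, clock_old_ghz):
    """Ratio of estimated throughput: new card vs. old card."""
    return (cores_new * clock_new_ghz) / (cores_old * clock_old_ghz)

# GTX Titan X (3072 shaders @ 1.076 GHz) vs. GTX 980 (2048 @ 1.190 GHz)
ratio = scaling_estimate(3072, 1.076, 2048, 1.190)
print(f"Titan X / GTX 980: {ratio:.4f}")  # ~1.3563, i.e. ~35% faster
```

The ratio only makes sense within one architecture; comparing across Maxwell and GCN this way breaks down, as discussed later in the thread.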

Rolf out.
The price is enormous, but the 12GB of RAM have to be paid for, I guess.

Besides that, I noticed some strange behavior at one IT dealer two days ago, when prices for the GTX 980 and 970 suddenly dropped. Different brands like KFA², EVGA and MSI. Both cards dropped about 50 bucks each, and only at that special dealer (a big one, though). Then in the evening, prices were back at the old level. I guess with the Titan X there will be a price cut on the 980/970, and that dealer jumped the gun by a week.

We'll see.
Another source confirms the power consumption: it's going to be around the same as the original GTX Titan's, 250W.
If the rumors about the 980Ti are true, then it looks like this will essentially be a 980Ti with double the RAM.
The 980Ti will clearly have fewer shaders than the Titan X; rumors say 2816. Combined with 6GB, the price should be significantly lower. The Titan's 250W would be totally logical (multiply the GTX 980's 165W by 1.5).

The 390X will be the game changer and will make Nvidia adjust prices to be competitive (well, with the usual Nvidia surcharge). Unfortunately it will come with water cooling, so we cannot use it in server chassis unless an OEM fits it with a working air cooler. I'm also expecting a 380X; maybe they'll take the 290X die, retune it for efficiency and sell it at the current price (or less).

And again, we'll see.
Hmm, the last rumors I heard said 3072 shaders for the 980Ti, but it's possible they were seeing the Titan X and thinking it was the 980Ti. Guess we'll have to wait and see.

I strongly disagree that the 390X will be a game changer. It will almost certainly be worthless for Hashcat, for the same reasons the 295X2 is worthless. Sure, it will be fast even with the aggressive throttling, but as you identified, it will only work in a single-card desktop unless you go full water cooling, and the server market is out completely. If AMD does make a 380X, it will likely be a rebranded 290X (just like the 280X is a rebranded 7970).

The other problem with the 390X is that it's a strong indication of a major issue at AMD: they cannot innovate. They've been promising "GCN 2.0" for years now and cannot deliver. So their strategy is just to keep adding more cores to their aging architecture for their flagship GPU, driving the power & heat requirements way up, and then rebranding the last-gen flagship GPU as their "next best offering." This will be the fourth year in a row now that AMD has stuck with GCN, and they can only milk it for so long. Pretty soon they won't be able to tack any more cores onto GCN (surely by the end of this year), and then it will be time to shit or get off the pot. Will AMD deliver a new architecture next year? I highly doubt it. At one time I held out hope, but at this point I think they're done. They can't compete in the CPU world anymore, they can't compete in the GPU world anymore. About the only place they are winning is consoles. And look at their financials. Sure, they're a little over a billion in revenue, but they've been posting net losses (or very small net gains) for the past three years and laying off employees. They likely won't be around much longer.
The only innovative technology AMD is going to implement is stacked VRAM (HBM), and I'm sure they'll trip up somewhere.
I agree on most points except the innovation of the upcoming 390X. Sure, performance-wise they just nail a few more cores onto the die, and even with 4096 cores it seems to be only slightly faster than the Titan X with its 3072. But they also add HBM, and that's where the game changing starts. It should have a big impact on memory bandwidth and latency, but even if that impact turns out small, it radically changes PCB design. Since the memory now sits on the package, the PCB layout only needs the PCIe connection and some power-delivery stuff, which can now be positioned almost freely on the PCB for best efficiency. So PCBs will be more compact and hence cheaper to produce. On top of that, the revenue for the memory modules now shifts from the OEM to AMD, raising their profit per card.

And if the water cooler is only nearly as powerful as the one on the 295X2, it only has to cool away half the wattage. It should easily be capable of doing so, and I bet the 390X can run at full speed even under hashcat if they didn't mess up the cooling system.

I also expect the normal 390 to come with a standard air cooler and a few fewer shaders; 3584 are rumored. Its performance (in games, not hashcat) and energy consumption will be only a little below the mighty Titan X's, for maybe less than half the cost. Which is really not that bad.

Take a look at the bottom of this sheet, maybe we'll all know more tomorrow.

[Image: AMD_Radeon_R9_390_X_WCE_900x491.png]

And didn't it take Nvidia almost 3 years to catch up with GCN?

Since the die shrink to 14/16nm is due next year, there is still a chance AMD combines it with its new architecture. Let's hope they stay in the game and keep up the competition. Or to put it in your words: the shit they're gonna take will be a fine piece of dump.
The new AMD will kick ass with all that 390X power, hail AMD ;--)
The 295X2 doesn't just throttle for heat, it also throttles for power consumption. 4096 GCN cores, even with the die shrink, will likely want to draw somewhere between 375W and 425W at full load, but the card is rumored to have a TDP of only 300W. That means it will throttle. How much it throttles depends entirely on what they do with PowerTune in the 390X's firmware.
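To a first approximation, a power limiter like PowerTune caps sustained clocks in proportion to the headroom between the TDP and the unconstrained draw (ignoring voltage scaling, which makes real throttling somewhat less severe). A sketch with the rumored wattages; the 1050 MHz base clock here is a made-up placeholder, not a leaked spec:

```python
# Back-of-the-envelope throttling estimate under a hard power cap.
# Simplification: clock scales linearly with power at fixed voltage.

def throttled_clock(base_clock_mhz, full_load_draw_w, tdp_w):
    """Estimated sustained clock once the power limiter kicks in."""
    if full_load_draw_w <= tdp_w:
        return base_clock_mhz  # enough headroom, no throttling
    return base_clock_mhz * (tdp_w / full_load_draw_w)

# Hypothetical 390X: 1050 MHz base, 300W TDP, 375-425W unconstrained draw
for draw in (375, 425):
    print(f"{draw}W draw -> ~{throttled_clock(1050, draw, 300):.0f} MHz sustained")
```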

HBM is great and all, but likely won't translate into any significant performance increase for hash cracking. It might mean less parasitic loss for multi-hash cracking, but that remains to be seen.

No, it didn't take Nvidia three years to catch up to GCN. On the contrary, GCN looks an awful lot like an Nvidia GPU. Nvidia has always produced superior GPUs; they just historically hadn't shown good performance for hash cracking until Maxwell. There may be some confusion as to why that is, and what exactly Maxwell changes, so let's review.

Nvidia has always opted for fewer cores running at a much higher clock, whereas AMD has always opted for more cores at much lower clocks (though by ditching VLIW and adopting a more Nvidia-esque design, AMD was able to increase its clock rates substantially). This meant that AMD could perform more integer operations per second than Nvidia, which matters for hash cracking.

But the real reason AMD has had the edge over Nvidia for hash cracking is that AMD has had the BFI_INT and BIT_ALIGN_INT instructions, which are heavily used in hashing algorithms. This means we can reduce the instruction count for most algorithms on AMD versus Nvidia, since Nvidia's ISA lacked these instructions. Maxwell changes this because it introduces LOP3.LUT, a single instruction that can do everything BFI_INT and BIT_ALIGN_INT can do, plus more.
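To make the bit-select case concrete: MD5's F() round function is exactly a per-bit select (take each bit from c where b is 1, from d where b is 0). Without a dedicated instruction that costs three logic ops; BFI_INT on GCN, and now LOP3.LUT on Maxwell, do it in one. A pure-Python illustration of the equivalence:

```python
# Bit-select: the operation BFI_INT / LOP3.LUT collapse into one instruction.
MASK32 = 0xFFFFFFFF  # emulate 32-bit registers in Python

def f_three_ops(b, c, d):
    """MD5 F() the generic way: AND, AND-NOT, OR -- three logic ops."""
    return ((b & c) | (~b & d)) & MASK32

def f_bitselect(b, c, d):
    """Same result expressed as a select: d ^ (b & (c ^ d)).
    On hardware with BFI_INT or LOP3.LUT this is a single instruction."""
    return (d ^ (b & (c ^ d))) & MASK32

for b, c, d in [(0x00000000, 0x12345678, 0x9ABCDEF0),
                (0xFFFFFFFF, 0x12345678, 0x9ABCDEF0),
                (0xDEADBEEF, 0x0F0F0F0F, 0xF0F0F0F0)]:
    assert f_three_ops(b, c, d) == f_bitselect(b, c, d)
```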

This is why Maxwell with 1664 cores @ 1250 MHz can now match the hash-cracking performance of GCN with 2816 cores @ 1000 MHz, and it can do so while drawing *half* the power that GCN draws.
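Plugging those figures in shows how large the instruction-level win has to be: raw core-MHz alone puts that Maxwell part at only about 74% of the GCN part, so LOP3.LUT's instruction-count reduction (plus Maxwell's general per-clock efficiency) must close the rest of the gap. A rough illustration, not a benchmark:

```python
# Raw logic-op budget: cores * clock (MHz). Figures quoted in the post.
maxwell_core_mhz = 1664 * 1250   # e.g. a GTX 970-class Maxwell part
gcn_core_mhz     = 2816 * 1000   # e.g. an R9 290X-class GCN part

ratio = maxwell_core_mhz / gcn_core_mhz
print(f"Maxwell/GCN raw core-MHz: {ratio:.2f}")  # ~0.74

# If the two match in practice, Maxwell must retire each hash in
# roughly this factor fewer core-cycles:
advantage = gcn_core_mhz / maxwell_core_mhz
print(f"implied per-core-MHz advantage: {advantage:.2f}x")  # ~1.35x
```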