hashcat Forum

Full Version: NVidia RTX 2080
(09-23-2018, 02:08 PM)Nikos Wrote: @Flomac
It seems that you were wrong about every aspect of the Turing architecture and its implementation after all.
Nope.

Quote:RT cores are fixed-function and not usable by hashcat, and the Tensor cores, although more flexible, end up unusable too.
No new instructions or secret sauce for hashcat and Turing cards.
You don't even know how a GPU chip works and what an instruction set is, right? Read the manuals and come back.

Quote:Also, there is no 75W power consumption; the cards are very hungry and inefficient, consuming more watts than Pascal cards and about as much as Vega.
You have to read my posts carefully. Maybe twice. I said the cards might draw less power under hashcat than in games, and that there is a professional 75W card similar to the RTX 2080.

Quote:In hashcat benchmarks, I think the Turing cards are very close to the Vega 64 while being a lot more expensive, of course.
You're obviously joking. Read the benchmarks.

Quote:Even for games there is no reason at all to buy such ridiculously expensive cards for 30% more fps.
Haters gonna hate.

Quote:All-in-all, skip it.
If you don't know what you're talking about, skip the arrogance. Personal advice: don't waste your time defending some lame company.
I don't care about any GPU manufacturer. If AMD's 7nm Vega turns out to be THE new star in the game, wonderful. If not, never mind. But I do care about hash cracking like anyone else here, and we care about the best equipment. Don't try to sell us VEGA like it is the ultimate thing. If the numbers don't sell it, you won't either.
The results are in. The standard benchmark values are compared since this makes most sense and keeps the list clear.


Overall:
RTX 2080 vs. GTX 1080: +63%
RTX 2080 vs. GTX 980: +197%
GTX 1080 vs. GTX 980: +82%

The speed-up from Pascal to Turing ranges from 25% to 150%.

Sample:
MD5 +50%
WPA +43%
SHA-512 +75%
7-Zip +103%
LastPass +81%
Bitcoin +75%


RTX 2080 vs GTX 1080 vs GTX 980 comparison sheet

Note: the newer driver has been skipped for the GTX cards, since the hash rates mostly dropped for unknown reasons.
Can you add a comparison with the 1080 Ti as well? Considering the prices, that seems fair.
I saw someone ask a little earlier for the 1070 Ti for comparison, so I went ahead and ran my 1070 Ti at stock clocks as well as at +180.

This is an EVGA 1070 Ti, reference PCB, aftermarket cooler. Nvidia driver 398.36, Win10.

Stock - https://paste.hashkiller.co.uk/ILU8sb_BEeiA_ECNXEjIzQ

+ 180 - https://paste.hashkiller.co.uk/TXX3s7_BEeiA_ECNXEjIzQ
Here is a 1080 Ti FE running stock clocks, hashcat 4.2.1 with driver 398.36 on Win10, if you would like to add it to the comparison sheet.

https://paste.hashkiller.co.uk/fQ2_WL_OEeiA_ECNXEjIzQ
Thanks for the benchmarks, guys. I'll add them tomorrow.
@Flomac
The issue here is that I did read your posts, and from the beginning of the thread, actually.
You were desperately trying to pump up the hype, to build an image of Turing that simply doesn't exist.
The benchmarks you posted show a smaller difference between Turing and Pascal than between Pascal and Maxwell, at a much higher price.

If this isn't the definition of failure, then what is it?

Also, in the context of hashcat, I'm waiting for you or anyone else to show me which new instructions could be leveraged to crack faster.

And please tell me which part of your analysis of the Turing cards was accurate after seeing real-world performance.

I don't think Vega is the perfect chip, but since there is no other GPU to compete with nVidia cards I would like to see a comparison between those three:
Pascal vs Vega vs Turing.

I'm not interested at all in selling ANYTHING, AMD cards or otherwise.

But you, on the contrary, seem far too emotionally attached to nVidia cards.

AMD can't compete with nVidia right now, but that doesn't mean we have to buy every new nVidia card just because it's nVidia.

We don't have to buy anything, actually.

We can keep our money in our pockets and just wait, you know.
IMO, when a card like the Vega 64 has a "295W" TDP (which we all know is WAY higher in practice), thermal throttles almost immediately, and is years behind, unable to even compete with a 2-3 year old alternative card, there's a clear loser. I'm not discrediting AMD; they seriously stepped up their CPU game over the last few years, but their GPU game needs a lot of work. Hopefully we will see that with whatever they release next (Navi?). Until then, AMD can't compete in this scene.

Personally, I'd love to see a turnaround to bring back competition and drive down prices because the RTX series is horribly overpriced. Intel is already starting to shit their pants now that AMD is taking a huge chunk of the CPU market back and it's drawing an insane amount of investor attention.
Nikos,

although it might be a waste of time to spend any more of it on this discussion with you, let's correct a few things once and for all.

(09-25-2018, 07:11 AM)Nikos Wrote: The benchmarks you posted show a smaller difference between Turing and Pascal than between Pascal and Maxwell, ...
Yes, sure. Anything else would be strange, since Pascal to Turing was a "normal" step from 16nm to 12nm, whereas Maxwell to Pascal was an exceptionally big step from 28nm to 16nm. Quantified, 16nm to 12nm amounts to roughly a factor of 1, while 28nm to 16nm amounts to a factor of ~1.8.
(The usual steps are 28nm -> 20nm -> 14nm -> 10nm -> 7nm.)
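
To put rough numbers on that (a back-of-the-envelope that treats the marketing node names as linear feature sizes, which is only an approximation):

28nm -> 16nm: 28 / 16 = 1.75, i.e. roughly the ~1.8 factor above
16nm -> 12nm: nominally 16 / 12 = 1.33, but TSMC's 12nm is essentially a refined 16nm process, so the effective gain is close to 1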


Quote:...at a much higher price.
Yep. As you stated correctly earlier, there are too many GTX 10xx out there that still need to be sold.

Quote:If this isn't the definition of failure, then what is it?
Nothing. It's normal business. CPUs tend to be extremely expensive at the high end, even double the price for a mere +20%. Take a look at the server world, where it's even more absurd. No one cries about it.

A failure would be no one buying the new cards, like it happened to AMD a few times in the past when they came out with new cards and made big losses.

Since NVidia is still well ahead of AMD in games (and now even in features), there is no failure in sight.

Btw, AMD also made a step from 28nm all the way to 14nm. The result? Disappointing. THAT you could call a failure.

Quote:Also, in the context of hashcat, I'm waiting for you or anyone else to show me which new instructions could be leveraged to crack faster.
There are more than 50 new instructions in Turing. Some might be useful for hashcat, some not. (More: CUDA 10 Instruction Set Reference)

Some existing instructions made a big step in latency, e.g. IMAD now needs 5 cycles instead of 84 (because it had been emulated before). Lots of instructions now have shorter latencies. (More: Dissecting the NVIDIA Volta GPU Architecture)
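
To make the IMAD point concrete, here is a minimal CUDA sketch (not hashcat code; the kernel name and constants are made up) of the kind of 32-bit multiply-add hot loop that shows up in many hash round functions. On Turing, the compiler can map x * a + b to a single hardware IMAD, while older architectures emulated it with several instructions:

// imad_sketch.cu - illustrative only, assuming nothing about hashcat's kernels
__global__ void imad_sketch(const unsigned int *in, unsigned int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    unsigned int x = in[i];

    // Arbitrary constants; a real hash round would also mix in message words.
    #pragma unroll
    for (int r = 0; r < 16; r++)
        x = x * 0x5bd1e995u + 0x9e3779b9u; // 32-bit mul-add, a candidate for one IMAD on Turing

    out[i] = x;
}

Compile it with nvcc and dump the SASS (cuobjdump --dump-sass) to see whether the mul-add actually lands on IMAD for a given architecture.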

And again: tune down your arrogance. No one has to show you anything.

Quote:And please tell me which part of your analysis of the Turing cards was accurate after seeing real-world performance.
I stated the RTX 2080 would be 45-55% faster in MD5, and it came in at ~50%. Spot on, I'd say.
The absolute values I posted were too high, since I obviously took the baseline numbers from overclocked GTX cards. My bad. I promise to be more accurate next time.

Quote:I don't think Vega is the perfect chip, but since there is no other GPU to compete with nVidia cards I would like to see a comparison between those three:
Pascal vs Vega vs Turing.
Me too, but I'd need a clean benchmark of VEGA, non-overclocked (!), with hashcat 4.2.1 - if you can deliver one, you're welcome to post it in here.

Quote:I'm not interested at all in selling ANYTHING, AMD cards or otherwise.
Good. Noted.

Quote:But you, on the contrary, seem far too emotionally attached to nVidia cards.
Oh no, I was just excited about their new GPU generation. I was once similarly excited about the Radeon Fury (Fiji), but epixoid was right back then that they'd mess it up, as they surely did.

Quote:AMD can't compete with nVidia right now, but that doesn't mean we have to buy every new nVidia card just because it's nVidia.
Everyone is free to buy whatever they want. Usually people want the best card for their money. And for hashcat, that's mostly NVidia right now.

Quote:We don't have to buy anything, actually.

We can keep our money in our pockets and just wait, you know.
You're leaving out all the people who actually do need to buy something, because they're not script kiddies fiddling around at home with some hashes they found on the net or on their neighbour's wifi router, but are earning serious money with it. They might be very annoyed if they decide on a GTX 1080 Ti because it looks cheaper on paper, only to find out that an RTX 2080 is way faster at the hashes they're working with.


May I remind you that it was you who stated "for hashcat like games, the new architecture is going to be slower" or "Turing is a lot slower regarding price to performance ratio."
Do the math and you'll see: It depends on what you're planning to do with it.

With no proof or arguments you were bullshitting and confusing everyone. Grow up man.
I've updated the benchmark comparison with the 1080 Ti.

Overall:
RTX 2080 vs. GTX 1080 Ti: +9%
RTX 2080 vs. GTX 1080: +63%
RTX 2080 vs. GTX 980: +197%
GTX 1080 Ti vs. GTX 1080: +49%
GTX 1080 vs. GTX 980: +82%


But something is odd with hashcat's tuning parameters. For example:
Hashmode: 11600 - 7-Zip (Iterations: 524288)
GTX1080:     8151 H/s (56.87ms) @ Accel:256 Loops:64 Thr:768 Vec:1 (driver 398.82)
GTX1080:     8065 H/s (311.53ms) @ Accel:512 Loops:512 Thr:256 Vec:1 (driver 411.62)
GTX1070Ti: 9886 H/s (361.37ms) @ Accel:256 Loops:512 Thr:768 Vec:1 (driver 398.36)

Both cards should run with the same tuning parameters, but they just don't.

With the GTX 980, turning on -w 3 or -w 4 results in up to 50% lower hash rates.
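
If someone wants to rule out the autotuner as the culprit, the tuning can be pinned by hand; a hedged example (hashcat 4.x style flags, -n = --kernel-accel and -u = --kernel-loops; the values are simply the ones from the GTX 1080 line above, and this assumes the build accepts manual tuning in benchmark mode):

hashcat -b -m 11600 -n 256 -u 64

With accel/loops fixed, any remaining difference between the two drivers or between the 1080 and the 1070 Ti should come from the card and driver themselves rather than from the autotune.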