Posts: 2,936
Threads: 12
Joined: May 2012
Everyone please note: when Flomac says "So new is always better in energy terms. And the performance of a GPU generation scales very good under hashcat. 30% more performance result in 30% more energy used", he's talking about Nvidia only. This does not apply at all to AMD. Just so we're clear on that. Don't go buy a brand-new AMD GPU and say "Well Flomac said new GPUs are more power efficient!" as your smoke detector is going off in the next room.
Posts: 40
Threads: 5
Joined: Dec 2013
03-11-2017, 09:00 AM
(This post was last modified: 03-11-2017, 09:01 AM by NikosD.)
VEGA is already faster than 1080 in OpenCL benchmarks.
Take a look at the bitcoin mining.
Posts: 381
Threads: 1
Joined: Aug 2014
03-11-2017, 06:48 PM
(This post was last modified: 03-11-2017, 06:49 PM by Flomac.)
(03-11-2017, 04:35 AM)epixoip Wrote: "Well Flomac said new GPUs are more power efficient!" as your smoke detector is going off in the next room.
Yep, I'd plead "not guilty". I thought I was clearly talking about NVidia. With AMD, well, you never know. They are renaming a lot of their chips, even multiple times, so a new numbering does not necessarily mean "new generation". The new VEGA will definitely be more efficient, since it moves to a new fabrication process.
But the "30% for 30%" clearly applies to NVidia only.
Posts: 2,936
Threads: 12
Joined: May 2012
(03-11-2017, 09:00 AM)NikosD Wrote: VEGA is already faster than 1080 in OpenCL benchmarks. Take a look at the bitcoin mining.
I'm not inclined to believe that, as I just had a long discussion on Twitter with a guy who claimed the RX 480 was 4x faster than the GTX 1080 as well. Turned out his mining software was not at all optimized for Nvidia. I've also seen miners claim that the HD 7970 is faster than the GTX 980 Ti, that the R9 290X is faster than the GTX 1070, and so on and so forth. Mining has historically been an AMD game, and I strongly suspect no one is paying attention to Nvidia nor knows how to properly optimize for Nvidia. So mining benchmarks aren't at all a reflection of actual GPU performance.
That said, it's wholly possible that Vega will benchmark higher than the GTX 1080 with Hashcat as well. But that doesn't mean it's faster. Benchmarks don't run long enough for PowerTune to fully kick in. In real-world cracking scenarios, Vega will throttle like a motherfucker to keep the wattage down, as it will still draw more power than the card can electrically support.
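To put that throttling argument into rough numbers: a short benchmark finishes before the power cap engages, while a sustained cracking job runs at whatever clock fits under the cap. Here is a minimal sketch, assuming dynamic power scales roughly with the cube of the clock (a common DVFS rule of thumb); the wattage and clock figures are made up for illustration, not actual Vega specs:

```python
def sustained_clock(max_clock_mhz, power_at_max_w, board_limit_w):
    """Estimate the clock a power-capped card settles at under sustained load."""
    if power_at_max_w <= board_limit_w:
        return max_clock_mhz  # the cap never engages, no throttling
    # invert P ~ f^3: f_sustained = f_max * (P_limit / P_max)^(1/3)
    return max_clock_mhz * (board_limit_w / power_at_max_w) ** (1 / 3)

# hypothetical card: wants 375 W at 1500 MHz but is capped at 300 W
print(f"{sustained_clock(1500, 375, 300):.0f} MHz sustained")  # ~1392 MHz
```

The gap between the two clocks is exactly the gap between a benchmark number and real-world throughput.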
Posts: 381
Threads: 1
Joined: Aug 2014
03-11-2017, 11:50 PM
(This post was last modified: 03-11-2017, 11:52 PM by Flomac.)
(03-11-2017, 08:27 PM)epixoip Wrote: In real-world cracking scenarios, Vega will throttle like a motherfucker to keep the wattage down, as it will still draw more power than the card can electrically support.
Let's not judge so fast. AMD has a small advantage with their fabrication process (14nm instead of 16nm) and HBM memory. Both usually raise energy efficiency. The base specs don't look bad. If the supposed 4096 cores are correct at speeds of 1.5GHz, it will be very tight for the GTX1080. But then Vega aims for the Ti model anyway, and that's a challenge I'm not sure it will win.
Sure, they claim 12.5 TFLOPS, which is more than 10% higher compared to NVidia's GP102. But these 10% can easily be lost in bad OpenCL software, which is AMD's weakest point.
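The claimed figure lines up with the rumored shader count: peak FP32 throughput is simply shaders x 2 ops per clock (an FMA counts as two) x clock. A quick sanity check, using the 4096-shader / 1.5GHz rumored specs from above:

```python
def fp32_tflops(shaders, clock_ghz, ops_per_clock=2):
    # peak FP32 = shader count * ops per clock (2 for FMA) * clock in GHz
    return shaders * ops_per_clock * clock_ghz / 1000

print(fp32_tflops(4096, 1.5))  # ~12.3 -- the 12.5 TFLOPS claim implies ~1.53 GHz
```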
I guess it will be like Ryzen: nice, reasonably priced, very competitive, maybe even faster than NVidia in some respects. But also with deficits, and not at all a no-brainer. NVidia is VERY fast under Hashcat and it will take a lot to get on the same level. Drivers, energy consumption, cooling, throttling: lots of things can still go wrong.
Vega is supposed to arrive at the end of May, and rumour has it the Pascal refresh won't be far from that date. So it might be an interesting summer. Let's hope they can be competitive and make NVidia slash some prices.
@epixoip: seriously, you had a long discussion with someone who still does Bitcoin mining AND does it on ordinary PC hardware?!? You must have a big heart.
Posts: 2,936
Threads: 12
Joined: May 2012
My heart is pretty big, but you're way more optimistic about AMD than I am.
Posts: 40
Threads: 5
Joined: Dec 2013
(03-11-2017, 08:27 PM)epixoip Wrote: (03-11-2017, 09:00 AM)NikosD Wrote: VEGA is already faster than 1080 in OpenCL benchmarks. Take a look at the bitcoin mining. Mining has historically been an AMD game, and I strongly suspect no one is paying attention to Nvidia nor knows how to properly optimize for Nvidia. So mining benchmarks aren't at all a reflection of actual GPU performance.
The problem, the serious problem, for the Pascal 1080 is that, as you can plainly see, it loses ALL but one of the OpenCL benchmarks to VEGA.
You hashcat guys will have to try very hard for Nvidia to win this battle with VEGA in OpenCL cracking.
Unless the preliminary OpenCL benchmarks are as biased towards AMD as you are towards Nvidia.
Thankfully atom is in the middle, so I'm pretty sure VEGA is going to be clearly faster than the 1080, and without thermal or power throttling.
After Ryzen, VEGA now seems to be on a very good path after years.
2017 will be AMD's year.
Posts: 40
Threads: 5
Joined: Dec 2013
(03-11-2017, 11:50 PM)Flomac Wrote: (03-11-2017, 08:27 PM)epixoip Wrote: In real-world cracking scenarios, Vega will throttle like a motherfucker to keep the wattage down, as it will still draw more power than the card can electrically support. Let's not judge so fast. AMD has a small advantage with their fabrication process (14nm instead of 16nm) and HBM memory. Both usually raise energy efficiency. The base specs don't look bad. If the supposed 4096 cores are correct at speeds of 1.5GHz, it will be very tight for the GTX1080. But then Vega aims for the Ti model anyway, and that's a challenge I'm not sure it will win.
Sure, they claim 12.5 TFLOPS, which is more than 10% higher compared to NVidia's GP102. But these 10% can easily be lost in bad OpenCL software, which is AMD's weakest point.
The sample that produced the above results, clearly outperforming the 1080 on every test but one and losing only to the 1080 Ti, is clocked at 1200MHz.
If that clock stands, we are no longer talking about 12.5 TFLOPS, but less.
I really don't know if AMD will manage to hit their target of a 1500MHz clock and 12.5 TFLOPS, but if they do, then according to that OpenCL table they could catch the 1080 Ti.
At a 1500MHz clock, the 1080 Ti is within VEGA's range.
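Running the same peak-FP32 formula at the sample's clock shows the shortfall, and also what clock would be needed to reach 1080 Ti territory. The 1080 Ti figures used here (3584 CUDA cores at ~1.58GHz boost) are its public specs; note this is peak arithmetic only, not cracking performance:

```python
def fp32_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000  # 2 FP32 ops per clock (FMA)

print(fp32_tflops(4096, 1.2))   # ~9.8 TFLOPS at the tested sample's 1200MHz
ti = fp32_tflops(3584, 1.58)    # ~11.3 TFLOPS for a 1080 Ti at boost clock
print(ti * 1000 / (4096 * 2))   # ~1.38 GHz -- the clock VEGA would need to match it
```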
Posts: 381
Threads: 1
Joined: Aug 2014
(03-12-2017, 10:21 AM)NikosD Wrote: Unless the preliminary OpenCL benchmarks are as biased towards AMD as you are towards Nvidia.
No one is biased for anything. The fact is AMD has not had a proper Hashcat GPU in years. The mighty power of the 290X came with a bunch of drawbacks. If you're doing cracking professionally and not just for fun, some of these faults were crucial. The watercooling of the Fury X made it impossible to put a bunch of cards in a server rack. And don't get me started about drivers and software.
That's why almost everyone here favors NVidia.
(03-12-2017, 10:47 AM)NikosD Wrote: The sample that produced the above results, clearly outperforming the 1080 on every test but one and losing only to the 1080 Ti, is clocked at 1200MHz.
If that clock stands, we are no longer talking about 12.5 TFLOPS, but less.
The clock will not remain at 1.2GHz. AMD is doing a die shrink from 28nm to 14nm, which is a huge step (actually it's two steps).
Die shrinks can be used for:
- raising clock speeds
- increasing transistor count
- lowering energy consumption
- shrinking die size
Let's roughly analyze the new GTX1080 compared to the GTX980. NVidia made a die shrink from 28nm to 16nm. A shrink down to 14nm would be a 4x factor; since the curve isn't linear, going from 28nm to 16nm results in a factor of ~3.2:
- clock speed ~ 1.4x
- transistors ~ 1.4x
- die size ~ 0.8x (1/0.8 ≈ 1.26; squared ≈ 1.6x)
- power TDP 1.0x
1.4 x 1.4 x 1.6 x 1.0 = 3.14x -> that's the combined factor of improvements attributable to the die shrink. It's very close to the predicted ~3.2x.
Compared with the Fury X, Vega's shader count remains the same at 4096. Even if the shader logic is a bit more complex, it will not contain too many more transistors.
So, doing the same calculation as above:
- clock speed 1.5x (~1.5GHz)
- transistors ~1.2x
- die size 0.8x (-> 1.6x)
- power TDP 0.8x (-> 1.25x)
1.5 x 1.2 x 1.6 x 1.25 = 3.6x -> still under the 4.0x theoretically offered by the die shrink (28nm to 14nm).
So even with 20% more transistors (though the shader count remains the same), a 50% higher clock rate, a die shrink and 20% lower power consumption, the above values look like a fair estimate.
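The two estimates, written out as one small calculation. The per-item factors are the rough estimates from this post, not measured values; the die-size benefit enters as the ~1.6x derived above:

```python
def shrink_gain(clock, transistors, area_benefit, power):
    # multiply the individual improvement factors of a generation jump
    return clock * transistors * area_benefit * power

# GTX 980 -> GTX 1080 (28nm -> 16nm): close to the predicted ~3.2x
print(round(shrink_gain(1.4, 1.4, 1.6, 1.0), 2))   # 3.14

# Fury X -> Vega (28nm -> 14nm): under the (28/14)**2 = 4.0x ceiling
print(round(shrink_gain(1.5, 1.2, 1.6, 1.25), 2))  # 3.6
```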
And of course, everything can still be messed up. Screw up the cooling system and the GPU will "throttle like a motherfucker" (epixoip). Maybe the power reduction needs to be bigger. Drivers can easily be crapped up. And so on.
And to answer epixoip: I'm not optimistic, just trying to be realistic, with you being pessimistic about AMD, which I totally understand, considering where AMD has been floating around these last years.
Posts: 2,936
Threads: 12
Joined: May 2012
My pessimism is grounded in realism though. Keep in mind that Bitweasil and I predicted AMD's sharp downturn back in 2012. Even though the 7970 had just been released and was by all rights a fantastic card, we could see the storm clouds on the horizon. And lo, the rain did cometh. And all of those indicators that there were dark days ahead are still there; nothing has changed at AMD. Yeah, GCN got a process shrink, but it's needed a process shrink since 2014. Vega isn't a sign that the storm is over -- it's a sign that we're in the eye of the storm. Don't be fooled by the sun.