R9 Fury Nitro 1100 mhz gpu clock - so slow - why ?
First you claim Nvidia is a software company, and now you're claiming AMD is a small company? I'm utterly baffled at how you view the world.

Stating that Intel has never made a discrete GPU, or that Nvidia has never made an x86 CPU (which, if you knew your history at all, you'd know Nvidia *can't* produce, thanks to Intel v. Nvidia), is not even remotely relevant. And further stating that AMD's "huge risk and bold move" of acquiring ATi is about to pay off tells me you REALLY don't know your history. You do realize that AMD did not acquire ATi to get into the discrete GPU business, right? They acquired ATi because they were anticipating the death of the discrete GPU. They thought APUs were the future and wanted to corner the market. Obviously that plan backfired spectacularly; while discrete GPU sales have declined more than 50% over the past 6 years (and AMD's own discrete GPU sales have fallen 70% over the same period), discrete GPU sales still total over 40M units per year and certainly aren't going to disappear anytime soon. And consoles are the only market where AMD's APUs have found any traction, as Intel has them squarely beat in the desktop market. And now, because they banked on the notion that the discrete GPU was dead, they haven't released a new GPU architecture in 5 years. That's not the definition of success, nor is it the definition of capitalizing on an investment. And their financials certainly reflect this.

It's no secret that TeraScale and GCN were faster at password cracking than pre-Maxwell Nvidia. Why you present this as some sort of profound fact or hidden insider knowledge is beyond me. What you don't seem to grasp is why ATi GPUs were faster for password cracking. It wasn't because ATi had a superior architecture (they didn't), and it wasn't because ATi GPUs were better (they weren't). It was primarily because ATi had two instructions (as explained above) that enabled us to perform rotates and bitselects in one operation instead of three, reducing the instruction count. That's it. Once Nvidia added a similar instruction in Maxwell, we could exploit Nvidia GPUs in a similar fashion, and suddenly they were much faster than AMD GPUs while drawing less than half the electricity. And with a better driver, better upstream support, and better cross-platform support as well.
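
To make the instruction-count point concrete, here's a rough CUDA sketch of the idea. It's purely an illustration on my part, not hashcat's actual kernel code: a 32-bit rotate normally costs a shift, another shift, and an OR, and the MD5/SHA "choice" step normally costs an AND, an AND-NOT, and an OR, while Maxwell's funnel shift and LOP3.LUT each do the job in one instruction (the same thing AMD's rotate and bitselect instructions had been doing all along).

```
// Rough illustration only, not hashcat source.
// Compile as device code with nvcc -arch=sm_50 (LOP3 needs Maxwell or newer).

// Rotate-left the generic way: shift + shift + or = 3 operations (assumes 0 < n < 32).
__device__ unsigned int rotl32_generic(unsigned int x, unsigned int n)
{
    return (x << n) | (x >> (32 - n));
}

// Rotate-left via the funnel-shift intrinsic: a single SHF instruction on Maxwell+.
__device__ unsigned int rotl32_funnel(unsigned int x, unsigned int n)
{
    return __funnelshift_l(x, x, n);
}

// MD5/SHA-1 "choice" function the generic way: and + and-not + or = 3 operations.
__device__ unsigned int ch_generic(unsigned int a, unsigned int b, unsigned int c)
{
    return (a & b) | (~a & c);
}

// Same function as a single LOP3.LUT; 0xCA is the truth table for (a & b) | (~a & c).
__device__ unsigned int ch_lop3(unsigned int a, unsigned int b, unsigned int c)
{
    unsigned int r;
    asm("lop3.b32 %0, %1, %2, %3, 0xCA;" : "=r"(r) : "r"(a), "r"(b), "r"(c));
    return r;
}
```

Same math, a third of the instructions. That's the entire "secret."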

And that's really the point here. While Nvidia GPUs are getting both faster and more power efficient, AMD's response is to continue gluing more cores onto the same old architecture, and attempt to rein in heat and electricity with firmware. AMD ran up against the electrical and thermal limits of GCN years ago. They're building 375-525W GPUs, sticking them on boards that can only electrically support 150-300W, and relying on firmware to prevent a fire. A die shrink will not save them now; it's physically impossible. How in the world you believe Vega will be some miracle GPU is utterly baffling.
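
To put rough numbers on why a shrink alone doesn't fix this: dynamic power scales roughly as P ≈ αCV²f. The figures below are purely illustrative assumptions on my part, not AMD's actual numbers, but they show how spending the shrink on clock speed puts you right back where you started:

```
% Purely illustrative numbers, not AMD's actual figures.
% Dynamic power: P \approx \alpha C V^2 f
%
% Say the 14nm shrink cuts switched capacitance to 0.6C, but the clock rises
% from ~1.05 GHz to ~1.5 GHz (1.43x) and needs ~10% more voltage (1.21x on V^2):
\[
  \frac{P'}{P} \;\approx\; 0.6 \times 1.21 \times 1.43 \;\approx\; 1.04
\]
% i.e. roughly the same power draw as before the shrink.
```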

You bring up the Instinct MI25... What if I told you the MI25 was just an overclocked R9 Fury X with a die shrink? Or more accurately, what if I told you it was just an HD7970 with a die shrink and 32 additional CUs glued on? Because that's exactly what it is. For AMD to claim it's <300W is beyond laughable. At 14nm, it's likely a 375W+ GPU (actually probably closer to 425W, since the higher clock rate largely cancels out the die shrink's power savings) that they'll attempt to limit to <300W with PowerTune. Which, if you didn't know, means it will throttle under load, and throttling of course destroys performance. It's somewhat acceptable for gaming since gaming workloads are "bursty," but password cracking hammers the ALUs with steady load, and nothing stresses GPUs like password cracking does. Like all post-290X AMD GPUs, it will likely benchmark well because benchmarks are short, but it will fall apart in real-world cracking scenarios. Again, this is still the same old GCN we're well accustomed to. It's absolutely nothing new. Mark my words, Vega will be just as bad as any other AMD GPU made in the last 3 years.
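
If you want to see the short-benchmark-versus-sustained-load effect for yourself, here's a minimal probe (a CUDA sketch of my own, not a hashcat tool; the same idea works in OpenCL on AMD hardware, which is where you'd actually want to run it). It launches the same integer-heavy kernel over and over and prints the time per batch; on a card being held back by firmware power limits, the per-batch times drift upward after the first minute or two, while a short benchmark only ever sees the first few samples.

```
// Minimal throttling probe, illustration only (not a hashcat tool).
// Build: nvcc -arch=sm_50 -o throttle_probe throttle_probe.cu
#include <cstdio>
#include <cuda_runtime.h>

// Steady integer ALU pressure, roughly the kind of load a hash kernel exerts.
__global__ void hammer_alus(unsigned int *out, int iters)
{
    unsigned int x = threadIdx.x ^ blockIdx.x;

    for (int i = 0; i < iters; i++)
    {
        x = __funnelshift_l(x, x, 5) + (x ^ 0x5a827999u);
    }

    // Write the result so the compiler can't optimize the loop away.
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}

int main()
{
    const int blocks  = 4096;
    const int threads = 256;
    const int iters   = 1 << 20;   // tune so one batch takes a fraction of a second
    const int batches = 300;       // enough batches for several minutes of load

    unsigned int *d_out;
    cudaMalloc((void **) &d_out, (size_t) blocks * threads * sizeof(unsigned int));

    for (int batch = 0; batch < batches; batch++)
    {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        hammer_alus<<<blocks, threads>>>(d_out, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);

        // Rising per-batch times under a fixed workload = the card is throttling.
        printf("batch %3d: %.2f ms\n", batch, ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
    }

    cudaFree(d_out);
    return 0;
}
```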

I've been in this game just as long as (if not longer than) you have. The difference is you seem to have very limited experience as an end-user with only a handful of mostly mid-range GPUs (and likely no more than 4 at a time), while I have datapoints for the past 7 years on literally thousands of top-end GPUs from both ATi/AMD and Nvidia in very dense cluster configurations. And as an end-user, I guarantee you have nowhere near the grasp of the economics involved here that I do. To make it very clear, I'm not a fanboy by any means. But I know far more about password cracking hardware than anyone else, I depend on GPU sales for a living, and let me tell you something: relying on AMD to put food on my table is a terrifying position to be in. You know nothing of the panic I felt when AMD announced they were discontinuing the reference design 290X, or when they announced that the 390X would just be a rebadged 290X with no reference design, or when we discovered that the R9 290X was a motherboard killer, or when AMD announced that they'd be rolling with GCN for yet another generation and that their top GPU would be hybrid-cooled only. You want to know why I hate AMD so much? Because their failures and terrible decisions threatened my business and put me in a position where I was about to lose everything I worked hard to obtain. And that's why I have so much love for Nvidia right now. Maxwell and Pascal saved my business and my ass; they were a fucking godsend.

Again, I'm not a fanboy by any means. If AMD gets their shit together, and I can actually comfortably rely on them for an extended period of time, I would consider shipping AMD GPUs in my clusters again. But Vega is no miracle; it's a clear sign that AMD has no intention of jumping off the Titanic, and I want nothing to do with it or them.

