R9 Fury Nitro 1100 MHz GPU clock - so slow - why?
#11
The RX 480 was designed to hit the middle segment of the GPU market. It's a $200 card, and it did its job by beating more expensive cards like the $250 1060 6GB in TFLOPS and hash cracking.

The RX 480 is cheaper than the 1060 6GB and faster in oclhashcat.

Now, VEGA is a whole different beast.

We are talking about a card that is a major redesign of the GCN architecture (the biggest change since it was introduced), not just a die shrink with faster clocks like Pascal over Maxwell on the Nvidia side.

Vega has more raw TFLOPS not only than the 1080 but than the Titan X Pascal as well.

See more here:
http://www.anandtech.com/show/11002/the-...ure-teaser
#12
I probably shouldn't have called the RX 480 a "flagship", but it is currently their top-of-the-line Arctic Islands offering, so let's not split hairs. You launch a new product line, and the best you can do is target mid-range GPUs? That's downright embarrassing. Let's not pretend that it isn't. It's also quite hypocritical to negatively state that Pascal is just a minor revision of Maxwell, when it took AMD 4 years to accomplish the same. And with Pascal, performance went up while real power consumption went down -- a feat AMD has yet to actually accomplish.

No, Vega isn't a whole different beast. It's the same shit that AMD has been shoveling for half a decade. GCN was "innovative" 5 years ago (in that they dropped VLIW and created a GPU architecture that looked a hell of a lot like an Nvidia GPU), but then again AMD didn't design GCN -- ATi did. There is literally nothing here that indicates that Vega is anything more than the same old tired GCN architecture with a long-overdue die shrink and better FP16 support. There is nothing in here to support your claim that Vega is a "major redesign," especially with regards to password cracking, as the ALUs and CUs remain unchanged for yet another year.

It seems to me you're taking all of AMD's marketing doublespeak at face value, while ignoring the fact that AMD has a history of stretching the truth (read: flat-out lying), and also ignoring the plain and simple fact that AMD doesn't have the cash or the talent to produce a new GPU. They literally have not released a new architecture since acquiring ATi and firing all of their talent. There's a reason they've been limping to the barn with GCN for half a decade, trying to milk as much mileage out of it as possible. So let's not pretend that AMD is doing anything exciting or noteworthy, ok?

BFI_INT and BIT_ALIGN were the ONLY things AMD had going for it. They're the ONLY reason we ever used AMD GPUs in the first place. Now that Nvidia has LOP3.LUT, there's literally ZERO reason to ever consider an AMD GPU again. You want a $200 GPU that outperforms a GTX 1060? Go buy a used GTX 980.
#13
I see no shame in launching a new series in the middle of the market.
It's where most of the gamers are, the roughly $200 segment.

It's not embarrassing, it's called strategy.

Also, AMD is, and ATi was, a major hardware company, unlike Nvidia, which is really a software company.

AMD's hardware talent is clearly visible when you look at the Ryzen architecture.
Don't you see?

GCN clearly stands apart from Nvidia products as long as the Mantle API (now effectively Vulkan and DX12) is clearly superior to any other API in the gaming industry and is an AMD exclusive.

So games running on consoles (GCN architecture) or on PC (Vulkan/DX12) already perform better, and will keep performing better, on the very AMD GCN architecture that you are so negative about.

You seem heavily biased towards Nvidia hardware to me, for no actual reason.

Nvidia hardware and its older architectures were never good at integer performance, and hash cracking, mining, Bitcoin and so on were done mainly on AMD hardware because its architecture was a lot faster in raw terms, whether we are talking about TFLOPS or IOPS.

Take the history of the last 10 years and tell me which hardware was faster at password cracking... Nvidia or ATi/AMD?

Nvidia didn't have a decent OpenCL driver until recently; they only used CUDA.

I wonder what you're going to say in a few months when you see Vega 10 outpace the 1080, and probably the Titan XP, in password cracking.
#14
Strategy? AMD doesn't have a strategy outside of "don't go bankrupt." Do you know nothing of the company's financial struggles and internal turmoil?

Nvidia is a software company? Since when? NV1? Riva? nForce? GeForce? Tegra? Nvidia literally invented the GPU, and you're going to claim they're a software company!?

This is a conversation about GPUs, not CPUs, so I'm not sure why you're bringing up Ryzen. But what the hell, I'll bite. AMD hasn't actually released a competitive CPU in nearly 15 years. K8 was their last actually-competitive CPU, and even then it was only competitive because Intel was doing precisely what AMD is doing now with GCN: milking the ever-living fuck out of Netburst (and making some really bad decisions with IA64.) Intel had to hit an all-time low for AMD to start to look good. And then Intel released Core, and it was back to business as usual. K10 was lackluster at best, Bulldozer was a complete shitshow. Good for them if they can actually be competitive again with Ryzen, but all that means is they're pouring what little cash they have into CPU development, not GPU development, which doesn't really help your case for Vega.

Who cares about Mantle and DX12? This is hashcat, we're not pixel ponies here. Any talk about gaming performance is irrelevant on these forums.

I have very good reason to be biased against AMD. Maybe you don't know who I am and what I do, but I've bought quite literally thousands of AMD GPUs, and have had to put up with AMD's horseshit for years. I abhorred the fact that we were so reliant upon AMD. The drivers are atrocious, the sole OpenCL developer that they have stuffed in a closet somewhere is utterly incompetent, and every high-end ATi/AMD GPU for the past 7 years has grossly violated the PCI-e specification and thus is a massive fire hazard. If you want to know what AMD's failures smell like, it smells a lot like burnt PCB and melted plastic.

Again, BFI_INT and BIT_ALIGN were the only reasons we ever used ATi/AMD GPUs for password cracking. It wasn't the architecture that was faster; it was the ISA that made the difference. Nvidia has always had a superior architecture -- this is why ATi heavily borrowed from Nvidia's designs for GCN -- but their ISA lacked instructions that we could exploit to reduce instruction counts in hash algorithms. Now that Nvidia has LOP3.LUT, AMD is entirely irrelevant. And thank fucking Christ, because if I had to put up with AMD for one more year, I'd likely go insane.

Of course Nvidia focused more on CUDA than OpenCL. CUDA is more mature and overall a much better solution than OpenCL. OpenCL is a bit of a disjointed clusterfuck that doesn't really come anywhere close to living up to its promise of "write once, run everywhere." And in the industries that Nvidia was targeting (oil/gas, weather, finance, chemistry, etc.), CUDA is the dominant language. They had no real incentive to invest more in OpenCL until recently. Honestly I'm not entirely sure why they decided to focus more effort on OpenCL (maybe machine learning?), but the state of Nvidia OpenCL is still far better than anything AMD has ever produced. You state that Nvidia didn't have a decent OpenCL driver until recently, but have you worked with AMD's OpenCL at all? It's fucking horrendous. You have no idea how hard atom has had to work to work around bugs in AMD's OpenCL throughout the years. I have no idea how he keeps up with it all, I sure couldn't. And shit like that is why VirtualCL went stale -- they couldn't implement workarounds in their software faster than AMD could introduce bugs.

Vega certainly won't outpace Pascal, and it absolutely will not outpace Volta. You're insane to think otherwise. GCN has already been stretched well beyond its electrical and thermal limits. The die shrink will help a little bit, but their strategy of "glue some more cores on it!" will only take them so far, and they've already peaked. Every high-end card they've released since the 290X has been an unholy abomination, and there's absolutely no evidence of that changing anytime soon. To truly be competitive with Nvidia they will need cash (of which they have none) and talent (which they either fire or drive away.) GCN has become AMD's Netburst, and they will limp to the barn with it until something dramatic happens.

EDIT: Also, regarding this claim: "It's where most of the gamers are, the roughly $200 segment." This is patently false, as demonstrated by this chart, which shows that Enthusiast GPU sales more than doubled Mainstream GPU sales in 2015-2016, and that Mainstream GPUs were consistently the least-performant sales category. So yes, it is absolutely embarrassing for AMD to target the bottom of the barrel.
#15
You seem to "forget" some fundamental things regarding those three (AMD, Intel, Nvidia).

Of course Intel and Nvidia are huge companies compared to AMD, and even so, AMD can sometimes compete with them, which is a miracle (!) on its own.

I wouldn't need to write anything more than that: a much smaller company like AMD has managed to be competitive with monsters like Intel and Nvidia.
It's like David and Goliath.

And in 2017 we are going to witness another miracle, another huge task that AMD has accomplished:
to be competitive in both CPUs (Ryzen) and GPUs (Vega) with Intel and Nvidia at the same time.

It's like a myth: even if someone shows it to you right in front of your eyes, you will find it very difficult to believe.

How the hell can a small company like AMD, compared to the giants, compete with Intel and Nvidia at the same time?
What is AMD's budget for CPUs and GPUs separately?

Look at the facts:
What is the last discrete GPU that Intel managed to build? (LOL)
What is the name of the first x86/x64 CPU that Nvidia managed to build? (LOL)

AMD's huge risk and bold move of acquiring ATi has already started to shine...

Now, I was looking at some password cracking performance tables from the last few years, and I will post just one simple fact.

The ATI Radeon 5870 was released in Q3-Q4 2009, and it was faster than ANY Fermi card (4xx and 5xx series) and ANY Kepler card (6xx and 7xx series), apart from the oh-so-special 780 Ti.

Can you believe that?

It was only the Maxwell architecture with the 9xx series, and of course Pascal, that put Nvidia back on top in password cracking.
And Nvidia will lose again to Vega.

The truth is that I don't know you, but you seem like a really passionate guy.
But you don't know me either.

I started helping Ivan Golubev debug the first GPU password cracking software I ever used, back in 2010, owning a perfectly mid-range ATi Radeon 5750 card.
His app used ATi Stream and CAL, not OpenCL, which was brand new at the time and first introduced with the 5000 series release.

And then came atom, around 2010-2011 IIRC, with his early oclhashcat versions, the first attempt to use OpenCL for password cracking on the ATi Radeon 5000 series.

I exchanged emails directly with atom for months, debugging oclhashcat v0.xx, and I remember very well his efforts to keep his OpenCL app consistent every time a new ATi driver was released.

We are talking about real agony here.

Now, regarding GCN: as I told you earlier, now - in 2017 - is its time to shine with Vulkan and DX12, but that is gaming and I will not discuss it further on the oclhashcat forum.

I'm telling you again that Vega is already outpacing Pascal, even in its full GP100 form.
Take a look at the specs of the Radeon Instinct MI25 (Vega 10), which I don't think is even the full Vega, the way GP100 is the full Pascal.

Vega has the potential to be faster than Pascal.

The last comment is about the chart you published.

Do you mean the "performance" segment is bigger than "mainstream"?
Because the "enthusiast" segment is very, very small, like "workstation".

And if you look carefully, the performance and mainstream segments are very close.
And the RX 480 is not exactly mainstream; it sits between mainstream and performance.

If you look at the chart carefully, you have proved exactly my point.
#16
I wasn't aware the concept of applefag existed for amd, heh.
#17
But I'm fully aware of Nvidiots.

The whole world is full of them.
#18
I'm not supporting either in particular, just using whichever is the most reliable/powerful as of now. I used AMD in the past for the reasons mentioned by epixoip, and I'm now using Nvidia. Trying that hard to preach for AMD is the true idiocy.
#19
I don't support anyone in particular either, I just can't stand by when someone attacks AMD in particular.

I have bought more Intel CPUs than AMD, but when AMD builds a good CPU I will definitely buy it.

The same goes for GPUs.
I don't consider AMD's GPUs cursed or inferior by default.

Nvidia and Intel are far more aggressive, and their behavior towards their customers is certainly far more offensive.

They really exploit them badly.

Now, if you or anyone else is so much in favor of monopolies, and you are really so attracted to the powerful players of the industry, then besides idiocy there is also the possibility of profit.

But I don't think this is what is really happening here.
#20
First you claim Nvidia is a software company, and now you're claiming AMD is a small company? I'm utterly baffled at how you view the world.

Stating that Intel has never made a discrete GPU, or that Nvidia has never made an x86 CPU (which, if you knew your history at all, you'd know that Nvidia *can't* produce an x86 CPU thanks to Intel v. Nvidia), is not even remotely relevant. And further stating that AMD's "huge risk and bold move" of acquiring ATi is about to pay off tells me you REALLY don't know your history. You do realize that AMD did not acquire ATi to get into the discrete GPU business, right? They acquired ATi because they were anticipating the death of the discrete GPU. They thought APUs were the future and wanted to corner the market. Obviously that plan backfired spectacularly; while discrete GPU sales have declined more than 50% over the past 6 years (and AMD's own discrete GPU sales have fallen 70% in the past 6 years), discrete GPU sales still total over 40M units per year, and certainly aren't going to disappear anytime soon. And consoles are the only market where AMD's APUs have found any traction, as Intel has them squarely beat in the desktop market. And now, because they banked on the notion that the GPU was dead, they haven't released a new GPU architecture in 5 years. That's not the definition of success, nor is that the definition of capitalizing on an investment. And their financials certainly reflect this.

It's no secret that TeraScale and GCN were faster at password cracking than pre-Maxwell Nvidia. Why you present this as some sort of profound fact or hidden insider knowledge is beyond me. What you don't seem to grasp is why ATi GPUs were faster for password cracking. It wasn't because ATi had a superior architecture (they didn't), and it wasn't because ATi GPUs were better (they weren't.) It was primarily because ATi had two instructions (as explained above) that enabled us to perform rotates and bitselects in one operation instead of three, reducing the instruction count. That's it. Once Nvidia added a similar instruction in Maxwell we could exploit Nvidia GPUs in a similar fashion, and suddenly they were much faster than AMD GPUs while drawing less than half the electricity. And with a better driver, better upstream support, and better cross-platform support as well.
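
To make the instruction-count point concrete, here is a minimal OpenCL C sketch (purely illustrative, not hashcat's actual kernel code; the function names f_generic, f_bitselect, rotl32_generic, rotl32_fast and the demo kernel are made up for this example). It shows the MD5/SHA-1 "choice" function and a 32-bit left rotate written the long way (several logic ops each) versus with the bitselect() and rotate() built-ins, which the compiler can map to a single instruction where the ISA has one (BFI_INT / BIT_ALIGN on GCN, LOP3.LUT / funnel shift on Maxwell and later):

// Illustrative OpenCL C only -- not taken from hashcat's kernels.
// F(x,y,z) = (x & y) | (~x & z) is the MD5/SHA-1 "choice" function.

// Generic form: NOT, two ANDs, OR.
uint f_generic(uint x, uint y, uint z)
{
    return (x & y) | (~x & z);
}

// bitselect(a, b, c) = (a & ~c) | (b & c): picks bits of b where c is 1,
// bits of a where c is 0 -- one op where the hardware has a ternary/bit-field
// instruction (BFI_INT on GCN, LOP3.LUT on Maxwell+).
uint f_bitselect(uint x, uint y, uint z)
{
    return bitselect(z, y, x);
}

// Generic rotate: shift, shift, OR (assumes 0 < n < 32)...
uint rotl32_generic(uint v, uint n)
{
    return (v << n) | (v >> (32u - n));
}

// ...versus the rotate() built-in, which can become a single
// BIT_ALIGN / funnel-shift style instruction.
uint rotl32_fast(uint v, uint n)
{
    return rotate(v, n);
}

// Toy kernel: one MD5-style step computed both ways, just to show
// the two forms are equivalent.
__kernel void demo(__global const uint *in, __global uint *out)
{
    const uint i = (uint) get_global_id(0);
    const uint x = in[i];
    const uint y = x ^ 0x5a827999u;
    const uint z = x + 0x6ed9eba1u;

    const uint a = rotl32_generic(f_generic(x, y, z) + x, 7u);
    const uint b = rotl32_fast(f_bitselect(x, y, z) + x, 7u);

    out[i] = a ^ b;   // same inputs, same result, so this is always 0
}

Whether the compiler actually emits the single-instruction form depends on the target GPU and the driver, which is part of why this advantage was tied to specific hardware generations rather than to one vendor's "architecture" being inherently better.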

And that's really the point here. While Nvidia GPUs are getting both faster and more power efficient, AMD's response is to continue gluing more cores onto the same old architecture, and attempt to rein in heat and electricity with firmware. AMD ran up against the electrical and thermal limits of GCN years ago. They're building 375-525W GPUs, sticking them on boards that can only electrically support 150-300W, and relying on firmware to prevent a fire. A die shrink will not save them now, it's physically impossible. How in the world you believe Vega will be some miracle GPU is utterly baffling.

You bring up the Instinct MI25... What if I told you the MI25 was just an overclocked R9 Fury X with a die shrink? Or more accurately, what if I told you it was just an HD7970 with a die shrink and 32 additional CUs glued on? Because that's exactly what it is. For AMD to claim it's <300W is beyond laughable. At 14nm, it's likely a 375W+ GPU (actually probably closer to 425W since the clock rate kind of cancels out the die shrink's power savings) that they'll attempt to limit to <300W with PowerTune. Which, if you didn't know, means it will throttle under load, and throttling of course destroys performance. It's somewhat acceptable for gaming since gaming workloads are "bursty," but password cracking hammers the ALUs with steady load, and nothing stresses GPUs like password cracking does. Like all post-290X AMD GPUs, it will likely benchmark well because benchmarks are short, but it will fall apart in real-world cracking scenarios. Again, this is still the same old GCN we're well accustomed to. It's absolutely nothing new. Mark my words, Vega will be just as bad as any other AMD GPU made in the last 3 years.

I've been in this game just as long as (if not longer than) you have. The difference is you seem to have very limited experience as an end-user with only a handful of mostly mid-range GPUs (and likely no more than 4 at a time), while I have datapoints for the past 7 years on literally thousands of top-end GPUs from both ATi/AMD and Nvidia in very dense cluster configurations. And as an end-user, I guarantee you have nowhere near the grasp of the economics involved here that I do. To make it very clear, I'm not a fanboy by any means. But I know far more about password cracking hardware than anyone else, I depend on GPU sales for a living, and let me tell you something: relying on AMD to put food on my table is a terrifying position to be in. You know nothing of the panic I felt when AMD announced they were discontinuing the reference design 290X, or when they announced that the 390X would just be a rebadged 290X with no reference design, or when we discovered that the R9 290X was a motherboard killer, or when AMD announced that they'd be rolling with GCN for yet another generation and that their top GPU would be hybrid-cooled only. You want to know why I hate AMD so much? Because their failures and terrible decisions threatened my business and put me in a position where I was about to lose everything I worked hard to obtain. And that's why I have so much love for Nvidia right now. Maxwell and Pascal saved my business and my ass, they were a fucking godsend.

Again, I'm not a fanboy by any means. If AMD gets their shit together, and I can actually comfortably rely on them for an extended period of time, I would consider shipping AMD GPUs in my clusters again. But Vega is no miracle; it's a clear sign that AMD has no intention of jumping off the Titanic, and I want nothing to do with it or them.