R9 Fury Nitro 1100 MHz GPU clock - so slow - why?
You must be younger than me and don't remember/know that Intel did of course make a discrete GPU back in the late '90s, around 1998, called the Intel740, better known as the Intel i740.

A discrete card that failed badly.

And the whole Larrabee project, which ended up as Xeon Phi, was another failure for Intel.
It was another attempt to get into discrete GPUs, and another failure of course.

Now, regarding Nvidia: they should/could have bought an x86-compatible license from Cyrix or another holder and paid for the patents like Intel does.

Intel pays a lot to Nvidia and AMD for the patents used in their iGPUs.

Nvidia should/could have done the same.

The same pattern you describe with Nvidia's missing integer instructions has been repeated again and again, on various end-user-facing issues, by both Nvidia and Intel.

They just don't care at all about the customer.

Look at the fraud of the 970's memory (3.5 GB), the fraud of Intel's HEDT processors that Ryzen is going to bash, etc.

If it weren't for AMD, we wouldn't have more than 4 GB in our PCs, because Intel wanted to sell 64-bit processors only for servers.

We could say that all modern CPUs are AMD x64-compatible, not Intel-compatible.

You also seem to forget the EVGA Pascal cards catching fire.

Your beloved, cool, low-power-consuming Pascal, catching fire.
What a fail!

Vega belongs to the GCN family of course, but it has many changes, and some of them are pretty big.

Of course for password cracking we have to wait and see.

I mark your words and you mark mine.

Vega 10 is going to be faster in password cracking than the 1080.

I can understand your reasons for hating AMD so much, but they are profoundly personal, while here we are talking about business, in a more detached, more professional way if you want.
Ah, right, I forgot the rules state that if someone is older than you, they must know more than you.

As a former Intel employee, I feel it is my duty to inform you that the i740 was technically a VPU, not a GPU. Nvidia was the only company that sold GPUs in 1999. Intel obviously saw more value in integrated graphics and GPGPU than discrete GPUs, and it's hardly accurate to call Xeon Phi a failure when it powers several of the TOP500 supercomputers, including Tianhe-2. But again, all of this is completely irrelevant to AMD's failures.

We had >4 GB of RAM on x86 a decade prior to x86-64; apparently you're ignorant of PAE. This is also completely contradictory to your "AMD is a small company" argument: as already stated earlier in this thread, K8 was a huge win for AMD, especially on the Opteron side, and AMD held significant marketshare throughout most of the last decade. They threw it all away with a series of very poor decisions. Even if they are starting to make some good decisions on the CPU side of the house again, it's quite clear they're still making terrible decisions on the GPU side. And if Ryzen is as good as they say it is, this is actually a very poor sign for Radeon: we know from AMD's financials that they are flat broke, which means all their cash is going to CPU development, not GPU development. So I'm not sure what exactly it is you're trying to argue.

The GTX 970 VRAM "scandal" was hardly fraud; the only people who thought it was are those who know absolutely nothing about GPUs. And for the record, AMD GPUs have the exact same "issue."

A quality control issue with a specific product from a specific manufacturer is vastly different from a willful and systemic engineering failure in a reference design that is propagated throughout the entire product family. Only the EVGA ACX has the design flaw you speak of, whereas ALL high-end AMD GPUs from all manufacturers, both reference design and OEM design, violate the PCI-e specification. How you can even attempt to argue otherwise is mind-numbingly ignorant. All your point does is reinforce our mantra of "always buy reference design GPUs."

My quarrel with AMD is hardly personal; it directly relates to the quality of their products, the instability of their product lines, and their reliability as a company -- things that ALL Hashcat users benefit from.

This thread has become tedious as you continue to side-step every legitimate point that I've raised, and have only countered with more irrelevant and inaccurate bullshit. While there is value in my posts for other users and future Hashcat forum readers to collect, this has ultimately become a waste of my time, and thus I will not be replying to any more of your nonsense.
OK... I think you deserve a reply to your previous post.

The only reason I mentioned your age, which I don't actually know, was to protect you, so to speak, from your very obvious ignorance of the fact that Intel produced, or tried to produce, a discrete GPU.

You are very good at playing with words, manipulating both others' words and your own.

You tried to cover your lack of knowledge of Intel's discrete GPUs by calling them VPUs... OK.

For the people reading: it's essentially the exact same thing, especially in the late '90s. ATI called the Radeon 9700 a VPU in 2002, three years after the far inferior GeForce 256 that Nvidia called a "GPU".

Let's not play with words...OK?

But on the other hand you say that you are a former Intel employee, so it shouldn't be a lack of knowledge, but... who knows.

Also, you seem a little incapable of understanding what you read.

You first posted a chart showing exactly what I told you, and then you write that I called Xeon Phi a failure.

I only wrote that the Larrabee project failed as a discrete GPU and was converted into what we today call Xeon Phi (it has changed names a few times).

Your misleading and misinforming of the users reading your posts goes on with PAE.

OMG, what an argument, trying to hit AMD's x64 architecture with PAE!

PAE was just a "mod" to x86 processors that allowed them to address up to a maximum of 64 GB of RAM under very specific circumstances.
For example, almost all 32-bit desktop versions of Windows don't allow more than 4 GB of RAM, for compatibility reasons.
PAE could be used with Windows Server editions, but I clearly told you about AMD's achievement of bringing REAL x64 processors to the desktop.
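To put numbers on the limits being argued about: PAE widened the physical address from 32 to 36 bits, which is where the 64 GB ceiling comes from, while each 32-bit process still only sees a 4 GB virtual address space. A quick sketch of the arithmetic:

```python
# Address-space sizes implied by the address widths discussed above.
GiB = 2**30

plain_x86_phys = 2**32  # classic 32-bit physical addressing
pae_phys       = 2**36  # PAE widens physical addresses to 36 bits
x86_virt       = 2**32  # a 32-bit process still sees only 4 GiB virtual

print(plain_x86_phys // GiB)  # 4   -> the classic 4 GB limit
print(pae_phys // GiB)        # 64  -> PAE's 64 GB ceiling
print(x86_virt // GiB)        # 4   -> per-process limit unchanged by PAE
```

This is why PAE helped servers with many processes far more than any single desktop application.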

AMD brought x64 processors to the desktop. Period.

And it isn't that Intel couldn't build x64 processors, of course.
They wanted IA-64 to be used exclusively by their Itanium ("Itanic", as in Titanic) processors.
They wanted to manipulate users by all means.

Regarding AMD's scandals: could you tell us exactly which AMD cards have the same issue as Nvidia's 970 3.5 GB scandal, the one Nvidia had to pay for as false advertising to its customers?

Maybe it's time to take some money back from AMD too, besides Nvidia of course.

Regarding PCIe violations, it's a very old story involving Nvidia cards too.
For example, the master of power spikes, Nvidia's 750 Ti, violates PCIe by far with its huge power bursts.
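For context on what "violating PCIe" means here: the PCIe CEM specification budgets a x16 graphics slot at roughly 75 W, of which at most 5.5 A may be drawn on the +12 V pins (66 W) plus about 3 A on +3.3 V. A small sketch of that budget, with made-up sample currents (illustrative only, not real measurements of any card):

```python
# PCIe CEM slot power budget for a x16 graphics card (spec values).
SLOT_12V_AMP_LIMIT = 5.5                  # max amps on the +12 V pins
SLOT_12V_W = 12.0 * SLOT_12V_AMP_LIMIT    # 66 W from +12 V
SLOT_3V3_W = 3.3 * 3.0                    # ~9.9 W from +3.3 V
print(round(SLOT_12V_W + SLOT_3V3_W, 1))  # ~75.9 W total slot budget

# Hypothetical sampled +12 V slot currents, in amps (not measurements).
samples_amp = [4.8, 5.2, 6.9, 5.0, 7.4]
violations = [a for a in samples_amp if a > SLOT_12V_AMP_LIMIT]
print(violations)  # any spike above 5.5 A exceeds the slot spec
```

Whether brief spikes or sustained draw count as a "violation" is exactly what these forum arguments tend to hinge on.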

AMD has made huge progress on drivers (Crimson and Crimson ReLive), while on the other hand Nvidia has serious problems with its Windows 10 drivers.

AMD's Polaris cards and the upcoming Vega already have a very good reputation in applications like oclHashcat or bitcoin mining, essentially continuing the good tradition of the HD 4000 and 5000 series onward, which Nvidia managed to outperform only very recently.

Users of oclHashcat and all the other GPGPU OpenCL apps don't need me or you to see the facts.

R9 290X was mainly sold for mining, not gaming.

Anyway, I have lost a lot of time too.

Be more objective, even though you are a former Intel employee and a guy who got "burnt" by AMD, and your posts will look prettier.

You said Nvidia saved your a$$, but that doesn't mean you have to bash AMD forever.
After all this mud wrestling, let's get back to the original question.

The upcoming Vega chip is supposed to have an updated architecture. Instead of GCN's compute unit, AMD itself now calls it the NCU ("next-generation compute unit"). Although they obviously shut down the creativity department, there's more behind it than just a few letters. The pipeline has been reworked and is ready for higher frequencies. The load balancer has improved: work can now be scheduled across the CUs better, where before small operations could block the whole chip. The engines can now natively process 8-, 16-, 32- and 64-bit ops in each clock cycle.
The render back-ends are now clients of the on-chip L2 cache. By making them L2 clients, they get access to a much larger buffer, and the direct effect is improved performance in applications with heavy read-after-write traffic. So here we should see an efficiency, and thus performance, increase. And let's not forget HBM2, which is also an advantage over Nvidia.
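The "8/16/32/64-bit ops per clock" point is essentially packed math: several narrow operations carried in one wide lane per cycle. A toy SWAR (SIMD-within-a-register) sketch of the idea, in no way AMD's actual hardware path, just an illustration of why packing two 16-bit values into a 32-bit word can double throughput:

```python
# Toy SWAR illustration: two independent 16-bit adds performed via one
# 32-bit value. Conceptually what "two 16-bit ops per 32-bit lane per
# clock" means; not AMD's real implementation.
MASK16 = 0xFFFF

def pack2(hi, lo):
    """Pack two 16-bit integers into one 32-bit word."""
    return ((hi & MASK16) << 16) | (lo & MASK16)

def packed_add(a, b):
    """Add each 16-bit half separately, so carries cannot leak between
    lanes (a naive 32-bit add would let the low half overflow into the
    high half)."""
    hi = ((a >> 16) + (b >> 16)) & MASK16
    lo = ((a & MASK16) + (b & MASK16)) & MASK16
    return (hi << 16) | lo

x = pack2(1000, 2000)
y = pack2(30, 40)
z = packed_add(x, y)
print(z >> 16, z & MASK16)  # 1030 2040
```

Real hardware does the lane isolation for free in silicon; the point is that one instruction retires two results.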

There's more in there, but it should already be clear by now that they did not just do a copy/paste job on GCN. Let's see what comes up. I personally like competition, and I have to agree with epixoip that AMD hasn't been much of an alternative since Nvidia came out with Maxwell two and a half years ago. Vega is the first promising 14 nm chip from AMD and could change something, at least bringing them back to par.

And btw, AMD shares roared up by 350% last year. The signs are there.
First unofficial OpenCL benchmarks of the upcoming Vega indicate a 64-CU GPU core at 1200 MHz that is faster than the 1080.

Look at bitcoin mining, for example.
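Bitcoin mining is exactly the kind of integer-heavy workload these GPUs get compared on: the proof-of-work is a double SHA-256 over an 80-byte block header, dominated by 32-bit rotates and adds. A minimal sketch using Python's standard hashlib (the header bytes here are dummy data, not a real block):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Dummy 80-byte "block header" (real miners vary a nonce field inside it).
header = bytes(76) + (42).to_bytes(4, "little")  # 76 zero bytes + nonce 42
digest = double_sha256(header)

# Miners interpret the digest little-endian and compare it to a target.
print(digest[::-1].hex())
```

A GPU miner runs billions of these per second with different nonces, which is why fast integer rotates mattered so much in the AMD-vs-Nvidia comparisons above.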

[Image: AMD_Ve...0x1408.png]