Strategy? AMD doesn't have a strategy outside of "don't go bankrupt." Do you know nothing of the company's financial struggles and internal turmoil?
Nvidia is a software company? Since when? NV1? Riva? nForce? GeForce? Tegra? Nvidia literally invented the GPU, and you're going to claim they're a software company!?
This is a conversation about GPUs, not CPUs, so I'm not sure why you're bringing up Ryzen. But what the hell, I'll bite. AMD hasn't actually released a competitive CPU in nearly 15 years. K8 was their last actually-competitive CPU, and even then it was only competitive because Intel was doing precisely what AMD is doing now with GCN: milking the ever-living fuck out of NetBurst (and making some really bad decisions with IA64). Intel had to hit an all-time low for AMD to start to look good. And then Intel released Core, and it was back to business as usual. K10 was lackluster at best, and Bulldozer was a complete shitshow. Good for them if they can actually be competitive again with Ryzen, but all that means is they're pouring what little cash they have into CPU development, not GPU development, which doesn't really help your case for Vega.
Who cares about Mantle and DX12? This is hashcat, we're not pixel ponies here. Any talk about gaming performance is irrelevant on these forums.
I have very good reason to be biased against AMD. Maybe you don't know who I am and what I do, but I've bought quite literally thousands of AMD GPUs and have had to put up with AMD's horseshit for years. I abhorred the fact that we were so reliant upon AMD. The drivers are atrocious, the sole OpenCL developer they have stuffed in a closet somewhere is utterly incompetent, and every high-end ATi/AMD GPU for the past 7 years has grossly violated the PCI-e specification and is thus a massive fire hazard. If you want to know what AMD's failure smells like, it smells a lot like burnt PCB and melted plastic.
Again, BFI_INT and BIT_ALIGN were the only reasons we ever used ATi/AMD GPUs for password cracking. It wasn't the architecture that was faster; it was the ISA that made the difference. Nvidia has always had a superior architecture -- this is why ATi heavily borrowed from Nvidia's designs for GCN -- but their ISA lacked instructions we could exploit to reduce instruction counts in hash algorithms. Now that Nvidia has LOP3.LUT, AMD is entirely irrelevant. And thank fucking Christ, because if I had to put up with AMD for one more year, I'd likely go insane.
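To give a rough idea of why a single ISA instruction matters here, take the bitwise "choice" function used in MD5/SHA-1. In plain C it compiles to several logic instructions; with LOP3.LUT it collapses into one, much like BFI_INT did on AMD. Here's a minimal sketch of my own (not code from hashcat itself), assuming Maxwell or newer and inline PTX:

```cuda
#include <cstdio>

// Ch(x,y,z) = (x & y) | (~x & z), the bitwise "choice" used in MD5/SHA-1.
// Plain C emits an AND, an AND-NOT, and an OR; lop3.b32 does it in a single
// instruction. The immediate 0xCA is the 3-input truth table for this function.
__device__ __forceinline__ unsigned int ch_lop3(unsigned int x,
                                                unsigned int y,
                                                unsigned int z)
{
    unsigned int r;
    asm("lop3.b32 %0, %1, %2, %3, 0xCA;" : "=r"(r) : "r"(x), "r"(y), "r"(z));
    return r;
}

__global__ void demo(unsigned int *out, unsigned int x, unsigned int y, unsigned int z)
{
    *out = ch_lop3(x, y, z);
}

int main()
{
    unsigned int *d_out, h_out;
    cudaMalloc(&d_out, sizeof(unsigned int));
    demo<<<1, 1>>>(d_out, 0xF0F0F0F0u, 0xDEADBEEFu, 0x12345678u);
    cudaMemcpy(&h_out, d_out, sizeof(unsigned int), cudaMemcpyDeviceToHost);
    printf("Ch = 0x%08x\n", h_out);  // same result as (x & y) | (~x & z)
    cudaFree(d_out);
    return 0;
}
```

Multiply a saving like that across every step of a hash kernel's inner loop and it adds up fast, which is the whole reason these instructions decide which vendor wins for cracking.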
Of course Nvidia focused more on CUDA than OpenCL. CUDA is more mature and overall a much better solution than OpenCL. OpenCL is a bit of a disjointed clusterfuck that doesn't come anywhere close to living up to its promise of "write once, run everywhere." And in the industries Nvidia was targeting (oil/gas, weather, finance, chemistry, etc.), CUDA is the dominant language. They had no real incentive to invest more in OpenCL until recently. Honestly, I'm not entirely sure why they decided to put more effort into OpenCL (maybe machine learning?), but the state of Nvidia OpenCL is still far better than anything AMD has ever produced. You claim Nvidia didn't have a decent OpenCL driver until recently, but have you worked with AMD's OpenCL at all? It's fucking horrendous. You have no idea how hard atom has had to work around bugs in AMD's OpenCL over the years. I have no idea how he keeps up with it all; I sure couldn't. And shit like that is why VirtualCL went stale -- they couldn't implement workarounds in their software faster than AMD could introduce bugs.
Vega certainly won't outpace Pascal, and it absolutely will not outpace Volta. You're insane to think otherwise. GCN has already been stretched well beyond its electrical and thermal limits. The die shrink will help a little, but their strategy of "glue some more cores on it!" will only take them so far, and they've already peaked. Every high-end card they've released since the 290X has been an unholy abomination, and there's absolutely no evidence of that changing anytime soon. To truly be competitive with Nvidia they will need cash (of which they have none) and talent (which they either fire or drive away). GCN has become AMD's NetBurst, and they will limp to the barn with it until something dramatic happens.
EDIT: Also, regarding this claim: "It's where most of the gamers belong to, the around 200$ section." This is patently false, as demonstrated by this chart, which shows that Enthusiast GPU sales were more than double Mainstream GPU sales in 2015-2016, and that Mainstream GPUs were consistently the worst-performing sales category. So yes, it is absolutely embarrassing for AMD to target the bottom of the barrel.