(08-31-2018, 01:17 PM)kamylkut Wrote: [ -> ]Why isn't the RTX 2070 available to pre-order on the GeForce website? And why doesn't it come with an NVLink connector for SLI users?
Perhaps you’d be better off asking Nvidia.
(08-31-2018, 02:40 PM)GrepItAll Wrote: [ -> ]I have been running 4 x 1080 Ti Asus Turbo cards since September nearly 24/7 (brief downtimes due to hardware relocation) as it was very difficult to purchase multiple new FE cards at the time. Been very happy with their performance so far, both in terms of cooling and H/s.
I'll be interested to see how the 2080 Ti Turbo holds up.
I think statements like this need to be accompanied by details of the cooling solution. Are you really using a server case where one GPU sits just a few millimeters from the next, with the fan of one GPU almost touching the next, as on most server/retail mainboards? Or are you instead using risers, with the GPUs "miles" away from each other?
These are very different ways to set up rigs, and of course you can't really claim a GPU's fan/cooling is very effective (for server/retail-mainboard setups without risers) if your own setup uses risers.
I'm not saying I'm sure you are using risers, or that risers are the only way this specific model can be cooled in a multi-GPU setup (I don't own this model, so I literally have "no clue"). But without the full details of your cooling strategy (type of motherboard/case/fans/risers, how many inches/mm apart the GPUs are, etc.) the statement is a bit incomplete and could be misleading. I would like to know the details, because it could also simply be a very good fan design by ASUS (again, I have no clue, because I don't own this specific model).
Please let us know the details of your specific cooling strategy (and/or how you set up your GPUs).
Thanks
(08-31-2018, 01:17 PM)kamylkut Wrote: [ -> ]Why isn't the RTX 2070 available to pre-order on the GeForce website? And why doesn't it come with an NVLink connector for SLI users?
Simply because the RTX 2070 will be released later, supposedly in November. And why sell an NVLink connector to everyone and make them pay for something only a few people will need? Those who do need it can buy one.
Still no one with a 2080 Ti has any test results? I'm curious to see how it performs against the 1080 Ti, because if it isn't 2x on most hashes I'll take three 1080 Tis instead; I know a website where they sell them at 650€ each.
They're still not out yet, so no one has any. But it will quite surely not be 2x, at least not in the beginning.
(09-04-2018, 06:26 PM)Flomac Wrote: [ -> ]They're still not out yet, so no one has any. But it will quite surely not be 2x, at least not in the beginning.
What do you mean by "at least not in the beginning"?
I have limited knowledge about GPUs/hashcat, but you can only mean one of two things:
- They will sell better cards later
- Hashcat needs to adapt to the new GPUs?
The latter. atom & his crew are doing an awesome job improving GPU output, and you never know what they might come up with. Plus the new generation might bring new instructions that could speed up hashcat once implemented. And no one knows yet if the mighty tensor cores are of any use for hash cracking.
Less than two weeks before the first gaming benchmarks, and Turing cards are already being reported all over the Web as probably nVidia's biggest failure of all time.
Not only do the RTX Turing cards seem unable to compete with the previous-generation Pascal cards (the 2080 appears slower than the 1080 Ti, and the 2070 slower than the 1080), they are also extremely overpriced and power-hungry.
Hashcat can't use any of the fixed-function hardware like the Tensor cores and ray-tracing cores, so for hashcat, as for games, the new architecture is going to be slower.
The 12nm architecture of Turing cards is incompetent in terms of GPU clock and power efficiency and not really denser than 16nm Pascal.
Turing is for nVidia what 10nm is for Intel:
Their biggest mistake.
Stay away from this generation of nVidia cards and probably wait for the 7nm Vega20 which is going to be a monster in terms of performance with low power consumption and better clocks.
Nikos,
do you have any proof of what you're saying?
"The 12nm architecture of Turing cards is incompetent in terms of GPU clock and power efficiency and not really denser than 16nm Pascal."
Can you define "incompetent"? A boost clock of 1800MHz for the RTX2080 is not shabby at all. And the TDP alone doesn't tell much either.
Sure, the RTX2080 has fewer cores than the GTX1080Ti. But it also has far more cache (triple the L1, double the L2), a different architecture and a newer instruction set. The latter made all the difference from Kepler to Maxwell. We will not know its performance under hashcat until someone has run a -b.
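For anyone new to it, this is what a `-b` run looks like in practice; a minimal sketch assuming hashcat is installed and on PATH (the guard is just so the snippet degrades gracefully on a machine without it):

```shell
# hashcat's built-in benchmark mode: -b benchmarks hash modes on all
# detected OpenCL/CUDA devices, -m restricts it to a single hash mode.
#
#   hashcat -b          # benchmark the full default set of modes
#   hashcat -b -m 0     # quick sanity check: benchmark MD5 only
#
# Guarded invocation so the script still exits cleanly without hashcat:
if command -v hashcat >/dev/null 2>&1; then
  hashcat -b -m 0
else
  echo "hashcat not installed; skipping benchmark"
fi
```

The reported H/s figures from such a run are the only numbers worth comparing across cards, since synthetic gaming benchmarks say little about integer-heavy hash kernels.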
It might also have a worse price/performance ratio in the beginning, but chip-for-chip it will still be faster.
You can speculate and do your guessing, of course.
But giving people advice like "...so for hashcat like games, the new architecture is going to be slower."
is completely untrustworthy, because you simply don't know that.
Just as you don't know whether "the 7nm Vega20 which is going to be a monster in terms of performance with low power consumption and better clocks."
AMD especially has been promising a lot with Fury and Vega over the last years, and I've been defending them (here)! But they never really delivered. Unlike NVidia, at least in recent years.
The rumoured gain in gaming performance from Pascal to Turing ranges from 25-50%. Even if it ends up being only 25% faster, oh boy, calling that "the biggest failure of all time for nVidia" only shows you're obviously too young to remember the disastrous GeForce 9800 GTX, the unviable FX 5000 series or Windows-crashing chipset drivers.
NVidia has a killer feature on their hands with real-time ray tracing, and the competition has to watch out that it isn't swept off the market.
Boy, I don't know how we lived all these years without this almighty ray-tracing feature. Medieval!