So, I love my 1080 Ti from Palit. I bought it near launch, from what I can tell part of the initial batch of 1080 Tis here in the Philippines. I was fortunate enough to get it below PHP 40,000 (US$800), and it’s been an amazing card that will keep serving me great 4K content from here on out. (It’s officially 16 months old now!) But I was also hoping I could sell it for a decent price, add about US$200, and grab a next-gen GPU near launch as well. NVIDIA’s new RTX cards definitely did not agree with my upgrade plan. See, the new 2080 Ti flagship is at best US$300 more for the most basic reference card, and likely US$500+ more for any partner card with better cooling and clock speeds. That puts my plan way over budget for what looks like a not-that-important upgrade anyway. Why? Because NVIDIA’s new GPUs lean heavily on two new types of cores that won’t make much sense in the foreseeable future.
They’ve added an AI Tensor “Deep Learning” core for very unspecific tasks. Vague? Yes, it’s hella vague. It’s a core nobody uses for gaming today – it’s mostly used for machine-learning calculations in research, AI, and so on. It hasn’t traditionally been needed for gaming, and no game I know of currently supports it. So that’s new silicon baked into the GPU die that you’re paying for but won’t need for today’s games. In fact, this core was carried over from the brand’s Volta architecture, which was aimed at enterprise users.
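(For a sense of what that core actually does: Tensor cores are built for bulk matrix math, specifically the fused multiply-and-accumulate operations that deep-learning code chews through. Below is a tiny NumPy sketch of my own, purely to illustrate the shape of that operation; it’s not NVIDIA’s code and not something any game engine runs today.)

```python
import numpy as np

# The kind of operation Tensor cores are designed to accelerate:
# D = A @ B + C, with A and B in half precision (FP16) and the
# accumulation done in single precision (FP32).
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

# On the GPU this fused multiply-add happens as one hardware operation;
# here plain NumPy just stands in for the idea.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)
```

Useful if you train neural networks all day; not something the games in my library ever ask for.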
Next, their new marketing strategy revolves around ray tracing, which gets its own dedicated core. That’s the second new core inside the GPU chip we didn’t have before. Ray tracing is an interesting, potentially amazing technology that renders light reflections, realistic shadow effects, and material-dependent lighting by physically tracing rays of light through the scene. Previously, most games just approximated how light bounces off objects and creates shadows. With ray tracing, light is realistically followed from the source, bounces off mirrors and shiny objects, and casts complex shadows that soften naturally.
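(If you’re curious what “tracing a ray” actually involves, here’s a stripped-down toy sketch in Python, entirely my own illustration and nothing like how the RTX hardware actually implements it: shoot a ray from the camera, find what it hits, then fire a second ray toward the light to see whether that point sits in shadow.)

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_sphere(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere hit, or None if it misses."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = dot(direction, oc)                  # direction is assumed normalized
    c = dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    for t in (-b - math.sqrt(disc), -b + math.sqrt(disc)):
        if t > 1e-6:                        # small offset avoids self-hits
            return t
    return None

# One "primary" ray from the camera toward a sphere sitting in front of it.
camera = (0.0, 0.0, 0.0)
ray_dir = (0.0, 0.0, 1.0)
sphere, radius = (0.0, 0.0, 5.0), 1.0
light = (5.0, 5.0, 0.0)

t = intersect_sphere(camera, ray_dir, sphere, radius)
if t is not None:
    hit = tuple(camera[i] + ray_dir[i] * t for i in range(3))
    # A "shadow" ray from the hit point toward the light: if anything blocks
    # it, the point is in shadow. Real scenes test every object, not just one.
    to_light = tuple(light[i] - hit[i] for i in range(3))
    length = math.sqrt(dot(to_light, to_light))
    shadow_dir = tuple(x / length for x in to_light)
    blocked = intersect_sphere(hit, shadow_dir, sphere, radius)
    print("hit at", hit, "and it is", "in shadow" if blocked else "lit")
```

Now imagine doing that for millions of rays per frame against scenes made of millions of triangles, and you can see why NVIDIA built dedicated hardware for it.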
But like I said, it requires a new dedicated core on top of the GPU as we already know it, adding yet another cost.
While proper reflections and lighting would be great in any game, the fact of the matter is that this technology has only just been introduced. No game you’re playing today has ray-tracing support. In fact, NVIDIA was only able to show 2-3 examples that could be considered part of an actual game, and they look more like tech demos set in those games than real gameplay. Why? Because the technology is so new that these developers have clearly cobbled together demos with their existing assets to show it off; it’s pretty clear none of this is running in-game yet. In fact, one of the demos, Shadow of the Tomb Raider, launches next month without ray-tracing support, which will be patched in at a later date.
Support for ray tracing will get better in the future, of course, as more cards end up in consumers’ and developers’ hands, but for now three upcoming games is not a promising start. If I have to pay an extra US$500 for a new GPU whose headline technology will be underutilized for the better part of its life cycle, then it’s not a worthwhile investment – and that’s assuming I sell off my original 1080 Ti; it would cost much more if I bought the new card outright. But that’s not all: this technology isn’t available on AMD cards yet, which means the consumer base will be split once again on whether to even support it. AMD Radeon users will push back against devs spending time on it, while devs won’t have much incentive if it alienates the AMD crowd.
But what about the performance? Is it any better?
It most definitely is, somewhat. Thankfully this metric is a bit easier to estimate, because the architecture of the traditional graphics cores is largely the same. We can just compare CUDA (shader) core counts to see roughly how it’ll perform. The 1080 Ti has 3,584 cores and generally maxes out at about 1,900 MHz from most brands. The 2080 Ti will have 4,352 cores at more or less the same clocks. That’s about a 21% increase. There’s some scaling overhead, so the traditional graphics performance of a 2080 Ti is likely only around 10-15% better than a 1080 Ti.
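Here’s the back-of-the-envelope math, assuming the clocks really do land in roughly the same place:

```python
# A crude proxy for raw throughput: CUDA cores x clock speed.
cores_1080ti, clock_1080ti = 3584, 1900   # MHz, typical real-world boost
cores_2080ti, clock_2080ti = 4352, 1900   # MHz, assumed to be similar

uplift = (cores_2080ti * clock_2080ti) / (cores_1080ti * clock_1080ti) - 1
print(f"theoretical uplift: {uplift * 100:.1f}%")   # about 21.4%

# Games rarely scale perfectly with core count, so I'd knock that down
# to a practical gain of somewhere around 10-15%.
```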
Ouch. Did you read that right? Yep. It won’t be a substantial upgrade when it comes to traditional graphics performance. So no, this won’t get us to 8K or 4K 144Hz gaming anytime soon. It will likely let you max out more games like The Witcher 3 at 4K 60 FPS though – or finally tame No Man’s Sky NEXT’s ridiculously bad optimization, even at 1080p.
So, given that the RTX 2080 Ti is going to cost nearly US$500 more than what I have now, will only be about 10-15% more powerful in traditional rendering, and carries two extra types of cores that won’t really make much of a difference for maybe a year, it’s probably not the best time to jump into the next gen right now.
This was, unfortunately, not what I predicted. I thought we’d get a new, smaller-nanometer production process with an uptick in CUDA cores at essentially the same prices, which would mean the Pascal GPUs would get a price cut and the new Turing GPUs would take over their old price slots. That didn’t happen. Instead, NVIDIA took a page from the smartphone market: rather than making a new flagship at the same price, they went to a US$1,000 price point we haven’t seen from any consumer-level GPU.
Hence, let it be known that in August 2018, NVIDIA broke the US$1,000 barrier for consumer-grade graphics cards, and who knows what kind of prices we’ll see in the future now that the floodgates are open.
My 16-month-old, still going strong and very well utilized Palit 1080 Ti Super Jetstream
So yeah, I really didn’t think the NVIDIA RTX 2080 Ti would be THAT expensive. Maybe I’ll hold on to my 1080 Ti for a little bit longer after all.
(Obviously this is not strictly an informational post about the NVIDIA Turing GPUs so I didn’t fill it with specs and data. If you wanna learn more about them, go here.)