
AMD RX 6000 'Big Navi' GPU Event | 10/28/2020 @ 12PM EST/9AM PST/4PM UK

I need independent benches.

It's so hard to believe that AMD is competing on the high end. Lol.

I said it about the 3000 series cards and it rings true here as well: always wait for real benchmarks, and don't buy into all of the marketing figures and buzzwords.

Having said that, on average AMD's numbers tend to be closer to reality than Nvidia's; it just is what it is. So I would say these figures are mostly in the ballpark of reality.

I guess what I'm trying to say is that I think the 3080 and 6800 XT will be roughly on par overall, with game-by-game wins and losses and tons of "draws" (I consider a 1-3 fps difference a draw, and anything over 5 fps a win).
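
If you want to apply that cutoff consistently across a whole review's worth of numbers, here's a quick Python sketch of the rule (the games and fps figures are made up, just to show the bucketing):

Code:
# Bucket per-game results using the rule above:
# <= 3 fps difference is a draw, > 5 fps is a win, 4-5 fps I'd call a lean.
def classify(fps_a, fps_b):
    diff = abs(fps_a - fps_b)
    if diff <= 3:
        return "draw"
    side = "A" if fps_a > fps_b else "B"
    return f"win {side}" if diff > 5 else f"lean {side}"

# Hypothetical numbers for illustration only.
results = {"Game 1": (92, 90), "Game 2": (71, 78), "Game 3": (60, 64)}
for game, (a, b) in results.items():
    print(game, "->", classify(a, b))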
 

MadYarpen

Member
More than independent, I need benches without SAM enabled. I will not be upgrading to Ryzen 5000, and I am deciding between the 3070 and the 6800. They are close, but AMD showed 6800 results with SAM on (unlike for the 6800 XT), so it could look worse in reality for me, and the card is more expensive. But it has 16 GB of VRAM. On the other hand, I am looking at UWQHD, so 8 GB should be enough, right?

And none of this will matter if only one of them is actually available to buy.
 

pr0cs

Member
I need independent benches.

It's so hard to believe that AMD is competing on the high end. Lol.
I could see some of the values they presented being bogus or 'best case', but they presented so many benchmark numbers that I can't see AMD being so disingenuous as to be untruthful about the whole affair. You know there are going to be a TON of people reviewing these new cards; it behooves AMD to be as accurate as possible, if only to be able to say "we told you so", since it's been so long since Radeon has been competitive at the high end.

Nothing wrong with being skeptical but I just don't see AMD being full of shit on all the values they presented.
 

TrackZ

Member
Heavily AMD-favored games. Also, people should stop testing the 5000 + 6000 combination. Zero people have that setup as of now, and barely anybody will be upgrading to it. It's completely useless for 99% of the people out there, and it's heavily limited to certain games.

Show me some real games people play, like Metro Exodus / Cyberpunk / CoD / AC games / Red Dead Redemption, etc.

I sold my whole gaming PC a couple of months ago and am currently debating whether to build a new one or go console. The idea of an all-AMD build to synergize a 5000 CPU and 6000 GPU is incredibly relevant to the decision I'll make in the coming weeks.
 

BRZBlue

Member
I sold my whole gaming PC a couple of months ago and am currently debating whether to build a new one or go console. The idea of an all-AMD build to synergize a 5000 CPU and 6000 GPU is incredibly relevant to the decision I'll make in the coming weeks.

I'm actually in the same boat. I'm currently running a Gen 1 Ryzen, so a mobo/processor upgrade is in the cards. If I'm getting those, I might as well go with an AMD GPU.
 

CuNi

Member
Heavily AMD-favored games. Also, people should stop testing the 5000 + 6000 combination. Zero people have that setup as of now, and barely anybody will be upgrading to it. It's completely useless for 99% of the people out there, and it's heavily limited to certain games.

Show me some real games people play, like Metro Exodus / Cyberpunk / CoD / AC games / Red Dead Redemption, etc.

I'm literally getting a new 5900X CPU, and depending on upcoming benchmarks in full HD and VR games, I'll go with either AMD or Nvidia. If AMD is better, I'll sell my 3080 and use the proceeds to offset the difference between 6900 XT and 3080 FE prices.
 

Sejanus

Member
I believe AMD made a compromise with the memory subsystem.
Let's say the Infinity Cache (128 MB) takes 128 mm² of die and the 256-bit GDDR6 controllers take another 64 mm², for a total of 192 mm² out of 505 mm².
If AMD had used the same memory subsystem as the Vega II (only ~40 mm²), then 16 GB at 1 TB/s would have been by far the better choice.
But we know: cost/availability...
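
Quick back-of-envelope in Python, using the (guessed) mm² figures above:

Code:
# Rough die-area budget for Navi 21 (~505 mm^2); all subsystem sizes are guesses.
total_die = 505            # mm^2, full die
infinity_cache = 128       # mm^2 assumed for the 128 MB Infinity Cache
gddr6_controllers = 64     # mm^2 assumed for the 256-bit GDDR6 interface
hbm_interface = 40         # mm^2 assumed for a Vega II-style HBM2 interface

gddr6_total = infinity_cache + gddr6_controllers
print(f"GDDR6 + cache: {gddr6_total} mm^2 ({gddr6_total / total_die:.0%} of die)")
print(f"HBM2 option:   {hbm_interface} mm^2 ({hbm_interface / total_die:.0%} of die)")
print(f"Area saved with HBM2: {gddr6_total - hbm_interface} mm^2")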
 

notseqi

Member
I believe AMD made a compromise with the memory subsystem.
Let's say the Infinity Cache (128 MB) takes 128 mm² of die and the 256-bit GDDR6 controllers take another 64 mm², for a total of 192 mm² out of 505 mm².
If AMD had used the same memory subsystem as the Vega II (only ~40 mm²), then 16 GB at 1 TB/s would have been by far the better choice.
But we know: cost/availability...
Dunno what that means but it sounds objective. I like it.
 

SantaC

Member
Heavily AMD-favored games. Also, people should stop testing the 5000 + 6000 combination. Zero people have that setup as of now, and barely anybody will be upgrading to it. It's completely useless for 99% of the people out there, and it's heavily limited to certain games.

Show me some real games people play, like Metro Exodus / Cyberpunk / CoD / AC games / Red Dead Redemption, etc.
This is the dumbest post I've ever read. Barely anyone will upgrade to Zen 3? OK, talk about being delusional.
 

notseqi

Member
I'm literally getting a new 5900X CPU, and depending on upcoming benchmarks in full HD and VR games, I'll go with either AMD or Nvidia. If AMD is better, I'll sell my 3080 and use the proceeds to offset the difference between 6900 XT and 3080 FE prices.
That's probably not feasible; I'd save the cost & time of returning stuff. Buy AMD graphics next time round, dude. Stick with AMD processors though, or I'll shit on your porch decorations.
 

Sejanus

Member
vs. the 3070 that people were orgasmic about:

16% higher price
for
18% higher perf
+8 GB VRAM
and better perf/watt
But if it was $499, it would be a...
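
For what it's worth, the perf-per-dollar math barely moves; a quick Python check using the $499 3070 MSRP, the 6800's announced $579, and the ~18% perf figure above:

Code:
# Perf per dollar: 3070 at MSRP vs 6800 at its announced price,
# assuming the ~18% average perf advantage claimed above holds up.
price_3070, price_6800 = 499, 579
perf_3070, perf_6800 = 1.00, 1.18   # 3070 normalized to 1.0

print(f"Price delta: {price_6800 / price_3070 - 1:+.0%}")  # ~ +16%
print(f"Perf delta:  {perf_6800 / perf_3070 - 1:+.0%}")    # ~ +18%
print(f"3070: {perf_3070 / price_3070 * 1000:.2f} perf per $1000")
print(f"6800: {perf_6800 / price_6800 * 1000:.2f} perf per $1000")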
 

VFXVeteran

Banned
The RT performance is going to be the do-or-die deal here. If the AMD cards are significantly slower (~40%) then it's a no-brainer to go with Nvidia. I can't buy a card that's great at rasterization + 16 GB VRAM but has very slow RT performance for the next 3 years. That's a waste of money in my book.
 

Elias

Member
The RT performance is going to be the do-or-die deal here. If the AMD cards are significantly slower (~40%) then it's a no-brainer to go with Nvidia. I can't buy a card that's great at rasterization + 16 GB VRAM but has very slow RT performance for the next 3 years. That's a waste of money in my book.
RT performance won't really matter if their Super Resolution DLSS equivalent is solid.
 

VFXVeteran

Banned
RT performance won't really matter if their Super Resolution DLSS equivalent is solid.

That's not the way things work. You can't tax those cores and assume DLSS will compensate for it. It completely depends on how much you tax them: the more you tax them, the worse the performance is going to be, DLSS or not. If you pull out rays to keep FPS high, your image quality will suffer. You absolutely need more performant hardware to really get some great Monte Carlo RT solutions going. You will continue to see this iterated upon from generation to generation.
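
A toy model of what I mean, in Python; every constant here is invented just to show the shape of the trade-off, not measured from any card:

Code:
# Toy frame-time model: fixed raster cost plus a per-ray RT cost, with a
# DLSS-style upscaler reducing the number of internally shaded pixels.
# All constants are made up for illustration.
def frame_time_ms(rays_per_pixel, upscale_factor=1.0):
    pixels = 3840 * 2160 / upscale_factor  # internal render resolution
    raster_cost = 6.0                      # ms, assumed fixed raster work
    rt_cost_per_ray = 2.5e-6               # ms per ray, invented constant
    return raster_cost + pixels * rays_per_pixel * rt_cost_per_ray

# More rays per pixel still scales the cost, upscaling or not.
for rpp in (0.5, 1, 2, 4):
    native = frame_time_ms(rpp)
    upscaled = frame_time_ms(rpp, upscale_factor=4)  # e.g. 1080p -> 4K
    print(f"{rpp} rays/px: native {native:.1f} ms, upscaled {upscaled:.1f} ms")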
 

notseqi

Member
The RT performance is going to be the do-or-die deal here. If the AMD cards are significantly slower (~40%) then it's a no-brainer to go with Nvidia. I can't buy a card that's great at rasterization + 16 GB VRAM but has very slow RT performance for the next 3 years. That's a waste of money in my book.
You love RT for some reason. I don't know why, apart from it being something good in the future. And with 3080/3090 performance, we know that to be the far future.
 

Senua

Member
You love RT for some reason. I don't know why, apart from it being something good in the future. And with 3080/3090 performance, we know that to be the far future.
I don't think it's very surprising for people into their graphics tech to be into RT. It's the future and it's awesome, but it won't sway me completely to the green team, considering the general price/performance and how SO damn early we are in the RT game.
 
I believe AMD made a compromise with the memory subsystem.
Let's say the Infinity Cache (128 MB) takes 128 mm² of die and the 256-bit GDDR6 controllers take another 64 mm², for a total of 192 mm² out of 505 mm².
If AMD had used the same memory subsystem as the Vega II (only ~40 mm²), then 16 GB at 1 TB/s would have been by far the better choice.
But we know: cost/availability...

You're right, it will take a significant portion of die space, but I don't think 'compromise' is the right word:

Anandtech said:
...the amount of die space they have to be devoting to the Infinity Cache is significant. So this is a major architectural trade-off for the company.

But AMD isn't just spending transistors on cache for the sake of it; there are several major advantages to having a large, on-chip cache, even in a GPU. As far as perf-per-watt goes, the cache further improves RDNA2’s energy efficiency by reducing the amount of traffic that has to go to energy-expensive VRAM. It also allows AMD to get away with a smaller memory subsystem with fewer DRAM chips and fewer memory controllers, reducing the power consumed there. Along these lines, AMD justifies the use of the cache in part by comparing the power costs of the cache versus a 384-bit memory bus configuration. Here a 256-bit bus with an Infinity Cache only consumes 90% of the power of a 384-bit solution, all the while delivering more than twice the peak bandwidth.

Furthermore, according to AMD the cache improves the amount of real-world work achieved per clock cycle on the GPU, presumably by allowing the GPU to more quickly fetch data rather than having to wait around for it to come in from VRAM. And finally, the Infinity Cache is also a big factor in AMD’s ray tracing accelerator cores, which keep parts of their significant BVH scene data in the cache.
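
Taking those two claims together (90% of the power, more than 2x the peak bandwidth), the bandwidth-per-watt side is striking; a quick check in Python with a made-up 384-bit baseline, since only the ratios matter:

Code:
# Bandwidth per watt implied by AMD's claim: the 256-bit bus + Infinity Cache
# uses 90% of the power of a 384-bit bus while delivering >2x peak bandwidth.
# Absolute watts and GB/s are placeholders; the ratios carry the comparison.
baseline_power = 100.0   # W, arbitrary 384-bit reference
baseline_bw = 768.0      # GB/s, e.g. 384-bit GDDR6 at 16 Gbps

ic_power = 0.9 * baseline_power
ic_bw = 2.0 * baseline_bw

ratio = (ic_bw / ic_power) / (baseline_bw / baseline_power)
print(f"384-bit:      {baseline_bw / baseline_power:.1f} GB/s per W")
print(f"256-bit + IC: {ic_bw / ic_power:.1f} GB/s per W (~{ratio:.1f}x)")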

 

KungFucius

Member
I'm literally getting a new 5900X CPU, and depending on upcoming benchmarks in full HD and VR games, I'll go with either AMD or Nvidia. If AMD is better, I'll sell my 3080 and use the proceeds to offset the difference between 6900 XT and 3080 FE prices.
Do you think you will be able to easily acquire a 6900XT and sell a used 3080 for significantly more than it cost in 6 weeks?
 

notseqi

Member
Because I've worked with RT for several years and it gives the best rendering results. It's just that simple.
You will remember that I asked you before why this is being pushed so hard, and I understood that it's the way forward. I agree, but the performance isn't great; it would need to be better for me to accept it.
As is? I prefer the frames.
 

Sejanus

Member
You're right, it will take a significant portion of die space, but I don't think 'compromise' is the right word:



Infinity Cache + 256-bit vs. 384-bit: sure, better TDP/latency, 4 GB more VRAM, and easier shrinks on future nodes, for only ~100 mm² extra.
But
Infinity Cache + 256-bit vs. 4096-bit HBM2 controllers at 16 GB / 1 TB/s, same capacity (16 GB vs. 16 GB): HBM means
a smaller die / cheaper silicon (saving more than 128 mm²)
probably lower TDP
a smaller board
but worse latency
expensive memory (in 2017, 16 GB of HBM2 cost $320; I don't know about now)
and substrate cost.
I am thinking the HBM solution was the better choice.
But maybe this is true: "Infinity Cache is also a big factor in AMD's ray tracing accelerator cores, which keep parts of their significant BVH scene data in the cache", and latency is a big factor for RT.
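
To make that trade-off concrete, a rough Python tally; every dollar and mm² figure is a guess or the 2017 number above, so treat it as illustrative only:

Code:
# Rough cost/area tally: GDDR6 + Infinity Cache vs a Vega II-style HBM2 setup.
# All figures are guesses or the 2017 HBM2 price quoted above.
options = {
    "GDDR6 + Infinity Cache": {
        "die_mm2": 128 + 64,    # cache + 256-bit controllers (guessed)
        "memory_usd": 16 * 6,   # ~$6/GB GDDR6, assumed
        "packaging_usd": 0,     # standard PCB, no interposer
    },
    "HBM2 (4096-bit)": {
        "die_mm2": 40,          # small PHY, per the Vega II comparison
        "memory_usd": 320,      # 2017 price for 16 GB of HBM2
        "packaging_usd": 25,    # interposer/substrate, assumed
    },
}
for name, o in options.items():
    total = o["memory_usd"] + o["packaging_usd"]
    print(f"{name}: {o['die_mm2']} mm^2 of die, ~${total} memory + packaging")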
 