Strix RTX3000 series cards not binned at all?

Joni
Level 7
Just received my RTX 3070 Strix OC 8GB card a few days ago, and in all honesty it's not really what I expected from a high-priced "top tier" model. All the reviews of this model suggest stock boost clocks around 2010-2040 MHz+ without touching anything, and around 2150-2200 MHz with a manual overclock. My sample is nothing like that: at stock settings the boost I'm getting is around 1935-1950 MHz at most, and it isn't really any better with a manual OC. The maximum before it gets unstable is +150 MHz, which lands around 2100 MHz. Of course I know it's working "as intended" since it clocks above the spec (1905 MHz boost), but I really feel a bit cheated here after watching all those reviews with great results at both stock and OC before deciding which model to get. If I had known this beforehand, I might as well have bought the cheapest Dual version and saved around 150€. The cooler and aesthetics are definitely not worth all the extra this Strix costs over the entry models when performance is about the same (or even worse if you are unlucky).

Basically, Asus must have sent strictly binned samples to reviewers, which is quite misleading marketing at the least, if not outright cheating! The retail cards people can actually buy in shops are mostly nothing like the ones in the reviews. Honestly it's quite ridiculous: my friend just got a 3070 TUF, which is 120€+ cheaper than the Strix, yet it performs much better (boosting to 2010 MHz at stock and overclocking to 2175 MHz) than this so-called top tier model I got 😞

So my question really is: why on earth should people pay a premium if it's simply the silicon lottery that decides what kind of performance they get? Usually these higher-end models have been binned so that they get at least slightly better quality chips than the entry and midrange models. Not this time, I guess :mad:

Shamoo
Level 7
Based on the experience I have had (and my friends as well), I would not expect your GPU to hit the clocks seen in the reviews.

For example, with the 3080 there are cards hitting 2000-2200 MHz, but in reality most cards peak around 19xx MHz. Mine never goes over 1965 MHz and hangs out in the 18xx range most of the time.

If you can get more than that, you got lucky.

If it's any consolation, you're missing out on 1 or 2 frames at most.

My friend compared the EVGA FTW3 against two XC3s (all 3090s) and found they all hit basically the same clocks.

Kiriakos-GR
Level 7
Joni wrote:
All the reviews of this model suggest stock boost clocks around 2010-2040 MHz+ without touching anything, and around 2150-2200 MHz with a manual overclock.


The RTX 3000 series faced an issue caused by NVIDIA's design: all the GPUs failed to clock above 2000 MHz and even had issues at their default clock speeds.
NVIDIA was forced to deal with this issue through driver tweaks.
Reviews? They lied so as not to sour their relations with the VGA vendors.
You would be better off posting your complaints on the NVIDIA forums.

Kiriakos-GR wrote:
The RTX 3000 series faced an issue caused by NVIDIA's design: all the GPUs failed to clock above 2000 MHz and even had issues at their default clock speeds.
NVIDIA was forced to deal with this issue through driver tweaks.
Reviews? They lied so as not to sour their relations with the VGA vendors.
You would be better off posting your complaints on the NVIDIA forums.
That's not exactly true; there are plenty of videos of this model showing the numbers I mentioned at stock and OC. Also, 2000 MHz isn't any hard limit: as I said earlier, my friend got the cheaper Asus TUF model, which runs much faster than this Strix I got. So basically it all comes down to luck, which in all honesty it shouldn't. If you buy the top tier model, binning should guarantee that it is faster than the cheaper models. Or do you think it's OK that an Asus midrange card (TUF) boosts 50-65 MHz higher with out-of-the-box settings and outperforms this top-of-the-shelf Strix?

You may consider a brand or VGA maker responsible only if the product fails to deliver the boost clock stated in writing.
For anything else, go to NVIDIA ...

I see a lot of posts like this about the RTX 30 cards, and often folks don't realize that GPU Boost 3.0 on these cards can cause a lot of confusion. I want to try to share a few things that may help you, or maybe help some others who come across this thread.

The way GPU Boost 3.0 works, and how aggressive it is, makes it hard to compare cards unless you are comparing apples to apples. You may be able to hit the same clocks you see in reviews, but you need to test under the same rendering load they tested under to see it. For example: I have a Strix RTX 3090. In Call of Duty: Black Ops Cold War, with my +60 core overclock, I see 2055 MHz a lot. If I close CoD and run the 3DMark Port Royal benchmark, my average clock in that benchmark will be around 1950 MHz on the same +60 core overclock. If I load up another game title I may see the card sit at 1995 MHz most of the time. Under some graphics loads I have even seen 1905 MHz, on the same +60 core overclock.

What +60 means is that wherever your card lands on the clock/voltage curve under a given render load, the card will add 60 MHz to that clock. GPU Boost 3.0 is already so aggressive that overclocking Ampere (and my Turing 2080 was very similar) can be very hard to nail down if you aren't 100% aware of what is happening. I can run a +120 core overclock in a lot of games, then load up a different game and crash in 2 minutes.

How high the GPU clocks is entirely dependent on the current rendering load: not just the game you are playing, but at what resolution and at what settings. The biggest factor, though, is the game itself. If, for example, you see someone hit 2055 MHz in Call of Duty on your exact card, and then you put the same overclock parameters into Afterburner and run Assassin's Creed Odyssey, you are likely not going to see the same 2055 MHz clock. The thing is, neither would they see those same clocks in a different game. Some games' rendering load is lighter, and the core will clock higher naturally; other rendering loads are more demanding and can cause a 100 MHz swing down to 1950 MHz without you touching anything.

To be honest, the way these cards function now is so advanced, and the clock switching so constant and so aggressive, that it has taken a lot of the perceived headroom out of GPU overclocking. On my 3090 Strix I can jack the power limit to 123%, which allows the card to pull nearly 500 W by itself. It generates massive heat at this full power limit, and I bet I don't gain even 1-2% additional performance over the standard 107% power limit I tend to run at. That extra 1-2% costs me about 8 degrees Celsius higher temps and much louder fan noise... it's not worth it. These cards are running right at their top end out of the box most of the time, especially these OC cards.
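To make the offset idea concrete, here is a minimal sketch of how a fixed "+60" core overclock shifts every point on the clock/voltage curve while the point the card actually sits at still depends on the render load. The curve values and load names below are invented for illustration (chosen to mirror the numbers above), not read from any real card:

```python
# Minimal sketch of GPU Boost-style offset overclocking.
# The voltage/frequency points and load names are hypothetical; real curves
# live in the card's VBIOS and are edited in tools like MSI Afterburner.

# Stock voltage/frequency curve: millivolts -> MHz
stock_curve = {
    800: 1830,
    900: 1890,
    1000: 1950,
    1050: 1995,
}

CORE_OFFSET_MHZ = 60  # a "+60" overclock shifts every point up by 60 MHz

offset_curve = {mv: mhz + CORE_OFFSET_MHZ for mv, mhz in stock_curve.items()}

# Which point the card sits at depends on the render load: a heavier load
# pushes the card against its power limit, forcing a lower-voltage point.
load_to_voltage = {
    "Call of Duty": 1050,  # lighter load, holds the top of the curve
    "Port Royal": 900,     # heavier load, power-limited to a lower point
}

for load, mv in load_to_voltage.items():
    print(f"{load}: {offset_curve[mv]} MHz at {mv} mV (+{CORE_OFFSET_MHZ} offset)")
```

Running this prints 2055 MHz for the lighter load and 1950 MHz for the heavier one, with the identical +60 offset applied in both cases, which is why two people can run the "same" overclock and report very different clocks.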

Back to your direct point, however: how are you verifying your clocks? Run different games and loads; if you do, you should see some very different clock results. If you still feel you're not where you should be, then look into the board power draw: how much power is the board pulling under various loads compared to the reviews you are looking at? These cards will clock up if they have the power and thermal room; if a card isn't clocking any higher, look at why. HWiNFO or GPU-Z can tell you if the card is hitting a voltage or power limit and let you know what is limiting you. It is VERY important, though, that you are comparing apples to apples here. You cannot take someone showing a 2100 MHz clock in one game and compare it to your card clocking at 1995 MHz in a different game, because that is completely normal behaviour on these cards. You have to compare the exact same game, so that the rendering load is the same across both cards and the cards respond the same. Only then can you truly see whether one card is perhaps a better overclocker.
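If you want to log this yourself alongside HWiNFO or GPU-Z, a rough sketch using NVIDIA's NVML Python bindings (assuming an NVIDIA driver and `pip install nvidia-ml-py`) might look like this. The one-minute duration and GPU index 0 are arbitrary choices:

```python
# Rough sketch: log core clock, board power, and active throttle reasons
# once per second while a game or benchmark runs in another window.
import time
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetClockInfo, nvmlDeviceGetPowerUsage,
    nvmlDeviceGetCurrentClocksThrottleReasons,
    NVML_CLOCK_GRAPHICS,
    nvmlClocksThrottleReasonSwPowerCap,
    nvmlClocksThrottleReasonHwSlowdown,
    nvmlClocksThrottleReasonSwThermalSlowdown,
)

# A few common throttle-reason bits; NVML defines more.
REASONS = {
    nvmlClocksThrottleReasonSwPowerCap: "power limit",
    nvmlClocksThrottleReasonSwThermalSlowdown: "thermal limit",
    nvmlClocksThrottleReasonHwSlowdown: "hardware slowdown",
}

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    for _ in range(60):  # sample for one minute
        clock = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS)  # MHz
        watts = nvmlDeviceGetPowerUsage(gpu) / 1000.0  # NVML reports mW
        mask = nvmlDeviceGetCurrentClocksThrottleReasons(gpu)
        active = [name for bit, name in REASONS.items() if mask & bit] or ["none"]
        print(f"{clock} MHz  {watts:6.1f} W  limited by: {', '.join(active)}")
        time.sleep(1)
finally:
    nvmlShutdown()
```

Capturing the same game at the same settings on both cards with a log like this is the apples-to-apples comparison described above: same render load, and you can see directly whether power or thermals is the limiter.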

Joni
Level 7
Yes, I used the same programs to test as, for example, GN (Gamers Nexus) used in their review, so it's not down to GPU Boost. I also did a direct comparison with my friend's old (now mine) TUF 3070 OC in the same games and benchmarks; it consistently beats my Strix results very easily, clocking around 60-75 MHz higher at both stock and OC. It all comes down to silicon quality in this case: the Strix card runs very cool (under 60°C) and has a very high power limit (350 W) that it never gets even close to, so basically the only limiting factor with this 3070 Strix OC model is the silicon lottery. Of course, in the end 65 MHz doesn't mean much for actual gameplay, but it's still sad to see a midrange card with a better chip than the top tier model; even the VRAM overclocks much better on this TUF. Either way, I already sold the Strix for the same money I bought it for, so it doesn't really matter anymore.

I just want to warn potential buyers that with these 3000-series cards the most expensive product in the line won't always be the fastest one, not even at stock/out of the box (which is kind of cheating). If you are looking for a great-looking card that runs cool and are willing to pay extra for that alone, sure, it's an OK choice. But from a performance point of view it's simply too expensive when there seemingly isn't any guarantee (binning) that it will outperform the cheaper models.

It was very different with the 2000-series cards. Cheaper models like the Asus Dual OC consistently had worse-binned GPUs than the Strix OC models. I had two 2080 Ti Dual OCs, a normal black Strix OC, and a white Strix OC Limited Edition. The white edition was easily the best overclocker, the normal Strix OC was still very good, and the Dual OCs were both really poor-clocking cards.