
GPU 670M - my settings and usage



c_man
08-14-2012, 04:04 PM
Disclaimer: This is what I do, based on my experience. It might not apply to everyone or everything. Also, even though there is nothing dangerous involved, you are responsible for any unpleasant outcome.

This will be quite a long read, apologies (yeah, I know, too much Spartacus is not good for my health).

First, let's see what the biggest enemy of the GPU is.

While some consider this to be very high temperature, that is only partly true. It's not exactly the temperature itself (up to a point), but the difference between minimum and maximum temperature. As the GPU cools down, microfractures form in the solder (lead-free eco solder in the ball grid arrays is good for us, but not so good for the chips; this can get a lot more technical, but I'm not exactly that kind of person, so I'll stop here) and, in time, the connections at GPU level will stop working.

This problem can be fixed in several ways. Some people put their cards into the oven to reflow those connections. While this might work, it is not 100% safe. There is also the option of going to a pro with special equipment. If he does the job right, you will use the card for a long time. If not, maybe it will last 6 months.

How to prevent this?

Well, make sure that the difference from low to high is not big and, most importantly, that cooling cycles are rare. Limit them as much as possible. I'm not saying you should not use the GPU at full power when needed, but if you game, then game. Do not exit the game every 5 minutes to do something and cool down the GPU. Also, if you do not need the extra power, don't stress the GPU for nothing. I will show you what I do. Of course, until I adopted this practice, some cards died on me very fast.
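Side note: if you want to put numbers on those swings, you can log the temperature with a few lines of Python. This is just a rough sketch, assuming nvidia-smi (installed with the Nvidia driver) can read the temperature on your machine, which is not guaranteed on every Optimus laptop:

import subprocess
import time

def read_temp():
    # Standard nvidia-smi query; prints degrees C as plain text.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

lo = hi = read_temp()
try:
    while True:
        t = read_temp()
        lo, hi = min(lo, t), max(hi, t)
        print(f"now {t}C, min {lo}C, max {hi}C, swing {hi - lo}C")
        time.sleep(5)
except KeyboardInterrupt:
    pass

Nothing fancy, but it shows exactly how big a swing your usage pattern produces.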

I will use the 670M as an example since most of you have this card.

Programs:

- NvidiaInspector - download here (http://downloads.guru3d.com/NVIDIA-Inspector-1.94-download-2612.html);
- HWiNFO64 - download here (http://www.hwinfo.com/download64.html);
- Furmark - download here (http://www.ozone3d.net/benchmarks/fur/);
- Heaven benchmark - download here (http://unigine.com/products/heaven/download/);
- 3Dmark11 - download here (http://www.3dmark.com/3dmark11/download);

NvidiaInspector is the overclocking program I use.

HWiNFO64 will give you a lot of info about your system.

Furmark is a stress app for the GPU. DO NOT USE IT FOR LONG PERIODS OF TIME. It will damage your GPU. I only use it to put some load on the GPU and do some initial testing.

Heaven is a nice benchmark that will help us determine some OC limits.

3Dmark11 will help us compare results.

The 670M has 4 working performance stages, called P-states. They become active depending on load.

The first one is P12 - minimum power consumption.

http://s7.postimage.org/fd3yozazv/Clipboard01.jpg

This sets the lowest clocks used. As you can see, it's 51/135MHz.

The second one is an intermediate state, P8 - video playback.

http://s16.postimage.org/6m3vkfqrp/Clipboard02.jpg

The third one is another intermediate state, P1 - balanced 3D performance.

http://s13.postimage.org/nm6a375nb/Clipboard03.jpg

The last one is P0 - maximum 3D performance.

http://s8.postimage.org/8rxkqxuj9/Clipboard04.jpg

As you can see, the GPU clock is greyed out. You cannot change it directly. But you can change the Shader clock. So let's see what happens.

The default value for the Shader clock is 1240MHz (the truth is that the numbers we see are not 100% accurate, but since we all see the same ones, the reference is valid and I will work with it). On this card the GPU clock is tied to half the Shader clock, so 1240MHz shader means 620MHz GPU.

I'll change that to 660MHz and hit Apply Clocks and Voltage.

You might be looking at your value and notice that it bottoms out at 365MHz. Just click Unlock Min (next to the P-states drop-down menu).

http://s16.postimage.org/5j24rfznp/Clipboard05.jpg

Now, during this Windows session, whenever P0 becomes active, the maximum GPU clock will be 330MHz.

If I want to apply this value in the future without starting the app, I have the option to create a shortcut on the Desktop with Create Clocks Shortcut.

http://s16.postimage.org/3z1wdpa2t/Clipboard06.jpg

If I want this applied every time Windows starts, there is an option on right-click on the same button.

http://s16.postimage.org/7qbh00nr9/Clipboard07.jpg

Remember that for every P-state you will have to make a different shortcut.
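By the way, these shortcuts are nothing magic: each one just calls nvidiaInspector.exe with command-line arguments encoding the P-state and clocks. The exact argument syntax differs between Inspector versions, so don't type it from memory; open Properties on a shortcut the tool created and copy the Target field. Purely as an illustration, in Python (the path and argument string are placeholders, not guaranteed syntax):

import subprocess

# Adjust the path to wherever you unpacked Nvidia Inspector.
INSPECTOR = r"C:\Tools\nvidiaInspector\nvidiaInspector.exe"

# Placeholder arguments: copy the real string from the Target field
# of a shortcut that Create Clocks Shortcut generated for you.
subprocess.call([INSPECTOR, "-setGpuClock:0,0,660"])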

At this point, since P0 is the maximum performance state, this is the one that I need to change to OC the card and get more performance (Captain Obvious here). I'll get to this later.

Underclocking

What if it's not performance I want, but more battery life or less heat?

Well, there are 2 performance states that I need to change, as the first 2 are already low. I need to change P0 and P1 and, like I've said, I'll have to make shortcuts for each (remember to hit Apply first, Shortcut second). Let's try it.

I'll set P0 to 135Mhz. Remember to Apply.

http://s12.postimage.org/nlkv90t8t/Clipboard08.jpg

If I open Furmark and start the Burn-in test, the system will decide that it needs P0 and:

http://s15.postimage.org/xb35r2thn/Clipboard09.jpg

I only do this for a few seconds to trigger P0. To stop Furmark, hit ESC.

If you switch to P1 you will see that it sits at 365MHz. I don't want a higher value there, so I change it to 135MHz.

http://s11.postimage.org/i4etm1uhf/Clipboard11.jpg

135MHz was just a random value. If I open a 4K video right now, the system will activate the P8 state. This means I can go with P0 and P1 as low as 74MHz without any problems. If the system can play 4K video, it can handle most routine tasks on battery. This, combined with Battery Saving in Power4Gear (tweaked for low brightness, camera and ODD off) and no keyboard lighting, should give maximum battery time, or minimum heat with still decent performance.
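If you want the switch to happen by itself, the shortcuts can be fired from a small script that watches the power source. A sketch, assuming shortcut names like the ones below (the Win32 call is real; the names and paths are whatever you chose):

import ctypes
import os
import time
from ctypes import wintypes

class SYSTEM_POWER_STATUS(ctypes.Structure):
    _fields_ = [("ACLineStatus", ctypes.c_ubyte),
                ("BatteryFlag", ctypes.c_ubyte),
                ("BatteryLifePercent", ctypes.c_ubyte),
                ("Reserved1", ctypes.c_ubyte),
                ("BatteryLifeTime", wintypes.DWORD),
                ("BatteryFullLifeTime", wintypes.DWORD)]

def on_battery():
    s = SYSTEM_POWER_STATUS()
    ctypes.windll.kernel32.GetSystemPowerStatus(ctypes.byref(s))
    return s.ACLineStatus == 0   # 0 = offline, 1 = online

DESKTOP = os.path.join(os.environ["USERPROFILE"], "Desktop")
LOW = os.path.join(DESKTOP, "P0 74MHz.lnk")        # your low-clock shortcut
DEFAULT = os.path.join(DESKTOP, "P0 default.lnk")  # your default shortcut

last = None
while True:
    now = on_battery()
    if now != last:   # power source changed, fire the matching shortcut
        os.startfile(LOW if now else DEFAULT)
        # do the same for your P1 shortcuts if you made them
        last = now
    time.sleep(10)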

Don't forget to downclock the Memory as well, though in the P0 state, with the current driver, it does not go lower than 1500.

When you want the default values back, just click Apply Defaults for every P-state.

Overclocking

Let's see how I OC this.

Now, you should really run Furmark for the first time with stock clocks to compare temps with other members. Use the Burn-in benchmark at 1920x1080 for 15 minutes. I get about 75C at a room temperature of 33C. I've seen temps above 90C on this forum. If you have those, please solve the cooling problem first and then OC.

If everything is fine, run Furmark again with the Burn-in test, resolution 1920x1080 and 8xMSAA, for 10 minutes. Note the temperature.

I've said that for maximum performance the target is P0, so this is what I need to change.

I will use 20MHz steps to increase the value from 620 up (remember, we need to change the Shader clock, so there it will be 40MHz steps). After every increase I start Heaven to see if I have any artifacts. I don't use Heaven for anything else. Artifacts mostly look like fireworks or something similar. When you see them, stop and decrease the value.

Do the same for Memory clock.
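If you want to keep the stepping disciplined, it can be semi-automated. A rough Python sketch of the loop I described, for the GPU clock (the paths and the Inspector argument format are placeholders: copy the real argument string from a generated shortcut, and repeat the same loop for the Memory clock):

import subprocess

INSPECTOR = r"C:\Tools\nvidiaInspector\nvidiaInspector.exe"   # adjust path
HEAVEN = r"C:\Program Files (x86)\Unigine\Heaven\heaven.exe"  # adjust path

gpu = 620                    # stock GPU clock in MHz
while True:
    gpu += 20                # 20MHz GPU steps = 40MHz on the Shader clock
    # Placeholder argument format: verify against your own shortcut.
    subprocess.call([INSPECTOR, f"-setGpuClock:0,0,{gpu * 2}"])
    subprocess.call([HEAVEN])                 # watch for artifacts
    if input(f"{gpu}MHz: artifacts? [y/N] ").lower() == "y":
        gpu -= 20            # fall back to the last clean step
        subprocess.call([INSPECTOR, f"-setGpuClock:0,0,{gpu * 2}"])
        break
print(f"Highest clean GPU clock: {gpu}MHz")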

Some GPUs can OC more, some less. Don't worry, it's normal. My card can run stable above 755 GPU/1650 Memory, but I've set this as the top mark and so far I have used it only with Max Payne 3. With other games I run much lower clocks; for example, I play Inversion at 365/1500MHz.

Adjust power as needed and remember to keep the difference between minimum and maximum temperature as small as possible, when you can.

After you have set the best OC values with Heaven, run Furmark a second time with the Burn-in test at 1920x1080 and 8xMSAA. Let it run for 10 minutes and compare the results with stock. If it's within ~5C more, it's fine. If it's more than 10C over and you still need to use those values, do an in-game temperature check.
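To make the stock vs. OC comparison honest, log the peak while Furmark runs instead of eyeballing it. Same nvidia-smi assumption as the earlier sketch:

import subprocess
import time

def read_temp():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

peak = 0
end = time.time() + 10 * 60      # the 10 minute burn-in window
while time.time() < end:
    peak = max(peak, read_temp())
    time.sleep(5)
print("Peak temperature:", peak, "C")

Run it once at stock and once with your OC values, then compare the two peaks.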

In the end, run 3DMark 11 with basic settings and check the score. This is your maximum-performance P0 state for the most demanding games. You can compare scores here (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html) to see how close you are to the next best GPU.

Using NvidiaInspector, you should now have on the Desktop the shortcuts you need for quick access to any setting without starting the app. Remember that you need one shortcut per P-state.

I know there are ways to force a P-state or to run more shortcuts at once, but I like the dynamic behaviour and the control that individual shortcuts give.
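For the curious, running several shortcuts at once is just a matter of launching them one after another. A tiny sketch (the .lnk names are whatever you called yours):

import os
import time

DESKTOP = os.path.join(os.environ["USERPROFILE"], "Desktop")
for name in ("P0 135MHz.lnk", "P1 135MHz.lnk"):  # one shortcut per P-state
    os.startfile(os.path.join(DESKTOP, name))
    time.sleep(2)   # give Inspector a moment before firing the next one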

c_man
08-14-2012, 04:05 PM
HWiNFO64 has a zillion options. The Sensors section is very interesting as well.

http://s14.postimage.org/ksaziq88h/Clipboard12.jpg

mrwolf
08-14-2012, 07:20 PM
Nice guide bro.. I never knew that heating and cooling could damage the GPU like that :s.. I mean I play a lot of games and do testing on some games that are not even released yet, so I have to deal with unexpected crashes to desktop that bring my temp down from 75 to like 45-50. Would that erode my GPU in the long run?
If you could post a link here which goes in depth on the effects of cooling cycles, that would be great, as I am interested.
Also, if you could provide more detail on how the damaged GPU linkages can be repaired in an oven, and what a pro guy would do, it would be very helpful to know how to repair these things..

When I had my Alienware M17x I used to game even more, with several heating and cooling cycles, and now after 4 years it's still running strong, so to what extent is this impact evident..?

Thanks for the info :)

UltimaRage
08-14-2012, 07:26 PM
Why not just lower the fan speed when the usage of the GPU drops?

c_man
08-14-2012, 07:36 PM
Part of this I know from service guys, part from my own experience. Try googling terms like "oven gpu", "reflowing", "reballing". English is not my first language, I can't do much more.

I think that behaviour damages the GPU. But you do get some money for testing, so you are covered. And if you do this as a job, you most likely need a new powerful GPU after 1-2 years anyway.

c_man
08-14-2012, 07:37 PM
Why not just lower the fan speed when the usage of the GPU drops?

I don't understand. Why, when and, most importantly, how?

mrwolf
08-14-2012, 07:50 PM
Cool, I will search those terms and have a look.

I do get paid, but not that much.. I'm not a developer so I don't have the kind of contractual agreements that would cover me like that lol :/

But in case something goes bad a few years down the line, is it possible for me to get a new GPU fitted in my G75 by the pro guys at a PC/laptop repair store? Do they do these kinds of things?

As long as a new GPU can be fitted and solves the issue, then I guess it's not a huge problem, as this could be like a maintenance thing for me :)

HiVizMan
08-14-2012, 08:17 PM
C_man, I just want to take the time to thank you for posting this sound and very helpful guide. You are a credit to this forum and I hope that you will long remain part of ROG.

My thanks and respect sir.

c_man
08-14-2012, 08:18 PM
Thank you!

SpeedyG75VW
08-14-2012, 08:31 PM
I don't necessarily disagree with you, but if "cooling cycles should be rare", that would probably mean leaving your computers on all the time (both desktops and notebooks), right? I assume this would be valid for the CPU as well, correct? Depending on which CPU you are using, my experience is that they run pretty close to the temps on the GPU even when not "maxed out" under full load. I've owned a few desktops and notebooks over the years and have always turned them off when not using them. I can't say I've ever had one fail due to the CPU or GPU... and my computers have always lasted at least 8 years or more; most of the time, it's the hard drive that fails first...

If this problem is inherent to RoHS-compliant solder, then wouldn't we see it in more than just computers? Almost everything that has a circuit board these days uses RoHS-compliant solder, and most electronics today have microprocessors/microcontrollers as well... Does that mean we should be avoiding heating/cooling cycles on those devices too?

c_man
08-14-2012, 08:58 PM
Indeed, most of them are, but they don't have cooling cycles similar to GPUs. Now, don't imagine that the GPU will die after a few months (even if some do).

Eco solder is not exactly years old (I'm not sure how old the compliance requirement is, but I don't remember seeing it years back).

Even old Pentium III laptops still work, turned on 12h a day for years (well, they are IBMs, simply perfect laptops; I wonder how long they will keep working; only the batteries had problems, and they were never opened for cleaning or anything), but they used different tech and were not gaming machines. In any scenario, there was little temp variation, not that it matters for them.

About the "always on" policy, that is something old and I know people who have done it ever since the 486, but for different reasons. I never did that. I don't think it is related to our problem here.

We could use simple common sense. You shut the laptop down maybe once a day. I've seen gaming routines with alt-tabbing so often in a short period of time that I think it adds up to a couple of months' worth of my shutdowns. So if you leave it on, there is little gain. For this reason alone, I don't think we can compare shutting down with constantly cooling from a high temp (like 80-90C down to 50C) every 5 minutes.

About this problem there is a lot of info on the Internet. It might be more interesting to talk to tech service guys who repair electronics; they will explain exactly what happens and why.

mrwolf
08-14-2012, 09:45 PM
I did some reading on this subject. It seems it is something that is more evident in GPUs than CPUs, and it really only concerns gaming machines, mostly because of the high temperatures you reach.
I don't think shutting down every night makes any difference lol, it's only an issue when you're reaching very high temps and then rapidly cooling.

Technically this only happens when the GPU cools rapidly, which is what happens when you quit a game. I guess you could say it's perfectly safe to hit high temps in the 90s and then slowly cool down the GPU by 5-10 degrees every 5 minutes. This would prevent the 'hairline fractures' from developing, as the temp would change smoothly :)

I strongly think that we, or someone at ROG, should make an app or program that automatically starts to slowly scale down the GPU in this manner. By gradually reducing the temp after each gaming session this issue could be fully avoided :) I hope some devs or an Asus marshal sees this and we get some results..

What do you guys think..? Good idea, no? It wouldn't even be that hard to make, very simple concept.
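Something like this maybe? Just a rough Python sketch of the idea, not a real app.. it assumes nvidia-smi can read the temp on your machine (not guaranteed on every laptop); you'd start it when you quit the game, keep something light running, and it tells you when the GPU has drifted down gently:

import subprocess
import time

TARGET = 55          # let it idle once we drift down to this (C)
CHECK = 5 * 60       # check every 5 minutes
MAX_DROP = 10        # more than this per check = cooling too fast

def read_temp():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

prev = read_temp()
while prev > TARGET:
    time.sleep(CHECK)
    t = read_temp()
    if prev - t > MAX_DROP:
        print(f"dropping fast ({prev}C -> {t}C), keep some light load on")
    else:
        print(f"{t}C, drifting down nicely")
    prev = t
print("Safe to let it idle now.")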

UltimaRage
08-14-2012, 10:37 PM
Interesting topic.

I think that GPUs and CPUs are made to handle those temperature changes between the idle operating temperature and the load temperature, though.

Especially since usage is never the same. Certain games don't use certain parts of the GPU; for example, if you are running a game without tessellation, then the tessellators on the Nvidia shader cores are not being stressed. If you are running a DX9 game, there's even more of the shader hardware you aren't using.

If you always keep your computer on, then everything stays at a nice idle temp, which is good.

Shutting down your computer daily will probably lower the lifespan much more than alt-tabbing frequently... but even then, good computer components are made to last for decades. People don't get new parts every couple of years because something went bad, for the most part. They do it to upgrade.

When you shut down the computer, temperatures plummet very fast as the heatsinks are still pulling heat from the chips. When you turn it on, it goes from 0 C to 50 C VERY fast, mostly by the time Windows starts. For someone with an SSD, that's about 25-30 seconds.

IMHO, that seems like it would have more of an effect than 50 C to 65-70C.

Is there anything conclusive on the subject from hardware manufacturers?

mrwolf
08-14-2012, 11:32 PM
Yeah, that makes sense.. If my notebook sleeps, does the temp go all the way down too?

c_man
08-15-2012, 07:36 AM
Manufacturers will not tell you anything about this. Why should they? But there are some interesting facts. For example, Dell had a problem like this with some GPUs, and what they did was make the GPU run a little hotter even at idle. It would hit around 75, but instead of the original drops to 50, the limit was set at 65. The lifespan was increased from a few months to about two years. And Dell always has BIOS updates that try to control problems like this and keep a good balance. Most people will only think that the cooling system is poor. A good cooling system is judged by how fast the temperature drops. That is only partly right, since sudden drops are not that good either. HP also has a number of failures due to this. The Xbox was a big scandal because of this.

First, this needs to happen lots of times before the GPU is affected. You don't shut down your laptop tens of times a day, but you could easily have that many cooling cycles during a few hours of usage.

The problem is real, there are tools to fix it, and there are people who do this for a living.

If you take, let's say, a former high-end card with a few years behind it, like my trusty 8800GTX, you get about 1 million Google results for it dying due to microfractures. And you can search for the people who repair them, about 3 million results.

Now it's up to you if you want to keep the laptop working as long as possible. I'm just the messenger here.

mrwolf
08-15-2012, 11:20 AM
Yeah, I totally agree.. I searched myself last night and found a lot of info on this, and there are businesses who solely repair this particular problem..
Also, it was very common for the Xbox 360 to have this issue..

C_man, what do you think of my idea about the app that does gradual cooling?? How do we get someone to make it? lol

c_man
08-15-2012, 11:34 AM
It's not impossible, but it won't be that easy, I think, since control over the fans is little to none, for starters. I know a lot about Dell, as I had a few laptops from them and I know the problem, how they tried to fix it, how the fan behaviour changed after BIOS updates. I know little about Asus.

On the other hand, we must not go overboard with this. It does not happen overnight. Just a bit more consideration for what's going on and how we can minimize it, without making it a pain to use something that we got to give us joy, no?

mrwolf
08-15-2012, 11:54 AM
Yeah, fair enough.. For now I have a temporary method to prevent this..
After playing your game or whatever you've been doing that heats up the GPU (65-75 degrees), I just put on an HD video and let it play for a while when I'm done, so the temp gradually drops to about 50-55 and stays there.. then you can close the movie and it can go back to idle or whatever.. Either way, by doing this the temp doesn't shoot down from 75 to 40.. :)

c_man
08-15-2012, 12:19 PM
Smart move.

UltimaRage
08-15-2012, 01:03 PM
Manufacturers will not tell you anything about this. Why should they?

Sorry, but I have a GPU in another computer that has been used for over 5 years and is still just fine... and it gets FAR hotter than any GPU out currently. A G80 8800GTS.

The issue was so prevalent with Xboxes because they used cheap solder. Not all solder is the same; there is solder with heat tolerances far higher than what a GPU will put out.

If this is common knowledge, then a manufacturer should tell their customers how best to take care of their equipment.

1-3 million... did you look at every single link to verify that they were 100% about what you thought they were? I don't think internet searches work the way you think they do. Searches are sorted by relevance, meaning the latter end of the results is irrelevant to the search terms.


I know you are trying to help, but when you brush off the idea that idle-to-zero and zero-to-idle during startup probably affect this far more than idle-to-load does, it's hard to take this seriously.

It can't be that hard to find something from AMD or Intel about this subject if it is really that big of an issue.

c_man
08-15-2012, 01:44 PM
It's up to you. The info is easy to find, I know people in service, I know my laptops. I really don't have to convince anyone; it would take a lot of effort and I gain nothing from it. Look for topics like this if you want (http://www.overclockers.com/forums/showthread.php?t=606658), you will find very different cards. Or use any of the other keywords I gave so far.

PS. I know how Google works, the keywords are there for a reason.

PS2. I was hoping to get more feedback about the changes that make the GPU use less battery power.

UltimaRage
08-15-2012, 02:11 PM
The G80 cores got FAR, FAAAAR hotter than ANY GPU currently out.... Not really conclusive in regards to current GPU tech, which has gotten vastly better since then.

How does my 8800GTS G80 still work fine, 4-5 years later?


My issue is that you are passing this subject off as being conclusive, which it is not. There are many factors. Low quality solder has a lower melting point than higher quality solder. This is a fact.

Again, if this is truly a great way to extend hardware life, then it should be no big deal to find someone from AMD, Intel, or Nvidia saying something about it.

It is frustrating how you ignore that 0 to 50C when starting up, and vice versa when shutting down, will likely have far more of an impact, because those temperature changes are far more drastic than, say, idle to load, which is commonly 50 to 65-70C.

Showing an instance of someone's GPU going out when we don't know the usage scenario doesn't make sense either. We don't know if this person gamed for 8 hours a day, or had gaps between gaming from alt-tabbing.

All I am saying is, don't state it as if it were fact.

There are too many variables as GPU usage is never the same - such as when playing a game with tessellation turned off, or playing a DX9 game.

I am a game developer, and if this was a truly large issue (as we go from idle to load CONSTANTLY), then we would see more evidence for this with such a large sample size.


I have always kept my machines on 24/7, because if this is a problem, then the zero to idle temps would have far more of an effect than idle to load.

mrwolf
08-15-2012, 02:20 PM
How hot do the G80s get..?

I think c_man is just trying to point out some issues that have actually happened to a lot of people, so I guess it's not fully conclusive, nor is it inconclusive.. But at the end of the day it makes sense that rapidly cooling many, many times can take its toll on the hardware as the years progress.

UltimaRage
08-15-2012, 02:29 PM
G80s can go as hot as 90C as their normal operating temperature under load.

c_man
08-15-2012, 02:33 PM
The G80 cores got FAR, FAAAAR hotter than ANY GPU currently out.... Not really conclusive in regards to current GPU tech, which has gotten vastly better since then.

I have always kept my machines on 24/7, because if this is a problem, then the zero to idle temps would have far more of an effect than idle to load.

I don't care about shutting down, since that is once a day. OK, maybe for some, 5 times a day. During the same day you might have 50 cooling cycles. So what is 5 compared to 50? Does it make sense to keep it on 24/7? I guess not, you gain very little.

I know my routine. I know other people's routine. I know input from service.

No manufacturer will shoot itself in the foot over this. If Nvidia released such info, how many people would stop buying their products? Or do you think that AMD, out of the goodness of their hearts, would jump in to support them? Not going to happen.

You have one card. I personally know many cases, and I looked into it because I wanted to prevent this as much as possible. That's about it. I don't have to convince anyone. I mean, it's not my discovery. It's more or less common knowledge once you've had your first one die.

c_man
08-15-2012, 02:34 PM
G80s can go as hot as 90C as their normal operating temperature under load.

This is a 2006 release card, right? What solder do you have?

UltimaRage
08-15-2012, 02:47 PM
Incredible claims require incredible proof.

Again, it is rather short-sighted to pass this off as conclusive.

As I said above....

"I am a game developer, and if this was a truly large issue (as we go from idle to load CONSTANTLY), then we would see more evidence for this with such a large sample size."


This is a 2006 release card, right? What solder do you have?

Whatever solder Nvidia used of course.

Card was released in Dec 2007.

Please explain how the 8800GTS has lasted 5 years, as it is a card that gets far hotter than anything currently out.

You may have had this experience in the past, but to think that the situation wouldn't improve as operating temps go down doesn't make sense.

Things change. The tech world is not static.

c_man
08-15-2012, 03:32 PM
Please don't be offended by this. It's a public place; I am here to exchange ideas and experiences, and no one has to be right about anything. We all have our own background. It's not a contest.

Please explain how I, the people I know, and the people my friend from service knows had this happen with a bunch of cards. I cannot explain one card versus many.

I know you're thinking, "hey, I have a desktop card that has been working for years, no way this is true. What do I care if there are a lot of people who had it, or if some guys decided to build the equipment to fix it. Mine still works, so I'm right."

I am you most of the time. When I hear people complain about something I never had, I say "those guys must have done something wrong". Well, sometimes they didn't, or they didn't know about it.

Why don't you just look it up on the Internet and talk to those people, if you have never heard of this? Also notice the cards: mostly mobile Nvidia. You don't exactly fit that pattern. Or maybe you got lucky, who knows. It's just one card. Again, there might be a lot more still working, but focus mainly on gaming laptops with Nvidia mobile cards (Alienware, Clevo and so on). Have you ever considered that it might be that very high temp that kept it going? There is little lab info on this; I guess no one wants to start a fire. I wish you were right. I wish my laptop and my friends' laptops never get this. Or never had it, since these GPUs are so damn expensive, and that's if you are lucky enough to have MXM. For some it's even more complicated.

I can't say much about desktops, I stopped using them years ago.

In the past the solder was different. The eco stream is more of a recent thing, and everyone is free to use whatever they want, as long as it's not harmful. Yes, tech changes, and for the better, but Eco brakes, Eco this, Eco that all have a price, just as non-eco had. Nothing is perfect.

I'm just saying there is a problem. Everyone thinks high temperature is the only thing that matters. It's not. Will it hit everyone on the planet? Most likely not. Do you fit the pattern of exiting games every 5 minutes or so? If you do, then please consider the problem. Do I really care if you don't? No. I'm not an alarmist here; I don't care about shutdowns. How many can one have in a day anyway?

It's strange, as people are really interested in this. Hence me saying that no one will release anything about it. It's just too negative for this kind of market, and it's an image that is hard to repair afterwards.

In the end, I apologize for anything that sounded wrong. I am Latin; we light up fast, as you may know.

UltimaRage
08-15-2012, 04:02 PM
The issue is that you are ignoring a lot of what I am saying. I am not trying to argue; in fact, I am trying to bring more to the conversation by adding things that you haven't mentioned.

Extreme temperature changes cause contraction and expansion to a greater degree than milder temperature changes. That's just what happens with materials.

Also... my experience isn't just because of having that card for so long.

As I said....


"I am a game developer, and if this was a truly large issue (as we go from idle to load CONSTANTLY), then we would see more evidence for this with such a large sample size."

Unless developers are getting some special hardware (They aren't), I don't see how you can dismiss my experience in regards to that as well.

Also, the G75 is new. It runs cooler than any other laptop in the past has, for the most part.

It's just strange that you won't think about how tech has changed and how that changes the mechanics of electromigration. I know it is real; that isn't what I am talking about. I also know many developers who only upgrade when they have to, for development purposes, like going from DX10 cards to DX11 cards.

All developers consistently go from idle to load hundreds of times per day, and that is not an exaggeration. With mission critical parts like a GPU, we can't have room for error.

For instance, DX10 cards to DX11 cards: pretty much all higher-end DX11 cards run cooler than most higher-end DX10 cards, because the transistor size in nm goes down with each GPU generation.

c_man
08-15-2012, 04:21 PM
At this point there is nothing we know about the G75. I'll be able to say whether it's better or not after 2-3 years of usage. During this time I'm going to take care of it as best I can, with the info I have. I'm not going to overdo it, but I'll keep some things in mind.

I am not saying that you are wrong and I am right. What I know is that this problem is real. In service shops there are lots of cards that died because of this. I had it happen. I have friends who had it. There are other people over the Internet. There are machines designed to fix this. So ...

UltimaRage
08-15-2012, 06:32 PM
Common G80 card (Generally the 8800GTS and the 8800GTX): 60 C idle, 90 C load. 30 C difference.

660M: 50 C idle, 65 C load. 15 C difference.

Recognize that things have changed, please.

At this point, I know the idle and load temperatures of the 660M. I don't need to know about the G75 for that. Temperature is the thing that drives electromigration, more specifically quick temperature changes. Let's just talk about the facts, so we can actually come to a conclusion that is relevant to today's technology. As you should know, the G80 cores were notorious for the solder wearing out, like the early Xbox 360s were. I haven't heard of people having to use the oven trick for GPUs much anymore... because they simply don't run as hot as they used to.

Newer-model Xbox 360s are far more reliable. By shrinking the transistor size of the CPU and GPU in the machine, they produce less heat.

You are relying too much on anecdotal evidence... and old anecdotal evidence at that.

I would much rather look at the facts of the situation of electromigration in graphics cards.

Properly made electronics are made to last for decades.

The G80 and early G92 cards had naturally high failure rates. Everything isn't equal.

2008 article:

http://www.theinquirer.net/inquirer/news/1004378/why-nvidia-chips-defective


"Modern chips consume electricity in an uneven manner, as different parts of the chip use power at different rates. Sometimes parts of the chip are never used at all for a given workload. If you have a modern GPU and don't game or are smart enough to not run Vista, you will likely never touch the transistors that do all the 3D work. Think about it this way, there are hot spots on the chip as well as cold spots, it is uneven and changing constantly."

From the article. When talking about this subject, you must also consider things like that. You are biased because of your repair-side connections; you must recognize that as well, correct?

It just isn't good to cause new buyers unnecessary worry about something that isn't as big an issue as it once was.

c_man
08-15-2012, 08:08 PM
Not all 660Ms run at 65C under full load. I've written about this several times. I have had non-defective laptops with most of the new mid-range Kepler cards (650, 660, 640) that would hit 90C and over under full system load. It's not the GPU itself, but the poor cooling design. At some point with those, there might be a 45C difference. Let's talk about real-life stuff. I know people just presume Kepler runs extremely cool. The card itself might, but a card alone does not make a laptop. And right now I know nothing about Kepler's behaviour over time. It might be extremely good, or not. What I know is what happened in the past, and that now I can at least pay attention to this minor thing. It's not that hard to do.

I know what happens, and I know how it's fixed. It's as simple as that. I am not biased at all. That is a decent article to get an idea of what's going on. You should also read the comments and related material. Instead of going all technical, I keep an eye on a very simple thing that I know of. The article is far too complex and covers too much; while some parts relate to this problem, others do not, and if those cause a failure, it cannot be repaired as simply, or maybe at all.

You don't have to believe me, and I don't have to convince you. I've stated from the start of the OP: this is what I do, and the reasons are also known.

You can't really convince me either, as I have my own first-hand experience from the past (and don't imagine it was years ago), and I can't presume anything about today's tech, since you can't tell how reliable a product is from limited experience and paper specs. Maybe it has a weakness of some sort. I don't know. Maybe it doesn't. I am not paranoid enough to note every shutdown or anything like that; I'll just keep an eye on it, and instead of exiting a game 500 times a day, I'll restrain myself to about 10 :D And I hope that this GPU won't die in 2-3 years as others did. If they fixed it, there's nothing to lose. If they didn't, or if it's something new, maybe my effort will be for nothing, but at least I did something.

About high temps: people think that old tech runs like lava and new tech like ice. What is the max temp you think my 8800GTX hit while gaming?

mrwolf
08-16-2012, 07:49 AM
I found out from some more reading, and from a technician friend of mine, that this issue is a lot more prevalent in GPUs that use lead-free solder, as this is what becomes brittle easily and can develop the tiny cracks that affect the GPU.
If our GPUs use lead solder, then apparently this reduces the issue big time and makes the GPU last 10x as long as with lead-free solder..

Just thought I would let you guys know..

Anyone know what kind of solder our 670M is using..? I'm pretty sure it is of the highest quality, but just checking..

c_man
08-16-2012, 08:18 AM
It should not use lead at all these days.