I don't know if this also fixes "Channel Timeout" errors; more testing is needed.
After reading several topics/posts about the power states used on GeForce cards, I found out that the Fermi series only has 3 P-states instead of 4 (or more).
P-states scale a GPU's clocks up or down as needed: up when more 'power' is needed, down when the card is mostly idle.
You can check your clocks by running the tool NVIDIA Inspector in Windows.
My EVGA GeForce GTS 450 (1GB) has the following clock speeds:
State 3 (lowest energy use)
GPU Clock: 50MHz
Memory Clock: 324MHz
State 2 (mid energy use)
GPU Clock: 405MHz
Memory Clock: 324MHz
State 1 (highest energy use)
GPU Clock: 783MHz
Memory Clock: 1.80GHz
After checking in Windows 7 which state the nVidia card runs in, I found out that it is usually state 3. In OS X, however, I could only get smooth animations if the GPU was running in state 2.
My theory is that OS X simply needs more 'power' because it is running on OpenGL and there are way more animations (enabled) than on Windows.
I also checked how my card was running on Linux (KDE). When using the interface there, I also found that my card was in state 2 more than in state 1.
To set the correct clock speeds based on load, editing AGPM.kext was needed, since they are undefined for my GPU.
After searching through some posts, I found out that some users were using all 4 'load fields', while my Fermi card only has 3.
Also, some users simply tried to disable the last two by setting really high values. This should prevent the GPU from hitting state 3.
Some users claim that state 3 is causing the freezes and the slow interface. My theory is that the GPU doesn't switch to state 2 when needed: either it takes too much time for the GPU to reach state 2, or the GPU simply doesn't like being in state 2 all the time for some reason.
To make a long story short: I wanted a smooth interface, but I also didn't want the GPU to be fully loaded all the time (which would give me a high energy bill).
At the moment I'm using iMac12,2 as my model, because my i5-2400 seems to be the CPU used in that model.
Setting this model also got AGPM.kext loaded.
These are the values I have chosen for my GPU (device id 0dc4):
<key>Vendor10deDevice0dc4</key>
<dict>
    <key>BoostPState</key>
    <array>
        <integer>0</integer>
        <integer>1</integer>
        <integer>2</integer>
    </array>
    <key>BoostTime</key>
    <array>
        <integer>2</integer>
        <integer>2</integer>
        <integer>2</integer>
    </array>
    <key>Heuristic</key>
    <dict>
        <key>ID</key>
        <integer>0</integer>
        <key>IdleInterval</key>
        <integer>200</integer>
        <key>SensorOption</key>
        <integer>1</integer>
        <key>TargetCount</key>
        <integer>1</integer>
        <key>Threshold_High</key>
        <array>
            <integer>70</integer>
            <integer>87</integer>
            <integer>100</integer>
        </array>
        <key>Threshold_Low</key>
        <array>
            <integer>0</integer>
            <integer>60</integer>
            <integer>92</integer>
        </array>
    </dict>
    <key>LogControl</key>
    <integer>0</integer>
    <key>control-id</key>
    <integer>17</integer>
</dict>
As you can see (or rather can't, since it's gone), I have simply removed the last entry (index 3) from each array. My GPU is now clocking as it should, and I can enjoy movies again.
The noise from the GPU fan also seems lower than it normally would be, since the fan is controlled by the GPU clock as well.
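If you don't want to hand-edit the XML, the same values can be merged in with a small script. Below is a minimal Python sketch (my own helper, nothing official): the kext path and the IOKitPersonalities > AGPM > Machines layout are assumptions based on the AGPM.kext I have here, so check your own Info.plist first and keep a backup.

```python
import plistlib

# The values from the post above, expressed as a Python dict.
FERMI_CONFIG = {
    "BoostPState": [0, 1, 2],
    "BoostTime": [2, 2, 2],
    "Heuristic": {
        "ID": 0,
        "IdleInterval": 200,
        "SensorOption": 1,
        "TargetCount": 1,
        "Threshold_High": [70, 87, 100],
        "Threshold_Low": [0, 60, 92],
    },
    "LogControl": 0,
    "control-id": 17,
}

def patch_agpm(plist: dict, model: str, device_key: str, config: dict) -> dict:
    """Merge `config` under IOKitPersonalities > AGPM > Machines > `model`."""
    machines = plist["IOKitPersonalities"]["AGPM"]["Machines"]
    machines.setdefault(model, {})[device_key] = config
    return plist

# Demo on an in-memory skeleton instead of the real file:
skeleton = {"IOKitPersonalities": {"AGPM": {"Machines": {}}}}
patched = patch_agpm(skeleton, "iMac12,2", "Vendor10deDevice0dc4", FERMI_CONFIG)
print(sorted(patched["IOKitPersonalities"]["AGPM"]["Machines"]["iMac12,2"]))

# For the real file (assumed path; run as root, back it up first, and rebuild
# the kext caches afterwards or the change won't take effect):
#
# path = ("/System/Library/Extensions/AppleGraphicsPowerManagement.kext"
#         "/Contents/Info.plist")
# with open(path, "rb") as f:
#     data = plistlib.load(f)
# patch_agpm(data, "iMac12,2", "Vendor10deDevice0dc4", FERMI_CONFIG)
# with open(path, "wb") as f:
#     plistlib.dump(data, f)
```

Note that plistlib.load/dump need Python 3.4+; on the system Python 2.x that ships with OS X of this era, the equivalent calls are plistlib.readPlist/writePlist.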
The BoostPState and BoostTime keys simply define which state to use when an application needs more 'power'.
SensorOption seems to be needed to read out the GPU status.
TargetCount should be the state the GPU should focus on.
At least, that's what I think; please let me know if you have more/better information.
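To make the threshold idea concrete, here is how I picture the Threshold_High/Threshold_Low pairs working: a simple hysteresis controller that steps one entry at a time. This is purely my mental model (Apple's actual heuristic isn't documented), with index 0 as the lowest-clocked entry and index 2 as the highest:

```python
# Values from my plist above.
THRESHOLD_HIGH = [70, 87, 100]
THRESHOLD_LOW = [0, 60, 92]

def next_state(state: int, load: int) -> int:
    """Step one threshold entry up or down based on GPU load (percent)."""
    if load > THRESHOLD_HIGH[state] and state < len(THRESHOLD_HIGH) - 1:
        return state + 1   # load above the high threshold: step up
    if load < THRESHOLD_LOW[state] and state > 0:
        return state - 1   # load below the low threshold: step down
    return state           # inside the hysteresis band: stay put

# Walk a load trace: idle -> browsing -> benchmark -> idle again
state = 0
trace = []
for load in [2, 5, 80, 95, 99, 50, 30, 2]:
    state = next_state(state, load)
    trace.append(state)
print(trace)  # -> [0, 0, 1, 2, 2, 1, 0, 0]
```

With these numbers, the card has to push past 87% load to reach the top entry, but only leaves it again once load falls below 92%, which (in my reading) is what keeps it from bouncing between states.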
I would like to see some (test) results from other Fermi-card users. That's why I posted this as a new topic.
But please know that doing this is at your own risk!
As 'proof' please check the following pictures:
Idle (4% load or less):
Schermafbeelding 2013-04-22 om 20.11.13.png
Browsing/doing stuff: (more than 4% load)
Schermafbeelding 2013-04-22 om 20.11.20.png
Benchmarking: (more than 45% load)
Schermafbeelding 2013-04-22 om 20.11.41.png