I'm writing from a Mountain Lion 10.8 machine, installed on a P5Q-E with an EVGA GeForce GTX 560 Ti as the main video card.
The whole setup has been pretty flawless and easy. It's running mostly vanilla, with Chameleon 2042 as bootloader plus a few extra kexts: AD2000b, HDAEnabler and HDEFEnabler for audio, NullCPUPowerManagement, FakeSMC and SleepEnabler. I've also changed just a few bits in the vanilla AppleRTC (to avoid the CMOS reset on sleep), AppleYukon2 and GeForceGLDriver.bundle (to enable, respectively, the internal Ethernet interface and OpenCL on the GPU, which is officially available only on the GTX 570 and greater).
As the last post-installation steps, I supplied Chameleon with a MacPro3,1 SMBIOS, plus a patched DSDT (the ACPI shutdown/restart patch, found on olarila and applied to my BIOS DSDT dump) and the SSDT tables dumped with AIDA64, and finally installed the official NVIDIA CUDA package from the NVIDIA website.
The whole machine is running great: bloody fast, extremely snappy performance, with full QE/CI and GPU acceleration from the vanilla GeForce driver.
I've also installed the netkas' HWSensors package, enabling some plugins for the FakeSMC kext and the HWMonitor app, just to keep an eye on temperatures and CPU/GPU clocks.
So why have I written this whole wall of text?
Well, here comes the weird behaviour: after a cold boot or just a reboot, the whole CPU SpeedStep / GPU power management thing works without a hitch, scaling the CPU multiplier and/or the GPU clock up or down depending on what's requesting more power.
(A small premise: the GTX 560 Ti BIOS has 3 power levels, with 3 different GPU/VRAM/shader clocks and voltages; for the GPU core they are respectively 50MHz at full idle, 405MHz on medium load and 850MHz at full speed, see the attached screenshot for reference.)
The GPU, as I said before, is working correctly, throttling from 50MHz through 405MHz to 850MHz and back depending on the graphics load (e.g. the login cube rotation animation, or running a benchmark like Unigine Heaven). But when I leave the machine idling (locking the screen to the login page, shutting down the display and, well, walking away from it), some hours later, when I log back into the desktop, the GPU clock, according to HWMonitor, is locked at full speed (850MHz) and doesn't throttle back to 405 or even 50MHz, regardless of the actual GPU load at that moment.
I've tried starting a benchmark and suddenly quitting it to see if the clock would throttle back, and I've double-checked with Activity Monitor for a background/resident process (using the GPU) that could be responsible for this strange behaviour, but so far I haven't been able to find anything related: it just locks at full speed after some time idling without activity.
Any thoughts on what could be causing this strange behaviour (and maybe a solution)? Apart from this, the machine is running great, but this particular thing annoys me a lot because I don't want to waste extra energy (and then dissipate more heat) when it's unnecessary.
Could it somehow be related to the poor battery life some users are experiencing on their laptops under Mountain Lion, for instance? (I'm not really sure about that, but if a laptop GPU does the same, the energy consumption is higher and therefore the battery life shorter.)
Or could it just be a wrong reading from the GeForceX plugin of FakeSMC?
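One way to rule out a bad sensor reading would be to cross-check HWMonitor against the GPU's performance statistics exposed in the I/O Registry via `ioreg`. A minimal sketch follows; note that the dictionary/key names ("PerformanceStatistics", "Core Clock(MHz)") are an assumption based on what NVIDIA drivers typically publish, and may differ between driver versions:

```shell
#!/bin/sh
# parse_core_clock: pull the numeric value out of a "Core Clock(MHz)"=NNN
# entry, as published in the driver's PerformanceStatistics dictionary.
# (Key name is an assumption; adjust to whatever your ioreg dump shows.)
parse_core_clock() {
  grep -o '"Core Clock(MHz)"=[0-9]*' | cut -d= -f2
}

# On the live machine you would feed it the I/O Registry dump, e.g.:
#   ioreg -l -w0 | parse_core_clock
# and compare the result with what HWMonitor reports.

# Demo with a captured sample line (hypothetical values):
sample='"PerformanceStatistics" = {"Core Clock(MHz)"=850,"Memory Clock(MHz)"=2004}'
echo "$sample" | parse_core_clock
# prints 850
```

If ioreg still reports 850MHz while idle, the clock really is stuck and FakeSMC is off the hook; if ioreg shows 50MHz, the GeForceX plugin is misreading.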
Thank you very much
//edit: I've attached some screenshots with the clocks table and some measurements taken with GPU-Z; I think the middle one (light load) is what's happening under ML, but I can't explain to myself why.