Search the Community: Showing results for tags 'fermi'.

Found 6 results

  1. All GF100 Graphics Cards
     1. Just drop GeForce-GF100-Series.kext into EFI ▸ CLOVER ▸ kexts ▸ Other
     2. Replace the 10.14 CoreDisplay with the one from 10.13.4 in System ▸ Library ▸ Frameworks ▸ CoreDisplay.framework ▸ Versions ▸ A
     I made this kext for GF100 graphics cards. Finally, all GeForce and NV kexts are loaded. Attachments: GeForce-GF100-Series.kext.zip, CoreDisplay
  2. I've got Sierra installed w/ Clover on my HP Z1 AIO workstation. Without video acceleration the internal screen is recognized just fine. With Inject NVidia set to true I get video acceleration, but only the external monitor works. I got a ROM dump for this card from techpowerup.com and also dumped the ROM myself; both produce the same NVCAP in NVCAP Maker:
     0400000000000700000000000000000700000000
     Using this NVCAP doesn't change the results: still no internal display signal, and it actually blacks out both displays. For completeness, here is the original NVCAP from IOJones:
     0400000000000100fe0000000000000700000000
     Any tips on how to tweak this to get the internal display to work? I'm not sure if the panel is LVDS or eDP; does that make a difference? I have a source for a cheap Kepler-based K2000M that would work in this machine. Any idea whether that would be more likely to work?
     System specs: HP Z1 AIO Workstation (gen 1), Intel Xeon E3-1245 V2, 8GB DDR3, Quadro 1000M (MXM card)
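Since the two NVCAP strings in the post differ in only a couple of places, it can help to diff them byte by byte before tweaking. A minimal Python sketch of that (the meaning of individual NVCAP bytes is reverse-engineered and not asserted here; this only shows where the two dumps disagree):

```python
# Compare the two NVCAP values from the post byte by byte.
nvcap_maker = "0400000000000700000000000000000700000000"  # from NVCAP Maker
nvcap_orig = "0400000000000100fe0000000000000700000000"   # original, via IOJones

def nvcap_bytes(s):
    """Split a 40-hex-digit NVCAP string into its 20 byte values."""
    return [int(s[i:i + 2], 16) for i in range(0, len(s), 2)]

a, b = nvcap_bytes(nvcap_maker), nvcap_bytes(nvcap_orig)
diffs = [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]
for i, x, y in diffs:
    print(f"byte {i}: maker=0x{x:02x} original=0x{y:02x}")
```

Here the two dumps differ only at bytes 6 and 8, so those are the fields any manual tweak would actually be touching.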
  3. I'm running a GTX 580 on my machine, and picked up a second monitor recently. I noticed that when I plug in the second monitor, my GPU goes all the way up to max speed even if I'm just browsing, listening to music, etc. I've heard that this is "by design" (naturally, rendering two monitors takes some more power), but I didn't think it would burn that much power, and the clock speeds remain stuck after unplugging the second monitor. How can I return the clock speed to normal? For reference, here's my current AGPM config:
     <key>Vendor10deDevice1080</key>
     <dict>
         <key>Heuristic</key>
         <dict>
             <key>ID</key>
             <integer>0</integer>
             <key>IdleInterval</key>
             <integer>250</integer>
             <key>SensorOption</key>
             <integer>1</integer>
             <key>SensorSampleRate</key>
             <integer>4</integer>
             <key>TargetCount</key>
             <integer>5</integer>
             <key>Threshold_High</key>
             <array>
                 <integer>25</integer>
                 <integer>75</integer>
                 <integer>90</integer>
                 <integer>100</integer>
             </array>
             <key>Threshold_Low</key>
             <array>
                 <integer>0</integer>
                 <integer>97</integer>
                 <integer>97</integer>
                 <integer>98</integer>
             </array>
         </dict>
         <key>LogControl</key>
         <integer>1</integer>
         <key>control-id</key>
         <integer>18</integer>
     </dict>
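For anyone puzzling over what those threshold arrays do: Apple's actual AGPM heuristic is closed and undocumented, but the paired Threshold_High/Threshold_Low arrays are commonly read as a hysteresis controller. A hypothetical Python sketch of that reading (the state indexing and comparison direction are my assumptions, not confirmed driver behavior):

```python
# Hypothetical hysteresis model of AGPM's Threshold_High / Threshold_Low pairs.
# Assumption: state 0 is the slowest and the last state the fastest; utilization
# above Threshold_High[state] promotes, below Threshold_Low[state] demotes.
THRESH_HIGH = [25, 75, 90, 100]  # values from the GTX 580 config above
THRESH_LOW = [0, 97, 97, 98]

def step(state, util):
    """Return the next P-state for one utilization sample (0-100)."""
    if util > THRESH_HIGH[state] and state < len(THRESH_HIGH) - 1:
        return state + 1  # busy: promote to a faster state
    if util < THRESH_LOW[state] and state > 0:
        return state - 1  # idle: demote to a slower state
    return state

# A burst of load followed by idle: the state ramps up, then falls back.
state, trace = 0, []
for util in [10, 30, 80, 95, 95, 5, 5, 5]:
    state = step(state, util)
    trace.append(state)
print(trace)  # [0, 1, 2, 3, 2, 1, 0, 0]
```

This is only meant to make the role of the two arrays concrete; the real driver also weighs IdleInterval, SensorSampleRate, and TargetCount, which this toy model ignores.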
  4. Alright. I have a PNY GeForce GT 630, as stated in the title. I got the card a few weeks ago and switched from a Radeon 4450 (or 4550... doesn't matter). Naturally, I had to remove the GraphicsEnabler=Yes flag, and the computer ran great. Eventually I installed the CUDA driver and it seemed to do its thing correctly. All was good.
     One day, I booted to a black screen in the morning. I wasn't sure why, so I tried again and manually added GraphicsEnabler=No. It booted fine. ...ok! For almost a week the machine would boot just fine if I did a "restart" (as opposed to a power-off shutdown), and sometimes would boot fine if I shut it down and turned it back on promptly. Otherwise, I would have to manually enter GraphicsEnabler=No at my Chameleon prompt. When I checked Chameleon Wizard's bdmesg, it showed that I was booting with GraphicsEnabler=No TWICE at this point (I had added it manually to the .plist and then, as stated, would have to manually type it in at boot in the mornings, etc.). Eventually this stopped working and I was magically able to limp along with the same methods, except adding PCIRootUID=1 every boot. Adding this to the .plist, just like GraphicsEnabler, would show that the machine booted with the flag, but it would only work if I manually entered it in Chameleon when selecting the drive at boot. A few days after that, I was absolutely unable to boot no matter what. Black screen.
     I could VNC into the machine and see the desktop. According to Kext Wizard, the nVidia kexts are in fact being loaded. In System Information > Graphics/Displays, my card shows up as "NVIDIA Chip Model", device ID 0x0f00, 128MB. I don't know if I have any hardware acceleration at this point, since I'm only VNCing into the machine and it's a little slow and crappy anyway. I don't seem to get the glitching that would be apparent if I had booted with nv_disable.
     Oh yeah, nv_disable will boot fine, as you might expect, with the subsequent crappy graphics performance and glitching and all of that. Of course this makes sense, as the nVidia drivers are not loading. I currently run the machine on my old ATi 4870, which works just great with GraphicsEnabler=Yes. A little old for my needs, and it also gets hot and sucks a lot of power (I think it's around a 200W card or something silly!).
     I have tried nearly every combination of the following boot flags: GraphicsEnabler=No (and Yes), PCIRootUID=1 and 0, PciRoot=1 and 0, npci=0x2000 and 0x3000 (whatever those were). I made a flash drive with Clover on it and was able to boot to the same black screen. I kept my Yosemite USB stick and never touched it after I first successfully installed, back when I was using the Radeon 4450. I cannot boot from this USB stick using any combination of the above flags either. I plan on making a Mavericks stick with MyHack soon and seeing if I can get that to boot.
     There is a thread on Hackintosh.Zone (I hope it is ok that I link there) where a person has the EXACT same problem with the same computer (well, a Precision T3400 at least), except the thread was never resolved (heck, nobody but him/her replied) and they stopped posting at the point where the boot flags seemed to be completely random as to which one would work on any given day. https://www.hackintosh.zone/hackintosh-topic/2474-must-always-type-graphicsenablerno-to-boot-even-though-its-in-plist/ They, however, were using an old Nvidia Quadro 1700, which was based on the 8600GT: a very old card with completely different drivers as far as I am concerned. I assume they were running Mavericks (April 2014). Any ideas!? This seems to be an undocumented problem; I've been searching around for over a week now while banging my head against the desk or giving up and using my old Radeon.
     My computer: Dell Precision T3400, Intel X38 chipset, Xeon X3363 (with 771->775 conversion), 6GB DDR2, Yosemite 10.10.2.
     Everything ran totally perfectly, as I mentioned, for a while. I have looked through my BIOS and found no settings that would seem to hint toward solving the problem. There are no onboard graphics, and the only video setting in the BIOS is whether to prefer PCIe or PCI (both options of which I have tried). I reset the BIOS many times. The BIOS clock does set itself to GMT time (I think; 6 hours ahead of my USA Central time, which is GMT-6 I believe) whenever I shut the computer off. The ONLY thing that I can remember changing during the general time period when it first stopped booting was that I added a second hard drive (the main is 500GB and the additional was another 500GB) so that I could run a 100% Time Machine backup. I am so lost.
  5. After some research (took me a year) I think I finally found a solution to get a smoother interface. I don't know if this also fixes "Channel Timeout" errors; more testing is needed.
     Some history: After reading several topics/posts about the power states used on GeForce cards, I found out that the Fermi series only has 3 P-states instead of 4 (or more). P-states clock a GPU down or up as needed (the state number goes down when more 'power' is needed, and up when the GPU is mostly idle). You can check your clocks by running the tool nVidia Inspector in Windows. My EVGA GeForce GTS 450 (1GB) has the following clock speeds:
     State 3 (lowest energy use): GPU Clock 50MHz, Memory Clock 324MHz
     State 2 (mid energy use): GPU Clock 405MHz, Memory Clock 324MHz
     State 1 (highest energy use): GPU Clock 783MHz, Memory Clock 1.80GHz
     After looking in Windows 7 at what state the nVidia card runs in, I found that this is usually state 3. In OS X, however, I could only get smooth animations if the GPU was running in state 2. My theory is that OS X simply needs more 'power' because it is running on OpenGL and there are way more animations (enabled) than on Windows. I also checked how my card was running on Linux (KDE). When using its interface I found my card was more often in state 2 than in state 1.
     Editing AppleGraphicsPowerManagement.kext: To set the correct clock speeds based on load, editing AGPM.kext was needed, since they are undefined for my GPU. After searching through some posts, I found that some users were using all 4 'load fields' when a Fermi card only has 3. Also, some users simply tried to disable the last two by setting really high values; this should prevent the GPU from hitting state 3. Some users claim that state 3 is causing the freezes and the slow interface. My theory is that the GPU doesn't switch to state 2 when needed.
     It either takes too much time for the GPU to reach state 2, or the GPU simply doesn't like being in state 2 all the time for some reason. To make a long story short: I wanted a smooth interface, but I also didn't want the GPU fully loaded all the time (giving me a high energy bill). At the moment I'm using iMac12,2 as the model, because my i5-2400 seems to be used in that model; setting this model also makes AGPM.kext load. These are the values I have chosen for my GPU (device id 0dc4):
     <key>Vendor10deDevice0dc4</key>
     <dict>
         <key>BoostPState</key>
         <array>
             <integer>0</integer>
             <integer>1</integer>
             <integer>2</integer>
         </array>
         <key>BoostTime</key>
         <array>
             <integer>2</integer>
             <integer>2</integer>
             <integer>2</integer>
         </array>
         <key>Heuristic</key>
         <dict>
             <key>ID</key>
             <integer>0</integer>
             <key>IdleInterval</key>
             <integer>200</integer>
             <key>SensorOption</key>
             <integer>1</integer>
             <key>TargetCount</key>
             <integer>1</integer>
             <key>Threshold_High</key>
             <array>
                 <integer>70</integer>
                 <integer>87</integer>
                 <integer>100</integer>
             </array>
             <key>Threshold_Low</key>
             <array>
                 <integer>0</integer>
                 <integer>60</integer>
                 <integer>92</integer>
             </array>
         </dict>
         <key>LogControl</key>
         <integer>0</integer>
         <key>control-id</key>
         <integer>17</integer>
     </dict>
     As you can see (or can't), I have simply removed the last row (aka row 3). My GPU is clocking as it should and I can enjoy movies again. The noise from the GPU fan also seems lower than it normally would be, since the fan is also controlled by the GPU clock.
     Setting details: I have lowered the IdleInterval from 250 to 200. It seems that at 250 it takes too long for the GPU to switch to a higher or lower state. My theory is that the 250 value is meant for GFX0 (aka the nVidia/ATI GPU) because it 'only' needs to render a game or other high-load application, while the Intel GPU's IdleInterval of 100 exists because more/faster switching is needed for 'normal' use.
     The BoostPState and BoostTime simply define which state to use when an application needs more 'power'. The SensorOption needs to be set so the GPU status can be read out. The TargetCount should be the state the GPU should focus on. At least I think so; please let me know if you have more/better information.
     Testing: I would like to see some (test) results from Fermi card users; that's why I posted this as a new topic. But please know that doing this is at your own risk! As 'proof', please check the following pictures: Idle (4% load or less), Browsing/doing stuff (more than 4% load), Benchmarking (more than 45% load). AppleGraphicsPowerManagement.kext.zip
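Since the whole trick in this post is matching the array lengths to the card's 3 P-states, here is a small, hypothetical Python sketch (not the author's tool) that builds the same override with plistlib and checks that every array has exactly one entry per P-state:

```python
import plistlib

STATES = 3  # Fermi exposes 3 P-states, per the post

# The override dict mirrors the values from the post (device id 0dc4).
config = {
    "Vendor10deDevice0dc4": {
        "BoostPState": list(range(STATES)),  # [0, 1, 2]
        "BoostTime": [2] * STATES,
        "Heuristic": {
            "ID": 0,
            "IdleInterval": 200,
            "SensorOption": 1,
            "TargetCount": 1,
            "Threshold_High": [70, 87, 100],
            "Threshold_Low": [0, 60, 92],
        },
        "LogControl": 0,
        "control-id": 17,
    },
}

# Guard against the 4-entries-on-a-3-state-card mistake described above.
heur = config["Vendor10deDevice0dc4"]["Heuristic"]
assert len(heur["Threshold_High"]) == len(heur["Threshold_Low"]) == STATES

xml = plistlib.dumps(config).decode()  # XML ready to merge into AGPM's Info.plist
assert "<key>Vendor10deDevice0dc4</key>" in xml
print("override plist is", len(xml), "bytes")
```

Generating the fragment this way makes the array-length invariant explicit, which hand-editing the kext's Info.plist does not.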
  6. Hi community! Is anybody successful with 10.13.x and multiple GPUs on a desktop hack? My main rig (see signature), HD4600 + GTX550Ti + GTX560, is working perfectly under 10.11.6 and 10.12.6. Very simple to install: Intel IGPU set to active and primary in the BIOS, Inject Intel and ig-platform-id in the bootloader (Enoch or Clover), no nvidia injection, since Fermi GPUs are natively supported. SMBIOS set to iMac14,2 for my Haswell CPU, and to avoid AppleGraphicsDeviceControl unloading the devices. But from 10.13.0 to 10.13.3, the system boots fine until the login screen, where WindowServer crashes with a "no MTLDevice" error (MeTaL device, I guess). I could work around that error by unloading the nvidia devices in AppleGraphicsDeviceControl, but that kind of defeats the purpose, since only the Intel HD4600 can then be used. 10.13.4 brings a little improvement: the desktop can be reached with all 3 GPUs active (AppleDisplay instances appear in IORegistryExplorer for the Intel and the Nvidias), but only the Intel HDMI output shows the desktop correctly; both nvidias (HDMI or DVI output) only show a black desktop where the mouse pointer can be moved normally. Displays can be arranged, and orientation and resolution can be changed, as if everything were fine, but it is not. The Console shows the following repeated messages:
     WindowServer (SkyLight): Unable to composite display surface due to null compositor.
     WindowServer (CoreDisplay): [ERROR] - Attempting to get capabilities from capabilities with no devices
     Does anybody have an idea on how to avoid or work around these errors? Thanks! O.
     Things I've tried:
     • BIOS graphics device order changes (breaks everything unless the IGPU is primary)
     • nvidia injection in the bootloader (fails in Enoch; disables secondary nvidia GPU outputs in Clover)
     • SMBIOS changes and/or AppleDeviceControlPolicy plist edits (simply enables or disables nvidia outputs)
     • nvidia official or web drivers (no changes at all, even though the web drivers are properly loaded; requires NVRAM emulation with Enoch)
     • Enoch or Clover (no difference, except Clover boots a bit faster than Enoch with NVRAM, thanks to proper UEFI boot I guess)
     • Lilu Intel and/or nvidia and/or CoreDisplay graphics fixups (no visible changes except graphics device names, and thus enabling or disabling AGPM)
     • using the binaries from Sierra 10.12.6 for the CoreDisplay and/or SkyLight frameworks (never reached the desktop)
     • NVIDIAeGPUSupport (no changes)