About PoisonDrop

  • Rank
    InsanelyMac Protégé
  1. OK, so I have a semi-working setup... WebDriver is installed and the system loads it instead of the Apple driver (the Nvidia control panel doesn't say so, but it is listed in kextstat output). nvda_drv=1 seems to have no effect on which driver loads; however, changing this flag seems to trigger some magic in the graphics card. Now, each time I boot, I must toggle this flag for the driver to load successfully. I have tested this extensively over many reboots now. If I do not toggle the nvda_drv flag each time, the driver kernel panics. It seems like the card needs an explicit reset or something, and only a few things cause that to happen: one is booting with a different card, and another, as I have discovered, is toggling the nvda_drv flag. I can always boot using nv_disable, but the effect of that option never survives another reboot. I'd really like dual monitors and of course acceleration, so I guess it comes down to changing arguments on each boot if that's what the card needs. I could write a script that edits config.plist on startup to make sure the next boot is successful (a rough sketch of that idea is below), but that just feels sloppy and, quite frankly, a bit dangerous; I could probably mess up my config pretty easily if I'm not careful. So, I now have an additional question... I know there is an option in Clover to reset the audio on each reboot. Is there a similar option, or something I can add to DSDT, to reset the graphics card as well? Any help would be much appreciated!
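
    For anyone curious about the script idea above: here is a rough, purely illustrative sketch of how such a toggle could be done with CoreFoundation's plist API rather than hand-editing the XML. The config.plist path, the Boot -> Arguments key layout, and the toggle-on-every-boot approach are assumptions about a typical Clover setup, not something taken from my working configuration, so treat it as a starting point and keep a backup of config.plist before letting anything rewrite it.

        // Illustrative sketch only: toggle the nvda_drv=1 token in Clover's
        // Boot -> Arguments string. The path and key names are assumptions.
        #include <CoreFoundation/CoreFoundation.h>
        #include <fstream>
        #include <sstream>
        #include <string>

        int main() {
            const char *kConfigPath = "/Volumes/EFI/EFI/CLOVER/config.plist"; // assumed mount point

            // Read the raw plist bytes from disk.
            std::ifstream in(kConfigPath, std::ios::binary);
            if (!in) return 1;
            std::stringstream buf;
            buf << in.rdbuf();
            std::string raw = buf.str();

            CFDataRef data = CFDataCreate(kCFAllocatorDefault,
                                          reinterpret_cast<const UInt8 *>(raw.data()),
                                          static_cast<CFIndex>(raw.size()));
            CFMutableDictionaryRef root = (CFMutableDictionaryRef)CFPropertyListCreateWithData(
                kCFAllocatorDefault, data, kCFPropertyListMutableContainersAndLeaves, NULL, NULL);
            CFRelease(data);
            if (!root) return 1;

            // Clover keeps boot arguments under Boot -> Arguments (assumed layout).
            CFMutableDictionaryRef boot =
                (CFMutableDictionaryRef)CFDictionaryGetValue(root, CFSTR("Boot"));
            CFStringRef argsRef = boot ? (CFStringRef)CFDictionaryGetValue(boot, CFSTR("Arguments")) : NULL;
            if (!argsRef) { CFRelease(root); return 1; }

            char argsBuf[512] = {0};   // a real script would size this dynamically
            CFStringGetCString(argsRef, argsBuf, sizeof(argsBuf), kCFStringEncodingUTF8);
            std::string args(argsBuf);

            // Toggle the flag: remove it if present, append it if absent.
            const std::string flag = "nvda_drv=1";
            size_t pos = args.find(flag);
            if (pos != std::string::npos) {
                args.erase(pos, flag.size());
            } else {
                if (!args.empty() && args.back() != ' ') args += ' ';
                args += flag;
            }

            CFStringRef newArgs = CFStringCreateWithCString(kCFAllocatorDefault,
                                                            args.c_str(), kCFStringEncodingUTF8);
            CFDictionarySetValue(boot, CFSTR("Arguments"), newArgs);
            CFRelease(newArgs);

            // Serialize back to XML and overwrite the original file.
            CFDataRef out = CFPropertyListCreateData(kCFAllocatorDefault, root,
                                                     kCFPropertyListXMLFormat_v1_0, 0, NULL);
            std::ofstream outFile(kConfigPath, std::ios::binary | std::ios::trunc);
            outFile.write(reinterpret_cast<const char *>(CFDataGetBytePtr(out)),
                          CFDataGetLength(out));
            CFRelease(out);
            CFRelease(root);
            return 0;
        }

    Personally I'd still rather find a proper reset option than run something like this on every boot.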
  2. Funny... somehow I managed to get the WebDriver to actually load. Got one solid boot without issue: again, dual monitors and full acceleration. Decided to push my luck and rebooted. Now the web version of the same driver constantly kernel panics; same message, but the driver name has "Web" appended to it. It looks like it must be a setup issue and not the driver itself, since both Apple's version and Nvidia's version are doing the same thing. I don't see where the setup could be wrong, since 1) I have tried manually changing every accessible setting in the book, and 2) reverting to a known working setup has no effect. EDIT: Even funnier, now I get ONE SINGLE BOOT where the card works great... no matter what options I choose... then the issue happens. But now I can toggle the nvda_drv=1 flag at boot and the issue will resolve itself until the next reboot. So if I get a GOOD boot with nvda_drv=1 set, then the next time the boot fails, all I have to do is REMOVE nvda_drv=1 and it will boot the next time. Then graphics predictably die again, so I just ADD BACK nvda_drv=1 and it boots like normal. WHAT IS HAPPENING?? This is the strangest thing I have ever seen. I figure there is some setting that is causing the panic, and only certain config changes can reset it (like swapping out the card or messing with the nvda_drv flag)... I hope there is someone out there who understands the boot process a little better and can point me in the right direction...
  3. Thanks for the link! Just tried it out... downloaded and ran it, and installed the latest driver. No change. The OS is fully upgraded, so the build should match... as such, I did not use the build patch. Is that OK? I used the same app to create/install NVEnabler.kext, and this is where it gets interesting. Using the enabler kext WITHOUT injection from Clover causes WebDriver to KP. Using the enabler kext IN ADDITION TO injection from Clover produces a pretty colorful and fuzzy display of lines on the screen; no panic there, though. This is weird, since I was under the impression that one must use EITHER injection OR the enabler kext, not both. Thank you again for the link. I may play around with NVEnabler a little more to see what happens. Either way, it still doesn't get me to the desktop... Any other suggestions?
  4. Upgrading a hack for the wife. The specs of her system are listed below (it is not the system in my sig):

    ASUS P5Q
    Core2 Quad Q8200 @ 2.33 GHz
    6 GB Corsair DDR2
    Zotac GeForce 8400GS 256 MB

    I would like to start by saying that the initial installation works perfectly. Setting Nvidia injection to true is all that is needed; Clover picks the correct NVCAP etc. on its own. BOTH monitors are fully functional from the moment the installer boots, with full acceleration and everything. Here is what I am doing from the installer:

    - format the target drive
    - perform a clean install (the OS lives on its own drive)
    - boot from USB to complete the installation, answer all the questions, etc.
    - install Clover to the hard drive
    - copy the (known working) config.plist/kexts from the USB drive
    - eject the USB installer and reboot

    At this point El Capitan boots just fine. Dual monitors, full acceleration, no lag or issues whatsoever. I even reboot a few times to make sure everything comes back without any issues. I spent a few hours watching Netflix the first time I installed the system. NO ISSUES AT ALL. Then my wife tells me it won't boot for her. I take a look, and sure enough, kernel panic from the GeForce driver. She swears she didn't touch anything (she wouldn't know how to anyway). So I wiped the drive, reinstalled, and got it working again... some time goes by and then... same thing. It seems that I get a few good boots out of the system, but then graphics just quit working. I can reboot 1000 times after that with no change. It acts like it never worked in the first place. Same result if I use the USB installer to boot the hard drive. The only way I can get back into the system is by using nv_disable=1. I would like to take this opportunity to say once again that I am not touching ANYTHING in any way between reboots. I have tested this issue by starting from a clean install and literally just sitting there clicking the reset button on the login screen until the graphics die. It works for a completely random number of reboots and then nothing but the KP. I have tried the following to attempt to restore normal boot/graphics from this point:

    - reinstalled Clover
    - reinstalled the system
    - installed the Nvidia web driver (even though my card isn't listed, but worth a try anyway)
    - replaced config.plist with a known working config
    - replaced kexts with known working versions (only FakeSMC and Ethernet are needed for this setup)
    - removed the Ethernet kext just in case
    - tried different versions of FakeSMC
    - removed EVERYTHING plugged into the PC except the GeForce and KB/mouse
    - deleted and replaced the entire EFI folder where Clover's config lives
    - reset the NVRAM (used the nvram command, hope that's the correct way?)
    - rebuilt kernel caches
    - deleted all startup caches
    - touched S/L/E and let the system rebuild caches
    - reset the BIOS (even though I never touched it in the first place)
    - tried changing about every config.plist option known to man

    I have another system I mess around with as well. I can pop the hard drive into that one or into my own machine (in my sig) and it boots without issues. Rightfully so, since there are no GeForce cards in those, but it does prove the install isn't corrupt or anything. There is only one fix I have found that DOESN'T involve nv_disable, and it is a strange one: if I swap the graphics card for an old ATI I have lying around, boot the system just ONCE, shut down, and swap the GeForce BACK in, everything works again just like in the beginning... dual monitors, acceleration, etc.

    Of course, this only lasts a few reboots, and then the process starts all over again. I am thinking that there is some sort of low-level setting that is getting changed or saved automatically, and that once it is set it crashes the graphics kext. When I swap cards, whatever is wrong gets reset or overwritten and all is well for a few more boots. I feel like the EFI partition is the most likely culprit, since a clean OS install doesn't fix the issue. Other than swapping cards back and forth, I have to completely wipe the drive (including the EFI partition) to get a normal boot sequence again. At this point, I am going crazy trying to figure this out. I am aware that this is an extremely old card which I probably shouldn't be using in the first place. But since it works flawlessly for a few runs, it makes me think that it is indeed possible to use it. I know I could get a new and better card for the machine, but I am cheap and would REALLY love to put off that upgrade for a bit... So, does anyone have any ideas as to why graphics work great for a few boots and then, out of nowhere, I get a kernel panic from the GeForce drivers? Does anyone know of any EFI settings I could be missing, and how I might go about accessing or clearing those settings? Thank you in advance!
  5. [Guide] Getting XFX HD6850 connectors to work

    Just an update, if not for anyone else, then just for the sake of remembering how I got this working again! As of 10.8.4, my Gigabyte 6850 OC works OOB, including triple monitors with an active adapter. The only issue is the framebuffer selection. This card requires Duckweed (and AtiPorts=4). Currently, it appears that the Duckweed personality needs NO editing to run this card. I have found that framebuffer selection varies depending on which bootloader is used. I am using Chameleon r2069 now and I have to manually set it to Duckweed. Previous Chameleon versions selected Duckweed on their own. I don't mess around with my OS drive that often, so when this issue popped up, it had me stumped for a bit. Hopefully I will remember to re-read this post down the line and save myself a few headaches during the next clean install...
  6. [Guide] Getting XFX HD6850 connectors to work

    Just thought I'd let everyone know... I have a Gigabyte HD6850 OC. I just upgraded to Mountain Lion 10.8.2 and this card appears to work OOB. The big news is that the default Radeon framebuffer appears to be fixed. I have full acceleration, QE/CI and all that, on ALL THREE monitors with NO personality editing whatsoever (I had originally edited Duckweed). There is NO GraphicsEnabler, AtiConfig, or AtiPorts in my boot.plist. Oh, and I'm using Chameleon 2.1 r1830, and that is definitely important: the latest version won't boot with this configuration. Bottom line: if you have the newest Mountain Lion and are having trouble with your 6xxx, try reverting to the vanilla ATI kexts and using a PREVIOUS Chameleon. Worked great for me!
  7. [Guide] Getting XFX HD6850 connectors to work

    Just thought this may help someone... I have this card: http://www.gigabyte.us/products/product-page.aspx?pid=3630#ov It is a Gigabyte 6850 OC. The connectors are the same as on the XFX in this guide. After booting Windows and doing the dump, I realized that EVERYTHING else is the same too; output from the BIOS decoder was identical. I am using Mountain Lion... I just edited the new version of the driver, with the final result at the end of the first post. Works flawlessly, even triple monitors with an active adapter. If anyone is wondering, Mountain Lion brought a revision to the ATI kexts, which is the only reason I had to edit anything. If you have regular Lion, the kext provided by the OP will probably work just fine. EDIT: On another note, Chameleon loads Bulrushes for this card by default, even though it only has 4 connectors. I had to use AtiConfig to load the correct framebuffer. And if the DisplayPort screen looks funny on boot, adding a Graphics Mode to the boot plist will fix it.
  8. Help with IOMemoryMap

    Yeah, the structure in my code above was from Apple's example. I used it just for speed, because my real structure is VERY long... it would have polluted my message. The actual data I'm trying to read is 1 byte long, as my structure defines. Anyway, I have figured out my problem, and I will clarify it here in case anyone else runs into this issue. My original code was actually correct, and was actually working. The problem was that the device I was trying to work with doesn't support memory-mapped I/O (I read the data sheet and figured that out). I tried the same operation (using the code from my first post) on a different device that I know supports memory-mapped I/O, and it worked. So the Apple example of IOMemoryMap was correct, and this was all for nothing. Although I'm extremely disappointed, at least now I know. Thank you again to all for your help!
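
    In case it saves someone else the data-sheet dig, here is a small, hedged sketch of how you could check from the driver itself whether BAR0 is actually a memory BAR before trying to map it. "Device" is assumed to be the same IOPCIDevice pointer used in my earlier code; nothing here is specific to my hardware.

        // Sketch: distinguish a port-I/O BAR from a memory BAR before mapping it.
        // "Device" is an IOPCIDevice *, as in the code posted earlier in the thread.
        #include <IOKit/pci/IOPCIDevice.h>

        static bool bar0IsMemorySpace(IOPCIDevice *Device)
        {
            // Per the PCI spec, bit 0 of a Base Address Register reads back as
            // 1 for an I/O-space BAR and 0 for a memory-space BAR.
            UInt32 bar0 = Device->configRead32(kIOPCIConfigBaseAddress0);
            return (bar0 & 0x1) == 0;
        }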
  9. Help with IOMemoryMap

    OMG, thank you! I actually get a value other than 0x00 with that code! That's amazing (sorry, I've been so frustrated that I'm amused by the simplest things). OK, so your code reads my test register as 0x73, but ioRead reads the same register as 0xef. Would THAT be an endianness problem? If so, at least I'm getting something, and endianness is an easy fix. Thanks again!
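
    A quick follow-up thought for anyone reading along: if two read paths return different values, one sanity check is whether they differ only by byte order. A tiny sketch (assuming the values are widened to 32 bits; for a genuinely 1-byte register like mine, byte order alone can't explain a mismatch):

        // Sketch: check whether two register reads differ only by byte order.
        // If swapping one reproduces the other, the mismatch is endianness; if not,
        // something else (read side effects, wrong offset, etc.) is going on.
        #include <libkern/OSByteOrder.h>
        #include <IOKit/IOTypes.h>

        static bool differsOnlyByByteOrder(UInt32 mappedValue, UInt32 ioReadValue)
        {
            return mappedValue == OSSwapInt32(ioReadValue);
        }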
  10. Help with IOMemoryMap

    IORead32 gives 0x000000fe (the correct value). I just don't want to do all those reads/writes or have all those extra variables to operate on. Apple makes it sound like the OS takes care of endianness; either way, if that were the issue, it would at least read SOMETHING other than 0x00000000. Apple makes it sound so simple: it should be like mapping two pointers to the same address, so that PCIRegisters points into the same address space as the PCI I/O address space. At least that's what I thought an IOMemoryMap was for. Any suggestions?
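
    For the record, this is the kind of thing I was picturing: reading straight off the mapped virtual address, without a pile of ioRead calls or temporary variables. Only a sketch of the idea, and it assumes the BAR really is memory-mapped:

        // Sketch: read a register directly through the IOMemoryMap, no ioRead* call.
        // OSReadLittleInt32 does a 32-bit access at base + byteOffset and byte-swaps
        // on big-endian hosts, so PCI's little-endian register layout is preserved.
        #include <libkern/OSByteOrder.h>
        #include <IOKit/IOMemoryDescriptor.h>

        static UInt32 readMappedReg32(IOMemoryMap *PCIMemoryMap, uintptr_t byteOffset)
        {
            const volatile void *base =
                reinterpret_cast<const volatile void *>(PCIMemoryMap->getVirtualAddress());
            return OSReadLittleInt32(base, byteOffset);
        }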
  11. Help with IOMemoryMap

    Actually, I am very familiar with pointers. PCIRegisters is a pointer to an address in memory, and that address should be the PCI card's memory-mapped registers. At the address the pointer PCIRegisters references is data laid out as a structure of type PCIRegisterType. Therefore I should be able to access it like any other structure pointer: StructureName->Variable // to access the value of Variable. Your code refers to Reg1 as if it were a separate variable whose value is an offset from PCIRegisters; in my program, Reg1 is a member variable of the PCIRegisterType structure. Maybe it is structures that I need to understand better? Either way, I tried your code anyway and got the error "Reg1 not declared in this scope", which was what I expected. But thanks anyway! So can anyone else share what needs to be done? To reiterate, I need to access a block of mapped memory, and I would like to do so using a structure (a class would be OK too, though in my opinion a class would be overkill, since there are no functions). PCIRegisters->Reg1 = 0x00; is how I would like to access this block of memory. I know at this point it sounds like a dumb question, but I'm really stuck here. My pointer method (above) does not work for some reason.
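
    To make the struct idea concrete, here is a small sketch of the pattern I'm after, with the two details people usually point out for this kind of code: the pointer should be volatile-qualified so every access really hits the hardware, and the struct should be packed so member offsets match the device's register map. Register names and offsets here are placeholders, not my actual device.

        // Sketch: overlay a packed struct on the mapped BAR and access registers
        // through a volatile pointer. Names and offsets are placeholders.
        #include <IOKit/IOMemoryDescriptor.h>

        struct PCIRegisterType {
            UInt32 Reg1;    // offset 0x00
            UInt32 Reg2;    // offset 0x04
        } __attribute__((packed));

        static void touchRegisters(IOMemoryMap *PCIMemoryMap)
        {
            volatile PCIRegisterType *PCIRegisters =
                reinterpret_cast<volatile PCIRegisterType *>(PCIMemoryMap->getVirtualAddress());

            PCIRegisters->Reg1 = 0x00;              // write goes through the mapping
            UInt32 current = PCIRegisters->Reg2;    // read comes from the device
            (void)current;                          // silence unused-variable warnings
        }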
  12. Here is what I would like to do:

        struct PCIRegisterType {
            UInt32 Reg1;
            UInt32 Reg2;
            ...
        };

        int main () {
            IOMemoryMap *PCIMemoryMap;
            PCIRegisterType *PCIRegisters;
            ...
            PCIMemoryMap = Device->getDeviceMapWithRegister(kIOPCIConfigBaseAddress0);
            PCIRegisters = (PCIRegisterType *)PCIMemoryMap->getVirtualAddress();

            printf("Result 1: 0x%02x", Device->ioRead8(0x00, PCIMemoryMap)); /* Result 1: 0xfe */
            printf("Result 2: 0x%02x", PCIRegisters->Reg1);                  /* Result 2: 0x00 */
            ...
        }

    This compiles and runs fine. Everything except the printf lines was taken from Apple's documentation on the IOMemoryMap class. When I use the memory map with ioRead32(0x00, PCIMemoryMap), I get the correct value from the register. But when I try to read the register using PCIRegisters->Reg1 (or Reg2, etc.), all I get is a value of 0x00. The output of both printf lines should be the same because, according to what I would LIKE to happen, both statements should be reading the same address in memory. This, though, does not seem to be true. The memory mapping seems fine, but I can't use a struct pointer to access the mapped memory? Apple's documentation seems to let on that this is possible, but maybe I am just not getting it. Any help would be greatly appreciated, as I've been working on this for days...
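
    One more hedged aside for anyone copying this pattern: before overlaying a struct on the map, it's worth confirming the map actually exists and covers the whole register block, and keeping a reference to the IOMemoryMap for as long as the pointer is in use. A trivial sketch:

        // Sketch: sanity-check the mapping before treating it as a register block.
        #include <IOKit/IOMemoryDescriptor.h>

        static bool mappingLooksUsable(IOMemoryMap *PCIMemoryMap, IOByteCount registerBlockSize)
        {
            // getLength() reports how many bytes the map actually covers.
            return (PCIMemoryMap != NULL) && (PCIMemoryMap->getLength() >= registerBlockSize);
        }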