Alex DeWolf · Posted September 13, 2008
Hello all: I have an Asus F8SN with an Nvidia 9500M with 512 MB VRAM. I have 10.5.4 installed and working with QE/CI support using the latest NVinject. However, OpenGL performance is really slow. How can I get OpenGL working properly? Alex
spanakorizo · Posted September 13, 2008
How do you know it is slow?
Alex DeWolf (author) · Posted September 13, 2008
> How do you know it is slow?
I think the attached image shows pretty slow OpenGL tests. Also, Half-Life 2 under CrossOver is really slow on this machine, while on a MacBook with the ATI X3100 it is much smoother and faster. Alex
Slice · Posted September 13, 2008
> I think the attached image shows pretty slow OpenGL tests. Also, Half-Life 2 under CrossOver is really slow on this machine...
OpenGL Extension Viewer is a strange program, as is XBench. Better to try OpenMark and Cinebench. CrossOver uses Windows drivers, so I doubt it can run OpenGL fast. What is an ATI X3100? Maybe you mean the Intel X3100? It would be nice if you posted your config:
sudo -s
kextstat > kextstatDeWolf.txt
ioreg -l -x -w 2048 > ioregDeWolf.txt
aqua-mac · Posted September 13, 2008
Can you show your info from System Profiler under the Graphics section? I very much doubt that you actually have QE or CI enabled.
mitch_de · Posted September 13, 2008
No, OpenGL Extensions Viewer benchmarks are very good and reliable - much better than XBench, where a GMA950 comes out faster than an ATI 2600XT! XBench's OpenGL values are for the trash (the rest of its values are OK). Perhaps you are using software rendering mode (CPU), not hardware OpenGL mode (GPU). To check whether you have software (CPU) or hardware (GPU) rendering: look in the Extensions tab. It should NOT say "Apple Software Renderer"; it should show the hardware renderer instead (e.g. "Intel GMA X3100"). I posted my ATI HD3850 info - Renderer: ATI... (GPU) - and my benchmark values at 1600x1200/32-bit in bench mode. The renderer is also shown in the upper-left corner during the test. If it shows your GPU, then everything is right (I would set the framebuffer as standard). The X3100 isn't so fast at higher resolutions like yours.
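A quick way to double-check QE/CI from the command line, rather than eyeballing the Extensions tab, is to grep System Profiler's graphics report. This is only a sketch: the saved report text below is a hypothetical sample, and on the real machine you would generate the file with `system_profiler SPDisplaysDataType > gfx.txt`.

```shell
# Hypothetical sample of a saved System Profiler graphics report;
# on the Mac itself you would create gfx.txt with:
#   system_profiler SPDisplaysDataType > gfx.txt
cat > gfx.txt <<'EOF'
NVIDIA GeForce 9500M:
  Quartz Extreme: Supported
  Core Image: Hardware Accelerated
EOF

# If Core Image reports "Hardware Accelerated", QE/CI is claimed active;
# otherwise you are likely on the Apple Software Renderer (CPU).
if grep -q "Core Image: Hardware Accelerated" gfx.txt; then
    echo "QE/CI: hardware"
else
    echo "QE/CI: software fallback"
fi
```

Note that, as this thread shows, System Profiler can claim acceleration even when real performance is at software-rendering levels, so treat this as a first check only.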
Alex DeWolf (author) · Posted September 13, 2008
Attached are the System Profiler screenshot and the ioreg and kextstat text files. Thanks for everyone's help. Alex DeWolf
ioregDeWolf.txt kextstatDeWolf.txt
Cheops · Posted September 13, 2008
Try setting one of your hot corners in Exposé to put the display to sleep, then wake it and re-run the tests. Ade.
Alex DeWolf (author) · Posted September 13, 2008
> Perhaps you are using software rendering mode (CPU), not hardware OpenGL mode (GPU)...
Well, what I was saying is that rendering on my Asus laptop with the Nvidia 9500M and 512 MB VRAM seems slower than on my MacBook with the Intel (sorry, not ATI) X3100 with shared memory. Here is a screenshot of the Extensions tab. Thanks, Alex
Cheops · Posted September 13, 2008
See my post above! Ade.
Alex DeWolf (author) · Posted September 13, 2008
> Try setting one of your hot corners in Exposé to put the display to sleep, then wake it and re-run the tests.
I did this and there was no change. Alex
Cheops · Posted September 13, 2008
Does anything else seem smoother? Ade.
Alex DeWolf (author) · Posted September 13, 2008
> Does anything else seem smoother?
Nope. Alex
Headrush69 · Posted September 13, 2008
> No, OpenGL Extensions Viewer benchmarks are very good and reliable - much better than XBench...
They are both useless for any kind of real-world benchmarking. Using these apps, the difference in results between my GeForce 7900GS and GeForce 8800GTX was hardly substantial (and, to the average Joe, meaningless), but the in-game 3D experience was dramatically different. A benchmark tool like Santaduck's UT2004 toolkit showed the difference. They are useful for noticing when hardware acceleration is or isn't working, and maybe for a rough level of performance, but not much more.
Alex DeWolf (author) · Posted September 13, 2008
> Try setting one of your hot corners in Exposé to put the display to sleep, then wake it and re-run the tests.
Well, check this out: there is an almost ten-fold speed improvement in the OpenGL test (the first screenshot is before, the second is after). However, Half-Life 2 is still choppy. Alex
Alex DeWolf (author) · Posted September 14, 2008
bump
mitch_de · Posted September 14, 2008
> They are both useless for any kind of real-world benchmarking...
Then try it at higher resolutions (above 2000x1400, like the game magazines' test resolutions), or better, activate FSAA (multisampling) at 4x or 8x. You will see a BIG difference between the two cards! FSAA is very hard work for the GPU and for VRAM speed, and OpenGL driver differences and CPU limits no longer have such a big effect once FSAA 4x or 8x is active.
Also, real-world game benchmarks have some minuses:
- they are much more CPU- and RAM-dependent
- they compare badly between systems with different CPUs and RAM
- a GPU that wins gold in Unreal may not place at all (i.e. lose) in another game benchmark such as Crysis
- it is too much work to get a GPU value that is only comparable between systems with the same RAM, the same CPU type, and the same CPU clock
Conclusion: real-world benchmarks are only great for testing GPUs in ONE system, and you must test at least three different games to get a good average value. Game and hardware magazines test this way: one test PC, three games, ten cards. That is too much work and not possible for us - we all have completely different CPU/RAM systems, and nobody has three or more real-world games with the same resolution and the same game settings. Running OpenGL Extensions Viewer benchmarks (with FSAA at least 2x; 4x or 8x on newer cards) really helps here for "home-user GPU tests".
Cheops · Posted September 14, 2008
So it appears that on laptops there are problems getting Nvidia hardware acceleration activated. Even though it says hardware accelerated, that isn't actually true, so beware, laptop users with Nvidia cards: we need a fix for this ASAP. Thanks, Ade
mitch_de · Posted September 14, 2008
> ...beware, laptop users with Nvidia cards: we need a fix for this ASAP.
To check this, you can compare the OpenGL Extensions Viewer results for "HW OpenGL" and "Apple Software Rendering". If the two benchmark values are nearly the same, hardware OpenGL isn't really working 100%, even if System Profiler shows the card as QE/CI capable. If the HW bench values are at least 10x higher (faster) than the SW values, hardware OpenGL is working, with maybe some driver glitches on your mobile GPU. HOW: look at my screenshot; you can easily switch between the two modes (HW OpenGL / Apple Software Rendering). My ATI 3850 in Apple Software Rendering mode is more than 200 times slower than in HW rendering for OpenGL 1.1-2.0, and 20 times slower at 2.1, running at my resolution of 1600x1200/32-bit on a 2.4 GHz C2D. The smaller the resolution or the faster the CPU, the smaller the difference between the two modes. So your mobile GPU should be at the very least 20 times faster in HW OpenGL mode if all is OK, or perhaps only 10 times faster if there are some OpenGL driver problems.
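mitch_de's rule of thumb (HW mode should be at least 10x-20x faster than the Apple Software Renderer) can be turned into a quick shell calculation. The FPS values below are hypothetical placeholders; substitute the numbers your own OpenGL Extensions Viewer runs produce in each mode.

```shell
# Hypothetical results from the same OpenGL Extensions Viewer test,
# run once in HW OpenGL mode and once in Apple Software Rendering mode.
hw_fps=320
sw_fps=16

# Integer speedup factor, per mitch_de's rule of thumb.
ratio=$(( hw_fps / sw_fps ))
echo "HW is ${ratio}x faster than SW"

if [ "$ratio" -ge 20 ]; then
    echo "hardware OpenGL looks healthy"
elif [ "$ratio" -ge 10 ]; then
    echo "working, but possible driver glitches"
else
    echo "hardware acceleration probably not really active"
fi
```

With these placeholder numbers the ratio is 20, i.e. right at the "healthy" threshold; a ratio near 1 would mean the "hardware" path is really falling back to the CPU.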
Alex DeWolf (author) · Posted September 14, 2008
> To check this, you can compare the OpenGL Extensions Viewer results for "HW OpenGL" and "Apple Software Rendering"...
OK, I ran this first using Apple software rendering and then Nvidia's OpenGL engine. Indeed, Apple software rendering is about 100 times slower. What I am wondering about are the numbers for the last two tests. Is this why Half-Life 2 is so slow? Alex
Cheops · Posted September 14, 2008
I have read somewhere that adding platform=X86PC to the boot-up fixes it, but unfortunately this does not work on my laptop - it panics. Maybe you could try it. Ade.
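For anyone wanting to try this, boot flags are typically added to the Kernel Flags key in /Library/Preferences/SystemConfiguration/com.apple.Boot.plist. The fragment below is only a sketch of where the flag Ade mentions would go; as he notes, it can cause panics on some machines, so keep a backup of the file before editing it.

```xml
<!-- Excerpt from com.apple.Boot.plist; only the Kernel Flags entry is shown. -->
<key>Kernel Flags</key>
<string>platform=X86PC</string>
```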
Slice · Posted September 14, 2008
> I think the attached image shows pretty slow OpenGL tests...
Your Nvidia is really fast; I get no such result, while all my other tests and games work fine. OpenGL Extension Viewer in Windows XP runs at the same speed as the software renderer in Mac OS X. Try Cinebench, which exists for both Windows XP and Mac OS X.
Cheops · Posted September 14, 2008
Why are we getting faster speeds after activating/deactivating "Sleep Display"? Is it a driver bug that is not fixable, or can it be fixed? Thanks, Ade.
Alex DeWolf (author) · Posted September 14, 2008
Here is a Half-Life 2: Episode 2 screenshot with the FPS displayed (11 FPS). I still can't figure this out. Alex
> Try Cinebench, which exists for both Windows XP and Mac OS X.
Here are my Cinebench R10 results (NOTE: I have to boot with cpus=1, so there are only single-CPU results):
CINEBENCH R10
****************************************************
Tester : Alex DeWolf
Processor : Intel T9300
MHz : 2.4 GHz
Number of CPUs : 1
Operating System : OS X 32 BIT 10.5.4
Graphics Card : NVIDIA GeForce 8600 GTS OpenGL Engine
Resolution : <fill this out>
Color Depth : <fill this out>
****************************************************
Rendering (Single CPU) : 2462 CB-CPU
Rendering (Multiple CPU) : --- CB-CPU
Shading (OpenGL Standard) : 4370 CB-GFX
****************************************************
Alex
Headrush69 · Posted September 14, 2008
> Then try it at higher resolutions... better, activate FSAA (multisampling) at 4x or 8x. You will see a BIG difference between the two cards!
I am using the highest native resolution of my LCD, 1680x1050. Obviously, if you enable features that are better optimized for and supported on newer cards, you will see a difference. The OP isn't using those settings, so comparing against other cards can lead to false assumptions (although in his case something is clearly wrong, as his numbers are incredibly low).
> Also, real-world game benchmarks have some minuses...
I never said it was perfect. The point was that the values produced by these benchmarks are essentially useless to most people (what does a 40-point difference mean?). The tools I suggested were for evaluating real performance on that system, not necessarily for comparing GPU performance between different machines.
> Running OpenGL Extensions Viewer benchmarks... really helps here for "home-user GPU tests".
Like I said, except for cases like the OP's where performance is exceptionally low, these numbers don't tell average users much. Unless they are comparing against others who ran with exactly the same settings (which often isn't the case; just look at the various threads posting benchmarks), getting a 300 compared to a 400 result doesn't mean much.
Alex DeWolf, are the results the same using different games, and are the frame rates the same in native games?