
OpenGL in OS X


Alex DeWolf

I think the attached image shows pretty slow OpenGL test results. Half-Life 2 under CrossOver is also really slow on this machine; on a MacBook with the ATI X3100 it is much smoother and faster.

 

Alex

OpenGL Extension Viewer is a strange program, and so is XBench. Better to try Open Mark and Cinebench.

CrossOver uses Windows drivers, so I doubt it can run OpenGL fast.

What is an ATI X3100? Maybe you mean the Intel X3100?

It would be nice if you posted your config:

sudo -s

kextstat > kextstatDeWolf.txt            # list the loaded kernel extensions

ioreg -l -x -w 2048 > ioregDeWolf.txt    # dump the I/O Registry (hex values, 2048-column lines)


OpenGL Extension Viewer is a strange program, and so is XBench. Better to try Open Mark and Cinebench.

CrossOver uses Windows drivers, so I doubt it can run OpenGL fast.

What is an ATI X3100? Maybe you mean the Intel X3100?

It would be nice if you posted your config

 

No, the OpenGL Extensions Viewer benches are very good and reliable - much better than XBench, where a GMA950 comes out faster than an ATI 2600XT! XBench's OpenGL values belong in the trash (the rest of its values are OK).

 

Perhaps you are using software rendering mode (CPU), not hardware OpenGL mode (GPU)!

How to check whether it is SW (CPU!) or HW (GPU) rendering:

Look in the Extensions tab: the renderer should NOT be an "Apple Software Renderer"; it should be the Intel GMA X3100 renderer (HW) instead.

I posted my ATI HD3850 info (Renderer: ATI... (GPU)) and my bench values at 1600x1200 / 32-bit / bench mode.

The renderer is also shown in the upper-left corner during the test.

If it shows Intel GMA X3100 then all is right (I would set that framebuffer as standard).

The X3100 just isn't very fast at higher resolutions like yours.
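
If you would rather check from code than from the viewer, here is a minimal sketch (my own, not part of OpenGL Extensions Viewer) that prints the active renderer string using the stock OS X GLUT framework; "Apple Software Renderer" means you are on the CPU path:

/* renderercheck.c - print the current OpenGL renderer and version.
   Build: gcc renderercheck.c -o renderercheck -framework GLUT -framework OpenGL */
#include <stdio.h>
#include <GLUT/glut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("renderer check");   /* glGetString needs a current GL context */
    printf("Renderer: %s\n", glGetString(GL_RENDERER));
    printf("Version : %s\n", glGetString(GL_VERSION));
    return 0;
}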

Bild_131.jpg

3850.jpg


No, the OpenGL Extensions Viewer benches are very good and reliable - much better than XBench, where a GMA950 comes out faster than an ATI 2600XT! XBench's OpenGL values belong in the trash (the rest of its values are OK).

Perhaps you are using software rendering mode (CPU), not hardware OpenGL mode (GPU)!

How to check whether it is SW (CPU!) or HW (GPU) rendering:

Look in the Extensions tab: the renderer should NOT be an "Apple Software Renderer"; it should be the Intel GMA X3100 renderer (HW) instead.

I posted my ATI HD3850 info (Renderer: ATI... (GPU)) and my bench values at 1600x1200 / 32-bit / bench mode.

The renderer is also shown in the upper-left corner during the test.

If it shows Intel GMA X3100 then all is right (I would set that framebuffer as standard).

The X3100 just isn't very fast at higher resolutions like yours.

 

Well, what I was saying is that rendering on my Asus laptop with the Nvidia 9500M and 512 MB of VRAM seems slower than on my MacBook with the Intel (sorry, not ATI) X3100 and shared memory.

Here is a screenshot of the Extensions tab.

 

Thanks

Alex

post-160982-1221316321_thumb.png


No, the OpenGL Extensions Viewer benches are very good and reliable - much better than XBench, where a GMA950 comes out faster than an ATI 2600XT! XBench's OpenGL values belong in the trash (the rest of its values are OK).

They are both useless for any kind of real world benchmarking.

 

Using these apps, the difference in results between my GeForce 7900GS and GeForce 8800GTX was hardly substantial (and to the average Joe, meaningless).

But the difference in any in-game 3D experience was incredibly dramatic; using a benchmark tool like Santaduck's UT2004 tool showed it.

 

They are useful for things like noticing whether hardware acceleration is working, and maybe for gauging a rough level of performance, but other than that, not much.


Try setting one of your Exposé hot corners to put the display to sleep, then wake it and re-run the tests.

 

Ade.

 

Well, check this out: there is an almost 10-fold speed improvement in the OpenGL test (the 1st screenshot is before and the 2nd is after). However, Half-Life 2 is still choppy.

 

Alex

post-160982-1221331288_thumb.png

post-160982-1221331299_thumb.png


They are both useless for any kind of real world benchmarking.

Using these apps, the difference in results between my GeForce 7900GS and GeForce 8800GTX was hardly substantial (and to the average Joe, meaningless).

But the difference in any in-game 3D experience was incredibly dramatic; using a benchmark tool like Santaduck's UT2004 tool showed it.

They are useful for things like noticing whether hardware acceleration is working, and maybe for gauging a rough level of performance, but other than that, not much.

Then try it at a higher resolution (above 2000x1400, like the game magazines' test resolutions), or better, activate FSAA (multisampling) at 4x or 8x.

You will see a BIG difference between the two cards!

FSAA is very hard work for the GPU and also for VRAM speed; with FSAA 4x or 8x activated, OpenGL driver differences and CPU limits no longer have such a big effect.
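
For the curious, this is roughly what the FSAA switch asks the driver for; a hypothetical GLUT sketch (mine, not from either benchmark) that requests a multisampled framebuffer and reports how many samples were actually granted:

/* fsaacheck.c - request a multisampled (FSAA) framebuffer and report the sample count.
   Build: gcc fsaacheck.c -o fsaacheck -framework GLUT -framework OpenGL */
#include <stdio.h>
#include <GLUT/glut.h>

static void draw(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    GLint samples = 0;
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_MULTISAMPLE); /* ask for FSAA */
    glutCreateWindow("FSAA check");
    glEnable(GL_MULTISAMPLE);               /* enable multisample rasterization */
    glGetIntegerv(GL_SAMPLES, &samples);    /* how many samples per pixel we actually got */
    printf("Samples per pixel: %d\n", samples);
    glutDisplayFunc(draw);
    glutMainLoop();
    return 0;
}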

Also, real-world game benches have some minuses:

- they are much more CPU- and RAM-dependent, so they compare badly between systems with different CPUs and RAM

- a GPU that wins gold in Unreal may not place anywhere near the top (i.e. it loses) in another game bench (Crysis)

- it is too much work to get a GPU value that is only comparable between systems with the SAME amount of RAM, the SAME CPU type, and the SAME CPU clock

Conclusion:

Real-world benches are only great for testing GPUs in ONE system, and you must test at least 3 different real-world games to get a good average bench value.

Game magazines and hardware mags test this way: one test PC, 3 games, 10 cards. That is too much work and not possible for us - we all have completely different CPU+RAM systems, and nobody has 3+ real-world games at the same resolution with the same game settings...

Running the OpenGL Extensions Viewer benches (switching FSAA on at 2x at least, or 4x/8x for newer cards) really helps here for "home user" GPU tests.


So it is proven that on laptops there are problems getting the Nvidia hardware activated. Even though it says hardware accelerated, it lies: that isn't true. So beware, laptop users with Nvidia cards; we need a fix for this ASAP.

 

Thanks

 

Ade


So it is proven that on laptops there are problems getting the Nvidia hardware activated. Even though it says hardware accelerated, it lies: that isn't true. So beware, laptop users with Nvidia cards; we need a fix for this ASAP.

 

Thanks

 

Ade

To check this, you can compare the OpenGL Extensions Viewer results for "HW OpenGL" and "Apple Software Rendering".

If the two benchmark values are nearly the same, HW OpenGL isn't really (not 100%) working, even if System Profiler shows the card as CI/QE-capable.

If the HW bench values are at least 10x higher (faster) than the SW bench values, HW OpenGL is working, though there may be some driver glitches with your mobile GPU.

How: look at my screenshot; it is easy to switch between those 2 modes (HW OpenGL / Apple Software Rendering).

In Apple Software Rendering mode my ATI 3850 is more than 200 times SLOWER than in HW rendering on the OpenGL 1.1-2.0 tests, and 20 times slower at 2.1. That is at my resolution, 1600x1200 / 32-bit, on a 2.4 GHz Core 2 Duo. The smaller the resolution or the faster the CPU, the smaller the difference between the two modes.

So your mobile GPUs should be at the very least 20 times faster in HW OpenGL mode if all is OK, or perhaps only 10 times faster with some OpenGL driver problems.
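
If you want to script that comparison rather than click through the viewer's menu, CGL can be asked for the Apple software renderer explicitly; a minimal sketch (my own, assuming only the stock OS X OpenGL framework):

/* swcheck.c - create a context on the generic (software) renderer and confirm it.
   Build: gcc swcheck.c -o swcheck -framework OpenGL */
#include <stdio.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLRenderers.h>
#include <OpenGL/gl.h>

int main(void)
{
    /* explicitly request the generic (software) renderer */
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFARendererID, (CGLPixelFormatAttribute)kCGLRendererGenericFloatID,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix;
    CGLContextObj ctx;
    GLint npix;

    if (CGLChoosePixelFormat(attrs, &pix, &npix) != kCGLNoError || npix == 0) {
        fprintf(stderr, "no matching pixel format\n");
        return 1;
    }
    CGLCreateContext(pix, NULL, &ctx);
    CGLSetCurrentContext(ctx);
    printf("Renderer: %s\n", glGetString(GL_RENDERER)); /* should name the Apple software renderer */
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    return 0;
}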

Bild_138.jpg


To check this, you can compare the OpenGL Extensions Viewer results for "HW OpenGL" and "Apple Software Rendering".

If the two benchmark values are nearly the same, HW OpenGL isn't really (not 100%) working, even if System Profiler shows the card as CI/QE-capable.

If the HW bench values are at least 10x higher (faster) than the SW bench values, HW OpenGL is working, though there may be some driver glitches with your mobile GPU.

How: look at my screenshot; it is easy to switch between those 2 modes (HW OpenGL / Apple Software Rendering).

In Apple Software Rendering mode my ATI 3850 is more than 200 times SLOWER than in HW rendering on the OpenGL 1.1-2.0 tests, and 20 times slower at 2.1. That is at my resolution, 1600x1200 / 32-bit, on a 2.4 GHz Core 2 Duo. The smaller the resolution or the faster the CPU, the smaller the difference between the two modes.

So your mobile GPUs should be at the very least 20 times faster in HW OpenGL mode if all is OK, or perhaps only 10 times faster with some OpenGL driver problems.

OK, I ran this first using Apple software rendering and then Nvidia's OpenGL engine. Indeed, Apple software rendering is about 100 times slower. What I am wondering about are the numbers for the last 2 tests. Is this why Half-Life 2 is so slow?

 

Alex

post-160982-1221401351_thumb.png

post-160982-1221401400_thumb.png


I think the attached image shows pretty slow OpenGL test results. Half-Life 2 under CrossOver is also really slow on this machine; on a MacBook with the ATI X3100 it is much smoother and faster.

 

Alex

 

 

OpenGL Extension Viewer is a strange program, and so is XBench. Better to try Open Mark and Cinebench.

CrossOver uses Windows drivers, so I doubt it can run OpenGL fast.

What is an ATI X3100? Maybe you mean the Intel X3100?

Your Nvidia is really fast; I have no such result, while all my other tests and games work fine.

OpenGL Extension Viewer in Windows XP runs at the same speed as the software renderer in Mac OS X.

Try Cinebench, which exists for both Windows XP and Mac OS X.

Picture_4.png

Link to comment
Share on other sites

Here is a Half-Life 2: Episode 2 screenshot with the FPS displayed (11 FPS). I still can't figure this out.

 

Alex

 

Your Nvidia is really fast; I have no such result, while all my other tests and games work fine.

OpenGL Extension Viewer in Windows XP runs at the same speed as the software renderer in Mac OS X.

Try Cinebench, which exists for both Windows XP and Mac OS X.

 

Here are my Cinebench R10 results (note: I have to boot with cpus=1, so there are only single-CPU results):

 

CINEBENCH R10
****************************************************

Tester : Alex DeWolf

Processor : Intel T9300
MHz : 2.4 GHz
Number of CPUs : 1
Operating System : OS X 10.5.4 (32-bit)

Graphics Card : NVIDIA GeForce 8600 GTS OpenGL Engine
Resolution : <fill this out>
Color Depth : <fill this out>

****************************************************

Rendering (Single CPU) : 2462 CB-CPU
Rendering (Multiple CPU) : --- CB-CPU

Shading (OpenGL Standard) : 4370 CB-GFX

****************************************************

 

 

Alex

 


post-160982-1221418794_thumb.png


Then try it at a higher resolution (above 2000x1400, like the game magazines' test resolutions), or better, activate FSAA (multisampling) at 4x or 8x.

You will see a BIG difference between the two cards!

FSAA is very hard work for the GPU and also for VRAM speed; with FSAA 4x or 8x activated, OpenGL driver differences and CPU limits no longer have such a big effect.

I am using the highest native resolution of my LCD, 1680x1050. Obviously, if you enable features that are better optimized for or supported by newer cards, you will see a difference.

The OP isn't using those settings, so comparing to other cards can lead to false assumptions. (Although in his case something is wrong, as his numbers are incredibly low.)

 

Also, real-world game benches have some minuses:

- they are much more CPU- and RAM-dependent, so they compare badly between systems with different CPUs and RAM

- a GPU that wins gold in Unreal may not place anywhere near the top (i.e. it loses) in another game bench (Crysis)

- it is too much work to get a GPU value that is only comparable between systems with the SAME amount of RAM, the SAME CPU type, and the SAME CPU clock

I never said it was perfect. The point was that the values produced by these benchmarks are essentially useless to most people. (What does a 40-point difference mean?)

The tools I suggested were for evaluating real performance (on that system), not necessarily for comparing GPU performance between different machines.

 

Running the OpenGL Extensions Viewer benches (switching FSAA on at 2x at least, or 4x/8x for newer cards) really helps here for "home user" GPU tests.

Like I said, except in cases like the OP's, where performance is exceptionally low, these numbers don't tell average users much. Unless they are comparing against others running the exact same settings (which often isn't the case; just look at the various benchmark threads), a result of 300 versus 400 doesn't mean much.

 

Alex DeWolf, are the results the same using different games, and are the frame rates the same in native games?

