
Unigine Valley (Highend OpenGL) Benchmark


mitch_de

78 posts in this topic

I included all options; after running it, I unchecked any that never went above zero so they wouldn't clutter the table view, so anything not visible in that screenshot was all zeros. The card has 3 GB of VRAM, and based on the figures it doesn't seem to be getting anywhere near maxing that out. Tried rolling back to earlier 10.8.3 kexts, no luck. The Low quality preset actually doesn't trigger it and runs fine. Going to try a different SMBIOS next.


Interesting - that AMD 7870 (min FPS 33) doesn't have as big an FPS drop as the AMD 7970 (min FPS 8).

CPU speed difference can't be the reason: i5 (no FPS drop) vs. i7 (FPS drop). The other FPS values of the 7870 / 7970 - average FPS & max FPS - are very close.


Basic with AA enabled, though with the usual AA color problems, is more in line with what it should get:

Unigine Valley Benchmark AAon.jpg

And Extreme HD with AA off looked gorgeous and smooth, with only a little hiccup at that same spot, and it's always exactly 8.4 FPS when it happens, which makes me think it's the OpenGL functions in Valley and not a hardware or OS X problem:

Screen Shot 2013-02-28 at 7.35.34 AM.png

 

Edit: After some additional testing: running the bench in Windows 8 with the OpenGL API gives lower overall FPS than OS X, but it also freezes at the exact same spot and drops min FPS to the exact same 8.4. DX11 gives higher average/peak FPS but also freezes at the same spot, lowering min FPS to 16.4. Tests were run with both the GHz Edition VBIOS and the original release 7970 VBIOS, with the same results. So it's either the application's handling of the 7970 or an issue with the card itself.


  • 2 weeks later...

Hi Mitch

 

Nice benchmarking app!

Here's my result using the Basic preset without AA on a Radeon 5770.

A good result for a modest card? It's because of the fast CPU - an i7 with 8 cores.


I hear it is emulated OpenGL, and it seems likely to be true.

 

No, it's real OpenGL. But in every complex OpenGL bench (same for DirectX) that uses real game content/tasks as the benchmark, like Unigine does, the CPU does matter, of course.

The same happens in real Windows DirectX games with complex GPU usage. The same high-end GPU, an AMD 7970 / GTX 680, running on a C2D vs. an overclocked i7 can give up to a 30% FPS difference!!

It depends on what is benched, but the more a bench uses real game content, like Unigine does, rather than only synthetic bench tasks, the more the CPU matters.

Also, the number of CPU cores isn't the only parameter! More difference comes from different L1/L2/L3 CPU cache sizes and their optimizations, because the OpenGL/DirectX driver often "stays" in those L1/L2/L3 caches while running benches. I guess that no more than 3 real CPU cores get any work to do (meaning > 40% core load) while running that bench. Perhaps someone can check that with the CPU display of Activity Monitor open beside a windowed Valley run.

So more than 4 (real) CPU cores won't make a difference - only the speed of each used (>= 40%) core, plus the speed of the L1/L2/L3 caches and the speed of the system bus/RAM, will have an effect.
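A minimal sketch of that per-core check, assuming Python 3 with the psutil package installed (pip install psutil); the 40% threshold is just the rule of thumb from above:

# Sample per-core CPU load while Valley runs in a window, to count
# how many cores actually get meaningful work (> 40% load).
import psutil

SAMPLES = 30           # roughly 30 seconds of sampling
BUSY_THRESHOLD = 40.0  # a "working" core, per the rule of thumb above

busy_counts = [0] * psutil.cpu_count()
for _ in range(SAMPLES):
    # cpu_percent() blocks for the interval and returns one value per logical core
    loads = psutil.cpu_percent(interval=1.0, percpu=True)
    for core, load in enumerate(loads):
        if load > BUSY_THRESHOLD:
            busy_counts[core] += 1

for core, count in enumerate(busy_counts):
    print("core %d: busy in %d/%d samples" % (core, count, SAMPLES))

If only 2-3 cores ever cross the threshold, that would support the guess above.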

 

But!!!

A low-end GPU will never come close to or match the FPS of a high-end GPU. A C2D with an AMD 7970 stays much faster than an i7 with an AMD 5770!!

So GPU speed can be benched in a more real-game-like way (= high-end and low-end GPUs moving closer together) or in a more synthetic way (unreal ;) ), which shows bigger speed differences.

Also, if you look at game FPS benches in Windows, you see that the ranking of the GPUs often changes between different games (= different usage of DirectX/OpenGL code, different driver speeds, different optimizations...).

For example, OpenGL Extensions Viewer bench results will surely have less CPU dependency, BUT they don't show GPU speed in REAL LIFE (= gaming).

 

To get a more complete overview of a GPU's speed, it's useful to compare many benchmark types - the more real-game-like benches as well as the more synthetic ones.
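If you do collect results from several bench types, one common way to boil them down to a single comparable number (my suggestion, not something from this thread) is a geometric mean of scores normalized against a reference card - a minimal sketch with made-up FPS numbers:

# Combine scores from several benchmarks into one relative index.
# Scores are first normalized against a reference card, then averaged
# geometrically so no single bench dominates the result.
import math

def gpu_index(scores, reference):
    ratios = [scores[name] / reference[name] for name in scores]
    return math.prod(ratios) ** (1.0 / len(ratios))

# Hypothetical numbers, only to show the shape of the calculation:
hd5770 = {"valley_basic": 47.0, "heaven": 30.0, "synthetic": 55.0}
hd6670 = {"valley_basic": 40.0, "heaven": 24.0, "synthetic": 50.0}
print(gpu_index(hd6670, hd5770))  # ~0.85 -> roughly 85% of the 5770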


OK, but how do you explain that this test is not working with AA on Radeons? Radeons do support AA in real OpenGL.

I have very poor results on my comp #1. Ten times worse than blackosx with the HD 5770.

But my system works fine at 40 FPS in a real game, 4x4 Evolution, ported from the Windows game by means of Wineskin. As far as I understand, it translates DirectX to the system OpenGL, which is hardware OpenGL in my case.


1. The AA bug shows that their benches (both Valley & Heaven) have some problems with the AA setting on AMD, and perhaps other AMD bugs too. It's OS X related - Windows OpenGL doesn't have that AMD AA problem.

I think they already know about it and will fix it - perhaps Apple also has to do some work on the AMD drivers (10.8.3 did not fix the AA problem, I think).

Some bad results in some benches don't matter if other benches show your AMD 6670 doing well compared to such a 5770 (which, you said, performs much better here).

 

Q: If your 6670 is really 10 times slower than blackosx's HD 5770, you would have only around 5 FPS average?! (They get 45-50 FPS, windowed, BASIC, AA OFF.)

 

Perhaps others with a 6770 & 6670 can post their results, because until now no other 6670 is listed here.


Yep, but there are DDR3 & DDR5 VRAM versions and differently VRAM-clocked versions of the AMD 6670 out there - that alone makes up to a 50% difference in gaming speed on the same GPU type!

The Memory Transfer Rate value (in the list above) of 76.8 GB/s for the 6670 applies only to the DDR5 version!

The 6670 DDR3 (900 MHz) version has only 28.8 GB/s - and also much slower general performance than the older AMD 5770 DDR5.

That's because the other main speed parameters, pixel fill rate & texture fill rate, are much higher on the 5770. Both values, together with the memory speed, correspond to the main OpenGL/DirectX speeds.

But it gets even more complicated in this AMD 6670 case :weight_lift:

Besides the DDR3 vs. DDR5 AMD 6670 versions, the different manufacturers also clock GPU & VRAM very differently.

 

6670 memory clock (DDR3) versions: from 600 MHz up to 900 MHz (900 MHz = 28.8 GB/s, 600 MHz is around 19 GB/s VRAM speed!!)

ASUS + Gigabyte OC 6670 DDR3 cards clock the VRAM at 900 MHz; others mostly at 800 MHz, some at only a worse 600 MHz or 667 MHz!

The GPU clock is mostly 800 MHz (only a few at 820), so the major differences are in VRAM MHz, not GPU MHz.

 

6670 memory clock (DDR5) versions: in contrast to the wide VRAM MHz range of the DDR3 versions, the DDR5 versions are mostly clocked the same (1000 MHz). No or only very small VRAM clock differences between manufacturers.

So even when benching the same main GPU class, you will get very different speeds between an AMD 5770 DDR5 (850/1200) and a worst-case AMD 6670 DDR3 (800/600)! (GPU clk / Mem clk in MHz)

So the VRAM speed of the worst-case 6670 version is only about 1/4 of the AMD 5770's (19 GB/s vs. 77 GB/s), and the major OpenGL functions run at 1/3 - 1/4 of the AMD 5770's speed.
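Those bandwidth numbers follow directly from the clocks; a quick back-of-the-envelope check in Python, assuming the 128-bit memory bus both cards use and the usual 2 transfers per clock for DDR3 vs. 4 for DDR5 (GDDR5):

# VRAM bandwidth in GB/s = clock (MHz) * transfers per clock * bus width (bits) / 8 / 1000
def bandwidth_gbs(clock_mhz, transfers_per_clock, bus_bits=128):
    return clock_mhz * transfers_per_clock * bus_bits / 8 / 1000.0

print(bandwidth_gbs(900, 2))   # 6670 DDR3 @ 900 MHz  -> 28.8 GB/s
print(bandwidth_gbs(600, 2))   # 6670 DDR3 @ 600 MHz  -> 19.2 GB/s
print(bandwidth_gbs(1200, 4))  # 5770 DDR5 @ 1200 MHz -> 76.8 GB/s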

Back to the Valley bench: because Valley uses VRAM often and heavily, VRAM speed matters much more here than in "normal" games.

 

A great site for comparing lots of GPUs (main features, main speeds):

 

http://www.gpureview...1=615&card2=656 AMD 6670 DDR3 vs AMD 5770 DDR5

 

An example, so you don't get blinded by higher AMD model numbers:

http://www.gpureview...1=579&card2=656 old AMD 4670 vs. new AMD 6670 DDR3 = no / very little OpenGL speed benefit for the much newer model 6670, only somewhat (15%?) faster GPU computing / shader capabilities.

 

You can select other GPUs as well - very useful before buying a new GPU.


My card is exactly:

Gigabyte GV-R667D3-2GI

DDR3, 2 GB, 800 MHz

 

But I don't know why I had such a bad result in Lion. Today I tested with ML, and the result is much better:

Screen Shot 2013-03-21 at 20.36.15.png

Maybe there is an influence from other tweaks.

 

Now it is comparable with blackosx's result, and lower, as it should be. -_-

 

And for comparison, OpenGL Extensions Viewer 4.0:

Screen Shot 2013-03-21 at 20.54.27.png Screen Shot 2013-03-21 at 20.56.08.png


"Very strange. Instead of seeing trees and valley I had seen black background with very small coloured squares" AMD user

 

 

Yep, a well-known problem with AMD when multisampling (AA) is active in THAT bench. The same happens with the Heaven bench.

Set AA off (x0): use the BASIC preset (or any other preset) values, but always switch AA off.

Nvidia cards are not affected by this problem.

