
AudioGod's Aorus Z390 Pro Patched DSDT Mini Guide and Discussion



Um, wait... why "we cannot have a secondary dGPU"? You can... but for an optimal workflow they need to be the same dGPU. You cannot have a 580 and a 6600 and decide that the 580 handles regular tasks and the 6600 handles compute; the OS decides that. But you can have dual dGPUs... I have 2x W6800 and it works 🤣 In my scenario they are the same dGPU, so whether one does A and the other does B, or vice versa, is irrelevant. I don't think there is a way to choose which card does compute and which does everything else, but dual dGPUs work.

Edited by D3v1L

No, the W series is too expensive... I could get a used RX 6600 XT for €200-250, but the 4-output limitation is a deal-breaker. It's a pity that we cannot have two dGPUs in a Hackintosh :( Anyway, thanks for the info!

 

UPDATE: Sorry I just saw your last message, it was on the next page haha.

 

So, if I understood correctly: I now have an RX 580, and if I buy an RX 6600 XT as a second GPU, I can use both for output but I cannot control which of the two will do the compute work, right? Therefore it is best to have two of the same GPU to get the best computation result, instead of one "powerful" GPU and one "less powerful", right? In other words, I can buy an RX 6600 XT now and it will work fine alongside my RX 580 for output, and later I could buy a second RX 6600 XT and replace my RX 580 just to make sure computation performance is at its peak?

Edited by panosru

23 minutes ago, panosru said:

No, the W series is too expensive... I could get a used RX 6600 XT for €200-250, but the 4-output limitation is a deal-breaker. It's a pity that we cannot have two dGPUs in a Hackintosh :( Anyway, thanks for the info!

 

UPDATE: Sorry I just saw your last message, it was on the next page haha.

 

So, if I understood correctly: I now have an RX 580, and if I buy an RX 6600 XT as a second GPU, I can use both for output but I cannot control which of the two will do the compute work, right? Therefore it is best to have two of the same GPU to get the best computation result, instead of one "powerful" GPU and one "less powerful", right? In other words, I can buy an RX 6600 XT now and it will work fine alongside my RX 580 for output, and later I could buy a second RX 6600 XT and replace my RX 580 just to make sure computation performance is at its peak?

 

Yep... "I think" xD LoL. I sold my rig with the 590 and I don't have a Polaris card to try with... but maybe it works like RAM clock speeds, so basically the "lowest" wins: 580 + 6600 would result in the 6600 being downgraded to work like a second 580... (or maybe not... I've never tried it, only with identical cards, in my case 2x W6800, and 2x 7750 a looong time ago LoL on my Neanderthal rig running Snow Leopard)

  • Like 1

I will ask a friend of mine to come over so I can test the dual-dGPU setup first, before I buy a GPU that may not work. If it works, I will run some tests to check whether computation happens on only one GPU, whether it indeed uses the least powerful GPU, or whether it randomly chooses which GPU to compute on. I currently run a Gigabyte GB-G750H PSU, so if the concept works I guess I'll have to upgrade to a more powerful PSU to handle the load. If I manage to make both GPUs work and somehow ensure that computation happens only on the more powerful GPU, then I don't need to buy a second 6600 XT; but if the choice is random, or only the least powerful GPU is used, then you do indeed need identical GPUs for consistency.

 

Thanks @D3v1L! I hope to bring back the results as soon as possible for other people who may be interested in a similar setup!

  • Like 2

2 hours ago, panosru said:

I will ask a friend of mine to come over so I can test the dual-dGPU setup first, before I buy a GPU that may not work. If it works, I will run some tests to check whether computation happens on only one GPU, whether it indeed uses the least powerful GPU, or whether it randomly chooses which GPU to compute on. I currently run a Gigabyte GB-G750H PSU, so if the concept works I guess I'll have to upgrade to a more powerful PSU to handle the load. If I manage to make both GPUs work and somehow ensure that computation happens only on the more powerful GPU, then I don't need to buy a second 6600 XT; but if the choice is random, or only the least powerful GPU is used, then you do indeed need identical GPUs for consistency.

 

Thanks @D3v1L! I hope to bring back the results as soon as possible for other people who may be interested in a similar setup!

 

Best way ever. Try before you buy!

  • Like 1

Yes, the only problem is that most of my friends are Nvidia fanboys haha, so I'll have to ask around for someone who has an AMD GPU and could come to my home for me to test it out. I'll post the results here once I have some.

 

UPDATE:

@D3v1L, I forgot to ask: does the 6600 XT work fine with a Hackintosh, or should I go for the 5700 XT? I read in some Reddit threads that the 6600 XT doesn't work, and that if it somehow does, it doesn't support hardware acceleration, but I'm not sure whether that info is still up to date. Thanks!

Edited by panosru

On 1/23/2023 at 5:05 PM, panosru said:

Yeah, it seems other people are also complaining about Wacom drivers in the recent Ventura upgrade... I will most definitely look for other options when the time comes to upgrade my old Wacom tablet. Huion does seem to be a very good alternative, although for anyone who speaks Russian the name "Huion" sounds "awkward" hahaha

 

Thanks @D3v1L!

hahah how about your name? xD 

  • Haha 1

1 hour ago, panosru said:

Yes, the only problem is that most of my friends are Nvidia fanboys haha, so I'll have to ask around for someone who has an AMD GPU and could come to my home for me to test it out. I'll post the results here once I have some.

 

UPDATE:

@D3v1L, I forgot to ask: does the 6600 XT work fine with a Hackintosh, or should I go for the 5700 XT? I read in some Reddit threads that the 6600 XT doesn't work, and that if it somehow does, it doesn't support hardware acceleration, but I'm not sure whether that info is still up to date. Thanks!

If it's not working and doesn't support hardware acceleration... it's simply because they don't know how to install a video card in a motherboard... the 6600 XT is in a real Mac Pro, so... xD LoL. Btw, I have one... and my hardware acceleration "seems" to be working xD ahah

  • Thanks 1

38 minutes ago, Blesh said:

hahah how about your name? xD 

 

hahahahaha well, I had it coming!!! The accent goes on the a! :P It's pànosru, not panòsru hahaha

  • Haha 1

Today I decided to enable SIP, since I have had it disabled since 2019 when I built my first Hackintosh. So I changed csr-active-config to 00000000 (8 zeros); the system booted normally, but after entering my user password I got a black screen with only the mouse cursor visible.

 

I rebooted back into the disabled-SIP state, created a completely new user, and then rebooted again with the EFI that has SIP enabled.

 

I could log in to the newly created user normally, but I wasn't able to log in to my own account.

 

I then used this Python script to toggle the SIP values individually until I found out which SIP flag causes the issue (I'm not sure whether those flags and values are up to date for Ventura):

 

#   CSR_ALLOW_UNTRUSTED_KEXTS            0x0001
#   CSR_ALLOW_UNRESTRICTED_FS            0x0002
#   CSR_ALLOW_TASK_FOR_PID               0x0004
#   CSR_ALLOW_KERNEL_DEBUGGER            0x0008
#   CSR_ALLOW_APPLE_INTERNAL             0x0010
#   CSR_ALLOW_UNRESTRICTED_DTRACE        0x0020
#   CSR_ALLOW_UNRESTRICTED_NVRAM         0x0040
#   CSR_ALLOW_DEVICE_CONFIGURATION       0x0080
#   CSR_ALLOW_ANY_RECOVERY_OS            0x0100
#   CSR_ALLOW_UNAPPROVED_KEXTS           0x0200
#   CSR_ALLOW_EXECUTABLE_POLICY_OVERRIDE 0x0400
#   CSR_ALLOW_UNAUTHENTICATED_ROOT       0x0800
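
For reference, a minimal sketch (not the script mentioned above) of how these flag bits map to the csr-active-config value stored in NVRAM; the bytes are little-endian, which is why a mask of 0x00000002 shows up as 02000000:

import struct

CSR_ALLOW_UNRESTRICTED_FS = 0x0002
CSR_ALLOW_APPLE_INTERNAL  = 0x0010

def csr_bytes(mask: int) -> str:
    """Pack a CSR flag mask into the little-endian hex string used for csr-active-config."""
    return struct.pack("<I", mask).hex().upper()

def csr_mask(nvram_hex: str) -> int:
    """Reverse direction: '12000000' -> 0x12."""
    return struct.unpack("<I", bytes.fromhex(nvram_hex))[0]

print(csr_bytes(CSR_ALLOW_UNRESTRICTED_FS))                             # 02000000
print(csr_bytes(CSR_ALLOW_UNRESTRICTED_FS | CSR_ALLOW_APPLE_INTERNAL))  # 12000000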

After some digging, I found that I get the black screen after login whenever Filesystem Protections is enabled (i.e. whenever the CSR_ALLOW_UNRESTRICTED_FS bit is not set). Since it happens only for my user, I guess something in my account conflicts with that protection, but I'm not sure what it could be: I tried unloading all the agents and daemons related to my user and removing all the startup and background items, but I still get the issue. I'm not sure whether it has something to do with user file permissions; I ran First Aid in Recovery mode just in case, but the issue remains.

 

If I disable only that protection and leave all the others enabled, like so:

[screenshot: SIP flag configuration with only Filesystem Protections disabled]

 

Then the csr-active-config hex value becomes 02000000 (02 followed by six zeros).

 

After booting, running csrutil status gives me the following:

 

[screenshot: csrutil status output]

 

From what I read, when SIP is enabled, the default state of Apple Internal is disabled, is that correct?

 

Also, does anyone have an idea why I cannot log in to my user (not even via SSH) when the Filesystem Protections flag is enabled?

 

Thanks!

 

PS: Tomorrow I'll test the dGPUs! One is an MSI RX 580 Armor X and the other is a Sapphire Radeon HD 5770 1GB (for testing purposes only).


Um, try removing the csr-active-config entry from config.plist, reboot, reset NVRAM, reboot into Recovery, open a shell and first try csrutil disable; then reboot into Recovery another time, open the shell and run csrutil enable; then reboot and see if it works with your account...

btw:

[screenshot: csrutil status reporting "enabled"]

csrutil gives me only "enabled"... mine has been enabled since, um... when I bought my first Mac and when I built my first Hackintosh LoL ahah

So, I'm no fan of disabling SIP 😛


Yes, if SIP is at its default values you only get the message "System Integrity Protection status: enabled." That means everything is enabled except Apple Internal, which should be disabled by default. Apple Internal is enabled on Apple's internal systems and prevents you from getting updates when it is enabled (based on the sources I read). I ended up using the csr value 12000000, which is the same as running csrutil enable --without fs. I don't know what exactly stops filesystem protection from working with my account; in any case, I'll dig into it another time, or I might migrate to a new user account sometime in the summer, when I guess I'll have some spare time for such a tedious and painful process.

 

I also have SIP enabled everywhere except my Hackintosh. I started with Clover, and if I recall correctly, at one point you had to disable SIP for kexts to be properly injected, or something like that anyway.

 

Thanks @D3v1L


So today a friend of mine came over and I tested the dual-dGPU concept. With two RX 580s I was able to make it work out of the box; the benchmark scores before and after were identical (within margin of error), but I wasn't able to identify which of the two cards was used for the computation.
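
If anyone wants to check this, something like the rough sketch below could help identify which card is busy during a render. The IORegistry property names are an assumption on my part and vary between driver generations and macOS versions, so treat it as a starting point rather than a ready-made tool:

import plistlib
import subprocess

# Dump every IOAccelerator entry (one per GPU) as an XML plist and read the
# utilisation counters from its PerformanceStatistics dictionary.
out = subprocess.run(
    ["ioreg", "-a", "-r", "-d", "1", "-c", "IOAccelerator"],
    capture_output=True, check=True,
).stdout

for gpu in plistlib.loads(out):
    stats = gpu.get("PerformanceStatistics", {})
    # "Device Utilization %" is the key recent AMD drivers expose (assumption).
    print(gpu.get("IOClass", "unknown accelerator"),
          "-> Device Utilization %:", stats.get("Device Utilization %"))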

 

Unfortunately the HD 5770 didn't work. I even tried loading AMD5000Controller.kext, but it seems that card only works up to High Sierra, so I couldn't run two different cards to check which one is used for computation.

 

I might buy an RX 6600 XT and use it in conjunction with my RX 580. I believe the system should normally use the GPU in the first slot (closest to the CPU); I don't think the least powerful card would be used, since they are not in SLI (or CrossFire) mode. But I'll have to verify that once I get my hands on the "new" GPU.

 

So far, the only finding is that I can have two dGPUs, which was my main concern to begin with! :D

 

PS: Since I only need the card for rendering, and all major video-editing apps let you select which GPU handles rendering, I don't mind having the RX 580 plus one more powerful dGPU.

 

PS2: I also accidentally ripped off the mini-USB-to-9-pin adapter of the Broadcom Bluetooth card, so I had to solder the wires onto the broken adapter until the new cable I ordered arrives :D

 

Edited by panosru
  • Like 1

Hey all!

 

Yesterday I received the PowerColor RX 6600 XT 8GB. As promised, I'll post my findings here for anyone who is also interested in a two-GPU setup. When I installed the RX 6600 XT I also changed Booter -> Quirks -> ResizeAppleGpuBars from -1 to 0 and enabled Resize BAR in the BIOS (UEFI -> Quirks -> ResizeGpuBars must always remain -1).
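
For anyone who prefers to script that change, here is a minimal sketch using Python's plistlib; the EFI mount path is an assumption, adjust it to wherever your OC folder is mounted:

import plistlib

CONFIG = "/Volumes/EFI/EFI/OC/config.plist"   # assumed mount point of the EFI partition

with open(CONFIG, "rb") as fp:
    cfg = plistlib.load(fp)

# 0 tells OpenCore to clamp the GPU BARs to a size macOS can handle when
# Resize BAR is enabled in the BIOS; -1 leaves the BARs untouched (quirk off).
cfg["Booter"]["Quirks"]["ResizeAppleGpuBars"] = 0
# UEFI -> Quirks -> ResizeGpuBars stays at -1, as noted above.

with open(CONFIG, "wb") as fp:
    plistlib.dump(cfg, fp)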

 

  1. As I mentioned in my previous post, having two GPUs works perfectly fine. Now I can have 9 monitors in my setup :) 
  2. Our motherboard's specs say that PCIEX8 shares bandwidth with PCIEX16, so if it is populated, both PCIe slots run at x8. System Report states x8 for the GPU in the PCIEX8 slot and x16 for the GPU in the PCIEX16 slot. I'm not sure that is accurate; I tend not to believe everything I see in System Report on a Hackintosh. I put each GPU alone in each slot and ran tests, and both performed exactly the same in either slot. I assume that, since they are both mid-range GPUs rather than high-end, x8 bandwidth is sufficient for them.
  3. Whether I put my RX 580 in PCIEX8 or PCIEX16, the system identifies it as GPU1 and the RX 6600 XT as GPU2; it makes zero difference.
  4. The FCPX Settings -> Playback -> Render/Share GPU option made zero difference. Whether I select the RX 580 or the RX 6600 XT, I get the exact same rendering time. While rendering I watched the GPU fans, and both cards occasionally spun up, as if both were being used from time to time: when one GPU's fans were spinning the other's weren't, and vice versa, and at times both were spinning.
  5. Exporting a small (5m 12s) project in FCPX took 23.42s with only the RX 6600 XT in PCIEX16 (export settings: Format: Video and Audio, Codec: Source - Apple ProRes 422). I got the exact same result with only the RX 580 installed in the PCIEX16 slot, and also with both cards installed.
  6. Running the Heaven benchmark with default settings (Render: OpenGL) with only the RX 6600 XT in the PCIEX16 slot yielded FPS: 35.5, Score: 894, Min FPS: 11.6, Max FPS: 87.0; temps were around 95-98ºC[1]. With only the RX 580 installed in PCIEX16 (Resize BAR disabled in the BIOS, obviously), the results were FPS: 15.3, Score: 384, Min FPS: 6.1, Max FPS: 48.1; temps were around 70-73ºC. Running the benchmark with the RX 6600 XT in PCIEX16 and the RX 580 in PCIEX8 yielded FPS: 11.7, Score: 892, Min FPS: 11.7, Max FPS: 86.1. It seems the Heaven benchmark picks the best GPU in the system (in this case GPU2).
  7. Trying StarCraft 2: when I opened Options -> Graphics I got the message "Graphics settings have been set to their default values for your video card. You should review them at this time." I checked, and they were all set to Ultra. The Display label said "AMD Radeon RX 580", but only my RX 6600 XT's fans were spinning, so I guess the label only shows GPU1 while GPU2 actually renders the game. I didn't play, because I wouldn't have been able to stop, and I have too much work to do.
  8. In Geekbench 6.0, for OpenCL the RX 580 scored 47600 and the RX 6600 XT scored 70870; for Metal, the RX 580 scored 60757 and the RX 6600 XT scored 110832 (RX 580 installed in PCIEX16, RX 6600 XT in PCIEX8). I then swapped the GPU positions (RX 580 in PCIEX8, RX 6600 XT in PCIEX16) and the results were exactly the same (within a 1-2k margin of error).

I'm not sure why I'm not getting better rendering performance; that puzzles me a lot. The performance difference in gaming is 2x, but for some reason rendering doesn't show that advantage, which makes me wonder whether there is a bottleneck somewhere, because it isn't normal to get the exact same rendering time with the RX 580 and the RX 6600 XT when each is used separately.

 

I will continue investigating the matter, and if I have any findings worth posting I'll keep you all informed.

 

Additional info for those who also have a Windows installation: I can boot into Windows perfectly fine without disabling Resize BAR in the BIOS. Both cards are detected, and the RX 6600 XT is chosen by Windows as the primary GPU (the only one shown in dxdiag). Running Time Spy (free edition) in 3DMark yielded a graphics score of 9545 on the RX 6600 XT and 4414 on the RX 580.

 

 

 

[1] I bought the card used for €250. I bet it was used for mining: the outside was OK-ish (it had dust, so I cleaned it), but inside, when I removed the heat sink, the thermal paste was 99% gone and the thermal pads were literally destroyed by heat; they had turned almost to dust... I know the card hadn't been opened, because all the warranty stickers are in place and it has one more year of warranty (well, had... since I've now opened it). I replaced the thermal paste and pads, but I used 1mm pads and therefore applied a bit more thermal paste (Arctic Silver). I'll buy 0.5mm pads; that should drop the temperatures by more than 10ºC because the die will make better contact with the heat sink.


Yes, but that was before testing; I replaced the thermal paste and thermal pads.

 

I have now tested rendering a video in DaVinci Resolve 18.1.3. The video is 12m 38s long. With only the RX 580 checked in the GPU settings it took 15m 46s to render; I then checked the RX 6600 XT and unchecked the RX 580, and the same video took 15m 49s, basically the same. Format: MP4, Codec: H.265, Resolution 1080p at 30 fps. The same video with the H.264 codec took 1m 30s with the RX 580 selected and 1m 31s with the RX 6600 XT selected.

 

I'm really puzzled as to why the card outperforms the RX 580 by a factor of two elsewhere but performs EXACTLY the same in rendering. It's not thermals; I was monitoring temperatures the whole time, and besides, if it were thermals it wouldn't double the performance in gaming.

  • Confused 1

43 minutes ago, panosru said:

Yes, but that was before testing; I replaced the thermal paste and thermal pads.

 

I have now tested rendering a video in DaVinci Resolve 18.1.3. The video is 12m 38s long. With only the RX 580 checked in the GPU settings it took 15m 46s to render; I then checked the RX 6600 XT and unchecked the RX 580, and the same video took 15m 49s, basically the same. Format: MP4, Codec: H.265, Resolution 1080p at 30 fps. The same video with the H.264 codec took 1m 30s with the RX 580 selected and 1m 31s with the RX 6600 XT selected.

 

I'm really puzzled as to why the card outperforms the RX 580 by a factor of two elsewhere but performs EXACTLY the same in rendering. It's not thermals; I was monitoring temperatures the whole time, and besides, if it were thermals it wouldn't double the performance in gaming.

 

Btw, mining is not really a problem... but... it's strange to get the same result with Polaris vs Navi O_O this is really strange! Um... I think it's related to running two different GPUs, but... can you run a test in DaVinci, not by checking/unchecking, but with only the 580 in the slot and then only the 6600 in the slot? Then try adding WhateverGreen and the pikera boot-arg... uhm...


I just installed DaVinci on Windows as well, and I got the exact same behaviour and the exact same render time as in macOS, though there I only tried checking and unchecking the GPU used.

 

I'll first do a test by adding WEG and the pikera boot-arg just to make sure it behaves the same. If the behaviour is the same with WEG and pikera, I'll revert and then proceed by removing each GPU and trying to render with DaVinci Resolve again. I'll post the results once I'm done. But since the behaviour was the same on Windows, I don't think adding WEG will help.

 

UPDATE:

With WEG it took almost 3 minutes instead of 1m 30s, double the time... so WEG is definitely removed! :D I'll now try with each GPU alone in the system and then I'll call it a day...

Edited by panosru

I'll be interested to see how you feel about the 6600 XT. I currently have an RX 570 and have had zero issues. No gaming use at all; I use Creative Suite for design work. I'm not sure it would be worth the money to upgrade. I realize the card is better and way faster, but will I really see a difference in daily use?


So, with only the RX 580 in PCIEX16 I got a 3m 2s render time for the same project; with only the RX 6600 XT in PCIEX16 I got 1m 31s, clearly half the time between the two GPUs. The project was H.264.

 

For H.265 though, again with ONLY the RX 580 in PCIEX16 I got 15m 48s; I then removed it, placed the RX 6600 XT in PCIEX16, and got 15m 18s, which is within acceptable margin of error, so basically we can call it the same time for each card.

 

Having said that, in my eyes it now clearly seems that the dual-GPU setup has nothing to do with it. It smells like a bottleneck, and all I can blame is the CPU. The reason I blame the CPU is that with H.264 I cut the rendering time in half with the RX 6600 XT (as expected), but with H.265 I get the exact same time with either GPU. H.265 is much more CPU-intensive than H.264, so I believe that beyond a certain point the GPU is capped and many computations are handled by the CPU during the rendering process. My i7-9700K, I believe, handles a big chunk of the rendering workload, and that's why I get the same time. It makes sense to me to blame the CPU, since it is the only variable that remained unchanged in the equation: I tried different PCIe slots and different GPUs and got the same results in all my tests. @D3v1L, I really have no clue how you got 1.5m to render in H.265; seriously, maybe your CPU is much more powerful than mine?

 

@pkdesign, after only two days of running all these tests I can't say I've daily-driven it long enough to be qualified to give advice, so take what I say with a grain of salt. If you game, then I guess the upgrade is worth it, but you, like me, are not a gamer, so our ROI on that card is much lower. I also do graphic design and videography (not at a professional level like @D3v1L, so he may have a better view on the matter), and I never had an issue with my RX 580; frankly, I bought the RX 6600 XT mostly because I thought I'd gain a 3-4x performance boost, but I got only 2x (on H.264). Is it worth it? It depends: do you render every day? Do you render big projects that take hours? Then your ROI might be high enough to justify the upgrade cost. Is it your work or your hobby? If the average project you render takes an hour and you cut that time in half, you gain 30 minutes; at the end of the month, is the time saved enough to justify the cost of the upgrade? Frankly, I think you can find a used RX 6600 XT for a good price, and since you work in the creative field the return should be positive, so year over year you'll profit from all the time you save. But from what I can tell, don't expect drastic changes from the get-go: if someone removed my RX 580 and installed the RX 6600 XT, it would take me a while to notice. If you already have a very fast drive, a good CPU, RAM, etc., then look at the GPU (others might not agree about the order, but that's my opinion).

Edited by panosru
  • Thanks 1

I'll try the default H.264/5 "export master file" preset. If I use media management for the export, the time is longer, but not as long as yours...

 

 

Edited by D3v1L

Wait... I think I found the culprit... @panosru, please tell me you don't use the Main10 profile for transcoding H.265... (I know, you couldn't have known the "trick"; my fault for not advising you. Main10 in DaVinci uses about 80% CPU and 20% GPU; it's an issue with the 4:2:2 codec, and there's no way to switch to 4:2:0, so we need to use Main.)
Please run another test when you have time:
RX 580 vs RX 6600 vs both, and in media management set the "Main" profile instead of "Main10"...

[screenshot: DaVinci Resolve export profile setting]

Edited by D3v1L
  • Thanks 1
