
EFI Variable Store on Aptio V (Haswell-E and up)


Solved by vit9696
198 posts in this topic


Is my issue related to what is discussed here?

I'm experiencing RAM errors in conjunction with USB 3 devices.

Example: connecting a card reader (I tested different brands) without an SD card inserted leads to RAM errors in Rember. Inserting a card, or disconnecting the reader, makes the RAM problems go away. The failure looks like this:

 

Running tests on full 20857MB region...

  Stuck Address       : setting  1 of 16, testing  1 of 16
FAILURE! Data mismatch at local address 0x00000005b4495d30
Actual Data: 0x0000000500000000
 
Board: Asus Prime Deluxe X299
OS: 10.13.1

Being one of the people left without NVRAM under macOS after the Kaby Lake firmware update, I am trying to understand what is being discussed here. Do you guys have a hypothesis about what the matter is? Trying to summarize: you think the data buffer between DXE and SMM is overwritten or relocated by boot.efi, so that it no longer functions; is that correct?


No, that was the old issue. Nothing is known about the new one, and I have no hardware to tinker with.

I have the ASUS Maximus VII Impact (Z97). Version 0217 of the UEFI firmware allows writing to NVRAM; 0412 and later no longer do.

I extracted the NvramSmi module from version 0217 and replaced the NvramSmi module in UEFI 0412 with the older version from 0217, and saving to NVRAM started working again. In this way I have been able to successfully modify many firmware images, across different versions and different motherboards.

I tried to disassemble these modules and compare what changed between version 0217 and 0412, but unfortunately I'm not a programmer and the assembly is over my head. Maybe someone could disassemble the modules, compare them, and spot something relevant.

I attach both versions of the NvramSmi module to this post.
0217 - NVRAM saving OK 0217-body.bin.zip
0412 - NVRAM broken 0412-body.bin.zip
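For illustration only, the module swap described above amounts to splicing an older module body into a newer image at the module's offset. Here is a toy Python sketch; the offsets and sizes are hypothetical, and real tools such as UEFITool also fix checksums and compressed-volume sizes, which this skips.

```python
# A toy sketch of replacing a firmware module body in place. Real firmware
# volumes are padded, so the replacement must preserve the body's size.

def replace_module_body(image: bytes, offset: int, old_len: int, new_body: bytes) -> bytes:
    if len(new_body) != old_len:
        raise ValueError("size must be preserved inside a padded firmware volume")
    return image[:offset] + new_body + image[offset + old_len:]

firmware = bytes(range(16))                 # pretend 16-byte firmware image
patched = replace_module_body(firmware, 8, 4, b"\xAA\xBB\xCC\xDD")
assert patched[8:12] == b"\xAA\xBB\xCC\xDD"
assert patched[:8] == firmware[:8] and patched[12:] == firmware[12:]
```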


Being one of the people left without NVRAM under macOS after the Kaby Lake firmware update, I am trying to understand what is being discussed here. Do you guys have a hypothesis about what the matter is? Trying to summarize: you think the data buffer between DXE and SMM is overwritten or relocated by boot.efi, so that it no longer functions; is that correct?

This thread is like a brainstorming session on "how to make NVRAM work natively".

There are no answers yet, only questions and speculation.

  • 1 month later...

I have the ASUS Maximus VII Impact (Z97). Version 0217 of the UEFI firmware allows writing to NVRAM; 0412 and later no longer do.

 

I extracted the NvramSmi module from version 0217 and replaced the NvramSmi module in UEFI 0412 with the older version from 0217, and saving to NVRAM started working again. In this way I have been able to successfully modify many firmware images, across different versions and different motherboards.

 

I tried to disassemble these modules and compare what changed between version 0217 and 0412, but unfortunately I'm not a programmer and the assembly is over my head. Maybe someone could disassemble the modules, compare them, and spot something relevant.

 

I attach to the post both versions of nvramsmi modules.

0217 - NVRAM saving OK 0217-body.bin.zip

0412 - NVRAM broken 0412-body.bin.zip

 

Checked back after a long break and I found the issue with these: in 0412 there is a whitelist check so that only a handful of variables in the EfiGlobalVariable scope may be written. That doesn't seem to be the same issue as on the other Aptio V boards, though. ASUS is using an older NVRAM module design; I suppose they backported security changes and screwed up.
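As an illustration of what such a whitelist might look like, here is a sketch; this is a guess, not a disassembly of the module. The two GUIDs are the real well-known values, while the allowed-name set and the logic are invented.

```python
# Hypothetical sketch of a SetVariable whitelist like the one suspected in
# the 0412 NvramSmi module: only a few names under the EFI global-variable
# GUID get through; everything else (e.g. Apple's NVRAM scope) is rejected.

EFI_GLOBAL_VARIABLE_GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"
APPLE_NVRAM_GUID = "7c436110-ab2a-4bbb-a880-fe41995c9f82"   # rejected by such a check
ALLOWED_GLOBAL_VARS = {"BootOrder", "BootNext", "Boot0000", "Timeout"}

def set_variable_allowed(name: str, vendor_guid: str) -> bool:
    if vendor_guid.lower() != EFI_GLOBAL_VARIABLE_GUID:
        return False                       # non-global scopes rejected outright
    return name in ALLOWED_GLOBAL_VARS     # only whitelisted names within it

assert set_variable_allowed("BootOrder", EFI_GLOBAL_VARIABLE_GUID)
assert not set_variable_allowed("nvda_drv", APPLE_NVRAM_GUID)
```

This would explain why macOS, which writes variables under its own vendor GUID, loses NVRAM on 0412 and later while basic boot settings still persist.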


Been playing again and, well... the issue was found, and it simultaneously is a significant security issue.

The NvramSmm driver, running in SMM, calls the AMI Flash protocol, which is a DXE runtime protocol. Because of the need for execute permissions, RT_code segments were not protected from relocation before, and hence the protocol was "moved out of NvramSmm's reach". And yes, this means you can hack its functions to be called into SMM from outside, which is scandalous.

Thanks to vit9696 for excellent Russian-quality kernel patch {censored}ery and to ReddestDream for testing... a proper fix is yet to come, as NvramDxe writes to global variables at runtime, which causes a GPF without a patched kernel.

EDIT: Test results have been interpreted based on a wrong premise.
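Whatever the final interpretation noted in the EDIT, the mechanism described here (an SMM driver caching the physical address of a DXE runtime protocol that later gets relocated) can be sketched as a toy model. A Python dict stands in for physical memory, and every address and name is invented for illustration.

```python
# Toy model of the stale-pointer failure: NvramSmm caches the physical
# address of the AMI Flash protocol at boot; boot.efi later relocates the
# RT_code region holding it, so the cached address points at nothing.

phys_mem = {}

def install_protocol(addr):
    phys_mem[addr] = "AmiFlashProtocol"    # DXE runtime protocol at a physical address
    return addr                            # NvramSmm caches this address at boot

def relocate_rt_code(old, new):
    phys_mem[new] = phys_mem.pop(old)      # boot.efi moves the RT_code region,
                                           # but SMM never learns the new address

cached = install_protocol(0x8000)
relocate_rt_code(0x8000, 0x9000)
assert phys_mem.get(cached) is None        # SMM's call through 0x8000 now faults
assert phys_mem[0x9000] == "AmiFlashProtocol"
```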



 

That means you have something working that gives a chance of NVRAM on Skylake+ platforms under macOS? Could you share more technical details about your solution (if you have any)? Maybe something to try on a Dell with mobile Skylake?


 

Yes, a fix will be coming to AptioFix after figuring out an issue with RT_code relocation. A fix I expect to work will be tested soon... if it is successful, I will share it here for further validation.


 

Hmmmmm... could it be that it was generic talk about RT_code relocation and rather random stuff, absolutely unrelated to an SMM driver using a DXE runtime protocol? Most of your posts were false or unrelated. SMM drivers and their data are not reported in the memory map, kernel relocation is not the issue, the SystemTable is also unrelated, and protecting RT_code will not work because the drivers write to global variables (thanks, vit9696). Nothing you said is really related in any way to this issue.

EDIT: Test results have been interpreted based on a wrong premise.

 

EDIT: Of course RT_code being relocated was on topic, but... well... obvious from the AptioFix code and known for the past five years? Yes, I said that I let a user "test with RT_code relocated", but if you check the second sentence, you will see it makes no sense (I will not edit it, on purpose...); I meant "not relocated", of course (which obviously would not work). And in the first post, I simply forgot to mention it because I'm not a walking AptioFix dictionary.

________

 

Here's a workaround I came up with quickly, totally untested both for general sanity and for fixing the problem. Test at will and prepare for issues booting.

 

EDIT: File deleted due to a bug...


I was literally talking about this problem. It is one of three bugs I identified in AptioFix.
 

The NVRAM issue has been known since UEFI booting of macOS became possible; it is an SMI locking problem.


There are a few runtime regions that are moved, mainly the system table and that region. I think relocating this without fixing the pointers inside the region is what causes the issue, since SMM runs in physical mode.

 

You know, I never noticed that this line only converts EfiRuntimeServicesData, and not EfiRuntimeServicesCode, to EfiMemoryMappedIO. Shouldn't it be protecting all of the runtime?
 
So I wonder why not also protect the code regions? Wouldn't the SMM driver code be located in a code region?
 
So I think I had a revelation: the EfiRuntimeServicesCode regions are relocated into the kernel, but the EfiRuntimeServicesData regions are not. If the kernel is then relocated, I don't think any of those regions are fixed up at all, and I believe that might be the bug, since there must be physical addresses present, not virtual, as SMM runs in physical mode.
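The memory-map conversion being discussed can be sketched as a toy model. The numeric values follow the EFI_MEMORY_TYPE enumeration; a map entry is reduced to just its type field for brevity, and the `include_code` switch represents the open question of whether RT_code should get the same treatment as RT_data.

```python
# Toy model of the AptioFix trick: retype runtime regions as MMIO so the
# bootloader leaves them at their physical addresses instead of relocating them.

EfiRuntimeServicesCode = 5
EfiRuntimeServicesData = 6
EfiConventionalMemory = 7
EfiMemoryMappedIO = 11

def protect_runtime(types, include_code=False):
    protect = {EfiRuntimeServicesData}
    if include_code:                 # the open question raised above
        protect.add(EfiRuntimeServicesCode)
    return [EfiMemoryMappedIO if t in protect else t for t in types]

memmap = [EfiRuntimeServicesCode, EfiRuntimeServicesData, EfiConventionalMemory]
assert protect_runtime(memmap) == [5, 11, 7]
assert protect_runtime(memmap, include_code=True) == [11, 11, 7]
```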

 

Relocating the RT_code in any way causes physical addresses to no longer be valid, which is kind of a problem when you need physical addresses for SMM.

 

EDIT: Also, which part of my posts was false? They seem pretty correct to me.

I was literally talking about this problem. It is one of three bugs I identified in AptioFix.

Relocating the RT_code in any way causes physical addresses to no longer be valid, which is kind of a problem when you need physical addresses for SMM.

 

That RT_code is not protected is not a bug in AptioFix, but indeed there is one...

Also, I don't see you talking anywhere near the issue in any way. You noted that RT_code is relocated, which has been known for years... nice?

The things you quoted are all unrelated or false...

 

"the NVRAM issue has been known since UEFI booting macOS has been possible, it is an SMI locking problem."

That was the Aptio IV problem, and it is 100% unrelated to this one; or why do you think the current fix doesn't work?

EDIT: Noticed I misread it, you basically said "it somehow has something to do with SMM"... cool.

 

"I think the relocation of this and not fixing the pointers inside the region is what causes the issue"

No.

 

"Shouldn't it be protecting all the runtime?"

No, because that breaks global var write.

 

"Wouldn't the SMM driver code be located in a code region?"

No, SMRAM is not tracked in UEFI memory map.

 

"If the kernel is then relocated I don't think any of those regions are fixed at all"

Unrelated, but worth a thought for AptioFix1.

 

Now please get on topic, or PM/IRC for any more "claims".


That RT_code is not protected is not a bug in AptioFix, but indeed there is one...

Also, I don't see you talking anywhere near the issue in any way. You noted that RT_code is relocated, which was known for years... nice?

The things you quoted are all unrelated or false...

 

That is a super contradictory statement. RT_code needs either protection from relocation or protection from the effects of being relocated...

 

That was the Aptio IV problem and is 100% unrelated to this one, or why do you think the current fix doesn't work?

EDIT: Noticed I misread it, you basically said "it somehow has something to do with SMM"... cool.

 

I was referring to the fact that the SMI cannot work after it is locked, because SMM is broken. You seem to have realized that with your edit.

 

"I think the relocation of this and not fixing the pointers inside the region is what causes the issue"

No.

 

If you move the SMM driver code then it will not function, as it contains physical addresses, because it runs in physical mode. So what do you mean, no? Yes. 100%.

 

 

"Shouldn't it be protecting all the runtime?"

No, because that breaks global var write.

 

How would protecting it break global variable writes? That's just broken. Protecting it is what gets it working, just like the runtime data memory regions...

 

 

"Wouldn't the SMM driver code be located in a code region?"

No, SMRAM is not tracked in UEFI memory map.

 

The driver that interacts with SMRAM is! And it uses physical addresses to interact through the SMI... Come on, man.

 

 

"If the kernel is then relocated I don't think any of those regions are fixed at all"

Unrelated, but worth a thought for AptioFIx1.

 

That was a specific tangent on AptioFix vs AptioFix2 and why the old version only needed the data region protected.

 

 

Now please get on topic, or PM/IRC for any more "claims".

 

Not sure how I'm not on topic, when you are basically saying you solved this by reiterating something I said two months ago without even acknowledging it, and then trying to act like I am making false claims when I call you out on it. Don't be a douche, man.

 

EDIT: And if relocation weren't the issue, then why don't these firmwares have broken NVRAM under Windows or Linux, which don't relocate these regions? Surely the calls into DXE you speak of happen there too, right? So what is the difference between how Windows/Linux handle these regions and how macOS does? Probably the relocation...

That is a super contradictory statement. RT_code needs either protection from relocation or protection from the effects of being relocated...

 

The second sounds a little better...

 

Quoting is broken for me, so I'll use inline quotes from now on.

 

"If you move the SMM driver code then it will not function as it contains physical addresses because it uses physical mode. So what do you mean, no? Yes. 100%."

Correct. The problem is that the SMM driver is not moved, as it is within the locked SMRAM, which is not even indexed in the memory map.

 

"How would protecting it break global write? That's just broken. Protecting is getting it to work, just like the runtime data memory regions...."

Global variables reside in which memory? And this memory has which attributes? Spoiler: RT_code, which has read and execute permissions, but not write.

 

"The driver to interact with SMRAM is! Which uses physical addresses to interact through SMI.... Come on man."

No, it does not; drivers *interacting* with SMM drivers are DXE drivers, which have no access to the physical address space. And if you mean passing the physical address of the communication buffer, that is still not the issue.

 

"That was a specific tangent on AptioFix vs AptioFix2 and why the old version only needed the data region protected."

Both need equal protection of memory; it's just that AptioFix *may* need address fixups, which I did not verify... probably not.

 

"Not sure how I'm not on topic. When you basically are saying you solved this by reiterating something I said two months ago and not even acknowledging it, then trying to act like I am making false claims when I call you out on it. Don't be a douche, man."

You are the douche, for claiming the solution to an issue as yours while you obviously don't even understand it... Please understand an issue before you claim its solution for yourself (i.e. before spamming a lot of random and false stuff, one piece of which happens to have at least a bit to do with the investigation of this issue, despite not being your discovery at all).

 

Back in 2013 I had a terrible lot of respect for you and your knowledge, but now that I have gained experience over the years, I see you convinced of half-truths and rubbish, in a fashion where it makes no sense to discuss it anymore. Please just leave it.

"If you move the SMM driver code then it will not function as it contains physical addresses because it uses physical mode. So what do you mean, no? Yes. 100%."

Correct. The problem is that the SMM driver is not moved as it is within the locked SMRAM, which is not even indexed in the Memory Map.

 

Ok. So, then you are agreeing with me but yet not....

 

 

"How would protecting it break global write? That's just broken. Protecting is getting it to work, just like the runtime data memory regions...."

Global variables reside in which memory? And this memory has which attributes? Spoiler: RT_code, which has read and execute, but not write.

 

WHAT? Then no variables could ever be written if they were stored in memory that is read/execute but not write. That's the most ridiculous thing you could ever have said. There would be no working NVRAM...

 

 

"The driver to interact with SMRAM is! Which uses physical addresses to interact through SMI.... Come on man."

No, it does not, drivers *interacting* with SMM drivers are DXE drivers, which have no access to the physical address space. And if you mean it passing the physical address of the communication buffer, that is still not the issue.

 

There is an SMM DXE driver that is responsible for going into physical mode and using an SMI to enter SMM. That code is in the driver, marked as RT_code, and contains the vectors needed for the SMI to function. Tell me why other OSes work but not macOS, when it is the only one relocating runtime regions?

 

"That was a specific tangent on AptioFix vs AptioFix2 and why the old version only needed the data region protected."

Both need equal protection of memory, just that AptioFix *may* need address fixups, which I did not verify... probably not.

 

Yes, but there are firmwares where AptioFix causes non-working NVRAM while with AptioFix2 it works.

 

 

"Not sure how I'm not on topic. When you basically are saying you solved this by reiterating something I said two months ago and not even acknowledging it, then trying to act like I am making false claims when I call you out on it. Don't be a douche, man."

You are the douche for claiming the solution to an issue for you, while you obviously don't even understand it... Please understand issues before you claim their solution (i.e. spamming a lot of random and false stuff, one thing which happens to have at least a bit to do with the investigation of this issue, despite not being a discovery or whatsoever by you) for yourself.

 

I literally helped write the code we are talking about. You can think whatever you like but I'm not the one who does not know what they are talking about.

 

 

Back in 2013 I had a terrible lot of respect for you and your knowledge, but now that I gained experience over the years, I'm seeing you convinced on half-truths and rubbish in a fashion where it makes no sense at all to discuss it anymore. Please just leave it.

 

I think you've confused me with yourself. You seem to be confused easily though so understandable.  :thumbsup_anim:

I literally got back out of bed to respond to this mess, as you are obviously not interested in discussing things privately, where this belongs.

 

"Ok. So, then you are agreeing with me but yet not...."

You are speaking about a hypothetical scenario. You would be right if it actually happened, which it can't.

 

"WHAT? Then no variables could ever be written ever if they were stored in read/execute but not write. That's the most ridiculous thing you could have ever said. There would be no working NVRAM....."

I am talking about variables in the code... those from C. How the heck would RT_code being executable/writable or not be related to NVRAM variables?!

 

"There is an SMM DXE driver that is responsible for going into physical mode and using an SMI to enter SMM. That code is in the driver, marked as RT_Code, and contains the vectors needed for the SMI to function. Tell me why other OSes work but not macOS when it is the only one relocating runtime regions?"

Oh God... the SMI is what switches into physical mode; that is an attribute of SMM. The only physical address going from DXE to SMM *might* be the CommBuffer, but I'm quite certain this is a one-time thing on Aptio IV, while it changed to a pass-on-every-SMI thing for Aptio V. Either way, it is always RT_data and its address is always correct as of today's AptioFix. No, not a single address in the DXE module is {censored}ed. One address in the SMM module is {censored}ed, but that has nothing to do with what you are saying.
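The CommBuffer handshake described here can be sketched as a toy model; every name and address is invented for illustration, and a dict stands in for RAM.

```python
# Toy model of the DXE-to-SMM variable write: DXE fills a fixed communication
# buffer and raises an SMI, passing the buffer's physical address; the SMM
# handler reads the request from that address.

COMM_BUFFER_PHYS = 0x1000
phys_mem = {}

def dxe_set_variable(name, value):
    phys_mem[COMM_BUFFER_PHYS] = (name, value)  # fill the CommBuffer (RT_data)
    return smi_handler(COMM_BUFFER_PHYS)        # trigger the SMI with its address

def smi_handler(buf_addr):
    # SMM runs in physical mode, so buf_addr must still be the real physical
    # address after the bootloader's relocations, or this read fails.
    name, value = phys_mem[buf_addr]
    return f"wrote {name}={value}"

assert dxe_set_variable("Timeout", 5) == "wrote Timeout=5"
```

As long as the buffer stays RT_data and RT_data is kept at its physical address, this path survives boot.efi, which matches the claim above.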

 

"I literally helped write the code we are talking about. You can think whatever you like but I'm not the one who does not know what they are talking about."

Will you understand that the Aptio IV issue is not the Aptio V issue, pretty please? Nobody is talking about the "old NVRAM issue"; its fix does not fix this, and its problem is not this problem, only something roughly similar.

 

"I think you've confused me with yourself. You seem to be confused easily though so understandable."

Being easily confused is literally a sign of self-reflection. I doubt myself more than anyone else, and that should tell you a lot about the situations where I state things with absolute certainty.

 

Regarding your absolute certainty about the code, since you even helped write it, may I remind you of this discussion, where I spent a handful of posts desperately trying to get you to understand that relocation is a thing: http://www.insanelymac.com/forum/topic/306156-clover-bugissue-report-and-patch/page-103?do=findComment&comment=2433423

 

Now think about it or don't, but stop spamming this thread.

I literally got back out of bed to respond to this mess, as you are obviously not interested in discussing things privately, where this belongs.

 

I think this is exactly where it belongs. You want it in private for the obvious reason.

 

You are speaking about a hypothetical scenario. You would be right if it actually happened, which it can't.

 

Really? So you have a working solution for everyone who has non-working NVRAM? Because I'm pretty sure that I do. What's your method? The method I've been working on since I brought this up before, and have been rewriting as v3, is this: I turn all RT_data and RT_code regions into MMIO regions, then I make copies of the RT_code regions, and those copies are what gets relocated. The original is still there in physical memory, and the kernel is happy because it got the relocated virtual code. But this is hypothetical, right???
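The copy-based scheme described above can be sketched as a toy model; all addresses are invented and the semantics are heavily simplified, with a dict standing in for physical memory.

```python
# Toy model of the copy-and-relocate idea: the original RT_code region stays
# put for SMM's physical-mode accesses, while the kernel is handed a
# relocated copy for its own virtual mapping.

phys_mem = {0x8000: "rt_code"}         # original region, reachable via SMI

def hand_copy_to_kernel(src, dst):
    phys_mem[dst] = phys_mem[src]      # copy the region; the copy gets relocated
    return dst                         # the kernel sees only the copy's address

kernel_view = hand_copy_to_kernel(0x8000, 0x2000)
assert phys_mem[0x8000] == "rt_code"   # SMM path still intact at the old address
assert phys_mem[kernel_view] == phys_mem[0x8000]
```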

 

EDIT: That's such a weird statement; I questioned what you meant after I replied. Do you think I mean moving what's inside SMRAM? Because I'm talking about the SMM runtime drivers that are not in SMRAM but interact with SMM. That's where the problem lies, when those modules are moved. There are physical addresses inside that pointed to where the region was previously (if you look at the leaked code, it's a structure, which means it's accessed by dereference like a pointer), and those point to buffers that were allocated for SMM. These are no longer valid, so if they are accessed a GPF exception occurs (unless there is now another region there, in which case probably some other exception happens).

EDIT2: Also, the protocol you refer to as causing the problems, the flash protocol, is also used on Aptio IV during NVRAM SMM communication.

EDIT3: The NVRAM driver is also only partially in SMM. The driver starts in DXE and installs only the variable store into SMM. I see several variables that could be corrupted by relocation, depending on how they are used later.

 

"WHAT? Then no variables could ever be written ever if they were stored in read/execute but not write. That's the most ridiculous thing you could have ever said. There would be no working NVRAM....."

I am talking about variables in the code... those from C. How the heck would RT_code being executable/writable or not be related to NVRAM variables?!

 

This sentence just doesn't even make sense. I don't know if you miswrote that or what, but... what? You are the one who said that, and then I was asking how any variables could ever be written if the memory is write-protected. I don't even know what you mean by "those from C". You mean the physical addresses I was referring to, which are now no longer valid because they were moved? Because I think that's what you mean...

 

"There is an SMM DXE driver that is responsible for going into physical mode and using an SMI to enter SMM. That code is in the driver, marked as RT_Code, and contains the vectors needed for the SMI to function. Tell me why other OSes work but not macOS when it is the only one relocating runtime regions?"

Oh God... the SMI is what switches into physical mode, that is an attribute of SMM. The only physical address going from the DXE to the SMM *MIGHT* be the CommBuffer, but I'm quite certain this is a one-time thing on Aptio IV, while it changed to a pass-on-every-SMI thing for Aptio V. Either way, it is always RT_data and its address is always correct as of today's AptioFix. No, not a single address in the DXE module is {censored}ed. One address in the SMM module is {censored}ed, but that has nothing to do with what you are saying.

 

You must be in real mode in order to invoke an SMI (not 100% sure on that, actually). And in order to enter it you need to write to stuff that was set up and is probably no longer there because it was moved. I was just saying that only one module interacts with the SMI, not that that is the problem. The problem is the relocation of a region that deals with SMM, which is a lot of RT_code. I have multiple computers, so I don't know why you think I don't know what different firmwares are like...

 

"I literally helped write the code we are talking about. You can think whatever you like but I'm not the one who does not know what they are talking about."

Will you understand that the Aptio IV issue is not the Aptio V issue, pretty please? Nobody is talking about the "old NVRAM issue", its fix does not fix this and its problem is not this problem, but only something roughly similiar.

 

At no point here was I referring to fixing it in the same way, or saying they were the same problem. You said I did not understand what is happening, except I do understand. I just know that the problem is moving runtime modules after they have already been moved into SMM. And I said that two months ago, and you agree with it, yet you refuse to acknowledge that I said it, even though the evidence is pretty clear that I did.

 

 

"I think you've confused me with yourself. You seem to be confused easily though so understandable."

When you are confused easily, that is literally a sign of self-reflection. I'm doubting me more than anyone else and that should tell you a lot about situations where I tell things with absolute certainty.

 

When you are confused it is a sign of not being able to comprehend; it is no measure of self-reflection. I don't see any certainty here from you; you've contradicted yourself multiple times and even agreed that I am indeed correct that the problem is the relocation of runtime modules that have been moved into SMM.

 

 

Regarding your absolute certainty about the code, since you even helped write it, may I remind you of this discussion, where I spent a handful of posts desperately trying to get you to understand that relocation is a thing: http://www.insanelymac.com/forum/topic/306156-clover-bugissue-report-and-patch/page-103?do=findComment&comment=2433423

 

Ummmmmm... yeah, not seeing what you mean there, besides me just not thinking for a second and then immediately correcting myself...

 

Now think about it or don't, but stop spamming this thread.

 

Nah, I'll do neither, or both, but I will continue to post as I see fit. There's only one reason you don't want a conversation about this: you know damn well that I'm right.

Guys, there is a better way to decide who is right and who is not: just post your code here, and people like me will compile and test your suggestions. The best-working and least buggy solution will be the winner :-D

P.S. Sure, I'm joking a little bit. But for now the discussion looks more like a war of words without any result.


Thanks to everyone trying to find a solution to the NVRAM problem.

I think there is not only one solution; it may be fixable in more than one way.

If someone has tested a fix successfully, it may be one of the solutions to the problem; most hackintoshes have lost native NVRAM support, so we can test it.

Thanks again for everyone's effort on this problem.

Sent from my iPhone using Tapatalk

I'm not worried about being right, only that the correct and best solution is found. I'll post my code when it's ready, as I said I was going to. The problem I have here is that DF knows damn well I told him about this exact issue almost two months ago. Then he comes on here with no code or proof of concept, basically just saying what I said and claiming he discovered the issue, then says I have no proof; and when proof is given, he acts like a child. I'm here for the conversation about solving the problem, as I always have been.

For the almost seven years I've been part of coding Clover, if I wanted accolades I certainly wouldn't do this, because I very rarely get thanked. And believe me, without me Clover would not have a large number of its features: one configuration file instead of three, being able to use plists fully, being able to do a bunch of cool stuff with themes (basically the whole theme.plist mechanism), custom entries, not having to use five million boot arguments to do stuff, a semi-working GUI (although there's no saving that thing, but I triiiieeeed), helping dmazar with AptioFix, and that's just off the top of my head. I also maintain and administrate the project, tickets, and repositories on SourceForge. I think DF's approval is not needed, nor anyone's but my own, for that matter...



All of you are good developers.

The NVRAM problem is a common one which has existed for too long and still can't be fixed.

I am also trying to solve this maddening problem. I have a laptop where native NVRAM works, and I found it may not be related to kernel relocation, because whether I use AptioFix, AptioFixV2, or AptioFixFree2000, and with a slide value set, NVRAM works with no problem.

I have also heard that on some motherboards you can set "RTC RAM LOCK" to false to let NVRAM work, but my XPS doesn't have this option.

Many thanks for your work on Clover; you help make it greater and greater.

As for this NVRAM problem you are discussing with DF: I think both of you want to solve it, so I think we can talk it over calmly.

DF mentioned vit9696, and you could also chat with vit9696 for more details, if it really is as DF says.

Finally, I'm really glad to see more developers working on hackintoshes and making them more useful and easier!

Thanks to you and DF.

Sent from my iPhone using Tapatalk

Don't take it as if DF and I are fighting or adversarial. lol, we are not. We just like to argue with each other, HAHA!

 

EDIT: At least I'm not. I usually just come across as a d i c k.
