ESXi 5 Mac OS X Unlocker


Donk

It could be the bootbank partition running out of space. Please can you uninstall the unlocker and try the steps here http://www.insanelymac.com/forum/topic/267296-esxi-5-mac-os-x-unlocker/?p=1953036
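For anyone hitting the same thing, a quick way to check whether bootbank or the in-memory filesystem has filled up (assuming a standard ESXi 5.x shell; the exact output layout varies by build):

# vfat partitions, including the ones backing /bootbank and /altbootbank
df -h
# visorfs ramdisk and tardisk usage
vdf -h

Both should show a reasonable amount of free space before the unlocker is installed.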

This seems to have fixed the issues we were having on our old Dell PowerEdge R610 (random disconnects between the vSphere client and ESXi 5.5 host, console not functioning properly).  Thanks!


I am working with MSoK on another way to run the unlocker which does not impact the visorfs and bootbank. I will post here once I am satisfied with my method. Can I ask whether you have a permanent scratch partition available on your ESXi systems? Also, if possible, could others let me know how they feel about using the scratch partition to store files?
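For anyone checking their own host, a persistent scratch partition is easy to spot from the shell (a minimal check, assuming stock ESXi tooling):

# if /scratch resolves to a path under /tmp, the host has no persistent scratch partition
ls -ld /scratch
# on most builds the configured location is also visible as an advanced option
vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation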


Donk,

 

As I generally install ESXi to SD, I have to create a scratch location on persistent storage, so that would be a good location for the unlocker-related files and backups, if that is what you were thinking.


  • 2 weeks later...

Hi,

 

 I am new to this and Macs in general. A co-worker mentioned this and I became intrigued. 

Running ESXi 5 U2

Dell R520

Intel Xeon E5-2430

 

Here is my issue:

/vmfs/volumes/516fe690-969fc25b-e2b5-90b11c13e4e6/unlock-all-v120/unlock-all-v120 # cd esxi
/vmfs/volumes/516fe690-969fc25b-e2b5-90b11c13e4e6/unlock-all-v120/unlock-all-v120/esxi # chmod +x *.*
/vmfs/volumes/516fe690-969fc25b-e2b5-90b11c13e4e6/unlock-all-v120/unlock-all-v120/esxi # ./install.sh
VMware ESXi 5.x Unlocker 1.2.0
===============================
Copyright: Dave Parsons 2011-13
Patching files...
Segmentation fault
 
Any thoughts on this would be greatly appreciated
 
Thank you in advance
DR

If you're having Segmentation Faults on 5.0.0 (U2 or U3), use version 1.1.1 of the unlocker instead of 1.2.0.

You can find it on the right sidebar of the download page for 1.2.0.

 

1.2.0's Unlocker.ESXi isn't working properly with 5.0 for some reason.

Seeing this with:

e338a9b0201256e1fb1d28346ef42cf9  unlock-all-v111/esxi/Unlocker.ESXi
ab487c919219238cc9e1d46bea266f4e  unlock-all-v120/esxi/Unlocker.ESXi
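To confirm which build you actually have on the host before running it (assuming md5sum is available in the ESXi shell, which it normally is):

md5sum unlock-all-v111/esxi/Unlocker.ESXi unlock-all-v120/esxi/Unlocker.ESXi

and compare the output against the sums above.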

Well scratch that, all I get when trying to install 10.8 is this:

macForbidden.png

 

Does this mean that the installer is checking for the SMC and failing?
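The later reply about the grey screen / beach ball covers this: without a patched vmx serving the SMC keys, OS X will not boot on non-Apple hardware. For reference, the SMC-related guest settings live in the VM's .vmx file and, for a 10.8 guest, look roughly like the lines below (illustrative values only):

guestOS = "darwin12-64"
smc.present = "TRUE"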


DR,

 

A couple of things: are you able to install ESXi 5.5? If so, try it with the latest unlocker (1.2.0); it should work fine. Also, are you running the customised Dell version of ESXi? It might be worth trying the vanilla version, as there is not much space available on the ESXi OS partition for the unlocker files with all the additional Dell drivers. The same is true of the HP version.
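A quick way to confirm which image the host was installed from (assuming esxcli is available):

esxcli software profile get

The profile name shows whether it is the vanilla VMware image or a Dell/HP customised build.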


Thanks for all the hard work on this tool Donk. Has anyone been able to make the unlocker work on an ESXi install on a USB drive yet? I get the following output when I try to run unlocker 1.2 on my ESXi 5.5 install:

VMware ESXi 5.x Unlocker 1.2.0
===============================
Copyright: Dave Parsons 2011-13
Deleting darwin.tgz from boot.cfg...
Acquiring lock /tmp/bootbank.lck
/vmfs/volumes/52a2baed-97cbb59e-11ee-902b346a7164/unlock-all-v120/esxi # ./install.sh
VMware ESXi 5.x Unlocker 1.2.0
===============================
Copyright: Dave Parsons 2011-13
Patching files...
Patching bin/vmx
File mapped @0x3ffcec6c010 length 22929216
Found OSK0 @ 0x3ffcfbc4265
Found OSK1 @ 0x3ffcfbc429d
Found SRVR @ 0x3ffcfc541f6
Patching bin/vmx-debug
File mapped @0x3ffcec6c010 length 27924264
Found OSK0 @ 0x3ffcfd3e8e5
Found OSK1 @ 0x3ffcfd3e91d
Found SRVR @ 0x3ffcfdd01f6
Patching bin/vmx-stats
File mapped @0x32002740 length 25723136
Found OSK0 @ 0x32f7cb15
Found OSK1 @ 0x32f7cb4d
Found SRVR @ 0x3300d926
Patching vmwarebase is not supported on this platform
Setting permissions...
Creating darwin.tgz...
bin/
bin/vmx
bin/vmx-debug
bin/vmx-stats
addr: 0, sz: 16065572, flags: 5
addr: 0xf54e3c, sz: 3416524, flags: 6
bin/vmx: textPgs: 3922, fixUpPgs: 0
Aligning executable bin/vmx
addr: 0, sz: 17618636, flags: 5
addr: 0x10ce47c, sz: 3529484, flags: 6
bin/vmx-debug: textPgs: 4301, fixUpPgs: 0
Aligning executable bin/vmx-debug
addr: 0, sz: 16203932, flags: 5
addr: 0xf7643c, sz: 3653996, flags: 6
bin/vmx-stats: textPgs: 3956, fixUpPgs: 0
Aligning executable bin/vmx-stats
Adding darwin.tgz to boot.cfg...
Acquiring lock /tmp/bootbank.lck
Copying darwin.vgz to /bootbank/darwin.vgz
cp: can't create '/bootbank/darwin.vgz': Read-only file system
Copying darwin.vgz to /bootbank/darwin.vgz failed: 1

 

I'm assuming the read-only error is due to ESXi being installed on a USB drive. Unfortunately there's no mount binary that comes with ESXi, so I can't remount the volume as rw. Does anyone know any other way to achieve a writeable mount on my /bootbank dir? I've also tried stopping the usbarbitrator service as outlined here (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1036340) to see if I can get the USB stick to remount, but no luck. What am I missing here?
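A few things worth checking from the shell (a minimal sketch, assuming a standard ESXi 5.x install; none of this is guaranteed to revive a failing stick):

# where do the bootbank symlinks actually point, and are the backing vfat partitions mounted?
ls -ld /bootbank /altbootbank
df -h
# the usbarbitrator steps from the KB linked above
/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off

If the vfat partition itself has dropped to read-only, the USB stick may simply be failing.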


Pooch,

 

Not sure why you have the problem. I installed ESXi 5.5 onto a 4 GB USB stick (2 GB would have done) and then installed unlocker 1.2 without issue; the bootbank shortcut and partition are both read/write by default.


Yup, bootbank and altbootbank are both read only for some reason, not sure if it's by default or due to some other change. I did end up using an ISO injector to add an unsupported NIC driver VIB to the install originally, so perhaps that modified something somewhere? I'll try a fresh stock install and see if that makes a difference.
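To see exactly which non-stock VIBs ended up in the image (assuming esxcli is available; the grep filter below is only a rough one):

esxcli software vib list | grep -vi vmware

An injected NIC driver VIB should show up in that list.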


  • 2 weeks later...

A quick heads up: the OS X 10.9.1 update has now been released by Apple and works fine in both Workstation 10 and ESXi 5.5. The pre-release 10.9.2 also works fine in both, using the latest unlocker 1.2.0.


  • 2 weeks later...

I am a bit confused. I ran ESXi 5.1 for some time with the unlocker, and everything was fine until I updated to 5.5. After the update I could not install OS X on my ESXi host, so I tried to run unlocker v1.2.0 and got errors telling me that the volume is busy. I took an empty HDD and installed a fresh, clean ESXi 5.1 again, and without the unlocker running I was able to select an OS X VM as the guest OS and start the VM. It didn't boot into OS X, though, and ended with a spinning beachball in front of the Apple boot logo. How can that be? How can I select OS X as a guest on a fresh system without the unlocker, and what do I have to do to get an OS X VM up and running again?

 


 

 

Thnx

 

Dirk


Hello Lowrider6,

the behaviour you noticed is "normal". That is, ESXi 5.1 will allow you to create a Mac OS X guest on a "fresh" ESXi 5.1 install because it supports running Mac OS X on Apple hardware. If you attempt to install Mac OS X on ESXi 5.1 without the unlocker on non-Apple hardware, you will be able to "install" the guest OS but will get the grey screen with the beach ball (or grey gear). The recommended method when upgrading (or simply updating) ESXi is to first uninstall the unlocker (run the uninstall script), then update or upgrade ESXi, then reinstall the unlocker. I bet that if you run the unlocker on the "fresh" ESXi 5.1 install you mention above, your Mac OS guest will boot normally. Or install 5.5 and then the latest unlocker and you should be good to go. Please note the major changes in ESXi 5.5 before considering upgrading, however!
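In shell terms the recommended order is roughly the following (a sketch only; the datastore path is illustrative, and the uninstall script name may differ in your copy of the bundle):

cd /vmfs/volumes/datastore1/unlock-all-v120/esxi
./uninstall.sh
# ...update or upgrade ESXi and reboot...
cd /vmfs/volumes/datastore1/unlock-all-v120/esxi
./install.sh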

  • 2 weeks later...
I am experiencing a problem with the ESXi hostd daemon consistently crashing once I apply the latest unlocker patch and attempt to manage the host using the Windows vSphere Client. Two error dialogs are presented: a Microsoft Visual C++ Runtime Library runtime error and a Connection Failure asking if I want to login again. The running VMs are unaffected.

 

If I login again, the Windows vSphere Client seems to work fine with the exception of VM console access. Console windows do not appear to connect to the VMs with each window showing a blank display.

 

The ESXi host shows a core dump file named hostd-worker-zdump.000. ESXi automatically reloads hostd, evident with its new GID. However, nothing short of a host reboot will allow me full console access to running VMs. Of course, shortly after rebooting, hostd will eventually crash again…

 

If I remove the unlocker patch, everything works without issue.

 

It appears something changed with ESXi 5.5.0, as previous versions worked without issue with the unlocker patch.

 

Installation is on a new install of ESXi 5.5.0 (1331820) running on a dual L5639 Supermicro X8DTH-6F motherboard with 48 GB of memory. I also use the script that compresses the darwin.vgz file prior to installation.

 

Any ideas?
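A couple of data points that may help narrow this down (assuming a standard ESXi 5.5 shell):

# hostd core dumps normally land here
ls -l /var/core/
# restart just the management agent instead of rebooting the whole host
/etc/init.d/hostd restart

Restarting hostd does not touch running VMs, although the vSphere Client session will drop while it comes back.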


It is most probably that the copied and patched executable files are exhausting the RAM disk associated with the nodes in the VisorFS system. I have a new experimental way of installing the patch and running it each time the system boots. I have uploaded it here: https://www.mediafire.com/?9y7d9zcef8r25a5

 

Test ESXi Unlocker:
 
1. Remove any existing unlocker

2. Upload and overwrite existing local.sh in /etc/rc.local.d

3. Reboot

 

Note:
1. System should have a persistent scratch partition - should be OK except for stateless and USB boot on drives less than 4GB capacity
2. Any changes you have made to local.sh would be lost. If you have made changes to that file, you will need to merge them into the supplied file.
3. This option runs at boot time to patch the relevant files. It may survive an upgrade as local.sh is part of the persisted local state.
4. Only patches the vmx file, not vmx-debug or vmx-stats, as most people never run those versions of the file.
 
Please report back on how this works for you.
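For the curious, the general shape of a boot-time patch is sketched below. This is not the shipped local.sh; the /scratch/osx path comes from later posts in this thread and the patcher invocation is deliberately left vague:

# fragment that would run from /etc/rc.local.d/local.sh on every boot
mkdir -p /scratch/osx
cp /bin/vmx /scratch/osx/vmx
# ...run the extracted Unlocker.ESXi against the copy on the scratch partition...
ln -sf /scratch/osx/vmx /bin/vmx

Because the in-memory copy of /bin/vmx is rebuilt from the boot image on every boot, the patch has to be reapplied each time, which is why it lives in local.sh.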

Donk,

 

Is this an updated version of the ESXi unlocker you asked me to verify? Looking at the size of the local.sh file, Unlocker.ESXi is no longer required and you are now patching and redirecting to the scratch partition rather than an unlocker directory on the datastore. If so, I will give the updated version a go and report back; this will make my version of your unlocker 1.2.2.

 

Cheers MSoK.

 

P.S. Happy New Year, hope you had a good festive break?


Yes, this is the test I gave you last year, but updated to use the scratch partition rather than the main VMFS datastore. The Unlocker.ESXi is actually embedded in the shell script and is extracted and run on the scratch partition. If it tests out OK I will roll it out in a new release hosted on this site.
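As a general technique, embedding a binary in a shell script usually looks something like the sketch below (this is not the contents of Donk's file, just an illustration of the idea using busybox tools):

#!/bin/sh
# everything after the __PAYLOAD__ marker is an appended gzipped tar archive
SKIP=$(awk '/^__PAYLOAD__$/ {print NR + 1; exit}' "$0")
mkdir -p /scratch/osx
tail -n +"$SKIP" "$0" | tar -xz -C /scratch/osx
exit 0
__PAYLOAD__
(binary archive data appended here)

The exit 0 before the marker keeps the shell from trying to run the binary data as commands.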

 

Had a great break, thanks. And hope yours was good as well.

I see no evidence the unlocker patch is installed using the modified local.sh script.
 
ESXi 5.5.0 is installed on an 80 GB SSD, so the system has a persistent scratch partition. The updated local.sh file is copied to /etc/rc.local.d with permissions modified to match the original default local.sh file (1777). After rebooting, syslog.log reports init ran the local.sh script. Permissions on the local.sh file automatically changed to 0755.
 
Unfortunately, nothing indicates the updated script actually did anything (e.g., /scratch/osx does not exist).
 
I feel like I'm missing something obvious here…
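A few diagnostics that may help here (assuming the script is meant to populate /scratch/osx as described above):

# does the scratch copy exist, and has /bin/vmx been relinked?
ls -ld /scratch /scratch/osx
ls -l /bin/vmx
# what did init log when it ran local.sh?
grep -i local.sh /var/log/syslog.log | tail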
OK thanks for the feedback. I have been using this for a few months now, but maybe there is something different on your system. I will change the script to try and log more details.

 

The 0755 is the correct permission for local.sh; the 1777 is for the .#local.sh file, which you should not touch.


I discovered the replacement local.sh script file had ^M control characters at the end of each line. Once I removed them and rebooted, the patched vmx executable successfully installed and was properly linked from /bin. Everything appears to be working correctly now.

 

I will report back on the hostd stability issue after I use this new patch for a little while.

 

Thanks for all your support!
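A quick way to spot the same problem before rebooting (assuming od is available in the busybox shell; hexdump works as well if present):

head -1 local.sh | od -c

If the first line ends in \r \n rather than just \n, the file still has DOS line endings.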


I tried the new local.sh on ESXi 5.5 1331820 (removed the ^M's)

sed -e 's/\r//g' local.sh > local.sh.new

cp local.sh.new local.sh

chmod 755 local.sh

 

reboot

 

No change in smcPresent status from https://esx55host/mob/?moid=ha-host&doPath=hardware

(should it change to true?)

 

And the test Mac VM I created fails to power on with "The Guest operating system is not supported"

 

I am probably missing a step or two?

 

thanks for any tips - would be great to have this going!

 

 

That's odd about the ^M. I wonder whether copying to MediaFire did something to it. I will have another look at it and compare with my local copy.


The smcPresent flag is not changed by the unlocker. That is why it cannot be managed via vCenter.


The hostd instabilities I noted earlier have not occurred using the experimental local.sh patching mechanism, so this new method appears to be working.

 

After running more than 24 continuous hours, not one hostd crash…

 

Thanks again!

