
How to Install OS X 10.x (Snow Leopard to El Capitan) in VMware Workstation 10/11, Workstation Pro/Player 12, Player 6/7, ESXi 5/6


410 posts in this topic

Recommended Posts

In terms of upgrading to vHW v9 or v10, you can use Workstation 10's "Connect to Server..." and then "Edit virtual machine settings" to set the vHW to either v9 or v10, as an alternative to the vCenter Web Client, or if you do not have vCenter Server or the vCenter Server Appliance installed.

 

A blog post on how to do this from the command line of the vCLI or the ESXi shell has just been posted: http://www.virtuallyghetto.com/2013/10/quick-tip-using-cli-to-upgrade-to.html
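For reference, from the ESXi shell the upgrade boils down to a couple of vim-cmd calls (a sketch based on the linked post; the VM ID 42 is a placeholder, look yours up with getallvms, and power the VM off first):

```
# find the Vmid of the VM to upgrade
vim-cmd vmsvc/getallvms

# power it off, then upgrade its virtual hardware to v10
vim-cmd vmsvc/power.off 42
vim-cmd vmsvc/upgrade 42 vmx-10
```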

Link to comment
Share on other sites

@MSoK

 

Certainly... I am running ESXi 5.5 (upgraded from 5.1) on a Dell PowerEdge 2900 with the following specs:

  • 2x Intel® Xeon® CPU E5410 @ 2.33GHz
  • 16 GB RAM
  • Perc 6i with:
    • vmfs datastore on RAID 5 with 3x 1TB SATA Enterprise disks
    • ESXi (and a secondary datastore) installed on RAID 5 with 3x 250GB SATA Enterprise disks

 

I have installed the unlocker 1.20 along with the patch listed in the thread which compresses the darwin boot file.

 

I created the Mavericks VM using the vSphere Client and kept pretty much the following settings:

  • OSX 10.7 x64 for Guest OS
  • Virtual Hardware 8
  • 4GB RAM (downgraded to 2GB after install)
  • CPU - 1 socket, 1 core
  • E1000 Network interface
  • LSI Logic Parallel
  • 20 GB Disk, Thin Provisioned
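
For anyone comparing notes, those choices correspond to VMX entries roughly like the following (a hand-assembled fragment, not a complete VMX file; the key names are standard but the values just mirror the settings listed above):

```
guestOS = "darwin11-64"
virtualHW.version = "8"
memSize = "4096"
numvcpus = "1"
ethernet0.virtualDev = "e1000"
scsi0.virtualDev = "lsilogic"
scsi0:0.fileName = "Mac OS X 10.9.vmdk"
```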

I don't know if going with the LSI Parallel SCSI interface is what caused it to be so slow.  Again, my install took *forever*, but once the install was completed, the OS itself seems to boot and run perfectly fine.  Also, perhaps the thin provisioning could have caused it?  But I haven't seen performance issues like that before.

 

AFAIK the only thing that HCV9 adds is more CPU/memory support.  HCV10 adds SATA support... and that may be why it took forever to install for me(?).  But apparently you don't *need* SATA for it to work, as evidenced by my success.

 

@Donk - Thanks for that link, I was just looking for how to do it from the CLI.  A word of WARNING, though, to anyone looking to upgrade their HCV: if you upgrade it past 8 on ESXi 5.1, or past 9 on 5.5, you won't be able to manage the settings for that VM from anything but the web client.  Thanks to MSoK for pointing out that you can connect a Workstation GUI to an ESXi server (I didn't know that), but you can't use it to properly manage all the settings (the options shown are those for Workstation, and not ESX).

Link to comment
Share on other sites

Thanks to MSoK for pointing out that you can connect a Workstation GUI to an ESXi server (I didn't know that), but you can't use it to properly manage all the settings (the options shown are those for Workstation, and not ESX).

 

Just a heads-up: you can create an Apple OS X guest from Workstation 10 connected to ESXi, but it won't boot correctly as there are some important settings missing from the VMX file. It may be possible to fix this, but for now create it via the vSphere Client.

Link to comment
Share on other sites

@MSoK -

 

So, I did a bit more testing to see why my install had gone so slowly.  I tried a couple of things, but I am 90%+ positive my issue comes from the fact that my DMG is stored in, and mounted from, an NFS volume.

Specifically, I have my main File Server with all my ISOs on it running as a virtual machine on the same ESX host that I am building this OSX on.

It exposes an NFS volume which I mount back to the same ESX Host that this VM is running off of.

 

This way, I am able to do all my system builds by mounting my ISOs directly off a mounted NFS share on the ESX host.

 

For some reason, the OSX install doesn't like this.  I have installed *countless* Windows and Linux/Unix systems in this same way.  All those installs are quick - what I would expect based on the hardware/network (e.g. anywhere from 10 mins for a RHEL install to 20-30 mins for Windows 2008).

 

I went ahead and copied my DMG from this NFS mount directly to one of my VMFS datastores so it was "local" to the ESX host (although technically it was local before, just abstracted by a couple of application layers).

 

I then did another install and this time it completed within a more acceptable time frame (< 45 mins).
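
For anyone wanting to reproduce the comparison, the "make it local" step is just a copy between datastore mount points in the ESXi shell (the datastore and share names here are hypothetical, substitute your own):

```
# copy the installer image from the NFS-backed datastore to local VMFS
cp /vmfs/volumes/nfs-isos/OSX/Mavericks.dmg /vmfs/volumes/datastore1/ISO/
```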

 

Additional Data points:

I have had this setup running on my ESX 3.x to ESXi 5.1 servers with the NFS share being hosted off a Windows 2003 NFS server and, as I said - I never had an issue with this.

I recently rebuilt my whole environment to ESXi 5.5 and the file server in question is now a Windows 2012 R2 NFS server.  I built about a dozen machines this way and had only two issues:

- Slowness with this OSX build as mentioned

- When trying to build a CentOS 5.8 box, the install would begin to go very slowly and the memory utilization on my Windows NFS server would max out.  I found some other posts on MS about people having memory leaks like this on file servers... and honestly, I'm kicking myself because I can't remember how I fixed it (I think it was possibly the NIC or SCSI controller chosen for my guest).

 

Anyway, the point is no other OS build seems to have any issues installing with the ISO up on my NFS share....but OSX does.  I'm curious how many other people are installing to ESXi directly and where/how they store their ISO.

 

@ Donk

Yeah, I tried to do my build from my Workstation client connected to the ESXi host and got an error immediately after bootup...I figure it isn't a good idea to use the Workstation client as the primary client to ESXi.

Link to comment
Share on other sites

Just a heads-up: you can create an Apple OS X guest from Workstation 10 connected to ESXi, but it won't boot correctly as there are some important settings missing from the VMX file. It may be possible to fix this, but for now create it via the vSphere Client.

Just to confirm, as Donk has posted: creating an OS X VM from Workstation 10 connected to an ESXi host will not boot. However, create the OS X VM from the vSphere Client, then modify it in Workstation 10 (i.e. upgrade the vHW to v10 and change the OS Version to "Mac OS X 10.9"), save, and it boots fine under the vSphere Client.

Link to comment
Share on other sites

Just to confirm: creating an OS X VM from Workstation 10 connected to an ESXi host will not boot. However, create the OS X VM from the vSphere Client, then modify it in Workstation 10 (i.e. upgrade the vHW to v10 and change the OS Version to "Mac OS X 10.9"), save, and it boots fine under the vSphere Client.

 

I can also confirm this works.  However, I'm still waiting for your reasoning behind having to be on HCV 9 or 10.  9 adds more CPU/RAM, 10 adds SATA.

If SATA were a *requirement* I could see having to go to 10, but since a SCSI install works, why is it necessary to be above HCV 8?

 

UPDATE:

 

Points of interest.  The following is a diff of my VMX file after upgrading from HCV9 to HCV10 and switching from 10.7 to 10.9 in the settings (the first capture was taken before the upgrade, the second after):

 

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4 # cat Mac\ OS\ X\ 10.9/Mac\ OS\ X\ 10.9.vmx | sort > /tmp/HCV9.txt
/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4 # cat Mac\ OS\ X\ 10.9/Mac\ OS\ X\ 10.9.vmx | sort > /tmp/HCV10.txt
/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4 # diff /tmp/HCV9.txt /tmp/HCV10.txt
--- /tmp/HCV9.txt
+++ /tmp/HCV10.txt
@@ -17,7 +17,7 @@
 floppy0.clientDevice = "TRUE"
 floppy0.fileName = "vmware-null-remote-floppy"
 floppy0.startConnected = "FALSE"
-guestOS = "darwin11-64"
+guestOS = "darwin13-64"
 hpet0.present = "TRUE"
 ich7m.present = "TRUE"
 ide1:0.allowGuestConnectionControl = "TRUE"
@@ -68,7 +68,7 @@
 toolScripts.beforeSuspend = "TRUE"
 tools.syncTime = "FALSE"
 toolsInstallManager.lastInstallError = "0"
-toolsInstallManager.updateCounter = "8"
+toolsInstallManager.updateCounter = "9"
 usb.pciSlotNumber = "32"
 usb.present = "TRUE"
 usb:0.deviceType = "hid"
@@ -84,7 +84,7 @@
 uuid.location = "56 4d 08 c7 8c 4b 72 9a-70 91 50 36 e5 97 be a5"
 vc.uuid = "52 00 e8 7a bf d5 ca e1-c2 ac 6c 6a 16 2b a5 dc"
 virtualHW.productCompatibility = "hosted"
-virtualHW.version = "9"
+virtualHW.version = "10"
 vmci0.id = "-443040091"
 vmci0.pciSlotNumber = "35"
 vmci0.present = "TRUE"

I don't see much of a difference...you?
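
Since a sorted cat piped into diff is doing the heavy lifting above, pulling an individual key out of a VMX file is a one-liner too. A small sketch (the vmx_get helper name and the /tmp/demo.vmx stand-in file are mine, not part of the thread):

```shell
# Hypothetical helper: print the value of one key from a VMX-style 'key = "value"' file
vmx_get() {
  sed -n "s/^$2 = \"\(.*\)\"$/\1/p" "$1"
}

# demo against a minimal stand-in fragment
cat > /tmp/demo.vmx <<'EOF'
virtualHW.version = "10"
guestOS = "darwin13-64"
EOF

vmx_get /tmp/demo.vmx "virtualHW.version"   # prints 10
vmx_get /tmp/demo.vmx "guestOS"             # prints darwin13-64
```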

 
 
Link to comment
Share on other sites

I can also confirm this works.  However, I'm still waiting for your reasoning behind having to be on HCV 9 or 10.  9 adds more CPU/RAM, 10 adds SATA.

If SATA were a *requirement* I could see having to go to 10, but since a SCSI install works, why is it necessary to be above HCV 8?

I think the jury is out on this one; VMware recommend going to the latest vHW version, but if vHW8 is working fine for your OS X Mavericks VM, stick with it. To date, the only time I have needed vHW10 and the vCenter 5.5 Web Client is creating a virtual Windows Server with a 9TB data volume to act as a target for a Veeam 7 backup!

Anyway, the point is no other OS build seems to have any issues installing with the ISO up on my NFS share....but OSX does.  I'm curious how many other people are installing to ESXi directly and where/how they store their ISO.

I generally create an ISO folder on either a local datastore or a SAN attached via Fibre Channel, iSCSI or SAS to store any ISO or DMG images required for installations.

Link to comment
Share on other sites

I think the jury is out on this one; VMware recommend going to the latest vHW version, but if vHW8 is working fine for your OS X Mavericks VM, stick with it. To date, the only time I have needed vHW10 and the vCenter 5.5 Web Client is creating a virtual Windows Server with a 9TB data volume to act as a target for a Veeam 7 backup!

I generally create an ISO folder on either a local datastore or a SAN attached via Fibre Channel, iSCSI or SAS to store any ISO or DMG images required for installations.

 

Ouch... 9TB... wouldn't want to have to deal with that volume.  Why not split it up and have each set of backup jobs go to a different repository in Veeam?  Of course, if you are backing up something that is 9TB for one server....

 

Yeah, personally I haven't had anything needing over HCV8.  I didn't even realize I could go above that since I always use the vSphere GUI client.  When I upgraded to 5.5 I realized I could go to 10, so I went ahead and just brought everything to 9 so I had flexibility in management.

 

And yes...I know it would be better for me to mount my ISOs from another system over iSCSI or something, but this is just a SOHO lab setup and I don't have the resources to do that.  I host my "home" DC off the ESX box which is my main file server (apart from my HTPC).  So it shares out about .5 TB of data.  I also have about a dozen lab systems setup.  I have an old single core Athlon 64 3200 with 3GB of RAM that I run vCenter on (yes, I got it to run all components, including Update Manager on those specs!).  I also have Veeam installed on that same box - including the SQL server to host both of them!  It is slow to start, but it all works.  I have about 2-3TB of disk on there which is enough to backup my ESX box.

 

I might try to move my DMG to the Veeam NFS server so at least it is hosted on a different box and see if a build over GB Ethernet from a separate machine would also be problematic.  However, as OSX is the only one that has ever had an issue, and now I've got my template built...not sure it is worth the hassle.  Hopefully this info helps someone else out though...

Link to comment
Share on other sites

Ouch... 9TB... wouldn't want to have to deal with that volume.  Why not split it up and have each set of backup jobs go to a different repository in Veeam?  Of course, if you are backing up something that is 9TB for one server....

bengalih,

 

Thanks for all your hard work in diagnosing and reporting your problems, and solution(s), in detail; as you say, hopefully it will help others if they experience similar issues.

 

In terms of the large Veeam target, a little off topic, we needed to backup multiple virtual Windows servers in a single job to maximise deduplication and compression.

 

I can also confirm this works.  However, I'm still waiting for your reasoning behind having to be on HCV 9 or 10.  9 adds more CPU/RAM, 10 adds SATA.

If SATA were a *requirement* I could see having to go to 10, but since a SCSI install works, why is it necessary to be above HCV 8?

 

UPDATE:

 

Points of interest.  The following is a diff of my VMX file after upgrading from HCV9 to HCV10 and switching from 10.7 to 10.9 in the settings:

-guestOS = "darwin11-64"
+guestOS = "darwin13-64"

-toolsInstallManager.updateCounter = "8"
+toolsInstallManager.updateCounter = "9"
 
-virtualHW.version = "9"
+virtualHW.version = "10"

I don't see much of a difference...you?

Since the vmx is a configuration file that instructs the ESXi host how to run the VM, by setting virtualHW.version = "10" you are allowing your OS X VM to take advantage of all the additional features supported by ESXi 5.5, rather than restricting functionality to the ESXi 5.0 feature set.

 

Now, that may not appear to make any visible difference, or even seem relevant to your VMs, but why would you not want to take advantage of the latest and greatest, unless, as is the case with OS X, it makes the management of the VM(s) more difficult?

Link to comment
Share on other sites

bengalih,

 

Thanks for all your hard work in diagnosing and reporting your problems, and solution(s), in detail; as you say, hopefully it will help others if they experience similar issues.

 

In terms of the large Veeam target, a little off topic, we needed to backup multiple virtual Windows servers in a single job to maximise deduplication and compression.

Since the vmx is a configuration file that instructs the ESXi host how to run the VM, by setting virtualHW.version = "10" you are allowing your OS X VM to take advantage of all the additional features supported by ESXi 5.5, rather than restricting functionality to the ESXi 5.0 feature set.

 

Now, that may not appear to make any visible difference, or even seem relevant to your VMs, but why would you not want to take advantage of the latest and greatest, unless, as is the case with OS X, it makes the management of the VM(s) more difficult?

 

Setting the virtual hardware version may change code paths in the hypervisor. Just because there are not many differences in the VMX, do not let that fool you into thinking there are no other consequences.

 

A guest OS and its version are supported by the virtual hardware and firmware, by the emulation inside the hypervisor for the virtual chassis (e.g. virtual SATA, SCSI...), plus the actual VMX code. For example, using 10.6 on ESXi 5.5 invokes hidden CPUID masks for compatibility.

 

Note that I am not saying you need to change the level of hardware supported, just be aware that it may not be the most efficient way to run the guest.

Link to comment
Share on other sites

Setting the virtual hardware version may change code paths in the hypervisor. Just because there are not many differences in the VMX, do not let that fool you into thinking there are no other consequences.

 

A guest OS and its version are supported by the virtual hardware and firmware, by the emulation inside the hypervisor for the virtual chassis (e.g. virtual SATA, SCSI...), plus the actual VMX code. For example, using 10.6 on ESXi 5.5 invokes hidden CPUID masks for compatibility.

 

Note that I am not saying you need to change the level of hardware supported, just be aware that it may not be the most efficient way to run the guest.

 

Understood.  I'm curious whether you (or anyone else) are actually running ESX (or Workstation) on an actual Mac box?  It would be interesting to know if you are required to choose HCV10 in order to choose 10.9 as the OS.

That is the way it seems to be in Workstation 10 when I use your unlocker - just wondering if that is the way it would be even with Mac hardware?

 

I would assume most people trying to use OSX as a production desktop will be running it on Workstation - so upgrading to HCV10 is not really an issue.  For me, I like using the vSphere Client for ESX, so I don't want to move my HCV to 10.  Since 10.9 (and earlier) all seem to work fine for me, I'm OK.  Of course, as I mentioned, I don't use OSX as a desktop - I use it for some software testing, and it seems to work well enough (I don't test things like USB, sound, ideal graphics, etc.).

 

Slightly OT - but is it possible that your update to the unlocker script that compresses the image could possibly have fixed issues with vSphere-connected ESX boxes?  Since I reinstalled with the compressed image, things *seem* more stable to me.

Link to comment
Share on other sites

Understood.  I'm curious whether you (or anyone else) are actually running ESX (or Workstation) on an actual Mac box?  It would be interesting to know if you are required to choose HCV10 in order to choose 10.9 as the OS.

That is the way it seems to be in Workstation 10 when I use your unlocker - just wondering if that is the way it would be even with Mac hardware?

 

I would assume most people trying to use OSX as a production desktop will be running it on Workstation - so upgrading to HCV10 is not really an issue.  For me, I like using the vSphere Client for ESX, so I don't want to move my HCV to 10.  Since 10.9 (and earlier) all seem to work fine for me, I'm OK.  Of course, as I mentioned, I don't use OSX as a desktop - I use it for some software testing, and it seems to work well enough (I don't test things like USB, sound, ideal graphics, etc.).

 

Slightly OT - but is it possible that your update to the unlocker script that compresses the image could possibly have fixed issues with vSphere-connected ESX boxes?  Since I reinstalled with the compressed image, things *seem* more stable to me.

I haven't run Workstation in Boot Camp for a while. It might be an interesting test, as that is actually what I wanted originally when I started trying to get OS X running in Workstation, 6 years ago now!

 

My use of OS X is as a development & testing platform as well, including in Fusion on real Mac hardware. What may be interesting is for me to downgrade one of my test VMs in Workstation 10 to HW8; SCSI should still work, and I can see how stable it is.

 

I think the compression has helped. I have a new way of running the unlocker on ESXi, and MSoK has validated it in his test lab. It is not ready just yet, but it gets rid of the vtar RAM disk, so I am hoping it will make things more stable on 5.5. This will be an alternative installer until enough testing has happened.

Link to comment
Share on other sites

I haven't run Workstation in Boot Camp for a while. It might be an interesting test, as that is actually what I wanted originally when I started trying to get OS X running in Workstation, 6 years ago now!

 

My use of OS X is as a development & testing platform as well, including in Fusion on real Mac hardware. What may be interesting is for me to downgrade one of my test VMs in Workstation 10 to HW8; SCSI should still work, and I can see how stable it is.

 

I think the compression has helped. I have a new way of running the unlocker on ESXi, and MSoK has validated it in his test lab. It is not ready just yet, but it gets rid of the vtar RAM disk, so I am hoping it will make things more stable on 5.5. This will be an alternative installer until enough testing has happened.

 

How are you able to downgrade HCV?  I thought the only way to do this was to recover a snapshot or restore an older HCV from backup.  Do you have a trick? :)

 

Also, if you need any other testers on a pure ESXi 5.5 box, do let me know.

 

Thanks again for all your work.

Link to comment
Share on other sites

I haven't run Workstation in Boot Camp for a while. It might be an interesting test, as that is actually what I wanted originally when I started trying to get OS X running in Workstation, 6 years ago now!

 

My use of OS X is as a development & testing platform as well, including in Fusion on real Mac hardware. What may be interesting is for me to downgrade one of my test VMs in Workstation 10 to HW8; SCSI should still work, and I can see how stable it is.

Just to confirm: installing OS X Mavericks (10.9) in Workstation 10 using Custom (advanced), with "Hardware compatibility:" changed from "Workstation 10.0" to "Workstation 8.0" and "Select a Guest Operating System" set to "Apple Mac OS X", Version: "Mac OS X 10.9", you get a warning: "Mac OS X 10.9 is not a supported guest operating system for Workstation 8.0 virtual machines.... Are you sure you want to continue?"  Click Yes to continue with the installation.

 

The installation completes and, once the latest VMware Tools (6.0.1) are installed, it works as normal. Interestingly, the vmx file shows virtualHW.version = "8" as expected, but guestOS = "darwin13-64".

Link to comment
Share on other sites

Hi, I just wanted to say how I got Mavericks working in VMware Workstation 10.

 

I had created a VM in Fusion on my Hackintosh so I could update my apps, and thought why not just copy the VMDK file over to Windows and try it there?

(actually it was a package with a bunch of files in it, including the VMDK; I just opened the VMDK but copied the other files too)

After using the unlocker I just opened it; it asked me if I had moved or copied this VM, I clicked "I copied it" and it started right up!

 

After applying the SVGA patch it's working great; I've got it full screen on a separate monitor, and with just a mouse-over I'm in Mac!

 

Thanks to this forum for all the valuable research and information available!

Link to comment
Share on other sites

what is the SVGA patch for mavericks?  I don't see any info about it...

 

Here is the link: http://sourceforge.net/projects/vmsvga2/files/Display/

 

First install the VMsvga2_v1.2.5_OS_10.9.pkg, then the guestd_patches.pkg if you want autofit (it sets the resolution to the VMware window size).
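
If you would rather use the Terminal inside the guest than double-click the packages, macOS's installer tool can apply them in the same order (assuming the pkgs were downloaded to the current directory; it will ask for an admin password):

```
# driver first, then the autofit patch
sudo installer -pkg VMsvga2_v1.2.5_OS_10.9.pkg -target /
sudo installer -pkg guestd_patches.pkg -target /
```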

 

It even let me update iMovie, where before it said I didn't have the graphics capability for it.

 

There are various threads in the forum about it, and I think there is a reference to it early on in this thread, (Post #21) which is where I found the link.

 

Before I installed it I had kind of a flicker when I moved windows around, etc.; now it's as smooth as my "real hardware" Hackintosh installation (BTW I have an i7 Haswell with a Gigabyte GA-H87-D3H, using the internal graphics).

 

-Greyy

  • Like 1
Link to comment
Share on other sites

Here is the link: http://sourceforge.net/projects/vmsvga2/files/Display/

 

First install the VMsvga2_v1.2.5_OS_10.9.pkg, then the guestd_patches.pkg if you want autofit (it sets the resolution to the VMware window size).

 

It even let me update iMovie, where before it said I didn't have the graphics capability for it.

Greyy,

 

Thanks for the reminder, we have been using Zenith432's enhanced graphics drivers and autofit patch for some time, so I have added the information to the original post, including a link to the associated topic as well as the download location on sourceforge.

Link to comment
Share on other sites

Thanks greyy/MSoK, I'll try them out.

 

Is this package just for Mavericks, or is it viable for 10.8 and before as well?

bengalih,

 

Two packages are available: one for OS X 10.6-10.8 and the other specifically for 10.9; the auto-fit patch is for all. See the first post in this topic for more details.

Link to comment
Share on other sites

I've installed the SVGA driver and the auto-fit patch, but I can't seem to get auto-fit to work in VMware Player 6.1.  When I resize to full screen, for example, it will look perfect for about 2 seconds, then will resize to the original windowed size within the full screen, to where it looks like one screen on top of the other and the mouse cursor doesn't line up with the screen.

 

Any way to fix it?

Link to comment
Share on other sites
