In October, VMware released the vCenter Server Appliance 5.1.0a, a virtual appliance alternative to the traditional Windows-based vCenter server. I like the idea of having a virtual appliance that can be deployed and, with a few clicks, be ready to run the vCenter server service, but I’ve been a bit skeptical about deploying it in a production environment. With the latest release and an upgrade of the ESXi hosts to vSphere 5.1.0, I decided to give it a shot.
This is a fairly easy transition in a virtual environment with servers only, but adding virtual desktops and VMware View to the same infrastructure complicates things. I tried googling and reading VMware documentation, but I could not find any supported method to migrate virtual desktop pools from one vCenter server to another, which in this case is the new vCenter Server Appliance.
Disclaimer: I must emphasize that the method described here is neither confirmed nor documented as supported by VMware, and I take no responsibility for your actions.
I’ll take you through my upgrade from start to finish, beginning with the vCenter server “upgrade”.
The environment we’re talking about here is rather small. It consists of a single vCenter server, three ESXi hosts, one VMware View Connection Server, and a combination of automated and manual desktop pools. None of the pools are linked-clone pools provisioned using VMware View Composer, only full-clone virtual desktops, both manual and automated.
Deploy vCenter server appliance
The first step in the upgrade phase is to deploy the vCenter Server Appliance. Deploy it as you would any other virtual appliance and define static IP addressing during deployment. Once deployed, power it on and log in to the appliance management page at https://vcsa01:5480. You’ll be greeted by a configuration wizard. Cancel it for now, so you can configure the hostname and regenerate the certificates before running the wizard.
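The deployment itself can also be scripted with VMware’s ovftool instead of the vSphere Client. A rough sketch, where the OVA filename, the target host (esxi01) and the network names are placeholders from my environment, and the source network label depends on the OVA descriptor:

~ # ovftool --acceptAllEulas --name=vcsa01 --net:"Network 1"="VM Network" VMware-vCenter-Server-Appliance.ova vi://root@esxi01/

ovftool prompts for the host password and maps the appliance’s first NIC to the port group on the right-hand side of the --net mapping.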
In this case I chose to use the embedded database, which is based on Postgres in the latest vCSA release. VMware states that the supported maximum is 5 hosts and 50 virtual machines. Not sure why they’ve set the numbers this low, though. Since the only external database supported in this release is Oracle and the current vCenter server is using MSSQL, I had to re-create the whole folder and permission structure before adding the ESXi hosts.
Adding ESXi hosts
The vCSA is configured with the datacenter, cluster, folders and permissions. I consider vCenter to be a non-critical service, and since disconnecting an ESXi host from vCenter doesn’t affect the running virtual machines, I don’t see any issues doing it. (I did this after the developers went home for the day.) Before disconnecting, I disabled both VMware HA and DRS. When you disconnect ESXi hosts from vCenter, the list of virtual machines will not look pretty – they turn gray with a status of disconnected. Don’t worry, they are still running.
On the vCSA, I added all three hosts to the new cluster using their FQDNs. A reminder: the datacenter name, cluster name and folder structure are identical between the old vCenter server and the vCSA. All virtual machines are initially added to the Discovered Virtual Machines folder, so we’re looking at a drag-and-drop session here, putting all machines in their respective folders.
The Windows-based vCenter server can now be shut down, and the vCSA is managing all ESXi hosts and virtual machines in my environment.
In this environment, running only three ESXi hosts, a manual upgrade of the ESXi hosts is very easy. Starting with the first host, migrate all virtual machines to either of the other two hosts in the cluster using vMotion, and put the host in maintenance mode. If DRS is enabled, it will vMotion the machines for you when you activate maintenance mode.
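If you prefer doing this from the ESXi shell rather than the client, maintenance mode can be toggled over SSH as well. A sketch using the esxcli namespace as it exists in 5.x:

~ # esxcli system maintenanceMode set --enable true

and after the upgrade and reboot:

~ # esxcli system maintenanceMode set --enable false

Note that entering maintenance mode this way will wait for the host to be evacuated, so migrate or power off the virtual machines first.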
SSH access is enabled on all hosts, and the ESXi upgrade ZIP file is uploaded to one of the shared datastores. Log in to the first ESXi host and execute the following command to start the upgrade:
~# esxcli software vib install -d /vmfs/volumes/iSCSI_01/updates/ESXi510-201210001.zip
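If you want to preview what the installer would change before committing, the esxcli software vib commands accept a dry-run flag that reports the VIBs to be installed and removed without touching the host:

~# esxcli software vib install -d /vmfs/volumes/iSCSI_01/updates/ESXi510-201210001.zip --dry-run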
This starts the upgrade. Once it completes successfully, reboot the host to load the new VMkernel version. When the host is back online, exit maintenance mode and repeat the process on the remaining hosts.
~ # esxcli system version get
   Product: VMware ESXi
   Version: 5.1.0
   Build: Releasebuild-838463
   Update: 0
For some reason, in my case, vMotion was disabled on my vMotion VMkernel port after the upgrade. I’m not sure why, but re-enabling vMotion on the VMkernel port fixed it.
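Re-enabling vMotion can also be done from the ESXi shell with vim-cmd. A sketch, assuming vmk1 is the VMkernel interface carrying vMotion traffic in your environment:

~ # vim-cmd hostsvc/vmotion/vnic_set vmk1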
The virtual infrastructure is now running on vSphere 5.1.0 ESXi hosts, and the vCenter Server Appliance, vCSA, is running version 5.1.0a.