VVOLs
Perhaps the most anticipated feature in vSphere 6 is Virtual Volumes, or VVOLs. VVOLs extends the VMware software-defined storage (SDS) story to its storage partners, and radically changes how storage is presented, consumed and managed by the hypervisor. No longer is virtual machine (VM) storage bound by the attributes of the LUN; each VM disk (VMDK) can have its own policy-driven SLA. VMware has a passel of storage vendors on board, equipping their arrays with the ability to offer VVOLs storage to the VMware hypervisor. I'm sure this feature will get much press and customer attention in the coming days.
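To make the per-VMDK idea concrete, here is a toy sketch of policy-driven placement, in which each disk's requirements are matched against the capabilities an array advertises. This is illustrative only; the array names and capability keys are invented, and the real mechanism is VMware's storage policy framework working with vendor-supplied providers.

```python
# Toy sketch of policy-driven placement: each VMDK carries its own
# capability requirements, matched against what a VVOL-capable array
# advertises. Illustrative only -- not VMware's actual SPBM/VASA API.

ARRAYS = {
    "gold-array":   {"replication": True,  "flash": True},
    "bronze-array": {"replication": False, "flash": False},
}

def place_vmdk(policy):
    """Return the arrays whose advertised capabilities satisfy the policy."""
    return [
        name for name, caps in ARRAYS.items()
        if all(caps.get(key) == want for key, want in policy.items())
    ]

# One VM, two disks, two different SLAs:
print(place_vmdk({"replication": True, "flash": True}))  # ['gold-array']
print(place_vmdk({"replication": False}))                # ['bronze-array']
```

The point of the model: the SLA travels with the individual disk, not with a LUN that every disk on it must share.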
vMotion
vSphere vMotion just got 10 times better, and a lot more interesting. For one thing, it now supports live VM migration across vCenter servers and over long distances. vMotion used to tolerate round-trip times (RTTs) of up to 10 ms; it now supports RTTs of up to 100 ms. A ping from Portland, Ore., to Boston, Mass., is about 90 ms, so, in theory, you could move a live VM across the entire United States. I'm not sure which of these I find more interesting: long-distance vMotion, Cross vCenter vMotion or how shared storage will span a continent.
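As a rough sanity check on that claim, assume light in fiber travels at about 200,000 km/s and ignore routing and queuing overhead (both assumptions of mine, not figures from VMware). The propagation-only RTT for a coast-to-coast link lands comfortably inside the new 100 ms budget:

```python
# Back-of-the-envelope propagation delay for long-distance vMotion.
# Assumption: signals in fiber travel at ~200,000 km/s (about 2/3 of c);
# real-world pings add routing and queuing delay on top of this.

FIBER_SPEED_KM_PER_S = 200_000

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over a fiber path."""
    one_way_s = distance_km / FIBER_SPEED_KM_PER_S
    return 2 * one_way_s * 1000

# Portland to Boston is roughly 4,100 km point to point.
print(f"{propagation_rtt_ms(4_100):.0f} ms")  # 41 ms -- well under 100 ms
```

The observed 90 ms ping is consistent with this floor once real routing paths and switching overhead are added, which is why the new 100 ms ceiling makes coast-to-coast vMotion plausible where the old 10 ms ceiling did not.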
Fault Tolerance
Multi-processor fault tolerance (FT) is a feature unique to the VMware hypervisor. It allows a workload to run simultaneously on two different ESXi servers; if a server or VM goes down, the other continues running the workload uninterrupted. Until now, FT supported only a single vCPU; it can now protect a four-vCPU VM in the Enterprise Plus edition and a two-vCPU VM in other editions. VMware stressed that, to be effective, a 10Gb connection is required between the ESXi servers. From what the engineers have told me, this required a major rewrite of the FT code base.
Bigger VMs
VMware VMs have gotten even bigger. VMware has doubled the number of vCPUs a VM can have from 64 to 128, and has quadrupled the maximum RAM from 1TB to 4TB. This opens up some unique possibilities for resource-intensive applications, such as the SAP HANA in-memory database.
As more powerful hosts come online, vSphere will be ready to support them: an ESXi host can now support 480 logical CPUs and 6TB of RAM.
vSphere now supports 64-node clusters. This change has been a long time coming, and it's good that VMware is finally supporting larger clusters. It should be a big boon to the 1,000-plus VMware customers running Virtual SAN.
Instant Clone
VMware has a feature called Instant Clone that I'm dying to try in my lab: it creates clones 10 times faster than vSphere does today. That's a welcome relief for all the test and dev shops that have been hampered by waiting for clones to finish.
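The article doesn't describe the mechanism, but the speedup is broadly analogous to copy-on-write forking: a clone shares the parent's memory at creation time and copies a page only when it writes to it, so creating the clone costs almost nothing. A toy sketch of that idea, purely as an analogy and not VMware's implementation:

```python
# Toy copy-on-write "memory" illustrating why a fork-style clone is fast:
# the clone shares the parent's pages and materializes a page only when
# it writes. Analogy only -- not how VMware's Instant Clone is built.

class CowMemory:
    def __init__(self, parent=None):
        self._pages = {}       # only pages this instance has written
        self._parent = parent  # fall back to the parent for everything else

    def clone(self):
        # O(1): no pages are copied at clone time.
        return CowMemory(parent=self)

    def read(self, addr):
        if addr in self._pages:
            return self._pages[addr]
        if self._parent is not None:
            return self._parent.read(addr)
        raise KeyError(addr)

    def write(self, addr, value):
        # Copy-on-write: the change lands only in this instance.
        self._pages[addr] = value

parent = CowMemory()
parent.write(0x1000, "os image")
child = parent.clone()            # instant -- shares the parent's pages
child.write(0x2000, "app data")   # diverges only where it writes
print(child.read(0x1000))         # "os image", read through from the parent
```

The child sees everything the parent had, the parent never sees the child's writes, and cloning itself touches no data at all.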
A new feature in the stack is vSphere Content Library. For those who have ISOs, VM templates and virtual appliances stored in multiple locations, this will be a nice central repository that can be synchronized and accessed from different sites and vCenter instances. In its initial release, vSphere Content Library has basic versioning and a publish-and-subscribe mechanism to keep things in sync.
On the network side, vSphere now supports Network I/O Control on a per-VM basis, and can reserve bandwidth to guarantee SLAs.
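To show what a per-VM bandwidth reservation buys you, here is a simplified sketch of a reservation-plus-shares split of an uplink. It mirrors the general reservation/shares resource model vSphere uses elsewhere; the function, VM names and numbers are illustrative, not the actual Network I/O Control algorithm.

```python
# Simplified reservation + shares bandwidth split (illustrative model,
# not the real Network I/O Control scheduler): every VM is guaranteed
# its reservation; spare link capacity is divided by shares.

def allocate_bandwidth(link_mbps, vms):
    """vms: {name: (reservation_mbps, shares)} -> {name: allocation_mbps}."""
    reserved = sum(r for r, _ in vms.values())
    if reserved > link_mbps:
        raise ValueError("reservations exceed link capacity")
    spare = link_mbps - reserved
    total_shares = sum(s for _, s in vms.values()) or 1
    return {
        name: r + spare * s / total_shares
        for name, (r, s) in vms.items()
    }

# Two VMs contending for a 10,000 Mbps (10GbE) uplink:
alloc = allocate_bandwidth(10_000, {
    "db-vm":  (4_000, 100),  # guaranteed 4 Gbps, high shares
    "web-vm": (1_000, 50),   # guaranteed 1 Gbps, normal shares
})
print(alloc)
```

Under contention the database VM can never be squeezed below its 4 Gbps reservation, which is exactly the SLA guarantee the per-VM model provides.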
Ready for VDI
VMware has also thrown in support for NVIDIA GRID vGPU, which allows an individual VM to take advantage of a physical GPU housed in an ESXi server. vSphere has had vDGA for a while, but whereas vDGA dedicates a GPU to a single guest, vGPU allows up to eight guests to share it. A lot of the people I've talked to have been waiting for this feature.
This change shows how serious VMware is about the virtual desktop infrastructure market; vGPU joins soft 3D, vSGA and vDGA as another way to make desktop VMs more performant.
As I mentioned before, this release is, for the most part, evolutionary rather than revolutionary -- and there's nothing wrong with that. Yes, it has VVOLs, and it might be the game changer the community believes it to be. But I also hope that people finally fully utilize fault tolerance to protect critical applications, use long-distance vMotion to move workloads as needed and create some truly monstrous VMs to run their in-memory databases.
vSphere 6 demonstrates that VMware wants to maintain its leadership in hypervisor development.