Microsoft Hyper-V 2012 R2 and Nutanix, a match made in Hyper-Convergence Pt.2


Now that we've covered the high-level benefits and reasons to consider Hyper-V 2012 R2 and Nutanix, let's get into some of the configuration details. I'll first show how it was done before Nutanix, because that really highlights the transformation in the design. The typical virtualization environment before Nutanix consisted of these physical components:

FC SAN:

  • Virtualization hosts (blade or rack mount): typically 2 to 3 at a minimum for redundancy and load distribution; blades also require an enclosure
  • FC HBAs: typically 2 at a minimum per host for redundancy and load distribution
  • Fabric switches: two for redundancy and load distribution
  • FC storage array: two controllers (active-active or active-passive) for redundancy and load distribution


In the converged configuration, the data network and the storage network are combined.

Hyper-Converged:

Using Nutanix (3000 series or higher), your virtualization environment will typically consist of:

  • Virtualization hosts (nodes): 3 at a minimum for redundancy and load distribution
  • 10GbE NICs: 2x 10GbE ports per node
  • 10GbE switches: two for redundancy and load distribution (top of rack)

This is what a Nutanix block looks like in its 1000 and 3000 series, which can hold up to four nodes.



From the physical standpoint, things are much simpler with Nutanix. There's still the separation of storage, compute, and networking, but storage and compute now share the same physical space. This is where the convergence comes into play. When we add the hypervisor layer (software) to the hardware, we get hyper-convergence. The hypervisor in this case is Microsoft Hyper-V 2012 R2. Without the hypervisor, nothing in this architecture works, because it's what sits directly on top of the hardware.

In the older architectures there was physical separation of the network, storage, and compute roles; now that separation is achieved through virtualization. The storage in particular is the key component that Nutanix brings to the table, in the form of Controller Virtual Machines (CVMs), one running on each host node.

Single Node: Image taken from Nutanix.com

Multiple Node: Image taken from Nutanix.com

The CVM cluster requires a minimum of 3 nodes. With the 1000 series you can use 1GbE ports for up to 8 nodes in a single cluster; to go beyond 8 nodes on the 1000 series, 10GbE is required. The 3000 series and above require 10GbE from the start, although I have been able to set up an environment without 10GbE and switch over once the 10GbE ports became available.
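To make those sizing rules concrete, here's a minimal sketch in Python (my own illustration, not anything shipped by Nutanix) that checks a planned cluster against them:

    def validate_cluster(series, node_count, nic_speed_gbe):
        """Check a planned Nutanix cluster against the sizing rules above.

        series         -- model series, e.g. 1000 or 3000
        node_count     -- number of nodes planned for the cluster
        nic_speed_gbe  -- NIC speed in GbE (1 or 10)
        """
        issues = []
        if node_count < 3:
            issues.append("A CVM cluster requires a minimum of 3 nodes.")
        if series >= 3000 and nic_speed_gbe < 10:
            issues.append("The 3000 series and above require 10GbE.")
        if series == 1000 and node_count > 8 and nic_speed_gbe < 10:
            issues.append("1000 series clusters beyond 8 nodes require 10GbE.")
        return issues

    # Example: a nine-node 1000 series cluster on 1GbE fails the check.
    for problem in validate_cluster(series=1000, node_count=9, nic_speed_gbe=1):
        print(problem)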

The deployment process usually consists of these tasks:

  • Nutanix block physical rack setup and cabling, usually done by the customer.
  • Nutanix block setup and installation, usually done by a Nutanix engineer. This is where they install the hypervisor and the CVMs.
  • Nutanix cluster setup, usually done by a Nutanix engineer.
  • Hypervisor setup, usually done by a Nutanix engineer. With Hyper-V this usually involves setting up the Hyper-V cluster and adding the nodes to Active Directory (AD) and System Center Virtual Machine Manager (SCVMM).
  • Load test of the Nutanix storage cluster. This is where they run some load tests to see what the performance looks like (see the sketch after this list). If the tests show performance in line with normal expectations or better, everything is good to go.
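The exact load test tool and parameters vary by engineer. As one illustration, Microsoft's DiskSpd is a common way to generate storage load on Windows; the sketch below drives it from Python against a test file on a cluster shared volume. The path and parameter values are placeholders I picked for the example, not Nutanix-recommended settings:

    import subprocess

    # Placeholder target on a cluster shared volume; adjust to your environment.
    TEST_FILE = r"C:\ClusterStorage\Volume1\loadtest\testfile.dat"

    # DiskSpd parameters (illustrative values only):
    #   -c10G  create a 10 GiB test file     -d60  run for 60 seconds
    #   -r     random I/O                    -w30  30% writes / 70% reads
    #   -t4    4 threads per target          -o32  32 outstanding I/Os per thread
    #   -b8K   8 KiB block size              -L    capture latency statistics
    #   -Sh    disable software caching and hardware write caching
    result = subprocess.run(
        ["diskspd.exe", "-c10G", "-d60", "-r", "-w30",
         "-t4", "-o32", "-b8K", "-Sh", "-L", TEST_FILE],
        capture_output=True, text=True,
    )
    print(result.stdout)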

Deployments may vary, so don't take this as gospel.

The typical issues I've seen during Nutanix deployments with Hyper-V usually consist of:

  • DNS
  • Network configuration (don’t forget Jumbo Frames)
  • Authentication with AD
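All three are easy to pre-check before the engineer arrives. Here's a rough sketch that can be run from each Hyper-V node; the hostnames are hypothetical, so substitute your own DNS names, domain controller, and peer node:

    import socket
    import subprocess

    # Hypothetical names; substitute your own environment's values.
    HOSTS_TO_RESOLVE = ["dc01.corp.example.com", "scvmm01.corp.example.com"]
    PEER_NODE = "ntnx-node2.corp.example.com"
    DOMAIN_CONTROLLER = "dc01.corp.example.com"

    # 1. DNS: the domain controller and SCVMM server must resolve from every node.
    for host in HOSTS_TO_RESOLVE:
        try:
            print(f"{host} -> {socket.gethostbyname(host)}")
        except socket.gaierror:
            print(f"DNS FAILURE: {host} does not resolve")

    # 2. Jumbo frames: ping with "do not fragment" (-f) and an 8972-byte
    #    payload (-l); 8972 + 28 bytes of headers fills a 9000-byte MTU,
    #    so this fails if any hop in the path lacks jumbo frame support.
    #    (Windows ping syntax shown.)
    subprocess.run(["ping", "-f", "-l", "8972", "-n", "2", PEER_NODE])

    # 3. AD authentication: basic TCP reachability to Kerberos and LDAP.
    for port in (88, 389):
        try:
            socket.create_connection((DOMAIN_CONTROLLER, port), timeout=3).close()
            print(f"{DOMAIN_CONTROLLER}:{port} reachable")
        except OSError:
            print(f"AD FAILURE: cannot reach {DOMAIN_CONTROLLER}:{port}")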

None of these are directly related to Nutanix or the Nutanix storage cluster. Next we can get into the Hyper-V side of things.