Here is a solution that I have configured, worked with, and feel provides an adequate option for server virtualization in locations where blade servers are not a good fit. Don’t get me wrong, I love blade servers, but from a cost perspective it doesn’t always make financial sense to buy costly infrastructure if you’re not going to use it. So this is aimed at small-to-medium server virtualization deployments that can also scale if the environment grows. This configuration is also somewhat simpler, in my opinion, because it doesn’t depend on Fibre Channel switches and storage. You could deploy it in just about any small-to-medium site, provided 1GigE Ethernet networking is in place.
This is what would be used in the configuration:
DL380 G6/G7 Server ESX Host Configuration
This is a high level setup and build configuration that could be configured in many different ways depending on the requirements. Here are some of the basic configurations I do.
- The server hardware should be a minimum of 2x DL380 G6/G7 with 2x CPU.
- Adequate memory should be sized and determined per requirements.
- An additional 2x network cards will be required for iSCSI.
- Odd ports should be connected to core switch A
- Even ports should be connected to core switch B
- 2x 72GB drives for the vSphere hypervisor installation.
Here is a logical view of how the network would be cabled to redundant switches for the ESX host servers. Trunking is used on the “VM Network” port group to allow multiple VLANs.
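As a rough sketch, the host-side networking above could be built from the ESX service console with the `esxcfg-vswitch` commands. The NIC names (vmnic1/vmnic2) and VLAN ID 100 below are placeholders I’ve assumed for illustration, not values from the design:

```shell
# Create a vSwitch for VM traffic and attach one NIC going to
# core switch A and one going to core switch B (odd/even cabling).
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# Add the "VM Network" port group and tag it with a VLAN ID so the
# trunked uplinks can carry multiple VLANs to the guests.
esxcfg-vswitch -A "VM Network" vSwitch1
esxcfg-vswitch -v 100 -p "VM Network" vSwitch1

# Verify the layout.
esxcfg-vswitch -l
```

You would repeat the port-group/VLAN step once per VLAN you want presented to VMs; the physical switch ports on both core switches must be configured as trunks carrying those VLANs.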
iSCSI Storage Configuration
This is a high level setup and build configuration of the HP Lefthand iSCSI storage arrays.
- There should be 2x 1GigE ports installed in each HP P4000 series array.
- There should be redundant network switches connecting the redundant storage arrays to the ESX hosts.
- Array A should have 2x 1 GigE ports connected to switch A with LACP and Flow Control enabled
- Array B should have 2x 1 GigE ports connected to switch B with LACP and Flow Control enabled
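On the switch side, the two array ports land in an LACP bundle with flow control enabled. The original doesn’t name a switch vendor, so as one hypothetical example in Cisco IOS syntax (ports, channel number, and VLAN are all assumed placeholders):

```
! Two switch-A ports facing Array A, bonded with LACP
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active
 flowcontrol receive on
 spanning-tree portfast
!
interface Port-channel1
 switchport mode access
 switchport access vlan 20
```

The same fragment would be applied on switch B for Array B’s two ports. `mode active` is what makes the bundle negotiate via LACP, matching the adaptive load-balancing/LACP bond configured on the P4000 side.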
Here is a logical view of how the network would be cabled to redundant switches for the iSCSI storage arrays. LACP and Flow Control are enabled on the ports.
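To tie the two halves together, each ESX host also needs a VMkernel port and the software iSCSI initiator pointed at the storage. A minimal sketch from the service console, assuming a dedicated vSwitch2 for storage, a made-up iSCSI subnet, and vmhba33 as the software initiator (all placeholders):

```shell
# VMkernel port group for iSCSI traffic on the storage vSwitch
esxcfg-vswitch -A "iSCSI" vSwitch2
esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 "iSCSI"

# Enable the ESX software iSCSI initiator
esxcfg-swiscsi -e

# Add the P4000 cluster virtual IP as a discovery address, then rescan
vmkiscsi-tool -D -a 10.0.20.10 vmhba33
esxcfg-rescan vmhba33
```

After the rescan, the LeftHand volumes presented to the host should appear as available storage for VMFS datastores.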
To slim down the solution you could also use the DL360 model, which doesn’t have as many drive slots or PCI slots but could work in certain scenarios. If it’s not apparent how to scale the server and storage infrastructure: you just add additional storage nodes and/or attach additional storage shelves to the current arrays, depending on whether you need capacity and/or NIC bandwidth. In this case I opted for the 1GigE model, but you could get the 10GigE port model. On the ESX host side, you just add additional servers to the cluster for more compute and memory resources. You’ll also get additional redundancy, so don’t forget this when you’re sizing the design up front.