HP DL380 and P4500 for VMware Virtualization


Here is a solution that I have configured, worked with, and feel provides an adequate option for server virtualization in locations where blade servers are not a good fit. Don't get me wrong, I love blade servers, but from a cost perspective it doesn't always make financial sense to buy that infrastructure if you're not going to use it. So this is aimed at small to medium server virtualization deployments that can also scale if the environment grows. This configuration is also somewhat simpler, in my opinion, because it's not dependent on Fibre Channel switches and storage. You could deploy it at just about any small to medium site, provided 1GbE Ethernet networking is in place.

This is what would be used in the configuration:

DL380 G6/G7 Server ESX Host Configuration

This is a high-level setup and build configuration that could be arranged in many different ways depending on the requirements. Here are some of the basic configurations I do:

  1. The server hardware should be a minimum of 2x DL380 G6/G7 servers, each with 2x CPUs.
  2. Adequate memory should be sized per the requirements.
  3. An additional 2x network cards will be required for iSCSI.
  4. Odd-numbered ports should be connected to core switch A.
  5. Even-numbered ports should be connected to core switch B (see the cabling sketch after this list).
  6. 2x 72GB drives for the vSphere hypervisor installation.
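
Below is a minimal sketch of the odd/even port cabling described in items 4 and 5. It is illustrative only: the vmnic names and the port count are assumptions, not values taken from a specific build.

```python
# Minimal sketch of the odd/even NIC-to-switch cabling plan above.
# The vmnic names and port count are illustrative assumptions.

HOST_NICS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3",  # onboard ports
             "vmnic4", "vmnic5"]                      # add-in card ports for iSCSI

def cabling_plan(nics):
    """Map each NIC to a core switch: odd ports to switch A, even ports to switch B."""
    plan = {}
    for port_number, nic in enumerate(nics, start=1):  # count physical ports from 1
        plan[nic] = "core-switch-A" if port_number % 2 == 1 else "core-switch-B"
    return plan

for nic, switch in cabling_plan(HOST_NICS).items():
    print(f"{nic} -> {switch}")
```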

Here would be a logical view of how the network would be cabled to redundant switches for the ESX host servers. Trunking is used on the “VM Network” to allow multiple VLANs.
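
As a rough illustration of that trunked “VM Network”, here is a sketch of how port groups and VLANs might be laid out on a standard vSwitch. The port group names and VLAN IDs are placeholders, not values from the actual design.

```python
# Hypothetical port group to VLAN mapping behind the trunked "VM Network" uplinks.
# The physical switch ports carrying these uplinks would be 802.1Q trunks.

PORT_GROUPS = {
    "Management":    {"vswitch": "vSwitch0", "vlan": 10},
    "VM Network 20": {"vswitch": "vSwitch0", "vlan": 20},
    "VM Network 30": {"vswitch": "vSwitch0", "vlan": 30},
}

def trunk_vlans(port_groups):
    """Return the sorted set of VLAN IDs the trunk ports must allow."""
    return sorted({pg["vlan"] for pg in port_groups.values()})

print("VLANs to allow on the trunk:", trunk_vlans(PORT_GROUPS))
```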

iSCSI Storage Configuration

This is a high-level setup and build configuration of the HP LeftHand iSCSI storage arrays.

  1. There should be 2x 1GigE ports installed in each HP P4000 series array.
  2. There should be redundant network switches connecting the redundant storage arrays to the ESX hosts.
  3. Array A should have 2x 1GigE ports connected to switch A with LACP and Flow Control enabled.
  4. Array B should have 2x 1GigE ports connected to switch B with LACP and Flow Control enabled (see the sketch after this list).
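
One constraint worth keeping in mind (it comes up again in the comments below): an LACP bond needs its member ports on the same switch, or on switches that are stacked/linked together. The check below is a small sketch of that rule, using made-up array and switch names.

```python
# Sanity-check sketch: each array's LACP bond keeps both ports on one switch,
# and the two arrays land on different switches for redundancy.
# Array and switch names are illustrative.

BONDS = {
    "P4500-array-A": ["switch-A", "switch-A"],
    "P4500-array-B": ["switch-B", "switch-B"],
}

def check_bonds(bonds):
    for array, switches in bonds.items():
        if len(set(switches)) != 1:
            raise ValueError(f"{array}: LACP member ports must share one switch")
    if len({switches[0] for switches in bonds.values()}) != len(bonds):
        raise ValueError("arrays should be spread across switches for redundancy")
    print("Bond layout looks sane")

check_bonds(BONDS)
```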

Here would be a logical view of how the network would be cabled to redundant switches for the iSCSI storage arrays. LACP and Flow Control are enabled on the ports.

To slim down the solution you could also use the DL360 model, which doesn't have as many drive slots or PCI slots but could work in certain scenarios. If it's not apparent how to scale the server and storage infrastructure: you add more storage nodes and/or attach additional storage shelves to the current arrays, depending on whether you need more capacity and/or NIC bandwidth. In this case I opted for the 1GbE model, but you could go with the 10GbE port model. On the ESX host side, you just add servers to the cluster for more compute and memory resources. You also gain additional redundancy, so don't forget this when you're sizing the design up front.
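
As a rough illustration of the scaling math, here is a back-of-the-envelope sketch. The per-node numbers are assumptions for illustration only; a real sizing exercise would start from measured workload requirements, and usable capacity depends on the Network RAID level you choose.

```python
# Back-of-the-envelope scaling sketch. Per-node numbers are illustrative assumptions.

GBPS_PER_NODE = 2        # 2x 1GigE bonded per P4000 node (1GigE model)
RAW_TB_PER_NODE = 7.2    # hypothetical raw capacity per storage node

def storage_estimate(nodes, mirrored=True):
    """Aggregate NIC bandwidth and usable capacity across the cluster,
    assuming two-way Network RAID mirroring when mirrored=True."""
    bandwidth_gbps = nodes * GBPS_PER_NODE
    usable_tb = nodes * RAW_TB_PER_NODE / (2 if mirrored else 1)
    return bandwidth_gbps, usable_tb

for node_count in (2, 4):
    gbps, tb = storage_estimate(node_count)
    print(f"{node_count} nodes: ~{gbps} Gbps aggregate, ~{tb:.1f} TB usable (mirrored)")
```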

18 Comments

  1. CaRaBeeN

    That's also how we're going to build our setup.
    For SMB companies this is a really wise solution, with the benefits of scalability, performance, HA and business continuity.
    Well done

  2. Zeus Hunt

    Was wondering, would you suggest Link Aggregation NIC teaming over Adaptive Load Balancing on the P4500s?

    I know LACP yields 2Gbps read and write performance, versus 2Gbps read and 1Gbps write for ALB.

    • Antone Heyward

      I actually can’t speak to the real benefits of one versus the other because I have not tried ALB. I think it would really depend on your workload though. In some environments you would probably not see a difference between LACP and ALB. We just happened to stick with what we know in this case.

  3. Doug

    “To slim down the solution you could also use the DL360 model”

    You certainly can and it works very well. I've installed P4xxx SANs and DL360s at several customer sites. Just remember to add an NC364T (or similar) 4-port network card.

    I've been hugely impressed by the LeftHand kit, still can't believe how simple it is to set up.

    • Antone Heyward

      That's a pretty open question. I would typically set up all ESX hosts with the same configuration, whether it's 1GbE or 10GbE Ethernet. I'd usually do two sets of bonded NICs to physically separate the management & FT port groups from the VM network port groups and VLANs. But you could also bond all four NICs together and then use port groups and VLANs to logically separate the traffic.

  4. Naz

    Awesome post, two questions. Do you create a trunk between switches to cope if one of your iSCSI switches fails?

    Also, does each ESX host cable one NIC to each storage switch in the above config, or do you cable both NICs used for storage to each SAN switch?

  5. Ken Coley

    Antone,

    How did you get your DL380 to join as an LACP node? I tried using the various ESXi settings for NIC teaming and cannot get the port channel to come up. I have the interfaces from the DL380 going to a Cisco 3560G-24PS. I see the interfaces up, but I see the port channel as unassigned.

    • Antone Heyward

      @Ken, let me be the first to say I am not a network engineer, so I don't work directly with configuring switches and routers. You'll have to get someone with more networking knowledge than me for that question, because you certainly don't want to get the networking wrong. I will say that, from what I know, LACP has to be configured either for ports on the same switch or for ports on switches that are linked together; you can't configure LACP across ports on separate, unlinked switches. Maybe someone can confirm that, but I think that's why you may be having issues.

      Another note: I'm not sure which links you're setting up LACP for? It's not needed for the iSCSI links on the ESX host.

  6. Naz

    Thanks Antone! :> I am new at this type of setup and appreciate your help.

    At present I have 2x P4500 nodes, 2x HP 2510-24G switches (only for storage) and 2x DL380 G6s with 6 NICs, plus separate switches for LAN stuff.

    I was originally looking at using ALB, but it looks like I won't get 2Gbps for writes under it, so I am thinking of using Link Aggregation Dynamic Mode, which from the looks of it is what you are using? (according to the HP guide)

    My question is, I am going to use two of my NICs for iSCSI. I am guessing I put them on the one vSwitch and then set them as active/active?

    If that's right, should I then be connecting one cable from each NIC on the DL380 to each SAN switch, so at the end of the day I have a link going to each switch rather than both links going to the one switch, correct?

    I assume you then enable LACP and flow control on the P4500 via the CMC and on the switch ports it's plugged into. No need to do anything on the ESXi side?

    I also won’t have a trunk connection on the switches for the SAN.

    Thanks

  7. Ken Coley

    Antone,

    Thanks for the reply. I was able to get PAgP working to the Cisco switch but not LACP. I will follow up with VMware on it. I was just curious what your setup between the DL380s and the switches was.

  8. Naz

    Antone,

    Another short question about the cabling diagram above: how does Network RAID work if both nodes are plugged into separate switches and there is no trunk between them?

    From what I know the P4500 only comes with two NICs, so if you use Network RAID I am trying to figure out how the two nodes would keep in sync.

    Thanks, and sorry for all the questions.

    • Antone Heyward

      I happen to use a bond for all my configurations, but you're not forced to. If you don't bond the NICs, then follow HP's recommendations found in the help and admin guide.

      Taking from HP’s help:
      “””””””
      When you initially set up a storage node using the Configuration Interface, the first interface that you configure becomes the interface used for the SAN/iQ software communication.

      To select a different communication interface:

      • In the navigation window, select the storage node and log in.
      • Open the tree and select the TCP/IP Network category.
      • Select the Communication tab to bring that window to the front.
      • Select an IP address from the list of Manager IP Addresses.
      • Click Communication Tasks and select Select SAN/iQ Address.
      • Select an ethernet port for this address.
      • Click OK.

      Now, this storage node connects to the IP address through the ethernet port you selected.

      “””””””””

      It's also recommended to have each NIC on a separate subnet.

      Hope this helps.

  9. Ray

    Hello there, I have a question and hope you can help me… I have a 28.8TB multi-site SAN with all 4 nodes configured in one location. Now I would like to take out 2 nodes and place them in a different site.
    Is there an easy way to do so without losing any data?
    Thanks
    Ray

    • Antone Heyward

      Sorry, but I have not had to remove storage nodes from a LeftHand cluster yet. Depending on your Network RAID configuration for the individual LUNs, I don't see why you couldn't do what you're trying. You may want to consult HP support for more assistance.

  10. Hal

    We are putting in a similar setup at our school. How many days would you say are required to set up and configure this network? We already have a lot of the infrastructure, such as switches, in place.

    Thanks
    Hal
