HP P4500 Configuration


Summary:

This document outlines the process for installing and configuring the HP P4500 SAN appliance. This document is NOT meant for the HP VSA. It assumes that the audience has basic knowledge of storage area networks. These configurations are mainly geared toward use with VMware vSphere 4 but could be used in other environments. The creator of this document is in no way affiliated with VMware or HP and is not responsible for any adverse effects, performance issues, or downtime that may occur by performing the steps in this document. In other words – “Use at your own risk!”

For VMware ESX Configuration – here

Requirements:

  • 2x HP P4500 SAN Appliances
  • KVM access to the HP P4500
  • 2x NIC cables per HP P4500, both going to the same switch (with Flow Control turned ON)
  • HP LeftHand Central Management Software
  • HP Failover Manager (FOM) for ESX


Standard Cabling Diagram:

Each HP P4500 must be connected to a different switch for redundancy.

Naming Convention:


The following naming convention must be followed.

Physical HP P4500 appliance names should include “HSA” (Hardware SAN Appliance) for the 3-character application part of the naming convention.
Example: At site LAB, “LABHSA01A” and “LABHSA01B”

Virtual SAN appliance names should include “VSA” (Virtual SAN Appliance) for the 3-character application part of the naming convention.
Example: At site LAB, “LABVSA01A” and “LABVSA01B”

Management Groups should include MG_ – Example: MG_LAB_1

Clusters should include CL_ – Example: CL_LAB_1
Volumes should include VOL_ – Example: VOL_LUN01
Snapshots should include _SS_ – Example: VOL_LUN01_SS_1
Remote Snapshots should include _RS_ – Example: VOL_LUN01_RS_1
Scheduled Snapshots should include _Sch_SS_ – Example: VOL_LUN01_Sch_SS_1
Scheduled Remote Snapshots should include _Sch_RS_ – Example: VOL_LUN01_Sch_RS_1
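The convention above can be sanity-checked with a short script before objects are created. The patterns below are an illustration only (they assume a three-uppercase-letter site code, e.g. LAB) and are not part of any HP tool:

```python
import re

# Hypothetical helper (not part of any HP tool): validate a proposed name
# against the conventions above. Site codes are assumed to be three
# uppercase letters.
PATTERNS = {
    "hardware_node":    re.compile(r"^[A-Z]{3}HSA\d{2}[A-Z]$"),  # e.g. LABHSA01A
    "virtual_node":     re.compile(r"^[A-Z]{3}VSA\d{2}[A-Z]$"),  # e.g. LABVSA01A
    "management_group": re.compile(r"^MG_[A-Z]{3}_\d+$"),        # e.g. MG_LAB_1
    "cluster":          re.compile(r"^CL_[A-Z]{3}_\d+$"),        # e.g. CL_LAB_1
    "volume":           re.compile(r"^VOL_\w+$"),                # e.g. VOL_LUN01
    "snapshot":         re.compile(r"^VOL_\w+_SS_\d+$"),         # e.g. VOL_LUN01_SS_1
}

def check_name(kind: str, name: str) -> bool:
    """Return True if `name` follows the convention for the given object kind."""
    return bool(PATTERNS[kind].fullmatch(name))
```

Running a proposed name through such a check is cheap insurance against typos that are tedious to rename later in the CMC.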

HP P4500 Installation:


  1. Using a keyboard and monitor connect directly to the physical HP P4500.
  2. Turn on the system.
  3. At the login prompt type: start
  4. Hit Enter at the “Login screen”
  5. Using the arrow keys move to the “Network TCP/IP Settings” then Enter
  6. Select “eth0” then Enter
  7. Enter the hostname
  8. Enter a static IP address, subnet mask, and gateway, then press Enter
  9. Click OK
  10. Log out
  11. Repeat the steps for each HP P4500
  12. Have a network admin configure “Flow Control” on the network ports being used.

HP Failover Manager (FOM) Installation: (VMware VM Guest)

  1. Execute the autorun.exe
  2. Click “Agree” for the license agreement
  3. Click the “Failover Manager” link
  4. Click the “Install FOM for ESX”
  5. Click Next
  6. Select “I accept the terms of the license agreement” then click Next
  7. Choose the installation path or take the defaults then click Next
  8. Click “Install”
  9. Click “OK” to the “You must still deploy the Failover Manager files to a VMware Server.”
  10. Then click “Finish”
  11. Open the VMware viclient and login to vCenter
  12. Select the ESX host that will host the FOM
  13. Select the “Configuration” tab then “Storage”
  14. Select the local datastore of the ESX host then right click and Browse the datastore
  15. At the root (/) of the datastore, upload the folder with the FOM and all its contents
  16. When the upload is done, rename the folder to whatever the FOM’s hostname will be.
  17. Browse into the FOM’s directory, right click the *.vmx file and “Add to Inventory”
  18. Type the name then click Next
  19. Select the Host/Cluster then click Next
  20. Select the Host then click Next
  21. Verify the information then click Next
  22. Power on the FOM.
  23. At the login prompt type: start
  24. Hit Enter at the “Login screen”
  25. Using the arrow keys move to the “Network TCP/IP Settings” then Enter
  26. Select “eth0” then Enter
  27. Enter the hostname
  28. Enter a static IP address, subnet mask, and gateway, then press Enter
  29. Click OK
  30. Log out


Central Management Console (CMC) Installation:

  1. Execute the autorun.exe
  2. Click “Agree” for the license agreement
  3. Select the “Centralized Management Console”
  4. Click the “Install CMC”
  5. Click Next
  6. Select “I accept the terms of the License Agreement” then click Next
  7. Select “Typical” then click Next
  8. Choose the installation path or take the defaults then click Next
  9. Select the “Desktop” checkbox then click Next
  10. Choose “NO” to not start the manager then click Next
  11. Verify the information then click “Install”
  12. When the installation is complete click Done

CMC Standard Configuration:

Definitions:

  • Management Groups—Management groups are collections of storage nodes within which one or more storage nodes are designated as managers. Management groups are logical containers for the clustered storage nodes, volumes, and snapshots.
  • Servers—Servers are application servers that you set up in a management group and assign to a volume to provide access to that volume.
  • Clusters—Clusters are groupings of storage nodes within a management group. Clusters contain the data volumes and snapshots.
  • Volumes—Volumes store data and are presented to application servers as disks.
  • Snapshots—Snapshots are copies of volumes. Snapshots can be created manually, as necessary, or scheduled to occur regularly. Snapshots of a volume can be stored on the volume itself, or on a different, remote, volume.
  • Remote Copies—Remote copies are specialized snapshots that have been copied to a remote volume, usually at a different geographic location, using the SAN/iQ software feature, Remote Copy.
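The containment described in these definitions (management groups hold clusters and servers; clusters hold volumes; volumes have snapshots) can be sketched as a simple object model. This is an illustration of the hierarchy only, not an HP API:

```python
from dataclasses import dataclass, field

# Illustrative model of the SAN/iQ object hierarchy described above.
# Class and field names are this document's own, not HP's.
@dataclass
class Volume:
    name: str
    size_gb: int
    snapshots: list = field(default_factory=list)   # snapshot names

@dataclass
class Cluster:
    name: str
    storage_nodes: list                             # node hostnames
    volumes: list = field(default_factory=list)

@dataclass
class ManagementGroup:
    name: str
    clusters: list = field(default_factory=list)
    servers: list = field(default_factory=list)     # application servers granted volume access

# The naming convention from earlier in the document, applied top-down:
mg = ManagementGroup("MG_LAB_1")
cl = Cluster("CL_LAB_1", storage_nodes=["LABHSA01A", "LABHSA01B"])
mg.clusters.append(cl)
cl.volumes.append(Volume("VOL_LUN01", size_gb=500))
```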


  1. Open the CMC
  2. Go to “Help” then “Preferences”
  3. Uncheck the “SmartClones Volumes” element
  4. Check the “Management Groups” element
  5. Check the “Clusters” element
  6. Check the “Volumes” element


  1. Expand “Available Nodes” – If there are no Available Nodes, use the “Find” option in the menu
  2. Expand the first storage node then click “TCP/IP Network”
  3. Click the “TCP Status” tab and verify that “NIC Flow Control” is ON for each network adaptor.
  4. Click the “TCP/IP” tab.
  5. Either select both network adaptors and right click or click “TCP/IP Tasks” then select “New Bond”
  6. Change the Type to “Link Aggregation Dynamic Mode (802.3ad)”
  7. Configure the correct IP Address, Subnet Mask, and Default Gateway then click “OK”.
  8. Click “OK” to finalize the configuration.
  9. Now have a network admin configure LACP on the corresponding switch ports.
  10. Click the “DNS” tab
  11. Configure the DNS Domain Name, DNS Domain Servers, and DNS Suffixes.
  12. Click the storage node at its top level.
  13. Click the “Details” tab and verify the RAID: is Normal.
  14. Repeat steps 2 through 13 for each storage node.

Create Management Group:

  1. Right click on a storage node
  2. Select “New Management Group”
  3. Name the “Management Group”, i.e. “MG_LAB_1”
  4. Select All nodes to be added to the management group then click Next
  5. Enter the User Name and Password then click Next
  6. Add a NTP server (domain controller) then click Next
  7. Select a cluster type of “Standard Cluster” then click Next
  8. Name the “Cluster” i.e. “CL_LAB_1” then click Next
  9. Click Add, then enter the cluster’s virtual IP address and subnet mask, then click OK
  10. Click Next
  11. Check “Skip Volume Creation” then click Finish.


Add FOM to Management Group:

  1. Expand the Management Group
  2. Right click the Virtual Manager if it exists
  3. Select “Delete Virtual Manager”
  4. Click “OK”
  5. Right click the FOM.
  6. Click “Add to Existing Management Group”
  7. Select the Group Name
  8. Click “Add”
  9. Right click the “Sites”
  10. Name the site with the 3-character site code, e.g. LAB
  11. Check the “Make this site primary”
  12. Add all the storage nodes and the FOM
  13. Click “OK”
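The FOM exists to preserve quorum: a management group stays online only while a majority of its managers are reachable, so a two-node group needs the FOM as a third, tie-breaking vote. A minimal sketch of that arithmetic (illustrative only, not an HP API):

```python
# Illustrative quorum arithmetic: a management group stays online only
# while a strict majority of its managers are reachable.
def has_quorum(total_managers: int, reachable: int) -> bool:
    return reachable > total_managers // 2

# Two storage nodes alone: losing either node loses quorum,
# because 1 of 2 is not a strict majority.
assert not has_quorum(2, 1)

# Two storage nodes plus a FOM: the group survives one node failure (2 of 3).
assert has_quorum(3, 2)
```

This is also why (as the comments on this post discuss) the FOM only needs management-network reachability to every node, not a presence on the iSCSI data network.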


Create Volume:

  1. Right click the “Cluster” or select the cluster Details tab then click “Cluster Tasks”
  2. Select “New Volume”
  3. Give a name to the volume, i.e. “VOL_ESX_LUN01”
  4. Give a size for the volume
  5. Click the “Advanced” tab.
  6. Set the “Data Protection Level” to “Network RAID-10 (2-Way Mirror)”
  7. Set the “Provisioning” to “Thin”
  8. Click “OK”
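Network RAID-10 (2-Way Mirror) keeps two copies of each block across storage nodes, while thin provisioning consumes cluster space only as data is actually written. A rough sketch of the resulting space consumption (illustrative arithmetic only, not an HP tool):

```python
# Illustrative space accounting for a thin-provisioned volume under
# Network RAID-10 (2-way mirror): every written block is stored twice.
def cluster_space_consumed_gb(written_gb: float, mirrors: int = 2) -> float:
    """Approximate cluster capacity consumed by the data written so far."""
    return written_gb * mirrors

# A thin 500 GB volume with only 120 GB written consumes roughly
# 240 GB of cluster capacity, not the full mirrored 1000 GB.
assert cluster_space_consumed_gb(120) == 240
```

The practical consequence: size thin volumes generously, but monitor cluster utilization, since every gigabyte written costs two at the Network RAID-10 protection level.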


Add Server:

  1. Under the Management Group right click the “Servers” folder or click “Server Tasks”
  2. Select “New Server”
  3. Give a Name – the name should match the actual server
  4. Check the “Allow access via iSCSI”
  5. Check the “Enable load balancing”
  6. Under the Authentication section, select “CHAP not required”
  7. Enter the “Initiator Node Name” (This is the IQN from the ESX or Windows Server)
  8. Click “OK”
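The Initiator Node Name entered in step 7 must be an iSCSI qualified name (IQN) as defined in RFC 3720: the literal prefix “iqn.”, a year-month, a reversed domain, and usually a colon-separated unique suffix. A loose, illustrative sanity check (the example IQN below is hypothetical, since each host generates its own):

```python
import re

# Loose sanity check for an iSCSI qualified name (IQN) per RFC 3720:
# "iqn." + yyyy-mm + "." + reversed naming authority, optional ":" + unique part.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

def looks_like_iqn(name: str) -> bool:
    return bool(IQN_RE.fullmatch(name))

# A typical ESX-style initiator name (the exact suffix varies per host):
assert looks_like_iqn("iqn.1998-01.com.vmware:esx01-123abc45")

# A bare hostname is not an IQN and would be rejected by the SAN:
assert not looks_like_iqn("esx01.lab.local")
```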


Assign Storage to Server:

  1. Select the server in CMC then select the “Volumes and Snapshots” tab
  2. Either right click the server or click “Tasks”, then select “Assign and Unassign Volumes and Snapshots…”
  3. Check the “Assigned” for the volumes
  4. Set permissions to “Read/Write”


Feature Registration:

  1. Go to https://webware.hp.com/ and register the storage nodes using the License Documentation that came with the HP P4300/P4500 Storage System LTU.
  2. Click the Generate New License
  3. Login with the HP Passport account
  4. Follow the instructions then use the license key
  5. Repeat for each storage node


theHyperAdvisor.com


8 Comments

  1. Michael Phillips

    The FOM installation, ESX, does it need to be connected to the iSCSI networks? We are using two networks for our iSCSI traffic, A and B subnets. Does this have to be on both or just one? Background we have two P4500’s.

  2. Antone Heyward

    The FOM does not need to be on the iSCSI network. It will need to be able to communicate with each node. It’s primarily used as a majority vote (tie-breaker) if the physical nodes can’t communicate with each other. And the FOM is not needed if you have 3 or more physical nodes.

  3. murat

    Hi,

    We have HP LeftHand P4500 storage with 3 LUNs. On the other hand we have 3 blade servers, and Win Server 2008 R2 is running on each of them. Also the Hyper-V role is enabled.

    LUN1 has assigned blade1

    LUN2 has assigned blade2

    LUN3 has assigned blade3

    After blade firmware updates, LUN1 is seen as RAW file format from blade1.

    Blade models:BL460c G6

    Storage models:HP P4500

    Opr Systems:Server 2008 R2

    Virtulazition Systems:Hyper-V

    How to change RAW to NTFS again…

    Thanks

  4. kashif

    Hello, we have two HP p4330 nodes and we are about to configure the FOM. Please can you help us: does the FOM vApp have to be on the iSCSI network? I read your earlier post where you said the FOM does not need to be on the iSCSI network; if the FOM is not on the iSCSI network, how will it provide quorum?
    We have two p4330s and they have four NICs each. I have created two bonds, one for iSCSI and one for the management network. If I connect from the management network I am not able to log in to the FOM because it’s not on the management network. How can we configure the FOM so we can log in via the management network as well? Are we missing something?

    • Antone Heyward

      Basically the FOM doesn’t need access to the iSCSI storage itself, but it does need to be able to communicate with the 2 nodes in the cluster. The FOM is only used as a quorum tie-breaker in a 2-node configuration. Hope this helps.

  5. kashif

    Hi Antone

    Thanks for the reply. I was able to configure it. What I did: I installed the free ESXi hypervisor on one of the HP ML110 servers and deployed the FOM virtual appliance. The FOM virtual appliance comes with two virtual NICs; I connected one to the iSCSI network and one to the management network, and all worked fine.

    regards

    kashif

  6. Kurt Grootjans

    Hi Antone, in fact a FOM is needed when you have an even number of Storage systems to make quorum. So when you have 3, 5 or 7… storage systems it’s not needed.
