
Configuration best practices for deploying VMware vSphere 4.1 on the HP P2000 G3 MSA Array combo controller

Technical white paper

Table of contents

Executive summary
Challenges
P2000 G3 configuration
   Introduction of concepts and features
   Unified LUN presentation (ULP)
   ULP failover
VMware vSphere 4.1
   vSphere 4.x ALUA compliance
HP P2000 G3 MSA configuration
   Array hardware configuration and cabling
   Best practice when updating firmware
   Configuring the HP P2000 G3 MSA array for iSCSI deployment
   Configuring the HP P2000 G3 MSA array for FC deployment
   Best practice to maximize capacity and disk usage
   Best practice in selecting disk type
   Best practice when using multiple enclosures
   Best practice when selecting the controller ownership for the vdisk
   Best practice for storage configurations with several disks
   Best practice for using a RAID level
   Best practice to change the default name of the volumes
   Best practice for volume mapping
   Best practice for fault tolerant configurations
ESX(i) 4.1 iSCSI configuration
   ESX vmnic settings
   vNetwork distributed switch
   Process for vmhba bond
   Configuring the P2000 G3 and provisioning a LUN to ESXi hosts
   VMware multi-pathing with iSCSI
ESX(i) 4.1 FC configuration
   Configuring the P2000 G3
   Provisioning LUNs to the ESXi hosts
   Best practice for naming hosts
   Mapping volumes to the ESX or ESXi host
ESX 4.1 configuration
   ESX multi-path considerations
   Best practice for setting P2000 G3 MSA active-active arrays
   VMware vSphere 4 multi-pathing framework
   Best practice for changing the default PSP option
   Best practice for configuring the default in a multi-vendor SAN configuration
   Additional VMware considerations
Third-party multi-path plugins
Summary
Appendix A
   Installing Brocade 8Gb fibre channel drivers
Appendix B – P2000 G3 performance monitoring
For more information

Executive summary

The HP P2000 G3 MSA array is designed for the small to medium-size datacenter where there is a critical need for improved storage utilization and scalability. The P2000 G3 meets application-specific demands for transaction I/O performance for small to mid-range customers. It provides easy capacity expansion, snapshot-based replication and simplified storage administration. The P2000 G3, combined with the HP Storage Management Utility (SMU) and VMware vSphere 4.1, provides a comprehensive solution designed to simplify management and maximize performance for VMware infrastructures. HP continues to improve and develop best practices for the HP P2000 G3 and VMware ESX.

This white paper discusses best practices for deploying VMware vSphere ESX 4.1 on the HP P2000 G3 MSA Array combo controller. It also discusses the P2000 G3 dual controller FC and iSCSI with ESX 4.1 software iSCSI enabled; it does not cover SAS support.

Target audience: VMware vSphere 4.1/SAN administrators who are familiar with VMware ESX/ESXi 4.1 and its vStorage features. This white paper describes testing performed in September 2010.

Challenges

With vSphere 4.1, VMware continues to stretch the boundaries of scalability. At release, vSphere 4.1 supports virtual machines with up to 255 GB of RAM and 8-way virtual SMP, hardware acceleration with the vStorage APIs for Array Integration (VAAI), storage performance statistics, Storage I/O Control and many more features. Additionally, vSphere 4.1 supports up to 320 virtual machines on a properly configured single host, offers a storage stack architected for modularity, and integrates intimately with storage arrays. For administrators, this feature-packed hypervisor raises several questions about effectively configuring, tuning and deploying vSphere 4.1 in their respective SANs. What is the best way to configure my storage? What is the best I/O path policy to use for my storage? How do I reduce storage management effort and keep it simple, even in a complex environment with multiple storage systems and protocols? How do I effectively monitor a vSphere 4.1 SAN so I can make the necessary adjustments when needed? The ability to answer these questions quickly and take the appropriate action is critical to all vSphere 4/SAN administrators trying to meet their company's business objectives and maximize the return on investment (ROI) from their vSphere 4 SAN. This paper addresses these challenges and provides administrators with key information.

P2000 G3 configuration

Introduction of concepts and features

HP P2000 G3 MSA System
The P2000 G3 is an Asymmetric Logical Unit Access (ALUA)-compliant storage system. The P2000 G3 carries forward a concept from its predecessor, the HP 2000 Modular Smart Array (MSA2000) G2, by using Unified LUN Presentation (ULP). ULP exposes all LUNs through all of the host ports on the two controllers, and appears to the host as an active-active storage system in which the ESX 4.1 host can choose any available path to access a LUN, regardless of the vdisk/LUN ownership.


The P2000 G3 combo controller, with both 8Gb fibre channel and 1GbE iSCSI ports, provides small businesses and departments with room for future growth and expansion at a fraction of the cost. The two 8Gb fibre channel ports provide high-speed access to data. The two 1GbE iSCSI ports allow smaller departments or small businesses to deploy shared storage for VMware environments without the cost of implementing an FC infrastructure. In certain cases, a combination of the two can also be deployed, for example in large companies with several smaller departments and/or remote locations that need shared storage, disaster recovery, or backups. Implementing a dual-protocol (FC and iSCSI) configuration allows high-speed access through the 8Gb FC ports, while the 1GbE iSCSI ports can be used for remote snapshots to another location. For more information regarding case studies and best practices on the combo controller, please refer to the HP technical white paper, "Use cases and best practices for HP StorageWorks P2000 G3 MSA FC/iSCSI Combo Controller" at: http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA1-0941ENW&cc=us&lc=en

Figure 1. HP P2000 G3 array

Unified LUN presentation (ULP)

ULP presents all LUNs to all host ports, eliminating the need for an interconnect path between the controller units, and presents the same World Wide Node Name (WWNN) for both controllers. No duplicate LUNs are allowed between controllers, and either controller can use any unused logical unit number. As per the ALUA specification, the preferred path, which is identified by the Report Target Port Groups (RTPG) response, indicates the owning controller, giving it the optimal path to the array. The owning controller always performs the I/O to disk.


Figure 2. High-level overview of how all LUNs are presented to the host

ULP failover

When a controller unit fails on a P2000 G3 array with dual or combo controllers, ownership transfers from the failing controller to the surviving (backup) controller in the array; single-controller configurations do not provide this failover. For example, in figure 3 we encounter a failure on controller unit B. As the failure occurs, the vdisk ownership transfers to controller unit A. The same single World Wide Node Name (WWNN) is still presented, and all LUNs are now presented through controller unit A. The multi-pathing software continues providing I/O, and the surviving controller reports all paths as preferred.


Figure 3. An example of controller unit B failure

The ability to identify and alter LUN controller ownership is defined by the ALUA extensions in the SPC-3 standard. The P2000 G3 MSA array supports the implicit ALUA mode, which means that the array itself assigns and can change the managing controller for a LUN; the host cannot explicitly assign LUN ownership to a particular P2000 controller.

VMware vSphere 4.1

vSphere 4.x ALUA compliance

VMware vSphere 4 is also ALUA-compliant; this is one of the major features added to the vSphere 4 storage architecture. ALUA compliance allows the hypervisor to detect that a storage system is ALUA-capable, to utilize ALUA to optimize I/O processing to the controllers, and to detect LUN failover between controllers.

vSphere supports all four ALUA modes:
• Not supported
• Implicit
• Explicit
• Both implicit and explicit

Additionally, vSphere 4.1 supports all ALUA access types:
• Active-optimized – The path to the LUN is through the managing controller.
• Active-non-optimized – The path to the LUN is through the non-managing controller.
• Standby – The path to the LUN is not an active path and must be activated before I/O can be issued.
• Unavailable – The path to the LUN is unavailable through this controller.
• Transitioning – The LUN is transitioning from and to any one of the types defined above.


In VMware vSphere 4.1, round robin load balancing is supported along with the Most Recently Used (MRU) and Fixed I/O path policies. It is worth noting that the round robin and MRU I/O path policies are ALUA-aware, meaning that both will first attempt to schedule I/O requests to a LUN through a path on the managing controller. This enhancement is significant, with real benefits for administrators of a P2000 G3 MSA SAN with vSphere 4.x. We will explore this facet in more detail in the multi-path configuration section.
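As a quick check of which policy is currently in effect, each device's Storage Array Type and Path Selection Policy can be listed from the ESX(i) 4.1 command line (ESX console, Tech Support Mode, or vMA); this is a sketch only:

# List every device with its Storage Array Type (SATP) and Path Selection Policy (PSP)
esxcli nmp device list
# List all paths and the device each path serves
esxcli nmp path list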

HP P2000 G3 MSA configuration

Array hardware configuration and cabling

The P2000 G3 MSA combo controller best practices for configuration and cabling can be found in the "HP StorageWorks MSA2000 Family Best practices" technical white paper, available at: http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA0-8279ENW&cc=us&lc=en, and in the "HP StorageWorks P2000 G3 MSA Systems Installation Instructions" available at: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02523110/c02523110.pdf?jumpid=reg_R1002_USEN. Please refer to these guides for more information, as this topic will not be discussed here in great detail.

The first topic discussed below is the iSCSI configuration, followed by the fibre configuration. The P2000 G3 combo controller can simultaneously support both the iSCSI and fibre protocols in a vSphere 4.1 environment connected to multiple ESX servers (or used for disaster recovery). Servers that share a LUN from the P2000 G3 must use the same storage networking type; for example, ESX servers with a Fibre Channel HBA connection may only share a LUN with other ESX servers with a Fibre Channel HBA connection.

It is important to note that when configuring vSphere 4.1 to access a P2000 G3 over an iSCSI network, it is highly recommended to create redundant network paths by using at least two distinct network switches and dual network interface cards (NICs) in each ESX server. The topology for an iSCSI deployment should look similar to figure 4, which shows a vSphere 4.1 server attached to the P2000 G3 MSA array through redundant network switches.

Best practice when updating firmware

As a best practice, stop all I/O to the P2000 MSA array while performing a firmware update. If the P2000 MSA array has only a single controller, the VMware server(s) will not be able to perform any I/O while the upgrade is in process, since the controller must be shut down to complete the firmware upgrade; in a single-controller environment the servers should be shut down during the upgrade process. If the array has dual controllers, ensure that the VMware servers are connected and able to communicate with both controllers via the multi-pathing software before performing a firmware upgrade. As one controller shuts down to complete the upgrade, the other controller takes over its I/O. If the servers are configured correctly with multi-path software, there is no need to shut down the servers during the firmware upgrade process.
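Before upgrading a dual-controller array, it is worth confirming from each ESX(i) host that paths through both controllers are actually visible. A minimal check from the ESX console, Tech Support Mode, or vMA might look like the following; this is a sketch only, and output formats vary by release:

# Brief listing of every path, grouped by device; each P2000 G3 LUN should show targets on both controllers
esxcfg-mpath -b
# NMP view of the same paths, including the state of each path
esxcli nmp path list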


Figure 4. P2000 G3 combo controller / vSphere 4.x iSCSI topology (iSCSI subnets 192.168.1.x and 192.168.2.x)

Configuring the HP P2000 G3 MSA array for iSCSI deployment

There are several reasons why the topology shown in figure 4 is ideal. The topology provides increased fault tolerance: the environment is protected against NIC, single network switch, and controller failure. As mentioned earlier, it is always better to access a LUN through the optimized controller for I/O. In the event of a NIC failure, failover occurs at the NIC, but access to the LUN remains on the same controller. It is highly recommended that each NIC in the vSphere 4.1 server access one or more controller ports on each controller. A controller failure in this topology triggers only a controller failover, not a NIC failover, which would increase system recovery time. In a fabric SAN environment, the same principles can be achieved with two HBAs and two SAN fabric switches accessing the two controllers on the P2000 G3 MSA array; it is recommended to have two separate SAN fabrics and to use zones for LUN presentation.
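Once the VMkernel ports have been configured (see the iSCSI configuration section later in this paper), basic connectivity from the ESX(i) host to the array's iSCSI ports on both subnets can be verified with vmkping; the addresses below are placeholders chosen to match the subnets shown in figure 4:

# Reach a controller A iSCSI port on the first subnet (example address)
vmkping 192.168.1.20
# Reach a controller B iSCSI port on the second subnet (example address)
vmkping 192.168.2.20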


Figure 5. P2000 G3 combo controller / vSphere 4.x FC topology

Configuring the HP P2000 G3 MSA array for FC deployment

There are several reasons why the topology shown in figure 5 is ideal. The topology provides increased fault tolerance: the environment is protected against HBA, single fabric switch, and controller port or controller failure. As mentioned earlier, it is always better to access a LUN through the optimized controller for I/O. In the event of an HBA failure, failover occurs at the HBA, but access to the LUN remains on the same controller. It is highly recommended that each HBA in the vSphere 4.1 server access one or more controller ports on each controller. A controller failure in this topology triggers only a controller failover, not an HBA failover, which would increase system recovery time.

Virtual disk (vdisk)
A P2000 G3 array vdisk is the largest storage object within the array and is made up of one or more physical disks, having the combined capacity of those disks. The maximum number of drives that can be used in a RAID 1 vdisk is 2; in a RAID 0, 3, 5, 6, or 10 vdisk it is 16; and in a RAID 50 vdisk it is 32. When configuring a vdisk on the P2000 G3 for vSphere 4.1, an administrator must keep in mind two factors:
• The application being virtualized
• The storage optimization objective
All disks in a vdisk must be the same type (SAS or SATA, small or large form factor). Each controller can have a maximum of 16 vdisks.


A vdisk can contain different models of disks, and disks with different capacities. For example, a vdisk can include a 500-GB disk and a 750-GB disk. If you mix disks with different capacities, the smallest disk determines the logical capacity of all other disks in the vdisk, regardless of RAID level. For example, if a RAID-0 vdisk contains one 500-GB disk and four 750-GB disks, the capacity of the vdisk is equivalent to approximately five 500-GB disks. To maximize capacity, use disks of identical size. For greatest reliability, use disks of the same size and rotational speed.

Each disk has metadata that identifies whether the disk is a member of a vdisk, and identifies the other members of that vdisk. This enables disks to be moved to different slots in a system, an entire vdisk to be moved to a different system, and a vdisk to be quarantined if disks are detected missing.

In a single-controller system, all vdisks are owned by that controller. In a dual-controller system, when a vdisk is created the system automatically selects the owner to balance the number of vdisks each controller owns. Typically it does not matter which controller owns a vdisk. In a dual-controller system, when a controller fails, the partner controller assumes temporary ownership of the failed controller's vdisks and resources. If a fault-tolerant cabling configuration is used to connect the controllers to drive enclosures and hosts, LUNs are accessible to both controllers.

When you create a vdisk, you can use the default chunk size or one that better suits your application. The chunk size is the amount of contiguous data that is written to a disk before moving to the next disk, and it cannot be changed after the vdisk is created. For example, if the host is writing data in 16-KB transfers, a 16-KB chunk size is a good choice for random transfers because one host read generates a read of exactly one disk in the volume. This means that if the requests are random-like, they are spread evenly over all of the disks, which is good for performance. If you have 16-KB accesses from the host and a 64-KB chunk size, some of the host's accesses will hit the same disk; each chunk contains four possible 16-KB groups of data that the host might want to read, which is not optimal. Alternatively, if the host accesses were 128 KB, each host read would have to access two disks in the vdisk, so for random patterns twice as many disks are tied up.

When you create a vdisk you can also create volumes within it. A volume is a logical subdivision of a vdisk, and can be mapped to controller host ports for access by hosts. The storage system presents only volumes, not vdisks, to hosts.

A common misconception about server virtualization is that when an application is virtualized, its storage requirement can be reduced or changed; this is not the case. With large storage configurations, it is usually best to create fewer vdisks, each containing many drives, rather than many vdisks containing few drives. For example, one 12-drive RAID 5 vdisk has one parity drive and 11 data drives, compared to four 3-drive RAID 5 vdisks, each with one parity drive and two data drives. Supporting large storage capacities requires advance planning because it means using either large virtual disks with several volumes each, or many virtual disks. This is where performance and capacity come into play in your planning.

To maximize capacity, combine physical disks into a large vdisk, then subdivide that vdisk into several volumes, each with a capacity of less than 2 TB, since VMware VMFS has a 2-TB limit on each extent of a datastore.

Best practice to maximize capacity and disk usage

To maximize capacity and disk usage, you can create vdisks larger than 2 TB, increasing the usable capacity of storage configurations by reducing the total number of parity disks required when using parity-protected RAID levels. This method differs from using a volume larger than 2 TB, which requires specific support from the host operating system, I/O adapter, and application, and does not maximize performance.
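As an illustration only (the drive count and capacities here are assumptions, not the tested configuration): twelve 450-GB drives in a single RAID 5 vdisk yield roughly 11 x 450 GB, or about 4.9 TB, of usable capacity while consuming a single parity drive. Splitting that vdisk into three volumes of roughly 1.6 TB each keeps every volume under the 2-TB VMFS extent limit. Building the same drives as three separate 4-disk RAID 5 vdisks would instead consume three parity drives and deliver only about 3 x 3 x 450 GB, or roughly 4 TB.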


Best practice in selecting disk type

A best practice for disk type is to keep all disks within a vdisk of the same capacity and type – either SAS or SATA, in the same form factor (small or large) and at the same rotational speed.

Best practice when using multiple enclosures

A best practice when using multiple enclosures is to stripe across shelf enclosures to enable data integrity in the event of an enclosure failure. The active-active controller configuration allows maximum use of dual controllers.

Best practice when selecting the controller ownership for the vdisk

As a best practice when creating vdisks, distribute them evenly across the two controllers; since both controllers are active, you will then have at least one vdisk assigned to each controller. If you use the default value of Auto, this is done for you. In addition, each controller should own a similar number of vdisks. When optimizing for performance, the SAN administrator's goal is to drive as much performance out of the array as possible; this has implications for usable storage capacity due to the configuration decisions made.

Best practice for storage configurations with several disks

As a best practice when an array has many disks available, create few vdisks, each containing many disks instead of many vdisks with each containing few disks.

Best practice for using a RAID level

As a best practice, use RAID 5 for a lower storage-capacity cost and adequate redundancy for ESX 4.1 deployments. Refer to the "HP StorageWorks P2000 G3 MSA best practices" technical white paper for more information, available at: http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-2141ENW&cc=us&lc=en

Volumes
A volume is a logical subdivision of a vdisk, and can be mapped to controller host ports for access by ESX hosts. A mapped volume provides the storage for a VMFS file system partition created with operating system or third-party tools. The storage system presents only volumes, not vdisks, to hosts. A vdisk can have a maximum of 128 volumes.

You can create a vdisk that has one volume or multiple volumes. Single-volume vdisks work well in environments that need one large, fault-tolerant storage space for data on one host; an example would be a large database accessed by users on a single host that is used only for that application. Multiple-volume vdisks work well when you have very large disks and you want to make the most efficient use of disk space for fault tolerance (parity and spares). For example, you could create one 10-TB RAID-5 vdisk and dedicate one spare to the vdisk; this minimizes the amount of disk space allocated to parity and spares compared to the space required if you created five 2-TB RAID-5 vdisks. Note, however, that I/O to multiple volumes in the same vdisk can slow system performance.


When you create a volume you can specify its size. If the total size of a vdisk's volumes equals the size of the vdisk, there is no free space. Without free space, you cannot add new volumes or expand existing ones. If you need to add or expand a volume in a vdisk that has no free space, you can delete a volume to create free space, or you can add more disks to expand the vdisk and then add or expand a volume to use the new free space. In some cases your ESX host will share storage with other ESX hosts for VMware VMotion, VMware FT or other vSphere features. In addition, non-ESX hosts can also use the P2000 G3 for storage; for example, a volume can be dedicated to storing payroll information.

Best practice to change the default name of the volumes

As a best practice, change the default name of the volume to identify its purpose. For example, a volume used for storing datastores belonging to ESX cluster1 can be named esx_cluster1_datastores.

Each volume has default host-access settings that are set when the volume is created; these settings are called the default mapping. The default mapping applies to any host that has not been explicitly mapped with different settings, and explicit mappings for a volume override its default mapping. Default mapping enables all attached hosts to see a volume using a specified LUN and the access permissions set by the administrator, meaning that when the volume is first created, all connected hosts can immediately access the volume using the advertised default mapping settings. This behavior is expected by some operating systems, such as Microsoft® Windows®, which can immediately discover the volume. The advantage of a default mapping is that all connected hosts can discover the volume with no additional work by the administrator; the disadvantage is that all connected hosts can discover the volume with no restrictions. Therefore, default mapping is not recommended for specialized volumes that require restricted access.

You can change a volume's default mapping, and create, modify, or delete explicit mappings. A mapping can specify read-write, read-only, or no access through one or more controller host ports to a volume. When a mapping specifies no access, the volume is masked. You can apply access privileges to one or more of the host ports on either controller. To maximize performance, map a volume to at least one host port on the controller that owns it. To sustain I/O in the event of a controller failure, map to at least one host port on each controller.

When mapping a volume from a vdisk, it is best not to use the default mapping. The default mapping presents the volume to all servers on the fibre or network switch with read-write access. After the volumes are created, use fibre zones, explicit mapping, or a combination of the two to prevent a rogue server from accessing the LUN and formatting it with a different file system. Explicit mapping is essentially mapping the WWN ports of the HBA to the ports on one or both storage controllers in the array. When a controller fails, the surviving controller reports for all vdisks that it is now the owning controller; this information is stored in the disk metadata.

Best practice for volume mapping

As a best practice for performance, follow these VMware-specific practices for volume mapping (a quick verification from the ESX side is shown after this list):
• Use explicit mapping.
• Ensure that a shared LUN is mapped with the same LUN number to all ESX servers sharing the LUN.
• Ensure the LUN is mapped through the same controller ports for all mapped server WWNs, so that each server has the same number of paths to the LUN.
• Map the volumes to the ports on the controller that owns the vdisk; mapping to the non-preferred path may result in performance degradation.
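From each ESX(i) host, the runtime path names make it easy to confirm that a shared volume is presented with the same LUN number and the same number of paths everywhere; this is a sketch only:

# Brief path listing; runtime names have the form vmhbaX:C0:TY:LZ, where the trailing LZ is the LUN number
esxcfg-mpath -b
# NMP device listing, which also shows the working paths per device
esxcli nmp device list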


For more information, please refer to requirements found in the VMware fibre channel SAN configuration guide at: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_san_cfg.pdf and the VMware iSCSI SAN configuration guide at: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf.

Best practice for fault tolerant configurations

As a best practice for fault-tolerant configurations, map the volumes to all available ports on the controllers for iSCSI configurations. For FC configurations you may map all of the controller FC ports to each HBA, or map one port from each controller to each HBA. For example, controller A has fibre ports A1 and A2 and controller B has fibre ports B1 and B2; map HBA1 to controller A port A1 and controller B port B2, and map HBA2 to controller A port A2 and controller B port B1. For more information on best practices for the P2000 G3 MSA array, please refer to the "HP StorageWorks P2000 G3 MSA best practices" technical white paper, at: http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-2141ENW&cc=us&lc=en

HP P2000 G3 MSA systems are managed through a web-based graphical user interface (GUI) called the Storage Management Utility (SMU). Using the SMU, administrators can create vdisks and volumes, provision volumes to hosts, replicate and delete volumes, delete vdisks, and monitor the health of system components. The SMU gives storage administrators a simple way to handle day-to-day storage administration tasks. In addition, there is also a command line utility which not only does everything the GUI does, but also offers performance monitoring counters (see Appendix B).

ESX(i) 4.1 iSCSI configuration

One of the first ESX(i) 4.1 configuration tasks to undertake for proper connectivity and operation with the P2000 G3 iSCSI ports is correctly setting the multi-path configuration. Additionally, advanced tuning parameters on the NICs, the switches, and the ESX advanced settings can help achieve increased performance.

ESX vmnic settings

To correctly set iSCSI multi-pathing with the combo controller, you will replicate an iSCSI controller-only configuration using the vSphere 4 client GUI and the VMware CLI. In the example shown in figure 4, we need two network paths through two different network switches, preferably on different subnets, connecting to the four iSCSI ports on the rear of the P2000 G3 MSA array. There are two iSCSI ports on combo controller A (A3, A4) and two iSCSI ports on combo controller B (B3, B4). For optimal performance and failover, the iSCSI traffic should be kept separate from Service Console, virtual machine public network, VMware VMotion and VMware FT network traffic.

Configuring the ESX servers is not covered in detail here, as it is covered in the VMware iSCSI SAN configuration guide and is also documented on several VMware blogs and other storage-related websites. Note that you must verify that the iSCSI software adapter is enabled. When configuring your ESX host, place two VMkernel ports on the same virtual switch. Figure 6 shows an example with two VMkernel ports on separate subnets. The VMkernel ports use two physical adapters called vmnic2 and vmnic3 which, in our example, are Intel® network cards.


Figure 6. VMkernel iSCSI ports connected to two vmnics

To verify and configure NIC teaming:

1. Select Properties.
2. Highlight one of the VMkernel ports (in our example, we selected VMk-iSCSI1).
3. Select the Edit button.
4. Select the NIC Teaming tab.
5. Select one of the vmnics available and place it under Active Adapters by using the Move Up or Move Down button.
6. Select the other available adapter and move it down to Unused Adapters.
7. Select OK to complete the first adapter.
8. Return to the properties and follow the same process for the second adapter.

Now you have assigned a different physical network adapter as the active adapter for each VMkernel port on the virtual switch and ensured that the multiple VMkernel ports use different network adapters for their I/O. Each VMkernel port needs to use a single physical network adapter, without any standby adapters.
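The resulting layout can be double-checked from the command line; this is a sketch only, and the vSwitch, port group, VMkernel and vmnic names will differ in other environments:

# Show each vSwitch with its port groups and uplinks (vmnic2 and vmnic3 in this example)
esxcfg-vswitch -l
# List the VMkernel interfaces (for example vmk1 and vmk2) with their port groups and IP addresses
esxcfg-vmknic -l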


Figure 7. Network adapter failover order for Multi-path I/O (MPIO)

Once both adapters have been configured, the next step is to bond the two adapters with the vmhba. The process uses the command line interface, and since we are using ESXi, we can use either vSphere Management Assistant (vMA) or VMware vSphere CLI. The example in this white paper uses the CLI, as described in section "Process for vmhba bond."

vNetwork distributed switch

If you are using a vNetwork Distributed Switch (vDS) with multiple dvUplinks (Distributed Virtual Uplinks) for port binding, create a separate dvPort group per physical NIC. Set the team policy so each dvPort group has only one active dvUplink.


Process for vmhba bond

You will need to have the vMA appliance installed, or install the vSphere CLI utility from the vSphere client. For more information, please refer to VMware's documentation, as this white paper does not cover how to install VMware vMA or the vSphere CLI utility. In the testing for this white paper we enabled software iSCSI, so no references are made here to hardware iSCSI.

The steps required to bind the two VMkernel ports to the software iSCSI adapter are shown below. The vmhba# and vmk# must match the correct numbers for the ESX(i) server and virtual switch being configured. Select Start > All Programs and open the VMware vSphere CLI command prompt, then run esxcli. You can use a script to provide parameter values for server, user and password; for those who have not used the vSphere CLI before, the entire command is shown.

Type: esxcli --server (name or IP address) --username (login name) --password (password for the user) swiscsi nic add -n vmk1 -d vmhba33
Type: esxcli --server (name or IP address) --username (login name) --password (password for the user) swiscsi nic add -n vmk2 -d vmhba33
Type: esxcli --server (name or IP address) --username (login name) --password (password for the user) swiscsi nic list -d vmhba33. This lists the bond you just created.

Figure 8 shows an example of the steps used to create the bond in our testing. Yours should be somewhat similar.
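A minimal consolidated sketch of the same binding, run from vMA or a Linux vSphere CLI host, is shown below; the host name, credentials, VMkernel port names and vmhba number are placeholders and must be replaced with the values for your environment:

# Bind both VMkernel ports to the software iSCSI adapter, then list the resulting bond
for VMK in vmk1 vmk2
do
    esxcli --server esxi01.example.com --username root --password 'secret' \
        swiscsi nic add -n $VMK -d vmhba33
done
esxcli --server esxi01.example.com --username root --password 'secret' swiscsi nic list -d vmhba33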


Figure 8. Command prompt window showing the steps to create the software iSCSI bond

Once the steps are complete, return to the vSphere client and perform a rescan. At this point, if a LUN has been presented through the SMU, the vSphere client should see the LUNs available, as shown in figure 9. If you have not yet deployed or configured the P2000 G3 array, the next section walks through that process.


Figure 9. Datastore devices view showing a single provisioned LUN to software iSCSI

Configuring the P2000 G3 and provisioning a LUN to ESXi hosts

The steps below for configuring the P2000 G3 MSA assume the array is already set up with the appropriate management ports. Please refer to the "HP StorageWorks P2000 G3 MSA Systems installation instructions" available at: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02523110/c02523110.pdf?jumpid=reg_R1002_USEN.

1. Log in to the P2000 G3 Storage Management Utility through the array management port: open your web browser and enter the address of the management port.
2. Use the Provisioning Wizard to assist with the creation of vdisks and volumes and with mapping volumes to hosts.
3. Name the vdisk, choose the RAID level, and leave the Assign to: field as Auto. The Auto default is the recommended setting; it balances vdisk ownership between the controllers on the array. The association is static and is re-evaluated only when the setting is changed from manual back to Auto. The only time you would change this setting is when you have a good understanding of the workloads across the vdisks and want to influence quality of service.
4. Select your disks. It is best to have more than one disk enclosure; consider selecting disks from different enclosures to avoid a single-enclosure failure.
5. Define your volumes. When mapping the volumes, make sure you select all the available iSCSI ports when presenting to the hosts to provide the redundancy needed in case a controller fails. Figure 10 shows A3, A4, B3 and B4 selected.
6. Confirm your settings.
7. Return to your vSphere client, select your host, and select the Configuration tab.
8. Select the iSCSI Software Adapter and perform a Rescan All. Figure 11 shows a Rescan All result. If software iSCSI is enabled you will see the newly provisioned LUN; if iSCSI is not enabled, please follow the steps in the "iSCSI SAN configuration guide" at: www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf
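The rescan can also be triggered from the ESX(i) command line, Tech Support Mode, or vMA if preferred; a sketch only, with the adapter name as a placeholder:

# Rescan the software iSCSI adapter for newly presented LUNs
esxcfg-rescan vmhba33
# List the SCSI devices the host now sees; the new P2000 G3 LUN appears as an naa.600c0ff... device
esxcfg-scsidevs -c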


Figure 10. iSCSI ports A3, A4, B3 and B4 selected

Figure 11. Rescan All - showing the new LUNs available for use as datastores or RDMs

The ESXi hosts are now configured with either hardware iSCSI or software iSCSI enabled, and the vmhba ports are bound to two separate vmnic ports on each ESXi host. The P2000 G3 has also been configured and has provisioned the LUNs to the ESXi hosts for use as either a datastore or a raw device mapping (RDM). To complete the configuration, the next section focuses on setting up the VMware native MPIO driver without the need for additional third-party plug-ins.


VMware multi-pathing with iSCSI

Native iSCSI multi-pathing in vSphere 4.1 improves bandwidth utilization by spreading I/O across multiple network ports. Configuring iSCSI multi-pathing requires two network ports, which is why we showed how to bind the VMkernel ports to the software iSCSI adapter earlier in this white paper. To achieve load balancing across the two paths, datastores should be configured with a path selection policy of round robin. The default vSphere multi-path setting when ESXi is installed is Most Recently Used (MRU); because MRU is the default policy for ALUA-capable arrays and the P2000 G3 supports ALUA, MRU is selected by default. After installing ESXi, enabling the software iSCSI adapter, and configuring the P2000 G3 with a provisioned LUN, we see the default multi-path setting as displayed in figure 12.

Figure 12. VMware default path selection MRU for P2000 G3 MSA array

This path selection is an acceptable configuration, but if you are looking for an optimized configuration it is best to consider round robin. Note that this path selection is an HP recommendation and not mandatory. Again, to achieve load balancing across the two paths, the datastores need to be configured with a path selection policy of round robin. This configuration can be done manually for each datastore in the vSphere client, or the ESX host can be configured to automatically select round robin for all datastores. To configure all new datastores to automatically use round robin, configure the ESX hosts to use it as the default path selection policy by executing the following from the command line:

esxcli corestorage claiming unclaim --type location
esxcli nmp satp setdefaultpsp -s VMW_SATP_AA -P VMW_PSP_RR
esxcli corestorage claimrule load
esxcli corestorage claimrule run

The ESX hosts will still show the MRU setting until a reboot is performed. Once the host comes back up after the reboot, it should display the round robin setting under Manage Paths.

Figure 13. The new round robin path selection shown after a reboot

The round robin path selection has additional parameters you can change from the ESX command line, such as the number of IOPS per path. The default of 1,000 IOPS has proven to be efficient and will do well in most vSphere environments; essentially, the first 1,000 I/Os go down the first path and the next 1,000 I/Os go down the next path. From the vMA command line, you can set the round robin parameter as follows:

# esxcli nmp roundrobin setconfig -d LUNID -I 1000 -t iops

The LUNID is the vSphere 4.1 volume identifier, for example naa.600c0ff000db103b17a3af4c01000000.
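To confirm that the setting took effect on a given LUN, the current round robin configuration can be read back; a sketch only, using the same placeholder device identifier:

# Show the round robin settings (including the IOPS limit) for one device
esxcli nmp roundrobin getconfig -d naa.600c0ff000db103b17a3af4c01000000
# The device summary also reports the active PSP for every LUN
esxcli nmp device list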

ESX(i) 4.1 FC configuration

Configuring the P2000 G3

Configuring the ESX or ESXi servers is not covered in detail here, as it is covered in the "VMware Fibre Channel SAN configuration guide" and is also documented on several VMware blogs and other storage-related websites. When configuring your ESX host, you should place the two fibre ports on separate fabrics. When scanned by the ESX hosts, the HP P2000 G3 array will always show LUN 0 on the controllers; therefore, when planning your LUNs on the P2000 G3, remember that LUN 0 is already used by the array for its storage controllers. If no fibre HBA drivers are installed on the host, make sure you install the HBA drivers following the manufacturer's directions for the HBA model installed in your system. In our example, we used HBAs from Emulex and Brocade to test our proof-of-concept. Appendix A has an example of how to install the Brocade 8Gb fibre HBA drivers after ESXi 4.1 has been installed. As ESXi 4.1 hosts are deployed or installed with the correct fibre HBA drivers, the storage adapters and their corresponding WWNs are displayed. In our example, we used a Brocade 425/825 8Gb FC HBA (2-port HBA), which is the HP 82B PCIe 8Gb FC dual port host bus adapter, as shown in figure 14.


Figure 14. vSphere host with Brocade 425/825 HBAs and their WWN

At this point, if the ESX host does not see the LUN, check the zone configuration, fabric switches, and fibre cables for any damage or non-functional components. With the correct drivers installed, you are now at the point where you can provision LUNs to the vSphere hosts in your cluster or datacenter.

Provisioning LUNs to the ESXi hosts

Start by logging into the SMU application to provision vdisks and volumes for the ESX hosts to use as shared storage. The process is the same as for the iSCSI configuration, except that you are now presenting the LUNs to the FC ports of the combo controller. The process is straightforward but does require some planning beforehand, for example determining a naming convention for your hosts in the Storage Management Utility. The SMU does not give you the ability to create a folder and place all of the WWNs in one location; it displays the WWNs under the hosts, as shown in figure 15. Rename the hosts to something more meaningful for your environment.

Best practice for naming hosts

To ensure a better administration of your SAN environment, always rename the hosts to something meaningful. Datacenter administrators may have some sort of naming convention practice in use within their datacenter to keep track of servers and their components.


Figure 15. Host section highlighted with WWN from the SMU

Renaming the hosts
The process to rename a host in the SMU entails several tasks:
1. Locate the WWN from the vSphere client, as shown in figure 16.
2. Log in to the SMU and locate the WWN which corresponds to the ESXi host's fibre HBA.
3. Rename the host to something understandable and manageable to the SAN administrator.

In many cases, since the P2000 G3 is targeted at small businesses, you may be the SAN, vSphere, and network administrator, so it is particularly important to keep track of which hosts will be accessing which LUNs. This task is separate from working with zones at the fibre switch level. In our example, we included the last four digits of the WWN followed by a port designation. Since we are using a dual-port Brocade 8Gb FC HBA in an HP ProLiant DL380 G6 server, we named the host DL380G6-d2ae-1A for the first HBA port and DL380G6-d2af-1B for the second. This is just a recommendation, as most datacenters usually have their own naming schemes; the main point is to change the host name to something more manageable than several WWN port numbers. The profile to use is Standard, not HP-UX or OpenVMS. See figure 17.
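If the vSphere client is not at hand, the HBA WWNs can also be read from the ESX(i) command line or vMA; a sketch only, and the exact output format varies by driver:

# List the storage adapters; for FC HBAs the adapter UID contains the WWNN and WWPN
esxcfg-scsidevs -a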


Figure 16. Locating the WWN in the vSphere client

Figure 17. SMU host renamed

Mapping volumes to the ESX or ESXi host

Now that the hosts have been properly named and the vdisk has been provisioned, either manually or with the Provisioning Wizard, the next steps are to map the volumes to the fibre ports and set explicit mappings for the ESX hosts. In many cases, you do not want to give read-write access to all hosts on the same fabric when you provision a LUN. In VMware environments, several ESX(i) hosts need access to the same LUN to use features such as VMware VMotion and a few other VMware features.

1. Select the vdisk by highlighting it in the configuration view.
2. Select Provision from the drop-down menus and select Map volumes.
3. Place a check mark next to the volume name, then select the mapping properties and select the host ID as shown in figure 18.
4. Select the Map check box.
5. Enter a LUN number that all ESX hosts will use.
6. Change the access to read-write.
7. Select controller A port A1 and controller B port B2, or for failover select all FC ports.
8. Select Apply, then select OK when the host mapping has been modified.
9. Select the host from the configuration view, as shown in figure 19.
10. Select the Maps radio button in the component view to make sure LUN 5 was presented to the host with read-write access.

Figure 18. SMU map volumes


Figure 19. SMU host overview

The LUN mapping is now complete and the LUN has been explicitly mapped to the ESX hosts. The figures above show LUN 5 being presented to the host. At this point, open the vSphere client and rescan for the new LUN that was just presented to the ESX host.

ESX 4.1 configuration

ESX multi-path considerations

Because ESX 4.x is now ALUA-compliant, it does not require the same intricate configuration process as the older ESX 3.x generations. The actions the administrator needs to perform are:
• Configure the P2000 G3 volume and select the controller access policy
• Power on or reboot the ESX 4.x server, or perform a rescan
The process is simplified with the P2000 G3 MSA system. At boot or after a rescan, vSphere 4.x detects the optimal access paths, and as long as MRU and/or round robin is the I/O path policy in use, vSphere 4.x gives the optimal paths to the volume or LUN the higher preference for I/O queuing.


Starting with ESX 4.0, the round robin load balancing policy is supported; both the MRU and round robin path policies are ALUA-aware and have the following characteristics:

MRU
• Gives preference to an optimal path to the LUN
• Uses a non-optimized path if all optimal paths are unavailable
• Fails back to an optimal path as soon as one becomes available
• Uses only a single controller port for LUN access per ESX host

Round robin
• Queues I/O to the LUN on all ports of the owning controller in a round robin fashion, providing an instant bandwidth improvement
• Continues queuing I/O in a round robin fashion to the optimal controller ports until none are available, and then fails over to the non-optimized path
• Fails back to an optimal path once one returns

ALUA compliance in ESX 4.x and the support for round robin load balancing were a giant leap forward for ESX 4.x multi-pathing. These two features have eliminated the intricate configuration steps administrators carried out with ESX 3.x and older versions, and also help to guarantee a much more balanced system configuration than administrators could achieve through manual preferred-path configuration. Additionally, with the round robin I/O path policy, I/O can be queued to multiple controller ports on the P2000 G3, providing an instant performance boost.
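For an individual LUN, the policy can be switched to round robin either from the Manage Paths dialog in the vSphere client or from the command line; a sketch only, with a placeholder device identifier:

# Set the path selection policy for a single device to round robin
esxcli nmp device setpolicy -d naa.600c0ff000db103b17a3af4c01000000 --psp VMW_PSP_RR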

Best practice for setting P2000 G3 MSA active-active arrays

As a best practice, round robin I/O path policy is the recommended setting for the P2000 G3 MSA array. MRU is the default setting and also suitable if round robin is not the desired setting in the specific environment.

VMware vSphere 4 multi-pathing framework

VMware introduced a new multi-pathing framework with ESX 4.0. The components that comprise the framework are:
• Native Multi-Pathing (NMP)
• Storage Array Type Plugin (SATP)
• Path Selection Plugin (PSP)
See figure 20 for more detail on these components.


Figure 20. vSphere 4 multi-pathing stack

NMP is the glue that binds the SATP and PSP operations together. NMP handles many non-array-specific operations, such as periodic path probing and monitoring, and also builds the multi-pathing configuration. By communicating with the SATP and PSP, the NMP is able to take appropriate action when a failure occurs on a path. If, for example, a path failure has occurred, the NMP updates its list of available paths and communicates with the PSP to decide which paths I/O should be re-routed to, based on the path selection policy in use. Although there is no way to display this list through VMware vCenter, on an ESX server we can display the list of all available SATPs and their respective default PSPs using the following CLI command:

esxcli nmp satp list


Figure 21. VMware ESX 4.1 SATP table

As I/O is queued to the storage system, the Path Selection Plug-in (PSP) handles the selection of the best available path to use for queuing I/O requests. ESX 4.0 provides three path selection policies:
• Fixed
• MRU
• Round robin (load balancing)

PSP settings are applied on a per-LUN basis, meaning that it is possible to have some LUNs from the same storage system using the MRU policy while others use the round robin policy.

The SATP is an array-specific plug-in which handles operations such as device discovery, handling of array-specific error codes, and failover. For example, storage arrays use a set of standard SCSI return codes to warn device drivers of various failure modes. In addition to these standard return codes, arrays also make use of vendor-specific codes to handle proprietary functions and/or behaviors. The storage array type plug-in for a specific storage system takes the appropriate actions when these return codes are received.

An important thing to understand is that SATPs are global to a system, but a PSP can be either global or set on a per-LUN basis. A specific array can use only one specific SATP; however, LUNs on an array can use multiple PSPs. As an example, one LUN can be set to the round robin I/O path policy while another LUN on the same storage array is set to MRU.

The three entries below, taken from figure 21, are the P2000 G3 array-relevant SATP entries:
VMW_SATP_MSA
VMW_SATP_ALUA
VMW_SATP_DEFAULT_AA


The VMW_SATP_ALUA is the SATP for any array that is compliant with the SCSI ALUA specification. All active-active P2000 G3 arrays employ this SATP.

There are two key configuration steps when connecting an ESX 4.1 server to a P2000 G3 active-active array:
• Change the default PSP for the VMW_SATP_ALUA from VMW_PSP_MRU to VMW_PSP_RR
• Update the advanced configuration parameter for the VMW_PSP_RR

The first configuration change is to change the VMW_SATP_ALUA default PSP from MRU to round robin. The reason for this change is that there is currently no method in ESX 4.x to configure the path selection plug-in (PSP or PSM) globally based on an array model; PSPs in ESX 4.1 are set at the LUN level and are based on an SATP. Since all active-active P2000 G3 arrays use the VMW_SATP_ALUA plug-in, configuring the VMW_SATP_ALUA default PSP to VMW_PSP_RR means the system will automatically configure every new LUN from an ALUA-capable array to use the round robin path policy. Again, this change must be made through the command line using one of the many command line options (ESX console, vMA, esxcli tool kit) by executing the following command:

esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR
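After this change (and a reboot, so that already-claimed LUNs are re-claimed), the new default can be confirmed from the same command line; a sketch only:

# Confirm that VMW_SATP_ALUA now reports VMW_PSP_RR as its default PSP
esxcli nmp satp list
# Spot-check that the P2000 G3 LUNs picked up the round robin policy
esxcli nmp device list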

Best practice for changing the default PSP option

As a best practice, change the default PSP option for VMW_SATP_ALUA to VMW_PSP_RR for P2000 G3 SAN environments.

Secondly, for optimal default system performance with the P2000 G3, configure the round robin load balancing selection to IOPS with a value of 1. This has to be done for every LUN, using the command:

esxcli nmp roundrobin setconfig --type "iops" --iops 1 --device naa.xxxxxxxxx

For environments with only P2000 G3 LUNs connected to the ESX 4.x servers, the following simple script can be used to iterate through all LUNs and automatically apply the setting:

for i in `ls /vmfs/devices/disks/ | grep naa.600` ; do esxcli nmp roundrobin setconfig --type "iops" --iops 1 --device $i; done

For environments that have other array models in addition to P2000 G3 arrays attached to ESX 4.x servers using the VMW_PSP_RR, simply change "grep naa.600" so that it matches the pattern of devices on the desired arrays only.

Multi-vendor storage SAN environments
In environments where ALUA-capable arrays from multiple vendors are connected to the same ESX servers, proceed with caution when setting the default PSP for the VMW_SATP_ALUA, especially if the various vendors have different recommendations for the default PSP option. If there is a single P2000 G3 MSA system and the rest of the storage arrays are third-party ALUA-capable storage systems, only set the default PSP option to VMW_PSP_RR for the VMW_SATP_ALUA if the third-party storage array vendor also recommends it. Otherwise, use the recommended default for the third-party storage array product and manually configure the P2000 G3 MSA LUNs. The bulk of the configuration will then be set automatically by the ESX host by default, and the administrator can run a simple script against the P2000 G3 LUNs only to set them to the desired path selection plug-in of VMW_PSP_RR.


Best practice for configuring the default in a multi-vendor SAN configuration

In a multi-vendor SAN in which there are multiple ALUA-compliant arrays, configure the default path selection plug-in for the VMW_SATP_ALUA SATP to the setting recommended for the array type you have the most of in the SAN environment, or for the array type(s) that have the majority of the LUNs provisioned to ESX hosts.

Additional VMware considerations

VMware Storage I/O Control
With vSphere 4.1, VMware introduced a new feature called Storage I/O Control (SIOC). The feature enables you to throttle I/O on a per-virtual-disk basis and is disabled by default.

Storage I/O Control provides I/O prioritization of virtual machines running on a cluster of ESX servers that access a shared storage pool. It extends the familiar constructs of shares and limits, which have existed for CPU and memory, to storage utilization through a dynamic allocation of I/O queue slots across a cluster of ESX servers. When a certain latency threshold is exceeded for a given block-based storage device, SIOC balances the available queue slots across the ESX servers to align the importance of certain workloads with the distribution of available throughput. It can reduce the I/O queue slots given to virtual machines that have a low number of shares in order to provide more I/O queue slots to virtual machines with a higher number of shares. SIOC provides a means of throttling back I/O activity for certain virtual machines so that other virtual machines get a fairer distribution of I/O throughput and an improved service level. For more information, please refer to the technical white paper available at: http://www.vmware.com/files/pdf/techpaper/VMW-Whats-New-vSphere41-Storage.pdf

VMware Storage I/O Control and the HP P2000 G3 MSA system combine to provide a more optimized solution. Enabling Storage I/O Control is a simple process; the more important point is to understand the virtual machine environment with regard to the I/O demand being placed on the array. Storage I/O Control does not depend on the array; it is a VMware vSphere infrastructure feature.

Third-party multi-path plugins

ESX 4.1 allows third-party storage vendors to develop proprietary PSPs, SATPs, or multipathing plug-ins (MPPs) in the form of plug-ins also known as management extension modules (MEMs). These third-party MEMs are offered to customers at an incremental license cost and also require Enterprise-level VMware licenses. With the P2000 G3 MSA system there is no need for such a plug-in with ESX or ESXi 4.1. The built-in multi-pathing plug-in is completely adequate and functional. When configured and tuned appropriately, it can significantly reduce configuration time and provide enhanced performance in most environments at no cost with all versions of ESX or ESXi 4.x, while maintaining a simplified solution.


Summary

The best practices highlighted in this document will provide improved performance and reduced configuration time in most environments. However, as with all best practices, administrators must carefully evaluate the pros and cons of the recommendations presented and assess the value to their respective environment. This document also provides valuable knowledge of VMware's latest technologies such as the multi-pathing storage stack. This document is a reference guide for anyone configuring a VMware vSphere 4 SAN with HP P2000 G3 MSA Arrays.


Appendix A

Installing Brocade 8Gb fibre channel drivers

In the testing for this white paper, we used an HP ProLiant DL380 G6 server with a Brocade 425/825 8Gb FC HBA (HP 82B PCI-e 8Gb FC dual port host-based controller). When you install vSphere 4.1 ESXi, most drivers (including Emulex and QLogic) are also installed. However, in certain instances during an ESXi install or a deployment from an HP Insight Control deployment server1, you may not have the scripts needed to inject the latest drivers. Taking the following steps allowed us to successfully install the Brocade fibre HBA drivers for ESXi 4.1:

1. Download the latest fibre HBA drivers from the HP or VMware website.
2. Locate the ISO image and place it on a network drive or CD, then use a utility such as Daemon Tools to mount the ISO image from your client.
3. In the vSphere client, select the datastore of the ESXi host on which you would like to update the drivers.
4. Click the upload icon (the volume with a green arrow pointing upward) to upload a file, as shown in figure A-1.

Figure A-1. Uploading the file
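As an alternative to uploading through the datastore browser, the offline bundle can be copied to the datastore over the network. A minimal sketch, assuming remote Tech Support Mode (SSH) is enabled on the host and that the host name, datastore name, and bundle file name below are adjusted to your environment:

# Sketch only - esxi01 and datastore1 are placeholder names.
scp BRCD-bfa-2.1.1.1-00000-offline_bundle-285864.zip root@esxi01:/vmfs/volumes/datastore1/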

5. Select the zip file and copy it to the datastore1 directory.
6. Once the zip file is copied, return to the vSphere client and place your ESXi host in maintenance mode.
7. Use your vMA client or SSH to log in to the host, and change directories to datastore1.
8. Run esxupdate --bundle BRCD-bfa-2.1.1.1-00000-offline_bundle-285864.zip update. You will see something similar to what is shown in figure A-2.

1. HP Insight Control server deployment was previously referred to as HP Rapid Deployment Pack (RDP).


Figure A-2. Running esxupdate

9. Reboot the host.
10. To verify that the module installed correctly, run the following commands (as shown in figures A-3 and A-4):

vmkload_mod -list
vmkload_mod -s bfa | more


Figure A-3. Running vmkload_mod -list

Figure A-4. Running vmkload_mod -s bfa | more
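For reference, the command-line portion of the procedure above can be consolidated into the following sketch. It assumes the bundle was copied to datastore1, and command names such as vim-cmd hostsvc/maintenance_mode_enter should be verified against your ESXi 4.1 build:

# Sketch only - run from the ESXi Tech Support Mode shell (or adapt for the vMA).
vim-cmd hostsvc/maintenance_mode_enter    # place the host in maintenance mode
cd /vmfs/volumes/datastore1
esxupdate --bundle BRCD-bfa-2.1.1.1-00000-offline_bundle-285864.zip update
reboot                                    # reboot so the new driver is loaded
# After the reboot, verify that the bfa module is loaded:
vmkload_mod -list
vmkload_mod -s bfa | more
# When verification is complete, exit maintenance mode:
vim-cmd hostsvc/maintenance_mode_exit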

11. Return to the vSphere client and take the host out of maintenance mode.
12. Select the ESXi host, then select Storage Adapters from the Configuration tab. You should now be able to see the 8Gb fibre HBA, as shown in figure A-5.


Figure A-5. Displaying the 8 Gb fibre HBA

HP Brocade 8Gb HBA drivers for VMware ESX 4.1 are now installed and ready to be used. For additional information, please refer to the VMware document, "Installing the VMware ESX/ESXi 4.x driver CD on an ESX 4.x host" at: http://kb.vmware.com/kb/1019101.


Appendix B – P2000 G3 performance monitoring

HP P2000 G3 MSA systems with firmware T201R014 and later provide a command-line mode that allows you to display performance counters, helping optimization and troubleshooting efforts. Table B-1 shows the commands available for use:

Table B-1. Available commands

Command                      Description
show host-port-statistics    Shows performance statistics for each controller host port.
show controller-statistics   Shows performance statistics for controller A, B, or both.
show vdisk-statistics        Shows performance statistics for all or specified vdisks.
show volume-statistics       Shows performance statistics for all or specified volumes.
show disk-statistics         Shows performance statistics for all or specified disks.

To display performance counters, use a terminal program such as PuTTY from your client or desktop to log in to one of the management controllers of the HP P2000 G3 MSA system. Log in with the manage account (or another account with the appropriate access level) to access the command-line interface. Figure B-1 shows sample output – in this case, the output of a show volume-statistics command.


Figure B-1. Example of "show volume-statistics" output
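A minimal sketch of the workflow from a Linux or vMA client, assuming SSH access to a management controller at the placeholder address 10.0.0.50 and the default manage account:

# Sketch only - the IP address is a placeholder; substitute your own controller address and account.
ssh manage@10.0.0.50
# At the P2000 G3 CLI prompt, run any of the statistics commands from Table B-1, for example:
#   show controller-statistics
#   show vdisk-statistics
#   show volume-statistics
#   show host-port-statistics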


For more information

HP StorageWorks P2000 G3 FC MSA Best Practices technical white paper:
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-2141ENW&cc=us&lc=en

HP P2000 G3 MSA system FC/iSCSI User Guide:
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&contentType=SupportManual&prodTypeId=12169&prodSeriesId=4118559&docIndexId=64179#2

HP P2000 G3 FC and iSCSI MSA System controller Firmware release notes:
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&contentType=SupportManual&prodTypeId=12169&prodSeriesId=4118559&docIndexId=64179#2

HP MSA2000 – Technical Cook Book:
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA2-5505ENW&cc=us&lc=en

VMware HCL:
www.vmware.com/go/hcl

VMware Storage Solutions from HP:
http://h71028.www7.hp.com/enterprise/us/en/solutions/storage-vmware.html

HP Single Point of Connectivity Knowledge (SPOCK):
http://h20272.www2.hp.com/

VMware vSphere 4.1 iSCSI SAN Configuration Guide:
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf

VMware vSphere 4.1 Fibre Channel SAN Configuration Guide:
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_san_cfg.pdf

To help us improve our documents, please provide feedback at http://h20219.www2.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html.

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Intel is a trademark of Intel Corporation in the U.S. and other countries. 4AA3-3801ENW, Created March 2011
