

Deploying Microsoft Hyper-V Cloud Fast Track on the Hitachi Adaptable Modular Storage 2500

Reference Architecture

By Rick Andersen April 2011



Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to [email protected]. Be sure to include the title of this white paper in your email message.


Table of Contents

Solution Overview
Key Solution Components
    Hardware Components
    Software Components
Solution Design
    High-level Architecture
    Hitachi Compute Blade 2000 Chassis Configuration
    Hitachi Compute Blade 2000 Server Architecture
    Storage Architecture
    SAN Architecture
    Path Configuration
    Network Architecture
    Management Architecture
Engineering Validation
Conclusion



The Hitachi and Microsoft Hyper-V Cloud Fast Track solution provides a reference architecture for building private clouds according to an organization's unique requirements. This fast-track solution helps organizations implement private clouds with ease and confidence. The benefits of this solution are faster deployment, reduced risk, predictability and a lower cost of ownership as outlined in the following list:

Faster Deployment

- Speed deployment of an initial cloud by installing a pre-validated reference architecture.
- Rapidly grow the infrastructure to adapt to market pressures by leveraging the scalable design of the architecture.
- Simplify infrastructure and virtual machine deployment with integrated management.
- Rapidly deploy and provision resources using a self-service portal.

Reduced Risk

- Deploy a solution with tested, end-to-end interoperability of compute, storage and network.
- Deploy pre-defined, out-of-box solutions based on a common cloud architecture that has already been tested and validated.
- Quickly provision virtual machines with the assurance that the underlying infrastructure resources are in place.
- Adapt to failures by utilizing automation that detects and reacts to events.
- Increase virtual machine density while maintaining performance through monitoring of systems to ensure bottlenecks are detected and corrected.

Predictability

- HDS delivers an underlying infrastructure that assures a consistent experience to the hosted workloads through HDS standardization of underlying physical servers, network devices and storage systems.

Lower Cost of Ownership

- A cost-optimized platform and software-independent solution for rack system integration.
- High performance and scalability with Hitachi Adaptable Modular Storage solutions along with the Windows 2008 R2 operating system and Hyper-V technology.

The Hitachi Compute Blade 2000 along with the Hitachi Adaptable Modular Storage 2500 provides a highly available and highly scalable platform on which to build a private cloud infrastructure.


One of the primary objectives of the private cloud solution is to enable rapid provisioning and deprovisioning of virtual machines. Doing so on a large scale requires tight integration with the storage architecture and robust automation. Provisioning a new virtual machine on an existing LUN is a simple operation; however, provisioning a new LUN to support additional virtual machines and adding it to a host cluster are complicated tasks that greatly benefit from automation.

Storage architecture is a critical design consideration for Hyper-V Cloud solutions. The topic is challenging because it is rapidly evolving in terms of new standards, protocols and implementations. Storage and the supporting storage networking are critical to the overall performance of the environment and affect the overall cost. The Hitachi Adaptable Modular Storage 2500 is ideal for private clouds because of its ability to scale as additional workloads are hosted in the cloud and to provide high availability to those workloads.

A private cloud is far more than a highly available infrastructure providing computing resources to higher-level applications. A fundamental shift in cloud computing is that IT moves from being a server operator to being a service provider. This requires a set of services to accompany the infrastructure, such as reporting, usage metering and self-service provisioning. If these services are unavailable, then the cloud service layer is unavailable, and IT is little more than a traditional data center. For this reason, high availability must be provided to the management systems.

This reference architecture guide is intended for IT administrators involved in data center planning and design, specifically those with a focus on the planning and design of a Hyper-V private cloud infrastructure. It assumes some familiarity with the Hitachi Adaptable Modular Storage 2000 family, Hitachi Storage Navigator Modular 2 software, Microsoft Windows Server 2008 R2 and Hyper-V failover clustering.
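The provision-a-LUN-and-add-it-to-the-cluster workflow described above is exactly the kind of task that benefits from automation. The sketch below is purely illustrative: the function and step names are hypothetical, and a real implementation would drive the Storage Navigator Modular 2 CLI and Windows failover-cluster tooling rather than build a list of strings.

```python
# Hypothetical orchestration sketch of the LUN-provisioning workflow.
# All helper steps are placeholders for real array and cluster commands.

def provision_csv_lun(pool_id: int, size_gb: int, cluster_nodes: list) -> dict:
    """Plan the steps needed to add a new CSV to a Hyper-V failover cluster."""
    steps = []
    # 1. Carve a virtual LUN out of a Dynamic Provisioning pool on the array.
    steps.append(f"create {size_gb}GB LUN in DP pool {pool_id}")
    # 2. Map the LUN to the host groups on the cluster's storage ports.
    steps.append("map LUN to host groups on the tenant cluster's storage ports")
    # 3. Rescan storage and bring the new disk online on every cluster node.
    steps.extend(f"rescan disks on {node}" for node in cluster_nodes)
    # 4. Format the disk, add it to the cluster, and enable CSV.
    steps.append("format NTFS, add disk to cluster, convert to Cluster Shared Volume")
    return {"pool": pool_id, "size_gb": size_gb, "steps": steps}

plan = provision_csv_lun(2, 5000, ["CFT-Node0", "CFT-Node1"])
```

Automating this sequence end to end is what makes large-scale provisioning and deprovisioning practical.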

Solution Overview

The reference architecture described in this paper is built on Hitachi's and Microsoft's latest generation of hardware and software virtualization platforms. The Hitachi Compute Blade 2000 was configured with two distinct Hyper-V failover clusters: a six-node tenant cluster that hosts the production (tenant) VMs and a two-node management cluster. The management cluster contains the Hitachi and Microsoft software required to deploy virtual machines (VMs) to the tenant cluster, along with the products and tools to manage the Hyper-V private cloud infrastructure components. The server blades used for this reference architecture can typically host an average of 32 VMs per blade, for a total of 192 VMs hosted by the six-node tenant cluster. This reference architecture provides the following capabilities:

- Virtual machine high availability--With the Hitachi Compute Blade 2000 running Hyper-V failover clustering, the virtual machines deployed in the failover cluster are made highly available. If one of the blades in the cluster fails, the virtual machines residing on that blade automatically fail over to another blade in the cluster.
- Virtual machine live migration--The administrator can live migrate a virtual machine from one blade in the cluster to another to balance workloads or before performing server maintenance.
- Template-based virtual machine provisioning--Virtual machine templates allow administrators to rapidly deploy virtual machines.
- Self-service virtual machine provisioning--Administrators can delegate authority to other users or a group of business owners, allowing them to create virtual machines based on a set of predetermined templates. This is provided via a web interface.
- Integration with System Center Operations Manager--Hitachi provides monitoring packs for the Hitachi Compute Blade 2000 and the Hitachi Adaptable Modular Storage 2500 (AMS 2500). This enables the administrator to be notified of any alerts that require attention.

This reference architecture was sized based on the following goals:

- Tolerate the failure of a single Hyper-V host in the tenant cluster and continue to run all the virtual machines from that failed node by restarting them on other nodes in the failover cluster.
- Reserve the appropriate amount of memory for the Hyper-V management partition.
- Provide adequate storage capacity and performance on the AMS 2500 to support the virtual machines.

Figure 1 shows a high-level design of the reference architecture documented in this white paper:


Figure 1


To support this reference architecture, several components must be in place or deployed with it. Active Directory and Domain Name System (DNS) are required to support the Hyper-V failover clusters and the management products, such as System Center Virtual Machine Manager 2008 R2 and System Center Operations Manager 2007 R2. In addition, a network must be in place to support out-of-band hardware management for the Hitachi Compute Blade 2000 and the Hitachi Adaptable Modular Storage 2500.

Key Solution Components

The following sections describe the key hardware and software components used to deploy this solution.

Hardware Components

Table 1 lists the detailed information about the hardware components used in the Hitachi Data Systems lab.

Table 1. Hardware Components

Hitachi Adaptable Modular Storage 2500 storage system (version 0897/A-Y, quantity 1)
- Dual controller
- 8 x 8Gbps Fibre Channel ports
- 4 x 1Gbps iSCSI ports
- 16GB cache memory
- 266 SAS 600GB 10K RPM disks
- 48 SATA 1TB 7.5K RPM disks

Hitachi Compute Blade 2000 chassis
- 8-blade chassis
- 2 x 8Gbps Fibre Channel switch modules
- 4 x 1/10Gbps network switch modules
- 8 external 10Gbps network ports
- 2 x management modules
- 8 x cooling fan modules
- 4 x power supply modules

Hitachi Compute Blade 2000 X55A2 blades
- Full blade
- 2 x 4-core Intel Xeon X5640 2.66GHz
- 72GB memory per blade

Hitachi Adaptable Modular Storage 2500

The Hitachi Adaptable Modular Storage 2500 (AMS 2500) provides a reliable, flexible, scalable and cost-effective modular storage system for the Hyper-V Cloud Fast Track solution. The AMS 2500 is ideal for demanding environments and delivers enterprise-class performance, capacity and functionality at a midrange price.


The Hitachi Adaptable Modular Storage 2000 family is the only midrange storage product with symmetric active-active controllers that provide integrated, automated, hardware-based front-to-back-end I/O load balancing. Both controllers in an AMS 2500 storage system can dynamically and automatically assign the access paths from controller to LU. All LUs are accessible regardless of the physical port or the server from which access is requested. Utilization rates of each controller are monitored so that an even distribution of workload between the two controllers can be maintained.

Storage administrators are no longer required to manually define specific affinities between LUs and controllers, simplifying overall administration. In addition, this controller design is fully integrated with standard host-based multipathing, eliminating any mandatory requirement to implement proprietary multipathing software.

No other midrange storage product that scales beyond 100TB has a serial attached SCSI (SAS) drive interface. The point-to-point back-end design virtually eliminates the I/O transfer delays and contention associated with Fibre Channel arbitration, provides significantly higher bandwidth and I/O concurrency, and isolates any component failures that might occur on back-end I/O paths. For more information about the 2500 and other models of the 2000 family, see the Hitachi Adaptable Modular Storage 2000 Family Overview brochure.
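The AMS 2500 performs this controller load balancing automatically in firmware; the toy model below only illustrates the underlying idea of shifting LU ownership toward the less-busy controller. The LU names and load figures are illustrative assumptions.

```python
# Toy model of utilization-based LU load balancing between two controllers.
# The real array monitors controller utilization and rebalances in hardware.

def rebalance(lu_load: dict) -> dict:
    """Greedily assign LUs to controller '0' or '1' to even out total load."""
    totals = {"0": 0, "1": 0}
    owners = {}
    # Place the busiest LUs first, each on whichever controller is lighter.
    for lu in sorted(lu_load, key=lu_load.get, reverse=True):
        target = min(totals, key=totals.get)
        owners[lu] = target
        totals[target] += lu_load[lu]
    return owners

# Three LUs with uneven I/O load end up split 900 vs. 900 across controllers.
owners = rebalance({"LU00": 900, "LU01": 400, "LU02": 500})
```

Because every LU remains accessible through either controller, this kind of reassignment is transparent to hosts using standard multipathing.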

Hitachi Compute Blade 2000

The Hitachi Compute Blade 2000 features a modular architecture that delivers unprecedented configuration flexibility, as shown in Figure 2.

Figure 2

The Hitachi Compute Blade 2000 combines all the benefits of virtualization with all the advantages of the blade server format: simplicity, flexibility, high-compute density and power efficiency. This enables you to take advantage of the following benefits:

- Consolidate more resources.
- Extend the benefits of virtualization solutions (whether Hitachi logical partitioning, VMware vSphere, Microsoft Hyper-V, or all three).
- Cut costs without sacrificing performance.


Hitachi Compute Blade 2000 enables you to use virtualization to consolidate application and database servers for backbone systems, areas where effective consolidation was difficult in the past. By removing performance and I/O bottlenecks, Hitachi Compute Blade 2000 opens new opportunities for increasing efficiency and utilization rates and reduces the administrative burden in your data center.

No blade system is more manageable or flexible than the Hitachi Compute Blade 2000. You can configure and administer it using a web browser over secure encrypted communications, or leverage the optional management suite to manage multiple chassis through a unified GUI-based interface.


The Hitachi Compute Blade 2000 chassis is a 19-inch rack compatible, 10U-high chassis with a high degree of configuration flexibility. The front of the chassis has slots for eight server blades and four power supply modules, and the back of the chassis has six bays for I/O switch modules, eight fan modules, two management modules, 16 half-height PCIe slots and two AC power input modules, as shown in Figure 3.

Figure 3

All modules, including fans and power supplies, can be configured redundantly and hot swapped, maximizing system uptime. The Hitachi Compute Blade 2000 accommodates up to four power supplies in the chassis and can be configured with mirrored power supplies, providing backup on each side of the chassis and higher reliability. Cooling is provided by efficient, variable-speed, redundant fan modules. Each fan module includes three fans to tolerate fan failures within a module; if an entire module fails, the remaining fan modules continue to cool the chassis.


Server Blades

The Hitachi Compute Blade 2000 supports two blade server options that can be combined within the same chassis. Table 2 lists the specifications for each server blade option.

Table 2. Hitachi Compute Blade 2000 Server Blade Specifications

Feature                              X55A2                            X57A1
Processors (up to two per blade)     Intel Xeon 5600, 4 or 6 cores    Intel Xeon 7500, 6 or 8 cores
Processor cores (per blade)          4, 6, 8 or 12                    6, 8, 12 or 16
Memory slots (per blade)             18                               32
Maximum memory (per blade)           144GB (with 8GB DIMMs)           256GB (with 8GB DIMMs)
Hard drives                          Up to 4                          N/A
Network interface cards (onboard)    Up to 2 1Gb Ethernet             Up to 2 1Gb Ethernet
Other interfaces                     2 USB 2.0 ports, 1 serial port   2 USB 2.0 ports, 1 serial port
Mezzanine slots                      2                                2
PCIe 2.0 (8x) expansion slots        2                                2

Four X57A1 blades can be connected using the SMP interface connector to create a single eight-socket SMP system with up to 64 cores and 1024GB of memory.

I/O Options

The connections from the server blades through the chassis' mid-plane to the bays or slots on the back of the chassis consist of the following:

- The two on-board NICs connect to switch bays one and two.
- The optional mezzanine card in the first mezzanine slot connects to switch bays three and four.
- The optional mezzanine card in the second mezzanine slot connects to switch bays five and six.
- Two connections go to dedicated PCIe slots.

The I/O options supported by the optional mezzanine cards and the switch modules are either 1Gbps Ethernet or 8Gbps Fibre Channel connectivity.

Hitachi Compute Blade 2000 Management Modules

The Hitachi Compute Blade 2000 supports up to two management modules for redundancy. Each module is hot-swappable and supports live firmware updates without the need for shutting down the blades. Each module supports an independent management LAN interface from the data network for remote and secure management of the chassis and all blades. Each module supports a serial command line interface and a web interface. SNMP and email alerts are also supported.


N+1 or N+M Cold Standby Failover

The Hitachi Compute Blade 2000 maintains high uptime levels through sophisticated failover mechanisms. The N+1 cold standby function enables multiple servers to share a standby server, increasing system availability while decreasing the need for multiple standby servers or costly software-based high-availability servers. It enables the system to detect a fault in a server blade and switch to the standby server, manually or automatically. The hardware switching is executed even in the absence of the administrator, enabling the system to return to normal operations in a short time.

The N+M cold standby function has "M" backup server blades for every "N" active server blades, so failover is cascading. In the event of multiple hardware failures, the system automatically detects the fault and identifies the problem by indicating the faulty server blade, allowing immediate failure recovery. This approach can reduce total downtime by enabling the application workload to be shared among the working servers.
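The N+M cold standby behavior can be pictured as a small state change: when an active blade faults, its workload is re-homed onto the next available standby blade. The chassis performs this switch in hardware; the sketch below, with hypothetical blade names and data structures, only illustrates the bookkeeping.

```python
# Illustrative sketch of N+M cold standby failover. Blade names and the
# dict/list structures are assumptions for demonstration purposes only.

def fail_over(active: dict, standbys: list, failed: str) -> dict:
    """Move the failed blade's workload onto the next available standby."""
    if failed not in active:
        raise ValueError(f"{failed} is not an active blade")
    if not standbys:
        raise RuntimeError("no standby blades remain")
    replacement = standbys.pop(0)             # take the next cold standby
    active[replacement] = active.pop(failed)  # re-home the workload
    return active

active = {"blade0": "tenant VMs", "blade1": "tenant VMs"}
standby_pool = ["blade6", "blade7"]
fail_over(active, standby_pool, "blade0")  # blade6 takes over blade0's role
```

With M standbys in the pool, up to M successive blade failures can be absorbed before manual intervention is required.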

Fibre Channel Switch Modules

The Hitachi Compute Blade 2000 provides support for 12-port and 22-port internal Fibre Channel switch modules along with 2-port or 4-port Fibre Channel mezzanine cards. These internal Fibre Channel switch modules allow for the connection of up to eight server blades, with six external 8Gbps ports per switch and 16 internal ports at 8Gbps per port. Using the Fibre Channel mezzanine cards along with the internal Fibre Channel switches can reduce the space required to support the storage infrastructure by configuring the switching within the chassis, eliminating the need for external Fibre Channel switches. For this reference architecture, two Fibre Channel switch modules were deployed.

Network Switch Modules

A standard Hitachi Compute Blade 2000 contains two internal LAN switch modules. For this reference architecture, four internal LAN switch modules were used. All of the LAN switch modules used were 1/10Gbps switches. These internal LAN switches provide two 10Gbps uplinks per LAN switch module for a total of eight 10GigE uplinks for this reference architecture. Four 1Gbps uplinks are also provided per LAN switch module for a total of 16 1GigE uplinks available for connection to the customer's network. The first port on each switch can also be used for switch management if required. Figure 4 shows the internal LAN switches in the Hitachi Compute Blade 2000:

Figure 4


Software Components

This section describes the software components deployed for this reference architecture. Table 3 lists the software used.

Table 3. Software Components

Software                                                      Version
Hitachi Storage Navigator Modular 2                           Microcode dependent
Hitachi Dynamic Provisioning                                  Microcode dependent
Microsoft MPIO                                                006.0001.7600.16385
Windows Server 2008 (for Hyper-V server)                      Datacenter edition, R2
Windows Server 2008 (for all virtual machines)                Enterprise edition, R2
SQL Server 2008 R2                                            Enterprise edition, R2
Microsoft Virtual Machine Manager 2008                        R2
Microsoft Systems Center Operations Manager 2007              R2
Microsoft Systems Center Configuration Manager 2007           R2
Microsoft Virtual Machine Manager 2008 Self-Service Portal    2.0
Microsoft Deployment Toolkit                                  2010
Microsoft Windows Deployment Services 2008                    R2

Hitachi Dynamic Provisioning Software

On Hitachi Adaptable Modular Storage 2000 family systems, Hitachi Dynamic Provisioning software provides a wide-striping technology that dramatically improves performance, capacity utilization and management of your environment. By deploying the Cloud Fast Track architecture using volumes from Hitachi Dynamic Provisioning storage pools on the Hitachi Adaptable Modular Storage 2500, you can expect the following benefits:

- An improved I/O "buffer" to burst into during peak usage times.
- A smoothing effect on the virtual machine workload that can eliminate hot spots, reducing virtual machine performance issues.
- Minimization of excess, unused capacity by leveraging the combined capabilities of all disks comprising a storage pool.
- Elimination of the need to manually manage the placement of virtual machines, allowing for the automation of virtual machine creation and deployment.

Microsoft Hyper-V 2008 R2

Microsoft Windows Hyper-V is a hypervisor-based virtualization technology that is integrated into Windows Server 2008 x64 and Windows Server 2008 R2 versions of the operating system. It allows for the reduction of hardware footprints and capital expenses through server consolidation.


Microsoft Virtual Machine Manager 2008 R2 (SCVMM)

Virtual Machine Manager 2008 R2 helps enable centralized management of physical and virtual IT infrastructure, increased server utilization, and dynamic resource optimization across multiple virtualization platforms. It includes end-to-end capabilities such as planning, deploying, managing, and optimizing the virtual infrastructure. For this solution, SCVMM is used to manage only Hyper-V Cloud Fast Track hosts and guests in a single datacenter.

Microsoft Virtual Machine Manager 2008 R2 Self-Service Portal R2

The System Center Virtual Machine Manager Self-Service Portal is an extensible web-based application that provides a way for groups in an organization (referred to as business units) to manage the self-service provisioning of IT infrastructures while the physical resources (servers, networks, storage devices, and related hardware) remain in a centralized pool. Instead of using physical servers and related hardware to build an IT infrastructure, a business unit IT (BUIT) administrator uses the self-service portal to build an IT infrastructure from virtual machines.

Microsoft System Center Operations Manager 2007 R2

Operations Manager agents were deployed to the fabric management hosts and VMs. These in-guest agents provide performance and health monitoring of the operating system only. The Operations Manager instance is used for Hyper-V Cloud infrastructure monitoring only.

Microsoft System Center Configuration Manager 2007 R2

System Center Configuration Manager 2007 R2 comprehensively assesses, deploys, and updates servers, client computers, and devices across physical, virtual, distributed, and mobile environments.

Microsoft Deployment Toolkit

Microsoft Deployment Toolkit (MDT) provides for the network deployment of Windows 2008 R2. All the software required for installation (operating system, drivers and updates) is packaged into deployment packages. These packages are then deployed over the network from an MDT server.

Solution Design

This section provides detailed information on the Hyper-V Cloud Fast Track design used for this reference architecture. It includes both software and hardware design information required to build the basic infrastructure for the Cloud Fast Track environment.

High-level Architecture

For ease of management, scalability and predictable performance, this solution uses the Hitachi Compute Blade 2000 and the Hitachi Adaptable Modular Storage 2500 (AMS 2500) as pooled resources in support of the private cloud architecture. The Hitachi Compute Blade 2000 provides the resources to host a large number of VMs. The AMS 2500 supports Hitachi Dynamic Provisioning to ease management and enable rapid deployment of virtual machines.


This reference architecture deploys a Hitachi Compute Blade 2000 chassis with eight blades, all running Windows 2008 R2 Hyper-V and failover clustering.

A two-node Hyper-V failover cluster was configured to support the management infrastructure and provide high availability. This management cluster supports the deployment and management of virtual machines through Virtual Machine Manager 2008 R2 and the Virtual Machine Manager Self-Service Portal. In addition, it hosts the tools and utilities used to monitor and collect performance statistics for the private cloud infrastructure.

A six-node Hyper-V failover cluster was configured to host the tenant virtual machines deployed via the management cluster. This provides the ability to quickly move virtual machines between nodes in the cluster, enabling high availability.

To support the capacity, performance and rapid provisioning requirements of a private cloud infrastructure, the AMS 2500 was deployed and configured to utilize Hitachi Dynamic Provisioning pools. The storage configuration for the Cloud Fast Track architecture consists of four Dynamic Provisioning pools configured on the AMS 2500: one pool to host the VHDs for the virtual machine operating systems, two pools to host the application data and one pool for backup data. In the OS and application data pools, Cluster Shared Volume LUNs were allocated; the LUNs mapped from the backup pool were standard LUNs.

For this reference architecture, the 96TB backup pool has sufficient capacity to use Microsoft Data Protection Manager 2010 or a customer's existing backup strategy to ensure proper protection of the storage environment. For specific details on protecting Hyper-V CSVs with Data Protection Manager 2010 and the Hitachi Data Systems VSS hardware provider, see Protecting Hyper-V CSVs with Microsoft Data Protection Manager and the Hitachi VSS Hardware Provider on the Hitachi Adaptable Modular Storage 2000 Family Implementation Guide.

Microsoft System Center management tools hosted in the two-node management cluster were used to allocate the VMs and their associated application data LUNs as required across the Dynamic Provisioning pools. A round-robin method for allocating application LUNs is recommended to minimize the management of VM application data in the two pools used for application data.
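The recommended round-robin placement can be expressed very simply: alternate each new application-data LUN between the two application pools so that neither accumulates a disproportionate share of VM data. A minimal sketch, assuming pools 2 and 3 as in this architecture:

```python
# Minimal round-robin allocator for application-data LUN placement across
# the two application pools. Pool numbers follow this reference architecture.

from itertools import cycle

app_pools = cycle([2, 3])  # the two Dynamic Provisioning pools for app data

def next_app_pool() -> int:
    """Return the pool that should receive the next application LUN."""
    return next(app_pools)

# Four consecutive allocations alternate cleanly between the pools.
placements = [next_app_pool() for _ in range(4)]
print(placements)  # [2, 3, 2, 3]
```

In practice the same alternation would be encoded in the provisioning automation rather than tracked by hand.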


Figure 5 shows the physical layout of the Hitachi Hyper-V Cloud Fast Track reference architecture.

Figure 5

Hitachi Compute Blade 2000 Chassis Configuration

This reference architecture uses eight X55A2 blades, four 1/10Gbps LAN switch modules, and two Fibre Channel switch modules. Each blade has two on-board NICs and four additional NICs provided by a mezzanine card. Each of these NICs is connected to a LAN switch module. Each blade also has a 2-port Fibre Channel mezzanine card installed that is connected to both Fibre Channel switch modules. Figure 6 shows the front and back view of the Hitachi Compute Blade 2000 used in this solution.


Figure 6

Hitachi Compute Blade 2000 Server Architecture

The host server architecture is a critical component of the virtualized infrastructure. The ability of the host servers to handle the workload of a large number of consolidation candidates increases the consolidation ratio and helps provide the desired cost benefits. The Hitachi Compute Blade 2000 was chosen for this reference architecture because of its ability to support large numbers of virtual machines per blade and because it meets the requirements set forth by the Microsoft Cloud Fast Track program for processor, RAM and network capability.


Table 4 lists the blade configuration. Each blade runs Windows 2008 R2 Datacenter with two 4-core Xeon X5640 2.66GHz processors and 72GB of RAM.

Table 4. Blade Configuration

Blade Server   Server Name   Role
Blade 0        CFT-Node0     Hyper-V Host -- Tenant VMs
Blade 1        CFT-Node1     Hyper-V Host -- Tenant VMs
Blade 2        CFT-Node2     Hyper-V Host -- Tenant VMs
Blade 3        CFT-Node3     Hyper-V Host -- Tenant VMs
Blade 4        CFT-Node4     Hyper-V Host -- Tenant VMs
Blade 5        CFT-Node5     Hyper-V Host -- Tenant VMs
Blade 6        CFT-Node6     Hyper-V Host -- Hyper-V Management
Blade 7                      Hyper-V Host -- Hyper-V Management


Testing has shown that with this Hitachi Compute Blade 2000 configuration, a total of 32 VMs can be hosted per blade, for a total of 192 VMs in the six-node Hyper-V failover cluster described in this reference architecture. The Hyper-V host OS and paging files for each blade were located on two local 146GB SAS drives configured as RAID-1 for high performance and availability.
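The 32-VMs-per-blade figure can be sanity-checked with back-of-envelope memory arithmetic. The 72GB of blade RAM comes from the configuration above; the 8GB management-partition reserve and 2GB average VM size below are illustrative assumptions, not measured values from this architecture.

```python
# Back-of-envelope memory budget for one blade. The reserve and per-VM
# memory figures are assumptions chosen for illustration.

BLADE_RAM_GB = 72       # per-blade memory from the blade configuration
PARENT_RESERVE_GB = 8   # assumed reserve for the Hyper-V management partition
VM_RAM_GB = 2           # assumed average memory per VM

max_vms = (BLADE_RAM_GB - PARENT_RESERVE_GB) // VM_RAM_GB
print(max_vms)  # 32 VMs per blade under these assumptions
```

Actual density depends on the real VM memory profile, so this kind of calculation should be repeated with workload-specific numbers.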

Storage Architecture

A Hitachi Adaptable Modular Storage 2500 was used for the Hyper-V Cloud Fast Track architecture. For this architecture, Hitachi Dynamic Provisioning pools were used to host the virtual machine OS VHDs and application VHDs. All LUNs in the environment were presented to the Hyper-V hosts as Cluster Shared Volumes (CSVs) except the LUNs used for backup data. Three Dynamic Provisioning pools were allocated to host the virtual machine VHDs: Pool 1 hosted the guest operating system VHDs, and Pools 2 and 3 hosted the data LUNs for the guest VMs. Figure 7 provides a high-level diagram of the AMS 2500 storage configuration.


Figure 7


Cluster Shared Volumes

For this reference architecture, Cluster Shared Volumes (CSVs) were implemented to host the virtual machine operating system and application data. CSVs are exclusively for use with Hyper-V failover clustering and enable all nodes in the cluster to access the same cluster storage volumes at the same time. This eliminates the one-VM-per-LUN requirement, allowing multiple VMs to be placed on a single CSV and simplifying the management of the storage infrastructure in a private cloud environment.

Because all cluster nodes can access all CSVs simultaneously, standard LUN allocation methodologies based on the performance and capacity requirements of the expected workloads can be used. Microsoft recommends isolating the VM operating system I/O from the application data I/O. This is the reason for creating multiple Hitachi Dynamic Provisioning pools to host the CSVs: one to contain the VM OS VHDs and two additional pools to support application-specific VHDs.

CSV architecture differs from other traditional clustered file systems, which frees it of scalability limitations such as one VM per LUN and drive letter limits. As a result, there is no special guidance for scaling the number of Hyper-V nodes or VMs on a CSV volume. Keep in mind, however, that the virtual disks of all VMs running on a particular CSV contend for storage I/O. It is important to understand the I/O workload characteristics of the VMs that will be hosted on CSVs located in Dynamic Provisioning pools. Take into consideration the IOPS requirements of each VM to be deployed along with its I/O profile (for example, random read and write operations vs. sequential write operations).
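One way to apply the placement consideration above is to sum the expected IOPS of every VM already on a CSV before adding another. The function below is a hypothetical sketch; the per-VM IOPS figures and the pool's IOPS ceiling are assumptions that would come from workload profiling and array sizing in practice.

```python
# Sketch of a CSV placement check: does the volume's aggregate IOPS demand
# stay within the share of the Dynamic Provisioning pool backing it?
# All IOPS numbers here are illustrative assumptions.

def csv_can_host(existing_vm_iops: list, new_vm_iops: int,
                 pool_iops_ceiling: int) -> bool:
    """True if total CSV demand stays at or under the pool's capability."""
    return sum(existing_vm_iops) + new_vm_iops <= pool_iops_ceiling

# Eight VMs averaging 150 IOPS each, adding one 300-IOPS VM, against an
# assumed 2,000-IOPS budget for this CSV's share of the pool:
print(csv_can_host([150] * 8, 300, 2000))  # True (1200 + 300 <= 2000)
```

A check like this also needs to account for the I/O profile (random vs. sequential), since a pool's sustainable IOPS differs by access pattern.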

RAID Configuration

To satisfy the requirement to support up to 192 virtual machines, this reference architecture uses a RAID-5 (4D+1P) configuration of 600GB 15K RPM SAS drives to host the CSVs. A RAID-6 (6D+2P) pool consisting of 2TB 7.2K RPM SATA drives was used to support backup of the CSV volumes. The four Hitachi Dynamic Provisioning pools used for this solution were created from 53 RAID-5 (4D+1P) groups and five RAID-6 (6D+2P) groups on the Hitachi Adaptable Modular Storage 2500. Table 5 lists the configuration of each Dynamic Provisioning pool used in the Hitachi Data Systems lab.

Table 5. Dynamic Provisioning Configuration for Cloud Fast Track Architecture

Dynamic Provisioning Pool   Number of RAID Groups   Number of Drives   Usable Pool Capacity (TB)
1                           10                      50                 20.0
2                           21                      105                43.0
3                           22                      110                46.5
4                           5                       40                 80.0
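The drive counts in Table 5 follow directly from the RAID geometry: a RAID-5 (4D+1P) group uses 5 drives and a RAID-6 (6D+2P) group uses 8. A quick sketch of that arithmetic:

```python
# Drive counts per Dynamic Provisioning pool, derived from the RAID
# geometry used in this architecture: RAID-5 (4D+1P) = 5 drives per
# group, RAID-6 (6D+2P) = 8 drives per group.

DRIVES_PER_GROUP = {"RAID-5 (4D+1P)": 5, "RAID-6 (6D+2P)": 8}

pools = {  # pool number: (RAID level, number of RAID groups)
    1: ("RAID-5 (4D+1P)", 10),
    2: ("RAID-5 (4D+1P)", 21),
    3: ("RAID-5 (4D+1P)", 22),
    4: ("RAID-6 (6D+2P)", 5),
}

for pool, (raid, groups) in pools.items():
    drives = groups * DRIVES_PER_GROUP[raid]
    print(f"Pool {pool}: {drives} drives")
# Yields 50, 105, 110 and 40 drives, matching Table 5.
```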


Pool Configuration

Pool 1 contains a CSV LUN for the management cluster and CSV LUNs for the operating system VHDs of the guest VMs deployed in the tenant cluster. Pools 2 and 3 contain CSVs allocated to the guest VMs' application databases and files. Table 6 shows the LUN allocation for each of the Dynamic Provisioning pools used in the Hitachi Data Systems lab.

Table 6. LUN Allocation

Pool 1 (storage ports 0F,1F):
- LUN 00, 5000 GB: Cluster Shared Volume 1, Hyper-V management cluster
- LUN 01, 1 GB: Management Cluster Quorum Disk

Pools 2 and 3 (storage ports 0E,1E,0G,1G,0H,1H):
- LUNs 00-0F, 5000 GB each: Cluster Shared Volumes 1 through 16, Hyper-V tenant cluster
- LUN 10, 1 GB: Tenant Cluster Quorum Disk

Drive and LUN Configuration

Each LU on the storage is presented as a LUN to the Hyper-V tenant cluster nodes or the Hyper-V management cluster nodes. Figure 8 shows the detailed drive configuration used in this solution.

Figure 8


SAN Architecture

The SAN architecture consists of two Fibre Channel switch modules within the Hitachi Compute Blade 2000 chassis. The Hyper-V host management cluster has two paths to the Hitachi Adaptable Modular Storage 2500 (AMS 2500) using ports 0F and 1F. The Hyper-V host tenant cluster has four paths to the AMS 2500 using ports 0E, 0G, 0H, and 1E, 1G and 1H. The configuration shown in Figure 9 below supports high availability by providing multiple paths from the hosts within the Hitachi Compute Blade 2000 to multiple ports on the AMS 2500.

Figure 9

The Microsoft MPIO software is used for multipathing, employing the round-robin multipathing policy. Microsoft MPIO software's round-robin load balancing algorithm automatically selects a path by rotating through all available paths, thus balancing the load across all available paths and optimizing IOPS and response time.
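The round-robin behavior described above can be illustrated with a minimal sketch: each new I/O is sent to the next path in rotation, so load spreads evenly across all available paths. This is an illustration of the policy, not Microsoft's MPIO implementation; the path names are the AMS 2500 port labels used in this architecture.

```python
# Minimal sketch of round-robin path selection, as used by the
# Microsoft MPIO round-robin load-balancing policy described above.
from itertools import cycle

class RoundRobinSelector:
    """Rotates through the available paths for each new I/O."""

    def __init__(self, paths):
        self._paths = cycle(paths)

    def next_path(self):
        return next(self._paths)

# Tenant-cluster style path set (illustrative port labels).
sel = RoundRobinSelector(["0E", "1E", "0G", "1G"])
print([sel.next_path() for _ in range(6)])
# ['0E', '1E', '0G', '1G', '0E', '1E']
```

A production multipathing driver also removes failed paths from the rotation and restores them on recovery, which this sketch omits.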


Path Configuration

All zoning is defined on the two Fibre Channel switch modules. The 2-port mezzanine cards internal to each blade provide the HBA interface to the internal switches. Host groups on the Hitachi Adaptable Modular Storage 2500 (AMS 2500) were used to ensure that each blade could access only the LUNs allocated to that blade. Table 7 lists the connections between the Hyper-V failover clusters and the storage system ports.

Table 7. Path Configuration

Blade HBA and Port        Storage System Ports
Blade 0 HBA 1 port 1      0E, 1E
Blade 0 HBA 1 port 2      0E, 1E
Blade 1 HBA 1 port 1      0E, 1E
Blade 1 HBA 1 port 2      0E, 1E
Blade 2 HBA 1 port 1      0G, 1G
Blade 2 HBA 1 port 2      0G, 1G
Blade 3 HBA 1 port 1      0G, 1G
Blade 3 HBA 1 port 2      0G, 1G
Blade 4 HBA 1 port 1      0H, 1H
Blade 4 HBA 1 port 2      0H, 1H
Blade 5 HBA 1 port 1      0H, 1H
Blade 5 HBA 1 port 2      0H, 1H
Blade 6 HBA 1 port 1      0F, 1F
Blade 6 HBA 1 port 2      0F, 1F
Blade 7 HBA 1 port 1      0F, 1F
Blade 7 HBA 1 port 2      0F, 1F

Zone names follow the convention Blade_<blade>_HBA1_<port>_SW<switch>_AMS_<ports> (for example, Blade_2_HBA1_2_SW1_AMS_0G_1G), and the storage system host groups follow the convention Blade<blade>_HBA1_<port> (for example, Blade0_HBA1_1).


iSCSI Architecture

iSCSI target ports are available on the Hitachi Adaptable Modular Storage 2500 (AMS 2500) to support connecting virtual machines that require iSCSI storage. iSCSI is addressed in this reference architecture to support guest clustering if required in the customer's environment; guest clustering is not supported over Fibre Channel connections in a Hyper-V failover cluster. The iSCSI connections on the AMS 2500 for this reference architecture are available on ports 0A, 0B, 1A and 1B. A physically separate network is defined in this reference architecture for iSCSI storage traffic to provide higher throughput and performance.

Network Architecture

For private cloud solutions to provide both high network performance and high availability, the network must be designed properly. The network must also support isolation between the various types of traffic in a virtualized environment.

In keeping with Microsoft best-practice recommendations for private cloud implementations, network traffic is broken down into separate networks, with each network type assigned to a different subnet. Subnetting breaks the configuration into smaller, more efficient networks. Further isolation of network types is achieved through VLAN isolation and the use of dedicated network switches. For VLAN-based network segmentation or isolation, several components, including the host servers, host clusters, VMM, and the network switches, must be configured correctly to enable both rapid provisioning and network segmentation. With Hyper-V and host clusters, identical virtual networks must be defined on all nodes so that a virtual machine can fail over to any node and maintain its connection to the network.

The following networks were defined for the Cloud Fast Track reference architecture:

- A dedicated management network is required to manage the Hyper-V hosts without competing with virtual machine guest traffic. A dedicated network provides a degree of separation for security and ease of management. This typically means dedicating one network adapter per host and one port per network device to the management network. This network is used for remote administration of the host, communication with management systems (for example, System Center agents), and so on.
- A dedicated Cluster Shared Volumes and cluster communication network is required so that when storage connectivity to the CSVs is lost due to a failure in the Fibre Channel network, the I/O can be redirected over the cluster network.
- A Live Migration network is required to ensure the high-speed transfer of VMs between nodes in the Hyper-V failover cluster.
- One or more networks are dedicated to virtual machine LAN traffic.
- When using iSCSI, a dedicated network is required so that storage traffic is not in contention with any other network traffic.

Table 8 below lists the multiple networks required in the Hyper-V Cloud Fast Track environment.

Table 8. Required Networks

Cluster Public/Management: Provides the management interface to the cluster. External management applications such as SCVMM and SCOM communicate with the cluster via this network.

Cluster Private: Primary network interface for communication between the nodes in the cluster.

Cluster Shared Volumes: CSV traffic utilizes the same network specified for use by the Cluster Private network.

Virtual Machine: Provides a network for virtual machines to communicate with clients. A Hyper-V virtual switch is required to support virtual machine network traffic.

Live Migration: By default, Live Migration traffic prefers private networks over public networks. In a configuration with more than one private network, live migration flows over the private network that is not being used by the cluster private or CSV network. The priority of which network to use can be set in the cluster management interface.

iSCSI: Provides a dedicated network for storage traffic.


To support these network traffic requirements, the internal network switches on the Hitachi Compute Blade 2000 were configured as shown in Table 9 below. This configuration creates separate VLANs on the physical network interfaces.

Table 9. LAN Switch Module Configuration

Switch Module #   VLAN #   Network Traffic Type   IP Address Range
0                 2500     Management             10.64.1.x
1,2               2501     Live Migration         10.64.2.x
1,2               2502     CSV/Cluster            10.64.3.x
3                 2503     Virtual Machine        10.64.4.x
0                 2504     iSCSI                  10.64.5.x
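The VLAN-to-subnet layout of Table 9 can be expressed with Python's standard `ipaddress` module, which also makes the isolation property checkable: no two traffic types share an address range. The /24 prefix length is an assumption; Table 9 gives only the 10.64.n.x ranges.

```python
# Table 9's VLAN/subnet layout, expressed with the stdlib ipaddress
# module. The /24 masks are an assumption inferred from the 10.64.n.x
# ranges, not stated in the reference architecture.
import ipaddress

vlans = {
    2500: ("Management",      "10.64.1.0/24"),
    2501: ("Live Migration",  "10.64.2.0/24"),
    2502: ("CSV/Cluster",     "10.64.3.0/24"),
    2503: ("Virtual Machine", "10.64.4.0/24"),
    2504: ("iSCSI",           "10.64.5.0/24"),
}

nets = {vid: ipaddress.ip_network(cidr) for vid, (_, cidr) in vlans.items()}

# Traffic types are isolated: no subnet overlaps any other.
for v1, n1 in nets.items():
    for v2, n2 in nets.items():
        assert v1 == v2 or not n1.overlaps(n2)

print(nets[2504])  # 10.64.5.0/24
```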


This reference architecture uses two on-board NICs on each blade, connected to LAN switch modules 0 and 1. Each blade also has four additional NIC ports provided by an onboard mezzanine expansion card. Figure 10 below shows the relationship between the LAN switch modules and the NIC ports on the server blades:

Figure 10


Figure 11 shows the network configuration used for this reference architecture.

Figure 11


Network Uplink Connectivity

Off-rack connectivity is provided by using either the 1Gbps or the 10Gbps uplink capabilities of the network switch modules. These switches, when connected to the customer core network switching infrastructure, provide extensive connectivity into the environment.

Management Architecture

This section describes the management systems and tools used to deploy and operate the Cloud Fast Track reference architecture. These consist of different toolsets for managing the hardware and software. The Hitachi Command Suite, along with the Microsoft System Center management suite, provides the capabilities to manage the private cloud end to end. To provide high availability for the management systems, a two-node Hyper-V failover cluster was implemented to host the management software. Table 12 summarizes the virtual machines deployed in the management cluster. All of the virtual machines run as highly available virtual machines.

Table 12. Management Cluster Virtual Machines

- AMS-MSFT-SNM2, Storage Management role, hosted on BS-Node6: Windows Server 2008 R2 Enterprise Edition, 1 vCPU, 4 GB RAM
- Active Directory/DNS role: Windows Server 2008 R2 Enterprise Edition, 1 vCPU, 8 GB RAM
- SQL Server 2008 R2 role: Windows Server 2008 R2 Enterprise Edition, 4 vCPUs, 8 GB RAM
- SQL Server 2008 R2 role: Windows Server 2008 R2 Enterprise Edition, 4 vCPUs, 8 GB RAM
- SCOM/VMM role, hosted on BS-Node7: Windows Server 2008 R2 Enterprise Edition, 2 vCPUs, 4 GB RAM
- SCOM/OpsMgr role, hosted on BS-Node7: Windows Server 2008 R2 Enterprise Edition, 2 vCPUs, 4 GB RAM

Microsoft SQL Server 2008 R2

The Microsoft System Center family includes a comprehensive set of capabilities for management of the private cloud. The components of Microsoft System Center are database-driven applications, so SQL Server 2008 R2 is ideal for providing the highly available, well-performing database platform that is critical to the overall management of the environment. For this reference architecture, two highly available SQL Server virtual machines were deployed to support the management infrastructure. Each SQL virtual server was configured with 8GB of memory and four virtual CPUs, running Windows Server 2008 R2 Enterprise Edition.


Table 13 shows the configuration for each SQL Server virtual machine.

Table 13. SQL Server Virtual Machine Specifications

LU                  Purpose            Size
LUN1, CSV Volume    Operating System   30 GB
LUN2, CSV Volume    Database LU        100 GB
LUN3, CSV Volume    Log LU             25 GB

Table 14 shows how the databases were configured for each SQL virtual machine:

Table 14. Database Configuration

Database Client   Instance Name   Database Name   Virtual Machine
VMM SSP           SQL1Instance    SCVMMSSP        SQL1
VMM               SQL1Instance    VMM_DB          SQL1
Ops Mgr           SQL1Instance    Ops_Mgr_DB      SQL1
Ops Mgr           SQL2Instance    Ops_Mgr_DW_DB   SQL2

System Center Virtual Machine Manager 2008 R2 (SCVMM)

SCVMM was deployed for this reference architecture to manage the Hyper-V hosts and guests in a single data center. It is important to note that the System Center Virtual Machine Manager instance that manages this solution should manage no virtualization infrastructure outside of this solution; it is designed to operate only within the scope of this reference architecture.

SCVMM was deployed on a VM running Windows Server 2008 R2 Enterprise Edition with two virtual CPUs and 4GB of memory. One 30GB OS VHD was allocated from a CSV, and one 500GB VHD was also provisioned to the SCVMM guest machine from a CSV. The 500GB LUN is the library share for VMM and contains virtual machine templates, hardware profiles and sysprepped VHDs for deployment via VMM and VMMSSP. For this environment, the following roles were enabled within SCVMM:

- SCVMM Administrator
- Administrator Console
- Command Shell
- SCVMM Library
- SQL Server Database (remote)


Table 15 below shows a standardized set of VMM templates that were utilized to deploy virtual machines in this configuration. These templates were based on the Microsoft Hyper-V Cloud Fast Track Reference Architecture Guide. These can be customized to fit a customer's private cloud virtual machine deployment requirements.

Table 15. Virtual Machine Templates

Template             Specs                               Network     OS
Template 1 - Small   1 vCPU, 2GB memory, 50GB disk       VLAN 2503   WS 2008 R2
Template 2 - Med     2 vCPU, 4GB memory, 100GB disk      VLAN 2503   WS 2008 R2
Template 3 - Large   4 vCPU, 8GB memory, 200GB disk      VLAN 2503   WS 2008 R2
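One way to apply a fixed template catalog like Table 15 is to pick the smallest template that satisfies a requested footprint. The helper below is illustrative only; VMM itself does not select templates this way, and the request figures are hypothetical.

```python
# Sketch: pick the smallest Table 15 template that satisfies a
# requested vCPU/memory/disk footprint. Illustrative helper only;
# not how VMM assigns templates.

TEMPLATES = [  # (name, vCPU, memory GB, disk GB), smallest first
    ("Template 1 - Small", 1, 2, 50),
    ("Template 2 - Med",   2, 4, 100),
    ("Template 3 - Large", 4, 8, 200),
]

def smallest_fit(vcpu, mem_gb, disk_gb):
    for name, c, m, d in TEMPLATES:
        if c >= vcpu and m >= mem_gb and d >= disk_gb:
            return name
    raise ValueError("no template large enough for this request")

# A request for 2 vCPUs, 3GB of memory and 80GB of disk lands on
# the medium template.
print(smallest_fit(2, 3, 80))  # Template 2 - Med
```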

System Center Operations Manager Integration with Microsoft Virtual Machine Manager 2008 R2

SCVMM was also configured to integrate with System Center Operations Manager (SCOM). SCVMM uses SCOM to monitor the health and availability of the Hyper-V hosts and the virtual machines that SCVMM is managing. SCVMM also uses SCOM to monitor the health and availability of the System Center Virtual Machine Manager server, databases, library servers, and self-service web servers, and to provide views of the virtualized environment in the SCVMM administrator console.

In addition, SCVMM was integrated with SCOM to enable Performance and Resource Optimization (PRO) packs. PRO packs enable the dynamic management of a virtualized infrastructure. The host-level PRO actions in the System Center Virtual Machine Manager 2008 Management Pack recommend migrating the VM with the highest resource usage on a host whenever the CPU or memory usage on the host exceeds the threshold defined by a PRO monitor. The VM is migrated to another host in the host group or host cluster that is running the same virtualization software. If an IT organization has a workload that is not suitable for migration when running in a VM, it can exclude that VM from host-level PRO actions. That VM is then not migrated, even if it has the highest usage of the elevated resource on the host.
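The host-level PRO selection logic just described can be sketched as follows: when a host's CPU or memory usage crosses its threshold, recommend the non-excluded VM with the highest usage of the elevated resource. The data structures and threshold are hypothetical, not the management pack's internal representation.

```python
# Sketch of the host-level PRO action described above: when a host
# exceeds its CPU or memory threshold, recommend migrating the VM
# with the highest usage of the elevated resource, skipping VMs
# excluded from PRO actions. Data shapes are illustrative.

def pro_recommendation(host, threshold=0.85):
    """host: {'cpu': float, 'mem': float, 'vms': [dict, ...]} where
    each VM dict has 'name', 'cpu', 'mem' and 'excluded' keys.
    Returns (vm_name, elevated_resource) or None."""
    for resource in ("cpu", "mem"):
        if host[resource] > threshold:
            candidates = [vm for vm in host["vms"] if not vm["excluded"]]
            if candidates:
                vm = max(candidates, key=lambda v: v[resource])
                return (vm["name"], resource)
    return None

host = {"cpu": 0.92, "mem": 0.60, "vms": [
    {"name": "vm-a", "cpu": 0.40, "mem": 0.20, "excluded": False},
    {"name": "vm-b", "cpu": 0.55, "mem": 0.10, "excluded": False},
    {"name": "vm-c", "cpu": 0.70, "mem": 0.05, "excluded": True},
]}
print(pro_recommendation(host))  # ('vm-b', 'cpu')
```

Note that vm-c has the highest CPU usage but is excluded, so the recommendation falls to vm-b, mirroring the exclusion behavior described above.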

Virtual Machine Manager Self-Service Portal 2.0

The Microsoft Virtual Machine Manager Self-Service Portal 2.0 (VMMSSP) is a partner-extensible portal that enables private clouds and IT as a service with Windows Hyper-V and the System Center suite of management software. Key benefits of the portal for this private cloud architecture are:

- Allocation of data center resources: The portal pools data center infrastructure resources such as storage, networks, and virtual machine templates, and makes them available to business units to meet their infrastructure needs.
- Simplified application onboarding: The portal can simplify the process of a business unit bringing a new application online by providing methods for a business unit owner to request resources from the infrastructure pool to host their IT services.
- Self-service provisioning: The portal provides an end-user self-service capability for virtual machine provisioning. It streamlines the end-user experience of managing virtual machines.


For this reference architecture, VMMSSP was set up and configured to create VMs and provision the appropriate hardware, OS settings and storage based on the templates previously defined in this paper.

Microsoft System Center Operations Manager 2007 R2

System Center Operations Manager agents were deployed to the Hyper-V hosts and VMs. The in-guest agents provide performance and health information for the operating system within each VM. The scope of this System Center Operations Manager instance is Hyper-V cloud infrastructure monitoring only; application-level monitoring is out of scope for this instance. The following roles were enabled for this instance of System Center Operations Manager:

- Root Management Server
- Reporting Server (database resides on SQL Server)
- Data Warehouse (database resides on SQL Server)
- Operator Console
- Command Shell

In addition, the following management packs were installed to provide for monitoring of the cloud infrastructure:

- Hitachi Data Systems Storage Array Management Pack
- Hitachi Compute Blade 2000 Management Pack
- Virtual Machine Manager 2008 R2
- Windows Server Base Operating System
- Windows Server Failover Clustering
- Windows Server 2008 Hyper-V
- Microsoft SQL Server Management Pack
- Microsoft Windows Server Internet Information Services (IIS) 2000/2003/2008
- System Center Management Packs

Hitachi Data Systems Storage Array Management Pack

The Hitachi Data Systems Storage Array Management Pack integrates with the System Center Operations Manager server to report on and monitor Hitachi storage arrays. This management pack provides the following object views, displayed inside the monitoring pane (under "Hitachi Root") of the Operations Manager console:

- Subsystem: displays all the arrays that are managed
- Controller and Port Views and Status
- Array Drive View and Status
- HDP Pools View and Status
- LU View and Status
- RAID Group View and Status


The health of a subsystem is monitored based on the status of the physical components of the subsystem.

Hitachi Data Systems Compute Blade 2000 Management Pack

The Hitachi Compute Blade 2000 Management Pack integrates with the System Center Operations Manager server to report on and monitor the Hitachi Compute Blade 2000 chassis and blades. This management pack provides Alert, Diagram and State views, displayed inside the monitoring pane (under "Hitachi Compute Blade") of the Operations Manager console. The Alert View provides the following information:

- Chassis Alerts
- Windows Server Alerts

The Diagram View provides the following information:

- Hitachi Compute Blade Comprehensive Diagram
- Windows Server Blade Diagram

The State View provides the following information:

- Blade State View
- Chassis State View
- Fan State View
- Management Module State View
- Partition State View
- PCI State View
- Power Supply View
- Internal Switch State View

Engineering Validation

This Hyper-V Cloud Fast Track reference architecture was designed to provide a robust infrastructure that customers can use to quickly deploy a private cloud. Because each customer's environment is unique, testing was performed only to ensure that the underlying infrastructure met the requirements of Microsoft's Hyper-V Cloud Fast Track program. The specific tests performed were:

- Ensure that the network design was properly configured for both performance and availability.
- Ensure that the management infrastructure was configured properly for monitoring and reporting on the health of the infrastructure.
- Validate that the VMM Self-Service Portal correctly deployed virtual machines into the tenant node cluster based on the predefined VMM templates.

The solution described in this white paper passed all of Microsoft's validation testing for the Hyper-V Cloud Fast Track program.



Using this Hitachi reference architecture built on Microsoft Hyper-V Cloud Fast Track, organizations can quickly deploy private cloud infrastructures with predictable results. This solution provides a validated reference architecture that combines the Hitachi Compute Blade 2000 and Hitachi Adaptable Modular Storage 2500 with network infrastructure, Windows Server 2008 R2 with Hyper-V, and System Center software. This architecture can be tuned to different business needs, so organizations can quickly build on the benefits of this private cloud solution to improve agility, maximize efficiency, and optimize control of their data centers.

Hitachi Data Systems Global Services offers experienced storage consultants, proven methodologies and a comprehensive services portfolio to assist you in implementing Hitachi products and solutions in your environment. For more information, see the Hitachi Data Systems Global Services web site.

Live and recorded product demonstrations are available for many Hitachi products. To schedule a live demonstration, contact a sales representative. To view a recorded demonstration, see the Hitachi Data Systems Corporate Resources web site. Click the Product Demos tab for a list of available recorded demonstrations.

Hitachi Data Systems Academy provides best-in-class training on Hitachi products, technology, solutions and certifications. Hitachi Data Systems Academy delivers on-demand web-based training (WBT), classroom-based instructor-led training (ILT) and virtual instructor-led training (vILT) courses. For more information, see the Hitachi Data Systems Academy web site.

For more information about Hitachi products and services, contact your sales representative or channel partner or visit the Hitachi Data Systems web site.


Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries. All other trademarks, service marks and company names mentioned in this document are properties of their respective owners. Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation

© Hitachi Data Systems Corporation 2010. All Rights Reserved. AS-082-00 April 2011 Corporate Headquarters 750 Central Expressway, Santa Clara, California 95050-2627 USA Regional Contact Information Americas: +1 408 970 1000 or [email protected] Europe, Middle East and Africa: +44 (0) 1753 618000 or [email protected] Asia Pacific: +852 3189 7900 or [email protected]


