
Best practices for networking and load balancing with HP E5000 Messaging Systems

Technical white paper

Table of contents

Executive summary
Introduction
E5000 network configurations
  E5000 default configuration
  E5300
  E5500/E5700
  Additional networking notes
  Network example
Load balancing definition
Transport layer configuration
  Send connector configuration
  External connector requirement
  Routing Group Connector for Exchange 2003
Transport layer load balancing
E5000 in existing Exchange environments
  Overview
Understanding network traffic in Exchange 2010
  Understanding proxy traffic
Defining a CAS array
Secure Sockets Layer (SSL) offloading
  Certificate installation types
Load balancer considerations
Types of load balancers
  Native hardware load balancer
  Server hardware (HP server with third party load balancing application)
  Virtual server load balancing
  Windows Network Load Balancing
Hardware load balancing
  Coyote Point
  KEMP Technologies
Understanding affinity
Cookies and HTTP headers
IP port considerations for load balancing
For more information

Executive summary

This white paper provides details on the HP E5000 Messaging System's network configuration and best practices for using a hardware load balancer. HP recommends the use of a hardware load balancer with the E5000.

The HP E5000 Messaging System comes equipped with multiple network interfaces, which include integrated and mezzanine network interface cards1 (NICs). These NICs serve several purposes, including client-based Messaging Application Programming Interface (MAPI) messaging traffic, Microsoft® Windows® domain traffic, management networking, and replication. These interfaces are discussed in detail within this paper.

Microsoft has made significant changes to the way clients interact with Exchange in the 2010 release. All clients connect via the Client Access Server (CAS) role. This workload can be distributed across multiple CAS servers by assigning them to a CAS array. However, the CAS array does NOT provide any load balancing or high availability, which presents a significant performance and reliability issue for the Client Access Servers. To solve the problem of load balancing and high availability, HP and Microsoft recommend the use of a load balancer.

In this document, load balancing concepts are presented initially at a high level, followed by more detailed examples. Finally, HP has tested two different third-party hardware load balancers and has included information regarding testing and implementing those products.

Target audience: This paper is intended for decision makers, IT support staff, and project managers involved in planning and deploying Microsoft Exchange Server 2010 solutions. A basic familiarity with Microsoft Exchange 2010 is assumed.

Note 1 Mezzanine NICs are installed on the E5500 and E5700 models by default. Mezzanine NICs can be purchased for the E5300, if desired.

Introduction

HP and Microsoft have partnered to produce and deliver the E5000 Messaging System for Exchange Server 2010. This paper focuses on planning and configuring the network portion of the E5000 to enable a successful implementation within each customer's messaging environment.

The E5000 is designed to simplify the deployment of Microsoft Exchange by providing installation guidance every step of the way. HP and Microsoft have designed step-by-step wizards to install Exchange 2010 in accordance with industry best practice recommendations. The E5000 has multiple network interfaces which are used to connect to your existing Exchange messaging infrastructure and clients, or to create an entirely new Exchange environment. Following the best practices outlined in this document will enable a smooth implementation and reduce the need for support center help. Networking can be extremely complex within a corporate environment, so be sure to plan carefully before deploying the E5000.

Microsoft has expanded the Client Access Server role within Microsoft Exchange Server 2010. This role is utilized for all client connections to the Exchange infrastructure. The significant change from Exchange 2007 is that the MAPI and directory access connection points have moved from the Mailbox server role to the CAS role. This is accomplished with a new CAS service known as the RPC Client Access service. MAPI clients no longer connect to the mailbox server directly, which increases the need for load balanced and highly available network access to the CAS array. Load balancing client traffic to the CAS servers will reduce the impact if a single CAS server or service fails. Additionally, it will ensure the client workload is evenly divided among the available CAS servers. A separate load balanced CAS array is required for each Active Directory site in your organization.
If you are installing the E5000 into an existing Exchange 2003 or Exchange 2007 organization and you configure a legacy namespace to integrate with previous versions of Exchange, your clients will automatically connect to the Exchange 2010 CAS array. This allows a client with a legacy mailbox to use the CAS array as its "home database server". The array will then proxy or redirect client requests for legacy Exchange mailboxes to the appropriate front-end servers. More information regarding legacy namespaces can be found at http://technet.microsoft.com/en-us/library/ee332348.aspx


E5000 network configurations

The E5000 Messaging System is equipped to provide a very flexible network configuration. The E5300 has 4 network ports available and the E5500/E5700 have 8 network ports available. Additionally, each E5000 has a combined HP Integrated Lights-Out (iLO) and Enclosure Manager (EM) network interface. You may purchase additional network ports for the E5300, if desired. Upon installation, the network interfaces are configured as described in the following sections. It is a recommended best practice to use the ports as prescribed and not to deviate from the default settings. Figure 1 is specific to the E5300, while Figure 2 is specific to the E5500 and E5700.

E5000 default configuration

When deploying the E5000, the E5000 Configuration Wizard (ECW) will execute. This wizard will provide a default configuration for all network interfaces. It configures the Client/MAPI and Replication networks as follows (see figures 1 and 2):

Client/MAPI network

- Server 1/Port 1 and Server 2/Port 2 network ports connect to this network (see figure 1 for port legend)
- This network is labeled as the MAPI network on each server.
- The default setting is DHCP, but you can use the E5000 Configuration Wizard to configure static addressing (recommended).

Replication network

- On the E5300: Server 1/Port 2 and Server 2/Port 1 network ports connect to the replication network.
- On the E5500/E5700: Server 1 Mezzanine NIC 1, Port 1; Server 2 Mezzanine NIC 1, Port 2
- The E5000 Configuration Wizard automatically sets these static addresses by default (but also allows you to change them): Server 1: 10.0.0.1, Server 2: 10.0.0.2

Note The EM/iLO port and the Management networks are not configured via the E5000 Configuration Wizard. These must be configured manually to match the administrator's needs.


E5300

Figure 1. E5300 network configuration

[Figure callouts: Server 1, Port 1 and Server 2, Port 2: "MAPI" network. Servers 1 & 2: "Replication" network. Access to EM & iLO (Servers 1 & 2).]

MAPI network (BLUE): The production network used in your data center for client-server and server-server network traffic.

Replication network (ORANGE): A private network used only for Database Availability Group (DAG) nodes to share cluster heartbeat and replication traffic. These ports should either be connected to each other with a cross-over network cable or connected on a segmented network using a VLAN or separate hardware connection. This creates a self-contained LAN to enable replication between the two E5000 server nodes. It is of critical importance that there are no routes connecting the MAPI and replication networks. The default IP addressing for these ports is 10.0.0.1 for server 1 and 10.0.0.2 for server 2. Be sure "File and Printer Sharing on Microsoft Networks" and "Register the connection in DNS" are unchecked.

EM/iLO port (PURPLE): This port allows access to the integrated iLO management cards and the Enclosure Manager. This port can be connected to a dedicated management network or to the production network used in your data center. The IP addresses reachable through this port should be accessible by network administrators for management of the servers (iLO) or the enclosure (EM).
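Given how critical this isolation is, the no-overlap requirement can be sanity-checked during address planning. The following Python sketch uses only the standard library; the subnets shown are illustrative placeholders, not values prescribed by this paper:

```python
# A minimal sketch (illustrative subnets, not prescribed values) that checks
# the MAPI and replication networks cannot share addresses, by confirming
# their ranges do not overlap.
from ipaddress import ip_address, ip_network

mapi_net = ip_network("10.10.0.0/16")        # example production (MAPI) subnet
replication_net = ip_network("10.0.0.0/24")  # example private replication subnet

# overlaps() is True if the two subnets share any addresses; for proper
# isolation of DAG replication traffic they must be disjoint.
if mapi_net.overlaps(replication_net):
    raise ValueError("MAPI and replication networks overlap; re-plan addressing")

# The factory-default replication addresses fall inside the replication subnet.
for addr in ("10.0.0.1", "10.0.0.2"):
    assert ip_address(addr) in replication_net
```

A check like this belongs in the design and planning stage, before any E5000 system is powered on.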


E5500/E5700

Figure 2. E5500 and E5700 network configuration

[Figure callouts: Server 1, Port 1 and Server 2, Port 2: "MAPI" network. Server 1, Port 2 and Server 2, Port 1: "Alternate" network. Servers 1 & 2: "Replication" network (NC382m, port 1). Servers 1 & 2: "Management" network (NC382m, port 2). Access to EM & iLO (Servers 1 & 2).]

MAPI network (BLUE): The production network used in your data center for client-server and server-server network traffic.

Alternate network (GREEN): These ports provide an optional set of network interfaces which can be connected to a backup network to perform system backups.

Note NIC teaming is not recommended or supported with these interfaces.

Replication network (ORANGE): A private network used only for DAG nodes to share cluster heartbeat and replication traffic. These ports should either be connected to each other with a cross-over network cable or connected on a segmented network using a VLAN or separate hardware connection. This creates a self-contained LAN to enable replication between the two E5000 server nodes. It is of critical importance that there are no routes connecting the MAPI and replication networks. The default IP addressing for these ports is 10.0.0.1 for server 1 and 10.0.0.2 for server 2. Be sure "File and Printer Sharing on Microsoft Networks" and "Register the connection in DNS" are unchecked.

EM/iLO port (PURPLE): This port allows access to the integrated iLO management cards and the Enclosure Manager. This port can be connected to a dedicated management network or to the production network used in your data center. The IP addresses reachable through this port should be accessible by network administrators for management of the servers (iLO) or the enclosure (EM).


Management network (PURPLE): These ports are designed to allow a secondary management interface (such as HP Insight Control server deployment) for each server. It is best practice to connect these ports to a management network which is different from the MAPI network.

Additional networking notes

While the networking interfaces contained in the E5000 Messaging System are flexible, there are a few caveats to keep in mind:

- Network teaming is not supported within the E5000 Messaging System.
- When deploying multiple E5000s, it is important to properly address the network interfaces prior to powering on additional E5000 systems. This avoids duplicate IP addresses on the same network, since the E5000 uses statically assigned IP addresses on the replication network.
- IP addresses should be selected during the design and planning stage, prior to deploying your new E5000 Messaging System into your environment. All servers and interfaces that are planned to be deployed should be assigned IP addresses within your planning documentation.
- The alternate network interfaces are not intended as "redundant" network interfaces; they are intended to provide a connection to a separate network used to perform system backups.

Network example

An example helps illustrate the information in the previous sections. There is no load balancer configured in this example; see the load balancing section for additional information regarding the use and configuration of a load balancer.

Company Y specifics:

- 5,000 users
- Domain: ycompany.com
- Purchased E5700
- No existing Exchange environment
- 10.10.x.x/16 network IP range
- 10.10.100.x to 10.10.200.x networks are assigned to production client workstations
- 10.10.10.x: segmented network dedicated to backup traffic (no client access)
- 10.10.30.x: segmented management network (administrator access only)

Note The above networks can be deployed on the same physical network switches through the use of VLANs which is beyond the scope of this white paper.


Table 1 details the configuration of each server within the E5700. These configurations are based on the company network settings in the bulleted list above.

Table 1. E5700 server configuration for Company Y

Setting                                          Server 1        Server 2
Server name                                      Exchange01      Exchange02
Client (MAPI) network address                    10.10.100.21    10.10.100.22
Alternate network address                        10.10.10.21     10.10.10.22
Replication network address                      10.0.0.21       10.0.0.22
Enclosure Manager address (only one interface)   10.10.30.10     10.10.30.10
iLO address (connected to management network)    10.10.30.21     10.10.30.22
Management port address                          10.10.30.31     10.10.30.32

Load balancing definition

Load balancing is a technique used to distribute workload evenly across two or more computers and to provide resilience in case of system failure. Within Exchange 2010 and the E5000 Messaging System, load balancing distributes the client workload among the available nodes and provides a level of redundancy in the event of a service failure.

Load balancing is a best practice, and it is highly recommended to design and implement a load balancing solution as part of your overall Exchange implementation. It distributes the client workload and can provide your organization with high availability for client connections to Exchange. The E5000 does NOT ship with a load balancer included, but HP has several recommendations to consider when purchasing and implementing a solution.

Note HP strongly recommends purchasing a load balancing solution to use in conjunction with the E5000 Messaging System.
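To make the distribution behavior concrete, the sketch below shows the simplest balancing policy, round-robin, in Python. The server names are hypothetical, and a real load balancer layers health checks, affinity, and other policies on top of this:

```python
from itertools import cycle

# Hypothetical CAS array members sitting behind one virtual IP.
cas_servers = ["exchange01", "exchange02"]

# Round-robin: hand each incoming connection to the next server in turn.
next_server = cycle(cas_servers)

def route(connection_id):
    """Return the CAS server chosen for this client connection."""
    return next(next_server)

# Six client connections split evenly: three per server.
assignments = [route(i) for i in range(6)]
counts = {s: assignments.count(s) for s in cas_servers}
print(counts)  # each server receives 3 of the 6 connections
```

Round-robin is only one policy; the hardware load balancers discussed later in this paper also support techniques such as least-connections and affinity-based routing.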


Figure 3 depicts one E5000 installed (shown as Server 1 and Server 2). Each of the E5000's servers has a unique IP address on the MAPI network. Both servers have the Client Access, Hub Transport, and Mailbox server roles installed. Each CAS server is configured into a CAS array by the E5000 Messaging System Exchange Deployment Tool (EDT).

Figure 3. E5000 typical network configuration

[Figure elements: Server 1 and Server 2, each running the Mailbox, Hub Transport, and Client Access server roles. The two Client Access servers form a CAS array behind a load balancer, reached as mail.yourcompany.com by Outlook, Outlook Web App, and smartphone clients.]

In a load balanced client traffic scenario, the specific IP address of each E5000 node is not used for direct client connections. The host name of the CAS array, "mail.yourcompany.com", is used by all clients. The Fully Qualified Domain Name (FQDN) is associated in the Domain Name System (DNS) with a virtual IP (VIP) that is created on a load balancer. The RpcClientAccessServer attribute, which contains the CAS array name (mail.yourcompany.com), is entered into each Exchange database on the server. When a client initiates a connection to its Exchange mailbox, an autodiscover process is used to identify the mailbox server that holds the active copy of the database containing the user's mailbox and the CAS server that the client needs to connect to. The client then uses DNS to look up the IP address for the CAS array and makes the connection to the load balancer virtual IP. From this point the load balancer will determine, based on a number of criteria, which CAS server within the configured CAS array to route the request to.

Note All devices depicted exist within the same Active Directory site.
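The indirection in this flow can be sketched as follows. The stub DNS table and all addresses are illustrative only; the point is that clients resolve the CAS array FQDN to the load balancer VIP and never target an individual server's MAPI address:

```python
# Stub DNS zone: the CAS array FQDN maps to the load balancer VIP,
# not to either server's own MAPI address (all addresses hypothetical).
DNS = {"mail.yourcompany.com": "10.10.100.50"}
SERVER_ADDRESSES = {"exchange01": "10.10.100.21", "exchange02": "10.10.100.22"}

def connect(fqdn):
    """Resolve the CAS array name and connect to whatever address DNS returns."""
    return DNS[fqdn]

target = connect("mail.yourcompany.com")
# The client only ever reaches the VIP; the load balancer, not the client,
# decides which CAS server ultimately services the request.
assert target not in SERVER_ADDRESSES.values()
print(target)  # 10.10.100.50
```

This is why changing the set of CAS servers behind the VIP is transparent to clients: the DNS record and the RpcClientAccessServer attribute continue to point at the array name.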

Transport layer configuration

Send connector configuration

The E5000 Messaging System is configured, by default, with the Hub Transport, Client Access, and Mailbox roles. The Edge Transport role cannot be installed on the same server as these other roles; thus, it is not installed on the E5000 Messaging System.

Send connectors can be configured on the Hub Transport server by creating a send connector that routes outgoing e-mail to the Internet within the Hub Transport role and modifying the default receive connector to accept e-mail from the Internet. However, when the Hub Transport server is configured as a transport connector, it must be exposed to the Internet. This topology is NOT recommended because it increases the security risks by exposing all roles installed on the E5000 Messaging System to the Internet. Additional information regarding configuring the Hub Transport as a send and receive connector is found at: http://technet.microsoft.com/en-us/library/bb738138.aspx

The recommended best practice is to deploy an additional server with Internet connectivity, running either Exchange 2010's Edge Transport server or a third party SMTP connector, within a perimeter network or hosted at an ISP. The E5000 does not include either of these components; they must be purchased and configured separately.

External connector requirement

Since the E5000 Messaging System does not include an Edge Transport server, it is a recommended best practice that the organization deploys a perimeter network-based SMTP gateway. A Microsoft Exchange 2010 Edge Transport Server could fill this role.

Note FOR DEPLOYMENTS IN AN EXCHANGE 2003 ENVIRONMENT: You must create specific Send and Receive connectors on the Edge Transport server and update the configuration of the Exchange 2003 bridgehead servers. This applies if you're deploying an Edge Transport server before introducing Exchange 2010 to your Exchange 2003 organization.

Please refer to "Configure Internet Mail Flow Through Exchange Hosted Services or an External SMTP Gateway" at: http://technet.microsoft.com/en-us/library/bb738161.aspx


Routing Group Connector for Exchange 2003

The first Routing Group Connector (RGC) between Exchange 2003 and 2010 is created when the E5000 installs the Hub Transport role. This is done automatically by the E5000 Exchange Deployment Tool (EDT) during the first server's deployment. Prior to deploying an E5000 Messaging System into an Exchange 2003 environment, there are four actions that must be taken for co-existence:

- Specify an Exchange 2003 bridgehead server for the first RGC that is created during setup of Exchange 2010.
- Verify that every Exchange 2003 routing group has at least one connector to another routing group before you introduce the first Exchange 2010 server.
- Suppress minor link state updates for each server in the Exchange 2003 organization. For details regarding suppressing minor link state updates, refer to: http://technet.microsoft.com/en-us/library/aa996728.aspx
- Make sure that the Exchange 2010 RGC isn't the only communication path between Exchange 2003 routing groups to ensure that major link state updates continue to occur. More information can be found at: http://technet.microsoft.com/en-us/library/dd638103.aspx

Transport layer load balancing

Load balancing the transport servers within Exchange 2010 is more concerned with high availability than with equally distributing the load to each Hub Transport server. By default, connections to Hub Transport servers are automatically load balanced if more than one Hub Transport server is deployed in an Active Directory site. If one Hub Transport server is unavailable, the operational Hub Transport servers continue to accept connections. If all Hub Transport servers in an Active Directory site are unavailable, messages are queued until a Hub Transport server becomes available or the messages expire. Load balancing can be used to provide high availability in the following scenarios:

- Load balancing of inbound SMTP connections for POP and IMAP client connections to the default Receive connector named "Client <Server Name>" that is created only on Hub Transport servers.
- Load balancing of inbound SMTP connections for applications that submit e-mail to the Exchange organization.

Load balancing of outbound connections to remote domains is achieved by specifying more than one Hub Transport server in the same Active Directory site as a source server for the corresponding Send connector. Load balancing doesn't occur when the source servers for a Send connector are located in different Active Directory sites. Under normal operating conditions, one of three scenarios is possible:

- The Hub Transport (HT) server receives e-mail directly from external mail servers. It is a best practice to configure a load balancer with a virtual IP. Route all mail to this IP address and the load balancer forwards mail to the Hub Transport servers, based on load balancing policies. This configuration equally divides the incoming mail among HT servers and provides high availability.
- The Edge Transport server is installed in a DMZ. In this configuration, the Edge Transport server acts as a message relay to the internal Hub Transport server. More information is provided at this link: http://technet.microsoft.com/en-us/library/bb267003(EXCHG.80).aspx
- A third party SMTP relay server is configured in the DMZ or at an Internet Service Provider. It is a best practice to configure a load balancer with a virtual IP. Route all mail to this IP address and the load balancer forwards mail to the Hub Transport servers, based on load balancing policies. This configuration equally divides the incoming mail among HT servers and provides high availability.
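The availability behavior described earlier (operational Hub Transport servers keep accepting connections when one fails, and mail queues when none are available) can be sketched as follows. The server names and availability flags are hypothetical stand-ins for real health state:

```python
# Hypothetical availability map for two Hub Transport servers.
hub_transports = {"exchange01": False, "exchange02": True}  # exchange01 is down
queue = []

def submit(message):
    """Deliver to the first available Hub Transport; queue if none are up."""
    for server, available in hub_transports.items():
        if available:
            return f"delivered via {server}"
    queue.append(message)  # no HT available: hold until one returns or expiry
    return "queued"

result = submit("msg-1")
print(result)  # delivered via exchange02
```

A real load balancer in front of the Hub Transport servers performs the same skip-the-failed-node behavior at the connection level, while Exchange's own transport queues cover the case where no server is reachable at all.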


E5000 in existing Exchange environments

Overview

Most customers who purchase the E5000 will integrate the system into an existing Exchange environment. From a networking perspective, there are a few tasks that should be completed to ensure interoperability.

Exchange 2007 and Exchange 2003

Exchange 2010 can be installed into an existing Exchange 2003 or Exchange 2007 organization. After completing the installation of your E5000 Messaging System, your organization will be running in a coexistence mode. It is a recommended best practice to migrate your resources to Exchange 2010 in as short a time as possible. This will enable decommissioning of the Exchange 2003/2007 servers.

The Active Directory forest and all domains must be configured to Windows 2003 functional level or greater to install Exchange 2010. Also, you must properly prepare your AD forest for Exchange 2010 by running the command "setup.com /PrepareAD [/OrganizationName: <organization name>]" from the I: drive of the E5000 Messaging System. This command must be run in the same domain and in the same Active Directory site as the forest Schema Master.

There are several additional steps necessary to configure interoperability with older versions of Exchange. The upgrade to Exchange 2010 is complex and out of the scope of this document. It is critically important that the administrator understands the significant changes to the methods clients use to gain access to the Exchange environment once the installation is complete. Without proper planning, some services may fail to function as expected.

Please refer to the following link for details on upgrading the Client Access role from Exchange 2007: http://technet.microsoft.com/en-us/library/dd351133.aspx

For more details regarding Exchange 2003 and Exchange 2010 coexistence, please refer to: http://technet.microsoft.com/en-us/library/dd638130.aspx

For more details regarding Exchange 2007 and Exchange 2010 coexistence, please refer to: http://technet.microsoft.com/en-us/library/bb124350.aspx

Understanding network traffic in Exchange 2010

Exchange 2010 CAS arrays will experience three types of traffic: external clients, internal clients, and traffic proxied from other CAS servers or arrays. Each of these traffic types uses different protocols and affects the load balancing options used. External and internal client traffic is straightforward; however, it is important to point out that external clients connect with HTTPS, while internal clients can use HTTPS or RPC. In Exchange Server 2007 and Exchange 2010, the Client Access server communicates with an Exchange Mailbox server over RPC.

Understanding proxy traffic

Proxy traffic occurs when one CAS server sends traffic to another CAS server. If your company doesn't have multiple Active Directory sites in the organization, you don't have to configure Exchange 2010 for proxying. However, you might want to configure load balancing of URLs. An Exchange 2010 Client Access Server is able to proxy requests in the following situations:

Between Exchange 2010 Client Access Servers

Proxying connection requests between two Exchange 2010 Client Access servers enables organizations that have multiple Active Directory sites to designate one Client Access server as an Internet-facing server and have that server proxy requests to Client Access servers in sites that have no Internet presence. The Internet-facing Client Access server proxies each request to the Client Access server closest to the user's mailbox.

Between Exchange 2010 CAS and Exchange 2003/07

Proxying connection requests between an Exchange 2010 Client Access server and an Exchange 2003/Exchange 2007 Mailbox server within one Active Directory site enables Exchange 2010 and Exchange 2003/Exchange 2007 to coexist in the same organization.


In an organization that has multiple Active Directory sites and multiple Client Access servers in each site, you can use network load balancing to divide traffic proxied between the Client Access servers in each site and for users directly accessing those servers. Simply deploying a load balancer isn't enough to ensure traffic is balanced effectively; you must also perform some additional configuration of the InternalURL and ExternalURL properties, which is detailed here: http://technet.microsoft.com/en-us/library/bb310763.aspx#proxynlb

It is recommended that each AD site containing multiple CAS servers also contain at least one load balancer, and it is a best practice to include only CAS servers from the same AD site in a load balancing array. Each CAS server is then added to the CAS array. You can deploy hardware load balancers in an Active Directory site which has Internet connectivity or in an Active Directory site without Internet connectivity. This provides load distribution and high availability between CAS servers.

A more detailed description of proxy traffic within Exchange 2010 is available at the following link: http://technet.microsoft.com/en-us/library/bb310763.aspx.

Defining a CAS array

Each Active Directory site can only have a single CAS array (HP's E5000 Exchange Deployment Tool (EDT) configures the CAS array during the initial server deployment), but there can be multiple CAS servers per CAS array. Mailbox databases need to be configured to use the CAS array as the client connection endpoint instead of an individual CAS server. The Client Access Server role is one of five roles for Microsoft Exchange Server 2010. In each E5000 Messaging System the EDT will configure both nodes which have the CAS role installed into a single CAS array by default. The CAS role is responsible for all client connections, regardless of method. Microsoft has detailed information on the CAS role at: http://technet.microsoft.com/en-us/library/bb124915(EXCHG.140).aspx

Secure Sockets Layer (SSL) offloading

SSL offloading is a term commonly used when discussing load balancing. This can be used with SQL, WWW, and Exchange load balancing; it is not used exclusively with Exchange implementations. It can be beneficial when deploying the E5000 Messaging System. SSL provides a secure connection from the client to the destination. SSL ensures trust and privacy are maintained from end to end. In order to use SSL within Exchange 2010, you must have a valid certificate installed both on the server (or load balancer) and on the client. Exchange 2010 uses digital certificates for authentication and encryption in the following situations:

- SMTP traffic (using Transport Layer Security) between Transport servers
- HTTP traffic (using Secure Sockets Layer) for client access methods such as Outlook Web App, Outlook Anywhere, Exchange ActiveSync, and Exchange Web Services
- HTTP traffic for federation

Certificate installation types

There are three configuration methods for using certificates with load balancers for Exchange:

1. Certificates are installed on the Exchange Server and trusted by the client machine, with no load balancer configured. This is the default configuration.

2. Certificates are installed on the Exchange Server and trusted by the client machine, with a load balancer configured. The load balancer acts as a layer 4 switch and passes the traffic through to the Exchange Server to be decrypted. No packet inspection occurs on the load balancer.

3. Certificates are installed on the load balancer and trusted by the client machine. The load balancer inspects and decrypts the network packets at layer 7, and unencrypted traffic is then passed from the load balancer to the Exchange Server. Generally speaking, the link between the load balancer and the Exchange Server is a segmented, secure network, whereas the link between the client and the load balancer is less secure. This is the recommended best practice.


For more information regarding Exchange 2010 and certificate usage, refer to the following link: http://technet.microsoft.com/en-us/library/gg502577.aspx

Load balancer considerations

The following categories must be considered when selecting a load balancing solution.

- Performance: How many requests per second can be handled?
- Manageability: Is it simple to configure and deploy?
- Failover automation and detection: Does the load balancer detect when the CAS server or service has failed?
- Affinity: Does it support CAS affinity? Which types of affinity?

Types of load balancers

There are several different types of load balancers available on the market. Each of them can be implemented to address both load balancing and high availability for Exchange 2010 running on the E5000. Windows Network Load Balancing is not recommended in this configuration.

Native hardware load balancer

This type of load balancer is purpose-built hardware produced by companies that specialize in load balancing products. These devices can be sized for a specific workload and, where needed, include an application-specific integrated circuit (ASIC) that offloads processor-intensive SSL encryption tasks. Current SSL encryption keys are typically 2048 bits long, which can place a heavy load on any load balancing system. Two hardware load balancing products were tested for use with the E5000.

- Coyote Point E250GX: http://www.coyotepoint.com/products/e250.php
- KEMP Technology LM-2600: http://www.kemptechnologies.com/us/server-load-balancing-appliances/loadmaster2600/overview.html

Both of these devices have been tested in HP's E5000 lab and have been effective in providing the high availability and load balancing features required for production Exchange implementations.

Note
HP recommends implementing a hardware load balancer as a best practice when deploying the E5000 Messaging System.


Figure 4 shows an E5000 installed in a high availability, load balanced environment. Two load balancers are deployed in a high availability configuration. Within DNS, all CAS server traffic is directed to the Virtual IP (VIP) 10.0.0.51. The load balancers will distribute the client load to the appropriate E5000 server in the backend. The IP addressing is based on the table used earlier in the document.

Figure 4. Highly available hardware load balancer configuration

[Diagram: Two ProCurve 4204vl-48GS switches, connected by ISL links, carry upstream LAN and WAN traffic (with access for SMTP and client traffic) to a pair of load balancers deployed in a failover configuration (management port IPs 10.10.30.51 and 10.10.30.52; failover pair IP 10.0.0.50; CAS array VIP 10.0.0.20). E5000 Server 1 (IP 10.10.100.21) and E5000 Server 2 (IP 10.10.100.22) are both members of the CAS array, and the CAS array A record in DNS points to the VIP on the load-balanced pair. Exchange MAPI clients connect through the load balancers.]

Server hardware (HP server with third party load balancing application)

This type of load balancer allows the implementation of solutions similar in functionality to the native hardware load balancer in the previous section. However, this solution installs onto an HP ProLiant server to provide the hardware support directly from HP. The third party solution developer maintains support for the load balancing application.


Virtual server load balancing

With the proliferation of virtualization, it is becoming more common for vendors to offer load balancers packaged as virtual appliances. Typically, the software installs as a guest operating system on a dedicated virtual machine. Most virtual load balancers (VLBs) offer full L4 (simple pass-through with no packet inspection) and L7 (full packet inspection) content switching, SSL offloading, server and service health monitoring, and compression. In short, they are as fully functional as their hardware counterparts. HP has not tested these configurations with the E5000 system. A sample product offering can be found at:

KEMP Technologies VLM-Exchange: http://www.kemptechnologies.com/us/server-load-balancingappliances/virtual-loadmaster-exchange/overview.html

Windows Network Load Balancing

Neither HP nor Microsoft supports Windows NLB for the E5000. This is not a limitation of the E5000 configuration; rather, it is a limitation of running NLB on the same servers that run Windows Failover Clustering, which is the mechanism used by the server nodes in a DAG. Additional reasons why NLB is not supported on the E5000:

- It does not provide service awareness. It cannot detect that the Exchange Web Services (EWS) service has failed on a CAS array member while Outlook Web Access (OWA) is still functioning.
- It only provides IP-based affinity, so it has no persistence capabilities.
- It does not scale well (Microsoft does not recommend more than 8 nodes).
- It does not support Windows Failover Clustering. You cannot co-locate the CAS and Mailbox roles in a DAG and expect Windows Failover Clustering and Windows NLB to work on the same server.
- If you remove a single node from your CAS array, all clients must reconnect to the NLB array and establish new server affinity.
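To illustrate the service awareness that Windows NLB lacks, a service-aware load balancer effectively keeps a per-service health table and routes each protocol only to backends where that specific service passed its probe. The following is a minimal sketch of that selection logic; the server names and service labels are hypothetical:

```python
def eligible_backends(health):
    """Given per-server probe results, return which servers may receive
    traffic for each service. A server whose EWS probe failed can still
    serve OWA, a distinction Windows NLB cannot express."""
    services = {svc for checks in health.values() for svc in checks}
    table = {}
    for svc in services:
        table[svc] = sorted(server for server, checks in health.items()
                            if checks.get(svc, False))
    return table

# Example: EWS has failed on server2, but OWA is still healthy there.
health = {
    "server1": {"owa": True, "ews": True},
    "server2": {"owa": True, "ews": False},
}
routes = eligible_backends(health)
# OWA traffic may go to both servers; EWS traffic only to server1.
```

A real load balancer refreshes the health table continuously by probing each service's URL or port, but the routing decision reduces to this per-service lookup.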

Hardware load balancing

HP recommends the use of a hardware load balancer for Exchange deployments with the E5000 Messaging System. HP has tested load balancers from two vendors which are described below. Refer to the following link for a complete list of Microsoft reviewed hardware load balancing devices: http://technet.microsoft.com/en-us/exchange/gg176682.aspx


Coyote Point

One hardware load balancer tested with the E5000 series is the Coyote Point Equalizer Model E250GX. Here is a sample of the service configuration screen:

Figure 5. Coyote Point service configuration screen

As shown in figure 5, the Coyote Point load balancer is configured to support load balancing for Exchange 2010 on a single E5000 with two client interfaces, 192.168.100.21 and 192.168.100.22. Also configured are the RPC MAPI ports 135, 59532, and 59533. Each service requires the creation of a separate rule, thus maximizing flexibility. Other services can also be configured, such as Exchange 2010 EWS or Exchange 2010 Outlook Anywhere. Notice that the device is service-aware, not only server-aware, meaning that it will automatically redirect traffic to a server where a specific service is running. More details can be found at: http://www.coyotepoint.com/pdfs/10/Microsoft/MSExchange2010.pdf


KEMP Technologies

Another hardware load balancer that the E5000 series solutions have been tested with is the LoadMaster device from KEMP Technologies. Below is a sample of the service configuration screen:

Figure 6. KEMP Technologies service configuration screen

As shown in figure 6, the KEMP Technologies load balancer has been configured based on the previous example, with a single E5000 containing two servers with IP addresses 192.168.50.21 and 192.168.50.22. The RPC MAPI ports 135, 59532, and 59533 were configured; each service requires the creation of a separate rule, thus maximizing flexibility. Load balancing has been configured to assign connections using the "least connection" method. The KEMP device permits easy installation of certificates and configuration of specific service rules. Other services can also be configured, such as Exchange 2010 Exchange Web Services, Outlook Web App, or Exchange 2010 Outlook Anywhere. The load balancer is service-aware, not only server-aware, meaning that it will automatically redirect traffic to a server where a specific service is running. More details can be found at: http://www.kemptechnologies.com/us/loadbalancingresource/ms-exchange2010.html Each load balancing solution should be configured as recommended by its vendor.
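The "least connection" method mentioned above simply directs each new client to whichever backend currently has the fewest active connections. A minimal sketch, using the server addresses from the example above:

```python
def least_connections(active):
    """Pick the backend with the fewest active connections.
    Ties are broken by address order so the choice is deterministic."""
    return min(sorted(active), key=lambda server: active[server])

# Server .22 currently has fewer connections, so it gets the next one.
active = {"192.168.50.21": 120, "192.168.50.22": 97}
target = least_connections(active)
active[target] += 1
```

Least-connections tends to outperform simple round-robin for Exchange because client sessions are long-lived and unevenly sized, so connection counts are a better proxy for actual load than request counts.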


Understanding affinity

Load balancers support client-to-CAS affinity: a persistent association between a particular client and a particular Client Access server. Affinity ensures that requests from a client always go to the same CAS server, which is particularly useful when there are a large number of clients with fairly static IP addresses. The client can be a laptop running Outlook 2010, a mobile device connecting with ActiveSync, a desktop connecting with Outlook Web Access, or any number of other client applications. Load balancing solutions, like those from Coyote Point or KEMP Technologies, provide support for affinity.

Cookies and HTTP headers

Client cookies are a method of uniquely identifying the client to the CAS server. These cookies are a small dataset within the HTTP header, and HTTP headers are the most reliable way to identify a client and associate it with a specific CAS server. Cookies and headers are created by the client or server as part of the communications negotiation. When you use cookies and HTTP headers as an affinity option, be aware of the following:

- Your load balancer must support this type of affinity. Currently, only hardware load balancers support it.
- This affinity only works for protocols that pass traffic using HTTP.
- There must be an existing cookie or header that remains constant during the client session and is unique to each specific client, or small set of clients, in the protocol.
- The load balancer must be able to read and interpret the HTTP data stream. If you are using SSL, this means the load balancer must decrypt the traffic to read its contents (L7 inspection), which increases the load on the load balancer. This decryption is not possible in some circumstances, such as when you use client certificate authentication for the SSL session between the client and the CAS server. In that scenario, it is advisable to use SSL offloading so the load balancer can decrypt the traffic.
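Conceptually, cookie-based affinity is a lookup from the cookie value to a backend: the first request from a session is balanced normally, and the chosen backend is then remembered for the life of the session. A simplified sketch of that mechanism; the cookie name and backend names are hypothetical:

```python
import hashlib

BACKENDS = ["cas1", "cas2"]
affinity = {}  # session cookie value -> pinned backend

def route(session_cookie):
    """Return the backend for this session, pinning it on first use."""
    if session_cookie not in affinity:
        # First request from this session: pick a backend by hashing
        # the cookie value, then remember the choice.
        digest = hashlib.sha256(session_cookie.encode()).digest()
        affinity[session_cookie] = BACKENDS[digest[0] % len(BACKENDS)]
    return affinity[session_cookie]

first = route("OutlookSession=abc123")
again = route("OutlookSession=abc123")  # same backend every time
```

Because the key is the cookie rather than the source IP, this survives clients whose addresses change mid-session (for example, mobile devices roaming between networks), which pure IP affinity cannot handle.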

IP port considerations for load balancing

Most Exchange 2010 services are built on top of HTTP and use port 443 for Secure Sockets Layer (SSL) access and port 80 for non-SSL access (for example, Outlook Web App, Exchange ActiveSync, Outlook Anywhere, and Exchange Web Services). POP3 and IMAP4 use ports 110 and 143 respectively when not encrypted with SSL, and ports 995 and 993 respectively when encrypted with SSL. Other Exchange services, such as the RPC Client Access service (used by MAPI clients) and the Exchange Address Book service, are RPC services. When an Outlook client connects directly to the CAS server using these protocols, instead of using Outlook Anywhere, the endpoint TCP ports for these services are allocated by the RPC endpoint manager. Allocation occurs when the services are started. This requires a large range of destination ports to be configured for load balancing without the ability to specifically target traffic for these services based on port number.

Note It is a Microsoft best practice to configure static ports for RPC Client Access and Exchange Address Book. The E5000 follows this guidance and ports 59532 and 59533 are configured as static for RPC Client Access and the Address Book respectively.
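The EDT configures these static ports automatically. For reference only, the mechanism commonly documented by Microsoft for Exchange 2010 SP1 uses two registry values, sketched below; the exact keys should be verified against current Microsoft guidance before changing any production system:

```powershell
# RPC Client Access static port (DWORD value "TCP/IP Port").
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem" `
    -Name "TCP/IP Port" -Value 59532 -Type DWord
# Address Book service static port (string value "RpcTcpPort").
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeAB\Parameters" `
    -Name "RpcTcpPort" -Value "59533" -Type String
```

With the ports static, the load balancer rules only need to cover 135 (the RPC endpoint mapper) plus 59532 and 59533, instead of the entire dynamic RPC port range.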


For more information

- HP E5000 line of products: http://h71028.www7.hp.com/enterprise/us/en/partners/microsoft-messaging-system.html
- HP's E5000 Admin Guide and Quick Start Guide: http://www.hp.com/go/e5000 > HP Support and Drivers > Select any model > Manuals
- Microsoft Exchange 2010 Approved Load Balancer deployment list: http://technet.microsoft.com/en-us/exchange/gg176682.aspx
- Microsoft Legacy Namespace Discussion: http://technet.microsoft.com/en-us/library/ee332348.aspx
- Coyote Point: http://www.coyotepoint.com/pdfs/10/Microsoft/MSExchange2010.pdf
- KEMP Technologies: http://www.kemptechnologies.com/us/loadbalancingresource/ms-exchange-2010.html

To help us improve our documents, please provide feedback at http://h71019.www7.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html.

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. 4AA3-3433ENW, Created March 2011; Updated December 2011, Rev. 1
