Load Balancing Exchange 2013

Introducing Load Balancing in Exchange Server 2013

In this two-part article, we'll examine Layer 4 load balancing support in Exchange Server 2013 and explain how these changes, along with a move to HTTPS-only client access, simplify configuration and management of the new Exchange.

Introduction

In Exchange Server 2010 it's often said that the main skill administrators needed to learn was how to deploy and manage load balancing. The RPC Client Access Array, the mechanism used to distribute MAPI traffic between Client Access servers, was a common area of pain. Modern advances in Layer 7 load balancing did allow for SSL offload, per-service monitoring and intelligent affinity using cookies to mitigate some of Exchange 2010's shortcomings.

Putting all the above together, along with making use of scarcely documented procedures such as implementing Kerberos for newly deployed CAS arrays, often meant that some of the key benefits of Exchange 2010, such as the ability to perform maintenance during working hours, were never truly realised by all organizations. By the time the infrastructure and configuration to support load balancing were in place, managing switchovers at the load balancer level could be a daunting task.

For others, the true benefits of using load balancers never entered the equation, as Windows Network Load Balancing was used in many implementations as a cost-saving measure, against the Exchange product team's recommendation.

Just like any new software release, Exchange 2013 might initially seem more complicated, but all that new technology hidden away inside is actually about making your life easier. In this article we'll look at how load balancing becomes something you simply set and forget, rather than something you spend a lot of time researching before realising it's more complicated than you first thought.

Changes to Client Access

One of the biggest changes in Exchange 2013 is to the server roles. In Exchange 2007 and 2010, we've become familiar with the well-defined and discrete Client Access, Hub Transport, Mailbox and Unified Messaging roles that can be deployed together or apart.

In Exchange 2013, a new model is introduced where all the benefits of Microsoft's work to separate the roles are still present, but a Mailbox server always includes all the functions needed to route mail, render web content and receive voicemail.

If you're wondering where that leaves the Client Access role – well, that's there too; however, its primary role, even when combined with the Mailbox role, is to authenticate clients and route requests to the correct Mailbox server.

To put that a different way, when a user accesses a Client Access server, it no longer renders OWA for them; instead it acts as a reverse proxy. After authenticating the user and determining which Mailbox Database their mailbox is located on, the CAS proxies the request to the back-end Mailbox server that currently hosts that database. That Mailbox server renders the OWA content, not the Client Access server.

Another aspect we haven't mentioned is MAPI, or RPC Client Access, and there's a good reason for that. Clients no longer talk to Exchange using direct RPC; it's all over HTTPS. Outlook Anywhere (RPC over HTTPS) is now the protocol Outlook clients use to access their mailbox.

Improvements to Load Balancing

What we're getting at is the two key improvements in Exchange 2013 that make load balancing suddenly quite simple. HTTPS-only access from clients means we've only got one protocol to consider, and HTTP is a great choice because its failure states are well known and clients typically respond to them in a uniform way.

The second improvement is to the way that affinity now works, and this is where it's really clever. As we've mentioned, one half of the equation is where OWA is rendered: on the same server that hosts the user's mailbox database. Therefore, if a client hits different Client Access servers there's no performance degradation, as the session rendering OWA for that user is already up and running on the back end.

The other half of the equation is forms-based authentication. The Exchange team have solved this part by improving the way HTTP cookies are handled by Exchange. The authentication cookie is provided to the user after logon, encrypted using the Client Access server's SSL certificate. This enables a logged-in user to resume that session on a different Client Access server without re-authenticating, provided the servers share the same SSL certificate and are therefore able to decrypt the authentication cookie the client presents.

 

Figure 1: Example request sequence for an OWA client
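Because every Client Access server must be able to decrypt the same authentication cookie, it's worth confirming that the same SAN certificate is bound to IIS on each server. A minimal Exchange Management Shell sketch, assuming two servers with the hypothetical names EX01 and EX02:

  # Compare the certificate bound to IIS on each Client Access server;
  # the same thumbprint should appear on both (EX01/EX02 are example names).
  Get-ExchangeCertificate -Server EX01 | Where-Object { $_.Services -match "IIS" } | Format-List Thumbprint, Subject, CertificateDomains
  Get-ExchangeCertificate -Server EX02 | Where-Object { $_.Services -match "IIS" } | Format-List Thumbprint, Subject, CertificateDomains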

All of this means that, if desired, DNS round robin could in theory be used entirely in place of a hardware load balancer – in fact it's almost as effective as using Windows Network Load Balancing. It doesn't matter if the client reaches a different Client Access server on each request, as authentication is maintained and the session in progress will continue against the same back-end Mailbox server. If one of the servers fails, the HTTP client takes care of using DNS round robin to select the next server to access.
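As a rough sketch of what that looks like, the round robin effect comes from simply creating one A record per Client Access server under the same name. Assuming the DnsServer PowerShell module on Windows Server 2012 and example server addresses of 192.168.15.21 and 192.168.15.22 (both hypothetical), the records could be created like this:

  # Two A records with the same name; the DNS server rotates the order it returns them in.
  Add-DnsServerResourceRecordA -ZoneName "stevieg.org" -Name "mail" -IPv4Address 192.168.15.21
  Add-DnsServerResourceRecordA -ZoneName "stevieg.org" -Name "mail" -IPv4Address 192.168.15.22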

However, DNS round robin has its downsides too, namely that there's no real load balancing per se – different DNS servers will hand out the ordered list of IP addresses in a different rotation – and, just like Windows Network Load Balancing, there are no health checks against the Client Access server itself. So if OWA, EWS or any other web application has a fault, clients will still attempt to access that server and will see an error message until an administrator resolves the issue.

Downsides of DNS round robin aside, this does demonstrate that the intelligence required of the load balancer in Exchange 2010 – for example, the ability to pull apart the HTTPS connection and insert a cookie for affinity, all the expensive Layer 7 functionality – is no longer really required. Layer 4, which simply forwards the TCP traffic itself, is all we need.

In addition to just forwarding that traffic, what we also want our load balancer to do is understand when a server is in some sort of failure state and react accordingly. In its simplest scenario a Layer 4 load balancer can check a particular service, for example Outlook Web App, and send traffic to a server only while that service is up and running.

What it can't do for a single HTTPS endpoint, or Virtual IP (VIP), is look at multiple services and, if one service such as EWS is down, route traffic only to servers with working EWS whilst still distributing traffic evenly for the remaining services. Layer 4 load balancing simply doesn't have the ability to look at what the client is requesting from the server and make decisions accordingly.

So we have a choice: either assume that if our most important service, say OWA, is up then we'll class that Client Access server as available, or implement multiple VIPs tied to different names, one for each service. The latter is a little more complicated and effectively means we need an IP address assigned to each service, such as OWA or EWS.

Summary

In this article we've looked at the changes to client access in Exchange 2013 and how they affect the way Exchange services are accessed by end users. We've also looked at how under-the-hood improvements affect load balancing in a big way and really make low-cost load balancing a no-brainer with Exchange 2013.

Introduction

In the first part of this article, we looked at the improvements Exchange 2013 brings at the client access level: the changes to roles, back-end rendering of OWA on the new Mailbox role, and the removal of the need for session affinity to individual Client Access servers.

In the final part of this series, we'll look at the practical side of implementing these new features using a low-cost hardware load balancer, making use of Layer 4 load balancing features.

Implementing Simple Load Balancing

We’ll first look at the simplest configuration for load balancing in Exchange 2013, using a KEMP load balancer as an example to try out the configuration on.

In our example, we’ll be using a single HTTPS namespace for services like OWA, EWS, OAB and ActiveSync along with our AutoDiscover namespace.

These two names will share a Virtual IP (VIP) using the same SAN certificate. We'll move forward using Layer 4, performing a check against the OWA URL. On the back end we've just got two Client Access servers to load balance:


Figure 1: Single VIP Load Balancing

To add our single service, we’ll log into our blank load balancer, and choose to add a new service, specifying our single VIP (in this case 192.168.15.17) along with the HTTPS port, TCP port 443:


Figure 2: Creating the initial VIP

Next, under the heading Standard Options, we'll inform the load balancer that the service is Layer 4 by deselecting Force L7. We'll also make sure affinity is switched off by selecting None within Persistence Options, and leave Round Robin as the scheduling method used to distribute load:


Figure 3: Configuring the VIP to use Layer 4 Load Balancing

We’ll leave the SSL properties and advanced options at their defaults, then move on to adding our Client Access servers under the heading Real Servers.

First, we'll define what to monitor by ensuring that within Real Server Check Parameters, the HTTPS Protocol is defined and the URL is configured. We'll use /owa/auth/logon.aspx as the URL, then ensure we save that setting by choosing Set URL:


Figure 4: Configuring the OWA check URL
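Before relying on the health check, it can be useful to confirm that the check URL actually responds on each Client Access server. A quick sketch using PowerShell's Invoke-WebRequest, assuming the hypothetical server names EX01 and EX02 and that the servers are reached by their own FQDNs (so certificate validation is relaxed for the test only):

  # Lab-only: skip certificate validation, as the SAN certificate won't contain the server FQDNs.
  [System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
  foreach ($server in "EX01","EX02") {
      $r = Invoke-WebRequest -Uri "https://$($server).stevieg.org/owa/auth/logon.aspx" -UseBasicParsing
      Write-Host "$server OWA logon page returned HTTP $($r.StatusCode)"
  }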

Next choose Add New and then on the page that follows, enter the IP address of your first Client Access server in the field Real Server Address. Leave all other options as their defaults, and choose Add this real server to save the configuration. Repeat the process for each Client Access server.


Figure 5: Adding Client Access Servers

After adding both of our client access servers, choose View/Modify Services to list the VIPs. We should see our new VIP listed, along with each Client Access server under the Real Servers column. If all is well, the status should show in green as Up:


Figure 6: Completed Load Balancer configuration for a single VIP
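As a final sanity check from a client machine, we can confirm the VIP itself is accepting connections on TCP 443. A small sketch (Test-NetConnection assumes Windows 8.1/Server 2012 R2 or later):

  # Simple TCP connectivity check against the VIP on the HTTPS port.
  Test-NetConnection -ComputerName 192.168.15.17 -Port 443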

After ensuring that DNS records for our HTTPS namespaces, mail.stevieg.org and autodiscover.stevieg.org, are configured to point at our VIP of 192.168.15.17, we'll then configure our two Client Access servers to use these names by visiting the Exchange Admin Center and navigating to Servers. Within Servers, click on the Configure External Access Domain button highlighted below:


Figure 7: Configuring the HTTPS namespaces within the Exchange Admin Center

Next, we'll select both of our servers hosting the Client Access role and enter our primary HTTPS name, then choose Save to implement our configuration of the OWA, ECP, OAB, EWS and ActiveSync virtual directories.


Figure 8: Applying the single HTTPS namespace to all web services
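The Configure External Access Domain wizard is effectively setting the ExternalUrl on a group of virtual directories. If you prefer the Exchange Management Shell, a rough equivalent, assuming the namespace mail.stevieg.org and the example server names EX01 and EX02, would be:

  $namespace = "mail.stevieg.org"
  foreach ($server in "EX01","EX02") {
      # Point each web service's ExternalUrl at the shared HTTPS namespace.
      Get-OwaVirtualDirectory -Server $server | Set-OwaVirtualDirectory -ExternalUrl "https://$namespace/owa"
      Get-EcpVirtualDirectory -Server $server | Set-EcpVirtualDirectory -ExternalUrl "https://$namespace/ecp"
      Get-WebServicesVirtualDirectory -Server $server | Set-WebServicesVirtualDirectory -ExternalUrl "https://$namespace/EWS/Exchange.asmx"
      Get-ActiveSyncVirtualDirectory -Server $server | Set-ActiveSyncVirtualDirectory -ExternalUrl "https://$namespace/Microsoft-Server-ActiveSync"
      Get-OabVirtualDirectory -Server $server | Set-OabVirtualDirectory -ExternalUrl "https://$namespace/OAB"
  }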

Finally, we’ll configure Outlook Anywhere by returning to the Servers page and choosing each server one by one and selecting the Edit icon highlighted below:


Figure 9: Editing the individual Exchange Server properties

We'll then navigate to the Outlook Anywhere tab of the Server Properties window and enter our HTTPS namespace, mail.stevieg.org, for both the internal and external names:


Figure 10: Configuring the Outlook Anywhere internal and external URL
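The shell equivalent of this step uses Set-OutlookAnywhere, which in Exchange 2013 gains separate internal and external hostname settings. A sketch, again assuming the example server names EX01 and EX02:

  foreach ($server in "EX01","EX02") {
      # Use the shared namespace for both internal and external Outlook Anywhere connections,
      # requiring SSL in both cases since we're publishing HTTPS only.
      Get-OutlookAnywhere -Server $server | Set-OutlookAnywhere `
          -InternalHostname "mail.stevieg.org" -InternalClientsRequireSsl $true `
          -ExternalHostname "mail.stevieg.org" -ExternalClientsRequireSsl $true
  }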

After saving the configuration, along with performing an iisreset /noforce against these servers, we should have a complete configuration.
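If the servers are remote, the IIS recycle can be done in one go with PowerShell remoting; a small sketch assuming the same example server names and that remoting is enabled:

  # Gracefully restart IIS on each Client Access server so the new settings take effect.
  Invoke-Command -ComputerName EX01, EX02 -ScriptBlock { iisreset /noforce }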

Implementing Per-Service Load Balancing

With per-service load balancing we gain the benefits of simple Layer 4 load balancing and individual service level high availability, at the expense of using multiple IP addresses and names:


Figure 11: An overview of per-service configuration

To get started, we’ll need to use multiple names on our Subject Alternative Name (SAN) SSL certificate, with appropriate DNS entries configured, for example:

  • mail.stevieg.org for Outlook Web App
  • autodiscover.stevieg.org for our standard AutoDiscover namespace
  • ews.stevieg.org for Exchange Web Services
  • eas.stevieg.org for Exchange ActiveSync
  • outlook.stevieg.org for Outlook Anywhere
  • oab.stevieg.org for the Offline Address Book
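Those names all need to be present on the certificate before the per-service configuration will work. As a sketch, a SAN certificate request covering them could be generated from the Exchange Management Shell like this (the output path is just an example):

  # Generate a certificate signing request containing all of the per-service names.
  $request = New-ExchangeCertificate -GenerateRequest -PrivateKeyExportable $true `
      -SubjectName "cn=mail.stevieg.org" `
      -DomainName mail.stevieg.org,autodiscover.stevieg.org,ews.stevieg.org,eas.stevieg.org,outlook.stevieg.org,oab.stevieg.org
  # Save the request to submit to your certificate authority.
  Set-Content -Path "C:\certs\stevieg-san.req" -Value $request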

We’ll then build upon the configuration we’ve done to present Outlook Web App above to configure additional Virtual IPs. We’ll select our single VIP from the list and choose Modify within the KEMP load balancer:


Figure 12: Selecting the existing single VIP within the Load Balancer

We’ll then choose Duplicate VIP to create a copy of this service including duplicating the configuration and configured Client Access servers:


Figure 13: Duplicating the existing VIP configuration

After duplicating our first VIP, we'll give it an appropriate service name, then scroll down to the Real Servers section and change the Real Server Check Parameters, setting the URL to one appropriate for the service (in this case, for AutoDiscover, /autodiscover/autodiscover.xml), and then select Set URL:


Figure 14: Altering the per-service URL to test

We'll then repeat this process for each service as follows:

Service                 Check URL
OWA                     /owa/auth/logon.aspx
AutoDiscover            /AutoDiscover/AutoDiscover.xml
EWS                     /EWS/Exchange.asmx
EAS                     /Microsoft-Server-ActiveSync
Outlook Anywhere        /rpc/rpcproxy.dll
Offline Address Book    /OAB

Table 1

N.B. These parameters are subject to change as Microsoft is currently working with Load Balancer vendors to determine optimum configurations.
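Before pointing clients at the new VIPs, it's also worth confirming that each check URL in Table 1 responds on every Client Access server. A sketch using the same assumed server names, reporting the HTTP status for each path (several of these endpoints return 401 to an unauthenticated request, which still shows the virtual directory is alive):

  # Lab-only: relax certificate validation, since the SAN certificate won't include server FQDNs.
  [System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
  $paths = @(
      "/owa/auth/logon.aspx", "/Autodiscover/Autodiscover.xml", "/EWS/Exchange.asmx",
      "/Microsoft-Server-ActiveSync", "/rpc/rpcproxy.dll", "/OAB"
  )
  foreach ($server in "EX01","EX02") {
      foreach ($path in $paths) {
          try {
              $status = (Invoke-WebRequest -Uri "https://$($server).stevieg.org$path" -UseBasicParsing).StatusCode
          } catch {
              # 401/403 responses land here; the status code still tells us the service answered.
              $status = $_.Exception.Response.StatusCode.value__
          }
          Write-Host "$server $path : $status"
      }
  }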

After configuring each service, we should see a multitude of services listed:


Figure 15: Finished configuration for multiple per-service VIPs

To make use of this configuration we can again build upon the work we've done to present a single OWA URL and implement only the deviations from that configuration by altering each service's External URL. To do this, we'll visit the Exchange Admin Center, navigate to Servers, then choose Virtual Directories.

After selecting all virtual directories for a particular service, we’ll use the Bulk Edit options to update the External URL for each service one by one:


Figure 16: Bulk editing web services within the Exchange Admin Center

For each service that requires modification, we’ll update the External URL as follows:

Service                 External URL
EWS                     https://ews.stevieg.org/EWS/Exchange.asmx
EAS                     https://eas.stevieg.org/Microsoft-Server-ActiveSync
Offline Address Book    https://oab.stevieg.org/OAB

Table 2
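The same bulk edit can be scripted. A sketch of the equivalent Exchange Management Shell commands for the three services in Table 2, using the example server names again:

  foreach ($server in "EX01","EX02") {
      # Move each service onto its dedicated namespace; OWA and ECP keep mail.stevieg.org.
      Get-WebServicesVirtualDirectory -Server $server |
          Set-WebServicesVirtualDirectory -ExternalUrl "https://ews.stevieg.org/EWS/Exchange.asmx"
      Get-ActiveSyncVirtualDirectory -Server $server |
          Set-ActiveSyncVirtualDirectory -ExternalUrl "https://eas.stevieg.org/Microsoft-Server-ActiveSync"
      Get-OabVirtualDirectory -Server $server |
          Set-OabVirtualDirectory -ExternalUrl "https://oab.stevieg.org/OAB"
  }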

After updating the External URLs for each service, except Autodiscover, which we can leave as-is, we can then repeat the process from the previous section to set the Outlook Anywhere HTTPS namespace to our dedicated Outlook Anywhere name.

Summary

In this two-part article we've looked at the improvements Exchange 2013 brings to load balancing, allowing simpler load balancers to be used whilst at the same time providing better throughput. We've also seen that the configuration required is a lot simpler than in Exchange 2010 and provides a much better level of reliability. Although to get the best from a load balancer you might need to look at multiple namespaces, the overall simplicity is certainly worth it.
