Today I'm beginning a series of articles detailing load balancing for Exchange using the KEMP virtual load balancer (VLB). In this series I will cover the following:
- Introduction to Load Balancing for Exchange and Other Workloads
(this article)
- How to Configure General Settings on the KEMP Virtual LoadMaster
- How to Configure an L7 KEMP Virtual Load Balancer (VLB) for Exchange 2013
- How to Configure an L4 KEMP Virtual Load Balancer (VLB) for Exchange 2013
- How to Restrict Exchange Admin Center Access From the Internet Using KEMP VLB
I'm using a KEMP virtual load balancer in this series for a number of reasons. First, KEMP offers a free trial version downloadable from their website. Second, it's very easy to configure. And third, a virtual load balancer works great with a home lab setup like my 5th Gen Hyper-V Lab Server.
Why Use a Load Balancer?
A load balancer is required when you have two or more Exchange servers with the Client Access Server (CAS) role installed in the same site for high availability. Load balancers have the intelligence to distribute client traffic among the CAS servers using either a round-robin or least-connections method. Layer 7 (L7) load balancers also have the intelligence to perform multiple health checks on each node to determine whether it's healthy enough to accept new connections. If the service on one node becomes unavailable, the load balancer will automatically redirect all traffic for that service to a healthy node, if one is available.
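To make the distribution methods concrete, here's a minimal Python sketch of the selection logic described above. This is not KEMP's implementation, just the idea: pick nodes round-robin or by fewest active connections, and skip any node that failed its last health check. The node names and connection counts are made up.

```python
from itertools import cycle

class Pool:
    """Toy model of how a load balancer picks a healthy node."""
    def __init__(self, nodes):
        self.nodes = nodes                       # node name -> active connection count
        self.healthy = {n: True for n in nodes}  # last health-check result per node
        self._ring = cycle(list(nodes))

    def round_robin(self):
        # Walk the ring, skipping nodes that failed their last health check.
        for _ in range(len(self.nodes)):
            node = next(self._ring)
            if self.healthy[node]:
                return node
        return None  # no healthy node available

    def least_connections(self):
        # Of the healthy nodes, pick the one with the fewest active connections.
        candidates = [n for n in self.nodes if self.healthy[n]]
        return min(candidates, key=self.nodes.get) if candidates else None

pool = Pool({"cas1": 12, "cas2": 3})
pool.healthy["cas1"] = False      # cas1 failed its health check
print(pool.round_robin())         # -> cas2
print(pool.least_connections())   # -> cas2
```

With cas1 marked unhealthy, both methods converge on cas2 until cas1 passes a health check again, which is exactly the redirect behavior described above.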
Load balancers are also able to load balance many other workloads such as web servers, SharePoint servers, Lync servers, etc. Typically each service or workload has its own virtual IP (VIP). When clients connect to a service, say OWA, DNS points the namespace to that VIP on the load balancer. The load balancer then directs the traffic to a healthy node offering that service.
Load Balancer Configurations
Load balancers are often configured as either one-arm (single NIC) or two-arm (two NICs, one for inbound traffic and another for outbound traffic). See Basic Load-Balancer Scenarios Explained for details on the two. For simplicity we will be configuring our load balancer as one-arm.
Here's an explanation of the differences between Layer 7 and Layer 4 load balancing.
Layer 7 Load Balancing
L7 load balancing happens at the application layer. Health checks are performed on the applications themselves (for example OWA, EWS, ActiveSync, etc.). The SSL connection must terminate on the load balancer so the content can be inspected, and the traffic is then re-encrypted back to the real servers. This requires the L7 load balancer to have an understanding of the applications being load balanced. It also usually involves some form of persistence, such as cookie-based or source IP. Because of all this, L7 load balancing is more complex. Exchange 2010 required L7 load balancing due to the different ways each application protocol handled persistence. Exchange 2013 does not require persistence, even when using L7 load balancing.
Layer 4 Load Balancing
L4 load balancing happens at the transport layer, after routing is complete (routing occurs at Layer 3). Health checks are performed on the servers, not the applications. Layer 4 load balancers do not decrypt, inspect, and re-encrypt SSL traffic. This means L4 load balancers offer higher performance and less complexity, but the load-balanced application must support it. Exchange 2013 CAS is designed for L4 load balancing, but it also supports L7 load balancing.
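The difference between the two health-check styles can be sketched in a few lines of Python. The L4 check only verifies that the TCP port answers; the L7 check speaks HTTPS and asks the application itself whether it's working. The host and the /owa/healthcheck.htm path are illustrative (Exchange 2013 exposes a healthcheck.htm page per protocol virtual directory, but verify the paths in your environment), and certificate verification is disabled here only because lab servers often use self-signed certs.

```python
import socket
import ssl
import http.client

def l4_check(host, port=443, timeout=2):
    """Layer 4 check: can we open a TCP connection to the port?
    Says nothing about whether OWA, EWS, etc. are actually healthy."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def l7_check(host, path="/owa/healthcheck.htm", timeout=2):
    """Layer 7 check: speak HTTPS and verify the application answers 200 OK.
    Requires terminating SSL on the checker, just like an L7 load balancer."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # lab assumption: self-signed certificates
    ctx.verify_mode = ssl.CERT_NONE
    try:
        conn = http.client.HTTPSConnection(host, 443, timeout=timeout, context=ctx)
        conn.request("GET", path)
        return conn.getresponse().status == 200
    except OSError:
        return False
```

Note that a passing l4_check only proves something is listening on 443; OWA could be returning errors on every request and an L4 balancer would keep sending traffic to it.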
Oftentimes load balancers are used to load balance HTTPS traffic, for example mail.contoso.com for OWA. Here, you have three choices:
1. Terminate the SSL connection at the load balancer and pass the connection through to the target node unencrypted. This is known as SSL offloading. The SSL certificate is installed only on the load balancer. Exchange virtual directories are configured to use HTTP and SSL offloading is enabled.
2. Terminate the SSL connection at the load balancer and then re-encrypt the connection to the target node. This is called SSL bridging. The SSL certificate is installed on the load balancer and another SSL certificate is installed on the target nodes. The load balancer must trust the certificate on the target nodes.
3. Pass the SSL connection through the load balancer and terminate the SSL connection on the target node. This is called SSL passthrough. The SSL cert is installed only on the target nodes.
Of these options, SSL bridging and SSL passthrough are the most common. SSL bridging has the advantage of protocol inspection by the load balancer. Since the session terminates on the load balancer, it is able to read and inspect the traffic going through it. This can be useful for advanced load balancing features or logic, but it adds complexity to the load balancing solution. You'll need to manage separate SSL certificates for the load balancer and the target nodes, and it adds CPU overhead because all traffic must be decrypted, inspected, and re-encrypted. On the flip side, the obvious benefit is the ability to maximize server resource usage by load balancing individual services, such as OWA, ActiveSync, etc., instead of failing over the entire server when one of the services is affected.
SSL passthrough simply passes all SSL traffic through the load balancer to the target node, where it is decrypted. The load balancer is unable to read or inspect the traffic going through it because it is encrypted, so you won't be able to do anything fancy on the load balancer. As an administrator, you'll only need to manage the SSL certs on the target nodes. On the flip side, if the service used for the health check fails, the entire server is taken out of the load-balancing pool.
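To see what "the load balancer never decrypts" means in practice, here's a toy passthrough forwarder in Python. It accepts a connection and simply shovels the encrypted bytes to a backend node in each direction; no certificate is ever loaded and no ssl module is imported. The backend IPs are placeholders, there's no health checking, and a single-threaded-accept loop like this would never scale, so treat it as an illustration rather than something to deploy.

```python
import socket
import threading

BACKENDS = ["10.0.0.11", "10.0.0.12"]   # hypothetical CAS node IPs
_next = 0

def pick_backend():
    """Simple round-robin; a real load balancer would also health-check."""
    global _next
    backend = BACKENDS[_next % len(BACKENDS)]
    _next += 1
    return backend

def pipe(src, dst):
    """Copy bytes one way until a side closes. The payload stays encrypted
    end to end; we only ever see TLS records, never HTTP."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve(listen_port=443):
    lb = socket.socket()
    lb.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lb.bind(("0.0.0.0", listen_port))
    lb.listen()
    while True:
        client, _ = lb.accept()
        node = socket.create_connection((pick_backend(), 443))
        threading.Thread(target=pipe, args=(client, node), daemon=True).start()
        threading.Thread(target=pipe, args=(node, client), daemon=True).start()
```

Because the forwarder only relays opaque bytes, its health checks can only be server-level (can I reach the node?), which is exactly the L4 trade-off described earlier.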
In all three of these options you should configure the load balancer to NAT the traffic to the real servers. Because of this, the target nodes always see the load balancer as the source IP for load-balanced connections. This is also important to know when you are reviewing IIS logs on the target server. For this reason, I usually configure an X-Forwarded-For header that includes the original source IP. More on that later in the series.
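On the target-node side, recovering the real client address from that header looks like the sketch below. The IP addresses are made up for illustration; the convention (which the standard X-Forwarded-For header follows) is that the first entry in the comma-separated list is the original client.

```python
def client_ip(headers, peer_ip):
    """With a NATed load balancer, peer_ip is always the LB itself. If the LB
    was configured to insert X-Forwarded-For, the first entry in that header
    is the original client address."""
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    return peer_ip   # no header: all we know is the LB's address

# Traffic NATed through a load balancer at 192.168.1.50 (placeholder IPs):
print(client_ip({"X-Forwarded-For": "203.0.113.7"}, "192.168.1.50"))  # -> 203.0.113.7
print(client_ip({}, "192.168.1.50"))                                  # -> 192.168.1.50
```

This is also why the IIS logs need to be configured to record the X-Forwarded-For header; otherwise every request appears to come from the load balancer.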
Note: If you prefer not to use NAT on the load balancer, Direct Server Return (DSR) may be an option to consider. DSR requires additional configuration on the load-balanced servers and may not be desired due to supportability and overhead concerns. Another option when not using NAT is to configure the load-balanced servers to use the load balancer as their default gateway. I do not recommend DSR for Exchange and will not be covering it in this series.
Now that we have some of
the basics out of the way, let's get started.
Getting Your Free KEMP Virtual Load Balancer Trial
KEMP Technologies offers
a fully functional 30-day free trial for their entire LoadMaster family of
virtual load balancers. Better yet, if you are a Microsoft Certified
Professional (MCP), MVP, or MCM you can register for a free NFR license! This license is good for one year with
free renewals as long as the offer is valid. You also get free web support!
For this series I'll be working with the VLM-200. To download the virtual LoadMaster, simply click the free trial link, select your hypervisor and country, and click Download. The VLM supports 14 different hypervisors, including various versions of Hyper-V, VMware, and Xen.
After you click download you will be redirected to the Free Trial Activation page and your download will begin. To activate your free trial you need to create a KEMP ID from this page. Do that while you're downloading. If you're requesting a LoadMaster NFR license you'll need a KEMP ID, the VLM serial number (get that from the VLM after it's running), and your MCP transcript ID and access code.
The download will consist of a ZIP file that contains the correct files for your hypervisor along with an installation guide. All you need to do is extract the contents and import them into your virtual server management console. For Hyper-V R2 select Import Virtual Machine, browse to the Loadmaster VLM folder, and click Next three times. On the Connect to Network page select your virtual switch, click Next and Finish. The new VLM virtual machine is preconfigured to use 2 virtual CPUs and 1GB of RAM.
Once imported start it up so you can get your trial license. Connect to the VM console to watch it boot up. The VLM is configured for DHCP and should show its management URL and login credentials. The default username is bal and the password is 1fourall. You'll change this later.
KEMP VLM Boot Screen
Licensing the KEMP Virtual LoadMaster
Open your web browser to the URL shown in the console. It's normal to receive a certificate warning, just click through it. Accept the license terms and allow automatic updates. Now you will license your KEMP LoadMaster. Enter your KEMP ID and password and the LoadMaster will license itself as long as it has Internet access.
If it does not have Internet access you will need to select Offline Licensing and complete the form to obtain your license information to paste into the VLM licensing form. When licensing is successful, the VLM will indicate that the license has been installed and when it expires.
Next, the VLM will have you change the default password and log back in to begin configuration.
In the next part of the series, I will show you how to configure general settings on the virtual LoadMaster to load balance Exchange 2013.