Just a few years ago, the concept of server load balancing was an expensive luxury for businesses with endless budgets, making it almost entirely out of reach for smaller companies with more modest means. However, as with many other innovations, load balancers have become increasingly accessible for small and medium-sized businesses, and products are more affordable than ever before.
However, if you’re new to the world of load balancing technology, it may seem like a confusing concept, and selecting a product can feel daunting. Choosing the right load balancer requires a good knowledge of networking, server administration and applications, so it’s even more vital that your IT knowledge is up to scratch.
So, to help you make an informed decision about load balancing, and assist you in choosing the right product, let’s start with the basics…
A load balancer is a device that acts as a reverse proxy, distributing network or application traffic across more than one server. Load balancers are used to increase capacity and reliability of applications.
This type of product can come with a considerable number of features, from cookie persistence to on-the-fly HTTP header re-writes. Therefore, it’s important to know what is important to your business beyond the functions that all load balancers provide.
We discuss some of these additional features below.
Server persistence is among the most important functions performed by a load balancer. Persistence is what holds a user on a single web server, rather than being sent to a different server for every request.
Businesses with interactive websites should always opt for cookie persistence. Interactive session information for an individual - such as a shopping cart on an ecommerce site - is often kept on one server during the session, and not shared with the other servers. So, if a user were sent to a different web server mid-session, they might find their shopping cart is empty.
The most common types of persistence methods for load balancers are source IP address and cookies. With the former, the load balancer looks at the source address of the incoming request to keep track of the individual users, while with cookie persistence, the load balancer looks at HTTP cookies to differentiate users.
For websites with HTTP traffic, cookie persistence is usually the best choice. It is simple to set up and helps to overcome a few tricky issues involved with source IP persistence, such as office routers and mega-proxies where hundreds of users can come from a single IP address.
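To make the two persistence methods concrete, here is a minimal sketch of how a load balancer might route requests under each scheme. The backend names, the `SERVERID` cookie and the hashing choice are illustrative assumptions, not the behaviour of any particular product:

```python
import hashlib

BACKENDS = ["web1", "web2", "web3"]  # hypothetical backend pool

def pick_by_source_ip(client_ip):
    """Source IP persistence: hash the client address so the same
    IP always maps to the same backend server."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

def pick_by_cookie(cookies):
    """Cookie persistence: honour a server cookie set on an earlier
    response, so the user stays on the server holding their session."""
    server = cookies.get("SERVERID")  # illustrative cookie name
    if server in BACKENDS:
        return server
    # First request: no cookie yet, so choose a backend
    # (and the real balancer would set the cookie in the response).
    return BACKENDS[0]
```

Note how the source IP method sends every user behind one office router to the same server, which is exactly the imbalance that cookie persistence avoids.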
Health checking is the mechanism by which the load balancer verifies that each server being load balanced is fully functioning. This is an area in which load balancers can vary considerably, so it’s important to pay attention to these differences.
The most basic forms of health checking include ICMP ping, TCP port open and doing an HTTP HEAD or GET command and looking for an HTTP 200 response. Some load balancers offer an interactive approach to health checking, in which the user can send a specific request and parse for a certain response. For the majority of users, standard HTTP GET or HEAD commands will be sufficient.
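The basic checks described above can be sketched in a few lines. This is an illustrative implementation using only the Python standard library, not the mechanism of any specific load balancer:

```python
import socket
from urllib.request import urlopen
from urllib.error import URLError

def tcp_port_open(host, port, timeout=2.0):
    """TCP port check: the server counts as 'up' if the port
    accepts a connection within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_health_check(url, timeout=2.0):
    """HTTP check: issue a GET and treat a 200 response as healthy,
    anything else (or no answer at all) as unhealthy."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except URLError:
        return False
```

A real load balancer runs checks like these on a timer and removes any server that fails from the rotation until it recovers.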
Redundancy, available in the majority of load balancers, is an integral feature. If your solution does not offer redundancy out of the box, ensure the unit supports it so you have the chance to add it later. In most cases, there are two redundancy modes: active-standby and active-active. Active-active is typically available only in higher-end units.
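The active-standby mode can be summed up in one decision rule, sketched below with hypothetical unit names. In active-active, by contrast, both units would carry traffic at the same time:

```python
def choose_active(primary_healthy, secondary_healthy):
    """Active-standby redundancy: all traffic goes to the primary
    unit; the standby takes over only when the primary fails its
    health check."""
    if primary_healthy:
        return "primary"
    if secondary_healthy:
        return "secondary"
    return None  # both units down: no path for traffic
```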
It is important to get a load balancer that is capable of serving the traffic you are supporting, while also considering your future traffic needs. Further to this, the issue of capacity can be a source of headaches for businesses. In their performance characteristics, load balancers have more in common with web servers than with network switches: web servers are typically measured in connections per second, whereas switches are measured in pure throughput.
The most important performance metric for load balancers is therefore connections per second. The work involved in accepting and establishing a TCP session, potentially parsing the HTTP header and forwarding the traffic to another web server is considerable compared with the relatively simple task of moving raw throughput.
One area of performance where throughput is a factor in load balancers is in picking the speed of the network interface. These usually come in either Fast Ethernet (100Mbps) or Gigabit Ethernet (1,000Mbps). The infrastructure in most small to midsize businesses does not come anywhere near 100Mbps, so Fast Ethernet is usually adequate. Only the load balanced traffic will go through the load balancer.
If your business is currently pushing 20Mbps of traffic or less, chances are you won’t need Gigabit Ethernet. And even if you are pushing 50Mbps, you still have 50% headroom for your website to grow.
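The sizing arithmetic above is simple enough to write down. A small sketch, assuming a Fast Ethernet (100Mbps) interface as in the examples in the text:

```python
def headroom_percent(current_mbps, link_mbps=100):
    """Remaining capacity, as a percentage, on a network interface
    of the given speed (default: Fast Ethernet, 100Mbps)."""
    return 100 * (link_mbps - current_mbps) / link_mbps

# 50Mbps of traffic on a 100Mbps link leaves 50% room to grow,
# matching the figure quoted above.
print(headroom_percent(50))
```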
Cost is usually the biggest factor for any business to consider, but remember that the price of the unit is only part of the total cost equation. If you want redundant load balancers, which are highly recommended, you will need to double the single-unit price. It is also worth keeping in mind that some vendors charge extra for the support contract, and that these often come in various levels.
KEMP is an application load balancer that enables high-performance, secure delivery of application workloads from a wide range of vendors across a variety of industries. Available as a virtual, physical or bare-metal appliance (built on a Dell chassis), KEMP goes beyond load balancing to include security, scalability and management capabilities that simplify the challenge of resilient application delivery.
KEMP solutions are Microsoft-approved, and they’re optimised to provide high availability and application traffic acceleration for platforms including Exchange, Lync, SharePoint, AADFS, IIS, Dynamics and Remote Desktop Services.
KEMP incorporates many intelligent layer 4-7 front-end application delivery services to make applications perform better. These include features that work together to improve application response time, scalability and capacity to meet the needs defined for Microsoft workloads: