Network Layouts


Introduction
There are several setups possible with the Zeus Load Balancer. This document explains each configuration in turn, together with advice about situations in which each layout would be appropriate.

Essential components of a load-balanced system

The basic architecture of the server farm consists of frontend machines, backend machines, and a network connection to the outside world. The front-end machines receive requests from the Internet and distribute the requests to the back-end machines using the Zeus Load Balancer software.

The Administration Server

Somewhere on the network, you should also install an Administration Server (Adminserver). This can be located anywhere, as long as it can directly contact all of the frontend and backend machines.

The Adminserver holds the master copy of the configuration information for the websites and handles deploying this information across the machines in the server farm. The Adminserver only needs to be running when the web-based admin interface is used to alter the configuration of virtual servers; it is not needed for the functioning of the system under normal operation. Use of the Adminserver is covered elsewhere.

How many backend webservers to use?

Any number of back-end machines can be used. If mostly static content will be served, and the Balancer is required more for fault-tolerance than performance, a minimal configuration of two back-end machines would be sufficient. If mostly complex dynamically generated content will be served, more back-end machines may be required to handle the load. The beauty of the Zeus Balancer system is that back-end machines can be dynamically added or removed as needs demand.

Two frontend machines can run in an active 'master-master' relationship, both distributing requests to the backend machines. This gives both improved performance and fault-tolerance compared to using a single frontend machine.

It is also possible to set up two load balancers in a 'master-slave' configuration. In this case, one load balancer is visible to the outside world, and handles all the traffic to the website(s). If the machine running this balancer should fail, the slave load balancer will step in and masquerade as this machine, taking on all the workload.

Simple setup

A simple network

In this setup, all the machines are connected to the same network switch. The drawback of this layout is that it cannot give maximum throughput between the outside world and the backend webservers. Consider the traffic passing through a frontend machine: each webpage request arrives at the frontend; the frontend forwards the request to a specific backend machine; the backend sends the web page data back to the frontend; and finally the frontend sends that data back out over the network connection. Every request and every response therefore crosses the frontend's single network link twice, halving its usable bandwidth - this can become a bottleneck. The next setup shows a solution to this problem.

More efficient setup

An efficient network

Now, the frontend machines each have two network cards. One is used solely to send and receive data to/from the outside world. The other is used to communicate with the backend webservers. Ensure that the routing on both frontend machines is configured correctly!
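As a sketch of what "configured correctly" means here, the commands below assign the two cards to separate networks. The interface names (eth0, eth1), the gateway address, and the 10.0.0.0/24 backend network are assumptions for illustration, not Zeus requirements; adapt them to your own addressing plan.

```shell
# External interface: carries traffic to and from the outside world.
ifconfig eth0 192.168.0.2 netmask 255.255.255.0

# Internal interface: carries traffic to the backend webservers only.
ifconfig eth1 10.0.0.254 netmask 255.255.255.0

# Default route goes out via the external network; traffic for the
# backend network stays on eth1 automatically (directly connected).
route add default gw 192.168.0.1

# Verify the routing table: 10.0.0.0/24 should be reachable via eth1.
netstat -rn
```

Run the equivalent commands on both frontend machines, adjusting the addresses for each.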

There are several points related to these setups that require discussion:

IP addresses
Both frontend machines must have externally visible IP addresses. It is essential that the DNS for the websites is set up appropriately - the IP addresses for both machines should be listed in the DNS for the site, e.g. www.cluster.zeus.co.uk has IP addresses 192.168.0.100 and 192.168.0.101. This enables round-robin DNS to function correctly - web browsers will pick one of the two addresses at random, and try to contact that individual machine.
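For illustration, a zone file fragment with both addresses listed might look like the following (the record layout is an example only; the exact syntax depends on your DNS server):

```
; Fragment of the zone for cluster.zeus.co.uk - two A records
; for the same name enable round-robin DNS.
www    IN    A    192.168.0.100
www    IN    A    192.168.0.101
```

With both records present, successive DNS lookups return the addresses in varying order, spreading browsers across the two frontend machines.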

If only one frontend is to be in active use, with the other acting purely as a backup, then only the active machine should appear in the DNS for the site.

Another fundamental issue is that, with two frontend machines, each machine must in fact have at least two IP addresses, even if it only has one network card! This is because, when one frontend dies, the surviving frontend must still have a valid IP address of its own with which to try to contact the failed machine, while at the same time masquerading as the failed frontend so that it receives all the HTTP requests.

This is better explained in terms of 'public' and 'private' IP addresses - do not confuse this with 'internal' and 'external' IP address ranges; there is a difference!

Example

Website www.cluster.zeus.co.uk has two 'public' IP addresses, both in the DNS records - 192.168.0.100 and 192.168.0.101.

It also has two frontend machines to handle web page requests. These machines are mercury and venus. The IP addresses for mercury and venus are 192.168.0.2 and 192.168.0.3 respectively. These are the 'private' IP addresses.

In normal use, mercury and venus claim one of 192.168.0.100 / 101 each. If one frontend dies, then the other one claims both public IP addresses. In this way, all web page requests are received, even if one frontend stops working.
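The Load Balancer performs this takeover automatically; the underlying mechanism is IP aliasing, where one network card answers for several addresses. The commands below only illustrate the idea manually - the interface names are assumptions, and you should not need to run these yourself:

```shell
# Normal operation on mercury: its private address on the card,
# plus one public address as an alias (interface names assumed).
ifconfig eth0 192.168.0.2 netmask 255.255.255.0        # private address
ifconfig eth0:0 192.168.0.100 netmask 255.255.255.0    # public address

# If venus fails, mercury additionally claims venus's public
# address as a second alias, so all requests still reach a frontend:
ifconfig eth0:1 192.168.0.101 netmask 255.255.255.0
```

This is why each machine needs a private address of its own: the alias addresses can move between machines, but the private address always identifies the physical machine.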

As discussed above, it is sensible to give the backend machines internal IP addresses. Remember that each network card interface requires a unique IP address, not just each machine.

A detailed setup
Here is an example of a possible setup, using two frontends, three switches and a fileserver:

A complex network

A final note
Before installing any Zeus Web Servers or Load Balancers, every machine should have the correct IP address(es) configured, should be able to ping the other machines, and the backend machines should mount the fileserver. Each machine should be configured to do this at boot in the appropriate rc.d/ startup files. See the UNIX documentation from your server vendor for information on how to do this configuration.
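A pre-flight check along these lines can catch misconfiguration before installation. The host addresses and the /data mount point below are examples, not Zeus requirements:

```shell
#!/bin/sh
# Sketch of a pre-installation check: verify that every machine in the
# farm is reachable (addresses are examples from this document).
HOSTS="192.168.0.2 192.168.0.3 10.0.0.1 10.0.0.2 10.0.0.3"

for h in $HOSTS; do
    if ping -c 1 "$h" > /dev/null 2>&1; then
        echo "$h reachable"
    else
        echo "$h UNREACHABLE" >&2
    fi
done

# On the backend machines, also check the fileserver mount. A boot-time
# mount might come from an /etc/fstab line such as (names are examples):
#   fileserver:/export/www  /data  nfs  rw,hard,intr  0 0
mount | grep -q ' /data ' || echo "warning: /data not mounted" >&2
```

Running such a script on each machine after editing the rc.d/ startup files, and again after a reboot, confirms that the configuration persists.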