IPVSADM Setup and Initial Configuration

Linux Virtual Server Tutorial

Horms (Simon Horman) - horms@valinux.co.jp

VA Linux Systems Japan, K.K. - www.valinux.co.jp

July 2003. Revised March 2004





The Linux Virtual Server Project (LVS) allows load balancing of networked services such as web and mail servers using Layer 4 Switching. It is extremely fast and allows such services to be scaled to service tens or hundreds of thousands of simultaneous connections. The purpose of this tutorial is to demonstrate how to use various features of LVS to load balance Internet services, and how this can be made highly available using tools such as heartbeat and keepalived. It will also cover more advanced topics which have been the subject of recent development, including maintaining active connections in a highly available environment and using active feedback to better distribute load.


The Linux Virtual Server Project (LVS) implements layer 4 switching in the Linux kernel. This allows TCP and UDP sessions to be load balanced between multiple real servers, and thus provides a way to scale Internet services beyond a single host. HTTP and HTTPS traffic for the World Wide Web is probably the most common use, though LVS can be used for more or less any service, from email to the X Window System.

LVS itself runs on Linux, however it is able to load balance connections from end users running any operating system to real servers running any operating system. As long as the connections use TCP or UDP, LVS can be used.

LVS is very high performance. It is able to handle upwards of 100,000 simultaneous connections. It is easily able to load balance a saturated 100Mbit ethernet link using inexpensive commodity hardware. It is also able to load balance a saturated 1Gbit link and beyond using higher-end commodity hardware.

LVS Basics

This section covers the basics of how LVS works: how to obtain and install LVS, and how to configure it for its main modes of operation. In short, it covers how to set up LVS to load balance TCP and UDP services.


  • Linux Director: Host with Linux and LVS installed which receives packets from end users and forwards them to real servers.
  • End User: Host that originates a connection.
  • Real Server: Host that terminates a connection. This will be running some sort of daemon such as Apache.
  • Virtual IP Address (VIP): The IP address assigned to a service that a Linux Director will handle.
  • Real IP Address (RIP): The IP address of a Real Server.
  • A single host may act in more than one of the above roles at the same time.


The virtual service is assigned a scheduling algorithm that is used to allocate incoming connections to the real servers. In LVS the schedulers are implemented as separate kernel modules. Thus new schedulers can be implemented without modifying the core LVS code.

There are many different scheduling algorithms available to suit a variety of needs. The simplest are round robin and least connection. These work by allocating connections to each real server in turn, and by allocating connections to the real server with the fewest established connections, respectively. Weighted variants of these schedulers allow connections to be allocated in proportion to the weight of each real server: more powerful real servers can be given a higher weight and will thus be allocated more connections.

More complex scheduling algorithms have been designed for specialised purposes. For instance, some ensure that requests for the same IP address are sent to the same real server; this is useful when using LVS to load balance transparent proxies.
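To illustrate the idea behind the weighted schedulers described above, here is a minimal sketch in Python. This is only an illustration of the allocation strategy, not the kernel implementation (the real schedulers, such as ip_vs_wrr, live in kernel modules and use a different interleaving); the server names are invented for the example.

```python
def weighted_round_robin(servers):
    """Yield real servers in proportion to their weights.

    servers: list of (name, weight) tuples.
    """
    # Expand each server according to its weight, then cycle forever.
    expanded = [name for name, weight in servers for _ in range(weight)]
    while True:
        for name in expanded:
            yield name

# A server with weight 2 receives twice as many connections as one with weight 1.
scheduler = weighted_round_robin([("rip1", 2), ("rip2", 1)])
allocations = [next(scheduler) for _ in range(6)]
print(allocations)  # ['rip1', 'rip1', 'rip2', 'rip1', 'rip1', 'rip2']
```

Because the schedulers are kernel modules, a scheme like this can be added to LVS without touching the core code.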

Installing LVS

Some distributions, such as SuSE, ship with kernels that have LVS compiled in. In these cases installation should be as easy as installing the supplied ipvsadm package. At the time of writing, Ultra Monkey provides packages built against Debian Sid (unstable) and Woody (stable/3.0), and against Red Hat 7.3 and 8.0. Detailed information on how to obtain and install these packages can be found at www.ultramonkey.org. The rest of this section discusses how to install LVS from source, as it is useful to understand how this process works.

Early versions of LVS worked with Linux 2.2 series kernels. This implementation involved extensive patching of the kernel sources. Thus, each version of LVS was closely tied to a version of the kernel. The netfilter packet filtering architecture[4], which is part of the 2.4 kernels, has allowed LVS to be implemented almost exclusively as a set of kernel modules. The result is that LVS is no longer tied closely to an individual kernel release. LVS may also be compiled directly into the kernel. However, this discussion will focus on using LVS as a module, as this approach is easier and more flexible.

1. Obtain and Unpack Kernel

  • It is always easiest to start with a fresh kernel. You can obtain this from www.kernel.org. This example will use the 2.4.20 kernel. It can be unpacked using the following command which should unpack the kernel into the linux-2.4.20 directory.
  • tar -jxvf linux-2.4.20.tar.bz2

2. Obtain and Unpack LVS

  • LVS can be obtained from www.linuxvirtualserver.org. This example will use version 1.0.9. It can be unpacked using the following command, which should unpack the LVS source into the ipvs-1.0.9 directory.
  • tar -zxvf ipvs-1.0.9.tar.gz

3. Apply LVS Patches to Kernel

  • Two minor kernel patches are required in order for the LVS modules to compile. To apply these patches use the following:
    • cd linux-2.4.20/
    • patch -p1 < ../ipvs-1.0.9/linuxkernel_ksyms_c.diff
    • patch -p1 < ../ipvs-1.0.9/linuxnet_netsyms_c.diff
  • A third patch should be applied to allow interfaces to be hidden. Hidden interfaces do not respond to ARP requests and are used on real servers with LVS direct routing.
    • patch -p1 < ../ipvs-1.0.9/contrib/patches/hidden-2.4.20pre10-1.diff

4. Configure the kernel

First ensure that the tree is clean:

make mrproper

Now configure the kernel. There are a variety of ways of doing this, including make menuconfig, make xconfig and make config. Regardless of the method that you use, be sure to compile in netfilter support with at least the following options. It is suggested that, where possible, these options be built as modules.

Networking options  --->
  [*] Network packet filtering (replaces ipchains)
  <m> IP: tunnelling
  IP: Netfilter Configuration  --->
    <m> Connection tracking (required for masq/NAT)
    <m>   FTP protocol support
    <m> IP tables support (required for filtering/masq/NAT)
    <m>   Packet filtering
    <m>     REJECT target support
    <m>   Full NAT
    <m>     MASQUERADE target support
    <m>     REDIRECT target support
    <m>   NAT of local connections (READ HELP) (NEW)
    <m>   Packet mangling
    <m>     MARK target support
    <m>   LOG target support

5. Build and Install the Kernel

As the kernel has been reconfigured, the build dependencies need to be reconstructed:

  • make dep
  • The kernel and modules may now be built using:
    • make bzImage modules
  • To install the newly built kernel and modules, run the following command. This should install the modules under /lib/modules/2.4.20/ and the kernel as /boot/vmlinuz-2.4.20.
  • make install modules_install

6. Update boot loader

If grub is used as the boot loader, a new entry should be added to /etc/grub.conf. This example assumes that the / partition is /dev/hda3. Existing entries in /etc/grub.conf should be used as a guide.

title 2.4.20 LVS

root (hd0,0)

kernel /vmlinuz-2.4.20 ro root=/dev/hda3

If the boot loader is lilo then a new entry should be added to /etc/lilo.conf. This example assumes that the / partition is /dev/hda2. Existing entries in /etc/lilo.conf should be used as a guide.

image=/boot/vmlinuz-2.4.20
	label=Linux-LVS
	read-only
	root=/dev/hda2
Once /etc/lilo.conf has been updated run lilo.


Added Linux-LVS *

Added Linux

Added LinuxOLD

7. Reboot the system.

At your boot loader's prompt be sure to boot the newly created kernel.

8. Build and Install LVS

The commands to build LVS should be run from the ipvs-1.0.9/ipvs/ directory. To build and install, use the following commands, where /kernel/source/linux-2.4.20 should be replaced with the root of the kernel tree that was just built.

make KERNELSOURCE=/kernel/source/linux-2.4.20 all

make KERNELSOURCE=/kernel/source/linux-2.4.20 modules_install

9. Build and Install Ipvsadm

Ipvsadm is the user-space tool that is used to configure LVS. The source can be found in the ipvs-1.0.9/ipvs/ipvsadm/ directory. To build and install use the following commands.

make all

make install


LVS NAT

LVS NAT is arguably the simplest way to configure LVS. Packets from end users are received by the linux director and the destination IP address is rewritten to be that of one of the real servers. The return packets from the real server have their source IP address changed from that of the real server to the VIP.
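The two rewrites that LVS NAT performs can be sketched as follows. This is an illustration only, not kernel code, and the VIP and RIP addresses are invented for the example.

```python
VIP = "10.0.0.1"        # advertised virtual IP (invented for this sketch)
RIP = "192.168.0.10"    # chosen real server (invented for this sketch)

def rewrite_request(packet):
    """End user -> director: rewrite the destination from the VIP to a real server."""
    if packet["dst"] == VIP:
        packet = dict(packet, dst=RIP)
    return packet

def rewrite_reply(packet):
    """Real server -> director: rewrite the source back to the VIP."""
    if packet["src"] == RIP:
        packet = dict(packet, src=VIP)
    return packet

request = {"src": "172.16.1.2", "dst": VIP}
print(rewrite_request(request))  # destination becomes the RIP; source untouched
reply = {"src": RIP, "dst": "172.16.1.2"}
print(rewrite_reply(reply))      # source becomes the VIP; destination untouched
```

Note that only one address is changed in each direction; the end user's address is preserved, which is visible in the packet trace later in this section.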

Linux Director

  • Enable IP forwarding. This can be done by adding the following to /etc/sysctl.conf and then running sysctl -p.
    • net.ipv4.ip_forward = 1
  • Bring up the VIP on eth0:0. This is best done as part of the networking configuration of your system, but it can also be done manually. In the command below, <VIP>, <netmask> and <broadcast> stand for your own address values.
    • ifconfig eth0:0 <VIP> netmask <netmask> broadcast <broadcast>

Configure LVS

In the commands below, <VIP> stands for the virtual IP address and <RIP1> and <RIP2> for the addresses of the two real servers. The -s rr option selects the round robin scheduler and -m selects masquerading (NAT) as the forwarding method.

ipvsadm -A -t <VIP>:80 -s rr

ipvsadm -a -t <VIP>:80 -r <RIP1>:80 -m

ipvsadm -a -t <VIP>:80 -r <RIP2>:80 -m

Real Servers

Make sure return packets are routed through the linux director. Typically this is done by making the linux director's IP address on the server network the default gateway of the real servers.

Make sure that the desired daemon is listening on port 80 to handle connections from end-users.

Testing and Debugging

  • Testing can be done by connecting to the VIP from outside the server network.
  • Running a packet tracing tool on the linux directors and real servers is very useful for debugging purposes. Many setup problems can be resolved by tracing the path of a connection and observing at which step packets fail to appear. Tcpdump is used as an example here; a variety of tools are available for various operating systems.
  • The following trace shows a connection being opened by an end user to the VIP, which is forwarded to the real server.
  • It shows packets being received by the linux director and then forwarded to the real server, and vice versa. Note that the packets forwarded to the real server still have the end user's IP address as the source address: the linux director only changes the destination IP address of the packet. Similarly, replies from the real servers have the destination address set to that of the end user; the linux director only rewrites the source IP address of reply packets so that it is the VIP.

tcpdump -n -i any port 80

12:40:40.965499 > S 2555236140:2555236140(0) win 5840 <mss 1460,sackOK,timestamp 16690997 0,nop,wscale 0>
12:40:40.967645 > S 2555236140:2555236140(0) win 5840 <mss 1460,sackOK,timestamp 16690997 0,nop,wscale 0>
12:40:40.966976 > S 2733565972:2733565972(0) ack 2555236141 win 5792 <mss 1460,sackOK,timestamp 128711091 16690997,nop,wscale 0> (DF)
12:40:40.968653 > S 2733565972:2733565972(0) ack 2555236141 win 5792 <mss 1460,sackOK,timestamp 128711091 16690997,nop,wscale 0> (DF)
12:40:40.971241 > . ack 1 win 5840 <nop,nop,timestamp 16690998 128711091>
12:40:40.971387 > . ack 1 win 5840 <nop,nop,timestamp 16690998 128711091>


ipvsadm -L -n can be used to show the number of active connections.

ipvsadm -L -n

IP Virtual Server version 1.0.9 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port        Forward Weight ActiveConn InActConn
TCP  rr
  -> Masq    1      7          3
  -> Masq    1      8          4

ipvsadm -L -n --stats will show the total number of connections, packets and bytes sent and received, per virtual service and real server.

ipvsadm -L -n --stats

IP Virtual Server version 1.0.9 (size=4096)
Prot LocalAddress:Port      Conns  InPkts OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  114 1716 1153 193740 112940
  -> 57 821 567 94642 55842
  -> 57 895 586 99098 57098

ipvsadm -L -n --rate will show the rate of connections, packets and bytes sent and received per second.

ipvsadm -L -n --rate

IP Virtual Server version 1.0.9 (size=4096)
Prot LocalAddress:Port        CPS  InPPS OutPPS   InBPS  OutBPS
  -> RemoteAddress:Port
TCP  56 275 275 18739 41283
  -> 28 137 137 9344 20634
  -> 28 138 137 9395 20649

ipvsadm -Z (--zero) will zero all the statistics counters.
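The counters shown above can also be consumed from a script, for example to feed active feedback into the load balancing decision. Below is a sketch of parsing the ActiveConn column out of ipvsadm -L -n output; the helper function and the sample addresses are invented for the example and are not part of ipvsadm itself.

```python
# Sample text mirroring the `ipvsadm -L -n` output shown above
# (addresses are invented placeholders).
SAMPLE = """\
IP Virtual Server version 1.0.9 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port        Forward Weight ActiveConn InActConn
TCP  10.0.0.1:80 rr
  -> 192.168.0.10:80           Masq    1      7          3
  -> 192.168.0.11:80           Masq    1      8          4
"""

def active_connections(output):
    """Return {real_server: active_connection_count} from ipvsadm -L -n output."""
    counts = {}
    for line in output.splitlines():
        fields = line.split()
        # Real-server lines start with "->" and end with the two numeric
        # connection counters (ActiveConn, InActConn); the column-header
        # line also starts with "->" but ends in a non-numeric word.
        if fields and fields[0] == "->" and fields[-1].isdigit():
            counts[fields[1]] = int(fields[-2])
    return counts

print(active_connections(SAMPLE))  # {'192.168.0.10:80': 7, '192.168.0.11:80': 8}
```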