I recently set up heartbeat and ldirectord for use with IPVS, basically a manually configured ultramonkey.
I ran into a problem where the system was very, very slow when I believed it should be very, very fast. Some debugging turned up lots of connections stuck in FIN_WAIT:
/sbin/ipvsadm -L -n --connection
IPVS connection entries
pro expire state      source             virtual            destination
TCP 01:58  FIN_WAIT   74.94.149.33:51617 192.168.10.235:80  192.168.10.12:80
TCP 01:58  FIN_WAIT   74.94.149.33:51622 192.168.10.235:80  192.168.10.12:80
TCP 01:58  FIN_WAIT   74.94.149.33:51619 192.168.10.235:80  192.168.10.12:80
TCP 01:58  FIN_WAIT   74.94.149.33:51618 192.168.10.235:80  192.168.10.12:80
TCP 01:58  FIN_WAIT   74.94.149.33:51621 192.168.10.235:80  192.168.10.12:80
TCP 01:58  FIN_WAIT   74.94.149.33:51620 192.168.10.235:80  192.168.10.12:80
The LVS persistence document suggests lowering the expire timeouts for testing:
ipvsadm --set 5 4 0
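For anyone wondering what those three numbers mean: as I read the ipvsadm man page, --set takes the TCP session, TCP FIN_WAIT, and UDP timeouts, in seconds, with 0 meaning "leave that one unchanged". So the testing values break down roughly like this:

```shell
# ipvsadm --set <tcp> <tcpfin> <udp>
#   tcp    - timeout for established TCP sessions
#   tcpfin - timeout for connections in FIN_WAIT (the ones piling up above)
#   udp    - timeout for UDP entries
# A value of 0 leaves that particular timeout at its current setting.

# Testing values: established TCP expires after 5s, FIN_WAIT after 4s,
# UDP timeout left unchanged. Needs root and the ip_vs module loaded.
/sbin/ipvsadm --set 5 4 0
```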
That works fine for me and speeds up the service; I'll keep monitoring for unwanted side effects. One I've already hit: updating WordPress posts needs more time than the testing values allow. Here's what I'm using now:
ipvsadm --set 60 10 2
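To confirm the new values actually took effect, ipvsadm can report the current timeouts back (a quick sketch; same root/ip_vs caveats as above):

```shell
# Apply the longer timeouts: TCP 60s, FIN_WAIT 10s, UDP 2s.
/sbin/ipvsadm --set 60 10 2

# Read the current timeout settings back to verify;
# prints a "Timeout (tcp tcpfin udp)" line.
/sbin/ipvsadm -L --timeout
```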
Related info:
This link seems to suggest the problem is caused by MTU
This link covers several LVS-related switch hardware issues (Cisco)
http://www.vs.inf.ethz.ch/edu/WS0102/VS/TCP-State-Diagram.html
http://blog.pfsense.org/?p=137
How I use Apache behind pfSense (including FIN_WAIT gotchas)