transparent DNS load-balancing with a Cisco ACE

Chuck Swiger cswiger at mac.com
Fri Oct 19 21:09:28 UTC 2012


Hi--

On Oct 19, 2012, at 1:04 PM, John Miller wrote:
>> IMO, the only boxes with IPs in both public and private netblocks should be your firewall/NAT routing boxes.
> 
> That's how we usually have our servers set up--the load balancer gets the public IPs, the servers get the private IPs, and we use NAT to translate between the two.

OK.

>>> Here's a question, however: how does one get probes working for a transparent LB setup?  If an rserver listens for connections on all interfaces, then probes work fine, but the return traffic uses the machine's default IP (not the VIP that was originally queried) as its source address.
>> 
>> That's the default routing behavior for most platforms.  Some of them support policy-based routing via ipfw fwd, route-to, or similar firewall mechanisms, which would let the probe replies be returned from some other source address if you want them to be.
> 
> Good to know--you'd definitely expect traffic to come back on the main interface.  I've considered setting up some iptables rules to make this happen, but if I can avoid it, so much the better.  Sounds like this is what I need to do, however, if I want both probes and regular requests to work.

Perhaps I misunderstand, but if the internal boxes only have one IP, how can they not be using the right source address when replying to liveness probes from your LB or some other monitor?  Do you probe on an external IP and have something else doing NAT besides the LB itself?

Or do you set up a second IP on your reals, which is what the LB sends traffic to?
(That's kinda what your lo:1 entry of 129.64.x.53 looked like.)
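
For what it's worth, the usual recipe on a Linux real in that sort of setup looks something like this (untested here, with made-up addresses, so adjust to taste):

# put the VIP on loopback so the box will accept traffic addressed to it,
# and keep it from answering ARP for the VIP on the wire
ifconfig lo:1 192.0.2.53 netmask 255.255.255.255 up
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

If named is then listening on that loopback address specifically (rather than only on the wildcard), replies to queries which arrived on the VIP should go back out sourced from the VIP, while probes aimed at the box's own IP still get answered from that IP.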

>>> What have people done to get probes working with transparent LB?  Are any of you using NAT to handle your DNS traffic?  Not tying up NAT tables seems like the way to go, but lack of probes is a deal-breaker on this end.
>> 
>> The locals around here have the luxury of a /8 netblock, so they can set up the reals behind an LB using publicly routable IPs and never need to NAT DNS traffic.  Folks with a more limited number of routable IPs might well load-balance to reals on an unroutable private network range behind NAT, in which case they wouldn't configure those boxes with public IPs.
> 
> We're on a /16, so we have plenty of public IPs (though not as many as you!) to play with, too.  The choice to NAT has historically been more about security than anything else--if something is privately IPed, we've got it on a special VLAN as well.

OK.  I've seen too many examples of traffic leaking between VLANs to completely trust their isolation, but good security ought to involve many layers which don't each have to be perfect to still provide worthwhile benefits.

> Presumably those reals are still behind a virtual ip address that's also public, right?

Yes, presumably.  :)

> If that's the case, how do you keep your probes (to the IP behind the LB) working, while still sending back regular DNS traffic (that was originally sent to the virtual IP) with the VIP as a source address?  Seems like you get only one or the other unless you tweak iptables/ipfw/etc.

There are two types of probes that I'm familiar with.

One involves liveness probes from the LB itself to the reals, done so that the LB can decide which of the reals are available and should be getting traffic.  For these, the reals reply using their own IPs.  The other type of probe is to the VIP: the LB forwards the traffic to the reals, gets a reply, and then proxies or rewrites that response and returns it to the origin of the probe using the VIP as the source address.  Or you can short-cut replies going back via the LB using DSR ("Direct Server Return"), or whatever your LB vendor calls that functionality...

All of your normal clients would only be talking to the VIP, and would only see traffic coming from the VIP's IP.
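
If you want to see the difference by hand, a couple of dig queries illustrate it (made-up addresses here: 10.1.1.10 being a real's own IP, 192.0.2.53 the VIP):

# probe a real directly -- the answer should come back from the real's own IP
dig @10.1.1.10 example.com A +short
# query via the VIP -- the answer should come back from the VIP
dig @192.0.2.53 example.com A +short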

> I appreciate the help, Chuck!  Would you mind PMing me or posting your configs?  That might be the most useful.

Pretend that some folks nearby are using Citrix NetScaler MPX boxes rather than Cisco hardware, so this might not be too useful to your case; an example config for a webserver would look something like this:

add serviceGroup SomeService-svg HTTP -maxClient 0 -maxReq 0 -cip ENABLED x-user-addr -usip NO -useproxyport YES -cltTimeout 120 -svrTimeout 300 -CKA YES -TCPB YES -CMP NO
add lb vserver LB-SomeService-80 HTTP 1.2.3.4 80 -persistenceType NONE -cltTimeout 120
bind lb vserver LB-SomeService-80 SomeService-svg
bind serviceGroup SomeService-svg rserver1 8080
bind serviceGroup SomeService-svg rserver2 8080
bind serviceGroup SomeService-svg rserver3 8080
bind serviceGroup SomeService-svg rserver4 8080

[ This is a generic example for a webserver, or for similar things which use HTTP to communicate.  Another group handles DNS, so I don't have a generic example for that handy.  And yeah, NDA issues prevent me from being as specific as I might like.  ]
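
And once such a vserver is up, a quick sanity check from outside is just something like:

curl -sI http://1.2.3.4/ | head -1

...which ought to get an HTTP status line back from the VIP if the reals behind it are healthy.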

Regards,
-- 
-Chuck



