Inbound solid-state multihoming without BGP

Guillaume Filion gfk at logidac.com
Sat Nov 1 20:43:07 UTC 2003


Kevin Darcy <kcd at daimlerchrysler.com> wrote:
> Guillaume Filion wrote:
> >We're looking for a cheap multihoming solution, and I spent some time
> >trying to understand how commercial products that do link sharing
> >work. (AstroCom PowerLink, f5.Net BigIP, Fat Pipe Xtreme, Radware
> >Linkproof, Alteon Link Optimizer, etc). I think that I found a way of
> >doing it with a PC running OpenBSD, I'd like to have your comments on
> >this.
> >[...]
> >I would really like to hear comments about this. We're actively
> >searching for a cheap multihoming solution and I believe that this
> >could be what we're looking for.
> >
> Using DNS for load-balancing is inherently icky. 

I should have said that we already considered using BGP for
multihoming, but the costs through our provider would be too high. We
are well aware that multihoming with DNS will be less reliable than
with BGP. BTW, does anyone know whether the commercial products that I
cited in my first message do better than my proposal? I believe that
they all use dynamic DNS.
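
For reference, here is a minimal sketch of the kind of setup I have in
mind (example.com, the addresses and the 300-second TTL are
placeholders, not our real data). Each link gets its own authoritative
nameserver, and each server hands out the address that belongs to its
own link:

  ; example.com as served by ns1 (reachable only via ISP A)
  $TTL 300                ; short TTL so a failover is noticed quickly
  @     IN SOA ns1.example.com. hostmaster.example.com. (
                2003110101 3600 900 604800 300 )
        IN NS  ns1.example.com.
        IN NS  ns2.example.com.
  ns1   IN A   192.0.2.53       ; on ISP A's link
  ns2   IN A   203.0.113.53     ; on ISP B's link
  www   IN A   192.0.2.80      ; ISP A address of the web server

  ; the copy served by ns2 (via ISP B) is identical except for:
  www   IN A   203.0.113.80    ; ISP B address of the web server

If ISP A's link goes down, ns1 becomes unreachable, resolvers fall
back to ns2, and fresh lookups return the address on the surviving
link.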

> In order to defeat the 
> effects of caching, you have to reduce your TTL values to levels that 
> are arguably anti-social to the rest of the Internet (always remember, 
> when you lower your TTL, you're not just making your own nameserver work 
> harder to answer queries, you're also making other people's nameservers 
> and the networks in between work harder too).

Good point, I had not considered this. It's not a big issue for us,
though: we're not a big organisation, and we get about 1 GB of traffic
per day from roughly 400-500 distinct users, so the additional load on
DNS servers worldwide would be insignificant. I do realise that a
DNS-based solution would not scale well to tens of thousands of
simultaneous users.
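
To put a rough number on it (a back-of-envelope estimate, assuming a
300-second TTL and that each of our ~500 daily users triggers at most
one fresh lookup per TTL over an hour of activity): that is at most 12
lookups per user per day, about 6,000 extra queries in total, and at a
few hundred bytes per query/response pair, on the order of a megabyte
or two of added DNS traffic per day, spread across the whole Internet.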

> Beyond that, though, your solution consists basically of tying each DNS 
> server to a given link. How is this really an improvement? Sure, you get 
> out of the business of trying to figure out whether a link is really 
> down (as long as the query packets aren't getting to the DNS server on 
> that particular link, or the responses can't get back, then effectively 
> it's "down" in your solution), but in eliminating that source of 
> uncertainty, you're introducing another class of uncertainty, i.e. where 
> the link is fine, but DNS or the DNS server on that link is having a 
> problem. The simplest case would be the crash -- or simply the reboot -- 
> of the DNS server. Do you really want all of your network traffic 
> "failing over" every time your DNS server hiccups?

In my analysis I treated a DNS server going down as a simple nuisance,
but your comment made me reconsider my position. It's true that *any*
DNS problem would shift all of the traffic onto one link. Running four
DNS servers on two computers (one server per link on each machine)
would reduce this problem somewhat.
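
Concretely (a sketch only; addresses, paths and file names are
placeholders), each machine could run two named instances, one bound
to an address on each link, so losing a machine no longer takes out a
whole link's DNS:

  // named-linkA.conf -- instance answering on the ISP A address
  options {
      listen-on { 192.0.2.53; };           // routed via ISP A
      pid-file "/var/run/named-linkA.pid";
  };
  zone "example.com" {
      type master;
      file "example.com.linkA";            // zone copy with ISP A addresses
  };

named-linkB.conf would be the same with the ISP B address and zone
copy; the two instances are started with "named -c named-linkA.conf"
and "named -c named-linkB.conf".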

> Lastly, for completeness, I'll point out a couple of discrete downsides 
> that you neglected to mention in your list:
> 1) the hassle of maintaining two different versions of the same zone(s) 
> on two different nameservers
> 2) the fact that, due to the inability to set up off-site nameservers, 
> the query load on the nameserver associated with a particular link will 
> approximately double whenever the other link is down.

Good points. We should make sure that the DNS servers stay under 50%
utilisation while both links are up, so that either one can absorb the
full query load when the other link is down. That's not a problem for
our situation, but it could be for a larger organisation.
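
On point 1, the duplication can at least be contained: keep everything
that is common to both copies in a shared file and pull it in with
$INCLUDE, so that only the per-link records are maintained twice (file
names here are just illustrative):

  ; example.com.linkA
  $TTL 300
  $INCLUDE example.com.common   ; shared SOA, NS, MX records, etc.
  www   IN A   192.0.2.80       ; ISP A address

  ; example.com.linkB
  $TTL 300
  $INCLUDE example.com.common
  www   IN A   203.0.113.80     ; ISP B address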

Thanks for your comments; you made me realise that this system would
not work well for a large organisation, but I still think that it
would work for us.

Regards,
GFK's
-- 
Guillaume Filion, ing. jr
Logidac Tech., Beaumont, Québec, Canada - http://logidac.com/
PGP Key and more: http://guillaume.filion.org/

