One Domain; Multiple IPs.

Barry Margolin barmar at genuity.net
Thu Jul 19 14:38:59 UTC 2001


In article <9j5cm3$jps at pub3.rc.vix.com>,
Brad Knowles  <brad.knowles at skynet.be> wrote:
>	Use TCP triangulation.  Have the SYN packet come into a 
>particular device, have that device make a decision as to where to 
>cause that connection to be set up, then it generates an outgoing SYN 
>packet to that back-end machine, which then proceeds to finish 
>setting up the TCP connection with the client that generated the 
>original SYN.

How can that work?  TCP requires the SYN/ACK to come from the same IP
address the client sent the SYN to; the client's stack matches incoming
segments against the full (source IP, source port, destination IP,
destination port) tuple, so a reply from any other address matches no
connection state.  If there's a device in the middle like this, it would
have to proxy the entire connection, in which case you don't get the
intended benefit of directing the connection to the closest server.
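A toy sketch of why that fails (the addresses and the two-state machine here are purely illustrative, not a real TCP stack):

```python
# Illustrative only: a client keys connection state on the full 4-tuple,
# so a SYN/ACK arriving from a different server IP than the one the SYN
# was sent to matches nothing and is discarded.

connections = {}

def send_syn(src_ip, src_port, dst_ip, dst_port):
    # Client records pending state under the exact 4-tuple it used.
    connections[(src_ip, src_port, dst_ip, dst_port)] = "SYN_SENT"

def receive_syn_ack(local_ip, local_port, remote_ip, remote_port):
    # The reply is matched against the stored 4-tuple.
    key = (local_ip, local_port, remote_ip, remote_port)
    if connections.get(key) == "SYN_SENT":
        connections[key] = "ESTABLISHED"
        return True
    return False  # no matching state: segment dropped (or RST sent)

send_syn("10.0.0.1", 12345, "192.0.2.10", 80)                 # SYN to the front-end device
print(receive_syn_ack("10.0.0.1", 12345, "192.0.2.99", 80))   # back-end answers directly: False
print(receive_syn_ack("10.0.0.1", 12345, "192.0.2.10", 80))   # answer from the original IP: True
```

So the back-end either has to spoof the front-end's source address, or the front-end has to stay in the path as a full proxy.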

It would be really nice if the network or transport protocols offered good
solutions for this.  ISTM that the protocol researchers have been spending
plenty of effort on designing stuff for mobile clients, but not much for
"mobile servers".  As a result, the folks who needed to do it had to cook
something up with whatever was available.

My guess is that the folks at Cisco were really reluctant to do a DNS-based
solution.  They're router/switch people, so their expertise is at layers 3
and below.  If there were a decent solution at that layer, I expect they
would have thought of it; I'm sure they would have preferred something
similar to what Local Director does, rather than developing a mini-DNS
server from scratch.  And if they couldn't think of it, Bay Networks would
have.

You seem to think that there are plenty of possibilities in the lower
layers, yet not a single solution vendor has managed to discover them.
Maybe this is an opportunity for you to make some bucks.

>	That load-balancing/distribution decision should not be made when 
>the recursive/caching nameserver on the remote end asks a question 
>regarding the IP address for a particular "machine".  Instead, that 
>decision should be pushed off until such time as the client(s) 
>actually go to make the connection to the "machine" in question.

We wish that it were easier to defer the decision.  However, since caching
nameservers are usually pretty close to their clients, this ends up being a
decent compromise.
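The compromise can be sketched in a few lines.  Everything here is hypothetical — the server addresses, the region table, and the prefix-match "distance" metric are made up for illustration; real products like Distributed Director and Hopscotch used their own proprietary metrics:

```python
# Hypothetical DNS-based selection: answer the A query with the server
# assigned to the querying resolver's region.  Note the decision keys off
# the RESOLVER's address, not the end client's -- tolerable only because
# caching nameservers are usually close to their clients.
import ipaddress

# Made-up mapping: server address -> resolver prefix it should serve.
SERVERS = {
    "192.0.2.10":  ipaddress.ip_network("198.51.100.0/24"),
    "203.0.113.5": ipaddress.ip_network("203.0.113.0/24"),
}

DEFAULT = "192.0.2.10"

def pick_address(resolver_ip):
    addr = ipaddress.ip_address(resolver_ip)
    for server, region in SERVERS.items():
        if addr in region:
            return server
    return DEFAULT  # no regional match: hand back a fallback server

print(pick_address("203.0.113.77"))   # resolver in the second region
print(pick_address("198.51.100.20"))  # resolver in the first region
```

A client behind a distant resolver still gets the answer chosen for the resolver's location, which is exactly the imprecision being conceded above.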

>>  Admittedly, a problem with this type of solution is keeping the patched
>>  version in sync with the changes being made in the on-going BIND
>>  development.
>
>	If that code were integrated into the public version of BIND, and 
>were therefore hopefully reasonably well-tested at a variety of 
>sites, that would at least eliminate my concerns over the quality of 
>the code that is performing the service(s), and the degree to which 
>it had been tested.

True.  But I think Genuity considered their Hopscotch technology to be
proprietary.  At the time that they were initially deploying it I don't
think Distributed Director was yet available, and there wasn't much else
that did anything similar, so it was viewed as providing a competitive
advantage.  Contributing it back to the general BIND development effort
would have negated that advantage.

-- 
Barry Margolin, barmar at genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

