One Domain; Multiple IPs.

Brad Knowles brad.knowles at skynet.be
Thu Jul 19 09:10:11 UTC 2001


At 9:25 PM -0400 7/18/01, Kevin Darcy wrote:

>  Other than the TTL issue, how is it "inherently bad", how does it constitute
>  "abuse"?

	That's like saying "Other than that, Mrs. Lincoln, how did you
like the play?"

>           It provides a useful function at a reasonable cost to those who
>  need it, and without causing any intractable incompatibility problems.
>  Sure, it may not be the most technologically-elegant solution, but it
>  works well enough for many organizations.

	I think you're missing the forest for the trees.  Yes, some sort
of solution is needed for the distribution/load-balancing issues, but
the DNS is a particularly poorly suited tool for this purpose.

	When all you've got is a hammer, yada yada yada.


	In this case, all you can see is the hammer in your hand, and 
you're ignoring the other tools in the toolbox.

>  But I think you're illustrating my point quite well. *Why* wasn't there
>  anyone with a clue about DNS at DISA?

	Typical government incompetence and rice-bowl politics.

>  If management/brass was visionary enough to realize that DNS was the
>  way of the future, they would have hired DNS-knowledgeable people, or
>  gotten their existing staff trained in it.

	Management doesn't give a damn about DNS.  They couldn't possibly 
care less.  What management has to worry about are much larger 
problems, and it is us techies who try our best to recommend 
particular solutions, which may or may not involve the DNS.

>                             Given your DISA experience, I'd think you'd
>  know better than that.

	With my DISA experience, I know plenty about government
incompetence and the total lack of interest on the part of management
with respect to particular technical solutions.

>                          In our case, I just implemented DNS without
>  bothering to clear it with management. Once the technical community
>  realized its value, most of them dropped the use of /etc/hosts and
>  started using it.

	It was pretty much the same for me.  I was hired straight from 
college in 1989, and with my years of experience using Unix (at 
college), I was immediately made the Unix systems administrator for 
my new employer.  I had all of one system, which was a diskless Sun 
386i running SunOS 4.0.1.

	Most office automation was done by the branch secretary on a
real-life Wang terminal, and there were a very small number of
communal PCs that were connected via Wang network cards and used for
other types of data processing.  Most "real" development and other
work was done in classified environments, on completely separate
equipment.


	By sometime in 1990 or 1991, I had two servers with actual disks 
of their own (plus the 386i), and I implemented the DNS on those 
machines without asking anyone or talking to anyone.  I just did it, 
for my own private use.

	A friend of mine soon found out (he was the lead contract VAX/VMS
System Manager for that part of the agency), and he did the same on
the MicroVAX II that they had as their primary development/support
machine.  Everybody in his group liked that, so with approval from
their local management, they went ahead and did the same on their
other unclassified VAX/VMS systems, which the customers seemed to
love.

	At that point, middle management started to hear good things 
about this new-fangled thing, and we started implementing the DNS 
across other parts of the unclassified network (which was also around 
the time the Wang mini-computers were being tossed and replaced by PC 
file/print servers running OS/2 LAN Manager).

>                                                Up until that point, though,
>  I had to battle DNS-FUD constantly. So I'm understandably a little wary
>  when someone FUDs novel uses for DNS such as load-balancing.

	I never really had any problems with this.  Most people were just
plodding along with their /etc/hosts files, and didn't know anything
about the DNS.  The few technical people I did talk to about it were
quick to join me once they heard how it worked and what it could do
for them, and to try to get their management to make these changes on
their systems, but almost none of them actually understood much of
anything about how the DNS worked.

>                                                                How can you
>  be so sure that it won't rise to dominance (or at least acceptance), even
>  in spite of your criticisms?

	Because there are other solutions that exist which solve the same
problems in better ways (as well as solving some additional problems
that aren't addressed by classic DNS load-balancing), and which avoid
the myriad problems which have plagued DNS load-balancing since the
misshapen idea was first hatched in someone's brain (Roland Schemers
or whoever).

	You've gotten yourself into a rut, and you don't seem to 
comprehend that you need to step back and re-evaluate the real 
problem to be solved, the available methods which you can apply to 
help solve the problem, and the potential resulting benefits.

>                     There are plenty of Bad Ideas out there which deserve
>  to die. But DNS-based load-balancing is something people obviously
>  want,

	That's just the point.  People *don't* want it.  What they *want*
is a solution to the real problem, for which people in the past have
taken the DNS hammer and used it to pound the crap out of the system
until it has been forcibly re-shaped to fit the model that they have
retroactively applied.

>        and although the implementations are still somewhat immature, and
>  we have the TTL issue, as an overall concept, it's *working*, without
>  breaking anything else.

	No, I submit that it is *not* really working particularly well,
and that the only reason it works at all is that people have gotten
used to the kinds of problems caused by this technique, and seem
willing to accept the resulting problems as "typical" of the DNS, and
therefore something that is "normal" and to be expected.

>  I think you're referring to a level beyond ordinary DNS-based load-balancing,
>  something I would call "topology-sensitive" DNS-based load-balancing. This is
>  what akamai does, isn't it? I agree that there are serious pitfalls involved
>  with trying to discern the convoluted, ever-changing topology of the
>  Internet.

	Okay, so what you're suggesting is a technique where the client
is arbitrarily redirected to whichever server was "least loaded" as
of the last time the nameserver surveyed the application servers,
regardless of the topological distance between that server and the
client, or any potential networking problems between those two
points, etc....  Riiiiiiiiiiiight.

	Just because the load-balancing nameserver may be able to contact 
all or a particular subset of its application servers does *NOT* 
necessarily mean that any other client in the world could likewise 
contact the same sets of machines.  If your client is in LA and your 
load-balancing nameserver is located in Miami, the client should not 
be directed to a server in Botswana just because five minutes ago it 
happened to be slightly less loaded than a server in San Francisco.
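
	As a rough sketch of just how blinkered that decision is, here is
the "least loaded wins" logic in a few lines of Python.  The hostnames
and load figures are made up for illustration, and this is not taken
from lbnamed or any real product, but it is the essence of the
technique: nothing about where the querying client sits ever enters
the picture.

	# Hypothetical load figures reported at the last survey; the
	# names and numbers are invented purely for illustration.
	servers = {
	    "sf.example.net":       0.82,  # near the LA client
	    "botswana.example.net": 0.79,  # far away, marginally less loaded
	}

	def pick_answer(surveyed_loads):
	    # Hand out the single "least loaded" address, regardless of
	    # where the client asking the question actually is.
	    return min(surveyed_loads, key=surveyed_loads.get)

	print(pick_answer(servers))   # -> botswana.example.net

	The LA client gets pointed at Botswana because, as of the last
survey, that box happened to report a load a few hundredths lower
than the one in San Francisco.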

>                              But the simple form of DNS-based load-balancing,
>  where you just differentiate answers based on the performance metrics and the
>  up/down status of the backend servers, is much simpler and I think has mostly
>  proven its worth.

	The more I think about it, the more I believe that this technique
has probably been a key contributor to the kinds of network routing
and congestion problems that have plagued the 'net over the last few
years.

>  It may. And I'd say if one needs such granularity in their DLB like the
>  time it takes to turn around a DNS query, then pay the extra $$$ and go
>  for a more sophisticated L4-based solution.

	It's not just the time it takes to turn around a DNS query.  It's
also the average amount of time it takes to survey all the
application servers (to determine how "loaded" they are), the amount
of time it takes for the client to receive the DNS reply and then
make the connection attempt, and the length of time that DNS result
is cached on the remote end.
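
	To put some purely hypothetical numbers on it: with a 60-second
survey interval and a 5-minute TTL, the load figure behind an answer
can be roughly

		  60 seconds   (age of the survey data when the answer was built)
		+ 300 seconds  (TTL during which remote caches keep handing it out)
		  ------------
		  360 seconds

old by the time the last client acts on it, and that is before you
count any additional caching or retry delays on the client side.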

	And then there are all sorts of topological issues.

>                                    DNS-based load-balancing is *rough*, but
>  relatively cheap, load-balancing. Nobody should be selling it as anything
>  more than that.

	IMO, in the long run you're almost certainly better off using
simple round-robin.  At the very least, it's not nearly so complex,
there's a decent chance that the remote caching server will continue
to rotate the records on your behalf, and there's also a decent
chance that the remote nameserver or the client may be able to do
address sorting and contact a "closer" application server out of the
list of addresses it was provided.
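
	For what it's worth, plain round-robin is nothing more exotic
than publishing several A records under the same name; the addresses
below are made-up examples.  BIND will rotate (or, depending on the
version and any rrset-order setting, randomize) the order of the set
in successive answers, and the client receives the whole list to sort
or fall back on:

		www    300  IN  A  192.0.2.10
		www    300  IN  A  192.0.2.11
		www    300  IN  A  192.0.2.12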

	Trying to build too much knowledge into the central nameserver
process and then handing out single "dumb" answers to the questions
is likely to cause more problems than it solves.

>                   If DNS clients are liberal in what they accept, then the
>  non-conservative replies of load-balancers don't cause a problem.

	But we know that many DNS clients are *not* liberal in what they 
accept.  Therefore, your conclusion is invalid because it is based on 
a false assumption.


	Moreover, you have perverted the guiding principle into:

		If you're a client, you must be liberal in what you accept
		and conservative in what you generate, but if you are a
		server you may generate whatever the hell you want and the
		rest is a client problem.

	Unfortunately, the guiding principle only works as a guiding 
principle if *everyone* uses it.  Otherwise, we might as well not 
even bother.

>                                     We can't put technology into stasis
>  simply out of fear that Microsoft will screw things up.

	Who said anything about "putting technology into stasis"?  I'm
here advocating a better technical solution to the problem, one that
distributes the intelligence about making distribution and
load-balancing decisions, that does so using the exact same network
topology factors that cause congestion and other routing problems,
that allows the network to always find the closest application server
farm, and that avoids all the other problems caused by DNS-based
load-balancing.

	If anything, I am the one advocating a "new technology", and you 
are the one actively resisting this suggestion by suggesting that we 
continue to use the same technology that we've had in stasis for many 
years.

>  Maybe the only *open*source* implementation you can find is lbnamed.
>  Open source projects die and/or get orphaned all of the time. There are
>  plenty of commercial implementations of DNS-based load-balancing.

	And just how reliable are their products?  How many times has 
that code path been executed?  How many people have seen the code 
that's being used, so as to try to make all bugs shallow?


	If you're going to advocate purely commercial solutions to this 
problem and then think I'm going to let you get away with the claim 
that this is a "cheap and easy" solution to the problem, you are 
sadly mistaken.

	Once we're into the realm of using purely commercial solutions
for this problem, it's a whole new ball game.  Moreover, anycasting
is a technique that is available in existing software implementations
(to the best of my knowledge, including open source), TCP
triangulation is available in existing software implementations
(again, also in open source), and connection load-balancing is also
available in software (and open source).

	So, I've got both commercial and reasonably trustworthy open
source implementations available for each of the components of my
solution, whereas you do not.

>  I'm not recommending GD in particular. I'm only referring to the *concept* of
>  using DNS to load-balance, which should be easier to implement and thus
>  cheaper.

	Should.  That sounds like a pretty key word to me.  Since this
requires the use of commercial hardware, and there is no reasonably
trustworthy open source implementation, your assumption that this
should be "cheaper" is obviously flawed.

>                                And no matter how much one may support
>  Open Source, it has to be acknowledged that some proprietary solutions
>  are ahead of their Open Source equivalents.  The fact that a technology
>  is implemented exclusively or almost exclusively in closed-source is not
>  a sufficient reason to reject it out of hand.

	True enough.  But when you're comparing a crude abuse and misuse
of the DNS through exclusively commercial closed-source
implementations that solve only one aspect of the original problem
and cause other problems, against an alternative that better solves
several aspects of the original problem, uses "natural" non-DNS
techniques (without abusing them), and whose various parts are
available from multiple sources in both commercial hardware &
software and open-source software implementations, I think the
winner is pretty clear.

>                                             DNS-based load-balancing is an
>  *option* that may be the right one for some organizations.

	Once they discover the other solutions to the original problem, I 
believe that DNS load-balancing will be recognized as a poor option 
that almost certainly is not right for anyone.

>                                                              More options,
>  more choices: these are Good Things. This is a central tenet held by all
>  Open Source advocates, isn't it?

	Yes.  Which is why I'm so very surprised to hear you advocate
that people should be tied down to a small number of solutions that
are only available as commercial closed-source, which seriously limit
the options available to the customers, and which cause as many
problems as they solve, or more.

-- 
Brad Knowles, <brad.knowles at skynet.be>

/*        efdtt.c  Author:  Charles M. Hannum <root at ihack.net>          */
/*       Represented as 1045 digit prime number by Phil Carmody         */
/*     Prime as DNS cname chain by Roy Arends and Walter Belgers        */
/*                                                                      */
/*     Usage is:  cat title-key scrambled.vob | efdtt >clear.vob        */
/*   where title-key = "153 2 8 105 225" or other similar 5-byte key    */

dig decss.friet.org|perl -ne'if(/^x/){s/[x.]//g;print pack(H124,$_)}'

