Protecting the Internet's Domain Name System

ICANN is now taking a look at the actual stability of the net - this is both refreshing and proper.  And it's about time.

Let us take a moment and ask ourselves: Why, on an Internet that was originally designed to survive a nuclear holocaust, is this DNS thing seemingly so vulnerable?

The reason is pretty obvious: Nearly every other part of the Internet is based on the concept that the individual parts should be able to operate independently.  But of all the parts of the Internet, the Domain Name System has a clear heart, a singular point from which all other parts radiate.  On most of the net, if one damages a part, the rest of the net will remain and will function.  With DNS as it is presently deployed, if one damages the heart, then the rest of DNS becomes uprooted and lost.

(This note will come back to this singular vulnerability of DNS and ask the question "why", but that will be a bit later.  In case you need instant gratification - here's a preview: DNS could be more fully distributed and its singular point of vulnerability eliminated.  The deeper question will thus be: Are we intentionally refusing to consider, much less adopt, a solution that could give to DNS the same near invulnerability that adheres to the rest of the Internet?  Are we captives of our own dogma and blinding ourselves to solutions?)

How is DNS Vulnerable?

In today's climate, even the discussion of vulnerability is considered by many to be improper.  I personally find this rather strange - we've adopted the mentality of Victorian-era physicians who were not permitted to look at or touch female patients; effectively withholding from them the benefits of medical care.

Sure, there are script kiddies out there who are Internet sociopaths and who will attack anything that moves.  Most of those folks are so uninventive that they'd attack address 127.0.0.1 if somebody told 'em to do so.

And there are really evil people - evil and smart people.  We are fooling ourselves if we believe that by not talking about DNS vulnerabilities we can keep these bad people from learning about them.  Security through obscurity is a short-term tactic; it is not a very good long-term strategy for protection.

So the balance is then to find the right forums and the right people so that we can not only identify the weaknesses of DNS but also repair them quickly before word of those weaknesses spreads too far.

Since I am not myself sure where the boundaries might be, I am not going to say much about the specific vulnerabilities.  Instead I'm going to jump forward and describe how I would fix those vulnerabilities that I perceive.  Let me suggest, however, that ICANN's role in all of this is somewhat ancillary - many of the protective tasks need to be taken up by ISPs, DNS registries/registrars, and users.  In this effort ICANN's role is primarily that of a cheerleader, not of a player.

A Small List of Things To Do To Protect the Domain Name System

The following is a list of concrete, specific steps that can be taken immediately to protect the upper layers of the domain name system.  It is the responsibility of those at the lower layers to undertake their own protective steps.

Of course we ought not to forget that there is already a great body of good thinking on these matters.  For example, see RFC 2870, "Root Name Server Operational Requirements" (R. Bush, D. Karrenberg, M. Kosters, R. Plzak; June 2000; BCP 40; obsoletes RFC 2010).

Free and Wide Dissemination of DNS Zone Files

We have all watched movies in which an actor cries out "every man for himself".  It's usually yelled at the top of the actor's lungs while all hell is breaking loose.  The phrase means exactly what it says - every person needs to take responsibility for his or her own protection.

Well, that's not a bad ultimate principle - if there were to be a widespread attack on DNS, the possession of reasonably complete and accurate copies of the root data would speed recovery.  People could load that data into their own computers and start rebuilding, even before the true root, or connectivity to it, were re-established.

But for that to happen, copies of that critical data would have to be distributed in advance of any attack on DNS.

For this to be possible, the current zone files for the DNS root and for each TLD listed in that root must be freely available from multiple sources.  And people must be taking advantage of the availability of this data and be making their own copies.

It is important that not just the current data be available - the predecessor versions covering the prior four weeks should also be available.  Why the predecessor versions?  The net could be attacked by a slowly creeping degradation of DNS data as much as by a single event.  By having historical data, one can go back to a time when it is likely that the data is reasonably correct and free of corruption.
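
By way of illustration, here is a minimal sketch in Python of how a holder of these archives might compare two snapshots to spot creeping changes.  The file names are hypothetical, and the records are sorted first so that harmless reordering does not show up as a difference.

  # Compare two saved copies of a zone file to spot creeping changes.
  # The file names are hypothetical placeholders for dated archive copies.
  import difflib

  def zone_diff(old_path, new_path):
      """Return a unified diff between two zone file snapshots."""
      with open(old_path) as f:
          # Sort the records: order within a zone file is not significant.
          old_lines = sorted(f.readlines())
      with open(new_path) as f:
          new_lines = sorted(f.readlines())
      return list(difflib.unified_diff(
          old_lines, new_lines, fromfile=old_path, tofile=new_path))

  if __name__ == "__main__":
      for line in zone_diff("root.zone.4-weeks-ago", "root.zone.current"):
          print(line, end="")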

This backup information must be in the standard DNS zone file format.  Files may be compressed using a standard compression method such as that used by "zip" or "gzip".

An MD5 checksum must be posted for each version.

This information should be freely available to anyone who wants it, without charge and without any prior agreements - in other words, without restriction.

Dissemination should be via publication on the World Wide Web.  Additional publication of the current version may be done by establishing one or more servers that permit DNS zone transfers.
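
As a sketch of what the downloader's end of this might look like, here is a bit of Python that checks a retrieved zone file against its posted MD5 checksum.  The file names and the checksum file's layout are assumptions, not a published convention.

  # Verify a downloaded zone file against its posted MD5 checksum.
  # The file names and checksum file layout are assumptions.
  import hashlib

  def verify_zone_copy(zone_path, md5_path):
      """True if the zone file's MD5 digest matches the posted value."""
      digest = hashlib.md5()
      with open(zone_path, "rb") as f:
          for chunk in iter(lambda: f.read(65536), b""):
              digest.update(chunk)
      with open(md5_path) as f:
          posted = f.read().split()[0].strip().lower()
      return digest.hexdigest() == posted

  if __name__ == "__main__":
      if verify_zone_copy("root.zone.gz", "root.zone.gz.md5"):
          print("checksum matches - copy is good")
      else:
          print("WARNING: checksum mismatch - discard this copy")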

When Was The Last Time You Saw A Trans-Oceanic Airliner With A Single Engine?

Suppose you had to take a 15 hour flight across the Pacific Ocean on some stormy winter evening.  Would you feel comfortable stepping onto an airplane that had but one engine and but one pilot (and no co-pilot)?

Many of us would consider that to be folly.

But ICANN insists that we run the Internet with only one DNS root.

I know from my own experience using DNS roots other than the ICANN/NTIA root that I have been insulated from the errors that have occurred on the catholic DNS root over the last several years (such as the loss of .com) and, at the same time, have not experienced any loss of interoperability on the net.

It's a simple fact: Root systems that have consistent contents will give people consistent answers.  I'll come back to this idea in a few paragraphs.  In the meantime, the message is worth repeating:  Multiple roots do not necessarily mean conflicting name resolutions.

It seems only rational to establish multiple, consistent systems of DNS roots so that loss or corruption of one root system will have no direct impact on other root systems.

Several years ago an experimental system was deployed called "Grass Roots".  It was a web-based system that allowed anyone who wanted to do so to build an appropriate configuration file so that he or she could establish his or her own root on his or her own computers.  I tried it - it worked perfectly; I found no loss of interoperability.

The stability of the Internet, its resilience against attack, and its ability to recover after a catastrophe, would be vastly improved if DNS were to have the redundancy offered by multiple roots.

Now let's get back to the fact that multiple consistent roots will result in consistent answers to users.

There appear to be three distinct cases that describe what happens when users of the Internet use different DNS roots.

In order to simplify things let me adopt some simple terminology:  "Root-D" stands for the dominant (NTIA/ICANN controlled) DNS root - this is the one that serves the vast majority of Internet users.  "Root-X" stands for any of the other root systems.

The three cases seem to be:

  1. Root-D and Root-X have identical contents.

  2. Root-X has more top-level domains than does Root-D but for those TLDs in common, the contents are identical.

  3. Root-X and Root-D contain at least one top-level domain with the same name but with different contents.

Most people consider Case 1 to be essentially a benign mirroring situation.

And it is Case 1 that I am suggesting we adopt here as a means to give resiliency to DNS.
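
For the curious, here is a minimal sketch of how one could verify Case 1 consistency mechanically.  It assumes the dnspython library; the Root-X address is a placeholder, since I am not pointing at any particular alternate root.

  # Ask two root servers the same question and report whether their
  # referrals agree - a mechanical check for the Case 1 situation.
  # Assumes the dnspython library; the Root-X address is a placeholder.
  import dns.message
  import dns.query

  ROOT_D = "198.41.0.4"   # a.root-servers.net, in the dominant root
  ROOT_X = "192.0.2.1"    # placeholder for an alternate root server

  def referral_for(tld, server):
      """Return the set of name server records a root hands back."""
      query = dns.message.make_query(tld, "NS")
      response = dns.query.udp(query, server, timeout=5)
      rrsets = response.answer + response.authority
      return {str(record) for rrset in rrsets for record in rrset}

  if __name__ == "__main__":
      for tld in ("com.", "org.", "net."):
          same = referral_for(tld, ROOT_D) == referral_for(tld, ROOT_X)
          print(tld, "consistent" if same else "DIFFERS")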

Back-Up That Registration Data!

All DNS registration data at the root and TLD levels must be backed up in easily readable form onto highly stable permanent media (e.g. CD-ROMs) using well publicized human-readable formats (such as XML).

Each weekly (or daily) version of this data must be kept for at least four weeks.

There must be multiple copies stored in multiple locations that are at least 250km apart.

These copies must be physically protected and must be periodically tested for readability.

Periodic tests must be made to ensure that these backups can be successfully reloaded.
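
To make the XML idea concrete, here is a minimal sketch in Python.  The record fields and file name are invented for illustration; they are not any registry's actual schema.

  # Write registration records as human-readable XML, then re-parse the
  # file to prove it can be reloaded.  The fields and file name are
  # invented for illustration; they are not a registry's schema.
  import xml.etree.ElementTree as ET

  def write_backup(records, path):
      """Write a list of registration records as XML."""
      root = ET.Element("registrations")
      for record in records:
          entry = ET.SubElement(root, "domain")
          for field, value in record.items():
              ET.SubElement(entry, field).text = value
      ET.ElementTree(root).write(path, encoding="utf-8",
                                 xml_declaration=True)

  def verify_backup(path):
      """Re-parse the backup to prove readability; return record count."""
      return len(ET.parse(path).getroot())

  if __name__ == "__main__":
      sample = [{"name": "example.com", "registrant": "Example Co",
                 "expires": "2002-06-01"}]
      write_backup(sample, "registry-backup.xml")
      print(verify_backup("registry-backup.xml"), "records readable")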

Early Warning System

The sooner we know that DNS is having trouble, the sooner we can start dealing with the situation.

For the World Wide Web there are companies, such as Keynote, that monitor the reachability and responsiveness of web sites.

A similar system ought to be deployed to monitor DNS roots and major top level domain (TLD) servers.

Continuous polling could be performed (at a relatively low data rate) from monitoring stations around the world to note whether the major servers are visible and responding in a timely fashion with reasonable answers.  The cost of establishing this system would be very low.
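
A minimal sketch of such a monitoring station might look like the following; it assumes the dnspython library, and the server list, the slowness threshold, and the polling interval are all assumptions.

  # Poll DNS servers at a low rate and flag those that are slow,
  # unreachable, or giving empty answers.  Assumes the dnspython
  # library; the thresholds and server list are assumptions.
  import time
  import dns.message
  import dns.query

  SERVERS = {"a.root-servers.net": "198.41.0.4"}  # extend as needed
  SLOW = 2.0        # seconds before a server counts as "slow"
  INTERVAL = 300    # poll every five minutes - a relatively low rate

  def probe(address):
      """Ask for the root NS set; return latency, or None on failure."""
      query = dns.message.make_query(".", "NS")
      start = time.monotonic()
      try:
          response = dns.query.udp(query, address, timeout=SLOW)
      except Exception:
          return None               # timed out or unreachable
      if not response.answer:
          return None               # responded, but not reasonably
      return time.monotonic() - start

  if __name__ == "__main__":
      while True:
          for name, address in SERVERS.items():
              latency = probe(address)
              print(name, "UNREACHABLE" if latency is None
                    else "%.3f seconds" % latency)
          time.sleep(INTERVAL)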

Pre-Written Filter Skeletons

Distributed denial of service (DDOS) attacks have been a major headache on the Internet for several years.  Anyone who has been on the receiving end of one of these can testify to the difficulty of working backwards through chains of often not-very-caring ISPs to track down the sources and smother them.  The IETF is working on technology to help do this backtracking.  But that technology isn't here today and, assuming that it is perfected, it will take quite some time - probably years - to deploy and obtain sufficient coverage.

There is something we can do in the meantime.  The number of machines that are DNS roots and TLD servers is relatively small and predictable.  Consequently, we could prepare a small set of router filter skeletons that an ISP can pull out of its book of procedures, modify as appropriate, and slap into its routers.  This could greatly reduce the time it takes to dampen a DDOS attack.

This would possibly give us a means to start reducing the impact of a DDOS attack within a period of minutes instead of a period of hours.
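
As a rough sketch of what such a skeleton might look like, here is a small Python generator.  The Cisco-style ACL text is illustrative only, and the addresses and access-list number are placeholders for whatever a given ISP actually uses.

  # Generate blocking rules for one attack source, protecting a known
  # list of DNS servers.  The Cisco-style ACL text is illustrative;
  # the addresses and the ACL number are placeholders.
  PROTECTED_SERVERS = ["198.41.0.4", "192.0.2.1"]

  def filter_skeleton(attack_source):
      """Emit ACL lines blocking one source from the DNS servers."""
      lines = []
      for server in PROTECTED_SERVERS:
          lines.append("access-list 150 deny udp host %s host %s eq 53"
                       % (attack_source, server))
      lines.append("access-list 150 permit ip any any")
      return lines

  if __name__ == "__main__":
      # During an incident, fill in the attack source and paste the
      # result into the router configuration.
      print("\n".join(filter_skeleton("203.0.113.99")))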

Pre-Planned Routing to Pre-Planned Fallback Positions

The Internet is not the Alamo; we can retreat when there is an attack.

When there are distributed denial of service attacks on DNS, it will sometimes be prudent to move some operations to new locations.  Sometimes this will mean picking up a block of IP addresses - a block that contains well known DNS servers - and moving it to a new point-of-attachment to the Internet.

This kind of shift will require an adjustment to the routing information used by the Internet.  While this is not an extremely difficult task it is one that is somewhat delicate and not infrequently requires a cooperative effort, particularly as ISPs tend to be suspicious of routing information received from sources outside of their own networks.  It would be prudent to preplan for this.  In particular it would be worthwhile to work with the ISP community to have some of the potential routing changes thought through in advance and written down in a book of emergency procedures.

Diversity of Server Software

One of the things that we have learned from the viruses and worms that have plagued our existence on the Internet is that diversity improves the resistance of the overall system.

However, we do not have a great deal of software diversity at the upper layers of the Domain Name System - to a very large degree the same software is used: BIND running on Unix (including BSD and Linux derivatives).  This means that many of these servers may be vulnerable to the same kind of attack.

We ought to consider whether it is prudent to maintain this degree of homogeneity or whether we should require that every DNS zone be served by a multiplicity of implementations on a diverse set of platforms.  This is not unlike the long established requirement of geographic diversity between servers.
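
One could even take a rough census of today's homogeneity.  Here is a minimal sketch that assumes the dnspython library and uses the old "version.bind" convention; many operators hide or falsify this value, so the results are hints at best, and the second address below is a placeholder.

  # Ask name servers what software they claim to run, via the old
  # "version.bind" CHAOS-class convention.  Assumes the dnspython
  # library; many servers hide or falsify this value, so treat the
  # answers as hints.  The second address is a placeholder.
  import dns.message
  import dns.query
  import dns.rdataclass
  import dns.rdatatype

  def server_version(address):
      """Return the software version a name server claims to run."""
      query = dns.message.make_query(
          "version.bind.", dns.rdatatype.TXT, dns.rdataclass.CH)
      try:
          response = dns.query.udp(query, address, timeout=3)
      except Exception:
          return "(no reply)"
      for rrset in response.answer:
          return " ".join(str(record) for record in rrset)
      return "(refused or hidden)"

  if __name__ == "__main__":
      for address in ("198.41.0.4", "192.0.2.1"):
          print(address, "->", server_version(address))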


© 2001 Karl Auerbach, All Rights Reserved.
Updated: Sunday, October 14, 2001 06:57:10 AM