Time on the internet should be uniform and global
If it is 1:18 AM GMT in Paris, then it also ought to be 1:18 AM GMT in Singapore.
You might be a dog on the internet. But woe to you if you try to make up your own version of the time and date and foist it off on others.
ICANN and the IAB claim that the internet's Domain Name System (DNS) requires a single catholic root.
If that is true, then it is even more imperative that there be one single source of internet time.
Dong! Wrong answer.
The time protocol used on the internet, NTP (RFC 1305), not only uses, but actually encourages, multiple time sources. These sources can all claim to be fully authoritative. Even under these conditions false tickers - bad clocks - are properly detected and rejected.
The domain name system is in serious need of renovation. Some of that renovation is of a technical nature - these are things like expanding the UDP packet size to a modern value, expanding the "label" size to better accommodate international scripts, etc.
Among the most important of these technical changes would be a relaxation of the perceived need for there to be a single catholic root - which represents a single point of attack and single point of failure.
Some of the risk has been ameliorated by the deployment of anycast. But replica servers - which is what anycast is - suffer from the old problem of GIGO (Garbage-In/Garbage-Out): If an anycast cluster becomes contaminated with bad data then all the members of that cluster are equally contaminated.
What DNS needs is a means, such as we find in the internet's time protocol, NTP, to use multiple, even inconsistent, sources of data.
This is quite possible. NTP does this by applying a number of sanity checks and excluding data from sources that appear insane. DNS could also use heuristics, filters, and cross checks to accept data that is good and reject data that is bad.
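To make the idea concrete, here is a toy sketch of rejecting false tickers. This is not NTP's actual selection and clustering algorithm (RFC 1305 describes the real thing); it simply discards any source whose reported offset strays too far from the majority's median. The server names and the tolerance value are hypothetical.

```python
def reject_falsetickers(offsets, tolerance=0.5):
    """Given clock offsets (in seconds) reported by several servers,
    keep only those within `tolerance` of the median offset.
    A crude stand-in for NTP's real sanity checks."""
    ordered = sorted(offsets.values())
    median = ordered[len(ordered) // 2]
    return {server: off for server, off in offsets.items()
            if abs(off - median) <= tolerance}

# Hypothetical reports from four time servers; one is wildly wrong.
reports = {"a.time.example": 0.012,
           "b.time.example": -0.034,
           "c.time.example": 312.7,   # the false ticker
           "d.time.example": 0.051}
survivors = reject_falsetickers(reports)
print(sorted(survivors))
```

A DNS analogue would do the same kind of thing with zone data: query several roots, compare answers, and drop the outliers.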
Is there anything that prevents us from moving towards a more robust name system? There's nothing technical (beyond the mammoth size of the task of upgrading the existing installed DNS base). The biggest obstacle is psychological and bureaucratic. Through techno-self-hypnotism and the anti-innovative, self-preserving assertions of bodies such as the US Department of Commerce and its progeny, ICANN, many of us have been led to believe that today's internet is the best of all possible internets and the DNS is the best of all possible naming systems.
If you take a moment to examine the DNS root zone file, one of the first things you notice is how small it is. Apply a bit of text compression and the result is about 15K bytes - smaller than most of the cutesy little icons that decorate web pages.
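The compression claim is easy to check for yourself. The sketch below uses synthetic, repetitive delegation records standing in for a real root zone file (which you would fetch yourself); highly repetitive NS records are exactly why the real file compresses so well.

```python
import gzip

# Hypothetical stand-in for the root zone: repetitive delegation
# records, one per pretend TLD.
lines = [f"tld{i}. 172800 IN NS ns{i}.example." for i in range(2000)]
raw = "\n".join(lines).encode()
packed = gzip.compress(raw)
print(f"{len(raw)} bytes raw -> {len(packed)} bytes compressed")
```

Run the same two calls against an actual copy of the root zone and you get a payload small enough to ship through almost any channel.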
We don't need DNS root servers at all - copies of the DNS root zone file could be easily disseminated by P2P networks, by IP multicast, and even be published in newspapers (in text or as a 2-D barcode).
And each TLD could publish signatures - again using any number of mechanisms, including out-of-band channels such as newspapers - that would allow one to winnow the good root zone files from the false ones.
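The winnowing step might look like the sketch below. A real deployment would use public-key signatures rather than a bare hash (DNSSEC works roughly this way), but a digest obtained through an independent channel keeps the illustration short. The zone contents and names here are hypothetical.

```python
import hashlib

def winnow(candidates, published_digest):
    """Keep only the candidate zone files whose SHA-256 digest matches
    the digest obtained out of band (a newspaper, a barcode, etc.)."""
    return [data for data in candidates
            if hashlib.sha256(data).hexdigest() == published_digest]

good = b". 86400 IN NS a.root.example.\n"
bad  = b". 86400 IN NS evil.example.\n"
digest = hashlib.sha256(good).hexdigest()   # published out of band
survivors = winnow([good, bad], digest)
```

The point is that the verification channel and the distribution channel need not be the same - which is precisely what frees the data from needing a single authoritative source.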
The internet's time system is extremely robust because everyone can pick the time servers they want to follow and the protocol design weeds out the sources that drift or fail (and such drifting and failing does occur in real life, even in the absence of malice).
Today's DNS is like the Douglas fir tree - strong but brittle: it will stand firm against even the strongest of forces until it suddenly snaps, breaks, and falls to the ground in pieces - the result is firewood. NTP is more like the laurel tree - flexible and even if it falls in a storm its roots can sustain the fallen trunk and it can continue to live and grow.
One of the great dangers of ICANN and our efforts towards internet governance is the ossification of technology and, more importantly, the ossification of our creative powers through the loss of our ability to see better ways of solving problems.

Posted by karl at February 19, 2005 1:17 AM