February 18, 2004

My Three Contributions to the ITU Workshop On Internet Governance

Next week the International Telecommunication Union (ITU) is holding a Workshop on Internet Governance.

I will be attending as an invited expert.  It looks like there will be lots of interesting people there.

I have submitted three written contributions to the workshop.  The list of contributions may be seen on the ITU's website at: http://www.itu.int/osg/spu/forum/intgov04/contributions.html

My three contributions are these:

Update (March 7, 2004): See 000085.html for a link to my presentation at the meeting.


Submission to the Workshop on Internet Governance
26-27 February 2004

Deconstructing Internet Governance

Author: Karl Auerbach, former North American publicly elected Director, ICANN

In my final report to ICANN[1] I suggested this definition of the internet:

    The internet is the open system that carries IP packets from source IP addresses to destination IP addresses.

This proposed definition of the internet focuses on the flow of IP packets between end points designated by IP addresses.  IP addresses, and the mechanisms that guide a packet to its intended destination as it flows across the intricate spider web of the internet, are topics that many consider arcane and comprehensible only to a few technologists.  Yet in many regards, the issues of IP addresses and the routing of packets are far more important to the public and to nations than the domain name system.

My proposed definition is narrow.  It regards things such as Voice over IP (VOIP) and the World Wide Web as applications that are layered on top of the internet but which themselves are not necessarily part of the internet.  I know that this distinction will disturb many people.  Let me therefore mention that my definition does not exclude these applications from governance.  Rather, I believe that by clearly articulating the linkages and dependencies between things like VOIP and the base internet we will be able to design more appropriate governance structures.

Under my proposed definition, the Domain Name System is an application, albeit a critical one, that is layered upon the base internet.  It is my sense that we ought to deal with DNS as a matter distinct and separate from the system of packet routing and delivery that I have defined as the base internet.

The End-To-End Principle and The Risk of Internet Fragmentation

You may have heard of the "end-to-end principle"[2].  This principle is implicit in my proposed definition of the internet.

The end-to-end principle is one of the primary reasons why the internet has been so successful.   Failure to maintain the end-to-end principle could lead to several negative consequences:  Without a firm commitment to the end-to-end principle, the internet could evolve into separate networks that touch one another only through guarded portals.  Without the end-to-end principle innovation on the net would be more expensive and occur more slowly.  Promising technologies such as Voice over IP could be crippled or stillborn.  Without the end-to-end principle the internet could easily stagnate.

Is the end-to-end principle at risk?  The answer is "yes".

We have already begun to observe the first symptoms of fragmentation of the internet.[3]

Very understandable and legitimate concerns about unsolicited bulk e-mail ("spam"), the distribution of unsavory material, the protection of children, and the protection of cultural values have fueled the creation of what amount to protected gates that today control the passage of network traffic.  These portals could harden and not only reduce the value of the worldwide internet but also create opportunities for those in charge of the portals to take advantage of their privileged position either for profit or political gain.  A good example of this is Verisign's "SiteFinder",[4] a recent attempt to profit by leveraging Verisign's highly privileged position over the .com and .net top level domains.

The IP address allocation system has driven many people and companies to deploy Network Address Translation (NAT) devices.  These devices break the end-to-end principle.  NATs have already begun to impede the deployment of Voice over IP products.

The Internet As A Multifaceted System

Let me return to my original purpose - to inquire how our approach to internet governance may be informed through a clear understanding of what the internet is.

Let me submit the following proposition:  There is no single thing called "the internet".  Rather, I submit that the internet has several distinct aspects.  Let me further suggest that these aspects may each be governed separately with a mode of governance most appropriate to its particular circumstances.

What I have suggested above is a departure from the current practice in which governance of multiple aspects of the internet is merged into one body.  It is my strongly held opinion that the division of internet governance into distinct bodies is more than merely prudent; I believe that it is a necessity.

What are the distinct aspects of the internet that ought to be considered as subjects of governance?  Here is my list:

  1. First, a system of IP address allocation that meshes well with the IP packet routing systems.

    This function, to date, has been handled with relatively little controversy by various "Regional IP Registries" (RIRs).  However, I anticipate that questions of fairness of IP address allocation, as well as quality of service demands for network services such as VOIP will begin to inject public-interest concerns into what has been a largely technical area.

  2. Second, a system of inter-carrier/inter-ISP traffic exchange in which end users can obtain usable assurances not merely that packets can actually flow between senders and receivers but also that designated traffic flows will achieve specified levels of service.

    Today the internet is composed of carriers and ISPs who are often jealous and suspicious of one another.  However, it is only by virtue of the adherence to at least a minimal set of shared practices that IP packets can find their way across the internet, through a sequence of carriers and ISPs, from senders to receivers.  The dissemination and processing of information regarding the routing of IP packets is a complex technical matter.  Overlaying that technical difficulty is the reluctance of carriers and ISPs to disclose how they connect to one another and under what terms.

    It is not unusual for large portions of the net to be unreachable or invisible at any given moment.  Today most of these events are transitory (on a timescale ranging from minutes to a few hours.)  With the increasing use of potentially permanent filtering, selective reachability may become the norm rather than the exception; the scope of the internet will begin to vary depending on the place from whence one looks.

    New uses of the internet, such as for Voice-over-IP (VOIP) will require adequate end-to-end service levels.  Without adequate service, applications such as VOIP may find it difficult to expand beyond local scope or be treated as anything but a toy.

    The notion that internet packet routing, inter-ISP peering and transit, and end-to-end service levels are matters for governance may be strongly resisted by carriers and ISPs.  It is very important to initiate a dialog with that community.

  3. Third, a system to allocate protocol numbers and other similar identifiers.  This has been, and will remain, an essentially clerical function performed on behalf of standards bodies.  (I do not believe that this aspect of the internet is in need of governance; however, the legacy of ICANN and IANA has placed this aspect into the realm of internet things that are expected to be governed.)

  4. Fourth, the responsible and accountable operation of the upper layers of the DNS hierarchy including oversight, on behalf of the community of internet users, of a suite of Domain Name System (DNS) root servers.

  5. Fifth,  the management of the DNS root zone file.  This function includes the clerical task of preparing the root zone file for distribution to the root servers.  This function also includes the discretionary task of developing and applying policies to determine which new top-level domains will be allowed entry into the root zone.  (This latter function could conceivably be split so that national and "country code" top level domains are handled separately from other top level domains.)

I will return to those aspects of governance in my next submission and suggest how appropriate structures of governance might be designed for each.

Earlier in this note I indicated that I believe that layered upon the internet are several important applications.  These include, but are certainly not limited to, the World Wide Web, Voice over IP, and Instant Messaging.  It is my suggestion that each of these applications, to the extent that governance is appropriate at all (and I strongly urge that in many cases there is no need for governance), should be handled by its own distinct body of governance.

A Note of Concern

The internet is rapidly becoming a public utility.  People and entities are basing economic plans, products and services, and, increasingly, matters involving health and safety on the internet.  As part of that evolution, I believe that not only do our engineering practices have to evolve[5] but also that we need to consider how to ensure that the net's infrastructure remains stable and dependable into the future without badly compromising the ability of the still nascent net to evolve.


Notes:

[1] My final report to ICANN is available online at http://www.cavebear.com/archive/rw/senate-july-31-2003.htm.  The referenced material is found towards the end of that document.

[2] Saltzer, Reed, Clark, "End-to-End Arguments in System Design", 1981 available online at http://www.reed.com/Papers/EndtoEnd.html

[3] See my note Is the Internet Dying? at 000051.html

[4] See "IAB Commentary: Architectural Concerns on the use of DNS Wildcards", available online at http://www.iab.org/documents/docs/2003-09-20-dns-wildcards.html

[5] See "From Barnstorming to Boeing - Transforming the Internet Into a Lifeline Utility" slides at http://www.cavebear.com/archive/rw/Barnstorming-to-Boeing.ppt and speakers notes at http://www.cavebear.com/archive/rw/Barnstorming-to-Boeing.pdf


Submission to the Workshop on Internet Governance
26-27 February 2004

Governing the Internet, A Functional Approach

Author: Karl Auerbach, former North American publicly elected Director, ICANN

This paper is based on my previous submission to this workshop, a paper entitled Deconstructing Internet Governance.  In that paper I suggested that we may find it useful to consider governance of the internet not as a single undifferentiated issue, but rather as a collection of distinct and separate functions amenable to different treatment according to the particular circumstances of each.

I would like to suggest that we approach the question of internet governance by adopting two proven rules of successful design:

  • The first of these rules comes from Louis Sullivan, one of the great architects of the 19th century.  Sullivan  articulated the principle that "form follows function."

  • The second of these rules comes from Niklaus Wirth.  Wirth brought to the art of computer programming the concept of information hiding and modularity.  Wirth articulated the idea that in a well-designed program each module must cleanly encapsulate its mechanisms and its data, and that the relationships and interactions between modules must be precisely identified.  In addition, the whole structure must be organized to minimize the information and control flows between modules.

The thesis of this paper is that an appropriate way to structure internet governance is to create distinct bodies that each deal only with a single administrative or policymaking task.

This paper begins by looking at those aspects of the internet that require governance.  Those aspects are examined to ascertain which functions are structured and have little need for the exercise of discretion and which functions require greater freedom of decision and action.  From this a modular structure of governance bodies is articulated.

Pitfalls and Myths

Our first experiment in internet governance, ICANN, has shown us that we need to clearly comprehend the nature of what we are trying to do.  In this section I list a few lessons that we have learned from ICANN.

First caveat: It is worthwhile to take note of the problems that have occurred within ICANN as a result of the concepts of "consensus" and "stakeholder".  Both of these are very vague terms and readily lead to situations in which power struggles occur not through reasoned debate but through manipulation and selection.  Any new structures of governance that are created ought to use counted systems of votes rather than "consensus" and should allow participants to freely associate with points of view rather than be pre-classified as members of this or that group of "stakeholders".

Second caveat: If there is anything that we have learned from ICANN it is that electronic communications undiluted by face-to-face meetings are poor vehicles for debate and discussion.  Without adequate face-to-face contact the mutual respect of participants for one another does not mature, making it far too easy for participants to be dismissive or rude.  Moreover the nature of electronic communications amplifies small differences and makes compromise difficult.  Occasional face-to-face interaction is time consuming and expensive, but necessary.  I am of the firm belief that we must not base internet governance exclusively on electronic communications.  Real gatherings in real places are required.  Otherwise we may see yet another round of failed attempts at internet governance.

Third caveat:  The phrase "public private partnership" has often been used in conjunction with internet governance.  I have strong personal reservations about this concept because it implies the transfer of governmental powers (often ultra vires powers) into the hands of private actors without simultaneously imposing the obligations of due process, oversight, and accountability that are hallmarks of modern governments.  In addition, no matter whether a governance body is private, public, or a blend, its role must be carefully defined and constrained lest it be captured by those it purports to oversee or by others who find the body to be a means to promote a private agenda.  We have seen all of these problems arise within ICANN.

Fourth caveat: There is a myth that ICANN engages only in technical matters.  In actuality, ICANN does nearly nothing that can claim to have more than the most tenuous relationship to technical issues.  ICANN has spent its existence promoting the agenda of certain selected commercial interests, primarily interests found in the United States and Europe, and has avoided engaging in matters that actually deal with, much less promote, the technical stability of the internet.  There are those who say that ICANN "administers, coordinates, and allocates IP addresses".  ICANN does not do this.  There are those who say that ICANN "administers and coordinates the root server system."  ICANN does not do this.  There are those who claim that ICANN's job of "promoting competition within the generic top-level domain space" is a matter of technical coordination.  As a technologist I find that disingenuous.  And ICANN's creation of a worldwide domain name dispute policy has no technical component whatsoever and represents a clear case of supranational lawmaking on matters of economic and business policy.

Fifth caveat:  Governance of the internet does, in fact, require us to deal with matters that go beyond the merely technical.  This is to say that governance of the internet is governance in the whole meaning of that word.  Governance is an exercise in plenary power.  We should not delude ourselves into believing that the presence of the word "internet" in any way reduces or eliminates the risks and dangers of such power.  It is critical that the power of internet governance be limited and constrained in exactly the same ways that any other governmental power is limited and constrained.  Governance should be structured to divide power, not concentrate it.  Tensions must be built into the system so that ambition counters ambition.  Internet governance structures must be subject to oversight and review; they must be open to all who feel affected by the matters being governed.  Internet governance structures must be built upon and be responsible to the community of internet users with no more than one level of representation between the members of that community and those who have been entrusted with using these powers of governance.

Sixth caveat: The issues that face the internet and the world today reflect a change in the concept of national sovereignty.  We are observing the development of a system, the internet, that erodes national sovereignty.  Those lost powers do not disappear; they are flowing into private hands, into Quasi Non-Governmental Organizations (Quangos), and into institutions of internet governance.  We have much to learn.  We will make many mistakes.  We may find it useful at this time to encourage local and national experimentation in order to learn what works and what does not before we try to create new worldwide institutions based on untried assumptions and untried mechanisms.

A disclaimer:  This paper does not directly address the always present, but rarely asked question: Who is watching the watchman?  By this I mean, in whom or in what is the ultimate power over the internet vested?  My own personal feeling is that this power should be vested in the community of internet users as individual persons.  There are others who believe that the community of internet users is best represented via their respective governments.  And there are yet others who argue that only "stakeholders" (whatever that might mean) ought to have authority.  In this paper I deal with those matters of the internet that require oversight and with the nature of the body that could exercise that oversight.  In this paper I am not directly dealing with the question of who or what will populate those oversight bodies or to whom those bodies may be called to make an account of their actions and behavior.  Although I am not dealing with this question in this paper, I believe that this question is a significant one that should be squarely addressed.

Tailoring the Mode of Governance To The Matter Needing To Be Governed

This paper emphasizes the distinction between functions that involve the exercise of discretion and those that do not.  Functions that involve only limited or constrained exercises of discretion will require relatively simple oversight and governance.  Functions that involve significant amounts of discretionary freedom will require more complex mechanisms of oversight and governance.

For those functions in which discretion is extremely limited and the consequences of abuse of that discretion are limited, oversight might be achieved simply by a publish-then-challenge system.  Notice of actions would be published after the fact.  For some period after publication those who feel that the decision is wrong could come forth and challenge that decision.

For those functions in which discretion does exist but its scope is limited, a notice-and-comment system might be appropriate.  Notice of a proposed decision would be published to the public in a well known and readily accessible place (for example, on the World Wide Web), comments would be invited, after a reasonable interval those comments would be considered, and then a final decision based on those comments would be published.  Notice-and-comment systems usually include some external oversight and review mechanism to handle those extraordinary situations in which the exercise of discretion is arbitrary or capricious, or in which the discretion is exercised without adherence to proper procedures.

In both publish-then-challenge and notice-and-comment it is necessary that challenges and comments be considered without prejudice and the decision maker must be able to demonstrate that such submissions have actually been reviewed and considered.

For those functions that require a greater exercise of discretion or where the public impact is significant, more intricate systems of governance would be required.  For example, the creation of policy regarding the Domain Name System requires that all potentially affected parties have the right to participate in the debate and policy decision as peers with other parties.

What Are The Aspects of the Internet That Require Governance?

In my prior paper I described five aspects of the internet that might be subjects of governance:

  1. A system of IP address allocation that meshes well with the IP packet routing systems.  (Note: in this paper, I am referring only to unicast IP addresses.  There are other forms of IP addresses, such as multicast IP addresses, that are outside the scope of this paper.)

  2. A system of inter-carrier/inter-ISP traffic exchange in which end users can obtain usable assurances that designated traffic flows will achieve specified levels of service.  (Note that I am using the word "assurance".  I use this word to mean something less than a hard "guarantee.")

  3. A system of allocation of protocol numbers and other similar identifiers.

  4. The responsible and accountable oversight of a suite of Domain Name System (DNS) root servers.

  5. The management of the DNS root zone file, including the clerical task of preparing the root zone file for distribution to the root servers and the task of developing and applying policies to determine which new top-level domains will be allowed entry into the root zone.

Defining Specific Functions That Require Governance

In this section I examine the five aspects mentioned above and dissect them to reveal the specific functions or tasks that need to be performed.  For each of these functions I will examine the degree of discretion that is required and suggest how that function might best be supervised.

IP Address Allocation

  1. Formulation of policies for IP address allocation

    Evaluation: This is a job that appears at first glance to be highly arcane and technical.  However, the decisions that are made have broad impact not only in a technical sense but also in an economic sense.  In many regards, countries and institutions that can obtain IP address allocations have a stronger competitive position vis-à-vis other countries and institutions that cannot obtain such allocations.  Many of the decisions regarding IP address allocation are made today with only implicit assumptions about the impact of those decisions, and that impact is often measured only with regard to its effects on ISPs and vendors of packet routing equipment rather than on the general community of internet users.

    It is likely that if all ramifications were considered the cumulative effect of IP address allocation policies could dwarf those of policies regarding the domain name system.

    As I mentioned in my prior paper, the matter of IP address allocation policy is unlikely to remain insulated from public debate.  It is likely that the creation of IP address allocation policies will require more governance oversight in the future than it does today.

    Although the creation of IP address policy has many social and economic side effects, it is likely that few apart from ISPs, large consumers of IP address space, or router manufacturers would take the time and effort to be involved.  Consequently, the policymaking apparatus presently used by the Regional IP Address Registries (RIRs such as APNIC, ARIN, RIPE, LACNIC) could be continued and left undisturbed until such time as difficulties arise.  Such an approach, however, implies that this matter would have to be reviewed periodically by some body in order to ascertain whether such difficulties have in fact occurred and whether some more intricate system of governance is required.

    Costs: Today the cost of policymaking by the RIRs is covered through fees charged by those to whom IP addresses are allocated.

    Conclusion: Leave this function governed by the existing RIR mechanisms but come back and review the situation periodically (e.g., every year or two).

  2. Allocation of large IPv4 and IPv6 address blocks

    Evaluation: This is the mechanical job of putting IP address allocation policies into effect.  Today there is a multi-tier system.  At the top is IANA which makes very large allocations to the Regional IP Address Registries (RIRs).  The RIRs, in turn, allocate to large users (ISPs, countries, educational bodies, corporations).  Further levels of allocation occur as necessary.  Occasionally some allocations are made directly by IANA without going through the RIR hierarchy (for example, ICANN has allocated address space to itself without going through the RIR system.)

    Depending on the specificity of the overall IP allocation policies, the task of administering those policies can be relatively predictable with only a low degree of administrative discretion.  Because of the scope of impact of address decisions it might be tempting to adopt a notice-and-comment system.  However, address allocation is often a matter in which time is of the essence and for that reason a publish-then-challenge system might be most appropriate.

    Costs: Today the cost of allocations by the RIRs is covered through fees charged by those to whom IP addresses are allocated.

    Conclusion: Leave this function governed by the existing RIR mechanisms.

  3. Maintenance of the top level of the in-addr.arpa (and its IPv6 equivalent) DNS zone

    Evaluation: In-addr.arpa is used for address-to-name mapping.  (A brief illustrative sketch follows this list.)  At every tier in the hierarchy of IP address allocation it is necessary to build appropriate entries into the in-addr.arpa DNS hierarchy.  This job is usually done by the body or person who is doing the address delegation.  At the upper levels this is done by the RIRs and at the lower levels by the entities or people to whom address blocks have been delegated.

    This task is essentially clerical.  A publish-then-challenge system would be appropriate for allocations by the RIRs.  No governance system ought to be imposed upon the lower tiers.

    Because this task is entwined with the allocation of address blocks by the RIRs it may be convenient to bundle this task with that of address allocation by the RIRs.

    Costs: Today the cost of maintaining the upper tier of in-addr.arpa is borne by the RIRs through the fees charged to those to whom IP addresses are allocated.

    Conclusion: Leave this function governed by the existing RIR mechanisms.
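As an aside, the following minimal sketch, written in Python using only its standard ipaddress module, illustrates how mechanical the two tasks above are: subdividing a large address block for delegation to the next tier, and deriving the in-addr.arpa name that corresponds to an address.  The prefixes shown are illustrative examples of my own choosing, not actual allocations, and the sketch does not depict any RIR's actual tools.

    # Minimal illustrative sketch (not an actual RIR tool): carve a large
    # address block into smaller delegations and derive the matching
    # in-addr.arpa names, using only Python's standard ipaddress module.
    # The prefixes below are documentation-style examples, not real allocations.
    import ipaddress

    # A hypothetical "large" block received from the upper tier of the hierarchy.
    large_block = ipaddress.ip_network("198.51.100.0/24")

    # Sub-allocate it into four /26 blocks, as an upper tier might delegate
    # portions of its space to the tier below.
    for sub_block in large_block.subnets(new_prefix=26):
        print("delegate", sub_block)

    # Every address also has a corresponding name in the in-addr.arpa zone,
    # which is what the address-to-name (reverse) mapping task maintains.
    addr = ipaddress.ip_address("198.51.100.7")
    print(addr.reverse_pointer)   # prints: 7.100.51.198.in-addr.arpa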

Availability of Usable End-to-End Routing and Levels of Service

This is an area that has not been widely discussed.  As I mentioned in my previous paper the internet is composed of carriers and ISPs who are often jealous and suspicious of one another.  It is to be expected that ISPs and carriers will oppose any attempt to impose governance on the matters of packet routing, packet filtering, and inter-carrier recognition of end-to-end level-of-service obligations.

Because this is a new area, and one that is likely to cause a strong reaction from carriers and ISPs, I will not try to give more detailed descriptions of the tasks to be performed.  However, it is clear to me that there will be areas that involve significant exercise of discretion and require the balancing of many competing technical, economic, and social issues.  In addition, the impact of these balances will be broad and could easily determine the fate of entire industries, such as Voice-over-IP.  For this reason, I anticipate that the governance structures that will be required here will need to be particularly intricate and could end up resembling the structure that supervises the connectivity and end-to-end quality of the world's voice telephone networks.

Allocation of Protocol Numbers and Other Similar Identifiers

  1. Assign and record protocol numbers and other such values as specified by standards bodies

    Evaluation: For standards issued by the IETF, performance of this job requires a loose coordination with the IETF in order to properly process the "IANA Considerations" of those RFCs that contain them and to handle those situations in which such RFC guidance is absent.  Similar coordination is to be expected with regard to documents issued by other standards bodies.

    This function is essentially clerical and performed on behalf of standards bodies, mainly the IETF.  The review systems that are intrinsic to standards organizations are an appropriate mode of governance.

    Costs: The total cost of this function is very low - it amounts to at most a job for a single person.  The costs of this system ought to be borne by those standards bodies that use it, most notably the IETF.

    Conclusion: The IETF and other standards bodies should subsume this task at their own expense.

Responsible and Accountable Oversight of DNS Root Servers

  1. Oversight of Root Server operations

    Body Title: Root Services Oversight Board

    Evaluation:  In many respects we should honor the cliché "if it isn't broken, don't fix it".  The root server system runs very well today and, if we were looking only at today, it clearly is not broken.  The problem, however, is not today but tomorrow.  The issue, as I see it, is how to create a layer of oversight that gives the community of internet users firm confidence in the future while otherwise having minimal impact on a system that already works.

    DNS root servers must be operated with care and skill.  A kind of mysticism has grown up around the root server operators, suggesting that they are possessed of a unique wizardry and could not be replaced.  In truth, however, it is not a job that is significantly different in quality or kind from that of operating any high-availability worldwide database service.  Expertise is required, but that expertise is not unique.  There are many people and groups around the world who would be competent to run DNS root servers and who would be both willing and able to enter into stringent operations contracts in return for a reasonable operations fee.

    Today the DNS root servers are coordinated via the loose federation of entities and individuals listed at http://root-servers.org/.  These entities and individuals have done a superlative job.  (A small illustrative query against one of these servers appears at the end of this section.)

    RFC 2870[1] sets out a reasonable set of operational requirements for DNS root servers.  But that RFC is neither binding nor complete.

    Today there is an oversight vacuum.  There is no body that is establishing standards of operation.  Nor is there any body that can require adherence to such standards were such standards to exist.  The only oversight that does exist is through local corporate boards, university trustees, and the United States military.  These local oversight bodies are subject to competing goals, changeable priorities, and shifting resources.  Few of these bodies are obligated to continue to offer DNS root server services; few of these bodies are obligated to offer such services on a fair and impartial basis.  None of these bodies is in any way accountable to the community of internet users or the governments that represent them.  This vacuum suggests that it will be necessary to create an oversight body or to invest that function into an existing body.

    ICANN has explicitly disclaimed that it has or desires such an oversight role.

    The Root Services Oversight Board proposed here would fill that vacuum.  The Board would enter into contracts with those who actually operate root servers.  Such contracts would specify technical standards, service levels, and other obligations regarding security, physical infrastructure, and disaster recovery plans.  I would anticipate such a contract would for the most part reify and extend RFC 2870.

    There are real costs that the root server operators incur.  There is no easily available accounting to indicate the level of these costs.[2]  However those costs cumulate into the millions of dollars (USD) per year.  Any system of governance ought to consider how these costs are to be borne.  The stability of this critical part of internet infrastructure should not depend on voluntary donations of time, equipment, offices, connectivity, money, or people.

    As for the type of governance that should be exercised by that Root Services Oversight Board:  There is a fair degree of discretion regarding the establishment of the service levels that the root operators must meet.  However, the issues are largely technical.  A notice-and-comment system would be appropriate.

    Costs: The Root Services Oversight Board would incur meeting costs while it established operating standards.  Once those standards have been established, the ongoing costs would be relatively small.  However, somewhere, either via the Board or via the root server operators themselves, the cost of actual operations will have to be recognized and handled.

    Conclusion: A supervisory body, what I call the Root Services Oversight Board, must be created or designated.  This body should not simply be a reincarnation of ICANN's Root Server Advisory Committee; it should instead contain elements not affiliated with incumbent operators that fully represent the interest of the community in the stable provision of root server services into the future.

    This Root Services Oversight Board must create operations standards and enter into clearly binding legal agreements, not "Memorandums of Understanding" of questionable enforceability, with the operators of root servers.  The creation of operations standards can be controlled through a notice-and-comment system.  This supervisory body must itself be accountable to governments and the community of internet users so that it may itself be compelled to take its oversight duties and powers seriously.
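To give a concrete sense of the service that such a Board would oversee, here is a small illustrative query, written in Python, that asks a root server for the NS records of the root zone itself.  It assumes the third-party dnspython package (an assumption on my part; nothing above requires it), and the server address used is simply one of the published root server addresses; any of the servers listed at root-servers.org would do.

    # Illustrative only: ask a DNS root server for the NS records of the
    # root zone (".").  Requires the third-party "dnspython" package.
    import dns.message
    import dns.query

    ROOT_SERVER = "198.41.0.4"   # one published root server address, used here as an example

    query = dns.message.make_query(".", "NS")             # ask for the root zone's NS records
    response = dns.query.udp(query, ROOT_SERVER, timeout=5)

    # The answer section lists the name servers that serve the root zone.
    for rrset in response.answer:
        print(rrset.to_text())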

DNS Root Zone Management

DNS root zone management consists of a number of distinct tasks.  Many of these are clerical tasks in which instructions received from other bodies are processed according to predefined procedures.  I have structured this section to distinguish between those tasks that are clerical and those that involve the balancing of equities.  My conception is that the latter give instructions to the former.

In this section I posit the existence of several entities or roles:

  • A DNS Root Zone Administrator to oversee the periodic preparation and dissemination of a DNS root zone file.

  • A ccTLD Policy Organization to  promulgate appropriate procedures, perhaps derived from those in RFC1591, for the DNS Root Zone Administrator with respect to ccTLDs.  This ccTLD Policy Organization would issue directives to the Root Administrator to create or remove ccTLDs.

  • A gTLD Policy Organization to promulgate procedures for the DNS Root Zone Administrator with respect to gTLDs.  This gTLD Policy Organization would issue directives to the Root Administrator to create or remove gTLDs.

  1. Maintain and publish the zone file that defines the contents of the DNS root

    Body Title: Root Zone Administrator

    Evaluation: The root zone file is a relatively small text file that lists the names of each top level domain as well as the IP addresses of the name servers for each of those domains.

    Root Zone Administration primarily involves maintenance in that file of the Name Server (NS) records that indicate which DNS servers handle a particular top level domain (TLD). This is essentially a clerical job to be performed according to instructions from the entity or person who has the authority over each TLD.

    The Root Zone Administrator will receive instructions from the various TLD Policy Organizations regarding who or what is the entity or person who has the authority over each TLD.

    On occasion an entirely new TLD would be added.  (It is possible that an old TLD might be removed, but that has never occurred.)  This is also a clerical job that would be performed according to instructions from the particular Policy Organization that has the duty of deciding such things.

    The root zone file affects the entire internet.  Preparation of that file, while not a complex task and not one involving large amounts of data, is one that does require care and a large amount of double and cross checking to protect against human, procedural, or technical errors.

    The data in the root zone file is a highly focused point of control over the internet.  For example, a small change could cause an entire country to disappear from the internet or cause its internet presence to be transferred.  This sensitivity requires that the clerical job be well protected from manipulation and be made immune to political or economic pressures.

    At the present time this job is done daily by Verisign.  ICANN has indicated it would like to take over this job.

    The Root Zone Administrator performs what is essentially a highly sensitive clerical task.  The governance that is required for this task is that of ensuring that it is performed by competent people and according to well defined and appropriate procedures.  Those procedures should be published so that those who wish to suggest improvements may do so.

    It could make sense to let Verisign continue this job as they have proven competence in this regard.  Or ICANN, which has never itself had such an operational role, could take it over.  Or the job could be moved to some other body.  In any case, the question remains: what entity is actually to select and appoint the Root Zone Administrator, and who pays the costs?  To date this has been handled via Memorandums of Understanding, Cooperative Agreements, and simple purchase orders from the United States Department of Commerce.  Because of the latent power that is implicit in this relationship, there is a legitimate concern among nations that this system gives the United States unwarranted control.

    Costs: The root zone file contains roughly 250 names.  The name server data associated with each of these names is updated occasionally - once per year is a reasonable metric.  The rate at which new TLDs are added is virtually nil (although I hope this situation changes.)  The amount of effort to handle these updates aggregates to no more than a few hours per week, plus whatever overhead is required to ensure that the entity requesting the change is, in fact, who it purports to be.  Software tools exist to do much of the content and form checking.  (A toy sketch of such a check appears after item 5 below.)

    Conclusion: Some internationally accepted body must be charged to establish and fund a Root Zone Administrator.  This job is largely clerical and lacks discretionary authority; errors or malfeasance would be noticed fairly quickly by the internet community.  For this reason, there is no need to include a large public presence in the oversight body.  A publish-then-challenge type of process would be appropriate.

  2. To define and apply the rules to establish, remove, or transfer ccTLDs

    Body Title: ccTLD Policy Organization

    The ccTLD PO focuses on the needs and issues of concern to those who provide and use ccTLDs.

    Under ICANN this has proven to be a troublesome area.  There is a degree of uncertainty regarding the nature of ccTLDs: are they aspects of a sovereign country or are they simply database keys that coincidentally reflect (somewhat inaccurately) the existence of countries?

    The approach taken here is to establish a governance body that will wrestle with the conception of ccTLDs.  This body will define the rules to be used to recognize the appropriate administrator for a ccTLD as well as other rules regarding the maintenance and transfer of ccTLDs.  Because it is anticipated that the application of these rules will require a special degree of sensitivity, this body will also administer its own rules.

    The output of this body will be instructions to the Root Zone Administrator that identify who or what is the entity that has control of each ccTLD.

    This Policy Organization will be responsible for the creation of policies to decide when new ccTLDs should be created, old ones removed, and how transfers of control of ccTLDs should be accomplished.

    Costs: It is to be anticipated that during its first years of existence this policy body will have to meet frequently.  However, after the major policies have been adopted, the ongoing costs will probably be quite low.

    Evaluation:  ICANN's ccNSO might be an appropriate body to assume this role.

    Conclusion: A ccTLD Policy Organization should be designated.  ICANN's existing ccNSO could be considered.  Because ccTLDs are so closely tied to the existence of sovereign countries it is likely that the primary interest in ccTLD issues will come from national representatives and from those who operate ccTLDs.  However, allowance should be made to include those representing the views of the internet community to be part of these decisions.  This body will require a relatively complex decision making process in which there is adequate time for all points of view to be raised, discussed, and considered.  ICANN's ccNSO may already be evolving appropriate mechanisms in this regard.

  3. To define and apply the rules to establish, remove, or transfer gTLDs

    Body Title: gTLD Policy Organization

    Evaluation:  ICANN has tried to perform this task.  However, even after 5 years it has created no coherent policy in these matters beyond the creation of extraordinary rules that reflect two non-technical policy choices: 1) A strong bias in favor of intellectual property protection over other uses of names. 2) The protection of incumbent gTLD operators by making it virtually impossible for newcomers to join the club.

    Although it is possible to make the creation of new gTLDs a rather mechanical function[3], this issue has become one filled with emotion and is quite contentious.  Consequently, this plan places the responsibility for decisions regarding gTLD creation into the gTLD Policy Organization, which will issue directives to the DNS Root Zone Administrator so that the latter may make those directives manifest in the root zone.

    I would, however, like to point out that from a technical point of view the root zone has a lot of room available for growth.  Today that zone contains less than 300 TLDs total, of which only a few are gTLDs.  It could easily accommodate hundreds of thousands, and perhaps even millions, of TLDs.[4]  The impact of this growth would not so much be on the servers themselves as on the human procedures and the time required to transfer and load updates.  Because of the tremendous gap between the number of TLDs found today and the number of TLDs that are possible, we ought to drop the pretense that TLDs are a scarce and precious resource that must be stewarded so carefully that the rate of increase under ICANN's administration has amounted to only about one per year.  Under the existing regime, the policies regarding new gTLDs are based not on technical concerns but rather on business and economic positions designed to preserve the status quo of the few incumbents and to protect one industry at the expense of other interests.  We must be careful that revised or new governance structures serve the broader public interest and not merely the goals of a few.

    Note that I have identified this task as a separate item from that of defining the registration rules within a gTLD once that gTLD is established.

    ICANN gTLD policies and ICANN's gTLD body (GNSO) have ossified over the years into a Byzantine maze that protects incumbents and intellectual property.  It is unlikely that ICANN's gTLD structures can be rewoven by any means less subtle than the Alexandrian approach to the Gordian Knot, in other words, radical restructuring.

    Costs: ICANN has incurred enormous "costs" in this area.  The "cost" of evaluating the 47 applications in year 2000 amounted to in excess of $2,300,000 (USD).  However, this "cost" was largely for items that are utterly irrelevant to the question of technical competency to run a TLD.  In other words, ICANN's experience is of no use in ascertaining what the real cost of developing and applying gTLD selection policies would be if those policies were concerned with technical competence rather than used as vehicles to protect selected business interests.

    Conclusion: A new entity should be created for this task.  ICANN's GNSO and ICANN itself are too heavily burdened with past errors to be recycled.

    This new entity must be driven by the interest of the community of internet users, not by selected business interests and incumbent DNS providers.

  4. To define rules pertaining to the registration of domain names within gTLDs

    Body Title: Registration Policy Organization

    Evaluation: It is the belief of the author of this note that internet governance of DNS ought not to include things that are more properly within the sphere of established legislative and judicial bodies.  For that reason I would argue that  matters that involve the creation of supra-national laws, such as the UDRP and the quasi-judicial system that accompanies it, are not appropriately within the scope of the kinds of internet governance bodies we are discussing here.

    Because DNS is an easy means for certain interests to exercise a great deal of worldwide control for a relatively tiny expense there will be great pressure for this Policy Organization to engage in policy making about things that go beyond how the DNS root zone should be managed.

    It is recommended here that the Registration Policy Organization have explicit limitations in its organic documents.  This may help constrain the degree to which it may engage in what amounts to internet lawmaking.  Its ability to adopt policies regarding business practices should be similarly constrained except for those needed to ensure that any failed DNS registrar or registry maintains enough recoverable assets and information so that a successor may pick up the pieces and resume services to the customers of the failed entity.

    Areas that are appropriate for this body to consider would be the measures to protect registrants in the face of registry or registrar failures, the degree of privacy protection to be afforded to the data disclosed by those who acquire a domain name, policies regarding grace periods to recover a name should there be an inadvertent failure to renew the name, policies regarding transfers of names, etc.  Many of these policies have been discussed under ICANN's GNSO and its predecessor the DNSO; for that reason, it may be possible to construct the Registration Policy Organization using parts of ICANN's GNSO.  However, as was mentioned previously, that organization may be too burdened and compromised by its past.

    Costs: The best estimate that can be made regarding costs is to examine some of the more recent "Policy Development Processes" that have been undertaken within ICANN.  Because much of the cost is borne by the individual participants it is not possible to make meaningful estimates.

    Conclusion: A Registration Policy Organization should be established.  It may make sense to build upon the policy development apparatus of ICANN's GNSO.  However, that apparatus would require significant overhaul to eliminate the structural biases created through the excessive use of "consensus" and "stakeholder" concepts as building blocks.  In addition, representatives of the internet community ought to have at least a parity seat at the decision-making table.

  5. To define the rules to establish, remove, transfer, and maintain infrastructure TLDs (such as .arpa or .int.)

    Body Title: Infrastructure TLD Policy Body

    Evaluation: There are infrastructure TLDs.  The most widely used is .arpa (usually in the form of in-addr.arpa for address-to-name lookups).  The .int infrastructure TLD also exists and has been put to various uses.

    Infrastructure uses are usually tied to internet standards and operations - it is unlikely that any matter would be contentious.  Businesses and members of the community of internet users are chiefly concerned that the internet is managed competently.  For that reason it may be easiest to simply join this task with the previously discussed task of assigning and recording protocol numbers specified by standards bodies.

    There has been discussion of reusing the .int TLD for use by international bodies.  This paper takes no position on that question.

    Costs: This should be a relatively low cost task.

    Conclusion: There is no reason to agonize about the details of how this function is to be handled, particularly if the IETF and other standards bodies are willing to allow this to be subsumed into the task of assigning and recording protocol numbers.
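As promised under the Root Zone Administrator task above, here is a toy sketch, written in Python using only its standard library, of the sort of content and form checking that root zone maintenance tools might perform.  The entries and rules below are invented for illustration; they do not reflect the real root zone file format or any tool actually used to maintain the root zone.

    # Toy sketch of the sort of content and form checking that root zone
    # maintenance tools might perform.  The entries and rules are invented
    # for illustration; this is not the real root zone format or a real tool.
    import ipaddress
    import re

    # Hypothetical entries: (top-level domain, name server host, server address).
    entries = [
        ("example", "a.nic.example",   "192.0.2.53"),
        ("bad_tld", "ns1.bad.example", "192.0.2.99"),     # underscore: should be flagged
        ("test",    "b.nic.test",      "not-an-address"), # malformed address: should be flagged
    ]

    # A simple letters-digits-hyphen rule for a single DNS label.
    LABEL_RE = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

    def check(tld, ns_host, address):
        problems = []
        if not LABEL_RE.match(tld):
            problems.append("TLD is not a valid letters-digits-hyphen label")
        if not all(LABEL_RE.match(part) for part in ns_host.split(".")):
            problems.append("name server host contains an invalid label")
        try:
            ipaddress.ip_address(address)
        except ValueError:
            problems.append("name server address does not parse as an IP address")
        return problems

    for tld, ns_host, address in entries:
        for problem in check(tld, ns_host, address):
            print(tld + ": " + problem)

The point of the sketch is simply that this class of checking is mechanical; the care required lies in the procedures around it, not in the software itself.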

Legal Structures, Degrees of Separation, Open Communications

To the greatest extent reasonable, each governance body described above ought to be entirely distinct and separate.  Each such body ought to be embedded in a distinct legal structure.  There ought to be no shared trustees, directors, managers, employees, or funding.

All communications within and between governance bodies must be in writing and be posted for public viewing on the internet at or before the time the communication is made.


Notes:

This paper is derived from a previously published note: 'A Plan To Reform ICANN: A Functional Approach', June 2004 available online at http://www.cavebear.com/archive/rw/apfi.htm

[1] RFC 2870,  "Root Name Server Operational Requirements", June 2000, available online at http://www.ietf.org/rfc/rfc2870.txt?number=2870

[2] While I was on the Board of Directors of ICANN I tried to isolate the costs associated with the operation of the IANA root server.  The accounting records were, unfortunately, inadequately detailed.  It was clear, however, that the total aggregated to a non-trivial amount and, because ICANN was establishing a more powerful system at a physically better location, the cost was rising.

[3] Several proposals have been put forth to use auctions or lotteries (or a combination of both) as a means of constraining the rate of new TLD grants.  See "sTLD Beauty Contests: An Analysis and Critique of the Proposed Criteria to Be Used in the Selection of New Sponsored TLDs" by Karl M. Manheim & Lawrence B. Solum.  (Online at http://gtld-auctions.net/sTLD_Analysis.html)  Also see "TLD addition procedure" by Milton Mueller and Lee McKnight (Online at http://dcc.syr.edu/miscarticles/NewTLDs-MM-LM.pdf)

[4] Two or three years ago I participated in an ad hoc experiment to see what would happen if a root server were to contain several million TLDs.  We took the then available .com zone and elevated it so that nearly every .com name became a TLD.  We then ran a synthetic query load.  This was done on equipment that would be considered grossly underpowered by today's standards.  And, after we added enough memory to the computer, it worked.  These were, of course, laboratory conditions, not subject to the rigors that a real root server would have to endure.  And we only measured response times and not the increased administrative burden.  The conclusion I would like to suggest is threefold:  First: There is not some clear hard upper boundary on the number of TLDs, but rather a soft area in which administrative concerns may provide back pressure against TLD growth, not intrinsic software or hardware limits.  Second:  The limit, wherever it is, is not sufficient to grant a TLD to everyone who might possibly ask; it is clear that some system of allocation of new TLDs is necessary.  Third: even if we were to create 10 new TLDs per day, it is unlikely that even after 100 years we would reach a number of TLDs that was administratively problematic, even on today's hardware.



Submission to the Workshop on Internet Governance
26-27 February 2004

First Law of the Internet

Author: Karl Auerbach, former North American publicly elected Director, ICANN

I would like to suggest that in our inquiry into governance of the internet we ought to consider the adoption of certain guiding principles.

Several times over the last few years I have referred to a formulation that I call "The First Law of the Internet".

I believe that this First Law represents an appropriate balance between the public and private effects of internet activity.

The First Law of the Internet:

Every person shall be free to use the Internet in any way that is privately beneficial without being publicly detrimental.

  • The burden of demonstrating public detriment shall be on those who wish to prevent the private use.

    • Such a demonstration shall require clear and convincing evidence of public detriment.

  • The public detriment must be of such degree and extent as to justify the suppression of the private activity.

Posted by karl at February 18, 2004 11:14 AM