Is The Internet At Risk From Too Much Security?

A Roman Aqueduct - Giuseppe de Nittis

The Internet has become a lifeline-grade utility.[1]

Our health, safety, and financial security depend on reliable and consistent availability of Internet services.

Yet over the years we have given relatively little consideration to actually having a reliable and consistently available Internet.

We are to a large extent flying the Internet on good luck and the efforts of unheralded people often working with tools from the 1980s.

As we wrap the Internet with security walls and protective thorns, maintenance and repair work is becoming increasingly difficult to accomplish in a reasonable period of time, or even at all.

With the increasing inter-dependency between the Internet and our other lifeline-grade utilities — such as power, water, telephone, and transportation — outages or degradations of any one of these systems can easily propagate and cause problems in other systems. Recovery can be difficult and of long duration; significant human and economic harm may ensue.

Although we can hope that things will improve as the Internet matures, outages, degradations, and attacks can, and will, occur. And no matter how much we prepare and no matter how many redundant backup systems we have, equipment failures, configuration errors, software flaws, and security penetrations will still happen.

The oft-quoted line, “the Internet will route around failure”, is largely a fantasy.

When we designed the ARPAnet and similar nets in the 1970s we did have in mind that parts of the net would be vaporized and that packet routing protocols would attempt — notice that word “attempt” — to build a pathway around the freshly absent pieces.[2]
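That “attempt” can be illustrated with a toy model: treat the network as a graph, remove a node, and recompute a path. This is only a sketch — the five-node topology and node names below are made up, and real routing protocols are far more involved than a breadth-first search.

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search for a shortest path from src to dst
    over an undirected adjacency-list graph."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists at all

# Hypothetical five-node topology.
links = {
    "A": ["B", "D"],
    "B": ["A", "C"],
    "C": ["B", "E"],
    "D": ["A", "E"],
    "E": ["C", "D"],
}

print(shortest_path(links, "A", "C"))  # ['A', 'B', 'C']

# "Vaporize" node B; routing *attempts* to find an alternate path.
links.pop("B")
for node in links:
    links[node] = [n for n in links[node] if n != "B"]

print(shortest_path(links, "A", "C"))  # ['A', 'D', 'E', 'C']
```

Note that the attempt succeeds only because an alternate path happens to exist; if D or E were also gone, `shortest_path` would return `None`, and no amount of protocol cleverness could help.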

Today’s Internet is less dynamic than the old ARPAnet; it is more “traffic engineered” and more tightly bound by peering and transit agreements. Although the possibility of dynamically routing around path problems remains, that possibility is constrained.

Today’s Internet is far more intricate than the ARPAnet. Today’s Internet services are often complicated aggregations of inter-dependent pieces. For example, web browsing depends upon more than mere packet routing; it depends upon a well-operating domain name service, upon well-operating servers for the many side-loads that form a modern web page, and upon compatible levels of cryptographic algorithms. Streaming video or music, and even more so interactive gaming or conversational voice, require not only packet connectivity but also fast packet delivery with minimal latency, jitter (variation in latency), and packet loss.

As anyone can attest, today’s Internet service quality varies from day to day.

When the Internet was less ingrained into our lives, network service wobbles were tolerable. Today they are not.

Problems must be detected and contained; the causes ascertained and isolated; and proper working order restored.

Individually and as a society we need strong assurance that we have means to monitor the Internet to detect problems, to isolate those problems, and to deploy repairs. Someone is going to need adequate privileges to watch the net; to run diagnostic tests; and to make configuration, software, and hardware changes.
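Even the most basic form of such monitoring — “can I reach this service at all?” — requires a degree of network access that security barriers increasingly block. A minimal sketch of such a probe, assuming nothing more than standard library sockets (the host and port are whatever service one wishes to check):

```python
import socket

def reachable(host, port, timeout=3.0):
    """Crude liveness probe: can we open a TCP connection to host:port
    within the timeout? Returns True/False; says nothing about *why*
    a failure occurred -- which is exactly the diagnostic gap the
    surrounding text is concerned with."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Note how little this tells an operator: a `False` could mean a dead server, a routing failure, a DNS problem, or a firewall silently discarding the probe. Distinguishing those cases requires deeper access — traceroutes, device logins, configuration changes — precisely the privileges that hardened security perimeters deny.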

However, we do not have that strong assurance.

And the few assurances we do have are becoming weaker due to the deployment of ever thicker, stronger, and higher security barriers.

Simply put: Our ability to keep the net running is being compromised, impeded, and blocked by the deployment of ever stronger security measures.

This is a big problem. It is a problem that is going to get worse. And solutions are difficult because we cannot simply relax security protections.

This paper describes this problem in greater detail, speculates what we might be able to do about it, and offers a few suggestions.[3]