There’s a lot of talk about governance and the Internet.
Some of it is extremely confused (often conflating the technology with its use, with wild proposals for marking ‘18-certificate adult content’ on every packet). Some is less confused but concerns the management of technical resources, of which, in my opinion, there are no real shortages. The problems in this area are artificial, created by accidents of poor design, or lack of design, in the early Internet, when it was never anticipated that the system would actually be used as it is. Indeed, much of the Internet technology was a prototype for researchers, who always intended that someone would later ‘build a real one’. Unfortunately, the prototype was deployed un-matured. Many of the discussions of governance stem from the technical shortcomings of deploying an incomplete prototype.
In my view, a lot of the heat could be taken out of the debate, and light could be shed on more useful topics for policy scientists to consider, by fixing these technical shortcomings. I call this approach ‘un-governating the Internet’.
First, let’s define ‘governance’. A political science definition, taken from the Marsden et al RAND report for the European Commission, is this:
‘Governance is the sum of the many ways individuals and institutions, public and private, manage their common affairs. It is the continuing process through which conflicting or diverse interests may be accommodated and cooperative action may be taken. It includes formal institutions and regimes empowered to enforce compliance, as well as informal arrangements that people and institutions either have agreed to or perceive to be in their interest.’[1]
My shorter definition has it that governance is political and non-neutral, and may admit of arbitrary decisions, eg discrimination by price, by customer or by performance, although I would not include mere differentiation.
There’s a lot of Internet governance. What gives us a clue that much of it is misdirected is that it rarely concerns the components of the system that are expensive (such as connectivity/capacity/traffic, or content), nor does it often concern the stakeholders who generate most employment and wealth (such as equipment vendors). Indeed, self-organisation through markets seems to work for them!
Why is there so much governance discussion of things that shouldn’t really matter much – Address Spaces, Name Spaces and so forth – and not of Protocol Space, Service Space and so on? As a technical researcher, I find it depressing that so much energy is wasted on figuring out how to manage purely abstract resources such as addresses and names, when the real resources, which carry significant capital and operational expenses, work so well.
Roots of the Problem?
This excessive discussion is partly due to the technology shortcomings which I alluded to above. It also emerges from the history, philosophy and economics of the Internet. The US Department of Defense Advanced Research Projects Agency (D)ARPA funded the private company BBN (and others, including universities partly funded through the National Science Foundation) to build the ARPANET (and SATNET and Packet Radio Net) from 1973 to 1981.[2] From 1981 to 1992, the National Science Foundation (and other agencies such as NASA) built the regional networks in a star (they were EGP Stubs off the ARPA and NSFNet core). In 1992, the divestment (privatisation) of the Internet was achieved rapidly. At this time, the provisioning of links, routers and routes was heavily competitive, and the market was ripe for many entrants to start building up commercial versions of what they had successfully built in the academic and research networks. The rapid development of the Border Gateway Protocol (BGP) promoted innovation in ISP relationships (ignoring protocol problems, it allows emergent interconnect policies).
It is worth noting that, in parallel, creativity in cellular business relationships was also very successful in promoting innovation in the network layer.
However, this all pre-dated the deployment of the commercial World Wide Web, for which only the very first Web server and browsers were emerging in 1992. The WWW creates a massive dependence on identifiers: in the network layer, because servers need to be globally reachable and therefore need always-on, non-NATed IPv4 addresses; and in the application layer, because URLs include FQDNs (Fully Qualified Domain Names). Unfortunately, neither the IP address space nor the DNS saw any evolution at that time.[3]
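To see the dependence concretely, here is a minimal sketch (Python, purely illustrative; the URL is a placeholder) of how a URL carries an FQDN in the application layer, which must then resolve, via the DNS, to a network-layer address that is globally routable rather than hidden behind a NAT:

```python
# Minimal sketch: a URL carries an FQDN (application-layer identifier),
# which must resolve via the DNS to an address (network-layer identifier)
# that is globally routable -- i.e. not a private, NATed address.
import socket
from ipaddress import ip_address
from urllib.parse import urlsplit

url = "http://www.example.com/index.html"   # placeholder URL
fqdn = urlsplit(url).hostname               # application-layer identifier
print("FQDN:", fqdn)

for info in socket.getaddrinfo(fqdn, 80, proto=socket.IPPROTO_TCP):
    addr = ip_address(info[4][0])           # network-layer identifier
    reachable = not (addr.is_private or addr.is_loopback)
    print(addr, "globally reachable" if reachable else "NATed/private")
```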
In the operational arenas (eg the North American Network Operators Group and the IETF), there was a strong mix of technically able people, spanning research, engineering and operations. That led to continued, rapid development of the technology, inclusive of both industry and researchers. Leadership was good, with the IETF/IESG/IAB having a pivotal role. The IANA (Internet Assigned Numbers Authority) linked this community into the identifier space, including protocol identifiers and operational service object identifiers. The IANA, in the person of Jon Postel (based at ISI/USC), worked well while the system was dominated by researchers. Recall that in 1992, divestiture was still largely into regional authorities, and even the first commercial ISPs were often founded by academics.[4]
Post-WWW, commercial realities started to impinge from 1992 to 1999, as most of the growth in demand for sites and content in the ‘dot.com boom’ was of commercial value. Online shopping was initially key. It is worth noting the positive influence of the non-imposition of state taxes on Internet shopping, which encouraged e-commerce in the USA as well as elsewhere. The boom was also sustained by plain corporate presence and, eventually, by new media in the form of context-relevant adverts (the advent of the search engine). The idea of click-through finally realised the value of a virtual presence and of a customer base on the Internet.
Since 2001, much of the growth has been in services of true value, such as P2P file-sharing of music, VoIP, social networking sites and Internet games. This has put pressure on the Internet address space, requiring reachable sites in ever larger numbers. Much of this growth is in unregulated content services (piracy, but also legal content) as well as in application innovations. So while the old-fashioned, real, concrete resources, such as transmission and switching capacity (and, later on, data centre storage and processing), seemed to manage themselves nicely, the new abstract resources, such as identifier spaces in the network (IPv4 addresses) and applications (domain names), came under exponentially increasing pressure.
Identifier Spaces
Inevitably, the question arises of whether we could privatise the identifier spaces. We may need to in the network layer, due to the scarcity of IPv4 addresses. What about in the application space (domain names)?
We can do this technically. Indeed, historically, management systems for both network addresses and names were proposed outside the Internet community (NSAPs in the ISO world, and the X.500 Directory in the ITU world). Separate directories (accessed through LDAP), in which search results are attributed and names are location-independent, remove the need for artificial competition over what are only abstract bit-strings!
A market in IPv4 addresses is only a temporary stop-gap while we see how, in practice, to deploy IPv6 addresses. Initially, this could start with an auction (like radio spectrum auctions). We would require capable bidders (there is no point in selling to organisations who then sit on the address space to hike the price). We could even use the revenue to fund the backbone router upgrades to IPv6, ideally with innovations like the location/identity split, since this would ease the integration with smart mobile devices from the 4G world (smartphones with 3G and WiFi), which are of rapidly growing importance. Think carbon tax, but with carbon trading.
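As a sketch of what such an auction might look like (the single-round, sealed-bid, uniform-price format and the qualification check are my assumptions for illustration, not anyone’s actual proposal; all bidders and figures are invented):

```python
# Illustrative sketch only: a single-round sealed-bid, uniform-price auction
# for a fixed number of equal-sized IPv4 blocks. Bidders must pass a
# 'capable bidder' qualification so the space is not bought just to be hoarded.
# All names and numbers are invented for illustration.

BLOCKS_FOR_SALE = 3  # e.g. three /16s

bids = [
    # (bidder, qualified operator?, bid per block in $)
    ("isp-a", True, 120_000),
    ("isp-b", True, 95_000),
    ("hoarder-x", False, 200_000),   # fails the qualification check
    ("isp-c", True, 80_000),
    ("isp-d", True, 60_000),
]

qualified = [(name, price) for name, ok, price in bids if ok]
qualified.sort(key=lambda b: b[1], reverse=True)

winners = qualified[:BLOCKS_FOR_SALE]
losing = qualified[BLOCKS_FOR_SALE:]
# Uniform clearing price: the highest losing bid, or the lowest winning bid
# if demand does not exceed supply.
clearing_price = losing[0][1] if losing else winners[-1][1]

for name, _ in winners:
    print(f"{name} wins one block at ${clearing_price:,}")
print(f"Revenue available for IPv6 upgrades: ${clearing_price * len(winners):,}")
```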
Note that any scarcity in the market might lead to a run on the address bank and to hoarding, but a rise in price would promote innovation. Note too that the regional Internet registries (ARIN, RIPE and APNIC) are already putting systems in place to trade blocks (though not a market) over the next year.
In the DNS, we could envisage a technical solution based on multiple parallel name spaces. At the moment (compared with the old ITU X.500 directory), the Internet is a bit simplistic. You can’t have two sites called apple.com, even though on an Apple computer you can have two files with the same name. The solution is to attribute named objects.
This is a feature of X.500 Directories, of the results from search engines, and of multi-lingual support too. A simplistic hierarchy with a single root doesn’t cut it (cf Lakoff’s ‘Women, Fire, and Dangerous Things’). There have been some doubts raised about attributes in the past, typically due to immature implementations in the old 1980s directory work. This is no longer a fear we should labour under.
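A minimal sketch of attributed naming, loosely in the X.500 spirit (the attribute names and locator strings below are invented for illustration): two organisations share the plain name ‘apple’, and a search distinguishes them by attribute rather than forcing one of them out of the name space.

```python
# Sketch of an attributed name space: entries share a plain name and are
# distinguished by attributes, as in an X.500-style directory, rather than
# by exclusive ownership of a single hierarchical label.
directory = [
    {"name": "apple", "business": "computers", "country": "US",
     "locator": "apple-computer.example"},        # invented locator
    {"name": "apple", "business": "music", "country": "UK",
     "locator": "apple-records.example"},         # invented locator
]

def search(**attrs):
    """Return every entry whose attributes match all the given values."""
    return [entry for entry in directory
            if all(entry.get(k) == v for k, v in attrs.items())]

print(search(name="apple"))                       # both entries
print(search(name="apple", business="music"))     # just the record label
```

The point is that the plain name stops being a scarce, exclusive resource; uniqueness comes from the combination of attributes.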
Of course, this future is not without problems, but the existing ones should be reduced. Name-space squatting, address-space theft and the like should be harder to commit and lower in impact. MacDonald’s Scottish restaurant could co-exist with the international burger chain.
This does not mean we will not need policing in the world of abstract (virtual) objects and identifiers. But if virtual objects are already the subject of economies, then normal property law applies, although we will need the right type of property definition.
Note of course that property law is governance too. In this future, why shouldn’t objects in my home, Second Life (SL) and in the DNS be treated the same?
So we need a set of allocation and management technologies: a meta-DHCP ‘server in the sky’, connected to the stock market. We need transactional support for atomic buying of identifiers. All of this is existing technology: indeed, we devised a technology for this for Internet multicast address assignment mechanisms in the late 1980s.[5]
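A toy sketch of the transactional part (the allocator below is hypothetical; a real ‘meta-DHCP server in the sky’ would need distributed consensus, payment clearing and auditing): an identifier purchase either completes in full or leaves the registry unchanged, so two buyers can never end up holding the same identifier.

```python
# Toy sketch of atomic identifier purchase: the identifier is either
# reserved, paid for and recorded in one transaction, or nothing changes.
# Hypothetical allocator for illustration only.
import threading

class IdentifierRegistry:
    def __init__(self, identifiers):
        self._free = set(identifiers)
        self._owned = {}                 # identifier -> owner
        self._lock = threading.Lock()    # one transaction at a time

    def buy(self, identifier, buyer, pay):
        """Atomically transfer `identifier` to `buyer` if `pay()` succeeds."""
        with self._lock:
            if identifier not in self._free:
                return False             # already sold: abort, nothing changes
            if not pay():                # payment failed: abort, nothing changes
                return False
            self._free.remove(identifier)
            self._owned[identifier] = buyer
            return True                  # commit

registry = IdentifierRegistry({"192.0.2.0/24", "example.test"})
print(registry.buy("192.0.2.0/24", "isp-a", pay=lambda: True))   # True
print(registry.buy("192.0.2.0/24", "isp-b", pay=lambda: True))   # False: already taken
```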
We already have organisational registries in the market (company names, trademarks etc). Simply attribute names properly, so that search results can distinguish returns, and the pressure on DNS names goes away. Users don’t care if Apple Computer maps to apple-computer.com and Apple Records maps to apple-records.co.uk, since search results are all that they view and click. DNS then becomes just a technical component, free to perform better at rotation (round-robin), dynamics and security.
Summary
Technical contributors like me should design so as to avoid ‘Tussle’. The evolution of the protocol, connectivity and service spaces was, and is, highly innovative. The Internet self-balances on multiple time-scales (congestion control, traffic engineering, provisioning). The Internet routes around problems.[6] In contrast, due to artificial shortcomings, the evolution of the name and address spaces has been restrictive and backwards-looking. I assert this is due to excess governance. By applying the ‘un-governating’ principle, we can reset these markets and move the activity on to spaces where innovation is useful.
Jon Crowcroft is Marconi Professor at the Computer Lab, University of Cambridge.
The author would like to acknowledge Ian Brown, kc, Craig Partridge, Jeanette Hofmann, Ken Carlberg, John Andrews, Ran Atkinson and others for input (although they may not agree with what is said here at all).
[1] http://ec.europa.eu/dgs/information_society/evaluation/data/pdf/studies/s2006_05/phase2.pdf
[2] See Hafner and Lyon (2003) Where Wizards Stay Up Late: The Origins of the Internet, Pocket Books.
[3] It is also worth noting that creativity in the use of identifiers (SIM/location) in the cellular world has not been very good either, at least until very recently!
[4] We all trusted IANA in the person of Jon Postel, since he was ‘one of us’. You can tell, since he allowed April 1st RFCs and chose ‘real’ RFC numbers wittily (e.g. RFC1984).
[5] Although it never saw widespread deployment, that was for other reasons.
[6] Such as the accidental black-holing of YouTube by the main Pakistani ISP in 2008.