How the Hourglass Won

Date: 2024-08-19T07:18:00+00:00

Location: systemsapproach.org

Last week’s post about Internet architecture made several references to the hourglass design and how it has contributed to the Internet’s success. This week we go back to the early days of the hourglass and consider the competition then underway among at least three visions for the future of networking. It was far from obvious that the Internet would emerge as the dominant architecture, so this week’s post examines how that competition played out.


After Larry paraphrased David Clark saying “Architecture tells you what you cannot do”, I started looking at what else David Clark had to say about architecture. I have just read the opening chapters of his 2018 book, “Designing an Internet”, which is a great resource: it not only describes the current Internet architecture, it imagines what other architectural choices we might make. I was hoping for a short definition of “architecture” in the networking context, but it takes a whole chapter of the book to explain. One important takeaway is that network architecture includes the things we have to agree on–such as the meaning of IP addresses–while leaving a lot of flexibility for variations in the design and implementation of specific networks.

David is known as “the architect of the Internet” and has written the foreword for each edition of our textbook. Less well-known is that David is the person who introduced me to Larry. Sometime in the early 1990s I was working with David on Aurora, a gigabit networking project involving MIT, UPenn, IBM, and Bellcore, where I worked. I was building a high-speed (for its time) network interface, which I have come to describe as “the accidental SmartNIC”. David made an intro to Larry (then at the University of Arizona) and we would go on to collaborate, with Larry’s student Peter Druschel writing software to make my NIC useful. This also gave me an excuse to visit Arizona during the New Jersey winter for several years in a row. The collaboration went well enough that Larry later invited me to be his co-author on the first edition of Computer Networks: A Systems Approach.

At the time that we were working on Aurora, David was also involved in an effort, supported by the National Research Council (of the U.S.), to shape the agenda for a “National Information Infrastructure” (NII). This work took place in 1993 and 1994, at a time when the term “Information Superhighway” was very much in vogue but there were at least three main competing views on what such a “superhighway” might entail. The result was a book-length report, Realizing the Information Future (RTIF). This book had a huge influence on my thinking, and one reason is that it introduced me to the idea of the Internet’s architecture being pictured as an hourglass. The hourglass was popularized later in the 1990s by Steve Deering, who gave a talk entitled “Watching the Waist of the Protocol Hourglass”. If you go searching for pictures of the Internet hourglass, you are likely to end up with a slide from Steve’s talk such as the one below.

We discussed the hourglass in our textbook in 1995 as an important aspect of the Internet architecture, but didn’t go to the effort to draw it as tidily as above. I reached out to David Clark last week to see if he could recall when the hourglass first appeared. He admitted it was shrouded in the mists of time, but thought it likely that the first publication of the hourglass image was the 1994 RTIF report, even though the idea predated the book. Importantly, the breadth of the hourglass at the top and bottom captures the notion that there is room for flexibility in the Internet architecture.

Competing Visions For Networking

It’s important to consider the context in which this picture of the Internet emerged. Older readers will remember the “net-heads vs bell-heads” debates, essentially a struggle between a telco-centric view of networking and an Internet-centric one. When I joined Bellcore in 1988, fresh from my Ph.D. studies, I had almost no background in networking. While the team I joined was one of the least “bell-headed” within Bellcore, the system we were building was based on ATM (Asynchronous Transfer Mode) and the telephone-company-led plan to build “Broadband ISDN” (B-ISDN). So my first real exposure to network architecture was through the eyes of the telcos. Working with David and Larry, and reading Realizing the Information Future, were important correctives to this.

It is hard to believe now, but there was a third competing vision for the future of information infrastructure at this time, based on an expansion of the cable TV network. Cable TV was ubiquitous in the US and digital video was starting to emerge. One view of the “information superhighway” was often summarized (depressingly) as “500 channels of video”. The idea was that digitization and other technological advances would allow the cable network to deliver hundreds of channels, with a limited upstream data path enabling some interactive and on-demand services. To a degree, this is what the cable system turned into, but it didn’t become the centerpiece of national information infrastructure that its proponents were arguing for in 1994. Importantly, the cable industry was quick to realize that its infrastructure could also be used to offer broadband Internet access, so cable-modem-based access emerged as an early high-speed alternative to dial-up for home Internet users.

What Realizing the Information Future did brilliantly was to highlight the key differences between these competing visions of the future of networking. While the Internet could be modeled as an hourglass, ATM looked more like a funnel: the entire bottom part of the stack was pinned down to a small set of technology choices, with ATM requiring SONET and a specific set of link technologies. And the cable-TV version, with the view that the only application that mattered was video, was the inverse of this, narrowing to a single application class at the top. 

RTIF didn’t claim that the Internet architecture was the one true choice for the future, but it did point out the drawbacks of narrowing either the top or the bottom part of the hourglass to a small set of choices. If you narrow the bottom, you rule out a whole lot of current and future technology choices for the link layer, such as Ethernet and WiFi. By contrast, IP just keeps working over every new link layer technology that gets invented, embracing innovations such as successive generations of cellular data and a proliferation of broadband access technologies. This also gave IP something ATM lacked: a clear deployment path that leveraged existing networks. And if you narrow the top, you rule out the diversity of applications that have flourished since the Internet was invented.
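To make that concrete, here is a minimal sketch (ours, not from the report) of what the hourglass waist looks like from a programmer’s point of view: a few lines written against the IP/socket interface that neither name the link layer below them nor constrain the application built on top of them. The destination address is a placeholder from the TEST-NET-1 range, used purely for illustration.

```python
import socket

# Minimal sketch of the "narrow waist" in practice: this code sends a UDP
# datagram over IP and is identical whether the host happens to be attached
# via Ethernet, WiFi, or cellular. The link layer is chosen by the host's
# interface and routing configuration, never by the application.

DEST = ("192.0.2.10", 9999)  # hypothetical receiver (TEST-NET-1, illustration only)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # IP + UDP; nothing below is visible
sock.sendto(b"same code over any link layer", DEST)
sock.close()
```

The same interface places no restriction on what the application does with it, which is the breadth at the top of the hourglass.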

Recall that the World Wide Web was in its infancy in 1994, and voice and video applications barely worked due to low link speeds. One of the claims of the ATM camp was that it had inherently superior performance to IP due to the small cell size (and hence would better serve video and voice). This was arguably true in 1994 but turned out to be irrelevant as faster links became available and a combination of new router designs and Moore’s law enabled high-speed routing to flourish. So in the end it was breadth at both the top and bottom of the hourglass that really enabled the Internet to emerge as the dominant architecture. Its support for innovation in both applications and underlying technologies has been critical to its success.
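A rough back-of-the-envelope calculation (ours, not from the post) shows why the cell-size argument faded. Serialization delay, the time a packet occupies a link while being transmitted, is what small cells were meant to reduce: a voice sample queued behind a large data packet has to wait for the whole packet to drain.

```python
# Serialization delay: time to clock a packet onto a link of a given speed.
# Small ATM cells mattered when links were slow; at gigabit speeds the
# difference between a 53-byte cell and a full-size packet is negligible.

def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Milliseconds needed to transmit packet_bytes on a link_bps link."""
    return packet_bytes * 8 / link_bps * 1000

for label, bps in [("1.5 Mbps (T1-era)", 1.5e6), ("1 Gbps", 1e9)]:
    cell = serialization_delay_ms(53, bps)       # one ATM cell
    packet = serialization_delay_ms(1500, bps)   # full-size Ethernet/IP packet
    print(f"{label}: 53-byte cell {cell:.3g} ms, 1500-byte packet {packet:.3g} ms")
```

At T1 speeds a full-size packet ties up the link for about 8 ms, a noticeable slice of an interactive voice budget, while at 1 Gbps it takes roughly 12 microseconds; that is the sense in which faster links made the cell-size advantage irrelevant.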

By the time Larry and I started working on our first edition in 1995, I was pretty convinced that the Internet was going to be the winner in this war of competing visions. (In a related move, I left Bellcore for Cisco the same year.) Hence we structured our book around the Internet architecture–a novel choice at the time–although we made a point of including alternative approaches as well. One of our guiding principles was: don’t assume that today’s technology is the one true approach. Explain the foundational principles that have gone into making the Internet work the way it does, and explore alternatives, so that students will learn how they might design the networks of the future. Clark’s Designing an Internet takes this approach as well–hence the indefinite article in the title. 

This goes back to a tension we highlighted in the prior post: teaching students about an idealized architecture doesn’t necessarily reflect the reality of the Internet today. But if you focus too much on how the Internet looks today, you can miss the core principles among all the artifacts that have been built over the decades. This is not just an issue of teaching theory versus practice, but also a matter of trying to help students understand what is really fundamental to network architecture. Our sense of what is fundamental may change over time–tunneling, for example, feels like a basic building block of today’s networks in a way that it did not in 1995. (For a strong version of that view, see this blog from Tailscale.) To reiterate a point from Larry’s post, if we can teach students that they have the power to change the Internet, and give them the tools to do so, that is more valuable than just telling them how it works today. 


Thanks to the people who have become paid subscribers to the newsletter. Please consider joining them to support our work.

Cory Doctorow covered the decision by MIT to stop paying Elsevier for their journals, and as authors whose book is published by Elsevier, we applaud this action.

If, like us, you missed SIGCOMM two weeks ago, you can catch up on YouTube and read the notes from the scribes here.

Preview image by Bruno Bergher on Unsplash