
Paul Sagawa

203.901.1633

sagawa@sector-sovereign.com

September 11, 2010

The Internet Core: We Can Rebuild it – We Have the Technology

  • The major paradigm shifts in the TMT landscape – e.g. the rise of mobile devices, streaming media, cloud applications and telepresence – are driving 50% annual growth in Internet traffic, but also revealing the unacceptable latency and unreliability of traditional Internet architecture. While the “last mile” was once considered the bottleneck to high performance Internet service, multi-megabit wired broadband speeds are now commonplace and the advent of 4G portends the same performance for mobile connections. The explosive growth of new applications that require nearly instantaneous response time has catalyzed a massive traffic shift away from traditional Internet core networks, toward content delivery networks (CDNs) that distribute servers and storage to networked data centers located as close as possible to users. We believe that this shift will accelerate, rewarding network operators and technology vendors with scale and expertise in the new approach
  • Traditional Internet architecture is hierarchical – passing data from router to router over many hops between a user and the server. These hops create delays (latency) and errors that are unacceptable for users of cloud applications or streaming media – the fastest growing categories of traffic on the Internet. One possible response is to add capacity – e.g. investing in more, bigger and faster routers to connect more and faster optical pipes. This alleviates congestion at high traffic bottlenecks, where data packets can stall in electronic queues to await processing or be dropped as buffers occasionally overflow. While the US is still absorbing over-investment in fiber during the Internet bubble, carriers are approaching an upgrade to 100Gbps optical transmission, replacing or augmenting the 40Gbps gear now going in to replace 10Gbps equipment. In turn, router vendors – primarily Cisco and Juniper – are following with 100Gbps products of their own. However, while this is a necessary part of the solution, the fast growth in traffic has made unsnarling bottlenecks akin to a game of whack-a-mole – fix one and another pops up
  • While traditional carriers chase their bottlenecks, CDNs have emerged as a real solution. Internet latency and error increase with the number of hops that data packets must make on their way from server to user. Thus, geographically distributing server and storage resources, optimizing private links between distributed data centers and duplicating commonly accessed content in multiple locations, and delivering data from servers as near as possible to the user, can dramatically improve network response times. Applications served via CDNs have considerable competitive advantage over those that are not. Similarly, CDN operators with superior scale and skill, such as Google and Amazon, can generate further advantage. These advantages can be seen in the extraordinary increase in the concentration of Internet traffic. Today, fewer than 150 companies carry more than half of the world’s Internet traffic, down from more than 10,000 companies in 2007
  • Building faster core networks and CDNs to improve the performance of next generation Internet applications should drive double digit annual growth for 40/100Gbps optical equipment, high speed core routers, content delivery technologies (layer 4-7 switching, application delivery controllers, etc.), high capacity storage systems, and blade servers. Companies with a disproportionate exposure to these markets include Ciena, Cisco, Juniper, F5, Riverbed, Blue Coat Systems, Citrix, Radware, Brocade, EMC, Network Appliance, JDS Uniphase, Finisar, PMC-Sierra, AMCC, Broadcom, and Marvell. In addition, companies that are successful in gaining critical mass with a CDN for both their own and commercial traffic will benefit as new performance sensitive applications gain further hold. These companies include Google, Amazon, Yahoo, Akamai, Limelight, Microsoft, IBM, Rackspace, Terremark, and Internap
  • The losers in this scenario are the operators of traditional IP backbones and regional networks. The extraordinary growth of CDNs and cloud hosting siphons traffic off of carrier networks and squeezes pricing for plain vanilla transit. These companies include Verizon, AT&T, Level 3, Global Crossing, and Sprint, amongst others. While carriers have belatedly pressed into cloud hosting and CDNs, their flatfooted start makes the threat more acute than the opportunity

The Need For Speed

The Technology/Media/Telecom landscape is in the midst of massive change. Users are untethering themselves from wired connections, taking to the streets with portable computing platforms. Increasingly, they are accessing cloud-based applications, where processing and storage are handled by data centers on the network. Their consumption of video and music streamed to their devices is exploding, with Internet TV in position to begin bullying channelized television out of the living room as well. The use of telepresence by enterprises is taking off, with video calling by mobile users poised to do the same.

These new paradigms are placing enormous demands on the many networks that comprise the Internet. The Atlas Internet Observatory, a non-profit organization charged with assessing the state of the global Internet, reports that overall traffic continues to grow at a torrid 50% annual pace (Exhibit 1). At the same time, the new applications that are the primary drivers of traffic growth are unusually sensitive to the performance of the network. Cloud applications demand that the response time of network-based servers to user inputs be imperceptibly different from the response of the device’s own processor. This is not possible if the network adds noticeable delay. Streaming media, in particular video, is unforgiving of packet errors and of inconsistencies in the pace and ordering of data packets. Video, the fastest growing form of traffic on the Internet, is also a notorious bandwidth hog – a single stream of HDTV can tie up 5-8Mbps from server to user (Exhibit 2).
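
As a rough sketch of the bandwidth arithmetic, the snippet below estimates how many concurrent HD streams links of various sizes could carry at the 5-8Mbps per-stream rate cited above; the link capacities are illustrative assumptions rather than figures from this note.

```python
# Back-of-the-envelope: concurrent HD streams per link at 5-8 Mbps per stream.
# Link capacities below are illustrative assumptions.
HD_STREAM_MBPS = (5, 8)  # per-stream rate range cited in the text

links_mbps = {
    "Cable/DSL broadband (20 Mbps)": 20,
    "10 Gbps backbone wavelength": 10_000,
    "40 Gbps wavelength": 40_000,
    "100 Gbps wavelength": 100_000,
}

for name, capacity in links_mbps.items():
    low, high = HD_STREAM_MBPS
    # Worst case assumes the high per-stream rate, best case the low one.
    print(f"{name}: {capacity // high:,} to {capacity // low:,} concurrent HD streams")
```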

Traditional Internet architecture makes little allowance for these application needs. The original purpose of the Internet was to assure communications even under the harsh conditions of battle. Data is broken into small packets, which are routed separately on a hop-by-hop basis and reassembled on the other end. If a link is too congested, traffic is rerouted around the bottleneck. If rerouting is impossible, the packet is simply dropped and the sender is asked to retransmit. The delay across the network – known as latency – depends on the number of router-to-router hops each packet must take, as packets wait in queue for processing at each router – longer trips take more time. If the network is congested, packets tend to take even more hops, creating more latency, and more packets are dropped, creating errors (Exhibit 3). By original design, the many individual packets that make up a message will arrive at their destination at unpredictable intervals and out of sequence. The receiving computer then must wait for the arrival of the packets, reassemble them in sequence, determine if any are missing, request a retransmission of lost packets, await the retransmission, reassemble again, and so on. This approach is perfect for applications like e-mail or file transfers, where delays of multiple seconds (or even minutes) are no big deal. However, if a user is editing a spreadsheet and must wait several seconds for every recalculation, or if a user is watching a movie and packet errors force an interruption, it IS a big deal.
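
The hop-by-hop dynamic described above can be illustrated with a toy model. The per-hop processing times, queuing delays and drop probability below are made-up assumptions, not measurements, but they show how latency compounds with hop count and how a dropped packet multiplies the wait.

```python
import random

# Toy model of hop-by-hop forwarding: each hop adds processing plus queuing
# delay, and a congested hop occasionally drops the packet, forcing a resend.
# All delay and loss figures are illustrative assumptions.
PER_HOP_PROCESSING_MS = 0.5
PER_HOP_QUEUE_MS = (0.1, 5.0)   # light vs. heavy congestion
DROP_PROBABILITY = 0.01         # per hop

def one_way_delay_ms(hops: int) -> float:
    """Accumulate delay across `hops` routers, restarting the trip on a drop."""
    total = 0.0
    while True:
        for _ in range(hops):
            total += PER_HOP_PROCESSING_MS + random.uniform(*PER_HOP_QUEUE_MS)
            if random.random() < DROP_PROBABILITY:
                break  # packet dropped mid-path; sender must retransmit
        else:
            return total  # made it across every hop

for hops in (5, 15, 30):
    samples = [one_way_delay_ms(hops) for _ in range(1000)]
    print(f"{hops:>2} hops: avg {sum(samples)/len(samples):6.1f} ms, "
          f"worst {max(samples):6.1f} ms")
```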

Fixing a Hole

The traditional solution to Internet congestion is to throw capacity at the problem. More direct connections over fatter pipes tied to bigger, faster routers will reduce hops, shorten data queues and bring down error rates. As such, backbone operators are making investments to upgrade optical equipment from 10Gbps to 40Gbps per wavelength, positioning to move to 100Gbps before mid-decade, and adding new generation routers able to keep up with the new speeds (Exhibit 4). At the same time, Internet engineers have developed technologies that can identify packets associated with specific applications and/or specific users and prioritize them in data queues – basically a Disney World VIP pass for IP packets. While these technologies have long been in use in private IP networks, public Internet carriers do not now use them to give priority to certain packets over others. The fervent support for “net neutrality” amongst Internet cognoscenti, who oppose the practice, makes prioritization a philosophical question, while the FCC’s firm posture against discrimination by traffic type or customer makes it a political issue as well. The debate may be somewhat moot, as most traffic still traverses multiple company backbones on its way from server to user, making it difficult to enforce traffic prioritization even if it were possible to stop free riders from claiming priority status for their packets.
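
Mechanically, this kind of prioritization amounts to a priority queue at each router. The sketch below is a minimal illustration with hypothetical traffic classes (not any carrier's actual QoS configuration), showing how higher-priority packets leave the queue first regardless of arrival order.

```python
import heapq
import itertools

# Minimal priority-queue scheduler: lower class number = higher priority.
# Traffic classes and packets are hypothetical examples.
PRIORITY = {"voice": 0, "video": 1, "best_effort": 2}
counter = itertools.count()  # tiebreaker preserves arrival order within a class
queue = []

def enqueue(traffic_class: str, payload: str) -> None:
    heapq.heappush(queue, (PRIORITY[traffic_class], next(counter), payload))

def dequeue() -> str:
    _, _, payload = heapq.heappop(queue)
    return payload

# Packets arrive interleaved...
enqueue("best_effort", "email chunk")
enqueue("video", "HD frame 1")
enqueue("voice", "VoIP sample")
enqueue("best_effort", "file transfer chunk")
enqueue("video", "HD frame 2")

# ...but leave strictly in priority order: voice, then video, then best effort.
while queue:
    print(dequeue())
```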

Even if Internet quality of service mechanisms could be implemented, they would not be a panacea. First, Internet bottlenecks are a bit like a game of “whack-a-mole” – resolving one bottleneck often reveals other bottlenecks to address (Exhibit 5). Second, 50% annual traffic growth means new bottlenecks form relentlessly – it is very hard, and very expensive, to maintain the status quo, much less get ahead of the problem. Finally, even if enough capacity could be brought on-line to eliminate data queues and packet errors, the sheer number of router hops necessary to cross the country, much less the globe, would still yield unacceptable latency for the most demanding applications.
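
A back-of-the-envelope calculation illustrates that last point: even over completely uncongested fiber, propagation delay and per-hop processing set a latency floor. The route lengths, hop counts and per-hop cost below are illustrative assumptions.

```python
# Latency floor with zero queuing: propagation in fiber plus per-hop processing.
# Route lengths, hop counts and per-hop processing times are assumptions.
FIBER_KM_PER_MS = 200  # light travels roughly 200 km per ms in fiber (~2/3 c)

def round_trip_ms(route_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    one_way = route_km / FIBER_KM_PER_MS + hops * per_hop_ms
    return 2 * one_way

print(f"Coast-to-coast (~4,500 km, 15 hops): {round_trip_ms(4500, 15):.0f} ms round trip")
print(f"Transatlantic  (~6,500 km, 20 hops): {round_trip_ms(6500, 20):.0f} ms round trip")
```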

The Network Behind the Network

The best way to reduce the latency caused by distance and router hops is to shorten the distance and eliminate router hops, and the best way to accomplish that is to locate frequently used content and applications as close to users as possible (Exhibit 6). Multiple data centers can be geographically dispersed and internetworked to allow rapid transfer of data within the private content delivery network (CDN). The user then accesses a nearby server, bypassing the snarl of public internet backbones. This yields a dramatic improvement in response times and error rates.
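
A minimal sketch of the mechanism, using hypothetical node locations, distances and objects rather than any operator's actual system: requests are steered to the nearest edge node, which serves cached content locally and reaches back to the distant origin only on a miss.

```python
# Toy CDN: steer each request to the nearest edge node, serve cached objects
# locally, and fall back to the distant origin only on a cache miss.
# Distances, node names and objects are hypothetical.
ORIGIN_KM = 4500
EDGE_NODES = {"nyc": 80, "chicago": 1200, "dallas": 2500}  # km from this user

caches = {node: set() for node in EDGE_NODES}

def fetch(obj: str) -> float:
    """Return the distance (km) the request traveled, filling the cache on a miss."""
    node = min(EDGE_NODES, key=EDGE_NODES.get)   # nearest edge to the user
    if obj in caches[node]:
        return EDGE_NODES[node]                  # cache hit: short local trip
    caches[node].add(obj)                        # cache miss: fill from origin
    return EDGE_NODES[node] + ORIGIN_KM

for request in ["home.html", "video_chunk_1", "home.html", "video_chunk_1"]:
    print(f"{request:14s} served over {fetch(request):>5.0f} km")
```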

Of course, it is not quite as easy as all of that. First, building out an effective CDN is expensive – numerous data centers (the more, the better) must be maintained along with a high-speed private network to connect them – and without a roster of popular and profitable anchor applications to foot the bill, most CDNs struggle to reach critical mass. This factor gives companies with high traffic web businesses, like Google, Yahoo and Amazon, an enormous advantage. Second, the technical challenges of managing content delivery across a huge network of distributed servers are considerable and experience is invaluable. Companies that have built institutional expertise through years of experience – essentially the same companies that have built scale – add to their advantage.

The shift to CDNs has been dramatic. In 2007, the largest 10,000 Internet networks accounted for 50% of the world’s traffic. By 2009, the number of organizations accounting for 50% of Internet traffic had dropped to fewer than 150 – a more than 98% reduction in the number of networks carrying half the world’s traffic in just two years (Exhibit 7). Over the last year, Google alone grew from 5.2% of Internet traffic to about 6%, a roughly 15% increase in share and more than a 70% increase in total traffic.
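
The arithmetic behind those figures is straightforward: with overall traffic growing roughly 50% a year, a move from 5.2% to about 6% of traffic compounds into better than 70% growth in Google's own volume.

```python
# Arithmetic behind the share figures cited above.
total_growth = 1.50          # overall Internet traffic up ~50% year over year
share_before, share_after = 0.052, 0.06

share_gain = share_after / share_before - 1
traffic_gain = total_growth * (share_after / share_before) - 1

print(f"Share gain:   {share_gain:.0%}")     # ~15%
print(f"Traffic gain: {traffic_gain:.0%}")   # ~73%
```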

We note that the shift toward CDNs has been coincident with a change in the economics of the Internet. Prices for basic Internet connectivity have plummeted, squeezing network revenues, while advertising revenue on the web has exploded (Exhibit 8). The link from network performance to application adoption, and from application adoption to advertising revenue, is a key driver of future investment in CDN capabilities.

The Arms Merchants

The 50% annual growth in Internet traffic and the extraordinary concentration of that traffic onto just several dozen CDNs is an obvious boon to the equipment providers focused on the opportunity. At the heart of the Internet, a significant upgrade cycle is underway to 40Gbps and on to 100Gbps optical equipment. Of the companies leading in optical transmission equipment, Ciena is by far the most exposed to this opportunity. The other leaders – i.e. Alcatel Lucent, Fujitsu, Nokia Siemens Networks, Ericsson, etc. – all generate only a small portion of their sales from optical gear and face substantial issues in their other businesses. We believe demand at the high end of the optical market will likely grow more than 40% per year. Similarly, core router vendors – primarily Cisco and Juniper – will also benefit from the general growth in Internet traffic being generated by video, cloud applications and mobile Internet (Exhibit 9).

Focusing on CDNs, the largest players – Google, Amazon, and Akamai – rely on highly customized proprietary distributed processing and content management technology (Exhibit 10). However, demand for commercially available content delivery hardware from Cisco, F5, Riverbed, Blue Coat, Brocade and others, and software solutions from Citrix, RadWare, and others, should be robust. Finally, components that enable faster network equipment – both optical (JDS Uniphase, Finisar, etc.) and electronic (PMC-Sierra, AMCC, Broadcom, Marvell, etc.) – should also see strong market growth.

Within CDN data centers, server and storage hardware implementations tend to be fairly plain vanilla, with rows of inexpensive rack-mounted blade servers and network attached RAID storage. Strong demand for servers will help PC stalwarts, like Microsoft, Intel, Dell, HP and others, partially offset the twilight of PC architecture for user devices. Meanwhile, data center storage vendors – e.g. EMC, NetApp, Brocade, HP, Oracle, etc. – should see very strong demand from the CDN segment.

The Quick and the Dead

We believe owning one’s own CDN will be an enormous advantage to Internet denizens with the scale to exploit it. Of these, Google stands out with its reputed 250,000 servers spread out over dozens of global data centers, all connected by its own fiber network and managed with proprietary best-in-class CDN algorithms. Google now serves more than 6% of the earth’s Internet traffic, and, after its acquisition of YouTube, more than 40% of the video traffic (Exhibit 11). After Google, Amazon and Yahoo have also built out private CDNs to serve their own Internet traffic and that of network clients. Akamai leads the list of CDN operators focused primarily on commercial customers rather than their own web businesses. Other well regarded public CDN carriers include Limelight, Internap, Terremark, and Rackspace. Microsoft and AT&T are also participants in the CDN space, but their exposure represents a small piece of their overall businesses.

On the flip side, traditional Internet backbones have been commoditized just as paying customers are migrating to the CDN alternative. This is another step in the Bataan death march of the once proud US telecommunications carriers. We believe Sprint, AT&T, Verizon, Level 3, Global Crossing, Qwest, and others will all suffer as a result (Exhibit 12).

 
