Net Neutrality Is a Brand Name, Not a Policy


Some politicians have favored regulating the Internet since it first became a commercial entity, and over the years their number has grown. About a decade ago, these politicians decided to glom onto Tim Wu's altruistic efforts and brand their regulatory cause "Network Neutrality." As is all too often the case in Washington, this branding has been used as cover to hide the nefarious goal of regulating the Internet as a public utility under Title II of the 1934 Telecom Act.

As this political branding campaign gained traction with consumers who never bothered to look past the cuddly name, "The Consumer Internet Bill of Rights" was introduced in section 903 of Senate Bill S. 2686. It listed nine covenants that all the major service providers, including AT&T (T), Comcast (CMCSA) and Verizon (VZ), stated they would support. These follow:

1. Access and post any lawful content of the subscriber's choosing;
2. Access any web page of the subscriber's choosing;
3. Access and run any voice application of the subscriber's choosing;
4. Access and run any video application of the subscriber's choosing;
5. Access and run any email application of the subscriber's choosing;
6. Access and run any search engine of the subscriber's choosing;
7. Access and run any other application, software, or services of the subscriber's choosing;
8. Connect any legal device of the subscriber's choosing to the Internet access equipment of the subscriber, if such device does not harm the network of the Internet service provider; and
9. Receive clear and conspicuous information, in plain language, about the estimated speeds, capabilities, limitations, and pricing of any Internet service offered to the public without interference from Internet service providers or Federal, state, or local governments.

Even though these covenants stated clearly that service providers would not do the bad things Network Neutrality proponents claimed were clear threats to Internet consumers, the Bill died in committee, and the politicians riding the coattails of their carefully crafted Network Neutrality branding marched on to draft Senate Bill S. 2917.

In one form or another, S. 2917 included all of the above points, plus one other. In section 12 of that Bill it read:

"(4) enable any content, application, or service made available via the Internet to be offered, provided, or posted on a basis that--

(A) is reasonable and nondiscriminatory, including with respect to quality of service, access, speed, and bandwidth;
(B) is at least equivalent to the access, speed, quality of service, and bandwidth that such broadband service provider offers to affiliated content, applications, or services made available via the public Internet into the network of such broadband service provider; and
(C) does not impose a charge on the basis of the type of content, applications, or services made available via the Internet into the network of such broadband service provider;"

What this one sticking point brings up, without calling it by its proper name, is quality of service, or QoS.[1] QoS may seem like an innocent and highly technical point, but in reality it describes the technology used across the Internet to deliver the "quality" video (free of pauses and pixelation) and smooth telephone conversations we've all become accustomed to enjoying.

1. In this context, QoS includes Deterministic QoS, Differential QoS and Hierarchical QoS. You can learn more about these technologies using your favorite web search tools.

What section 12 of S. 2917 says is that service providers cannot sell QoS to consumers or content providers, and that they must deliver QoS for all outside services that is just as good as the QoS the providers use to deliver their own services. Unfortunately, this is the real world and not Lake Wobegon, where everyone is slightly above average.

What Does this Mean in the Real World (outside Lake Wobegon)?

A very important point for consumers to understand is that the bandwidth consumers buy, which is often billed as 10 megabits per second (10 Mbps) or greater, and the bandwidth that is sold to enterprise customers are vastly different things. The short story here is that consumers buy what is known as "best effort" bandwidth, while enterprise customers buy bandwidth backed by what is known as a "service level agreement" (SLA).

An SLA usually costs five to ten times as much as a best-effort bandwidth agreement. This is not because service providers can simply charge enterprise customers more; it is because with an SLA, enterprise customers are guaranteed the full bandwidth 24/7. This is substantially different from a "best effort" contract that may deliver more, or considerably less, bandwidth than the headline number at any given moment.

Obviously, it is more costly for a service provider to deliver 24/7 guaranteed bandwidth, and consumers would choke at paying five to ten times what they do today to get it. To keep consumer costs down, numerous consumers share the aggregate bandwidth that is delivered to what is called a "node."

Since the node has a bandwidth limitation that is substantially below the aggregate best-effort bandwidth of the consumers connected to it, it is impossible to deliver that full best-effort bandwidth to all of those consumers simultaneously. Because this architecture shares bandwidth among numerous customers, it is not uncommon for consumers to experience at least occasional pausing and pixelation when trying to view high-definition (HD) streaming video, like a Netflix (NFLX) movie, at peak usage times.
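The node-sharing arithmetic can be sketched in a few lines. All of the figures below are assumptions chosen for round numbers, not actual cable-plant specifications, but they illustrate why a node cannot honor every "best effort" headline rate at once:

```python
# Illustrative node-sharing arithmetic. Every figure here is an
# assumption chosen for round numbers, not a real cable-plant spec.
NODE_CAPACITY_MBPS = 400   # aggregate bandwidth delivered to one node
SUBSCRIBERS = 200          # households sharing that node
ADVERTISED_MBPS = 10       # "best effort" rate each subscriber buys

# If every subscriber demanded the headline rate at once, demand would
# exceed the node's capacity by this ratio:
oversubscription = (SUBSCRIBERS * ADVERTISED_MBPS) / NODE_CAPACITY_MBPS
print(f"oversubscription ratio: {oversubscription:.0f}:1")  # 5:1

# Even if only 60% of subscribers stream at peak, each active user's
# equal share of the node falls well below the advertised 10 Mbps:
active_users = SUBSCRIBERS * 60 // 100
share_mbps = NODE_CAPACITY_MBPS / active_users
print(f"per-user share at peak: {share_mbps:.1f} Mbps")  # ~3.3 Mbps
```

With these assumed numbers, a 10 Mbps subscriber's fair share at peak is roughly a third of the headline rate, which is exactly the condition that produces pausing and pixelation in an HD stream.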

The Tragedy of the Commons:

What this NFLX example illustrates is very similar to what is popularly referred to in economics as "The Tragedy of the Commons," an economic theory presented by Garrett Hardin in 1968 that was based on a paper written by William Forster Lloyd in 1833.

What Lloyd's paper described was the plight of commoners who were allowed to graze cattle and sheep on common grounds with no limit on how many animals each commoner could own. With no cap on the number of animals that could graze in an inherently limited area, each commoner was motivated to increase the size of his herd. As each commoner did this to optimize his own situation, the number of animals eventually exceeded the grazing capacity of the land, and it was laid barren.

In 19th century England and Wales, this "tragedy" was handled by limiting the grazing area allocated to a given commoner. This was a logical method for sharing a limited resource. However, these lessons are being ignored in the policies now being branded as Network Neutrality.

In a real-life situation where you are sharing bandwidth, if enough of your neighbors are downloading or watching streaming video, your Internet experience will be degraded, and if the bandwidth is substantially overtaxed, no one will have a satisfactory Internet experience. The simple solution is to allow service providers to sell QoS, which can be reasonably viewed as a sliver of an SLA.

Bandwidth Abundance:

Network Neutrality proponents often argue that more bandwidth will solve the problem, and that with it we can all live happily on a superhighway of bandwidth where QoS is unnecessary. The problem here is we simply cannot flip a switch that will provide every consumer with this superhighway connection. The price differences between best-effort and SLA contracts attest to this fact, and even under ideal conditions it will take years, if not well more than a decade, of favorable economic and regulatory conditions to deliver this nirvana.

Regulating the Internet with Title II Telecom regulations will effectively quash any chance of this ever happening. The evidence to support this claim is readily available, but conveniently ignored by the politicians who want to regulate the Internet. As Network Neutrality legislation gained traction a decade ago, capital spending on broadband initiatives declined. However, when Senate Bill S. 2917 died after being referred to the Committee on Commerce, Science, and Transportation, capital investments by telecom and cable companies increased sharply, and with that, the cost per megabit of bandwidth decreased dramatically.

Prior to this flurry of investment, NFLX was a struggling, small and not terribly well-known company. With more abundant and cheaper bandwidth, however, the NFLX model gained traction, and stockholders have been rewarded with a total gain of nearly 2,000% since the summer of 2007. Now, with more people using NFLX and other over-the-top (OTT) video streaming services, many consumers are again suffering from "The Tragedy of the Commons."

A Deeper Dig into QoS - Why is it Important:

The Internet uses what is called a "packet switched" network topology. This is radically different from the "circuit switched" network that telecom companies deployed years ago to support what was then virtually all voice traffic (telephone conversations). In a circuit switched network two or more parties are connected over what is essentially a dedicated line. In other words, the parties are afforded a sliver of bandwidth that is all theirs, and cannot be disrupted by other users.

In a packet switched network there are no dedicated connections, and packets compete with each other all the way from origin to destination. Sometimes packets are even dropped and have to be resent, but since the network is designed to handle these losses, traditional Internet users don't even notice. Here is where we need to draw some distinctions around what is considered "traditional" Internet use: browsing web sites, sending and receiving email, and even online chat conducted via text sessions.

In all of these "traditional" uses a computer buffers incoming data and assembles it to display a web page, email message or text chat. In other words, the computer hides any delays the packets might encounter between the sending source and your eyes. Your computer can also buffer a video file. While waiting for the computer to buffer enough data to display the video can be frustrating, more often than not the video will then play without significant interruptions.

Some consumer premises equipment (CPE) will also buffer incoming video streams like a NFLX program. Again, while you might have to wait for enough of the program to buffer before you begin watching, this buffering minimizes, most of the time, the pauses and pixelation caused by inadequate bandwidth (it depends on the aggregate bandwidth demand from the other consumers attached to your node).
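The buffering trade-off described above can be modeled in a toy simulation. The bitrate and the per-second bandwidth trace below are invented for illustration; the point is only that a few seconds of pre-buffered video lets playback ride out a dip in best-effort bandwidth that would otherwise cause stalls:

```python
# Toy playback-buffer model showing why buffering hides best-effort
# bandwidth swings. Bitrate and bandwidth trace are invented numbers.
VIDEO_BITRATE_MBPS = 5.0  # rate the stream consumes during playback

def count_stalls(bandwidth_trace, prebuffer_seconds):
    """Simulate playback; return how many seconds the viewer sees a stall."""
    buffered = 0.0   # seconds of video currently stored
    playing = False
    stalls = 0
    for bw_mbps in bandwidth_trace:
        buffered += bw_mbps / VIDEO_BITRATE_MBPS  # seconds of video received
        if not playing and buffered >= prebuffer_seconds:
            playing = True                        # enough stored to start
        if playing:
            if buffered >= 1.0:
                buffered -= 1.0                   # one second plays smoothly
            else:
                stalls += 1                       # buffer empty: pause/pixelation
    return stalls

# Delivered bandwidth dips well below the 5 Mbps bitrate mid-stream:
trace = [8, 8, 2, 2, 2, 2, 2, 2, 8, 8]
print(count_stalls(trace, prebuffer_seconds=0))  # 3 (play instantly, then stall)
print(count_stalls(trace, prebuffer_seconds=4))  # 0 (buffer rides out the dip)
```

Starting playback immediately produces three visible stalls in this trace, while a four-second pre-buffer absorbs the entire dip, which is exactly the behavior you see when a player pauses to "load" before a program starts.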

The point here is that with prerecorded programs being delivered at 1080i resolution, we can work around the inherent bandwidth limitations and enjoy tolerable viewing quality without implementing QoS technology. The problem is cable and telecom networks can't support even the 1080p resolution that virtually all TVs can deliver today, let alone the 4K Ultra HD resolution that is becoming increasingly affordable and common in TV sets. And, in addition to trailing the resolution of TV technology, there is no way to deliver live streaming TV using packet-switched Internet Protocol (IP) without giving that live stream a guaranteed lane of QoS.[2]

2. Live video streams, like the one of Apple (AAPL) introducing its new iPhone 6 and 6 Plus, cannot be buffered, and if you suffered through trying to watch that event, you know well the frustration of watching a live video stream without the benefit of QoS.

This is where QoS is most often misunderstood. QoS is not a designation of a "fast lane." It is more accurately viewed as a lane where traffic can flow without the threat of interruption; it does not mean the QoS lane moves faster, or that other Internet traffic sharing the same fiber or wire will move slower. Whether other traffic is slowed at all depends on the aggregate bandwidth demand across your shared node.

We actually implement QoS on our highway system in three different ways. First, in some cities where traffic congestion is high, you'll see carpool only lanes. With these lanes, commuters that share a vehicle are allowed to drive in a lane that has less inherent congestion. In tech-terms, the carpool lane delivers a higher QoS guarantee.

The second representation of QoS is the law that says traffic must yield to emergency vehicles using lights or sirens. If a police car, fire truck or ambulance drives down a highway with lights and sirens blaring, commuters are obligated by law to move to the right and give the emergency vehicle free passage.

The third and most graphic analogy is the growing trend for toll lanes on some highways. Here, if you are willing to pay what is usually a modest toll, you are afforded a less congested and usually faster driving experience.
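In scheduling terms, all three highway analogies boil down to the same mechanism: a strict-priority queue. The sketch below is a deliberately simplified illustration (the packet names, counts, and slot model are invented); real QoS schedulers are more sophisticated, but the principle that marked traffic is served first while best-effort traffic uses the leftover capacity is the same:

```python
from collections import deque

# Toy strict-priority scheduler, analogous to the carpool/emergency/toll
# lanes above: QoS-marked packets always transmit first; best-effort
# packets use whatever transmission slots remain. Names are invented.
def transmit(qos_packets, best_effort_packets, slots):
    qos = deque(qos_packets)
    best_effort = deque(best_effort_packets)
    sent = []
    for _ in range(slots):
        if qos:
            sent.append(qos.popleft())          # protected lane goes first
        elif best_effort:
            sent.append(best_effort.popleft())  # leftover capacity
    return sent

# The video stream gets its guarantee, yet the web traffic still gets
# through because the link has spare slots -- nothing is blocked:
order = transmit(["video1", "video2"], ["web1", "web2", "web3"], slots=5)
print(order)  # ['video1', 'video2', 'web1', 'web2', 'web3']
```

Note that when the link has enough slots for everything, the best-effort traffic is not slowed at all; it is only when the link is saturated that the protected lane's guarantee costs anyone anything, which mirrors the shared-node point above.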

Fortunately, it doesn't cost nearly as much to increase Internet bandwidth as it does to add lanes to highways. However, when you consider that bandwidth expansion initiatives must address bottlenecks extending from the Internet backbone to virtually every home and business in the U.S., the expense is still very substantial. A very important point to embrace here is that since it takes years to implement meaningful bandwidth expansion initiatives, service providers must have a clear view of what their return on investment (ROI) will be for the expense.

Free Market Competition, not More Regulations, leads to More Bandwidth:

A decade ago, the threat of the unknown regulations being proposed under the branding of Network Neutrality significantly slowed capital spending by service providers. However, as the proponents of Network Neutrality lost power, and various attempts by the FCC to impose so-called Neutrality rules were overturned by the courts, capital spending increased. Now, with the regulatory effort again rearing its ugly head, companies like AT&T have publicly stated they will again constrain spending.

Last April AT&T outlined its plan to deploy high-speed fiber-optic networks in 100 U.S. cities. However, following the renewed push for Internet regulation under the brand name Network Neutrality, AT&T's CEO, Randall Stephenson, stated the following at a November 2014 analyst conference:

"We can't go out and invest that kind of money deploying fiber to 100 cities not knowing under what rules those investments will be governed."

During his November 12, 2014 conference call, Cisco's (CSCO) CEO, John Chambers, also cited the renewed Network Neutrality threat as the reason CSCO has seen a substantial decline in demand from the three largest U.S. service providers.

Positive Spirals of Innovation:

There are few if any industries in the history of the world that have enabled as much innovation as the semiconductor industry. As described by Intel (INTC) co-founder Gordon Moore roughly a half century ago, advances in technology have enabled semiconductor companies to double the number of transistors that can be integrated within a given space about once every two years. With that engine, which is popularly called "Moore's Law," the industry has improved computing performance and lowered prices so much that we can buy smartphones today that have vastly more computing power than all of NASA had when it first launched John Glenn into space.

The point here is that things advance at a feverish pace in the technology world, and as we can clearly see when we evaluate the history of the heavily regulated telecom industry, government regulation often not only slows the pace of advancement, but also keeps costs higher than they would be if advancement were governed by the open competitive forces of a free market.

As a matter of fact, had Senate Bill S. 2917 been passed, neither AT&T nor Verizon could have deployed the IPTV systems known today as U-verse and FiOS, respectively. The reason is simple; in both cases TV channels are delivered via IPTV technology and given preferential QoS. Under section 12 of S. 2917, doing that without providing identical QoS for all Internet traffic would have been illegal.

Cable companies now want to adopt a similar architectural approach that would allow them to dynamically allocate bandwidth, as it is needed at any given moment, between TV programming channels and other Internet traffic. That change will involve the deployment of a Converged Cable Access Platform (CCAP), which combines the functions currently split between the Quadrature Amplitude Modulation (QAM) device that handles TV channels and the Cable Modem Termination System (CMTS) that handles traditional Internet traffic.

The deployment of CCAP technology represents a huge undertaking for cable companies (a very large capital expense), but it promises not only substantially improved bandwidth efficiency, but also the ability to deliver new forms of OTT programming, and to do all this at a lower cost. As is the case with U-verse and FiOS, CCAP would have been illegal under the rules of S. 2917, and since the current push for full Title II regulation is undefined, it is possibly illegal under those regulations too.

The short story here is that while we can reasonably define what technologies and benefits a regulated environment might impact today, we can't really define what new technologies the regulations might limit tomorrow. Going forward, technology will likely continue to advance at or near the speed of Moore's Law, but as we know from past experience, even the best-written regulations are bridled by the speed of Congress.

Corporate Greed:

Since I know the definition of "greed," I'm not fond of the term "corporate greed." Greed is defined as "a selfish and excessive desire for more of something (as money) than is needed." While this sounds like a rather nasty trait, the corollary would suggest that anything we spend money on beyond food, shelter and caring for our health would be associated with greed. Capitalism and free market societies, which have been proven time and time again to be inherently more compassionate than socialism or communism, would also be inherently greedy.

Rather than debate the subjective merits and sins of greed, maybe we would be better off debating how we can deliver the inherent benefits of technological innovation, which if we applied a strict definition, would be largely products of someone's greed.

As it is viewed by most consumers, the Internet comes in two pieces. One piece is the connection between content providers like Google (GOOG) and the service providers. We'll call this the first mile. This connection is often viewed as one that goes from the content providers to the "Internet backbone," and from there to the ISPs. The so-called backbone is the superhighway portion of the Internet that is owned by companies like Level 3 (LVLT), TeliaSonera, CenturyLink (CTL), Vodafone (VOD) and the big three U.S. telecom companies, Sprint (S), AT&T and Verizon.

Well, that's kind of the way it used to be, but larger content providers like GOOG, NFLX and others have since created peering connections that bypass the backbone and use what are largely known as "content delivery networks" (CDN). There are also independent CDNs like Akamai (AKAM) that sell services to smaller content companies that cannot afford to build a dedicated CDN. ISPs have also begun getting into the CDN business, which, as I'll conclude below, is likely a good place for the government to add some degree of general regulatory control.

The other leg of the network is the connection that goes between the service providers and their end customers. This is often called the "last mile." Because there are so many "last miles," it is the most expensive piece of the Internet to upgrade.

As is the case across most of the Internet, upgrades to the last mile are often implemented in waves. A cable company, for example, might boost the bandwidth to a node that is shared by some number of customers, and with that, give all customers that are connected to that node more aggregate bandwidth at high demand times (prime TV viewing, etc.). Cable companies might also add new nodes, and with that, reduce the number of customers sharing each node.

Other methods that both cable and telecom companies have leveraged during the last few years, and which were fought at the FCC ahead of implementation, have included the deployment of newer technologies like switched digital video (SDV). With SDV, the service providers limit the number of TV channels delivered all the way to a given customer's home, and instead switch on the less-watched channels only when a given customer demands them. Customers don't mind this because technological advances make it possible to deliver these channels within fractions of a second after they are requested (if your system uses SDV, it's likely not something you notice).

The Last Mile:

In the last mile of the Internet, I want freedom - it is my connection and I want to be able to buy the features that I want. However, if S. 2917 had been passed into law, it would have been illegal for my service provider to deliver that freedom. The same would be true if the current Network Neutrality proposal were to become law.

The freedom I want is to be able to buy only as much bandwidth as I want, to connect to any legal web site or service, to connect any device that does not degrade services for other customers, to run any legal application, and to buy QoS for the services that need it. These freedoms are precisely what service providers want to deliver.

As a NFLX subscriber, I can buy my programming and QoS in bulk from NFLX, and it can in turn pay my service provider for the QoS services incrementally, as they are needed. In other words, while I don't want to pay the hundreds of dollars per month my service provider would charge for a full-time 24/7 SLA (see above), I would occasionally like to enjoy the bandwidth guarantee benefits of an SLA when I run certain applications, and I don't want my government getting in the way of this open market transaction.

If Network Neutrality (Title II regulation) becomes the law of the land, such an agreement will be illegal. One workaround for service providers that want to deliver an OTT service like NFLX will be to "channelize" the service (get NFLX from channel 700 on a cable service, for example). Since service providers will be able to deliver TV channels with QoS, assigning NFLX a channel gets them around the QoS rules, but all this monkey business will likely lead to consumers paying a lot more for NFLX than they do today.

What I do want is a government that maintains policies that optimize competition in the service provider market and minimize the regulatory hurdles new entrants must clear. GOOG has learned a number of brutal lessons about the high and shifting regulatory costs imposed by municipal governments and local Public Utility Commissions (PUC) as it has tried to deploy its 1 Gbps Google Fiber service in the Kansas City metro area.

The First Mile:

During the first lap of Network Neutrality a decade ago, GOOG was an outspoken supporter. After sitting quietly on the sidelines since about 2010, GOOG finally broke its silence on the topic a few months ago. In its official statement last September (2014), GOOG wrote in broad platitudes about the virtues of an open network, but was vastly more constrained than it was in 2006, and totally mum about what could be done in the first mile, where it currently has huge advantages over its competition.

A decade ago GOOG was one of only a few content providers that had built out its own CDN, and that meant GOOG had huge advantages over other companies that had to either wait their turn or pay substantial fees to companies like AKAM for similar services. This, in my opinion, was one of the reasons GOOG was an outspoken supporter of what was then being called last mile neutrality. The last mile was the only spot in the Internet where GOOG couldn't leverage its immense size to control the outcome, so with the first mile sewn up, blocking anyone from selling benefits in the last mile was the most logical strategy.

In the last mile, consumers should be able to decide when and if they want to buy QoS, and if they want to subscribe to a service like NFLX that will incrementally buy the QoS for them, that should be okay too. This open QoS policy doesn't provide large players with undue competitive advantages that cannot be replicated economically by smaller players. As a matter of fact, the policy would likely hasten capital investment in the last mile, and lead to many new services consumers could buy, and new market entrants anxious to provide those services.

A Solution:

There are no perfect solutions, and even if there were, given the rate of technological innovation, a solution that looks good today would likely be obsolete and an encumbrance to innovation in five years or less.

With that in mind, I think the best solution for all parties involved comes in three pieces. I most certainly think there is room to negotiate these points, but I also think they all merit consideration.

Last Mile:

• Open the last mile for options we don't have clearly protected today, which means enable, rather than block, the sale of QoS to consumers and consumer proxies. A consumer proxy would be a legal content provider, such as NFLX, that holds a subscription with a specifically identified consumer. This would enable content providers to pay service providers for streaming OTT content delivered to their subscribers with QoS.

First Mile:

• Force service providers out of the CDN industry - a service provider could not participate, directly or indirectly, as a CDN. This means service providers would have to divest current CDN operations, which would enable and encourage independent CDNs that have only direct commerce objectives (no incentive to slow traffic to give preference to content generated by service providers).

Overarching:

• When the Interstate Highway System was created in the 1950s, Congress passed broad and sweeping laws to provide the rights-of-way. These included even condemning and taking private land. Today we don't need anything nearly as substantial. While some topsoil will be disturbed, the Internet superhighway is easily buried, and the land above remains useful. However, we do need regulations to rein in the onerous and burdensome power wielded by municipal governments and local PUCs. With uniform federal regulations we can substantially lower costs and accelerate the deployment of new systems and technology.

• Force service providers to abide by a form of the U.S. government "most favored customer" clause. This does not mean every customer in the U.S. would have the same price, but it provides a baseline for regulators to determine if a service provider is taking undue advantage of a situation where there is limited competition.

Bottom Line: Regulating the Internet using Title II of the 1934 Telecom Act is the wrong solution. With that direction we are most likely to see less capital spending, and that means slower speeds, fewer features and higher costs.

As you contemplate whether or not you should support Network Neutrality, pause briefly to evaluate the taxes associated with your phone bill, cable bill and Internet service bill. For me, taxes and surcharges add 27% to my phone bill, 14% to my cable bill, and absolutely nothing to my Internet service bill. If that's not enough to sway your opinion, ask someone with gray hair if they miss the days when long distance calling was government regulated.

 

Paul McWilliams worked in the technology industry for nearly three decades prior to starting Next Inning Technology Research in 2002.  After a dozen years as its editor he retired from regular writing in 2014.  During that time he was a regular speaker at Gilder/Forbes Telecosm conferences.  Presently he works as an executive adviser for companies in the tech industry.  
