Thursday, January 11, 2018

Study Finds Municipal and Competitive Private ISP Networks Have Lower Prices

A study of internet access prices in 27 communities where municipal internet access services are offered has found that “most community-owned FTTH networks charged less and offered prices that were clear and unchanging, whereas private ISPs typically charged initial low promotional or ‘teaser’ rates that later sharply rose, usually after 12 months.”


The comparisons covered entry-level services offering 25/3 Mbps in 2015 and 2016. In 23 of those communities, prices for the lowest-cost municipal plan were between 2.9 percent and 50 percent less than the lowest-cost such service offered by a private provider.


In the other four communities, a private provider’s service cost between 6.9 percent and 30.5 percent less.


The study supports the notion that more competition--of any sort--leads to lower consumer prices, whether the new competitor is a municipal provider or a private firm.


The study also suggests that municipal networks in particular lead to lower consumer prices. It might further support the thesis that municipal networks--which often also aim to boost internet access speeds--actually do so.


Those findings arguably are what one would expect if the objective of launching any such municipal broadband network is precisely to provide lower prices and higher speeds (lower price per unit of speed).


One could make the same predictions for private gigabit internet access providers such as Ting, which have a business objective of supplying gigabit internet access at far lower prices than offered by incumbent internet service providers.


The study, conducted by David Talbot, Kira Hessekiel and Danielle Kehl, and published by the Berkman Klein Center for Internet & Society, found that in 23 of the 27 cases, the community-owned FTTH providers’ pricing was lower when averaged over four years.
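To see why averaging over four years matters, consider how a teaser rate plays out over that window. The sketch below uses hypothetical prices, not figures from the study, to show how a promotional rate that resets after 12 months can produce a higher four-year average than a flat municipal rate:

```python
# Compare four-year average monthly prices: a flat municipal rate versus a
# private "teaser" rate that resets after 12 months. Prices are hypothetical,
# chosen only to illustrate the averaging method, not taken from the study.

def average_monthly_price(monthly_prices):
    """Average monthly price over the whole period."""
    return sum(monthly_prices) / len(monthly_prices)

MONTHS = 48  # four years, matching the study's averaging window

municipal = [60.00] * MONTHS                      # flat, unchanging rate
teaser = [45.00] * 12 + [75.00] * (MONTHS - 12)   # promo rate rises after 12 months

print(f"Municipal 4-year average: ${average_monthly_price(municipal):.2f}/month")
print(f"Private 4-year average:   ${average_monthly_price(teaser):.2f}/month")
# The teaser plan is cheaper in year one ($45 versus $60) but averages $67.50
# over four years, more than the flat $60 municipal rate.
```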



Wednesday, January 10, 2018

"Invest Where You Can Make Money," CenturyLink CFO Says

In regulated monopoly telecom markets, there is not much contradiction between investing and earning a return. That is what “guaranteed rate of return” means.

In competitive markets, investing where one can earn a return takes precedence. That has been the watchword in business customer markets for decades.

The new development is the extension of that concept to consumer markets. The new model, pioneered by Google Fiber, is to build facilities--gigabit internet access, for example--initially in areas where potential demand is highest.

“Instead of focusing capital on getting broadband speeds up to 10 to 20 Mbps, you would focus your money more surgically on areas that have higher population densities and better socioeconomic demographics that are in coexistence with businesses and where wireless infrastructure might be needed to get a better return on capital,” said Sunit Patel, CenturyLink CFO. “You would focus your capital on providing much higher broadband speeds than just offering 10 to 20 Mbps.”

So investing in “facilities where you can make money” is the new rule, followed by attackers and incumbents alike.

The other new wrinkle, arguably led by Verizon, is the deployment of “multiple-use” facilities that work for wholesale, retail enterprise and consumer customer segments.

Arguably, Verizon is relying on NG-PON2 capabilities--specifically the ability to peel off discrete wavelengths for individual customers--while using one physical infrastructure.

CenturyLink seems to be considering something similar, perhaps also using fixed wireless to reach consumers in areas where the optical infrastructure supports it.

The other possible conclusion one might draw from Patel’s comments is that, since CenturyLink competes against cable operators who routinely offer speeds of 100 Mbps to a gigabit per second, even investing on a wide scale to boost speeds to 20 Mbps likely does not produce an adequate return.

Such investment would mostly produce stranded assets.

That economic reality might be jarring for people who grew up with the regulated model, where it was virtually necessary to offer the same service “everywhere,” irrespective of actual purchasing behavior.

These days, incremental investment virtually has to be made first in areas where a return on the investment can be earned. That raises social issues, of course. But it now is virtually certain that “universal” service as a floor is different from highest-possible levels of service as a ceiling.

That always has been the case. High profits from business services and long distance subsidized the services provided to consumers. Urban customer profits supported money-losing operations in rural areas.

These days, profits from mobility arguably cover weaknesses in fixed network service revenues.  

CenturyLink and Verizon are not the only firms seeking to invest where a profit can be earned. But they also are especially interested in deploying new capital in a “multi-use” manner.

Monday, January 8, 2018

Telecom AI: Customer Service Now, Self-Organizing Networks, Eventually

Customer service seems to be the most-visible way that machine learning (artificial intelligence) actually is used today by telecom service providers. Chatbots are used to automate customer service inquiries and route customers to the proper support or sales agents. The Spectrum Virtual Assistant provides an example.
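For illustration, that routing step can be as simple as matching an inquiry against intent keywords. A minimal sketch, with hypothetical intents and keywords not drawn from any vendor’s actual product (production systems use trained language models rather than keyword lists):

```python
# Minimal sketch of chatbot intent routing: match a customer inquiry against
# keyword lists and route it to the appropriate queue. The intents and
# keywords are hypothetical; real systems use trained language models.

INTENT_KEYWORDS = {
    "billing": ["bill", "charge", "payment", "invoice"],
    "tech_support": ["outage", "slow", "modem", "reset", "down"],
    "sales": ["upgrade", "plan", "price", "new service"],
}

def route_inquiry(message: str) -> str:
    """Return the support queue whose keywords best match the message."""
    text = message.lower()
    scores = {
        intent: sum(keyword in text for keyword in keywords)
        for intent, keywords in INTENT_KEYWORDS.items()
    }
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    # Fall back to a human agent when nothing matches.
    return best_intent if best_score > 0 else "human_agent"

print(route_inquiry("My internet is slow and the modem keeps resetting"))
# -> tech_support
```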

The Angie interaction tool used by CenturyLink provides an example of an AI-powered system for generating retail sales leads.   

Atticus, an AI-powered chatbot, also is used by AT&T to provide information about TV content.

Machine learning also enables speech and voice services, such as voice remote control features. Comcast’s “X1” voice-powered remote is a good example of that.

In the network, machine learning is used for predicting network element failures.
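A minimal sketch of how failure prediction can be framed, as supervised classification over element telemetry. The features, data and thresholds here are invented for illustration; an operator would train on its own historical telemetry:

```python
# Minimal sketch of network element failure prediction as supervised
# classification. Features and data are synthetic stand-ins for real
# telemetry; production systems train on operator-specific history.

from sklearn.ensemble import RandomForestClassifier

# Each row: [temperature_c, crc_errors_per_hour, days_since_restart]
telemetry = [
    [45, 2, 10], [48, 1, 30], [70, 90, 200], [66, 50, 180],
    [50, 5, 60], [72, 120, 365], [47, 0, 5], [68, 75, 250],
]
failed_within_week = [0, 0, 1, 1, 0, 1, 0, 1]  # historical outcome labels

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(telemetry, failed_within_week)

# Score a currently running element; a high probability would trigger
# a proactive maintenance ticket before the element actually fails.
risk = model.predict_proba([[69, 80, 220]])[0][1]
print(f"Failure risk: {risk:.0%}")
```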


Verizon, for its part, has integrated AI into its Exponent service, a platform aimed at other service providers that supports the internet of things, media, internet and cloud computing services those providers sell to retail customers.

While it likely is a given that internet of things and artificial intelligence will get higher spending by all sorts of enterprises in the future, the precise degree of adoption within the telecom industry, and its impact, remain uncertain. Still, it seems obvious enough that machine learning will underpin moves towards self-healing and zero-touch networks that sense problems and make changes without manual human intervention.

SK Telecom is among telecom service providers using an AI-assisted management system known as T Advanced Next Generation Operational Supporting System (TANGO), based on big data analytics and machine learning, to automatically detect, troubleshoot and optimize its fixed and mobile networks.

That self-organizing network approach could become a reality for many more service providers over time.

How Fast Does Broadband Really Have to Be?

There now is debate over whether 10 Mbps or 25 Mbps should provide the baseline minimum definition of “broadband.” Leaving aside the commercial dimensions for a moment, the 25-Mbps standard is a bit problematic, as a “one size fits all” definition.

In a larger sense, the floor does not indicate the present ceiling. In most urban areas, people can buy 100-Mbps and faster service if they want it, on fixed networks. Also, speeds only matter in relation to what people want to do with their access.

And speed does not always take care of latency issues, which for some users already is the prime performance issue.

Beyond some ever-changing point, any single user can only effectively “use” so much bandwidth. Whether that threshold is 8 Mbps or some higher number, there is a point beyond which having access to faster speeds does not improve user experience.

For mobile apps, there arguably are few, if any, routine apps used by consumers that require more than about 15 Mbps.

For fixed accounts, there is debate about whether gaming or high-definition video has the most stringent requirements. Some suggest 4 Mbps is enough for gaming. Others think 10 Mbps to 25 Mbps is required.
Activity                                        Minimum Download Speed (Mbps)

General Usage
  General Browsing and Email                    1
  Streaming Online Radio                        Less than 0.5
  VoIP Calls                                    Less than 0.5
  Student                                       5 - 25
  Telecommuting                                 5 - 25
  File Downloading                              10
  Social Media                                  1

Watching Video
  Streaming Standard Definition Video           3 - 4
  Streaming High Definition (HD) Video          5 - 8
  Streaming Ultra HD 4K Video                   25

Video Conferencing
  Standard Personal Video Call (e.g., Skype)    1
  HD Personal Video Call (e.g., Skype)          1.5
  HD Video Teleconferencing                     6

Gaming
  Game Console Connecting to the Internet       3
  Online Multiplayer                            4


For fixed accounts, the major variable is likely to be the number of concurrent users, not the actual apps being used at any time. In other words, it typically is multi-user households that require speeds in excess of 25 Mbps.

Basic web surfing and email might require less than 5 Mbps, according to Netgear. Web surfing or single-user streaming might require 10 Mbps.

Online gaming might require speeds of 10 Mbps to 25 Mbps. Beyond that, consumer connections mostly hinge on the number of concurrent users, assuming each concurrent user is a gamer or watches lots of high-definition video.

By some estimates, users heavily reliant on cloud storage might need 50 Mbps per user.

All those estimates probably assume a pattern of one bandwidth-intensive activity at a time by any single user. As always, there is a difference between “peak” potential usage and “routine” usage, which will be lower.
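To make the concurrency point concrete, peak household demand can be estimated by summing the per-activity minimums from the table above for each simultaneous user. A rough sketch, using the table’s figures:

```python
# Estimate peak household demand by summing per-activity minimums from the
# table above for each concurrent user. A rough sizing heuristic only: real
# traffic is bursty, so simultaneous peaks are a worst case, not routine usage.

ACTIVITY_MBPS = {
    "hd_video": 8,          # streaming HD video (5 - 8 Mbps, upper bound)
    "uhd_4k_video": 25,     # streaming Ultra HD 4K video
    "online_gaming": 4,     # online multiplayer
    "browsing_email": 1,    # general browsing and email
}

def peak_demand_mbps(concurrent_activities):
    """Sum the minimum speeds for everything happening at once."""
    return sum(ACTIVITY_MBPS[a] for a in concurrent_activities)

household = ["uhd_4k_video", "hd_video", "online_gaming", "browsing_email"]
print(f"Peak demand: {peak_demand_mbps(household)} Mbps")
# -> 38 Mbps: a four-user household can exceed the 25-Mbps floor easily.
```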

Also, it is not so clear how fast the typical fixed connection now operates.

On one hand, average access speeds in 2016 were, by some measures, already in excess of 50 Mbps. If so, it really does not matter whether the floor is set at 10 Mbps or 25 Mbps. Other estimates of average 2016 speed put the figure in excess of 31 Mbps.

On the other hand, in 2017, the “average” U.S. internet access connection ran at 18.75 Mbps, by some estimates. If that is true, then the definitions do matter.

Using the 25-Mbps standard, many--perhaps most--common access services, including Wi-Fi, many fixed access connections, satellite access and mobile connections (at some locations and times), are not “broadband,” even if people actually use them in that way.

The definitions matter most where it comes to mobile internet access, which arguably is the way most people actually use internet access on any given day.

Fixed network internet access subscriptions in the United States have declined in recent years, falling from 70 percent of households in 2013 to 67 percent in 2015, for example.

Some 13 percent of U.S. residents rely only on smartphones for home internet access, one study suggests. Logically, that is more common among single-person households, or households of younger, unrelated persons, than families. But it is a significant trend.

Some suggest that service providers are actively pushing mobile services as an alternative to fixed access, for example.

In fact, some studies suggest that U.S. fixed internet access peaked in 2009 and is slowly declining, though others suggest growth continues. Some studies also suggest U.S. fixed network subscriptions declined in 2016, for example.

The point is that it is getting harder to clearly delineate internet access by the type of connection. And, until 5G is ubiquitous, mobile, satellite, non-5G fixed wireless and public Wi-Fi speeds will lag.

That, it can be argued, means a single definition does not work for every access method and network. Though 5G likely will change matters, access speed on most networks other than cable TV or fiber-to-home platforms will vary dramatically. And those other networks arguably carry most of the traffic, and represent much of the value of internet access.

That is not an argument for maintaining “slow” access on any network, but simply to note that people use all sorts of networks daily, and most of those networks, while providing satisfactory experience, do not run as fast as fixed networks of the cable TV or fiber-to-home variety.

In other words, it arguably makes little sense to define out of existence many access connections that work well enough to support nearly all the apps and use cases buyers actually want to use.

In early 2017, the typical U.S. mobile user, for example, had routine access at speeds ranging from about 15 Mbps to 21 Mbps.



Public hotspot speeds are less than 10 Mbps, according to a study by Ooma. The HughesNet and Exede satellite services now operate at 25 Mbps in the fastest retail tier.

That, of course, is the reason some prefer using the 25-Mbps standard: it creates a “problem” to be solved.

But it is a bit problematic when “most connections” able to support nearly all consumer end user requirements are deemed “not broadband.”

Is Architecture Destiny?

“Architecture is destiny” is one way of looking at the ways networks are able to support--or not support--particular use cases. Coverage, latency and capacity always are key issues. So one reason low earth orbit satellite constellations are important is that they potentially change that architecture, easing the latency and capacity constraints that traditionally have limited the use of satellite networks as point-to-point networks.

On one hand, one-to-many use cases are the classic advantage of broadcast networks (TV, radio, satellite broadcasting), in terms of efficient use of capacity. It is hard to beat the cost-per-delivered-bit advantage of any multicast (broadcast) network that is optimized for one-to-many use cases.

On the other hand, architecture also shapes other potential use cases, beyond the matter of bandwidth efficiency.

Geosynchronous satellite networks have round-trip latency of about 500 milliseconds. That means geosynchronous satellites are not appropriate for real-time apps that require low latency (less than 100 milliseconds).
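The 500-millisecond figure follows directly from geometry. A back-of-envelope check, assuming the standard geostationary altitude of about 35,786 km and ignoring processing and queuing delay, which add further latency:

```python
# Back-of-envelope check on the ~500 ms figure: propagation delay to a
# geostationary satellite, ignoring processing and queuing delay.

SPEED_OF_LIGHT_KM_S = 299_792
GEO_ALTITUDE_KM = 35_786

one_hop = GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S    # ground -> satellite
one_way = 2 * one_hop                              # up and back down to ground
round_trip = 2 * one_way                           # request plus response

print(f"One-way path: {one_way * 1000:.0f} ms")    # ~239 ms
print(f"Round trip:   {round_trip * 1000:.0f} ms") # ~477 ms, before processing
```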

Where neither latency nor bandwidth is a particular concern, however, most two-way networks could find roles in supporting sensor communications, which are almost exclusively many-to-one (point-to-point, or sensor to server).

In other words, most two-way networks (not TV or radio broadcast networks or simple bent-pipe uplink networks, including satellite networks supporting TV distribution) can theoretically support some internet of things and machine-to-machine sensor networks.

Many of those apps are not latency dependent, nor do they require lots of bandwidth. Instead, the key real-world constraints are likely to be sensor network element cost and bandwidth cost (cost to move Mbytes).

That, in fact, is the battleground for mobile and low-power wide area networks. The argument has been that LPWANs could move sensor data at far lower cost than mobile networks, in addition to having a transponder cost advantage. Some note that is likely to change over time, with cost differentials narrowing substantially, if not completely.
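A simple way to see that cost battleground is to compare a flat per-device LPWAN tariff against metered mobile pricing for the same sensor fleet. All figures below are hypothetical, invented purely for illustration, not actual operator pricing:

```python
# Illustrate the LPWAN-versus-mobile cost argument with hypothetical tariffs:
# an LPWAN plan priced per device per year versus a mobile IoT plan priced
# per megabyte moved. All figures are invented for illustration.

def annual_cost(devices, mb_per_device_year, per_device_year=0.0, per_mb=0.0):
    """Total yearly connectivity cost for a sensor fleet."""
    return devices * (per_device_year + mb_per_device_year * per_mb)

FLEET = 10_000          # sensors deployed
USAGE_MB = 12           # each sensor sends roughly 1 MB per month

lpwan = annual_cost(FLEET, USAGE_MB, per_device_year=1.00)   # flat yearly fee
mobile = annual_cost(FLEET, USAGE_MB, per_mb=0.50)           # metered per MB

print(f"LPWAN:  ${lpwan:,.0f}/year")    # $10,000
print(f"Mobile: ${mobile:,.0f}/year")   # $60,000; gap narrows as per-MB rates fall
```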

One way to describe the unique role for 5G is to say that 5G will have unique advantages for real-time apps requiring ultra-low latency or ultra-high bandwidth. Autonomous driving is a good example of the former, while augmented reality and virtual reality are good examples of apps requiring both ultra-low latency and ultra-high bandwidth.

Mobile cloud-based enterprise apps might be an example of new use cases where ultra-high bandwidth is a requirement.

The point is that 5G and IoT use cases will hinge--as all apps running at scale do--on the architectural capabilities of various networks and the cost of communicating over those networks.

Non-real-time apps of any bandwidth can be handled by any number of networks. Content distribution arguably can be supported by both point-to-point and multicast (broadcast) networks.

But ultra-low-latency apps or ultra-high-bandwidth apps arguably require 5G (advanced 4G might work as well).

Low-bandwidth sensor networks arguably can be supported by almost any two-way network in a technology sense, but might vary based on cost-to-deploy and cost-to-use dimensions.

High bandwidth uplinks will work best on bi-directional networks with lots of capacity in the upstream direction, when such networks operate at scale. So long as actual demand is low or highly distributed, more networks could work.


Sunday, January 7, 2018

Telcos and Fintech, Blockchain

Caution is a reasonable attitude for most communications service providers to take towards blockchain-related or other fintech ventures, though baby steps already seem to be underway.

The best reasons for caution are based on history. “Telcos” in general have a poor track record of creating sustainable new app or platform businesses with scale, beyond their core access operations.

Also, blockchain is a potentially transformative financial technology (fintech) development, and tier-one telcos have in recent years tried to create a role for themselves in retail mobile payments, without much success.

Fintech generally includes a huge range of functions and applications, all of which essentially could disrupt banking and financial services:
  • Payments
  • E-commerce
  • Credit
  • Ordering
  • Insurance
  • Savings
  • Banking
  • Risk assessment
  • Accounting
  • Remittances
  • Corporate finance
  • Investing
  • Consumer lending
  • Mortgages
  • Cryptocurrency
  • Mobile wallets

That noted, some mobile payments and banking services have achieved moderate success. Mobile banking has proven sustainable in several market segments (small business lending, consumer remittances and payments) and countries. Africa, Scandinavia, Eastern Europe, India and Mexico are among regions where mobile operators have had success with mobile banking and payments.



But there have been big failures, mostly in other developed countries, where telcos in recent years have gained little traction in mobile payments.

All that noted, for access providers who wish to survive and thrive, moving up the stack into new platforms, apps and services beyond connectivity is essential. If fintech, like the internet of things, proves to be a huge growth area, telcos are almost forced to consider how they will become a bigger part of those ecosystems.


