Kennys on Superfast: Some quick responses

I’ve now had a look at the study by Chris and Robert Kenny, Superfast: Is It Really Worth a Subsidy?. Some immediate responses:

* The study starts from a presumption that broadband policy is an intervention, which may be compared to a putatively natural market outcome. This assumption is clearly inapplicable to Australia, where Telstra used its monopoly position, as the only plausible supplier of new broadband infrastructure on a large scale, as a regulatory bargaining chip. The NBN was the only way to move forward.

* The study finds a variety of reasons to discount estimates of the benefits of superfast broadband, without giving any basis for lower estimates.

* (Very important, I think) The study lacks any sense of quantitative magnitudes. Looking at the (upper bound) estimate of $40 billion for the NBN, what would be a reasonable social return? Allowing for fairly rapid depreciation (say a 10-year lifetime) and a 5 per cent real rate of return, we would want a net service flow of $6 billion per year, about 0.5 per cent of GDP (the right measure in this case, since depreciation is taken into account). If we assume the network is implemented over 5 years, we need additional growth of 0.1 percentage points per year; a rough version of this arithmetic is sketched after this list. The estimates criticised in the report are far higher than this.

* While the report discounts various possible sources of demand (eg home nursing), it trivialises the obvious commercial benefits of faster and sharper video-on-demand, video telephony, immersive gaming and so on, and disregards the point that, on the basis of past experience, we can expect new uses of high-speed internet even if we can’t yet identify them.
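To make the arithmetic in the third point explicit, here is a minimal back-of-the-envelope sketch. The $40 billion cost, 10-year lifetime, 5 per cent real return and 5-year rollout are the figures used above; the GDP figure of roughly $1.2 trillion is an assumed round number for illustration, not something taken from the Kenny study.

```python
# Back-of-the-envelope: annual service flow needed to justify the upper-bound NBN cost.
cost = 40e9              # upper-bound NBN cost, $
lifetime = 10            # assumed asset life, years (fairly rapid depreciation)
real_return = 0.05       # required real rate of return

depreciation = cost / lifetime            # straight-line: $4bn per year
return_on_capital = cost * real_return    # $2bn per year
required_flow = depreciation + return_on_capital   # ~$6bn per year

gdp = 1.2e12             # assumed round figure for Australian GDP, $
share_of_gdp = required_flow / gdp        # ~0.5 per cent of GDP
rollout_years = 5
extra_growth_pa = share_of_gdp / rollout_years      # ~0.1 percentage points per year

print(f"Required flow: ${required_flow / 1e9:.0f}bn/yr, {share_of_gdp:.2%} of GDP, "
      f"~{extra_growth_pa:.2%} extra growth per year over the rollout")
```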

18 thoughts on “Kennys on Superfast: Some quick responses”

  1. The TV show Top Gear tries to tell us we really need cars that can do 300 kph rather than 200 kph. I’m not so sure. Having gone from 54 kbps dialup internet to 4 mbps satellite internet the improvement is wonderful. It’s more than adequate for streaming video. Do bush people really need any more? I wonder if bandwidth rather than speed is the real need.

    I have a relative with a 70km each way commute. I doubt she could work from home because the workplace culture says that we must be under the eagle eye of the management. When the IT system crashes at least everybody has made the effort to turn up and they won’t be mowing the lawn in the downtime. I suspect that realtime monitoring of the frail elderly will be a major application within 20 years. This will have a wireless component. Again I think bandwidth and reliability not speed are the parameters.

    If you cut the NBN budget in half that would buy a lot of clean energy technology.

  2. Chris Kenny?
    That’s not the same Chris Kenny, the tabloid journo from Adelaide once busted for obtaining a Labor MP’s private details from a bank manager under false pretences?

  3. @dez
    I use ‘bandwidth’ in the context of download limits. Satellite internet (iPstar) downloading fees are quite high (say $30 per GB) compared to urban ADSL fees. I haven’t spotted any rule of thumb that says higher speeds guarantee higher download limits; see this link, for example:
    http://en.wikipedia.org/wiki/Broadband_Internet_access
    Take the case of a remote farmhouse that is occupied by someone at risk of stroke or heart attack who is on a monitoring device. That device may have to report regularly to health professionals. The person may also use a VoIP phone to contact outside helpers in non-medical emergencies. That connection must be affordable and reliable, not necessarily superfast.
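    A rough sketch of the data volumes and costs involved, assuming (purely for illustration) that such a monitoring device reports a few megabytes a day and is charged at roughly the $30 per GB satellite rate mentioned above:

    ```python
    # Hypothetical monthly data volume and cost for a remote health-monitoring device.
    # The 5 MB/day reporting load is an assumption for illustration; the $30/GB charge
    # is the satellite figure quoted above, not an actual plan price.
    mb_per_day = 5
    days_per_month = 30
    price_per_gb = 30.0                     # $/GB on satellite

    monthly_gb = mb_per_day * days_per_month / 1024
    monthly_cost = monthly_gb * price_per_gb
    print(f"~{monthly_gb:.2f} GB/month, ~${monthly_cost:.2f}/month at satellite rates")
    ```

    In other words, the data volumes are tiny; it is the reliability (and the per-gigabyte price) that matters, not headline speed.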

  4. The terms bandwidth and speed are used pretty much interchangeably these days, though they are technically not exactly the same. Download limits are not at all the same as bandwidth.

  5. @Hermit
    ‘Bandwidth’, when talking about the internet, is usually used to refer to the rate of transfer of the data, ie speed (as you’ll notice in the link you give).

    You’re right that limits on the total amount that can be transferred (and uploads may also be included in these counts) are not dependent on bandwidth, but it is a connected issue. Greater bandwidth implies larger download limits: if I have 1 Gb/s bandwidth, my current monthly limit of 50 GB is not going to last long (~7 minutes; see the quick calculation at the end of this comment). I hope by then my limit is measured in TB!

    I don’t know of any service that turns off the data once the limit is reached. Usually it’s either choked back to a slower speed, or extra charges kick in. For the situation you describe, it’s true that affordability and reliability are what matter, but greater bandwidth will ensure that VoIP and remote health monitoring over IP (we need an acronym for this!) work well even if someone in the house is streaming video (and others are playing WoW, etc).
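    For the record, the “~7 minutes” above is just this arithmetic (a minimal sketch, using the 1 Gb/s and 50 GB figures from this comment):

    ```python
    # How long a 50 GB monthly quota lasts at a sustained 1 Gb/s.
    quota_gb = 50
    link_gbps = 1.0                      # gigabits per second
    gb_per_second = link_gbps / 8        # 0.125 gigabytes per second
    seconds = quota_gb / gb_per_second   # 400 seconds
    print(f"{seconds / 60:.1f} minutes") # ~6.7 minutes
    ```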

  6. Now, I always like higher bandwidth, especially as I used to help design supercomputers with some of the highest I/O and networking bandwidths at the time, and supercomputer centers were always looking for long-distance bandwidth.

    But really, while new applications come along, one just cannot rely on unknown new applications. Computer systems and networking people tend to know:

    a) How much end-to-end bandwidth is needed for each kind of application, such as video-streaming at specified resolutions and frame rates or video-conferencing of various natures.
    (The latter has additional response latency requirements for practical use).

    Really, this is not magic. In the early 1990s, we were doing a video-on-demand trial for Time-Warner in Florida, a good experiment, but the bandwidth needed was then just too expensive and unavailable for more than a trial.

    b) Then, you need to know the plausible usage patterns, given shared resources, i.e., *you* may be able to get adequate end-to-end bandwidth, but not when your neighbors all want to use much of it as well. Telecom engineers have done this kind of thing for many decades.

    c) While indeed surprise usages happen, given the nature of R&D, almost anything you might imagine being widespread years from now is probably in some R&D lab right now, where you can afford much higher bandwidth and can do experiments. That gives you a handle on the technology side a), although it doesn’t tell you as much about the market demand/volume side, b).

    More bandwidth is always better, but the question is: is it better enough to be worth the cost? It’s just like CPU performance or 3D graphics performance. The elasticities of demand versus price and performance matter and differ by person.
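    To put toy numbers on points a) and b), here is a small sketch; the per-stream bitrates, take-up and concurrency figures are illustrative assumptions only, not engineering data:

    ```python
    # a) Rough end-to-end bandwidth needs per application (illustrative figures, Mb/s).
    app_mbps = {
        "SD video stream": 3,
        "HD 1080p stream": 8,
        "HD video-conference": 4,   # also has latency requirements
        "VoIP call": 0.1,
    }

    # b) Shared-resource view: what a link serving many households must carry at peak.
    households = 200             # households sharing a backhaul link (assumption)
    peak_concurrency = 0.4       # fraction of households active at peak time (assumption)
    streams_per_household = 1.5  # simultaneous HD streams per active household (assumption)

    per_household_mbps = app_mbps["HD 1080p stream"] * streams_per_household
    peak_demand_mbps = households * peak_concurrency * per_household_mbps
    print(f"~{peak_demand_mbps / 1000:.1f} Gb/s of backhaul for {households} households at peak")
    ```

    The second half is the part the end user never sees: your own access link can be fast, but the shared links behind it have to be dimensioned for everyone wanting to use them at once.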

  7. Interesting critique. The lesson here is that while the government continues its obsession with secrecy, others will fill the information vacuum.

    For those of us who support the NBN, it’s getting just too hard to argue for it when there are no real numbers for us to use against the critical studies.

  8. I assume that the main advantage of building high speed internet in Australia is that it will save us a large amount of money on capital costs, as it will allow a great deal of computing to be done in the cloud, which will allow for greater efficiencies and lower costs than point of use computing. That is, it will be cheaper and easier for my mobile phone to use high speed internet to access computing power spread about the place to translate Korean or simulate having a girlfriend/boyfriend or whatever it is I want to do than it would be to buy a mobile phone with the power to do all that and more on its own.

    Of course, I could be wrong about this, but hopefully someone will point this out if I am.

  9. I’m most certainly no economist, but it seems to me that cost benefit analysis for this sort of thing is so heavily dependent on assumptions that you can get to whatever conclusion you want by jiggling the assumptions, which are not “provable” in any meaningful sense. In this context drawing on several-year-old studies of broadband may not be especially useful, as the technology available to the subscriber is changing so rapidly.

    If we consider video for example, it is argued that people are perfectly happy with 576i (standard-def DTV) or even youtube, though the latter is a bit of a stretch. But if we look back say 6 to 7 years, display technologies to make use of, say, 1080p cost an arm and a leg, video display controllers in general did not have hardware support for high def video, and hard disk storage for any decent amount of hi def video was prohibitively expensive. That’s at the subscriber end. At the service provider end, hardware capabilities and costs would have scaled much like end user costs. It would not really have mattered how many bits you could have pushed down the wire, high def video over IP was not a feasible proposition on any reasonable scale.

    We may be not quite there yet for hi def video over IP for widespread consumer use, but we will surely be there in the next ten years. We are certainly there as far as specialized applications such as eMedicine are concerned, which could be deployed “tomorrow” if the network infrastructure was up to the job.

    Predicting how fast the uptake of any of this stuff is likely to be is easy, but getting it right is nearly impossible, which is why cost benefit analyses may not be worth much at all.

    On a separate note, it has been observed that telcos burned a lot of money ten years ago in building out internet infrastructure in the dot com boom. This is certainly true. I worked for a consultancy doing this stuff for a large telco. Eventually the whole division was closed down and something like 5000 people were made redundant. Quite simply, they got ahead of the technology – not just from the network point of view but also from the viewpoint of the equipment and services available to the subscriber. It is far too glib to assert that history is repeating itself with the NBN. My view is that the timing of the NBN may be a tad early, but probably not by much. This of course is impossible to “prove” – sometimes you just have to take some risk.

  10. @Ronald Brak

    I don’t buy a lot of this cloud computing stuff. There is no disputing the value of a “cloud” for some things such as modeling in science and engineering for which there is no limit on the amount of compute power that can be productively used. But for ordinary consumers, I really don’t know what they could use it for – except possibly for some development of gaming.

    People simply do not comprehend the power of modern microprocessors which in general are quite adequate for most consumers. There may be some merit in translating Korean at the web server, but it’s not really going to save much money. The utility of web services will undoubtedly continue to grow, but that is not really cloud computing – at least not how I see it.

  11. I think having access to the cloud would be quite useful personally. For example, if I met a nice Polish woman at a conference and she called me, “Wielki odbyt,” with access to the cloud I could potentially translate what she was saying in real time. This seems like a much cheaper and easier option than buying a mobile phone with the number crunching ability to make accurate translations of spoken language, and also much cheaper than buying and storing programs with the ability to translate hundreds of languages on the off chance that I’ll need them. And, perhaps most importantly of all, no installation, compatibility or inexplicable crashing problems. (That alone should be worth the price of admission for many people.) In a similar vein to translation, there are a lot of people who will want their mobile phones to have accurate voice recognition so they can use conversational English to tell their phone what they want. Even the best systems currently truly suck at this and it’s not the sort of thing a mobile phone is going to be able to handle any time soon on its own without a Kurzweilian improvement in technology. And there are plenty of other neat things that can be done with a mobile phone and distributed computing once one has access to cheap and fast broadband.

  12. The report disregards a lot:

    1) Moving broadcast applications to NBN frees up valuable radio spectrum to be (potentially) used to increase bandwidth for the exploding use of mobile applications – which, ironically, is what the author recommends we do in his conclusion.

    2) The report does not address upstream speed other than in the context of smart grids. High-def video conferencing should have got a mention here – which would have hurt the argument.

    3) Does not account for the huge (and arbitrary) variation in speeds available to Australians today. Does not address the fact that this reduces the addressable market for high bandwidth apps, making them more expensive. As a corollary, our current network promotes the creation of broken apps designed to work with low-speed connections – also not addressed.

    4) Spruiks ADSL, but does not take into account that the speeds available with ADSL technology vary with distance from the exchange, which is a particular problem in Australia.

  13. At my workplace they have just moved to 10Gbps – yes, ten gigabits per second – network speed for cluster GPU/CPU setup(s). That’s so the data can be sucked off the network-attached-storage (NAS) and onto the highly data parallel GPUs, get crunched and spat back out to storage. The speeds to and from these modern clusters – getting the data to and from the cluster’s NAS in the first place – must follow the same trend in increasing bandwidth if the data-hungry clusters are to be kept busy. Since the cost of building these clusters is dropping quite quickly, they are going to be a factor driving bandwidth growth among the tech industries.

    These days, though, give a thought to things like modern MRI scans as an application waiting for good networks and by extension internet services. The MRI churns out raw data, which is sent to a GPU cluster for processing into voxels and a full 3D rendering of the scan data, then sent back to the MRI specialist to confirm the visualisation, then sent to the relevant doctor for inspection, with a round trip back to the MRI specialist to perform additional scans if necessary. All in real-time, or as close to it as possible. As opposed to spending a couple of weeks p!ss-f&rting around waiting for the printed scan to be checked by the GP and determined to be useless for diagnosis due to minor error/lack of resolution on the region of interest. Surely this is an area where efficiency and increased productivity – better utilisation at least – of specialist skills and time, and of expensive MRI equipment, can be a push factor for growth in ISP “bandwidth” offerings, for technical businesses at least. Many other such applications, in which an almost real-time experience is required among distantly connected participants and a data-heavy, bit-crunching process is part of the application’s function, can be dreamt up fairly easily.
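    To get a rough feel for why link speed matters for that kind of round trip, here is a sketch; the 2 GB raw-scan size and the link rates are assumed figures for illustration only:

    ```python
    # Time to move a hypothetical 2 GB raw MRI dataset over links of various nominal speeds.
    dataset_gb = 2.0   # assumed raw scan size, for illustration
    links_mbps = {
        "ADSL-class link (8 Mb/s)": 8,
        "Fibre (100 Mb/s)": 100,
        "Fibre (1 Gb/s)": 1000,
        "Cluster LAN (10 Gb/s)": 10000,
    }

    for name, mbps in links_mbps.items():
        seconds = dataset_gb * 8 * 1024 / mbps   # GB -> megabits, divided by link rate
        print(f"{name}: {seconds:.0f} s (~{seconds / 60:.1f} min) one way")
    ```

    Multiply by the several legs of the round trip described above, and the difference between “real-time” and “come back next week” is mostly in the network.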

  14. quokka: when you Google, you are in effect using a cloud.

    I offer a simple model to cover the way computing works:

    1) A person has an access device, perhaps with local storage and some amount of non-dedicated local compute power.

    2) Their device can access one or more networks, in ascending order of size and latency and descending order of bandwidth:
    a) Local
    b) (Sometimes, company or organization)
    c) Global

    3) “Cloud”
    3a) Backend compute
    3b) Backend storage, where 3a) and 3b) are sitting in computer rooms

    Historically, as items 1-3) have changed, the ratios of bandwidths (and sometimes latencies) of the networks, as well as costs everywhere, have kept changing the optimum distributed systems architecture. At one point, if disks weren’t attached directly to your compute complex, forget it … but as network bandwidths increased/costs lowered, more options worked.

    Some tasks are better done locally if you can, others better in the backend “cloud”. Some simply cannot be done anywhere else.

    For example, if you make a query that requires crunching a lot of data to produce a short answer, you want to send the query to a piece of the cloud that has lots of CPUs, lots of disks with high-bandwidth connections, and systems with high-bandwidth connections. That just doesn’t happen in your local device, even if you had infinite compute power; you still need access to the data. I can make a Google request from my iPhone in a coffee shop, and it works fine because all the real I/O is in a cloud at Google, followed by a minuscule delivery to my iPhone.

    On the other hand, it is much easier/cheaper to drive 3D graphics with a local GPU, not by having it a thousand miles away, and especially not on the other end of a best-efforts network where timing can get irregular and you can’t really buffer the way streaming video can.

    Of course, these days, 4) the “internet of things”, i.e., wireless sensor nets and other embedded applications, is growing rapidly, and that is somewhat of a new category in practice. People talk about smart fridges that order food for you when they are running low, although I worry one would call my cardiologist and tell her it’s seen a piece of chocolate cake.

    Watch out when your fridge sells Vi**gra to your microwave (H/T Kris Pister).
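    A toy version of the “send the query to the data” point a few paragraphs up; the data sizes and the access-link speed are illustrative assumptions:

    ```python
    # Compare shipping the data to the device with shipping the query to the cloud.
    data_gb = 500        # assumed size of the dataset the query has to scan
    query_kb = 1         # assumed size of the request
    answer_kb = 2        # assumed size of the result sent back
    link_mbps = 12       # assumed access-link speed

    def transfer_seconds(megabits, mbps):
        return megabits / mbps

    ship_data = transfer_seconds(data_gb * 8 * 1024, link_mbps)
    ship_query = transfer_seconds((query_kb + answer_kb) * 8 / 1024, link_mbps)

    print(f"Ship the data to the device: ~{ship_data / 3600:.0f} hours")
    print(f"Ship the query to the cloud: ~{ship_query * 1000:.1f} ms (plus backend crunch time)")
    ```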
