
An echo of Y2K

January 3rd, 2009

Microsoft Zune music players stopped working on New Year's Day because of a software bug, prompting the inevitable comparisons with the Y2K fiasco. The way in which the largely spurious Y2K problem was handled invites some interesting comparisons with the all too real problem of climate change. Although many billions of dollars were spent on making systems Y2K-compliant, there was no serious scientific study of the problem and its implications. The big decisions were made on the basis of anecdotal evidence and reports from consultants with an obvious axe to grind. Even the simplest objections were never answered (for example, many organisations started their 2000 fiscal year in April or July 1999, well before remediation was completed, and none had any serious problems). There was nothing remotely resembling the Intergovernmental Panel on Climate Change, let alone the vast scientific literature that needs to be summarised and synthesised for an understanding of climate change.

Thus, anyone who took a genuinely sceptical attitude to the evidence could safely predict that 1 January 2000 would pass without any more serious incidents than usual, even for the many countries and businesses that had ignored the problem. The retrospective evaluations of the policy were even more embarrassingly skimpy. I analysed some of the factors involved in this paper in the Australian Journal of Public Administration.

A really interesting point here is that, in the leadup to 1 January 2000, self-described global warming sceptics, for the most part, went along with the crowd. If any of them rallied to the support of those of us who called for a "fix on failure" approach, I didn't notice it. By contrast, the moment the millennium had arrived without incident, retrospective scepticism about Y2K became a staple of their rhetoric. The IPA, for example, started its commentary on 15 January 2000 and has kept it up ever since. Of course, I'm open to correction here. I'd be very interested if anyone could point to a piece published before 2000 taking a sceptical line on both Y2K and AGW.

  1. Martin
    January 3rd, 2009 at 18:25 | #1

    John,

    Y2K was a serious issue, and one that the software industry handled quite well. It was never the doom-laden catastrophe that some people made it out to be, but I had friends working on it in '97 and '98 for major banks, and they found several mission-critical errors, along with a host of other errors, that would have brought down major systems.

    The big risk for large companies that had been running software since the sixties was that they didn't know where they were going to have a problem, and, as with AGW, the potential cost of doing nothing was more than sufficient to outweigh the fact that failure wasn't 100% certain. So the remediation effort was well worth it as an insurance policy.

    Fix on failure was probably a reasonable option for organisations that had developed their software in the 80s or later or were not running systems that had a large impact on their business processes, but for organisations like the banks, that would have been a horrendous risk to take.

    Of course, as you point out, it is an interesting comparison with AGW.

  2. Alanna
    January 3rd, 2009 at 23:29 | #2

    There is a solitary Wall St Journal article in Factiva, dated 20 July 1998, expressing Y2K scepticism (there must be a few more somewhere, but the believers appear to hugely outnumber the disbelievers).

    Manager’s Journal: To Figure Out Y2K Hype, Follow the Money
    By Paul Kedrosky
    (then an assistant professor of commerce at the University of British Columbia in Vancouver)

  3. John Mashey
    January 4th, 2009 at 12:14 | #3

    I second Martin (#1), with some commentary based on visiting computer people worldwide from 1988 to 2000, including yearly visits to Oz/NZ.

    I read JQ's 2005 paper, which generally seemed reasonable, but I'd add a few key points:

    1) People *rationally* worried about Y2K problems when the following factors were in play:

    a) EARLY COMPUTERIZATION: key apps started on mainframes/early minis. People who started with UNIX or PCs rarely had this.

    b) MISSION-CRITICAL-ness.

    c) SCALE: big organizations have to worry about this much more, and of course they tend to have a) and b). In particular, they tend to have networks of interdependent apps of various vintages, major changes can take *years* to roll out, and consequences like "cannot run phone system, wait until we fix bug" are unacceptable.

    Basically, the companies and governments that were big enough to be computerizing heavily in the 1950s, 1960s and 1970s are the ones with the issue. For Y2K, the later you started, the better. In particular, UNIX and other systems that got popular in the 1980s tended not to have this problem (there is a 32-bit UNIX Year 2038 issue, but it's much less pernicious).
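
    For anyone who hasn't seen the bug itself, here's a minimal sketch in C, purely illustrative and not taken from any real system, of the century-boundary failure and of the separate, milder 32-bit seconds-counter limit:

        #include <stdio.h>
        #include <stdint.h>

        /* Illustrative only: a two-digit-year calculation that silently
           assumes every year is 19xx, as much legacy code did. */
        static int age_from_two_digit_years(int birth_yy, int now_yy)
        {
            return now_yy - birth_yy;
        }

        int main(void)
        {
            /* Someone born in 1965: */
            printf("Age computed in '99: %d\n", age_from_two_digit_years(65, 99)); /* 34  */
            printf("Age computed in '00: %d\n", age_from_two_digit_years(65, 0));  /* -65 */

            /* The separate UNIX problem: a signed 32-bit time_t counts seconds
               from 1970-01-01 and tops out at 2^31 - 1, which corresponds to
               19 January 2038. */
            printf("Last representable 32-bit second count: %ld\n", (long)INT32_MAX);
            return 0;
        }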

    Anyway, one would *expect* the USA to worry most about the problem. We simply have cases whose scale and "legacy-hood" aren't found anywhere else to as large an extent.

    I didn't expect small businesses *anywhere* to have a lot of bad Y2K problems. They tend not to have big legacy systems; most of what they use post-dates UNIX. Many have little or no internally-developed code of the troublesome sort, relying from the start on third-party packages that didn't exist when the big companies got rolling.

    (p. 50) In general, amongst computer-using countries, the *more* advanced countries (like the USA, Japan and Germany) have a longer computing history, and hence more legacy mainframes embedded in their operations. They still *build* mainframes.

    Italy doesn't surprise me at all, either [and I've been there half a dozen times helping sell computers or chips]. For the size of its economy, Italy is heavily skewed towards small/medium-sized businesses. The main indigenous computer supplier was Olivetti, which always focused on small office systems, never mainframes. I don't know about the Italian government, but I suspect it did not lead Europe in speed of computerization.

    Here's a list of the world's largest public companies. The USA had the problem.

    2) Australia could probably have taken its time. Oz is computer-hip, but you just didn't have that many organizations with the three issues in 1) above.

    I was in NZ in 1999, and the Kiwis had a good idea: a local NZ IT magazine’s Y2K story said:

    “People say when you land in Auckland, turn your watch back …. 20 years.
    So, let’s ALL do it, and we won’t have to worry about it for a while, and we can see how everyone else does.”

    3) But as an example of a problem of scale, in *1982* the old Bell System (1M+ people) already had 400+ operations support systems [software or software/hardware systems, many of which had started in the 1960s]. The Bell Labs Common Language Department existed to catalog these [the 1982 version ran to 500+ pages] and to coordinate the plethora of coding schemes & interfaces.

    Many of these things *still* exist. One (TIRKS) was already running when I joined the Labs in 1973, with several hundred programmers (not me, thank goodness). It was written in IBM S/360 Assembly Language [and as of ~2004, still was]. Over the years, *thousands* of programmers have been involved. It keeps big databases of equipment, and it interfaces with many other mission-critical systems.

    If the systems are out of service, it is *disastrous*.

    In the old days, a system like TIRKS would have one major release/year, and 3 concurrent releases would be supported:

    N: in production at many sites in field
    N-1: a few laggard sites, who couldn’t upgrade
    N+1: the next main release, probably in beta-test at 1-2 sites … running in parallel with N, with duplicated databases and elaborate scheduling efforts by hundreds of people.

    This kind of environment is *totally* different in its requirements, and fix-on-fail as a general solution there is *unthinkable*.

    4) The problem is usually not in the (relatively simple) date calculations; it's in the *databases* of bad dates, in interchange with other software, in backups, and in infrequently-run (but critical) code, which you'd better check out beforehand.
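
    To make that concrete, here is a purely illustrative sketch of the "windowing" style of fix that much remediation used; the pivot year of 30 is my own assumption for the example, and real systems chose pivots to suit their data. The arithmetic is trivial; the work was in finding every record layout, sort key, backup and interface where the bare two-digit value lived, and applying the same rule consistently:

        #include <stdio.h>

        /* Illustrative only: expand a stored two-digit year around a pivot.
           The pivot of 30 is an assumption made for this sketch. */
        #define PIVOT 30

        static int expand_year(int yy)
        {
            /* 00..29 -> 2000..2029, 30..99 -> 1930..1999 */
            return (yy < PIVOT) ? 2000 + yy : 1900 + yy;
        }

        int main(void)
        {
            printf("'65 -> %d\n", expand_year(65));  /* 1965 */
            printf("'02 -> %d\n", expand_year(2));   /* 2002 */
            return 0;
        }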

    Sometimes, source code gets lost, and old binary programs may even be running on different CPUs via emulation. All this gets worse when systems get networked together into mission-critical combinations …

    The Bell System had all three issues in 1). So did the US Government. I once saw the code used to run the air traffic control system. I still flew, but was in terror for a while.

    5) Anyway, this *was* a real problem for some people, and in fact, it really did take a lot of hard work. Sensible people worried about the subset of systems that mattered.

  4. Ubiquity
    January 4th, 2009 at 12:18 | #4

    The similarity between Y2K and AGW lies in the uncertainty and the potential damage both could cause. However, the difference in the response to the two is more relevant: AGW became a political agenda bankrolled by the state, whereas Y2K was largely left up to each business to solve, with some guidance from the state.

    (I remember paying for all my Y2K upgrades – but I am happy to be corrected).

  5. Alan
    January 4th, 2009 at 19:01 | #5

    Am I missing the point here?

    OK, the global warming denialists are inconsistent. So what? Prof. Q. can score debating points but that will not change their minds. At most, they might divert a little of their time and effort to pointing out the myriad ways in which Y2k was different.

    John Winston Rudd has already made up his mind to be a pusillanimous bureaucrat on the subject, and analogies with Y2K won't change government policy any more than they will change the story put out by Bolt, Plimer, Marohasy et al.

  6. John Mashey
    January 5th, 2009 at 08:16 | #6

    A better environmental comparison for Y2K would be acid rain:

    a) The problem accumulated over a relatively short historical period (modest number of decades).
    [Human modification of climate is a few centuries old, or if you believe Ruddiman, ~8,000 years.]

    b) It was a *serious* problem in some places [US NorthEast], moderate problem in some places, and a total non-problem over most of the Earth.

    [AGW is a global issue, even if the effects are seen earlier in some places, and some forthcoming effects are mixed blessings. For example, Canada gets a longer growing season, but also gets more West Nile virus, the bark beetles consuming the trees in British Columbia, and, in a few decades, will lack the cold spells to fend off "the plant that ate the South", kudzu. I see you have that in Australia as well, but you'll probably have more of it.]

    c) Once it was recognized in most of the places with the problem, it was (mostly) fixed by well-understood engineering efforts, site by site. The fixes were even rolled out much like software/system releases, i.e., no one rushed out to fix everything at the same time; new plants were built with scrubbers, and retrofits were done along with planned upgrades.

    [AGW solutions are much harder than either acid rain or Y2K].

    d) Once Y2K problems are fixed, they stay fixed, rather than causing continuing trouble in proportion to the original number present. Likewise, scrubbers keep working. In either case, the results of the fix can be seen relatively quickly, rather than having inertia for decades/centuries/millennia.

    Of course, the Y2K problem is essentially done with, whereas China in particular seems to be reintroducing acid rain.

  7. MH
    January 5th, 2009 at 20:48 | #7

    A key difference between Y2K and AGW is knowledge: most of the sceptics re AGW and climate change knew bugger all about computer systems and computer languages, and thus deferred to the experts, BUT everyone knows something about the weather, or thinks they do, so they buy into the climate change argument readily. Another point is that the Y2K problem presented an imminent and identifiable risk with possible catastrophic outcomes, whereas climate change is slow-moving in real time and so less evident unless you are very observant. There were plenty of denialists on Y2K issues; I'm not sure if they were the same fools. I run some old legacy systems that still have the occasional odd clock problem, which suggests to me that the experts did not get rid of all the bugs, but with a PC I can simply reset the clock and reboot; not so easy with an integrated client-server network based on a mainframe.

  8. David Irving (no relation)
    January 7th, 2009 at 11:57 | #8

    Not meaning to diss our host here; it's an observation only. In my experience most people who don't think Y2K was a problem are not particularly computer savvy, or at least they're only familiar with PCs and have had no exposure to large systems. Basically, it's much the same as with the AGW deniers: they think they understand the problem much better than they actually do.

  9. Alanna
    January 7th, 2009 at 13:42 | #9

    David, I think you are right on that one. It's just that, in doing some reading, it seems a lot of larger companies were working on and solving Y2K concerns a lot earlier than most people realise, and a lot earlier than it became fashionable in the media. Perhaps there was also, at the same time, a bit of late profiteering by the IT industry from firms that were never going to have a problem. That was to be expected, given the fear factor in the media.

  10. Philip B
    January 28th, 2009 at 10:53 | #10

    It surprised me when I first saw Y2K compared with AGW. I thought it was yet another tactic by increasingly desperate denialists, but it does seem to be getting a lot of currency lately.

    I worked on banking systems in the mid to late 90s fixing Y2K bugs (among other things). The bugs weren't hard to fix, but there were so many lines of code that had to be checked. I am quite sure that if they hadn't been fixed, the banks (at least the ones I worked for) would have been in quite a lot of difficulty.

    The argument that Y2K was a fraud is such a strange way of looking at it. The argument goes something like this: there was a problem. Lots of money was spent fixing it. The fixes worked. Therefore there was no problem in the first place.

    I find this quite bizarre. It's probably because most people do not understand the internals of computer systems very well, or their experience with computers is with modern, sophisticated GUI-based systems such as Windows or Macs. Most back-office systems were much more primitive, having been around since the 60s or 70s. The primitive nature of such systems would, if they could appreciate it, come as a great shock to most people.
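
    By way of illustration only (the field names and widths below are invented, not from any system I worked on), a back-office record often amounted to nothing more than fixed-width character fields, with the year carried as two digits and the century assumed by every program that read it:

        #include <stdio.h>
        #include <string.h>

        /* Invented example of a "primitive" fixed-width record: no types,
           amounts and dates stored as text, and only two digits for the year. */
        struct account_record {
            char account_no[10];
            char opened_ddmmyy[6];    /* e.g. "150365" = 15 March 1965 */
            char balance_cents[12];
        };

        int main(void)
        {
            struct account_record rec;
            memcpy(rec.account_no,    "0012345678",   10);
            memcpy(rec.opened_ddmmyy, "150365",         6);
            memcpy(rec.balance_cents, "000000123456",  12);

            /* Nothing in the record says which century "65" belongs to; that
               assumption lived in every program that read, sorted or printed it. */
            printf("Opened in year 19%.2s (by assumption)\n", rec.opened_ddmmyy + 4);
            return 0;
        }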

    Nevertheless, there was a lot of hype about Y2K. There was an expert (can’t remember his name) saying it would cause a major recession. There were also the survivalists. But those of us working on it understood it for what it was. Just another bug that had to be fixed – certainly a widespread bug, but one that was well understood.

    It disappoints me that what should be seen as a triumph is disparaged as a fraud.

  11. Ben
    January 28th, 2009 at 12:50 | #11

    There's a similar 'denialist' line of argument about the banning of CFCs and the hole in the ozone layer.

    Of course, what these people forget is that the Montreal Protocol is an extremely good case study of how international cooperation can be used to deal with these sorts of trans-border environmental issues.
