Oddly enough, although we seem to be overwhelmed with alleged sceptics on other topics, only a handful of people challenged the desirability of spending hundreds of billions of dollars to fix a problem which was not, on the face of it, any more serious than dozens of other bugs in computer systems. Admittedly not all the money was wasted, since lots of new computers were bought. But a lot of valuable equipment was prematurely scrapped and a vast amount of effort was devoted to compliance, when a far cheaper “fix on failure” approach would have sufficed for all but the most mission-critical of systems.
As far as I know, there was no proper peer-reviewed assessment of the seriousness of the problems published in the computer science literature. Most of the running was made by consultants with an axe to grind, and their scaremongering was endorsed by committees where no-one had any incentive to point out the nudity of the emperor.
Why was there so little scepticism on this issue? An obvious explanation is that no powerful interests were threatened, while some, such as consultants and computer companies, stood to gain. I don’t think this is the whole story, and I tried to analyse the process here, but there’s no doubt that a reallocation of scepticism would have done us a lot of good on this question.