Back on blog

Hi everyone! I’ve been travelling in various parts of North Queensland, most recently beautiful Mission Beach, but now I’m back on air, though posting will still be fairly light for a week or so.

Attack of the clones?

The announcement (still not verified) that a cloned baby has been born seems likely to produce the usual handwringing about Brave New World and technology running ahead of the law. In fact, while it will be almost impossible to stop human cloning, my assessment is that the net social impact will be close to zero.
The basic premise for this claim is that hardly anyone wants a fundamental alternative to the traditional method of conception. All the popular applications of human reproductive technologies have involved making the traditional method work more reliably – either by enabling infertile couples to have children or by preventing the transmission of genetic defects.
Conversely, most of the supposed ‘Brave New World’ applications have been feasible, using low-tech methods, since the dawn of time, or at least since the basics of genetics became properly understood around a century ago. The only widespread example of genetic selection has been the use of amniocentesis, followed by selective abortion, for sex selection. This is just a marginally modified version of selective infanticide, though for some, the differences are crucial. A similar point applies to the most likely non-therapeutic use of cloning, to permit lesbians to have children without male intervention.
With these marginal exceptions, interest in human applications of genetic engineering, both high-tech and low-tech, has been close to zero. From attempts at promoting eugenic breeding in the early 20th century to the ‘genius’ sperm bank of the 1980s, hardly anyone has been interested in improving the genetic quality of the species, and particularly not if it involved removing their own genes from the pool (of course, as the famous Darwin awards attest, many of the less-fit manage to find creative ways of removing their genes from the pool before reproducing).
Coming back to cloning, if a science-fiction version of cloning were possible, producing exact adult copies of a given individual, there would probably be men rich and egotistical enough to go for it (I can’t imagine women being interested). But I doubt that many men would really want an identical twin thirty years younger than themselves, and whose inevitably disappointing behavior can’t be blamed on anyone else.

Futurology

Humans have always tried to predict the future, generally with limited success. Before the modern era, most attempts at prediction relied on magical approaches. The only science with a record of successful prediction was astronomy, and this success gave rise to its magical counterpart, astrology.
Over the course of the modern era, the predictive capacity of scientific disciplines, from geology and meteorology to demography and economics, improved steadily. Nevertheless, on most issues of central concern in human society, our capacity to predict events more than a few years into the future remains modest, especially by comparison with the grandiose claims made by the 19th-century pioneers of the social sciences.
The idea of futurology as an organised effort to predict the future became popular around the middle of the 20th century, and was most closely associated with Herman Kahn and the Hudson Institute. The aim of futurologists was to bring a wide range of disciplinary perspectives to bear on the task of predicting future developments in society and technology.
The most notable innovation in futurology was methodological. In place of forecasts, futurologists introduced the idea of ‘scenarios’, a metaphor taken from the technical language of scriptwriting. In futurology, a scenario is a general description of conditions which forms the basis of more detailed conditional predictions of specific outcomes. Thus, we might consider a scenario for 2050 in which the United Nations has become a world government or, alternatively, has ceased to exist.
Scenarios were initially used to add rigour to informal forecasts. Increasingly, however, they have been used as a way of specifying parameter values for simulation modelling using large-scale computer models. The first such exercise to gain widespread attention was the ‘Limits to Growth’ model produced by the Club of Rome in the 1970s. The weaknesses of this model, which predicted that severe shortages of most commodities would emerge by the 1990s, supported critics who argued that the large scale of the model distracted attention from fundamental theoretical difficulties, such as the failure to take price responses into account.
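To see why the critics’ point matters, here is a toy depletion model. All of the numbers and functional forms are invented for the purpose of illustration and have no connection to the actual Limits to Growth equations.

```python
# Toy resource-depletion model, invented to illustrate the criticism that
# the Limits to Growth exercise ignored price responses. None of the
# parameters or functional forms are taken from the actual model.

def years_until_exhaustion(elasticity):
    """Years until a finite resource runs out, under the toy dynamics."""
    reserves, base_demand = 1000.0, 10.0
    for year in range(1, 301):
        price = 1000.0 / reserves              # scarcity pushes the price up
        demand = base_demand * 1.03 ** year / price ** elasticity
        reserves -= demand
        if reserves <= 0:
            return year
    return None                                # not exhausted within 300 years

print(years_until_exhaustion(0.0))   # no price response: the shortage arrives on schedule
print(years_until_exhaustion(1.0))   # demand responds to price: exhaustion is postponed for generations
```

With a zero elasticity, demand grows exponentially regardless of scarcity and the resource runs out within a few decades; once demand is allowed to respond to the rising price, the predicted crisis recedes far into the future.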
Scenario-based approaches to prediction of the future do not lend themselves to empirical testing, since they are, in essence, conditional forecasts and the conditions are rarely satisfied exactly. It is safe to say, however, that the prediction of social events remains an unsolved problem. Since the act of making predictions may well affect social outcomes, the problem may in fact be insoluble.

Expected utility

In the 1930s and 1940s, economists reformulated economic analysis in terms of preferences, eliminating, seemingly once and for all, the troublesome notion of utility and the link between classical economics and utilitarianism. Almost immediately, however, the concept of cardinal utility was revived by von Neumann and Morgenstern in their analysis of behavior under uncertainty and its application to game theory, based on the idea of expected utility maximisation. When faced with an uncertain prospect, under which any of a set of outcomes could occur with known probabilities, von Neumann and Morgenstern suggested attaching a numerical utility to each outcome and evaluating the prospect by calculating the probability-weighted average of the utilities. This procedure is feasible only for cardinal measures of utility.
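To make the arithmetic concrete, here is a minimal sketch. The outcomes, probabilities and logarithmic utility function are purely illustrative choices, not anything specified by von Neumann and Morgenstern.

```python
# Minimal sketch: the expected utility of an uncertain prospect.
# The outcomes, probabilities and logarithmic utility function are
# illustrative assumptions, not part of the theory itself.
import math

outcomes = [100, 400, 900]          # possible wealth levels
probabilities = [0.5, 0.3, 0.2]     # known probabilities, summing to one

def utility(wealth):
    """A cardinal utility function; the logarithm is one conventional choice."""
    return math.log(wealth)

expected_utility = sum(p * utility(x) for p, x in zip(probabilities, outcomes))
print(expected_utility)             # the prospect is ranked by this single number
```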
Von Neumann and Morgenstern denied that the cardinal nature of the utility function they used had any normative significance, and most advocates of expected utility agreed. Savage (1954) warned against confusing the von Neumann–Morgenstern utility function with “the now almost obsolete notion of utility in riskless situations.” Arrow (1951) described cardinal utility under certainty as “a meaningless concept”. However, as Wakker (1991, p. 10) observes:
The same cardinal function that provides an expectation representing individuals’ preferences over randomized outcomes is also used to provide the unit of exchange between players. The applicability of risky utility functions as a means of exchange between players is as disputable as their applicability to welfare theory, or to any other case of decision making under certainty.
This view was adopted by Allais (1953), the most prominent critic of the expected utility model. Allais argued that a proper analysis of choice under risk required both a cardinal specification of utility as a function of wealth under certainty and a separate specification of attitudes towards uncertainty. Allais’ position has been strengthened by the development of the rank-dependent family of generalisations of the expected utility model (Quiggin 1982), in which there is a clear separation between diminishing marginal utility of wealth and risk attitudes derived from concerns about the probability of good and bad outcomes. These models have been combined with other generalisations such as the prospect theory of Kahneman and Tversky (1979).
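A minimal sketch of the rank-dependent calculation, assuming an illustrative square-root utility function and a convex probability weighting function (neither functional form is prescribed by the model):

```python
# Sketch of rank-dependent expected utility (RDEU) for a discrete prospect.
# The square-root utility function u and the convex weighting function q are
# illustrative assumptions; the model only requires that u capture attitudes
# to wealth and q capture attitudes to probability.
import math

def u(wealth):
    return math.sqrt(wealth)         # diminishing marginal utility of wealth

def q(p):
    return p ** 2                    # convex: extra weight on worse outcomes (pessimism)

def rdeu(outcomes, probabilities):
    """Rank-dependent expected utility of a prospect."""
    ranked = sorted(zip(outcomes, probabilities))   # worst outcome first
    value, tail = 0.0, 1.0           # tail = probability of doing at least this well
    for x, p in ranked:
        weight = q(tail) - q(tail - p)    # decision weight attached to outcome x
        value += weight * u(x)
        tail -= p
    return value

print(rdeu([100, 400, 900], [0.5, 0.3, 0.2]))
```

With q(p) = p the decision weights reduce to the raw probabilities and the formula collapses to ordinary expected utility; the curvature of u and the curvature of q can then be varied independently, which is the separation referred to above.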
The use of cardinal utility models of social choice has been encouraged by the popularity of contractarian models such as that of Rawls (1971). Rawls introduces the device of a ‘veil of ignorance’ behind which individuals choose social arrangements without knowing what place they will occupy in those arrangements. Rawls argues, largely on the basis of intuition about choices under uncertainty, that rational individuals will adopt a ‘maximin’ criterion, focusing on the worst possible outcome. This is an extreme form of the decision-weighting process represented in rank-dependent expected utility. From the maximin criterion of choice under uncertainty, Rawls derives his theory of justice based on concern for the worst-off members of the community. The approach used by Harsanyi (1953) may be interpreted in similar terms. Unlike Rawls, Harsanyi assumes that rational individuals seek to maximise expected utility. He therefore derives the conclusion that they will prefer utilitarian social arrangements.
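To see how the two criteria diverge, consider a purely hypothetical choice between two social arrangements; the arrangements, positions and payoff numbers are invented for the example.

```python
# Hypothetical illustration: choosing a social arrangement from behind the
# veil of ignorance. The arrangements and payoff numbers are invented, and
# each social position is assumed to be equally likely.

arrangements = {
    'egalitarian': [40, 50, 60],     # utilities of the three social positions
    'unequal':     [10, 60, 200],
}

def maximin(payoffs):
    """Rawls: judge an arrangement by its worst-off position."""
    return min(payoffs)

def expected_utility(payoffs):
    """Harsanyi: judge by the average utility, each position equally likely."""
    return sum(payoffs) / len(payoffs)

print(max(arrangements, key=lambda a: maximin(arrangements[a])))           # 'egalitarian'
print(max(arrangements, key=lambda a: expected_utility(arrangements[a])))  # 'unequal'
```

The maximin chooser picks the arrangement with the best worst case, while the expected utility chooser picks the one with the highest average, even though its worst-off position is much worse.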

Bibliography

Allais, M. (1987), The general theory of random choices in relation to the invariant cardinal utility function and the specific probability function: The (U, q) model – A general overview, Centre National de la Recherche Scientifique, Paris.
Arrow, K. (1951), ‘Alternative approaches to the theory of choice in risk-taking situations’, Econometrica 19, 404–437.
Harsanyi, J. (1953), ‘Cardinal utility in welfare economics and in the theory of risk taking’, Journal of Political Economy 61, 434–435.
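Kahneman, D. and Tversky, A. (1979), ‘Prospect theory: an analysis of decision under risk’, Econometrica 47(2), 263–91.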
Quiggin, J. (1982), ‘A theory of anticipated utility’, Journal of Economic Behavior and Organization 3(4), 323–43.
Rawls, J. (1971), A Theory of Justice, Clarendon, Oxford.
Savage, L. J. (1954), The Foundations of Statistics, Wiley, New York.
von Neumann, J. and Morgenstern, O. (1944), Theory of Games and Economic Behavior, Princeton University Press.
Wakker, P. (1991), ‘Separating marginal utility and probabilistic risk aversion’, paper presented at University of Nijmegen, Nijmegen.

Utility

The concept of utility in economics refers to the pleasure, or relief of pain, associated with the consumption of goods and services. The terminology is derived from the utilitarian theory of social choice proposed by Bentham in the 18th century. Disregarding the difficulties of constructing a numerical measure of utility, Bentham based his utilitarian theory on the proposition that political institutions should be organised to achieve ‘the greatest good of the greatest number’ by maximising the sum of individual utilities.

Although utilitarianism, with its emphasis on rational optimisation, was compatible with the spirit of classical economics, economists made little use of utility concepts until the neoclassical ‘marginalist revolution’, associated with the names of Jevons, Menger and Walras. Their central insight was that the terms on which individuals were willing to exchange goods depended not on the total utility associated with consuming those goods, but on the utility associated with consuming the last or ‘marginal’ unit of each good. The critical point is the principle of diminishing marginal utility, based on the observation that consumption of any commodity, such as water, is first directed to essential needs, such as quenching thirst, and then to less important purposes, such as hosing down pavements.
It is the utility associated with the marginal use of the commodity that determines willingness to engage in trade at any given prices. The use of the principle of diminishing marginal utility led to a resolution of the classical ‘paradox of value’, exemplified by the observation that, wherever water is plentiful and diamonds are not, diamonds are more valuable than water, even though water is essential to life and diamonds are purely decorative.
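A toy calculation makes the resolution concrete; the logarithmic utility function and the quantities are invented for the example.

```python
# Toy numbers for the paradox of value: willingness to pay tracks marginal,
# not total, utility. The logarithmic function and quantities are invented.
import math

def total_utility(litres):
    return 100 * math.log(1 + litres)

def marginal_utility(litres, extra=1.0):
    return total_utility(litres + extra) - total_utility(litres)

print(total_utility(10_000))       # enormous: water as a whole is essential
print(marginal_utility(10_000))    # tiny: one more litre is worth almost nothing
print(marginal_utility(1))         # large: where water is scarce, the marginal litre is highly valued
```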
The principle of diminishing marginal utility had egalitarian implications which Bentham almost certainly did not anticipate. If the marginal utility from consumption of an additional unit of each individual commodity is diminishing, the marginal utility from an additional unit of wealth must also be diminishing. If utility is represented as a real-valued function of wealth, diminishing marginal utility of wealth is equivalent to concavity of the utility function. If all utility functions are concave then, other things being equal, an additional unit of wealth yields more utility to a poor person than to a rich one, and a more equal distribution of wealth will yield greater aggregate utility.
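A minimal sketch of the egalitarian argument, assuming a logarithmic utility function, illustrative wealth levels and, crucially, that utilities can be compared across individuals (the assumption challenged below):

```python
# Sketch of the egalitarian implication of a concave utility function.
# The logarithmic form, the wealth figures and the assumption that
# utilities are comparable across people are all made for illustration.
import math

def utility(wealth):
    return math.log(wealth)          # concave: diminishing marginal utility of wealth

rich, poor, transfer = 1_000_000, 10_000, 1_000

before = utility(rich) + utility(poor)
after = utility(rich - transfer) + utility(poor + transfer)
print(after > before)                # True: the transfer raises aggregate utility
```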
The rise of positivism and behaviorism in the early 20th century reduced the appeal of theoretical frameworks based on the unobservable concept of utility. The ‘New Welfare Economics’, developed by Hicks (1938) and others, showed that ordinal concepts of utility, requiring only the use of statements like ‘commodity bundle A yields higher utility than commodity bundle B’, were sufficient for all the ordinary purposes of demand theory and could be used to derive a welfare theory independent of cardinal utility. An ordinal utility function allows the ranking of commodity bundles, but not comparisons of the differences between bundles.
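As a sketch of what ordinalism involves, the example below (the bundles and the multiplicative utility function are arbitrary illustrative choices) shows that any increasing transformation of a utility function preserves the ranking of bundles, which is all demand theory needs, while the differences between utility levels, the distinctly cardinal information, are not preserved.

```python
# Sketch: an ordinal utility function is defined only up to increasing
# transformations. The bundles and the multiplicative utility function
# are arbitrary illustrative choices.
import math

bundles = {'A': (4, 2), 'B': (2, 3), 'C': (1, 1)}

def u(bundle):
    x, y = bundle
    return x * y                     # one representation of the preferences

def v(bundle):
    return math.log(u(bundle))       # an increasing transformation of u

rank_u = sorted(bundles, key=lambda b: u(bundles[b]))
rank_v = sorted(bundles, key=lambda b: v(bundles[b]))
print(rank_u == rank_v)              # True: the ranking of bundles is unchanged
print(u(bundles['A']) - u(bundles['B']),
      v(bundles['A']) - v(bundles['B']))   # but utility differences are not preserved
```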
Opponents of egalitarian income redistribution also attacked the use of cardinal utility theories to make judgements about the welfare effects of economic policies. Robbins’ (1938) claim that all interpersonal utility comparisons were ‘unscientific’ was particularly influential in promoting the idea that cardinal utility concepts should be avoided. The basic difficulty is that there is no obvious way of comparing utility scales between individuals, and, in particular, no way of showing that two people with similar income levels get the same additional utility from a given increase in income.
The apparent coup de grace was given by Samuelson’s (1947) recasting of welfare economics in terms of revealed preference. Samuelson showed that the standard theory of consumer demand could be constructed without any overt reference to utility. Even the use of ordinal utility, Samuelson suggested, was purely a matter of expositional convenience. The analysis of consumer demand can be undertaken using only statements about preferences. Samuelson’s claim is correct in a formal sense.
However, consumers will have well-defined demand functions only if preferences over bundles of goods are convex, that is, if a bundle containing an appropriate mixture of two goods is preferred to either of two equally valued bundles each containing only one of the goods. The only plausible basis for postulating this kind of convexity of preferences is the principle of diminishing marginal utility.
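A small sketch of this point, assuming an additively separable square-root utility function (an arbitrary illustrative choice):

```python
# Sketch linking diminishing marginal utility to convex preferences.
# The additively separable square-root utility function is an
# illustrative assumption.
import math

def u(bundle):
    x, y = bundle
    return math.sqrt(x) + math.sqrt(y)   # diminishing marginal utility in each good

one_good_a, one_good_b = (10, 0), (0, 10)   # equally valued single-good bundles
mixture = (5, 5)                            # a bundle containing some of each

print(u(one_good_a), u(one_good_b))         # both about 3.16
print(u(mixture))                           # about 4.47: the mixture is preferred
```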
Moreover, as is discussed in the entry on expected utility, cardinal utility was no sooner driven out the front door of economic theory than it re-entered through the back gate of game theory and expected utility theory.

Bibliography

Robbins, L. (1938), ‘Interpersonal comparisons of utility: a comment’, Economic Journal 48(4), 635–41.
Samuelson, P. (1947), Foundations of Economic Analysis, Harvard University Press, Cambridge, Massachusetts.

Nationalisation

For most of the 20th century, growth in the scale and scope of government activity appeared to be an irreversible trend in developed economies, at least those that had not embraced full-scale socialism. Many observers predicted gradual convergence between the economic systems of the capitalist and communist blocs, with the final outcome being some form of mixed economy.
Expansion in the scale of government activity was primarily the result of expansion in the importance, relative to the economy as a whole, of the services provided by government, such as health, education and social welfare services. Expansion in the scope of government activity was the result of a range of policies including the nationalisation of private firms. Outside the United States, most infrastructure services, including railways, airlines, electricity and telecommunications were nationalised. Beyond the infrastructure sector, nationalisation policies varied from country to country, but included a range of financial services, manufacturing and mining.
The case for nationalisation rested in part on socialist views about the undesirability of private profit. In the mixed economy, however, no fundamental challenge to the legitimacy of private business was posed. Arguments for nationalisation of particular industries rested on the view that governments would manage these industries better than private owners, by avoiding the exploitation of monopoly and by undertaking better-planned investment.
By the 1970s, however, disillusionment with the performance of nationalised industries was widespread, and a range of theoretical arguments in favour of private ownership were developed. These arguments focused on the role of private capital markets in guiding investment and disciplining the managers and employees of businesses.
The first large-scale privatisation program was that of the Thatcher government in the United Kingdom during the 1980s. Other English-speaking countries, including Australia and New Zealand, followed the UK lead. Privatisation soon became part of the political orthodoxy throughout the world, especially after the collapse of the Soviet Union.
By the beginning of the 21st century, however, the first signs of a resurgence of public ownership were becoming evident. The collapse, and effective renationalisation, of Railtrack, the privatised owner of the British rail network, was an indication that governments could not walk away from the consequences of poorly-designed privatisations. Other English-speaking countries, which had taken the lead in privatisation, also appeared ready to reconsider the issue. New Zealand has renationalised its airline and accident compensation scheme, and re-established a publicly-owned bank.
The arguments for and against privatisation have become more sophisticated over the years. However, no final resolution of the debate is likely in the near future.

Monday Message Board

Like some other Ozploggers, I’m counting on my readers and commentators to keep the blog alive while I eat, drink and make merry for at least the next week! Comment on any topic (no coarse language and civilised discussion please).

Blogging Christmas

This is my first Christmas since I started blogging, and it’s a particularly big one as my son Leigh is getting married early in the New Year! I’ll be returning to the Deep North (Townsville and further) for a couple of weeks. The TiBook is coming with me, so there may be occasional posts, but obviously I’ll have more important things on my mind than blogging. Judging by visitor numbers over the past few days, a lot of readers have already blogged off, but I still feel the need to supply something for those who remain. Ken Parish has dealt with the problem by addressing a set of questions to his readers and letting them argue it out. The debate seems to be moving along pretty well, particularly on the perennial question “What should Labor do next?”. I’ll try to post the Monday Message Board as usual, but I thought I’d try something different.

Using the “Future post” facility of Blogger Pro, I’ve put up a series of posts on various aspects of modern thought (part of the dictionary project in which I’m involved) to be published at a rate of one per day. I’d really appreciate your comments. But if you’re the kind of person who prefers to rip open all their presents at once, the whole series is already available over at Modern thought.

I also plan, if I get time, to implement the “Best of …” feature discussed a while ago, resurrecting posts I found interesting and using them to fill programming gaps in the non-ratings season.

In case I don’t get back to blogging till 2003, I wish all my readers peace and happiness for the New Year, and a Merry Christmas to all who celebrate it.