It’s time for another Monday Message Board. Post comments on any topic. Civil discussion and no coarse language please. Side discussions and idées fixes to the sandpits, please.
The 1970s saw two important and influential publications in the long debate over justice, equality and public policy. In 1971, there was Rawls’s A Theory of Justice, commonly described in terms like “magisterial”. Then in 1974, at lunch with Jude Wanniski, Dick Cheney and Donald Rumsfeld, Arthur Laffer drew his now-eponymous curve on a napkin. Of course there was nothing new about the curve: it’s pretty obvious that an income tax levied at rates of either zero or 100 per cent isn’t going to raise any money, and interpolation does the rest. What was new was the Laffer hypothesis, that the US at the time was on the descending side of the curve, where a reduction in tax rates would raise tax revenue.
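The interpolation argument can be sketched in a few lines. This is a purely illustrative toy model (the quadratic functional form and the base figure are my assumptions, not anything Laffer claimed): revenue is zero at both endpoints, so it must peak somewhere in between.

```python
# Toy Laffer curve: revenue = rate * reported income, where reported
# income (illustratively) shrinks to zero as the rate approaches 100%.
def revenue(rate, base=100.0):
    """Tax revenue at a given rate in [0, 1]."""
    return rate * base * (1.0 - rate)

assert revenue(0.0) == 0.0   # no tax, no revenue
assert revenue(1.0) == 0.0   # full confiscation, no reported income

# Revenue peaks somewhere in between (at 50 per cent for this toy form):
peak = max(range(101), key=lambda r: revenue(r / 100))
print(peak)  # 50
```

The Laffer hypothesis is then the empirical claim that actual rates sat to the right of the peak, which the interpolation argument alone cannot establish.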
I’ve always understood Rawls in terms of the Laffer curve, as arguing in essence that we should be at the very top of the curve, maximizing the resources available for transfer to the poor, but not (as, say, Jerry Cohen might have advocated) going further than this to promote equality.
A couple of interesting Facebook discussions have led me to think that I might be wrong in my understanding of Rawls and that the position I’ve imputed to him is actually far closer to that of classical utilitarianism in the tradition of Bentham (which is, broadly speaking, my own view).
Facebook has its merits, but promoting open public discussion isn’t one of them, so I thought I’d throw this out to the slightly larger world of blog readers.
I wrote not long ago about the zombie idea that the US ban on agricultural use of DDT, enacted in 1972, somehow caused millions of people elsewhere in the world (where DDT remains available for anti-malaria programs) to die of malaria. A thorough refutation is now available to anyone who cares to look at Wikipedia, but the notion remains lurking in the Republican hindbrain.
So, with the recent outbreak of Ebola fever (transmitted between humans by direct contact and bodily fluids), the free-association process that passes for thought in Republican circles went straight from “sick people in Africa” to “DDT”. Ron Paul was onto the case early, with stupid remarks that were distilled into even purer stupidity in a press release put out by his organization. Next up, Diana Furchgott-Roth, of the Manhattan Institute.
And here’s the American Council on Science and Health.
… persuade them to stop being rightwingers
I have a piece in Inside Story arguing that the various efforts to “frame” the evidence on climate change, and the policy implications, in a way that will appeal to those on the political right are all doomed. Whether or not it was historically inevitable, anti-science denialism is now a core component of rightwing tribal identity in both Australia and the US. The only hope for sustained progress on climate policy is a combination of demography and defection that will create a pro-science majority.
With my characteristic optimism, I extract a bright side from all of this. It has three components:
(a) The intellectual collapse of the right has already proved politically costly, and these costs will increase over time.
(b) The cost of climate stabilization has turned out to be so low that even a delay of 5-10 years won’t render it unmanageable.
(c) The benefits in terms of the possibility of implementing progressive policies such as redistribution away from the 1 per cent will more than offset the extra costs of the delay in dealing with climate change.
I expect lots of commenters here will disagree with one or more of these, so feel free to have your say. Please avoid personal attacks (on me or each other), suggestions that only a stupid person would advance the position you want to criticise and so on.
fn1. Or, in the case of young people, not to start.
Now that renewable energy sources like wind and solar PV are cheaper than new coal-fired power stations in most jurisdictions (anywhere with either favorable conditions or a reasonable carbon price), the big remaining question is that of supply variability/intermittency. As I’ve argued before, this problem is greatly overstated by critics of renewables who assume that the constant 24/7 supply characteristic of coal is the ideal. In fact, this constant supply produces a mismatch with variable demand and current pricing structures are set up to deal with this. A system dominated by renewables would have different kinds of mismatch and require different pricing structures.
That said, for a system dominated by solar PV, meeting demand in the late afternoon and evening will clearly depend on a capacity to store energy in some form or another. There are lots of options, but it makes sense to look first at relatively mature technologies like lithium and lead-acid batteries. Renewable News is reporting a project in Vermont, which integrates solar PV and storage.
The 2.5-MW Stafford Hill solar project is being developed in conjunction with Dynapower and GroSolar and includes 4 MW of battery storage, both lithium ion and lead acid, to integrate the solar generation into the local grid, and to provide resilient power in case of a grid outage.
The project cost is stated at $10 million, or $4m/MW of generation capacity.
Assuming this number is correct, let’s make some simplifying assumptions to get a rough idea of the cost of electricity and the workability of storage. If we cost capital and depreciation at 10 per cent, assume 1600 hours of full output per year, and ignore operating costs, the cost of electricity is 25c/kWh. There would presumably be some distribution costs, given the need to connect to the grid. Still, given that Vermont consumers are currently paying 18c/kWh, this doesn’t look too bad. A carbon tax at $75/tonne would make up the difference.
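For anyone who wants to check the arithmetic, here it is in a few lines (all inputs are the simplifying assumptions above, not measured data):

```python
# Rough levelised cost for the quoted project figures.
capital_cost = 10_000_000               # dollars, stated project cost
annual_charge = 0.10 * capital_cost     # 10% p.a. for capital + depreciation
capacity_mw = 2.5                       # generation capacity
annual_output_kwh = capacity_mw * 1600 * 1000   # 1600 full-output hours/year

cost_per_kwh = annual_charge / annual_output_kwh
print(round(cost_per_kwh * 100, 1))     # cents per kWh
```

With these numbers, $1m a year spread over 4 million kWh gives the 25c/kWh figure.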
How would the storage work? I’m starting from scratch here, so I’ll be interested in suggestions and corrections. I assume that the storage is ample to deal with short-term (minute to minute or hour to hour) fluctuations, which are more of a problem for wind.
How about on a daily basis? It seems to me that the critical thing to look at is the point in the afternoon/evening at which consumption exceeds generation (as I mentioned, prices matter a lot here). This is the point at which we would like the batteries to be fully charged. The output assumption suggests an average of about 12 MWh generated per day. If we simplify by assuming that the cutoff time is 6pm and that output drops to zero after that, the system requires that 8 MWh be used during the day and 4 MWh at night. That wouldn’t match current demand patterns, but if you added in some grid connected power (say, from wind, which tends to blow more at night) and shifted the pricing peak to match the demand peak, it would probably be feasible.
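A quick check of the daily figures, again under the output assumptions already stated (the 6pm cutoff and the day/night split are the simplifications above, not data):

```python
# Daily energy balance, back-of-envelope.
annual_output_mwh = 2.5 * 1600            # 4000 MWh/year
daily_output_mwh = annual_output_mwh / 365
# This comes to roughly 11 MWh/day; a round 12 is close enough here.

daily_mwh = 12.0                          # round daily figure
evening_mwh = 4.0                         # demand after the 6pm cutoff
daytime_mwh = daily_mwh - evening_mwh     # 8 MWh used while the sun shines
# The evening load (4 MWh) is what the batteries must hold at 6pm.
print(daytime_mwh, evening_mwh)
```

The point of the exercise is just that the battery capacity needed at the evening cutoff is set by after-dark demand, not by total daily output.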
As regards seasonal variability, this would be a problem in Vermont, where (I assume) the seasonal demand peak is in winter. But in places like Queensland, with a strong summer peak, a system with lots of solar power should do a good job in this respect.
What remains is the possibility of a long run of cloudy days, during which solar panels produce 50 per cent or less of their rated output. Dealing with such periods will require a combination of pricing (such periods can be predicted in advance, so it’s just a matter of passing the price signals on to consumers), load-shedding for industrial customers and dispatchable reserve sources (hydro being the most appealing candidate, given that potential energy can be stored for long periods, and turned on and off as needed).
To sum up, we aren’t quite at the point where PV+storage is a complete solution, but we’re not far off.
George Brandis’ spectacular live meltdown over metadata retention has distracted attention from the abandonment of the government’s plans to repeal Section 18C of the Racial Discrimination Act, prohibiting the kind of racial abuse dished out by the likes of Andrew Bolt and Fredrick Toben. Abbott’s rationale is that a purist attitude to freedom of (racially divisive) speech is something we can’t afford, given the need to unite against terrorism.
Obviously, neither Bolt nor Toben is a member of Team Australia. Each makes it their primary business to stir up hatred, in Toben’s case against Jews and in Bolt’s case against (among many others) the “muslims, jihadists, people from the Middle East” he sees as responsible for Abbott’s backdown. The striking conflation of religion, geographical origin and terrorism is typical of Bolt’s approach.
Horrible as he is, though, Toben is not a serious problem. His Holocaust denialism is universally reviled, and it is a sign of strength, not weakness, in our democracy that he is free to walk the streets. Repealing the constraints imposed on him by 18C would only emphasise this.
Bolt is another story. It is his case that led the government to seek the repeal of 18C, and that motivated George Brandis’ gaffe (that is, a politically inconvenient statement of an actual belief) that people have a right to be bigots. Far from being reviled, Bolt has been embraced and coddled by the government, to the point of having exclusive access to the Prime Minister. He enjoys a well-rewarded position in the Murdoch Press. Even casting the net wider among our so-called libertarians, I can’t recall seeing a harsh word against Bolt. He’s a tribal ally and his bigotry is either endorsed or passed over in silence.
It’s impossible, in these circumstances, for the government to be taken seriously when they mouth the (apocryphal) Voltaire line about defending to the death speech with which they disagree. The repeal of 18C was clearly intended as an endorsement of Bolt, and not a statement of bare toleration. That position is now untenable, and it’s too late to switch back to Voltaire.
In summary, those on the right lamenting the continued existence of 18C ought to reflect on the fact that it’s their own overt or tacit endorsement of bigotry that’s brought this about. If they cleaned house, and dissociated themselves from the likes of Bolt, their claims to be supporting free speech might acquire a little more credibility.
fn1. I was going to add Sheikh Hillaly to this list. But based on this report, he seems to have joined the Team.
I got lots of very helpful responses to my recent post on the search theory of unemployment, here and at Crooked Timber. But it has occurred to me that I haven’t seen any answer to one crucial question: How many offers do unemployed workers receive and decline before taking a new job, or leaving the labour market? This is crucial, both in simple versions of search theory and in more sophisticated directed search and matching models. If workers don’t get any offers, it doesn’t matter what their reservation wage is, or what they judge the state of the market to be. Casual observation and my very limited experience, combined with my understanding of the unemployment benefit rules, is that very few unemployed workers receive and decline job offers, except perhaps for temporary work where the loss of benefits outweighs potential earnings. Presumably someone must have studied this, but my Google skills aren’t up to finding anything useful.
And, on a morbidly humorous note, it’s a sad day for the LNP when efforts to bash dole bludgers actually cost them support. But that seems to be the case with the latest plans, both expanded work for the dole and the requirement for 40 job applications a month. I’ll leave it to Andrew Leigh to take out the trash on work for the dole (BTW, his new book, The Economics of Just About Everything, is out now).
The 40 applications requirement has already been the subject of some amusing calculations. I want to take a slightly different tack. Suppose (to make the math simple) that the average job vacancy lasts a month. There are roughly five unemployed workers for every vacancy, so meeting the target will require an average of 200 applications per vacancy. The government will be checking for spam, so let’s suppose that all (or at least a substantial proportion) of the applicants take some time to talk about how they would be a good fit with the employer and so on. Dealing with all these applications would be a mammoth task. One option would be to pick a short list at random. But there’s a simpler option. In addition to the 200 required applications from unemployed people, most job vacancies will attract applications from people in jobs. A few of them may be looking for an outside offer to improve their bargaining position with their current employer (this is a big deal for academics), but most can be assumed to be serious about taking the job and confident that they have a reasonable chance of getting it. So, the obvious strategy is to discard all the applications except for those from people who already have jobs. What if there aren’t any of these? Given that formal applications are going to be uninformative, employers may pick interviewees at random or may resort to the informal networks through which many jobs are filled already.
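The 200-applications figure follows directly from the simplifying assumptions above:

```python
# Applications arriving at each vacancy, on the post's assumptions:
# every unemployed worker files 40 applications a month, there are
# five unemployed workers per vacancy, and each vacancy lasts a month.
applications_per_worker_per_month = 40
unemployed_per_vacancy = 5
vacancy_duration_months = 1

applications_per_vacancy = (applications_per_worker_per_month
                            * unemployed_per_vacancy
                            * vacancy_duration_months)
print(applications_per_vacancy)  # 200
```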
Trying to relate this back to theory, the effect of a requirement like this is to negate the benefits of improved matching that ought to arise from Internet search. By providing strong incentives to provide a convincing appearance of looking for jobs for which workers are actually poorly suited, the policy harms both employers and unemployed workers who would be well suited to a given job.
Update: I found the following quote widely reproduced on the web:
On average, 1,000 individuals will see a job post, 200 will begin the application process, 100 will complete the application,
75 of those 100 resumes will be screened out by the Applicant Tracking System (ATS) software the company uses,
25 resumes will be seen by the hiring manager, 4 to 6 will be invited for an interview, 1 to 3 of them will be invited back for final interview, 1 will be offered that job and 80 percent of those receiving an offer will accept it.
Data courtesy of Talent Function Group LLC
Visiting the TFG website, I couldn’t find any obvious source. The numbers sound plausible to me, and obviously to those who have cited them. But, if the final number (80 per cent acceptance) is correct, then it seems as if the search theory of unemployment is utterly baseless. Assuming independence, the proportion of searchers who reject even three offers must be minuscule (less than 1 per cent).
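The "less than 1 per cent" claim is easy to verify under the stated independence assumption:

```python
# If 80 per cent of offers are accepted, and acceptance decisions are
# independent across offers, the share of searchers who turn down
# three offers in a row is tiny.
p_accept = 0.80
p_reject_three = (1 - p_accept) ** 3
print(round(p_reject_three, 3))  # 0.008, well under 1 per cent
```

Of course, independence is doing real work here: if rejections are concentrated among a minority of searchers with high reservation wages, the share rejecting three offers could be somewhat higher than this calculation suggests.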
One of the striking features of (propertarian) libertarianism, especially in the US, is its reliance on a priori arguments based on supposedly self-evident truths. Among[^1] the most extreme versions of this is the “praxeological” economic methodology espoused by Mises and his followers, and also endorsed, in a more qualified fashion, by Hayek.
In an Internet discussion the other day, I was surprised to see the deductive certainty claimed by Mises presented as being similar to the “certainty” that the interior angles of a triangle add to 180 degrees.[^2]
In one sense, I shouldn’t be surprised. The certainty of Euclidean geometry was, for centuries, the strongest argument for the rationalist claim that we could derive certain knowledge about the world.
Precisely for that reason, the discovery, in the early 19th century, of non-Euclidean geometries that did not satisfy Euclid’s parallel postulate was a huge blow to rationalism, from which it has never really recovered.[^3] In non-Euclidean geometry, the interior angles of a triangle may add to more, or less, than 180 degrees.
Even worse for the rationalist program was the observation that the system of geometry (that is, “earth measurement”) most relevant to earth-dwellers is spherical geometry, in which straight lines are “great circles”, and in which the angles of a triangle add to more than 180 degrees. Considered in this light, Euclidean plane geometry is the mathematical model associated with the Flat Earth theory.
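The spherical case can be checked numerically. A minimal sketch (the vertex coordinates and helper function are mine, chosen for illustration): take the “octant” triangle with one vertex at the north pole and two on the equator, 90 degrees of longitude apart. Its sides are all great-circle arcs, and each of its three angles is a right angle.

```python
import math

def angle_at(a, b, c):
    """Angle of a spherical triangle at vertex a (unit vectors on the sphere)."""
    def tangent(p, q):
        # Direction of the great-circle arc from p towards q:
        # the component of q orthogonal to p, normalised.
        d = sum(pi * qi for pi, qi in zip(p, q))
        t = [qi - d * pi for pi, qi in zip(p, q)]
        n = math.sqrt(sum(x * x for x in t))
        return [x / n for x in t]
    u, v = tangent(a, b), tangent(a, c)
    return math.degrees(math.acos(sum(x * y for x, y in zip(u, v))))

# Octant triangle: north pole, plus two equatorial points 90 degrees apart.
A, B, C = (0, 0, 1), (1, 0, 0), (0, 1, 0)
total = angle_at(A, B, C) + angle_at(B, C, A) + angle_at(C, A, B)
print(round(total))  # 270, well over the Euclidean 180
```

Smaller spherical triangles have angle sums closer to 180 degrees, which is why flat-earth (Euclidean) surveying works well enough at local scales.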
John Howard’s endorsement of Ian Plimer’s children’s version of his absurd anti-science tract Heaven and Earth has at least one good feature. I can now cut the number of prominent Australian conservatives for whom I have any intellectual respect down from two to one. Howard’s acceptance of anti-science nonsense shows that, for all his ability as a politician, he is, in the end, just another tribalist incapable of thinking for himself. 
Although not all the tribal leaders are old men, an old, high-status man like Howard is certainly emblematic of Australian delusionism. Like a lot of old, high-status men, he stopped thinking decades ago, but is even more confident of being right now than when he had to confront his prejudices with reality from time to time. Like other delusionists, Howard has no scientific training, shows no sign of understanding statistics and almost certainly hasn’t read any real scientific literature, but nonetheless believes he can rank clowns like Plimer and Monckton ahead of the real scientists.
The situation in the US is similar but even more grimly amusing, with the sole truthteller in the entire Republican party, Jon Huntsman, recently reduced to waffling (in both US and UK/Oz senses of this term) because he briefly looked like having a chance to be the next non-Romney. This tribal mindlessness is reflected in the inability of the Republican Party, at a time when they ought to be unbackable favorites in 2012, to come up with a candidate who can convince the base s/he is one of them, but who doesn’t rapidly reveal themselves as a fool, a knave or both.
And, as evidence of the utter intellectual shamelessness of delusionism, you can’t beat the campaign against wind power, driven by the kinds of absurd claims of risk that would be mocked, mercilessly and deservedly, if they came from the mainstream environmental movement.
The global left is in pretty bad shape in lots of ways. Still, I would really hate to be a conservative right now.
fn1. Now down to zero. Turnbull has proved he lacks any real substance.
fn2. I’m not saying that all Australian conservatives are mindless tribalists. There’s a large group, epitomized by Greg Hunt and now Malcolm Turnbull, who understand the issues quite well, but are unwilling to speak up. Then there is a group of postmodern conservatives of whom Andrew Bolt is probably the best example, who have passed the point where concepts of truth or falsehood have any meaning – truth is whatever suits the cause on any given day.
I’ve been working for quite a while now on a book which will respond to Henry Hazlitt’s Economics in One Lesson, a book first published in 1946 that has remained in print ever since. It’s an adaptation of the work of the 19th century French free-market advocate Frederic Bastiat for a US audience, specifically aimed at refuting the then-novel ideas of Keynes.
My planned title is Economics in Two Lessons. In my interpretation, Hazlitt’s One Lesson is that prices are opportunity costs. My Second Lesson is that, in the absence of appropriate government policy, private opportunity costs (market prices) won’t reflect social opportunity costs. Here’s a central piece of the argument, responding to Hazlitt’s exposition of Bastiat’s glazier’s fallacy.
I was very pleased with my post on this topic, making the point that standard microeconomic analysis only works properly on the assumption that the economy is at a full employment equilibrium.
But, it turns out, exactly the same point, using the same title, was made by David Colander 20 years ago:
Colander, D. (1993), ‘The Macrofoundations of Micro’, Eastern Economic Journal, Vol. 19, No. 4 (Fall), pp. 447–457.
And he wasn’t the first. The term and the idea have a long history, including a contribution by my UQ colleague Bruce Littleboy
The term macrofoundations, I suspect, has been around for a long time. Tracing the term is a paper in itself. Axel Leijonhufvud remembered using it in Leijonhufvud  . I was told that Roman Frydman and Edmund Phelps  used the term and that Hyman Minsky had an unpublished paper from the 1970s with that title; Minsky remembered it, but doubted he could find it and told me that he used the term in a slightly different context. I was also told by Christof Ruhle that a German economist, Karl Zinn, wrote a paper with that title for a Festschrift in 1988, but that it has not been translated into English. I suspect the term has been used many more times because it is such an obvious counterpoint to the microfoundations of macro, and hence to the New Classical call for microfoundations. While he does not use the term explicitly, Bruce Littleboy , in work that relates fundamentalist Keynesian ideas with Clower and Leijonhufvud’s ideas, discusses many of the important issues raised here.
I have a piece in The National Interest, looking at various recent events including the latest round of the Argentinian debt crisis, in which a New York court ruled in favor of a group of ‘vulture’ investors, led by a New York billionaire, and the agreement of the US Department of Justice and Citibank, involving a financial settlement to avoid a lawsuit over bad mortgage deals and CDOs in the pre-crisis period.
My central observation is that while legal forms are being observed, these are obviously political processes, with outcomes reflecting relative political power rather than any kind of neutral application of the law. So, the world financial system is part of international power politics: it matters a lot that Citibank is a US bank, while BNP Paribas is French and so on. This is very different from the picture of a global financial system independent of, and standing in judgement on, national governments that seemed to be emerging in the 1990s.
As an illustration, I found this ad put out by the ‘vultures’. To see my point, try interchanging “US” and “Argentina” throughout and assuming an adverse judgement by an Argentinian court against the US government.
This is going to be a long and wonkish post, so I’ll just give the dot-point summary here, and let those interested read on below the fold, for the explanations and qualifications.
* The dominant model of unemployment, in academic macroeconomics at least, is based on the idea that unemployment can best be modelled in terms of workers searching for jobs, and remaining unemployed until they find a good match with an employer.
* The efficiency of job search and matching has been massively increased by the Internet, so, if unemployment is mainly explained by search, it should have fallen steadily over the past 20 years.
* Obviously, this hasn’t happened, but economists seem to have ignored this fact, or at least not worried too much about it.
* The fact that search models are more popular than ever is yet more evidence that academic macroeconomics is in a bad way.
Tony Abbott hasn’t exactly covered himself in glory on his overseas trip. But he has found one ally: Canadian PM (at least until next year’s election) Stephen Harper, also a climate denialist. They made a joint statement denouncing carbon taxes as “job killing”. I didn’t notice any massive destruction of jobs when the carbon price/tax was introduced in 2012, but rather than do my own analysis, I thought I’d take a look at the government’s own Budget outlook, to see how many jobs they claim to have been destroyed by the carbon tax, and what great benefits we can expect from its removal. Here’s the relevant section of the summary (note that the outlook is premised on the Budget measures being passed):
The Australian economy is in the midst of a major transformation, moving from growth led by investment in resources projects to broader‑based drivers of activity in the non‑resources sectors. This is occurring at a time when the economy has generally been growing below its trend rate and the unemployment rate has been rising. During this transition, the economy is expected to continue to grow slightly below trend and the unemployment rate is expected to rise further to 6¼ per cent by mid‑2015.
In this environment, the Government is focused on implementing measures to support growth and jobs while putting in place lasting structural reforms to restore the nation’s finances to a sustainable footing. The timing and composition of the new policy decisions mean that the faster pace of consolidation in this Budget does not have a material impact on economic growth over the forecast period, relative to the 2013‑14 Mid‑Year Economic and Fiscal Outlook (MYEFO).
Since MYEFO, the near‑term outlook for the household sector has improved. Leading indicators of dwelling investment are consistent with rising activity, while household consumption and retail trade outcomes have improved recently, consistent with gains in household wealth. This is partly offset by weaker business investment intentions, particularly for non‑resources sectors.
The outlook for the resources sector is largely unchanged from MYEFO. Resources investment is still expected to detract significantly from growth through until at least 2015‑16, as reflected in the outlook for investment in engineering construction which is forecast to decline by 13 per cent in 2014‑15 and 20½ per cent in 2015‑16. Rising resources exports are only expected to partially offset the impact on growth. Overall, real GDP is forecast to continue growing below trend at 2½ per cent in 2014‑15, before accelerating to near‑trend growth of 3 per cent in 2015‑16.
The labour market has been subdued since late 2011, characterised by weak employment growth, a falling participation rate and a rising unemployment rate, although outcomes since the beginning of 2014 have been more positive. The unemployment rate is forecast to continue to edge higher, settling around 6¼ per cent, consistent with the outlook for real GDP growth. Consumer price inflation is expected to remain well contained, with moderate wage pressures and the removal of the carbon tax.
The reference to the CPI effects of the carbon price (around 0.4 per cent) is, as far as I can tell, the only mention of the carbon tax in the whole of the Economic Outlook statement.
I have a couple of pieces up on the topic that’s likely to consume much of my attention for some time to come: Piketty’s Capital in the 21st century.
Here’s a long review article at Inside Story focusing on the conditions that have made Piketty a bestseller. And here, at The Drum is my take on claims by Chris Giles at the Financial Times that Piketty’s data is fatally flawed.
Update: Piketty has responded to the Financial Times. To sum up, as I said in the Drum piece, the criticisms are (mostly incorrect) nitpicks except for the point about UK wealth inequality. Here Piketty’s demolition is convincing. The FT hasn’t used a consistent series. Rather, it’s taken a recent survey estimate (likely to underestimate wealth) and spliced it onto older estate data to produce the counterintuitive finding that the inequality of wealth hasn’t increased.
Like lots of other readers of Thomas Piketty’s Capital, my big concern is not with the accuracy of the diagnosis and prognosis but with the feasibility of the prescription. Piketty’s proposal for a global wealth tax requires an end to the capacity of capital to escape taxation by exploiting the limitations of national taxation systems, through tax havens, transfer pricing, artificial corporate structures and so on.
Given the limited record of success in past efforts to control global tax evasion and avoidance, Piketty is reasonably pessimistic about efforts in this direction. But the latest news from the OECD is remarkably positive. All members of the OECD (notably including evader-friendly jurisdictions like Austria, Luxembourg and Switzerland) have agreed to a system of automatic information exchange for tax purposes. Moreover, the “too big to jail” status of major banks engaged in facilitating tax evasion and money laundering, may finally be coming to an end.
On the face of it, the oft-repeated, but so far unjustified claim that “the days of tax havens are over”, may finally be coming true, at least for all but the wealthiest individuals. But the crackdown on individual tax evaders only points up the ease with which corporations (and individuals with the means to establish complex corporate structures) can avoid tax through a mixture of legal avoidance and unprovable evasion (for example, by illegal but unprovable internal transfers).
At the core of the problem is the ability to establish corporations in ways that make their true ownership impossible to trace. And, the jurisdiction most responsible for this is not a Caribbean island or European mini-state, but the “First State” of the US – Delaware, which has long been the preferred location for US incorporation by reason of its business friendly laws.
The efforts of the right to discredit Piketty’s Capital have so far ranged from unconvincing to risible (there’s a particularly amusing one from Max Hastings in the Daily Mail, to which I won’t bother linking). One point raised in this four-para summary by the Economist is that “today’s super-rich mostly come by their wealth through work, rather than via inheritance”. Piketty does a good job of rebutting this, but for those who haven’t acquired the book or got around to reading it, I thought I’d repost my own response, from 2012.
When people call for a university system more like that of the US, they commonly have in mind the idea that Australia should have institutions like Harvard and Princeton, and a belief that more competition in tertiary education would bring this about. There are a couple of obvious problems with this.
First, high-status universities like this provide undergraduate education to only a tiny proportion of young Americans. Around 1 per cent of the college age cohort attends high-status private institutions like the Ivy League unis, Chicago and Stanford, and this proportion has been declining steadily over time. Most of the Ivies enrol no more undergrads than they did in the 1950s. Adjusting for population, an Australian Ivy League would consist of a single institution enrolling perhaps a thousand students a year.
Second, the US experience shows that the idea of competition between universities is a nonsense. Harvard, Princeton and the rest were the leading universities in North America before the US even existed, and they are still the leaders today. The newest of the really high status universities is probably Stanford, founded in 1885. Competition between universities is pretty much the same as the competition between the Harlem Globetrotters and the Washington Generals.
The reality of US education is a highly stratified system. Below the high-status private universities are the “flagship” state universities, which educate around 10 per cent of the college age cohort (again, a proportion that is declining, or at best stable).
After that, there are lower-tier state universities, two-year community colleges and, worst of all, for-profit degree mills like the University of Phoenix which exist largely to lure low-income students into debt and extract Federal grant money, with only a minority ever completing their courses.
Australia has always had a stratified system, but to a much lesser extent. (More on the history when I get a chance). The big question facing policy is whether to increase stratification, by widening the gap between the “Group of 8” and the rest, or to treat tertiary education like other public services, available to all who can benefit from it, at the best quality we can provide for everyone.
University education systems mirror and recreate the society to which they belong. A highly stratified system, like that in the US and UK, reflects and reinforces a class-bound society in which the best thing you can do in life is to choose the right parents. We should be aiming at less stratification, not more.
Update: Just by chance, one of the lead articles in the NY Times advises that, thanks to increased international intakes, the number of places for domestic undergraduates at the Ivies has fallen sharply.
All through the Bligh government’s three-year campaign to sell public assets, I challenged Treasurer Andrew Fraser to a public debate on the issue, or at least to a response to the criticisms I and other economists made of the government’s case. Fraser never responded: even when we spoke at the same event (to a friendly business-oriented crowd) he gave his speech and left before anyone else was allowed on the platform. Doubtless, he made the judgement that this was the politically clever thing to do: by sticking to events that could be scripted, and relying on the authority of Queensland Treasury, he maintained control of public discussion. We all know how that worked out.
Now there’s a new Treasurer, pushing the same arguments. I challenged Tim Nicholls to a debate on the “StrongChoices” campaign. I don’t suppose he’s going to respond in person, but he has at least acknowledged my criticisms (as reported by Paul Syvret) and attempted to rebut them in this piece in the Courier-Mail.
Nicholls’ argument is confused, as the case for asset sales has always been, but he does make at least some progress. The usual magic pudding is in evidence: selling assets is supposed to repay debt, finance new infrastructure spending and obviate the need for higher taxes to maintain services, all at the same time.
But there is one point of light: responding to my observation that the StrongChoices website counts the interest savings from selling assets and paying down debt, but not the foregone earnings of public enterprises, Nicholls says
the value of a government-owned asset is not the same in private sector hands. Governments are not well placed to act nimbly when it comes to changing markets and commercial decisions. Who thinks the value of Telstra would be the same if it reverted back to full government ownership? What about the Commonwealth Bank?
While Nicholls’s specific examples don’t work well (see below), he at least expresses the right general principle. Privatising assets is a good deal for the public if their sale price is greater than their value in continued public ownership (and assuming that the gain isn’t achieved by raising prices or reducing service quality). Indeed, that’s true of every kind of sale: there’s a net benefit only if the item sold is worth more to the buyer than to the seller.
So, there’s a simple fix for the StrongChoices website. Instead of quoting the total sale price for assets, give an estimate of the difference between the sale price and the value in continued public ownership. I did this for both of Nicholls’ examples, the Commonwealth Bank and Telstra (all three “tranches”), and found a net loss to the public in every case except T2, the second Telstra tranche, where the value was inflated by the Internet bubble. Even in that case, we would have been far better off selling the overvalued Internet assets and using the proceeds to buy back the rest of Telstra, as I advocated at the time, just before the bubble burst.
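The comparison proposed here lends itself to a stylised present-value calculation. The sketch below uses invented numbers (a $100m annual earnings stream, a 4 per cent discount rate, a hypothetical $2bn sale price), not estimates for any actual asset:

```python
def pv_of_earnings(annual_earnings, discount_rate, years=50):
    """Present value of a stream of annual public-enterprise earnings,
    discounted over a finite horizon."""
    return sum(annual_earnings / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Invented numbers: an asset earning $100m a year, discounted at a
# 4 per cent (roughly bond-rate) rate, against a $2,000m sale price
retention_value = pv_of_earnings(100, 0.04)   # ~2148 ($m)
net_gain = 2000 - retention_value             # negative: a net loss
```

On these illustrative numbers the retention value exceeds the sale price, so the sale is a net loss to the public: the point is that the sign of this difference, not the headline sale price, is what matters.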
fn1. If, as has been reported, the Queensland Government paid good money to a PR firm for this ludicrous name, then there is certainly an opportunity to cut waste and improve efficiency by dumping it.
I’ve written a few times about the idea that betting markets provide a more accurate guide to political outcomes than do polls or ‘expert’ judgements or statistical models (which usually incorporate polls along with economic and other data). The problem is that, close to an election, they all tend to converge. So, the best time to do a comparison is early in the election cycle. Right now there’s quite a sharp contrast. The polls have had the (federal) ALP and LNP just about level for months, but the betting markets have the LNP as strong favorites.
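For anyone wanting to make the comparison precise, bookmaker odds can be converted into implied probabilities by normalising away the bookmaker’s margin. A minimal sketch, with invented odds rather than the actual market prices:

```python
def implied_probabilities(decimal_odds):
    """Convert a set of decimal betting odds into implied probabilities,
    normalising away the bookmaker's margin (the 'overround')."""
    raw = [1.0 / o for o in decimal_odds]
    overround = sum(raw)          # exceeds 1 by the bookmaker's margin
    return [p / overround for p in raw]

# Invented two-party odds: favourite at 1.40, outsider at 3.00
lnp, alp = implied_probabilities([1.40, 3.00])
print(round(lnp, 3), round(alp, 3))  # 0.682 0.318
```

Odds like these would imply roughly a two-to-one market probability for the favourite, a long way from the near-even polls.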
One possible explanation is that governments generally do worse in polls than in elections, so that the polls underestimate the government’s support. I’ve heard this claimed, but never seen any systematic evidence to support it. Another possibility is that market participants know something that’s not reflected in the polls. I’m sceptical on this.
The final possibility is that betting markets this far out from the election are thin and inefficient. If that’s right, then the odds for Labor look very favorable. I’m not going to bet myself (I did OK on my one foray into the US Republican primaries, but the hassle involved was too much to make it worthwhile), and I’m not giving betting advice.
Still, I’d be interested in responses from those among my fellow economists who’ve claimed efficiency properties for betting markets. I guess Andrew Leigh is precluded from commenting, and Justin Wolfers is a long way from the action in Oz, but I’m sure there must be others willing to jump in.
That’s the question I looked at a while back in this piece in the National Interest, which I was too busy to post about at the time. TNI’s headline, which I didn’t pick, is the more definitive ‘China Can Make Nuclear Power Work’. The key point is that, when France embarked on a crash program to implement nuclear energy in the early 1970s, all the right ingredients were in place: a centralised state in which a skilled technocratic elite could push projects through without much regard to public opinion, the ability to fix on a single standardised design, low real interest rates and preferential access to capital, and the ability to fix pricing structures that eliminated much of the risk in the enterprise.
Over time, these factors were eroded, with the result that as the program progressed, the cost per megawatt of French nuclear plants tripled in real terms. As the Flamanville fiasco has shown, whatever the secret of French success 40 years ago, it has been well and truly lost now. And the picture is equally bleak for nuclear power in other developed countries. New nuclear power is far more expensive than renewables, even after making every possible allowance for the costs of intermittency, the various subsidies available, and so on. That’s why, despite the vast range of different policy settings and market structures in developed countries, the construction of new nuclear plants has been abandoned almost everywhere.
But China today looks, in many respects, like France in the 1970s, a technocratic state-capitalist society with the capacity to decide on, and implement, large scale projects with little regard to anyone who might object. If nuclear power can be made to work anywhere, it’s probably in China.
Obviously, pro-nuclear commenters like Hermit and Will Boisvert are welcome to have their say on this one.
I’ve just had an article published at New Left Project, under the title Don’t Blame the Internet for Rising Inequality. Much of it will be familiar, but I want to stress a particular, and I think novel, critique of the idea that skill-intensive technology is responsible for rising inequality:
while technology explains the decline of the middle and working classes relative to the professional and managerial class, even this latter group has barely maintained its share of national income since the 1980s. The real gains over this period have gone to a subset of the top 1 per cent, dominated by CEOs, other senior managers and finance industry operators. This group has nearly quadrupled its real income over the past 30 years, far outpacing the professional and managerial class as a whole.
This is a major problem for the Race Against the Machine hypothesis. Much of the growth in income share of the top 1 per cent occurred before 2000, when the stereotypical CEO was a technological illiterate who had his (sic) secretary print out his emails. Even today, the technology available to the typical senior manager—a PC with access to the Internet, and a corporate intranet with very limited capabilities—is no different to that of the average knowledge worker, and inferior to that of workers in tech-intensive specialities.
Nor does the ownership of capital explain much here. Even for tech-intensive jobs, the capital and telecomm requirements for an individual worker cost no more than $10,000 for a top-of-the-line computer setup (amortized over 3-5 years), and perhaps $1000 a year for a broadband internet connection. This is well within the capacity of self-employed professional workers to pay for themselves, and in fact many professionals have better equipment at home than at work. Advances in information and communications technology thus can’t explain the vast majority of the growth in inequality over the past three decades.
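The arithmetic here is simple enough to set out explicitly. A minimal sketch, with the amortisation period as the only free parameter (the dollar figures are the ones quoted above):

```python
def annual_tech_cost(equipment=10_000, amortise_years=4, broadband=1_000):
    """Annualised cost of a top-of-the-line individual setup:
    straight-line amortisation of equipment plus yearly broadband."""
    return equipment / amortise_years + broadband

# The 3-5 year amortisation range quoted above brackets the annual cost
low, mid, high = (annual_tech_cost(amortise_years=y) for y in (5, 4, 3))
print(low, mid, high)  # 3000.0 3500.0 ~4333.3
```

Even at the high end, the annual cost is a small fraction of a professional salary, which is the point: the capital requirement can’t be the barrier driving inequality.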
On the same day as this came out, Paul Krugman was demolishing another version of the argument, the zombie idea that current high unemployment in the US is due to a “skills gap” which apparently emerged on the day Lehman Brothers collapsed.
For quite some time, I’ve been saying that research effort into the economics of happiness would be better devoted to researching unhappiness. I’ve now presented this argument in the excellent online magazine Aeon, with the takeaway
So, perhaps we need a new research programme, to examine how unhappiness really works. Does hunger, or unemployment, or the loss of a family member to preventable illness make you a stronger and better person? Is striving after more and better possessions more fulfilling than satisfaction with what you have? It’s obvious from the way I’ve posed these questions what I believe the answer to be. But genuine research into the economics of unhappiness might yield some surprising answers to such questions as these, and reveal new questions that we have never before considered.
I have a post up at The National Interest, arguing that embargoes imposed by commodity-exporting countries in pursuit of geopolitical objectives rarely, if ever, work. Opening paras:
At the beginning of the Civil War, the leaders of the South were, as is normal at the outset of war, confident that their superior military prowess would yield a rapid victory. But the Confederates had another reason for confidence: their possession of a near-monopoly in the market for the most important commodity of the day: cotton.
Like oil in the twentieth century, cotton was vital to the industrial economies of the nineteenth, and particularly that of Britain, the preeminent naval and military power of the day. And the Southern United States was the world’s dominant producer of cotton, accounting for 77 percent of British imports in the 1850s.
Rhetoric about ‘King Cotton’ matched the most hyperbolic claims about ‘energy superpowers’ to be heard today. In 1858, South Carolina senator James Hammond said ‘old England would topple headlong and carry the whole civilized world with her…. No, you dare not make war on cotton. No power on earth dares to make war upon it. Cotton is king.’
The most immediate application, obviously, is to Russia and gas. Feel free to discuss the broader issues raised by the Ukraine crisis.
This is a contribution to a discussion of Sen’s capability approach, taking place at Crooked Timber. It’s a bit too wonkish for the CT readership, it seems, and maybe the same here, but I’ll toss it up anyway.
Most of the discussion of capabilities has concerned poor/developing countries. Moreover, most of it has been qualitative rather than quantitative. One consequence is that, although the idea of capabilities has been around for a while now, its impact on the policy process in developed countries has been modest at best.
My own work on capabilities, represented by an article published last year in the Journal of Health Economics has also had a modest impact, but for very different reasons. While not strictly quantitative, it’s mathematical, more so than the average reader of JHE tends to be comfortable with, and its direct relevance to policy is limited by the fact that we are, at least to start with, not addressing distributional issues.
The main objective is to explore the idea that capabilities can provide a basis for allocating health care resources based on the QALY (Quality-Adjusted Life Year) measure. In previous work, we looked at the “welfarist” idea that policy should be based on maximizing lifetime expected utility. It turns out that, considered purely as a technical problem, this can’t be done, except in very special cases. The appeal of capabilities is that they provide a non-welfarist (or at least ‘extra-welfarist’, in that it is more than simple expected utility maximization) rationale for policies involving scarce resources like health care.
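For readers unfamiliar with the measure, a QALY is just life-years weighted by a quality score running from 0 (death) to 1 (full health). A minimal sketch, with invented weights purely for illustration:

```python
def qalys(periods):
    """Total quality-adjusted life years from (years, quality_weight)
    pairs, with weights running from 0 (death) to 1 (full health)."""
    return sum(years * weight for years, weight in periods)

# Invented example: 10 years at weight 0.8 yields more QALYs
# than 12 years at weight 0.6
print(qalys([(10, 0.8)]) > qalys([(12, 0.6)]))  # True
```

The allocation question is then which interventions buy the most QALYs per dollar; the capabilities approach is one candidate rationale for using that metric at all.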
I’ll be at Unmasking Austerity in Adelaide on Tuesday. I’m going to talk about Commissions of Audit and the following question occurred to me.
Have such Commissions ever achieved anything of the kind you might expect from auditors, that is, detecting and fixing Fraud, Inefficiency and Waste? In this context, I’m not interested in proposals to kill government programs the Commissioners don’t like, privatise public assets, contract out public sector work and so on. I am interested in work showing that public programs are being defrauded and proposing checks that would fix the problem, cases of duplication between agencies and levels of government that can be fixed with substantial savings, cases where governments are wasting money by paying obviously excessive prices for services etc.
A little while ago, Ross Douthat tweeted a link to this Aeon article of mine, reflecting on Keynes’ ‘Economic Possibilities for our Grandchildren’, which gave rise to some interesting discussion (Memo to self: Find out about Storify). Now he’s addressed the topic in the New York Times, linking directly to Keynes’ essay. There’s some interesting food for thought here. Unfortunately, it’s mixed up with some silly stuff reflecting his job as the NY Times token Republican, in which capacity he has to do some damage control over the exposure of the latest Repub lie saying that Obamacare will cost 2.5 million jobs. As Douthat delicately puts it, “this is not exactly right”. But, although his heart clearly isn’t in it, he tries to construct a narrative in which the Repubs might be right for the wrong reasons, or, in an even less-felicitous defence, mean-spirited and inaccurate but justified by the success of Reaganism thirty years ago.
More interesting, though, is Douthat’s discussion comparing idealised hopes for a post-work society with the reality in which well-educated professionals are working longer hours than ever, while many at the bottom end of the income distribution, particularly poorer men, have withdrawn from the formal labour force altogether (presumably relying on disability benefits or scraping a living in the informal economy). One possible solution to this problem is simply to give the poor more money, for example in the form of a basic income, and not worry about whether they choose to work. Douthat isn’t too happy about this idea, saying
Both “rugged individualist” right-wingers and more communitarian conservatives tend to see work as essential to dignity, mobility and social equality, and see its decline as something to be fiercely resisted. The question is whether tomorrow’s liberals will be our allies in that fight.
But this position elides a bunch of crucial issues.
First, while work may be necessary to “dignity, mobility and social equality” in a market society, it certainly isn’t sufficient. For unionised US workers in the mid-20th century, earning middle-class incomes in relatively secure jobs and expecting better for their children, work was, arguably both necessary and sufficient to achieve a fair measure of these things. But an at-will employee, juggling two or three tenuous jobs that pay $7.25 an hour, and looking at a steady decline in real income, is scarcely getting much in the way of dignity, let alone mobility or social equality.
Equally importantly, market work isn’t the only kind of work people can do, and certainly not the most valuable. Most obviously, there’s the raising of children. The US is the only developed country that does not provide any kind of paid parental leave, and even the legislative provision for unpaid leave (12 weeks a year for mothers in firms with more than 50 employees, nothing for fathers) is incredibly stingy. The idea that the ‘rugged individualists’ who block any improvements to these conditions actually care about the dignity of the working class is simply laughable.
I don’t need to tell Douthat any of this. It’s all in his book Grand New Party with Reihan Salam, notably including a proposal for a full year of paid parental leave. The book received cautiously respectful reviews from many in the centre and centre-left, but fell entirely flat with its intended audience in the Republican Party.
I’ll have a bit more to say about the kind of technological determinism that seeks to explain labour market polarisation as arising from computers and the Internet a bit later. For the moment, I’ll repeat the conclusion of my Aeon essay that a response to technological change that will preserve the link between work, dignity and equality will require both a reduction in total hours of work and an expansion in the range of social contributions regarded as work, beyond those that generate a market return
In my book, Zombie Economics, I started the account of macroeconomics with the observation
Macroeconomics began with Keynes. Before Keynes wrote The General Theory of Employment, Interest, and Money, economic theory consisted almost entirely of what is now called microeconomics. The difference between the two is commonly put by saying that microeconomics is concerned with individual markets and macroeconomics with the economy as a whole, but that formulation implicitly assumes a view of the world that is at least partly Keynesian.
Long before Keynes, neoclassical economists had both a theory of how prices are determined in individual markets so as to match supply and demand (“partial equilibrium theory”) and a theory of how all the prices in the economy are jointly determined to produce a “general equilibrium” in which there are no unsold goods or unemployed workers.
I went on to observe how the pre-Keynesian approach had been revived by the “New Classical” school, and how the apparent convergence with “New Keynesian” economics had been shown to be illusory after the failure of Dynamic Stochastic General Equilibrium models to deal with the 2008 financial crisis and the subsequent, still continuing, depression.
With all of this, though, I still never thought of academic macro, in either saltwater or freshwater form, as being a simple reversion to the pre-Keynesian notion of general equilibrium, with no concern about aggregate demand or unemployment, even in the short run. It turns out that, at least for a large segment of the profession, this is quite wrong. I’ve just received a book entitled Big Ideas in Macroeconomics: A Nontechnical View by Kartik Athreya, an economist at the Richmond Federal Reserve who made a splash a few years back with a piece entitled Economics is Hard. Don’t Let Bloggers Tell You Otherwise, which, unsurprisingly, did not endear him to bloggers. As a critic of mainstream macro, I’m briefly mentioned, and I just got a review copy.
The new book is an attempt to simplify things, and indeed it has proved enlightening to me and also to Herb Gintis who contributes a blurb on the back, commending it as an accessible and accurate description of the dominant way of thinking about macroeconomics.
The easiest way to see why the book is so striking is to list some topics that do not appear in the index (and are not discussed, or only mentioned in passing, in the text). These include: unemployment, inflation, recession, depression, business cycle, Phillips curve, NAIRU, Taylor Rule, money, monetary policy and fiscal policy.
The term “New Old Keynesian” was coined by Tyler Cowen a couple of years ago, to describe the revival of the view that the Keynesian analysis of recessions caused by lack of aggregate demand is relevant, not only in the short run (in this context, the time taken for wage contracts to reset, say 2-3 years) but in the long run (5 years or more) as well. When Cowen was writing, in September 2011, the New Depression could still, just about, be seen as a short run phenomenon. In particular, the anti-Keynesian advocates of austerity in the US, UK and Europe were predicting rapid recovery.
As 2014 begins, it’s clear enough that any theory in which mass unemployment or (in the US case) withdrawal from the labour force can only occur in the short run is inconsistent with the evidence. Given that unions are weaker than they have been for a century or so, and that severe cuts to social welfare benefits have been imposed in most countries, the traditional rightwing explanation that labour market inflexibility (arising from minimum wage laws or unions) is the cause of unemployment appeals only to ideologues (who are, unfortunately, plentiful).
So, on the face of it, Cowen’s “New Old Keynesianism” looks pretty appealing. But what are the alternatives? Leaving aside anti-Keynesian views for the moment, the terminology suggests four logical possibilities: Old Old Keynesianism, Old New Keynesianism, New Old Keynesianism and New New Keynesianism.
But do these logical possibilities correspond to actual viewpoints, and, if so, whose?
I’m on the way back from bitterly cold Philadelphia at the moment after attending the meetings of the American Economic Association (and a bunch of related societies). I was at a very interesting session on long-run discounting, which had a panel of six with (as is common) one woman[^1]. Looking around the room, I realised that the panel was actually balanced (inside econometric joke) when compared with the audience, which was about 90 per cent male.
I don’t think that the academic economics profession is quite as male-dominated as that. Some casual discussions suggested a couple of hypotheses:
(i) There were some parallel sessions on gender issues for which the audience was mostly female (not surprising, but kind of ambivalent)
(ii) Men were more likely to attend the sessions while female colleagues were more likely to be on the hiring teams. For those unfamiliar with this exercise, a large part of academic conferences consists of academics sitting in hotel rooms for days on end while a string of recent PhDs give a 15 minute pitch on a piece of research (their ‘job market paper’) followed by a ritual Q&A (a plausible but depressing story)
I get the impression that academic philosophy is even worse than economics, but that most other disciplines are better. Any thoughts?