Whenever I post anything about taxation and public expenditure, it’s a good bet that someone will pop up in the comments section to claim that, according to Modern Monetary Theory, states that issue their own currency don’t need taxation to finance public expenditure. That’s a misunderstanding of the theory, but it’s proved hard to explain this. The current crisis in Russia provides a teachable moment.
The title of a piece in Inside Story on nuclear power in Australia. Readers won’t be surprised to learn that I don’t think it’s feasible in any relevant time frame (say, before 2040). I don’t expect nuclear devotees to be convinced by this (I can’t think of any evidence that would have this effect), but I’d be interested to see someone lay out a plausible timetable to get nuclear built here sooner than my suggested date.
To clarify this, feel free to assume a conversion of both major parties and the majority of the public to a pro-nuclear position, but not to assume away the time needed to generate a legislative and regulatory framework, take proper account of concerns about siting, licensing and so on.
I’m following up Henry Farrell’s post on the superiority or otherwise of economists, and Krugman’s piece, also bouncing off Fourcade et al, with a few observations of my own, that don’t amount to anything systematic. My perspective is a bit unusual, at least for the profession as it exists today. I didn’t go to graduate school, and I started out in an Australian civil service job in the low-status[^1] field of agricultural economics.
So, I have long experience as an outsider to the US-dominated global profession. But, largely due to one big piece of good luck early on (as well as the obligatory hard work and general ability), I’ve done pretty well and am now, in most respects, an insider, at least in the Australian context.
That’s the title of my latest piece in The Conversation. The bottom line
Leaving aside the ethics of divestment and pursuing a purely rational economic analysis, the cold hard numbers of putting money into fossil fuels don’t look good.
Unless universities are willing to bet on the destruction of the planet they have committed themselves to understanding and preserving, divestment from fossil fuels is the only choice they can make. Forward-thinking investors of all kinds would be wise to follow suit.
Among the sceptical reactions to China’s part of the joint announcement on climate policy made by the US and China, two were particularly prominent
* The statement didn’t require China to do anything until 2030
* The statement simply reflected “business as usual”
These arguments were almost immediately refuted when China announced, in a new plan (http://thediplomat.com/2014/11/in-new-plan-china-eyes-2020-energy-cap/), that it would cap coal consumption at 4.2 billion tonnes by 2020, with total primary energy consumption (including oil and gas) held below 4.8 billion tonnes of coal equivalent. By contrast, in 2013, the estimate was for 4.8 billion tonnes of coal alone. Back in 2010, the US Energy Information Administration was predicting continued growth in Chinese coal consumption to 2035 and beyond.
After spending months warning us of terrorists, rioters, and (most fearsome of all) thousands of political minders roaming the streets of the Brisbane CBD, warning us to reconsider our need to travel and giving us a long weekend, Brisbane Lord Mayor Graham Quirk is upset with us for taking off to the beach or staying home and waiting the whole thing out. He has been roundly mocked. It’s now clear enough that, except for high-end hotels and restaurants, G20 is going to be an economic disaster for Brisbane.
There is a broader lesson here. Paying substantial amounts to attract an event where the audience is mostly going to regard the venue as interchangeable with lots of others (car races being a prime example) is almost never going to be a sensible economic policy. The inflow of event visitors will mostly be offset by the deterrence of other potential visitors and by an exodus of locals. And the idea that events like this “put Brisbane on the map” is silly.
We won’t be lining up for another international summit any time soon, but the Commonwealth Games will be in the Gold Coast in 2018. I’m confident that an analysis after the fact will reveal very little to show for the $2 billion we are spending on them.
I’ll qualify the above by saying that it’s a different story with mass participation events. Noosa Triathlon for example, attracted 14 000 participants and 50 000 spectators (mostly family members, I think). The local tourism council tipped in $250k. Assuming a similar amount from Tourism Queensland, that’s a subsidy of $10/head. The event could probably have gone ahead without any subsidy: the main contribution for this kind of event is organizing road closures and crowd safety.
Not surprisingly, the US Supreme Court’s non-decision on equal marriage has caused plenty of debate, including John Holbo’s smackdown of NR’s Matthew Franck.
The discussion got me thinking about the broader problem of legal reasoning, at least in its originalist and textualist forms, and also in precedent-based applications of common law. The assumption in all of these approaches is that by examining (according to some system of rules) what was legislated or decided in the past, lawyers and judges can determine the law as it applies to the case at hand. There are all sorts of well-known difficulties here, such as how words written a century ago should apply to technologies and social structures that did not exist at the time. And it often happens that these approaches produce results that seem unacceptable to most people but for which a legislative or constitutional fix is impossible for some reason.
It’s always seemed to me, though, that there is a much bigger problem with this approach, namely the implicit assumption that “the law” actually exists. That is, it is assumed that, if the appropriate procedure is used to interpret the inherited text, and applied to the problem at hand, it will produce a determinate answer. But why should this be true? The same law might contain contradictory clauses, supported by contradictory arguments, voted in by different majorities, and understood at the time of its passage in contradictory ways. Most notably, the same constitution might grant universal freedoms in one place, while recognising slavery in another.
At a minimum, such contradictions mean that there is no determinate law on the particular points of difference. But the problem is worse than this. The law rarely prescribes an exact answer in a specific case. The standard view of legal reasoning is that principles can be extracted from case law, then applied to new cases. But contradictory laws and contradictory cases produce contradictory principles. The ultimate stopping point is the paradox of entailment: a contradiction implies anything and everything.
I don’t have a fully worked out answer to this problem but I think it underlies a lot of the disquiet so many people feel about legal reasoning (apart from the ordinary disappointment when the answer it produces isn’t the one we want).
A standard piece of advice to researchers in math-oriented fields aiming to publish a popular book is that every equation reduces the readership by a factor of x (x can range from 2 to 10, depending on who is giving the advice). Thomas Piketty’s Capital has only one equation (or more precisely, inequality), at least only one that anyone notices, but it’s a very important one. Piketty claims that the share of capital owners in national income will tend to rise when the rate of interest r exceeds the rate of growth g. He suggests that this is the normal state, and that the situation prevailing for much of the 20th century, when r was less than g, was an aberration.
I’ve seen lots of discussion of this, much of it confused and/or confusing. So, I want to offer a very simple explanation of Piketty’s point. I’m aware that this may seem glaringly obvious to some readers, and remain opaque to others, but I hope there is a group in between who will benefit.
Suppose that you are a debtor, facing an interest rate r, and that your income grows at a rate g. Initially, think about the case when r=g. For concreteness, suppose you initially owe $400, your annual income is $100 and r=g is 5 per cent. So, your debt to income ratio is 4. Now suppose that your consumption expenditure (that is, expenditure excluding interest and principal repayments) is exactly equal to your income, so you don’t repay any principal and the debt compounds. Then, at the end of the year, you owe $420 (the initial debt + interest) and your income has risen to $105. The debt/income ratio is still 4. It’s easy to see that this will work regardless of the numerical values, provided r=g. To sum it up in words: when the growth rate and the interest rate are equal, and income equals consumption expenditure, the ratio of debt to income will remain stable.
On the other hand, if r>g, the ratio of debt to income can only be kept stable if you consume less than you earn. And conversely if r < g (for example in a situation of unanticipated inflation or booming growth), the debt-income ratio falls automatically provided you don’t consume in excess of your income.
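As a check, the compounding logic above can be sketched in a few lines of code. This is a toy calculation of my own, using the same assumption as the text: consumption equals income, so no principal is ever repaid.

```python
# Toy model of the debt/income dynamics described above.
# Assumption (from the text): consumption equals income, so debt
# compounds at r while income grows at g, with no principal repaid.

def debt_income_ratio(debt, income, r, g, years):
    """Evolve debt and income over `years`, return the final debt/income ratio."""
    for _ in range(years):
        debt *= (1 + r)      # interest compounds, nothing repaid
        income *= (1 + g)    # income grows at rate g
    return debt / income

# r = g: the ratio stays at 4, as in the $400/$100 example
print(debt_income_ratio(400, 100, 0.05, 0.05, 10))  # 4.0 (up to rounding)

# r > g: the ratio rises unless consumption falls below income
print(debt_income_ratio(400, 100, 0.05, 0.03, 10))  # about 4.85

# r < g: the ratio falls automatically
print(debt_income_ratio(400, 100, 0.03, 0.05, 10))  # about 3.30
```

Whatever the starting numbers, only the gap between r and g matters for the direction of change in the ratio.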
Now think of an economy divided into two groups: capital owners and everyone else (both wage-earners and governments). The debt owed by everyone else is the wealth of the capital owners. If r>g, and if capital owners provide the net savings to allow everyone else to balance income and consumption, then the ratio of the capital stock to (non-capital) income must rise. My reading of Piketty is that, as we shift from the C20 situation of r <= g to one in which r>g, the ratio of capital stock to non-capital income is likely to rise from 4 (the value that used to be considered as one of the constants of 20th century economics) to 6 (the value he estimates for the 19th century).
This in turn means that the ratio of capital income to non-capital income must rise, both because the capital stock is getting bigger in relative terms and because the rate of return, r, has increased as we move from r=g to r>g. For example, if the capital-income ratio goes from 4 to 6 and r goes from 2 to 5, then capital income goes from 8 per cent of non-capital income to 30 per cent[^1]. This can only stop if the stock of physical capital becomes so large as to bring r and g back into line (there’s a big dispute about whether and how this will happen, which I’ll leave for another time), or if non-capital owners begin to consume below their income.
There’s a lot more to Piketty than this, and a lot more to argue about, but I hope this is helpful to at least some readers.
[^1]: Around 20 per cent of GDP is depreciation, indirect taxes and other things that don’t figure in a labor-capital split, so this translates into a fall in the labor share of all income from a bit over 70 per cent to around 50 per cent, which looks like happening.
I appeared yesterday before the Senate Committee inquiring into the government’s proposed higher education reforms. I focused on criticism of the US model being advocated by the government and the Go8, and was ready with quotes from the Go8 submission. I was unprepared, however, for the line of questioning I got from LNP members of the Committee, who denied that the government, as distinct from the Go8, was pushing the US model. My iPhone wasn’t up to the job of producing a definitive statement on the spot, but I have now located the source I wanted, in which Pyne, speaking to the Policy Exchange group in the UK says (emphasis added)
We have much to learn about universities competing for students and focussing on our students. Not least, we have much to learn about this from our friends in the United States. They have developed a diverse array of institutions encouraging prospective students to pick and choose their futures and where they are going to study, immerse themselves in enriching extra-curricular activities, and make life-long friends. Students routinely chase a range of options as to where they study, whether that’s at home or in a place known as college. Going to college is a rite of passage for American high school graduates. And it is a gift that keeps on giving.
The competitive nature of American tertiary education breeds the sort of focus on competition for students that Glyn Davis referred to. It breeds loyalty and devotion to one’s alma mater – and we know that American colleges leave us for dead when it comes to attracting philanthropic support from their graduates.
Another Australian Vice Chancellor, Professor Warren Bebbington of the University of Adelaide, wrote last week in The Times Higher Education supplement, and I quote:
higher education in Australia could be transformed into the most dynamic system in the world. It (could) have the rich variety of the US university landscape but without the crippling debts that American students suffer.
This should be the focus of a fundamental community-wide debate.
He opined that:
the debate has been largely contained thus far, and has taken place in terms incomprehensible to the average person. Even worse, some of the most influential academic voices seem intent on preventing Australians ever benefitting from what is proposed.
In the US, nearly half of all students do not go from high school to a public university of the Australian type, but instead attend teaching-only undergraduate colleges offering only Bachelor degrees. Without research programmes, these colleges do a first-class job of teaching: through small classes and an intense extra-curricular programme. Students have an unforgettable, utterly life-changing educational experience.
He continued that:
this huge array of highly-individual undergraduate colleges is one of the glories of American higher education.
Such colleges do not exist in Australia. Ours has been a highly constrained system of universities with limited scope for universities to shape their own offerings to students.
As I’ve previously pointed out, Bebbington’s claim is ludicrously wrong. He’s describing liberal arts colleges that educate perhaps 2 per cent of the college-age cohort in the US, charge around $50 000 per year and have endowments of the order of $1 million per student. The second-tier state universities, community colleges and for-profits actually attended by half or more of the student population are nothing like this.
Clearly, Pyne (like the Go8) doesn’t have a clue about the model he is pushing. This whole package should be scrapped: If the government wants to make changes, it should do some research first.
I could talk forever about these graphs (from IndexMundi), and may do so in the future. For now, I’ll just note that these are nominal prices. The US CPI has roughly doubled since 1989
It’s time for another Monday Message Board. Post comments on any topic. Civil discussion and no coarse language please. Side discussions and idées fixes to the sandpits, please.
The 1970s saw two important and influential publications in the long debate over justice, equality and public policy. In 1971, there was Rawls’s A Theory of Justice, commonly described in terms like “magisterial”. Then in 1974, at lunch with Jude Wanniski, Dick Cheney and Donald Rumsfeld, Arthur Laffer drew his now-eponymous curve on a napkin. Of course there was nothing new about the curve: it’s pretty obvious that an income tax levied at rates of either zero or 100 per cent isn’t going to raise any money, and interpolation does the rest. What was new was the Laffer hypothesis, that the US at the time was on the descending side of the curve, where a reduction in tax rates would raise tax revenue.
I’ve always understood Rawls in terms of the Laffer curve, as arguing in essence that we should be at the very top of the curve, maximizing the resources available for transfer to the poor, but not (as, say, Jerry Cohen might have advocated) going further than this to promote equality.
A couple of interesting Facebook discussions have led me to think that I might be wrong in my understanding of Rawls and that the position I’ve imputed to him is actually far closer to that of classical utilitarianism in the tradition of Bentham (which is, broadly speaking, my own view).
Facebook has its merits, but promoting open public discussion isn’t one of them, so I thought I’d throw this out to the slightly larger world of blog readers.
I wrote not long ago about the zombie idea that the US ban on agricultural use of DDT, enacted in 1972, somehow caused millions of people elsewhere in the world (where DDT remains available for anti-malaria programs) to die of malaria. A thorough refutation is now available to anyone who cares to look at Wikipedia, but the notion remains lurking in the Republican hindbrain.
So, with the recent outbreak of Ebola fever (transmitted between humans by direct contact and bodily fluids), the free-association process that passes for thought in Republican circles went straight from “sick people in Africa” to “DDT”. Ron Paul was onto the case early, with stupid remarks that were distilled into even purer stupidity in a press release put out by his organization. Next up, Diana Furchtgott-Roth, of the Manhattan Institute.
And here’s the American Council on Smoking and Health.
… persuade them to stop being rightwingers
I have a piece in Inside Story arguing that the various efforts to “frame” the evidence on climate change, and the policy implications, in a way that will appeal to those on the political right are all doomed. Whether or not it was historically inevitable, anti-science denialism is now a core component of rightwing tribal identity in both Australia and the US. The only hope for sustained progress on climate policy is a combination of demography and defection that will create a pro-science majority.
With my characteristic optimism, I extract a bright side from all of this. This has three components:
(a) The intellectual collapse of the right has already proved politically costly, and these costs will increase over time
(b) The cost of climate stabilization has turned out to be so low that even a delay of 5-10 years won’t render it unmanageable.
(c) The benefits in terms of the possibility of implementing progressive policies such as redistribution away from the 1 per cent will more than offset the extra costs of the delay in dealing with climate change.
I expect lots of commenters here will disagree with one or more of these, so feel free to have your say. Please avoid personal attacks (on me or each other), suggestions that only a stupid person would advance the position you want to criticise and so on.
fn1. Or, in the case of young people, not to start.
Now that renewable energy sources like wind and solar PV are cheaper than new coal-fired power stations in most jurisdictions (anywhere with either favorable conditions or a reasonable carbon price), the big remaining question is that of supply variability/intermittency. As I’ve argued before, this problem is greatly overstated by critics of renewables who assume that the constant 24/7 supply characteristic of coal is the ideal. In fact, this constant supply produces a mismatch with variable demand and current pricing structures are set up to deal with this. A system dominated by renewables would have different kinds of mismatch and require different pricing structures.
That said, for a system dominated by solar PV, meeting demand in the late afternoon and evening will clearly depend on a capacity to store energy in some form or another. There are lots of options, but it makes sense to look first at relatively mature technologies like lithium and lead-acid batteries. Renewable News is reporting a project in Vermont, which integrates solar PV and storage.
The 2.5-MW Stafford Hill solar project is being developed in conjunction with Dynapower and GroSolar and includes 4 MW of battery storage, both lithium ion and lead acid, to integrate the solar generation into the local grid, and to provide resilient power in case of a grid outage.
The project cost is stated at $10 million, or $4m/MW of generation capacity.
Assuming this number is correct, let’s make some simplifying assumptions to get a rough idea of the cost of electricity and the workability of storage. If we cost capital and depreciation at 10 per cent, assume 1600 hours of full output per year and ignore operating costs, the cost of electricity is 25c/kWh. There would presumably be some distribution costs, given the need to connect to the grid. Still, given that Vermont consumers are currently paying 18c/kWh, this doesn’t look too bad. A carbon tax at $75/tonne would make up the difference.
How would the storage work? I’m starting from scratch here, so I’ll be interested in suggestions and corrections. I assume that the storage is ample to deal with short-term (minute to minute or hour to hour) fluctuations, which are more of a problem for wind.
How about on a daily basis? It seems to me that the critical thing to look at is the point in the afternoon/evening at which consumption exceeds generation (as I mentioned, prices matter a lot here). This is the point at which we would like the batteries to be fully charged. The output assumption suggests an average of about 12 MWh generated per day. If we simplify by assuming that the cutoff time is 6pm and that output drops to zero after that, the system requires that 8 MWh be used during the day and 4 MWh at night. That wouldn’t match current demand patterns, but if you added in some grid-connected power (say, from wind, which tends to blow more at night) and shifted the pricing peak to match the demand peak, it would probably be feasible.
As regards seasonal variability, this would be a problem in Vermont, where (I assume) the seasonal demand peak is in winter. But in places like Queensland, with a strong summer peak, a system with lots of solar power should do a good job in this respect.
What remains is the possibility of a long run of cloudy days, during which solar panels produce 50 per cent or less of their rated output. Dealing with such periods will require a combination of pricing (such periods can be predicted in advance, so it’s just a matter of passing the price signals on to consumers), load-shedding for industrial customers and dispatchable reserve sources (hydro being the most appealing candidate, given that potential energy can be stored for long periods, and turned on and off as needed).
To sum up, we aren’t quite at the point where PV+storage is a complete solution, but we’re not far off.
George Brandis’ spectacular live meltdown over metadata retention has distracted attention from the abandonment of the government’s plans to repeal Section 18C of the Racial Discrimination Act, prohibiting the kind of racial abuse dished out by the likes of Andrew Bolt and Fredrick Toben. Abbott’s rationale is that a purist attitude to freedom of (racially divisive) speech is something we can’t afford, given the need to unite against terrorism.
Obviously, neither Bolt nor Toben is a member of Team Australia. Each makes it their primary business to stir up hatred, in Toben’s case against Jews and in Bolt’s case against (among many others) the “muslims, jihadists, people from the Middle East” he sees as responsible for Abbott’s backdown. The striking conflation of religion, geographical origin and terrorism is typical of Bolt’s approach.
Horrible as he is, though, Toben is not a serious problem. His Holocaust denialism is universally reviled, and it is a sign of strength, not weakness, in our democracy that he is free to walk the streets. Repealing the constraints imposed on him by 18C would only emphasise this.
Bolt is another story. It is his case that led the government to seek the repeal of 18C, and that motivated George Brandis’ gaffe (that is, a politically inconvenient statement of an actual belief) that people have a right to be bigots. Far from being reviled, Bolt has been embraced and coddled by the government, to the point of having exclusive access to the Prime Minister. He enjoys a well-rewarded position in the Murdoch Press. Even casting the net wider among our so-called libertarians, I can’t recall seeing a harsh word against Bolt. He’s a tribal ally and his bigotry is either endorsed or passed over in silence.
It’s impossible, in these circumstances, for the government to be taken seriously when they mouth the (apocryphal) Voltaire line about defending to the death speech with which they disagree. The repeal of 18C was clearly intended as an endorsement of Bolt, and not a statement of bare toleration. That position is now untenable, and it’s too late to switch back to Voltaire.
In summary, those on the right lamenting the continued existence of 18C ought to reflect on the fact that it’s their own overt or tacit endorsement of bigotry that’s brought this about. If they cleaned house, and dissociated themselves from the likes of Bolt, their claims to be supporting free speech might acquire a little more credibility.
fn1. I was going to add Sheikh Hillaly to this list. But based on this report, he seems to have joined the Team.
I got lots of very helpful responses to my recent post on the search theory of unemployment, here and at Crooked Timber. But it has occurred to me that I haven’t seen any answer to one crucial question: How many offers do unemployed workers receive and decline before taking a new job, or leaving the labour market? This is crucial, both in simple versions of search theory and in more sophisticated directed search and matching models. If workers don’t get any offers, it doesn’t matter what their reservation wage is, or what they judge the state of the market to be. Casual observation and my very limited experience, combined with my understanding of the unemployment benefit rules, suggest that very few unemployed workers receive and decline job offers, except perhaps for temporary work where the loss of benefits outweighs potential earnings. Presumably someone must have studied this, but my Google skills aren’t up to finding anything useful.
And, on a morbidly humorous note, it’s a sad day for the LNP when efforts to bash dole bludgers actually cost them support. But that seems to be the case with the latest plans, both expanded work for the dole and the requirement for 40 job applications a month. I’ll leave it to Andrew Leigh to take out the trash on work for the dole (BTW, his new book, The Economics of Just About Everything, is out now).
The 40 applications requirement has already been the subject of some amusing calculations. I want to take a slightly different tack. Suppose (to make the math simple) that the average job vacancy lasts a month. There are roughly five unemployed workers for every vacancy, so meeting the target will require an average of 200 applications per vacancy. The government will be checking for spam, so let’s suppose that all (or a substantial proportion) of the applicants take some time to talk about how they would be a good fit with the employer and so on. Dealing with all these applications would be a mammoth task. One option would be to pick a short list at random. But there’s a simpler option. In addition to the 200 required applications from unemployed people, most job vacancies will attract applications from people in jobs. A few of them may be looking for an outside offer to improve their bargaining position with their current employer (this is a big deal for academics), but most can be assumed to be serious about taking the job, judging that they have a reasonable chance of getting it. So, the obvious strategy is to discard all the applications except for those from people who already have jobs. What if there aren’t any of these? Given that formal applications are going to be uninformative, employers may pick interviewees at random or may resort to the informal networks through which many jobs are filled already.
Trying to relate this back to theory, the effect of a requirement like this is to negate the benefits of improved matching that ought to arise from Internet search. By providing strong incentives to provide a convincing appearance of looking for jobs for which workers are actually poorly suited, the policy harms both employers and unemployed workers who would be well suited to a given job.
Update: I found the following quote widely reproduced on the web
On average, 1,000 individuals will see a job post, 200 will begin the application process, 100 will complete the application, 75 of those 100 resumes will be screened out by the Applicant Tracking System (ATS) software the company uses, 25 resumes will be seen by the hiring manager, 4 to 6 will be invited for an interview, 1 to 3 of them will be invited back for final interview, 1 will be offered that job and 80 percent of those receiving an offer will accept it.
Data courtesy of Talent Function Group LLC
Visiting the TFG website, I couldn’t find any obvious source. The numbers sound plausible to me, and obviously to those who have cited them. But, if the final number (80 per cent acceptance) is correct, then it seems as if the search theory of unemployment is utterly baseless. Assuming independence, the proportion of searchers who reject even three offers must be minuscule (less than 1 per cent).
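The "less than 1 per cent" figure follows directly from the 80 per cent acceptance rate, given the independence assumption:

```python
# If 80 per cent of offers are accepted, and decisions on successive
# offers are independent (the assumption in the text), the share of
# searchers who reject three offers in a row is:

p_accept = 0.80
p_reject = 1 - p_accept
p_reject_three = p_reject ** 3

print(round(p_reject_three, 3))  # 0.008, i.e. 0.8 per cent -- under 1 per cent
```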
One of the striking features of (propertarian) libertarianism, especially in the US, is its reliance on a priori arguments based on supposedly self-evident truths. Among[^1] the most extreme versions of this is the “praxeological” economic methodology espoused by Mises and his followers, and also endorsed, in a more qualified fashion, by Hayek.
In an Internet discussion the other day, I was surprised to see the deductive certainty claimed by Mises presented as being similar to the “certainty” that the interior angles of a triangle add to 180 degrees.[^2]
In one sense, I shouldn’t be surprised. The certainty of Euclidean geometry was, for centuries, the strongest argument for the rationalist that we could derive certain knowledge about the world.
Precisely for that reason, the discovery, in the early 19th century, of non-Euclidean geometries that did not satisfy Euclid’s parallel postulate was a huge blow to rationalism, from which it has never really recovered.[^3] In non-Euclidean geometry, the interior angles of a triangle may add to more, or less, than 180 degrees.
Even worse for the rationalist program was the observation that the system of geometry (that is, “earth measurement”) most relevant to earth-dwellers is spherical geometry, in which straight lines are “great circles”, and in which the angles of a triangle add to more than 180 degrees. Considered in this light, Euclidean plane geometry is the mathematical model associated with the Flat Earth theory.
John Howard’s endorsement of Ian Plimer’s children’s version of his absurd anti-science tract Heaven and Earth has at least one good feature. I can now cut the number of prominent Australian conservatives for whom I have any intellectual respect down from two to one. Howard’s acceptance of anti-science nonsense shows that, for all his ability as a politician, he is, in the end, just another tribalist incapable of thinking for himself. 
Although not all the tribal leaders are old men, an old, high-status man like Howard is certainly emblematic of Australian delusionism. Like a lot of old, high-status men, he stopped thinking decades ago, but is even more confident of being right now than when he had to confront his prejudices with reality from time to time. Like other delusionists, Howard has no scientific training, shows no sign of understanding statistics and almost certainly hasn’t read any real scientific literature, but nonetheless believes he can rank clowns like Plimer and Monckton ahead of the real scientists.
The situation in the US is similar but even more grimly amusing, with the sole truthteller in the entire Republican party, Jon Huntsman, recently reduced to waffling (in both US and UK/Oz senses of this term) because he briefly looked like having a chance to be the next non-Romney. This tribal mindlessness is reflected in the inability of the Republican Party, at a time when they ought to be unbackable favorites in 2012, to come up with a candidate who can convince the base s/he is one of them, but who doesn’t rapidly reveal themselves as a fool, a knave or both.
And, as evidence of the utter intellectual shamelessness of delusionism, you can’t beat the campaign against wind power, driven by the kinds of absurd claims of risk that would be mocked, mercilessly and deservedly, if they came from the mainstream environmental movement.
The global left is in pretty bad shape in lots of ways. Still, I would really hate to be a conservative right now.
fn1. Now down to zero. Turnbull has proved he lacks any real substance.
fn2. I’m not saying that all Australian conservatives are mindless tribalists. There’s a large group, epitomized by Greg Hunt and now Malcolm Turnbull, who understand the issues quite well, but are unwilling to speak up. Then there is a group of postmodern conservatives of whom Andrew Bolt is probably the best example, who have passed the point where concepts of truth or falsehood have any meaning – truth is whatever suits the cause on any given day.
I’ve been working for quite a while now on a book which will respond to Henry Hazlitt’s Economics in One Lesson, a book first published in 1946 that has remained in print ever since. It’s an adaptation, for a US audience, of the work of the 19th century French free-market advocate Frederic Bastiat, specifically aimed at refuting the then-novel ideas of Keynes.
My planned title is Economics in Two Lessons. In my interpretation, Hazlitt’s One Lesson is that prices are opportunity costs. My Second Lesson is that, in the absence of appropriate government policy, private opportunity costs (market prices) won’t reflect social opportunity costs. Here’s a central piece of the argument, responding to Hazlitt’s exposition of Bastiat’s glazier’s fallacy.
I was very pleased with my post on this topic, making the point that standard microeconomic analysis only works properly on the assumption that the economy is at a full employment equilibrium.
But, it turns out, exactly the same point, using the same title, was made by David Colander 20 years ago.
Colander (1993), The Macrofoundations of Micro, Eastern Economic Journal, Vol. 19, No. 4 (Fall, 1993), pp. 447-457
And he wasn’t the first. The term and the idea have a long history, including a contribution by my UQ colleague Bruce Littleboy.
The term macrofoundations, I suspect, has been around for a long time. Tracing the term is a paper in itself. Axel Leijonhufvud remembered using it in Leijonhufvud  . I was told that Roman Frydman and Edmund Phelps  used the term and that Hyman Minsky had an unpublished paper from the 1970s with that title; Minsky remembered it, but doubted he could find it and told me that he used the term in a slightly different context. I was also told by Christof Ruhle that a German economist, Karl Zinn, wrote a paper with that title for a Festschrift in 1988, but that it has not been translated into English. I suspect the term has been used many more times because it is such an obvious counterpoint to the microfoundations of macro, and hence to the New Classical call for microfoundations. While he does not use the term explicitly, Bruce Littleboy , in work that relates fundamentalist Keynesian ideas with Clower and Leijonhufvud’s ideas, discusses many of the important issues raised here.
I have a piece in The National Interest, looking at various recent events including the latest round of the Argentinian debt crisis, in which a New York court ruled in favor of a group of ‘vulture’ investors, led by a New York billionaire, and the agreement of the US Department of Justice and Citibank, involving a financial settlement to avoid a lawsuit over bad mortgage deals and CDOs in the pre-crisis period.
My central observation is that while legal forms are being observed, these are obviously political processes, with outcomes reflecting relative political power rather than any kind of neutral application of the law. So, the world financial system is part of international power politics: it matters a lot that Citibank is a US bank, while BNP Paribas is French and so on. This is very different from the picture of a global financial system independent of, and standing in judgement on, national governments that seemed to be emerging in the 1990s.
As an illustration, I found this ad put out by the ‘vultures’. To see my point, try interchanging “US” and “Argentina” throughout and assuming an adverse judgement by an Argentinian court against the US government.
This is going to be a long and wonkish post, so I’ll just give the dot-point summary here, and let those interested read on below the fold, for the explanations and qualifications.
* The dominant model of unemployment, in academic macroeconomics at least, is based on the idea that unemployment can best be modelled in terms of workers searching for jobs, and remaining unemployed until they find a good match with an employer
* The efficiency of job search and matching has been massively increased by the Internet, so, if unemployment is mainly explained by search, it should have fallen steadily over the past 20 years.
* Obviously, this hasn’t happened, but economists seem to have ignored this fact or at least not worried too much about it
* The fact that search models are more popular than ever is yet more evidence that academic macroeconomics is in a bad way
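To see why the second dot point follows, here is a minimal sketch of the steady-state unemployment rate implied by a textbook search-and-matching (Diamond-Mortensen-Pissarides style) model. All the parameter values are illustrative assumptions, not a calibration from the literature.

```python
# A minimal sketch of steady-state unemployment in a textbook
# search-and-matching model. All parameter values are illustrative
# assumptions, not a calibration from the literature.

def steady_state_unemployment(matching_efficiency, tightness=0.7,
                              elasticity=0.5, separation_rate=0.03):
    """u* = s / (s + f), where the monthly job-finding rate f comes
    from a Cobb-Douglas matching function: f = A * theta**(1 - alpha),
    with theta the ratio of vacancies to unemployed workers."""
    job_finding_rate = matching_efficiency * tightness ** (1 - elasticity)
    return separation_rate / (separation_rate + job_finding_rate)

# If the Internet doubled matching efficiency (A), the model predicts
# a large fall in steady-state unemployment -- which we did not observe.
u_before = steady_state_unemployment(matching_efficiency=0.4)
u_after = steady_state_unemployment(matching_efficiency=0.8)
print(f"u* before: {u_before:.1%}; after doubling efficiency: {u_after:.1%}")
```

In this sketch, doubling the efficiency of matching roughly halves the steady-state unemployment rate. Nothing like that shows up in the data, which is the puzzle for the search-based account.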
Tony Abbott hasn’t exactly covered himself in glory on his overseas trip. But he has found one ally: Canadian PM (at least until next year’s election) Stephen Harper, also a climate denialist. They made a joint statement denouncing carbon taxes as “job killing”. I didn’t notice any massive destruction of jobs when the carbon price/tax was introduced in 2012, but rather than do my own analysis, I thought I’d take a look at the government’s own Budget outlook, to see how many jobs they claim to have been destroyed by the carbon tax, and what great benefits we can expect from its removal. Here’s the relevant section of the summary (note that the outlook is premised on the Budget measures being passed):
The Australian economy is in the midst of a major transformation, moving from growth led by investment in resources projects to broader‑based drivers of activity in the non‑resources sectors. This is occurring at a time when the economy has generally been growing below its trend rate and the unemployment rate has been rising. During this transition, the economy is expected to continue to grow slightly below trend and the unemployment rate is expected to rise further to 6¼ per cent by mid‑2015.
In this environment, the Government is focused on implementing measures to support growth and jobs while putting in place lasting structural reforms to restore the nation’s finances to a sustainable footing. The timing and composition of the new policy decisions mean that the faster pace of consolidation in this Budget does not have a material impact on economic growth over the forecast period, relative to the 2013‑14 Mid‑Year Economic and Fiscal Outlook (MYEFO).
Since MYEFO, the near‑term outlook for the household sector has improved. Leading indicators of dwelling investment are consistent with rising activity, while household consumption and retail trade outcomes have improved recently, consistent with gains in household wealth. This is partly offset by weaker business investment intentions, particularly for non‑resources sectors.
The outlook for the resources sector is largely unchanged from MYEFO. Resources investment is still expected to detract significantly from growth through until at least 2015‑16, as reflected in the outlook for investment in engineering construction which is forecast to decline by 13 per cent in 2014‑15 and 20½ per cent in 2015‑16. Rising resources exports are only expected to partially offset the impact on growth. Overall, real GDP is forecast to continue growing below trend at 2½ per cent in 2014‑15, before accelerating to near‑trend growth of 3 per cent in 2015‑16.
The labour market has been subdued since late 2011, characterised by weak employment growth, a falling participation rate and a rising unemployment rate, although outcomes since the beginning of 2014 have been more positive. The unemployment rate is forecast to continue to edge higher, settling around 6¼ per cent, consistent with the outlook for real GDP growth. Consumer price inflation is expected to remain well contained, with moderate wage pressures and the removal of the carbon tax.
The reference to the CPI effects of the carbon price (around 0.4 per cent) is, as far as I can tell, the only mention of the carbon tax in the whole of the Economic Outlook statement.
I have a couple of pieces up on the topic that’s likely to consume much of my attention for some time to come: Piketty’s Capital in the 21st century.
Here’s a long review article at Inside Story focusing on the conditions that have made Piketty a bestseller. And here, at The Drum is my take on claims by Chris Giles at the Financial Times that Piketty’s data is fatally flawed.
Update Piketty has responded to the Financial Times. To sum up, as I said in the Drum piece, the criticisms are (mostly incorrect) nitpicks except for the point about UK wealth inequality. Here Piketty’s demolition is convincing. The FT hasn’t used a consistent series. Rather, it’s taken a recent survey estimate (likely to underestimate wealth) and spliced it onto older estate data to produce the counterintuitive finding that the inequality of wealth hasn’t increased.
Like lots of other readers of Thomas Piketty’s Capital, my big concern is not with the accuracy of the diagnosis and prognosis but with the feasibility of the prescription. Piketty’s proposal for a global wealth tax requires an end to the capacity of capital to escape taxation by exploiting the limitations of national taxation systems, through tax havens, transfer pricing, artificial corporate structures and so on.
Given the limited record of success in past efforts to control global tax evasion and avoidance, Piketty is reasonably pessimistic about efforts in this direction. But the latest news from the OECD is remarkably positive. All members of the OECD (notably including evader-friendly jurisdictions like Austria, Luxembourg and Switzerland) have agreed to a system of automatic information exchange for tax purposes. Moreover, the “too big to jail” status of major banks engaged in facilitating tax evasion and money laundering may finally be coming to an end.
On the face of it, the oft-repeated, but so far unjustified claim that “the days of tax havens are over“, may finally be coming true, at least for all but the wealthiest individuals. But the crackdown on individual tax evaders only points up the ease with which corporations (and individuals with the means to establish complex corporate structures) can avoid tax through a mixture of legal avoidance and unprovable evasion (for example, by illegal but unprovable internal transfers).
At the core of the problem is the ability to establish corporations in ways that make their true ownership impossible to trace. And, the jurisdiction most responsible for this is not a Caribbean island or European mini-state, but the “First State” of the US – Delaware, which has long been the preferred location for US incorporation by reason of its business friendly laws.
The efforts of the right to discredit Piketty’s Capital have so far ranged from unconvincing to risible (there’s a particularly amusing one from Max Hastings in the Daily Mail, to which I won’t bother linking). One point raised in this four-para summary by the Economist is that ” today’s super-rich mostly come by their wealth through work, rather than via inheritance.” Piketty does a good job of rebutting this, but for those who haven’t acquired the book or got around to reading it, I thought I’d repost my own response, from 2012.
When people call for a university system more like that of the US, they commonly have in mind the idea that Australia should have institutions like Harvard and Princeton, and a belief that more competition in tertiary education would bring this about. There are a couple of obvious problems with this.
First, high-status universities like this provide undergraduate education to only a tiny proportion of young Americans. Around 1 per cent of the college age cohort attends high-status private institutions like the Ivy League unis, Chicago and Stanford, and this proportion has been declining steadily over time. Most of the Ivies enrol no more undergrads than they did in the 1950s. Adjusting for population, an Australian Ivy League would consist of a single institution enrolling perhaps a thousand students a year.
Second, the US experience shows that the idea of competition between universities is a nonsense. Harvard, Princeton and the rest were the leading universities in North America before the US even existed, and they are still the leaders today. The newest of the really high status universities is probably Stanford, founded in 1885. Competition between universities is pretty much the same as the competition between the Harlem Globetrotters and the Washington Generals.
The reality of US education is a highly stratified system. Below the high-status private universities are the “flagship” state universities, which educate around 10 per cent of the college age cohort (again, a proportion that is declining, or at best stable).
After that, there are lower-tier state universities, two-year community colleges and, worst of all, for-profit degree mills like the University of Phoenix which exist largely to lure low-income students into debt and extract Federal grant money, with only a minority ever completing their courses.
Australia has always had a stratified system, but to a much lesser extent. (More on the history when I get a chance). The big question facing policy is whether to increase stratification, by widening the gap between the “Group of 8” and the rest, or to treat tertiary education like other public services, available to all who can benefit from it, at the best quality we can provide for everyone.
University education systems mirror and recreate the society to which they belong. A highly stratified system, like that in the US and UK, reflects and reinforces a class-bound society in which the best thing you can do in life is to choose the right parents. We should be aiming at less stratification, not more.
Update Just by chance, one of the lead articles in the NY Times advises that, thanks to increased international intakes, the number of places for domestic undergraduates at the Ivies has fallen sharply.
All through the Bligh government’s three-year campaign to sell public assets, I challenged Treasurer Andrew Fraser to a public debate on the issue, or at least to a response to the criticisms I and other economists made of the government’s case. Fraser never responded: even when we spoke at the same event (to a friendly business-oriented crowd) he gave his speech and left before anyone else was allowed on the platform. Doubtless, he made the judgement that this was the politically clever thing to do: by sticking to events that could be scripted, and relying on the authority of Queensland Treasury, he maintained control of public discussion. We all know how that worked out.
Now there’s a new Treasurer, pushing the same arguments. I challenged Tim Nicholls to a debate on the “StrongChoices” campaign. I don’t suppose he’s going to respond in person, but he has at least acknowledged my criticisms (as reported by Paul Syvret) and attempted to rebut them in this piece in the Courier-Mail.
Nicholls’ argument is confused, as the case for asset sales has always been, but he does make at least some progress. The usual magic pudding is in evidence: selling assets is supposed to repay debt, finance new infrastructure spending and obviate the need for higher taxes to maintain services, all at the same time.
But there is one point of light: responding to my observation that the StrongChoices website counts the interest savings from selling assets and paying down debt, but not the foregone earnings of public enterprises, Nicholls says
the value of a government-owned asset is not the same in private sector hands. Governments are not well placed to act nimbly when it comes to changing markets and commercial decisions. Who thinks the value of Telstra would be the same if it reverted back to full government ownership? What about the Commonwealth Bank?
While Nicholls’s specific examples don’t work well (see below), he at least expresses the right general principle. Privatising assets is a good deal for the public if their sale price is greater than their value in continued public ownership (and assuming that the gain isn’t achieved by raising prices or reducing service quality). Indeed, that’s true of every kind of sale: there’s a net benefit only if the item sold is worth more to the buyer than to the seller.
So, there’s a simple fix for the StrongChoices website. Instead of quoting the total sale price for assets, give an estimate of the difference between the sale price and the value in continued public ownership. I did this for both of Nicholls’ examples, the Commonwealth Bank and Telstra (all three “tranches”), and found a net loss to the public in every case except T2, the second Telstra tranche, where the value was inflated by the Internet bubble. Even in that case, we would have been far better off selling the overvalued Internet assets and using the proceeds to buy back the rest of Telstra, as I advocated at the time, just before the bubble burst.
fn1. If, as has been reported, the Queensland Government paid good money to a PR firm for this ludicrous name, then there is certainly an opportunity to cut waste by dumping it.
I’ve written a few times about the idea that betting markets provide a more accurate guide to political outcomes than do polls or ‘expert’ judgements or statistical models (which usually incorporate polls along with economic and other data). The problem is that, close to an election, they all tend to converge. So, the best time to do a comparison is early in the election cycle. Right now there’s quite a sharp contrast. The polls have had the (federal) ALP and LNP just about level for months, but the betting markets have the LNP as strong favorites.
One possible explanation is that governments generally do worse in polls than in elections, so that the polls underestimate the government’s support. I’ve heard this claimed, but never seen any systematic evidence to support it. Another possibility is that market participants know something that’s not reflected in the polls. I’m sceptical on this.
The final possibility is that betting markets this far out from the election are thin and inefficient. If that’s right, then the odds for Labor look very favorable. I’m not going to bet myself (I did OK on my one foray into the US Republican primaries, but the hassle involved was too much to make it worthwhile), and I’m not giving betting advice.
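The gap between level polls and the markets’ LNP favoritism is easy to quantify by converting quoted decimal odds into implied probabilities, net of the bookmaker’s margin (the “overround”). The odds below are hypothetical, chosen only to illustrate the calculation, not actual market quotes.

```python
# Converting bookmakers' decimal odds into implied win probabilities,
# after stripping out the bookmaker's margin (the "overround").
# The odds below are hypothetical, not actual quotes from the markets.

def implied_probabilities(decimal_odds):
    """Normalise the reciprocals of the odds so they sum to one."""
    raw = [1.0 / odds for odds in decimal_odds]
    overround = sum(raw)
    return [p / overround for p in raw]

lnp_prob, alp_prob = implied_probabilities([1.40, 2.90])
print(f"Implied probabilities -- LNP: {lnp_prob:.0%}, ALP: {alp_prob:.0%}")
```

Odds like these imply roughly a two-to-one probability in favour of the LNP, a long way from the 50-50 picture the polls have been painting, which is exactly the kind of divergence that makes an early-cycle comparison informative.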
Still, I’d be interested in responses from those among my fellow economists who’ve claimed efficiency properties for betting markets. I guess Andrew Leigh is precluded from commenting, and Justin Wolfers is a long way from the action in Oz, but I’m sure there must be others willing to jump in.