Don’t do useful research

That’s the message being sent to Australian social science and humanities researchers by the systems of journal rankings adopted in many disciplines.

This point was made to the Senate recently by historians from opposite ends of the political spectrum, Greg Melleuish and Stuart Macintyre, who are more interested in researching and arguing about Australian history than in following Northern hemisphere fads.

Here’s a submission I wrote in relation to Economic Analysis and Policy, the journal published by the Queensland branch of the Economic Society of Australia.

Under the reward systems prevailing in most Australian universities, a publication in a journal ranked B or lower has a negative rating. Despite (or perhaps because of) my success in publishing in Top 5 and other A* journals, I have been actively discouraged from publishing in B journals.

As numerous senior academics have pointed out to the Senate recently, a ranking system which punishes work on Australian policy issues reduces the value of the university sector to Australia, and increases pressure for reductions in research funding or redirection to more relevant institutions.

I therefore urged the ABDC to upgrade Economic Analysis and Policy to an A ranking.

12 thoughts on “Don’t do useful research”

  1. It’s not just Australian work. There are entire fields, such as linguistics, where people simply don’t publish much in journals. Some areas, like the cutting edge of computational linguistics, suffered serious damage because of this: not only did people get punished in Australia for working in the area, but the web companies then offered really well-paid jobs to people sick of fighting with their universities. It is easy to think of areas that will go the same way (basically, many new technology areas), because they don’t even have B-rated journals to argue with each other in. So most linguistics departments don’t have this type of person, despite it being such an important area.

    Similarly, theoretical work itself becomes difficult in places, because it actually takes time and thought. Where I work (and this seems common), the person who wins the university research prize each year has inevitably published many papers (the most recent winner more than 40 in a year). Some people might call this scientific corruption, but it is clearly looked upon favourably, and so they benefit at the expense of everyone else, given the way workloads are distributed.

  2. Not sure if you realise it, but this situation sounds pretty ridiculous – either academics can’t rank journals appropriately for themselves, or someone outside the profession is ranking something you wouldn’t expect an outsider to be able to understand sufficiently.
    (Actually I may have some vague inkling of how this comes about, but I think the above comment still holds.)

  3. Aren’t academics supposed to demonstrate Engagement and Impact as part of their performance evaluation? Seems a tad contradictory to mark down publications in journals that have a policy focus.

  4. MartinK and Smith9 — management at different levels (university, government, faculty…) does the rankings, and they often have no connection at all with academics. Many have been bureaucrats their entire lives. So typically quantitative measures get used, often for all people at the same university. So you end up with people judging others on quantitative distinctions for things that are qualitatively different. A bit like going into K-mart and saying they are the best because they have more chocolate than a chocolatier that sells only fine Belgian chocolate, or that Woolworths is the best because they sell more oranges than a furniture shop sells chairs. That’s no exaggeration — try thinking of quantitative scores based on the same measure to evaluate musicology and electrical engineering. This is what happens.

  5. Conrad

    even worse, they probably get paid more than academics. That must really stick in the craw.

  6. Smith9: only impact outside the academy counts for the metrics. So publishing policy-oriented work doesn’t count for these purposes unless you can demonstrate that it has actually changed policy. Obviously, any change in policy will usually be years down the track and will usually be influenced by too many different factors to be able to demonstrate that a particular paper, or even a research program, had a measurable impact.

  7. Conrad, that description is inaccurate. At least for the ERA, quantitative measures are used only for the sciences. Impact factors determine journal rankings. The system JQ is describing – with journals ranked A*, A, B and C – was developed by ARC panels composed almost exclusively of academics. But that’s now been replaced outside science by direct assessment of a sample of the work by academics, plus ARC panels composed again largely of academics.

  8. A nice anecdote on research rankings from Dutch higher education, from the days when I worked on international cooperation in the field. Sorry, no sources, but it rings true.

    The Netherlands government set up a shiny new system for the evaluation of research quality. The auditors came to the Technical University of Delft, and assembled the indicators, including those for the Department of Water Engineering.
    “Oh dear, we seem to have a problem. Why are there so few publications in foreign journals?”
    “We are terribly sorry, but basically we have a world monopoly in the field, and the only serious journal is the one we run.”

  9. Neil, the first two ERAs worked by having universities put publications into categories (generally gamed), sorting them by rank, and then giving points for A and A* publications and subtracting points for everything else, hence the move by bureaucrats to stop people publishing in some journals. I agree there were some areas excluded.

    The rules have been changed over time to try to stop gaming of them (which is impossible), and these rankings were officially dropped, although the idea that this isn’t largely a quantitative game based on rankings is wrong. The ARC (cf. everyone else; academics are responsible too) was good in this respect, because the rankings initially didn’t take overall quantity into account (apart from requiring a certain minimum number). This led to the opposite problem of universities getting world-class ranks on tiny amounts of work.

    If the outputs had actually been scrutinized seriously, this wouldn’t have happened. For example, Sydney University got 4 in Neuroscience, and more or less every other university got 5, including some doing next to no neuroscience. There are many other examples. Similarly, many departments have moved up or down substantially on the same measures despite the same people doing more or less the same thing. This is because people worked out the formula used, not because the scores are being scrutinized by people who actually know what is going on.

  10. Presumably the world’s best researchers of kangaroos are in Australian universities. Do they get marked down for publishing in local journals?

  11. It’s just straight irrationality, which is obviously not a sensible way to arrange or deal with academic information.

    What is ‘A’ grade or ‘B’ grade about the paper or electronic website on which the articles are published? Nothing. It is the content of the articles which may be ‘A’ grade or ‘B’ grade, which may depend on how you measure them and who does the measuring.

    If a journal is ‘A’ grade and only ‘A’ grade papers are submitted and published there, and only ‘A’ grade people consider submitting or appearing there, then that journal will be ‘A’ grade. Same with every other grade you invent. If ‘A’ grade academics subscribe and submit to a journal that used to be ‘B’ grade, then that journal will get more ‘A’ grade articles and attract more ‘A’ grade people, and the paper or electronic space of that journal will become ‘A’ grade.

    It’s artificial. The meaning is in the content of the articles–the value of which is what the journals are invented to decide, through argument and peer review. It’s not supposed to happen the other way about.

    George Orwell called this “a major mental disease”*: the belief that what is currently happening should and will continue to happen, and will do so more strongly than before, because of that belief. Not healthy.

    Second Thoughts on James Burnham, 1946
    *http://orwell.ru/library/reviews/burnham/english/e_burnh.html

  12. US legal-centric, yet applicable to this discussion.
    “The Problems Of Measuring Scholarly Impact (‘Stuff’)
    If we’re seeking to adopt some measure to assess scholarly impact, there are serious caveats that need to be addressed before we begin.”

    “His tweet that got me to thinking was this one: “My pal @lawprofblawg has got some points here, but I think at some point s/he has got to get a little more concrete with an alternative. Is the status quo working? Why would citation-based metrics be worse? Should law schools evaluate scholarship at all? How should hiring work?”

    “However, the current metrics aren’t working. My coauthor and I have explained the biases and entry barriers facing certain potential entrants into legal academia. Eric Segall and Adam Feldman have explained that there is severe concentration in the legal academy, focused on only a handful of schools that produce the bulk of law professors.”
    https://abovethelaw.com/2019/10/the-problems-of-measuring-scholarly-impact-stuff/
