Citations to Australian Astronomy: 5 and 10 Year Benchmarks

Not everything that can be counted counts, and not everything that counts can be counted.
Albert Einstein, (attributed)
I don't know how widely it is known outside academic circles, but researchers these days are surveyed and counted continuously, in an effort to show that taxpayers' money is being spent on research excellence and research impact. A number of countries run such exercises, and here we are in the second round of the Excellence in Research for Australia.

Such exercises have real consequences, as some at The University of Sydney just found out; those not producing enough "research outputs" are in the firing line for redundancies.

With the growth of online databases of papers and citations, it's now easy to get an assessment of someone's research output, and with tools like Google Scholar it's all nicely displayed; here's mine. People's careers get wrapped up into a few metrics, and the principal one is the h-index.

The idea is simple. Each paper you write is cited by others, and when you rank your papers in decreasing order of citation count, your h-index is the largest number h for which your top h papers each have at least h citations. As an example, my h-index is 50, and so I have 50 papers with at least 50 citations each.
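The ranking procedure above is easy to put into code; here's a minimal Python sketch (the citation counts in the example are made-up numbers, purely for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h

# Five papers with these (invented) citation counts:
print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with at least 4 citations)
```

Note that a single blockbuster paper with thousands of citations moves the h-index by at most one, which is part of why people like it as a measure of sustained output.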

This is taken to be a measure of research impact. The more active you are, the more papers you produce. But you need people to read and cite them (this is the impact part). The bigger the h-index, the "better" the researcher (for some vague definition of better).

Of course, many people hate this kind of ranking system, as many factors can affect your h-index, such as the field you work in, being ahead of your time, etc. But it is common to compare the h-indices of similar people at a similar stage of their careers, at least as a starting point for assessing research impact.

Such ranking exercises have become a subfield, and a recent paper appeared which looked at the impact of Australian astronomers (in terms of their h-index and citations). Here's the abstract:

Citations to Australian Astronomy: 5 and 10 Year Benchmarks

Expanding upon Pimbblet's informative 2011 analysis of career h-indices for members of the Astronomical Society of Australia, we provide additional citation metrics which are geared to a) quantifying the current performance of b) all professional astronomers in Australia. We have trawled the staff web-pages of Australian Universities, Observatories and Research Organisations hosting professional astronomers, and identified 383 PhD-qualified, research-active, astronomers in the nation - 131 of these are not members of the Astronomical Society of Australia. Using the SAO/NASA Astrophysics Data System, we provide the three following common metrics based on publications in the first decade of the 21st century (2001-2010): h-index, author-normalised citation count and lead-author citation count. We additionally present a somewhat more inclusive analysis, applicable for many early-career researchers, that is based on publications from 2006-2010. Histograms and percentiles, plus top-performer lists, are presented for each category. Finally, building on Hirsch's empirical equation, we find that the (10-year) h-index and (10-year) total citation count T can be approximated by the relation h = (0.5+sqrt{T})/sqrt{5} for h > 5.
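The empirical relation quoted at the end of the abstract is easy to play with numerically; here's a minimal sketch (the example citation total is an invented number for illustration):

```python
import math

def h_from_total(T):
    """Approximate 10-year h-index from a 10-year total citation count T,
    using the paper's empirical relation h = (0.5 + sqrt(T)) / sqrt(5),
    which the abstract quotes as holding for h > 5."""
    return (0.5 + math.sqrt(T)) / math.sqrt(5)

# e.g. a researcher with 2000 citations over the decade:
print(round(h_from_total(2000), 1))  # -> 20.2
```

So, roughly speaking, the h-index grows like the square root of the total citation count, which matches the intuition that piling more citations onto already-highly-cited papers does little for h.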
One of the things they do is produce ranked lists of the "top" astronomers in Australia. Here it is in terms of h-index.

I'm not going to comment too much about this, but it's nice to be on the list, and I'm in excellent company also.

Right, back to grant writing!


  1. The paper where OPERA thought they saw faster-than-light neutrinos (turns out to have been a loose cable) has over 250 citations.

    1. Are you saying that the OPERA results didn't generate a lot of new research, some of which may (in the future) take physics in a new direction? Or are you suggesting that, because it was a wonky cable, these 250 citations represent wasted effort?

      The number of citations is not a community "yes, yes, this is all correct" statement, and a number of papers I know have a large number of cites from people calling them incorrect. It's a measure of how many people read the paper, think it's interesting and do some research from it.

      I'm surprised it's as low as 250.

  2. Interestingly, all of the authors apart from Alister Graham appear to be high school students. That's cool.

    Congratulations on the ranking. Those are some impressive numbers. I'm hoping to break into the double digits of h-index soon. :)

