Thursday, 17 May 2012

Slicing The Monoceros Overdensity with Suprime-Cam

I've finally succumbed to the sickness sweeping the land, and find myself wide awake at 5am (this is not really a natural state for an astronomer). So, as I sit here with a sore throat, a quick post for you.

Blair Conn, my ex-student and now a Humboldt Fellow in Heidelberg, and I have had a paper accepted for publication in the Astrophysical Journal. The focus of the paper is the Monoceros Ring, a vast "stream" of stars that appears to circle the outer edge of the Milky Way galaxy.



The ring has had a bit of a checkered past; the debate is not over its existence, but its origin.

People generally fall into two camps. Some think that Monoceros is just a natural part of the Galaxy, a region of the stellar disk that has been puffed up (also known as the flare or warp of the disk), whereas others think the ring is the debris from a dwarf galaxy which was tidally disrupted when it came too close to the disk of the Milky Way. Potentially it is the debris from the Canis Major Dwarf Galaxy, a little galaxy thought to be nestled into the disk of the Milky Way.

The problem is that the Monoceros Ring is immense on the sky, and mapping it in detail takes a lot of work. But it is this mapping that is required to have any chance of telling the difference between the two ideas.

Which brings us to Blair's paper. If you want to map an immense structure, you need a big field of view, and one of the biggest, on one of the best telescopes, is Suprime-Cam on Subaru in Hawaii.

IMHO, this is one of the best imagers in the world (and it's only going to get better, but that's for another post).

Suprime-Cam was used to take a number of deep fields, in three strips, in directions away from the disk of the Milky Way.
The red regions are the fields (distorted as we are looking at a large patch of sky on flat paper). The grey underneath is the extinction due to galactic dust. This dust is the real bane of astronomers!

Cutting to the chase (as I need to get a shower and get to work), what do we find? Well, the distribution of stars in these fields (in terms of how many there are and how far away they are) appears to match both models, the galactic one and the extragalactic (Canis Major) one.

But!!! It appears that the chemical composition of stars in the ring is different to that in the disk of the Milky Way, strongly suggesting that the stars in the ring are not simply puffed up from the galactic disk. Maybe Monoceros really is an extragalactic invader?

However, experience has taught me that evidence doesn't strongly sway people's viewpoints on things, and I am sure that we will hear counter-claims about its origin. But this is fun, and how science is done. Well done Blair!

Slicing The Monoceros Overdensity with Suprime-Cam 

Blair C. Conn, Noelia E. D. Noël, Hans-Walter Rix, R. R. Lane, G. F. Lewis, M. J. Irwin, N. F. Martin, R. A. Ibata, A. Dolphin, S. Chapman
We derive distance, density and metallicity distribution of the stellar Monoceros Overdensity (MO) in the outer Milky Way, based on deep imaging with the Subaru Telescope. We applied CMD fitting techniques in three stripes at galactic longitudes: l=130 deg, 150 deg, 170 deg; and galactic latitudes: +15 < b [deg] < +25. The MO appears as a wall of stars at a heliocentric distance of ~10.1 ± 0.5 kpc across the observed longitude range with no distance change. The MO stars are more metal rich ([Fe/H] ~ -1.0) than the nearby stars at the same latitude. These data are used to test three different models for the origin of the MO: a perturbed disc model, which predicts a significant drop in density adjacent to the MO that is not seen; a basic flared disc model, which can give a good match to the density profile but the MO metallicity implies the disc is too metal rich to source the MO stars; and a tidal stream model, which bracket the distances and densities we derive for the MO, suggesting that a model can be found that would fully fit the MO data. Further data and modeling will be required to confirm or rule out the MO feature as a stream or as a flaring of the disc.

Sunday, 13 May 2012

Ranking Astronomers

If you want to fire up astronomers (and scientists in general), start discussing the topics of research impact and research metrics. These are the buzz words at the moment, as governments around the world are carrying out assessments of research done with public funds. Here in Australia we are in the current round of the Excellence in Research for Australia, where research in universities is scored on a scale which compares it to international standards.

I could write pages on attitudes to such exercises, and what they mean, but I do note that the number of papers an academic has published was a factor considered in a recent round of redundancies at the University of Sydney. But what I will focus upon here is an individual ranking, the h-index.

You can read the details of the h-index at wikipedia, but simply put: take all the papers written by an academic, and find out how many times each of them has been cited. Order the papers from the most cited to the least, and the point going down the list where a paper's rank matches its number of citations gives the academic's h-index. Here's the piccy from wikipedia:


If you check me out at Google Scholar, I have an h-index of 49, which means I have 49 papers with at least 49 citations each. Things like Google Scholar, and the older (but still absolutely excellent) Astrophysics Data System, make calculating the h-index for astronomers (and even academics in general) extremely easy. The result is that people now write papers about people's h-indices, papers like this one which ranks Australian astronomers in terms of their output over particular periods.
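For the code-minded, the calculation itself fits in a few lines. Here's a minimal Python sketch; the citation counts are made up purely for illustration:

    def h_index(citations):
        """The largest h such that h papers each have at least h citations."""
        cites = sorted(citations, reverse=True)  # most cited first
        h = 0
        for rank, c in enumerate(cites, start=1):
            if c >= rank:    # going down the list, this paper still has
                h = rank     # at least as many citations as its rank
            else:
                break
        return h

    # An invented publication record: each number is one paper's citation count.
    print(h_index([120, 45, 33, 10, 8, 3, 1]))   # prints 5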

Now, a quick google search will turn up a bucket load of articles for and against the h-index. There are a lot of complaints about the h-index: that it does not take into account things like the field of research, the number of authors, or the time needed to build up citations (slow-cooker research which is not recognised at the time, but becomes influential after a long period, sometimes after the researcher has died!).

Others seem to have a bit of a dirty feeling about the h-index, a sense that ranking astronomers and their research is somewhat beneath being an academic, especially things like producing league tables as in the paper above.

The problem is that, in reality, scientists are ranked all the time, be it for grant applications, telescope time, jobs etc., and in all of these it is necessary to compare one researcher to another. Such comparisons can be very difficult. When faced with a mountain of CVs, with publication records as long as your arm, grant successes and even outside interests (why, oh why, in job applications do people feel it is necessary to tell me they like socialising and reading fantasy books??), it can be hard to compare John Smith with Jayne Smythe.

This is why I am a fan of the h-index. 

But let's be clear why and how. I know the age-old statement that "Past performance is not an indicator of future success", but when hiring someone, or allocating grant money or telescope time, people are implicitly looking at a return on their investment; they want to see success. And to judge that, you need to look at people's past records and extrapolate into the future. If Joe Bloggs has received several grants in the past and nothing has come of them, do you really want to give him more money? And what if he has received quantities of telescope time and never published the results? Is it a good idea to give him more time?

But the stakes are higher. "Impact" is key, and one view of impact is that your research is read, and more importantly cited, by other academics around the world. What if Joe is a prolific publisher, and all of his papers appear in the Bulgarian Journal of Moon Rocks, with no evidence that anyone is reading his papers? Do you want to fund him to write more papers that no-one is going to read?

Now, some will say that academic freedom means that Joe should be free to work on whatever he likes, and I agree that this is true. But as the opening of this post pointed out, governments, and hence universities, are assessing research and funding, and this assessment wants to see "dollars = impact".

So, when looking at applications, be it for jobs or grants, the h-index is a good place to start: does this researcher have a track record of publishing work that is cited by others? Especially for jobs, it appears that the h-index actually has some predictive power (although I know this is not universally true, as I know a few early hot-shots who fell off the table).

But let me stress, the h-index is a good place to start, not end, the process. 

I agree with Bryan Gaensler's statement that we should "Reward ideas, not CVs", and the next "big thing" might come from left-field, from a little-known researcher who has yet to establish themselves. But realistically a research portfolio should be a mix of ventures: research that is guaranteed to produce solid results, alongside some risky ideas that might pay off big time, and we have to judge that by looking at the research proposal as a whole (and I think this should be true for an individual's research portfolio, a department, or even a country).


Anyway, professional academics know that they are being assessed and ranked, and know that those that count beans are watching. I know there are a myriad of potential metrics that one can use to assess a researcher (and funnily enough, many researchers like the one that they look good in :), and I also know that you should look at the whole package when assessing the future potential of a researcher.


But the h-index is a good place to start.



Sunday, 6 May 2012

The Lives of High Redshift Mergers

Our office move has been completed, and I must admit that I am very pleased. The offices are swish, the French patisserie at the end of the street is superb, my walk to the trains has been cut in half, and I love my glass office wall/whiteboard. Time to get back to some work.

And to kick things off, my PhD student, Tom McCavana, has had a paper accepted for publication in the Monthly Notices of the Royal Astronomical Society. It's a bit of a complex tale, but the underlying question is simple, namely how do galaxies grow over time?

We know from the Cold Dark Matter model of the Universe that galaxies grow over time through the accretion of smaller systems. Here's a little movie illustrating such a growing galaxy.
Looks cool, eh? These movies are generated from massive simulations of structure growth, something you need supercomputers for. The problem is that most simulations are not the complete picture: they only consider the evolution of dark matter, because the complex physics of gas and star formation makes the full simulations hard to do.

So, the technique which is often applied is to do a dark-matter-only simulation, and then paint on galaxies using what are known as semi-analytic techniques.

What is important is to really know the details of how galaxies form over time, and to know what happens to the smaller accreting blobs. But there is a dilemma. You can do a large simulation of the Universe, looking at the accretion histories of lots of galaxies, but the problem there is resolution: to simulate large chunks of the Universe, even with lots of particles, the smallest mass you can resolve is often larger than the dwarf galaxies we see around us today. We just can't catch the detail.
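To put some very rough numbers on that, here is a back-of-the-envelope Python sketch; the box size, particle number and cosmological parameters are purely illustrative and not those of any particular simulation:

    # Rough mass resolution of a large-volume dark-matter-only simulation.
    OMEGA_M = 0.3          # assumed matter density parameter
    RHO_CRIT = 2.775e11    # critical density in h^2 Msun per Mpc^3
    box_size = 100.0       # box side length in Mpc/h (assumed)
    n_particles = 1024**3  # total number of particles (assumed)

    # Mean matter mass in the box, shared equally among the particles.
    particle_mass = OMEGA_M * RHO_CRIT * box_size**3 / n_particles

    # A halo needs of order 100 particles before you can trust it.
    min_halo_mass = 100 * particle_mass

    print(f"particle mass        ~ {particle_mass:.1e} Msun/h")   # ~8e7
    print(f"smallest usable halo ~ {min_halo_mass:.1e} Msun/h")   # ~8e9

With these made-up numbers, anything much below about ten billion solar masses, which is where many dwarf galaxies live, simply falls below the resolution of the simulation.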

Alternatively, you can simulate a single galaxy in detail. But the problem there is that while we know what is going on in that galaxy, it is only a single galaxy, and so we don't have a global picture of accretion history. Also, with a single galaxy, we might be missing the influence of the push and pull of cosmic structure on the evolution of galaxies.

And the telling thing is you seem to get different results in the two scenarios. Which is "correct"?

The situation is even more complex, because how do you define when something has merged? This might sound like a silly question, but when you run a computer simulation of galaxies forming, you run through lots of time steps yet can only write out a limited number of "snapshots". So, unlike the movie above, what you actually have is a series of still photographs, and it is from these that you have to decide whether something has merged.

And it turns out that the definition of a merger is tricky. You need to decide on criteria, and, as with all good criteria, there are events that break the rules. In fact, you can get objects which *look* like they have merged, but at a later snapshot they have separated again, and you realise that the system was actually just a fly-by rather than a merger.
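To make that bookkeeping concrete, here's a toy Python sketch of the kind of decision involved; the threshold and the snapshot values are invented for illustration, and this is not the actual criterion used in the paper:

    def classify(separations, merge_radius=0.05):
        """separations: halo-to-host distance (in units of the host's
        virial radius, say) at each stored snapshot, in time order."""
        came_close = any(s < merge_radius for s in separations)
        # Came close, but clearly separated again by the final snapshot:
        # a fly-by, not a merger.
        if came_close and separations[-1] > merge_radius:
            return "fly-by"
        if came_close:
            return "merger"
        return "still infalling"

    print(classify([1.0, 0.5, 0.04, 0.3, 0.9]))   # fly-by
    print(classify([1.0, 0.6, 0.2, 0.04, 0.02]))  # merger

With only a handful of snapshots you may never catch the moment of closest approach at all, which is exactly why the definitions matter.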

So, Tom delved into all of this with some new simulations with lots of snapshots. In brief, the result is that the relations derived from cosmologically motivated simulations are the ones which more accurately describe the growth of galaxies, and you really need to unpick mergers and be careful with your definitions.

At the end of the day, you can make an accurate merger tree, which shows you how the halo of a galaxy grows. Here's one from the paper
but I'll let you read the paper to get all the gory details. Well done Tom!
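If you're wondering what a merger tree actually looks like as a data structure, here is a toy Python sketch; the masses and the tree itself are invented and are not taken from the paper:

    class Halo:
        def __init__(self, mass, progenitors=()):
            self.mass = mass                      # halo mass, arbitrary units
            self.progenitors = list(progenitors)  # haloes at the previous snapshot

    def main_branch(halo):
        """Walk back through the most massive progenitor at each step."""
        branch = [halo.mass]
        while halo.progenitors:
            halo = max(halo.progenitors, key=lambda h: h.mass)
            branch.append(halo.mass)
        return branch

    # The final halo was built from two progenitors, one of which itself
    # grew from two smaller haloes.
    tree = Halo(10.0, [Halo(6.0, [Halo(4.0), Halo(1.5)]), Halo(3.0)])
    print(main_branch(tree))   # [10.0, 6.0, 4.0]

The real trees in the paper are, of course, vastly bushier than this.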


The Lives of High Redshift Mergers

Tom McCavana, Miroslav Micic, Geraint F. Lewis, Manodeep Sinha, Sanjib Sharma, Kelly Holley-Bockelmann, Joss Bland-Hawthorn
We present a comparative study of recent works on merger-timescales with dynamical friction and find a strong contrast between idealized/isolated mergers (Boylan-Kolchin et al. 2008) and mergers from a cosmological volume (Jiang et al. 2008). Our study measures the duration of mergers in a cosmological N-body simulation of dark matter, with emphasis on higher redshifts (z < 10) and a lower mass range. In our analysis we consider and compare two merger definitions; tidal disruption and coalescence. We find that the merger-time formula proposed by Jiang et al. (2008) describes our results well and conclude that cosmologically motivated merger-time formulae provide a more versatile and statistically robust approximation for practical applications such as semi-analytic/hybrid models.