Sunday, 26 October 2014

The Redshift Drift

Things are crazily busy, with me finishing teaching this week. Some of you may know that I am writing a book, which is progressing, but more slowly than I hoped. I'm up to just over 60,000 words, with a goal of about 80 to 90 thousand, so more than halfway through.

I know that I have to catch up with papers, and I have another article in The Conversation brewing, but I thought I would write about something interesting. The problem is that my limited brain has been occupied by so many other things that my clear thinking time has been reduced to snippets here and there.

But one thing that has been on my mind is tests of cosmology. Nothing I post here will be new, but you might not know about it. So here goes.

So, the universe is expanding. But how do we know? I've written a little about this previously, but we know that almost 100 years ago Edwin Hubble discovered his "law": galaxies are moving away from us, and the further away they are, the faster they are moving. There's a nice article here describing the importance of this, and we end up with a picture that looks something like this
Distance is actually the hard thing to measure, and there are several books that detail astronomers' on-off love affair with measuring distances. But how about velocities?

These are measured using the redshift. It's such a simple measurement. In our laboratory, we might see emission from an element, such as hydrogen, at one wavelength, but when we observe it in a distant galaxy, we see it at another, longer, wavelength. The light has been redshifted due to the expansion of the universe (although exactly what this means can be the source for considerable confuddlement).
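In fact, it's simple enough to compute in a couple of lines. Here's a minimal sketch (in Python, with a made-up observed wavelength, so the numbers are purely illustrative):

```python
# Minimal sketch: turning an observed wavelength into a redshift.
# The observed wavelength below is a made-up illustrative value.
C_KM_S = 299792.458      # speed of light in km/s
LAMBDA_REST = 656.281    # rest-frame (laboratory) H-alpha wavelength in nm

def redshift(lambda_observed_nm, lambda_rest_nm=LAMBDA_REST):
    """Redshift z is the fractional stretch in wavelength."""
    return (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

z = redshift(669.4)      # hypothetical observed H-alpha line
v_approx = C_KM_S * z    # Doppler-like velocity; only valid for z << 1
print(f"z = {z:.4f}, v ~ {v_approx:.0f} km/s")
```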

Here's an illustration of this;
Relating the redshift to a Doppler shift, we can turn it into a velocity. As we know, the Hubble law is what we expect if we use Einstein's theory of relativity to describe the universe. Excellent stuff all around!

One thing we do know is that the expansion rate of the universe is not uniform in time. It was very fast at the Big Bang, slowed down for much of cosmic history, before accelerating due to the presence of dark energy.

So, there we have an interesting question. Due to the expansion of the universe, will the redshift I measure for a galaxy today be the same when I measure it again tomorrow?

This question was asked before I was born, and then again several times afterwards. For those who love mathematics (and who doesn't?), the change of redshift with time looks like this:

$$\frac{dz}{dt} = (1+z)H_0 - H(z)$$

(taken from this great paper) where $z$ is the redshift, $H_0$ is Hubble's constant today, while $H(z)$ is Hubble's constant at the time the light was emitted from the galaxy you're observing.

The cool thing is that the last term depends upon the energy content of the universe, just how much mass there is, how much radiation, how much dark energy, and all the other cool things that we would like to know, like whether dark energy is evolving and changing, or interacting with matter and radiation. It would be a cool cosmological probe.

Ah, there is a problem! We know that Hubble's constant is about $H_0$ = 72 km/s/Mpc, which seems like a nice sort of number. But if you look closely, you can see that it actually has units of 1/time. So, expressing it in years, this number is about 0.0000000001 per year. This is a small number. Bottom.
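Just how small? Here's a rough back-of-the-envelope sketch (in Python; the cosmological parameters are fiducial values I've assumed for illustration, not numbers from the post):

```python
import math

# Assumed fiducial flat LambdaCDM parameters, for illustration only
H0_KM_S_MPC = 72.0            # Hubble's constant in km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7   # matter and dark energy density parameters
KM_PER_MPC = 3.0857e19        # kilometres in a megaparsec
SEC_PER_YR = 3.156e7          # seconds in a year

# H0 in units of 1/time: roughly 1e-10 per year, as quoted above
H0_per_yr = H0_KM_S_MPC / KM_PER_MPC * SEC_PER_YR
print(f"H0 ~ {H0_per_yr:.1e} per year")

def hubble(z):
    """Hubble parameter at redshift z (per year), flat LambdaCDM."""
    return H0_per_yr * math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def redshift_drift(z):
    """The drift dz/dt = (1+z)H0 - H(z), per year of observer time."""
    return (1 + z) * H0_per_yr - hubble(z)

print(f"dz/dt at z = 2: {redshift_drift(2.0):.1e} per year")
```

For a galaxy at z = 2 this gives a drift of a few parts in 10^12 per year, which is why the required spectral precision is measured in cm/s.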

But this does not mean that astronomers pack up their bags and head home. No, you look for solutions and see if you can come up with technologies that allow you to measure this tiny shift. I could write an entire post on this, but people are developing laser combs to give extremely accurate measurements of the wavelengths in spectra, and actually measure the changing expansion of the Universe in real time!

Why am I writing about this? Because these direct tests of cosmology have always fascinated me, and every so often I start doodling with the cosmological equations to see if I can come up with another one. Often this ends up with a page of squiggles and goes nowhere, but sometimes I have what I think is a new insight.


And this gives me a chance to spruik an older paper of mine, with then PhD student, Madhura Killedar. I still love this stuff!


The evolution of the expansion rate of the Universe results in a drift in the redshift of distant sources over time. A measurement of this drift would provide us with a direct probe of expansion history. The Lyman alpha forest has been recognized as the best candidate for this experiment, but the signal would be weak and it will take next generation large telescopes coupled with ultra-stable high resolution spectrographs to reach the cm/s resolution required. One source of noise that has not yet been assessed is the transverse motion of Lyman alpha absorbers, which varies the gravitational potential in the line of sight and subsequently shifts the positions of background absorption lines. We examine the relationship between the pure cosmic signal and the observed redshift drift in the presence of moving Lyman alpha clouds, particularly the collapsed structures associated with Lyman limit systems (LLSs) and damped Lyman alpha systems (DLAs). Surprisingly, the peculiar velocities and peculiar accelerations both enter the expression, although the acceleration term stands alone as an absolute error, whilst the velocity term appears as a fractional noise component. An estimate of the magnitude of the noise reassures us that the motion of the Lyman alpha absorbers will not pose a threat to the detection of the signal.

Catching the Conversation

Wow!!! Where has the time gone? I must apologise for the sluggishness of posts on this blog. I promise you that it is not dead; I have been consumed with a number of other things, and not all of them fun. I will get back to interesting posts as soon as possible.

So, here's a couple of articles I've written in the meantime, appearing in The Conversation

One on some of my own research: Dark matter and the Milky Way: more little than large



And the other on proof (or lack of it) in science: Where’s the proof in science? There is none



There's more to come :)

Wednesday, 27 August 2014

Sailing under the Magellanic Clouds: A DECam View of the Carina Dwarf


Where did that month go? Winter is almost over and spring will be breaking, and my backlog of papers to comment on is getting longer and longer.

So a quick post this morning on a recent cool paper by PhD student, Brendan McMonigal, called "Sailing under the Magellanic Clouds: A DECam View of the Carina Dwarf". The title tells a lot of the story, but it all starts with a telescope with a big camera.

The camera is DECam, the Dark Energy Camera, located on the 4m Blanco telescope at CTIO in Chile. This is what it looks like;
It's not one CCD, but loads of them butted together, allowing us to image a large chunk of sky. Over the next few years, this amazing camera will allow the Dark Energy Survey, which will hopefully reveal what is going on in the dark sector of the Universe, a place where Australia will play a key role through OzDES.

But one of the cool things is that we can use this superb facility to look at other things, and this is precisely what Brendan did. And the target was the Carina Dwarf Galaxy. Want to see this impressive beast? Here it is;
See it? It is there, but it's a dwarf galaxy, and so is quite faint. Still can't see it? Bring on DECam. We pointed DECam at Carina and took a picture. Well, a few. What did we see?
So, as you can see, we took 5 fields (in two colours) centred on the Carina dwarf. And with the superb properties of the camera, the dwarf galaxy nicely pops out.

But science is not simply about taking pictures, so we constructed colour-magnitude diagrams for each of the fields. Here's what we see (and thanks Brendan for constructing the handy key for the features in the bottom-right corner).
All that stuff in the region labelled MW is stars in our own Milky Way, which are blooming contamination getting in our way. The blob at the bottom is where we are hitting the observational limits of the camera and can't really tell faint stars from galaxies.

The other bits labelled Young, Intermediate and Old tell us that Carina has had several bursts of star-formation during its life, some recent, some a little while ago, and some long ago (to express it in scientific terms), while the RGB is the Red Giant Branch, RC is the Red Clump and HB is the Horizontal Branch.

We can make maps of each of the Young, Intermediate and Old population stars, and what we see is this;
The Young and Intermediate populations appear to be quite elliptical and smooth, but the Old population appears to be a little ragged. This suggests that long ago Carina was shaken up through some gravitational shocks when it interacted with the larger galaxies of the Local Group, but the dynamics of these interactions are poorly understood.

But there is more. Look back up there at the Colour-Magnitude Diagram schematic and there is a little yellow wedge labelled LMC, the Large Magellanic Cloud; what's that doing there?

What do we see if we look at just those stars? Here's what we see.
So, they are not all over the place, but are located only in the southern field, overlapping with Carina itself (and making it difficult to separate the Old Carina population from the Magellanic Cloud stars).

But still, what are they doing there? Here's a rough map of the nearby galaxies.
As we can see, from the view inside the Milky Way, Carina and the LMC appear (very roughly) in the same patch of sky but are at completely different distances. But it means that the Large Magellanic Cloud must have a large halo of stars surrounding it, possibly puffed up through interactions with the Small Magellanic Cloud as they orbit together, and with the mutual interaction with the Milky Way.

It's a mess, a glorious, horrendous, dynamically complicated mess. Wonderful!

Well done Brendan!

Sailing under the Magellanic Clouds: A DECam View of the Carina Dwarf

We present deep optical photometry from the DECam imager on the 4m Blanco telescope of over 12 deg$^2$ around the Carina dwarf spheroidal, with complete coverage out to 1 degree and partial coverage extending out to 2.6 degrees. Using a Poisson-based matched filter analysis to identify stars from each of the three main stellar populations, old, intermediate, and young, we confirm the previously identified radial age gradient, distance, tidal radius, stellar radial profiles, relative stellar population sizes, ellipticity, and position angle. We find an angular offset between the three main elliptical populations of Carina, and find only tentative evidence for tidal debris, suggesting that past tidal interactions could not have significantly influenced the Carina dwarf. We detect stars in the vicinity of, but distinct to, the Carina dwarf, and measure their distance to be 46$\pm$2 kpc. We determine this population to be part of the halo of the Large Magellanic Cloud at an angular radius of over 20 degrees. Due to overlap in colour-magnitude space with Magellanic stars, previously detected tidal features in the old population of Carina are likely weaker than previously thought.

Friday, 25 July 2014

A cosmic two-step: the universal dance of the dwarf galaxies

We had a paper in Nature this week, and I think this paper is exciting and important. I've written an article for The Conversation which you can read here.

Enjoy!


Saturday, 19 July 2014

Resolving the mass--anisotropy degeneracy of the spherically symmetric Jeans equation

I am exhausted after a month of travel, but am now back in a sunny, but cool, Sydney. It feels especially chilly as part of my trip included Death Valley, where the temperatures were pushing 50 degrees C.

I face a couple of weeks of catch-up, especially with regards to some blog posts on my recent papers. Here, I am going to cheat and present two papers at once. Both papers are by soon-to-be-newly-minted Doctor, Foivos Diakogiannis. I hope you won't mind, as these papers are Part I and II of the same piece of work.

The fact that this work is spread over two papers tells you that it's a long and winding saga, but it's cool stuff as it does something that can really advance science - take an idea from one area and use it somewhere else.

The question the paper looks at sounds, on the face of it, rather simple. Imagine you have a ball of stars, something like this, a globular cluster:
You can see where the stars are. Imagine that you can also measure the speeds of the stars. So, the question is: what is the distribution of mass in this ball of stars? It might sound obvious, because isn't the mass just the stars? Well, you have to be careful, as we are seeing the brightest stars, and the fainter stars are harder to see. Also, there may be dark matter in there.

So, we are faced with a dynamics problem, which means we want to find the forces; the force acting here is, of course, gravity, and so mapping the forces gives you the mass. And forces produce accelerations, so all we need is to measure these and... oh... hang on. The Doppler shift gives us the velocity, not the acceleration, and so we have to wait (a long time) to measure accelerations (i.e. see the change of velocity over time). As they say in the old country, "Bottom".
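To see why, here's an order-of-magnitude sketch (Python, with ballpark numbers I've assumed for a typical globular cluster, not values from the paper):

```python
# Order-of-magnitude estimate with assumed ballpark values:
# how much does a star's velocity change over a decade in a globular cluster?
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass in kg
PC = 3.086e16            # one parsec in metres

m_cluster = 1e5 * M_SUN  # assumed cluster mass: 100,000 suns
r = 10 * PC              # assumed orbital radius inside the cluster

accel = G * m_cluster / r**2     # gravitational acceleration at radius r
decade = 10 * 3.156e7            # ten years in seconds
dv = accel * decade              # velocity change built up over a decade

print(f"a  ~ {accel:.1e} m/s^2")
print(f"dv ~ {100 * dv:.0f} cm/s over ten years")
```

A few centimetres per second over a decade is hopelessly below what stellar spectroscopy can measure, so we cannot simply wait and watch the velocities change.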

And this has dogged astronomy for more than one hundred years. But there are some equations (which I think are lovely, but if you are not a maths fan, they may give you a minor nightmare) called the Jeans Equations. I won't pop them here, as there are lots of bits to them and it would take a blog post to explain them in detail.

But there are problems (aren't there always) and that's the assumptions that are made, and the key problem is degeneracies.

Degeneracies are a serious pain in science. Imagine you have measured a value in an experiment, let's say the speed of a planet (there will be an error associated with that measurement). Now, you have your mathematical laws that make a prediction for the speed of the planet, but you find that your maths does not give you a single answer, but multiple answers that explain the measurements equally well. What's the right answer? You need some new (or better) observations to "break the degeneracies".

And degeneracies dog dynamical work. There is a traditional approach to modelling the mass distribution through the Jeans equations, where certain assumptions are made, but you are often worried about how justified your assumptions are. While we cannot remove all the degeneracies, we can try and reduce their impact. How? By letting the data point the way.

By this point, you may look a little like this

OK. So, there are parts to the Jeans equations where people traditionally put in functions to describe what something is doing. As an example, we might choose a density that has a mathematical form like

$$\rho(r) = \frac{\rho_0}{\frac{r}{r_s}\left(1 + \frac{r}{r_s}\right)^2}$$

which tells us how the density changes with radius (those in the know will recognise this as the well-known Navarro-Frenk-White profile). Now, what if your density doesn't look like this? Then you are going to get the wrong answers because you assumed it.
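If you want to see its shape, here's a minimal sketch (my own toy example with made-up parameter values, not the paper's code):

```python
import numpy as np

def nfw_density(r, rho0=1.0, rs=1.0):
    """Navarro-Frenk-White profile; rho0 and rs are free parameters."""
    x = r / rs
    return rho0 / (x * (1 + x)**2)

r = np.logspace(-2, 2, 5)   # radii in units of the scale radius
print(nfw_density(r))       # falls as 1/r inside rs and as 1/r^3 far outside
```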

So, what you want to do is let the data choose the function for you. But how is this possible? How do you get "data" to pick the mathematical form for something like density? This is where Foivos had an incredible insight and called on a completely different topic altogether, namely Computer-Aided Design.

For designing things on a computer, you need curves, curves that you can bend and stretch into a range of arbitrary shapes, and it would be painful to work out the mathematical form of all of the potential curves you need. So, you don't bother. You use extremely flexible curves known as splines. I've always loved splines. They are so simple, but so versatile. You specify some points, and you get a nice smooth curve. I urge you to have a look at them.
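If you want to play with one, here's a minimal sketch using scipy (a toy example of mine, not the paper's code):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3  # cubic B-spline
t = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)  # clamped knots
c = np.array([0.0, 1.5, 0.5, 2.0, 1.0, 0.3, 0.8])  # control coefficients

spline = BSpline(t, c, k)   # needs len(c) == len(t) - k - 1
x = np.linspace(0, 4, 9)
print(spline(x))            # a nice smooth curve, set by a handful of numbers
```

Move the coefficients in c around, and the curve bends and stretches to follow; that flexibility is exactly what we want.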

For this work, we use B-splines and construct the profiles we want from some basic curves. Here's an example from the paper:
We then plug this flexible curve into the mathematics of dynamics. For this work, we test the approach by creating fake data from a model, and then try and recover the model from the data. And it works!
Although it is not that simple. A lot of care and thought has to be taken on just how you construct the spline (this is the focus of the second paper), but that's now been done. We now have the mathematics we need to really crack the dynamics of globular clusters, dwarf galaxies and even our Milky Way.

There's a lot more to write on this, but we'll wait for the results to start flowing. Watch this space!

Well done Foivos! Not only on the paper, but for finishing his PhD, getting a postdoctoral position at ICRAR, and also getting married :)

Resolving the mass--anisotropy degeneracy of the spherically symmetric Jeans equation I: theoretical foundation

A widely employed method for estimating the mass of stellar systems with apparent spherical symmetry is dynamical modelling using the spherically symmetric Jeans equation. Unfortunately this approach suffers from a degeneracy between the assumed mass density and the second order velocity moments. This degeneracy can lead to significantly different predictions for the mass content of the system under investigation, and thus poses a barrier for accurate estimates of the dark matter content of astrophysical systems. In a series of papers we describe an algorithm that removes this degeneracy and allows for unbiased mass estimates of systems of constant or variable mass-to-light ratio. The present contribution sets the theoretical foundation of the method that reconstructs a unique kinematic profile for some assumed free functional form of the mass density. The essence of our method lies in using flexible B-spline functions for the representation of the radial velocity dispersion in the spherically symmetric Jeans equation. We demonstrate our algorithm through an application to synthetic data for the case of an isotropic King model with fixed mass-to-light ratio, recovering excellent fits of theoretical functions to observables and a unique solution. The mass-anisotropy degeneracy is removed to the extent that, for an assumed functional form of the potential and mass density pair (\Phi,\rho), and a given set of line-of-sight velocity dispersion \sigma_{los}^2 observables, we recover a unique profile for \sigma_{rr}^2 and \sigma_{tt}^2. Our algorithm is simple, easy to apply and provides an efficient means to reconstruct the kinematic profile.

and


Resolving the mass--anisotropy degeneracy of the spherically symmetric Jeans equation II: optimum smoothing and model validation

The spherical Jeans equation is widely used to estimate the mass content of stellar systems with apparent spherical symmetry. However, this method suffers from a degeneracy between the assumed mass density and the kinematic anisotropy profile, \beta(r). In a previous work, we laid the theoretical foundations for an algorithm that combines smoothing B-splines with equations from dynamics to remove this degeneracy. Specifically, our method reconstructs a unique kinematic profile of \sigma_{rr}^2 and \sigma_{tt}^2 for an assumed free functional form of the potential and mass density (\Phi,\rho) and given a set of observed line-of-sight velocity dispersion measurements, \sigma_{los}^2. In Paper I (submitted to MNRAS: MN-14-0101-MJ) we demonstrated the efficiency of our algorithm with a very simple example and we commented on the need for optimum smoothing of the B-spline representation; this is in order to avoid unphysical variational behaviour when we have large uncertainty in our data. In the current contribution we present a process of finding the optimum smoothing for a given data set by using information of the behaviour from known ideal theoretical models. Markov Chain Monte Carlo methods are used to explore the degeneracy in the dynamical modelling process. We validate our model through applications to synthetic data for systems with constant or variable mass-to-light ratio \Upsilon. In all cases we recover excellent fits of theoretical functions to observables and unique solutions. Our algorithm is a robust method for the removal of the mass-anisotropy degeneracy of the spherically symmetric Jeans equation for an assumed functional form of the mass density.

Tuesday, 8 July 2014

Should academia be like Logan's Run? All out at 40?

A quick post, as I am still on the road.

One of my favourite movies of all time is Logan's Run (partly because of the wonderful Jenny Agutter, who was also in another fav of mine, An American Werewolf in London). The premise of the movie is that in a futuristic society, to maintain populations, children are manufactured to order, and when you get to thirty years of age you go to carousel, where you float up into the air and explode.

 Should academia be like this? Not killing everyone at 30, but how about requiring everyone to leave at 40?

Now, before you start screaming about "academic freedom" and "tenure", hear me out. I quite like the idea. Let's start with what (I think) we can all agree on.

Basically, there are not enough academic jobs, and the academic pipeline leaks at all stages, with talented people having to leave due to the lack of positions at the next level.

Additionally, prising academics out of their jobs is notoriously hard, with many working until they drop. This is not helped by retirement ages being pushed older and older (by the time I retire, the retirement age in Australia will be 70, meaning I have a quarter of a century until I can "retire", although the meaning of that is complex). And, at some point, productivity declines as we get older. Now, my paper output is much larger than when I wore a younger man's clothes, but that is because I have a group of students and postdocs. My personal research time is squeezed by all of the "non-research" academic roles, including teaching and administration etc.

I think many would agree that they would like to be a postdoc for life if they could.

Now for another truth. Academics are expensive. Cards on the table, I am a Level-E professor and my salary is public information and is currently $177,887. The salary budget is a major part of a university's cost, and it is not getting easier as academics are getting more career driven and are climbing the academic scale a lot faster. Universities would save a lot of money if I was replaced with a junior academic, with a lecturer earning almost $95,000.

So, what's my proposal? Well, many sports stars retire quite young, when their bodies are worn and they can no longer compete with younger incoming stars. This doesn't mean that these people sit around watching afternoon TV; they find new careers. Why don't academics do the same?

My proposal:

  • At 40, academics are given the option of retiring from academia. As we are unlikely to have the funds that sports stars take into retirement, the academics are offered a lump sum (3-5 years of pay?) to smooth the transition into another career. This would be cheaper than paying you for the next 30 years.
  • Universities can fill your position with a junior academic with a job until they reach 40.
  • If you decide to stay with the university, your admin and teaching loads increase to ensure the junior academics get lots of research done (but they will still have a teaching and admin role at the university).
  • The "retiring" academics can still hold adjunct positions with the university, accessing resources, supervising students (with the junior staff) and effectively becoming hobby researchers. They could potentially be still be listed on grants and access some funds to attend conferences etc. Companies could view academic commitments as a social contribution and could offer some time (10% of the working week) to these duties.
As a lot of my personal research is done out of hours, I would probably get more done. 

Of course, there is the "fear" that you won't get a job at 40, but academics are supposed to be talented, smart people who know how to learn. I am not so stupid as to say that academics can magically transform into leading hedge fund managers or brain surgeons, but I doubt we'd end up on the street begging for food. Many people make career changes at many stages of their lives; academics are no different.

But what if we get less pay? Well, the payout will help smooth this (and will clear many a mortgage), and we didn't get into this game to get rich now did we?

And in reality, stepping aside doesn't mean that you are exiting the game, you will still contribute and be engaged. But the fact that junior academics will get longer in the game will be better for science and human knowledge. Isn't that a good thing?

Saturday, 28 June 2014

The Nature and Origin of Substructure in the Outskirts of M31 -- II. Detailed Star Formation Histories

I am still playing catch up on papers, and I've just woken up early here in San Francisco and have a small amount of time before I have to prepare for my talk today. So, this will be quick.

The topic again is our nearest largest companion, the Andromeda Galaxy, especially working out the history of how stars have formed in the (relatively) inner regions of the galaxy. It might seem a little strange that we can work that out, because all we can see is stars, but with the magic of science, it is possible. That's the topic of this new paper by postdoctoral researcher, Edouard Bernard.

This beautiful science is done with the Hubble Space Telescope. The first thing you need to do is decide where to look. So, here are the fields we looked at
One of the sad things is that the area Hubble can image (its field of view) is tiny compared to the extent of Andromeda, and so we are doing keyhole surgery in select areas on interesting bits of Andromeda, especially prominent bits of substructure scattered about.

We image each of the fields in two colour bands, a blue(ish) one and a red(ish) one, and once you have this you can construct a colour-magnitude diagram. But how do you work out the star formation history?

Well, every new star that is born lives initially on the main sequence, but massive stars live on there for a relatively short time, and little stars sit on there for a long time. So, if you create a bunch of stars at the same time, they are all on the main sequence, but as you wait, the massive ones move off first, and then the lesser massive ones etc. In fact, you can tell the age by looking at the mass of stars now moving off the main sequence, something called the main sequence turn-off (rather imaginatively). Here's a nice picture from the Lick Observatory


If you want to have a look at evolutionary tracks in detail, have a look at the Padova isochrones.
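To get a feel for why the turn-off dates a population, here's a rough sketch using the standard textbook scaling (an approximation I'm assuming here, not the paper's method): the main-sequence lifetime goes roughly as 10 Gyr times (M/M_sun) to the power -2.5.

```python
# Rough textbook scaling (an approximation, not the paper's method):
# main-sequence lifetime t ~ 10 Gyr * (M / M_sun)**-2.5
def ms_lifetime_gyr(mass_in_solar_masses):
    return 10.0 * mass_in_solar_masses**-2.5

for m in [0.8, 1.0, 2.0, 5.0]:
    print(f"{m:.1f} solar masses -> ~{ms_lifetime_gyr(m):.1f} Gyr on the main sequence")
```

So a population whose turn-off sits near two solar masses is roughly a couple of billion years old, while one whose turn-off has receded to a solar mass is roughly as old as the universe.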

So, every burst of stars gives you a new population on the main sequence, and then after time they start to move off, and if you imagine this happening over and over again, you get a complex mess in the colour-magnitude diagram. And, with hard work, you can invert this. Here's a picture from the paper
The colour-magnitude diagram is in the upper right; if you are an amateur astronomer and understand the magnitude scale, check out the numbers on the side. This tells you about the power of Hubble!

The other panels give the star formation rate (SFR) in the upper left, and metallicity (chemical enhancement) in the bottom right.

So, what do we find? Fields were identified as disk-like (part of the main body of Andromeda), stream-like (they look like they are associated with the giant tidal stream in Andromeda), and composite (which are, well, more complicated).

Here's the cumulative star formation history for the various fields, and it should be clear that stars in the disk-like fields (blue) formed more recently than those in the other fields. But why?
Here's the actual histories, which shows how much mass in stars is formed as a function of time.
Now, they look similar, but the disk-like distribution is skewed towards more recent times, again showing that more stars formed more recently.

Argh! I'm running out of time and could go on for ages, but we are only scratching the surface. Essentially, we clearly have ongoing star formation in the galaxy's disk, making a lot of stars recently, whereas the stars in the giant stream (which formed a while ago and then fell into Andromeda) are somewhat older.

But the star formation histories are not smooth: all have a broad peak in their earlier history, but rather curiously possess a spike in star formation around two billion years ago. What caused this? Well, it looks like it occurred when M33 crashed the party, exciting a new burst of stars and scattering others to large distances.

I'm out of time, but this is all cool stuff. Well done Edouard!

The Nature and Origin of Substructure in the Outskirts of M31 -- II. Detailed Star Formation Histories

While wide-field surveys of M31 have revealed much substructure at large radii, understanding the nature and origin of this material is not straightforward from morphology alone. Using deep HST/ACS data, we have derived further constraints in the form of quantitative star formation histories (SFHs) for 14 fields which sample diverse substructures. In agreement with our previous analysis of colour-magnitude diagram morphologies, we find the resultant behaviours can be broadly separated into two categories. The SFHs of 'disc-like' fields indicate that most of their mass has formed since z~1, with one quarter of the mass formed in the last 5 Gyr. We find 'stream-like' fields to be on average 1.5 Gyr older, with <10 percent of their stellar mass formed within the last 5 Gyr. These fields are also characterised by an age--metallicity relation showing rapid chemical enrichment to solar metallicity by z=1, suggestive of an early-type progenitor. We confirm a significant burst of star formation 2 Gyr ago, discovered in our previous work, in all the fields studied here. The presence of these young stars in our most remote fields suggests that they have not formed in situ but have been kicked out through disc heating in the recent past.