Tuesday, 24 December 2013

The Most Important Equation in all of Science

The end of the year is rapidly approaching (just where did 2013 go?). I've just spent the week in Canberra at a very interesting meeting, but on the way down I actually thought I was off into space. Why? Because the new Qantas uniform looks like something from Star Trek.
Doesn't Qantas know what happens to red shirts?

Anyway - why was I in Canberra? I was attending MaxEnt 2013, which, to give it its full name, was the 33rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. 

It was a very eclectic meeting, with astronomers, economists, quantum physicists, even people who work on optimizing train networks.  So, what was it that brought this range of people together? It was the most important equation in science. Which equation is that, you may ask? It's this one.
Huh, you may say. I've spoken about this equation multiple times in the past, and what it is, of course, is Bayes Rule.

Why is it so important? It allows you to update your prior belief, P(A), in light of new data, B. So, in reality, what it does is connect the two key aspects of science, namely experiment and theory. Now, this is true not only in science done by people in white coats or at telescopes, it is actually how we live our day-to-day lives.
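For reference, writing A for the hypothesis and B for the data, Bayes Rule reads

\[
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},
\]

with P(A) the prior, P(B|A) the likelihood of the data given the hypothesis, and P(A|B) the updated (posterior) belief.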

You might be shaking your head saying "I don't like equations, and I definitely haven't solved that one", but luckily it's kind of hardwired in the brain.

As an example, imagine that I was really interested in the height of movie star, Matt Damon (I watched Elysium last night). I know he's an adult male, and so I have an idea that he is likely to be taller than about 1.5m, but is probably less than around 2.5m. This is my prior information. But notice that it is not a single number, it is a range, so my prior contains an uncertainty.

Here, uncertainty has a very well defined meaning. It's the range of heights of Matt Damon that I think are roughly equally likely to be correct. I often wonder if using the word uncertainty confuddles the general public when it comes to talking about science (although it is better than "error" which is what is often used to describe the same thing). Even the most accurate measurements have an uncertainty, but the fact that the range of acceptable answers is narrow is what makes a measurement accurate.

In fact, I don't think it is incorrect to say that it's not the measurement itself that gets you the Nobel Prize, but a measurement with an uncertainty narrow enough to let you make a decisive statement.

I decide to do an internet search, and I find quite quickly that he appears to be 1.78m. Sounds pretty definitive, doesn't it, but a quick look around the internet reveals that this appears to be an overestimate, and he might be closer to 1.72m. So my prior is now updated with this information. In light of this data, it's gone from having a width of a metre to about 10cm. What this means is that the accuracy of my estimate of the height of Matt Damon has improved.

Then, maybe, I am walking down the street with a tape measure, and who should I bump into but Mr Damon himself. With tape measure in hand, I can get some new, very accurate data, and update my beliefs again. This is how learning proceeds.
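To make that concrete, here's a minimal sketch in Python (my own toy numbers and grid, not anything rigorous): start with a roughly flat prior between 1.5m and 2.5m, fold in a fuzzy "internet" measurement, then a much sharper "tape measure" one, and watch the uncertainty shrink.

```python
import numpy as np

# Grid of possible heights (in metres) for our movie star
heights = np.linspace(1.4, 2.6, 1201)

# Prior: roughly flat between 1.5 m and 2.5 m
prior = np.where((heights > 1.5) & (heights < 2.5), 1.0, 0.0)
prior /= np.trapz(prior, heights)

def update(belief, measurement, sigma):
    """Bayes Rule on a grid: multiply by a Gaussian likelihood and renormalise."""
    likelihood = np.exp(-0.5 * ((heights - measurement) / sigma) ** 2)
    posterior = belief * likelihood
    return posterior / np.trapz(posterior, heights)

# The internet says ~1.78 m, but sources disagree, so give it a generous uncertainty
after_internet = update(prior, 1.78, 0.05)

# A (purely hypothetical) tape-measure encounter: far more precise data
after_tape = update(after_internet, 1.72, 0.01)

for label, belief in [("prior", prior),
                      ("after internet", after_internet),
                      ("after tape measure", after_tape)]:
    mean = np.trapz(heights * belief, heights)
    std = np.sqrt(np.trapz((heights - mean) ** 2 * belief, heights))
    print(f"{label}: {mean:.2f} +/- {std:.2f} m")
```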

But what does this have to do with science? Well, science proceeds in a very similar fashion. The part that people forget is that there is no group consciousness deciding what our prior information, P(A), is. This is something inside the mind of each and every researcher.

Let's take a quick example - namely working out the value of Hubble's constant. The usual disclaimer applies here, namely that I am not a historian, but I did live through and experience the last stages of this history.

Hubble's constant is a measure of the expansion of the Universe. It comes from Hubble's original work on the recession speed of galaxies, with Hubble discovering that the more distant a galaxy is, the faster it is moving away from us. Here's one of the original measures of this.
Across the bottom is distance, in parsecs, and up the side is velocity in km/s. So, where does the Hubble constant come in? It's just the relation between the distance and the velocity, such that

v = H₀ D

The awake amongst you will have noted that this is just a linear fit to the data. The even more awake will note that the "constant" is not the normal constant of a linear fit, as this is zero (i.e. the line goes through the origin), but is, in fact, the slope.
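If you want to see how the slope falls out, here's a little sketch in Python with invented data points (not Hubble's actual measurements): for a line forced through the origin, least squares gives the slope directly.

```python
import numpy as np

# Invented data for illustration: distances in Mpc and recession velocities in km/s
distance = np.array([10.0, 20.0, 35.0, 50.0, 80.0, 120.0])
velocity = np.array([700.0, 1500.0, 2400.0, 3700.0, 5600.0, 8700.0])

# Fitting v = H0 * D with no intercept: minimising sum (v - H0*D)^2
# gives H0 = sum(v*D) / sum(D^2)
H0 = np.sum(velocity * distance) / np.sum(distance ** 2)
print(f"Best-fit slope (Hubble constant): {H0:.1f} km/s/Mpc")
```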

So, Hubble's constant is a measure of the rate of expansion of the Universe, and so it is something that we want to know. But we can see there is quite a bit of scatter in the data, and a range of straight lines fit the data. In fact, Hubble was way off in his fit as he was suffering from the perennial scientific pain, the systematic error; he didn't know he had the wrong calibration for the Cepheid variable stars he was looking at.

As I've mentioned before, measuring velocities in astronomy is relatively easy as we can use the Doppler effect. However, distances are difficult; there are whole books on the precarious nature of the cosmological distance ladder.

Anyway, people used different methods to measure the distances, and so got different values of the Hubble constant. Roughly put, some were getting values around 50 km/s/Mpc, whereas others were getting roughly twice this value. 

But isn't this rather strange? All of the data were public, so why were there two values floating around? Surely there should be one single value that everyone agreed on?

What was different was the various researchers' priors, P(A). What do I mean by this? Well, for those that measured a value around 50 km/s/Mpc, further evidence that found a Hubble's constant of about this value reinforced their view, but their priors strongly discounted evidence that it was closer to 100 km/s/Mpc. Whereas those who thought it was about 100 km/s/Mpc discounted the measurements of the researchers who found it to be 50 km/s/Mpc.

This is known as confirmation bias. Notice that it has nothing to do with the data; it is in researchers' heads, in their prior feeling of what they think the answer should be, and in their opinion of other researchers and their approaches.

However, as we get more and more data, eventually we should have lots of good, high quality data, so that the data overwhelms the prior information in someone's head. And that's what happened to Hubble's constant. Lots of nice data.
And we find that Hubble's constant is about 72 km/s/Mpc - we could have saved a lot of angst if the 50 km/s/Mpc group and the 100 km/s/Mpc group had split the difference years ago.

It is very important to note that science is not about black-and-white, yes-and-no. It's about gathering evidence and working out which models best describe the data, and using the models to make more predictions. However, this is weighted by your prior expectations, and different people will require differing amounts of information before they change their world view.

This is why ideas often take a while to filter through a community, eventually becoming adopted as the leading explanation. 

On a final note, I think that to be a good scientist you have to be careful with your prior ideas, and it's very naughty to set the prior on a particular idea to be precisely zero. If you do, then no matter how much evidence you gather, you will never be able to accept a new idea. I think we all know people like this, but in science there's a history of people clinging to their ideas, often to the grave, impervious to evidence that contradicts their favoured hypothesis. And again, this has nothing to do with the observations of the Universe, but with what is going on in their heads (hey - scientists are people too!).

Anyway, with my non-zero priors, I will update my world-view based upon new observations of the workings of the Universe. And to do this I will use the most important equation in all of science, Bayes rule. And luckily, as part of the MaxEnt meeting, I can now proudly wear Bayes rule wherever I go. Go Bayes :)


Saturday, 7 December 2013

The large-scale structure of the halo of the Andromeda Galaxy Part I: global stellar density, morphology and metallicity properties

And now for a major paper from the Pan-Andromeda Archaeological Survey (PAndAS), led by astronomer-extraordinaire Rodrigo Ibata. I've written a lot about PAndAS over the years (or at least over the year and a bit I've been blogging here) and we've discovered an awful lot, but one of the key things we wanted to do is measure the size and shape of the stellar halo of the Andromeda Galaxy.

The stellar halo is an interesting place. It's basically made up of the first generation of stars that formed in the dark matter halo in which the spiral galaxy of Andromeda was born, and the properties of the halo are a measure of the  formation history of the galaxy, something we can directly compare to our theoretical models.

But there is always a spanner in the works, and in this case it is the fact that Andromeda, like the Milky Way, is a cannibal and has been eating smaller galaxies. These little galaxies get ripped apart by the gravitational pull of Andromeda, and their stars litter the halo in streams and clumps. As we've seen before, Andromeda has a lot of this debris scattered all over the place.

So, we are left with a problem, namely how do we see the stellar halo, which is quite diffuse and faint, underneath the prominent substructure? This is where this paper comes in.

Well, the first thing is to collect the data, and that's where PAndAS comes in. The below picture confirms just how big the PAndAS survey is, and just how long it took us to get data.
It always amazes me how small the spiral disk of Andromeda is compared to the area we surveyed, but that's what we need to see the stellar halo which should be a couple of hundred kiloparsecs in extent.

Taking the data is only the first step. The next step, the calibration of the data, was, well, painful. I won't go into the detail here, but if you are going to look for faint things, you really need to understand your data at the limit, to understand what's a star, what's a galaxy, what's just noise. There are lots of things you need to consider to make sure the data is nice, uniform and calibrated. But that's what we did :)

Once you've done that, we can ask where the stars are. And here they are;
As you can see, chunks and bumps everywhere, all the dinner leftovers of the cannibal Andromeda. And all of that stuff is in the way of finding the halo!

What do we do? We have to mask out the substructure and search for the underlying halo. We are in luck, however, as we don't have one map of substructure, we have a few of them. Why? Well, I've written about this before, but the stars in the substructure come from different sized objects, and so their chemical histories will be different; in little systems, the heavy elements produced in supernova explosions are not held by their gravitational pull, and so they can be relatively "metal-poor", but in larger systems the gas can't escape and gets mixed into the next generation of stars, making them more "metal-rich".

So, here are our masks as a function of the iron abundance compared to hydrogen.
We see that the giant stream is more metal rich, but as we go to metal poor we see the more extensive substructure, including the South West Cloud.

What do we find? Well, we see the halo (hurrah!) and it does what it should - it is brightest near the main body of Andromeda, but gets fainter and fainter towards the edge. Here's a picture of the profile:
It's hard to explain just how faint the halo is, but it is big, basically stretching out to the edge of our PAndAS data, and then beyond, and looks like it accounts for roughly 10% of the stellar mass in Andromeda. It is not inconsequential!
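As the abstract below spells out, the fall-off is well described by a power law in the stellar space density, with the smooth metal-poor component following roughly

\[
\rho_\star(r) \propto r^{-3.1}
\]

all the way from about 30 kpc out to 300 kpc.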

But as we started out noting, its properties provide important clues to the very process of galaxy formation. And it looks like what we would expect from our models of structure formation, with large galaxies being built over time through the accretion of smaller systems.

We're working on a few new tests of the halo, and should hopefully have some more results soon. But for now,  well done Rod!

The large-scale structure of the halo of the Andromeda Galaxy Part I: global stellar density, morphology and metallicity properties

We present an analysis of the large-scale structure of the halo of the Andromeda galaxy, based on the Pan-Andromeda Archeological Survey (PAndAS), currently the most complete map of resolved stellar populations in any galactic halo. Despite copious substructure, the global halo populations follow closely power law profiles that become steeper with increasing metallicity. We divide the sample into stream-like populations and a smooth halo component. Fitting a three-dimensional halo model reveals that the most metal-poor populations ([Fe/H]<-1.7) are distributed approximately spherically (slightly prolate with ellipticity c/a=1.09+/-0.03), with only a relatively small fraction (42%) residing in discernible stream-like structures. The sphericity of the ancient smooth component strongly hints that the dark matter halo is also approximately spherical. More metal-rich populations contain higher fractions of stars in streams (86% for [Fe/H]>-0.6). The space density of the smooth metal-poor component has a global power-law slope of -3.08+/-0.07, and a non-parametric fit shows that the slope remains nearly constant from 30kpc to 300kpc. The total stellar mass in the halo at distances beyond 2 degrees is 1.1x10^10 Solar masses, while that of the smooth component is 3x10^9 Solar masses. Extrapolating into the inner galaxy, the total stellar mass of the smooth halo is plausibly 8x10^9 Solar masses. We detect a substantial metallicity gradient, which declines from [Fe/H]=-0.7 at R=30kpc to [Fe/H]=-1.5 at R=150kpc for the full sample, with the smooth halo being 0.2dex more metal poor than the full sample at each radius. While qualitatively in-line with expectations from cosmological simulations, these observations are of great importance as they provide a prototype template that such simulations must now be able to reproduce in quantitative detail.

Sunday, 24 November 2013

Seeing length contraction

It's the 50th anniversary of both the assassination of John F. Kennedy and the first episode of Doctor Who, and yes, I did get up at 6:50 and watched The Day of the Doctor. So, today's post is about Time and Relative Dimensions in Space.

Everyone loves a bit of relativity, even though its consequences can be quite mind-bending. Of course, one of the things that happens is that when things are moving relative to each other you get length contraction, and people disagree on how long something is. Length contraction is responsible for some cool physical effects, including explaining why two parallel currents attract one another.

Check out this excellent video by Derek Muller on Veritasium which explains this.
While the video is mostly correct, there is something which is not quite right. Notice the bit where he drives past himself in a car; he sees the car squeezed due to length contraction. So, the question I want to look at is "What do things look like moving at relativistic speeds?"

To answer the question, we need to think about two things. Firstly, we need to consider how we transform coordinates from the "moving" object into the coordinates of the observer. Let's assume the moving object is a sphere. In its own coordinate system it is sitting there, doing nothing, with its clock ticking. To the observer, the sphere is moving, and so its coordinates are changing, and its clock is ticking at a different rate to the observer's.

We've known how to transform between the coordinates of the two for more than 100 years, and what we use is the Lorentz transformation. Using this we can work out where the points on the sphere are in the observer's coordinate system as a function of the observer's clock - easy!

Here's the Lorentz transformation (in matrix form, as on wikipedia):

\[
\begin{pmatrix} c t' \\ x' \\ y' \\ z' \end{pmatrix}
=
\begin{pmatrix}
\gamma & -\beta\gamma & 0 & 0 \\
-\beta\gamma & \gamma & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} c t \\ x \\ y \\ z \end{pmatrix}
\]

where

\[
\gamma = \frac{1}{\sqrt{1 - \beta^2}}
\]

and

\[
\beta = \frac{v}{c}.
\]

Most of you will recognise this as being nothing but a matrix multiplication - the mathematics of special relativity is really not that difficult.

But remember the question is "What do we see?" And what we need to consider is the time that light rays take to travel from the sphere into the eye of the observer. 

How do we work out the path that light will follow? That's quite easy, as light rays travel along what are known as "null geodesics", and it is simple to trace out the path through space-time that a ray must follow.
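In flat space-time the condition is just that the interval along the ray vanishes; equivalently (in my own notation, with t_e the emission time and t_o the arrival time, as a sketch of the standard condition), a photon leaving a point on the sphere reaches the observer when the light travel time matches the separation:

\[
c^2\,\mathrm{d}t^2 - \mathrm{d}x^2 - \mathrm{d}y^2 - \mathrm{d}z^2 = 0
\qquad\Longrightarrow\qquad
c\,(t_o - t_e) = \left|\vec{x}_{\rm obs} - \vec{x}_{\rm sphere}(t_e)\right| .
\]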


Again, nothing complicated. Just a bit of algebra. Any high school student can do this :)

So, all you need to do is pick a time at the observer, as we want to consider all light rays that arrive at a certain time, and find out at what time photons left the sphere to arrive at the observer's eye. This is a bit of algebra, but it's a little "non-linear", so I use a small root-finding algorithm to find the solution (for the curious, I use the brentq function in Python).
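Here's a minimal sketch of that root-finding step (my own reconstruction in Python using scipy's brentq; the function names and numbers are just illustrative, not the code behind the figures below): for a point moving at constant velocity, solve for the emission time at which its light arrives at the observer at the chosen observation time.

```python
import numpy as np
from scipy.optimize import brentq

c = 1.0  # work in units where the speed of light is 1

def emission_time(point0, v, observer, t_obs):
    """Find the time t_e at which light must leave a point (at position point0
    when t = 0, moving with constant velocity v) to reach the observer at t_obs.

    We solve c*(t_obs - t_e) = |position(t_e) - observer| with a root finder."""
    def f(t_e):
        position = point0 + v * t_e
        return c * (t_obs - t_e) - np.linalg.norm(position - observer)

    # Bracket the root: f < 0 at t_obs, and becomes positive far enough in the
    # past (the point moves slower than light), so keep doubling the look-back
    step = 1.0
    while f(t_obs - step) < 0:
        step *= 2.0
    return brentq(f, t_obs - step, t_obs)

# Example: a point crossing the sky at 0.9c, with the observer at the origin
point0 = np.array([-20.0, 10.0, 0.0])
v = np.array([0.9, 0.0, 0.0])
observer = np.zeros(3)
t_e = emission_time(point0, v, observer, t_obs=0.0)
print("light arriving at t = 0 left the point at t =", t_e)
```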

So, what do we get? To start with, let's just throw some random points onto the surface of the sphere and get it moving past the observer at 90% of the speed of light across the sky. Here's an image of the points on the sky as seen by the observer.
Eermmmm. It looks round. Why isn't it squashed in one of the directions?

Ah! you say, perhaps 90% of the speed of light is not fast enough? What if we go at 99.99%? Here's what we get.
Hmmmmm. It still looks like a circle on the sky. Just where has the length contraction gone?

Let's investigate this a little more. Instead of throwing down points at random on a sphere, let's make the moving object a cube, so we'll have eight points at the corners of the cube and one at the centre.
Here's the cube at rest, as seen by the observer.
Lovely! Now, let's get this cube moving. Here's what we see if the cube is moving at 10% the speed of light.
Hmmm. The cube is not contracted, but appears to have rotated slightly. What if we up the speed to 50% the speed of light?
Still no contraction, but the cube definitely appears to be rotated! OK - let's up the speed to 90% of the speed of light.
And 99%!
And may as well pull out all stops and get up to 99.99%!
Wow. How cool is that - we don't have "length contraction" along the direction of motion (from left to right), but the cube has rotated and got skinny. So, actually seeing "length contraction" is more complicated than you think.

Now, none of this is new. The effect, known as Terrell rotation, was published by Terrell and separately by Roger Penrose in 1959, although it was noted as far back as 1924 by Anton Lampa; even so, it doesn't appear in too many textbooks. There are some great articles out there on the interwebs about the optical effects of seeing things moving at relativistic speeds; I heartily recommend you have a read. If I get some time over Christmas, I'll write about another favourite of mine, namely Bell's spaceship paradox.

Before I go, however, I was quite amazed how the ABC on both TV and radio were clamouring to interview Whovians to ask them what they thought of the new episode, and who their favourite Doctor was. However, some of us have a long memory and remember how the ABC, through the Chaser, presented fans of Doctor Who.
How things have changed :)

Saturday, 9 November 2013

Major Substructure in the M31 Outer Halo: the South-West Cloud

Another week has flown by and I don't know where the time went. But another good week in terms of research with a new paper accepted.

This one is led by postdoctoral researcher, Nick Bate, with newly minted doctor, Anthony Conn, and PhD student, Brendan McMonigal. The focus of the study is substructure in the halo of the Andromeda Galaxy from, you guessed it, the rather fantastic Pan-Andromeda Archaeological Survey (PAndAS). The focus this time is a particularly prominent blob, known as the South-West Cloud (or, more colloquially to us, Japan). Here's a map of the substructure again.
OK - I'll admit that the SW-Cloud doesn't look a lot like Japan, but the name stuck.

As you can see, it's a reasonably big chunk of stuff, but we want to know what it is. And that's the focus of the present paper. This is my favourite picture from the paper, presenting the density of stars in and around the SW-Cloud.
As you can see, it's a bit of a mess, but the SW-Cloud is clearly visible, as is a small dwarf galaxy (And XIX). The little stars labelled PA-7, PA-8 and PA-14 are really interesting as they are globular clusters, balls of roughly a million stars that I wrote about in the previous post. As I've written about before, it looks like a lot of these globulars were brought in on galaxies that have now been disrupted.

So, it looks like the SW-Cloud used to be a dwarf galaxy, with some of its own globular clusters, that has fallen into Andromeda and is being tidally torn apart. Now, that's interesting, but what else can we learn?

A while ago, I talked about the work Anthony was doing to measure the distances to M31 by locating the tip of the red giant branch, a very useful distance indicator. Once we have isolated the stars in the SW-Cloud, getting rid of all those annoying stars within our own Galaxy, we can search for the brightest red giant stars, which are the ones that define the tip. Here's the luminosity function.
Slightly hard to see (this stuff isn't easy!) but there is a step at around a magnitude of 21, which is the location of the tip.
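The reason the tip is so useful is that its absolute magnitude is nearly the same everywhere, so the apparent magnitude of the tip gives the distance straight away via the distance modulus (the tip value below is my rough ballpark number, not the calibration used in the paper):

\[
\mu = m_{\mathrm{TRGB}} - M_{\mathrm{TRGB}}, \qquad d = 10^{(\mu + 5)/5}\ \mathrm{pc}, \qquad M_{\mathrm{TRGB}} \approx -4 ,
\]

so a tip sitting at an apparent magnitude of roughly 20.5 to 21 corresponds to a distance modulus of about 24.5 to 25, i.e. somewhere around 0.8 to 1 Mpc, the right ballpark for Andromeda and for the 793 kpc quoted in the abstract below.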

We chopped the SW-Cloud into pieces, and calculated the distance to each bit, and this is what we find;
The black points are the individual fields, and you can see that the measurements are a bit noisy, but the red is the average of the three, which puts the SW-Cloud at almost the same distance as Andromeda itself. But the cool thing is that two of the globulars are at the same distance, showing that they are all part of the same system.

But, as they say, there is more! We were also able to work out the metallicity of the SW-Cloud, which means we are working out how chemically enriched the stars are. Remembering that the first stars were purely hydrogen and helium, the amount of chemical enrichment is a measure of how many generations of stars a population has gone through; in every generation, heavier chemical elements are produced and they pollute the gas clouds which make the next generation of stars.

Big galaxies like the Milky Way have lots of gas and are constantly producing stars, but dwarfs, which have a much smaller gravitational pull, can easily lose their gas and so only have a very limited number of stellar generations. This means that the smallest are usually metal-poor. What's the chemical make up of the SW-Cloud? This figure tells us all;
The left-hand panels are the colour-magnitude diagrams, and the thick black smudge up the middle of the top one is the red giant branch of the SW-Cloud. The middle panel is the "background" which is all the stars in the Milky Way (and some mis-classified background galaxies) which we subtract off, leaving the nice piccy at the bottom.

The right-hand panels are the distribution of the metallicity of the stars in each of the colour-magnitude diagrams; the bottom is that for the SW-Cloud. We can see that the metallicity is about -1.3, which is not really metal-poor, and not really metal-rich. This tells us that whatever the SW-Cloud was originally, it was not a small dwarf galaxy, but would be amongst the largest that we know. Looking at other similar dwarfs, we can see that the SW-Cloud has lost about 25% of its mass, meaning that we must be looking at a very recent disruption.

How cool is that? Right, that's one substructure down, quite a few more to go!

Well done Nick, Anthony and Brendan!


Major Substructure in the M31 Outer Halo: the South-West Cloud

We undertake the first detailed analysis of the stellar population and spatial properties of a diffuse substructure in the outer halo of M31. The South-West Cloud lies at a projected distance of ~100 kpc from the centre of M31, and extends for at least ~50 kpc in projection. We use Pan-Andromeda Archaeological Survey photometry of red giant branch stars to determine a distance to the South-West Cloud of 793 +/- 45 kpc. The metallicity of the cloud is found to be [Fe/H] = -1.3 +/- 0.1. This is consistent with the coincident globular clusters PAndAS-7 and PAndAS-8, which have metallicities determined using an independent technique of [Fe/H] = -1.35 +/- 0.15. We measure a brightness for the Cloud of M_V = -12.1 mag; this is ~75 per cent of the luminosity implied by the luminosity-metallicity relation. Under the assumption that the South-West Cloud is the visible remnant of an accreted dwarf satellite, this suggests that the progenitor object was amongst M31's brightest dwarf galaxies prior to disruption.

Saturday, 2 November 2013

Dynamical Modeling of NGC 6809: Selecting the best model using Bayesian Inference

Science has been in the news over the last week, and it's been quite a successful research week for me. But while science has been in the news, I'm not 100% impressed by the way it has been presented.

Firstly, there was the lack of a dark matter detection by the LUX experiment. The reports around the web on this have been generally OK, but some have indicated that this is somehow a failure. But what is important, and is often not appreciated, is that in science the lack of a detection is as important as a detection. Negative results like this rule out possibilities and so are vital in cutting down the possibilities for what dark matter is. In fact, a lot of dark matter searches basically follow the Holmes adage: "when you have eliminated the impossible, whatever remains, however improbable, must be the truth". Not seeing something increases our knowledge.

The second made me a little unhappy. The article in question appeared in The Conversation and was titled "Is it possible to add statistics to science? You can count on it". The target of the article is the award of the Prime Minister's Prize for Science to Terry Speed at the Walter+Eliza Hall Institute of Medical Research.

Now, don't get me wrong! I'm not unhappy about the existence of the award (I think it is great when science is recognised) and Terry Speed is a very worthy recipient whose expertise in statistical analysis and bioinformatics has advanced cancer research.

No, the thing that annoys me is the title of the article - "Is it possible to add statistics to science? You can count on it". I have mentioned this before, and I will say it again: the situation is not that science is over here and statistics is over there; statistics (or, more accurately, inference) is absolutely and utterly central to science. Often people think science is observation and experimentation on one side, and theoretical study on the other. But the meat of science is the interface between the two, and to do that you need to be statistically (and also mathematically) adept.

It depresses me that we struggle to convince our students of this during their undergraduate years :(

But that brings me to the good news, the acceptance of a new paper by PhD student Foivos Diakogiannis. The question is a seemingly simple one, namely: do globular clusters contain dark matter? Globulars are balls of roughly a few million stars whizzing about together, and they orbit galaxies. They are a bit weird, and people are unsure of their formation mechanism.

If they collapsed from gas clouds in the very early universe then they should not have lots of dark matter in them, but if they formed like dwarf galaxies they would have formed from gas pooling in a dark matter halo and they would be dark matter dominated.

The focus of this paper is a particular cluster, NGC 6809, also known as M55. Here's a lovely picture of it.
A few years ago we took spectra of the stars in this globular cluster, and we used the Doppler shift to measure their speeds. Here's a picture of the speeds that we saw, as a function of the distance away from the centre of the cluster.
So, the stars seen towards the centre zip around the fastest, and they go more slowly at the edge.

How do we work out the mass? It is actually a very tricky problem as we only know the velocity along the line of sight, and don't know the velocity on the sky. But we also know how the light is distributed, and what we want to do is build a "self consistent" model, so that we can predict the observed light distribution and observed velocity profile, while ensuring that the globular cluster is stable.

How we do this is quite mathematical, and so read the paper if you are interested, but we used inference, and in particular Bayesian analysis (I'll say it again, this kind of thing is central to science). But here's an example of the fits that we get.
Notice that the velocity data is a bit ratty, but we get good fits.
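The "selecting the best model" part of the title comes down to the Bayesian evidence, the probability of the data under a model once you integrate over all of its parameters; in standard notation (not lifted from the paper itself):

\[
Z = P(D \mid M) = \int P(D \mid \theta, M)\, P(\theta \mid M)\, \mathrm{d}\theta ,
\qquad
\frac{P(M_1 \mid D)}{P(M_2 \mid D)} = \frac{Z_1}{Z_2}\,\frac{P(M_1)}{P(M_2)} ,
\]

so, with equal prior odds on the models, the one with the larger evidence wins.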

And the result? We have shown conclusively that this particular cluster is not dominated by the presence of dark matter, and so it was not formed in the same way as objects like our own Milky Way. How did globular clusters form? We don't know, but our results are giving us some more clues.

Well done Foivos!

Dynamical Modeling of NGC 6809: Selecting the best model using Bayesian Inference
The precise cosmological origin of globular clusters remains uncertain, a situation hampered by the struggle of observational approaches in conclusively identifying the presence, or not, of dark matter in these systems. In this paper, we address this question through an analysis of the particular case of NGC 6809. While previous studies have performed dynamical modeling of this globular cluster using a small number of available kinematic data, they did not perform appropriate statistical inference tests for the choice of best model description; such statistical inference for model selection is important since, in general, different models can result in significantly different inferred quantities. With the latest kinematic data, we use Bayesian inference tests for model selection and thus obtain the best fitting models, as well as mass and dynamic mass-to-light ratio estimates. For this, we introduce a new likelihood function that provides more constrained distributions for the defining parameters of dynamical models. Initially we consider models with a known distribution function, and then model the cluster using solutions of the spherically symmetric Jeans equation; this latter approach depends upon the mass density profile and anisotropy β parameter. In order to find the best description for the cluster we compare these models by calculating their Bayesian evidence. We find smaller mass and dynamic mass-to-light ratio values than previous studies, with the best fitting Michie model for a constant mass-to-light ratio of Υ = 0.90 (+0.14/-0.14) and M_dyn = 6.10 (+0.51/-0.88) x 10^4 Solar masses. We exclude the significant presence of dark matter throughout the cluster, showing that no physically motivated distribution of dark matter can be present away from the cluster core.

Wednesday, 23 October 2013

A letter to my previous self.....

I find myself a bit of an invalid for a couple of days, but have a mountain of stuff to get through, so a brief post today.

It doesn't take much digging about on the internet to find people who send a message to their previous self; don't worry, this is not about me writing a letter to a pimply version of myself warning of all of the mistakes I will make - it would make for a very long post!

But when I was recently in Blighty, I was invited to visit a couple of my previous schools, namely Crynant Primary School and Llangatwg Comprehensive School and talk about my journey from the Welsh Valleys to being an astronomer at The University of Sydney.

Visiting the schools was like traveling through time. Of course, things had changed, definitely more computers and smart boards than in my day, but the layout of the rooms brought back so many memories.

The best thing was meeting the children, who were keen and were prepared with a mountain of cool astronomy questions, from "How far away is space?" to "What happens when you fall into a black hole?". At the primary school, we had an open call on questions from the older kids, and that could have continued all day! The local paper, The Evening Post, were there and ran a little story on my visit to Llangatwg.

I got some excellent feedback from the teachers and children on my visit, with several expressing their desire to go to university and even to study astrophysics (the children that is, not the teachers).

While I clearly was not talking to a younger me, I was talking with kids who are not too dissimilar to me when I was their age. In some ways, it did feel like I was sending a letter to a younger me, and hopefully it will be more fruitful than a self-indulgent blog-post to a fast receding youthful me :) 

Sunday, 13 October 2013

The masses of Local Group dwarf spheroidal galaxies: Not too small after all?

After a week of battling jet-lag, it's time to get back to some science. And this week, a new paper from PAndAS, from Heidelberg-based researcher Michelle Collins.

The target here is dwarf galaxies. Here's one from wikipedia.
There's lots of dwarf galaxies out there in the Universe. In fact, in terms of numbers, they are the dominant type of galaxy, but they are much smaller than our own Milky Way, so they don't dominate the mass.

Don't believe me? Well, my colleague, Alan McConnachie, recently put together the most comprehensive compilation of the galaxies within our Local Group; you can read the details here. So, we have three large galaxies, the Milky Way, Andromeda and Triangulum, and then almost 100 smaller galaxies, a sea of dwarf galaxies.

Now, you might think that dwarfs are simple things, just a billion or so stars living together in a dark matter halo, just a smaller version of large galaxies, but no. There are a number of problems with them, such as the missing satellite problem.

In this paper, the focus is another problem with dwarfs, namely the mass of the dark matter halo in which they reside. There have been claims that even though dwarfs can have quite different numbers of stars, they all seem to sit within dark matter halos of the same mass. This is a little bizarre, as we would expect that in any proto-galaxy the relative amounts of dark matter and gas would be the same, and so the mass of the stars that form should be proportional to the mass of the dark matter halo in which they reside. Why don't we see that?

One thing to remember is that while it is easy to count stars in a dwarf galaxy, it is hard to measure the total amount of dark matter. The only way we can do this is to look at how the stars are moving, and see how the dark matter is influencing their orbits. Unfortunately, the Doppler effect can only reveal velocities, whereas we really need accelerations to measure the force due to dark matter. So, how do you measure the mass? I could write a book on this, but it is not easy and normally involves using things like the Jeans equations, which are fun if you enjoy mathematics, and are a nightmare if you don't.
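For the curious, the workhorse in the spherical case ties the stellar density ν(r), the radial velocity dispersion σ_r(r) and the velocity anisotropy β(r) to the total enclosed mass M(r) (this is the standard textbook form, written here from memory rather than copied from the paper):

\[
\frac{1}{\nu}\frac{\mathrm{d}\!\left(\nu\,\sigma_r^2\right)}{\mathrm{d}r} + \frac{2\,\beta\,\sigma_r^2}{r} = -\frac{G\,M(r)}{r^2} .
\]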

So, what did Michelle do in this new paper? Well, for the first time, we have kinematic measurements of most of the dwarf galaxies orbiting Andromeda. I've spoken about these kinds of observations before, and they're not easy. The stars are faint (well, intrinsically, they are red giant stars and are quite bright, but they are far away) and you need to use a big telescope to observe them. And we used one of the best, the mighty Keck telescope.
So, here's a plot of the half-light radius (i.e. the size) versus the velocity dispersion (which is really just the spread of speeds of the stars).
The red are the dwarf galaxies orbiting the Milky Way, and the blue are those orbiting Andromeda. Notice that while a lot of them sit in the same parts of the figure, meaning they have similar properties, there are a few brighter dwarfs in Andromeda with low velocity dispersions, which seems to indicate they are less massive.

Without getting into the gory detail (as it does get quite gory unless you like a bit of model fitting), we then tried to work out what mass profile the dwarfs have, that is, what is the mass of the stars and of the dark matter enveloping them. Here's a picture of the parameter distributions -
But what do these tell us?

Well, when we work out the masses of the dwarfs and pop them on a picture, you get
Firstly a few of the dwarfs around Andromeda have to be thrown out as their internal velocities appear to have been shaken up, probably by tidal interactions with the larger galaxy. But once that has been done, then the dwarf populations in the Milky Way and Andromeda roughly agree (there is some difference, but it is small). This is nice, as it shows that we seem to understand some of the basic properties of the dwarf population and we can compare these to our numerical simulations of galaxy formation.

There is still a problem, known as "too big to fail", which concerns the relation between the amount of dark matter and the amount of stars that we see. What this is telling us is something that we already know: gas physics is hard to do in numerical simulations, and just how gas turns into the stars we see today is quite complicated, with lots of feedback and (heaven forbid) magnetic fields.

Well done Michelle!

The masses of Local Group dwarf spheroidal galaxies: Not too small after all?
We investigate the claim that all dwarf spheroidal galaxies (dSphs) reside within halos that share a common, universal mass profile as has been derived for dSphs of the Galaxy. By folding in kinematic information for 25 Andromeda dSphs, more than doubling the previous sample size, we find that a singular mass profile can not be found to fit all the observations well. Further, the best-fit dark matter density profile measured for solely the Milky Way dSphs is marginally discrepant (at just beyond the 1 sigma level) with that of the Andromeda dSphs, where a profile with lower maximum circular velocity, and hence mass, is preferred. The agreement is significantly better when three extreme Andromeda outliers, And XIX, XXI and XXV, all of which have large half-light radii (>600pc) and low velocity dispersions (sigma_v < 5km/s) are omitted from the sample. We argue that the unusual properties of these outliers are likely caused by tidal interactions with the host galaxy. We also discuss the masses of all Local Group dSphs in the context of the 'too big to fail problem', and conclude that these are potentially reconcilable with theoretical predictions when the full scope of baryonic physics and observational uncertainties are taken into account.

Monday, 7 October 2013

The Red Lady of Paviland

Apologies for the continued delays, but travels have come to an end, I find myself in a warm and sunny Sydney, and normal services will be resumed as soon as possible.

A brief non-astronomy post. One of my other interests is history, especially prehistory, and on my break, the family and I walked from Port Eynon to Rhossili on the Gower Peninsula in Wales. This is very near where I grew up, but I never really explored it.

It is a spectacularly beautiful piece of coast line, even in the sea fog.
One of the reasons I wanted to visit was not only the beauty, but the history.

A while ago, I bought a fantastic book called Homo Britannicus, detailing the prehistory of the British Isles.
In this book, I learnt about an amazing discovery on this very shoreline, a cave containing the remains of the Red Lady of Paviland. This "lady" is red as the bones have been dyed with red ochre. I'll let wikipedia tell the whole story, but when the skeleton was discovered in 1823, it was supposed that the bones were those of a Roman-era prostitute or witch.

However, the bones are actually those of a man and are about 33,000 years old. What makes the "Red Lady" special is that his was the first human fossil to be discovered, and his is the oldest ceremonial burial in western Europe.

But what was he doing in a cave on a cliff just above the ocean? A fisherman? Now this is where it gets very cool. When he lived, sea levels were considerably lower and the Severn Estuary between England and Wales was a broad river valley. Here's the map from wikipedia.
The dark blue line represents the coastline during the Upper Paleolithic, and there was a lot of dry land around what is modern Britain. Standing there on the coastline, looking out on the ocean, it was very hard to imagine that thousands of years ago, mammoth, rhino and cave lions roamed over what is now the sea floor.

And the same is true about the North Sea. It is amazing to hear that bones are regularly dredged up from the bottom of the ocean.

Unfortunately, I did not go down to the cave as I thought it was off limits, but it isn't. You can scramble down the cliff and check it out, but this is not advertised because, as one of the lifeguards at Rhossili told me, every "idiot" would be going down there and many would require rescuing when the sea comes in. Next time :)

We did a few other cool walks, including climbing Mount Snowdon. It was a slog, but we lucked out with the weather and could see the Wicklow Mountains in Ireland. And Snowdon lived up to its reputation as one of the busiest mountains around, with many hundreds of others taking advantage of the weather for one last climb.