Sunday, 30 September 2012

The German Tank Problem Revisited

I've written before about The German Tank Problem. Basically, it's a problem in which you try and estimate the number of tanks being produced by the enemy, based upon the serial numbers of the ones that were knocked out. This was a real problem from World War II, but is now often sold as the taxi problem, in case you are too sensitive to think about tanks.
The moral of the story is that the spooks and spies were wrong, and the mathematicians right. Yay!!!

I was happy with this until my post on this (which was almost a year ago). My ex-student, and Bayesian extraordinaire, Brendon Brewer, said something that bothered me. Namely, the chance of seeing a tank depends on the number of tanks, with more chance of seeing one if there are more of them (obviously!), and this seems to cancel out the effect of knowing the serial numbers of the tanks. So how could it work?

I had a good week and so decided to return to the problem.

Let's start with you being a battlefield intelligence officer, and nearby a big battle breaks out and rages. It is too dangerous to venture onto the battlefield at the moment, but reports are coming in that there is a new, dangerous tank prowling the battlefield. Reports continue, and soldiers report these tanks are everywhere. Reports also come in that the soldiers are knocking out these monsters with an impressive success rate.

Eventually, the battle abates, and you venture on to the battle field, expecting to see heaps of wrecked tanks, but you find far fewer than you expected.

What's happening? If the reports were correct, there were loads of tanks and the soldiers were good at defeating them. So where are all the wrecks?
There are two possibilities: either the soldiers were exaggerating the number of tanks that they saw, or they were exaggerating their ability to knock them out. Based on what you find, how can you work out the total number of tanks that were present? This matters, as it is related to the enemy's capacity to build new tanks.
Let's assume that there were actually M tanks on the battlefield, and the soldiers' abilities meant that they could actually knock out a fraction of them, f, during the battle. The expected number of wrecks on the battlefield is then just λ = fM.

Now we can see the problem! There could have been a small number of tanks, and the soldiers could have been good at knocking them out, but equivalently there could have been a large number of tanks and the soldiers were actually pretty poor at destroying them, or somewhere in between. In all cases, the number of wrecks on the battlefield would be the same.

OK, now for the technical part. While λ might be the expected number of wrecks you find, it need not be an integer. So, how do you relate λ to the probability of seeing a number of wrecks, N, on a battlefield?

The problem was solved long ago, and what you use is the Poisson distribution. If you are not mathematical, the following may look horrible, but if, like me, you don't mind a bit of recreational mathematics, it's rather lovely.
Basically, it says that if your expected number is λ, then the chance of you seeing an integer number k is P(k) = λ^k e^(−λ) / k!. A simple example: the number of buses passing a bus stop. The average number in an hour might be 2.6, and so you sit there and count the number in each hourly segment. In the first hour you see two, and then two in the next hour, and then none, and then two, one, three, none, two, two and, more rarely, five buses pass in an hour.
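If you want to get a feel for these probabilities, they are easy to compute directly. Here's a minimal sketch in Python (just an illustration; the post's own calculations were done in Matlab), using the bus example with λ = 2.6:

```python
import math

def poisson_pmf(k, lam):
    """Probability of seeing exactly k events when the expected number is lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 2.6  # average number of buses per hour
for k in range(7):
    print(f"P({k} buses in an hour) = {poisson_pmf(k, lam):.3f}")
```

You'll see that two buses an hour is the most likely single outcome, but five in an hour, while rare, is far from impossible.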

Suppose you found the wrecks of 27 tanks on the battlefield; what does the probability distribution of f and M look like? Now, this is a 2-dimensional problem, and while I can't do it analytically, with the help of Matlab you get
The red is high probability, the blue is low. We have a degeneracy (which is what we knew previously). So, what do you do? Well, last time we discussed this problem, we considered the serial numbers of the knocked-out tanks. On the battlefield, you take a look at the first tank and its serial number is 123. We know there are at least 123 tanks on the field, and (look back to the previous post if you don't remember) the probability of higher values of the total number of tanks is reduced. So, here's our new probability distribution adding this one serial number.
 Adding in the next 4 serial numbers (45, 150, 213 & 58) we get this
We know that there are at least 213 tanks, but the smaller serial numbers also help reduce the probability of larger total numbers of tanks. Going to 15 serial numbers
and then to the full set of 27 serial numbers, we get this.
Now the probability is concentrated in a little blob, instead of a huge swath of values right across the plane. And what were the input numbers I used for the problem? A total number of tanks of 300, and the ability to knock them out of 0.1. It works :)
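The calculation above was done in Matlab; here's a sketch of the same idea in Python. Note the assumptions: I only use the five serial numbers quoted in the post (the full set of 27 isn't listed), and the grid ranges for f and M are my own choices. The likelihood is the Poisson probability of the wreck count times the serial-number term, (1/M)^n for M at least as big as the largest serial seen:

```python
import numpy as np
from math import lgamma

# Grid over the unknowns: kill fraction f and total tank number M
f_grid = np.linspace(0.01, 1.0, 100)
M_grid = np.arange(1, 1001)

N_wrecks = 27
serials = [123, 45, 150, 213, 58]  # the five serial numbers quoted in the post

def log_poisson(k, lam):
    """Log of the Poisson pmf, P(k) = lam^k exp(-lam) / k!."""
    return k * np.log(lam) - lam - lgamma(k + 1)

log_post = np.full((len(f_grid), len(M_grid)), -np.inf)
for i, f in enumerate(f_grid):
    lam = f * M_grid                       # expected wrecks for each M at this f
    lp = log_poisson(N_wrecks, lam)
    # Serial numbers are uniform on 1..M, so each contributes 1/M,
    # and M must be at least the largest serial seen
    valid = M_grid >= max(serials)
    log_post[i] = np.where(valid, lp - len(serials) * np.log(M_grid), -np.inf)

# Normalise and find the most probable (f, M)
post = np.exp(log_post - log_post.max())
post /= post.sum()
i_best, j_best = np.unravel_index(post.argmax(), post.shape)
print(f"Most probable: f = {f_grid[i_best]:.2f}, M = {M_grid[j_best]}")
```

With only five serials the posterior peaks hard against the M ≥ 213 boundary; with all 27 it tightens into the little blob described above.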

Just to do a final check, let's assume that there is only a small number of tanks in total, 34, but that the soldiers are good at knocking them out (f=0.75). What does the probability distribution look like now?
Again, everything still works. We see there was a small number of tanks, and that the troops were pretty good at knocking them out (although the uncertainty on this is quite large), but that doesn't matter, as he who has Reverend Bayes on their side has already won the war.

Have a good long weekend!

Saturday, 29 September 2012

A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part II); Distances to the Satellites of M31

It sometimes comes as a surprise to non-astronomers that one of the hardest things to do when you look at the Universe is to measure distances to objects out there. People have heard about the almost 100-year battle to measure Hubble's Constant, but what they fail to realise is that all of the uncertainty was in measuring distances; how far is it to that galaxy, or that supernova, or that star?

Books have been written about the titanic struggle of measuring and calibrating distances in the Universe, so I am not going to cover that here again. But let's talk about my (and my collaborators') efforts in the field.

I've written before about some work I've been doing with PhD student, Anthony Conn, using the tip of the Red Giant Branch to measure the distances to the dwarf galaxies orbiting our nearest neighbours, the Andromeda (M31) and Triangulum (M33) galaxies.

It's easy to understand the method: basically, it says that things are fainter when they are further away. If you know how bright things truly are, you can calculate their distance using the inverse square law.
The problem is knowing how bright something really is. This is where the tip comes in.
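The inverse square law is usually packaged as the distance modulus, m − M = 5 log₁₀(d / 10 pc), which you invert to get the distance. As a sketch (the numbers here are my own illustration, not from the paper; the I-band absolute magnitude of the TRGB is roughly −4):

```python
def distance_pc(m_apparent, M_absolute):
    """Invert the distance modulus m - M = 5 log10(d / 10 pc) to get d in parsecs."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

# Hypothetical example: a TRGB star observed at apparent magnitude 20.5,
# assuming a TRGB absolute magnitude of about -4 (a typical I-band value)
d = distance_pc(20.5, -4.0)
print(f"distance = {d / 1e3:.0f} kpc")  # comes out near the distance to M31
```

The hard part, of course, is pinning down that absolute magnitude and finding the tip in messy data, which is what the rest of the post is about.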

Here's some colour-magnitude diagrams for a globular cluster. The stars are not all over the place, but lie in particular places.

The main sequence is where stars are burning hydrogen into helium in their cores. This is where the Sun finds itself. One day, however, the hydrogen fuel in the core is used up, so what happens then?

Stars are simple objects. Basically, gravity squeezing inwards is balanced by energy (in terms of pressure) pushing outwards. So, when the energy flow from the core dries up, the star starts to collapse in on itself. The squeezing rises, the temperature rises, until a shell of hydrogen starts to burn into helium just outside the core.

However, this burning changes the properties of the star, with the flow of energy into the outer parts of the star, causing it to swell up. As it swells, the atmosphere cools and becomes red, but because the star is getting larger, it actually emits more radiation into space. The star has become a Red Giant (this is the future for the Sun!).

The swelling stars are the line of stars up the right hand side of the picture. The star continues to swell, and get brighter and brighter. Due to continual squeezing though, the core gets hotter and hotter, until BOOM, the core ignites again, burning helium into heavier elements. This is called the helium flash.

The outer layers of the star become less luminous and the star drops back down the giant branch. The cool thing is that the point at which this happens is the same for all stars (there is an effect of the chemical composition of the star, but that's a smallish effect). So, the tip of the red giant branch, the point in the colour-magnitude diagram where the stars stop getting brighter and fall back down, is a standard candle, something we can use to measure distances. And this is what Anthony did.

Now, that might make it sound easy, but the data we are working with is not as clean as the picture up there; there is a mess of contamination, from stars in our own galaxy to faint galaxies at the limit of detection. Here's an example of what we are working with;
The top is the colour-magnitude diagram, with the box being the area of the red giant branch we are interested in. The inset box shows a dwarf galaxy orbiting Andromeda, where we have used colour-coding to note how far each star is from the centre of the dwarf; this allows us to more robustly measure the tip.

The bottom right box is the luminosity function, with the bright being on the left, and faint on the right (I know, I know, astronomers are stupid for using the barse-ackward magnitude system). Above the tip, no stars, then we have a sharp jump at the tip and then more and more stars below.

The bottom left is our measurement of the location of the tip in this case. Notice that we don't have a single number, we have a probability distribution function; the peak of this distribution might be the bestest value for the location of the tip, but the width of the distribution is also very important, showing how accurately we have made the measurement. I will stress again: you don't get the Nobel prize for measuring a number, you get it for measuring a number and its uncertainty.

To cut to the chase, we now know the three dimensional distribution of dwarf galaxies about Andromeda. What does it look like? Here's the picture from the paper;
Now, the question is, is this distribution of dwarfs just a random scattering of galaxies, or does it agree with our computer simulations of galaxy formation and evolution, or does it look like something else? That's a story for another day, hopefully a day not too far in the future. For now, well done Anthony!

A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part II); Distances to the Satellites of M31

Anthony R. Conn, Rodrigo A. Ibata, Geraint F. Lewis, Quentin A. Parker, Daniel B. Zucker, Nicolas F. Martin, Alan W. McConnachie, Mike J. Irwin, Nial Tanvir, Mark A. Fardal, Annette M. N. Ferguson, Scott C. Chapman, David Valls-Gabaud
In `A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (PART I),' a new technique was introduced for obtaining distances using the TRGB standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a 3D view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I - III, V, IX-XXVII and XXX along with NGC147, NGC 185, M33 and M31 itself. Of these, the distances to Andromeda XXIV - XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo (MCMC) technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with, and without the aforementioned prior are made available to the reader in tabular form...

Sunday, 23 September 2012

Gauss's Sheets

It's early on Sunday morning, and I'm up and about due to the chorus of Sulphur-Crested Cockatoos that often make my suburb sound like the depths of Jurassic Park. While lying there in the din, I was thinking about yesterday's post, and found myself pondering the following:

"What would happen if I stopped using a cube, but instead had two parallel sheets? Now, in this case, I don't have a closed surface anymore, but if I make my sheets infinitely large, I am guessing that the total integral over the two sheets would converge to the result expected by Gauss's law."

Can you see why?

Anyway, this is not going to be a long post, as I haven't had any coffee yet, and have a paper to deal with, but I am very quickly going to scribble down the solution. Basically, I am going to replace my square piece with a circle, and then make the circle infinitely large in radius, and this will be the same as making a square sheet infinitely large.

Note to mathmos out there - I'm a physicist and this is how we do maths.

OK - the geometry.
So, the reason why we choose a circle is that the problem then, due to circular symmetry, boils down to be a one-dimensional integral in the radius, r.

OK - simplifying and actually doing the integral (and by doing, I mean Mathematica again)
Adding up the integral over the two sheets, top and bottom, we get the same answer as we did when we integrated over the sphere and the box.
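You can check this numerically without Mathematica; here's a sketch in Python (my own illustration, not the post's code), working in units where q = ε₀ = 1. By circular symmetry the flux of a point charge through a disc of radius R at height a reduces to a 1D integral in r, and as R grows each sheet captures half the total flux, so the two sheets together approach q/ε₀ = 1:

```python
import math
from scipy.integrate import quad

def flux_through_disc(R, a=1.0):
    """Flux of E from a unit charge (epsilon_0 = 1) through a disc of radius R,
    parallel to the x-y plane at height a above the charge. Only the component
    of E along the disc normal contributes, giving the 1D radial integrand."""
    integrand = lambda r: (1 / (4 * math.pi)) * a / (r**2 + a**2)**1.5 * 2 * math.pi * r
    result, _ = quad(integrand, 0, R)
    return result

for R in [1, 10, 100, 1000]:
    # Two sheets, top and bottom; the total should approach q / epsilon_0 = 1
    print(f"R = {R:5}: total flux through both sheets = {2 * flux_through_disc(R):.6f}")
```

You can see the answer converging to 1 as the sheets grow, which is exactly the Gauss's law result for the closed surfaces.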

But as we noted, the sphere and the box are closed surfaces, but two sheets are not. Why does this work?

Anyway, I'm getting off the Gauss hobbyhorse, as there are astronomy results coming. Watch this space.

Saturday, 22 September 2012

Gauss's Cube

It has been a busy week, with a talk to the Macarthur Astronomical Society on Monday, and to the Astronomical Society of New South Wales last night. And if you were in Blighty on Wednesday at 4am, you would have heard me join Dr Karl on Radio 5 Live's graveyard shift to talk about why high speed protons don't become black holes, but that is the topic for another post.

As I mentioned, I've recently been teaching electromagnetism, and I like to take a little bit of a side-ways glance at derivations and equations. Why? Because sometimes those given in text books can appear a little too idealized or simplified.

One of the things that you have to talk about in electromagnetism is Gauss's law. Mathematically, Gauss's law can seem quite intimidating to a first year student, even those in our advanced class, but let's take a look at what it means in a simplified sense, and then something a little more complicated.

Right, the maths (and to our American cousins, it is maths, not math. Mathematics is the plural of the word mathematic).
If you are feeling mathematically challenged at this point, let's try and understand what we are looking at here. The right-hand side is the amount of charge in a region (the q) and the thingy on the bottom is a constant of nature, the permittivity of free space. What's the flippy thing on the left? Let's look at this in picture terms.
Thanks to the genius of Michael Faraday, we talk about the idea of the electric field, where we think of each charge in the universe as being the source (if positive; sink if negative) of electric field lines (the blue lines in the above picture). The number of lines is dependent upon the amount of charge, so doubling the charge makes twice as many blue lines.

So, what is the left-hand side of the equation telling us? Imagine we consider a surface around the charge (the red, badly drawn thing in the picture above); what the left-hand side effectively does is count the number of blue lines going through the surface, in this case 4.

It should not take a huge amount of mental effort to understand that if you change the shape and size of the red surface, the total number of blue lines crossing the surface is still 4. There is a little thought needed for this, as you can imagine deforming the surface so much that a blue line going out comes back in again, but then goes back out. So really the left-hand side is the number of outward-going blue lines minus the number of inward-heading blue lines.

When students are introduced to this, you usually have the following picture.
Firstly, we make the surface a sphere. We can work out the size of the electric field on the surface of the sphere using Coulomb's law (that's the first bit up there, and yes, I know that I left the unit vector off the right-hand side). The integral simply becomes the size of the electric field over the surface, multiplied by the area of the surface (I've not worried about the vector dot product, which we will return to in a minute). The simplicity of choosing a sphere makes the problem, well, simple.
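For the record (the post's equations are in images, so this is a reconstruction of the standard calculation), the sphere case runs like this:

```latex
E = \frac{1}{4\pi\epsilon_0}\frac{q}{r^2}
\qquad\Rightarrow\qquad
\oint \vec{E}\cdot d\vec{A} = E \times 4\pi r^2
= \frac{1}{4\pi\epsilon_0}\frac{q}{r^2}\, 4\pi r^2
= \frac{q}{\epsilon_0}
```

The r² in the area exactly cancels the 1/r² in Coulomb's law, which is why the sphere is the textbook choice.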

But let's consider something a little more complex. Let's put the charge at the centre of a cube, with a side length of 2a.
It is easiest if we break the problem up and consider the number of lines through each side of the box (as we have put the charge in the centre of the box, it should be clear that the number of lines through the total box is 6 times that through one side).

For those that don't like maths, look away now. First, the geometry of the problem
and then we can work out what the integral has to be (note that we cannot ignore the dot product now)
Ugh... Ugly integral time (and not strictly correct, as this is an integral over one face of the cube, and so is not over the entire surface). But we are saved! No more slogging over integrals, now we can use wonderful software like Mathematica to do them for us. So, what do we find?
But this is the integral through one face of the cube, and as we noted above, we need to multiply this by 6 to get the total over the cube. And this is simply the right hand side of where we came in.
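If you don't have Mathematica to hand, the face integral is also easy to check numerically. Here's a sketch in Python (my own check, not the post's code), working in units where q = ε₀ = 1, so the flux through one face should come out at exactly 1/6:

```python
import math
from scipy.integrate import dblquad

a = 1.0  # half the side length of the cube; the charge sits at the centre

def integrand(y, x):
    """E . dA over the face at z = a: the z-component of the Coulomb field,
    with q = epsilon_0 = 1, so E = r_hat / (4 pi r^2) and E_z = a / (4 pi r^3)."""
    r2 = x**2 + y**2 + a**2
    return (1 / (4 * math.pi)) * a / r2**1.5

# dblquad integrates x over [-a, a] and y over [-a, a] (note func takes (y, x))
flux_one_face, _ = dblquad(integrand, -a, a, lambda x: -a, lambda x: a)
print(f"flux through one face = {flux_one_face:.6f}")
print(f"6 faces: {6 * flux_one_face:.6f}  (Gauss says q / epsilon_0 = 1)")
```

Six equal faces, total flux q/ε₀, just as the law demands.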

Gauss's law works for a cube (which, of course, we knew it would :). Right, back to the grindstone.

Friday, 14 September 2012

I'm not young enough to know everything

I've just finished my lectures for the year. I've been teaching electromagnetism to our advanced first year class, and it's been the usual rollercoaster ride of a bit of integral calculus, laws by Gauss, Ampere and Lenz, and culminating in the rather wonderful Maxwell's equations
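(The equations appear as an image in the post; as a reconstruction, in their standard differential SI form they are:)

```latex
\begin{aligned}
\nabla \cdot \vec{E} &= \frac{\rho}{\epsilon_0} &
\nabla \cdot \vec{B} &= 0 \\[4pt]
\nabla \times \vec{E} &= -\frac{\partial \vec{B}}{\partial t} &
\nabla \times \vec{B} &= \mu_0 \vec{J} + \mu_0\epsilon_0 \frac{\partial \vec{E}}{\partial t}
\end{aligned}
```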
These are amongst the most important equations in the Universe. Why?

Well, there are 4 forces in the Universe. Two of them we generally don't worry about on a day-to-day basis (namely the strong force, which holds the nuclei of your atoms together, and the weak force, which is responsible for radioactivity); the other two are gravity, which wants to drag you to the centre of the Earth, and electromagnetism, which is responsible for everything else.


From stopping you falling to the centre of the Earth, to controlling the flow of your ADSL, to the radio stations coursing through the air. The friction that stops your car, the things that give your bones strength, and allow your PS3 to have wireless controllers (well, to give it its fair dues, it was responsible for making the wired controllers work also).

But I'm not here to wax lyrical about electromagnetism (although I could :).

I thought I would talk about my experience teaching it. Well, actually, teaching in general.

A bit of background. I've been lecturing at a university for the last decade (precisely!!). Before that, I worked as a research astronomer and had no teaching to do. Becoming a lecturer, however, came as a bit of a shock. Why?

It's what we had to teach. The material was very similar to what I received as an undergraduate (which was more than 20 years ago), and so teaching it should have been like water off the proverbial duck's back.

But every course I have taught, even though I did really well in these subjects as a student myself, I realised there were gaping holes in my understanding. I clearly did not understand it as well as I thought I did. In truth, I had to relearn a lot of stuff.

For some subjects, this was not so bad. I use a lot of classical mechanics in my research, so there was a lot of material that was very familiar, but there were nooks, crannies and surprises even here.

But with other topics, such as general relativity and quantum mechanics, and especially electromagnetism, I realised that the holes in my knowledge were quite substantial, and so I spent a significant amount of time chasing information and building up my understanding of a topic.

The result is that not only do I feel confident about fielding questions from students, sounding knowledgeable and introducing the quirks and stories of a topic, but it makes me a better researcher. Improving my understanding of a wealth of topics has bolstered my research, by letting me incorporate new techniques, ideas and approaches.

I should mention that, because of the way funding schemes are structured, the best researchers who get fellowships tend to run from teaching (to focus on their research), or are only willing to teach a course closely related to their research. I personally think this is wrong. The best researchers should be put in front of first years, teaching something from left field. Not only does this expose the students to the research leaders, but it should make the researchers stop and think about their own knowledge and understanding, and potentially improve them as scientists. In my humble opinion, this should be a requirement of fellowship schemes.

Anyway, at the end of a course, we do unit evaluations, to see how well the course was received by the students etc. Some of the scores can be a little sad, especially the self-assessment of the students on how consistently they worked on the topic, but I am happy to say that I generally score OK on my lecturing. The feedback often focuses upon my mangled accent (being the son of a Welsh coal miner who went to university in England, and then headed off to the US and Canada before coming to Australia), but I got the following this time around:
I didn't mention Torchwood in my course, but it's nice to hear that I am still Welsh enough to be considered part of the team :)