Saturday, 30 July 2011

My journey into GPGPUs

The semester has begun here, Down Under, and that means two things. Firstly, I am teaching General Relativity from Monday onwards to our Honours class (this is my favorite course and I'll blog about it a little more, as I have a particular view of the teaching of this subject), and, secondly, I have become a student again.

Why? Well, because of these;
Those who have braved opening up a computer may recognise this as a GPU, or Graphics Processing Unit, and it's the engine that makes high-level graphics possible, especially for computer gaming. The explosion in their development came about because of
Not the flash and manic grin, but the hair (and my understanding is that it is long, flowing female hair that is the goal, which may tell us more about those who write computer games!).

The result is that GPUs have become computationally very powerful, but their architecture is different to that of a CPU. Basically, GPUs are massively parallel processors, made of many quite simple computation engines. This means that if you have a simple calculation that you want to perform many times, a CPU might have to step through each calculation in turn, whereas the GPU can do them all at once.

This is precisely what we want to do in many astronomical (and generally scientific) applications. As an example, to calculate the gravitational force on an object, you need to add up the force due to all the other objects. Typically, you do this one at a time, which can get quite slow for many (i.e. billions of) objects, and so things would go much faster if we could do the summations all at once.
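To make this concrete, here's a minimal sketch of the kind of calculation involved, in Python with NumPy rather than the CUDA we'll actually be learning (the function and parameter names here are mine, for illustration): every pairwise force term is independent of the others, which is exactly the sort of work a GPU's many simple engines can do simultaneously.

```python
import numpy as np

def gravitational_accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct-summation gravitational accelerations.

    Each of the N*N pairwise terms is independent of the others,
    which is why this sum maps so naturally onto a GPU.
    """
    # Displacement vectors between every pair of particles: shape (N, N, 3)
    dx = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    # Softened squared distances, so the diagonal never divides by zero
    r2 = (dx ** 2).sum(axis=-1) + eps ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)  # no self-force
    # a_i = G * sum_j m_j (x_j - x_i) / |x_j - x_i|^3
    return G * (dx * (mass[np.newaxis, :] * inv_r3)[:, :, np.newaxis]).sum(axis=1)

# Two unit masses a unit distance apart attract each other equally and oppositely
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
acc = gravitational_accelerations(pos, mass)
```

On a GPU, each of the N² pairwise terms would be handed to its own thread rather than computed in one vectorised sweep, but the structure of the problem is the same.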

There is a problem, however. The makers (e.g. NVIDIA and AMD) keep the details of the architecture close to their chests. And they have, in the past, not been as rigorous as CPU makers in ensuring that floating point arithmetic works as it should; if you are simulating hair, then 2+2=5 is not such a problem now and again, but it can render the output of a scientific simulation useless (would you fly on a plane whose wings had been tested on a machine that sometimes got floating point arithmetic wrong?).

But this is changing, and more robust arithmetic is now the name of the game, as is the provision of computing libraries, specifically CUDA and OpenCL, to allow us to develop applications on GPGPUs (the first GP now stands for General Purpose). There is some urgency in getting to grips with this, as we are starting to build GPU-based supercomputers (in Australia, we will soon have g-Star to undertake GPU-based supercomputing for theoretical astrophysics). So, I have enrolled in a programming course for CUDA in the School of IT here.

There is, however, a problem. Astronomers (and I know this is going to hurt) are generally not very good at coding. Some are, but the majority aren't. We rely on the fact that we don't have to worry about complicated stuff because things like memory management and the order of processing are hidden in high-level codes, typically C and Fortran, although Python seems to be getting a foothold. We are bad enough for me to chuckle at the fact that this book has an astronomy picture on the front; is it an example of a field that is renowned for needing this book, or perhaps we are better than the rest (which is a scary thought)?

Anyway, back to GPGPUs. They are difficult to program. I think it was best put by my lecturer: they are difficult to program because you are
"programming bare metal"
You HAVE to worry about memory, and about what's computing what and when, and, and this will shock most astronomers, you can't debug your code by sticking write statements everywhere (this will cause your code to fall over in a heap).

Anyway, I have had my first lecture, which so far is fine, but I also got my first homework, essentially playing with memory management in C. Of course, the young IT students confidently read over the homework sheet as I replayed the opening script of Four Weddings and a Funeral in my mind; it's been a little while since I really programmed in C.

I'll keep the blog updated on my journey into GPGPUs.

Sunday, 24 July 2011

Intermission: Australia vs South Africa

No zombies or 2-d universes this weekend, as I have been busy with a few other things. This included heading to ANZ Stadium to see Australia play South Africa at rugby.

It is very easy for me to get to the stadium here in Sydney, much easier than it was to get to Arms Park when I was younger, and the family and I trundled down for the 8pm kick off. Trundling through the happy rugby crowd, we almost ended up at Acer Arena, where Enrique Iglesias (whose crowd was just as rowdy, and friendly, as the rugby one) was performing. It was interesting to see that also-Neath-born Katherine Jenkins will be performing there with Placido Domingo in September.

After finding the stadium, we watched a good game of rugby, especially given the Wallabies' woeful appearance against Samoa last week. So good, in fact, that I only caught a couple of piccies.

As I said, a good game, although the crowd doesn't sing Delilah. One cool point was the Mexican wave. Due to the torrential downpour over the week, we all got little foam squares to sit on, which became a spectacular part of the wave as it went round.

Friday, 22 July 2011

A New Collisional Ring Galaxy at z = 0.111: Auriga's Wheel

A great week for the Conns! My ex-students, Blair Conn (who is no relation to Anthony), Richard Lane and I have just had a new paper accepted on the discovery of a Collisional Ring Galaxy at redshift z=0.111 (the most distant one known - correction, one of the most distant known).

The galaxy was found serendipitously in a Subaru survey of the galactic disk and looks quite interesting. Here's two images of the system, in g-band (left) and r-band (right).

Cool eh! But it looks a bit better when you combine these to make a colour picture (we can make a pretend in-between band by averaging the two pictures above). This is what you get (and is annotated as in the paper).
The slits in the image correspond to where we pointed the GMOS spectrograph on the Gemini-North telescope, allowing us to measure the velocities of the various components. It also revealed that there are active galaxies hidden down in the middle.

So, what's going on here? Well, we have some local examples of these Collisional Ring Galaxies. Here's a pretty one as seen by the Hubble Space Telescope
As the name suggests, these are formed when two galaxies collide, where the collision is almost a bull's-eye, rather than a glancing blow. The collision causes gas to collapse, resulting in a burst of star formation in the ring. In Auriga's Wheel, gas also flows into the centre, pouring fuel onto the central black holes and resulting in the active galaxies we see. This suggests that the ring is very young, only 50 million years old (a cosmic baby).

We've also done a preliminary set of computational modeling of the collision which confirms this:
Here we have two galaxies of about the same mass, one with red particles, and the other with black. Looks like what we see (isn't physics wonderful!). Rory Smith is working on some more detailed modeling, and I'll pop that here when it's done. Well done Blair and Richard!!

A New Collisional Ring Galaxy at z = 0.111: Auriga's Wheel

Blair C. Conn, Anna Pasquali, Emanuela Pompei, Richard R. Lane, André-Nicolas Chené, Rory Smith, Geraint F. Lewis
We report the serendipitous discovery of a collision ring galaxy, identified as 2MASX J06470249+4554022, which we have dubbed 'Auriga's Wheel', found in a SUPRIME-CAM frame as part of a larger Milky Way survey. This peculiar class of galaxies is the result of a near head-on collision between, typically, a late-type and an early-type galaxy. Subsequent GMOS-N long-slit spectroscopy has confirmed both the relative proximity of the components of this interacting pair and shown it to be the most distant spectroscopically confirmed collisional ring galaxy, with a redshift of 0.111. Analysis of the spectroscopy reveals that the late-type galaxy is a LINER-class Active Galactic Nucleus, while the early-type galaxy is also potentially an AGN candidate; this is very uncommon amongst known collision ring galaxies. Preliminary modeling of the ring finds an expansion velocity of ~200 km s^-1, consistent with our observations, making the collision about 50 Myr old. The ring currently has a radius of about 10 kpc, and a bridge of stars and gas is also visible connecting the two galaxies.

Tuesday, 19 July 2011

A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)

PhD student, Anthony Conn from Macquarie University, has been working with me on measuring distances to dwarf galaxies and substructure in our nearest cosmological companion, the Andromeda Galaxy. I'm very pleased to say that Anthony's first paper, using Bayesian methods to measure these distances, has now been accepted for publication in the Astrophysical Journal. Excellent result, Anthony!!

A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)

A. R. Conn, G. F. Lewis, R. A. Ibata, Q. A. Parker, D. B. Zucker, A. W. McConnachie, N. F. Martin, M. J. Irwin, N. Tanvir, M. A. Fardal, A. M. N. Ferguson

 We present a new approach for identifying the Tip of the Red Giant Branch (TRGB) which, as we show, works robustly even on sparsely populated targets. Moreover, the approach is highly adaptable to the available data for the stellar population under study, with prior information readily incorporable into the algorithm. The uncertainty in the derived distances is also made tangible and easily calculable from posterior probability distributions. We provide an outline of the development of the algorithm and present the results of tests designed to characterize its capabilities and limitations. We then apply the new algorithm to three M31 satellites: Andromeda I, Andromeda II and the fainter Andromeda XXIII, using data from the Pan-Andromeda Archaeological Survey (PAndAS), and derive their distances as $731^{(+ 5) + 18}_{(- 4) - 17}$ kpc, $634^{(+ 2) + 15}_{(- 2) - 14}$ kpc and $733^{(+ 13)+ 23}_{(- 11) - 22}$ kpc respectively, where the errors appearing in parentheses are the components intrinsic to the method, while the larger values give the errors after accounting for additional sources of error. These results agree well with the best distance determinations in the literature and provide the smallest uncertainties to date. This paper is an introduction to the workings and capabilities of our new approach in its basic form, while a follow-up paper shall make full use of the method's ability to incorporate priors and use the resulting algorithm to systematically obtain distances to all of M31's satellites identifiable in the PAndAS survey area.

Just as a passing note, this is another cool astro-ph day, with another Geraint on there. Have a read of;

An MCMC approach to extracting the global 21-cm signal during the cosmic dawn from sky-averaged radio observations

by Geraint Harker. I wonder if he is related to the famous Jonathan Harker (vampire slasher)? At some point, I'll write about the mathematical modelling of vampire outbreaks!! One thing I know, is that a vampire outbreak will not look like this!!

Sunday, 17 July 2011

Dawn of the Dead

Given my age, I was a young teenager when videos arrived in the UK, and horror movies were the rage. I'm pretty sure that the first zombie movie I saw was Dawn of the Dead (the original, although the recent remake wasn't that bad).

Dawn is one in a long line of zombie films by George A. Romero, colloquially known as the Living Dead Series, starting with the classic Night of the Living Dead in 1968. Some of the later movies are, well, not so good, but Dawn is an excellent movie (well, excellent zombie movie).

I'm not going to give the story away here, but as noted in Dawn (and especially the recent Zombieland), there will be survivors, and these survivors will get better at surviving by not being killed, and getting rid of more zombies. As noted in Zombieland;
The first rule of Zombieland: Cardio. When the zombie outbreak first hit, the first to go, for obvious reasons... were the fatties
 So, in my zombie models, I have added a factor to account for the "hardening" of the population, by making the various parameters a function of time. As a reminder, here's the starting point, with all the parameters kept constant.

I've modified the plot and now, on the bottom, have the key parameters that control the zombie apocalypse. These are α, humans killed by zombies, β, humans infected by zombies, and δ, zombies killed by humans.
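For the curious, the sort of system being integrated looks something like the following sketch of a simple human/zombie model. The parameter names match the plot labels above, but the exact equations and values here are illustrative assumptions, not the precise ones behind my plots.

```python
def zombie_apocalypse(H0, Z0, alpha, beta, delta, dt=0.1, days=500):
    """Euler-integrate a toy human/zombie model.

    alpha : rate at which humans are killed outright by zombies
    beta  : rate at which humans are infected (and become zombies)
    delta : rate at which zombies are destroyed by humans
    """
    H, Z = H0, Z0
    history = [(0.0, H, Z)]
    steps = int(days / dt)
    for n in range(1, steps + 1):
        dH = -(alpha + beta) * H * Z          # humans killed or infected
        dZ = beta * H * Z - delta * H * Z     # new zombies minus destroyed ones
        H = max(H + dH * dt, 0.0)
        Z = max(Z + dZ * dt, 0.0)
        history.append((n * dt, H, Z))
    return history

# With infection (beta) outpacing the fight-back (delta), the humans crash
run = zombie_apocalypse(H0=500.0, Z0=1.0, alpha=0.0001, beta=0.0002, delta=0.0001)
```

With these constant parameters the human population collapses, which is the behaviour in the first plot.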

With the constant values given above, the population crashes.

OK, let's change these. What I've done is use a logistic function to change the values. The key ingredients are a time representing the mid-point of the change, a time-scale over which the change occurs, and the size of the change. Here's one where the population stiffens at around 300 days.
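The logistic switch itself is only a line of code; here's a sketch (the function and parameter names are mine, not anything official):

```python
import math

def logistic_ramp(t, t_mid, timescale, start, change):
    """Smoothly shift a parameter from `start` to `start + change`.

    t_mid     : time of the mid-point of the transition
    timescale : how quickly the change happens
    """
    return start + change / (1.0 + math.exp(-(t - t_mid) / timescale))

# A population "stiffening" at around day 300: delta ramps from 0.0001 to 0.0003
delta_early = logistic_ramp(50.0, 300.0, 20.0, 0.0001, 0.0002)   # still ~0.0001
delta_late  = logistic_ramp(550.0, 300.0, 20.0, 0.0001, 0.0002)  # risen to ~0.0003
```

Feeding such time-dependent values of α, β and δ into the model is all the "hardening" amounts to.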

Again, the population collapses and the zombies take over. What we need to do is make the population stiffen a lot earlier. Let's get the population fighting back harder at about 250 days, during the collapse of the population.

Now this is more like it. What we see now is that the population drops, but the stiffening of the population halts the decline and the population flattens out and the zombies are basically eradicated.

Moving the stiffening earlier again, we should expect the population to do better, and going back to 200 days, we find
So the rule is simple. Fight back, and the earlier we fight back, the better.

Keep your eyes peeled for the shuffling undead!

Friday, 15 July 2011

If I had a blank cheque I’d … trace the history of the Milky Way

A busy day, zipping down to Melbourne for a meeting on computer infrastructure. In the meantime, I got another article published in The Conversation titled If I had a blank cheque I’d … trace the history of the Milky Way.

The article is part of a series on what would happen if you give scientists a blank cheque. I was very restrained (I'd get rid of my mortgage first), and I focused on things that are actually achievable with a reasonable (rather than infinite) pot of money.

Not to steal the thunder of the article, I said I would use $100 million to build WFMOS, a project which I was very involved with. Here's a random image

WFMOS was to be the next generation multi-fibre spectrograph, built as part of a consortium between Gemini and Subaru, and to be placed at the top-end of the mighty Subaru Telescope.

This project was a long time in development,  a history which I won't recount here now. But there were a number of meetings to discuss the science, the focus of which was the nature of dark energy and galactic archaeology. In fact, I organized one of these meetings;

The meeting was very enjoyable, but alas, it was also the platform for Gemini to announce that WFMOS was cancelled. A rather depressing outcome after a huge amount of effort.

However, the Japanese have continued with a new project, called Sumire, which will attack the dark energy questions that WFMOS was intending to address. Alas, the galactic archaeology waits in the wings.

As I said, I will not reproduce The Conversation article here, and I will write more on galactic archaeology in the future. I will take a moment, however, to lament the memory of WFMOS.

Wednesday, 13 July 2011

A long night in the dome.....

I've been observing at the AAT a couple of times over the last few months. It's a long drive from Sydney (almost 6 hours), for long winter nights in the dome. We were using the rather wonderful AAOmega spectrograph to chase stars in the Sagittarius Dwarf and its associated tidal stream (I'll write more about these later).

I'm not much of an Astronomer, in the sense that I am pretty clueless about constellations and the names of stars (but having Google Sky on my phone is really starting to help). But when I am observing, I take the chance to really look up. My first observing run (in 1991) was at the William Herschel Telescope in the Canary Islands, and it was the first time that I **really** saw stars, and saw that they had colours. I have a valid excuse, having grown up under the not-so-clear skies of South Wales, followed by a few years in London.

However, there is more to look at than just stars. I really like catching Iridium flares, when one of the Iridium family of communication satellites catches the Sun just before sunrise/sunset and can briefly become amongst the brightest things in the sky. Sometimes, we are lucky enough to get a pair of flares within 30 seconds of each other, and on the last run, the legendary AAT TO Steve Lee photographed such a pair. Here's the image;


For the keen-eyed, you should also see the disk of the Milky Way and a Magellanic Cloud.

One of the weirdest things about observing is that when the Sun is on its way up, but still well below the horizon, we can no longer observe due to the sky brightness, even though it (to your eyes) looks dark outside.

After all the calibrations are done, and your body is yearning for bed (and it is important to get to sleep before the Sun really comes up), what else can you do? Well, if Saturn is up (as noted by Anthony Conn who was observing with us), perhaps we can look at it with the AAT? Remember, however, we have a spectrograph on, and what we really want is an image. In fact, we used something called the Focal Plane Imager (which allows us to do things like monitor the seeing when observing with the AAOmega), and if you cut the integration time down to a bare minimum (remember, we do have a 3.9m telescope here), what you get is


Again, we have Steve to thank for this excellent image (with 4 moons thrown in also, although I have a sneaking suspicion that the one at the top is a cosmic ray).

I guess, as an astronomer, I should look up a little more. It's all rather pretty up there.

Sunday, 10 July 2011

Feynman on Computers

Over at In the Dark, Peter Coles has been quoting Richard Feynman on various topics (and Feynman had a lot to say on a lot of topics, much of it quite eye-opening). One of his recent quotes was the following (which I creatively copied - i.e. pinched - directly from Peter's blog);
Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. It’s a very serious disease and it interferes completely with the work. The trouble with computers is you *play* with them. They are so wonderful. You have these switches – if it’s an even number you do this, if it’s an odd number you do that – and pretty soon you can do more and more elaborate things if you are clever enough, on one machine.
After a while the whole system broke down. Frankel wasn’t paying any attention; he wasn’t supervising anybody. The system was going very, very slowly – while he was sitting in a room figuring out how to make one tabulator automatically print arc-tangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation.
Absolutely useless. We *had* tables of arc-tangents. But if you’ve ever worked with computers, you understand the disease – the *delight* in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing.
I remember reading this in Feynman's excellent book Surely You're Joking Mr Feynman, a book I read about the time I decided not to be a particle physicist and to be an astrophysicist instead. But I remember that when I originally read this quote, I *think* I misunderstood it, taking it to say "if you use computers, don't waste your time playing around with them".

Unfortunately, I already knew I had the disease Feynman talks about. Back in the early 1980s (I think it was 1982), I remember trundling on the bus to Swansea to spend almost £100 on a ZX81 and 16k RAM pack (on which the slightest breath would delete everything). Just for nostalgia's sake;

While there were games available to buy (on cassette tape, which you could actually listen to as they loaded), they weren't too cheap. To make up for this, there were a number of magazines that published game source code (in Sinclair BASIC) which you could type in and play.

Games often weren't much cop, with basic graphics and game play - below is one you actually bought

and when typing in the source code, there would often be a bug that you introduced, or a bug in the actual source code. To make the games work, you had to understand what the code was actually doing, such as mucking about with memory with peeks and pokes, and, even though I didn't realise it at the time, you had to learn how to debug code. Learning these skills helped me play around, write my own code, and actually start to use it. I remember writing a code that added wave after wave together to show that you could make a square wave, and this was before I was formally introduced to Fourier analysis.
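That wave-after-wave experiment is only a few lines in a modern language; here's a sketch in Python (rather than the Sinclair BASIC of the day) of building a square wave from its odd sine harmonics:

```python
import math

def square_wave_partial_sum(t, n_terms):
    """Fourier series for a square wave: a sum of odd sine harmonics.

    square(t) ~ (4/pi) * sum over odd k of sin(k*t)/k
    """
    total = 0.0
    for k in range(1, 2 * n_terms, 2):   # odd harmonics 1, 3, 5, ...
        total += math.sin(k * t) / k
    return 4.0 / math.pi * total

# With enough terms, the sum flattens towards +1 across (0, pi)
approx = square_wave_partial_sum(math.pi / 2.0, 200)
```

Plot the partial sums as you add harmonics and you can watch the sine waves square up, exactly the sort of playing Feynman meant.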

OK, grumpy old man time. I'm told that the current generation of students coming into universities these days is computer-literate, having grown up with computers. While this is true, it is clear that many students have never played with computers in the sense of Feynman's statement. Web searching, email and Facebook just don't cut it.

Does it matter? If you want to do a degree in physics, both undergraduate and graduate, I think you need these skills. Many (most) research projects will require students to integrate/differentiate functions, find correlations, fit models to data etc etc, and I see students struggle having to learn these concepts from scratch. This is made worse as many students shun computational physics courses when doing their degree (probably because they don't realise how important these skills are).

I'll leave the final word to the wonderful Randall Munroe and xkcd. While it was BASIC for me, rather than PERL, I very much agree.



Saturday, 9 July 2011

The last shuttle


Atlantis has set off on the last shuttle flight, and so I thought I would put down a few of my thoughts on the shuttle program.

I was born a couple of months before Armstrong walked on the moon, and I effectively grew up expecting the Space Shuttle to change spaceflight. I don't really remember the Apollo-Soyuz link-up, but I remember Skylab flying, and then falling. I barely remember anything about the Russian space program as a child.

About the time of the first shuttle flight, I sent a letter to NASA asking if they had any information about the shuttle I could have (remember, this was in the stone age when there was no internet), and I received a deluge: more packages than I can remember on the shuttle program and its future promise.

I've always enjoyed watching the shuttle fly, but looking back I realise that it didn't really reach the potential outlined at the start. The Challenger accident, as well as introducing me to Feynman, revealed how dangerous the shuttle was (it still does not have a launch abort system, unlike every other manned rocket), and this was recognized by the astronauts.

But the shuttle has done some amazing things, and for me (and my research) repairing Hubble was amazing. If Hubble breaks down now, then that's it. With JWST looking a little shaky, we may have to say goodbye to large optical telescopes in orbit.

So, it's almost goodbye shuttle. I hope Atlantis has a safe flight and eventually finds rest in a museum somewhere. With the Americans now heading towards a glorified Apollo capsule in the form of the Orion spacecraft, I can't help but think that a big step backwards in space travel is being made.

Before leaving, I should mention that I am a fan of manned spaceflight, but the shuttle is not my favorite spacecraft, nor is Apollo, Gemini or Mercury. In fact, my favorite is not even American, and I would fly on it, if offered, in a heartbeat. My favorite is Soyuz;

Two friends in my 2-d Universe

So, I have now generalized the 2-d Universe a little more, and here are two particles interacting with each other within the surface of a sphere.

Cool, isn't it? So, how does one calculate such a pair of paths? As I mentioned previously, it's all standard non-Euclidean geometry and vectors and the like. So, let's go through the basics (and maths-types, please remember I am an astrophysicist and don't get grumpy about the words I use - it works :)).

Starting point is that we are on a sphere, and so it makes sense to use spherical polar coordinates. Now, one painful thing is that different people define which angle is ϑ and which is φ, so I will be following the convention shown in the top figure on Wikipedia. Remember, however, we are working in the surface of the sphere, and so we have no radial (r) coordinate.

If we have two infinitesimal displacements in our coordinates (and assuming a unit sphere for convenience), then the separation between the displaced coordinates is given by;
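For reference, in the convention above (with ϑ the polar angle), this is the standard line element on the unit sphere:

```latex
ds^2 = d\vartheta^2 + \sin^2\vartheta \, d\varphi^2
```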

And this defines the terms in the metric of the surface. We can write this as a matrix (I can feel the mathmos blood starting to boil) of the form;
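In matrix form, this metric is simply diagonal:

```latex
g_{\mu\nu} =
\begin{pmatrix}
1 & 0 \\
0 & \sin^2\vartheta
\end{pmatrix}
```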

To calculate the motion, we need some equations of motion :) and for those, we use Christoffel symbols, which tell us the equations of motion are given by
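This is the standard geodesic equation which, in this notation, reads

```latex
\frac{dv^\alpha}{dt} + \Gamma^\alpha_{\;\mu\nu} \, v^\mu v^\nu = 0
```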


where we are using the Einstein summation convention, and the Greek characters are just the coordinates that we are using. The v-terms are velocities in our coordinate directions. If we plug all the terms in, calculating the Christoffel symbols, the equations of motion become;
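For the unit sphere, the only non-vanishing Christoffel symbols are Γ^ϑ_φφ = -sin ϑ cos ϑ and Γ^φ_ϑφ = Γ^φ_φϑ = cot ϑ, which give

```latex
\frac{dv^\vartheta}{dt} = \sin\vartheta \cos\vartheta \, (v^\varphi)^2,
\qquad
\frac{dv^\varphi}{dt} = -2\cot\vartheta \; v^\vartheta v^\varphi
```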
Excellent. We can immediately see that if we plonk a particle down at rest (so its velocity terms are zero), these equations are zero and the particle stays put. If the velocity is not zero, the particle will move over the sphere, but these equations are generally not zero, so the components of the velocity are changing. What does this mean for the path of the particle? It's not hard to show that the particle moves with a velocity of constant magnitude and covers a great circle path over the sphere, but here I will just show this graphically. Here's the same experiment with the magnitude of the interacting force set to zero.

As we expect, two great circles. But what about the components of the velocities? Let's just take an example path;

Here, the blue is the ϑ velocity, and the green is the φ, while the red is the magnitude of the velocity, which is constant. The important thing to remember is that we are not sitting on a flat, Euclidean surface with a cartesian coordinate system, and so the normalization of the velocity is not simply obtained using Pythagoras, but by
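In the notation above, the normalization comes from contracting the velocity with the metric:

```latex
|v|^2 = g_{\mu\nu} \, v^\mu v^\nu = (v^\vartheta)^2 + \sin^2\vartheta \, (v^\varphi)^2
```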


So, that all works. Particles happily travel through the spherical surface on effectively free-fall paths, covering great circles. But now we need to add the interaction between the particles, and for that we need to calculate a few things. Firstly, there is the distance between the particles in the surface of the sphere! That's relatively straightforward. Then we need to work out the vector components of the force, which is somewhat harder. Once we have these, we then use the modified form of the geodesic equation, which includes the influence of forces, that is
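In the same notation as before, this modified geodesic equation reads

```latex
\frac{dv^\alpha}{dt} + \Gamma^\alpha_{\;\mu\nu} \, v^\mu v^\nu = a^\alpha
```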

where the a terms are components of the acceleration. But I'll leave that for next time.

Sunday, 3 July 2011

A Dynamical 2-d Universe

One last post for today, as I have conquered movie making in Matlab and HandBrake (for small values of conquered) to make a movie of particle motion in my 2-d universe.

I have softened the interaction so that it is now 1/r (which seems more sensible as we are one dimension down here). Here's the path (again, with the blue dot being the primary mass).

And for your viewing pleasure, here's a movie of the orbit
video

Two-Dimensional Universe

What's this?

If you said "That's a small mass orbiting a large mass in a 2-d spherical universe", then you're correct.

I am going to be teaching general relativity in a couple of weeks time, and every time I do I start thinking about geodesics, not just through 4-d space-time, but also in standard 3-d geometry. One of the problems (IMHO) with current physics degrees is that we don't really touch on curvilinear coordinates and tensors until the final year, and (especially when it comes to general relativity) this all comes as a bit of a shock.

However, we can cast much of (all of?) physics in generalized coordinates, and we should be doing this from the start, showing how things like classical mechanics can be done in Euclidean or polar coordinates (or whatever), with the key thing being that the physical predictions come out the same.

I also think this will help students understand things like conserved quantities a bit more, and realize that there is nothing particularly magical about momentum, and that momentum and angular momentum are just conserved "things" in different coordinate systems.
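A concrete example of that last point: for a central potential in polar coordinates, the Lagrangian is

```latex
L = \tfrac{1}{2} m \left( \dot{r}^2 + r^2 \dot{\varphi}^2 \right) - V(r),
\qquad
p_\varphi = \frac{\partial L}{\partial \dot{\varphi}} = m r^2 \dot{\varphi}
```

and because φ does not appear in L, its conjugate momentum p_φ (the angular momentum) is conserved; it is just the conserved "thing" associated with that coordinate.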

Anyway, back to the above figure. I was thinking about geodesics on a 2-d spherical surface. It is quite simple to show that these are great circles. But how do you show some other things, like: if you are at a point A and want to travel to a point B, which direction should you set off in to travel along the great circle path? (This, of course, is what aeroplanes have to do every day.)

But this got me thinking about non-geodesic paths, paths which are pushed and pulled by forces. So, I plonked a mass at the pole (the blue dot), added a 1/r^2 force (where r is now the great circle distance) and integrated the path. The key point is that the force is a vector, and so you have to know which direction it points on the sphere. For the mass on the pole, it's relatively straightforward. For a more general location, it's a bit harder, but doable with vectors and coordinate transformations (and without having to rotate coordinates back and forth). The goal is to have several particles interacting with each other on the sphere.
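The great circle distance piece really is straightforward; here's a sketch in Python (the function name is mine), using the spherical-polar convention from these posts, with ϑ the polar angle:

```python
import math

def great_circle_distance(theta1, phi1, theta2, phi2):
    """Great circle distance between two points on a unit sphere.

    theta is the polar angle (measured from the pole), phi the azimuthal angle.
    """
    # Cosine of the angle between the two radial unit vectors
    cos_d = (math.cos(theta1) * math.cos(theta2)
             + math.sin(theta1) * math.sin(theta2) * math.cos(phi1 - phi2))
    # Clamp against floating point rounding before taking acos
    return math.acos(max(-1.0, min(1.0, cos_d)))

# Pole to equator is a quarter of a great circle: pi/2
d = great_circle_distance(0.0, 0.0, math.pi / 2.0, 1.0)
```

Working out the direction of the force at the second particle is the harder step, since the force vector has to be expressed in the local ϑ and φ directions there.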

Why do it? Well, I don't think this 2-d universe has any practical uses; I'm not advocating that this is in anyway a real system. But you do get to use some very key concepts as outlined above, and it is an interesting problem. For me, learning complex techniques is very much helped by trying out toy models first.

I'll discuss how I worked out the orbits (they're numerically integrated, but I'll demonstrate how to set up the equations). Just to prove I can put the mass at any location, here's another figure.