My journey into GPGPUs

The semester has begun here, Down Under, and that means two things. Firstly, I am teaching General Relativity to our Honours class from Monday onwards (this is my favourite course, and I'll blog about it a little more, as I have a particular view on the teaching of this subject). Secondly, I have become a student again.

Why? Well, because of these:
Those that have braved opening up a computer may recognise this as a GPU, or Graphics Processing Unit, and it's the engine that makes high-level graphics possible, especially for computer gaming. The explosion in their development came about because of this:
Not the flash and manic grin, but the hair (and my understanding is that it is long, flowing female hair that is the goal, which may tell us more about those who write computer games!).

The result is that GPUs have become computationally very powerful, but their architecture is different from that of a CPU. Basically, GPUs are massively parallel processors, built from many quite simple computation engines. This means that if you have a simple calculation that you want to perform many times, a CPU has to step through each calculation in turn, whereas the GPU can do them all at once.
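
To make that concrete, here is a minimal sketch (my own, not from any course material) of the same element-wise calculation written both ways; the kernel name and launch configuration are purely illustrative.

```cuda
// Squaring N numbers, CPU-style: one loop, one element at a time.
void square_cpu(const float *in, float *out, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * in[i];
}

// The same job as a CUDA kernel: every element gets its own lightweight
// thread, and the GPU runs enormous numbers of them concurrently.
__global__ void square_gpu(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)                                      // guard the final block
        out[i] = in[i] * in[i];
}

// Launched with enough 256-thread blocks to cover the array:
//   square_gpu<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
```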

This is precisely what we want to do in many astronomical (and more generally scientific) applications. As an example, to calculate the gravitational force on an object, you need to add up the forces due to all the other objects. Typically, you do this one at a time, which gets quite slow for many (i.e. billions of) objects, and so things would go much faster if we could do these summations all at once.
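
As a sketch of how that summation maps onto a GPU (again my own illustration, with G set to 1 and a Plummer-style softening to avoid divisions by zero), each thread can take one particle and loop over all the others:

```cuda
// Direct O(N^2) gravitational summation: thread i accumulates the
// acceleration on particle i from every particle j. The j == i term
// contributes nothing because dx = dy = dz = 0.
__global__ void gravity_direct(const float4 *pos,   // x, y, z, mass
                               float3 *acc, int n, float eps2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float3 a  = make_float3(0.0f, 0.0f, 0.0f);

    for (int j = 0; j < n; ++j) {                    // sum over all bodies
        float4 pj = pos[j];
        float dx = pj.x - pi.x;
        float dy = pj.y - pi.y;
        float dz = pj.z - pi.z;
        float r2 = dx * dx + dy * dy + dz * dz + eps2;
        float inv_r3 = rsqrtf(r2 * r2 * r2);         // 1 / r^3, with G = 1
        a.x += pj.w * dx * inv_r3;                   // pj.w holds the mass
        a.y += pj.w * dy * inv_r3;
        a.z += pj.w * dz * inv_r3;
    }
    acc[i] = a;
}
```

With one thread per particle, all of those per-particle sums proceed in parallel rather than one after another.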

There is a problem, however. The makers (e.g. NVIDIA and AMD) keep the details of the architecture close to their chests. And they have, in the past, not been as rigorous as CPU makers in ensuring that floating point arithmetic works as it should; if you are simulating hair, then 2+2=5 now and again is not such a problem, but it can render the output of a scientific simulation useless (would you fly on a plane whose wings had been tested on a machine that sometimes got floating point arithmetic wrong?).
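
The classic symptom is rounding error piling up in single precision, which is all the early GPUs offered. A tiny host-side illustration (ordinary C, nothing GPU-specific, and it compiles unchanged under nvcc):

```cuda
#include <stdio.h>

int main(void)
{
    float  sum_f = 0.0f;
    double sum_d = 0.0;

    // Add 0.1 ten million times. Each single-precision addition is rounded,
    // and the errors accumulate, so sum_f drifts well away from the true
    // value of 1,000,000; sum_d stays within a small fraction of a unit.
    for (int i = 0; i < 10000000; ++i) {
        sum_f += 0.1f;
        sum_d += 0.1;
    }
    printf("float : %f\n", sum_f);
    printf("double: %f\n", sum_d);
    return 0;
}
```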

But this is changing: more robust arithmetic is now the name of the game, along with computing frameworks, specifically CUDA and OpenCL, that allow us to develop applications on GPGPUs (the first GP now stands for General Purpose). There is some urgency in getting to grips with this, as we are starting to build GPU-based supercomputers (in Australia, we will soon have g-Star to undertake GPU-based supercomputing for theoretical astrophysics). So I have enrolled in a CUDA programming course in the School of IT here.

There is, however, a problem. The problem (and I know this is going to hurt) is that astronomers are generally not very good at coding. Some are, but the majority aren't. We rely on the fact that we don't have to worry about complicated stuff because things like memory management, order of processing and so on are hidden in high-level codes, typically C and Fortran, although Python seems to be getting a foothold. We are bad enough for me to chuckle at the fact that this book
has an astronomy picture on the front; is it an example of a field that is renowned for needing this book, or are we perhaps better than the rest (which is a scary thought)?

Anyway, back to GPGPUs. They are difficult to program. I think it was best put by my lecturer: they are difficult to program because you are
"programming bare metal"
You HAVE to worry about memory, and about what's computing what and when, and (this will shock most astronomers) you can't debug your code by sticking write statements everywhere (that will cause your code to fall over in a heap).
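
To give a flavour of what "bare metal" means in practice, here is a hedged sketch (my own; the names and sizes are illustrative) of the bookkeeping even a trivial CUDA program needs: you allocate device memory yourself, copy data across yourself, and check every call, because a silent failure just hands you garbage.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Bail out with a message if any CUDA runtime call fails.
#define CHECK(call)                                                   \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error at %s:%d: %s\n",              \
                    __FILE__, __LINE__, cudaGetErrorString(err));     \
            exit(1);                                                  \
        }                                                             \
    } while (0)

__global__ void scale(float *x, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_x = (float *)malloc(bytes);              // host (CPU) memory
    for (int i = 0; i < n; ++i) h_x[i] = 1.0f;

    float *d_x = NULL;                                // device (GPU) memory
    CHECK(cudaMalloc((void **)&d_x, bytes));
    CHECK(cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice));

    scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);
    CHECK(cudaGetLastError());                        // did the launch fail?

    CHECK(cudaMemcpy(h_x, d_x, bytes, cudaMemcpyDeviceToHost));
    printf("h_x[0] = %f (expecting 2.0)\n", h_x[0]);

    CHECK(cudaFree(d_x));
    free(h_x);
    return 0;
}
```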

Anyway, I have had my first lecture, which so far is fine, but I also got my first homework: essentially playing with memory management in C. Of course, the young IT students confidently read over the homework sheet while I replayed the opening script of Four Weddings and a Funeral in my mind; it's been a little while since I really programmed in C.

I'll keep the blog updated on my journey into GPGPUs.

Comments

  1. It is not just in the scientific world that GPUs are becoming more common.

    As to hair: look up how much CPU time was used to calculate King Kong's hair in the Peter Jackson film. (So, it's not just virtual damsels that are occupying these geeky minds.)
