Why? Well, because of these:
The result is that GPUs have become computationally very powerful, but their architecture is quite different from a CPU's. Basically, GPUs are massively parallel processors: many quite simple computation engines working side by side. This means that if you have a simple calculation you want to perform many times, a CPU might have to step through each calculation one by one, whereas the GPU can do them all at once.
This is precisely what we want to do in many astronomical (and generally scientific) applications. As an example, to calculate the gravitational force on an object, you need to add up the forces due to all the other objects. Typically, you do this one at a time, which can get quite slow for many (i.e. billions of) objects, and so things would go much faster if we could do the whole summation at once.
There is a problem, however. The makers (e.g. NVIDIA and AMD) keep the details of the architecture close to their chests. And they have, in the past, not been as rigorous as CPU makers at ensuring floating point arithmetic works as it should; if you are simulating hair, then 2+2=5 is not such a problem now and again, but it can render the output of a scientific simulation useless (would you fly on a plane whose wings had been tested on a machine that sometimes got floating point arithmetic wrong?).
But this is changing, and more robust arithmetic is now the name of the game, along with computing libraries, specifically CUDA and OpenCL, that allow us to develop applications on GPGPUs (the first GP now standing for General Purpose). There is some urgency in getting to grips with this, as we are starting to build GPU-based supercomputers (in Australia, we will soon have g-Star to undertake GPU-based supercomputing for theoretical astrophysics). So, I have enrolled in a programming course for CUDA in the School of IT here.
There is, however, a problem. The problem (and I know this is going to hurt) is that astronomers are generally not very good at coding. Some are, but the majority aren't. We rely on the fact that we don't have to worry about complicated stuff, because things like memory management and order of processing are hidden in high-level languages, typically C and Fortran, although Python seems to be getting a foothold. We are bad enough for me to chuckle at the fact that this book
Anyway, back to GPGPUs. They are difficult to program. I think it was best put by my lecturer: they are difficult to program because you are
"programming bare metal". You HAVE to worry about memory, and what's computing what and when, and, and this will shock most astronomers, you can't debug your code by sticking write statements everywhere (this will cause your code to fall over in a heap).
Anyway, I have had my first lecture, which so far is fine, but I also got my first homework, essentially playing with memory management in C. Of course, the young IT students confidently read over the homework sheet as I replayed the opening script of Four Weddings and a Funeral in my mind; it's been a little while since I really programmed in C.
I'll keep the blog updated on my journey into GPGPUs.