4. GPGPU 101 – APIs

This is the next part in the series of GPGPU 101 posts I started some time ago. If you haven’t checked part 0, part 1, part 2 or part 3, please do so.

Let’s look a bit deeper into the APIs used for GPGPU programming. Basically, every GPGPU API we will discuss is a C-based language extension, in which you write the kernels that run on the GPU itself, plus an API to call those kernels, pass them arguments and receive results. Some APIs (like OpenCL) have bindings for scripting languages, like PyOpenCL (Python), JOCL (Java) and even for the browser.

Historically, the first popular API was from nVidia and it is called CUDA. CUDA is proprietary and runs only on nVidia hardware. Being proprietary allowed it to evolve faster than committee-driven APIs, since decisions about changes are much easier to make when there is no committee and you have all the control.

OpenCL showed up a few years after CUDA in an attempt to prevent the chaos that would ensue if everybody (read: Intel, AMD, Imagination, ARM) did their own API (which, in the end, is what happened), and to provide a standard that everybody (making a CPU, GPU or whatever) could implement for their hardware. nVidia were not really happy about OpenCL, since programs written in it could possibly run on Intel and AMD GPUs or CPUs, but they supported it nevertheless.

The history of OpenCL so far is dark and full of terror – back then AMD were using a VLIW architecture, which was everything but ready for GPGPU. VLIW can’t handle data dependencies well. In a game shader you may have only a few of those, but in a general purpose app their number easily goes up, since general purpose programs usually contain more complicated logic. GPUs from Intel, on the other hand, were small and not ready for that either. nVidia had the Tesla architecture, but they lacked the will to make an OpenCL implementation that runs better than CUDA. There was a lack of tools, too – both for debugging and profiling. Compilers were buggy, took a lot of time to compile and often crashed. Most of the runtime bugs in the code ended in a BSOD. Those compilers were often done in a quick ‘n’ dirty manner by modifying some C frontend and appending some shader-like backend. As you can tell, this didn’t end well. There were OpenCL implementations for CPUs as well, both from AMD and Intel, and both of them were really good (and ran on all kinds of x86 processors, of course). For quite some time these CPU implementations allowed faster compiling, easier debugging and testing – they were the glass of ice water for the GPU developer stuck in the hell named OpenCL.

But it evolved. AMD moved from VLIW to SIMD with the GCN architecture, which allowed them to do GPGPU very well, since SIMD handles data dependencies better and since there are already a lot of general purpose compilers that can be reused for such an approach (like llvm). Intel started making bigger/more powerful GPUs, too. nVidia improved their OpenCL implementation over time (as of 2015 it still runs slower than CUDA, but despite that, it is one of the best OpenCL implementations today – faster than most and quite stable). There are tools both for profiling and debugging – not as mature as those for C/C++, but they are here. A BSOD is something that almost never happens anymore when there is a bug in a GPGPU app.
CUDA evolved along with that. It added support for recursion and virtual functions, C++ and even C++11. It has the best tools for debugging and profiling these days and gives deeper control. At the same time, using CUDA is much easier, since the API is simpler and does a lot of the work for you. Also, it runs really fast. nVidia has been making compilers and drivers for SIMD-like architectures for much longer than its competitors, so it should not be much of a surprise that they have one of the best toolchains in the industry.
So, we have OpenCL, which is a standard and runs everywhere, and CUDA, which was the first GPGPU API. Two is not so bad after all, right? Wrong.
First, Google declined to support OpenCL in Android. I don’t know if that has something to do with the fact that OpenCL was initially pushed by Apple, but here is a statement they made.

Android has no plans to support OpenCL. There is really nothing more to discuss.

They actually had support for OpenCL in Android 4.2, but decided to permanently drop it without any further explanation. They made their own GPGPU language named RenderScript, which, like OpenCL, can run both on the CPU and the GPU and automatically decides which one to use. So far it always chooses the CPU, so I have no idea if RenderScript is worth anything at the moment (or ever will be).

Microsoft extended their DirectX API with support for GPGPU as well, named DirectCompute. It runs only on Windows and, just like RenderScript, is quite similar to OpenCL and CUDA.

After starting the initiative for OpenCL, Apple also created another (!) API with syntax similar to OpenCL, called Metal. Metal can be used as a low-level replacement for both OpenGL and OpenCL (meaning it has two parts – graphics and compute, just like DirectX) and can run both on iOS and OS X, whereas OpenCL runs only on OS X (no iOS support). And speaking of OpenCL and Apple – Apple is well known for having one of the worst OpenCL implementations ever.

It is worth mentioning that some of these APIs require the kernel to be precompiled (like CUDA before version 7) – which is called offline compiling – while others require the kernel to be shipped as source with the app and built on the fly (like OpenCL) – which is called, of course, online compiling.

So, CUDA gives you speed, OpenCL better portability, RenderScript gives you Android and Metal gives you iOS. This sucks, and I thought I could make it even worse, but more on that later.

We will use OpenCL for our first in-depth look at a hello-gpgpu app, since it can run on most systems without the need for a special OS or hardware.
OpenCL works as follows – from the host (read: the CPU) you query for platforms (read: vendors, like nVidia, Intel, AMD), then you query each platform for devices (you can have multiple OpenCL devices – let’s say an Intel CPU, plus multiple nVidia and AMD GPUs). After that you create a context object (in our examples, each context is bound to exactly one device). Lastly, you have queues, in which you add commands to be executed on the device (this will get clearer in the code). The source for doing all that is ugly and old-school, so most people prefer to hide it in prettier and friendlier wrappers.
Also, from the host, you have to allocate, copy and deallocate device memory, call device kernels and wait for their results.

Don’t get scared by that messy-looking code, just go through it. You will see why later. The source is a modified version of the one from here.

We have the source of the kernel in (1). In (2) we query for devices and set up a context and queue for the first device of the first platform. After that, in (3), we compile the source of the program using the OpenCL API. In (4) we allocate memory on the device and transfer the input host array into it. In (5) we call the kernel itself, and in (6) we wait for its completion and transfer the result from the device back to the host.
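In case the listing does not render here, a condensed sketch of those six steps looks roughly like this (error checking omitted; the kernel and variable names are illustrative, not the original source):

```c
#include <CL/cl.h>

/* (1) the kernel source, shipped as a string (online compiling) */
static const char* src =
    "__kernel void add_one(__global const float* a, __global float* b) {"
    "    int i = get_global_id(0);"
    "    b[i] = a[i] + 1.0f;"
    "}";

void run(float* host, size_t n) {
    /* (2) first device of the first platform, plus a context and a queue for it */
    cl_platform_id platform;  clGetPlatformIDs(1, &platform, NULL);
    cl_device_id device;      clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* (3) build the program from source with the OpenCL API */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "add_one", NULL);

    /* (4) allocate device memory and transfer the input host array into it */
    cl_mem a = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                              n * sizeof(float), host, NULL);
    cl_mem b = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), NULL, NULL);

    /* (5) set the arguments and launch the kernel over n threads */
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &a);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &b);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

    /* (6) wait for completion (blocking read) and copy the result back */
    clEnqueueReadBuffer(queue, b, CL_TRUE, 0, n * sizeof(float), host, 0, NULL, NULL);
}
```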

Some time ago I decided to wrap all that in a pretty-looking abstraction layer over OpenCL. It quickly became clear that that layer could sit on top of OpenCL and CUDA at the same time – I could just wrap the API calls of both and switch between them with a single compiler define. The language differences between OpenCL and CUDA kernels were easily hidden with some macro magic. It is true, however, that only the common part of the two APIs could be used. But using that wrapper, I could have the portability of OpenCL with the speed of CUDA.

However, it turned out even better (or worse, I’m still not sure). It seems that having such a wrapper allows you to run the kernels natively as well – you just have to compile them as regular C++ code and implement all the methods of the abstraction layer. All that needs to be done is to implement the GPGPU extensions to C, which are not that many.

Unpretentiously, I call that GPAPI (General Purpose API) – you write your source once and run it in OpenCL (so you get portability), CUDA (so you get speed) or native C++ (so you get debugging). Here is how the example from above looks in GPAPI.

kernel.cl

main.cpp

As a bonus, the sample code with GPAPI will run the program on all the devices in the system (in my MacBook Pro there are 3 – an Intel CPU, an Intel GPU and an nVidia GPU). If we wanted to make the plain OpenCL code run on all those devices, it would take at least a hundred more lines.

The point here is that the APIs and languages for GPGPU are practically the same. GPAPI could easily be ported to support RenderScript, Metal or DirectCompute. But this should not be much of a surprise – after all, the hardware behind them is the same: SIMD machines controlled by a host CPU. In the next parts and code samples I will use the macro magic words from GPAPI – they make sense (KERNEL, GLOBAL, SHARED, etc.), and they can easily be translated to whatever API you like using that reference file.
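To make the macro magic less abstract, here is a sketch of how such a header might look. This is an illustration of the trick, not the actual GPAPI source – the macro names follow the ones above, everything else is invented for the example:

```cpp
// Illustrative sketch of the macro trick (not the actual GPAPI source).
#if defined(__CUDACC__)            // building with nvcc -> CUDA keywords
#  define KERNEL __global__
#  define GLOBAL                   // CUDA pointers need no address space keyword
#  define SHARED __shared__
#  define GLOBAL_ID() (blockIdx.x * blockDim.x + threadIdx.x)
#elif defined(__OPENCL_VERSION__)  // building as OpenCL C -> OpenCL keywords
#  define KERNEL __kernel
#  define GLOBAL __global
#  define SHARED __local
#  define GLOBAL_ID() get_global_id(0)
#else                              // plain C++ -> "Native" mode, keywords vanish
#  define KERNEL
#  define GLOBAL
#  define SHARED
static unsigned g_thread_id = 0;   // in Native mode the host loop plays scheduler
#  define GLOBAL_ID() (g_thread_id)
#endif

// The same kernel source now compiles in all three modes:
KERNEL void add_one(GLOBAL const float* in, GLOBAL float* out) {
    unsigned i = GLOBAL_ID();
    out[i] = in[i] + 1.0f;
}
```

In Native mode a "launch" is then just a loop that sets `g_thread_id` and calls the function once per thread.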

The native version of GPAPI is designed for debugging purposes (it could be used to run CUDA on the CPU too, but I doubt anybody cares much about that). It had to support everything that is common between CUDA and OpenCL – or at least everything I have needed so far. There is a curious approach to implementing shared memory – as we mentioned in the previous part, if the max thread block size is 1, private memory matches shared memory (because it is shared along 1 thread). And that is how it is implemented – the max thread block size in Native mode in GPAPI is 1 (which is perfectly valid according to the OpenCL standard). Private thread memory is implemented using C++ automatic (stack) variables, and global memory is implemented using heap memory.

There is of course stuff in CUDA that is worth using and is missing from OpenCL (and vice versa). We will see such cases in the next parts, and we will make each and every one of them compilable in both APIs.

Stay tuned.

3. GPGPU 101 – GPU memory model

This is the next part in the series of GPGPU 101 posts I started some time ago. If you haven’t checked part 0, part 1 or part 2, please do so. In the last part we discussed the modern GPU thread model. In this one we will take a look at the memory model of the GPUs.

So, as we’ve seen so far, the GPU is basically a huge SIMD machine. Because we have to have locality for efficiency, the SIMD units are grouped. We called those groups SIMD groups. AMD calls them compute units (CU), Intel execution units (EU) and nVidia streaming multiprocessors (SMX). They do have differences (that’s why the vendors gave them different names), but the idea is the same – put memory and ALUs near each other to spare the power needed for moving data.

C++, Java, Python and all of the popular modern languages have no notion of different memory spaces. For a C++ developer, a pointer is basically a pointer, and it does not matter when or how that pointer is accessed – you can always dereference it, write to it or read from it, and you never know explicitly what the price of that will be.

Regarding memory types, there are different buzzwords in CUDA and OpenCL, and syntax differences between the two. We will stick to the OpenCL notation for the purposes of this part (but you can think of the source shown here as GPU pseudo code). The table below shows those differences.

Keywords regarding memory in OpenCL and CUDA

The types of memory that we care about on the GPU are registers and local memory, shared memory, global memory and constant memory. You might think that having so many types of memory makes the programming complicated. This is somewhat true, but in the end programming with that many memory types is not much more difficult.

A simplified view of SIMD Group / SMX / EU / CU.

The global memory is the slowest and largest memory (reaching tens of gigabytes) on the GPU. All threads (SIMD lanes) have the same view of it. Unfortunately, it can’t be used to implement synchronization, since there is no way to prevent race conditions among the SIMD lanes (threads). This is one of the fundamental design decisions in GPUs, so don’t expect that to change anytime soon. You can think of it more or less as regular CPU memory. We should note that global memory can’t be allocated on the device itself**. A pointer to such memory has to be passed to the kernel as an argument. In OpenCL such a pointer has to be marked with the __global prefix. Just like on x86, all reads from global memory are cached with L1 and L2 caches, whereas the exact configuration of the caches is vendor specific. You can’t count on cache coherency.

In addition to the global memory, each and every GPU thread has access to a type of memory called “registers“. This memory is private, so each thread has its own view of it. It is physically very close to the ALUs and it is the fastest memory in the GPU. Generally, the register memory a GPU has is huge compared to the CPU’s. But we should consider the fact that we are running thousands of threads (SIMD lanes) at the same time, and those registers have to be divided among all SIMD lanes, so in the end each SIMD lane (thread) gets only a few of them. So, what happens when we have a big and complicated GPU program (kernel) that needs more registers per thread?

Well, the first thing that a GPU stack might do is to prepare fewer tasks for the chip, so those registers are split among fewer threads. This is the so-called “lowering the threads in flight“. But as we discussed in the previous chapter, the primary GPU way of hiding latency is multithreading, and when we reduce the number of threads in flight, the performance might suffer.
Another approach is to use one of the other, slower memory types as register memory. This is done too, and it is called “register spill“. Registers are spilled (stored) into the global memory, so each thread can have its own part of it. In CUDA, this part of the global memory is called “local*” memory (there is no special name for it in OpenCL). You do not have to manage those spills manually – you can just write code as if there were an infinite number of registers, and the compiler figures out how many tasks to send to the GPU (or whether it should reduce the number of threads in flight) and how many registers to spill. Some compilers offer command line options to control that manually.

Every stack variable a kernel uses is stored in registers, if possible. On some platforms this does not apply to stack arrays, so we will consider them stored in the slow “local” memory (meaning that stack arrays behave just like regular stack variables spilled into local memory). On other platforms, stack variables that don’t fit in the registers are actually allocated in true local memory, chunked off from the SIMD shared memory pool. Here is a sample kernel using only registers:
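A minimal sketch of such a kernel in OpenCL C (the names are illustrative):

```c
__kernel void axpb(__global const float* in, __global float* out,
                   float a, float b) {
    int i = get_global_id(0);   // kept in a register
    float x = in[i];            // plain stack variable -> register
    float y = a * x + b;        // a few more registers, no arrays, no spills
    out[i] = y;
}
```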

It is hard to calculate the exact register usage just by looking at the code, since the registers are recycled and reused (this reuse is possible when there are no data dependencies). Some compilers have options to print the register usage of every function, though.
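With nVidia’s nvcc, for example, you can both cap the registers per thread and print the per-kernel register usage (the flags are nvcc’s; the file name is illustrative):

```shell
# Limit every kernel to at most 32 registers per thread (forcing spills if needed)
nvcc -maxrregcount=32 -c kernel.cu

# Ask the PTX assembler to report register and shared memory usage per kernel
nvcc -Xptxas -v -c kernel.cu
```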

GPU memory types (keywords in CUDA)

The register usage is crucial for the performance of the GPU. Hiding latency with multithreading implies that a lot of threads should be ready for execution at any given time, and if the data they need is far away, this might hurt. This also introduces another phenomenon – you might end up in a situation in which you add source code that is never executed, yet it still slows down the program a lot! The reason is that the new source code might take up the registers you need and make them spill into local memory (or just reduce the threads in flight). There are several approaches to that, and we will discuss them in the next chapters, but it is clear that having a GPU kernel that runs fast is one thing, and having a huge GPU kernel that runs fast is another.

Now, on to shared memory. It has several purposes. All SIMD lanes (threads) inside a SIMD unit have the same view of it – meaning that all the threads we have grouped share that memory and can communicate through it. Being able to do so, they can cooperate and help each other. The shared memory is almost as fast as a register. Because of that it is often used as a manually managed cache – in most GPGPU apps the primary effort is finding what memory your threads might share and using that fact to pre-cache it. Let’s write a simple kernel that finds the difference between every two adjacent elements of an array. The straightforward solution would be:
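A sketch of that kernel in OpenCL C (boundary checks omitted for brevity):

```c
__kernel void diff(__global const float* in, __global float* out) {
    int i = get_global_id(0);
    out[i] = in[i + 1] - in[i];   // two reads from the slow global memory per thread
}
```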

Every thread does one subtraction. If you run that on an average GPU with input data large enough, it will outperform the CPU version of this code by a lot. But we can do better. If we think about it, from the point of view of the SIMD group the same memory is read multiple times from the slow global memory. We can reduce those reads using shared memory as follows.
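A sketch of the shared memory version (OpenCL C; each thread pre-loads one element, and the last thread in the group loads one extra):

```c
// GROUP_SIZE is assumed to be supplied at build time, e.g. with -D GROUP_SIZE=256
__kernel void diff_shared(__global const float* in, __global float* out) {
    __local float tile[GROUP_SIZE + 1];
    int gid = get_global_id(0);
    int lid = get_local_id(0);

    tile[lid] = in[gid];                  // each thread pre-loads one element
    if (lid == get_local_size(0) - 1)
        tile[lid + 1] = in[gid + 1];      // last thread also loads the extra element
    barrier(CLK_LOCAL_MEM_FENCE);         // wait until the whole tile is loaded

    out[gid] = tile[lid + 1] - tile[lid]; // reads now hit the fast shared memory
}
```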

The second version is a bit more complex, since we do the pre-load manually. But it can easily make the execution tens of times faster, since the reads from global memory are reduced. The amount of shared memory a kernel needs has to be known prior to the kernel launch, so it either has to be declared with a constant size inside the kernel, or explicitly specified with an API call when the kernel is launched from the host. This is needed because, just like registers, shared memory is a finite resource and can reduce the number of threads in flight – if a lot of it is needed by every thread block, you will be able to run fewer of them.

A fun fact: even if you calculate FLOP/s per line of code, the shared memory example is still several times faster.

According to the OpenCL standard, the shared memory should be at least 16KB. It is interesting that, according to the same standard, the minimum size a thread block / SIMD unit might have is 1 (one). If you have such an OpenCL implementation (with block size = 1), the shared memory will match the private (register, local) memory, since it will be shared along 1 thread. Doing shared memory optimizations on such systems may actually slow down the execution because of the extra work being done (in the same manner, if you run with a block size that is not big enough for your hardware, the shared memory implementation of the task above might take longer to execute).

Memory types size in nVidia GTX 980 (log scale)

Constant memory is a special read-only type of global memory, which on some devices can be cached better. Some vendors implement clever mechanisms with it too – for example, when a SIMD lane reads a constant variable, it might broadcast it to the other SIMD lanes in the SIMD unit (some vendors like nVidia implement that broadcast on half a SIMD unit), and if any of them needs it, it gets it for free. As you can see, if none of the other SIMD lanes needs that particular memory, constant memory can in fact be slower than global memory (because of the extra work done for the broadcasting and caching). The amount of constant memory is limited (it is usually in the order of tens of kilobytes and is vendor specific). Also, it should be noted that, in contrast to the common notion of constant memory, the constant memory on the GPU is constant only during the kernel execution, meaning that you can change it between kernel launches (with API calls).

There is another type of memory in some GPUs named “texture” memory. If you happen to see such, you can think of it as read-only global memory with fancy addressing capabilities (and, in some architectures, a better cache system). Texture memory is usually used to store 2D (or 3D) textures. When you do texture lookups, you often need to read not only the data stored at the [x][y] position, but also the data stored near it, like the elements at [x+1][y], [x][y+1], [x-1][y], etc. Because of that, the texture memory might be internally optimized for such reads. For example, instead of storing every row of the texture linearly in memory, one after another, the data can be laid out along a space-filling curve like the Hilbert curve. Such a curve assures that texture lookups read data fields that are close to each other. The good part is that you don’t have to manage that data ordering manually – it is handled by the driver and the GPU.

There is no way to do synchronization across all threads on the GPU, but most of the hardware today is capable of doing atomic operations in global memory. So source like this is perfectly valid:
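A sketch of such a kernel in OpenCL C (counting positive elements; the names are illustrative):

```c
__kernel void count_positive(__global const int* data, __global int* counter) {
    int i = get_global_id(0);
    if (data[i] > 0)
        atomic_add(counter, 1);   // safe, but contending threads get serialized
}
```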

If a lot of threads try to do an atomic operation on the same memory, they will be serialized and the performance will suffer (a lot). Here is an improved version of the source above – instead of doing atomic_add on global memory, we do it on a shared variable, and after that we just update the global variable with the accumulated result. Thus we reduce the number of threads trying to modify the same variable at the same time.
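The improved version might be sketched like this (OpenCL C; one atomic on global memory per work-group instead of one per thread):

```c
__kernel void count_positive_shared(__global const int* data,
                                    __global int* counter) {
    __local int partial;                       // one accumulator per work-group
    if (get_local_id(0) == 0)
        partial = 0;
    barrier(CLK_LOCAL_MEM_FENCE);

    if (data[get_global_id(0)] > 0)
        atomic_add(&partial, 1);               // contention only within the group
    barrier(CLK_LOCAL_MEM_FENCE);

    if (get_local_id(0) == 0)
        atomic_add(counter, partial);          // one global atomic per group
}
```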

And lastly – as a rule of thumb – mixing pointers from different memory spaces is a bad idea, so don’t do that. And by design, if you put a memory_barrier() call in a place not all threads will reach (for example inside a branch), your system will most likely hang – it will wait forever for all threads to arrive at that barrier, and they never will, since some went through the true branch of the if and some through the false one, so the GPU cannot continue executing all threads.

In the next part, we will continue with more real code.

* Again, please note that there is a significant difference between local memory in CUDA and OpenCL. We use the term ‘local’ here as in CUDA (meaning registers spilled to global memory). OpenCL has no word for that (it just calls them spilled registers), and it uses the word ‘local’ for ‘shared’. Meaning that in OpenCL the memory types are named private (or registers), local (or shared) and global (this one is the same). It is confusing. You could think it is nVidia’s fault for not using the same keywords as the OpenCL standard, but then again, CUDA was born before OpenCL.

**Actually, you can preallocate global memory before the kernel launch and use a memory manager on the device itself, thus having malloc/delete in device code, but this is not a good idea, since GPUs are not designed to be good memory managers.

2. GPGPU 101 – GPU thread model

This is the next part in the series of GPGPU 101 posts I started some time ago. If you haven’t checked part 0 or part 1, please do so. In part 1 we talked about how the x86 processor approaches the problems of efficiency (locality) and performance (parallelism). Now we will discuss how modern GPUs approach those. In this part we will talk more about parallelism, and in the next we will focus on locality.

As the name GPU (Graphics Processing Unit) suggests, it was designed for processing graphics. And graphics tends to be an embarrassingly parallel problem – you have to calculate the color of tens of millions of pixels, and (most of the time) the color of every pixel is independent of the other pixels. But graphics is not the sole embarrassingly parallel problem, and at one point people started using GPUs for stuff other than graphics. At first it was done in a hacky fashion, since there was no API for general purpose programming; after that CUDA and OpenCL were born. As we mentioned, there were multiple GPU designs and approaches, but today more or less all desktop GPU vendors (AMD, Intel and nVidia) have converged to similar architectures. They do have differences which should be taken into consideration when optimizing a specific program, but the general approach to writing programs for those GPUs is the same. Here are the overviews of the GPU approaches from Intel, nVidia and AMD.

CPU architecture – optimized for single thread performance

The CPU is designed to perform a single task as fast as possible (it is “optimized for low latency“). In order to do so, it uses complex control logic (superscalar pipeline, branch predictor, register renaming) and complex (and big) data and instruction caches. The GPUs, on the other hand, have many simple cores that lack all of the complex stuff in the CPUs (they are “optimized for high throughput“). So, what does this mean for us?

In a nutshell, the GPU is a huge SIMD machine, so it is well suited for data-parallel tasks. The width of the SIMD unit depends on the vendor (4 in Intel GPUs, 32 in nVidia GPUs, 64 in AMD ones – wider units do not automatically mean better performance, so keep that in mind). Some of the vendors call each lane of the SIMD unit a ‘core’. Usually the vendors combine several SIMD units into groups (4 SIMD units in nVidia form a “Streaming Multiprocessor”, 2 SIMD units in Intel form an “Execution Unit”, and in AMD 4 SIMD units form a “GCN Compute Unit (CU)”). We will call that a “SIMD Group“. The idea of those groups is that the SIMD units inside them share some kind of hardware (most commonly a cache unit, but also the instruction fetch/decode system and others). This also provides some locality, which is good.

GPU architecture – optimized for throughput

In the basic diagram above, every little green square is a lane of a SIMD unit, which can perform a floating point (or other) operation over some data. All green boxes (or SIMD lanes, or “cores”) in one SIMD unit (surrounded by the white box) execute the same instruction. We’ve grouped 4 SIMD units in a SIMD group, and we have 2 such groups here. So instead of a handful of complex cores, we have many simple ones. By the way, if you ever wondered how the vendors calculate the FLOP/s of their processors – take the number of SIMD lanes (cores), multiply that by 2 (since every core can execute up to 2 floating point operations per cycle with something called multiply-add, or ‘mad‘) and multiply that by the frequency of the core. It is not hard to see that this theoretical FLOP/s peak is really, really theoretical – there is no way to keep all units busy all the time and make them do mads all the time in any meaningful app.
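As a back-of-the-envelope check, here is that formula as code (a sketch; the GTX 980 numbers in the note below are the commonly quoted ones):

```cpp
// Theoretical peak FLOP/s = SIMD lanes * 2 (a multiply-add is 2 floating
// point operations) * clock frequency.
double peak_flops(double simd_lanes, double clock_hz) {
    const double flops_per_cycle = 2.0;
    return simd_lanes * flops_per_cycle * clock_hz;
}
```

Plugging in a GTX 980 (2048 lanes at a ~1.126 GHz base clock) gives about 4.6 TFLOP/s, which is in the ballpark of the officially advertised number.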

But, as we discussed in the previous chapter, this complex hardware (branch predictor, register renaming etc.) is very important for the performance, since it keeps the pipelines busy. So if we are lacking that in the GPUs, isn’t the performance going to suffer a lot?

Well, obviously not. The trick that the GPU does is that, instead of trying to minimize the latency, it tries to hide it with parallelism. It works as follows – every SIMD unit has a number of registers it can use, and the tasks we give to the GPU number in the hundreds of thousands. The GPU prepares some of those tasks in the hardware, so at any given moment there are multiple tasks ready to be executed (they share those registers). So, let’s say that we execute an instruction on a SIMD unit, and this instruction needs some memory from the DRAM, also called global memory (in fact it is a bit different from CPU DRAM and is named GDDRAM, but for our needs it is DRAM enough). On the CPU, if this data is not in the cache, we will have to wait and the CPU will stall. If you have another thread ready to run, the OS will possibly switch to it (but this switch is expensive, since all the resources used by the stalled thread have to be swapped with the new one’s). The GPU, on the other hand, can simply choose another task to execute, since the data needed for it is already in the registers and waiting. So in some sense we can say that the thread switch on the GPU is implemented in hardware and is basically free. The key is that you have to give the GPU enough tasks to achieve a high level of parallelism, which is then used to hide the latency.

But everything comes at a price, right? The price here is the registers. If your threads need a lot of data, at some point the registers will not be able to fit it all. At that point we have a so-called “register spill” to the global memory. Compared to the registers, reads and writes to global memory are ridiculously slow, so it is best to avoid that situation and keep the memory every thread needs low.

Now, what about synchronization? If you have hundreds of thousands of threads, synchronizing them will probably be hell (after all, synchronizing even 2 threads on the CPU is pretty hard). The answer is that the GPU is suited for tasks that don’t need synchronization at all – like graphics (real-time or photorealistic), deep learning, simulations, etc. Synchronization on a global level is impossible; the only synchronization possible is at the SIMD group level. There are, however, atomics available on the global level. Tasks that require global synchronization are usually split into multiple GPU programs, thus offloading the synchronization job to the CPU.

And lastly, some programming. The thread model of the general purpose programming languages for GPUs follows, more or less, the thread model of the hardware. For example, in CUDA threads are grouped in “thread blocks”. Threads in a block can talk to each other; even more, they run on the same SIMD unit. How many threads are in a block is programmable, so if you have more than 32 (the size of the SIMD unit in nVidia), the thread block will be executed in multiple runs. And you can run multiple such thread blocks (grouped in a so-called “grid”). Both CUDA and OpenCL offer multi-dimensional entities like groups of thread blocks and grids for convenience, but I’ve always preferred to use 1D spaces. Enough talking, let’s do some coding, since it will make everything much easier to understand.

Here is a program that sums up two arrays in C++.
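The listing is a minimal reconstruction of such a function (illustrative, not the original source):

```cpp
#include <cstddef>

// Plain C++: a single thread walks both arrays with a loop
void sum_arrays(const float* a, const float* b, float* result, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i)
        result[i] = a[i] + b[i];
}
```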

And here it is in CUDA. Instead of having a for loop, every thread (or core) will sum one of the array fields.
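A sketch of the CUDA version (illustrative names; the launch line in the comment shows one common way to pick the grid size):

```cuda
// CUDA: the loop is gone – every thread sums one pair of elements
__global__ void sum_arrays(const float* a, const float* b, float* result, int count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x; // unique per-thread index
    if (i < count)                                 // guard: we may launch extra threads
        result[i] = a[i] + b[i];
}

// launched from the host roughly like:
// sum_arrays<<<(count + 255) / 256, 256>>>(dev_a, dev_b, dev_result, count);
```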

Finally, some common keywords in GPGPU apps. The CPU is usually called the “host“. The GPU is called the “device“. The host and the device have physically different memories, and we have to do explicit copies from one to the other if we want to move stuff (more on that in the next part). The functions written for the GPU that can be called from the host are called “kernels” (I have no idea why). Kernels can only be void functions (this makes sense, since they are executed from many threads, and if every one of them returned a result it could get messy). Functions that can only be called from the GPU are called “device functions“.

Both OpenCL and CUDA are C-based languages, and both of them offer extensions to get a unique per-thread id. CUDA has C++ support too (though you don’t want to use virtual functions and other advanced stuff – more on that in the next parts). malloc and new can’t be used on the device; you have to allocate memory on the device before you start the kernel. This happens with an API call from the host (in fact, there is a way to use them in CUDA, but it is something we will avoid and will talk about in the next parts).

How do we synchronize the work between the host and the device? What about having multiple devices? How does a SIMD unit work when there is a branch in the source and some threads go into the ‘true’ path, while others go into the ‘false’ path? We will look at all that (and more) in the next part.

CPU thermal profile during runtime

P.S. As you may have noticed, the CPU has these very big cores located in one part of the chip, and caches and memory in the other. In the GPUs, on the other hand, you have a better mixture of cores and memory (and the cores use much less power, since they don’t have any advanced features and run at around 1GHz). And we’ve talked about how cooling/power is a major problem for current (and next) processors. But it is not only ‘heat’, it is heat per square mm. So in the CPUs you have parts of the chip that get way hotter than the CPU vendors would like (caches require less power than ALU units). This makes the life (and the future) of the GPUs even brighter.

0. GPGPU 101 – Intro

So, if you want to get familiar with why we have to use CUDA, OpenCL and the like, the text below contains a few links to articles, talks and papers that explain that at length – you can read those in their order of appearance. I strongly advise everybody to at least check them out. However, if you want to read just a summary of all that in one place, here you have it …

We have to start with a bit of history, since this is how every good story begins …

At one point early in the computer era, people decided that they could show the results of computer calculations on screen in other-than-alphanumeric form, so their job could be more interactive (and less boring, I guess). Special hardware was needed for that – something that could convert large amounts (for that time) of data from the computer into something the screen could understand, and thus the framebuffer was born. In some sense, this is probably the birth of the GPU, and it happened a long, long time ago (1970-something).

The GPUs have had a really crazy history since that time – until Voodoo showed up in the mid-90s (and OpenGL just before that), when everybody wanted (and did make) GPUs. There were all kinds of designs, approaches and implementations, and it was a time of really dynamic changes. Many of the companies did not manage to keep up, so they quit the business. These days we basically have nVidia, AMD and Intel, and in the mobile sector there are ARM (Mali), Imagination Technologies (PowerVR), Qualcomm (Adreno) and a few even smaller IP vendors. Here is pretty much the full story of it.

But the history that brings us to today is actually pretty cool. From the beginning of the computing era, a lot of chip designs targeted the CPU, but at some point everybody just started using x86. Yes, we had MIPS, Power/PowerPC, Sparc, etc., but they were nothing like x86 in terms of market share. More or less, we have been stuck with x86 for a long time. We have ARM now, which is quite different, though in some ways it is just a cleaned-up, RISC variant of the same.

In comparison, GPUs have always had the freedom to completely change the architecture or perpetually try new stuff, since they don’t have to support tons of legacy software. So there one can find all kinds of crazy ideas, some working great, some not.

I would like to split the GPU history into three parts – before Voodoo, after Voodoo and after the GPGPU advent.

The Voodoo GPUs were so great back then (circa 1996) that they marked the whole period. They took the GPU ‘to the masses’ and everybody knew about Voodoo; I was a 14-year-old living in a post-socialist country, but even I knew of that Voodoo stuff.

They were good, but not good enough to save their maker, 3dfx, from failing – in the dynamic situation back then it did not take many wrong steps to fail. 3dfx was bought by nVidia (circa 2000), which at that time had some great GPUs too. After a lot of market acquisitions, some time at the beginning of the 21st century there were only two major discrete desktop GPU makers left – nVidia and AMD – and two dominant APIs – DirectX and OpenGL. And actually I find that not so bad – the times when everybody had different approaches to GPUs were the times when you had to write your code to fit tons of hardware (and this is not fun). This third period continues up to today, but something happened around ~2005 which marked the start of a new era – the GPGPU one, which is what we will focus on for the rest of this text.

Herb Sutter has a great article about why that third period is so exciting and important. In my eyes, this article may turn out to be as foreseeing and important for the computer industry as Dijkstra’s “Go To Statement Considered Harmful”.

Now, let’s get away from the history for a moment and look closer at why this GPGPU stuff matters (a lot) …

In a nutshell, because of the way processors are made, we cannot make them a lot faster anymore (as we could every year until 2005). So our programs can’t get faster and faster as time goes by (or at least, they can’t get faster without some developer effort).

There is Moore’s Law that everybody talks about – basically, it is a rule of thumb that says that every 18-24 months or so, the number of transistors in a processor roughly doubles. The way this usually happens is that we start making transistors half as big, so we can put twice as many on the die. On top of that, those transistors can run at twice the frequency, and because they are smaller, we can feed them considerably less voltage, which resulted in roughly the same power consumption (but running faster). However, because of a physics phenomenon called “leakage” (if you want the details and much more on the topic, this talk is great) we can’t keep lowering the voltage, and if we don’t lower the voltage we get four times the power consumption we had before. If you don’t believe that, ask Intel – they predicted 10GHz processors for 2011 (and people were thinking a lot of crazy stuff, too). By the way, because of the limitation of the speed of light, if we want a 100GHz single-clock-domain circuit, say some CPU, the furthest distance an electron can travel during one clock is 2mm. Good luck cooling a chip that is 2mm big!

So, processors today are power limited – we can’t cool them infinitely, and we can’t make them use less power at higher frequencies. So people started saying that Moore’s Law was dead. Here is an image that shows the number of transistors on the Y axis and the year on the X axis.

As we can see, Moore’s Law was not dead by 2010, but at some point it will be – nothing exponential in nature lasts forever. But this day is not today – and perhaps won’t be for the next 10 years. We can still put in more and more transistors, but we can’t make them run at higher frequencies. So the processors these days are getting more cores. Today’s expectation is that single-core performance will improve somewhere between 2% and 15% per year (because of better architectures, optimizations in the production process and so on).

That makes for roughly a 30x improvement in CPU performance over the next 50 years – for comparison, over the last 30 years that number was 2500x. At the same time, we expect the multi-core speed-up to be around 75% per year. And this is happening today. The image below shows the theoretical GFLOPS of a modern CPU (which has a few very smart cores) vs a GPU (which has thousands of not-so-smart ones).

I think that most developers underestimate the above, and badly so.

Let me give you an example – I worked at a company that made MMORPGs. Our code was single-threaded and slow, the game was slow, but we knew that by the time we started selling it, computers would be fast enough and the game would run fine – MMORPGs usually take 3 to 5 years to make if you do your own engine (the game failed in the end, but that’s another story), so there was plenty of time. These days there are millions of people out there who still write code like that (embarrassingly single-threaded code, using languages and tools that are designed to be such) and who still believe the same – that computers are fast enough, or that they will get faster (Swift, Javascript – I am looking at you).

Well, they will not, and that’s why from now on the efficiency and performance of programs will be more important than ever before (this is quite the opposite of what the majority of developers think). In the past, you could have a slow but sophisticated single-core, feature-complete program that ran sluggishly on most machines, knowing that after some time it would run fine. This is the past – now everybody can design sophisticated programs (since we all have zillions of tools), but not everybody knows how to make them run faster, not to mention that the need for more and more computation has been outgrowing the speed at which processors get faster since the beginning of computing.

“Concurrency is the next revolution in how we write software. The vast majority of programmers today don’t grok concurrency, just as the vast majority of programmers 15 years ago didn’t yet grok objects” – Herb Sutter.*

So, let’s get to the power (see the image below). As we noted above, processors are power limited. If we take one 20 mm^2 processor chip and fill it with ALUs only (for instance FMADs – fused multiply-add units, basically simple units that can do some floating point operations), this would give us 12 TFLOPS (using a 28nm production process) and would result in 300 watts of power (since every double FMAD uses 50pJ). For comparison, today’s modern GPUs (which are considered really power efficient) use around 200 watts for ~5 TFLOPS. That is because these units can’t do anything without data – they have to get input data over which to do their computations, and store the output results somewhere. And moving data comes at its own cost – moving 64 bits over a 1mm distance costs 25pJ. If you have to move 2×64 bits, you are already spending as much power on the data transfer (2×25pJ) as on the calculation (50pJ). These costs grow linearly – if you have to move the data along 10mm, it costs 250pJ, and so on and so forth. In some cases (as with the DRAM of the GPU) it goes up to 10000pJ. These joules can be converted to latency too, since data that has to travel a longer distance arrives later. It is clear that you want your data as close to the computation units as possible.
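A quick sanity check of those numbers (assuming each FMAD counts as two FLOPs, so 12 TFLOPS means 6×10^12 FMADs per second):

```latex
P = 6\times10^{12}\,\tfrac{\mathrm{FMAD}}{\mathrm{s}}
    \times 50\,\tfrac{\mathrm{pJ}}{\mathrm{FMAD}}
  = 300\,\mathrm{J/s} = 300\,\mathrm{W}
```

The same per-operation accounting is what makes the data-movement comparison above meaningful: 2×25pJ of movement per 50pJ of computation means half the budget goes to wires even at a 1mm distance.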

And when we look at “modern” computer languages, we can see that they are clearly “above” all that – C++, Java, Ruby, Haskell, Go – all of them present the world as “flat” (read: fully coherent) – from their perspective, all memory is the same and accessing it costs the same, too. And they seem to be proud of that, in some manner. The same goes for the x86 architecture itself, by the way – it has some instructions for prefetch and cache management, but they are just little patches over a large problem. It is good that we have all those caches in x86, trying so hard to hide the latencies, but developers sometimes (by nature) simply know more than any heuristic does. There are a lot (really, a lot) of problems of such nature. So if there were a way to manually “manage the cache”, a lot of developers/programs could benefit.

“Performance = Parallelism”, “Efficiency = Locality” – William Dally

Who targets what and how – in the next part.

* Parallelism is running stuff in parallel – like having multiple cars on the highway, all going in parallel on the road.
Concurrency is being able to advance multiple tasks in a system. For example, say you have two cars that have to be moved from A to B and one driver to do the job. The driver might drive the first car for some walkable distance, then go to the second car and drive it to go in front of the first one, then go back to the first car, etc. After a lot of time, both of the cars will be moved from A to B – this is concurrency. If you have two drivers for the job, this is parallelism.
A situation in which some resource might be needed by different subjects at the same time – for example, if the cars have to cross a junction, they have to wait for each other before crossing it (or otherwise a crash will occur) – is a concurrency problem.
If you have parallelism, you have concurrency too. Having heavy parallelism presupposes having to deal with concurrency problems.