This is a performance-related question. I've written the following simple CUDA kernel based on the "CUDA By Example" sample code:
#define N 37426 /* the (arbitrary) number of hashes we want to calculate */
#define THREAD_COUNT 128
__device__ const unsigned char *m = "Goodbye, cruel world!";
__global__ void kernel_sha1(unsigned char *hval) {
    sha1_ctx ctx[1];
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;

    while (tid < N) {
        sha1_begin(ctx);
        sha1_hash(m, 21UL, ctx);
        sha1_end(hval + tid * SHA1_DIGEST_SIZE, ctx);
        tid += blockDim.x * gridDim.x;
    }
}
The code seems to me to be correct and indeed spits out 37,426 copies of the same hash (as expected). Based on my reading of Chapter 5, section 5.3, I assumed that each thread writing directly to the global memory passed in as "hval" would be extremely inefficient.
I then implemented what I assumed would be a performance-boosting cache using shared memory. The code was modified as follows:
#define N 37426 /* the (arbitrary) number of hashes we want to calculate */
#define THREAD_COUNT 128
__device__ const unsigned char *m = "Goodbye, cruel world!";
__global__ void kernel_sha1(unsigned char *hval) {
    sha1_ctx ctx[1];
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
    __shared__ unsigned char cache[THREAD_COUNT * SHA1_DIGEST_SIZE];

    while (tid < N) {
        sha1_begin(ctx);
        sha1_hash(m, 21UL, ctx);
        sha1_end(cache + threadIdx.x * SHA1_DIGEST_SIZE, ctx);
        __syncthreads();
        if (threadIdx.x == 0) {
            memcpy(hval + tid * SHA1_DIGEST_SIZE, cache, sizeof(cache));
        }
        __syncthreads();
        tid += blockDim.x * gridDim.x;
    }
}
The second version also appears to run correctly, but it is several times slower than the initial version: the shared-memory version completes in about 8.95 milliseconds, while the original runs in about 1.64 milliseconds. My question to the Stack Overflow community is simple: why?
I looked through CUDA by Example and couldn't find anything resembling this. Yes, there is some discussion of GPU hash tables in the appendix, but it looks nothing like this. So I really have no idea what your functions do, especially sha1_end. If this code is similar to something in that book, please point it out; I missed it.
However, if sha1_end writes to global memory once (per thread) and does so in a coalesced way, there's no reason it can't be quite efficient. Presumably each thread is writing to a different location, and if those locations are more or less adjacent, there are definitely opportunities for coalescing. Without going into the details of coalescing, suffice it to say that it allows multiple threads to write data to memory in a single transaction. And if you are going to write your data to global memory, you're going to have to pay this penalty at least once, somewhere.
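To make that concrete, here's a minimal sketch (not your code, just an illustration of the pattern): adjacent threads of a warp write adjacent words, so the hardware can combine a warp's 32 stores into one or a few wide transactions.

__global__ void coalesced_write(int *out, int n)
{
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    while (tid < n) {
        out[tid] = tid;                   // thread k of a warp writes word k: coalesced
        tid += blockDim.x * gridDim.x;
    }
}

Your first version follows the same pattern at a 20-byte granularity; how well the byte-wise stores inside sha1_end actually coalesce depends on how that function writes out the digest, which is one more reason to profile (see below).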
For your modification, you've completely killed this concept. You have now performed all the data copying from a single thread, and the memcpy means that subsequent data writes (ints, or chars, whatever) are occurring in separate transactions. Yes, there is a cache which may help with this, but it's completely the wrong way to do it on GPUs. Let each thread update global memory, and take advantage of opportunities to do it in parallel. But when you force all the updates on a single thread, then that thread must copy the data sequentially. This is probably the biggest single cost factor in the timing difference.
The use of __syncthreads() also imposes additional cost.
Section 12.2.7 of the CUDA by Example book refers to the Visual Profiler (and mentions that it can gather information about coalesced accesses). The Visual Profiler is a good tool to help answer questions like this.
If you want to learn more about efficient memory techniques and coalescing, I would recommend the NVIDIA GPU computing webinar entitled "GPU Computing using CUDA C – Advanced 1 (2010)". The direct link to it is here with slides.
Related
I have a few questions regarding memory barriers.
Say I have the following C code (it will be run from both C++ and C code, so atomics are not possible) that copies one array into another. Multiple threads may call thread_func(), and I want to make sure that my_str is returned only after it has been fully initialized. In this case it is a given that the last byte of the buffer can't be 0, so checking that the last byte is not 0 should suffice.
Due to reordering by compiler/CPU, this can be a problem as the last byte might get written before previous bytes, causing my_str to be returned with a partially copied buffer. So to get around this, I want to use a memory barrier. A mutex will work of course, but would be too heavy for my uses.
Keep in mind that all threads will call thread_func() with the same input, so even if multiple threads call init() a couple of times, it's OK as long as in the end, thread_func() returns a valid my_str, and that all subsequent calls after initialization return my_str directly.
Please tell me whether each of the following approaches works, or whether there could be issues in some scenarios; aside from getting a solution to the problem, I'd like to learn a bit more about memory barriers.
1. __sync_bool_compare_and_swap on the last byte. If I understand correctly, no memory store/load would be reordered across it, not just the accesses to the particular variable passed to the builtin. Is that correct? If so, I would expect this to work, as all the writes of the previous bytes should be completed before the barrier is passed.
#include <stdint.h>

#define STR_LEN 100
static uint8_t my_str[STR_LEN] = {0};

static void init(uint8_t input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN - 1; ++i) {
        my_str[i] = input_buf[i];
    }
    __sync_bool_compare_and_swap(my_str + STR_LEN - 1, 0, input_buf[STR_LEN - 1]);
}

const char * thread_func(char input_buf[STR_LEN])
{
    if (my_str[STR_LEN - 1] == 0) {
        init((uint8_t *)input_buf);
    }
    return (const char *)my_str;
}
2. __sync_bool_compare_and_swap on each write. I would expect this to work as well, but to be slower than the first one.
static void init(char input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN; ++i) {
        __sync_bool_compare_and_swap(my_str + i, 0, input_buf[i]);
    }
}
3. __sync_synchronize before each byte copy. I would expect this to work as well, but is it slower or faster than (2)? __sync_bool_compare_and_swap is supposed to be a full barrier as well, so which would be preferable?
static void init(char input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN; ++i) {
        __sync_synchronize();
        my_str[i] = input_buf[i];
    }
}
4. __sync_synchronize by condition. As I understand it, __sync_synchronize is both a hardware and a compiler memory barrier. As such, since the compiler can't tell the value of use_sync, it shouldn't reorder, and the hardware reordering will be prevented only if use_sync is true. Is that correct?
static void init(char input_buf[STR_LEN], bool use_sync)
{
    for (int i = 0; i < STR_LEN; ++i) {
        if (use_sync) {
            __sync_synchronize();
        }
        my_str[i] = input_buf[i];
    }
}
GNU C legacy __sync builtins are not recommended for new code, as the manual says.
Use the __atomic builtins, which can take a memory-order parameter like C11 stdatomic. But they're still builtins and still work on plain types not declared _Atomic, so using them is like C++20 std::atomic_ref. In C++20 you could use std::atomic_ref<unsigned char>(my_str[STR_LEN - 1]); C doesn't provide an equivalent, so you'd have to use the compiler builtins to hand-roll it.
Just do the last store separately with a release store in the writer, not an RMW, and definitely not a full memory barrier (__sync_synchronize()) between every byte! That's way slower than necessary, and it defeats any optimization of the copy into a memcpy. Also, the store of the final byte needs to be at least RELEASE, not a plain store, so readers can synchronize with it. See also Who's afraid of a big bad optimizing compiler? re: how exactly compilers can break your code if you try to hand-roll lockless code with just barriers rather than atomic loads or stores. (It's written for Linux kernel code, where a macro would use *(volatile char*) to hand-roll something close to __atomic_store_n with __ATOMIC_RELAXED.)
So something like
__atomic_store_n(&my_str[STR_LEN - 1], input_buf[STR_LEN - 1], __ATOMIC_RELEASE);
The if (my_str[STR_LEN - 1] == 0) load in thread_func is of course data-race UB when there are concurrent writers.
For safety it needs to be an acquire load, like __atomic_load_n(&my_str[STR_LEN - 1], __ATOMIC_ACQUIRE) == 0, since you need a thread that loads a non-0 value to also see all other stores by another thread that ran init(). (Which did a release-store to that location, creating acquire/release synchronization and guaranteeing a happens-before relationship between these threads.)
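Putting the writer and reader together, here's a minimal sketch of what that could look like. It reuses the question's names; the const qualifiers, the stdint.h include, and the cast on the return are my additions.

#include <stdint.h>

#define STR_LEN 100
static uint8_t my_str[STR_LEN] = {0};

static void init(const uint8_t input_buf[STR_LEN])
{
    // plain stores for the body of the buffer (a same-value race if two
    // threads run init() concurrently; see the note below)
    for (int i = 0; i < STR_LEN - 1; ++i) {
        my_str[i] = input_buf[i];
    }
    // publish with a release store of the last (non-zero) byte, so a reader
    // that sees it non-zero also sees all the earlier stores
    __atomic_store_n(&my_str[STR_LEN - 1], input_buf[STR_LEN - 1], __ATOMIC_RELEASE);
}

const char *thread_func(const uint8_t input_buf[STR_LEN])
{
    // the acquire load pairs with the release store in init()
    if (__atomic_load_n(&my_str[STR_LEN - 1], __ATOMIC_ACQUIRE) == 0) {
        init(input_buf);
    }
    return (const char *)my_str;
}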
See https://preshing.com/20120913/acquire-and-release-semantics/
Writing the same value non-atomically is also UB in ISO C and ISO C++. See Race Condition with writing same value in C++? and others.
But in practice it should be fine, except with clang -fsanitize=thread. In theory a DeathStation9000 could implement non-atomic stores by storing value+1 and then subtracting 1, so temporarily there'd be a different value in memory. But AFAIK there aren't real compilers that do that. I'd still have a look at the generated asm on any new compiler / ISA combination you're trying, just to make sure.
It would be hard to test; the init stuff can only race once per program invocation. But there's no fully safe way to do it that doesn't totally suck for performance, AFAIK. Perhaps doing the init with a cast to _Atomic unsigned char* or typedef _Atomic unsigned long __attribute__((may_alias)) aliasing_atomic_ulong; as a building block for a manual copy loop?
Bonus question: if(use_sync) __sync_synchronize() inside the loop.
"Since the compiler can't tell the value of use_sync it shouldn't reorder."
The compiler can optimize this into asm that works something like if(use_sync) { slow barrier loop } else { no-barrier loop }. This is called "loop unswitching": making two loops and branching once to decide which one to run, instead of branching every iteration. GCC has been able to do that optimization (in some cases) since 3.4. So that defeats your attempt to trick the compiler into doing more ordering than the source actually requires.
"And the HW reordering will be done only if use_sync is true."
Yes, that part is correct.
Also, inlining and constant-propagation of use_sync could easily defeat this, unless use_sync was a volatile global or something. At that point you might as well just make a separate _Atomic unsigned char array_init_done flag / guard variable.
And you can use it for mutual exclusion: have threads try to set it to 1 with int old = guard.exchange(1); the winner of the race runs init while the others spin-wait (or, in C++20, .wait(1)) for the guard variable to become 2 (or -1, or whatever), which the winner sets after finishing init.
Have a look at the asm GCC makes for non-constant-initialized static local vars; they check a guard variable with an acquire load, only doing locking to have one thread do the run_once init stuff and the others wait for that result. IIRC there's a Q&A about doing that yourself with atomics.
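A rough sketch of that guard-variable pattern in plain GNU C (run_once and do_init are made-up names, and I've used a compare-exchange rather than a bare exchange so a latecomer can't overwrite the "done" state):

static int guard = 0;   // 0 = not started, 1 = init in progress, 2 = init done

static void run_once(void (*do_init)(void))
{
    if (__atomic_load_n(&guard, __ATOMIC_ACQUIRE) == 2)
        return;                                   // fast path: already initialized
    int expected = 0;
    if (__atomic_compare_exchange_n(&guard, &expected, 1, 0,
                                    __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE)) {
        do_init();                                // this thread won the race
        __atomic_store_n(&guard, 2, __ATOMIC_RELEASE);
    } else {
        // losers spin-wait; a production version would sleep or use a futex
        while (__atomic_load_n(&guard, __ATOMIC_ACQUIRE) != 2)
            ;
    }
}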
I am having a hard time understanding how shortp_incr_bp() works. How is it able to increment the index atomically without needing a spinlock or semaphore? (I don't really understand the provided comments.) What could happen if barrier() wasn't there, and how could the optimization it prevents expose an incorrect value?
/*
* Input is managed through a simple circular buffer which, among other things,
* is allowed to overrun if the reader isn't fast enough. That makes life simple
* on the "read" interrupt side, where we don't want to block.
*/
static unsigned long shortp_in_buffer = 0;
static unsigned long volatile shortp_in_head;
static volatile unsigned long shortp_in_tail;
/*
* Atomically increment an index into "shortp_in_buffer"
*
* This function has been carefully written to wrap a pointer into the circular
* buffer without ever exposing an incorrect value. The "barrier" call is there
* to block compiler optimizations across the other two lines of the function.
* Without the barrier, the compiler might decide to optimize out the "new"
* variable and assign directly to "*index". That optimization could expose an
* incorrect value of the index for a brief period in the case where it wraps.
* By taking care to prevent an inconsistent value from ever being visible to
* other threads, we can manipulate the circular buffer pointers safely without
* locks.
*/
static inline void shortp_incr_bp(volatile unsigned long *index, int delta)
{
    unsigned long new = *index + delta;
    barrier();  /* Don't optimize these two together */
    *index = (new >= (shortp_in_buffer + PAGE_SIZE)) ? shortp_in_buffer : new;
}
In my code I have a buffer, and my code to add data to it is:
bool push_string(file_buffer *cb, const char *message, const unsigned short msglen)
{
    unsigned int size = msglen;
    if (cb->head >= (cb->size - size))
    {
        size = cb->size - cb->head - 1;
    }
    if (size < 1) return false;
    char *dest = cb->head += size;
    memcpy(dest, message, size);
    return (size == msglen);
}
Since I add data from multiple interrupts (which can preempt each other), I was wondering whether this code is thread-safe. I marked 'cb->head' as volatile, but if another interrupt preempts exactly between the increment of 'head' and the assignment to 'dest', things could go wrong.
How can I improve this code to make it safer?
EDIT: Maybe I shouldn't have used the term 'thread-safe' because there are no threads running in parallel, just the possibility of interrupts.
C99 has no concept of threads and thus none of thread-safety either. Only C11 does.
In C99 the only data type that is interrupt-safe is sig_atomic_t, but evidently this says nothing about threads either.
Generally, you are mistaken in attempting to access data structures concurrently like this; volatile is no guarantee at all that you receive sensible data. There is no guarantee of atomicity for any of these operations, even in C11, so you could, e.g., be in a situation where the lower half of a pointer value has already been written but not the upper half. That would give you a completely bogus result. Since such a thing might happen only once in a million runs, or only under special circumstances (heavy load, for example), it can lead to bugs that are very difficult to trace.
Don't do that.
C11 gives you new tools to handle such things, in particular atomic operations. It is not yet completely implemented, but many compilers already have extensions that can help you. I have wrapped some of these in the P99 macro package, so with certain compilers you can start using these features today.
Think about signals interrupting signals... if you really need that:
You could block all relevant signals while in push_string(); a sketch of that follows after these options.
Another, application-dependent possibility might be moving the signal handler code into the main 'thread' (the signal handlers just generate 'events' that wake up the main thread of execution). I don't have enough information about your app to say whether that is a good choice or not.
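As a sketch of the first option: block the signals whose handlers might re-enter the buffer code, do the update, then restore the old mask. push_string_blocked is a hypothetical wrapper around the push_string from your question, and the specific signals are placeholders.

#include <signal.h>
#include <stdbool.h>

/* Hypothetical wrapper: block the signals whose handlers can also call
 * push_string() for the duration of the update, then restore the old mask. */
bool push_string_blocked(file_buffer *cb, const char *message, const unsigned short msglen)
{
    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGIO);             /* placeholders: whichever signals can re-enter here */
    sigaddset(&block, SIGALRM);
    sigprocmask(SIG_BLOCK, &block, &old);

    bool ok = push_string(cb, message, msglen);

    sigprocmask(SIG_SETMASK, &old, NULL); /* restore the previous mask */
    return ok;
}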
Can someone please help me with a very simple example on how to use shared memory? The example included in the Cuda C programming guide seems cluttered by irrelevant details.
For example, if I copy a large array to the device global memory and want to square each element, how can shared memory be used to speed this up? Or is it not useful in this case?
In the specific case you mention, shared memory is not useful, for the following reason: each data element is used only once. For shared memory to help, you must reuse the data you stage there several times, with good access patterns. The reason is simple: just reading from global memory costs one global memory read and no shared memory accesses; staging it in shared memory first costs one global memory read plus a shared memory write and a shared memory read, which takes longer.
Here's a simple example, where each thread in the block computes the corresponding value, squared, plus the average of both its left and right neighbors, squared:
__global__ void compute_it(float *data)
{
    int tid = threadIdx.x;
    __shared__ float myblock[1024];
    float tmp;

    // load the thread's data element into shared memory
    myblock[tid] = data[tid];

    // ensure that all threads have loaded their values into
    // shared memory; otherwise, one thread might be computing
    // on uninitialized data.
    __syncthreads();

    // compute the average of this thread's left and right neighbors
    tmp = (myblock[tid > 0 ? tid - 1 : 1023] + myblock[tid < 1023 ? tid + 1 : 0]) * 0.5f;
    // square the previous result and add my value, squared
    tmp = tmp * tmp + myblock[tid] * myblock[tid];

    // write the result back to global memory
    data[tid] = tmp;
}
Note that this is envisioned to work using only one block; the extension to more blocks should be straightforward. It assumes a block dimension of (1024, 1, 1) and a grid dimension of (1, 1, 1).
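For reference, a rough sketch of the host-side setup and launch that the above assumes (the names and the error handling are mine, not part of the kernel above):

float h_data[1024] = { /* ... filled in on the host ... */ };
float *d_data;
cudaMalloc(&d_data, sizeof(h_data));
cudaMemcpy(d_data, h_data, sizeof(h_data), cudaMemcpyHostToDevice);
compute_it<<<1, 1024>>>(d_data);          // one block of 1024 threads, as assumed above
cudaMemcpy(h_data, d_data, sizeof(h_data), cudaMemcpyDeviceToHost);
cudaFree(d_data);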
Think of shared memory as an explicitly managed cache - it's only useful if you need to access data more than once, either within the same thread or from different threads within the same block. If you're only accessing data once then shared memory isn't going to help you.
Coming from CUDA, I'm interested in how shared memory is read from a thread and how that compares to the read-alignment requirements of CUDA. I'll use the following code as an example:
#include <sys/unistd.h>
#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>

#define THREADS 2

void *threadFun(void *args);

typedef struct {
    float *dataPtr;
    int tIdx,
        dSize;
} t_data;

int main(int argc, char *argv[])
{
    int i,
        sizeData = 5;
    void *status;
    float *data;
    t_data *d;
    pthread_t *threads;
    pthread_attr_t attr;

    data    = (float *) malloc(sizeof(float) * sizeData);
    threads = (pthread_t *) malloc(sizeof(pthread_t) * THREADS);
    d       = (t_data *) malloc(sizeof(t_data) * THREADS);

    data[0] = 0.0;
    data[1] = 0.1;
    data[2] = 0.2;
    data[3] = 0.3;
    data[4] = 0.4;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

    for (i = 0; i < THREADS; i++)
    {
        d[i].tIdx    = i;
        d[i].dataPtr = data;
        d[i].dSize   = sizeData;
        pthread_create(&threads[i], NULL, threadFun, (void *)(d + i));
    }

    for (i = 0; i < THREADS; i++)
    {
        pthread_join(threads[i], &status);
        if (status)
            ;   // Error;
    }
    return 0;
}

void *threadFun(void *args)
{
    int i;
    t_data *d = (t_data *) args;
    float sumVal = 0.0;

    for (i = 0; i < d->dSize; i++)
        sumVal += d->dataPtr[i] * (d->tIdx + 1);

    printf("Thread %d calculated the value as %-11.11f\n", d->tIdx, sumVal);
    return NULL;
}
In threadFun, the pointer d points into shared memory space (I believe). From what I've encountered in the documentation, reading from multiple threads is OK. In CUDA, reads need to be coalesced; are there similar alignment restrictions in pthreads? That is, if I have two threads reading from the same shared address, I'm assuming that somewhere along the line a scheduler has to put one thread ahead of the other. In CUDA this could be a costly operation and should be avoided. Is there a penalty for 'simultaneous' reads from shared memory, and if so, is it small enough to be negligible? Both threads may need to read d->dataPtr[0] at the same time; I'm assuming those memory reads cannot actually occur simultaneously. Is that assumption wrong?
Also, I read an article from Intel that said to use a structure of arrays when multithreading; this is consistent with CUDA. If I do this, though, it is almost inevitable that I will need the thread ID, which I believe would require me to use a mutex to lock the thread ID until it is read into the thread's scope. Is this true, or is there some other way to identify threads?
An article on memory management for multithreaded programs would be appreciated as well.
While your thread data pointer d is pointing into a shared memory space, unless you increment that pointer to try and read from or write to an adjoining thread data element in the shared memory space array, you're basically dealing with localized thread data. Also the value of args is local to each thread, so in both cases if you are not incrementing the data pointer itself (i.e., you're never calling something like d++, etc. so that you're pointing to another thread's memory), no mutex is needed to guard the memory "belonging" to your thread.
Again, for your thread ID: since you're only writing that value from the spawning thread and then reading it in the spawned thread, there is no need for a mutex or other synchronization mechanism; you have a single producer and a single consumer for that data. Mutexes and other synchronization mechanisms are only needed when multiple threads both read and write the same data location.
CPUs have caches. Reads come from caches, so each CPU/core can read from its own cache, as long as the corresponding cacheline is SHARED. Writes force cachelines into EXCLUSIVE state, invalidating the corresponding cachelines on other CPUs.
If you have an array with a member per thread, and there are both reads and writes to that array, you may want to align every member to a cacheline, to avoid false sharing.
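For example, a sketch of one way to do that (CACHELINE, slot_t, and worker are made-up names, and 64 bytes is an assumed line size):

#include <pthread.h>
#include <stdio.h>

#define THREADS   2
#define CACHELINE 64   /* assumed cache-line size; 64 bytes on most current x86 parts */

/* one slot per thread; the alignment pads each slot out to its own cache line */
typedef struct {
    double partial_sum;
} __attribute__((aligned(CACHELINE))) slot_t;

static slot_t results[THREADS];

static void *worker(void *arg)
{
    slot_t *mine = arg;                 /* each thread touches only its own slot, so its */
    for (int i = 0; i < 10000000; i++)  /* writes never invalidate the cache line another */
        mine->partial_sum += 1.0;       /* thread is using (no false sharing) */
    return NULL;
}

int main(void)
{
    pthread_t t[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, worker, &results[i]);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    printf("%f %f\n", results[0].partial_sum, results[1].partial_sum);
    return 0;
}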
Reads of the same memory area from different threads aren't a problem on shared-memory systems (writes are another matter; the relevant unit is the cache line, 64-256 bytes depending on the system).
I don't see any reason why getting the thread ID should need to be a synchronized operation. You can feed your thread any ID that is meaningful to you; that can be simpler than deriving a meaningful value from an abstract ID.
Coming from CUDA probably leads you to overcomplicate this. POSIX threads are much simpler. Basically, what you are doing should work, as long as you are only reading from the shared array.
Also, don't forget that CUDA is derived from C++, not from C, so some things may look different from that angle too. For example, in your code the habit of casting the return of malloc is generally frowned upon by seasoned C programmers, since it can be the source of subtle errors there.