Improving a simple function using threading - C

I have written a simple function with the following code that calculates the minimum number from a one-dimensional array:
uint32_t get_minimum(const uint32_t* matrix) {
    int min = 0;
    min = matrix[0];
    for (ssize_t i = 0; i < g_elements; i++) {
        if (min > matrix[i]) {
            min = matrix[i];
        }
    }
    return min;
}
However, I wanted to improve the performance of this function and was advised to use threads, so I modified it to the following:
struct minargument {
    const uint32_t* matrix;
    ssize_t tid;
    long long results;
};

static void *minworker(void *arg) {
    struct minargument *argument = (struct minargument *)arg;
    const ssize_t start = argument->tid * CHUNK;
    const ssize_t end = argument->tid == THREADS - 1 ? g_elements : (argument->tid + 1) * CHUNK;
    long long result = argument->matrix[0];
    for (ssize_t i = start; i < end; i++) {
        for (ssize_t x = 0; x < g_elements; x++) {
            if (result > argument->matrix[i]) {
                result = argument->matrix[i];
            }
        }
    }
    argument->results = result;
    return NULL;
}
uint32_t get_minimum(const uint32_t* matrix) {
    struct minargument *args = malloc(sizeof(struct minargument) * THREADS);
    long long min = 0;

    for (ssize_t i = 0; i < THREADS; i++) {
        args[i] = (struct minargument){
            .matrix = matrix,
            .tid = i,
            .results = min,
        };
    }

    pthread_t thread_ids[THREADS];
    for (ssize_t i = 0; i < THREADS; i++) {
        if (pthread_create(thread_ids + i, NULL, minworker, args + i) != 0) {
            perror("pthread_create failed");
            return 1;
        }
    }

    for (ssize_t i = 0; i < THREADS; i++) {
        if (pthread_join(thread_ids[i], NULL) != 0) {
            perror("pthread_join failed");
            return 1;
        }
    }

    for (ssize_t i = 0; i < THREADS; i++) {
        min = args[i].results;
    }

    free(args);
    return min;
}
However this seems to be slower than the first function.
Am I correct in using threads to make the first function run faster? And if so, how do I modify the second function so that it is faster than the first function?

Having more threads than cores available to run them on is always going to be slower than a single thread due to the overhead of creating them, scheduling them and waiting for them all to finish.
The example you provide is unlikely to benefit from any optimisation beyond that which the compiler will do for you, as it is a short and simple operation. If you were doing something more complicated on a multi-core system, such as multiplying two huge matrices, or running a correlation algorithm on high-speed real-time data, then multi-threading may be the solution.
A more abstract answer to your question is another question: do you really need to be optimising it at all? Unless you know for a fact that there are performance issues, then your time would be better spent adding more functionality to your program than fixing a problem that doesn't really exist.
Edit - Comparison
I just ran (a representative version of) the OP's code on a 16 bit ARM microcontroller running with a 40 MHz instruction clock. Code compiled using GCC with no optimisation.
Finding the minimum of 20,000 32-bit integers took a little over 25 milliseconds.
With a 40 kByte page size (to hold half of a 20,000-element array of 4-byte values) and threads running on different cores of a dual Intel 5150 processor clocked at 2.67 GHz, it takes nearly 50 ms just to do the context switch and paging operation!
A simple, single-threaded microcontroller implementation takes half as long in real time terms as a multi-threaded desktop implementation.
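For completeness: if the threaded version is kept for much larger inputs, two bugs need fixing before any timing comparison means much. The worker's inner loop over g_elements repeats the same comparison g_elements times for every element of its chunk, and the final loop in get_minimum() keeps only the last thread's result rather than the smallest one. A minimal sketch of both fixes, keeping the question's g_elements, THREADS and CHUNK (untested, for illustration only):
static void *minworker(void *arg) {
    struct minargument *argument = (struct minargument *)arg;
    const ssize_t start = argument->tid * CHUNK;
    const ssize_t end = argument->tid == THREADS - 1 ? g_elements : (argument->tid + 1) * CHUNK;
    uint32_t result = argument->matrix[start];
    /* single pass over this thread's chunk only */
    for (ssize_t i = start; i < end; i++) {
        if (argument->matrix[i] < result) {
            result = argument->matrix[i];
        }
    }
    argument->results = result;
    return NULL;
}

/* in get_minimum(), replace the final loop so it keeps the smallest result: */
min = args[0].results;
for (ssize_t i = 1; i < THREADS; i++) {
    if (args[i].results < min) {
        min = args[i].results;
    }
}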

Related

Why MultiThread is slower than Single?? (Linux,C,using pthread)

I have a question about multithreading.
This code is a simple example comparing a single thread with multiple threads
(summing 0 to 400,000,000 with a single thread vs. 4 threads).
//Single
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NUM_THREAD 4
#define MY_NUM 100000000

void* calcThread(void* param);

double total = 0;
double sum[NUM_THREAD] = { 0, };

int main() {
    long p[NUM_THREAD] = { MY_NUM, MY_NUM * 2, MY_NUM * 3, MY_NUM * 4 };
    int i;
    long total_nstime;
    struct timespec begin, end;
    pthread_t tid[NUM_THREAD];
    pthread_attr_t attr[NUM_THREAD];

    clock_gettime(CLOCK_MONOTONIC, &begin);
    for (i = 0; i < NUM_THREAD; i++) {
        calcThread((void*)p[i]);
    }
    for (i = 0; i < NUM_THREAD; i++) {
        total += sum[i];
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("total = %lf\n", total);
    total_nstime = (end.tv_sec - begin.tv_sec) * 1000000000 + (end.tv_nsec - begin.tv_nsec);
    printf("%.3fs\n", (float)total_nstime / 1000000000);
    return 0;
}

void* calcThread(void* param) {
    int i;
    long to = (long)(param);
    int from = to - MY_NUM + 1;
    int th_num = from / MY_NUM;
    for (i = from; i <= to; i++)
        sum[th_num] += i;
    return NULL;
}
I then wanted a 4-thread version, so I changed the code to call the calculation function from multiple threads:
...
int main() {
    ...
    //createThread
    for (i = 0; i < NUM_THREAD; i++) {
        pthread_attr_init(&attr[i]);
        pthread_create(&tid[i], &attr[i], calcThread, (void *)p[i]);
    }
    //wait
    for (i = 0; i < NUM_THREAD; i++) {
        pthread_join(tid[i], NULL);
    }
    for (i = 0; i < NUM_THREAD; i++) {
        total += sum[i];
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    ...
}
Result (in Ubuntu):
But it's slower than the single-threaded code. I know multithreading is faster.
I have no idea what the problem is :( What's wrong?
Could you give me some advice? Thanks a lot!
"I know MultiThread is faster"
This isn't always the case: generally you will be CPU bound in some way, whether due to the core count, how the threads are scheduled at the OS level, or the hardware itself.
It is a balance of how many threads are worth giving to a process, as you may run into an old Linux problem where more time is spent scheduling the threads than actually running them.
As this is very hardware and OS dependent it is difficult to say exactly what the issue may be, but make sure you have the appropriate microcode for your CPU installed (generally installed by default in Ubuntu). Just in case, try:
sudo apt-get install intel-microcode
Otherwise, look at what other processes are running; it may be that a lot of other things are competing for the cores allocated to your process.
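One other thing that can slow down code like this is false sharing: the four sum[] slots are adjacent doubles and very likely sit on a single cache line, so each thread's additions keep invalidating the others' cached copies. As a quick experiment, a variant of calcThread that accumulates into a local variable and writes its sum[] slot only once would rule that out (a sketch reusing the question's MY_NUM and sum[], not a drop-in fix for anything else):
void* calcThread(void* param) {
    long to = (long)param;
    long from = to - MY_NUM + 1;
    int th_num = (int)(from / MY_NUM);
    double local = 0;                 /* thread-private accumulator */
    for (long i = from; i <= to; i++)
        local += i;
    sum[th_num] = local;              /* one write to the shared array at the end */
    return NULL;
}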

user time increase in multi-cpu job

I am running the following code:
When I run this code with 1 child process, I get the following timing information (running with /usr/bin/time ./job 1):
5.489u 0.090s 0:05.58 99.8% (1 job running)
When I run with 6 child processes, I get the following:
74.731u 0.692s 0:12.59 599.0% (6 jobs running in parallel)
The machine I am running the experiment on has 6 cores and 198 GB of RAM, and nothing else is running on it.
I was expecting the reported user time to be 6 times larger with 6 jobs running in parallel, but it is much more than that (13.6 times). My question is: where does this increase in user time come from? Is it because multiple cores are jumping from one memory location to another more frequently when 6 jobs run in parallel, or is there something else I am missing?
Thanks
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define MAX_SIZE 7000000
#define LOOP_COUNTER 100

#define simple_struct struct _simple_struct
simple_struct {
    int n;
    simple_struct *next;
};

#define ALLOCATION_SPLIT 5
#define CHAIN_LENGTH 1

void do_function3(void)
{
    int i = 0, j = 0, k = 0, l = 0;
    simple_struct **big_array = NULL;
    simple_struct *temp = NULL;

    big_array = calloc(MAX_SIZE + 1, sizeof(simple_struct*));

    for (k = 0; k < ALLOCATION_SPLIT; k++) {
        for (i = k; i < MAX_SIZE; i += ALLOCATION_SPLIT) {
            big_array[i] = calloc(1, sizeof(simple_struct));
            if ((CHAIN_LENGTH - 1)) {
                for (l = 1; l < CHAIN_LENGTH; l++) {
                    temp = calloc(1, sizeof(simple_struct));
                    temp->next = big_array[i];
                    big_array[i] = temp;
                }
            }
        }
    }

    for (j = 0; j < LOOP_COUNTER; j++) {
        for (i = 0; i < MAX_SIZE; i++) {
            if (big_array[i] == NULL) {
                big_array[i] = calloc(1, sizeof(simple_struct));
            }
            big_array[i]->n = i * 13;
            temp = big_array[i]->next;
            while (temp) {
                temp->n = i * 13;
                temp = temp->next;
            }
        }
    }
}

int main(int argc, char **argv)
{
    int i, no_of_processes = 0;
    pid_t pid, wpid;
    int child_done = 0;
    int status;

    if (argc != 2) {
        printf("usage: this_binary number_of_processes");
        return 0;
    }

    no_of_processes = atoi(argv[1]);

    for (i = 0; i < no_of_processes; i++) {
        pid = fork();
        switch (pid) {
        case -1:
            printf("error forking");
            exit(-1);
        case 0:
            do_function3();
            return 0;
        default:
            printf("\nchild %d launched with pid %d\n", i, pid);
            break;
        }
    }

    while (child_done != no_of_processes) {
        wpid = wait(&status);
        child_done++;
        printf("\nchild done with pid %d\n", wpid);
    }
    return 0;
}
Firstly, your benchmark is a bit unusual. Normally, when benchmarking concurrent applications, one would compare two implementations:
A single thread version solving a problem of size S;
A multi-threaded version with N threads, cooperatively solving the same problem of size S; in your case, each thread would solve a sub-problem of size S/N.
Then you divide the execution times to obtain the speedup.
If your speedup is:
Around 1: the parallel implementation performs about the same as the single-threaded implementation;
Higher than 1 (usually between 1 and N), parallelizing the application increases performance;
Lower than 1: parallelizing the application hurts performance.
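As a rough illustration with the numbers from the question: the six processes each did the full amount of work, so about 6 × 5.58 s ≈ 33.5 s of single-core work finished in 12.59 s of wall-clock time, a throughput speedup of roughly 2.7 rather than the 6 one might hope for on six cores.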
The effect on performance depends on a variety of factors:
How well your algorithm can be parallelized. See Amdahl's law. Does not apply here.
Overhead in inter-thread communication. Does not apply here.
Overhead in inter-thread synchronization. Does not apply here.
Contention for CPU resources. Should not apply here (since the number of threads is equal to the number of cores). However, HyperThreading might hurt.
Contention for memory caches. Since the threads do not share memory, this will decrease performance.
Contention for accesses to main memory. This will decrease performance.
You can measure the last two with a profiler. Look for cache misses and stalled instructions.
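On Linux, for instance, perf can count both (assuming perf is installed; ./job is the binary from the question, and the exact event names available vary by CPU):
perf stat -e cache-references,cache-misses,stalled-cycles-frontend,stalled-cycles-backend ./job 6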

Prevent False Sharing without using padding

I'm currently learning about pthreads in C and came across the issue of False Sharing. I think I understand the concept of it and I've tried experimenting a bit.
Below is a short program that I've been playing around with. Eventually I'm going to change it into a program to take a large array of ints and sum it in parallel.
#include <stdio.h>
#include <pthread.h>

#define THREADS 4
#define NUMPAD 14

struct s
{
    int total;         // 4 bytes
    int my_num;        // 4 bytes
    int pad[NUMPAD];   // 4 * NUMPAD bytes
} sum_array[4];

static void *worker(void *ind) {
    const int curr_ind = *(int *) ind;
    for (int i = 0; i < 10; ++i) {
        sum_array[curr_ind].total += sum_array[curr_ind].my_num;
    }
    printf("%d\n", sum_array[curr_ind].total);
    return NULL;
}

int main(void) {
    int args[THREADS] = { 0, 1, 2, 3 };
    pthread_t thread_ids[THREADS];

    for (size_t i = 0; i < THREADS; ++i) {
        sum_array[i].total = 0;
        sum_array[i].my_num = i + 1;
        pthread_create(&thread_ids[i], NULL, worker, &args[i]);
    }

    for (size_t i = 0; i < THREADS; ++i) {
        pthread_join(thread_ids[i], NULL);
    }
}
My question is: is it possible to prevent false sharing without using padding? Here struct s has a size of 64 bytes, so each struct sits on its own cache line (assuming the cache line is 64 bytes). I'm not sure how else I can achieve parallelism without padding.
Also, if I were to sum an array of varying size, between 1,000 and 50,000 bytes, how could I prevent false sharing? Would I be able to pad it out in a similar way? My current thought is to put each int from the big array into an array of struct s and then sum in parallel, but I'm not sure whether this is the optimal solution.
Partition the problem: In worker(), sum into a local variable, then add the local variable to the array:
static void *worker(void *ind) {
    const int curr_ind = *(int *) ind;
    int localsum = 0;
    for (int i = 0; i < 10; ++i) {
        localsum += sum_array[curr_ind].my_num;
    }
    sum_array[curr_ind].total += localsum;
    printf("%d\n", sum_array[curr_ind].total);
    return NULL;
}
This may still have false sharing after the loop, but that is one time per thread. Thread creation overhead is much more significant than a single cache-miss. Of course, you probably want to have a loop that actually does something time-consuming, as your current code can be optimized to:
static void *worker(void *ind) {
    const int curr_ind = *(int *) ind;
    int localsum = 10 * sum_array[curr_ind].my_num;
    sum_array[curr_ind].total += localsum;
    printf("%d\n", sum_array[curr_ind].total);
    return NULL;
}
Its runtime is definitely dominated by thread creation and the synchronization in printf().
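For the eventual goal of summing a large array in parallel, the same idea scales up: give each thread a contiguous chunk, accumulate into a local variable, and touch shared memory once per thread at the end. A minimal sketch (the array size, its contents and the THREADS count here are arbitrary placeholders, not from the question):
#include <pthread.h>
#include <stdio.h>

#define THREADS 4

static int big_array[100000];          /* example input */
static long long partial[THREADS];     /* one result slot per thread */

struct range { size_t begin, end; int id; };

static void *sum_worker(void *arg) {
    struct range *r = arg;
    long long local = 0;               /* all the hot work stays thread-private */
    for (size_t i = r->begin; i < r->end; ++i)
        local += big_array[i];
    partial[r->id] = local;            /* one shared write per thread */
    return NULL;
}

int main(void) {
    size_t n = sizeof big_array / sizeof big_array[0];
    for (size_t i = 0; i < n; ++i)
        big_array[i] = 1;

    pthread_t tid[THREADS];
    struct range args[THREADS];
    size_t chunk = n / THREADS;
    for (int i = 0; i < THREADS; ++i) {
        args[i].begin = i * chunk;
        args[i].end = (i == THREADS - 1) ? n : (i + 1) * chunk;
        args[i].id = i;
        pthread_create(&tid[i], NULL, sum_worker, &args[i]);
    }

    long long total = 0;
    for (int i = 0; i < THREADS; ++i) {
        pthread_join(tid[i], NULL);
        total += partial[i];
    }
    printf("total = %lld\n", total);
    return 0;
}
The partial[] slots are still adjacent, but each thread writes its slot only once after its loop, which, as noted above, costs at most one cache miss per thread.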

Dividing processes evenly among threads

I am trying to come up with an algorithm to divide a number of processes as evenly as possible over a number of threads. Each process takes the same amount of time.
The number of processes can vary, from 1 to 1 million. The threadCount is fixed, and can be anywhere from 4 to 48.
The code below does divide all the work evenly, except for the last case, where I throw in what is left over.
Is there a way to fix this so that the work is spread more evenly?
void main(void)
{
    int processBegin[100];
    int processEnd[100];
    int activeProcessCount = 6243;
    int threadCount = 24;

    int processsInBundle = (int) (activeProcessCount / threadCount);
    int processBalance = activeProcessCount - (processsInBundle * threadCount);

    for (int i = 0; i < threadCount; ++i)
    {
        processBegin[i] = i * processsInBundle;
        processEnd[i] = (processBegin[i] + processsInBundle) - 1;
    }
    processEnd[threadCount - 1] += processBalance;

    FILE *debug = fopen("s:\\data\\testdump\\debug.csv", WRITE);
    for (int i = 0; i < threadCount; ++i)
    {
        int processsInBucket = (i == threadCount - 1) ? processsInBundle + processBalance : processBegin[i+1] - processBegin[i];
        fprintf(debug, "%d,start,%d,stop,%d,processsInBucket,%d\n", activeProcessCount, processBegin[i], processEnd[i], processsInBucket);
    }
    fclose(debug);
}
Give the first activeProcessCount % threadCount threads processInBundle + 1 processes each, and give the remaining threads processInBundle each. With the numbers in the question, 6243 / 24 = 260 remainder 3, so the first 3 threads get 261 processes and the other 21 get 260.
int processInBundle = (int) (activeProcessCount / threadCount);
int processSoFar = 0;

for (int i = 0; i < activeProcessCount % threadCount; i++) {
    processBegin[i] = processSoFar;
    processSoFar += processInBundle + 1;
    processEnd[i] = processSoFar - 1;
}

for (int i = activeProcessCount % threadCount; i < threadCount; i++) {
    processBegin[i] = processSoFar;
    processSoFar += processInBundle;
    processEnd[i] = processSoFar - 1;
}
That's the same problem as trying to divide 5 pennies among 3 people. It's just impossible unless you can saw the pennies in half.
Also, even if all processes need an equal amount of theoretical runtime, that doesn't mean they will execute in the same amount of time, due to kernel scheduling, cache performance and various other hardware-related factors.
To suggest some performance optimisations:
Use dynamic scheduling, i.e. split your work into batches (which can be of size 1) and have your threads take one batch at a time, run it, then take the next one. This way the threads will always be working until all batches are gone (see the sketch below).
A more advanced approach is to start with a big batch size (commonly numwork/numthreads) and decrease it each time a thread takes work out of the pool. OpenMP refers to this as guided scheduling.
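A minimal pthreads sketch of the dynamic-scheduling idea (THREADCOUNT, BATCH and the run_process() stub are illustrative placeholders, not from the question): each thread repeatedly claims the next batch of process indices from a shared counter until none remain.
#include <pthread.h>
#include <stdio.h>

#define THREADCOUNT 4
#define BATCH 64                      /* processes handed out per grab */

static int total_work = 6243;         /* activeProcessCount from the question */
static int next_index = 0;            /* first unclaimed process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void run_process(int i) {      /* placeholder for one unit of work */
    (void)i;
}

static void *dynamic_worker(void *arg) {
    int done = 0;
    for (;;) {
        pthread_mutex_lock(&lock);
        int begin = next_index;       /* claim the next batch */
        next_index += BATCH;
        pthread_mutex_unlock(&lock);

        if (begin >= total_work)
            break;                    /* nothing left to hand out */

        int end = begin + BATCH < total_work ? begin + BATCH : total_work;
        for (int i = begin; i < end; ++i)
            run_process(i);
        done += end - begin;
    }
    *(int *)arg = done;
    return NULL;
}

int main(void) {
    pthread_t tid[THREADCOUNT];
    int counts[THREADCOUNT] = { 0 };
    for (int i = 0; i < THREADCOUNT; ++i)
        pthread_create(&tid[i], NULL, dynamic_worker, &counts[i]);
    for (int i = 0; i < THREADCOUNT; ++i) {
        pthread_join(tid[i], NULL);
        printf("thread %d ran %d processes\n", i, counts[i]);
    }
    return 0;
}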

Using shared memory in CUDA without reducing threads

Looking at Mark Harris's reduction example, I am trying to see if I can have threads store intermediate values without a reduction operation:
For example CPU code:
for(int i = 0; i < ntr; i++)
{
    for(int j = 0; j < pos*posdir; j++)
    {
        val = x[i] * arr[j];
        if(val > 0.0)
        {
            out[xcount] = val*x[i];
            xcount += 1;
        }
    }
}
Equivalent GPU code:
const int threads = 64;
num_blocks = ntr/threads;

__global__ void test_g(float *in1, float *in2, float *out1, int *ct, int posdir, int pos)
{
    int tid = threadIdx.x + blockIdx.x*blockDim.x;
    __shared__ float t1[threads];
    __shared__ float t2[threads];

    int gcount = 0;

    for(int i = 0; i < posdir*pos; i += 32) {
        if (threadIdx.x < 32) {
            t1[threadIdx.x] = in2[i%posdir];
        }
        __syncthreads();

        for(int i = 0; i < 32; i++)
        {
            t2[i] = t1[i] * in1[tid];
            if(t2[i] > 0){
                out1[gcount] = t2[i] * in1[tid];
                gcount = gcount + 1;
            }
        }
    }
    ct[0] = gcount;
}
What I am trying to do here is the following:
(1) Store 32 values of in2 in the shared memory variable t1,
(2) For each value of i and in1[tid], calculate t2[i],
(3) If t2[i] > 0 for that particular combination of i, write t2[i]*in1[tid] to out1[gcount].
But my output is all wrong. I am not even able to get a count of all the times t2[i] is greater than 0.
Any suggestions on how to save the value of gcount for each i and tid? As I debug, I find that for block (0,0,0) and thread (0,0,0) I can sequentially see the values of t2 updated. After the CUDA kernel switches focus to block (0,0,0) and thread (32,0,0), the values of out1[0] are re-written again. How can I get/store the values of out1 for each thread and write them to the output?
I tried two approaches so far: (suggested by #paseolatis on NVIDIA forums)
(1) defined offset=tid*32; and replaced out1[gcount] with out1[offset+gcount],
(2) defined
__device__ int totgcount=0; // this line before main()
atomicAdd(&totgcount,1);
out1[totgcount]=t2[i] * in1[tid];
int *h_xc = (int*) malloc(sizeof(int) * 1);
cudaMemcpyFromSymbol(h_xc, totgcount, sizeof(int)*1, cudaMemcpyDeviceToHost);
printf("GPU: xcount = %d\n", h_xc[0]); // Output looks like this: GPU: xcount = 1928669800
Any suggestions? Thanks in advance !
OK let's compare your description of what the code should do with what you have posted (this is sometimes called rubber duck debugging).
Store 32 values of in2 in shared memory variable t1
Your kernel contains this:
if (threadIdx.x < 32) {
    t1[threadIdx.x] = in2[i%posdir];
}
which is effectively loading the same value from in2 into every value of t1. I suspect you want something more like this:
if (threadIdx.x < 32) {
    t1[threadIdx.x] = in2[i+threadIdx.x];
}
For each value of i and in1[tid], calculate t2[i],
This part is OK, but why is t2 needed in shared memory at all? It is only an intermediate result which can be discarded after the inner iteration is completed. You could easily have something like:
float inval = in1[tid];
.......
for(int i = 0; i < 32; i++)
{
    float result = t1[i] * inval;
    ......
if t2[i] > 0 for that particular combination of i, write
t2[i]*in1[tid] to out1[gcount]
This is where the problems really start. Here you do this:
if(t2[i] > 0){
    out1[gcount] = t2[i] * in1[tid];
    gcount = gcount + 1;
}
This is a memory race. gcount is a thread local variable, so each thread will, at different times, overwrite any given out1[gcount] with its own value. What you must have, for this code to work correctly as written, is to have gcount as a global memory variable and use atomic memory updates to ensure that each thread uses a unique value of gcount each time it outputs a value. But be warned that atomic memory access is very expensive if it is used often (this is why I asked about how many output points there are per kernel launch in a comment).
The resulting kernel might look something like this:
__device__ int gcount; // must be set to zero before the kernel launch

__global__ void test_g(float *in1, float *in2, float *out1, int posdir, int pos)
{
    int tid = threadIdx.x + blockIdx.x*blockDim.x;
    __shared__ float t1[32];

    float ival = in1[tid];

    for(int i = 0; i < posdir*pos; i += 32) {
        if (threadIdx.x < 32) {
            t1[threadIdx.x] = in2[i+threadIdx.x];
        }
        __syncthreads();

        for(int j = 0; j < 32; j++)
        {
            float tval = t1[j] * ival;
            if(tval > 0){
                int idx = atomicAdd(&gcount, 1);
                out1[idx] = tval * ival;
            }
        }
    }
}
Disclaimer: written in browser, never been compiled or tested, use at own risk.....
Note that your write to ct was also a memory race, but with gcount now a global value, you can read the value after the kernel without the need for ct.
EDIT: It seems that you are having some problems with zeroing gcount before running the kernel. To do this, you will need to use something like cudaMemcpyToSymbol or perhaps cudaGetSymbolAddress and cudaMemset. It might look something like:
const int zero = 0;
cudaMemcpyToSymbol("gcount", &zero, sizeof(int), 0, cudaMemcpyHostToDevice);
Again, usual disclaimer: written in browser, never been compiled or tested, use at own risk.....
A better way to do what you are doing is to give each thread its own output, and let it increment its own count and enter values - this way, the double for-loop can happen in parallel in any order, which is what the GPU does well. The output is wrong because the threads share the out1 array, so they all overwrite each other's values.
You should also move the code to copy into shared memory into a separate loop, with a __syncthreads() after. With the __syncthreads() out of the loop, you should get better performance - this means that your shared array will have to be the size of in2 - if this is a problem, there's a better way to deal with this at the end of this answer.
You also should move the threadIdx.x < 32 check to the outside. So your code will look something like this:
if (threadIdx.x < 32) {
    for(int i = threadIdx.x; i < posdir*pos; i+=32) {
        t1[i] = in2[i];
    }
}
__syncthreads();

for(int i = threadIdx.x; i < posdir*pos; i += 32) {
    for(int j = 0; j < 32; j++)
    {
        ...
    }
}
Then put a __syncthreads(), an atomic addition of gcount += count, and a copy from the local output array to a global one - this part is sequential, and will hurt performance. If you can, I would just have a global list of pointers to the arrays for each local one, and put them together on the CPU.
Another change is that you don't need shared memory for t2 - it doesn't help you. And the way you are doing this, it seems like it works only if you are using a single block. To get good performance out of most NVIDIA GPUs, you should partition this into multiple blocks. You can tailor this to your shared memory constraint. Of course, you don't have a __syncthreads() between blocks, so the threads in each block have to go over the whole range for the inner loop, and a partition of the outer loop.
