I have a program I'm working on, and the gist of it is that I need to do some operations with threads, following this scheme: the j-th thread Hj calculates a group of 100 consecutive iterations of the sum, distributing the groups cyclically among all the threads. For example, if H = 4, the
thread H2 calculates iterations [100..199, 500..599, 900..999, ...].
To ensure no data races occur, each thread must work on a different sum variable.
Then, after joining the threads, the result achieved by the threads is compared with the one computed sequentially.
Here is the code:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <math.h>
#include <pthread.h>
#include <sys/time.h>

#define H 4

double res[H] = {0};

// Time difference in seconds
float restar_tiempo(struct timeval *inicio, struct timeval *fin) {
    return (fin->tv_sec - inicio->tv_sec) + 1e-6 * (fin->tv_usec - inicio->tv_usec);
}

// Thread function
void *_hilo(void *arg) {
    int a = *((int *) arg);
    double pi = 0;
    double n = 100 * a;
    while (n < 10000000) {
        res[a] += (pow(-1, n) / pow(4, n)) * ((2 / (4 * n + 1)) + (2 / (4 * n + 2)) + (1 / (4 * n + 3)));
        pi++;
        n++;
        if ((int) n % 100 == 0)
            n += (H - 1) * 100;
    }
    printf("Result on thread[%d]: %f\n", a, res[a]);
    pthread_exit(NULL);
}
int main() {
    pthread_t hilo[H];
    struct timeval in, mid, fin;
    gettimeofday(&in, NULL);
    for (int i = 0; i < H; i++) {
        int *p = malloc(sizeof(int));
        *p = i;
        printf("This is i: %d\n", i);
        res[i] = 0;
        if (pthread_create(&hilo[i], NULL, _hilo, p) != 0) {
            perror("Error creating thread");
            exit(EXIT_FAILURE);
        }
        free(p);
    }
    // Join
    for (int i = 0; i < H; i++)
        pthread_join(hilo[i], NULL);
    // Partial sums
    double f = 0;
    for (int i = 0; i < H; i++) {
        printf("Partial result of thread %d: %f\n", i, res[i]);
        f += res[i];
    }
    // Total of the partial sums
    printf("Total result: %lf\n", f);
    gettimeofday(&mid, NULL);
    // Sequential sum
    double s = 0;
    for (double n = 0; n < 10000000; n++)
        s += (pow(-1, n) / pow(4, n)) * ((2 / (4 * n + 1)) + (2 / (4 * n + 2)) + (1 / (4 * n + 3)));
    // Print sequential result
    printf("Sequential result: %f\n", s);
    gettimeofday(&fin, NULL);
    // Difference in results
    printf("Difference in results: %f\n", fabs(f - s));
    // Time with threads
    printf("Time with threads: %f\n", restar_tiempo(&in, &mid));
    // Sequential time
    printf("Sequential time: %f\n", restar_tiempo(&mid, &fin));
    // Difference in times
    printf("Difference in times: %f\n", restar_tiempo(&in, &mid) - restar_tiempo(&mid, &fin));
    return 0;
}
I can compile everything without warnings, but when I execute the program, the result produced by the first thread is erratic: it changes between executions. (The remaining threads display 0 because the terms they work with are tiny.)
Example with some added prints inside the thread function and after doing the join:
First execution:
This is i:0
This is i:1
This is i:2
This is i:3
// Inside the thread function
Thread result[2]: 0.000000
Thread result[2]: 0.000000
Thread result[3]: 0.000000
Thread result[0]: 3.141593
// After the join
Partial result of thread 0: 3.141593
Partial result of thread 1: 0.000000
Partial result of thread 2: 0.000000
Partial result of thread 3: 0.000000
Total result: 3.141593
Sequential result: 3.141593
Difference results: 0.000000
Time per threads: 0.183857
Sequential time: 0.034788
Difference times: 0.149069
Second execution:
This is i:0
This is i:1
This is i:2
This is i:3
Thread result[2]: 0.000000
Thread result[0]: 6.470162
Thread result[0]: 6.470162
Thread result[3]: 0.000000
Partial result of thread 0: 6.470162
Partial result of thread 1: 0.000000
Partial result of thread 2: 0.000000
Partial result of thread 3: 0.000000
Total result: 6.470162
Sequential result: 3.141593
Difference results: 3.328570
Time per threads: 0.189794
Sequential time: 0.374017
Difference times: -0.184223
How can I make the sum work properly?
I think it has something to do with arg in the function _hilo, or the subsequent cast to int a.
(Excuse the mix of languages: I speak Spanish, so some of the output strings were originally in Spanish. Don't mind them; the results block above shows the English translation.)
Update: I solved it, but I don't fully understand why it worked or why this caused issues. I just deleted the free(p) statement and now it works like a charm. If someone can enlighten me on why this happens, I'll be grateful.
Related
Running time is about the same regardless of the number of threads, and I am having trouble figuring out why. I know that the threads are running in parallel as they are supposed to, but I don't have even a good guess as to why there is no performance improvement (approx. 21 seconds to find all primes less than 8 million, for both single and multiple threads). What is going on here?
typedef struct prime_finder_vars {
    long from;
    long to;
    int idx;
} PrimeFinderVars;

int is_prime(long num) {
    int limit = round(sqrt(num));
    for (long i = 2; i <= limit; i++) {
        if (num % i == 0)
            return FALSE;
    }
    return TRUE;
}

void *prime_finder(void *pf) {
    PrimeFinderVars *pf_vars = (PrimeFinderVars *) pf;
    long next_cand = pf_vars->from;
    while (next_cand < pf_vars->to) {
        if (is_prime(next_cand)) {
            ++counts[pf_vars->idx];
        }
        next_cand += 2;
    }
    return pf;
}

int main(void) {
    struct timespec start;
    struct timespec end;
    double start_sec, end_sec, elapsed_sec;
    int sum = 0;
    clock_gettime(CLOCK_REALTIME, &start);
    pthread_t threads[NUM_THREADS];
    PrimeFinderVars vars[NUM_THREADS];
    int slice_size = SEARCH_RANGE / NUM_THREADS;
    for (int i = 0; i < NUM_THREADS; i++) {
        vars[i].from = i * slice_size + 1;
        vars[i].to = (i + 1) * slice_size;
        vars[i].idx = i;
        pthread_create(&threads[i], NULL, prime_finder, &vars[i]);
    }
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
        sum += counts[i];
    }
    clock_gettime(CLOCK_REALTIME, &end);
    start_sec = start.tv_sec + start.tv_nsec / NANO_PER_SEC;
    end_sec = end.tv_sec + end.tv_nsec / NANO_PER_SEC;
    elapsed_sec = end_sec - start_sec;
}
This is an interesting question. Everything Mikhail Vladimirov says is true, but I decided to do some testing on my laptop to see what I got. My laptop is a modern MacBook Pro with an eight-core i9. I'm not sure if it is hyper-threaded or not, but here are my results:
I tested with the number of threads varying between 1 and 50 and a search range of 10,000,000.
With one thread it takes nearly eleven seconds but this drops rapidly to around 1.5 seconds with 16 threads, and it doesn't get any better after that.
My conclusions are:
My comment on Mikhail's answer about the cost of the thread functions is wrong, at least on my platform. I see no increased overhead with more threads
There's something wrong with your thread library.
I think you probably need to satisfy yourself that the threads really are running in parallel on separate cores. One explanation of your results could be that they are all competing for the same CPU.
Just for fun I decided to try profiling the program.
Each step represents another core going to 100%. I'm not sure why the part with three threads doesn't go to 300%, but you can see with four threads that it goes up to 400% straight away but comes down in steps of 100%. This is the effect of splitting the task into equal ranges: the threads dealing with lower numbers finish sooner.
The first 16 data points
Threads Time
1 11.893418
2 7.352520
3 5.117278
4 4.062026
5 3.511605
6 2.892274
7 2.401555
8 2.172573
9 1.910534
10 1.864023
11 1.860944
12 1.369277
13 1.628883
14 1.196646
15 1.626215
16 1.548878
The code I used to produce the test results (slightly modified from yours).
#include <stdio.h>
#include <pthread.h>
#include <math.h>
#include <stdbool.h>
#include <time.h>

#define SEARCH_RANGE 10000000
#define NANO_PER_SEC 1000000000

typedef struct prime_finder_vars {
    long from;
    long to;
    int *count;
} PrimeFinderVars;

int is_prime(long num) {
    int limit = round(sqrt(num));
    for (long i = 2; i <= limit; i++) {
        if (num % i == 0)
            return false;
    }
    return true;
}

void *prime_finder(void *pf)
{
    PrimeFinderVars *pf_vars = (PrimeFinderVars *) pf;
    long next_cand = pf_vars->from;
    while (next_cand < pf_vars->to)
    {
        if (is_prime(next_cand))
        {
            (*pf_vars->count)++;
        }
        next_cand += 2;
    }
    return pf;
}

void trial(int numThreads)
{
    struct timespec start;
    struct timespec end;
    double start_sec, end_sec, elapsed_sec;
    int sum = 0;

    clock_gettime(CLOCK_REALTIME, &start);

    int counts[numThreads];
    pthread_t threads[numThreads];
    PrimeFinderVars vars[numThreads];
    int slice_size = SEARCH_RANGE / numThreads;

    for (int i = 0; i < numThreads; i++)
    {
        counts[i] = 0;
        vars[i].from = i * slice_size + 1;
        vars[i].to = (i + 1) * slice_size;
        vars[i].count = &counts[i];
        pthread_create(&threads[i], NULL, prime_finder, &vars[i]);
    }
    for (int i = 0; i < numThreads; i++)
    {
        pthread_join(threads[i], NULL);
        sum += counts[i];
    }

    clock_gettime(CLOCK_REALTIME, &end);
    start_sec = (double)start.tv_sec + (double)start.tv_nsec / NANO_PER_SEC;
    end_sec = (double)end.tv_sec + (double)end.tv_nsec / NANO_PER_SEC;
    elapsed_sec = end_sec - start_sec;
    printf("%d\t%f\n", numThreads, elapsed_sec);
}

int main()
{
    printf("Threads\tTime\n");
    for (int threads = 1; threads <= 50; ++threads)
    {
        trial(threads);
    }
}
I pursued this a bit further over the last day or two. Firstly, I was intrigued by why there seemed to be a double line of timings: after about 12 threads, a run would take either 1.5 seconds or 1 second. I theorised above that it was because of the bug Mikhail mentioned, so I plotted the actual answer given for each number of threads. While the answer was usually around 664,579, it was often around half that, and unsurprisingly, when the answer was half the real answer, it corresponded to the lower of the two timing lines.
So I fixed that bug and the double line effect disappeared. However, I was still getting more than one different answer depending on the number of threads.
The reason is that there are two more bugs:
The original algorithm fails to test the top number in each range.
The size of the ranges is calculated by an integer division of the search range by the number of threads, so unless there is no remainder, numbers at the top of the search range are never checked.
I fixed both bugs and did a third run. This didn't affect the timings appreciably, but I got the same answer for each number of threads used.
For comparison, I wrote a sieve of Eratosthenes and timed that too. Using it and a single thread took only 0.2 seconds - about seven times faster than the fastest number of cores.
I've published a spreadsheet of the results and there's a git repo of the code.
Workload is not well balanced between threads. Each thread has to check about the same number of candidates, but for threads with higher indexes, checking each candidate takes more time than it does for threads with lower indexes.
I would rewrite the main loop like this:
for (long candidate = pf_vars->idx;
     candidate < SEARCH_RANGE;
     candidate += NUM_THREADS) {
    if (is_prime(candidate)) {
        ++counts[pf_vars->idx];
    }
}
NUM_THREADS has to be prime itself for this to work efficiently.
Also, I doubt your code produces correct results, as in case pf_vars->from is even, prime_finder will check only even candidates which doesn't make much sense.
Also, threads run in parallel only when they run on different cores. If the number of threads is much greater than the number of cores, performance will degrade, as switching a core between several threads also takes time.
After you've identified 2 and 3 as prime, all remaining primes take the form 6N±1, for N >= 1. To balance the workload across K threads, you should have each of the K threads stepping through its own sequence of values for N: thread T working on 1 + T + K * X, where for each thread, X sequences from 0 upwards. If you have 8 threads, it means thread 0 works on N₀ = { 1, 9, 17, … }, etc. It still means thread K-1 does more work than thread 0 because it is tackling bigger numbers, but the discrepancy is much less than when you slice the ranges horizontally. This means you need to provide each thread with a starting number, S, and the total number of threads, K, and the thread will then set a counter x to values 0, 1, ... and check for primes 6(S + xK)±1.
With that as a decent basis for a multi-threaded prime finder, here is a program closely based on your code. It uses some code that is available in my SOQ (Stack Overflow Questions) repository on GitHub as files timer.c, timer.h, stderr.c and stderr.h in the src/libsoq sub-directory. It uses the function isprime(), renamed from IsPrime3B(), found in the file isprime.c in the src/Primes sub-directory of my SOQ repository. This program is prime-thread.c from the same src/Primes directory.
/* SO 6438-1942 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include "stderr.h"
#include "timer.h"

#define NANO_PER_SEC 1.0E9

enum { NUM_THREADS = 8 };
enum { MAX_NUMBER = 10000000 };

static size_t counts[NUM_THREADS];

static const unsigned int small_primes[] =
{
     5,  7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
    53, 59, 61, 67, 71, 73, 79, 83, 89, 97
};
enum { NUM_SMALL_PRIMES = sizeof(small_primes) / sizeof(small_primes[0]) };

/* IsPrime3B() from isprime.c - renamed is_prime() */
static int is_prime(unsigned number)
{
    if (number <= 1)
        return 0;
    if (number == 2 || number == 3)
        return 1;
    if (number % 2 == 0 || number % 3 == 0)
        return 0;
    for (unsigned i = 0; i < NUM_SMALL_PRIMES; i++)
    {
        if (number == small_primes[i])
            return 1;
        if (number % small_primes[i] == 0)
            return 0;
    }
    /* After 97, the next prime numbers are 101, 103, 107, 109 */
    /*
    ** It would be feasible to start this loop from:
    ** i = (((small_primes[NUM_SMALL_PRIMES - 1] + 1) / 6) + 1) * 6
    */
    for (unsigned i = 102; (i - 1) <= number / (i - 1); i += 6)
    {
        if (number % (i - 1) == 0 || number % (i + 1) == 0)
            return 0;
    }
    return 1;
}

typedef struct prime_finder_vars
{
    unsigned from;
    unsigned to;
    unsigned increment;     /* Number of threads */
    unsigned idx;
} PrimeFinderVars;

static void *prime_finder(void *pf)
{
    PrimeFinderVars *pf_vars = (PrimeFinderVars *) pf;
    printf("Thread %u: from = %u, to = %u, inc = %u\n",
           pf_vars->idx, pf_vars->from, pf_vars->to, pf_vars->increment);
    unsigned next = pf_vars->from;
    while (next < pf_vars->to)
    {
        unsigned six_n = 6 * next;
        if (is_prime(six_n - 1))
            ++counts[pf_vars->idx];
        if (is_prime(six_n + 1))
            ++counts[pf_vars->idx];
        next += pf_vars->increment;
    }
    printf("Thread %u: done\n", pf_vars->idx);
    return pf;
}

int main(int argc, char **argv)
{
    err_setarg0(argv[0]);
    if (argc != 1)
        err_usage("");
    struct timespec start;
    struct timespec end;
    double start_sec, end_sec, elapsed_sec;
    int sum = 0;
    Clock clk;
    clk_init(&clk);
    clk_start(&clk);
    clock_gettime(CLOCK_REALTIME, &start);
    pthread_t threads[NUM_THREADS];
    PrimeFinderVars vars[NUM_THREADS];
    int max_n = (MAX_NUMBER + 5) / 6;
    for (int i = 0; i < NUM_THREADS; i++)
    {
        vars[i].from = i + 1;
        vars[i].to = max_n;
        vars[i].idx = i;
        vars[i].increment = NUM_THREADS;
        int rc;
        if ((rc = pthread_create(&threads[i], NULL, prime_finder, &vars[i])) != 0)
            err_syserr("failed to create thread %d: ", i);
    }
    for (int i = 0; i < NUM_THREADS; i++)
    {
        pthread_join(threads[i], NULL);
        sum += counts[i];
    }
    clock_gettime(CLOCK_REALTIME, &end);
    clk_stop(&clk);
    start_sec = start.tv_sec + start.tv_nsec / NANO_PER_SEC;
    end_sec = end.tv_sec + end.tv_nsec / NANO_PER_SEC;
    elapsed_sec = end_sec - start_sec;
    printf("Time 1: %.6f\n", elapsed_sec);
    char buffer[32];
    printf("Time 2: %s\n", clk_elapsed_us(&clk, buffer, sizeof(buffer)));
    /* Because 2 and 3 are primes but are not analyzed */
    size_t t_count = 2;
    for (int i = 0; i < NUM_THREADS; i++)
    {
        t_count += counts[i];
        printf("%d: %7zu primes found\n", i, counts[i]);
    }
    printf("Total primes found up to %d = %zu\n", MAX_NUMBER, t_count);
    return 0;
}
Example output:
$ timecmd -u -- prime-thread
2020-10-16 12:15:05.101785 [PID 75174] prime-thread
Thread 0: from = 1, to = 1666667, inc = 8
Thread 7: from = 8, to = 1666667, inc = 8
Thread 2: from = 3, to = 1666667, inc = 8
Thread 3: from = 4, to = 1666667, inc = 8
Thread 5: from = 6, to = 1666667, inc = 8
Thread 4: from = 5, to = 1666667, inc = 8
Thread 6: from = 7, to = 1666667, inc = 8
Thread 1: from = 2, to = 1666667, inc = 8
Thread 0: done
Thread 6: done
Thread 4: done
Thread 7: done
Thread 3: done
Thread 5: done
Thread 2: done
Thread 1: done
Time 1: 0.231135
Time 2: 0.231135
0: 83090 primes found
1: 83176 primes found
2: 83023 primes found
3: 82996 primes found
4: 83060 primes found
5: 82995 primes found
6: 83179 primes found
7: 83058 primes found
Total primes found up to 10000000 = 664579
2020-10-16 12:15:05.341489 [PID 75174; status 0x0000] - 0.239704s
$
There are indeed 664,579 primes less than 10,000,000.
Note that the timecmd program counts the entire running time (start-up and printing) of the prime-thread, whereas the internal timing only counts the thread creation, running, and termination time. That accounts for the 8 ms timing difference. (It's a home-brew program that I use for timing commands. It's loosely similar to the system-provided time command — but significantly different too.)
Given a list of the primes up to 10,000,000, it would be feasible to calculate how many primes each thread should have found. Given that the totals are correct, it is unlikely that there's a problem there, though.
Timing
Note that the question says it took 21 seconds to count the number of primes up to 8,000,000. This code took 0.231 seconds to count the number of primes up to 10,000,000.
That suggests that the isprime() function in use is not as good as the one I used.
Indeed, the code shown is:
int is_prime(long num) {
int limit = round(sqrt(num));
for (long i = 2; i <= limit; i++) {
if (num % i == 0)
return FALSE;
}
return TRUE;
}
This does far more work than necessary. There's only one even prime number, 2. This code checks 4, 6, 8, … which are trivially non-prime. That's twice as much work as necessary. Checking for 2 and then only checking odd numbers would be a significant improvement. Checking 2, 3, and then numbers which match 6N±1 gives another improvement.
Even so, checking one third as much data would only improve things by a factor of 3. It is likely that the unbalanced workload is a bigger factor: with 8 threads (0..7), thread 7 works on the range 7,000,000..8,000,000 and has a lot more computation to do than thread 0 working on the range 0..1,000,000, even though there are fewer primes for it to count.
The question doesn't show a complete MCVE (Minimal, Complete, Verifiable Example, or MRE, or whatever name SO now uses) or an SSCCE (Short, Self-Contained, Correct Example). In particular, it doesn't show how many threads are in use.
I have not parameterized prime-thread.c to take a variable number of threads and a variable range for analysis (and removed the thread debugging printing), but perhaps I should, to see how much it changes the behaviour. On my machine, it is unlikely that more threads would improve things; it may be that fewer threads would be better.
I have a program primes which prints the primes in a given range using a Sieve of Eratosthenes. It prints all 664,579 primes up to 10 million in about 0.135 seconds when the output goes to (SSD) file or /dev/null. That is significantly faster than prime-thread manages to count the primes. Quite a lot of the benefit there is from the better algorithm. This isprime() function does a lot of computation for each candidate number.
Two lessons to draw from this:
Algorithms matter.
Threads aren't a panacea for speeding things up.
I wrote a program with two threads doing the same thing, but I found that the throughput of each thread was lower than if I spawned only one thread. Then I wrote this simple test to see whether the problem was in my program or in the system.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <time.h>

/*
 * Function: run_add
 * -----------------------
 * Do addition operation for iteration ^ 3 times
 *
 * returns: void
 */
void *run_add(void *ptr) {
    clock_t t1, t2;
    t1 = clock();
    int sum = 0;
    int i = 0, j = 0, k = 0;
    int iteration = 1000;
    long total = iteration * iteration * iteration;
    for (i = 0; i < iteration; i++) {
        for (j = 0; j < iteration; j++) {
            for (k = 0; k < iteration; k++) {
                sum++;
            }
        }
    }
    t2 = clock();
    float diff = ((float)(t2 - t1) / 1000000.0F);
    printf("thread id = %d\n", (int)(pthread_self()));
    printf("Total addtions: %ld\n", total);
    printf("Total time: %f second\n", diff);
    printf("Addition per second: %f\n", total / diff);
    printf("\n");
    return NULL;
}

void run_test(int num_thread) {
    pthread_t pth_arr[num_thread];
    int i = 0;
    for (i = 0; i < num_thread; i++) {
        pthread_create(&pth_arr[i], NULL, run_add, NULL);
    }
    for (i = 0; i < num_thread; i++) {
        pthread_join(pth_arr[i], NULL);
    }
}

int main() {
    int num_thread = 5;
    int i = 0;
    for (i = 1; i < num_thread; i++) {
        printf("Running SUM with %d threads. \n\n", i);
        run_test(i);
    }
    return 0;
}
The result still shows that the average speed of n threads is slower than that of a single thread, and the more threads I have, the slower each one is.
Here's the result:
Running SUM with 1 threads.
thread id = 528384,
Total addtions: 1000000000,
Total time: 1.441257 second,
Addition per second: 693838784.000000
Running SUM with 2 threads.
thread id = 528384,
Total addtions: 1000000000,
Total time: 2.970870 second,
Addition per second: 336601728.000000
thread id = 1064960,
Total addtions: 1000000000,
Total time: 2.972992 second,
Addition per second: 336361504.000000
Running SUM with 3 threads.
thread id = 1064960,
Total addtions: 1000000000,
Total time: 4.434701 second,
Addition per second: 225494352.000000
thread id = 1601536,
Total addtions: 1000000000,
Total time: 4.449250 second,
Addition per second: 224756976.000000
thread id = 528384,
Total addtions: 1000000000,
Total time: 4.454826 second,
Addition per second: 224475664.000000
Running SUM with 4 threads.
thread id = 528384,
Total addtions: 1000000000,
Total time: 6.261967 second,
Addition per second: 159694224.000000
thread id = 1064960,
Total addtions: 1000000000,
Total time: 6.293107 second,
Addition per second: 158904016.000000
thread id = 2138112,
Total addtions: 1000000000,
Total time: 6.295047 second,
Addition per second: 158855056.000000
thread id = 1601536,
Total addtions: 1000000000,
Total time: 6.306261 second,
Addition per second: 158572560.000000
I have a 4-core CPU, and my system monitor shows that each time I run n threads, n CPU cores are 100% utilized. Isn't it true that n threads (n <= my CPU core count) are supposed to run n times as fast as one thread? Why is that not the case here?
clock() measures CPU time, not "wall" time, and it measures the total time across all threads.
CPU time is the time the processor spent executing your code; wall time is real-world elapsed time (what a clock on the wall would show).
Time your program using /usr/bin/time to see what's really happening, or use a wall-time function such as time(), gettimeofday() or clock_gettime().
clock_gettime() can measure CPU time for this thread, for this process, or wall time; it's probably the best way to do this type of experiment.
While you have your answer regarding why the multi-threaded performance seemed worse than single-threaded, there are several things you can do to clean up the logic of your program and make it work as you apparently intended.
First, if you had compared the wall time that passed against the diff of the clock() times, you would have noticed that the reported time was approximately an n-core multiple of the actual wall time. That was explained in the other answer.
For relative per-core performance timing, the use of clock() is fine. You get only an approximation of wall time, but for looking at relative additions per second, it provides a clean per-core view of performance.
While you have correctly used a divisor of 1000000 for diff, time.h provides a convenient define for you: POSIX requires that CLOCKS_PER_SEC equal 1000000, independent of the actual timer resolution.
Next, notice that your per-thread output isn't reported until all threads are joined, which makes reporting totals in run_add somewhat pointless. You can output thread_id, etc. from the individual threads for convenience, but the timing information should be computed back in the calling function after all threads have been joined. That will clean up the logic of run_add significantly. Further, if you want to be able to vary the number of iterations, consider passing that value through ptr, e.g.:
/*
 * Function: run_add
 * -----------------------
 * Do addition operation for iteration ^ 3 times
 *
 * returns: void
 */
void *run_add(void *ptr)
{
    int i = 0, j = 0, k = 0, iteration = *(int *)ptr;
    unsigned long sum = 0;

    for (i = 0; i < iteration; i++)
        for (j = 0; j < iteration; j++)
            for (k = 0; k < iteration; k++)
                sum++;

    printf(" thread id = %lu\n", (long unsigned)(pthread_self()));
    printf(" iterations = %lu\n\n", sum);

    return NULL;
}
run_test is relatively unchanged; the bulk of the calculation code moves from run_add to main and is scaled to account for the number of cores utilized. The following rewrite of main lets the user specify the number of cores to use as the first argument (all cores by default) and the base for the cubed number of iterations as the second argument (1000 by default):
int main(int argc, char **argv)
{
    int nproc = sysconf(_SC_NPROCESSORS_ONLN),    /* number of cores available */
        num_thread = argc > 1 ? atoi(argv[1]) : nproc,
        iter = argc > 2 ? atoi(argv[2]) : 1000;
    unsigned long subtotal = iter * iter * iter,
        total = subtotal * num_thread;
    double diff = 0.0, t1 = 0.0, t2 = 0.0;

    if (num_thread > nproc)
        num_thread = nproc;

    printf("\nrunning sum with %d threads.\n\n", num_thread);

    t1 = clock();
    run_test(num_thread, &iter);
    t2 = clock();
    diff = (double)((t2 - t1) / CLOCKS_PER_SEC / num_thread);

    printf("----------------\nTotal time: %lf second\n", diff);
    printf("Total addtions: %lu\n", total);
    printf("Additions per-second: %lf\n\n", total / diff);

    return 0;
}
Putting all the pieces together, you could write a working example as follows. Make sure you disable optimizations to prevent your compiler from optimizing out your loops for sum, etc...
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <time.h>
#include <unistd.h>

/*
 * Function: run_add
 * -----------------------
 * Do addition operation for iteration ^ 3 times
 *
 * returns: void
 */
void *run_add(void *ptr)
{
    int i = 0, j = 0, k = 0, iteration = *(int *)ptr;
    unsigned long sum = 0;

    for (i = 0; i < iteration; i++)
        for (j = 0; j < iteration; j++)
            for (k = 0; k < iteration; k++)
                sum++;

    printf(" thread id = %lu\n", (long unsigned)(pthread_self()));
    printf(" iterations = %lu\n\n", sum);

    return NULL;
}

void run_test(int num_thread, int *it)
{
    pthread_t pth_arr[num_thread];
    int i = 0;

    for (i = 0; i < num_thread; i++)
        pthread_create(&pth_arr[i], NULL, run_add, it);

    for (i = 0; i < num_thread; i++)
        pthread_join(pth_arr[i], NULL);
}

int main(int argc, char **argv)
{
    int nproc = sysconf(_SC_NPROCESSORS_ONLN),
        num_thread = argc > 1 ? atoi(argv[1]) : nproc,
        iter = argc > 2 ? atoi(argv[2]) : 1000;
    unsigned long subtotal = iter * iter * iter,
        total = subtotal * num_thread;
    double diff = 0.0, t1 = 0.0, t2 = 0.0;

    if (num_thread > nproc)
        num_thread = nproc;

    printf("\nrunning sum with %d threads.\n\n", num_thread);

    t1 = clock();
    run_test(num_thread, &iter);
    t2 = clock();
    diff = (double)((t2 - t1) / CLOCKS_PER_SEC / num_thread);

    printf("----------------\nTotal time: %lf second\n", diff);
    printf("Total addtions: %lu\n", total);
    printf("Additions per-second: %lf\n\n", total / diff);

    return 0;
}
Example Use/Output
Now you can measure the relative number of additions per-second performed based on the number of cores utilized -- and have it return a Total time that is roughly what wall-time would be. For example, measuring the additions per-second using a single core results in:
$ ./bin/pthread_one_per_core 1
running sum with 1 threads.
thread id = 140380000397056
iterations = 1000000000
----------------
Total time: 2.149662 second
Total addtions: 1000000000
Additions per-second: 465189411.172547
Approximately 465M additions per second. Using two cores should double that rate:
$ ./bin/pthread_one_per_core 2
running sum with 2 threads.
thread id = 140437156796160
iterations = 1000000000
thread id = 140437165188864
iterations = 1000000000
----------------
Total time: 2.152436 second
Total addtions: 2000000000
Additions per-second: 929179560.000957
Exactly twice the additions per second, at 929M/s. Using 4 cores:
$ ./bin/pthread_one_per_core 4
running sum with 4 threads.
thread id = 139867841853184
iterations = 1000000000
thread id = 139867858638592
iterations = 1000000000
thread id = 139867867031296
iterations = 1000000000
thread id = 139867850245888
iterations = 1000000000
----------------
Total time: 2.202021 second
Total addtions: 4000000000
Additions per-second: 1816513309.422720
Doubled again to 1.81G/s, and using 8 cores gives the expected result:
$ ./bin/pthread_one_per_core
running sum with 8 threads.
thread id = 140617712838400
iterations = 1000000000
thread id = 140617654089472
iterations = 1000000000
thread id = 140617687660288
iterations = 1000000000
thread id = 140617704445696
iterations = 1000000000
thread id = 140617662482176
iterations = 1000000000
thread id = 140617696052992
iterations = 1000000000
thread id = 140617670874880
iterations = 1000000000
thread id = 140617679267584
iterations = 1000000000
----------------
Total time: 2.250243 second
Total addtions: 8000000000
Additions per-second: 3555171004.558562
3.55G/s. Look over both answers and let us know if you have any questions.
Note: there are a number of additional clean-ups and validations that could be applied, but for the purposes of your example, updating the types to a sensible unsigned type prevents strange results for thread_id and the addition counts.
I'm working on a programming assignment and I'm getting strange results.
The idea is to calculate the number of processor ticks and time taken to run the algorithm.
Usually the code runs so quickly that the time taken is 0 sec, but I noticed that the number of processor ticks was 0 at the start and at the finish, resulting in 0 processor ticks taken.
I added a delay using usleep so that the time taken was non-zero, but the processor ticks is still zero and the calculation between the time stamps is still zero.
I've been banging my head on this for several days now and can't get past this problem, any suggestions are extremely welcome.
My code is below:
/* This program takes an input "n". If n is even it divides n by 2
* If n is odd, it multiples n by 3 and adds 1. Each time through the loop
* it iterates a counter.
* It continues until n is 1
*
* This program will compute the time taken to perform the above algorithm
*/
#include <stdio.h>
#include <time.h>
void delay(int);
int main(void) {
int n, i = 0;
time_t start, finish, duration;
clock_t startTicks, finishTicks, diffTicks;
printf("Clocks per sec = %d\n", CLOCKS_PER_SEC);
printf("Enter an integer: ");
scanf("%d", &n); // read value from keyboard
time(&start); // record start time in ticks
startTicks = clock();
printf("Start Clock = %s\n", ctime(&start));
printf("Start Processor Ticks = %d\n", startTicks);
while (n != 1) { // continues until n=1
i++; // increment counter
printf("iterations =%d\t", i); // display counter iterations
if (n % 2) { // if n is odd, n=3n+1
printf("Input n is odd!\t\t");
n = (n * 3) + 1;
printf("Output n = %d\n", n);
delay(1000000);
} else { //if n is even, n=n/2
printf("Input n is even!\t");
n = n / 2;
printf("Output n = %d\n", n);
delay(1000000);
}
}
printf("n=%d\n", n);
time(&finish); // record finish time in ticks
finishTicks = clock();
printf("Stop time = %s\n", ctime(&finish));
printf("Stop Processor Ticks = %d\n", finishTicks);
duration = difftime(finish, start); // compute difference in time
diffTicks = finishTicks - startTicks;
printf("Time elapsed = %2.4f seconds\n", duration);
printf("Processor ticks elapsed = %ld\n", (long)diffTicks);
return (n);
}
void delay(int us) {
usleep(us);
}
EDIT: So after researching further, I discovered that usleep() doesn't consume processor time (clock() measures CPU time, and a sleeping process uses none), so I wrote a delay function in asm. Now I am getting a value for processor ticks, but I am still getting zero seconds for the algorithm's run time.
void delay(int us) {
for (int i = 0; i < us; i++) {
__asm__("nop");
}
}
You can calculate the elapsed time using the below formula.
double timeDiff = (double)(EndTime - StartTime) / CLOCKS_PER_SEC;
Here is the dummy code.
void CalculateTime(clock_t startTime, clock_t endTime)
{
clock_t diffTime = endTime - startTime;
printf("Processor time elapsed = %lf\n", (double)diffTime /CLOCKS_PER_SEC);
}
Hope this helps.
You are trying to time an implementation of the Collatz conjecture. I don't see how you can hope to get a meaningful execution time when it contains delays. Another problem is the granularity of clock() results, as shown by the value of CLOCKS_PER_SEC.
It is even more difficult trying to use time() which has a resolution of 1 second.
The way to do it is to compute a large number of values. The program below prints only a handful of them — enough to ensure the calculations are not optimised away, but not so many as to distort the timing.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define SAMPLES 100000
int main(void) {
int i, j, n;
double duration;
clock_t startTicks = clock();
for(j=2; j<SAMPLES; j++) {
n = j; // starting number
i = 0; // iterations
while(n != 1) {
if (n % 2){ // if n is odd, n=3n+1
n = n * 3 + 1;
}
else { // if n is even, n=n/2
n = n / 2;
}
i++;
}
if(j % (SAMPLES/10) == 0) // print 10 results only
printf ("%d had %d iterations\n", j, i);
}
duration = ((double)clock() - startTicks) / CLOCKS_PER_SEC;
printf("\nDuration: %f seconds\n", duration);
return 0;
}
Program output:
10000 had 29 iterations
20000 had 30 iterations
30000 had 178 iterations
40000 had 31 iterations
50000 had 127 iterations
60000 had 179 iterations
70000 had 81 iterations
80000 had 32 iterations
90000 had 164 iterations
Duration: 0.090000 seconds
This program estimates Pi by throwing random "darts" (sampling points) at a circle of radius=1 inscribed inside a square board of side length=2. Using the relationship
Area of circle / Area of Square = Pi/4
we can estimate Pi using the same relationship expressed as
Darts Inside Circle / Total Darts = Pi/4
The program works fine when I specify NDARTS in a #define, but when trying to broadcast it as a long long int, read from scanf, I get the following execution error:
mpirun -np 4 ./pi_montecarlo.x
-----------------------------------------------------------------------------
One of the processes started by mpirun has exited with a nonzero exit
code. This typically indicates that the process finished in error.
If your process did not finish in error, be sure to include a "return
0" or "exit(0)" in your C code before exiting the application.
PID 10591 failed on node n0 (127.0.0.1) due to signal 11.
Why?
Is there anything wrong with my MPI_Bcast declaration?
long long int *NDARTS=0;
scanf("%Ld",NDARTS);
MPI_Bcast(NDARTS, 1, MPI_LONG_LONG_INT, 0, MPI_COMM_WORLD);
Full code:
/*
mpicc -g -Wall -lm pi_montecarlo3.c -o pi_montecarlo.x
mpirun -np 4 ./pi_montecarlo.x
*/
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <mpi.h>
#define MASTER 0
#define PI 3.1415926535
double pseudo_random (double a, double b) {
double r;
r = ((b-a) * ((double) rand() / (double) RAND_MAX)) +a;
return r;
}
int main(int argc, char*argv[]){
long long int *NDARTS=0;
int proc_id,
n_procs,
llimit,
ulimit,
n_circle,
i;
double pi_current,
pi_sum,
x,
y,
z,
error,
start_time,
end_time;
struct timeval stime;
llimit = -1;
ulimit = 1;
n_circle =0;
MPI_Init(&argc, &argv);
MPI_Comm_rank (MPI_COMM_WORLD, &proc_id);
MPI_Comm_size (MPI_COMM_WORLD, &n_procs);
if (proc_id == MASTER){
printf("\nMonte Carlo Method to estimate Pi \n\n");
printf("Introduce Number of Darts \n");
scanf("%Ld",NDARTS);
printf(" Number of processes: %d \n", n_procs);
printf(" Number of darts: %Ld \n", *NDARTS);
MPI_Bcast(NDARTS, 1, MPI_LONG_LONG_INT, 0, MPI_COMM_WORLD);
start_time = MPI_Wtime();
}
gettimeofday(&stime, NULL);
srand(stime.tv_usec * stime.tv_usec * stime.tv_usec * stime.tv_usec);
for (i=1; i<=*NDARTS;i++){
x = pseudo_random(llimit, ulimit);
y = pseudo_random(llimit, ulimit);
z = pow(x,2) + pow(y,2);
if (z<=1.0){
n_circle++;
}
}
pi_current = 4.0 * (double)n_circle / (double) *NDARTS;
MPI_Reduce (&pi_current, &pi_sum, 1, MPI_DOUBLE, MPI_SUM, MASTER, MPI_COMM_WORLD);
if (proc_id == MASTER) {
pi_sum = pi_sum / n_procs;
error = fabs ((pi_sum -PI) / PI) *100;
end_time = MPI_Wtime();
printf("Known value of PI : %11.10f \n", PI);
printf("Estimated Value of PI : %11.10f\n", pi_sum);
printf("Error Percentage : %10.8f\n", error);
printf("Time : %10.8f\n\n", end_time - start_time);
}
MPI_Finalize();
return 0;
}
You're not using scanf() correctly. It should be like this instead:
long long int NDARTS;
scanf("%lld",&NDARTS);
MPI_Bcast(&NDARTS, 1, MPI_LONG_LONG_INT, 0, MPI_COMM_WORLD);
In your current code, long long int *NDARTS=0; declares NDARTS as a NULL pointer, not as storage for a number. So scanf() seg-faults (signal 11) when it tries to write through it. Note also that %Ld is not a standard conversion for long long; the portable specifier is %lld.
I've been trying to parallelize this piece of code for about two days and keep hitting logical errors. The program finds the area under a curve by summing the integrand over very small intervals dx, evaluating it at each discrete point. I am trying to implement this with OpenMP, but I have no real experience with OpenMP and would appreciate your help. The actual goal is to split the suma accumulation across the threads so that each thread calculates a share of the integral's values. The program compiles successfully, but when I execute it, it returns wrong results.
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(int argc, char *argv[]){
float down = 1, up = 100, dx, suma = 0, j;
int steps, i, nthreads, tid;
long starttime, finishtime, runtime;
starttime = omp_get_wtime();
steps = atoi(argv[1]);
dx = (up - down) / steps;
nthreads = omp_get_num_threads();
tid = omp_get_thread_num();
#pragma omp parallel for private(i, j, tid) reduction(+:suma)
for(i = 0; i < steps; i++){
for(j = (steps / nthreads) * tid; j < (steps / nthreads) * (tid + 1); j += dx){
suma += ((j * j * j) + ((j + dx) * (j + dx) * (j + dx))) / 2 * dx;
}
}
printf("For %d steps the area of the integral 3 * x^2 + 1 from %f to %f is: %f\n", steps, down, up, suma);
finishtime = omp_get_wtime();
runtime = finishtime - starttime;
printf("Runtime: %ld\n", runtime);
return (0);
}
The problem lies within your for-loop. If you use the for-pragma, OpenMP does the loop splitting for you:
#pragma omp parallel for private(i) reduction(+:suma)
for(i = 0; i < steps; i++) {
// recover the x-position of the i-th step
float x = down + i * dx;
// evaluate the function at x
float y = 3.0f * x * x + 1;
// add the area of this rectangle to the overall integral
suma += y * dx;
}
Even if you converted to a parallelisation scheme where you computed the indices yourself, this would be problematic: the outer loop should then be executed only nthreads times, not steps times.
You should also consider switching to double for increased accuracy.
Let's just consider the threads=1 case. This:
#pragma omp parallel for private(i, j, tid) reduction(+:suma)
for(i = 0; i < steps; i++){
for(j = (steps / nthreads) * tid; j < (steps / nthreads) * (tid + 1); j += dx){
suma += ((j * j * j) + ((j + dx) * (j + dx) * (j + dx))) / 2 * dx;
}
}
turns into this:
for(i = 0; i < steps; i++){
for(j = 0; j < steps; j += dx){
suma += ((j * j * j) + ((j + dx) * (j + dx) * (j + dx))) / 2 * dx;
}
}
and you can start to see the problem; you're basically looping over steps^2 iterations.
In addition, your second loop doesn't make any sense, as you're incrementing by dx. The same confusion between indices (i, j) and locations in the physical domain (i*dx) shows up in your increment: j+dx doesn't make any sense. Presumably you want to be increasing suma by (f(x) + f(x'))*dx/2 (eg, the trapezoidal rule); that should be
float x = down + i*dx;
suma += dx * ((3 * x * x + 1) + (3 * (x + dx) * (x + dx) + 1)) / 2;
As ebo points out, you want to be summing the integrand, not its antiderivative.
Now if we include a check on the answer:
printf("For %d steps the area of the integral 3 * x^2 + 1 from %f to %f is: %f (expected: %f)\n",
steps, down, up, suma, up*up*up-down*down*down + up - down);
and we run it in serial, we start getting the right answer:
$ ./foo 10
For 10 steps the area of the integral 3 * x^2 + 1 from 1.000000 to 100.000000 is: 1004949.375000 (expected: 1000098.000000)
Runtime: 0
$ ./foo 100
For 100 steps the area of the integral 3 * x^2 + 1 from 1.000000 to 100.000000 is: 1000146.562500 (expected: 1000098.000000)
Runtime: 0
$ ./foo 1000
For 1000 steps the area of the integral 3 * x^2 + 1 from 1.000000 to 100.000000 is: 1000098.437500 (expected: 1000098.000000)
Runtime: 0
There's no point at all in worrying about the OpenMP case until the serial case works.
Once it comes time to OpenMP this, as ebo points out, the easiest thing to do is to just let OpenMP do your loop decomposition for you: eg,
#pragma omp parallel for reduction(+:suma)
for(i = 0; i < steps; i++){
float x = down + i*dx;
suma += dx * ((3 * x * x + 1) + (3 * (x + dx) * (x + dx) + 1)) / 2;
}
Running this, one gets
$ setenv OMP_NUM_THREADS 1
$ ./foo 1000
For 1000 steps the area of the integral 3 * x^2 + 1 from 1.000000 to 100.000000 is: 1000098.437500 (expected: 1000098.000000)
Runtime: 0
$ setenv OMP_NUM_THREADS 2
$ ./foo 1000
For 1000 steps the area of the integral 3 * x^2 + 1 from 1.000000 to 100.000000 is: 1000098.437500 (expected: 1000098.000000)
Runtime: 0
$ setenv OMP_NUM_THREADS 4
$ ./foo 1000
For 1000 steps the area of the integral 3 * x^2 + 1 from 1.000000 to 100.000000 is: 1000098.625000 (expected: 1000098.000000)
Runtime: 0
$ setenv OMP_NUM_THREADS 8
$ ./foo 1000
For 1000 steps the area of the integral 3 * x^2 + 1 from 1.000000 to 100.000000 is: 1000098.500000 (expected: 1000098.000000)
One can do the blocking explicitly in OpenMP if you really want to, but you should have a reason for doing that.