I'm working on developing sorting algorithms and was surprised to find C's qsort taking 1.6x as long as Julia's default sorting algorithm. I imagine I'm making some sort of benchmarking mistake. Here are my benchmarking programs and their results:
Julia:
# time (julia bench.jl)
using Printf
function main()
    len = 100_000_000
    x = rand(Int64, len)
    t = @elapsed sort!(x)
    @printf "%d elements:\nclaim\t%fs" len t
end
main()
C:
// time (gcc -O3 bench.c && ./a.out)
#include <stdlib.h>
#include <stdio.h>
#include <sys/time.h>
int comp(const void *elem1, const void *elem2)
{
    int f = *((int *)elem1);
    int s = *((int *)elem2);
    if (f > s) return 1;
    if (f < s) return -1;
    return 0;
}

long long utime()
{
    struct timeval now_time;
    gettimeofday(&now_time, NULL);
    return now_time.tv_sec * 1000000LL + now_time.tv_usec;
}

int main(int argc, char *argv[])
{
    long length = 100000000;
    long long *x;
    x = (long long *) malloc(length * sizeof(long long));
    if (x == NULL)
    {
        printf("Malloc failed\n");
        return 1;
    }
    for (long cnt = 0; cnt < length; cnt++)
        x[cnt] = rand();
    long long start = utime();
    qsort(x, length, sizeof(*x), comp);
    long long end = utime();
    //for (long cnt = 0; cnt < length; cnt += length/10)
    //    printf("%lld\n", x[cnt]);
    free(x);
    printf("%ld elements:\nclaim\t%fs", length, (end - start) / 1000000.0);
    return 0;
}
Results
bash-3.2$ time (julia bench.jl)
100000000 elements:
claim 12.405531s
real 0m16.560s
user 0m13.883s
sys 0m1.297s
bash-3.2$ time (gcc -O3 bench.c && ./a.out)
100000000 elements:
claim 20.592641s
real 0m24.604s
user 0m21.352s
sys 0m2.479s
Is it true that Julia's algorithm (median-of-3 quicksort with an insertion sort base case for fewer than 20 elements) is substantially faster than C's qsort? Can I sort faster than qsort in C?
It's easy to sort faster than C's qsort. You could, for example, use C++'s std::sort. The C++ version is not faster because it uses a better algorithm; rather, C++ templates let the compiler inline the comparison instead of calling a comparison function through a pointer, and avoid the overhead of qsort's generic swap, which must handle elements of arbitrary size.
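(Editorial aside, not part of the original answer.) To make that overhead concrete: every one of qsort's comparisons goes through a function pointer to a comparator like the sketch below, and every swap is a generic byte copy of the element size, which is exactly what std::sort's inlined, type-specialized code avoids. The comparator here reads the elements as long long; the question's version reads them as int, which happens to work on a little-endian machine for non-negative rand() values but not for arbitrary 64-bit data.

/* Hypothetical type-correct comparator for 64-bit elements (sketch only). */
#include <stdio.h>
#include <stdlib.h>

static int comp_ll(const void *a, const void *b)
{
    long long x = *(const long long *)a;
    long long y = *(const long long *)b;
    return (x > y) - (x < y);   /* avoids the overflow that x - y could cause */
}

int main(void)
{
    long long v[] = {5, -3, 8, 0, 2};
    size_t n = sizeof v / sizeof v[0];
    /* Each comparison calls comp_ll through a pointer; each swap copies
       sizeof(long long) bytes generically. */
    qsort(v, n, sizeof v[0], comp_ll);
    for (size_t i = 0; i < n; i++)
        printf("%lld ", v[i]);
    printf("\n");
    return 0;
}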
In the following, the only difference between sortbench-c and sortbench-cc is the use of std::sort in the latter:
$ diff sortbench-c.c sortbench-cc.cc
1c1
< // time (gcc -O3 sortbench-c.c && ./a.out)
---
> // time (gcc -O3 sortbench-cc.cc && ./a.out)
2a3
> #include <algorithm>
7,14d7
< int comp (const void * elem1, const void * elem2)
< {
< int f = *((int*)elem1);
< int s = *((int*)elem2);
< if (f > s) return 1;
< if (f < s) return -1;
< return 0;
< }
38c31
< qsort (x, length, sizeof(*x), comp);
---
> std::sort(x, x+length);
The difference is dramatic:
$ time (gcc -O3 sortbench-c.c && ./a.out)
100000000 elements:
claim 16.673827s
real 0m17.774s
user 0m17.387s
sys 0m0.379s
$ time (gcc -O3 sortbench-cc.cc && ./a.out)
100000000 elements:
claim 9.948971s
real 0m11.133s
user 0m10.926s
sys 0m0.204s
There is no performance guarantee for qsort:
Despite the name, neither C nor POSIX standards require this function to be implemented using quicksort or make any complexity or stability guarantees.
To do a proper sorting benchmark between Julia and C, you will need another implementation.
The problem is that the rand functions are [probably] different.
Quicksort is data/order dependent: its running time varies with the input. Mergesort, by contrast, performs essentially the same amount of work regardless of what data it is sorting.
To do a proper benchmark, do not use rand unless you write the generator yourself or can guarantee that Julia's version and libc's version produce exactly the same sequence.
I'd write an initialization function for both languages, for example for (i = 0; i < length; ++i) array[i] = length - i; or some such, so that the initial data is guaranteed to be the same.
You can still use a random function if one program generates the array and saves it to a file, and the other program then reads the [same] data back in.
Sometimes I write a separate program that generates the input data and saves it to a file, then pass that file to both programs. This decouples the test-data generation from the programs under test.
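For example (a minimal sketch, not from the original answer; the file name, seed and bit-mixing are illustrative), a small C generator can write one binary file of 64-bit values that both benchmark programs then read and sort:

/* gen_data.c - write `count` pseudo-random 64-bit values to a binary file so
 * that the C and Julia benchmarks sort byte-identical input.
 * File name, seed and value construction are illustrative, not prescribed. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    const size_t count = 100000000;          /* match the benchmark size */
    FILE *f = fopen("input.bin", "wb");
    if (!f) { perror("fopen"); return 1; }
    srand(12345);                            /* fixed seed: reproducible data */
    for (size_t i = 0; i < count; i++) {
        int64_t v = ((int64_t)rand() << 31) ^ rand();   /* spread bits beyond 31 */
        if (fwrite(&v, sizeof v, 1, f) != 1) { perror("fwrite"); fclose(f); return 1; }
    }
    fclose(f);
    return 0;
}

The C benchmark can then fread the file into its long long array, and the Julia side can read the same bytes into a Vector{Int64} (for example with read!), so both programs sort exactly the same values.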
In the context of my homework task I need to smart-brute-force a set of passwords. Every password in the set matches one of four possible masks:
%%##
##%%
#%%#
%##%
(# denotes a numeric character, % a lowercase alphabetic character.)
At this point I am doing something like this to run over only one pattern (the first one) with multithreading:
// Compile: $ gcc test.c -o test -fopenmp -O3 -std=c99
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <omp.h>
int main() {
    const char alp[26] = "abcdefghijklmnopqrstuvwxyz";
    const char num[10] = "0123456789";
    register int i;
    char pass[4];
    #pragma omp parallel for private(pass)
    for (i = 0; i < 67600; i++) {
        pass[3] = num[i % 10];
        pass[2] = num[i / 10 % 10];
        pass[1] = alp[i / 100 % 26];
        pass[0] = alp[i / 2600 % 26];
        /* Slow password processing here */
    }
    return 0;
}
But, unfortunately, that technique only covers a single pattern and does not extend to searching for passwords with the other patterns.
So my question is:
Is there a way to construct an effective set of parallel for instructions in order to run the attack simultaneously on each password pattern?
Help is much appreciated.
The trick here is to note that all four password options are simply rotations/shifts of each other.
That is, for the example password qr34 and the patterns you mention, you are looking at:
qr34 %%## #Original potential password
4qr3 #%%# #Rotate 1 place right
34qr ##%% #Rotate 2 places right
r34q %##% #Rotate 3 places right
Given this, you can use the same generation technique as in your first question.
For each potential password generated, check the potential password as well as the next three shifts of that password.
Note that the following code relies on short-circuit evaluation in C/C++: as soon as the truth value of a logical expression is known, the remaining operands are not evaluated. That is, given the statement if(A || B || C), if A is false then B must be evaluated, but if B is then true, C is never evaluated.
This means that we can have A = PassCheck(pass), B = PassCheck(RotateString(pass,4)), and C = PassCheck(RotateString(pass,4)) with the guarantee that the password will only be rotated as many times as necessary.
Note that this scheme requires that each thread have its own, private copy of the potential password.
//Compile with, e.g.: gcc -O3 temp.c -std=c99 -fopenmp
#include <stdio.h>
#include <unistd.h>
#include <string.h>
int PassCheck(char *pass){
    return strncmp(pass, "4qr3", 4) == 0;
}

//Rotate string one character to the right
char* RotateString(char *str, int len){
    char lastchr = str[len-1];
    for(int i = len-1; i > 0; i--)
        str[i] = str[i-1];
    str[0] = lastchr;
    return str;
}

int main(){
    const char alph[27] = "abcdefghijklmnopqrstuvwxyz";
    const char num[11]  = "0123456789";
    char goodpass[4] = "----"; //Provide a default password to indicate an error state

    #pragma omp parallel for collapse(4)
    for(int i = 0; i < 26; i++)
    for(int j = 0; j < 26; j++)
    for(int m = 0; m < 10; m++)
    for(int n = 0; n < 10; n++){
        char pass[4] = {alph[i], alph[j], num[m], num[n]};
        if(
            PassCheck(pass)                 ||
            PassCheck(RotateString(pass,4)) ||
            PassCheck(RotateString(pass,4)) ||
            PassCheck(RotateString(pass,4))
        ){
            //It is good practice to use `critical` here in case two
            //passwords are somehow both valid. This won't arise in
            //your code, but is worth thinking about.
            #pragma omp critical
            {
                memcpy(goodpass, pass, 4);
                //#pragma omp cancel for //Escape for loops!
            }
        }
    }

    printf("Password was '%.4s'.\n", goodpass);
    return 0;
}
I notice that you are generating your password using
pass[3] = num[i % 10];
pass[2] = num[i / 10 % 10];
pass[1] = alp[i / 100 % 26];
pass[0] = alp[i / 2600 % 26];
This sort of technique is occasionally useful, especially in scientific programming, but usually only for addressing convenience and memory locality.
For instance, an array of arrays where an element is accessed as a[y][x] can be written as a flat-array with elements accessed as a[y*width+x]. This gives a speed gain, but only because the memory is contiguous.
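To make that equivalence concrete (an illustrative sketch, not from the original answer), the two layouts address the same logical element:

#include <stdio.h>

#define HEIGHT 3
#define WIDTH  4

int main(void)
{
    /* Array of arrays: element addressed as a[y][x]. */
    int a[HEIGHT][WIDTH] = {0};
    /* Flat array: the same element addressed as flat[y*WIDTH + x]. */
    int flat[HEIGHT * WIDTH] = {0};

    int y = 2, x = 3;
    a[y][x] = 42;
    flat[y * WIDTH + x] = 42;

    printf("%d %d\n", a[y][x], flat[y * WIDTH + x]);  /* prints: 42 42 */
    return 0;
}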
In your case, this indexing does not produce any speed gains, but does make it more difficult to reason about how your program works. I would avoid it for this reason.
It's been said that "premature optimization is the root of all evil". This is especially true of micro-optimizations like the one you're trying here. The biggest speed gains come from high-level algorithmic decisions, not from fiddly low-level tricks; at this level, the -O3 compilation flag does most of what you will ever need to make your code fast.
Micro-optimizations assume that doing something convoluted in your high-level code will somehow let you out-smart the compiler. That is not a good assumption: the compiler is often quite smart and will be even smarter tomorrow. Your time is very valuable: don't spend it on this sort of thing unless you have a clear justification. (There is plenty of further discussion of "premature optimization" elsewhere.)
I'm still a beginner with CUDA and I have been trying to write a simple kernel to perform a parallel prime sieve on the GPU. Originally I had written my code in C but I wanted to investigate the speed up on a GPU so I rewrote it:
41.cu
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>
#include <cuda_runtime.h>
#define B 1024
#define T 256
#define N (B*T)
#define checkCudaErrors(error) {\
    if (error != cudaSuccess) {\
        printf("CUDA Error - %s:%d: '%s'\n",__FILE__,__LINE__,cudaGetErrorString(error));\
        exit(1);\
    }\
}

__global__ void prime_sieve(int *primes) {
    unsigned int i = threadIdx.x + blockIdx.x * blockDim.x;
    primes[i] = i;
    primes[0] = primes[1] = 0;
    if (i > 1 && i < N) {
        for (int j = 2; j < N/2; j++) {
            if (i*j < N) {
                primes[i*j] = 0;
            }
        }
    }
}

int main() {
    int *h_primes = (int*)malloc(N * sizeof(int));
    int *d_primes;
    checkCudaErrors(cudaMalloc((void**)&d_primes, N*sizeof(int)));
    checkCudaErrors(cudaMemcpy(d_primes, h_primes, N*sizeof(int), cudaMemcpyHostToDevice));
    prime_sieve<<<B,T>>>(d_primes);
    checkCudaErrors(cudaMemcpy(h_primes, d_primes, N*sizeof(int), cudaMemcpyDeviceToHost));
    checkCudaErrors(cudaFree(d_primes));
    int size = 0;
    int total = 0;
    for (int i = 2; i < N; i++) {
        if (h_primes[i]) {
            size++;
        }
        total++;
    }
    printf("\n");
    printf("Length = %d\tPrimes = %d\n", total, size);
    free(h_primes);
    return 0;
}
I run the program on Ubuntu 16.04 (4.4.0-83-generic) and compile with nvcc 41.cu -o 41.o -arch=sm_30 using nvcc version 8.0.61. The program runs on a GeForce GTX 780 Ti, but every time it runs it produces non-deterministic results:
Length = 262142 Primes = 49477
Length = 262142 Primes = 49486
Length = 262142 Primes = 49596
Length = 262142 Primes = 49589
There were no errors reported back. At first I thought it was a race condition, but cuda-memcheck didn't report any hazards for racecheck, initcheck or synccheck, and I couldn't think of any problems with my assumptions. Could this be a synchronisation problem?
This non-deterministic behaviour only occurs when I increase the block and thread counts, as in the code above. When I tried a block size and thread size of, say, 16, there were no problems (as far as I could tell). It seems that not all threads get the chance to execute? I was planning to run this on very large array sizes (< 1 billion integers), but I am stuck at this point.
What am I doing wrong here?
There is a giant race condition.
Here primes[i] > 0 means prime, while primes[i] == 0 means composite.
primes[i] = i; is the first update each thread makes to primes. Keep this in mind.
Now let's see what happens when thread 16 executes. It sets primes[16] = 16 and then zeroes all larger multiples of 16, something like the following:
primes[32] = primes[48] = ... = primes[k*16] = 0
Imagine that thread 48 gets scheduled just after thread 16 has completed its job (or at least after the j = 3 iteration of thread 16's loop).
Thread 48 sets primes[48] = 48. You have lost the update made by thread 16.
That is a race condition.
When coding in CUDA you should make sure that the correctness of your code does not depend on a particular scheduling of warps.
Treat the order of execution as non-deterministic. One common fix here, for example, is to do the primes[i] = i initialization in a separate kernel launch that completes before the sieve kernel starts, so that the sieve kernel only ever writes zeroes and no thread can overwrite another thread's update.
I implemented OpenMP parallelization in a for loop containing a sum that is the main cause of my code's slowness. When I did so, I found that the final results were not the same as those from the non-parallelized code (which is written in C). At first one might think "well, I just didn't implement the parallelization correctly", but the curious thing is that when I run the parallelized code with the -Ofast optimization, the results are suddenly correct.
That would be:
-O0 correct
-Ofast correct
OMP -O0 wrong
OMP -O1 wrong
OMP -O2 wrong
OMP -O3 wrong
OMP -Ofast correct!
What could -Ofast be doing that fixes an error which only appears when I use OpenMP?
Any recommendation of what I could check or test?
Thanks!
EDIT
Here I include the smallest version of my code that still reproduces the problem.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#define LENGTH 100
#define R 50.0
#define URD 1.0/sqrt(2.0)
#define PI (4.0*atan(1.0)) //pi
const gsl_rng_type * Type;
gsl_rng * item;
double CalcDeltaEnergy(double **M, int sx, int sy){
    double DEnergy, r, zz;
    int k, j;
    double rrx, rry;
    int rx, ry;
    double Energy, Cpm, Cmm, Cmp, Cpp;

    DEnergy = 0;
    //OpenMP parallelization:
    #pragma omp parallel for reduction (+:DEnergy)
    for (int index = 0; index < LENGTH*LENGTH; index++){
        k = index % LENGTH;
        j = index / LENGTH;
        zz = 0.5*(1.0 - pow(-1.0, k + j + sx + sy));
        for (rx = -1; rx <= 1; rx++){
            for (ry = -1; ry <= 1; ry++){
                rrx = (sx - k - rx*LENGTH)*URD;
                rry = (sy - j - ry*LENGTH)*URD;
                r = sqrt(rrx*rrx + rry*rry + zz);
                if(r != 0 && r <= R){
                    Cpm = sqrt((rrx+0.5*(0.702*cos(M[k][j])-0.702*cos(M[sx][sy])))*(rrx+0.5*(0.702*cos(M[k][j])-0.702*cos(M[sx][sy])))
                             + (rry+0.5*(0.702*sin(M[k][j])-0.702*sin(M[sx][sy])))*(rry+0.5*(0.702*sin(M[k][j])-0.702*sin(M[sx][sy]))) + zz);
                    Cmm = sqrt((rrx-0.5*(0.702*cos(M[k][j])-0.702*cos(M[sx][sy])))*(rrx-0.5*(0.702*cos(M[k][j])-0.702*cos(M[sx][sy])))
                             + (rry-0.5*(0.702*sin(M[k][j])-0.702*sin(M[sx][sy])))*(rry-0.5*(0.702*sin(M[k][j])-0.702*sin(M[sx][sy]))) + zz);
                    Cpp = sqrt((rrx+0.5*(0.702*cos(M[k][j])+0.702*cos(M[sx][sy])))*(rrx+0.5*(0.702*cos(M[k][j])+0.702*cos(M[sx][sy])))
                             + (rry+0.5*(0.702*sin(M[k][j])+0.702*sin(M[sx][sy])))*(rry+0.5*(0.702*sin(M[k][j])+0.702*sin(M[sx][sy]))) + zz);
                    Cmp = sqrt((rrx-0.5*(0.702*cos(M[k][j])+0.702*cos(M[sx][sy])))*(rrx-0.5*(0.702*cos(M[k][j])+0.702*cos(M[sx][sy])))
                             + (rry-0.5*(0.702*sin(M[k][j])+0.702*sin(M[sx][sy])))*(rry-0.5*(0.702*sin(M[k][j])+0.702*sin(M[sx][sy]))) + zz);
                    Cpm = 1.0/Cpm;
                    Cmm = 1.0/Cmm;
                    Cpp = 1.0/Cpp;
                    Cmp = 1.0/Cmp;
                    Energy = (Cpm + Cmm - Cpp - Cmp)/(0.702*0.702); // S=cte=1
                    DEnergy -= 2.0*Energy;
                }
            }
        }
    }
    return DEnergy;
}
void Initialize(double **M){
    double random;
    for(int i = 0; i < (LENGTH-1); i = i+2){
        for(int j = 0; j < (LENGTH-1); j = j+2) {
            random = gsl_rng_uniform(item);
            if (random < 0.5) M[i][j] = PI/4.0;
            else M[i][j] = 5.0*PI/4.0;

            random = gsl_rng_uniform(item);
            if (random < 0.5) M[i][j+1] = 3.0*PI/4.0;
            else M[i][j+1] = 7.0*PI/4.0;

            random = gsl_rng_uniform(item);
            if (random < 0.5) M[i+1][j] = 3.0*PI/4.0;
            else M[i+1][j] = 7.0*PI/4.0;

            random = gsl_rng_uniform(item);
            if (random < 0.5) M[i+1][j+1] = PI/4.0;
            else M[i+1][j+1] = 5.0*PI/4.0;
        }
    }
}
int main(){
    //Choose and initialize the random number generator
    gsl_rng_env_setup();
    Type = gsl_rng_default; //default=mt19937, ran2, lxs0
    item = gsl_rng_alloc(Type);

    double **S; //site matrix
    S = (double **) malloc(LENGTH*sizeof(double *));
    for (int i = 0; i < LENGTH; i++)
        S[i] = (double *) malloc(LENGTH*sizeof(double));

    //Initialization
    Initialize(S);

    int l, m;
    for (int cl = 0; cl < LENGTH*LENGTH; cl++) {
        l = gsl_rng_uniform_int(item, LENGTH); // RNG[0, LENGTH-1]
        m = gsl_rng_uniform_int(item, LENGTH); // RNG[0, LENGTH-1]
        printf("%lf\n", CalcDeltaEnergy(S, l, m));
    }

    //Free memory
    for (int i = 0; i < LENGTH; i++)
        free(S[i]);
    free(S);
    return 0;
}
I compile with:
g++ [optimization] -lm test.c -o test.x -lgsl -lgslcblas -fopenmp
and run with:
GSL_RNG_SEED=123; ./test.x > test.dat
Comparing the outputs for different optimizations one can see what I stated before.
Disclaimer: I have little to no experience with OpenMP
It's probably a race condition you run into when using OpenMP.
You'll need to declare all those work variables private to the OpenMP loop. One thread may compute their values for a certain value of index, only to have them promptly overwritten by a thread working on another value of index: variables such as k, j, rrx, rry etc. are shared between the threads.
Instead of using a pragma like
#pragma omp parallel for private(k,j,zz,rx,ry,rrx,rry,r,Cpm,Cmm,Cpp,Cmp,Energy) reduction (+:DEnergy)
(Credit to a comment by Zulan:) you can also declare the variables inside the parallel region, as locally as possible. This makes them implicitly private, is less prone to initialization issues, and is easier to reason about.
(You could even consider putting everything inside the outer for-loop, over index, into a function: the function-call overhead is minimal compared to the calculations.)
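As a self-contained toy example of that style (not the poster's code), with the per-iteration temporaries declared inside the loop body and the accumulator handled by a reduction clause:

/* toy_private.c - minimal illustration only.
 * Variables declared inside the parallel loop body are automatically private
 * to each thread, so no private(...) clause is needed for them.
 * Compile with: gcc -fopenmp toy_private.c -o toy_private -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000;
    double total = 0.0;

    #pragma omp parallel for reduction(+:total)
    for (int index = 0; index < n * n; index++) {
        /* k, j and r exist once per thread because they are declared here. */
        int k = index % n;
        int j = index / n;
        double r = sqrt((double)k * k + (double)j * j);
        total += r;
    }

    printf("%f\n", total);
    return 0;
}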
As to why -Ofast together with OpenMP does actually produce correct output: my guess is mostly luck. Here's what -Ofast does (from the gcc manual):
Disregard strict standards compliance. -Ofast enables all -O3 optimizations. It also enables optimizations that are not valid for all standard-compliant programs. It turns on -ffast-math [...]
Here's the section on -ffast-math:
This option is not turned on by any -O option besides -Ofast since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.
Thus, the sqrt, cos and sin will likely be a lot speedier. My guess is that, in this case, the calculations of the variables inside the outer loop don't bite each other, because the individual iterations are so fast that they rarely conflict. But that is a very hand-wavy explanation and just a guess.
We are working on a model-checking tool that executes certain search routines several billion times. We have different search routines, which are currently selected using preprocessor directives. This is not only very unwieldy, since we need to recompile every time we make a different choice, but it also makes the code hard to read. It's now time to start a new version, and we are evaluating whether we can avoid conditional compilation.
Here is a very artificial example that shows the effect:
/* program_define */
#include <stdio.h>
#include <stdlib.h>
#define skip 10
int main(int argc, char** argv) {
    int i, j;
    long result = 0;
    int limit = atoi(argv[1]);
    for (i = 0; i < 10000000; ++i) {
        for (j = 0; j < limit; ++j) {
            if (i + j % skip == 0) {
                continue;
            }
            result += i + j;
        }
    }
    printf("%lu\n", result);
    return 0;
}
Here, the variable skip is an example of a value that influences the behavior of the program. Unfortunately, we need to recompile every time we want a new value of skip.
Let's look at another version of the program:
/* program_variable */
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char** argv) {
    int i, j;
    long result = 0;
    int limit = atoi(argv[1]);
    int skip = atoi(argv[2]);
    for (i = 0; i < 10000000; ++i) {
        for (j = 0; j < limit; ++j) {
            if (i + j % skip == 0) {
                continue;
            }
            result += i + j;
        }
    }
    printf("%lu\n", result);
    return 0;
}
Here, the value for skip is passed as a command line parameter. This adds great flexibility. However, this program is much slower:
$ time ./program_define 1000 10
50004989999950500
real 0m25.973s
user 0m25.937s
sys 0m0.019s
vs.
$ time ./program_variable 1000 10
50004989999950500
real 0m50.829s
user 0m50.738s
sys 0m0.042s
What we are looking for is an efficient way to pass values into a program (by means of a command line parameter or a file input) that will never change afterward. Is there a way to optimize the code (or tell the compiler to) such that it runs more efficiently?
Any help is greatly appreciated!
Comments:
As Dirk wrote in his comment, it is not about the concrete example. What I meant was a way to replace, with a more efficient construct, an if that tests a variable which is set once and never changed afterwards (say, a command-line option) inside a function that is called literally billions of times. We currently use the preprocessor to tailor the desired version of the function. It would be nice if there were a nicer way that does not require recompilation.
You can take a look at libdivide, an open-source library for optimizing integer division, which speeds up division when the divisor isn't known until runtime.
If you calculate a % b as a - b * (a / b), with the division done through libdivide, you might find that it's faster.
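A sketch of that idea applied to the inner loop (the function names follow libdivide's C interface, libdivide_s32_gen and libdivide_s32_do, as I recall them from libdivide.h; check the header shipped with your version):

/* Sketch only: computes j % skip as j - skip * (j / skip), with the division
 * done by libdivide, which precomputes a multiply/shift sequence for a divisor
 * known only at runtime. Assumes skip > 0. */
#include <stdio.h>
#include <stdlib.h>
#include "libdivide.h"

int main(int argc, char **argv)
{
    if (argc < 3) return 1;
    int limit = atoi(argv[1]);
    int skip  = atoi(argv[2]);

    /* Precompute the "magic numbers" for dividing by skip, once. */
    struct libdivide_s32_t fast_skip = libdivide_s32_gen(skip);

    long result = 0;
    for (int i = 0; i < 10000000; ++i) {
        for (int j = 0; j < limit; ++j) {
            int q = libdivide_s32_do(j, &fast_skip);   /* j / skip */
            int r = j - skip * q;                      /* j % skip */
            if (i + r == 0)
                continue;
            result += i + j;
        }
    }
    printf("%ld\n", result);
    return 0;
}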
I ran your program_variable code on my system to get a baseline of performance:
$ gcc -Wall test1.c
$ time ./a.out 1000 10
50004989999950500
real 0m55.531s
user 0m55.484s
sys 0m0.033s
If I compile test1.c with -O3, then I get:
$ time ./a.out 1000 10
50004989999950500
real 0m54.305s
user 0m54.246s
sys 0m0.030s
In a third test, I manually set the values of limit and skip:
int limit = 1000, skip = 10;
I then re-run the test:
$ gcc -Wall test2.c
$ time ./a.out
50004989999950500
real 0m54.312s
user 0m54.282s
sys 0m0.019s
Taking out the atoi() calls doesn't make much of a difference. But if I compile with -O3 optimizations turned on, then I get a speed bump:
$ gcc -Wall -O3 test2.c
$ time ./a.out
50004989999950500
real 0m26.756s
user 0m26.724s
sys 0m0.020s
Adding a #define macro for an ersatz atoi() function helped a little, but didn't do much:
#define QSaToi(iLen, zString, iOut) {int j = 1; iOut = 0; \
for (int i = iLen - 1; i >= 0; --i) \
{ iOut += ((zString[i] - 48) * j); \
j = j*10;}}
...
int limit, skip;
QSaToi(4, argv[1], limit);
QSaToi(2, argv[2], skip);
And testing:
$ gcc -Wall -O3 -std=gnu99 test3.c
$ time ./a.out 1000 10
50004989999950500
real 0m53.514s
user 0m53.473s
sys 0m0.025s
The expensive part seems to be those atoi() calls, if that is the only difference between the -O3 compilations.
Perhaps you could write one binary, which loops through tests of various values of limit and skip, something like:
#define NUM_LIMITS 3
#define NUM_SKIPS 2
...
int limits[NUM_LIMITS] = {100, 1000, 1000};
int skips[NUM_SKIPS] = {1, 10};
int limit, skip;
...
for (int limitIdx = 0; limitIdx < NUM_LIMITS; limitIdx++)
    for (int skipIdx = 0; skipIdx < NUM_SKIPS; skipIdx++)
        /* per-limit, per-skip test */
If you know your parameters ahead of compilation time, perhaps you can do it this way. You could use fprintf() to write your output to a per-limit, per-skip file output, if you want results in separate files.
You could try using the GCC likely/unlikely builtins (i.e. __builtin_expect) or profile-guided optimization. Also, do you intend (i + j) % 10 or i + (j % 10)? The % operator has higher precedence than +, so your code as written tests the latter.
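For reference, a sketch of the likely/unlikely idiom built on GCC's __builtin_expect (illustrative only; whether it helps depends on how predictable the branch already is):

/* Tell the compiler the skip-branch is rarely taken. */
#include <stdio.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static long sum_with_hint(int limit, int skip)
{
    long result = 0;
    for (int i = 0; i < 10000000; ++i) {
        for (int j = 0; j < limit; ++j) {
            if (unlikely(i + j % skip == 0))   /* i + (j % skip), as in the question */
                continue;
            result += i + j;
        }
    }
    return result;
}

int main(void)
{
    printf("%ld\n", sum_with_hint(1000, 10));
    return 0;
}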
I'm a bit familiar with the program Niels is asking about.
There are a bunch of interesting answers here (thanks), but they slightly miss the spirit of the question. The given programs are really just example programs; the logic that is subject to preprocessor statements is much more involved. In the end, it is not just about executing a modulo operation or a simple division: it is about keeping or skipping certain procedure calls, executing an operation between two other operations, defining the size of an array, and so on.
All of these things could be guarded by variables that are set from command-line parameters. But that would be too costly, as many of these routines, statements and memory allocations are executed a billion times. Perhaps that shapes the problem a bit better. Still very interested in your ideas.
Dirk
If you used C++ instead of C, you could use templates so that these things can be calculated at compile time; even recursion is possible.
Have a look at C++ template metaprogramming.
A stupid answer, but you could pass the define on the gcc command line and drive the whole thing with a shell script that recompiles and runs the program based on a command-line parameter:
#!/bin/sh
skip=$1
out=program_skip$skip
if [ ! -x $out ]; then
    gcc -O3 -Dskip=$skip -o $out test.c
fi
time $out 1000
I also got about a 2× slowdown between program_define and program_variable, 26.2s vs. 49.0s. I then tried
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char** argv) {
    int i, j, r;
    long result = 0;
    int limit = atoi(argv[1]);
    int skip = atoi(argv[2]);
    for (i = 0; i < 10000000; ++i) {
        for (j = 0, r = 0; j < limit; ++j, ++r) {
            if (r == skip) r = 0;
            if (i + r == 0) {
                continue;
            }
            result += i + j;
        }
    }
    printf("%lu\n", result);
    return 0;
}
which uses an extra variable to avoid the costly division; the resulting time was 18.9s, significantly better even than the modulo with the statically known constant. However, this auxiliary-variable technique is only promising if the branch it introduces is easily predictable.
Another possibility is to eliminate the modulus operator entirely:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char** argv) {
    int i, j;
    long result = 0;
    int limit = atoi(argv[1]);
    int skip = atoi(argv[2]);
    int current = 0;
    for (i = 0; i < 10000000; ++i) {
        for (j = 0; j < limit; ++j) {
            if (++current == skip) {
                current = 0;
                continue;
            }
            result += i + j;
        }
    }
    printf("%lu\n", result);
    return 0;
}
If that is the actual code, you have a few ways to optimize it:
(i + j % 10 == 0) is only true when i == 0, so you can skip the entire mod operation whenever i > 0. Also, since i + j only increases by 1 on each iteration of the inner loop, you can hoist the mod out and simply keep a counter that you increment and reset when it hits skip (as has been pointed out in other answers).
You can also have all possible function implementations already compiled into the program and, at runtime, change a function pointer to select the implementation you are actually using.
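A sketch of that function-pointer approach (the variants and names are purely illustrative):

/* Compile several specialized variants, here with the parameter baked in as a
 * compile-time constant, and pick one through a function pointer once, at startup. */
#include <stdio.h>
#include <stdlib.h>

static long run_skip10(int limit) {
    long result = 0;
    for (int i = 0; i < 10000000; ++i)
        for (int j = 0; j < limit; ++j) {
            if (i + j % 10 == 0) continue;   /* constant divisor: compiler can optimize */
            result += i + j;
        }
    return result;
}

static long run_skip20(int limit) {
    long result = 0;
    for (int i = 0; i < 10000000; ++i)
        for (int j = 0; j < limit; ++j) {
            if (i + j % 20 == 0) continue;
            result += i + j;
        }
    return result;
}

int main(int argc, char **argv)
{
    int limit = (argc > 1) ? atoi(argv[1]) : 1000;
    int skip  = (argc > 2) ? atoi(argv[2]) : 10;

    long (*run)(int) = (skip == 20) ? run_skip20 : run_skip10;  /* chosen once */
    printf("%ld\n", run(limit));
    return 0;
}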
You can use macros to avoid having to write duplicate code:
#define MYFUNCMACRO(name, myvar) void name##doit(){/* time consuming code using myvar */}
MYFUNCMACRO(TEN,10)
MYFUNCMACRO(TWENTY,20)
MYFUNCMACRO(FOURTY,40)
MYFUNCMACRO(FIFTY,50)
If you end up needing too many of these instantiations (hundreds?), you can write a code generator that emits the source file automatically for a range of values.
I didn't compile or test the code, but you can see the principle.
You might be compiling without optimisation, which would lead your program to load skip from memory each time it's checked, instead of using the literal 10. Try adding -O2 to your compiler's command line, and/or use
register int skip;
This question already has answers here:
Faster approach to checking for an all-zero buffer in C?
I have an array of bytes, in memory. What's the fastest way to see if all the bytes in the array are zero?
Nowadays, short of using SIMD extensions (such as SSE on x86 processors), you might as well iterate over the array and compare each value to 0.
In the distant past, performing a comparison and conditional branch for each element (in addition to the loop branch itself) would have been considered expensive. Depending on how often, or how early, you could expect a non-zero element to appear, you might instead have done without conditionals inside the loop entirely, using only a bitwise OR to accumulate any set bits and deferring the actual check until after the loop completes:
int sum = 0;
for (i = 0; i < ARRAY_SIZE; ++i) {
    sum |= array[i];
}
if (sum != 0) {
    printf("At least one array element is non-zero\n");
}
However, with today's pipelined, super-scalar processor designs complete with branch prediction, all non-SSE approaches are virtually indistinguishable within a loop. If anything, comparing each element to zero and breaking out of the loop early (as soon as the first non-zero element is encountered) could, in the long run, be more efficient than the sum |= array[i] approach, which always traverses the entire array. The exception is when you expect your array to be almost always made up exclusively of zeroes: in that case, making the sum |= array[i] approach truly branchless with GCC's -funroll-loops can give you the better numbers (see the numbers below for an Athlon processor; results may vary with processor model and manufacturer).
#include <stdio.h>
int a[1024*1024];
/* Methods 1 & 2 are equivalent on x86 */
int main() {
    int i, j, n;
#   if defined METHOD3
    int x;
#   endif
    for (i = 0; i < 100; ++i) {
#   if defined METHOD3
        x = 0;
#   endif
        for (j = 0, n = 0; j < sizeof(a)/sizeof(a[0]); ++j) {
#   if defined METHOD1
            if (a[j] != 0) { n = 1; }
#   elif defined METHOD2
            n |= (a[j] != 0);
#   elif defined METHOD3
            x |= a[j];
#   endif
        }
#   if defined METHOD3
        n = (x != 0);
#   endif
        printf("%d\n", n);
    }
}
$ uname -mp
i686 athlon
$ gcc -g -O3 -DMETHOD1 test.c
$ time ./a.out
real 0m0.376s
user 0m0.373s
sys 0m0.003s
$ gcc -g -O3 -DMETHOD2 test.c
$ time ./a.out
real 0m0.377s
user 0m0.372s
sys 0m0.003s
$ gcc -g -O3 -DMETHOD3 test.c
$ time ./a.out
real 0m0.376s
user 0m0.373s
sys 0m0.003s
$ gcc -g -O3 -DMETHOD1 -funroll-loops test.c
$ time ./a.out
real 0m0.351s
user 0m0.348s
sys 0m0.003s
$ gcc -g -O3 -DMETHOD2 -funroll-loops test.c
$ time ./a.out
real 0m0.343s
user 0m0.340s
sys 0m0.003s
$ gcc -g -O3 -DMETHOD3 -funroll-loops test.c
$ time ./a.out
real 0m0.209s
user 0m0.206s
sys 0m0.003s
Here's a short, quick solution, if you're okay with using inline assembly.
#include <stdio.h>

int main(void) {
    int checkzero(char *string, int length);
    char str1[] = "wow this is not zero!";
    char str2[] = {0, 0, 0, 0, 0, 0, 0, 0};
    printf("%d\n", checkzero(str1, sizeof(str1)));
    printf("%d\n", checkzero(str2, sizeof(str2)));
}

int checkzero(char *string, int length) {
    int is_zero;
    __asm__ (
        "cld\n"
        "xorb %%al, %%al\n"
        "repz scasb\n"
        : "=c" (is_zero)
        : "c" (length), "D" (string)
        : "eax", "cc"
    );
    return !is_zero;
}
In case you're unfamiliar with assembly, I'll explain what we do here: we store the length of the string in a register, and ask the processor to scan the string for a zero (we specify this by setting the lower 8 bits of the accumulator, namely %%al, to zero), reducing the value of said register on each iteration, until a non-zero byte is encountered. Now, if the string was all zeroes, the register, too, will be zero, since it was decremented length number of times. However, if a non-zero value was encountered, the "loop" that checked for zeroes terminated prematurely, and hence the register will not be zero. We then obtain the value of that register, and return its boolean negation.
Profiling this yielded the following results:
$ time or.exe
real 0m37.274s
user 0m0.015s
sys 0m0.000s
$ time scasb.exe
real 0m15.951s
user 0m0.000s
sys 0m0.046s
(Both test cases ran 100000 times on arrays of size 100000. The or.exe code comes from Vlad's answer. Function calls were eliminated in both cases.)
If you want to do this in plain C with 32-bit words, you can loop over the bulk of the array as a 32-bit integer array and compare each word to 0, then make sure any leftover bytes at the end are also 0.
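A sketch of that word-at-a-time idea (not from the original answer), using memcpy for the loads so it stays alignment- and aliasing-safe:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Returns 1 if all `len` bytes of `buf` are zero: checks the bulk of the
 * buffer as 32-bit words and any leftover tail bytes individually. */
int all_zero32(const unsigned char *buf, size_t len)
{
    size_t i = 0;
    for (; i + 4 <= len; i += 4) {
        uint32_t w;
        memcpy(&w, buf + i, 4);   /* safe even if buf is not 4-byte aligned */
        if (w != 0)
            return 0;
    }
    for (; i < len; i++)          /* remaining 0-3 bytes */
        if (buf[i] != 0)
            return 0;
    return 1;
}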
Split the checked memory in half and compare the first part to the second:
a. If there is any difference, the bytes can't all be the same.
b. If there is no difference, repeat for the first half.
Worst case: about 2*N byte comparisons. Memory efficient and memcmp-based.
Not sure if it should be used in real life, but I liked the self-compare idea.
It works for odd lengths. Do you see why? :-)
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

bool memcheck(char *p, char chr, size_t size) {
    // Check if the first char differs from the expected value.
    if (*p != chr)
        return false;
    int near_half, far_half;
    while (size > 1) {
        near_half = size / 2;
        far_half = size - near_half;
        if (memcmp(p, p + far_half, near_half))
            return false;
        size = far_half;
    }
    return true;
}
If the array is of any decent size, your limiting factor on a modern CPU is going to be access to the memory.
Make sure to use cache prefetching for a decent distance ahead (i.e. 1-2K) with something like __dcbt or prefetchnta (or prefetcht0 if you are going to use the buffer again soon).
You will also want to use something like SIMD or SWAR to OR multiple bytes at a time. Even with 32-bit words, it will be 4x fewer operations than a per-character version. I'd recommend unrolling the ORs and making them feed into a "tree" of ORs. You can see what I mean in the code example below: it takes advantage of superscalar capability to do two integer ops (the ORs) in parallel by using operations that do not have many intermediate data dependencies. I use a tree size of 8 (4x4, then 2x2, then 1x1), but you can expand that to a larger number depending on how many free registers you have in your CPU architecture.
The following pseudo-code example for the inner loop (no prolog/epilog) uses 32-bit ints, but you could do 64/128 bits with MMX/SSE or whatever is available to you. This will be fairly fast if you have prefetched the block into the cache. You may also need an unaligned check beforehand if your buffer is not 4-byte aligned, and one afterwards if your buffer (after alignment) is not a multiple of 32 bytes in length.
const UINT32 *pmem = ***aligned-buffer-pointer***;
UINT32 a0, a1, a2, a3;

while (bytesremain >= 32)
{
    // Compare an aligned "line" of 32 bytes
    a0 = pmem[0] | pmem[1];
    a1 = pmem[2] | pmem[3];
    a2 = pmem[4] | pmem[5];
    a3 = pmem[6] | pmem[7];
    a0 |= a1; a2 |= a3;
    pmem += 8;
    a0 |= a2;
    bytesremain -= 32;
    if (a0 != 0) break;
}

if (a0 != 0) then ***buffer-is-not-all-zeros***
I would actually suggest encapsulating the compare of a "line" of values into a single function and then unrolling that a couple times with the cache prefetching.
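A sketch of that suggestion using GCC's __builtin_prefetch (the prefetch distance and the alignment/length assumptions are guesses to be tuned, not part of the original answer):

#include <stdint.h>
#include <stddef.h>

/* OR together one 32-byte "line" of 8 aligned 32-bit words. */
static inline uint32_t line_or(const uint32_t *p) {
    uint32_t a0 = p[0] | p[1];
    uint32_t a1 = p[2] | p[3];
    uint32_t a2 = p[4] | p[5];
    uint32_t a3 = p[6] | p[7];
    return (a0 | a1) | (a2 | a3);
}

/* Returns nonzero if any byte is nonzero. Assumes `p` is 4-byte aligned and
 * `bytes` is a multiple of 32; handle the head/tail separately otherwise. */
static int buffer_nonzero(const uint32_t *p, size_t bytes) {
    while (bytes >= 32) {
        __builtin_prefetch(p + 256, 0, 0);   /* about 1 KB ahead of the current line */
        if (line_or(p) != 0)
            return 1;
        p += 8;
        bytes -= 32;
    }
    return 0;
}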
I measured two implementations on ARM64: one using a loop with an early return on false, and one that ORs all bytes:
int is_empty1(unsigned char *buf, int size)
{
    int i;
    for (i = 0; i < size; i++) {
        if (buf[i] != 0) return 0;
    }
    return 1;
}

int is_empty2(unsigned char *buf, int size)
{
    int sum = 0;
    for (int i = 0; i < size; i++) {
        sum |= buf[i];
    }
    return sum == 0;
}
Results:
All results, in microseconds:

         is_empty1  is_empty2
MEDIAN       0.350      3.554
AVG          1.636      3.768

Only false results:

         is_empty1  is_empty2
MEDIAN       0.003      3.560
AVG          0.382      3.777

Only true results:

         is_empty1  is_empty2
MEDIAN       3.649      3.528
AVG          3.857      3.751
Summary: only for datasets where the probability of a false result is very small does the second, OR-based algorithm perform (slightly) better, thanks to the omitted branch. Otherwise, returning early is clearly the winning strategy.
Rusty Russell's memeqzero is very fast. It reuses memcmp to do the heavy lifting:
https://github.com/rustyrussell/ccan/blob/master/ccan/mem/mem.c#L92.
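Roughly, the idea is to check a short prefix by hand and then let memcmp compare the rest of the buffer against that already-verified-zero prefix. The sketch below is from memory, not a verbatim copy; see the link for the canonical version:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* From-memory sketch of the memeqzero idea: verify the first (up to) 16 bytes
 * one by one, then compare the remainder against the known-zero region with an
 * overlapping memcmp. */
bool mem_is_zero(const void *data, size_t length)
{
    const unsigned char *p = data;
    size_t prefix;

    for (prefix = 0; prefix < 16; prefix++) {
        if (length == 0)
            return true;
        if (*p != 0)
            return false;
        p++;
        length--;
    }

    /* p now points past 16 zero bytes; compare the rest against them. */
    return memcmp(data, p, length) == 0;
}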