I'm having problems using stdin and NULL in Eclipse - C

Here is the code I am having issues with. The goal of the program is to read a series of doubles and perform some simple statistical operations on them. The line I am having the issue with is the fgets(). I have included stdio.h; it's just not showing up in the code. My actual question is: why are stdin and NULL giving me issues when I thought they were part of the language? The exact error I am getting is that both symbols, stdin and NULL, could not be resolved.
/*
 * simpleStats.c
 *
 * Created on: Sep 17, 2018
 * Author: David Liotta
 */
#include <stdio.h>
#define BUFSIZE 256
int main(){
    double n, max, min, sum, mean;
    char line[BUFSIZE];
    int numsRead = 0;
    int numOfItems = 1;
    n = -1;
    max = n;
    min = n;
    sum = n;
    while(n != 0 && fgets(line, BUFSIZE, stdin) != NULL){
        numsRead = sscanf(line, "%f", &n);
        if(numsRead == 1 && n != 0){
            numOfItems++;
            if(n > max)
                max = n;
            if(n < min)
                min = n;
            sum = sum + n;
        }
        if(numsRead == 0)
            printf("Bad input\n");
    }
    mean = sum / numOfItems;
    printf("# of items: %i", numOfItems);
    printf("\nSum: %f.3", sum);
    printf("\nMax: %f.3", max);
    printf("\nMin: %f.3", min);
    printf("\nMean: %f.3", mean);
}

This code should compile. I suspect something might be wrong with your development environment.
Since you're running Eclipse, I'm assuming that your compiler is GCC. I may be wrong though.
Try to locate your compiler executable, and run the compilation by hand:
gcc -Wall -o simpleStats simpleStats.c
or, if you're on Windows:
gcc.exe -Wall -o simpleStats.exe simpleStats.c
You may have to specify the full path to gcc.exe (depending on your environment, it might even be called something else); you may be able to retrieve the full path from the console window in Eclipse.
Pay close attention to the output. Copy/paste the full output verbatim in your original post if you can (do not rephrase the warnings / error messages).
I seldom use Eclipse, but with most IDEs you get to choose what kind of project you want to create. Make sure you selected something like "console application"; the error you're referring to (stdin not being resolved) may suggest a linker error. Again, it's hard to tell without the exact GCC output.
A couple more things to check:
make sure your compiler and its dependencies are properly installed,
make sure that this compiler targets Windows (or whatever OS you use), not some exotic embedded platform,
most development environments come with a bunch of sample projects, so see if you can build one (a minimal test program is sketched below).
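If a sample project builds, you can also try a bare-bones console program that touches the same symbols, to confirm the headers and runtime are wired up correctly (a minimal sketch, not specific to Eclipse):

#include <stdio.h>

/* if this compiles and echoes one line, stdin and NULL resolve fine */
int main(void) {
    char line[64];
    if (fgets(line, sizeof line, stdin) != NULL) {
        printf("read: %s", line);
    }
    return 0;
}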

The problem I was having ended up being the compiler not correctly reading the code. I used a different compiler, and with some minor syntax changes, the code worked fine.
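For what it's worth, the posted code also has two format-specifier problems that any compiler invoked with -Wall will flag: sscanf needs %lf to store into a double, and the precision belongs before the conversion letter. The corrected lines might look like this (a sketch; these may well be the "minor syntax changes" mentioned above, but that is an assumption):

numsRead = sscanf(line, "%lf", &n);   /* %lf for a double with sscanf */
printf("\nSum: %.3f", sum);           /* precision goes before the f: %.3f, not %f.3 */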

Related

In what situation could the output go wrong like this?

I am trying to solve problem 200B on Codeforces. I tested my code and the output was correct. But when I uploaded it to the online judge system, I failed on the very first test case: it said my output was -0.000000000000 instead of 66.666666666667.
I have compiled and run the code with Visual Studio C++ 2010, macOS clang 13.0.0, and Linux GCC 6.3.0, and the output was the same everywhere: 66.666666666667. I am very curious and want to figure out in what situation the output could be -0.000000000000.
On my computer,
Input:
3
50 50 100
Output:
66.666666666667
On the online judge system,
Input:
3
50 50 100
Participant's output
-0.000000000000
Jury's answer
66.666666666667
Checker comment
wrong answer 1st numbers differ - expected: '66.66667', found: '-0.00000', error = '1.00000'
#include <stdio.h>
int main(void)
{
    int n;
    double sumOrange = 0;
    double sumDrink = 0;
    scanf("%d", &n);
    while (n-- > 0) {
        int m;
        scanf("%d", &m);
        sumOrange += m / 100.0;
        sumDrink++;
    }
    printf("%.12lf\n", (sumOrange / sumDrink) * 100.0);
    return 0;
}
I just don't understand why my output could be -0.000000000000. Please help, thanks.
Update: I tested on different versions of GCC (4.9, 5.1, 6.3) and the wrong output does not appear. I guess the cause might lie in the specific implementation of printf.
The problem is that the printf function in GNU GCC C11 does not support the %.12lf format; it should be changed to %.12f. For more information, you can read the article below:
Correct format specifier for double in printf
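If that is the cause, the fix is a small change to the output statement (a sketch of the corrected line):

printf("%.12f\n", (sumOrange / sumDrink) * 100.0);   /* %f is the conversion for double in printf */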

Why does my code run when I debug it, but crash when I press build & run?

So we got an assignment at uni to format a text.
The maximum width of a line should be 68 characters, and if the words don't fill the line we have to fill the empty spaces with blanks, equally distributed between far left and far right.
The problem is that when I build and run the program it crashes, but when I'm debugging it, it runs fine.
My IDE is Code::Blocks 17.12, and to debug I'm using MinGW gdb.
I'm pretty new to programming and know that my code is probably not the greatest.
We got the function readtext from our professor.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#define WORDS_LIMIT 10000
//------------------------------------------------------------
int readtext(const char* filename, int *maxWidth, char *words[], const int WordsLimit){
    int WordCount=0;
    FILE* Infile;
    Infile=fopen(filename, "r");
    assert(Infile);
    fscanf(Infile, "%i\n", maxWidth);
    char line[BUFSIZ];
    while (fgets(line, BUFSIZ, Infile) && WordsLimit>WordCount)
    {
        // remove newline
        char *newline = strchr(line, '\n');
        if (newline) {
            *newline = '\0';
        }
        words[WordCount++]=strdup(line);
    }
    return WordCount-1;
}
//------------------------------------------------------------
int letterCount (char *Zeile[], int wordCount){
    int wordWidth = 0;
    int i = 0;
    while (i <= wordCount){
        if (strlen(Zeile[i]) > 0 && strlen(Zeile[i]) < 68 ){
            wordWidth = wordWidth + strlen(Zeile[i]);
            i++;
        }else{
            i++;
            break;
        }
    }
    return wordWidth;
}
//------------------------------------------------------------
int findWords(char *Zeile[], char *words[]){
    int Width = 0;
    int k = 0;
    int wordsPerLine = 0;
    while (Width <= 68 && strlen(words[k]) > 0) {
        Width = strlen(words[k]) + 1;
        Zeile[k] = words[k];
        wordsPerLine = wordsPerLine+1;
        k++;
    }
    return wordsPerLine;
}
//------------------------------------------------------------
int formatBlanksLine (int lettersPerLine, int wordCount){
    int BlanksPerLine = 0;
    BlanksPerLine = 68 - (lettersPerLine + wordCount);
    return BlanksPerLine;
}
//------------------------------------------------------------
int formatBlanksRight(int BlanksLine, int BlanksRight){
    if (BlanksLine % 2 == 0){
        BlanksRight = (BlanksLine / 2) - 1;
    }
    else{
        BlanksRight = (BlanksLine / 2) - 2;
    }
    return BlanksRight;
}
//------------------------------------------------------------
int formatBlanksLeft(int BlanksLine, int BlanksLeft){
    if (BlanksLine % 2 == 0){
        BlanksLeft = (BlanksLine / 2);
    }
    else{
        BlanksLeft = (BlanksLine / 2) + 1;
    }
    return BlanksLeft;
}
//------------------------------------------------------------
int main(int argc, char *argv[]){
    char *Zeile[68];
    char *filename = "Cathedral_1.txt";
    char *words[WORDS_LIMIT];
    int wordCount = 0;
    int NumWords = 0;
    int maxWidth;
    int lettersPerLine = 0;
    int BlanksLine = 0;
    int BlanksLeft = 0;
    int BlanksRight = 0;
    //------------------------------------------------------------
    // use filename from commandline
    if (argc > 1) {
        filename = argv[1];
    }
    // overwrite textwidth with the 2nd argument from the commandline
    if (argc > 2) {
        maxWidth = atoi(argv[2]);
    }
    //------------------------------------------------------------
    NumWords = readtext(filename, &maxWidth, words, WORDS_LIMIT);
    wordCount = findWords(Zeile, words);
    lettersPerLine = letterCount(Zeile, wordCount);
    BlanksLine = formatBlanksLine(lettersPerLine, wordCount);
    BlanksLeft = formatBlanksLeft(BlanksLine, BlanksLeft);
    BlanksRight = formatBlanksRight(BlanksLine, BlanksRight);
    //------------------------------------------------------------
    printf("Format text to %d chars per line\n", maxWidth);
    for (int i=0; i < NumWords; i++)
    {
        printf("%s\n", words[i]);
    }
    //------------------------------------------------------------
    printf("-----------------------------------\n");
    for (int i = 0; i < BlanksLeft; i++){
        printf("0");
    }
    for (int i=0; i < wordCount ; i++)
    {
        printf("%s0", Zeile[i]);
    }
    for (int i = 0; i < BlanksRight; i++){
        printf("0");
    }
    //------------------------------------------------------------
    printf("\n-----------------------------------\n");
    printf("Die Anzahl der Buchstaben ist: %d\n", lettersPerLine);
    printf("-----------------------------------\n");
    //------------------------------------------------------------
    return 0;
}
Cathedral.txt
The Cathedral and the Bazaar
Linux is subversive. Who would have thought even five years ago (1991) that a world-class operating system could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the planet, connected only by the tenuous strands of the Internet?
Certainly not I. By the time Linux swam onto my radar screen in early 1993, I had already been involved in Unix and open-source development for ten years. I was one of the first GNU contributors in the mid-1980s. I had released a good deal of open-source software onto the net, developing or co-developing several programs (nethack, Emacs's VC and GUD modes, xlife, and others) that are still in wide use today. I thought I knew how it was done.
Linux overturned much of what I thought I knew. I had been preaching the Unix gospel of small tools, rapid prototyping and evolutionary programming for years. But I also believed there was a certain critical complexity above which a more centralized, a priori approach was required. I believed that the most important software (operating systems and really large tools like the Emacs programming editor) needed to be built like cathedrals, carefully crafted by individual wizards or small bands of mages working in splendid isolation, with no beta to be released before its time.
Linus Torvalds's style of development-release early and often, delegate everything you can, be open to the point of promiscuity-came as a surprise. No quiet, reverent cathedral-building here-rather, the Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who'd take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.
The fact that this bazaar style seemed to work, and work well, came as a distinct shock. As I learned my way around, I worked hard not just at individual projects, but also at trying to understand why the Linux world not only didn't fly apart in confusion but seemed to go from strength to strength at a speed barely imaginable to cathedral-builders.
By mid-1996 I thought I was beginning to understand. Chance handed me a perfect way to test my theory, in the form of an open-source project that I could consciously try to run in the bazaar style. So I did-and it was a significant success.
This is the story of that project. I'll use it to propose some aphorisms about effective open-source development. Not all of these are things I first learned in the Linux world, but we'll see how the Linux world gives them particular point. If I'm correct, they'll help you understand exactly what it is that makes the Linux community such a fountain of good software-and, perhaps, they will help you become more productive yourself.
Cathedral_1.txt
68
The
Cathedral
and
the
Bazaar
Linux
is
subversive.
Who
would
have
thought
even
five
years
ago
(1991)
that
a
world-class
operating
system
could
coalesce
as
if
by
magic
out
of
part-time
hacking
by
several
thousand
developers
scattered
all
over
the
planet,
connected
only
by
the
tenuous
strands
of
the
Internet?
Certainly
not
I.
By
the
time
Linux
swam
onto
my
radar
screen
in
early
1993,
I
had
already
been
involved
in
Unix
and
open-source
development
for
ten
years.
I
was
one
of
the
first
GNU
contributors
in
the
mid-1980s.
I
had
released
a
good
deal
of
open-source
software
onto
the
net,
developing
or
co-developing
several
programs
(nethack,
Emacs's
VC
and
GUD
modes,
xlife,
and
others)
that
are
still
in
wide
use
today.
I
thought
I
knew
how
it
was
done.
Linux
overturned
much
of
what
I
thought
I
knew.
I
had
been
preaching
the
Unix
gospel
of
small
tools,
rapid
prototyping
and
evolutionary
programming
for
years.
But
I
also
believed
there
was
a
certain
critical
complexity
above
which
a
more
centralized,
a
priori
approach
was
required.
I
believed
that
the
most
important
software
(operating
systems
and
really
large
tools
like
the
Emacs
programming
editor)
needed
to
be
built
like
cathedrals,
carefully
crafted
by
individual
wizards
or
small
bands
of
mages
working
in
splendid
isolation,
with
no
beta
to
be
released
before
its
time.
Linus
Torvalds's
style
of
development-release
early
and
often,
delegate
everything
you
can,
be
open
to
the
point
of
promiscuity-came
as
a
surprise.
No
quiet,
reverent
cathedral-building
here-rather,
the
Linux
community
seemed
to
resemble
a
great
babbling
bazaar
of
differing
agendas
and
approaches
(aptly
symbolized
by
the
Linux
archive
sites,
who'd
take
submissions
from
anyone)
out
of
which
a
coherent
and
stable
system
could
seemingly
emerge
only
by
a
succession
of
miracles.
The
fact
that
this
bazaar
style
seemed
to
work,
and
work
well,
came
as
a
distinct
shock.
As
I
learned
my
way
around,
I
worked
hard
not
just
at
individual
projects,
but
also
at
trying
to
understand
why
the
Linux
world
not
only
didn't
fly
apart
in
confusion
but
seemed
to
go
from
strength
to
strength
at
a
speed
barely
imaginable
to
cathedral-builders.
By
mid-1996
I
thought
I
was
beginning
to
understand.
Chance
handed
me
a
perfect
way
to
test
my
theory,
in
the
form
of
an
open-source
project
that
I
could
consciously
try
to
run
in
the
bazaar
style.
So
I
did-and
it
was
a
significant
success.
This
is
the
story
of
that
project.
I'll
use
it
to
propose
some
aphorisms
about
effective
open-source
development.
Not
all
of
these
are
things
I
first
learned
in
the
Linux
world,
but
we'll
see
how
the
Linux
world
gives
them
particular
point.
If
I'm
correct,
they'll
help
you
understand
exactly
what
it
is
that
makes
the
Linux
community
such
a
fountain
of
good
software-and,
perhaps,
they
will
help
you
become
more
productive
yourself.
Okay ...
In letterCount, you have:
while (i <= wordCount)
This goes one beyond the end of the array. What you want is:
while (i < wordCount)
The first version can segfault because Zeile is an array of char pointers, and going one past the end reads an uninitialized pointer.
This is UB: the code dereferences whatever happens to come after the valid entries. Under the debugger it still picked up a wrong value, but, somehow, that value was "harmless" (i.e. it did not segfault); in the normal run it was not.
Beyond that fix, we still want to change:
char *Zeile[68];
Into:
char *Zeile[WORDS_LIMIT];
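Putting the first change in context, letterCount would then read as follows (only the loop condition differs from the posted version):

int letterCount(char *Zeile[], int wordCount){
    int wordWidth = 0;
    int i = 0;
    while (i < wordCount){                    // was: i <= wordCount
        if (strlen(Zeile[i]) > 0 && strlen(Zeile[i]) < 68){
            wordWidth = wordWidth + strlen(Zeile[i]);
            i++;
        }else{
            i++;
            break;
        }
    }
    return wordWidth;
}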

How to find the problems caused by a reallocation that is not done

Consider the following code:
#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>
#define MAX_NUM 5
#define MAX_INCR 5
int main(void)
{
    char check = 'y';
    int max_number = MAX_NUM;
    double *array_input = NULL;
    int i = 0;
    double buffer = 0.0;
    int count = 0;
    array_input = malloc(5 * (sizeof(double)));
    printf("Please enter the numbers into the array: ");
    for(i = 0; i < max_number; i++)
    {
        ++count;
        scanf("%lf", &buffer);
        *(array_input + i) = buffer;
        if(count == max_number)
        {
            printf("\nDo you want to input more?(y/n) ");
            fflush(stdin);
            if((check = tolower(getchar())) == 'y')
            {
                max_number += MAX_INCR;
                //realloc(array_input, max_number);
                continue;
            }
        }
    }
    for(int j = 0; j < max_number; j++)
    {
        printf("\nThe value is: %lf", *(array_input + j));
    }
    return 0;
}
(Freeing the memory is omitted to keep the code concise.)
Now, the reallocation is deliberately not done here, yet the output of the program is exactly what it should be.
For example, given 5.3, 4.2, 5.6, 7.4, 3, 2, 4, 5, 6, 7.8, the program outputs these numbers as entered, like it should.
How can you figure out that there is a possible error in this program? (Example: someone is given the program, but the programmer forgot the reallocation part; it may cause problems later, right?)
The problem with "undefined behaviour" is that you never know. In many cases, just like yours, the program apparently behaves normal, but once you change a little bit, it suddenly crashes, even though the change itself is done correctly. Such bugs often cause security holes, since hackers can exploit them to run their own code.
There are tools to find bugs like that, e.g. http://valgrind.org/
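If you do add the reallocation, note that it needs the new size in bytes and that its return value must be checked; a minimal sketch for the spot where the commented-out call sits:

double *tmp = realloc(array_input, max_number * sizeof(double));
if (tmp == NULL) {
    free(array_input);                 /* the old block is still valid when realloc fails */
    fprintf(stderr, "out of memory\n");
    return 1;
}
array_input = tmp;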
You never know until an exception occurs. That is why people implement various kinds of testing, including unit testing, functional testing, and so on.
That testing improves reliability and quality.
Code with security holes is the same story.
The programmers who wrote it may believe the code is solid, for example the developers who built a network-connected remote-control feature and considered it secure. However, attackers study the system, probe it for weaknesses, and eventually break it; once the weakness is known, the programmers improve the code to remove the defect. This is why cyber security is often described as spear (those who attack) and shield (those who defend).
There is no perfect code. Once a bug is found, the code needs to be improved.

Debugging C code with gdb

This is a homework assignment; I just want help with gdb, not specific answers.
I have no experience with gdb whatsoever and little terminal experience. I followed a simple example online to debug some code using gdb, and in that example gdb pointed out where a problem happened when it ran the code. When I try to mimic the process for this assignment, gdb doesn't say anything. I am still somewhat new to C, but I can see problems when I look at the code, and gdb isn't saying anything.
Say the file is named test.c. In the terminal I type gcc test.c and it gives me a warning because printf() is used but #include <stdio.h> is missing, which is good, because that is supposed to be wrong.
It also produces a.out, and if I run it in the terminal with ./a.out, nothing happens: the terminal is just ready for my next input with no messages. If I type gdb ./a.out and then run, it just tells me the program exited normally.
Can someone point out what I have to do to make gdb point to the errors, please?
// insertion sort, several errors
int X[10],       // input array
    Y[10],       // workspace array
    NumInputs,   // length of input array
    NumY = 0;    // current number of
                 // elements in Y
void GetArgs(int AC, char **AV) {
    int I;
    NumInputs = AC - 1;
    for (I = 0; I < NumInputs; I++) X[I] = atoi(AV[I+1]);
}
void ScootOver(int JJ) {
    int K;
    for (K = NumY-1; K > JJ; K++) Y[K] = Y[K-1];
}
void Insert(int NewY) {
    int J;
    if (NumY = 0) { // Y empty so far,
                    // easy case
        Y[0] = NewY;
        return;
    }
    // need to insert just before the first Y
    // element that NewY is less than
    for (J = 0; J < NumY; J++) {
        if (NewY < Y[J]) {
            // shift Y[J], Y[J+1],... rightward
            // before inserting NewY
            ScootOver(J);
            Y[J] = NewY;
            return;
        }
    }
}
void ProcessData() {
    // insert new Y in the proper place
    // among Y[0],...,Y[NumY-1]
    for (NumY = 0; NumY < NumInputs; NumY++) Insert(X[NumY]);
}
void PrintResults() {
    int I;
    for (I = 0; I < NumInputs; I++) printf("%d\n",Y[I]);
}
int main(int Argc, char ** Argv) {
    GetArgs(Argc,Argv);
    ProcessData();
    PrintResults();
}
Edit: the code is not mine; it is part of the assignment.
There are different kinds of errors. Some can be detected by programs (the compiler, the OS, the debugger), and some cannot.
The compiler is required (by the C standard) to issue errors if it detects any constraint violations. It may issue other errors and warnings when not in standards compliance mode. The compiler will give you more error diagnostics if you add the -Wall and -Wextra options. The compiler may be able to detect even more errors if you enable optimizations (-O0 through -O3 set different levels of optimization), but you may want to skip optimizations if you want to single-step in the debugger, because the optimizer will make it harder for the debugger to show you the relevant source-lines (some may be re-ordered, some may be eliminated).
The operating system will detect errors involving traversing bad pointers (usually), or bad arguments to system calls, or (usually) floating-point division by zero.
But anything that doesn't crash the program is a semantic error. And these require a human brain to hunt for them.
So, as Brian says, you need to set breakpoints and single-step through the program. And, as jweyrich says, you need to compile the program with -g to add debugging symbols.
You can inspect variables with print (e.g. print Argc will tell you how many command-line arguments were on the run line). And display will add variables to a list that is displayed just before each prompt. If I were debugging through that for loop in Insert, I'd probably do display J and display Y[J], then next, and then hit enter a bunch of times, watching the calculation progress.
If your breakpoint is deeply nested, you can get a "stack dump" with backtrace.
next will take you to the next statement (following the semicolon). step will take you into function calls and to the first statement of the function. And remember: if you're single-stepping through a function and get to the 'return' statement, use step to enter the next function call in the calling statement; use next at the return to finish the calling statement (and just execute any remaining function calls in the statement, without prompting). You may not need to know this bit just yet, but if you do, there you go.
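A typical session for this assignment might look like the following (a sketch; the output file name and the numbers passed to run are just examples):
gcc -g -Wall -Wextra -o insort test.c
gdb ./insort
(gdb) break Insert
(gdb) run 5 3 8 1
(gdb) display J
(gdb) display Y[J]
(gdb) next
(gdb) print NumY
(gdb) backtrace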
From gdb, do break main, then run.
From there, next or step until you find where you went wrong.

Speed up C program without using conditional compilation

We are working on a model-checking tool that executes certain search routines several billion times. We have different search routines, which are currently selected using preprocessor directives. This is not only very unhandy, as we need to recompile every time we make a different choice, but it also makes the code hard to read. It's now time to start a new version, and we are evaluating whether we can avoid conditional compilation.
Here is a very artificial example that shows the effect:
/* program_define */
#include <stdio.h>
#include <stdlib.h>
#define skip 10
int main(int argc, char** argv) {
    int i, j;
    long result = 0;
    int limit = atoi(argv[1]);
    for (i = 0; i < 10000000; ++i) {
        for (j = 0; j < limit; ++j) {
            if (i + j % skip == 0) {
                continue;
            }
            result += i + j;
        }
    }
    printf("%lu\n", result);
    return 0;
}
Here, the variable skip is an example for a value that influences the behavior of the program. Unfortunately, we need to recompile every time we want a new value of skip.
Let's look at another version of the program:
/* program_variable */
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char** argv) {
    int i, j;
    long result = 0;
    int limit = atoi(argv[1]);
    int skip = atoi(argv[2]);
    for (i = 0; i < 10000000; ++i) {
        for (j = 0; j < limit; ++j) {
            if (i + j % skip == 0) {
                continue;
            }
            result += i + j;
        }
    }
    printf("%lu\n", result);
    return 0;
}
Here, the value for skip is passed as a command line parameter. This adds great flexibility. However, this program is much slower:
$ time ./program_define 1000 10
50004989999950500
real 0m25.973s
user 0m25.937s
sys 0m0.019s
vs.
$ time ./program_variable 1000 10
50004989999950500
real 0m50.829s
user 0m50.738s
sys 0m0.042s
What we are looking for is an efficient way to pass values into a program (by means of a command line parameter or a file input) that will never change afterward. Is there a way to optimize the code (or tell the compiler to) such that it runs more efficiently?
Any help is greatly appreciated!
Comments:
As Dirk wrote in his comment, it is not about the concrete example. What I meant was a way to replace, with a more efficient construct, an if that evaluates a variable which is set once and then never changed (say, a command-line option) inside a function that is called literally billions of times. We currently use the preprocessor to tailor the desired version of the function. It would be nice if there were a nicer way that does not require recompilation.
You can take a look at libdivide, which does fast integer division when the divisor isn't known until runtime (libdivide is an open-source library for optimizing integer division).
If you calculate a % b as a - b * (a / b) (but with the division done by libdivide), you might find that it's faster.
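As a plain-C illustration of that identity (without libdivide itself; the library's gain comes from making the division below cheap for a fixed divisor):

/* a % b computed via the quotient: a - b * (a / b) */
int mod_via_div(int a, int b) {
    int q = a / b;        /* this is the division libdivide would speed up */
    return a - b * q;
}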
I ran your program_variable code on my system to get a baseline of performance:
$ gcc -Wall test1.c
$ time ./a.out 1000 10
50004989999950500
real 0m55.531s
user 0m55.484s
sys 0m0.033s
If I compile test1.c with -O3, then I get:
$ time ./a.out 1000 10
50004989999950500
real 0m54.305s
user 0m54.246s
sys 0m0.030s
In a third test, I manually set the values of limit and skip:
int limit = 1000, skip = 10;
I then re-run the test:
$ gcc -Wall test2.c
$ time ./a.out
50004989999950500
real 0m54.312s
user 0m54.282s
sys 0m0.019s
Taking out the atoi() calls doesn't make much of a difference. But if I compile with -O3 optimizations turned on, then I get a speed bump:
$ gcc -Wall -O3 test2.c
$ time ./a.out
50004989999950500
real 0m26.756s
user 0m26.724s
sys 0m0.020s
Adding a #define macro for an ersatz atoi() function helped a little, but didn't do much:
#define QSaToi(iLen, zString, iOut) {int j = 1; iOut = 0; \
for (int i = iLen - 1; i >= 0; --i) \
{ iOut += ((zString[i] - 48) * j); \
j = j*10;}}
...
int limit, skip;
QSaToi(4, argv[1], limit);
QSaToi(2, argv[2], skip);
And testing:
$ gcc -Wall -O3 -std=gnu99 test3.c
$ time ./a.out 1000 10
50004989999950500
real 0m53.514s
user 0m53.473s
sys 0m0.025s
The expensive part seems to be those atoi() calls, if that's the only difference between the two -O3 builds.
Perhaps you could write one binary, which loops through tests of various values of limit and skip, something like:
#define NUM_LIMITS 3
#define NUM_SKIPS 2
...
int limits[NUM_LIMITS] = {100, 1000, 1000};
int skips[NUM_SKIPS] = {1, 10};
int limit, skip;
...
for (int limitIdx = 0; limitIdx < NUM_LIMITS; limitIdx++)
    for (int skipIdx = 0; skipIdx < NUM_SKIPS; skipIdx++)
        /* per-limit, per-skip test */
If you know your parameters ahead of compilation time, perhaps you can do it this way. You could use fprintf() to write your output to a per-limit, per-skip file output, if you want results in separate files.
You could try using the GCC likely/unlikely builtins (e.g. here) or profile-guided optimization (e.g. here). Also, do you intend (i + j) % 10 or i + (j % 10)? The % operator has higher precedence, so your code as written is testing the latter.
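For reference, the likely/unlikely idiom is usually wrapped like this (a sketch; whether it helps depends on how predictable the branch already is):

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* inside the inner loop, marking the skip case as the rare one;
   the condition is kept as in the original (see the precedence note above) */
if (unlikely(i + j % skip == 0)) {
    continue;
}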
I'm a bit familiar with the program Niels is asking about.
There are a bunch of interesting answers here (thanks), but they slightly miss the spirit of the question. The given programs are really just examples. The logic that is subject to preprocessor statements is much more involved. In the end, it is not just about executing a modulo operation or a simple division; it is about keeping or skipping certain procedure calls, executing an operation between two other operations, defining the size of an array, and so on.
All these things could be guarded by variables that are set by command-line parameters. But that would be too costly, as many of these routines, statements, and memory allocations are executed a billion times. Perhaps that shapes the problem a bit better. Still very interested in your ideas.
Dirk
If you used C++ instead of C, you could use templates so that things can be calculated at compile time; even recursion is possible.
Have a look at C++ template metaprogramming.
A stupid answer, but you could pass the define on the gcc command line and run the whole thing with a shell script that recompiles and runs the program based on a command-line parameter:
#!/bin/sh
skip=$1
out=program_skip$skip
if [ ! -x $out ]; then
    gcc -O3 -Dskip=$skip -o $out test.c
fi
time $out 1000
I also got about a 2× slowdown between program_define and program_variable, 26.2 s vs. 49.0 s. I then tried
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char** argv) {
    int i, j, r;
    long result = 0;
    int limit = atoi(argv[1]);
    int skip = atoi(argv[2]);
    for (i = 0; i < 10000000; ++i) {
        for (j = 0, r = 0; j < limit; ++j, ++r) {
            if (r == skip) r = 0;
            if (i + r == 0) {
                continue;
            }
            result += i + j;
        }
    }
    printf("%lu\n", result);
    return 0;
}
using an extra variable to avoid the costly division; the resulting time was 18.9 s, significantly better than the modulo with a statically known constant. However, this auxiliary-variable technique is only promising if the change is easily predictable.
Another possibility would be to eliminate using the modulus operator:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char** argv) {
    int i, j;
    long result = 0;
    int limit = atoi(argv[1]);
    int skip = atoi(argv[2]);
    int current = 0;
    for (i = 0; i < 10000000; ++i) {
        for (j = 0; j < limit; ++j) {
            if (++current == skip) {
                current = 0;
                continue;
            }
            result += i + j;
        }
    }
    printf("%lu\n", result);
    return 0;
}
If that is the actual code, you have a few ways to optimize it:
i + j % 10 == 0 is only true when i == 0 (and j % 10 == 0), so you can skip that entire mod operation when i > 0. Also, since i + j only increases by 1 on each iteration, you can hoist the mod out and simply keep a counter that you increment and reset when it hits skip (as has been pointed out in other answers).
You can also have all possible function implementations already present in the program, and at runtime change a function pointer to select the one you are actually using.
You can use macros to avoid that you have to write duplicate code:
#define MYFUNCMACRO(name, myvar) void name##doit(){/* time consuming code using myvar */}
MYFUNCMACRO(TEN,10)
MYFUNCMACRO(TWENTY,20)
MYFUNCMACRO(FOURTY,40)
MYFUNCMACRO(FIFTY,50)
If you need too many of these macros (hundreds?), you can write a code generator that writes the source file automatically for a range of values.
I didn't compile or test the code, but maybe you see the principle.
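To make the dispatch concrete, a minimal sketch that combines the macro-generated functions with a function pointer might look like this (all names here are illustrative, not from the original answer):

#include <stdio.h>
#include <stdlib.h>

/* generate one specialized routine per skip value; the printf stands in
   for the time-consuming code that uses the constant */
#define MYFUNCMACRO(name, myvar) \
    void name##doit(void) { printf("running with skip = %d\n", (myvar)); }

MYFUNCMACRO(TEN, 10)
MYFUNCMACRO(TWENTY, 20)

int main(int argc, char **argv) {
    /* pick the specialization once, based on a command-line parameter */
    int skip = (argc > 1) ? atoi(argv[1]) : 10;
    void (*doit)(void);
    if (skip == 10)
        doit = TENdoit;
    else if (skip == 20)
        doit = TWENTYdoit;
    else {
        fprintf(stderr, "unsupported skip value\n");
        return 1;
    }
    /* the hot path then calls through the pointer, with no per-iteration
       branch on skip itself */
    doit();
    return 0;
}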
You might be compiling without optimisation, which will cause your program to load skip from memory each time it's checked, instead of using the literal 10. Try adding -O2 to your compiler's command line, and/or use
register int skip;
