global counter in c is not working as expected - c

I have a bit of queue code that I was working on. I was trying to use a global int to keep track of the queue's size.
#define MAX 100
int size = 0;
int gEnqueue = 0, gDequeue = 0;

int enqueue()
{
    gEnqueue++;
    if (size == MAX)
        return QUEUE_FULL;
    /* snip the actual queue handling */
    size++;
    return 0;
}

int dequeue()
{
    gDequeue++;
    if (!size)
        return QUEUE_EMPTY;
    /* snip actual queue handling */
    if (size)
        size--;
    return 0;
}
There is of course much more code than that, but too much to post.
What is happening is that size gets stuck at the max I have set. Both functions get called an even number of times, yet if I dump the queue I can see that there are only 3 items in it.
What would cause this problem?
edit #1: made the code example match what I actually coded
This is not threaded.
edit #2: I am an idiot and should have done this instead of assuming.
I was wrong about the calls to enqueue() and dequeue() being even.
Note to self: use real metrics, not guesses.

If you can't use a debugger, I would suggest adding print statements inside both functions showing what size equals, then examining the output after running the program. Usually, when looking at the print log, the problem is pretty obvious.

The easiest solution is not to call enqueue() if size == MAX.
But if that's not possible, try this:
int size = 0;
int overflow = 0;

int enqueue()
{
    if (size < MAX)
        size++;
    else
        overflow++;
    return 0;
}

int dequeue()
{
    if (overflow)
        overflow--;
    else if (size)
        size--;
    return 0;
}
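For the first suggestion above (not calling enqueue() when the queue is already full), the caller-side guard could look like the sketch below; the surrounding call site and the reaction to a full queue are placeholders, not the original code:
/* Minimal caller-side sketch: only call enqueue() when there is room,
   so QUEUE_FULL never has to be handled at the call site. */
if (size < MAX) {
    enqueue();
} else {
    /* drop the item, block, or report the condition, as the application requires */
}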

There's nothing obviously wrong with the code you posted, so this suggests there's something wrong with the code you snipped, or in the way you're calling the code. You'll have to debug this for yourself. There are two main debugging techniques that would help you at this point:
As #KPexEA suggested, debugging using printf() or other logging statements. Put a printf() at the beginning and end of both functions, printing out as much state as you think might possibly be useful.
int enqueue()
{
    printf("enqueue(): Enter: size=%d\n", size);
    if (size == MAX) {
        printf("enqueue(): Exit: QUEUE_FULL\n");
        return QUEUE_FULL;
    }
    /* snip the actual queue handling */
    size++;
    printf("enqueue(): Exit: size=%d\n", size);
    return 0;
}

int dequeue()
{
    printf("dequeue(): Enter: size=%d\n", size);
    if (!size) {
        printf("dequeue(): QUEUE_EMPTY\n");
        return QUEUE_EMPTY;
    }
    /* snip actual queue handling */
    if (size)
        size--;
    printf("dequeue(): Exit: size=%d\n", size);
    return 0;
}
By examining the output, it should become apparent what's happening with the size of your queue. (You could also count the actual number of elements in your queue and print that when you enter and exit your functions.)
The other technique is interactive debugging. This is especially useful to determine exactly how your code is flowing, but you have to sit there every time you run your program to watch how it's running. (If your bug occurs every time, that's easy; if it occurs every once in a while, it's hard to go back and recreate your program's flow after the fact.) Set a breakpoint at the beginning of each of your functions and use the debugger to display the value of size. Set another breakpoint at the end of each function and make sure (1) the breakpoint actually gets hit, and (2) your expectations of any changes made to size are met.
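For example, with gdb (assuming the program was built with -g; the binary name queue_test is just a placeholder), a session could look roughly like this:
$ gdb ./queue_test
(gdb) break enqueue
(gdb) break dequeue
(gdb) run
(gdb) print size          # value on entry to the function
(gdb) finish              # run to the end of the current function
(gdb) print size          # did it change the way you expected?
(gdb) continue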

Related

How to go back to another stack frame in C?

EDIT:
Thanks a lot.
I understand now that I should use gdb.
I am asking in order to understand how stack frames work and how to change things.
exit(0) and goto are not options.
How can I change the function sec() so that it returns directly to main()?
The output should be:
print start main
print this from first
print this from sec
print exit main
#include <stdio.h>

void sec()
{
    /* change only here */
    printf("print this from sec\n");
}

void first()
{
    printf("print this from first\n");
    sec();
    printf("dont print this\n");
}

int main() {
    printf("print start main\n");
    first();
    printf("print exit main\n");
    return 0;
}
I don't want to add asm code, only C.
I tried to find the address of rbp but I don't know how.
Disclaimer: this code should not exist. It is non-portable, makes a lot of assumptions, and relies on gaping undefined behavior. Nevertheless:
#include <execinfo.h>
#include <stdio.h>

void sec()
{
    /* change only here */
    void *bt[4];
    int size = backtrace(bt, 4);

    /* scan the stack memory beyond the bt array for the saved return
       address into first(), and repoint it into main() */
    while (bt[size] != bt[1])
        size++;
    bt[size++] = bt[2];

    /* likewise patch the next saved return address (see the explanation below) */
    while (bt[size] != bt[2])
        size++;
    bt[size] = bt[3];

    printf("print this from sec\n");
}
backtrace() fills the array with four return addresses (a minimal standalone example of its normal use is sketched at the end of this answer):
where backtrace should return,
where sec should return,
where first should return, and
where main should return.
The two loops that follow go up the stack looking for those addresses and patch them to point to the next frame.
Try commenting out the second loop and observe that print exit main is printed twice. Do you see why?
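For comparison, here is a minimal standalone sketch (not part of the original answer) showing what backtrace() normally reports in this call chain; compile with -rdynamic so backtrace_symbols() can resolve the names:
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

static void sec(void)
{
    void *bt[4];
    int n = backtrace(bt, 4);                 /* the four return addresses listed above */
    char **names = backtrace_symbols(bt, n);  /* human-readable form of each address */
    for (int i = 0; i < n; i++)
        printf("%d: %s\n", i, names ? names[i] : "?");
    free(names);
}

static void first(void) { sec(); }

int main(void)
{
    first();
    return 0;
}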

Changing global variable using stat.c

I am trying to print logs dynamically. What I have done is add a debug variable which I set in my own stat_my.c file. Below is the show_stat function.
extern int local_debug_lk;

static int show_stat(struct seq_file *p, void *v)
{
    int temp = 0;

    if (local_debug_lk == 0) {
        seq_printf(p, "local_debug_lk=0, enabling,int_num=%d\n", int_num);
        local_debug_lk = 1;
    } else {
        seq_printf(p, "local_debug_lk=:%d,int_num=%d\n", local_debug_lk, int_num);
        while (temp < int_num) {
            seq_printf(p, "%d\n", intr_list_seq[temp]);
            temp++;
        }
        local_debug_lk = 0;
        int_num = 0;
    }
    return 0;
}
Driver file:
int local_debug_lk, int_num;

isr_root(...)
{
    /*
     * logic to extract IRQ number, saved in vect variable
     */
    if (local_debug_lk && (int_num < 50000)) {
        intr_list_seq[int_num] = vect;
        int_num++;
    }
}
What I expect is that the first "cat /proc/show_stat" will enable the local_debug_lk flag, and whenever an interrupt occurs in the driver file it will be stored in the intr_list_seq[] array. When I do "cat /proc/stat_my" a second time, it should print the IRQ sequence and disable IRQ recording by setting local_debug_lk=0.
But what's happening is that I always get the
"local_debug_lk=0, enabling,int_num=0" log on cat; i.e. local_debug_lk is always zero; it never gets enabled.
Also, when my driver is not running, it works fine!
On two consecutive "cat /proc/stat_my", the value is first set to 1 and then to 0 again.
Is it possible my driver is not picking up the latest value of the local_debug_lk variable?
Could you please let me know what I am doing wrong here?
There can be more calls to the .show function than reads of the file (with cat /proc/show_stat). Moreover, the underlying system expects stable results from .show: if called with the same parameters, the function should print the same information to the seq_file.
Because of that, switching a flag in the .show function makes little sense, and making the function's output depend on this flag is simply wrong.
Generally, changing any kernel state when a file is read is not what the user expects. It is better to use the write functionality for that.
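As a rough sketch of that suggestion (the handler name and the wiring are assumptions, and how it is registered, via proc_ops or file_operations, depends on the kernel version), the flag could be flipped from a write handler instead of as a side effect of .show:
/* Fragment meant for the existing stat_my.c; assumes the usual proc/seq_file
 * includes plus <linux/uaccess.h> for copy_from_user(). */
static ssize_t stat_my_write(struct file *file, const char __user *ubuf,
                             size_t count, loff_t *ppos)
{
    char kbuf[2] = { 0 };

    if (count < 1 || copy_from_user(kbuf, ubuf, 1))
        return -EFAULT;

    local_debug_lk = (kbuf[0] == '1');   /* "echo 1 > /proc/stat_my" enables recording */
    if (!local_debug_lk)
        int_num = 0;                     /* reset the recorded-IRQ count when disabling */

    return count;
}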
The .show function actually prints its information into a temporary kernel buffer. If everything goes OK, the information from that buffer is transmitted into the user buffer and eventually printed by cat. But if the kernel buffer is too small, the information printed into it is discarded; in that case the underlying system allocates a bigger buffer and calls .show again.
.show is also rerun if the user buffer is too small to accommodate all the information printed.

C code stack corruption changing variable

I'm hoping that someone can help me out. I have not written much in C code in over a decade and just picked this back up 2 days ago so bear with me please as I am rusty. THANK YOU!
What:
I'm working on creating a very simple thread pool for an application. This code is written in C on CodeBlocks using GNU GCC for the compiler. It is built as a command line application. No additional files are linked or included.
The code should create X threads (in this case I have it set to 10), each of which sits and waits while watching an array entry (identified by the thread's index, or count) for any incoming data it might need to process. Once a given child has processed the data coming in via the array there is no need to pass the data back to the main thread; rather the child should simply reset that array entry to 0 to indicate that it is ready to process another input. The main thread will receive requests and will dole them out to whatever thread is available. If none are available then it will refuse to handle that input.
For simplicity's sake, the code below is a complete, working, but trimmed and gutted version that DOES exhibit the corruption I am trying to track down. It compiles fine and initially runs fine, but after a few passes the threadIndex value in the child thread (workerThread) becomes corrupt and jumps to weird values, generally becoming the number of milliseconds I have put in for the 'Sleep' call.
What I have checked:
The threadIndex variable is not a global or shared variable.
All arrays are plenty big enough to handle the max number of threads I am creating.
All loops have the loop variable reset to 0 before running.
I have not named multiple variables with the same name.
I use atomic_load to make sure I don't write to the same global array variable from two different threads at once (please note I am rusty... I may be misunderstanding how this part works).
I have placed test cases all over to see where the variable goes nuts and I am stumped.
Best Guess
All of my research confirms what I recall from years back; I likely am going out of bounds somewhere and causing stack corruption. I have looked at numerous other problems like this on google as well as on stack overflow and while all point me to the same conclusion I have been unable to figure out what specifically is wrong in my code.
#include<stdio.h>
//#include<string.h>
#include<pthread.h>
#include<stdlib.h>
#include<conio.h>
//#include<unistd.h>
#define ESCAPE 27
int maxThreads = 10;
pthread_t tid[21];
int ret[21];
int threadIncoming[21];
int threadRunning[21];
struct arg_struct {
char* arg1;
int arg2;
};
//sick of the stupid upper/lowercase nonsense... boom... fixed
void* sleep(int time){Sleep(time);}
void* workerThread(void *arguments)
{
//get the stuff passed in to us
struct arg_struct *args = (struct arg_struct *)arguments;
char *address = args -> arg1;
int threadIndex = args -> arg2;
//hold how many we have processed - we are unlikely to ever hit the max so no need to round robin this number at this point
unsigned long processedCount = 0;
//this never triggers so it IS coming in correctly
if(threadIndex > 20){
printf("INIT ERROR! ThreadIndex = %d", threadIndex);
sleep(1000);
}
unsigned long x = 0;
pthread_t id = pthread_self();
//as long as we should be running
while(__atomic_load_n (&threadRunning[threadIndex], __ATOMIC_ACQUIRE)){
//if and only if we have something to do...
if(__atomic_load_n (&threadIncoming[threadIndex], __ATOMIC_ACQUIRE)){
//simulate us doing something
//for(x=0; x<(0xFFFFFFF);x++);
sleep(2001);
//the value going into sleep is CLEARLY somehow ending up in index because you can change that to any number you want
//and next thing you know the next line says "First thread processing done on (the value given to sleep)
printf("\n First thread processing done on %d\n", threadIndex);
//all done doing something so clear the incoming so we can reuse it for our next one
//this error should not EVER be able to get thrown but it is.... something is corrupting our stack and going into memory that it shouldn't
if(threadIndex > 20){ printf("ERROR! ThreadIndex = %d", threadIndex); }
else{ __atomic_store_n (&threadIncoming[threadIndex], 0, __ATOMIC_RELEASE); }
//increment the processed count
++processedCount;
}
else{Sleep(10);}
}
//no need to do atomocity I don't think for this as it is only set on the exit and not read till after everything is done
ret[threadIndex] = processedCount;
pthread_exit(&ret[threadIndex]);
return NULL;
}
int main(void)
{
int i = 0;
int err;
int *ptr[21];
int doLoop = 1;
//initialize these all to set the threads to running and the status on incoming to NOT be processing
for(i=0;i < maxThreads;i++){
threadIncoming[i] = 0;
threadRunning[i] = 1;
}
//create our threads
for(i=0;i < maxThreads;i++)
{
struct arg_struct args;
args.arg1 = "here";
args.arg2 = i;
err = pthread_create(&(tid[i]), NULL, &workerThread, (void *)&args);
if (err != 0){ printf("\ncan't create thread :[%s]", strerror(err)); }
}
//loop until we hit escape
while(doLoop){
//see if we were pressed escape
if(kbhit()){ if(getch() == ESCAPE){ doLoop = 0; } }
//just for testing - actual version would load only as needed
for(i=0;i < maxThreads;i++){
//make sure we synchronize so we don't end up pointing into a garbage address or half loading when a thread accesses us or whatever was going on
if(!__atomic_load_n (&threadIncoming[i], __ATOMIC_ACQUIRE)){
__atomic_store_n (&threadIncoming[i], 1, __ATOMIC_RELEASE);
}
}
}
//exiting...
printf("\n'Esc' pressed. Now exiting...\n");
//call to end them all...
for(i=0;i < maxThreads;i++){ __atomic_store_n (&threadRunning[i], 0, __ATOMIC_RELEASE); }
//join them all back up - if we had an actual worthwhile value here we could use it
for(i=0;i < maxThreads;i++){
pthread_join(tid[i], (void**)&(ptr[i]));
printf("\n return value from thread %d is [%d]\n", i, *ptr[i]);
}
return 0;
}
Output
Here is the output I get. Note that how long it takes before it starts going crazy seems to vary, but not by much.
Output Screen with Error
I don't trust your handling of args; there seems to be a race condition. What if you create N threads before the first one of them gets to run? Then the first thread created will probably see the args for the Nth thread rather than for the first, and so on.
I don't believe there's a guarantee that automatic variables used in a loop like that are created in non-overlapping areas; after all, they go out of scope with each iteration of the loop. A sketch of one way around this follows.
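One common fix is to give each thread its own arg_struct with a lifetime that covers the whole run, for example an array indexed by thread number that lives in main() until pthread_join. A minimal sketch of the creation loop (variable names follow the posted code; nothing else is changed):
struct arg_struct threadArgs[21];   /* one block per thread, alive until join */

for (i = 0; i < maxThreads; i++) {
    threadArgs[i].arg1 = "here";
    threadArgs[i].arg2 = i;
    err = pthread_create(&tid[i], NULL, &workerThread, (void *)&threadArgs[i]);
    if (err != 0) { printf("\ncan't create thread :[%s]", strerror(err)); }
}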

Any benefit of using assert instead of using a simple "if" ?

Given this code:
#include <stdio.h>
#include <assert.h>

void print_number(int* somePtr) {
    assert(somePtr != NULL);
    printf("%d\n", *somePtr);
}

int main()
{
    int a = 1234;
    int *b = NULL;
    int *c = NULL;

    b = &a;

    print_number(c);
    print_number(b);

    return 0;
}
I can do this instead:
#include <stdio.h>
#include <assert.h>

void print_number(int* somePtr) {
    if (somePtr != NULL)
        printf("%d\n", *somePtr);
    // else do something
}

int main()
{
    int a = 1234;
    int *b = NULL;
    int *c = NULL;

    b = &a;

    print_number(c);
    print_number(b);

    return 0;
}
So, what am I gaining by using assert?
Regards
assert is there to document your assumptions in the code; an if statement is there to handle different logical scenarios.
Now in your specific case, think from the point of view of the developer of the print_number() function.
For example when you write
void print_number(int* somePtr) {
    assert(somePtr != NULL);
    printf("%d\n", *somePtr);
}
you mean to say:
In my print_number function I assume that the pointer coming in is never null. I would be very, very surprised if it were null, and I don't care to handle that scenario at all in my code.
But, if you write
void print_number(int* somePtr) {
    if (somePtr != NULL)
        printf("%d\n", *somePtr);
    // else do something
}
you seem to say: in my print_number function, I expect people to pass a null pointer, and I know how to handle that situation, so I handle it with an else branch.
So, when you know how to handle a certain situation and you want to handle it, use if.
When you assume that something will not happen and you don't care to handle it, you just express your surprise and stop the program's execution there with assert.
The difference is that assert is enabled only for debug builds; it is not enabled for release builds (i.e., when NDEBUG is defined). That means there is no check in the release build, so your code will be a little faster than with an if condition, which remains in the release build as well.
That means assert is used to check for common programming errors as you write the code and to catch them as soon as possible, in the development phase itself.
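Concretely, the check disappears when NDEBUG is defined at the point assert.h is included, either on the compiler command line or in the source (a sketch using the question's print_number):
#include <stdio.h>

#define NDEBUG              /* or compile with: cc -DNDEBUG prog.c */
#include <assert.h>

void print_number(int* somePtr) {
    assert(somePtr != NULL);   /* expands to ((void)0) when NDEBUG is defined */
    printf("%d\n", *somePtr);
}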
Lots of reasons:
Asserts are usually removed for release builds.
Asserts will report failure information to the client. if() does nothing by itself.
Because asserts are usually macros, you can also get code information about the failing assertion.
Assert is more semantically clear than if().
If an assertion fails, you will see output containing the failed assertion itself, plus the function and line of the failed assert, something like:
test: main.cpp:9: int main(): Assertion `0==1' failed.
So, if your program crashes at runtime, you will see the exact reason and location of the crash.
There's a big article about assertions on Wikipedia.
Assert will inform you that something wrong happened, possibly an error to be fixed. In debug mode it will break and show a call stack that will help you fix the bug, so it's good practice to use it. I would actually use both if() and assert, because in release builds your asserts should be turned off:
void print_number(int* somePtr) {
    assert(somePtr != NULL);
    if (somePtr != NULL)
        printf("%d\n", *somePtr);
    // else do something
}
in " // else do something " you might think of throwing exception or returning error code.
With an if statement, whether the condition is true or false, the program simply continues with the next instructions.
With assert (from assert.h), if the condition is false the program terminates immediately with an assertion message.
EXAMPLE:
#include <assert.h>
#include <stdio.h>

int main()
{
    int a;

    printf("Enter an integer value: ");
    scanf("%d", &a);
    assert(a >= 10);

    printf("Integer entered is %d\n", a);
    return 0;
}

What is weird about wrapping setjmp and longjmp?

I am using setjmp and longjmp for the first time, and I ran across an issue that comes about when I wrap setjmp and longjmp. I boiled the code down to the following example:
#include <stdio.h>
#include <setjmp.h>

jmp_buf jb;

int mywrap_save()
{
    int i = setjmp(jb);
    return i;
}

int mywrap_call()
{
    longjmp(jb, 1);
    printf("this shouldn't appear\n");
}

void example_wrap()
{
    if (mywrap_save() == 0) {
        printf("wrap: try block\n");
        mywrap_call();
    } else {
        printf("wrap: catch block\n");
    }
}

void example_non_wrap()
{
    if (setjmp(jb) == 0) {
        printf("non_wrap: try block\n");
        longjmp(jb, 1);
    } else {
        printf("non_wrap: catch block\n");
    }
}

int main()
{
    example_wrap();
    example_non_wrap();
}
Initially I thought example_wrap() and example_non_wrap() would behave the same. However, the result of running the program (GCC 4.4, Linux):
wrap: try block
non_wrap: try block
non_wrap: catch block
If I trace the program in gdb, I see that even though mywrap_save() returns 1, the else branch after returning is oddly ignored. Can anyone explain what is going on?
The longjmp() routines may not be called after the routine which called the setjmp() routines returns.
In other words, you are screwing up your stack.
You might take a look at the assembly to see if you can piece together what's really happening.
setjmp() saves the current stack context and marks a point in the call stack. As long as that frame is still live, no matter how far the stack has grown beyond the marked point, you can use longjmp() to go back to it as if you had never left.
In your code, by the time longjmp() runs, mywrap_save() has already returned, so the marked point is no longer valid; the stack space around it has been reused (it is "dirty"), and you cannot go back to a dirty point. A sketch of a wrapping style that stays within the rules follows.
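If the goal is still to hide the setjmp() behind a nicer name, one workaround (a sketch, not from the original answer) is to make the saving wrapper a macro, so that setjmp() actually executes in the frame that is still alive when longjmp() is called:
#include <stdio.h>
#include <setjmp.h>

jmp_buf jb;

/* The macro expands in the caller, so the frame that performed the setjmp()
 * is still on the stack when mywrap_call() longjmps back to it. */
#define mywrap_save() setjmp(jb)

void mywrap_call(void)
{
    longjmp(jb, 1);
}

void example_wrap(void)
{
    if (mywrap_save() == 0) {
        printf("wrap: try block\n");
        mywrap_call();
    } else {
        printf("wrap: catch block\n");   /* now reached, as expected */
    }
}

int main(void)
{
    example_wrap();
    return 0;
}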
