I am having difficulties with the code below:
int i = 0;
int x = 0;
int ch;
int n;

while ((i < sizeof(buffer) - 1) && (x < (TIMER_FREQ * 30)))
{
    // getkey_serial0 returns either a (int)character or 0 if nothing on UART0
    if ((ch = getkey_serial0) == 0)
    {
        x++; // Increment x as there is nothing being received.
    }
    else
    {
        if (ch == '\n')
        {
            n++;
        }
        if (n < 8) {  // Yes I can simplify this but for some reason
        }             // I only just noticed this :/ Anyway, it is
        else {        // just here to avoid saving info I don't need
            buffer[i] = ch;
            i++;
        }
    }
}
The input being read is the result of a wireless scan, so the number of entries can vary greatly, and I need to avoid looping forever.
Originally I just read up to 11 '\n's, but this was rubbish as I kept missing SSIDs I needed, so I decided I needed some sort of timer or other method to break out after a certain amount of time.
TIMER_FREQ is defined as 10.
Clearly I am doing something stupid so any suggestions or tips would be greatly appreciated.
I generally prefer suggestions to help me try and think out the problem as opposed to fixed code posts :) I always seem to miss something simple despite my best efforts!
Thanks
EDIT: I should mention, this is on an embedded system (ARM7)
You should have access to a general purpose timer interrupt -- commonly called sys_tick().
The general practice in such "bare metal" applications is to configure the interrupt to fire every n milliseconds (10 ms is frequently used on my Cortex M3). Then, have the ISR update a counter. You'll want to ensure the counter update is atomic, so use a 32-bit, properly-aligned variable. (I'm assuming your processor is 32-bit, I can't recall for certain). Then your "application" code can poll the elapsed time as needed.
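Roughly, the pattern looks like this (a minimal sketch only; the ISR name, how it gets hooked to the hardware timer, and the 30-second window are illustrative, not taken from your code):

#include <stdint.h>

/* sketch only: hook sys_tick_isr() to a 10 ms periodic timer on your part */
static volatile uint32_t g_ticks = 0;   /* 32-bit and aligned, so a read from
                                            application code is a single atomic load */

void sys_tick_isr(void)                  /* fires every 10 ms */
{
    g_ticks++;
}

void read_scan_results(void)             /* application-side polling */
{
    uint32_t start = g_ticks;

    while ((g_ticks - start) < 3000u)    /* 3000 ticks * 10 ms = 30 s */
    {
        /* poll the UART, fill the buffer, break early once a full scan arrived */
    }
}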
BUT - this timer discussion might be moot. In my ARM9 applications, we tie an interrupt to the UART's receive buffer. The associated ISR captures the keystroke and then performs any buffer management. Is that an option for you?
Do you really mean:
if((ch = getkey_serial0) == 0) { ...
Or do you actually mean:
if((ch = getkey_serial0()) == 0) { ...
If you meant the latter but wrote the former, that is why ch never comes back as zero: you are assigning the address of the function, which is never 0, instead of calling it. Does your build produce many warnings?
If you want to time things, look into time(). It will let you see the system's wall clock, so you can determine if too many seconds have elapsed.
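For example, something along these lines (a sketch only, assuming your C library actually provides a working time() on the target, which on bare-metal ARM7 it frequently does not; the function name is made up):

#include <time.h>

void read_with_timeout(void)   /* illustrative name, not from your code */
{
    time_t start = time(NULL);

    while (difftime(time(NULL), start) < 30.0)  /* give up after roughly 30 seconds */
    {
        /* poll the UART and fill the buffer as before ... */
    }
}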
I have a strange situation under C/Visual Studio on a Windows 7 platform. A problem occurs from time to time, and I spent a lot of time finding it. The problem is within a third-party library, for which I have the complete code. There, a thread is created (the printLog statements were added by me):
...
plafParams->eventThreadFlag = 2;
printLog("before CreateThread");
if (plafParams->hReadThread_p = CreateThread(NULL, 0, ( LPTHREAD_START_ROUTINE ) plafPortReadThread, ( void * ) dlmsInstance, 0,
&plafParams->portReadThreadID) )
{
printLog("after CreateThread: OK");
plafParams->eventThreadFlag = 3;
}
else
{
unsigned int lasterr = GetLastError();
printLog("error CreateThread, last error:%x", lasterr);
/* Could not create the read thread. */
...
...
return FAILURE;
}
printLog("SUCCESS");
...
...
The thread function is:
void *plafPortReadThread(DLMS_GLOBALS *dlmsInstance)
{
PLAF_PARAMS *plafParams;
plafParams = (PLAF_PARAMS *)(dlmsInstance->plafParams);
printLog("start - plafPortReadThread, plafParams->eventThreadFlag=%x", plafParams->eventThreadFlag);
while ((plafParams->eventThreadFlag != 1) && (plafParams->eventThreadFlag != 3))
{
if (plafParams->eventThreadFlag == 0)
{
printLog("exit 1 - plafPortReadThread, plafParams->eventThreadFlag=%x", plafParams->eventThreadFlag);
CloseHandle(plafParams->hReadThread_p);
plafFree((void **)&plafParams);
ExitThread(0);
break;
}
}
printLog("start - plafPortReadThread, proceed=%d", proceed);
...
Now, when the flag is set before the while loop is started within the thread, everything works OK:
SUCCESS
start - plafPortReadThread, plafParams->eventThreadFlag=3
But sometimes the thread is quick enough so the while loop is started before the flag is actually set within the outer part.
The output is then:
start - plafPortReadThread, plafParams->eventThreadFlag=2
SUCCESS
Most surprisingly the while loop doesn't exit, even after the flag has been set to 3.
It seems that the compiler "optimizes" the flag and assumes that it cannot be changed from outside.
What could be the problem? I'm really surprised. Or is there something else I have overlooked completely? I know that the code is not very elegant and that such things would better be done with semaphores or signals. But it is not my code and I want to change as little as possible.
After removing the whole while condition it works as expected.
Should I change the struct or its fields to volatile? Everybody says that volatile is useless these days and no longer needed, except in cases where a memory location is changed by peripherals...
Prior to C11 this is totally platform-dependent, because the effect you are observing is due to the memory model used by your platform. This is different from a compiler optimization, as synchronization points between threads require the compiler to insert barrier instructions instead of, e.g., making something a constant. For C11, section 7.17.3 specifies the different models. So your value is not optimized out statically; thread A just never reads the value thread B wrote, but still sees its own local value.
In practice many projects don't use C11 yet, and thus you will likely have to check the documentation of your platform. Note that in many cases you don't have to modify the type of the variable for the flag (in case you can't). Most memory models specify synchronization points that also forbid reordering of certain instructions, e.g. in:
int x = 3;
_Atomic int a = 1;
x = 5;
a = 2;
the compiler will often have to ensure that x has the value 3 when a has the value 1, and that when a is assigned the value 2, x will have the value 5. volatile does not participate in this relationship (in the C/C++11 memory models; this is often confused because it does participate in Java's happens-before relation), and is mostly useless, unless your writes should never be optimized out because they have side effects the compiler can't understand, such as an LED blinking:
volatile int x = 1; // some special location - blink then clear
x = 1; // blink then clear
x = 1; // blink then clear
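If you can use C11, the flag itself can be made atomic. The sketch below is only illustrative: it simplifies the structure from the question, and note that Visual Studio's C compiler may not provide <stdatomic.h>, in which case the Windows Interlocked* functions play the same role:

#include <stdatomic.h>
#include <stdio.h>

/* simplified, illustrative version of the flag pattern from the question */
typedef struct {
    atomic_int eventThreadFlag;   /* written by the creating thread, read by the worker */
} PlafParams;

void workerLoop(PlafParams *p)
{
    /* the atomic load is a synchronization point, so the loop really re-reads
       the value the other thread stored instead of a stale local copy */
    while (atomic_load(&p->eventThreadFlag) != 1 &&
           atomic_load(&p->eventThreadFlag) != 3)
    {
        /* ... wait, or do other work ... */
    }
    printf("flag is now %d\n", atomic_load(&p->eventThreadFlag));
}

void creator(PlafParams *p)
{
    atomic_store(&p->eventThreadFlag, 2);
    /* ... CreateThread / pthread_create the worker here ... */
    atomic_store(&p->eventThreadFlag, 3);   /* the worker is guaranteed to see this */
}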
Normally when I have a big for loop I print messages to tell me how far along the process my program is, for example:
for(i = 0; i < large_n; i++) {
if( i % (large_n)/1000 == 0) {
printf("We are at %ld \n", i);
}
// Do some other stuff
}
I was wondering whether this hurts performance too much (a priori), and if so, whether there is a smarter alternative. Thanks in advance.
Maybe you can split the large loop in order to check the condition sometimes only, but I don't know if this will really save time, that depends more on your "other stuff".
int T = ...; // times to check the condition, make sure large_n % T == 0
for(int t = 0; t < T; ++t)
{
for(int i = large_n/T * t; i < large_n/T * (t+1); ++i)
{
// other stuff
}
printf("We are at %ld \n", large_n/T * (t+1));
}
Regardless of what is in your loop, I wouldn't be leaving statements like printf in unless it's essential to the application/user, nor would I use what are effectively redundant if statements, for the same reason.
Both of these are examples of trace level debugging. They're totally valid and in some cases very useful, but generally not ultimately so in the end application. In this respect, a usual thing to do is to only include them in the build when you actually want to use the information they provide. In this case, you might do something like this:
#define DEBUG
for(i = 0; i < large_n; i++)
{
#ifdef DEBUG
if( i % (large_n)/1000 == 0)
{
printf("We are at %ld \n", i);
}
#endif
}
Regarding the performance cost of including these debug outputs all the time, it will totally depend on the system you're running, the efficiency of whatever "printing" statement you're using to output the data, the check/s you're performing and, of course, how often you're trying to perform output.
Your mod test probably doesn't hurt performance, but if you want a very quick test and you're prepared to use powers of two, consider a bitwise AND test:
if ( ( i & 0xFF ) == 0 ) {
/* this gets printed every 256 iterations */
...
}
or
if ( ( i & 0xFFFF ) == 0 ) {
/* this gets printed every 65536 iterations */
...
}
By placing a print statement inside of the for loop, you are sacrificing some performance.
Because the program needs to do a system call to write output to the screen every time the message is printed, it takes CPU time away from the program itself.
You can see the difference in performance between these two loops:
int i;
printf("Start Loop A\n");
for(i = 0; i < 100000; i++) {
printf("%d ", i);
}
printf("Done with Loop A\n");
printf("Start Loop B\n");
for(i = 0; i < 100000; i++) {
// Do Nothing
}
printf("Done with Loop B\n");
I would include timing code, but I am in the middle of work and can update it later over lunch.
If the difference isn't noticeable, you can increase 100000 to a larger number (although too large a number would cause the first loop to take WAY too long to complete).
Whoops, forgot to finish my answer.
To cut down on the number of system calls your program needs to make, you could check a condition first, and only print if that condition is true.
For example, if you were counting up as in my example code, you could only print out every 100th number by using %:
int i;
for(i = 0; i < 100000; i++) {
if(i%100 == 0)
printf("%d", i);
}
That will reduce the number of syscalls from ~100000 to ~1000, which in turn would increase the performance of the loop.
The problem is that the I/O operation printf takes much more time than the processor spends calculating. You can reduce the time by accumulating the output and printing it all at the end.
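A rough sketch of that idea (buffer size and intervals are arbitrary for illustration; note that you give up seeing the progress live, which may defeat the purpose):

#include <stdio.h>

int main(void)
{
    char log[4096];
    size_t used = 0;

    for (long i = 0; i < 100000; i++) {
        if (i % 10000 == 0 && used < sizeof log - 64) {
            /* snprintf into memory instead of making a write() system call */
            used += snprintf(log + used, sizeof log - used,
                             "We are at %ld\n", i);
        }
        /* do the real work ... */
    }
    fputs(log, stdout);   /* one I/O operation at the very end */
    return 0;
}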
Notation:
Tp = total time spent executing the progress statements.
Tn = total time spent doing the other normal stuff.
>> = Much greater than
If performance is your main criteria, you want Tn >> Tp. This strongly suggests that the code should be profiled so that you can pick appropriate values. The routine 'printf()' is considered a slow routine (much slower than %) and is a blocking routine (that is, the thread that calls it may pend waiting for a resource used by it).
Personally, I like to abstract away the progress indicator. It can be a logging mechanism,
a printf, a progress box, .... Heck, it may be updating a structure that is read by another thread/task/process.
id = progressRegister (<some predefined type of progress update mechanism>);
for(i = 0; i < large_n; i++) {
progressUpdate (id, <string>, i, large_n);
// Do some other stuff
}
progressUnregister(id);
Yes, there is some overhead in calling the routine 'progressUpdate()' on each iteration, but again, as long as Tn >> Tp, it usually is not that important.
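One possible shape for that abstraction, as a sketch only (the function-pointer mechanism here is just one hypothetical way to implement progressUpdate(); it is not meant as the definitive design):

#include <stdio.h>

typedef void (*progress_fn)(const char *label, long current, long total);

static void progress_to_console(const char *label, long current, long total)
{
    fprintf(stderr, "%s: %ld/%ld\n", label, current, total);
}

int main(void)
{
    progress_fn progressUpdate = progress_to_console;   /* "register" a mechanism */
    long large_n = 1000000;

    for (long i = 0; i < large_n; i++) {
        if (i % (large_n / 10) == 0)          /* keep Tp small relative to Tn */
            progressUpdate("work", i, large_n);
        /* Do some other stuff */
    }
    return 0;
}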
Hope this helps.
Any ideas why it works fine for values like 0, 1, 2, 3, 4... and seg faults for values like >15?
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
void *fib(void *fibToFind);
main(){
    pthread_t mainthread;
    long fibToFind = 15;
    long finalFib;

    pthread_create(&mainthread,NULL,fib,(void*) fibToFind);
    pthread_join(mainthread,(void*)&finalFib);
    printf("The number is: %d\n",finalFib);
}

void *fib(void *fibToFind){
    long retval;
    long newFibToFind = ((long)fibToFind);
    long returnMinusOne;
    long returnMinustwo;
    pthread_t minusone;
    pthread_t minustwo;

    if(newFibToFind == 0 || newFibToFind == 1)
        return newFibToFind;
    else{
        long newFibToFind1 = ((long)fibToFind) - 1;
        long newFibToFind2 = ((long)fibToFind) - 2;

        pthread_create(&minusone,NULL,fib,(void*) newFibToFind1);
        pthread_create(&minustwo,NULL,fib,(void*) newFibToFind2);
        pthread_join(minusone,(void*)&returnMinusOne);
        pthread_join(minustwo,(void*)&returnMinustwo);
        return returnMinusOne + returnMinustwo;
    }
}
Runs out of memory (out of space for stacks), or valid thread handles?
You're asking for an awful lot of threads, which require lots of stack/context.
Windows (and Linux) have a stupid "big [contiguous] stack" idea.
From the documentation on pthread_create:
"On Linux/x86-32, the default stack size for a new thread is 2 megabytes."
If you manufacture 10,000 threads, you need 20 Gb of RAM.
I built a version of OP's program, and it bombed with some 3500 (p)threads on Windows XP64.
See this SO thread for more details on why big stacks are a really bad idea:
Why are stack overflows still a problem?
If you give up on big stacks, and implement a parallel language with heap allocation for activation records (our PARLANSE is one of these), the problem goes away.
Here's the first (sequential) program we wrote in PARLANSE:
(define fibonacci_argument 45)
(define fibonacci
(lambda(function natural natural )function
`Given n, computes nth fibonacci number'
(ifthenelse (<= ? 1)
?
(+ (fibonacci (-- ?))
(fibonacci (- ? 2))
)+
)ifthenelse
)lambda
)define
Here's an execution run on an i7:
C:\DMS\Domains\PARLANSE\Tools\PerformanceTest>run fibonaccisequential
Starting Sequential Fibonacci(45)...Runtime: 33.752067 seconds
Result: 1134903170
Here's the second, which is parallel:
(define coarse_grain_threshold 30) ; technology constant: tune to amortize fork overhead across lots of work
(define parallel_fibonacci
(lambda (function natural natural )function
`Given n, computes nth fibonacci number'
(ifthenelse (<= ? coarse_grain_threshold)
(fibonacci ?)
(let (;; [n natural ] [m natural ] )
(value (|| (= m (parallel_fibonacci (-- ?)) )=
(= n (parallel_fibonacci (- ? 2)) )=
)||
(+ m n)
)value
)let
)ifthenelse
)lambda
)define
Making the parallelism explicit makes the programs a lot easier to write, too.
The parallel version we test by calling (parallel_fibonacci 45). Here is the execution run on the same i7 (which arguably has 8 processors, but it is really 4 processors hyperthreaded, so it really isn't quite 8 equivalent CPUs):
C:\DMS\Domains\PARLANSE\Tools\PerformanceTest>run fibonacciparallelcoarse
Parallel Coarse-grain Fibonacci(45) with cutoff 30...Runtime: 5.511126 seconds
Result: 1134903170
A speedup near 6+, not bad for not-quite-8 processors. One of the other answers to this question ran the pthreads version; it took "a few seconds" (to blow up) computing Fib(18), and this is 5.5 seconds for Fib(45). This tells you pthreads is a fundamentally bad way to do lots of fine-grain parallelism, because it has really, really high forking overhead. (PARLANSE is designed to minimize that forking overhead.)
Here's what happens if you set the technology constant to zero (forks on every call to fib):
C:\DMS\Domains\PARLANSE\Tools\PerformanceTest>run fibonacciparallel
Starting Parallel Fibonacci(45)...Runtime: 15.578779 seconds
Result: 1134903170
You can see that amortizing fork overhead is a good idea, even if you have fast forks. Fib(45) produces a lot of grains. Heap allocation of activation records solves the OP's first-order problem (thousands of pthreads, each with 1 MB of stack, burn gigabytes of RAM).
But there's a second-order problem: 2^45 PARLANSE "grains" will burn all your memory too, just keeping track of the grains, even if your grain control block is tiny. So it helps to have a scheduler that throttles forks once you have "a lot" (for some definition of "a lot" significantly less than 2^45) of grains, to prevent the explosion of parallelism from swamping the machine with grain-tracking data structures. It also has to unthrottle forks when the number of grains falls below a threshold, to make sure there is always lots of logical, parallel work for the physical CPUs to do.
You are not checking for errors - in particular, from pthread_create(). When pthread_create() fails, the pthread_t variable is left undefined, and the subsequent pthread_join() may crash.
If you do check for errors, you will find that pthread_create() is failing. This is because you are trying to generate almost 2000 threads - with default settings, this would require 16GB of thread stacks to be allocated alone.
You should revise your algorithm so that it does not generate so many threads.
I tried to run your code, and came across several surprises:
printf("The number is: %d\n", finalFib);
This line has a small error: %d means printf expects an int, but is passed a long int. On most platforms this is the same, or will have the same behavior anyways, but pedantically speaking (or if you just want to stop the warning from coming up, which is a very noble ideal too), you should use %ld instead, which will expect a long int.
Your fib function, on the other hand, seems non-functional. Testing it on my machine, it doesn't crash, but it yields 1047, which is not a Fibonacci number. Looking closer, it seems your program is incorrect on several aspects:
void *fib(void *fibToFind)
{
    long retval;              // retval is never used
    long newFibToFind = ((long)fibToFind);
    long returnMinusOne;      // variable is read but never initialized
    long returnMinustwo;      // variable is read but never initialized
    pthread_t minusone;       // variable is never used (?)
    pthread_t minustwo;       // variable is never used

    if(newFibToFind == 0 || newFibToFind == 1)
        // you miss a cast here (but you really shouldn't do it this way)
        return newFibToFind;
    else{
        long newFibToFind1 = ((long)fibToFind) - 1; // variable is never used
        long newFibToFind2 = ((long)fibToFind) - 2; // variable is never used

        // reading undefined variables (and missing a cast)
        return returnMinusOne + returnMinustwo;
    }
}
Always take care of compiler warnings: when you get one, usually, you really are doing something fishy.
Maybe you should revise the algorithm a little: right now, all your function does is returning the sum of two undefined values, hence the 1047 I got earlier.
Implementing the Fibonacci sequence using a recursive algorithm means you need to call the function again. As others noted, it's quite an inefficient way of doing it, but it's easy, so I guess all computer science teachers use it as an example.
The regular recursive algorithm looks like this:
int fibonacci(int iteration)
{
if (iteration == 0 || iteration == 1)
return 1;
return fibonacci(iteration - 1) + fibonacci(iteration - 2);
}
I don't know to what extent you were supposed to use threads: just run the algorithm on a secondary thread, or create new threads for each call? Let's assume the first for now, since it's a lot more straightforward.
Casting integers to pointers and vice-versa is a bad practice because if you try to look at things at a higher level, they should be widely different. Integers do maths, and pointers resolve memory addresses. It happens to work because they're represented the same way, but really, you shouldn't do this. Instead, you might notice that the function called to run your new thread accepts a void* argument: we can use it to convey both where the input is, and where the output will be.
So building upon my previous fibonacci function, you could use this code as the thread main routine:
void* fibonacci_offshored(void* pointer)
{
int* pointer_to_number = pointer;
int input = *pointer_to_number;
*pointer_to_number = fibonacci(input);
return NULL;
}
It expects a pointer to an integer, takes its input from it, then writes its output back there.¹ You would then create the thread like that:
int main()
{
int value = 15;
pthread_t thread;
// on input, value should contain the number of iterations;
// after the end of the function, it will contain the result of
// the fibonacci function
int result = pthread_create(&thread, NULL, fibonacci_offshored, &value);
// error checking is important! try to crash gracefully at the very least
if (result != 0)
{
perror("pthread_create");
return 1;
}
if (pthread_join(thread, NULL) != 0)
{
perror("pthread_join");
return 1;
}
// now, value contains the output of the fibonacci function
// (note that value is an int, so just %d is fine)
printf("The value is %d\n", value);
return 0;
}
If you need to call the Fibonacci function from new distinct threads (please note: that's not what I'd advise, and others seem to agree with me; it will just blow up for a sufficiently large amount of iterations), you'll first need to merge the fibonacci function with the fibonacci_offshored function. It will considerably bulk it up, because dealing with threads is heavier than dealing with regular functions.
void* threaded_fibonacci(void* pointer)
{
int* pointer_to_number = pointer;
int input = *pointer_to_number;
if (input == 0 || input == 1)
{
*pointer_to_number = 1;
return NULL;
}
// we need one argument per thread
int minus_one_number = input - 1;
int minus_two_number = input - 2;
pthread_t minus_one;
pthread_t minus_two;
// don't forget to check! especially that in a recursive function where the
// recursion set actually grows instead of shrinking, you're bound to fail
// at some point
if (pthread_create(&minus_one, NULL, threaded_fibonacci, &minus_one_number) != 0)
{
perror("pthread_create");
*pointer_to_number = 0;
return NULL;
}
if (pthread_create(&minus_two, NULL, threaded_fibonacci, &minus_two_number) != 0)
{
perror("pthread_create");
*pointer_to_number = 0;
return NULL;
}
if (pthread_join(minus_one, NULL) != 0)
{
perror("pthread_join");
*pointer_to_number = 0;
return NULL;
}
if (pthread_join(minus_two, NULL) != 0)
{
perror("pthread_join");
*pointer_to_number = 0;
return NULL;
}
*pointer_to_number = minus_one_number + minus_two_number;
return NULL;
}
Now that you have this bulky function, adjustments to your main function are going to be quite easy: just change the reference to fibonacci_offshored to threaded_fibonacci.
int main()
{
int value = 15;
pthread_t thread;
int result = pthread_create(&thread, NULL, threaded_fibonacci, &value);
if (result != 0)
{
perror("pthread_create");
return 1;
}
pthread_join(thread, NULL);
printf("The value is %d\n", value);
return 0;
}
You might have been told that threads speed up parallel processes, but there's a limit somewhere where it's more expensive to set up the thread than run its contents. This is a very good example of such a situation: the threaded version of the program runs much, much slower than the non-threaded one.
For educational purposes, this program runs out of threads on my machine when the number of desired iterations is 18, and takes a few seconds to run. By comparison, using an iterative implementation, we never run out of threads, and we have our answer in a matter of milliseconds. It's also considerably simpler. This would be a great example of how using a better algorithm fixes many problems.
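For reference, an iterative version is only a few lines (this is my sketch, not code from the question; it returns the same shifted sequence as the recursive fibonacci above, i.e. fibonacci(0) == fibonacci(1) == 1):

int fibonacci_iterative(int iteration)
{
    int previous = 1, current = 1;
    for (int i = 2; i <= iteration; i++)
    {
        int next = previous + current;   // each step only needs the last two values
        previous = current;
        current = next;
    }
    return current;
}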
Also, out of curiosity, it would be interesting to see if it crashes on your machine, and where/how.
1. Usually, you should try to avoid changing the meaning of a variable between its value on input and its value after the function returns. For instance, here, on input, the variable is the number of iterations we want; on output, it's the result of the function. Those are two very different meanings, and that's not really good practice. I didn't feel like using dynamic allocation to return a value through the void* return value.
Okay, so I've got some C code to perform a mathematical operation which could, pretty much, take any length of time (depending on the operands supplied to it, of course). I was wondering if there is a way to register some kind of method which will be called every n seconds which can analyse the state of the operation, i.e. what iteration it is currently at, possibly using a hardware timer interrupt or something?
The reason I ask this is because I know the common way to implement this is to be keeping track of the current iteration in a variable; say, an integer called progress and have an IF statement like this in the code:
if ((progress % 10000) == 0)
printf("Currently at iteration %d\n", progress);
but I believe that a mod operation takes a relatively long time to execute, so the idea of having it inside a loop which will be run many, many times scares me, from an optimisation point of view.
So I get the feeling that having an external way of signalling a progress print is nice and efficient. Are there any great ways to perform this, or is the simple 'mod check' the best (in terms of optimising)?
I'd go with the mod check, but maybe with subtractions instead :-)
icount = 0;
progress = 10000;
/* ... */
if (--progress == 0) {
progress = 10000;
printf("Currently at iteration %d0000\n", ++icount);
}
/* ... */
While mod operations are usually slow, the compiler should be able to optimize and predict this really well and only mis-predict once every 10,000 ifs, burning one mod operation and ~20 cycles (for the mis-prediction) on it, which is fine. So you are trying to optimize away one mod operation every 10,000 iterations. Of course this assumes you are running it on a modern and typical CPU, and not some embedded system with unknown specs. This should even be faster than having a counter variable.
Suggestion: Test it with and without the timing code, and figure out a complex solution if there is really a problem.
Premature optimisation is the root of all evil. -Knuth
mod is about the same speed as division; on most CPUs these days that means about 5-10 cycles... in other words hardly anything, slower than multiply/add/subtract, but not enough to really worry about.
However, you are right to want to avoid sitting in a loop spinning if you're doing work in another thread or something like that. If you're on a unixish system there's timer_create(), or on Linux the much easier to use timerfd_create().
But for single threaded, just putting that if in is enough.
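If you do want the timer-based approach on Linux, a timerfd sketch might look like this (the 5-second interval and the loop bound are just for illustration):

#include <sys/timerfd.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* periodic 5-second timer, polled non-blockingly from the work loop */
    int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
    struct itimerspec spec = {
        .it_value    = { .tv_sec = 5, .tv_nsec = 0 },  /* first expiry */
        .it_interval = { .tv_sec = 5, .tv_nsec = 0 },  /* then every 5 s */
    };
    timerfd_settime(tfd, 0, &spec, NULL);

    for (long i = 0; i < 100000000L; i++) {
        uint64_t expirations;
        /* non-blocking read: fails with EAGAIN if the timer hasn't fired yet */
        if (read(tfd, &expirations, sizeof expirations) > 0)
            printf("Currently at iteration %ld\n", i);
        /* ... do the real work ... */
    }
    close(tfd);
    return 0;
}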
Use setitimer() to raise SIGALRM signals at regular intervals.
#include <signal.h>     /* signal, SIGALRM */
#include <stdio.h>      /* printf */
#include <sys/time.h>   /* setitimer, struct itimerval */
#include <unistd.h>     /* write, STDOUT_FILENO */

struct itimerval interval;

void handler( int x ) {
    write( STDOUT_FILENO, ".", 1 ); /* Defined in POSIX, not in C */
}

int main() {
    signal( SIGALRM, &handler );

    interval.it_value.tv_sec = 5;    /* display after 5 seconds */
    interval.it_interval.tv_sec = 5; /* then display every 5 seconds */
    setitimer( ITIMER_REAL, &interval, NULL );

    /* do computations */

    interval.it_value.tv_sec = 0;    /* a zero it_value disarms the timer */
    interval.it_interval.tv_sec = 0; /* don't display progress any more */
    setitimer( ITIMER_REAL, &interval, NULL );
    printf( "\n" ); /* done with the dots! */
}
Note, only a small set of async-signal-safe functions are OK to call inside a signal handler; write() is one of them, printf() is not. If you want to communicate anything for a fancier printout, do it through a sig_atomic_t variable.
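For instance, a sketch of the sig_atomic_t approach (the handler only sets a flag and the main loop does the actual printf; names, intervals, and the loop bound are illustrative):

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t report_due = 0;

static void on_alarm(int sig) {
    (void)sig;
    report_due = 1;              /* just set a flag; do the printf outside the handler */
}

int main(void) {
    struct itimerval interval = { { 5, 0 }, { 5, 0 } }; /* it_interval, it_value: every 5 s */
    signal(SIGALRM, on_alarm);
    setitimer(ITIMER_REAL, &interval, NULL);

    for (long i = 0; i < 100000000L; i++) {
        if (report_due) {
            report_due = 0;
            printf("Currently at iteration %ld\n", i);  /* safe: outside the signal handler */
        }
        /* ... do computations ... */
    }
    return 0;
}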
You could have a global variable for the iterations, which you could monitor from an external thread.
While () {
Print(iteration);
Sleep(1000);
}
You may need to watch out for data races though.
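For instance, a sketch using a monitor thread plus a C11 atomic counter to avoid that race (names, the one-second report interval, and the loop bound are made up for illustration):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_long iteration = 0;
static atomic_bool done = false;

static void *monitor(void *arg) {
    (void)arg;
    while (!atomic_load(&done)) {
        printf("Currently at iteration %ld\n", atomic_load(&iteration));
        sleep(1);                      /* report once per second */
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, monitor, NULL);

    for (long i = 0; i < 100000000L; i++) {
        atomic_store(&iteration, i);   /* cheap: no system call in the hot loop */
        /* ... do the real work ... */
    }
    atomic_store(&done, true);
    pthread_join(t, NULL);
    return 0;
}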
I think I might be having a stack overflow problem or something similar in my embedded firmware code. I am a new programmer and have never dealt with a SO so I'm not sure if that is what's happening or not.
The firmware controls a device with a wheel that has magnets evenly spaced around it, and the board has a hall effect sensor that senses when a magnet is over it. My firmware operates the stepper and also counts steps while monitoring the magnet sensor in order to detect if the wheel has stalled.
I am using a timer interrupt on my chip (8-bit, 8051 arch.) to set output ports to control the motor and for the stall detection. The stall detection code looks like this...
// Enter ISR
// Change the ports to the appropriate value for the next step
// ...

StallDetector++; // Increment the stall detector

if (PosSensor != LastPosMagState)
{
    StallDetector = 0;
    LastPosMagState = PosSensor;
}
else
{
    if (PosSensor == ON)
    {
        if (StallDetector > (MagnetSize + 10))
        {
            HandleStallEvent();
        }
    }
    else if (PosSensor == OFF)
    {
        if (StallDetector > (GapSize + 10))
        {
            HandleStallEvent();
        }
    }
}
this code is called every time the ISR is triggered. PosSensor is the magnet sensor. MagnetSize is the number of stepper steps that it takes to get through the magnet field. GapSize is the number of steps between two magnets. So I want to detect if the wheel gets stuck either with the sensor over a magnet or not over a magnet.
This works great for a long time, but then after a while the first stall event will occur because StallDetector > (MagnetSize + 10), yet when I look at the value of StallDetector it is always around 220! This doesn't make sense because MagnetSize is always around 35, so the stall event should have been triggered at about 46; somehow it got all the way up to 220. And I don't set the value of StallDetector anywhere else in my code.
Do you have any advice on how I can track down the root of this problem?
The ISR looks like this
void Timer3_ISR(void) interrupt 14
{
OperateStepper(); // This is the function shown above
TMR3CN &= ~0x80; // Clear Timer3 interrupt flag
}
HandleStallEvent just sets a few variables back to their default values so that it can attempt another move...
#pragma save
#pragma nooverlay
void HandleStallEvent()
{
///*
PulseMotor = 0; //Stop the wheel from moving
SetMotorPower(0); //Set motor power low
MotorSpeed = LOW_SPEED;
SetSpeedHz();
ERROR_STATE = 2;
DEVICE_IS_HOMED = FALSE;
DEVICE_IS_HOMING = FALSE;
DEVICE_IS_MOVING = FALSE;
HOMING_STATE = 0;
MOVING_STATE = 0;
CURRENT_POSITION = 0;
StallDetector = 0;
return;
//*/
}
#pragma restore
Is PosSensor volatile? That is, do you update PosSensor somewhere, or is it directly reading a GPIO?
I assume GapSize is rather large (> 220?) It sounds to me like you might have a race condition.
// PosSensor == OFF, LastPosMagState == OFF
if(PosSensor != LastPosMagState)
{
StallDetector = 0;
LastPosMagState = PosSensor;
}
else
{
// Race Condition: PosSensor turns ON here
// while LastPosMagState still == OFF
if (PosSensor == ON)
{
if (StallDetector > (MagnetSize + 10))
{
HandleStallEvent();
}
}
else if (PosSensor == OFF)
{
if (StallDetector > (GapSize + 10))
{
HandleStallEvent();
}
}
}
You should cache the value of PosSensor once, right after doing StallDetector++, so that in the event PosSensor changes during your code, you don't start testing the new value.
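Concretely, the caching could look something like this (a sketch of the ISR body using the names from your code; the local's type is a guess since the declaration of PosSensor isn't shown):

StallDetector++;

unsigned char sensorNow = PosSensor;   /* read the (volatile) sensor exactly once */

if (sensorNow != LastPosMagState)
{
    StallDetector = 0;
    LastPosMagState = sensorNow;
}
else if (sensorNow == ON)
{
    if (StallDetector > (MagnetSize + 10))
        HandleStallEvent();
}
else /* sensorNow == OFF */
{
    if (StallDetector > (GapSize + 10))
        HandleStallEvent();
}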
This is definitely not stack overflow. If you blew the stack (overflowed it) your application would simply crash. This sounds more like something we used to call memory stomping in my C++ days. You may not be accessing the memory location that the StallDetector value occupies via StallDetector variable alone. There may be another part of your code "stomping" this particular memory location erroneously.
Unfortunately, this kind of issue is very hard to track down. About the only thing you could do is systematically isolate (remove from execution) chunks of your code until you narrow down and find the bug.
Do you have nested ISRs on your system? It could be something along the lines of: start your ISR and increment your count, then get interrupted and do it again. Do this enough times and your interrupt stack can overflow, which would also explain such a high counter value.
Does HandleStallEvent() "look at" StallDetector within the ISR or does it trigger something on the main loop? If it's on the main loop, are you clearing the interrupt bit?
Or are you looking at StallDetector from a debugger outside the ISR? Then a retriggered interrupt would use the correct value each time, but execute too many times, and you would only see the final, inflated value.
On second thought, more likely you don't have to clear an interrupt-generating register, but rather the interrupt pin is remaining asserted by the sensor. You need to ignore the interrupt after it's first handled until the line deasserts, such as by having the original ISR disable itself and reinstall itself from a second ISR which handles the 1->0 transition.
You might then also need to add debouncing hardware or adjust it if you have it.
Check your parameter types. If you defined the parameters in a way different than the caller expects then calling your method could overwrite the space that variable is stored in. (For instance if you wrote the function expecting an int but it is pushing a long onto the stack.)
You could see what additional options your debugger supports. In Visual Studio, for example, it is possible to set a "data breakpoint", where you break when a memory location changes (or is set to a certain value, or above a threshold, ...).
If something like this is possible in your case, you could see where the data is changed and if there is someone else writing to the memory erroneously.