Embedded C: AVR; variable in header cannot be evaluated in main - c

Thank you for taking the time to read.
Problem I'm seeing
I have:
main.h
which declares:
uint8_t u_newVar;
I also have:
foo.h
which declares:
extern uint8_t u_newVar;
The application sits in an infinite while loop in main.c until an ISR occurs.
In foo.c, said ISR calls a function within foo.c.
In that function (foo_function(), if you will), 0x01 is written to u_newVar.
Finally, after returning from the interrupt and back into the infinite while loop, there is a single "if" statement:
while (1) {
    if (u_newVar == 0x01) {
        uartTX_sendArray(st_uartRX_MessageContents->u_command, (sizeof st_uartRX_MessageContents->u_command));
        uartTX_sendButtonPressData(st_uartRX_MessageContents->u32_value);
        u_newVar = 0x00;
    }
}
However, the application never enters the if. This "if" block will work if it is in foo.c, after the
u_newVar = 0x01;
line.
Stuff I tried
I looked at the compiled assembly, and what I found kind of stumps me.
If I look at the "if" in main, this is what I see in the disassembly:
It loads the value from SRAM address 0x011D, which I can confirm is 0x01.
Then "CPI" compares R24 directly to 0x01, which should obviously work.
Then "BREQ", branch if equal, increments the program counter twice, down to the uart functions below. Also makes sense.
However, this part is weird. Otherwise, it uses "RJMP" to jump to the very instruction it is currently at. Unless I'm mistaken, this would lock up here for eternity?
So, remember when I mentioned that the "if" block works when I put it into foo.c after the write to u_newVar? Yeah, that's not too interesting:
The "if" is supposed to be after "u_newVar = 0x01", but the compiler is too smart and optimizes it how. Which is why it always works there.

You forgot to tell the compiler that the variable might be modified asynchronously, such as in an ISR:
volatile uint8_t u_newVar;
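A minimal sketch of how the declarations might look across the files named in the question (the vector name and ISR body are illustrative, not from the original post; the key point is that the extern declaration in foo.h must carry the volatile qualifier too):

/* main.h -- the one definition, mirroring the question's layout */
#include <stdint.h>
volatile uint8_t u_newVar;

/* foo.h -- the extern declaration must match, including volatile */
extern volatile uint8_t u_newVar;

/* foo.c -- the ISR sets the flag asynchronously (avr-gcc/avr-libc assumed) */
#include <avr/interrupt.h>
ISR(SOME_vect)           /* illustrative vector name */
{
    u_newVar = 0x01;     /* picked up by the while(1) loop in main() */
}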

What you're looking at is the output from the assembler, not the linked code.
The linker has information in the relocation table for that offset in the code (the data byte(s) of the instruction and what the actual target offset is). During the linking process (or, in certain environments, during load processing) those data bytes will be updated to contain the correct value.

Related

C: Using static volatile with "getter" function and interruptions

Suppose I have the following C code:
/* clock.c */
#include "clock.h"

static volatile uint32_t clock_ticks;

uint32_t get_clock_ticks(void)
{
    return clock_ticks;
}

void clock_tick(void)
{
    clock_ticks++;
}
Now I am calling clock_tick() (i.e. incrementing the clock_ticks variable) within an interrupt, while calling get_clock_ticks() from the main() function (i.e. outside the interrupt).
My understanding is that clock_ticks should be declared volatile, as otherwise the compiler could optimize its accesses and make main() think the value has not changed (while it actually changed from the interrupt).
I wonder if using the get_clock_ticks(void) function there, instead of accessing the variable directly from main() (i.e. not declaring it as static), can actually force the compiler to load the variable from memory even if it was not declared as volatile.
I wonder this because someone told me this could be happening. Is it true? Under which conditions? Should I always use volatile anyway, no matter whether I use a "getter" function?
A getter function doesn't help in any way here over using volatile.
Assume the compiler sees you've just fetched the value two lines above and not changed it since then.
If it's a good optimizing compiler, I would expect it to see that the function call has no side effects and simply optimize out the call.
If get_clock_ticks() would be external (i.e. in a separate module), matters are different (maybe that's what you remember).
Something that can change its value outside normal program flow (e.g. in an ISR), should always be declared volatile.
Don't forget that even if you currently compile the code declaring get_clock_ticks and the code using it as separate modules, perhaps one day you will use link-time or cross-module optimisation. Keep the "volatile" even though you are using a getter function - it will do no harm to the code generation in this case, and makes the code correct.
One thing you have not mentioned is the bit size of the processor. If it is not capable of reading a 32-bit value in a single operation, then your get_clock_ticks() will sometimes fail as the reads are not atomic.
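For an 8-bit part such as the AVR in the question at the top of this page, a common way to make the getter both volatile-correct and atomic is to copy the counter with interrupts briefly disabled. A minimal sketch, assuming avr-gcc with avr-libc's <util/atomic.h>:

#include <stdint.h>
#include <util/atomic.h>    /* avr-libc */

static volatile uint32_t clock_ticks;

uint32_t get_clock_ticks(void)
{
    uint32_t copy;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
    {
        copy = clock_ticks;   /* the four byte reads cannot be torn by the tick ISR */
    }
    return copy;
}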

How to set up a callback function when an Exception occurs?

I've been stuck for a while on how to set up a callback when an exception occurs.
I have this test code:
void main()
{
    long *bad = (long *)0x0A000000; // Invalid address
    // When the following line gets executed
    // it causes an error and the debugger sends me to an assembly file.
    *bad = 123456789;
}
The assembly file that I am sent to looks like this (fragment of the real file):
.macro DEFAULT_ISR_HANDLER name=
.thumb_func
.weak \name
\name:
1: b 1b /* endless loop */
.endm
DEFAULT_ISR_HANDLER SRC_IRQHandler /*Debugger stops on this line*/
As I understand it, DEFAULT_ISR_HANDLER is a macro that defines an endless loop.
What I want to do is define my own function in a C file that I could call when an exception occurs, instead of calling what's defined in the DEFAULT_ISR_HANDLER macro.
My question is: how would I define a macro, in that assembly, that calls a specific C function?
Hopefully I explained myself. Any information or direction around this topic is appreciated.
In case it's relevant I am using GCC ARM compiler v5.4_2016q3
Thanks,
Isaac
EDIT
I am using a Cortex-M3.
Only now did I realize that I am talking about processor exceptions. According to the datasheet there is a list of 16 exception types.
Apparently, the way it works is that all the exception types are redirected to the macro, which in turn calls some thumb function and afterwards an endless loop (according to DEFAULT_ISR_HANDLER above in the code).
What I would like to do is define my own function in a C file, for convenience, so that every time any type of processor exception appears, I can control how to proceed.
You have two options:
Just define a C function with the void SRC_IRQHandler(void) signature; since the macro defines the default handler as weak, your function will override the default handler at the linking stage.
There should be a place in your project where SRC_IRQHandler is placed in what is called a vector table in the Cortex-M3 architecture. You can replace the name of that function with your own C function, and your function will be called when this interrupt (exception) happens.
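A sketch of the first option; the handler name comes from the macro invocation in the question, and the body is only illustrative:

/* This strong definition overrides the weak SRC_IRQHandler produced by the
   DEFAULT_ISR_HANDLER macro at link time. */
void SRC_IRQHandler(void)
{
    /* handle or report the exception here */
    for (;;)
    {
        /* halt so the fault stays visible in the debugger (illustrative) */
    }
}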
The Cortex-M family in general has well more than 16 exceptions: there are those plus as many interrupts as are implemented by that core (32, 64, 128, 256). But it is all fundamentally the same. The way the Cortex-M family works is that the hardware performs the EABI call for you, if you will: it preserves some of the registers and then starts execution at the address called out in the vector table, done in such a way that you can place the address of a normally compiled C function directly in the table. Historically you needed to wrap that function with some code to preserve and restore state, and instruction sets often have a special return-from-interrupt instruction, but the Cortex-M does it a bit differently.
So, knowing that, the next question is how you get that address into the table, and that depends on your code, build system, etc. The handlers might be set up to point to an address in RAM, and maybe you are running on an RTOS that provides a function you call at runtime to register a handler for an exception; the RTOS then changes the code, or some data value in RAM, that is tied into its own handler, which essentially wraps around yours. Or you are building the vector table in assembly or with some other tool-specific mechanism (although assembly is there, works, and is easy), and you simply count down to the right entry (or add a hundred more entries so you can count down to the right entry) and place the name of your C function there.
It is a good idea to disassemble, or do some other check on the result before running, to double-check that you have placed the handler address at the right physical address for that interrupt/exception.
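For the vector-table route, a table built in C might look roughly like this; the section name, symbols, and entry order are assumptions that must match your startup code and linker script, and the handler names are illustrative:

#include <stdint.h>

extern uint32_t _estack;              /* top of stack from the linker script (assumed symbol) */
void Reset_Handler(void);             /* provided by the startup code (assumed) */
void Default_Handler(void);
void My_HardFault_Handler(void);      /* your own C handler */

__attribute__((section(".isr_vector"), used))
void (* const vector_table[])(void) =
{
    (void (*)(void))&_estack,         /* 0: initial stack pointer */
    Reset_Handler,                    /* 1: Reset */
    Default_Handler,                  /* 2: NMI */
    My_HardFault_Handler,             /* 3: HardFault */
    /* ... remaining exception and device interrupt entries ... */
};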

MPLAB X, XC8, and C in general. Using variables across functions and source files

I am having a difficult time passing variables to functions--especially functions that are not in the same source file. I suspect these two problems are actually the same problem. I am sure this is somewhere on the internet, but I have done a lot of searching and I am now even more confused. Mostly I need someone to give me some direction on what I should be reading/searching for.
PROBLEM 1:
Say I have a source file named main.c. After the #includes and #defines, I declare a variable
int count;
I then declare a function
void increment() {
    count++;
}
Within main(), I call the function increment(), and then update PORTA to display the count on LEDs. Both "count" and PORTA are assigned zero before main() runs.
void main() {
    increment();
    PORTA = count;
}
The problem is that there appears to be two versions of "count". If this program was run, PORTA would never light an LED. However, if "PORTA = count;" were moved inside the function, it would increment properly. Furthermore, all hardware writes (Port, tris, etc) work fine inside the function, but variables I thought I declared globally do not. Thus, I assume the compiler is making a copy of "count" for the function call, and forgetting it when it returns.
I would normally just return a value from the function to get around this, but interrupt routines for the PIC cannot return a value, and I must use an interrupt.
What do I do? Surely I am missing a major concept!
PROBLEM 2: Example of a common issue
Say I am using the MLA device library and load the demo material for the HID_Mouse. Though it has ten million folders and source and header files that include each other, I am able to edit some of the subroutines and make it do my bidding. However, I need to declare a variable that is used both in main.c and modified by a function in app_device_mouse.c. How do I declare this thing so that it gets globally read/written, but I don't get declaration errors from the compiler?
../src/app_device_mouse.c:306: error: (192) undefined identifier "position_x"
i.e "You didn't declare 'int position_x' in app_device_mouse.c, even though you did in main.c
I'm not sure of the result of declaring it in both places, but something tells me that's a bad idea.
Thanks so much in advance for your time. I have learned a lot from this community!
-GB
For anyone who comes behind: the code in PROBLEM 1 was actually working code. My error instead was carelessly initializing my TRISC to 1 instead of 0xff, which means I was trying to run a button off an output. I should know better than that.
However, I was having this problem on other occasions by declaring my variables inside main() instead of outside the functions. This means I was trying to modify, inside a function, local variables that it had not declared - this was giving me nulls and garbage. Pedwards correctly identified that I was having trouble with global vs local variables, and "scope" was a really helpful keyword.
Declaring a variable as volatile is necessary when the variable is modified by an ISR. After Oled's comment I was able to find this information on page 169 of the XC8 compiler manual.
What you're missing is called "scope". It's not specific to XC8, any C book will help you.
PIC interrupts won't take/return anything for a reason. Define a global in the same file as your ISR is defined and read/change that. If you're going to write to it from the ISR, declare it 'volatile':
volatile int foo = 0x00;
If you need to access it from another file (beginners shall avoid this), declare it 'extern' in that file (or in a header it includes), keeping the volatile qualifier:
extern volatile int foo;
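A sketch of the two-file pattern described above; the file names, handler name, and the XC8 2.x-style interrupt qualifier are assumptions, so adjust them for your compiler version:

/* isr_file.c -- owns the flag; the ISR sets it */
volatile int foo = 0x00;

void __interrupt() my_isr(void)   /* XC8 2.x syntax (assumed); 1.x uses "void interrupt my_isr(void)" */
{
    foo = 1;                      /* set the flag for main() to act on */
    /* clear the relevant peripheral interrupt flag here */
}

/* main.c -- refers to the same object; the volatile qualifier must match */
extern volatile int foo;

void main(void)
{
    while (1)
    {
        if (foo)
        {
            foo = 0;              /* consume the flag */
            /* respond to the event, e.g. update PORTA */
        }
    }
}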

Compiler optimization call-ret vs jmp

I am building one of the projects and I am looking at the generated list file (target: x86-64). My code looks like:
int func_1(var1, var2) {
    asm_inline_(
    )
    func_2(var1, var2);
    return_1;
}

void func_2(var_1, var_2) {
    asm __inline__(
    )
    func_3();
}

/**** Jump to kernel ---> System call stub in assembly. This func is in a .S file ***/
void func_3() {
}
When I look at the assembly code, I find that a "jmp" instruction is used instead of a "call-return" pair when calling func_2 and func_3. I am sure it is one of the compiler optimizations, and I have not explored how to disable it. (GCC)
The moment I add some volatile variables to func_2 and func_3 and increment them, the "jmp" gets replaced by a "call-ret" pair.
I am bemused by this behavior because those variables are useless and don't serve any purpose.
Can someone please explain the behavior?
Thanks
If code jumps to the start of another function rather than calling it, when the jumped-to function returns, it will return back to the point where the outer function was called from, ignoring any more of the first function after that point. Assuming the behaviour is correct (the first function contributed nothing else to the execution after that point anyway), this is an optimisation because it reduces the number of instructions and stack manipulations by one level.
In the given example, the behaviour is correct; there's no local stack to pop and no value to return, so there is no code that needs to run after the call. (return_1, assuming it's not a macro for something, is a pure expression and therefore does nothing no matter its value.) So there's no reason to keep the stack frame around for the future when it has nothing more to contribute to events.
If you add volatile variables to the function bodies, you aren't just adding variables whose flow the compiler can analyse - you're adding slots that you've explicitly told the compiler could be accessed outside the normal control flow it can predict. The volatile qualifier warns the compiler that even though there's no obvious way for the variables to escape, something outside has a way to get their address and write to it at any time. So it can't reduce their lifetime, because it's been told that code outside the function might still try to write to that stack space; and obviously that means the stack frame needs to continue to exist for its entire declared lifespan.
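A small illustration of the effect (my own example, not the poster's code): with optimization enabled, gcc on x86-64 typically turns this into a tail call.

/* Compiled with gcc -O2, bar() usually becomes "jmp foo" instead of
   "call foo" followed by "ret", because nothing is left to do in bar()
   after foo() returns; its return value is simply passed through. */
int foo(int x);

int bar(int x)
{
    return foo(x + 1);
}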

How do I know if gcc agrees that something is volatile?

Consider the following:
volatile uint32_t i;
How do I know if gcc did or did not treat i as volatile? It would be declared as such because no nearby code is going to modify it, and modification of it is likely due to some interrupt.
I am not the world's worst assembly programmer, but I play one on TV. Can someone help me to understand how it would differ?
If you take the following stupid code:
#include <stdio.h>
#include <inttypes.h>

volatile uint32_t i;

int main(void)
{
    if (i == 64738)
        return 0;
    else
        return 1;
}
Compile it to object format and disassemble it via objdump, then do the same after removing 'volatile': there is no difference (according to diff). Is the volatile declaration just too close to where it's checked or modified, or should I just always use some atomic type when declaring something volatile? Do some optimization flags influence this?
Note, my stupid sample does not fully match my question, I realize this. I'm only trying to find out if gcc did or did not treat the variable as volatile, so I'm studying small dumps to try to find the difference.
Many compilers in some situations don't treat volatile the way they should. See this paper if you deal much with volatile, to avoid nasty surprises: Volatiles Are Miscompiled, and What to Do about It. It also contains a pretty good description of volatile, backed by quotations from the standard.
To be 100% sure, and for such a simple example, check the assembly output.
Try setting the variable outside a loop and reading it inside the loop. In the non-volatile case, the compiler might (or might not) shove it into a register or turn it into a compile-time constant before the loop, since it "knows" it's not going to change, whereas if it's volatile it will read it from its memory location every time through the loop.
Basically, when you declare something as volatile, you're telling the compiler not to make certain optimizations. If it decided not to make those optimizations, you don't know that it didn't do them because it was declared volatile, or just that it decided it needed those registers for something else, or it didn't notice that it could turn it into a compile time constant.
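A minimal sketch of that suggestion (my own example): build it at -O2 with and without the volatile qualifier and diff the disassembly.

#include <stdint.h>

volatile uint8_t flag;        /* try removing volatile and compare the output */

void wait_for_flag(void)
{
    while (flag == 0)
    {
        /* busy-wait; some ISR elsewhere is expected to set flag.
           Without volatile, the compiler may hoist the load out of the
           loop and spin forever on a stale register value. */
    }
}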
As far as I know, volatile is all about the optimizer. For example, if your code looked like this:
int foo() {
    int x = 0;
    while (x);
    return 42;
}
The "while" loop would be optimized out of the binary.
But if you define 'x' as being volatile (ie, volatile int x;), then the compiler will leave the loop alone.
Your little sample is inadequate to show anything. The difference between a volatile variable and one that isn't is that each load or store in the code has to generate precisely one load or store in the executable for a volatile variable, whereas the compiler is free to optimize away loads or stores of non-volatile variables. If you're getting one load of i in your sample, that's what I'd expect for volatile and non-volatile.
To show a difference, you're going to have to have redundant loads and/or stores. Try something like
int i = 5;
int j = i + 2;
i = 5;
i = 5;
printf("%d %d\n", i, j);
changing i between non-volatile and volatile. You may have to enable some level of optimization to see the difference.
The code there has three stores and two loads of i, which can be optimized away to one store and probably one load if i is not volatile. If i is declared volatile, all stores and loads should show up in the object code in order, no matter what the optimization. If they don't, you've got a compiler bug.
It should always treat it as volatile.
The reason the code is the same is that volatile just instructs the compiler to load the variable from memory each time it accesses it. Even with optimization on, the compiler still needs to load i from memory once in the code you've written, because it can't infer the value of i at compile time. If you access it repeatedly, you'll see a difference.
Any modern compiler has multiple stages. One of the fairly easy yet interesting questions is whether the declaration of the variable itself was parsed correctly. This is easy because the C++ name mangling should differ depending on the volatile-ness. Hence, if you compile twice, once with volatile defined away, the symbol tables should differ slightly.
Read the standard before you misquote or downvote. Here's a quote from n2798:
7.1.6.1 The cv-qualifiers
7 Note: volatile is a hint to the implementation to avoid aggressive optimization involving the object because the value of the object might be changed by means undetectable by an implementation. See 1.9 for detailed semantics. In general, the semantics of volatile are intended to be the same in C++ as they are in C.
The keyword volatile acts as a hint, much like the register keyword. However, volatile asks the compiler to keep its optimizations at bay. This way, it won't keep a copy of the variable in a register or a cache (to optimize speed of access) but rather fetch it from memory every time you request it.
Since there is so much confusion, some more: the C99 standard does in fact say that a volatile-qualified object must be looked up every time it is read, and so on, as others have noted. But there is also another section that says that what constitutes a volatile access is implementation-defined. So a compiler that knows the hardware inside out will know, for example, when you have an automatic volatile-qualified variable whose address is never taken, that it will not be put in a sensitive region of memory, and it will almost certainly ignore the hint and optimize it away.
This keyword finds use in setjmp/longjmp style error handling. The only thing you have to bear in mind is that you supply the volatile keyword when you think the variable may change. That is, you could take an ordinary object and manage with a few casts.
Another thing to keep in mind is that the definition of what constitutes a volatile access is left by the standard to the implementation.
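As a side note to the setjmp/longjmp remark above, a minimal sketch of why the qualifier matters there (my own example):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

int main(void)
{
    /* Locals changed between setjmp() and longjmp() have indeterminate
       values after the jump unless they are volatile-qualified. */
    volatile int attempts = 0;

    if (setjmp(env) != 0)
        printf("retry %d\n", attempts);

    if (++attempts < 3)
        longjmp(env, 1);

    return 0;
}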
If you really want to see different assembly, compile with optimization enabled.
