Using the C preprocessor to effectively rename variables

I'm writing a few very tight loops, and the outermost loop will run for over a month. It's my understanding that the fewer local variables a function has, the better the compiler can optimize it. In one of the loops I need a few flags, only one of which is used at a time. If you were the proverbial homicidal maniac who knows where I live, would you rather have the flag named flag and used as such throughout, or would you prefer something like
unsigned int flag;

while (condition) {
#define found_flag flag
    found_flag = 0;
    for (i = 0; i < n; i++) {
        if (found_condition) {
            found_flag = 1;
            break;
        }
    }
    if (!found_flag) {
        /* not found action */
    }
    /* other code leading up to the next loop with flag */
#define next_flag flag
    next_flag = 0;
    /* ... */
}
This provides the benefit of allowing descriptive names for each flag without adding a new variable but seems a little unorthodox. I'm a new C programmer so I'm not sure what to do here.

Don't bother doing this, just use a new variable for each flag. The compiler will be able to determine where each one is first and last used and optimise the actual amount of space used accordingly. If none of the usage of the flag variables overlap, then the compiler may end up using the same space for all flag variables anyway.
Code for readability first and foremost.
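For example, a minimal sketch of that advice, reusing the names from the question (i, n, condition and found_condition are assumed to be declared elsewhere; the mid-block declarations need C99 or later):

    while (condition) {
        unsigned int found_flag = 0;   /* one descriptive variable per flag */
        for (i = 0; i < n; i++) {
            if (found_condition) {
                found_flag = 1;
                break;
            }
        }
        if (!found_flag) {
            /* not found action */
        }
        /* other code leading up to the next loop */
        unsigned int next_flag = 0;    /* a separate variable; the compiler may reuse the same storage */
        /* ... */
    }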

I completely agree with dreamlax: the compiler will be smart enough for you to ignore this issue entirely, but I'd like to mention that you neglected a third option, which is rather more readable:
while (something) {
    /* setup per-loop preconditions */
    {
        int flag1;
        while (anotherthing) {
            /* ... */
        }
        /* deal with flag found or not-found here */
    }
    /* possibly some other preconditions */
    {
        int flag2;
        while (stillanotherthing) {
            /* ... */
        }
    }
}
which would tell a dumb compiler explicitly when you are done with each flag. Note that you will need to take care about where you declare variables that need to live beyond the flag-scope blocks.

Your trick would only be useful on very old, very simple, or buggy compilers that aren't capable of correct register (re)allocation and scheduling (sometimes that's what one is stuck with for unusual or ancient embedded processors). gcc, and most modern compilers, when optimizations are turned on, will reallocate the registers and local memory used for local variables so thoroughly that the variables can be hard to find at all when debugging at the machine-code level. So you might as well make your code readable and not spend brain power on this type of premature optimization.

Related

Const pointer to volatile struct member

I'm using a microcontroller to make some ADC measurements. I have an issue: when I compile the following code with -O2 optimization, the MCU freezes whenever the PrintVal() function is present in the code. I did some debugging, and it turns out that when I add the -fno-inline compiler flag, the code runs fine even with PrintVal() present.
Here is some background:
AdcIsr.c contains the interrupt that is executed when the ADC finishes its job. This file also contains the ISRInit() function, which initializes the pointer to the variable that will hold the value after conversion. The main loop waits for the interrupt and only then accesses AdcMeas.value.
AdcIsr.c
static volatile uint16_t* isrVarPtr = NULL;

ISR()
{
    uint8_t tmp = readAdc();
    *isrVarPtr = tmp;
}

void ISRInit(volatile uint16_t *var)
{
    isrVarPtr = var;
}
AdcMeas.c
typedef struct {
    uint8_t id;
    volatile uint16_t value;
} AdcMeas_t;

static AdcMeas_t AdcMeas = {0};

const AdcMeas_t* AdcMeasGetStructPtr()
{
    return &AdcMeas;
}
main.c
void PrintVal(const AdcMeas_t* data)
{
    printf("AdcMeas %d value: %d\r\n", data->id, data->value);
}

void StartMeasurement()
{
    ...
    AdcOn();
    ...
}

int main()
{
    ISRInit(AdcMeasGetStructPtr()->value);
    while (1)
    {
        StartMeasurement();
        WaitForISR();
        PrintVal(AdcMeasGetStructPtr());
        DelayMs(1000);
    }
}
Questions:
Is there something wrong with the usage of const AdcMeas_t* data as the argument of the PrintVal() function? I understand that AdcMeas.value may change inside the interrupt, so the value PrintVal() prints may be outdated.
AdcMeas.c contains a 'generic getter'. Is it good practice to use this sort of function to allow read-only access to a static structure? Or should I implement AdcMeasGetId() and AdcMeasGetValue() functions (note that this struct has only 2 members; what if it had 8)?
I know this code is a bit dumb (waiting for the interrupt in a while loop); it is just an example.
Some bugs:
You have no header files, neither library includes nor your own. This means that everything is hopelessly broken until you fix that: you cannot do multiple-file projects in C without header files.
*isrVarPtr = tmp; Here you write to a variable without protection from race conditions. If the main program reads this variable in several steps, you risk getting incorrect data. You need to protect against race conditions or guarantee atomic access (a sketch follows this list).
const AdcMeasGetStructPtr() is gibberish and there is no way that the return &AdcMeas; inside it would compile with a conforming C compiler.
If you have an old but conforming C90 compiler, the return type will get treated as int. Otherwise, if you have a modern C compiler, not even the function definition will compile. So it would seem that something is very wrong with your compiler, which is a greater concern than this bug.
Declaring the typedef struct in the C file and then returning a pointer to it doesn't make any sense. You need to re-design this module. You could have a getter function returning an instance to a private struct, if there is only ever going to be 1 instance of it (singleton). However, as mentioned, it needs to handle race conditions.
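As an illustration of the race-condition point above, a sketch of one way to guarantee atomic access, assuming a C11 toolchain (readAdc() is from the question; AdcMeasGetValue() is the getter name the question proposes):

    #include <stdatomic.h>
    #include <stdint.h>

    extern uint8_t readAdc(void);        /* from the question's AdcIsr.c */

    static _Atomic uint16_t adcResult;   /* shared between ISR and main loop */

    void ISR(void)
    {
        atomic_store(&adcResult, readAdc());   /* one indivisible store */
    }

    uint16_t AdcMeasGetValue(void)
    {
        return atomic_load(&adcResult);        /* one indivisible load */
    }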
Stylistic concerns:
Empty parentheses () in a function declaration are almost always wrong in C. This is obsolete style and means "accept any parameters". C++ is different here. (See the example after this list.)
int main() doesn't make any sense at all in a microcontroller system. You should use some implementation-defined form suitable for freestanding programs. The most commonly supported form is void main (void).
DelayMs(1000); is highly questionable code in any embedded system. There should never be a reason why you'd want to hang up your MCU being useless, with max current consumption, for a whole second.
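To illustrate the point about empty parentheses, using a function from the question:

    void StartMeasurement();        /* obsolescent style: says nothing about the arguments */
    void StartMeasurement(void);    /* prototype: takes no arguments */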
Overall it seems you would benefit from a "continuous conversion" ADC. ADCs that support continuous conversion just dump their latest read in the data register and you can pick it up with polling whenever you need it. Catching all ADC interrupts is really just for hard realtime systems, signal processing and similar.
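With such an ADC, the read side reduces to polling the data register, something like this (ADC_DATA is a hypothetical memory-mapped register, made up here purely for illustration):

    #define ADC_DATA (*(volatile uint16_t *)0x40012000u)   /* hypothetical address */

    uint16_t AdcLatest(void)
    {
        return ADC_DATA;   /* latest completed conversion; no interrupt needed */
    }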

Lazy-init an array with multi-threaded readers: is it safe without barriers or atomics?

I've been having an implementation discussion where the idea that a CPU can choose to completely reorder the storing of memory has come up.
I was initializing a static array in C using code similar to:
static int array[10];
static int array_initialized = 0;

void initialize() {
    array[0] = 1;
    array[1] = 2;
    ...
    array_initialized = -1;
}
and it is used later similar to:
int get_index(int index) {
    if (!array_initialized) initialize();
    if (index < 0 || index > 9) return -1;
    return array[index];
}
Is it possible for the CPU to reorder memory accesses on a multi-core Intel architecture (or another architecture) such that it sets array_initialized before the initialize function has finished setting the array elements? Or such that another execution thread can see array_initialized as non-zero before the entire array has been initialized in its view of the memory?
TL:DR: to make lazy-init safe if you don't do it before starting multiple threads, you need an _Atomic flag.
is it possible for the CPU to reorder memory access in a multi-core Intel (x86) architecture
No, such reordering is possible at compile time only. x86 asm effectively has acquire/release semantics for normal loads/stores. (seq_cst + a store buffer with store forwarding).
https://preshing.com/20120625/memory-ordering-at-compile-time/
(or other architecture)
Yes, most other ISAs have a weaker asm memory model that does allow StoreStore reordering and LoadLoad reordering. (Effectively memory_order_relaxed, or sort of like memory_order_consume on ISAs other than Alpha AXP, but compilers don't try to maintain data dependencies.)
None of this really matters from C, because the C memory model is very weak: it allows compile-time reordering, and simultaneous read/write or write+write of any object is data-race UB.
Data Race UB is what lets a compiler keep static variables in registers for the life of a function / inside a loop when compiling for "normal" ISAs.
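As a concrete illustration, a sketch of the classic load-hoisting transformation (not the output of any particular compiler):

    /* Thread B spin-waits on the plain int flag: */
    while (!array_initialized) { }            /* data race: no _Atomic, no volatile */

    /* Since nothing in this thread can change array_initialized inside the
     * loop, the compiler may legally load it once and produce: */
    if (!array_initialized) { for (;;) { } }  /* never sees the other thread's store */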
Having 2 threads run this function is C data-race UB if array_initialized isn't already set before either of them runs (e.g. by having the main thread run it once before starting any more threads). And then you can remove the array_initialized flag entirely, unless you have a use for the lazy-init feature before starting any more threads.
It's 100% safe for a single thread, regardless of how many other threads are running: the C programming model guarantees that a single thread always sees its own operations in program order. (Just like asm for all normal ISAs; other than explicit parallelism in ISAs like Itanium, you always see your own operations in order. It's only other threads seeing your operations where things get weird).
Starting a new thread is (I think) always a "full barrier", or in C terms "synchronizes with" the new thread. Stuff in the new thread can't happen before anything in the parent thread. So just calling get_index once from the main thread makes it safe with no further barriers for other threads to run get_index after that.
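For example, a minimal sketch assuming POSIX threads, with get_index() as defined in the question and a hypothetical worker function:

    #include <pthread.h>
    #include <stdio.h>

    void *worker(void *arg)       /* hypothetical thread function */
    {
        (void)arg;
        /* safe without barriers: initialize() already ran before this thread existed */
        printf("%d\n", get_index(3));
        return NULL;
    }

    int main(void)
    {
        get_index(0);             /* lazy-init runs here, still single-threaded */
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);  /* creation synchronizes-with the new thread */
        pthread_join(t, NULL);
    }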
You could make lazy init thread-safe with an _Atomic flag
This is similar to what gcc does for function-local static variables with non-constant initializers. Check out the code-gen for that if you're curious: a read-only check of an already-init flag and then a call to an init function that makes sure only one thread runs the initializer.
This requires an acquire load in the fast-path for the already-initialized state. That's free on x86 and SPARC-TSO (same asm as a normal load), but not on weaker ISAs. AArch64 has an acquire load instruction, other ISAs need some barrier instructions.
Turn your array_initialized flag into a 3-state _Atomic variable:
init not started (e.g. init == 0). Check for this with an acquire load.
init started but not finished (e.g. init == -1)
init finished (e.g. init == 1)
You can leave static int array[10]; itself non-atomic by making sure exactly 1 thread "claims" responsibility for doing the init, using atomic_compare_exchange_strong (which will succeed for exactly one thread). And then have other threads spin-wait for the INIT_FINISHED state.
Using initial state == 0 lets it be in the BSS, hopefully next to the data. Otherwise we might prefer INIT_FINISHED=0 for ISAs where branching on an int from memory being (non)zero is slightly more efficient than other numbers. (e.g. AArch64 cbnz, MIPS bne $reg, $zero).
We could get the best of both worlds (cheapest possible fast-path for the already-init case) while still having the flag in the BSS: Have the main thread write it with INIT_NOTSTARTED = -1 before starting any more threads.
Having the flag next to the array is helpful for a small array where the flag is probably in the same cache line as the data we want to index. Or at least the same 4k page.
#include <stdatomic.h>
#include <stdbool.h>

#ifdef __x86_64__
#include <immintrin.h>
#define SPINLOOP_BODY _mm_pause()
#else
#define SPINLOOP_BODY /**/
#endif

#ifdef __GNUC__
#define unlikely(expr) __builtin_expect(!!(expr), 0)
#define likely(expr) __builtin_expect(!!(expr), 1)
#define NOINLINE __attribute__((noinline))
#else
#define unlikely(expr) (expr)
#define likely(expr) (expr)
#define NOINLINE /**/
#endif

enum init_states {
    INIT_NOTSTARTED = 0,
    INIT_STARTED = -1,
    INIT_FINISHED = 1  // optional: make this 0 to speed up the fast-path on some ISAs, and store an INIT_NOTSTARTED before the first call
};

static int array[10];
static _Atomic int array_initialized = INIT_NOTSTARTED;

// called either before or during init.
// One thread claims responsibility for doing the init, others spin-wait
NOINLINE  // this is rare, make sure it doesn't bloat the fast-path
void initialize(void) {
    bool winner = false;
    // check read-only if another thread has already claimed init
    if (array_initialized == INIT_NOTSTARTED) {
        int expected = INIT_NOTSTARTED;
        winner = atomic_compare_exchange_strong(&array_initialized, &expected, INIT_STARTED);
        // seq_cst memory order is fine. Weaker might be ok but it only has to run once
    }

    if (winner) {
        array[0] = 1;
        // ...
        atomic_store_explicit(&array_initialized, INIT_FINISHED, memory_order_release);
    } else {
        // spin-wait for the winner in other threads
        // yield(); optional.
        // Or use some kind of mutex or condition var if init is really slow
        // otherwise just spin on a seq_cst load. (Or acquire is fine.)
        while (array_initialized != INIT_FINISHED)
            SPINLOOP_BODY;  // x86 only
        // winner's release store syncs with our load:
        // array[] stores Happened Before this point so we can read it without UB
    }
}

int get_index(int index) {
    // atomic acquire load is fine, doesn't need seq_cst. Cheaper than seq_cst on PowerPC
    if (unlikely(atomic_load_explicit(&array_initialized, memory_order_acquire) != INIT_FINISHED))
        initialize();

    if (unlikely(index < 0 || index > 9)) return -1;
    return array[index];
}
This does compile to correct-looking and efficient asm on Godbolt. Without unlikely() macros, gcc/clang think that at least the stand-alone version of get_index has initialize() and/or return -1 as the most likely fast-path.
And compilers wanted to inline the init function, which would be silly because it only runs once per thread at most. Hopefully profile-guided optimization would correct that.

Improve performance of reading volatile memory

I have a function reading from some volatile memory which is updated by a DMA. The DMA never operates on the same memory location as the function. My application is performance critical, and I realized that the execution time improves by approx. 20% if I do not declare the memory as volatile. Within the scope of my function the memory is non-volatile. However, I have to be sure that the next time the function is called, the compiler knows that the memory may have changed.
The memory is two two-dimensional arrays:
volatile uint16_t memoryBuffer[2][10][20] = {0};
The DMA operates on the opposite "matrix" from the one the program function uses:
void myTask(uint8_t indexOppositeOfDMA)
{
    for (uint8_t n = 0; n < 10; n++)
    {
        for (uint8_t m = 0; m < 20; m++)
        {
            // Do some stuff with memory (readings only):
            foo(memoryBuffer[indexOppositeOfDMA][n][m]);
        }
    }
}
Is there a proper way to tell my compiler that memoryBuffer is non-volatile inside the scope of myTask() but may be changed the next time I call myTask(), so I could obtain the performance improvement of 20%?
Platform: Cortex-M4
The problem without volatile
Let's assume that volatile is omitted from the data array. Then the C compiler and the CPU do not know that its elements change outside the program flow. Some things that could happen then:

The whole array might be loaded into the cache when myTask() is called for the first time. The array might stay in the cache forever and never be updated from "main" memory again. This issue is more pressing on multi-core CPUs, for example if myTask() is bound to a single core.

If myTask() is inlined into the parent function, the compiler might decide to hoist loads outside of the loop, even to a point where the DMA transfer has not been completed.

The compiler might even be able to determine that no write ever happens to memoryBuffer and assume that the array elements stay 0 all the time (which would again trigger a lot of optimizations). This could happen if the program is rather small and all the code is visible to the compiler at once (or LTO is used).

Remember: after all, the compiler does not know anything about the DMA peripheral and that it is writing "unexpectedly and wildly into memory" (from a compiler perspective).

If the compiler is dumb/conservative and the CPU not very sophisticated (single core, no out-of-order execution), the code might even work without the volatile declaration. But it also might not...
The problem with volatile
Making the whole array volatile is often a pessimisation. For speed reasons you probably want to unroll the loop. So instead of loading from the array and incrementing the index alternatingly, such as

    load memoryBuffer[m]
    m += 1;
    load memoryBuffer[m]
    m += 1;
    load memoryBuffer[m]
    m += 1;
    load memoryBuffer[m]
    m += 1;

it can be faster to load multiple elements at once and increment the index in larger steps, such as

    load memoryBuffer[m]
    load memoryBuffer[m + 1]
    load memoryBuffer[m + 2]
    load memoryBuffer[m + 3]
    m += 4;

This is especially true if the loads can be fused together (e.g. to perform one 32-bit load instead of two 16-bit loads). Further, you want the compiler to use SIMD instructions to process multiple array elements with a single instruction.

These optimizations are often prevented if the load happens from volatile memory, because compilers are usually very conservative with load/store reordering around volatile memory accesses. Again the behavior differs between compiler vendors (e.g. MSVC vs GCC).
Possible solution 1: fences
So you would like to make the array non-volatile but add a hint for the compiler/CPU saying "when you see this line (execute this statement), flush the cache and reload the array from memory". In C11 you could insert an atomic_thread_fence at the beginning of myTask(). Such fences prevent the re-ordering of loads/stores across them.
Since we do not have a C11 compiler, we use intrinsics for this task. The ARMCC compiler has a __dmb() intrinsic (data memory barrier). For GCC you may want to look at __sync_synchronize() (doc).
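A minimal sketch of this approach in C11, assuming memoryBuffer is now declared without volatile and foo() is the question's processing function:

    #include <stdatomic.h>
    #include <stdint.h>

    extern void foo(uint16_t value);    /* the question's processing function */

    uint16_t memoryBuffer[2][10][20];   /* note: no volatile anymore */

    void myTask(uint8_t indexOppositeOfDMA)
    {
        /* Discard anything cached from before this point. Formally the fence
           only orders atomic accesses, but in practice it also acts as a
           compiler barrier for ordinary loads. */
        atomic_thread_fence(memory_order_acquire);

        for (uint8_t n = 0; n < 10; n++)
            for (uint8_t m = 0; m < 20; m++)
                foo(memoryBuffer[indexOppositeOfDMA][n][m]);
    }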
Possible solution 2: atomic variable holding the buffer state
We use the following pattern a lot in our codebase (e.g. when reading data from SPI via DMA and calling a function to analyze it): the buffer is declared as a plain array (no volatile) and an atomic flag is added to each buffer, which is set when the DMA transfer has finished. The code looks something like this:
typedef struct Buffer
{
    uint16_t data[10][20];
    // Flag indicating if the buffer has been filled. Only use atomic instructions on it!
    int filled;
    // C11: atomic_int filled;
    // C++: std::atomic_bool filled{false};
} Buffer_t;

Buffer_t buffers[2];
Buffer_t* volatile currentDmaBuffer;  // using volatile here because I'm lazy

void setupDMA(void)
{
    for (int i = 0; i < 2; ++i)
    {
        int bufferFilled;
        // Atomically load the flag.
        bufferFilled = __sync_fetch_and_or(&buffers[i].filled, 0);
        // C11: bufferFilled = atomic_load(&buffers[i].filled);
        // C++: bufferFilled = buffers[i].filled;

        if (!bufferFilled)
        {
            currentDmaBuffer = &buffers[i];
            ... configure DMA to write to buffers[i].data and start it
        }
    }
    // If you end up here, there is no free buffer available because the
    // data processing takes too long.
}

void DMA_done_IRQHandler(void)
{
    // ... stop DMA if needed

    // Atomically set the flag indicating that the buffer has been filled.
    __sync_fetch_and_or(&currentDmaBuffer->filled, 1);
    // C11: atomic_store(&currentDmaBuffer->filled, 1);
    // C++: currentDmaBuffer->filled = true;

    currentDmaBuffer = 0;
    // ... possibly start another DMA transfer ...
}

void myTask(Buffer_t* buffer)
{
    for (uint8_t n = 0; n < 10; n++)
        for (uint8_t m = 0; m < 20; m++)
            foo(buffer->data[n][m]);

    // Reset the flag atomically.
    __sync_fetch_and_and(&buffer->filled, 0);
    // C11: atomic_store(&buffer->filled, 0);
    // C++: buffer->filled = false;
}

void waitForData(void)
{
    // ... see setupDma(void) ...
}
The advantage of pairing the buffers with an atomic flag is that you are able to detect when the processing is too slow, meaning that you have to buffer more, make the incoming data slower, or make the processing code faster, whatever is sufficient in your case.
Possible solution 3: OS support
If you have an (embedded) OS, you might resort to other patterns instead of using volatile arrays. The OS we use features memory pools and queues. The latter can be filled from a thread or an interrupt, and a thread can block on the queue until it is non-empty. The pattern looks a bit like this:
MemoryPool pool;              // A pool to acquire DMA buffers.
Queue bufferQueue;            // A queue for pointers to buffers filled by the DMA.
void* volatile currentBuffer; // The buffer currently filled by the DMA.

void setupDMA(void)
{
    currentBuffer = MemoryPool_Allocate(&pool, 20 * 10 * sizeof(uint16_t));
    // ... make the DMA write to currentBuffer
}

void DMA_done_IRQHandler(void)
{
    // ... stop DMA if needed
    Queue_Post(&bufferQueue, currentBuffer);
    currentBuffer = 0;
}

void myTask(void)
{
    void* buffer = Queue_Wait(&bufferQueue);
    [... work with buffer ...]
    MemoryPool_Deallocate(&pool, buffer);
}
This is probably the easiest approach to implement, but only if you have an OS and if portability is not an issue.
Here you say that the buffer is non-volatile:
"memoryBuffer is non-volatile inside the scope of myTask"
But here you say that it must be volatile:
"but may be changed next time i call myTask"
These two statements are contradictory. Clearly the memory area must be volatile, or the compiler can't know that it may be updated by the DMA.
However, I rather suspect that the actual performance loss comes from accessing this memory region repeatedly through your algorithm, forcing the compiler to read it back over and over again.
What you should do is to take a local, non-volatile copy of the part of the memory you are interested in:
void myTask(uint8_t indexOppositeOfDMA)
{
    for (uint8_t n = 0; n < 10; n++)
    {
        for (uint8_t m = 0; m < 20; m++)
        {
            volatile uint16_t* data = &memoryBuffer[indexOppositeOfDMA][n][m];
            uint16_t local_copy = *data;  // this access is volatile and won't get optimized away
            foo(&local_copy);             // optimizations possible here

            // if needed, write back again:
            *data = local_copy;           // optional
        }
    }
}
You'll have to benchmark it, but I'm pretty sure this should improve performance.
Alternatively, you could first copy the whole part of the array you are interested in, then work on that, before writing it back. That should help performance even more.
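For example, a sketch of the copy-first variant: each element is read through a volatile lvalue (so no qualifier is cast away), and all further work happens on the plain local array:

    extern volatile uint16_t memoryBuffer[2][10][20];   /* the question's buffer */

    void myTask(uint8_t indexOppositeOfDMA)
    {
        uint16_t local[10][20];

        for (uint8_t n = 0; n < 10; n++)
            for (uint8_t m = 0; m < 20; m++)
                local[n][m] = memoryBuffer[indexOppositeOfDMA][n][m];  /* volatile reads */

        for (uint8_t n = 0; n < 10; n++)
            for (uint8_t m = 0; m < 20; m++)
                foo(local[n][m]);   /* non-volatile from here on: free to unroll/vectorize */
    }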
You're not allowed to cast away the volatile qualifier [1].
If the array must be defined holding volatile elements then the only two options, "that let the compiler know that the memory has changed", are to keep the volatile qualifier, or use a temporary array which is defined without volatile and is copied to the proper array after the function call. Pick whichever is faster.
[1] Quoted from ISO/IEC 9899:201x, 6.7.3 Type qualifiers, paragraph 6: "If an attempt is made to refer to an object defined with a volatile-qualified type through use of an lvalue with non-volatile-qualified type, the behavior is undefined."
It seems to me that you are passing half of the buffer to myTask, and each half does not need to be volatile. So I wonder if you could solve your issue by defining the buffer as such, and then passing a pointer to one of the half-buffers to myTask. I'm not sure whether this will work, but maybe something like this...
typedef struct memory_buffer {
    uint16_t buffer[10][20];
} memory_buffer;

volatile memory_buffer double_buffer[2];

void myTask(memory_buffer *mem_buf)
{
    for (uint8_t n = 0; n < 10; n++)
    {
        for (uint8_t m = 0; m < 20; m++)
        {
            // Do some stuff with memory:
            foo(mem_buf->buffer[n][m]);
        }
    }
}
I don't know your platform/MCU/SoC, but usually DMAs have an interrupt that triggers on a programmable threshold.
What I can imagine is to remove the volatile keyword and use the interrupt as a semaphore for the task.
In other words:
The DMA is programmed to interrupt when the last byte of the buffer has been written.
The task blocks on a semaphore/flag, waiting for the flag to be released.
When the DMA triggers the interrupt routine, change the buffer pointed to by the DMA for the next read, and change the flag that unblocks the task so it can process the data.
Something like:
uint16_t memoryBuffer[2][10][20];
volatile uint8_t PingPong = 0;

void interrupt(void)
{
    // Change current DMA pointed buffer
    PingPong ^= 1;
}

void myTask(void)
{
    static uint8_t lastPingPong = 0;

    if (lastPingPong != PingPong)
    {
        for (uint8_t n = 0; n < 10; n++)
        {
            for (uint8_t m = 0; m < 20; m++)
            {
                // Do some stuff with memory:
                foo(memoryBuffer[PingPong][n][m]);
            }
        }
        lastPingPong = PingPong;
    }
}

Multithreaded environment in C

I'm just trying to get my head around multithreading environments, specifically how you would implement a cooperative one in C (on an AVR, but out of interest I would like to keep this general).
My problem comes with the thread switch itself: I'm pretty sure I could write this in assembler, flushing all the registers to a stack and then saving the PC to return to later.
How would one pull something like this off in c? I have been told it can do "everything".
I realize this is quite a general question, so any links with information on this topic would be greatly appreciated.
Thanks
You can do this with setjmp/longjmp on most systems -- here is some code I've used in the past for task switching:
void task_switch(Task *to, int exit)
{
    int tmp;
    int task_errno;   /* save space for errno */

    task_errno = errno;
    if (!(tmp = setjmp(current_task->env))) {
        tmp = exit ? (int)current_task : 1;
        current_task = to;
        longjmp(to->env, tmp); }
    if (exit) {
        /* if we get here, the stack pointer is pointing into an already
        ** freed block ! */
        abort(); }
    if (tmp != 1)
        free((void *)tmp);
    errno = task_errno;
}
This depends on sizeof(int) == sizeof(void *) in order to pass a pointer as the argument to setjmp/longjmp, but that could be avoided by using handles (indexes into a global array of all task structures) instead of raw pointers here, or by using a static pointer.
Of course, the tricky part is setting up jmp_buf objects for newly created tasks, each with their own stack. You can use a signal handler with sigaltstack for that:
static void (*tfn)(void *);
static void *tfn_arg;
static stack_t old_ss;
static int old_sm;
static struct sigaction old_sa;

Task *current_task = 0;
static Task *parent_task;
static int task_count;

static void newtask()
{
    int sm;
    void (*fn)(void *);
    void *fn_arg;

    task_count++;
    sigaltstack(&old_ss, 0);
    sigaction(SIGUSR1, &old_sa, 0);
    sm = old_sm;
    fn = tfn;
    fn_arg = tfn_arg;
    task_switch(parent_task);
    sigsetmask(sm);
    (*fn)(fn_arg);
    abort();
}

Task *task_start(int ssize, void (*_tfn)(void *), void *_arg)
{
    Task *volatile new;
    stack_t t_ss;
    struct sigaction t_sa;

    old_sm = sigsetmask(~sigmask(SIGUSR1));
    if (!current_task) task_init();
    tfn = _tfn;
    tfn_arg = _arg;
    new = malloc(sizeof(Task) + ssize + ALIGN);
    new->next = 0;
    new->task_data = 0;
    t_ss.ss_sp = (void *)(new + 1);
    t_ss.ss_size = ssize;
    t_ss.ss_flags = 0;
    if ((unsigned long)t_ss.ss_sp & (ALIGN-1))
        t_ss.ss_sp = (void *)(((unsigned long)t_ss.ss_sp + ALIGN) & ~(ALIGN-1));
    t_sa.sa_handler = newtask;
    t_sa.sa_mask = ~sigmask(SIGUSR1);
    t_sa.sa_flags = SA_ONSTACK|SA_RESETHAND;
    sigaltstack(&t_ss, &old_ss);
    sigaction(SIGUSR1, &t_sa, &old_sa);
    parent_task = current_task;
    if (!setjmp(current_task->env)) {
        current_task = new;
        kill(getpid(), SIGUSR1); }
    sigaltstack(&old_ss, 0);
    sigaction(SIGUSR1, &old_sa, 0);
    sigsetmask(old_sm);
    return new;
}
If you wanted to keep it pure C, I think you might be able to use setjmp and longjmp, but I've never tried it myself, and I imagine there are probably some platforms on which this wouldn't work (i.e. certain registers or other settings not being saved). The only other alternative would be to write it in assembly.
As mentioned, setjmp/longjmp are standard C and are available even in the libc of 8-bit AVRs. They do exactly what you said you'd do in assembler: save the processor context. But one has to keep in mind that the intended purpose of those functions is just to jump backwards in the flow of control; switching between tasks is an abuse. It does work anyway, and it looks like this is even frequently used in a variety of user-level thread libraries, like GNU Pth. But still, it is an abuse of the intended purpose, and requires being careful.
As Chris Dodd said, you still need to provide a stack for each new task. He used sigaltstack() and other signal-related functions, but those do not exist in standard C, only in unix-like environments. For example, the AVR libc does not provide them. So as an alternative you can try reserving a part of your existing stack (by declaring a big local array, or using alloca()) for use as the stack of the new thread. Just keep in mind that the main/scheduler thread will keep using its stack, each thread uses its own stack, and all of them will grow and shrink as stacks usually do, so they will need space to do so without interfering with each other.
And since we're already mentioning unix-like, non-standard-C mechanisms, there is also makecontext()/swapcontext() and family, which are more powerful but harder to find than setjmp()/longjmp(). The names say it all really: the context functions let you manage full process contexts (stacks included), the jmp functions let you just jump around - you'll have to hack the rest.
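For example, a minimal sketch of the context functions on a unix-like system (not available in AVR libc):

    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static unsigned char task_stack[16384];   /* the new task's own stack */

    static void task_fn(void)
    {
        /* ... task work; returning resumes uc_link, i.e. main_ctx ... */
    }

    int main(void)
    {
        getcontext(&task_ctx);                  /* start from a valid context */
        task_ctx.uc_stack.ss_sp = task_stack;   /* install the private stack */
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;           /* where to go when task_fn returns */
        makecontext(&task_ctx, task_fn, 0);
        swapcontext(&main_ctx, &task_ctx);      /* run the task, then come back */
        return 0;
    }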
For the AVR anyway, given that you probably won't have an OS to help nor much memory to blindly reserve, you'd probably be better off using assembler for the switching and stack initializing.
In my experience, if people start writing schedulers it isn't long before they start wanting things like network stacks, memory allocation and file systems too. It's almost never worth going down that route; you end up spending more time writing your own operating system than you spend on your actual application.
At the first whiff of your project heading that way, it's almost always worth putting in the effort to adopt an existing OS (Linux, VxWorks, etc.). Of course, that might mean you run into problems if the CPU isn't up to it. An AVR isn't exactly a whole lot of CPU, and fitting an existing OS onto it ranges from tricky to mostly impossible for the major OSes, though there are some tiny OSes (some open source; see http://en.wikipedia.org/wiki/List_of_real-time_operating_systems).
So at the start of a project you should carefully consider how you might wish to evolve it in the future. This might influence your choice of CPU now, to save having to do hideous things in software later.

Performance of array of functions over if and switch statements

I am writing a very performance-critical part of the code and I had this crazy idea about substituting case statements (or if statements) with an array of function pointers.
Let me demonstrate; here goes the normal version:
while (statement)
{
    /* 'option' changes on every iteration */
    switch (option)
    {
        case 0: /* simple task */ break;
        case 1: /* simple task */ break;
        case 2: /* simple task */ break;
        case 3: /* simple task */ break;
    }
}
And here is the "callback function" version:
void task0(void) {
    /* simple task */
}

void task1(void) {
    /* simple task */
}

void task2(void) {
    /* simple task */
}

void task3(void) {
    /* simple task */
}

void (*task[4])(void);

task[0] = task0;
task[1] = task1;
task[2] = task2;
task[3] = task3;

while (statement)
{
    /* 'option' changes on every iteration */
    /* and now we call the function with the 'case' number */
    (*task[option])();
}
So which version will be faster? Does the overhead of the function call eliminate the speed benefit over a normal switch (or if) statement?
Of course the latter version is not as readable, but I am looking for all the speed I can get.
I am about to benchmark this when I get things set up, but if someone has an answer already, I won't bother.
I think at the end of the day your switch statements will be the fastest, because function pointers have the "overhead" of the lookup of the function and the function call itself. A switch is just a straight jump table. It of course depends on different things which only testing can give you an answer to. That's my two cents' worth.
The switch statement should be compiled into a branch table, which is essentially the same thing as your array of functions, if your compiler has at least basic optimization capability.
Which version will be faster depends. The naive implementation of a switch is a huge if ... else if ... else if ... construction, meaning it takes on average O(n) time to execute, where n is the number of cases. Your jump table is O(1), so the more different cases there are and the more often the later cases are used, the more likely the jump table is to be better. For a small number of cases, or for switches where the first case is chosen more frequently than the others, the naive implementation is better. The matter is complicated by the fact that the compiler may choose to use a jump table even when you have written a switch, if it thinks that will be faster.
The only way to know which you should choose is to performance test your code.
First, I would randomly pause it a few times to make certain enough time is spent in this dispatching to even bother optimizing it.
Second, if it is, since each branch spends very few cycles, you want a jump table to get to the desired branch. The reason switch statements exist is to suggest to the compiler that it can generate one if the switch values are compact.
How long is the list of switch values? If it's short, the if-ladder could still be faster, especially if you put the most frequently used codes at the top. An alternative to an if-ladder (that I've never actually seen anyone use) is an if-tree, the code equivalent of a binary tree.
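For example, with four equally likely cases an if-tree needs at most two comparisons instead of up to three (a sketch using the question's option variable):

    if (option < 2) {
        if (option == 0) { /* simple task 0 */ } else { /* simple task 1 */ }
    } else {
        if (option == 2) { /* simple task 2 */ } else { /* simple task 3 */ }
    }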
You probably don't want an array of function pointers. Yes, it's an array reference to get the function pointer, but there's several instructions' overhead in calling a function, and it sounds like that could overwhelm the small amount being done inside each function.
In any case, looking at the assembly language, or single-stepping at the instruction level, will give you a good idea how efficient it's being.
A good compiler will compile a switch with cases in a small numerical range as a single conditional to see if the value is in that range (which can sometimes be optimized out) followed by a jumptable jump. This will almost surely be faster than a function call (direct or indirect) because:
A jump is a lot less expensive than a call (which must save call-clobbered registers, adjust the stack, etc.).
The code in the switch statement cases can make use of expression values already cached in registers in the caller.
It's possible that an extremely advanced compiler could determine that the call-via-function pointer only refers to one of a small set of static-linkage functions, and thereby optimize things heavily, maybe even eliminating the calls and replacing them by jumps. But I wouldn't count on it.
I arrived at this post recently since I was wondering the same thing, and I ended up taking the time to try it. It certainly depends greatly on what you're doing, but for my VM it was a decent speed-up (15-25%), and it allowed me to simplify some code (which is probably where a lot of the speed-up came from). As an example (code simplified for clarity), a "for" loop was easily implemented using a for loop:
void OpFor( Frame* frame, Instruction* &code )
{
    i32 start = GET_OP_A(code);
    i32 stop_value = GET_OP_B(code);
    i32 step = GET_OP_C(code);

    // instruction count (ie. block size)
    u32 i_count = GET_OP_D(code);
    // pointer to end of block (NOP if it branches)
    Instruction* end = code + i_count;

    if( step > 0 )
    {
        for( u32 i = start; i < stop_value; i += step )
        {
            // rewind instruction pointer
            Instruction* cur = code;

            // execute code inside for loop
            while( cur != end )
            {
                cur->func( frame, cur );
                ++cur;
            }
        }
    }
    else
        // same with <=
}
