I am trying to read an analog input (A0) at 10 kHz using the DueTimer library, but when I increase the size of the vector, the sketch crashes. The goal is to use the vector, with a size of 5000, in an FFT analysis. I have tried working directly with the SAM3X8E timers, but I get the same problem, and it is driving me crazy!
I would appreciate any help. Thanks.
#include <DueTimer.h>

int v[5000];

void setup(){
  Serial.begin(9600);
  Timer3.attachInterrupt(Read);
  Timer3.start(100); // period in microseconds: 100 us -> 10 kHz
  analogReadResolution(12);
}

void loop(){}

void display(){
  for(int j=0; j<5000; j++){
    Serial.println(v[j]);
  }
}

int i=0;

void Read(){
  v[i]=analogRead(A0);
  i++;
  if (i>=5000){
    i=0;
    Timer3.stop();
  }
}
If you use DueTimer, you have to declare the variables shared with the interrupt handler (here v and i) as volatile.
volatile is a keyword known as a variable qualifier; it is usually placed before the data type of a variable to modify the way in which the compiler and subsequent program treat the variable.
Declaring a variable volatile is a directive to the compiler. The compiler is the software that translates your C/C++ code into machine code, the real instructions for the chip on the Arduino board.
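A minimal corrected version of the sketch (untested), with the shared variables declared volatile and the printing moved into loop() via a flag, so that the Serial work stays out of the interrupt context:

#include <DueTimer.h>

volatile int v[5000];        // written by the ISR, read by loop()
volatile int i = 0;
volatile bool done = false;  // set by the ISR when the buffer is full

void setup(){
  Serial.begin(9600);
  Timer3.attachInterrupt(Read);
  Timer3.start(100);         // period in microseconds: 100 us -> 10 kHz
  analogReadResolution(12);
}

void loop(){
  if (done){
    for (int j = 0; j < 5000; j++){
      Serial.println(v[j]);
    }
    done = false;
  }
}

void Read(){
  v[i] = analogRead(A0);
  i++;
  if (i >= 5000){
    i = 0;
    done = true;
    Timer3.stop();
  }
}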
Related
I'm using a microcontroller to make some ADC measurements. I have an issue when I try to compile the following code with -O2 optimization: the MCU freezes when the PrintVal() function is present in the code. I did some debugging, and it turns out that when I add the -fno-inline compiler flag, the code runs fine even with PrintVal() present.
Here is some background:
AdcIsr.c contains the interrupt that is executed when the ADC finishes its job. This file also contains the ISRInit() function, which initializes the variable that will hold the value after conversion. The main loop waits for the interrupt and only then accesses AdcMeas.value.
AdcIsr.c

static volatile uint16_t* isrVarPtr = NULL;

ISR()
{
    uint8_t tmp = readAdc();
    *isrVarPtr = tmp;
}

void ISRInit(volatile uint16_t *var)
{
    isrVarPtr = var;
}
AdcMeas.c

typedef struct{
    uint8_t id;
    volatile uint16_t value;
} AdcMeas_t;

static AdcMeas_t AdcMeas = {0};

const AdcMeas_t* AdcMeasGetStructPtr()
{
    return &AdcMeas;
}
main.c

void PrintVal(const AdcMeas_t* data)
{
    printf("AdcMeas %d value: %d\r\n", data->id, data->value);
}

void StartMeasurement()
{
    ...
    AdcOn();
    ...
}

int main()
{
    ISRInit(AdcMeasGetStructPtr()->value);
    while(1)
    {
        StartMeasurement();
        WaitForISR();
        PrintVal(AdcMeasGetStructPtr());
        DelayMs(1000);
    }
}
Questions:
Is there something wrong with the usage of const AdcMeas_t* data as the argument of the PrintVal() function? I understand that AdcMeas.value may change inside the interrupt, so the value PrintVal() prints may be outdated.
AdcMeas.c contains a 'generic getter'. Is it good practice to use this sort of function to allow read-only access to a static structure, or should I implement AdcMeasGetId() and AdcMeasGetValue() functions? (Note that this struct has only 2 members; what if it had 8?)
I know this code is a bit dumb (waiting for the interrupt in a while loop); it is just an example.
Some bugs:
You have no header files, neither library includes nor your own. This means that everything is hopelessly broken until you fix that. You cannot do multi-file projects in C without header files.
*isrVarPtr = tmp; Here you write to a variable without protection against race conditions. If the main program reads this variable in several steps, you risk getting incorrect, torn data. You need to protect against race conditions or guarantee atomic access (see the sketch after this list).
const AdcMeasGetStructPtr() is gibberish, and there is no way that the return &AdcMeas; inside it would compile with a conforming C compiler.
If you have an old but conforming C90 compiler, the return type will get treated as int ("implicit int"). Otherwise, if you have a modern C compiler, not even the function definition will compile. So it would seem that something is very wrong with your compiler, which is a greater concern than this bug.
Declaring the typedef struct in the C file and then returning a pointer to it doesn't make any sense. You need to re-design this module. You could have a getter function returning a pointer to a private struct, if there is only ever going to be one instance of it (a singleton). However, as mentioned, it needs to handle race conditions.
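As a minimal sketch only, one way to combine a getter with race-condition protection is to return a snapshot copy taken inside a critical section. __disable_irq()/__enable_irq() are CMSIS-style placeholders; substitute your MCU's equivalent (e.g. cli()/sei() on AVR):

/* AdcMeas.h */
#include <stdint.h>

typedef struct {
    uint8_t  id;
    uint16_t value;
} AdcMeas_t;

AdcMeas_t AdcMeasGetSnapshot(void);

/* AdcMeas.c */
static volatile AdcMeas_t AdcMeas = { 0, 0 };

AdcMeas_t AdcMeasGetSnapshot(void)
{
    AdcMeas_t copy;
    __disable_irq();               /* the ISR cannot run while we copy */
    copy.id    = AdcMeas.id;
    copy.value = AdcMeas.value;
    __enable_irq();
    return copy;                   /* caller gets a consistent snapshot */
}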
Stylistic concerns:
Empty parentheses () in a function declaration are almost always wrong in C. This is obsolete style and means "accept any parameters". C++ is different here.
int main() doesn't make any sense at all in a microcontroller system. You should use some implementation-defined form suitable for freestanding programs. The most commonly supported form is void main (void).
DelayMs(1000); is highly questionable code in any embedded system. There should never be a reason why you'd want to hang up your MCU being useless, with max current consumption, for a whole second.
Overall it seems you would benefit from a "continuous conversion" ADC. ADCs that support continuous conversion just dump their latest read in the data register and you can pick it up with polling whenever you need it. Catching all ADC interrupts is really just for hard realtime systems, signal processing and similar.
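For illustration, polling a continuous-conversion ADC tends to look something like the sketch below. AdcConversionReady() and AdcReadDataRegister() are hypothetical helpers standing in for your MCU's status-flag check and data-register read:

#include <stdint.h>

uint16_t AdcGetLatest(void)
{
    /* In continuous mode the hardware keeps overwriting the data register,
       so we simply pick up whatever the latest conversion produced. */
    while (!AdcConversionReady())
    {
        /* wait for the ready flag (skip if a result is always present) */
    }
    return AdcReadDataRegister();   /* hypothetical register-read helper */
}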
I have a function reading from some volatile memory which is updated by a DMA. The DMA never operates on the same memory location as the function. My application is performance critical, and I realized the execution time improves by approx. 20% if I do not declare the memory as volatile. In the scope of my function, the memory is non-volatile. However, I have to be sure that the next time the function is called, the compiler knows that the memory may have changed.
The memory is two two-dimensional arrays:
volatile uint16_t memoryBuffer[2][10][20] = {0};
The DMA operates on the opposite "matrix" to the one the function is working on:

void myTask(uint8_t indexOppositeOfDMA)
{
    for(uint8_t n=0; n<10; n++)
    {
        for(uint8_t m=0; m<20; m++)
        {
            //Do some stuff with memory (readings only):
            foo(memoryBuffer[indexOppositeOfDMA][n][m]);
        }
    }
}
Is there a proper way to tell my compiler that memoryBuffer is non-volatile inside the scope of myTask(), but may be changed the next time I call myTask(), so that I could obtain the 20% performance improvement?
Platform: Cortex-M4
The problem without volatile
Let's assume that volatile is omitted from the data array. Then the C compiler and the CPU do not know that its elements change outside the program flow. Some things that could happen then:

The whole array might be loaded into the cache when myTask() is called for the first time. The array might stay in the cache forever and never be updated from "main" memory again. This issue is more pressing on multi-core CPUs if myTask() is bound to a single core, for example.

If myTask() is inlined into the parent function, the compiler might decide to hoist loads outside of the loop, even to a point where the DMA transfer has not been completed.

The compiler might even be able to determine that no write happens to memoryBuffer and assume that the array elements stay at 0 all the time (which would again trigger a lot of optimizations). This could happen if the program is rather small and all the code is visible to the compiler at once (or LTO is used).

Remember: after all, the compiler does not know anything about the DMA peripheral and that it is writing "unexpectedly and wildly into memory" (from a compiler's perspective).

If the compiler is dumb/conservative and the CPU not very sophisticated (single core, no out-of-order execution), the code might even work without the volatile declaration. But it also might not...
The problem with volatile
Making the whole array volatile is often a pessimisation. For speed reasons you probably want to unroll the loop. So instead of loading from the array and incrementing the index alternatingly, such as
load memoryBuffer[m]
m += 1;
load memoryBuffer[m]
m += 1;
load memoryBuffer[m]
m += 1;
load memoryBuffer[m]
m += 1;
it can be faster to load multiple elements at once and increment the index in larger steps, such as
load memoryBuffer[m]
load memoryBuffer[m + 1]
load memoryBuffer[m + 2]
load memoryBuffer[m + 3]
m += 4;
This is especially true if the loads can be fused together (e.g. to perform one 32-bit load instead of two 16-bit loads). Further, you want the compiler to use SIMD instructions to process multiple array elements with a single instruction.
These optimizations are often prevented if the load happens from volatile memory, because compilers are usually very conservative with load/store reordering around volatile memory accesses. Again, the behavior differs between compiler vendors (e.g. MSVC vs. GCC).
Possible solution 1: fences
So you would like to make the array non-volatile but add a hint for the compiler/CPU saying "when you see this line (execute this statement), flush the cache and reload the array from memory". In C11 you could insert an atomic_thread_fence at the beginning of myTask(). Such fences prevent the re-ordering of loads/stores across them.
Since we do not have a C11 compiler, we use intrinsics for this task. The ARMCC compiler has a __dmb() intrinsic (data memory barrier). For GCC you may want to look at __sync_synchronize().
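A minimal sketch of this approach, with the array declared without volatile; the barrier is compiler-specific, so pick the variant your toolchain provides:

#include <stdint.h>

uint16_t memoryBuffer[2][10][20];   /* note: no volatile */

void myTask(uint8_t indexOppositeOfDMA)
{
    /* Fence: loads of memoryBuffer must not be hoisted above this point,
       so stale values from a previous call cannot be reused.
       GCC:   __sync_synchronize();
       ARMCC: __dmb(0xF);
       C11:   atomic_thread_fence(memory_order_acquire); */
    __sync_synchronize();

    for (uint8_t n = 0; n < 10; n++)
    {
        for (uint8_t m = 0; m < 20; m++)
        {
            foo(memoryBuffer[indexOppositeOfDMA][n][m]);
        }
    }
}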
Possible solution 2: atomic variable holding the buffer state
We use the following pattern a lot in our codebase (e.g. when reading data from SPI via DMA and calling a function to analyze it): the buffer is declared as a plain array (no volatile) and an atomic flag is added to each buffer, which is set when the DMA transfer has finished. The code looks something like this:
typedef struct Buffer
{
    uint16_t data[10][20];
    // Flag indicating if the buffer has been filled. Only use atomic instructions on it!
    int filled;
    // C11: atomic_int filled;
    // C++: std::atomic_bool filled{false};
} Buffer_t;

Buffer_t buffers[2];
Buffer_t* volatile currentDmaBuffer; // using volatile here because I'm lazy

void setupDMA(void)
{
    for (int i = 0; i < 2; ++i)
    {
        int bufferFilled;
        // Atomically load the flag.
        bufferFilled = __sync_fetch_and_or(&buffers[i].filled, 0);
        // C11: bufferFilled = atomic_load(&buffers[i].filled);
        // C++: bufferFilled = buffers[i].filled;
        if (!bufferFilled)
        {
            currentDmaBuffer = &buffers[i];
            ... configure DMA to write to buffers[i].data and start it
        }
    }
    // If you end up here, there is no free buffer available because the
    // data processing takes too long.
}

void DMA_done_IRQHandler(void)
{
    // ... stop DMA if needed
    // Atomically set the flag indicating that the buffer has been filled.
    __sync_fetch_and_or(&currentDmaBuffer->filled, 1);
    // C11: atomic_store(&currentDmaBuffer->filled, 1);
    // C++: currentDmaBuffer->filled = true;
    currentDmaBuffer = 0;
    // ... possibly start another DMA transfer ...
}

void myTask(Buffer_t* buffer)
{
    for (uint8_t n=0; n<10; n++)
        for (uint8_t m=0; m<20; m++)
            foo(buffer->data[n][m]);

    // Reset the flag atomically.
    __sync_fetch_and_and(&buffer->filled, 0);
    // C11: atomic_store(&buffer->filled, 0);
    // C++: buffer->filled = false;
}

void waitForData(void)
{
    // ... see setupDMA(void) ...
}
The advantage of pairing the buffers with an atomic flag is that you are able to detect when the processing is too slow, meaning that you have to buffer more, make the incoming data slower, make the processing code faster, or whatever is sufficient in your case.
Possible solution 3: OS support
If you have an (embedded) OS, you might resort to other patterns instead of using volatile arrays. The OS we use features memory pools and queues. The latter can be filled from a thread or an interrupt, and a thread can block on the queue until it is non-empty. The pattern looks a bit like this:
MemoryPool pool;              // A pool to acquire DMA buffers.
Queue bufferQueue;            // A queue for pointers to buffers filled by the DMA.
void* volatile currentBuffer; // The buffer currently filled by the DMA.

void setupDMA(void)
{
    currentBuffer = MemoryPool_Allocate(&pool, 20 * 10 * sizeof(uint16_t));
    // ... make the DMA write to currentBuffer
}

void DMA_done_IRQHandler(void)
{
    // ... stop DMA if needed
    Queue_Post(&bufferQueue, currentBuffer);
    currentBuffer = 0;
}

void myTask(void)
{
    void* buffer = Queue_Wait(&bufferQueue);
    [... work with buffer ...]
    MemoryPool_Deallocate(&pool, buffer);
}
This is probably the easiest approach to implement, but only if you have an OS and if portability is not an issue.
Here you say that the buffer is non-volatile:
"memoryBuffer is non-volatile inside the scope of myTask"
But here you say that it must be volatile:
"but may be changed next time i call myTask"
These two sentences are contradictory. Clearly the memory area must be volatile, or the compiler can't know that it may be updated by the DMA.
However, I rather suspect that the actual performance loss comes from accessing this memory region repeatedly through your algorithm, forcing the compiler to read it back over and over again.
What you should do is to take a local, non-volatile copy of the part of the memory you are interested in:
void myTask(uint8_t indexOppositeOfDMA)
{
    for(uint8_t n=0; n<10; n++)
    {
        for(uint8_t m=0; m<20; m++)
        {
            volatile uint16_t* data = &memoryBuffer[indexOppositeOfDMA][n][m];
            uint16_t local_copy = *data; // this access is volatile and won't get optimized away
            foo(&local_copy);            // optimizations possible here
            // if needed, write back again:
            *data = local_copy;          // optional
        }
    }
}
You'll have to benchmark it, but I'm pretty sure this should improve performance.
Alternatively, you could first copy the whole part of the array you are interested in, then work on that, before writing it back. That should help performance even more.
You're not allowed to cast away the volatile qualifier [1].
If the array must be defined with volatile elements, then the only two options "that let the compiler know that the memory has changed" are to keep the volatile qualifier, or to use a temporary array which is defined without volatile and is copied to the proper array after the function call. Pick whichever is faster; a sketch of the temporary-array option follows below.
[1] Quoted from ISO/IEC 9899:201x, 6.7.3 Type qualifiers, paragraph 6:
If an attempt is made to refer to an object defined with a volatile-qualified type through use of an lvalue with non-volatile-qualified type, the behavior is undefined.
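A minimal sketch of the temporary-array option for the 2x10x20 buffer from the question: every element is read exactly once through a volatile lvalue (so no qualifier is cast away), and all further work happens on the non-volatile copy:

void myTask(uint8_t indexOppositeOfDMA)
{
    uint16_t local[10][20];   /* non-volatile scratch copy */

    /* Copy through volatile lvalues; the qualifier is respected. */
    for (uint8_t n = 0; n < 10; n++)
        for (uint8_t m = 0; m < 20; m++)
            local[n][m] = memoryBuffer[indexOppositeOfDMA][n][m];

    /* Work on the copy; the compiler is free to optimize these accesses. */
    for (uint8_t n = 0; n < 10; n++)
        for (uint8_t m = 0; m < 20; m++)
            foo(local[n][m]);
}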
It seems to me that you are passing half of the buffer to myTask, and each half does not need to be volatile. So I wonder if you could solve your issue by defining the buffer as such, and then passing a pointer to one of the half-buffers to myTask. I'm not sure whether this will work, but maybe something like this...
typedef struct memory_buffer {
    uint16_t buffer[10][20];
} memory_buffer;

volatile memory_buffer double_buffer[2];

void myTask(memory_buffer *mem_buf)
{
    for(uint8_t n=0; n<10; n++)
    {
        for(uint8_t m=0; m<20; m++)
        {
            //Do some stuff with memory:
            foo(mem_buf->buffer[n][m]);
        }
    }
}
I don't know your platform/MCU/SoC, but usually DMAs have an interrupt that triggers on a programmable threshold.
What I can imagine is to remove the volatile keyword and use the interrupt as a semaphore for the task.
In other words:
The DMA is programmed to interrupt when the last byte of the buffer is written.
The task blocks on a semaphore/flag, waiting for the flag to be released.
When the DMA fires the interrupt routine, change the buffer the DMA points to for the next read, and change the flag so that the task unblocks and can process the data.
Something like:
uint16_t memoryBuffer[2][10][20];
volatile uint8_t PingPong = 0;

void interrupt(void)
{
    // Change current DMA pointed buffer
    PingPong ^= 1;
}

void myTask(void)
{
    static uint8_t lastPingPong = 0;
    if (lastPingPong != PingPong)
    {
        for (uint8_t n = 0; n < 10; n++)
        {
            for (uint8_t m = 0; m < 20; m++)
            {
                //Do some stuff with memory:
                foo(memoryBuffer[PingPong][n][m]);
            }
        }
        lastPingPong = PingPong;
    }
}
I have the following few lines of code that I am trying to run in parallel
void optimized(int data_len, unsigned int * input_array, unsigned int * output_array, unsigned int * filter_list, int filter_len) {
    #pragma omp parallel for
    for (int j = 0; j < filter_len; j++) {
        for (int i = 0; i < data_len; i++) {
            if (input_array[i] == filter_list[j]) {
                output_array[i] = filter_list[j];
            }
        }
    }
}
Just putting the pragma statement in has really done wonders, but I am trying to further reduce the run time of this code. I have tried many things ranging from array padding to collapsing the loops to creating tasks, but the only thing that has seemed to work thus far is loop unrolling. Does anyone have any suggestions on what I could possibly do to further speed up this code?
You are doing pure memory accesses, so you are limited by the memory bandwidth of the machine.
Multi-threading is not going to help you much. gcc -O2 already provides SSE instruction optimization, so using Intel intrinsics directly may not help either. You may try to process 4 ints at once, because SSE has 128-bit registers (please see https://gcc.gnu.org/onlinedocs/gcc-4.4.5/gcc/X86-Built_002din-Functions.html and google for some examples). Reducing the amount of data also helps, e.g. by using short instead of int if you can.
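For illustration only, a sketch of the 4-at-a-time idea with SSE2 intrinsics; it assumes data_len is a multiple of 4, and a real version would need a scalar tail loop for the remainder:

#include <emmintrin.h>  // SSE2

void optimized_sse(int data_len, unsigned int *input_array,
                   unsigned int *output_array,
                   unsigned int *filter_list, int filter_len) {
    for (int j = 0; j < filter_len; j++) {
        __m128i f = _mm_set1_epi32((int)filter_list[j]); // broadcast filter value
        for (int i = 0; i < data_len; i += 4) {
            __m128i in  = _mm_loadu_si128((const __m128i *)&input_array[i]);
            __m128i out = _mm_loadu_si128((const __m128i *)&output_array[i]);
            __m128i eq  = _mm_cmpeq_epi32(in, f);        // all-ones lanes where equal
            // Where equal, take the filter value; otherwise keep the old output.
            __m128i res = _mm_or_si128(_mm_and_si128(eq, f),
                                       _mm_andnot_si128(eq, out));
            _mm_storeu_si128((__m128i *)&output_array[i], res);
        }
    }
}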
I have 2 questions about the new OpenMP 4.0.
The first is that I can't understand the difference between target and target data. According to the specification, target data creates a new data environment. So what is the data environment? By the way, can we liken the OpenMP target data directive to the OpenACC data directive?
The second question is as follows:
extern void init(float*, float*, int);
extern void output(float*, int);

void vec_mult(int N)
{
    int i;
    float p[N], v1[N], v2[N];
    init(v1, v2, N);
    #pragma omp target map(to: v1, v2) map(from: p)
    #pragma omp parallel for
    for (i=0; i<N; i++)
        p[i] = v1[i] * v2[i];
    output(p, N);
}
According to this example, there is no teams directive. So how should an OpenMP compiler configure the device kernel? If we talk about CUDA, would the invocation look like kernel_func<<<1,1>>>?
I found an answer to my 2nd question.
If we want to use a parallel for inside a target region without a teams directive, the compiler should generate a kernel that has 1 block. On the other hand, the compiler has to spread the iterations across the threads inside that block, so the kernel should have many threads (though it is of course possible to work with 1 thread). There are a few ways to implement these directives:
You can create the kernel with a static number of threads.
It doesn't make sense, but it can be done: configure the kernel with 1 thread, i.e. kernel_invocation<<<1,1>>>(parameter1, parameter2, ...).
Best solution :) you need an analyzer to decide the number of threads, the dimensions, and so on.
You can find more solutions in this paper:
http://rosecompiler.org/ROSE_ResearchPapers/Liao-OpenMP-Accelerator-Model-2013.pdf
I am trying to program an example histogram tool using OpenCL. To start, I was just interested in atomically incrementing each bin. I came up with the following kernel code:
__kernel void Histogram(
    __global const int* input,
    __global int* histogram,
    int numElements) {

    // get index into global data array
    int iGID = get_global_id(0);

    // bound check, equivalent to the limit on a 'for' loop
    if (iGID >= numElements) {
        return;
    }

    if (iGID < 100) {
        // initialize histogram
        histogram[iGID] = 0;
    }
    barrier(CLK_GLOBAL_MEM_FENCE);

    int bin = input[iGID];
    atomic_inc(&histogram[bin]);
}
But the output histogram is zero in every bin. Why is that? Furthermore, really strange things happen if I put a printf(" ") in the last line: suddenly, it works. I am completely lost; does someone have an idea why this happens?
P.S. I enabled all extensions.
I solved the problem myself.
After nothing fixed the problem, I tried changing the CLDevice to the CPU. Everything went as it was supposed to (unfortunately very slowly :D). But this gave me the idea that it might not be a code problem but an OpenCL infrastructure problem.
I updated AMD's OpenCL platform and now everything works.
Thank you to everyone who thought about my problem.