Delay on PIC18F - c

I'm using a PIC18F14K50 with the HiTech ANSI C compiler and MPLAB v8.43. My PIC code is finally up and running, with the exception of the delay function. This is crucial for my application - I need it to stay in certain states for a given number of milliseconds, seconds, or minutes.
I have been trying to find a solution for this for about 2 weeks but have been unsuccessful so far. I gave up and wrote my own delay function with asm("nop"); in a loop, but this gives very unpredictable results. If I tell it to wait for half a second or 5 seconds, it works accurately enough. But as soon as I tell it to wait longer - say 10 minutes - the delay only lasts about 10-20 seconds, and 2 minutes ends up being a blink, shorter than a 500 ms delay.
Here are my config fuses and wait() function:
#include <htc.h>
__CONFIG(1, FOSC_IRC & FCMEN_OFF & IESO_OFF & XINST_OFF);
__CONFIG(2, PWRTEN_OFF & BOREN_OFF & WDTEN_OFF);
__CONFIG(3, MCLRE_OFF);
__CONFIG(4, STVREN_ON & LVP_OFF & DEBUG_OFF);
__CONFIG(5, 0xFFFF);
__CONFIG(6, 0xFFFF);
__CONFIG(7, 0xFFFF);
void wait(int ms)
{
    for (int i = 0; i < ms; i++)
        for (int j = 0; j < 12; j++)
            asm("nop");
}
Like I said, if I call wait(500) up to wait(30000) then I get a half-second to 30-second delay within the tolerance I'm interested in. However, if I call wait(600000) then I do not get a 10-minute delay as I would expect, but rather about 10-15 seconds, and wait(120000) doesn't give a 2-minute delay, but rather a quick blink.
Ideally, I'd like to get the built-in __delay_ms() function working and call it from within my wait(), but I haven't had any success with this. If I try to #include <delay.h>, MPLAB complains there is no such file or directory. If I look at the delay.h in my HiTech samples, there is a DelayUs(unsigned char) defined and an extern void DelayMs(unsigned char) which I haven't tried; when I put the extern directly into my C code, I get an undefined symbol error at link time.
The discrepancy between the short-to-medium delays and the long delays makes no sense. The only explanation I have is that the compiler has optimised out the NOPs or something.
Like I said, it's a PIC18F14K50 with the above configuration fuses. I don't have a great deal of experience with PICs, but I assume it's running at 4 MHz given this set-up.
I'm happy with an external function from a library or macro, or with a hand-written function with NOPs. All I need is for it to be accurate to within a couple of seconds per minute or so.

The PIC18F is an 8-bit microcontroller, but the HiTech compiler's int type is 16 bits wide. My guess is that you're getting overflow on the value passed to wait(): a signed 16-bit int can only hold values up to 32,767 (2^15 - 1). Assuming the usual truncation to the low 16 bits, 600000 becomes 10176 (which matches the roughly 10-second delay you see) and 120000 becomes a negative value, so the loop body never runs at all.

If you change your int variables to unsigned, you can go up to 65,535 ms. To go higher than that, use long (or unsigned long) as the parameter type, or nest your loops one level deeper.
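A minimal sketch of the wider-type version, keeping your calibration of 12 iterations per millisecond (which still has to be tuned against the real clock and the compiler's generated code):
/* Sketch only: the same busy-wait with types wide enough for minutes-long
 * delays.  The inner count of 12 is the original calibration and still
 * needs to be verified for the actual instruction clock. */
void wait(unsigned long ms)
{
    unsigned long i;
    unsigned char j;

    for (i = 0; i < ms; i++)
        for (j = 0; j < 12; j++)
            asm("nop");
}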

A better long-term solution would be to write a delay function that uses one of the built-in hardware timers in your chip. Your NOP delay will not be accurate over long periods if you have things like other interrupts firing and consuming CPU cycles.
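For example, a rough Timer0-based version might look like the sketch below. It assumes the 4 MHz internal oscillator (1 MHz instruction clock, so one timer tick is 1 µs with the prescaler bypassed); the register and bit names (T0CON, TMR0H, TMR0L, TMR0IF, TMR0ON) follow the PIC18F14K50 data sheet and should be checked against your HiTech header, and it has not been tested on hardware:
/* Sketch: 1 ms per Timer0 overflow at Fosc = 4 MHz (1 us per tick). */
void wait_ms(unsigned long ms)
{
    T0CON = 0x08;            /* Timer0 off, 16-bit mode, Fosc/4, prescaler bypassed */
    while (ms--) {
        TMR0H = 0xFC;        /* preload 65536 - 1000 = 0xFC18: overflow after 1000 ticks */
        TMR0L = 0x18;        /* writing TMR0L also latches the buffered TMR0H */
        TMR0IF = 0;          /* clear the overflow flag */
        TMR0ON = 1;          /* start the timer */
        while (!TMR0IF)      /* wait ~1 ms for the overflow */
            ;
        TMR0ON = 0;          /* stop until the next millisecond */
    }
}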

Related

How to make delay microsecond in AVR (ATmega8) with timer?

I want to make a variable delay in ATmega8, but the delay_us() function only accepts a constant value. I think I can make a variable microsecond delay with a timer, but I don't know how to do that.
Please help me.
You can use a delay loop: you delay for one microsecond in each
iteration, and do as many iterations as you have microseconds to burn:
void delay_us(unsigned long us)
{
    while (us--) _delay_us(1);
}
There are, however, a few issues with this approach:
it takes time to manage the iterations (decrement the counter, compare to zero, conditional branch...), so the delay within the loop should be significantly shorter than 1 µs
it takes time to call the function and return from it, and this should be discounted from the iteration count; but since this time may not be a whole number of microseconds, you will have to add a small delay in order to get to the next full microsecond
if the compiler inlines the function, everything will be off.
Trying to fix those issues yields something like this:
// Only valid with a 16 MHz clock.
void __attribute__((noinline)) delay_us(unsigned long us)
{
    if (us < 2) return;
    us -= 2;
    _delay_us(0.4375);
    while (us--) _delay_us(0.3125);
}
For a more complete version that can handle various clock frequencies,
see the delayMicroseconds() function from the Arduino AVR
core. Notice that the function is only accurate for a few discrete
frequencies. Notice also that the delay loop is done in inline assembly,
in order to be independent of compiler optimizations.
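To illustrate that last point, the core of such a loop might look like the following sketch (a cycle-counted busy loop in AVR inline assembly, assuming a 16 MHz clock; it is modeled on the Arduino approach rather than copied from it):
/* Sketch: sbiw (2 cycles) plus a taken brne (2 cycles) gives 4 cycles per
 * iteration, i.e. 0.25 us at 16 MHz.  count must be at least 1, or the
 * 16-bit counter wraps around. */
static inline void delay_4cycles(unsigned int count)
{
    __asm__ __volatile__ (
        "1: sbiw %0, 1" "\n\t"   /* subtract 1 from the 16-bit counter */
        "   brne 1b"             /* loop while the counter is non-zero */
        : "=w" (count)
        : "0" (count)
    );
}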

Time measurements differ on microcontroller

I am measuring the cycle count of different C functions which I try to make constant time in order to mitigate side channel attacks (crypto).
I am working with a microcontroller (an Aurix from Infineon) which has an onboard cycle counter that gets incremented each clock tick and which I can read out.
Consider the following:
int result[32], cnt = 0;
int secret[32];
/*** some other code ***/
reset_and_startCounter();           // resets cycles to 0 and starts the counter
int tmp = readCycles();             // read cycles before the function call
function(secret);                   // I want to measure this function, should be constant time
result[cnt++] = readCycles() - tmp; // read cycles again and subtract to get the result
When I measure the cycles as shown above, I sometimes get a different number of cycles depending on the input given to the function (a difference of ~1-10 cycles; the function itself takes about 3000 cycles).
I was wondering whether it is not yet perfectly constant time and whether the calculations depend on some input. I looked into the function and did the following:
void function(int* input){
    reset_and_startCounter();
    int tmp = readCycles();
    /*********************************
     ***** calculations on input *****
     *********************************/
    result[cnt++] = readCycles() - tmp;
}
and I received the same number of cycles no matter what input was given.
I then also measured the time needed to call the function and to return from it. Both measurements were the same no matter what the input was.
I was always using the gcc compiler flags -O3 and -fomit-frame-pointer: -O3 because the runtime is critical and I need it to be fast. Also important: no other code has been running on the microcontroller (no OS, etc.).
Does anyone have a possible explanation for this? I want to be sure that my code is constant time and that those extra cycles are arbitrary...
And sorry for not providing runnable code here, but I believe not many have an Aurix lying around :O
Thank you
The Infineon Aurix microcontroller you're using is designed for hard real-time applications. It has intentionally been designed to provide consistent runtime performance -- it lacks most of the features that can lead to inconsistent performance on more sophisticated CPUs, like cache memory or branch prediction.
While showing that your code has constant runtime on this part is a start, it is still possible for your code to have variable runtime when run on other CPUs. It is also possible that a device containing this CPU may leak information through other channels, particularly through power analysis. If making your application resistant to side-channel analysis is critical, you may want to consider using a part designed for cryptographic applications. (The Aurix is not such a part.)

Looping timer function in avr?

I recently had to make an Arduino project using the avr library and without the delay library. For that I had to create my own implementation of the delay function.
After searching on the internet I found this particular code in many, many places.
The only explanation I got was that it kills time in a calibrated manner.
void delay_ms(int ms) {
    int delay_count = F_CPU / 17500; // Where is this 17500 coming from?
    volatile int i;
    while (ms != 0) {
        for (i = 0; i != delay_count; i++);
        ms--;
    }
}
I am not able to understand how this works (though it did do the job), i.e. how was delay_count determined to be F_CPU / 17500? Where is this number coming from?
Delay functions are better written in assembly, because you must know how many instruction cycles your code takes in order to repeat it enough times to achieve the total delay.
I didn't test your code, but the value 17500 is designed to produce a 1 ms delay.
For example, if F_CPU = 1000000 then delay_count = 57. To reach 1 ms the loop counts to 57, and a simple calculation shows that each count takes about 17 µs, which is the time one loop iteration takes when compiled to assembly.
But of course different compiler versions will produce different assembly code, which means an inaccurate delay.
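To make that arithmetic concrete, here is a back-of-the-envelope check (a sketch, not from the original answer; the 16 MHz figure is just an example clock):
/* Sketch: the constant 17500 implicitly assumes the empty inner loop costs
 * about 17-18 CPU cycles per iteration, whatever F_CPU is. */
#include <stdio.h>

int main(void)
{
    const unsigned long f_cpu = 16000000UL;    /* example: 16 MHz clock */
    unsigned long delay_count = f_cpu / 17500; /* iterations per millisecond */
    double cycles_per_iter = (f_cpu / 1000.0) / delay_count;

    /* prints: delay_count = 914, ~17.5 cycles per iteration */
    printf("delay_count = %lu, ~%.1f cycles per iteration\n",
           delay_count, cycles_per_iter);
    return 0;
}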
My advice is to use the standard avr/delay.h library; I cannot see any reason why you can't use it. But if you must create another one, then you should learn assembly!

Is there any particular speed of reading a code by the compiler?

I have noticed that 10^7 (10,000,000) increments take about 10 seconds in my environment.
Here is an example of a custom function that works for me and wastes x seconds before moving to the next line:
void pause(unsigned short seconds)
{
    unsigned long long f;
    unsigned long long deltaTime = seconds * 10000000ULL;
    for (f = 0; f < deltaTime; f++);
}
With this function you can request a specific number of seconds for the "pause".
However, I am not sure if that's even correct. Maybe the speed of executing the code depends on the compiler, or the processor, or both?
Several things are wrong here:
In most compilers, if you enable optimizations (-O), the loop will be removed entirely once the compiler realizes it does nothing.
The speed of the loop is determined by the compiler, the processor, the system load, and many other factors.
There's already a sleep function.
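On a POSIX system that might look like the following sketch (assuming <unistd.h> is available; pause_seconds is just an illustrative name, chosen to avoid clashing with the standard pause()):
/* Sketch: hand the wait to the OS instead of burning CPU cycles in a loop. */
#include <unistd.h>

void pause_seconds(unsigned int seconds)
{
    sleep(seconds); /* blocks for at least the requested number of seconds */
}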

How to measure cpu time and wall clock time?

I saw many topics about this, even on stackoverflow, for example:
How can I measure CPU time and wall clock time on both Linux/Windows?
I want to measure both CPU time and wall time. Although the person who answered the question in the topic I posted recommended using gettimeofday to measure wall time, I read that it's better to use clock_gettime instead. So I wrote the code below (is it OK? does it really measure wall time, not CPU time? I'm asking because I found a webpage, http://nadeausoftware.com/articles/2012/03/c_c_tip_how_measure_cpu_time_benchmarking#clockgettme, which says that clock_gettime measures CPU time...). What's the truth, and which one should I use to measure wall time?
Another question is about CPU time. I found answers saying that clock() is good for this, so I wrote sample code for it too. But it's not what I really want: for my code it shows 0 seconds of CPU time. Is it possible to measure CPU time more precisely (in seconds)? Thanks for any help (for now, I'm interested only in Linux solutions).
Here's my code:
#include <time.h>
#include <stdio.h>  /* printf */
#include <math.h>   /* log */
#include <stdlib.h>

int main()
{
    int i;
    double sum = 0.0; /* initialize so the accumulation is well defined */

    // measure elapsed wall time
    struct timespec now, tmstart;
    clock_gettime(CLOCK_REALTIME, &tmstart);
    for (i = 0; i < 1024; i++) {
        sum += log((double)i);
    }
    clock_gettime(CLOCK_REALTIME, &now);
    double seconds = (now.tv_sec + now.tv_nsec * 1e-9) - (tmstart.tv_sec + tmstart.tv_nsec * 1e-9);
    printf("wall time %fs\n", seconds);

    // measure cpu time
    double start = (double)clock() / (double)CLOCKS_PER_SEC;
    for (i = 0; i < 1024; i++) {
        sum += log((double)i);
    }
    double end = (double)clock() / (double)CLOCKS_PER_SEC;
    printf("cpu time %fs\n", end - start);

    return 0;
}
Compile it like this:
gcc test.c -o test -lrt -lm
and it shows me:
wall time 0.000424s
cpu time 0.000000s
I know I can make more iterations but that's not the point here ;)
IMPORTANT:
printf("CLOCKS_PER_SEC is %ld\n", CLOCKS_PER_SEC);
shows
CLOCKS_PER_SEC is 1000000
According to the manual page for clock():
POSIX requires that CLOCKS_PER_SEC equals 1000000 independent of the actual resolution.
When increasing the number of iterations on my computer, the measured CPU time starts to show up at around 100,000 iterations. From the returned figures it seems the resolution is actually 10 milliseconds.
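You can check that granularity empirically with something like this sketch (not part of the original answer): spin until clock() returns a new value and look at the size of the step:
/* Sketch: estimate the effective granularity of clock() by waiting for its
 * return value to change and printing the observed step. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    clock_t next;

    while ((next = clock()) == start)
        ;                                   /* busy-wait for the next tick */

    printf("clock() step: %f s\n", (double)(next - start) / CLOCKS_PER_SEC);
    return 0;
}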
Beware that when you optimize your code, the whole loop may disappear because sum is a dead value. There is also nothing to stop the compiler from moving the clock statements across the loop, as there are no real dependencies on the code in between.
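One simple way to keep the loop alive is to make sum observable after the measurement, for example by printing it (a sketch of the idea, adapted from the code above rather than taken from the original answer):
/* Sketch: printing the accumulated value after the timed region makes sum a
 * live value, so -O2/-O3 cannot remove the loop as dead code. */
#include <math.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec tmstart, now;
    double sum = 0.0;
    int i;

    clock_gettime(CLOCK_REALTIME, &tmstart);
    for (i = 1; i < 1024; i++)   /* start at 1 to avoid log(0) */
        sum += log((double)i);
    clock_gettime(CLOCK_REALTIME, &now);

    double seconds = (now.tv_sec - tmstart.tv_sec)
                   + (now.tv_nsec - tmstart.tv_nsec) * 1e-9;

    /* Using sum here gives the compiler a reason to keep the loop. */
    printf("wall time %fs, sum = %f\n", seconds, sum);
    return 0;
}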
Let me elaborate a bit more on micro measurements of performance of code. The naive and tempting way to measure performance is indeed by adding clock statements as you have done. However since time is not a concept or side effect in C, compilers can often move these clock calls at will. To remedy this it is tempting to make such clock calls have side effects by for example having it access volatile variables. However this still doesn't prohibit the compiler from moving highly side-effect free code over the calls. Think for example of accessing regular local variables. But worse, by making the clock calls look very scary to the compiler, you will actually negatively impact any optimizations. As a result, mere measuring of the performance impacts that performance in a negative and undesirable way.
If you use profiling, as already mentioned by someone, you can get a pretty good assessment of the performance of even optimized code, although the overall time of course is increased.
Another good way to measure performance is just asking the compiler to report the number of cycles some code will take. For a lot of architectures the compiler has a very accurate estimate of this. However most notably for a Pentium architecture it doesn't because the hardware does a lot of scheduling that is hard to predict.
Although it is not standard practice, I think compilers should support a pragma that marks a function to be measured. The compiler could then include high-precision, non-intrusive measuring points in the prologue and epilogue of the function and prohibit any inlining of it. Depending on the architecture, it could choose a high-precision clock to measure time, preferably with support from the OS to measure only the time of the current process.
