I recently had to make an Arduino project using the avr library but without the delay library, so I had to write my own implementation of a delay function.
After searching on the internet I found this particular code in many, many places, and the only explanation I ever got was that it "kills time in a calibrated manner".
void delay_ms(int ms) {
    int delay_count = F_CPU / 17500;  // where is this 17500 coming from?
    volatile int i;
    while (ms != 0) {
        for (i = 0; i != delay_count; i++)
            ;                         // empty loop body: just burns cycles
        ms--;
    }
}
I am not able to understand how this works (though it did do the job), i.e. how the delay count was determined to be F_CPU / 17500. Where is this number coming from?
Delay functions are better written in assembly, because you have to know how many instruction cycles your code takes in order to repeat it the right number of times and reach the total delay.
I didn't test your code, but the value 17500 is chosen so that the inner loop burns 1 ms.
For example, if F_CPU = 1000000 then delay_count = 57; to reach 1 ms the loop counts to 57, and a simple calculation shows that each iteration takes about 17.5 µs, which is the time the compiled loop body needs at that clock.
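Spelling the calibration out (this is my own arithmetic, assuming the compiled loop really does cost roughly 17.5 CPU cycles per iteration for the volatile 16-bit counter):
// cycles available per millisecond:   F_CPU / 1000
// cycles per loop iteration (approx): 17.5  (load, increment, store, compare, branch)
// iterations needed per millisecond:  (F_CPU / 1000) / 17.5 = F_CPU / 17500
// e.g. F_CPU = 1 MHz  -> delay_count = 57   (about 17.5 us per iteration)
//      F_CPU = 16 MHz -> delay_count = 914  (about 1.1 us per iteration)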
But of course different compiler versions will produce different assembly code, which means an inaccurate delay.
My advice is to use the standard avr-libc delay header; I cannot see any reason why you couldn't use it. If you must write your own, then you should learn assembly!
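For reference, a minimal sketch of the standard avr-libc route (in current avr-libc the header is <util/delay.h>; F_CPU must be defined before the include, and the argument should be a compile-time constant):
#define F_CPU 16000000UL        // your actual clock frequency; must precede the include
#include <util/delay.h>

int main(void)
{
    for (;;) {
        _delay_ms(500);         // busy-wait calibrated by the library from F_CPU
        // toggle a pin here, etc.
    }
}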
I want to make a variable delay on an ATmega8, but _delay_us() only accepts a constant value. I think I could build a variable microsecond delay with a timer, but I don't know how to work with one.
Please help me.
You can use a delay loop: you delay for one microsecond in each
iteration, and do as many iterations as microseconds you have to burn:
void delay_us(unsigned long us)
{
    while (us--) _delay_us(1);
}
There are, however, a few issues with this approach:
it takes time to manage the iterations (decrement the counter, compare
to zero, conditional branch...), so the delay within the loop should
be significantly shorter than 1 µs
it takes time to call the function and return from it, and this should
be discounted from the iteration count, but since this time may not be
a full number of microseconds, you will have to add a small delay in
order to get to the next full microsecond
if the compiler inlines the function, everything will be off.
Trying to fix those issues yields something like this:
// Only valid with a 16 MHz clock.
void __attribute__((noinline)) delay_us(unsigned long us)
{
    if (us < 2) return;
    us -= 2;
    _delay_us(0.4375);
    while (us--) _delay_us(0.3125);
}
For a more complete version that can handle various clock frequencies,
see the delayMicroseconds() function from the Arduino AVR
core. Notice that the function is only accurate for a few discrete
frequencies. Notice also that the delay loop is done in inline assembly,
in order to be independent of compiler optimizations.
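Since the original question also asked about using a timer: here is a rough, untested sketch of a timer-based variant (my own, not from the answer above), assuming an ATmega8 running at 8 MHz so that Timer1 with a /8 prescaler ticks once per microsecond:
#include <avr/io.h>

void delay_us_timer(uint16_t us)    // handles up to ~65535 us
{
    TCCR1A = 0;                     // Timer1 in normal mode
    TCNT1  = 0;                     // reset the counter
    TCCR1B = (1 << CS11);           // start with prescaler /8 -> 1 tick = 1 us at 8 MHz
    while (TCNT1 < us)
        ;                           // busy-wait until enough ticks have elapsed
    TCCR1B = 0;                     // stop the timer
}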
I was making a digital clock in C.
It takes the current time as input from the user, then updates the seconds to show the time in HH:MM:SS format.
I am confused by the for loop that sits inside the seconds loop:
for (i = 0; i < 89999900; i++)
    i = i + 1 - 1;
I have tried to dry run the code.
Suppose, I gave input 10:30:20 as hh:min:sec respectively.
Now, the for loop will start.
the for loop for hr runs, then the for loop for min, then the for loop for sec... then the for loop for i...
when sec is 20, the for loop for i will run 89999900 times and do i=i+1-1, i.e. update the value of i....
then sec will be 21....
What I am surprised by is how the i loop has an impact on the pace of the sec value, and how it runs that fast?
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int h = 0, m = 0, s = 00;
    double i;
    printf("Enter time in format of HH MM SS\n");
    scanf("%d%d%d", &h, &m, &s);
start:;
    for (h; h < 24; h++) {
        for (m; m < 60; m++) {
            for (s; s < 60; s++) {
                system("cls");
                printf("\n\n\n\n\n\n\t\t\t\t\t\t%d:%d:%d\n", h, m, s);
                for (i = 0; i < 89999900; i++) {
                    i = i + 1 - 1;
                }
            }
            s = 0;
        }
        m = 0;
    }
    goto start;
}
This kind of dirty "burn-away" loop was common a long time ago, and such loops still exist to some extent in embedded systems. They aren't professional, since they are very inaccurate and tightly coupled to a particular compiler build on a particular system. You certainly cannot get accurate timing out of them on a PC.
First of all, programmers always tend to write them wrong. In the 1980s you could write loops like this because compilers were crap, but nowadays any half-decent compiler will simply remove the whole loop from the executable, because it doesn't contain any side effects. So in order to make the code work at all, you must declare i as volatile to prevent that optimization.
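As a minimal sketch of that fix (keeping the question's somewhat odd choice of a double counter), the burn loop would have to look something like this so the optimizer cannot throw it away:
volatile double i;                  /* volatile: the compiler must keep every access */
for (i = 0; i < 89999900; i++)
{
    /* empty body; the loop exists only to burn CPU time */
}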
Once that severe bug is fixed, you'll have to figure out how long the loop actually takes. It is the sum of all the CPU instructions needed to run it: a compare, some calculation, then incrementing the iterator by 1. If you disassemble the program, you can work this out by adding up the cycles needed by each instruction, then multiplying by the time one cycle takes, which is normally 1/CPU frequency (ignoring pipelining, potential RAM access stalls, multiple cores, etc.).
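A hypothetical back-of-the-envelope version of that calculation (the instruction and cycle counts are made up for illustration only):
/* Suppose the disassembled loop body is 5 instructions costing 5 cycles in total,
 * and the CPU runs at 3 GHz:
 *     time per iteration = 5 / 3e9 s  ~= 1.7 ns
 *     89 999 900 iterations           ~= 0.15 s
 * before accounting for pipelining, caches, and whatever else the OS is doing. */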
In the end you'll come to the conclusion that whoever wrote this code just pulled a number out of their... hat, then at best benchmarked the loop's execution time with all optimizations enabled. Far more likely, I'd say this code was written by someone who didn't know what they were doing, which we can already tell from the missing volatile.
You should not use goto. For an infinite loop, you can use a while(1) loop as shown below.
You should use the sleep or delay function provided by the system rather than writing your own for loop, because the number of iterations needed will change if you move to a different machine.
The program given below is a crude clock and will accumulate error, since the time required to execute the rest of the code is not accounted for.
The code below assumes the Linux sleep() function; for the Windows Sleep() function you can refer to "Sleep function in Windows, using C". If you are using an embedded system, there will be delay functions available, or you can write your own using one of the timers.
while (1)
{
    for (h; h < 24; h++)
    {
        for (m; m < 60; m++)
        {
            for (s; s < 60; s++)
            {
                system("cls");
                printf("\n\n\n\n\n\n\t\t\t\t\t\t%d:%d:%d\n", h, m, s);
                sleep(1);   /* Linux sleep() takes seconds, so this is one second per tick */
            }
            s = 0;
        }
        m = 0;
    }
    h = 0;
}
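To avoid the accumulated drift mentioned above, an alternative (a sketch only, assuming a hosted environment with <time.h> and POSIX sleep()) is to read the real clock on every iteration instead of counting sleeps:
#include <stdio.h>
#include <time.h>
#include <unistd.h>     /* sleep(); on Windows use Sleep() from <windows.h> instead */

int main(void)
{
    while (1) {
        time_t now = time(NULL);          /* read the wall clock each iteration */
        struct tm *t = localtime(&now);
        printf("\r%02d:%02d:%02d", t->tm_hour, t->tm_min, t->tm_sec);
        fflush(stdout);
        sleep(1);                         /* even if this drifts, the next read corrects it */
    }
}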
I am measuring the cycle count of different C functions that I am trying to make constant-time in order to mitigate side-channel attacks (crypto).
I am working with a microcontroller (an Aurix from Infineon) that has an on-board cycle counter which is incremented on each clock tick and which I can read out.
Consider the following:
int result[32], cnt = 0;
int secret[32];
/*** some other code ***/
reset_and_startCounter();              // resets the cycle counter to 0 and starts it
int tmp = readCycles();                // read cycles before the function call
function(secret);                      // the function to measure; should be constant time
result[cnt++] = readCycles() - tmp;    // read cycles again and subtract to get the cost
When I measure the cycles as shown above, I sometimes get a different number of cycles depending on the input given to the function (a difference of roughly 1-10 cycles; the function itself takes about 3000 cycles).
I wondered whether the function is not yet perfectly constant-time and the calculations somehow depend on the input. So I looked into the function and did the following:
void function(int* input){
    reset_and_startCounter();
    int tmp = readCycles();
    /*********************************
     ***** calculations on input *****
     *********************************/
    result[cnt++] = readCycles() - tmp;
}
and I got the same number of cycles no matter what input was given.
I then also measured the time needed to call the function and to return from it. Both measurements were the same regardless of the input.
I was compiling with the gcc flags -O3 and -fomit-frame-pointer: -O3 because the runtime is critical and I need the code to be fast. Also important: no other code was running on the microcontroller (no OS etc.).
Does anyone have a possible explanation for this? I want to be confident that my code is constant-time and that those extra cycles are just measurement noise...
And sorry for not providing runnable code here, but I believe not many people have an Aurix lying around :O
Thank you
The Infineon Aurix microcontroller you're using is designed for hard real-time applications. It has intentionally been designed to provide consistent runtime performance -- it lacks most of the features that can lead to inconsistent performance on more sophisticated CPUs, like cache memory or branch prediction.
While showing that your code has constant runtime on this part is a start, it is still possible for your code to have variable runtime when run on other CPUs. It is also possible that a device containing this CPU may leak information through other channels, particularly through power analysis. If making your application resistant to side-channel analysis is critical, you may want to consider using a part designed for cryptographic applications. (The Aurix is not such a part.)
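As for the remaining 1-10 cycle spread: one common way to separate measurement jitter from genuinely data-dependent timing is to repeat the measurement many times per input and compare the minimum (or the whole distribution) across inputs. A rough sketch, reusing the reset_and_startCounter()/readCycles()/function() helpers from the question:
#define RUNS 1000

unsigned int min_cycles(int *input)
{
    unsigned int best = ~0u;                      /* smallest cycle count seen so far */
    for (int r = 0; r < RUNS; r++) {
        reset_and_startCounter();
        int tmp = readCycles();
        function(input);                          /* routine under test */
        unsigned int c = (unsigned int)(readCycles() - tmp);
        if (c < best)
            best = c;                             /* jitter only adds cycles, so keep the minimum */
    }
    return best;                                  /* compare this value across different inputs */
}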
I have noticed that 10^7 (10,000,000) increments take about 10 seconds in my environment.
Here is an example of a custom function that works for me; it wastes x seconds before the next line runs:
void pause(unsigned short seconds)
{
    int f;
    unsigned long long deltaTime = seconds * 10000000;
    for (f = 0; f < deltaTime; f++);
}
With this function you can request a specific number of seconds to "pause".
However... I am not sure if that's even correct. Maybe the speed of executing the code depends on the compiler or the processor, or both?
Several things wrong here:
With optimizations enabled (-O), most compilers will simply remove this code entirely, because they can see it does nothing.
The speed of the loop is determined by the compiler, the processor, the system load, and many other factors.
There's already a sleep function.
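For reference, a minimal sketch using that existing facility instead of a burn loop (assuming a POSIX system; on Windows, Sleep() from <windows.h> takes milliseconds instead):
#include <unistd.h>        /* POSIX sleep() takes whole seconds */

void pause_seconds(unsigned int seconds)
{
    sleep(seconds);        /* suspends the process instead of burning CPU */
}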
I'm using a PIC18F14K50 with HiTech ANSI C Compiler and MPLAB v8.43. My PIC code is finally up and running and working, with the exception of the delay function. This is crucial for my application - I need it to be in certain states for a given number of milliseconds, seconds, or minutes.
I have been trying to find a solution for about 2 weeks but have been unsuccessful so far. I gave up and wrote my own delay function with asm("nop"); in a loop, but this gives very unpredictable results. If I tell it to wait for half a second or 5 seconds, it works accurately enough. But as soon as I tell it to wait longer, like 10 minutes, the delay only lasts about 10-20 seconds, and 2 minutes ends up being a blink, shorter than a 500 ms delay.
Here are my config fuses and wait() function:
#include <htc.h>
__CONFIG(1, FOSC_IRC & FCMEN_OFF & IESO_OFF & XINST_OFF);
__CONFIG(2, PWRTEN_OFF & BOREN_OFF & WDTEN_OFF);
__CONFIG(3, MCLRE_OFF);
__CONFIG(4, STVREN_ON & LVP_OFF & DEBUG_OFF);
__CONFIG(5, 0xFFFF);
__CONFIG(6, 0xFFFF);
__CONFIG(7, 0xFFFF);
void wait(int ms)
{
    for (int i = 0; i < ms; i++)
        for (int j = 0; j < 12; j++)
            asm("nop");
}
Like I said, if I call wait(500) up to wait(30000) then I get a half-second to 30-second delay within the tolerance I'm interested in; however, if I call wait(600000) I do not get a 10-minute delay as I would expect, but rather about 10-15 seconds, and wait(120000) doesn't give a 2-minute delay, but rather a quick blink.
Ideally, I'd like to get the built-in __delay_ms() function working and call it from within my wait(); however, I haven't had any success with this. If I try to #include <delay.h>, MPLAB complains there is no such file or directory. In the delay.h from my HiTech samples there is a DelayUs(unsigned char) defined and an extern void DelayMs(unsigned char), which I haven't tried; when I put the extern directly into my C code, I get an undefined symbol error when linking.
The discrepancy between the short to medium delays and the long delays makes no sense. The only explanation I have is that the compiler has optimised out the NOPs or something.
Like I said, it's a PIC18F14K50 with the above configuration fuses. I don't have a great deal of experience with PICs, but I assume it's running at 4MHz given this set-up.
I'm happy with an external function from a library or macro, or with a hand-written function with NOPs. All I need is for it to be accurate to within a couple of seconds per minute or so.
Is int only 16 bits wide with this compiler? My guess is that the ms parameter of wait() is overflowing: a signed 16-bit int tops out at 32,767, so values like 600000 wrap around.
If you change your int variables to unsigned, you can go up to 65,535 ms. To go higher than that, you need to use long as the parameter type and nest your loops even deeper.
A better long-term solution would be to write a delay function that uses one of the built-in hardware timers in your chip. Your NOP delay will not be accurate over long periods if other interrupts are firing and consuming CPU cycles.
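Combining those suggestions with the __delay_ms() mentioned in the question, here is a rough, untested sketch (assuming the compiler exposes __delay_ms() once _XTAL_FREQ is defined, as HI-TECH C / XC8 for PIC18 does):
#include <htc.h>

#define _XTAL_FREQ 4000000UL     /* assumed internal oscillator frequency */

/* unsigned long parameter so values like 600000 do not overflow */
void wait_ms(unsigned long ms)
{
    while (ms--)
        __delay_ms(1);           /* library busy-wait calibrated from _XTAL_FREQ */
}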