I was trying to read an infrared signal with an Arduino, so I wrote this code:
{
    unsigned long initial = micros();       // micros()/millis() return unsigned long
    unsigned long timeinms = millis();
    int oldValue = 0;
    int bitPosition = 0;
    while (millis() - timeinms < 100) {     // sample for a 100 ms window
        int value = (PIND & B10000000) >> 7;
        if (value != oldValue) {
            oldValue = value;
            // timestamp of the edge, with manual wrap-around handling
            arr[bitPosition] = micros() > initial ? micros() - initial : 4294967295 - initial + micros();
            bitVal[bitPosition] = value;
            bitPosition++;
        }
    }
}
But when I read a signal, the general timing is roughly right but off: I get 430 and 600 instead of 562, and 1620 instead of 562 * 3. Why is there such a difference? I know micros() has a 4 microsecond resolution, but that should result in at most 8 microseconds of error, not 80 or 100.
So what is the problem: is it a bug in the code, or something I am missing?
Tried the above code; I am getting times that are off by plus or minus 80 to 100 microseconds.
You don't know how millis() works in the background: your code is interrupted every millisecond by the Timer0 interrupt, and I would guess the duration of that ISR explains the observed differences in your time measurements. Don't rely on millis() and you will get a more precise time measurement, or use another timer in capture mode for the most precise measurement.
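To illustrate the capture-mode suggestion, here is a rough sketch (not the original setup: it assumes an ATmega328 at 16 MHz with the IR receiver moved to the ICP1 pin, Arduino pin 8). Timer1 runs free at 0.5 µs per tick and the hardware latches the timer value into ICR1 on every edge, so the timestamps are unaffected by the millis() ISR:
volatile unsigned int captures[100];
volatile byte edges = 0;

void setup() {
  pinMode(8, INPUT);                  // ICP1 = Arduino pin 8 on the ATmega328
  TCCR1A = 0;                         // Timer1 in normal (free-running) mode
  TCCR1B = _BV(ICNC1) | _BV(CS11);    // noise canceler, clk/8 = 0.5 us per tick
  TIFR1  = _BV(ICF1);                 // clear any stale capture flag
  TIMSK1 = _BV(ICIE1);                // enable the input-capture interrupt
}

ISR(TIMER1_CAPT_vect) {
  if (edges < 100) captures[edges++] = ICR1;  // hardware-latched timestamp
  TCCR1B ^= _BV(ICES1);               // flip edge select to catch the other edge
  TIFR1   = _BV(ICF1);                // clear the flag the edge change may have set
}

void loop() {
  // differences between successive captures, times 0.5 us, are the pulse widths
}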
I made a function where a PWM signal is generated at the output (PORTD) without using the PWM control registers of a PIC microcontroller (PIC18F452). In order to slowly dim an LED connected at the output, I was trying to increase the time needed for the pulse to advance from 0% of one period to 100% of one period of the square wave, while keeping the square wave frequency constant. Everything should go as planned, except that the second parameter passed into the pwm function somehow resets when going from 655 to 656 (that is, when the duty cycle is at 65%). After this event, the value passed to the pwm function starts again from 0, whereas it should not reset at the transition from 655 to 656 but at the transition from 1000 to 1001.
void pwm(volatile unsigned char *lat, unsigned int cycle);  /* prototype so main() sees pwm() */

void main(void) {
    TRISD = 0x00;               // port D set as output
    LATD = 0x00;                // port D output set LOW
    unsigned int width = 1000;  // length of T_on + T_off
    unsigned int j;
    unsigned int res;
    while (1) {
        for (j = 1; j <= width; j++) {
            res = (unsigned int)((j * 100) / width);
            pwm(&LATD, res);
        }
    }
    return;
}
void pwm(volatile unsigned char *lat, unsigned int cycle) {
    if (cycle > 100) {          // clamp the "cycle"
        cycle = 100;
    }
    unsigned int i = 1;
    while (i <= cycle) {        // T_on
        *lat = 0x01;
        i++;
    }
    unsigned int j = 100 - cycle;
    while (j) {                 // T_off
        *lat = 0;
        j--;
    }
    return;
}
As for the program itself, it should work like so:
- the second parameter passed into the pwm function is the duty cycle (in %), which changes from 0 to 100
- the variable "width" controls the time needed for the duty cycle to advance from 0% to 100% (width = 100 is the fastest, and anything above that gives a gradually slower sweep from 0% to 100%)
- the expression ((j*100)/width) serves as the step variable inside the "while" loop inside the pwm function:
  - if width = 100, the step increases on every increment of "j"
  - if width = 1000, the step increases every 10 increments of "j",
  - etc.
- PORTD is passed into the function by its address, and inside pwm that address is accessed through the pointer variable lat
As for the problem itself, I could only assume two possibilities: either the data type of the second parameter of the pwm function is incorrect, or there is some unknown limitation within the PIC microcontroller.
Also, here are the definitions of the configuration bits (device-specific registers) of the PIC, located in the header file included in this program: https://imgur.com/a/UDYifgN
This is how the program should operate: https://vimeo.com/488207207
This is how the program currently operates: https://vimeo.com/488207746
The problem is a 16-bit overflow:
res = (unsigned int)((j*100)/width);
If j is greater than 655, the result of the calculation j*100 no longer fits in 16 bits. Switch the calculation to 32 bits, or, even easier, make your loop run res from 0 to 100 directly.
e.g.
for (res = 0; res <= 100; res++){
    pwm(&LATD, res);
}
To preface, I am on a Unix (Linux) system using gcc.
What I am stuck on is how to accurately implement a way to run a section of code for a certain amount of time.
Here is an example of something I have been working with:
struct timeb start, check;
int64_t duration = 10000;
int64_t elapsed = 0;
ftime(&start);
while ( elapsed < duration ) {
    // do a set of tasks
    ftime(&check);
    elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
}
I was thinking this would carry on for 10000 ms, or 10 seconds, but it didn't; it finished almost instantly. I was basing this off other questions such as How to get the time elapsed in C in milliseconds? (Windows). But then I thought that if, on the first call to ftime, the struct held time = 1, millitm = 999, and on the second call time = 2, millitm = 01, it would calculate the elapsed time as 1002 milliseconds. Is there something I am missing?
Also, the suggestions in the various Stack Overflow questions, ftime() and gettimeofday(), are listed as deprecated or legacy.
I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check. But milliseconds since the epoch requires 42 bits and I'm trying to keep everything in the loop as efficient as possible.
What approach could I take towards this?
The code calculates the elapsed time incorrectly: the value should be assigned, not accumulated.
// elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
elapsed = ((check.time - start.time) * (int64_t)1000) + (check.millitm - start.millitm);
There is some concern about check.millitm - start.millitm. On systems where struct timeb is defined as below, millitm can be expected to be promoted to int before the subtraction occurs, so the difference will be in the range [-1000 ... 1000].
struct timeb {
    time_t         time;
    unsigned short millitm;
    short          timezone;
    short          dstflag;
};
IMO, more robust code would handle the ms conversion in a separate helper function. This matches the OP's "I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check."
int64_t timeb_to_ms(struct timeb *t) {
    return (int64_t)t->time * 1000 + t->millitm;
}

struct timeb start;
ftime(&start);
int64_t start_ms = timeb_to_ms(&start);
int64_t duration = 10000 /* ms */;
int64_t elapsed = 0;
while (elapsed < duration) {
    // do a set of tasks
    struct timeb check;
    ftime(&check);
    elapsed = timeb_to_ms(&check) - start_ms;
}
If you want efficiency, let the system send you a signal when a timer expires.
Traditionally, you can set a timer with a resolution in seconds with the alarm(2) syscall.
The system then sends you a SIGALRM when the timer expires. The default disposition of that signal is to terminate.
If you handle the signal, you can longjmp(3) from the handler to another place.
I don't think it gets much more efficient than SIGALRM + longjmp (with an asynchronous timer, your code basically runs undisturbed without having to do any extra checks or calls).
Below is an example for you:
#define _GNU_SOURCE   /* for sysv_signal() */
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <setjmp.h>

static jmp_buf jmpbuf;

void hndlr(int sig);
void loop(void);

int main(void){
    /* sysv_signal handlers get reset after a signal is caught and handled */
    if(SIG_ERR == sysv_signal(SIGALRM, hndlr)){
        perror("couldn't set SIGALRM handler");
        return 1;
    }
    /* the handler will jump you back here */
    setjmp(jmpbuf);
    alarm(3 /* seconds */);   /* alarm() returns the previous timer; it does not fail */
    loop();
    return 0;
}

void hndlr(int sig){
    (void)sig;
    puts("Caught SIGALRM");
    puts("RESET");
    longjmp(jmpbuf, 1);
}

void loop(void){
    int i;
    for(i = 0; ; i++){
        // print every 100-millionth iteration
        if(0 == i % 100000000){
            printf("%d\n", i);
        }
    }
}
If alarm(2) isn't enough, you can use timer_create(2) as EOF suggests.
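For completeness, a minimal sketch of the timer_create() route (POSIX timers; link with -lrt on older glibc). It arms a one-shot CLOCK_MONOTONIC timer that delivers SIGALRM after 3 seconds:
#define _POSIX_C_SOURCE 199309L
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    timer_t timerid;
    struct sigevent sev = { 0 };
    struct itimerspec its = { 0 };

    sev.sigev_notify = SIGEV_SIGNAL;      /* deliver a signal on expiry */
    sev.sigev_signo  = SIGALRM;

    if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) == -1) {
        perror("timer_create");
        return 1;
    }

    its.it_value.tv_sec = 3;              /* one-shot: expire 3 seconds from now */
    if (timer_settime(timerid, 0, &its, NULL) == -1) {
        perror("timer_settime");
        return 1;
    }

    pause();                              /* default SIGALRM disposition terminates */
    return 0;
}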
I wrote code like this:
trace_printk("111111");
udelay(4000);
trace_printk("222222");
and the log shows it takes 4.01 ms, which is OK.
But when I call it like this:
trace_printk("111111");
ndelay(10000);
ndelay(10000);
ndelay(10000);
ndelay(10000);
....
....//totally 400 ndelay calls
trace_printk("222222");
the log shows 4.7 ms, which is not acceptable.
Why is the error of ndelay so large?
Looking deep into the kernel code, I found the implementation of these two functions:
void __udelay(unsigned long usecs)
{
        __const_udelay(usecs * 0x10C7UL); /* 2**32 / 1000000 (rounded up) */
}

void __ndelay(unsigned long nsecs)
{
        __const_udelay(nsecs * 0x5UL); /* 2**32 / 1000000000 (rounded up) */
}
I thought udelay would be 1000 times ndelay, but it's not. Why?
As you've already noticed, the nanosecond delay implementation is quite a coarse approximation compared to the microsecond delay, because of the 0x5 constant factor used. 0x10c7 / 0x5 is approximately 859; using 0x4 would be closer to 1000 (approximately 1073).
However, using 0x4 would cause the ndelay to be less than the number of nanoseconds requested. In general, delay functions aim to provide a delay at least as long as requested by the user (see here: http://practicepeople.blogspot.jp/2013/08/kernel-programming-busy-waiting-delay.html).
Every time you call it, a rounding error is added. Note the comment 2**32 / 1000000000: that value is really ~4.29, but it was rounded up to 5, which is a pretty hefty error. Each requested nanosecond therefore takes about 5 / 4.29 ≈ 1.16 ns, so 400 calls of ndelay(10000) come to roughly 400 * 11.6 µs ≈ 4.7 ms, which is exactly what you measured.
By contrast, the udelay error is small (~4294.97 versus 4295 [0x10c7]).
You can use ktime_get_ns() to get a high-precision time since boot, so you can use it not only as a high-precision delay but also as a high-precision timer. Here is an example:
u64 t;
int i;

t = ktime_get_ns();                  // get current nanoseconds since boot
for (i = 0; i < 24; i++)             // send 24 pulses of 1200 ns / 1300 ns via GPIO
{
    gpio_set_value(pin, 1);          // drive GPIO or do something else
    t += 1200;                       // now we have the absolute time of the next step
    while (ktime_get_ns() < t);      // wait for it
    gpio_set_value(pin, 0);          // do something, again
    t += 1300;                       // now we have the time of the next step, again
    while (ktime_get_ns() < t);      // wait for it, again
}
I'm trying to make a simple RPM meter using an ATmega328.
I have an encoder on the motor which gives 306 interrupts per rotation (the encoder has 3 spokes which interrupt on both rising and falling edges, and the motor is geared 51:1, so 6 transitions * 51 = 306 interrupts per wheel rotation). I am using a timer interrupting every 1 ms; however, inside the interrupt it is set to recalculate every 1 second.
There seem to be two problems:
1) The RPM never goes below 60; instead it is either 0 or >= 60.
2) Reducing the time interval causes it to always be 0 (as far as I can tell).
Here is the code:
int main(void){
    while(1){
        int temprpm = leftRPM;
        printf("Revs: %d \n", temprpm);
        _delay_ms(50);
    };
    return 0;
}

ISR (INT0_vect){
    ticksM1++;
}

ISR(TIMER0_COMPA_vect){
    counter++;
    if(counter == 1000){
        int tempticks = ticksM1;
        leftRPM = ((tempticks - lastM1)/306)*1*60;
        lastM1 = tempticks;
        counter = 0;
    }
}
Anything that is not declared in that code is declared globally as an int; ticksM1 is also volatile.
The macros are AVR macros for the interrupts.
The multiply by 1 in the leftRPM calculation represents time; ideally I want to use 1 ms without the if statement, in which case the 1 would become 1000.
For a speed between 60 and 120 RPM the result of ((tempticks - lastM1)/306) will be 1, and below 60 RPM it will be zero, so your output will always be a multiple of 60.
The first improvement I would suggest is not to perform expensive arithmetic in the ISR. It is unnecessary - store the speed in raw counts-per-second, and convert to RPM only for display.
Second, perform the multiply before the divide to avoid unnecessarily discarding information. Then for example at 60 RPM (306 CPS) you have (306 * 60) / 306 == 60. Even as low as 1 RPM you get (6 * 60) / 306 == 1. In fact it gives you a potential resolution of approximately 0.2 RPM as opposed to 60 RPM! To allow the parameters to be easily maintained, I recommend using symbolic constants rather than magic numbers.
#define ENCODER_COUNTS_PER_REV 306
#define MILLISEC_PER_SAMPLE    1000
#define SAMPLES_PER_MINUTE     ((60 * 1000) / MILLISEC_PER_SAMPLE)

ISR(TIMER0_COMPA_vect){
    counter++;
    if(counter == MILLISEC_PER_SAMPLE)
    {
        int tempticks = ticksM1;
        leftCPS = tempticks - lastM1;
        lastM1 = tempticks;
        counter = 0;
    }
}
Then in main():
int temprpm = (leftCPS * SAMPLES_PER_MINUTE) / ENCODER_COUNTS_PER_REV ;
If you want better that 1RPM resolution you might consider
int temprpm_x10 = (leftCPS * SAMPLES_PER_MINUTE) / (ENCODER_COUNTS_PER_REV / 10) ;
then displaying:
printf( "%d.%d", temprpm / 10, temprpm % 10 ) ;
Given the potential resolution of 0.2 rpm by this method, higher resolution display is unnecessary, though you could use a moving-average to improve resolution at the expense of some "display-lag".
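As a rough sketch of that moving-average idea (the helper below is hypothetical, not part of the original code; it keeps the last four one-second CPS samples and converts their sum to RPM x 10 so the divide happens once over the whole window):
#define AVG_WINDOW 4

static int cps_hist[AVG_WINDOW];
static unsigned char cps_idx;

int rpm_x10_smoothed(int new_cps)
{
    long sum = 0;
    unsigned char k;

    cps_hist[cps_idx] = new_cps;                 // store the newest sample
    cps_idx = (cps_idx + 1) % AVG_WINDOW;

    for (k = 0; k < AVG_WINDOW; k++)             // sum over the window
        sum += cps_hist[k];

    /* sum is counts over AVG_WINDOW seconds; 60 s/min, x10 for one decimal place */
    return (int)((sum * 60L * 10L) / (AVG_WINDOW * (long)ENCODER_COUNTS_PER_REV));
}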
Alternatively now that the calculation of RPM is no longer in the ISR you might afford a floating point operation:
float temprpm = ((float)leftCPS * (float)SAMPLES_PER_MINUTE ) / (float)ENCODER_COUNTS_PER_REV ;
printf( "%f", temprpm ) ;
Another potential issue is that ticksM1++, tempticks = ticksM1, and the reading of leftRPM (or leftCPS in my solution) are not atomic operations, and can result in an incorrect value being read if interrupt nesting is supported (and even if it is not, in the case of the access from outside the interrupt context). If the maximum rate will be less than 256 cps (about 50 RPM) then you might get away with an atomic 8-bit counter; alternatively you can reduce your sampling period to ensure the count is always less than 256. Failing that, the simplest solution is to disable interrupts while reading or updating non-atomic variables shared across interrupt and thread contexts.
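A minimal sketch of that last point with avr-libc (assuming leftCPS is the shared volatile int updated in the timer ISR): copy it with interrupts briefly disabled so the 16-bit read cannot be torn partway through:
#include <util/atomic.h>

volatile int leftCPS;      // updated in the timer ISR

int read_left_cps(void)
{
    int copy;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
    {
        copy = leftCPS;    // both bytes are read with interrupts disabled
    }
    return copy;
}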
It's integer division. You would probably get better results with something like this:
leftRPM = ((tempticks - lastM1)/6);
gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to get the start and end times and the difference between them.
The function I have is for creating an API for our existing hardware.
The API function wait_events takes one argument, a timeout in milliseconds. So I am trying to get the start time before the while loop, using time() to get the number of seconds; then after each iteration of the loop I get the time difference and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
 * If an event occurs before the time out, return 0.
 * If it times out before an event occurs, return -1. */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;
    /* Get the initial time */
    start = time(NULL);
    while(TRUE) {
        if(open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }
        /* Get the end time after each iteration */
        end = time(NULL);
        /* Get the difference between times */
        time_diff = difftime(start, end);
        if(time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The calling code will look like this:
int main(void)
{
    #define TIMEOUT 500 /* 1/2 sec */
    while(TRUE) {
        if(wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }
    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep -> 99.7% - 100% CPU
2) Setting usleep(10) -> 25% CPU
3) Setting usleep(100) -> 13% CPU
4) Setting usleep(1000) -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
You're overcomplicating it - simplified:
time_t start = time(NULL);
for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t should in general just be an int or long int, depending on your platform, counting the number of seconds since January 1st 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
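For example, a small sketch of that "thread inactivity" idea using nanosleep() (the non-deprecated POSIX call for sub-second sleeps), yielding the CPU for about 1 ms per pass of the polling loop:
#include <time.h>

static void polling_pause(void)
{
    struct timespec ts = { 0, 1000000L };   /* 0 s + 1,000,000 ns = 1 ms */
    nanosleep(&ts, NULL);
}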
Without calling a sleep function this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will use many CPU cycles.
You should design something like this:
while(true) {
    usleep(100 * 1000); // let's say you want a precision of 100 ms
    // Do the compare time stuff here
}
If you need precise timing and are using different threads/processes, use mutexes (semaphores with an increment/decrement of 1) or critical sections to make sure the time comparison in your function is not interrupted by another process/thread of your own.
I believe your Red Hat is System V, so you can synchronize using IPC.