I could use:
while(1==1)
{
delay(10);
f(); // <-- function to be called every 10 seconds
otherfunctions();
}
But this will take just over 10 seconds because the other functions take some time to execute. Is there a delay function that takes into account the time taken by the other functions so I can then call f() exactly every 10 seconds?
I've heard that this can be done with a clever function that can be found in a header file, but I can't remember which one. I think it might have been mbed.h, but even if the function is declared in that header, I do not know what it is called or how to search for it.
Does anybody know of a function that can do what I am after?
You should of course start by reading the mbed handbook. It is not a large API and you can get a good overview of it very quickly.
The mbed platform is a C++ API, so you will need to use C++ compilation.
There are several ways to achieve what you need, some examples:
Using the Ticker class:
#include "mbed.h"
Ticker TenSecondStuff ;
void TenSecondFunction()
{
f();
otherfunctions();
}
int main()
{
TenSecondStuff.attach( TenSecondFunction, 10.0f ) ; // period in seconds
// Note: Ticker callbacks run in interrupt context, so keep them short.
// Spin in a main loop.
for(;;)
{
continuousStuff() ;
}
}
Using wait_us() and the Timer class:
#include "mbed.h"
int main()
{
Timer t ;
for(;;)
{
t.start() ;
f() ;
otherfunctions() ;
t.stop() ;
wait_us( 10.0f - t.read_us() ) ;
}
}
Using the Ticker class, an alternative method:
#include "mbed.h"
Ticker ticksec ;
volatile static unsigned seconds_tick = 0 ;
void tick_sec()
{
seconds_tick++ ;
}
int main()
{
ticksec.attach( tick_sec, 1.0f ) ;
unsigned next_ten_sec = seconds_tick + 10 ;
for(;;)
{
if( (int)(seconds_tick - next_ten_sec) >= 0 ) // cast to signed: the unsigned difference is never negative
{
next_ten_sec += 10 ;
f() ;
otherfunctions() ;
}
continuousStuff() ;
}
}
Using the mbed RTOS timer:
#include "mbed.h"
#include "rtos.h"
void TenSecondFunction( void const* )
{
f();
otherfunctions();
}
int main()
{
RtosTimer every_ten_seconds( TenSecondFunction, osTimerPeriodic, 0 ) ;
every_ten_seconds.start( 10000 ) ; // start the timer: period in milliseconds
for(;;)
{
continuousStuff() ;
}
}
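Finally, if your mbed library is recent enough to ship the EventQueue class (mbed OS 5 and later), periodic work can be dispatched in user context rather than interrupt context. A minimal sketch, assuming f() and otherfunctions() as before:
#include "mbed.h"

EventQueue queue ;

void TenSecondFunction()
{
    f() ;
    otherfunctions() ;
}

int main()
{
    queue.call_every( 10000, TenSecondFunction ) ; // period in milliseconds
    queue.dispatch_forever() ; // blocks, running queued events in user context
}
Note that dispatch_forever() blocks, so with this approach continuousStuff() would have to run in its own Thread or be queued as an event itself.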
If you want it simple, try this (assuming DELAY_10_SECS, getCurrTicks() and delay() are provided by your platform):
unsigned long lastTime;
int delayTime = DELAY_10_SECS;
while(1==1)
{
delay(delayTime);
lastTime = getCurrTicks(); //Or start some timer with an interrupt which tracks time
f(); // <-- function to be called every 10 seconds
otherfunctions();
delayTime = DELAY_10_SECS - ( getCurrTicks() - lastTime ); //Or stop the timer and get the time
}
Assuming you have some type of tick counter, perhaps one generated by a timer-driven interrupt, try something like this:
volatile int *pticker; /* pointer to ticker */
tickspersecond = ... ; /* number of ticks per second */
/* ... */
tickcount = *pticker; /* get original reading of timer */
while(1){
tickcount += 10 * tickspersecond;
delaycount = tickcount-*pticker;
delay(delaycount); /* delay delaycount ticks */
/* ... */
}
This assumes that the ticker increments (instead of decrements), that the code never gets 10 seconds behind on a delay, and that the number of ticks per second is an exact integer. Since the original reading is used as a basis, the loop will not "drift" over a long period of time.
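A compilable sketch of the same no-drift pattern, with the platform specifics stubbed out (get_ticks(), delay_ticks() and the 1 ms tick are illustrative assumptions, not any particular API):
#define TICKS_PER_SECOND 1000u /* assumption: a 1 ms tick */

extern unsigned get_ticks(void);     /* hypothetical: read a free-running counter */
extern void delay_ticks(unsigned n); /* hypothetical: delay for n ticks */

void run_every_ten_seconds(void (*f)(void))
{
    unsigned deadline = get_ticks();
    for (;;) {
        deadline += 10u * TICKS_PER_SECOND;  /* absolute deadline, so no drift */
        delay_ticks(deadline - get_ticks()); /* unsigned arithmetic is wrap-safe,
                                                provided we never fall a full period behind */
        f();
    }
}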
Related
What is the best way to create a timer on MicroBlaze that would let me have something work like delay_ms() or sleep() in more conventional programs?
I can easily create a stupid function like this:
void delay_ms(int i) {
//mind that I am doing this off the top of my head
//CLOCK_FREQUENCY is a stand-in for the device clock rate
int delays;
for(delays = 0; delays < (i * ((1 / CLOCK_FREQUENCY) / 2)); delays++) {
}
}
... but that would only have the processor do nothing until it finishes, while in reality I need it to let me stop one process for a certain period of time while another one continues working.
Such a thing is possible, no doubt about that, but what would be the simplest solution to this problem?
(I am using a Spartan-3A, but I believe the solution would work for other kits and FPGAs as well.)
TL;DR
Use a micro OS, like FreeRTOS.
Bad answer
Well, if you have no OS and no task switching, but do have a hardware timer, you can use the following approach: enable the interrupt for your hardware timer, and manage one counter per task driven by this interrupt.
You should have something like
/**timer.c**/
#define NUMBER_OF_TASKS 2
/* The internal counters:
* each task has its own counter
*/
static int s_timers[NUMBER_OF_TASKS] = {0, 0};
/* on each timer tick, decrease the timers */
void timer_interrupt()
{
int i;
for (i = 0; i < NUMBER_OF_TASKS; ++i)
{
if (s_timers[i] > 0)
{
s_timers[i]--;
}
}
}
/* set wait counter:
* each task says how many ticks it wants to wait
*/
void timer_set_wait(int task_num, int tick_to_wait)
{
s_timers[task_num] = tick_to_wait;
}
/**
* each task can ask if its time ran out
*/
int timer_timeout(int task_num)
{
return (0 == s_timers[task_num]);
}
Once you have something like a timer (the code above leaves plenty of room for improvement),
program your tasks:
/**task-1.c**/
/* TASK ID must be valid and unique in s_timers */
#define TASK_1_ID 0
void task_1()
{
if (timer_timeout(TASK_1_ID))
{
/* task has waited long enough, it can run again */
/* DO TASK 1 STUFF */
printf("hello from task 1\n");
/* Ask to wait for 150 ticks */
timer_set_wait(TASK_1_ID, 150);
}
}
/**task-2.c**/
/* TASK ID must be valid and unique in s_timers */
#define TASK_2_ID 1
void task_2()
{
if (timer_timeout(TASK_2_ID))
{
/* task has waited long enough, it can run again */
/* DO TASK 2 STUFF */
printf("hello from task 2\n");
/* Ask to wait for 250 ticks */
timer_set_wait(TASK_2_ID, 250);
}
}
And schedule (a big word here) the tasks:
/** main.c **/
int main()
{
/* init the program, like set up the timer interruption */
init();
/* do tasks, for ever*/
while(1)
{
task_1();
task_2();
}
return 0;
}
I think what I have described is a lame solution that should not be used seriously.
The code I gave is full of problems, like what happens if a task becomes too slow to execute.
Instead, you --could-- should use a real-time OS, like FreeRTOS, which is very helpful for this kind of problem.
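As a point of comparison, here is a minimal sketch of the two tasks above under FreeRTOS, where vTaskDelayUntil() gives drift-free periodic scheduling. It assumes a working FreeRTOS port and FreeRTOSConfig.h; the periods are expressed in milliseconds here rather than raw ticks:
#include "FreeRTOS.h"
#include "task.h"

/* each task sleeps until its next release time, so the period does not drift */
static void task_1(void *params)
{
    TickType_t last_wake = xTaskGetTickCount();
    for (;;) {
        /* DO TASK 1 STUFF */
        vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(150));
    }
}

static void task_2(void *params)
{
    TickType_t last_wake = xTaskGetTickCount();
    for (;;) {
        /* DO TASK 2 STUFF */
        vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(250));
    }
}

int main(void)
{
    xTaskCreate(task_1, "task1", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(task_2, "task2", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler(); /* does not return on success */
    for (;;) { }
}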
To preface, I am on a Unix (Linux) system using gcc.
What I am stuck on is how to accurately implement a way to run a section of code for a certain amount of time.
Here is an example of something I have been working with:
struct timeb start, check;
int64_t duration = 10000;
int64_t elapsed = 0;
ftime(&start);
while ( elapsed < duration ) {
// do a set of tasks
ftime(&check);
elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
}
I was thinking this would have carried on for 10000 ms, or 10 seconds, but it didn't; it finished almost instantly. I was basing this off other questions such as How to get the time elapsed in C in milliseconds? (Windows). But then I thought that if, upon the first call of ftime, the struct is time = 1, millitm = 999 and on the second call time = 2, millitm = 01, it would be calculating the elapsed time as being 1002 milliseconds. Is there something I am missing?
Also, the suggestions in the various Stack Overflow questions, ftime() and gettimeofday(), are listed as deprecated or legacy.
I believe I could convert the start time into milliseconds and the check time into milliseconds, then subtract start from check. But milliseconds since the epoch requires 42 bits, and I'm trying to keep everything in the loop as efficient as possible.
What approach could I take towards this?
The code calculates the elapsed time incorrectly; elapsed should be assigned, not accumulated:
// elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
elapsed = ((check.time - start.time) * (int64_t)1000) + (check.millitm - start.millitm);
There is some concern about check.millitm - start.millitm. Since millitm is an unsigned short (see the struct below), it is promoted to int before the subtraction occurs, so the difference will be in the range [-999 ... 999].
struct timeb {
time_t time;
unsigned short millitm;
short timezone;
short dstflag;
};
IMO, more robust code would handle the ms conversion in a separate helper function. This matches OP's "I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check."
int64_t timeb_to_ms(struct timeb *t) {
return (int64_t)t->time * 1000 + t->millitm;
}
struct timeb start;
ftime(&start);
int64_t start_ms = timeb_to_ms(&start);
int64_t duration = 10000 /* ms */;
int64_t elapsed = 0;
while (elapsed < duration) {
// do a set of tasks
struct timeb check;
ftime(&check);
elapsed = timeb_to_ms(&check) - start_ms;
}
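Since ftime() is deprecated (as noted in the question), here is a sketch of the same loop on top of POSIX clock_gettime() with CLOCK_MONOTONIC, which is also immune to wall-clock adjustments:
#include <stdint.h>
#include <time.h>

/* milliseconds from a monotonic clock */
static int64_t now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

void run_for_ten_seconds(void) {
    int64_t start_ms = now_ms();
    int64_t duration = 10000; /* ms */
    while (now_ms() - start_ms < duration) {
        // do a set of tasks
    }
}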
If you want efficiency, let the system send you a signal when a timer expires.
Traditionally, you can set a timer with a resolution in seconds with the alarm(2) syscall.
The system then sends you a SIGALRM when the timer expires. The default disposition of that signal is to terminate.
If you handle the signal, you can longjmp(3) from the handler to another place.
I don't think it gets much more efficient than SIGALRM + longjmp (with an asynchronous timer, your code basically runs undisturbed without having to do any extra checks or calls).
Below is an example for you:
#define _XOPEN_SOURCE
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <setjmp.h>
static jmp_buf jmpbuf;
void hndlr(int sig);
void loop();
int main(){
/*sysv_signal handlers get reset after a signal is caught and handled*/
if(SIG_ERR==sysv_signal(SIGALRM,hndlr)){
perror("couldn't set SIGALRM handler");
return 1;
}
/*the handler will jump you back here*/
setjmp(jmpbuf);
alarm(3/*seconds*/); /* alarm() cannot fail; it returns the seconds left on any previous alarm */
loop();
return 0;
}
void hndlr(int sig){
puts("Caught SIGALRM");
puts("RESET");
longjmp(jmpbuf,1);
}
void loop(){
int i;
for(i=0; ; i++){
//print each 100-millionth iteration
if(0==i%100000000){
printf("%d\n", i);
}
}
}
If alarm(2) isn't enough, you can use timer_create(2) as EOF suggests.
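A minimal sketch of the timer_create(2) route, delivering SIGALRM every 10 seconds (link with -lrt on older glibc; error handling kept minimal):
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t fired = 0;

static void on_alarm(int sig) { fired = 1; }

int main(void){
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    timer_t timerid;
    struct sigevent sev;
    memset(&sev, 0, sizeof sev);
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGALRM;
    timer_create(CLOCK_MONOTONIC, &sev, &timerid);

    /* first expiry after 10 s, then every 10 s */
    struct itimerspec its;
    memset(&its, 0, sizeof its);
    its.it_value.tv_sec = 10;
    its.it_interval.tv_sec = 10;
    timer_settime(timerid, 0, &its, NULL);

    for(;;){
        pause(); /* sleep until a signal arrives */
        if(fired){
            fired = 0;
            puts("10 seconds elapsed");
        }
    }
}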
I am making a program in which I am getting data from a serial device. The problem I am facing is that the device gives me the wrong data until I run while(1) and then read the data. So I thought of running a for loop 100000 times and then reading the data, but it still gave the wrong data; I can only use while(1). So is there any way I can stop while(1) after some time, like 7-10 seconds?
Please help, thanks!
I think this will help:
int i=0;
while(1){
// do your work.
if ( i == 100 ) break; // for an example.
i++;
}
printf("After While\n");
Is it necessary for your while loop to iterate on 1? Perhaps you could loop on time(NULL) instead, for example:
time_t t = time(NULL) + 10;
while (time(NULL) < t) {
/* ... */
}
This is not exactly precise; the loop could run for anything between 9 and 10 seconds, perhaps even longer depending on how saturated your CPU is with other tasks. It doesn't look like you're after anything precise, however, so this should give you some idea...
If for whatever silly reason you must use while (1), then you can use this idea together with if and break like so:
time_t t = time(NULL) + 10;
while (1) {
if (time(NULL) >= t) {
break;
}
/* ... */
}
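If you ever need finer granularity than time(NULL)'s whole seconds, a sketch of the same pattern using clock_gettime() with CLOCK_MONOTONIC (POSIX; not affected by wall-clock changes):
#include <time.h>

void run_for_seven_seconds(void) {
    struct timespec now, end;
    clock_gettime(CLOCK_MONOTONIC, &end);
    end.tv_sec += 7; /* deadline 7 seconds from now */
    for (;;) {
        /* ... do the work ... */
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (now.tv_sec > end.tv_sec ||
            (now.tv_sec == end.tv_sec && now.tv_nsec >= end.tv_nsec)) {
            break;
        }
    }
}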
To exit the loop you have to use the break statement.
while(1)
{
//your code...
sleep(7);//to sleep for 7 seconds
break;//jumps out of the loop after 7 seconds of delay
}
#include <time.h>
#include <stdio.h>
int main()
{
time_t end = time(NULL) + 7; //7s
while (1) {
//your code...
printf("running...\n");
if (time(NULL) >= end) {
break;
}
//your code..
}
return 0;
}
while(1) {
delay(10000); //To delay for 10 seconds.
break;
}
If you can't use delay(), then use some loop to burn a significant amount of time and break out of the outer loop afterwards.
I would like to receive information from a GPS receiver every second, but from the sensors every half second...
I took the TinyGPS example code and added my sensor code to it:
#include <TinyGPS.h>
const int RightPin = A0;
const int FrontPin = A1;
const int LeftPin = A2;
int RightVal = 0;
int FrontVal = 0;
int LeftVal = 0;
TinyGPS gps;
void setup() {
Serial.begin(115200); //GPS DATA
Serial1.begin(9600); //GPS
Serial2.begin(9600); //BLUETOOTH
}
void loop() {
RightVal = analogRead(RightPin);
FrontVal = analogRead(FrontPin);
LeftVal = analogRead(LeftPin);
Serial1.print(RightVal);
Serial1.print(", ");
Serial1.print(FrontVal);
Serial1.print(", ");
Serial1.println(LeftVal);
bool newdata = false;
unsigned long start = millis();
// Every second we print an update
while (millis() - start < 1000)
{
if (feedgps())
newdata = true;
}
gpsdump(gps);
}
Thank you very much
I'm not sure if this is what you are looking for, but you can achieve this by using interrupts. You can use a timer to generate an interrupt every 0.5 seconds and just read your sensors every time (and the GPS every two).
I haven't done this on Arduino, but I have in C on AVR microcontrollers. There must be a lot of documentation online.
From this link you can read:
attachInterrupt(function, period)
Calls a function at the specified interval in microseconds. Be careful about trying to execute too complicated of an interrupt at too high of a frequency, or the CPU may never enter the main loop and your program will 'lock up'. Note that you can optionally set the period with this function if you include a value in microseconds as the last parameter when you call it.
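If you prefer to stay in loop() without timer interrupts, a sketch of the usual millis()-based pattern for the two rates in the question (500 ms sensors, 1000 ms GPS); readSensors() and readGPS() are hypothetical wrappers around the code blocks above:
void readSensors();  // hypothetical: wraps the analogRead/print block above
void readGPS();      // hypothetical: wraps the feedgps()/gpsdump() block above

unsigned long lastSensors = 0;
unsigned long lastGPS = 0;

void loop() {
    unsigned long now = millis();
    if (now - lastSensors >= 500) {  // unsigned subtraction is rollover-safe
        lastSensors += 500;
        readSensors();
    }
    if (now - lastGPS >= 1000) {
        lastGPS += 1000;
        readGPS();
    }
}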
hr_time.h:
----------
#include <windows.h>
typedef struct {
LARGE_INTEGER start;
LARGE_INTEGER stop;
} stopWatch;
void startTimer( stopWatch *timer);
void stopTimer( stopWatch *timer);
double LIToSecs( LARGE_INTEGER * L);
double getElapsedTime( stopWatch *timer);
------------------------------------------------------
hr_time.c:
------------
#include <windows.h>
#ifndef hr_timer
#include "hr_time.h"
#define hr_timer
#endif
void startTimer( stopWatch *timer) {
QueryPerformanceCounter(&timer->start);
}
void stopTimer( stopWatch *timer) {
QueryPerformanceCounter(&timer->stop);
}
double LIToSecs( LARGE_INTEGER * L) {
LARGE_INTEGER frequency;
QueryPerformanceFrequency( &frequency );
return ((double)L->QuadPart /(double)frequency.QuadPart);
}
double getElapsedTime( stopWatch *timer) {
LARGE_INTEGER time;
time.QuadPart = timer->stop.QuadPart - timer->start.QuadPart;
return LIToSecs( &time) ;
}
#include "TIMER1.h"
void main()
{
/**
* how to make this task activate every 2 ms ??
*/
TASK( Task2ms )
{
stopWatch s;
startTimer(&s);
if( XCPEVENT_DAQ_OVERLOAD & Xcp_DoDaqForEvent_2msRstr() )
{
}
if( XCPEVENT_MISSING_DTO & Xcp_DoStimForEvent_2msRstr() )
{
}
stopTimer(&s);
getElapsedTime(&s);
}
}
If we take two readings, at TimeStart and then TimeEnd, the difference is the number of counts. Divide this by the frequency of the counter (a value expressed in ticks per second) and the result is the length of time that the timed code took to execute.
The above code is working fine, but I need some suggestions on how to call the function every 2 ms or 10 ms. Could anyone help me with this?
Declare a variable of type stopWatch, e.g. s. Then, before the code you wish to time, insert a startTimer(&s) call, and after the code, a stopTimer(&s) call. You can then call getElapsedTime(&s) to return the time in seconds, accurate to microseconds.
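Putting that together, a short usage sketch of the stopWatch API defined above (some_code_to_time() is a placeholder):
#include <stdio.h>
#include "hr_time.h"

extern void some_code_to_time(void); /* placeholder for the code under test */

int main(void)
{
    stopWatch s;
    startTimer(&s);
    some_code_to_time();
    stopTimer(&s);
    printf("elapsed: %f seconds\n", getElapsedTime(&s));
    return 0;
}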
My question: how do I call a specific function every 2 ms or 10 ms? Where do I modify the above code?
I modified the code and added a main function. Is it possible to call the functions (like Xcp_DoDaqForEvent_2msRstr() and Xcp_DoStimForEvent_2msRstr()) every 2 ms?
The code excerpts in your question show how to measure elapsed time to high resolution. They do not show how to schedule periodic execution. That would require a timer.
As you no doubt know, the standard Win32 timer is a low resolution timer. You need a high resolution timer. The most commonly used example of which is a multimedia timer. More recently these have been deprecated in favour of timer queues.
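A minimal sketch using a Win32 timer queue to invoke the task every 2 ms (whether the system actually honours a 2 ms period depends on the machine's timer resolution; the callback name is illustrative):
#include <windows.h>
#include <stdio.h>

/* runs on a thread-pool thread each time the timer fires */
VOID CALLBACK TimerRoutine(PVOID lpParam, BOOLEAN TimerOrWaitFired)
{
    /* call Xcp_DoDaqForEvent_2msRstr() / Xcp_DoStimForEvent_2msRstr() here */
    (void)lpParam;
    (void)TimerOrWaitFired;
}

int main(void)
{
    HANDLE hTimer = NULL;
    /* due time 2 ms, period 2 ms */
    if (!CreateTimerQueueTimer(&hTimer, NULL, TimerRoutine,
                               NULL, 2, 2, WT_EXECUTEDEFAULT))
    {
        printf("CreateTimerQueueTimer failed (%lu)\n", GetLastError());
        return 1;
    }
    Sleep(1000); /* let the timer run for a while */
    /* INVALID_HANDLE_VALUE waits for any running callback to finish */
    DeleteTimerQueueTimer(NULL, hTimer, INVALID_HANDLE_VALUE);
    return 0;
}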