High resolution timer in Erlang - timer

Does anybody know if it is possible to make a high resolution timer in Erlang?
According to documentation all timers and timeouts are measured in milliseconds.
There is a need to make a delay in microseconds. For example, instead of
timer:apply_after(MilliSec, Module, Function, Arguments).
something like
timer:apply_after(MicroSec, Module, Function, Arguments).

Indeed, all timer and timeout primitives are in milliseconds, including:
the receive ... after primitive (which is what the timer module eventually relies upon);
erlang:send_after/3 and erlang:start_timer/3, which rely on the same mechanism;
the driver_set_timer function for linked-in drivers.
Two methods could be considered to achieve a sub-millisecond timer:
use Erlang primitives to wait the truncated number of milliseconds and then adjust with a busy loop. Please note that erlang:now() does not necessarily return the real time, as it is guaranteed to be monotonically increasing and unique (and that guarantee is quite expensive); you should use os:timestamp() instead;
write native code that spawns a thread that will send a message when the timer fires. This could easily be implemented as a NIF, as in the sketch below.
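A minimal, untested sketch of such a NIF in C (the module name usleep_timer and the function apply_after_usec are invented for illustration; error handling and thread cleanup are omitted):

#include <erl_nif.h>
#include <unistd.h>

typedef struct {
    ErlNifPid     to;     /* process to notify when the timer fires */
    unsigned long usec;   /* delay in microseconds */
} usleep_req;

/* runs in a plain OS thread created below */
static void *usleep_thread(void *arg)
{
    usleep_req *req = arg;
    ErlNifEnv  *msg_env = enif_alloc_env();

    usleep(req->usec);
    enif_send(NULL, &req->to, msg_env, enif_make_atom(msg_env, "timer_fired"));

    enif_free_env(msg_env);
    enif_free(req);
    return NULL;
}

/* called from Erlang as usleep_timer:apply_after_usec(MicroSec) */
static ERL_NIF_TERM apply_after_usec(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
{
    usleep_req *req = enif_alloc(sizeof *req);
    ErlNifTid   tid;

    if (argc != 1 || !enif_get_ulong(env, argv[0], &req->usec)) {
        enif_free(req);
        return enif_make_badarg(env);
    }
    enif_self(env, &req->to);
    enif_thread_create("usleep_timer", &tid, usleep_thread, req, NULL);
    return enif_make_atom(env, "ok");
}

static ErlNifFunc nif_funcs[] = {
    {"apply_after_usec", 1, apply_after_usec, 0}
};

ERL_NIF_INIT(usleep_timer, nif_funcs, NULL, NULL, NULL, NULL)

The calling process receives the atom timer_fired roughly MicroSec microseconds later and can then apply the desired Module:Function(Arguments) itself.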

In practice, if you need a timer, you should use erlang:send_after/3 or erlang:start_timer/3 rather than the timer module. The timer module goes through a single timer server process; if the application creates too many timers, that process becomes a bottleneck and slows your application down.
erlang:send_after/3 and erlang:start_timer/3 differ in one respect:
erlang:start_timer/3 sends the message {timeout, TimerRef, Msg} to Dest after Time milliseconds, whereas erlang:send_after/3 sends just Msg to Dest after Time milliseconds.
The difference matters when you need to cancel a timer that may already have fired: with erlang:send_after/3 the message carries no TimerRef, which can complicate the receiving logic.

Related

STM32 RTOS timer interrupt and threads

I am working on a project where I need to execute 2 pieces of code off TIM interrupts. One of them has a slightly higher priority than the other, and both will be running on 2 different timers (of course not at the same time interval). Since the two timer frequencies are multiples of one another (one is 1 kHz, the other 8 kHz), both will sometimes trigger at the same time.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
However, looking at how CubeMX generates code, I am wondering whether this is even possible.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
If so, what would be a better strategy to achieve something close to what I need?
Cheers
Interrupts will trigger. But remember:
Their priority (not the RTOS priority; the two are unrelated) must be lower than the SVC interrupt priority if you want to use any ...FromISR RTOS functions.
They will not happen at the same time (as you have only one core).
I am working on a project where I need to execute 2 pieces of code off
TIM interrupts. One of them has a slightly higher priority than the
other, and both will be running on 2 different timers...
What exactly do you mean by "one of them has a [..] higher priority"? The HW timer events themselves simply occur whenever the timer underflows; I think you mean the handler code servicing the timeout events.
... (of course not at the same time interval). Since the two timer frequencies are multiples of one another (one is 1 kHz, the other 8 kHz), both will sometimes trigger at the same time.
In embedded realtime programming, you should never build on the assumption that IRQ events cannot occur at the same time: your ISR handlers may be suppressed at the moment a trigger event occurs, so even if two events trigger closely after each other, it may look to your software as if they had triggered at the same time. The solution is what your question points at: context priorities (of tasks (= "threads") and ISRs (= "interrupt handlers")) let you avoid the question of which event came earlier and control which event is treated first.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
You are free to deploy code to an RTOS task or to an ISR, but keep in mind that any ISR will have a higher priority than any task. Your TIM event will trigger an ISR (= interrupt context), but you can (and often should) use the ISR to send a notification (or event, or semaphore, or queue message) to a task in order to have the main part of the timer event processed at the lower priority of a task.
However, looking at how CubeMX generates code, I am wondering whether this is even possible.
CubeMX is not limiting you to use or not use tasks. The question is rather how far CubeMX will generate the code you need, and how much you have to add manually. Please note that you don't have to use the CubeMX feature to generate tasks through its configuration, but this can be done by your own C code, too.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
Yes, you are. Who started the timer has no bearing on the context in which the timer event is handled. In any case, the TIM will trigger its ISR (at the interrupt priority configured for that interrupt).
If you use the CubeHAL library, it implements the root of that ISR, checks which of the TIMs related to that ISR have elapsed, and invokes the callback you quoted. There you can insert your user code for the different TIM instances (like TIM2 in your case).
If so, what would be a better strategy to achieve something close to what I need?
Re-check your favourite textbook on RTOS and microcontrollers. Any SO answer cannot include all the theory to solve the problem properly.
Decide whether there will be any more urgent reaction on your system than treating the timeout events. If no, you may implement the timeout reaction in the ISR handler. If yes (or in cases of doubt), implement the ISR with a task notification that goes to a task where you do what the timeout event requires. This may be the task from where you started the timer, or another one.
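As an illustration of that strategy, here is a rough sketch assuming a FreeRTOS-based project (CMSIS-RTOS wrappers would look slightly different); the task name, handle and the TIM2 = 1 kHz mapping are assumptions, not taken from the question:

#include "main.h"        /* CubeMX-generated, pulls in the HAL and TIM handles */
#include "FreeRTOS.h"
#include "task.h"

static TaskHandle_t control_1khz_task;   /* filled in by xTaskCreate() elsewhere */

/* interrupt context: keep it short, just notify the task */
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    BaseType_t woken = pdFALSE;

    if (htim->Instance == TIM2) {                 /* the 1 kHz timer */
        vTaskNotifyGiveFromISR(control_1khz_task, &woken);
    }
    /* else if (htim->Instance == TIMx) { ... notify the 8 kHz task ... } */

    portYIELD_FROM_ISR(woken);   /* switch immediately if a higher-priority task was woken */
}

/* task context: the actual work runs at task priority, not interrupt priority */
static void control_1khz(void *arg)
{
    (void)arg;
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  /* block until the ISR notifies us */
        /* do the 1 kHz work here */
    }
}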

Contiki delay in seconds

I am trying to develop a contiki piece of code in which I need to wait for three seconds for a transducer output. Although this may sound quite un-transducer like, at development time, I want to simulate the behavior at human readable speeds and hence I need to set the timer to say 3 seconds.
The contiki timer library is pretty well documented and has a good series of examples which mention the creation, setting and resetting of the timer. However, if I have code like the following:
timer_set(&transducerOutputWaitTimer, 3 * CLOCK_SECOND);
bool if_blk_executed;
if(timer_expired(&transducerOutputWaitTimer)){
if_blk_executed = true;
//do something
}
if(if_blk_executed)
printf("Sunrise");
else
printf("It is not dawned yet");
Now the expiry is not immediately triggered after the timer is set. So the if block is never executed. Effectively it will never dawn.
Now there are two ways in which I can get the system to wait. One, by adding a while loop on the timer like this:
while(!timer_expired(&transducerOutputWaitTimer)){};
//do something
or
cpu_delay_usecs(mytimerdur_in_secs * 1000000);
//do something
I do not see that either approaches are elegant. While one wastes CPU cycles, the other enforces an unnecessarily large computation.
Is there any better way? I know that clock
Attached to this question are the following two:
How can I trigger one Contiki protothread from another? If I can do this, I can get the interrupt that causes the transducerOutput to invoke a process from which I can trigger the etimer and process events.
What exactly does the CPU delay mean? Does this mean that the entire CPU clock cycles get held back for the given duration? If yes, how are other processes running currently in the system affected?
Updates
Update 1: the while method did not work. The code possibly went into an infinite loop.
Update 2: I tried with a clock_delay(3*CLOCK_SECOND) approach after setting my timer. It worked. However, in this case, why do I need the timer method at all?
Update 3 (this changes the context of the answer, hence adding this comment as per the suggestion)
My timer needs to be used outside a process, in a different void function. In that case, I need to use the timer library rather than the etimer (which is specific to a process). So how do I get my method to wait for the desired time in such cases?
There are several Contiki abstractions built on top of the timer library - there is usually no need to use the struct timer structures directly. Instead, there are event timers (struct etimer), which play well together with Contiki processes, and callback timers (struct ctimer), both of which internally use struct timer. For a less RAM-hungry option with a more limited API there are second timers (struct stimer). Finally, there is a single real-time timer in the system (struct rtimer).
An example event timer usage: set and wait for 3 seconds:
static struct etimer timer;
etimer_set(&timer, 3 * CLOCK_SECOND);
PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
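For the Update 3 case (a wait started from a plain void function, outside any process), a callback timer is usually a better fit than struct timer. A rough sketch, with invented names:

#include "contiki.h"
#include "sys/ctimer.h"
#include <stdio.h>

static struct ctimer transducer_timer;

/* called by the ctimer library once the interval has elapsed */
static void transducer_timeout(void *ptr)
{
  printf("Sunrise\n");
}

/* callable from any ordinary C function, no process context required */
void start_transducer_wait(void)
{
  ctimer_set(&transducer_timer, 3 * CLOCK_SECOND, transducer_timeout, NULL);
}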
How can I trigger one Contiki protothread from another?
There is process_poll - see the process API documentation for more examples.
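A minimal sketch of that mechanism (process and function names invented): process_poll() can be called from an interrupt handler or from another process, and the polled protothread wakes up on PROCESS_EVENT_POLL.

#include "contiki.h"

PROCESS(transducer_process, "Transducer process");

PROCESS_THREAD(transducer_process, ev, data)
{
  PROCESS_BEGIN();
  while(1) {
    PROCESS_WAIT_EVENT_UNTIL(ev == PROCESS_EVENT_POLL);
    /* the transducer output is ready; set the etimer and continue here */
  }
  PROCESS_END();
}

/* in the interrupt handler, or in another process: */
void transducer_isr(void)
{
  process_poll(&transducer_process);
}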
What exactly does the CPU delay mean?
It's a form of busy waiting. Don't use the delay functions or other forms of busy waiting for delays longer than microseconds - it is not energy efficient, it does not allow other processes to run, and in the worst case the watchdog might expire.

How to Disable/Delay the watchDog Timer for a certain Task in an embedded system

I'm working on a project for an automotive system where we use the MPC5748 MCU. The application uses an RTOS based on AUTOSAR OS, and this MPC target supports two types of watchdog, software and hardware (the project uses the software WDT).
My mission is to fit an algorithm into this application. The development of the algorithm is done; the problem is that the task in which the algorithm runs is a 1 ms task, and the algorithm needs much more time than the time dedicated to this function.
I'm a newbie to the embedded world. When the algorithm's main function runs, the program resets itself, and this seems to be a timeout generated by the expiration of the watchdog.
My questions are:
Can I disable the watchdog timer for this specific function (it must not stay disabled; this is just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Can I add a sleep (< 1 ms) in the desired function in order to wait a little without affecting other tasks?
What are other options to try?
NB: This is a general question about watchdog timers, and any useful information will be very helpful to me. Sorry, but I can't share the code.
Can I disable the watchdog timer for this specific function (it must not stay disabled; this is just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
Let's forget that one - it is a really bad idea. If it is possible to defeat the watchdog, then it is possible to do so by error, and then the whole point of the watchdog is defeated. Apart from that, it's an XY question - a question about your proposed solution to a different problem - you should ask about the problem directly.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Yes, you need another task, but you should not add a "big delay"; that would probably be unnecessary and would certainly be a bad design. If the 1 ms task needs the result of the algorithm, the algorithm should run in a service task that is triggered by the 1 ms task and runs asynchronously to it; the service task then makes the results available to the 1 ms task when they are ready (by shared memory or message passing, perhaps). Alternatively, if the result is not specifically needed by the 1 ms task, the service task can take the necessary action independently of the 1 ms task.
There are many options, but essentially it seems that your task partitioning is inappropriate; your CAN Rx task should be responsible for receiving CAN messages only, and any action required in response to CAN messages deferred to one or more other tasks, perhaps fed from a message queue.
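To illustrate that split, here is a rough, OS-agnostic sketch in plain C; a volatile flag stands in for whatever AUTOSAR OS event or queue service the project actually provides, and every name and type below is invented for illustration:

#include <stdint.h>

typedef struct { uint8_t data[8]; } can_frame_t;   /* placeholder CAN frame   */
typedef struct { int32_t value; }  algo_result_t;  /* placeholder result type */

static volatile uint8_t algo_request;   /* set by the 1 ms task, cleared by the service task */
static volatile uint8_t algo_done;      /* set by the service task when the result is valid  */
static can_frame_t   algo_input;
static algo_result_t algo_output;

/* high-rate task: stays well under 1 ms, only hands the work over */
void task_1ms(const can_frame_t *rx_frame)
{
    if (rx_frame != 0 && !algo_request) {
        algo_input = *rx_frame;          /* copy the data for the algorithm */
        algo_request = 1;                /* defer the heavy work            */
    }
    if (algo_done) {
        algo_done = 0;
        /* act on algo_output here */
    }
}

/* lower-priority service task: may take several milliseconds per run */
void task_algorithm(void)
{
    if (algo_request) {
        /* algo_output = run_algorithm(&algo_input);  -- the long-running function */
        algo_request = 0;
        algo_done = 1;
    }
}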
What are other options to try?
Software design should not be a matter of trial and error - get the design right, implement the design. However you might consider whether 1ms is appropriate; is it possible that the period can be extended to encompass the worst case execution time without causing a failure to meet deadlines in general? If the answer is "no" then the algorithm does not belong in this task.
I don't think you can disable or delay the watchdog timer, and even if you could, that is not a good option to go for.
The problem, I think, is that the task you are calling has a 1 ms period, which is very little time to read CAN messages and then operate on them. I think the minimum task period should be 5 ms, and the optimal period 10 ms.
Can I disable the watchdog timer for this specific function (it must not stay disabled; this is just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
You should never disable the watchdog anywhere in your code.
It might not even be possible: on the MPC5x families you typically set up the watchdog once, and then, for safety reasons, all watchdog registers turn into read-only registers.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Ideally you should only service the watchdog from one single location in the program. Your CAN peripheral will be FlexCAN, which has a lot of available "mailboxes" for CAN messages. In most cases you shouldn't need to poll it; instead, a flag will be set when the desired message arrives.
So it isn't obvious to me why you would need a delay to wait for them. Simply do:
void the_task (void)
{
wdog_refresh();
... // do other things
if(can_message_available)
{
// do something with the message
}
... // do other things
}
rather than
// BAD:
while(!can_message_available)
; // do nothing
Even if you need to use the CAN as FIFO and poll it repeatedly, you would still use the same approach. You'd just have to ensure that the task runs often enough that there will never be an overflow in the FIFO buffer.

Best way to write a function that takes in a timeout (posix C)

So I have an embedded Linux device that is connected to a motor controller via a serial port. I am writing an interface library which makes a lot of nice generic functions which other programs will call. One of which is a function to run the program that is currently on the controller's flash disk:
int run_motor_program(int serial_fd, char *label, timeout);
The general pseudocode for this function is:
call write(serial_fd, "start program at `label`")
perform a couple read()'s / write()'s to check whether program has started on the motor controller
do
/* some stuff */
while(program is running AND timeout hasn't exceeded)
If the timeout exceeded, kill motor and return timeout error
The timeout in the above function definition is used in case something goes wrong while running the program on the motor controller. If the motor controller gets stuck in a longer loop than expected, I need the ability to stop the program.
The only ways I know for keeping track of a timeout are:
1) Calling gettimeofday() before and during the loop to see if elapsed time is > timeout value passed in
2) Calling clock_gettime() and basically doing the same as 1.
3) Using timer_create() before the loop and timer_getoverrun() in the loop to check if the time has elapsed (this seems to be the most elegant solution, but I can't seem to get timer_getoverrun() to work with SIGEV_NONE [I don't want to use signals]).
Which of these (or if anyone has any other suggestions) is the best way to handle including a timeout in a function? I really only need resolution down to the millisecond.
I tend to do option 1 myself. If subsecond granularity isn't needed, then I'll use time(). Typically the work is checking for I/O, so I also use select() with a timeout configured.
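A sketch of option 1/2 combined with select() (the helper names are invented; run_motor_program() itself is not shown): compute an absolute deadline once with clock_gettime(CLOCK_MONOTONIC), then cap every select() wait by the time remaining.

#include <sys/select.h>
#include <time.h>

/* milliseconds remaining until the deadline, or 0 if it has already passed */
static long ms_remaining(const struct timespec *deadline)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    long ms = (deadline->tv_sec - now.tv_sec) * 1000
            + (deadline->tv_nsec - now.tv_nsec) / 1000000;
    return ms > 0 ? ms : 0;
}

/* returns 1 when fd is readable, 0 on timeout, -1 on error */
int wait_readable_until(int fd, const struct timespec *deadline)
{
    for (;;) {
        long ms = ms_remaining(deadline);
        if (ms == 0)
            return 0;
        struct timeval tv = { ms / 1000, (ms % 1000) * 1000 };
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        int rc = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (rc > 0)
            return 1;
        if (rc < 0)
            return -1;   /* EINTR handling omitted for brevity */
        /* rc == 0: select() timed out, loop back and re-check the deadline */
    }
}

The caller would set the deadline once before its loop, e.g. clock_gettime(CLOCK_MONOTONIC, &deadline); deadline.tv_sec += timeout_seconds;.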
You could consider using one of the alarm signal mechanisms. The simplest and oldest is alarm(), which schedules a SIGALRM signal after the specified number of seconds. If you have a signal handler for SIGALRM, your process won't die but will allow you to recover from the error.
The primary limitation of alarm() is that it deals in whole seconds. There are a plethora of sub-second or fractional second alternatives. You should look at setitimer(). You might use nanosleep() but you'd probably also need to use threads since nanosleep() blocks the calling thread. That moves it up the complexity scale. There are calls like pthread_cond_timedwait() that could also be used in a threaded program.
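A minimal sketch of the alarm()/SIGALRM approach (the 5-second value and the names are arbitrary); a sub-second variant would use setitimer() with ITIMER_REAL in the same structure:

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t timed_out = 0;

static void on_alarm(int sig)
{
    (void)sig;
    timed_out = 1;       /* only touch a sig_atomic_t flag in the handler */
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_alarm;
    sa.sa_flags = 0;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    alarm(5);            /* deliver SIGALRM after 5 seconds */
    while (!timed_out) {
        /* talk to the motor controller here; blocking calls such as read()
           will return -1 with errno == EINTR when the signal arrives */
    }
    /* kill the motor and report the timeout */
    return 0;
}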
Your prototype int run_motor_program(int serial_fd, char *label, timeout); won't compile; you need to define the type of the timeout argument. You also need to decide what your argument means - whether it is an interval or duration of time (the number of seconds to run the motor for before timing out) or whether it is the end time (the Unix time after which the program must be stopped). There are various sub-second structures that you'll have to negotiate. Your choice is likely to be affected by which system call you use for implementing the timeout.
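For example (the exact types are just one reasonable choice, not from the original post), a duration-style prototype could be:

int run_motor_program(int serial_fd, const char *label, unsigned int timeout_ms);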

Why are nanosleep() and usleep() too slow?

I have a program that generates packets to send to a receiver. I need an efficient method of introducing a small delay between the sending of each packet so as not to overrun the receiver. I've tried usleep() and nanosleep() but they seem to be too slow. I've implemented a busy wait loop and had more success, but it's not the most efficient method, I know. I'm interested in anyone's experiences in trying to do what I'm doing. Do others find usleep() and nanosleep() to function well for this type of application?
Thanks,
Danny Llewallyn
The behaviour of the sleep functions for very small intervals is heavily dependent on the kernel version and configuration.
If you have a "tickless" kernel (CONFIG_NO_HZ) and high resolution timers, then you can expect the sleeps to be quite close to what you ask for.
Otherwise, you'll generally end up sleeping at the granularity of the timer interrupt. The timer interrupt interval is configurable (CONFIG_HZ) - 10ms, 4ms, 3.3ms and 1ms are the common choices.
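If the sleep calls are usable at all, one way to keep their inaccuracy from accumulating is to sleep until absolute deadlines with clock_nanosleep() rather than sleeping for relative intervals. A sketch (the 100 microsecond gap and send_packet() are placeholders; actual wake-up accuracy still depends on the kernel configuration described above):

#define _POSIX_C_SOURCE 200809L
#include <time.h>

#define GAP_NS 100000L                  /* 100 microseconds between packets */

extern void send_packet(int i);         /* placeholder for the real sender */

void send_paced(int count)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < count; i++) {
        send_packet(i);
        next.tv_nsec += GAP_NS;         /* advance the absolute deadline */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}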
Assuming that the higher level approaches other commenters have mentioned are not available to you, then a common approach in embedded/microcontroller land is to create a NOP-loop of the required length.
A NOP operation takes one CPU cycle, and in an embedded environment you typically know exactly what clock speed your processor is running at, so you can just use a simple for-loop containing _NOP(); if only a very short delay is required, don't bother with a loop and just add the required number of NOPs.
regTX = 0xFF; // Transmit FF on special register
// Wait three clock cycles
_NOP();
_NOP();
_NOP();
regTX = 0x00; // Transmit 00
This seems like a bad design. Ideally the receiver would queue any extra data it receives and then do its message processing in a separate thread. That way, it can handle bursts of data without relying on the sender to throttle its requests.
But perhaps such an approach is not practical if (for example) you do not have control of the receiver's code, or if this is an embedded application.
I can speak for Solaris here, in that it uses an OS timer to wake up sleep calls. By default the minimum wait time will be 10ms, regardless of what you specify in your usleep. However, you can use the parameters hires_tick = 1 (1ms wakeups) and hires_hz = in the /etc/system configuration file to increase the frequency of timer wake up calls.
Instead of working at the packet level, where you need to worry about things such as overrunning the receiver, why not use a TCP stream to transmit the data? Let TCP handle things like flow-rate control and packet retransmission.
If you've already got a lot invested in the packetized approach, you can always use a layer on top of TCP to extract the original packets of data from the TCP stream and feed these into your existing functions.
