Cron implementation in Erlang - timer

I need to provide a way to perform actions at specific dates/times repeatedly. Basically it should work like cron, and I'm trying to decide how to manage the execution times.
One solution could be to run a loop in each job/process and constantly check (every minute or second) whether the current time is the time we are waiting for.
Another solution could be to work with timers by waiting until the next execution. We calculate the difference between now and the next execution time, and supply that delay to the timer. But since the execution times should be manageable, we would need to have a way to interrupt that timer and create a new one, or we could simply kill that process and create a fresh one.
Does anyone have any thoughts on how this should be done properly, or are there any libraries for accomplishing this particular scenario?

Here are 4 libs you could take a look at:
https://github.com/erlware/erlcron
https://github.com/b3rnie/crontab
https://github.com/jeraymond/leader_cron
https://github.com/zhongwencool/ecron
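
For reference, the second approach from the question (sleep until the next execution, but allow the wait to be interrupted when the schedule changes) is roughly the mechanism such libraries build on. Below is a minimal sketch of that control flow, written in C with POSIX threads purely to illustrate the logic; next_run_time() and run_job() are hypothetical helpers:

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

/* Hypothetical helpers: absolute time of the next execution, and the job itself. */
extern struct timespec next_run_time(void);
extern void run_job(void);

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wakeup = PTHREAD_COND_INITIALIZER;
static bool schedule_changed = false;

/* Scheduler loop: wait until the next execution time, but wake up early
 * if someone signals that the schedule was edited. */
void *scheduler_loop(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    for (;;) {
        struct timespec deadline = next_run_time();
        int rc = 0;
        while (!schedule_changed && rc != ETIMEDOUT)
            rc = pthread_cond_timedwait(&wakeup, &lock, &deadline);
        if (rc == ETIMEDOUT)
            run_job();              /* deadline reached: fire the job (a real
                                       scheduler would hand it to a worker)  */
        schedule_changed = false;   /* in both cases, recompute the deadline */
    }
    return NULL;
}

/* Called by whoever edits the schedule: interrupts the current wait. */
void notify_schedule_changed(void)
{
    pthread_mutex_lock(&lock);
    schedule_changed = true;
    pthread_cond_signal(&wakeup);
    pthread_mutex_unlock(&lock);
}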

Related

Determining cause of delay/pause - kernel scheduler etc

System is an embedded Linux/Busybox core on a small embedded board with a web server (Boa) running.
We are seeing some high latency in responses from the web server - sometimes >500ms for no good reason, so I've been digging...
On liberally scattering debug prints throughout the code it seems to come down to the entire process just... stopping for a bit, in a way which I can only assume must be the process/thread being interrupted by another process.
Using print statements and clock_gettime() to measure the time taken to process a request, I can see the code reach the bottom of a while() loop (parsing input) and print something like "Time so far: 5ms", and then the next print at the top of the loop reads "Time so far: 350ms". All the code does between the bottom of the loop and the first print back at the top is a basic check along the lines of while(position < end); there is nothing complicated that could hold it up.
There's no IO blocking, the data it's parsing has all arrived already, and it's not making any external calls or wandering off into complex functions.
I then looked into whether the kernel scheduler (CFS in our case) might be holding things up. Adding calls to clock() (processor time rather than wall-clock time) and again calculating the differences, I can see that the wall-clock delay from one loop iteration to the next may run beyond 300ms, while the reported processor time used (which seems to have a ~10ms resolution) is more like 50ms.
So that suggests the task scheduler is holding the process up for hundreds of milliseconds at a time. I've checked the scheduler granularity and maximum delay and they're nowhere near 100ms; scheduler latency is set at 6ms, for example.
Any advice on what I can do now to try and track down the problem - identifying processes which could hog the CPU for >100ms, measuring/tracking what the scheduler is doing, etc.?
First you should try running your program under strace to see if there are any system calls holding things up.
If that is ambiguous or does not help, I would suggest you try profiling the kernel. You could try OProfile.
This will create a call graph that you can analyze to see what is happening.
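If you also want to keep instrumenting from inside the program, CLOCK_PROCESS_CPUTIME_ID usually gives much finer resolution than clock(). Here is a rough sketch of the wall-clock vs. CPU-time comparison described in the question (position/end mirror the parsing loop; a large gap between the two deltas across one iteration points at preemption rather than your own code):

#include <stdio.h>
#include <time.h>

static double ts_diff_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

void parse_loop(const char *position, const char *end)
{
    struct timespec wall0, cpu0, wall1, cpu1;

    while (position < end) {
        clock_gettime(CLOCK_MONOTONIC, &wall0);
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu0);

        /* ... parse one chunk of input, advancing 'position' ... */
        position++;

        clock_gettime(CLOCK_MONOTONIC, &wall1);
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu1);

        double wall_ms = ts_diff_ms(wall0, wall1);
        double cpu_ms  = ts_diff_ms(cpu0, cpu1);
        if (wall_ms - cpu_ms > 10.0)   /* big gap: the process was off the CPU */
            printf("preempted for ~%.1f ms (wall %.1f ms, cpu %.1f ms)\n",
                   wall_ms - cpu_ms, wall_ms, cpu_ms);
    }
}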

How do I decide between taskSpawn(), period(), and watchdogs?

We are using embedded C for the VxWorks real time operating system.
Currently, all of our UDP connections are started with taskSpawn().
This routine creates and activates a new task with a specified
priority and options and returns a system-assigned ID.
We specify the task size, a priority, and pass in an entry point.
These are continuous connections, and thus every entry point contains an infinite loop where we delay before the next iteration.
Then I discovered period().
period spawns a task to call a function periodically.
period() sounds like what we should be using instead, but I can't find any information on when you would prefer it over taskSpawn(). period() also doesn't allow specifying the task size or the priority, so how are those decided? Is the task size dynamic? What will the priority be?
There are also watchdogs.
Any task may create a watchdog timer and use it to run a specified
routine in the context of the system-clock ISR, after a specified
delay.
Again, this seems to be in line with the goal of processing data at a particular rate. Which do I choose when a task must continuously execute code at the same rate (i.e. in real time)?
What are the differences between these 3 methods?
Here is a little clarification:
taskSpawn(..) creates a task with which you're free to do anything you like.
Watchdogs should only be used to monitor time constraints. Remember that the watchdog callback is executed within the context of the system clock ISR, which has many limitations (e.g. limited free stack size, never use blocking function calls in an ISR, ...). Additionally, executing "a lot of code" in the system clock ISR slows down your entire system.
period(..) is intended to be a helper for the VxWorks shell and not to be used by a program.
With that being said your only option is to use taskSpawn(..) unless you're doing some very simple stuff in which case period(..) might be ok to use.
If you need to do things cyclically in a specific time frame you might look at timers or taskDelay(..) in combination with sysClkRateSet(..).
Another option is to create two tasks: one that gives a semaphore after a specific time interval, and a "worker" task that waits for this semaphore before doing its work. With that approach you separate "timing" from "action", which has proved beneficial in my experience. You might also want to monitor the execution time of the "worker" task using a watchdog.
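A minimal sketch of that two-task pattern using the standard VxWorks calls; the names, priorities, stack sizes, and the ~100 ms period here are just placeholders:

#include <vxWorks.h>
#include <taskLib.h>
#include <semLib.h>
#include <sysLib.h>

static SEM_ID periodSem;

/* "Timing" task: gives the semaphore once per period. */
static void timingTask(void)
{
    int delayTicks = sysClkRateGet() / 10;   /* ~100 ms; raise the clock rate via
                                                sysClkRateSet() for finer periods */
    for (;;) {
        taskDelay(delayTicks);
        semGive(periodSem);
    }
}

/* "Worker" task: blocks until the timing task releases it, then does the work. */
static void workerTask(void)
{
    for (;;) {
        semTake(periodSem, WAIT_FOREVER);
        /* ... do the periodic work here ... */
    }
}

void startPeriodicWork(void)
{
    periodSem = semBCreate(SEM_Q_PRIORITY, SEM_EMPTY);
    taskSpawn("tTiming", 90, 0, 4096, (FUNCPTR)timingTask,
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    taskSpawn("tWorker", 100, 0, 8192, (FUNCPTR)workerTask,
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
}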

How to Disable/Delay the watchDog Timer for a certain Task in an embedded system

I'm working on a project for an automotive system where we use the MPC5748 MCU. The application uses an RTOS based on AUTOSAR OS, and this MPC target supports two types of watchdog, software and hardware (the project uses the software WDT).
My mission is to fit an algorithm into this application. The development of the algorithm is done; the problem is that the task in which the algorithm runs is a 1ms task, and the algorithm needs much more time than the time dedicated to this function.
I'm a newbie to the embedded world. In the algorithm's main function, the program resets itself, and this seems to be a timeout generated by the expiration of the watchdog.
My questions are:
Can I disable the watchdog timer for this specific function (not permanently, just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1ms task, since we are receiving CAN commands.
Can I add a sleep (<1ms) in the desired function in order to wait a little bit without affecting other tasks?
What other options are there to try?
NB: This is a general question about watchdog timers, and any useful information will be much appreciated. Sorry, but I can't share the code.
Can I disable the watchdog timer for this specific function (not permanently, just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
Let's forget that one - it is a really bad idea. If it is possible to defeat the watchdog, then it is possible to do it by error, and then the whole point of the watchdog is defeated. Apart from that, it's an XY question - a question about your proposed solution to a different problem - you should ask about the problem directly.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1ms task, since we are receiving CAN commands.
Yes, you need another task, but you should not add a "big delay"; that is probably unnecessary and certainly bad design. If the 1ms task needs the result of the algorithm, the algorithm should run in a service task that is triggered by the 1ms task but runs asynchronously to it; the service task then makes the results available to the 1ms task when they are ready (by shared memory or message passing, perhaps). Alternatively, if the result is not specifically needed by the 1ms task, the service task could take the necessary action independently of the 1ms task.
There are many options, but essentially it seems that your task partitioning is inappropriate; your CAN Rx task should be responsible for receiving CAN messages only, and any action required in response to CAN messages deferred to one or more other tasks, perhaps fed from a message queue.
What other options are there to try?
Software design should not be a matter of trial and error - get the design right, then implement it. However, you might consider whether 1ms is appropriate: is it possible that the period could be extended to encompass the worst-case execution time without causing a failure to meet deadlines in general? If the answer is "no", then the algorithm does not belong in this task.
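A rough sketch of that partitioning on an OSEK/AUTOSAR-style OS is shown below. The task names, data types, and the helpers can_read() and run_algorithm() are made up for illustration; the tasks and their priorities would be declared in the OS configuration, and in a real system the shared data would need proper protection (e.g. GetResource/ReleaseResource):

#include "Os.h"   /* OSEK/AUTOSAR OS header; the exact name varies by vendor */

/* Illustrative types and helpers; the real project has its own. */
typedef struct { unsigned char data[8]; } CanFrame;
typedef struct { int value; } AlgoResult;
extern int can_read(CanFrame *out);                  /* hypothetical CAN read  */
extern AlgoResult run_algorithm(const CanFrame *in); /* hypothetical long job  */

DeclareTask(Task_1ms);
DeclareTask(Task_Algo);

static volatile CanFrame   latest_input;    /* written by the 1 ms task       */
static volatile AlgoResult latest_result;   /* written by the algorithm task  */
static volatile int        result_valid = 0;
static volatile int        algo_busy    = 0;

/* 1 ms task: keeps meeting its deadline (and feeding the watchdog); it never
 * runs the slow algorithm itself, it only hands the data over. */
TASK(Task_1ms)
{
    CanFrame frame;
    if (can_read(&frame)) {
        latest_input = frame;
        if (!algo_busy) {
            algo_busy = 1;
            ActivateTask(Task_Algo);        /* kick the lower-priority task   */
        }
    }
    if (result_valid) {
        /* ... use latest_result in this cycle's control action ... */
    }
    TerminateTask();
}

/* Lower-priority service task: may take many milliseconds; it is preempted by
 * the 1 ms task, so the 1 ms deadlines are still met. */
TASK(Task_Algo)
{
    CanFrame input = latest_input;          /* snapshot of the latest input   */
    latest_result  = run_algorithm(&input);
    result_valid   = 1;
    algo_busy      = 0;
    TerminateTask();
}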
I don't think you can disable/delay the watchdog timer, and even if you could, that's not a good option to go for.
The problem, I think, is that the task you are calling has a 1ms period, which is very little time to read CAN messages and then operate on them. In my opinion the minimum task period should be 5ms, and the optimal would be 10ms.
Can I disable the watchdog timer for this specific function (not permanently, just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
You should never disable the watchdog anywhere in your code.
It might not even be possible, on the MPC5x families you typically set up the watchdog once, and then for safety reasons all watchdog registers turn to read-only registers.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1ms task, since we are receiving CAN commands.
Ideally you should only service the watchdog from one single location in the program. Your CAN peripheral will be FlexCAN, which has a lot of available "mailboxes" for CAN messages. In most cases you shouldn't need to poll it; a flag will be set when the desired message arrives.
So it isn't obvious to me why you would need a delay to wait for them. Simply do:
void the_task(void)
{
    wdog_refresh();

    ... // do other things

    if (can_message_available)
    {
        // do something with the message
    }

    ... // do other things
}

rather than

// BAD:
while (!can_message_available)
    ; // do nothing

How to generate requests at a "requests/sec" target rate?

Say I have a target of x requests/sec that I want to generate continuously. My goal is to start these requests at roughly the same interval, rather than just generating x requests and then waiting until 1 second has elapsed and repeating the whole thing over and over again. I'm not making any assumptions about these requests; some might take much longer than others, which is why my scheduler thread will not perform the requests (or wait for them to finish), but hand them over to a sufficiently sized thread pool.
Now if x is in the range of hundreds or less, I might get by with .net's Timers or Thread.Sleep and checking actually elapsed time using Stopwatch.
But if I want to go into the thousands or tens of thousands, I could try going with a high-resolution timer to maintain my "roughly the same interval" approach. But this would (in most programming environments on a general-purpose OS) imply some amount of hand-coding with spin waiting and so forth, and I'm not sure it's worthwhile to take this route.
Extending the initial approach, I could instead use a Timer to sleep and do y requests on each Timer event, monitor the actual requests per second achieved, and fine-tune y at runtime. The effect is somewhere in between "fire all x requests and wait until 1 second has elapsed since the start", which I'm trying not to do, and "wait more or less exactly 1/x seconds before starting the next request".
The latter seems like a good compromise, but is there anything that's easier while still spreading the requests somewhat evenly over time? This must have been implemented hundreds of times by different people, but I can't find good references on the issue.
So what's the easiest way to implement this?
One way to do it:
First find (good luck on Windows) or implement a usleep or nanosleep function. As a first step, this could be (on .net) a simple Thread.SpinWait() / Stopwatch.Elapsed > x combo. If you want to get fancier, do Thread.Sleep() if the time span is large enough and only do the fine-tuning using Thread.SpinWait().
That done, just take the inverse of the rate and you have the time interval you need to sleep between each event. Your basic loop, which you do on one dedicated thread, then goes
Fire event
Sleep(sleepTime)
Then every, say, 250ms (or more for faster rates), check the actually achieved rate and adjust the sleepTime interval, perhaps with some smoothing to dampen wild temporary swings, like this
newRate = max(1, sleepTime / targetRate * actualRate)
sleepTime = 0.3 * sleepTime + 0.7 * newRate
This adjusts to what is actually going on in your program and on your system, and makes up for the time spent to invoke the event callback, and whatever the callback is doing on that same thread etc. Without this, you will probably not be able to get high accuracy.
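The same loop sketched in plain C for illustration (the question is .NET, so this only mirrors the structure; fire_event() is a placeholder for handing the request to the thread pool, and the smoothing constants are the ones from above):

#include <time.h>

extern void fire_event(void);   /* placeholder: hand the request to the pool */

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void sleep_seconds(double s)
{
    struct timespec ts = { (time_t)s, (long)((s - (time_t)s) * 1e9) };
    nanosleep(&ts, NULL);
}

void generate(double targetRate)             /* target requests per second */
{
    double sleepTime = 1.0 / targetRate;     /* initial guess: inverse of the rate */
    double windowStart = now_seconds();
    long   fired = 0;

    for (;;) {
        fire_event();
        fired++;
        sleep_seconds(sleepTime);

        double elapsed = now_seconds() - windowStart;
        if (elapsed >= 0.25) {               /* re-tune roughly every 250 ms */
            double actualRate = fired / elapsed;
            double newSleep = sleepTime * actualRate / targetRate;
            sleepTime = 0.3 * sleepTime + 0.7 * newSleep;   /* smoothing */
            windowStart = now_seconds();
            fired = 0;
        }
    }
}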
Needless to say, if your rate is so high that you cannot use Sleep but always have to spin, one core will be spinning continuously. The good news: We get ever more cores on our machines, so one core matters less and less :) More serious though, as you mentioned in the comment, if your program does actual work, your event generator will have less time (and need) to waste cycles.
Check out https://github.com/EugenDueck/EventCannon for a proof of concept implementation in .net. It's implemented roughly as described above and done as a library, so you can embed that in your program if you use .net.

Task scheduling - controlling the execution of a function

In an embedded project, we're supposed to implement a task scheduler with different priorities. The implementation is in C and runs on an Arduino device.
Now that we're in the research phase, one question has popped up, but nobody has enough experience to give a definite answer:
How is it possible to control the execution time of a function? How do we keep track of time before the function returns so we can interrupt it for example when a time-out occurs?
One suggestion was to use fork(), but since the Arduino does not run an operating system, there's no kernel to handle threads or processes. Or am I wrong?
Any input will be helpful, thanks a bunch,
You need a timer. All non-cooperative multitasking systems (i.e. those which don't depend on each function to say "you can interrupt me now" all the time) use a timer to stop the execution after some time (say 100ms).
In the interrupt handler, check if there is another "thread" which can run and switch context.
A pretty simple implementation is a "ready list": Whenever a task or thread could do some work, add it to the ready list.
When the timer fires, add the current task at the end of the list and make the head of the list the current task.
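The ready-list bookkeeping described above might look roughly like this; the actual register save/restore that performs the context switch is architecture specific and is only marked with a comment here:

#include <stddef.h>

typedef struct Task {
    struct Task *next;
    /* saved stack pointer / registers would live here */
} Task;

static Task *ready_head = NULL;   /* front of the ready list        */
static Task *ready_tail = NULL;
static Task *current    = NULL;   /* the task that is running now   */

static void ready_list_append(Task *t)
{
    t->next = NULL;
    if (ready_tail) ready_tail->next = t; else ready_head = t;
    ready_tail = t;
}

/* Called from the timer interrupt: put the running task at the end of the
 * list and make the head of the list the current task. */
void timer_tick(void)
{
    if (ready_head == NULL)
        return;                       /* nothing else is ready: keep running */
    ready_list_append(current);
    current = ready_head;
    ready_head = ready_head->next;
    if (ready_head == NULL)
        ready_tail = NULL;
    /* ... switch stacks/registers to 'current' here (architecture specific) ... */
}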
In an embedded system, a task scheduler is the core of an operating system (usually an RTOS), so you are being asked to implement one, not to use one.
A simple example of how such a scheduler works is described in Jean Labrosse's book MicroC/OS-II: The Real-Time Kernel. It describes a complete RTOS kernel with scheduling and IPC. For your project you can take the description of this kernel and implement your own (or you could use the included source code).
Such a kernel works by scheduling at certain OS calls and on a timer interrupt. A context switch involves storing the processor registers for one task and replacing them with the registers for another. Because this register save/restore includes the stack pointer and program counter, control is switched between threads.
It may be that a simpler form of scheduling (rather than preemptive scheduling) is called for. One method is to implement task functions that run to completion, store their own state where necessary, and are implemented as state machines, and then have a simple loop that polls a timer and calls each 'task' function according to a schedule table that holds the periodicity of each task and a pointer to its function, so that, say, one function is called every second while another is called every millisecond.
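A minimal sketch of that schedule-table approach, Arduino-style, using millis() as the time base; the two task functions are just placeholders:

#include <Arduino.h>   /* for millis(); on bare C you would read a hardware timer */

typedef struct {
    unsigned long period_ms;   /* how often the task should run        */
    unsigned long last_run;    /* when it last ran                     */
    void (*task)(void);        /* run-to-completion task function      */
} ScheduleEntry;

static void task_1000ms(void) { /* ... e.g. toggle an LED ... */ }
static void task_1ms(void)    { /* ... e.g. sample an input ... */ }

static ScheduleEntry schedule[] = {
    { 1000, 0, task_1000ms },
    {    1, 0, task_1ms    },
};

void setup(void) { }

void loop(void)
{
    unsigned long now = millis();
    for (unsigned i = 0; i < sizeof schedule / sizeof schedule[0]; i++) {
        if (now - schedule[i].last_run >= schedule[i].period_ms) {
            schedule[i].last_run = now;
            schedule[i].task();   /* must return quickly (run to completion) */
        }
    }
}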
