I'm using the FreeRTOS port for the PIC32 microcontroller on the PIC32MX starter kit. I was just playing with tasks, but the tasks aren't context switching. Here are my main config settings:
#define configMAX_PRIORITIES ( ( unsigned portBASE_TYPE ) 5 )
#define configKERNEL_INTERRUPT_PRIORITY 0x01
#define configMAX_SYSCALL_INTERRUPT_PRIORITY 0x03
#define configTICK_RATE_HZ ( ( portTickType ) 100 )
Now I have two tasks defined which blink two LEDs. Both have a priority of 4 (the highest). Under normal operation the LEDs should blink alternately, each for 100 ticks. But this doesn't happen. The second LED blinks for 100 ticks and then control goes to the general exception handler. Why does this happen? It seems like there is no scheduling at all.
FreeRTOS is a priority-based pre-emptive scheduler; tasks of equal priority that do not yield processor time will be round-robin scheduled. Relying on round-robin scheduling is seldom suitable for real-time tasks, and depending on the configured time slice, it may mess up your timing. Time-slicing may even be disabled.
Your tasks must enter the Blocked state waiting on some event (such as elapsed time) to allow each other to run as intended.
That said, entering the exception handler, rather than simply one task starving another or not running with the intended timing, is a different matter. For that you will need to post additional information, though your first step should be to deploy your debugger.
The absolute first thing to check is your "tick" interrupt. Often interrupts are not enabled, timers aren't set up right, or clocks are not configured properly in the #pragma directives that set up the PIC32, and all of those issues manifest themselves first as a lack of a "tick".
Not getting the tick interrupt is the #1 cause of tasks not switching: the tick is where the normal pre-emptive task switching happens.
Assuming you're using the off-the-shelf demo in MPLAB, set a breakpoint in the void vPortIncrementTick( void ) function (around line 177 in FreeRTOS\Source\portable\MPLAB\PIC32MX\port.c) and run your code. If the breakpoint is hit, your timer tick is working.
Are you sure both tasks were successfully created and the scheduler has been started?
Something like the following code would do the job:
xTaskCreate( yourFirstTask, "firstTask", STACK_SIZE, NULL, TASK_PRIORITY, NULL );
xTaskCreate( yourSecondTask, "secondTask", STACK_SIZE, NULL, TASK_PRIORITY, NULL );
vTaskStartScheduler();
You can also add an application tick hook to see whether the tick interrupt occurs correctly or whether there is a problem with the tick timer.
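For example, a minimal sketch of such a hook, assuming configUSE_TICK_HOOK is set to 1 in FreeRTOSConfig.h (ulTickCount is a hypothetical diagnostic variable to watch in the debugger):

volatile unsigned long ulTickCount = 0;

void vApplicationTickHook( void )
{
    ulTickCount++; /* if this stays at 0, the tick interrupt never fires */
}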
There are standard demo tasks that just blink LEDs in the FreeRTOS/Demo/Common/Minimal/flash.c source file. The tasks created in that file are included in the standard PIC32 demo application (which targets the Microchip Explorer16 board).
In its very simplest form, a task that just toggles an LED every 500 ms would look like this:
/* Standard task prototype, the parameter is not used in this case. */
void vADummyTask( void *pvParameters )
{
    const portTickType xDelayTime = 500 / portTICK_RATE_MS;

    for( ;; )
    {
        ToggleLED();
        vTaskDelay( xDelayTime );
    }
}
Do you have a round-robin scheduler? Are your tasks sleeping for any length of time, or just yielding (or busy waiting)?
A very common gotcha in embedded OSes is that the scheduler will frequently not attempt to schedule multiple processes of the same priority fairly. That is, once A yields, if A is runnable it may get scheduled again immediately, even if B has not had any CPU time for ages. This is very counterintuitive if you're used to desktop OSes, which go to a lot of effort to do fair scheduling (or at least, it was to me).
If you're running into this, you'll want to make sure that your tasks look like this:
for (;;)
{
    led(on); sleep(delay);
    led(off); sleep(delay);
}
...to ensure that the task actually stops being runnable between blinks. It won't work if it looks like this:
for (;;)
{
    led(on);
    led(off);
}
(Also, as a general rule, you want to use normal priority rather than high priority unless you know you'll really need it --- if you starve a system task the system can behave oddly or crash.)
Context
I'm writing some libraries to manage an internet protocol through GPRS. Some parts of this communication (done through UART) are rather slow (some can take more than 30 seconds), because the module has to connect through GPRS.
First I wrote a driver library to control the module and manage TCP/IP connections. This library worked with blocking functions; for example, a function like Init_GPRS_connection() could take several seconds to finish. I have been made aware that this is bad practice, because now I have to implement a watchdog timer, and this kind of function is not friendly with the short timeouts that watchdogs have (I cannot kick the timer before it expires).
What I have thought
I need to rewrite part of my libraries to be watchdog-friendly. For this purpose I have thought of this scheme: I need functions with a state machine inside, which poll data acquired through UART interrupts to advance through the state machine, so that I can write code like:
GPRS_typef Init_GPRS_connection(){
    switch(state){ //state would be a global variable holding the current state of the state machine
    .... //here would be all the states of the state machine
    case end:
        state = 0;
        return Done;
    }
}
while(Init_GPRS_connection() != Done){
    Do_stuff(); //Like kick the watchdog
}
But I see a few problems with this solution:
This is a less user-friendly implementation; the user has to be careful using this driver library because extra lines of code will always be necessary (which partly defeats the purpose of using functions).
If, for some reason, the module doesn't answer at some point, the code will get stuck in the state machine while the watchdog keeps being kicked outside this function, even though the code is stuck in a loop. This partly defeats the purpose of using a watchdog timer.
My question
What kind of implementation should I use to make a user- and watchdog-friendly driver library? How do other driver libraries manage this?
Extra information
All this in the context of embedded systems
I would like to implement the watchdog kicking action outside the driver's functions
Given where you are, and assuming you do not want too much upheaval to your project to "do it properly", what you might do is add a variable watchdog timeout extension, such that you set a counter that is decremented in a timer interrupt, and if the counter is not zero, the watchdog is reset.
That way you are not allowing the timer interrupt to reset the watchdog indefinitely while your main thread is stuck, but you can extend the watchdog immediately before executing any blocking code, essentially setting a timeout for that operation.
So you might have (pseudocode):
static volatile uint8_t wdg_repeat_count = 0 ;

void extendWatchdog( uint8_t repeat ) { wdg_repeat_count = repeat ; }

void timerISR( void )
{
    if( wdg_repeat_count > 0 )
    {
        resetWatchdog() ;
        wdg_repeat_count-- ;
    }
}
Then you can either:
extendWatchdog( CONNECTION_INIT_WDG_TIMEOUT ) ;
while(Init_GPRS_connection() != Done){
    Do_stuff(); //Like kick the watchdog
}
or continue to use your existing non-state-machine based solution:
extendWatchdog( CONNECTION_INIT_WDG_TIMEOUT ) ;
bool connected = Init_GPRS_connection() ;
if( connected ) ...
The idea is compatible with both what you have and what you propose; it simply allows you to extend the watchdog timeout beyond that dictated by the hardware.
I suggest a uint8_t, because it prevents a lazy developer simply setting a large value and effectively disabling the watchdog protection, and it is likely to be atomic and so shareable between the main and interrupt context.
All that said, it would clearly have been better to design in your integrity infrastructure from the outset at the architectural level, rather than trying to bolt it on after the event. For example, if you were using an RTOS, you might reset the watchdog in a low-priority task that, if starved, would cause a watchdog expiry; that "watchdog task" could be used to monitor the other tasks to ensure they are scheduling as expected.
Without an RTOS you might have a "big-loop" architecture with each "task" implemented as a state machine. In your example you seem to have missed the point of a state machine: "initialising connection" should be a single state of a high-level state machine, and the internals of that state may themselves be a state machine (hierarchical state machines). So your entire system would be a single master state machine in the main loop, with the watchdog reset once per loop iteration. Nothing in any sub-state should block, to keep the loop time low and deterministic. That is how, for example, the Arduino framework's loop() function should work (when done properly, which is unfortunately seldom the case in examples). To understand how to implement a real-time deterministic state-machine architecture, you could do worse than look at the work of Miro Samek. The framework described therein is available via his company.
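A minimal sketch of that big-loop arrangement (the task_*() names are illustrative; each call performs one non-blocking step of its own state machine):

for( ;; )
{
    task_gprs();       /* one non-blocking step of the GPRS state machine */
    task_sensors();    /* one non-blocking step of another "task" */
    resetWatchdog();   /* the watchdog is kicked exactly once per loop iteration */
}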
You should make your library non-blocking, but other than that, you should not worry about the watchdog at all. The watchdog management should be left to the user.
To allow the user to do other work while your library is waiting, you can use the following approaches (a sketch of the resulting interface follows the list):
1. Provide a function to feed data into your library (e.g. receive()). The user should call this function when data is available, for example from the interrupt. As this function can be called from the interrupt, make sure it does not do heavy processing. Typically, you would just buffer the data and process it later (step 2).
2. Provide a function that the user calls periodically, which updates the state of your library and does any other housekeeping tasks (like timeout detection). Typically, this function is called run(), process(), tick() or something along these lines. The user would call this function in their main loop or from a dedicated RTOS task.
3. Provide a way to tell the user the state of your library. You can do it either with some sort of getState() function or using a callback, or both. Based on this information, the user can implement their own state machine to do things on connect, disconnect, etc.
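As a hedged sketch of what that interface might look like (all names here, such as gprs_receive, gprs_process, gprs_get_state and kick_watchdog, are hypothetical, not from any real library):

#include <stdint.h>

typedef enum { GPRS_IDLE, GPRS_CONNECTING, GPRS_CONNECTED, GPRS_ERROR } GprsState;

void gprs_receive(uint8_t byte);   /* call from the UART RX interrupt; just buffers the byte */
void gprs_process(void);           /* call periodically; advances the internal state machine */
GprsState gprs_get_state(void);    /* lets the user poll the current state */

extern void kick_watchdog(void);   /* hypothetical user-side watchdog kick */

/* The user's main loop then keeps watchdog management to itself: */
void user_main_loop(void)
{
    for (;;)
    {
        gprs_process();            /* housekeeping and timeout detection */
        kick_watchdog();
        if (gprs_get_state() == GPRS_CONNECTED)
        {
            /* react to the connection */
        }
    }
}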
I am working on a project where I need to execute two pieces of code off TIM interrupts. One of them has a slightly higher priority than the other, and both will be running on two different timers (of course not at the same time interval). Because the two timer rates are multiples of one another (one is 1 kHz, one is 8 kHz), both will trigger at the same time.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
However, looking at how cubeMX is generating code, I am even wondering if this is possible.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
If so, what would be a better strategy to achieve something close to what I need?
Cheers
Interrupts will trigger. But remember:
Their priority (not the RTOS priority, as they are unrelated) must be lower than the SVC interrupt if you want to use any ...FromISR RTOS functions (a sketch follows these points)
They will not happen at the same time (as you have only one core)
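For example, a minimal sketch, under the assumption of an STM32 with CubeMX-generated FreeRTOS where the syscall threshold (configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY) defaults to 5: the TIM interrupt must be configured at a numerically greater-or-equal (i.e. less urgent) preemption priority before its ISR may call any ...FromISR API:

/* in the init code, before starting the timer: */
HAL_NVIC_SetPriority(TIM2_IRQn, 5, 0);  /* numerically >= the syscall threshold, i.e. less urgent */
HAL_NVIC_EnableIRQ(TIM2_IRQn);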
I am working on a project where I need to execute two pieces of code off TIM interrupts. One of them has a slightly higher priority than the other, and both will be running on two different timers...
What exactly do you mean by "one of them has a [..] higher priority"? The HW timer events will occur simply when the timer underflows occur. I think you mean the handler code servicing the timeout events.
... (of course not at the same time interval). Because the two timer rates are multiples of one another (one is 1 kHz, one is 8 kHz), both will trigger at the same time.
In embedded real-time programming, you should never build on the assumption that IRQ events do not occur at the same time: your ISR handlers may be suppressed at the moment a trigger event occurs. This way, even if two concurrent events trigger closely after each other, it may look to your software as if they had triggered at the same time. The solution is what your question points at: context priorities (of tasks (= "threads") and ISRs (= "interrupt handlers")) let you avoid the question of which event came earlier, and control which event to treat first.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
You are free to deploy code to an RTOS task or to an ISR, but keep in mind that any ISR will have a higher priority than any task. Your TIM event will trigger an ISR (= interrupt context), but you can (and often should) use the ISR to send a notification (or event, or semaphore, or queue message) to a task, in order to have the main part of the timer event processed at the lower priority of a task.
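A minimal sketch of that deferral pattern, assuming FreeRTOS and the usual STM32 HAL headers (xAlgoTaskHandle and vAlgoTask are illustrative names; the handle is created elsewhere with xTaskCreate()):

extern TaskHandle_t xAlgoTaskHandle;  /* created elsewhere; name is illustrative */

void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    if (htim->Instance == TIM2)
    {
        /* only notify here; the heavy work runs in the task below */
        vTaskNotifyGiveFromISR(xAlgoTaskHandle, &xHigherPriorityTaskWoken);
    }
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

void vAlgoTask(void *pvParameters)
{
    for (;;)
    {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  /* block until the ISR notifies */
        /* ... process the timer event at task priority ... */
    }
}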
However, looking at how cubeMX is generating code, I am even wondering if this is possible.
CubeMX is not limiting you to use or not use tasks. The question is rather how far CubeMX will generate the code you need, and how much you have to add manually. Please note that you don't have to use the CubeMX feature to generate tasks through its configuration, but this can be done by your own C code, too.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
Yes, you are. The question who started the timer is not relevant to the context type/selection triggered by the timer. In any case, the TIM will trigger its ISR (at the interrupt priority configured for that interrupt).
If you use the CubeHAL library, it will implement the root of that ISR, check which of the TIMs related to that ISR have elapsed, and invoke the code you printed. Here, you can insert your user code to the different TIM instances (like TIM2 in your case).
If so, what would be a better strategy to achieve something close to what I need?
Re-check your favourite textbook on RTOSes and microcontrollers; no single SO answer can include all the theory needed to solve the problem properly.
Decide whether there will be any more urgent reaction on your system than treating the timeout events. If no, you may implement the timeout reaction in the ISR handler. If yes (or in cases of doubt), implement the ISR with a task notification that goes to a task where you do what the timeout event requires. This may be the task from where you started the timer, or another one.
I am trying to develop a piece of Contiki code in which I need to wait three seconds for a transducer output. Although this may sound quite un-transducer-like, at development time I want to simulate the behaviour at human-readable speeds, and hence I need to set the timer to, say, 3 seconds.
The contiki timer library is pretty well documented and has a good series of examples which mention the creation, setting and resetting of the timer. However, if I have code like the following:
timer_set(&transducerOutputWaitTimer, 3 * CLOCK_SECOND);
bool if_blk_executed = false;
if(timer_expired(&transducerOutputWaitTimer)){
    if_blk_executed = true;
    //do something
}
if(if_blk_executed)
    printf("Sunrise");
else
    printf("It has not dawned yet");
Now the expiry is not immediately triggered after the timer is set. So the if block is never executed. Effectively it will never dawn.
Now there are two ways in which I can get the system to wait. One, by adding a while loop on the timer like this:
while(!timer_expired(&transducerOutputWaitTimer)){};
//do something
or
cpu_delay_usecs(mytimerdur_in_secs * 1000000);
//do something
I do not see either approach as elegant. One wastes CPU cycles, while the other enforces an unnecessarily large computation.
Is there any better way? I know that clock
Attached to this question are the following two:
How can I trigger one Contiki protothread from another? If I can do this, I can get the interrupt that causes the transducerOutput to invoke a process from which I can trigger the etimer and process events.
What exactly does the CPU delay mean? Does this mean that the entire CPU clock cycles get held back for the given duration? If yes, how are other processes running currently in the system affected?
Updates
Update 1: the while method did not work. The code possibly went into an infinite loop.
Update 2: I tried with a clock_delay(3*CLOCK_SECOND) approach after setting my timer. It worked. However, in this case, why do I need the timer method at all?
Update 3 (This changes the context of the answer, hence adding this comment as per the suggestion)
My timer needs to be used outside a process, in a different void function. In that case, I need to use the timer library rather than the etimer (which is specific to a process). So how do I get my method to wait for the desired time in such cases?
There are several Contiki abstractions built on top of the timer library - there is usually no need to use the struct timer structures directly. Instead, there are event timers (struct etimer), which play well together with Contiki processes, and callback timers (struct ctimer), both of which internally use struct timer. For a less RAM-hungry option with a more limited API there are second timers (struct stimer). Finally, there is a single real-time timer in the system (struct rtimer).
An example event timer usage: set and wait for 3 seconds:
static struct etimer timer;
etimer_set(&timer, 3 * CLOCK_SECOND);
PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
How can I trigger one Contiki protothread from another?
There is process_poll - see the process API documentation for more examples.
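A minimal sketch, assuming a protothread declared elsewhere with PROCESS(other_process, "other"):

/* from an interrupt handler or another process: */
process_poll(&other_process);

/* and inside other_process's PROCESS_THREAD, wait for the poll: */
PROCESS_WAIT_EVENT_UNTIL(ev == PROCESS_EVENT_POLL);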
What exactly does the CPU delay mean?
It's a form of busy waiting. Don't use the delay functions or other forms of busy waiting for delays longer than microseconds: it is not energy efficient, it does not allow other processes to run, and in the worst case the watchdog might expire.
I'm working on a project for an automotive system where we use the MPC5748 MCU. The application uses an RTOS based on AUTOSAR OS, and this MPC target supports two types of watchdog, software and hardware (the software WDT is used here).
My mission is to fit an algorithm into this application. The development of the algorithm is done; the problem is that the task in which the algorithm runs is a 1 ms task, and the algorithm needs much more time than the time dedicated to this function.
I'm a newbie to the embedded world. By the way, in the algorithm's main function the program resets itself, and this seems to be a timeout generated by the expiration of the watchdog.
My questions are:
Can I disable the watchdog timer for this specific function (it must not be disabled in production, just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Can I add a sleep (< 1 ms) in the desired function in order to wait a little bit without affecting other tasks?
What are other options to try?
NB: This is a general problem with watchdog timers, and any useful information will be very helpful to me. Sorry, but I can't share the code.
Can I disable the watchdog timer for this specific function (it must not be disabled in production, just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
Let's forget that one - it is a really bad idea. If it is possible to defeat the watchdog, then it is possible to do it by error, and then the whole point of the watchdog is defeated. Apart from that, it's an XY question - a question about your proposed solution to a different problem - you should ask about the problem directly.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Yes, you need another task, but you should not add a "big delay"; that is probably unnecessary and certainly a bad design. If the 1 ms task needs the result of the algorithm, then the algorithm should run in a service task triggered by the 1 ms task and running asynchronously to it; the service task then makes the results available to the 1 ms task when ready (by shared memory or message passing, perhaps). Alternatively, if the result is not specifically needed by the 1 ms task, the service task could take the necessary action independently of the 1 ms task.
There are many options, but essentially it seems that your task partitioning is inappropriate; your CAN Rx task should be responsible for receiving CAN messages only, with any action required in response to those messages deferred to one or more other tasks, perhaps fed from a message queue, as sketched below.
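A hedged sketch of that partitioning (the types, queue and function names are illustrative pseudocode, not a specific AUTOSAR or RTOS API; can_rx_available, can_read, queue_post, queue_pop and run_algorithm are assumed to exist elsewhere):

typedef struct { unsigned char data[8]; unsigned char len; } CanMsg;  /* illustrative */

void Task_1ms(void)                      /* periodic CAN Rx task */
{
    CanMsg msg;
    while (can_rx_available())           /* poll the CAN mailbox/FIFO */
    {
        can_read(&msg);
        queue_post(&algo_queue, &msg);   /* hand off; the algorithm does not run here */
    }
}

void Task_Algo(void)                     /* lower-priority service task */
{
    CanMsg msg;
    if (queue_pop(&algo_queue, &msg))
    {
        run_algorithm(&msg);             /* the long-running work happens here */
    }
}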
What are other options to try?
Software design should not be a matter of trial and error - get the design right, then implement it. However, you might consider whether 1 ms is appropriate: is it possible that the period can be extended to encompass the worst-case execution time without causing a failure to meet deadlines in general? If the answer is "no", then the algorithm does not belong in this task.
I don't think you can disable/delay the watchdog timer, and even if you could, that's not a good option to go for.
The problem, I think, is that the task you are calling has a 1 ms period, which is very little time to read CAN messages and then operate on them. The minimum task time, I think, should be 5 ms, and the optimal time 10 ms.
Can I disable the watchdog timer for this specific function (it must not be disabled in production, just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
You should never disable the watchdog anywhere in your code.
It might not even be possible, on the MPC5x families you typically set up the watchdog once, and then for safety reasons all watchdog registers turn to read-only registers.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Ideally you should only service the watchdog from one single location in the program. Your CAN peripheral will be FlexCAN, which has a lot of available "mailboxes" for CAN messages. In most cases, you shouldn't need to poll it; instead, a flag will be set when the desired message arrives.
So it isn't obvious to me why you would need a delay to wait for them. Simply do:
void the_task (void)
{
    wdog_refresh();

    ... // do other things

    if(can_message_available)
    {
        // do something with the message
    }

    ... // do other things
}
rather than
// BAD:
while(!can_message_available)
    ; // do nothing
Even if you need to use the CAN as FIFO and poll it repeatedly, you would still use the same approach. You'd just have to ensure that the task runs often enough that there will never be an overflow in the FIFO buffer.
I am writing a Gif animator in C.
I have two threads running in parallel. The first allows the user to alter the speed of the animation. The second draws the current frame, and then calls Sleep(Constant * 100 / CurrentSpeed), where CurrentSpeed is a percentage, ranging from 1 to 200.
The problem is that if you quickly change the speed from 100%, to 1%, and then back to the first, the second thread will execute the following:
Sleep(Constant * 100)
This will draw frame A, wait many seconds (even though the speed was changed by the user), and only then draw B and the following frames at the default speed.
It seems to me that Sleep is a poor choice of mine in this case. What can I do to solve this problem?
EDIT:
The code I currently have (Simplified):
while (1) {
    InvalidateRect(Handle, &ImageRect, FALSE);
    if (shouldDispose) {
        break;
    }
    if (DelayTime)
        Sleep(DelayTime * 100 / CurrentSpeed);
    SelectNextImage();
}
Instead of calling Sleep() with the desired frame rate, why don't you call it with a constant interval of 1 ms, for example, and use a variable as a counter?
For example, let C be a global variable (counter) which is loaded with a number of 'ticks' of 1ms. Then, write the loop:
while(1) { //Main loop of the player thread
    if (C > 0) C--;
    if (C == 0) nextframe(); //if counter reaches 0, load next frame.
    Sleep(1);
}
The control thread would load C with a number of 1 ms ticks (i.e. the frame delay), and the player thread will never be stopped for more than 1 ms. The use of 1 ms as the base rate is arbitrary: use the minimum time that allows you the maximum frame rate, in order to load the CPU as little as possible.
EDIT
After some hot comments (arguing is good, after all), I'd like to point out that this solution is sub-optimal: it doesn't use any OS mechanism for signalling threads, or any other API for preventing the thread from wasting CPU time. The solution shown here is generic: it may be used on any system (even on embedded systems without any running OS). But above all, it is based on the original code posted by the user who asked the question: using Sleep(), how can I achieve my purpose? I give him my humble answer. Anyway, I encourage other people to write sample code using the appropriate API for achieving the same goal. With no hard feelings, special thanks to Martin James.
Find a synchronisation API on your OS that allows a wait with a timeout, e.g. WaitForSingleObject() on Windows. If you want to change the delay, change the timeout and signal the event upon which the WFSO is waiting, to make it return 'early' and restart the wait with the new timeout.
Polling with Sleep(1) loops is rarely justifiable.
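A minimal Win32 sketch of that approach (hSpeedChanged, g_delayMs, DrawNextFrame and SetSpeed are illustrative names; the event is assumed to be an auto-reset event created elsewhere with CreateEvent()):

#include <windows.h>

HANDLE hSpeedChanged;            /* auto-reset event, created elsewhere */
volatile DWORD g_delayMs = 100;  /* updated by the control thread */

DWORD WINAPI PlayerThread(LPVOID arg)
{
    for (;;)
    {
        DWORD r = WaitForSingleObject(hSpeedChanged, g_delayMs);
        if (r == WAIT_TIMEOUT)
            DrawNextFrame();     /* the full delay elapsed normally */
        /* on WAIT_OBJECT_0 the speed changed; loop around and re-read g_delayMs */
    }
    return 0;
}

/* the control thread changes the speed like this: */
void SetSpeed(DWORD newDelayMs)
{
    g_delayMs = newDelayMs;
    SetEvent(hSpeedChanged);     /* wake the player so the new delay takes effect now */
}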
Create a waitable timer. When you set the timer, you can specify a callback function that will run in the setting thread's context. This means you can do it with two threads, but it actually works just fine with only a single thread as well.
The main advantage of a waitable timer, however, is that it is more accurate and more reliable than Sleep. A timer is conceptually quite different from Sleep insofar as Sleep only gives up control; the scheduler marks the thread as ready to run when the time is up, the next time the scheduler happens to run anyway. It doesn't do anything beyond that, which means the thread will eventually be scheduled to run again, like any other thread that is ready.
A thread that is waiting on a timer (or another waitable object) causes the scheduler to run when the timer expires, and has its priority temporarily boosted. It therefore runs not only more reliably and more closely to the desired time, but also earlier than all other threads with the same base priority. This does not give a realtime guarantee, but at least a sort of "soft guarantee".
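A minimal sketch of a waitable timer with a completion routine (TimerProc, DrawNextFrame and RunTimerLoop are illustrative names; the callback runs in the setting thread's context during the alertable wait):

#include <windows.h>

VOID CALLBACK TimerProc(LPVOID lpArg, DWORD dwLow, DWORD dwHigh)
{
    DrawNextFrame();                       /* hypothetical frame-drawing function */
}

void RunTimerLoop(LONG periodMs)
{
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);
    LARGE_INTEGER due;
    due.QuadPart = -(LONGLONG)periodMs * 10000;  /* relative due time in 100 ns units */

    SetWaitableTimer(hTimer, &due, periodMs, TimerProc, NULL, FALSE);
    for (;;)
        SleepEx(INFINITE, TRUE);           /* alertable wait so the APC can run */
}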
If you still want to use Sleep, use SleepEx instead, which you can alert either by queueing an APC or by calling the undocumented NtAlertThread function.
In any case, Sleep is troublesome not only because it is unreliable, but also because it is based on the granularity of the system-wide timer. You can, of course, set that as low as 1 ms (or less on some systems), but doing so causes a lot of unnecessary interrupts.