How to print a string without disordered output in a VxWorks multitasking environment? (C)

void print_task(void)
{
    for (;;)
    {
        taskLock();
        printf("this is task %d\n", taskIdSelf());
        taskUnlock();
        taskDelay(0);
    }
}

void print_test(void)
{
    taskSpawn("t1", 100, 0, 0x10000, (FUNCPTR)print_task, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("t2", 100, 0, 0x10000, (FUNCPTR)print_task, 0,0,0,0,0,0,0,0,0,0);
}
The above code shows output like:
this is task this is task126738208 126672144 this is task this is
task 126712667214438208
this is task this is task 1266721441 26738208 this is task 126672144
this is task
What is the right way to print a string in a multitasking environment?

The problem lies in taskLock().
Try a semaphore or mutex instead.
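For illustration, here is a minimal sketch of that suggestion applied to the original example, using a VxWorks mutual-exclusion semaphore created before the tasks are spawned (the semMCreate() option shown is one reasonable choice, not the only one):

#include <vxWorks.h>
#include <semLib.h>
#include <taskLib.h>
#include <stdio.h>

static SEM_ID printMutex;   /* guards access to the console */

void print_task(void)
{
    for (;;)
    {
        semTake(printMutex, WAIT_FOREVER);   /* serialize the whole printf() call */
        printf("this is task %d\n", taskIdSelf());
        semGive(printMutex);
        taskDelay(0);
    }
}

void print_test(void)
{
    printMutex = semMCreate(SEM_Q_PRIORITY);
    taskSpawn("t1", 100, 0, 0x10000, (FUNCPTR)print_task, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("t2", 100, 0, 0x10000, (FUNCPTR)print_task, 0,0,0,0,0,0,0,0,0,0);
}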

The main idea when printing in a multi-threaded environment is to use a dedicated task that does the printing.
Normally in VxWorks there is a log task that receives log messages from all tasks in the system and prints them to the terminal from that one task only.
The main problem with the VxWorks logger mechanism is that the logger task runs at a very high priority and can change your system timing.
Therefore, you should create your own low-priority task that gets messages from the other tasks (using a message queue, shared memory protected by a mutex, …).
This gives you two great benefits:
First, all system printout is printed from one single task.
Second, and most important, the real-time tasks in the system do not lose time in the printf() function.
As you know, printf() is a very slow function that uses system calls and will certainly change the timing of your tasks depending on the debug information you add.
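To make that concrete, here is a rough sketch of such a low-priority print server using a VxWorks message queue. The queue name, message sizes and priorities here are just assumptions for illustration:

#include <vxWorks.h>
#include <msgQLib.h>
#include <taskLib.h>
#include <stdio.h>
#include <string.h>

#define PRINT_MSG_LEN 128
#define PRINT_MSG_MAX 32

static MSG_Q_ID printQ;   /* hypothetical queue holding complete log lines */

/* Low-priority server: the only task that actually calls printf(). */
static void print_server(void)
{
    char line[PRINT_MSG_LEN];

    for (;;)
    {
        if (msgQReceive(printQ, line, sizeof(line), WAIT_FOREVER) != ERROR)
            printf("%s", line);
    }
}

/* Called by real-time tasks: copy the string into the queue and return. */
void app_print(const char *text)
{
    msgQSend(printQ, (char *)text, strlen(text) + 1,
             NO_WAIT, MSG_PRI_NORMAL);   /* never block a real-time task */
}

void print_init(void)
{
    printQ = msgQCreate(PRINT_MSG_MAX, PRINT_MSG_LEN, MSG_Q_FIFO);
    taskSpawn("tPrint", 200, 0, 0x10000, (FUNCPTR)print_server,
              0,0,0,0,0,0,0,0,0,0);      /* 200 = low priority in VxWorks */
}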
Regarding taskLock():
taskLock() is a command to the kernel; it means "keep the current running task on the CPU for as long as it stays READY."
As you can see in the example code, taskUnlock() takes no arguments. The basic reason is that the kernel and interrupts can also re-enable preemption in the system.
There are many system calls that effectively unlock the task (and sometimes interrupt service routines do it as well); in particular, if the locked task blocks, as printf() can while waiting on the console driver, the preemption lock is lost and the output of the two tasks interleaves.

Rather than invent a home-brew solution, just use logMsg(). It is the canonical safe and sane way to print. Internally, it pushes your message onto a message queue; a separate task then pulls messages off the queue and prints them. By using logMsg() you gain the ability to print from ISRs, you don't get interleaved output from multiple tasks printing simultaneously, and so on.
For example:
printf("this is task %d\n", taskIdSelf());
becomes
logMsg("this is task %d\n", taskIdSelf(), 0,0,0,0,0,0);

Related

Effective software scheduling

For example, in code like the one below:
while (1) {
    task1();
    task2();
}
there should be cooperation between task1() and task2(), which are executed in round-robin fashion. However, if task1() is implemented as follows:
task1() {
    while (1);
}
Is there a way to build a scheduler that avoids monopolization of resources by task1(), relying only on software (for example, switching tasks every 500 ms)?
Assume only plain C/assembly is available, with no external scheduler/OS.

Is there a way to build a scheduler that avoids monopolization of resources by task1(), relying only on software (for example, switching tasks every 500 ms)?
Yes it's possible; but it probably isn't possible in plain C because (at a minimum) you'd need to switch between different stacks during task switches.
However, you should know that just switching tasks every 500 ms is very "not effective". Specifically; when one task has to wait for anything (time delay, data received from network, user input, data to be fetched from disk, a mutex, ...) you want to keep the CPU busy by switching to a different task (if there are any other tasks).
To do that; you either need fully asynchronous interfaces for everything (which C does not have), or you need to control all of the code (e.g. write an OS).
Of course the majority of task switches are caused by "task has to wait for something" or "something task was waiting for occurred"; and switching tasks every 500 ms is relatively irrelevant (it only matters for rare tasks that don't do any IO), and even when it is relevant it's a bad idea (in a "10 half finished jobs vs. 5 finished jobs and 5 unstarted jobs" way).
One easy way is to use:
a hardware timer,
a queue of tasks to run,
a scheduler,
a dispatcher,
and a pre-allocated stack for each task.
The timer interrupt handler triggers the scheduler. The scheduler determines which task is to run next and triggers the dispatcher.
The dispatcher performs a 'context' switch between the prior running task and the next task to run: it places the prior running task back into the queue, restores the 'context' of the next task to run, and then transfers execution control to the 'next' task.
The queue is composed of control blocks. Each control block contains a copy of all the registers and the address of the entry point for the task.
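As a rough illustration of these pieces in C (the actual register save/restore is architecture specific and appears here only as a hypothetical extern; sizes and names are assumptions):

#include <stdint.h>
#include <stddef.h>

#define MAX_TASKS   4
#define STACK_WORDS 256

typedef struct
{
    uint32_t *sp;                     /* saved stack pointer of the task      */
    void    (*entry)(void);           /* entry point stored in the block      */
    uint32_t  stack[STACK_WORDS];     /* pre-allocated stack for the task     */
} tcb_t;

static tcb_t  tasks[MAX_TASKS];       /* the queue of control blocks          */
static size_t task_count;             /* how many control blocks are in use   */
static size_t current;                /* index of the task that is running    */

/* Hypothetical, architecture-specific dispatcher: saves the registers of the
 * prior task through old_sp and restores those of the next task from new_sp
 * (usually a handful of assembly instructions). */
extern void context_switch(uint32_t **old_sp, uint32_t **new_sp);

/* Called from the hardware timer interrupt handler, e.g. every 500 ms. */
void scheduler_tick(void)
{
    size_t prev, next;

    if (task_count < 2)
        return;                       /* nothing to switch to */

    prev = current;
    next = (current + 1) % task_count;   /* simple round-robin policy */

    current = next;
    context_switch(&tasks[prev].sp, &tasks[next].sp);
}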

FreeRTOS simultaneous tasks

I want to create two tasks that run simultaneously in FreeRTOS. The first task will deal with the LED, the second task will monitor the temperature.
I have two questions:
Will this code create two tasks that run simultaneously?
How do I send the data between the tasks, for example: if the temperature is more than x degrees, turn on the LED?
#include <stdio.h>
#include "FreeRTOS.h"   /* FreeRTOS kernel headers */
#include "task.h"

void firstTask(void *pvParameters) {
    while (1) {
        puts("firstTask");
    }
}

void secondTask(void *pvParameters) {
    while (1) {
        puts("secondTask");
    }
}

int main(void) {
    /* STACK_SIZE and TASK_PRIORITY are assumed to be defined elsewhere */
    xTaskCreate(firstTask, "firstTask", STACK_SIZE, NULL, TASK_PRIORITY, NULL);
    xTaskCreate(secondTask, "secondTask", STACK_SIZE, NULL, TASK_PRIORITY, NULL);
    vTaskStartScheduler();
}
Tasks of equal priority are round-robin scheduled. This means that firstTask will run continuously until the end of its time-slice or until it is blocked, then secondTask will run for a complete timeslice or until it is blocked then back to firstTask repeating indefinitely.
On the face of it you have no blocking calls, but if you have implemented RTOS-aware buffered I/O for stdio, then puts() may well block when its buffer is full.
The tasks on a single core processor are never truly concurrent, but are scheduled to run as necessary depending on the scheduling algorithm. FreeRTOS is a priority-based preemptive scheduler.
Your example may or not behave as you intend, but both tasks will get CPU time and run in some fashion. It is probably largely academic as this is not a very practical or useful use of an RTOS.
Tasks never really run simultaneously - assuming you only have one core. In your case you are creating the tasks with the same priority and they never block (although they do output strings, probably in a way that is not thread-safe), so they will 'share' CPU time by time slicing. Each task will execute up to the next tick interrupt, at which point the scheduler will switch to the other.
I recommend reading the free pdf version of the FreeRTOS book for a gentle introduction into the basics https://www.freertos.org/Documentation/RTOS_book.html
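As a side note on the second question (passing the temperature to the LED task), one common approach is a FreeRTOS queue that the LED task blocks on. A rough sketch, assuming a reasonably recent FreeRTOS API and hypothetical readTemperature()/setLed() drivers:

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

/* Hypothetical hardware helpers - not part of FreeRTOS. */
extern int  readTemperature(void);
extern void setLed(int on);

#define TEMP_THRESHOLD 30

static QueueHandle_t tempQueue;

static void temperatureTask(void *pvParameters)
{
    for (;;) {
        int temp = readTemperature();
        xQueueSend(tempQueue, &temp, 0);          /* drop the reading if the queue is full */
        vTaskDelay(pdMS_TO_TICKS(1000));          /* sample roughly once per second */
    }
}

static void ledTask(void *pvParameters)
{
    int temp;

    for (;;) {
        if (xQueueReceive(tempQueue, &temp, portMAX_DELAY) == pdPASS) {
            setLed(temp > TEMP_THRESHOLD);        /* LED on above the threshold */
        }
    }
}

int main(void)
{
    tempQueue = xQueueCreate(8, sizeof(int));
    xTaskCreate(temperatureTask, "temp", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(ledTask, "led", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    vTaskStartScheduler();

    for (;;);   /* should never be reached */
}

Because ledTask blocks on the queue, it only runs when a new reading arrives, which also avoids the pure time-slicing behaviour described above.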

How do I decide between taskSpawn(), period(), and watchdogs?

We are using embedded C for the VxWorks real time operating system.
Currently, all of our UDP connections are started with taskSpawn().
This routine creates and activates a new task with a specified
priority and options and returns a system-assigned ID.
We specify the task size, a priority, and pass in an entry point.
These are continuous connections, and thus every entry point contains an infinite loop where we delay before the next iteration.
Then I discovered period().
period spawns a task to call a function periodically.
Period sounds like what we should be using instead, but I can't find any information on when you would prefer this function over TaskSpawn. Period also doesn't allow specifying the task size or the priority, so how is it decided? Is the task size dynamic? What will the priority be?
There are also watchdogs.
Any task may create a watchdog timer and use it to run a specified
routine in the context of the system-clock ISR, after a specified
delay.
Again, this seems to be in line with the goal of processing data at a particular rate. Which do I choose when a task must continuously execute code at the same rate (i.e. in real time)?
What are the differences between these 3 methods?
Here is a little clarification:
taskSpawn(..) creates a task that you're free to do anything you like with.
Watchdogs shall only be used to monitor time constraints. Remember that the watchdog's callback is executed within the context of the system clock ISR, which has many limitations (e.g. limited free stack size, never use blocking function calls in an ISR, ...). Additionally, executing "a lot of code" in the system clock ISR slows down your entire system.
period(..) is intended to be a helper for the VxWorks shell and not to be used by a program.
With that being said your only option is to use taskSpawn(..) unless you're doing some very simple stuff in which case period(..) might be ok to use.
If you need to do things cyclically in a specific time frame you might look at timers or taskDelay(..) in combination with sysClkRateSet(..).
Another option is to create two tasks: one that gives a semaphore after a specific time interval, and a "worker" task that waits for this semaphore before doing its work. With that approach you separate "timing" from "action", which has proved beneficial in my experience. You also might want to monitor the execution time of the "worker" task by using a watchdog.
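A minimal sketch of that timing/worker split; the task names, priorities and the roughly 100 ms period are chosen only for illustration:

#include <vxWorks.h>
#include <semLib.h>
#include <taskLib.h>
#include <sysLib.h>

static SEM_ID cycleSem;

/* Timing task: wakes up periodically and releases the worker. */
static void timing_task(void)
{
    for (;;)
    {
        taskDelay(sysClkRateGet() / 10);   /* ~100 ms at the current clock rate */
        semGive(cycleSem);
    }
}

/* Worker task: blocks until the timing task says "go". */
static void worker_task(void)
{
    for (;;)
    {
        semTake(cycleSem, WAIT_FOREVER);
        /* do the cyclic work here */
    }
}

void cyclic_init(void)
{
    cycleSem = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
    taskSpawn("tTiming", 90,  0, 0x4000, (FUNCPTR)timing_task, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("tWorker", 100, 0, 0x8000, (FUNCPTR)worker_task, 0,0,0,0,0,0,0,0,0,0);
}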

Deadlock of powerfail sequence during write to flash page

I'm currently working on an embedded project using an ARM Cortex M3 microcontroller with FreeRTOS as system OS. The code was written by a former colleague and sadly the project has some weird bugs which I have to find and fix as soon as possible.
Short description: The device is integrated into vehicles and sends some "special" data using an integrated modem to a remote server.
The main problem: Since the device is integrated into a vehicle, the power supply of the device can be lost at any time. Therefore the device stores some parts of the "special" data to two reserved flash pages. This code module is laid out as an eeprom emulation on two flash pages(for wear leveling and data transfer from one flash page to another).
The eeprom emulation works with so called "virtual addresses", where you can write data blocks of any size to the currently active/valid flash page and read it back by using those virtual addresses.
The former colleague implemented the eeprom emulation as multitasking module, where you can read/write to the flash pages from every task in the application. At first sight everything seems fine.
But my project manager told me that the device always loses some of the "special" data at moments when the power supply level in the vehicle drops to just a few volts and the device tries to save the data to flash.
Normally the power supply is about 10-18 volts, but if it goes down to under 7 volts, the device receives an interrupt called powerwarn and it triggers a task called powerfail task.
The powerfail task has the highest priority of all tasks and executes some callbacks where e.g. the modem is turned off and also where the "special" data is stored in the flash page.
I tried to understand the code and debugged for days/weeks and now I'm quite sure that I found the problem:
Within those callbacks which the powerfail task executes (called powerfail callbacks), there are RTOS calls that suspend other tasks. But unfortunately one of those suspended tasks could have an unfinished EE_WriteBlock() call from just before the powerwarn interrupt was received.
Therefore the powerfail task executes the callbacks, and in one of them there is an EE_WriteBlock() call where the task can't take the mutex in EE_WriteBlock(), since another task (which was suspended) has already taken it --> deadlock!
This is the routine to write data to flash:
uint16_t
EE_WriteBlock (EE_TypeDef *EE, uint16_t VirtAddress, const void *Data, uint16_t Size)
{
    .
    .
    xSemaphoreTakeRecursive(EE->rw_mutex, portMAX_DELAY);

    /* Write the variable virtual address and value in the EEPROM */
    .
    .
    .
    xSemaphoreGiveRecursive(EE->rw_mutex);

    return Status;
}
This is the RTOS specific code when 'xSemaphoreTakeRecursive()' is called:
portBASE_TYPE xQueueTakeMutexRecursive( xQueueHandle pxMutex, portTickType xBlockTime )
{
    portBASE_TYPE xReturn;

    /* Comments regarding mutual exclusion as per those within
       xQueueGiveMutexRecursive(). */

    traceTAKE_MUTEX_RECURSIVE( pxMutex );

    if( pxMutex->pxMutexHolder == xTaskGetCurrentTaskHandle() )
    {
        ( pxMutex->uxRecursiveCallCount )++;
        xReturn = pdPASS;
    }
    else
    {
        xReturn = xQueueGenericReceive( pxMutex, NULL, xBlockTime, pdFALSE );

        /* pdPASS will only be returned if we successfully obtained the mutex,
           we may have blocked to reach here. */
        if( xReturn == pdPASS )
        {
            ( pxMutex->uxRecursiveCallCount )++;
        }
        else
        {
            traceTAKE_MUTEX_RECURSIVE_FAILED( pxMutex );
        }
    }

    return xReturn;
}
My project manager is happy that I've found the bug but he also forces me to create a fix as quickly as possible, but what I really want is a rewrite of the code.
Maybe one of you might think, just avoid the suspension of the other tasks and you are done, but that is not a possible solution, since this could trigger another bug.
Does anybody have a quick solution/idea how I could fix this deadlock problem?
Maybe I could use xTaskGetCurrentTaskHandle() in EE_WriteBlock() to determine who has the ownership of the mutex and then give it if the task is not running anymore.
Thx
Writing flash, on many systems, requires interrupts to be disabled for the duration of the write so I'm not sure how powerFail can be made running while a write is in progress, but anyway:
Don't control access to the reserved flash pages directly with a mutex - use a blocking producer-consumer queue instead.
Delegate all those writes to one 'flashWriter' thread by queueing requests to it. If the threads requesting the writes require synchronous access, include an event or semaphore in the request struct that the requesting thread waits on after pushing its request. The flashWriter can signal it when done, (or after loading the struct with an error indication:).
There are variations on a theme - if all the write requesting threads need only synchronous access, maybe they can keep their own static request struct with their own semaphore and just queue up a pointer to it.
Use a producer-consumer queue class that allows a high-priority push at the head of the queue and, when powerfail runs, push a 'stopWriting' request at the front of the queue. The flashWriter will then complete any write operation in progress, pop the stopWriting request and so be instructed to suspend itself, (or you could use a 'stop' volatile boolean that the flashWriter checks every time before attempting to pop the queue).
That should prevent deadlock by removing the hard mutex lock from the flash write requests pushed in the other threads. It won't matter if other threads continue to queue up write requests - they will never be executed.
Edit: I've just had two more coffees and, thinking about this, the 'flashWriter' thread could easily become the 'FlashWriterAndPowerFail' thread:
You could arrange for your producer-consumer queue to return a pop() result of null if a volatile 'stop' boolean is set, no matter whether there were entries on the queue or no. In the 'FWAPF' thread, do a null-check after every pop() return and do the powerFail actions upon null or flashWrite actions if not.
When the powerFail interrupt occurs, set the stop bool and signal the 'count' semaphore in the queue to ensure that the FWAPF thread is made running if it's currently blocked on the queue.
That way, you don't need a separate 'powerFail' thread and stack - one thread can do the flashWrite and powerFail while still ensuring that there are no mutex deadlocks.
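To illustrate, here is a rough sketch of that single FlashWriterAndPowerFail task using FreeRTOS queue primitives. A reasonably recent FreeRTOS naming convention is assumed, and the request layout, queue depth and the helpers DoPowerFailActions()/EE_WriteBlockInternal() are hypothetical stand-ins for the existing code base:

#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"
#include "semphr.h"

/* Hypothetical helpers standing in for the existing code base. */
extern void DoPowerFailActions(void);
extern void EE_WriteBlockInternal(uint16_t virtAddress, const void *data, uint16_t size);

typedef struct {
    uint16_t          virtAddress;
    const void       *data;
    uint16_t          size;
    SemaphoreHandle_t done;       /* optional: given when the write completes */
} FlashRequest;

static QueueHandle_t       flashQueue;              /* queue of FlashRequest pointers */
static volatile BaseType_t powerFailing = pdFALSE;

void FlashWriter_Init(void)
{
    flashQueue = xQueueCreate(16, sizeof(FlashRequest *));
}

/* Called by any task that wants data written - no mutex is taken here. */
void EE_QueueWrite(FlashRequest *req)
{
    xQueueSend(flashQueue, &req, portMAX_DELAY);
}

/* Called from the powerwarn interrupt. */
void PowerWarn_ISR(void)
{
    BaseType_t woken = pdFALSE;
    FlashRequest *stop = NULL;                       /* NULL request = "stop writing" */

    powerFailing = pdTRUE;
    xQueueSendToFrontFromISR(flashQueue, &stop, &woken);
    portYIELD_FROM_ISR(woken);
}

/* The single "FlashWriterAndPowerFail" task. */
void FWAPF_Task(void *pvParameters)
{
    FlashRequest *req;

    for (;;) {
        xQueueReceive(flashQueue, &req, portMAX_DELAY);

        if (req == NULL || powerFailing == pdTRUE) {
            DoPowerFailActions();                    /* run the power-fail callbacks */
            vTaskSuspend(NULL);                      /* nothing more to do after power fail */
        } else {
            EE_WriteBlockInternal(req->virtAddress, req->data, req->size);
            if (req->done != NULL) {
                xSemaphoreGive(req->done);           /* wake a synchronous caller */
            }
        }
    }
}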

Queue in Semaphores - even possible?

I have the following C problem:
I have a hardware module that controls the SPI bus (as a master); let's call it SPI_control. It has private (static) read & write functions and "public" Init() and WriteRead() functions (for those who don't know, SPI is full duplex, i.e. a write always reads data on the bus). Now I need to make this accessible to higher-level modules that implement certain protocols. Let's call the upper modules TDM and AC. They run in two separate threads and one must not be interrupted by the other (when it's in the middle of a transaction, it first needs to complete).
So one possibility I thought of is to add a SPI_ENG in between those modules and SPI_control, which controls data flow and knows what can be interrupted and what can't; it would then forward data accordingly to SPI_control. But how can the independent tasks AC & TDM talk to SPI_control? Can I have them write to and read from some kind of semaphore-protected queue? How should this be done?
It's not exactly clear what you are trying to do, but a general solution is that your two processes (AC and TDM) can write data into their own separate output queues. A third process can act as a scheduler, read alternately from these queues, and write on to the hardware (SPI_control). This may be what you are looking for, since the queues will also act as elasticity buffers to handle bursty transactions.
This way you will not have to worry about AC getting preempted by TDM, and there should be no need for mutexes to synchronize the accesses to SPI_control.
Queues in the kernel are implemented using kernel semaphores: a queue is an array of memory guarded by a kernel semaphore.
What I would do is create a control message queue for the scheduler task. The system will then have three queues: two data output queues for the AC and TDM processes, and one control queue for the scheduler task. During system startup the scheduler task starts before AC and TDM and pends on its control queue. The AC and TDM processes should send a "data available" message to the scheduler task over the control queue whenever their queue goes non-empty (msgQNumMsgs()). On receiving this message, the scheduler task should read from the specific queue until it is empty and then pend on the control queue again. The last time I worked on VxWorks (2004) it had a flat memory model in which all global variables were accessible to all tasks. Is that still the case? If so, you can use global variables to pass queue IDs between tasks.
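A minimal sketch of that three-queue scheme with VxWorks message queues; the queue sizes, names and the spi_control_WriteRead() wrapper are assumptions:

#include <vxWorks.h>
#include <msgQLib.h>

/* Hypothetical wrapper around the real SPI driver transaction. */
extern void spi_control_WriteRead(char *buf, int len);

#define SPI_MSG_LEN 64

typedef enum { SRC_AC, SRC_TDM } spi_source_t;

static MSG_Q_ID acQ, tdmQ, ctrlQ;   /* queue IDs, globals in a flat memory model */

void spi_eng_init(void)
{
    acQ   = msgQCreate(16, SPI_MSG_LEN, MSG_Q_FIFO);
    tdmQ  = msgQCreate(16, SPI_MSG_LEN, MSG_Q_FIFO);
    ctrlQ = msgQCreate(32, sizeof(spi_source_t), MSG_Q_FIFO);
}

/* Called by AC or TDM: queue the data, then tell the scheduler it is there. */
void spi_submit(spi_source_t src, char *buf, int len)
{
    spi_source_t note = src;
    MSG_Q_ID q = (src == SRC_AC) ? acQ : tdmQ;

    msgQSend(q, buf, len, WAIT_FOREVER, MSG_PRI_NORMAL);
    msgQSend(ctrlQ, (char *)&note, sizeof(note), WAIT_FOREVER, MSG_PRI_NORMAL);
}

/* Scheduler task: drains whichever data queue was announced, one complete
 * transaction at a time, so AC and TDM transactions never interleave. */
void spi_scheduler(void)
{
    spi_source_t note;
    MSG_Q_ID     q;
    char         buf[SPI_MSG_LEN];
    int          len;

    for (;;)
    {
        msgQReceive(ctrlQ, (char *)&note, sizeof(note), WAIT_FOREVER);
        q = (note == SRC_AC) ? acQ : tdmQ;

        while (msgQNumMsgs(q) > 0)
        {
            len = msgQReceive(q, buf, sizeof(buf), NO_WAIT);
            if (len != ERROR)
                spi_control_WriteRead(buf, len);
        }
    }
}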
I would simply use a Mutex on each SPI operation:
SPI_Read()
{
    MutexGet(&spiMutex);
    ...
    MutexPut(&spiMutex);
}

SPI_Write()
{
    MutexGet(&spiMutex);
    ...
    MutexPut(&spiMutex);
}
Make sure that you initialize the mutex with priority inheritance enabled, so that the priority of the task holding the mutex is boosted whenever needed and unbounded priority inversion is avoided.
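If you are on VxWorks (as mentioned in another answer), that would look something like the following sketch; SEM_INVERSION_SAFE enables priority inheritance and must be combined with SEM_Q_PRIORITY:

#include <vxWorks.h>
#include <semLib.h>

static SEM_ID spiMutex;

void spi_mutex_init(void)
{
    /* mutual-exclusion semaphore with priority inheritance */
    spiMutex = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
}

void SPI_WriteRead(char *buf, int len)
{
    semTake(spiMutex, WAIT_FOREVER);
    /* ... perform the complete SPI transaction here ... */
    semGive(spiMutex);
}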
