I have had good experience programming the STM32F4x bare metal; however, I am now trying to shift my code to FreeRTOS, and as a first step I wanted to see if I can use heap_4.c for memory allocation instead of the standard C malloc and free calls, to better manage memory.
However, what I observed is that using these calls disables my interrupts and never turns them back on. Hence, anything that makes use of interrupts stops working; everything that has nothing to do with interrupts works fine. Not even the SysTick timer interrupt handler is being triggered.
So the question is: how can we make use of pvPortMalloc and vPortFree in bare-metal code, considering that all the other peripherals rely on their interrupts and SysTick is used for simple time delays? When using these calls, I could not see any prints happening inside SysTick because the SysTick handler was never called.
Here I would like to point out that I am not calling pvPortMalloc or vPortFree from any interrupt context at all, so that is not the issue.
I have read through a few discussions, and if I understand correctly, any call into the FreeRTOS scheduler to suspend tasks should have no effect because there are no tasks at all. So I expect heap_4.c to work just fine on bare metal too, as long as we stay away from using these calls in ISR context; but apparently it just disables interrupts and seems to never turn them back on.
I hope to get the opinion of the experts here on using pvPortMalloc and vPortFree on bare metal without the rest of FreeRTOS.
Best regards,
Junaid
I think if you replace the vTaskSuspendAll() and xTaskResumeAll() calls with simply disabling / enabling interrupts it should work fine. In fact, if your interrupts do not touch the allocated memory you might not even need to do this; you could simply comment them out. Suspend and resume are quite complex functions that can attempt to yield control to other tasks if required.
I suspect the reason interrupts are not getting re-enabled is that either taskEXIT_CRITICAL() is not defined correctly (portENABLE_INTERRUPTS) or the uxCriticalNesting is greater than one when trying to re-enable interrupts (enter critical called more times than exit critical).
However you will probably find the standard malloc and free are better if you are not using FreeRTOS.
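To make the suggestion above concrete, here is a minimal sketch (not an official FreeRTOS recipe) of bare-metal stand-ins for the two scheduler calls heap_4.c makes. It assumes a Cortex-M part with CMSIS available, that heap_4.c is built with the usual FreeRTOS headers and a FreeRTOSConfig.h, and that tasks.c is not linked in so these definitions do not clash with the real ones:

#include "stm32f4xx.h"   /* CMSIS device header, for __disable_irq()/__enable_irq() */
#include "FreeRTOS.h"
#include "task.h"

/* Bare-metal replacements: just mask interrupts for the duration of the
 * allocation, with no critical-nesting bookkeeping involved. */
void vTaskSuspendAll( void )
{
    __disable_irq();    /* nothing running in interrupt context may touch the heap meanwhile */
}

BaseType_t xTaskResumeAll( void )
{
    __enable_irq();
    return pdFALSE;     /* no context switch is ever pending here */
}

If nothing that runs in interrupt context ever touches the heap, the two bodies could even be left empty, as noted above.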
(Hopefully) simplified version of my problem:
Say I'm using every GPIO port of my Cortex-M4 MCU to do the exact same thing, like reading the port on a pin-level change. I've simplified my code so it's port-agnostic, but I'm having trouble finding a nice solution for re-using the same interrupt handler function.
Is there a way I can use the same interrupt handler function while still having a way of finding out which port triggered the interrupt? Ideally something O(1) that doesn't scale with how many ports the board has.
Should I just have different handlers for each port that call the same function that takes in a "port" parameter? (Best I could come up with so far)
So like:
void worker(uint32_t gpio_id) {
    /* work goes here */
}
void GPIOA_IRQ_Handler(void) { worker(GPIOA_id); }
void GPIOB_IRQ_Handler(void) { worker(GPIOB_id); }
void GPIOC_IRQ_Handler(void) { worker(GPIOC_id); }
...
My actual problem:
I'm learning about and fiddling around with FreeRTOS, creating simple drivers for the debug/stdio UART, some buttons on my dev board, and so on. So far I've been writing each driver for a specific peripheral/port.
Now I'm looking to make an I2C driver without knowing in advance which interface I'm going to use (there are 10 I2C ports on my MCU), and potentially to allow the driver code to be used on multiple ports at the same time. I'd know all the ports used at compile time, though.
I have a pretty good idea of how to make the driver port-agnostic, except I'm getting hung up on figuring out a nice way to find which port triggered the interrupt using a single handler function (besides cycling through every port's interrupt status register, since that's O(n)).
Like I said, the best I came up with is to not have a single handler and instead have different handlers in the vector table that all call the same "worker" function, passing a "port" parameter. This method clutters up the driver code, but it is O(1) (unless you take code complexity into account).
Am I going about this all wrong and should I just "keep it simple stupid" and implement the driver for the port(s)/use case I will actually need, in the simplest way possible? (I don't even have plans to use multiple I2C buses, I just thought it'd be interesting to implement.)
Thank you in advance; hopefully the post isn't too ambiguous or long (I feel like it's pretty long, sorry).
Is there a way I can use the same interrupt handler function while having a method of finding which port triggered the interrupt?
Only if the different interrupts are cleared in the same way and your application doesn't care which pin triggered the interrupt. That's quite an unlikely use case.
Should I just have different handlers for each port that call the same function that takes in a "port" parameter?
Yeah, that's usually what I do. From each ISR you pass on the parameters that are unique to that specific interrupt. Important: the shared function should be static inline! A fast ISR is significantly more important than saving a tiny bit of flash by re-using the same function, so in the machine code you'll end up with four different ISRs, each with the worker function inlined. (You might want to disable inlining in debug builds, though.)
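As a rough illustration of that pattern (the register addresses and handler names below are made up for the sketch; use the ones from your device header and startup file):

#include <stdint.h>

/* Illustrative register definitions - replace with your device header's. */
#define GPIOA_INT_STATUS  (*(volatile uint32_t *)0x40010010u)
#define GPIOA_INT_CLEAR   (*(volatile uint32_t *)0x40010014u)
#define GPIOB_INT_STATUS  (*(volatile uint32_t *)0x40010410u)
#define GPIOB_INT_CLEAR   (*(volatile uint32_t *)0x40010414u)

/* Shared worker, forced inline so each ISR stays as fast as a dedicated one. */
static inline void gpio_worker(uint32_t port_id, uint32_t pending_pins)
{
    /* handle the pins flagged in pending_pins for this port */
}

void GPIOA_IRQHandler(void)
{
    uint32_t pending = GPIOA_INT_STATUS;
    GPIOA_INT_CLEAR = pending;          /* clear exactly what we handle */
    gpio_worker(0u, pending);
}

void GPIOB_IRQHandler(void)
{
    uint32_t pending = GPIOB_INT_STATUS;
    GPIOB_INT_CLEAR = pending;
    gpio_worker(1u, pending);
}

With optimisation enabled the compiler will normally inline gpio_worker into each handler, so the per-port dispatch costs nothing at run time.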
Am I going about this all wrong and should just "keep it simple stupid"
Sounds like you are doing it right. A properly written driver should be able to handle multiple hardware peripheral instances with the same code. That being said, C programmers have a tendency to obsess about avoiding code repetition. "KISS" is often far more sound than avoiding code repetition. Avoiding repetition is of course nice, but not your top priority here.
Priorities in this case should be, most important first:
As fast, slim interrupts as possible.
Readable code.
Flash size used.
Avoiding code repetition.
I'm working on a Freescale microcontroller. This microcontroller has several reset sources (e.g. clock monitor reset, watchdog reset, ...).
Suppose that my microcontroller is reset because of the watchdog. How can I save some data just before the reset happens? For example, how can I find out where the program counter was just before the watchdog reset? With this I want to find the error (in other words, the long-running process) that causes the watchdog reset.
Most Freescale MCUs work like this:
RAM is preserved after watchdog reset. But probably not after LVD reset and certainly not after power-on reset. This is in most cases completely undocumented.
The MCU will either have a status register where you can check the reset cause (for example HCS08, MPC5x, Kinetis), or it will have special reset vectors for different reset causes (for example HC11, HCS12, Coldfire).
There is no way to save anything upon reset. Reset happens and only afterwards can you find out what caused the reset.
It is however possible to reserve a chunk of RAM as a special segment. Upon power-on reset, you can initialize this segment by setting everything to zero. If you get a watchdog reset, you can assume that this RAM segment is still valid and intact. So you don't initialize it, but leave it as it is. This method enables you to save variable values across reset. Probably - this is not well documented for most MCU families. I have used this trick at least on HCS08, HCS12 and MPC56.
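A minimal sketch of that trick, assuming a GCC toolchain where a .noinit section exists in the linker file and the startup code leaves it untouched; the magic value and names are arbitrary:

#include <stdint.h>

#define NOINIT_MAGIC  0xC0FFEEu

__attribute__((section(".noinit"))) static uint32_t noinit_magic;
__attribute__((section(".noinit"))) static uint32_t reset_count;

/* Call this early in start-up, passing whether the reset cause was power-on. */
void noinit_segment_init(int was_power_on_reset)
{
    if (was_power_on_reset || noinit_magic != NOINIT_MAGIC)
    {
        /* Cold start or corrupted segment: initialise everything. */
        noinit_magic = NOINIT_MAGIC;
        reset_count  = 0;
    }
    else
    {
        /* Warm start (e.g. watchdog reset): the data survived, just count it. */
        reset_count++;
    }
}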
As for the program counter, you are out of luck. It is reset with no means to recover it, meaning that the only way to find out where a watchdog reset occurred is the tedious old-school way of moving a breakpoint bit by bit down your code, running the program, and checking whether it reached the breakpoint.
Though in the case of modern MCUs like the MPC56 or Cortex-M parts, you can simply check the trace buffer and see what code caused the reset. Not only do you get the PC, you get to see the C source code. But you might need a professional, Eclipse-free toolchain to do this.
Depending on your microcontroller you may get the reset reason, but getting the previous program counter (PC/IP) after a reset is not possible.
Most modern microcontrollers have a provision for a watchdog interrupt instead of a reset.
You can configure the watchdog peripheral to generate an interrupt, and in that ISR you can examine the context stored on the stack. (You can also use a JTAG debugger to inspect the call stack.)
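For example, on a Cortex-M based part (e.g. Kinetis) with the watchdog configured to interrupt rather than reset immediately, the ISR can pull the interrupted PC out of the stacked exception frame. This is only a sketch; the handler and variable names are placeholders, so use the vector name from your device's startup file:

#include <stdint.h>

volatile uint32_t last_wdog_pc;   /* put in a no-init section if it must survive the reset */

void WDOG_IRQHandler(void)        /* placeholder name */
{
    uint32_t *stacked_frame;

    /* On exception entry the core pushes R0-R3, R12, LR, PC, xPSR.
     * Bit 2 of EXC_RETURN (now in LR) says whether MSP or PSP was in use. */
    __asm volatile (
        "tst lr, #4        \n"
        "ite eq            \n"
        "mrseq %0, msp     \n"
        "mrsne %0, psp     \n"
        : "=r" (stacked_frame)
    );

    last_wdog_pc = stacked_frame[6];   /* stacked PC: where the watchdog hit */
    /* Optionally record more context, then let the reset proceed. */
}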
There are other debugging methods available if your microcontroller doesn't support the above.
For example, in a simple while(1)-based architecture you can use a hardware timer and restart it after each section of code; in the timer ISR you then know which code section has run longer than the timer period, as sketched below.
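A minimal sketch of that idea (all names are illustrative): a checkpoint variable is updated as the main loop progresses, and when the hardware timer expires its ISR tells you which section overran:

#include <stdint.h>

volatile uint8_t last_checkpoint;   /* which section the main loop was in */

static void timer_restart(void)     /* hypothetical: reload the hardware timer */
{
}

void TIMER_IRQHandler(void)         /* fires only if a section overruns */
{
    /* last_checkpoint now identifies the slow section: log it, toggle a pin,
     * or store it in no-init RAM for inspection after the watchdog reset. */
}

int main(void)
{
    for (;;)
    {
        last_checkpoint = 1; timer_restart();
        /* section 1: e.g. read sensors */

        last_checkpoint = 2; timer_restart();
        /* section 2: e.g. communications */

        last_checkpoint = 3; timer_restart();
        /* section 3: e.g. control loop */
    }
}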
Two things:
Write a log! And rotate that log to keep the last 30 minutes, or whatever reasonable amount of time you think you need to reproduce the error. Where the log stops, you can see what happened just before. Even production-level devices have some level of logging.
(Less practical) You can attach a debugger to nearly every microcontroller and step through the code. Put a breakpoint that is hit just before you enter the critical section of code. Some IDEs/uCs also have "data breakpoints" that trigger when certain variables contain certain values.
Disclaimer: I am not familiar with the exact microcontroller that you are using.
It is written in your manual.
I don't know that specific processor but in most microprocessors a watchdog reset is a soft reset, meaning that certain registers will keep information about the reset source and sometimes reason.
You need to post more specific information about your Freescale μC for this to be answered properly.
Even if you could get the program counter from before the reset, it wouldn't be advisable to blindly jump back to it after reset, as the stack, the heap, and the data itself may also have changed.
It depends on what you want to preserve after reset: certain behaviour, or data? Volatile memory may or may not be cleared after a watchdog reset (see your uC datasheet), and you can detect a reset by checking the reset registers (again, see your uC datasheet). By detecting a reset and then checking volatile memory, you may be able to make your uC restart in the way you'd prefer after the unlikely event of a reset. For example, you could set a global variable to a known value at startup; when a reset occurs, check the variable against that value, and if it matches you can assume the rest of memory is probably intact too. If volatile memory is not an option, you'll need to look at the datasheet for non-volatile options; however, it is advisable not to write to non-volatile memory continually because of its write-endurance limits.
The only reliable solution is to use a debugger with trace capability if your chip supports embedded instruction trace.
Some devices have an option to redirect the watchdog timeout to an interrupt rather than a reset. This would allow you to write the watchdog timeout handler much like an exception handler and dump or store the stack information, including the return address, which will indicate the location where the timeout occurred.
However in some cases, neither solution is a reliable method of achieving your aim. In a multi-tasking environment or system with interrupt handlers, the code running when the watchdog timeout occurs may not be the process that is causing the problem.
I am learning embedded systems on the ARM9 processor (SAM9G20). I am more familiar with procedural programming for general purpose. Thus what I am doing is going through the data sheet and learning what registers there are and how to manipulate them.
My question is, how do I know when the computer reset? I know that there is a Reset Controller that manages resets. A register called the Status Register (RSTC_SR) stores the source of the reset. Do I need to keep periodically reading this register?
My solution is to store the number of resets in FRAM (starting it at 0). Once a reset happens, I compare this variable with the register value in my main function; if the register value is higher, then obviously a reset occurred. However, I am sure there is a more optimized way (perhaps using interrupts). Or is this how it's usually done?
You do not need to periodically check, since every time the machine is reset your program will re-start from the beginning.
Simply add checks to the startup code, i.e. early in main(), as needed. If you want to figure out things like how often you reset, then that is more difficult since typically (no experience with SAMs, I'm an STM32 type of guy) on-board timers etc will also reset. Best would be some kind of real-world independent clock, like an RTC that you can poll and save the value of. Please consider if you really need this, though.
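On the SAM9G20 such an early check could look roughly like the sketch below. The RSTC_SR address and the RSTTYP field values come from my reading of the SAM9 Reset Controller description, so verify them against your datasheet (or use the vendor header) before relying on them:

#include <stdint.h>

/* Reset Controller Status Register - check the address in your datasheet. */
#define RSTC_SR   (*(volatile uint32_t *)0xFFFFFD04u)

int main(void)
{
    /* RSTTYP (bits 10:8): 0 = power-up, 2 = watchdog, 3 = software, 4 = NRST pin. */
    uint32_t reset_type = (RSTC_SR >> 8) & 0x7u;

    if (reset_type == 2u)
    {
        /* Watchdog reset: e.g. bump a counter kept in FRAM or no-init RAM. */
    }

    /* ... normal initialisation and main loop continue here ... */
    for (;;)
    {
    }
}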
A simple solution is to exploit the structure of your code.
Many code bases for embedded take this form:
int main(void)
{
// setup stuff here
while (1)
{
// handle stuff here
}
return 0;
}
You can exploit the fact that the code above the while(1) loop is only run once, at startup. You could increment a counter there and save it in non-volatile storage; that would tell you how many times the microcontroller has reset.
Another example is on Arduino, where the code is structured such that a function called setup() is called once, and a function called loop() is called continuously. With this structure, you could increment the variable in the setup()-function to achieve the same effect.
Whenever your processor starts up, it has by definition come out of reset. What the reset status register does is indicate the source or reason for the reset, such as power-on, watchdog-timer, brown-out, software-instruction, reset-pin etc.
It is not a matter of knowing when your processor has reset - that is implicit by the fact that your code has restarted. It is rather a matter of knowing the cause of the reset.
You need not monitor or read the reset status at all if your application has no need of it, but in some applications it is a useful diagnostic, for example to maintain a count of the various reset causes, as this may be indicative of the stability of your system software, its power supply, or the behaviour of the operators. Ideally you'd want to log the cause with a timestamp, assuming you have a suitable RTC source early enough in your start-up. The timing of resets is often a useful diagnostic where simply counting them may not be.
Any counting of the reset cause should occur early in your code start-up, before any interrupts are enabled (because an interrupt may itself cause a reset). This may require you to implement the counters in the start-up code before main() is invoked, in cases where the start-up code might enable interrupts - for stdio or filesystem support, for example.
A way to do this is to run the code in debug mode (if you have a debugger for the SAM). After a reset, the program counter (PC) points to the address where your code starts.
I'm reading the Linux source code to learn how scheduling works. I've learned that in a preemptible kernel (CONFIG_PREEMPT is set), there is a chance for preemption when returning to kernel space from an interrupt handler, via a call to preempt_schedule_irq.
However, I also found the following code snippet in preempt_schedule_irq:
do {
preempt_disable();
local_irq_enable(); //why enable interrupt here?
__schedule(true); //interrupt would be disabled inside it
local_irq_disable();
sched_preempt_enable_no_resched();
} while (need_resched());
There is a local_irq_enable() call inside it, and this confuses me. Why do we need to enable interrupts here, since at the start of __schedule() they would be disabled again?
My humble guess is that this gives higher-priority processes a chance to be scheduled first. However, that doesn't make sense, because preemption is already disabled in preempt_schedule_irq: even if an interrupt arrives, there will not be a preemption reschedule.
So what on earth is the point of letting interrupts in during the scheduling procedure here? I think I must have missed something, but I can't figure out what.
Short answer: Because interrupts should be enabled as much as possible, and only disabled to protect minimal critical sections. Arbitrarily extending the disabling beyond your critical section into non-critical sections of functions you're calling, just because you assume that function will disable them again at some point, is bad design.
Why doesn't __schedule() disable interrupts as its very first instruction? Because it doesn't need to: the code at the start of __schedule() isn't a critical section, so explicitly disabling interrupts before it would be a waste. The writer of __schedule() went out of their way to maximize the time during which interrupts can be handled; why throw away that opportunity by not enabling them?
Also, you have no guarantees about what __schedule() might do in the future. Since the start of __schedule() isn't a critical section, you have no guarantee that more code won't be added before the interrupt disabling. Remember, the person making changes to __schedule() shouldn't have to consider that one of the callers decided to leave interrupts disabled, relying on the fact that __schedule() turns them off pretty soon anyway. __schedule() has no regard at all for the interrupt status when it's called.
You should disable/enable interrupts around the critical sections of your own code, not rely on the inner mechanics of some other function you're calling and hope they don't change.
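As a toy illustration of that discipline (not taken from the scheduler itself; example_update() and shared_counter are made-up names), a function protects only its own shared data and leaves interrupts enabled everywhere else:

#include <linux/irqflags.h>

static unsigned long shared_counter;    /* also touched from an interrupt handler */

static void example_update(unsigned long work)
{
	unsigned long flags;

	/* Non-critical preparation: interrupts stay enabled as long as possible. */
	work = work * 2 + 1;

	local_irq_save(flags);          /* critical section starts here, not earlier */
	shared_counter += work;
	local_irq_restore(flags);       /* and ends here */
}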
If you look through the history of the scheduler, you'll see that the code preceding the interrupt disabling has changed over time. Digging through the commits you see that the "sti" to enable interrupts was present since the very first commit to implement preemption in the kernel going back all the way to 2.5: https://github.com/schwabe/tglx-history/blob/ec332cd30cf1ccde914a87330ff66744414c8d24/arch/i386/kernel/entry.S#L235
In an external interrupt handler, I want to reset by calling the main function. But afterwards, if a new interrupt triggers, the MCU thinks it is still handling the previous interrupt and doesn't call the interrupt handler again. What is the solution? (In my project, I'm not allowed to call the soft reset function.)
Calling main() in any event is a bad idea; calling it from an interrupt handler is a really bad idea, as you have discovered.
What you really need is to modify the stack and link register so that when the interrupt context exits, it "returns" to main() rather than from whence it came. That is a non-trivial task, probably requiring some assembler code or compiler intrinsics.
You have to realise that the hardware will not have been restored to its reset state; you will probably need at least to disable all interrupts to prevent them occurring while the system is re-initialising.
Moreover, the standard library will not be reinitialised if you jump to main() rather than to the reset vector. In particular, any currently allocated dynamic memory will instantly leak and become unusable. In fact, all of the C run-time environment initialisation will be skipped, leaving, for example, static and global data in their last state rather than correctly initialised.
In short it is dangerous, error-prone, target specific, and fundamentally poor practice. Most of what you would have to do to make it work is already done in the start-up code that executes before main() is called, so it would be far simpler to invoke that. The difference between that and forcing a true reset (via the watchdog or AIRCR) is that the on-chip peripheral state remains untouched (apart from any initialisation explicitly done in the start-up code). In my experience, if you are using a more complex peripheral such as USB, restarting the system without a true reset is difficult to achieve safely (or at least it is difficult to determine how to do it safely) and hardly worth the effort.
Resetting by calling main() is wrong. There is code that runs before main(), inserted by the linker and C runtime, that such a "soft reset" would skip.
Instead, call NVIC_SystemReset(), or enable the IWDG and sit in a while(1){} until it resets the chip.
The HAL should have example files for the watchdog timer.
SRAM is maintained across the reset. Any value not re-initialised by the startup code / linker script will still be there.
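A minimal sketch of both suggestions, assuming an STM32-class part with the CMSIS/device headers available (the EXTI handler name is just an example, and the IWDG key value should be checked against your reference manual):

#include "stm32f4xx.h"          /* device header: CMSIS core + IWDG/EXTI definitions */

/* Option 1: request a true system reset from the interrupt handler. */
void EXTI0_IRQHandler(void)
{
    EXTI->PR = (1u << 0);       /* clear the pending flag first */
    NVIC_SystemReset();         /* never returns; core and peripherals restart clean */
}

/* Option 2: start the independent watchdog and simply stop refreshing it. */
void reset_via_iwdg(void)
{
    IWDG->KR = 0xCCCCu;         /* start the IWDG with its current timeout */
    for (;;)
    {
        /* never write the 0xAAAA refresh key again -> reset on timeout */
    }
}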
Calling main() from any point of your code is a bad idea if you are not resetting the stack and re-applying the initial values.
There is always an initialization function (the one that actually calls main()) whose address sits in the vector table as the reset vector. Usually it can be triggered by calling NVIC_SystemReset(void); be sure your setup allows it to be triggered from software.
As far as I know, when you are inside interrupt code, other interrupts are inhibited. I am thinking of two different options:
Enable interrupts inside the interrupt handler and call NVIC_SystemReset(void).
Modify the stack and push the address of NVIC_SystemReset(void) so that when you exit the interrupt it gets executed.