How to call an event after x minutes while still running? - c

I'm quite new in programming, so please bear with me.
I'm working with a microcontroller, therefore I'm using Microchip Studio.
My code is, simplified, built up like this:
while (1) {
    if (ErrorFlag == 1)
        timer_restart++;
    else
        timer_restart = 0;

    if (timer_restart == 600000)
        restart();

    // Remaining code
} // end while
My problem is that I would like to call restart() after around 5 minutes. Right now I have no clue how long it takes. Is there a better way to implement that?
I've tried to find out how long one while-loop iteration takes using the clock() function, but I'm getting an "undefined reference" error message. I think Microchip Studio does not know those functions.
I maybe could use something like:
while (1) {
    while (ErrorFlag == 1) {
        delay_ms(5000);
        restart();
        ErrorFlag = 0;
    }
}
But then the rest of the code is interrupted. Is there any advice someone can give me?

"Busy-while" or "busy-delay" are rarely ever the correct solution. Apart from being inaccurate, they also lock up your CPU at 100% and consuming current needlessly.
You need to use the actual on-chip hardware peripheral timers, either by polling a timer flag on a regular basis or by using interrupts. As for how to do that, it depends on the specific MCU.
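As a rough illustration of the approach, here is a minimal sketch, not tied to a specific MCU: timer_init() and TIMER_ISR are placeholder names for whatever your vendor headers provide, while ErrorFlag and restart() come from the question's code. A hardware timer fires every millisecond and increments a tick counter, and the main loop measures elapsed time without ever blocking:

#include <stdint.h>

extern volatile int ErrorFlag;                    /* from the question's code           */
void restart(void);                               /* from the question's code           */
void timer_init(void);                            /* placeholder: vendor-specific setup */

static volatile uint32_t g_ms_ticks;              /* incremented once per millisecond   */

#define RESTART_TIMEOUT_MS (5UL * 60UL * 1000UL)  /* 5 minutes */

void TIMER_ISR(void)   /* placeholder name: hook this to a 1 ms hardware timer interrupt */
{
    g_ms_ticks++;
}

int main(void)
{
    uint32_t error_since = 0;
    int error_pending = 0;

    timer_init();

    while (1) {
        /* NOTE: on an 8-bit MCU, disable interrupts briefly around this read
           so the 32-bit copy of g_ms_ticks is atomic (how depends on the MCU). */
        uint32_t now = g_ms_ticks;

        if (ErrorFlag == 1) {
            if (!error_pending) {
                error_pending = 1;
                error_since = now;                /* remember when the error appeared */
            } else if (now - error_since >= RESTART_TIMEOUT_MS) {
                restart();
            }
        } else {
            error_pending = 0;                    /* error cleared, stop counting */
        }

        /* remaining code keeps running every iteration */
    }
}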

Related

How do I efficiently use the same interrupt handler for every (identical) peripheral port?

(Hopefully) simplified version of my problem:
Say I'm using every GPIO port of my Cortex-M4 MCU to do the exact same thing, like reading the port on a pin-level change. I've simplified my code so it's port-agnostic, but I'm having trouble coming up with a nice solution for re-using the same interrupt handler function.
Is there a way I can use the same interrupt handler function while having a method of finding which port triggered the interrupt? Ideally something O(1) that doesn't scale up with how many ports the board has.
Should I just have different handlers for each port that call the same function that takes in a "port" parameter? (Best I could come up with so far)
So like:
void worker(uint32_t gpio_id) {
    /* work goes here */
}
void GPIOA_IRQ_Handler(void) { worker(GPIOA_id); }
void GPIOB_IRQ_Handler(void) { worker(GPIOB_id); }
void GPIOC_IRQ_Handler(void) { worker(GPIOC_id); }
...
My actual problem:
I'm learning about and fiddling around with FreeRTOS, creating simple drivers for a debug/stdio UART, some buttons on my dev board, and so on. So far I've been making drivers for a specific peripheral/port.
Now I'm looking to make an I2C driver without knowing which interface I'm going to use (there are 10 I2C ports on my MCU), and potentially to allow the driver code to be used on multiple ports at the same time. I'd know all the ports used at compile time, though.
I have a pretty good idea of how to make the driver port-agnostic, except I'm getting hung up on figuring out a nice way to find which port triggered the interrupt using a single handler function (besides cycling through every port's interrupt status register, since that's O(n)).
Like I said, the best I came up with is to not have a single handler and instead have different handlers in the vector table that all call the same "worker" function, passing a "port" parameter. This method clutters up the driver code, but it is O(1) (unless you take code complexity into account).
Am I going about this all wrong and should I just "keep it simple stupid" and implement the driver according to the port(s)/use-case I will actually need, in the simplest way possible? (I don't even have plans to use multiple I2C buses, I just thought it'd be interesting to implement.)
Thank you in advance; hopefully the post isn't too ambiguous or long (I feel like it's pretty long, sorry).
Is there a way I can use the same interrupt handler function while having a method of finding which port triggered the interrupt?
Only if the different interrupts are cleared in the same way and your application doesn't care which pin triggered the interrupt. Quite an unlikely use-case.
Should I just have different handlers for each port that call the same function that takes in a "port" parameter?
Yeah, that's usually what I do. You should pass on to the function the parameters that are unique to the specific interrupt. Important: note that the function should be static inline! A fast ISR is significantly more important than saving a tiny bit of flash by re-using the same function. So in the machine code you'll have 4 different ISRs with the worker function inlined. (You might want to disable inlining in a debug build, though.)
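A minimal sketch of that pattern (the handler names match the question's example; the enum of port ids is purely illustrative, not from any vendor header):

#include <stdint.h>

enum { GPIOA_id = 0, GPIOB_id = 1, GPIOC_id = 2 };   /* illustrative port ids */

static inline void gpio_worker(uint32_t gpio_id)
{
    /* shared work goes here, e.g. read the pin state and queue an event */
    (void)gpio_id;
}

/* Each ISR stays tiny; the compiler inlines the worker into each one. */
void GPIOA_IRQ_Handler(void) { gpio_worker(GPIOA_id); }
void GPIOB_IRQ_Handler(void) { gpio_worker(GPIOB_id); }
void GPIOC_IRQ_Handler(void) { gpio_worker(GPIOC_id); }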
Am I going about this all wrong and should just "keep it simple stupid"
Sounds like you are doing it right. A properly written driver should be able to handle multiple hardware peripheral instances with the same code. That being said, C programmers have a tendency to obsess about avoiding code repetition. "KISS" is often far more sound than avoiding code repetition. Avoiding repetition is of course nice, but not your top priority here.
Priorities in this case should be, most important first:
As fast, slim interrupts as possible.
Readable code.
Flash size used.
Avoiding code repetition.

How to prevent linux soft lockup/unresponsiveness in C without sleep

What would be the correct way to prevent a soft lockup/unresponsiveness in a long-running while loop in a C program?
(dmesg is reporting a soft lockup)
Pseudo code is like this:
while (worktodo) {
    worktodo = doWork();
}
My code is of course way more complex, and it also includes a printf statement which gets executed once a second to report progress, but the problem is that the program ceases to respond to Ctrl+C at this point.
Things I've tried which do work (but I want an alternative):
doing a printf every loop iteration (I don't know why, but the program becomes responsive again that way (???)) - wastes a lot of performance due to unneeded printf calls (each doWork() call does not take very long)
using sleep/usleep/... - also seems like a waste of (processing) time to me, as the whole program will already be running for several hours at full speed
What I'm thinking about is some kind of process_waiting_events() function or the like, and normal signals seem to be working fine, as I can use kill from a different shell to stop the program.
Additional background info: I'm using GWAN and my code is running inside the main.c "maintenance script", which seems to be running in the main thread as far as I can tell.
Thank you very much.
P.S.: Yes I did check all other threads I found regarding soft lockups, but they all seem to ask about why soft lockups occur, while I know the why and want to have a way of preventing them.
P.P.S.: Optimizing the program (making it run shorter) is not really a solution, as I'm processing a 29GB bz2 file which extracts to about 400GB xml, at the speed of about 10-40MB per second on a single thread, so even at max speed I would be bound by I/O and still have it running for several hours.
While the posted answer using threads might possibly be an option, in reality it would just shift the problem to a different thread. My solution after all was using
sleep(0)
I also tested sched_yield / pthread_yield, neither of which really helped. Unfortunately I've been unable to find a good resource that documents sleep(0) on Linux, but for Windows the documentation states that using a value of 0 lets the thread yield its remaining part of the current CPU slice.
It turns out that sleep(0) most probably relies on what is called timer slack in Linux - an article about this can be found here: http://lwn.net/Articles/463357/
Another possibility is using nanosleep(&(struct timespec){0}, NULL), which does not seem to rely on timer slack - the Linux man pages for nanosleep state that if the requested interval is below clock granularity, it is rounded up to clock granularity, which on Linux is based on CLOCK_MONOTONIC according to the man pages. Thus a value of 0 nanoseconds is perfectly valid and should always work, as clock granularity can never be 0.
Hope this helps someone else as well ;)
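For illustration, a minimal sketch of that approach; doWork() here is only a stub standing in for the real per-iteration work:

#include <stdbool.h>
#include <time.h>

static bool doWork(void)              /* stub standing in for the real work */
{
    static long n = 1000000L;
    return --n > 0;
}

int main(void)
{
    bool worktodo = true;

    while (worktodo) {
        worktodo = doWork();

        /* 0 ns request: rounded up to clock granularity, but it still gives
           the kernel a scheduling point on every iteration */
        nanosleep(&(struct timespec){0}, NULL);
    }
    return 0;
}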
Your scenario is not really a soft lockup; it is a process that is busy doing something.
How about this pseudo code:
void workerThread()
{
    while (workToDo)
    {
        if (threadSignalled)
            break;
        workToDo = DoWork()
    }
}

void sighandler()
{
    signal worker thread to finish
    waitForWorkerThreadFinished;
}

void main()
{
    InstallSignalHandler;
    CreateSemaphore;
    StartThread;
    waitForWorkerThreadFinished;
}
Clearly a timing issue. Using a signalling mechanism should remove the problem.
The use of printf solves the problem because printf accesses the console, which is an expensive and time-consuming operation; in your case it gives the worker enough time to complete its work.
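A concrete sketch of that pseudo code using POSIX threads (doWork() is a stub standing in for the real work; compile with -pthread):

#include <pthread.h>
#include <signal.h>
#include <stdbool.h>

static volatile sig_atomic_t stop_requested = 0;

static bool doWork(void)                 /* stub standing in for the real work */
{
    static long n = 1000000L;
    return --n > 0;
}

static void sighandler(int sig)
{
    (void)sig;
    stop_requested = 1;                  /* ask the worker thread to finish */
}

static void *workerThread(void *arg)
{
    (void)arg;
    bool workToDo = true;

    while (workToDo && !stop_requested)
        workToDo = doWork();
    return NULL;
}

int main(void)
{
    pthread_t worker;

    signal(SIGINT, sighandler);          /* Ctrl+C now only sets the flag */
    pthread_create(&worker, NULL, workerThread, NULL);
    pthread_join(&worker, NULL);         /* wait for the worker to finish */
    return 0;
}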

How to use delays in Arduino code?

Is there a way I can use the delay command and have something else running in the background?
Kinda, if you use interrupts - delay itself uses these. But it's not as elegant as a multi-threaded solution (which is probably what you're looking for). There is a multi-threading library for Arduino, but I'm not sure how well, or even if, it works.
The Arduino is only capable of running a single thread at a time, meaning it can only do one thing at a time. You can use interrupts to literally interrupt the normal flow of your code, but it's still technically not executing at the same time. The library I linked to attempts to implement what you might call a crude "hyper-threaded" solution: two threads executing in tandem on a single physical processing core.
If you need other code to execute, you need to learn how to program with millis(). This involves converting your code from "step by step" execution to a time-based state machine.
For example, if you want an LED to flash, you have two states for that LED: on and off. You change the state when enough time has elapsed.
Here are a series of examples of how to convert delay()-based code into millis()-based code:
http://www.cmiyc.com/blog/2011/01/06/millis-tutorial/
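For illustration, a minimal millis()-based blink sketch along those lines (pin 13 and the 500 ms interval are just example values):

const int ledPin = 13;
const unsigned long intervalMs = 500;

int ledState = LOW;
unsigned long lastToggle = 0;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  unsigned long now = millis();

  if (now - lastToggle >= intervalMs) {          // enough time has elapsed
    lastToggle = now;
    ledState = (ledState == LOW) ? HIGH : LOW;   // flip the LED state
    digitalWrite(ledPin, ledState);
  }
  // other code keeps running on every pass through loop()
}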
Usually all you need is a timer and an ISR (interrupt service routine). You won't manage to live without interrupts :P Here you can find a good explanation about this.
I agree with JamesC4S, a state machine is probably the right formalism to use in your case. You could for example try the ThingML language (which uses components, state machines, etc.), which compiles to Arduino code. A simple example can be found here.

How to solve "BUG: scheduling while atomic: swapper /0x00000103/0, CPU#0"? in TSC2007 Driver?

I found the tsc2007 driver and modified it according to our needs. Our firm is producing its own TI DM365 board. On this board we used the TSC2007 and connected its PENIRQ pin to GPIO0 of the DM365. It is detected OK by the driver. When I touch the touchscreen the cursor moves, but at the same time I am getting
BUG: scheduling while atomic: swapper /0x00000103/0, CPU#0
warning and the embedded Linux crashes. There are 2 files that I modified and uploaded to http://www.muhendislikhizmeti.com/touchscreen.zip, one with a timer and one without. It gives this error in either case.
I found a suggestion on the web that I need to use a work queue and call it with the schedule_work() API, but work queues are a blur to me right now. Does anybody have an idea how to solve this problem, and can you give me some advice on where to start with work queues?
"Scheduling while atomic" indicates that you've tried to sleep somewhere that you shouldn't - like within a spinlock-protected critical section or an interrupt handler.
Common examples of things that can sleep are mutex_lock(), kmalloc(..., GFP_KERNEL), get_user() and put_user().
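Since the question mentions schedule_work(): here is a minimal sketch of deferring the sleeping work from the interrupt handler into a workqueue, which runs in process context where sleeping is allowed. This is generic kernel code, not taken from the actual tsc2007 driver, and the names are illustrative:

#include <linux/interrupt.h>
#include <linux/workqueue.h>

static void ts_work_fn(struct work_struct *work)
{
    /* process context: safe to sleep here - I2C transfers, mutex_lock(),
       GFP_KERNEL allocations and so on belong in this function */
}

static DECLARE_WORK(ts_work, ts_work_fn);

static irqreturn_t ts_irq_handler(int irq, void *dev_id)
{
    /* atomic context: do the bare minimum and defer the rest */
    schedule_work(&ts_work);
    return IRQ_HANDLED;
}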
Exactly as the first answer says, "scheduling while atomic" happens when the scheduler is asked to schedule() from a place where it cannot: schedulable code ends up being called from inside a non-schedulable (atomic) section.
For example, using sleeps inside a section protected by a spinlock. Trying to take another lock (semaphore, mutex, ...) inside spinlock-protected code may also disturb the scheduler. In addition, using spinlocks in user space can drive the scheduler to behave like this. Hope this helps.
For anyone else with a similar error - I had this problem because I had a function, called from an atomic context, that used kzalloc(..., GFP_KERNEL) when it should have used GFP_NOWAIT or GFP_ATOMIC.
This is just one example of a function sleeping when you don't want to, which is something you have to be careful of in kernel programming.
Hope this is useful to somebody else!
Thanks to the former two answers; in my case it was enough to disable preemption:
preempt_disable();
// Your code with locks and schedule()
preempt_enable();

How to find execution time (if possible in nanoseconds) of the sections in a C code on linux with intel dual core?

I have C code with some functions. I need to find out the execution time of each function. I have tried using gettimeofday and rdtsc, but I guess that because it's a multi-core system the reported time includes the switching time between the processors. I want it to be serialized. So can somebody give me an idea of how I should calculate the time, or at least let me know about the syntax of rdtscp?
P.S. please reply as soon as possible
Thanks :)
It's a little impractical to expect nanosecond resolution.
You can't add code just to output the execution times of functions without increasing execution time. When you take the code out, the timing changes.
In practice, this kind of measurement is made by observing the CPU timing signals on an oscilloscope (or logic analyser).
If you have multiple cores, then the CPU timer won't be stable between them, so set the thread affinity to keep the thread on one core. You also might want to use a real-time timer to measure the time for the process or thread, using clock_gettime(CLOCK_PROCESS_CPUTIME_ID). Read the note for SMP systems in the usage for that function.
Both of these will affect the timing of the program, so perform multiple iterations of whatever you are benchmarking, and don't call the timing functions too often, to try and mitigate this.
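For example, a minimal sketch of reading that per-process CPU clock:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* CPU time consumed by this process so far, not wall-clock time */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
    printf("%ld.%09ld s of CPU time used\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}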
There should be some way to set processor affinity to tell the operating system to only run that thread on a particular core.
In Windows there is a SetThreadAffinityMask system call; I imagine there is a similar function in Linux, although I don't know what it is called.
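On Linux the closest equivalents are sched_setaffinity() and, for a single thread, pthread_setaffinity_np(); a minimal sketch (pinning to core 0 is just an example):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to the given core; returns 0 on success. */
static int pin_to_core(int core)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}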
You could boot your dual core system to use one core only using the following kernel parameter:
maxcpus=1
But the measured time will still include process context switching and will thus depend on the activity of the other processes on the system. Are you interested in the execution time, or the CPU time needed to execute your task?
Mate, I'm not sure about this, but even if you're on a dual core, unless the program is threaded it will only run in one thread (meaning one core), so it should not involve the time of switching between processors; I believe there is no such thing...
Pavium is correct, the only way to get decent timing at this resolution is with an oscilloscope and toggling GPIO pins.
Bear in mind that this is all a bit academic anyway: I suppose you are running with an operating system, etc, so there is no way to get a straight run at the hardware.
You really need to look at the reason you want this measurement. Is it a performance benchmark for some code? You could try running the code many thousands of times and getting some statistics. For this kind of approach I would recommend you read Zed Shaw's diatribe to make sure the numbers aren't fooling you.
Precise performance measuring was impossible until Linux kernel 2.6.31. In this kernel a new interface for accessing the performance counters of the CPU (and, IMHO, correcting times in the scheduler) was added.
Unfortunately I don't have more details, but maybe it is a starting point for further searching. I'm just adding this because nobody mentioned it before.
Use the struct timespec structure and the clock_gettime function as follows to obtain the execution time of the code with nanosecond precision:
struct timespec start, end;

clock_gettime(CLOCK_REALTIME, &start);
/* Do something */
clock_gettime(CLOCK_REALTIME, &end);
Each timestamp converts to nanoseconds as ((uint64_t)start.tv_sec * 1000000000ULL) + (uint64_t)start.tv_nsec, and the execution time is the difference between the end and start values computed this way.
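As a small sketch, that difference can be wrapped in one helper (using the start and end variables from the snippet above):

#include <stdint.h>
#include <time.h>

/* Elapsed time between two clock_gettime() samples, in nanoseconds. */
static int64_t elapsed_ns(const struct timespec *start, const struct timespec *end)
{
    return (int64_t)(end->tv_sec - start->tv_sec) * 1000000000LL
         + (int64_t)(end->tv_nsec - start->tv_nsec);
}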
Moreover, I've used this in multithreaded code too.
Hope this answer helps you get your desired execution time in nanoseconds.
