ioctl and execution time - c

I have a program running two threads - they communicate using message queues.
In one thread, I call ioctl() to access the hardware decryptor. The code goes like:
void Decrypt(void)
{
    ...
    ..
    ...
    if (<condition 1>)
    {
        ...
        ...
        retVal = ioctl(...);
        comesInHere1++;
    }
    if (<condition 2>)
    {
        ...
        ...
        retVal = ioctl(...);
        comesInHere2++;
    }
}
comesInHere1 and comesInHere2 are used to count the number of times execution enters each of those if blocks.
The entire program takes 80 ms to execute. But if I comment out the test variables (comesInHere1 and comesInHere2 inside the if blocks), the execution time increases by 8 ms, to 88 ms!
How is that possible? I can't comment out the variables now since that increases the time taken, but I can't keep them either - I'll get killed in code review :)
Kindly let me know
Thanks

Cache? It's possible that by adding a bit more data you're moving code and data onto different cache lines, separating items that would otherwise share a line and thrash. You could experiment by running on different systems and by adding padding between variables that are used exclusively by each thread.
What happens if you serialize the processing onto a single core?
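To illustrate the padding suggestion above, here is a minimal sketch (the struct layout and the 64-byte cache-line size are assumptions, not taken from the poster's code): keeping each thread's counter on its own cache line prevents an update by one thread from invalidating the line holding the other thread's data.
#include <stdalign.h>
#include <stdint.h>

/* Assumed 64-byte cache lines; adjust for the target CPU. */
struct counters {
    alignas(64) uint64_t comesInHere1;   /* touched only by thread 1 */
    alignas(64) uint64_t comesInHere2;   /* touched only by thread 2 */
};

static struct counters g_counters;   /* hypothetical shared instance */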

Related

Cyclic data reading (like 1Wire DS18B20 temperature) without blocking main program

I'm trying to do some temperature reading using a DS18B20 sensor on a Raspberry Pi. My problem is that reading data from this sensor takes time - not much, more or less 1 s, but I cannot allow my main program to wait until it is done. I don't need the 'most recent value'; it's temperature, so I'm going to ask for it every minute or so, and if the sensor makes a measurement every 10 s, that will give me a recent enough value. In the meantime I have to process other requests made to the application. So I am thinking of some kind of endless measurement loop. In general it would look like:
> start time measurement
> get DS18B20 value from 1 wire
> parse output
> stop measure time
> get the execution time, and put it in some global variable
> sleep for UPDATE_EVERY_X - execution time
So I thought of using fork(), but this creates zombies when the main process exits. The main application is a kind of server, so most of the time it won't exit gently, so I need some kind of additional protection, and it should not be a Linux-only method - I'm trying to write my app as portably as I can.
My second thought is to use threads: dispatch one thread to run this infinite loop and implement some basic producer-consumer with mutexes etc. That thread would only lock the output temperature once the reading is done, so this would make a significant difference in blocking time.
The third option is to use asynchronous I/O, but this is kind of magic to me right now. I have not used it before, but it keeps appearing in search results.
And for clarification, this is not strictly about the 1-Wire DS18B20, but about the general approach when you need to do a task every x seconds and share information between processes - a kind of embedded timer interrupt.
Best regards, voodoo16.
If you want the simplest option, remember that the "Convert T" and "Read Scratchpad" commands are two distinct steps. Your program can start a conversion, then come back for the value later. There's no need to stall the program unless you want the temperature value the exact instant the conversion finishes.
void mainLoop(void) {
    while (1) {
        // Do things

        int16_t rawTemperature = readScratchpad(myTemperatureSensor); // Get the last temperature reading

        if (!getTemperature(myTemperatureSensor)) { // Start the next conversion
            // Temperature conversion was not done, throw out the value
            rawTemperature = INT16_MAX;
        }
        else {
            float temperature = (float)rawTemperature / 4.0f;
            saveTemperature(temperature);
        }

        // Do other things while the temperature conversion happens
    }
}
Here getTemperature issues a "Convert T" command and returns 0 or 1 according to the datasheet (0 if the prior conversion wasn't done yet, 1 if a new conversion has been started).
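If you prefer the thread-based approach from the question, a minimal sketch could look like the following (readSensorBlocking, UPDATE_EVERY_X and the shared g_temperature variable are illustrative assumptions, not part of the original code):
#include <pthread.h>
#include <unistd.h>

#define UPDATE_EVERY_X 10                 /* seconds between measurements (assumed) */

float readSensorBlocking(void);           /* hypothetical slow 1-Wire read (~1 s) */

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static float g_temperature;               /* last good reading, shared with main */

static void *samplerThread(void *arg)
{
    (void)arg;
    for (;;) {
        float t = readSensorBlocking();

        pthread_mutex_lock(&g_lock);      /* hold the lock only for the copy */
        g_temperature = t;
        pthread_mutex_unlock(&g_lock);

        sleep(UPDATE_EVERY_X);
    }
    return NULL;
}
The main thread would call pthread_create(&tid, NULL, samplerThread, NULL) once; readers take g_lock, copy g_temperature, and release it immediately, so they never wait for the sensor.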

Determining cause of delay/pause - kernel scheduler etc

System is an embedded Linux/Busybox core on a small embedded board with a web server (Boa) running.
We are seeing some high latency in responses from the web server - sometimes >500ms for no good reason, so I've been digging...
After liberally scattering debug prints throughout the code, it seems to come down to the entire process just... stopping for a bit, in a way which I can only assume means the process/thread is being interrupted by another process.
Using print statements and clock_gettime() to calculate the time taken to process a request, I can see the code reach the bottom of a while() loop (parsing input) and print something like "Time so far: 5ms", and then the next line at the top of the loop prints "Time so far: 350ms" - and all the code does between the bottom of the loop and the first print back at the top is a basic check along the lines of while(position < end); there is nothing complicated that could hold it up.
There's no IO blocking, the data it's parsing has all arrived already, and it's not making any external calls or wandering off into complex functions.
I then looked into whether the kernel scheduler (CFS in our case) might be holding things up. Adding calls to clock() (processor time rather than wall-clock) and again calculating time differences, I can see that the wall-clock delay from one loop iteration to the next may run beyond 300 ms, while the reported processor time used (which seems to have a ~10 ms resolution) is more like 50 ms.
So that suggests the task scheduler is holding the process up for hundreds of milliseconds at a time. I've checked the scheduler granularity and maximum delay and they're nowhere near 100 ms; scheduler latency is set to 6 ms, for example.
Any advice on what I can do now to try and track down the problem - identifying processes which could hog the CPU for >100ms, measuring/tracking what the scheduler is doing, etc.?
First you should try running your program under strace to see if there are any system calls holding things up.
If that is ambiguous or does not help, I would suggest you try profiling the kernel. You could try OProfile.
This will create a call graph that you can analyze and see what is happening.
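For reference, here is a minimal sketch of the wall-clock vs CPU-time comparison the question describes, using clock_gettime() (the function and variable names are illustrative assumptions). If wall-clock time grows while the thread's CPU time barely moves, the thread was off the CPU for that period, which points at scheduling or preemption rather than your own code.
#include <stdio.h>
#include <time.h>

static double elapsed_ms(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1000.0 + (b->tv_nsec - a->tv_nsec) / 1e6;
}

void timeOneIteration(void)
{
    struct timespec wall0, cpu0, wall1, cpu1;

    clock_gettime(CLOCK_MONOTONIC, &wall0);          /* wall-clock time */
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu0);   /* CPU time used by this thread */

    /* ... one pass of the parsing loop ... */

    clock_gettime(CLOCK_MONOTONIC, &wall1);
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu1);

    printf("wall: %.1f ms, cpu: %.1f ms\n",
           elapsed_ms(&wall0, &wall1), elapsed_ms(&cpu0, &cpu1));
}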

Increment an output signal in LabVIEW

I have a high-voltage control VI and I'd like it to increase the output voltage by a user-set increment every x seconds. At the moment I have a timed sequence outside the main while loop, but it never starts; when it's inside the while loop it delays all the other functions. I'm afraid I'm such a beginner at this that I can't post a picture yet. All that needs to happen is an increase in voltage by x amount every y seconds. Is there a way to fix this, or a better way of doing it? I'm open to suggestions! Thanks!
Eric,
Without seeing the code, I am guessing that you have the two loops in series (i.e. the start of the while loop depends upon an output of the timed loop; this is the only way that one loop could block another). If this is the case, decouple the two loops so that they are not directly dependent on each other.
If the while loop is dependent on user input, then use an event structure and then pass the new parameters via a queue (this would be your producer-consumer pattern).
Also, get rid of the timed loop and replace it with a while loop. The timed loop is only simulated on non-real-time machines and it can disrupt the deterministic features of a real-time system. Given that you are looking to send out a signal on the order of seconds, it is absolutely not necessary.
Anyways, if I am off base, please throw the code in question up so that we can review it.
Cheers, Matt

How to prevent linux soft lockup/unresponsiveness in C without sleep

How would be the correct way to prevent a soft lockup/unresponsiveness in a long running while loop in a C program?
(dmesg is reporting a soft lockup)
Pseudo code is like this:
while( worktodo ) {
worktodo = doWork();
}
My code is of course much more complex and also includes a printf statement which is executed once a second to report progress, but the problem is that the program stops responding to Ctrl+C at this point.
Things I've tried which do work (but I want an alternative):
doing a printf every loop iteration (I don't know why, but the program becomes responsive again that way (???)) - this wastes a lot of performance on unneeded printf calls (each doWork() call does not take very long)
using sleep/usleep/... - this also seems like a waste of (processing) time to me, as the whole program will already be running for several hours at full speed
What I'm thinking about is some kind of process_waiting_events() function or the like. Normal signals seem to be working fine, as I can use kill from a different shell to stop the program.
Additional background info: I'm using GWAN and my code is running inside the main.c "maintenance script", which seems to be running in the main thread as far as I can tell.
Thank you very much.
P.S.: Yes, I did check all the other threads I found regarding soft lockups, but they all seem to ask why soft lockups occur, while I know why and want a way of preventing them.
P.P.S.: Optimizing the program (making it run for a shorter time) is not really a solution, as I'm processing a 29 GB bz2 file which extracts to about 400 GB of XML at about 10-40 MB per second on a single thread, so even at maximum speed I would be bound by I/O and still have it running for several hours.
While the posted answer using threads might possibly be an option, in reality it would just shift the problem to a different thread. My solution in the end was to use
sleep(0)
I also tested sched_yield / pthread_yield, neither of which really helped. Unfortunately I've been unable to find a good resource documenting sleep(0) on Linux, but for Windows the documentation states that using a value of 0 lets the thread yield its remaining part of the current CPU slice.
It turns out that sleep(0) most probably relies on what is called timer slack in Linux - an article about this can be found here: http://lwn.net/Articles/463357/
Another possibility is using nanosleep(&(struct timespec){0}, NULL), which does not necessarily rely on timer slack: the Linux man page for nanosleep states that if the requested interval is below the clock granularity, it will be rounded up to the clock granularity (which on Linux depends on CLOCK_MONOTONIC, according to the man pages). Thus a value of 0 nanoseconds is perfectly valid and should always work, as the clock granularity can never be 0.
Hope this helps someone else as well ;)
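As a minimal sketch of how the yield could fit into the original loop (assuming the same doWork()/worktodo names from the question; doWork() itself is defined elsewhere in the real program):
#define _POSIX_C_SOURCE 199309L
#include <time.h>

int doWork(void);   /* provided elsewhere by the original program */

void processFile(void)
{
    int worktodo = 1;
    while (worktodo) {
        worktodo = doWork();

        /* Give the kernel a scheduling point without measurably sleeping:
           a 0 ns request is rounded up to the clock granularity. */
        nanosleep(&(struct timespec){0}, NULL);
    }
}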
Your scenario is not really a soft lockup; it is simply a process that is busy doing something.
How about this pseudo code:
void workerThread()
{
    while (workToDo)
    {
        if (threadSignalled)
            break;
        workToDo = DoWork();
    }
}

void sighandler()
{
    signal worker thread to finish;
    waitForWorkerThreadFinished;
}

void main()
{
    InstallSignalHandler;
    CreateSemaphore;
    StartThread;
    waitForWorkerThreadFinished;
}
Clearly a timing issue. Using a signalling mechanism should remove the problem.
The use of printf solves the problem because printf accesses the console, which is an expensive and time-consuming operation and in your case gives the worker enough time to complete its work.
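As a concrete sketch of the signalling pattern above in real C with pthreads (the names are illustrative; only doWork() comes from the question, and it is assumed to be defined elsewhere):
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

int doWork(void);                      /* provided elsewhere by the original program */

static volatile sig_atomic_t g_stop = 0;

static void sighandler(int sig)
{
    (void)sig;
    g_stop = 1;                        /* only set a flag; this is async-signal-safe */
}

static void *workerThread(void *arg)
{
    (void)arg;
    int workToDo = 1;
    while (workToDo && !g_stop)
        workToDo = doWork();
    return NULL;
}

int main(void)
{
    signal(SIGINT, sighandler);        /* Ctrl+C now only sets g_stop */

    pthread_t tid;
    pthread_create(&tid, NULL, workerThread, NULL);
    pthread_join(tid, NULL);           /* wait for the worker to finish or be stopped */

    puts("done");
    return 0;
}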

run func() based on what time it is

I wrote some code that monitors a directory DIR with inotify, and when a file gets moved into DIR I get a .txt output of that file (it's an nfcapd file with flows from my network interface). This happens every 5 minutes.
After that, I used Snort's DPX starter kit, with which you can extend Snort by writing your own preprocessor. This preprocessor, like all the others, is just a function that is executed every time a new packet is available. My problem is that when a new file is exported by my previous code (so every 5 minutes), I want to read that file inside the preprocessor's function.
So, is there any way of getting the time and executing only if it's the desired time? Something like:
if (time is 15:36) {
    func(output.txt);
}
I'm writing in C.
Thanks
You can do something like the following:
#include <time.h>
...
time_t t = time(NULL);                  // obtain the current time in seconds
struct tm broken_time;
localtime_r(&t, &broken_time);          // split the time into fields
if (broken_time.tm_hour == 15 && broken_time.tm_min == 36) { // perform the check
    func("output.txt");
}
Since you're using inotify, I'm assuming your environment supports POSIX signals.
You can use alarm() to raise a signal after a predetermined amount of time has passed and have the appropriate signal handler do whatever work you need to do. It would avoid what I think is going to end up being a very ugly infinite loop in your code.
So, in your case, the function handling SIGALRM would not need to worry about what time it is; it would know that a predetermined amount of time has passed by the very fact that it was entered. However, you'll need to provide some context that the function can access to know what to do - it's hard to suggest how without seeing your code.
I'm not entirely sure you're going down the right path with this, but using alarm() would probably be the sanest approach given what you described.
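A minimal sketch of the alarm() approach (the 300-second interval, the file name and handleNewFile() are illustrative assumptions; real work is best deferred out of the handler, as below, rather than done inside it):
#include <signal.h>
#include <unistd.h>

void handleNewFile(const char *path);   /* hypothetical parser for the exported file */

static volatile sig_atomic_t g_fileReady = 0;

static void onAlarm(int sig)
{
    (void)sig;
    g_fileReady = 1;        /* only set a flag inside the handler */
    alarm(300);             /* re-arm for the next 5-minute period */
}

int main(void)
{
    signal(SIGALRM, onAlarm);
    alarm(300);             /* first alarm fires in 5 minutes */

    for (;;) {
        if (g_fileReady) {
            g_fileReady = 0;
            handleNewFile("output.txt");
        }
        /* ... process packets / other work ... */
    }
}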
