Cyclic data reading (like 1Wire DS18B20 temperature) without blocking main program - c

I'm trying to do some temperature reading using a DS18B20 sensor on a Raspberry Pi. My problem is that reading data from this sensor takes time. It's not much, roughly 1 s, but I cannot allow my main program to wait until it is done. I don't need the 'most recent value'. It's temperature, so I'm going to ask for it every minute or so, but if the sensor takes a measurement every 10 s, that value will be recent enough. In the meantime I have to process other requests made to the application. So I am thinking of some kind of endless measurement loop. In general it would look like:
> start time measurement
> get DS18B20 value from 1 wire
> parse output
> stop measure time
> get the execution time, and put it in some global variable
> sleep for UPDATE_EVERY_X - execution time
So I thought of using fork(), but this creates zombies when the main process exits. The main application is a kind of server, so most of the time it won't exit gently, so I need some additional protection, and it should not be a Linux-only method. I'm trying to write my app as portably as I can.
My second thought is to use threads: dispatch one thread to do this infinite loop, and implement a basic producer-consumer scheme with mutexes etc. That thread would only lock the shared temperature once the measurement is done, so the blocking time would be negligible. A rough sketch of what I mean is below.
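Something like this is what I have in mind for the thread option (a rough sketch only, assuming POSIX threads; readSensor() is a placeholder for the actual ~1 s 1-Wire read and parse):

#include <pthread.h>
#include <unistd.h>

#define UPDATE_EVERY_X 10          /* seconds between measurements */

static float latestTemperature;
static pthread_mutex_t tempLock = PTHREAD_MUTEX_INITIALIZER;

float readSensor(void);            /* placeholder: the slow 1-Wire read + parse */

static void *measureLoop(void *arg)
{
    (void)arg;
    for (;;) {
        float t = readSensor();          /* the slow part runs outside the lock */
        pthread_mutex_lock(&tempLock);   /* lock only while publishing the result */
        latestTemperature = t;
        pthread_mutex_unlock(&tempLock);
        sleep(UPDATE_EVERY_X);           /* minus the measured execution time, if it has to be exact */
    }
    return NULL;
}

float getLatestTemperature(void)         /* called from the main/server loop, barely blocks */
{
    pthread_mutex_lock(&tempLock);
    float t = latestTemperature;
    pthread_mutex_unlock(&tempLock);
    return t;
}

The main program would start this once with pthread_create(&tid, NULL, measureLoop, NULL) and afterwards just call getLatestTemperature() whenever it needs a value.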
The third option is to use asynchronous I/O, but that is kind of magic to me right now. I have not used it before, but it keeps showing up in search results.
And for clarification, this is not strictly about the 1-Wire DS18B20, but about the general approach when you need to do a task every x seconds and share information between processes, a bit like timer interrupts on embedded systems.
Best regards, voodoo16.

If you want the simplest option, remember that the "Convert T" and "Read Scratchpad" commands are two distinct steps. Your program can start a conversion, then come back for the value later. There's no need to stall the program, unless you want to get the temperature value the exact instant the conversion finishes.
void mainLoop(void) {
    while (1) {
        // Do things

        // Get the result of the previous conversion
        int16_t rawTemperature = readScratchpad(myTemperatureSensor);

        if (!getTemperature(myTemperatureSensor)) { // Start the next conversion
            // The previous conversion was not done, throw out the value
            rawTemperature = INT16_MAX;
        }
        else {
            float temperature = (float)rawTemperature / 4.0f; // scale the raw reading to degrees C
            saveTemperature(temperature);
        }

        // Do other things while the temperature conversion happens
    }
}
In this case, getTemperature issues a "Convert T" command and returns 0 or 1 according to the datasheet (0 if the prior conversion wasn't done, 1 if a new conversion has been started).

Related

Increment an output signal in LabVIEW

I have a high-voltage control VI and I'd like it to increase the output voltage by a user-set increment every x seconds. At the moment I have a timed sequence outside the main while loop, but it never starts; when it's inside the while loop it delays all the other functions. I'm afraid I'm such a beginner at this that I can't post a picture yet. All that needs to happen is an increase in voltage by x amount every y seconds. Is there a way to fix this, or a better way of doing it? I'm open to suggestions! Thanks!
Eric,
Without seeing the code, I am guessing that you have the two loops in series (i.e. the start of the while loop depends upon an output of the timed loop; this is the only way that one loop could block the other). If this is the case, decouple the two loops so that they are not directly dependent on each other.
If the while loop is dependent on user input, then use an event structure and then pass the new parameters via a queue (this would be your producer-consumer pattern).
Also, get rid of the timed loop and replace it with a while loop. The timed loop is only simulated on non-real-time machines, and it can disrupt the deterministic features of a real-time system. Given that you are looking at sending out a signal on the order of seconds, it is absolutely not necessary.
Anyways, if I am off base, please throw the code in question up so that we can review it.
Cheers, Matt

Generic Microcontroller Delay Function

Can someone please tell me how this function works? I'm using it in code and have an idea how it works, but I'm not 100% sure exactly. I understand the concept of the input variable N counting down, but how the heck does it work? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), do I have to "zero" the function if I used it somewhere else?
Reference: MILLISEC is a constant defined as Fcy/10000, i.e. the system clock divided by 10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 second delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;
    while (N--)                          // one pass per requested millisecond
        for (j = 0; j < MILLISEC; j++)   // burn roughly one millisecond of CPU cycles
            ;
}
This is referred to as busy waiting: a technique that just burns CPU cycles, thus "waiting" by keeping the CPU "busy" with empty loops. You don't need to reset the function; it does the same thing every time it is called.
If you call it with N=3, it will repeat the while loop 3 times, each time counting j from 0 up to MILLISEC, which is presumably a constant that depends on the CPU clock.
The original author of the code has timed it and looked at the generated assembler to find the number of instructions executed per millisecond, and has set the MILLISEC constant so that the inner for loop busy-waits for about one millisecond.
The input parameter N is then simply the number of milliseconds the caller wants to wait, i.e. the number of times the for loop is executed.
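For example (a trivial sketch; doSomething() is just a placeholder), repeated calls with different values of N need no reset, because nothing persists between calls:

DelayNmSec(5);      /* busy-waits for roughly 5 ms   */
doSomething();
DelayNmSec(250);    /* busy-waits for roughly 250 ms */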
The code will break if:
it is used on a different or faster microcontroller (depending on how FCY is maintained), or
the optimization level of the C compiler is changed, or
the C compiler version is changed (as it may generate different code),
so if the person who wrote it was clever, there may be a calibration program which defines and configures the MILLISEC constant.
This is what is known as a busy wait, in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems, in that the computation needs to be adjusted for processors of different speeds. Old games used this approach, and I remember a simulation using this busy-wait technique, targeted at an old 8086-class processor, to make an animation move smoothly. When the game was run on a Pentium PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what it was.
This sort of busy wait means that the running thread just sits in a computation loop, counting down the requested number of milliseconds. The result is that the thread does nothing else while counting down.
If the operating system is not a preemptive multitasking OS, then nothing else will run until the countdown completes, which may cause problems for other threads and tasks.
If the operating system is preemptive multitasking, the resulting delays will vary, as control is switched to some other thread for a period of time before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.

Waiting a maximum of X time for input, then proceeding with the program?

Hello, I'm creating a game in C.
I want a frame to be printed every 0.1 seconds. During that time, the user may or may not provide input via getch().
How do I write such a program? Here's what I can offer you guys to work with:
do {
    usleep(100000);        // simple 100 millisecond delay
    if (getch() == 32)     // 32 is ASCII for a space; may or may not be input within the 0.1 s timeframe
        playerJumps();
    // even if the user inputs early, I still want the game printed exactly every 0.1 s, not sooner or later
    printGame();
} while (notDead);
I really hope I kept the code nice and clear.
I've done this before; you're going to have to say what platform you are on. All of the C library input functions block while waiting for input. One way to do it is with threads: one thread blocks on the user input, the other runs the game and gets notified by the input thread when there is input. The other way is to use a function like poll() on Linux, which I believe is what I used; it basically lets you specify a wait period, or just check whether there is input and return immediately if there isn't. I think select() should also work, and that should be relatively cross-platform. A rough sketch with select() is below.
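A rough POSIX sketch of the select() route (it assumes your getch() setup has already put the terminal into non-canonical mode so single key presses arrive immediately, and it reuses the placeholder names playerJumps, printGame and notDead from the question; error handling omitted):

#include <sys/select.h>
#include <unistd.h>

/* Wait up to 'usec' microseconds for a key on stdin.
   Returns nonzero if input is waiting, 0 if the timeout expired. */
int inputWithin(long usec)
{
    fd_set readfds;
    struct timeval tv;

    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);
    tv.tv_sec  = 0;
    tv.tv_usec = usec;

    return select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv) > 0;
}

do {
    /* spend (at most) the frame time waiting for input; note that if a key
       arrives early the frame is printed early - for an exact 0.1 s cadence
       you would keep waiting for the time remaining until the frame deadline */
    if (inputWithin(100000) && getch() == 32)   /* 32 = ASCII space */
        playerJumps();
    printGame();
} while (notDead);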

Disrupt Sleep() on Windows in C

I am writing a Gif animator in C.
I have two threads running in parallel. The first allows the user to alter the speed of the animation. The second draws the current frame and then calls Sleep(Constant * 100 / CurrentSpeed), where CurrentSpeed is a percentage, ranging from 1 to 200.
The problem is that if you quickly change the speed from 100% to 1%, and then back again, the second thread will execute the following:
Sleep(Constant * 100)
This will draw frame A, wait many seconds (even though the speed was changed by the user), and only then draw frame B and the following frames at the default speed.
It seems to me that Sleep is a poor choice in this case. What can I do to solve this problem?
EDIT:
The code I currently have (Simplified):
while (1) {
    InvalidateRect(Handle, &ImageRect, FALSE);
    if (shouldDispose) {
        break;
    }
    if (DelayTime)
        Sleep(DelayTime * 100 / CurrentSpeed);
    SelectNextImage();
}
Instead of calling Sleep() with the desired frame delay, why don't you call it with a constant interval of 1 ms, for example, and use a variable as a counter?
For example, let C be a global counter variable which is loaded with a number of 1 ms 'ticks'. Then write the loop:
while (1) {                   // main loop of the player thread
    if (C > 0) C--;
    if (C == 0) nextframe();  // when the counter reaches 0, load the next frame
    Sleep(1);
}
The control thread loads C with a number of 1 ms ticks (i.e. the frame delay), and the player thread is never blocked for more than 1 ms. The use of 1 ms as the base rate is arbitrary: use the coarsest tick that still allows your maximum frame rate, in order to load the CPU as little as possible.
EDIT
After some hot comments (arguing is good, after all), I'd like to point out that this solution is sub-optimal: it doesn't use any OS mechanism for signalling threads, nor any other API for preventing the thread from wasting CPU time. The solution shown here is generic; it may be used on any system, even on embedded systems without any running OS. But above all, it is based on the original code posted by the asker: using Sleep(), how can I achieve my purpose? I gave him my humble answer. Anyway, I encourage other people to write sample code using the appropriate API for achieving the same goal. No hard feelings; special thanks to Martin James.
Find a synchronization API on your OS that allows a wait with a timeout, e.g. WaitForSingleObject() on Windows. If you want to change the delay, change the timeout and signal the event upon which the WFSO is waiting, to make it return 'early' and restart the wait with the new timeout; a rough sketch is below.
Polling with Sleep(1) loops is rarely justifiable.
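A rough sketch of that idea (Win32; the event and thread names are mine, SelectNextImage() comes from the question, and error handling is omitted):

#include <windows.h>

void SelectNextImage(void);              /* from the question's code */

HANDLE g_speedChanged;                   /* auto-reset event: CreateEvent(NULL, FALSE, FALSE, NULL) */
volatile LONG g_currentSpeed = 100;      /* percentage 1..200, written by the control thread */

DWORD WINAPI playerThread(LPVOID arg)
{
    DWORD delayTime = (DWORD)(ULONG_PTR)arg;   /* base frame delay */
    for (;;) {
        DWORD timeout = delayTime * 100 / (DWORD)g_currentSpeed;
        DWORD r = WaitForSingleObject(g_speedChanged, timeout);
        if (r == WAIT_TIMEOUT)
            SelectNextImage();           /* the delay elapsed normally: show the next frame */
        /* WAIT_OBJECT_0: the control thread changed g_currentSpeed and called
           SetEvent(g_speedChanged), so loop around and wait with the new timeout */
    }
    return 0;
}

The control thread simply sets g_currentSpeed and then calls SetEvent(g_speedChanged), so a long wait started at the old speed is cut short immediately.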
Create a waitable timer. When you set the timer, you can specify a callback function that will run in the setting thread's context. This means you can do it with two threads, but it actually works just fine with only a single thread as well.
The main advantage of a waitable timer, however, is that it is more accurate and more reliable than Sleep. A timer is conceptually quite different from Sleep: Sleep only gives up control, and the scheduler marks the thread as ready to run when the time is up and when the scheduler runs anyway. It does nothing beyond that, which means the thread will eventually be scheduled to run again, like any other thread that is ready.
A thread that is waiting on a timer (or another waitable object) causes the scheduler to run when the timer expires, and has its priority temporarily boosted. It therefore runs not only more reliably and closer to the desired time, but also earlier than all other threads with the same base priority. This does not give a realtime guarantee, but at least a sort of "soft guarantee". A minimal sketch is below.
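A minimal sketch of the single-threaded waitable-timer version (Win32; SelectNextImage() is from the question, error handling omitted):

#include <windows.h>

void SelectNextImage(void);              /* from the question's code */

static VOID CALLBACK nextFrameApc(LPVOID arg, DWORD lowVal, DWORD highVal)
{
    (void)arg; (void)lowVal; (void)highVal;
    SelectNextImage();                   /* advance one frame per timer tick */
}

void runAnimation(LONG periodMs)
{
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
    LARGE_INTEGER due;
    due.QuadPart = -(LONGLONG)periodMs * 10000;   /* relative due time, in 100 ns units */

    SetWaitableTimer(timer, &due, periodMs, nextFrameApc, NULL, FALSE);

    for (;;)
        SleepEx(INFINITE, TRUE);         /* alertable wait: the timer callback (an APC) runs here */
}

When the speed changes, call SetWaitableTimer() again with the new period; it reprograms the same timer.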
If you still want to use Sleep, use SleepEx instead which you can alert, either by queueing an APC, or by calling the undocumented NtAlertThread function.
In any case, Sleep is troublesome not only because it is unreliable, but also because it is based on the granularity of the system-wide timer. You can, of course, set that as low as 1 ms (or less on some systems), but that will cause a lot of unnecessary interrupts.

ioctl and execution time

I have a program running two threads - they communicate using message queues.
In one thread, I call ioctl() to access the hardware decryptor. The code goes like:
void Decrypt(void)
{
    ...
    if (<condition 1>)
    {
        ...
        retVal = ioctl(...);
        comesInHere1++;
    }
    if (<condition 2>)
    {
        ...
        retVal = ioctl(...);
        comesInHere2++;
    }
}
comesInHere1 and comesInHere2 are used to count the number of times execution enters each if block.
The entire program takes 80 ms to execute. But if I comment out the test variables (comesInHere1, comesInHere2 inside the if blocks), the execution time increases by 8 ms, to 88 ms!
How is that possible? I can't comment out the variables now since that increases the time taken, but I can't keep them either - I will get killed in code review :)
Kindly let me know
Thanks
Cache? It's possible that by adding a bit more data you're shifting things onto different cache lines: without the counters, data used by the two threads may end up sharing a cache line and thrashing (each thread's writes keep invalidating the line the other thread is using). You could experiment by running on different systems and by adding padding between variables that are used exclusively by each thread, as in the sketch below.
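A sketch of the padding experiment (purely illustrative; it assumes 64-byte cache lines and that the structure starts on a cache-line boundary, and the field names are made up):

struct {
    unsigned long usedOnlyByThreadA;
    char pad[64 - sizeof(unsigned long)];   /* spacer so the next field lands on a new cache line */
    unsigned long usedOnlyByThreadB;
} shared;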
What happens if you serialize the processing onto a single core?
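On Linux, for example, that is easy to try without code changes: running the program under taskset -c 0 ./yourprogram (the program name is a placeholder) pins everything to one core, so you can see whether the timing anomaly disappears.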

Resources