Subtracting and getting seconds as input in C

I have the following code:
#include <stdio.h>
#include <time.h>
int main() {
    clock_t timerS;
    int i = 1, targetTime = 2;
    scanf("%d", &targetTime);
    while (i != 0) {
        timerS = clock();
        while ((double)((clock() - timerS) / CLOCKS_PER_SEC) < targetTime) {
            // do something
        }
        // do another thing but delayed by the given time
        if (targetTime >= 0.5)
            targetTime -= 0.02;
        else
            i = 0;
    }
    return 0;
}
What I want is a loop that does something for (initially) an inputted number of seconds, and that does another thing once targetTime seconds have passed.
After the first iteration, though, the timing should change: in this case the delay shrinks by 0.02 seconds each time around.
An example would be collecting user input for 2 seconds, and afterwards displaying everything that was entered in those 2 seconds.
First problem:
If the initial time is smaller than 1 second (for example 0.6), the other thing isn't delayed by 0.6 seconds; it happens immediately.
Second problem:
Similar to the first: when I subtract 0.02 seconds from targetTime, the other thing again happens immediately instead of after targetTime - 0.02 seconds as I intend it to.
I'm new to the "clock" and "time" topic in C, so I guess I'm doing something wrong in how these operations should be done. Also, please don't give an overly complicated explanation/solution, for the same reason.
Thanks!

Don't use clock(), as it is obsolete and has been fully superseded by machine-independent replacements.
If your system supports it, use clock_gettime(2), which gives you up to nanosecond precision (how much you actually get depends on the platform, but on Linux on Intel hardware it is nearly guaranteed). If you cannot use it, you will at least have gettimeofday(2), which comes from the BSD systems and provides a clock with microsecond resolution.
If you want to stop your program for some delay, there are also sleep(3) (second resolution), usleep(3) (microsecond resolution) and nanosleep(2) (nanosecond resolution).
Any of these interfaces has a tick that is not tied to the system heartbeat, and the resolution is uniform and not system dependent.
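For reference, a minimal sketch of waiting out a fractional delay with clock_gettime() on a POSIX system (CLOCK_MONOTONIC is used here because it cannot jump backwards; older glibc may need -lrt):
#include <stdio.h>
#include <time.h>
int main(void)
{
    struct timespec start, now;
    double target = 0.6;                   /* seconds; fractional values work fine */
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) + (now.tv_nsec - start.tv_nsec) / 1e9 < target);
    printf("waited about %.2f s\n", target);
    return 0;
}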

I mistakenly declared targetTime as int instead of double. Changing it to double solves the issue easily. Sorry!
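For completeness, a minimal sketch of the corrected fragment. Two details were not mentioned in the answer above: the scanf format must become %lf once targetTime is a double, and the cast should be applied before the division so it is done in floating point.
#include <stdio.h>
#include <time.h>
int main(void)
{
    clock_t timerS;
    double targetTime = 2;                 /* double, not int */
    scanf("%lf", &targetTime);             /* %lf reads a double */
    timerS = clock();
    /* cast before dividing, so the division is not done in integers */
    while ((double)(clock() - timerS) / CLOCKS_PER_SEC < targetTime) {
        /* do something */
    }
    return 0;
}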

Related

What is the efficient way to continuously check until a condition is true

So I have this program that continuously checks until a condition is true. My problem is that whenever I run it, my computer slows down because of the loop. Can anyone please suggest the best and most efficient way to do this? Thank you in advance for your response.
To illustrate my problem, here is a code that represents it:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <string.h>
#include <conio.h>
#include <windows.h>

int main(void) {
    time_t now;
    struct tm *local;
    while (1) {
        time(&now);
        local = localtime(&now);
        if (local->tm_min > 55) {
            printf("Time:\t%d:%d:%d\n", local->tm_hour, local->tm_min, local->tm_sec);
            getch();
            exit(0);
        }
    }
    return 0;
}
If polling is really what you want, or what you have to use, then you must give the system room to breathe by sleeping in each iteration.
So how much should you sleep per iteration? It can be a fixed value (and even sleeping just 1 millisecond is surprisingly effective). A fixed value of, say, 20-30 milliseconds is fine if you are checking for slow events such as keystrokes from a real user. If you are monitoring a serial port, you may need lower values.
Depending on the application, you can also implement a variable sleep time. For example (a bit contrived, but just to explain): you wait for keystrokes and sleep 30 milliseconds. Then you use your program in a pipe and discover that it is painfully slow. A solution could be to keep the maximum sleep at 30 ms, but after a character has been read, lower the value to 0 so that the sleep is skipped; every time the condition fails, raise the value back toward the maximum (20-30 milliseconds for a keyboard).
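A rough sketch of that adaptive idea on Windows (since the question uses windows.h): sleep while idle, skip the sleep while input keeps arriving. _kbhit()/_getch() come from <conio.h>, and the 30 ms ceiling is just the value mentioned above.
#include <stdio.h>
#include <conio.h>
#include <windows.h>
int main(void)
{
    DWORD delay = 30;                /* current sleep, in milliseconds */
    for (;;) {
        if (_kbhit()) {
            int c = _getch();        /* data available: handle it... */
            putchar(c);
            delay = 0;               /* ...and stop sleeping for now */
            if (c == 'q')
                break;
        } else {
            if (delay < 30)
                delay++;             /* idle again: back off toward the maximum */
            Sleep(delay);            /* give the CPU back to the system */
        }
    }
    return 0;
}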
-- EDIT AFTER COMMENTS --
It has been pointed out that keyboards and serial ports do not need polling, or should not be polled. Generally speaking this is true, but it depends on the hardware and operating system (which in turn is a piece of software: if the hardware does not support an interrupt for a given condition, even the OS has to poll). About keyboards, for example, I was thinking of the little ones implemented as a matrix of buttons: some small CPUs have special facilities to generate an interrupt on any I/O change, but others don't, and in that case polling is the only solution - it is also ideal for implementing debouncing (this kind of polling is not necessarily performed inside a loop).
For serial ports, it is almost true that nobody would implement one without an interrupt (to avoid polling). But even so, it is difficult to manage the incoming data in an event-driven fashion; often a flag is set, and some other part of the program, which polls that flag, works out the message.
Event-driven programming seems easy at first, but as the program gets bigger, the complexity grows too.
There are other situations to consider, for example loops which read data from somewhere and process them. If something else has to be done inside the loop, such as checking how much time has passed, but the read blocks, then the read must be made non-blocking and the whole loop turns into a kind of polling for one or more conditions - unless one uses multi-threading.
Anyway, I agree that polling is evil and should only be used when necessary.
Efficiently? One way or the other you need to put your process to sleep until the condition WILL BE TRUE - then wake up and die (so to speak :-). Since your code includes windows.h I'll assume you're running on Windows and thus have the Sleep() function available.
#include <stdio.h>
#include <windows.h>
#include <time.h>
int main(void)
{
    time_t now;
    struct tm *local;
    long msecs;

    time(&now);
    local = localtime(&now);
    /* (55 * 60000) = msecs in 55 minutes; keep it signed so the check below works */
    msecs = (55L * 60000) - ((local->tm_min * 60000L) + (local->tm_sec * 1000L));
    if (msecs > 0)
        Sleep((DWORD)msecs);
    return 0;
}

Generic Microcontroller Delay Function

Can someone please tell me how this function works? I'm using it in code and have an idea of how it works, but I'm not 100% sure exactly. I understand the concept of an input variable N counting down, but how does it actually produce the delay? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), do I have to "zero" the function if I used it somewhere else?
Reference: MILLISEC is a constant defined as Fcy/10000, i.e. the system clock divided by 10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 second delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;
    while (N--)
        for (j = 0; j < MILLISEC; j++);
}
This is referred to as busy waiting: it just burns CPU cycles, "waiting" by keeping the CPU "busy" in empty loops. You don't need to reset the function; it behaves the same every time it is called.
If you call it with N=3, it repeats the while loop 3 times, each time counting j from 0 to MILLISEC, a constant that depends on the CPU clock.
The original author of the code has timed it and looked at the generated assembler to find the exact number of instructions executed per millisecond, and has configured the constant MILLISEC so that the for loop busy-waits for that long.
The input parameter N is then simply the number of milliseconds the caller wants to wait, i.e. the number of times the for loop is executed.
The code will break if
used on a different or faster microcontroller (depending on how Fcy is maintained), or
the optimization level of the C compiler is changed, or
the compiler version is changed (as it may generate different code),
so, if the person who wrote it was clever, there may be a calibration program which defines and configures the MILLISEC constant. See the sketch below.
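A sketch of how such a calibration typically looks. The 16 MHz figure is purely an assumed example; FCY must match your actual instruction clock, as the question's reference note says, and MILLISEC must be verified against the generated code.
/* assumed instruction clock, Hz (hypothetical value) */
#define FCY      16000000UL
/* empirical loop count that burns roughly 1 ms with this compiler/clock */
#define MILLISEC (FCY / 10000)

/* usage: roughly a 250 ms pause, if MILLISEC was calibrated correctly */
DelayNmSec(250);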
This is what is known as a busy wait in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems in that on different processors with different speeds, the computation needs to be adjusted. Old games used this approach and I remember a simulation using this busy wait approach that targeted an old 8086 type of processor to cause an animation to move smoothly. When the game was used on a Pentium processor PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what the animation was.
This sort of busy wait means that the running thread sits in a computation loop counting down the requested number of milliseconds. The result is that the thread does nothing other than counting down.
If the operating system is not a preemptive multi-tasking OS, then nothing else will run until the countdown completes, which may cause problems for other threads and tasks.
If the operating system is a preemptive multi-tasking OS, the resulting delays will vary, since control is switched to some other thread for a while before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.

QueryPerformanceCounter and QueryPerformanceFrequency in Windows

#include <windows.h>
#include <stdio.h>
#include <stdint.h>
// assuming we return times with microsecond resolution
#define STOPWATCH_TICKS_PER_US 1
uint64_t GetStopWatch()
{
    LARGE_INTEGER t, freq;
    QueryPerformanceCounter(&t);
    QueryPerformanceFrequency(&freq);
    return (uint64_t)(t.QuadPart / (double)freq.QuadPart * 1000000);
}

void task()
{
    printf("hi\n");
}

int main()
{
    uint64_t start = GetStopWatch();
    task();
    uint64_t stop = GetStopWatch();
    printf("Elapsed time (microseconds): %llu\n", (unsigned long long)(stop - start));
    return 0;
}
The above uses QueryPerformanceCounter, which retrieves the current value of the high-resolution performance counter, and QueryPerformanceFrequency, which retrieves the frequency of that counter. If I call the task() function multiple times, the difference between the start and stop times varies, yet I would expect roughly the same time difference for every call. Could anyone help me identify the mistake in the above code?
The thing is, Windows is a pre-emptive multi-tasking operating system. What the hell does that mean, you ask?
'Simple' - Windows allocates time-slices to each of the running processes in the system. This gives the illusion of dozens or hundreds of processes running in parallel. In reality, you are limited to 2, 4, 8 or perhaps 16 parallel processes in a typical desktop/laptop. An Intel i3 has 2 physical cores, each of which can give the impression of doing two things at once. (But in reality, there are hardware tricks going on that switch the execution between each of the two threads that each core can handle at once.) This is in addition to the software context switching that Windows/Linux/MacOSX do.
These time-slices are not guaranteed to be of the same duration each time. You may find the PC does a sync with the Windows time service to update your clock, or that the virus scanner decides to begin working, or any one of a number of other things. All of these events may occur after your task() function has begun, yet before it ends.
In the DOS days, you'd get very nearly the same result each and every time you timed a single iteration of task(). Though, thanks to TSR programs, you could still find an interrupt was fired and some machine-time stolen during execution.
It is for just these reasons that a more accurate determination of the time a task takes to execute may be calculated by running the task N times, dividing the elapsed time by N to get the time per iteration.
For some functions in the past, I have used values for N as large as 100 million.
EDIT: A short snippet.
LARGE_INTEGER tStart, tEnd;
LARGE_INTEGER tFreq;
double tSecsElapsed;

QueryPerformanceFrequency(&tFreq);
QueryPerformanceCounter(&tStart);

int i, n = 100;
for (i = 0; i < n; i++)
{
    // Do Something
}

QueryPerformanceCounter(&tEnd);
tSecsElapsed = (tEnd.QuadPart - tStart.QuadPart) / (double)tFreq.QuadPart;

double tMsElapsed = tSecsElapsed * 1000;
double tMsPerIteration = tMsElapsed / (double)n;
Code execution time on modern operating systems and processors is very unpredictable. There is no scenario where you can be sure that the elapsed time actually measured the time taken by your code; your program may well have lost the processor to another process while it was executing. The caches used by the processor play a big role: code is always a lot slower the first time it is executed, when the caches do not yet contain the code and data used by the program. The memory bus is very slow compared to the processor.
It gets especially meaningless when you measure a printf() statement. The console window is owned by another process so there's a significant chunk of process interop overhead whose execution time critically depends on the state of that process. You'll suddenly see a huge difference when the console window needs to be scrolled for example. And most of all, there isn't actually anything you can do about making it faster so measuring it is only interesting for curiosity.
Profile only code that you can improve. Take many samples so you can get rid of the outliers. Never pick the lowest measurement, that just creates unrealistic expectations. Don't pick the average either, it is affected too much by the long delays that other processes can add to your test. The median value is a good choice.
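As a sketch of that advice, paired with the GetStopWatch()/task() functions from the question above (the helper name and the sample count are arbitrary choices, not part of the original code):
#include <stdlib.h>

#define SAMPLES 101

static int cmp_u64(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

uint64_t median_task_time_us(void)
{
    uint64_t samples[SAMPLES];
    int i;
    for (i = 0; i < SAMPLES; i++) {
        uint64_t start = GetStopWatch();
        task();
        samples[i] = GetStopWatch() - start;
    }
    qsort(samples, SAMPLES, sizeof samples[0], cmp_u64);
    return samples[SAMPLES / 2];     /* middle element of the sorted samples = median */
}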

gettimeofday clock_gettime solution to generate unique number

My process runs multiple instances (processes) and multiple threads, and all of them write to the same database. As soon as a request is placed, a unique request id is generated for the record to be added to the proprietary db. Here are our limitations: it cannot be more than 9 characters long, and it needs to have hhmmss as the first 6 characters. We decided to use milliseconds for the last 3 digits to complete the 9 characters, and we are doing all this using gettimeofday(). However, with increased traffic, there are now collisions when multiple requests are placed within the same millisecond. This, combined with the fact that gettimeofday() itself is not that accurate, is causing an increased number of collisions. I tried to use clock_gettime, but when tested it is also not that accurate, as I observed from the following test program:
We couldn't use static or global variables due to threading issues
Unable to use random numbers as they need to be sequential
Appreciate any help.
#include <stdio.h>
#include <time.h>

int main(int argc, char **argv)
{
    long i;
    struct timespec start, stop;
    double gap;

    clock_gettime(CLOCK_REALTIME, &start);
    for (i = 0; i < 123456789; i++);
    clock_gettime(CLOCK_REALTIME, &stop);

    /* elapsed time in milliseconds */
    gap = (stop.tv_sec - start.tv_sec) * 1000.0 + (stop.tv_nsec - start.tv_nsec) / 1000000.0;
    printf("%lf ms\n", gap);
    return 0;
}
The type of problem you are describing has already been more-or-less solved by issuing a UUID. This is a system that is designed to solve all the problems you mention and some more.
A linux library: http://linux.die.net/man/3/uuid
More information is available here: http://en.wikipedia.org/wiki/Universally_unique_identifier
Using a time stamp as a unique ID will never work reliably unless you limit yourself to only one transaction per lowest clock tick (1 millisecond in this case).
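For illustration, a minimal sketch using libuuid (link with -luuid); note that a textual UUID is 36 characters, so this only helps if the 9-character limit can be relaxed or the UUID is stored in binary form:
#include <stdio.h>
#include <uuid/uuid.h>
int main(void)
{
    uuid_t uu;
    char text[37];                  /* 36 characters plus the terminating NUL */
    uuid_generate(uu);              /* random- or time-based, per system policy */
    uuid_unparse(uu, text);
    printf("%s\n", text);
    return 0;
}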
Since you are stuck using a time value for the first 6 of 9 bytes, you need to fit as much range into the last 3 bytes as possible.
If you can get away with not using ASCII characters in the last 3 bytes, then you should, since restricting yourself to printable characters limits the possible values a great deal. If possible you should use these bytes as a 24-bit integer (a range of 16777216) and just have each transaction increment the counter. You could then set it back to 0 each time gettimeofday lets you know that the time has changed (or you could set up a repeating SIGALRM to tell you when to call gettimeofday again to update your time and zero the 24-bit integer).
If you are forced to use ASCII printable characters for these bytes, then things are a little more difficult. The easiest way to extend the range would be to use hexadecimal rather than decimal numbers, which grows your representable range from 1000 to 4096. You can do better with an even broader number base, though. If you tack on the first 22 letters of the alphabet (the same way that tacking on the first 6 letters gives you hex), then you can represent 32x32x32 values, which is 32768. That would be a lot of transactions per second. You can do even better by extending your numeric alphabet further, but it becomes more piecemeal, since you will probably want to restrict some characters from appearing in the value. Using a representation that strtol or strtoul can easily work with will likely be easier to program. A sketch follows this answer.
If your application is multithreaded, then you may want to consider using part of your numeric range as a thread ID and letting each thread keep its own transaction counter. This makes determining the relative time between two transactions processed by different threads harder to calculate, but it keeps the threads from all wanting to increment the same memory location (which may require a mutex or semaphore).
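A sketch of the base-32 idea. The digit set below is the one strtoul() uses for base 32, so the value can be decoded with strtoul(buf, NULL, 32); encode_seq() is a hypothetical helper, not code from the question.
#include <stdio.h>

/* Pack a per-second sequence number into 3 printable base-32 characters
   (values wrap modulo 32768). */
static void encode_seq(unsigned int seq, char out[4])
{
    static const char digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUV";
    out[3] = '\0';
    out[2] = digits[seq % 32]; seq /= 32;
    out[1] = digits[seq % 32]; seq /= 32;
    out[0] = digits[seq % 32];
}

int main(void)
{
    char buf[4];
    encode_seq(12345, buf);
    printf("%s\n", buf);            /* prints C1P: 12*1024 + 1*32 + 25 = 12345 */
    return 0;
}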
Generally, using clock time on a heavily loaded system like this with a resolution under a second is a bad idea anyhow. Threads will take their timestamp and then be descheduled in the middle of the operation, so you will see things arriving out of order.
Three characters left to encode things uniquely is not much. Try at least to use some different encoding such as base64.
If you use gcc as a compiler you have thread-local storage (TLS) as an extension, which is quite efficient: just prefix your static variable with __thread. If you are restricted to pthreads, there are thread-specific keys, too (pthread_key_create / pthread_getspecific). But it is better to keep the information on the thread's stack for as long as possible.
To obtain a per-thread counter that serves as a serial number for your requests, use:
your hhmmss timestamp, as before,
as many bits as you need to identify your threads,
and the remaining bits for the per-thread serial number, which as above should only wrap around after more than a second.
You could even cheat and yield a thread that fires too many requests within the same second.
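A rough sketch of composing such an ID with gcc's __thread (all names here are made up for the illustration; thread_tag would be assigned once per thread at startup):
#include <stdio.h>
#include <time.h>

static __thread unsigned int seq;          /* per-thread serial number */

void make_req_id(char id[10], char thread_tag)
{
    time_t now = time(NULL);
    struct tm tm_now;
    localtime_r(&now, &tm_now);
    /* hhmmss (6 chars) + 1-char thread tag + 2-digit counter = 9 chars,
       so only 100 requests per thread per second before the counter wraps */
    snprintf(id, 10, "%02d%02d%02d%c%02u",
             tm_now.tm_hour, tm_now.tm_min, tm_now.tm_sec,
             thread_tag, seq++ % 100);
}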
You could give each thread of each process a unique ID at startup; this would take only one of the 3 available characters unless you have hundreds of threads. You can then use a local per-thread counter to set the last two characters (using base64 or an even larger base, depending on which characters are allowed, to get enough range).
In this situation the only case where a collision may happen is if a thread's counter wraps within the same second.
Of course, this is a dirty hack. The Right Way would be to share a resource amongst the threads/processes. This might be the simplest solution in your case, though.

Timekeeping in Linux kernel 2.6

I've read in chapter 7 of 'Linux Device Drivers' (which can be found here) that time can be measured in 'jiffies'. The problem with the stock jiffies variable is that it wraps around quite frequently (especially if you have CONFIG_HZ set to 1000).
In my kernel module I'm saving a jiffies value that is set to some time in the future and comparing it at a later time with the current 'jiffies' value. I've learned already that there are functions that take the 32bit jiffy wrap into consideration so to compare two values I'm using this:
if (time_after(jiffies, some_future_jiffies_value))
{
// we've already passed the saved value
}
Here comes my question: So now I want to set the 'some_future_jiffies_value' to "now + 10ms". This can easily be accomplished by doing this:
some_future_jiffies_value = jiffies + msecs_to_jiffies(10);
Is this correct? What happens if the current jiffies is near MAX_JIFFY_OFFSET and the resulting value of msecs_to_jiffies(10) puts some_future_jiffies_value past that offset? Does it wrap around automatically or should I add some code to check for this? Are there functions that save me from having to deal with this?
Update:
To avoid wraparound issues I've rewritten my sleep loop:
// Sleep for the appropriate time
while (time_after(some_future_jiffies_value, jiffies))
{
    set_current_state(TASK_INTERRUPTIBLE);
    schedule_timeout(1);
}
I assume this is more portable, right?
Update 2:
Thank you very much 'ctuffli' for taking the time to come back to this question and providing some feedback on my comments as well. My kernel driver is working fine now and it is a lot less ugly compared to the situation before you provided me with all these tips. Thanks!
What you are implementing here is essentially msleep_interruptible() (linux/kernel/timer.c)
/**
 * msleep_interruptible - sleep waiting for signals
 * @msecs: Time in milliseconds to sleep for
 */
unsigned long msleep_interruptible(unsigned int msecs)
This function has the advantage that the delay is specified in milliseconds and the details of jiffies wrapping are hidden internally. Be sure to check the return value: zero means the call slept for the full number of milliseconds, while a non-zero value is the number of milliseconds remaining when the sleep was interrupted.
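A minimal sketch of using it in the driver, under the assumption of a 2.6-era kernel context (the wrapper name is made up for the example):
#include <linux/delay.h>
#include <linux/kernel.h>

static void wait_10ms_example(void)
{
    unsigned long left = msleep_interruptible(10);

    if (left)
        printk(KERN_INFO "woken by a signal about %lu ms early\n", left);
}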
With regard to wrapping, see section 6.2.1.2 for a description of jiffies and wrapping. Also, this post tries to describe wrapping in the abstract.
