Why alter curl_multi_timeout() return value? - c

This sample code contains:
curl_multi_timeout(multi_handle, &curl_timeo);
if(curl_timeo >= 0) {
    timeout.tv_sec = curl_timeo / 1000;
    if(timeout.tv_sec > 1)
        timeout.tv_sec = 1;
    else
        timeout.tv_usec = (curl_timeo % 1000) * 1000;
}
Why is tv_sec clipped to 1 second? Why isn't the value returned by curl_multi_timeout() used as-is (after dividing by 1000)?
Assuming there's a good reason for the above, is there a case where you would NOT clip the value to 1 second? What case is that?

The code is just setting a maximum wait time for the later call to select(). If anything, this looks like a bug: it looks as if the code is protecting itself from an unreasonable answer from curl_multi_timeout(). My guess is that the coder was thinking, "if the curl timeout function returns something longer than one minute, then don't wait any longer than that", and then proceeded to typo one minute as one second. It should probably be doing something like:
if (timeout.tv_sec > 60) {
    timeout.tv_sec = 60;
}
else if (timeout.tv_sec == 0) {
    timeout.tv_usec = curl_timeo * 1000;
}
The mod by 1000 is unnecessary since curl_multi_timeout() returns milliseconds, so if tv_sec is zero, that means that the returned value is in the range of 0 - 999.
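For reference, here is a minimal sketch (not the official curl example; it reuses the multi_handle and timeout variables from the sample above) of how the corrected clamp fits into a select() loop:

long curl_timeo = -1;
struct timeval timeout;

/* Fallback wait if curl has no suggestion */
timeout.tv_sec = 1;
timeout.tv_usec = 0;

curl_multi_timeout(multi_handle, &curl_timeo);
if(curl_timeo >= 0) {
    timeout.tv_sec = curl_timeo / 1000;
    timeout.tv_usec = (curl_timeo % 1000) * 1000;
    if(timeout.tv_sec > 60) {
        /* don't block in select() for longer than a minute */
        timeout.tv_sec = 60;
        timeout.tv_usec = 0;
    }
}
/* timeout is then passed as the last argument of select() */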

Related

Combine if statements with logically related conditions

I have this situation.
**Updated my code and it works the way I want it to. *Clarified the code as requested (full program below).
If I give
Start time: 2:30:30
Stop time: 2:30:25
I should get
Elapsed: 23:59:55
Get it? It crossed midnight into the next day. That's what I wanted, and it works!
I have these five if statements with logically related conditions.
The program gives the desired output, but is it possible to combine these if statements in some way (other than using OR operators and making huge conditions), for example with nested ifs or maybe conditional operators?
//time elapsed program
//including support for time crossing midnight into the next day
#include <stdio.h>

struct time
{
    int hour;
    int minute;
    int second;
};

struct time timeElapsed(struct time, struct time);

int main()
{
    struct time start, stop, elapse;

    printf("Enter start time (hh:mm:ss) : ");
    scanf("%d:%d:%d", &start.hour, &start.minute, &start.second);

    printf("Enter stop time (hh:mm:ss) : ");
    scanf("%d:%d:%d", &stop.hour, &stop.minute, &stop.second);

    elapse = timeElapsed(start, stop);
    printf("The time elapsed is : %.2d:%.2d:%.2d", elapse.hour, elapse.minute, elapse.second);

    return 0;
}

struct time timeElapsed(struct time begin, struct time end)
{
    struct time elapse;

    if(end.hour < begin.hour)
        end.hour += 24;
    if(end.hour == begin.hour && end.minute < begin.minute)
        end.hour += 24;
    if(end.hour == begin.hour && end.minute == begin.minute && end.second < begin.second)
        end.hour += 24;

    if(end.second < begin.second)
    {
        --end.minute;
        end.second += 60;
    }
    if(end.minute < begin.minute)
    {
        --end.hour;
        end.minute += 60;
    }

    elapse.second = end.second - begin.second;
    elapse.minute = end.minute - begin.minute;
    elapse.hour = end.hour - begin.hour;

    return elapse;
}
Logically you are comparing whether the end time is earlier than the begin time.
If you can convert the three numbers to one via a mapping that preserves the order then you will be able to use a single comparison.
In this case, converting to the total number of seconds stands out:
if ( end.hour * 3600L + end.minute * 60 + end.second
     < begin.hour * 3600L + begin.minute * 60 + begin.second )
This may or may not be more efficient than your original code. If you are going to do this regularly then you could make an inline function to convert a time to the total seconds.
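For example, such a helper might look like this (a sketch using the question's struct time; the function name is illustrative):

/* Map a struct time to a single ordered value: total seconds since 00:00:00. */
static inline long time_to_seconds(struct time t)
{
    return t.hour * 3600L + t.minute * 60 + t.second;
}

/* The midnight test then becomes a single comparison: */
if (time_to_seconds(end) < time_to_seconds(begin))
    end.hour += 24;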
So the first three tests amount to checking for "end < begin". Assuming the fields are already validated to be within range (i.e. .minute and .second both 0..59) why not convert to seconds and compare them directly, e.g.:
if((end.hour * 3600 + end.minute * 60 + end.second) <
   (begin.hour * 3600 + begin.minute * 60 + begin.second))
To me this is more obvious as source, and on a modern CPU it probably generates better code (i.e. assuming that branches are expensive and integer multiplication is cheap).
To see how the two approaches compare, here's a Godbolt version of two such comparison functions compiled with Clang 3.9 at -O3 (21 vs 11 instructions; I didn't count the code bytes or try to guesstimate execution time).
https://godbolt.org/g/Ki3CVL
The code below differs a bit from the OP's logic, but I think this matches the true goal (perhaps not).
Subtract like terms and scale. By subtracting before scaling, the opportunities for overflow are reduced.
#define MPH 60   /* minutes per hour */
#define SPM 60   /* seconds per minute */

int cmp = ((end.hour - begin.hour) * MPH + (end.minute - begin.minute)) * SPM +
          (end.second - begin.second);
if (cmp < 0) Time_After();
else if (cmp > 0) Time_Before();
else Time_Same();
Keeping your approach, I would do as follows. This is not much different, but there are two main branches: one where end definitely does not need to be modified because it happens after begin or at the same time (so that the difference is 0:00:00), and another where the fields are adjusted to take modular arithmetic into account.
struct time timeElapsed (struct time begin, struct time end)
{
    if ((end.hour >= begin.hour) &&
        (end.minute >= begin.minute) &&
        (end.second >= begin.second)) {
        /* end is greater than or equal to begin, field by field: nothing to adjust. */
    } else {
        /* Only add a day if end really is earlier than begin (crossed midnight). */
        if (end.hour * 3600L + end.minute * 60 + end.second <
            begin.hour * 3600L + begin.minute * 60 + begin.second)
            end.hour += 24;

        if (end.second < begin.second) {
            --end.minute;
            end.second += 60;
        }
        if (end.minute < begin.minute) {
            --end.hour;
            end.minute += 60;
        }
    }

    struct time elapsed = {
        end.hour - begin.hour,
        end.minute - begin.minute,
        end.second - begin.second
    };
    return elapsed;
}
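As a quick sanity check, here is a hypothetical driver (not part of the original answer) run against the example from the question, assuming the struct time and timeElapsed definitions above:

#include <stdio.h>

int main(void)
{
    /* start 2:30:30, stop 2:30:25 -> crosses midnight, expect 23:59:55 */
    struct time start = {2, 30, 30};
    struct time stop  = {2, 30, 25};
    struct time e = timeElapsed(start, stop);
    printf("%.2d:%.2d:%.2d\n", e.hour, e.minute, e.second);
    return 0;
}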

Is this a correct use of sleep()?

I'm using sleep like this to grab a frame every 1/25th of a second. OS is Debian 6 armel.
#define VIDEO_FRAME_RATE 25.0f

while (RECORDING) {
    sprintf(buffer, "Something from a data struct that is updated\n");
    fprintf(Output, "%s", buffer);
    usecToSleep = (1.0f / VIDEO_FRAME_RATE) * 1000000;
    usleep(usecToSleep);
}
Question: What is the guarantee that the loop will write the buffer to Output every 1/25th of a second?
Is there a better way to do this in C? I need it to be as precise as possible to prevent drifting.
Thank you.
Your "recording operation" still takes some time...
So you need to calculate the time that should be spent on one frame (usecToSleep = (1.0f/VIDEO_FRAME_RATE) * 1000000;), then calculate the time the operation really took and adjust the sleep time accordingly:
usecToSleep = (1.0f / VIDEO_FRAME_RATE) * 1000000;
lastFrameUsec = 0;

while (RECORDING) {
    sprintf(buffer, "Something from a data struct that is updated\n");
    fprintf(Output, "%s", buffer);

    // currentFrameUsec - lastFrameUsec = actual time spent on the operation
    currentFrameUsec = getUsecElapsedFromStart();
    actualSleep = usecToSleep - (currentFrameUsec - lastFrameUsec);

    // If there's time left to sleep, sleep
    if (actualSleep > 0) {
        usleep(actualSleep);
        lastFrameUsec = getUsecElapsedFromStart();
    } else {
        lastFrameUsec = currentFrameUsec;
    }
}
I'm not aware of a multi-platform getUsecElapsedFromStart(), so you will probably have to implement your own, for example like this one:
long getUsecElapsedFromStart(const struct timespec *tstart)
{
    struct timespec tnow;
    clock_gettime(CLOCK_MONOTONIC, &tnow);
    return (tnow.tv_sec - tstart->tv_sec) * 1000000L +
           (tnow.tv_nsec - tstart->tv_nsec) / 1000;
}

clock_gettime(CLOCK_MONOTONIC, &tstart);
while (RECORDING) {
    // ...
    currentFrameUsec = getUsecElapsedFromStart(&tstart);
}
In response to your first question, there is no such guarantee. usleep() promises only that it will sleep at least as long as you tell it to. But it may sleep longer:
The usleep() function suspends execution of the calling process for (at least) usec microseconds. The sleep may be lengthened slightly by any system activity or by the time spent processing the call or by the granularity of system timers.
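If drift really matters, an alternative (not covered in the answers above, and assuming a Linux system where clock_nanosleep() is available) is to sleep until an absolute deadline instead of for a relative interval, so that overshoot in one frame is not carried into the next:

#include <time.h>

#define FRAME_NS (1000000000L / 25)   /* one frame period at 25 fps */

void capture_loop(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        /* grab and write the frame here */

        /* advance the deadline by exactly one frame period */
        next.tv_nsec += FRAME_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* sleep until the absolute deadline; overshoot does not accumulate */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}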

How to check how many times a timer has ticked?

I have a timer which ticks once every second. I would like to check when it has ticked 60 times, which means a minute has passed, and have it do something.
Assuming C#, this should do the job:
private int m_Time = 0;

private void Timer_Tick(...)
{
    m_Time++;
    if (m_Time == 60)
    {
        m_Time = 0;
        // it's been 60 seconds, do whatever
    }
    // do your "every 1 second" code here
}
Essentially you make a private field that counts the number of seconds that have ticked by, then check if it's 60. If it is, a minute has passed and you can perform your logic. Then set the counter back to 0 and carry on.
Create an int field, increment it every tick, and in an if(field == 60) block you can do "something".

Creating a timeout using time and difftime

gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to get the start and end times and get the difference between them.
The function I have is for creating an API for our existing hardware.
The API function wait_events takes one argument, which is a time in milliseconds. So I get the start time before the while loop, using time() to get the number of seconds. Then after each iteration of the loop I get the time again, take the difference, and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
 * If an event occurs before the time out, return 0.
 * If it times out before an event, return -1. */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;

    /* Get the initial time */
    start = time(NULL);

    while(TRUE) {
        if(open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }

        /* Get the end time after each iteration */
        end = time(NULL);
        /* Get the difference between times */
        time_diff = difftime(start, end);
        if(time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The function that will call it will look like this.
int main(void)
{
    #define TIMEOUT 500 /* 1/2 sec */

    while(TRUE) {
        if(wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }

    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep -> 99.7% - 100% CPU
2) Setting usleep(10) -> 25% CPU
3) Setting usleep(100) -> 13% CPU
4) Setting usleep(1000) -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
You're overcomplicating it - simplified:
time_t start = time(NULL);

for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t is in general just an int or long int, depending on your platform, counting the number of seconds since January 1st, 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
Without calling a Sleep() function this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will use many CPU cycles.
You should design something like that:
while(true) {
    Sleep(100); // let's say you want a precision of 100 ms
    // Do the compare time stuff here
}
If you need precise timing and are using different threads/processes, use mutexes (semaphores with an increment/decrement of 1) or critical sections to make sure the time comparison in your function is not interrupted by another process/thread of your own.
I believe your Red Hat is a System V system, so you can synchronize using IPC.
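Putting these suggestions together, here is a sketch (my own, not taken from the answers above) of wait_events() with millisecond resolution via clock_gettime() and a short sleep to keep CPU usage down. open_device_flag, device_evt and EVENT_DEV_OPEN are assumed to exist as in the question, and the return values follow the comment block there (0 on event, -1 on timeout):

#include <time.h>
#include <unistd.h>

int wait_events(int timeout_ms)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (;;) {
        if (open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return 0;                     /* event before the timeout */
        }

        clock_gettime(CLOCK_MONOTONIC, &now);
        long elapsed_ms = (now.tv_sec - start.tv_sec) * 1000L +
                          (now.tv_nsec - start.tv_nsec) / 1000000L;
        if (elapsed_ms > timeout_ms)
            return -1;                    /* timed out before an event */

        usleep(1000);                     /* ~1 ms nap keeps CPU usage low */
    }
}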

are these msec<->timeval functions correct?

I have a bug in this program, and I keep coming back to these two functions, but they look right to me. Anything wrong here?
long visual_time_get_msec(VisTime *time_)
{
    visual_log_return_val_if_fail(time_ != NULL, 0);

    return time_->tv_sec * 1000 + time_->tv_usec / 1000;
}

int visual_time_set_from_msec(VisTime *time_, long msec)
{
    visual_log_return_val_if_fail(time_ != NULL, -VISUAL_ERROR_TIME_NULL);

    long sec = msec / 1000;
    long usec = 0;

    visual_time_set(time_, sec, usec);

    return VISUAL_OK;
}
Your first function is rounding down, so that 1.000999 seconds is rounded to 1000ms, rather than 1001ms. To fix that (make it round to nearest millisecond), you could do this:
long visual_time_get_msec(VisTime *time_)
{
    visual_log_return_val_if_fail(time_ != NULL, 0);

    return time_->tv_sec * 1000 + (time_->tv_usec + 500) / 1000;
}
Fuzz has already pointed out the truncation in your second example - the only thing I would add is that you can simplify it a little using the modulo operator:
long sec = msec / 1000;
long usec = (msec % 1000) * 1000;
(The above all assume that you're not dealing with negative timevals - if you are, it gets more complicated).
visual_time_set_from_msec doesn't look right...
If someone calls visual_time_set_from_msec(time, 999), then your struct will be set to zero, rather than to 999,000 us.
What you should do is:
// Calculate the number of seconds
long sec = msec / 1000;
// Calculate the remaining microseconds after the number of seconds is taken into account
long usec = (msec - 1000 * sec) * 1000;
It really depends on your inputs, but that's my 2 cents :-)
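To illustrate, here is a small, self-contained round-trip check of the corrected conversions (it uses a stand-in struct with tv_sec/tv_usec fields, since VisTime itself isn't shown in the question):

#include <stdio.h>

struct vis_time { long tv_sec; long tv_usec; };

static void set_from_msec(struct vis_time *t, long msec)
{
    t->tv_sec  = msec / 1000;
    t->tv_usec = (msec % 1000) * 1000;
}

static long get_msec(const struct vis_time *t)
{
    return t->tv_sec * 1000 + (t->tv_usec + 500) / 1000;
}

int main(void)
{
    struct vis_time t;
    set_from_msec(&t, 999);
    /* expected: 999 ms -> 0 s 999000 us -> 999 ms */
    printf("999 ms -> %ld s %ld us -> %ld ms\n", t.tv_sec, t.tv_usec, get_msec(&t));
    return 0;
}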
