I have this situation.
**Update: my code now works the way I want it to. I have clarified the code as requested (full program below).**
If I give
Start time: 2:30:30
Stop time: 2:30:25
I should get
elapsed: 23:59:55
Get it? It crossed midnight, into the next day.
That's what I wanted, and it works!
I have these five if statements with logically related conditions.
The program gives the desired output, but is it possible to combine these if statements in some way (other than using OR operators and making huge conditions), such as nested ifs or maybe conditional operators?
//time elapsed program
//including support for time crossing midnight into the next day
#include <stdio.h>

struct time
{
    int hour;
    int minute;
    int second;
};

struct time timeElapsed(struct time, struct time);

int main()
{
    struct time start, stop, elapse;

    printf("Enter start time (hh:mm:ss) : ");
    scanf("%d:%d:%d", &start.hour, &start.minute, &start.second);
    printf("Enter stop time (hh:mm:ss) : ");
    scanf("%d:%d:%d", &stop.hour, &stop.minute, &stop.second);

    elapse = timeElapsed(start, stop);
    printf("The time elapsed is : %.2d:%.2d:%.2d", elapse.hour, elapse.minute, elapse.second);
    return 0;
}

struct time timeElapsed(struct time begin, struct time end)
{
    struct time elapse;

    if(end.hour < begin.hour)
        end.hour += 24;
    if(end.hour == begin.hour && end.minute < begin.minute)
        end.hour += 24;
    if(end.hour == begin.hour && end.minute == begin.minute && end.second < begin.second)
        end.hour += 24;

    if(end.second < begin.second)
    {
        --end.minute;
        end.second += 60;
    }
    if(end.minute < begin.minute)
    {
        --end.hour;
        end.minute += 60;
    }

    elapse.second = end.second - begin.second;
    elapse.minute = end.minute - begin.minute;
    elapse.hour = end.hour - begin.hour;

    return elapse;
}
Logically you are comparing whether the end time is earlier than the begin time.
If you can convert the three numbers to one via a mapping that preserves the order then you will be able to use a single comparison.
In this case, converting to the total number of seconds stands out:
if ( end.hour * 3600L + end.minute * 60 + end.second
< begin.hour * 3600L + begin.minute * 60 + begin.second )
This may or may not be more efficient than your original code. If you are going to do this regularly then you could make an inline function to convert a time to the total seconds.
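If you do that regularly, a minimal sketch of such a helper might look like the following (the name to_seconds is just an illustration, not from the original code):

/* Map a struct time to a single value that preserves ordering. */
static inline long to_seconds(struct time t)
{
    return t.hour * 3600L + t.minute * 60L + t.second;
}

/* The first three ifs in timeElapsed() then collapse to one test: */
if (to_seconds(end) < to_seconds(begin))
    end.hour += 24;   /* stop time is on the next day */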
So the first three tests amount to checking for "end < begin". Assuming the fields are already validated to be within range (i.e. .minute and .second both 0..59) why not convert to seconds and compare them directly, e.g.:
if((end.hour * 3600 + end.minute * 60 + end.second) <
(begin.hour * 3600 + begin.minute * 60 + begin.second))
To me this is more obvious as source, and on a modern CPU it probably generates better code (i.e. assuming that branches are expensive and integer multiplication is cheap).
To see how the two approaches compare here's a Godbolt version of two such comparison functions compiled with Clang 3.9 -O3 (21 vs 11 instructions, I didn't count the code bytes or try to guesstimate execution time).
https://godbolt.org/g/Ki3CVL
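The two styles being compared there are roughly as follows (a reconstruction for illustration only; the exact functions behind the link may differ, and the names are made up):

/* Field-by-field comparison, mirroring the original three ifs. */
int end_before_begin_fields(struct time end, struct time begin)
{
    if (end.hour != begin.hour)     return end.hour   < begin.hour;
    if (end.minute != begin.minute) return end.minute < begin.minute;
    return end.second < begin.second;
}

/* Single comparison after scaling both times to seconds. */
int end_before_begin_seconds(struct time end, struct time begin)
{
    return (end.hour * 3600 + end.minute * 60 + end.second)
         < (begin.hour * 3600 + begin.minute * 60 + begin.second);
}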
The code below differs a bit from the OP's logic, but I think this matches the true goal (perhaps not).
Subtract like terms and scale. By subtracting before scaling, overflow opportunities are reduced.
#define MPH 60
#define SPM 60
int cmp = ((end.hour - begin.hour)*MPH + (end.minute - begin.minute))*SPM +
(end.second - begin.second);
if (cmp < 0) Time_After();
else if (cmp > 0) Time_Before();
else Time_Same();
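Folding this back into the question's timeElapsed() might look like the following sketch (reusing the MPH and SPM macros above; this is an illustration, not the answer's exact code):

/* Sketch only: one signed comparison replaces the first three ifs. */
int cmp = ((end.hour - begin.hour) * MPH + (end.minute - begin.minute)) * SPM
          + (end.second - begin.second);
if (cmp < 0)
    end.hour += 24;   /* end is earlier in the day, so it is on the next day */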
Keeping your approach, I would do as follows. This is not much different, but there are two main branches: one where end definitely does not need to be modified because it happens after begin or at the same time (so that the difference is 0:00:00), and another where fields are adjusted to take account of the modular arithmetic.
struct time timeElapsed (struct time begin, struct time end)
{
    if ((end.hour >= begin.hour) &&
        (end.minute >= begin.minute) &&
        (end.second >= begin.second)) {
        /* Every field of end is greater than or equal to begin: nothing to adjust. */
    } else {
        /* Add a day only if end really is earlier than begin (midnight was crossed);
           otherwise only the borrows below are needed. */
        if (end.hour < begin.hour ||
            (end.hour == begin.hour &&
             (end.minute < begin.minute ||
              (end.minute == begin.minute && end.second < begin.second))))
            end.hour += 24;
        if (end.second < begin.second) {
            --end.minute;
            end.second += 60;
        }
        if (end.minute < begin.minute) {
            --end.hour;
            end.minute += 60;
        }
    }
    struct time elapsed = {
        end.hour - begin.hour,
        end.minute - begin.minute,
        end.second - begin.second
    };
    return elapsed;
}
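Either variant can be sanity-checked against the example from the question; a throwaway snippet, assuming the rest of the program from the question:

struct time start = { 2, 30, 30 };   /* 2:30:30 */
struct time stop  = { 2, 30, 25 };   /* 2:30:25 */
struct time e = timeElapsed(start, stop);
printf("%.2d:%.2d:%.2d\n", e.hour, e.minute, e.second);   /* prints 23:59:55 */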
I'm currently tracking the analog value of a photodetector coming into my system. The signal itself is cleaned, filtered (low pass and high pass), and amplified in hardware before coming into my system. The signal has a small amount of DC walk to it, which is giving me some trouble. I've attempted to just move the min up by 1% every 50 reads of the ADC, but it adds more noise than I'd like to my signal. Here's a snapshot of what I'm pulling in below (blue = signal, max/min average = green, red = min). The spikes in the red signal can be ignored; that's something I'm doing to say when a certain condition is met.
Right now my function for tracking min is this:
//Determine if value is outside max or min
if(data > max) max = data;
if(data < min) min = data;

//Reset function to bring the bounds in every 50 cycles
if(rstCntr >= 50){
    rstCntr = 0;
    max = max/1.01;
    min = min*1.01;
    if(min <= 1200) min = 1200;
    if(max >= 1900) max = 1900;
}
That works fine, except that when I do that 1% correction to make sure we are still tracking the signal, it throws off other functions which rely on the average value and the min value. My objective is to determine that:
1. We are on the negative slope of the signal
2. Data coming in is less than the average
3. Data coming in is 5% above the minimum
It is really #3 that is driving everything else. There is enough slack in the other two that they aren't that affected.
Any suggestions for a better way to track the max and min in real-time than what I'm doing?
EDIT: Per comment by ryyker: here is additional information and reproducible example code
What needs to be described more clearly: I'm reading an analog signal approximately once every 2 ms and determining whether that signal has crossed a threshold just above its minimum value. The signal has some DC walk in it, which doesn't allow me to simply set the lowest value seen since power-on as the minimum value.
The question: On a reading-by-reading basis, how can I track the min of a signal that doesn't have a consistent minimum value?
int main(void) {
    while (1)
    {
        //******************************************************************************
        //** Process analog sensor data, calculate HR, and trigger solenoids
        //** At some point this should probably be moved to a function call in System.c,
        //** but I don't want to mess with it right now since it works (Adam 11/23/2022)
        //******************************************************************************

        //Read Analog Data for Sensor
        data = ADC1_ReadChannel(7);

        //Buffer the sensor data for peak/valley detection
        for(int buf = 3; buf > 0; buf--){
            dataBuffer[buf] = dataBuffer[buf-1];
        }
        dataBuffer[0] = data;

        //Look for a valley
        //Considered a valley if the 3 most recent data points are increasing
        //This helps avoid noise in the signal
        uint8_t count = 0;
        for(int buf = 0; buf < 3; buf++) {
            if(dataBuffer[buf] > dataBuffer[buf+1]) count++;
        }
        if(count >= 3) currentSlope = true;  //if the last 3 points are increasing, we just passed a valley
        else currentSlope = false;           //not a valley

        // Track the data stream max and min to calculate a signal average
        // The signal average is used to determine when we are on the bottom end of the waveform.
        if(data > max) max = data;
        if(data < min) min = data;
        if(rstCntr >= 50){  //Make sure we are tracking the signal by moving min and max in every 50 samples
            rstCntr = 0;
            max = max/1.01;
            min = min*1.01;
            if(min <= 1200) min = 1200;  //average*.5; //Probably finger was removed from sensor, move back up
            if(max >= 1900) max = 1900;  //Need to see if this really works consistently
        }
        rstCntr++;

        average = ((uint16_t)min + (uint16_t)max)/2;
        trigger = min;  //Variable is only used for debug output, resetting each time around

        if(data < average &&
           currentSlope == false &&             //falling edge of signal
           data <= (((average-min)*.03)+min))   //Threshold above the min
        {
            FireSolenoids();
        }
    }
    return 1;
}
EDIT2:
Here is what I'm seeing using the code posted by ryyker below. The green line is what I'm using as my threshold, which works fairly well, but you can see max and min don't track the signal.
EDIT3:
Update with edited min/max code. Not seeing it ever reach the max. Might be the window size is too small (set to 40 in this image).
EDIT4:
Just for extra clarity, I'm restating my objectives once again, hopefully to make things as clear as possible. It might be helpful to provide a bit more context around what the information is used for, so I'm doing that also.
Description:
I have an analog sensor which measures a periodic signal in the range of 0.6Hz to 2Hz. The signal's periodicity is not consistent from pulsewave to pulsewave. It varies +/- 20%. The periodic signal is used to determine the timing of when a valve is opened and closed.
Objective:
1. The valve needs to be opened a constant number of ms after the signal peak is reached, but the time it physically takes the valve to move is much longer than this constant number. In other words, opening the valve when the peak is detected means the valve opens too late.
2. Similar to 1, using the valley of the signal is also not enough time for the valve to physically open.
3. The periodicity of the signal varies enough that it isn't possible to use the peak-to-peak time from the previous two pulsewaves to determine when to open the valve.
4. I need to consistently determine a point on the negative sloped portion of the pulsewave to use as the trigger for opening the valve.
Approach:
My approach is to measure the minimum and maximum of the signal and then set a threshold above the minimum which I can use to determine the time to open the valve.
My thought is that setting some constant percentage above the minimum will get me to a consistent location on the negative slope which can be used to open the valve.
"On a reading-by-reading basis, how can I track the min of a signal that doesn't have a consistent minimum value?"
By putting each discrete signal sample through a moving window filter, and performing statistical operations on the window as it moves, standard deviation can be extracted (following mean and variance) which can then be combined with mean to determine the minimum allowed value for each point of a particular waveform. This assumes noise contribution is known and consistent.
The following implementation is one way to consider.
in header file or top of .c
//support for stats() function
#include <math.h>  //for pow() and sqrt() used in stats()

#define WND_SZ 10
int wnd_sz = WND_SZ;

typedef struct stat_s{
    double arr[WND_SZ];
    double min;      //mean - std_dev
    double max;      //mean + std_dev
    double mean;     //running
    double variance; //running
    double std_dev;  //running
} stat_s;

void stats(double in, stat_s *out);
in .c (edit to change max and min)
// void stats(double in, stat_s *out)
// Used to monitor a continuous stream of sensor values.
// Accepts a series of measurement values from a sensor.
// Each new input value is stored in array element [i % wnd_sz],
// where wnd_sz is the width of the sample array.
// Instantaneous values for max and min as well as
// moving values of mean, variance, and standard deviation
// are derived once per input.
void stats(double in, stat_s *out)
{
    double sum = 0, sum1 = 0;
    int j = 0;
    static int i = 0;

    out->arr[i % wnd_sz] = in;  //array index values cycle within window size

    //sum all elements of moving window array
    for(j = 0; j < wnd_sz; j++)
        sum += out->arr[j];
    //compute mean
    out->mean = sum / (double)wnd_sz;

    //sum squares of diff between each element and mean
    for (j = 0; j < wnd_sz; j++)
        sum1 += pow((out->arr[j] - out->mean), 2);
    //compute variance
    out->variance = sum1 / (double)wnd_sz;
    //compute standard deviation
    out->std_dev = sqrt(out->variance);

    //EDIT here:
    //mean +/- std_dev
    out->max = out->mean + out->std_dev;
    out->min = out->mean - out->std_dev;
    //END EDIT

    //keep the index bounded for long running sessions.
    i = (i + 1) % 1000;
}
int main(void)
{
    stat_s s = {0};
    bool running = true;
    double val = 0.0;

    while(running)
    {
        //read one sample from some sensor
        val = someSensor();
        stats(val, &s);
        // collect instantaneous and running data from s
        // into variables here
        if(some exit condition) break;
    }
    return 0;
}
Using this code with 1000 bounded pseudo random values, the mean is surrounded with traces depicting mean + std_dev and mean - std_dev. As std_dev becomes smaller over time, the traces converge toward the mean signal:
Note: I used the following in my test code to produce data arrays of a signal with constant amplitude added to injected noise that diminishes in amplitude over time.
void gen_data(int samples)
{
    srand(clock());

    int i = 0;
    int plotHandle[6] = {0};
    stat_s s = {0};
    double arr[5][samples];
    memset(arr, 0, sizeof arr);

    for(i = 0; i < samples; i++)  //simulate ongoing sampling of sensor
    {
        s.arr[i%wnd_sz] = 50 + rand()%100;
        if(i < .20*samples)      s.arr[i%wnd_sz] = 50 + rand()%100;
        else if(i < .40*samples) s.arr[i%wnd_sz] = 50 + rand()%50;
        else if(i < .60*samples) s.arr[i%wnd_sz] = 50 + rand()%25;
        else if(i < .80*samples) s.arr[i%wnd_sz] = 50 + rand()%12;
        else                     s.arr[i%wnd_sz] = 50 + rand()%6;

        stats(s.arr[i%wnd_sz], &s);

        arr[0][i] = s.mean;
        arr[1][i] = s.variance;
        arr[2][i] = s.std_dev;
        arr[3][i] = s.min;
        arr[4][i] = s.max;
    }
    // Plotting algorithms deleted for brevity.
}
I have to measure the elapsed time across multiple threads. I must get an output like this:
Starting Time | Thread Number
00000000000 | 1
00000000100 | 2
00000000200 | 3
Firstly, I used gettimeofday, but I saw some negative numbers; then I did a little research and learned that gettimeofday is not reliable for measuring elapsed time. So I decided to use clock_gettime(CLOCK_MONOTONIC).
However, there is a problem. When I use seconds to measure time, I cannot measure it precisely. When I use nanoseconds, the end.tv_nsec field cannot exceed 9 digits (since it is a long variable). That means that when it should move to a 10th digit, it stays at 9 digits and the number actually gets smaller, causing the elapsed time to be negative.
That is my code:
long elapsedTime;
struct timespec end;
struct timespec start2;

//gettimeofday(&start2, NULL);
clock_gettime(CLOCK_MONOTONIC, &start2);

while(c <= totalCount)
{
    if(strcmp(algorithm, "FCFS") == 0)
    {
        printf("In SErunner count=%d \n", count);
        if(count > 0)
        {
            printf("Count = %d \n", count);
            it = deQueue();
            c++;
            tid = it->tid;
            clock_gettime(CLOCK_MONOTONIC, &end);
            usleep(1000*(it->value));
            elapsedTime = (end.tv_sec - start2.tv_sec);
            printf("Process of thread %d finished with value %d\n", it->tid, it->value);
            fprintf(outputFile, "%ld %d %d\n", elapsedTime, it->value, it->tid+1);
        }
    }
Unfortunately, timespec does not have a microseconds field. If you can help me, I will be very happy.
Write a helper function that calculates the difference between two timespecs:
int64_t difftimespec_ns(const struct timespec after, const struct timespec before)
{
    return ((int64_t)after.tv_sec - (int64_t)before.tv_sec) * (int64_t)1000000000
         + ((int64_t)after.tv_nsec - (int64_t)before.tv_nsec);
}
If you want it in microseconds, just divide it by 1000, or use:
int64_t difftimespec_us(const struct timespec after, const struct timespec before)
{
    return ((int64_t)after.tv_sec - (int64_t)before.tv_sec) * (int64_t)1000000
         + ((int64_t)after.tv_nsec - (int64_t)before.tv_nsec) / 1000;
}
Remember to include <inttypes.h>, so that you can use conversion "%" PRIi64 to print integers of int64_t type:
printf("%09" PRIi64 " | 5\n", difftimespec_ns(after, before));
To calculate the delta (elapsed time), you need to perform a subtraction between two timeval or two timespec structures, depending on the services you are using.
For timeval, there is a set of operations to manipulate struct timeval in <sys/time.h> (e.g. /usr/include/x86_64-linux-gnu/sys/time.h):
# define timersub(a, b, result)                        \
  do {                                                 \
    (result)->tv_sec = (a)->tv_sec - (b)->tv_sec;      \
    (result)->tv_usec = (a)->tv_usec - (b)->tv_usec;   \
    if ((result)->tv_usec < 0) {                       \
      --(result)->tv_sec;                              \
      (result)->tv_usec += 1000000;                    \
    }                                                  \
  } while (0)
For timespec, if you don't have them installed in your header files, copy something like the macro defined in this source code:
#define timespecsub(tsp, usp, vsp)                      \
  do {                                                  \
    (vsp)->tv_sec = (tsp)->tv_sec - (usp)->tv_sec;      \
    (vsp)->tv_nsec = (tsp)->tv_nsec - (usp)->tv_nsec;   \
    if ((vsp)->tv_nsec < 0) {                           \
      (vsp)->tv_sec--;                                  \
      (vsp)->tv_nsec += 1000000000L;                    \
    }                                                   \
  } while (0)
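A small usage sketch of the timespecsub macro (variable names are illustrative):

/* Assumes <stdio.h>, <time.h>, and the timespecsub macro above. */
struct timespec start, stop, delta;

clock_gettime(CLOCK_MONOTONIC, &start);
/* ... work being measured ... */
clock_gettime(CLOCK_MONOTONIC, &stop);

timespecsub(&stop, &start, &delta);
printf("elapsed: %lld.%09ld s\n", (long long)delta.tv_sec, delta.tv_nsec);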
You could convert the time to a double value using some code such as:
double
clocktime_BM (clockid_t clid)
{
    struct timespec ts = { 0, 0 };
    if (clock_gettime (clid, &ts))
        return NAN;
    return (double) ts.tv_sec + 1.0e-9 * ts.tv_nsec;
}
The returned double value is a time in seconds. On most machines, doubles are IEEE 754 floating-point numbers, and basic operations on them are fast (less than a µs each). Read floating-point-gui.de for more about them. In 2020, x86-64 based laptops and servers have an HPET. Don't expect microsecond precision on time measurements (since Linux runs many processes, and they might get scheduled at arbitrary times; read a good textbook about operating systems for explanations).
(The above code is from Bismon, funded through CHARIOT; something similar appears in RefPerSys.)
On Linux, be sure to read syscalls(2), clock_gettime(2), errno(3), time(7), vdso(7).
Consider studying the source code of the Linux kernel and/or of the GNU libc and/or of musl-libc. See LinuxFromScratch and OSDEV and kernelnewbies.
Be aware of The year 2038 problem on some 32 bits computers.
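Returning to the helper above, a minimal usage sketch (illustrative only; assumes <stdio.h> is included):

double t0 = clocktime_BM(CLOCK_MONOTONIC);
/* ... work being measured ... */
double t1 = clocktime_BM(CLOCK_MONOTONIC);
printf("elapsed: %.6f seconds\n", t1 - t0);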
I am new to C programming, but experienced in Java. I am creating a simple console application to calculate time between two chosen values. I am storing the chosen values in an int array like this:
static int timeVals[] = {748,800,815,830,845,914,929,942,953,1001,1010,1026,1034,1042,1048};
I am calling a method diff to calculate the time between two values like this:
int diff (int start, int slut) {
    /* slut means end in danish. not using rude words, but I am danish */
    int minutes = 0;
    int time = 0;
    double hest;

    hest = (100/60);
    printf("hest = %f", hest);

    if(timeVals[start] > timeVals[slut])
    {
        minutes = timeVals[start] - timeVals[slut];
        /* printing hest to see what the value is */
        printf("t = %f", hest);
        time = minutes * (100/60);
        printf("minut diff: %d\n", time);
    }
    else
    {
        minutes = timeVals[slut] - timeVals[start];
        time = time + (minutes * (100/60));
    }
    return time;
}
The weird thing is that when I print out the hest value I get 1.000000, which I think isn't right... I have been struggling with this for hours now, and I can't find the issue... maybe I'm just bad at math :P
Hope you can help me.
The issue is
hest = (100/60)
This result will be 1 because 100 / 60 = 1.6666..., but this is integer division, so you will lose your decimals, so hest = 1. Use
hest = (100.0 / 60.0)
Same with
time = minutes * (100/60);
Changed to
time = minutes * (100.0 / 60.0);
In this case again, you will lose your decimals because time is an int.
Some would recommend, if speed is an issue, that you perform all calculations in integers and store all your items as ints in 1/100ths of a second (i.e. 60 minutes in 1/100ths of a second = 60 minutes * 60 seconds * 100).
EDIT: Just to clarify, the link is for C++, but the same principles apply. But on most x86 based systems this isn't as big of a deal as it is on power-limited embedded systems. Here's another link that discusses this issue.
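For example, a tiny standalone demonstration of the difference (not part of the question's program):

#include <stdio.h>

int main(void)
{
    printf("100 / 60     = %d\n", 100 / 60);       /* integer division: 1 */
    printf("100.0 / 60.0 = %f\n", 100.0 / 60.0);   /* floating point: 1.666667 */
    return 0;
}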
The following statement effectively just multiplies minutes by 1:
time = minutes * (100/60);
(100 / 60) == 1 because 100 and 60 are integers. You must write this:
time = (minutes * 100) / 60;
For instance if minutes == 123 time will be calculated as (123 * 100) / 60 which is 12300 / 60 which is 205.
As stated, you are mixing floating-point and integer arithmetic. When you divide two integers, your result is an integer, but you are trying to print that result as a float. You might consider using the modulus operator (%) and computing the quotient and remainder:
int // you might want to return float, since you are commingling int and float for time
diff (int start, int slut)
{
    /* slut means end in danish. not using rude words, but I am danish */
    int minutes = 0, seconds = 0, diff;
    int time = 0;
    double hest = (100.0) / (60.0); //here
    printf("hest = %f", hest);

    if(timeVals[start] > timeVals[slut])
    {
        diff = timeVals[start] - timeVals[slut];
        time = diff * (100.0/60.0);
        minutes = diff/60;
        seconds = diff%60; //modulus (remainder after division by 60)
    }
    else
    {
        diff = timeVals[slut] - timeVals[start];
        time = time + diff * (100.0/60.0);
        minutes = diff/60;
        seconds = diff%60; //modulus (remainder after division by 60)
    }

    /* printing hest to see what the value is */
    printf("t = %f", hest);
    printf("minut:seconds %d:%02d\n", minutes, seconds);
    printf("minut diff: %d\n", time);
    return time;
}
I'm trying to calculate the time offset to be added to subtitle files to correct the lag. The part shown below is after tokenizing the hh:mm:ss,uuu (uuu stands for milliseconds) into the time[] array. I'm converting the time into milliseconds, then adding the actual & lag time to get the final time.
The program computes the actual & lag time properly. However, it gives the wrong final hour time. Have I hit upon some overflow condition that can't be handled by the code below?
Edit: I have realized the error. I should be dividing rather than taking the remainder for the hour time.
int i;
int time[4];
unsigned long totalTime, totalLagTime;
...
for(i = 0; i < 4; i++)
{
    printf("time[%d] = %d\n", i, time[i]);
}
for(i = 0; i < 4; i++)
{
    printf("lag time[%d] = %d\n", i, lagTime[i]);
}

totalTime = 1000*(3600*time[0] + 60*time[1] + time[2]) + time[3];
printf("total time is %lu in milliseconds\n", totalTime);

totalLagTime = 1000*(3600*lagTime[0] + 60*lagTime[1] + lagTime[2]) + lagTime[3];
printf("total lag time is %lu in milliseconds\n", totalLagTime);

totalTime += totalLagTime;
printf("Now, total time is %lu in milliseconds\n", totalTime);

time[0] = totalTime % 3600000;
printf("hour time is %d\n", time[0]);
Test case:
00:01:24,320
time[0] = 0
time[1] = 1
time[2] = 24
time[3] = 320
lag time[0] = 10
lag time[1] = 10
lag time[2] = 10
lag time[3] = 10
total time is 84320 in milliseconds
total lag time is 36610010 in milliseconds
Now, total time is 36694330 in milliseconds
hour time is 694330
Shouldn't that be
time[0] = totalTime / 3600000;
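For completeness, a hedged sketch of recovering all four fields from the millisecond total (mirroring the question's variables, purely as an illustration):

/* Sketch: split totalTime (milliseconds) back into hh:mm:ss,mmm. */
time[0] = totalTime / 3600000;            /* hours        */
time[1] = (totalTime % 3600000) / 60000;  /* minutes      */
time[2] = (totalTime % 60000) / 1000;     /* seconds      */
time[3] = totalTime % 1000;               /* milliseconds */

With the test case above this yields 10:11:34,330.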
You have a logic error: 36694330 mod 3600000 really is 694330.
What are you trying to do, exactly?
I have a bug in this program, and I keep coming back to these two functions, but they look right to me. Anything wrong here?
long visual_time_get_msec(VisTime *time_)
{
    visual_log_return_val_if_fail(time_ != NULL, 0);

    return time_->tv_sec * 1000 + time_->tv_usec / 1000;
}

int visual_time_set_from_msec(VisTime *time_, long msec)
{
    visual_log_return_val_if_fail(time_ != NULL, -VISUAL_ERROR_TIME_NULL);

    long sec = msec / 1000;
    long usec = 0;

    visual_time_set(time_, sec, usec);

    return VISUAL_OK;
}
Your first function is rounding down, so that 1.000999 seconds is rounded to 1000ms, rather than 1001ms. To fix that (make it round to nearest millisecond), you could do this:
long visual_time_get_msec(VisTime *time_)
{
    visual_log_return_val_if_fail(time_ != NULL, 0);

    return time_->tv_sec * 1000 + (time_->tv_usec + 500) / 1000;
}
Fuzz has already pointed out the truncation in your second example - the only thing I would add is that you can simplify it a little using the modulo operator:
long sec = msec / 1000;
long usec = (msec % 1000) * 1000;
(The above all assume that you're not dealing with negative timevals - if you are, it gets more complicated).
visual_time_set_from_msec doesn't look right...
If someone calls visual_time_set_from_msec(time, 999), then your struct will be set to zero, rather than to 999,000 µs.
What you should do is:
// Calculate number of seconds
long sec = msec / 1000;
// Calculate remaining microseconds after the number of seconds is taken into account
long usec = (msec - 1000*sec) * 1000;
It really depends on your inputs, but that's my 2 cents :-)