Converting integers to minutes - C

I am new to C programming, but experienced in Java. I am creating a simple console application to calculate the time between two chosen values. I am storing the chosen values in an int array like this:
static int timeVals[] = {748,800,815,830,845,914,929,942,953,1001,1010,1026,1034,1042,1048};
I am calling a function diff to calculate the time between two values, like this:
int diff(int start, int slut)
{
    /* slut means end in Danish. Not using rude words, but I am Danish */
    int minutes = 0;
    int time = 0;
    double hest;
    hest = (100 / 60);
    printf("hest = %f", hest);
    if (timeVals[start] > timeVals[slut])
    {
        minutes = timeVals[start] - timeVals[slut];
        /* printing hest to see what the value is */
        printf("t = %f", hest);
        time = minutes * (100 / 60);
        printf("minut diff: %d\n", time);
    }
    else
    {
        minutes = timeVals[slut] - timeVals[start];
        time = time + (minutes * (100 / 60));
    }
    return time;
}
The weird thing is that when I print out the hest value I get 1.000000, which I don't think is right... I have been struggling with this for hours now, and I can't find the issue... maybe I'm just bad at math :P
hope you can help me

The issue is
hest = (100/60)
The result is 1: 100 / 60 = 1.6666..., but this is integer division, so the decimals are discarded and hest = 1. Use
hest = (100.0 / 60.0)
Same with
time = minutes * (100/60);
Changed to
time = minutes * (100.0 / 60.0);
In this case again, you will lose your decimals because time is an int.
Some would recommend, if speed is an issue, performing all calculations in integers and storing all your values as ints in 1/100ths of a second (e.g. 60 minutes in 1/100ths of a second = 60 minutes * 60 seconds * 100).
EDIT: Just to clarify, the link is for C++, but the same principles apply. On most x86-based systems this isn't as big a deal as it is on power-limited embedded systems. Here's another link that discusses this issue.

The following statement is effectively a no-op:
time = minutes * (100/60);
(100 / 60) == 1 because 100 and 60 are integers, so it just assigns minutes to time. You must write this:
time = (minutes * 100) / 60;
For instance, if minutes == 123, time will be calculated as (123 * 100) / 60, which is 12300 / 60, which is 205.

As stated, you are mixing floating-point and integer arithmetic. When you divide two integers, your result is an integer, but you are trying to print that result as a float. You might consider using the modulus operator (%) and computing the quotient and remainder:
int  /* you might want to return float, since you are comingling int and float for time */
diff(int start, int slut)
{
    /* slut means end in Danish. Not using rude words, but I am Danish */
    int minutes = 0, seconds = 0, diff;
    int time = 0;
    double hest = (100.0) / (60.0);  /* here */
    printf("hest = %f", hest);
    if (timeVals[start] > timeVals[slut])
    {
        diff = timeVals[start] - timeVals[slut];
        time = diff * (100.0 / 60.0);
        minutes = diff / 60;
        seconds = diff % 60;  /* modulus (remainder after division by 60) */
    }
    else
    {
        diff = timeVals[slut] - timeVals[start];
        time = time + diff * (100.0 / 60.0);
        minutes = diff / 60;
        seconds = diff % 60;  /* modulus (remainder after division by 60) */
    }
    /* printing hest to see what the value is */
    printf("t = %f", hest);
    printf("minut:seconds %d:%02d\n", minutes, seconds);
    printf("minut diff: %d\n", time);
    return time;
}

Related

Is there another way to calculate the time a function takes to run, without using clock()?

void play() {
    ...
    int f;
    ...
    if (per == 4) {
        f = clock() / CLOCKS_PER_SEC;
        int min, sec;
        min = f / 60;
        sec = f % 60;
        printf("You have won the game in %d turns and %d:%02d!!!", turn, min, sec);
        break;
    }
    ...
}
Standard timing methods including clock() are provided in time.h. One function that might be used is time():
time_t start = time(NULL) ;
// Something that takes time here...
// Elapsed time in minutes+seconds.
int f = time(NULL) - start ;
int min = f / 60 ;
int sec = f % 60 ;
Non-standard means may be provided by your platform such as Windows performance counters.
Strictly speaking, the return value of time() need not be in seconds, and the difftime() function should be used rather than time(NULL) - start; but you are unlikely to encounter a system where time_t is not in seconds, and difftime() returns a double, which may not be desirable in some cases:
int f = (int)difftime( time(NULL), start ) ;
int min = f / 60 ;
int sec = f % 60 ;

Combine if statements with logically related conditions

I have this situation.
**Updated my code and it now works exactly the way I want it to. *Clarified the code as requested (full program).
If I give
Start time: 2:30:30
Stop time: 2:30:25
I should get
elapsed: 23:59:55
Get it? It crossed midnight... into the next day...
That's what I wanted, and it works!
I have these five if statements with logically related conditions.
The program gives the desired output, but is it possible to combine these if statements in any way (other than using OR operators and making huge conditions), e.g. with nested ifs or maybe conditional operators?
//time elapsed program
//including support for time crossing midnight into the next day
#include <stdio.h>

struct time
{
    int hour;
    int minute;
    int second;
};

struct time timeElapsed(struct time, struct time);

int main()
{
    struct time start, stop, elapse;
    printf("Enter start time (hh:mm:ss) : ");
    scanf("%d:%d:%d", &start.hour, &start.minute, &start.second);
    printf("Enter stop time (hh:mm:ss) : ");
    scanf("%d:%d:%d", &stop.hour, &stop.minute, &stop.second);
    elapse = timeElapsed(start, stop);
    printf("The time elapsed is : %.2d:%.2d:%.2d", elapse.hour, elapse.minute, elapse.second);
    return 0;
}

struct time timeElapsed(struct time begin, struct time end)
{
    struct time elapse;
    if (end.hour < begin.hour)
        end.hour += 24;
    if (end.hour == begin.hour && end.minute < begin.minute)
        end.hour += 24;
    if (end.hour == begin.hour && end.minute == begin.minute && end.second < begin.second)
        end.hour += 24;
    if (end.second < begin.second)
    {
        --end.minute;
        end.second += 60;
    }
    if (end.minute < begin.minute)
    {
        --end.hour;
        end.minute += 60;
    }
    elapse.second = end.second - begin.second;
    elapse.minute = end.minute - begin.minute;
    elapse.hour = end.hour - begin.hour;
    return elapse;
}
Logically you are comparing whether the end time is earlier than the begin time.
If you can convert the three numbers to one via a mapping that preserves the order then you will be able to use a single comparison.
In this case, converting to the total number of seconds stands out:
if ( end.hour * 3600L + end.minute * 60 + end.second
< begin.hour * 3600L + begin.minute * 60 + begin.second )
This may or may not be more efficient than your original code. If you are going to do this regularly then you could make an inline function to convert a time to the total seconds.
So the first three tests amount to checking for "end < begin". Assuming the fields are already validated to be within range (i.e. .minute and .second both 0..59) why not convert to seconds and compare them directly, e.g.:
if((end.hour * 3600 + end.minute * 60 + end.second) <
(begin.hour * 3600 + begin.minute * 60 + begin.second))
To me this is more obvious as source, and on a modern CPU it probably generates better code (i.e. assuming that branches are expensive and integer multiplication is cheap).
To see how the two approaches compare here's a Godbolt version of two such comparison functions compiled with Clang 3.9 -O3 (21 vs 11 instructions, I didn't count the code bytes or try to guesstimate execution time).
https://godbolt.org/g/Ki3CVL
The code below differs a bit from the OP's logic, but I think it matches the true goal (perhaps not).
Subtract like terms and scale. By subtracting before scaling, opportunities for overflow are reduced.
#define MPH 60
#define SPM 60
int cmp = ((end.hour - begin.hour) * MPH + (end.minute - begin.minute)) * SPM +
          (end.second - begin.second);
if (cmp < 0) Time_After();
else if (cmp > 0) Time_Before();
else Time_Same();
Keeping your approach, I would do as follows. This is not much different, but there are two main branches: one where end definitely does not need a midnight adjustment because it happens after begin (or at the same time, so that the difference is 0:0:0), and another where 24 hours are added to take account of modular arithmetic.
struct time timeElapsed(struct time begin, struct time end)
{
    if ((end.hour > begin.hour) ||
        (end.hour == begin.hour && end.minute > begin.minute) ||
        (end.hour == begin.hour && end.minute == begin.minute &&
         end.second >= begin.second)) {
        /* end is greater than or equal to begin, no midnight wrap to adjust for. */
    } else {
        end.hour += 24;
    }
    /* Borrow from the next-larger field where needed, as in the original. */
    if (end.second < begin.second) {
        --end.minute;
        end.second += 60;
    }
    if (end.minute < begin.minute) {
        --end.hour;
        end.minute += 60;
    }
    struct time elapsed = {
        end.hour - begin.hour,
        end.minute - begin.minute,
        end.second - begin.second
    };
    return elapsed;
}

PIC Microcontroller using C

I am trying to get this code to work in MM:SS:FFFFFF, where MM is minutes, SS seconds and FFFFFF microseconds, but my minutes are not working properly. Instead of getting something like 01:05:873098 I get 00:65_873098. Thanks for any tip.
#include <prototype.h>

int16 overflow_count;

#int_timer1
void timer1_isr() {
    overflow_count++;
}

void main() {
    int32 time;
    setup_timer_1(T1_INTERNAL | T1_DIV_BY_1);
    enable_interrupts(int_timer1);
    while (TRUE) {
        enable_interrupts(global);
        while (input(PUSH_BUTTON));   //Wait for press
        set_timer1(0);
        overflow_count = 0;
        while (!input(PUSH_BUTTON));  //WAIT FOR RELEASE
        disable_interrupts(global);
        time = get_timer1();
        time = time + ((int32)overflow_count << 16);
        time -= 15;                   //subtract overhead
        printf("Time is %02lu:%02lu.%06lu minutes.\r\n",
               time/1000000000, (time/6000000), (time/5)%1000000);
    }
}
I would suggest that you introduce some intermediate variables like "ticks", "microsecs", "secs", and "mins". Do the calculations step by step, from smallest unit to largest, remembering to subtract off each part before converting the next larger part. Make sure the units work out at each step of the conversion (e.g. don't add or subtract values that have different units). Think about how you'd do it with pencil and paper: probably not the way you've written it! When you break it down like that, it will be easier to get the logic correct.
You aren't ever subtracting out the whole parts.
If time is the time in milliseconds, something like:
time = time % (24 * 60 * 60 * 1000); // mod out any extra days
int hours = time / (60 * 60 * 1000);
time = time % (60 * 60 * 1000);      // or time -= hours * (60 * 60 * 1000)
int min = time / (60 * 1000);
time = time % (60 * 1000);           // or time -= min * (60 * 1000)
...

Of subtitles & lag times (yet another C overflow doubt)

I'm trying to calculate the time offset to be added to subtitle files to correct the lag. The part shown below comes after tokenizing the hh:mm:ss,uuu (uuu stands for milliseconds) into the time[] array. I'm converting the time into milliseconds, then adding the actual & lag time to get the final time.
The program computes the actual & lag time properly. However, it gives the wrong final hour time. Have I hit upon some overflow condition that can't be handled by the code below?
Edit: I have realized the error. I should be dividing rather than taking the remainder for the hour time.
int i;
int time[4];
unsigned long totalTime, totalLagTime;
...
for (i = 0; i < 4; i++)
{
    printf("time[%d] = %d\n", i, time[i]);
}
for (i = 0; i < 4; i++)
{
    printf("lag time[%d] = %d\n", i, lagTime[i]);
}
totalTime = 1000 * (3600 * time[0] + 60 * time[1] + time[2]) + time[3];
printf("total time is %lu in milliseconds\n", totalTime);
totalLagTime = 1000 * (3600 * lagTime[0] + 60 * lagTime[1] + lagTime[2]) + lagTime[3];
printf("total lag time is %lu in milliseconds\n", totalLagTime);
totalTime += totalLagTime;
printf("Now, total time is %lu in milliseconds\n", totalTime);
time[0] = totalTime % 3600000;
printf("hour time is %d\n", time[0]);
Test case:
00:01:24,320
time[0] = 0
time[1] = 1
time[2] = 24
time[3] = 320
lag time[0] = 10
lag time[1] = 10
lag time[2] = 10
lag time[3] = 10
total time is 84320 in milliseconds
total lag time is 36610010 in milliseconds
Now, total time is 36694330 in milliseconds
hour time is 694330
Shouldn't that be
time[0] = totalTime / 3600000;
You have a logic error: 36694330 mod 3600000 really is 694330.
What are you trying to do, exactly?

are these msec<->timeval functions correct?

I have a bug in this program, and I keep coming back to these two functions, but they look right to me. Anything wrong here?
long visual_time_get_msec(VisTime *time_)
{
visual_log_return_val_if_fail(time_ != NULL, 0);
return time_->tv_sec * 1000 + time_->tv_usec / 1000;
}
int visual_time_set_from_msec(VisTime *time_, long msec)
{
visual_log_return_val_if_fail(time_ != NULL, -VISUAL_ERROR_TIME_NULL);
long sec = msec / 1000;
long usec = 0;
visual_time_set(time_, sec, usec);
return VISUAL_OK;
}
Your first function is rounding down, so that 1.000999 seconds is rounded to 1000ms, rather than 1001ms. To fix that (make it round to nearest millisecond), you could do this:
long visual_time_get_msec(VisTime *time_)
{
visual_log_return_val_if_fail(time_ != NULL, 0);
return time_->tv_sec * 1000 + (time_->tv_usec + 500) / 1000;
}
Fuzz has already pointed out the truncation in your second example - the only thing I would add is that you can simplify it a little using the modulo operator:
long sec = msec / 1000;
long usec = (msec % 1000) * 1000;
(The above all assume that you're not dealing with negative timevals - if you are, it gets more complicated).
visual_time_set_from_msec doesn't look right...
If someone calls visual_time_set_from_msec(time, 999), then your struct will be set to zero, rather than to 999,000 us.
What you should do is:
// Calculate the number of seconds
long sec = msec / 1000;
// Calculate the remaining microseconds after the seconds are accounted for
long usec = (msec - 1000 * sec) * 1000;
It really depends on your inputs, but that's my 2 cents :-)
