adding a number to the computer's time - c

This is a program that prints the current time. How do I add numbers to the time printed? Example: the output of the program is 15:35; how do I make it print 16:35? If this isn't possible, I would like to know if there are any other methods I can use. Thanks
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    FILE *fp;
    char hc1, hc2, mc1, mc2;
    int hour, minute;

    /* Windows-specific: dump %time% (e.g. "15:35:20.12") into a file */
    system("echo %time% >time.txt");
    fp = fopen("time.txt", "r");
    if (fp == NULL)
        exit(1);

    hc1 = fgetc(fp);   /* tens digit of the hour */
    hc2 = fgetc(fp);   /* ones digit of the hour */
    fgetc(fp);         /* skip the ':' */
    mc1 = fgetc(fp);   /* tens digit of the minute */
    mc2 = fgetc(fp);   /* ones digit of the minute */
    fclose(fp);
    remove("time.txt");

    /* convert the ASCII digits ('0' is 48) to numbers */
    hour = (hc1 - '0') * 10 + (hc2 - '0');
    minute = (mc1 - '0') * 10 + (mc2 - '0');
    printf("the current time is %d:%d\n", hour, minute);
    return 0;
}

If you are writing an application that gets the current system time and performs arithmetic on it, a better approach is the timeval struct.
It is declared in the "sys/time.h" header and stores the time as both seconds and microseconds; the gettimeofday() function fills it with the current system time.
Following are the links for your reference:
Timeval struct
gettimeofday() function
NOTE: Some of the functions used with this struct are not portable and may work only on Linux based systems.

Add integers directly to your hour or minute variables similar to the two different ways here:
hour += 1;
hour = hour + 1;
Or do the literal addition in your `printf`:
printf("the Current time is %d:%d\n",hour + 1,minute);
As David commented below, be mindful of pushing the hour or minute past 23 or 59, respectively.

Check this simple function (the in/out parameters are pointers, so it is valid C):
void AddTime(int *currentH, int *currentM, int *currentS, int addH, int addM, int addS)
{
    int extraM = (*currentS + addS) / 60;           /* carry from seconds */
    *currentS = (*currentS + addS) % 60;
    int extraH = (*currentM + addM + extraM) / 60;  /* carry from minutes */
    *currentM = (*currentM + addM + extraM) % 60;
    *currentH = (*currentH + addH + extraH) % 24;   /* wrap past midnight */
}

Related

Why does my running clock measuring function give me 0 clocks?

I'm doing some exercise projects in a C book, and I was asked to write a program that uses the clock function in the C library to measure how long it takes the qsort function to sort an array that starts in reverse order. So I wrote the below:
/*
* Write a program that uses the clock function to measure how long it takes qsort to sort
* an array of 1000 integers that are originally in reverse order. Run the program for arrays
* of 10000 and 100000 integers as well.
*/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define SIZE 10000
int compf(const void *, const void *);
int main(void)
{
int arr[SIZE];
clock_t start_clock, end_clock;
for (int i = 0; i < SIZE; ++i) {
arr[i] = SIZE - i;
}
start_clock = clock();
qsort(arr, SIZE, sizeof(arr[0]), compf);
end_clock = clock();
printf("start_clock: %ld\nend_clock: %ld\n", start_clock, end_clock);
printf("Measured seconds used: %g\n", (end_clock - start_clock) / (double)CLOCKS_PER_SEC);
return EXIT_SUCCESS;
}
int compf(const void *p, const void *q)
{
return *(int *)p - *(int *)q;
}
But running the program gives me the results below:
start_clock: 0
end_clock: 0
Measured seconds used: 0
How can it be my system used 0 clock to sort an array? What am I doing wrong?
I'm using GCC included in mingw-w64 which is x86_64-8.1.0-release-win32-seh-rt_v6-rev0.
Also I'm compiling with arguments -g -Wall -Wextra -pedantic -std=c99 -D__USE_MINGW_ANSI_STDIO given to gcc.exe.
Three possible answers to your issue:
What is clock_t? Is it a normal arithmetic type like int, or some kind of struct? Make sure you are using it correctly for its data type.
What is this running on? On some platforms (most microcontrollers, for instance) the clock has to be started before you can read it. If you read it without starting it, it will always report 0, since the clock is not moving.
Is your code simply too fast to register? Is it actually taking 0 seconds (rounded down) to run? One full second is a very long time in computing: millions of lines of code can run in well under a second. Make sure your timing source can resolve small intervals (a microsecond, say), or that the code under test runs long enough to register at your clock's resolution.

time dependent uniformly distributed random number

I need a uniformly distributed random number generator...
Here is what I've tried; its output is a constant number no matter how many times I run the .exe
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int randr( int min, int max);
int main()
{
srand(time(NULL));
int rr=randr(0,10);
printf("rr=%d\n",rr);
return 0;
}
int randr( int min, int max)
{
double scaled = (double)rand()/RAND_MAX;
return (max - min +1)*scaled + min;
}
Thanks in advance.
The problem with your code is that the time function doesn't use milliseconds, so each call to your function sets the same seed and generates the same first number (assuming it's called within the same second).
One way to avoid this is to seed only once in your program (srand must be called only once); you can verify that by trying this code:
int main()
{
int a = 0;
srand(time(NULL));
for (int i=0;i<10000;i++){
int rr=randr(0,10);
a+=rr;
printf("rr=%d\n",rr);
}
printf("mean : %d\n", a/10000); // to quickly check the uniformity
return 0;
}
Another way is to use a function that can give you a different seed at each call (one based on milliseconds, for example). A possible implementation on a POSIX system:
struct timespec tmp;
clock_gettime(CLOCK_MONOTONIC,&tmp);
srand(tmp.tv_nsec);
This will be based on nanoseconds (as suggested by R..); to compile you'll probably need to link with librt (-lrt with gcc).

Get average run-time of a C program

I'm trying to measure differences in speed of reading and writing misaligned vs aligned bits into binary files. I would like to know: is there a utility I can use (except for running time over & over again and writing my own) to sample an average run-time of a program? (I'm running a Linux-based OS.)
Thanks
running time over & over again and writing my own
That's fine. You can perform the read/write ten thousand times both ways and compute the average time.
If you really want to use a library you can try Google Perftools.
Put this in a header file:
#ifndef TIMER_H
#define TIMER_H
#include <stdlib.h>
#include <sys/time.h>
typedef unsigned long long timestamp_t;
static timestamp_t
get_timestamp ()
{
struct timeval now;
gettimeofday (&now, NULL);
return now.tv_usec + (timestamp_t)now.tv_sec * 1000000;
}
#endif
Include the header file into whichever .c file you'll be using, and do something like this:
#include <stdio.h>
#include "timer.h"   /* the header above ("timer.h" is the name assumed here) */

#define N 10000

int main()
{
    int i;
    double avg;
    timestamp_t start, end;

    start = get_timestamp();
    for (i = 0; i < N; i++)
        foo();   /* the function being measured */
    end = get_timestamp();

    avg = (end - start) / (double)N;
    printf("%f\n", avg);
    return 0;
}
Basically this calls whichever function you're trying to measure N times, where N is a defined constant in this case (it doesn't have to be). It takes a timestamp before the loop and after it, then calculates the average time per call. The get_timestamp() function returns microseconds, so divide by 1000 for milliseconds, by 1000000 for seconds, and so on.

difference between two milliseconds in C

I was wondering whether there is a function in C that takes two times in the following format (current date and time in seconds since the Epoch):
a.1343323725
b.1343326383
and returns the difference between the two times as hrs:mins:secs.
Sorry for any confusion; to clarify my point, I wrote code that gets the user's current time, executes some block of code, and gets the time again. I was wondering whether the difference could be converted into hrs:mins:secs as a string.
#include <sys/time.h>
#include <stdio.h>
int main(void)
{
struct timeval tv = {0};
gettimeofday(&tv, NULL);
printf("%ld \n", tv.tv_sec);
//Execte some code
gettimeofday(&tv, NULL);
return 0;
}
First you want to get the time difference in seconds out of the timeval structures that those functions return, using something like this:
int diff = a.tv_sec-b.tv_sec;
Where a and b were the values returned by gettimeofday.
Next you want to break that down into units of hours, minutes and seconds.
int hours=diff/3600;
int minutes=(diff/60)%60;
int seconds=diff%60;
Finally we want to get that data into a string, using the snprintf function from
#include <stdio.h>
char output[10];
snprintf(output, 10, "%d:%d:%d", hours, minutes, seconds);
sprintf works exactly like printf, except the output goes into a string rather than onto stdout; snprintf is the same except it won't write more than n characters into the string, which prevents buffer overflows.
Stitch those together and you've got the job done.
unsigned diff = b-a;
printf("%u:%u:%u", diff/3600, (diff % 3600)/60, diff % 60);
diff / 3600 is hours.
(diff % 3600) / 60 is remainder minutes
(diff % 3600) % 60 equal to diff % 60 is remainder seconds
Limitation: this code doesn't work for diffs greater than about 136 years.
Note: it is a bad idea to use gettimeofday for performance measurement. The best function to use is clock_gettime with the CLOCK_MONOTONIC clock id (it is not affected by wall-clock time changes). But beware: CLOCK_MONOTONIC doesn't work in Linux kernels before 2.6. Check clock availability before use.

Calculating elapsed time in a C program in milliseconds

I want to calculate the time in milliseconds taken by the execution of some part of my program. I've been looking online, but there's not much info on this topic. Any of you know how to do this?
Best way to answer is with an example:
#include <sys/time.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <time.h>   /* strftime, localtime */
/* Return 1 if the difference is negative, otherwise 0. */
int timeval_subtract(struct timeval *result, struct timeval *t2, struct timeval *t1)
{
long int diff = (t2->tv_usec + 1000000 * t2->tv_sec) - (t1->tv_usec + 1000000 * t1->tv_sec);
result->tv_sec = diff / 1000000;
result->tv_usec = diff % 1000000;
return (diff<0);
}
void timeval_print(struct timeval *tv)
{
char buffer[30];
time_t curtime;
printf("%ld.%06ld", tv->tv_sec, tv->tv_usec);
curtime = tv->tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T", localtime(&curtime));
printf(" = %s.%06ld\n", buffer, tv->tv_usec);
}
int main()
{
struct timeval tvBegin, tvEnd, tvDiff;
// begin
gettimeofday(&tvBegin, NULL);
timeval_print(&tvBegin);
// lengthy operation
int i,j;
for(i=0;i<999999L;++i) {
j=sqrt(i);
}
//end
gettimeofday(&tvEnd, NULL);
timeval_print(&tvEnd);
// diff
timeval_subtract(&tvDiff, &tvEnd, &tvBegin);
printf("%ld.%06ld\n", tvDiff.tv_sec, tvDiff.tv_usec);
return 0;
}
Another option (at least on some UNIX systems) is clock_gettime and related functions. These give access to various real-time clocks, and you can select one of the higher-resolution ones and throw away the resolution you don't need.
The gettimeofday function returns the time with microsecond precision (if the platform can support that, of course):
The gettimeofday() function shall obtain the current time, expressed as seconds and microseconds since the Epoch, and store it in the timeval structure pointed to by tp. The resolution of the system clock is unspecified.
C libraries have a function to let you get the system time. You can calculate elapsed time after you capture the start and stop times.
The function is called gettimeofday() and you can look at the man page to find out what to include and how to use it.
On Windows, you can just do this:
DWORD dwTickCount = GetTickCount();
// Perform some things.
printf("Code took: %lums\n", GetTickCount() - dwTickCount);
Not the most general/elegant solution, but nice and quick when you need it.