This question already has answers here:
How to measure time in milliseconds using ANSI C?
How can I get the Windows system time with millisecond resolution?
We want to calculate the time a player has taken to finish the game, but with time.h we can only measure whole seconds, and that is not precise enough. Is it possible to get the time in milliseconds, and what is the matching printf format specifier?
There is no portable way to get sub-second resolution in standard C (C11 later added timespec_get), so the best you can do is use the POSIX function gettimeofday().
A quick answer using standard C's clock():
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t t1, t2;

    t1 = clock();
    int i;
    for (i = 0; i < 1000000; i++) {
        int x = 90; /* dummy work; an optimizing compiler may remove this loop */
        (void)x;
    }
    t2 = clock();

    /* clock() returns CPU time in ticks; divide by CLOCKS_PER_SEC to get
       seconds, then multiply by 1000 for milliseconds. */
    float diff = ((float)(t2 - t1) / CLOCKS_PER_SEC) * 1000.0f;
    printf("%f ms\n", diff);
    return 0;
}
If you're on a Unix-like system, use gettimeofday and convert the result from microseconds to milliseconds.
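For example, a minimal sketch of the gettimeofday() approach (POSIX only; the timed section here is just a placeholder):

#include <stdio.h>
#include <sys/time.h> /* POSIX, not standard C */

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* ... run the game here ... */
    gettimeofday(&end, NULL);

    /* Fold seconds and microseconds into one millisecond count. */
    long elapsed_ms = (end.tv_sec - start.tv_sec) * 1000L
                    + (end.tv_usec - start.tv_usec) / 1000L;
    printf("elapsed: %ld ms\n", elapsed_ms); /* %ld matches a long */
    return 0;
}

This also answers the format-specifier question: a long millisecond count prints with %ld.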
This question already has answers here:
Why does printf not flush after the call unless a newline is in the format string?
What is it with printf() sending output to buffer?
I am learning C and am currently trying to understand why, in the code below, the while loop appears to run before the for loop.
The code should display a 3-digit random number and then wait five seconds.
srand(time(NULL)); /* needs <stdlib.h> and <time.h> */

time_t rnd_number_length = 3;
for (int i = 0; i < rnd_number_length; i++) {
    printf("%d ", rand() % 10); /* one random digit at a time, no newline */
}

clock_t waiting_time = clock() + 5 * CLOCKS_PER_SEC;
while (clock() < waiting_time) {} /* busy-wait for about 5 seconds */
However, when I compile and run this code, the program waits first and only then displays the random digits. Why is this happening?
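As the linked duplicates explain, stdout is typically line-buffered: the digits are printed without a newline, so they sit in the output buffer while the busy-wait runs and only appear when the program exits. A minimal fix (my own sketch) is to flush explicitly before waiting:

/* Push the buffered digits out before the busy-wait starts. */
fflush(stdout);

clock_t waiting_time = clock() + 5 * CLOCKS_PER_SEC;
while (clock() < waiting_time) {}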
I am trying to do some repeated multiplication.
When the iteration count goes above 1 million, the completion time starts to go up. Why?
#include <time.h>
#include <stdio.h>
float num;
unsigned long i, j;
clock_t start, end;
int main(void)
{
start = clock();
for (j = 0; j<10000000; j++){
num = 1.000001E30f;
for (i = 0; i<100; i++){
num = num * 0.999915454854432f;
if (num == 0){
printf("zero\n");
}
}
//printf("%e\n", num);
//printf("%ld\n", j);
}
end = clock();
float cpu_time_used = ((float)(end - start))/CLOCKS_PER_SEC;
printf("%f", cpu_time_used);
return 0;
}
Compiled with GCC 7.3 on Windows 10
You keep multiplying an accumulator by 0.999915454854432f, bringing the value closer and closer to zero. You might be getting so close to zero that it falls into the subnormal (denormal) range. Subnormals often trigger much slower paths in floating-point hardware and can be a source of surprising performance problems. Just a wild guess!
See the "Performance Issues" section in the above Wikipedia page.
This question already has answers here:
Get the current time in C
I'm doing a C programming course at uni and one of the tasks is to create an OpenGL analogue clock on the screen. I want to drive the hour, minute, and second hands from the actual time.
How do I read the hour, minute, and seconds from system time? As integers would be best. I've had a look around and can't find anything quite like what I'm after.
You should use localtime, like this:
#include <time.h>
time_t timestamp = time(NULL);
if (timestamp != (time_t)-1) {
struct tm *t = localtime(&timestamp);
// Use t->tm_hour, t->tm_min and t->tm_sec, which are ints
}
See man 3 localtime for more info.
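As a usage sketch for the clock assignment (the angle formulas are my own addition, not part of the answer above), the tm fields map onto hand angles like this:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t timestamp = time(NULL);
    struct tm *t = localtime(&timestamp);
    if (t != NULL) {
        /* Hand angles in degrees, clockwise from 12 o'clock. */
        float sec_angle  = t->tm_sec * 6.0f; /* 360 degrees / 60 seconds */
        float min_angle  = t->tm_min * 6.0f + t->tm_sec * 0.1f;
        float hour_angle = (t->tm_hour % 12) * 30.0f + t->tm_min * 0.5f;
        printf("%02d:%02d:%02d -> h=%.1f m=%.1f s=%.1f\n",
               t->tm_hour, t->tm_min, t->tm_sec,
               hour_angle, min_angle, sec_angle);
    }
    return 0;
}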
This question already has answers here:
What is the time complexity of my function?
I'm struggling to find the complexity of the following function:
void what(int n) {
int i;
for (i = 1; i <= n; i++) {
int x = n;
while (x > 0)
x -= i;
}
}
Here is what I have tried. For space, I found it is only O(1), since nothing extra is allocated. For time, since the inner loop does less work as i grows, I reasoned it should be n(1 + 1/2 + 1/3 + ... + 1/n) = O(n log n). Is that correct? Thank you.
A simplistic analysis gives a time complexity of O(n log n): the inner loop runs about n/i times for each i, and the sum n(1 + 1/2 + ... + 1/n) = n·H(n) grows like n ln n. But it should be noted that the loop does not compute anything: local variable x is decremented and then discarded. A good compiler should be able to compile the whole function down to a no-op, void what(int n) {}, with a resulting complexity of O(1) for both space and time.
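As a quick empirical check (my own sketch, not part of the answer): count the decrements instead of discarding them, so the work is observable and cannot be optimized away. The totals grow like n·H(n), i.e. roughly n ln n:

#include <stdio.h>

static long long count_ops(int n)
{
    long long ops = 0;
    for (int i = 1; i <= n; i++) {
        int x = n;
        while (x > 0) {
            x -= i; /* same inner loop, but each step is now counted */
            ops++;
        }
    }
    return ops;
}

int main(void)
{
    for (int n = 1000; n <= 100000; n *= 10)
        printf("n=%6d ops=%lld\n", n, count_ops(n));
    return 0;
}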
I am trying to measure the efficiency of a C program. I am using the clock() function to calculate the time consumed in executing the program.
This is the code, but I am getting negative output:
#include <time.h>
#include <stdio.h>
int main(){
clock_t start,end,total;
start = clock();
int i;
for(i=0;i<1000;i++)
{
printf("%d\n",i);
}
end = clock();
printf("%0.2f",(start-end)/CLOCKS_PER_SEC);
return (0);
}
Use end-start, not start-end!
start - end subtracts the greater time from the lesser, which is why you're getting a negative value. It's like asking why you get a negative value when you subtract seven from three :-)
You need to subtract start from end.
As an aside, there's no guarantee that clock_t and CLOCKS_PER_SEC are floating-point types. If they're integral types, you'll end up with integer division, and passing that integer result to %f is undefined behaviour anyway. If you really want a floating-point value, use something like:
(double)(end - start) / CLOCKS_PER_SEC
There are a few other minor niggles in your code, nothing major, but a cleaner implementation in my opinion would be along the following lines:
#include <time.h>
#include <stdio.h>
int main (void) {
clock_t duration = clock();
for (int i = 0; i < 1000; i++)
printf ("%d\n", i);
duration = clock() - duration;
printf ("%0.2f seconds\n", (double)(duration) / CLOCKS_PER_SEC);
return 0;
}
As a second aside, outputting a thousand lines is likely to take about zero seconds of CPU time on modern systems. I'm assuming your payload will be a little more complex, but don't be surprised if the code above takes no CPU time at all (at least to two decimal places). In fact, I had to multiply the loop count by a hundred to get it up to 0.3 seconds on my system.
And don't think you can get a larger duration by using something like sleep() since that will almost certainly not use any CPU time.
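You can see that for yourself with a short sketch (my own example, using POSIX sleep()): clock() measures CPU time, so two seconds of sleeping registers as roughly zero.

#include <stdio.h>
#include <time.h>
#include <unistd.h> /* POSIX sleep() */

int main(void)
{
    clock_t start = clock();
    sleep(2); /* two wall-clock seconds pass, but almost no CPU time */
    clock_t end = clock();
    printf("CPU time during sleep: %.2f s\n",
           (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}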
printf("%0.2f",(end - start)/CLOCKS_PER_SEC);
It's simple maths. Do end - start.
The ending time is after starting time.
You should change it to printf("%0.2f",(end-start)/CLOCKS_PER_SEC);