I created a simple game in C in Xcode, but the elapsed time it reports never advances. Does anyone know how to solve this problem?
#include <stdio.h>
#include <time.h>

int main(void)
{
    long startTime = 0;
    long totalTime = 0;
    long prevtime = 0;
    int num;

    init();
    cursor = arrayFish;
    startTime = clock();
    while (1)
    {
        printFishes();
        printf("\n Which fishing port would you like to water? ");
        scanf("%d", &num);
        printf("\n");
        if (num > 6 || num < 1)
        {
            printf("Please re-enter.\n");
            continue;
        }
    }
    /* note: as posted, the loop above never exits, so this point is never reached */
    totalTime = (clock() - startTime) / CLOCKS_PER_SEC;
    printf("Total Elapsed Time: %ld\n", totalTime);
}
Everything works except the elapsed-time output. I have trimmed the code down to just the relevant part. Thank you.
You need to know which Time Stamp Counter register your clock function is reading. If it is a 32-bit register, it will wrap around very frequently.
For example, if your CPU has a 2 GHz clock, then 1 second = 2 * 1e9 cycles, and a 32-bit unsigned register can count from 0 to 2^32 - 1 cycles, which corresponds to (2^32) / (2 * 1e9) = 2.14748365 seconds.
In that case your output will always be limited to about 2.14 seconds.
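As a cross-check that doesn't depend on any cycle counter, you could time the loop with the standard time() function instead (1-second resolution). This is a minimal sketch of the idea, not the original game code; the game loop is stubbed out with a busy wait:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t startTime = time(NULL);          /* wall-clock start, 1-second resolution */

    /* ... the game loop would run here; stubbed out with a short busy wait ... */
    while (time(NULL) - startTime < 3) { }

    long totalTime = (long)(time(NULL) - startTime);
    printf("Total Elapsed Time: %ld\n", totalTime);
    return 0;
}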
The start and end times are given in 24-hour clock format. The task is to read the start and end times, compute the length of the call, and express the result in minutes.
Sample output:
Start time: 1810
End time: 2000
Length of call: 110 minutes
Here's what I tried. First, I subtract the start and end times and force the result to be positive. Then, if the total result (resultMain) is at least 120, I multiply it by 0.60; if it is at least 60 and less than 120, I subtract 40 instead of multiplying by 0.60. My problem is that the results are inconsistent: sometimes the answer is correct and sometimes it is wrong.
#include <stdio.h>
#include <math.h>
#include <string.h>
int main()
{
    int startTime, endTime, result1, result2;
    double totalTime1, totalTime2, resultMain;

    printf("\nPLDT Telephone Call Charge\n");
    printf("\nStart time\t: ");
    scanf("%d", &startTime);
    printf("End time\t: ");
    scanf("%d", &endTime);

    totalTime1 = startTime - endTime;
    resultMain = fabs(totalTime1);

    if (resultMain >= 120) {
        totalTime2 = resultMain * .60;
        result1 = ceil(totalTime2);
        result2 = fabs(result1);
        printf("Length of call\t: %d minutes\n", result2);
    } else if (resultMain >= 60 && resultMain < 120) {
        totalTime2 = resultMain - 40;
        result1 = ceil(totalTime2);
        result2 = fabs(result1);
        printf("Length of call\t: %d minutes\n", result2);
    } else {
        totalTime2 = resultMain;
        result1 = ceil(totalTime2);
        result2 = fabs(result1);
        printf("Length of call\t: %d minutes\n", result2);
    }
    return 0;
}
Example of correct answer:
Start time: 0123
End time: 0224
Length of call: 61 minutes
Example of wrong answer:
Start time: 0852
End time: 0906
Length of call: 54 minutes
Example of wrong answer:
Start time: 0805
End time: 1210
Length of call: 243 minutes
No need for any floating-point math.
Before subtracting, break each time into hours and minutes:
int startTime_hours = startTime/100;
int startTime_mins = startTime%100;
startTime_mins += startTime_hours*60; // startTime_mins is now the total minutes.
int endTime_hours = endTime/100;      // same conversion for the end time
int endTime_mins = endTime%100 + endTime_hours*60;
The difference is the end minus the start:
int diff = endTime_mins - startTime_mins;
When the difference is negative, add a day's worth of time.
Example: the start time is just before midnight and the end time is just after midnight.
if (diff < 0) {
    diff += 24*60;
}
Only one case is needed for printing:
printf("Length of call\t: %d minutes\n", diff);
You can use this algorithm to calculate the difference in minutes between two times:
const MINS_PER_HR = 60, MINS_PER_DAY = 1440
startx = starthour * MINS_PER_HR + startminute
endx = endhour * MINS_PER_HR + endminute
duration = endx - startx
if duration < 0:
    duration = duration + MINS_PER_DAY
See: Algorithm needed to calculate difference between two times
This code will implement it for you:
#include <stdio.h>

int main()
{
    int startTime, endTime, startHour, startMin, endHour, endMin, duration;

    printf("\nPLDT Telephone Call Charge\n");
    printf("\nStart time\t: ");
    scanf("%04d", &startTime);
    printf("End time\t: ");
    scanf("%04d", &endTime);

    startHour = startTime / 100;
    startMin = startTime % 100;
    endHour = endTime / 100;
    endMin = endTime % 100;

    duration = (endHour * 60 + endMin) - (startHour * 60 + startMin);
    if (duration < 0)
        duration += 60 * 24;

    printf("Length of call\t: %d minutes\n", duration);
    return 0;
}
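With the question's failing inputs this prints the expected results: 0852 to 0906 gives "Length of call: 14 minutes", and 0805 to 1210 gives "Length of call: 245 minutes".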
I need a 60 Hz timer (one trigger every 16.6 ms).
It works well on Windows (MinGW GCC) but not on Linux (GCC).
Can anyone help me with this? Thanks.
#include <stdio.h>
#include <time.h>

#define PRE_MS (CLOCKS_PER_SEC / 1000)

int main()
{
    clock_t pre = clock();
    int cnt = 0;
    printf("CLOCKS_PER_SEC = %d\n", (int)CLOCKS_PER_SEC);
    while (1)
    {
        clock_t diff = clock() - pre;
        if (diff > 16 * PRE_MS)
        {
            cnt++;
            if (cnt > 60)
            {
                printf("%d\n", (int)pre);
                cnt = 0;
            }
            pre += diff;
        }
    }
}
Output on Windows (a timestamp prints about every 1 second):
CLOCKS_PER_SEC = 1000
1020
2058
3095
4132
5169
6206
7243
8280
9317
Output on Linux (a timestamp prints about every 2 seconds):
CLOCKS_PER_SEC = 1000000
1875000
3781250
5687500
7593750
9500000
11406250
13312500
15218750
First, a misconception: 60 Hz is not 17 operations per second but 60 (one tick every ~16.7 ms).
Second, the period check reads clock() twice and discards any time taken by the printf() call in between. AFAIK CLOCKS_PER_SEC is larger on Linux than on Windows systems, so there is more chance that you are 'throwing away' clock ticks. Read the clock() once, for example:
#include <stdio.h>
#include <time.h>

int main(void)
{
    unsigned long long tickcount = 0;
    clock_t baseticks = clock();

    while (tickcount < 180) {   // run for 3 seconds at 60 Hz
        tickcount++;
        clock_t nexttick = (clock_t)(baseticks + tickcount * CLOCKS_PER_SEC / 60);
        while (clock() < nexttick) { }   // busy-wait until the next tick
        printf("Tick %llu\n", tickcount);
    }
    return 0;
}
The code works from the total elapsed time, so any intervals that are not an exact number of clock ticks average out (instead of accumulating a rounding-off error).
At some point the value from clock() will overflow/wrap, so a real implementation that runs for any length of time will have to take care of this.
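On Linux specifically, one way to sidestep clock() and its wraparound entirely is the POSIX monotonic clock. The following is a sketch beyond the original answer, assuming clock_gettime() and clock_nanosleep() are available:

#define _POSIX_C_SOURCE 200112L   /* for clock_nanosleep() */
#include <stdio.h>
#include <time.h>

/* 60 Hz tick driven by CLOCK_MONOTONIC, which does not wrap in practice.
 * Sleeping until an absolute deadline keeps errors from accumulating. */
int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (unsigned long tick = 1; tick <= 180; tick++) {   /* ~3 seconds */
        next.tv_nsec += 1000000000L / 60;   /* advance the deadline by ~1/60 s */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        printf("Tick %lu\n", tick);
    }
    return 0;
}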
Hi, I'm pretty new to coding in C, but I would love your help with the following problem.
I want to add two times that are already in 24-hour notation.
Currently they are both integers, and plain integer addition works fine for whole hours (e.g. 800 + 1000), but because the numbers are base ten, the sum does not roll over to the next hour after 60 minutes, which leads to wrong results.
I'm not sure whether the modulus operator % can solve this? Ideally I'd like to stick to simple C that I understand, and not start pulling time libraries into the program.
e.g.
#include <stdio.h>

int main(void)
{
    int time1 = 1045;                    // 10:45am in 24-hour time
    printf("Time %d ", time1);

    int time2 = 930;                     // 9 hours & 30 min
    printf("+ time %d", time2);

    int calc = time1 + time2;
    printf(" should not equal ... %d\n", calc);

    printf("\nInstead they should add to %d\n\n", 2015);  // 8:15pm in 24-hour time
    return 0;
}
Yes, you're correct that modulo division is involved; remember that it is remainder division. This would be more fitting as a comment, since supplying a complete answer for problems like this is generally frowned upon, but it's too long for that. This should get you started:
#include <stdio.h>

int main(void)
{
    // Assuming the given time has the format hhmm or hmm.
    // This program would be much more useful if these were
    // gathered as command line arguments.
    int time1 = 1045;
    int time2 = 930;

    // Integer division by 100 gives you the hours, based on the
    // assumption that the 1's and 10's places will always be
    // the minutes.
    int time1Hours = time1 / 100; // time1Hours == 10
    int time2Hours = time2 / 100; // time2Hours == 9

    // Modulus division by 100 gives the remainder of integer division,
    // which in this case gives us the minutes.
    int time1Min = time1 % 100; // time1Min == 45
    int time2Min = time2 % 100; // time2Min == 30

    // Now, add them up.
    int totalHours = time1Hours + time2Hours; // totalHours == 19
    int totalMin = time1Min + time2Min;       // totalMin == 75

    // The rest is omitted for you to finish.
    // If the total minutes exceed 60 (in this case they do), we
    // need to adjust both the total hours and the total minutes.
    // Clearly, there are 1 hour and 15 min in 75 min. How can you
    // pull 1 hour and 15 min from 75 min using integer and modulo
    // (remainder) division, given there are 60 min in an hour?

    // This problem could be taken further by adding days, weeks,
    // years (leap years become more complicated), centuries, etc.
    return 0;
}
I used this for a long time...
// convert both times hhmm to minutes and sum the minutes
// (be sure to use integer division)
int time1m = ((time1 / 100) * 60) + (time1 % 100);
int time2m = ((time2 / 100) * 60) + (time2 % 100);
int sumMin = time1m + time2m;

// convert back to hhmm
int hhmm = ((sumMin / 60) * 100) + (sumMin % 60);
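Wrapped in a hypothetical main with the question's values (the wrapper is mine, not part of the original answer), it produces the expected result:

#include <stdio.h>

int main(void)
{
    int time1 = 1045;   // 10:45
    int time2 = 930;    // 9 hours 30 minutes

    // convert both hhmm values to minutes and sum the minutes
    int time1m = ((time1 / 100) * 60) + (time1 % 100);  // 645
    int time2m = ((time2 / 100) * 60) + (time2 % 100);  // 570
    int sumMin = time1m + time2m;                       // 1215

    // convert back to hhmm
    int hhmm = ((sumMin / 60) * 100) + (sumMin % 60);
    printf("%d + %d = %04d\n", time1, time2, hhmm);     // prints 1045 + 930 = 2015
    return 0;
}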
You can also report how many days have passed, since the time is in 24-hour format.
#include <stdio.h>

int main()
{
    int t1 = 2330;
    int t2 = 2340;

    int sum = ((t1/100)*60) + (t1%100) + ((t2/100)*60) + (t2%100);
    int day = sum / (24*60);
    sum = sum % (24*60);
    int hours = sum / 60;
    int mins = sum % 60;

    printf("days = %d \t hours = %d \t mins=%d\n", day, hours, mins);
    return 0;
}
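With the values above (2330 and 2340 read as hhmm durations), the total is 1410 + 1420 = 2830 minutes, so this prints: days = 1    hours = 23    mins=10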
I wrote a program with two threads doing the same thing, but I found the throughput of each thread is lower than when I spawn only one thread. I then wrote this simple test to see whether the problem is mine or the system's.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <time.h>

/*
 * Function: run_add
 * -----------------------
 * Do addition operation for iteration ^ 3 times
 *
 * returns: void
 */
void *run_add(void *ptr) {
    clock_t t1, t2;
    t1 = clock();

    int sum = 0;
    int i = 0, j = 0, k = 0;
    int iteration = 1000;
    long total = iteration * iteration * iteration;

    for (i = 0; i < iteration; i++) {
        for (j = 0; j < iteration; j++) {
            for (k = 0; k < iteration; k++) {
                sum++;
            }
        }
    }

    t2 = clock();
    float diff = ((float)(t2 - t1) / 1000000.0F);
    printf("thread id = %d\n", (int)(pthread_self()));
    printf("Total additions: %ld\n", total);
    printf("Total time: %f second\n", diff);
    printf("Addition per second: %f\n", total / diff);
    printf("\n");
    return NULL;
}

void run_test(int num_thread) {
    pthread_t pth_arr[num_thread];
    int i = 0;

    for (i = 0; i < num_thread; i++) {
        pthread_create(&pth_arr[i], NULL, run_add, NULL);
    }
    for (i = 0; i < num_thread; i++) {
        pthread_join(pth_arr[i], NULL);
    }
}

int main() {
    int num_thread = 5;
    int i = 0;

    for (i = 1; i < num_thread; i++) {
        printf("Running SUM with %d threads. \n\n", i);
        run_test(i);
    }
    return 0;
}
The result still shows the average speed of n threads is slower than one single thread. The more threads I have, the slower each one is.
Here's the result:
Running SUM with 1 threads.

thread id = 528384
Total additions: 1000000000
Total time: 1.441257 second
Addition per second: 693838784.000000

Running SUM with 2 threads.

thread id = 528384
Total additions: 1000000000
Total time: 2.970870 second
Addition per second: 336601728.000000

thread id = 1064960
Total additions: 1000000000
Total time: 2.972992 second
Addition per second: 336361504.000000

Running SUM with 3 threads.

thread id = 1064960
Total additions: 1000000000
Total time: 4.434701 second
Addition per second: 225494352.000000

thread id = 1601536
Total additions: 1000000000
Total time: 4.449250 second
Addition per second: 224756976.000000

thread id = 528384
Total additions: 1000000000
Total time: 4.454826 second
Addition per second: 224475664.000000

Running SUM with 4 threads.

thread id = 528384
Total additions: 1000000000
Total time: 6.261967 second
Addition per second: 159694224.000000

thread id = 1064960
Total additions: 1000000000
Total time: 6.293107 second
Addition per second: 158904016.000000

thread id = 2138112
Total additions: 1000000000
Total time: 6.295047 second
Addition per second: 158855056.000000

thread id = 1601536
Total additions: 1000000000
Total time: 6.306261 second
Addition per second: 158572560.000000
I have a 4-core CPU, and my system monitor shows that each time I run n threads, n CPU cores are 100% utilized. Shouldn't n threads (n <= my CPU cores) run n times as fast as one thread? Why is that not the case here?
clock() measures CPU time, not "wall" time.
It also measures the total time across all threads.
CPU time is time the processor spends executing your code; wall time is real-world elapsed time (what a clock on the wall would show).
Time your program using /usr/bin/time to see what's really happening,
or use a wall-time function like time(), gettimeofday() or clock_gettime().
clock_gettime() can measure CPU time for this thread, for this process, or wall time. It's probably the best way to do this type of experiment.
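To make the difference concrete, here is a minimal sketch (mine, assuming a POSIX system for clock_gettime()) that times the same busy loop both ways; in a single thread the two figures roughly agree, while n busy threads would make the clock() figure roughly n times the wall figure:

#define _POSIX_C_SOURCE 199309L   /* for clock_gettime() */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec w1, w2;
    clock_t c1 = clock();
    clock_gettime(CLOCK_MONOTONIC, &w1);

    volatile unsigned long sum = 0;   /* volatile so the loop isn't optimized away */
    for (unsigned long i = 0; i < 100000000UL; i++)
        sum += i;

    clock_t c2 = clock();
    clock_gettime(CLOCK_MONOTONIC, &w2);

    double cpu  = (double)(c2 - c1) / CLOCKS_PER_SEC;
    double wall = (double)(w2.tv_sec - w1.tv_sec) + (w2.tv_nsec - w1.tv_nsec) / 1e9;
    printf("CPU time: %f s, wall time: %f s\n", cpu, wall);
    return 0;
}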
While you have your answer regarding why the multi-threaded performance seemed worse than single-threaded, there are several things you can do to clean up the logic of your program and make it work the way you appear to have intended.
First, if you had kept track of the relative wall time that passed alongside the diff of the clock() times, you would have noticed that the reported time was approximately an (n-processor-core) multiple of the actual wall time. That is explained in the other answer.
For relative per-core performance timing, the use of clock() is fine. You get only an approximation of wall time, but for looking at relative additions per second it provides a clean per-core view of performance.
While you have correctly used a divisor of 1000000 for diff, time.h provides a convenient define for you: POSIX requires that CLOCKS_PER_SEC equal 1000000, independent of the actual timer resolution. That constant is provided in time.h.
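In other words, the question's hard-coded conversion can be written portably as:

    float diff = (float)(t2 - t1) / CLOCKS_PER_SEC;  /* identical on POSIX, where CLOCKS_PER_SEC == 1000000 */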
Next, notice that the per-core output isn't reported until all threads are joined, which makes reporting totals inside run_add somewhat pointless. You can output the thread id, etc. from the individual threads for convenience, but the timing information should be computed back in the calling function after all threads have been joined. That cleans up the logic of run_add significantly. Further, if you want to be able to vary the number of iterations, you should consider passing that value through ptr, e.g.:
/*
 * Function: run_add
 * -----------------------
 * Do addition operation for iteration ^ 3 times
 *
 * returns: void
 */
void *run_add (void *ptr)
{
    int i = 0, j = 0, k = 0, iteration = *(int *)ptr;
    unsigned long sum = 0;

    for (i = 0; i < iteration; i++)
        for (j = 0; j < iteration; j++)
            for (k = 0; k < iteration; k++)
                sum++;

    printf (" thread id = %lu\n", (long unsigned) (pthread_self ()));
    printf (" iterations = %lu\n\n", sum);

    return NULL;
}
run_test is relatively unchanged; the bulk of the calculation moves from run_add into main, scaled to account for the number of cores utilized. The following rewrite of main lets the user specify the number of cores to use as the first argument (all cores by default) and the base for the cubed number of iterations as the second argument (1000 by default):
int main (int argc, char **argv) {

    int nproc = sysconf (_SC_NPROCESSORS_ONLN),    /* number of cores available */
        num_thread = argc > 1 ? atoi (argv[1]) : nproc,
        iter = argc > 2 ? atoi (argv[2]) : 1000;
    unsigned long subtotal = iter * iter * iter,
        total = subtotal * num_thread;
    double diff = 0.0, t1 = 0.0, t2 = 0.0;

    if (num_thread > nproc) num_thread = nproc;

    printf ("\nrunning sum with %d threads.\n\n", num_thread);

    t1 = clock ();
    run_test (num_thread, &iter);
    t2 = clock ();

    diff = (double)((t2 - t1) / CLOCKS_PER_SEC / num_thread);

    printf ("----------------\nTotal time: %lf second\n", diff);
    printf ("Total additions: %lu\n", total);
    printf ("Additions per-second: %lf\n\n", total / diff);

    return 0;
}
Putting all the pieces together, you could write a working example as follows. Make sure you disable optimizations to prevent your compiler from optimizing out your loops for sum, etc...
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <time.h>
#include <unistd.h>

/*
 * Function: run_add
 * -----------------------
 * Do addition operation for iteration ^ 3 times
 *
 * returns: void
 */
void *run_add (void *ptr)
{
    int i = 0, j = 0, k = 0, iteration = *(int *)ptr;
    unsigned long sum = 0;

    for (i = 0; i < iteration; i++)
        for (j = 0; j < iteration; j++)
            for (k = 0; k < iteration; k++)
                sum++;

    printf (" thread id = %lu\n", (long unsigned) (pthread_self ()));
    printf (" iterations = %lu\n\n", sum);

    return NULL;
}

void run_test (int num_thread, int *it)
{
    pthread_t pth_arr[num_thread];
    int i = 0;

    for (i = 0; i < num_thread; i++)
        pthread_create (&pth_arr[i], NULL, run_add, it);

    for (i = 0; i < num_thread; i++)
        pthread_join (pth_arr[i], NULL);
}

int main (int argc, char **argv) {

    int nproc = sysconf (_SC_NPROCESSORS_ONLN),
        num_thread = argc > 1 ? atoi (argv[1]) : nproc,
        iter = argc > 2 ? atoi (argv[2]) : 1000;
    unsigned long subtotal = iter * iter * iter,
        total = subtotal * num_thread;
    double diff = 0.0, t1 = 0.0, t2 = 0.0;

    if (num_thread > nproc) num_thread = nproc;

    printf ("\nrunning sum with %d threads.\n\n", num_thread);

    t1 = clock ();
    run_test (num_thread, &iter);
    t2 = clock ();

    diff = (double)((t2 - t1) / CLOCKS_PER_SEC / num_thread);

    printf ("----------------\nTotal time: %lf second\n", diff);
    printf ("Total additions: %lu\n", total);
    printf ("Additions per-second: %lf\n\n", total / diff);

    return 0;
}
Example Use/Output
Now you can measure the relative number of additions per-second performed based on the number of cores utilized -- and have it return a Total time that is roughly what wall-time would be. For example, measuring the additions per-second using a single core results in:
$ ./bin/pthread_one_per_core 1
running sum with 1 threads.
thread id = 140380000397056
iterations = 1000000000
----------------
Total time: 2.149662 second
Total additions: 1000000000
Additions per-second: 465189411.172547
Approximately 465M additions per second. Using two cores should double that rate:
$ ./bin/pthread_one_per_core 2
running sum with 2 threads.
thread id = 140437156796160
iterations = 1000000000
thread id = 140437165188864
iterations = 1000000000
----------------
Total time: 2.152436 second
Total additions: 2000000000
Additions per-second: 929179560.000957
Almost exactly twice the additions per second, at 929M/s. Using 4 cores:
$ ./bin/pthread_one_per_core 4
running sum with 4 threads.
thread id = 139867841853184
iterations = 1000000000
thread id = 139867858638592
iterations = 1000000000
thread id = 139867867031296
iterations = 1000000000
thread id = 139867850245888
iterations = 1000000000
----------------
Total time: 2.202021 second
Total additions: 4000000000
Additions per-second: 1816513309.422720
Doubled again to 1.81G/s, and using 8 cores gives the expected results:
$ ./bin/pthread_one_per_core
running sum with 8 threads.
thread id = 140617712838400
iterations = 1000000000
thread id = 140617654089472
iterations = 1000000000
thread id = 140617687660288
iterations = 1000000000
thread id = 140617704445696
iterations = 1000000000
thread id = 140617662482176
iterations = 1000000000
thread id = 140617696052992
iterations = 1000000000
thread id = 140617670874880
iterations = 1000000000
thread id = 140617679267584
iterations = 1000000000
----------------
Total time: 2.250243 second
Total additions: 8000000000
Additions per-second: 3555171004.558562
3.55G/s. Look over both answers (currently) and let us know if you have any questions.
note: there are a number of additional clean-ups and validations that could be applied, but for the purposes of your example, updating the types to appropriate unsigned types prevents strange results with thread_id and the addition counts.
I'm working on a programming assignment and I'm getting strange results.
The idea is to calculate the number of processor ticks and time taken to run the algorithm.
Usually the code runs so quickly that the time taken is 0 sec, but I noticed that the number of processor ticks was 0 at the start and at the finish, resulting in 0 processor ticks taken.
I added a delay using usleep so that the time taken was non-zero, but the processor ticks is still zero and the calculation between the time stamps is still zero.
I've been banging my head on this for several days now and can't get past this problem; any suggestions are extremely welcome.
My code is below:
/* This program takes an input "n". If n is even it divides n by 2.
 * If n is odd, it multiplies n by 3 and adds 1. Each time through the loop
 * it increments a counter.
 * It continues until n is 1.
 *
 * This program will compute the time taken to perform the above algorithm.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>     /* for usleep() */

void delay(int);

int main(void) {
    int n, i = 0;
    time_t start, finish;
    double duration;
    clock_t startTicks, finishTicks, diffTicks;

    printf("Clocks per sec = %ld\n", (long)CLOCKS_PER_SEC);
    printf("Enter an integer: ");
    scanf("%d", &n);                      // read value from keyboard

    time(&start);                         // record start time
    startTicks = clock();
    printf("Start Clock = %s\n", ctime(&start));
    printf("Start Processor Ticks = %ld\n", (long)startTicks);

    while (n != 1) {                      // continues until n=1
        i++;                              // increment counter
        printf("iterations =%d\t", i);    // display counter iterations
        if (n % 2) {                      // if n is odd, n=3n+1
            printf("Input n is odd!\t\t");
            n = (n * 3) + 1;
            printf("Output n = %d\n", n);
            delay(1000000);
        } else {                          // if n is even, n=n/2
            printf("Input n is even!\t");
            n = n / 2;
            printf("Output n = %d\n", n);
            delay(1000000);
        }
    }
    printf("n=%d\n", n);

    time(&finish);                        // record finish time
    finishTicks = clock();
    printf("Stop time = %s\n", ctime(&finish));
    printf("Stop Processor Ticks = %ld\n", (long)finishTicks);

    duration = difftime(finish, start);   // compute difference in time
    diffTicks = finishTicks - startTicks;
    printf("Time elapsed = %2.4f seconds\n", duration);
    printf("Processor ticks elapsed = %ld\n", (long)diffTicks);

    return (n);
}

void delay(int us) {
    usleep(us);
}
EDIT: After researching further, I discovered that usleep() doesn't count toward the program's processor time, so I wrote a delay function in asm instead. Now I am getting a value for processor ticks, but I am still getting zero seconds for running the algorithm.
void delay(int us) {
    for (int i = 0; i < us; i++) {
        __asm__("nop");
    }
}
You can calculate the elapsed time using the formula below:
double timeDiff = (double)(EndTime - StartTime) / CLOCKS_PER_SEC;
Here is the dummy code.
void CalculateTime(clock_t startTime, clock_t endTime)
{
    clock_t diffTime = endTime - startTime;
    printf("Processor time elapsed = %lf\n", (double)diffTime / CLOCKS_PER_SEC);
}
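Hypothetical usage with the variables from the question (the call itself is mine, not part of the original answer):

    CalculateTime(startTicks, finishTicks);   /* prints the CPU seconds between the two clock() readings */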
Hope this helps.
You are trying to time an implementation of the Collatz conjecture. I don't see how you can hope to get a meaningful execution time when it contains delays. Another problem is the granularity of clock() results, as shown by the value of CLOCKS_PER_SEC.
It is even more difficult with time(), which has a resolution of 1 second.
The way to do it is to compute a large number of values. The program below prints only 10 of them, to ensure the calculations are not optimised out, but without distorting the calculation time too much.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SAMPLES 100000

int main(void) {
    int i, j, n;
    double duration;
    clock_t startTicks = clock();

    for (j = 2; j < SAMPLES; j++) {
        n = j;                        // starting number
        i = 0;                        // iterations
        while (n != 1) {
            if (n % 2) {              // if n is odd, n=3n+1
                n = n * 3 + 1;
            }
            else {                    // if n is even, n=n/2
                n = n / 2;
            }
            i++;
        }
        if (j % (SAMPLES/10) == 0)    // print 10 results only
            printf("%d had %d iterations\n", j, i);
    }

    duration = ((double)clock() - startTicks) / CLOCKS_PER_SEC;
    printf("\nDuration: %f seconds\n", duration);
    return 0;
}
Program output:
10000 had 29 iterations
20000 had 30 iterations
30000 had 178 iterations
40000 had 31 iterations
50000 had 127 iterations
60000 had 179 iterations
70000 had 81 iterations
80000 had 32 iterations
90000 had 164 iterations
Duration: 0.090000 seconds