So I'm working on a C assignment that generates an array and uses threads to determine different characteristics of it.
At the end of the program I need to print the wall time, the user time, and the system time. I thought I did this correctly, but my results seem to point otherwise.
After multiple tests, the user time is almost always 0 and the system time is always 0. I know the user time should be greater than the wall time, since this is multithreaded code.
Here's how I'm calculating it; if anyone could point out my mistake or explain why it's getting an incorrect time, that'd be great:
EDIT: the issue was unrelated to this code (something else was wrong in my threads).
Thanks.
System time should almost certainly be zero; I wouldn't expect much time to be spent in kernel mode. As for the user time being zero, it is possible that all operations are taking less than one CPU tick. If you are trying to time a short-lived operation (for example, one call to malloc or something), then I would definitely expect the time to be zero.
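For reference, one common POSIX approach is to take wall-clock time with gettimeofday() (or clock_gettime()) and per-process user/system time with getrusage(RUSAGE_SELF), which on Linux covers all threads of the process. A minimal sketch, not the asker's code, with a dummy loop standing in for the threaded work:

```c
#include <stdio.h>
#include <sys/time.h>       /* gettimeofday */
#include <sys/resource.h>   /* getrusage    */

/* Convert a struct timeval to seconds as a double. */
static double tv_to_sec(struct timeval tv) {
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void) {
    struct timeval wall_start, wall_end;
    struct rusage  ru_start, ru_end;

    gettimeofday(&wall_start, NULL);
    getrusage(RUSAGE_SELF, &ru_start);

    /* Placeholder workload: the real program would create and join its threads here. */
    volatile double x = 0;
    for (long i = 0; i < 100000000L; i++)
        x += i;

    getrusage(RUSAGE_SELF, &ru_end);
    gettimeofday(&wall_end, NULL);

    printf("wall:   %.3f s\n", tv_to_sec(wall_end) - tv_to_sec(wall_start));
    printf("user:   %.3f s\n", tv_to_sec(ru_end.ru_utime) - tv_to_sec(ru_start.ru_utime));
    printf("system: %.3f s\n", tv_to_sec(ru_end.ru_stime) - tv_to_sec(ru_start.ru_stime));
    return 0;
}
```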
I have a method which returns the current time as a string. This method is called millions of times per second, so I have optimized it in several ways (statically allocated buffers for the time string, etc.).
For this application it is perfectly fine to approximate the time. For example I use a resolution of 10 milliseconds. Within this time the same time string is returned.
However, when profiling the code, the clock() call consumes the vast majority of the time.
What other, faster options do I have to approximate the time difference with millisecond resolution?
To answer my own question: The solution was to limit calls to clock(), or any time function for that matter. The overall execution time for the whole test case is now 22x faster.
I think I can give some general advice after profiling this quite extensively: if you can live with a lower time resolution, and you really need to optimize your code for speed, change the problem to use a single global timer and avoid costly time comparisons on each run.
I now have a simple thread that sleeps for the desired resolution time and updates an atomic int ticker variable on each iteration.
In the function I needed to optimize, I then just compare two ints (the last tick and the current tick). If they are not equal, it's time for an update.
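A sketch of that ticker-thread idea, assuming C11 atomics and POSIX threads (compile with -pthread); names such as g_tick and current_time_string are illustrative, not from the original code:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define RESOLUTION_MS 10            /* accepted staleness of the time string */

static atomic_int g_tick;           /* incremented every RESOLUTION_MS       */

static void *ticker_thread(void *arg) {
    (void)arg;
    struct timespec ts = { 0, RESOLUTION_MS * 1000000L };
    for (;;) {
        nanosleep(&ts, NULL);               /* sleep ~10 ms                  */
        atomic_fetch_add(&g_tick, 1);       /* publish a new tick            */
    }
    return NULL;
}

/* Hot path: only rebuild the time string when the tick has changed. */
static const char *current_time_string(void) {
    static int  last_tick = -1;
    static char buf[32];                    /* statically allocated buffer   */

    int now = atomic_load(&g_tick);
    if (now != last_tick) {                 /* cheap int comparison          */
        last_tick = now;
        time_t t = time(NULL);
        strftime(buf, sizeof buf, "%H:%M:%S", localtime(&t));
    }
    return buf;                             /* at most ~RESOLUTION_MS stale  */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, ticker_thread, NULL);
    for (int i = 0; i < 5; i++)
        printf("%s\n", current_time_string());
    return 0;
}
```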
For example, if we program a computer to check and update some variables every 5 minutes, does that mean the computer actually checks on every tick whether the condition matches (whether the 5 minutes are up so it can execute the program)? That would mean (from my point of view) that the greater the number of conditionals or timers, or both, the heavier the load on the processor, even though the processor is only checking whether the time is up or whether the condition is met.
My reasoning here is that the processor can't really put a task away, forget about it for 5 minutes, and then just remember it and execute the program. It has to keep track of time (counting seconds or ticks or whatever), keep track of the timers that are currently active, and check whether the time on every timer is up or not.
That makes every timer a conditional statement. Right?
So the main question is... am I correct in all of those statements, or is reality a bit different, and if so, how?
Thank you.
I am assuming here that you have a basic understanding of how processes are managed by the CPU.
Most programming languages implement some form of wait() function that will cause the CPU to stop executing instructions from that thread until it is interrupted, allowing it to work on other tasks in the meantime. Waiting for an interrupt does not use much of the system's resources and is much more efficient than the polling method you were describing.
This is a pretty basic explanation, but if you want to learn more, look up preemptive multitasking.
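For illustration, here is the contrast in C, assuming POSIX sleep() and the question's 5-minute interval; check_and_update() is just a hypothetical stand-in for the periodic work:

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical periodic work. */
static void check_and_update(void) {
    printf("updating variables\n");
}

/* Polling: the conditional is evaluated constantly, burning CPU cycles. */
static void run_polling(void) {
    time_t next = time(NULL) + 300;
    for (;;) {
        if (time(NULL) >= next) {           /* checked on every loop pass */
            check_and_update();
            next += 300;
        }
    }
}

/* Waiting: the thread is descheduled; a timer interrupt wakes it when due. */
static void run_sleeping(void) {
    for (;;) {
        sleep(300);                         /* essentially no CPU use while waiting */
        check_and_update();
    }
}

int main(void) {
    (void)run_polling;                      /* shown only for contrast */
    run_sleeping();
    return 0;
}
```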
Well, as the title suggests, I am coding a game called Morabaraba. It will be a computer vs. human game. I will use game trees and alpha-beta cut-offs to generate the computer's plays, and every computer play must be made within a specific amount of time (let's say 15 seconds).
What is the best way to count the elapsed seconds since the beginning of its turn and check that it has not yet exceeded the time limit, without overloading the system? As is well known, time is precious when generating game trees. Exceeding the time limit results in a forfeit.
I want to know how this could be done with a lightweight algorithm. How can I check every x seconds that the time limit has not been reached? Is the cost of this check negligible?
Thanks in advance.
Yes, the cost of this check will be mostly negligible, because you poll the amount of time passed at discrete intervals and compare it to the start time to know how much time has elapsed so far.
I see two solutions here:
embed the time check in the same thread that is computing the alpha-beta pruning and stop it accordingly, returning the best solution found so far
place the AI code on a separate thread and interrupt it when the time goes over the threshold, making sure the best solution found so far is already stored somewhere so that you can pick it up
The second approach may sound scary, but in certain situations it's simply impractical to check elapsed time within the algorithm itself, because the code cannot easily be adapted and interrupted from the callee (for example, if you have a non-modular algorithm made of many different steps, which is not your case, though).
Start by just checking the time after every "step" or every N steps, in some way that you believe you are checking at least once a second. If the nature of your computation makes this impractical, you can use something like POSIX timers to signal your process when a given amount of time has passed.
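A sketch of the first approach, checking the clock only every few thousand search nodes, assuming a POSIX monotonic clock; the names and the 15-second limit mirror the question, but none of this is the asker's code:

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define TIME_LIMIT_SEC 15.0
#define CHECK_EVERY    4096          /* nodes searched between clock reads */

static struct timespec g_start;      /* set at the beginning of the turn   */
static long g_nodes;
static bool g_out_of_time;

static double elapsed_sec(void) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - g_start.tv_sec) + (now.tv_nsec - g_start.tv_nsec) / 1e9;
}

/* Called once per node inside alpha-beta; reads the clock only rarely. */
static bool time_is_up(void) {
    if (g_out_of_time)
        return true;
    if (++g_nodes % CHECK_EVERY == 0 && elapsed_sec() > TIME_LIMIT_SEC)
        g_out_of_time = true;        /* latch so deeper calls unwind quickly */
    return g_out_of_time;
}

int main(void) {
    clock_gettime(CLOCK_MONOTONIC, &g_start);   /* start of the turn */
    g_nodes = 0;
    g_out_of_time = false;

    /* In a real search, alphabeta() would call time_is_up() per node and
     * return its best move found so far once this reports true. */
    while (!time_is_up())
        ;                            /* stand-in for expanding nodes */
    printf("stopped after %.2f s and %ld nodes\n", elapsed_sec(), g_nodes);
    return 0;
}
```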
I'm using a built-in benchmarking module for some quick and dirty tests. It gives me:
CPU time
system CPU time (actually I never get any result for this with the code I'm running)
the sum of the user and system CPU times (always the same as the CPU time in my case)
the elapsed real time
I didn't even know I needed all that information.
I just want to compare two pieces of code and see which one takes longer. I know that one piece of code probably does more garbage collection than the other but I'm not sure how much of an impact it's going to have.
Any ideas which metric I should be looking at?
And, most importantly, could someone explain why the "elapsed real time" is always longer than the CPU time - what causes the lag between the two?
There are many things going on in your system other than running your Ruby code. Elapsed time is the total real time taken and should not be used for benchmarking. You want the system and user CPU times since those are the times that your process actually had the CPU.
As an example, if your process:
used the CPU for one second running your code; then
used the CPU for one second running OS kernel code; then
was swapped out for seven seconds while another process ran; then
used the CPU for one more second running your code,
you would have seen:
ten seconds elapsed time,
two seconds user time,
one second system time,
three seconds total CPU time.
The three seconds is what you need to worry about, since the ten depends entirely upon the vagaries of the process scheduling.
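To make the distinction concrete outside of Ruby's Benchmark, here is a small C demonstration (using POSIX times()) of why elapsed time can exceed CPU time: a sleep costs wall-clock time but essentially no user or system CPU time, while the loop costs mostly user time:

```c
#include <stdio.h>
#include <unistd.h>      /* sleep, sysconf */
#include <sys/times.h>   /* times          */

int main(void) {
    struct tms t0, t1;
    clock_t w0 = times(&t0);

    sleep(2);                                /* off the CPU: elapsed time only */
    volatile long x = 0;
    for (long i = 0; i < 200000000L; i++)    /* on the CPU: mostly user time   */
        x += i;

    clock_t w1 = times(&t1);
    long hz = sysconf(_SC_CLK_TCK);          /* clock ticks per second         */

    printf("elapsed: %.2f s\n", (double)(w1 - w0) / hz);
    printf("user:    %.2f s\n", (double)(t1.tms_utime - t0.tms_utime) / hz);
    printf("system:  %.2f s\n", (double)(t1.tms_stime - t0.tms_stime) / hz);
    return 0;
}
```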
A multitasking operating system, stalls while waiting for I/O, and other moments when your code is not actively working.
You don't want to totally discount wall-clock time, though. Time spent waiting without another thread ready to use the CPU cycles may make one piece of code less desirable than another. One set of code may take somewhat more CPU time but use multi-threading to dominate the other code in the real world. It depends on your requirements and specifics. My point is: use all the metrics available to you to make your decision.
Also, as a good practice, if you want to compare two pieces of code you should be running as few extraneous processes as possible.
It may also be the case that the CPU time when your code is executing is not counted.
The extreme example is a real-time system where the timer triggers some activity which is always shorter than a timer tick. Then the CPU time for that activity may never be counted (depending on how the OS does the accounting).
Given a C process that runs at the highest priority and requests the current time, is the time returned adjusted for the amount of time the call takes to return to user process space? Is it already out of date when you get it? Taking the execution time of a known number of assembly instructions in a loop and asking for the time before and after could give you an approximation of the error. I imagine this must be an issue in scientific applications. I don't plan to write software involving any super colliders in the near future. I have read a few articles on the subject, but they do not indicate that any correction is made to make the time given to you slightly ahead of the time the system actually read. Should I lose sleep over other things?
Yes, they are almost definitely "wrong".
For Windows, the timing functions do not take into account the time it takes to transition back to user mode. Even if this were taken into account, it can't correct for your code hitting a page fault, getting swapped out, etc., after the function returns but before you capture the return value.
In general, when timing things you should snap a start and an end time around a large number of iterations to weed out these sort of uncertainties.
No, you should not lose sleep over this. No amount of adjustment or other software trickery will yield perfect results on a system with a pipelined processor with multi-layered memory access running a multi-tasking operating system with memory management, devices, interrupt handlers... Not even if your process has the highest priority.
Plus, taking the difference of two such times will cancel out the constant overhead, anyway.
Edit: I mean yes, you should lose sleep over other things :).
Yes, the answer you get will be off by a certain (smallish) amount; I have never heard of a timer function compensating for the average return time, because such a thing is nearly impossible to predict well. Such things are usually implemented by simply reading a register in the hardware and returning the value, or a version of it scaled to the appropriate timescale.
That said, I wouldn't lose sleep over this. The accepted way of keeping this overhead from affecting your measurements in any significant way is not to use these timers for short events. Usually, you will time several hundred, thousand, or million executions of the same thing, and divide by the number of executions to estimate the average time. Such a thing is usually more useful than timing a single instance, as it takes into account average cache behavior, OS effects, and so forth.
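A sketch of that advice, assuming a POSIX monotonic clock and a trivial, hypothetical work() function standing in for the short operation being measured:

```c
#include <stdio.h>
#include <time.h>

static volatile long sink;

/* Hypothetical short operation under test. */
static void work(void) {
    sink++;
}

int main(void) {
    enum { N = 1000000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        work();                      /* timer overhead amortized over N calls */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double total = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("total:            %.6f s\n", total);
    printf("average per call: %.9f s\n", total / N);
    return 0;
}
```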
Most real-world uses of high-resolution timers are for profiling, in which the time is read once at START and once more at FINISH. So, most of the time, roughly the same amount of delay is involved in both START and FINISH, and hence it works fine.
Now, for nuclear reactors, Windows or many other operating systems with generic timing functions may not be suitable. I guess they use real-time operating systems, which might give more accurate time values than desktop operating systems.