Make a play within a time limit in C

Well, as the title suggests, I am coding a game called Morabaraba. It will be a computer vs. human game. I will use game trees and alpha-beta cut-offs to generate the computer's plays, and every computer play must be made within a specific amount of time (let's say 15 seconds).
What is the best way to count the seconds elapsed since the beginning of the computer's turn and check that it has not yet exceeded the time limit, without overloading the system? As is well known, time is precious when generating game trees, and exceeding the time limit results in a forfeit.
I want to know how this can be done with a lightweight approach. How can I check every x seconds that the time limit has not been reached, and is the cost of that check negligible?
Thanks in advance.

Yes, the cost of this check will be mostly negligible, because you only poll the current time at discrete intervals and compare it to the start time to know how much time has elapsed so far.
I see two solutions here:
embed the time check in the same thread that computes the alpha-beta pruning and stop the search accordingly, returning the best solution found so far
place the AI code on a separate thread and interrupt it when the time goes over a threshold, making sure that the best solution found so far is always stored somewhere so that you can pick it up
The second approach may sound scary, but in certain situations it is simply impractical to check the elapsed time within the algorithm itself, because the code cannot easily be adapted to be interrupted from the caller (for example, if you have a non-modular algorithm made of many different steps, which is not your case though).
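A minimal sketch of the first approach, assuming POSIX clock_gettime() is available (older glibc may need -lrt); the names, the 15-second budget, and the 4096-node polling interval are my own choices, not anything prescribed by the answer:

    #include <stdbool.h>
    #include <time.h>

    #define TIME_LIMIT_SEC 15.0
    #define CHECK_INTERVAL 4096      /* poll the clock once every 4096 nodes */

    static struct timespec turn_start;
    static long nodes_since_check = 0;
    static bool out_of_time = false;

    void start_turn(void)
    {
        clock_gettime(CLOCK_MONOTONIC, &turn_start);
        nodes_since_check = 0;
        out_of_time = false;
    }

    bool time_exceeded(void)
    {
        if (out_of_time)
            return true;
        if (++nodes_since_check < CHECK_INTERVAL)
            return false;            /* skip the clock most of the time */
        nodes_since_check = 0;

        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        double elapsed = (now.tv_sec - turn_start.tv_sec)
                       + (now.tv_nsec - turn_start.tv_nsec) / 1e9;
        out_of_time = (elapsed >= TIME_LIMIT_SEC);
        return out_of_time;
    }

    /* Inside the recursive search:
     *     if (time_exceeded())
     *         return alpha;   // unwind and keep the best move found so far
     */

A common companion to this (not stated in the answer above) is iterative deepening: if the clock runs out mid-search, you still have a complete best move from the last fully finished depth.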

Start by just checking the time after every "step" or every N steps, in some way that you believe you are checking at least once a second. If the nature of your computation makes this impractical, you can use something like POSIX timers to signal your process when a given amount of time has passed.
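If polling is awkward, here is a hedged sketch of the POSIX-timer idea, using setitimer() to deliver SIGALRM once after the budget expires; the signal handler only sets a flag, so the in-loop check becomes a plain read (the names arm_move_timer and search_should_stop are mine; timer_create() would work similarly):

    #include <signal.h>
    #include <string.h>
    #include <sys/time.h>

    static volatile sig_atomic_t search_should_stop = 0;

    static void on_alarm(int sig)
    {
        (void)sig;
        search_should_stop = 1;   /* only touch a sig_atomic_t in a handler */
    }

    void arm_move_timer(long seconds)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_alarm;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval it;
        memset(&it, 0, sizeof it);
        it.it_value.tv_sec = seconds;          /* one-shot: fires once */
        setitimer(ITIMER_REAL, &it, NULL);

        search_should_stop = 0;
    }

    /* In the search loop:
     *     if (search_should_stop)
     *         return best_move_so_far;
     */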

Related

Fastest way to get approximate time difference

I have a method that returns the current time as a string. This method is called millions of times per second, so I have optimized it in several ways (statically allocated buffers for the time string, etc.).
For this application it is perfectly fine to approximate the time. For example, I use a resolution of 10 milliseconds; within that window the same time string is returned.
However, when profiling the code, the clock() call consumes the vast majority of the time.
What other, faster options do I have for approximating the time difference with millisecond resolution?
To answer my own question: the solution was to limit calls to clock(), or any time function for that matter. The whole test case now runs 22x faster.
I think I can give some general advice after profiling this quite extensively: if you can live with a lower time resolution and you really need to optimize your code for speed, change the problem to use a single global timer and avoid a costly time lookup on each run.
I now have a simple thread that sleeps for the desired resolution time and updates an atomic int ticker variable on each loop.
In the function I needed to optimize I then just compare two ints (the last tick and the current tick). If they are not equal, it's time for an update.
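A rough sketch of that ticker-thread idea in C with C11 atomics (the answer itself is language-agnostic; the names, the 10 ms resolution, and the pthread usage are my assumptions):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <time.h>

    static atomic_int g_tick;        /* bumped every ~10 ms by the ticker thread */

    static void *ticker(void *arg)
    {
        (void)arg;
        struct timespec res = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };
        for (;;) {
            nanosleep(&res, NULL);
            atomic_fetch_add(&g_tick, 1);
        }
        return NULL;
    }

    /* Hot path: rebuild the cached time string only when the tick changed. */
    void maybe_refresh_time_string(void)
    {
        static int last_tick = -1;
        int now = atomic_load(&g_tick);
        if (now != last_tick) {
            last_tick = now;
            /* ...call clock()/strftime() once here and cache the string... */
        }
    }

The thread is started once, e.g. with pthread_create(&tid, NULL, ticker, NULL), and the hot function never touches clock() unless the tick has actually advanced.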

Recording the time of a program

So I'm working on a C assignment that generates an array and uses threads to determine different characteristics of it.
At the end of the program I need to print the wall time, the user time, and the system time. I thought I did this correctly, but my results seem to suggest otherwise.
After multiple tests, the user time is almost always 0 and the system time is always 0. I know the user time should be greater than the wall time, since it's multithreaded code.
Here's how I'm calculating it; if anyone could point out my mistake or explain why it's getting an incorrect time, that'd be great:
EDIT: the issue was unrelated to this code (something else was wrong in my threads).
thanks
System time should almost certainly be zero; I wouldn't expect much time to be spent in kernel mode. As for user time being zero, it is possible that all the operations take less than one CPU tick. If you are trying to time a short-lived operation (for example, one call to malloc or something), then I would definitely expect the time to be zero.
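For reference, a hedged sketch of getting all three numbers on a POSIX system: clock_gettime() for the wall clock and getrusage() for user/system CPU time (the do_work() placeholder stands in for the threaded array processing):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/time.h>
    #include <time.h>

    static double tv_to_sec(struct timeval tv)
    {
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        struct timespec w0, w1;
        struct rusage ru;

        clock_gettime(CLOCK_MONOTONIC, &w0);
        /* do_work();  <- the threaded array processing would go here */
        clock_gettime(CLOCK_MONOTONIC, &w1);

        getrusage(RUSAGE_SELF, &ru);   /* CPU time for all threads of this process */

        double wall = (w1.tv_sec - w0.tv_sec) + (w1.tv_nsec - w0.tv_nsec) / 1e9;
        printf("wall:   %.6f s\n", wall);
        printf("user:   %.6f s\n", tv_to_sec(ru.ru_utime));
        printf("system: %.6f s\n", tv_to_sec(ru.ru_stime));
        return 0;
    }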

Scheduling algorithm CPU time

I am trying to write a simulator for a scheduling algorithm in C using round robin and FCFS.
I just have a few questions. I have tried to look this up and read about the relevant kernel calls, but I'm still confused :( The program is being written on Linux (over PuTTY), where you have a list of processes with a time clock that execute or take up CPU time.
How do we make a process take up CPU time? Do we call some sys() function (I don't know which one), or are we meant to malloc a structure for each process as it is read from a text file in my program? I know I may sound stupid, but please explain.
What do you suggest as the best data structure for storing a process (time created, process ID, memory size, job time), for example (0, 2, 70, 8)?
When a process has used up its job time, how do we terminate it so that it frees the CPU and the next process in the clock order can use it?
How do you implement the clock time? Is there a built-in function, or should I just use a for loop?
I hope I'm not asking too many questions; whoever can get back to me, I would really appreciate it.
Regards
If you're building a simulator, you should NOT actually wait that amount of time. You should "schedule" by updating counters, e.g. noting that process P1 has run for 750 ms total so far, scheduled 3 times for 250 ms, 250 ms, 250 ms, etc. Trying to run a scheduling simulation in real time in user space is bound to give you odd results, because your process itself needs to be scheduled as well.
For instance, if you want to simulate FCFS, you implement a simple "process" queue and give each process a time slice (you can use the default kernel timeslice or your own; it doesn't really matter). Each process requires some total execution time to finish, and you do your calculations based on that. For example, P1 is a process that requires 3.12 seconds of CPU time to finish (I don't think simulating memory is needed, since we're doing scheduling and not caching or anything else). You just run the algorithm as you normally would, but only adding numbers: you "run" P1, add time to its counter, and check whether it's done. If it is, check the difference, etc., and you can keep a global time to track how long it has run in wall-clock time. Then simply put P1 at the end of the queue and "schedule" the next process.
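A hedged sketch of that counter-based idea, using a simple round-robin rotation over an array (the struct fields mirror the (time created, process ID, memory size, job time) tuple from the question; the 2-tick slice and the extra sample processes are my own, and arrival times are ignored for brevity):

    #include <stdio.h>

    #define NPROC 3
    #define SLICE 2                 /* assumed time slice, in simulated ticks */

    struct proc {
        int arrival;                /* "time created" (ignored here for brevity) */
        int pid;
        int mem;                    /* carried along; the scheduler ignores it */
        int job_time;               /* total CPU time required, in ticks */
        int ran;                    /* CPU time accumulated so far */
    };

    int main(void)
    {
        struct proc p[NPROC] = {
            { 0, 2, 70, 8, 0 },     /* the (0,2,70,8) example from the question */
            { 1, 3, 40, 3, 0 },
            { 2, 4, 90, 5, 0 },
        };
        int sim_clock = 0, done = 0;

        /* Round-robin over the array: no real waiting, just counter updates. */
        while (done < NPROC) {
            for (int i = 0; i < NPROC; i++) {
                if (p[i].ran >= p[i].job_time)
                    continue;                  /* already finished */
                int run = p[i].job_time - p[i].ran;
                if (run > SLICE)
                    run = SLICE;
                p[i].ran += run;
                sim_clock += run;              /* advance the simulated clock */
                if (p[i].ran >= p[i].job_time) {
                    done++;
                    printf("pid %d finished at t=%d\n", p[i].pid, sim_clock);
                }
            }
        }
        return 0;
    }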
Now, if you want to measure scheduling performance, that's completely different; it usually involves running workload benchmarks that launch many processes on the system and checking overall performance metrics for each.

When benchmarking, what causes a lag between CPU time and "elapsed real time"?

I'm using a built-in benchmarking module for some quick and dirty tests. It gives me:
CPU time
system CPU time (actually I never get any result for this with the code I'm running)
the sum of the user and system CPU times (always the same as the CPU time in my case)
the elapsed real time
I didn't even know I needed all that information.
I just want to compare two pieces of code and see which one takes longer. I know that one piece of code probably does more garbage collection than the other but I'm not sure how much of an impact it's going to have.
Any ideas which metric I should be looking at?
And, most importantly, could someone explain why the "elapsed real time" is always longer than the CPU time - what causes the lag between the two?
There are many things going on in your system other than running your Ruby code. Elapsed time is the total real time taken and should not be used for benchmarking. You want the system and user CPU times since those are the times that your process actually had the CPU.
As an example, if your process:
used the CPU for one second running your code; then
used the CPU for one second running OS kernel code; then
was swapped out for seven seconds while another process ran; then
used the CPU for one more second running your code,
you would have seen:
ten seconds elapsed time,
two seconds user time,
one second system time,
three seconds total CPU time.
The three seconds is what you need to worry about, since the ten depends entirely upon the vagaries of the process scheduling.
A multitasking operating system, stalls while waiting for I/O, and other moments when your code is not actively working.
You don't want to totally discount wall-clock time. Time spent waiting, without another thread ready to use those CPU cycles, may make one piece of code less desirable than another. One piece of code may take somewhat more CPU time but use multi-threading to beat the other in the real world. It depends on requirements and specifics. My point is: use all the metrics available to you to make your decision.
Also, as a good practice, if you want to compare two pieces of code you should be running as few extraneous processes as possible.
It may also be the case that CPU time used while your code is executing is not counted.
The extreme example is a real-time system where the timer triggers some activity which is always shorter than a timer tick. Then the CPU time for that activity may never be counted (depending on how the OS does the accounting).

Are high resolution calls to get the system time wrong by the time the function returns?

Given a C process that runs at the highest priority and requests the current time, is the time returned adjusted for the amount of time the code takes to return to user process space? Is it already out of date when you get it? As a measurement, taking the execution time of a known number of assembly instructions in a loop and asking for the time before and after it could give you an approximation of the error. I know this must be an issue in scientific applications, but I don't plan to write software involving any super colliders any time in the near future. I have read a few articles on the subject, but they do not indicate that any correction is made to make the time given to you slightly ahead of the time the system actually read. Should I lose sleep over other things?
Yes, they are almost definitely "wrong".
For Windows, the timing functions do not take into account the time it takes to transition back to user mode. Even if this were taken into account, it cannot correct for the case where the function returns and your code hits a page fault, gets swapped out, etc., before capturing the return value.
In general, when timing things you should snap a start and an end time around a large number of iterations to weed out these sort of uncertainties.
No, you should not lose sleep over this. No amount of adjustment or other software trickery will yield perfect results on a system with a pipelined processor with multi-layered memory access running a multi-tasking operating system with memory management, devices, interrupt handlers... Not even if your process has the highest priority.
Plus, taking the difference of two such times will cancel out the constant overhead, anyway.
Edit: I mean yes, you should lose sleep over other things :).
Yes, the answer you get will be off by a certain (smallish) amount; I have never heard of a timer function compensating for the average return time, because such a thing is nearly impossible to predict well. Such things are usually implemented by simply reading a register in the hardware and returning the value, or a version of it scaled to the appropriate timescale.
That said, I wouldn't lose sleep over this. The accepted way of keeping this overhead from affecting your measurements in any significant way is not to use these timers for short events. Usually, you will time several hundred, thousand, or million executions of the same thing, and divide by the number of executions to estimate the average time. Such a thing is usually more useful than timing a single instance, as it takes into account average cache behavior, OS effects, and so forth.
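A small sketch of that pattern, assuming clock_gettime() and a hypothetical operation_under_test():

    #include <stdio.h>
    #include <time.h>

    #define ITERATIONS 1000000L

    /* hypothetical operation being measured */
    static void operation_under_test(void)
    {
        /* ... */
    }

    int main(void)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < ITERATIONS; i++)
            operation_under_test();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        /* The timer overhead is paid twice in total, not twice per iteration. */
        double total_ns = (t1.tv_sec - t0.tv_sec) * 1e9
                        + (t1.tv_nsec - t0.tv_nsec);
        printf("average: %.1f ns per call\n", total_ns / ITERATIONS);
        return 0;
    }

(With a trivial body the compiler may optimize the loop away, so the operation being measured should produce a result the program actually uses.)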
Most real-world uses of high-resolution timers are for profiling, where the time is read once at START and once more at FINISH. So in most cases ~almost~ the same amount of delay is involved at both START and FINISH, and hence it works out fine.
Now, for nuclear reactors, Windows or, for that matter, many other operating systems with generic timing functions may not be suitable. I guess they use real-time operating systems, which might give more accurate time values than desktop operating systems.
