Programmatically Getting Total Time and Busy Time of Function on Linux - c

Is there any way to programmatically get the total time that a C program has been running, as well as the amount of that time spent inside a particular function? I need to do this in the code since I want to use these two values as arguments for another function. Since I am on Linux, would I be able to use gprof or perf to do this?

Grab the system time when the program starts. Then, whenever you want, you can get the current time and subtract the start time. That tells how long you've been running, in wall-clock time.
Have a global boolean Q that is set True when your function is entered, and False when it exits, so it is only True while the program is "in" the function (inclusively).
Set up a timer interrupt to go off every N ms, and keep two global counters, A and B. (N does not have to be small.) When the timer interrupts, have it increment B regardless, but only increment A if Q is true.
This way, you know how much time has elapsed, and A/B is the fraction of that time your function was on the stack.
BTW: if the function is recursive, make Q an integer "depth counter", incremented on entry and decremented on exit, and treated as true whenever it is nonzero. Otherwise, no change is needed.

Yes, you can use gprof, but that requires re-compiling the binary you want to measure to insert the required monitoring code. By default, programs don't spend time recording this data, so you must add it. With gcc, this is done using the -pg option.

You can use gprof for this as well.

Related

Set a time for the for loop

I want to define timing in a for loop:
int count;
int forfunctime;
for (count = 0; count < 5; count++)
{
    output_b(sutun[count]);
    output_c(UC[count]);
    delay_ms(1);
}
int count;
int forfunctime;
for (forfunctime = 0; forfunctime < 100; ++forfunctime)
{
    for (count = 0; count < 5; count++)
    {
        output_b(sutun[count]);
        output_c(UC[count]);
        delay_ms(1);
    }
}
In the second code I can get a delay from the MIPS instruction processing time by enclosing the for loop in an outer loop, but is there a more precise way?
My goal is to set a time for the for loop.
Edit, for those who will use this information later: while programming a PIC, we use a for loop to scan the rows and columns of matrix displays, but if we want to keep this loop active for a certain period of time, we need to use timers for this.
Since I don't know which PIC you're using, I can't answer this question precisely, but I can draw you the general map.
What you need is a timer. There are timers on almost every microcontroller, but it is very likely that you will have to program them manually, generally by setting some bits in memory. Here is an example of how to program a timer.
The first and most naive solution is to set one of your microcontroller's timers and read its value each time you're out of your loop, like below:
setTimer(); // your work, not a predefined function
while (readTimer() < yourTimeConstant){ // again, your work, not a predefined function
    for (count = 0; count < 5; count++){
        output_b(sutun[count]);
        output_c(UC[count]);
    }
}
The second and cleaner solution would be to use events if your microcontroller supports them.
You can save the current time just before entering the loop and check at every loop iteration whether 10 seconds have already passed, but actually that isn't the best way to do it. If you're working on a Windows system, check waitable timer objects; for Linux, check alarm.
I want to run the for loop written in the first code for 10 seconds.
Microcontrollers generally provide timers that you can use instead of delay loops. Timers have at least two advantages: 1) they make it easy to wait a specific amount of time, and 2) unlike delay functions, they don't prevent the controller from doing other things. With a timer, you can pretty much say "let me know when 10 seconds have passed."
Check the docs for your particular chip to see what timers are available and how to use them, but here's a good overview of how to use timers on PIC controllers.

programming EXACT timer which calles a function in c language every x seconds

I need some help with a part of my Programm. I want to call a function EXACTLY every x seconds.
The problem with most solutions is that the time to call sleep() or a similar function accumulates over time.
An example: if I want to call a function every second with sleep(1), it takes 1 second + a very small x (the time to call the sleep() function).
So it takes 1.x seconds, not 1.0 seconds.
Over a long period of time the x accumulates into a "huge amount" of time.
I want a code snippet which executes something EXACTLY every second so that I get exact timestamps without any additional delay.
I think it's part of the real-time-programming problem.
Is there some working code out there for that?
Please help me out with that. :)
FloKL
int X = 10;
int t1 = (int)time(NULL);
while (1) {
    // execute your process
    ...
    // calculate next tick
    t1 = t1 + X;
    sleep(t1 - (int)time(NULL));
}
You can recode your call to sleep to pass effectively 1 - a rather than 1, where a is an adjustment calculated to eliminate the accumulated x. (Clearly you'll need to adjust the sleep unit, as a will generally be less than 1.)
Otherwise C provides no facility directly for an "exact" timing. You'll need to use external hardware for that. And expect to pay a lot of money for atomic-clock level accuracy.
Suggest using the function setitimer(), which is exposed via the header file <sys/time.h>.
Suggest reading the man page for setitimer for all the details.

How to measure the time it took for a thread to get past a condition variable?

I want to simulate a theater reservation system in C where customers communicate with operators to reserve seats. I am using the pthread library. When a thread is created, it tries to acquire the mutex of the variable that holds the number of available operators and checks whether any operator is available (with the use of a condition variable); if not, the thread goes to sleep. I want to know the time it took for the thread to connect with an operator, i.e. the time it took for the thread to get past the condition variable. Can I just use a timer at the start of the thread and right after the condition variable, or would this not work because when the thread is blocked the timer is blocked too? If this is the case, what is the correct way to do this? Thanks.
void* thread_transaction(void* arg) // the function passed to pthread_create
{
    // instantiating variables and such...
    // reserving an operator
    pthread_mutex_lock(&tel_mut);
    while (threadData->available_tel <= 0)
        pthread_cond_wait(&tel_cond, &tel_mut);
    // can I put a timer at the start and right here and accurately
    // measure the time?
Assuming you are writing for a Posix-compatible system (like Linux), you can use clock_gettime() before and after the wait loop, and subtract the start time from the end time to get the elapsed time. You'll want to use a wall-time clock, not a CPU-time clock. Since this function places its result in a struct timespec, which is a two-part value, the subtraction is not trivial (but neither is it particularly complicated): you need to consider the case where subtracting the nanosecond fields results in a negative number.
The difference between CLOCK_REALTIME and CLOCK_MONOTONIC is that the monotonic clock is not affected by the adjustments to the host's clock required to synchronise with a time service. (Another difference is that there is no information about what the value returned by CLOCK_MONOTONIC represents in terms of the calendar, since it usually starts counting at system boot. However, the difference between two such times is completely meaningful.)

Measuring Elapsed Time In Linux (CLOCK_MONOTONIC vs. CLOCK_MONOTONIC_RAW)

I am currently trying to talk to a piece of hardware in userspace (underneath the hood, everything is using the spidev kernel driver, but that's a different story).
The hardware will tell me that a command has been completed by indicating so with a special value in a register, that I am reading from. The hardware also has a requirement to get back to me in a certain time, otherwise the command has failed. Different commands take different times.
As a result, I am implementing a way to set a timeout and then check for that timeout using clock_gettime(). In my "set" function, I take the current time and add the time interval I should wait for (usually this anywhere from a few ms to a couple of seconds). I then store this value for safe keeping later.
In my "check" function, I once again, get the current time and then compare it against the time I have saved. This seems to work as I had hoped.
Given my use case, should I be using CLOCK_MONOTONIC or CLOCK_MONOTONIC_RAW? I'm assuming CLOCK_MONOTONIC_RAW is better suited, since I have short intervals that I am checking. I am worried that such a short interval might land on a system-wide outlier in which NTP was doing a lot of adjusting. Note that my target system is only Linux kernels 4.4 and newer.
Thanks in advance for the help.
Edited to add: given my use case, I need "wall clock" time, not CPU time. That is, I am checking to see if the hardware has responded in some wall clock time interval.
References:
Rutgers Course Notes
What is the difference between CLOCK_MONOTONIC & CLOCK_MONOTONIC_RAW?
Elapsed Time in C Tutorial

Generic Microcontroller Delay Function

Can someone please tell me how this function works? I'm using it in code and have an idea how it works, but I'm not 100% sure exactly. I understand the concept of an input variable N counting down, but how the heck does it work? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), do I have to "zero" the function if I used it somewhere else?
Reference: MILLISEC is a constant defined by Fcy/10000, or system clock/10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 seconds delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;
    while (N--)
        for (j = 0; j < MILLISEC; j++);
}
This is referred to as busy waiting, a concept that just burns some CPU cycles thus "waiting" by keeping the CPU "busy" doing empty loops. You don't need to reset the function, it will do the same if called repeatedly.
If you call it with N=3, it will repeat the while loop 3 times, every time counting with j from 0 to MILLISEC, which is supposedly a constant that depends on the CPU clock.
The original author of the code has timed it and examined the generated assembler to determine the exact number of instructions executed per millisecond, and has set the constant MILLISEC so that the for loop busy-waits for one millisecond.
The input parameter N is then simply the number of milliseconds the caller want to wait and the number of times the for-loop is executed.
The code will break if
used on a different or faster microcontroller (depending on how Fcy is maintained), or
the optimization level of the C compiler is changed, or
the C compiler version is changed (as it may generate different code);
so, if the person who wrote it was clever, there may be a calibration program which defines and configures the MILLISEC constant.
This is what is known as a busy wait in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems in that on different processors with different speeds, the computation needs to be adjusted. Old games used this approach and I remember a simulation using this busy wait approach that targeted an old 8086 type of processor to cause an animation to move smoothly. When the game was used on a Pentium processor PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what the animation was.
This sort of busy wait means that in the thread running, the thread is sitting in a computation loop counting down for the number of milliseconds. The result is that the thread does not do anything else other than counting down.
If the operating system is not a preemptive multi-tasking OS, then nothing else will run until the count down completes which may cause problems in other threads and tasks.
If the operating system is preemptive multi-tasking the resulting delays will have a variability as control is switched to some other thread for some period of time before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.
