Internal working of timer - c

I know QueryPerformanceCounter() can be used for timing functions. I want to know:
1 - Can I increase the resolution of the timer by over-clocking the CPU (so it ticks faster)?
2 - Basically, what makes some timers more precise than others? For example, why is QueryPerformanceCounter() more precise than GetTickCount()? If there is a single crystal oscillator on the motherboard, why are some timers slower than others?

QueryPerformanceCounter has very high resolution - normally less than one nanosecond. I don't see why you'd want to increase it. Overclocking will increase it, but that seems like a very weak reason to overclock.
QueryPerformanceCounter is very accurate, but somewhat expensive and not very convenient.
a. It's expensive because it relies on the rdtsc instruction. Faster timers can just read an integer from memory. That integer needs to be updated, and we don't want to do that too often (1000 times a second is reasonable), so we get a very cheap, low-precision timer. That's basically GetTickCount.
b. It's inconvenient because its units change between computers. Sometimes a tick is a nanosecond, sometimes half a nanosecond, sometimes some other value, which makes calculations harder.
c. Another source of inconvenience is that it returns very large numbers, which may overflow when you do math with them, so you need to be careful.
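In code, the usual pattern that sidesteps both inconveniences is to subtract the raw 64-bit counts first and only then convert to seconds using QueryPerformanceFrequency. A minimal sketch (Windows, C; Sleep(50) just stands in for the code being timed):

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    /* Ticks per second; machine dependent, but constant after boot. */
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    Sleep(50);                      /* stand-in for the work being timed */
    QueryPerformanceCounter(&end);

    /* Convert the tick delta to seconds to avoid the unit problem. */
    double seconds = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("elapsed: %.6f s (counter frequency: %lld Hz)\n",
           seconds, (long long)freq.QuadPart);
    return 0;
}
```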

The timing source for QPC is machine dependent. It is typically picked up from a frequency available somewhere in the chipset. Whether overclocking the CPU is going to affect it depends heavily on your motherboard design. The simplest way to find out is to just try it; use QueryPerformanceFrequency to see the effect.
GetTickCount is driven from an entirely different timer source: the signal that also generates the clock interrupt. It is not very precise, normally 1/64 of a second, but it is highly accurate. Your machine contacts a time server from time to time to recalibrate the clock and adjust the clock correction factor, which makes it accurate to about a second over an entire year. QPC is very precise, but not nearly as accurate. Use it only to time short intervals.

1 - Yes. Internally, one of the better timers is rdtsc, which gives you the CPU's clock counter. Combining this with information from the cpuid instruction gives you time.
2 - The other timers rely on various other timing sources, such as the 8253 timer.
QPF is a wrapper added by Microsoft on top of what rdtsc provides. Read this article for more info:
http://www.strchr.com/performance_measurements_with_rdtsc
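For illustration, reading the TSC directly looks roughly like this (x86/x86-64 with GCC or Clang; a sketch only, since the raw count still has to be divided by the TSC frequency to become time, and the work() callback is just a placeholder):

```c
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

/* Read the time stamp counter; returns raw ticks, not seconds. */
static inline uint64_t read_tsc(void)
{
    return __rdtsc();
}

/* Count TSC ticks around a piece of work. Note rdtsc is not a
 * serializing instruction, so for very short sections you may want
 * a fence or cpuid around it, as the linked article discusses. */
uint64_t time_work(void (*work)(void))
{
    uint64_t start = read_tsc();
    work();
    return read_tsc() - start;
}
```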


How to use perf or other utilities to measure the elapsed time of a loop/function

I am working on a project requiring profiling the target applications at first.
What I want to know is the exact time consumed by a loop body/function. The platform is a BeagleBone Black board running Debian, with perf_4.9 installed.
gettimeofday() can provide microsecond resolution, but I still want more accurate results. It looks like perf can give cycle statistics and thus be a good fit for my purposes. However, perf can only analyze the whole application rather than individual loops/functions.
After trying the instructions posted in Using perf probe to monitor performance stats during a particular function, it does not work well.
I am just wondering if there is any example application in C I can test and use on this board for my purpose. Thank you!
Caveat: This is more of a comment than an answer, but it's a bit too long for just a comment.
Thanks a lot for advising a new function. I tried it but am a little unsure about its accuracy. Yes, it can offer nanosecond resolution, but there is some inconsistency.
There will be inconsistency if you use two different clock sources.
What I do is first use clock_gettime() to measure a loop body; the approximate elapsed time measured this way is around 1.4 us. Then I put GPIO instructions, pull high and pull down, at the beginning and end of the loop body respectively, and measure the signal frequency on this GPIO with an oscilloscope.
A scope is useful if you're trying to debug the hardware. It can also show what's on the pins. But, in 40+ years of doing performance measurement/improvement/tuning, I've never used it to tune software.
In fact, I would trust the CPU clock more than I would trust the scope for software performance numbers.
For a production product, you may have to measure performance on a system deployed at a customer site [because the issue only shows up on that one customer's machine]. You may have to debug this remotely and can't hook up a scope there. So, you need something that can work without external probe/test rigs.
To my surprise, the frequency is around 1.8MHz, i.e., ~500ns. This inconsistency makes me a little confused... – GeekTao
The difference could be just round off error based on different time bases and latency in getting in/out of the device (GPIO pins). I presume you're just using GPIO in this way to facilitate benchmark/timing. So, in a way, you're not measuring the "real" system, but the system with the GPIO overhead.
In tuning, one is less concerned with absolute values than with relative ones. That is, clock_gettime is ultimately based on the number of high-res clock ticks (at 1 ns/tick or better, from the system's free-running TSC (time stamp counter)). What the clock frequency actually is doesn't matter as much. If you measure a loop/function and get duration X, then change some code and get X+n, that tells you whether the code got faster or slower.
500ns isn't that large an amount. Almost any system wide action (e.g. timeslicing, syscall, task switch, etc.) could account for that. Unless you've mapped the GPIO registers into app memory, the syscall overhead could dwarf that.
In fact, just the overhead of calling clock_gettime could account for that.
Although the clock_gettime is technically a syscall, linux will map the code directly into the app's code via the VDSO mechanism so there is no syscall overhead. But, even the userspace code has some calculations to do.
For example, I have two x86 PCs. On one system the overhead of the call is 26 ns. On the other, the overhead is 1800 ns. Both systems run at 2 GHz or better.
For your beaglebone/arm system, the base clock rate may be less, so overhead of 500 ns may be ballpark.
I usually benchmark the overhead and subtract it out from the calculations.
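A simple way to estimate that overhead is to call clock_gettime back-to-back many times and average. A sketch (Linux, CLOCK_MONOTONIC):

```c
#include <stdio.h>
#include <time.h>

/* Estimate the per-call overhead of clock_gettime by calling it
 * in a tight loop and averaging the total elapsed time. */
int main(void)
{
    enum { N = 1000000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        struct timespec tmp;
        clock_gettime(CLOCK_MONOTONIC, &tmp);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double total_ns = (t1.tv_sec - t0.tv_sec) * 1e9
                    + (t1.tv_nsec - t0.tv_nsec);
    printf("clock_gettime overhead: ~%.0f ns per call\n", total_ns / N);
    return 0;
}
```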
And, on the x86, the actual code just gets the CPU's TSC value (via the rdtsc instruction) and does some adjustment. For arm, it has a similar H/W register but requires special care to map userspace access to it (a coprocessor instruction, IIRC).
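On ARMv7 (e.g. the BeagleBone Black's Cortex-A8), the register in question is the PMU cycle counter (PMCCNTR). A sketch of reading it, assuming the kernel has already enabled user-mode PMU access and started the counter (typically via a small kernel module that sets PMUSERENR and PMCNTENSET; without that, the instruction faults):

```c
#include <stdint.h>

/* ARMv7: read the PMU cycle counter (PMCCNTR) from user space.
 * Assumes user-mode access has been enabled from kernel mode first. */
static inline uint32_t read_ccnt(void)
{
    uint32_t ccnt;
    __asm__ volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(ccnt));
    return ccnt;
}
```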
Speaking of arm, I was doing a commercial arm product (an nVidia Jetson to be exact). We were very concerned about latency of incoming video frames.
The H/W engineer didn't trust TSC [or software in general ;-)] and was trying to use a scope and an LED [controlled by a GPIO pin], noting when the LED flash/pulse showed up inside the video frame (i.e. the coordinates of the white dot in the video frame were [effectively] a time measurement).
It took a while to convince the engineer, but, eventually I was able to prove that the clock_gettime/TSC approach was more accurate/reliable.
And, certainly, easier to use. We had multiple test/development/SDK boards but could only hook up the scope/LED rig on one at a time.

Finding a source of entropy in an embedded system?

For a small embedded device (TI MSP430F2274), I am trying to create a Pseudo Random Number Generator (PRNG), but I am having difficulty identifying a potential source of entropy to use as a seed. Unfortunately, the device does not have enough available memory space to include time.h and incorporate the function srand(time(0)). Has anyone had experience with this family of devices, or has incorporated a PRNG in an embedded device that is constrained by its memory space? Thank you.
Your part (MSP430F2274) does not offer any generic solution or support, but your application may do so. Any unpredictable and asynchronous external event or value that is guaranteed to occur, or to be available before or exactly when you need it, can be used.
For example, the part has a pair of 16-bit timers. With one of these running, on detection of some asynchronous trigger event such as a user button press, the value of the counter at that moment may be used as a seed.
Alternatively, if you have an analogue input with a continuously varying and asynchronous signal, simply read that value at any time, perhaps taking multiple samples spaced over a suitable time interval to generate a larger seed if necessary.
Even without a specific signal, an otherwise unused ADC input channel is likely to have sufficient noise to make its least significant bit unpredictable; you might concatenate the LSBs from a number of independent samples to generate a seed of the required length.
Essentially any unpredictable external event that your application already supports may suffice. Without details of your application it is not possible to advise specifically, but given that this is specifically a mixed-signal microcontroller, there will presumably be some suitable external unpredictability?
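A rough illustration of the ADC-noise idea; adc_read_noise_channel() is a hypothetical helper standing in for whatever single-sample ADC read routine your project already has (e.g. via the MSP430's ADC10), and is not shown here:

```c
/* Hypothetical helper: returns one raw conversion from a floating/unused
 * ADC input. Replace with your project's own ADC read routine. */
extern unsigned int adc_read_noise_channel(void);

/* Build a 16-bit seed from the least significant bit of 16 samples. */
static unsigned int make_seed(void)
{
    unsigned int seed = 0;
    for (int i = 0; i < 16; i++) {
        /* Keep only the noisy LSB of each sample; ideally space the
         * samples out a little so they are independent. */
        seed = (seed << 1) | (adc_read_noise_channel() & 1u);
    }
    return seed;
}

/* Usage: srand(make_seed()); */
```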
If you have multiple clock sources (and the MSP430F2274 seems to have that at a glance), you can use the unpredictable drift between these sources for entropy if you absolutely have nothing better.
The way to do it is to use two sources: one as a time base, while counting ticks of the other during that period. The tick count will vary slightly, since the two clock sources are independent. Depending on what options are available, this may be done with timers; otherwise even the watchdog could be an option, configured as an interval timer (if nothing else, it is usually capable of running on a different clock source than the main clock).
This method may require some time to set up: the clocks don't deviate much from their specified frequency, so you need to wait relatively long to gather a meaningful amount of random deviation between them (a second or so may be sufficient).
Otherwise, as Clifford mentioned, you could gather entropy from your environment, which is definitely superior if you have such an environment available. The one good thing about this approach (drift between clock sources) is that it is very likely available in just about any setup.
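A rough sketch of the drift idea, with the hardware specifics left out; timer_a_count() and wait_one_period_of_clock_b() are hypothetical stand-ins for however your setup exposes two independent clock sources:

```c
/* Hypothetical accessors - wire these to your actual timers. */
extern unsigned int timer_a_count(void);         /* free-running counter on clock A */
extern void wait_one_period_of_clock_b(void);    /* e.g. one watchdog interval on clock B */

/* Accumulate entropy from the jitter between two independent clocks:
 * the low bits of "ticks of A per period of B" wander unpredictably. */
static unsigned int drift_seed(void)
{
    unsigned int seed = 0;
    for (int i = 0; i < 16; i++) {
        unsigned int start = timer_a_count();
        wait_one_period_of_clock_b();
        unsigned int ticks = timer_a_count() - start;
        seed = (seed << 1) | (ticks & 1u);       /* keep only the noisy LSB */
    }
    return seed;
}
```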
By the way, you cannot do srand(time(0)) anyway: where do you expect time() to get the count of seconds since the epoch on a microcontroller? :)

ARM926EJ-S cycle-counter

I'm using an ARM926EJ-S and am trying to figure out whether the ARM can expose the CPU's cycle counter (e.g. in a readable register), i.e. a number representing the cycles elapsed since the CPU was powered on.
In my system I have only low-resolution external RTCs/timers. I would like to be able to achieve a high-resolution timer.
Many thanks in advance!
You probably have only two choices:
Use an instruction-cycle accurate simulator; the problem here is that effectively simulating peripherals and external stimulus can be complex or impossible.
Use a peripheral hardware timer. In most cases you will not be able to run such a timer at the typical core clock rate of an ARM9, and there will be an overhead in servicing the timer on either side of the period being timed, but it can be used to give execution time over larger or longer-running sections of code, which may be of more practical use than a cycle count (a rough sketch of reading such a timer appears below).
While cycle count may be somewhat scalable to different clock rates, it remains constrained by memory and I/O wait states, so is perhaps not as useful as it may seem as a performance metric, except at the micro-level of analysis, and larger performance gains are typically to be had by taking a wider view.
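A minimal sketch of the peripheral-timer option. The register address and tick rate here are made up; take the real values for your SoC's free-running timer from its reference manual:

```c
#include <stdint.h>

/* Hypothetical: address and rate of a free-running peripheral timer. */
#define TIMER_COUNT_REG     ((volatile uint32_t *)0xFFFEE010u)  /* made-up address */
#define TIMER_TICKS_PER_SEC 24000000u                           /* made-up rate */

static inline uint32_t timer_now(void)
{
    return *TIMER_COUNT_REG;   /* free-running count; wraps at 32 bits */
}

/* Elapsed microseconds between two readings (assumes less than one wrap). */
static uint32_t elapsed_us(uint32_t start, uint32_t end)
{
    return (uint32_t)((uint64_t)(end - start) * 1000000u / TIMER_TICKS_PER_SEC);
}
```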
The ARM9 is not equipped with a PMU (Performance Monitoring Unit) as included in the Cortex family. The PMU is described here. The Linux kernel comes with support for using the PMU for benchmarking performance. See here for documentation of the perf tool-set.
I'm a bit unsure about the ARM9, though; I need to dig a bit more...

CUDA: Difference between CPU timer and CUDA timer event?

What is the difference between using a CPU timer and the CUDA timer event to measure the time taken for the execution of some CUDA code?
Which of these should a CUDA programmer use?
And why?
What I know:
CPU timer usage would involve calling cudaThreadSynchronize before any time is noted.
For noting the time, one of these could be used:
clock()
high-resolution performance counter like QueryPerformanceCounter (on Windows)
CUDA timer event would involve recording before and after by using cudaEventRecord. At a later time, the elapsed time would be obtained by calling cudaEventSynchronize on the events, followed by cudaEventElapsedTime to obtain the elapsed time.
The answer to the first part of question is that cudaEvents timers are based off high resolution counters on board the GPU, and they have lower latency and better resolution than using a host timer because they come "off the metal". You should expect sub-microsecond resolution from the cudaEvents timers. You should prefer them for timing GPU operations for precisely that reason. The per-stream nature of cudaEvents can also be useful for instrumenting asynchronous operations like simultaneous kernel execution and overlapped copy and kernel execution. Doing that sort of time measurement is just about impossible using host timers.
The main advantage of using CUDA events for timing is that they're less subject to perturbations due to other system events, like paging or interrupts from the disk or network controller. Also, because the cu(da)EventRecord is asynchronous, there is less of a Heisenberg effect when timing short, GPU-intensive operations.
Another advantage of CUDA events is that they have a clean cross-platform API - no need to wrap gettimeofday() or QueryPerformanceCounter().
One final note: use caution when using streamed CUDA events for timing - if you do not specify the NULL stream, you may wind up timing operations that you did not intend to. There is a good analogy between CUDA events and reading the CPU's timestamp counter, which is a serializing instruction. On modern superscalar processors, the serializing semantics make the timing unambiguous. Also like RDTSC, you should always bracket the events you want to time with enough work that the timing is meaningful (just like you can't use RDTSC to meaningfully time a single machine instruction).
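For reference, the event-based pattern described above looks roughly like this (host-side C against the CUDA runtime API; a sketch only, with error checking omitted, and launch_my_kernel() a hypothetical stand-in for whatever asynchronous GPU work is being timed):

```c
#include <cuda_runtime.h>

/* Assumed to be defined in a .cu file elsewhere; launches the GPU work
 * being timed on the given stream (asynchronously). */
extern void launch_my_kernel(float *d_data, int n, cudaStream_t stream);

float time_gpu_work(float *d_data, int n)
{
    cudaEvent_t start, stop;
    float ms = 0.0f;

    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);            /* 0 = default (NULL) stream */
    launch_my_kernel(d_data, n, 0);
    cudaEventRecord(stop, 0);

    cudaEventSynchronize(stop);           /* wait until 'stop' has completed */
    cudaEventElapsedTime(&ms, start, stop);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;                            /* elapsed GPU time in milliseconds */
}
```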

How to measure CPU cycles per instruction in a C program

I have a C program where I am starting to use some SIMD optimizations for the SPE (Cell processor). I would like some way to "time" how many cycles they need. One idea is to switch the optimizations on/off and measure the whole execution time, but this is slow. I can also put gettimeofday(&start,NULL) and similar statements before and after the code, but I think they are only precise when dealing with more than milliseconds.
I wonder if it is possible to efficiently measure the nanoseconds per instruction, or just the CPU cycles, or some other precise timing measure.
Depending on your CPU you may be able to get at performance registers within the CPU itself which track instruction clocks and many other useful things. Profilers and other performance utilities can do this, so it should also be possible from user code too. On Mac OS X I would use the Apple CHUD framework, but you didn't state what OS or CPU you are using so it's hard to give specific suggestions.
Execute the code to be tested in a loop and divide the time it takes by the loop count. The timer you use does not need to be high resolution to measure correct values this way.
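For instance, the loop-and-divide approach might look like this (a sketch; work_under_test() is a hypothetical stand-in for the SIMD code being measured, and the iteration count should be large enough that the total run lasts well beyond the timer's resolution):

```c
#include <stdio.h>
#include <sys/time.h>

/* Hypothetical: the code under test - replace with the SIMD kernel. */
extern void work_under_test(void);

int main(void)
{
    const long iterations = 1000000L;
    struct timeval start, end;

    gettimeofday(&start, NULL);
    for (long i = 0; i < iterations; i++)
        work_under_test();
    gettimeofday(&end, NULL);

    double total_us = (end.tv_sec - start.tv_sec) * 1e6
                    + (end.tv_usec - start.tv_usec);
    printf("%.1f ns per iteration\n", total_us * 1000.0 / iterations);
    return 0;
}
```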
Nanoseconds won't be enough for that; you need picoseconds.
I don't think that you can measure something like this reliably. You will have to look into specifications (I'm not sure if current CPUs have this information documented).
As someone who is not a C guy... my guess is you need to look at the assembly code and go from there. The only problem is that a single instruction could take 1 or 100000 CPU cycles, depending on the exact CPU you are on.
