Ticker timer interrupt max count - timer

on this link
https://os.mbed.com/handbook/Ticker
it says, "Note that timers are based on 32-bit int microsecond counters, so can only time up to a maximum of 2^31-1 microseconds i.e. 30 minutes. They are designed for times between microseconds and seconds. For longer times, you should consider the time()/Real time clock."
My question is: does this 30-minute limit apply only when an interval of 1 us is being used? Something like this:
flipper.attach_us(&flip, 1);
In case I have to call an interrupt every 1 ms, does this mean the counter can now go up to 30000 minutes? Something like this:
flipper.attach_us(&flip, 1000);
Also, what happens to the timer after it fills up: does it clear itself on its own and restart, or does it throw an error?

This is the function declaration:
void attach_us(Callback<void()> func, us_timestamp_t t)
"t" is the time between calls. The warning you found in the linked page (https://os.mbed.com/handbook/Ticker) says that the max interval time you can set to is about 30 minutes because "t" is a 32-bit int.
(I think it is 64-bit in the latest API though. https://os.mbed.com/docs/latest/reference/ticker.html).
When the internal counter reaches the value specified by "t", it triggers the callback function, and it repeats this until you detach the ticker.
If your interval is 1ms, you don't need to worry about the 30mins max limitation.
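To put numbers on that, here is a small stand-alone arithmetic sketch (plain C, not mbed code): the limit applies to the interval argument itself, and 2^31-1 microseconds is roughly 35.8 minutes, which the handbook rounds to "30 minutes". A 1 ms interval is nowhere near it, and the Ticker simply keeps firing at that interval for as long as it stays attached.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t max_interval_us = INT32_MAX;       /* 2^31 - 1 microseconds */

    printf("max interval : %ld us (~%.1f minutes)\n",
           (long)max_interval_us, max_interval_us / 60e6);   /* ~35.8 minutes */
    printf("1 ms interval: 1000 us (well under the limit)\n");
    return 0;
}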

The page you're linking to is old. Timers are no longer 32-bit; they are 64-bit now, so this is no longer an issue. See the latest version of the Ticker docs at https://os.mbed.com/docs/latest/reference/ticker.html .

Related

Difference between timer values not as expected

I'm sorry if this is a basic question.
I am working on an embedded project. Somewhere in the project, in some driver files, there is a function declared as _time returnTime() that is said to return an MCU time with a 10-microsecond resolution. I guess it reads the registers of one of the MCU's timer modules, which has a 10-microsecond resolution, and returns their value.
I also have a user-defined function, and I want to measure how long it takes to execute.
So I decided to use returnTime() for this, calling it before the first and after the last instruction of my function.
In the function I want to time, I do something like this:
void myFunction(void)
{
    _time time1 = returnTime();
    ...
    _time time2 = returnTime();
    _time time_elapsed = time2 - time1;
}
When I do this, time_elapsed is sometimes 5 and sometimes 4. Is that possible? Since the returnTime() function returns a time with 10-microsecond resolution, shouldn't time_elapsed also be a multiple of 10 (like 0, 10, 20, ... etc.)?
The timer may have a resolution of 10 microseconds but what are the units of the value returned by returnTime()? Perhaps returnTime() returns a value with units of "ticks" where one tick is equal to 10 microseconds. (The hardware timer module typically counts by one tick.)
Let's assume your function takes 42 microseconds to execute. If the function starts executing one microsecond before the next tick of the timer, then the timer will tick five times before the function finishes. But if the function starts one microsecond after the timer ticks, then the timer will tick only four times during the function execution. So sometimes returnTime() will return 5 and sometimes it will return 4.
The above are the simplest explanations. Also, the hardware timer module could be set up incorrectly for the MCU clock speed. And if interrupts are enabled, then an interrupt could be occurring during execution of the function, which makes the function execution time appear to vary.
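A small stand-alone sketch of that interpretation (the 10-microseconds-per-tick factor and the 42 us example figure are just the assumptions used in this answer):

#include <stdio.h>
#include <stdint.h>

#define US_PER_TICK 10u            /* assumed: one hardware timer tick = 10 microseconds */

static uint32_t ticks_to_us(uint32_t ticks)
{
    return ticks * US_PER_TICK;
}

int main(void)
{
    /* A function that really takes 42 us can span either 4 or 5 tick
     * boundaries, depending on where it starts relative to the 10 us grid. */
    printf("4 ticks -> %u us\n", (unsigned)ticks_to_us(4));   /* 40 us */
    printf("5 ticks -> %u us\n", (unsigned)ticks_to_us(5));   /* 50 us */
    return 0;
}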

How to write a time difference function to STM32F4

I am working on an STM32F4 and am pretty new at it. I know the basics of C, but even after more than a day of research I still have not found a solution to this.
I simply want to write a delay function myself. The processor runs at 168 MHz (HCLK), so my intuition says it produces 168x10^6 clock cycles each second. So the method should be something like this:
1- Store the current clock count in a variable.
2- Time diff = (clock value at any time - stored starting clock value) / 168000000
This flow should give me the time difference in seconds, and I can then convert it to whatever I want.
But unfortunately, even though it seems so easy, I just can't get any of these methods working on the MCU.
I tried time.h but it did not work properly. For example, clock() gave the same result over and over, and time() (the one that returns seconds since 1970) gave hexadecimal 0xFFFFFFFF (-1, which I guess means error).
Thanks.
Edit: While writing this I assumed that some function like clock() would return the total clock count since the start of the program, but now I think it would overflow the uint32_t range after 4 billion / 168 million seconds. I am really confused.
The answer depends on the required precision and intervals.
For shorter intervals with sub-microsecond precision there is a cycle counter. Your suspicion is correct: it would overflow after 2^32 / (168*10^6) ~ 25.5 seconds.
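For example, here is a sketch of using that cycle counter through the CMSIS DWT registers. The register and bit names are the standard CMSIS ones for a Cortex-M4, and the 168 cycles-per-microsecond factor assumes HCLK = 168 MHz; check both against your device headers and clock setup.

#include <stdint.h>
#include "stm32f4xx.h"                 /* CMSIS core + device definitions (header name assumed) */

/* Enable the DWT cycle counter once at startup. */
static void cycle_counter_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace block */
    DWT->CYCCNT = 0;                                  /* reset the counter      */
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start counting cycles  */
}

/* Busy-wait for roughly 'us' microseconds at 168 MHz. */
static void delay_us(uint32_t us)
{
    uint32_t start  = DWT->CYCCNT;
    uint32_t cycles = us * 168u;                      /* 168 cycles per microsecond */

    while ((DWT->CYCCNT - start) < cycles) {
        /* unsigned subtraction stays correct across the ~25.5 s wrap of the 32-bit counter */
    }
}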
For longer intervals there are timers that can be prescaled to support any possible subdivision of the 168 MHz clock. The most commonly used setup is the SysTick timer set to generate an interrupt at a 1 kHz frequency, which increments a software counter. Reading this counter gives the number of milliseconds elapsed since startup. As it is usually a 32-bit counter, it would overflow after 49.7 days. The HAL library sets SysTick up this way; the counter can then be queried using the HAL_GetTick() function.
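A sketch built on that, assuming the STM32 HAL is in use and SysTick is left at its default 1 kHz configuration:

#include <stdint.h>
#include "stm32f4xx_hal.h"             /* HAL header name assumed for an STM32F4 project */

/* Millisecond delay based on the SysTick-driven tick counter. */
static void delay_ms(uint32_t ms)
{
    uint32_t start = HAL_GetTick();
    while ((HAL_GetTick() - start) < ms) {
        /* unsigned subtraction stays correct across the 49.7-day wrap */
    }
}

/* Measure how long a function takes, in milliseconds. */
static uint32_t time_it_ms(void (*fn)(void))
{
    uint32_t start = HAL_GetTick();
    fn();
    return HAL_GetTick() - start;
}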
For even longer or more specialized timing requirements you can use the RTC peripheral, which keeps calendar time, or the TIM peripherals (basic, general and advanced timers); these have their own prescalers, and they can be arranged in a master-slave setup to give almost arbitrary precision and intervals.

get process start time in kernel module solaris

I am trying to get a process's start time in a kernel module.
I get the proc struct pointer, and from it I read the p_mstart field:
typedef struct proc {
    ...
    /*
     * Microstate accounting, resource usage, and real-time profiling
     */
    hrtime_t p_mstart;        /* hi-res process start time */
This returns the number 1976026375725303:
struct proc *iterated_process_ptr = curproc;
LOG("***KERNEL***: PID=%d, StartTime=%lld", iterated_process_ptr->p_pidp->pid_id, iterated_process_ptr->p_mstart);
What is this number?
The Solaris documentation says:
The gethrtime() function returns the current high-resolution real time. Time is expressed as nanoseconds since some arbitrary time in the past.
And in the book Solaris Internals they write:
Within the process, the operating system maintains a high-resolution timestamp that marks process start and terminate times. A p_mstart field, the process start time, is set in the kernel fork() code when the process is created... It is a 64-bit value expressed in nanoseconds.
The number 1976026375725303 does not make sense to me at all.
If I divide by 1,000,000,000 and then by 3600 to get hours, I get about 549 hours, roughly 23 days, but my uptime is 5 days.
Based on an answer received on the Google group comp.unix.solaris: instead of going to proc->p_mstart, I need to use
iterated_process_ptr->p_user.u_start
This gives me the same struct (timestruc_t) that user space sees as pr_start in psinfo:
typedef struct psinfo {
    ...
    timestruc_t pr_start;     /* process start time, from the epoch */
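A minimal kernel-module sketch of that approach. The struct fields are the ones named above; cmn_err(9F) is used for logging, and the exact headers and layout should be checked against your Solaris release.

#include <sys/types.h>
#include <sys/time.h>
#include <sys/proc.h>
#include <sys/user.h>
#include <sys/cmn_err.h>

/* Log a process's wall-clock start time (the same value procfs exposes as pr_start). */
static void log_start_time(proc_t *p)
{
    timestruc_t start = p->p_user.u_start;    /* seconds + nanoseconds since the epoch */

    cmn_err(CE_NOTE, "PID=%d started at %ld.%09ld seconds since the epoch",
            (int)p->p_pidp->pid_id, (long)start.tv_sec, (long)start.tv_nsec);
}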
The number 1976026375725303 does not make sense at all.
Yes it does. Per the very documentation that you quoted:
Time is expressed as nanoseconds since some arbitrary time in the past.
Thus, the value can be used to calculate how long ago the process started:
hrtime_t howLongAgo = gethrtime() - p->p_mstart;
That produces a value in nanoseconds for how long ago the process started.
And note that the value produced this way is accurate, whereas the value from iterated_process_ptr->p_user.u_start is subject to system clock changes, so with it you can't say, "This process has been running for 3 hours, 15 minutes, and 3 seconds" unless you also know the system clock hasn't been reset or modified in any way.
Per the Solaris 11 gethrtime.9F man page:
Description
The gethrtime() function returns the current high-resolution real time. Time is expressed as nanoseconds since some arbitrary time in the past; it is not correlated in any way to the time of day, and thus is not subject to resetting or drifting by way of adjtime(2) or settimeofday(3C). The hi-res timer is ideally suited to performance measurement tasks, where cheap, accurate interval timing is required.
Return Values
gethrtime() always returns the current high-resolution real time. There are no error conditions.
...
Notes
Although the units of hi-res time are always the same (nanoseconds), the actual resolution is hardware dependent. Hi-res time is guaranteed to be monotonic (it does not go backward, it does not periodically wrap) and linear (it does not occasionally speed up or slow down for adjustment, as the time of day can), but not necessarily unique: two sufficiently proximate calls might return the same value.
The time base used for this function is the same as that for gethrtime(3C). Values returned by both of these functions can be interleaved for comparison purposes.
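Putting that into a short sketch (kernel context assumed; gethrtime(9F) and the p_mstart field are as discussed above, and the headers should be verified against your release):

#include <sys/types.h>
#include <sys/time.h>
#include <sys/proc.h>
#include <sys/cmn_err.h>

/* How long ago did this process start, measured on the hrtime base? */
static void log_process_age(proc_t *p)
{
    hrtime_t age_ns = gethrtime() - p->p_mstart;     /* nanoseconds since fork() */

    cmn_err(CE_NOTE, "PID=%d has been running for %lld seconds",
            (int)p->p_pidp->pid_id, (long long)(age_ns / 1000000000LL));
}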

Precise Linux Timing - What Determines the Resolution of clock_gettime()?

I need to do precision timing to the 1 us level to time a change in duty cycle of a pwm wave.
Background
I am using a Gumstix Overo Water COM (https://www.gumstix.com/store/app.php/products/265/) that has a single-core ARM Cortex-A8 processor running at 499.92 BogoMIPS (the Gumstix page claims up to 1 GHz, with 800 MHz recommended) according to /proc/cpuinfo. The OS is an Angstrom image of Linux based on kernel version 2.6.34, and it is the stock image on the Gumstix Water COM.
The Problem
I have done a fair amount of reading about precise timing in Linux (and have tried most of it), and the consensus seems to be that using clock_gettime() with CLOCK_MONOTONIC is the best way to do it. (I would have liked to use the RDTSC register for timing, since I have one core with minimal power-saving abilities, but this is not an Intel processor.) So here is the odd part: while clock_getres() returns 1, suggesting a resolution of 1 ns, actual timing tests suggest a minimum resolution of 30517 ns, or (it can't be a coincidence) exactly the time between ticks of a 32.768 kHz clock. Here's what I mean:
// Stackoverflow example
#include <stdio.h>
#include <time.h>

#define SEC2NANOSEC 1000000000

int main(int argc, const char* argv[])
{
    // //////////////// Min resolution test //////////////////////
    struct timespec resStart, resEnd, ts;
    ts.tv_sec  = 0;   // s
    ts.tv_nsec = 1;   // ns
    int iters = 100;
    double resTime, sum = 0;
    int i;
    for (i = 0; i < iters; i++)
    {
        clock_gettime(CLOCK_MONOTONIC, &resStart);        // start timer
        // clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, &ts);
        clock_gettime(CLOCK_MONOTONIC, &resEnd);          // end timer
        resTime = ((double)resEnd.tv_sec*SEC2NANOSEC + (double)resEnd.tv_nsec)
                - ((double)resStart.tv_sec*SEC2NANOSEC + (double)resStart.tv_nsec);
        sum = sum + resTime;
        printf("resTime = %f\n", resTime);
    }
    printf("Average = %f\n", sum/(double)iters);
    return 0;
}
(Don't fret over the double casting; tv_sec is a time_t and tv_nsec is a long.)
Compile with:
gcc soExample.c -o runSOExample -lrt
Run with:
./runSOExample
With the nanosleep commented out as shown, the result is either 0 ns or 30517 ns, with the majority being 0 ns. This leads me to believe that CLOCK_MONOTONIC is updated at 32.768 kHz: most of the time the clock has not been updated before the second clock_gettime() call is made, and in the cases where the result is 30517 ns, the clock has been updated between the calls.
When I do the same thing on my development computer (AMD FX(tm)-6100 Six-Core Processor running at 1.4 GHz) the minimum delay is a more constant 149-151ns with no zeros.
So, let's compare those results to the CPU speeds. For the Gumstix, that 30517ns (32.768kHz) equates to 15298 cycles of the 499.93MHz cpu. For my dev computer that 150ns equates to 210 cycles of the 1.4Ghz CPU.
With the clock_nanosleep() call uncommented the average results are these:
Gumstix: Avg value = 213623 and the result varies, up and down, by multiples of that min resolution of 30517ns
Dev computer: 57710-68065 ns with no clear trend. In the case of the dev computer I expect the resolution to actually be at the 1 ns level and the measured ~150ns truly is the time elapsed between the two clock_gettime() calls.
So, my questions are these:
What determines that minimum resolution?
Why is the resolution of the dev computer 30000X better than the Gumstix when the processor is only running ~2.6X faster?
Is there a way to change how often CLOCK_MONOTONIC is updated and where? In the kernel?
Thanks! If you need more info or clarification just ask.
As I understand it, the difference between the two environments (Gumstix and your dev computer) might be the underlying timer hardware they are using.
Commented nanosleep() case:
You are using clock_gettime() twice. To give you a rough idea of what this clock_gettime() will ultimately get mapped to (in the kernel):
clock_gettime --> clock_get() --> posix_ktime_get_ts --> ktime_get_ts() --> timekeeping_get_ns() --> clock->read()
clock->read() basically reads the value of the counter provided by the underlying timer driver and the corresponding hardware. Taking the difference between the current counter value and a stored counter value from the past, and then doing the nanoseconds conversion mathematics, yields the nanoseconds elapsed and updates the timekeeping data structures in the kernel.
For example, if you have an HPET timer which gives you a 10 MHz clock, the hardware counter is updated at 100 ns intervals.
Let's say, on the first clock->read(), you get a counter value of X.
The Linux timekeeping data structures read this value X, compute the difference 'D' compared to some old stored counter value, do the counter-difference-'D'-to-nanoseconds-'n' conversion mathematics, update the data structures by 'n', and yield this new time value to user space.
When the second clock->read() is issued, it again reads the counter and updates the time.
Now, for an HPET timer, this counter is updated every 100 ns, and hence you will see this difference being reported to user space.
Now, let's replace this HPET timer with a slow 32.768 kHz clock. The counter behind clock->read() is now updated only every 30517 ns, so if your second call to clock_gettime() comes before this period has elapsed, you will get 0 (which is the majority of cases), and in some cases your second call will land after the counter has incremented by 1, i.e. 30517 ns have elapsed. Hence the occasional value of 30517 ns.
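A toy sketch of that counter-to-nanoseconds arithmetic (plain integer math; the real kernel uses a precomputed mult/shift pair, and the two frequencies below are just the HPET and 32 kHz examples from this answer):

#include <stdio.h>
#include <stdint.h>

/* Convert a raw counter difference into nanoseconds for a given counter frequency. */
static uint64_t ticks_to_ns(uint64_t delta_ticks, uint64_t freq_hz)
{
    return (delta_ticks * 1000000000ULL) / freq_hz;
}

int main(void)
{
    printf("10 MHz HPET : one tick = %llu ns\n",
           (unsigned long long)ticks_to_ns(1, 10000000ULL));  /* 100 ns   */
    printf("32.768 kHz  : one tick = %llu ns\n",
           (unsigned long long)ticks_to_ns(1, 32768ULL));     /* 30517 ns */
    return 0;
}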
Uncommented Nanosleep() case:
Let's trace the clock_nanosleep() for monotonic clocks:
clock_nanosleep() --> nsleep --> common_nsleep() --> hrtimer_nanosleep() --> do_nanosleep()
do_nanosleep() will simply put the current task in the INTERRUPTIBLE state, wait for the timer to expire (which is 1 ns here), and then set the current task back to the RUNNING state. You see, there are a lot of factors involved now, mainly when your kernel thread (and hence the user-space process) will be scheduled again. Depending on your OS, you will always face some latency when doing a context switch, and this is what we observe in the average values.
Now, your questions:
What determines that minimum resolution?
I think the resolution/precision of your system will depend on the underlying timer hardware being used (assuming your OS is able to provide that precision to the user-space process).
Why is the resolution of the dev computer 30000X better than the Gumstix when the processor is only running ~2.6X faster?
Sorry, you lost me here. How is it 30000x better? To me it looks like something around 200x (30517 ns / 150 ns ~ 200x). But anyway, as I understand it, CPU speed may or may not have anything to do with the timer resolution/precision. So this assumption may be right on some architectures (when you are using the TSC hardware) but fail on others (using HPET, PIT, etc.).
Is there a way to change how often CLOCK_MONOTONIC is updated and where? In the kernel?
You can always look into the kernel code for details (that's how I looked into it).
In the Linux kernel code, look for these source files and documentation:
kernel/posix-timers.c
kernel/hrtimer.c
Documentation/timers/hrtimers.txt
I do not have a Gumstix on hand, but it looks like your clocksource is slow.
run:
$ dmesg | grep clocksource
If you get back
[ 0.560455] Switching to clocksource 32k_counter
This might explain why your clock is so slow.
In recent kernels there is a directory /sys/devices/system/clocksource/clocksource0 with two files: available_clocksource and current_clocksource. If you have this directory, try switching to a different source by echoing its name into the second file.

GetTickCount function

I have a question regarding the GetTickCount function.
I have two calls to this function in my code, with several statements between them, and the function returns the same count in both calls.
i.e.
var1 = GetTickCount();
/* ... code in between ... */
var2 = GetTickCount();
var1 and var2 have the same value in them.
Can someone help?
Assuming this is the Windows GetTickCount call, that's entirely reasonable:
The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds.
Note that it's only measuring milliseconds to start with - and you can do an awful lot in a millisecond these days.
The docs go on to say:
If you need a higher resolution timer, use a multimedia timer or a high-resolution timer.
Perhaps QueryPerformanceCounter would be more appropriate?
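For example, here is a minimal sketch of timing a short stretch of code with QueryPerformanceCounter; the Sleep() call just stands in for the code being measured.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);          /* counts per second */
    QueryPerformanceCounter(&start);

    Sleep(5);                                  /* stand-in for the code being timed */

    QueryPerformanceCounter(&end);

    double elapsed_us = (double)(end.QuadPart - start.QuadPart)
                        * 1000000.0 / (double)freq.QuadPart;
    printf("elapsed: %.1f us\n", elapsed_us);
    return 0;
}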
If you are referring to the Windows API call then read this.
I would guess that you are trying to time a short interval, so this paragraph is relevant. Are you timing something shorter than that interval? If so, perhaps look into QueryPerformanceCounter instead.
The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds. The resolution of the GetTickCount function is not affected by adjustments made by the GetSystemTimeAdjustment function.
If you go the QueryPerformanceCounter route you need to watch out for hardware-dependent weirdness. It's been a while, so I don't know if that kind of thing still happens.
You might also want to take a look at this link, since it has a nice sample app which compares QueryPerformanceCounter, GetTickCount and timeGetTime.
From MSDN
The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds. The resolution of the GetTickCount function is not affected by adjustments made by the GetSystemTimeAdjustment function.
The elapsed time is stored as a DWORD value. Therefore, the time will wrap around to zero if the system is run continuously for 49.7 days. To avoid this problem, use the GetTickCount64 function. Otherwise, check for an overflow condition when comparing times.
If you need a higher resolution timer, use a multimedia timer or a high-resolution timer.
GetTickCount has a resolution of one millisecond (in practice, it's several milliseconds). It's highly likely that the functions you're calling in between are taking considerably less than 1 millisecond.

Resources