How to implement C timer in Vxworks - c

I have implemented timer functionality to measure the performance of my task on Windows and Linux, but the Linux implementation does not work on a VxWorks PPC 750 board: gettimeofday is not available in VxWorks.
t1 = vxworks_start_timer(); //How to implement ?
my_task();
t2 = vxworks_stop_timer(); //How to implement ?
elapsedtime = t2-t1;
How can I implement this timer in VxWorks to calculate the elapsed time of a task?

There are various approaches to this, dependent on your needs.
If the activity to be measured is long running, you might prefer to use the system tick counter, accessible via tickGet() or tickGet64().
This counter increments at the system clock rate (i.e. the rate of the scheduler tick, not the CPU frequency), so the resolution is limited to a single tick, which might be as coarse as 1/60th of a second. You can use sysClkRateGet() to determine the frequency.
For long running tasks, the above is probably sufficient, however if you require higher resolution, possibly at the expense of limited duration, you can use the system timestamp counter, if it is configured. For this, you can use sysTimestamp() (or sysTimestamp64()), and also use sysTimestampFreq() to get the frequency.
Depending on your system configuration, the counter may reset frequently; you can use sysTimestampPeriod() to work out when this will occur, and you will need to handle the wrap-around in your timing code.
You can, of course, use both methods together to provide a long-running, yet high-resolution, timer.
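If it helps, here is a minimal sketch of the tick-based approach, assuming the standard tickLib/sysLib calls named above (tickGet(), sysClkRateGet()); my_task() stands in for the code being measured:

/* Sketch: elapsed time via the VxWorks tick counter (one-tick resolution). */
#include <vxWorks.h>
#include <tickLib.h>
#include <sysLib.h>
#include <stdio.h>

extern void my_task(void);          /* the work being measured */

void timeMyTask(void)
{
    ULONG t1 = tickGet();           /* ticks since boot */
    my_task();
    ULONG t2 = tickGet();

    int rate = sysClkRateGet();     /* ticks per second (often 60 or 100) */
    printf("elapsed: %lu ticks (%.3f s)\n",
           (unsigned long)(t2 - t1), (double)(t2 - t1) / rate);
}

For higher resolution, the same structure works with sysTimestamp()/sysTimestampFreq(), provided you also handle the counter wrap described above.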

If the system timer tick resolution is sufficient, you could use tickGet() and sysClkRateGet()
or clock_gettime(), but the resolution is still limited to the system clock tick.
Otherwise, you could read the PowerPC time base registers TBL and TBU directly (architecture-specific).
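For the time-base route, here is a sketch of the usual 32-bit PowerPC read sequence with GCC-style inline assembly (the function name is illustrative, and converting counts to seconds still requires the board's time-base frequency):

/* Sketch: read the 64-bit PowerPC time base (TBU:TBL) on a 32-bit core.
 * TBU is re-read to guard against a carry from TBL into TBU mid-read. */
static inline unsigned long long readTimeBase(void)
{
    unsigned long hi, lo, hi2;
    do {
        __asm__ volatile ("mftbu %0" : "=r" (hi));
        __asm__ volatile ("mftb  %0" : "=r" (lo));
        __asm__ volatile ("mftbu %0" : "=r" (hi2));
    } while (hi != hi2);
    return ((unsigned long long)hi << 32) | lo;
}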

Related

How to implement time counting in a new operating system?

I have to implement the function sleep() in an operating system where it does not yet exist.
The problem is that I have to count the elapsed time in order to wake the sleeping thread up.
How should I realize this? Do I have to count CPU ticks, or is there another way?
Aren't CPU ticks dependent on the CPU frequency, which differs from CPU to CPU?
I have to implement the function in C.
The time() function doesn't exist either.
Thank you in advance!
Typically, such functionality is provided by a hardware timer interrupt (and its associated driver) that manages a 'tick count' and a delta-queue of 'Thread Control Block' pointers (pTCB). The pTCBs for sleeping threads are stored in the queue, ordered by interval-expiry tick count. The timer interrupt increments the tick count and checks it against the expiry count of the item at the head of the queue.
When a thread requests a sleep, its pTCB is taken out of the set of ready threads, the expiry count is calculated, and the pTCB is inserted into the timer queue. When the pTCB reaches the head of the queue and its expiry tick has arrived, it is popped and added back to the set of ready threads so that it may be set running.
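A stripped-down sketch of that scheme follows; for simplicity it stores absolute wake ticks rather than deltas, interrupt masking around the queue is omitted, and tcb_t, make_ready(), make_unready() and reschedule() are hypothetical placeholders for whatever your kernel provides:

/* Sketch: tick-driven sleep(). All kernel hooks here are hypothetical. */
#include <stdint.h>

typedef struct tcb {
    struct tcb *next;
    uint64_t    wake_tick;             /* absolute tick at which to wake */
    /* ... rest of the thread control block ... */
} tcb_t;

extern void make_ready(tcb_t *t);      /* put a thread back on the ready set */
extern void make_unready(tcb_t *t);    /* remove a thread from the ready set */
extern void reschedule(void);          /* switch to another runnable thread */

static volatile uint64_t tick_count;   /* incremented by the timer ISR */
static tcb_t *sleep_queue;             /* sorted by wake_tick, earliest first */

/* Called from the timer interrupt at a fixed rate, e.g. 1000 Hz. */
void timer_isr(void)
{
    tick_count++;
    while (sleep_queue && sleep_queue->wake_tick <= tick_count) {
        tcb_t *t = sleep_queue;
        sleep_queue = t->next;
        make_ready(t);
    }
}

/* Sleep for a number of ticks; ticks = seconds * timer frequency. */
void sleep_ticks(tcb_t *self, uint64_t ticks)
{
    tcb_t **pp = &sleep_queue;

    self->wake_tick = tick_count + ticks;
    make_unready(self);

    /* insert into sleep_queue, keeping it sorted by wake_tick */
    while (*pp && (*pp)->wake_tick <= self->wake_tick)
        pp = &(*pp)->next;
    self->next = *pp;
    *pp = self;

    reschedule();
}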
It totally depends on your platform/OS. It has to provide you some time-like information, e.g. ticks. Otherwise it is just impossible.
Converting ticks to seconds of course requires additional information. Again, this can be supplied by your platform. Or you have to find it out by other means (manual, configure it yourself, ...).
The easiest and most common way to do that in operating systems is to set up a timer interrupt at a static frequency, then build a timer framework on top of that, then use that timer framework to fire off wakeups for your sleeping threads.
A good paper that discusses various data structures for how to do it efficiently is here. I recommend from my own experience scheme 7. It's quite easy to implement and performs wonderfully.
You can find a fast implementation with a good API here. But I'm biased, because I wrote it.
If you don't want a timer interrupt with a static frequency it becomes much harder to implement a nice timer facility with good performance. I've done a few experiments, but I'd recommend you to start with simple timer interrupt with a static frequency. Once you start doing dynamic timers you need to exactly understand the tradeoffs you are prepared to make.
You can use time():
time_t t = time(NULL);
while (time(NULL) < t + sleepDuration)
    ;   /* busy-wait until sleepDuration seconds have elapsed */
You may use the CPU's Time Stamp Counter (TSC) to get counter values for timekeeping. See Chapter 16.12.1 of the "Intel® 64 and IA-32 Architectures Software Developer's Manual".
The TSC is a low-level counter which may provide counter values independent of CPU speed:
"The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor’s support for invariant TSC is indicated by CPUID.80000007H:EDX[8].
The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
However, for the implementation of sleep()-like functionality you should look into timer hardware such as the HPET or the ACPI PM timer. See the "Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3B: System Programming Guide, Part 2" and the "IA-PC HPET (High Precision Event Timers) Specification" for details.
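If you go the TSC route on x86, the raw read is a one-liner with the GCC/Clang intrinsic; note that converting counts to time still requires the TSC frequency, which this sketch takes as a parameter rather than inventing a way to discover it:

/* Sketch: raw TSC read on x86 with GCC/Clang intrinsics. */
#include <stdint.h>
#include <x86intrin.h>

static inline uint64_t tsc_now(void)
{
    return __rdtsc();               /* executes the RDTSC instruction */
}

/* Elapsed seconds between two TSC samples, given the TSC frequency in Hz
 * (tsc_hz must be determined separately, e.g. by calibrating against
 * another clock). */
static inline double tsc_elapsed_s(uint64_t start, uint64_t end, uint64_t tsc_hz)
{
    return (double)(end - start) / (double)tsc_hz;
}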

How to implement a clock in c similar to linux system clock

Normally, when a Linux system boots up, it takes the reference time from the RTC and then runs a software clock on its own [generally known as the system clock or wall clock]. When the system is about to shut down, it syncs its wall clock time back to the RTC. I am looking for a method to implement a similar wall clock in C. Can anybody suggest an approach?
Thanks in advance,
Anandhakrishnan Ramasamy.
What OSes usually do is fetch the system startup time from the RTC, HPET, or another timer device. They then program the PIC or APIC to deliver periodic interrupts (e.g. every 100ms). On each of these interrupts, the value of the system clock or wall clock gets updated.
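As a sketch of that scheme (the RTC accessors and the 100 ms tick hookup are hypothetical placeholders for your platform's drivers):

/* Sketch: wall clock seeded from the RTC and advanced by a periodic tick.
 * rtc_read_epoch_seconds(), rtc_write_epoch_seconds() and the tick hookup
 * are hypothetical; substitute your platform's RTC driver and timer ISR. */
#include <stdint.h>

#define TICK_NS  100000000ULL          /* 100 ms per timer interrupt */

extern uint64_t rtc_read_epoch_seconds(void);
extern void     rtc_write_epoch_seconds(uint64_t s);

static volatile uint64_t wall_sec;     /* seconds since the epoch */
static volatile uint64_t wall_nsec;    /* nanoseconds into the current second */

void wallclock_init(void)              /* called once at boot */
{
    wall_sec  = rtc_read_epoch_seconds();
    wall_nsec = 0;
}

void wallclock_tick(void)              /* called from the periodic timer ISR */
{
    wall_nsec += TICK_NS;
    if (wall_nsec >= 1000000000ULL) {
        wall_nsec -= 1000000000ULL;
        wall_sec++;
    }
}

void wallclock_shutdown(void)          /* sync back to the RTC at shutdown */
{
    rtc_write_epoch_seconds(wall_sec);
}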
You can't do it in plain C without relying on functionalities provided by the OS. The reason is that the OS schedules several applications through multiprogramming, and your C application can't have knowledge about when it has been suspended by the scheduler.
Therefore, you have to use POSIX functions like gettimeofday(), time(), and so on.
It's hard to do this 100% correctly. You will have to detect when the CPU goes to sleep, when the system is suspended, and also any time someone changes the timezone or daylight saving time starts or ends. You would have to do all these things yourself.
All CPUs today have a high-resolution timer. It's just a register that increments every CPU clock cycle. If you know the frequency of the CPU, and you read that register on a regular basis (e.g. often enough that it doesn't overflow), you can measure time.
On Linux there is a family of functions that reads this register for you, figures out the CPU frequency, and returns the time in that register in nanoseconds:
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
uint64_t timeInNanoSeconds = ts.tv_nsec + (uint64_t)ts.tv_sec * 1000000000ULL;
That time will wrap around every 5 minutes or so, so you have to read it pretty often so you can detect the wrap-around. Any time you read it, if ts.tv_nsec is smaller than the last time you called it, then you had an overflow and you have to account for it.
Once you can accurately measure the passage of a second, then you can build your wall clock from there.

Scheduling events at microsecond granularity in POSIX

I'm trying to determine the granularity at which I can accurately schedule tasks to occur in C/C++. At the moment I can reliably schedule tasks to occur every 5 microseconds, but I'm trying to see if I can lower this further.
Any advice on how to achieve this / if it is possible would be greatly appreciated.
Since I know timer granularity can often be OS dependent: I am currently running on Linux, but would switch to Windows if its timing granularity were better (although I don't believe it is, based on what I've found about QueryPerformanceCounter)
I execute all measurements on bare-metal (no VM). /proc/timer_info confirms nanosecond timer resolution for my CPU (but I know that doesn't translate to nanosecond alarm resolution)
Current
My current code can be found as a Gist here
At the moment, I'm able to execute a request every 5 microseconds (5000 nanoseconds) with less than 1% late arrivals. When late arrivals do occur, they are typically only one cycle (5000 nanoseconds) behind.
I'm doing 3 things at the moment
Setting the process to real-time priority (as pointed out by Spudd86 here)
struct sched_param schedparm;
memset(&schedparm, 0, sizeof(schedparm));
schedparm.sched_priority = 99; // highest rt priority
sched_setscheduler(0, SCHED_FIFO, &schedparm);
Minimizing the timer slack
prctl(PR_SET_TIMERSLACK, 1);
Using timerfds (part of the 2.6 Linux kernel)
int timerfd = timerfd_create(CLOCK_MONOTONIC,0);
struct itimerspec timspec;
bzero(&timspec, sizeof(timspec));
timspec.it_interval.tv_sec = 0;
timspec.it_interval.tv_nsec = nanosecondInterval;
timspec.it_value.tv_sec = 0;
timspec.it_value.tv_nsec = 1;
timerfd_settime(timerfd, 0, &timspec, 0);
Possible improvements
Dedicate a processor to this process?
Use a nonblocking timerfd so that I can create a tight loop, instead of blocking (tight loop will waste more CPU, but may also be quicker to respond to an alarm)
Using an external embedded device for triggering (can't imagine why this would be better)
Why
I'm currently working on creating a workload generator for a benchmarking engine. The workload generator simulates an arrival rate (X requests / second, etc.) using a Poisson process. From the Poisson process, I can determine the relative times at which requests must be made from the benchmarking engine.
So for instance, at 10 requests a second, we may have requests made at:
t = 0.02, 0.04, 0.05, 0.056, 0.09 seconds
These requests need to be scheduled in advance and then executed. As the number of requests per second increases, the granularity required for scheduling these requests increases (thousands of requests per second requires sub-millisecond accuracy). As a result, I'm trying to figure out how to scale this system further.
You're very close to the limits of what vanilla Linux will offer you, and it's way past what it can guarantee. Adding the real-time patches to your kernel and tuning for full pre-emption will help give you better guarantees under load. I would also remove any dynamic memory allocation from your time-critical code; malloc and friends can (and will) stall for a not-inconsequential (in a real-time sense) period of time if they have to reclaim memory from the I/O cache. I would also consider removing swap from that machine to help guarantee performance. Dedicating a processor to your task will help to avoid context-switch delays but, again, it's no guarantee.
I would also suggest that you be careful with that level of sched_priority, you're above various important bits of Linux there, which can lead to very strange effects.
What you gain from building a realtime kernel is more reliable guarantees (i.e. lower maximum latency) of the time between an I/O or timer event handled by the kernel and control being passed to your app in response. This comes at the price of lower throughput, and you might notice an increase in your best-case latency times.
However, the only reason for using OS timers to schedule events with high precision is if you're afraid of burning CPU cycles in a loop while you wait for your next due event. OS timers (especially in MS Windows) are not reliable for high-granularity timing events, and are very dependent on the sort of timing/HPET hardware available in your system.
When I require highly accurate event scheduling, I use a hybrid method. First, I measure the worst case latency - that is, the biggest difference between the time I requested to sleep, and the actual clock time after sleeping. Let's call this difference "D". (You can actually do this on-the-fly during normal running, by tracking "D" every time you sleep, with something like "D = (D*7 + lastD) / 8" to produce a temporal average).
Then never request to sleep beyond "N - D*2", where "N" is the time of the next event. When within "D*2" time of the next event, enter a spin loop and wait for "N" to occur.
This eats a lot more CPU cycles, but depending on the accuracy you require, you might be able to get away with a "sched_yield()" in your spin loop, which is more kind to your system.
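As a rough sketch of that hybrid method on POSIX, using clock_gettime() and nanosleep(), with "D" maintained by the averaging formula above:

/* Sketch: sleep until roughly D*2 before an absolute deadline, then spin.
 * D is a running estimate of oversleep, updated as D = (D*7 + lastD) / 8. */
#include <stdint.h>
#include <time.h>

static int64_t D_ns = 2000000;              /* initial guess for D: 2 ms */

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

void wait_until(int64_t deadline_ns)
{
    for (;;) {
        int64_t remaining = deadline_ns - now_ns();
        if (remaining <= 2 * D_ns)
            break;                          /* close enough: switch to spinning */

        int64_t sleep_ns = remaining - 2 * D_ns;
        struct timespec req;
        req.tv_sec  = sleep_ns / 1000000000LL;
        req.tv_nsec = sleep_ns % 1000000000LL;

        int64_t before = now_ns();
        nanosleep(&req, NULL);
        int64_t overslept = (now_ns() - before) - sleep_ns;
        if (overslept > 0)
            D_ns = (D_ns * 7 + overslept) / 8;
    }
    while (now_ns() < deadline_ns)
        ;                                   /* spin; sched_yield() is gentler */
}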

CUDA: Difference between CPU timer and CUDA timer event?

What is the difference between using a CPU timer and the CUDA timer event to measure the time taken for the execution of some CUDA code?
Which of these should a CUDA programmer use?
And why?
what I know:
CPU timer usage would involve calling cudaThreadSynchronize before any time is noted.
For noting the time, one of these could be used:
clock()
high-resolution performance counter like QueryPerformanceCounter (on Windows)
Using CUDA timer events would involve recording before and after with cudaEventRecord. At a later time, the elapsed time would be obtained by calling cudaEventSynchronize on the events, followed by cudaEventElapsedTime.
The answer to the first part of question is that cudaEvents timers are based off high resolution counters on board the GPU, and they have lower latency and better resolution than using a host timer because they come "off the metal". You should expect sub-microsecond resolution from the cudaEvents timers. You should prefer them for timing GPU operations for precisely that reason. The per-stream nature of cudaEvents can also be useful for instrumenting asynchronous operations like simultaneous kernel execution and overlapped copy and kernel execution. Doing that sort of time measurement is just about impossible using host timers.
EDIT: I won't answer the last paragraph because you deleted it.
The main advantage of using CUDA events for timing is that they're less subject to perturbations due to other system events, like paging or interrupts from the disk or network controller. Also, because the cu(da)EventRecord is asynchronous, there is less of a Heisenberg effect when timing short, GPU-intensive operations.
Another advantage of CUDA events is that they have a clean cross-platform API - no need to wrap gettimeofday() or QueryPerformanceCounter().
One final note: use caution when using streamed CUDA events for timing - if you do not specify the NULL stream, you may wind up timing operations that you did not intend to. There is a good analogy between CUDA events and reading the CPU's timestamp counter, which is a serializing instruction. On modern superscalar processors, the serializing semantics make the timing unambiguous. Also like RDTSC, you should always bracket the events you want to time with enough work that the timing is meaningful (just like you can't use RDTSC to meaningfully time a single machine instruction).
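For reference, a minimal host-side sketch of the cudaEvent pattern the question describes; the kernel here is a trivial placeholder for whatever you actually want to time:

/* Sketch: timing a kernel launch with CUDA events (runtime API). */
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void placeholder_kernel(void) { /* work to be timed goes here */ }

int main(void)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);              /* record in the NULL stream */
    placeholder_kernel<<<1, 256>>>();
    cudaEventRecord(stop, 0);

    cudaEventSynchronize(stop);             /* block until 'stop' has occurred */

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop); /* elapsed time in milliseconds */
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}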

1ms resolution timer under linux recommended way

I need a timer tick with 1ms resolution under Linux. It is used to increment a timer value that in turn is used to see if various events should be triggered. The POSIX timerfd_create is not an option because of the glibc requirement. I tried timer_create and timer_settime, but the best I get from them is a 10ms resolution; smaller values seem to default to 10ms resolution. getitimer and setitimer have a 10ms resolution according to the manpage.
The only way to do this timer that I can currently think of is to use clock_gettime with CLOCK_MONOTONIC in my main loop and test if a millisecond has passed, and if so to increase the counter (and then check if the various events should fire).
Is there a better way to do this than to constantly query in the main loop? What is the recommended solution to this?
The language I am using is plain old c
Update
I am using a 2.6.26 kernel. I know you can have it interrupt at 1 kHz, and the POSIX timer_* functions can then be programmed with up to 1ms resolution, but that does not seem to be reliable and I don't want to use it, because it may need a new kernel on some systems. Some stock kernels still seem to have 100 Hz configured, and I would need to detect that. The application may be run on something other than my system :)
I cannot sleep for 1ms because there may be network events I have to react to.
How I resolved it
Since it is not that important I simply declared that the global timer has a 100ms resolution. All events using their own timer have to set at least 100ms for timer expiration. I was more or less wondering if there would be a better way, hence the question.
Why I accepted the answer
I think the answer from freespace best described why it is not really possible without a real-time Linux system.
Polling in the main loop isn't an answer either - your process might not get much CPU time, so more than 10ms will elapse before your code gets to run, rendering it moot.
10ms is about the standard timer resolution for most non-realtime operating systems. But it is moot in a non-RTOS anyway: the behaviour of the scheduler and dispatcher is going to greatly influence how quickly you can respond to a timer expiring. For example, even if you had a sub-10ms resolution timer, you can't respond to the timer expiring if your code isn't running. Since you can't predict when your code is going to run, you can't respond to timer expiration accurately.
There are of course realtime Linux kernels; see http://www.linuxdevices.com/articles/AT8073314981.html for a list. An RTOS offers facilities whereby you can get soft or hard guarantees about when your code is going to run. This is about the only way to reliably and accurately respond to timers expiring.
To get 1ms-resolution timers, do what libevent does.
Organize your timers into a min-heap, that is, the top of the heap is the timer with the earliest (absolute) expiry time (an rb-tree would also work, but with more overhead). Before calling select() or epoll() in your main event loop, calculate the delta in milliseconds between the expiry time of the earliest timer and now. Use this delta as the timeout to select(). select() and epoll() timeouts have 1ms resolution.
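A skeletal sketch of that loop (the heap itself is elided; next_expiry_ms() and fire_due_timers() are hypothetical stand-ins for peeking at and draining the heap):

/* Sketch: driving 1ms-resolution timers off the epoll_wait() timeout. */
#include <stdint.h>
#include <sys/epoll.h>
#include <time.h>

extern int64_t next_expiry_ms(void);     /* absolute expiry of earliest timer,
                                            or -1 if no timers are pending */
extern void    fire_due_timers(int64_t now_ms);
extern void    handle_io(struct epoll_event *ev);

static int64_t monotonic_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

void event_loop(int epfd)
{
    struct epoll_event ev;
    for (;;) {
        int64_t now  = monotonic_ms();
        int64_t next = next_expiry_ms();
        int timeout  = -1;                       /* block forever if no timers */
        if (next >= 0)
            timeout = next > now ? (int)(next - now) : 0;

        int n = epoll_wait(epfd, &ev, 1, timeout);
        if (n > 0)
            handle_io(&ev);
        fire_due_timers(monotonic_ms());
    }
}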
I've got a timer resolution test that uses the mechanism explained above (but not libevent). The test measures the difference between the desired timer expiry time and its actual expiry, for 1ms, 5ms and 10ms timers:
1000 deviation samples of 1msec timer: min= -246115nsec max= 1143471nsec median= -70775nsec avg= 901nsec stddev= 45570nsec
1000 deviation samples of 5msec timer: min= -265280nsec max= 256260nsec median= -252363nsec avg= -195nsec stddev= 30933nsec
1000 deviation samples of 10msec timer: min= -273119nsec max= 274045nsec median= 103471nsec avg= -179nsec stddev= 31228nsec
1000 deviation samples of 1msec timer: min= -144930nsec max= 1052379nsec median= -109322nsec avg= 1000nsec stddev= 43545nsec
1000 deviation samples of 5msec timer: min= -1229446nsec max= 1230399nsec median= 1222761nsec avg= 724nsec stddev= 254466nsec
1000 deviation samples of 10msec timer: min= -1227580nsec max= 1227734nsec median= 47328nsec avg= 745nsec stddev= 173834nsec
1000 deviation samples of 1msec timer: min= -222672nsec max= 228907nsec median= 63635nsec avg= 22nsec stddev= 29410nsec
1000 deviation samples of 5msec timer: min= -1302808nsec max= 1270006nsec median= 1251949nsec avg= -222nsec stddev= 345944nsec
1000 deviation samples of 10msec timer: min= -1297724nsec max= 1298269nsec median= 1254351nsec avg= -225nsec stddev= 374717nsec
The test ran as a real-time process on Fedora 13 with kernel 2.6.34; the best precision achieved for the 1ms timer was avg=22nsec, stddev=29410nsec.
I'm not sure it's the best solution, but you might consider writing a small kernel module that uses the kernel high-res timers to do timing. Basically, you'd create a device file for which reads would only return on 1ms intervals.
An example of this type of approach is used in the Asterisk PBX, via the ztdummy module. If you google for ztdummy you can find the code that does this.
I think you'll have trouble achieving 1 ms precision with standard Linux even with constant querying in the main loop, because the kernel does not ensure your application will get CPU all the time. For example, you can be put to sleep for dozens of milliseconds because of preemptive multitasking and there's little you can do about it.
You might want to look into Real-Time Linux.
If you are targeting the x86 platform you should check the HPET timers. This is a hardware timer with high precision. It must be supported by your motherboard (right now all of them support it) and your kernel should contain a driver for it as well. I have used it a few times without any problems and was able to achieve a much better resolution than 1ms.
Here is some documentation and examples:
http://www.kernel.org/doc/Documentation/timers/hpet.txt
http://www.kernel.org/doc/Documentation/timers/hpet_example.c
http://fpmurphy.blogspot.com/2009/07/linux-hpet-support.html
I seem to recall getting OK results with gettimeofday()/usleep()-based polling. I wasn't needing 1000 timers a second or anything, but I did need good accuracy with the timing for the ticks I did need. My app was a MIDI drum machine controller, and I seem to remember getting sub-millisecond accuracy, which you need for a drum machine if you don't want it to sound like a very bad drummer (especially counting MIDI's built-in latencies). IIRC (it was 2005, so my memory is a bit fuzzy) I was getting within 200 microseconds of target times with usleep.
However, I was not running much else on the system. If you have a controlled environment you might be able to get away with a solution like that. If there's more going on the system (watch cron firing up updatedb, etc.) then things may fall apart.
Are you running on a Linux 2.4 kernel?
From VMware KB article #1420 (http://kb.vmware.com/kb/1420):
Linux guest operating systems keep time by counting timer interrupts. Unpatched 2.4 and earlier kernels program the virtual system timer to request clock interrupts at 100Hz (100 interrupts per second). 2.6 kernels, on the other hand, request interrupts at 1000Hz - ten times as often. Some 2.4 kernels modified by distribution vendors to contain 2.6 features also request 1000Hz interrupts, or in some cases, interrupts at other rates, such as 512Hz.
There is a ktimer patch for the Linux kernel:
http://lwn.net/Articles/167897/
http://www.kernel.org/pub/linux/kernel/projects/rt/
HTH
First, get the kernel source and compile it with an adjusted HZ parameter.
If HZ=1000, the timer interrupts 1000 times per second. It is OK to use HZ=1000 for an i386 machine.
On an embedded machine, HZ might be limited to 100 or 200.
For good operation, the PREEMPT_KERNEL option should be on. There are kernels which do not support this option properly; you can check for them by searching.
Recent kernels, e.g. 2.6.35.10, support the NO_HZ option, which turns on dynamic ticks. This means that there will be no timer tick when idle, but a timer tick will be generated at the specified moment.
There is an RT patch for the kernel, but hardware support is very limited.
Generally, RTAI is a killer solution to your problem, but its hardware support is also very limited. However, good CNC controllers, like EMC2, use RTAI for their clocking, at maybe 5000 Hz, although it can be hard work to install it.
If you can, you could add hardware to generate pulses. That would make
a system which can be adapted to any OS version.
You don't need an RTOS for a simple real-time application. All modern processors have general-purpose timers. Get a datasheet for whatever target CPU you are working on. Look in the kernel source: under the arch directory you will find processor-specific code showing how to handle these timers.
There are two approaches you can take with this:
1) Your application is ONLY running your state machine, and nothing else. Linux is simply your "boot loader." Create a kernel object which installs a character device. On insertion into the kernel, set up your GP timer to run continuously; you know the frequency it's operating at. Now, in the kernel, explicitly disable your watchdog, then disable interrupts (hardware AND software). On a single-CPU Linux kernel, calling spin_lock() will accomplish this (never let go of it). The CPU is YOURS. Busy-loop, checking the value of the GPT until the required number of ticks have passed; when they have, set a value for the next timeout and enter your processing loop. Just make sure that the burst time for your code is under 1ms.
2) A second option. This assumes you are running a preemptive Linux kernel. Set up an unused GPT alongside your running OS. Now, set up an interrupt to fire some configurable margin BEFORE your 1ms timeout happens (say 50-75 microseconds). When the interrupt fires, immediately disable interrupts and spin waiting for the 1ms window to occur, then enter your state machine and subsequently enable interrupts on your way out. This accounts for the fact that you are cooperating with OTHER things in the kernel which disable interrupts. It ASSUMES that there is no other kernel activity which locks out interrupts for a long time (more than 100us). Now, you can MEASURE the accuracy of your firing event and make the window larger until it meets your need.
If instead you are trying to learn how RTOS's work...or if you are trying to solve a control problem with more than one real-time responsibility...then use an RTOS.
Can you at least use nanosleep in your loop to sleep for 1ms? Or is that a glibc thing?
Update: Never mind, I see from the man page "it can take up to 10 ms longer than specified until the process becomes runnable again"
What about using the "/dev/rtc0" (or "/dev/rtc") device and its related ioctl() interface? I think it offers an accurate timer counter. It is not possible to set the rate to exactly 1ms, but you can set it to a close value of 1/1024s (1024Hz), or to a higher frequency, like 8192Hz.
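A sketch of that approach using the Linux RTC periodic-interrupt interface (rates above 64 Hz typically require root; 1024 Hz is the closest power-of-two rate to 1 kHz):

/* Sketch: ~1 kHz periodic ticks from the RTC via /dev/rtc0 (Linux). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/rtc.h>

int main(void)
{
    int fd = open("/dev/rtc0", O_RDONLY);
    if (fd < 0) { perror("open /dev/rtc0"); return 1; }

    if (ioctl(fd, RTC_IRQP_SET, 1024) < 0) { perror("RTC_IRQP_SET"); return 1; }
    if (ioctl(fd, RTC_PIE_ON, 0) < 0)      { perror("RTC_PIE_ON");   return 1; }

    for (int i = 0; i < 1024; i++) {
        unsigned long data;                /* low byte: IRQ type, rest: count */
        if (read(fd, &data, sizeof data) < 0) { perror("read"); break; }
        /* one periodic interrupt (~0.98 ms at 1024 Hz) has elapsed here */
    }

    ioctl(fd, RTC_PIE_OFF, 0);
    close(fd);
    return 0;
}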
