Conversion of msec to jiffies - c

I am using msecs_to_jiffies(msecs) to get a delay. I need a delay of 16 ms, but the function returns 1 for inputs 1-10, 2 for 11-20, 3 for 21-30 and so on, so I can only set delays in multiples of 10 ms and cannot set the delay I need. I cannot change the HZ value, and the code cannot sleep either.
Kindly suggest a solution to this problem.
Thanks

It seems your system's HZ value is set to 100.
If you wish to suspend execution for a period shorter than the system tick, you need to use high resolution timers (which work with nanosecond resolution, not jiffies), supported by your board and enabled in the kernel. See here for the interface and how to use them: http://lwn.net/Articles/167897/
So either change the system HZ to 1000, which gives a jiffy resolution of 1 ms, or use a high resolution timer.
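If you go the high resolution route, a minimal sketch of the hrtimer interface described in that LWN article might look like the following; the callback name and body are placeholders, and remember that the callback runs in interrupt context, so it must not sleep:

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static struct hrtimer my_timer;

    /* Placeholder callback: runs in interrupt context, so it must not sleep. */
    static enum hrtimer_restart my_timer_cb(struct hrtimer *t)
    {
        /* ...do the work that had to wait 16 ms... */
        return HRTIMER_NORESTART;       /* one-shot; use hrtimer_forward_now() and
                                           HRTIMER_RESTART for a periodic timer */
    }

    static void arm_16ms_timer(void)
    {
        hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        my_timer.function = my_timer_cb;
        hrtimer_start(&my_timer, ms_to_ktime(16), HRTIMER_MODE_REL);
    }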

You can't sleep for exactly 16ms. You can sleep for at least 16ms, but not exactly 16ms. That's not the way Linux (or any other desktop OS) works: they're not realtime OSes, they schedule in a non-deterministic manner, and there's nothing you can do about it.
Whatever you're trying to do, you'll have to go about it another way. With what little info you've provided, all I can say is that what you're trying to do can't be done.

Related

How to write a time difference function to STM32F4

I am working on the STM32F4 and am pretty new at it. I know the basics of C, but after more than a day of research I still haven't found a solution to this.
I simply want to write a delay function myself. The processor runs at 168 MHz (HCLK), so my intuition says it produces 168x10^6 clock cycles every second. So the method should be something like this:
1 - Store the current clock count in a variable
2 - Time diff = (clock value at any time - stored starting clock value) / 168000000
This flow should give me the time difference in seconds, which I can then convert to whatever I want.
But unfortunately, even though it seems so easy, I just can't get any such method working on the MCU.
I tried time.h but it did not work properly. For example, clock() gave the same result over and over, and time() (the one that returns seconds since 1970) gave 0xFFFFFFFF (-1, which I guess means an error).
Thanks.
Edit: While writing this I assumed that some function like clock() would return the total clock count since the start of the program, but now I think it would overflow a uint32_t after about 4 billion / 168 million seconds. I am really confused.
The answer depends on the required precision and intervals.
For shorter intervals with sub-microsecond precision there is a cycle counter. Your suspicion is correct: it would overflow after 2^32 / (168x10^6) ~ 25.5 seconds.
For longer intervals there are timers that can be prescaled to support any possible subdivision of the 168 MHz clock. The most commonly used setup is the SysTick timer set to generate an interrupt at 1 kHz frequency, which increments a software counter. Reading this counter would give the number of milliseconds elapsed since startup. As it is usually a 32 bit counter, it would overflow after 49.7 days. The HAL library sets SysTick up this way, the counter can then be queried using the HAL_GetTick() function.
For even longer or more specialized timing requirements you can use the RTC peripheral which keeps calendar time, or the TIM peripherals (basic, general and advanced timers), these have their own prescalers, and they can be arranged in a master-slave setup to give almost arbitrary precision and intervals.
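As a rough illustration of the two simplest options, assuming the STM32F4 HAL and CMSIS headers are available (so HAL_GetTick() and the DWT register definitions exist):

    #include "stm32f4xx_hal.h"   /* assumes the STM32F4 HAL and CMSIS headers are in use */

    /* Millisecond resolution from the SysTick-driven HAL counter.
     * Unsigned subtraction stays correct across the 32-bit wrap (~49.7 days). */
    uint32_t elapsed_ms(uint32_t start_ms)
    {
        return HAL_GetTick() - start_ms;
    }

    /* Sub-microsecond resolution from the DWT cycle counter (wraps after ~25.5 s
     * at 168 MHz). This enable sequence is the usual CMSIS idiom. */
    void cycle_counter_enable(void)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace block */
        DWT->CYCCNT = 0;
        DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;              /* start counting CPU cycles */
    }

    uint32_t elapsed_cycles(uint32_t start_cycles)
    {
        return DWT->CYCCNT - start_cycles;                /* wrap-safe within one period */
    }

Typical use is to capture uint32_t t0 = HAL_GetTick(); at the start and later test whether elapsed_ms(t0) has reached the desired interval.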

Better way to determine seconds since power on in microcontroller?

Short question: How to get seconds since reset in STM32L051T6 microcontroller?
My effort and detailed issue:
I am using an STM32L051T6 series microcontroller. I need to count seconds since power on. I am also using low power mode, so I wrote code to use the wakeup timer interrupt of the microcontroller's internal RTC. I used a 1-second wakeup timer interval with the external LSE clock of 32768 Hz. I observed the accumulated seconds since power on (SSPO) after 3 days and found that it had fallen behind by 115 seconds compared to the actual time elapsed. My guess is that this drift comes from interrupt latency in executing the wakeup timer interrupt. How can I remove this drift of 115 seconds? Or is there any better method than using the wakeup interrupt to count seconds since power on?
UPDATE:
I tried to use SysTick with the HAL_GetTick() function to derive seconds since power on, but even SysTick gets delayed over time.
If you want to measure time with accuracy over a longer period, an RTC is the way to go. As you mentioned that you have an RTC, you can use the method below.
At startup, load the RTC with zero.
Then you can read the seconds elapsed when required without the errors above.
Edit: As per the comment, the RTC can be changed by the user. In that case,
if you can modify the RTC write function called by the user, then whenever the user calls it, update a global variable VarA = time set by the user. The elapsed time will then be (time read from the RTC) - VarA.
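A small sketch of that bookkeeping, slightly extended so the elapsed count also survives the user's adjustment; rtc_read_seconds() is a placeholder for whatever your RTC driver provides, and sspo_on_user_set() is assumed to be called from the wrapped RTC write function:

    #include <stdint.h>

    extern uint32_t rtc_read_seconds(void);   /* placeholder for your RTC driver */

    static uint32_t rtc_at_boot;              /* the "VarA" reference value */

    void sspo_init(void)                      /* call once after reset */
    {
        rtc_at_boot = rtc_read_seconds();     /* or load the RTC with zero and keep this 0 */
    }

    /* Called from the wrapped RTC "set time" function with the values before/after. */
    void sspo_on_user_set(uint32_t old_rtc, uint32_t new_rtc)
    {
        /* shift the reference so the elapsed count is preserved across the change */
        rtc_at_boot += new_rtc - old_rtc;
    }

    uint32_t seconds_since_power_on(void)
    {
        return rtc_read_seconds() - rtc_at_boot;   /* wrap-safe unsigned subtraction */
    }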
If the RTC is accurate, you should use the RTC by storing its value at boot time and later comparing against that saved value. But you said that the RTC can be reset by the user, so I can see two ways to cope with that:
if you have enough control over the system, replace the command or HMI that the user can use to reset the clock with a wrapper that informs your module and lets it read the RTC before and after the reset
if you do not have enough control or cannot wrap the user's reset (because it uses a direct system call, etc.), use a timer to check the RTC value every second
Either way, you should define a threshold on the delta of the RTC value. If it is small, it is likely to be an adjustment, because unless your system uses an atomic clock, even an RTC drifts over time. In that case I would not care, because you can hardly know whether it has drifted since the last reboot or not. If you want a cleverer algorithm, you can make the threshold dependent on the time since the last reboot: the longer the system has been up, the higher the probability that it has drifted since then.
Conversely, a large delta is likely to be a correction because the RTC was blatantly wrong, the backup battery is dead, or something similar. In that case you should compute a new start RTC time that gives the same elapsed duration with the new RTC value.
As a rule of thumb, I would use a threshold of about 1 or 2 seconds per day of uptime without RTC adjustment (ref), meaning I would also store the time of the last RTC adjustment, initially the boot time. A rough sketch of such a watcher follows.
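The sketch below is only an illustration of that per-second check with the rule-of-thumb threshold; rtc_now() and uptime_days() are placeholders for whatever your system provides:

    #include <stdint.h>
    #include <stdlib.h>

    /* Placeholders: wire these up to your RTC driver and uptime bookkeeping. */
    extern uint32_t rtc_now(void);
    extern uint32_t uptime_days(void);

    static uint32_t rtc_at_boot;   /* reference for "seconds since power on" */
    static uint32_t rtc_last_seen; /* RTC value at the previous check */

    void rtc_watch_init(void)      /* call once at boot */
    {
        rtc_at_boot = rtc_last_seen = rtc_now();
    }

    /* Call once per second from a timer. */
    void rtc_watch(void)
    {
        uint32_t now   = rtc_now();
        int32_t  delta = (int32_t)(now - rtc_last_seen) - 1;  /* expected step: +1 s */
        int32_t  limit = 2 * (int32_t)uptime_days() + 2;      /* ~2 s per uptime day */

        if (abs(delta) > limit)
            rtc_at_boot += (uint32_t)delta;  /* large jump: user correction, re-base so elapsed time is kept */
        /* otherwise: small delta, treat as drift/adjustment and ignore it */

        rtc_last_seen = now;
    }

    uint32_t seconds_since_power_on(void)
    {
        return rtc_now() - rtc_at_boot;
    }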

Calling a C function periodically on OSX

I have a function which calculates the BPM for a track from incoming data packets from a CDJ. Let's say the BPM is 124.45 beats per minute; how would I go about calling a function every 0.482 seconds (i.e. once per beat)? Would it be possible to set up another thread and set a timer?
Maybe have a look at high precision timers, here, for which Apple claims 500 microsecond accuracy, which is 0.1% of your 500 (ish) millisecond requirement. You can minimise skew by reading the time at the start of your processing and calculating an offset to the next beat. Also, if you find you are often getting scheduled late and missing beats, you can sleep for, say, 95% of the time to your next beat so the CPU can schedule something else, and then busy-wait for the last few percent so you don't hog the CPU.
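A rough sketch of that sleep-then-spin loop in plain C, assuming clock_gettime() and nanosleep() (both available on current macOS) and a placeholder on_beat() handler; it leaves the last couple of milliseconds for the spin, in the spirit of the 95% suggestion above:

    #include <time.h>

    static double now_s(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);       /* available on macOS 10.12 and later */
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Run on its own thread: call on_beat() every period seconds (e.g. 60 / 124.45). */
    static void beat_loop(double period, void (*on_beat)(void))
    {
        double next = now_s() + period;

        for (;;) {
            double remaining = next - now_s();
            if (remaining > 0.002) {               /* sleep most of the wait... */
                struct timespec ts;
                double sleep_s = remaining - 0.002;
                ts.tv_sec  = (time_t)sleep_s;
                ts.tv_nsec = (long)((sleep_s - ts.tv_sec) * 1e9);
                nanosleep(&ts, NULL);
            }
            while (now_s() < next)                 /* ...then busy-wait the last ~2 ms */
                ;
            on_beat();
            next += period;                        /* schedule from the ideal grid, not from "now" */
        }
    }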

Linux Kernel delay, below jiffies, without busy-waiting

I need to set a signal high and low with precise timing in the Linux kernel, using a timer and mdelay().
hightime: 0.01 ms - 20.00 ms;
lowtime: 10 ms - 1000 ms
Both are adjustable from userspace.
For the lowtime I use a kernel timer, and for the hightime I use mdelay() and udelay().
Now the problem: if the hightime is 9.9 ms and the lowtime is 10 ms, the kernel is busy for almost the whole time (except for 0.1 ms). But my user interface in userspace needs to keep working while the kernel timer is running.
One jiffy is about 10 ms on my system, so I cannot use a timer for the hightime.
Does anyone have an idea how I can do these 0.01 ms - 10 ms waits in the kernel so that my user interface still works properly?
Thanks
You can reduce the 10 ms tick:
Look at the definition of HZ in /usr/include/asm/param.h; I guess you'll find 100.
100 Hz gives a period of 10 ms. More modern Linux kernels use HZ=250, which brings your time slice down to 4 ms. You may squeeze it to HZ=1000 (by rebuilding the kernel), which lets you run with 1 ms slices.
Further reading: Linux kernel map, 7.1. Measuring Time Lapses

1ms resolution timer under linux recommended way

I need a timer tick with 1 ms resolution under Linux. It is used to increment a timer value that in turn is used to see if various events should be triggered. The POSIX timerfd_create is not an option because of the glibc requirement. I tried timer_create and timer_settime, but the best I get from them is 10 ms resolution; smaller values seem to default to 10 ms. getitimer and setitimer have a 10 ms resolution according to the man page.
The only way to do this timer I can currently think of is to use clock_gettime with CLOCK_MONOTONIC in my main loop and test whether a millisecond has passed, and if so increase the counter (and then check whether the various events should fire).
Is there a better way to do this than to constantly query in the main loop? What is the recommended solution to this?
The language I am using is plain old c
Update
I am using a 2.6.26 kernel. I know you can have it tick at 1 kHz, and the POSIX timer_* functions can then be programmed with up to 1 ms resolution, but that does not seem reliable and I don't want to use it, because it may require a new kernel on some systems. Some stock kernels still seem to have 100 Hz configured, and I would need to detect that. The application may be run on something other than my system :)
I cannot sleep for 1 ms because there may be network events I have to react to.
How I resolved it
Since it is not that important, I simply declared that the global timer has a 100 ms resolution. All events using their own timer have to set at least 100 ms for timer expiration. I was more or less wondering whether there was a better way, hence the question.
Why I accepted the answer
I think the answer from freespace best described why it is not really possible without a realtime Linux System.
Polling in the main loop isn't an answer either - your process might not get much CPU time, so more than 10ms will elapse before your code gets to run, rendering it moot.
10 ms is about the standard timer resolution for most non-realtime operating systems. But it is moot in a non-RTOS: the behaviour of the scheduler and dispatcher is going to greatly influence how quickly you can respond to a timer expiring. For example, even if you had a sub-10 ms resolution timer, you can't respond to the timer expiring if your code isn't running. Since you can't predict when your code is going to run, you can't respond to timer expiration accurately.
There are of course realtime Linux kernels; see http://www.linuxdevices.com/articles/AT8073314981.html for a list. An RTOS offers facilities whereby you can get soft or hard guarantees about when your code is going to run. This is about the only way to reliably and accurately respond to timers expiring etc.
To get 1ms resolution timers do what libevent does.
Organize your timers into a min-heap, that is, the top of the heap is the timer with the earliest (absolute) expiry time (an rb-tree would also work, but with more overhead). Before calling select() or epoll() in your main event loop, calculate the delta in milliseconds between the expiry time of the earliest timer and now. Use this delta as the timeout to select(). select() and epoll() timeouts have 1 ms resolution. (A bare-bones sketch of this loop appears after the measurements below.)
I've got a timer resolution test that uses the mechanism explained above (but not libevent). The test measures the difference between the desired expiry time and the actual expiry time of 1 ms, 5 ms and 10 ms timers:
1000 deviation samples of 1msec timer: min= -246115nsec max= 1143471nsec median= -70775nsec avg= 901nsec stddev= 45570nsec
1000 deviation samples of 5msec timer: min= -265280nsec max= 256260nsec median= -252363nsec avg= -195nsec stddev= 30933nsec
1000 deviation samples of 10msec timer: min= -273119nsec max= 274045nsec median= 103471nsec avg= -179nsec stddev= 31228nsec
1000 deviation samples of 1msec timer: min= -144930nsec max= 1052379nsec median= -109322nsec avg= 1000nsec stddev= 43545nsec
1000 deviation samples of 5msec timer: min= -1229446nsec max= 1230399nsec median= 1222761nsec avg= 724nsec stddev= 254466nsec
1000 deviation samples of 10msec timer: min= -1227580nsec max= 1227734nsec median= 47328nsec avg= 745nsec stddev= 173834nsec
1000 deviation samples of 1msec timer: min= -222672nsec max= 228907nsec median= 63635nsec avg= 22nsec stddev= 29410nsec
1000 deviation samples of 5msec timer: min= -1302808nsec max= 1270006nsec median= 1251949nsec avg= -222nsec stddev= 345944nsec
1000 deviation samples of 10msec timer: min= -1297724nsec max= 1298269nsec median= 1254351nsec avg= -225nsec stddev= 374717nsec
The test ran as a real-time process on Fedora 13 kernel 2.6.34, the best achieved precision of 1ms timer was avg=22nsec stddev=29410nsec.
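A bare-bones sketch of that event-loop shape (this is not libevent's code; the heap itself is elided and a single nearest-expiry value stands in for its top element):

    #include <stdint.h>
    #include <sys/epoll.h>
    #include <time.h>

    /* Monotonic "now" in milliseconds. */
    static int64_t now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    /* One loop iteration: epfd is an existing epoll instance, next_expiry_ms is the
     * absolute expiry time of the earliest timer (the top of your min-heap). */
    static void loop_once(int epfd, int64_t next_expiry_ms)
    {
        struct epoll_event events[16];
        int64_t timeout = next_expiry_ms - now_ms();
        int n;

        if (timeout < 0)
            timeout = 0;                                 /* already late: return immediately */

        n = epoll_wait(epfd, events, 16, (int)timeout);  /* timeout is in milliseconds */
        (void)n;

        /* ...handle the n ready fds, then pop and run every timer whose expiry is
         * <= now_ms(), re-inserting periodic ones into the heap... */
    }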
I'm not sure it's the best solution, but you might consider writing a small kernel module that uses the kernel high-res timers to do timing. Basically, you'd create a device file for which reads would only return on 1ms intervals.
An example of this type of approach is used in the Asterisk PBX, via the ztdummy module. If you google for ztdummy you can find the code that does this.
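A stripped-down sketch of that shape, not the ztdummy code itself: all names below are invented, error handling is minimal, and it simply implements a misc char device whose read() blocks until a 1 ms hrtimer tick has occurred.

    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/hrtimer.h>
    #include <linux/ktime.h>
    #include <linux/wait.h>
    #include <linux/atomic.h>

    static DECLARE_WAIT_QUEUE_HEAD(tick_wq);
    static struct hrtimer tick_timer;
    static atomic_t tick_count = ATOMIC_INIT(0);

    static enum hrtimer_restart tick_fn(struct hrtimer *t)
    {
        atomic_inc(&tick_count);
        wake_up_interruptible(&tick_wq);
        hrtimer_forward_now(t, ms_to_ktime(1));    /* re-arm for the next 1 ms tick */
        return HRTIMER_RESTART;
    }

    static ssize_t tick_read(struct file *f, char __user *buf, size_t len, loff_t *off)
    {
        int seen = atomic_read(&tick_count);

        /* Block until the timer has advanced by at least one tick. */
        if (wait_event_interruptible(tick_wq, atomic_read(&tick_count) != seen))
            return -ERESTARTSYS;
        return 0;                                  /* nothing to copy; the read just paces the caller */
    }

    static const struct file_operations tick_fops = {
        .owner = THIS_MODULE,
        .read  = tick_read,
    };

    static struct miscdevice tick_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "tick1ms",                        /* shows up as /dev/tick1ms */
        .fops  = &tick_fops,
    };

    static int __init tick_init(void)
    {
        int ret = misc_register(&tick_dev);
        if (ret)
            return ret;
        hrtimer_init(&tick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        tick_timer.function = tick_fn;
        hrtimer_start(&tick_timer, ms_to_ktime(1), HRTIMER_MODE_REL);
        return 0;
    }

    static void __exit tick_exit(void)
    {
        hrtimer_cancel(&tick_timer);
        misc_deregister(&tick_dev);
    }

    module_init(tick_init);
    module_exit(tick_exit);
    MODULE_LICENSE("GPL");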
I think you'll have trouble achieving 1 ms precision with standard Linux even with constant querying in the main loop, because the kernel does not ensure your application will get CPU all the time. For example, you can be put to sleep for dozens of milliseconds because of preemptive multitasking and there's little you can do about it.
You might want to look into Real-Time Linux.
If you are targeting the x86 platform you should check out HPET timers. This is a hardware timer with high precision. It must be supported by your motherboard (right now all of them support it) and your kernel should contain a driver for it as well. I have used it a few times without any problems and was able to achieve a much better resolution than 1 ms.
Here is some documentation and examples:
http://www.kernel.org/doc/Documentation/timers/hpet.txt
http://www.kernel.org/doc/Documentation/timers/hpet_example.c
http://fpmurphy.blogspot.com/2009/07/linux-hpet-support.html
I seem to recall getting OK results with gettimeofday/usleep-based polling. I wasn't needing 1000 timers a second or anything, but I did need good accuracy with the timing for the ticks I needed: my app was a MIDI drum machine controller, and I seem to remember getting sub-millisecond accuracy, which you need for a drum machine if you don't want it to sound like a very bad drummer (especially counting MIDI's built-in latencies). IIRC (it was 2005, so my memory is a bit fuzzy) I was getting within 200 microseconds of target times with usleep.
However, I was not running much else on the system. If you have a controlled environment you might be able to get away with a solution like that. If there's more going on the system (watch cron firing up updatedb, etc.) then things may fall apart.
Are you running on a Linux 2.4 kernel?
From VMware KB article #1420 (http://kb.vmware.com/kb/1420).
Linux guest operating systems keep time by counting timer interrupts. Unpatched 2.4 and earlier kernels program the virtual system timer to request clock interrupts at 100Hz (100 interrupts per second). 2.6 kernels, on the other hand, request interrupts at 1000Hz - ten times as often. Some 2.4 kernels modified by distribution vendors to contain 2.6 features also request 1000Hz interrupts, or in some cases, interrupts at other rates, such as 512Hz.
There is a ktimer patch for the Linux kernel:
http://lwn.net/Articles/167897/
http://www.kernel.org/pub/linux/kernel/projects/rt/
HTH
First, get the kernel source and compile it with an adjusted HZ parameter.
If HZ=1000, the timer interrupts 1000 times per second. It is OK to use HZ=1000 on an i386 machine.
On an embedded machine, HZ might be limited to 100 or 200.
For good operation, the PREEMPT_KERNEL option should be on. There are kernels which do not support this option properly; you can check which by searching.
Recent kernels, e.g. 2.6.35.10, support the NO_HZ option, which turns on dynamic ticks. This means that there will be no timer ticks when idle, but a timer tick will be generated at the specified moment.
There is an RT patch for the kernel, but hardware support is very limited.
Generally RTAI is a killer solution to your problem, but its hardware support is very limited. However, good CNC controllers, like emc2, use RTAI for their clocking, maybe at 5000 Hz, but it can be hard work to install.
If you can, you could add hardware to generate pulses. That would give you a system which can be adapted to any OS version.
You don't need an RTOS for a simple real-time application. All modern processors have general-purpose timers. Get a datasheet for whatever target CPU you are working on. Look in the kernel source: under the arch directory you will find processor-specific code showing how to handle these timers.
There are two approaches you can take with this:
1) Your application is ONLY running your state machine, and nothing else. Linux is simply your "boot loader." Create a kernel object which installs a character device. On insertion into the kernel, set up your GP timer to run continuously. You know the frequency it is operating at. Now, in the kernel, explicitly disable your watchdog. Now disable interrupts (hardware AND software). On a single-CPU Linux kernel, calling spin_lock() will accomplish this (never let go of it). The CPU is YOURS. Busy-loop, checking the value of the GPT until the required number of ticks have passed; when they have, set a value for the next timeout and enter your processing loop. Just make sure that the burst time for your code is under 1 ms. (A rough sketch of this busy loop appears after this answer.)
2) A second option. This assumes you are running a preemptive Linux kernel. Set up an unused GPT alongside your running OS. Now, set up an interrupt to fire some configurable margin BEFORE your 1 ms timeout happens (say 50-75 usec). When the interrupt fires, immediately disable interrupts and spin waiting for the 1 ms window to occur, then enter your state machine and subsequently re-enable interrupts on your way out. This accounts for the fact that you are cooperating with OTHER things in the kernel which disable interrupts. It ASSUMES that there is no other kernel activity which locks out interrupts for a long time (more than 100 us). Now you can MEASURE the accuracy of your firing event and make the window larger until it meets your need.
If instead you are trying to learn how RTOS's work...or if you are trying to solve a control problem with more than one real-time responsibility...then use an RTOS.
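For what option 1's busy loop might look like, here is a rough sketch; gpt_read_ticks(), GPT_TICKS_PER_MS and do_state_machine_step() are all placeholders for your hardware-specific timer access and your own work:

    #include <linux/kernel.h>
    #include <linux/irqflags.h>
    #include <linux/types.h>

    extern u32 gpt_read_ticks(void);         /* placeholder: read the free-running GPT count */
    extern void do_state_machine_step(void); /* placeholder: must finish well under 1 ms */
    #define GPT_TICKS_PER_MS 24000           /* placeholder: e.g. a 24 MHz timer clock */

    void run_1ms_state_machine(void)
    {
        u32 next;

        local_irq_disable();                 /* from here on, the CPU is yours alone */
        next = gpt_read_ticks() + GPT_TICKS_PER_MS;

        for (;;) {
            /* Busy-wait until the next 1 ms boundary (wrap-safe signed comparison). */
            while ((s32)(gpt_read_ticks() - next) < 0)
                ;
            next += GPT_TICKS_PER_MS;

            do_state_machine_step();
        }
    }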
Can you at least use nanosleep in your loop to sleep for 1ms? Or is that a glibc thing?
Update: Never mind, I see from the man page "it can take up to 10 ms longer than specified until the process becomes runnable again"
What about using the "/dev/rtc0" (or "/dev/rtc") device and its related ioctl() interface? I think it offers an accurate timer counter. It is not possible to set the rate to exactly 1 ms, but you can get a close value of 1/1024 sec (1024 Hz), or a higher frequency like 8192 Hz.
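A short sketch of that interface, using the periodic-interrupt ioctls from <linux/rtc.h>: set a 1024 Hz rate, then each blocking read() returns after one tick. Note that on many kernels an unprivileged process may need max_user_freq raised (or root) before rates above 64 Hz are accepted.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/rtc.h>

    int main(void)
    {
        unsigned long data;
        int i;
        int fd = open("/dev/rtc0", O_RDONLY);    /* or "/dev/rtc" */
        if (fd < 0) { perror("open"); return 1; }

        /* The periodic rate must be a power of two: 1024 Hz gives ~0.98 ms ticks. */
        if (ioctl(fd, RTC_IRQP_SET, 1024) < 0) { perror("RTC_IRQP_SET"); return 1; }
        if (ioctl(fd, RTC_PIE_ON, 0) < 0)      { perror("RTC_PIE_ON");   return 1; }

        for (i = 0; i < 1024; i++) {
            /* Each read blocks until the next periodic interrupt fires. */
            if (read(fd, &data, sizeof data) < 0) { perror("read"); return 1; }
            /* low byte: interrupt type flags; upper bytes: interrupts since last read */
        }

        ioctl(fd, RTC_PIE_OFF, 0);
        close(fd);
        return 0;
    }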

Resources