I've got a GPS with the PPS coming in on a GPIO on my non-networked embedded Linux board running 2.6.37.
I'm trying to keep the system clock as accurate as possible (preferably better than 20us accuracy). I've set the time to within a few milliseconds from the serial-port GPS and would like to use the PPS to discipline the clock (is that the correct term?)
I've set up an interrupt that catches the PPS.
My interrupt handling routine is something like:
struct timeval tv;
do_gettimeofday(&tv);
/* ...check that we're really close to a second mark, then round tv to the nearest second... */
if (tv.tv_usec >= 500000)
    tv.tv_sec++;
tv.tv_usec = 0;
do_settimeofday(&tv);
Question:
Do do_gettimeofday/do_settimeofday take significant amounts of time to execute?
Is this an acceptable way to approach keeping time?
I know there's NTP and Linux-PPS, but I don't want the overhead unless there are other subtle considerations to setting the system time that I'm not aware of.
Many thanks.
You should use ntpd with PPS if you need accurate and reliable synchronization. ntpd uses tested algorithms written with reliability in mind. The overhead is negligible.
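For reference, a minimal ntp.conf sketch of what that setup could look like, assuming LinuxPPS exposes the pulse as /dev/pps0 and the NMEA sentences are readable by the type 20 refclock driver (unit numbers and fudge values are placeholders to tune for your board):

# Type 20 (NMEA GPS) refclock: coarse time-of-day from the serial GPS
server 127.127.20.0 minpoll 4
# Type 22 (ATOM/PPS) refclock: second marks from /dev/pps0
server 127.127.22.0 minpoll 4
fudge  127.127.22.0 flag3 1    # hand the PPS to the kernel discipline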
I am writing a device driver and want to benchmark a few pieces of code to get a feel for where I could be experiencing some bottlenecks. As a result, I want to time a few segments of code.
In userspace, I'm used to using clock_gettime() with CLOCK_MONOTONIC. Looking at the kernel sources (note that I am running kernel 4.4, but will be upgrading eventually), it appears I have a few choices:
getnstimeofday()
getrawmonotonic()
get_monotonic_coarse()
getboottime()
For convenience, I have written a function (see below) to get me the current time. I am currently using getrawmonotonic() because I figured this is what I wanted. My function returns the current time as a ktime_t, so then I can use ktime_sub() to get the elapsed time between two times.
static ktime_t get_time_now(void) {
    struct timespec time_now;
    getrawmonotonic(&time_now);
    return timespec_to_ktime(time_now);
}
Given the available high resolution clocking functions (jiffies won't work for me), what is the best function for my given application? More generally, I'm interested in any/all documentation about these functions and the underlying clocks. Primarily, I am curious if the clocks are affected by any timing adjustments and what their epochs are.
Are you comparing measurements you're making in the kernel directly with measurements you've made in userspace? I'm wondering about your choice to use CLOCK_MONOTONIC_RAW as the timebase in the kernel, since you chose to use CLOCK_MONOTONIC in userspace. If you're looking for an analogous and non-coarse function in the kernel which returns CLOCK_MONOTONIC (and not CLOCK_MONOTONIC_RAW) time, look at ktime_get_ts().
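ktime_get_ts() fills in a struct timespec much like getrawmonotonic() does; alternatively, ktime_get() returns the same CLOCK_MONOTONIC time directly as a ktime_t, which fits your existing ktime_sub() usage. A minimal sketch assuming the latter:

#include <linux/ktime.h>

static ktime_t get_time_now(void)
{
    /* CLOCK_MONOTONIC as a ktime_t; no timespec conversion needed */
    return ktime_get();
}

/* elapsed nanoseconds between two points:
 *   ktime_t t0 = get_time_now();
 *   ...code under test...
 *   s64 ns = ktime_to_ns(ktime_sub(get_time_now(), t0));
 */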
It's possible you could also use raw kernel ticks to measure what you're trying to measure (rather than jiffies, which represent multiple kernel ticks), but I don't know how to do that off the top of my head.
In general if you're trying to find documentation about Linux timekeeping, you can take a look at Documentation/timers/timekeeping.txt. Usually when I try to figure out kernel timekeeping I also unfortunately just spend a lot of time reading through the kernel source in time/ (time/timekeeping.c is where most of the functions you're thinking of using right now live... it's not super well-commented, but you can probably wrap your head around it with a little bit of time). And if you're feeling altruistic after learning, remember that updating documentation is a good way to contribute to the kernel :)
To your question at the end about how clocks are affected by timing adjustments and what epochs are used:
CLOCK_REALTIME always starts at Jan 01, 1970 at midnight (colloquially known as the Unix Epoch) if there are no RTCs present or if it hasn't already been set by an application in userspace (or, I guess, a kernel module if you want to be weird). Usually the userspace application which sets this is an NTP daemon: ntpd, chrony, or similar. Its value represents the number of seconds passed since 1970.
CLOCK_MONOTONIC represents the number of seconds passed since the device was booted up, and if the device is suspended at a CLOCK_MONOTONIC value of x, when it's resumed, it resumes with CLOCK_MONOTONIC set to x as well. It's not supported on ancient kernels.
CLOCK_BOOTTIME is like CLOCK_MONOTONIC, but has time added to it across suspend/resume -- so if you suspend at a CLOCK_BOOTTIME value of x, for 5 seconds, you'll come back with a CLOCK_BOOTTIME value of x+5. It's not supported on old kernels (its support came about after CLOCK_MONOTONIC).
Fully-fleshed NTP daemons (not SNTP daemons -- that's a more lightweight and less accurate protocol) set the system clock, CLOCK_REALTIME, using settimeofday() for large adjustments ("steps" or "jumps"), which immediately change the value of CLOCK_REALTIME, and adjtime() for smaller adjustments ("slewing" or "skewing"), which change the rate at which CLOCK_REALTIME moves forward per CPU clock cycle. I think for some architectures you can actually tune the CPU clock cycle through some means or other, and the kernel implements adjtime() this way if possible, but don't quote me on that. From both the bulk of the kernel's perspective and userspace's perspective, it doesn't actually matter.
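A minimal userspace illustration of the two mechanisms (this is not what ntpd literally does internally, just the system calls it relies on; the offsets are made-up example values):

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    /* "Step": jump CLOCK_REALTIME forward by 2 whole seconds */
    struct timeval now;
    gettimeofday(&now, NULL);
    now.tv_sec += 2;
    if (settimeofday(&now, NULL) != 0)
        perror("settimeofday");   /* needs CAP_SYS_TIME / root */

    /* "Slew": smear a 5 ms correction in gradually */
    struct timeval delta = { .tv_sec = 0, .tv_usec = 5000 };
    if (adjtime(&delta, NULL) != 0)
        perror("adjtime");
    return 0;
}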
CLOCK_MONOTONIC, CLOCK_BOOTTIME, and all other friends slew at the same rate as CLOCK_REALTIME, which is actually fairly convenient in most situations. They're not affected by steps in CLOCK_REALTIME, only by slews.
CLOCK_MONOTONIC_RAW, CLOCK_BOOTTIME_RAW, and friends do NOT slew at the same rate as CLOCK_REALTIME, CLOCK_MONOTONIC, and CLOCK_BOOTTIME. I guess this is useful sometimes.
Linux provides some process/thread-specific clocks to userspace (CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID), which I know nothing about. I do not know if they're easily accessible in the kernel.
I'm working on a C program that transmits samples over USB3 for a set period of time (1-10 us), and then receives samples for 100-1000 us. I have a rudimentary pthread implementation where the TX and RX routines are each handled as a thread. The reason for this is that in order to test the actual TX routine, the RX needs to run and sample before the transmitter is activated.
Note that I have very little C experience outside of embedded applications and this is my first time dabbling with pthread.
My question is, since I know exactly how many samples I need to transmit and receive, how can I e.g. start the RX thread once the TX thread is done executing and vice versa? How can I ensure that the timing stays consistent? Sampling at 10 MHz causes some harsh timing requirements.
Thanks!
EDIT:
To provide a little more detail, my device is a bladeRF x40 SDR, and communication with the device is handled by an FX3 microcontroller over a USB3 connection. I'm running Xubuntu 14.04. Processing, scheduling and configuration, however, are handled by a C program which runs on the PC.
You don't say anything about your platform, except that it supports pthreads.
So, assuming Linux, you're going to have to realize that in general Linux is not a real-time operating system, and what you're doing sure sounds as if it has real-time timing requirements.
There are real-time variants of Linux, I'm not sure how they'd suit your needs. You might also be able to achieve better performance by doing the work in a kernel driver, but then you won't have access to pthreads so you're going to have to be a bit more low-level.
Thought I'd post my solution.
While the next build of the bladeRF firmware and FPGA image will include the option to add metadata (timestamps) to the synchronous interface, until then there's no real way in which I can know at which time instants certain events occurred.
What I do know is my sampling rate, and exactly how many samples I need to transmit and receive at which times relative to each other. Therefore, by using condition variables (with pthread), I can signal my receiver to start receiving samples at the desired instant. Since TX and RX operations happen in a very specific sequence, I can calculate delays by counting the number of samples and dividing by the sampling rate, which has proven to be accurate to within 95-98%.
This obviously means that since my TX and RX threads are running simultaneously, there are chunks of data within the received set of samples that will be useless, and I have another routine in place to discard those samples.
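A stripped-down sketch of the condition-variable handshake described above (the names and structure are illustrative, not the actual bladeRF code):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  rx_may_start = PTHREAD_COND_INITIALIZER;
static bool tx_done = false;

static void *tx_thread(void *arg)
{
    /* ...transmit the known number of samples... */
    pthread_mutex_lock(&lock);
    tx_done = true;
    pthread_cond_signal(&rx_may_start);   /* wake the receiver */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *rx_thread(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!tx_done)                      /* guards against spurious wakeups */
        pthread_cond_wait(&rx_may_start, &lock);
    pthread_mutex_unlock(&lock);
    /* ...receive, then discard the chunks that are known to be useless... */
    return NULL;
}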
I am writing a network program which can calculate an accurate data packet rate (packets per second, frames per second, bps). I have a device called a testcenter which can send an accurate flow to a specific PC (the protocol is UDP/IP) running Linux, and I would like to measure the accurate pps (packets per second) rate with my program. I have considered calling gettimeofday(&start, NULL) before I call recvfrom() and update the packet counter, then calling gettimeofday(&end, NULL) and computing the pps rate from that. I hope there is a better solution than this, since the user/kernel barrier is traversed on each system call.
Best regards.
I think you should use clock_gettime() with CLOCK_MONOTONIC_COARSE. But it will only be accurate to the last tick, so it may be off by tens of milliseconds. It's definitely faster than using it with CLOCK_MONOTONIC_RAW, though. You can also use gettimeofday(), but clock_gettime() with CLOCK_MONOTONIC_RAW is slightly faster and has higher resolution than gettimeofday().
Also gettimeofday() gives wall clock time, which might change even for daylight saving ... I don't think you should use it to measure traffic rate.
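For what it's worth, a sketch of the measurement loop with clock_gettime() (count_packets() is a placeholder for your recvfrom()-and-count loop):

#include <stdio.h>
#include <time.h>

/* placeholder for the loop that calls recvfrom() and counts packets */
extern long count_packets(void);

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC_COARSE, &start);
    long packets = count_packets();
    clock_gettime(CLOCK_MONOTONIC_COARSE, &end);

    /* the coarse clock only advances once per tick, so keep the window long */
    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%.1f packets/s\n", packets / secs);
    return 0;
}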
Your observation that gettimeofday switches to kernel mode is incorrect for Linux on a few popular architectures due to the use of vsyscalls. Clearly using gettimeofday here is not a bad option. You should however consider using a monotonic clock, see man 3 clock_gettime. Note that clock_gettime is not yet converted to vsyscall for as many architectures as gettimeofday.
Beyond this option you may be able to set the SO_TIMESTAMP socket option and obtain precise timestamps via recvmsg.
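A sketch of that approach, assuming a UDP socket fd (the helper name is made up):

#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>

/* Enable once per socket before receiving:
 *   int on = 1;
 *   setsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));
 */
static ssize_t recv_with_timestamp(int fd, void *buf, size_t len,
                                   struct timeval *stamp)
{
    char ctrl[CMSG_SPACE(sizeof(struct timeval))];
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };
    ssize_t n = recvmsg(fd, &msg, 0);
    if (n < 0)
        return n;

    /* the kernel receive time arrives as an SCM_TIMESTAMP control message */
    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c))
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMP)
            memcpy(stamp, CMSG_DATA(c), sizeof(*stamp));
    return n;
}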
I am programming the DAC peripheral of an STM32F2xx. I have an array of bytes (Sound) and I would like to generate a signal with a sample rate of 8 kHz.
Now my question is:
How do I specify sample rate?
Note:
I have googled a lot. I am only finding triangle wave generation and sine wave generation using DMA. I don't want to use DMA.
Thanks in advance for help...
Regards,
It's not practical to play waveforms out of the DAC without using DMA. You set up the DMA with your samples, and you set up the DAC to use a timer as the trigger. Then you set up your timer to trigger at your desired sample rate.
I would agree with TJD that in general it is not practical to do so without DMA, however it is not impossible, particularly at a low sample rate.
One could use a timer set to trigger every 1/8000th of a second as the fixed time base. From there, the interrupt routine would need to load up the next sample into the DAC. The sample rate could be varied by changing the timer's time base.
It would be a similar effort to write the code to configure the DMA controller when compared to writing the code to move the correct sample into the buffer. However, the DMA approach would be more reliable, likely have less jitter in the sample rate, and free up the core to execute other code that may be needed. In fact, with the TIM/DMA/DAC set up, you may be able to halt the core or enter a sleep mode that keeps peripheral clocks running.
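A rough sketch of the interrupt-per-sample approach on an STM32F2 (register names come from the CMSIS device header; timer and DAC initialisation are omitted, and the choice of TIM6, channel 1 and 8-bit samples is just an assumption):

#include "stm32f2xx.h"

extern const uint8_t sound[];          /* the sample array from the question */
extern const uint32_t num_samples;
static volatile uint32_t sample_idx;

/* timer configured elsewhere to overflow 8000 times per second */
void TIM6_DAC_IRQHandler(void)
{
    if (TIM6->SR & TIM_SR_UIF) {
        TIM6->SR &= ~TIM_SR_UIF;           /* clear the update flag */
        DAC->DHR8R1 = sound[sample_idx];   /* next 8-bit sample, channel 1 */
        if (++sample_idx >= num_samples)
            sample_idx = 0;
    }
}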
Yes, I agree with TJD too.
Using DMA is efficient and frees up the CPU for other tasks [good].
Managing the timing in software (the core in a busy loop) [bad] will not produce good results, so use a timer for the timing [good].
For the copying, you would have to dedicate the CPU to copying each sample to the DAC register after a specific interval of time (from a busy loop or timer timeout) [bad].
In the end I recommend connecting the DMA and the timer, so that on timeout the DMA copies the data to the DAC register [good]. This solution only appears hard; it is actually much easier to work with once set up.
[note: written in pov of someone who is trying to understand/start on something like this]
This is an academic question (I'm not necessarily planning on doing it) but I am curious about how it would work. I'm thinking of a userland software (rather than hardware) solution.
I want to produce PWM signals (let's say for a small number of digital GPIO pins, but more than 1). I would probably write a program which created a Pthread, and then infinitely looped over the duty cycle with appropriate sleep()s etc in that thread to get the proportions right.
Would this not clobber the CPU horribly? I imagine the frequency would be somewhere around the 100 Hz mark. I've not done anything like this before but I can imagine that the constant looping, context switches etc wouldn't be great for multitasking or CPU usage.
Any advice about CPU in this case use and multitasking? FWIW I'm thinking of a single-core processor. I have a feeling answers could range from 'that will make your system unusable' to 'the numbers involved are orders of magnitude smaller than will make an impact to a modern processor'!
Assume C because it seems most appropriate.
EDIT: Assume Linux or some other general purpose POSIX operating system on a machine with access to hardware GPIO pins.
EDIT: I had assumed it would be obvious how I would implement PWM with sleep. For the avoidance of doubt, something like this:
while (TRUE)
{
    // Set all channels high
    for (int c = 0; c < NUM_CHANNELS; c++)
    {
        set_gpio_pin(c, 1);
    }

    // Loop over units within duty cycle
    for (int x = 0; x < DUTY_CYCLE_UNITS; x++)
    {
        // Set channels low when their number is up
        for (int c = 0; c < NUM_CHANNELS; c++)
        {
            if (x > CHANNELS[c])
            {
                set_gpio_pin(c, 0);
            }
        }
        sleep(DUTY_CYCLE_UNIT);
    }
}
Use a driver if you can. If your embedded device has a PWM controller, then fine, else dedicate a hardware timer to generating the PWM intervals and driving the GPIO pins.
If you have to do this at user level, raising a process/thread to a high priority and using sleep() calls is sure to generate a lot of jitter and a poor pulse-width range.
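If you do end up doing it in user space anyway, something like the following (a sketch; it needs root or CAP_SYS_NICE) at least takes ordinary time-sharing scheduling out of the picture, though it does not remove the jitter sources mentioned above:

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };

    /* real-time FIFO scheduling for this process */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* ...PWM loop goes here... */
    return 0;
}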
You do not very clearly state the ultimate purpose of this, but since you have tagged this embedded and pthreads I will assume you have a dedicated chip running a Linux variant.
In this case, I would suggest the best way to create PWM output is through your main program loop, since I assume the PWM is part of a greater control application. Most simple embedded applications (no UI) can run in a single thread with periodic updates of the GPIOs in your main thread.
For example:
InitIOs();

while (1)
{
    // Do stuff
    UpdatePWM();
}
That being said, check your chip specification, in most embedded devices there are dedicated PWM output pins (that can also act as GPIOs) and those can be configured simply in hardware by setting a duty cycle and updating that duty cycle as required. In this case, the hardware will do the work for you.
If you can clarify your situation a bit I can likely give you a more detailed answer.
A better way is probably to use some kind of interrupt-driven approach. I suppose it depends on your system, but IIRC Arduino uses interrupts for PWM.
100Hz seems about doable from user space. Typical OS task scheduler timeslices are around 10ms, too, so your CPU will already be multitasking at about that interval. You'll probably want to use a high process priority (low niceness) to ensure the sleeps won't overrun (much), and keep track of actual wall time and potentially adjust your sleep values down based on that feedback to avoid drift. You'll also need to make sure the timer the kernel uses for this on your hardware has a high enough resolution!
If you're very low on RAM and swapping heavily, you could run into problems with your program being paged out to disk. Also, if the kernel is doing other CPU-intensive stuff, this would also introduce unacceptable delays. (other, lower priority user space tasks should be ok) If keeping the frequency constant is critical, you're better off solving this in the kernel (or even running a realtime kernel).
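One way to get the "keep track of wall time and adjust your sleeps" behaviour mentioned above essentially for free is to sleep to an absolute deadline rather than a relative interval. A minimal sketch (toggle_outputs() is a placeholder for the actual pin updates):

#include <time.h>

#define PERIOD_NS 10000000L            /* 10 ms -> 100 Hz */

void pwm_loop(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        /* toggle_outputs(); */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* oversleeping one period does not accumulate into long-term drift */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}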
Using a thread and sleeping on an OS that is not an RTOS is not going to produce very accurate or consistent results.
A better method is to use a timer interrupt and toggle the GPIO in the ISR. Unlike using a hardware PWM output on a hardware timer, this approach allows you to use a single timer for multiple signals and for other purposes. You will still probably see more jitter than with a hardware PWM, and the practical frequency range and pulse resolution will be much lower than is achievable in hardware, but at least the jitter will be on the order of microseconds rather than milliseconds.
If you have a timer, you can set that up to kick an interrupt each time a new PWM edge is required. With some clever coding, you can queue these up so the interrupt handler knows which of many PWM channels and whether a high or low going edge is required, and then schedule itself for the next required edge.
If you have enough of these timers, then it's even easier, as you can allocate one per PWM channel.
On an embedded controller with a low-latency interrupt response, this can produce surprisingly good results.
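A sketch of what a single-timer, multi-channel version could look like if the ISR simply runs once per PWM step (set_gpio_pin() is the same hypothetical helper used in the question; duty[] holds each channel's threshold in steps):

#define NUM_CHANNELS 4
#define PWM_STEPS    100               /* 100 steps * 100 Hz -> 10 kHz tick */

extern void set_gpio_pin(unsigned pin, int level);

static volatile unsigned duty[NUM_CHANNELS];   /* 0..PWM_STEPS per channel */
static unsigned tick;

/* called from the timer interrupt, once per PWM step */
void pwm_tick_isr(void)
{
    for (unsigned c = 0; c < NUM_CHANNELS; c++)
        set_gpio_pin(c, tick < duty[c]);       /* high until the threshold */

    if (++tick >= PWM_STEPS)
        tick = 0;
}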
I fail to understand why you would want to do PWM in software with all of the inherent timing jitter that interrupt servicing and software interactions will introduce (e.g. the PWM interrupt hits when interrupts are disabled, the processor is servicing a long uninterruptible instruction, or another service routine is active). Most modern microcontrollers (ARM-7, ARM Cortex-M, AVR32, MSP, ...) have timers that can either be configured to produce or are dedicated as PWM generators. These will produce multiple rock steady PWM signals that, once set up, require zero processor input to keep running. These PWM outputs can be configured so that two signals do not overlap or have simultaneous edges, as required by the application.
If you are relying on the OS sleep function to set the time between the PWM edges then this will run slow. The sleep function sets the minimum time between task activations, and the actual time between them will be stretched by task switches, the presence of a higher-priority thread, or other kernel functions running.