Edit:
I am essentially attempting to use the NTP code from section 5 of RFC 1129 from the command line. Simply setting the clock, or even making an adjtime() call, is insufficient. I'd like to use the pre-existing NTP code for properly synchronizing clocks, but without the network part.
I have a system that cannot reach the internet but has access to a high-precision clock. I would like to periodically poll that high-precision clock for the time, and use the control system in NTP to synchronize the system clock.
Does anyone know how to feed input to NTP without faking an NTP server?
Ideally, I would be able to feed it the current time on the command-line, and have it use that as another point for synchronizing the clock.
bash ~ $ something 1416899507
Looking into refclock_nmea.c, it appears that a simple mechanism would be to feed ntpd time values as GPS NMEA sentences. Alternatively, it doesn't look that difficult to implement a custom refclock driver; David Mills has a tutorial available: http://www.eecis.udel.edu/~mills/ntp/html/howto.html
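A third route worth mentioning: ntpd ships a shared-memory refclock (type 28, configured with "server 127.127.28.0" in ntp.conf) that reads samples from a SysV shared-memory segment keyed 0x4E545030 ("NTP0" plus the unit number). Below is a minimal sketch of a feeder matching the command line above. The struct layout mirrors my reading of ntpd's refclock_shm and should be verified against your ntpd sources; run it as root for unit 0.

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <time.h>

struct shmTime {                     /* mirrors ntpd's refclock_shm layout */
    int mode;                        /* 0: ntpd uses the sample when valid is set */
    volatile int count;
    time_t clockTimeStampSec;        /* time read from the reference clock */
    int clockTimeStampUSec;
    time_t receiveTimeStampSec;      /* local time the sample was taken */
    int receiveTimeStampUSec;
    int leap;
    int precision;
    int nsamples;
    volatile int valid;
    unsigned clockTimeStampNSec;
    unsigned receiveTimeStampNSec;
    int dummy[8];
};

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <unix-timestamp>\n", argv[0]);
        return 1;
    }
    int id = shmget(0x4E545030, sizeof(struct shmTime), IPC_CREAT | 0600);
    if (id < 0)
        return 1;
    struct shmTime *s = (struct shmTime *)shmat(id, NULL, 0);

    struct timespec now;
    clock_gettime(CLOCK_REALTIME, &now);   /* when we took the sample */

    s->mode = 0;
    s->clockTimeStampSec = (time_t)atoll(argv[1]);
    s->clockTimeStampUSec = 0;
    s->receiveTimeStampSec = now.tv_sec;
    s->receiveTimeStampUSec = (int)(now.tv_nsec / 1000);
    s->leap = 0;
    s->valid = 1;                          /* ntpd clears this after reading */
    return 0;
}

Invoked once per poll of your high-precision clock, this hands ntpd one sample per run and lets its normal clock discipline do the filtering and slewing.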
I need to write C code to detect the inter-character time on an RS-232 line on Linux. The inter-character gap to detect could be as small as 1 ms, so I need a way to timestamp incoming characters very quickly, meaning with better than 1 ms resolution.
I'm not asking for a complete solution, just some initial guidance on which path to take: is this possible on Linux? Do I have to modify a driver to reach this kind of timing, or can something in user space do it (I don't think so)?
As far as I know there is no chance of achieving this in user space: no serial port configuration lets you specify a precise inter-character timeout. Writing a custom driver could bring you closer to the UART interrupts, which is what you need.
However, every time I had to solve a similar task, I ended up creating a tiny hardware module that performs the time-critical part very precisely and only reports results to the Linux machine. It all depends on what you need and how precise your gap detection has to be.
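For illustration, here is the naive user-space approach: timestamping each byte as read() returns it. It compiles and runs, but scheduler latency and UART FIFO batching can distort the measured gaps by far more than 1 ms, which is exactly the limitation described above. /dev/ttyS0 and the byte-at-a-time reads are illustrative assumptions, and the port is assumed to be configured raw already.

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);  /* placeholder port */
    if (fd < 0) { perror("open"); return 1; }

    struct timespec prev = {0, 0}, now;
    unsigned char c;
    while (read(fd, &c, 1) == 1) {
        clock_gettime(CLOCK_MONOTONIC, &now);          /* stamp when read() returns */
        long gap_us = (now.tv_sec - prev.tv_sec) * 1000000L
                    + (now.tv_nsec - prev.tv_nsec) / 1000L;
        if (prev.tv_sec != 0)
            printf("0x%02x after %ld us\n", c, gap_us);
        prev = now;
    }
    return 0;
}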
Please bear with me on this slightly odd request!
I have a Debian Linux system with an internal RTC which the user can adjust or change. It may or may not be connected to an external IP network, and therefore may or may not have access to NTP. I would like to provide an option in the clock-setting UI to "set from NTP" if it is available, but I don't want the system clock to be constantly updated from NTP.
How can I configure ntpd to be active, but not update the system clock?
How can I read and display the 'NTP time' (which will be shown along with the system clock)?
The system is Debian Jessie; I'll be reading and displaying the time in C or Python.
Failing a better option, I will disable ntpd and use occasional NTP requests to maintain a local "network time offset" from the system time.
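For the read-and-display part, here is a hedged sketch in C: send a single SNTP (mode 3) query and print the server's transmit timestamp next to the local clock, without ever touching the system time. pool.ntp.org and the minimal error handling are placeholders.

#include <arpa/inet.h>
#include <netdb.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define NTP_EPOCH_OFFSET 2208988800UL   /* seconds from 1900 to 1970 */

int main(void)
{
    unsigned char pkt[48] = { 0x1B };   /* LI=0, VN=3, Mode=3 (client) */
    struct addrinfo hints = { .ai_family = AF_INET,
                              .ai_socktype = SOCK_DGRAM };
    struct addrinfo *srv;

    if (getaddrinfo("pool.ntp.org", "123", &hints, &srv) != 0)
        return 1;
    int fd = socket(srv->ai_family, srv->ai_socktype, 0);
    sendto(fd, pkt, sizeof(pkt), 0, srv->ai_addr, srv->ai_addrlen);
    if (recv(fd, pkt, sizeof(pkt), 0) < 48)
        return 1;

    uint32_t secs;
    memcpy(&secs, pkt + 40, 4);          /* transmit timestamp, seconds field */
    time_t ntp = ntohl(secs) - NTP_EPOCH_OFFSET;
    time_t sys = time(NULL);
    printf("NTP time:    %s", ctime(&ntp));
    printf("System time: %s", ctime(&sys));
    close(fd);
    freeaddrinfo(srv);
    return 0;
}

The difference between the two printed times is the "network time offset" mentioned above; for ms-level work you would also timestamp the send and receive and apply the usual round-trip correction.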
I am doing a project with two Raspberry Pis, which work as servers, and a laptop, which is the client.
I have attached a USB microphone to each Raspberry Pi; using the PortAudio library I capture the audio stream and send it back to the laptop over a TCP/IP connection.
The goal of this project is to locate sound sources, and it works like this: I run a .c file on each Raspberry Pi, both connected to the same LAN as the laptop. When this program is running on both Raspberry Pis I get the message "Waiting connection for a client". The next step is just to run the MATLAB file, which starts both Raspberry Pis recording. I have managed to synchronize the Raspberry Pis to start at the same time through a simple condition like
do
{
    usleep(10000);                  /* 10 ms; note that sleep(0.01) truncates to sleep(0) */
    j = read(newsockfd, &start, 1); /* wait for the start byte */
} while (j == 0);
So right before both Raspberry Pis have to start recording, I pause them in order to let the initialization commands finish, and then I just send the character 'k' from my MATLAB program:
% t1, t2 are TCP connections
start = 'k';
fwrite(t1, start);
fwrite(t2, start);
From this point both Raspberry Pis open the PortAudio stream and call the recordCallback function.
When I run the application and clap, I still get a delay of 0.2 s between them, which causes an error of about 60 meters. I have also checked the execution time of the fwrite calls, but that might save me about 0.05 seconds, which would still leave the results far from reality.
This project is based on TDOA measurement, so the start-time mismatch needs to be a few milliseconds at most to get accuracy <1 m (at ~343 m/s, even 0.01 s of offset corresponds to several meters).
I have heard that Linux has some very accurate timers, and I was thinking that maybe I could use those to measure the time spent inside the functions in the .c file. In any case, if you have any ideas on how to measure the delay from the point I send the character 'k' from MATLAB to the point where the audio stream is opened on the microphone, or any way to synchronize the two Linux servers, please help.
P.S. Both are Raspberry Pi 2 boards connected through UTP cables, so the processing and transmission rates should be the same.
It looks like an interesting project, but I think you underestimate the problem a little. The first issue is that you need to synchronize the two sensors. Given the speed of sound (~343 m/s), if you want an accuracy of about 1 m you need to synchronize them to about 1 ms. You could try the Network Time Protocol, but I'm not sure you can reach this accuracy even with a master on the local network. Better synchronization can be achieved with PTP (over Ethernet) or GPS, if you can receive a GPS signal.
If you manage to achieve this, a first step could be to record a few hand claps on both Raspberry Pis, save the timestamp when recording starts on each, and see whether you actually obtain something significant. You may also end up needing a microcontroller and a real-time operating system instead!
There are many ways to synchronize clocks, either at the system level or at the application level.
The system level tends to be easier because the tools to do the job already exist. I don't recommend attempting PTP at this stage, as mentioned by Emilien, since it is quite complicated to make it work. Instead I recommend the normal setup: point all machines at the same NTP servers.
Example of NTP setup:
Query the server with # ntpdate -q 0.rhel.pool.ntp.org
If it is running, setup your local clock with # ntpdate 0.rhel.pool.ntp.org 1.rhel.pool.ntp.org
Note: # means the root user (which most likely means you will need to run the command with sudo), whilst $ means a normal user.
Check all machines' times with $ date +%k:%M:%S.%N, which prints the clock with nanosecond resolution (resolution, not accuracy).
If that doesn't achieve the desired result, then try the PTP approach, or synchronise all your devices against the master when they connect, letting the master normalise each independent clock. I will not go into details on that here.
Then you can send your audio data via TCP/IP (or perhaps UDP/IP to lower latency) as you mentioned, but always send, along with each audio frame, the slave machine's timestamp obtained from the clock_gettime() function with CLOCK_REALTIME as the clk_id argument.
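As a sketch of that last point, here is one way to stamp a frame just before it is queued for sending. The frame layout is invented for illustration; the clock_gettime() call is the substance.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define FRAME_SAMPLES 256            /* hypothetical frame size */

struct stamped_frame {
    int64_t sec;                     /* CLOCK_REALTIME seconds */
    int64_t nsec;                    /* CLOCK_REALTIME nanoseconds */
    int16_t samples[FRAME_SAMPLES];  /* raw audio payload */
};

/* Fill the header just before the frame is handed to send(). */
void stamp_frame(struct stamped_frame *f, const int16_t *pcm)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   /* NTP-disciplined wall clock */
    f->sec = ts.tv_sec;
    f->nsec = ts.tv_nsec;
    memcpy(f->samples, pcm, sizeof(f->samples));
}

int main(void)
{
    int16_t pcm[FRAME_SAMPLES] = {0};     /* a silent frame for the demo */
    struct stamped_frame f;
    stamp_frame(&f, pcm);
    printf("frame stamped at %lld.%09lld\n",
           (long long)f.sec, (long long)f.nsec);
    return 0;
}

With both machines NTP-synchronized, the laptop can then align the two streams by these wall-clock stamps instead of by arrival time.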
I've got an experimental box of tricks running that, every 100 ms or so, will spit out a 4 microsecond long +5V pulse of electricity on a TTL line. The exact time that this happens is not known ahead of time, but it's important -- so I'd like to use the Red Hat 5.3 computer that essentially runs the experiment to service this TTL, and create a glorified timestamp.
At the moment, what I've done is wired the TTL into pin 13 of the parallel port (STATUS_SELECT, one of the input lines on a parallel port) on the Linux box, spawn a process when the experiment starts, use chrt to change its scheduling priority to 99 -- i.e. high -- and then just poll the parallel port repeatedly in a while loop until the pin goes high. I then create an accurate timestamp and, in a non-blocking way, write it to disk.
Obviously, this is inefficient -- sometimes the process is suspended, and a TTL will be missed. As the computer is, itself, busy doing other things (namely acquiring data from my experimental bit of kit -- an MRI scanner!) this happens quite often. Polling is easy, but probably bad.
My question is this: doing something quickly when a TTL occurs seems like the bread and butter of computing, but, as far as I can tell, it's only possible to deal with interrupts on linux if you're a kernel module. The parallel port can generate interrupts, and libraries like parapin let you build kernel modules relatively quickly, where you have to supply your own handler.
Is the best way to deal with this problem and create accurate (±25 ms) timestamps for an experiment whenever that TTL comes in -- to write a kernel module that provides a list of recent interrupts to somewhere in /proc, and then read them out with a normal process later? Is that approach not going to work, and be very CPU inefficient -- or open a bag of worms to do with interrupt priority I'm not aware of?
Most importantly, this seems like it should be a solved problem -- is it, and if so do any wise people wish to point me in the right direction? Writing a kernel module seems like, frankly, a lot of hard, risky work for something that feels as if it should perhaps be simple.
The premise that "it's only possible to deal with interrupts on linux if you're a kernel module" dismisses some fairly common and effective strategies.
The simple course of action for responding to interrupts in userspace (especially infrequent ones) is to have a driver which creates a kernel device (or in some cases a sysfs node) where either a read() or perhaps a custom ioctl() from userspace will block until the interrupt occurs. You'd have to check if the default parallel port driver supports this; it's extremely common with the GPIO drivers on embedded-type boards, and the basic scheme could be borrowed for the parallel port, provided that the hardware supports true interrupts.
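For example, with the (now legacy) sysfs GPIO interface found on embedded boards, the blocking pattern looks like the sketch below. gpio24 is a placeholder, and the pin's 'edge' file is assumed to have been set to "rising" beforehand.

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    char buf[8];
    int fd = open("/sys/class/gpio/gpio24/value", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    read(fd, buf, sizeof(buf));              /* consume the initial state */
    for (;;) {
        struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };
        if (poll(&pfd, 1, -1) > 0) {         /* sleeps until the edge fires */
            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts);
            lseek(fd, 0, SEEK_SET);          /* rewind before re-reading */
            read(fd, buf, sizeof(buf));
            printf("edge at %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
        }
    }
}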
If very precise timing is the goal, you might do better to customize the kernel module to record the timestamp there, and implement a mechanism where a read() from userspace blocks until the interrupt occurs, and then obtains the kernel's already recorded timestamp as the read data - thus avoiding the variable latency of waking userspace and calling back into the kernel to get the time.
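A rough sketch of that kernel-side idea follows: the handler records a timestamp, and a blocking read() on a misc device hands it to userspace. All names are invented, locking and error handling are trimmed, and details like the IRQ number, shared-IRQ flags, and enabling the parallel port's interrupt-enable bit must be sorted out for real hardware.

#include <linux/fs.h>
#include <linux/interrupt.h>
#include <linux/ktime.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include <linux/wait.h>

#define TTL_IRQ 7                    /* typical PC parallel port IRQ; an assumption */

static DECLARE_WAIT_QUEUE_HEAD(ttl_wq);
static u64 ttl_stamp;                /* last interrupt time, ns since the epoch */
static int ttl_ready;

static irqreturn_t ttl_isr(int irq, void *dev_id)
{
    ttl_stamp = ktime_get_real_ns(); /* timestamp taken in interrupt context */
    ttl_ready = 1;
    wake_up_interruptible(&ttl_wq);
    return IRQ_HANDLED;
}

static ssize_t ttl_read(struct file *f, char __user *buf,
                        size_t len, loff_t *off)
{
    u64 stamp;

    if (wait_event_interruptible(ttl_wq, ttl_ready))
        return -ERESTARTSYS;
    ttl_ready = 0;
    stamp = ttl_stamp;
    if (len < sizeof(stamp) || copy_to_user(buf, &stamp, sizeof(stamp)))
        return -EFAULT;
    return sizeof(stamp);
}

static const struct file_operations ttl_fops = {
    .owner = THIS_MODULE,
    .read  = ttl_read,
};

static struct miscdevice ttl_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "ttlstamp",             /* shows up as /dev/ttlstamp */
    .fops  = &ttl_fops,
};

static int __init ttl_init(void)
{
    int ret = request_irq(TTL_IRQ, ttl_isr, 0, "ttlstamp", NULL);
    if (ret)
        return ret;
    return misc_register(&ttl_dev);
}

static void __exit ttl_exit(void)
{
    misc_deregister(&ttl_dev);
    free_irq(TTL_IRQ, NULL);
}

module_init(ttl_init);
module_exit(ttl_exit);
MODULE_LICENSE("GPL");

A normal-priority process can then just read() /dev/ttlstamp in a loop and log each 8-byte timestamp, with the latency-critical part already done in the handler.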
You might also look at true local-bus serial ports (if present) as an alternative interrupt-capable interface, in cases where the available parallel port is some partial or indirect implementation which doesn't support interrupts.
In situations where your only available interface is something indirect and high-latency such as USB, or where you want a lot of host- and operating-system-independence, then it may indeed make sense to use an external microcontroller. In that case, you would probably try to set the micro's clock from the host system, and then have it give you timestamp messages every time it sees an event. If your experiment only needs the timestamps to be relative to each other within a given experimental session, this should work well. But if you need to establish an absolute time synchronization across the USB latency, you may have to do some careful round-trip measurement and then estimate the latency in order to compensate for it (see NTP for an extreme example).
I'm currently in an early phase of developing a mobile app that depends heavily on timestamps.
A master device is connected to several client devices over wifi, and issues various commands to these. When the client devices receive commands, they need to mark the (relative) timestamp when the command is executed.
While all this is simple enough, I haven't come up with a solution for how to deal with clock differences. For example, the master device might have its clock at 12:01:01, while client A is on 12:01:02 and client B on 12:01:03. Mostly, I can expect these devices to be set to similar times, as they sync over NTP. However, the nature of my application requires ms precision, so I would like to safeguard against discrepancies.
A short delay between issuing a command and executing the command is fine, however an incorrect timestamp of when that command was executed is not.
So far, I'm thinking of something along the lines of having the master device ping each client device to determine the transmission time, and then request the client to send its "local" time. Based on this, I can calculate the time difference between master and client. Once the time difference is known, the client can adjust its timestamps accordingly.
I am not very familiar with networking, though, and I suspect that pinging a device is not a very reliable method of establishing the transmission time, since a lot of factors apply and latency may change.
I assume that there are many real-world settings where such timing issues are important, and thus there should be existing solutions. Does anyone know of any? Is it enough to simply divide the response time by two?
Thanks!
Head over to RFC 5905, which specifies NTPv4, and learn from the folks who have really put their noodles to this problem and how they figured it out.
Or you simply make sure NTP is working properly on your servers so that you don't have this problem in the first place.
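If you do roll your own, the core arithmetic RFC 5905 builds on is small. With four timestamps (client sends at T1, server receives at T2, server replies at T3, client receives at T4), "dividing the response time by two" is exactly what the standard offset formula does, under the assumption that the network path is symmetric:

#include <stdio.h>

/* Returns the estimated client-vs-server clock offset in seconds;
 * *delay receives the round-trip time with server processing removed. */
double ntp_offset(double t1, double t2, double t3, double t4, double *delay)
{
    *delay = (t4 - t1) - (t3 - t2);
    return ((t2 - t1) + (t3 - t4)) / 2.0;
}

int main(void)
{
    double delay;
    /* Example: client clock 0.5 s behind the server, 40 ms round trip. */
    double off = ntp_offset(10.00, 10.52, 10.53, 10.05, &delay);
    printf("offset=%.3f s  delay=%.3f s\n", off, delay);
    return 0;
}

Asymmetric paths and jitter are what break the symmetry assumption, which is why NTP filters many such samples instead of trusting a single exchange.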