What is the error between NTP stratums? - ntp

I read that NTP syncs machines to within about 100 ms accuracy over a WAN (stated in the following links: http://www.ntp.org/ntpfaq/NTP-s-algo.htm and https://en.wikipedia.org/wiki/Network_Time_Protocol).
Is this 100 ms error (or sync difference) incurred at each NTP stratum?
If not, how does NTP ensure that the error doesn't build up at lower strata?
I know I am missing something from the NTP protocol here; can someone please point out what? Does it have anything to do with root delay and root dispersion, or are those just used to reject candidate servers?

100 ms is just an illustrative value giving a typical upper bound of what is to be expected with current network technology.
Basically, the offset between two NTP nodes is determined by the asymmetry of the network path to and from one node to the other; an asymmetric path biases the offset estimate by half the difference between the outbound and return delays.
Dispersion measures how the error in comparing the clock of the (source) NTP server with the (local) clock of the NTP client accumulates over time.
Thus, with ATM or other kinds of store-and-forward components in the network path, the achievable accuracy is far worse than with the low-latency broadcast links that might be present within a LAN.
The 100 ms is not to be taken as a value per level, but as what is to be expected along a usual (low-depth) synchronisation tree.
(I have not seen more than 5 levels being used in real life.)
Root delay is the total round-trip delay to the time source in use.
Root dispersion is the maximum error along the path to this time source.
For a detailed explanation of how error (and jitter and dispersion) are calculated and how they interact, please have a look at the specification documents,
e.g. RFC 5905: Network Time Protocol Version 4: Protocol and Algorithms Specification.
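To make the build-up question concrete, here is a minimal sketch of the error bookkeeping described above (simplified from RFC 5905: it ignores the clock-drift and jitter terms, and the 20 ms / 2 ms per-hop numbers are purely hypothetical). Each stratum adds its own hop's round-trip delay and dispersion to the root delay and root dispersion it advertises downstream, and a client's worst-case error bound is the root synchronization distance, rootdelay / 2 + rootdisp:

#include <stdio.h>

int main(void)
{
    /* Hypothetical per-hop values: 20 ms round trip, 2 ms dispersion. */
    const double hop_delay = 0.020, hop_disp = 0.002;
    double rootdelay = 0.0, rootdisp = 0.0;

    for (int stratum = 2; stratum <= 5; stratum++) {
        rootdelay += hop_delay;   /* this hop's round-trip delay        */
        rootdisp  += hop_disp;    /* this hop's measurement dispersion  */
        printf("stratum %d: error bound <= %.1f ms\n",
               stratum, (rootdelay / 2 + rootdisp) * 1000);
    }
    return 0;
}

So the bound does grow with the depth of the tree, but for the shallow trees mentioned above it stays well inside the 100 ms figure on a reasonable network.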

Related

RN4871 with Gattlib C library high latency

I am using RN4871 with the latest firmware to turn on multiple LEDs via an I2C LED driver. I use the gattlib C/C++ library to communicate ("write without response") with the RN4871 from my Ubuntu (20.04) system, and I measure the communication latency via an oscilloscope connected both to my computer and to the RN4871.
I set the communication parameters on the Bluetooth device to 7.5 ms connection intervals with zero latency.
The problem is that if I send commands from my computer at 500 ms intervals, my communication latency is below 20 ms, which is just perfect for my application. However, if my commands are 1 second or longer apart, the latency rises to 250 ms! In my application I need extremely fast communication with minimal latency (below 40 ms), and of course my commands are sent at variable intervals (they can even be 10 seconds apart). I don't know where this issue comes from. Does it have to do with some sleep mode within the RN4871 that kicks in when there is no data to transfer for more than 500 ms, or with something else?
Thanks!

XBee communication in large networks

Here is my situation:
I have a network of 96 XBee S2B and S2C modules. My application runs on an ARM module and has an XBee S2C module. All modules (in total 97 of them) are in the same network and are able to communicate with each other.
The software starts and knows the 64-bit addresses of all modules. It does a network discovery (Local AT -> ND) and waits for the responses. With each response the 16-bit address of the corresponding module is updated. If a module has not responded to the network discovery, it is queried again every 30 seconds (in most tests all nodes are discovered after 60 seconds).
Then, with all 64-bit and 16-bit addresses stored, the application sends a message to every node using unicasting. It does not wait between sending the messages. I tried this with 36, 42, 78 and now 96 nodes. With 36 nodes the messages are received within 3 seconds by every node (as expected); with 42 and 78 it takes 4 and 7 seconds respectively to reach every node. With 96, however, it takes at least 90 seconds.
There is no outside interference that I can detect and all nodes are within reach (if not, the network discovery would have failed).
I also tried using 64-bit messaging and ignoring the 16-bit address; it takes even longer with this method.
I am using the libxbee3 library made by attie (https://github.com/attie/libxbee3).
My question is: how do I speed up the communication time for the 96 nodes (keeping in mind that the goal is to be able to handle even bigger networks), and why is there such a big difference between 78 and 96 nodes (why does the network suddenly become so slow)?
If there is any more information needed about my situation, I will be happy to provide it. As I manage the code I can perform tests if you need more information.
First off, get an 802.15.4 sniffer and start looking at the traffic to see what's going on. Without that, you're stuck guessing at what might be happening. I haven't worked with 802.15.4 in years, but outside of Ember Desktop (only available from Silicon Labs in expensive development kits) I was pleased with the Ubiqua Protocol Analyzer. You might also want to explore where Wireshark's 802.15.4 sniffing capabilities stand.
Second, try implementing code to wait for a Transmit Status message before sending your next message. Better yet, write code to keep track of multiple outstanding messages and test it out with various settings -- how does the network behave with 1 message waiting on a Transmit Status, versus 5 outstanding messages?
My guess is that you're running into challenges with the XBee modules managing a routing table for that many nodes. Digi provides a document for working with large XBee networks, which explains how to use Source Routing on a large network. Your central node may need to maintain a routing table and specify routes in outbound messages to improve network throughput.
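Here is a minimal sketch of that flow-control idea. The send_unicast() and wait_tx_status() functions are hypothetical stubs standing in for whatever your library exposes (for libxbee3, the transmit call and a handler for the Transmit Status frames); the point is simply that at most WINDOW unicasts are ever in flight:

#include <stdio.h>
#include <string.h>

#define NODE_COUNT 96
#define WINDOW      5   /* experiment: 1 outstanding message vs 5 */

/* Stubs so the sketch compiles; replace with the real transmit call
 * and with code that blocks until the next Transmit Status arrives. */
static int send_unicast(int node, const void *payload, size_t len)
{
    (void)node; (void)payload; (void)len;
    return 0;
}
static int wait_tx_status(void)   /* returns 0 if the status reports delivery */
{
    return 0;
}

static int message_all_nodes(const void *payload, size_t len)
{
    int in_flight = 0, failures = 0;

    for (int node = 0; node < NODE_COUNT; node++) {
        if (in_flight == WINDOW) {      /* window full: wait for one status */
            if (wait_tx_status() != 0)
                failures++;
            in_flight--;
        }
        send_unicast(node, payload, len);
        in_flight++;
    }
    while (in_flight-- > 0)             /* drain the remaining statuses */
        if (wait_tx_status() != 0)
            failures++;

    return failures;
}

int main(void)
{
    const char msg[] = "ping";
    printf("delivery failures: %d\n", message_all_nodes(msg, strlen(msg)));
    return 0;
}

Comparing the total run time with WINDOW set to 1 and to 5 quickly tells you whether back-pressure, rather than routing, is what breaks down at 96 nodes.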
The thing is, there is a lot of collision and a large overhead on your network in a scenario where 96 nodes are involved.
My suggestion is to cluster your nodes with multiple routers as your network grows.
The issue is likely that you are using the standard ZigBee routing, AODV, in which routes are essentially recalculated with every transmission. Once the number of nodes gets large, that calculation takes much longer. You should consider changing to Source Routing, which uses a different frame type and makes use of routes stored at the nodes. On a large, stable network this should make message transmission much faster.
https://www.digi.com/wiki/developer/index.php/Large_ZigBee_Networks_and_Source_Routing

Delay measurement and synchronisation between Raspberry Pis

I am doing a project with two Raspberry Pis, which work as servers, and a laptop, which is the client.
I have attached a USB microphone to each Raspberry Pi and, using the PortAudio library, I capture the audio stream
and send it back to the laptop through a TCP/IP connection.
The scope of this project is to locate sound sources, and it works like this: I run a .c file on each Raspberry Pi, both of which are
connected to the same LAN as the laptop. When this program is running on both Raspberry Pis I get the message
"Waiting connection for a client". The next thing to do is simply to run the MATLAB file, which starts both Raspberry Pis
recording. I have managed to synchronise the Raspberry Pis to start at the same time through a simple condition like
do
{
    /* sleep() takes whole seconds, so sleep(0.01) truncates to 0;
       usleep(10000) gives the intended 10 ms pause */
    usleep(10000);
    j = read(newsockfd, &start, 1);
} while (j == 0);
so right before both Raspberry Pis have to start recording, I pause them so they can finish the initialisation commands and so on,
and then I just send a character ('k') through my MATLAB program
(t1 and t2 are the TCP connections):
start = 'k';
fwrite(t1, start);
fwrite(t2, start);
From this point both Raspberry Pis open the PortAudio stream and call the recordCallBack function.
When I run the application and clap, I still get a delay of 0.2 s between them, which causes
an error of about 60 metres. I have also checked the execution time of the fwrite calls, but accounting for that might
save me about 0.05 seconds, which would still leave the results far from reality.
This project is based on TDOA measurement, and a delay under 0.01 seconds is desired to get an accuracy of <1 m.
I have heard that Linux has some very accurate timers, and I was thinking that maybe I could use those to
take timestamps inside the functions in the .c file. Anyway, if you have any idea how I can measure the delay from
the point I send the character 'k' from MATLAB until the point where the audio stream on the microphone is opened, or any
way I could synchronise the two Linux servers, please help.
PS: both are Raspberry Pi 2s and are connected through UTP cables, so the processing and transmission rates should be the same.
It looks like an interesting project, but I think you underestimate the problem a little bit. The first issue is that you need to synchronize the two sensors. Given the speed of sound (roughly 0.34 m per millisecond), if you want an accuracy of about 1 m you need to synchronize them to about 1 ms. You could try the Network Time Protocol, but I'm not sure you can reach this accuracy even with a master on the local network. Better synchronization can be achieved with PTP (over Ethernet) or with GPS, if you can receive a GPS signal.
Then, if you manage to achieve this, a first step could be to record a few hand claps on both Raspberry Pis, save the timestamp when you start recording on both, and see if you actually obtain something significant. Maybe you will also need to use a microcontroller and a real-time operating system instead!
There are many ways to synchronise clocks. It can be done at the system level or at the application level.
The system level tends to be easier because there are already tools for the job. I don't recommend doing PTP at this stage, as mentioned by Emilien, since it is quite complicated to make it work. Instead I would recommend a normal setup via the same NTP network on all machines.
Example of NTP setup:
Query the server with # ntpdate -q 0.rhel.pool.ntp.org
If it is running, setup your local clock with # ntpdate 0.rhel.pool.ntp.org 1.rhel.pool.ntp.org
OBS: # means root user (which most likely means that you will need to run the command with sudo), whilst $ means normal user.
Check all machines' times with $ date +%k:%M:%S.%N, which will print the clock down to nanosecond resolution.
If that doesn't achieve the desired result, then try the PTP approach, or just synchronise all your devices when they connect to the master, where the master can normalise each independent clock. I will not go into details here.
Then you can send your audio data via TCP/IP (or perhaps UDP/IP, for lower latency) as you mentioned before, but always send, together with each audio frame, the timestamp of the slave machine taken with the clock_gettime() function using CLOCK_REALTIME as the clk_id argument.
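A minimal sketch of that last point, assuming a hypothetical stamped_frame wire format and a stub send_frame() that stands in for the write() on the already-open socket: take a CLOCK_REALTIME timestamp as close as possible to where the block of samples is captured and ship it along with the samples, so the laptop can align the two streams on the (NTP-disciplined) wall clocks.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Hypothetical wire format: CLOCK_REALTIME timestamp + one block of samples. */
struct stamped_frame {
    int64_t sec;            /* seconds since the Unix epoch           */
    int64_t nsec;           /* nanoseconds within that second         */
    int16_t samples[256];   /* one block from the PortAudio callback  */
};

/* Stub sender; in the real program this would be a write() on newsockfd. */
static int send_frame(const struct stamped_frame *f)
{
    printf("frame stamped at %lld.%09lld\n",
           (long long)f->sec, (long long)f->nsec);
    return 0;
}

/* Call this from the record callback for each captured block. */
static int stamp_and_send(const int16_t *samples, size_t count)
{
    struct stamped_frame f;
    struct timespec now;

    clock_gettime(CLOCK_REALTIME, &now);   /* NTP-disciplined wall clock */
    f.sec  = now.tv_sec;
    f.nsec = now.tv_nsec;

    if (count > 256)
        count = 256;
    memset(f.samples, 0, sizeof f.samples);
    memcpy(f.samples, samples, count * sizeof f.samples[0]);

    return send_frame(&f);
}

int main(void)
{
    int16_t silence[256] = {0};
    return stamp_and_send(silence, 256);
}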

Using NTP without a server, just for the Control System

Edit:
I am essentially attempting to utilize the NTP code from section 5 of RFC 1129 from the command line. Simply setting the clock, or even making an adjtime call, is insufficient. I'd like to utilize the pre-existing NTP code for properly synchronizing clocks, but without the network part.
I have a system that cannot reach the internet, but has access to a high-precision clock. I would like to periodically poll that high precision clock for the time, and utilize the control system in NTP to synchronize the system clock.
Does anyone know how to feed input to NTP without faking an NTP server?
Ideally, I would be able to feed it the current time on the command-line, and have it use that as another point for synchronizing the clock.
bash ~ $ something 1416899507
Looking into refclock_nmea.c it appears as though a simple mechanism would be to feed ntpd time values from GPS NMEA sentences. Alternatively, it doesn't appear to be that difficult to just implement a custom refclock driver. David Mills has a tutorial available: http://www.eecis.udel.edu/~mills/ntp/html/howto.html
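A sketch of the NMEA route, under the assumption that ntpd is configured with its generic NMEA refclock (driver 20) reading from a serial device or pty that you control (the ntp.conf and device-symlink setup is not shown here). The program below only builds a valid $GPRMC sentence, with its checksum, from a Unix timestamp; the position fields are dummies, since only the time, date and status fields matter for this purpose:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Build a minimal $GPRMC sentence for the given Unix time. */
static void make_gprmc(time_t t, char *out, size_t outlen)
{
    struct tm tm;
    char body[80];
    unsigned char cksum = 0;

    gmtime_r(&t, &tm);
    snprintf(body, sizeof body,
             "GPRMC,%02d%02d%02d,A,0000.0000,N,00000.0000,E,0.0,0.0,%02d%02d%02d,,",
             tm.tm_hour, tm.tm_min, tm.tm_sec,
             tm.tm_mday, tm.tm_mon + 1, tm.tm_year % 100);

    for (const char *p = body; *p; p++)   /* checksum: XOR of the chars between $ and * */
        cksum ^= (unsigned char)*p;

    snprintf(out, outlen, "$%s*%02X\r\n", body, cksum);
}

int main(int argc, char **argv)
{
    /* e.g. ./gprmc 1416899507, matching the command-line idea above */
    time_t t = (argc > 1) ? (time_t)atol(argv[1]) : time(NULL);
    char sentence[96];

    make_gprmc(t, sentence, sizeof sentence);
    fputs(sentence, stdout);   /* in practice, write this to the device ntpd reads */
    return 0;
}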

How would one go about measuring differences in clock time on two different devices?

I'm currently in an early phase of developing a mobile app that depends heavily on timestamps.
A master device is connected to several client devices over wifi, and issues various commands to these. When the client devices receive commands, they need to mark the (relative) timestamp when the command is executed.
While all this is simple enough, I haven't come up with a solution for how to deal with clock differences. For example, the master device might have its clock at 12:01:01, while client A is on 12:01:02 and client B on 12:01:03. Mostly, I can expect these devices to be set to similar times, as they sync over NTP. However, the nature of my application requires ms precision, so therefore I would like to safeguard against discrepancies.
A short delay between issuing a command and executing the command is fine, however an incorrect timestamp of when that command was executed is not.
So far, I'm thinking of something along the lines of having the master device ping each client device to determine the transaction time, and then request the client to send its "local" time. Based on this, I can calculate what the time difference is between master and client. Once the time difference is known, the client can adapt its timestamps accordingly.
I am not very familiar with networking, though, and I suspect that pinging a device is not a very reliable method of establishing transaction time, since a lot of factors apply and the latency may change.
I assume that there are many real-world settings where such timing issues are important, and thus there should be solutions already. Does anyone know of any? Is it enough to simply divide response time by two?
Thanks!
One heads over to RFC 5905 for NTPv4 and learns how to figure it out from the folks who have really put their noodles to this problem.
Or you simply make sure NTP is working properly on your servers so that you don't have this problem in the first place.
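If you do want to roll your own, here is a minimal sketch of the on-wire calculation from RFC 5905 that the "divide the response time by two" intuition comes from (the timestamps below are hypothetical): with four timestamps per exchange you get both an offset estimate and the round-trip delay, and the error of that estimate is bounded by half the round trip plus whatever path asymmetry there is.

#include <stdio.h>

/* t1: master sends the request     (master clock)
 * t2: client receives the request  (client clock)
 * t3: client sends its reply       (client clock)
 * t4: master receives the reply    (master clock)
 * All values in seconds. */
static void estimate(double t1, double t2, double t3, double t4)
{
    double offset = ((t2 - t1) + (t3 - t4)) / 2.0; /* client clock minus master clock */
    double delay  = (t4 - t1) - (t3 - t2);         /* round-trip network delay        */

    printf("offset %+.3f ms, round-trip %.3f ms, worst-case error +/- %.3f ms\n",
           offset * 1e3, delay * 1e3, delay / 2.0 * 1e3);
}

int main(void)
{
    /* Hypothetical exchange: the client is ~1 s ahead, ~11 ms round trip. */
    estimate(10.000, 11.005, 11.006, 10.012);
    return 0;
}

Averaging over several exchanges and keeping only those with the smallest round trip (essentially what NTP's clock filter does) tightens the estimate further.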
