SNTP client output questions - ntp

https://www.eecis.udel.edu/~mills/ntp/html/sntp.html This site says:
"By default, sntp writes the local date and time (i.e., not UTC) to the standard output in the format
2011-08-04 00:40:36.642222 (+0000) +0.006611 +/- 0.041061 psp-os1 149.20.68.26
where the +0.006611 +/- 0.041061 indicates the time offset and error bound of the system clock relative to the server clock, in seconds."
I have a few questions about this.
First question - Is the time displayed the current time of the client machine, or the current time adjusted by the offset?
Second question - What does the (+0000) represent?
Third question - What is the 'error bound'? This confuses me slightly.
Final question - What is psp-os1?
I understand the offset and I assume the IP address at the end is the server IP.

Related

Implementing Primary NTP Server (GPS Receiver)

I'm trying to implement an NTP server based on an NMEA GPS Receiver. I'm not sure what to fill the root delay field with.
I've read the NTPv4 specification and it's written that root delay is the total round-trip delay to the reference clock.
If I'm working with a secondary server, the root delay can be calculated from the difference between the timestamps exchanged in the packet requests with the reference server (am I correct?).
But I'm not sure what to fill it with if I'm using a GPS receiver as the reference clock. Should I just fill it with 0 instead?
It will depend largely on how you're setting the time in your server from the GPS. If you're reading the NMEA sentence, interpreting it and setting the clock, the root delay would be the time taken to do that. But it wouldn't be a very good clock; there are a lot of non-deterministic delays (jitter) involved in reading RS232 (assuming that is how you're connected to the GPS).
You can use the 1 pulse per second output of a GPS receiver to fix that. It's normally on the Data Carrier Detect pin. Using a proper RS232 port (not a USB one) you can have the server's clock synchronised to that (DCD can be used to raise an interrupt), so now you get very good alignment to GPS time. This could certainly be done in Solaris (a native part of the kernel), and in Linux too (http://support.ntp.org/bin/view/Support/ConfiguringNMEARefclocks). If you're doing this then I think that the root delay would be small, but there's the matter of the OS and hardware's response time to interrupts.
EDIT
According to this NTP docs page,
Root Delay
This is the total roundtrip delay to the primary reference source at
the root of the synchronization subnet, in seconds. Note that this
variable can take on both positive and negative values, depending on
clock Precision and Skew.
So with 1PPS it's going to be pretty low. So far as I can tell it's a field that a secondary NTP server uses to tell its clients what its delay to a reference clock is. So if you have a 1PPS locked GPS time source, you are a reference clock. In which case, perhaps zero is correct enough; I don't think that NTP can achieve cross-network time synchronisation accuracies (1ms at best) better than the IRQ response time of a computer (< 50us hopefully with a good CONFIG_PREEMPT_RT linux kernel with nothing else going on).
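For reference, a minimal sketch of how a stratum-1 (1PPS-disciplined) server might fill the root-related header fields; the layout follows RFC 5905, while the precision value and the "PPS" reference ID are assumptions for illustration:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* NTPv4 packet header, per RFC 5905 (timestamps shown as 64-bit fields
       for brevity; on the wire everything is big-endian). */
    struct ntp_packet {
        uint8_t  li_vn_mode;      /* leap indicator (2b), version (3b), mode (3b) */
        uint8_t  stratum;
        int8_t   poll;
        int8_t   precision;       /* log2 seconds of clock precision             */
        uint32_t root_delay;      /* NTP short format: 16.16 fixed-point seconds */
        uint32_t root_dispersion; /* same format                                 */
        uint32_t ref_id;          /* 4-char ASCII code for a stratum-1 source    */
        uint64_t ref_ts;
        uint64_t origin_ts;
        uint64_t rx_ts;
        uint64_t tx_ts;
    };

    static void fill_stratum1_header(struct ntp_packet *p, double est_error_seconds)
    {
        memset(p, 0, sizeof(*p));
        p->li_vn_mode = (0 << 6) | (4 << 3) | 4;  /* LI=0, version 4, mode 4 (server) */
        p->stratum    = 1;                        /* we are the reference clock       */
        p->precision  = -20;                      /* assumed: ~1 us clock readings    */

        /* A stratum-1 server has no upstream NTP server, so root delay is zero;
           root dispersion carries our own error estimate (PPS jitter, IRQ latency). */
        p->root_delay      = htonl(0);
        p->root_dispersion = htonl((uint32_t)(est_error_seconds * 65536.0));

        memcpy(&p->ref_id, "PPS", 4);             /* or "GPS" for an NMEA-only clock  */
    }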

How do I turn on nanosecond precision when capturing live traffic?

How do I tell libpcap v1.6.2 to store nanosecond values in struct pcap_pkthdr::ts.tv_usec (instead of microsecond values) when capturing live packets?
(Note: This question is similar to How to enable nanosecond resolution when capturing live packets in libpcap? but that question is vague enough that I decided to ask a new question.)
For offline and "dead" captures, the following functions can be used to tell libpcap to fill the struct pcap_pkthdr's ts.tv_usec member with nanosecond values:
pcap_open_dead_with_tstamp_precision()
pcap_open_offline_with_tstamp_precision()
pcap_fopen_offline_with_tstamp_precision()
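For comparison, a minimal sketch of how one of these offline variants is used (the file name is just a placeholder):

    #include <pcap/pcap.h>
    #include <stdio.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];

        /* Ask for nanosecond stamps when reading the file. */
        pcap_t *p = pcap_open_offline_with_tstamp_precision(
            "capture.pcap", PCAP_TSTAMP_PRECISION_NANO, errbuf);
        if (p == NULL) {
            fprintf(stderr, "%s\n", errbuf);
            return 1;
        }
        printf("ts.tv_usec holds %s\n",
               pcap_get_tstamp_precision(p) == PCAP_TSTAMP_PRECISION_NANO
                   ? "nanoseconds" : "microseconds");
        pcap_close(p);
        return 0;
    }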
Unfortunately, there does not appear to be _with_tstamp_precision variants for pcap_open_live() or pcap_create().
I believe that capturing live packets with nanosecond resolution should be possible, because the changelog for v1.5.0 says (emphasis mine):
Add support for getting nanosecond-resolution time stamps when capturing and reading capture files
I did see the pcap_set_tstamp_type() function and the pcap-tstamp man page, which says:
PCAP_TSTAMP_HOST—host: Time stamp provided by the host on which the capture is being done. The precision of this time stamp is unspecified; it might or might not be synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_LOWPREC—host_lowprec: Time stamp provided by the host on which the capture is being done. This is a low-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_HIPREC—host_hiprec: Time stamp provided by the host on which the capture is being done. This is a high-precision time stamp; it might or might not be synchronized with the host operating system's clock. It might be more expensive to fetch than PCAP_TSTAMP_HOST_LOWPREC.
PCAP_TSTAMP_ADAPTER—adapter: Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_ADAPTER_UNSYNCED—adapter_unsynced: Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp; it is not synchronized with the host operating system's clock.
Does the phrase "high-precision time stamp" here mean that nanosecond values are stored in the header's ts.tv_usec field? If so, PCAP_TSTAMP_HOST says "unspecified", so how do I determine at runtime whether the ts.tv_usec field holds microseconds or nanoseconds? And which of these is the default if pcap_set_tstamp_type() is never called?
pcap_create() does little if anything to set parameters for the capture device, and has no alternative calls for setting those parameters; this is by design. The intent, at the time pcap_create() and pcap_activate() were introduced, was that neither of those calls would have to be changed in order to support new parameters, and that new APIs would be introduced as new parameters are introduced.
You're supposed to call pcap_create() to create a not-yet-activated handle, set the parameters with the appropriate calls, and then attempt to activate the handle with pcap_activate().
One of the appropriate calls is pcap_set_tstamp_precision(), which is the call you use between pcap_create() and pcap_activate() to specify that you want nanosecond-precision time stamps. The default is microsecond-precision time stamps, for backwards source and binary compatibility.
Note that pcap_set_tstamp_precision() will fail if you can't get nanosecond-precision time stamps from the device on which you're capturing, so you must check whether it succeeds or fails or call pcap_get_tstamp_precision() after activating the pcap_t in order to see what time stamp precision you'll be getting.
And, no, "high-precision" has nothing to do with whether you get microseconds or nanoseconds, it has to do with whether the nominal microseconds or nanoseconds value really provide microsecond or nanosecond granularity or whether you'll always get values that are multiples of a power of 10 because the clock being used doesn't measure down to the microsecond or nanosecond.

Purpose of NTP vs. ICMP Timestamp message

I know the purpose of the Network Time Protocol is to synchronize clocks over networks, primarily with the use of the Originate, Receive and Transmit timestamps to make the time calculations.
But the ICMP protocol also has a Timestamp control message (and a corresponding Timestamp Reply), which "is used for time synchronization". It also contains three timestamp fields with the same names as in NTP, which are likely used in a similar manner.
So, what are the differences between the two? I guess the distinction is not that NTP is for desktop operating systems and ICMP for Layer 3 devices, since I know of Cisco switches that use NTP.
The timestamps may look like similar fields, but they have different lengths and very different content.
ICMP timestamp fields are 32-bit and carry a relative time: the moment the ICMP packet was handled on the outgoing/incoming side of the connection, expressed as the number of milliseconds elapsed since the last UTC midnight.
The highest-order bit is used to flag a non-standard value (a host time that is not coordinated to UTC), leaving 31 bits for the millisecond count.
NTP timestamp fields are 64-bit and carry an absolute time: 32 bits for seconds since the epoch (Jan-01-1900, rolling over in 2036) and another 32 bits for the fraction of a second (taking the time measurement down to sub-nanosecond resolution).
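To make the difference concrete, here is a small sketch that builds both representations from a POSIX struct timeval (the constant 2208988800 is the well-known 1900-to-1970 epoch offset; this is illustrative, not code taken from either protocol's specification):

    #include <stdint.h>
    #include <sys/time.h>

    #define NTP_UNIX_OFFSET 2208988800UL  /* seconds from 1900-01-01 to 1970-01-01 */

    /* 64-bit NTP timestamp: 32 bits of seconds since 1900, 32 bits of fraction. */
    static uint64_t unix_to_ntp(const struct timeval *tv)
    {
        uint32_t secs = (uint32_t)(tv->tv_sec + NTP_UNIX_OFFSET);
        uint32_t frac = (uint32_t)((double)tv->tv_usec * 4294967296.0 / 1e6);
        return ((uint64_t)secs << 32) | frac;
    }

    /* 32-bit ICMP timestamp: milliseconds since UTC midnight; the high-order bit
       would be set to flag a non-standard (non-UTC) value. */
    static uint32_t unix_to_icmp(const struct timeval *tv)
    {
        return (uint32_t)((tv->tv_sec % 86400) * 1000 + tv->tv_usec / 1000);
    }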

Time format conversion in C

I am working on a time-critical operation in C with Arduino. I am reading the time from an RTC. My algorithm needs:
1. local solar time
2. local time
3. local standard time meridian
4. equation of time
5. Greenwich Mean Time
I read the date and time from the RTC. How can I use that time to calculate the values above?
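For reference, the usual textbook approximations for these quantities look roughly like the sketch below; the longitude, UTC offset and day-of-year inputs are assumed to come from your RTC and configuration, DST is ignored, and the equation-of-time formula is the common low-precision approximation:

    #include <math.h>

    /* Equation of Time in minutes, using B = 360/365 * (day - 81) degrees. */
    static double equation_of_time(int day_of_year)
    {
        double b = 2.0 * M_PI * (day_of_year - 81) / 365.0;   /* radians */
        return 9.87 * sin(2 * b) - 7.53 * cos(b) - 1.5 * sin(b);
    }

    /* Local Standard Time Meridian in degrees: 15 degrees per hour of UTC offset. */
    static double lstm_degrees(double utc_offset_hours)
    {
        return 15.0 * utc_offset_hours;
    }

    /* Local solar time in hours: local clock time plus the time correction factor.
       Longitude is in degrees, east positive. */
    static double local_solar_time(double local_time_hours, double longitude_deg,
                                   double utc_offset_hours, int day_of_year)
    {
        double tc_minutes = 4.0 * (longitude_deg - lstm_degrees(utc_offset_hours))
                            + equation_of_time(day_of_year);
        return local_time_hours + tc_minutes / 60.0;
    }

    /* Greenwich Mean Time in hours is simply local time minus the UTC offset. */
    static double gmt_hours(double local_time_hours, double utc_offset_hours)
    {
        return local_time_hours - utc_offset_hours;
    }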

Time division multiplexing [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I have a question regarding time division multiplexing.
In GSM we are using time division multiplexing.
In time division multiplexing, there are timeslots for each signal. But while using GSM mobiles we get a continuous flow of data, even though we are not actually transmitting continuously, right?
How do we get a continuous signal even though transmission is done using TDM?
The simple answer is that you're not receiving a continuous flow of data. What you are receiving is short bursts of data close enough together that they seem to form a continuous stream.
In case you care, the specific numbers for GSM are that it starts with 4.615 ms frames, each of which is divided into 8 timeslots of .577 ms. So, a particular mobile handset receives data for .577 ms, then waits for ~4 ms, then receives data for another .577 ms, and so on. There's a delay of 3 time slots between receiving and transmitting, so it receives data, then ~1.8 ms later, it gets to transmit for .577 ms.
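To make those numbers concrete, a tiny sketch that just reproduces the arithmetic above:

    #include <stdio.h>

    int main(void)
    {
        /* GSM frame timing from the figures quoted above. */
        const double frame_ms = 4.615;                        /* one TDMA frame       */
        const int    slots_per_frame = 8;
        const double slot_ms = frame_ms / slots_per_frame;    /* ~0.577 ms per slot   */

        /* A handset transmits 3 timeslots after it receives. */
        const double rx_to_tx_ms = 3 * slot_ms;               /* ~1.7 ms              */
        /* Gap between two successive receive bursts for the same handset. */
        const double rx_gap_ms = frame_ms - slot_ms;          /* ~4 ms                */

        printf("slot = %.3f ms, rx->tx = %.2f ms, rx gap = %.2f ms\n",
               slot_ms, rx_to_tx_ms, rx_gap_ms);
        return 0;
    }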
The timing is close enough together that even if (for example) the signal gets weak and/or there's interference for a few ms, a particular handset missing data for one time slot won't necessarily be audible. Only when the signal is lost for about 20 ms do most people start to perceive it as actual signal loss. Losses shorter than that will result in lower sound fidelity, but not necessarily in a perceived loss of signal.
It's also worth noting that most of the newer (3G, 4G, LTE) systems work entirely differently, being based on CDMA (3G) or OFDMA (4G/LTE) rather than TDMA.
