I have a .pkt sniffer capture. When I read each packet in the capture from my C application, I observe a radio header appended to it. The radio header contains the time in epoch form for each packet. I would like to find the time difference between two packets in milliseconds, but I am not sure how to diff two epoch values to get that difference. Please help me with this.
I have a .pkt sniffer capture.
What is a ".pkt sniffer capture"? Did you use Wireshark to capture the packets? If so, then it's either a pcap or a pcap-ng capture.
The radio header contains the time in epoch for each and every packet.
In most sniffer formats, the time stamp for packets is part of a "record header", not part of a radio-information header. For 802.11 captures, some capture file formats might provide a radio-information header that includes the 802.11 Timing Synchronization Function timer, but that timer doesn't have a specified epoch.
The epoch for the packet time stamp depends on the capture file format. pcap and pcap-ng use the UN*X epoch of January 1, 1970, 00:00:00 UTC, as pcap was originally used on UN*X and pcap-ng is influenced by pcap. Other file formats might, for example, use the Windows FILETIME epoch of January 1, 1601, 00:00:00 "UTC" (using the proleptic Gregorian calendar), or some other time origin.
But if all you want is to
find out the time difference between two packets in terms of milliseconds
then the epoch is irrelevant - if a time is represented as "X seconds since some epoch" (where X isn't necessarily an integer; it could have a fractional part), then the time between "X seconds since the epoch" and "Y seconds since the epoch" is X - Y, and the epoch itself cancels out.
The big issue, then, is how "seconds since some epoch" is represented. An integral count of seconds? An integral count of {nanoseconds, tenths of microseconds, microseconds, milliseconds, etc.}? A combination of seconds and {nanoseconds, microseconds, etc.}? That's why we'd need to know whether this is a file-format time stamp or an 802.11 TSFT and, if it's a file-format time stamp, what file format that is.
If, as you seem to indicate, this is a Wireshark capture (i.e., a capture made with Wireshark, not just a capture that Wireshark happens to be able to read - it can read captures from programs that don't use its native pcap or pcap-ng formats), then the file format is pcap or pcap-ng, in which case the epoch is the Epoch, i.e. January 1, 1970, 00:00:00 UTC. The time stamp is then either 32-bit seconds since the Epoch plus 32-bit microseconds since that second, for pcap, or, for pcap-ng, a 64-bit count of units since the Epoch (the units are specified by the Interface Description Block for the interface on which the packet in question was captured; the default is microseconds).
To calculate the difference between the time stamps of two pcap packets, in microseconds, take the difference between the seconds values, multiply it by 1,000,000, and add to it the difference between the microseconds values (both differences are signed). To convert that to milliseconds, divide it by 1,000.
To calculate the difference between the time stamps of two pcap-ng packets, take the difference between the time stamp values; that's a count of fractions of a second, where the fraction is defined by the specified value in the Interface Description Block. To convert that to milliseconds, adjust as appropriate based on what the fraction is (for example, if it's microseconds, divide it by 1,000).
To calculate the difference between the TSFT values of two 802.11 packets, just subtract one value from the other; the TSFT value, at least for radiotap, is in microseconds, so, to convert it to milliseconds, divide it by 1,000.
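For the pcap case, a minimal C sketch of that arithmetic (the function name and the trailing pcap-ng/TSFT comment are illustrative, not from the question):

#include <pcap/pcap.h>

/* Millisecond difference between the time stamps of two pcap packets:
   difference in seconds times 1,000,000 plus difference in microseconds,
   both signed, then divided by 1,000. */
long long pcap_diff_ms(const struct pcap_pkthdr *a, const struct pcap_pkthdr *b)
{
    long long usec = (long long)(b->ts.tv_sec - a->ts.tv_sec) * 1000000LL
                   + (long long)(b->ts.tv_usec - a->ts.tv_usec);
    return usec / 1000LL;    /* microseconds -> milliseconds */
}

/* For pcap-ng time stamps or radiotap TSFT values in microsecond units,
   it is just a 64-bit subtraction followed by the same division:
   long long diff_ms = (long long)(ts_b - ts_a) / 1000LL; */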
"Epoch" isn't a unit or format; it's a point in time. Specifically, it's midnight UTC of January 1st, 1970.
Unix timestamps are just the number of seconds that have passed since that time. Subtract the smaller one from the larger to find the difference in seconds, and multiply by 1000 to get the number of milliseconds.
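A minimal C sketch of that, with two made-up whole-second timestamps:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t a = 1609459200;    /* hypothetical earlier timestamp */
    time_t b = 1609459322;    /* hypothetical later timestamp */
    long long diff_ms = (long long)difftime(b, a) * 1000;
    printf("%lld ms\n", diff_ms);    /* prints 122000 */
    return 0;
}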
Related
I am trying to get the process start time in a kernel module.
I get the proc struct pointer, and from the proc I take the field p_mstart:
typedef struct proc {
    .....
    /*
     * Microstate accounting, resource usage, and real-time profiling
     */
    hrtime_t p_mstart;    /* hi-res process start time */
This returns me the number 1976026375725303.
struct proc *iterated_process_ptr = curproc;
LOG("***KERNEL***: PID=%d, StartTime=%lld", iterated_process_ptr->p_pidp->pid_id, iterated_process_ptr->p_mstart);
What is this number ?
In the documentation, Solaris writes:
The gethrtime() function returns the current high-resolution real time. Time is expressed as nanoseconds since some arbitrary time in the past.
And in the book Solaris Internals they write:
Within the process, the operating system maintains a high-resolution timestamp that marks process start and terminate times. A p_mstart field, the process start time, is set in the kernel fork() code when the process is created.... it returns a 64-bit value expressed in nanoseconds.
The number 1976026375725303 does not make sense at all.
If I divide by 1,000,000,000 and then by 3,600 to get hours, I get 528 hours, or 22 days, but my uptime is 5 days.
Based on an answer received on the Google group comp.unix.solaris:
Instead of going to proc->p_mstart,
I need to take
iterated_process_ptr->p_user.u_start
This brings me the same struct (timestruc_t) as user space:
typedef struct psinfo {
    .....
    timestruc_t pr_start;    /* process start time, from the epoch */
The number 1976026375725303 does not make sense at all.
Yes it does. Per the very documentation that you quoted:
Time is expressed as nanoseconds since some arbitrary time in the past.
Thus, the value can be used to calculate how long ago the process started:
hrtime_t howLongAgo = gethrtime() - p->p_mstart;
That produces a value in nanoseconds for how long ago the process started.
And note that the value produced this way is accurate - the value from iterated_process_ptr->p_user.u_start, by contrast, is subject to system clock changes, so you can't say "This process has been running for 3 hours, 15 minutes, and 3 seconds" unless you also know the system clock hasn't been reset or modified in any way.
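To log that difference in a readable form, the nanosecond count can be split into whole seconds and milliseconds; a minimal sketch, reusing the LOG macro and pointer from the question:

hrtime_t howLongAgo = gethrtime() - iterated_process_ptr->p_mstart;
long long secs = howLongAgo / 1000000000LL;                /* whole seconds */
long long msec = (howLongAgo % 1000000000LL) / 1000000LL;  /* leftover milliseconds */
LOG("***KERNEL***: PID=%d has been running for %lld.%03lld s",
    iterated_process_ptr->p_pidp->pid_id, secs, msec);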
Per the Solaris 11 gethrtime.9F man page:
Description
The gethrtime() function returns the current high-resolution real
time. Time is expressed as nanoseconds since some arbitrary time in
the past; it is not correlated in any way to the time of day, and
thus is not subject to resetting or drifting by way of adjtime(2) or
settimeofday(3C). The hi-res timer is ideally suited to performance
measurement tasks, where cheap, accurate interval timing is required.
Return Values
gethrtime() always returns the current high-resolution real time.
There are no error conditions.
...
Notes
Although the units of hi-res time are always the same (nanoseconds),
the actual resolution is hardware dependent. Hi-res time is guaranteed
to be monotonic (it does not go backward, it does not periodically
wrap) and linear (it does not occasionally speed up or slow down for
adjustment, as the time of day can), but not necessarily unique: two
sufficiently proximate calls might return the same value.
The time base used for this function is the same as that for
gethrtime(3C). Values returned by both of these functions can be
interleaved for comparison purposes.
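As a userland illustration of the interval-timing use the man page describes (do_work() is a hypothetical stand-in for whatever is being measured; this assumes a Solaris system, where gethrtime(3C) is declared in <sys/time.h>):

#include <stdio.h>
#include <sys/time.h>

static void do_work(void)
{
    /* hypothetical workload being timed */
}

int main(void)
{
    hrtime_t start = gethrtime();
    do_work();
    hrtime_t elapsed = gethrtime() - start;    /* nanoseconds, monotonic */
    printf("do_work took %lld ns\n", (long long)elapsed);
    return 0;
}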
How do I tell libpcap v1.6.2 to store nanosecond values in struct pcap_pkthdr::ts.tv_usec (instead of microsecond values) when capturing live packets?
(Note: This question is similar to How to enable nanosecond resolution when capturing live packets in libpcap? but that question is vague enough that I decided to ask a new question.)
For offline and "dead" captures, the following functions can be used to tell libpcap to fill the struct pcap_pkthdr's ts.tv_usec member with nanosecond values:
pcap_open_dead_with_tstamp_precision()
pcap_open_offline_with_tstamp_precision()
pcap_fopen_offline_with_tstamp_precision()
Unfortunately, there do not appear to be _with_tstamp_precision variants for pcap_open_live() or pcap_create().
I believe that capturing live packets with nanosecond resolution should be possible, because the changelog for v1.5.0 says (emphasis mine):
Add support for getting nanosecond-resolution time stamps when capturing and reading capture files
I did see the pcap_set_tstamp_type() function and the pcap-tstamp man page, which says:
PCAP_TSTAMP_HOST—host: Time stamp provided by the host on which the capture is being done. The precision of this time stamp is unspecified; it might or might not be synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_LOWPREC—host_lowprec: Time stamp provided by the host on which the capture is being done. This is a low-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_HIPREC—host_hiprec: Time stamp provided by the host on which the capture is being done. This is a high-precision time stamp; it might or might not be synchronized with the host operating system's clock. It might be more expensive to fetch than PCAP_TSTAMP_HOST_LOWPREC.
PCAP_TSTAMP_ADAPTER—adapter: Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_ADAPTER_UNSYNCED—adapter_unsynced: Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp; it is not synchronized with the host operating system's clock.
Does the phrase "high-precision time stamp" here mean that nanosecond values are stored in the header's ts.tv_usec field? If so, PCAP_TSTAMP_HOST says "unspecified", so how do I determine at runtime whether the ts.tv_usec field holds microseconds or nanoseconds? And which of these is the default if pcap_set_tstamp_type() is never called?
pcap_create() does little if anything to set parameters for the capture device, and has no alternative calls for setting those parameters; this is by design. The intent, at the time pcap_create() and pcap_activate() were introduced, was that neither of those calls would have to be changed in order to support new parameters, and that new APIs would be introduced as new parameters are introduced.
You're supposed to call pcap_create() to create a not-yet-activated handle, set the parameters with the appropriate calls, and then attempt to activate the handle with pcap_activate().
One of the appropriate calls is pcap_set_tstamp_precision(), which is the call you use between pcap_create() and pcap_activate() to specify that you want nanosecond-precision time stamps. The default is microsecond-precision time stamps, for backwards source and binary compatibility.
Note that pcap_set_tstamp_precision() will fail if you can't get nanosecond-precision time stamps from the device on which you're capturing, so you must check whether it succeeds or fails or call pcap_get_tstamp_precision() after activating the pcap_t in order to see what time stamp precision you'll be getting.
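A minimal sketch of that create/set/activate sequence (the device name "eth0" is a hypothetical placeholder):

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_create("eth0", errbuf);    /* hypothetical device */
    if (p == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return 1;
    }
    /* Ask for nanosecond time stamps; this fails if the device
       can't supply them. */
    if (pcap_set_tstamp_precision(p, PCAP_TSTAMP_PRECISION_NANO) != 0)
        fprintf(stderr, "nanosecond precision not supported\n");
    if (pcap_activate(p) < 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
        pcap_close(p);
        return 1;
    }
    /* Check what precision we actually got. */
    if (pcap_get_tstamp_precision(p) == PCAP_TSTAMP_PRECISION_NANO)
        printf("ts.tv_usec holds nanoseconds\n");
    else
        printf("ts.tv_usec holds microseconds\n");
    pcap_close(p);
    return 0;
}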
And, no, "high-precision" has nothing to do with whether you get microseconds or nanoseconds, it has to do with whether the nominal microseconds or nanoseconds value really provide microsecond or nanosecond granularity or whether you'll always get values that are multiples of a power of 10 because the clock being used doesn't measure down to the microsecond or nanosecond.
I know the purpose of the Network Time Protocol is to synchronize clocks over networks, primarily with the use of the Originate, Receive and Transmit timestamps to make the time calculations.
But, the ICMP protocol also has a Timestamp control message (and a respective Timestamp Reply one), that "is used for time synchronization". It also contains three timestamp fields with the same name as in NTP, that are likely used in a similar manner.
So, what are the differences between the two? I guess the distinction is not that NTP is for desktop operating systems and ICMP for Layer 3 devices, since I know of Cisco switches that use NTP.
The timestamps may look like similar fields, but they have different lengths and very different content.
ICMP timestamp fields are 31 bits wide and carry a relative time: the moment the ICMP packet was handled on the outgoing/incoming side of the network connection, expressed as the number of milliseconds elapsed since the last UTC midnight.
The highest-order bit is used to flag a host time that is not coordinated to UTC / a non-standard value.
NTP timestamp fields are 64 bits wide and carry an absolute time, recorded as 32 bits for seconds since the epoch (January 1, 1900, until the rollover in 2036) and another 32 bits for the fraction of a second (thus going, in time measurement, down to sub-nanosecond resolution).
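A minimal C sketch of unpacking one 64-bit NTP timestamp into those two parts (the sample value is made up):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t ntp_ts = 0xE5E0D2FC80000000ULL;    /* hypothetical NTP timestamp */
    uint32_t seconds  = (uint32_t)(ntp_ts >> 32);          /* seconds since Jan-01-1900 */
    uint32_t fraction = (uint32_t)(ntp_ts & 0xFFFFFFFFu);  /* units of 1/2^32 second */
    /* Convert the binary fraction to nanoseconds in 64-bit arithmetic. */
    uint64_t nsec = ((uint64_t)fraction * 1000000000ULL) >> 32;
    printf("seconds=%u fraction=%u (~%llu ns)\n",
           seconds, fraction, (unsigned long long)nsec);
    return 0;
}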
I am working on a time-critical operation in C with an Arduino. I am reading the time from an RTC. My algorithm needs:
1. local solar time
2. local time
3. local standard time meridian
4. equation of time.
5. Greenwich mean time.
I read the date and time from the RTC. How can I use that time to calculate the quantities above?
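The question doesn't spell out an algorithm, but the usual textbook relations tie these five quantities together (local standard time meridian from the UTC offset, equation of time from the day of year, then local solar time from local time). A minimal sketch under that assumption, in plain C rather than Arduino-specific code, with all input values as hypothetical placeholders:

#include <math.h>
#include <stdio.h>

#define DEG2RAD (M_PI / 180.0)

/* Equation of time (minutes) for day-of-year d, using the common
   approximation EoT = 9.87 sin 2B - 7.53 cos B - 1.5 sin B,
   where B = 360/365 * (d - 81) degrees. */
static double equation_of_time(int d)
{
    double B = (360.0 / 365.0) * (d - 81) * DEG2RAD;
    return 9.87 * sin(2 * B) - 7.53 * cos(B) - 1.5 * sin(B);
}

int main(void)
{
    double tz_offset = 2.0;     /* hypothetical: UTC+2 */
    double longitude = 31.0;    /* hypothetical: degrees east */
    int    day       = 172;    /* hypothetical: June 21st */
    double local_h   = 12.5;    /* hypothetical: 12:30 local clock time */

    double lstm = 15.0 * tz_offset;                 /* local standard time meridian, degrees */
    double eot  = equation_of_time(day);            /* equation of time, minutes */
    double tc   = 4.0 * (longitude - lstm) + eot;   /* time correction, minutes */
    double lst  = local_h + tc / 60.0;              /* local solar time, hours */
    double gmt  = local_h - tz_offset;              /* Greenwich mean time, hours */

    printf("LSTM=%g deg  EoT=%g min  LST=%g h  GMT=%g h\n", lstm, eot, lst, gmt);
    return 0;
}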
I wish to write a C program which obtains the system time and then uses this time to print out its NTP equivalent.
Am I right in saying that the following is correct for the seconds part of the NTP time?
unsigned long ntp_seconds = time(NULL) + 2208988800UL;
How might I calculate the fraction part?
The fractional part to add obviously is 0 ps, since time() only gives you whole seconds ... ;-)
So the question for the fraction reduces to how accurate the system clock is.
gettimeofday() gets you microseconds; clock_gettime() can get you nanoseconds.
Anyhow, I doubt you'll reach the theoretically possible resolution that the 32-bit-wide fraction value allows (at least on a standard PC).
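A minimal sketch of both parts using clock_gettime() (assuming a POSIX system; the scaling turns 0..999999999 ns into a 32-bit binary fraction):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);

    /* 2208988800 = seconds between the NTP epoch (1900) and the
       Unix epoch (1970). */
    uint32_t ntp_seconds  = (uint32_t)(ts.tv_sec + 2208988800UL);
    /* fraction = nsec * 2^32 / 10^9, done in 64 bits to avoid overflow */
    uint32_t ntp_fraction = (uint32_t)(((uint64_t)ts.tv_nsec << 32) / 1000000000UL);

    printf("NTP time: %u.%u (seconds.fraction)\n", ntp_seconds, ntp_fraction);
    return 0;
}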