In an attempt to make the system time on an ODroid as close to real time as possible, I've tried adding a real-time clock (RTC) to it. The RTC has an accuracy of +/- 4 ppm.
Without the real-time clock, I would get results like this (synced with the NTP server every 60 seconds). The blue is an Orange Pi for comparison. The x-axis is the sample number, and the y-axis is the offset reported by the NTP server in ms.
So I tried the same thing (more samples, but the same interval), except that instead of just syncing with the NTP server, I did the following:
1. Set the system time from the HW clock.
2. Sync with the NTP server to update the system time, and record the offset reported by the server.
3. Update the HW clock from the system time, since it has just been synced to real time.
Then I wait 60 seconds and repeat.
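Roughly, each iteration boils down to something like this (a simplified sketch - the exact commands depend on the image, and the offset logging is left out):

```cpp
// Simplified sketch of the sync loop. Assumes hwclock/ntpdate-style tools are
// available on the board; recording the reported offset is omitted here.
#include <chrono>
#include <cstdlib>
#include <thread>

int main() {
    for (;;) {
        std::system("hwclock --hctosys");     // 1. set the system time from the HW clock
        std::system("ntpdate pool.ntp.org");  // 2. sync with the NTP server (prints the offset)
        std::system("hwclock --systohc");     // 3. write the freshly synced system time back to the HW clock
        std::this_thread::sleep_for(std::chrono::seconds(60));
    }
}
```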
I didn't expect it to be perfect, but what I got shocked me a little bit. What in the world am I looking at? The jitter gets smaller and smaller, following an almost straight line, but when it reaches the correct time (about 410 minutes in...), it seems to keep going and lets the jitter and offset grow again.
Can anyone explain this, or maybe tell me what I'm doing wrong?
This is weird!
So you are plotting the difference between your RTC time and the NTP server time. Where is the NTP server located? In the second plot you are working in a range of a couple of hundred ms. NTP has accuracy limitations. From Wikipedia:
https://en.wikipedia.org/wiki/Network_Time_Protocol
NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more.
Your data is a bit weird looking though.
Suppose I send 1000 transactions per second and finality is 2-3 seconds. Will it take 2-3 seconds to finalize all of those 1000 transactions? Or will there be a difference in finalization delay between the first and the last transaction?
Each individual transaction will finalise within a few seconds. So yes, if the consensus latency is, say, 4 s, each transaction will reach consensus 4 s after it was received by a node. Since it will take you 1 s to send all 1000, it will take about 5 s overall for all 1000 to reach consensus.
The analogy I sometimes use to distinguish throughput from latency is the difference between the speed of light and the speed of sound.
The time it takes for a flash of light or a sound to reach someone is latency.
Now, you can flash or sound once a second or 100 times a second; that's throughput. The fact that it takes sound longer to reach its destination (latency) doesn't prevent 100 sounds from being sent in one second.
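To put numbers on the original question (using the example figures above - 1000 transactions sent at 1000 per second and a 4 s consensus latency, both illustrative values rather than protocol constants):

```cpp
#include <cstdio>

int main() {
    const double tx_count  = 1000.0;  // transactions to send
    const double send_rate = 1000.0;  // transactions per second (example value)
    const double latency_s = 4.0;     // consensus latency per transaction (example value)

    double send_duration = tx_count / send_rate;       // 1 s to hand everything to a node
    double last_final    = send_duration + latency_s;  // the last transaction finalises ~5 s in
    std::printf("all %.0f transactions final after ~%.1f s\n", tx_count, last_final);
}
```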
I'm currently using the level2 orderbook channel via the GDAX WebSocket API. Quite recently a "time" field started appearing on the l2update JSON messages and this doesn't appear to be documented on the API reference pages. Some questions:
What does this time field represent and is it reliable enough to use? Is it message sending time from GDAX?
If it is sending time, I am occasionally seeing latencies of up to two minutes - is this expected?
Thanks!
I am playing with the L2 APIs right now and have the same question. I'm seeing a range of timestamps from 4000 ms delayed to -300 ms (i.e. in the future).
The negative number makes me feel that the time can't be trusted. I tried connecting from two different datacenters and from home, and I can reproduce both sides of the problem.
I've been using this field reliably for a couple of months, assuming it is the time the order was received, and it was typically coming out as a pretty consistent 0.05 second lag relative to my system time; however, in the last few days it's been increasing - over 1 second yesterday and 2.02 seconds right now. I note from the status page (https://status.gdax.com/) that maintenance was carried out on May 9th, but it was fine for me for a few days after that.
To answer the question more directly: no, two-minute latencies are not expected. I would check that your system time is correct. A quick Google search brings up https://time.is/.
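One way to sanity-check this is to compute the lag between the message's "time" field and your own clock as each l2update arrives. A minimal sketch (it assumes the field is an ISO-8601 UTC timestamp and that your local clock is itself NTP-synced; timegm() is a POSIX/glibc extension, use _mkgmtime on Windows):

```cpp
#include <chrono>
#include <cstdio>
#include <ctime>
#include <iomanip>
#include <sstream>
#include <string>

// Lag in milliseconds between an ISO-8601 UTC timestamp (e.g. the l2update
// "time" field) and the local system clock. Positive means the timestamp is
// in the past; negative means it is "in the future" relative to this machine.
double lag_ms(const std::string& iso_utc) {
    std::tm tm{};
    std::istringstream ss(iso_utc);
    ss >> std::get_time(&tm, "%Y-%m-%dT%H:%M:%S");  // whole seconds
    double frac = 0.0;
    if (ss.peek() == '.') ss >> frac;               // fractional seconds, e.g. ".123456"
    std::time_t secs = timegm(&tm);                 // interpret the tm as UTC
    auto msg_time = std::chrono::system_clock::from_time_t(secs)
                  + std::chrono::duration_cast<std::chrono::system_clock::duration>(
                        std::chrono::duration<double>(frac));
    auto now = std::chrono::system_clock::now();
    return std::chrono::duration<double, std::milli>(now - msg_time).count();
}

int main() {
    // Placeholder timestamp; in practice this comes from each l2update message.
    std::printf("lag: %.1f ms\n", lag_ms("2018-05-14T09:30:00.123456Z"));
}
```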
I am trying to find out what causes the difference between the QTime and the actual response time in my Solr application.
The Solr server is running on the same machine as the program generating the queries.
I am getting QTimes of around 19 ms on average, but it takes 30 ms to actually get my response.
This may sound like it is not much, but I am using Solr for some obscure stuff where every millisecond counts.
I figured that the time difference is not caused by disk I/O, since using RAMDirectoryFactory did not speed anything up.
Using a SolrEmbeddedServer instead of a SolrHttpServer did not cause a speedup either (so it is not Jetty that causes the difference?).
Is the data transfer between the query program and the Solr instance causing the time difference? And, even more importantly, how can I minimize this time?
regards
This is a well-known FAQ:

Why is the QTime Solr returns lower than the amount of time I'm measuring in my client?

"QTime" only reflects the amount of time Solr spent processing the request. It does not reflect any time spent reading the request from the client across the network, or writing the response back to the client. (This should hopefully be obvious since the QTime is actually included in the body of the response.)

The time spent on this network I/O can be a non-trivial contribution to the total time as observed from clients, particularly because there are many cases where Solr can stream "stored fields" for the response (i.e. requested by the "fl" param) directly from the index as part of the response writing, in which case disk I/O reading those stored field values may contribute to the total time observed by clients outside of the time measured in QTime.
How to minimize this time?
Not sure if it will have any effect, but make sure you are using the javabin format, not json or xml (wt=javabin).
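To see how much of the gap is spent outside Solr, it can also help to time the request on the client and compare that against the QTime in the response header. A rough sketch of the measurement using libcurl (the URL, core name, and query are placeholders; wt=json is used here only so the QTime is easy to read in the body - the javabin suggestion applies when querying via SolrJ):

```cpp
#include <chrono>
#include <cstdio>
#include <string>
#include <curl/curl.h>

// Append each chunk of the HTTP response body to a std::string.
static size_t collect(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    const char* url = "http://localhost:8983/solr/mycore/select?q=*:*&wt=json";
    std::string body;
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    auto t0 = std::chrono::steady_clock::now();
    CURLcode rc = curl_easy_perform(curl);   // network + Solr processing + response read
    auto t1 = std::chrono::steady_clock::now();
    curl_easy_cleanup(curl);

    double total_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    // The QTime sits in the responseHeader of the body; the difference between
    // it and total_ms is the network/serialization overhead discussed above.
    std::printf("rc=%d  total=%.1f ms  response bytes=%zu\n",
                static_cast<int>(rc), total_ms, body.size());
}
```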
In SQL Server 2008 Activity Monitor, I see the Wait Time for the Wait Category "Latch" (not Buffer Latch) spike above 10,000 ms/sec at times. The Average Waiter Count is under 10, but this is by far the highest area of waits in a very busy system. Disk I/O is almost zero and page life expectancy is over 80,000, so I know it's not slowed down by the disk hardware, and I assume it's not even touching the SAN cache. Does this mean SQL Server is waiting on CPU (i.e. resolving a bajillion locks), or waiting to transfer data from the local server's cache memory for processing?
Background: the system is a 48-core server running SQL Server 2008 Enterprise with 64 GB of RAM. Queries are under 100 ms in response time - for now - but I'm trying to understand the bottlenecks before they get to 100x that level.
Latch class                    Count        Total wait (ms)  Max wait (ms)
ACCESS_METHODS_DATASET_PARENT  649629086    3683117221       45600
BUFFER                         20280535     23445826         8860
NESTING_TRANSACTION_READONLY   22309954     102483312        187
NESTING_TRANSACTION_FULL       7447169      123234478        265
Some latches are I/O, some are CPU, and some are other resources. It really depends on which particular latch type you're seeing this on. sys.dm_os_latch_stats will show which latches are hot in your deployment.
I wouldn't worry about the last three items. The two nesting_transaction ones look very healthy (low average, low max). Buffer is also OK, more or less, although the 8 s max time is a bit high.
The AM_DS_PARENT latch is related to parallel queries/parallel scans. Its average is OK, but the max of 45 s is rather high. Without going into too much detail, I can tell you that long wait times on this latch type indicate that your I/O subsystem can encounter spikes (and the 8 s max BUFFER latch wait corroborates this).
Language: C++
Development Environment: Microsoft Visual C++
Libraries Used: MFC
Problem: This should be fairly simple, but I can't quite wrap my head around it. I'm attempting to calculate a rolling average over a given amount of time - let's say five seconds. Every second, my program receives a data message containing some numerical information, including the CPU idle time during the process.
I want to be able to show the user the average CPU idle time over a five-second period. I was thinking about just using an array and storing a value every five seconds, but I'm not sure how to do the rolling portion. Unless there is some other built-in method for doing rolling calculations?
As it turns out, it would actually be better to implement immediate feedback regarding idle percentage, which is much easier to code.
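For reference, the rolling version the question asks about can be done with a small circular buffer. A minimal sketch, assuming one idle-time sample arrives per second and a five-second window:

```cpp
#include <array>
#include <cstddef>

// Rolling average over the last five samples (one sample per second is assumed,
// so this covers roughly a five-second window).
class RollingAverage {
public:
    void add(double sample) {
        sum_ -= buf_[next_];            // drop the value being overwritten
        buf_[next_] = sample;
        sum_ += sample;
        next_ = (next_ + 1) % buf_.size();
        if (count_ < buf_.size()) ++count_;
    }
    double average() const { return count_ ? sum_ / count_ : 0.0; }

private:
    std::array<double, 5> buf_{};       // five one-second samples
    double sum_ = 0.0;
    std::size_t next_ = 0;
    std::size_t count_ = 0;
};
```

Each time a data message arrives, push the idle value with add() and display average(); once five samples have been seen, the oldest one automatically drops out of the sum.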