GDAX Websocket API - Level 2 timestamp accuracy - coinbase-api

I'm currently using the level2 orderbook channel via the GDAX WebSocket API. Recently a "time" field started appearing on the l2update JSON messages, and it doesn't appear to be documented on the API reference pages. Some questions:
What does this time field represent, and is it reliable enough to use? Is it the message sending time from GDAX?
If it is sending time, I am occasionally seeing latencies of up to two minutes - is this expected?
Thanks!

I am playing with the L2 APIs right now and have the same question. I'm seeing a range of timestamps from 4000 ms delayed to -300 ms (i.e. in the future).
The negative number makes me feel that the time can't be trusted. I tried connecting from two different datacenters and from home, and I can replicate both sides of the problem.

I've been using this field reliably for a couple of months, assuming it is the time the order was received, and it typically came out as a fairly consistent 0.05-second lag relative to my system time; however, in the last few days it's been increasing - over 1 second yesterday and 2.02 seconds right now. I note (https://status.gdax.com/) that maintenance was carried out on May 9th, but it was fine for me for a few days after that.
To answer the question more directly: no, 2-minute latencies are not expected. I would check that your system time is correct. A quick Google search brings up https://time.is/.
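To rule out local clock issues, you can log the gap between the l2update "time" field and your own clock as each message arrives. A minimal sketch, assuming the field is an ISO 8601 UTC timestamp with a "Z" suffix (as GDAX messages typically carry):

```python
from datetime import datetime, timezone

def message_lag_seconds(msg_time, now=None):
    """Return (local receive time - message 'time' field) in seconds.
    Positive means the message lagged; negative means its timestamp
    is ahead of the local clock."""
    # Assumes an ISO 8601 timestamp such as "2018-05-15T12:00:00.500Z".
    sent = datetime.fromisoformat(msg_time.replace("Z", "+00:00"))
    if now is None:
        now = datetime.now(timezone.utc)
    return (now - sent).total_seconds()

# Deterministic example with a pinned "now":
pinned = datetime(2018, 5, 15, 12, 0, 1, tzinfo=timezone.utc)
lag = message_lag_seconds("2018-05-15T12:00:00.500Z", now=pinned)
# lag == 0.5
```

Consistently large positive lags seen from several machines point at the feed; lags that flip sign (as reported above) usually point at unsynchronized local clocks, which NTP fixes.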

Related

Gatling: Keep fixed number of users/requests at any instant

How can we keep a fixed number of active concurrent users/requests at any time for a scenario?
I have a unique testing problem where I am required to do performance testing of services with a fixed number of requests at any given moment, for a given time period such as 10 minutes, 30 minutes, or 1 hour.
I am not looking for a per-second rate; what I want is to start with N requests and, as any of the N requests completes, add one more, so that at any given moment there are exactly N concurrent requests.
Things I have tried: rampUsers(100) over 10 seconds, but sometimes I see more than 50 users at a given instant.
constantUsersPerSec(20) during (1 minute) also took the number of requests to 50+ for some time.
atOnceUsers(20) seems related, but I don't see any way to keep it running for a given number of seconds, adding more requests as previous ones complete.
Thank you in advance; expecting some direction from your side.
There is a throttling mechanism (https://gatling.io/docs/3.0/general/simulation_setup/#throttling) which allows you to set a max number of requests per second, but remember that users are injected into the simulation independently of it: you must inject enough users to produce that max number of requests, otherwise you will end up with a lower req/s. Also, users that are injected but can't send a request because of throttling will wait in a queue for their turn. This may result in a huge load just after the throttle ends, or may extend your simulation, so it is always better to have the throttle time longer than the injection time and to add a maxDuration() option to the simulation setup.
You should also bear in mind that a throttled simulation is far from the natural way users behave. Real users never wait for another user to finish before opening a page or taking an action, so in real life you will always end up with a variable number of requests per second.
Use the closed workload model injection supported by Gatling 3.0. In your case, to simulate and maintain 20 active users/requests for a minute, you can use an injection like:
setUp(yourScenario.inject(constantConcurrentUsers(20) during (60 seconds)))

Solr QTime difference to real time

I am trying to find out what causes the difference between the QTime and the actual response time in my Solr application.
The Solr server is running on the same machine as the program generating the queries.
I am getting QTimes averaging around 19 ms, but it takes 30 ms to actually get my response.
This may not sound like much, but I am using Solr for some obscure stuff where every millisecond counts.
I figured the difference is not caused by disk I/O, since using RAMDirectoryFactory did not speed anything up.
Using an EmbeddedSolrServer instead of an HttpSolrServer did not cause a speedup either (so it is not Jetty that causes the difference?).
Is the data transfer between the query program and the Solr instance causing the time difference? And more importantly, how can I minimize this time?
Regards
This is a well-known FAQ:

Why is the QTime Solr returns lower than the amount of time I'm measuring in my client?

"QTime" only reflects the amount of time Solr spent processing the request. It does not reflect any time spent reading the request from the client across the network, or writing the response back to the client. (This should hopefully be obvious since the QTime is actually included in the body of the response.)

The time spent on this network I/O can be a non-trivial contribution to the total time as observed from clients, particularly because there are many cases where Solr can stream "stored fields" for the response (i.e. those requested by the "fl" param) directly from the index as part of the response writing, in which case disk I/O reading those stored field values may contribute to the total time observed by clients outside of the time measured in QTime.
How to minimize this time?
Not sure if it will have any effect, but make sure you are using the javabin format rather than JSON or XML (wt=javabin).
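One way to quantify the overhead is to time the full round trip in the client and subtract the QTime reported in the response header. A rough sketch using only the standard library; the URL and core name in the commented usage are placeholders, and it assumes the JSON response format, where QTime sits under responseHeader:

```python
import json
import time
import urllib.request

def overhead_ms(wall_ms, body):
    """Client-observed overhead: wall-clock time minus Solr's reported QTime."""
    return wall_ms - body["responseHeader"]["QTime"]

def query_overhead_ms(url):
    """Time one Solr query end to end; return (wall_ms, qtime_ms, overhead_ms)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        body = json.load(resp)
    wall_ms = (time.perf_counter() - start) * 1000.0
    return wall_ms, body["responseHeader"]["QTime"], overhead_ms(wall_ms, body)

# Hypothetical local core name; adjust to your setup:
# print(query_overhead_ms("http://localhost:8983/solr/mycore/select?q=*:*&wt=json"))
```

Running this over many queries and looking at the overhead distribution tells you whether the gap is a constant per-request cost (serialization, HTTP) or grows with response size (stored-field streaming, as the FAQ describes).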

Is there a max number of requests an ASP.NET MVC 3 application can handle, and what happens when it's exceeded?

I have an application where you can select entries in a table to be updated. You can select 50, hit 'send it', and it will send all 50 as 50 individual ajax calls, each of which calls controller X, which updates database table Y. A user said they selected 25, but I see 12 in the logs and no trace of the other 13. I was wondering if maybe I hit some limit on the number of requests or the length of the queue. Any ideas?
(Running this locally, it had no problem with 50-100 being sent at once; it just took about 2 seconds for all of them to finally call back.)
This question cannot really be answered definitively, as there's no "hard limit" that would cause your ajax calls to be lost. However, overloading the server (if you only have one) can cause requests to queue up and/or take a long time, which in turn may cause requests to time out (depending on your settings). Chances are, if you are making 50 ajax calls instead of just one, there is room for optimization there, and I'm not sure that batching them would count as premature optimization.
In the end, I think the most useful advice you will get is to profile and load test your code and hardware to be sure; otherwise we're all just guessing.
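When requests silently go missing, a small harness that fires the same N calls and tallies per-request outcomes on the client side tells you whether the loss is client- or server-side. A sketch, with send_update standing in for the real ajax call (stubbed here so the example is self-contained; swap in an actual HTTP call and keep the per-id bookkeeping):

```python
from concurrent.futures import ThreadPoolExecutor

def tally_requests(send_update, ids):
    """Fire one request per id concurrently; return (succeeded, failed) id lists."""
    succeeded, failed = [], []
    with ThreadPoolExecutor(max_workers=len(ids)) as pool:
        futures = {pool.submit(send_update, i): i for i in ids}
        for fut, i in futures.items():
            try:
                fut.result(timeout=30)  # surfaces timeouts/exceptions per request
                succeeded.append(i)
            except Exception:
                failed.append(i)
    return succeeded, failed

# Stub: pretend every third update is dropped by the server.
def send_update(i):
    if i % 3 == 0:
        raise RuntimeError("simulated lost request")
    return i

ok, lost = tally_requests(send_update, list(range(25)))
# len(ok) + len(lost) == 25; the client now knows exactly which ids failed
```

If the browser-side calls are fire-and-forget with no error handler, failures and timeouts vanish without a trace, which would produce exactly the "25 selected, 12 logged" symptom.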

AppEngine: How do I get the sequence of datastore write events?

I need a sequencer for the entire application's data.
Using a counter entity is a bad idea (5 writes per second limit), and sharded counters are not an option.
A GMT timestamp seems unsafe due to clock variance between servers, plus the possibility of a server's time being set/reset.
Any ideas?
How do I get an entity property which I can query to find all entities changed since a given value?
TIA
Distributed datastores such as the App Engine datastore don't have a global sequence - there's literally no way to determine whether entity A was written to server A' before entity B was written to server B' if those events occur sufficiently close together, unless you have a single machine mediating and serializing all transactions, which places a hard upper bound on how scalable your system can be.
For your actual practical problem, the easiest solution is to assign a modification timestamp to each record and, each time you need to sync, look for records newer than (that timestamp) - (epsilon), where epsilon is a short interval longer than the expected clock difference between servers (something like 10 seconds should be ample). Your client can then discard any duplicate records it receives.
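That sync loop can be sketched datastore-agnostically. Records here are plain dicts with an "updated" timestamp and an "id"; seen_ids absorbs the duplicates that the epsilon overlap deliberately re-fetches (the field names are illustrative, not an App Engine API):

```python
EPSILON = 10.0  # seconds; should exceed worst-case clock skew between servers

def sync_changes(records, last_sync_ts, seen_ids):
    """Return records modified since last_sync_ts - EPSILON, skipping ids
    already processed; newly returned ids are added to seen_ids."""
    cutoff = last_sync_ts - EPSILON
    fresh = []
    for rec in records:
        if rec["updated"] >= cutoff and rec["id"] not in seen_ids:
            fresh.append(rec)
            seen_ids.add(rec["id"])
    return fresh

records = [
    {"id": 1, "updated": 100.0},
    {"id": 2, "updated": 95.0},   # inside the epsilon window
    {"id": 3, "updated": 80.0},   # too old, already synced long ago
]
seen = {1}                         # id 1 was handled on the previous sync
new = sync_changes(records, last_sync_ts=100.0, seen_ids=seen)
# new contains only the record with id 2
```

The overlap window trades a little redundant reading for correctness: a record written by a slow-clocked server just before the last sync is still picked up on the next pass instead of being lost forever.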

Transactional counter with 5+ writes per second in Google App Engine datastore

I'm developing a tournament version of a game where I expect 1000+ simultaneous players. When the tournament begins, players will be eliminated quite fast (possibly more than 5 per second), but the process will slow down as the tournament progresses. Depending on when a player is eliminated from the tournament, a certain number of points is awarded. For example, a player who drops first gets nothing, while the player who is 500th receives 1 point, and the first-place winner receives, say, 200 points. Now I'd like to award and display the points right away after a player has been eliminated.
The problem is that when I push a new row into the datastore after a player has been eliminated, the row entity has to be in a separate entity group so I don't hit the GAE datastore limit of 1-5 writes per second per entity group. I also need to be able to read and write a count of rows consistently so I can determine the prize correctly for all players that get eliminated.
What would be the best way to implement the datamodel to support this?
Since there's a limited number of players, contention of a few writes a second is not likely to be sustained for very long, so you have two options:
Simply ignore the issue. Clusters of eliminations will occur, but as long as it's not a sustained situation, the retry mechanics for transactions will ensure they all get executed.
When someone goes out, record this independently, and update the tournament status, assigning ranks, asynchronously. This means you can't inform them of their rank immediately; instead you need to send an asynchronous reply or have them poll for it.
I would suggest the former, frankly: even if half your 1000-person tournament went out in the first 5 minutes - a preposterously unlikely event - you're still looking at fewer than 2 eliminations per second. In reality, any spikes will be smaller and shorter-lived than that.
One thing to bear in mind is that, due to how transaction retries work, transactions on the same entity group that occur close together will be resolved in semi-random order - that is, it's not a strict FIFO queue. If you require that, you'll have to enforce it yourself, though that's far from trivial in a distributed system of any sort.
The existing comments and answers address the specific question pretty well.
At a higher level, take a look at this post and open-source library from the Google Code Jam team. They had a similar problem and ended up developing a scalable scoreboard based on the datastore that handles both updates and requests for arbitrary pages efficiently.
