Maximum number of requests for SkyDrive?

What is the maximum number of requests for SkyDrive?
I need the limits for requests per second and requests per day.
For example, for GDrive the maximum number of requests per second is 10 QPS, and the maximum number of requests per day is 10,000,000 QPD across the account.

Related

Batch Apex callout limits

Does the callout limitation depend on the number of times the execute method is invoked in a batch class?
I have read that it depends on the number of callouts per execute method, so we should use a batch size of 1 if we have to utilize the maximum of 100 callouts. But if we have 25,000 records and the batch size is 1, will it reach the maximum limit for callouts?
In a batch job, every start, execute, and finish gets a fresh set of governor limits because they're separate transactions. So you get 100 callouts per execute. You still must complete them within 120 seconds, but that's a different limit.
I'm not aware of any limit on how many callouts you can make within 24 hours, so probably there isn't one. There's this limit, though:
The maximum number of asynchronous Apex method executions (batch Apex, future methods, Queueable Apex, and scheduled Apex) per a 24-hour period: 250,000 or the number of user licenses in your org multiplied by 200, whichever is greater.
You will have to balance it all out. If each callout takes 1 second (so you can fit all 100 within the 120-second limit), feel free to set the batch size to 1 and really use 250K executes. I can't imagine what functionality would require 100 web service calls to process one record, but in theory you can.
If you need to process more than 250K records daily, increase the batch size, but then your grand total of possible callouts goes down.
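A rough back-of-the-envelope calculation of that trade-off, sketched in Java (the 25,000 records and batch size of 1 come from the question; the only other inputs are the two limits quoted above):

// Back-of-the-envelope math for the batch-size vs. callout trade-off.
// Limits used: 100 callouts per transaction, 250,000 async executions per 24 h.
public class BatchCalloutMath {
    public static void main(String[] args) {
        long records = 25_000;                // records to process (from the question)
        long batchSize = 1;                   // records per execute transaction
        long calloutsPerTransaction = 100;    // governor limit per transaction
        long dailyAsyncExecutions = 250_000;  // daily async execution limit

        long executions = (records + batchSize - 1) / batchSize;   // 25,000 execute transactions
        long calloutBudget = executions * calloutsPerTransaction;  // up to 2,500,000 callouts

        System.out.println("Execute transactions needed: " + executions);
        System.out.println("Fits the daily async limit: " + (executions <= dailyAsyncExecutions));
        System.out.println("Total callouts available across the run: " + calloutBudget);
    }
}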

Increasing Requests Per Second in Gatling

I'm trying to increase requests per second in Gatling with fewer users (each user created will send API requests in a loop). I achieved 300 RPS with 35 users. However, even if I increase the users to 70 or 150, I cannot get a higher RPS than 300. With an increased user count, the RPS is more than 300 for the first few seconds but later just hovers around 300.
I tried both atOnceUsers and rampUsers, but still couldn't achieve a higher RPS.
Is there any way to increase RPS with fewer users?
You need to find out where your constraint is: look at the results coming back from the simulation and examine how the response time changes with the number of requests. You may be being throttled by the application under test.
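One way to see where a plateau like this comes from is Little's Law for a closed-loop model (each virtual user waits for a response before sending its next request): throughput ≈ concurrent users / response time. A small illustrative calculation in Java, using only the 35 users / 300 RPS figures from the question (the doubled latency is a hypothetical saturation scenario, not a measurement):

// Illustrative Little's Law arithmetic for a closed-loop load model:
// throughput (RPS) ~= concurrent users / response time (seconds).
public class RpsPlateau {
    public static void main(String[] args) {
        double users = 35;
        double observedRps = 300;
        double impliedLatencySeconds = users / observedRps; // ~0.117 s per request

        System.out.printf("Implied latency at %d users: %.3f s%n",
                (int) users, impliedLatencySeconds);

        // If doubling the users also doubles the response time (the server is
        // saturated), the throughput stays flat instead of rising.
        double usersDoubled = 70;
        double saturatedLatency = impliedLatencySeconds * 2; // hypothetical saturation
        System.out.printf("Throughput at %d users if latency doubles: %.0f RPS%n",
                (int) usersDoubled, usersDoubled / saturatedLatency);
    }
}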

Configuring hystrix thread pool for a system with high RPS and 99%ile response time ~500ms

As per the Hystrix docs, the thread pool should be configured using the formula below:
ThreadPoolSize = requests per second at peak when healthy × 99th percentile latency in seconds + some breathing room
We have an external service whose 99th percentile response time is ~500 ms and whose expected RPS is 200, so with the formula the thread pool size comes out to be 100 (200 × 0.5). I don't think it is a good idea to configure a thread pool with such a high number.
Please suggest what can be done in this scenario. What should be the ideal values for the thread pool size and queue?
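For illustration only, this is how the formula's output (200 RPS × 0.5 s ≈ 100 threads) would map onto Hystrix's Setter-based thread pool configuration; the queue values below are placeholder assumptions, not recommendations:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixThreadPoolProperties;

// Sketch of applying the quoted sizing formula to a command's thread pool.
public class ExternalServiceCommand extends HystrixCommand<String> {

    public ExternalServiceCommand() {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("ExternalService"))
                .andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()
                        .withCoreSize(100)                       // 200 RPS x 0.5 s p99 latency
                        .withMaxQueueSize(20)                    // bounded queue for short bursts (assumption)
                        .withQueueSizeRejectionThreshold(20)));  // reject work beyond that depth
    }

    @Override
    protected String run() throws Exception {
        return callExternalService(); // hypothetical call to the external service
    }

    private String callExternalService() {
        return "ok"; // placeholder
    }
}

Whether a core size of 100 is really too large mostly depends on how many such pools share the JVM; a smaller pool with a bounded queue simply means more requests are queued or rejected at peak.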

App Engine: Can I set a req/min rate instead of req/sec in queue.yaml?

Google has a 15000/min limit on the number of reads and writes. To stay under this limit, I calculated 15000/min == 250/sec, so my queue config is:
name: mapreduce-queue
rate: 200/s
max_concurrent_requests: 200
Can I directly set a rate of 15000/min in queue.yaml? I used 200/s rather than the full 250/s to leave room for bursts. Also, I feel like I should not need the max_concurrent_requests limit at all.
Yes, you can. However, use 15000/m instead of 15000/min.
From the docs:
rate (push queues only)
How often tasks are processed on this queue. The value is a number followed by a slash and a unit of time, where the unit is s for seconds, m for minutes, h for hours, or d for days. For example, the value 5/m says tasks will be processed at a rate of 5 times per minute.
If the number is 0 (such as 0/s), the queue is considered "paused," and no tasks are processed.
and
max_concurrent_requests (push queues only)
Sets the maximum number of tasks that can be executed at any given time in the specified queue. The value is an integer. By default, this directive is unset and there is no limit on the maximum number of concurrent tasks. One use of this directive is to prevent too many tasks from running at once or to prevent datastore contention.
Restricting the maximum number of concurrent tasks gives you more control over your queue's rate of execution. For example, you can constrain the number of instances that are running the queue's tasks. Limiting the number of concurrent requests in a given queue allows you to make resources available for other queues or online processing.
It seems to me that for your situation, max_concurrent_requests is something you don't want to leave out.
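Putting both points together, the queue definition from the question would become something like this (the concurrency cap of 200 is simply carried over from the original config, not a recommendation):

name: mapreduce-queue
rate: 15000/m
max_concurrent_requests: 200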

How to aggregate log records by time

I have a huge log file containing log messages prefixed with a timestamp. The timestamp has microsecond precision. I want to find the 10-second time window in which the highest number of messages were logged. How can I do that?
You'd need to slurp in the file line by line, figure out which 10s period each timestamp is in, and keep track of which timestamp range had the biggest "member" count.
You don't specify which language, so I'll just use pseudocode:
read a line
extract/convert the timestamp to a 10-second interval number
if this timestamp falls outside the current interval:
    if the current interval's count is bigger than the best interval recorded so far, make the current interval the new "best" interval
    start a new interval counter
increment the counter of the current interval for this line
repeat until the file has been consumed
compare the final interval's count as well, then spit out the interval with the biggest membership count
You might first aggregate your log file into one-second intervals, then find the 10-second run with the highest total count in those numbers.
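A minimal sketch of that bucketing approach in Java, assuming each line starts with an epoch timestamp in microseconds followed by a space and that the file is called app.log (both are assumptions; the question doesn't give the exact format):

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.TreeMap;

public class BusiestWindow {
    public static void main(String[] args) throws IOException {
        // Count messages per whole second, keyed by epoch second.
        TreeMap<Long, Long> perSecond = new TreeMap<>();
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("app.log"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                int space = line.indexOf(' ');
                if (space < 0) continue; // skip lines without the expected prefix
                long micros = Long.parseLong(line.substring(0, space));
                perSecond.merge(micros / 1_000_000, 1L, Long::sum);
            }
        }

        // Slide a 10-second window over the per-second counts and keep the best one.
        long bestStart = 0, bestCount = -1;
        for (long start : perSecond.keySet()) {
            long count = perSecond.subMap(start, start + 10).values()
                    .stream().mapToLong(Long::longValue).sum();
            if (count > bestCount) {
                bestCount = count;
                bestStart = start;
            }
        }
        System.out.println("Busiest 10 s window starts at second " + bestStart
                + " with " + bestCount + " messages");
    }
}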
