What is workload throttling? - database

Could somebody give a good explanation for a newbie of what the following phrases mean:
1) workload throttling within a single cluster, and 2) workload
balance across multiple clusters.
This is from an overview of the advantages of an ETL tool that helps perform ETL (Extract, Transform, Load) jobs on an Amazon Redshift database.

Many web services allocate a maximum amount of "interaction" that you can have with the service. Once you exceed that amount, the service shifts how it completes its interactions.
Amazon imposes limits on how much compute power you can consume within your nodes. The phrase "workload throttling" means that if you exceed the limits detailed in Amazon's documentation, Amazon Redshift Limits, your queries, jobs, tasks, or work items will be given lower priority or fail outright.
The idea is that Amazon doesn't want you to consume so much compute power that it prevents others from using the service and, honestly, they don't want your usage to cost them more than they charge to provide it.
Workload throttling isn't an idea exclusive to this Amazon service, or to cloud services in general. The concept can be found in any system that needs to account for receiving more tasks than it can handle, and different systems deal with being overburdened in different ways.
For example, a load balancer may defer you to alternate servers. Third-party data APIs will allot you a maximum amount of data per hour or minute and then either delay the responses you get back, charge you more money, or stop responding altogether.
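As a rough illustration of how clients typically cope with throttling, here is a minimal sketch in Python of retrying a rate-limited request with exponential backoff (the endpoint is hypothetical):

    import time
    import requests

    def fetch_with_backoff(url, max_retries=5):
        """Retry a request with exponential backoff when the service throttles us."""
        delay = 1  # seconds
        for attempt in range(max_retries):
            response = requests.get(url, timeout=30)
            # 429 (Too Many Requests) and 503 (Service Unavailable) are common throttling signals
            if response.status_code not in (429, 503):
                return response
            # Honor the server's Retry-After hint if present (assuming a seconds value)
            retry_after = response.headers.get("Retry-After")
            time.sleep(float(retry_after) if retry_after else delay)
            delay *= 2  # back off more aggressively each time
        raise RuntimeError("Still throttled after %d attempts" % max_retries)

    # Hypothetical usage:
    # data = fetch_with_backoff("https://api.example.com/data").json()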
Another service that deals with throttling is the Google Maps Geocoding service. If you look at their documentation, Google Maps Geocoding API Usage Limits, you will see that:
Users of the standard API:
2,500 free requests per day, calculated as the sum of client-side and server-side queries.
50 requests per second, calculated as the sum of client-side and server-side queries.
If you exceed this and have billing enabled, Google will shift to:
$0.50 USD / 1000 additional requests, up to 100,000 daily.
I can't remember what the response looks like after you hit that daily limit, but once you hit it, you basically don't get responses back until the day resets.
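To make the pricing shift concrete, here is a small sketch of the arithmetic using the rates quoted above (illustrative only; check Google's current pricing page):

    def daily_geocoding_cost(requests_made, free_quota=2500, rate_per_1000=0.50, hard_cap=100000):
        """Estimate the daily bill for the standard Geocoding tier quoted above."""
        billable = min(requests_made, hard_cap) - free_quota
        if billable <= 0:
            return 0.0
        return (billable / 1000.0) * rate_per_1000

    # Example: 12,000 requests in one day
    # -> 9,500 billable requests -> about $4.75
    print(daily_geocoding_cost(12000))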

Related

Azure Search paging causes throttling

I am from the team that runs nuget.org, the package ecosystem for .NET. We use Azure Search to power our search API. Our APIs are public, so third-party customers can use them to analyze our ecosystem or make apps.
We recently had an outage caused by a single customer paging through our search documents using the $skip and $top query parameters in batches of 200 documents at a time. This resulted in Azure Search throttling:
Failed to execute request because the request rate has caused your service to exceed the limits of its provisioned capacity. Reduce the rate of requests, or adjust the number of replicas/partitions. See http://aka.ms/azure-search-throttling for more information.
Azure Search's throttling affected all customers in that region for 10 minutes, not just the single customer that was paging. We read through Azure Search's throttling documentation, but have the following questions:
Is customer paging with high $skip values particularly expensive for Azure Search?
What can we do to reduce the likelihood of Azure Search throttling for paging scenarios?
Should we add our own throttling to ensure a single customer's searches don't affect all other customers' searches? Does Azure Search have guidance on this?
Some more information about our service:
Number of documents in index: ~950K
Request volume: 1.3K paging requests in ~10 minutes. Peak of 125 requests per second, average of 6 requests per second
Scale: standard SKU, 1 partition, 3 replicas (this is our secondary region, hence the smaller scale to save money)
Deep paging is indeed a costly operation. Since Azure Search is designed to be distributed, all indexes are divided into multiple shards to allow for quick scaling operations. This comes with the downside that ranked results from each shard need to be merged and ranked to create a final list of results. The number of results to merge increases linearly with the skip value, so that step can become expensive when paging very deep into the results.
As a search service, Azure Search is optimized for quick retrieval of top documents based on textual relevance. It's unfortunately not the best tool for scenarios where a client simply wants to return a list of all documents in a data source.
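To see why the skip value dominates the cost, here is a rough sketch of the merge work involved (the shard count is purely an assumption for illustration; the real internals are more involved):

    def candidates_to_merge(skip, top, shards):
        """Each shard must produce roughly skip + top ranked candidates,
        which the service then merges to return just `top` documents."""
        return shards * (skip + top)

    # Paging in batches of 200 over ~950K documents, assuming e.g. 12 shards:
    print(candidates_to_merge(skip=200, top=200, shards=12))     # early pages: ~4,800 candidates
    print(candidates_to_merge(skip=900000, top=200, shards=12))  # deep pages: ~10.8 million candidates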
From what I understand in your post, there are two reasons for the throttling:
High skip values
Sharp increase in QPS
We encourage you to control both. It is not uncommon for our customers to implement their own throttling logic to prevent their own customers from emitting an abnormally large number of requests. Even without high skip values, having a single customer send enough queries to increase the traffic multiple-fold can lead to throttling (I'm not sure if that was the case here). There are no official guidelines on how to handle queries coming from your client apps. The best approach, in my opinion, would be for your team to run performance tests using realistic workloads to understand the limits of your search service (which depend on the index schema, number of documents, types of queries being emitted, etc.). Once you have a good idea of how many QPS your service can handle for your scenarios, you can decide how much of that QPS you are willing to allocate to a single customer at a time, and enforce a limit based on that.
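As one possible shape for such a limit, here is a minimal sketch of a per-customer token-bucket limiter in Python (the rate and burst numbers are placeholders you would derive from your own performance tests):

    import time
    from collections import defaultdict

    class PerCustomerLimiter:
        """Allow each customer a steady query rate with a small burst allowance."""
        def __init__(self, rate_per_sec=5, burst=20):
            self.rate = rate_per_sec
            self.burst = burst
            self.tokens = defaultdict(lambda: burst)
            self.last = defaultdict(time.monotonic)

        def allow(self, customer_id):
            now = time.monotonic()
            elapsed = max(0.0, now - self.last[customer_id])
            self.last[customer_id] = now
            # Refill tokens based on elapsed time, capped at the burst size
            self.tokens[customer_id] = min(self.burst, self.tokens[customer_id] + elapsed * self.rate)
            if self.tokens[customer_id] >= 1:
                self.tokens[customer_id] -= 1
                return True
            return False  # caller should answer this customer with HTTP 429

    # limiter = PerCustomerLimiter(rate_per_sec=5, burst=20)
    # if not limiter.allow(api_key): return a 429 Too Many Requests response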
Regarding the deep paging cost: if paging through all documents of a search index is a common scenario for your customers, I would recommend you expose a way to page through all documents directly from the data source (assuming Azure Search is not the primary data store of the documents), and use Azure Search mostly for relevance-related retrieval scenarios.
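If the primary store is a relational database, one hedged sketch of "page through the data source" is keyset pagination, which stays cheap no matter how deep the client goes (the table and column names and the pyodbc-style cursor are assumptions):

    def page_all_documents(cursor, page_size=200):
        """Yield every document ordered by a unique key, avoiding OFFSET-style skips."""
        last_id = ""
        while True:
            cursor.execute(
                "SELECT TOP (?) Id, Payload FROM Packages WHERE Id > ? ORDER BY Id",
                (page_size, last_id),
            )
            rows = cursor.fetchall()
            if not rows:
                break
            yield from rows
            last_id = rows[-1][0]   # resume after the last key we saw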

Throttling limits for Azure Search

I'm looking for throttling information and this is the best that I've been able to find so far: https://learn.microsoft.com/en-us/azure/search/search-limits-quotas-capacity#throttling-limits
For doing a search
https://{{search-service}}.search.windows.net/indexes/:index/docs?api-version={{version}}&search=some text
Is this line from the reference page above the limit that applies to searches?
Get Index (GET /indexes/myindex): 10 per second per search unit
I'm trying to see what the limit is for searching only, under the ideal scenario of nothing else happening, such as an indexer running.
Some APIs such as GET /indexes are throttled based on simple rate limits. However, queries and indexing requests do not work this way. In the case of those APIs, throttling happens dynamically based on resource availability. If the system's internal queues start to fill, requests will begin to fail with 503 (Service Unavailable). If enough such failures happen within a discrete period of time (calculated as an average over a rolling window), the service will throttle requests in order to relieve pressure and allow the system to recover.
The reason throttling works this way instead of based on static rate limits is that most Azure Cognitive Search pricing tiers (other than Free) give you dedicated capacity. Static rate limits could artificially limit how you use your own capacity, so instead throttling dynamically applies backpressure as a way to ensure the reliability of the service when its capacity is overloaded.
For more information about testing and performance tuning Azure Cognitive Search, see this article.
For Azure Search, there are two kinds of APIs: Query APIs (Search/Suggest/Autocomplete) and Index APIs.
The one you mentioned belongs to the Index APIs:
Get Index (GET /indexes/myindex): 10 per second per search unit
If you want to know the Query API (searching) limit, i.e. the QPS limit, this doc will be helpful:

10,000 HTTP requests per minute performance

I'm fairly experienced with web crawlers; however, this question is about performance and scale. I need to request and crawl 150,000 URLs on an interval (most URLs are every 15 minutes, which works out to about 10,000 requests per minute). These pages have a decent amount of data (around 200 KB per page). Each of the 150,000 URLs exists in our database (MSSQL) with a timestamp of the last crawl date and an interval so we know when to crawl again.
This is where we get an extra layer of complexity. They do have an API which allows for up to 10 items per call. The information we need exists partially only in the API, and partially only on the web page. The owner is allowing us to make web calls and their servers can handle it; however, they cannot update their API or provide direct data access.
So the flow should be something like: get 10 records from the database whose intervals have passed and need to be crawled, then hit the API. Then each item in the batch of 10 needs its own separate web request. Once the request returns the HTML, we parse it and update the records in our database.
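To make the flow concrete, a minimal sketch in Python (the API URL and the three helper callables are placeholders; the real implementation would live in our .NET stack):

    import requests

    def crawl_cycle(claim_due_records, parse_page, update_record,
                    api_base="https://api.example.com"):
        """One pass: claim 10 due records, call the batch API, then fetch and parse each page."""
        records = claim_due_records(batch_size=10)        # must mark the rows as checked out in the DB
        if not records:
            return
        ids = ",".join(r["external_id"] for r in records)
        api_data = requests.get(f"{api_base}/items", params={"ids": ids}, timeout=30).json()
        for record in records:
            html = requests.get(record["url"], timeout=30).text   # ~200 KB per page
            update_record(record, api_data, parse_page(html))     # write fields + next crawl time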
I am interested in getting some advice on the correct way to handle the infrastructure. Assuming a multi-server environment, some business requirements:
Once a URL record is ready to be crawled, we want to ensure it is only grabbed and run by a single server. If two servers check it out simultaneously and run it, it can corrupt our data.
The workload can vary; currently it is 150,000 URL records, but that can go much lower or much higher. While I don't expect more than a 10% change per day, having some sort of auto-scale would be nice.
After each request returns the HTML we need to parse it and update records in our database with the individual data pieces. Some host providers allow free incoming data but charge for outgoing. So ideally the code base that requests the webpage and then parses the data also has direct SQL access. (As opposed to a micro-service approach)
Something like a multi-server blocking collection (Azure queue?), autoscaling VMs that poll the queue, and a single database host server that is also queried by an MVC app that displays data to users.
Any advice or critique is greatly appreciated.
Messaging
I echo Evandro's comment and would explore Service Bus Message Queues or Event Hubs for loading a queue to be processed by your compute nodes. Message Queues support message locking, which, based on your write-up, might be attractive.
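For illustration, a minimal sketch of the peek-lock pattern with the Python Service Bus SDK (the queue name, connection string, and handler are placeholders); while one worker holds the lock, other workers won't receive the same message:

    from azure.servicebus import ServiceBusClient

    CONN_STR = "<service-bus-connection-string>"   # placeholder

    def process_batch():
        with ServiceBusClient.from_connection_string(CONN_STR) as client:
            # Default receive mode is peek-lock: the message stays invisible to other
            # workers until we complete or abandon it, or the lock expires.
            with client.get_queue_receiver(queue_name="crawl-urls") as receiver:
                for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
                    try:
                        handle_url(str(msg))              # placeholder: crawl + write to SQL
                        receiver.complete_message(msg)    # removes it from the queue
                    except Exception:
                        receiver.abandon_message(msg)     # releases the lock for a retry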
Compute Options
I also agree that Azure Functions would provide a good platform for scaling your compute/processing operations (calling the API and scraping the HTML). In addition, Azure Functions can be triggered by Message Queues, Event Hubs, or Event Grid. [Note: Event Grid allows you to connect various Azure services (pub/sub) with durable messaging, so it might play a helpful middle-man role in your scenario.]
Another option for compute could be Azure Container Instances (ACI), as you could spin up containers on demand to process your records. This does not have the same auto-scaling capability that Functions does, though, and it also does not support the direct binding operations.
Data Processing Concern (Ingress/Egress)
Indeed Azure does not charge for data ingress but any data leaving Azure will have an egress charge after the initial 5 GB each month. [https://azure.microsoft.com/en-us/pricing/details/bandwidth/]
You should be able to have the Azure Functions handle calling the API, scraping the HTML, and writing to the database. You might have to break those up into separate Functions, but you can chain Functions together easily, either directly or with Logic Apps.

Google App Engine, how to split fees per datastore namespace

I'd like to make a GAE app multi-tenant to cater to different clients (companies); datastore namespaces seem like a GAE-endorsed solution. Is there a meaningful way to split GAE fees among clients/namespaces? GAE costs for an app depend mainly on user activity (backend instance uptime), because new instances are created or (after a 15-minute delay) terminated in proportion to server load, not the total volume of data a user has or has created. Ideally, the way the fees are split should be meaningful and explainable to the clients.
I guess the fairest fee-splitting solution is just to create a new app for each new client, so all costs are reported separately, but the total cost will grow; I expect a few apps running on the same instances would use server resources more economically.
Every app engine request is logged with a rough estimated cost measurement. It is possible to log the namespace/client associated with every request and query the logs to add up the estimated instance costs for that namespace. Note that the estimated cost field is deprecated and may be inaccurate. It is mostly useful as a rough guide to the proportion of instance cost associated with each client.
As far as datastore pricing goes, the cloud console will tell you how much data has been stored in each namespace, and you can calculate costs from that. For reads/writes, we have set up a logging system to help us track reads and writes per namespace (i.e. every request tracks the number of datastore reads and writes it does in each namespace and logs these numbers at the end of the request).
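For illustration, a hedged sketch of the kind of per-request bookkeeping described above (the class and method names are ours, not an App Engine API); a wrapper around datastore calls increments the counters and flushes one log line at the end of each request:

    import logging
    from collections import Counter

    class NamespaceOpCounter:
        """Accumulates datastore reads/writes per namespace for the current request."""
        def __init__(self):
            self.reads = Counter()
            self.writes = Counter()

        def record_read(self, namespace, n=1):
            self.reads[namespace] += n

        def record_write(self, namespace, n=1):
            self.writes[namespace] += n

        def flush(self):
            # One log line per request; a log-analysis job sums these per namespace later.
            for ns in set(self.reads) | set(self.writes):
                logging.info("datastore_ops namespace=%s reads=%d writes=%d",
                             ns, self.reads[ns], self.writes[ns])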
The bottom line is that with some investments into infrastructure and logging, it is possible to roughly track costs per namespace. But no, App Engine does not make this easy, and it may be impossible to calculate very accurate cost estimates.

How to estimate hosting services cost on GAE?

I'm building a system which I plan to deploy on Google App Engine. Current pricing is described here:
Google App Engine - Pricing and Features
I need an estimate of cost per client managed by the web app. The cost won't be very accurate until I have completed the development. GAE uses such fine-grained price calculations (such as READs and WRITEs) that it becomes a very daunting task to estimate operation cost per user.
I have an agile development process, which leaves me even more clueless about determining my cost. I've been using my user stories to create a cost baseline per user story. Then I roughly estimate how the user will execute each story's workflow to finally compute a simplistic estimate.
As I see it, computing estimates for the Datastore API is overly complex for a startup project. The other costs are a bit easier to grasp. Unfortunately, I need to give an approximate cost to my manager!
Has anyone undergone such a task? Any pointers would be great, regarding tools, examples, or any other related information.
Thank you.
Yes, it is possible to do a cost estimate analysis for App Engine applications. Based on my experience, the three major areas of cost that I encountered while doing my analysis were the instance hour cost, the datastore read/write cost, and the datastore stored data cost.
YMMV based on the type of app that you are developing, of course. If it is an intense OLTP application that handles simple but frequent CRUD operations on your data records, most of the cost will be on the datastore read/write operations, so I would suggest starting your estimate with this resource.
For datastore reads/writes, writing is generally much more expensive than reading the data. This is because the write cost takes into account not only the cost to write the entity, but also the cost to write all the indexes associated with the entity. I would suggest you read an article by Google about the life of a datastore write, especially the part about the Apply phase, to understand how to calculate the number of writes per entity based on your data model.
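As a rough sketch of that arithmetic (the cost factors below reflect the historical costing of roughly 2 writes per new entity, plus 2 per indexed property value, plus 1 per composite index entry; treat them as assumptions and verify against the current pricing docs):

    def writes_for_new_entity(indexed_properties, composite_index_entries=0,
                              base=2, per_property=2, per_composite=1):
        """Rough count of datastore write operations for putting one new entity.
        The cost factors are assumptions taken from the legacy pricing model."""
        return base + per_property * indexed_properties + per_composite * composite_index_entries

    # e.g. an entity with 5 indexed properties and 1 composite index entry:
    print(writes_for_new_entity(5, 1))   # -> 13 write operations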
To estimate the instance hours that you would need, the simplest approach (but not always feasible) would be to deploy a simple app to test how long a particular request takes. If this approach is undesirable, you might also base your estimate on the Google App Engine System Status page (e.g. what the latency would be for a datastore write of a particularly sized entity) to get a (very) rough picture of how long it would take to process your requests.
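A back-of-the-envelope sketch of turning a measured request latency into instance hours (every input here is an assumption to be replaced with your own measurements):

    def estimated_instance_hours_per_day(requests_per_day, avg_request_seconds,
                                         utilization=0.7):
        """Very rough: total busy seconds divided by seconds per hour,
        padded because an instance is never 100% utilized."""
        busy_hours = (requests_per_day * avg_request_seconds) / 3600.0
        return busy_hours / utilization

    # e.g. 50,000 requests/day at ~150 ms each, assuming ~70% utilization:
    print(round(estimated_instance_hours_per_day(50000, 0.15), 1))   # ~3.0 instance hours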
The third major area of cost, in my opinion, is the datastore stored data cost. This will vary based on your data model, of course, but any estimate you make needs to also take into account the storage taken up by the entity indexes. Taking a quick glance at the datastore statistics page, I think the indexes could increase the storage size by between 40% and 400%, depending on how many indexes you have for the particular entity.
Remember that most costs are an estimation of real costs. The definite source of truth is here: https://cloud.google.com/pricing/.
A good tool to estimate your cost for App Engine is this awesome Chrome Extension: "App Engine Offline Statistics Estimator".
You can also check out the AppStats package (to infer costs from within the app via API).
Recap:
Official Appengine Pricing
AppStats for Python
AppStats for Java
Online Estimator (OSE) Chrome Extension
You can use the pricing calculator
https://cloud.google.com/products/calculator/
