How can I achieve LogicApp -> APIM -> SFTP? - azure-logic-apps

We'd like to pick up a file using this flow: LogicApp -> APIM -> SFTP. We want APIM in the middle because it lets us present a static IP that the SFTP server can whitelist. I understand how we can restrict/monitor calls to the Logic App via APIM, but I need help with the other direction.
Thanks

We can restrict/monitor the call rate to a specified number of calls per time period using the rate-limit policy, which prevents API usage spikes on a per-subscription basis.
Set the number of calls and the renewal period (in seconds) using the Inbound processing option.
The caller receives a 429 Too Many Requests response status code if the call rate is exceeded.
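A minimal rate-limit policy in the API's inbound section might look like this (the calls and renewal-period values here are example numbers to adapt, not recommendations):

```xml
<policies>
    <inbound>
        <base />
        <!-- Allow at most 100 calls per subscription per 60-second window -->
        <rate-limit calls="100" renewal-period="60" />
    </inbound>
</policies>
```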


Regarding the API rate limit: is it for a single app or a single user?

The Coinbase API documentation describes:
"By default, each API key or app is rate limited at 10,000 requests per hour. If your requests are being rate limited, HTTP response code 429 will be returned with a rate_limit_exceeded error."
[Question] I'd like to know whether the current API restrictions are for a single app or a single user.
Thanks in advance
As you already mentioned, the limit is linked to the API key.
So if you have multiple apps using the same API key (ideally you should not), the limit applies to the cumulative calls from all the apps. If separate apps use different keys, the limit applies to each app individually.
In addition to Anupam's answer, if you use the Exchange API there are different rates, and the limits are applied by IP.
REST API
When a rate limit is exceeded, a status of 429 Too Many Requests will be returned.
Public endpoints
We throttle public endpoints by IP: 10 requests per second, up to 15 requests per second in bursts. Some endpoints may have custom rate limits.
Private endpoints
We throttle private endpoints by profile ID: 15 requests per second, up to 30 requests per second in bursts. Some endpoints may have custom rate limits.
/fills endpoint has a custom rate limit of 10 requests per second, up to 20 requests per second in bursts.
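To stay under limits like these on the client side, a token bucket is the usual model: tokens refill at the steady rate, and the bucket capacity is the burst allowance. A minimal sketch (not an official Coinbase client; the numbers mirror the public-endpoint limit of 10 req/s steady, 15 burst):

```javascript
// Token-bucket rate limiter: refills at ratePerSec, holds at most `burst`
// tokens, and each request consumes one token.
class TokenBucket {
  constructor(ratePerSec, burst) {
    this.ratePerSec = ratePerSec;
    this.capacity = burst;
    this.tokens = burst;
    this.last = Date.now();
  }
  // Returns true if a request may be sent now, false if it must wait.
  tryRemove() {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at the burst capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.ratePerSec
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// 20 back-to-back attempts: only the burst capacity of 15 goes through.
const bucket = new TokenBucket(10, 15);
let allowed = 0;
for (let i = 0; i < 20; i++) {
  if (bucket.tryRemove()) allowed++;
}
console.log(allowed); // → 15
```

Requests that fail `tryRemove()` should be queued or delayed rather than sent, since the server will answer them with 429 anyway.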

For Cloud Run triggered from PubSub, when is the right time to send ACK for the request message?

I was building a service that runs on Cloud Run and is triggered by Pub/Sub through Eventarc.
Pub/Sub guarantees at-least-once delivery and retries a message every time its acknowledgement deadline expires. This deadline is set in the subscription details.
There are two points at which we could send an acknowledgement back when the service receives a Pub/Sub request (which arrives as a POST request to the service):
1. At the beginning of the request, as soon as it is received. The service would then continue processing the request at its own pace. However, this article points out that
When an application running on Cloud Run finishes handling a request, the container instance's access to CPU will be disabled or severely limited. Therefore, you should not start background threads or routines that run outside the scope of the request handlers.
So sending a response at the beginning may not be an option.
2. After the request has been processed by the service. This means that, depending on what the service does, we cannot always predict how long processing will take. Hence we cannot set the acknowledgement deadline correctly, resulting in Pub/Sub retries and duplicate requests.
So what is the best practice here? Is there a better way to handle this?
Best practice is generally to ack a message once the processing is complete. In addition to the Cloud Run limitation you linked, consider that if the endpoint acked a message immediately upon receipt and then an error occurred in processing it, your application could lose that message.
To minimize duplicates, you can set the ack deadline to an upper bound of the processing time. (If your endpoint ends up processing messages faster than this, the ack deadline won’t rate-limit incoming messages.) If the 600s deadline is not sufficient, you could consider writing the message to some persistent storage and then acking it. Then, a separate worker can asynchronously process the messages from persistent storage.
Since you are concerned that you might not be able to set the correct acknowledgement deadline, you can use modify_ack_deadline() in your code to dynamically extend the deadline while processing is still running. You can refer to this document for sample code implementations.
Be aware that the maximum acknowledgement deadline is 600 seconds, so make sure your processing in Cloud Run does not exceed that limit.
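The extend-while-working pattern can be sketched as a small helper. Here `modAck` is a stand-in for whatever your client library provides to modify the deadline (for example, `message.modAck(seconds)` in the Node.js Pub/Sub client); it is injected as a callback so the helper stays self-contained:

```javascript
// Periodically extend the ack deadline while long-running work is in flight,
// then stop extending once the work settles (success or failure).
async function processWithLease(modAck, work, extendEverySec = 30, deadlineSec = 60) {
  const timer = setInterval(() => modAck(deadlineSec), extendEverySec * 1000);
  try {
    return await work(); // the caller acks only after this resolves
  } finally {
    clearInterval(timer); // no more extensions once processing is done
  }
}
```

Remember the 600-second ceiling still applies: each extension resets the clock, but a single `modAck` call cannot push the deadline beyond the maximum.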
Acknowledgements do not apply to Cloud Run, because acks are for pull subscriptions, where a process continuously pulls from the Pub/Sub API.
To get events from Pub/Sub into Cloud Run, you use push subscriptions, where Pub/Sub makes an HTTP request to Cloud Run and waits for it to finish.
In this push scenario, Pub/Sub already knows it made you a request (you received the event), so it does not need an acknowledgement of receipt. However, if your service returns an error response code (e.g. HTTP 500), Pub/Sub will retry the request (and this behavior is configurable on the push subscription itself).
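In practice, then, the "ack" for a push endpoint is the HTTP status you return after processing: any 2xx means done, anything else triggers redelivery. A minimal sketch of such a handler (framework-free; `handleEvent` is a placeholder for your own processing):

```javascript
// Handle one Pub/Sub push delivery. Returns the HTTP status the endpoint
// should send: 2xx only after processing succeeds, 500 to request a retry.
async function pushHandler(body, handleEvent) {
  try {
    // Push payloads wrap the message; the data field is base64-encoded.
    const data = Buffer.from(body.message.data, 'base64').toString('utf8');
    await handleEvent(data);
    return 204; // success after the work is done => Pub/Sub stops redelivering
  } catch (err) {
    return 500; // failure => Pub/Sub retries per the subscription's retry policy
  }
}
```

Because redelivery can still happen (at-least-once semantics), `handleEvent` should be idempotent.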

Limits for auth\access token request in Azure AD

Are there any throttling limits for access token requests for one application in Azure Active Directory? I found an unofficial limit: 200 calls from one user per 30 seconds. Is this true?
In short: yes, you are right!
There is a concurrent API call limit, but the exact count is not officially specified. You can see this if you refer to the official docs.
This is called throttling. When throttling happens, you will encounter a 429 error code. You can check here.
Best Practice:
The officially suggested ways to handle the request limit are:
Reduce the number of operations per request.
Reduce the frequency of calls.
Avoid immediate retries, because all requests accrue against your usage limits.
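The "avoid immediate retries" advice usually means honoring the server's Retry-After value (when one is sent) and otherwise backing off exponentially. A generic sketch, where `sendRequest` is a stand-in for your token request returning `{ status, retryAfterSec }`:

```javascript
// Retry a throttled call with server-guided or exponential backoff instead
// of hammering the endpoint, since failed requests still count against quota.
async function callWithBackoff(sendRequest, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await sendRequest();
    if (res.status !== 429) return res; // not throttled: hand back the response
    // Prefer the server-provided Retry-After; fall back to 1s, 2s, 4s, ...
    const waitSec = res.retryAfterSec ?? 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, waitSec * 1000));
  }
  throw new Error('still throttled after ' + maxAttempts + ' attempts');
}
```

Caching tokens until near expiry, rather than requesting a new one per call, is the other big lever for staying under the limit.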
For more details you could check here.
Note: the call limit is also not specified in the Azure AD service limits and restrictions documentation.

How to ignore idle timeout from AWS ELB in the browser

I have an application where a user can upload a PDF using angular-file-upload.js
This library does not support file chunking: https://github.com/nervgh/angular-file-upload/issues/41
My elastic load balancer is configured to have an idle timeout of 10 seconds and other parts of the application depend on keeping this parameter.
The issue is if the file upload takes longer than 10 seconds the user receives a 504 Gateway Timeout in the browser and an error message. However, the file still reaches the server after some time.
How can I ignore or not show the user this 504 Gateway Timeout that comes from the ELB? Is there another way around this issue?
The issue you have is that an ELB will always close the connection unless it gets some traffic back from your server. See below from the AWS docs. The behaviour is the same for an ALB and a Classic Load Balancer.
By default, Elastic Load Balancing sets the idle timeout to 60 seconds for both connections. Therefore, if the instance doesn't send some data at least every 60 seconds while the request is in flight, the load balancer can close the connection. To ensure that lengthy operations such as file uploads have time to complete, send at least 1 byte of data before each idle timeout period elapses, and increase the length of the idle timeout period as needed.
So to get around this, you have two options:
Change the server processing to start sending some data back as soon as the connection is established, on an interval of less than 10 seconds.
Use another library for doing your uploads, or use vanilla javascript. There are plenty of examples out there, e.g. this one.
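The first option can be sketched as a small server-side helper: trickle a byte to the client on an interval shorter than the idle timeout while the slow work runs. Here `res` mimics a Node.js writable response, and the 5-second interval is an example chosen to stay under the 10-second timeout from the question:

```javascript
// Keep an ELB connection alive during long processing by writing a byte of
// padding on an interval shorter than the load balancer's idle timeout.
function processWithKeepAlive(res, work, intervalMs = 5000) {
  const timer = setInterval(() => res.write(' '), intervalMs); // 1 byte of traffic
  return work().finally(() => {
    clearInterval(timer); // stop the heartbeat once the work settles
    res.end('done');
  });
}
```

Note the padding bytes become part of the response body, so the client must tolerate (or strip) the leading whitespace.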
Edit: Third option
Thanks to @colde for making the valid point that you can simply work around your load balancer altogether. This has the added benefit of freeing up server resources that would otherwise be tied up with lengthy uploads. In our implementation we used pre-signed URLs to achieve this securely.

Stackoverflow quota for a day

I'm using the Stack Overflow API in my Meteor application.
I'm calling the API directly from my code like below:
HTTP.call("GET", urlString, { params: { site: "stackoverflow" } }, function (error, result) {
  console.log(result.data);
});
I'm not using OAuth or a client ID/secret in my calls.
In the response, I get a variable called quota, with a maximum of 300.
Does that mean I can only call the API 300 times? I want more than that; I'm even ready to pay for it.
Is there a way to increase that number?
Thanks
You are throttled because you have not registered your application. If you register your application, you will receive a quota increase to 10,000 hits per day. You will want to read the authentication documentation on how to utilize the keys you receive from registering your application.
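Once registered, the key is passed as an extra query parameter on each request. A small helper for building the params object (the key value `YOUR_APP_KEY` is a placeholder for whatever registration gives you):

```javascript
// Build Stack Exchange API query params, attaching the registered app key
// when one is available; requests carrying a valid key get the higher quota.
function buildParams(site, key) {
  const params = { site };
  if (key) params.key = key; // key raises quota_max from 300 to 10,000
  return params;
}

// Usage with the HTTP.call from the question:
// HTTP.call("GET", urlString, { params: buildParams("stackoverflow", "YOUR_APP_KEY") }, ...)
```

You can confirm the increase by checking `quota_max` and `quota_remaining` in the API response wrapper.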