I have an MVC5 web app running on Azure. Primarily this is a website, but I also have a CRON job (triggered from an external source) which calls a URL (a GET request) to carry out some housekeeping.
This task is asynchronous and can take up to, and sometimes over, Azure's default timeout of 230 seconds. I'm regularly getting 500 errors because of this.
Now, I've done a bit of reading and this looks like it's related to the Azure Load Balancer settings. I've also found some threads about altering that timeout in various contexts, but I'm wondering if anyone has experience altering the Azure Load Balancer timeout in the context of a Web App? Is it a straightforward process?
You can only configure Azure Load Balancer settings for Virtual Machines and Cloud Services, not for Web Apps.
So, if you want to change it, I suggest you deploy the web app to a Virtual Machine or migrate it to a Cloud Service.
For more detail, you could refer to the link.
If possible, you could try to optimize your query or execution code.
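Another option that sidesteps the timeout entirely is to have the housekeeping URL return immediately and run the work in the background, so the load balancer never has to wait on the long task. The question's app is ASP.NET MVC5; the sketch below uses Python/Flask purely to illustrate the pattern, and the endpoint name and work function are made up.

```python
# Pattern sketch (hypothetical names): acknowledge the CRON trigger right
# away with 202 Accepted and run the long housekeeping task on a worker
# thread, so the response never hits the load balancer's 230 s timeout.
import threading
from flask import Flask

app = Flask(__name__)

def run_housekeeping():
    # ... long-running cleanup work goes here (placeholder) ...
    pass

@app.route("/housekeeping")
def housekeeping():
    threading.Thread(target=run_housekeeping, daemon=True).start()
    return "Housekeeping started", 202  # respond immediately

if __name__ == "__main__":
    app.run()
```

The same idea in ASP.NET would be a controller action that queues the work (on a background thread or job queue) and returns at once; the CRON caller then only needs the 202, not the result.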
This is a little bit of an old question but I ran into it while trying to figure out what was going on with my app so I figured I would post an answer here in case anyone else does the same.
An Azure Load Balancer (created by any means), which usually comes along with an external IP address (at least when created via Kubernetes / AKS / Helm), has the 4-minute (240-second) idle connection timeout that you referred to.
That being said, there are two different types of public IP addresses: Basic (which is almost always the default, and probably what you created) and Standard. More information in the docs: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-overview
You can modify the idle timeout; see the following doc: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-tcp-idle-timeout
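If you'd rather script the change than click through the portal, a rough sketch with the azure-mgmt-network Python SDK is below. The resource group and IP names are placeholders, and the method names follow recent SDK versions (begin_* for long-running operations), so double-check against your installed version.

```python
# Sketch: raise the idle timeout on a public IP via the Azure Python SDK.
# Resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the existing public IP, bump the timeout, and push it back.
public_ip = client.public_ip_addresses.get("my-resource-group", "my-public-ip")
public_ip.idle_timeout_in_minutes = 30  # 30 minutes is the maximum

client.public_ip_addresses.begin_create_or_update(
    "my-resource-group", "my-public-ip", public_ip
).result()
```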
That being said, changing the timeout may or may not solve your problem. In our case we had a Rails app that was connecting to a database outside of Azure. As soon as you add a Load Balancer with a public IP, all traffic will exit through that public IP and be bound by the 4-minute idle timeout.
That's fine, except that Rails assumes a connection to a database is not going to be cut frequently, which in this case happened all the time.
We ended up implementing a connection pooling service that sat between our Rails app and our real database (PgBouncer, which is specific to Postgres databases). That service monitored each connection and reconnected when the idle timer was nearing the Azure LB timeout.
It took a little while to implement but in our case it works flawlessly. You can see some more details over here: What Azure Kubernetes (AKS) 'Time-out' happens to disconnect connections in/out of a Pod in my Cluster?
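A lighter-weight alternative to a pooler, if you control the client, is to turn on TCP keepalives so the connection never looks idle to the load balancer. That is not what we did (we used PgBouncer), but for illustration, a Python/psycopg2 client could do it like this; the hostname and thresholds are placeholders, chosen to stay well under the 240-second timeout.

```python
# Sketch: TCP keepalives keep the socket chatty so it never trips the
# Azure LB idle timeout. Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="db.example.com",
    dbname="mydb",
    user="myuser",
    password="secret",
    keepalives=1,             # enable TCP keepalives
    keepalives_idle=120,      # start probing after 120 s of idle time
    keepalives_interval=30,   # probe every 30 s after that
    keepalives_count=3,       # drop the connection after 3 failed probes
)
```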
The longest timeout you can set for a public IP / load balancer is 30 minutes. If you have a connection you would like to keep that runs idle longer than that, you may be out of luck. As of now, 30 minutes is the max.
Initial connections to my website are extremely slow (100+ seconds). How can I diagnose the issue?
Using the Chrome DevTools Network tab, I see that the issue is "Initial connection" and not things like SSL or Waiting/TTFB.
This only happens for the first page visit to the website for a given device; after the first page loads, everything on the website is very fast. This consistently happens for new devices, on the same device if I don't visit the website for X days, and on the same device if I clear the cache and browsing history.
The website is a Django app hosted on Google Cloud App Engine with 2 flexible instances.
User traffic to the website is low, so I doubt the issue is load balancing or traffic spikes.
Thanks!
Yesterday I tried opening the page and it took 1.8 min to load the main page and 2.1 min to make a search; later attempts were faster, as you mentioned. I also tried accessing the page today and it loaded quite fast.
From my understanding, the high latency of the first connection might be related to session handling, database connections, SSL certificate problems, a huge amount of uncached data, or expensive operations run before the server sends the response. It's nearly impossible for us to determine without access to your code, logs, and database configuration.
As for how to narrow down the issue, I might suggest the following:
Examine your logs for possible causes.
Add timing/logging interleaved with each of the statements that handle the request and watch for bottlenecks or long-running code (see the sketch after this list).
Deploy the same endpoint without logos, images and other data that would be browser-cached and see if it makes a difference.
Create a simple hello-world endpoint and check its latency. Keep slowly evolving that endpoint to resemble your actual handling code, in the hope of finding what the issue is.
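For the timing-instrumentation point above, a minimal sketch (Django is assumed since that's your stack; the helper functions are made up) could look like:

```python
# Sketch: coarse timing around the pieces of a request handler to find
# where the first-hit latency goes. fetch_rows/render_page are made up.
import logging
import time

logger = logging.getLogger(__name__)

def timed(label, fn, *args, **kwargs):
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    logger.info("%s took %.3f s", label, time.perf_counter() - start)
    return result

def my_view(request):
    rows = timed("db query", fetch_rows)        # hypothetical helper
    return timed("render", render_page, rows)   # hypothetical helper
```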
If only the first connection is slow, it might be because the instance is starting and you do not have minimum idle instances and warmup requests enabled. That configuration keeps instances ready to take traffic, so first-connection latency will be lower.
As stated in the documentation:
If you set a minimum number of idle instances, pending latency will have less effect on your application's performance. Because App Engine keeps idle instances in reserve, it is unlikely that requests will enter the pending queue except in exceptionally high load spikes. You will need to test your application and expected traffic volume to determine the ideal number of instances to keep in reserve.
Also, you can find more information about warmup requests in the documentation on Configuring Warmup Requests to Improve Performance.
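For a Django app, the warmup endpoint itself is just a route on /_ah/warmup that exercises whatever is expensive on a cold start; a minimal sketch (the priming work is a placeholder) might be:

```python
# Sketch of a Django /_ah/warmup handler. App Engine calls this path on
# new instances when warmup requests are enabled in app.yaml.
from django.http import HttpResponse
from django.urls import path

def warmup(request):
    # Prime whatever is slow on first hit: open DB connections, warm
    # caches, import heavy modules, etc. (placeholder)
    return HttpResponse("ok")

urlpatterns = [
    path("_ah/warmup", warmup),
]
```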
I resolved this issue by removing and recreating custom domain settings for my App Engine project, and removing and recreating the corresponding DNS records in domains.google, following these instructions:
https://cloud.google.com/appengine/docs/standard/python/mapping-custom-domains
I'm still not sure what the underlying issue was, but this fixed it. Hope this can help anyone encountering a similar issue.
I had this exact issue. It was killing our load performance once we switched to a load balancer.
What it ended up being was the instance group port setting. We're obviously using SSL certificates for the site, but I had indicated both port 80 and port 443.
Once I removed port 80 from the instance group that the load balancer refers to, it loaded all the pages immediately.
Problem: I'm looking for an agile way to ship a Docker container (stored on GCR.IO) to a managed service on GCP:
one Docker container, gcr.io/project/helloworld, with private data (say, a Cloud SQL backend), so it can't face the real world.
a bunch of IPs I want to expose it to: say [ "1.2.3.4" , "2.3.4.0/24" ].
My ideal platform would be Cloud Run, but GAE also works.
I want to develop in an agile way (say, deploy with 2-3 lines of code). Is it possible to run my service secretly and yet super easily? We're not talking about a huge production project; we're talking about playing around and writing a POC you want to share securely over the internet with a few friends, making sure the rest of the world gets a 403.
What I've tried so far.
The only thing that works easily is a GCE VM with a Docker-friendly OS (like COS) where I can set up firewall rules. This works, but it's a lame Docker app on a disposable VM. The machine runs forever and dies at reboot unless I stabilize it with cron/startup scripts. It looks like I'm doing somebody else's job.
Everything else I've tried so far failed:
Cloud Run. Amazing, but I can't set up firewall rules on it, or Cloud Director, ...; it seems to work only with IAP, which is painful to set up.
GAE. It works, but it serves from multiple IPs and I can't detach the public IPs or firewall them. I managed to get the IP filtering done within the app (rough sketch at the end of this question), but that seems a bit risky. I don't [want to] trust my coding skills :)
Cloud Armor. It only supports an HTTPS Load Balancer, which I don't have, nor do I have MIGs to point it to. I want simplicity.
Traffic Director. It also needs an HTTP L7 balancer. But I have one Docker container, on a single pod. Why do I need an LB?
GKE. Actually this seems to work [1], but it's not fully managed (I need to create the cluster, pods, ...).
Is this a product deficiency or am I looking at the wrong products? What's the simplest way to achieve what I want?
[1] how do I add a firewall rule to a gke service?
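For reference, the in-app IP filtering I mentioned under GAE is roughly the sketch below: a WSGI middleware that 403s anything outside an allowlist (the allowlist here is just the example ranges from above). The risk I alluded to is that it relies on parsing X-Forwarded-For correctly.

```python
# Rough sketch of in-app IP filtering on GAE: a WSGI middleware that
# rejects clients outside the allowlist with a 403.
from ipaddress import ip_address, ip_network

ALLOWED = [ip_network("1.2.3.4/32"), ip_network("2.3.4.0/24")]

class IPFilterMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # On GAE the original client is the first X-Forwarded-For entry.
        forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
        client = forwarded.split(",")[0].strip() or environ.get("REMOTE_ADDR", "")
        try:
            allowed = any(ip_address(client) in net for net in ALLOWED)
        except ValueError:
            allowed = False
        if not allowed:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return self.app(environ, start_response)
```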
Please limit your question to one service. Not everyone is an expert on all Google Cloud services. You will have a better chance of a good answer for each service if they are separate questions.
In summary, if you want to use Google Cloud security groups (i.e. VPC firewall rules) to control IP-based access, you need to use a service that runs on Compute Engine, as they are part of the VPC feature set. App Engine Standard and Cloud Run do not run within your project's VPC. This leaves you with App Engine Flex, Compute Engine, and Kubernetes.
I would change strategies and use Google Cloud Run with authentication required. Access is controlled by Google Cloud IAM via OAuth tokens.
Cloud Run Authentication Overview
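Once the service requires authentication, a caller that has the Cloud Run Invoker role can mint an ID token and pass it as a bearer token. A sketch using the google-auth Python library follows; the service URL is a placeholder.

```python
# Sketch: call an authenticated Cloud Run service with an ID token.
# The caller's credentials need roles/run.invoker on the service.
import google.auth.transport.requests
import google.oauth2.id_token
import requests

SERVICE_URL = "https://helloworld-abc123-uc.a.run.app"  # placeholder

auth_request = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_request, SERVICE_URL)

resp = requests.get(SERVICE_URL, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.text)
```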
I agree with John Hanley's reply and I have upvoted his answer.
Also, I’ve learned that you are looking how to restrict access to your service through GCP.
By setting a firewall rule, you can limit access to your service by restricting the allowed source IP range, so that only those addresses are allowed as the source IP.
Please review another thread on Server Fault [1], which explains how to "Restrict access to single IP only":
https://serverfault.com/questions/901364/restrict-access-to-single-ip-only
You can do this quite easily with a Serverless NEG for Cloud Run or GAE.
If you're doing this in Terraform, you can follow this article.
I have a servlet hosted within Google App Engine. As part of the business logic, it performs an HTTPS request to another web service. I started getting SSL connection reset exceptions, but they appear to happen in a random fashion. I haven't been able to reproduce the exception; they start happening more often over time, but then can fade away and come back. I've performed stress testing on the remote service directly from other machines and never received a single issue. The only way I've been able to deal with this is by redeploying the application, but it can take a few redeployments before the issue goes away. I am confident it isn't an issue with the remote service but something happening within Google App Engine.
Thank you for your comments; we got to the bottom of this yesterday. It was a dynamically scaled application being deployed, so the instances should be managing their resources by themselves. However, what was happening is that an instance was running low on memory, which resulted in SSL connection resets. This is why they were appearing in a random manner and getting worse over time. The way this was resolved was to upgrade the resources, changing the deployment configuration to use an F2 instance rather than the default F1 instance (the instance_class setting in app.yaml). Since doing so, there hasn't been a connection dropped. Thanks again for your responses, and I hope this helps anyone who experiences something similar.
I have a Google App Engine server up and running. I don't have any cron jobs or anything that I am aware of that calls the default website URL. I have had over 70,000 requests for _ah/warmup in the last 24 hours. How can I figure out where this traffic is coming from? Is this a denial-of-service attack on the server?
The warmup request is, as explained in the Google documentation, an internal call that is made whenever new instances are spun up.
Without any more information, the only thing 70,000 requests to that URL means is that your scaling is maybe a bit too aggressive and your app is spinning up too many instances.
I have a question about how Google App Engine achieves scalability through virtualization. For example, when we deploy a cloud app to Google App Engine, as the number of users of the app increases, I think Google will automatically spin up a new virtual server to handle the requests. At first the cloud app runs on one virtual server, and now it runs on two. Google achieved "scalability through virtualization so that any one system in the Google infrastructure can run an application's code—even two consecutive requests posted to the same application may not go to the same server".
Does anyone know how an application can run on two virtual servers at Google? How are requests sent to two virtual servers, and how are data and CPU resources synchronized and shared?
Is there any document from Google that explains this and how the virtualization is implemented?
This is in no way a specific answer, since we have no idea how Google does this. But I can explain how a load balancer works in Apache, which operates on a similar concept. Heck, maybe Google is using a variant of Apache load balancing. Read more here.
Basically, a simple Apache load-balancing structure consists of at least 3 servers: 1 head load balancer and 2 mirrored servers. The load balancer is basically the traffic cop for outside-world traffic. Any public request made to a website that uses load balancing will actually be requesting the "head" machine.
On that load-balancing machine, configuration options basically determine which slave servers behind the scenes send content back to the load balancer for delivery. These "slave" machines are basically regular Apache web servers that are (perhaps) IP-restricted to only deliver content to the main head load-balancer machine.
So, assuming both slave servers in a load-balancing structure are 100% the same, the load balancer will randomly choose one to grab content from, and if it can grab the content in a reasonable amount of time, that "slave" becomes the source. If for some reason the slave machine is slow, the load balancer decides, "Too slow, moving on!" and goes to the next machine. It basically makes a decision like that for each request (a toy sketch of this decision loop follows below).
The net result is that the faster and more accessible server is what gets served from. But because all the content is proxied behind the load balancer that the public accesses, nobody in the outside world knows the difference.
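To make that per-request decision concrete, here is a toy version of the idea in Python (this has nothing to do with Apache's actual implementation, and the backend URLs are placeholders): try the slaves in random order and fall through to the next one when a backend is too slow.

```python
# Toy illustration of the "Too slow, moving on!" decision: serve from the
# first backend that answers within the deadline. URLs are placeholders.
import random
import urllib.request

BACKENDS = ["http://slave1.internal", "http://slave2.internal"]
TIMEOUT_SECONDS = 2.0

def fetch(path):
    for backend in random.sample(BACKENDS, len(BACKENDS)):
        try:
            with urllib.request.urlopen(backend + path,
                                        timeout=TIMEOUT_SECONDS) as resp:
                return resp.read()  # this slave becomes the source
        except OSError:
            continue  # too slow or down: move on to the next slave
    raise RuntimeError("no backend responded in time")
```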
Now let's say the site behind a load balancer is so heavily trafficked that more servers need to be added to the cluster. No problem! Just clone the existing slave setup onto as many new machines as needed, adjust the load balancer to know that these slaves exist, and let it manage the proxying.
Now the hard part is really keeping all the machines in sync, and that is all dependent on site needs and usage. A DB-heavy website might use MySQL mirroring for each DB on each machine, or maybe have a completely separate DB server that is itself mirrored and clustered to other DBs.
All that said, Google's key to success is how well their load-balancing infrastructure works. It's not easy, and I have no clue what they actually do. But I am sure the basic concepts outlined above are applied in some way.