In my AppEngine application, the servlet takes a long time to initialize. This is not a big problem with a usual Tomcat deployment, as the initialization happens only once. However, in AppEngine, I noticed that many service requests cause AppEngine to launch a new process and do the initialization, and thus these service requests take a long time.
Is it possible to disconnect the initialization from the service request? Somehow tell AppEngine to do the initialization in the background, so that when a user asks for a page, he won't have to wait so long?
You can set up Idle Instances or Warmup Requests in the Application Settings section of the App Engine console for your app. That should avoid those slow initialization times, as your app will be preloaded and ready to go.
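Warmup requests can also be enabled in the app's configuration rather than the console. A minimal sketch for app.yaml on the newer Java runtimes (older runtimes use the equivalent appengine-web.xml setting); your app then handles GET requests to the fixed /_ah/warmup path and does the slow initialization there:

# Enable warmup requests so App Engine initializes a new instance
# before routing live traffic to it.
inbound_services:
  - warmup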
Related
What is the best way to set up Google App Engine to always have at least one instance ready and available to handle requests when using automatic scaling? This is for a low traffic application.
There are settings here that let you control instance scaling, but I am not sure what the best combination is, and some of them sound confusingly similar. For example, min-instances and min-idle-instances sound alike. I tried setting min-instances to 1 but I still experienced lag.
What is a configuration that, from the end user's point of view, is always on and handles requests without lag (for a low-traffic application)?
In the App Engine Standard environment, loading your application's code or handling the first requests on a new instance can cause users to experience extra latency; warmup requests can help reduce it. Warmup requests load the app's code onto a new instance before any live requests reach it. When warmup requests are enabled, App Engine detects that your application needs a new instance and issues a warmup request to initialize one. You can check this link for Configuring Warmup Requests to Improve Performance.
Regarding min-instances and min-idle-instances, these only apply when warmup requests are enabled. As you can see in this post, the difference between the two elements is that min-instances keeps instances running to process incoming requests immediately, while min-idle-instances keeps idle instances on standby to absorb high-load traffic.
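For a low-traffic app, a configuration along these lines (a sketch, not a definitive recommendation) keeps one resident instance plus one idle instance on standby; both elements require warmup requests to be enabled:

# app.yaml fragment for the Standard environment with automatic scaling
inbound_services:
  - warmup              # required for min_instances / min_idle_instances to apply

automatic_scaling:
  min_instances: 1      # instances kept running at all times
  min_idle_instances: 1 # idle instances kept on standby for traffic spikes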
However, you mentioned that you don't need a warmup, so we suggest you select App Engine Flexible; based on this documentation, it must always have at least one instance running and can scale up in response to traffic. Please note that this environment comes at a higher price. You can refer to this link for the pricing of the two App Engine environments.
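If you do go with the Flexible environment, the always-on behavior comes from its own scaling settings; an illustrative app.yaml fragment:

env: flex
automatic_scaling:
  min_num_instances: 1  # Flexible keeps at least this many instances running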
It appears that App Engine Standard has a warmup feature to warm up an app after a deployment, but I don't see the same feature available for Flex. The readiness and liveness probes also don't work for this, since pointing their path setting at a custom path inside the application doesn't seem to make the probes actually hit that internal endpoint.
Is there some solution I'm missing, other than manually hitting the endpoints myself after the deployment? That isn't very reliable, since the calls don't necessarily round-robin to each instance.
In App Engine Standard, warmup requests essentially load your app's code into a new instance before any live requests reach that instance. This can happen in the following situations:
When you redeploy a version of your app.
When new instances are created because the load from requests exceeds the capacity of the current set of running instances.
When maintenance or repairs of the underlying infrastructure or physical hardware occur.
In App Engine Flexible, you can achieve a similar result by using the initial_delay_sec setting for liveness checks in your app.yaml file. If you set its value to give your code enough time to initialize, the first request that reaches the instance will be served by already-initialized code.
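For example, assuming the code needs roughly two minutes to initialize, the relevant app.yaml fragment might look like this (values are illustrative):

liveness_check:
  path: "/liveness_check"
  initial_delay_sec: 120  # give the instance time to finish initializing
  check_interval_sec: 30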
I have a nodejs script which starts a stream with a third party and stores the incoming messages in Firestore.
There is no need for incoming requests. But after I deployed my script to App Engine, the script only starts if I call the cloud endpoint. After that, it keeps running (and that is what it should do).
Probably there is a way to start processes by default and also build in something like an auto-restart if it crashes, but I couldn't find it, or I am using the wrong search terms :-)
AppEngine is a web-microservice platform; every (micro)service deployed on it has to be triggered by an HTTP request.
By the way, you can't perform an infinite batch process which streams data.
However, you can set up a Cloud Task which calls an App Engine endpoint; the maximum duration is 24 hours. Link this to Cloud Scheduler to launch your 24-hour task every day. (In detail, Cloud Scheduler has to trigger an endpoint such as a Cloud Function or App Engine; that endpoint then creates the task in Cloud Tasks, because Cloud Scheduler can't create a Cloud Tasks task directly.)
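A rough sketch of the moving parts with gcloud (the queue name, schedule, and endpoint paths are hypothetical, and the handler behind /create-stream-task is what actually creates the long-running task):

# queue that will hold the long-running task
gcloud tasks queues create stream-queue

# scheduler job that hits an App Engine endpoint once a day
gcloud scheduler jobs create app-engine start-stream-daily \
  --schedule="0 0 * * *" \
  --relative-url="/create-stream-task"

# for manual testing, a task targeting the streaming endpoint can be created directly
gcloud tasks create-app-engine-task \
  --queue=stream-queue \
  --relative-uri="/start-stream"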
As Guillaume mentioned, GAE isn't really intended for implementing services like the one you want.
However, it's possible to do something similar simply by configuring a minimum of 1 idle instance (see the fragment after this list):
GAE will start an idle instance for the service automatically, without waiting for a triggering request
when the idle instance dies accidentally, or is terminated because it reaches the end of its allowed lifespan, GAE will again start a new idle instance
when the 1st request comes in, GAE will dispatch it to the idle instance, that instance thus becoming active (serving subsequent requests), and GAE will immediately start a new idle instance to have one on standby
when the only active instance dies, GAE won't start a new instance immediately; it'll wait until a new request comes in, which will be handled like the 1st request
when traffic is high enough, GAE will start dispatching it to the idle instance on standby (activating it) and will again start a new idle instance on standby.
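In configuration terms, that means something like this hypothetical app.yaml fragment for the Standard environment with automatic scaling:

automatic_scaling:
  min_idle_instances: 1  # keep one warm instance on standby at all times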
I'm trying to deploy a simple nodejs app to the GAE flexible environment.
I followed the official guide, using this command:
gcloud app deploy --verbosity=debug
I have tried many times.
The logs give me this forever:
DEBUG: Operation [apps/just-aloe-212502/operations/b1e812f6-299c-438e-b335-e35aa343242a] not complete. Waiting to retry.
Updating service [flex-env-get-started] (this may take several minutes)...⠛DEBUG: Operation [apps/just-aloe-212502/operations/b1e812f6-299c-438e-b335-e35aa343242a] not complete. Waiting to retry.
Updating service [flex-env-get-started] (this may take several minutes)...⠛DEBUG: Operation [apps/just-aloe-212502/operations/b1e812f6-299c-438e-b335-e35aa343242a] not complete. Waiting to retry.
Updating service [flex-env-get-started] (this may take several minutes)...⠹DEBUG: Operation [apps/just-aloe-212502/operations/b1e812f6-299c-438e-b335-e35aa343242a] not complete. Waiting to retry.
Updating service [flex-env-get-started] (this may take several minutes)...⠼DEBUG: Operation [apps/just-aloe-212502/operations/b1e812f6-299c-438e-b335-e35aa343242a] not complete. Waiting to retry.
What happened?
I can run my simple nodejs hello-world app successfully locally, and the GAE standard environment works fine.
I should note that the App Engine Flexible environment is based on Google Compute Engine, so it takes time to configure the infrastructure when you deploy your app.
The first deployment of a new version of an App Engine Flexible application takes some time because the internal infrastructure has to be set up; subsequent deployments should be relatively fast, since they only modify some GCP resources and then wait on the health checks.
Deployment also requires building a Docker image, which you can skip if you already have a pre-built image uploaded to gcr.io; deploying from that image skips the build step and shortens the deployment time.
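For example, with a hypothetical pre-built image already pushed to gcr.io:

gcloud app deploy --image-url=gcr.io/my-project/my-app:latest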
The Twitter streaming API says that we should open an HTTP request and parse updates as they come in. I was under the impression that Google's urlfetch cannot keep an HTTP request open past 10 seconds.
I considered having a cron job that polled my Twitter account every few seconds, but I think Google AppEngine only allows cron jobs to run once a minute. However, my application needs near-realtime access to my Twitter #replies (preferably with a lag of 10 seconds or less).
Is there any method for receiving real-time updates from Twitter?
Thanks!
Unfortunately, you can't use the urlfetch API for 'hanging gets'. All the data will be returned when the request terminates, so even if you could hold it open arbitrarily long, it wouldn't do you much good.
Have you considered using Gnip? They provide a push-based 'web hooks' notification system for many public feeds, including Twitter's public timeline.
I'm curious.
Wouldn't you want this to be polling Twitter on the client side? Are you polling your public feed? If so, I would decentralize the work to the clients rather than the server...
It may be possible to use Google Compute Engine https://developers.google.com/compute/ to maintain unrestricted hanging GET connections, then call a webhook in your AppEngine app to deliver the data from your Compute Engine VM to where it needs to be in AppEngine.
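A minimal sketch of the VM-side relay in Node.js (the webhook URL is hypothetical, and the code that actually consumes the Twitter stream is omitted):

const https = require('https');

// Forward one streamed message to the App Engine webhook over a normal short request.
function forwardToAppEngine(message) {
  const req = https.request(
    'https://your-app.appspot.com/twitter-webhook', // hypothetical webhook endpoint
    { method: 'POST', headers: { 'Content-Type': 'application/json' } },
    (res) => res.resume() // drain the response; only successful delivery matters here
  );
  req.on('error', (err) => console.error('webhook delivery failed', err));
  req.end(JSON.stringify(message));
}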