Google App Engine Manual Scaling Prevents Restart

I have a Python App Engine app that handles API results, and it's stateful. However, it seems that after a few hours of inactivity (no requests) the server shuts down, resetting all state; when a new request is made it's listening again.
But the state is gone. I want the server to remain up 24/7 and never reset/restart, because I need to maintain state.
I have configured it as per the documentation, but it's still restarting, and I am not sure what's wrong.
Here is my app.yaml:
runtime: python37
entrypoint: python main.py
manual_scaling:
  instances: 1

In App Engine the general recommendation is to create stateless applications, as mentioned in the documentation:
Your app should be "stateless" so that nothing is stored on the instance.
As an alternative, if the application must never be restarted, you can deploy it on Compute Engine; since that service is a virtual machine, you have total control over its state.
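Another option that keeps you on App Engine is to move the state itself out of instance memory into an external store (Memorystore, Datastore, Cloud SQL). A minimal sketch of the pattern, with an in-memory stand-in for the real backend (all names here are illustrative, not from the question):

```python
# Sketch: keep handler state in an external store instead of module-level
# variables, so an instance restart does not lose it.

class InMemoryStore:
    """Stand-in for an external store; swap for redis/Datastore in production."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value

def handle_api_result(store, user_id, result):
    # Read previous state from the store, update it, write it back.
    history = store.get(user_id, [])
    history.append(result)
    store.set(user_id, history)
    return history

store = InMemoryStore()
handle_api_result(store, "u1", {"status": "ok"})
```

With this shape, restarts only cost you the cached connection, not the data.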

Related

Google App Engine - Frequently getting logged out of the deployed .NET application

Frequently getting logged out from the deployed application; the session is not working / timing out too soon in the deployed .NET 5 application on App Engine flex.
Below are the warning logs I'm getting; I'm not sure whether they are related to the session issue or not.
Logged warnings
Are there any session settings that need to be configured within the GCP console that I'm unaware of?
I've found similar configuration for Java applications but nothing for .NET.
Reference:- https://developers.google.com/appengine/docs/java/config/appconfig?csw=1#Java_appengine_web_xml_Enabling_sessions
Apparently, I found the issue and fixed it as described below.
The app.yml file has to be modified to use a custom env instead of the default that Google App Engine flex provides with 2 instances, like below -
Modified app.yml -
runtime: custom
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
Here we explicitly specify the number of instances and the resources required to run our app.
The default deployment uses multiple instances, and sessions seem to be stored privately per instance. If you reload the page a few times, you will see your session exist sometimes and not exist other times as you toggle between instances.
Hence, by running a single instance we keep the session on that one instance, which resolves the issue.
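The toggling behavior described above is easy to simulate: model each instance as holding its own private session dict and route requests round-robin; the session only "exists" on the instance that created it. (A pure-Python illustration of the mechanism, not App Engine API code.)

```python
import itertools

# Each "instance" keeps sessions in its own private memory.
instances = [{"sessions": {}} for _ in range(2)]
router = itertools.cycle(instances)  # round-robin load balancing

def handle_request(session_id):
    instance = next(router)
    if session_id in instance["sessions"]:
        return "session found"
    instance["sessions"][session_id] = {"user": "alice"}
    return "session missing - new session created"

# Request 1 lands on instance 0 and creates the session;
# request 2 lands on instance 1, which has never seen it;
# request 3 is back on instance 0, where the session exists.
print(handle_request("abc"))  # session missing - new session created
print(handle_request("abc"))  # session missing - new session created
print(handle_request("abc"))  # session found
```

Pinning to 1 instance (or moving sessions into a shared store) removes the toggle.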

App Engine Standard, Serverless VPCs, Cloud Memorystore giving significant amount of timeouts

We configured our App Engine Standard python 3 service to connect to Cloud Memorystore via the Serverless VPC service (per the documentation, and other stack overflow threads). (I've included the app.yaml config below). This all worked well, unless an instance went idle for a little while. Over time we saw a high volume of:
Long unexplained hangs when making calls to Memorystore, even though they eventually worked
redis.exceptions.ConnectionError: Error 110 connecting to 10.0.0.12:6379. Connection timed out.
redis.exceptions.TimeoutError: Timeout reading from socket
These happened to the point where I had to move back to App Engine Flexible, where the service runs great without any of the above problems.
My conclusion is that Serverless VPC does not handle the fact that the redis client tries hard to leave the connection to redis open all the time. I tried a few variations of timeout settings, but nothing that helped. Has anyone successfully deployed App Engine Standard, Memorystore, and Serverless VPC?
env_variables:
  REDISHOST: <IP>
  REDISPORT: 6379
network:
  name: "projects/<PROJECT-ID>/global/networks/default"
vpc_access_connector:
  name: "projects/<PROJECT-ID>/locations/us-central1/connectors/<VPC-NAME>"
Code used to connect to Memorystore (using redis-py):
REDIS_CLIENT = redis.StrictRedis(
    host=REDIS_HOST,
    port=REDIS_PORT,
    retry_on_timeout=True,
    health_check_interval=30
)
(I tried various timeout settings but couldn't find anything that helped)
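One mitigation while debugging this kind of stale-connection behavior is to wrap each Redis call in an explicit retry with a short delay, so a dropped connection gets a chance to re-establish. A generic sketch (the exception tuple and delays are illustrative; with redis-py you would pass redis.exceptions.ConnectionError and redis.exceptions.TimeoutError):

```python
import time

def with_retry(func, *args, retries=2, delay=0.5,
               exceptions=(ConnectionError, TimeoutError), **kwargs):
    """Call func, retrying on transient connection errors with a fixed delay."""
    for attempt in range(retries + 1):
        try:
            return func(*args, **kwargs)
        except exceptions:
            if attempt == retries:
                raise          # out of retries: surface the error
            time.sleep(delay)  # give the connection time to re-establish

# Usage with redis-py would look like:
#   value = with_retry(REDIS_CLIENT.get, "my-key",
#                      exceptions=(redis.exceptions.ConnectionError,
#                                  redis.exceptions.TimeoutError))
```

This does not fix the underlying idle-connection problem, but it turns multi-second hangs into bounded retries.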
I created a Memorystore instance and a Serverless VPC Access connector as stated in the docs (https://cloud.google.com/vpc/docs/configure-serverless-vpc-access), then deployed this sample (https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/appengine/standard_python37/redis) from Google Cloud Platform Python doc samples repo to App Engine Standard after making some modifications:
This is my app.yaml:
runtime: python37

# Update with Redis instance details
env_variables:
  REDIS_HOST: <memorystore-ip-here>
  REDIS_PORT: 6379

# Update with Serverless VPC Access connector details
vpc_access_connector:
  name: 'projects/<project-id>/locations/<region>/connectors/<connector-name>'
I edited the code on main.py and used the snippet that you use to connect to the memorystore instance. It ended up like this:
redis_client = redis.StrictRedis(
    host=redis_host,
    port=redis_port,
    password=redis_password,
    retry_on_timeout=True,
    health_check_interval=30
)
I edited the requirements.txt, changing "redis==3.3.8" to "redis>=3.3.0".
Things to note:
Make sure to use "gcloud beta app deploy" instead of "gcloud app deploy", since it is required for the Serverless VPC Access connector to work.
Make sure that the authorized network you set on the Memorystore instance is the same one you select for the Serverless VPC Access connector.
This works as expected for me, could you please check if this works for you?
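For completeness, the sample reads the Redis details from the env_variables declared in app.yaml, which App Engine exposes as process environment variables. A minimal sketch of that part (the variable names match the app.yaml above; the defaults are illustrative):

```python
import os

# App Engine exposes env_variables from app.yaml as environment variables,
# so connection details never need to be hard-coded in main.py.
redis_host = os.environ.get("REDIS_HOST", "localhost")
redis_port = int(os.environ.get("REDIS_PORT", 6379))
redis_password = os.environ.get("REDIS_PASSWORD")  # None if unset
```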
You may try the min idle instances option, so you always have at least one idle instance waiting to serve your traffic. Bear in mind that this may change your billing cost; here you can find a billing calculator.
If min idle instances is set to 0, there are no instances available to serve traffic when requests first start arriving, and this may be the reason for the exceptions.

Google App Engine Node application: prevent downscaling to 0 instances

I've deployed a Node.js application on Google App Engine (standard environment).
I've noticed that after 10 minutes of inactivity the app is scaled down and the number of instances goes to 0.
So the first request I make after that takes 4-5 seconds to get a reply.
This is my app.yaml
runtime: nodejs10
service: backend
automatic_scaling:
  min_instances: 1
I also added min_idle_instances, but the issue doesn't seem to be solved.
You can use ‘min_idle_instances’ instead of ‘min_instances’.
When using ‘min_instances’, you define how many instances you would like spun up when your app receives traffic.
When you use ‘min_idle_instances’, you define how many instances you want to keep alive. These instances are kept idle and running in the background in order to receive traffic.
Do note that it may increase your monthly invoice as those instances are live, whether or not they are receiving traffic.
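In app.yaml that would look like this (a sketch based on the question's config; tune the value to your traffic):

```yaml
runtime: nodejs10
service: backend
automatic_scaling:
  min_idle_instances: 1   # keep one warm instance ready to absorb traffic
```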
I switched from standard to flexible environment, and it seems really better

How to actually stop an App Engine app from creating instances

I have been trying to get an App Engine project to simply stop running. After trying deleting versions, deleting instances, and even uploading straight-up empty main.py and worker.py files, the project is still using about 3 hours of instance hours per hour. I don't understand how this is physically possible. What are some places I can start looking, since where I've been looking so far doesn't seem to have any relevance whatsoever?
One possible approach would be to disable (or even shutdown) your application. From Google App Engine FAQ:
How can I disable one of my existing applications?
Disabling your application stops all serving requests, but your data and state are retained. You will still be billed for applicable charges, such as Compute Engine instances. To release all the resources used within the project, shut down your project.
To disable your application:
In the GCP Console, go to the App Engine Settings page.
Click Disable application and follow the instructions.
Disabling your app takes effect immediately. Confirm that your application has been disabled by visiting the URL of your app, such as http://[YOUR_PROJECT_ID].appspot.com/. Your application should return an HTTP 404 Not Found error.
Note that shutting the project down will be automatically followed by deletion in 30 days, so don't do that if you still want to re-enable the project at some point.
You can set the max instances in app.yaml:
instance_class: B2
basic_scaling:
  max_instances: 1
Not sure if you can set that to 0, but at least limit it to 1.
Also
1) Make sure you don't have some backend instance (which uses its own app.yaml) running.
2) Make sure you don't have any cron jobs running, or tasks stuck in the task queue.
3) Try:
health_check:
  enable_health_check: False
4) Shut down your instances.
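For points 1) and 4), the gcloud CLI can show what is actually serving and stop it (standard gcloud commands; replace the service and version names with your own):

```shell
# List every service/version and how many instances each is running
gcloud app versions list

# Stop a specific version so it serves nothing and spins its instances down
gcloud app versions stop v1 --service=default

# Inspect individual running instances
gcloud app instances list
```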

Why do I have backend in Google App Engine if I don't have backends.yaml at all

When I open the admin panel of Google App Engine, I see that I have a running backend. I didn't intend to have one, and I don't have a backends.yaml file in my config.
I suspect it is because in app.yaml I have this manual_scaling section, which I use to be able to run longer operations:
manual_scaling:
  instances: 1
Probably I should consider using modules. But first I want to clear this issue.
Manual scaling only works with "B"-type instances, which used to be known as "backend instances":
https://developers.google.com/appengine/docs/python/modules/#Python_Instance_scaling_and_class
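So the "backend" in the panel is simply your module running on a B-class instance because of manual_scaling. You can make that explicit by naming the instance class yourself (a sketch; B1 is an assumption, pick whichever class fits your workload):

```yaml
instance_class: B1   # "B" classes are the former backend instances
manual_scaling:
  instances: 1
```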
