Clear App Engine Flex static files cache - google-app-engine

I set a Cache-Control max-age of 1 year on my server.
How can I tell App Engine to clear this cache so that clients fetch the new version from the server?
The configuration is a Flex custom environment:
runtime: custom
env: flex
env_variables:
  writecontrolEnv: 'prod'
handlers:
- url: /.*
  script: this field is required, but ignored
service: gateway-prod
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
resources:
  cpu: 1
  memory_gb: 2
  disk_size_gb: 10
skip_files:
- node_modules/
network:
  instance_tag: gateway

Assuming that your app is the one serving the static files, the cache parameters sent by the server are controlled by your application code, which means that once you deploy a new version with updated parameters the server will send the updated values.
The problem is that the caching is actually performed by the client (or some middle-man network device), so the end user will not reach the server until the (very long, in your case) cache expiration time is reached, and won't see the update until then.
You can try to clear your browser cache, hoping that the browser was the one doing the caching.
To prevent such occurrences in the future you may want to choose a shorter cache expiration time or use some cache busting technique, such as the content-hash approach sketched below.
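One common cache-busting approach is to embed a content hash in each asset filename, so every new deploy produces new URLs and clients bypass stale cache entries instead of waiting a year for them to expire. A minimal sketch, assuming a webpack 5 build:

// webpack.config.js: illustrative cache-busting output settings
module.exports = {
  output: {
    filename: '[name].[contenthash].js',      // e.g. main.3b1a2c4d.js
    chunkFilename: '[name].[contenthash].js', // lazy-loaded chunks get hashed too
  },
};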

Related

App engine flex nodejs worker_connections error

I deployed a nodejs server to app engine flex.
I am using websockets and expect around 10k concurrent connections at the moment.
At around 2000 websocket connections I get this error:
[alert] 33#33: 4096 worker_connections are not enough
There is no permanent way to edit the nginx configuration in a nodejs runtime.
Isn't 2k connections on one instance quite low?
My yaml config file:
runtime: nodejs
env: flex
service: web
network:
  session_affinity: true
resources:
  cpu: 1
  memory_gb: 3
  disk_size_gb: 10
automatic_scaling:
  min_num_instances: 1
  cpu_utilization:
    target_utilization: 0.6
As per this public issue 1 and public issue 2, increasing the number of instances might increase the total worker_connections available. Reducing the memory of each instance also allows the service to scale up at a lower threshold; this may help keep the number of open connections per instance below the 4096 limit, which should be constant across any instance size. You can change the worker_connections value for one instance by SSHing into the VM.
Nginx config is located in /tmp/nginx/nginx.conf and you can manually change it as follows:
sudo su
vi /tmp/nginx/nginx.conf #Make your changes
docker exec nginx_proxy nginx -s reload
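For reference, the directive to raise lives in the events block of that file; the 8192 below is only an illustrative value, and the generated file's exact contents may differ between runtime images:

events {
    worker_connections 8192;  # raised from the 4096 that triggered the alert
}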
Apart from the above workaround, there is another Public Issue Tracker entry for the implementation of a feature to allow users to change nginx.conf settings; feel free to post there if you have any other queries.

Google App Engine - keep previous version's static files

We've deployed a Vue SPA to Google App Engine and it's served completely by the static handlers.
The issue that we are facing is that if a user is active on our site mid-deploy, then their old Webpack chunk manifest becomes invalid (since some chunks' hashes are overwritten). If they now try to route to a new page and that page tries to fetch a chunk that got overwritten, we get the following error:
ChunkLoadError: Loading chunk Conversations failed.
(error: https://example.com/js/Conversations.71762189.js)
Ideally, we'd like to keep N (2-3?) previous versions of the app's static files.
Is our only option to push all the assets to a Cloud Storage Bucket? If so, how would we go about pruning older versions?
Here is my app.yaml for reference:
runtime: nodejs10
instance_class: F4
automatic_scaling:
min_instances: 2
max_instances: 10
default_expiration: "30d"
error_handlers:
- file: default_error.html
handlers:
- url: /api/*
secure: always
redirect_http_response_code: 301
script: auto
- url: /js/*
secure: always
redirect_http_response_code: 301
static_dir: dist/js
- url: /css/*
secure: always
redirect_http_response_code: 301
static_dir: dist/css
- url: /img/*
secure: always
redirect_http_response_code: 301
static_dir: dist/img
- url: /(.*\.(json|js|txt))$
secure: always
redirect_http_response_code: 301
static_files: dist/\1
upload: dist/.*\.(json|js|txt)$
expiration: "10m"
- url: /.*
secure: always
redirect_http_response_code: 301
static_files: dist/index.html
upload: dist/index.html
expiration: "2m"
The issue typically happens when a deployment overwrites an existing service version that is receiving traffic (i.e. the version ID is not changed). From Deploying an app:
Note: If you deploy a version that specifies the same version ID as a version that already exists on App Engine, the files that you deploy will overwrite the existing version. This can be problematic if the version is serving traffic because traffic to your application might be disrupted. You can avoid disrupting traffic if you deploy your new version with a different version ID and then move traffic to that version.
As long as a service version is deployed and not deleted or overwritten, its respective static assets remain accessible.
To prevent the issue, always deploy to a fresh service version, then (gradually) migrate traffic to the newly deployed version. Keeping the latest N service versions around will give you the N sets of static assets you desire.
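Pruning older versions afterwards can be done with the gcloud CLI; the service and version names below are illustrative, and a version still receiving traffic cannot be deleted:

# List the deployed versions of a service:
gcloud app versions list --service=default
# Delete a version you no longer need:
gcloud app versions delete old-version-id --service=default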
In general, this deployment practice is good/recommended for a few other reasons:
avoids potential outages, see Continuous integration/deployment/delivery on Google App Engine, too risky?
avoids potential traffic loss while GAE spins up enough new version instances to handle the traffic load, see 2nd half of GAE shutdown or restart all the active instances of a service/app
Potentially of interest: Google Frontend Retention between deployments
Deploy using the --no-promote flag, and utilize the Traffic Migration feature in the Standard Environment to gradually migrate traffic over to the new version so that all users don't experience an immediate switchover the moment the new version goes live. App Engine will host both the old and new versions (or, "blue" and "green") for a period of time until all traffic points to the new version, and then the old version will be shut down.
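A minimal sketch of that flow with the gcloud CLI (the version ID v2 is illustrative):

# Deploy the new version without routing any traffic to it:
gcloud app deploy --no-promote --version=v2
# After verifying it, migrate traffic over to the new version:
gcloud app services set-traffic default --splits=v2=1 --migrate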
See also:
Testing on App Engine
Migrating and Splitting Traffic with the Admin API
Blue-Green Deployment Pattern

Autoscaling is working but continuous 502s on Google Flexible Environment (NodeJs)

# [START app_yaml]
runtime: nodejs
env: flex
service: 'frontend'
env_variables:
  DEPLOY_ENV: 'PRODUCTION'
handlers:
- url: /.*
  script: IGNORED
  secure: always
resources:
  cpu: 2
  memory_gb: 4
# [END app_yaml]
Autoscaling is working, but I am continuously getting 502s on the Google Flexible Environment (Node.js). Looking at the logs, the response time is only 0 or 1 ms, and sometimes more than that. Any help would be very appreciated.
Are all your requests getting 502s or only some of them? Did you have it working before? What changes did you make?
If all your requests are getting 502s and it has never worked before, then it's likely you don't have it set up properly. Please make sure your app is listening on port 8080 and it should work.
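For illustration, a minimal Node.js server that binds to the expected port; App Engine also injects a PORT environment variable, which is 8080 on Flex:

// server.js: a minimal sketch of a server App Engine Flex can route to
const http = require('http');

const port = process.env.PORT || 8080; // App Engine forwards requests here

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('OK\n');
}).listen(port, () => console.log(`Listening on port ${port}`));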
If it worked before and has stopped working without any code change, then the transient issue should be investigated on our end. Please file an issue on our Public Issue Tracker, thanks.

Protecting cron scheduling endpoint on AppEngine (Flexible Environment)

I am trying to get my dataflow job scheduled via cron.yaml in an AppEngine flexible environment. This works flawlessly when I leave my endpoint unprotected. However, when trying to secure the endpoint, I see 403 status responses, even when triggering it from within the TaskQueues interface.
My app.yaml looks like this:
runtime: java
env: flex
handlers:
- url: /.*
  script: this field is required, but ignored
- url: /dataflow/schedule
  script: this field is required, but ignored
  login: admin
runtime_config:
  jdk: openjdk8
resources:
  cpu: .5
  memory_gb: 1.3
  disk_size_gb: 10
manual_scaling:
  instances: 1
Secure handlers (like login: admin) do not work on App Engine Flexible; that is why you get the 403.
To secure that handler you can instead check the request header X-AppEngine-Cron in your app, which is a trusted header only set on traffic coming from App Engine. See the sketch below.
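For illustration, here is that check sketched in Node.js/Express (the question uses the Java runtime, where the same header check would go in a servlet filter; the Express usage is an assumption):

const express = require('express'); // illustrative framework choice
const app = express();

app.get('/dataflow/schedule', (req, res) => {
  // App Engine strips X-Appengine-Cron from external requests, so its
  // presence ('true') proves the request came from the cron service.
  if (req.get('X-Appengine-Cron') !== 'true') {
    return res.status(403).send('Forbidden');
  }
  res.send('Dataflow job triggered'); // kick off the scheduled work here
});

app.listen(process.env.PORT || 8080);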

When deploying my app in flexible environment getting error beta setting machine_type cannot be set in an App Engine Flexible Environment

When trying to deploy my app using the flexible environment, I am getting this error:
ERROR: (gcloud.preview.app.deploy) INVALID_ARGUMENT: The beta setting machine_type cannot be set in an App Engine Flexible Environment deployment.
My app.yaml is given below:
runtime: nodejs
#vm: true
env: flex
# [END runtime]
network:
  instance_tag: app-tag
  name: network-tag
instance_class: F1
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
  cool_down_period_sec: 60
beta_settings:
  machine_type: f1-micro
handlers:
- url: /.*
  script: IGNORED
  secure: always
# Temporary setting to keep gcloud from uploading node_modules
skip_files:
- ^node_modules$
Also, can anyone please tell me what the difference is between vm: true and env: flex, since both set the App Engine environment to flexible?
When changing from vm: true to env: flex you're actually switching to the latest infra version; see Upgrading to the Latest App Engine Flexible Environment Beta Release.
The machine type is no longer configured that way. Instead, you'd configure a custom instance shape via its resources:
Resource settings
These settings control the computing resources. App Engine assigns a machine type based on the amount of CPU and memory you've specified. The machine is guaranteed to have at least the level of resources you've specified; it might have more.
You can specify up to eight volumes of tmpfs in the resource settings. You can then enable workloads that require shared memory via tmpfs, which can improve file system I/O.
For example:
resources:
  cpu: 2
  memory_gb: 1.3
  disk_size_gb: 10
  volumes:
  - name: ramdisk1
    volume_type: tmpfs
    size_gb: 0.5
