I'm trying to set up a single backend worker on Google Cloud App Engine. I have a YAML file that defines (among other things):
env: flex
resources:
  memory_gb: 2
  cpu: 1
instance_class: B8
...
I can control the memory size and the number of processors, but I need a faster CPU. I set instance_class: B8, but it doesn't help (it probably only works for the Standard environment, not Flex).
Is there a way to control which instance (i.e. CPU speed) to run it on?
You cannot specify the CPU speed. You can specify the number of cores and the RAM.
App Engine Flexible Resource Settings
The instance class B8 is used with App Engine Standard, not Flexible.
Review the App Engine Flexible pricing for additional information.
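Since only the core count (not the clock speed) is adjustable in the Flexible environment, a minimal sketch of the relevant app.yaml section might look like this (the values are placeholders to adapt to your workload; disk_size_gb is optional):
env: flex
resources:
  cpu: 4            # more cores, not a faster clock
  memory_gb: 8      # must stay within the memory range allowed for the chosen core count
  disk_size_gb: 10  # optional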
Related
I am migrating from App Engine to Cloud Run. App Engine Standard instance classes are defined here. I would like to know to what CPU and Memory configuration they map to in Cloud Run.
tl;dr: An F4 App Engine instance is roughly equivalent to 1 CPU and 1 GiB of memory in Cloud Run.
Memory: This table shows exactly how much memory each instance class gets:
CPU: Cloud Run doesn't give details about the CPU frequency (it might change over time); it only guarantees a CPU quantity. F1 and F2 App Engine instances map to 0.25 and 0.5 CPU in Cloud Run, but selecting less than 1 CPU in Cloud Run comes with limitations on other settings (e.g. concurrency), so the recommendation is to pick 1 CPU for these instance classes too.
App Engine instance class | Cloud Run CPU equivalent | Cloud Run Memory equivalent
F1                        | 0.25                     | 256 MiB
F2                        | 0.5                      | 512 MiB
F4                        | 1                        | 1 GiB
F4_1G                     | 1                        | 2 GiB
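For illustration only (the service name and image are placeholders), a Cloud Run service definition matching the F4 row above could declare its limits like this:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service                          # placeholder
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/my-image # placeholder
          resources:
            limits:
              cpu: "1"     # F4 equivalent
              memory: 1Gi  # F4 equivalent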
Could anyone suggest whether it is possible to deploy an application that consumes more than 2 GB of RAM? According to the Google Cloud Platform documentation, it doesn't look feasible: https://cloud.google.com/appengine/docs/standard
The maximum instance size for App Engine Standard is B8, which provides 2 GB of memory.
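For reference, a minimal app.yaml sketch pinning a Standard service to that largest class (the runtime and scaling values are placeholders; B-class instances require basic or manual scaling):
runtime: python39   # placeholder
instance_class: B8  # 2 GB of memory, the Standard maximum
basic_scaling:
  max_instances: 1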
We are using Google App Engine to ingest a large amount of data into Google Cloud Firestore with the configuration below:
Basic scaling
instance_class: B4
basic_scaling:
  instances: 1
The overall data ingestion of 20 GB takes around 1.5 hours, but we have noticed that sometimes, after about an hour, the instance abruptly shuts down with the error below:
Container terminated on signal 9.
As per this documentation, basic scaling can serve a request for up to 24 hours.
We cannot see any more details in the logs either. We also checked the memory usage: B4 has 1024 MB and the app only utilises up to 700 MB.
If anyone has faced this kind of error, your input would be valuable!
Although the instance has 1024 MB, the operating system also needs some of that space; I guess that's why it shuts down: it runs out of memory.
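One possible mitigation, not part of the answer above and purely an assumption on my side: move to an instance class with more headroom, such as B4_1G or B8, so the app plus OS overhead fits, keeping the rest of the scaling configuration as it is:
instance_class: B4_1G  # assumption: 2048 MB, double B4's 1024 MB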
I'm hosting my back-end project on Google Cloud (an App Engine Flex instance). For now I have only 10 users, but they charge me $250 per month because I use several cores, and so I used 2400 hours of accumulated instance time. Insane for only 10 users and not that much traffic!
Can I reduce or limit the number of cores used by my back end?
As you can see here, the price for App Engine Flexible is computed per vCPU core hour of usage. Basically, you are billed for the instance hours your deployment consumes whether or not users actually reach your back-end project; traffic matters only insofar as many users reaching your App Engine Flexible deployment increase the resources required to serve them, and thus the price.
Yes, you can reduce the number of cores used by the back end through the resource settings of your app.yaml configuration file. You might also want to check the service scaling settings, to control the way App Engine Flexible assigns more resources based on your service's demands.
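As a minimal sketch (the numbers are placeholders to tune for your actual load), a Flexible service can be capped to a single core and kept from scaling out aggressively like this:
env: flex
resources:
  cpu: 1
  memory_gb: 1
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2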
I'm updating my web app, which has previously been using the default configuration. Now I'm starting to use modules, trying this config:
application: newkoolproject
# Other settings here...
version: newkool
runtime: python27
api_version: 1
threadsafe: true
automatic_scaling:
  min_idle_instances: 5
  max_idle_instances: automatic  # default value
  min_pending_latency: automatic  # default value
  max_pending_latency: 30ms
  max_concurrent_requests: 50
I pay for instance hours, data reads and complex searches, totalling a few dollars daily on my current budget (where my limit is 7 USD per day, to avoid sudden spikes due to DoSing or other technical issues).
Would it be feasible for me to try to squeeze my app into the free tier, using memcache and other technologies to reduce costs? Or should I forget about reaching the free tier (< 28 instance hours, etc.) and instead go for a configuration that gives a more optimal UX? How will the change affect my costs?
Update 141010 18:59 CET
I could add appstats
You'll need to turn Appstats on in your local App Engine development server and go through your typical user flow. Here are the instructions on how to do this: https://cloud.google.com/appengine/docs/python/tools/appstats
Make sure you turn RPC cost calculation on:
appstats_CALC_RPC_COSTS = True
Once you go through a typical user flow, go to localhost:8080/_ah/stats and it will estimate how much specific calls and flows will cost when you get to production. It's a really great tool, as it even helps identify bottlenecks and slow-running areas within your application.
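For a python27 app this is typically wired up in appengine_config.py; a minimal sketch (assuming the stock WSGI setup, with builtins: - appstats: on in app.yaml to expose /_ah/stats) looks like this:
# appengine_config.py
appstats_CALC_RPC_COSTS = True  # estimate the dollar cost of recorded RPCs

def webapp_add_wsgi_middleware(app):
    # Wrap the WSGI app so Appstats records every RPC made while handling a request
    from google.appengine.ext.appstats import recording
    return recording.appstats_wsgi_middleware(app)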
Google's recommendation is to not only use memcache, but also to split work into smaller units (as much as possible) by leveraging task queues.
UPDATE: Simple memcache usage example
from google.appengine.api import memcache

my_results = memcache.get("SOME-KEY-FOR-THIS-ITEM")
if not my_results:
    # Do the work here and assign the result to my_results
    memcache.set("SOME-KEY-FOR-THIS-ITEM", my_results)
return my_results
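And to illustrate the task-queue half of Google's recommendation above, a minimal sketch (the /worker URL and the chunk_id parameter are made up for this example) that defers one small unit of work to a push queue:
from google.appengine.api import taskqueue

def enqueue_chunk(chunk_id):
    # Push one small unit of work onto the default queue; a handler mapped
    # to /worker picks it up outside the user-facing request.
    taskqueue.add(url='/worker', params={'chunk_id': chunk_id})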