I've deployed the helloworld samples on App Engine standard using the java7 and java8 runtimes and noticed that java8 takes on average 6 times longer to spin up a new instance. I used B1 instances in my tests but noticed the same on B2, and I'm not willing to use more expensive instance classes because I currently have no traffic at all.
Is this because java8 is still in BETA, or is it a limitation we should expect from the java8 runtime?
Related
I have many App Engine versions that are still active but not used because they are old. What costs can they generate? What do you do with old App Engine versions? Do you delete them or deactivate them?
In the documentation I can't find any reference to the costs of old versions.
https://cloud.google.com/appengine/pricing?hl=it
UPDATE:
(GAE STANDARD)
Thank you
It's a poorly documented aspect of App Engine. What you describe as versions that are "not used" are more specifically versions that don't receive traffic. But depending on your scaling configuration (essentially defined in your app.yaml file), there may not be a 1:1 relationship between traffic and the number of active instances serving a version.
For example, I'm familiar with using "automatic_scaling" with min_instances = 1. This prevents a service from scaling to zero instances (and thus from adding latency to the first incoming request after some idle time), but it also means that any version, until deleted, generates a baseline cost of 1 instance running 24/7.
Also, I've found that the estimated number of instances displayed in the dashboard you screenshotted can be misleading (more specifically, it can show 0 instances while there is actually one running).
Note that if you do not have any scaling-related configuration in your app.yaml file, you should check which default values App Engine currently applies.
It's tricky when you get started and I'm sure I'm not the only one who lost most of the free trial budget because of this.
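To make the configuration discussed above concrete, here is a minimal app.yaml sketch (the runtime and service name are illustrative placeholders, not from the question; the key point is that min_instances: 1 keeps one instance billed around the clock for every version of the service that still has it):

```yaml
runtime: java11       # any second-generation standard runtime, for illustration
service: my-service   # hypothetical service name
automatic_scaling:
  min_instances: 1    # keeps one instance warm 24/7: a baseline cost per deployed version
  max_instances: 5    # caps cost during traffic spikes
```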
There is actually a limit on the number of versions you can have, depending on your app's pricing:
Limit          Free app   Paid app
Max versions   15         210
It seems that you can keep them active in case you want to switch between versions by migrating or splitting traffic between them, but you won't be charged for them as long as you don't exceed the 15-version limit.
Hi, we are planning to migrate from Mule 3 to Mule 4, and I have a few questions about sizing cores on CloudHub vs RTF.
Currently we have Mule runtimes installed on AWS (on-premise): 2 VMs with 2 cores each, so that is a 4-core subscription. They are clustered as a server group, with 40 applications deployed on both.
Question 1) My understanding is that we are using 2 cores to run the 40 applications and the other 2 cores for high availability. Let me know if this is correct, and if the same 40 apps have to be moved to CloudHub with HA, do I need 8 cores?
Coming to RTF, I guess we need 3 controller and 3 worker nodes. Suppose I take AWS VMs with a 3-core capacity: that is 3 x 3 = 9 cores in use, and I can deploy the same 40 applications on those 3 VMs (it could be more than 40 apps as well). This is with high availability.
When it comes to CloudHub, if I need to deploy 40 apps with high availability (each app deployed on 2 workers), it would take 8 cores, and I cannot deploy a single application beyond those 40.
Question 2) On RTF, even though I have a 4-core VM, I can deploy 50 or 60 apps; but on CloudHub, if I take a 4-core subscription, I cannot deploy more than 40 apps. Is this correct?
Yes, you're right. Currently (Dec 2021), the minimum allocation of vCores when deploying applications to CloudHub is 0.1 of a vCore. So to your 1st question: yes, correct, you would strictly need 8 vCores, assuming 2 workers per application for "somewhat" high availability. True end-to-end high availability would more likely need 3 workers, so that if one dies, you still have HA across the other 2.
To the second question: when you deploy to RTF, or even run the Mule runtime directly on, say, a VM or a container, you have more flexibility in how much of a vCore you allocate to each application. Your MuleSoft account manager would be able to work out with you how much that would mean.
Last but not least, you could also think about different deployment models and cost-saving approaches. Depending on your scenario, this could mean using, say, a service mesh, so that you drastically reduce the number of vCores you use; you can also come up with a strategy of grouping endpoints/resources of different applications into a single one. Example: if you have 2 different applications, both related to customer data or somehow the same domain, you could group them together.
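To make the arithmetic in this thread concrete, here is a small Python sketch of the CloudHub sizing logic (the 0.1 vCore minimum per worker and the 2-workers-per-app HA setup are the assumptions stated above; the helper names are mine):

```python
def cloudhub_vcores(apps: int, workers_per_app: int, vcore_per_worker: float = 0.1) -> float:
    """Total vCores consumed when every app gets `workers_per_app` workers."""
    return apps * workers_per_app * vcore_per_worker

def max_apps(subscription_vcores: float, workers_per_app: int, vcore_per_worker: float = 0.1) -> int:
    """How many apps fit in a subscription at the minimum worker size."""
    return int(round(subscription_vcores / (workers_per_app * vcore_per_worker)))

# 40 apps, 2 workers each (HA), 0.1 vCore per worker -> 8 vCores
print(cloudhub_vcores(40, 2))   # 8.0
# A 4-vCore subscription with single 0.1-vCore workers fits at most 40 apps
print(max_apps(4, 1))           # 40
```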
Ed.
I want to deploy containerized code using one of Google's serverless options. From what I understand Google has two options for this:
Google App Engine Flexible Environment
Google Cloud Run (in beta)
I've watched the 2019 Google Next talk Where Should I Run My Code? Choosing From 5+ Compute Options. And I read Jerry101's answer to the general question "What is the difference between Google App Engine and Google Cloud Run?".
To me it basically sounds like Cloud Run is the answer to the limitations of using Google App Engine Flexible Environment.
The reasons I can think of to choose App Engine Flexible Environment over Cloud Run are:
Legacy - if your code currently relies on App Engine Flex you might not want to deal with moving it
Track record - App Engine Flex has been around for a while in general availability and in that sense has a track record, whereas Cloud Run is just in Beta
But those are both operation type considerations. Neither is a concern for me. Is there a technical advantage to choosing App Engine Flex over Cloud Run?
Thanks
Note: The beta Serverless VPC Access for App Engine is only available for the standard environment as of this question posting April 2019, not for Flex, so that's not a consideration in the question of App Engine Flex vs Cloud Run
Pricing/Autoscaling: The pricing models of the GAE Flexible Environment and Cloud Run are a bit different.
In GAE Flexible, you are always running at least 1 instance at any time. So even if your app is not getting any requests, you’re paying for that instance. Billing granularity is 1 minute.
In Cloud Run, you are only paying when you are processing requests, and the billing granularity is 0.1 second. See here for an explanation of the Cloud Run billing model.
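A rough sketch of how the two billing granularities play out (the function names are mine, and this ignores rates, CPU/RAM dimensions, and free tiers; it only illustrates the rounding units mentioned above):

```python
import math

def flex_billed_minutes(seconds_running: float) -> int:
    """GAE Flexible: always at least 1 instance, billed in 1-minute increments."""
    return math.ceil(seconds_running / 60)

def cloud_run_billed_units(request_seconds: float) -> int:
    """Cloud Run: billed only while serving requests, in 0.1-second increments."""
    return math.ceil(request_seconds / 0.1)

# A Flex instance up for 90.5 s is billed for 2 full minutes...
print(flex_billed_minutes(90.5))      # 2
# ...while 0.25 s of actual request time on Cloud Run bills 3 units of 100 ms
print(cloud_run_billed_units(0.25))   # 3
```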
Underlying infrastructure: Since GAE Flexible runs on VMs, it is a bit slower than Cloud Run to deploy a new revision of your app and to scale up. Cloud Run deployments are faster.
Portability: Cloud Run uses the open source Knative API and its container contract. This gives you flexibility and freedom to a greater extent. If you wanted to run the same workload on an infra you manage (such as GKE), you could do it with "Cloud Run on GKE".
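Because Cloud Run implements the Knative serving API, the same workload can be described by a portable Knative Service manifest. A minimal sketch (the service name and image are placeholders) that would also deploy on a Knative installation you manage, such as "Cloud Run on GKE":

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                    # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello:latest  # placeholder container image
          ports:
            - containerPort: 8080                # container contract: serve HTTP on this port
```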
I'd actually suggest that you seriously consider Cloud Run over App Engine.
Over time, I've seen a few comments of a "new" App Engine in the works, and it really seems that Cloud Run is that answer. It is in beta, and that can be an issue. I've seen some companies use beta services in production and others wait. However, if I am going to start a new app today - it's going to be Cloud Run over App Engine Flex for sure.
Google is very deep into Kubernetes as a business function. As Cloud Run is sitting on GKE - this means that it is indirectly receiving development via other teams (The general GKE infrastructure).
Conversely, App Engine is on some older tech. Although it's not bad - it is "yesterday's" technology. Google, to me, seems to be a company that gets really excited about what is new and what is being highly adopted in the industry.
All this said, when you wrap your services in a container, you should be able to run them anywhere, right? Well, anywhere there is a container platform. You can front your services with something like Cloud Endpoints and deploy to both App Engine and Cloud Run, swapping back and forth. At that point, the limitation might be the services that are offered. For instance, Cloud Run currently doesn't support some items, like Cloud Load Balancing or Cloud Memorystore. That might be a blocker today.
Short story: App Engine is something real and relatively stable; Cloud Run is pretty much just a draft/idea, very unstable.
Long story:
Being in alpha/beta, Google Cloud Run may still go through many changes. If you are old enough, you might remember how dramatically App Engine pricing changed: it promised CPU/RAM-based pricing, then decided that was not "possible" (or at least not very profitable) and moved to VM-based pricing, then shipped a decent App Engine release (App Engine Flex, or whatever name it had at the time) but also increased the price again by adding a minimum-instance model. Not to mention the countless API/breaking changes or the changes to limits.
Cloud Run is based on gVisor, which has some limitations, so depending on the language/library you use and what you do, it may break (or just Google's implementation may break) at some point, and there is nothing you can do about it (i.e., you cannot patch the system); it can ruin your productivity and potentially your business. You may want to have a look at its current issues.
Free advice: whether you choose App Engine or Cloud Run, avoid proprietary APIs/services such as Google Datastore. They may ruin your business: pricing, APIs, and limits will change, and there is no real open source or paid alternative, so your code is not portable. Your code is pretty much worthless outside of Google Cloud.
Disclaimer: I've been burned by App Engine changes and Datastore lock-in, so my opinion may be biased.
I have an ML model behind a REST API, running as a microservice. When I tried to run it with Cloud Run, it deployed but just did not work. I had to switch back to the App Engine flexible environment.
Cloud Run (fully managed) currently (Jul 2020) has a RAM limit of 2 GB. For better hardware I would have to go for Anthos with GKE infrastructure, but that needs at least 4 instances to work properly.
Mine being a tiny application, I settled for the App Engine flexible environment. Though autoscaling settings require a minimum of 2 instances, in the early days it could be managed with manual scaling and a limit of 1 instance.
EDIT:
As of Aug 22, 2020, the RAM limit is 4 GB and the number of cores is 2 for fully managed Cloud Run.
Main difference is background tasks.
In Cloud Run, everything is kicked off by a request, and once that request completes, the instance won't stay up any longer.
App Engine also gave you some built-in freebies like memory caching, but I don't think that's true of App Engine Flex.
For a straightforward HTTP API, the differences are negligible, and you can get some of the things that App Engine gives you with other GCP products (Cloud Scheduler, Cloud Tasks).
You can check out this video for a comparison and a demo of Cloud Run:
https://www.youtube.com/watch?v=rVWopvGE74c
App Engine Flexible focuses on "code first" and is developer-oriented; an App Engine app is made up of multiple services, and you really don't have to do any kind of naming when deploying your applications.
Characteristics of the GAE flexible environment:
It is not possible to downscale to ZERO
Runs source code written in a version of any of the supported programming languages: Python, Java, Node.js, Go, Ruby, PHP, or .NET
Runs in a Docker container that includes a custom runtime or source code written in other programming languages
Can use or depend on frameworks that include native code
Accesses the resources or services of your Google Cloud project that reside in the Compute Engine network
Maximum request timeout: 60 minutes
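The characteristics above map onto a short app.yaml. A minimal sketch for a custom-runtime deployment on the flexible environment (the instance counts are illustrative; note that the minimum cannot be zero):

```yaml
runtime: custom          # bring your own Dockerfile / any language
env: flex
automatic_scaling:
  min_num_instances: 1   # Flex cannot scale down to zero
  max_num_instances: 3
```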
Cloud Run is a managed compute platform that lets you run containers invocable via requests or events. Everything is a service, whether it's an actual service or an application with a web interface, so think of its use as deploying a service rather than an application.
Characteristics of Cloud Run :
It is serverless: it abstracts away all infrastructure management
It requires your application to be stateless.
GCP will spin up multiple instances of your app to scale it dynamically
Downscale to ZERO
You can use the URL below to see the differences between Cloud Run and App Engine.
Hosting Options
Sometimes there are reasons to use App Engine over Cloud Run; for example, Cloud Run doesn't support background processes, and its response timeout is only 15 minutes.
My Java app runs on the standard Google App Engine (GAE) and is configured with 1 minimum instance and 1 maximum instance. It is also configured with 1 minimum idle instance, which allows the single instance to run non-stop. I ran a timer for 1 hour and then checked how many instance hours had elapsed: it indicates slightly over 2 hours. How is this possible when only a single instance is running?
From your configuration you should actually be having 2 instances running:
one resident instance, due to the minimum idle instance configuration. This serves only sudden transient traffic peaks while GAE spins up the necessary dynamic instances, see min-idle-instance on GAE/J and Why do more requests go to new (dynamic) instances than to resident instance?
one dynamic instance, due to the min/max 1 instance configs, handling the regular traffic
Note: the instance class also matters (but probably it's not your case here). From Standard environment instances:
Important: When you are billed for instance hours, you will not see any instance classes in your billing line items. Instead, you will see the appropriate multiple of instance hours. For example, if you use an F4 instance for one hour, you do not see "F4" listed, but you see billing for four instance hours at the F1 rate.
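The billed-hours behavior in that quote is a straight multiplication; a tiny sketch (the F1/F2/F4 multipliers follow the quoted note, while the F4_1G factor is an assumption of mine):

```python
# Instance-class multipliers relative to the F1 rate
MULTIPLIER = {"F1": 1, "F2": 2, "F4": 4, "F4_1G": 6}  # F4_1G factor is an assumption

def billed_instance_hours(instance_class: str, clock_hours: float) -> float:
    """Hours that appear on the bill, expressed at the F1 rate."""
    return MULTIPLIER[instance_class] * clock_hours

# One clock hour on an F4 shows up as four instance hours at the F1 rate
print(billed_instance_hours("F4", 1))  # 4
```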
We just migrated to google cloud endpoints v2 / java8 and found that latency has gone up. We see this kind of request in traces often:
https://servicecontrol.googleapis.com/v1/services/<myapi>.endpoints.<myappid>.cloud.goog:check
That call takes around 14 ms each time. Also, memory usage somehow went up, and our B2 frontends suddenly start blocking, often with delays of 10 s, which could be a problem with connection pooling not being done right, but it was somehow not present with endpoints-v1 and java7 before.
At the same time, we see 0 errors reported per instance (which is not true; it is aborting requests after around 10-30 s all the time) and we cannot get any stack traces to see where a request was aborted, like before.
Killing/restarting an instance solves the 10 s problem for some time, but that is naturally not a solution.
Are there any steps that have to be done to get to the promised performance improvements of v2?
TL;DR - GCE 2.0 alone is faster and more reliable than GCE 1.0, but don't use API Management or you'll give back all those gains and then some.
I too was seeing major slowness issues when testing out GCE 2.0, and I couldn't possibly justify subjecting my users to such terrible latency drops, so I set out to determine what's going on.
Here was my methodology:
I set up a minimum viable App Engine app consisting of just one simple API call that returns a server timestamp using Endpoints 1.0, Endpoints 2.0, and Endpoints 2.0 with API Management. You can see all the code for these here: https://github.com/ubragg/cloud-endpoints-testing
I deployed each of these to a separate App Engine app and tested the API using the API Explorer at these links (so you can try for yourself):
GCE 1.0
GCE 2.0
GCE 2.0+AM
The results?
Here are the results of a bunch of requests in rapid succession on each of the APIs:
         GCE 1.0   GCE 2.0   GCE 2.0+AM
average   434 ms     80 ms      482 ms
median     90 ms     81 ms      527 ms
high     2503 ms     85 ms      723 ms
low        75 ms     73 ms      150 ms
As you can see, GCE 2.0 without AM was both fast and consistent. Even GCE 1.0 usually was pretty fast, but would occasionally have some troublesome outliers. GCE 2.0 with AM was pretty much always unacceptably slow, only dipping into the "maybe acceptable" range on rare occasions.
Note that all of these times are from the client perspective reported by the API Explorer. Here are the server reported averages for the same requests from the App Engine dashboard over the same time period:
         GCE 1.0   GCE 2.0   GCE 2.0+AM
average    24 ms     14 ms      395 ms
So bottom line is, if you care about latency, API Management isn't really an option. If you're curious about how to run GCE 2.0 without API Management, simply be sure NOT to follow any of the instructions here: https://cloud.google.com/endpoints/docs/frameworks/python/adding-api-management.
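The average/median/high/low rows in the tables above are straightforward to reproduce from raw timings. A short sketch using placeholder numbers (not the author's actual measurements):

```python
import statistics

def summarize(latencies_ms):
    """Return the four summary rows used in the latency tables above."""
    return {
        "average": statistics.mean(latencies_ms),
        "median": statistics.median(latencies_ms),
        "high": max(latencies_ms),
        "low": min(latencies_ms),
    }

# Placeholder timings, e.g. collected from a burst of API Explorer calls
print(summarize([75, 90, 85, 2503, 110]))
```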
Using the base API framework without the management library (of which the 14 ms calls you mentioned are a part) should give you improved latency. There is some increased memory usage in the v2 frameworks, as they now incorporate code that was previously a separate service. If you are not using API management, I would suggest removing the library and seeing if it helps; it should eliminate the 14 ms of latency and reduce memory use a fair amount, since you won't be loading as much code or data.