I'm trying to configure my GAE project to use min_instances=0 with the automatic scaling option.
I followed all the steps in the docs, but after clicking "EXECUTE" I received a Bad Request error:
The error says "This field is not supported for VM versions", but I'm using GAE only.
Also, during the first execution, the service asked me for some authorization, which I granted.
Is there some way to fix this? I could not find any explanation of how to fix this issue.
@GonçaloAlbino observed that I was using the Flex environment instead of the Standard environment, so I'm able to use automaticScaling.min_total_instances instead.
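For reference, a rough sketch of how the two environments differ in app.yaml (runtime names and values are examples only): Standard's automatic_scaling accepts min_instances: 0, while the Flexible ("VM") environment uses min_num_instances (automaticScaling.min_total_instances in the Admin API), which cannot go below 1.

    # Standard environment: can scale to zero
    runtime: go121
    automatic_scaling:
      min_instances: 0
      max_instances: 2

    # Flexible environment: the equivalent knob cannot go below 1
    runtime: go
    env: flex
    automatic_scaling:
      min_num_instances: 1
      max_num_instances: 2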
Following the golang library instructions, if you write logs with the client library, where can one see those logs when running your server locally during development (e.g. via go run main.go)?
In my case (not sure if it's relevant) I'm using the library as part of Go logic on App Engine, and even the relevant-looking "viewing logs" instructions for those docs don't mention local development explicitly. Is that because running gcloud app logs tail and seeing local server logs should "just work", or because there's no way to see logs from a local logging SDK interaction?
It's a good question. The Cloud Logging libraries do appear to be bound to Google's Cloud Logging service, but for local development (your question), and because loose coupling is generally a good principle, these libraries really ought to be pluggable. Why shouldn't services running on e.g. GCP route logs to e.g. AWS?
With OpenTelemetry (née OpenCensus), Google (and others) promote the ability to decouple metric and trace production from the services that consume them, and logs aren't fundamentally different.
Logrus, a popular logging library for Go, supports pluggable logging via Hooks, and an old (!) Stackdriver Logging hook implementation exists; it should be straightforward to upgrade it to the current API version.
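For illustration only, a minimal sketch of what such a Hook might look like against the current cloud.google.com/go/logging client (the project ID, log name, and level mapping are placeholders of mine, not taken from that old implementation):

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/logging"
        "github.com/sirupsen/logrus"
    )

    // stackdriverHook forwards logrus entries to a Cloud Logging logger.
    type stackdriverHook struct {
        logger *logging.Logger
    }

    // Levels reports which logrus levels the hook fires for.
    func (h *stackdriverHook) Levels() []logrus.Level {
        return logrus.AllLevels
    }

    // Fire maps the logrus level to a Cloud Logging severity and sends the entry.
    func (h *stackdriverHook) Fire(e *logrus.Entry) error {
        sev := logging.Info
        switch e.Level {
        case logrus.WarnLevel:
            sev = logging.Warning
        case logrus.ErrorLevel, logrus.FatalLevel, logrus.PanicLevel:
            sev = logging.Error
        }
        h.logger.Log(logging.Entry{Payload: e.Message, Severity: sev})
        return nil
    }

    func main() {
        ctx := context.Background()
        client, err := logging.NewClient(ctx, "my-project-id") // placeholder project ID
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close() // Close flushes any buffered entries

        logrus.AddHook(&stackdriverHook{logger: client.Logger("app-log")}) // placeholder log name
        logrus.Info("hello from logrus via Cloud Logging")
    }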
In the meantime, I think your question would benefit from being posted to Google's public issue tracker for Stackdriver (sic) Logging (link), and I'm going to ask someone who's very familiar with Cloud Logging, as she may have some insight into this for us.
Update
I emailed some former colleagues at Google and learned that OpenTelemetry will eventually encompass logging. This is mentioned briefly on the project's About page.
tl;dr Tentatively answering myself: that's not supported. Instead, one has to conditionally swap in calls to a regular logger when the environment (e.g. an empty GAE_INSTANCE env variable) indicates you're on localhost.
Walking through the code under the NewClient(...) call in the logging package, I end up at the spot where the upstream API is actually being called (note the RPC context used by the very last turtle; I never saw any logic along the way that seemed to switch to something for local development), so I suspect there really is no local emulation capturing these logs.
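A minimal sketch of that conditional swap, assuming an empty GAE_INSTANCE means local development (the project ID and log name are placeholders):

    package main

    import (
        "context"
        "log"
        "os"

        "cloud.google.com/go/logging"
    )

    func main() {
        // GAE_INSTANCE is set by App Engine; if it is empty, assume we are
        // running locally (e.g. `go run main.go`) and use the standard logger.
        if os.Getenv("GAE_INSTANCE") == "" {
            log.Println("running locally, using the standard logger")
            return
        }

        // On App Engine, send logs to Cloud Logging instead.
        ctx := context.Background()
        client, err := logging.NewClient(ctx, "my-project-id") // placeholder project ID
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // StandardLogger returns a *log.Logger whose output goes to Cloud Logging.
        client.Logger("app-log").StandardLogger(logging.Info).Println("running on App Engine")
    }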
EDIT: See DazWilkin's helpful answer below for more context
When deploying using gcloud app deploy I get the following error:
Timed out waiting for the app infrastructure to become healthy
I contacted GCP Support and they told me the same thing I had read in other threads:
The error you are referring to may be related to the Compute Engine "In-Use IP Addresses" quota limit. You can view your current quota limit information in the GCP menu under "IAM & Admin > Quotas".
I checked the "In-Use IP Addresses" quota and it doesn't seem like I have a problem with quotas:
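(For what it's worth, the same per-region quota can also be inspected from the command line; the region name below is just an example. The output lists each quota's limit and current usage, including IN_USE_ADDRESSES.)

    gcloud compute regions describe southamerica-east1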
Looking into the error, I found that the Activity tab shows an error when deploying. Apparently, when App Engine tries to delete a VM, the process gets stuck in a loop trying to delete it. You can see the error:
(I intentionally covered the project ID)
Edit: It seems like the problem is only with southamerica-east1. I created a new project in southamerica-east1 but kept getting the same error, so I then created a new project with App Engine in us-west2 and it worked like a charm (I used the same application and app.yaml). I wonder if the problem is in GCP's southamerica-east1 region or an unknown bad configuration on my side.
This is probably related to this issue: https://issuetracker.google.com/u/2/issues/73583699. It does mention the "In-Use IP Addresses" quota, but many people have posted in recent days (Nov 2018) indicating that they are seeing the error and have verified that they have not hit their quota.
Unfortunately, no solution has been posted and there hasn't been any recent comment from the devs.
First, our apologies that you’ve experienced this issue. Be assured that we are aware of the situation and the team is working hard to resolve it.
Our goal is to make sure that there are available resources in all zones. This type of issue is rare. When a situation like this occurs, or is about to occur, our team is notified immediately and the issue is investigated.
We recommend deploying and balancing your workload across multiple zones or regions to reduce the likelihood of an outage. Please review our documentation, which outlines how to build resilient and scalable architectures on Google Cloud Platform.
For the time being, you can try relaxing your requirements (e.g. requesting a smaller instance or one with fewer resources) or removing the external IP requirement.
If that proves not to be enough, you can try deploying your application to another region.
Again, we want to offer our sincerest apologies.
Thanks for understanding.
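As a sketch of the "smaller instance" suggestion above, a Flexible environment app.yaml can explicitly request fewer resources (the values here are examples, not recommendations):

    resources:
      cpu: 1
      memory_gb: 1
      disk_size_gb: 10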
In the end we didn't find a real solution, so we moved all our services from Brazil to US-2. I'm not sure if the region is the problem, but in US-2 everything works like a charm.
I'm trying to debug 502 errors coming out of the nginx container with my AppEngine Flex setup.
I noticed that the logs indicate liveness and readiness checks being spammed very rapidly (see attached).
For clarification, this is currently running as a single instance in manual_scaling mode.
check_interval_sec is set to 30 s on liveness_check and 5 s on readiness_check.
Can anyone provide insight into what is going on here?
It looks like you set up the readiness and liveness checks too aggressively in your app.yaml. Please keep in mind that the checks run against every instance, so if you have a lot of instances, they will occur frequently.
If you only have one instance set up, then the behavior contradicts what the documentation describes. Please file an issue with us on the issue tracker.
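If the checks do turn out to be too aggressive, the split health checks can be relaxed in app.yaml along these lines (paths and values are examples only):

    liveness_check:
      path: "/liveness_check"
      check_interval_sec: 60
      timeout_sec: 4
      failure_threshold: 4
      success_threshold: 2
    readiness_check:
      path: "/readiness_check"
      check_interval_sec: 30
      timeout_sec: 4
      failure_threshold: 2
      success_threshold: 2
      app_start_timeout_sec: 300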
I am having problems with the Bluemix Monitoring and Analytics service.
I have 2 applications with bindings to a single Monitoring and Analytics service. Every ~1 minute I get the following log line in both apps:
ERR [Resource Monitoring][ERROR]: JsonSender request error: Error: unsupported certificate purpose
When I remove the bindings, the log message does not appear. I also grepped my code for anything related to "JsonSender" or "Resource Monitoring" and did not find anything.
I am doing some major refactoring work on our server, which might have broken things. However, our code does not use the Monitoring service directly (we don't have a package that connects to the monitoring server or anything like that), so I would be very surprised if the problem were due to the refactoring changes. I did not check the logs before making the changes.
Any ideas will help.
Bluemix has 3 production environments: ng, eu-gb, and au-syd. I tested ng and eu-gb, each with 2 applications bound to the same M&A service, and also tested with multiple instances. They all work fine.
Meanwhile, I received a similar problem report claiming the application uses Node.js 4.2.6.
So there is some more information we need in order to identify the problem:
1. Which version of Node.js are you using (the Bluemix default or another one)?
2. Which production environment are you using? (ng, eu-gb, au-syd)
3. Are you using any environment variables in your application (either ones created in code or USER-DEFINED variables)?
4. One more thing: could you please try deleting the M&A service and creating it again, in case we are hitting a previous fault of M&A?
cf ds <your M&A service name>
cf cs MonitoringAndAnalytics <plan> <your M&A service name>
Node.js versions 4.4.* all appear to work.
Node.js uses OpenSSL, and apparently it did/does not like how one of the M&A server certificates was constructed.
Unfortunately, Node.js does not expose the OpenSSL verify-purpose API.
Please consider upgrading to 4.4 while we consider how to change the server's certificates in the least disruptive manner, as there are other application types that do not have an issue with them (e.g. Liberty and Ruby).
Setting Node.js version 4.2.4 in package.json worked for me; however, this is just a workaround that bypasses the issue. The actual fix is being handled by the core team. Thanks.
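For reference, the version pin mentioned above goes in the engines field of package.json, roughly like this:

    {
      "engines": {
        "node": "4.2.4"
      }
    }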
I'm trying to understand how to make JSON-RPC calls in Go, as would be used in a Google App Engine app. So far I understand that I should somehow call jsonrpc.Dial, but I don't understand what the "network" and "address" parameters should be. Can anyone provide sample, working code that demonstrates how to use JSON-RPC in Go?
I have already written an answer to your question on the go-nuts group, but for completeness, here it is:
Go's jsonrpc package isn't compatible with GAE yet.
Reference: https://groups.google.com/d/msg/google-appengine-go/uQ0cv0m9j0E/H3VCrFgEWvcJ
It's probably a good idea to read the full thread there, since it describes the limitations on GAE nicely and links to a patched library with lots of workarounds... The issue is already known, but has not been solved yet.
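For completeness, outside of App Engine (where raw sockets are available), jsonrpc.Dial takes an ordinary network and address, e.g. "tcp" and "host:port". A minimal sketch against a hypothetical net/rpc Arith service (the address and method names are examples, and this will not run on classic GAE for the reasons above):

    package main

    import (
        "log"
        "net/rpc/jsonrpc"
    )

    // Args matches the argument struct of a hypothetical net/rpc "Arith" service.
    type Args struct{ A, B int }

    func main() {
        // "tcp" is the network and "localhost:1234" the address of a JSON-RPC
        // server set up with net/rpc and jsonrpc.ServeConn; both are examples.
        client, err := jsonrpc.Dial("tcp", "localhost:1234")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        var sum int
        if err := client.Call("Arith.Add", Args{A: 2, B: 3}, &sum); err != nil {
            log.Fatal(err)
        }
        log.Println("2 + 3 =", sum)
    }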