App Engine IN_USE_ADDRESSES quotas for External IP? - google-app-engine

I am hitting this quota and I was wondering whether this quota is for External IP?
INVALID_ARGUMENT: The following quotas were exceeded: IN_USE_ADDRESSES (quota: 8, used: 7 + needed: 2).
I have a few services that work via pub/sub and make requests outside. Do I still need an External IP? Or can I somehow set and use an Internal IP?
In the meantime I made a request to increase it, but I want to understand this.

This quota is related to the ephemeral IPs used by App Engine. App Engine will always use External IPs; you cannot avoid that. You may want to take a look at how App Engine manages IPs in this documentation.
Every time you deploy a new version of your app, App Engine by default will retain the old versions with their IPs. You can avoid this situation by stopping the previous versions on deployment with the flag --stop-previous-version.
This is also already answered here.

Related

How to do API calls with Google App Engine or Cloud Composer when the API only allows restricted IPs

I have jobs and APIs hosted on Cloud Composer and App Engine that work fine. However, for one of my jobs I would need to call an API that is IP restricted.
As far as I understand, there's no way to have a fixed IP for App Engine and Cloud Composer workers, and I don't know what the best solution is.
I thought about creating a GCE instance with a fixed IP that would be switched on/off by Cloud Composer or App Engine, and then the API call would be executed by the startup-script. However, it restrains this to only asynchronous tasks and it seems to add a non-desired step.
I have been told that it is possible to set up a proxy but I don't know how to do it and I did not find comprehensive docs about it.
Would you have advice for this use case?
Thanks a lot for your help.
It's probably out of scope for you, but you could whitelist the whole range of App Engine IPs by performing a lookup on _cloud-netblocks.googleusercontent.com
In this case you are whitelisting any App Engine application, so be sure this API has another kind of authorization and good security. More info on the App Engine KB.
What I would do is install or implement some kind of API proxy on GCE. It's a bummer to have a VM on 24/7 for this kind of task so you could also use an autoscaler to scale to 0 (not sure about this one).
As you have mentioned: you can set up a TCP or UDP proxy in GCE as a relay, and then send requests to the relay (which then forwards those requests to the IP-restricted host).
However, that might be somewhat brittle in some cases (and introduces a single point of failure). Therefore, another option you could consider is creating a private IP Cloud Composer environment, and then using Cloud NAT for public IP connectivity. That way, all requests from Airflow within Composer will look like they are originating from the IP address of the NAT gateway.
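The netblock lookup mentioned above returns SPF-style TXT records that chain through include: entries, so building an allowlist means walking the chain. The following is an added example, not code from the original answers; the record contents and the .example include names are constructed sample data using documentation address ranges, and lookup_txt is a hypothetical stand-in for a live TXT query.

```shell
#!/usr/bin/env bash
# Added example: flatten the SPF-style TXT records behind
# _cloud-netblocks.googleusercontent.com into a CIDR list for an allowlist.

# Constructed sample records (documentation ranges, not real Google ranges).
# For a live lookup, replace the body with: dig +short TXT "$1"
lookup_txt() {
  case "$1" in
    _cloud-netblocks.googleusercontent.com)
      echo '"v=spf1 include:part1.netblocks.example include:part2.netblocks.example ?all"' ;;
    part1.netblocks.example)
      echo '"v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.0/24 ?all"' ;;
    part2.netblocks.example)
      # Includes itself, to exercise the loop guard below.
      echo '"v=spf1 include:part2.netblocks.example ip6:2001:db8::/32 ip4:192.0.2.0/24 ?all"' ;;
  esac
}

# Print every ip4:/ip6: CIDR reachable from record $1, following include:
# chains. $2 carries the names already visited on this chain.
expand_netblocks() {
  local name="$1" seen="${2:-}" token
  case " $seen " in *" $name "*) return 0 ;; esac   # include loop: stop
  seen="$seen $name"
  lookup_txt "$name" | tr -d '"' | tr -s ' ' '\n' | while read -r token; do
    case "$token" in
      include:*)   expand_netblocks "${token#include:}" "$seen" ;;
      ip4:*|ip6:*) echo "${token#ip?:}" ;;
    esac
  done
}

# De-duplicated allowlist. As the answer says, these ranges are shared by
# other tenants, so the API still needs its own authorization.
allowlist_cidrs() {
  expand_netblocks "${1:-_cloud-netblocks.googleusercontent.com}" | LC_ALL=C sort -u
}
```

The records change over time, so the list has to be regenerated on a schedule rather than resolved once.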

Google cloud deployment - Operation timed out

I have an existing Laravel site which I would like to host on Google App Engine. I've created the app.yaml file and made changes in the composer.json file. When I run the command gcloud app deploy, I get the operation timed out error. I've updated the config using gcloud config set app/cloud_build_timeout 1000 but still no luck.
It is possible that your In-use IP addresses quota in the region of your App Engine Flexible application has reached its limit. You can check your in-use addresses by clicking here and will be able to increase this by clicking the 'Edit Quotas' button in the Cloud Console.
To find out whether this is the issue causing the error, you can go to the "Activity" tab of your project home page. You may have warnings about quota limits and VMs failing to be created. App Engine by default leaves previous versions up and running, and they may be taking up the IP addresses. You can delete the previous versions and/or request an increase of your IP address quota limit. It is also advised to use up-to-date gcloud tools and SDK, which may resolve the issue.
If you follow the steps above but are still having trouble, I suggest that you create a new private issue in the Public Issue Tracker and provide the contents of your app.yaml file, error logs and Project ID. Google Cloud Support team will look further into the matter.

GAE shutdown or restart all the active instances of a service/app

In my app (Google App Engine Standard Python 2.7) I have some flags in global variables that are initialized (values read from memcache/Datastore) when the instance starts (at the first request). Those variables' values don't change often, only once a month or in case of emergencies (i.e. when the Google App Engine Taskqueue or Memcache services are not working well, which happened not more than twice a year as reported in GC Status but seriously affected my app and my customers: https://status.cloud.google.com/incident/appengine/15024 https://status.cloud.google.com/incident/appengine/17003).
I don't want to store these flags in memcache nor Datastore for efficiency and costs.
I'm looking for a way to send a message to all instances (see my previous post GAE send requests to all active instances ):
As stated in https://cloud.google.com/appengine/docs/standard/python/how-requests-are-routed
Note: Targeting an instance is not supported in services that are configured for auto scaling or basic scaling. The instance ID must be an integer in the range from 0, up to the total number of instances running. Regardless of your scaling type or instance class, it is not possible to send a request to a specific instance without targeting a service or version within that instance.
but another solution could be:
1) Send a shutdown message/command to all instances of my app or a service
2) Send a restart message/command to all instances of my app or service
I use only automatic scaling, so I can't send a request targeted to a specific instance (I can get the list of active instances using the GAE admin API).
Is there any way to do this programmatically in Python GAE? Manually in the GCP console it's easy when having a few instances, but for 50+ instances it's a pain...
One possible solution (actually more of a workaround), inspired by your comment on the related post, is to obtain a restart of all instances by re-deployment of the same version of the app code.
Automated deployments are also possible using the Google App Engine Admin API, see Deploying Your Apps with the Admin API:
To deploy a version of your app with the Admin API:
Upload your app's resources to Google Cloud Storage.
Create a configuration file that defines your deployment.
Create and send the HTTP request for deploying your app.
It should be noted that (re)deploying an app version which handles 100% of the traffic can cause errors and traffic loss due to:
overwriting the app files actually being in use (see note in Deploying an app)
not giving GAE enough time to spin up sufficient instances fast enough to handle high incoming traffic rates (more details here)
Using different app versions for the deployments and gradually migrating traffic to the newly deployed apps can completely eliminate such loss. This might not be relevant in your particular case, since the old app version is already impaired.
Automating traffic migration is also possible, see Migrating and Splitting Traffic with the Admin API.
It's possible to use the Google Cloud API to stop all the instances. They would then be automatically scaled back up to the required level. My first attempt at this would be a process where:
The config item was changed
The current list of instances was enumerated from the API
The instances were shut down over a time period that allows new instances to be spun up and replace them, and how time sensitive the config change is. Perhaps close one instance per 60s.
In terms of using the API you can use the gcloud tool (https://cloud.google.com/sdk/gcloud/reference/app/instances/):
gcloud app instances list
Then delete the instances with:
gcloud app instances delete instanceid --service=s1 --version=v1
There is also a REST API (https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1/apps.services.versions.instances/list):
GET https://appengine.googleapis.com/v1/{parent=apps/*/services/*/versions/*}/instances
DELETE https://appengine.googleapis.com/v1/{name=apps/*/services/*/versions/*/instances/*}
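The enumerate-then-delete process described in the last answer, as an added example that is not part of the original post. It uses only the two gcloud commands quoted above; s1 and v1 are the answer's placeholder service and version names, and the function names are hypothetical.

```shell
#!/usr/bin/env bash
# Added example: replace App Engine instances one at a time so the
# autoscaler can start fresh instances that read the new config.

# Instance IDs for one service/version, one per line.
list_instance_ids() {
  gcloud app instances list --service="$1" --version="$2" --format="value(id)"
}

# Delete each instance that existed when the function was called, pausing
# between deletions (default 60s, the figure suggested in the answer).
# Instances started during the run are not in the list and are left alone.
# Stops at the first failed delete instead of continuing to drain capacity.
# Prints the number of instances deleted.
rolling_restart() {
  local service="$1" version="$2" pause="${3:-60}" id ids count=0
  ids=$(list_instance_ids "$service" "$version") || return 1
  for id in $ids; do
    if [ "$count" -gt 0 ]; then
      sleep "$pause"
    fi
    # --quiet skips the interactive confirmation; gcloud's own output is
    # sent to stderr so stdout carries only the final count.
    gcloud app instances delete "$id" \
      --service="$service" --version="$version" --quiet >&2 || {
      echo "delete failed for $id, stopping" >&2
      return 1
    }
    count=$((count + 1))
  done
  echo "$count"
}

# Usage (requires an authenticated gcloud):
#   rolling_restart s1 v1 60
```

The caller needs permission to delete instances on the project, so a job that runs this is best given a service account scoped to that one task.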

My google app instances does not seem to be on correct region

I have just created one Google App Engine application and one 2nd Generation MySQL instance in the eu-west2 region. In the GCP Console they both seem to be in the eu-west2 region.
However, when I try to geolocate my IPs they seem to be somewhere in the US.
What should I do to use GCP in eu-west2 region?
my GCP instances:
their locations:
Google has an extensive world wide network. What you are seeing is us routing you to Google's closest Point of Presence (POP), which from that point on you're on a software defined network (SDN). What this means is we get your traffic on to our fast network as quickly as possible and abstract away the details of getting you to the machine in question.
Check latency from you to these hosts, then spin up a VM in Europe and check latency from that VM to these hosts - you'll find the numbers will confirm they really are in eu-west2.
I faced the same issue, you can find more about it here: Outgoing HTTP Request Location on Google App Engine
It is about the Google Network usage: the outgoing traffic comes from the "Point of Presence" instead of the location, and it can be dynamic. In my case, I have no solution, since it's mandatory for my API to make the requests from Brazil =\

How do I set up Google Compute Engine as a HTTPS server?

I'm running an app on a VM instance (instance-1) and would like myproject.appspot.com requests to be served by instance-1.
I read https://cloud.google.com/appengine/docs/java/modules/routing but it wasn't clear. Is there a way to say "send all traffic to my one instance"?
If I go to my (ephemeral) external IP address for that instance, I can see the server. But that won't work for an OAuth2 domain (no IP addresses allowed), so I need it to go through the named domain.
I'd be ok if I could use something constant like instance-1-dot-myproject.appspot.com but would prefer the base myproject.appspot.com to say "any instances? great! use that."
I think you want to use Managed VMs. They give you the flexibility of Google Compute Engine but work more like the PaaS that is Google App Engine.
You don't create the Google Compute Engine VM instances yourself, however, Managed VMs will spin them up on demand, using the Docker image you provide as the container of the code, data, etc.
Note that as of 29 Sep 2015, per the docs:
Beta
This is a Beta release of Managed VMs. This feature is not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes.
