I am setting up a private connection between my project's Google App Engine app and Compute Engine using a serverless VPC connector. The region of the GAE app is set to northamerica-northeast1 (which can't be changed, as described in the documentation), so I am trying to set up the VPC connector in the same region, following this guide.
When creating a new serverless VPC connector, there appears to be no option to select the northamerica-northeast1 region, as shown in the screenshot. Is there a way to set up the connector in the region already used by App Engine? At this point, the last thing I want to do is start an entirely new project just to re-set up App Engine and the other applications on GCP.
I have seen responses in the past about getting whitelisted for access to specific features like this, but I do not know how to go about that process either, if that is my best option.
Because of current product limitations, to set up a connection between App Engine and Compute Engine in this region, please file a feature request in our Public Issue Tracker so we can implement support for the northamerica-northeast1 region. To do so, please refer to link [1].
After getting in touch with a helpful Google support rep: this specific region is indeed not yet supported for serverless VPC connectors. My best (and really only) option was to set up a new project in a new region, match my VPC connector to that region, and then migrate/copy the datasets and apps I was using from the old project to the new one.
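For reference, here is a minimal sketch of creating the connector once you are in a supported region; the connector name, network, and IP range below are placeholders, and depending on your gcloud version the command may still require the beta component:

    # Create a Serverless VPC Access connector in a supported region
    gcloud compute networks vpc-access connectors create my-connector \
        --region=us-central1 \
        --network=default \
        --range=10.8.0.0/28

    # Confirm the connector reached the READY state
    gcloud compute networks vpc-access connectors describe my-connector \
        --region=us-central1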
I have investigated the options below:
Adding a custom domain through App Engine settings - doesn't provide a static IP; it uses Google name servers.
Setting up a VM and running it as a proxy - seems like a convoluted method with security/maintenance overhead.
HTTPS load balancer with an internet NEG - I am still investigating this; the documentation says:
You should do this when you want to serve content from an origin that is hosted outside of Google Cloud, and you want your external HTTP(S) load balancer to be the frontend.
Any suggestions/thoughts on these options would be greatly appreciated to help me choose the right solution.
A static IP for App Engine/Cloud Functions can be achieved with an HTTPS Load Balancer using a "Serverless Network Endpoint Group" backend.
The load balancer also enables multi-region serving for App Engine and other serverless components.
This is similar to an internet NEG with an HTTPS LB, but a serverless NEG can be mapped to Google serverless products such as Cloud Run, Cloud Functions, and App Engine. It is also possible to map multiple App Engine services from the same GCP project.
I was able to gain early access to serverless NEGs on my project and test this on my side. I will update this post when serverless NEGs are available for public access.
Edit (7/7/2020): Serverless NEG is now available in Beta and everyone can access it. See:
Serverless network endpoint groups overview
Setting up serverless NEGs
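As a rough sketch of the setup (assuming the serverless NEG commands in your gcloud version; at the time of writing they may still need the beta component, and all names below are placeholders):

    # Reserve the static IP that the load balancer frontend will use
    gcloud compute addresses create app-lb-ip --global

    # Create a serverless NEG pointing at the App Engine app
    gcloud compute network-endpoint-groups create appengine-neg \
        --region=us-central1 \
        --network-endpoint-type=serverless \
        --app-engine-app

    # Create a backend service and attach the serverless NEG to it
    gcloud compute backend-services create appengine-backend --global
    gcloud compute backend-services add-backend appengine-backend \
        --global \
        --network-endpoint-group=appengine-neg \
        --network-endpoint-group-region=us-central1

    # The remaining steps (URL map, target HTTPS proxy with a certificate,
    # and a forwarding rule bound to app-lb-ip) follow the setup guide above.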
As you can see in this documentation, "App Engine does not currently provide a way to map static IP addresses to an application. In order to optimize the network path between an end user and an App Engine application, end users on different ISPs or geographic locations might use different IP addresses to access the same App Engine application". So, there is no way to set a static IP address for App Engine, but you can use a pool of IP addresses. In the shared link, you can find the way to use ranges of IP addresses with App Engine. This other link explains in a bit more detail how to do it.
Problem: I'm looking for an agile way to ship a Docker container (stored on GCR.IO) to a managed service on GCP:
one Docker container, gcr.io/project/helloworld, with private data (say, a Cloud SQL backend) - it can't face the real world.
a bunch of IPs I want to expose it to: say [ "1.2.3.4" , "2.3.4.0/24" ].
My ideal platform would be Cloud Run, but also GAE works.
I want to develop in an agile way (say, deploy with 2-3 lines of code). Is it possible to run my service secretly and yet super easily? We're not talking about a huge production project; we're talking about playing around and writing a POC you want to share securely over the internet with a few friends, making sure the rest of the world gets a 403.
What I've tried so far.
The only thing that works easily is a GCE VM with a Docker-friendly OS (like COS) where I can set up firewall rules. This works, but it's a lame Docker app on a disposable VM: the machine runs forever and the container dies at reboot unless I stabilize it with cron/startup scripts. It looks like I'm doing somebody else's job.
Everything else I've tried so far failed:
Cloud Run. Amazing, but I can't set up firewall rules on it, or Cloud Director, etc.; it seems to work only with IAP, which is painful to set up.
GAE. It works, but it serves from multiple IPs and I can't detach the public IPs or firewall it. I managed to do the IP filtering within the app, but that seems a bit risky; I don't [want to] trust my coding skills :)
Cloud Armor. It only supports an HTTPS Load Balancer, which I don't have, nor do I have MIGs to point it to. I want simplicity.
Traffic Director. Also needs an HTTP L7 balancer, but I have one Docker container on a single pod; why do I need an LB?
GKE. This actually seems to work [1], but it's not fully managed (I need to create the cluster, pods, ...).
Is this a product deficiency or am I looking at the wrong products? What's the simplest way to achieve what I want?
[1] how do I add a firewall rule to a gke service?
Please limit your question to one service. Not everyone is an expert on all Google Cloud services. You will have a better chance of a good answer for each service if they are separate questions.
In summary, if you want to use Google Cloud firewall rules (the VPC equivalent of security groups) to control IP-based access, you need to use a service that runs on Compute Engine, as firewall rules are part of the VPC feature set. App Engine Standard and Cloud Run do not run within your project's VPC. This leaves you with App Engine Flex, Compute Engine, and Kubernetes.
I would change strategies and use Google Cloud Run (fully managed) protected by authentication. Access is controlled by Google Cloud IAM via OAuth tokens.
Cloud Run Authentication Overview
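A minimal sketch of that approach (the service name, member, and URL below are placeholders, not part of the original answer):

    # Deploy with unauthenticated access disabled
    gcloud run deploy helloworld \
        --image=gcr.io/project/helloworld \
        --region=us-central1 \
        --platform=managed \
        --no-allow-unauthenticated

    # Grant a specific user permission to invoke the service
    gcloud run services add-iam-policy-binding helloworld \
        --region=us-central1 \
        --platform=managed \
        --member="user:friend@example.com" \
        --role="roles/run.invoker"

    # Call it with an identity token; everyone else gets a 403
    curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
        https://helloworld-abcde12345-uc.a.run.app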
I agree with John Hanley's reply and I have up-voted his answer.
Also, I understand that you are looking for a way to restrict access to your service through GCP.
By setting firewall rules, you can limit access to your service by restricting the allowed source IP ranges, so that only those addresses are accepted as source IPs.
Please review another thread on Server Fault [1], which explains how to "Restrict access to single IP only".
https://serverfault.com/questions/901364/restrict-access-to-single-ip-only
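For a Compute Engine backed service, such a rule could look roughly like this (the network, tag, and port are placeholders; ingress traffic that no rule allows is denied by default):

    # Allow HTTPS only from the listed source ranges, targeting tagged VMs
    gcloud compute firewall-rules create allow-friends-https \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:443 \
        --source-ranges=1.2.3.4/32,2.3.4.0/24 \
        --target-tags=my-app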
You can do this quite easily with a serverless NEG for Cloud Run or GAE.
If you're doing this in Terraform, you can follow this article.
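In gcloud terms, one way to get the IP allowlist in front of a serverless NEG backend is a Cloud Armor policy attached to the backend service behind the HTTPS load balancer. This is only a sketch under that assumption, with placeholder names; check that Cloud Armor is supported for serverless NEG backends in your setup:

    # Security policy that only allows the listed source ranges
    gcloud compute security-policies create allow-list-only
    gcloud compute security-policies rules create 1000 \
        --security-policy=allow-list-only \
        --src-ip-ranges=1.2.3.4/32,2.3.4.0/24 \
        --action=allow

    # Flip the default rule to deny so everyone else gets a 403
    gcloud compute security-policies rules update 2147483647 \
        --security-policy=allow-list-only \
        --action=deny-403

    # Attach the policy to the backend service that fronts the serverless NEG
    gcloud compute backend-services update my-serverless-backend \
        --global \
        --security-policy=allow-list-only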
I just went through this tutorial about Using
Firebase and App Engine Standard Environment in an Android App
It was great, but now I wonder: can anyone upload and replace my servlet code? Do I need to set up a firewall somewhere? I read the docs
about
Using Networks and Firewalls
but I cannot see any hands-on guidance on how to apply this; it's really advanced. Could someone break down what I need to do so that only I can access the code?
I'm a bit new to this, but when working with this tutorial
Build an Android App Using Firebase and the App Engine Flexible Environment
I got this email from CloudPlatform-noreply saying I must maintain my firewalls:
Dear Developer, We noticed that your Google Cloud Project has open project firewalls. This could make your instance vulnerable to compromises since anyone on the internet can access and establish a connection to the instance. The following project has open firewalls: Playchat (ID: playchat-4cc1d). Google Cloud Platform provides the flexibility for you to configure your project to your specific needs. We recommend updating your settings to only allow access to the ports that your project requires. You can review your project's settings by inspecting the output of gcloud compute firewall-rules or by visiting the firewall settings page on the GCP Console. Learn more about using firewalls and secure connections to VM instances.
What do I need to be afraid of here - what does "since anyone on the internet can access and establish a connection to the instance." really mean?
I want only my Firebase signed-in users to be able to access it.
Source code deployment
The only people who can deploy source code to your app are the ones you've given access to on the IAM permissions pages in the Cloud Platform Console. People there need to be an Owner or have the specific role of "App Engine Admin" or "App Engine Deployer".
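For example (the member address is a placeholder; the project ID is the one from the email above), you could grant deploy-only access and audit who has it like this:

    # Grant a collaborator only the ability to deploy new versions
    gcloud projects add-iam-policy-binding playchat-4cc1d \
        --member="user:collaborator@example.com" \
        --role="roles/appengine.deployer"

    # Review who currently has access to the project
    gcloud projects get-iam-policy playchat-4cc1d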
Connecting to your instances
If you are using the App Engine standard environment there are no virtual machine instances. The standard environment is purely a platform as a service, not your typical hosting environment with servers.
If you are using the App Engine flexible environment, your code does run on virtual machine instances. However, those instances are locked down by default. You can enable SSH for debugging purposes; those connections, however, use tokens from your authorized gcloud installation to connect. All this is just to say that, by default, your instances are locked down, and even in debug mode they are still pretty secure.
Overall, your code is secure by default. Protecting your resources is actually probably more about protecting your Gmail account and thus its connected resources, like your Cloud Platform projects. Protect your account with two-factor authentication, don't give people more access to your project than they require, and lastly don't enable debugging unless you need it, and even then close it down when you're done.
I'm running an app on a VM instance (instance-1) and would like requests to myproject.appspot.com to be served by instance-1.
I read https://cloud.google.com/appengine/docs/java/modules/routing but it wasn't clear. Is there a way to say "send all traffic to my one instance"?
If I go to the (ephemeral) external IP address for that instance, I can see the server. But that won't work for an OAuth2 domain (no IP addresses allowed), so I need it to go through the named domain.
I'd be OK with something constant like instance-1-dot-myproject.appspot.com, but I would prefer the base myproject.appspot.com to say "any instances? great! use that."
I think you want to use Managed VMs. They give you the flexibility of Google Compute Engine but work more like the PaaS that is Google App Engine.
You don't create the Google Compute Engine VM instances yourself; instead, Managed VMs will spin them up on demand, using the Docker image you provide as the container for your code, data, etc.
Note that as of 29 Sep 2015, per the docs:
Beta
This is a Beta release of Managed VMs. This feature is not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes.
I am trying to set up an application on Google Compute Engine, but I want a scaling script that launches VM instances on Google Cloud based on some criteria. Google provides autoscaler options for this, but is it possible to do that without the autoscaler, through the Google APIs?
I would also like to know the procedure for creating an image on Google Compute Engine. I have created one instance group with an instance template that launched one VM instance, but when I try to create an image using the new image option, it doesn't list the disk of that instance.
For the first question: yes, you can write your own autoscaler. Every Google Compute Engine machine can be accessed through a remote API: https://cloud.google.com/compute/docs/reference/latest/
You can host your own autoscaler on App Engine with a cron job checking the machines' health and CPU every minute, for example.
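A rough sketch of what such a hand-rolled scaler might look like (the group name, zone, and threshold are placeholders, and how you collect the load metric is up to you):

    #!/bin/bash
    # Grow a managed instance group by one when a load metric you supply
    # (e.g. queue depth or CPU %) exceeds a threshold.
    LOAD=${1:-0}
    THRESHOLD=70
    GROUP=my-instance-group
    ZONE=us-central1-a

    if [ "$LOAD" -gt "$THRESHOLD" ]; then
      CURRENT=$(gcloud compute instance-groups managed describe "$GROUP" \
          --zone="$ZONE" --format="value(targetSize)")
      gcloud compute instance-groups managed resize "$GROUP" \
          --zone="$ZONE" --size=$((CURRENT + 1))
    fi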
Please write a new SO question for the second question.