Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
I have a Kubernetes cluster running on GCP and some services running on App Engine, and I'm trying to communicate between them without exposing the App Engine services to the outside.
I created a private Kubernetes cluster with a specific subnetwork, I linked this subnetwork to a Cloud NAT to have a unique egress IP I can whitelist, and I allowed this IP in the App Engine firewall rules.
However, when I call my App Engine services from the cluster, I get a 403 response because the request doesn't pass the firewall. Yet if I connect to a Kubernetes pod and query a what-is-my-IP service, I see the IP I configured in Cloud NAT.
I found in the Cloud NAT documentation that the translation to internal IPs happens before the firewall rules are applied (https://cloud.google.com/nat/docs/overview#firewall_rules).
Is there a way to retrieve this internal IP? Or another way to secure the services?
Cloud NAT never applies to traffic sent to the public IP addresses for Google APIs and services. Requests sent to Google APIs and services never use the external IPs configured for Cloud NAT as their sources.
I suggest you deploy an internal load balancer. Internal TCP/UDP Load Balancing makes your cluster's services accessible to applications outside of your cluster that use the same VPC network and are located in the same GCP region.
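On GKE, an internal TCP/UDP load balancer is created by annotating a `LoadBalancer` Service. A minimal sketch, assuming placeholder names and ports (nothing here is taken from the question):

```yaml
# Sketch: expose a GKE service only inside the VPC via an internal
# TCP/UDP load balancer. All names and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    # Older GKE versions use this key; newer versions use
    # networking.gke.io/load-balancer-type: "Internal"
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```

The Service then receives an internal IP from the VPC subnet, reachable from other resources in the same VPC and region but not from the internet.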
Related
I have a web app/API which is currently running on a Google App Engine resource. As the calculations of the API are very compute-intensive, I have outsourced the computational part to a managed, auto-scaling Google Compute Engine group, with an HTTP load balancer in front (to maintain a single IP address and balance load across the several engines that are dynamically spawned).
Currently, I just make an HTTP call to the load balancer's IP address from the app engine. As the GAE and GCE instances are in the same region, this feels highly inefficient (I am aware that the app engine and compute engines are still in two physically separated data centers). It also poses a security threat, as I am constantly receiving calls from random IP bots trying to exploit potential security loopholes. Additionally, I only verify API token validity at the app engine level, as I do not want to give user database access to the compute engine (for security reasons), so there is no verification being done between the app engine and the compute engine, and the latter answers all calls it gets.
Is there a way to establish a private connection between the app engine and cloud engine?
My goal would be to not have to open the GCE instances to the whole internet, bearing in mind that they only receive calls from one IP address/resource.
I have tried whitelisting only the App Engine IP addresses, but this unfortunately is a large block of addresses that is very cumbersome to retrieve and changes dynamically. The app engine also cannot use the private IPs of the compute engine/Google SQL servers.
Other creative ideas are highly welcome!
It appears that Serverless VPC Access may be a potential solution. The following is taken from the overview:
Serverless VPC Access enables you to connect from the App Engine standard environment and Cloud Functions directly to your VPC network. This connection makes it possible for your App Engine standard environment apps and Cloud Functions to access resources in your VPC network via internal (private) IP addresses. Using internal IP addresses improves the latency of communication between your Google Cloud Platform services and avoids exposing internal resources to the public internet.
Serverless VPC Access only allows your app or function to send requests to resources in your VPC network and receive responses to those requests. Communication in the opposite direction, where a VM initiates a request to an app or function, requires you to use the public address of the app or function.
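Setting this up involves creating a connector and then referencing it from the app's configuration. A sketch with hypothetical project, region, network, and range values:

```shell
# Create a Serverless VPC Access connector (placeholder names/ranges).
gcloud compute networks vpc-access connectors create my-connector \
  --network my-vpc \
  --region us-central1 \
  --range 10.8.0.0/28
```

The App Engine standard app then references the connector in its `app.yaml`:

```yaml
vpc_access_connector:
  name: projects/my-project/locations/us-central1/connectors/my-connector
```

After deployment, outbound requests from the app to internal IPs in `my-vpc` are routed through the connector.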
I was wondering if a site-to-site VPN setup like the one in this diagram is possible:
From the diagram, I could access the internal IPs of the GAE Flex instances launched in the VPC from the on-premise server, but I don't think I could invoke a *.appspot.com URL without going outside the tunnel, correct?
The on-premise network can only whitelist IP ranges for external HTTPS access, but it seems like GAE can't support such a configuration?
Is this kind of setup only possible by setting up a GCE reverse proxy? If that's the case, would I just be better off deploying my application as a Kubernetes cluster?
So I did an experiment that revealed several things about this setup, in case anyone is interested in this in the future:
In the above diagram, under the assumption that the on-premise instances are in a private subnet with no external IP, they can reach app engine instances via their internal IP.
Being able to reach the App Engine flexible instances via their internal IP doesn't mean those requests can then be port-forwarded to invoke your app.
The only real use case for App Engine flex working with Cloud VPN is if you need the App Engine instances to be able to reach either your private cloud and/or on-premise instances i.e. going from right to left in the above diagram.
We want to use an App Engine flexible process to update our Elasticsearch index, which is on Google Kubernetes Engine. We need to connect to Elasticsearch via an HTTP(S) address. What's the recommended way to do this? We don't want to expose the cluster to external networks, since we don't have authentication in front of it.
I've seen this SO post but both k8s and AE have changed a lot in the 2 years since the question/answer.
Thanks for your help!
The post you linked to was about App Engine Standard. App Engine Flex is built on top of the same Google Cloud networking that is used by Google Compute Engine virtual machines and Google Kubernetes Engine clusters. As long as you put the App Engine flex application into the same VPC as the Google Kubernetes Engine cluster you should be able to communicate between them using internal networking.
On the other hand, exposing a Kubernetes service to anything running outside of the cluster requires you to modify the Elasticsearch service, because by default Kubernetes services are only reachable from inside the cluster (due to the way service IPs are allocated and reached via iptables magic). You need to "expose" the service, but rather than exposing it to the internet via an external load balancer, you expose it to the VPC using an internal load balancer. See https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing.
In addition to Robert's answer, make sure that App Engine and GKE are in the same region, because internal load balancers are not usable from other regions.
Check this: https://issuetracker.google.com/issues/111021512
Is there a way to deploy "internal facing" applications in Google App Engine. AWS offers this capability as explained here and so does Azure as explained here.
What is the GCP equivalent for this? It appears App Engine Flexible Environment could be the answer but I could not find a clear documentation on whether Flexible Environment is indeed the way to host intranet facing applications. Is there someone from GCP who can advise?
Update
I tested the solution recommended by Dan recently. Listed below are my observations:
App Engine Flex allows deploying to a VPC, and this allows VPN scenarios. The VPN scenario, however, is for connections originating from App Engine to GCP VPCs or to other networks outside GCP, which can be on-prem or in another cloud.
Access destined to the app itself from a GCP or another network is always routed via the internet-facing public IPs. There is no option to access the app at a private IP at the moment.
If there's another update, I will update it here.
Update 28Oct2021
Google has now launched Serverless Network Endpoint Groups (NEGs). With these, users can connect App Engine, Cloud Run & Cloud Functions endpoints to a load balancer. However, at the moment you can only use serverless NEGs with an external HTTP(S) load balancer; you cannot use serverless NEGs with regional external HTTP(S) load balancers or with any other load balancer types. Google documentation for serverless NEGs is available here.
I'm not sure this meets your requirements, but it's possible to set up an App Engine Standard application (not certain about Flexible) such that it is only accessible to users logged into your G-Suite domain. This is the approach I've used for internal-facing applications in the past, but it only applies if your case involves an entity using G-Suite.
You can set this up under the App Engine application Settings, under Identity Aware Proxy.
In this scenario the application is still operating at a publicly accessible location, but only users logged into your G-Suite domain can access it.
It should be possible with the GAE flexible environment. From Advanced network configuration:
You can segment your Compute Engine network into subnetworks. This allows you to enable VPN scenarios, such as accessing databases within your corporate network.
To enable subnetworks for your App Engine application:
Create a custom subnet network.
Add the network name and subnetwork name to your app.yaml file, as specified above.
To establish a VPN, create a gateway and a tunnel for a custom subnet network.
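The app.yaml addition mentioned in the steps above is a `network` stanza in the Flex app's configuration; a sketch with placeholder names:

```yaml
# Flex-only app.yaml fragment: place the app's VMs on a custom
# subnetwork. Network and subnetwork names are hypothetical.
network:
  name: my-custom-network
  subnetwork_name: my-subnetwork
```

With this in place, the Flex instances get internal IPs on that subnetwork and can reach other resources in the VPC (or across a VPN attached to it) privately.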
The standard env GAE doesn't offer access to the networking layer needed to achieve such a goal.
I'd like to put a Redis server on Google Compute Engine and speak to it via AppEngine's socket support. The only problem is that there doesn't seem to be a specific firewall rule that says "this AppEngine application can access this host/port and no other".
There are some rules at instance setup time that describe whether the instance has access to task queues, etc, but not the inverse.
So my question is: how can I restrict port access to a Redis service only to a single AppEngine application?
In short, you cannot. App Engine shares IP space with all the other apps, just like shared hosting. You need to use application-level authentication, such as OAuth, to get the proper restrictions in place.
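As a minimal illustration of application-level authentication (a shared-secret HMAC scheme rather than full OAuth, and purely a sketch of one way a proxy in front of Redis might gate requests; Redis itself also supports a `requirepass` password via `AUTH`):

```python
import hashlib
import hmac

# Shared secret known to both the App Engine app and the service
# fronting Redis on Compute Engine. (Hypothetical value.)
SECRET = b"replace-with-a-real-secret"

def sign(payload: bytes) -> str:
    """Return a hex HMAC-SHA256 signature for a request payload."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Constant-time check performed before serving a request."""
    return hmac.compare_digest(sign(payload), signature)

# The App Engine side signs each request body...
body = b"GET my-key"
sig = sign(body)

# ...and the Compute Engine side only honours correctly signed requests.
assert verify(body, sig)
assert not verify(b"GET other-key", sig)
```

Because the check happens at the application layer, it works even though the caller's IP is shared with every other App Engine app.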