Can traffic from App Engine to Google APIs travel through a Serverless VPC Access connector but not be routed through Cloud NAT? - google-app-engine

We have set up a Serverless VPC Access connector and configured App Engine to use it in app.yaml. We have set egress_setting: all-traffic, as we want to access a third-party API from a specific IP address. We followed the documentation at https://cloud.google.com/appengine/docs/standard/python3/outbound-ip-addresses#static-ip.
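For reference, the relevant part of our app.yaml looks roughly like this (the runtime, project, region, and connector name are placeholders):

    # app.yaml (App Engine standard) - names below are placeholders
    runtime: python39

    vpc_access_connector:
      name: projects/my-project/locations/europe-west1/connectors/my-connector
      egress_setting: all-traffic   # send all outbound traffic through the connector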
Part of our testing is hitting a large set of URLs on App Engine and checking the HTTP status. In this testing we noticed a dramatic reduction in the rate of serving requests when using the connector. Since all egress traffic is routed via the connector, my first inclination is to think our application's usage of Google APIs (Datastore, Cloud Storage, Cloud SQL) is being impacted.
The connector still has only the minimum number of instances active, indicating we have not reached the limit of its performance and that it is not the bottleneck. However, retesting with the vpc_access_connector removed from app.yaml returns performance to what we previously had.
I've tried enabling Private Google Access on the subnet the connector is linked to, but this has not improved the situation.
I think we may need to add some routing rules that send the traffic for Google APIs directly to Google's services rather than through Cloud NAT, but I'm unsure which rules would be applicable. I see no reason why this shouldn't be possible, but I haven't found the right documentation to guide me here.
Is this possible? Is this documented somewhere?

Related

How to establish a private connection between Google app engine and compute engine?

I have a web app/API which is currently running on a Google App Engine resource. As the calculations of the API are very compute-intensive, I have outsourced the computational part to a managed, auto-scaling Google Compute Engine group, with an HTTP load balancer in front (to maintain a single IP address and balance load across the several instances that are dynamically spawned).
Currently, I just make an HTTP call to the load balancer IP address from App Engine. As the GAE and GCE resources are in the same region, this feels highly inefficient (I am aware that the App Engine and Compute Engine instances are still in two physically separated data centers). This also poses a security threat, as I am constantly receiving calls from random IP bots trying to exploit potential security loopholes. Additionally, I only verify API token validity at the App Engine level, as I do not want to give user database access to the Compute Engine instances (for security reasons), which means there is no verification being done between App Engine and Compute Engine, so the latter answers all calls that it gets.
Is there a way to establish a private connection between App Engine and Compute Engine?
My goal would be to not have to open the GCE group to the whole internet, bearing in mind that it only receives calls from one IP address/resource.
I have tried whitelisting only the App Engine IP addresses, but this is unfortunately a large block of addresses that is very cumbersome to retrieve and changes dynamically. App Engine also cannot use the private IP of the Compute Engine/Cloud SQL servers.
Other creative ideas are highly welcome!
It appears that Serverless VPC Access may be a potential solution. The following is taken from the overview:
Serverless VPC Access enables you to connect from the App Engine standard environment and Cloud Functions directly to your VPC network. This connection makes it possible for your App Engine standard environment apps and Cloud Functions to access resources in your VPC network via internal (private) IP addresses. Using internal IP addresses improves the latency of communication between your Google Cloud Platform services and avoids exposing internal resources to the public internet.
Serverless VPC Access only allows your app or function to send requests to resources in your VPC network and receive responses to those requests. Communication in the opposite direction, where a VM initiates a request to an app or function, requires you to use the public address of the app or function.
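As a rough sketch, pointing an App Engine standard app at a connector in app.yaml looks something like this (the connector path is a placeholder). With the default egress setting, only traffic to private (RFC 1918) addresses, such as a VM's internal IP, is sent through the connector:

    # app.yaml (App Engine standard) - connector path is a placeholder
    runtime: python39

    vpc_access_connector:
      name: projects/my-project/locations/us-central1/connectors/my-connector
      # The default egress behaviour routes only traffic to private (RFC 1918)
      # ranges through the connector, which is enough to reach a Compute
      # Engine instance at its internal IP.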

Firewall/Block Compute Engine to only allow connections from App Engine (Flexible?)

I have a CouchDB server on a Google Compute Engine instance via Bitnami.
I want my API (Google App Engine) to be able to talk to Compute Engine but I really don't want anyone else to be able to for security purposes.
I'm open to using App Engine Flexible if that's what needs to happen.
It says here that Google App Engine can't be assigned a static IP, but I was wondering if anyone had any other suggestions for restricting outside access to the static IP I've assigned to my Compute Engine instance, so that it only allows incoming connections made from my project/my App Engine app, etc.
You will need to use Virtual Private Cloud (VPC), since this option allows you to configure firewall rules in order to provide controlled access to your Cloud resources and allow them to interact in a safe environment.
You can take a look at the VPC overview to get a better understanding of the capabilities and options offered by Google Cloud (https://cloud.google.com/vpc/docs/vpc), and you will also find useful information on how to use VPC with the different services in the docs (https://cloud.google.com/vpc/docs/private-access-options). Note that you would need to move your API to App Engine Flex.
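As a purely illustrative sketch, a firewall rule restricting the CouchDB port to a trusted source range could be expressed like this (shown here as a Deployment Manager resource; the network, source range, tag, and port are placeholders you would adapt to your own setup):

    # firewall.yaml - hypothetical Deployment Manager config, placeholders throughout
    resources:
    - name: allow-couchdb-from-trusted-range
      type: compute.v1.firewall
      properties:
        network: global/networks/default
        direction: INGRESS
        sourceRanges:
        - 10.0.0.0/28          # the internal range your API's calls will come from
        targetTags:
        - couchdb              # apply only to instances tagged 'couchdb'
        allowed:
        - IPProtocol: tcp
          ports:
          - "5984"             # default CouchDB port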

Is there a way to deploy internal facing applications in Google App Engine?

Is there a way to deploy "internal facing" applications in Google App Engine? AWS offers this capability as explained here and so does Azure as explained here.
What is the GCP equivalent for this? It appears the App Engine Flexible Environment could be the answer, but I could not find clear documentation on whether the Flexible Environment is indeed the way to host intranet-facing applications. Is there someone from GCP who can advise?
Update
I tested the solution recommended by Dan recently. Listed below are my observations:
App Engine Flex allows deploying to a VPC, and this enables VPN scenarios. The VPN scenarios, however, are for connections (originating) from App Engine to GCP VPCs or to other networks outside GCP, which can be on-prem or in another cloud.
Access (destined) to the app itself from a GCP network or another network is always routed via the internet-facing public IPs. There is no option to access the app at a private IP at the moment.
If there's another update, I will update it here.
Update 28Oct2021
Google has now launched Serverless Network Endpoint Groups (NEGs). With these, users can connect App Engine, Cloud Run, and Cloud Functions endpoints to a load balancer. However, at the moment you can only use serverless NEGs with an external HTTP(S) load balancer; you cannot use serverless NEGs with regional external HTTP(S) load balancers or with any other load balancer types. Google documentation for serverless NEGs is available here.
I'm not sure this meets your requirements, but it's possible to set up an App Engine Standard application (not certain about Flexible) such that it is only accessible to users logged into your G-Suite domain. This is the approach I've used for internal-facing applications in the past, but it only applies if your case involves an entity using G-Suite.
You can set this up under the App Engine application Settings, under Identity Aware Proxy.
In this scenario the application is still operating at a publicly accessible location, but only users logged into your G-Suite domain can access it.
It should be possible with the GAE flexible environment. From Advanced network configuration:
You can segment your Compute Engine network into subnetworks. This allows you to enable VPN scenarios, such as accessing databases within your corporate network.
To enable subnetworks for your App Engine application:
Create a custom subnet network.
Add the network name and subnetwork name to your app.yaml file, as specified above.
To establish a VPN, create a gateway and a tunnel for a custom subnet network.
The standard env GAE doesn't offer access to the networking layer to achieve such a goal.
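As a rough sketch, the network settings referenced in the steps above would look something like this in a flexible environment app.yaml (the network and subnetwork names are placeholders):

    # app.yaml (App Engine flexible) - names are placeholders
    runtime: python
    env: flex

    network:
      name: my-custom-network
      subnetwork_name: my-subnet
      # instance_tag: my-tag   # optional: tag the instances for use in firewall rules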

Caching of Google Cloud Endpoints?

Will requests to Cloud Endpoints get cached?
The official docs are a little light on this matter. The docs read:
Cloud Endpoints uses the distributed Extensible Service Proxy to provide low latency and high performance for serving even the most demanding APIs. [...] and can be used with Google App Engine, Google Container Engine, Google Compute Engine or Kubernetes.
A 'distributed extensible service proxy' makes me think the Endpoint is distributed to the edge nodes for faster responses, but the docs don't specifically state this.
We can use Cloud CDN to cache requests from GAE, Compute and Container Engine. Endpoints can be used with all those. This makes me wonder if there's some magic in the background with CDN+compute to cache the Endpoints responses. Again, the docs are a little light on this.
Has anyone figured this out? Thanks!
Great question! The Extensible Service Proxy (ESP) does not perform request caching. Its function is to intercept incoming requests, validate auth tokens, and then forward the request to Google Service Control, where additional API management rules are applied as defined in your OpenAPI spec. Endpoints uses a distributed proxy model for better performance, to avoid the extra network hop that's typically incurred with a traditional multi-tenant API proxy. This is in fact the same model used internally within Google to power our own APIs.
Please let us know if you have any more questions!
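For context, the API management rules mentioned above are defined in the OpenAPI spec you deploy alongside your API. A minimal, hypothetical example that makes ESP require an API key before a request reaches your backend looks roughly like this (the host, path, and operation are placeholders):

    # openapi.yaml - hypothetical minimal Cloud Endpoints spec, placeholders throughout
    swagger: "2.0"
    info:
      title: My API
      version: "1.0.0"
    host: my-api.endpoints.my-project.cloud.goog
    paths:
      /items:
        get:
          operationId: listItems
          security:
          - api_key: []        # ESP validates the key before forwarding the request
          responses:
            "200":
              description: OK
    securityDefinitions:
      api_key:
        type: apiKey
        name: key
        in: query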

Google App Engine, Secure Data Connector (SDC), and supported protocols

I've been investigating what I can do with Google's Secure Data Connector and App Engine.
Is it possible, from an App Engine application, to grab resources inside my corporate intranet without using HTTP(S)?
From what I read in the documentation, the only way to request resources through SDC is by using url_fetch, which is limited to HTTP, right?
You are right that App Engine does not let you connect to other hosts or use sockets directly except through its URLFetch API, which is limited to HTTP. However, you are not stuck with traditional ports - you can use it to access ports 80-90, 440-450, and 1024-65535 (as of GAE v1.3.2).
It doesn't seem like this restriction should matter much if you are planning on using SDC - the SDC FAQ seems to indicate that it uses HTTP/HTTPS to connect to resources on your intranet anyway.
