GCP internal load balancer - google-app-engine

I'm trying to access an Elasticsearch cluster on GKE from my project in GAE flexible. Since I don't want an external load balancer, I'm following this guide:
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
Both GKE and GAE are deployed in the same region, but the calls to the Elasticsearch cluster time out every time. If anyone has done this and can share some tips, it would be much appreciated!
My service.yaml file looks like this:
apiVersion: v1
kind: Service
metadata:
  name: internalloadbalancerservice
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app.kubernetes.io/component: elasticsearch-server
    app.kubernetes.io/name: elasticsearch # label selector for the Service
spec:
  type: LoadBalancer
  loadBalancerSourceRanges: # restrict access
  - xxxxxxxx
  ports:
  - name: myport
    port: 9000
    protocol: TCP # default; can also specify UDP
  selector:
    app.kubernetes.io/name: elasticsearch # label selector for Pods
    app.kubernetes.io/component: elasticsearch-server

GCP now has a Global Access feature (in beta) for internal load balancers, which allows an internal load balancer to be reached from any region within the same network.
This will be helpful for your case too, where two services are exposed using internal IP addresses but located in different regions.
UPDATE
The Global Access feature is now stable (for GKE 1.16.x and above), and it can be enabled by adding the annotation below to your service.
networking.gke.io/internal-load-balancer-allow-global-access: "true"
For example: the manifest below will create your internalloadbalancerservice LoadBalancer with an internal IP address, and that IP will be accessible from any region within the same VPC.
apiVersion: v1
kind: Service
metadata:
  name: internalloadbalancerservice
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    # Required to enable global access
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app.kubernetes.io/component: elasticsearch-server
    app.kubernetes.io/name: elasticsearch # label selector for the Service
spec:
  type: LoadBalancer
  loadBalancerSourceRanges: # restrict access
  - xxxxxxxx
  ports:
  - name: myport
    port: 9000
    protocol: TCP # default; can also specify UDP
  selector:
    app.kubernetes.io/name: elasticsearch # label selector for Pods
    app.kubernetes.io/component: elasticsearch-server
This works well for GKE 1.16.x and above. For older GKE versions, you can refer to this answer.
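If you want to confirm the annotation took effect, the setting surfaces as the allowGlobalAccess field on the ILB's underlying forwarding rule. A sketch, where RULE_NAME and REGION are placeholders (the rule name is auto-generated by GKE, so list first):

gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL"
gcloud compute forwarding-rules describe RULE_NAME \
    --region=REGION --format="get(allowGlobalAccess)"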

To save anyone else from a similar situation, I will share my findings of why I couldn't connect to my GKE app from GAE. The GAE app was in region europe-west, while the GKE cluster was in zone europe-west4-a. I thought that would be the same region, but changing the GKE location to europe-west1-b worked. It's not very obvious, but reading the documentation shows that GAE region europe-west and GKE zone europe-west1-b are both in Belgium.

Assuming that the GAE app and the GKE cluster are in the same region and in the same VPC network, I would suggest making sure you have created Ingress allow firewall rules that apply to the GKE nodes as targets, with the GAE app VMs as sources.
Remember that Ingress to VMs is denied by the implied deny Ingress rule, so unless you create Ingress allow firewall rules, you won't be able to send packets to any VMs. And to use Internal Load Balancing (ILB), both the client and the backend VMs must be in the same:
- Region
- VPC network
- Project
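As for the Ingress allow rule mentioned above, a sketch of what it could look like; the network name, source range, and target tag here are placeholders you would replace with your own (GKE node tags are auto-generated, e.g. gke-CLUSTER_NAME-...-node):

gcloud compute firewall-rules create allow-gae-to-elasticsearch \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:9000 \
    --source-ranges=10.128.0.0/20 \
    --target-tags=gke-mycluster-node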

Related

GAE Whitelist IP VPC tied to App Engine secondary instance not working firewall

I read into this article:
How to properly configure VPC firewall for App Engine instances?
It was a huge help in getting the firewall set up in the first place - so for those who have found this and are struggling with that, follow along. https://cloud.google.com/appengine/docs/flexible/python/using-shared-vpc is a good reference, as there are some accounts that need permissions added to make the magic happen.
My issue: I have two containerized services running in App Engine, one default (website) and one API. I've configured the API to run in a VPC/subnet separate from the default one. I have not made any changes to the firewall settings hanging directly off the App Engine settings, as those are global and do not let you target a specific instance - the website needs to remain public, while the API should require whitelisted access.
dispatch.yaml for configuring subdomain mapping
dispatch:
- url: "www.example.com/*"
  service: default
- url: "api.example.com/*"
  service: api
API yaml settings:
network:
  name: projects/mycool-12345-project/global/networks/apis
  subnetwork_name: apis
  instance_tag: myapi
Create a VPC network:
- name - apis
- subnet name - apis
- creation mode - automatic
- routing mode - regional
- DNS policy - none
- max MTU - 1460
Add firewall rules (the first one is also shown as a gcloud command below):
- allow 130.211.0.0/22, 35.191.0.0/16 port 10402,8443 tag aef-instance priority 1000
- deny 0.0.0.0/0 port 8443 tag myapi priority 900
- allow 130.211.0.0/22, 35.191.0.0/16 port 8443 tag myapi priority 800
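For reference, the first of those rules as a gcloud command might look like this (a sketch; the rule name is made up):

gcloud compute firewall-rules create allow-lb-to-aef \
    --network=apis --direction=INGRESS --action=ALLOW \
    --rules=tcp:10402,tcp:8443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=aef-instance --priority=1000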
This works - but I cannot specify the "whitelist IP".
If I do the following and disable the "allow 130/35 networks port 8443 priority 800" rule:
allow my.ip.number.ihave port 8443 tag myapi priority 800
it never trips this rule; it never recognizes my IP.
What change / how do you configure the firewall in the VPC so it receives the public IP? When I reviewed the logs, it said my request was denied because my IP address was 35.x.x.x.
I would recommend contacting GCP support in that case. If I'm not wrong, you can whitelist IP addresses directly at the App Engine level, but it's not a standard procedure. Note that requests to App Engine flex arrive through Google's front-end proxies, so the VPC firewall sees the proxy ranges (such as 35.x.x.x) rather than the caller's public IP - which is why your rule never trips.
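If you do want to experiment with allow-listing at the App Engine level, the App Engine firewall (separate from VPC firewall rules) matches on the caller's original IP. A sketch, with a made-up priority and an example IP; note this firewall applies app-wide, so it would also affect the public website service:

gcloud app firewall-rules create 100 \
    --action=ALLOW \
    --source-range=203.0.113.5 \
    --description="allow my whitelisted IP"
gcloud app firewall-rules update default --action=DENY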

gcloud - Can't configure my VPC connector to work with my Redis instance

I'm facing a problem with gcloud, and their support can't seem to help me.
To put my app in prod I need a Redis instance to host some data; I'm using Memorystore because I like to have everything on gcloud.
My app is in the standard environment on App Engine, so per the docs (https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-standard) I have to configure a VPC connector. But I think the CIDR I put in is always wrong - can someone help me find the right CIDR?
connectMode: DIRECT_PEERING
createTime: '2020-03-13T17:20:51.590448560Z'
currentLocationId: europe-west1-d
displayName: APP-EPG
host: 10.240.224.179
locationId: europe-west1-d
memorySizeGb: 1
name: projects/*************/locations/europe-west1/instances/app-epg
persistenceIamIdentity: *************
port: 6379
redisVersion: REDIS_4_0
reservedIpRange: 10.240.224.176/29
state: READY
tier: BASIC
Thank you all!
First, in order for the VPC connector to work, your App Engine instances have to be in the same VPC and region as your Redis instance; if not, there will be no connectivity between the two.
Also make sure your Redis and app use one of the approved locations - by now there are a lot of them.
Your Redis instance is in the europe-west1 region, so to create your VPC connector you have to set the name of the VPC network your Redis instance is in (for example "default").
The IP range you were asking about is any unused range (not reserved by the network the Redis instance is in). So, for example, if 10.13.0.0/28 is already used in your "default" network, then you have to specify something else, like 10.140.0.0/28. It has to be a /28 - otherwise you won't be able to create the connector.
Why 10.13.0.0 or any other addresses? They are going to be assigned as the source range your apps connect from to reach Redis (or any other VMs) in the specified network.
I've tested it using the command:
gcloud compute networks vpc-access connectors create conn2 --network default \
    --range 10.13.0.0/28 --region=europe-west1
Or you can do it in the console under "Serverless VPC Access" by clicking "Add new connector".
You can also read the documentation on how to create a connector.
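Once created, it may help to confirm the connector is ready before pointing App Engine at it - a quick check, assuming the conn2 connector and region from the command above:

gcloud compute networks vpc-access connectors describe conn2 \
    --region=europe-west1
# the output should include: state: READY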

Google Cloud Memorystore (Redis) ETIMEDOUT in App Engine

I'm writing a NodeJS app and trying to connect to GCP's Redis Memorystore, but I'm getting the ETIMEDOUT 10.47.29.131:6379 error message. The 10.47.29.131 corresponds to REDISHOST; I'm trying to reach the server by its internal private IP.
While the app works locally with a local Redis installed, it does not when deployed to GCP App Engine.
My GCP setup:
- Redis instance running at location europe-west3-a
- A connector created under "Serverless VPC access", which is in europe-west3
- Redis and the VPC connector are on the same network, "default"
- App Engine running in europe-west
The app.yml
runtime: nodejs
env: flex
automatic_scaling:
# or this but without env: flex (standard)
vpc_access_connector:
  name: "projects/project-ID/locations/europe-west/connectors/connector-name"
beta_settings:
  cloud_sql_instances: project-ID:europe-west3:name
env_variables:
  REDISHOST: '10.47.29.131'
  REDISPORT: '6379'
# removed this when trying without env: flex (standard)
network:
  name: default
session_affinity: true
I followed these instructions to set everything up: https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-standard
Digging deeper, I found https://cloud.google.com/vpc/docs/configure-serverless-vpc-access, where they mention something about permissions and serverless-vpc-access-images, but while trying to follow the instructions at https://cloud.google.com/compute/docs/images/restricting-image-access#trusted_images I couldn't find "Define trusted image projects" anywhere.
What am I missing here?
Well, it turns out the problem was the region I'd selected for the Redis instance.
From the documentation:
Important: In order to connect to a Memorystore for Redis instance, the connecting client must be located within the same region as the instance.
A region is a specific geographical location where you can run your resources. Each region is subdivided into several zones.
For example, the us-central1 region in the central United States has zones us-central1-a, us-central1-b, us-central1-c, and us-central1-f.
Although the documentation clearly says that App Engine and Memorystore have to be in the same region, my assumption about what regions actually are was false.
When I created the App Engine app, I created it in europe-west, which is the same as europe-west1. On the other hand, when I created the Redis instance, I used europe-west3, with the assumption that west3 is the same region as west, which it is not.
Since an App Engine app's region cannot be changed, I created another Redis instance in europe-west1 and now everything works.
So the Redis region must be exactly the same as the App Engine region: europe-west1 counts as the same region as europe-west, but europe-west2 or europe-west3 do not.
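If you need to recreate the instance in the matching region, a sketch (the instance name, tier, and size here just mirror the output quoted in the question):

gcloud redis instances create app-epg \
    --region=europe-west1 --tier=basic --size=1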

Ingress with subdomains

I use Google Cloud to deploy our company app.
The goal: every branch is deployed on a subdomain of example.com: task-123.example.com, etc.
I copied the Cloud DNS nameservers to the domain registrar. I reserved a static IP address for the Ingress (via kubernetes.io/ingress.global-static-ip-name: "test-static-ip") and pointed the registrar's A record at it. But I can't understand how to make subdomains work.
Every branch creates an Ingress with the static IP, but with a different URL for the host.
I made a CNAME *.example.com which refers to example.com, but it does not work.
Help me, please. Sorry for my English.
You want *.example.com to point to the ingress controller so branch1.example.com and branch2.example.com will both hit the ingress controller. This is achieved with wildcard DNS.
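If the zone lives in Cloud DNS, the wildcard A record can be added with the record-sets transaction commands - a sketch, assuming a managed zone named my-zone and a reserved static IP of 203.0.113.10:

gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add "203.0.113.10" \
    --zone=my-zone --name="*.example.com." --type=A --ttl=300
gcloud dns record-sets transaction execute --zone=my-zone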
Each branch in your scenario should have its own routing rule (Ingress resource) with a host section defined for its specific branch. The ingress controller is updated when a new Ingress resource is created, and its routing rules then reflect the additional rule. So creating a new branch with a new Ingress resource for that host will tell the ingress controller to route traffic for that specific host to a Service specific to that branch. (Or you can define all the branch rules in one go with a fanout ingress - see "ingress-nginx - create one ingress per host? Or combine many hosts into one ingress and reload?")
That's 'how it works'. I'm not sure if that is your question though? It's hard to diagnose the problem you're having. Presumably you have an Ingress, a Service and a Deployment? To help with that I think you'd need to post those and explain (either as an update or a separate question) what behaviour you see (a 404 maybe)?
Making Ingress work with subdomains is extremely easy with Kubernetes. Basically you just define rules for each of your hosts.
Here are the specific steps you could follow:
Point your DNS at your ingress IP address. To do this you will need to set up a global static IP address; in Google Cloud you can go here and see how to do that.
Reference that static IP in your Ingress annotation.
Define rules and host mappings; refer to the documentation.
The final code will look like this; I am using Helm to iterate through my hosts here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-router
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "your-static-domain"
    networking.gke.io/managed-certificates: "your-tls-cert"
spec:
  rules:
  {{- range $index, $service := .Values.deployments }}
  - host: {{ $service.host }}
    http:
      paths:
      - backend:
          serviceName: {{ $service.name }}-service-name
          servicePort: {{ $service.port }}
  {{- end }}

UDP Server on Appengine Flex?

I would like to have a service in App Engine flexible that runs a UDP server, takes incoming UDP traffic on a given port, and redirects it to another service in App Engine standard that uses HTTPS.
It is my understanding that the flex environment allows opening UDP listen sockets, and indeed my application starts the server OK. However, I cannot make any traffic reach the UDP server.
I suspect the problem is a GAE or Docker configuration problem, but I cannot find documentation or similar issues online to solve it. All Google documentation for App Engine flexible is around HTTPS, so any guidance would be helpful. I have several questions that I believe relate to my understanding of App Engine flexible, the VM, and Docker:
Is flex App Engine supposed to be usable at all as a UDP server? The lack of documentation on UDP load balancing seems to suggest not... Any idea whether this is on the roadmap?
If it is supported, to which IP/URL should I direct my UDP traffic? Is it to my-project.appspot.com, or to each of the individual VM instances (which would seem like a bad idea since VMs are ephemeral)?
This is my current application.
app.yaml
As you can see, I forwarded my UDP listen port as explained here:
runtime: python
env: flex
entrypoint: python main.py
runtime_config:
  python_version: 2
network:
  forwarded_ports:
  - 13949/udp
service: udp-gateway
# This sample incurs costs to run on the App Engine flexible environment.
# The settings below are to reduce costs during testing and are not appropriate
# for production use. For more information, see:
# https://cloud.google.com/appengine/docs/flexible/python/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
For the server I am using Python's SocketServer in threaded mode, and I am keeping my main thread in an infinite loop so the server does not exit.
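A quick way to check whether datagrams get through at all is to fire one at an instance from outside - a sketch using netcat, where INSTANCE_EXTERNAL_IP is a placeholder for one VM's external address:

echo "ping" | nc -u -w1 INSTANCE_EXTERNAL_IP 13949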
I have also added a firewall rule in my GCP console:
{
  "allowed": [
    {
      "IPProtocol": "udp",
      "ports": [
        "13949"
      ]
    }
  ],
  "creationTimestamp": "2018-02-24T16:39:24.282-08:00",
  "description": "allow udp incoming on 13949",
  "direction": "INGRESS",
  "id": "4340622718970870611",
  "kind": "compute#firewall",
  "name": "allow-udp-13949",
  "network": "projects/xxxxxx/global/networks/default",
  "priority": 1000,
  "selfLink": "projects/xxxxx/global/firewalls/allow-udp-13949",
  "sourceRanges": [
    "0.0.0.0/0"
  ]
}
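For reference, an equivalent rule could have been created from the CLI - a sketch mirroring the JSON above:

gcloud compute firewall-rules create allow-udp-13949 \
    --network=default --direction=INGRESS --action=ALLOW \
    --rules=udp:13949 --source-ranges=0.0.0.0/0 \
    --description="allow udp incoming on 13949"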
So I ended up being able to answer my own questions (thanks SO for allowing me to put down my thoughts, it helps :))
Indeed, the flex environment only provides a load balancer for HTTPS, which means that even though it is possible to open UDP sockets, it is not meant to be used as a UDP server. I have not found any evidence that Google plans to add support for UDP/TCP load balancing to App Engine flex. The next service up that offers UDP load balancing is Kubernetes Engine (and Compute Engine, of course), so that is where I am headed now.
With the configuration described in the OP, I could make traffic reach my application by addressing an individual instance's IP. However, this is not suitable for a production application, since instances are ephemeral, and it does not scale (I would need to build my own load balancer, which is out of the question).
