Ingress with subdomains - google-app-engine

I use Google Cloud to deploy our company app.
The goal: every branch is deployed on its own subdomain of example.com: task-123.example.com, and so on.
I copied the Cloud DNS name servers to the domain registrar. I pass the static IP address (via kubernetes.io/ingress.global-static-ip-name: "test-static-ip") to the Ingress and point the registrar's A record at it. But I can't figure out how to make the subdomains work.
Every branch creates an Ingress with the same static IP, but with a different host URL.
I made a CNAME record for *.example.com that points to example.com, but it doesn't work.
Help me, please.

You want *.example.com to point to the ingress controller so that branch1.example.com and branch2.example.com both hit it. This is achieved with wildcard DNS.
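In zone-file terms that's a wildcard A record pointing at your reserved static IP (203.0.113.10 is a placeholder address):
*.example.com.  300  IN  A  203.0.113.10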
Each branch in your scenario should have its own routing rule (Ingress resource) with a host section defined for its specific branch. The ingress controller is updated when a new Ingress resource is created, and its routing rules then reflect the additional rule. So creating a new branch with a new Ingress resource for that host will tell the ingress controller to route traffic for that specific host to a Service specific to that branch, as sketched below. (Or you can define all the branch rules in one go with a fanout ingress - see ingress-nginx - create one ingress per host? Or combine many hosts into one ingress and reload?)
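For example, a per-branch Ingress might look like this (a minimal sketch; the resource and Service names are illustrative, and the annotation value comes from your question):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: task-123-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "test-static-ip"
spec:
  rules:
  - host: task-123.example.com
    http:
      paths:
      - backend:
          serviceName: task-123-service
          servicePort: 80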
That's 'how it works'. I'm not sure that answers your question, though; it's hard to diagnose the problem you're having. Presumably you have an Ingress, a Service and a Deployment? To help with that, you'd need to post those and explain (either as an update or as a separate question) what behaviour you see (a 404, maybe).

Making Ingress work with subdomains is straightforward in Kubernetes: you basically define a rule for each of your hosts.
Here are the specific steps you could follow:
Point your DNS at your Ingress IP address. To do this you will need to set up a global static IP address; see the gcloud sketch after these steps, and the Google Cloud documentation on reserving one.
Reference that static IP in your Ingress annotation.
Define the rules and host mapping; refer to the documentation.
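For the first step, a minimal gcloud sketch (the address name is an assumption; use whatever name you reference in the annotation):
gcloud compute addresses create your-static-domain --global
gcloud compute addresses describe your-static-domain --global --format="value(address)"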
The final code will look like this; I am using Helm to iterate over my hosts here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-router
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "your-static-domain"
    networking.gke.io/managed-certificates: "your-tls-cert"
spec:
  rules:
{{- range $index, $service := .Values.deployments }}
  - host: {{ $service.host }}
    http:
      paths:
      - backend:
          serviceName: {{ $service.name }}-service-name
          servicePort: {{ $service.port }}
{{- end }}
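For context, the .Values.deployments this template ranges over would come from a values.yaml shaped roughly like this (hosts, names and ports are illustrative):
deployments:
  - host: task-123.example.com
    name: task-123
    port: 80
  - host: task-124.example.com
    name: task-124
    port: 80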

GAE whitelist IP: VPC tied to App Engine secondary instance not working with firewall

I read this article:
How to properly configure VPC firewall for App Engine instances?
It was a huge help in getting the firewall set up in the first place, so for those who have found this and are struggling with that, follow along. https://cloud.google.com/appengine/docs/flexible/python/using-shared-vpc is a good reference, as there are some accounts that need permissions added to make the magic happen.
My issue: I have two containerized services running in App Engine, one default (the website) and one API. I've configured the API to run in a VPC/subnet separate from the default one. I have not made any changes to the firewall settings hanging directly off the App Engine settings, as those are global and do not let you target a specific instance; the website needs to remain public, while the API should require whitelisted access.
dispatch.yaml for configuring the subdomain mapping:
dispatch:
  - url: "www.example.com/*"
    service: default
  - url: "api.example.com/*"
    service: api
API yaml settings:
network:
  name: projects/mycool-12345-project/global/networks/apis
  subnetwork_name: apis
  instance_tag: myapi
Create a VPC network:
name - apis
subnet name - apis
creation mode - automatic
routing mode - regional
DNS policy - none
max MTU - 1460
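As a sketch, the same network expressed as a gcloud command (flag values mirror the settings above; in auto mode the subnets take the network's name):
gcloud compute networks create apis \
  --subnet-mode=auto \
  --bgp-routing-mode=regional \
  --mtu=1460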
Add firewall rules:
allow 130.211.0.0/22, 35.191.0.0/16 port 10402,8443 tag aef-instance priority 1000
deny 0.0.0.0/0 port 8443 tag myapi priority 900
allow 130.211.0.0/22, 35.191.0.0/16 port 8443 tag myapi priority 800
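For reference, the third rule above expressed as a gcloud command might look like this (the rule name is illustrative):
gcloud compute firewall-rules create allow-gfe-to-myapi \
  --network=apis \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8443 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=myapi \
  --priority=800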
This works, but I cannot specify the whitelisted IP.
If I disable the "allow 130/35 networks port 8443 priority 800" rule and do the following instead:
allow my.ip.number.ihave port 8443 tag myapi priority 800
it never trips this rule; it never recognizes my IP.
What change do I need, i.e. how do you configure the firewall in the VPC so it receives the public IP? When I reviewed the logs, they said my request was denied because my IP address was 35.x.x.x.
I would recommend contacting GCP support in that case. If I'm not wrong, you can whitelist the IP addresses directly at the App Engine level, but it's not a standard procedure.
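If you do try the App Engine-level route, the app-level firewall evaluates the original client IP rather than the proxied 35.x.x.x address; a minimal sketch (the priority is a placeholder, and my.ip.number.ihave is the question's placeholder):
gcloud app firewall-rules create 100 \
  --action=allow \
  --source-range=my.ip.number.ihave
gcloud app firewall-rules update default --action=deny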

Using Google App Engine URL patterns to reach non-default service

According to [this doc][1] I should be able to reach a non-default service through URL patterns.
I am using a custom domain (site.com). This is working to reach the default service.
I want to reach a second, non-default service called my-service.
According to the docs, it seems like this is the way to do it:
my-service.site.com
However, this isn't working. I understand I can use a dispatch.yaml file, but I would like to set it up just by the URLs if possible.
How do I set this up correctly?
Edit: The exact error (URL replaced) is this:
This site can’t be reached
my-service.site.com’s server IP address could not be found.
[1]: https://cloud.google.com/appengine/docs/standard/python/how-requests-are-routed#default_routing
The URL pattern you are using is correct. This problem is a DNS issue for "site.com". It appears that while you have mapped "site.com" to App Engine, subdomains are not mapped to App Engine, thus the DNS lookup is failing.
You either need to ensure that you have a wildcard entry in place for site.com, e.g.:
*.site.com. 3599 IN CNAME ghs.googlehosted.com.
Or map the specific service subdomain(s) to Google:
my-service.site.com. 3599 IN CNAME ghs.googlehosted.com.
This maps requests to subdomains of "site.com" to Google's servers.
Further details are in the documentation here.

Google Cloud Memorystore (Redis) ETIMEDOUT in App Engine

I'm writing a NodeJS app and trying to connect to GCP's Redis Memorystore, but I'm getting the ETIMEDOUT 10.47.29.131:6379 error message. The 10.47.29.131 corresponds to REDISHOST. I'm trying to reach the server by its internal private IP.
While the app works locally with a local Redis installed, it does not when deployed to GCP App Engine.
My GCP setup:
Redis instance running in zone europe-west3-a
A connector created under "Serverless VPC access", which is in europe-west3
Redis and the VPC connector on the same network, "default"
App Engine running in europe-west
Redis instance: [screenshot]
VPC connector: [screenshot]
The app.yml:
runtime: nodejs
env: flex
automatic_scaling:
# or this, but without env: flex (standard)
vpc_access_connector:
  name: "projects/project-ID/locations/europe-west/connectors/connector-name"
beta_settings:
  cloud_sql_instances: project-ID:europe-west3:name
env_variables:
  REDISHOST: '10.47.29.131'
  REDISPORT: '6379'
# removed this when trying without env: flex (standard)
network:
  name: default
  session_affinity: true
I followed these instructions to set everything up: https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-standard
Digging deeper, I found https://cloud.google.com/vpc/docs/configure-serverless-vpc-access, where they mention something about permissions and serverless-vpc-access-images, and while trying to follow the instructions at https://cloud.google.com/compute/docs/images/restricting-image-access#trusted_images I couldn't find "Define trusted image projects" anywhere.
What am I missing here?
Well, it turns out the problem was the region I had selected for the Redis instance.
From Documentation:
Important: In order to connect to a Memorystore for Redis instance, the connecting client must be located within the same region as the instance.
A region is a specific geographical location where you can run your resources. Each region is subdivided into several zones.
For example, the us-central1 region in the central United States has zones us-central1-a, us-central1-b, us-central1-c, and us-central1-f.
Although the documentation clearly says that App Engine and Memorystore have to be in the same region, my assumption about what regions actually are was false.
When I created the App Engine app, I created it in europe-west, which is the same as europe-west1. On the other hand, when I created the Redis instance, I used europe-west3, with the assumption that west3 is the same region as west, which it is not.
Since the App Engine region cannot be changed, I created another Redis instance in europe-west1 and now everything works.
So the Redis region must be exactly the same as the App Engine region: region1 is the same as region, but region2 and region3 are not.
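A sketch of creating the replacement instance with gcloud (the instance name and size are placeholders):
gcloud redis instances create my-redis \
  --region=europe-west1 \
  --network=default \
  --size=1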

App Engine dispatch.yaml URL

I have this:
- url: "awesome.com/*"
service: awesome
- url: "www.awesome.com/*"
service: awesome
Is possible to do this? to achieve same as above?
- url: "*.awesome.com/*"
service: awesome
No. What that last option you mentioned would do is map all subdomains of awesome.com to the service awesome (but not the bare domain awesome.com itself), as you can see in the example for mapping subdomains in the documentation.
Here you have more information about mapping custom domains.
Yes, it's possible to do that, but it won't be equivalent to what you have now:
it won't match your top-level domain awesome.com, which is matched by your current first rule
it'll match any <blah>.awesome.com subdomain, while your current set of rules only matches the www.awesome.com subdomain
If indeed you want to send requests for both the full domain and all its subdomains to the awesome service, you can achieve that through the custom domain mapping/config itself (which you need to do explicitly for the domain and each subdomain anyway); no need for a dispatch file. If you do keep a dispatch file, see the sketch below.
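A sketch of a dispatch file covering both the bare domain and all subdomains, combining the two rule sets:
dispatch:
  - url: "*.awesome.com/*"
    service: awesome
  - url: "awesome.com/*"
    service: awesome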
Note that you'd still need to deploy a default service; see Why do I need to deploy a "default" app before I can deploy multiple services in GAE?. You might as well let the awesome service be the default one in this case: less confusing and less room for trouble, IMHO.

GCP internal load balancer

I'm trying to access an Elasticsearch cluster on GKE from my project in GAE flexible. Since I don't want an external load balancer, I'm following this guide:
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
Both GKE and GAE are deployed in the same region, but the calls to the Elasticsearch cluster time out every time. If anyone has done this and can share some tips, that would be much appreciated!
My service.yaml file looks like this:
apiVersion: v1
kind: Service
metadata:
  name: internalloadbalancerservice
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app.kubernetes.io/component: elasticsearch-server
    app.kubernetes.io/name: elasticsearch # label selector service
spec:
  type: LoadBalancer
  loadBalancerSourceRanges: # restrict access
  - xxxxxxxx
  ports:
  - name: myport
    port: 9000
    protocol: TCP # default; can also specify UDP
  selector:
    app.kubernetes.io/name: elasticsearch # label selector for Pods
    app.kubernetes.io/component: elasticsearch-server
GCP now has a beta Global Access feature for internal load balancers, which allows an internal load balancer to be reached from any region within the same network.
This is helpful in your case too, where services are exposed using internal IP addresses but located in different regions.
UPDATE
The Global Access feature is now stable (for GKE 1.16.x and above), and it can be enabled by adding the annotation below to your service.
networking.gke.io/internal-load-balancer-allow-global-access: "true"
For example, the manifest below will create your internalloadbalancerservice LoadBalancer with an internal IP address, and that IP will be accessible from any region within the same VPC:
apiVersion: v1
kind: Service
metadata:
  name: internalloadbalancerservice
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    # Required to enable global access
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app.kubernetes.io/component: elasticsearch-server
    app.kubernetes.io/name: elasticsearch # label selector service
spec:
  type: LoadBalancer
  loadBalancerSourceRanges: # restrict access
  - xxxxxxxx
  ports:
  - name: myport
    port: 9000
    protocol: TCP # default; can also specify UDP
  selector:
    app.kubernetes.io/name: elasticsearch # label selector for Pods
    app.kubernetes.io/component: elasticsearch-server
This works well for GKE 1.16.x and above. For older GKE versions, you can refer to this answer.
To save anyone else from a similar situation, I will share my findings on why I couldn't connect to my GKE app from GAE. The GAE app was in region europe-west, while GKE was in zone europe-west4-a. I thought that was the same region, but changing the GKE location to europe-west1-b worked. It's not very obvious, but reading the documentation, GAE region europe-west and GKE zone europe-west1-b are both in Belgium.
Assuming that the GAE app and the GKE cluster are in the same region and in the same VPC network, I would suggest making sure you have created Ingress allow firewall rules that apply to the GKE nodes as targets, with the GAE app VMs as sources (a gcloud sketch follows the list below).
Remember that Ingress to VMs is denied by the implied deny Ingress rule, so unless you create Ingress allow firewall rules you won't be able to send packets to any VMs. And to use Internal Load Balancing (ILB), both the client and the backend VMs must be in the same:
- Region
- VPC network
- Project
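A sketch of such an allow rule with gcloud (the rule name, source range, and target tag are assumptions; substitute your GAE flex subnet range, your GKE node tag, and the service port from the manifests above):
gcloud compute firewall-rules create allow-gae-to-gke \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:9000 \
  --source-ranges=10.0.0.0/8 \
  --target-tags=gke-node-tag \
  --priority=1000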
