GCP - what are mysql-access instances? - google-app-engine

I have a Java application deployed to App Engine Standard that connects to Cloud SQL via public IP. I was looking at the VM instances dashboard and found a set of instances with the following naming pattern; they are alerting for high CPU utilization.
aet-uswest1-mysql--access-{abcd}
The description says "Anthos/GKE and Dataproc VMs are Google-managed and include built-in agents." These are all e2-micro instances, and I could not change the instance type. At first I thought these were the underlying App Engine instances, but the App Engine instances I use are F4 class, so I think these instances are something else.
What are these instances and how are they used?
Here is the list I see in the VM instances dashboard. I can't SSH into or manage these instances. (I have randomized the instance names and IP addresses.)
Name,Agent,Active Alerts,System Events,Zone,Private IP,Size
aet-uswest1-mysql--access-abcd,Not applicable,0,0,us-west1-a,10.5.0.9,e2-micro
aet-uswest1-mysql--access-efgh,Not applicable,0,0,us-west1-a,10.5.0.12,e2-micro
aet-uswest1-mysql--access-ijkl,Not applicable,0,0,us-west1-b,10.5.0.3,e2-micro
aet-uswest1-mysql--access-mnop,Not applicable,0,0,us-west1-c,10.5.0.7,e2-micro
aet-uswest1-mysql--access-qrst,Not applicable,0,0,us-west1-b,10.5.0.5,e2-micro
aet-uswest1-mysql--access-uvwx,Not applicable,0,0,us-west1-b,10.5.0.10,e2-micro
aet-uswest1-mysql--access-yz01,Not applicable,0,0,us-west1-c,10.5.0.2,e2-micro
aet-uswest1-mysql--access-23df,Not applicable,0,0,us-west1-c,10.5.0.6,e2-micro
aet-uswest1-mysql--access-efef,Not applicable,0,0,us-west1-a,10.5.0.11,e2-micro
aet-uswest1-mysql--access-57sf,Not applicable,0,0,us-west1-b,10.5.0.13,e2-micro

My understanding that the App Engine service was connecting to MySQL over the public network was incorrect. I found that these instances belong to the Serverless VPC Access connector used by App Engine Standard to reach the MySQL service.
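As a quick sanity check (a sketch, assuming the connector lives in us-west1 to match the instance names; CONNECTOR_NAME is a placeholder), you can list the project's Serverless VPC Access connectors and confirm that one of them matches the region and IP range of these e2-micro VMs:
gcloud compute networks vpc-access connectors list --region us-west1
gcloud compute networks vpc-access connectors describe CONNECTOR_NAME --region us-west1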

Related

List Mule Apps and their IP addresses

Is there a way to list all the Mule applications deployed in a VPC in CloudHub, along with their private IP addresses, as a report (maybe in Monitoring)? I know the private IP addresses are dynamic and will change, but is there a way to get such a report?
There is no built-in way to do that; however, you can gather the information with a script or application. I can give you the high-level direction. You need to get the list of applications for each environment associated with the VPC, get the deployment region to confirm that it matches the VPC region (in case multi-region deployments are enabled), and check the status to ensure each application is running. You can use the CloudHub 1.0 REST API: https://anypoint.mulesoft.com/exchange/portals/anypoint-platform/f1e97bc6-315a-4490-82a7-23abe036327a.anypoint-platform/cloudhub-api/minor/1.0/pages/home/
Then with the resulting list of applications you can query the DNS names used by CloudHub 1.0:
mule-worker-myapp.region.cloudhub.io to get the public IPs
mule-worker-internal-myapp.region.cloudhub.io to get the internal IP inside the VPC
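For example (a sketch; myapp and us-east-1 are placeholders for your application name and region), both names can be resolved with standard DNS tooling:
nslookup mule-worker-myapp.us-east-1.cloudhub.io            # public worker IPs
nslookup mule-worker-internal-myapp.us-east-1.cloudhub.io   # private IPs inside the VPC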

Routing between GCP projects (AppEngine + Kubernetes)

In Google Cloud, I have an application deployed in Kubernetes in one project (call it Project-A), and another deployed in App Engine (call it Project-B). Project-A has a Cloud NAT created using an automatic IP. Project-B uses App Engine standard.
Project-B by default allows ingress traffic from the internet. However, I only want Project-A to communicate with Project-B. All other traffic needs to be blocked.
I currently do not have any shared VPC configured.
In Project-B, I configured the App Engine firewall rules with the following deny rules (the list below is shown in order of the rule priority defined in the App Engine firewall):
0.0.0.1/32
0.0.0.2/31
0.0.0.4/30
0.0.0.8/29
0.0.0.16/28
0.0.0.32/27
0.0.0.64/26
0.0.0.128/25
0.0.1.0/24
0.0.2.0/23
0.0.4.0/22
0.0.8.0/21
0.0.16.0/20
0.0.32.0/19
0.0.64.0/18
0.0.128.0/17
0.1.0.0/16
0.2.0.0/15
0.4.0.0/14
0.8.0.0/13
0.16.0.0/12
0.32.0.0/11
0.64.0.0/10
0.128.0.0/9
1.0.0.0/8
2.0.0.0/7
4.0.0.0/6
8.0.0.0/5
16.0.0.0/4
32.0.0.0/3
64.0.0.0/2
128.0.0.0/1
default rule: allow *
(the CIDR blocks above correspond to 0.0.0.1 - 255.255.255.255; I used https://www.ipaddressguide.com/cidr to perform the calculation for me).
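For reference, each of those rules can also be created with the gcloud CLI (a sketch; the priority numbers are arbitrary and PROJECT-B is a placeholder):
gcloud app firewall-rules create 100 --action=deny --source-range=0.0.0.1/32 --project=PROJECT-B
gcloud app firewall-rules create 110 --action=deny --source-range=0.0.0.2/31 --project=PROJECT-B
(and so on, one rule per CIDR block above)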
From Project-A, I am still able to reach Project-B. Is there some kind of internal network routing that Google does which bypasses the App Engine firewall? It seems like in this case, Google is using the default rule and ignoring all my other rules.
I then did the reverse. The rules for all those CIDR blocks above were changed to ALLOW, while the last default rule was changed to DENY for all IPs. I then got the reverse behaviour - Project-A is unable to reach Project-B. Again, it looks like only the default rule is being used.
How can I achieve the situation where only Project-A can communicate with Project-B and no internet ingress traffic is allowed to reach Project-B? Can I avoid using a shared VPC? If I do use a shared VPC, what should the App Engine firewall rules be for Project-B?
Sure. I ended up going with the load balancer solution. This gives me a loosely coupled setup, which is better for my scenario. It takes less than 30 minutes to set up.

AWS: Allow access to Lambda only, from VPC

I am stuck with a problem where I have to spin up my databases in public subnets, because if I try to run my Lambdas in a VPC with ENIs attached, the response time of the Lambdas is really horrible. Is there a way to keep my databases in a private subnet and still let the Lambdas talk to them? The Lambdas must also be able to communicate with the internet. Maybe a security group that allows access from the Lambdas only?
To solve this issue, follow these steps:
Create two subnet groups for RDS.
Assign one of the RDS subnets to the Lambda as well. The Lambda will also have its own subnet, so after this the Lambda will have two subnets.
Change the Lambda security group's outbound rules to allow all traffic, so it can reach the internet.
More details can be found here: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
https://blog.shikisoft.com/running-aws-lambda-in-vpc-accessing-rds/
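A minimal sketch of attaching a Lambda to the shared subnets with the AWS CLI (the function name, subnet IDs, and security group ID are placeholders):
aws lambda update-function-configuration --function-name my-function --vpc-config SubnetIds=subnet-0aaa1111,subnet-0bbb2222,SecurityGroupIds=sg-0ccc3333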

gcloud - Can't configure my VPC connector to work with my Redis instance

I'm facing a problem with gcloud and their support can't seem to help me.
So, to put my app in production I need a Redis instance to host some data. I'm using Memorystore because I like to have everything on Google Cloud.
My app is in the standard environment on App Engine, so their doc (https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-standard) asks me to configure a VPC connector. But I think the CIDR I enter is always wrong; can someone help me find the right CIDR?
connectMode: DIRECT_PEERING
createTime: '2020-03-13T17:20:51.590448560Z'
currentLocationId: europe-west1-d
displayName: APP-EPG
host: 10.240.224.179
locationId: europe-west1-d
memorySizeGb: 1
name: projects/*************/locations/europe-west1/instances/app-epg
persistenceIamIdentity: *************
port: 6379
redisVersion: REDIS_4_0
reservedIpRange: 10.240.224.176/29
state: READY
tier: BASIC
Thank you all!
First, in order for the VPC connector to work, your App Engine instances have to be in the same VPC and region as your Redis instance. If not, there will be no connectivity between the two.
Also make sure your Redis instance and app use one of the supported locations; by now there are a lot of them.
Your Redis instance is in the europe-west1 region, so to create your VPC connector you have to set the name of the VPC network your Redis instance is in (for example "default").
The IP range you were asking about can be any unused range (one not already reserved by the network the Redis instance is in).
So, for example, if your "default" network already uses 10.13.0.0/28, then you have to specify something else, such as 10.140.0.0/28. It has to be a /28 - otherwise you won't be able to create the connector.
Why 10.13.0.0 or any other address? That range is assigned as the source network your apps use to connect to Redis (or any other VMs) in the specified network.
I've tested it using the command:
gcloud compute networks vpc-access connectors create conn2 --network default \
  --range 10.13.0.0/28 --region=europe-west1
Or you can do it in the console, under Serverless VPC Access, by clicking "Add new connector".
You can also read the documentation on how to create a connector.
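Once the connector exists, an App Engine standard service references it in app.yaml like this (a sketch; the project ID is a placeholder):
vpc_access_connector:
  name: projects/PROJECT_ID/locations/europe-west1/connectors/conn2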

Google Cloud Memorystore (Redis) ETIMEDOUT in App Engine

I'm writing a NodeJS app and trying to connect to GCP's Redis Memorystore, but I'm getting the ETIMEDOUT 10.47.29.131:6379 error message. The 10.47.29.131 corresponds to the REDISHOST. I'm trying to reach the server by its internal private IP.
While the app works locally with a local Redis installed, it does not when deployed to GCP App Engine.
My GCP-Setup
Redis instance running at location europe-west3-a
Created a connector under "Serverless VPC access" which is in europe-west3
Redis and the VPC-connector are on the same network "default".
App Engine running in europe-west
Redis instance:
VPC-connector:
The app.yml
runtime: nodejs
env: flex
automatic_scaling:
# or this but without env: flex (standard)
vpc_access_connector:
  name: "projects/project-ID/locations/europe-west/connectors/connector-name"
beta_settings:
  cloud_sql_instances: project-ID:europe-west3:name
env_variables:
  REDISHOST: '10.47.29.131'
  REDISPORT: '6379'
# removed this when trying without env: flex (standard)
network:
  name: default
  session_affinity: true
I followed these instructions to set everything up: https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-standard
Digging deeper, I found https://cloud.google.com/vpc/docs/configure-serverless-vpc-access, where they mention something about permissions and serverless-vpc-access-images, and while trying to follow the instructions at https://cloud.google.com/compute/docs/images/restricting-image-access#trusted_images I couldn't find "Define trusted image projects" anywhere.
What am I missing here?
Well, it turns out the problem was the region I had selected for the Redis instance.
From Documentation:
Important: In order to connect to a Memorystore for Redis instance, the connecting client must be located within the same region as the instance.
A region is a specific geographical location where you can run your resources. Each region is subdivided into several zones.
For example, the us-central1 region in the central United States has zones us-central1-a, us-central1-b, us-central1-c, and us-central1-f.
Although the documentation clearly says that App Engine and Memorystore have to be in the same region, my assumption about what regions actually are was false.
When I created the App Engine app, I created it in europe-west, which is the same as europe-west1. On the other hand, when I created the Redis instance, I used europe-west3, assuming that west3 was the same region as west, which it is not.
Since the App Engine region cannot be changed, I created another Redis instance in europe-west1 and now everything works.
So, the Redis region must be exactly the same as the App Engine region. region1 is the same as region, but region2 and region3 are not.
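A quick way to compare the two regions with gcloud (a sketch; the region value is a placeholder for wherever you created the Redis instance):
gcloud app describe --format="value(locationId)"    # App Engine region, e.g. europe-west (same as europe-west1)
gcloud redis instances list --region=europe-west1   # shows only instances actually in europe-west1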
