How do I whitelist private IP in Google Cloud SQL? - google-app-engine

I am trying to create an autoscaling web application behind HTTP Load Balancing. The web server instances will be connected to the load balancer, and the web instances in turn need to connect to MySQL/Cloud SQL through an internal IP.
So, to summarize: I need to use Linux web instances (not App Engine) and connect to MySQL/Cloud SQL over the internal network only.
Is that possible?
Thanks!

It's not possible; you need to use an external IP, as stated in the documentation:
Note: You must use the external (public) IP address of the GCE instance.
Also, you can find here that it's not possible to authorize a private network like the one specified:
You can not specify a private network (for example, 10.x.x.x) as an authorized network.

You should use the Cloud SQL Proxy.
It runs on the box and provides secure access to your Cloud SQL database.
Example here for container engine: https://cloud.google.com/sql/docs/container-engine-connect
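For example, on a Compute Engine instance the setup looks roughly like this (a minimal sketch: the download URL is the one documented for the Linux proxy binary, and myProject:us-central1:myInstance is a placeholder connection name):
# Download the proxy and make it executable
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
# Create the socket directory and start the proxy; the application then
# connects to the socket at /cloudsql/myProject:us-central1:myInstance
sudo mkdir -p /cloudsql && sudo chmod 777 /cloudsql
./cloud_sql_proxy -dir=/cloudsql -instances=myProject:us-central1:myInstance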

To access Cloud SQL, the IP must be whitelisted. To whitelist an IP, go to your project, then in the sidebar: Storage -> Cloud SQL. Select your instance, then 'Access Control'. Under 'Authorization', click the '+' to add your IP.
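The same whitelisting can be done from the command line with gcloud (a sketch: my-instance and 203.0.113.10 are placeholders, and note that --authorized-networks replaces the whole list, so include any entries you want to keep):
# Add a client's public IP (as a /32 CIDR) to the instance's authorized networks
gcloud sql instances patch my-instance --authorized-networks=203.0.113.10/32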

Related

How to point Google Cloud DNS to Plesk DNS

I'm new at this. I have installed Plesk on a Google Compute Engine instance without taking the easy route through the Marketplace.
My problem is with DNS. To register my domain at registro.br, at least two different name server IPs are required, which Google Cloud DNS provides but Plesk doesn't, so I'm manually copying all the DNS records from Plesk to Google, which takes a while. Another problem is renewing my SSL certificates automatically: since my primary DNS isn't served by Plesk, it can't renew them and I have to do that manually.
Is there a way to use Google's "NS" records to register at registro.br, but manage the zone in Plesk?
Have a look at the documentation for Google Cloud DNS here. As you can see, you can set up DNS forwarding between your non-GCP name servers and Google Cloud's internal name servers, but it's possible only for private zones. Cloud DNS public managed zones do not support forwarding. Public managed zones are only authoritative zones.
In my opinion, you should use Google Cloud DNS as your primary DNS server and configure Plesk to use external DNS, as shown in the documentation. To do this, use the custom type of installation (refer to the Deployment guide for details) and deselect the corresponding component (BIND DNS server support on Linux and Microsoft DNS server on Windows). In this case you cannot manage zones through Plesk; you use an external DNS server instead.
Unfortunately, there's no way to use Google Cloud DNS at registro.br and manage it on Plesk.
As an alternative, you can set up Plesk with DNS (BIND DNS server support on Linux and Microsoft DNS server on Windows), then find a DNS hosting provider and configure a secondary DNS there for your domain (with a different IP for registro.br). In this case you don't need to use Google Cloud DNS at all, but you have to configure synchronization between your master and secondary DNS servers.
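If you do go the Cloud DNS route, creating the public zone and retrieving the name servers to register at registro.br could look roughly like this (a sketch: example.com and plesk-zone are placeholders):
# Create a public managed zone for the domain
gcloud dns managed-zones create plesk-zone --dns-name="example.com." --description="Zone for the Plesk server"
# Print the Google name servers to enter at registro.br
gcloud dns managed-zones describe plesk-zone --format="get(nameServers)"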

How to connect user to different Cloud SQL instance once they login?

We have one GCP project containing 3 Cloud SQL instances, each in a different GCP region for a different group of users.
What we would like is that when a user logs in, we connect them to a different instance based on a master table (in a separate master SQL instance).
Is that the best way to do it, or can we do it differently?
Our application resides on Google app engine with Python flex env.
Thanks in advance!
We are thinking about using App Engine custom domain mapping to connect the user to a different SQL instance by using a different URL.
For App Engine Flexible, you can configure the Cloud SQL Proxy to support more than one Cloud SQL instance. Just use different port numbers for each SQL instance when setting up the proxy. If you are using unix sockets, just specify the instance names.
For example:
unix sockets:
./cloud_sql_proxy -dir=/cloudsql \
-instances=myProject:us-central1:myInstance,myProject:us-central1:myInstance2
Your connection string includes:
/cloudsql/myProject:us-central1:myInstance2
tcp:
./cloud_sql_proxy \
-instances=myProject:us-central1:myInstance=tcp:3306,myProject:us-central1:myInstance2=tcp:3307
With the tcp method, specify the host as 127.0.0.1 and the corresponding port (3306 or 3307) in your connection settings.
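To sanity-check which instance is behind each port, you can connect with the mysql client (a sketch, assuming the proxy from the tcp example above is running and a database user named dbuser exists):
# Port 3306 maps to myInstance, port 3307 to myInstance2
mysql --host=127.0.0.1 --port=3306 --user=dbuser -p
mysql --host=127.0.0.1 --port=3307 --user=dbuser -p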

Separate SQL server speed too slow in Google Cloud

I was moving all of my websites to Google Cloud and encountered a performance problem.
I set up a VM instance on Compute Engine and a Cloud SQL server.
I connected the Joomla website on the VM to the Cloud SQL server using the provided IP address (which seems to be a public IP).
Performance is really slow compared to the same website using a local database inside the VM itself.
So my question is: is there a way to find a local IP to connect to Cloud SQL, since our web server is also on Google Cloud infrastructure itself?
Or is the only way to stick with the database inside the VM?
Update
I set up the Cloud SQL Proxy using this guide.
I can connect to the mysql prompt through the proxy now.
But I still cannot find a way to make Joomla use this proxy to connect to the database.
The fastest, easiest, and most secure way to connect to your Cloud SQL instance from your Compute instance is by using the Cloud SQL Proxy. There are multiple reasons for this, but here are the main ones:
Secure connections: The proxy automatically encrypts traffic to and from the database using TLS 1.2 with a 128-bit AES cipher; SSL certificates are used to verify client and server identities.
Easier connection management: The proxy handles authentication with Google Cloud SQL, removing the need to provide static IP addresses.
There's also the fact that you only need a small, static number of instances (one in your case) connecting to the database, so you don't really need to overcomplicate your setup. You can just drop this binary onto your instance, run it as a daemon, and instantly have a fast lane to your Cloud SQL instance (I use "fast lane" here because the traffic will go through Google Cloud's internal network).
Setting up the Cloud SQL Proxy comes down to enabling the Cloud SQL API, giving the service account of your instance access to the Cloud SQL API, making sure the binary has execution permissions (chmod +x), and giving it the connection string to the Cloud SQL instance. You seem to be having issues using the Proxy, so if you need more troubleshooting ideas, you can find them in the documentation. The tutorial you've followed should have detailed instructions for these steps.
After all of that, and after making sure the Proxy is running, connecting Joomla to the database should be similar to how you do it via the MySQL client. Point your Joomla installation to localhost (or 127.0.0.1), give it a set of credentials to access the database itself (you can create database users via the Console), give it your Joomla database's name, and that should be it!
Don't forget that the Proxy needs to be running in TCP mode! That should be as simple as adding =tcp:LOCAL_PORT_TO_LISTEN_ON to the connection string parameter you're passing to the Proxy. Here's an example of how to run the Proxy:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306
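Putting the steps above together on the VM might look roughly like this (a sketch: <INSTANCE_CONNECTION_NAME> stays a placeholder, and nohup is just one simple way to keep the proxy running; a systemd unit would be more robust in production):
# Make the binary executable and keep it running in the background,
# listening on 127.0.0.1:3306 for Joomla to connect to
chmod +x cloud_sql_proxy
nohup ./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306 > cloud_sql_proxy.log 2>&1 &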
A Virtual Private Cloud (VPC) with Private Google access can also help increase performance.
Private Google access enables virtual machine (VM) instances on a subnetwork to reach Google APIs and Services using an internal IP address rather than an external IP address. You can use Private Google access to allow VMs without Internet access to reach Google services.
Here you get more details: https://cloud.google.com/vpc/docs/private-google-access
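Private Google access is enabled per subnetwork; with gcloud that could look like this (a sketch: my-subnet and us-central1 are placeholders):
# Allow VMs without external IPs on this subnet to reach Google APIs and services
gcloud compute networks subnets update my-subnet --region=us-central1 --enable-private-ip-google-access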

What IP addresses should I have listed to access my google cloud SQL instance?

So I'm following the Google Ruby guide to create and set up a Cloud SQL instance. Under 'Create and configure a Cloud SQL instance', step 4 tells you to allow all networks so the instance is open to all traffic, and underneath that it gives the warning:
This configuration leaves your Cloud SQL instance open to traffic from everyone, everywhere. It is used only for demonstration purposes. In production environments, restrict access to only those IP addresses that need access.
I haven't set up VM servers in a cloud environment before, so I have no idea what IP addresses I should be giving access to the SQL instance, or which ones 'need access'. Do I just change it to the IPs of my VMs?
In the context of the guide that you linked, the IP whitelist is necessary so you can access your Cloud SQL instance from your development server on your local computer. For that specific purpose, you can just whitelist your computer's IP (see http://www.whatsmyip.org) instead of the whole world.
When your application is going to be running on App Engine, you don't need to whitelist the IP. There is a separate access control list for that in the Cloud Console where you can list the App Engine applications authorized to connect.
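If you want to check that IP from the shell instead of a website, something like this works (a sketch; ifconfig.me is just one of several services that echo your public address, which you can then add to the authorized networks as shown earlier):
# Print the public IP your machine presents to the internet
curl -s https://ifconfig.me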

How do I authorize my ephemeral Google Container Engine instances in Cloud SQL?

I am currently test-driving Google Container Engine (GKE) and Kubernetes as a possible replacement for AWS/Elastic Beanstalk deployment. It was my understanding that, just by virtue of my dynamic servers being in the same project as the Cloud SQL instance, they'd naturally be included in the firewall rules of that project. However, this appears not to be the case. My app servers and SQL server are in the same availability zone, and I have both ipv4 and ipv6 enabled on the SQL server.
I don't want to statically assign IP addresses to cluster members that are themselves ephemeral, so I'm looking for guidance on how I can properly enable SQL access to my Docker-based app hosted inside GKE. As a stopgap, I've added the ephemeral IPs of the container cluster nodes, and that has enabled me to use Cloud SQL, but I'd really like a more seamless way of handling this in case my nodes somehow get new IP addresses.
The current recommendations (SSL or HAProxy) are discussed in [1]. We are working on a client proxy that will use service accounts to authenticate to Cloud SQL.
[1] Is it possible to connect to Google Cloud SQL from a Google Managed VM?
Sadly, this is currently the only way to do this. A better option would be to write a controller that dynamically examined the managed instance group created by GKE and automatically updated the IP addresses in the Cloud SQL API. But I agree the integration should be more seamless.
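A crude version of such a controller can even be sketched in shell and run from cron (a sketch under several assumptions: the node names start with gke-my-cluster, my-sql-instance is a placeholder, and --authorized-networks overwrites the existing list):
# Collect the current external IPs of the GKE nodes as /32 CIDRs
IPS=$(gcloud compute instances list \
  --filter="name~'^gke-my-cluster'" \
  --format="get(networkInterfaces[0].accessConfigs[0].natIP)" \
  | sed 's|$|/32|' | paste -sd, -)
# Replace the Cloud SQL instance's authorized networks with the node IPs
gcloud sql instances patch my-sql-instance --authorized-networks="$IPS"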
