GAE script to authorize networks on CloudSQL - google-app-engine

I am working on a project where I need access to Cloud SQL, but my IP address changes frequently (10 times a day or more). Is there a way to tell Cloud SQL about my new IP address using scripting, so that access from it is allowed? At the moment I have to use the Cloud Console, but I would prefer to automate this with a script.

I have just found out that in the Cloud Console, under Access Control, one can use a DNS name rather than just an IP. Google is pretty awesome.
So in Access Control simply enter a domain name as an allowed host, and use a free dynamic-DNS service such as No-IP (ddns.net) to keep that domain name up to date with your dynamic IP.
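Alternatively, the update can be scripted directly. A minimal sketch, assuming the gcloud CLI is installed and authenticated, that ifconfig.me is used to discover the current public IP, and that `my-instance` is a placeholder for your instance name:

```shell
# Look up the current public IP (ifconfig.me is one of several such services)
IP=$(curl -s https://ifconfig.me)

# Authorize that IP on the Cloud SQL instance. Note: --authorized-networks
# replaces the whole list, so include any other networks you still need.
gcloud sql instances patch my-instance \
    --authorized-networks="${IP}/32"
```

Running this from cron whenever the IP changes would keep the authorized list current without touching the console.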

When accessing Cloud SQL from App Engine, you don't have to authorize an IP address; instead you must authorize the App Engine application, as described in the Cloud SQL documentation.
EDIT:
If it is your local (ISP) IP address that keeps changing, then maybe you can set up an SSH tunnel:
Create an instance on Compute Engine; the cheapest machine type will do.
SSH to the instance with the parameters -L 3306:cloudsqlip:3306.
Now authorize the IP address of the Compute Engine instance (no need for a static IP; the ephemeral one works). You should then be able to connect to 127.0.0.1:3306 on your local machine, and your traffic will be tunneled to your Cloud SQL instance.
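As a sketch, with `CLOUDSQL_IP`, `user`, and `vm-external-ip` as placeholders for your Cloud SQL address and Compute Engine login:

```shell
# Forward local port 3306 through the VM to the Cloud SQL instance.
# -N: no remote command, tunnel only; keep this running in a terminal.
ssh -N -L 3306:CLOUDSQL_IP:3306 user@vm-external-ip

# In another terminal, point your client at the local end of the tunnel:
mysql -h 127.0.0.1 -P 3306 -u root -p
```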

Related

Uploading Databases

How does one go about uploading a database like Apache Cassandra after creating one? Furthermore, is there a way to upload/share only its skeleton structure, without the data gathered in it? I'm on MacOS and would like to use Python to do all of this. Thank you!
Based on your second comment, I take it that you want the database to be remotely accessible to clients/apps that are not installed locally.
Clients/apps connect to Cassandra on the IP address set for rpc_address and the CQL port set for native_transport_port (default is 9042) set in cassandra.yaml.
You mentioned that your Cassandra instance is running on your laptop, so only clients/apps on your local network can reach it, and only if you set rpc_address to an IP address reachable on that network (the default is localhost).
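For illustration, the relevant fragment of cassandra.yaml might look like this (the LAN address is an example, not a required value):

```yaml
# cassandra.yaml -- settings clients connect to
rpc_address: 192.168.1.50      # a LAN-reachable IP; default is localhost
native_transport_port: 9042    # CQL port for drivers and cqlsh
```

After changing these, restart Cassandra so the new listen address takes effect.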
If you're just trying out Cassandra and want to collaborate with other developer friends, try Astra and launch Cassandra instance on the free-tier (no credit card required). With it you can share the database credentials with your friends and they can connect to it over the internet.
You can connect to Astra from your Python app using the Python driver. Otherwise, Astra includes Stargate.io pre-configured and ready to use. Stargate is a data access gateway that lets you connect to Cassandra from your app using REST API, GraphQL API or JSON/Doc API without having to learn CQL. For more info, see Connecting to your Astra database. Cheers!

Google Kubernetes Engine Service Unable To Connect To Snowflake

I deployed a service to GKE on Google Cloud Platform, but unfortunately, Snowflake is blocking the IP Address. I think Snowflake only enables connections to IP Addresses that have been whitelisted, so I tried creating a cluster in the appropriate Network. But when I expose the service, I still run into the error.
I have also created an App Engine instance as well in the appropriate network, and it still doesn't let me connect to Snowflake.
Error Message:
DatabaseError: (snowflake.connector.errors.DatabaseError) 250001 (08001): None: Failed to connect to DB: IP [XXXXXXX] is not allowed to access Snowflake. Contact your local security administrator.
(Background on this error at: http://sqlalche.me/e/4xp6)
INFO:snowflake.connector.connection:closed
INFO:snowflake.connector.connection:closed
Your Snowflake account only accepts requests from whitelisted IPs, which means your calls to Snowflake need to come from a specific IP, or a known set of specific IPs.
By default, GKE will not do this.
When a request from one of your pods reaches outside the cluster to contact Snowflake, the pod IP is SNATed to the node's IP address. Nodes and their IPs are dynamic and ephemeral, so you can't guarantee that specific IPs are used.
Instead, consider using Cloud NAT with GKE. It ensures that all egress traffic from your GKE cluster uses the same IP address, and you can then whitelist the Cloud NAT IP in Snowflake.
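A rough sketch of the setup, assuming the default network and `us-central1`; all names (`nat-ip`, `nat-router`, `nat-config`) are placeholders:

```shell
# Reserve a static external IP so the egress address never changes.
gcloud compute addresses create nat-ip --region=us-central1

# Create a Cloud Router in the cluster's network and region.
gcloud compute routers create nat-router \
    --network=default --region=us-central1

# Create the NAT config using the reserved IP for all subnets.
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --nat-external-ip-pool=nat-ip \
    --nat-all-subnet-ip-ranges
```

The reserved address is then the one to add to your Snowflake network policy.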

Strange mikrotik dns relation to firebird database

In one company there is a Windows Server 2008 machine hosting a Firebird 2 database.
Clients use some software on their local machines to connect to this database.
The network runs on a few MikroTik routers.
When I change the main gateway MikroTik router's DNS to the CleanBrowsing addresses (185.228.168.10 and 185.228.169.11), the software cannot connect to the Firebird database.
When I use 8.8.8.8 or 1.1.1.1 as DNS, there is no such problem.
The software does not depend on DNS; I know this because I wrote it myself in C#.
How is that possible, and why does it happen?
Changing the main gateway router's DNS server to a different upstream server means you may get different responses to DNS queries. Assuming nothing else has changed on your network, I would suspect one of the following:
Your new DNS provider does not have a special configuration for the DNS entries you are querying.
Your new DNS provider is located somewhere else physically, and you are running into a situation where geolocation matters (different DNS responses for differently located users).
There is another device on the network intercepting DNS that is unaware of the change you made, for example a NAT rule on a router that redirects 8.8.8.8 to an internal DNS server.
I agree with your assessment that the software is probably not causing this; since what you changed is infrastructure, I think this is an infrastructure problem.
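One way to check the first two possibilities is to query both resolvers directly and compare the answers; `dbhost.example.local` is a placeholder for whatever name the clients actually resolve:

```shell
# Ask Google and CleanBrowsing for the same name and compare.
dig @8.8.8.8 dbhost.example.local +short
dig @185.228.168.10 dbhost.example.local +short
```

If the answers differ, or CleanBrowsing returns nothing (or a filter-page IP), the resolver change is the cause.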
With 15+ years of experience running Firebird in small networks, I always set up the following things to prevent such problems:
The first DNS server in the router's DHCP settings should point to the router's own IP (the gateway), so it resolves local PC names more easily.
Setting a (random?) DHCP domain name in the router's setup is recommended too.
Edit/replace the firebird.conf file with one that fixes the default port (3050) and the event port (3051).
Opening those ports in each PC's firewall is a MUST, both incoming and outgoing. You may narrow the rule to the local IP range to prevent outside attacks. (Create a script once, then run it on each PC as Administrator.)
Usually I also add fbserver.exe to the firewall exceptions.
Restart the Firebird service (or the whole PC) after changing the gateway, DNS, or firebird.conf.
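The firebird.conf change above amounts to two lines (these parameter names are from Firebird's standard configuration file; 3051 is this answer's chosen event port, not a Firebird default):

```
# firebird.conf -- pin both ports so firewall rules can stay fixed
RemoteServicePort = 3050
RemoteAuxPort = 3051
```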

restrict access to domain but allow access to subdomain

I have a domain name.com that points to IP 123.123.123.123, where I have installed an Apache 2 server.
I also have subdomains like ftp.name.com / etc.name.com that point to the same IP address.
I want that when a user types name.com in the browser, access is restricted (like when the Apache server is down, or when you try to access a domain that does not exist), but at the same time I want the user to be able to access the subdomains. Does it make sense? Is it possible?
"Like when the Apache server is down" will not be possible: if the server were down it would not accept connections at all, yet it has to accept the connection in order to receive the HTTP Host header that tells it which (sub)domain was requested.
A possible solution would be to change your DNS entries. Let ftp.name.com point to your server (123.123.123.123 in your example) and point name.com to an unused / invalid IP address. This way your server will not receive name.com requests at all.
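If changing DNS is not an option, a close approximation is to refuse the bare domain at the Apache level. This returns a 403 rather than a failed connection, which differs from the "server down" behavior asked about; a sketch for Apache 2.4, with paths and subdomains as placeholders:

```apache
# Catch-all vhost for the bare domain: deny everything.
<VirtualHost *:80>
    ServerName name.com
    <Location "/">
        Require all denied
    </Location>
</VirtualHost>

# Subdomains get their own vhosts and are served normally.
<VirtualHost *:80>
    ServerName etc.name.com
    DocumentRoot /var/www/etc
</VirtualHost>
```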

Connect Google Cloud SQL to MySQL workbench

I have just created my Google Cloud SQL instance. I am using the IP address that I got, i.e. 160.200.100.12, root as the user name, and the password that I set up. I cannot connect with Workbench and get the error: "Failed to connect to MySQL at 160.200.100.12 with user root". Any idea what may be wrong?
From reading through the comments, I can see that there are two points of confusion.
In order to connect to your Cloud SQL instance, you must first authorize access for connection requests coming from your IP address (meaning the one provided to you by your ISP). It is important to not confuse this IP address with the one provided by Google. I recommend reading through the documentation on ‘Connecting from External Applications’.
Furthermore, there seems to be some confusion regarding the IPv6 address which is included with your Cloud SQL instance. If your ISP does not support IPv6 connectivity, then you will not be able to use the IPv6 address provided to facilitate connections. In this case, you will have to request an IPv4 address for your Cloud SQL instance.
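As a sketch of both steps with the gcloud CLI, where `my-instance` and the client address 203.0.113.7 are placeholders:

```shell
# Assign a public IPv4 address to the instance (billed separately).
gcloud sql instances patch my-instance --assign-ip

# Authorize your ISP-assigned client IP. Note: --authorized-networks
# replaces the existing list, so include any networks you want to keep.
gcloud sql instances patch my-instance \
    --authorized-networks=203.0.113.7/32
```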
Finally, it is important to note that while your IPv6 address is free to use, assigning an IPv4 address will incur additional charges. I recommend reviewing the pricing information for Cloud SQL so you can get a better idea of how this is calculated.
I hope that this information is helpful.
