Google Kubernetes Engine Service Unable To Connect To Snowflake

I deployed a service to GKE on Google Cloud Platform, but unfortunately Snowflake is blocking the IP address. I think Snowflake only allows connections from IP addresses that have been whitelisted, so I tried creating the cluster in the appropriate network, but when I expose the service I still run into the error.
I have also created an App Engine instance in the appropriate network, and it still doesn't let me connect to Snowflake.
Error Message:
DatabaseError: (snowflake.connector.errors.DatabaseError) 250001 (08001): None: Failed to connect to DB: IP [XXXXXXX] is not allowed to access Snowflake. Contact your local security administrator.
(Background on this error at: http://sqlalche.me/e/4xp6)
INFO:snowflake.connector.connection:closed
INFO:snowflake.connector.connection:closed

Your Snowflake account only accepts requests from whitelisted IPs, which means the requests reaching Snowflake need to come from a specific IP, or a set of specific IPs.
By default, GKE does not give you this.
When a request from one of your pods reaches outside the cluster to contact Snowflake, the pod IP is SNAT'd to the node's IP address. Nodes and node IPs are dynamic and ephemeral, so you can't guarantee which specific IPs are used.
Instead, consider using Cloud NAT with GKE. This ensures that all requests leaving your GKE cluster use the same IP address; you can then whitelist the Cloud NAT IP in Snowflake, as sketched below.
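A minimal sketch of that setup with the gcloud CLI, assuming hypothetical names (my-nat-ip, my-router, my-nat), a my-vpc network, and the us-central1 region - substitute your own values:

# Reserve a static external IP; this is the address to whitelist in Snowflake
gcloud compute addresses create my-nat-ip --region=us-central1

# Create a Cloud Router in the cluster's VPC network
gcloud compute routers create my-router --network=my-vpc --region=us-central1

# Create the NAT config and pin it to the reserved IP
gcloud compute routers nats create my-nat \
    --router=my-router --router-region=us-central1 \
    --nat-all-subnet-ip-ranges \
    --nat-external-ip-pool=my-nat-ip

Note that Cloud NAT only applies to nodes without external IP addresses, so in practice this means running the cluster as a private cluster.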

Related

SqlException server not found error when connecting to an SQL DB from an Azure Function App

I have deployed a .NET Core Azure Function App running on the consumption pricing plan which connects, through EF Core, to a MS SQL database hosted by my website provider.
I see the following error reported by App Insights when the database connection is attempted:
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)
System.ComponentModel.Win32Exception (10060): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
...
Error Number:10060,State:0,Class:20
I followed the instructions here to obtain the function app's outboundIpAddresses (using Azure Resource Explorer, which I double-checked with the Azure CLI).
I passed the list of IPs to the support team at my hosting provider and yet still receive the same error.
I know it's not code related, as when I run my function app locally I can connect fine (my local IP is on the SQL Server allow list).
Why can the Azure function not connect to the database?
This is a small home project, so I can't afford the virtual network NAT gateway route.
"running on the consumption pricing plan"
Outbound IP addresses reported by functions running on the Consumption plan are not reliable.
As per Azure documentation:
When a function app that runs on the Consumption plan or the Premium plan is scaled, a new range of outbound IP addresses may be assigned. When running on either of these plans, you can't rely on the reported outbound IP addresses to create a definitive allowlist. To be able to include all potential outbound addresses used during dynamic scaling, you'll need to add the entire data center to your allowlist.
Instead, provide your hosting provider with the outbound IP addresses for the Azure region (data center) where your Azure function is hosted, to cover all possible IPs that your Azure function may be assigned.
The official Azure IP ranges for all regions are in a JSON file available for download here.
Firstly, download this file.
Secondly, search for AzureCloud.[region name], e.g. AzureCloud.uknorth or AzureCloud.francecentral, which will bring up the IP addresses for the Azure cloud in your specific region.
{
  "name": "AzureCloud.uknorth",
  "id": "AzureCloud.uknorth",
  "properties": {
    "changeNumber": 11,
    "region": "uknorth",
    "regionId": 29,
    "platform": "Azure",
    "systemService": "",
    "addressPrefixes": [
      "13.87.120.0/22",
      ...
      "2603:1027:1:1c0::/59"
    ],
    "networkFeatures": []
  }
}
Finally, provide your hosting provider with all the IP addresses listed in the fragment.
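If you have jq installed, a quick way to extract those prefixes from the downloaded file (assuming the ServiceTags_Public.json file name and the uknorth example above; the service-tag objects sit in a top-level values array):

# Print every address prefix for the AzureCloud.uknorth service tag
jq -r '.values[] | select(.name == "AzureCloud.uknorth") | .properties.addressPrefixes[]' ServiceTags_Public.json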
Your Azure function should then be able to consistently connect to your database.
N.B. The list can and will change over time, though entries are added more often than changed - currently, the last modified date is 26th April 2022, as stated in the details section on the download page.
If anything breaks, check the page for updates; to rule out any possible outages, upgrade your pricing plan.
Extra thoughts...
As you mention this is for a small project: I'm not sure what the comparable Azure pricing is like, but I'd host the same project on AWS.
For the function itself, AWS Lambda's free tier includes 1M free requests per month (like Azure) & 400,000 GB-seconds of compute time per month which should be plenty.
For the connectivity, you'd need a VPC (free), an Internet Gateway (free + negligible data transfer costs), an Elastic IP address (free) and a managed NAT gateway (roughly $1 a day depending on region).
Oh - and you'd get the added benefit of just having 1 Elastic IP address to provide to your hosting provider, which would always stay the same regardless of what 'pricing plan' you're on.
I'd also take a look at AWS as a potential option for future projects, but that's out of scope :)

Strange mikrotik dns relation to firebird database

A company runs a Windows Server 2008 machine hosting a Firebird 2 database.
Clients use some software on their local machines to connect to this database.
The network runs on a few MikroTik routers.
When I change the main gateway MikroTik router's DNS to the CleanBrowsing IP addresses (185.228.168.10 and 185.228.169.11), the software cannot connect to the Firebird database.
When I use the 8.8.8.8 or 1.1.1.1 DNS servers, there are no such problems.
The software does not depend on DNS - I know this because I wrote it myself in C#.
How is this possible, and why does it happen?
Changing the main gateway router's DNS server to another upstream server means you are potentially getting different responses to DNS queries. Assuming that nothing else has changed on your network, I imagine one of the following:
Your new DNS provider does not have special config for the dns entries you are querying
Your new DNS provider is located somewhere else physically, and you are running into a situation where geolocation matters (different dns responses to differently located users)
There is another gadget on the network intercepting DNS and is unaware of the change you are making. For example a NAT rule on a router that redirects 8.8.8.8 to an internal DNS server.
I agree with your assessment that the software is probably not causing this; you changed infrastructure, so I think this is an infrastructure problem.
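A quick way to test the first two possibilities is to query both resolvers directly and compare the answers, assuming a hypothetical host name db.example.com standing in for whatever the clients actually resolve:

# Ask each upstream resolver directly and compare the answers
nslookup db.example.com 185.228.168.10
nslookup db.example.com 8.8.8.8

If the answers differ, or CleanBrowsing returns none at all, you have found your culprit.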
With 15+ years of experience running FirebirdSQL in small networks, I always set the following things to prevent such problems:
The first DNS server handed out by the router's DHCP should point to the router's own IP (the gateway), so it resolves local PC names more easily
Setting a (random?) DHCP domain name in the router's setup is recommended too
Edit/replace the firebird.conf file with one that fixes the default port (3050) and the event port (3051)
Opening those ports in each PC's firewall is a MUST, both incoming and outgoing. You may narrow the rules to the local IP range to prevent outside attacks. (Create a script once, run it on each PC as Admin once - see the sketch below.)
Usually I also add "fbserver.exe" as a firewall exception
Restart the FirebirdSQL service (or the whole PC) after changing the gateway, DNS, or firebird.conf
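A minimal sketch of such a one-off firewall script for Windows (run as Administrator); the rule names are hypothetical, the ports match the firebird.conf values above, and the fbserver.exe path is an assumption - adjust it for your install:

REM Allow Firebird's main port and event port, inbound and outbound
REM (add remoteip=192.168.1.0/24 to narrow a rule to the local range)
netsh advfirewall firewall add rule name="Firebird 3050 in" dir=in action=allow protocol=TCP localport=3050
netsh advfirewall firewall add rule name="Firebird 3051 in" dir=in action=allow protocol=TCP localport=3051
netsh advfirewall firewall add rule name="Firebird 3050 out" dir=out action=allow protocol=TCP remoteport=3050
netsh advfirewall firewall add rule name="Firebird 3051 out" dir=out action=allow protocol=TCP remoteport=3051
REM Add the server binary itself as an exception
netsh advfirewall firewall add rule name="Firebird server" dir=in action=allow program="C:\Program Files\Firebird\Firebird_2_0\bin\fbserver.exe"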

restrict access to domain but allow access to subdomain

I have a domain name.com that points to IP 123.123.123.123, where I have installed an Apache2 server.
I also have subdomains like ftp.name.com / etc.name.com that point to the same IP address.
I want access to be restricted when a user types name.com into the browser - behaving like when the Apache server is down (or like when you try to access a domain that does not exist) - but at the same time I want the user to be able to access the subdomains. Does it make sense? Is it possible?
"Like when the apache server is down" will not be possible because if it's down, it does not accept connections. But it will have to accept the connection to receive the HTTP header which tells the server the requested (sub)domain.
A possible solution would be to change your DNS entries. Let ftp.name.com point to your server (123.123.123.123 in your example) and configure www.name.com to an unused / invalid ip address. This way your ftp.name.com server will not receive name.com queries at all.
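A sketch of what that could look like in a BIND-style zone file, using 192.0.2.1 (a reserved documentation address that is never routed) as the unused IP:

; name.com resolves to an unreachable address, subdomains to the real server
name.com.       IN A    192.0.2.1
ftp.name.com.   IN A    123.123.123.123
etc.name.com.   IN A    123.123.123.123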

Connect Google Cloud SQL to MySQL workbench

I have just created my Google Cloud SQL instance. I am using the IP address that I get (e.g. 160.200.100.12), root as the user name, and the password that I have set up. I cannot connect with Workbench and get the error: "failed to connect to MySQL at:160.200.100.12 with user root". Any idea what may be wrong?
From reading through the comments, I can see that there are two points of confusion.
In order to connect to your Cloud SQL instance, you must first authorize access for connection requests coming from your IP address (meaning the one provided to you by your ISP). It is important to not confuse this IP address with the one provided by Google. I recommend reading through the documentation on ‘Connecting from External Applications’.
Furthermore, there seems to be some confusion regarding the IPv6 address which is included with your Cloud SQL instance. If your ISP does not support IPv6 connectivity, then you will not be able to use the IPv6 address provided to facilitate connections. In this case, you will have to request an IPv4 address for your Cloud SQL instance.
Finally, it is important to note that while your IPv6 address is free to use, assigning an IPv4 address will incur additional charges. I recommend reviewing the pricing information for Cloud SQL so you can get a better idea of how this is calculated.
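As a minimal sketch with the gcloud CLI (assuming a hypothetical instance name my-instance and 203.0.113.4 standing in for your ISP-assigned address), requesting an IPv4 address and authorizing your client IP could look like this:

# Assign a public IPv4 address to the instance (this incurs charges)
gcloud sql instances patch my-instance --assign-ip

# Authorize your ISP-assigned address; note this replaces the existing list
gcloud sql instances patch my-instance --authorized-networks=203.0.113.4/32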
I hope that this information is helpful.

GAE script to authorize networks on CloudSQL

I am working on a project where I need access to Cloud SQL, but my IP address changes frequently (10 times a day or more). Is there a way to tell Cloud SQL about my new IP address using a script, to allow access from it? At the moment I have to use the Cloud Console, but I would rather write a script.
I have just found out that in the Cloud Console, under Access Control, one can use a DNS name rather than just an IP. Google is pretty awesome.
So in Access Control simply put a domain name as allowed access, and use a simple dynamic DNS service like ddns.net (No-IP) to keep the domain name up to date with the dynamic IP.
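If you would rather script the update directly, here is a minimal sketch, assuming the gcloud CLI is installed and authenticated, a hypothetical instance name my-instance, and the ifconfig.me service to discover your current public IP:

#!/usr/bin/env bash
# Re-authorize the current public IP on the Cloud SQL instance.
# Note: --authorized-networks replaces the entire existing list.
MY_IP="$(curl -s ifconfig.me)"
gcloud sql instances patch my-instance --authorized-networks="${MY_IP}/32"

Run it from cron, or whenever you notice your address has changed.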
When accessing Cloud SQL from AppEngine, you don't have to authorize the IP address. You must authorize the AppEngine application as described here.
EDIT:
If it is your local (ISP) IP address that keeps changing, then maybe you can set up an SSH tunnel:
Create an instance on Compute Engine; it can be the cheapest type
SSH to the instance with the params -L 3306:cloudsqlip:3306
Now authorize the IP address of the Compute Engine instance (no need for a static IP; the ephemeral one will do). You should then be able to connect to 127.0.0.1:3306 on your local machine, and your traffic will be tunneled to your Cloud SQL instance.
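Putting that together, a sketch of the tunnel, where user and gce-instance-ip are placeholders for your SSH login and cloudsqlip is your Cloud SQL instance's IP from the step above:

# Forward local port 3306 through the Compute Engine instance to Cloud SQL
ssh -L 3306:cloudsqlip:3306 user@gce-instance-ip

# In another terminal, connect through the tunnel
mysql -h 127.0.0.1 -P 3306 -u root -p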
