App Engine connection to Cloud SQL issues

I'm using flask_mysqldb (https://github.com/alexferl/flask-mysqldb/blob/master/flask_mysqldb/__init__.py) to connect to cloud sql from app engine.
I'm passing the following params to it:
MYSQL_HOST = "PUBLIC IP of the cloud sql instance"
MYSQL_UNIX_SOCKET = '/cloudsql/<project_id>:<region>:<instance name>'
MYSQL_USER = 'user'
MYSQL_PASSWORD = 'pword'
But I'm getting an error message when trying to connect from app engine:
MySQLdb._exceptions.OperationalError: (2003, "Can't connect to MySQL server on 'PUBLIC_IP_ADDRESS' (110)")
When I allow all connections to the cloud sql instance (i.e. allow 0.0.0.0/0 as allowed network), then app engine succeeds and is able to connect.
The app engine instance is in the same project as the cloud sql instance, and the app engine service account has the right permissions.
Any ideas what the issue is? Thanks.

This error is caused by setting both MYSQL_HOST and MYSQL_UNIX_SOCKET in your example. Setting MYSQL_HOST tells flask-mysqldb to connect over TCP, while setting MYSQL_UNIX_SOCKET tells it to connect via a Unix socket. These conflict, which causes the error you are seeing.
Don't set MYSQL_HOST; remove it from your code and everything should work fine:
from flask import Flask
from flask_mysqldb import MySQL

app = Flask(__name__)

# Required
app.config["MYSQL_USER"] = "<YOUR_DB_USER>"
app.config["MYSQL_PASSWORD"] = "<YOUR_DB_PASSWORD>"
app.config["MYSQL_DB"] = "<YOUR_DB_NAME>"
app.config["MYSQL_UNIX_SOCKET"] = "/cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE_NAME>"
# Extra configs, optional:
app.config["MYSQL_CURSORCLASS"] = "DictCursor"

mysql = MySQL(app)

@app.route("/")
def users():
    cur = mysql.connection.cursor()
    cur.execute("""SELECT user, host FROM mysql.user""")
    rv = cur.fetchall()
    return str(rv)

if __name__ == "__main__":
    app.run(debug=True)
With the above change you should no longer have to allow 0.0.0.0/0 as an allowed network.

Related

db query error: failed to connect to server - please inspect Grafana server log for details

I'm new to Grafana and I'm trying to connect Grafana to Microsoft SQL Server. I run both Grafana and SQL Server on the same Windows machine. In Grafana, I selected the SQL Server data source and provided the host and DB name. I created a user in SQL Server and granted it reader permission as per https://grafana.com/docs/grafana/latest/datasources/mssql/. With either SQL Server Authentication or Windows Authentication, I get the error db query error: failed to connect to server - please inspect Grafana server log for details.
I then checked the Grafana log file: lvl=eror msg="query error" logger=tsdb.mssql err="Unable to open tcp connection with host 'servername:1433': dial tcp [2a02:908:1391:9e80:c180:xxxx:xxxx:xxxx]:1433: connectex: No connection could be made because the target machine actively refused it."
How can I force SQL Server to give access to Grafana?
I should mention that I haven't changed the Grafana conf file. Do I need to change the default conf or create another conf file?
The default DB configuration in the Grafana conf file is:
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as one string using the url property.
# Either "mysql", "postgres" or "sqlite3", it's your choice
type = sqlite3
host = 127.0.0.1:3306
name = grafana
user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password =
# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
url =
# Max idle conn setting default is 2
max_idle_conn = 2
# Max conn setting default is 0 (mean not set)
max_open_conn =
# Connection Max Lifetime default is 14400 (means 14400 seconds or 4 hours)
conn_max_lifetime = 14400
# Set to true to log the sql calls and execution times.
log_queries =
# For "postgres", use either "disable", "require" or "verify-full"
# For "mysql", use either "true", "false", or "skip-verify".
ssl_mode = disable
# Database drivers may support different transaction isolation levels.
# Currently, only "mysql" driver supports isolation levels.
# If the value is empty - driver's default isolation level is applied.
# For "mysql" use "READ-UNCOMMITTED", "READ-COMMITTED", "REPEATABLE-READ" or "SERIALIZABLE".
isolation_level =
ca_cert_path =
client_key_path =
client_cert_path =
server_cert_name =
# For "sqlite3" only, path relative to data_path setting
path = grafana.db
# For "sqlite3" only. cache mode setting used for connecting to the database
cache_mode = private
The settings in Grafana's configuration file refer to its internal database, so you do not need to change any of these to connect to MS SQL Server.
Try using "localhost" or "127.0.0.1" as the host name (see the quick connectivity check below)
Make sure authentication is set to SQL Server Authentication
Make sure Encrypt is false
Check the SQL Server logs for any errors
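The "actively refused" message in the log usually means nothing is accepting connections on the host and port Grafana is dialing. As a quick sanity check, here is a minimal sketch in Python, assuming SQL Server should be listening locally on the default port 1433 (adjust host and port to your setup):
import socket

# Assumption: SQL Server runs on this machine and should listen on TCP 1433.
try:
    with socket.create_connection(("127.0.0.1", 1433), timeout=5):
        print("TCP 1433 is reachable")
except OSError as exc:
    print(f"Cannot reach 127.0.0.1:1433: {exc}")
If this check fails with a "refused" error, fix SQL Server's network configuration first (for example, make sure its TCP/IP protocol is enabled and listening on the expected port); no Grafana setting will work around a port that is not listening.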
If the SQL Server is hosted in Docker, use the IP address of your machine and follow these steps:
Open CMD and run IPCONFIG /ALL
Look for the IPv4 address under WiFi or vEthernet; in my case, it's 192.168.1.24 and 172.45.202.1, respectively
Then try accessing the app hosted in the Docker container with the mapped port (e.g., 1433/5436)
It simply worked using 192.168.1.24:1433 or 172.45.202.1:1433, and all container apps hosted in Docker can be accessed the same way

Why do I get this error when I connect Snowflake and Python

This is the error I get when I connect to Snowflake via Python:
OperationalError: 250003: Failed to execute request: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])",)
I connect using:
ctx = snowflake.connector.connect(
    user='JoeBloggs',
    password='pwd',
    account='JoeBloggs',
    database='DEV_DATA'
)
Do I need to feed in other parameters such as port, host, etc.? How do I find out what these are?
I think your value for 'account' needs to be modified. It looks like you're using your username there, but it should be the Snowflake account identifier: the portion of the URL you connect to that precedes snowflakecomputing.com. For example, 'xy12345.east-us-2.azure'.
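A minimal sketch of that change with the Python connector; the account value below is only an illustrative placeholder, so substitute the identifier from your own Snowflake URL:
import snowflake.connector

# 'xy12345.east-us-2.azure' is a placeholder account identifier: the part of
# your Snowflake URL that precedes ".snowflakecomputing.com".
ctx = snowflake.connector.connect(
    user='JoeBloggs',
    password='pwd',
    account='xy12345.east-us-2.azure',  # account identifier, not your username
    database='DEV_DATA'
)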
My initial thought is that the error indicates a firewall or proxy issue. In particular, a proxy might intercept Snowflake's SSL certificate and replace it with its own. The best way to resolve this is to ensure the certificate is trusted by the proxy and the proxy is configured as per Snowflake's documentation so that the Snowflake certificate can pass through.
The documentation below has more information on using a proxy with SnowSQL. You can pass the error, with issuer details, to your network engineer and request that the required URLs be whitelisted (documentation outlining the whitelisting requirements is also below). You can use the SYSTEM$WHITELIST function to get all the URLs to whitelist in a proxy or firewall for your account.
https://docs.snowflake.net/manuals/user-guide/snowsql-start.html#using-a-proxy-server
https://docs.snowflake.net/manuals/user-guide/hostname-whitelist.html
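If you can already run SQL somewhere (for example from the Snowflake web UI, or from a machine that is not behind the proxy), the SYSTEM$WHITELIST output mentioned above can also be fetched with the Python connector. A sketch, assuming a working connection ctx like the one created earlier:
cur = ctx.cursor()
try:
    # Returns a JSON document listing the hostnames and ports that must be
    # reachable (i.e. whitelisted in your proxy or firewall) for this account.
    cur.execute("SELECT SYSTEM$WHITELIST()")
    print(cur.fetchone()[0])
finally:
    cur.close()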
First, install the Snowflake Python connector: pip3 install snowflake-connector-python
Can you try the code below?
------------------------------------------------------
import snowflake.connector

PASSWORD = '*****'
USER = '<UNAME>'
ACCOUNT = '<ACCNTNAME>'
WAREHOUSE = '<WHNAME>'
DATABASE = '<DBNAME>'
SCHEMA = 'PUBLIC'

print("Connecting...")
con = snowflake.connector.connect(
    user=USER,
    password=PASSWORD,
    account=ACCOUNT,
    warehouse=WAREHOUSE,
    database=DATABASE,
    schema=SCHEMA
)
con.cursor().execute("USE WAREHOUSE " + WAREHOUSE)
con.cursor().execute("USE DATABASE " + DATABASE)
try:
    result = con.cursor().execute("SELECT * FROM <TABLENAME>")
    result_list = result.fetchall()
    print(result_list)
finally:
    con.close()
---------------------------------------------------

Missing socket for connection to Cloud SQL from Google App Engine standard environment

I am trying to connect a Python 3.7 GAE app in standard environment to a Cloud SQL Postgres 9.6 database.
The procedure is described in this doc.
Unfortunately, the UNIX socket /cloudsql/<DB_CONNECTION_NAME> that is normally used to connect to the database does not exist on the GAE instance (folder /cloudsql is empty).
More information on what I tried:
the GAE app and the cloud SQL instance are in the same project and region (I tried in europe-west1 and europe-west3)
I have added and removed a beta_settings -> cloud_sql_instances key in the app.yaml config file, to no avail. From what I understood, this should only be needed in the flexible environment anyway
I have activated the Cloud SQL Admin UI
Has anyone encountered and solved this problem?
The SO questions about this problem are either old, unanswered, or do not solve the problem in my environment.
I am able to get the connection working with the following configuration:
PROJECT=[[YOUR-PROJECT-ID]]
REGION=europe-west3
INSTANCE=instance-01
and:
import os

from flask import Flask
import psycopg2

db_user = os.environ.get('CLOUD_SQL_USERNAME')
db_pass = os.environ.get('CLOUD_SQL_PASSWORD')
db_name = os.environ.get('CLOUD_SQL_DATABASE')
db_conn = os.environ.get('CLOUD_SQL_INSTANCE')

app = Flask(__name__)

@app.route('/')
def main():
    host = '/cloudsql/{}'.format(db_conn)
    cnx = psycopg2.connect(
        dbname=db_name,
        user=db_user,
        password=db_pass,
        host=host
    )
    with cnx.cursor() as cursor:
        cursor.execute('SELECT NOW() as now;')
        result = cursor.fetchall()
        current_time = result[0][0]
    cnx.commit()
    cnx.close()
    return str(current_time)
and:
flask==1.0.2
psycopg2==2.8
and, with each ${VARIABLE} replaced by its value:
runtime: python37

env_variables:
  CLOUD_SQL_INSTANCE: "${PROJECT}:${REGION}:${INSTANCE}"
  CLOUD_SQL_USERNAME: ${USERNAME}
  CLOUD_SQL_PASSWORD: ${PASSWORD}
  CLOUD_SQL_DATABASE: ${DATABASE}
Based on a similar issue, the most probable root cause is detailed here:
https://issuetracker.google.com/117804657#comment16
Other possible causes, per the discussion, could be:
A lack of public IP on the SQL instance
Specification of the port in the configuration settings
Here are some recommendations:
Run the application locally to make sure it works before deploying to App Engine (see the Cloud SQL Proxy sketch below).
Double check the Cloud SQL configuration (e.g. username, password, instance connection name) on the app.yaml file.
Make sure the Google Cloud SQL API is enabled.
Try recreating the Cloud SQL instance.
Simply recreating the Cloud SQL instance or database has worked in other cases, as modifications to the quickstart’s default setup might be difficult to track.
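For the first recommendation above, note that the /cloudsql Unix sockets only exist on App Engine itself. To reproduce them locally you can run the legacy Cloud SQL Proxy; a sketch, assuming the proxy binary is downloaded and your gcloud credentials have access to the instance:
./cloud_sql_proxy -dir=/cloudsql -instances=${PROJECT}:${REGION}:${INSTANCE}
With the proxy running, the psycopg2 code shown earlier connects through /cloudsql/<connection-name> on your machine just as it would on App Engine.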
Cheers
I fixed this by enabling the Cloud SQL Admin API.
See https://cloud.google.com/sql/docs/postgres/debugging-connectivity
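If you prefer the command line to the console, the Cloud SQL Admin API can be enabled with gcloud (assuming the Cloud SDK is installed and pointed at the project that hosts the instance):
gcloud services enable sqladmin.googleapis.com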

Bluemix connecting to external SQL Server Database

I have an application built using the ASP.NET 5 runtime - I would like to connect it to an on-premise SQL Server Database.
After some research I've already created the user-provided service with the relevant credentials, however I am unsure what to do next (i.e. writing the necessary code connecting it in ASP.NET).
Some further googling suggests using Secure Gateway, but is this the only way? The cloud I am working on is dedicated and does not have the Secure Gateway service. Is there a workaround for this?
(Note: The application I'm working on is based on the ASP.NET-Cloudant example on IBM Github, if that helps).
https://github.com/IBM-Bluemix/asp.net5-cloudant
The Secure Gateway service isn't required as long as the Bluemix environment can connect to the server running SQL Server. This might require your firewall rules to be a little more relaxed on the SQL Server, or you can contact IBM to create a secure tunnel as Hobert suggested in his answer.
Aside from that issue, if you're planning to use Entity Framework to connect to your SQL Server, it should work similar to the existing tutorials on the asp.net site. The only difference will be in how you access the environment variables to create your connection string.
Assuming that you created your user-provided service with a command similar to this:
cf cups my-sql-server -p '{"server":"127.0.0.1","database":"MyDB","user":"sa","password":"my-password"}'
Your connection string in your Startup.cs file's ConfigureServices method would then look something like this:
string vcapServices = Environment.GetEnvironmentVariable("VCAP_SERVICES");
string connection = "";
if (vcapServices != null)
{
    string myServiceName = "my-sql-server";
    JArray userServices = (JArray)JObject.Parse(vcapServices)?["user-provided"];
    dynamic creds = ((dynamic)userServices
        .FirstOrDefault(m => ((dynamic)m).name == myServiceName))?.credentials;
    connection = string.Format(@"Server={0};Database={1};User Id={2}; Password={3};",
        creds.server, creds.database, creds.user, creds.password);
}
Update
The cloudant boilerplate that you're modifying doesn't use Entity Framework because cloudant is a NoSQL database, so it's a bit different than connecting to SQL Server. The reason that the boilerplate calls .Configure to register the creds class is that it needs to use that class from another location, but when using Entity Framework you simply need to use the credentials when adding EF to the services in the Startup.cs file so you don't need to use .Configure<creds>.
If you follow the guide here, the only part you'll need to change is the line var connection = @"Server=(localdb)\mssqllocaldb;Database=EFGetStarted.AspNet5.NewDb;Trusted_Connection=True;"; replacing it with the code above to create the connection string instead of hard-coding it like they did in the example tutorial.
Eventually, your ConfigureServices method should look something like this, assuming your DbContext class is named BloggingContext like in the example:
public void ConfigureServices(IServiceCollection services)
{
    string vcapServices = Environment.GetEnvironmentVariable("VCAP_SERVICES");
    string connection = "";
    if (vcapServices != null)
    {
        string myServiceName = "my-sql-server";
        JArray userServices = (JArray)JObject.Parse(vcapServices)?["user-provided"];
        dynamic creds = ((dynamic)userServices
            .FirstOrDefault(m => ((dynamic)m).name == myServiceName))?.credentials;
        connection = string.Format(@"Server={0};Database={1};User Id={2}; Password={3};",
            creds.server, creds.database, creds.user, creds.password);
    }

    services.AddEntityFramework()
        .AddSqlServer()
        .AddDbContext<BloggingContext>(options => options.UseSqlServer(connection));

    services.AddMvc();
}
And then your Startup method would be simplified to:
public Startup(IHostingEnvironment env)
{
    var configBuilder = new ConfigurationBuilder()
        .AddJsonFile("config.json", optional: true);
    Configuration = configBuilder.Build();
}
Excellent!
In Public Bluemix Regions, you would create and use the Secure Gateway Service to access the On-Premise MS SQL Server DB.
In your case, as a Bluemix Dedicated client, you should engage your IBM Bluemix Administration Team so they can work with your Network Team to create a tunnel between the Dedicated Bluemix Region and your On-Premise MS SQL DB Server.
If you want to connect directly from your Asp.Net Core application to a SQL Server you actually don't need a Secure Gateway.
For example, if you want to use a SQL Azure as your Database you can simply add the given connection string in your application.
But, for practical and security reasons, you should create a User-Provided Service to store your credentials (rather than hard-coding them), and pull your credentials from your VCAP_SERVICES by simply adding SteelToe to your Configuration Builder (instead of parsing the configuration manually with JObjects and JArrays).
Step-by-step:
In your CloudFoundry console create a User-Provided Service using a Json:
cf cups MySqlServerCredentials -p '{"server":"tcp:example.database.windows.net,1433", "database":"MyExampleDatabase", "user":"admin", "password":"password"}'
Obs.: If you use the Windows console/PowerShell you should escape your double quotes in the JSON like:
'{\"server\":\"myserver\",\"database\":\"mydatabase\",\"user\":\"admin\",\"password\":\"password\"}'
After you have created your User-Provided Service, connect this service to your application in the Bluemix console.
Then, in your application, add a reference to the SteelToe CloudFoundry package Steeltoe.Extensions.Configuration.CloudFoundry
In your Startup class add:
using Steeltoe.Extensions.Configuration;
...
var builder = new ConfigurationBuilder()
    .SetBasePath(basePath)
    .AddJsonFile("appsettings.json")
    .AddCloudFoundry();
var config = builder.Build();
Finally, to access your configurations just use:
var mySqlName = config["vcap:services:user-provided:0:name"];
var database = config["vcap:services:user-provided:0:credentials:database"];
var server = config["vcap:services:user-provided:0:credentials:server"];
var password = config["vcap:services:user-provided:0:credentials:password"];
var user = config["vcap:services:user-provided:0:credentials:user"];
OBS.: If you're using Azure, remember to configure your database firewall to accept the IP of your Bluemix application. Since Bluemix doesn't give you a static IP address by default, you have some options:
Buy the Bluemix Statica service for your application (expensive)
Update the firewall rules with a REST PUT using the current IP of the application (workaround)
Open your Azure database firewall to a broad range of IPs (just DON'T)
More info about SteelToe CloudFoundry at:
https://github.com/SteeltoeOSS/Configuration/tree/master/src/Steeltoe.Extensions.Configuration.CloudFoundry

Accessing existing cloud SQL instance from another project ID

I have created a Cloud SQL instance in a PHP project and completed the billing procedure successfully. The project works.
Now I want to access my database from another project, but this time from a Java SDK project with servlets.
Using the example in https://developers.google.com/appengine/docs/java/cloud-sql/
In the Java project I have:
project id: javaProjectID
In the PHP project I have:
project id: phptestID
instance name: phpinstanceName
database: dbname
In my servlet code I make the connection below:
String url = "jdbc:google:mysql://phptestID:phpinstanceName/dbname?user=root";
Connection conn = (Connection) DriverManager.getConnection(url);
(The connection fails at this point and never reaches the database to make the query below.)
String sqlStmt = "SELECT * FROM sometable";
PreparedStatement stmt = conn.prepareStatement(sqlStmt);
ResultSet res = stmt.executeQuery(sqlStmt);
How can I access my database from another project? Is there any other way except your-project-id:your-instance-name?
You will need to give the new App Engine app access to your Cloud SQL instance. To do this, go to the Cloud SQL instance in the console, edit it, go down to Authorized App Engine Applications, and then add the app ID of the new App Engine app.
UPDATE:
The most recent steps look like the screenshot attached below.
