Cannot connect to Amazon Keyspaces with cqlsh

I am having trouble connecting to Amazon Keyspaces, both with my application code and cqlsh:
cqlsh cassandra.eu-west-2.amazonaws.com 9142 -u "xxxxxxxxxxxxxxx" -p "xxxxxxxxxxxxxxxxxxxxxx" --ssl
Connection error: ('Unable to connect to any servers', {'3.10.201.209': error(1, u"Tried connecting to [('3.10.201.209', 9142)]. Last error: [SSL] internal error (_ssl.c:727)")})
What is particularly confusing is that my setup worked in the past.
My cqlshrc:
[connection]
port = 9142
factory = cqlshlib.ssl.ssl_transport_factory
[ssl]
validate = true
certfile = /home/abc/.cassandra/AmazonRootCA1.pem
I fetched the certificate like this:
wget -c https://www.amazontrust.com/repository/AmazonRootCA1.pem
DNS seems fine:
nslookup cassandra.eu-west-2.amazonaws.com
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: cassandra.eu-west-2.amazonaws.com
Address: 3.10.201.209
I recently upgraded to Ubuntu 20.04 from 18.04, which may be causing issues.
Update: Yes, it probably changed the default SSL protocol

I figured it out for cqlsh; you need to set the SSL version:
[connection]
port = 9142
factory = cqlshlib.ssl.ssl_transport_factory
[cql]
version = 3.4.4
[ssl]
validate = true
certfile = /home/abc/.cassandra/AmazonRootCA1.pem
version = TLSv1_2
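The same two ingredients (trusting the Amazon root CA and pinning TLS 1.2) are what matter from application code as well. As a minimal sketch with the DataStax Python driver (the cassandra-driver package), reusing the endpoint, certificate path and placeholder credentials from the question:

from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Trust the Amazon root CA and force TLS 1.2, mirroring the cqlshrc settings above.
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations('/home/abc/.cassandra/AmazonRootCA1.pem')
ssl_context.verify_mode = CERT_REQUIRED

auth_provider = PlainTextAuthProvider(username='xxxxxxxxxxxxxxx', password='xxxxxxxxxxxxxxxxxxxxxx')

cluster = Cluster(
    ['cassandra.eu-west-2.amazonaws.com'],
    port=9142,
    ssl_context=ssl_context,
    auth_provider=auth_provider,
)
session = cluster.connect()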
The fix for the .NET solution is similar; you must set SslProtocols correctly.
Here is an F# script that works:
#load "../.paket/load/netcoreapp3.1/CassandraCSharpDriver.fsx"
open System
open System.Net.Security
open System.Security
open System.Security.Authentication
open System.Security.Cryptography
open System.Security.Cryptography.X509Certificates
open Cassandra
let private getEnvVar (name : string) =
    let x = Environment.GetEnvironmentVariable name
    if String.IsNullOrWhiteSpace x
    then
        failwithf "The environment variable %s must be set" name
    else
        x

let region = getEnvVar "AWS_REGION"
let keyspace = getEnvVar "AWS_KEYSPACES_KEYSPACE"
let keyspacesUsername = getEnvVar "AWS_KEYSPACES_USERNAME"
let keyspacesPassword = getEnvVar "AWS_KEYSPACES_PASSWORD"

async {
    // Trust the Amazon root CA downloaded earlier.
    let certCollection = X509Certificate2Collection ()
    use cert = new X509Certificate2 (@"./AmazonRootCA1.pem", "amazon")
    certCollection.Add (cert) |> ignore

    // Amazon Keyspaces expects TLS 1.2, so pin it explicitly instead of relying on the OS default.
    let sslOptions =
        SSLOptions
            (
                SslProtocols.Tls12,
                true,
                (fun sender certificate chain sslPolicyErrors ->
                    if sslPolicyErrors = SslPolicyErrors.None
                    then
                        true
                    else
                        printfn "Cassandra node SSL certificate validation error(s): {%A}" sslPolicyErrors
                        false)
            )
        |> (fun x -> x.SetCertificateCollection(certCollection))

    let contactPoints = [| sprintf "cassandra.%s.amazonaws.com" region |]

    let cluster =
        Cluster.Builder()
            .AddContactPoints(contactPoints)
            .WithPort(9142)
            .WithAuthProvider(PlainTextAuthProvider (keyspacesUsername, keyspacesPassword))
            .WithSSL(sslOptions)
            .Build()

    use! cassandra =
        cluster.ConnectAsync keyspace
        |> Async.AwaitTask

    printfn "Connected."
}
|> Async.RunSynchronously
It should be easy to translate to C# :)

Related

Azure SQL Databricks Integration

These are my JDBC connection details:
jdbcHostname = "ss-owaisde.database.windows.net"
jdbcPort = 1433
jdbcDatabase = "database-owaisde"
jdbcUsername = "owaisde"
jdbcPassword = "******"
jdbcDriver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
jdbcUrl = f"jdbc:sqlserver://{jdbcHostname}:{jdbcPort};databaseName=
{jdbcDatabase};user{jdbcUsername};password={jdbcPassword};driver={jdbcDriver}"
Executing the spark read
df1 = spark.read.format("jdbc").option("url", jdbcUrl).option("dbtable", "SalesLT.Product").load()
Getting the following error on Databricks:
java.sql.SQLException: No suitable driver
I tried to replicate your issue with your code and got the same error.
As far as I can tell, the URL in your case is not built in the correct format (for instance, user{jdbcUsername} is missing the =). I tried the below code:
jdbcHostname = "<servername>.database.windows.net"
jdbcPort = 1433
jdbcDatabase = "<dbname>"
jdbcUsername = "<username>"
jdbcPassword = "<password>"
jdbcDriver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
#url = s"jdbc:sqlserver://${database_host}:${database_port}/${database_name}"
table = "Student"
jdbcUrl = f"jdbc:sqlserver://{jdbcHostname}:{jdbcPort};databaseName={jdbcDatabase}"
df1 = spark.read.format("jdbc").option("driver", jdbcDriver).option("url", jdbcUrl).option("dbtable", table).option("user", jdbcUsername).option("password", jdbcPassword).load()
The DataFrame was created successfully.
It worked fine for me; please check from your end.
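If you would rather keep the credentials in the URL, as in the original snippet, that also works once every property is in key=value form (the missing = after user is what broke it). A sketch reusing the variables defined above:

# Embed the credentials in the URL; each property must be key=value.
jdbcUrl = (
    f"jdbc:sqlserver://{jdbcHostname}:{jdbcPort};"
    f"databaseName={jdbcDatabase};"
    f"user={jdbcUsername};"
    f"password={jdbcPassword}"
)

df1 = (
    spark.read.format("jdbc")
    .option("driver", jdbcDriver)
    .option("url", jdbcUrl)
    .option("dbtable", table)
    .load()
)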

What protocol does the Snowflake JDBC driver use?

I'm trying to find out what protocol the Snowflake JDBC library uses to communicate with Snowflake. I see hints here and there that it seems to be using HTTPS as the protocol. Is this true?
To my knowledge, other JDBC libraries, for example those for Oracle or PostgreSQL, use the lower-level TCP protocol to communicate with their database servers, not the application-level HTTP(S) protocol, so I'm confused.
My organization only supports securely routing HTTP(S)-based communication. Can I use the Snowflake JDBC library in that case?
I have browsed all documentation that I could find, but wasn't able to answer this question.
My issue on GitHub didn't get an answer either.
Edit: Yes, I've seen this question, but I don't feel that it answers mine. SSL/TLS provides encryption, but it doesn't specify the application-level data format.
It looks like the JDBC driver uses an HTTP client (HttpUtil.initHttpClient(httpClientSettingsKey, null);), as you can see here, so the driver does communicate with Snowflake over HTTPS.
The HTTP utility class is available here.
I'm putting an excerpt of the session open method here in case the link goes bad/dead.
/**
 * Open a new database session
 *
 * @throws SFException this is a runtime exception
 * @throws SnowflakeSQLException exception raised from Snowflake components
 */
public synchronized void open() throws SFException, SnowflakeSQLException {
  performSanityCheckOnProperties();
  Map<SFSessionProperty, Object> connectionPropertiesMap = getConnectionPropertiesMap();
  logger.debug(
      "input: server={}, account={}, user={}, password={}, role={}, database={}, schema={},"
          + " warehouse={}, validate_default_parameters={}, authenticator={}, ocsp_mode={},"
          + " passcode_in_password={}, passcode={}, private_key={}, disable_socks_proxy={},"
          + " application={}, app_id={}, app_version={}, login_timeout={}, network_timeout={},"
          + " query_timeout={}, tracing={}, private_key_file={}, private_key_file_pwd={}."
          + " session_parameters: client_store_temporary_credential={}",
      connectionPropertiesMap.get(SFSessionProperty.SERVER_URL),
      connectionPropertiesMap.get(SFSessionProperty.ACCOUNT),
      connectionPropertiesMap.get(SFSessionProperty.USER),
      !Strings.isNullOrEmpty((String) connectionPropertiesMap.get(SFSessionProperty.PASSWORD))
          ? "***"
          : "(empty)",
      connectionPropertiesMap.get(SFSessionProperty.ROLE),
      connectionPropertiesMap.get(SFSessionProperty.DATABASE),
      connectionPropertiesMap.get(SFSessionProperty.SCHEMA),
      connectionPropertiesMap.get(SFSessionProperty.WAREHOUSE),
      connectionPropertiesMap.get(SFSessionProperty.VALIDATE_DEFAULT_PARAMETERS),
      connectionPropertiesMap.get(SFSessionProperty.AUTHENTICATOR),
      getOCSPMode().name(),
      connectionPropertiesMap.get(SFSessionProperty.PASSCODE_IN_PASSWORD),
      !Strings.isNullOrEmpty((String) connectionPropertiesMap.get(SFSessionProperty.PASSCODE))
          ? "***"
          : "(empty)",
      connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY) != null
          ? "(not null)"
          : "(null)",
      connectionPropertiesMap.get(SFSessionProperty.DISABLE_SOCKS_PROXY),
      connectionPropertiesMap.get(SFSessionProperty.APPLICATION),
      connectionPropertiesMap.get(SFSessionProperty.APP_ID),
      connectionPropertiesMap.get(SFSessionProperty.APP_VERSION),
      connectionPropertiesMap.get(SFSessionProperty.LOGIN_TIMEOUT),
      connectionPropertiesMap.get(SFSessionProperty.NETWORK_TIMEOUT),
      connectionPropertiesMap.get(SFSessionProperty.QUERY_TIMEOUT),
      connectionPropertiesMap.get(SFSessionProperty.TRACING),
      connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY_FILE),
      !Strings.isNullOrEmpty(
              (String) connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY_FILE_PWD))
          ? "***"
          : "(empty)",
      sessionParametersMap.get(CLIENT_STORE_TEMPORARY_CREDENTIAL));

  HttpClientSettingsKey httpClientSettingsKey = getHttpClientKey();
  logger.debug(
      "connection proxy parameters: use_proxy={}, proxy_host={}, proxy_port={}, proxy_user={},"
          + " proxy_password={}, non_proxy_hosts={}, proxy_protocol={}",
      httpClientSettingsKey.usesProxy(),
      httpClientSettingsKey.getProxyHost(),
      httpClientSettingsKey.getProxyPort(),
      httpClientSettingsKey.getProxyUser(),
      !Strings.isNullOrEmpty(httpClientSettingsKey.getProxyPassword()) ? "***" : "(empty)",
      httpClientSettingsKey.getNonProxyHosts(),
      httpClientSettingsKey.getProxyProtocol());

  // TODO: temporarily hardcode sessionParameter debug info. will be changed in the future
  SFLoginInput loginInput = new SFLoginInput();

  loginInput
      .setServerUrl((String) connectionPropertiesMap.get(SFSessionProperty.SERVER_URL))
      .setDatabaseName((String) connectionPropertiesMap.get(SFSessionProperty.DATABASE))
      .setSchemaName((String) connectionPropertiesMap.get(SFSessionProperty.SCHEMA))
      .setWarehouse((String) connectionPropertiesMap.get(SFSessionProperty.WAREHOUSE))
      .setRole((String) connectionPropertiesMap.get(SFSessionProperty.ROLE))
      .setValidateDefaultParameters(
          connectionPropertiesMap.get(SFSessionProperty.VALIDATE_DEFAULT_PARAMETERS))
      .setAuthenticator((String) connectionPropertiesMap.get(SFSessionProperty.AUTHENTICATOR))
      .setOKTAUserName((String) connectionPropertiesMap.get(SFSessionProperty.OKTA_USERNAME))
      .setAccountName((String) connectionPropertiesMap.get(SFSessionProperty.ACCOUNT))
      .setLoginTimeout(loginTimeout)
      .setAuthTimeout(authTimeout)
      .setUserName((String) connectionPropertiesMap.get(SFSessionProperty.USER))
      .setPassword((String) connectionPropertiesMap.get(SFSessionProperty.PASSWORD))
      .setToken((String) connectionPropertiesMap.get(SFSessionProperty.TOKEN))
      .setPasscodeInPassword(passcodeInPassword)
      .setPasscode((String) connectionPropertiesMap.get(SFSessionProperty.PASSCODE))
      .setConnectionTimeout(httpClientConnectionTimeout)
      .setSocketTimeout(httpClientSocketTimeout)
      .setAppId((String) connectionPropertiesMap.get(SFSessionProperty.APP_ID))
      .setAppVersion((String) connectionPropertiesMap.get(SFSessionProperty.APP_VERSION))
      .setSessionParameters(sessionParametersMap)
      .setPrivateKey((PrivateKey) connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY))
      .setPrivateKeyFile((String) connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY_FILE))
      .setPrivateKeyFilePwd(
          (String) connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY_FILE_PWD))
      .setApplication((String) connectionPropertiesMap.get(SFSessionProperty.APPLICATION))
      .setServiceName(getServiceName())
      .setOCSPMode(getOCSPMode())
      .setHttpClientSettingsKey(httpClientSettingsKey);

  // propagate OCSP mode to SFTrustManager. Note OCSP setting is global on JVM.
  HttpUtil.initHttpClient(httpClientSettingsKey, null);
  SFLoginOutput loginOutput =
      SessionUtil.openSession(loginInput, connectionPropertiesMap, tracingLevel.toString());
  isClosed = false;

  authTimeout = loginInput.getAuthTimeout();
  sessionToken = loginOutput.getSessionToken();
  masterToken = loginOutput.getMasterToken();
  idToken = loginOutput.getIdToken();
  mfaToken = loginOutput.getMfaToken();
  setDatabaseVersion(loginOutput.getDatabaseVersion());
  setDatabaseMajorVersion(loginOutput.getDatabaseMajorVersion());
  setDatabaseMinorVersion(loginOutput.getDatabaseMinorVersion());
  httpClientSocketTimeout = loginOutput.getHttpClientSocketTimeout();
  masterTokenValidityInSeconds = loginOutput.getMasterTokenValidityInSeconds();
  setDatabase(loginOutput.getSessionDatabase());
  setSchema(loginOutput.getSessionSchema());
  setRole(loginOutput.getSessionRole());
  setWarehouse(loginOutput.getSessionWarehouse());
  setSessionId(loginOutput.getSessionId());
  setAutoCommit(loginOutput.getAutoCommit());

  // Update common parameter values for this session
  SessionUtil.updateSfDriverParamValues(loginOutput.getCommonParams(), this);

  String loginDatabaseName = (String) connectionPropertiesMap.get(SFSessionProperty.DATABASE);
  String loginSchemaName = (String) connectionPropertiesMap.get(SFSessionProperty.SCHEMA);
  String loginRole = (String) connectionPropertiesMap.get(SFSessionProperty.ROLE);
  String loginWarehouse = (String) connectionPropertiesMap.get(SFSessionProperty.WAREHOUSE);

  if (loginDatabaseName != null && !loginDatabaseName.equalsIgnoreCase(getDatabase())) {
    sqlWarnings.add(
        new SFException(
            ErrorCode.CONNECTION_ESTABLISHED_WITH_DIFFERENT_PROP,
            "Database",
            loginDatabaseName,
            getDatabase()));
  }

  if (loginSchemaName != null && !loginSchemaName.equalsIgnoreCase(getSchema())) {
    sqlWarnings.add(
        new SFException(
            ErrorCode.CONNECTION_ESTABLISHED_WITH_DIFFERENT_PROP,
            "Schema",
            loginSchemaName,
            getSchema()));
  }

  if (loginRole != null && !loginRole.equalsIgnoreCase(getRole())) {
    sqlWarnings.add(
        new SFException(
            ErrorCode.CONNECTION_ESTABLISHED_WITH_DIFFERENT_PROP, "Role", loginRole, getRole()));
  }

  if (loginWarehouse != null && !loginWarehouse.equalsIgnoreCase(getWarehouse())) {
    sqlWarnings.add(
        new SFException(
            ErrorCode.CONNECTION_ESTABLISHED_WITH_DIFFERENT_PROP,
            "Warehouse",
            loginWarehouse,
            getWarehouse()));
  }

  // start heartbeat for this session so that the master token will not expire
  startHeartbeatForThisSession();
}
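If it helps when discussing this with your network team: the SERVER_URL the driver logs in to is a regular HTTPS endpoint on port 443 (youraccount.snowflakecomputing.com), so an HTTPS-only egress policy or proxy that can reach that host should be compatible. As a rough sanity check (a Python sketch; the account name is a placeholder, and this only shows the endpoint speaks TLS on 443, not what the driver sends over it):

import socket
import ssl

host = "youraccount.snowflakecomputing.com"  # placeholder account host

# Open a plain TLS connection to the Snowflake endpoint and report the negotiated protocol.
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated:", tls.version())  # e.g. TLSv1.2 or TLSv1.3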

PostgreSQL terminating abnormally

I am trying to update a PostgreSQL table with psycopg2 (a Python package); sometimes it fails with the error below.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Here is the code:
import time
from datetime import datetime

import psycopg2
from psycopg2 import pool

now = datetime.now()
logoff_time = datetime(now.year, now.month, now.day, 15, 0, 0)

while True:
    time.sleep(1)
    try:
        status = 'EXECUTED'
        exec_type1 = 'CANCELLED'
        exec_type2 = 'COMPLETED'
        try:
            postgreSQL_pool = pool.SimpleConnectionPool(1, 20, host=db_host,
                                                        database=db_name,
                                                        port=db_port,
                                                        user=db_user,
                                                        password=db_pwd)
            if postgreSQL_pool:
                print("Connection pool created successfully")
            conn = postgreSQL_pool.getconn()
        except (Exception, psycopg2.DatabaseError) as error:
            print(error)
        sql = """ UPDATE orders SET status = %s, executed_type = %s WHERE order_id = %s"""
        updated_rows = 0
        try:
            cur = conn.cursor()
            cur.execute(sql, (status, exec_type1, order_id,))
            conn.commit()
            updated_rows = cur.rowcount
            cur.close()
            break
        except (Exception, psycopg2.DatabaseError) as error:
            print(error)
        print(updated_rows)
    except Exception as e:
        print(e)
psycopg2 version: '2.8.6 (dt dec pq3 ext lo64)'
Postgres: PostgreSQL 12.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit
It is a pretty simple task, but I am facing challenges. Suggestions please.
The server is crashing for some reason; you might be able to find out why in the server's logs.
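One thing worth fixing on the client side regardless: the loop creates a new SimpleConnectionPool on every iteration and never returns connections with putconn(), so connections can pile up on the server. A rough sketch of the more usual pattern, reusing the db_* variables and order_id from the question:

import psycopg2
from psycopg2 import pool

# Create the pool once, outside the loop.
postgreSQL_pool = pool.SimpleConnectionPool(1, 20, host=db_host, database=db_name,
                                            port=db_port, user=db_user, password=db_pwd)

conn = postgreSQL_pool.getconn()
try:
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE orders SET status = %s, executed_type = %s WHERE order_id = %s",
            ('EXECUTED', 'CANCELLED', order_id),
        )
        conn.commit()
        print(cur.rowcount)
finally:
    # Always hand the connection back to the pool instead of leaking it.
    postgreSQL_pool.putconn(conn)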

How to connect to SQL Server with R?

I want to create a model in R using a connection to data stored in a SQL Server data warehouse.
I tried to use the RevoScaleR library, which returned
package RevoScaleR is not available (for R version 3.4.1)
so I edited the connection string (given in the code below) for the ODBC library:
install.packages("RevoScaleR")
#require("RevoScaleR")
if (!require("RODBC"))
install.packages("RODBC")
conn <- odbcDriverConnect(connection="Driver={SQL Server Native Client 11.0}; Server=CZPHADDWH01/DEV; Database=DWH_Staging; trusted_connection=true")
sqlWait <- TRUE;
sqlConsoleOutput <- FALSE;
cc <- RxInSqlServer(connectionString = conn, wait = sqlWait)
rxSetComputeContext(cc)
train_query <- "SELECT TOP(10000) * FROM dim.Contract"
formula <- as.formula("Cosi ~ ContractID + ApprovedLoanAmount + ApprovedLoadDuration")
forest_model <- rxDForest(formula = formula,
data = train_query,
nTree = 20,
maxDepth = 32,
mTry = 3,
seed = 5,
verbose = 1,
reportProgress = 1)
rxDForest_model <- as.raw(serialize(forest_model, connection = conn))
lenght(rxDForest_model)
However:
package 'RODBC' successfully unpacked and MD5 sums checked
The downloaded binary packages are in
C:\Users\sjirak\AppData\Local\Temp\Rtmpqa9iKN\downloaded_packages
Error in odbcDriverConnect(connection = "Driver={SQL Server Native
Client 11.0}; Server=CZPHADDWH01/DEV; Database=DWH_Staging;
trusted_connection=true") : could not find function
"odbcDriverConnect" In library(package, lib.loc = lib.loc,
character.only = TRUE, logical.return = TRUE, : there is no
package called 'RODBC'
Any help would be appreciated.
Looking at the documentation of the odbc package, I see the following functions:
odbc-package
dbConnect,OdbcDriver-method
dbUnQuoteIdentifier
odbc
odbc-tables
OdbcConnection
odbcConnectionActions
odbcConnectionIcon
odbcDataType
OdbcDriver
odbcListColumns
odbcListDataSources
odbcListDrivers
odbcListObjects
odbcListObjectTypes
odbcPreviewObject
OdbcResult
odbcSetTransactionIsolationLevel
test_roundtrip
Hence I don't see your function (odbcDriverConnect) in this list, which could be the reason why it fails.
Check the documentation for the proper function; with the odbc package, connections are opened with dbConnect().

Setting a not-default db adapter in zend framework 1.12

I have a very specific problem, but I'm a newbie with Zend Framework, so I don't have a clear idea of how exactly this DB adapter configuration works. I've already made a DB connection with the default Zend adapter, and it was successful. Now I have to set up two different database connections to two different databases in the same application. So I opened my application.ini and wrote the following lines:
; connection to db
resources.db.adapter = pdo_mssql
resources.db.params.host = "ip"
resources.db.params.username = user
resources.db.params.password = pwd
resources.db.params.dbname = NAME
resources.db.isDefaultTableAdapter = true
resources.db.params.pdoType = dblib
; connection to db1
resources.db1.adapter = pdo_mssql
resources.db1.params.host = "ip"
resources.db1.params.username = user
resources.db1.params.password = pwd
resources.db1.params.dbname = NAME
resources.db1.isDefaultTableAdapter = false
resources.db1.params.pdoType = dblib
Then I went to my action controller and wrote:
$db = Zend_Registry::get ( 'db' );
$result = $db->fetchRow("SELECT [Sell-to Customer No_] FROM dbo.SyncroPlanningTable WHERE id='".$id);
$rag_soc=$result->{"Sell-to Customer No_"};
$db1 = Zend_Registry::get ( 'db1' );
$result1 = $db1->fetchRow("SELECT [No_],[Name],[Address],[City],[Contact],[Name],[Phone] FROM `dbo.SOS$Customer` WHERE No_ = '".$rag_soc."'");
The error I'm getting is the following:
Fatal error: Uncaught exception 'Zend_Application_Bootstrap_Exception' with message 'Unable to resolve plugin "db1";
UPDATE:
My bootstrap.php is:
$resource = $this->getPluginResource ( "db" );
$db = $resource->getDbAdapter ();
$db->setFetchMode ( Zend_Db::FETCH_OBJ );
Zend_Db_Table_Abstract::setDefaultAdapter ( $db );
Zend_Registry::set ( "db", $db );
How can I change it? It is not mentioned in the manual page you gave me.
resources.db refers to Zend_Application_Resource_Db, so "db" here is not a variable name.
You should use Zend_Application_Resource_Multidb to support multiple database connections:
http://framework.zend.com/manual/1.12/en/zend.application.available-resources.html#zend.application.available-resources.multidb
Your code is expecting the DB adapters to be in the registry, so you need to grab them from the multiDB resource and store them:
$multiDB = $this->getPluginResource('multidb');
Zend_Registry::set('db1', $multiDB->getDb('db1'));
Zend_Registry::set('db2', $multiDB->getDb('db2'));
Also, this line:
Zend_Db_Table_Abstract::setDefaultAdapter ( $db );
can be removed, as you're specifying the default adapter in the application.ini.
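For completeness, the application.ini side then declares both connections under the multidb resource instead of resources.db / resources.db1. Roughly like this, adapted from the ZF1 manual page linked above (values are illustrative; keep your real parameters):

; connections via Zend_Application_Resource_Multidb
resources.multidb.db1.adapter = pdo_mssql
resources.multidb.db1.host = "ip"
resources.multidb.db1.username = user
resources.multidb.db1.password = pwd
resources.multidb.db1.dbname = NAME
resources.multidb.db1.pdoType = dblib
resources.multidb.db1.default = true

resources.multidb.db2.adapter = pdo_mssql
resources.multidb.db2.host = "ip"
resources.multidb.db2.username = user
resources.multidb.db2.password = pwd
resources.multidb.db2.dbname = NAME
resources.multidb.db2.pdoType = dblib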
