PreparedStatement does not work with Google Cloud SQL - google-app-engine

I am developing an application based on Google Cloud SQL. The MySQL database is local.
The following code to insert a row works:
Statement stmt = connection.createStatement();
String sql = "INSERT INTO USER(name) VALUES ('smith')";
stmt.executeUpdate(sql);
However, the code below, which uses PreparedStatement rather than Statement, does not work:
String sql = "insert into USER(email) values(?)";
PreparedStatement pstmt = connection.prepareStatement(sql);
pstmt.setString(1, email);
int success = pstmt.executeUpdate();
No rows are inserted in the database and success = 0 (executeUpdate returns the number of affected rows, not an error code, so 0 means nothing was inserted).
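For completeness, here is a minimal, self-contained version of the failing snippet (the JDBC URL and credentials are placeholders; the original post does not show how the connection is created):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InsertUser {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; adjust for your environment.
        try (Connection connection = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password")) {
            String sql = "insert into USER(email) values(?)";
            try (PreparedStatement pstmt = connection.prepareStatement(sql)) {
                pstmt.setString(1, "smith@example.com");
                // executeUpdate returns the number of affected rows, not a status code.
                int rows = pstmt.executeUpdate();
                System.out.println("rows inserted: " + rows);
            }
        }
    }
}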
Thanks for your help

Related

How to execute a command: SET IDENTITY_INSERT <table> ON on SQL Server table from Spark/Databricks?

I have been able to read/write from Databricks to a SQL Server table using the JDBC driver. However, this time I have to execute a command before I write to SQL Server.
I need to execute this command on SQL Server: SET IDENTITY_INSERT <sqlserver_table_name> ON
How can I do this from Databricks? Any help/pointers are appreciated. Thanks.
You can't do this with the JDBC Spark Connector (or the SQL Server Spark Connector), but it's trivial when using JDBC directly in Scala or Java. When using JDBC directly you have explicit control of the session, and you can issue multiple batches in the same session, or multiple statements in the same batch. For example:
%scala
import java.util.Properties
import java.sql.DriverManager
val jdbcUsername = dbutils.secrets.get(scope = "kv", key = "sqluser")
val jdbcPassword = dbutils.secrets.get(scope = "kv", key = "sqlpassword")
val driverClass = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
// Create the JDBC URL without passing in the user and password parameters.
val jdbcUrl = s"jdbc:sqlserver://xxxxxx.database.windows.net:1433; . . ."
val connection = DriverManager.getConnection(jdbcUrl, jdbcUsername, jdbcPassword)
val stmt = connection.createStatement()
val sql = """
SET IDENTITY_INSERT <sqlserver_table_name> ON
"""
stmt.execute(sql)
//run additional batches here with IDENTITY_INSERT ON
connection.close()
And you can always use the Spark Connector to load a staging table, then use JDBC to run a stored procedure or ad-hoc SQL batch to load the staging data into the target table.
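To make that pattern concrete, here is a minimal sketch of the second half in plain Java JDBC, assuming hypothetical staging and target tables dbo.Staging and dbo.Target (all names, the URL, and the credentials are placeholders). The key point is that all three statements run in one session, so the session-scoped IDENTITY_INSERT setting applies to the INSERT:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LoadStaging {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials.
        String url = "jdbc:sqlserver://xxxxxx.database.windows.net:1433;databaseName=mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            // IDENTITY_INSERT is scoped to this session and table, so the INSERT
            // below may write explicit values into the identity column.
            stmt.execute("SET IDENTITY_INSERT dbo.Target ON");
            stmt.executeUpdate(
                "INSERT INTO dbo.Target (Id, Name) SELECT Id, Name FROM dbo.Staging");
            stmt.execute("SET IDENTITY_INSERT dbo.Target OFF");
        }
    }
}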

Call Snowflake Procedure from Azure Function App

I have a procedure in Snowflake and would like to call it from my timer-triggered Azure Function App.
The procedure expects a string parameter. The following is my code snippet that connects to Snowflake and calls the procedure with that parameter.
using (IDbConnection conn = new SnowflakeDbConnection())
{
    // Connect to Snowflake
    conn.ConnectionString = Environment.GetEnvironmentVariable("SnowflakeConnection");
    conn.Open();
    using (IDbCommand cmd = conn.CreateCommand())
    {
        if (conn.State == ConnectionState.Open)
        {
            cmd.CommandText = "SP_Snowflake_Procedure";
            //cmd.CommandType = CommandType.StoredProcedure;
            var date = cmd.CreateParameter();
            date.ParameterName = "RUNDATE";
            date.DbType = DbType.String;
            date.Value = "2018-01-01";
            cmd.Parameters.Add(date);
            using (IDataReader dr = cmd.ExecuteReader())
            {
                /****************
                Logic to work on data
                received from SP
                *****************/
            }
        }
    }
}
When control comes to cmd.ExecuteReader(), it's failing with error:
Snowflake.Data: SQL compilation error: syntax error line 1 at position 0 unexpected 'SP_Snowflake_Procedure'.
I don't understand how to call a procedure in Snowflake. I assumed it would be similar to MS SQL, but I was wrong, and I couldn't find proper documentation for this.
The same code works fine if I replace the procedure call with a simple SELECT statement.
Please suggest any changes.
I can't tell from the code if you're using the ODBC driver for Snowflake or the .NET driver for Snowflake. The ODBC driver supports more features than the .NET driver, but I think executing SPs should be supported in both.
You'll need to make the call using a SQL statement that executes a query (as opposed to methods that execute a non-query). It returns a table with a single row: a single column named after the SP, holding the SP's scalar return value (basically what would be returned to the SQL worksheet if run in the web UI).
Here's a sample SP to test in case you need a simple one:
create or replace procedure EchoString(stringValue string)
returns VARCHAR
language JavaScript
as
$$
// Note that variables passed to Snowflake stored procedures
// must be all CAPITAL letters when used in the body of the
// procedure code.
return STRINGVALUE
$$;
--Run the stored procedure to echo the value.
call EchoString('Echo this string.');
Here's how to call the SP from a C# project using an ODBC connection:
OdbcConnection DbConnection = new OdbcConnection("DSN=Snowflake;pwd=******");
OdbcCommand DbCommandSetup = DbConnection.CreateCommand();
DbConnection.Open();
// These two lines are only required if you get a message about no running warehouse.
// It will depend on how your calling user is set up in Snowflake.
DbCommandSetup.CommandText = "use warehouse TEST;";
DbCommandSetup.ExecuteNonQuery();
OdbcCommand DbCommand = DbConnection.CreateCommand();
DbCommand.CommandText = "call TEST.PUBLIC.ECHOSTRING('Echo this string.')";
OdbcDataReader DbReader = DbCommand.ExecuteReader();
// Note: If you define a Snowflake SP, DB, or schema in mixed case without double quoting
// the name, Snowflake will uppercase it in the catalog. You can call it from here without
// converting to upper case as long as it's not double quoted (escaped \") in the string.
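For comparison, the same pattern applies from Java via Snowflake's JDBC driver: run the call as a query and read the single-row result. A minimal sketch (the account URL, credentials, warehouse, database, and schema are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class CallEchoString {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "user");          // placeholder
        props.put("password", "password");  // placeholder
        props.put("warehouse", "TEST");     // only needed if the user has no default warehouse
        props.put("db", "TEST");
        props.put("schema", "PUBLIC");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://myaccount.snowflakecomputing.com", props);
             Statement stmt = conn.createStatement();
             // The call runs as a query; the result set has one row and one
             // column (named after the SP) holding the return value.
             ResultSet rs = stmt.executeQuery("call ECHOSTRING('Echo this string.')")) {
            if (rs.next()) {
                System.out.println(rs.getString(1)); // prints: Echo this string.
            }
        }
    }
}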

pyodbc will only execute commands when querying the database specified in the connection string

import pyodbc

conn = pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};SERVER=localhost\<redacted>;DATABASE=master;UID=<redacted>;PWD=<redacted>')
cursor = conn.cursor()
query = """SELECT <redacted> FROM <redacted> WHERE <redacted>"""
row = cursor.execute(query).fetchone()
dummyName = row[0]
cursor.close()
cursor = conn.cursor()
query = """SELECT <redacted> FROM <redacted> WHERE <redacted>"""
print(query)
row = cursor.execute(query)
print(row.fetchone())
This code properly connects to the db and executes the first query on the first db. However, when it executes the second query on the other db, it doesn't return any data and I get a popup window saying python.exe has stopped working when I try to fetch any rows, after which my program crashes. I checked and the query I'm trying to execute is a valid query that works properly from the master db and from the same account I'm connected to in the code.
The problem was we were using an old version of pyodbc. I updated it to the new version and now it works perfectly.

Azure SQL server error Statement 'RECEIVE MSG' is not supported in this version of SQL Server

I have used SqlDependency with SignalR to show alerts to users. The code is as follows:
public IEnumerable<AlertInfo> GetData(long UserId)
{
    using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["yafnet"].ConnectionString))
    {
        connection.Open();
        using (SqlCommand command = new SqlCommand(@"SELECT [AlertID],[AlertNote],[AlertDetails],[AlertDate],[Location]
            FROM [dbo].[Alerts] where [UserID]=" + UserId + " AND [IsViewed]=0", connection))
        {
            // Make sure the command object does not already have
            // a notification object associated with it.
            command.Notification = null;
            SqlDependency.Stop(ConfigurationManager.ConnectionStrings["yafnet"].ConnectionString);
            SqlDependency.Start(ConfigurationManager.ConnectionStrings["yafnet"].ConnectionString);
            SqlDependency dependency = new SqlDependency(command);
            dependency.OnChange += new OnChangeEventHandler(dependency_OnChange);
            if (connection.State == ConnectionState.Closed)
                connection.Open();
            using (var reader = command.ExecuteReader())
                return reader.Cast<IDataRecord>()
                    .Select(x => new AlertInfo()
                    {
                        AlertID = x.GetInt64(0),
                        AlertNote = x.GetString(1),
                        AlertDetails = x.GetString(2),
                        AlertDate = x.GetDateTime(3),
                        Location = x.GetString(4)
                    }).ToList();
        }
    }
}
It is working fine on localhost. But after uploading to Azure server, this method throws the following error:
Message":"An error has occurred.","ExceptionMessage":"Statement 'RECEIVE MSG' is not supported
in this version of SQL Server.","ExceptionType":"System.Data.SqlClient.SqlException","StackTrace":"
\r\nServer stack trace: \r\n at System.Data.SqlClient.SqlConnection.OnError(SqlException exception
, Boolean breakConnection, Action`1 wrapCloseInAction)
What could be the issue?
Your SQL Server database must have is_broker_enabled = 1, so first check whether Service Broker is enabled.
To verify, run SELECT name, is_broker_enabled FROM sys.databases.
If your database shows "1", it's okay; if "0", enable it with ALTER DATABASE yourdb SET ENABLE_BROKER.
The bad news is that Azure SQL Database reports the flag as enabled but no longer supports Service Broker, which SqlDependency relies on.
To use SqlDependency on Azure, you need to install a full instance of SQL Server on an Azure VM.
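For reference, a minimal sketch of the same check done over JDBC (the URL, credentials, and the database name yourdb are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckBroker {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://myserver:1433;databaseName=master"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            boolean enabled = false;
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT is_broker_enabled FROM sys.databases WHERE name = 'yourdb'")) {
                enabled = rs.next() && rs.getBoolean(1);
            }
            if (!enabled) {
                // Works on a full SQL Server instance; fails on Azure SQL Database.
                stmt.execute("ALTER DATABASE yourdb SET ENABLE_BROKER");
            }
        }
    }
}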

vbscript connect to sql server run query save results to csv

I am trying to create a script that connects to SQL Server, selects some data, and saves the results to a CSV. Windows Authentication is enabled in SQL Server.
Here's what I have so far:
Dim connect, sql, resultSet, pth, txt, fs
Set connect = CreateObject("ADODB.Connection")
connect.ConnectionString = "Provider=SQLOLEDB;Server=server\instance;Database=pm;Trusted_Connection=True;"
connect.Open
sql="SELECT col1, col2 FROM tbl1 order by col1, col2"
Set resultSet = connect.Execute(sql)
pth = "d:\test.csv"
Set fs = CreateObject("Scripting.FileSystemObject") ' fs was used below but never created
Set txt = fs.CreateTextFile(pth, True)
On Error Resume Next
resultSet.MoveFirst
Do While Not resultSet.eof
txt.WriteLine(resultSet(0) & "," & resultSet(1))
resultSet.MoveNext
Loop
resultSet.Close
connect.Close
Set connect = Nothing
I get the error:
d:\vbs_test.vbs(5, 1) Microsoft OLE DB Provider for SQL Server: Invalid authorization specification
