I am unable to load data from a stage on Snowflake using Java.
I don't see any errors, but the data is not loaded from stage "mystage" to table "TESTTABLE".
Code:
Connection connection = DriverManager.getConnection(connectionUrl, _connectionProperties);
Statement statement = connection.createStatement();
statement.executeQuery("copy into TESTTABLE (id, name) from (select $1, $2 from #mystage/F.csv.gz t);");
If I run the same command in the Snowflake console, the data is loaded into table "TESTTABLE" properly.
We do not know which database/schema the default connection points to. I would try using a fully qualified table name:
statement.executeQuery("copy into <db_name>.<schema_name>.TESTTABLE (id, name) from (select $1, $2 from #mystage/F.csv.gz t);");
By mistake I had commented out connection.commit(); that was the problem.
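For comparison, here is the same load with the commit in place, sketched with the Snowflake Python connector rather than JDBC (a minimal sketch: the credentials, database, and schema are placeholders, and autocommit is disabled to mirror the situation above):

import snowflake.connector

conn = snowflake.connector.connect(
    user='xxx', password='xxx', account='xxx',
    database='<db_name>', schema='<schema_name>',
    autocommit=False)
try:
    conn.cursor().execute(
        "copy into TESTTABLE (id, name) "
        "from (select $1, $2 from @mystage/F.csv.gz t)")
    conn.commit()  # without this, the loaded rows are rolled back on close
finally:
    conn.close()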
Related
I have created a Python function which creates multiple query statements.
Once it creates the SQL statement, it executes it (one at a time).
Is there any way to bulk run all the statements at once (assuming I was able to create all the SQL statements and wanted to execute them once all the statements were generated)? I know there is an execute_stream in the Python Connector, but I think this requires a file to be created first. It also appears to me that it runs a single query statement at a time.
Since this question is missing an example of the file, here is a file I have put together as an extra that we can work from.
# connection test file for python multiple queries
import snowflake.connector

conn = snowflake.connector.connect(
    user='xxx',
    password='',
    account='xxx',
    warehouse='xxx',
    database='TEST_xxx',
    session_parameters={
        'QUERY_TAG': 'Rachel_test',
    }
)

try:
    cur = conn.cursor()
    cur.execute("CREATE WAREHOUSE IF NOT EXISTS tiny_warehouse_mg")
    cur.execute("CREATE DATABASE IF NOT EXISTS testdb_mg")
    cur.execute("USE DATABASE testdb_mg")
    cur.execute(
        "CREATE OR REPLACE TABLE "
        "test_table(col1 integer, col2 string)")
    cur.execute(
        "INSERT INTO test_table(col1, col2) VALUES "
        "(123, 'test string1'), "
        "(456, 'test string2')")
    print(cur.sfqid)  # query id of the last executed statement
except Exception as e:
    conn.rollback()
    raise e
finally:
    conn.close()
The reference for this question describes a method that works from a file; the example in the documentation is as follows:
from codecs import open
with open(sqlfile, 'r', encoding='utf-8') as f:
    for cur in con.execute_stream(f):
        for ret in cur:
            print(ret)
Reference to guide I used
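For completeness, the connector also provides execute_string, which runs multiple statements from a single string with no intermediate file. A minimal sketch, assuming an existing connection con and made-up statements:

# execute_string returns one cursor per statement, in order.
for cur in con.execute_string(
        "CREATE TABLE IF NOT EXISTS t (c INT);"
        "INSERT INTO t VALUES (1), (2);"
        "SELECT COUNT(*) FROM t;"):
    print(cur.sfqid)  # each statement still gets its own query id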
Now, when I ran these they were not perfect, but in practice I was able to execute multiple SQL statements in one connection, just not many at once. Each statement had its own query id. Is it possible to have a .sql file associated with one query id?
Is it possible to have a .sql file associated with one query id?
You can achieve that effect with the QUERY_TAG session parameter. Set QUERY_TAG to the name of your .sql file before executing its queries, then look up the file's query ids later by filtering on the QUERY_TAG field of QUERY_HISTORY().
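A minimal sketch of that approach (the file name is a placeholder, and conn is assumed to be an open snowflake.connector connection):

cur = conn.cursor()
# Tag everything that follows in this session with the file name.
cur.execute("ALTER SESSION SET QUERY_TAG = 'my_script.sql'")
# ... execute the statements generated from my_script.sql here ...

# Later: recover the query ids that ran under that tag.
cur.execute("""
    SELECT query_id, query_text
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
    WHERE query_tag = 'my_script.sql'
""")
for query_id, query_text in cur:
    print(query_id, query_text)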
I believe, though, that even if you generated the .sql file, each statement will get its own unique query id when executed in Snowflake.
If you want to run one SQL statement independently of the others, you may try the multiprocessing/multithreading concepts in Python.
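A hedged sketch of the threading variant (the statement list and credentials are made up; one connection per thread keeps the example simple and safe):

from concurrent.futures import ThreadPoolExecutor
import snowflake.connector

statements = [
    "CREATE TABLE IF NOT EXISTS t1 (c INT)",
    "CREATE TABLE IF NOT EXISTS t2 (c INT)",
]

def run(sql):
    # One connection per thread avoids any connection-sharing questions.
    conn = snowflake.connector.connect(user='xxx', password='xxx', account='xxx')
    try:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.sfqid  # each statement still has its own query id
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=4) as pool:
    for qid in pool.map(run, statements):
        print(qid)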
The Python and Node.js libraries do not allow multiple statement executions.
I'm not sure about Python, but for Node.js there is this library that extends the original one and adds a method called "ExecutionAll" to it:
snowflake-multisql
You just need to wrap the multiple statements in BEGIN and END.
BEGIN
<statement_1>;
<statement_2>;
END;
With these operators, I was able to execute multiple statements in Node.js.
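The same idea should carry over to the Python connector, since the whole block arrives as a single statement. A hedged sketch using the EXECUTE IMMEDIATE $$...$$ form (conn is assumed to be an open connection; the table names are made up):

conn.cursor().execute("""
EXECUTE IMMEDIATE $$
BEGIN
    INSERT INTO t1 VALUES (1);
    INSERT INTO t2 VALUES (2);
END;
$$
""")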
I have created a Lookup for a list of IDs and a subsequent Foreach loop to run an SQL statement for each ID.
My variable for catching the list of IDs is called MissingRecordIDs and is of type Object. In the Foreach container I map each value to a variable called RecordID of type Int32. No fancy scripts - I followed these instructions: https://www.simple-talk.com/sql/ssis/implementing-foreach-looping-logic-in-ssis-/ (without the file-loading part - I am just running an SQL statement).
It runs fine from within SSIS, but when I deploy it to my Integration Services Catalogue in MSSQL it fails.
This is the error I get when running from SQL Mgt Studio:
I thought I could just put a Precedence Constraint after MissingRecordIDs gets filled to check for NULL and skip the Foreach loop if necessary - but I can't figure out how to check for NULL in an Object variable.
Here is the Variable declaration and Object getting enumerated:
And here is the Variable mapping:
The SQL stmt that is in 'Lookup missing Orders':
select distinct cast(od.order_id as int) as order_id
from invman_staging.staging.invman_OrderDetails_cdc od
LEFT OUTER JOIN invman_staging.staging.invman_Orders_cdc o
on o.order_id = od.order_id and o.BatchID = ?
where od.BatchID = ?
and o.order_id is null
and od.order_id is not null
In the current environment this query returns nothing - there are no missing Orders, so I don't want to go into the 'Foreach Order Loop' at all.
This is a known issue that Microsoft is aware of: https://connect.microsoft.com/SQLServer/feedback/details/742282/ssis-2012-wont-assign-null-values-from-sql-queries-to-variables-of-type-string-inside-a-foreach-loop
I would suggest adding an ISNULL(RecordID, 0) to the query, as well as setting an expression on the "Load missing Orders" component so that it is enabled only when RecordID != 0.
In my case it wasn't NULL causing the problem. The ID value I loaded from the database was stored as nvarchar(50), even though it was an integer; I attempted to use it as an integer in SSIS and it kept giving me the same error message. This worked for me:
SELECT CAST(id as INT) FROM dbo.Table
I'm trying to run the following UPDATE query from a Python script (note I've removed the database info):

import pyodbc

print 'Connecting to db for update query...'
db = pyodbc.connect('DRIVER={FreeTDS};SERVER=<removed>;DATABASE=<removed>;UID=<removed>;PWD=<removed>')
cursor = db.cursor()
print ' Executing SQL queries...'
for i in range(len(data)):
    sql = '''
    UPDATE product.sanction
    SET action_summary = '{action_summary}'
    WHERE sanction_id = {sanction_id};
    '''.format(sanction_id=data[i][0], action_summary=data[i][1])
    cursor.execute(sql)
cursor.close()
db.commit()
db.close()
However, it hangs indefinitely with no error.
I'm new to pyodbc, but it should be set up correctly considering I'm having no problems performing SELECT queries. I did have to call CAST in SELECT queries (I cast sanction_id AS INT [an int identity in the database] and action_summary AS TEXT [nvarchar in the database]) to properly populate data, so perhaps the problem lies somewhere there, but I don't know where to start debugging. Converting the text to NVARCHAR didn't do anything either.
Here's an example of one of the rows in data:
(2861357, 'Exclusion Program: NonProcurement; Excluding Agency: HHS; CT Code: Z; Exclusion Type: Prohibition/Restriction; SAM Number: S4MR3Q9FL;')
I was unable to find my issue, but I ended up using QuerySets rather than running an UPDATE query.
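As a side note, binding parameters instead of formatting them into the SQL string avoids quoting and type pitfalls with values like the one above. A minimal sketch of the same loop (not the poster's eventual fix; the connection details remain placeholders):

import pyodbc

db = pyodbc.connect('DRIVER={FreeTDS};SERVER=<removed>;DATABASE=<removed>;'
                    'UID=<removed>;PWD=<removed>')
cursor = db.cursor()
# '?' placeholders let the driver handle quoting and types.
sql = "UPDATE product.sanction SET action_summary = ? WHERE sanction_id = ?"
for sanction_id, action_summary in data:
    cursor.execute(sql, (action_summary, sanction_id))
db.commit()
cursor.close()
db.close()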
Suppose I generate the PK for my SQL Server DB table with the help of the newid() function. In Java I can do something like this:
...
String query = "DECLARE #newGuid uniqueidentifier "+
"SET #newGuid = newid() "+
"INSERT INTO myTable(id, stringval) "+
"VALUES (#newGuid, "Hello") "+
"SELECT uid FROM #newGuid";
PreparedStatement ps = conn.prepareStatement(query);
ResultSet rs = ps.executeQuery();
String uid = rs.getString("uid");
But when I try to do that with Delphi+ADO I get stuck, because ADO can either get data from the DB (the Open method of ADOQuery) or put data into the DB (the ExecSQL method). So I can't insert a new value into the table and get the parameter value afterwards.
You could solve this problem in at least two ways.
You can put both of your SQL queries into one string (just like in your example) and call TADOQuery.Open or set TADOQuery.Active := True. It doesn't matter that you have an INSERT statement there, as long as the query returns something.
You can define the parameter's direction as pdOutput in the ADOQuery.Parameters collection and read the value of that parameter after executing the query.
You are treating @newGuid as if it were a table. Your last row in the query should be:
SELECT @newGuid as uid
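For what it's worth, the corrected batch works the same way from other clients. A sketch with pyodbc (the connection string is a placeholder; SET NOCOUNT ON suppresses the INSERT rowcount so the SELECT is the first result set):

import pyodbc

conn = pyodbc.connect('DSN=<removed>')  # placeholder connection string
row = conn.cursor().execute(
    "SET NOCOUNT ON; "
    "DECLARE @newGuid uniqueidentifier; "
    "SET @newGuid = newid(); "
    "INSERT INTO myTable(id, stringval) VALUES (@newGuid, 'Hello'); "
    "SELECT @newGuid AS uid;").fetchone()
print(row.uid)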
I am facing this problem (perl DBD::ODBC rollback ineffective with AutoCommit enabled), and while looking at it I found that a very basic thing is failing with Perl DBI using DBD::ODBC on SQL Server. I am not sure whether this would happen with any other driver.
The problem is that when I create a #temp table using $dbh->do, and then try to access the same #temp table using another $dbh->do, I get the error below. Also, this does not happen all the time, but only intermittently.
Invalid object name '#temp'
$dbh->do("SELECT ... INTO #temp FROM ...");
$dbh->do("INSERT INTO ... SELECT ... FROM #temp");
The second do fails with 'Invalid object name '#temp''
Kindly help me with the problem.
Not that it answers your question but it might help. The following works for me.
#
# To access temporary tables in MS SQL Server they need to be created via
# SQLExecDirect
#
use strict;
use warnings;
use DBI;
my $h = DBI->connect();
eval {
$h->do(q{drop table martin});
$h->do(q{drop table martin2});
};
$h->do(q{create table martin (a int)});
$h->do(q{create table martin2 (a int)});
$h->do('insert into martin values(1)');
my $s;
# this long winded way works:
#$s = $h->prepare('select * into #tmp from martin',
#    { odbc_exec_direct => 1}
#);
#$s->execute;
#print "NUM_OF_FIELDS: " . DBI::neat($s->{NUM_OF_FIELDS}), "\n";
# and this works too:
$h->do('select * into #tmp from martin');
# but a prepare without odbc_exec_direct would not work
$s = $h->selectall_arrayref(q{select * from #tmp});
use Data::Dumper;
print Dumper($s), "\n";
$h->do(q/insert into martin2 select * from #tmp/);
$s = $h->selectall_arrayref(q{select * from martin2});
print Dumper($s), "\n";
I was having this problem as well. I tried all of the above but it didn't matter. I stumbled upon this http://bytes.com/topic/sql-server/answers/80443-creating-temporary-table-select-into which solved my problem.
What's happening is that ADO is opening a second connection behind your back. This really has nothing to do with how you created the table.
The reason that ADO opens an extra connection is that there are rows waiting to be fetched on the first connection, so ADO cannot submit a query on that connection.
I assume that Perl DBI is doing the same, so based on this assumption, here's what I did and it worked perfectly fine:
my $sth = $dbh->prepare('Select name into #temp from NameTable');
$sth->execute();
$sth->fetchall_arrayref();
$sth = $dbh->prepare('Select a.name, b.age from #temp a, AgeTable b where a.name = name');
$sth->execute();
my ($name,$age)
$sth->bind_columns(\$name,\$age);
while ( $sth->fetch())
{
# processing
}
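For comparison, the same "finish the statement that created the temp table before running the next one" pattern sketched in Python with pyodbc (the connection string is a placeholder and the table names are taken from the Perl example):

import pyodbc

conn = pyodbc.connect('DSN=<removed>')  # placeholder connection string
cur = conn.cursor()
# #temp is session-scoped, so every statement must run on this same connection.
cur.execute('SELECT name INTO #temp FROM NameTable')
cur.execute('SELECT a.name, b.age FROM #temp a, AgeTable b WHERE a.name = b.name')
for name, age in cur:
    pass  # processing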