Stored procedure does not run successfully in SQL Server via SQLAlchemy

What I am using
Ubuntu 16.04
Python 3.6
FreeTDS, TDS Version 7.3
SQLAlchemy 1.2.5
Windows server 2012
SQL Server 2008 Enterprise
My purpose
I write Python code on an Ubuntu machine to insert data and execute a stored procedure on MS SQL Server 2008. I create an order for a customer. An order may have many main ingredients and toppings. When the order is finished, I run a stored procedure to process the data into user_order and employee_order.
The stored procedure
In the stored procedure, data is selected from the source tables and processed; if any error happens, the transaction is rolled back.
My code snippet
import time

from sqlalchemy.orm import sessionmaker

def process():
    engine = get_engine()  # my method; gets an engine from a connection string
    session_maker = sessionmaker(bind=engine.execution_options(isolation_level='SERIALIZABLE'))
    session = session_maker()
    ref = 'REF0000001'
    try:
        # Create order
        order = Order(id=1, ref=ref)
        # Add main ingredients
        main1 = Main(order=1, name='coffee')
        main2 = Main(order=1, name='milk')
        # Toppings
        topup1 = TopUp(order=1, name='cookies')
        topup2 = TopUp(order=1, name='chocolate')
        session.add(order)
        session.flush()
        session.add_all([main1, main2])
        session.flush()
        session.add_all([topup1, topup2])
        session.flush()
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
        del session

    time.sleep(1)

    # Call the stored procedure in a fresh session
    session = session_maker()
    session.execute('EXEC finish_order %a' % ref)
    session.commit()
    session.close()
    del session
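As an aside, the EXEC line above builds the statement with %-formatting ('%a' inserts the repr of ref, which is what supplies the surrounding quotes). A sketch of the same call with a bound parameter instead, using SQLAlchemy's text() construct and the same session_maker and ref:

from sqlalchemy import text

session = session_maker()
# let the driver bind ref instead of interpolating it into the SQL string
session.execute(text('EXEC finish_order :ref'), {'ref': ref})
session.commit()
session.close()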
The result
There is no error, but there is no data in user_order and employee_order, even though the stored procedure finish_order runs.
However, if I run the stored procedure again as a plain query in the terminal or SQL Server Management Studio, the data is imported into the destination tables.
Doubts
Is there any chance that the data has not finished being inserted into the origin tables when the stored procedure is called?
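One way to test that doubt (a sketch; the orders table name and query are hypothetical stand-ins for whatever the Order model above maps to): open a brand-new session after the commit and confirm the rows are visible before calling the procedure.

from sqlalchemy import text

check = session_maker()
# a fresh session sees only committed data, so this verifies the commit
visible = check.execute(
    text('SELECT COUNT(*) FROM orders WHERE ref = :ref'),
    {'ref': ref},
).scalar()
check.close()
assert visible == 1  # if this fails, the insert really was not committed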
Please help me with this case.
Thank you!

Related

How to use copy Storage Integration in a Snowflake task statement?

I'm testing Snowflake. To do this I created a Snowflake instance on GCP.
One of the tests is to try the daily load of data from a STORAGE INTEGRATION.
To do that I generated the STORAGE INTEGRATION and the stage.
I tested the copy:
copy into DEMO_DB.PUBLIC.DATA_BY_REGION from @sg_gcs_covid pattern='.*data_by_region.*'
and everything went fine.
Now it's time to test the daily scheduling with the task statement.
I created this task:
CREATE TASK schedule_regioni
WAREHOUSE = COMPUTE_WH
SCHEDULE = 'USING CRON 42 18 9 9 * Europe/Rome'
COMMENT = 'Test Schedule'
AS
copy into DEMO_DB.PUBLIC.DATA_BY_REGION from @sg_gcs_covid pattern='.*data_by_region.*';
And I enabled it:
alter task schedule_regioni resume;
I got no errors, but the task doesn't load data.
To resolve the issue I had to put the copy in a stored procedure and call the stored procedure instead of the copy:
DROP TASK schedule_regioni;
CREATE TASK schedule_regioni
WAREHOUSE = COMPUTE_WH
SCHEDULE = 'USING CRON 42 18 9 9 * Europe/Rome'
COMMENT = 'Test Schedule'
AS
call sp_upload_c19_regioni();
The question is: is this desired behavior or an issue (as I suppose)?
Can someone give me some information about this?
I've just tried it (but with the storage integration and stage on AWS S3) and it works fine using the COPY command inside the SQL part of the task, without calling a stored procedure.
In order to start investigating the issue, I would check the following (for debugging, I would create the task so that it is scheduled every few minutes):
Check task_history and verify the executions:
select *
from table(information_schema.task_history(
    scheduled_time_range_start => dateadd('hour', -1, current_timestamp()),
    result_limit => 100,
    task_name => 'YOUR_TASK_NAME'));
If the previous step is successful, check copy_history and verify that the input file name, target table, and number of records/errors are the expected ones:
SELECT *
FROM TABLE (information_schema.copy_history(TABLE_NAME => 'YOUR_TABLE_NAME',
    start_time => dateadd(hours, -1, current_timestamp())))
ORDER BY 3 DESC;
Check whether the results are the same as when the task with the stored procedure call is executed.
Please also confirm that you are loading new files that have not yet been loaded into your table with the COPY command (otherwise you need to specify the FORCE = TRUE parameter in the COPY command, or remove the load metadata by truncating your target table, in order to reload the same files).
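If you prefer to script these checks, here is a minimal sketch using the Snowflake Python connector; the account credentials are placeholders, and SCHEDULE_REGIONI is the task name from this question:

import snowflake.connector

# placeholder credentials; substitute your own account details
conn = snowflake.connector.connect(
    account='<account>', user='<user>', password='<password>',
    warehouse='COMPUTE_WH', database='DEMO_DB',
)
cur = conn.cursor()
# list recent executions of the task with their states and errors
cur.execute("""
    select name, state, error_message, scheduled_time
    from table(information_schema.task_history(
        scheduled_time_range_start => dateadd('hour', -1, current_timestamp()),
        result_limit => 100,
        task_name => 'SCHEDULE_REGIONI'))
""")
for row in cur.fetchall():
    print(row)
conn.close()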

SQL Not working in QlikView

I've copied SQL code that I had previously tried in SQL Server Management Studio, where it worked.
In QlikView I get "ErrorSource: (null), ErrorMsg: (null)".
What could be the mistake? I'm using a temp table (#Clasif1) because in the original script I have multiple INSERT INTO commands.
Thanks!!
LOAD *;
SQL INSERT INTO #Clasif1
SELECT de.pate_tempor, de.prod_codigo, de.liqu_numero,
    concepto = Convert(char(50), 'FOB Fruta Exportacion'),
    kilos = Convert(decimal(14,2), SUM(de.dece_kilrea)),
    total_plata = Convert(decimal(14,2), SUM((de.dece_kilrea/de.enva_pesone)*de.dece_fobuni))
FROM dba.detacajemb de
WHERE de.pate_tempor = 2015
    AND de.pool_tipool = 1
GROUP BY de.pate_tempor, de.prod_codigo, de.liqu_numero
SELECT cla.pate_tempor, cla.prod_codigo, cla.concepto, cla.kilos, cla.total_plata, cla.liqu_numero
FROM #Clasif1 cla;
You cannot perform an INSERT command from QV. You can create a stored procedure that runs your code above and call that.
SQL exec stored_procedure <parameters>

Why am I getting inconsistent results with conditional SET NOCOUNT ON in stored procedure called by WebSphere Message Broker?

I'm writing some flows for IBM WebSphere Message Broker which call stored procedures on a remote Microsoft SQL Server database. My problem is that I sometimes get the result set that should be returned, and sometimes get nothing.
The lines in the stored procedure which seem to be causing the trouble are:
IF (@noCountInd > 0)
    SET NOCOUNT ON;
When I call the stored procedure from a database node in the Message Broker, it returns a result set on the first call, then nothing on subsequent calls. If the SET NOCOUNT ON is unconditional, it works every time. It also works every time, even with the above condition, if called through the SQL Server Management Studio command line.
It also seems that when enough time has passed between calls for the Message Broker to close its database connection, the next call on a new connection succeeds.
Here's my pared down code to produce this problem:
Stored procedure
CREATE PROCEDURE dbo.pTestConditionalNoCount
    @noCountInd bit = 0
AS
    IF (@noCountInd > 0)
        SET NOCOUNT ON;
    SELECT 'Success' AS RESULT;
    RETURN 0;
ESQL in database node
CREATE DATABASE MODULE testConditionalNoCount
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        CALL pTestConditionalNoCount(TRUE,
            Environment.Variables.testConditionalNoCount.Results.Row[])
            IN Database.{'DATABASE_NAME'}.{'dbo'};
        RETURN TRUE;
    END;
END MODULE;

CREATE PROCEDURE pTestConditionalNoCount(IN testNoCountInd BOOLEAN)
    LANGUAGE DATABASE
    DYNAMIC RESULT SETS 1
    EXTERNAL NAME pTestConditionalNoCount;
Output from trace node
Environment: ( ['MQROOT' : 0x29692478]
  (0x01000000:Name):Variables = (
    (0x01000000:Name):testConditionalNoCount = (
      (0x01000000:Name):Results = (
        (0x01000000:Name):Row = (
          (0x03000000:NameValue):RESULT = 'Success' (CHARACTER)
        )
      )
    )
  )
)
Environment: ( ['MQROOT' : 0x29692478]
  (0x01000000:Name):Variables = (
    (0x01000000:Name):testConditionalNoCount = (
      (0x01000000:Name):Results =
    )
  )
)
Environment: ( ['MQROOT' : 0x29692478]
  (0x01000000:Name):Variables = (
    (0x01000000:Name):testConditionalNoCount = (
      (0x01000000:Name):Results =
    )
  )
)
The version of SQL Server is Microsoft SQL Server 2005 - 9.00.5324.00 (X64), and the message broker is IBM WebSphere Message Broker 8.0.0.2.
Anyone have any idea what is going on here?
Update
I ran an ODBC trace and it shows two result sets being returned in the failure case. The first is empty and the second is the result set I'm expecting.
There seems to be no difference in the way the procedure is called. A SQLExecute is logged for each call, followed by a SQLNumResultCols that returns 1 if it worked and 0 if it didn't.
I suspect that this is related to the caching of the stored procedure objects, which is why, when you let the connection idle out, you see the call work again.
It might be worth examining an ODBC trace of two subsequent calls to see whether there is any difference between the two executions (other than the absence of results, of course).
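For reference, a minimal pyodbc sketch (outside the Broker; the DSN is a placeholder) that walks every result set the driver reports, which is one way to see the extra empty set from the trace:

import pyodbc

# placeholder connection string; adjust for your environment
conn = pyodbc.connect('DSN=<your_dsn>', autocommit=True)
cur = conn.cursor()
cur.execute("{CALL dbo.pTestConditionalNoCount (?)}", 1)
while True:
    if cur.description:       # this result set has columns
        print(cur.fetchall())
    if not cur.nextset():     # advance; False when no sets remain
        break
conn.close()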

FreeTDS / SQL Server UPDATE Query Hangs Indefinitely

I'm trying to run the following UPDATE query from a Python script (note that I've removed the database info):
import pyodbc

print 'Connecting to db for update query...'
db = pyodbc.connect('DRIVER={FreeTDS};SERVER=<removed>;DATABASE=<removed>;UID=<removed>;PWD=<removed>')
cursor = db.cursor()
print ' Executing SQL queries...'
for i in range(len(data)):
    sql = '''
        UPDATE product.sanction
        SET action_summary = '{action_summary}'
        WHERE sanction_id = {sanction_id};
    '''.format(sanction_id=data[i][0], action_summary=data[i][1])
    cursor.execute(sql)
cursor.close()
db.commit()
db.close()
However, it hangs indefinitely with no error.
I'm new to pyodbc, but it should be set up correctly, considering I have no problems performing SELECT queries. I did have to call CAST in the SELECT queries (casting sanction_id AS INT [an int identity on the database] and action_summary AS TEXT [an nvarchar on the database]) to properly populate the data, so perhaps the problem lies somewhere there, but I don't know where to start debugging. Converting the text to NVARCHAR didn't do anything either.
Here's an example of one of the rows in data:
(2861357, 'Exclusion Program: NonProcurement; Excluding Agency: HHS; CT Code: Z; Exclusion Type: Prohibition/Restriction; SAM Number: S4MR3Q9FL;')
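One thing worth ruling out (a sketch, not a confirmed fix): the interpolated action_summary values contain punctuation, and any embedded single quote would corrupt the generated SQL, so binding the values as parameters sidesteps the quoting entirely:

import pyodbc

db = pyodbc.connect('DRIVER={FreeTDS};SERVER=<removed>;DATABASE=<removed>;UID=<removed>;PWD=<removed>')
cursor = db.cursor()
# data rows are (sanction_id, action_summary); reorder for the placeholders
cursor.executemany(
    'UPDATE product.sanction SET action_summary = ? WHERE sanction_id = ?',
    [(row[1], row[0]) for row in data],
)
db.commit()
cursor.close()
db.close()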
I was unable to find my issue, but I ended up using QuerySets rather than running an UPDATE query.

Waiting for DB restore to finish using sqlalchemy on SQL Server 2008

I'm trying to automate my DB restores during development, using T-SQL on SQL Server 2008, via sqlalchemy with pyodbc as the transport.
The command I'm executing is:
"""CREATE DATABASE dbname
restore database dbname FROM DISK='C:\Backups\dbname.bak' WITH REPLACE,MOVE 'dbname_data' TO 'C:\Databases\dbname_data.mdf',MOVE 'dbname_log' TO 'C:\Databases\dbname_log.ldf'"""
Unfortunately, in SQL Server Management Studio, I see after the code has run that the DB remains in the "Restoring..." state.
If I restore through Management Studio, it works. If I use subprocess to call "sqlcmd", it works. pymssql has problems with authentication and doesn't even get that far.
What might be going wrong?
The BACKUP and RESTORE statements run asynchronously, so they have not finished by the time the script moves on to the rest of the code.
Using a while statement as described at http://ryepup.unwashedmeme.com/blog/2010/08/26/making-sql-server-backups-using-python-and-pyodbc/ solved this for me:
# set up your DB connection, cursor, etc.
cur.execute('BACKUP DATABASE ? TO DISK=?',
            ['test', r'd:\temp\test.bak'])
while cur.nextset():
    pass
I was unable to reproduce the problem restoring directly from pyodbc (without sqlalchemy) with the following:
connection = pyodbc.connect(connection_string)  # ensure autocommit is set to `True` in the connection string
cursor = connection.cursor()
affected = cursor.execute("""CREATE DATABASE test
    RESTORE DATABASE test FROM DISK = 'D:\\test.bak' WITH REPLACE,
        MOVE 'test_data' TO 'D:\\test_data.mdf',
        MOVE 'test_log' TO 'D:\\test_log.ldf'""")
while cursor.nextset():
    pass
Some questions that need clarification:
What is the code in use to do the restore using sqlalchemy?
What version of the SQL Server ODBC driver is in use?
Are there any messages in the SQL Server log related to the restore?
Thanks to geographika for the Cursor.nextset() example!
For SQLAlchemy users, and thanks to geographika for the answer: I ended up using the "raw" DBAPI connection from the connection pool.
It is exactly like geographika's solution, but with a few additional pieces:
import logging

import sqlalchemy as sa

logger = logging.getLogger(__name__)

driver = 'SQL+Server'
name = 'servername'
sql_engine_str = 'mssql+pyodbc://' + name + '/master?driver=' + driver
engine = sa.create_engine(sql_engine_str, connect_args={'autocommit': True})
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    sql_cmd = """
    RESTORE DATABASE [test]
    FROM DISK = N'...\\test.bak'
    WITH FILE = 1,
    MOVE N'test' TO N'...\\test_Primary.mdf',
    MOVE N'test_log' TO N'...\\test_log.ldf',
    RECOVERY, NOUNLOAD, STATS = 5, REPLACE
    """
    cursor.execute(sql_cmd)
    while cursor.nextset():
        pass
except Exception as e:
    logger.error(str(e), exc_info=True)
Five things fixed my problem with identical symptoms.
1. Found that my test.bak file contained the wrong mdf and ldf files:
>>> cursor.execute(r"RESTORE FILELISTONLY FROM DISK = 'test.bak'").fetchall()
[(u'WRONGNAME', u'C:\\Program Files\\Microsoft SQL ...),
 (u'WRONGNAME_log', u'C:\\Program Files\\Microsoft SQL ...)]
2. Created a new .bak file and made sure to set the copy-only backup option.
3. Set the autocommit option for my connection:
connection = pyodbc.connect(connection_string, autocommit=True)
4. Used the connection.cursor for only a single RESTORE command and nothing else.
5. Corrected the test_data MOVE to test in my RESTORE command (courtesy of @beargle):
affected = cursor.execute("""RESTORE DATABASE test FROM DISK = 'test.bak' WITH REPLACE,
    MOVE 'test' TO 'C:\\test.mdf',
    MOVE 'test_log' TO 'C:\\test_log.ldf'""")
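Putting those pieces together, a minimal end-to-end sketch (the connection string and paths are placeholders):

import pyodbc

# autocommit is required so the RESTORE runs outside a transaction
connection = pyodbc.connect(connection_string, autocommit=True)
cursor = connection.cursor()  # used for the single RESTORE command only
cursor.execute("""RESTORE DATABASE test FROM DISK = 'test.bak' WITH REPLACE,
    MOVE 'test' TO 'C:\\test.mdf',
    MOVE 'test_log' TO 'C:\\test_log.ldf'""")
# drain every result set so the restore runs to completion
while cursor.nextset():
    pass
connection.close()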
