Stored procedure using pyodbc not loading destination table in SQL Server

import pyodbc
import fast_to_sql as fts

def UploadTable(table):
    conn = pyodbc.connect('Driver={SQL Server Native Client 11.0};Server=XXXXXX;Database=XXXXXX;Trusted_Connection=yes')
    cur = conn.cursor()
    cur.execute("TRUNCATE TABLE dr.Imported_OM01TMP4_Data")
    create_statement = fts.fast_to_sql(table, "dr.Imported_OM01TMP4_Data", conn, if_exists="append")
    cur.execute("EXEC [dr].[PopulateGlAccountRevenue_Files_UltimateEdition_DeltaLoad]")
    conn.commit()
    conn.close()
Please see my code snippet above: I am trying to run the stored procedure [dr].[PopulateGlAccountRevenue_Files_UltimateEdition_DeltaLoad], which is already defined in SQL Server.
My code runs without errors, but when I check whether the destination table on the server has been loaded with the data from dr.Imported_OM01TMP4_Data, it is blank.
When I populate the same table with my Python code but execute the stored procedure in SQL Server instead, the destination table loads properly. Is this a permissions/access issue? I have db_owner access with read/write as well, so I am not sure what is wrong with my code.
Please advise.
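One thing worth checking (a sketch of the general principle, not a confirmed diagnosis): pyodbc connections default to autocommit off, so work done in one session is invisible to other sessions until commit() is called. If you are verifying the destination table from SSMS or another tool while the Python transaction is still open, you will see a blank table. A minimal sqlite3 illustration of that visibility rule, with hypothetical table names:

```python
import os
import sqlite3
import tempfile

# Hypothetical staging table standing in for dr.Imported_OM01TMP4_Data.
path = os.path.join(tempfile.mkdtemp(), "visibility_demo.sqlite")

writer = sqlite3.connect(path)          # the "Python script" session
writer.execute("CREATE TABLE staging (val TEXT)")
writer.commit()

writer.execute("INSERT INTO staging VALUES ('row1')")  # opens a transaction

reader = sqlite3.connect(path)          # a second session, e.g. an SSMS window
before = reader.execute("SELECT COUNT(*) FROM staging").fetchone()[0]

writer.commit()                         # the insert now becomes visible
after = reader.execute("SELECT COUNT(*) FROM staging").fetchone()[0]

print(before, after)  # 0 1
```

The same principle applies on SQL Server: until the pyodbc connection commits, other sessions either see no rows or block on the open transaction.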

Related

pyodbc insert into SQL Server db table stopped committing. Why?

I have a Python script that has been inserting into a SQL Server table for the few weeks since I wrote it. Suddenly it stopped inserting, and it looks like a COMMIT problem, because the Primary Key identity ID column in the table still increments: if I do a T-SQL insert in SSMS, I can see that several ID values have been skipped. The rows seem to be inserted but then rolled back, by the look of it. I've restarted the SQL Server instance and restarted the VS Code app I use to run the script. No success, and no errors from Python/pyodbc. I've run out of ideas. Any suggestions?
import pyodbc

SQL_DRIVER = 'SQL Server Native Client 11.0'
SQL_OUTPUT_TABLE = "test"
SERVER = "myServer"
DATABASE = "myDB"

def main():
    cnxn = pyodbc.connect('DRIVER={'+SQL_DRIVER+'};SERVER='+SERVER+';DATABASE='+DATABASE+';Trusted_Connection=yes')
    cursor = cnxn.cursor()
    tsql: str = "insert into [dbo].[test](col1) values ('stuff');"
    cursor.execute(tsql)
    cursor.commit
    cnxn.close

if __name__ == '__main__':
    main()
Tried alternative SQL drivers. Created the test script you see here to reduce the scope as much as possible. Service restarts, etc. Can also successfully insert rows directly from within SSMS to the table.
I think I found the problem: cursor.commit and cnxn.close are missing their parentheses, so the methods are never actually called; the lines just evaluate the bound methods and discard them, with no error raised. Writing cursor.execute followed by conn.commit() fixed it.
import pyodbc

SQL_DRIVER = 'SQL Server Native Client 11.0'
SQL_OUTPUT_TABLE = "test"
SERVER = "DESKTOP-GBCJUII"
DATABASE = "xen_mints"

conn = pyodbc.connect('Driver={'+SQL_DRIVER+'};'
                      'Server='+SERVER+';'
                      'Database='+DATABASE+';'
                      'Trusted_Connection=yes;')
cursor = conn.cursor()
cursor.execute('''
    INSERT INTO '''+SQL_OUTPUT_TABLE+''' (col1)
    VALUES ('stuff')
''')
conn.commit()
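Incidentally, the bug in the first script is pure Python: a bound method referenced without parentheses is just an expression that gets discarded, so nothing runs and no error is raised. A minimal stand-in class (not pyodbc) makes the difference visible:

```python
class FakeCursor:
    # Stand-in for a pyodbc cursor; only the calling mistake matters here.
    def __init__(self):
        self.committed = False

    def commit(self):
        self.committed = True

cur = FakeCursor()

cur.commit            # references the bound method, never calls it
print(cur.committed)  # False

cur.commit()          # the parentheses actually invoke it
print(cur.committed)  # True
```

This is why the broken script fails silently: the insert is executed, the transaction is never committed, and the connection is eventually rolled back when the process exits.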

Use TRUNCATE TABLE against SQL Server in Power Query?

I'd like to use Microsoft Power Query to truncate a SQL Server table.
I wrote the M-Query code below, using the technique in Power BI write back to sql source:
let
    Source = Sql.Database("server_host/instance_name", "database_name"),
    Sql = "truncate table [target_table]",
    RunSql = Value.NativeQuery(Source, Sql)
in
    RunSql
When I run this, it fails and gives the error message "Expression.Error: This native database query isn't currently supported."
Is it possible to execute the TRUNCATE TABLE statement in Power Query against SQL Server, and if so, how?
Try this: appending a trivial SELECT gives Value.NativeQuery a rowset to return, which gets around the "not currently supported" error for statements that produce no resultset:
let
    Source = Sql.Database("server_host/instance_name", "database_name"),
    Sql = "truncate table [target_table] select 1",
    RunSql = Value.NativeQuery(Source, Sql)
in
    RunSql

Incorrect syntax near Go with Pypyodbc

I am using the pypyodbc library to establish a connection to a SQL Server 2008 R2 database and every time I try to execute a .sql file I encounter the following error:
pypyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near 'Go'.")
This is the sql query I am trying to execute:
Use SL_Site1_App
Go

select emp_num, name, trans_num, job, trans_type
from Hours
where trans_type like '1000%'
order by trans_date desc
This is the python script that I am using:
import pypyodbc
import ExcelFile

def main():
    # read the SQL queries externally
    queries = ['C:\\Temp\\Ready_to_use_queries\\Connection_sql_python.sql']
    for index, query in enumerate(queries):
        cursor = initiate_connection_db()
        results = retrieve_results_query(cursor, query)
        if index == 0:
            ExcelFile.write_to_workbook(results)
            print("The workbook has been created and data has been inserted.\n")

def initiate_connection_db():
    connection_live_db = pypyodbc.connect(driver="{SQL Server}", server="xxx.xxx.xxx.xxx", uid="my-name",
                                          pwd="try-and-guess", Trusted_Connection="No")
    connection = connection_live_db.cursor()
    return connection
The workaround for this problem is to delete the Use SL_Site1_App and Go lines, but I want to know whether this is a known problem with how the pypyodbc library processes these lines, and if so, where I should report the issue to the developers.
GO is a batch separator understood by client tools such as sqlcmd and SSMS. It is not a T-SQL statement, so pypyodbc sends it to the server verbatim and the server rejects it; this is not a bug in the library.
Since you're connecting from an application, declare your database in the connection string by adding database="SL_Site1_App", and then remove the USE and GO statements from your SQL.
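If you do need to run .sql files that contain GO, a common workaround is to split the script into batches client-side, the way sqlcmd does, and execute each batch separately. A simple sketch, assuming GO always sits alone on its own line:

```python
import re

def split_batches(script: str):
    """Split a T-SQL script on lines that contain only GO
    (case-insensitive), mimicking sqlcmd's client-side behaviour."""
    parts = re.split(r'(?im)^\s*GO\s*$', script)
    return [p.strip() for p in parts if p.strip()]

script = """Use SL_Site1_App
Go
select emp_num, name, trans_num, job, trans_type
from Hours where trans_type like '1000%' order by trans_date desc
"""

for batch in split_batches(script):
    # each batch would be passed to cursor.execute(batch)
    print(batch)
```

This deliberately ignores edge cases sqlcmd handles (GO inside comments or string literals, GO with a repeat count), but it covers typical script files.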

Write in Excel spreadsheet from SQL Server linked server

I have a client who has a huge Excel file. They absolutely want to continue to work with this file. They asked us if we can update data in the file from a PocketPC.
I created a linked server to the spreadsheet:
EXEC master.dbo.sp_addlinkedserver
    @server = N'ExcelFile',
    @srvproduct = N'Excel',
    @provider = N'Microsoft.ACE.OLEDB.12.0',
    @datasrc = N'Filename.xls',
    @provstr = N'Excel 12.0;IMEX=1;HDR=YES;'
I can successfully query the file with the following:
SELECT *
FROM ExcelFile...[Feuil1$]
If the file is already open, I get an error. I guess the file MUST be closed?
Anyway, is there a way to update cells in the Excel file with something like:
UPDATE ExcelFile...[Feuil1$]
SET [BIP] = 123456
WHERE [BIP] = '966985'
I get this error:
An error occurred while preparing the query "UPDATE Feuil1$ set BIP
= (1.234560000000000e+005) WHERE BIP=(9.669850000000000e+005)" for execution against OLE DB provider "Microsoft.ACE.OLEDB.12.0" for
linked server "ExcelFile".
Thanks for your time and help
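One likely culprit (an educated guess rather than a verified fix): IMEX=1 in the provider string opens the workbook in import mode, which the ACE provider treats as read-only, so SELECT succeeds while UPDATE fails. It may be worth dropping and recreating the linked server without IMEX=1, with the file closed in Excel:

```sql
EXEC master.dbo.sp_dropserver @server = N'ExcelFile', @droplogins = 'droplogins';

EXEC master.dbo.sp_addlinkedserver
    @server = N'ExcelFile',
    @srvproduct = N'Excel',
    @provider = N'Microsoft.ACE.OLEDB.12.0',
    @datasrc = N'Filename.xls',
    @provstr = N'Excel 12.0;HDR=YES;';  -- no IMEX=1, so the sheet is writable
```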

how can I get pyodbc to perform a "SELECT ... INTO" statement without locking?

I'm trying to copy a table in SQL Server, but a simple statement seems to be locking my database when using pyodbc. Here's the code I'm trying:
dbCxn = db.connect(cxnString)
dbCursor = dbCxn.cursor()
query = """\
SELECT TOP(10) *
INTO production_data_adjusted
FROM production_data
"""
dbCursor.execute(query)
The last statement returns immediately, but both LINQPad and SQL Server Management Studio are locked out of the database afterwards (I try to refresh their table lists). Running sp_who2 shows that LINQPad/SSMS are stuck waiting for my pyodbc process. Other databases on the server seem fine, but all access to this database gets held up. The only way I can get these other applications to resolve their stalls is by closing the pyodbc database connection:
dbCxn.close()
This exact same SELECT ... INTO statement works fine and takes only a second from LINQPad and SSMS. The code above also works fine and doesn't lock the database if I remove the INTO line; it even returns results if I add fetchone() or fetchall().
Can anyone tell me what I'm doing wrong here?
Call the commit function of either the cursor or connection after SELECT ... INTO is executed, for example:
...
dbCursor.execute(query)
dbCursor.commit()
Alternatively, automatic commit of transactions can be specified when the connection is created using autocommit. Note that autocommit is an argument to the connect function, not a connection string attribute, for example:
...
dbCxn = db.connect(cxnString, autocommit=True)
...
