pyodbc insert into SQL Server db table stopped committing. Why? - sql-server

I have a Python script that has been inserting into a SQL Server table for a few weeks since I wrote it. Suddenly it stopped inserting, and it looks like a COMMIT problem: the primary-key identity column keeps incrementing, and if I do a T-SQL insert in SSMS I can see that several ID values have been skipped. The rows appear to be inserted but then rolled back. I've restarted the SQL Server instance and restarted VS Code, which I use to run the script. No success, and no errors from Python/pyodbc. I've run out of ideas. Any suggestions?
import pyodbc

SQL_DRIVER = 'SQL Server Native Client 11.0'
SQL_OUTPUT_TABLE = "test"
SERVER = "myServer"
DATABASE = "myDB"

def main():
    cnxn = pyodbc.connect('DRIVER={'+SQL_DRIVER+'};SERVER='+SERVER+';DATABASE='+DATABASE+';Trusted_Connection=yes')
    cursor = cnxn.cursor()
    tsql: str = "insert into [dbo].[test](col1) values ('stuff');"
    cursor.execute(tsql)
    cursor.commit
    cnxn.close

if __name__ == '__main__':
    main()
Tried alternative SQL drivers. Created the test script you see here to reduce the scope as much as possible. Service restarts, etc. Can also successfully insert rows directly from within SSMS to the table.

I think I found the problem. In the original code, `cursor.commit` and `cnxn.close` are missing their parentheses, so they only reference the methods without calling them and nothing is ever committed. Calling `conn.commit()` after `cursor.execute` fixed it.
import pyodbc

SQL_DRIVER = 'SQL Server Native Client 11.0'
SQL_OUTPUT_TABLE = "test"
SERVER = "DESKTOP-GBCJUII"
DATABASE = "xen_mints"

conn = pyodbc.connect('Driver={'+SQL_DRIVER+'};'
                      'Server='+SERVER+';'
                      'Database='+DATABASE+';'
                      'Trusted_Connection=yes;')
cursor = conn.cursor()
cursor.execute('''
INSERT INTO '''+SQL_OUTPUT_TABLE+''' (col1)
VALUES
    ('stuff')
''')
conn.commit()
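For completeness, a minimal sketch of the working pattern, using the question's placeholder server and database names (`build_conn_str` and `insert_row` are illustrative helpers, not part of the original script). The key point is that `commit` and `close` must be called with parentheses:

```python
def build_conn_str(driver: str, server: str, database: str) -> str:
    # pyodbc connection strings are ';'-separated keyword=value pairs;
    # braces are needed around driver names that contain spaces.
    return (f"Driver={{{driver}}};Server={server};"
            f"Database={database};Trusted_Connection=yes;")

CONN_STR = build_conn_str("SQL Server Native Client 11.0", "myServer", "myDB")

def insert_row(value: str) -> None:
    import pyodbc  # deferred so the sketch can be read without the driver installed
    conn = pyodbc.connect(CONN_STR)
    try:
        # Parameterized insert; '?' is pyodbc's placeholder marker.
        conn.cursor().execute("insert into [dbo].[test](col1) values (?);", value)
        conn.commit()  # note the parentheses: a bare conn.commit does nothing
    finally:
        conn.close()   # likewise, close must actually be called
```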

Related

Use TRUNCATE TABLE against SQL Server in Power Query?

I'd like to use Microsoft Power Query to truncate a SQL Server table.
I wrote the M-Query code below, using the technique in Power BI write back to sql source:
let
    Source = Sql.Database("server_host/instance_name", "database_name"),
    Sql = "truncate table [target_table]",
    RunSql = Value.NativeQuery(Source, Sql)
in
    RunSql
When I run this, it fails and gives the error message "Expression.Error: This native database query isn't currently supported."
Is it possible to execute the TRUNCATE TABLE statement in Power Query against SQL Server, and if so, how?
Try appending a dummy SELECT so the batch returns a result set, which Value.NativeQuery expects:
let
    Source = Sql.Database("server_host/instance_name", "database_name"),
    Sql = "truncate table [target_table] select 1",
    RunSql = Value.NativeQuery(Source, Sql)
in
    RunSql

Stored procedure using PYODBC Not loading destination table in SQL Server

import pyodbc
import fast_to_sql as fts  # assuming fts refers to the fast_to_sql package

def UploadTable(table):
    conn = pyodbc.connect('Driver={SQL Server Native Client 11.0};Server=XXXXXX;Database=XXXXXX;Trusted_Connection=yes')
    cur = conn.cursor()
    cur.execute("TRUNCATE TABLE dr.Imported_OM01TMP4_Data")
    create_statement = fts.fast_to_sql(table, "dr.Imported_OM01TMP4_Data", conn, if_exists="append")
    cur.execute("EXEC [dr].[PopulateGlAccountRevenue_Files_UltimateEdition_DeltaLoad]")
    conn.commit()
    conn.close()
Please see my code snippet above. I am trying to run the stored procedure [dr].[PopulateGlAccountRevenue_Files_UltimateEdition_DeltaLoad], which is already defined in SQL Server.
My code runs without errors, but when I check whether the destination table on the server is loaded with the data from dr.Imported_OM01TMP4_Data, it is blank.
When I populate the same table with my Python code but execute the stored procedure directly in SQL Server, the destination table is loaded properly. Is this a permissions/access issue? I have db_owner access with read/write as well, so I am not sure what is wrong with my code.
Please advise.
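Not a diagnosis, but one way to narrow this down (the table name is taken from the question; `count_rows` is a hypothetical helper): check the row count on the same connection after the fast_to_sql load and before the EXEC, so you can tell whether the data ever reached the staging table.

```python
def count_rows(cursor, table: str) -> int:
    # Run on the SAME connection/transaction, so rows loaded by
    # fast_to_sql are visible even before the final commit.
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return cursor.fetchone()[0]

# Usage inside UploadTable, between the fast_to_sql load and the EXEC:
#   print("staged rows:", count_rows(cur, "dr.Imported_OM01TMP4_Data"))
# A zero count means the load itself failed; a nonzero count points at
# the stored procedure (or the order of commits) instead.
```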

Incorrect syntax near Go with Pypyodbc

I am using the pypyodbc library to establish a connection to a SQL Server 2008 R2 database and every time I try to execute a .sql file I encounter the following error:
pypyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near 'Go'.")
This is the sql query I am trying to execute:
Use SL_Site1_App
Go
select emp_num,name, trans_num, job, trans_type
from Hours where trans_type like '1000%' order by trans_date desc
This is the python script that I am using:
import pypyodbc, ExcelFile

def main():
    # read the SQL queries externally
    queries = ['C:\\Temp\\Ready_to_use_queries\\Connection_sql_python.sql']
    for index, query in enumerate(queries):
        cursor = initiate_connection_db()
        results = retrieve_results_query(cursor, query)
        if index == 0:
            ExcelFile.write_to_workbook(results)
            print("The workbook has been created and data has been inserted.\n")

def initiate_connection_db():
    connection_live_db = pypyodbc.connect(driver="{SQL Server}", server="xxx.xxx.xxx.xxx", uid="my-name",
                                          pwd="try-and-guess", Trusted_Connection="No")
    connection = connection_live_db.cursor()
    return connection
The workaround for this problem is to delete the `Use SL_Site1_App` and `Go` lines, but I want to know whether this is a known problem with how pypyodbc processes such lines and, if so, where I should report the issue to the developers.
GO is a batch separator understood by sqlcmd and SSMS; it is not part of the T-SQL language, which is why the server rejects it.
Since you're connecting to SQL Server from an application, declare the database in the connection string by adding database="SL_Site1_App", and then remove the USE and GO lines from your SQL statement.
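As a sketch of that advice (`strip_batch_directives` is a hypothetical helper, and the connection parameters are the question's placeholders), the USE/GO lines can be stripped before the script is sent through ODBC, with the database selected in the connection string instead:

```python
def strip_batch_directives(sql: str) -> str:
    # Drop USE ... and GO lines so the script can be sent through ODBC
    # as a single batch; GO is an SSMS/sqlcmd separator, not T-SQL, and
    # the database is chosen via the connection string instead.
    kept = []
    for line in sql.splitlines():
        first = line.strip().split(maxsplit=1)[0].lower() if line.strip() else ""
        if first in ("go", "use"):
            continue
        kept.append(line)
    return "\n".join(kept)

QUERY = strip_batch_directives("""\
Use SL_Site1_App
Go
select emp_num, name, trans_num, job, trans_type
from Hours where trans_type like '1000%' order by trans_date desc
""")

# Then connect with the database declared up front, e.g.:
# connection_live_db = pypyodbc.connect(driver="{SQL Server}", server="xxx.xxx.xxx.xxx",
#                                       uid="my-name", pwd="try-and-guess",
#                                       database="SL_Site1_App", Trusted_Connection="No")
```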

Not able to fetch all rows of data from MS SQL Server using RODBC even with believeNRows=F

I am trying to use the RODBC library in R to fetch data from Microsoft SQL Server through a query, but the data I get back is incomplete even when I set believeNRows=FALSE. The SQL Server version is SQL Server 2016 SP1 CU3.
The R code is as following:
library(RODBC)
library(data.table)

sql.server <- 'GDCSCTDDBSWA01'
database.name <- 'Data.Analytics'
sql.string <- 'select * from [Data.Analytics].[dbo].[Table]'
db.string <- sprintf("driver={ODBC Driver 13 for SQL Server};server=%s;database={%s};trusted_connection=yes", sql.server, database.name)
db.channel <- odbcDriverConnect(db.string, believeNRows=FALSE)
itin.data <- data.table(sqlQuery(db.channel, sql.string))
close(db.channel)
It only returned around 1500 rows of data (the exact number of rows changes in each run, but it is around the same magnitude). However, when I ran the query in Microsoft SQL Server Management Studio, it worked correctly.
To eliminate the possibility of network issue, I also tried pyodbc and it also worked fine. The python code is as following:
import pyodbc
connection= pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER=GDCSCTDDBSWA01;DATABASE={Data.Analytics};trusted_connection=yes')
cursor = connection.cursor()
sql = 'select * from [Data.Analytics].[dbo].[Table]'
cursor.execute(sql)
dataList = cursor.fetchall()
connection.close()
Does anyone have an idea what causes RODBC to fail?
believeNRows = FALSE needs to be passed to the sqlQuery call itself, not only to odbcDriverConnect; setting it there should pull all rows:
sqlQuery(myconn, "select * from table", believeNRows = FALSE)

how can I get pyodbc to perform a "SELECT ... INTO" statement without locking?

I'm trying to copy a table in SQL Server, but a simple statement seems to be locking my database when using pyodbc. Here's the code I'm trying:
dbCxn = db.connect(cxnString)
dbCursor = dbCxn.cursor()
query = """\
SELECT TOP(10) *
INTO production_data_adjusted
FROM production_data
"""
dbCursor.execute(query)
The last statement returns immediately, but both LINQPad and SQL Server Management Studio are locked out of the database afterwards (I try to refresh their table lists). Running sp_who2 shows that LINQPad/SSMS are stuck waiting for my pyodbc process. Other databases on the server seem fine, but all access to this database gets held up. The only way I can get these other applications to resolve their stalls is by closing the pyodbc database connection:
dbCxn.close()
This exact same SELECT ... INTO statement works fine and takes only a second from LINQPad and SSMS. The above code also runs without locking the database if I remove the INTO line, and it even returns results if I add fetchone() or fetchall().
Can anyone tell me what I'm doing wrong here?
Call the commit function of either the cursor or connection after SELECT ... INTO is executed, for example:
...
dbCursor.execute(query)
dbCursor.commit()
Alternatively, automatic commit of transactions can be specified when the connection is created using autocommit. Note that autocommit is an argument to the connect function, not a connection string attribute, for example:
...
dbCxn = db.connect(cxnString, autocommit=True)
...
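For completeness, a pyodbc Connection can also be used as a context manager, which commits on a clean exit and rolls back on an exception (it does not close the connection). A generic sketch that works with any DB-API connection behaving this way:

```python
def run_and_commit(cnxn, sql: str) -> None:
    # The with-block ends the transaction: commit on clean exit,
    # rollback if the statement raises. pyodbc connections behave
    # this way; note the connection itself stays open afterwards.
    with cnxn:
        cnxn.cursor().execute(sql)

# Usage with pyodbc (cxnString as in the question):
#   dbCxn = db.connect(cxnString)
#   run_and_commit(dbCxn, "SELECT TOP(10) * INTO production_data_adjusted FROM production_data")
#   dbCxn.close()  # still needed; the with-block only ends the transaction
```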
