pyODBC and SQL Server 2008 and Python 3

I have pyODBC installed for Python 3.2 and I am attempting to update a SQL Server 2008 R2 database that I created as a test.
I have no problem retrieving data; that has always worked.
However, when the program performs a cursor.execute("sql") to insert or delete a row, nothing happens: no error, no change. The response is as if the update succeeded, but no changes show up in the database.
The code below essentially creates a dictionary (I have plans for this later) and does a quick build of a SQL insert statement (which works, as I tested the entry I wrote to the log).
I have 11 rows in my table, Killer, which is not being affected at all, even after a commit.
I know this is something dumb but I can't see it.
Here is the code:
cnxn = pyodbc.connect('DRIVER={SQL Server Native Client 10.0};SERVER=PHX-500222;DATABASE=RoughRide;UID=sa;PWD=slayer')
cursor = cnxn.cursor()

# loop through dictionary and create insert entries
logging.debug("using test data to build sql")
for row in data_dictionary:
    entry = data_dictionary[row]
    inf = entry['Information']
    dt = entry['TheDateTime']
    stat = entry['TheStatus']
    flg = entry['Flagg']
    # create sql and set right back into row
    data_dictionary[row] = "INSERT INTO Killer(Information, TheDateTime, TheStatus, Flagg) VALUES ('%s', '%s', '%s', %d)" % (inf, dt, stat, flg)

# insert some rows
logging.debug("inserting test data")
for row in data_dictionary.values():
    cursor.execute(row)

# delete a row
rowsdeleted = cursor.execute("DELETE FROM Killer WHERE Id > 1").rowcount
logging.debug("deleted: " + str(rowsdeleted))
cnxn.commit

Assuming this isn't a typo in the post, it looks like you're just missing the parentheses on the Connection.commit() method:
...
# delete a row
rowsdeleted = cursor.execute("DELETE FROM Killer WHERE Id > 1").rowcount
logging.debug("deleted: " + str(rowsdeleted))
cnxn.commit()
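As an aside, building the INSERT statements with % string formatting (as in the question) will break on any value containing a quote and is open to SQL injection. Here is a minimal sketch of the same inserts using pyodbc parameter markers instead, assuming the data_dictionary layout from the question:

insert_sql = ("INSERT INTO Killer (Information, TheDateTime, TheStatus, Flagg) "
              "VALUES (?, ?, ?, ?)")
for entry in data_dictionary.values():
    # each value is bound by the driver; no quoting or escaping needed
    cursor.execute(insert_sql,
                   entry['Information'], entry['TheDateTime'],
                   entry['TheStatus'], entry['Flagg'])
cnxn.commit()  # parentheses matter: without them the method is never called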

Related

Pandas dataframe insert into SQL Server taking too long with execute and executemany

I have a pandas dataframe with 27 columns and ~45k rows that I need to insert into a SQL Server table.
I am currently using the code below, and it takes 90 minutes to insert:
conn = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};\
Server=@servername;\
Database=dbtest;\
Trusted_Connection=yes;')
cursor = conn.cursor() # create cursor
for index, row in t6.iterrows():
    cursor.execute("insert into dbtest.dbo.test(col1, col2, col3, col4, col5, col6, col7, col8, col9, col10, col11, col12, col13, col14, ..., col27)\
values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",
                   row['col1'], row['col2'], row['col3'], ..., row['col27'])
I have also tried loading with executemany, and that takes even longer to complete, at nearly 120 minutes.
I am really looking for a faster load time since I need to run this daily.
You can set fast_executemany in pyodbc itself for versions >= 4.0.19. It is off by default.
import pyodbc
server_name = 'localhost'
database_name = 'AdventureWorks2019'
table_name = 'MyTable'
driver = 'ODBC Driver 17 for SQL Server'
connection = pyodbc.connect(driver='{'+driver+'}', server=server_name, database=database_name, trusted_connection='yes')
cursor = connection.cursor()
cursor.fast_executemany = True # reduce number of calls to server on inserts
# form SQL statement
columns = ", ".join(df.columns)
values = '('+', '.join(['?']*len(df.columns))+')'
statement = "INSERT INTO "+table_name+" ("+columns+") VALUES "+values
# extract values from DataFrame into list of tuples
insert = [tuple(x) for x in df.values]
cursor.executemany(statement, insert)
Or, if you prefer sqlalchemy and DataFrames, write it directly:
import sqlalchemy as db
engine = db.create_engine('mssql+pyodbc://@'+server_name+'/'+database_name+'?trusted_connection=yes&driver='+driver, fast_executemany=True)
df.to_sql(table_name, engine, if_exists='append', index=False)
See fast_executemany in this link.
https://github.com/mkleehammer/pyodbc/wiki/Features-beyond-the-DB-API
I have worked through this in the past, and this was the fastest that I could get it to work using sqlalchemy.
import sqlalchemy as sa
engine = sa.create_engine(f'mssql://@{server}/{database}?trusted_connection=yes&driver={driver_name}',
                          fast_executemany=True)  # windows authentication
df.to_sql('Daily_Report', con=engine, if_exists='append', index=False)
If the engine is not working for you, then you may have a different setup so please see: https://docs.sqlalchemy.org/en/13/core/engines.html
You should be able to create the variables needed above, but here is how I get the driver:
driver_name = ''
driver_names = [x for x in pyodbc.drivers() if x.endswith(' for SQL Server')]
if driver_names:
    driver_name = driver_names[-1]  # you may need to change [-1] to [-2] or another entry in driver_names if the wrong driver is picked
if driver_name:
    conn_str = f'''DRIVER={driver_name};SERVER='''
else:
    print('(No suitable driver found. Cannot connect.)')
You can try the method='multi' option built into pandas to_sql.
df.to_sql('table_name', con=engine, if_exists='replace', index=False, method='multi')
The multi method lets you 'pass multiple values in a single INSERT clause', per the documentation.
I found it to be pretty efficient.
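One caveat worth noting if you go the method='multi' route against SQL Server: the server caps a single batch at roughly 2100 bound parameters, so chunksize generally has to be set so that chunksize times the number of columns stays under that limit. A hedged sketch (the 27-column count comes from the question; the table name and engine are placeholders):

# keep rows-per-INSERT under SQL Server's ~2100-parameter limit;
# with 27 columns, 2100 // 27 = 77 rows is the most that fits per chunk
rows_per_chunk = 2100 // len(df.columns) - 1  # minus one for a safety margin
df.to_sql('table_name', con=engine, if_exists='append', index=False,
          method='multi', chunksize=rows_per_chunk)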

How to access and update already existing tables of an sqlite database file from another python file

I wrote the following code to create a database file, create a table in that database, and insert values using SQL queries in Python.
Let's say this is a file named info.py:
import sqlite3

conn = sqlite3.connect('sqlite_file.db', timeout=20)
c = conn.cursor()

# Creating a new SQLite table with 2 columns
c.execute('CREATE TABLE STUDENTS_ (Name CHAR, RollNo INTEGER)')

a = ['Richa', 'Swapnil', 'Jahanavi', 'Shivam', 'Mehul']
b = [122, 143, 102, 186, 110]
p = 0
for r in b:
    c.execute("INSERT INTO STUDENTS_ VALUES (?,?);", (a[p], b[p]))
    p = p + 1
It runs well and gives the expected result.
Now I want to update the same table, STUDENTS_, through code in a different Python file. I tried the code below.
This is another file, named info_add.py:
import sqlite3

sqlite_file = 'my_first_db.sqlite' # name of the sqlite database file
STUD = 'STUDENTS_' # name of the table to be created
conn = sqlite3.connect('sqlite_file.db', timeout=20)
c = conn.cursor()

a = ['Riya', 'Vipul']
b = [160, 173]
p = 0
for r in b:
    c.execute("INSERT INTO STUDENTS_ VALUES (?,?);", (a[p], b[p]))
    p = p + 1
I get the following error:
OperationalError: database is locked
What is this error? I know I am doing something wrong; please help me with the right method. Thank you!
The "database is locked" message indicates that some other connection still has an active transaction.
Python tries to be clever and automatically starts transactions for you, so you have to ensure that you end your transactions (conn.commit()) when needed.
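For example, here is a minimal sketch of how info.py could release its lock when it finishes (same table and file names as in the question); using the connection as a context manager commits on success and rolls back on an exception:

import sqlite3

conn = sqlite3.connect('sqlite_file.db', timeout=20)
with conn:  # commits automatically when the block succeeds
    conn.execute("INSERT INTO STUDENTS_ VALUES (?,?);", ('Richa', 122))
conn.close()  # close the connection so other scripts can write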

RODBC::sqlSave - problems creating/appending to a table

Related to several other questions on the RODBC package, I'm having problems using RODBC::sqlSave to write to a table on a SQL Server database. I'm using MS SQL Server 2008 and 64-bit R on a Windows RDP.
The solution in the 3rd link (questions) does work [sqlSave(ch, df)]. But in this case it writes to the wrong database. That is, my default DB is "C2G" but I want to write to "BI_Sandbox". And it doesn't allow for options such as rownames, etc. So there still seems to be a problem in the package.
Obviously, a possible solution would be to point my ODBC connection at the specified database, but it seems there should be a better method. And this wouldn't solve the problem of unusable parameters in the sqlSave command--such as rownames, varTypes, etc.
I have the following ODBC System DSN connection:
Microsoft SQL Server Native Client Version 11.00.3000
Data Source Name: c2g
Data Source Description: c2g
Server: DC01-WIN-SQLEDW\BISQL01,29537
Use Integrated Security: Yes
Database: C2G
Language: (Default)
Data Encryption: No
Trust Server Certificate: No
Multiple Active Result Sets(MARS): No
Mirror Server:
Translate Character Data: Yes
Log Long Running Queries: No
Log Driver Statistics: No
Use Regional Settings: No
Use ANSI Quoted Identifiers: Yes
Use ANSI Null, Paddings and Warnings: Yes
R code:
R> ch <- odbcConnect("c2g")
R> sqlSave(ch, zinq_scores, tablename = "[bi_sandbox].[dbo].[table1]",
append= FALSE, rownames= FALSE, colnames= FALSE)
Error in sqlColumns(channel, tablename) :
‘[bi_sandbox].[dbo].[table1]’: table not found on channel
# after error, try again:
R> sqlDrop(ch, "[bi_sandbox].[dbo].[table1]", errors = FALSE)
R> sqlSave(ch, zinq_scores, tablename = "[bi_sandbox].[dbo].[table1]",
append= FALSE, rownames= FALSE, colnames= FALSE)
Error in sqlSave(ch, zinq_scores, tablename = "[bi_sandbox].[dbo].[table1]", :
42S01 2714 [Microsoft][SQL Server Native Client 11.0][SQL Server]There is already an object named 'table1' in the database.
[RODBC] ERROR: Could not SQLExecDirect 'CREATE TABLE [bi_sandbox].[dbo].[table1] ("credibility_review" float, "creditbuilder" float, "no_product" float, "duns" varchar(255), "pos_credrev" varchar(5), "pos_credbuild" varchar(5))'
In the past, I've gotten around this by running the supremely inefficient sqlQuery with a row-by-row INSERT INTO. But when I tried that this time, no data was written, although the sqlQuery statement produced no error or warning message.
temp <-"INSERT INTO [bi_sandbox].[dbo].[table1]
+ (credibility_review, creditbuilder, no_product, duns, pos_credrev, pos_credbuild) VALUES ("
>
> for(i in 1:nrow(zinq_scores)) {
+ sqlQuery(ch, paste(temp, "'", zinq_scores[i, 1], "'",",", " ",
+ "'", zinq_scores[i, 2], "'", ",",
+ "'", zinq_scores[i, 3], "'", ",",
+ "'", zinq_scores[i, 4], "'", ",",
+ "'", zinq_scores[i, 5], "'", ",",
+ "'", zinq_scores[i, 6], "'", ")"))
+ }
> str(sqlQuery(ch, "select * from [bi_sandbox].[dbo].[table1]"))
'data.frame': 0 obs. of 6 variables:
$ credibility_review: chr
$ creditbuilder : chr
$ no_product : chr
$ duns : chr
$ pos_credrev : chr
$ pos_credbuild : chr
Any help would be greatly appreciated.
Also, if there is any missing detail, please let me know and I'll edit the question.
My apologies up front. This is not exactly a "simple example." It's pretty trivial, but there are a lot of parts. And by the end, you'll probably think I'm crazy for doing it this way.
Starting in SQL Server Management Studio
First, I've created a database on SQL Server called mtcars with default schema dbo. I've also added myself as a user. Under my own user name, I am the database owner, so I can do anything I want to the database, but from R, I will connect using a generic account that only has EXECUTE privileges.
The predefined table in the database that we are going to write to is called mtcars. (So the full path to the table is mtcars.dbo.mtcars; it's lazy, I know). The code to define the table is
USE [mtcars]
GO
/****** Object: Table [dbo].[mtcars] Script Date: 2/22/2016 11:56:53 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[mtcars](
    [OID] [int] IDENTITY(1,1) NOT NULL,
    [mpg] [numeric](18, 0) NULL,
    [cyl] [numeric](18, 0) NULL,
    [disp] [numeric](18, 0) NULL,
    [hp] [numeric](18, 0) NULL
) ON [PRIMARY]
GO
Stored Procedures
I'm going to use two stored procedures. The first is an "UPSERT" procedure, that will first try to update a row in a table. If that fails, it will insert the row into the table.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE dbo.sample_procedure
    @OID int = 0,
    @mpg numeric(18,0) = 0,
    @cyl numeric(18,0) = 0,
    @disp numeric(18,0) = 0,
    @hp numeric(18,0) = 0
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    -- TRANSACTION code borrowed from
    -- http://stackoverflow.com/a/21209131/1017276
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
    UPDATE dbo.mtcars
        SET mpg = @mpg,
            cyl = @cyl,
            disp = @disp,
            hp = @hp
        WHERE OID = @OID;
    IF @@ROWCOUNT = 0
    BEGIN
        INSERT dbo.mtcars (mpg, cyl, disp, hp)
        VALUES (@mpg, @cyl, @disp, @hp)
    END
    COMMIT TRANSACTION;
END
GO
Another stored procedure I will use is just the equivalent of RODBC::sqlFetch. As far as I can tell, sqlFetch depends on SQL injection, and I'm not allowed to use it. Just to be on the safe side of our data security policies, I write little procedures like this. (Data security is pretty tight here; you may or may not need this.)
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE dbo.get_mtcars
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    SELECT * FROM dbo.mtcars
END
GO
Now, from R
I have a utility function I use to help me manage inputting data into the stored procedures. sqlSave would do a lot of this automatically, so I'm kind of reinventing the wheel. The gist of the utility function is to determine if the value I'm pushing to the database needs to be nested in quotes or not.
#* Utility function. This does a couple helpful things like
#* Convert NA and NULL into a SQL NULL
#* wrap character strings and dates in single quotes
sqlNullString <- function(value, numeric=FALSE)
{
  if (is.null(value)) value <- "NULL"
  if (is.na(value)) value <- "NULL"
  if (inherits(value, "Date")) value <- format(x = value, format = "%Y-%m-%d")
  if (value == "NULL") return(value)
  else if (numeric) return(value)
  else return(paste0("'", value, "'"))
}
This next step isn't strictly necessary, but I'm going to do it just so that my R table is similar to my SQL table. This is organizational strategy on my part.
mtcars$OID <- NA
Now let's establish our connection:
server <- "[server_name]"
uid <- "[generic_user_name]"
pwd <- "[password]"
library(RODBC)
channel <- odbcDriverConnect(paste0("driver=SQL Server;",
                                    "server=", server, ";",
                                    "database=mtcars;",
                                    "uid=", uid, ";",
                                    "pwd=", pwd))
Now this next part is pure laziness. I'm going to use a for loop to push each row of the data frame to the SQL table one at a time. As noted in the original question, this is kind of inefficient. I'm sure I could write a stored procedure to accept several vectors of data, compile them into a temporary table, and do the UPSERT in SQL, but I don't work with large data sets when I'm doing this, and so it hasn't yet been worth it to me to write such a procedure. Instead, I prefer to stick with the code that is a little easier for me to reason with on my limited SQL skills.
Here, we're just going to push the first 5 rows of mtcars
#* Insert the first 5 rows into the SQL Table
for (i in 1:5)
{
  sqlQuery(channel = channel,
           query = paste0("EXECUTE dbo.sample_procedure ",
                          "@OID = ", sqlNullString(mtcars$OID[i]), ", ",
                          "@mpg = ", mtcars$mpg[i], ", ",
                          "@cyl = ", mtcars$cyl[i], ", ",
                          "@disp = ", mtcars$disp[i], ", ",
                          "@hp = ", mtcars$hp[i]))
}
And now we'll take a look at the table from SQL
sqlQuery(channel = channel,
query = "EXECUTE dbo.get_mtcars")
This next line is just to match up the OIDs in R and SQL for illustration purposes. Normally, I would do this manually.
mtcars$OID[1:5] <- 1:5
This next for loop will UPSERT all 32 rows. We already have 5 in the table, we're UPSERTing all 32, and the SQL table at the end should have exactly 32 rows if we've done it correctly. (That is, SQL will recognize the 5 rows that already exist.)
#* Update/Insert (UPSERT) the entire table
for (i in 1:nrow(mtcars))
{
  sqlQuery(channel = channel,
           query = paste0("EXECUTE dbo.sample_procedure ",
                          "@OID = ", sqlNullString(mtcars$OID[i]), ", ",
                          "@mpg = ", mtcars$mpg[i], ", ",
                          "@cyl = ", mtcars$cyl[i], ", ",
                          "@disp = ", mtcars$disp[i], ", ",
                          "@hp = ", mtcars$hp[i]))
}
#* Notice that the first 5 rows were unchanged (though they would have changed
#* if we had changed the data...the point being that the stored procedure
#* correctly identified that these records already existed)
sqlQuery(channel = channel,
query = "EXECUTE dbo.get_mtcars")
Recap
The stored procedure approach has a major disadvantage in that it is blatantly reinventing the wheel. It also requires that you learn SQL. SQL is pretty easy to learn for simple tasks, but some of the code I've written for more complex tasks is pretty difficult to interpret. Some of my procedures have taken me the better part of a day to get right. (once they are done, however, they work incredibly well)
The other big disadvantage to the stored procedure is, I've noticed, it does require a little bit more code work and organization. I'd say it's probably been about 10% more code work and documentation than if I were just using SQL Injection.
The chief advantages of the stored procedure approach are:
- you have massive flexibility for what you want to do
- you can store your SQL code in the database and not pollute your R code with potentially huge strings of SQL code
- you avoid SQL injection (again, this is a data security thing, and may not be an issue depending on your employer's policies. I'm strictly forbidden from using SQL injection, so stored procedures are my only option)
It should also be noted that I've not yet explored using Table-Valued parameters in my stored procedures, which might simplify things for me a bit.
I faced this yesterday: in my case the issue was the schema. The table was actually created, but in my own user's schema.
So the first time you can create it, and after that you get this error (that the object already exists).
After investigating, I found that some packages do not work correctly with schemas.
In the end I used the "insert by line" solution. The solution is available here and here.

FreeTDS / SQL Server UPDATE Query Hangs Indefinitely

I'm trying to run the following UPDATE query from a python script (note I've removed the database info):
print 'Connecting to db for update query...'
db = pyodbc.connect('DRIVER={FreeTDS};SERVER=<removed>;DATABASE=<removed>;UID=<removed>;PWD=<removed>')
cursor = db.cursor()
print ' Executing SQL queries...'
for i in range(len(data)):
    sql = '''
    UPDATE product.sanction
    SET action_summary = '{action_summary}'
    WHERE sanction_id = {sanction_id};
    '''.format(sanction_id=data[i][0], action_summary=data[i][1])
    cursor.execute(sql)
cursor.close()
db.commit()
db.close()
However, it hangs indefinitely, no error.
I'm new to pyodbc, but it should be set up correctly considering I'm having no problems performing SELECT queries. I did have to use CAST in the SELECT queries (I cast sanction_id AS INT [an int identity in the database] and action_summary AS TEXT [nvarchar in the database]) to properly populate the data, so perhaps the problem lies somewhere there, but I don't know where to start debugging. Converting the text to NVARCHAR didn't do anything either.
Here's an example of one of the rows in data:
(2861357, 'Exclusion Program: NonProcurement; Excluding Agency: HHS; CT Code: Z; Exclusion Type: Prohibition/Restriction; SAM Number: S4MR3Q9FL;')
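Whichever way the hang gets resolved, note that those action_summary strings are spliced into the SQL with .format(), so any stray single quote in the data would corrupt the statement. A hedged sketch of the same loop with bound parameters, assuming the (sanction_id, action_summary) layout shown above, at least rules quoting out as the culprit:

sql = "UPDATE product.sanction SET action_summary = ? WHERE sanction_id = ?"
for sanction_id, action_summary in data:
    # values are bound by the driver rather than formatted into the string
    cursor.execute(sql, action_summary, sanction_id)
db.commit()  # commit while the cursor/connection are still open
cursor.close()
db.close()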
I was unable to find my issue, but I ended up using QuerySets rather than running an UPDATE query.

LinqToSQL not updating database

I created a database and dbml in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio Server Explorer) and none of my updates were there.
using (var context = new CenasDataContext())
{
    context.Log = Console.Out;
    context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
    context.SubmitChanges();
}
This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID.
INSERT INTO [dbo].Cenas VALUES (@p0)
-- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1
This is the log from the execution (the context log is printed to the console).
The problem I'm having is that these updates are not persisted in the database. I mean that when I query my database (Visual Studio Server Explorer -> New Query) I see the table is empty, every time.
I am using a SQL Server database file (.mdf).
EDIT (1): Immediate Window result
context.GetChangeSet()
{Inserts: 1, Deletes: 0, Updates: 0}
Deletes: Count = 0
Inserts: Count = 1
Updates: Count = 0
context.GetChangeSet().Inserts
Count = 1
[0]: {DBTest.Cena}
If you construct a DataContext without arguments, it will retrieve its connection string from your App.Config or Web.Config file. Open the one that applies, and verify that it points to the same database. With .mdf file databases in particular, a common cause is that the build copies the .mdf into the output folder (bin\Debug), so the inserts land in that copy rather than in the file you are inspecting in Server Explorer.
Put a breakpoint on context.SubmitChanges(); and in your immediate window in VS, do:
context.GetChangeSet();
There is an Inserts property and it should have one record. That will help tell whether it's queuing up an insert.
HTH.
