I'm trying to run the following UPDATE query from a python script (note I've removed the database info):
print 'Connecting to db for update query...'
db = pyodbc.connect('DRIVER={FreeTDS};SERVER=<removed>;DATABASE=<removed>;UID=<removed>;PWD=<removed>')
cursor = db.cursor()
print ' Executing SQL queries...'
for i in range(len(data)):
    sql = '''
        UPDATE product.sanction
        SET action_summary = '{action_summary}'
        WHERE sanction_id = {sanction_id};
        '''.format(sanction_id=data[i][0], action_summary=data[i][1])
    cursor.execute(sql)
cursor.close()
db.commit()
db.close()
However, it hangs indefinitely with no error.
I'm new to pyodbc, but it should be set up correctly, considering I have no problems performing SELECT queries. I did have to call CAST for SELECT queries (I cast sanction_id AS INT [an int identity on the database] and action_summary AS TEXT [an nvarchar on the database]) to properly populate data, so perhaps the problem lies somewhere there, but I don't know where to start debugging. Converting the text to NVARCHAR didn't do anything either.
Here's an example of one of the rows in data:
(2861357, 'Exclusion Program: NonProcurement; Excluding Agency: HHS; CT Code: Z; Exclusion Type: Prohibition/Restriction; SAM Number: S4MR3Q9FL;')
I was unable to find my issue, but I ended up using QuerySets rather than running an UPDATE query.
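For anyone hitting the same hang, a parameterized version of the loop is worth trying first: it sidesteps quoting problems in action_summary and lets the driver handle type conversion. This is only a sketch, assuming data is laid out as in the example row above:
import pyodbc

db = pyodbc.connect('DRIVER={FreeTDS};SERVER=<removed>;DATABASE=<removed>;UID=<removed>;PWD=<removed>')
cursor = db.cursor()
for sanction_id, action_summary in data:
    # ? placeholders are bound by the driver, so embedded quotes in
    # action_summary cannot break the statement
    cursor.execute(
        'UPDATE product.sanction SET action_summary = ? WHERE sanction_id = ?',
        action_summary, sanction_id)
db.commit()
cursor.close()
db.close()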
I have created a Python function which creates multiple query statements.
Once it creates the SQL statement, it executes it (one at a time).
Is there any way to bulk-run all the statements at once (assuming I was able to create all the SQL statements and wanted to execute them once all the statements were generated)? I know there is an execute_stream in the Python Connector, but I think this requires a file to be created first. It also appears to me that it runs a single query statement at a time.
Since this question is missing an example of the file, here is some file content I have provided as an extra that we can work from:
# connection test file for python multiple queries
import snowflake.connector

conn = snowflake.connector.connect(
    user='xxx',
    password='',
    account='xxx',
    warehouse='xxx',
    database='TEST_xxx',
    session_parameters={
        'QUERY_TAG': 'Rachel_test',
    },
)

try:
    cur = conn.cursor()
    cur.execute("CREATE WAREHOUSE IF NOT EXISTS tiny_warehouse_mg")
    cur.execute("CREATE DATABASE IF NOT EXISTS testdb_mg")
    cur.execute("USE DATABASE testdb_mg")
    cur.execute(
        "CREATE OR REPLACE TABLE "
        "test_table(col1 integer, col2 string)")
    cur.execute(
        "INSERT INTO test_table(col1, col2) VALUES "
        "(123, 'test string1'), "
        "(456, 'test string2')")
    print(cur.sfqid)  # query id of the most recent statement
except Exception:
    conn.rollback()
    raise
finally:
    conn.close()
The reference to this question describes a method that works with a file handle; the example in the documentation is as follows:
from codecs import open
with open(sqlfile, 'r', encoding='utf-8') as f:
    for cur in con.execute_stream(f):
        for ret in cur:
            print(ret)
Reference to guide I used
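Note that execute_stream only needs a file-like object, not necessarily a file on disk, so a sketch like the following (untested; io.StringIO stands in for the real file, and generated_sql is a hypothetical string holding the generated statements) would avoid creating a file first:
import io

generated_sql = "CREATE DATABASE IF NOT EXISTS testdb_mg;\nUSE DATABASE testdb_mg;"  # hypothetical
for cur in con.execute_stream(io.StringIO(generated_sql)):
    for ret in cur:
        print(ret)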
Now, when I ran these they were not perfect, but in practice I was able to execute multiple SQL statements in one connection, just not many at once. Each statement had its own query id. Is it possible to have a .sql file associated with one query id?
Is it possible to have a .sql file associated with one query id?
You can achieve that effect with the QUERY_TAG session parameter. Set the QUERY_TAG to the name of your .SQL file before executing its queries. Access the .SQL file's QUERY_IDs later using the QUERY_TAG field in QUERY_HISTORY().
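A minimal sketch of that approach, assuming an open connection conn and using 'my_script.sql' as a placeholder tag:
cur = conn.cursor()
cur.execute("ALTER SESSION SET QUERY_TAG = 'my_script.sql'")  # tag everything that follows
# ... execute the statements from my_script.sql here ...
cur.execute(
    "SELECT query_id, query_text "
    "FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) "
    "WHERE query_tag = 'my_script.sql'")
for row in cur:
    print(row)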
I believe that even though you generated the .sql file, each statement will have a unique query id when executed in Snowflake.
If you want to run one SQL statement independently of the others, you may try the multiprocessing/multithreading concepts in Python.
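For example, a threaded sketch along these lines (with placeholder statements, and assuming the Snowflake connection object may be shared across threads):
from concurrent.futures import ThreadPoolExecutor

def run_statement(stmt):
    cur = conn.cursor()  # one cursor per thread
    try:
        cur.execute(stmt)
        return cur.sfqid  # each statement gets its own query id
    finally:
        cur.close()

statements = ["SELECT 1", "SELECT 2"]  # placeholders for the generated SQL
with ThreadPoolExecutor(max_workers=4) as pool:
    for qid in pool.map(run_statement, statements):
        print(qid)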
The Python and Node.js libraries do not allow multiple statement executions.
I'm not sure about Python, but for Node.js there is a library that extends the original one and adds a method called "ExecutionAll" to it:
snowflake-multisql
You just need to wrap the multiple statements in BEGIN and END.
BEGIN
<statement_1>;
<statement_2>;
END;
With these operators, I was able to execute multiple statements in Node.js.
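A rough Python counterpart (a sketch, assuming the wrapped statements are valid inside a Snowflake Scripting anonymous block, and reusing the test_table example from above) is to send the whole block as one statement via EXECUTE IMMEDIATE:
cur = conn.cursor()
cur.execute("""
EXECUTE IMMEDIATE $$
BEGIN
    INSERT INTO test_table(col1, col2) VALUES (123, 'test string1');
    INSERT INTO test_table(col1, col2) VALUES (456, 'test string2');
END;
$$
""")
print(cur.sfqid)  # query id of the EXECUTE IMMEDIATE call itself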
I need to be able to execute an update SQL script, but it isn't working
Here is a link to the site that I used for reference:
https://groovyinsoapui.wordpress.com/tag/sql-eachrow-groovy-soapui/
Here is the format of the code that I ended up writing (due to the nature of the work I am doing, I am unable to provide the exact script that I wrote):
import groovy.sql.Sql
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
groovyUtils.registerJdbcDriver("com.microsoft.sqlserver.jdbc.SQLServerDriver")
def connectString = "jdbc:microsoft:sqlserver://:;databaseName=?user=&password="
sql = Sql.newInstance(connectString) // TEST YOUR CONNECT STRING IN A SQL BROWSER
sql.executeUpdate("UPDATE TABLE SET COLUMN_1 = 'VALUE_1' WHERE COLUMN_2 = 'VALUE_2'")
The response that I am getting is:
Script-result: 0
I also tried to use:
sql.execute("UPDATE TABLE SET COLUMN_1 = 'VALUE_1' WHERE COLUMN_2 = 'VALUE_2'")
Which returns the following response:
Script-result: false
From what you say, it seems that no row has COLUMN_2 = 'VALUE_2', so the number of updated rows is 0.
I would first check that statement in Management Studio, just to make sure.
In SQL Server I have a function which generates a complex XML of all products, with several tables joined: location, suppliers, orders, etc.
No problem there; it runs in 68 seconds and produces around 450 MB.
It should only be called occasionally, during migration to another server, so it doesn't matter that it takes some time.
I want to make this available for download over the webserver.
I've tried some variations of this in classic ASP:
Response.Buffer = false
set rs=conn.execute("select cast(dbo.exportXML() as varchar(max)) as res")
response.write rs("res")
But I just get a standard
An error occurred on the server when processing the URL. Please contact the system administrator.
If you are the system administrator please click here to find out more about this error.
This is not my usual custom 500 error handler, so I'm not sure how to find the error.
The problem is in response.write rs("res"); if I just do
temp = rs("res")
the script runs, but displays nothing, of course; if I then
response.write temp
I get the same failure.
So the problem is writing such a long string.
Can I save the file from T-SQL directly, and run the job periodically from SQL Agent?
I found that there seems to be a limit on how much data can be written at once using Response.Write. The workaround I used was to break the data into chunks like this:
Dim Data, Done
Done = False
Do While Not Done
    Data = RecordSet(0).GetChunk(8192)
    If Not Len(Data) = 0 Then
        Response.Write Data
    Else
        Done = True
    End If
Loop
Try this:
Response.ContentType = "text/xml"
rs.CursorLocation = 3 'adUseClient
rs.Open "select cast(dbo.exportXML() as varchar(max)) as res", conn
'Persist the Recordset in XML format to the ASP Response object.
'The constant value for adPersistXML is 1.
rs.Save Response, 1
I've created a simple SSRS report using Visual Studio 2012.
I'm using the CRMAF_ prefix to use CRM's auto filtering and achieve a context-based report.
I'm using two datasets to achieve this: dsFiltered for the filtered data, and dsApprovalSummary for my report.
This is the query dsFiltered uses:
declare @sql as nVarchar(max)
set @sql = 'SELECT vrp_investdocumentid
FROM (' + @CRM_Filteredvrp_investdocument + ') as CRMAF_vrp_investdocument'
exec(@sql)
This is the query dsApprovalSummary uses:
select doc.vrp_name as 'Yatırım Dosyası',
act.vrp_actioncode as 'Aksiyon Kodu',
cfg.vrp_description as 'Aksiyon Açıklaması',
act.OwnerIdName as 'Aksiyon Sorumlusu',
act.ModifiedOn as 'Son Değiştirme Tarihi'
from vrp_action act
inner join vrp_investdocument as doc on act.RegardingObjectId=doc.vrp_investdocumentId
inner join vrp_actionconfig as cfg on act.vrp_actioncode = cfg.vrp_actioncode
where cfg.vrp_reporttask=1 and act.RegardingObjectId = @documentId
order by act.ModifiedOn
The parameters are:
@CRM_Filteredvrp_investdocument - the parameter CRM should have populated with a query; defaults to null.
@CRM_vrp_investdocumentId - comes from dsFiltered (CRMAF_vrp_investdocument.vrp_investdocumentid); allows null.
The report works perfectly on the development server. However, when I deploy the report to the production server, it does not ask me to select a filter and does not have a default filter; it tries to run directly and then gives an rsProcessingAborted error. I've checked the logs and saw it said SYNTAX ERROR NEAR ')'.
This is from the report server logs:
processing!ReportServer_0-20!13ec!11/11/2014-13:45:04:: w WARN: Data source 'srcApprovalSummary': Report processing has been aborted.
processing!ReportServer_0-20!13ec!11/11/2014-13:45:04:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ProcessingAbortedException: ,
Microsoft.ReportingServices.ReportProcessing.ProcessingAbortedException: An error has occurred during report processing.
---> Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'dsFiltered'.
---> System.Data.SqlClient.SqlException: Incorrect syntax near ')'
UPDATE: On the development server, we have everything installed on the same machine: CRM frontend, services, SQL Server, Report Server, etc. But in the production environment, each of these is a different machine. Could this be the source of the error?
UPDATE 2: Running the profiler showed that @CRM_Filteredvrp_investdocument comes in as NULL. See the query below from the profiler:
exec sp_executesql N'declare @sql as nVarchar(max)
set @sql = ''SELECT vrp_investdocumentid
FROM ('' + @CRM_Filteredvrp_investdocument + '') as CRMAF_vrp_investdocument''
exec(@sql)',N'@CRM_Filteredvrp_investdocument nvarchar(4000)',@CRM_Filteredvrp_investdocument=NULL
It turned out to be a collation problem. I had been trying to use a custom data source with this connection string:
Data Source=myprodsqlserver; Initial Catalog=myorganization_MSCRM;
I rewrote it in lowercase and replaced the data source with localhost, and the problem was magically gone:
data source=localhost; initial catalog=myorganization_MSCRM;
In the report editor, try rebuilding the data source used by each of your datasets using the connection string builder (don't type it manually). Build them so they point to your prod CRM database, and then test the report completely in the report editor. This will determine whether the problem lies with the report or with CRM.
I have pyODBC installed for Python 3.2 and I am attempting to update a SQL Server 2008 R2 database that I created as a test.
I have no problem retrieving data and that has always worked.
However, when the program performs a cursor.execute("sql") to insert or delete a row, it does not work: no error, nothing. The response is as if I successfully updated the database, but no changes are reflected.
The code below essentially creates a dictionary (I have plans for this later) and just does a quick build of a SQL insert statement (which works, as I tested the entry I wrote to the log).
I have 11 rows in my table, Killer, which is not being affected at all, even after a commit.
I know this is something dumb but I can't see it.
Here is the code:
cnxn = pyodbc.connect('DRIVER={SQL Server Native Client 10.0};SERVER=PHX-500222;DATABASE=RoughRide;UID=sa;PWD=slayer')
cursor = cnxn.cursor()

# loop through dictionary and create insert entries
logging.debug("using test data to build sql")
for row in data_dictionary:
    entry = data_dictionary[row]
    inf = entry['Information']
    dt = entry['TheDateTime']
    stat = entry['TheStatus']
    flg = entry['Flagg']
    # create sql and set right back into row
    data_dictionary[row] = "INSERT INTO Killer(Information, TheDateTime, TheStatus, Flagg) VALUES ('%s', '%s', '%s', %d)" % (inf, dt, stat, flg)

# insert some rows
logging.debug("inserting test data")
for row in data_dictionary.values():
    cursor.execute(row)

# delete a row
rowsdeleted = cursor.execute("DELETE FROM Killer WHERE Id > 1").rowcount
logging.debug("deleted: " + str(rowsdeleted))
cnxn.commit
Assuming this isn't a typo in the post, it looks like you're just missing the parentheses for the Connection.commit() method:
...
# delete a row
rowsdeleted = cursor.execute("DELETE FROM Killer WHERE Id > 1").rowcount
logging.debug("deleted: " + str(rowsdeleted))
cnxn.commit()
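Without the parentheses, cnxn.commit just references the method object instead of calling it, so the transaction is never committed and the changes are rolled back when the connection closes. If you'd rather not manage transactions at all, pyodbc can also be opened in autocommit mode; a minimal sketch, reusing the connection string from the question:
cnxn = pyodbc.connect(
    'DRIVER={SQL Server Native Client 10.0};SERVER=PHX-500222;'
    'DATABASE=RoughRide;UID=sa;PWD=slayer',
    autocommit=True)  # every statement is committed as soon as it runs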