IronPython stack trace cites a line in "except" block

I'm working on an IronPython (v2.7.3) module that connects to a given SQL Server database on a remote machine, and uses SMO to generate scripts for all of that DB's objects. My 'real' module has the code to generate a script for every defined object type in SMO, from ApplicationRoles to XmlSchemaCollections. The DB I'm working with is on SQL Server 2000. It has a fair number of objects -- 117 tables, 257 SPs, 101 views, etc.
Every time I run my module, I get a stack trace at the point where it's scripting the SPs. I trimmed the module down to script only the tables and the SPs, and it still failed while scripting the SPs. Here's the trimmed-down version:
import sys, clr
import System.Array
serverName = r'x.x.x.x' #IP address of remote server
pathAssemblies = r'C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\SQLServer2008R2\x64'
sys.path.append(pathAssemblies)
clr.AddReferenceToFile('Microsoft.SqlServer.Smo.dll')
import Microsoft.SqlServer.Management.Smo as SMO
srv = SMO.Server(serverName)
srv.ConnectionContext.LoginSecure = False
srv.ConnectionContext.Login = 'sa'
srv.ConnectionContext.Password = 'foo' #Password of sa
db = srv.Databases['bar'] #Name of database
scrp = SMO.Scripter(srv)
sys.stdout = open('DBScriptOutput.txt', 'w')
try:
    for dbgenobj in db.Tables:
        urns = System.Array[SMO.SqlSmoObject]([dbgenobj])
        outStr = scrp.Script(urns)
        for outLine in outStr: print outLine
except:
    print 'Failed out while generating table scripts.'
try:
    for dbgenobj in db.StoredProcedures:
        urns = System.Array[SMO.SqlSmoObject]([dbgenobj])
        outStr = scrp.Script(urns)
        for outLine in outStr: print outLine
except:
    print 'Failed out while generating stored procedure scripts.'
The puzzle here that has me stumped involves two things that don't seem to make sense:
(1) The stack trace itself looks like this:
Traceback (most recent call last):
  File "E:\t.py", line 33, in <module>
UnicodeEncodeError: ('unknown', '\x00', 0, 1, '')
Line 33, though, is the print statement in the except block. The output file has all of the tables' scripts, complete scripts for 235 of the SPs, and part of the script for the 236th. But there's nothing unusual (that I can see, anyway) about #236 that should cause the scripting to fail, nor can I understand why the stack trace would cite a simple print statement in the except block at all.
(2) As a further troubleshooting experiment, I tried running the script with the whole try-except block for the tables commented out. It still fails generating the SP scripts, and generates the same stack trace citing line 33. The difference is this time it successfully generates another 16 lines of the script for procedure #236 before terminating. The overall file size of the output file is significantly smaller though. I could understand if the file stopped at the same size, or if the scripting stopped at the same point in the SP, but neither one of these is true.
So at this point, having (apparently) ruled out a problem character in the SP or a file/memory size limit for the scripting process, I'm stumped.

I've had this problem with a procedure that had non-ASCII characters in its comments. The simplest solution is to use the codecs module and codecs.open instead of the plain open call. Add this to the import lines:
import codecs
then replace the open call with:
sys.stdout = codecs.open('DBScriptOutput.txt', 'w', 'utf8')
That worked for me.
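An equivalent pattern, if you'd rather not redirect sys.stdout at all, is to write the scripter output straight to an encoded file handle; a minimal sketch built from the question's own loop:
import codecs

outfile = codecs.open('DBScriptOutput.txt', 'w', 'utf8')
for dbgenobj in db.StoredProcedures:
    urns = System.Array[SMO.SqlSmoObject]([dbgenobj])
    for outLine in scrp.Script(urns):
        outfile.write(outLine + '\n')  # encoded as UTF-8 on the way out
outfile.close()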

Related

Error in the "unload" statement using Informix

I'm trying to use the UNLOAD statement in Informix, but it doesn't work:
UNLOAD TO 'p7024cargaP.unl' select * from p7024carga;
[Error] Script lines: 1-4 --------------------------
A syntax error has occurred.
Script line 1, statement line 1, column 1
So maybe it is because I am running the statement in Aqua Data Studio. I'm on a Windows PC. Can someone help me?
UNLOAD is not a command understood by the server. Some tools, notably DB-Access, recognize the syntax and use a more or less complex sequence of operations to declare a cursor for the SELECT statement and then open the cursor, fetch each row, and format the result, writing to the named file.
Your primary option is to use DB-Access to execute the statement. That is certainly the simplest.
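If DB-Access isn't available, the same declare-open-fetch-format sequence can be reproduced from any client. A minimal Python sketch over ODBC (the DSN name is hypothetical; UNLOAD's default format is pipe-delimited with a trailing delimiter):
import pyodbc

conn = pyodbc.connect('DSN=my_informix_dsn')  # hypothetical DSN pointing at the Informix server
cursor = conn.cursor()
cursor.execute('SELECT * FROM p7024carga')

with open('p7024cargaP.unl', 'w') as unl:
    for row in cursor:
        # one row per line, fields separated (and terminated) by '|'
        unl.write('|'.join('' if v is None else str(v) for v in row) + '|\n')

conn.close()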

AWS RDS MySQL import DB Access Denied

I cannot import a database into AWS RDS because of these commands in my SQL file:
SET @@SESSION.SQL_LOG_BIN= 0;
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
Are they important? Without them there is no error.
The log_bin_trust_function_creators parameter is set to 1 in a custom parameter group.
FYI: MySQL 5.7 and 8, same error.
ERROR 1227 (42000) at line 20: Access denied; you need (at least one of) the SUPER, SYSTEM_VARIABLES_ADMIN or SESSION_VARIABLES_ADMIN privilege(s) for this operation
SET @@SESSION.SQL_LOG_BIN=0;
This is telling MySQL not to put these INSERT statements into the binary log. If you do not have binary logs enabled (not replicating), then it's not an issue to remove it. As far as I know, there is no way to enable this in RDS; I'm actually trying to figure out a way, which is how I found your question.
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
Did you execute a RESET MASTER on the database where the dump originated from? Check here for an explanation of this value: gtid_purged
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
This is setting the @@SESSION.SQL_LOG_BIN variable back to its original value; you should've seen another line earlier in the dump like: SET @MYSQLDUMP_TEMP_LOG_BIN = @@SESSION.SQL_LOG_BIN;
If you're simply recovering this table into a new database that isn't writing to a binary log (for replication), it's safe to remove these lines. Hope this helps!
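Since stripping those lines by hand is error-prone on a large dump, a small filter does it mechanically; a minimal sketch (file names are hypothetical):
# drop the binary-log/GTID bookkeeping lines mysqldump adds before loading into RDS
skip_prefixes = (
    'SET @@SESSION.SQL_LOG_BIN',
    'SET @@GLOBAL.GTID_PURGED',
    'SET @MYSQLDUMP_TEMP_LOG_BIN',
)

with open('dump.sql') as src, open('dump_rds.sql', 'w') as dst:
    for line in src:
        if line.startswith(skip_prefixes):
            continue  # safe to drop when the target is not a replication source
        dst.write(line)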

Python & SQL - Python won't actually commit data to SQL DB

This is a two-part question, but I'll get to my first conundrum.
I have some code I'm just trying to test where I want Python to run a stored procedure with the required variables, and it runs and even shows the new row. However, if I go into the DB and run a SELECT statement to show me the data, it's not there. It's as if the DB blocked it or something strange.
This code WORKS if I run a select statement instead of this stored procedure. The stored procedure works, and when I run this code, Python will run the stored procedure and spit back the new row (complete with the new row number that's auto-generated due to the IDENTITY I have on the column).
import pyodbc
conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=REPORT\INSTANCE;'
                      'UID=UserID;'
                      'PWD=Password;'
                      'Database=ReportDB;'
                      'Trusted_Connection=no;')
cursor = conn.cursor()
cursor.execute('EXEC [Schema].[Comments] \'2019-04-18 00:00:00.000\',\'Team\',\'1900-01-01 13:12:16.000\',\'testing this string\',\'username\'')
for row in cursor:
    print(row)
Kind of lost here. Python will run the stored procedure (which I have set to run a select at the end of, to prove the data was committed) and Python shows that but I don't see it in the actual DB. That line is gone.
You can see here (screenshots omitted) that 470 is the value in the ID column (IDENTITY, not null), which is what it should be. Yet a query against the same instance and the same DB shows that the most recent entry is still 469.
EDIT
I just noticed something in the Python. Each time I run the Python code, the stored procedure does the select at the end, and each time the CommentsID increases by 1 (as it should), but it's NOT remembering the previous ones inserted by Python. The only ones the SELECT statement pulls back are the ones I committed via SQL itself.
Notice here that the CommentsID (the first number in each line that starts with a 4) goes from 469 to 471. Yet I just had that image above that shows 470 - where did 470 go?!
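A likely explanation, offered as an assumption since no answer is included here: pyodbc starts every connection with autocommit off, so the row the procedure inserts is rolled back when the connection closes. Committing after the EXEC keeps it; a minimal sketch reusing the call from above:
cursor.execute('EXEC [Schema].[Comments] \'2019-04-18 00:00:00.000\',\'Team\',\'1900-01-01 13:12:16.000\',\'testing this string\',\'username\'')
for row in cursor:
    print(row)
conn.commit()  # without this, pyodbc discards the insert when the connection closes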
Second part:
I'm having a hard time putting variables into that EXEC section of the code. I think I need to wrap it in single quotes, which means I need to put \ in front of the quotes that need to stay for the SQL code. But when I do that and then try to run it, I can't get it to pull in the variables.
Here is what the SQL needs to be:
EXEC [schema].[Comments] 'Username'
In Python, I know I need it to be in single quotes but because the SQL code has quotes, you typically just put \ in front like this:
'EXEC [schema].[Comments] \'Username\''
That works. However, I then want username to pull from a variable, not be a string.
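For what it's worth, pyodbc also supports ? parameter markers, which sidesteps the quote escaping entirely; a minimal sketch (the variable name is mine):
username = 'some_user'  # hypothetical variable holding the value to pass in
cursor.execute('EXEC [schema].[Comments] ?', username)
conn.commit()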

SQL-Server - Bulk Insert Error 7301

Using SQL Server 2016, I am working on a legacy system that requires its nightly import to run via bulk insert. I know SSIS is a better option, but not one that's available to me.
I am uploading the file from the local machine with the following command:
BULK INSERT DataImports.staging_Companies
FROM 'D:\xxxxx\companies_20180802093057.txt'
WITH (BATCHSIZE = 1000
, DATAFILETYPE = 'char'
, FIRSTROW = 2
, FIELDTERMINATOR = ' ' -- Tab Character here
, ROWTERMINATOR = '\n'
, ERRORFILE = 'D:\xxxxx\company_errors.txt');
No format file is being used, and due to our dynamic handling we would not be able to use one. When uploading the file, I get the following error:
Msg 7301, Level 16, State 2, Line 4
Cannot obtain the required interface ("IID_IColumnsInfo") from OLE DB provider "BULK" for linked server "(null)".
The general opinion on this is that there is an issue with the row/line terminators. This is not the case here. The issue seems to be around file size or number of rows. The import works fine up to 2482 rows but falls over at 2483. By moving rows around I have ruled out an error on the data itself.
Once the file size/rowcount is exceeded, the command does not run at all. I have added a trigger to the destination table and a batch size; I see results for the smaller file, and none at all for the larger one.
From this, I am wondering if there is something that causes an interruption while reading the file, and cuts it off halfway through a line? I have used and maintained this system for a while and have seen far larger files processed before in terms of both rows, and data size.
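One way to test that theory is to cut the file at a chosen row count and retry the load, bisecting until the boundary is exact; a minimal Python sketch (the output name is hypothetical):
N = 2482  # last known-good row count from the tests above

with open(r'D:\xxxxx\companies_20180802093057.txt') as src, \
        open(r'D:\xxxxx\companies_test.txt', 'w') as dst:
    for i, line in enumerate(src):
        if i > N:  # i == 0 is the header row (FIRSTROW = 2 in the BULK INSERT)
            break
        dst.write(line)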
Update:
I have just transferred this file onto my local machine and run the import on my test DB (SQL Server 2017). The bulk insert ran fine with no errors. My local version is 14.0.1000.169; the client's server is 13.0.1601.5.
I also tested on another 2016 server (13.0.4474.0) and that ran fine. Is there anything in the server setup that may be causing this issue? Or even something in the main file system? I am clutching at straws now.
Any ideas gratefully received.

ADO.NET and ExecuteNonQuery: how to use DDL

I execute SQL scripts to change the database schema. It looks something like this:
using (var command = connection.CreateCommand())
{
    command.CommandText = script;
    command.ExecuteNonQuery();
}
Additionally, the commands are executed within a transaction.
The script looks like this:
Alter Table [TableName]
ADD [NewColumn] bigint NULL
Update [TableName]
SET [NewColumn] = (SELECT somevalue FROM anothertable)
I get an error because NewColumn does not exist; SQL Server seems to parse and validate the whole script before executing it.
When I execute the whole stuff in the Management Studio, I can put GO between the statements, then it works. When I put GO into the script, ADO.NET complains (Incorrect syntax near 'GO').
I could split the script into separate scripts and execute them in separate commands, but this would be hard to handle. I could split it on every GO, parsing the script myself. I just think that there should be a better solution and that I haven't understood something. How should scripts like this be executed?
My implementation, if anyone is interested, based on John Saunders' answer:
List<string> lines = new List<string>();
while (!textStreamReader.EndOfStream)
{
    string line = textStreamReader.ReadLine();
    bool isSeparator = line.Trim().ToLower() == "go";
    if (!isSeparator)
    {
        lines.Add(line);  // accumulate the current batch
    }
    if (isSeparator || textStreamReader.EndOfStream)
    {
        // flush the batch on GO or at the end of the file
        ExecuteCommand(string.Join(Environment.NewLine, lines.ToArray()));
        lines.Clear();
    }
}
Not using one of the umpteen ORM libraries to do it? Good :-)
To be completely safe when running scripts that make structural changes, use SMO rather than SqlClient, and make sure MARS is not turned on via the connection string (SMO will normally complain if it is anyway). Look for the ServerConnection class and ExecuteNonQuery - a different DLL, of course :-)
The difference is that the SMO DLL passes the script as-is to SQL Server, so it's a genuine equivalent of running it in SSMS or via the isql command line. Slicing on GOs grows into ever more elaborate scanning every time you hit another glitch (a GO can sit in the middle of a multi-line comment, there can be multiple USE statements, a script can drop the very DB that SqlClient connected to - oops :-). I just killed one such thing in the codebase I inherited (after more complex scripts conflicted with MARS; MARS is good for production code but not for admin work).
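To tie this back to the IronPython question at the top, a minimal sketch of that approach (assuming the same assembly folder; ServerConnection lives in ConnectionInfo.dll):
import sys, clr
sys.path.append(r'C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\SQLServer2008R2\x64')
clr.AddReferenceToFile('Microsoft.SqlServer.ConnectionInfo.dll')
from Microsoft.SqlServer.Management.Common import ServerConnection

conn = ServerConnection('x.x.x.x', 'sa', 'foo')  # server, login, password as in the first question
script = open('script.sql').read()
conn.ExecuteNonQuery(script)  # GO-separated batches are handled client-side, as SSMS does
conn.Disconnect()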
You have to run each batch separately. In particular, to run a script that may contain multiple batches ("GO" keywords), you have to split the script on the "GO" keywords.
Not Tested:
string script = File.ReadAllText("script.sql");
string[] batches = script.Split(new [] { "GO" + Environment.NewLine }, StringSplitOptions.None);
foreach (string batch in batches)
{
    if (string.IsNullOrWhiteSpace(batch)) continue; // skip empty batches between consecutive GOs
    using (var command = connection.CreateCommand())  // run ExecuteNonQuery on the batch
    {
        command.CommandText = batch;
        command.ExecuteNonQuery();
    }
}
