PDO ODBC: one of several result sets causes PHP to crash (SQL Server)

- Running PHP 5.6 on IIS 8.5 (Windows Server 2012 R2)
- Connecting to SQL Server 2008 R2 remotely via PDO ODBC
- Issue isolated to one query; other queries function normally with expected results.
I am calling a stored procedure that returns eight result sets and writing each result set to a CSV file. When I get to the third result set, php-cgi.exe crashes and I get a 500 error.
I have isolated the issue to this particular result set because if I skip the result set altogether using $stmt->nextRowset() everything works as expected.
// execute SP, binding the year and period parameters defined above
$StmtText = "{CALL PROCESS_PR (?, ?)}";
$Stmt = $dbh->prepare($StmtText);
$Stmt->bindParam(1, $PayYear, PDO::PARAM_INT);
$Stmt->bindParam(2, $PayPeriod, PDO::PARAM_INT);
$Stmt->execute();
$file_out = fopen('c:\windows\temp\tmp_1.csv', 'w');
$Result = $Stmt->fetchAll(PDO::FETCH_NUM);
foreach ($Result as $row) { fputcsv($file_out, $row); }
fclose($file_out);
$Stmt->nextRowset();
// this block is repeated 8 times; it fails on the third
No PHP errors are being thrown, and advanced IIS logging doesn't suggest much. I am struggling to determine what is crashing PHP. I executed the stored procedure directly via SSMS and it runs successfully, and looking at the problematic result set I can't see anything out of the ordinary: no special characters, no long strings, etc.
I have also been down the road of confirming PHP memory limits are set appropriately, checking timeouts both in FastCGI and in php.ini, and verifying the Visual C++ 2012 runtime is installed, both 32-bit and 64-bit.
Looking for any thoughts on how to track down php crashing on this one particular result set. Thanks much.
UPDATE: Laughing Vergil's answer below solved the issue. The problematic result set had two fields with datatype varchar(max); changing one of them to varchar(255) solved the problem.
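As an aside, the fetch-and-write block doesn't need to be repeated eight times; one loop over the result sets gives the same files. A minimal, language-agnostic sketch of that shape in Python, with the result sets simulated as plain lists of tuples (no database or PDO involved, and the values are made up):

```python
import csv
import io

# Simulated stand-ins for the rowsets the stored procedure returns
# (a real program would pull these from the database instead).
result_sets = [
    [("2023", 1, 100.0), ("2023", 2, 200.0)],
    [("ACME", "payroll")],
    [("x", "y"), ("p", "q")],
]

csv_files = []
for rows in result_sets:
    # One CSV per result set, mirroring the tmp_N.csv files.
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    csv_files.append(buf.getvalue())

print(csv_files[1])
```

In the PHP version the loop body would be the fetchAll/fputcsv/nextRowset sequence, with the output filename derived from the loop index.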

Related

ColdFusion 9.01 -> Lucee 5.3.3.62 and <cfinsert> / <cfupdate>

I've inherited a big application which is running on CF 9.01.
I'm in the process of porting it to Lucee 5.3.3.62, but have some problems with <cfinsert> and <cfupdate>.
I know that I should replace them with <cfquery>, but this application has ~1000 source files (!!), and replacing all those tags is currently not feasible for timing reasons.
Lucee is throwing errors like:
“An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name.”
At first, I thought there were problems with date fields, because Lucee handles them differently than CF 9.01, but this is not the case.
So, I created a test table (on MS-SQL Server 2008R2):
CREATE TABLE [dbo].[LuceeTest01](
[Field1] [nvarchar](50) NULL,
[Field2] [nvarchar](50) NULL ) ON [PRIMARY]
In Lucee, I’m using as datasource: Microsoft SQL Server (Vendor Microsoft), called “one”
This is my test application:
<cfset Form.Field1 = "Field1">
<cfset Form.Field2 = "Field2">
<cfoutput>
<cfinsert datasource="one"
tablename="LuceeTest01"
formfields="Field1, Field2">
</cfoutput>
When I run this, I get the same error. Any idea why?
Full trace here: https://justpaste.it/6k0hw
Thanks!
EDIT1:
Curious. I tried using “jTDS Type 4 JDBC Driver for MS SQL Server and Sybase” as datasource driver, and now the error is:
The database name component of the object qualifier must be the name
of the current database.
This traces back to this statement:
{call []..sp_columns 'LuceeTest01', '', '', 'null', 3}
When I try this in the Microsoft SQL Server Management Studio, I get the same error.
However, when I specify the database name (‘one’ as third argument), no error in MS SQL SMS.
EXEC sp_columns 'LuceeTest01', '', 'one', 'null', 3
Shouldn’t Lucee take this argument from the datasource configuration or something?
EDIT2:
As suggested by @Redtopia, when "tableowner" and "tablequalifier" are specified, it works for the jTDS driver. I will use this as a workaround.
Updated sample code:
<cfset Form.Field1 = "Field1">
<cfset Form.Field2 = "Field2">
<cfinsert datasource="onecfc"
tableowner="dbo"
tablename="LuceeTest01"
tablequalifier="one"
formfields="Field1,Field2">
EDIT3:
Bug filed here: https://luceeserver.atlassian.net/browse/LDEV-2566
I personally would refactor CFINSERT into queryExecute and write a plain INSERT INTO SQL statement. I wish we would completely remove support for cfinsert.
Consider using:
<cfscript>
Form.Field1 = "Field1";
Form.Field2 = "Field2";
// Don't forget to set up the datasource in Application.cfc
QueryExecute("
INSERT INTO LuceeTest01 (Field1, Field2)
VALUES (?, ?)
",
[form.field1, form.field2]
);
</cfscript>
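In case a runnable analogue of that parameterized INSERT helps, here is the same shape in Python using the stdlib sqlite3 module (sqlite3 stands in for SQL Server purely so the sketch runs standalone; table and values mirror the test case above):

```python
import sqlite3

# In-memory stand-in for the LuceeTest01 table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LuceeTest01 (Field1 TEXT, Field2 TEXT)")

# Placeholder-based INSERT, like the one QueryExecute sends:
# values travel as bound parameters, never spliced into the SQL.
conn.execute(
    "INSERT INTO LuceeTest01 (Field1, Field2) VALUES (?, ?)",
    ("Field1", "Field2"),
)

rows = conn.execute("SELECT Field1, Field2 FROM LuceeTest01").fetchall()
print(rows)
```

The point is the same in both languages: the SQL text is static and the form values are bound separately, so no alias or quoting issues can arise from the data.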
I am 99% confident that this is a Lucee / JDK / JDBC Driver bug and not a fault in your config.
Source:
I initially suspected some low-hanging fruit such as your leading whitespace in ' Field2'. Then I saw your comment showing that you had tried with that trimmed and your Edit1 with the different error when using a different DB Driver. So I set to work trying to reproduce your issue.
On Lucee 5.2.4.37 and MS SQL Server 2016, armed with your sample code and two new datasources - one each for the jTDS (MS SQL and Sybase) driver and the Microsoft SQL Server (JDBC4 - Vendor Microsoft) driver - I was unable to reproduce either issue on either driver. Even when selectively taking away various DB permissions and changing the default DB for the SQL user, I was still only able to force different (expected) errors, not your error.
As soon as I applied the admin update to Lucee 5.3.3.62 and re-ran the tests, I immediately hit both of your errors with the respective datasources, with no other change in DB permissions, datasource config or sample code.
Good luck convincing the Lucee guys that this anecdotal evidence is proof of a bug, but give me a shout if you need an extra voice. Whilst I don't use cfinsert/cfupdate in my own code, I have in the recent past been in the position of supporting a legacy CF application of similar sounding size and nature and empathise with the logistical challenges surrounding refactoring or modernising it!
Edit:
I tried the tablequalifier suggestion from @Redtopia in a comment above. Adding just the tablequalifier attribute did not work for me with either DB driver.
Using both tablequalifier="dbname" and tableowner="dbo" still didn't work for me with the MS SQL Server driver, but it does seem to work for the jTDS driver. That makes it a possible workaround, though it means changing every occurrence of the tag, so ideally the Lucee team will be able to fix the bug from their end, or identify which Java update broke it if Lucee itself didn't.

SQLAlchemy Truncating Strings On Import From MS SQL

First off this is my setup:
Windows 7
MS SQL Server 2008
Python 3.6 Anaconda Distribution
I am working in a Jupyter notebook and trying to import a column of data from a MS SQL Server database using SQLAlchemy. The column in question contains cells which store long strings of text (datatype is nvarchar(max)). This is my code:
engine = create_engine('mssql+pyodbc://user:password@server:port/db_name?driver=SQL+Server+Native+Client+11.0')
stmt = 'SELECT componenttext FROM TranscriptComponent WHERE transcriptId=1265293'
connection = engine.connect()
results = connection.execute(stmt).fetchall()
This executes fine, and imports a list of strings. However when I examine the strings they are truncated, and in the middle of the strings the following message seems to have been inserted:
... (8326 characters truncated) ...
With the number of characters varying from string to string. I did a check on how long the strings that got imported are, and the ones that have been truncated are all limited at either 339 or 340 characters.
Is this a limitation in SQLAlchemy, Python or something else entirely?
Any help appreciated!
Same problem here!
Set up :
Windows Server 2012
MS SQL Server 2016/PostgreSQL 10.1
Python 3.6 Anaconda Distribution
I've tested everything I could, but can't get past this 33x-character limit on field length. Both varchar and text columns seem to be affected, and the DBMS/driver doesn't seem to have any influence.
EDIT:
Found the source of the "problem": https://bitbucket.org/zzzeek/sqlalchemy/issues/2837
Seems like fetchall() is affected by this feature.
The only workaround I found was:
empty_list = []
connection = engine.connect()
results = connection.execute(stmt)
for row in results:
    empty_list.append(row['componenttext'])
This way I haven't seen any truncation in my long string fields (>3000 characters).
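The linked SQLAlchemy issue describes the truncation as display-only: it happens in the row's repr(), not in the stored data, which is why indexing into the row returns the full string. A standalone sketch of the same idea using the stdlib reprlib module (standing in here for SQLAlchemy's row display; no database involved):

```python
import reprlib

# A long value such as an nvarchar(max) column.
long_text = "x" * 8326

# reprlib elides long values when rendering them for display,
# much like the row repr described in the linked issue.
displayed = reprlib.repr(long_text)

print(len(displayed))   # short, elided display form containing '...'
print(len(long_text))   # the underlying data is untouched
```

So if the full value is needed, access the column directly (as the loop above does) rather than relying on what gets printed.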

ExecuteReader TimeOut solved by changing the name of the stored procedure

This happened to me today.
My MVC.Net application had been running fine for a few months. Today it threw an error when executing this part of the code (this is the simplified version):
var cmd = db.Database.Connection.CreateCommand();
cmd.CommandText = $"mySchema.myStoredProcedureName {param1}";
db.Database.CommandTimeout = 0;
db.Database.Connection.Open();
var reader = cmd.ExecuteReader();
Where db is a DbContext EF6.
The timeout occurred on the last line.
I tried the "using" syntax, with no success.
I also tried the following, in case the connection was not open:
while (db.Database.Connection.State != ConnectionState.Open) {
    db.Database.Connection.Open();
}
No success.
The stored procedure returns its result in 2 seconds in SSMS.
Finally, I created a similar stored procedure under another name.
Then it worked.
My question:
- Did MSSQL blacklist my stored procedure?
I don't think it was blacklisted. Is it possible that your indexes were in need of a rebuild? In other words the renaming really may not have fixed the problem, but some other sort of SQL Server maintenance behind the scenes did?
My educated guess is that the server provider changed something on their end that affected you, if you did not change any code.

How do I call an MS SQL Server system stored procedure with FireDAC?

I am converting my application from using raw ADO calls to using FireDAC.
Currently, I have the following line of code:
TQueryRunner.ExecuteQueryNoMsg('exec(''sp_who @@spid'')', iRA, rs, conn, True);
I need to be able to make that sp_who @@spid call in FireDAC and return the result set. I can't seem to do so despite a number of different approaches. I've tried calling it with TFDStoredProc, but the parameter requires a value, and there really isn't a value for it. I've tried a TFDQuery, but that won't work at all (and by "won't work" I mean I get an access violation when I try).
Can someone point me in the right direction?

Accessing output parameters before processing result set in SQL Server via jdbc

I am calling a SQL Server 2005 stored procedure using the MS JDBC Driver and want to access the output parameters before processing the result set, as follows:
proc = "{call mySproc(?,?,?)}";
conn = ds.getConnection();
callableStmt = conn.prepareCall(proc);
callableStmt.setString(1,inputParam);
callableStmt.registerOutParameter(2,Types.INTEGER);
callableStmt.registerOutParameter(3,Types.INTEGER);
callableStmt.execute();
rs = (ResultSet) callableStmt.getResultSet();
output[0] = callableStmt.getInt(2); // @rc
output[1] = callableStmt.getInt(3); // @rs
if (output[0] != 0) {
    // do some stuff
} else {
    // process result set
}
Problem is that accessing the output parameters before processing the result set causes the result set to be closed.
Is there a way I can achieve this without altering the stored procedure?
It's possible to do this via JDBC for other databases. However, from researching I found the JDBC Spec states:
For maximum portability, a call's ResultSet objects and update counts should be processed prior to getting the values of output parameters.
Is it the case that the MS JDBC Driver has been implemented to the letter of the law and other JDBC drivers have provided more flexible implementations?
Hoping someone can clear up my understanding on this issue.
The output parameters come on the wire after all the result sets. Any client, regardless of platform or technology, has to parse all the results before it can even see the output parameter values.
If there are clients that offer the values of output parameters before the result sets have been consumed, it means they cache the result sets in memory, which is very bad considering result sets can grow quite large.
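That ordering can be pictured as a sequential stream: rows arrive first and the output parameter values only after them, so a driver that exposes the parameters early must have buffered everything. A toy Python sketch of the ordering (not real TDS; the stream contents are invented for illustration):

```python
# Toy model of the response stream: result-set rows come first,
# output parameter values only after all rows have gone by.
def response_stream():
    yield ("row", ("alice",))
    yield ("row", ("bob",))
    yield ("outparams", {"rc": 0, "rs": 2})

rows = []
outparams = None
for kind, payload in response_stream():
    if kind == "row":
        rows.append(payload)   # must be drained first
    else:
        outparams = payload    # reachable only after the rows

print(len(rows), outparams["rc"])
```

A streaming driver like Microsoft's JDBC driver reads this sequentially, which is why getInt() before exhausting the ResultSet closes it.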
