ExecuteReader TimeOut solved by changing the name of the stored procedure - sql-server

This happened to me today.
My MVC.NET application had been running fine for a few months. Today it threw an error when executing this piece of code (this is the simplified version):
var cmd = db.Database.Connection.CreateCommand();
cmd.CommandText = $"mySchema.myStoredProcedureName {param1}";
db.Database.CommandTimeout = 0;
db.Database.Connection.Open();
var reader = cmd.ExecuteReader();
where db is an EF6 DbContext.
The timeout occurred on the last line.
I tried the "using" syntax: no success.
I also tried the following, in case the connection was not open:
while (db.Database.Connection.State != ConnectionState.Open)
{
    db.Database.Connection.Open();
}
No success.
The stored procedure returns its result in 2 seconds in SSMS.
Finally I created a similar stored procedure with another name.
Then it worked.
My question:
- Did MSSQL blackList my stored procedure?

I don't think it was blacklisted. Is it possible that your indexes were in need of a rebuild? In other words, the renaming may not really have fixed the problem; some other sort of SQL Server maintenance behind the scenes did.
My educated guess is that, if you did not change any code, the server provider did something that affected you.
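One more thing worth trying before resorting to a rename (a minimal sketch, not the poster's actual fix; the parameter name @Param1 is assumed): call the procedure as CommandType.StoredProcedure with an explicit parameter instead of interpolating the value into the command text. Each distinct interpolated value produces a distinct query text, and therefore a distinct cached plan, whereas a parameterized call keeps one text and one plan.
// Sketch only: same call, but parameterized. "@Param1" is an assumed name;
// use whatever the stored procedure actually declares.
var cmd = db.Database.Connection.CreateCommand();
cmd.CommandText = "mySchema.myStoredProcedureName";
cmd.CommandType = CommandType.StoredProcedure;

var p = cmd.CreateParameter();
p.ParameterName = "@Param1";
p.Value = param1;
cmd.Parameters.Add(p);

cmd.CommandTimeout = 300;   // per-command timeout in seconds; 0 means wait indefinitely

if (db.Database.Connection.State != ConnectionState.Open)
{
    db.Database.Connection.Open();
}

using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        // consume the rows
    }
}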

Related

MSSQL record and display all languages in table including English, Chinese and Arabic [duplicate]

I am very new to working with databases. Now I can write SELECT, UPDATE, DELETE, and INSERT commands. But I have seen many forums where we prefer to write:
SELECT empSalary from employee where salary = @salary
...instead of:
SELECT empSalary from employee where salary = txtSalary.Text
Why do we always prefer to use parameters and how would I use them?
I wanted to know the use and benefits of the first method. I have even heard of SQL injection but I don't fully understand it. I don't even know if SQL injection is related to my question.
Using parameters helps prevent SQL Injection attacks when the database is used in conjunction with a program interface such as a desktop program or web site.
In your example, a user can directly run SQL code on your database by crafting statements in txtSalary.
For example, if they were to write 0 OR 1=1, the executed SQL would be
SELECT empSalary from employee where salary = 0 or 1=1
whereby all empSalaries would be returned.
Further, a user could perform far worse commands against your database, including deleting it. If they wrote 0; Drop Table employee, the executed SQL would be
SELECT empSalary from employee where salary = 0; Drop Table employee
The table employee would then be deleted.
In your case, it looks like you're using .NET. Using parameters is as easy as:
string sql = "SELECT empSalary from employee where salary = @salary";

using (SqlConnection connection = new SqlConnection(/* connection info */))
using (SqlCommand command = new SqlCommand(sql, connection))
{
    var salaryParam = new SqlParameter("salary", SqlDbType.Money);
    salaryParam.Value = txtMoney.Text;
    command.Parameters.Add(salaryParam);

    connection.Open();
    var results = command.ExecuteReader();
}
Dim sql As String = "SELECT empSalary from employee where salary = @salary"

Using connection As New SqlConnection("connectionString")
    Using command As New SqlCommand(sql, connection)
        Dim salaryParam = New SqlParameter("salary", SqlDbType.Money)
        salaryParam.Value = txtMoney.Text
        command.Parameters.Add(salaryParam)

        connection.Open()
        Dim results = command.ExecuteReader()
    End Using
End Using
Edit 2016-4-25:
As per George Stocker's comment, I changed the sample code to not use AddWithValue. Also, it is generally recommended that you wrap IDisposables in using statements.
You are right, this is related to SQL injection, which is a vulnerability that allows a malicious user to execute arbitrary statements against your database. This old-time favorite XKCD comic (the "Little Bobby Tables" strip) illustrates the concept.
In your example, if you just use:
var query = "SELECT empSalary from employee where salary = " + txtSalary.Text;
// and proceed to execute this query
You are open to SQL injection. For example, say someone enters the following into txtSalary:
1; UPDATE employee SET salary = 9999999 WHERE empID = 10; --
1; DROP TABLE employee; --
// etc.
When you execute this query, it will perform a SELECT and an UPDATE or DROP, or whatever they wanted. The -- at the end simply comments out the rest of your query, which would be useful in the attack if you were concatenating anything after txtSalary.Text.
The correct way is to use parameterized queries, e.g. (C#):
SqlCommand query = new SqlCommand("SELECT empSalary FROM employee WHERE salary = @sal;");
query.Parameters.AddWithValue("@sal", txtSalary.Text);
With that, you can safely execute the query.
For reference on how to avoid SQL injection in several other languages, check bobby-tables.com, a website maintained by a SO user.
In addition to the other answers, I need to add that parameters not only help prevent SQL injection but can also improve query performance. SQL Server caches parameterized query plans and reuses them on repeated executions. If you do not parameterize your query, SQL Server will compile a new plan on each execution (with some exclusions) whenever the query text differs.
More information about query plan caching
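To make the plan-reuse point concrete, here is a small sketch (reusing the hypothetical employee/empSalary names from the examples above). The parameterized text never changes, so every execution can be served from one cached plan; concatenating the literal value instead would produce a new query text, and potentially a new plan, per value.
// Illustration only: one command text, one cached plan, many values.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT empSalary FROM employee WHERE salary = @salary", connection))
{
    var salary = command.Parameters.Add("@salary", SqlDbType.Money);
    connection.Open();

    foreach (decimal value in new[] { 1000m, 2000m, 3000m })
    {
        salary.Value = value;   // only the value changes, never the SQL text
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read()) { /* consume rows */ }
        }
    }
}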
Two years after my first go, I'm recidivating...
Why do we prefer parameters? SQL injection is obviously a big reason, but could it be that we're secretly longing to get back to SQL as a language? SQL in string literals is already a weird cultural practice, but at least you can copy and paste your request into Management Studio. SQL dynamically constructed with host language conditionals and control structures, when SQL has conditionals and control structures, is just level 0 barbarism. You have to run your app in debug, or with a trace, to see what SQL it generates.
Don't stop with just parameters. Go all the way and use QueryFirst (disclaimer: which I wrote). Your SQL lives in a .sql file. You edit it in the fabulous TSQL editor window, with syntax validation and Intellisense for your tables and columns. You can assign test data in the special comments section and click "play" to run your query right there in the window. Creating a parameter is as easy as putting "@myParam" in your SQL. Then, each time you save, QueryFirst generates the C# wrapper for your query. Your parameters pop up, strongly typed, as arguments to the Execute() methods. Your results are returned in an IEnumerable or List of strongly typed POCOs, the types generated from the actual schema returned by your query. If your query doesn't run, your app won't compile. If your db schema changes and your query runs but some columns disappear, the compile error points to the line in your code that tries to access the missing data. And there are numerous other advantages. Why would you want to access data any other way?
In SQL, when a word starts with the @ sign it is a variable. We use a variable to hold a value and reuse it in several places within the same script; it is scoped to that single script, so you can declare variables of the same type and name in many scripts. We use these variables a lot in stored procedures, because stored procedures are pre-compiled queries and we can pass values into them from scripts, desktop applications, and websites. For further information, read about declaring local variables, SQL stored procedures, and SQL injection.
Also read about protecting against SQL injection; it will guide you in how to protect your database.
Hope this helps you to understand; if you have any questions, leave a comment.
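For illustration (a sketch only, with made-up table and variable names): a local T-SQL variable declared with @ is scoped to the batch it is declared in, and you can send such a batch from client code like any other query.
// Sketch with made-up names: @minSalary is a local T-SQL variable scoped to this batch.
const string batch = @"
    DECLARE @minSalary money;
    SET @minSalary = 1000;
    SELECT empSalary FROM employee WHERE salary >= @minSalary;";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(batch, connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read()) { /* consume rows */ }
    }
}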
Old post but wanted to ensure newcomers are aware of Stored procedures.
My 10¢ worth here is that if you are able to write your SQL statement as a stored procedure, that in my view is the optimum approach. I ALWAYS use stored procs and never loop through records in my main code. For example: SQL Table > SQL Stored Procedures > IIS/Dot.NET > Class.
When you use stored procedures, you can restrict the user to EXECUTE permission only, thus reducing security risks.
Your stored procedure is inherently parameterised, and you can specify input and output parameters.
The stored procedure (if it returns data via SELECT statement) can be accessed and read in the exact same way as you would a regular SELECT statement in your code.
It also runs faster as it is compiled on the SQL Server.
Did I also mention you can do multiple steps, e.g. update a table, check values on another DB server, and then, once finally finished, return data to the client, all on the same server with no interaction with the client? This is MUCH faster than coding the same logic in your application.
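For example (a sketch only; the procedure name dbo.GetEmployeeSalary and its parameters are hypothetical), calling a stored procedure with an input and an output parameter looks like this, and the result set is read exactly as with a plain SELECT:
// Sketch: "dbo.GetEmployeeSalary", "@empID" and "@rowCount" are hypothetical names.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.GetEmployeeSalary", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.Add("@empID", SqlDbType.Int).Value = 10;

    var rowCount = command.Parameters.Add("@rowCount", SqlDbType.Int);
    rowCount.Direction = ParameterDirection.Output;

    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read()) { /* read columns just like a plain SELECT */ }
    }

    // Output parameters are populated once the reader has been closed.
    int rows = (int)rowCount.Value;
}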
Other answers cover why parameters are important, but there is a downside! In .net, there are several methods for creating parameters (Add, AddWithValue), but they all require you to worry, needlessly, about the parameter name, and they all reduce the readability of the SQL in the code. Right when you're trying to meditate on the SQL, you need to hunt around above or below to see what value has been used in the parameter.
I humbly claim my little SqlBuilder class is the most elegant way to write parameterized queries. Your code will look like this...
C#
var bldr = new SqlBuilder( myCommand );
bldr.Append("SELECT * FROM CUSTOMERS WHERE ID = ").Value(myId);
//or
bldr.Append("SELECT * FROM CUSTOMERS WHERE NAME LIKE ").FuzzyValue(myName);
myCommand.CommandText = bldr.ToString();
Your code will be shorter and much more readable. You don't even need extra lines, and, when you're reading back, you don't need to hunt around for the value of parameters. The class you need is here...
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;
using System.Data.SqlClient;

public class SqlBuilder
{
    private StringBuilder _rq;
    private SqlCommand _cmd;
    private int _seq;

    public SqlBuilder(SqlCommand cmd)
    {
        _rq = new StringBuilder();
        _cmd = cmd;
        _seq = 0;
    }

    // Append literal SQL text
    public SqlBuilder Append(String str)
    {
        _rq.Append(str);
        return this;
    }

    // Append an auto-named parameter and register its value on the command
    public SqlBuilder Value(Object value)
    {
        string paramName = "@SqlBuilderParam" + _seq++;
        _rq.Append(paramName);
        _cmd.Parameters.AddWithValue(paramName, value);
        return this;
    }

    // Append a parameter wrapped in '%...%' for LIKE searches
    public SqlBuilder FuzzyValue(Object value)
    {
        string paramName = "@SqlBuilderParam" + _seq++;
        _rq.Append("'%' + " + paramName + " + '%'");
        _cmd.Parameters.AddWithValue(paramName, value);
        return this;
    }

    public override string ToString()
    {
        return _rq.ToString();
    }
}
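If it helps, a possible end-to-end usage sketch (the connection string, myName, and the CUSTOMERS table are placeholders from the examples above):
// Usage sketch: build the text, then execute the command as usual.
using (var connection = new SqlConnection(connectionString))
using (var myCommand = new SqlCommand { Connection = connection })
{
    var bldr = new SqlBuilder(myCommand);
    bldr.Append("SELECT * FROM CUSTOMERS WHERE NAME LIKE ").FuzzyValue(myName);
    myCommand.CommandText = bldr.ToString();

    connection.Open();
    using (var reader = myCommand.ExecuteReader())
    {
        while (reader.Read()) { /* consume rows */ }
    }
}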

PDO ODBC, one of several result sets causes php to crash

- Running PHP 5.6 on IIS 8.5 (Windows Server 2012 R2)
- Connecting to SQL Server 2008 R2 remotely via PDO, ODBC
- Issue isolated to one query; other queries function normally with expected results.
I am calling a stored procedure that returns eight result sets. I write each result set to a CSV. When I get to the third result set, php-cgi.exe crashes and I get a 500 error.
I have isolated the issue to this particular result set because if I skip the result set altogether using $stmt->nextRowset() everything works as expected.
//execute sp, bind parameters year and period-defined above
$StmtText = "{CALL PROCESS_PR (?, ?) }";
$Stmt = $dbh->prepare($StmtText);
$Stmt->bindParam(1, $PayYear, PDO::PARAM_INT);
$Stmt->bindParam(2, $PayPeriod, PDO::PARAM_INT);
$Stmt->execute();
$file_out = fopen('c:\windows\temp\tmp_1.csv', 'w');
$Result = $Stmt->fetchAll(PDO::FETCH_NUM);
foreach($Result as $row) { fputcsv($file_out,$row); }
fclose($file_out);
$Stmt->nextRowset();
//this happens 8 times, fails on the third
I am not throwing any PHP errors, and advanced IIS logging doesn't suggest a whole lot. I am struggling to determine what is crashing PHP. I executed the stored procedure directly via SSMS and it completes successfully, and looking at the problematic result set, I can't see anything out of the ordinary: no special characters, no long strings, etc.
I have also been down the road of confirming that PHP memory limits are set appropriately, checking timeouts both in FastCGI and in php.ini, and verifying that the MSVC++ 2012 runtime is installed, both 32-bit and 64-bit.
Looking for any thoughts on how to track down PHP crashing on this one particular result set. Thanks much.
UPDATE: Laughing Vergil's answer below solved the issue. The problematic result set had two fields with datatype varchar(max). Changing one of them to varchar(255) solved the problem.

ADO.NET and ExecuteNonQuery: how to use DDL

I execute SQL scripts to change the database schema. It looks something like this:
using (var command = connection.CreateCommand())
{
    command.CommandText = script;
    command.ExecuteNonQuery();
}
Additionally, the commands are executed within a transaction.
The script looks like this:
Alter Table [TableName]
ADD [NewColumn] bigint NULL
Update [TableName]
SET [NewColumn] = (SELECT somevalue FROM anothertable)
I get an error because NewColumn does not exist. It seems the whole script is parsed and validated before it is executed.
When I execute the whole thing in Management Studio, I can put GO between the statements and then it works. When I put GO into the script, ADO.NET complains ("Incorrect syntax near 'GO'").
I could split the script into separate scripts and execute them in separate commands, but this would be hard to handle. I could split it on every GO, parsing the script myself. I just think that there should be a better solution and that I didn't understand something. How should scripts like this be executed?
My implementation, if anyone is interested, based on John Saunders' answer:
List<string> lines = new List<string>();
while (!textStreamReader.EndOfStream)
{
    string line = textStreamReader.ReadLine();
    bool isSeparator = line.Trim().ToLower() == "go";

    if (!isSeparator)
    {
        lines.Add(line);
    }

    // execute the collected batch on a GO line or at the end of the script
    if (isSeparator || textStreamReader.EndOfStream)
    {
        if (lines.Count > 0)
        {
            ExecuteCommand(string.Join(Environment.NewLine, lines.ToArray()));
        }
        lines.Clear();
    }
}
Not using one of the umpteen ORM libraries to do it? Good :-)
To be completely safe when running scripts that make structural changes, use SMO rather than SqlClient, and make sure MARS is not turned on via the connection string (SMO will normally complain if it is anyway). Look for the ServerConnection class and its ExecuteNonQuery - a different DLL of course :-)
The difference is that the SMO DLL passes the script as-is to SQL Server, so it's the genuine equivalent of running it in SSMS or via the isql command line. Slicing on GOs ends up growing into ever more elaborate scanning every time you encounter another glitch (GO can be in the middle of a multi-line comment, there can be multiple USE statements, a script can drop the very DB that SqlClient connected to - oops :-). I just killed one such thing in the codebase I inherited (after more complex scripts conflicted with MARS; MARS is good for production code but not for admin stuff).
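If you go the SMO route, the call looks roughly like this (a sketch; it assumes the SMO assemblies, e.g. Microsoft.SqlServer.ConnectionInfo, are referenced, and that ServerConnection.ExecuteNonQuery accepts the whole script, GO separators and all):
// Sketch: requires the SMO assemblies (e.g. Microsoft.SqlServer.ConnectionInfo).
using System.Data.SqlClient;
using System.IO;
using Microsoft.SqlServer.Management.Common;

// ...

string script = File.ReadAllText("script.sql");

using (var sqlConnection = new SqlConnection(connectionString))
{
    var serverConnection = new ServerConnection(sqlConnection);
    // Unlike SqlCommand, this accepts a full script including GO batch separators.
    serverConnection.ExecuteNonQuery(script);
}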
You have to run each batch separately. In particular, to run a script that may contain multiple batches ("GO" keywords), you have to split the script on the "GO" keywords.
Not Tested:
string script = File.ReadAllText("script.sql");
string[] batches = script.Split(new [] {"GO"+Environment.NewLine}, StringSplitOptions.None);
foreach (string batch in batches)
{
// run ExecuteNonQuery on the batch
}
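Filling in the body (still untested; it assumes GO always sits on its own line and reuses the connection and transaction from the question): skip empty batches and run each one with ExecuteNonQuery.
// Untested sketch completing the loop above; assumes GO appears on its own line.
string script = File.ReadAllText("script.sql");
string[] batches = script.Split(
    new[] { "GO" + Environment.NewLine },
    StringSplitOptions.RemoveEmptyEntries);

foreach (string batch in batches)
{
    if (string.IsNullOrWhiteSpace(batch)) continue;   // nothing to run

    using (var command = connection.CreateCommand())
    {
        command.Transaction = transaction;   // if the batches run inside one
        command.CommandText = batch;
        command.ExecuteNonQuery();
    }
}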

LINQ to SQL: errors from stored procedure are ignored after result set is returned

I'm using LINQ to SQL to call a stored procedure. This stored procedure currently returns a resultset and then has some raiserror statements being triggered after the resultset is retrieved (I'm writing tests for stored procedures, in case you're wondering why I'm doing this).
When LINQ to SQL calls the proc and it gets a resultset back, it seems to ignore all of the errors that I'm throwing because it got its resultset. Is there a way to make it always throw a SqlException when I do a raiserror from SQL?
Interesting; that is a problem I have seen before when using an IDataReader, which is why I now religiously consume all the tables (even if I am only expecting one) - for example, if I am only expecting one table, something like:
while (reader.Read())
{ // read data from first table
}
// read to end of stream
while (reader.NextResult()) { }
The problem is that the error goes into the TDS stream at the point you raise it; so if you raise it after the SELECT, it follows the table in the TDS stream - and if the reader doesn't read to the end of the stream, it might never see it.
I'll be honest - my preferred answer to this is: raise all errors before data. This might mean doing the main SELECT into a temp table (#table) or table variable (@table) first. Beyond that - if it is critical to catch this error (and if the inbuilt LINQ-to-SQL code isn't helping), then perhaps fall back to ExecuteReader and something like the above.
I suspect (but I haven't checked) that you could also use DataContext.Translate<T> to do some of the ORM heavy-lifting; for example:
// cmd is our DbCommand, and ctx is our DataContext
using(var reader = cmd.ExecuteReader()) {
var result = ctx.Translate<MyResultType>(reader).ToList();
while(reader.NextResult()) {}
return result;
}
Make sure that your Severity Level is greater than 10 when you call RAISERROR as per:
http://support.microsoft.com/default.aspx/kb/321903
RAISERROR('Stored Procedure Execution Failed',15,1)

Timeout not being honoured in connection string

I have a long running SQL statement that I want to run, and no matter what I put in the "timeout=" clause of my connection string, it always seems to end after 30 seconds.
I'm just using SqlHelper.ExecuteNonQuery() to execute it, and letting it take care of opening connections, etc.
Is there something else that could be overriding my timeout, or causing SQL Server to ignore it? I have run Profiler over the query, and the trace doesn't look any different when I run it in Management Studio versus in my code.
Management Studio completes the query in roughly a minute, but even with a timeout set to 300, or 30000, my code still times out after 30 seconds.
What are you using to set the timeout in your connection string? From memory that's "ConnectionTimeout" and only affects the time it takes to actually connect to the server.
Each individual command has a separate "CommandTimeout" which would be what you're looking for. Not sure how SqlHelper implements that though.
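To make the distinction concrete, a sketch (the connection string and longRunningSql are placeholders): the Connect Timeout keyword in the connection string only limits how long opening the connection may take, while CommandTimeout on the command limits how long the statement may run, and it defaults to 30 seconds, which matches the behaviour described.
// "Connect Timeout" only governs opening the connection, not running the query.
var connectionString =
    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=True;Connect Timeout=15";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(longRunningSql, connection))
{
    // CommandTimeout governs statement execution; the default is 30 seconds,
    // which is why the query keeps dying at the 30-second mark.
    command.CommandTimeout = 300;   // seconds; 0 means no limit
    connection.Open();
    command.ExecuteNonQuery();
}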
In addition to timeout in connection string, try using the timeout property of the SQL command. Below is a C# sample, using the SqlCommand class. Its equivalent should be applicable to what you are using.
SqlCommand command = new SqlCommand(sqlQuery, _Database.Connection);
command.CommandTimeout = 0;
int rows = command.ExecuteNonQuery();
