SQL Injection in Code/Static SQL (T-SQL) - sql-server

Are parameterized static/code SQL statements subject to SQL injection attacks?
For example, let's say I have the following simplified stored procedure:
Does the fact that I am passing the input @Pseries_desc mean I am subject to injection attacks if it is parameterized?
Previously, this was a dynamic SQL statement and the code was executed using exec as opposed to sp_executesql, so it definitely was open to attacks.
CREATE procedure get_product_by_title
    @PSearchType varchar(20) = NULL
    , @Pseries_desc varchar(40) = NULL
as
begin
    declare
        @whereLikeBeg varchar(1)
        , @whereLikeEnd varchar(1)
    set @whereLikeBeg = ''
    set @whereLikeEnd = ''
    if @PSearchType = 'contains'
    begin
        set @whereLikeBeg = '%'
        set @whereLikeEnd = '%'
    end
    if @PSearchType = 'starts_with'
    begin
        set @whereLikeEnd = '%'
    end
    select distinct
        B.parent_product_id
    from
        tableA B
    where
        B.parent_product_id = B.child_product_id
        and B.product_title like @whereLikeBeg + @Pseries_desc + @whereLikeEnd
end

This code looks safe to me...
A parameterized query is not the only way to protect yourself from SQL injection attacks, but it's probably the simplest and safest way to do it.
And even if you set SQL injection aside, building queries dynamically is not good practice, especially when you are working with strings, because they might contain SQL reserved words or characters that will have an impact on your query.

If you are using parameterized queries in the data-access code, you don't need to worry. Checking for injection inside the stored procedure is the wrong place to do it.
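For illustration, here is a minimal sketch of the difference the question describes (the products table and the @p parameter are hypothetical, not from the question): with exec the input becomes part of the SQL text itself, while with sp_executesql it travels as a genuine parameter.

    declare @Pseries_desc varchar(40) = 'widget'   -- example input

    -- Unsafe: the input is spliced into the SQL text
    declare @unsafe nvarchar(max) =
        N'select * from products where product_title like ''%' + @Pseries_desc + '%'''
    exec (@unsafe)

    -- Safe: the SQL text is fixed and the value is passed as a parameter
    declare @safe nvarchar(max) =
        N'select * from products where product_title like ''%'' + @p + ''%'''
    exec sp_executesql @safe, N'@p varchar(40)', @p = @Pseries_desc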

Related

SQL Where clause: Coalesce vs ISNULL vs Dynamic

I have a question about the best approach when creating a WHERE clause in a SQL procedure.
I have written a query three different ways: one using COALESCE in the WHERE clause, one using an IS NULL OR pattern, and one built dynamically and executed with sp_executesql.
Coalesce:
WHERE ClientID = COALESCE(@Client, ClientID) AND
AccessPersonID = COALESCE(@AccessPerson, AccessPersonID)
IsNull Or:
WHERE (@Client IS NULL OR @Client = ClientID)
AND (@AccessPerson IS NULL OR @AccessPerson = AccessPersonID)
and dynamically:
SET @sql = @sql + Char(13) + Char(10) + N'WHERE 1 = 1';
IF @Client <> 0
BEGIN
    SET @sql = @sql + Char(13) + Char(10) + N' AND ClientID = @Client '
END
IF @AccessPerson <> 0
BEGIN
    SET @sql = @sql + Char(13) + Char(10) + N' AND AccessPersonID = @AccessPerson '
END
When I use SQL Sentry Plan Explorer, the results show that the COALESCE version has the best estimated cost but is the least accurate between estimated and actual, whereas the dynamic version has the worst estimate but is 100% accurate to the actual.
This is a very simple procedure; I am just trying to figure out the best way to write procedures like this. I would think the dynamic version is the way to go since it is the most accurate.
The correct answer is the 'dynamic' option. It's good you left parameters in because it protects against SQL Injection (at this layer anyway).
The reason 'dynamic' is the best is that it will create a query plan that is best for the given query. With your example you might get up to four plans for this query (one per combination of parameters that are > 0), but each plan generated will be optimized for that scenario (they will leave out unnecessary parameter comparisons).
The other two styles will generate one plan (each), and it will only be optimized for the parameters you used AT THAT TIME ONLY. Each subsequent execution will reuse that old plan, which may have been cached for a parameter combination different from the one you are calling with now.
'Dynamic' is not as clean-code as the other two options, but for performance, it will give you the optimal query plan each time.
And the dynamic SQL operates in a different scope than your sproc will, so even though you declare a variable in your sproc, you'll have to redeclare it in your dynamic SQL (or concatenate its value into the statement). But then you should also do NULL checks in your dynamic SQL AND in your sproc, because NULL is neither equal to 0 nor not equal to 0; any comparison with NULL evaluates to UNKNOWN. :-S
DECLARE @Client int = 1
      , @AccessPerson int = NULL;

DECLARE @sql nvarchar(2000) = N'SELECT * FROM ##TestClientID WHERE 1=1';

IF @Client <> 0
BEGIN
    SET @sql = CONCAT(@sql, N' AND ClientID = ', CONVERT(nvarchar(10), @Client))
END;

IF @AccessPerson <> 0
BEGIN
    SET @sql = CONCAT(@sql, N' AND AccessPersonID = ', CONVERT(nvarchar(10), @AccessPerson))
END;

PRINT @sql
EXEC sp_executesql @sql
Note: For demo purposes, I also had to modify the temp table (the #TestClientID script below) and make it a global temp instead of a local temp, since I'm calling it from dynamic SQL. It exists in a different scope. Don't forget to clean it up after you're done. :-)
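If you would rather keep the values out of the SQL text entirely (matching the @Client/@AccessPerson placeholders in the first snippet of this answer), the same dynamic WHERE clause can be combined with sp_executesql parameters. A minimal sketch against the same global temp table:

    DECLARE @Client int = 1
          , @AccessPerson int = NULL;

    DECLARE @sql nvarchar(2000) = N'SELECT * FROM ##TestClientID WHERE 1 = 1';

    IF @Client <> 0
        SET @sql = @sql + N' AND ClientID = @Client';

    IF @AccessPerson <> 0
        SET @sql = @sql + N' AND AccessPersonID = @AccessPerson';

    -- unused parameters in the list are allowed, so both can always be passed
    EXEC sp_executesql @sql
       , N'@Client int, @AccessPerson int'
       , @Client = @Client
       , @AccessPerson = @AccessPerson;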
Your top two statements don't do quite the same things if either value is NULL.
http://sqlfiddle.com/#!9/d0aa3/4
IF OBJECT_ID (N'tempdb..#TestClientID', N'U') IS NOT NULL
DROP TABLE #TestClientID;
GO
CREATE TABLE #TestClientID ( ClientID int , AccessPersonID int )
INSERT INTO #TestClientID (ClientID, AccessPersonID)
SELECT 1,1 UNION ALL
SELECT NULL,1 UNION ALL
SELECT 1,NULL UNION ALL
SELECT 0,0
DECLARE @ClientID int = NULL
DECLARE @AccessPersonID int = 1

SELECT * FROM #TestClientID
WHERE ClientID = COALESCE(@ClientID, ClientID)
  AND AccessPersonID = COALESCE(@AccessPersonID, AccessPersonID)

SELECT * FROM #TestClientID
WHERE (@ClientID IS NULL OR @ClientID = ClientID)
  AND (@AccessPersonID IS NULL OR @AccessPersonID = AccessPersonID)
That said, if you're looking to eliminate a NULL input value, then use COALESCE(). NULLs can get weird when doing comparisons. COALESCE(a, b) is more akin to MS SQL's ISNULL(a, b): in other words, if a IS NULL, use b.
And again, it really all depends on what you're ultimately trying to do. sp_ExecuteSQL is MS-centric, so if you don't plan to port this to any other database, you can use that. But honestly, in 15 years I've probably ported an application from one db to another fewer than a dozen times. It's more important if you're writing an application that will be used by other people who will install it on different systems, but if it's an enclosed system, the benefits of the database you're using usually outweigh the lack of portability.
I probably should have included one more section of the query.
For the ISNULL and COALESCE versions I am converting a value of 0 to NULL, whereas in the dynamic version I am leaving the value as 0 for the IF clause; that is why they look a bit different.
From what I have been seeing, COALESCE seems to be consistently the worst performer.
Surprisingly, from what I have tested, the ISNULL and dynamic versions are very similar, with the ISNULL version being slightly better in most cases.
Testing has also revealed indexes that needed to be added, and in most cases those indexes improved the queries the most, but even after they were added the ISNULL and dynamic versions still perform better than COALESCE.
Also, I cannot see us switching from MSSQL in the near or distant future.

How to create select/update/delete statements using a stored procedure that is safe from SQL injection

I always prefer to use stored procedures for most SQL commands during development.
One example for a SELECT statement: I use this stored procedure
ALTER proc [dbo].[sp_select] (@tbl varchar(200), @col varchar(max), @cond varchar(max))
as
declare @query varchar(max)
if (@cond is not null)
begin
    set @query = 'select ' + @col + ' from ' + @tbl + ' where ' + @cond
end
else
begin
    set @query = 'select ' + @col + ' from ' + @tbl
end
exec(@query)
GO
I am a little concerned about SQL injection attacks. Is this approach safe from such attacks or not?
Any suggestion would be appreciated...
Your stored procedure is completely pointless and only makes it harder to write safe code.
SQL injection is not magic; it's simply input strings with quotes.
Stored procedures do not magically defend against it; they simply encourage you to pass user input as parameters (which you aren't doing).
The correct way to protect against SQL (and other forms of) injection is to change your application code to never concatenate arbitrary text (especially user input) into a structured language (such as SQL, HTML, or JSON).
Instead, use parameters, a JSON serializer, or a proper HTML escaper, as appropriate.
No. It is very vulnerable to SQL injection. For example, suppose someone does
exec dbo.sp_select '@Dummy', '(Select Null) As x; Update Employee Set Salary = 1000000 Where EmployeeName = ''me''; Declare @Dummy Table (i int); Select Null ', null
then the query that you build and execute is
select (Select Null) As x; Update Employee Set Salary = 1000000 Where EmployeeName = 'me'; Declare @Dummy Table (i int); Select Null from @Dummy
and your stored procedure which is supposed to only do selects has just updated my salary to 1,000,000.
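If a generic select helper really is required, a somewhat safer sketch (the procedure name is hypothetical) quotes each identifier with quotename() and drops the free-text @cond entirely, since an arbitrary condition string cannot be made safe by quoting:

    CREATE proc dbo.sp_select_safe   -- hypothetical name, not a drop-in replacement
        @tbl sysname
      , @col sysname
    as
    begin
        declare @query nvarchar(max)
        -- quotename() wraps each identifier in [] so it cannot break out into extra statements
        set @query = N'select ' + quotename(@col) + N' from dbo.' + quotename(@tbl)
        exec sp_executesql @query
    end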

Alternate synonym in SQL Server in one transaction

I am new to Transact SQL programming.
I have created a stored procedure that would drop and create an existing synonym so that it will point to another table. The stored procedure takes in 2 parameters:
synonymName - an existing synonym
nextTable - the table to be pointed at
This is the code snippet:
...
BEGIN TRAN SwitchTran

SET @SqlCommand = 'drop synonym ' + @synonymName
EXEC sp_executesql @SqlCommand

SET @SqlCommand = 'create synonym ' + @synonymName + ' for ' + @nextTable
EXEC sp_executesql @SqlCommand

COMMIT TRAN SwitchTran
...
We have an application that regularly writes data through the synonym.
My question is: would I run into a race condition where the synonym is dropped while the application tries to write to it?
If the above is a problem, could someone suggest a solution?
Thanks
Yes, you'd have a race condition.
One way to manage this is to call sp_getapplock after BEGIN TRAN, with the lock owner set to Transaction, and trap/handle the return status as required. This will literally serialise callers (in the execution sense, not the isolation sense) so only one SPID executes at any one time.
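A minimal sketch of that pattern, assuming an arbitrary lock resource name (any other code that must be serialised with the synonym switch would need to take the same lock):

    DECLARE @rc int;

    BEGIN TRAN SwitchTran;

    EXEC @rc = sp_getapplock
          @Resource    = 'SynonymSwitch'    -- arbitrary agreed-upon lock name
        , @LockMode    = 'Exclusive'
        , @LockOwner   = 'Transaction'      -- released automatically at COMMIT/ROLLBACK
        , @LockTimeout = 10000;             -- wait up to 10 seconds

    IF @rc >= 0
    BEGIN
        -- drop and recreate the synonym here (the dynamic SQL from the question)
        COMMIT TRAN SwitchTran;
    END
    ELSE
    BEGIN
        ROLLBACK TRAN SwitchTran;
    END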
I'm fairly certain you will indeed get race conditions. Synonym Names are intended to be used for shortening the name of an object and aren't supposed to change any more often than other objects. I'm guessing by your description that you are using it for code reuse. You are probably better off using Dynamic SQL instead, which incidentally you already are.
For more information on dynamic SQL you might want to take a look at this article by Erland Sommarskog that OMG Poinies references in a lot of his answers, particularly the section on Dealing with Dynamic Table and Column Names, which I've quoted below:
Dealing with Dynamic Table and Column Names

Passing table and column names as parameters to a procedure with dynamic SQL is rarely a good idea for application code. (It can make perfect sense for admin tasks.) As I've said, you cannot pass a table or a column name as a parameter to sp_executesql; you must interpolate it into the SQL string. Still, you should protect it against SQL injection, as a matter of routine. It could well come from user input.

To this end, you should use the built-in function quotename() (added in SQL 7). quotename() takes two parameters: the first is a string, and the second is a pair of delimiters to wrap the string in. The default for the second parameter is []. Thus, quotename('Orders') returns [Orders]. quotename() takes care of nested delimiters, so if you have a really crazy table name like Left]Bracket, quotename() will return [Left]]Bracket].

Note that when you work with names with several components, each component should be quoted separately. quotename('dbo.Orders') returns [dbo.Orders], but that is a table in an unknown schema of which the first four characters are d, b, o and a dot. As long as you only work with the dbo schema, best practice is to add dbo in the dynamic SQL and only pass the table name. If you work with different schemas, pass the schema as a separate parameter. (Although you could use the built-in function parsename() to split up a @tblname parameter in parts.)

While general_select still is a poor idea as a stored procedure, here is nevertheless a version that summarises some good coding virtues for dynamic SQL:
CREATE PROCEDURE general_select @tblname nvarchar(128),
                                @key     varchar(10),
                                @debug   bit = 0 AS

DECLARE @sql nvarchar(4000)

SET @sql = 'SELECT col1, col2, col3
            FROM dbo.' + quotename(@tblname) + '
            WHERE keycol = @key'

IF @debug = 1
   PRINT @sql

EXEC sp_executesql @sql, N'@key varchar(10)', @key = @key
- I'm using sp_executesql rather than EXEC().
- I'm prefixing the table name with dbo.
- I'm wrapping @tblname in quotename().
- There is a @debug parameter.
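For completeness, calling the procedure quoted above might look like this (the Orders table and key value are just placeholders):

    EXEC general_select @tblname = N'Orders', @key = '12345', @debug = 1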

Use of '' + in SQL Server 2005 Stored Procedure to build SQL string

I'm building a stored procedure which is rather stretching my experience. With the help of people who responded to this thread (Nested if statements in SQL Server stored procedure SELECT statement) I think I'm most of the way there :)
In short, the SP takes a series of parameterised inputs to dynamically build an SQL statement that creates a temporary table of id values ordered in a specific way. The remainder of the SP, which returns the data for the requested page from the id values in this temporary table, is all sorted.
Reconsider the use of dynamic SQL - you should really know what you are doing if you go that route.
What is the problem you are trying to solve? I am sure people here will be able to find a better solution than the dynamic SQL you are proposing to use.
Take a look at CONVERT() and CAST() for the integers.
To concatenate integer values into the dynamic SQL statement you need to convert them to a varchar, e.g.:
...WHERE
OT.site_id = ' + CAST(@siteid AS VARCHAR)
If the SQL statement is always going to be less than 4000 chars, I'd at least consider using sp_executesql to use parameterised SQL.
e.g.
DECLARE @SQL NVARCHAR(4000)
DECLARE @siteid INTEGER

SET @siteid = 1
SET @SQL = 'SELECT * FROM MyTable WHERE site_id = @siteid'

EXECUTE sp_executesql @SQL, N'@siteid INTEGER', @siteid
All in all, what you're doing is not likely to be very performant/scalable/maintainable and you don't really gain much from having it as a sproc. Plus you need to be very careful to validate the input, as you could open yourself up to SQL injection (hence my point about using sp_executesql with parameterised SQL).
You need to cast the int param to be a char/varchar so that you can add it to the existing string. The fact that you aren't surrounding it with quotes in the final sql means it will be interpreted as a number.

Coding stored procedure for search screen with multiple, optional criteria

I've got a search screen on which the user can specify any combination of first name, last name, semester, or course. I'm not sure how to optimally code the SQL Server 2005 stored procedure to handle these potentially optional parameters. What's the most efficient way? Separate procedures for each combination? Taking the items in as nullable parms and building dynamic SQL?
I'd set each parameter to optional (default value being null)
and then handle it in the WHERE....
FirstName = ISNULL(@FirstName, FirstName)
AND
LastName = ISNULL(@LastName, LastName)
AND
SemesterID = ISNULL(@SemesterID, SemesterID)
That'll handle only first name, only last name, all three, etc., etc.
It's also a lot more pretty/manageable/robust than building the SQL string dynamically and executing that.
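Put together, the whole procedure might look something like this sketch (the procedure, table, and column names are illustrative, not from the question):

    CREATE PROCEDURE dbo.SearchEnrollments      -- hypothetical name
        @FirstName  varchar(50) = NULL
      , @LastName   varchar(50) = NULL
      , @SemesterID int         = NULL
      , @CourseID   int         = NULL
    AS
    BEGIN
        SELECT *
        FROM dbo.Enrollments                    -- hypothetical table
        WHERE FirstName  = ISNULL(@FirstName,  FirstName)
          AND LastName   = ISNULL(@LastName,   LastName)
          AND SemesterID = ISNULL(@SemesterID, SemesterID)
          AND CourseID   = ISNULL(@CourseID,   CourseID)
    END

As the COALESCE/ISNULL discussion above notes, a row whose column value is NULL will never match this pattern, so it only works cleanly on non-nullable columns.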
The best solution is to utilize sp_executesql. For example:
--BEGIN SQL
declare @sql nvarchar(4000)

set @sql =
    'select * from weblogs.dbo.vwlogs
     where Log_time between @BeginDate and @EndDate'
    + case when @UserName is null then '' else ' and client_user = @UserName' end

exec sp_executesql
    @sql
  , N'@BeginDate datetime, @EndDate datetime, @UserName varchar(50)'
  , @BeginDate = @BeginDate
  , @EndDate   = @EndDate
  , @UserName  = @UserName
--END SQL
As muerte mentioned, this will have a performance benefit versus exec()'ing a similar statement.
I would do it with sp_executesql because the plan will be cached just for the first pattern, or the first set of conditions.
Take a look at this TechNet article:
sp_executesql can be used instead of stored procedures to execute a Transact-SQL statement many times when the change in parameter values to the statement is the only variation. Because the Transact-SQL statement itself remains constant and only the parameter values change, the SQL Server query optimizer is likely to reuse the execution plan it generates for the first execution.
Was just posting the same concept as Kevin Fairchild, that is how we typically handle it.
You could do dynamic sql in your code to create the statement as required but if so you need to watch for sql injection.
As muerte points out, the plan will be cached for the first set of parameters. This can lead to bad performance when it's run each subsequent time using alternate parameters. To resolve that, use the WITH RECOMPILE option on the procedure.
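A minimal sketch of that option on the procedure header (the names mirror the hypothetical sketch earlier; OPTION (RECOMPILE) on the individual statement is a more granular alternative):

    CREATE PROCEDURE dbo.SearchEnrollments_Recompile   -- hypothetical name
        @FirstName varchar(50) = NULL
      , @LastName  varchar(50) = NULL
    WITH RECOMPILE                                     -- recompile the plan on every execution
    AS
    BEGIN
        SELECT *
        FROM dbo.Enrollments                           -- hypothetical table
        WHERE FirstName = ISNULL(@FirstName, FirstName)
          AND LastName  = ISNULL(@LastName,  LastName)
    END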
