I have this statement:
SELECT *
FROM DemoTable
WHERE i = 7
UNION
SELECT *
FROM DemoTable
WHERE i = 6
What I need is to put a time delay between the two halves of the UNION; I need to check something related to snapshot isolation.
Can I put a time delay between the two SELECTs of the UNION?
For example:
SELECT *
FROM DemoTable
WHERE i = 7
should run first, and then after 10 seconds:
SELECT *
FROM DemoTable
WHERE i = 6
But I need both in one statement, i.e. with UNION only.
You cannot put a delay in the middle of a UNION. A SELECT/UNION is part of SQL DML; a delay is a programming construct and is not related to data manipulation in any way.
Also, isolation and concurrency would not be affected by separate reads on the same table. Reads generally take a shared lock. You'll need to start a transaction that modifies the data in some way for there to be any impact on concurrency. You'd be better off using multiple query windows or connections with explicit transactions.
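For the snapshot isolation check specifically, a two-window test along these lines is a reasonable sketch (assuming the DemoTable from the question, a database with ALLOW_SNAPSHOT_ISOLATION already turned on, and an illustrative update value of 60):
-- Window 1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM DemoTable WHERE i = 7; -- the snapshot is established at the first read
WAITFOR DELAY '00:00:10';            -- change the data from window 2 during this pause
SELECT * FROM DemoTable WHERE i = 6; -- still sees the data as of the first read
COMMIT TRANSACTION;

-- Window 2 (run during the 10-second delay)
UPDATE DemoTable SET i = 60 WHERE i = 6; -- commits immediately, but stays invisible to window 1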
You can use WAITFOR inside a stored procedure, but not in the middle of a UNION.
CREATE PROCEDURE dbo.testproc
AS
BEGIN
    SELECT CURRENT_TIMESTAMP;
    WAITFOR DELAY '00:00:02';
    SELECT CURRENT_TIMESTAMP;
END
GO
EXEC dbo.testproc;
Try inserting WAITFOR DELAY '00:02'; between the statements.
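If the requirement is really a single combined result set with a pause between the two reads, a hedged workaround is to stage the first read in a table variable, wait, then UNION it with the second read (assuming DemoTable has an int column i; add the remaining columns to match your schema):
DECLARE @FirstRead TABLE (i int);

INSERT INTO @FirstRead (i)
SELECT i FROM DemoTable WHERE i = 7; -- first read happens now

WAITFOR DELAY '00:00:10';            -- ten-second pause

SELECT i FROM @FirstRead             -- combined output, as the UNION would produce
UNION
SELECT i FROM DemoTable WHERE i = 6; -- second read happens after the delay
Note this is several statements in one batch, not the single UNION statement asked for, which (per the answers above) is not possible.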
I need a bit of help with this SQL injection issue:
The following is a version of a parameterised stored procedure. Excluding how it is called from an application, is there any way to prevent @v_string from being treated as dynamic SQL?
I think this is fairly watertight - there's no EXECUTE or concatenated SQL - but inserting a semicolon still allows additional data to be returned.
I know there are multiple levels on which to consider this question, but I want to know if there is some simple solution I am missing here, as the majority of injection fixes involve dynamic queries.
create table dbo.Employee (EmpID int, EmpName varchar(60))

declare
    @v_id int,
    @v_string varchar(60)
begin
    set @v_string = 'test'''; waitfor delay '0:0:5' --
    if @v_id is null
    begin
        set @v_id = (select EmpID
                     from dbo.Employee
                     where EmpName = @v_string);
    end
    print @v_id
end
is there any way to prevent @v_string from being treated as dynamic SQL?
I would not expect @v_string to be treated as dynamic SQL here, since the T-SQL code has no EXECUTE or EXECUTE sp_executesql. The value will not be executed; it is treated as a WHERE clause value, which is not vulnerable to SQL injection.
If this doesn't answer your question, post a full example that demonstrates the value being treated as dynamic SQL.
You're being confused by your own testing. The line:
set @v_string='test'''; waitfor delay '0:0:5' --
is creating a string @v_string with the value test', and then executing waitfor delay '0:0:5'. Then your actual Employee query is run.
So if you run your query as is, with your additional example:
set @v_string='test'''; select * from sys.databases
...what will happen is that line of code will set @v_string to be test', then immediately execute select * from sys.databases. Then the rest of your code will run, executing your actual select. So you'll see the result of select * from sys.databases, followed by the result of your Employee query, but only because you actually hard-coded the statement select * from sys.databases into your procedure without realising it :)
If you want the string @v_string to be set to test'; waitfor delay '0:0:5' then you've got the string quoting wrong. It should be:
set @v_string='test''; waitfor delay ''0:0:5'''
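To convince yourself, print the variable: the string is plain data unless something executes it (a minimal check, reusing the dbo.Employee table from the question):
DECLARE @v_string varchar(60);
SET @v_string = 'test''; waitfor delay ''0:0:5''';
PRINT @v_string; -- prints: test'; waitfor delay '0:0:5'
-- Used purely as a comparison value, the string is never executed:
-- SELECT EmpID FROM dbo.Employee WHERE EmpName = @v_string;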
I've got a long-running stored procedure on a SQL Server database. I don't want it to run more often than once every ten minutes.
Once the stored procedure has run, I want to store the latest result in a LatestResult table, against a time, and have all calls to the procedure return that result for the next ten minutes.
That much is relatively simple, but we've found that, because the procedure checks the LatestResult table and updates it, large userbases are getting a number of deadlocks when two users call the procedure at the same time.
In a client-side/threading situation, I would solve this with a lock: the first user locks the function, the second user hits the lock and waits, the first user finishes their procedure call, updates the LatestResult table, and unlocks the second user, who then picks up the result from the LatestResult table.
Is there any way to accomplish this kind of locking in SQL Server?
EDIT:
This is basically how the code looks without its error checking calls:
DECLARE @LastChecked AS DATETIME
DECLARE @LastResult AS NUMERIC(18,2)

SELECT TOP 1 @LastChecked = LastRunTime, @LastResult = LastResult FROM LastResult

DECLARE @ReturnValue AS NUMERIC(18,2)

IF DATEDIFF(n, @LastChecked, GETDATE()) >= 10 OR NOT @LastResult = 0
BEGIN
    SELECT @ReturnValue = ABS(ISNULL(SUM(ISNULL(Amount,0)),0)) FROM Transactions WHERE ISNULL(DeletedFlag,0) = 0 GROUP BY GroupID ORDER BY ABS(ISNULL(SUM(ISNULL(Amount,0)),0))
    UPDATE LastResult SET LastRunTime = GETDATE(), LastResult = @ReturnValue
    SELECT @ReturnValue
END
ELSE
BEGIN
    SELECT @LastResult
END
I'm not really sure what's going on with the grouping, but I've found a test system where execution time comes in around 4 seconds.
I think there's some work scheduled to archive some of these records and boil them down to running totals, which will probably help, given that there are several million rows behind that four-second query...
This is a valid opportunity to use an Application Lock (see sp_getapplock and sp_releaseapplock), as it is a lock taken out on a concept that you define, not on any particular rows in any given table. The idea is that you create a transaction, then create this arbitrary lock that has an identifier, and other processes will wait to enter that piece of code until the lock is released. This works just like lock() at the app layer. The @Resource parameter is the label of the arbitrary "concept". In more complex situations, you can even concatenate a CustomerID or something in there for more granular locking control.
DECLARE @LastChecked DATETIME,
        @LastResult NUMERIC(18,2);
DECLARE @ReturnValue NUMERIC(18,2);

BEGIN TRANSACTION;

EXEC sp_getapplock @Resource = 'check_timing', @LockMode = 'Exclusive';

SELECT TOP 1 -- not sure if this helps the optimizer on a 1 row table, but seems ok
       @LastChecked = LastRunTime,
       @LastResult = LastResult
FROM LastResult;

IF (DATEDIFF(MINUTE, @LastChecked, GETDATE()) >= 10 OR @LastResult <> 0)
BEGIN
    SELECT @ReturnValue = ABS(ISNULL(SUM(ISNULL(Amount, 0)), 0))
    FROM Transactions
    WHERE DeletedFlag = 0
    OR    DeletedFlag IS NULL;

    UPDATE LastResult
    SET LastRunTime = GETDATE(),
        LastResult = @ReturnValue;
END
ELSE
BEGIN
    SET @ReturnValue = @LastResult; -- This is always 0 here
END;

SELECT @ReturnValue AS [ReturnValue];

EXEC sp_releaseapplock @Resource = 'check_timing';
COMMIT TRANSACTION;
You need to manage errors / ROLLBACK yourself (as stated in the linked MSDN documentation), so put in the usual TRY / CATCH (a sketch follows below). But this does allow you to manage the situation.
If there are any concerns regarding contention on this process, there shouldn't be much as the lookup done right after locking the resource is a SELECT from a single-row table and then an IF statement that (ideally) just returns the last known value if the 10-minute timer hasn't elapsed. Hence, most calls should process rather quickly.
Please note: sp_getapplock / sp_releaseapplock should be used sparingly; Application Locks can definitely be very handy (such as in cases like this one) but they should only be used when absolutely necessary.
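A hedged sketch of the TRY / CATCH handling mentioned above, reusing the same 'check_timing' resource name (fill the middle in with the timed-cache logic from the answer):
BEGIN TRY
    BEGIN TRANSACTION;
    EXEC sp_getapplock @Resource = 'check_timing', @LockMode = 'Exclusive';

    -- ... the timed-cache logic shown above ...

    EXEC sp_releaseapplock @Resource = 'check_timing';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION; -- rolling back also releases a transaction-owned app lock
    THROW; -- SQL Server 2012+; use RAISERROR on older versions
END CATCH;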
Is a statement in SQL Server ACID?
What I mean by that
Given a single T-SQL statement, not wrapped in a BEGIN TRANSACTION / COMMIT TRANSACTION, are the actions of that statement:
Atomic: either all of its data modifications are performed, or none of them is performed.
Consistent: When completed, a transaction must leave all data in a consistent state.
Isolated: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions.
Durable: After a transaction has completed, its effects are permanently in place in the system.
The reason I ask
I have a single statement in a live system that appears to be violating these rules.
In effect my T-SQL statement is:
--If there are any slots available,
--then find the earliest unbooked transaction and mark it booked
UPDATE Transactions
SET Booked = 1
WHERE TransactionID = (
    SELECT TOP 1 TransactionID
    FROM Slots
    INNER JOIN Transactions t2
        ON Slots.SlotDate = t2.TransactionDate
    WHERE t2.Booked = 0      --only book it if it's currently unbooked
    AND Slots.Available > 0  --only book it if there's empty slots
    ORDER BY t2.CreatedDate)
Note: a simpler conceptual variant might be:
--Give away one gift, as long as we haven't given away five
UPDATE Gifts
SET GivenAway = 1
WHERE GiftID = (
    SELECT TOP 1 GiftID
    FROM Gifts g2
    WHERE g2.GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g3 WHERE g3.GivenAway = 1) < 5
    ORDER BY g2.GiftValue DESC
)
In both of these statements, notice that they are single statements (UPDATE...SET...WHERE).
There are cases where the wrong transaction is being "booked"; it's actually picking a later transaction. After staring at this for 16 hours, I'm stumped. It's as though SQL Server is simply violating the rules.
I wondered: what if the results of the Slots view are changing before the update happens? What if SQL Server is not holding shared locks on the transactions for that date? Is it possible that a single statement can be inconsistent?
So I decided to test it
I decided to check if the results of sub-queries, or inner operations, are inconsistent. I created a simple table with a single int column:
CREATE TABLE CountingNumbers (
Value int PRIMARY KEY NOT NULL
)
From multiple connections, in a tight loop, I call the single T-SQL statement:
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
In other words the pseudo-code is:
while (true)
{
ADOConnection.Execute(sql);
}
And within a few seconds I get:
Violation of PRIMARY KEY constraint 'PK__Counting__07D9BBC343D61337'.
Cannot insert duplicate key in object 'dbo.CountingNumbers'.
The duplicate key value is (1332).
Are statements atomic?
The fact that this single statement wasn't atomic makes me wonder whether single statements are atomic at all.
Or is there a more subtle definition of statement, which differs from (for example) what SQL Server considers a statement?
Does this fundamentally mean that within the confines of a single T-SQL statement, SQL Server statements are not atomic?
And if a single statement is atomic, what accounts for the key violation?
From within a stored procedure
Rather than a remote client opening n connections, I tried it with a stored procedure:
CREATE PROCEDURE [dbo].[DoCountNumbers] AS
SET NOCOUNT ON;

DECLARE @bumpedCount int
SET @bumpedCount = 0

WHILE (@bumpedCount < 500) --safety valve
BEGIN
    SET @bumpedCount = @bumpedCount+1;
    PRINT 'Running bump '+CAST(@bumpedCount AS varchar(50))

    INSERT INTO CountingNumbers (Value)
    SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers

    IF (@bumpedCount >= 500)
    BEGIN
        PRINT 'WARNING: Bumping safety limit of 500 bumps reached'
    END
END
PRINT 'Done bumping process'
and opened 5 tabs in SSMS, pressed F5 in each, and watched as they too violated ACID:
Running bump 414
Msg 2627, Level 14, State 1, Procedure DoCountNumbers, Line 14
Violation of PRIMARY KEY constraint 'PK_CountingNumbers'.
Cannot insert duplicate key in object 'dbo.CountingNumbers'.
The duplicate key value is (4414).
The statement has been terminated.
So the failure is independent of ADO, ADO.NET, or any particular client.
For 15 years I've been operating under the assumption that a single statement in SQL Server is consistent.
What about TRANSACTION ISOLATION LEVEL xxx?
Results for different variants of the SQL batch:
default (read committed): key violation
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
default (read committed), explicit transaction: key violation
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
serializable: deadlock
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
snapshot (after altering database to enable snapshot isolation): key violation
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
Bonus
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
Default transaction isolation level (READ COMMITTED)
Turns out every query I've ever written is broken
This certainly changes things. Every update statement I've ever written is fundamentally broken. E.g.:
--Update the user with their last invoice date
UPDATE Users
SET LastInvoiceDate = (SELECT MAX(InvoiceDate) FROM Invoices WHERE Invoices.uid = Users.uid)
Wrong value; because another invoice could be inserted after the MAX and before the UPDATE. Or an example from BOL:
UPDATE Sales.SalesPerson
SET SalesYTD = SalesYTD +
(SELECT SUM(so.SubTotal)
FROM Sales.SalesOrderHeader AS so
WHERE so.OrderDate = (SELECT MAX(OrderDate)
FROM Sales.SalesOrderHeader AS so2
WHERE so2.SalesPersonID = so.SalesPersonID)
AND Sales.SalesPerson.BusinessEntityID = so.SalesPersonID
GROUP BY so.SalesPersonID);
Without exclusive holdlocks, the SalesYTD is wrong.
How have I been able to do anything all these years?
I've been operating under the assumption that a single statement in SQL Server is consistent
That assumption is wrong. The following two transactions have identical locking semantics:
STATEMENT
BEGIN TRAN; STATEMENT; COMMIT
No difference at all. Single statements and auto-commits do not change anything.
So merging all logic into one statement does not help (if it does, it was by accident because the plan changed).
Let's fix the problem at hand. SERIALIZABLE will fix the inconsistency you are seeing because it guarantees that your transactions behave as if they executed single-threadedly. Equivalently, they behave as if they executed instantly.
You will be getting deadlocks. If you are OK with a retry loop (see the sketch below), you're done at this point.
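A hedged sketch of such a retry loop, using the CountingNumbers test table from the question and retrying only when the batch is chosen as a deadlock victim (error 1205):
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        BEGIN TRANSACTION;
        INSERT INTO CountingNumbers (Value)
        SELECT ISNULL(MAX(Value), 0) + 1 FROM CountingNumbers;
        COMMIT TRANSACTION;
        BREAK; -- success: stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries = @retries - 1; -- deadlock victim: try again
        ELSE
            THROW; -- out of retries, or a different error (THROW needs SQL Server 2012+)
    END CATCH;
END;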
If you want to invest more time, apply locking hints to force exclusive access to the relevant data:
UPDATE Gifts -- U-locked anyway
SET GivenAway = 1
WHERE GiftID = (
    SELECT TOP 1 GiftID
    FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) --this normally just S-locks
    WHERE g2.GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g3 WITH (UPDLOCK, HOLDLOCK) WHERE g3.GivenAway = 1) < 5
    ORDER BY g2.GiftValue DESC
)
You will now see reduced concurrency. That might be totally fine depending on your load.
The very nature of your problem makes achieving concurrency hard. If you require a solution for that we'd need to apply more invasive techniques.
You can simplify the UPDATE a bit:
WITH g AS (
    SELECT TOP 1 Gifts.*
    FROM Gifts WITH (UPDLOCK, HOLDLOCK)
    WHERE Gifts.GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) WHERE g2.GivenAway = 1) < 5
    ORDER BY Gifts.GiftValue DESC
)
UPDATE g -- U-locked anyway
SET GivenAway = 1
This gets rid of one unnecessary join.
Below is an example of an UPDATE statement that does increment a counter value atomically:
-- Do this once for test setup
CREATE TABLE CountingNumbers (Value int PRIMARY KEY NOT NULL)
INSERT INTO CountingNumbers VALUES(1)

-- Run this in parallel: start it in two tabs of SQL Server Management Studio
-- You will see each connection generating new numbers without duplicates and without timeouts
while (1=1)
BEGIN
    declare @nextNumber int
    -- Taking the update lock is only relevant in case this statement is part of a larger transaction,
    -- to prevent deadlock
    -- When executing without a transaction, the statement will itself be atomic
    UPDATE CountingNumbers WITH (UPDLOCK, ROWLOCK) SET @nextNumber=Value=Value+1
    print @nextNumber
END
A SELECT does not lock exclusively; even SERIALIZABLE locks, but only for the time the SELECT is executing. Once the SELECT is over, its locks are gone. Then the UPDATE takes its locks, now that it knows what to lock because the SELECT has returned results. Meanwhile, anyone else can SELECT again!
The only sure way to safely read and lock a row is:
begin transaction

--lock what I need to read
update mytable set col1=col1 where mykey=@key

--now read what I need
select @d1=col1, @d2=col2 from mytable where mykey=@key

--now do calculations, checks, whatever I need from the row I read, to decide my update
if @d1<@d2 set @d1=@d2 else set @d1=@d2 * 2 --just an example calc

--now do the actual update based on what I read and the logic
update mytable set col1=@d1, col2=@d2 where mykey=@key

commit transaction
This way, any other connection running the same statement for the same data will wait at the first (fake) update statement until the previous transaction is done. This ensures that when the lock is released, only one connection is granted it for the 'update', and that connection will read committed, finalized data to make its calculations and decide if and what to actually update in the second, 'real' update.
In other words, when you need to select information to decide if/how to update, you need a begin/commit transaction block, plus you need to start with a fake update of what you need to select, before you select it (an UPDATE with an OUTPUT clause will also do; see the sketch below).
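A hedged sketch of that OUTPUT variant, assuming the same mytable / @key as above (column names and int types are illustrative): the no-op UPDATE takes the exclusive lock and hands back the locked row's values in one statement.
DECLARE @key int = 1; -- example key
DECLARE @locked TABLE (col1 int, col2 int);

BEGIN TRANSACTION;

UPDATE mytable
SET col1 = col1 -- no-op write: takes the exclusive lock
OUTPUT inserted.col1, inserted.col2 INTO @locked
WHERE mykey = @key;

-- @locked now holds the committed values of the locked row;
-- do the calculations and the real update here, then:
COMMIT TRANSACTION;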
Maybe someone has already asked this, but I can't find an appropriate answer to this question.
If I have, let's say, the following query:
SELECT
column1,
column2,
column3
FROM Table1 AS t1
WAITFOR DELAY '10:00:00'
This query returns around 100,000 rows.
Does the WAITFOR statement wait 10 hours before telling SQL Server to execute the query and produce the result, or does SQL Server execute the query immediately, keep the result in RAM for 10 hours, and then send it over the network (or just display it)?
Am I missing something here?
I would appreciate it if someone could give me a real example that proves the first or the second behaviour.
I executed the following query:
SELECT GETDATE()
SELECT GETDATE()
WAITFOR DELAY '00:00:05'
The result was two dates that were the same. On this basis, I would conclude that SQL Server executes the query immediately and keeps the result for a certain time before showing it, but that makes little sense to me.
According to the docs, the WAITFOR command is used to block a statement or transaction for a specified amount of time. In that case, you'd use it to delay subsequent statements. In other words, you'd want to execute something after the WAITFOR command, not before. Here are a few examples:
The following example executes the stored procedure after a two-hour delay.
BEGIN
    WAITFOR DELAY '02:00';
    EXECUTE sp_helpdb;
END;
GO
The following example executes the stored procedure sp_update_job at 10:20 P.M. (22:20).
USE msdb;
EXECUTE sp_add_job @job_name = 'TestJob';
BEGIN
    WAITFOR TIME '22:20';
    EXECUTE sp_update_job @job_name = 'TestJob',
        @new_name = 'UpdatedJob';
END;
GO
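Applied to the question's own experiment: put the WAITFOR between the two reads and the blocking becomes visible, because the second statement does not start until the delay has elapsed.
SELECT GETDATE();         -- first timestamp
WAITFOR DELAY '00:00:05'; -- the batch suspends here for five seconds
SELECT GETDATE();         -- second timestamp, about five seconds later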
I am trying to find out the performance characteristics or internal implementation of WAITFOR in T-SQL. I have gone through MSDN, Stack Overflow, and other sites without luck. Here is my question:
In the code below, I want to delete the top 10,000 rows from the table DUMMYTABLE. I want this delete job to have the least possible performance impact on the database's other jobs and to give priority to them (if any), so I delete 100 rows at a time, 100 times, with a sleep between two adjacent deletes.
Question:
During the WAITFOR blocking time, will this transaction consume CPU, or will it just sit idle, waiting to be woken up by some event one second later?
During that one second, if there are other transactions trying to INSERT/UPDATE the DUMMYTABLE table, who gets priority?
I'd really appreciate your help or any insights.
declare @cnt int
set @cnt = 0

while @cnt < 100
begin
    delete top (100) from DUMMYTABLE where FOO = 'BAR' -- note: TOP needs parentheses in DELETE
    set @cnt = @cnt + 1
    waitfor delay '00:00:01'
end
It does not consume any CPU
Status = suspended
You can see this with 2 query windows:
SELECT @@SPID;
GO
WAITFOR DELAY '000:03:00'; -- three minutes
Then in the other
SELECT * FROM sys.sysprocesses S WHERE S.spid = 53; -- replace 53 with the SPID returned in the first window
Note: this was on SQL Server 2012 SP1, but AFAIK the behaviour is the same on other versions.
Point 2 (sorry, I missed this):
Another session can modify the table while the WAITFOR is running; WAITFOR isn't a lock.
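A hedged two-window demonstration (assuming DUMMYTABLE has a varchar FOO column as in the question, and noting that the loop above runs each DELETE as its own autocommit transaction, so no DELETE locks are held during the pause):
-- Window 1: suspended, as the loop is during its pause
WAITFOR DELAY '00:00:30';

-- Window 2, run during the wait: completes immediately,
-- because the suspended WAITFOR holds no locks on DUMMYTABLE
INSERT INTO DUMMYTABLE (FOO) VALUES ('BAR');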