SQL statements processed inside a BEGIN...END block in Sybase ASE

I have a stored procedure with a BEGIN...END block; for example, there are a few SELECT statements and UPDATE statements in that block.
Inside this block, are all the SQL statements executed together, or sequentially in the order they are written?
begin
select stmt
update stmt
select stmt
.
.
end

The statements in a stored procedure are executed sequentially.
The only thing you do not know the order of, and that can be executed in parallel, are the different substeps internal to each statement.
For example, in:
SELECT a, b
FROM table t
INNER JOIN other o
ON o.id = t.id
INNER JOIN third d
ON d.o_id = o.o_id
WHERE t.b = 123

UPDATE t
SET x = 123
FROM table t
WHERE t.b = 234
The SELECT will always be executed before the UPDATE, but within the SELECT statement you do not know whether 'table' is joined with 'other' first and then joined with 'third', or whether 'other' is joined with 'third' and then with 'table'.
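If you want to see which join order the optimizer actually picked, you can enable showplan in Sybase ASE before running the query. A minimal sketch, reusing the table and column names from the example above:

set showplan on
go
select a, b
from table t
inner join other o on o.id = t.id
inner join third d on d.o_id = o.o_id
where t.b = 123
go
set showplan off
go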

All statements in a BEGIN...END block are executed sequentially.
I have faced deadlocks with this kind of procedure, most likely because the procedure was running on multiple threads at once. If it can be executed on a single thread (or only 2-3 threads) and the transactions are kept small, each process can simply wait its turn before things escalate to a deadlock. Otherwise, add a try/catch block that checks the error number for a deadlock and retries the update when one is detected.
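A minimal sketch of such a retry loop, in SQL Server T-SQL syntax (Sybase ASE has no TRY...CATCH, so there you would test @@error after the statement instead; 1205 is the deadlock-victim error number in both products, and the table and filter below are made up purely for illustration):

DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        UPDATE mytable              -- hypothetical table and filter
        SET col1 = col1 + 1
        WHERE mykey = 42;
        BREAK;                      -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205    -- this session was chosen as deadlock victim
        BEGIN
            SET @retries = @retries - 1;
            WAITFOR DELAY '00:00:00.100';  -- brief back-off before retrying
        END
        ELSE
            THROW;                  -- any other error: re-raise it
    END CATCH;
END;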

Related

How to compare the count of each table in two different databases?

I have two databases: db1 and db2 (db2 started out completely empty). I was copying all of db1 to db2, but the process was interrupted, and I need to know which tables are still left to copy. How can I compare the count of each table in these two databases to know which tables I still have to transfer?
Basically, you need to loop through the data dictionary and generate some dynamic SQL which executes a count for each table.
I have assumed you're only transferring one schema. If that's not true, or you're not connecting as the target schema, you'll need to use ALL_TABLES instead of USER_TABLES, and include the OWNER column in the driving query and the dynamic query too.
declare
  n    pls_integer;
  stmt varchar2(32767);
begin
  for r in ( select table_name from user_tables order by table_name ) loop
    stmt := 'select count(*) from ' || r.table_name;
    -- uncomment the next line to debug errors
    -- dbms_output.put_line(stmt);
    execute immediate stmt into n;
    -- you may wish to only display empty tables
    -- if n = 0 then
    dbms_output.put_line(r.table_name || ' = ' || lpad(n, 10));
    -- end if;
  end loop;
end;
One would hope that your data copying process was clever enough to commit only completed tables. If so, you only need to run this on db2; otherwise, run it on both.
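If both databases are reachable from one session, a variation of the same loop (a sketch, assuming a hypothetical database link named db2_link pointing at db2) can compare the counts side by side and print only the tables that differ:

declare
  n1 pls_integer;
  n2 pls_integer;
begin
  for r in ( select table_name from user_tables order by table_name ) loop
    execute immediate 'select count(*) from ' || r.table_name into n1;
    -- tables not yet created on db2 will raise ORA-00942 here
    execute immediate 'select count(*) from ' || r.table_name || '@db2_link' into n2;
    if n1 <> n2 then
      dbms_output.put_line(r.table_name || ': db1 = ' || n1 || ', db2 = ' || n2);
    end if;
  end loop;
end;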

T-SQL select and update with lock - transaction or table hint

How to achieve following transaction lock?
In big simplification - I have a table of "Tasks" with statuses (Created, Started, Completed). I want to create stored procedure GetNext to get top 1 task that wasn't yet started (has Created status).
In this procedure I want to mark the task as Started. Obviously I want to avoid the situation when two processes call this procedure and get the same task.
The procedure will not be called frequently so performance is not an issue, keeping data uncorrupted is an issue.
So I want to do something like this:
UPDATE tblTasks
SET Status = 'Started'
WHERE TaskId = (SELECT TOP 1 TaskId
FROM tblTasks
WHERE Status = 'Created')
I also want to receive the task that I just updated so rather than what is above I need something like:
DECLARE @TaskId AS INT = (SELECT TOP 1 TaskId FROM tblTasks WHERE Status = 'Created')
UPDATE tblTasks
SET Status = 'Started'
WHERE TaskId = @TaskId
[... - Do something with @TaskId - not relevant]
OR
DECLARE @TaskIds AS TABLE(Id INT)
UPDATE tblTasks
SET Status = 'Started'
OUTPUT INSERTED.TaskId INTO @TaskIds
WHERE TaskId = @TaskId
[... - Do something with @TaskIds - not relevant]
So, assuming that I need a SELECT + UPDATE to achieve what I want - how can I ensure that no other process executes even the first operation (the SELECT) until the existing process is done?
As far as I understand, even the SERIALIZABLE transaction isolation level is not enough here, because another process can read the data, then wait until I finish (because its update is blocked by my lock), and then update the data that I just updated.
I feel that the table hints XLOCK or HOLDLOCK might help, but I'm no expert, and the MS docs scared me with:
Caution
Because the SQL Server query optimizer typically selects the best execution plan for a query, we recommend that hints be used only as a last resort by experienced developers and database administrators.
(from https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table)
So how do I make sure that two processes will not update one item and also how do I make sure that if one process is running the other will wait and do its job after the first finishes instead of failing?
SQL Server takes locks automatically for each statement and, inside a transaction, holds them until the transaction completes.
From what I understand, what you want/need is a way to SELECT and UPDATE "in one go". You should be able to achieve that with a combination of TRANSACTION, TRY ... CATCH and a CTE.
DECLARE @TaskIds AS TABLE (TaskId INT);

BEGIN TRANSACTION;
BEGIN TRY
    WITH myTasks (TaskId) AS (
        SELECT TOP 1 t.TaskId
        FROM tblTasks AS t
        WHERE t.Status = 'Created'
    )
    UPDATE t
    SET t.Status = 'Started'
    OUTPUT INSERTED.TaskId INTO @TaskIds
    FROM tblTasks AS t
    INNER JOIN myTasks AS mt
        ON mt.TaskId = t.TaskId;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;
END CATCH;

IF @@TRANCOUNT > 0
BEGIN
    COMMIT TRANSACTION;
    SELECT TaskId FROM @TaskIds;
    [.. do other stuff ..]
END
GO
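Note that without locking hints, two concurrent callers can still read the same 'Created' row before either update takes effect. A common alternative pattern (a sketch of my own, not from the answer above, assuming the tblTasks schema described in the question) claims the row in a single statement with UPDLOCK and READPAST, so a second caller skips a row the first caller has locked instead of blocking or double-claiming it:

DECLARE @TaskIds AS TABLE (TaskId INT);

UPDATE t
SET t.Status = 'Started'
OUTPUT INSERTED.TaskId INTO @TaskIds
FROM (
    -- UPDLOCK claims the row for this statement's duration;
    -- READPAST makes other callers skip it rather than wait
    SELECT TOP 1 *
    FROM tblTasks WITH (UPDLOCK, READPAST, ROWLOCK)
    WHERE Status = 'Created'
    ORDER BY TaskId
) AS t;

SELECT TaskId FROM @TaskIds; -- empty if no task was available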

Is a single SQL Server statement atomic and consistent?

Is a statement in SQL Server ACID?
What I mean by that
Given a single T-SQL statement, not wrapped in a BEGIN TRANSACTION / COMMIT TRANSACTION, are the actions of that statement:
Atomic: either all of its data modifications are performed, or none of them is performed.
Consistent: When completed, a transaction must leave all data in a consistent state.
Isolated: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions.
Durable: After a transaction has completed, its effects are permanently in place in the system.
The reason I ask
I have a single statement in a live system that appears to be violating these rules.
In effect my T-SQL statement is:
--If there are any slots available,
--then find the earliest unbooked transaction and mark it booked
UPDATE Transactions
SET Booked = 1
WHERE TransactionID = (
SELECT TOP 1 TransactionID
FROM Slots
INNER JOIN Transactions t2
ON Slots.SlotDate = t2.TransactionDate
WHERE t2.Booked = 0 --only book it if it's currently unbooked
AND Slots.Available > 0 --only book it if there's empty slots
ORDER BY t2.CreatedDate)
Note: a simpler conceptual variant might be:
--Give away one gift, as long as we haven't given away five
UPDATE Gifts
SET GivenAway = 1
WHERE GiftID = (
    SELECT TOP 1 GiftID
    FROM Gifts g2
    WHERE g2.GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g3 WHERE g3.GivenAway = 1) < 5
    ORDER BY g2.GiftValue DESC
)
In both of these statements, notice that they are single statements (UPDATE...SET...WHERE).
There are cases where the wrong transaction is being "booked"; it's actually picking a later transaction. After staring at this for 16 hours, I'm stumped. It's as though SQL Server is simply violating the rules.
I wondered: what if the results of the Slots view are changing before the update happens? What if SQL Server is not holding SHARED locks on the transactions for that date? Is it possible that a single statement can be inconsistent?
So I decided to test it
I decided to check if the results of sub-queries, or inner operations, are inconsistent. I created a simple table with a single int column:
CREATE TABLE CountingNumbers (
Value int PRIMARY KEY NOT NULL
)
From multiple connections, in a tight loop, I call the single T-SQL statement:
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
In other words the pseudo-code is:
while (true)
{
ADOConnection.Execute(sql);
}
And within a few seconds I get:
Violation of PRIMARY KEY constraint 'PK__Counting__07D9BBC343D61337'.
Cannot insert duplicate key in object 'dbo.CountingNumbers'.
The duplicate key value is (1332).
Are statements atomic?
The fact that this single statement wasn't atomic makes me wonder: are single statements atomic?
Or is there a more subtle definition of "statement" that differs from (for example) what SQL Server considers a statement?
Does this fundamentally mean that, within the confines of a single T-SQL statement, SQL Server statements are not atomic?
And if a single statement is atomic, what accounts for the key violation?
From within a stored procedure
Rather than a remote client opening n connections, I tried it with a stored procedure:
CREATE procedure [dbo].[DoCountNumbers] AS

SET NOCOUNT ON;

DECLARE @bumpedCount int
SET @bumpedCount = 0

WHILE (@bumpedCount < 500) --safety valve
BEGIN
    SET @bumpedCount = @bumpedCount+1;
    PRINT 'Running bump '+CAST(@bumpedCount AS varchar(50))

    INSERT INTO CountingNumbers (Value)
    SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers

    IF (@bumpedCount >= 500)
    BEGIN
        PRINT 'WARNING: Bumping safety limit of 500 bumps reached'
    END
END
PRINT 'Done bumping process'
and opened 5 tabs in SSMS, pressed F5 in each, and watched as they too violated ACID:
Running bump 414
Msg 2627, Level 14, State 1, Procedure DoCountNumbers, Line 14
Violation of PRIMARY KEY constraint 'PK_CountingNumbers'.
Cannot insert duplicate key in object 'dbo.CountingNumbers'.
The duplicate key value is (4414).
The statement has been terminated.
So the failure is independent of ADO, ADO.NET, or any particular client.
For 15 years I've been operating under the assumption that a single statement in SQL Server is consistent.
What about TRANSACTION ISOLATION LEVEL xxx?
Results for different variants of the SQL batch:
default (read committed): key violation
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
default (read committed), explicit transaction: key violation
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
serializable: deadlock
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
snapshot (after altering database to enable snapshot isolation): key violation
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
Bonus
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
Default transaction isolation level (READ COMMITTED)
Turns out every query I've ever written is broken
This certainly changes things. Every update statement I've ever written is fundamentally broken. E.g.:
--Update the user with their last invoice date
UPDATE Users
SET LastInvoiceDate = (SELECT MAX(InvoiceDate) FROM Invoices WHERE Invoices.uid = Users.uid)
Wrong value; because another invoice could be inserted after the MAX and before the UPDATE. Or an example from BOL:
UPDATE Sales.SalesPerson
SET SalesYTD = SalesYTD +
(SELECT SUM(so.SubTotal)
FROM Sales.SalesOrderHeader AS so
WHERE so.OrderDate = (SELECT MAX(OrderDate)
FROM Sales.SalesOrderHeader AS so2
WHERE so2.SalesPersonID = so.SalesPersonID)
AND Sales.SalesPerson.BusinessEntityID = so.SalesPersonID
GROUP BY so.SalesPersonID);
without exclusive holdlocks, the SalesYTD is wrong.
How have I been able to do anything all these years?
I've been operating under the assumption that a single statement in SQL Server is consistent
That assumption is wrong. The following two transactions have identical locking semantics:
STATEMENT
BEGIN TRAN; STATEMENT; COMMIT
No difference at all. Single statements and auto-commits do not change anything.
So merging all logic into one statement does not help (if it does, it was by accident because the plan changed).
Let's fix the problem at hand: SERIALIZABLE will fix the inconsistency you are seeing, because it guarantees that your transactions behave as if they executed serially. Equivalently, they behave as if they executed instantaneously.
You will be getting deadlocks. If you are OK with a retry loop, you're done at this point.
If you want to invest more time, apply locking hints to force exclusive access to the relevant data:
UPDATE Gifts -- U-locked anyway
SET GivenAway = 1
WHERE GiftID = (
    SELECT TOP 1 GiftID
    FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) -- this normally just S-locks
    WHERE g2.GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g3 WITH (UPDLOCK, HOLDLOCK) WHERE g3.GivenAway = 1) < 5
    ORDER BY g2.GiftValue DESC
)
You will now see reduced concurrency. That might be totally fine depending on your load.
The very nature of your problem makes achieving concurrency hard. If you require a solution for that we'd need to apply more invasive techniques.
You can simplify the UPDATE a bit:
WITH g AS (
    SELECT TOP 1 Gifts.*
    FROM Gifts WITH (UPDLOCK, HOLDLOCK)
    WHERE GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) WHERE g2.GivenAway = 1) < 5
    ORDER BY GiftValue DESC
)
UPDATE g -- U-locked anyway
SET GivenAway = 1
This gets rid of one unnecessary join.
Below is an example of an UPDATE statement that does increment a counter value atomically
-- Do this once for test setup
CREATE TABLE CountingNumbers (Value int PRIMARY KEY NOT NULL)
INSERT INTO CountingNumbers VALUES(1)

-- Run this in parallel: start it in two tabs of SQL Server Management Studio
-- You will see each connection generating new numbers without duplicates and without timeouts
while (1=1)
BEGIN
    declare @nextNumber int
    -- Taking the update lock is only relevant in case this statement is part of a larger transaction,
    -- to prevent deadlock
    -- When executing without a transaction, the statement will itself be atomic
    UPDATE CountingNumbers WITH (UPDLOCK, ROWLOCK) SET @nextNumber = Value = Value + 1
    print @nextNumber
END
A SELECT does not lock exclusively; even under SERIALIZABLE it takes only shared locks, and outside a transaction those are held just for the time the SELECT is executing. Once the SELECT is over, its locks are gone. Only then do the update locks take over, now that the SELECT has returned the results that tell the UPDATE what to lock. In the meantime, anyone else can run the same SELECT again!
The only sure way to safely read and lock a row is:
begin transaction
    -- lock what I need to read
    update mytable set col1 = col1 where mykey = @key
    -- now read what I need
    select @d1 = col1, @d2 = col2 from mytable where mykey = @key
    -- now do whatever calculations and checks I need on the row I read, to decide my update
    if @d1 < @d2 set @d1 = @d2 else set @d1 = @d2 * 2 -- just an example calc
    -- now do the actual update based on what I read and on the logic above
    update mytable set col1 = @d1, col2 = @d2 where mykey = @key
commit transaction
This way, any other connection running the same statements for the same data will wait at the first (fake) UPDATE statement until the previous transaction is done. This ensures that when the lock is released, only one connection is granted the lock for its update, and that connection will read committed, finalized data to make its calculations and decide if and what to actually update in the second, "real" UPDATE.
In other words, when you need to SELECT information in order to decide if/how to UPDATE, you need a BEGIN/COMMIT TRANSACTION block, and you need to start with a fake UPDATE of what you are about to SELECT - before you SELECT it (an UPDATE with an OUTPUT clause will also do).

Let each run of a sproc process its share of rows

I am maintaining a sproc where the developer has implemented his own locking mechanism, but to me it seems flawed:
CREATE PROCEDURE Sproc1
AS
UPDATE X
SET flag = 'lockedforprocessing'
WHERE flag = 'unprocessed'
-- Some processing occurs here, with enough time for:
-- 1. table X to get new rows inserted with a flag of 'unprocessed'
-- 2. another instance of Sproc1 to start and execute the UPDATE above
SELECT * FROM X
WHERE flag = 'lockedforprocessing'
-- The above statement now also reads rows that this run never locked to start with.
I know that I can just wrap the sproc inside a transaction with the SERIALIZABLE isolation level, but I want to avoid this.
The goal is that multiple instances of this sproc can run at the same time, each processing its own "share" of the records, to achieve maximum concurrency. An execution of the sproc should not wait on a previous run that is still executing.
I don't think REPEATABLE READ can help here, since it won't prevent new records with a value of "unprocessed" from being read (correct me if I'm wrong, please).
I just discovered sp_getapplock; it would resolve the bug, but it would serialize execution, which is not my goal.
A solution that I see is to have each run of the proc generate its own unique GUID and assign that to the flag, but somehow I feel I am simulating something that SQL Server can already solve out of the box.
Is SERIALIZABLE the only way to let each run of a sproc process its "share" of the rows?
Regards, Tom
Assuming there is an ID field in X, a table variable holding the updated Xs can help:
CREATE PROCEDURE Sproc1
AS
-- Table variable listing all Xs claimed by this run
DECLARE @flagged TABLE (ID int PRIMARY KEY)

-- Lock and retrieve the claimed records
UPDATE X
SET flag = 'lockedforprocessing'
OUTPUT Inserted.ID INTO @flagged
WHERE flag = 'unprocessed'

-- Processing
SELECT x.* FROM X x INNER JOIN @flagged f ON x.ID = f.ID

-- Clean-up
UPDATE x
SET flag = 'processed'
FROM X x INNER JOIN @flagged f ON x.ID = f.ID
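One caveat (my addition, not part of the answer above): if a second run starts while the first run's claiming UPDATE is still inside an open transaction, the second run will block on the locked rows. If that blocking is unacceptable, a sketch of a variation adds a READPAST hint so a concurrent run simply skips rows another run currently has row-locked:

-- Sketch only: READPAST makes this run skip rows that another run
-- has locked mid-claim, instead of waiting on them
UPDATE x
SET flag = 'lockedforprocessing'
OUTPUT Inserted.ID INTO @flagged
FROM X x WITH (READPAST, ROWLOCK)
WHERE flag = 'unprocessed'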

Do database cursors pick up changes to the underlying data?

Quick question about cursors (in particular Oracle cursors).
Let's say I have a table called "my_table" which has two columns, an ID and a name. There are millions of rows, but the name column is always the string 'test'.
I then run this PL/SQL script:
declare
  cursor cur is
    select t.id, t.name
    from my_table t
    order by 1;
begin
  for cur_row in cur loop
    if (cur_row.name = 'test') then
      dbms_output.put_line('everything is fine!');
    else
      dbms_output.put_line('error error error!!!!!');
      exit;
    end if;
  end loop;
end;
/
if I, while this is running, run this SQL:
update my_table
set name = 'error'
where id = <max id>;
commit;
will the cursor in the PL/SQL block pick up that change and print out "error error error" and exit? or will it not pick up the change at all ... or will it even allow the update to my_table?
thanks!
A cursor effectively runs the SELECT and then lets you iterate over the result set, which Oracle presents as a read-consistent snapshot of the data as of the moment the cursor was opened. Because your result set is fixed at that point, it won't be affected by the UPDATE, and the UPDATE itself is allowed to proceed; its change is simply not visible to the already-open cursor. (Handling things otherwise would require re-running the query every time you advanced the cursor!)
See:
http://www.techonthenet.com/oracle/cursors/declare.php
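If you instead wanted the loop to block concurrent modifications to the rows it reads, one option (my sketch, not part of the answer above) is to declare the cursor with FOR UPDATE; Oracle then locks the selected rows when the cursor is opened and holds the locks until the transaction ends:

declare
  cursor cur is
    select t.id, t.name
    from my_table t
    order by 1
    for update of name;  -- row locks are taken when the cursor opens
begin
  for cur_row in cur loop
    -- a concurrent UPDATE of these rows now waits until this transaction ends
    null;
  end loop;
  commit;  -- releases the row locks
end;
/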
