Execute select query in where condition - sql-server

I want to use a SELECT query in the WHERE condition of an UPDATE statement, to fill an incrementing number per user account in SQL Server.
Here is my code:
DECLARE @i INT = 0;
WHILE @i <= 1077
BEGIN
UPDATE tbl_UdharKhata SET ReceiptNo = @i
WHERE EXISTS (SELECT DISTINCT UserId FROM tbl_UdharKhata)
SET @i = @i + 1;
END
This query runs, but the problem is that every row ends up with the same ReceiptNo, instead of each user account's receipts getting their own incrementing numbers.
Note: there are 1077 distinct user accounts, and hence 1077 distinct UserId values.

What you're doing is updating all the records in the table 1077 times, each time with a different number, up to the end of the loop. Now I'm not sure if you want a single number for each userId, or an incrementing number for each row with the same userId, starting at 1 for each userId.
The only way for your SELECT statement to not return anything is if the table is empty - because it has no WHERE clause.
Since your select statement is in an EXISTS operator, the condition will always evaluate to true, making the WHERE clause of the UPDATE statement redundant.
basically, it's like update tbl_UdharKhata set ReceiptNo = @i where exists(select 1),
which is exactly like update tbl_UdharKhata set ReceiptNo = @i
This means that in each iteration of the loop, you're updating all the records in the table with the current value of @i.
Now, it's not very clear from your question what you want, but I'm gonna go out on a limb here and guess you want to update the ReceiptNo column so that for each userId you'll have an incrementing number, resetting to 1 for each new userId.
If that is the case, the easiest way to do that is by creating a common table expression (cte) and then update that cte:
;WITH cte AS
(
SELECT ReceiptNo
-- Note: Order by @@SPID means an arbitrary order! Details after the code.
, ROW_NUMBER() OVER(PARTITION BY UserId ORDER BY @@SPID) As Rn
FROM tbl_UdharKhata
)
UPDATE cte
SET ReceiptNo = Rn
Note I've used ORDER BY @@SPID in the OVER clause of the ROW_NUMBER().
@@SPID is a built-in function that returns the session ID of the current user process - meaning that it will return a constant value within each session.
Using ORDER BY with a constant value will generate an arbitrary order.
If you want a specific order, use any column in your table that isn't userId (otherwise you'll end up with the same arbitrary order - because the userId will be the same for each partition).
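For example, if the table had a date column to order receipts by (CreatedDate here is a hypothetical column name, not from the question), the same CTE would become:

;WITH cte AS
(
SELECT ReceiptNo
-- Each UserId's receipts numbered 1, 2, 3... in creation order
, ROW_NUMBER() OVER(PARTITION BY UserId ORDER BY CreatedDate) As Rn
FROM tbl_UdharKhata
)
UPDATE cte
SET ReceiptNo = Rn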


Trigger reverses the changes made - SQL Server

I'm working on an E-commerce system where I have an order table that stores all the information regarding an order. The orders go through different stages: Open, Verified, In Process, etc. And I'm keeping counts of these orders at different stages e.g. Open Orders 95, Verified 5, In Process 3, etc.
When a new order is inserted in the table, I have a trigger that increments the Open Orders by 1. Similarly, I have a trigger for updates which checks the order's previous stage and the next to decrement and increment accordingly.
The INSERT trigger is working fine as described above. But the UPDATE trigger has a weird behavior that it makes the desired changes to the Counts but then reverses the changes for some reason.
For instance, upon changing the status of an order from Open to Verified, the ideal behavior would be to decrement Open Orders by 1 and increment Verified Orders by 1. The trigger currently performs the desired action but then, for some reason, restores the previous value.
Here's a snippet of my trigger where I check if the order previously belonged to the Open status and is now being updated to Verified status:
IF @@ROWCOUNT = 0 RETURN
DECLARE #orderID VARCHAR(MAX) -- orderID of the order that is being updated
DECLARE #storeID VARCHAR(MAX) -- storeID of the store the order belongs to
SELECT TOP 1
#orderID = i.id,
#storeID = i.storeID
FROM
inserted AS i
INNER JOIN deleted AS d
ON i.id = d.id
-- IF from Open Order
IF EXISTS (
SELECT *
FROM
deleted
WHERE
orderStatus = 'Open' AND
id = #orderID
)
BEGIN
-- IF to Verified Order
IF EXISTS (
SELECT *
FROM
inserted
WHERE
orderStatus = 'Verified' AND
id = #orderID
)
BEGIN
UPDATE order_counts
SET
open_orders = open_orders - @@ROWCOUNT,
verified_orders = verified_orders + @@ROWCOUNT
WHERE storeID = #storeID
END
END
EDIT:
Here's some extra information which will be helpful in light of the first comment on the question:
I have a lot of records in the table, so using COUNT() again and again has a big impact on overall performance. That's why I'm keeping counts in a separate table. Also, I've written the trigger so that it handles both single-record and multi-record changes. I only check one row because I know that in the case of multiple records they will all be going through the same status change; hence the decrement/increment by @@ROWCOUNT.
If you can tolerate a slightly different representation of the order counts, I'd strongly suggest using an indexed view instead [1]:
create table dbo.Orders (
ID int not null,
OrderStatus varchar(20) not null,
constraint PK_Orders PRIMARY KEY (ID)
)
go
create view dbo.OrderCounts
with schemabinding
as
select
OrderStatus,
COUNT_BIG(*) as Cnt
from
dbo.Orders
group by OrderStatus
go
create unique clustered index IX_OrderCounts on dbo.OrderCounts (OrderStatus)
go
insert into dbo.Orders (ID,OrderStatus) values
(1,'Open'),
(2,'Open'),
(3,'Verified')
go
update dbo.Orders set OrderStatus = 'Verified' where ID = 2
go
select * from dbo.OrderCounts
Results:
OrderStatus Cnt
-------------------- --------------------
Open 1
Verified 2
This has the advantage that, whilst behind the scenes SQL Server is doing something very similar to running triggers, this code has been debugged thoroughly and is correct.
In your current attempted trigger, one further reason why the trigger is broken is that @@ROWCOUNT isn't "sticky" - it doesn't remember the number of rows affected by the original UPDATE once you run other statements inside your trigger, because those statements also set @@ROWCOUNT.
[1] You can always stack a non-indexed view atop this view and perform a PIVOT if you really want the counts in a single row and in multiple columns.
The reason for this behavior is the use of @@ROWCOUNT multiple times: @@ROWCOUNT is reset by every statement, so after you read it once (or run any other statement), the next read returns that statement's row count, not the original UPDATE's. Instead, capture the value into a variable at the start of the trigger and use that variable throughout. The scenario below demonstrates the problem.
CREATE DATABASE Test
USE Test
CREATE TABLE One
(
ID INT IDENTITY(1,1)
,Name NVARCHAR(MAX)
)
GO
CREATE TRIGGER TR_One ON One FOR INSERT,UPDATE
AS
BEGIN
PRINT @@ROWCOUNT
SELECT @@ROWCOUNT
END
UPDATE One
SET Name = 'Name4'
WHERE ID = 3
Results:
The PRINT @@ROWCOUNT statement gives a value of 1, whereas the SELECT @@ROWCOUNT gives a value of 0, because the PRINT statement itself has already reset @@ROWCOUNT.
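A minimal sketch of the fix on the same demo table: capture @@ROWCOUNT into a variable as the very first statement of the trigger, before any other statement can reset it:

ALTER TRIGGER TR_One ON One FOR INSERT,UPDATE
AS
BEGIN
DECLARE @rc INT = @@ROWCOUNT -- must be the first statement in the trigger body
PRINT @rc
SELECT @rc -- both now report the same row count
END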

SQL table variable empty after while loop

I have a SQL table I'm trying to query unique results. Based off of the "FileName" column I want to get only the most recent row for each filename.
In the example, I am pulling all files with the last name of "smith". The LoanNumber may be in multiple rows because the file may have been copied and so I want the most recent one only.
The code below results in no data. I get just a column header called "FileID" and no values. I believe the @ResultsTable is not keeping the data I'm trying to put into it with the INSERT on the 12th line. I do not know why. I tried moving the DECLARE statement for the table variable @ResultsTable around, and the best I can ever get it to display is a single record; in most of the places I put it, I only get "Must declare the table variable "@ResultsTable"."
What am I doing wrong that the table variable is not getting populated properly and maintaining its rows?
DECLARE @ResultsTable table(FileID varchar(10)); --table variable for the list of record IDs
DECLARE @ThisLoanNumber varchar(50); --current loan number for each record to be used during while loop
DECLARE LoanFiles CURSOR read_only FOR --create LoanFiles cursor to loop through
Select distinct [LoanNumber] from [Package_Files]
Where [LastName]='smith';
OPEN LoanFiles;
While @@FETCH_STATUS = 0 --If previous fetch was successful, loop through the cursor "LoanFiles"
BEGIN
FETCH NEXT FROM LoanFiles into @ThisLoanNumber; --Get the LoanNumber from the current row
INSERT Into @ResultsTable --insert the ID number of the row which was updated most recently of the rows which have a loan number equal to the number from the current row in "LoanFiles" cursor
Select Top 1 [iFileID] From [Package_Files] Where [LoanNumber]=@ThisLoanNumber Order By [UpdateTime] Desc;
END;
CLOSE LoanFiles;
DEALLOCATE LoanFiles;
Select * from @ResultsTable; --display results...
There are a couple of ways you can do this type of query without resorting to looping.
Here is using a cte.
with SortedValues as
(
select FileID
, ROW_NUMBER() over (partition by LoanNumber order by UpdateTime desc) as RowNum
from Package_Files
where LastName = 'Smith'
)
select FileID
from SortedValues
where RowNum = 1
Another option is to use a subquery approach similar to this.
select FileID
from
(
select FileID
, ROW_NUMBER() over (partition by LoanNumber order by UpdateTime desc) as RowNum
from Package_Files
where LastName = 'Smith'
) x
where x.RowNum = 1
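That said, if you do keep the cursor, the original loop never inserts anything because @@FETCH_STATUS is tested before any FETCH has run against this cursor, so it still holds the status of whatever fetch (if any) ran earlier on the connection. The usual pattern primes the loop with a FETCH before the WHILE and fetches again at the end of the body; a corrected sketch of the loop (variable sigils written as @):

OPEN LoanFiles;
FETCH NEXT FROM LoanFiles INTO @ThisLoanNumber; --prime the loop
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO @ResultsTable
SELECT TOP 1 [iFileID] FROM [Package_Files] WHERE [LoanNumber]=@ThisLoanNumber ORDER BY [UpdateTime] DESC;
FETCH NEXT FROM LoanFiles INTO @ThisLoanNumber; --fetch at the end of the body, not the start
END;
CLOSE LoanFiles;
DEALLOCATE LoanFiles;

Still, the ROW_NUMBER() versions above avoid the cursor entirely and are the better choice.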

Get the highest updated ID of a multi row update

Update: I am using Sql Server 2008 R2.
I am going to update a large number of rows, and to avoid unnecessary locking I will do this in batches of around a thousand rows per update.
Using SET ROWCOUNT I can limit the update to 1000 rows, and using WHERE ID > x I can set which batch it should run on.
But for this to work I need to know the highest ID from the just processed batch.
I could use OUTPUT to return all affected IDs and find the highest one in code, but I would like to be able to return just the highest ID.
I tried this
SELECT MAX(id)
FROM (
UPDATE mytable
SET maxvalue = (SELECT MAX(salesvalue) FROM sales WHERE cid = t.id GROUP BY cid)
OUTPUT inserted.id
FROM mytable t
WHERE t.userid > 0
) updates(id)
But it gives me this error
A nested INSERT, UPDATE, DELETE, or MERGE statement is not allowed in a SELECT statement that is not the immediate source of rows for an INSERT statement.
BUT if I try to insert the result into a table directly it is valid
CREATE TABLE #temp(id int)
INSERT INTO #temp
SELECT MAX(id)
FROM (
UPDATE mytable
SET maxvalue = (SELECT MAX(salesvalue) FROM sales WHERE cid = t.id GROUP BY cid)
OUTPUT inserted.id
FROM mytable t
WHERE t.userid > 0
) updates(id)
drop table #temp
Is there any workaround to this and can anyone explain why I can insert the result into a table but not just return the result?
DO NOT USE SET ROWCOUNT for this (or, at all), as BOL says:
Using SET ROWCOUNT will not affect DELETE, INSERT, and
UPDATE statements in the next release of SQL Server!
Do not use SET
ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development
work, and plan to modify applications that currently use it. Also, for
DELETE, INSERT, and UPDATE statements that currently use SET ROWCOUNT,
we recommend that you rewrite them to use the TOP syntax.
You could do it with a table variable, too:
DECLARE @Log TABLE (id INT NOT NULL);
UPDATE TOP (1000) t
SET maxvalue = (SELECT MAX(salesvalue) FROM sales WHERE cid = t.id GROUP BY cid)
OUTPUT inserted.id INTO @Log
FROM mytable t
WHERE t.userid > 0
SELECT maxid = MAX(id)
FROM @Log
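For completeness, a sketch of how this composes into the full batching loop the question describes. This is a hypothetical driver, assuming mytable has an indexed key column id to batch on; the TOP batch is taken through a CTE with ORDER BY so that the saved maximum advances deterministically:

DECLARE @Log TABLE (id INT NOT NULL);
DECLARE @lastID INT = 0, @rows INT = 1;

WHILE @rows > 0
BEGIN
DELETE FROM @Log; -- clear the previous batch's ids

;WITH batch AS (
SELECT TOP (1000) id, maxvalue
FROM mytable
WHERE userid > 0 AND id > @lastID
ORDER BY id -- deterministic batches, so id > @lastID never skips rows
)
UPDATE batch
SET maxvalue = (SELECT MAX(salesvalue) FROM sales WHERE cid = batch.id)
OUTPUT inserted.id INTO @Log;

SET @rows = @@ROWCOUNT; -- read immediately after the UPDATE
SELECT @lastID = MAX(id) FROM @Log;
END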

SQL Server Express 2005 - updating 2 tables and atomicity?

First off, I want to start by saying I am not an SQL programmer (I'm a C++/Delphi guy), so some of my questions might be really obvious. So pardon my ignorance :o)
I've been charged with writing a script that will update certain tables in a database based on the contents of a CSV file. I have it working it would seem, but I am worried about atomicity for one of the steps:
One of the tables contains only one field - an int which must be incremented each time, but from what I can see is not defined as an identity for some reason. I must create a new row in this table, and insert that row's value into another newly-created row in another table.
This is how I did it (as part of a larger script):
DECLARE @uniqueID INT,
@counter INT,
@maxCount INT
SELECT @maxCount = COUNT(*) FROM tempTable
SET @counter = 1
WHILE (@counter <= @maxCount)
BEGIN
SELECT @uniqueID = MAX(id) FROM uniqueIDTable <----Line 1
INSERT INTO uniqueIDTable VALUES (@uniqueID + 1) <----Line 2
SELECT @uniqueID = @uniqueID + 1
UPDATE TOP(1) tempTable
SET userID = @uniqueID
WHERE userID IS NULL
SET @counter = @counter + 1
END
GO
First of all, am I correct using a "WHILE" construct? I couldn't find a way to achieve this with a simple UPDATE statement.
Second of all, how can I be sure that no other operation will be carried out on the database between Lines 1 and 2 that would insert a value into the uniqueIDTable before I do? Is there a way to "synchronize" operations in SQL Server Express?
Also, keep in mind that I have no control over the database design.
Thanks a lot!
You can do the whole 9 yards in one single statement:
WITH cteUsers AS (
SELECT t.*
, ROW_NUMBER() OVER (ORDER BY userID) as rn
, COALESCE(m.id,0) as max_id
FROM tempTable t WITH(UPDLOCK)
JOIN (
SELECT MAX(id) as id
FROM uniqueIDTable WITH (UPDLOCK)
) as m ON 1=1
WHERE userID IS NULL)
UPDATE cteUsers
SET userID = rn + max_id
OUTPUT INSERTED.userID
INTO uniqueIDTable (id);
You get the MAX(id), lock the uniqueIDTable, compute sequential userIDs for users with NULL userID by using ROW_NUMBER(), update the tempTable and insert the new ids into uniqueIDTable. All in one operation.
For performance you need an index on uniqueIDTable(id) and an index on tempTable(userID).
SQL is all about set oriented operations, WHILE loops are the code smell of SQL.
You need a transaction to ensure atomicity, and you need to either move the SELECT and INSERT into one statement or do the SELECT with an UPDLOCK, to prevent two sessions from running the SELECT at the same time, getting the same value, and then trying to insert the same value into the table.
Basically
DECLARE @MaxValTable TABLE (MaxID int)
BEGIN TRANSACTION
BEGIN TRY
INSERT INTO uniqueIDTable (id)
OUTPUT inserted.id INTO @MaxValTable
SELECT MAX(id) + 1 FROM uniqueIDTable
UPDATE TOP(1) tempTable
SET userID = (SELECT MaxID FROM @MaxValTable)
WHERE userID IS NULL
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
RAISERROR('Error occurred updating tempTable', 16, 1) -- more detail here is good
END CATCH
That said, using an identity would make things far simpler. This is a potential concurrency problem. Is there any way you can change the column to be identity?
Edit: Ensuring that only one connection at a time will be able to insert into the uniqueIDtable. Not going to scale well though.
Edit: Table variable's better than exclusive table lock. If need be, this can be used when inserting users as well.
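For illustration, if the schema could be changed as suggested, an IDENTITY column removes the MAX(id) + 1 race entirely. A hypothetical sketch (not the OP's fixed schema):

CREATE TABLE uniqueIDTable (id INT IDENTITY(1,1) PRIMARY KEY);

INSERT INTO uniqueIDTable DEFAULT VALUES; -- the engine generates the next id atomically
SELECT SCOPE_IDENTITY(); -- the freshly generated id, safe under concurrent inserts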

Update SQL with consecutive numbering

I want to update a table with consecutive numbering starting with 1. The update has a where clause so only results that meet the clause will be renumbered. Can I accomplish this efficiently without using a temp table?
This probably depends on your database, but here is a solution for MySQL 5 that involves using a variable:
SET @a:=0;
UPDATE table SET field=@a:=@a+1 WHERE whatever='whatever' ORDER BY field2,field3
You should probably edit your question and indicate which database you're using however.
Edit: I found a solution utilizing T-SQL for SQL Server. It's very similar to the MySQL method:
DECLARE @myVar int
SET @myVar = 0
UPDATE
myTable
SET
@myVar = myField = @myVar + 1
For Microsoft SQL Server 2005/2008. ROW_NUMBER() function was added in 2005.
; with T as (select ROW_NUMBER() over (order by ColumnToOrderBy) as RN
, ColumnToHoldConsecutiveNumber from TableToUpdate
where ...)
update T
set ColumnToHoldConsecutiveNumber = RN
EDIT: For SQL Server 2000:
declare @RN int
set @RN = 0
Update T
set ColumnToHoldConsecutiveNumber = @RN
, @RN = @RN + 1
where ...
NOTE: When I tested, the increment of @RN appeared to happen prior to setting the column to @RN, so the above gives numbers starting at 1.
EDIT: I just noticed that it appears you want to create multiple sequential numbers within the table. Depending on the requirements, you may be able to do this in a single pass with SQL Server 2005/2008, by adding partition by to the over clause:
; with T as (select ROW_NUMBER()
over (partition by Client, City order by ColumnToOrderBy) as RN
, ColumnToHoldConsecutiveNumber from TableToUpdate)
update T
set ColumnToHoldConsecutiveNumber = RN
If you want to create a new PrimaryKey column, use just this:
ALTER TABLE accounts ADD id INT IDENTITY(1,1)
As well as using a CTE or a WITH, it is also possible to use an update with a self-join to the same table:
UPDATE a
SET a.columnToBeSet = b.sequence
FROM tableXxx a
INNER JOIN
(
SELECT ROW_NUMBER() OVER ( ORDER BY columnX ) AS sequence, columnY, columnZ
FROM tableXxx
WHERE columnY = @groupId AND columnZ = @lang2
) b ON b.columnY = a.columnY AND b.columnZ = a.columnZ
The derived table, alias b, is used to generate the sequence via the ROW_NUMBER() function, together with some other columns which form a virtual primary key.
Typically, each row will require a unique sequence value.
The WHERE clause is optional and limits the update to those rows that satisfy the specified conditions.
The derived table is then joined to the same table, alias a, joining on the virtual primary key columns with the column to be updated set to the generated sequence.
In Oracle this works:
update myTable set rowColum = rownum
where something = something else
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/pseudocolumns009.htm#i1006297
To get the example by Shannon fully working I had to edit his answer:
; WITH CTE AS (
SELECT ROW_NUMBER() OVER (ORDER BY [NameOfField]) as RowNumber, t1.ID
FROM [ActualTableName] t1
)
UPDATE [ActualTableName]
SET Name = 'Depersonalised Name ' + CONVERT(varchar(255), RowNumber)
FROM CTE
WHERE CTE.Id = [ActualTableName].ID
as his answer was trying to update T, which in his case was the name of the Common Table Expression, and it throws an error.
UPDATE TableName
SET TableName.id = TableName.New_Id
FROM (
SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS New_Id
FROM TableName
) TableName
I've used this technique for years to populate ordinals and sequentially numbered columns. However I recently discovered an issue with it when running on SQL Server 2012. It would appear that internally the query engine is applying the update using multiple threads and the predicate portion of the UPDATE is not being handled in a thread-safe manner. To make it work again I had to reconfigure SQL Server's max degree of parallelism down to 1 core.
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'max degree of parallelism', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
DECLARE @id int
SET @id = -1
UPDATE dbo.mytable
SET @id = Ordinal = @id + 1
Without this you'll find that most sequential numbers are duplicated throughout the table.
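Rather than lowering the server-wide max degree of parallelism, a parallelism-safe alternative is the ROW_NUMBER() form shown in other answers, which does not depend on variable-assignment order. A sketch, assuming some unique column (here id, hypothetical) to order by:

;WITH T AS (
SELECT Ordinal
-- subtract 1 so numbering starts at 0, like the variable-based version above
, ROW_NUMBER() OVER (ORDER BY id) - 1 AS RN
FROM dbo.mytable
)
UPDATE T
SET Ordinal = RN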
One more way to achieve the desired result
1. Create a sequence object - (https://learn.microsoft.com/en-us/sql/t-sql/statements/create-sequence-transact-sql?view=sql-server-ver16)
CREATE SEQUENCE dbo.mySeq
AS BIGINT
START WITH 1 -- up to you from what number you want to start cycling
INCREMENT BY 1 -- up to you how it will increment
MINVALUE 1
CYCLE
CACHE 100;
2. Update your records
UPDATE TableName
SET Col2 = NEXT VALUE FOR dbo.mySeq
WHERE ....some condition...
EDIT: To reset sequence to start from the 1 for the next time you use it
ALTER SEQUENCE dbo.mySeq RESTART WITH 1 -- or restart with any value you need
Join to a Numbers table? It involves an extra table, but it wouldn't be temporary -- you'd keep the numbers table around as a utility.
See http://web.archive.org/web/20150411042510/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
or
http://www.sqlservercentral.com/articles/Advanced+Querying/2547/
(the latter requires a free registration, but I find it to be a very good source of tips & techniques for MS SQL Server, and a lot is applicable to any SQL implementation).
It is possible, but only via some very complicated queries - basically you need a subquery that counts the number of records selected so far, and uses that as the sequence ID. I wrote something similar at one point - it worked, but it was a lot of pain.
To be honest, you'd be better off with a temporary table with an autoincrement field.
