I'm sure there's a more elegant (or simply "correct") way to do what I'm trying to achieve. I believe I need to use a Cursor, but can't quite wrap my head around how to.
I have the code below to find the days left in a contract, but unless I put the 'where' clause (which basically selects a specific record), I get this error message:
'Subquery returned more than 1 value'
That's why I think I need a cursor; to loop through the records, and update a field with the number of days left in a contract.
Here's what I have, which works inasmuch as it returns a number.
DECLARE @TodaysDT date = GETDATE()
DECLARE @ContractExpirationDT date =
(SELECT ExprDt from CONTRACTS
WHERE ID = 274);
DECLARE @DaysRemaining INT =
(SELECT DATEDIFF(dd, @ContractExpirationDT, @TodaysDT));
Print @DaysRemaining;
This returns a correct value for a specific record ID (this case, ID 274)
How do I use a cursor to step through each record, and then update a field in each record with the @DaysRemaining value?
Thank you for your time!
In my opinion you don't need a cursor; you can just run an UPDATE without a WHERE clause to calculate the remaining days for all rows.
Here is a basic example that you can use as a starting point:
--Create a table variable to hold test data
declare @contract table (Id int, ExprDt datetime, DaysRemaining int)
--Insert sample data
insert into @contract select 1, '20200101', null
insert into @contract select 2, '20201231', null
insert into @contract select 3, '20191231', null
insert into @contract select 274, '20191231', null
--Save today's date inside a variable
DECLARE @TodaysDT date = GETDATE()
--Update DaysRemaining field for each record
update @contract set DaysRemaining = DATEDIFF(dd, ExprDt, @TodaysDT)
--Select records to check results
select Id, ExprDt, DaysRemaining
from @contract
Running the final SELECT shows the DaysRemaining column populated for every row.
Hello guys, help me with this problem.
I have a table named comments in SQL Server.
I applied paging in a SQL procedure.
Step 1: First I fetch page 1 with 5 records.
Step 2: Now I create a new comment.
Step 3: I fetch page 2 with 5 records.
Step 4: I get 5 rows, which is fine, but one of them is a record that already appeared on page 1.
This happens because every time I create a new comment, the existing comments are shifted down by one, so I run into this problem on every insert.
Ex.
create proc getComments(@PageNumber tinyint, @PerPage INT, @TotalRecords INT OUTPUT)
AS
CREATE TABLE #TempTable
(RowNumber SMALLINT, Id int, CommentText nvarchar(max), CommentedBy nvarchar(256), CommentTime datetime)
INSERT INTO #TempTable
(RowNumber, Id, CommentText, CommentedBy, CommentTime)
SELECT ROW_NUMBER() OVER (ORDER BY CommentTime desc), Id, CommentText, CommentedBy, CommentTime from comments
SELECT @TotalRecords = COUNT(Id) FROM #TempTable
SELECT * FROM #TempTable
where RowNumber > (@PageNumber - 1) * @PerPage
AND RowNumber <= @PageNumber * @PerPage
GO
Your issue is that you are getting exactly what you are asking for in SQL. When you run the stored proc the next time with the additional row inserted, that row is being factored into the results of the query.
The only way to prevent new data from affecting your paging results is to remove the new data or begin paging again from the last record of the original page.
This assumes your Id column is an incrementing value.
CREATE PROC getComments(
@PageNumber tinyint, @PerPage INT,
@LastIdFromPreviousPage INT, @TotalRecords INT OUTPUT
)
AS
BEGIN
CREATE TABLE #TempTable
(RowNumber SMALLINT, Id INT, CommentText NVARCHAR(MAX),
CommentedBy NVARCHAR(256), CommentTime DATETIME)
INSERT INTO #TempTable
(RowNumber, Id, CommentText, CommentedBy, CommentTime)
SELECT
ROW_NUMBER() OVER (ORDER BY CommentTime desc),
Id, CommentText, CommentedBy, CommentTime
FROM comments
SELECT @TotalRecords = COUNT(Id) FROM #TempTable
SELECT *
FROM #TempTable
WHERE (@LastIdFromPreviousPage IS NULL
AND RowNumber > (@PageNumber - 1) * @PerPage
AND RowNumber <= @PageNumber * @PerPage)
OR (Id < @LastIdFromPreviousPage
AND Id >= @LastIdFromPreviousPage - @PerPage)
END
GO
You could also change @LastIdFromPreviousPage to a DATETIME holding the time you first began paging, and filter the results so that only comments created before that time are returned.
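Here is a minimal sketch of that variation. The procedure name getCommentsAsOf and the @PagingStartedAt parameter are illustrative names, not from the original answer; the caller is assumed to record the current time when the first page is requested and pass it back for every subsequent page.
CREATE PROC getCommentsAsOf(
@PageNumber tinyint, @PerPage INT,
@PagingStartedAt DATETIME, -- snapshot time captured when the first page was requested
@TotalRecords INT OUTPUT
)
AS
BEGIN
-- only comments that already existed when paging began are counted and paged,
-- so rows inserted afterwards cannot shift records between pages
SELECT @TotalRecords = COUNT(Id) FROM comments WHERE CommentTime <= @PagingStartedAt

SELECT Id, CommentText, CommentedBy, CommentTime
FROM (
SELECT ROW_NUMBER() OVER (ORDER BY CommentTime DESC) AS RowNumber,
Id, CommentText, CommentedBy, CommentTime
FROM comments
WHERE CommentTime <= @PagingStartedAt
) AS c
WHERE RowNumber > (@PageNumber - 1) * @PerPage
AND RowNumber <= @PageNumber * @PerPage
END
GO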
SQL Server 2012 introduced the OFFSET and FETCH clauses, which give an easy syntax for paging query results. I would suggest using them.
Also, I would suggest reading Aaron Bertrand's Pagination with OFFSET / FETCH : A better way article, especially if you encounter performance issues with pagination.
CREATE PROCEDURE getComments (
@PageNumber tinyint,
@PerPage INT,
@TotalRecords INT OUTPUT
)
AS
SELECT @TotalRecords = COUNT(Id) FROM comments;
SELECT Id, CommentText, CommentedBy, CommentTime
FROM comments
ORDER BY CommentTime desc
OFFSET (@PageNumber - 1) * @PerPage ROWS
FETCH NEXT @PerPage ROWS ONLY;
GO
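A usage sketch for the procedure above (assuming the comments table exists and the procedure has been created):
DECLARE @total INT
-- fetch page 2 with 5 comments per page and capture the total row count
EXEC getComments @PageNumber = 2, @PerPage = 5, @TotalRecords = @total OUTPUT
SELECT @total AS TotalRecords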
Here I have a table called tblemployee with id, name and salary columns. The table has a few rows; each name in the name column is different, while the salary column holds the same integer value (40,000) in every row.
Table tblemployee structure
name|salary
-----------
max |40000
rob |40000
jon |40000
Now what I want is all the names from the name column, but only one salary value from the salary column, as shown below:
name|salary
-----------
max |40000
rob |
jon |
Here is the SQL Server query I tried, which didn't give the expected output:
select DISTINCT salary,name from tblabca
Declare @tblemployee table (name varchar(25), salary int)
Insert Into @tblemployee values
('max',40000),
('rob',40000),
('jon',40000),
('joseph',25000),
('mary',25000)
Select Name
,Salary = case when RN=1 then cast(Salary as varchar(25)) else '' end
From (
Select *
,RN = Row_Number() over (Partition By Salary Order By Name)
,DR = Dense_Rank() over (Order By Salary)
From @tblemployee
) A
Order by DR Desc,RN
Returns
Name Salary
jon 40000
max
rob
joseph 25000
mary
"GROUP BY" and group_concat would suit your case. Please try like this
select salary, group_concat(name) from tblabca group by salary;
Reference: GROUP BY, GROUP_CONCAT
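Note that GROUP_CONCAT is MySQL syntax and does not exist in SQL Server. On SQL Server 2017 and later, STRING_AGG is the closest equivalent; here is a minimal sketch against the sample data above, assuming the table is named tblemployee:
-- one row per salary, with the matching names collapsed into a comma-separated list
SELECT salary, STRING_AGG(name, ',') AS names
FROM tblemployee
GROUP BY salary;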
You will never get the result you stated with DISTINCT, because the DISTINCT operator works on a set of columns, not on an individual column. In a relational database you only ever work with sets.
So the combination of Salary and Name is what gets treated as distinct.
But if you want, you can get the names as a comma-concatenated list like below:
SELECT SALARY,
       STUFF((SELECT ',' + NAME FROM TableA T2
              WHERE T1.SALARY = T2.SALARY
              FOR XML PATH('')), 1, 1, '') AS NAMES
FROM TableA T1
As others have already stated, you are definitely not looking for the DISTINCT operator.
The DISTINCT operator works on the entire result set, meaning you get result rows that are unique column by column.
Although with some rework you might end up with the result you want, do you really want the result in such a non-uniform shape? I mean, getting a list of names in the name column and only one salary in the salary column does not look like a nice result set to work with.
Maybe you should rework your code to account for the change you want to make in the query.
declare @tblemployee Table(
id int identity(1,1) not null primary key,
name nvarchar(MAX) not null,
salary int not null
);
declare @Result Table(
name nvarchar(MAX) not null,
salaryString nvarchar(MAX)
);
insert into @tblemployee(name,salary) values ('joseph' ,25000);
insert into @tblemployee(name,salary) values ('mary' ,25000);
insert into @tblemployee(name,salary) values ('Max' ,40000);
insert into @tblemployee(name,salary) values ('rob' ,40000);
insert into @tblemployee(name,salary) values ('jon' ,40000);
declare @LastSalary int = 0;
declare @name nvarchar(MAX);
declare @salary int;
DECLARE iterator CURSOR LOCAL FAST_FORWARD FOR
SELECT name,
salary
FROM @tblemployee
Order by salary desc
OPEN iterator
FETCH NEXT FROM iterator INTO @name,@salary
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@salary != @LastSalary)
BEGIN
SET @LastSalary = @salary
-- first row of a new salary group: print the salary
insert into @Result(name,salaryString)
values(@name, cast(@salary as nvarchar(25)));
END
ELSE
BEGIN
-- same salary as the previous row: leave the salary column blank
insert into @Result(name,salaryString)
values(@name,'');
END
FETCH NEXT FROM iterator INTO @name,@salary
END
CLOSE iterator
DEALLOCATE iterator
Select * from @Result
I want to make a database query with pagination. So, I used a common-table expression and a ranked function to achieve this. Look at the example below.
declare @table table (name varchar(30));
insert into @table values ('Jeanna Hackman');
insert into @table values ('Han Fackler');
insert into @table values ('Tiera Wetherbee');
insert into @table values ('Hilario Mccray');
insert into @table values ('Mariela Edinger');
insert into @table values ('Darla Tremble');
insert into @table values ('Mammie Cicero');
insert into @table values ('Raisa Harbour');
insert into @table values ('Nicholas Blass');
insert into @table values ('Heather Hayashi');
declare @pagenumber int = 2;
declare @pagesize int = 3;
declare @total int;
with query as
(
select name, ROW_NUMBER() OVER(ORDER BY name ASC) as line from @table
)
select top (@pagesize) name from query
where line > (@pagenumber - 1) * @pagesize
Here, I can specify the @pagesize and @pagenumber variables to give me just the records that I want. However, this example (which comes from a stored procedure) is used for grid pagination in a web application. This web application needs to show the page numbers. For instance, if I have 12 records in the database and the page size is 3, then I'll have to show 4 links, each one representing a page.
But I can't do this without knowing how many records are there, and this example just gives me the subset of records.
Then I changed the stored procedure to return the count(*).
declare @pagenumber int = 2;
declare @pagesize int = 3;
declare @total int;
with query as
(
select name, ROW_NUMBER() OVER(ORDER BY name ASC) as line, total = count(*) over() from @table
)
select top (@pagesize) name, total from query
where line > (@pagenumber - 1) * @pagesize
So, along with each line, it will show the total number of records. But I didn't like it.
My question is if there's a better way (performance) to do this, maybe setting the @total variable without returning this information in the SELECT. Or is this total column something that won't harm the performance too much?
Thanks
Assuming you are using SQL Server 2012 or later, you can use OFFSET and FETCH, which clean up server-side paging greatly. We've found performance is fine, and in most cases better. As far as getting the total row count, just use the window function below inline... it will not be affected by the limits imposed by OFFSET and FETCH.
For the row number, you can use window functions the way you did, but I would recommend calculating that client side as (pageNumber * pageSize + resultSetRowNumber), so if you're on the 5th page of 10 results and on the third row you would output row 53.
When applied to an Orders table with about 2 million orders, I found the following:
FAST VERSION
This ran in under a second. The nice thing about it is that you can do your filtering in the common table expression once and it applies both to the paging process and the count. When you have many predicates in the where clause, this keeps things simple.
declare @skipRows int = 25,
        @takeRows int = 100,
        @count int = 0
;WITH Orders_cte AS (
SELECT OrderID
FROM dbo.Orders
)
SELECT
OrderID,
tCountOrders.CountOrders AS TotalRows
FROM Orders_cte
CROSS JOIN (SELECT Count(*) AS CountOrders FROM Orders_cte) AS tCountOrders
ORDER BY OrderID
OFFSET @skipRows ROWS
FETCH NEXT @takeRows ROWS ONLY;
SLOW VERSION
This took about 10 sec, and it was the Count(*) that caused the slowness. I'm surprised this is so slow, but I suspect it's simply calculating the total for each row. It's very clean though.
declare @skipRows int = 25,
        @takeRows int = 100,
        @count int = 0
SELECT
OrderID,
Count(*) Over() AS TotalRows
FROM Location.Orders
ORDER BY OrderID
OFFSET @skipRows ROWS
FETCH NEXT @takeRows ROWS ONLY;
CONCLUSION
We've gone through this performance tuning process before and actually found that it depended on the query, the predicates used, and the indexes involved. For instance, the second we introduced a view it chugged, so we actually query off the base table and then join up the view (which includes the base table), and it performs very well (a sketch of that pattern follows below).
I would suggest having a couple of straightforward strategies and applying them to high-value queries that are chugging.
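Here is a minimal sketch of that base-table-then-view pattern. The view name dbo.vOrderDetails is a placeholder, not from the original answer: page the keys from the narrow base table first, then join the wider view only for the rows on the current page.
DECLARE @skipRows INT = 25,
        @takeRows INT = 100
-- page the narrow base table first...
;WITH PagedKeys AS (
    SELECT OrderID
    FROM dbo.Orders
    ORDER BY OrderID
    OFFSET @skipRows ROWS
    FETCH NEXT @takeRows ROWS ONLY
)
-- ...then join the expensive view only for the rows that survived the paging
SELECT v.*
FROM PagedKeys pk
JOIN dbo.vOrderDetails v ON v.OrderID = pk.OrderID
ORDER BY pk.OrderID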
DECLARE @pageNumber INT = 1,
        @RowsPerPage INT = 20
SELECT *
FROM TableName
ORDER BY Id
OFFSET ((@pageNumber - 1) * @RowsPerPage) ROWS
FETCH NEXT @RowsPerPage ROWS ONLY;
What if you calculate the count beforehand?
declare @pagenumber int = 2;
declare @pagesize int = 3;
declare @total int;
SELECT @total = count(*)
FROM @table;
with query as
(
select name, ROW_NUMBER() OVER(ORDER BY name ASC) as line from @table
)
select top (@pagesize) name, @total total from query
where line > (@pagenumber - 1) * @pagesize
Another way is to calculate MAX(line). Check the link:
Return total records from SQL Server when using ROW_NUMBER
UPD:
For single query, check marc_s's answer on the link above.
with query as
(
select name, ROW_NUMBER() OVER(ORDER BY name ASC) as line from @table
)
select top (@pagesize) name,
(SELECT MAX(line) FROM query) AS total
from query
where line > (@pagenumber - 1) * @pagesize
@pagenumber = 5
@pagesize = 5
Create a common table expression and write logic like this:
BETWEEN ((@pagenumber - 1) * @pagesize) + 1 AND (@pagenumber * @pagesize)
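Here is a minimal sketch of what that answer describes, reusing the @table variable and the ROW_NUMBER() CTE from the question:
declare @pagenumber int = 5;
declare @pagesize int = 5;
with query as
(
select name, ROW_NUMBER() OVER(ORDER BY name ASC) as line from @table
)
select name
from query
-- rows ((p - 1) * size) + 1 through p * size make up page p
where line between ((@pagenumber - 1) * @pagesize) + 1 and (@pagenumber * @pagesize)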
There are many ways we can achieve pagination; I hope this information is useful to you and others.
Example 1: using the OFFSET / FETCH NEXT clause, introduced in SQL Server 2012
declare @table table (name varchar(30));
insert into @table values ('Jeanna Hackman');
insert into @table values ('Han Fackler');
insert into @table values ('Tiera Wetherbee');
insert into @table values ('Hilario Mccray');
insert into @table values ('Mariela Edinger');
insert into @table values ('Darla Tremble');
insert into @table values ('Mammie Cicero');
insert into @table values ('Raisa Harbour');
insert into @table values ('Nicholas Blass');
insert into @table values ('Heather Hayashi');
declare @pagenumber int = 1;
declare @pagesize int = 3;
--this is a CTE (common table expression, introduced in SQL Server 2005)
with query as
(
select ROW_NUMBER() OVER(ORDER BY name ASC) as line, name from @table
)
--an order by clause is required to use offset-fetch
select * from query
order by name
offset ((@pagenumber - 1) * @pagesize) rows
fetch next @pagesize rows only
Example 2: using row_number() function and between
declare @table table (name varchar(30));
insert into @table values ('Jeanna Hackman');
insert into @table values ('Han Fackler');
insert into @table values ('Tiera Wetherbee');
insert into @table values ('Hilario Mccray');
insert into @table values ('Mariela Edinger');
insert into @table values ('Darla Tremble');
insert into @table values ('Mammie Cicero');
insert into @table values ('Raisa Harbour');
insert into @table values ('Nicholas Blass');
insert into @table values ('Heather Hayashi');
declare @pagenumber int = 2
declare @pagesize int = 3
SELECT *
FROM
(select ROW_NUMBER() OVER (ORDER BY name ASC) AS RowNum, * from @table)
as paged
where RowNum between (((@pagenumber - 1) * @pagesize) + 1)
and (@pagenumber * @pagesize)
I hope these will be helpful to all
I don't like other solutions for being too complex, so here is my version.
Execute three select queries in one go and use output parameters for getting the count values. This query returns the total count, the filter count, and the page rows. It supports sorting, searching, and filtering the source data. It's easy to read and modify.
Let's say you have two tables with a one-to-many relationship: items, and their prices changing over time, so the example query is not too trivial.
create table shop.Items
(
Id uniqueidentifier not null primary key,
Name nvarchar(100) not null
);
create table shop.Prices
(
ItemId uniqueidentifier not null,
Updated datetime not null,
Price money not null,
constraint PK_Prices primary key (ItemId, Updated),
constraint FK_Prices_Items foreign key (ItemId) references shop.Items(Id)
);
Here is the query:
select @TotalCount = count(*) over()
from shop.Items i;
select @FilterCount = count(*) over()
from shop.Items i
outer apply (select top 1 p.Price, p.Updated from shop.Prices p where p.ItemId = i.Id order by p.Updated desc) as p
where (@Search is null or i.Name like '%' + @Search + '%')/**where**/;
select i.Id as ItemId, i.Name, p.Price, p.Updated
from shop.Items i
outer apply (select top 1 p.Price, p.Updated from shop.Prices p where p.ItemId = i.Id order by p.Updated desc) as p
where (@Search is null or i.Name like '%' + @Search + '%')/**where**/
order by /**orderby**/i.Id
offset @SkipCount rows fetch next @TakeCount rows only;
You need to provide the following parameters to the query:
@SkipCount - how many records to skip, calculated from the page number.
@TakeCount - how many records to return, calculated from or equal to the page size.
@Search - a text to search for in some columns, provided by the grid search box.
@TotalCount - the total number of records in the data source, an output parameter.
@FilterCount - the number of records after the search and filtering operations, an output parameter.
You can replace the /**orderby**/ comment with the list of columns and their sort directions if the grid must support sorting rows by columns; you get this info from the grid and translate it into an SQL expression. We still need to order the records by some column initially; I usually use the ID column for that.
If the grid must support filtering data by each column individually, you can replace the /**where**/ comment with an SQL expression for that.
If the user is not searching and filtering the data, but only clicks through the grid pages, this query doesn't change at all and the database server executes it very quickly.
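Here is a minimal sketch of how those placeholders might be swapped in at runtime, assuming the query text is built as dynamic SQL and executed with sp_executesql (the sort column chosen here is just an example):
DECLARE @sql nvarchar(max) = N'
select i.Id as ItemId, i.Name, p.Price, p.Updated
from shop.Items i
outer apply (select top 1 p.Price, p.Updated from shop.Prices p
             where p.ItemId = i.Id order by p.Updated desc) as p
where (@Search is null or i.Name like ''%'' + @Search + ''%'')/**where**/
order by /**orderby**/i.Id
offset @SkipCount rows fetch next @TakeCount rows only;'

-- splice the grid's sort column in front of the default ordering
SET @sql = REPLACE(@sql, N'/**orderby**/', N'i.Name asc, ')

EXEC sp_executesql @sql,
     N'@Search nvarchar(100), @SkipCount int, @TakeCount int',
     @Search = NULL, @SkipCount = 0, @TakeCount = 20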
SQL SERVER 2000:
I have a table with test data (about 100,000 rows), and I want to update a column in it with random data taken from another table. According to this question, this is what I am trying:
UPDATE testdata
SET type = (SELECT TOP 1 id FROM testtypes ORDER BY CHECKSUM(NEWID()))
-- or even
UPDATE testdata
SET type = (SELECT TOP 1 id FROM testtypes ORDER BY NEWID())
However, the "type" field still ends up with the same value in all rows. Any ideas what I am doing wrong?
[EDIT]
I would expect this query to return one different value for each row, but it doesn't:
SELECT testdata.id, (SELECT TOP 1 id FROM testtypes ORDER BY CHECKSUM(NEWID())) type
FROM testdata
-- however seeding a rand value works
SELECT testdata.id, (SELECT TOP 1 id FROM testtypes ORDER BY CHECKSUM(NEWID()) + RAND(testdata.id)) type
FROM testdata
Your problem is: you are selecting only a single value, and then updating all rows with that one single value.
In order to really get a randomization going, you need to do a step-by-step / looping approach - I tried this in SQL Server 2008, but I think it should work in SQL Server 2000 as well:
-- declare a temporary TABLE variable in memory
DECLARE @Temporary TABLE (ID INT)
-- insert all your ID values (the PK) into that temporary table
INSERT INTO @Temporary SELECT ID FROM dbo.TestData
-- check to see we have the values
SELECT COUNT(*) AS 'Before the loop' FROM @Temporary
-- pick an ID from the temporary table at random
DECLARE @WorkID INT
SELECT TOP 1 @WorkID = ID FROM @Temporary ORDER BY NEWID()
WHILE @WorkID IS NOT NULL
BEGIN
-- now update exactly one row in your base table with a new random value
UPDATE dbo.TestData
SET [type] = (SELECT TOP 1 id FROM dbo.TestTypes ORDER BY NEWID())
WHERE ID = @WorkID
-- remove that ID from the temporary table - has been updated
DELETE FROM @Temporary WHERE ID = @WorkID
-- first set @WorkID back to NULL and then pick a new ID from
-- the temporary table at random
SET @WorkID = NULL
SELECT TOP 1 @WorkID = ID FROM @Temporary ORDER BY NEWID()
END
-- check to see we have no more IDs left
SELECT COUNT(*) AS 'After the update loop' FROM @Temporary
You need to force a per-row calculation when selecting the new ids.
Something like this should do the trick:
-- referencing the outer row (outerTT.id) inside the subquery forces SQL Server
-- to re-evaluate the random pick for every row instead of caching a single value
UPDATE outerTT
SET type = (SELECT TOP 1 id FROM testtypes ORDER BY outerTT.id * 0 + CHECKSUM(NEWID()))
FROM testdata outerTT
I am using SQL Server 2005. I have heard that we can use a table variable instead of a LEFT OUTER JOIN.
What I understand is that we first have to put all the values from the left table into the table variable, then UPDATE the table variable with the right table's values, and then select from the table variable.
Has anyone come across this kind of approach? Could you please suggest a real time example (with query)?
I have not written any query for this. My question is - if someone has used a similar approach, I would like to know the scenario and how it is handled. I understand that in some cases it may be slower than the LEFT OUTER JOIN.
Please assume that we are dealing with tables that have less than 5000 records.
Thanks
It can be done, but I have no idea why you would ever want to do it.
This really does seem like it is being done backwards. But if you are trying this for your own learning only, here goes:
DECLARE @MainTable TABLE(
ID INT,
Val FLOAT
)
INSERT INTO @MainTable SELECT 1, 1
INSERT INTO @MainTable SELECT 2, 2
INSERT INTO @MainTable SELECT 3, 3
INSERT INTO @MainTable SELECT 4, 4
DECLARE @LeftTable TABLE(
ID INT,
MainID INT,
Val FLOAT
)
INSERT INTO @LeftTable SELECT 1, 1, 11
INSERT INTO @LeftTable SELECT 3, 3, 33
SELECT *,
mt.Val + ISNULL(lt.Val, 0)
FROM @MainTable mt LEFT JOIN
@LeftTable lt ON mt.ID = lt.MainID
DECLARE @Table TABLE(
ID INT,
Val FLOAT
)
INSERT INTO @Table
SELECT ID,
Val
FROM @MainTable
UPDATE @Table
SET Val = t.Val + lt.Val
FROM @Table t INNER JOIN
@LeftTable lt ON t.ID = lt.ID
SELECT *
FROM @Table
I don't think it's very clear from your question what you want to achieve? (What your tables look like, and what result you want). But you can certainly select data into a variable of a table datatype, and tamper with it. It's quite convenient:
DECLARE @tbl TABLE (id INT IDENTITY(1,1), userId int, foreignId int)
INSERT INTO @tbl (userId)
SELECT id FROM users
WHERE name LIKE 'a%'
-- update through the alias so the correlated subquery can reference each row
UPDATE t
SET foreignId = (SELECT id FROM foreignTable f WHERE f.userId = t.userId)
FROM @tbl t
In that example I gave the table variable an identity column of its own, distinct from the one in the source table. I often find that useful. Adjust as you like... Again, it's not very clear what the question is, but I hope this might guide you in the right direction...?
Every scenario is different, and without full details on a specific case it's difficult to say whether it would be a good approach for you.
Having said that, I would not be looking to use the table variable approach unless I had a specific functional reason to - if the query can be fulfilled with a standard SELECT query using an OUTER JOIN, then I'd use that as I'd expect that to be most efficient.
The times when you may want to use a temp table or table variable instead are more when you want to get an intermediate result set and then do some processing on it before returning it - i.e. the kind of processing that cannot be done with a straightforward query.
Note that table variables are very handy, but take into account that they are not guaranteed to reside in memory - they can get persisted to tempdb just like standard temp tables.
Thank you, astander.
I tried it with the example given below. Both approaches took 19 seconds. However, I guess some tuning would help the table variable update approach become faster than the LEFT JOIN.
As I am not a master of tuning, I request your help. Any SQL expert ready to prove it?
CREATE TABLE #MainTable (
CustomerID INT PRIMARY KEY,
FirstName VARCHAR(100)
)
DECLARE @Count INT
SET @Count = 0
DECLARE @Iterator INT
SET @Iterator = 0
WHILE @Count < 8000
BEGIN
INSERT INTO #MainTable SELECT @Count, 'Cust' + CONVERT(VARCHAR(10), @Count)
SET @Count = @Count + 1
END
CREATE TABLE #RightTable
(
OrderID INT PRIMARY KEY,
CustomerID INT,
Product VARCHAR(100)
)
CREATE INDEX [IDX_CustomerID] ON #RightTable (CustomerID)
WHILE @Iterator < 400000
BEGIN
IF @Iterator % 2 = 0
BEGIN
INSERT INTO #RightTable SELECT @Iterator, 2, 'Prod' + CONVERT(VARCHAR(10), @Iterator)
END
ELSE
BEGIN
INSERT INTO #RightTable SELECT @Iterator, 1, 'Prod' + CONVERT(VARCHAR(10), @Iterator)
END
SET @Iterator = @Iterator + 1
END
-- Using LEFT JOIN
SELECT mt.CustomerID,mt.FirstName,COUNT(rt.Product) [CountResult]
FROM #MainTable mt
LEFT JOIN #RightTable rt ON mt.CustomerID = rt.CustomerID
GROUP BY mt.CustomerID,mt.FirstName
---------------------------
-- Using Table variable Update
DECLARE @WorkingTableVariable TABLE
(
CustomerID INT,
FirstName VARCHAR(100),
ProductCount INT
)
INSERT
INTO @WorkingTableVariable (CustomerID, FirstName)
SELECT CustomerID, FirstName FROM #MainTable
UPDATE @WorkingTableVariable
SET ProductCount = [Count]
FROM @WorkingTableVariable wt
INNER JOIN
(SELECT CustomerID, COUNT(rt.Product) AS [Count]
FROM #RightTable rt
GROUP BY CustomerID) IV ON wt.CustomerID = IV.CustomerID
SELECT CustomerID, FirstName, ISNULL(ProductCount, 0) [CountResult] FROM @WorkingTableVariable
ORDER BY CustomerID
--------
DROP TABLE #MainTable
DROP TABLE #RightTable
Thanks
Lijo
In my opinion there is one reason to do this:
If you have a complicated query with lots of inner joins and one left join, you sometimes get into trouble because that query is hundreds of times slower than the same query without the left join.
If you query lots of records but only a few of them end up being joined to the left-joined table, you can get faster results by materializing the intermediate result into a table variable or temp table.
But usually there is no need to actually update the data in the table variable - you can query the table variable using the left join to return the result, as sketched below.
... just my two cents.
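Here is a minimal sketch of that idea, reusing the #MainTable and #RightTable test tables from Lijo's script above (run that script without its final DROP TABLE statements first); the CustomerID filter is just a stand-in for whatever inner joins shrink the result set:
-- materialize the filtered/expensive part of the query once
DECLARE @Intermediate TABLE (CustomerID INT PRIMARY KEY, FirstName VARCHAR(100))

INSERT INTO @Intermediate (CustomerID, FirstName)
SELECT CustomerID, FirstName
FROM #MainTable
WHERE CustomerID < 100   -- stand-in for the joins/filters that reduce the row count

-- then left join the small materialized set - no UPDATE needed
SELECT i.CustomerID, i.FirstName, COUNT(rt.Product) AS [CountResult]
FROM @Intermediate i
LEFT JOIN #RightTable rt ON rt.CustomerID = i.CustomerID
GROUP BY i.CustomerID, i.FirstName
ORDER BY i.CustomerID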