How to know there are 0 results in a SELECT - sql-server

I have a SQL script that checks a SELECT statement against a table and should output the conditions that return no result. How do I do this? I am using variables and FETCH NEXT to go through each row in the table.
Example:
declare @1 varchar(10)
declare @2 varchar(10)
declare @1 cursor for
select userid from users
declare @2 cursor for
select pay from pays where pay_userid = @1
Then, if there are no records in pays for a certain userid, I want to output that to the screen, or insert it into a temp table.
My problem is that the userids with no match are never output, because the last query returns no result for them.

Maybe you can run "SELECT COUNT(*) FROM ....." first, to judge whether the query returns any result.

You can get the COUNT of your query, then check it: if that COUNT is > 0, do your SELECT; otherwise, do something else.

FETCH NEXT FROM your_cursor;
IF @@FETCH_STATUS = 0
BEGIN
-- do your stuff
END

After a SELECT statement you can check the @@ROWCOUNT variable, like this:
SELECT ...
IF @@ROWCOUNT = 0
PRINT 'select statement returned empty result'
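For the original question, a cursor isn't strictly needed: a set-based anti-join returns the userids with no matching pays rows directly. A minimal sketch, assuming the users and pays tables and columns named in the question:
SELECT u.userid
FROM users u
-- anti-join: keep only userids with zero matching rows in pays
WHERE NOT EXISTS (SELECT 1 FROM pays p WHERE p.pay_userid = u.userid);
Each userid returned here has zero rows in pays, which is exactly the "no result" case the cursor loop was trying to detect.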

Related

How to run series of values through stored procedure

I have a stored procedure that I need to run a list of values through and output into a temp table.
This is the SP: EXEC [SP_ReturnHTML] @zoneid, 1
The first value, I assume, will be a variable and the second value will be hard-coded. I am not able to modify this SP, as it is used in other processes, so I need to run these values through the SP via a cursor or WHILE loop. The values only need to be run through once, so a FAST_FORWARD cursor type may be more ideal, based on some preliminary reading on cursors (of which my experience in is extremely limited). This is what I attempted:
declare @zoneid int = (select zoneid from #values)
declare list cursor fast_forward
for EXEC [SP_ReturnHTML] @zoneid, 1
open list
fetch next from list
But when I try to do this, I get the error Incorrect syntax near the keyword 'EXEC'.
The output of this SP, when using @zoneid = 14105 (and the hard-coded 1 relates to the fieldgroupid) looks something like the shot below. For clarity, despite using @zoneid = 14105, the reason a value of 4054 shows up is due to the way the SP is written, and is intended. The two values relate to a state and county relationship, noted by the first 2 columns, ParentHeaderId and HeaderId. I opted to use 14105 for the example, because the 3 examples in the #values table only retrieve their secondary value and I wanted to avoid confusion here.
The values that I need to run through the SP for the @zoneid are in a table (which has about 3100 rows), which can be exemplified with the following:
create table #values (zoneid int)
insert into #values
values
(13346),
(13347),
(13348)
So very simply put, I need something like the following as a final product (pseudo code):
declare @zoneid INT = (select zoneid from #values)
select * into #results from
(
EXEC [SP_ReturnHTML] @zoneid, 1
)
Something like this:
drop table if exists #results
drop table if exists #Data
go
create or alter procedure [SP_ReturnHTML] @value int, @s varchar(20)
as
begin
select concat(' value=', @value, '; s = ', @s)
end
go
create table #Data (value int, county varchar(30))
insert into #Data
values
(100, 'Baker'),
(101,'Baldwin'),
(102,'Baldwin'),
(103,'Ballard'),
(104,'Baltimore City'),
(105,'Baltimore'),
(106,'Bamberg'),
(107,'Bandera'),
(108,'Banders'),
(109,'Banks'),
(110,'Banner'),
(111,'Bannock'),
(112,'Baraga')
go
create table #results(value nvarchar(200))
declare c cursor local for select value from #Data
declare @value int
open c
fetch next from c into @value
while @@fetch_status = 0
begin
insert into #results(value)
EXEC [SP_ReturnHTML] @value, '1'
fetch next from c into @value
end
go
select *
from #results
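One caveat with the INSERT ... EXEC pattern used above: it cannot be nested (a procedure that itself contains INSERT ... EXEC cannot be captured this way), and the target table's columns must line up exactly with the procedure's result set.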

Only return N rows affected when used sqlcmd

When I use sqlcmd, it returns the "N rows affected" messages plus the results. I only want the "N rows affected" messages. Is there some configuration for sqlcmd to do this?
I'm going to guess this is an XY question, and I'm basing this answer on what I'm guessing are the real OP's needs: that they are checking the number of rows in a table/statement, and are then checking the rows-affected message to ensure it meets an expectation.
I think, instead, that the right approach for the OP's odd setup would be to use SET NOCOUNT ON and then create your own PRINT statement. This is overly simplified SQL, however, as I have no query to work with. First, a sample table:
USE Sandbox;
Go
--Create sample table
CREATE TABLE #test (i int);
INSERT INTO #test
VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9);
GO
And then the solution:
--The solution
SET NOCOUNT ON; --This turns the normal message off
DECLARE @Count int;
SELECT @Count = COUNT(*)
FROM #test
WHERE i > 5;
IF @Count = 1 BEGIN
PRINT '(' + CONVERT(varchar(5),@Count) + ' row affected)';
END ELSE BEGIN
PRINT '(' + CONVERT(varchar(5),@Count) + ' rows affected)';
END
And a clean up
--Clean up
GO
DROP TABLE #test;
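If the goal is simply to keep the result sets out of the console, the script above can also be run through sqlcmd from a file; the server and file names below are placeholders:
sqlcmd -S YourServer -d Sandbox -i rowcount_check.sql
With SET NOCOUNT ON in the script, the only output is the custom PRINT message.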

T-SQL different result between code in stored and same code in query pane

I'm working on a procedure that should return a 0 or a 1, depending on the result of a parameter calculation (the parameters are used to interrogate 2 tables in a database).
When I execute the code in a query pane, it gives me the results I'm expecting.
The code looks like:
SELECT TOP 1 state, updDate INTO #history
FROM [xxx].[dbo].[ImportHystory] WHERE (db = 'EB') ORDER BY addDate DESC;
IF (SELECT state FROM #history) = 'O'
BEGIN
SELECT TOP 1 * INTO #process_status
FROM yyy.dbo.process_status WHERE KeyName = 'eb-importer';
IF(SELECT s.EndDate FROM #process_status s) IS NOT NULL
IF (SELECT s.EndDate FROM #process_status s) > (SELECT h.updDate FROM #history h)
BEGIN
IF (SELECT MessageLog from #process_status) IS NOT NULL SELECT 1;
ELSE SELECT 0;
END
ELSE
SELECT 1;
ELSE
SELECT 1;
END
ELSE
SELECT 0
I'm in the situation where EndDate from #process_status is null, so the execution returns 1.
Once I put the SAME code in an SP and pass 'EB' and 'eb-importer' as parameters, it returns 0.
And I execute the procedure with the data from the table right in front of me, so I know for sure that the result is wrong.
Inside the procedure:
ALTER PROCEDURE [dbo].[can_start_import] (@keyName varchar, @db varchar, @result bit output)
DECLARE @result bit;
and replace every
SELECT {0|1}
with
SELECT @result = {0|1}
Executed from the Query pane:
DECLARE @result bit;
EXEC [dbo].[can_start_import] @KeyName = 'eb-importer', @db = 'EB', @result = @result OUTPUT
SELECT @result AS N'@result'
Why does this happen?
You are doing a TOP (1) query without an ORDER BY. That means SQL Server can pick any row from the table that matches the WHERE clause.
If you want to guarantee that the result is the same every time you execute that code, you need an ORDER BY clause that unambiguously orders the rows.
So, apparently 2 things needed to be done:
set a higher length on the varchar parameters,
filter with LIKE instead of = for god knows what reason
Now it works as I expected, but I still don't understand the different results between the query pane and the procedure when I use equals...
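The parameter length is very likely the whole explanation: a procedure parameter declared as varchar with no length defaults to varchar(1), so 'EB' arrives as 'E' and 'eb-importer' as 'e', and the = comparisons quietly match nothing. A minimal demo, using a hypothetical procedure name:
create or alter procedure dbo.demo_param_length @p varchar
as
begin
-- @p was declared without a length, so it is varchar(1)
select @p as truncated_value;
end;
go
exec dbo.demo_param_length @p = 'eb-importer'; -- returns just 'e'
Declaring the parameters with an explicit length, e.g. varchar(50), makes = behave the same in the query pane and in the procedure.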

How to update large table with millions of rows in SQL Server?

I have an UPDATE statement that can update more than a million records. I want to update them in batches of 1000 or 10000. I tried with @@ROWCOUNT but I am unable to get the desired result.
Just for testing purposes, I selected a table with 14 records and set a row count of 5. This query is supposed to update records in batches of 5, 5, and 4, but it only updates the first 5 records.
Query - 1:
SET ROWCOUNT 5
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 5
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
PRINT (@@ROWCOUNT)
END
SET rowcount 0
Query - 2:
SET ROWCOUNT 5
WHILE (@@ROWCOUNT > 0)
BEGIN
BEGIN TRANSACTION
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
PRINT (@@ROWCOUNT)
IF @@ROWCOUNT = 0
BEGIN
COMMIT TRANSACTION
BREAK
END
COMMIT TRANSACTION
END
SET ROWCOUNT 0
What am I missing here?
You should not be updating 10k rows in a set unless you are certain that the operation is getting Page Locks (due to multiple rows per page being part of the UPDATE operation). The issue is that Lock Escalation (from either Row or Page to Table locks) occurs at 5000 locks. So it is safest to keep it just below 5000, just in case the operation is using Row Locks.
You should not be using SET ROWCOUNT to limit the number of rows that will be modified. There are two issues here:
It has been deprecated since SQL Server 2005 was released (11 years ago):
Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax
It can affect more than just the statement you are dealing with:
Setting the SET ROWCOUNT option causes most Transact-SQL statements to stop processing when they have been affected by the specified number of rows. This includes triggers. The ROWCOUNT option does not affect dynamic cursors, but it does limit the rowset of keyset and insensitive cursors. This option should be used with caution.
Instead, use the TOP () clause.
There is no purpose in having an explicit transaction here. It complicates the code and you have no handling for a ROLLBACK, which isn't even needed since each statement is its own transaction (i.e. auto-commit).
Assuming you find a reason to keep the explicit transaction, then you do not have a TRY / CATCH structure. Please see my answer on DBA.StackExchange for a TRY / CATCH template that handles transactions:
Are we required to handle Transaction in C# Code as well as in Store procedure
I suspect that the real WHERE clause is not being shown in the example code in the Question, so simply relying upon what has been shown, a better model (please see note below regarding performance) would be:
DECLARE @Rows INT,
@BatchSize INT; -- keep below 5000 to be safe
SET @BatchSize = 2000;
SET @Rows = @BatchSize; -- initialize just to enter the loop
BEGIN TRY
WHILE (@Rows = @BatchSize)
BEGIN
UPDATE TOP (@BatchSize) tab
SET tab.Value = 'abc1'
FROM TableName tab
WHERE tab.Parameter1 = 'abc'
AND tab.Parameter2 = 123
AND tab.Value <> 'abc1' COLLATE Latin1_General_100_BIN2;
-- Use a binary Collation (ending in _BIN2, not _BIN) to make sure
-- that you don't skip differences that compare the same due to
-- insensitivity of case, accent, etc, or linguistic equivalence.
SET @Rows = @@ROWCOUNT;
END;
END TRY
BEGIN CATCH
RAISERROR(stuff);
RETURN;
END CATCH;
By testing @Rows against @BatchSize, you can avoid that final UPDATE query (in most cases) because the final set is typically some number of rows less than @BatchSize, in which case we know that there are no more to process (which is what you see in the output shown in your answer). Only in those cases where the final set of rows is equal to @BatchSize will this code run a final UPDATE affecting 0 rows.
I also added a condition to the WHERE clause to prevent rows that have already been updated from being updated again.
NOTE REGARDING PERFORMANCE
I emphasized "better" above (as in, "this is a better model") because this has several improvements over the O.P.'s original code, and works fine in many cases, but is not perfect for all cases. For tables of at least a certain size (which varies due to several factors so I can't be more specific), performance will degrade as there are fewer rows to fix if either:
there is no index to support the query, or
there is an index, but at least one column in the WHERE clause is a string data type that does not use a binary collation, hence a COLLATE clause is added to the query here to force the binary collation, and doing so invalidates the index (for this particular query).
This is the situation that @mikesigs encountered, thus requiring a different approach. The updated method copies the IDs for all rows to be updated into a temporary table, then uses that temp table to INNER JOIN to the table being updated on the clustered index key column(s). (It's important to capture and join on the clustered index columns, whether or not those are the primary key columns!)
Please see @mikesigs' answer below for details. The approach shown in that answer is a very effective pattern that I have used myself on many occasions. The only changes I would make are:
Explicitly create the #targetIds table rather than using SELECT INTO...
For the #targetIds table, declare a clustered primary key on the column(s).
For the #batchIds table, declare a clustered primary key on the column(s).
For inserting into #targetIds, use INSERT INTO #targetIds (column_name(s)) SELECT and remove the ORDER BY as it's unnecessary.
So, if you don't have an index that can be used for this operation, and can't temporarily create one that will actually work (a filtered index might work, depending on your WHERE clause for the UPDATE query), then try the approach shown in @mikesigs' answer (and if you use that solution, please up-vote it).
WHILE EXISTS (SELECT * FROM TableName WHERE Value <> 'abc1' AND Parameter1 = 'abc' AND Parameter2 = 123)
BEGIN
UPDATE TOP (1000) TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1'
END
I encountered this thread yesterday and wrote a script based on the accepted answer. It turned out to perform very slowly, taking 12 hours to process 25M of 33M rows. I wound up cancelling it this morning and working with a DBA to improve it.
The DBA pointed out that the IS NULL check in my UPDATE query was using a Clustered Index Scan on the PK, and it was the scan that was slowing the query down. Basically, the longer the query runs, the further it needs to look through the index for the right rows.
The approach he came up with was obvious in hindsight. Essentially, you load the IDs of the rows you want to update into a temp table, then join that onto the target table in the UPDATE statement. This uses an Index Seek instead of a Scan. And ho boy does it speed things up! It took 2 minutes to update the last 8M records.
Batching Using a Temp Table
SET NOCOUNT ON
DECLARE @Rows INT,
@BatchSize INT,
@Completed INT,
@Total INT,
@Message nvarchar(max)
SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0
-- #targetIds table holds the IDs of ALL the rows you want to update
SELECT Id into #targetIds
FROM TheTable
WHERE Foo IS NULL
ORDER BY Id
-- Used for printing out the progress
SELECT @Total = @@ROWCOUNT
-- #batchIds table holds just the records updated in the current batch
CREATE TABLE #batchIds (Id UNIQUEIDENTIFIER);
-- Loop until #targetIds is empty
WHILE EXISTS (SELECT 1 FROM #targetIds)
BEGIN
-- Remove a batch of rows from the top of #targetIds and put them into #batchIds
DELETE TOP (@BatchSize)
FROM #targetIds
OUTPUT deleted.Id INTO #batchIds
-- Update TheTable data
UPDATE t
SET Foo = 'bar'
FROM TheTable t
JOIN #batchIds tmp ON t.Id = tmp.Id
WHERE t.Foo IS NULL
-- Get the # of rows updated
SET @Rows = @@ROWCOUNT
-- Increment our @Completed counter, for progress display purposes
SET @Completed = @Completed + @Rows
-- Print progress using RAISERROR to avoid SQL buffering issue
SELECT @Message = 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
RAISERROR(@Message, 0, 1) WITH NOWAIT
-- Quick operation to delete all the rows from our batch table
TRUNCATE TABLE #batchIds;
END
-- Clean up
DROP TABLE IF EXISTS #batchIds;
DROP TABLE IF EXISTS #targetIds;
Batching the slow way, do not use!
For reference, here is the original slower performing query:
SET NOCOUNT ON
DECLARE @Rows INT,
@BatchSize INT,
@Completed INT,
@Total INT
SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0
SELECT @Total = COUNT(*) FROM TheTable WHERE Foo IS NULL
WHILE (@Rows = @BatchSize)
BEGIN
-- the IS NULL predicate below is what forced the Clustered Index Scan
UPDATE TOP (@BatchSize) t
SET Foo = 'bar'
FROM TheTable t
WHERE t.Foo IS NULL
SET @Rows = @@ROWCOUNT
SET @Completed = @Completed + @Rows
PRINT 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
END
This is a more efficient version of the solution from @Kramb. The existence check is redundant, as the UPDATE's WHERE clause already handles it; instead you just grab the row count and compare it to the batch size.
Also note @Kramb's solution didn't filter already-updated rows out of the next iteration, hence it would have been an infinite loop.
It also uses the modern batch-size syntax (TOP) instead of SET ROWCOUNT.
DECLARE @batchSize INT, @rowsUpdated INT
SET @batchSize = 1000;
SET @rowsUpdated = @batchSize; -- Initialise for the while loop entry
WHILE (@batchSize = @rowsUpdated)
BEGIN
UPDATE TOP (@batchSize) TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123 and Value <> 'abc1';
SET @rowsUpdated = @@ROWCOUNT;
END
I want to share my experience. A few days ago I had to update 21 million records in a table with 76 million records. My colleague suggested the following approach.
For example, we have the following 'Persons' table:
Id | FirstName | LastName | Email | JobTitle
1 | John | Doe | abc1@abc.com | Software Developer
2 | John1 | Doe1 | abc2@abc.com | Software Developer
3 | John2 | Doe2 | abc3@abc.com | Web Designer
Task: Update persons to the new Job Title: 'Software Developer' -> 'Web Developer'.
1. Create Temporary Table 'Persons_SoftwareDeveloper_To_WebDeveloper (Id INT Primary Key)'
2. Select into temporary table persons which you want to update with the new Job Title:
INSERT INTO Persons_SoftwareDeveloper_To_WebDeveloper SELECT Id FROM
Persons WITH(NOLOCK) --avoid lock
WHERE JobTitle = 'Software Developer'
OPTION(MAXDOP 1) -- use only one core
Depending on the row count, this statement will take some time to fill the temporary table, but it avoids locks. In my situation it took about 5 minutes (21 million rows).
3. The main idea is to generate micro SQL statements to update the database. So, let's print them:
DECLARE @i INT, @pagesize INT, @totalPersons INT
SET @i = 0
SET @pagesize = 2000
SELECT @totalPersons = MAX(Id) FROM Persons
while @i <= @totalPersons
begin
Print '
UPDATE persons
SET persons.JobTitle = ''Web Developer''
FROM Persons_SoftwareDeveloper_To_WebDeveloper tmp
JOIN Persons persons ON tmp.Id = persons.Id
where persons.Id between '+cast(@i as varchar(20)) +' and '+cast(@i+@pagesize as varchar(20)) +'
PRINT ''Page ' + cast((@i / @pagesize) as varchar(20)) + ' of ' + cast(@totalPersons/@pagesize as varchar(20))+'
GO
'
set @i = @i + @pagesize
end
After executing this script you will receive hundreds of batches, which you can execute in one tab of SQL Server Management Studio.
4. Run the printed SQL statements and check for locks on the table. You can always stop the process and play with @pagesize to speed the updating up or down (don't forget to change @i if you pause the script).
5. Drop Persons_SoftwareDeveloper_To_WebDeveloper. Remove the temporary table.
Minor note: this migration could take some time, and new rows with invalid data could be inserted during the migration. So first fix the places where your rows are added. In my situation I fixed the UI: 'Software Developer' -> 'Web Developer'.
More about this method on my blog https://yarkul.com/how-smoothly-insert-millions-of-rows-in-sql-server/
Your PRINT is messing things up, because it resets @@ROWCOUNT. Whenever you use @@ROWCOUNT, my advice is to always assign it immediately to a variable. So:
DECLARE @RC int;
WHILE @RC > 0 or @RC IS NULL
BEGIN
SET rowcount 5;
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';
SET @RC = @@ROWCOUNT;
PRINT(@RC)
END;
SET rowcount 0;
And, another nice feature is that you don't need to repeat the update code.
First of all, thank you all for your input. I tweaked my Query - 1 and got my desired result. Gordon Linoff is right, PRINT was messing up my query, so I modified it as follows:
Modified Query - 1:
SET ROWCOUNT 5
WHILE (1 = 1)
BEGIN
BEGIN TRANSACTION
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
IF @@ROWCOUNT = 0
BEGIN
COMMIT TRANSACTION
BREAK
END
COMMIT TRANSACTION
END
SET ROWCOUNT 0
Output:
(5 row(s) affected)
(5 row(s) affected)
(4 row(s) affected)
(0 row(s) affected)
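As the accepted answer explains, SET ROWCOUNT is deprecated for DML, so the same loop is better written with UPDATE TOP. A minimal sketch; note the Value <> 'abc1' guard, without which already-updated rows keep matching and the loop never ends:
WHILE (1 = 1)
BEGIN
UPDATE TOP (5) TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';
IF @@ROWCOUNT = 0 BREAK; -- nothing left to update
END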

select TOP (all)

declare @t int
set @t = 10
if (o = 'mmm') set @t = -1
select top(@t) * from table
What if I generally want it to return 10 rows, but occasionally all of them?
I know I can do this through SET ROWCOUNT. But is there some special value, like -1, that causes TOP to return all rows?
The largest possible value that can be passed to TOP is 9223372036854775807 so you could just pass that.
Below I use the binary form for max signed bigint as it is easier to remember as long as you know the basic pattern and that bigint is 8 bytes.
declare @t bigint = case when some_condition then 10 else 0x7fffffffffffffff end;
select top(@t) *
From table
If you don't have an ORDER BY clause, the top 10 will just be any 10 rows, chosen in an optimisation-dependent way.
If you do have an order by clause to define the top 10 and an index to support it then the plan for the query above should be fine for either possible value.
If you don't have a supporting index and the plan shows a sort you should consider splitting into two queries.
I'm not sure I understand your question.
But if you sometimes want TOP and other times don't, just use an IF/ELSE construct:
if (condition)
-- send TOP
SELECT TOP 10 Blah FROM...
else
SELECT blah1, blah2 FROM...
You can use dynamic SQL (but I, personally, try to avoid dynamic SQL), where you create a string of the statement you want to run from conditions or parameters.
There's also some good information here on how to do it without dynamic SQL:
https://web.archive.org/web/20150520123828/http://sqlserver2000.databases.aspfaq.com:80/how-do-i-use-a-variable-in-a-top-clause-in-sql-server.html
declare @top bigint = NULL
declare @top_max_value bigint = 9223372036854775807
if (@top IS NULL)
begin
set @top = @top_max_value
end
select top(@top) *
from [YourTableName]
A dynamic SQL version isn't that hard to do.
CREATE PROCEDURE [dbo].[VariableTopSelect]
-- Add the parameters for the stored procedure here
@t int
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
declare @sql nvarchar(max)
if (@t=10)
begin
set @sql='select top (10) * from table'
end
else
begin
set @sql='select * from table'
end
exec sp_executesql @sql
END
With this SP, if they send 10 to the SP, it'll select the top 10; otherwise it'll select all rows.
The best solution I've found is to select the needed columns with all of your conditions into a temporary table, then do your conditional top:
DECLARE @TempTable TABLE(cols...)
INSERT INTO @TempTable
SELECT blah FROM ...
if (condition)
SELECT TOP 10 * FROM @TempTable
else
SELECT * FROM @TempTable
This way you follow DRY, get your conditional TOP, and it's just as easy to read.
Cheers.
It is also possible with a UNION and a parameter
SELECT DISTINCT TOP 10
Column1, Column2
FROM Table
WHERE @ShowAllResults = 0
UNION
SELECT DISTINCT
Column1, Column2
FROM Table
WHERE @ShowAllResults = 1
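A note on this pattern: since the two WHERE clauses are mutually exclusive, only one branch ever returns rows, so UNION ALL would be slightly cheaper here; plain UNION adds a duplicate-removal step across the branches on top of the DISTINCT already in each SELECT.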
I might be too late now, or getting too old, but I solved this by using TOP (100) PERCENT.
This goes around all the complexities:
Select Top(100)Percent * from tablename;
Use the statement SET ROWCOUNT @recordCount at the beginning of your query. The variable @recordCount can be any positive integer; set it to 0 to return all records.
That means SET ROWCOUNT 0 will return all records, and SET ROWCOUNT 15 will return only the TOP 15 rows of the result set.
The drawback can be a performance hit when dealing with a large number of records. Also, SET ROWCOUNT is effective throughout the scope of execution of the whole query.
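A quick sketch of that pattern, with a placeholder table name; this still works for SELECT, though TOP (@recordCount) is the recommended form in new code:
DECLARE @recordCount int = 15;
SET ROWCOUNT @recordCount; -- 0 would mean "return all rows"
SELECT * FROM dbo.YourTable;
SET ROWCOUNT 0; -- reset so later statements are unaffected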
