SSMS Alter Table Row not being accessed anywhere else - sql-server

I'm creating a temp table to store data from a csv, then altering the table afterwards to add a new column that identifies each row by a unique number in ascending order.
The column gets created fine; however, I can't query the table using these row numbers, as if the value never gets set. In SSMS the newly created column gets red-underlined with the error Invalid column name 'columnName', but I can still query the database.
declare @loopNum INT
set @loopNum = 0
CREATE TABLE #A
(
column1 BIGINT NOT NULL,
column2 BIGINT NOT NULL
)
DECLARE @command NVARCHAR(150)
SET @command = N'...' -- (elided) just reads from the file into temp table #A; this works fine
EXEC sp_executesql @command
ALTER TABLE #A
ADD RowNumbers INT IDENTITY(1,1)
--if I run a select * from #A, all 4 columns show perfectly
while @loopNum <= 5
begin
select * from #A where loopNum = RowNumbers -- doesn't return anything, yet the loop goes up by one as 6 blank results are returned
set @loopNum = @loopNum + 1
end
The select statement doesn't recognise "RowNumbers", so I'm not sure if there's a problem with how I've done the ALTER command.
This is what I get so far:
Column 1 | Column 2 | RowNumbers
A | B | 1
C | D | 2
It just doesn't loop through it.

A couple of issues here.
One, you're always going to get an empty result set first because you initialized @loopNum to 0, while your IDENTITY column starts at 1, so the first pass matches nothing.
Two, you are using "loopNum" instead of your variable @loopNum in your SELECT.
Rextester: http://rextester.com/XJNML62761
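With both fixes applied, the loop looks like this (a minimal sketch, assuming the #A table and RowNumbers column from the question):
declare @loopNum INT
set @loopNum = 1 -- start at 1 to match IDENTITY(1,1)
while @loopNum <= 5
begin
select * from #A where @loopNum = RowNumbers -- compare the variable, not a column named loopNum
set @loopNum = @loopNum + 1
end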

Pulling Values From Temp Table After An Iteration

I'm querying for the total sizes of recent data in certain databases.
I create a table containing the DBs to be queried, then iterate over it to get the DB names and the total number of iterations to run.
I then create a temp table that the needed data will be inserted into.
I run the iteration to grab the information and push it into the temptable for each database.
After the iteration finishes I'm not able to pull the values from this newly created table.
I wrote a little comment next to each portion of code explaining what I'm trying to do and what I expect to happen.
/*check if the #databases table is already present and then drop it*/
IF OBJECT_ID('tempdb..#databases', 'U') IS NOT NULL
begin
drop table #databases;
end
select ArtifactID into #databases from edds.eddsdbo.[Case]
where name like '%Review%'
/*Once this first statement has been run there will now be a
number column that is associated with the artifactID. Each database has an area that is
titled [EDDS'artifactID']. So if the artifactID = 1111111 then the DB would
be accessed at [EDDS1111111]*/
declare @runs int = 1; /*this will track the number of times iterated
over the result set*/
declare @max int = 0; /*this will be the limit*/
declare @databasename sysname='' /*this will allow the population of each
database name*/
/*check if your temp table exists and drop if necessary*/
IF OBJECT_ID('tempdb..#temptable', 'U') IS NOT NULL
begin
drop table #temptable;
end
/*create the temp table outside the loop*/
create table #temptable(
fileSize dec,
extractedTextSize dec
)
while @runs<=@max
begin
select @max=count(*) from #databases;
/*the @max is now the number of databases inserted in to this table*/
/*This select statement pulls the information that will be placed
into the temptable. This second statement should be inside the loop. One time
for each DB that appeared in the first query's results.*/
/*begin the loop by assigning your database name, I don't know what the
column is called so I have just called it databasename for now*/
select top 1 @databasename = ArtifactID from #databases;
/*generate your sql using the @databasename variable, if you want to make
the database and table names dynamic too then you can use the same formula*/
insert into #temptable
select SUM(fileSize)/1024/1024/1024, SUM(extractedTextSize)/1024/1024
FROM [EDDS'+cast(@databasename as nvarchar(128))+'].[EDDSDBO].[Document] ed
where ed.CreatedDate >= (select CONVERT(varchar,dateadd(d,-
(day(getdate())),getdate()),106))'
/*remove that row from the databases table so the row won't be redone
This will take the @max and lessen it by one*/
delete from #databases where ArtifactID=@databasename;
/* Once the @max is less than 1 then end the loop*/
end
/* Query the final values in the temp table after the iteration is complete*/
select filesize+extractedTextSize as Gigs from #temptable
When that final select statement runs to pull values from #temptable, the response is a single Gigs column (as expected) but the table itself is blank.
Something is happening to clear the data out of the table and I'm stuck.
I'm not sure if my error is in syntax or a general error of logic, but any help would be greatly appreciated.
Made a few tweaks to formatting, but the main issue is that your loop would never run.
You have @runs <= @max, but at the start @runs = 1 and @max = 0 (@max is only set inside the loop), so the loop body never executes.
To fix this you can do a couple of different things, but I set @max before the loop and just add 1 to @runs on each pass; you know how many runs you need (@max) before the loop starts, so you only have to increment the counter and do your compare.
One NOTE though: there are much better ways to do this than the way you have it. Put an IDENTITY on your #databases table, and in your loop just do WHERE ID = loopCount; then you don't have to delete from the table (see my second answer below).
--check if the #databases table is already present and then drop it
IF OBJECT_ID('tempdb..#databases', 'U') IS NOT NULL
drop table #databases;
--Once this first statement has been run there will now be a number column that is associated with the artifactID. Each database has an area that is
-- titled [EDDS'artifactID']. So if the artifactID = 1111111 then the DB would be accessed at [EDDS1111111]
select ArtifactID
INTO #databases
FROM edds.eddsdbo.[Case]
where name like '%Review%'
-- set to 0 to start
DECLARE @runs int = 0;
--this will be the limit
DECLARE @max int = 0;
--this will allow the population of each database name
DECLARE @databasename sysname = ''
--check if your temp table exists and drop if necessary
IF OBJECT_ID('tempdb..#temptable', 'U') IS NOT NULL
drop table #temptable;
--create the temp table outside the loop
create table #temptable(
fileSize dec,
extractedTextSize dec
)
-- ***********************************************
-- Need to set the value you're looping on before you get to the loop; that way, if there are no rows, you never enter the loop
-- ***********************************************
--the @max is now the number of databases inserted into this table
select @max = COUNT(*)
FROM #databases;
while @runs < @max -- strictly less than, since @runs starts at 0
BEGIN
/*This select statement pulls the information that will be placed
into the temptable. This second statement should be inside the loop. One time
for each DB that appeared in the first query's results.*/
/*begin the loop by assigning your database name, I don't know what the
column is called so I have just called it databasename for now*/
select top 1 @databasename = ArtifactID from #databases;
/*generate your sql using the @databasename variable, if you want to make
the database and table names dynamic too then you can use the same formula*/
insert into #temptable
select SUM(fileSize)/1024/1024/1024, SUM(extractedTextSize)/1024/1024
FROM [EDDS'+cast(@databasename as nvarchar(128))+'].[EDDSDBO].[Document] ed
where ed.CreatedDate >= (select CONVERT(varchar,dateadd(d,- (day(getdate())),getdate()),106))
--remove that row from the databases table so the row won't be redone
delete from #databases where ArtifactID=@databasename;
-- ***********************************************
-- no need to select from the table and change your max value, just add one to your run counter on each pass
-- ***********************************************
select @runs = @runs + 1
end
-- Query the final values in the temp table after the iteration is complete
select fileSize+extractedTextSize as Gigs from #temptable
This is a second answer, but it's the alternative I mentioned above; I'm posting it separately to keep the two approaches distinct.
This is a better way to do your looping (not fully tested yet, so you will have to verify).
Instead of deleting from your table, just add an ID to it and loop through it using that ID. Way fewer steps and much cleaner.
--check if the #databases table is already present and then drop it
IF OBJECT_ID('tempdb..#databases', 'U') IS NOT NULL
drop table #databases;
--create the #databases table, this time with an IDENTITY column
create table #databases(
ID INT IDENTITY,
ArtifactID VARCHAR(20) -- not sure of this ID's data type
)
--check if your temp table exists and drop if necessary
IF OBJECT_ID('tempdb..#temptable', 'U') IS NOT NULL
drop table #temptable;
--create the temp table outside the loop
create table #temptable(
fileSize dec,
extractedTextSize dec
)
--this will allow the population of each database name
DECLARE @databasename sysname = ''
-- initialize to 1 so it matches the first record in the temp table
DECLARE @LoopOn int = 1;
--this will be the max count from the table
DECLARE @MaxCount int = 0;
--Once this first statement has been run there will now be a number column that is associated with the artifactID. Each database has an area that is
-- titled [EDDS'artifactID']. So if the artifactID = 1111111 then the DB would be accessed at [EDDS1111111]
-- do insert here so it adds the ID column
INSERT INTO #databases(
ArtifactID
)
SELECT ArtifactID
FROM edds.eddsdbo.[Case]
where name like '%Review%'
-- sets the max number of loops we are going to do
select @MaxCount = COUNT(*)
FROM #databases;
while @LoopOn <= @MaxCount
BEGIN
-- your table has IDENTITY so select the one for the loop you're on (initialized to 1)
select @databasename = ArtifactID
FROM #databases
WHERE ID = @LoopOn;
--generate your sql using the @databasename variable, if you want to make
--the database and table names dynamic too then you can use the same formula
insert into #temptable
select SUM(fileSize)/1024/1024/1024, SUM(extractedTextSize)/1024/1024
-- don't know/think this will work like this? If not you have to use dynamic SQL
FROM [EDDS'+cast(@databasename as nvarchar(128))+'].[EDDSDBO].[Document] ed
where ed.CreatedDate >= (select CONVERT(varchar,dateadd(d,- (day(getdate())),getdate()),106))
-- remove all deletes/etc and just add one to @LoopOn; the next row will be selected above based off the ID
select @LoopOn += 1
end
-- Query the final values in the temp table after the iteration is complete
select filesize+extractedTextSize as Gigs
FROM #temptable
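As the comments in both versions note, a database name cannot be concatenated straight into a FROM clause; the statement has to be built as a string and run with sp_executesql. A minimal sketch of the loop body under that assumption (same [EDDSDBO].[Document] table and columns as above, with the date filter simplified to an equivalent expression; #temptable is still visible inside sp_executesql because it runs on the same session):
DECLARE @sql nvarchar(max);
SET @sql = N'insert into #temptable
select SUM(fileSize)/1024/1024/1024, SUM(extractedTextSize)/1024/1024
FROM [EDDS' + cast(@databasename as nvarchar(128)) + N'].[EDDSDBO].[Document] ed
where ed.CreatedDate >= dateadd(d, -day(getdate()), getdate())';
EXEC sp_executesql @sql;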

Create a number of new rows in SQL Server DB with null values

I would like to create a set of new rows in a DB where ID is = to 10,11,12,13,14,15 but all other values are null. This is assuming rows 1 through 9 already exist (in this example). My application will set the first and last row parameters.
Here's my query to create one row, but I need a way to loop through rows 10 through 15 until all six rows are created:
@FirstRow int = 10 --will be set by application
,@LastRow int = 15 --will be set by application
,@FileName varchar(100) = NULL
,@CreatedDate date = NULL
,@CreatedBy varchar (50) = NULL
AS
BEGIN
INSERT INTO TABLE(TABLE_ID, FILENAME, CREATED_BY, CREATED_DATE)
VALUES (@FirstRow, @FileName, @CreatedBy, @CreatedDate)
END
The reason I need blank rows is because the application needs to update an existing row in a table. My application will be uploading thousands of documents to rows in a table based on file ID. The application requires that the rows already be inserted. The files are inserted after rows are added. The app then deletes all rows that are null.
Assuming the rows you're inserting are always consecutive, you can use a ghetto FOR loop like the one below to accomplish your goal:
--put all the other variable assignments above this line
DECLARE @i int = @FirstRow
WHILE (@i <= @LastRow)
BEGIN
INSERT INTO TABLE(TABLE_ID, FILENAME, CREATED_BY, CREATED_DATE)
VALUES (@i, @FileName, @CreatedBy, @CreatedDate)
SET @i = @i + 1;
END
Basically, we assign @i to the lowest index and then just iterate through, one by one, until we're at the max index.
If performance is a concern, the above approach will not be ideal.
If you don't have a numbers table as SMor mentioned, you can use an ad-hoc tally table
Example
Declare @FirstRow int = 10 --will be set by application
,@LastRow int = 15 --will be set by application
,@FileName varchar(100) = NULL
,@CreatedDate date = NULL
,@CreatedBy varchar (50) = NULL
INSERT INTO TABLE(TABLE_ID, FILENAME, CREATED_BY, CREATED_DATE)
Select Top (@LastRow-@FirstRow+1)
@FirstRow-1+Row_Number() Over (Order By (Select NULL))
,@FileName
,@CreatedBy
,@CreatedDate
From master..spt_values n1, master..spt_values n2
Data generated: TABLE_ID values 10 through 15, with NULL FILENAME, CREATED_BY, and CREATED_DATE.

How to update large table with millions of rows in SQL Server?

I have an UPDATE statement which can update more than a million records. I want to update them in batches of 1000 or 10000. I tried with @@ROWCOUNT but I am unable to get the desired result.
Just for testing purposes, I selected a table with 14 records and set a row count of 5. This query is supposed to update records in batches of 5, 5 and 4, but it just updates the first 5 records.
Query - 1:
SET ROWCOUNT 5
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 5
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
PRINT (@@ROWCOUNT)
END
SET rowcount 0
Query - 2:
SET ROWCOUNT 5
WHILE (@@ROWCOUNT > 0)
BEGIN
BEGIN TRANSACTION
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
PRINT (@@ROWCOUNT)
IF @@ROWCOUNT = 0
BEGIN
COMMIT TRANSACTION
BREAK
END
COMMIT TRANSACTION
END
SET ROWCOUNT 0
What am I missing here?
You should not be updating 10k rows in a set unless you are certain that the operation is getting Page Locks (due to multiple rows per page being part of the UPDATE operation). The issue is that Lock Escalation (from either Row or Page to Table locks) occurs at 5000 locks. So it is safest to keep it just below 5000, just in case the operation is using Row Locks.
You should not be using SET ROWCOUNT to limit the number of rows that will be modified. There are two issues here:
It has been deprecated since SQL Server 2005 was released (11 years ago):
Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax
It can affect more than just the statement you are dealing with:
Setting the SET ROWCOUNT option causes most Transact-SQL statements to stop processing when they have been affected by the specified number of rows. This includes triggers. The ROWCOUNT option does not affect dynamic cursors, but it does limit the rowset of keyset and insensitive cursors. This option should be used with caution.
Instead, use the TOP () clause.
There is no purpose in having an explicit transaction here. It complicates the code and you have no handling for a ROLLBACK, which isn't even needed since each statement is its own transaction (i.e. auto-commit).
Assuming you find a reason to keep the explicit transaction, then you do not have a TRY / CATCH structure. Please see my answer on DBA.StackExchange for a TRY / CATCH template that handles transactions:
Are we required to handle Transaction in C# Code as well as in Store procedure
I suspect that the real WHERE clause is not being shown in the example code in the Question, so simply relying upon what has been shown, a better model (please see note below regarding performance) would be:
DECLARE @Rows INT,
@BatchSize INT; -- keep below 5000 to be safe
SET @BatchSize = 2000;
SET @Rows = @BatchSize; -- initialize just to enter the loop
BEGIN TRY
WHILE (@Rows = @BatchSize)
BEGIN
UPDATE TOP (@BatchSize) tab
SET tab.Value = 'abc1'
FROM TableName tab
WHERE tab.Parameter1 = 'abc'
AND tab.Parameter2 = 123
AND tab.Value <> 'abc1' COLLATE Latin1_General_100_BIN2;
-- Use a binary Collation (ending in _BIN2, not _BIN) to make sure
-- that you don't skip differences that compare the same due to
-- insensitivity of case, accent, etc, or linguistic equivalence.
SET @Rows = @@ROWCOUNT;
END;
END TRY
BEGIN CATCH
RAISERROR(stuff);
RETURN;
END CATCH;
By testing @Rows against @BatchSize, you can avoid that final UPDATE query (in most cases) because the final set is typically some number of rows less than @BatchSize, in which case we know that there are no more to process (which is what you see in the output shown in your answer). Only in those cases where the final set of rows is equal to @BatchSize will this code run a final UPDATE affecting 0 rows.
I also added a condition to the WHERE clause to prevent rows that have already been updated from being updated again.
NOTE REGARDING PERFORMANCE
I emphasized "better" above (as in, "this is a better model") because this has several improvements over the O.P.'s original code, and works fine in many cases, but is not perfect for all cases. For tables of at least a certain size (which varies due to several factors so I can't be more specific), performance will degrade as there are fewer rows to fix if either:
there is no index to support the query, or
there is an index, but at least one column in the WHERE clause is a string data type that does not use a binary collation, hence a COLLATE clause is added to the query here to force the binary collation, and doing so invalidates the index (for this particular query).
This is the situation that @mikesigs encountered, thus requiring a different approach. The updated method copies the IDs for all rows to be updated into a temporary table, then uses that temp table to INNER JOIN to the table being updated on the clustered index key column(s). (It's important to capture and join on the clustered index columns, whether or not those are the primary key columns!).
Please see @mikesigs' answer below for details. The approach shown in that answer is a very effective pattern that I have used myself on many occasions. The only changes I would make are:
Explicitly create the #targetIds table rather than using SELECT INTO...
For the #targetIds table, declare a clustered primary key on the column(s).
For the #batchIds table, declare a clustered primary key on the column(s).
For inserting into #targetIds, use INSERT INTO #targetIds (column_name(s)) SELECT and remove the ORDER BY as it's unnecessary.
So, if you don't have an index that can be used for this operation, and can't temporarily create one that will actually work (a filtered index might work, depending on your WHERE clause for the UPDATE query), then try the approach shown in @mikesigs' answer (and if you use that solution, please up-vote it).
WHILE EXISTS (SELECT * FROM TableName WHERE Value <> 'abc1' AND Parameter1 = 'abc' AND Parameter2 = 123)
BEGIN
UPDATE TOP (1000) TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1'
END
I encountered this thread yesterday and wrote a script based on the accepted answer. It turned out to perform very slowly, taking 12 hours to process 25M of 33M rows. I wound up cancelling it this morning and working with a DBA to improve it.
The DBA pointed out that the IS NULL check in my UPDATE query was using a Clustered Index Scan on the PK, and it was the scan that was slowing the query down. Basically, the longer the query runs, the further it needs to look through the index for the right rows.
The approach he came up with was obvious in hindsight. Essentially, you load the IDs of the rows you want to update into a temp table, then join that onto the target table in the update statement. This uses an Index Seek instead of a Scan. And ho boy does it speed things up! It took 2 minutes to update the last 8M records.
Batching Using a Temp Table
SET NOCOUNT ON
DECLARE @Rows INT,
@BatchSize INT,
@Completed INT,
@Total INT,
@Message nvarchar(max)
SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0
-- #targetIds table holds the IDs of ALL the rows you want to update
SELECT Id into #targetIds
FROM TheTable
WHERE Foo IS NULL
ORDER BY Id
-- Used for printing out the progress
SELECT @Total = @@ROWCOUNT
-- #batchIds table holds just the records updated in the current batch
CREATE TABLE #batchIds (Id UNIQUEIDENTIFIER);
-- Loop until #targetIds is empty
WHILE EXISTS (SELECT 1 FROM #targetIds)
BEGIN
-- Remove a batch of rows from the top of #targetIds and put them into #batchIds
DELETE TOP (@BatchSize)
FROM #targetIds
OUTPUT deleted.Id INTO #batchIds
-- Update TheTable data
UPDATE t
SET Foo = 'bar'
FROM TheTable t
JOIN #batchIds tmp ON t.Id = tmp.Id
WHERE t.Foo IS NULL
-- Get the # of rows updated
SET @Rows = @@ROWCOUNT
-- Increment our @Completed counter, for progress display purposes
SET @Completed = @Completed + @Rows
-- Print progress using RAISERROR to avoid SQL buffering issue
SELECT @Message = 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
RAISERROR(@Message, 0, 1) WITH NOWAIT
-- Quick operation to delete all the rows from our batch table
TRUNCATE TABLE #batchIds;
END
-- Clean up
DROP TABLE IF EXISTS #batchIds;
DROP TABLE IF EXISTS #targetIds;
Batching the slow way, do not use!
For reference, here is the original slower performing query:
SET NOCOUNT ON
DECLARE @Rows INT,
@BatchSize INT,
@Completed INT,
@Total INT
SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0
SELECT @Total = COUNT(*) FROM TheTable WHERE Foo IS NULL
WHILE (@Rows = @BatchSize)
BEGIN
UPDATE TOP (@BatchSize) t
SET Foo = 'bar'
FROM TheTable t
WHERE t.Foo IS NULL
SET @Rows = @@ROWCOUNT
SET @Completed = @Completed + @Rows
PRINT 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
END
This is a more efficient version of the solution from @Kramb. The existence check is redundant as the UPDATE's WHERE clause already handles this. Instead you just grab the rowcount and compare it to the batch size.
Also note @Kramb's solution didn't filter out already-updated rows from the next iteration, hence it would be an infinite loop.
It also uses the modern batch-size syntax (UPDATE TOP) instead of SET ROWCOUNT.
DECLARE @batchSize INT, @rowsUpdated INT
SET @batchSize = 1000;
SET @rowsUpdated = @batchSize; -- Initialise for the while loop entry
WHILE (@batchSize = @rowsUpdated)
BEGIN
UPDATE TOP (@batchSize) TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123 and Value <> 'abc1';
SET @rowsUpdated = @@ROWCOUNT;
END
I want to share my experience. A few days ago I had to update 21 million records in a table with 76 million records. My colleague suggested the following approach.
For example, we have the following table 'Persons':
Id | FirstName | LastName | Email | JobTitle
1 | John | Doe | abc1@abc.com | Software Developer
2 | John1 | Doe1 | abc2@abc.com | Software Developer
3 | John2 | Doe2 | abc3@abc.com | Web Designer
Task: Update persons to the new Job Title: 'Software Developer' -> 'Web Developer'.
1. Create Temporary Table 'Persons_SoftwareDeveloper_To_WebDeveloper (Id INT Primary Key)'
2. Select into the temporary table the persons you want to update with the new Job Title:
INSERT INTO Persons_SoftwareDeveloper_To_WebDeveloper SELECT Id FROM
Persons WITH(NOLOCK) --avoid lock
WHERE JobTitle = 'Software Developer'
OPTION(MAXDOP 1) -- use only one core
Depending on the row count, this statement will take some time to fill your temporary table, but it avoids locks. In my situation it took about 5 minutes (21 million rows).
3. The main idea is to generate micro SQL statements to update the database. So, let's print them:
DECLARE @i INT, @pagesize INT, @totalPersons INT
SET @i=0
SET @pagesize=2000
SELECT @totalPersons = MAX(Id) FROM Persons
while @i<= @totalPersons
begin
Print '
UPDATE persons
SET persons.JobTitle = ''Web Developer''
FROM Persons_SoftwareDeveloper_To_WebDeveloper tmp
JOIN Persons persons ON tmp.Id = persons.Id
where persons.Id between '+cast(@i as varchar(20)) +' and '+cast(@i+@pagesize as varchar(20)) +'
PRINT ''Page ' + cast((@i / @pagesize) as varchar(20)) + ' of ' + cast(@totalPersons/@pagesize as varchar(20))+'
GO
'
set @i=@i+@pagesize
end
After executing this script you will receive hundreds of batches which you can execute in one tab of MS SQL Management Studio.
4. Run the printed SQL statements and check for locks on the table. You can always stop the process and play with @pagesize to speed up or slow down the updating (don't forget to change @i after you pause the script).
5. Drop Persons_SoftwareDeveloper_To_WebDeveloper to remove the temporary table.
Minor note: this migration could take some time, and new rows with invalid data could be inserted during the migration. So first fix the places where new rows are added. In my situation I fixed the UI: 'Software Developer' -> 'Web Developer'.
More about this method on my blog https://yarkul.com/how-smoothly-insert-millions-of-rows-in-sql-server/
Your PRINT is messing things up, because it resets @@ROWCOUNT. Whenever you use @@ROWCOUNT, my advice is to always set it immediately to a variable. So:
DECLARE @RC int;
WHILE @RC > 0 or @RC IS NULL
BEGIN
SET rowcount 5;
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';
SET @RC = @@ROWCOUNT;
PRINT(@RC)
END;
SET rowcount 0;
And, another nice feature is that you don't need to repeat the update code.
First of all, thank you all for your inputs. I tweaked my Query 1 and got my desired result. Gordon Linoff is right: PRINT was messing up my query, so I modified it as follows:
Modified Query - 1:
SET ROWCOUNT 5
WHILE (1 = 1)
BEGIN
BEGIN TRANSACTION
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
IF @@ROWCOUNT = 0
BEGIN
COMMIT TRANSACTION
BREAK
END
COMMIT TRANSACTION
END
SET ROWCOUNT 0
Output:
(5 row(s) affected)
(5 row(s) affected)
(4 row(s) affected)
(0 row(s) affected)

SQL Server constraint to permit "two unique" values

I have a table like this one:
id fk_id
1 1
2 1
3 2
4 3
5 3
The field fk_id references another table. I want to create a constraint to permit at most two rows with each fk_id.
I want to prevent this:
id fk_id
1 1
2 1
3 1 <-- FAIL
4 3
5 3
This is a relationship of "one to many (but max 2)" or "one to one (or two)" - I don't know what to call it.
Can I do this with MS SQL Server? Maybe a CHECK CONSTRAINT?
SOLUTION:
-- function to check if there are more then two rows
CREATE FUNCTION [dbo].[CheckMaxTwoForeignKeys](#check_id int)
RETURNS bit
AS
BEGIN
DECLARE #result bit
DECLARE #count int
SELECT #count = COUNT(*) FROM mytable WHERE fk_id = #check_id
IF #count <= 2
SET #result = 1
ELSE
SET #result = 0
RETURN #result
END
-- create the constraint
ALTER TABLE mytable
ADD CONSTRAINT CK_MaxTwoFK CHECK ( ([dbo].[CheckMaxTwoForeignKeys]([fk_id])=1) )
You should create a check constraint that calls a function; the function returns 1 if there are two or fewer rows for the value currently being checked.
The check constraint would be something like CHECK (dbo.FunctionCheckValidityOfValue = 1).
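A quick hypothetical test of the constraint, assuming the id and fk_id columns shown above:
INSERT INTO mytable (id, fk_id) VALUES (1, 1) -- succeeds: first row for fk_id = 1
INSERT INTO mytable (id, fk_id) VALUES (2, 1) -- succeeds: second row for fk_id = 1
INSERT INTO mytable (id, fk_id) VALUES (3, 1) -- fails: CK_MaxTwoFK rejects a third row for fk_id = 1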

Duplicate Auto Numbers generated in SQL Server

Be gentle, I'm a SQL newbie. I have a table named autonumber_settings like this:
Prefix | AutoNumber
SO | 112320
CA | 3542
Whenever a new sales line is created, a stored procedure is called that reads the current autonumber value from the 'SO' row, increments the number, updates that same row, and returns the number back from the stored procedure. The stored procedure is below:
ALTER PROCEDURE [dbo].[GetAutoNumber]
(
@type nvarchar(50) ,
@out nvarchar(50) = '' OUTPUT
)
as
set nocount on
declare @currentvalue nvarchar(50)
declare @prefix nvarchar(10)
if exists (select * from autonumber_settings where lower(autonumber_type) = lower(@type))
begin
select @prefix = isnull(autonumber_prefix,''),@currentvalue=autonumber_currentvalue
from autonumber_settings
where lower(autonumber_type) = lower(@type)
set @currentvalue = @currentvalue + 1
update dbo.autonumber_settings set autonumber_currentvalue = @currentvalue where lower(autonumber_type) = lower(@type)
set @out = cast(@prefix as nvarchar(10)) + cast(@currentvalue as nvarchar(50))
select @out as value
end
else
select '' as value
Now, there is another procedure that accesses the same table that duplicates orders, copying both the header and the lines. On occasion, the duplication results in duplicate line numbers. Here is a piece of that procedure:
BEGIN TRAN
IF exists
(
SELECT *
FROM autonumber_settings
WHERE autonumber_type = 'SalesOrderDetail'
)
BEGIN
SELECT
@prefix = ISNULL(autonumber_prefix,'')
,@current_value = CAST(autonumber_currentvalue AS INTEGER)
FROM autonumber_settings
WHERE autonumber_type = 'SalesOrderDetail'
SET @new_auto_number = @current_value + @number_of_lines
UPDATE dbo.autonumber_settings
SET autonumber_currentvalue = @new_auto_number
WHERE autonumber_type = 'SalesOrderDetail'
END
COMMIT TRAN
Any ideas on why the two procedures don't seem to play well together, occasionally giving lines created from scratch the same numbers as lines created by duplication?
This is a race condition on your autonumber assignment. Two executions have the potential to read out the same value before a new one is written back to the database.
The best way to fix this is to use an identity column and let SQL Server handle the autonumber assignments.
Barring that, you could use sp_getapplock to serialize access to autonumber_settings.
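For illustration, a minimal sketch of the sp_getapplock approach (the resource name 'autonumber_SalesOrderDetail' is an arbitrary choice of mine; every caller that touches autonumber_settings must request the same resource):
BEGIN TRAN
DECLARE @lockResult int;
-- take an exclusive application lock; other callers block here until we commit
EXEC @lockResult = sp_getapplock
@Resource = 'autonumber_SalesOrderDetail', -- arbitrary shared name (assumption)
@LockMode = 'Exclusive',
@LockOwner = 'Transaction', -- released automatically at COMMIT/ROLLBACK
@LockTimeout = 10000; -- wait up to 10 seconds
IF @lockResult >= 0 -- 0 = granted immediately, 1 = granted after waiting
BEGIN
-- the read/increment/update is now serialized across sessions
UPDATE dbo.autonumber_settings
SET autonumber_currentvalue = CAST(autonumber_currentvalue AS int) + 1
WHERE autonumber_type = 'SalesOrderDetail';
END
COMMIT TRAN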
You could use REPEATABLE READ hints on the selects. That will lock the row and block the other procedure's select until you update the value and commit.
Add WITH (REPEATABLEREAD, ROWLOCK) after the table name in the FROM clause of each select.
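Applied to the first procedure, that would look something like the sketch below. Note the select and the update must run inside one explicit transaction for the lock to be held between them; and since REPEATABLEREAD takes shared locks, two sessions can still read simultaneously and then deadlock on the update, in which case WITH (UPDLOCK, ROWLOCK) is the common alternative:
BEGIN TRAN
select @prefix = isnull(autonumber_prefix,''), @currentvalue = autonumber_currentvalue
from autonumber_settings WITH (REPEATABLEREAD, ROWLOCK) -- hint goes right after the table name
where lower(autonumber_type) = lower(@type)
set @currentvalue = @currentvalue + 1
update dbo.autonumber_settings
set autonumber_currentvalue = @currentvalue
where lower(autonumber_type) = lower(@type)
COMMIT TRAN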
