Using Between with Max? - sql-server

Is it possible to use BETWEEN with MAX, like this:
SELECT * FROM TABLE WHERE ID BETWEEN 100 AND MAX
Or is there a way to go to the end?

What do you mean by Max? The maximum value of the data type? The maximum value in the column?
In any case, you just need:
SELECT * FROM TABLE WHERE ID >= 100
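If by Max you literally mean the maximum value of the data type, that is just a constant you can write inline; assuming ID is an int:
SELECT * FROM TABLE WHERE ID BETWEEN 100 AND 2147483647 -- 2147483647 is the largest int value
though the open-ended >= form above reads more clearly.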

The below will work:
SELECT * FROM tblName WHERE id BETWEEN 100 and (SELECT MAX(id) from tblName)

I can't see why you wouldn't just use a greater-than-or-equal-to condition, but if you really insist on doing it this way:
SELECT * FROM TABLE WHERE ID BETWEEN 100 AND (SELECT MAX(ID) FROM TABLE)

As pointed out, you can use a nested select to get the MAX value for the end of your range.
Here is a code sample to test out the theory:
create table #TempTable (id int)
declare @Counter int
set @Counter = 1
while (@Counter < 1000)
begin
insert into #TempTable (id) values (@Counter)
set @Counter = @Counter + 1
end
select * from #TempTable where id between 800 and (Select MAX(id) from #TempTable)
drop table #TempTable

How to insert records into table in loop?

I have a table with a non-autoincrement ID field. I need to create a varchar array and then insert data from the array into the table.
The problem is that I don't know how to declare such an array, and I also don't know how to address a value by its index in the loop.
declare @newCodesList - ???
declare @counter int = 0
declare @lastID int = (select MAX(Id) from OrganizationCode)
while @counter < LEN(@newCodesList)
begin
@lastID = @lastID + 1
insert into OrganizationCodeCopy values(@lastID, @newCodesList[@counter])
@counter = @counter + 1
end
In the code above I try to insert values in the loop, after finding the last record ID and declaring a counter.
You could use a table variable to store the codes in, then do an INSERT ... SELECT from that variable. To get the IDs you can use row_number() plus the maximum ID from the other table.
DECLARE @codes TABLE
(code nvarchar(4));
INSERT INTO @codes
(code)
VALUES ('A01B'),
('B03C'),
('X97K');
INSERT INTO organizationcodecopy
(id,
code)
SELECT (SELECT coalesce(max(id), 0)
FROM organizationcode) + row_number() OVER (ORDER BY code) id,
code
FROM @codes;
Instead of an array, you can use a comma-separated string, like the following:
DECLARE @newCodesList VARCHAR(MAX) = 'value1,value2'
DECLARE @lastID int = (SELECT MAX(Id) FROM OrganizationCode)
INSERT INTO OrganizationCodeCopy
(
Id,
Code
)
SELECT
@lastID + ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS Id,
Element AS Code
FROM
asi_SplitString(@newCodesList, ',') -- asi_SplitString is a user-defined split function
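On SQL Server 2016 or later you could use the built-in STRING_SPLIT instead of a user-defined splitter; a minimal sketch (note that STRING_SPLIT does not guarantee output order):
DECLARE @newCodesList VARCHAR(MAX) = 'value1,value2'
DECLARE @lastID int = (SELECT MAX(Id) FROM OrganizationCode)
INSERT INTO OrganizationCodeCopy (Id, Code)
SELECT
@lastID + ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS Id,
value AS Code -- STRING_SPLIT returns a single column named value
FROM STRING_SPLIT(@newCodesList, ',')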
Using loops is not a good solution. You may instead define a temporary table and insert the new data with one statement; values for Id are generated with the ROW_NUMBER() function:
-- New data
CREATE TABLE #NewCodes (
Code varchar(50)
)
INSERT INTO #NewCodes
(Code)
VALUES
('Code1'),
('Code2'),
('Code3'),
('Code4'),
('Code5'),
('Code6')
-- Last ID
DECLARE @LastID int
SELECT @LastID = (SELECT ISNULL(MAX(Id), 0) FROM OrganizationCode)
-- Statement
INSERT INTO OrganizationCodeCopy
(Id, Code)
SELECT
@LastID + ROW_NUMBER() OVER (ORDER BY Code) AS Id,
[Code]
FROM #NewCodes

Creating duplicates with a different ID for test in SQL

I have a table with 1000 unique records, with one of the fields being an ID. For testing purposes, I need to update the last 200 records' ID values to the first 200 records' ID values in the same table. Sequence isn't mandatory.
Appreciate any help on this.
Typically I charge for doing other people's homework; don't forget to cite your source ;)
declare @example as table (
exampleid int identity(1,1) not null
, color nvarchar(255) not null
);
insert into @example (color)
select 'black' union all
select 'green' union all
select 'purple' union all
select 'indigo' union all
select 'yellow' union all
select 'pink';
select *
from @example;
declare @max int = (select max(exampleId) from @example);
declare @min int = @max - 2
;with cte as (
select top 2 color
from @example
)
update @example
set color = a.color
from cte a
where exampleid <= @max and exampleid > @min;
select *
from @example
This script should solve the issue and will cover scenarios even if the ID column is not sequential. I have included comments to help you understand the joins and the flow of the script.
declare @test table
(
ID int not null,
Txt char(1)
)
declare @counter int = 1
/******* This variable is the top n / bottom n records in question; it is 200,
for test purposes setting it to 20
************/
declare @delta int = 20
while(@counter <= 50)
begin
Insert into @test values(@counter * 5, CHAR(@counter + 65))
set @counter = @counter + 1
end
/************Tag the records with a row id as we do not know if ID's are sequential or random ************/
Select ROW_NUMBER() over (order by ID) rownum, * into #tmp from @test
/************Logic to update the data ************/
/* Here we first do a self join on the tmp table, pairing the first 20 rows with the last 20,
then create another join to the test table based on the ID of the first 20
*/
update t
set t.ID = tid.lastID
--Select t.ID, tid.lastID
from @test t inner join
(
Select f20.rownum as first20, l20.rownum as last20, f20.ID as firstID, l20.ID lastID
from #tmp f20
inner join #tmp l20 on (f20.rownum + @delta) = l20.rownum
) tid on tid.firstID = t.ID and tid.first20 <= @delta
Select * from @test
drop table #tmp

How to chunk updates to SQL Server?

I want to update a table in SQL Server by setting a FLAG column to 1 for all values since the beginning of the year:
TABLE
DATE ID FLAG (more columns...)
2016/01/01 1 0 ...
2016/01/01 2 0 ...
2016/01/02 3 0 ...
2016/01/02 4 0 ...
(etc)
Problem is that this table contains hundreds of millions of records and I've been advised to chunk the updates 100,000 rows at a time to avoid blocking other processes.
I need to remember which rows I update because there are background processes which immediately flip the FLAG back to 0 once they're done processing it.
Does anyone have suggestions on how I can do this?
Each day's worth of data has over a million records, so I can't simply loop using the DATE as a counter. I am thinking of using the ID.
Assuming the date column and the ID column are sequential, you could do a simple loop. By this I mean that if there is a record with id=1 and date=2016-01-01, then a record with id=2 and date=2015-12-31 could not exist. If you are worried about locks/exceptions, you should add a transaction in the WHILE block and commit or roll back on failure.
Change the @batchSize to whatever you feel is right after some experimentation.
DECLARE @currentId int, @maxId int, @batchSize int = 10000
SELECT @currentId = MIN(ID), @maxId = MAX(ID) FROM YOURTABLE WHERE DATE >= '2016-01-01'
WHILE @currentId <= @maxId
BEGIN
UPDATE YOURTABLE SET FLAG = 1 WHERE ID >= @currentId AND ID < (@currentId + @batchSize)
SET @currentId = @currentId + @batchSize
END
As the update will never flag the same record to 1 twice, I do not see a need to track which records were touched, unless you are going to manually stop the process partway through.
You should also ensure that the ID column has an index on it so the retrieval is fast in each update statement.
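A minimal sketch of the transactional variant mentioned above, assuming the same YOURTABLE layout (table and index names are illustrative):
-- Index so each batch can seek on ID rather than scan
CREATE INDEX IX_YOURTABLE_ID ON YOURTABLE (ID);
DECLARE @currentId int, @maxId int, @batchSize int = 10000
SELECT @currentId = MIN(ID), @maxId = MAX(ID) FROM YOURTABLE WHERE DATE >= '2016-01-01'
WHILE @currentId <= @maxId
BEGIN
BEGIN TRY
BEGIN TRANSACTION
UPDATE YOURTABLE SET FLAG = 1
WHERE ID >= @currentId AND ID < (@currentId + @batchSize)
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
THROW; -- rethrow so the failure is visible (THROW needs SQL Server 2012+)
END CATCH
SET @currentId = @currentId + @batchSize
END
This keeps each batch's locks short-lived instead of holding one giant transaction over hundreds of millions of rows.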
Looks like a simple question or maybe I'm missing something.
You can create a temp/permanent table to keep track of updated rows.
create table tbl (Id int) -- or a temp table, based on your case
insert into tbl values (0)
declare @lastId int = (select Id from tbl)
;with cte as (
select top (100000) *
from YourMainTable
where Id > @lastId
ORDER BY Id
)
update cte
set Flag = 1
update tbl set Id = @lastId + 100000
You can do this process in a loop (except the table creation part); a sketch of that loop follows.
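A sketch of the loop, assuming YourMainTable and the tracking table tbl from above (the OUTPUT clause captures exactly which rows each batch touched):
DECLARE @lastId int, @maxId int;
DECLARE @done TABLE (Id int);
SELECT @lastId = Id FROM tbl;
SELECT @maxId = MAX(Id) FROM YourMainTable;
WHILE @lastId < @maxId
BEGIN
DELETE FROM @done;
;WITH cte AS (
SELECT TOP (100000) Id, Flag
FROM YourMainTable
WHERE Id > @lastId
ORDER BY Id
)
UPDATE cte SET Flag = 1
OUTPUT inserted.Id INTO @done; -- the rows this batch actually updated
SELECT @lastId = MAX(Id) FROM @done;
UPDATE tbl SET Id = @lastId; -- persist progress so the process is restartable
END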
create table #tmp_table
(
id int ,
row_number int
)
insert into #tmp_table
(
id,
row_number
)
--logic to load records from base table
select
bt.id,
row_number() over(order by id) as row_number
from
dbo.base_table bt
where
-- your logic to limit the records
declare @batch_size int = 100000;
declare @start_row_number int, @end_row_number int;
select
@start_row_number = min(row_number),
@end_row_number = max(row_number)
from
#tmp_table
while(@start_row_number <= @end_row_number)
begin
update top (@batch_size)
bt
set
bt.flag = 1
from
dbo.base_table bt
inner join #tmp_table tt on
tt.Id = bt.Id
where
tt.row_number between @start_row_number and (@start_row_number + @batch_size - 1)
set @start_row_number = @start_row_number + @batch_size
end

Generate a repetitive sequential number using SQL Server 2008

Can anyone help me generate a repetitive sequential number using SQL Server 2008? Say I have a table of 1000 rows and a new (int) field added to the table. All I need is to auto-fill that field with the sequential numbers 1-100, repeating all the way to the last row.
I have this, but it doesn't seem to be working. Your help is much appreciated.
DECLARE @id INT
SET @id = 0
while @id < 101
BEGIN
UPDATE Names SET id=@id
set @id=@id+1
END
USE tempdb
GO
DROP TABLE tableof1000rows
GO
CREATE TABLE tableof1000rows (id int identity(1,1), nb int, value varchar(128))
GO
INSERT INTO tableof1000rows (value)
SELECT TOP 1000 o1.name
FROM sys.objects o1
CROSS JOIN sys.objects o2
GO
UPDATE t1
SET nb = t2.nb
FROM tableof1000rows t1
JOIN (SELECT id, ((ROW_NUMBER() OVER (ORDER BY id) - 1) % 100) + 1 as nb FROM tableof1000rows) t2 ON t1.id = t2.id
GO
SELECT *
FROM tableof1000rows
Use ROW_NUMBER to generate a number. Use modulo maths to get values from 1 to 100.
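To sanity-check the modulo mapping on its own (using the (n - 1) % 100 + 1 form from the query above):
SELECT rn, ((rn - 1) % 100) + 1 AS nb
FROM (VALUES (1), (2), (99), (100), (101), (200), (201)) v(rn)
-- returns 1, 2, 99, 100, 1, 100, 1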
go
create table dbo.Demo1
(
DID int not null identity primary key,
RepetitiveSequentialNumber int not null
) ;
go
insert into dbo.Demo1 values ( 0 )
go 1000 -- This is just to get 1,000 rows into the table.
-- Get a sequential number.
select DID, row_number() over ( order by DID ) as RSN
into #RowNumbers
from dbo.Demo1 ;
-- Take the last two digits. This gives us values from 0 to 99.
update #RowNumbers
set RSN = RSN % 100 ;
-- Change the 0 values to 100.
update #RowNumbers
set RSN = case when RSN = 0 then 100 else RSN end ;
-- Update the main table.
update dbo.Demo1
set RepetitiveSequentialNumber = r.RSN
from dbo.Demo1 as d inner join #RowNumbers as r on r.DID = d.DID ;
select *
from dbo.Demo1 ;
Not pretty or elegant, but...
while exists(select * from tbl where new_col is null)
update top(1) tbl
set new_col=(select COUNT(*) % 100 + 1 from tbl where new_col is not null)
where new_col is null
Your way isn't working because you are trying to update a value rather than insert one. I'm sure you have found a solution by now, but if not, try this instead.
DECLARE @id INT
SET @id = 1
while @id < 101
BEGIN
INSERT Names select @id
set @id=@id+1
END

How to expand rows from count in tsql only

I have a table that contains a number and a range value. For instance, one column has a value of 40 and the other column has a value of 100, meaning that, starting at 40, the range has 100 values, ending at 139 (inclusive of the number 40). I want to write a T-SQL statement that expands my data into individual rows.
I think I need a CTE for this but do not know how I can achieve it.
Note: when expanded I am expecting 7m rows.
If you want a CTE, here is an example.
Initial insert:
insert into rangeTable (StartValue, RangeValue)
select 40,100
union all select 150,10
go
the query:
with r_CTE (startVal, rangeVal, generatedVal)
as
(
select r.startValue, r.rangeValue, r.startValue
from rangeTable r
union all
select r.startValue, r.rangeValue, generatedVal+1
from rangeTable r
inner join r_CTE rc
on r.startValue = rc.startVal
and r.rangeValue = rc.rangeVal
and r.startValue + r.rangeValue > rc.generatedVal + 1
)
select * from r_CTE
order by startVal, rangeVal, generatedVal
Just be aware that the default maximum number of recursions is 100. You can change it, up to the maximum of 32767, by adding
option (maxrecursion 32767)
or remove the limit entirely with
option (maxrecursion 0)
See BOL for details
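For clarity, the OPTION clause goes on the outer statement that references the CTE, so the final query above would become:
select * from r_CTE
order by startVal, rangeVal, generatedVal
option (maxrecursion 0) -- without this, ranges deeper than 100 rows fail with the default limit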
I don't know how this could be done with common table expressions, but here is a solution using a table variable of numbers:
SET NOCOUNT ON
DECLARE @MaxValue INT
SELECT @MaxValue = max(StartValue + RangeValue) FROM MyTable
DECLARE @Numbers table (
Number INT IDENTITY(1,1) PRIMARY KEY
)
INSERT @Numbers DEFAULT VALUES
WHILE COALESCE(SCOPE_IDENTITY(), 0) <= @MaxValue
INSERT @Numbers DEFAULT VALUES
SELECT n.Number
FROM @Numbers n
WHERE EXISTS(
SELECT *
FROM MyTable t
WHERE n.Number BETWEEN t.StartValue AND t.StartValue + t.RangeValue - 1
)
SET NOCOUNT OFF
This could be optimized by making the numbers table a regular, permanent table, so you don't have to fill it on every call.
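A sketch of such a permanent numbers table (the name dbo.Numbers and the 100,000-row ceiling are illustrative; build it once and reuse it):
CREATE TABLE dbo.Numbers (Number int NOT NULL PRIMARY KEY);
INSERT INTO dbo.Numbers (Number)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_objects o1 CROSS JOIN sys.all_objects o2; -- cross join just to generate enough rows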
You could try this approach:
create function [dbo].[fRange](@a int, @b int)
returns @ret table (val int)
as
begin
declare @val int
declare @end int
set @val = @a
set @end = @a + @b
while @val < @end
begin
insert into @ret(val)
select @val
set @val = @val + 1
end
return
end
go
declare @ranges table(start int, noOfEntries int)
insert into @ranges (start, noOfEntries)
select 40,100
union all select 150, 10
select * from @ranges r
cross apply dbo.fRange(start, noOfEntries) fr
Not the fastest, but it should work.
I would do something slightly different from splattne...
SET NOCOUNT ON
DECLARE @MaxValue INT
DECLARE @Numbers table (
Number INT IDENTITY(1,1) PRIMARY KEY CLUSTERED
)
SELECT @MaxValue = max(RangeValue) FROM MyTable
INSERT @Numbers DEFAULT VALUES
WHILE COALESCE(SCOPE_IDENTITY(), 0) <= @MaxValue
INSERT @Numbers DEFAULT VALUES
SELECT
t.startValue + n.Number - 1
FROM
MyTable t
INNER JOIN
@Numbers n
ON n.Number <= t.RangeValue
SET NOCOUNT OFF
This will minimise the number of rows you need to insert into the table variable, then use a join to 'multiply' one table by the other.
By the nature of the query, the source table doesn't need indexing, but the numbers table should have an index (or primary key). Clustered indexes refer to how rows are stored on disk, so I can't see CLUSTERED being relevant here, but I left it in as I just copied it from Splattne.
(Large joins like this may be slow, but still much faster than millions of inserts.)
