I'm using this query on SQL Server 2017:
DECLARE @id int
SET @id = 0
UPDATE aicidich00fuff
SET @id = trasco_id = @id + 1
but for some reason it doesn't create a unique value.
It is easy to see if I run the following:
select COUNT(*) cont, trasco_id from aicidich00fuff group by trasco_id order by trasco_id
cont trasco_id
4 1
4 2
4 3
4 4
4 5
4 6
4 7
4 8
4 9
and so on. Fun fact: most values are repeated four times, then three times, then twice.
This query for auto-incrementing a column worked until a few weeks ago, but now I get this behaviour every time. Any tips?
Thanks
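One likely culprit for this pattern is that the variable-assignment ("quirky") UPDATE is only reliable on a serial execution plan; if the optimizer picks a parallel plan, several rows can receive the same value (repeats of four suggest four parallel threads). A deterministic alternative is to number the rows explicitly; a minimal sketch, assuming trasco_id is the only column being renumbered:
;WITH numbered AS (
    SELECT trasco_id, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM aicidich00fuff
)
UPDATE numbered SET trasco_id = rn; -- every row gets a distinct sequential value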
In SQL Server, I have the following table (snippet) which is the source data I receive (I cannot get the raw table it was generated from).
Gradelevel | YoS | Inventory
4 | 0 | 4000
4 | 1 | 3500
4 | 2 | 2000
The first row of the table is saying for grade level 4, there are 4,000 people with 0 years of service (YoS).
I need to find the median YoS for each Grade level. This would be easy if the table wasn't given to me aggregated up to the Gradelevel/YoS level with a sum in the Inventory column, but sadly I'm not so lucky.
What I need is to ungroup this table so that I have a new table where the first record appears 4,000 times, the next record 3,500 times, the next 2,000, etc. (the Inventory column would not be in this new table). Then I could take PERCENTILE_DISC(0.5) of the YoS column by grade level and get the median. I could also then use other statistical functions on YoS to glean other insights from the data.
So far I've looked at UNPIVOT (it doesn't appear to be a candidate for my use case), CTEs (I can't find an example close to what I'm trying to do), and a function which iterates through the above table, inserting the number of rows indicated by the value in Inventory into a new table, which becomes my 'ungrouped' table I can run statistical analyses on. I believe the last approach is the best option available to me, but all the examples I've seen iterate over and focus on a single column from a table. I need to iterate through each row, then use the Gradelevel and YoS values to insert [Inventory] rows before moving on to the next row.
Is anyone aware of:
A better way to do this other than the iteration/cursor method?
How to iterate through a table to accomplish my goal? I've been reading Is there a way to loop through a table variable in TSQL without using a cursor? but am having a hard time figuring out how to apply that iteration to my use case.
Edit 10/3: here is the looping code I got working, which produces the same result as John's CROSS APPLY. The pro is that any statistical function can then be run on it; the con is that it is slow.
--this table will hold our row (non-frequency) based inventory data
DROP TABLE IF EXISTS #tempinv
CREATE TABLE #tempinv(
amcosversionid INT IDENTITY(1,1) NOT NULL, --identity so this NOT NULL column is populated automatically
pp NVARCHAR(3) NOT NULL,
gl INT NOT NULL,
yos INT NOT NULL
)
-- to transform the inventory frequency table to a row-based inventory we need to iterate through it
DECLARE @MyCursor CURSOR, @pp AS NVARCHAR(3), @gl AS INT, @yos AS INT, @inv AS int
BEGIN
SET @MyCursor = CURSOR FOR
SELECT payplan, gradelevel, step_yos, SUM(inventory) AS inventory
FROM mytable
GROUP BY payplan, gradelevel, step_yos
OPEN @MyCursor
FETCH NEXT FROM @MyCursor
INTO @pp, @gl, @yos, @inv
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @i int
SET @i = 1
--insert into our new table once for each person in inventory
WHILE @i <= @inv
BEGIN
INSERT INTO #tempinv (pp,gl,yos) VALUES (@pp,@gl,@yos)
SET @i = @i + 1
END
FETCH NEXT FROM @MyCursor
INTO @pp, @gl, @yos, @inv
END
CLOSE @MyCursor
DEALLOCATE @MyCursor
END;
One option is to use a CROSS APPLY in concert with an ad-hoc tally table. This will "expand" your data into N rows. Then you can perform any analysis you want.
Example
Select *
From YourTable A
Cross Apply (
Select Top ([Inventory]) N=Row_Number() Over (Order By (Select NULL))
From master..spt_values n1, master..spt_values n2
) B
Returns
Grd Yos Inven N
4 0 4000 1
4 0 4000 2
4 0 4000 3
4 0 4000 4
4 0 4000 5
...
4 0 4000 3998
4 0 4000 3999
4 0 4000 4000
4 1 3500 1
4 1 3500 2
4 1 3500 3
4 1 3500 4
...
4 1 3500 3499
4 1 3500 3500
4 2 2000 1
4 2 2000 2
4 2 2000 3
...
4 2 2000 1999
4 2 2000 2000
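From there the median is one windowed function away. A sketch of the follow-up calculation, reusing the expansion above (PERCENTILE_DISC requires SQL Server 2012 or later):
Select Distinct [Gradelevel]
      ,Percentile_Disc(0.5) Within Group (Order By [YoS])
           Over (Partition By [Gradelevel]) As MedianYoS
From YourTable A
Cross Apply (
    Select Top ([Inventory]) N=Row_Number() Over (Order By (Select NULL))
    From master..spt_values n1, master..spt_values n2
) B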
I have a table with 4 columns ID, c1, c2 and LOT. ID is the primary key. For every record when c1 is 5 I want to auto-generate a number for LOT which will be a sequence starting from 1 for each distinct value of c2.
So if c1 is not 5, LOT remains null. But if c1 is 5 then for every record where c2=1 I want to populate LOT with an auto-incrementing sequence starting from 1.
Ex:
ID c1 c2 LOT
1 3
2 5 1 1
3 5 1 2
4 5 1 3
5 4
Then do the same for a different value of c2. So if c2 is 2, have another bunch of auto-incrementing LOT numbers starting from 1:
ID c1 c2 LOT
6 3
7 5 2 1
8 5 1 4
9 5 2 2
10 5 2 3
We are using MSSQL 2014 Enterprise Ed. Would table-partitioning be useful or do I need to create special tables for each distinct value of C2?
Not with an identity field; you can use a trigger instead.
There is no way of doing this using the identity feature; however, consider using an INSTEAD OF trigger to manually manage the values the way you want.
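A minimal sketch of such a trigger, assuming the table is named T, that ID is supplied by the insert (if it were an IDENTITY column it would have to be omitted below), and ignoring concurrency (two simultaneous batches could still read the same MAX and collide):
CREATE TRIGGER trg_T_Lot ON T
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- number the c1 = 5 rows per c2 within this batch,
    -- then offset by the highest LOT already stored for that c2
    WITH numbered AS (
        SELECT i.ID, i.c1, i.c2,
               ROW_NUMBER() OVER (PARTITION BY i.c2 ORDER BY i.ID) AS rn
        FROM inserted i
        WHERE i.c1 = 5
    )
    INSERT INTO T (ID, c1, c2, LOT)
    SELECT n.ID, n.c1, n.c2,
           n.rn + ISNULL((SELECT MAX(t.LOT) FROM T t WHERE t.c2 = n.c2), 0)
    FROM numbered n
    UNION ALL
    SELECT i.ID, i.c1, i.c2, NULL -- rows where c1 is not 5 keep LOT null
    FROM inserted i
    WHERE ISNULL(i.c1, 0) <> 5;
END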
You could use the logic to generate LOT in a query or view:
SELECT ID, C1, C2,
CASE
WHEN C1<>5 OR C2 IS NULL THEN NULL
ELSE COUNT(*) OVER (PARTITION BY C1, C2 ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
END AS LOT
FROM D
ORDER BY ID
Given a table generated with (ID, C1, C2):
CREATE TABLE D (ID INT PRIMARY KEY IDENTITY, C1 INT, C2 INT)
INSERT D VALUES (3,NULL),
(5,1),
(5,1),
(5,1),
(4,NULL),
(3,NULL),
(5,2),
(5,1),
(5,2),
(5,2)
The query produces the output indicated above:
ID C1 C2 LOT
1 3
2 5 1 1
3 5 1 2
4 5 1 3
5 4
6 3
7 5 2 1
8 5 1 4
9 5 2 2
10 5 2 3
The expression used to generate LOT, COUNT(*) OVER (PARTITION BY C1, C2 ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), simply counts the number of rows before and including the current row where C1 and C2 are equal to the current row's values. For instance, for row 4, the tuple (C1,C2)=(5,1) is observed in 3 records before and including that row (rows 2, 3, and 4), so LOT=3.
Thank you everyone for the suggestions to use a trigger (all up-voted). It turns out (as I mentioned in a comment above) that an article which came up in the sidebar (SQL Server unique auto-increment column in the context of another column) shows a detailed construction of the proper INSTEAD OF INSERT trigger. The author mentions that it is "Not tested", and indeed there IS a slight error (a missing GROUP BY ParentEntityID in the WITH clause), but anybody copying the code would get an error that is obvious to fix. Probably not kosher to correct that post here, but the other question is 6 years old.
I have a scenario in which I want a field in a table to be incremented sequentially.
Suppose I have a table Test, with columns TestID, TestResult1, TestResult2, etc., and TestCount.
I have data bulk inserted into the table. Some of the records may be retests, meaning new data to be inserted matches existing records in the table; in that case TestCount should be updated. Matching is done on TestID.
If the table is as follows:
TestID TestResult1 TestResult2.. TestCount
12 1 1 1
12 2 2 2
13 4 1 1
Data to be inserted:
TestID TestResult1 TestResult2..
12 3 5
12 2 2
The table should be updated as
TestID TestResult1 TestResult2.. TestCount
12 1 1 1
12 2 2 2
13 4 1 1
12 3 5 3
12 2 2 4
I tried adding a trigger on the table to update TestCount by counting the number of records that match, but it was updating the table as follows:
TestID TestResult1 TestResult2.. TestCount
12 1 1 1
12 2 2 2
13 4 1 1
12 3 5 3
12 2 2 3
CREATE TRIGGER trgTestCount
on Test
AFTER INSERT
AS
Update g
-- one count is computed for the whole inserted batch, so every new row gets the same TestCount
Set TestCount = (Select Count(*)+1 from Test g2 join INSERTED g1 on g2.TestID = g1.TestID)
from Test g
This is an SSIS package, and I use a data flow task to load data from the STg table to the Test table. Can you tell me what I am doing wrong here?
If you can change the table structure, I would suggest adding an identity column, changing the TestCount column to a computed value, and having its value be the count of rows with the same TestID as the current row and an identity value lower than or equal to the current row's.
This will eliminate the need for triggers and will handle inserting multiple records with the same test id automatically.
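A minimal sketch of that idea; one caveat is that a SQL Server computed column cannot contain a subquery, so the running count is exposed through a view here instead (table and column names are assumed from the question):
ALTER TABLE Test ADD RowID INT IDENTITY(1,1); -- captures insertion order
GO
CREATE VIEW TestWithCount AS
SELECT TestID, TestResult1, TestResult2,
       ROW_NUMBER() OVER (PARTITION BY TestID ORDER BY RowID) AS TestCount
FROM Test;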
In SQL Server 2008 R2, I have a table like this:
ID Dates Count
1 03-02-2014 2
2 04-02-2014 1
3 05-02-2014 NULL
4 06-02-2014 1
5 07-02-2014 3
6 08-02-2014 NULL
7 09-02-2014 2
8 10-02-2014 NULL
9 11-02-2014 1
10 12-02-2014 3
11 13-02-2014 NULL
12 14-02-2014 1
I have an INT variable holding some value, such as @XCount = 15.
My requirement is to update the Count column with (@XCount - Count), so that each record's result is the previous record's result minus the current record's Count value.
Result:
ID Dates Count
1 03-02-2014 13 (15-2)
2 04-02-2014 12 (13-1)
3 05-02-2014 12 (12-0)
4 06-02-2014 11 (12-1)
5 07-02-2014 8 (11-3)
6 08-02-2014 8 (8-0)
7 09-02-2014 6 (8-2)
8 10-02-2014 6 (6-0)
9 11-02-2014 5 (6-1)
10 12-02-2014 2 (5-3)
11 13-02-2014 2 (2-0)
12 14-02-2014 1 (2-1)
I'm reluctant to use cursors as a solution. Can somebody help me?
How about something like
DECLARE #XCount INT = 15
;WITH Vals AS(
SELECT ID, Dates, [Count] OriginalCount, #XCount - ISNULL([COUNT],0) NewCount
FROM Table1
WHERE ID = 1
UNION ALL
SELECT t.ID, t.Dates, t.[Count], v.NewCount - ISNULL(t.[Count],0)
FROM Table1 t INNER JOIN Vals v ON t.ID = v.ID + 1
)
SELECT *
FROM Vals
Do note though that this is a recursive query, and that sometimes, until the tech allows for it (such as SQL Server 2012's LAG or windowed running totals), the old ways still work.
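Also note that a recursive CTE is capped at 100 recursion levels by default, so longer tables would need OPTION (MAXRECURSION 0). On SQL Server 2012 or later the same result can be had without recursion via a windowed running total; a sketch against the same Table1:
DECLARE @XCount INT = 15
SELECT ID, Dates,
       @XCount - SUM(ISNULL([Count], 0)) OVER (ORDER BY ID
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS NewCount
FROM Table1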
I have a table like
Id Value
1 Start
2 Normal
3 End
4 Normal
5 Start
6 Normal
7 Normal
8 End
9 Normal
I have to produce output like:
id Value
1 Start
2 Normal
3 End
5 Start
6 Normal
7 Normal
8 End
i.e. the records between Start & End. Records with ids 4 & 9 fall outside any Start/End range and hence are not in the output.
How can this be done in a set-based manner (SQL Server 2005)?
Load a table variable @t:
declare @t table(Id int,Value nvarchar(100));
insert into @t values (1,'Start'),(2,'Normal'),(3,'End'),(4,'Normal'),(5,'Start'),(6,'Normal'),(7,'Normal'),(8,'End'),(9,'Normal');
Query:
With RangesT as (
select Id, (select top 1 Id from @t where Id>p.Id and Value='End' order by Id asc) Id_to
from @t p
where Value='Start'
)
select crossT.*
from RangesT p
cross apply (
select * from @t where Id>=p.Id and Id<=Id_to
) crossT
order by Id
Note that I'm assuming no overlaps. The result:
Id Value
----------- ------
1 Start
2 Normal
3 End
5 Start
6 Normal
7 Normal
8 End
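For completeness, the same filter can be written by comparing running counts of 'Start' and 'End' markers, which also avoids window functions and works on SQL Server 2005 (same no-overlap assumption):
select Id, Value
from (
    select Id, Value,
           (select count(*) from @t s where s.Value='Start' and s.Id<=t.Id) as starts,
           (select count(*) from @t e where e.Value='End' and e.Id<t.Id) as ends
    from @t t
) x
where starts > ends -- row sits inside an open Start..End range
order by Id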