I have the data ready to insert into my production table, but the ID column is NULL and needs to be pre-populated with IDs before the insert. I have these IDs in another temp table... all I want is to apply those IDs to the records in my temp table.
For example, say I have 10 records that all need IDs, and in another temp table I have exactly 10 IDs... they simply need to be applied to the 10 records in my 'Ready to INSERT' temp table.
I worked in Oracle for about 9 years, and there I would have done this with a FORALL loop over a collection: loop over my 'Ready to INSERT' rows and apply the ID from the other collection to each one. In SQL Server I'm working with temp tables, not collections, and there's no FORALL loop or really any fancy loop other than WHILE.
My goal is to learn the appropriate way to accomplish this in SQL Server. I've learned that in the SQL Server world most DML operations are set-based, whereas in Oracle we handled data via arrays/collections and iterated through it with cursors or loops. I've also seen that in the SQL Server world, using cursors and/or processing data record by record is frowned upon.
Help me get my head out of the 'Oracle' space I was in for so long and into the 'SQL Server' space I need to be in. This has been a bit of a struggle.
The code below is how I've currently implemented this, but it seems convoluted.
SET NOCOUNT ON;
DECLARE @KeyValueNewMAX INT,
@KeyValueINuse INT,
@ClientID INT,
@Count INT;
DROP TABLE IF EXISTS #InterOtherSourceData;
DROP TABLE IF EXISTS #InterOtherActual;
DROP TABLE IF EXISTS #InterOtherIDs;
CREATE TABLE #InterOtherSourceData -- Data stored here for DML until data is ready for INSERT
(
UniqueID INT IDENTITY( 1, 1 ),
NewIntOtherID INT,
ClientID INT
);
CREATE TABLE #InterOtherActual -- Prod Table where the data will be INSERTED Into
(
IntOtherID INT,
ClientID INT
);
CREATE TABLE #InterOtherIDs -- Store IDs needing to be applied to Data
(
UniqueID INT IDENTITY( 1, 1 ),
NewIntOtherID INT
);
BEGIN
/* TEST Create Fake Data and store it in temp table */
WITH fakeIntOtherRecs AS
(
SELECT 1001 AS ClientID, 'Jake' AS fName, 'Jilly' AS lName UNION ALL
SELECT 2002 AS ClientID, 'Jason' AS fName, 'Bateman' AS lName UNION ALL
SELECT 3003 AS ClientID, 'Brain' AS fName, 'Man' AS lName
)
INSERT INTO #InterOtherSourceData (ClientID)
SELECT fc.ClientID--, fc.fName, fc.lName
FROM fakeIntOtherRecs fc
;
/* END TEST Prep Fake Data */
/* Obtain count so we know how many IDs we need to create */
SELECT @Count = COUNT(*) FROM #InterOtherSourceData;
PRINT 'Count: ' + CAST(@Count AS VARCHAR);
/* For testing set value OF KeyValuePre to the max key currently in use by Table */
SELECT @KeyValueINuse = 13;
/* Using the #Count let's obtain the new MAX ID... basically Existing_Key + SourceRecordCount = New_MaxKey */
SELECT @KeyValueNewMAX = @KeyValueINuse + @Count; /* STORE new MAX ID in variable */
/* Print both keys for testing purposes to review */
PRINT 'KeyValue Current: ' + CAST(@KeyValueINuse AS VARCHAR) + ' KeyValue Max: ' + CAST(@KeyValueNewMAX AS VARCHAR);
/* Using recursive CTE generate a fake table containing all of the IDs we want to INSERT into Prod Table */
WITH CTE AS
(
SELECT (@KeyValueNewMAX - @Count) + 1 AS STARTMINID, @KeyValueNewMAX AS ENDMAXID UNION ALL
/* SELECT FROM CTE to create Recursion */
SELECT STARTMINID + 1 AS STARTMINID, ENDMAXID FROM CTE
WHERE (STARTMINID + 1) < (@KeyValueNewMAX + 1)
)
INSERT INTO #InterOtherIDs (NewIntOtherID)
SELECT c.STARTMINID AS NewIntOtherID
FROM CTE c
;
/* Apply New IDs : Using the IDENTITY fields on both Temp Tables I can JOIN the tables by the IDENTITY columns
| Is there a BETTER Way to do this?... like LOOP over each record rather than having to build up common IDs in both tables using IDENTITY columns?
*/
UPDATE o SET o.NewIntOtherID = oi.NewIntOtherID
FROM #InterOtherSourceData o
JOIN #InterOtherIDs oi ON oi.UniqueID = o.UniqueID
;
/* View data that is ready for insert */
--SELECT *
--FROM #InterOtherSourceData
--;
/* INSERT DATA INTO PRODUCTION TABLE */
INSERT INTO #InterOtherActual (IntOtherID, ClientId)
SELECT NewIntOtherID, ClientID
FROM #InterOtherSourceData
;
SELECT * FROM #InterOtherActual;
END
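For what it's worth, the identity-matching step the comment above asks about can be collapsed into a single set-based UPDATE using ROW_NUMBER(), with no second temp table and no recursive CTE. This is only a minimal sketch against the temp tables defined above, assuming @KeyValueINuse still holds the last key in use:
UPDATE s
SET s.NewIntOtherID = @KeyValueINuse + s2.rn
FROM #InterOtherSourceData s
JOIN (
    SELECT UniqueID, ROW_NUMBER() OVER (ORDER BY UniqueID) AS rn
    FROM #InterOtherSourceData
) s2 ON s2.UniqueID = s.UniqueID;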
To pre-generate key values in SQL Server, use a sequence rather than an IDENTITY column.
e.g.
drop table if exists t
drop table if exists #t_stg
drop sequence if exists t_seq
go
create sequence t_seq start with 1 increment by 1
create table t(id int primary key default (next value for t_seq),a int, b int)
create table #t_stg(id int, a int, b int)
insert into #t_stg(a,b) values (1,2),(3,3),(4,5)
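-- stamp every staged row with its own sequence value, set-based, no loop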
update #t_stg set id = next value for t_seq
--select * from #t_stg
insert into t(id,a,b)
select * from #t_stg
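A side note: if you ever need to reserve a whole contiguous block of sequence values in one call (for example, to stamp a staged batch before the insert), sys.sp_sequence_get_range does that. A minimal sketch against the t_seq sequence above:
declare @first sql_variant
exec sys.sp_sequence_get_range @sequence_name = N'dbo.t_seq', @range_size = 3, @range_first_value = @first output
-- the reserved block is cast(@first as int) .. cast(@first as int) + 2
select cast(@first as int) as first_reserved_value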
I am using a trigger to insert rows into a table using an INSERT statement, as below, but the RECORD_ID number is only incremented once, so all of the records inserted by a single statement end up with the same number.
This is what I'm using to increment the records in the trigger:
, ISNULL((
SELECT MAX([PROGRESS-RECID]) FROM [DBAdmin].[dbo].[ReTncyTransStatement]
),0) + 1 AS [PROGRESS-RECID]
This is what I'm using to load the data:
;WITH TestTrans (
[ORG-CODE]
,[TNCY-SYS-REF]
,[TRANS-NO]
,[POSTING-YEAR]
,[POSTING-WEEK]
,[TRANS-YEAR]
,[TRANS-WEEK]
,[TRANS-DATE]
,[ACCOUNT-TYPE]
,[ACCOUNT-CODE]
,[COMMENT]
,[TRANS-AMT]
,[SOURCE]
,[CREATED-USER]
,[CREATED-DATE]
,[CREATED-TIME]
,[UPDATED-USER]
,[UPDATED-DATE]
,[UPDATED-TIME]
,[BATCH-NO]
,[BATCH-NO-TYPE]
,[SUSPENSE-REF]
,[REFERENCE]
,[MGT-AREA]
,[ANALYSIS-CODE]
)
AS (SELECT
[ORG-CODE]
,[TNCY-SYS-REF]
,[TRANS-NO]
,[POSTING-YEAR]
,[POSTING-WEEK]
,[TRANS-YEAR]
,[TRANS-WEEK]
,[TRANS-DATE]
,[ACCOUNT-TYPE]
,[ACCOUNT-CODE]
,[COMMENT]
,[TRANS-AMT]
,[SOURCE]
,[CREATED-USER]
,[CREATED-DATE]
,[CREATED-TIME]
,[UPDATED-USER]
,[UPDATED-DATE]
,[UPDATED-TIME]
,[BATCH-NO]
,[BATCH-NO-TYPE]
,[SUSPENSE-REF]
,[REFERENCE]
,[MGT-AREA]
,[ANALYSIS-CODE] from [SQLViewsPro2Live].[dbo].[RE-TNCY-TRANS] where [TRANS-DATE] between '2019-05-16 00:00:00.000' and '2019-05-17 00:00:00.000'
)
INSERT INTO [SQLViewsPro2Test].[dbo].[RE-TNCY-TRANS]
SELECT
[ORG-CODE]
,[TNCY-SYS-REF]
,[TRANS-NO]
,[POSTING-YEAR]
,[POSTING-WEEK]
,[TRANS-YEAR]
,[TRANS-WEEK]
,[TRANS-DATE]
,[ACCOUNT-TYPE]
,[ACCOUNT-CODE]
,[COMMENT]
,[TRANS-AMT]
,[SOURCE]
,[CREATED-USER]
,[CREATED-DATE]
,[CREATED-TIME]
,[UPDATED-USER]
,[UPDATED-DATE]
,[UPDATED-TIME]
,[BATCH-NO]
,[BATCH-NO-TYPE]
,[SUSPENSE-REF]
,[REFERENCE]
,[MGT-AREA]
,[ANALYSIS-CODE]
FROM TestTrans;
GO
Any fixes appreciated
Thanks,
Full description of problem available here: T-SQL : create trigger to copy new columns from one table to another and increment no
Make PROGRESS-RECID an IDENTITY column and it will auto-increment.
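Note that an existing int column can't simply be altered to IDENTITY in place; the table has to be rebuilt or the data migrated to a new table. A minimal sketch of what the rebuilt table could look like, assuming only the columns visible in the trigger below:
CREATE TABLE [DBAdmin].[dbo].[ReTncyTransStatement]
(
    [PROGRESS-RECID] INT IDENTITY(1, 1) PRIMARY KEY,
    [ORG-CODE] INT,
    [TNCY-SYS-REF] INT,
    [TRANS-NO] INT
);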
Based on the linked question, you can rewrite your trigger as follows:
CREATE TRIGGER AddReTncyTransStatement
ON [SQLViewsPro2EOD].[dbo].[RE-TNCY-TRANS]
AFTER UPDATE, INSERT
AS
BEGIN
DECLARE @ORG_CODE INT,
@TNCY_SYS_REF INT,
@TRANS_NO INT;
DECLARE C CURSOR FAST_FORWARD FOR
SELECT Inserted.[ORG-CODE],
Inserted.[TNCY-SYS-REF],
Inserted.[TRANS-NO]
FROM Inserted;
OPEN C;
FETCH NEXT FROM C
INTO @ORG_CODE,
@TNCY_SYS_REF,
@TRANS_NO;
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO [DBAdmin].[dbo].[ReTncyTransStatement]
(
[ORG-CODE],
[TNCY-SYS-REF],
[TRANS-NO],
[PROGRESS-RECID]
)
SELECT
@ORG_CODE,
@TNCY_SYS_REF,
@TRANS_NO,
ISNULL((SELECT MAX([PROGRESS-RECID]) FROM [DBAdmin].[dbo].[ReTncyTransStatement]), 0) + 1 AS RECID;
FETCH NEXT FROM C
INTO @ORG_CODE,
@TNCY_SYS_REF,
@TRANS_NO
END;
CLOSE C;
DEALLOCATE C;
END;
Root of your problem:
When you use INSERT INTO ... SELECT (the one outside the trigger), the trigger fires once and the Inserted pseudo-table contains all of the rows being inserted. The query inside the trigger therefore runs once, and SELECT MAX([PROGRESS-RECID]) is evaluated once. This means that if Inserted contains 10 rows, the MAX(...) is the same for all of them!
How I solved it:
Inside the trigger I used a cursor to iterate through all of the rows being inserted (for example, 10 rows); in each iteration I insert one row into ReTncyTransStatement, so the MAX(...) is recalculated for every row, as expected.
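If you would rather avoid the cursor entirely, the same per-row numbering can be done set-based by adding ROW_NUMBER() over the Inserted rows to the current maximum. A hedged sketch of the trigger body, assuming the same three columns as above:
INSERT INTO [DBAdmin].[dbo].[ReTncyTransStatement]
    ([ORG-CODE], [TNCY-SYS-REF], [TRANS-NO], [PROGRESS-RECID])
SELECT i.[ORG-CODE],
    i.[TNCY-SYS-REF],
    i.[TRANS-NO],
    ISNULL((SELECT MAX([PROGRESS-RECID]) FROM [DBAdmin].[dbo].[ReTncyTransStatement]), 0)
        + ROW_NUMBER() OVER (ORDER BY i.[TRANS-NO])
FROM Inserted AS i;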
I have 2 OPENQUERY calls that are just simple selects from 2 tables. My objective is to populate a single table with data from those 2 queries, which basically return the same thing but with different names.
Example
1st Warehouse 1
Select * From OpenQuery ('SELECT * FROM Warehouse1')
2nd Warehouse 2
Select * From OpenQuery ('SELECT * FROM Warehouse2')
There are thousands of rows I need to update my SQL table with. The problem is that this is very expensive if I use UNION, and my question is how I can achieve the best performance possible. Also, this is data from an external database, so I really can't change the queries.
I have to update my main table with these queries only when a user accesses the list that shows the data.
EDIT:
I wasn't very clear, but both tables return the same type of columns:
| ID | Warehouse | Ticket | Item | Qty |
One belongs to Warehouse 1, the other to Warehouse 2, and they have different numbers of rows.
You can use an INNER JOIN with UPDATE; for this you need to create table aliases, as shown below:
UPDATE A
SET A.<ColName> = B.<ColName>
from Warehouse1 A
INNER JOIN Warehouse2 B
ON A.<Id> = B.<Id>
--where if required
But why do you need UNION?
You can simply insert twice within a single transaction.
BEGIN TRY
BEGIN TRAN T1
INSERT into mytable
--select from openquery table 1
INSERT into mytable
--select from openquery table 2
COMMIT TRAN T1
END TRY
BEGIN CATCH
-- handle error
ROLLBACK TRAN T1
END CATCH
For anyone with the same problem as me, here is the solution I came up with that suits my problem better.
I save the OPENQUERY results in a view, since I don't need to change anything or insert into my database at all.
/*************************** Views ********************************/
GO
IF OBJECT_ID('viewx_POE', 'v') IS NOT NULL
DROP VIEW viewx_POE
GO
CREATE VIEW viewx_POE AS
SELECT ET0104 AS Armazem,
ET0109 AS Localizacao,
ET0102 AS Etiqueta,
ET0101 AS Artigo,
ET0103 AS Quantidade
FROM OpenQuery(MACPAC, 'SELECT FET001.ET0104, FET001.ET0109, FET001.ET0102, FET001.ET0101, FET001.ET0103
FROM AUTO.D805DATPOR.FET001 FET001
WHERE (FET001.ET0104=''POE'') AND (FET001.ET0105=''DIS'')');
/**************************************************************************/
GO
IF OBJECT_ID('viewx_CORRICA', 'v') IS NOT NULL
DROP VIEW viewx_CORRICA
GO
CREATE VIEW viewx_CORRICA AS
SELECT GHZORI AS Armazem,
GHNEMP AS Localizacao,
LBLBNB AS Etiqueta,
GHLIB5 AS Artigo,
LBQTYD AS Quantidade
FROM OpenQuery(MACPAC, 'SELECT GA160H.LBLBNB, GA160H.GHLIB5, GA160H.GHZORI, GA160H.GHNEMP, GA160M.LBQTYD
FROM D805DATPOR.GA160H GA160H, D805DATPOR.GA160M GA160M
WHERE GA160M.LBLBNB = GA160H.LBLBNB AND (GA160H.GHZORI=''CORRICA'' AND GA160H.GHCSTA=''DIS'')');
And then, when needed, I select from the appropriate view depending on the user's rank and return whatever I need from it:
GO
IF OBJECT_ID('dbo.spx_SELECT_RandomLocalizacoes') IS NOT NULL
DROP PROCEDURE spx_SELECT_RandomLocalizacoes
GO
CREATE PROCEDURE spx_SELECT_RandomLocalizacoes
@LocalizacoesMax int,
@Armazem varchar(10) -- compared against 'POE'/'CORRICA' below, so varchar rather than int
AS
BEGIN
SET NOCOUNT ON
DECLARE @Output int
IF ( @Armazem = 'POE' )
BEGIN
SELECT TOP(10) xa.IdArmazem, vpoe.Localizacao, vpoe.Etiqueta, vpoe.Artigo, vpoe.Quantidade
FROM viewx_POE vpoe
INNER JOIN xArmazem xa
ON vpoe.Armazem = xa.Armazem
ORDER BY NEWID()
END
ELSE IF ( @Armazem = 'CORRICA' )
BEGIN
SELECT TOP(@LocalizacoesMax) * FROM viewx_CORRICA ORDER BY NEWID()
END
END
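A call would then look something like this (parameter values are just examples):
EXEC spx_SELECT_RandomLocalizacoes @LocalizacoesMax = 10, @Armazem = 'CORRICA';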
I am moving a small database from MS Access into SQL Server. Each year the users would create a new Access database and have clean data, but this change will put data from all the years into one pot. The users have relied on the autonumber value in Access as a reference for records. That is very inaccurate if, say, 238 records are removed.
So I am trying to accommodate them with an ID column they can control (somewhat). They will not see the real primary key in the SQL table, but I want to give them an ID they can edit that will still be unique.
I've been working with this trigger, but it has taken much longer than I expected.
Everything SEEMS TO work fine, except I don't understand why I have the same data in my INSERTED table as in the table the trigger is on. (See note in code.)
ALTER TRIGGER [dbo].[trg_tblAppData]
ON [dbo].[tblAppData]
AFTER INSERT,UPDATE
AS
BEGIN
SET NOCOUNT ON;
DECLARE @NewUserEnteredId int = 0;
DECLARE @RowIdForUpdate int = 0;
DECLARE @CurrentUserEnteredId int = 0;
DECLARE @LoopCount int = 0;
--*** Loop through all records to be updated because the values will be incremented.
WHILE (1 = 1)
BEGIN
SET @LoopCount = @LoopCount + 1;
IF (@LoopCount > (SELECT Count(*) FROM INSERTED))
BREAK;
SELECT TOP 1 @RowIdForUpdate = ID, @CurrentUserEnteredId = UserEnteredId FROM INSERTED WHERE ID > @RowIdForUpdate ORDER BY ID DESC;
IF (@RowIdForUpdate IS NULL)
BREAK;
-- WHY IS THERE A MATCH HERE? HAS THE RECORD ALREADY BEEN INSERTED?
IF EXISTS (SELECT UserEnteredId FROM tblAppData WHERE UserEnteredId = @CurrentUserEnteredId)
BEGIN
SET @NewUserEnteredId = (SELECT Max(t1.UserEnteredId) + 1 FROM tblAppData t1);
END
ELSE
SET @NewUserEnteredId = @CurrentUserEnteredId;
UPDATE a
SET a.UserEnteredId = @NewUserEnteredId
FROM tblAppData a
WHERE a.ID = @RowIdForUpdate
END
END
Here is what I want to accomplish:
When new record(s) are added, it should increment values from the existing max.
When a user overrides a value, it should check for the existence of that value. If found, restore the existing value; otherwise allow the change.
This trigger allows for multiple rows being added at a time.
It is great for this to be efficient for future use, but in reality, they will only add 1,000 records a year.
I wouldn't use a trigger to accomplish this.
Here is a script you can use to create a sequence (the OP didn't tag a version), create the primary key, use the sequence as your special ID, and put a unique constraint on the column.
create table dbo.test (
testid int identity(1,1) not null primary key clustered
, myid int null constraint UQ_ unique
, somevalue nvarchar(255) null
);
create sequence dbo.myid
as int
start with 1
increment by 1;
alter table dbo.test
add default next value for dbo.myid for myid;
insert into dbo.test (somevalue)
select 'this' union all
select 'that' union all
select 'and' union all
select 'this';
insert into dbo.test (myid, somevalue)
select 33, 'oops';
select *
from dbo.test
insert into dbo.test (somevalue)
select 'oh the fun';
select *
from dbo.test
--| This should error
insert into dbo.test (myid, somevalue)
select 3, 'This is NO fun';
Here is the result set:
testid myid somevalue
1 1 this
2 2 that
3 3 and
4 4 this
5 33 oops
6 5 oh the fun
And at the very end is a test, which will error.
I'm trying to insert a lot of records to a table.
This is the scenario:
SQL Server 2008 (DB is 2005)
The destination table has a clustered index (PK). This field should be an identity, but the developer of the DB created it as a plain integer (we couldn't change it, as it would affect the program). Every time the program needs to add a row to the table, it looks up the max ID (historyno in this case) and adds one.
This hurts our performance when we need to insert a lot of records at the same time, so we created a process to insert rows from a temporary table (AKT_ES_CampTool_TempHist) outside of production hours.
The problem is that in one hour it only inserts 8K rows. Considering that we need to insert more than 120K, we run out of hours.
The code we use is the following. Please, if someone has any idea how to improve it, it will be appreciated.
DECLARE @HistNo AS INT
WHILE EXISTS (SELECT * FROM AKT_ES_CampTool_TempHist WHERE Inserted = 0)
BEGIN
SELECT @HistNo=MIN(HistoryNo) FROM AKT_ES_CampTool_TempHist WHERE Inserted = 0
INSERT INTO NOVADB.dbo.niHist (
HistoryNo,ObjectType,ObjectNo,SubNo,ReferenceNo,
Time,Type,Priority,Collector,Code,
Action,RemainingAmount,Obliterated,SubType,ActSegment,
Data,FreetextData,quantity
)
SELECT
(SELECT max(historyNo)+1
FROM NOVADB..niHist),ObjectType,ObjectNo,SubNo,ReferenceNo,
Time,Type,Priority,Collector,Code,
Action,RemainingAmount,Obliterated,SubType,ActSegment,
Data,FreetextData,quantity
FROM AKT_ES_CampTool_TempHist
WHERE HistoryNo=@HistNo
UPDATE AKT_ES_CampTool_TempHist
SET Inserted=1
WHERE HistoryNo=@HistNo
END
Obviously the proper answer is to change that historyNo column to an identity, but as you can't do that, why not use ROW_NUMBER over the entire set to get an incrementing number to add to the previous max historyNo?
Then you could alter the insert to just:
DECLARE @OldMaxHistNo AS INT
SELECT @OldMaxHistNo = MAX(historyNo) FROM NOVADB..niHist
INSERT INTO NOVADB.dbo.niHist (
HistoryNo,ObjectType,ObjectNo,SubNo,ReferenceNo,
Time,Type,Priority,Collector,Code,
Action,RemainingAmount,Obliterated,SubType,ActSegment,
Data,FreetextData,quantity
)
SELECT
@OldMaxHistNo + ROW_NUMBER() OVER(ORDER BY ObjectNo),
ObjectType,ObjectNo,SubNo,ReferenceNo,
Time,Type,Priority,Collector,Code,
Action,RemainingAmount,Obliterated,SubType,ActSegment,
Data,FreetextData,quantity
FROM AKT_ES_CampTool_TempHist
WHERE Inserted = 0
UPDATE AKT_ES_CampTool_TempHist
SET Inserted=1
You might have to lock the tables inside a transaction whilst doing it, though.
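A minimal sketch of that locking, reusing the statements above: take the max under UPDLOCK/HOLDLOCK so no concurrent writer can claim the same range before the insert commits.
BEGIN TRAN
SELECT @OldMaxHistNo = MAX(historyNo)
FROM NOVADB..niHist WITH (UPDLOCK, HOLDLOCK)
-- ...the INSERT ... SELECT and the UPDATE from above go here...
COMMIT TRAN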
You could select the data that should be inserted into a temporary table, with a new HistoryNo generated by ROW_NUMBER() and offset by MAX(historyNo) FROM NOVADB..niHist.
SELECT ROW_NUMBER() OVER (ORDER BY HistoryNo) AS NEW_HistoryNo, *
into #tmp
FROM AKT_ES_CampTool_TempHist
WHERE Inserted = 0
ORDER BY HistoryNo
Update #tmp set NEW_HistoryNo=NEW_HistoryNo + (SELECT max(historyNo) FROM NOVADB..niHist)
INSERT INTO NOVADB.dbo.niHist (
HistoryNo,ObjectType,ObjectNo,SubNo,ReferenceNo,
Time,Type,Priority,Collector,Code,
Action,RemainingAmount,Obliterated,SubType,ActSegment,
Data,FreetextData,quantity )
SELECT
NEW_HistoryNo,ObjectType,ObjectNo,SubNo,ReferenceNo,
Time,Type,Priority,Collector,Code,
Action,RemainingAmount,Obliterated,SubType,ActSegment,
Data,FreetextData,quantity
from #tmp
Update AKT_ES_CampTool_TempHist set Inserted = 1
from #tmp
Where #tmp.HistoryNo=AKT_ES_CampTool_TempHist.HistoryNo and AKT_ES_CampTool_TempHist.Inserted = 0
Drop Table #tmp
You should never use the max+1 strategy you are using for assigning an index. Assuming you can't use an identity on the main table and you are not using the latest version of SQL Server: create a shadow table with an identity field and use that to generate sequence numbers,
i.e.
create table AKT_ES_CampTool_Shadow
(
id int identity(1234,1) not null -- replacing 1234 with a value based on max+1
, dummy varchar(1) null
)
Then, to generate an ID -- less expensive than max+1, with no locking problems:
create proc AKT_ES_CampTool_idgen (@newid int output)
as
begin
begin tran
insert into dbo.AKT_ES_CampTool_Shadow (dummy) values ('')
select @newid = scope_identity()
-- roll back the insert; identity values are not reused, so @newid remains unique
rollback
end
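Calling it would look something like this (sketch):
declare @id int
exec AKT_ES_CampTool_idgen @newid = @id output
select @id as NewHistoryNo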
You don't say how big AKT_ES_CampTool_TempHist is. If it is large, you may have performance issues there (especially if there is no index on the Inserted field).
You could start by creating a table variable containing the relevant columns.
declare #TempHist table
(
HistNo int
, inserted int
, etc.
primary key(...)
)
Then populate @TempHist with a single insert query. If you don't have an appropriate PK for this table, use a generated RowID as the PK.
Now you can loop through this table without causing lock contention. Just select the top 1 row from @TempHist, and delete the corresponding row from @TempHist when you are done processing it.
You won't have to use a cursor or need one large batch operation.
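A minimal sketch of that loop, assuming @TempHist is keyed by HistNo:
declare @CurrentHistNo int
while exists (select 1 from @TempHist)
begin
    select top (1) @CurrentHistNo = HistNo from @TempHist order by HistNo
    -- ...process the row / insert it into NOVADB.dbo.niHist here...
    delete from @TempHist where HistNo = @CurrentHistNo
end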