TSQL Update Issue - sql-server

Ok SQL Server fans, I have an issue with a legacy stored procedure that sits inside a SQL Server 2008 R2 instance I inherited, along with PROD data that is, to say the least, horrible. Also, I can NOT make any changes to the data or the table structures.
So here is my problem: the stored procedure in question runs daily and is used to update the employee table. As you can see from my example, the incoming data (#New_Employees) contains the updated data, and I need to use it to update the employee data stored in the #Existing_Employees table. Over the years different formats of the EMP_ID value have been used, and they must be maintained as-is (I fought and lost that battle). Thankfully, I have been successful in changing the format of the EMP_ID column in the #New_Employees table (Yeah!), and any new records will use that format.
So now you may see my problem: I need to update the ID column in the #New_Employees table with the corresponding ID from the #Existing_Employees table by matching (that's right, you guessed it) on the EMP_ID columns. I came up with an extremely hacky way to handle the disparate formats of the EMP_ID columns, but it is very slow considering the number of rows I need to process (1M+).
I thought of creating a staging table where I could simply cast the EMP_ID columns to an INT and back to an NVARCHAR in each table to remove the leading zeros, and I am sort of leaning that way, but I wanted to see if there was another way to handle this dysfunctional data. Any constructive comments are welcome.
IF OBJECT_ID(N'TempDB..#NEW_EMPLOYEES') IS NOT NULL
DROP TABLE #NEW_EMPLOYEES
CREATE TABLE #NEW_EMPLOYEES(
ID INT
,EMP_ID NVARCHAR(50)
,NAME NVARCHAR(50))
GO
IF OBJECT_ID(N'TempDB..#EXISTING_EMPLOYEES') IS NOT NULL
DROP TABLE #EXISTING_EMPLOYEES
CREATE TABLE #EXISTING_EMPLOYEES(
ID INT PRIMARY KEY
,EMP_ID NVARCHAR(50)
,NAME NVARCHAR(50))
GO
INSERT INTO #NEW_EMPLOYEES
VALUES(NULL, '00123', 'Adam Arkin')
,(NULL, '00345', 'Bob Baker')
,(NULL, '00526', 'Charles Nelson O''Reilly')
,(NULL, '04321', 'David Numberman')
,(NULL, '44321', 'Ida Falcone')
INSERT INTO #EXISTING_EMPLOYEES
VALUES(1, '123', 'Adam Arkin')
,(2, '000345', 'Bob Baker')
,(3, '526', 'Charles Nelson O''Reilly')
,(4, '0004321', 'Ed Sullivan')
,(5, '02143', 'Frank Sinatra')
,(6, '5567', 'George Thorogood')
,(7, '0000123-1', 'Adam Arkin')
,(8, '7', 'Harry Hamilton')
-- First Method - Not Successful
UPDATE NE
SET ID = EE.ID
FROM
#NEW_EMPLOYEES NE
LEFT OUTER JOIN #EXISTING_EMPLOYEES EE
ON EE.EMP_ID = NE.EMP_ID
SELECT * FROM #NEW_EMPLOYEES
-- Second Method - Successful but Slow
UPDATE NE
SET ID = EE.ID
FROM
dbo.#NEW_EMPLOYEES NE
LEFT OUTER JOIN dbo.#EXISTING_EMPLOYEES EE
ON CAST(CASE WHEN NE.EMP_ID LIKE N'%[^0-9]%'
THEN NE.EMP_ID
ELSE LTRIM(STR(CAST(NE.EMP_ID AS INT))) END AS NVARCHAR(50)) =
CAST(CASE WHEN EE.EMP_ID LIKE N'%[^0-9]%'
THEN EE.EMP_ID
ELSE LTRIM(STR(CAST(EE.EMP_ID AS INT))) END AS NVARCHAR(50))
SELECT * FROM #NEW_EMPLOYEES

the number of rows that I need to process (1M+).
A million employees? Per day?
I think I would add a 3rd table:
create table #ids ( id INT not NULL PRIMARY KEY
, emp_id NVARCHAR(50) not NULL unique );
Populate that table using your LTRIM(STR(CAST(...))) algorithm (ahem), and update Employees directly from a join of those three tables.
I recommend using an ANSI update, not Microsoft's nonstandard UPDATE ... FROM, because the ANSI version prevents nondeterministic results in cases where the FROM produces more than one matching row.
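To flesh that out, here is a sketch against the temp tables above (assuming, as in the question, that all-digit EMP_IDs fit in an INT and mixed-format values pass through untouched), with a guard against duplicate matches built in:
INSERT INTO #ids (id, emp_id)
SELECT ID,
       CASE WHEN EMP_ID LIKE N'%[^0-9]%'
            THEN EMP_ID
            ELSE LTRIM(STR(CAST(EMP_ID AS INT))) END
FROM #EXISTING_EMPLOYEES;

-- ANSI-style update: the scalar subquery raises an error (instead of
-- silently picking a row) if a normalized EMP_ID matches more than once.
UPDATE #NEW_EMPLOYEES
SET ID = (SELECT i.id
          FROM #ids i
          WHERE i.emp_id = CASE WHEN #NEW_EMPLOYEES.EMP_ID LIKE N'%[^0-9]%'
                                THEN #NEW_EMPLOYEES.EMP_ID
                                ELSE LTRIM(STR(CAST(#NEW_EMPLOYEES.EMP_ID AS INT))) END);
As a bonus, the UNIQUE constraint on #ids.emp_id surfaces any two existing rows that collapse to the same normalized value, rather than letting them update arbitrarily.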

Related

Snowflake - how to do multiple DML operations on same primary key in a specific order?

I am trying to set up continuous data replication in Snowflake. I get the transactions that happened in the source system, and I need to perform them in Snowflake in the same order as the source system. I am trying to use MERGE for this, but when there are multiple operations on the same key in the source system, MERGE does not work correctly. It either misses an operation or returns a "duplicate row detected during DML operation" error.
Please note that the transactions need to happen in the exact order, and it is not possible to take just the latest transaction for a key and apply only that (i.e., if a record has been INSERTED and then UPDATED, it needs to be inserted first and then updated in Snowflake too, even though the insert is only a transient state).
Here is the example:
create or replace table employee_source (
id int,
first_name varchar(255),
last_name varchar(255),
operation_name varchar(255),
binlogkey integer
)
create or replace table employee_destination ( id int, first_name varchar(255), last_name varchar(255) );
insert into employee_source values (1,'Wayne','Bells','INSERT',11);
insert into employee_source values (1,'Wayne','BellsT','UPDATE',12);
insert into employee_source values (2,'Anthony','Allen','INSERT',13);
insert into employee_source values (3,'Eric','Henderson','INSERT',14);
insert into employee_source values (4,'Jimmy','Smith','INSERT',15);
insert into employee_source values (1,'Wayne','Bellsa','UPDATE',16);
insert into employee_source values (1,'Wayner','Bellsat','UPDATE',17);
insert into employee_source values (2,'Anthony','Allen','DELETE',18);
MERGE INTO employee_destination AS T
USING (select * from employee_source order by binlogkey) AS S
ON T.id = S.id
WHEN NOT MATCHED AND S.operation_name = 'INSERT' THEN
    INSERT (id, first_name, last_name)
    VALUES (S.id, S.first_name, S.last_name)
WHEN MATCHED AND S.operation_name = 'UPDATE' THEN
    UPDATE SET T.first_name = S.first_name, T.last_name = S.last_name
WHEN MATCHED AND S.operation_name = 'DELETE' THEN
    DELETE;
I am expecting to see - Bellsat - as last name for employee id 1 in the employee_destination table after all rows get processed. Same way, I should not see emp id 2 in the employee_destination table.
Is there any other alternative to MERGE to achieve this? Basically to go over every single DML in the same order (using binlogkey column for ordering) .
thanks.
You need to manipulate your source data to ensure that you only have one record per key/operation; otherwise the join will be non-deterministic and (depending on your settings) will either error or update using an arbitrary one of the applicable source records. This is covered in the documentation here: https://docs.snowflake.com/en/sql-reference/sql/merge.html#duplicate-join-behavior.
In any case, why would you want to update a record only for it to be overwritten by another update? That would be incredibly inefficient.
Since your updates appear to include the new values for all columns, you can use a window function to reduce the source to just the latest incoming change per key, and then merge those results into the target table. For example, the SELECT for that merge (with the window function to get only the latest change) would look like this:
with SOURCE_DATA as
(
select COLUMN1::int ID
,COLUMN2::string FIRST_NAME
,COLUMN3::string LAST_NAME
,COLUMN4::string OPERATION_NAME
,COLUMN5::int PROCESSING_ORDER
from values
(1,'Wayne','Bells','INSERT',11),
(1,'Wayne','BellsT','UPDATE',12),
(2,'Anthony','Allen','INSERT',13),
(3,'Eric','Henderson','INSERT',14),
(4,'Jimmy','Smith','INSERT',15),
(1,'Wayne','Bellsa','UPDATE',16),
(1,'Wayne','Bellsat','UPDATE',17),
(2,'Anthony','Allen','DELETE',18)
)
select * from SOURCE_DATA
qualify row_number() over (partition by ID order by PROCESSING_ORDER desc) = 1
That will produce a result set that has only the changes required to merge into the target table:
ID  FIRST_NAME  LAST_NAME  OPERATION_NAME  PROCESSING_ORDER
--  ----------  ---------  --------------  ----------------
1   Wayne       Bellsat    UPDATE          17
2   Anthony     Allen      DELETE          18
3   Eric        Henderson  INSERT          14
4   Jimmy       Smith      INSERT          15
You can then change the WHEN NOT MATCHED clause to drop the operation_name predicate: if a row is listed as an UPDATE but is not in the target table, that is because it was inserted by an earlier operation within the same batch of new changes.
For the WHEN MATCHED clause, you can use operation_name to determine whether the row should be updated or deleted.
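Putting that together, the final merge might look something like this (a sketch written against the sample tables in the question, with one extra guard so a trailing DELETE for a never-inserted key is skipped):
MERGE INTO employee_destination t
USING (
    select *
    from employee_source
    qualify row_number() over (partition by id order by binlogkey desc) = 1
) s
ON t.id = s.id
WHEN MATCHED AND s.operation_name = 'DELETE' THEN DELETE
WHEN MATCHED THEN
    UPDATE SET t.first_name = s.first_name, t.last_name = s.last_name
WHEN NOT MATCHED AND s.operation_name <> 'DELETE' THEN
    INSERT (id, first_name, last_name)
    VALUES (s.id, s.first_name, s.last_name);
With the sample data, employee 1 ends up with last name Bellsat, and employee 2 never appears in employee_destination.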

Perform SCD2 on snowflake table based upon oracle input data

Currently I am sourcing data from Oracle.
As part of the initial load, I ingested all history data from Oracle table oracle_a into Snowflake table snow_a using a named stage and COPY INTO commands.
I would like to perform SCD2 on the snow_a table based upon the oracle_a table.
I mean: if any new record is added to the oracle_a table, that record should be inserted; and for any change to an existing record of the oracle_a table, the existing record of the snow_a table should be expired and the new record inserted.
The oracle_a table has key columns key_col1, key_col2, key_col3; attr1 and attr2 are the other attributes of the table.
Implementing SCD Type 2 functionality on a table in Snowflake is no different than in any other relational database. However, there is additional functionality that can help with this process. Please have a look at this blog post series on using Snowflake Streams and Tasks to perform the SCD logic.
https://www.snowflake.com/blog/building-a-type-2-slowly-changing-dimension-in-snowflake-using-streams-and-tasks-part-1/
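In skeletal form, that pattern looks something like this. This is only a hedged sketch with hypothetical object names (snow_a_stg, my_wh); the posts above walk through the real logic:
-- A stream tracks rows landing in a staging table, and a scheduled task
-- applies them to the dimension.
CREATE OR REPLACE STREAM snow_a_stg_stream ON TABLE snow_a_stg;

CREATE OR REPLACE TASK expire_changed_rows
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('SNOW_A_STG_STREAM')
AS
UPDATE snow_a t
   SET t.exp_ts = CURRENT_TIMESTAMP(),
       t.valid  = 'Expired'
  FROM snow_a_stg_stream s
 WHERE t.key_col1 = s.key_col1
   AND t.key_col2 = s.key_col2
   AND t.key_col3 = s.key_col3
   AND t.valid    = 'Active';

-- Tasks are created suspended; a second task chained with AFTER (or a stored
-- procedure) would handle inserting the new 'Active' versions.
ALTER TASK expire_changed_rows RESUME;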
Cheers,
Michael Rainey
Ok, so here is what I found, though you may need to adjust where the update and insert come from, since oracle_a is not in Snowflake.
CREATE TABLE snowflake_a(key_col1 varchar(10), key_col2 varchar(10), key_col3 varchar(10), attr1 varchar(8), attr2 varchar(10), eff_ts TIMESTAMP, exp_ts TIMESTAMP, valid varchar(10));
DROP TABLE IF EXISTS oracle_a;
INSERT INTO snowflake_a VALUES('PT_1', 'DL_1', 'RPT_1', 'Address1', 'APT_1', current_date, current_date, 'Active');
CREATE TABLE oracle_a(key_col1 varchar(10), key_col2 varchar(10), key_col3 varchar(10), attr1 varchar(8), attr2 varchar(8), eff_ts TIMESTAMP, exp_ts TIMESTAMP);
INSERT INTO oracle_a
VALUES( 'PT_1', 'DL_1', 'RPT_1', 'Address1', 'APT_1', '10/24/2019', '12/31/1999');
UPDATE snowflake_a
SET valid = 'Expired'
WHERE valid LIKE '%Active%';
SELECT * FROM snowflake_a;
INSERT INTO snowflake_a VALUES( 'PT_1', 'DL_1', 'RPT_1', 'Address1', 'APT_1', '10/24/2019', '12/31/1999', 'Active');
SELECT * FROM snowflake_a;
Or better yet, what are you using to connect from your Oracle ecosystem to the Snowflake ecosystem?
From the question, it seems that the incoming Oracle rows do not contain any SCD2-type columns, and that each row inserted into Snowflake is to be handled with SCD2-type functionality.
SCD2 columns can have a specific meaning to the business, such that exp_ts could be an actual date or a business date. Snowflake's Stage does not include SCD2 functionality; that is usually the role of an ETL framework, not of a fast/bulk load utility.
Most ETL vendors offer SCD2 functions as part of their product.
I did the following steps to perform SCD2 (a rough sketch follows below):
1. Loaded the oracle_a table data into a TEMPORARY scd2_temp table.
2. Performed an update on snow_a to expire the changed records, by joining on the key columns and comparing the rest of the attributes.
3. Inserted the new records into the snow_a table from the TEMPORARY scd2_temp table.
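Sketched out, those steps look roughly like this (the extract table name oracle_a_extract is hypothetical; the columns follow the thread):
-- Step 1: land the Oracle extract in a temporary table
CREATE TEMPORARY TABLE scd2_temp AS
SELECT * FROM oracle_a_extract;

-- Step 2: expire current rows whose attributes changed
UPDATE snow_a t
   SET t.exp_ts = CURRENT_TIMESTAMP(),
       t.valid  = 'Expired'
  FROM scd2_temp s
 WHERE t.key_col1 = s.key_col1
   AND t.key_col2 = s.key_col2
   AND t.key_col3 = s.key_col3
   AND t.valid    = 'Active'
   AND (t.attr1 <> s.attr1 OR t.attr2 <> s.attr2);

-- Step 3: insert new keys and the new versions of changed keys
INSERT INTO snow_a
SELECT s.key_col1, s.key_col2, s.key_col3, s.attr1, s.attr2,
       CURRENT_TIMESTAMP(), NULL, 'Active'
  FROM scd2_temp s
  LEFT JOIN snow_a t
    ON t.key_col1 = s.key_col1
   AND t.key_col2 = s.key_col2
   AND t.key_col3 = s.key_col3
   AND t.valid = 'Active'
 WHERE t.key_col1 IS NULL;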
Here's a solution based on the following assumptions:
- The source Oracle table is not itself responsible for SCD2 processing (so Eff/Exp TS columns wouldn't be present on that table).
- There is an external process that is only extracting/loading delta (new, updated) records into Snowflake.
- The source Oracle table is not deleting records.
First create the tables and add the first set of delta data:
CREATE OR REPLACE TABLE stg.cdc2_oracle_d (
key1 varchar(10),
key2 varchar(10),
key3 varchar(10),
attr1 varchar(8),
attr2 varchar(8));
CREATE OR REPLACE TABLE edw.cdc2_snowflake_d (
key1 varchar(10),
key2 varchar(10),
key3 varchar(10),
attr1 varchar(8),
attr2 varchar(8),
eff_ts TIMESTAMP_LTZ(0),
exp_ts TIMESTAMP_LTZ(0),
active_fl char(1));
INSERT INTO stg.cdc2_oracle_d VALUES
( 'PT_1', 'DL_1', 'RPT_1', 'Addr1a', 'APT_1.0'),
( 'PT_2', 'DL_2', 'RPT_2', 'Addr2a', 'APT_2.0'),
( 'PT_3', 'DL_3', 'RPT_3', 'Addr3a', 'APT_3.0');
Then run the following Transformation script:
BEGIN;
-- 1: insert brand-new records from the stg table that don't currently exist in the edw table
INSERT INTO edw.cdc2_snowflake_d
SELECT
key1,
key2,
key3,
attr1,
attr2,
CURRENT_TIMESTAMP(0) AS eff_ts,
CAST('9999-12-31 23:59:59' AS TIMESTAMP) AS exp_ts,
'Y' AS active_fl
FROM stg.cdc2_oracle_d stg
WHERE NOT EXISTS (
SELECT 1
FROM edw.cdc2_snowflake_d edw
WHERE edw.key1 = stg.key1
AND edw.key2 = stg.key2
AND edw.key3 = stg.key3
AND edw.active_fl = 'Y');
-- 2: insert a new version of each record from the stg table whose key currently exists in the edw table,
-- but only if the attr columns are different; otherwise it's the same record
INSERT INTO edw.cdc2_snowflake_d
SELECT
stg.key1,
stg.key2,
stg.key3,
stg.attr1,
stg.attr2,
CURRENT_TIMESTAMP(0) AS eff_ts,
CAST('9999-12-31 23:59:59' AS TIMESTAMP) AS exp_ts,
'T' AS active_fl -- set the flag to Temporary
FROM stg.cdc2_oracle_d stg
JOIN edw.cdc2_snowflake_d edw ON edw.key1 = stg.key1 AND edw.key2 = stg.key2
AND edw.key3 = stg.key3 AND edw.active_fl = 'Y'
WHERE (stg.attr1 <> edw.attr1
OR stg.attr2 <> edw.attr2);
-- 3: deactivate the current record where there is a new record from the step above,
-- and set the exp_ts to 1 second prior to the new record so there is no overlap in the data
UPDATE edw.cdc2_snowflake_d old
SET old.active_fl = 'N',
old.exp_ts = DATEADD(SECOND, -1, new.eff_ts)
FROM edw.cdc2_snowflake_d new
WHERE old.key1 = new.key1
AND old.key2 = new.key2
AND old.key3 = new.key3
AND new.active_fl = 'T'
AND old.active_fl = 'Y';
-- 4: finally set all the temporary records to active
UPDATE edw.cdc2_snowflake_d tmp
SET tmp.active_fl = 'Y'
WHERE tmp.active_fl = 'T';
COMMIT;
Review the results, then truncate & add new data and run the script again:
SELECT * FROM stg.cdc2_oracle_d;
SELECT * FROM edw.cdc2_snowflake_d ORDER BY 1,2,3,5;
TRUNCATE TABLE stg.cdc2_oracle_d;
INSERT INTO stg.cdc2_oracle_d VALUES
( 'PT_1', 'DL_1', 'RPT_1', 'Addr1a', 'APT_1.1'), -- record has updated attr2
( 'PT_2', 'DL_2', 'RPT_2', 'Addr2a', 'APT_2.0'), -- record has no changes
( 'PT_4', 'DL_4', 'RPT_4', 'Addr4a', 'APT_4.0'); -- new record
You'll see that PT_1 has 2 records with non-overlapping timestamps, and only 1 is active.

How to shift entire row from last to 3rd position without changing values in SQL Server

This is my table:
DocumentTypeId DocumentType UserId CreatedDtm
--------------------------------------------------------------------------
2d47e2f8-4 PDF 443f-4baa 2015-12-03 17:56:59.4170000
b4b-4803-a Images a99f-1fd 1997-02-11 22:16:51.7000000
600-0e32 XL e60e07a6b 2015-08-19 15:26:11.4730000
40f8ff9f Word 79b399715 1994-04-23 10:33:44.2300000
8230a07c email 750e-4c3d 2015-01-10 09:56:08.1700000
How can I shift the entire last row (DocumentType = email) to 3rd position (before DocumentType = XL), without changing table values?
Without wishing to deny the truth of what others have said here, SQL Server does have CLUSTERED indices. For full details on these and the difference between a clustered table and a non-clustered one, please see here. In effect, a clustered table does have data written to disk in index order. However, due to subsequent insertions and deletions, you should never rely on any given record being in a fixed ordinal position.
To get your data showing email third and XL fourth, you simply need to order by CreatedDtm. Thus:
declare @test table
(
DocumentTypeID varchar(20),
DocumentType varchar(10),
UserID varchar(20),
CreatedDtm datetime
)
INSERT INTO @test VALUES
('2d47e2f8-4','PDF','443f-4baa','2015-12-03 17:56:59'),
('b4b-4803-a','Images','a99f-1fd','1997-02-11 22:16:51'),
('600-0e32','XL','e60e07a6b','2015-08-19 15:26:11'),
('40f8ff9f','Word','79b399715','1994-04-23 10:33:44'),
('8230a07c','email','750e-4c3d','2015-01-10 09:56:08')
SELECT * FROM @test order by CreatedDtm
This gives a result set of:
40f8ff9f Word 79b399715 1994-04-23 10:33:44.000
b4b-4803-a Images a99f-1fd 1997-02-11 22:16:51.000
8230a07c email 750e-4c3d 2015-01-10 09:56:08.000
600-0e32 XL e60e07a6b 2015-08-19 15:26:11.000
2d47e2f8-4 PDF 443f-4baa 2015-12-03 17:56:59.000
This may be what you are looking for, but I cannot stress enough that it only gives email 3rd and XL 4th in this particular case. If the dates were different, it would not be so. But perhaps this was all that you needed?
I assumed that you need to sort by the DocumentType column.
By joining with a derived table, which contains the virtual DocumentTypes with the desired SortOrder, you can achieve the result you want.
declare @tbl table(
DocumentTypeID varchar(50),
DocumentType varchar(50)
)
insert into @tbl(DocumentTypeID, DocumentType)
values
('2d47e2f8-4','PDF'),
('b4b-4803-a','Images'),
('600-0e32','XL'),
('40f8ff9f','Word'),
('8230a07c','email')
;
--this will give you original output
select * from @tbl;
--this will output rows with new sort order
select t.* from @tbl t
inner join
(
select *
from
(values
('PDF',1, 1),
('Images',2, 2),
('XL',3, 4),
('Word',4, 5),
('email',5, 3) --here I put new sort order '3'
) as dt(TypeName, SortOrder, NewSortOrder)
) dt
on dt.TypeName = t.DocumentType
order by dt.NewSortOrder
Row positions don't really matter in SQL tables, since they hold unordered sets of data, but if you really want to switch the rows, I'd suggest you copy all your data to a temp table, e.g.:
SELECT * INTO #temptable FROM [tablename]
then delete/truncate the data from the original table (if that won't mess up the other tables it's connected to) and insert back into it from the temp table in whatever order you like, since it will have all the same fields with the same data as the original.
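A sketch of that approach against the sample data, with the obvious caveat that, without an ORDER BY at query time, SQL Server still makes no guarantee about the order rows come back in:
SELECT * INTO #temptable FROM [tablename];
TRUNCATE TABLE [tablename];
INSERT INTO [tablename]
SELECT *
FROM #temptable
ORDER BY CASE DocumentType
           WHEN 'PDF'    THEN 1
           WHEN 'Images' THEN 2
           WHEN 'email'  THEN 3
           WHEN 'XL'     THEN 4
           WHEN 'Word'   THEN 5
         END;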

SQL Server : trigger on after insert into multiple tables

In a SQL Server database that I am setting up, there are a few one-to-many tables for which I would like to create the "many" table records after the parent record is inserted, inserting only the auto-generated ID value and another value from the parent record.
For instance:
Table 1 (Parent): Auto_ID, Name, createby, createdatetime
Table 2 (Child): Auto_ID, Parent_Auto_ID, Col1, Col2, createby, createdatetime
Table 3 (Child): Parent_Auto_ID, Col1, createby, createdatetime
I already have:
CREATE TRIGGER [dbo].[ipamuserInsert]
ON [dbbo].[ipamuser]
FOR INSERT
AS
SET NOCOUNT ON
INSERT INTO [dbo].[ipamuserdefault]
(ipamuserID, homepage, createby, createdatetime)
VALUES
SELECT
ipamuserID,
NULL,
createby,
GETDATE()
FROM
inserted
INSERT INTO [dbo].[ipamextlink]
(ipamuserID, link, createby, GETDATE())
VALUES
SELECT
ipamuserID, null, createby, GETDATE()
FROM
inserted
GO
Is this the right syntax, or do I need to declare a separate temp variable and store the ipamuserID and the createby values and use those instead?
I can't test this without your table definitions, but you are close; the main correction is that an INSERT ... SELECT does not take a VALUES keyword (see the sketch further down).
Depending on the number of tables involved, I would consider making this dynamic. The number of tables would have to be high to make it worth the effort, though.
Edit, based on comment:
You can make this dynamic by using the system views to find out which tables are related, and then generate your SQL dynamically based on the metadata. Here are a couple of articles to get you started:
https://msdn.microsoft.com/en-us/library/ms186778.aspx
https://msdn.microsoft.com/en-us/library/ms177862.aspx
The basic structure would be a cursor loop over each related table, with an inner cursor loop over each column of the related table. You would build the SQL statement dynamically in the same format as it is defined in your question.
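For reference, here is a sketch of the corrected static version of the trigger from the question (the key fixes: INSERT ... SELECT takes no VALUES keyword, and the column list names createdatetime rather than calling GETDATE() there):
CREATE TRIGGER [dbo].[ipamuserInsert]
ON [dbo].[ipamuser]
FOR INSERT
AS
SET NOCOUNT ON

INSERT INTO [dbo].[ipamuserdefault] (ipamuserID, homepage, createby, createdatetime)
SELECT ipamuserID, NULL, createby, GETDATE()
FROM inserted

INSERT INTO [dbo].[ipamextlink] (ipamuserID, link, createby, createdatetime)
SELECT ipamuserID, NULL, createby, GETDATE()
FROM inserted
GO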

SEQUENCE in SQL Server 2008 R2

I need to know if there is any way to have a SEQUENCE or something like it, as we have in Oracle. The idea is to get one number and then use it as a key to save some records in a table. Each time we need to save data in that table, first we get the next number from the sequence and then we use it to save the records. It is not an IDENTITY column.
For example:
[ID] [SEQUENCE ID] [Code] [Value]
1 1 A 232
2 1 B 454
3 1 C 565
Next time someone needs to add records, the next SEQUENCE ID should be 2. Is there any way to do this? The sequence could just as well be a GUID for me.
As Guillelon points out, the best way to do this in SQL Server is with an identity column.
You can simply define a column as being identity. When a new row is inserted, the identity is automatically incremented.
The difference is that the identity is updated on every row, not just some rows. To be honest, I think this is a much better approach. Your example suggests that you are storing both an entity and its detail in the same table.
The SequenceId should be the primary identity key in another table. This value can then be used for insertion into this table.
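One common 2008 R2 shape for that idea is a one-column IDENTITY table that hands out group numbers atomically. A sketch (the table names here are made up for illustration):
CREATE TABLE dbo.RecordGroupSequence (SequenceID INT IDENTITY(1,1) PRIMARY KEY);

DECLARE @SequenceID INT;
-- grab the next group number
INSERT INTO dbo.RecordGroupSequence DEFAULT VALUES;
SET @SequenceID = SCOPE_IDENTITY();

-- save all the related records under that one sequence value
INSERT INTO dbo.Records (SequenceID, Code, Value)
VALUES (@SequenceID, 'A', 232),
       (@SequenceID, 'B', 454),
       (@SequenceID, 'C', 565);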
This can be done in multiple ways. The following are what I can think of:
Creating a trigger and thereby computing the possible value
Adding a computed column along with a function that retrieves the next value of the sequence
Here is an article that presents various solutions
One possible way is to do something like this:
-- Example 1
DECLARE @Var INT
SET @Var = (SELECT MAX(ID) + 1 FROM tbl);
INSERT INTO tbl VALUES (@Var, 'Record 1')
INSERT INTO tbl VALUES (@Var, 'Record 2')
INSERT INTO tbl VALUES (@Var, 'Record 3')
-- Example 2
CREATE TABLE #temp (col1 INT, col2 INT) -- defined here so the example runs
INSERT INTO #temp VALUES (1, 2)
INSERT INTO #temp VALUES (1, 2)
INSERT INTO ActualTable (col1, col2, sequence)
SELECT temp.*, (SELECT MAX(ID) + 1 FROM ActualTable)
FROM #temp temp
-- Example 3
DECLARE @var INT
DECLARE @out TABLE (sequence INT) -- OUTPUT cannot target a scalar variable directly
INSERT INTO ActualTable (col1, col2, sequence)
OUTPUT inserted.sequence INTO @out
SELECT 1, 2, MAX(ID) + 1 FROM ActualTable
SELECT @var = sequence FROM @out
The first two examples rely on batch inserting. But based on your comment, I have added example 3, which initially inserts a single row; you can then use the sequence value that was captured to insert the rest of the records. If you have never used OUTPUT, please reply in the comments and I will expand further.
I would isolate all of the above inside a transaction.
If you were using SQL Server 2012, you could use the SEQUENCE object as shown here.
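For completeness, the 2012 syntax looks like this (reusing the made-up table names from above; shown for comparison only, since the question is on 2008 R2):
CREATE SEQUENCE dbo.RecordGroupSeq START WITH 1 INCREMENT BY 1;

DECLARE @SequenceID INT = NEXT VALUE FOR dbo.RecordGroupSeq;
INSERT INTO dbo.Records (SequenceID, Code, Value)
VALUES (@SequenceID, 'A', 232);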
Forgive me if there are syntax errors; I don't have SSMS installed.
