How to update the last modified user using Windows auth info? - sql-server

I have a table sales containing sales records in SQL Server. I need a column modified_by that shows the user who made the last modification. The modified_by column should be filled with the ID of the Windows Authentication user.
id          content     modified_by
----------- ----------- -----------
1           foo         Tom
2           bar         Jack
If Tom updates Jack's record, then for record 2 the modified_by column should show Tom instead.
This UPDATE should be done automatically by the server for every record modification. Is this possible? Should I use a trigger to do it?

We can use an AFTER trigger to stamp the rows affected by an INSERT or UPDATE statement:
CREATE TRIGGER dbo.after_update ON dbo.sales
AFTER INSERT, UPDATE
AS
UPDATE dbo.sales
SET modified_by = SYSTEM_USER
FROM inserted
WHERE inserted.id = dbo.sales.id;
MSDN: CREATE TRIGGER (Transact-SQL)

update table
set lastmodifiedby = suser_sname()
You can also create the table with the last-modified column defaulted:
create table tt
(
id int,
lastmodifiedby varchar(100) default suser_sname()
)
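Note that a DEFAULT only fills the column when a row is inserted; to keep it current on UPDATE you would still combine it with a trigger. A minimal sketch along those lines (the trigger name is arbitrary):

```sql
create table tt
(
    id int,
    lastmodifiedby varchar(100) default suser_sname() -- filled on INSERT
)
GO
-- Keep the column current on UPDATE as well:
create trigger dbo.tt_set_modifier on dbo.tt
after update
as
    update t
    set lastmodifiedby = suser_sname()
    from dbo.tt as t
    join inserted as i on i.id = t.id;
```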


Snowflake - how to do multiple DML operations on same primary key in a specific order?

I am trying to set up continuous data replication in Snowflake. I receive the transactions that happened in the source system, and I need to apply them in Snowflake in the same order as in the source system. I am trying to use MERGE for this, but when there are multiple operations on the same key in the source system, MERGE does not work correctly: it either misses an operation or returns a "duplicate row detected during DML operation" error.
Please note that the transactions need to happen in the exact order, and it is not possible to just take the latest transaction for a key and apply it (for example, if a record has been INSERTED and then UPDATED, in Snowflake too it needs to be inserted first and then updated, even though the insert is only a transient state).
Here is the example:
create or replace table employee_source (
id int,
first_name varchar(255),
last_name varchar(255),
operation_name varchar(255),
binlogkey integer
)
create or replace table employee_destination ( id int, first_name varchar(255), last_name varchar(255) );
insert into employee_source values (1,'Wayne','Bells','INSERT',11);
insert into employee_source values (1,'Wayne','BellsT','UPDATE',12);
insert into employee_source values (2,'Anthony','Allen','INSERT',13);
insert into employee_source values (3,'Eric','Henderson','INSERT',14);
insert into employee_source values (4,'Jimmy','Smith','INSERT',15);
insert into employee_source values (1,'Wayne','Bellsa','UPDATE',16);
insert into employee_source values (1,'Wayner','Bellsat','UPDATE',17);
insert into employee_source values (2,'Anthony','Allen','DELETE',18);
MERGE INTO employee_destination AS T
USING (SELECT * FROM employee_source ORDER BY binlogkey) AS S
ON T.id = S.id
WHEN NOT MATCHED AND S.operation_name = 'INSERT' THEN
    INSERT (id, first_name, last_name)
    VALUES (S.id, S.first_name, S.last_name)
WHEN MATCHED AND S.operation_name = 'UPDATE' THEN
    UPDATE SET T.first_name = S.first_name, T.last_name = S.last_name
WHEN MATCHED AND S.operation_name = 'DELETE' THEN
    DELETE;
I am expecting to see 'Bellsat' as the last name for employee id 1 in the employee_destination table after all rows get processed. Likewise, I should not see employee id 2 in the employee_destination table.
Is there any alternative to MERGE to achieve this? Basically, something that applies every single DML in the same order (using the binlogkey column for ordering).
Thanks.
You need to manipulate your source data to ensure that you only have one record per key/operation; otherwise the join will be non-deterministic and will (depending on your settings) either error or update using a random one of the applicable source records. This is covered in the documentation here: https://docs.snowflake.com/en/sql-reference/sql/merge.html#duplicate-join-behavior.
In any case, why would you want to update a record only for it to be overwritten by another update? That would be incredibly inefficient.
Since your updates appear to include the new values for all rows, you can use a window function to get to just the latest incoming change, and then merge those results into the target table. For example, the select for that merge (with the window function to get only the latest change) would look like this:
with SOURCE_DATA as
(
select COLUMN1::int ID
,COLUMN2::string FIRST_NAME
,COLUMN3::string LAST_NAME
,COLUMN4::string OPERATION_NAME
,COLUMN5::int PROCESSING_ORDER
from values
(1,'Wayne','Bells','INSERT',11),
(1,'Wayne','BellsT','UPDATE',12),
(2,'Anthony','Allen','INSERT',13),
(3,'Eric','Henderson','INSERT',14),
(4,'Jimmy','Smith','INSERT',15),
(1,'Wayne','Bellsa','UPDATE',16),
(1,'Wayne','Bellsat','UPDATE',17),
(2,'Anthony','Allen','DELETE',18)
)
select * from SOURCE_DATA
qualify row_number() over (partition by ID order by PROCESSING_ORDER desc) = 1
That will produce a result set that has only the changes required to merge into the target table:
ID | FIRST_NAME | LAST_NAME | OPERATION_NAME | PROCESSING_ORDER
---|------------|-----------|----------------|-----------------
1  | Wayne      | Bellsat   | UPDATE         | 17
2  | Anthony    | Allen     | DELETE         | 18
3  | Eric       | Henderson | INSERT         | 14
4  | Jimmy      | Smith     | INSERT         | 15
You can then change the when not matched clause to remove the operation_name predicate: if a row is flagged as an UPDATE but is not in the target table, that's because its INSERT happened earlier within the same batch of changes and was collapsed away by the window function.
For the when matched clauses, you can use the operation_name to determine whether the row should be updated or deleted.
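Putting the pieces together, the final MERGE over the de-duplicated source might look like the sketch below. It uses the table names from the question; I kept one guard on the not-matched branch so that a key whose final state is DELETE, and which never reached the target, is simply skipped rather than inserted:

```sql
MERGE INTO employee_destination AS T
USING (
    SELECT *
    FROM employee_source
    QUALIFY ROW_NUMBER() OVER (PARTITION BY id ORDER BY binlogkey DESC) = 1
) AS S
ON T.id = S.id
-- New key: insert its latest state, unless that final state is a DELETE.
WHEN NOT MATCHED AND S.operation_name <> 'DELETE' THEN
    INSERT (id, first_name, last_name)
    VALUES (S.id, S.first_name, S.last_name)
-- Existing key: apply the latest state.
WHEN MATCHED AND S.operation_name = 'UPDATE' THEN
    UPDATE SET T.first_name = S.first_name, T.last_name = S.last_name
WHEN MATCHED AND S.operation_name = 'DELETE' THEN
    DELETE;
```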

How to select the default values of a table?

In my app, when letting the user enter a new record, I want to preselect the database's default values.
Let's for example take this table:
CREATE TABLE pet (
ID INT NOT NULL,
name VARCHAR(255) DEFAULT 'noname',
age INT DEFAULT 1
)
I would like to do something like this:
SELECT DEFAULT VALUES FROM pet -- NOT WORKING
And it should return:
ID | name | age
--------------------
NULL | noname | 1
I would then let the user fill in the remaining fields, or let her change one of the defaults, before she clicks on "save".
How can I select the default values of a SQL Server table using T-SQL?
You don't "SELECT" the default values; you can only insert them. A SELECT returns rows from a table, and you can't SELECT the DEFAULT VALUES because no such row exists inside the table.
You could do something silly, like use a TRANSACTION and roll it back. But as ID doesn't have a default value, and you don't define a value for it with DEFAULT VALUES, it will fail in your scenario:
CREATE TABLE pet (
ID INT NOT NULL,
name VARCHAR(255) DEFAULT 'noname',
age INT DEFAULT 1
)
GO
BEGIN TRANSACTION;
INSERT INTO dbo.pet
OUTPUT inserted.*
DEFAULT VALUES;
ROLLBACK;
Msg 515, Level 16, State 2, Line 13
Cannot insert the value NULL into column 'ID', table 'Sandbox.dbo.pet'; column does not allow nulls. INSERT fails.
You can, therefore, just supply the values for your non-NULL columns:
BEGIN TRANSACTION;
INSERT INTO dbo.pet (ID)
OUTPUT inserted.*
VALUES(1);
ROLLBACK;
Which will output the "default" values:
ID|name |age
--|------|---
1|noname|1
Selecting the default values of all columns is not very straightforward and, as Heinzi wrote in his comment, requires a level of permissions you normally don't want your users to have.
That being said, a simple workaround would be to insert a record, select it back and display to the user, let the user decide what they want to change (if anything) and then when they submit the record - update the record (or delete the previous record and insert a new one).
That would require you to have some indication if the record was actually reviewed and updated by the user, but that's easy enough to accomplish by simply adding a bit column and setting it to 1 when updating the data.
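As a rough sketch of that workaround (the reviewed column and the @DraftId / @Name / @Age parameters are illustrative, not part of the original table):

```sql
-- One-time schema change: track whether the user has confirmed the row.
ALTER TABLE dbo.pet ADD reviewed BIT NOT NULL DEFAULT 0;

-- Create the draft row with the defaults and echo it back for display:
INSERT INTO dbo.pet (ID)
OUTPUT inserted.*
VALUES (@DraftId);

-- When the user clicks "save", persist any changes and mark the row reviewed:
UPDATE dbo.pet
SET name = @Name, age = @Age, reviewed = 1
WHERE ID = @DraftId;
```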
As I commented before, there is no need for this query, since you can press Alt+F1 on any table name in the Management Studio editor and it gives you all the information you need about the table.
select sys1.name as 'Name',
       replace(replace(
           case
               when object_definition(sys1.default_object_id) is null then 'No Default Value'
               else object_definition(sys1.default_object_id)
           end, '(', ''), ')', '') as 'Default value',
       information_schema.columns.data_type as 'Data type'
from sys.columns as sys1
left join information_schema.columns
       on sys1.name = information_schema.columns.column_name
where object_id = object_id('table_name')
  and information_schema.columns.table_name = 'table_name'
It seems like this might be a solution:
SELECT * FROM (
SELECT
sys1.name AS COLUMN_NAME,
replace(replace(object_definition(sys1.default_object_id),'(',''),')','') AS DEFAULT_VALUE
FROM sys.columns AS sys1
LEFT JOIN information_schema.columns ON sys1.name = information_schema.columns.column_name
WHERE object_id = object_id('pet')
AND information_schema.columns.table_name = 'pet'
) AS SourceTable PIVOT(MAX(DEFAULT_VALUE) FOR COLUMN_NAME IN(ID, name, age)) AS PivotTable;
It returns:
ID |name |age
----|------|---
NULL|noname|1
Probably the column types are incorrect, but maybe I can live with that.
Thanks to @Nissus for providing an intermediate step to this.

Update 'Active' Column to 'N' when a new record is inserted to table A

I have table A
FacilityID CreatedDate Active
---------------------------------------
A001 2018-03-21 N
A001 2018-03-22 Y
A002 2018-03-21 Y
If a new record for FacilityID A001 or A002 is inserted, then the Active flag of the old record should become 'N'.
Can we achieve this with an AFTER INSERT trigger?
Formatted:
Create Trigger [dbo].[TableA_tr]
On [dbo].[Table A]
For INSERT
As
Begin
Update [dbo].[Table A]
Join
set [Active]='N'
where [CreatedDate]<
Use an AFTER INSERT trigger, since you want to react to the newly inserted record:
Create trigger [dbo].[TableA_tr]
on [dbo].[TableA]
AFTER INSERT
AS
--Update all records with the same facility id that do not match the datetime of the new item
UPDATE T
SET Active = 'N'
FROM dbo.TableA AS T
JOIN inserted AS INS
  ON T.[FacilityId] = INS.[FacilityId]
WHERE T.[CreatedDate] <> INS.[CreatedDate]
GO
Here is another approach to this. It is set-based, so it will still work correctly whether you insert 1 row or many. Remember that in SQL Server triggers fire once per statement, not once per row like in some other DBMS systems.
Create trigger [dbo].[TableA_tr]
on [dbo].[TableA]
AFTER INSERT
AS
--Update all records with the same facility id that do not match the datetime of the new item
UPDATE a
SET Active = 'N'
from dbo.TableA a
join inserted i on i.FacilityId = a.FacilityId
and i.CreatedDate <> a.CreatedDate

Update strategy for table with sequence generated number as primary key in Informatica

I have a mapping that gets data from multiple sql server source tables and assigns a sequence generated number as ID for each rows. In the target table, the ID field is set as primary key.
Every time I run this mapping, it creates new rows and assigns a new ID for the records that are pre-existing in the target. Below is an example:
1st run:
ID SourceID Name State
1 123 ABC NY
2 456 DEF PA
2nd run:
ID SourceID Name State
1 123 ABC NY
2 456 DEF PA
3 123 ABC NY
4 456 DEF PA
Desired output:
1) create a new row and assign a new ID if a record gets updated in the source.
2) create a new row and assign a new ID if new rows are inserted in the source.
How can this be obtained in Informatica?
Thank you in advance!
I'll take a flyer and assume the ACTUAL question here is "How can I tell whether the incoming record is an insert or an update, so that I can ignore everything else?". You could:
a) have a date field in your source data identifying when the record was updated, and then restrict your source qualifier to only pick up records last updated after the last time this mapping ran. The drawback is that if fields you're not interested in were updated, you'll process a lot of redundant records.
b) better suggestion!! Configure a dynamic lookup that stores the latest state of each record, matching by SourceID. You can then use the NewLookupRow indicator port to tell whether the record is an insert, an update, or unchanged, and filter out the no-change records in a subsequent transformation.
Give the ID field an IDENTITY PROPERTY...
Create Table SomeTable (ID int identity(1,1),
SourceID int,
[Name] varchar(64),
[State] varchar(64))
When you insert into it... you don't insert anything for ID. For example...
insert into SomeTable (SourceID, [Name], [State])
select
    SourceID,
    [Name],
    [State]
from
    someOtherTable
The ID field will auto-increment, starting at 1 and increasing by 1 each time a row is inserted. Regarding your question about adding rows each time one is updated or inserted in another table: this is what TRIGGERS are for.
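For example, a trigger on the source table could append a new versioned row on every insert or update. This is only a sketch using the hypothetical table names from above:

```sql
CREATE TRIGGER dbo.someOtherTable_version
ON dbo.someOtherTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Every insert or update in the source appends a fresh row;
    -- the IDENTITY column hands out the new ID automatically.
    INSERT INTO dbo.SomeTable (SourceID, [Name], [State])
    SELECT SourceID, [Name], [State]
    FROM inserted;
END
```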

Update trigger not working correctly

I have two tables, abstractDetails and auditAbstractDetails. I have a web form where a person submits details about an abstract (title, background, objective, etc.), and it inserts the values into the abstractDetails table.
Now I have written a trigger so that whenever a person updates his form and submits it again, it populates the new values into abstractDetails and also populates the old values and new values into the AuditabstractDetails table (title, background, objective, ..., auditTitle, auditBackground, ...).
The old values should be stored in the AuditabstractDetails table under the columns title, background, etc., and the new values under the columns auditTitle, auditBackground, etc.
here is my trigger:
USE [Abstract]
GO
/****** Object: Trigger [dbo].[trgAuditabstractDetails] Script Date: 09/28/2016 11:45:29 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[trgAuditabstractDetails]
ON [dbo].[abstractDetails]
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON
delete from AuditabstractDetails
where Id IN (SELECT I.Id FROM Inserted I);
insert into AuditabstractDetails(Id, abstractInfoId, title, background,
objective, design, result, Audittitle, Auditbackground,
Auditobjective, Auditdesign, Auditresult)
-- case 1: ID unchanged
SELECT I.Id, I.abstractInfoId, D.title, D.background, D.objective,
D.design, D.result,
I.title, I.background, I.objective, I.design, I.result
FROM Inserted I
JOIN Deleted D on I.Id=D.Id;
END
GO
After this, when I update the values (title, background, etc.) in the abstractDetails table from the back end (by editing the values directly in the DB), it works perfectly. But when I update them through the form and submit it, it stores the new values in both sets of columns (title, auditTitle, etc.); it does not store the old values in their columns.
abstractDetails table
===========================================
Id |title | background | Result
-------------------------------------------
1 Abs1 backAbs2 resAbs2
2 Abs2 backAbs2 resAbs2
-------------------------------------------
Expected Result:
AuditabstractDetails table
===============================================
Id |title | background | Result | AuditTitle | auditBackground |auditResult
----------------------------------------------------------------------------
1 Abs1 backAbs2 resAbs2 Audit1 auditback1 auditres1
2 Abs2 backAbs2 resAbs2 Audit2 auditback2 auditres2
----------------------------------------------------------------------------
ActualResult:
AuditabstractDetails table
===============================================
Id |title | background | Result | AuditTitle | auditBackground |auditResult
----------------------------------------------------------------------------
1 Audit1 auditback1 auditres1 Audit1 auditback1 auditres1
2 Audit2 auditback2 auditres2 Audit2 auditback2 auditres2
----------------------------------------------------------------------------
Need help
It sounds like the problem isn't with the trigger but with whatever is saving your data. If it works as desired from SQL Server but fails when the UI makes the call, you need to check the UI to see what's going on. It seems likely that the UI is updating the record more than once: the first UPDATE audits old vs. new correctly, and a second UPDATE with the same values then overwrites that audit row with new vs. new. I would test this by removing the delete portion of your trigger and seeing how many rows you end up with in your audit table.
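With the delete removed from the trigger, a quick check like this sketch would show whether the form writes each record more than once per save:

```sql
-- More than one audit row per Id after a single save suggests
-- the UI issued multiple UPDATE statements.
SELECT Id, COUNT(*) AS audit_rows
FROM AuditabstractDetails
GROUP BY Id
HAVING COUNT(*) > 1;
```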
