How to find who deleted data from my table in SQL Server

I have a table in my database that is accessible to many users. Some data is now missing from that table. How can I find out who deleted those rows?

You can use ApexSQL Log to fully investigate operations executed against your table. The database needs to be in the full recovery model, so that information on past operations is available in the transaction log file for ApexSQL Log to read. Once the tool analyzes your t-log, you will be able to see the time the operation began and ended, the operation type, the schema and name of the object affected, the name of the user who executed the operation, and more. For UPDATEs, you'll even be able to see the old and the new values of the updated fields.
There are several guides on this here https://solutioncenter.apexsql.com/apexsql-log-solutions-table-of-contents/
Furthermore, you can even use ApexSQL Log to roll back those transactions if you need to. It will simply 'undo' them and return the data to its original state.
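As a quick sanity check before relying on the transaction log, you can confirm the recovery model with plain T-SQL (this is my addition, not part of ApexSQL Log; 'YourDatabase' is a placeholder name):
-- Check the current recovery model ('YourDatabase' is a placeholder).
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'YourDatabase';

-- Switch to the full recovery model if needed; take a full backup afterwards
-- so the log chain actually starts being preserved.
ALTER DATABASE YourDatabase SET RECOVERY FULL;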

You can find the name of the user who deleted the data with the following little snippet:
DECLARE @TableName sysname
SET @TableName = 'dbo.t1_new' -- input table name
SELECT
    u.[name] AS UserName
    , l.[Begin Time] AS TransactionStartTime
FROM
    fn_dblog(NULL, NULL) l
INNER JOIN
(
    SELECT
        [Transaction ID]
    FROM
        fn_dblog(NULL, NULL)
    WHERE
        AllocUnitName LIKE @TableName + '%'
    AND
        Operation = 'LOP_DELETE_ROWS'
) deletes
    ON deletes.[Transaction ID] = l.[Transaction ID]
INNER JOIN
    sysusers u
    ON u.[sid] = l.[Transaction SID]
Source: dba.stackexchange (I don't recall who posted it).
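If the user doesn't resolve through sysusers (for example, the delete was done by a login with no corresponding database user), a variation you could try (my assumption, not part of the original snippet) is to map the transaction SID to a login name with SUSER_SNAME:
-- Map the transaction SID straight to a server login name.
-- SUSER_SNAME may return NULL for orphaned or database-only users.
SELECT
    SUSER_SNAME(l.[Transaction SID]) AS LoginName
    , l.[Begin Time] AS TransactionStartTime
FROM fn_dblog(NULL, NULL) l
WHERE l.Operation = 'LOP_BEGIN_XACT'
AND l.[Transaction ID] IN
(
    SELECT [Transaction ID]
    FROM fn_dblog(NULL, NULL)
    WHERE AllocUnitName LIKE 'dbo.t1_new%'
    AND Operation = 'LOP_DELETE_ROWS'
)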

Unfortunately, you can't see deleted records unless you keep them somewhere yourself.
If you want to track this kind of intervention, you shouldn't actually delete your records.
Instead, add a few more fields to your table.
Here is an example:
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Person](
[Pers_ID] [int] IDENTITY(1,1) NOT NULL,
[Pers_CompanyID] [int] NULL,
[Pers_FirstName] [nvarchar](50) NULL,
[Pers_LastName] [nvarchar](50) NULL,
[Pers_CreatedBy] [int] NULL,
[Pers_CreatedDate] [datetime] NULL,
[Pers_UpdatedBy] [int] NULL,
[Pers_UpdatedDate] [datetime] NULL,
[Pers_Deleted] [bit] NULL,
CONSTRAINT [PK_Person] PRIMARY KEY CLUSTERED
(
[Pers_ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
When the user creates a record, set CreatedBy = UserID and CreatedDate = CurrentDate.
When updating a record, set UpdatedBy = UserID and UpdatedDate = CurrentDate.
When deleting, set Deleted = True, UpdatedBy = UserID and UpdatedDate = CurrentDate.
Then, in all of your queries, add the condition Deleted IS NULL (or Deleted = 0); note that comparing with = NULL never matches a row.
This way you can track who created, updated or deleted a record.
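Here is a minimal sketch of that pattern against the Person table above (the @UserID and @PersID values are hypothetical inputs supplied by the application):
DECLARE @UserID int = 42, @PersID int = 1001 -- hypothetical application inputs

-- "Delete": flag the row instead of removing it
UPDATE dbo.Person
SET Pers_Deleted = 1,
    Pers_UpdatedBy = @UserID,
    Pers_UpdatedDate = GETDATE()
WHERE Pers_ID = @PersID

-- Normal reads exclude soft-deleted rows
SELECT Pers_ID, Pers_FirstName, Pers_LastName
FROM dbo.Person
WHERE Pers_Deleted IS NULL OR Pers_Deleted = 0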

Related

"There is already an object named '' in the database" When table should be created only if it does not exist

I'd like to create the new table only if it does not already exist in the database. So I use the following:
IF (NOT EXISTS(SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo'
AND TABLE_NAME = 'FactSend'))
BEGIN
SET ANSI_NULLS ON;
SET QUOTED_IDENTIFIER ON;
CREATE TABLE [MyDB].[dbo].[FactSend](
[Id] [varchar](100) NOT NULL,
[FlowId] [int] NULL,
[Name] [nvarchar](550) NULL,
[Channel] [varchar](100) NOT NULL,
[Date] [datetime] NULL,
CONSTRAINT [PK_FactSend] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
ALTER TABLE [MyDB].[dbo].[FactSend] WITH CHECK ADD CONSTRAINT [FK_FactSend_DimFlow] FOREIGN KEY([FlowId])
REFERENCES [MyDB].[dbo].[DimFlow] ([Id])
ALTER TABLE [MyDB].[dbo].[FactSend] CHECK CONSTRAINT [FK_FactSend_DimFlow]
END
But I get the following error:
There is already an object named 'FactSend' in the database.
I know there is, that is why I put that in an IF so that the CREATE is skipped.
Too long for a comment, but a wild guess: the database you're connected to isn't MyDB, so you're checking a different database for the existence of FactSend and then trying to create it in MyDB. Does the following work?
USE MyDB;
GO
IF (NOT EXISTS(SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo'
AND TABLE_NAME = 'FactSend'))
BEGIN
SET ANSI_NULLS ON;
SET QUOTED_IDENTIFIER ON;
CREATE TABLE [dbo].[FactSend](
[Id] [varchar](100) NOT NULL,
[FlowId] [int] NULL,
[Name] [nvarchar](550) NULL,
[Channel] [varchar](100) NOT NULL,
[Date] [datetime] NULL,
CONSTRAINT [PK_FactSend] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
ALTER TABLE [dbo].[FactSend] WITH CHECK ADD CONSTRAINT [FK_FactSend_DimFlow] FOREIGN KEY([FlowId])
REFERENCES [dbo].[DimFlow] ([Id])
ALTER TABLE [dbo].[FactSend] CHECK CONSTRAINT [FK_FactSend_DimFlow]
END
When referencing an object with 2-part naming (e.g. dbo.MyTable, sys.columns, INFORMATION_SCHEMA.TABLES), the database you are currently connected to will be used. Writing a query/statement with 3-part naming does not change the database context you are in (just like 4-part naming doesn't change the server you are connected to).
I suspect that you were connected to the default database; probably master. As a result your EXISTS checked in the database master for the table dbo.FactSend.
In effect, your query was more like the below:
USE master;
IF (NOT EXISTS(SELECT * FROM master.INFORMATION_SCHEMA.TABLES --technically master isn't needed here, it's just to show the point
WHERE TABLE_SCHEMA = 'dbo'
AND TABLE_NAME = 'FactSend'))
BEGIN
SET ANSI_NULLS ON;
SET QUOTED_IDENTIFIER ON;
CREATE TABLE [MyDB].[dbo].[FactSend](
[Id] [varchar](100) NOT NULL,
[FlowId] [int] NULL,
...
So, to confirm: you were checking for the existence of the object master.dbo.FactSend and then, if it didn't exist, creating the object MyDB.dbo.FactSend. Of course, no matter how many times you (try to) create MyDB.dbo.FactSend, the object master.dbo.FactSend will still not exist, so the NOT EXISTS will always evaluate to true.
Making sure you are connected to the right database is really important. Personally, when using scripts to create objects, I recommend against 3-part naming. Instead, declare your database first (using USE) and then create your objects with 2-part naming. That way you always know which database the objects are being created in, you can't "accidentally" create them in the wrong one, and if you need to target a different database (maybe you're scripting the objects elsewhere) you only need to change the USE statement rather than every reference. Of course, if you are referring to objects in other databases then you do have to use 3-part naming, but I'm specifically talking about when everything sits tidily in one DB.
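A related pattern (my suggestion, not part of the original answer) is to use OBJECT_ID for the existence check; it is resolved against the current database in exactly the same way as INFORMATION_SCHEMA:
USE MyDB;
GO
-- OBJECT_ID is resolved in the current database, just like INFORMATION_SCHEMA.
IF OBJECT_ID(N'dbo.FactSend', N'U') IS NULL
BEGIN
    PRINT 'dbo.FactSend does not exist in this database yet';
    -- CREATE TABLE dbo.FactSend (...) would go here
END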

MS SQL Check for duplicate in two fields

I am trying to create a trigger that will check whether an author already exists in a table, based on the combination of first and last name. From what I've been reading this trigger should work, but when I try to insert any new author into the table it gives the "Author exists in table already!" error, even though I am inserting an author that does not exist in the table.
Here is the trigger
USE [WebsiteDB]
GO
CREATE TRIGGER [dbo].[tr_AuthorExists] ON [dbo].[Authors]
AFTER INSERT
AS
if exists ( select * from Authors
inner join inserted i on i.author_fname=Authors.author_fname AND i.author_lname=Authors.author_lname)
begin
rollback
RAISERROR ('Author exists in table already!', 16, 1);
End
Here is the table
CREATE TABLE [dbo].[Authors](
[author_id] [int] IDENTITY(1,1) NOT NULL,
[author_fname] [nvarchar](50) NOT NULL,
[author_lname] [nvarchar](50) NOT NULL,
[author_middle] [nvarchar](50) NULL,
CONSTRAINT [PK_Authors] PRIMARY KEY CLUSTERED
(
[author_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Any assistance would be appreciated!
Your AFTER trigger fires once the new row is already in Authors, so the EXISTS self-join always finds a match (the row you just inserted) and the insert is always rolled back. You will need to do this as an INSTEAD OF trigger, which also means you need to actually perform the insert inside the trigger. Something along these lines:
CREATE TRIGGER [dbo].[tr_AuthorExists] ON [dbo].[Authors]
instead of insert
AS
set nocount on;
if exists
(
select * from Authors a
inner join inserted i on i.author_fname = a.author_fname AND i.author_lname = a.author_lname
)
begin
rollback
RAISERROR ('Author exists in table already!', 16, 1);
End
else
insert Authors
select i.author_fname
, i.author_lname
, i.author_middle
from inserted i
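A quick sanity check of the INSTEAD OF trigger with hypothetical sample data; note that, as written, the trigger checks the whole batch at once, so if any row in a multi-row INSERT duplicates an existing author the entire insert is rolled back:
INSERT dbo.Authors (author_fname, author_lname, author_middle)
VALUES (N'Jane', N'Austen', NULL)  -- succeeds: no existing Jane Austen

INSERT dbo.Authors (author_fname, author_lname, author_middle)
VALUES (N'Jane', N'Austen', NULL)  -- rolled back: 'Author exists in table already!'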

SQL server scripting object date and database completeness

I have a script that automatically overwrites SQL Server objects every 2 days.
In order to check whether the script has run successfully, I would like to be able to check two things:
Find out the freshness of the objects by retrieving the creation date of each object (tables, views, ...). If it is older than 2 days, the script has not overwritten the objects. These objects have to be listed.
Find out the completeness of the objects by ensuring all objects are present based on a predefined list, i.e. check whether all tables/views are present. The object names are already stored in another table in the database, so that can be used as input.
How to go about this? What would be the approach? Could you please refer me to any good online resources? What scripting language is used to realize this?
Many thanks.
If you use the system tables, an unrelated release could throw you off. Use a log table to keep track of what is going on: on successful completion of your process, have it insert an entry into the table saying it completed, then query the log table to see when you should refresh again.
It could be something as simple as the table below, where activityTypeId = 1 identifies this process and activityValue is 0 for "started" and 1 for "completed".
CREATE TABLE [dbo].[ActivityLog](
[id] [int] IDENTITY(1,1) NOT NULL,
[activityTypeId] [int] NOT NULL,
[activityTime] [datetime] NOT NULL,
[activityValue] [int] NOT NULL,
CONSTRAINT [PK_ActivityLog] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
ON [PRIMARY]
) ON [PRIMARY]
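A minimal sketch of how the log table could be used (my illustration of the idea above; activityTypeId = 1 for this process, activityValue 0 = started, 1 = completed):
INSERT dbo.ActivityLog (activityTypeId, activityTime, activityValue)
VALUES (1, GETDATE(), 0)  -- script started

-- ... refresh the objects ...

INSERT dbo.ActivityLog (activityTypeId, activityTime, activityValue)
VALUES (1, GETDATE(), 1)  -- script completed

-- Alert if the process has not completed in the last 2 days
IF NOT EXISTS (
    SELECT 1
    FROM dbo.ActivityLog
    WHERE activityTypeId = 1
    AND activityValue = 1
    AND activityTime >= DATEADD(DAY, -2, GETDATE())
)
    RAISERROR('The refresh has not completed in the last 2 days', 16, 1)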
This should do it:
SELECT
mo.Name,
CASE
WHEN so.name IS NULL
THEN 'Does Not Exist'
WHEN DATEDIFF(dd, so.create_date, getdate()) > 2
THEN 'More than two days old'
ELSE 'Exists' END AS existCheck
FROM dbo.MyObjects AS mo
LEFT JOIN sys.objects AS so ON so.name = mo.Name
Here is another possible solution:
IF EXISTS(
    SELECT 1
    FROM YourTable T
    LEFT OUTER JOIN sys.objects O
        ON O.name = T.name
        AND O.type IN ('U','V')
    WHERE O.object_id IS NULL                          -- object from the list is missing
       OR O.modify_date < DATEADD(DAY, -2, GETDATE())  -- or has not been touched in 2 days
)
RAISERROR('Some objects are missing or have not been updated in the last 2 days', 16, 1)

How to speed up Xpath performance in SQL Server when searching for an element with specific text

I am supposed to remove whole rows, and parts of XML documents, from a table with an XML column, based on a specific value in that XML column. However, the table contains millions of rows and gets locked while I perform the operation. At the current rate it will take almost a week to clean up, and the system is too critical to be taken offline for that long.
Are there any ways to optimize the XPath expressions in this script?
declare @slutdato datetime = '2012-03-01 00:00:00.000'
declare @startdato datetime = '2000-02-01 00:00:00.000'
declare @lev varchar(20) = 'suppliername'
declare @todelete varchar(10) = '~~~~~~~~~~'
CREATE TABLE #ids (selId int NOT NULL PRIMARY KEY)
INSERT into #ids
select id from dbo.proevesvar
WHERE leverandoer = @lev
and proevedato <= @slutdato
and proevedato >= @startdato
begin transaction /* delete whole rows */
delete from dbo.proevesvar
where id in (select selId from #ids)
and ProeveSvarXml.exist('/LaboratoryReport/LaboratoryResults/Result[Value=sql:variable(''@todelete'')]') = 1
and Proevesvarxml.exist('/LaboratoryReport/LaboratoryResults/Result[Value!=sql:variable(''@todelete'')]') = 0
commit
go
begin transaction /* delete single results */
UPDATE dbo.proevesvar SET ProeveSvarXml.modify('delete /LaboratoryReport/LaboratoryResults/Result[Value=sql:variable(''@todelete'')]')
where id in (select selId from #ids)
commit
go
The table definition is:
CREATE TABLE [dbo].[ProeveSvar](
[ID] [int] IDENTITY(1,1) NOT NULL,
[CPRnr] [nchar](10) NOT NULL,
[ProeveDato] [datetime] NOT NULL,
[ProeveSvarXml] [xml] NOT NULL,
[Leverandoer] [nvarchar](50) NOT NULL,
[Proevenr] [nvarchar](50) NOT NULL,
[Lokationsnr] [nchar](13) NOT NULL,
[Modtaget] [datetime] NOT NULL,
CONSTRAINT [PK_ProeveSvar] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
CONSTRAINT [IX_ProeveSvar_1] UNIQUE NONCLUSTERED
(
[CPRnr] ASC,
[Lokationsnr] ASC,
[Proevenr] ASC,
[ProeveDato] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
The first insert statement is very fast. I believe I can handle the locking by committing 50 rows at a time, so other requests can be handled in between my transactions.
The total number of rows for this supplier is about 5.5 million and the total rowcount in the table is around 13 million.
I've not really used XPath within SQL Server before, but something that stands out is that you're doing lots of reads and writes in the same command (in the second statement). If possible, change your queries to:
CREATE TABLE #ids (selId int NOT NULL PRIMARY KEY)
INSERT into #ids
select id from dbo.proevesvar
WHERE leverandoer = @lev
and proevedato <= @slutdato
and proevedato >= @startdato
and ProeveSvarXml.exist('/LaboratoryReport/LaboratoryResults/Result[Value=sql:variable(''@todelete'')]') = 1
and Proevesvarxml.exist('/LaboratoryReport/LaboratoryResults/Result[Value!=sql:variable(''@todelete'')]') = 0
begin transaction /* delete whole rows */
delete from dbo.proevesvar
where id in (select selId from #ids)
This means the first query will only populate the new temporary table and not write anything back, which will take slightly longer than your original, but the key thing is that your second query will ONLY be deleting records based on what's in the temporary table.
What you'll probably find is that, because it's deleting records, it's constantly maintaining the indexes, which slows the reads down as well.
I'd also delete/disable any indexes/constraints that don't actually help your query run.
Also, you're creating your clustered primary key on the ID, which isn't always the best thing to do, especially if you're doing lots of date scans.
Can you also check the estimated execution plan for the top query? It would be interesting to see the order in which it evaluates the conditions. If it does the date check first, that's fine, but if it runs the XPath before checking the date, you might have to separate it into 3 queries, or add a new clustered index on (proevedato, id). That should force the query to run the XPath only for records that actually match the date range.
Hope this helps.
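Building on the point in the question about committing a limited number of rows at a time, here's a minimal sketch (my assumption, not part of the original answer) of a batched delete that keeps each transaction short so other requests can get in between batches; the batch size of 500 is arbitrary and should be tuned for the workload:
DECLARE @batch int = 500  -- arbitrary batch size, tune as needed

WHILE 1 = 1
BEGIN
    -- Delete a small batch of the pre-identified rows; locks are held only briefly
    DELETE TOP (@batch) p
    FROM dbo.proevesvar p
    INNER JOIN #ids i ON i.selId = p.id

    IF @@ROWCOUNT = 0
        BREAK  -- nothing left to delete
END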

Trigger Not Putting Data in History Table

I have the following trigger (along with others on similar tables) that sometimes fails to put data into the historic table. It should put data into a historic table exactly as it's inserted/updated and stamped with a date.
CREATE TRIGGER [dbo].[trig_UpdateHistoricProductCustomFields]
ON [dbo].[productCustomFields]
AFTER UPDATE,INSERT
AS
BEGIN
IF ((UPDATE(data)))
BEGIN
SET NOCOUNT ON;
DECLARE @date bigint
-- Build a bigint timestamp of the form yyyyMMddHHmmss
SET @date = datepart(yyyy,getdate())*10000000000+datepart(mm,getdate())*100000000+datepart(dd,getdate())*1000000+datepart(hh,getdate())*10000+datepart(mi,getdate())*100+datepart(ss,getdate())
INSERT INTO historicProductCustomFields (productId,customFieldNumber,data,effectiveDate) (SELECT productId,customFieldNumber,data,@date from inserted)
END
END
Schema:
CREATE TABLE [dbo].[productCustomFields](
[id] [int] IDENTITY(1,1) NOT NULL,
[productId] [int] NOT NULL,
[customFieldNumber] [int] NOT NULL,
[data] [varchar](50) NULL,
CONSTRAINT [PK_productCustomFields] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[historicProductCustomFields](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[productId] [int] NOT NULL,
[customFieldNumber] [int] NOT NULL,
[data] [varchar](50) NULL,
[effectiveDate] [bigint] NOT NULL,
CONSTRAINT [PK_historicProductCustomFields] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
I insert and update only one record at a time on the productCustomFields table. It seems to work 99% of the time, and it's hard to test for failure. Can anyone shed some light on what I may be doing wrong, or on better practices for this type of trigger?
The environment is SQL Server Express 2005. I haven't rolled out the service pack for this particular client yet either.
I think the right way to solve this is to wrap the insert into the dbo.historicProductCustomFields table in a TRY...CATCH block and write any errors into a custom error-log table. From there it is easy to track this down.
I also see a PK on the historicProductCustomFields table, but if you insert and then update a given record in productCustomFields, won't you get primary key violations on historicProductCustomFields?
You should also schema-qualify the table you are inserting into.
Finally, check whether there are multiple triggers of the same type (AFTER INSERT) on the table. If there are, they all fire, but you don't control the order in which they run (beyond designating a first and last trigger), which can make behaviour like this hard to reason about.
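A minimal sketch of the TRY...CATCH idea, assuming a hypothetical dbo.TriggerErrorLog table (neither the table nor its exact shape is from the original question):
-- Hypothetical error-log table for trigger failures
CREATE TABLE dbo.TriggerErrorLog(
    id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    loggedAt datetime NOT NULL DEFAULT (GETDATE()),
    errorMsg nvarchar(4000) NULL
)
GO
ALTER TRIGGER [dbo].[trig_UpdateHistoricProductCustomFields]
ON [dbo].[productCustomFields]
AFTER UPDATE, INSERT
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(data)
    BEGIN
        DECLARE @date bigint
        SET @date = datepart(yyyy,getdate())*10000000000+datepart(mm,getdate())*100000000+datepart(dd,getdate())*1000000+datepart(hh,getdate())*10000+datepart(mi,getdate())*100+datepart(ss,getdate())
        BEGIN TRY
            INSERT INTO dbo.historicProductCustomFields (productId, customFieldNumber, data, effectiveDate)
            SELECT productId, customFieldNumber, data, @date FROM inserted
        END TRY
        BEGIN CATCH
            -- Log the failure instead of silently losing the history row
            INSERT INTO dbo.TriggerErrorLog (errorMsg) VALUES (ERROR_MESSAGE())
        END CATCH
    END
END
One caveat: if the failed insert has already doomed the surrounding transaction, the logging insert may itself fail, so checking XACT_STATE() in the CATCH block (or logging outside the transaction) may be necessary.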
Try a trigger along these lines; it's just an example, so adapt it to your own table:
create TRIGGER [dbo].[insert_Assets_Tran]
ON [dbo].[AssetMaster]
AFTER INSERT , UPDATE
AS BEGIN
DECLARE @isnum TINYINT;
SELECT @isnum = COUNT(*) FROM inserted;
IF (@isnum = 1)
INSERT INTO AssetTransaction
select [AssetId],[Brandname],[SrNo],[Modelno],[Processor],[Ram],[Hdd],[Display],[Os],[Office],[Purchasedt]
,[Expirydt],[Vendor],[VendorAMC],[Typename],[LocationName],[Empid],[CreatedBy],[CreatedOn],[ModifiedBy]
,[ModifiedOn],[Remark],[AssetStatus],[Category],[Oylstartdt],[Oylenddt],[Configuration]
,[AStatus],[Tassign]
FROM inserted;
ELSE
RAISERROR('some fields not supplied', 16, 1)
WITH SETERROR;
END
