I am trying to implement a way to track changes to two tables, gsbirst_Objects and gsbirst_Objects_Backup. It should record DML and TRUNCATE statements.
I have a stored procedure that updates the main table when it is called. How can I capture the changes at the beginning and end of each call to the stored procedure?
I have already created the backup table.
I did this a while back using triggers. It isn't the best way, but it works. You can create an audit table, then build a trigger for each action; I made triggers ON DELETE, ON UPDATE, and ON INSERT. I would then grab the record that was inserted, updated, or deleted, concatenate the row together, and load a before and after image into the audit table depending on what happened. For me this route gave a more detailed view of what happened and what changed.
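A minimal sketch of that approach, assuming a hypothetical column layout (ObjectId as the key, plus ObjectName) for gsbirst_Objects, since the real columns are not shown in the question; the audit table and trigger names are also illustrative. Note that TRUNCATE TABLE does not fire DML triggers, so truncation would need to be captured separately.

CREATE TABLE dbo.gsbirst_Objects_Audit
(
    AuditId    int IDENTITY(1,1) PRIMARY KEY,
    ActionType varchar(10),                   -- 'INSERT', 'UPDATE', or 'DELETE'
    BeforeRow  nvarchar(max),                 -- concatenated old values (NULL for inserts)
    AfterRow   nvarchar(max),                 -- concatenated new values (NULL for deletes)
    ChangedBy  sysname DEFAULT SUSER_SNAME(),
    ChangedAt  datetime DEFAULT GETDATE()
);
GO

CREATE TRIGGER dbo.tr_gsbirst_Objects_Audit
ON dbo.gsbirst_Objects
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.gsbirst_Objects_Audit (ActionType, BeforeRow, AfterRow)
    SELECT
        CASE WHEN d.ObjectId IS NULL THEN 'INSERT'
             WHEN i.ObjectId IS NULL THEN 'DELETE'
             ELSE 'UPDATE' END,
        CASE WHEN d.ObjectId IS NULL THEN NULL
             ELSE CONCAT(d.ObjectId, '|', d.ObjectName) END,   -- before image
        CASE WHEN i.ObjectId IS NULL THEN NULL
             ELSE CONCAT(i.ObjectId, '|', i.ObjectName) END    -- after image
    FROM inserted i
    FULL OUTER JOIN deleted d ON d.ObjectId = i.ObjectId;      -- ObjectId assumed to be the key
END;

Because the trigger fires for every DML statement, changes made inside the stored procedure are captured automatically, without modifying the procedure itself.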
I would like to use the Time Travel feature in Snowflake to restore the original table.
I dropped and recreated the table using the following commands:
DROP TABLE "SOCIAL_LIVE"
CREATE TABLE "SOCIAL_LIVE" (...)
I would like to go back to the original table as it was before it was dropped.
I've used the following code (the statement ID is redacted as 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'):
Select "BW"."PUBLIC"."SOCIAL_LIVE".* From "BW"."PUBLIC"."SOCIAL_LIVE";
select * from SOCIAL_LIVE before(statement => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');
Received an error message:
Statement xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx cannot be used to specify time for time travel query.
How can we go back to the original table and restore it in Snowflake?
The documentation states:
After dropping a table, creating a table with the same name creates a
new version of the table. The dropped version of the previous table
can still be restored using the following method:
Rename the current version of the table to a different name.
Use the UNDROP TABLE command to restore the previous version.
If you need further information, this page is useful:
https://docs.snowflake.net/manuals/sql-reference/sql/drop-table.html#usage-notes
You will need to undrop the table in order to access that data, though. Time Travel is not maintained by name alone, so once you dropped and recreated the table, the new table has its own, new Time Travel history.
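Following those documented steps, a sketch for the table from the question might look like this (assuming the current database and schema are BW.PUBLIC; the new name SOCIAL_LIVE_NEW is illustrative):

-- 1. Move the recreated table out of the way
ALTER TABLE "SOCIAL_LIVE" RENAME TO "SOCIAL_LIVE_NEW";

-- 2. Restore the most recently dropped version under its original name
UNDROP TABLE "SOCIAL_LIVE";

UNDROP restores the most recently dropped version; if the table was dropped more than once, you can repeat the rename/undrop cycle to reach earlier versions, as long as they are still within the Time Travel retention period.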
It looks like there are three common reasons that error is seen, with solutions:

The table has been dropped and recreated: see this answer.
The time travel period has been exceeded: no solution; target a statement within the time travel period for the table.
The wrong statement type is being targeted: only certain statement types can be targeted. Currently, these include SELECT, BEGIN, COMMIT, and DML (INSERT, UPDATE, etc.). See the documentation here.
Statement xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx cannot be used to specify time for time travel query.
Usually we get the above error when we try to time travel to a point before the object was created. Try the time travel option with OFFSET instead.
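For example, a query targeting a relative point in time rather than a statement ID might look like this (the offset is in seconds, and must still fall within the table's Time Travel retention period and after its creation time; the timestamp below is illustrative):

-- State of the table as it was 5 minutes ago
SELECT * FROM SOCIAL_LIVE AT(OFFSET => -60*5);

-- Or target an absolute point in time
SELECT * FROM SOCIAL_LIVE BEFORE(TIMESTAMP => '2019-01-01 00:00:00'::timestamp_tz);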
I want to track each and every execution of all the stored procedures in a database. Is there any way, or any global event, where I can write SQL to insert a record into a table along with the stored procedure name or object id?
There are so many stored procedures in my database that I can't make changes to all of them and re-deploy them. I need a global event where I can write the SQL.
I know we have the sys.dm_exec_procedure_stats view (it shows the last execution time from the plan cache), but I want to track executions manually by inserting a record for each SP into a separate table.
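For reference, a minimal query against that DMV might look like the sketch below; since it is fed from the plan cache, it is not a complete or durable execution history:

SELECT
    DB_NAME(database_id)                AS database_name,
    OBJECT_NAME(object_id, database_id) AS procedure_name,
    execution_count,
    last_execution_time
FROM sys.dm_exec_procedure_stats
ORDER BY last_execution_time DESC;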
Answers will be greatly appreciated.
For that purpose you can create a separate table and write a trigger for insert, update, and delete on each table in your database, so you can manually track all types of transactions. Or write only an insert trigger for each table that is used in your stored procedures.
I have a table named "LogDelete" to save information about users that delete rows from any table. The table fields are like this:
create table LogDelete
(
pk int identity(1,1) primary key,
TableName varchar(15),
DeleteUser nvarchar(20),
DeleteDate datetime
)
Actually I want to create a trigger that fires on all tables on the update action, so that every update writes the proper information to the LogDelete table.
Right now I use a stored procedure and call it from every update action on my tables.
Is there a way to do this?
No. There are 'event' triggers, but they are mainly related to logging in. These kinds of triggers are actually DDL triggers, so they are not related to updating data, but to updating your database schema.
As far as I know, there is no trigger that fires on every update. That means that the way you are handling it now, through a stored procedure, is probably the best way. You can create triggers on each table to call the procedure and do the logging.
You might even write a script that creates all those triggers for you in one run (a rough sketch follows the documentation note below). That will make initially creating and later updating the triggers a bit easier.
Here is some MSDN documentation, which says (in remarks about DML triggers):
CREATE TRIGGER must be the first statement in the batch and can apply to only one table.
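Here is a rough sketch of such a generation script, assuming all audited tables live in the dbo schema and each generated trigger simply logs the table name, user, and date into the LogDelete table from the question. Because CREATE TRIGGER must be the first statement in its batch, each statement is executed through sp_executesql:

-- Generate one logging trigger per user table.
DECLARE @tableName sysname, @sql nvarchar(max);

DECLARE table_cursor CURSOR FOR
    SELECT name FROM sys.tables
    WHERE is_ms_shipped = 0 AND name <> 'LogDelete';

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @tableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'
        CREATE TRIGGER dbo.tr_Log_' + @tableName + N'
        ON dbo.' + QUOTENAME(@tableName) + N'
        AFTER UPDATE, DELETE
        AS
        INSERT INTO dbo.LogDelete (TableName, DeleteUser, DeleteDate)
        VALUES (''' + @tableName + N''', SUSER_SNAME(), GETDATE());';

    -- CREATE TRIGGER must start its own batch, so run it dynamically.
    EXEC sys.sp_executesql @sql;

    FETCH NEXT FROM table_cursor INTO @tableName;
END;

CLOSE table_cursor;
DEALLOCATE table_cursor;

Note that TableName is only varchar(15) in the LogDelete definition above, so longer table names would need a wider column.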
There is no magic solution for your request; there is no such thing as an event trigger covering all DML (INSERT, UPDATE, DELETE) the way you would like, but there are some alternatives you can consider:

If you are using SQL Server 2008 or later, the best thing you could use is CDC (Change Data Capture); you can start with this article by Dave Pinal. I think this is the best approach, since it is not affected by any change in table structures (a small sketch of enabling it follows this list).
Read the log file. You'll need to analyze it to find each DML activity in the log, so you can build a unified operation to log the changes you need; obviously this is not online and is not trivial.
Same as option two, but using traces on all the DML activity. The advantage of this approach is that it can be almost online and it will not require analyzing the log file; you'll just need to analyze a Profiler table.
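As a starting point for the CDC option, enabling it might look like the following sketch (the table name is illustrative):

-- Enable Change Data Capture at the database level (requires sysadmin).
EXEC sys.sp_cdc_enable_db;
GO

-- Enable CDC for a specific table; changes are then collected automatically
-- in a change table by the capture job, with no triggers required.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',   -- illustrative table name
    @role_name     = NULL;         -- NULL means no gating role is required to read the changes
GO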
One of our tables has a column, of XML data type, for saving troubleshooting information pertaining to the row, so if an issue arises we can quickly see everything that happened for that transaction. This has become an issue because it grows the database size drastically. After a month there is generally no need to retrieve this information, and it is wasting valuable space.
Our solution is to null out the XML log column after it is a month old by using an insert trigger. Our concern is, will this affect the performance of the table enough to be noticeable and potentially cause problems?
Below is what we are trying to achieve:
CREATE PROCEDURE [dbo].[sp_ClearTransactionXmlLogs]
AS
    UPDATE [dbo].[CCResponse]
    SET [TransactionXML] = NULL
    WHERE [DateSaved] < DATEADD(MONTH, -1, GETDATE())
      AND [TransactionXML] IS NOT NULL;
GO

CREATE TRIGGER [dbo].[tr_ClearTransactionXmlLogs]
ON [dbo].[CCResponse]
AFTER INSERT
AS EXEC [dbo].[sp_ClearTransactionXmlLogs];
Rather than having this run as a trigger every time an insert happens, why not schedule it as a nightly job, part of your database maintenance jobs?
Usually triggers are used to perform an action after the main operation (insert, update, delete) on the record that changed.
If you don't execute any inserts, your CCResponse rows remain with TransactionXML not null.
Instead of a trigger, IMHO, I would use a scheduled job.
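A rough sketch of such a nightly SQL Server Agent job, calling the procedure from the question (the job name, schedule, and database name are illustrative):

USE msdb;
GO

-- Create the job and a single T-SQL step that runs the cleanup procedure.
EXEC dbo.sp_add_job
    @job_name = N'Clear old TransactionXML logs';

EXEC dbo.sp_add_jobstep
    @job_name      = N'Clear old TransactionXML logs',
    @step_name     = N'Null out XML older than one month',
    @subsystem     = N'TSQL',
    @command       = N'EXEC dbo.sp_ClearTransactionXmlLogs;',
    @database_name = N'MyDatabase';          -- illustrative database name

-- Run once per night at 02:00.
EXEC dbo.sp_add_jobschedule
    @job_name          = N'Clear old TransactionXML logs',
    @name              = N'Nightly',
    @freq_type         = 4,                  -- daily
    @freq_interval     = 1,
    @active_start_time = 020000;

-- Target the local server.
EXEC dbo.sp_add_jobserver
    @job_name = N'Clear old TransactionXML logs';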
I am trying to build an audit history by adding triggers to my tables and inserting rows into my Audit table. I have a stored procedure that makes doing the inserts a bit easier because it saves code; I don't have to write out the entire insert statement, but instead execute the stored procedure with a few parameters for the columns I want to insert.
I am not sure how to execute a stored procedure for each of the rows in the "inserted" table. I think maybe I need to use a cursor, but I'm not sure; I've never used a cursor before.
Since this is an audit, I am going to need to compare the value of each column, old to new, to see if it changed. If it did change, I will execute the stored procedure that adds a row to my Audit table.
Any thoughts?
I would trade space for time and not do the comparison. Simply push the new values to the audit table on insert/update. Disk is cheap.
Also, I'm not sure what the stored procedure buys you. Can't you do something simple in the trigger like:
insert into dbo.mytable_audit
select *, getdate(), getdate(), 'create' from inserted
Where the trigger runs on insert and you are adding created time, last updated time, and modification type fields. For an update, it's a little trickier since you'll need to supply an explicit column list, as the created time shouldn't be updated:
insert into dbo.mytable_audit (col1, col2, ...., last_updated, modification)
select *, getdate(), 'update' from inserted
Also, are you planning to audit only successes or failures as well? If you want to audit failures, you'll need something other than triggers I think since the trigger won't run if the transaction is rolled back -- and you won't have the status of the transaction if the trigger runs first.
I've actually moved my auditing to my data access layer and do it in code now. It makes it easier to do both success and failure auditing, and (using reflection) it is pretty easy to copy the fields to the audit object. The other thing it allows me to do is capture user context, since I don't give the actual user permissions to the database and run all queries using a service account.
If your database needs to scale past a few users this will become very expensive. I would recommend looking into 3rd party database auditing tools.
There is already a built-in function UPDATE() which tells you if a column has changed (but it is over the entire set of inserted rows).
You can look at some of the techniques in Paul Nielsen's AutoAudit triggers which are code generated.
What it does is check both:
IF UPDATE(<column_name>)
INSERT Audit (...)
SELECT ...
FROM Inserted
JOIN Deleted
ON Inserted.KeyField = Deleted.KeyField -- (AutoAudit does not support multi-column primary keys, but the technique can be done manually)
AND NOT (Inserted.<column_name> = Deleted.<column_name> OR COALESCE(Inserted.<column_name>, Deleted.<column_name>) IS NULL)
But it audits each column change as a separate row. I use it for auditing changes to configuration tables. I am not currently using it for auditing heavy change tables. (But in most transactional systems I've designed, rows on heavy activity tables are typically immutable, you don't have a lot of UPDATEs, just a lot of INSERTs - so you wouldn't even need this kind of auditing). For instance, orders or ledger entries are never changed, and shopping carts are disposable - neither would have this kind of auditing. On low volume change tables, like customer, you can use this kind of auditing.
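A concrete, hypothetical version of that pattern for a single column, assuming a Customer table keyed by CustomerId and an Audit table with the columns shown; a real implementation (or AutoAudit's generated code) would repeat the block once per audited column:

CREATE TRIGGER dbo.tr_Customer_Audit
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Statement-level check: was the column present in the UPDATE's SET list?
    IF UPDATE(Email)
    BEGIN
        INSERT INTO dbo.Audit (TableName, ColumnName, KeyValue, OldValue, NewValue, ChangedAt)
        SELECT 'Customer', 'Email', i.CustomerId, d.Email, i.Email, GETDATE()
        FROM inserted i
        JOIN deleted d ON d.CustomerId = i.CustomerId
        -- Row-level check: only rows where the value really changed (NULL-safe).
        WHERE i.Email <> d.Email
           OR (i.Email IS NULL AND d.Email IS NOT NULL)
           OR (i.Email IS NOT NULL AND d.Email IS NULL);
    END
END;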
Jeff,
I agree with Zodeus; a good option is to use a 3rd party tool.
I have used auditdatabase (free), a web tool that generates audit triggers (you do not need to write a single line of T-SQL code).
Another good tool is Apex SQL Audit, but it's not free.
I hope this helps you,
F. O'Neill