Trigger that only runs once per day - sql-server

This trigger backs up data from dbo.Node to dbo.NodeArchive. While backups are important, I only need to do this once per day. Note that there is a field called dbo.NodeArchive.versionDate (smalldatetime).
CREATE TRIGGER [dbo].[Node_update]
ON [dbo].[Node]
FOR UPDATE
AS
BEGIN
INSERT INTO dbo.NodeArchive ([NodeID]
,[ParentNodeID]
,[Slug]
,[xmlTitle]
...
,[ModifyBy]
,[ModifyDate]
,[CreateBy]
,[CreateDate])
SELECT [deleted].[NodeID]
,[deleted].[ParentNodeID]
,[deleted].[Slug]
,[deleted].[xmlTitle]
...
,[deleted].[ModifyBy]
,[deleted].[ModifyDate]
,[deleted].[CreateBy]
,[deleted].[CreateDate]
FROM [deleted] LEFT JOIN dbo.Node
ON [deleted].NodeID = dbo.Node.NodeID
WHERE deleted.ModifyDate <> dbo.Node.ModifyDate
END
GO
I am looking to back up changes, but never more than one backup version per day. If there is no change, there is no backup.

That's not a trigger anymore - that'll be a scheduled job. Triggers by their very definition execute whenever a given operation (INSERT, DELETE, UPDATE) happens.
Use the SQL Server Agent facility to schedule that T-SQL code to run once per day.
Read all about SQL Server Agent jobs in the SQL Server Books Online on MSDN.
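If you go the scheduled-job route, the job itself can also be created in T-SQL via the msdb stored procedures. A minimal sketch (the job name, database name, and the dbo.ArchiveChangedNodes procedure wrapping your INSERT ... SELECT are all assumed names, not part of the question):

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Nightly node archive';
EXEC dbo.sp_add_jobstep @job_name = N'Nightly node archive',
     @step_name = N'Copy changed nodes',
     @subsystem = N'TSQL',
     @database_name = N'YourDatabase',            -- assumed name
     @command = N'EXEC dbo.ArchiveChangedNodes;'; -- hypothetical proc wrapping the INSERT ... SELECT
EXEC dbo.sp_add_jobschedule @job_name = N'Nightly node archive',
     @name = N'Daily at 2 AM',
     @freq_type = 4,              -- daily
     @freq_interval = 1,          -- every 1 day
     @active_start_time = 020000; -- HHMMSS
EXEC dbo.sp_add_jobserver @job_name = N'Nightly node archive';
GO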
Update: so if I understand correctly: you want to have an UPDATE trigger - but that trigger would only record the NodeID values that were affected, into a "these nodes need to be backed up at night" sort of table. Then, at night, you would have a SQL Agent job that runs, scans that work table, and for every NodeID value stored in there executes the T-SQL statement to copy that node's data into the NodeArchive table.
With this approach, if your node with NodeID = 42 changes ten times, you'll still have only a single entry for NodeID = 42 in your work table, and the nightly backup job will then copy that node only once into the NodeArchive.
With this approach, you can decouple the actual copying (which might take time) from the update process. The UPDATE trigger only records which NodeID rows need processing - the actual processing then happens sometime later, at an off-peak hour, without disturbing users of your system.
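A minimal sketch of that decoupled approach (untested; the work table dbo.NodeBackupQueue is an assumed name, and the column lists are elided just as in the question):

CREATE TABLE dbo.NodeBackupQueue (NodeID INT NOT NULL PRIMARY KEY);
GO
-- Replaces the original trigger: record only which nodes changed
CREATE TRIGGER [dbo].[Node_update]
ON [dbo].[Node]
FOR UPDATE
AS
BEGIN
    INSERT INTO dbo.NodeBackupQueue (NodeID)
    SELECT d.NodeID
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM dbo.NodeBackupQueue AS q
                      WHERE q.NodeID = d.NodeID);
END
GO
-- Nightly SQL Agent job step: copy each queued node once, then clear the queue
BEGIN TRANSACTION;

INSERT INTO dbo.NodeArchive ([NodeID], [ParentNodeID], ..., [CreateDate])
SELECT n.[NodeID], n.[ParentNodeID], ..., n.[CreateDate]
FROM dbo.Node AS n
JOIN dbo.NodeBackupQueue AS q ON q.NodeID = n.NodeID;

DELETE FROM dbo.NodeBackupQueue;

COMMIT TRANSACTION;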

Related

Replace NULL columns in live database with data from a SQL Server backup

I recently had a horrible blunder.
While attempting to fix an issue we were having with our Exact Synergy system, I meant to replace the data in two columns for one account with NULL; instead I replaced those two columns in ALL accounts with NULL. Completely restoring from a backup is not an option, so now I am left trying to figure out how to replace the missing data.
I have made a full restore of a recent backup for this database to a test database and have confirmed that the data I need is there. I am trying to figure out how to properly write a query that will replace the data in the two columns.
Since this is a backup of the same database, the tables and columns are all identically named.
The databases are Synergy and Synergy_TESTDB
The owner of the tables is dbo
The table is called Addresses
The columns are called textfield1 and textfield2
What I would like to do is take the data in textfield1 and textfield2 from the backup database and use it to populate the empty, or NULL, columns in the live database.
I am extremely new to SQL, and would appreciate any help.
This is obviously untested. I take no responsibility for you using this code.
That said I'd like to try and help you.
The main point is the three-part database.schema.table naming. I'm assuming you restored the backup to the same server, that you have a primary key on the table, and that Synergy_TESTDB is the restored database:
update target
set target.textfield1 = source.textfield1
from Synergy.dbo.Addresses target
join Synergy_TESTDB.dbo.Addresses source on target.PrimaryKeyCol = source.PrimaryKeyCol
where target.textfield1 IS NULL
update target
set target.textfield2 = source.textfield2
from Synergy.dbo.Addresses target
join Synergy_TESTDB.dbo.Addresses source on target.PrimaryKeyCol = source.PrimaryKeyCol
where target.textfield2 IS NULL
(Sure it could be done in a single update, but I'm trying to keep it simple.)
I strongly suggest you try in another test database first.
A good habit to get into is to use a pattern like this:
BEGIN TRANSACTION
-- Perform updates
-- Examine the results: select * from dbo.Blah ...
-- If results are wrong, we just rollback anyway
ROLLBACK
-- If results are what you want, uncomment the COMMIT and comment out the ROLLBACK
-- COMMIT TRANSACTION
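For completeness, the single-update variant mentioned above could look like this (equally untested; PrimaryKeyCol is still a placeholder for your actual key column):

update target
set target.textfield1 = COALESCE(target.textfield1, source.textfield1),
    target.textfield2 = COALESCE(target.textfield2, source.textfield2)
from Synergy.dbo.Addresses target
join Synergy_TESTDB.dbo.Addresses source on target.PrimaryKeyCol = source.PrimaryKeyCol
where target.textfield1 IS NULL OR target.textfield2 IS NULL

COALESCE keeps any non-NULL value already in the live table, so only the blanked-out fields are overwritten.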

How to delete a table after a period of inactivity?

For the purposes of my project I cannot use session-based temp tables. They need to be persistent but automatically deleted after a certain period of inactivity (no CRUD performed). Is this at all possible?
You can use SQL Server Agent to schedule a job that calls a stored procedure that does this work for you. (How to Schedule a Job?)
How do you identify the tables that have not been updated for a given amount of time?
Use this query:
SELECT OBJECT_NAME(object_id) AS TableName, last_user_update
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('DatabaseName')
  AND OBJECT_NAME(object_id) LIKE '%%' -- the name pattern for your tables goes here
  AND DATEDIFF(MINUTE, last_user_update, GETDATE()) > 10 -- last updated more than 10 minutes ago
Now that you have the tables to be deleted, you can use whatever logic you want to DROP them (cursor, WHILE loop, stored procedure).
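A sketch of that drop logic using a cursor (illustrative only; the 'TempWork%' name pattern and the 10-minute threshold are placeholders):

DECLARE @name sysname, @sql nvarchar(max);

DECLARE table_cursor CURSOR FOR
    SELECT OBJECT_NAME(object_id)
    FROM sys.dm_db_index_usage_stats
    WHERE database_id = DB_ID('DatabaseName')
      AND OBJECT_NAME(object_id) LIKE 'TempWork%'  -- placeholder pattern
      AND DATEDIFF(MINUTE, last_user_update, GETDATE()) > 10;

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DROP TABLE dbo.' + QUOTENAME(@name) + N';';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM table_cursor INTO @name;
END
CLOSE table_cursor;
DEALLOCATE table_cursor;

Note that sys.dm_db_index_usage_stats is cleared when the instance restarts, so a table can look "inactive" simply because the server was rebooted.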
Sure it is. Write it into your program layer.
AUTOMATICALLY - within SQL Server: no. Well, you could use the Agent to start a script regularly.
Tracking what "inactivity" means - your responsibility.
You need to save the modification date of this table somewhere (for example in the same table, or in another special table); then you can create a job that checks the last modification date and drops the table.
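A sketch of that bookkeeping approach (all object names hypothetical):

CREATE TABLE dbo.TableActivity (
    TableName sysname NOT NULL PRIMARY KEY,
    LastModified datetime NOT NULL
);

-- In each tracked table's INSERT/UPDATE/DELETE trigger (or in your data layer):
UPDATE dbo.TableActivity
SET LastModified = GETDATE()
WHERE TableName = N'MyTempTable';

-- In the scheduled job: drop anything untouched for a day
DECLARE @cutoff datetime = DATEADD(DAY, -1, GETDATE());
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + N'DROP TABLE dbo.' + QUOTENAME(TableName) + N'; '
FROM dbo.TableActivity
WHERE LastModified < @cutoff;

EXEC sp_executesql @sql;

DELETE FROM dbo.TableActivity
WHERE LastModified < @cutoff;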

Manipulate data when a job is running in sql server

I have a scheduled job in SQL Server 2012. The job runs every 20 minutes, executes a stored procedure, and fills a new table. Updating the table takes a long time each run, and I don't want an empty table while the job is running.
This is something I'm trying,
IF EXISTS(SELECT 1
          FROM msdb.dbo.sysjobs J
          JOIN msdb.dbo.sysjobactivity A
              ON A.job_id = J.job_id
          WHERE J.name = N'MyJobName'
            AND A.run_requested_date IS NOT NULL
            AND A.stop_execution_date IS NULL)
    PRINT 'The job is running!'
ELSE
    PRINT 'The job is not running.'
So, basically, when the job is running, instead of printing, I want to return data from a new table or something. The idea is that I do not want an empty data set even while the job is running.
Any ideas will be helpful.
Thanks.
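One way to act on that check, sketched under the assumption that the job fills dbo.MyTable and that you keep a copy of the last completed run in dbo.MyTable_Previous (both hypothetical names):

IF EXISTS(SELECT 1
          FROM msdb.dbo.sysjobs J
          JOIN msdb.dbo.sysjobactivity A
              ON A.job_id = J.job_id
          WHERE J.name = N'MyJobName'
            AND A.run_requested_date IS NOT NULL
            AND A.stop_execution_date IS NULL)
    SELECT * FROM dbo.MyTable_Previous;  -- job is running: serve the last completed snapshot
ELSE
    SELECT * FROM dbo.MyTable;           -- job is idle: serve the freshly filled table

An alternative that avoids the check entirely is to have the job fill a staging table and swap it in only when it is complete (for example with sp_rename, or by repointing a synonym), so readers never see an empty table.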

Can I lose data that is being written to the database during a cron job backup?

I am setting up a cron job that will run every 15 minutes; if there is new data in a particular table, it will back up that table, email it, and delete it.
Do I have to be concerned that data will be written to the database while the backup is running, so that it backs up "half" the data and then deletes the rest?
If your job operates as you have described, there is a risk that new data could be added to the table after the SELECT used to generate the email but before the DELETE.
The simplest way to prevent this might be to run the two statements inside a transaction, assuming the database engine you're using supports transactions.
Alternatively, some database engines can return the deleted rows from a DELETE statement as a single atomic operation - Postgres being one, with its RETURNING clause.
If that option isn't available to you, another solution would be to drive the DELETE using a high-water date/time or auto-incrementing identity column in the source table.
Implementing this might require a schema change to your source table.
Pseudo-code would be something like:
SELECT <variable> = max(<identity>)
FROM source_table
SELECT <columns>
FROM source_table
WHERE <identity> <= <variable>
DELETE source_table
WHERE <identity> <= <variable>
Any data added to source_table between the first SELECT and the DELETE will have a higher identity value than is stored in <variable>, so will not be removed.
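On SQL Server, that pseudo-code might translate to something like the following (illustrative only; source_table, the id identity column, and the column list are placeholders). Wrapping it in a transaction, as suggested above, keeps the SELECT and the DELETE consistent with each other:

DECLARE @maxid int;

BEGIN TRANSACTION;

SELECT @maxid = MAX(id) FROM source_table;

SELECT col1, col2            -- the columns you email
FROM source_table
WHERE id <= @maxid;

DELETE FROM source_table
WHERE id <= @maxid;

COMMIT TRANSACTION;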

How can I view full SQL Job History?

In SQL Server Management Studio, when I "View History" for a SQL Job, I'm only shown the last 50 executions of the Job.
How can I view a full log of every execution of a SQL Job since it was created on the server?
The SQL Server Job system limits the total number of job history entries both per job and over the whole system. This information is stored in the MSDB database.
Obviously you won't be able to go back and see information that has since been discarded, but you can change the SQL Server Agent properties and increase the number of entries that will be recorded from now on.
In the SQL Server Agent Properties:
Select the History page
Modify the 'Maximum job history log size (rows)' and 'Maximum job history rows per job' to suit, or change how historical job data is deleted based on its age.
It won't give you back your history, but it'll help with your future queries!
Job history is stored in a dedicated system database (msdb) in SQL Server itself. You can use SQL Server Profiler to intercept the SQL statements sent by SQL Server Management Studio and find out the names of the tables involved.
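For example, the retained history can be read directly from the msdb tables (a sketch using the documented sysjobhistory columns):

SELECT j.name AS job_name,
       h.step_id,
       h.run_date,    -- int, yyyymmdd
       h.run_time,    -- int, hhmmss
       h.run_status,  -- 0 = failed, 1 = succeeded, 2 = retry, 3 = canceled
       h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
ORDER BY h.instance_id DESC;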
Your outcome depends on a couple of things:
what you've set your "Limit size of job log history" and "Automatically remove agent history" settings to [right-click on SQL Server Agent | Properties | History], and
whether or not you are running a "History Cleanup" task in a Maintenance Plan (or manually, for that matter). The MP task runs the msdb.dbo.sp_purge_jobhistory stored procedure with an "oldest date" parameter that equates to the period you have selected.
You could use a temporal table to change the retention of the data, as described in "Persisting job history in Azure SQL Managed Instance":
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD StartTime DATETIME2 NOT NULL DEFAULT ('19000101 00:00:00.0000000')
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD EndTime DATETIME2 NOT NULL DEFAULT ('99991231 23:59:59.9999999')
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD PERIOD FOR SYSTEM_TIME (StartTime, EndTime)
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD CONSTRAINT PK_sysjobhistory PRIMARY KEY (instance_id, job_id, step_id)
ALTER TABLE [msdb].[dbo].[sysjobhistory]
SET(SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[sysjobhistoryall],
DATA_CONSISTENCY_CHECK = ON, HISTORY_RETENTION_PERIOD = 1 MONTH))
select * from msdb.dbo.sysjobhistoryall
This approach allows you to define the retention period as a length of time (here, 1 MONTH) instead of as a maximum number of rows per job / maximum job history log size (rows).