How can I view full SQL Job History? - sql-server

In SQL Server Management Studio, when I "View History" for a SQL Job, I'm only shown the last 50 executions of the Job.
How can I view a full log of every execution of a SQL Job since it was created on the server?

The SQL Server Job system limits the total number of job history entries both per job and over the whole system. This information is stored in the MSDB database.
Obviously you won't be able to go back and see information that has since been discarded, but you can change the SQL Server Agent properties and increase the number of entries that will be recorded from now on.
In the SQL Server Agent Properties:
Select the History page
Modify the 'Maximum job history log size (rows)' and 'Maximum job history rows per job' to suit, or change how historical job data is deleted based on its age.
It won't give you back your history, but it'll help with your future queries!
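If you need to query whatever history is still retained, the underlying tables live in msdb. A minimal sketch (standard msdb tables; the job name filter is a placeholder):
SELECT j.name AS job_name,
       h.step_id,
       h.step_name,
       h.run_status,   -- 0 = failed, 1 = succeeded, 2 = retry, 3 = canceled
       h.run_date,     -- int, yyyymmdd
       h.run_time,     -- int, hhmmss
       h.message
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h
    ON h.job_id = j.job_id
WHERE j.name = N'YourJobName'  -- replace with the job you are interested in
ORDER BY h.run_date DESC, h.run_time DESC;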

Job history is stored in the msdb system database (for example in dbo.sysjobs and dbo.sysjobhistory). If you want to see exactly which tables Management Studio reads, you can use SQL Server Profiler to intercept the SQL statements it sends.

Your outcome depends on a couple of things:
what you've set the "Limit size of job history log" and "Automatically remove agent history" settings to [right-click SQL Server Agent | Properties | History], and
whether or not you are running a "History Cleanup" task in a Maintenance Plan (or manually, for that matter). The MP task runs the msdb.dbo.sp_purge_jobhistory stored procedure with an "oldest date" parameter that corresponds to the period you have selected.
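For reference, you can also run that purge manually; a minimal sketch with an @oldest_date of 90 days ago (the job name is a placeholder, and omitting @job_name purges history for all jobs):
DECLARE @oldest_date DATETIME = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_purge_jobhistory
    @job_name = N'YourJobName',
    @oldest_date = @oldest_date;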

You could use a temporal table to change the retention of the data, as described in "Persisting job history in Azure SQL Managed Instance":
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD StartTime DATETIME2 NOT NULL DEFAULT ('19000101 00:00:00.0000000');
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD EndTime DATETIME2 NOT NULL DEFAULT ('99991231 23:59:59.9999999');
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD PERIOD FOR SYSTEM_TIME (StartTime, EndTime);
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD CONSTRAINT PK_sysjobhistory PRIMARY KEY (instance_id, job_id, step_id);
ALTER TABLE [msdb].[dbo].[sysjobhistory]
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[sysjobhistoryall],
DATA_CONSISTENCY_CHECK = ON, HISTORY_RETENTION_PERIOD = 1 MONTH));
SELECT * FROM msdb.dbo.sysjobhistoryall;
This approach lets you define the retention period as a length of time (here 1 MONTH) instead of as a maximum number of rows per job or a maximum job history log size (rows).

Related

SQL Server: how to automatically update data in a table after a certain time interval

I have an OrderProduct table with these columns and some data:
-order_number : ORDER01
-customer_name : Jackie
-order_status : Wait For Payment
-datetime_order_status : 25-01-2020 15:30:00
-datetime_transfer_notify : NULL
A customer needs to send a transfer notification in my order system within 24 hours; if they don't, SQL Server should automatically update the 'order_status' column from 'Wait For Payment' to 'Cancel'.
How can I do that?
I believe the easiest way to do this is with a SQL Agent job (MS Docs). This is very dependent on the architecture and size of your databases and tables, but it would definitely get the job done. Depending on how sensitive the business is to being up to date, you could set the job to run every 1 minute, every 5 minutes, or any other time interval you would like. If I was going to do this, I would use a query along the lines of the following:
UPDATE OrderProduct SET order_status = 'Cancel' WHERE datetime_order_status < DATEADD(DAY, -1, GETDATE()) AND order_status = 'Wait For Payment'
Along with this, I would use something like SQL Server Management Studio to create a SQL Agent job on that server that ran at the interval you'd like, similar to this (Stack Overflow). Here (Stack Exchange DBA) is a very similar question to yours for MySQL as added reference.
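If you prefer to script the job instead of clicking through Management Studio, something along these lines should work; the job, schedule, and database names are just examples, and the schedule runs the step every 5 minutes:
USE msdb;
GO
EXEC dbo.sp_add_job
    @job_name = N'Cancel unpaid orders';
EXEC dbo.sp_add_jobstep
    @job_name = N'Cancel unpaid orders',
    @step_name = N'Cancel orders waiting more than 24 hours',
    @subsystem = N'TSQL',
    @database_name = N'YourDatabase',   -- replace with your database
    @command = N'UPDATE OrderProduct
                 SET order_status = ''Cancel''
                 WHERE order_status = ''Wait For Payment''
                   AND datetime_order_status < DATEADD(DAY, -1, GETDATE());';
EXEC dbo.sp_add_schedule
    @schedule_name = N'Every 5 minutes',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,     -- repeat in units of minutes
    @freq_subday_interval = 5; -- every 5 minutes
EXEC dbo.sp_attach_schedule
    @job_name = N'Cancel unpaid orders',
    @schedule_name = N'Every 5 minutes';
EXEC dbo.sp_add_jobserver
    @job_name = N'Cancel unpaid orders'; -- defaults to the local server
GO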

Copy data from a SQL Server table into a historic table and add a timestamp of the time copied?

I'm trying to work out a specific way to copy all data from a particular table (let's call it opportunities) and copy it into a new table, with a timestamp of the date copied into the new table, for the sole purpose of generating historic data into a database hosted in Azure Data Warehousing.
What's the best way to do this? So far I've gone and created a duplicate table in the data warehouse, with an additional column called datecopied
The query I've started using is:
SELECT OppName, Oppvalue
INTO Hst_Opportunities
FROM dbo.opportunities
I am not really sure where to go from here!
SELECT INTO is not supported in Azure SQL Data Warehouse at this time. You should familiarise yourself with the CREATE TABLE AS or CTAS syntax, which is the equivalent in Azure DW.
If you want to fix the copy date, simply assign it to a variable prior to the CTAS, something like this:
DECLARE @copyDate DATETIME2 = CURRENT_TIMESTAMP;
CREATE TABLE dbo.Hst_Opportunities
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = ROUND_ROBIN
)
AS
SELECT OppName, Oppvalue, @copyDate AS copyDate
FROM dbo.opportunities;
I should also mention that the use case for Azure DW is millions and billions of rows with terabytes of data. It doesn't tend to do well at low volume, so consider whether you need this product, a traditional SQL Server 2016 install, or Azure SQL Database.
You can also write an INSERT INTO ... SELECT query like the one below, which works with SQL Server 2008+ and Azure SQL Data Warehouse:
INSERT INTO Hst_Opportunities
SELECT OppName, Oppvalue, DATEDIFF(SECOND,{d '1970-01-01'},current_timestamp)
FROM dbo.opportunities
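If the extra column on the history table is a datetime (the question calls it datecopied), a variant with an explicit column list might look like this; the column names are assumed from the question:
INSERT INTO dbo.Hst_Opportunities (OppName, Oppvalue, datecopied)
SELECT OppName, Oppvalue, CURRENT_TIMESTAMP
FROM dbo.opportunities;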

How to delete a table after a period of inactivity?

For the purpose of my project I cannot use session based temp tables. They need to be persistent but automatically deleted after a certain period of inactivity (no CRUD performed). Is this at all possible?
You can use the SQL Server Agent to Schedule a Job that calls a Stored Procedure that does this work for you. (How to Schedule a Job?)
How do you identify the tables that have not updated since X amount of time ?
Use this Query:
SELECT OBJECT_NAME(object_id) AS TableName, last_user_update
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('DatabaseName')
AND OBJECT_NAME(object_id) LIKE '%%' -- pattern matching the names of your tables
AND DATEDIFF(MINUTE, last_user_update, GETDATE()) > 10 -- last updated more than 10 minutes ago
Now that you have the tables to be deleted, you can use whatever logic you want to DROP them (Cursor, While, Procedure)
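A minimal sketch of that clean-up step, driving a cursor from the query above (the 'tmp_%' name pattern and the 10-minute threshold are placeholders):
DECLARE @tableName SYSNAME, @sql NVARCHAR(MAX);
DECLARE drop_cursor CURSOR FOR
    SELECT DISTINCT OBJECT_NAME(object_id)
    FROM sys.dm_db_index_usage_stats
    WHERE database_id = DB_ID()
      AND OBJECTPROPERTY(object_id, 'IsUserTable') = 1
      AND OBJECT_NAME(object_id) LIKE 'tmp_%'                  -- pattern for the tables to clean up
      AND DATEDIFF(MINUTE, last_user_update, GETDATE()) > 10;  -- inactive for more than 10 minutes
OPEN drop_cursor;
FETCH NEXT FROM drop_cursor INTO @tableName;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DROP TABLE ' + QUOTENAME(@tableName);
    EXEC sp_executesql @sql;
    FETCH NEXT FROM drop_cursor INTO @tableName;
END
CLOSE drop_cursor;
DEALLOCATE drop_cursor;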
Sure it is - write it into your application layer.
Automatically, within SQL Server: not directly. Well, you could use the Agent to start a script regularly.
Tracking what "inactivity" means is your responsibility.
You need to save the modification date of this table somewhere (for example in the same table or in a separate tracking table), and then you can create a job which checks the last modification date and drops the table.
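A rough sketch of that idea, with a hypothetical tracking table that the application (or a trigger) updates every time the table is touched:
-- Hypothetical tracking table: one row per monitored table
CREATE TABLE dbo.TableActivity (
    table_name SYSNAME PRIMARY KEY,
    last_modified DATETIME NOT NULL
);
-- In the scheduled job: find tables idle for more than a day,
-- then drop them with the same dynamic-SQL pattern shown above
SELECT table_name
FROM dbo.TableActivity
WHERE last_modified < DATEADD(DAY, -1, GETDATE());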

SQL Azure raise 40197 error (level 20, state 4, code 9002)

I have a table in a SQL Azure DB (S1, 250 GB limit) with 47,000,000 records (3.5 GB total). I tried to add a new calculated column, but after 1 hour of script execution I get: "The service has encountered an error processing your request. Please try again. Error code 9002". After several tries, I get the same result.
Script for simple table:
create table dbo.works (
work_id int not null identity(1,1) constraint PK_WORKS primary key,
client_id int null constraint FK_user_works_clients2 REFERENCES dbo.clients(client_id),
login_id int not null constraint FK_user_works_logins2 REFERENCES dbo.logins(login_id),
start_time datetime not null,
end_time datetime not null,
caption varchar(1000) null)
Script for alter:
alter table user_works add delta_secs as datediff(second, start_time, end_time) PERSISTED
Error message:
9002: SQL Server (local) - error growing the transaction log file.
But in Azure I cannot manage this parameter.
How can I change the structure of populated tables?
Azure SQL Database has a 2GB transaction size limit which you are running into. For schema changes like yours you can create a new table with the new schema and copy the data in batches into this new table.
That said the limit has been removed in the latest service version V12. You might want to consider upgrading to avoid having to implement a workaround.
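A rough outline of that batch-copy workaround, assuming a new table dbo.works_new that was created with the new computed column and the same identity key (the table name, key ranges, and batch size are placeholders):
DECLARE @batchSize INT = 500000;
DECLARE @lastId INT = 0, @maxId INT;
SELECT @maxId = MAX(work_id) FROM dbo.works;
SET IDENTITY_INSERT dbo.works_new ON;  -- keep the original work_id values
WHILE @lastId < @maxId
BEGIN
    -- each batch is its own transaction, so the log never has to hold the whole copy
    INSERT INTO dbo.works_new (work_id, client_id, login_id, start_time, end_time, caption)
    SELECT work_id, client_id, login_id, start_time, end_time, caption
    FROM dbo.works
    WHERE work_id > @lastId AND work_id <= @lastId + @batchSize;
    SET @lastId = @lastId + @batchSize;
END
SET IDENTITY_INSERT dbo.works_new OFF;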
Look at sys.database_files by connecting to the user database. If the log file's current size has reached its max size then you have hit this. At that point you either have to kill the active transactions or move to a higher tier (if reducing the amount of data you modify in a single transaction is not possible).
You can also get the same by doing:
DBCC SQLPERF(LOGSPACE);
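For example, to check the log file size against its cap (size and max_size are reported in 8 KB pages; a max_size of -1 means unrestricted growth):
SELECT name, type_desc,
       size * 8 / 1024 AS size_mb,
       max_size * 8 / 1024 AS max_size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';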
A couple of ideas:
1) Try creating an empty column for delta_secs, then filling in the data separately. If this still results in transaction log errors, try updating part of the data at a time with a WHERE clause.
2) Don't add a column. Instead, add a view that exposes delta_secs as a calculated field (see the sketch below). Since it is a derived value, this is probably a better approach anyway.
https://msdn.microsoft.com/en-us/library/ms187956.aspx
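As a sketch of option 2, a view leaves the 47 million rows untouched; the view name is made up, and the column list follows the create script from the question:
CREATE VIEW dbo.works_with_delta
AS
SELECT work_id, client_id, login_id, start_time, end_time, caption,
       DATEDIFF(SECOND, start_time, end_time) AS delta_secs
FROM dbo.works;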

Trigger that only runs once per day

This trigger backs up data from dbo.Node to dbo.NodeArchive. While backups are important, I only need to do this once per day. Note that there is a field called dbo.NodeArchive.versionDate (smalldatetime).
CREATE TRIGGER [dbo].[Node_update]
ON [dbo].[Node]
for UPDATE
AS
BEGIN
INSERT INTO dbo.NodeArchive ([NodeID]
,[ParentNodeID]
,[Slug]
,[xmlTitle]
...
,[ModifyBy]
,[ModifyDate]
,[CreateBy]
,[CreateDate])
SELECT [deleted].[NodeID]
,[deleted].[ParentNodeID]
,[deleted].[Slug]
,[deleted].[xmlTitle]
...
,[deleted].[ModifyBy]
,[deleted].[ModifyDate]
,[deleted].[CreateBy]
,[deleted].[CreateDate]
FROM [deleted] LEFT JOIN dbo.Node
ON [deleted].NodeID = dbo.Node.NodeID
WHERE deleted.ModifyDate <> dbo.Node.ModifyDate
END
GO
I am looking to backup changes, but never more than one backup version per day. If there is no change, there is no backup.
That's not a trigger anymore - that'll be a scheduled job. Triggers by their very definition execute whenever a given operation (INSERT, DELETE, UPDATE) happens.
Use the SQL Server Agent facility to schedule that T-SQL code to run once per day.
Read all about SQL Server Agent Jobs in the SQL Server Books Online on MSDN
Update: so if I understand correctly, you want to have an UPDATE trigger - but that trigger would only record the NodeIDs that were affected, into a "these nodes need to be backed up at night" sort of table. Then, at night, a SQL Agent job would run, scan that work table, and for all NodeID values stored in there execute the T-SQL statement that copies their data into the NodeArchive table.
With this approach, if the node with NodeID = 42 changes ten times, you'll still only have a single entry NodeID = 42 in your work table, and the nightly backup job would then copy that node only once into NodeArchive.
With this approach, you can decouple the actual copying (which might take time) from the update process. The UPDATE trigger only records which NodeID rows need processing - the actual processing then happens sometime later, at an off-peak hour, without disturbing users of your system.
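A rough sketch of that decoupled design; the queue table name is made up, and the column list in the nightly copy is abbreviated the same way as the original trigger:
-- Work table: which nodes changed since the last nightly run
CREATE TABLE dbo.NodeBackupQueue (
    NodeID INT NOT NULL PRIMARY KEY
);
GO
-- Lightweight trigger (drop the original dbo.Node_update trigger first):
-- just remember the affected NodeIDs
CREATE TRIGGER dbo.Node_update
ON dbo.Node
FOR UPDATE
AS
BEGIN
    INSERT INTO dbo.NodeBackupQueue (NodeID)
    SELECT d.NodeID
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM dbo.NodeBackupQueue AS q WHERE q.NodeID = d.NodeID);
END
GO
-- Nightly SQL Agent job step: archive each changed node once, then clear the queue
INSERT INTO dbo.NodeArchive (NodeID, ParentNodeID, Slug, xmlTitle /* ... */)
SELECT n.NodeID, n.ParentNodeID, n.Slug, n.xmlTitle /* ... */
FROM dbo.Node AS n
JOIN dbo.NodeBackupQueue AS q ON q.NodeID = n.NodeID;
DELETE FROM dbo.NodeBackupQueue;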
