I am trying to create a SQL Server 2019 trigger that prevents anyone from updating, inserting or deleting after 17:00 on weekdays, or at any time on Saturdays and Sundays.
If, for example, I try to update/delete/insert during working hours (before 17:00), it reports that one row has been affected, but when I check, the row has not actually been updated (for actions after 17:00 or at weekends the restriction is working).
Can someone help me?
CREATE TRIGGER ALTAEMPLE3
ON EMPLE
INSTEAD OF INSERT, UPDATE, DELETE
AS
DECLARE @day varchar(50),
@hour int
BEGIN
SELECT @day = DATENAME(WEEKDAY, GETDATE())
SELECT @hour = DATENAME(HOUR, GETDATE())
IF (@hour > 17) OR @day IN ('Saturday', 'Sunday')
BEGIN
PRINT 'It is closed'
ROLLBACK
END
END
You seem to be using transactions and doing a ROLLBACK if the action is in the wrong time frame. Instead of using INSTEAD OF (which means the INSERT, UPDATE or DELETE is never performed at all), use AFTER, and then do a ROLLBACK if it is the wrong time, as you do now.
You should seriously consider moving this to your application software instead of putting it in your database.
As written, this means you are not able to correct data, move data to an archive, remove old data, and so on after 17:00 and at the weekend.
If you want it like that anyway, try a normal trigger with a THROW:
CREATE TRIGGER ALTAEMPLE3
ON EMPLE
for INSERT, UPDATE, DELETE
AS
BEGIN
set nocount on
DECLARE @day varchar(50),
@hour int
SELECT @day = DATENAME(WEEKDAY, GETDATE())
SELECT @hour = DATENAME(HOUR, GETDATE())
IF (@hour > 17) OR @day IN ('Saturday', 'Sunday')
BEGIN
;THROW 99001, 'It is closed', 1
END
END
Oh yes, before I forget:
this does make it possible to insert/update/delete on weekdays from midnight until 17:00.
If you want this restriction to cover working hours only, you should also add a start hour, as in the sketch below.
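A minimal sketch of that check, assuming the working day starts at 9:00 (the start hour is an assumption; adjust it to your schedule):
DECLARE @day varchar(50),
@hour int
SELECT @day = DATENAME(WEEKDAY, GETDATE())
SELECT @hour = DATEPART(HOUR, GETDATE())
-- block everything before 9:00, from 17:00 onward, and all weekend
IF (@hour < 9 OR @hour >= 17) OR @day IN ('Saturday', 'Sunday')
BEGIN
;THROW 99001, 'It is closed', 1
END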
We have a job with a couple of steps, and almost all of the steps use getdate(), but instead we want to get the date from a specific table and column. The table includes only two columns: status, which is always ready (it doesn't change), and statusdate, which is dynamic. The plan is to create a stored procedure and replace getdate() with that stored procedure.
How do I write the stored procedure? How do I declare a variable?
CREATE PROCEDURE SP_DATE
@StatusDate DATETIME
AS
BEGIN
SELECT StatusDate
FROM [DB_Name].[dbo].[Table_Name]
WHERE status = 'ready'
END
Thank you!
Your jobs use the getdate() function, so in order to replace it with a custom programmatic object you should use a function as well, not a stored procedure. With a function like this
CREATE FUNCTION dbo.StatusDate ()
RETURNS DATETIME
AS
BEGIN
RETURN (SELECT
StatusDate
FROM Table_Name
WHERE status = 'ready')
END
you can replace getdate() directly:
SELECT
id
FROM Your_Job_List yjl
WHERE yjl.aDate < dbo.StatusDate()--getdate()
Yet there are some questions about the design. The single biggest task of an RDBMS is joining tables, and perhaps a query similar to the next one might be better:
SELECT
id
FROM Your_Job_List yjl
JOIN Table_Name tn
ON yjl.aDate < tn.StatusDate
WHERE tn.status = 'ready'
CREATE PROCEDURE spRunNextDate
AS
BEGIN
--SET NOCOUNT ON
declare @runDate datetime
select @runDate = MIN(StatusDate)
from [DB_Name].[dbo].[Table_Name]
where [status] = 'ready'
IF (@runDate IS NULL)
BEGIN
PRINT 'NO Dates in Ready State.'
RETURN 0
END
PRINT 'Will Run Date of ' + CAST(@runDate as varchar(20))
-- Notice the MIN or MAX usage above to get only one date per "run"
END
GO
There are huge holes and questions raised in my presumptuous sp above, but it might get you thinking about why your question implies that there is no parameter. You are going to need a way to mark the day "done", or else it will be run over and over; see the sketch below.
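A minimal sketch of marking a date as done, assuming the status column is reused with an extra marker value ('done' is an assumed value, as is the place in the procedure where this would run):
-- inside spRunNextDate, after the date has been processed successfully
UPDATE [DB_Name].[dbo].[Table_Name]
SET [status] = 'done' -- 'done' is an assumed marker value
WHERE StatusDate = @runDate
AND [status] = 'ready'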
I'm new to SQL Server error handling, and my English isn't too clear, so I apologize in advance for any misunderstandings.
The problem is: I insert multiple records into a table. The table has an AFTER INSERT trigger, which processes the records one by one in a FETCH WHILE loop with a cursor. If any error occurs, everything rolls back. So if there is just one wrong field in the inserted records, I lose all of them. The insert rolls back as well, so I can't find the wrong record. I therefore need to handle the errors inside the cursor, so that only the wrong record is rolled back.
I made a test database with 3 tables:
tA
VarSmallint smallint
VarTinyint tinyint
String varchar(20)
tB
ID int (PK, identity)
Timestamp datetime (default: getdate())
VarSmallint smallint
VarTinyint tinyint
String varchar(20)
tC
ID int PK
Timestamp datetime
VarTinyint1 tinyint
VarTinyint2 tinyint
String varchar(10)
tA contains 3 records, one of which is wrong. I insert this content into tB.
tB has the trigger, and inserts the records into tC one by one.
tC has only tinyint columns, so inserting values greater than 255 is a problem. This is the point where the error occurs in the test!
My trigger is:
ALTER TRIGGER [dbo].[trg_tB]
ON [dbo].[tB]
AFTER INSERT
AS
BEGIN
IF @@ROWCOUNT = 0
RETURN;
SET NOCOUNT ON;
DECLARE
@ID AS int,
@Timestamp AS datetime,
@VarSmallint AS smallint,
@VarTinyint AS tinyint,
@String AS varchar(20)
DECLARE curNyers CURSOR DYNAMIC
FOR
SELECT
[ID], [Timestamp], [VarSmallint], [VarTinyint], [String]
FROM INSERTED
ORDER BY [ID]
OPEN curNyers
FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRY
BEGIN TRAN
INSERT INTO [dbo].[tC]([ID], [Timestamp], [VarTinyint1], [VarTinyint2], [String])
VALUES (@ID, @Timestamp, @VarSmallint, @VarTinyint, @String)
COMMIT TRAN
END TRY
BEGIN CATCH
ROLLBACK TRAN
INSERT INTO [dbo].[tErrorLog]([ErrorTime], [UserName], [ErrorNumber],
[ErrorSeverity], [ErrorState],
[ErrorProcedure], [ErrorLine],
[ErrorMessage], [RecordID])
VALUES (SYSDATETIME(), SUSER_NAME(), ERROR_NUMBER(),
ERROR_SEVERITY(), ERROR_STATE(),
ERROR_PROCEDURE(), ERROR_LINE(),
ERROR_MESSAGE(), @ID)
END CATCH
FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String
END
CLOSE curNyers
DEALLOCATE curNyers
END
If I insert 2 good records and 1 wrong one, everything rolls back and I get an error:
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
Please help me! How do I modify this trigger to work correctly?
If I insert the wrong record, I need:
All the inserted records in tB
All the good records in tC
Error logged in tErrorLog
Thanks!
You have TWO major disasters in your trigger:
do not use a cursor inside a trigger - that's just horrible! Triggers fire whenever a given operation happens - you have little control over when, and how many times, they fire. Therefore, in order not to compromise your system performance too much, triggers should be very small, fast and nimble - do not do any heavy lifting or extensive processing in a trigger. A cursor is anything but nimble and fast - it's a resource-hogging, processor-hogging, memory-leaking monster - AVOID cursors whenever you can, and most definitely inside a trigger! (And you don't need them in 99% of cases, anyway.)
You can rewrite your whole logic into this one single, fast, set-based statement:
ALTER TRIGGER [dbo].[trg_tB]
ON [dbo].[tB]
AFTER INSERT
AS
BEGIN
INSERT INTO [dbo].[tC]([ID], [Timestamp], [VarTinyint1], [VarTinyint2], [String])
SELECT
[ID], [Timestamp], [VarSmallint], [VarTinyint], [String]
FROM
INSERTED
END
Never call COMMIT TRAN inside a trigger. The trigger executes inside the context and transaction of the statement that caused it to fire - if everything is OK, just let the trigger finish, and the transaction will be committed just fine. If you need to abort, call ROLLBACK. But never ever call COMMIT TRAN in the middle of a trigger. Just don't.
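For completeness, a minimal sketch of the abort pattern inside a trigger, with no COMMIT anywhere (the guard condition is an assumption based on tC's tinyint limit of 255 mentioned above, and the error number and message are made up):
IF EXISTS (SELECT 1 FROM INSERTED WHERE [VarSmallint] < 0 OR [VarSmallint] > 255)
BEGIN
ROLLBACK
;THROW 99002, 'Value out of range for tC', 1
END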
I deleted the TRIGGER and copy-pasted its code into a STORED PROCEDURE.
Then I added a Status column to tB, with a default of 0.
1 means "Record processed OK", 2 means "Record processing fault".
I fill the cursor with WHERE Status = 0.
In the TRY section I update the status to 1; in the CATCH section I update it to 2.
I have no jobs, so I run the SP from the Windows scheduler, via a batch file with the SQLCMD command.
Now the processing works well; moreover, it worked on the first try. Thanks for the help! A sketch of the resulting procedure is below.
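For reference, a minimal sketch of that stored procedure, reconstructed from the description above (the procedure name and the LOCAL FAST_FORWARD cursor options are assumptions):
CREATE PROCEDURE dbo.spProcessNewTbRows
AS
BEGIN
SET NOCOUNT ON;
DECLARE
@ID AS int,
@Timestamp AS datetime,
@VarSmallint AS smallint,
@VarTinyint AS tinyint,
@String AS varchar(20)
-- pick up only rows that have not been processed yet (Status = 0 is the default)
DECLARE curNyers CURSOR LOCAL FAST_FORWARD
FOR
SELECT [ID], [Timestamp], [VarSmallint], [VarTinyint], [String]
FROM [dbo].[tB]
WHERE [Status] = 0
ORDER BY [ID]
OPEN curNyers
FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRY
INSERT INTO [dbo].[tC]([ID], [Timestamp], [VarTinyint1], [VarTinyint2], [String])
VALUES (@ID, @Timestamp, @VarSmallint, @VarTinyint, @String)
UPDATE [dbo].[tB] SET [Status] = 1 WHERE [ID] = @ID -- record processed OK
END TRY
BEGIN CATCH
UPDATE [dbo].[tB] SET [Status] = 2 WHERE [ID] = @ID -- record processing fault
INSERT INTO [dbo].[tErrorLog]([ErrorTime], [UserName], [ErrorNumber],
[ErrorSeverity], [ErrorState], [ErrorProcedure], [ErrorLine],
[ErrorMessage], [RecordID])
VALUES (SYSDATETIME(), SUSER_NAME(), ERROR_NUMBER(),
ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE(), ERROR_LINE(),
ERROR_MESSAGE(), @ID)
END CATCH
FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String
END
CLOSE curNyers
DEALLOCATE curNyers
END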
I have a stored procedure that takes a long time (around 5 minutes) to execute. This stored procedure fills up a table, and I retrieve the data from that table. I have created a job to execute the stored procedure every 15 minutes. But while the stored procedure is executing, my table is empty and the front end shows no results. My requirement is to show data at the front end at all times.
Is there a way to cache the stored procedure results and use them while the stored procedure executes?
Here is my stored procedure,
BEGIN
declare @day datetime
declare @maxdate datetime
set @maxdate = getdate()
set @day = Convert(Varchar(10),DATEADD(week, DATEDIFF(day, 0, getdate())/7, 0),110)
truncate table tblOpenTicketsPerDay
while @day <= @maxdate
begin
insert into tblOpenTicketsPerDay
select convert(varchar(20),datename(dw,@day)) day_name, count(*) Open_Tickets from
(select [status], createdate, closedate
FROM OPENROWSET('MSDASQL','DSN=SQLite','Select * from tickets') AS a
where createdate <= @day
except
select [status], createdate, closedate
FROM OPENROWSET('MSDASQL','DSN=SQLite','Select * from tickets') AS a
where closedate <= @day and [status] = 'closed') x
set @day = @day + 1
end
end
END
Any ideas will be helpful.
Thanks.
If I have understood correctly, then your main concern is: your stored procedure empties the table and then fills it up, and since that takes time, your application has no data to show.
In that case, you can have a secondary/auxiliary clone table, say tblOpenTicketsPerDay_clone, and have your stored procedure fill that table instead, like
insert into tblOpenTicketsPerDay_clone
select convert(varchar(20),datename(dw,@day)) day_name,
count(*) Open_Tickets from
That way your application will always have data to display, since the main table keeps its data. Once the clone table is done filling up, transfer the same data to the main table with:
delete from tblOpenTicketsPerDay;
insert into tblOpenTicketsPerDay
select * from tblOpenTicketsPerDay_clone;
No, but the problem is not caching; it is the approach to generating the data that is totally wrong.
Generate the new data into a temporary table, then MERGE the results (using the MERGE keyword) into the original table, as in the sketch below.
There is no sense in deleting the data first. That is a terrible design approach.
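A minimal sketch of that idea, assuming day_name identifies a row in tblOpenTicketsPerDay (an assumption based on the procedure above):
-- build the fresh results into a temp table first,
-- using the same loop and aggregation as in the procedure above
CREATE TABLE #fresh (day_name varchar(20), Open_Tickets int)
-- ... fill #fresh here instead of truncating tblOpenTicketsPerDay ...
-- then reconcile the live table in one statement
MERGE tblOpenTicketsPerDay AS target
USING #fresh AS source
ON target.day_name = source.day_name
WHEN MATCHED THEN
UPDATE SET target.Open_Tickets = source.Open_Tickets
WHEN NOT MATCHED BY TARGET THEN
INSERT (day_name, Open_Tickets) VALUES (source.day_name, source.Open_Tickets)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;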
This is my trigger in SQL Server. I want this trigger to fire automatically (i.e. daily) instead of on update...
create trigger trig_name
on tab_name
for update
as
begin
declare @id int,@innn int
declare @dif int
declare @inn int
set @id=(select whateverid from inserted)
set @inn = (select DATEDIFF(DAY,INSERTDT,UPDATEDDT) from tab_name where whateverid=@id)
set @innn =(select DATEDIFF(DAY,INSERTDT,GETDATE()) from tab_name where whateverid=@id)
set @dif = @inn-@innn
update tab_name set due=@dif from tab_name where whateverid= @id
end
Create a new SQL Agent Job, and add a Transact-SQL step:
update tab_name
set due = DATEDIFF(DAY, INSERTDT, UPDATEDDT) - DATEDIFF(DAY, INSERTDT, GETDATE())
Obviously, unlike the trigger, you can't restrict this to rows that have just been updated, so it will update all 'due' fields based on the time it runs.
I would consider creating a stored procedure and getting the job to run that instead. It's easier to manage, and less likely to get missed in the future; see the sketch below.
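A minimal sketch of that, with an assumed procedure name:
CREATE PROCEDURE dbo.spUpdateDue -- the name is an assumption
AS
BEGIN
SET NOCOUNT ON;
UPDATE tab_name
SET due = DATEDIFF(DAY, INSERTDT, UPDATEDDT) - DATEDIFF(DAY, INSERTDT, GETDATE())
END
GO
The job's Transact-SQL step then becomes a single line: EXEC dbo.spUpdateDue;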
I am running the following stored procedure to delete a large number of records. I understand that the DELETE statement writes to the transaction log and deleting many rows will make the log grow.
I have looked into other options, such as creating a table of the records to keep and then truncating the source, but that method will not work for me.
How can I make my stored procedure below more efficient while making sure that I keep the transaction log from growing unnecessarily?
CREATE PROCEDURE [dbo].[ClearLog]
(
@Age int = 30
)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- DELETE ERRORLOG
WHILE EXISTS ( SELECT [LogId] FROM [dbo].[Error_Log] WHERE DATEDIFF( dd, [TimeStamp], GETDATE() ) > @Age )
BEGIN
SET ROWCOUNT 10000
DELETE [dbo].[Error_Log] WHERE DATEDIFF( dd, [TimeStamp], GETDATE() ) > @Age
WAITFOR DELAY '00:00:01'
SET ROWCOUNT 0
END
END
Here is how I would do it:
CREATE PROCEDURE [dbo].[ClearLog] (
@Age int = 30)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @d DATETIME
, @batch INT;
SET @batch = 10000;
SET @d = DATEADD( dd, -@Age, GETDATE() )
WHILE (1=1)
BEGIN
DELETE TOP (@batch) [dbo].[Error_Log]
WHERE [Timestamp] < @d;
IF (0 = @@ROWCOUNT)
BREAK
END
END
Make the Timestamp comparison SARGable
Capture GETDATE() once at the start of the batch to produce a consistent run (otherwise the loop can keep going indefinitely as new records 'age' into range while the old ones are being deleted)
Use TOP instead of SET ROWCOUNT (deprecated: "Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in the next release of SQL Server.")
Check @@ROWCOUNT to break the loop instead of running a redundant SELECT
Assuming you have the option of rebuilding the error log table on a partition scheme, one option would be to partition the table on date and swap out the partitions; see the sketch below. Do a Google search for 'alter table switch partition' to dig a bit further.
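A minimal sketch of the switch step, assuming Error_Log has already been rebuilt on a partition scheme keyed on date, and Error_Log_staging is an empty table with an identical structure on the same filegroup (all of that setup is assumed and not shown):
-- move the oldest partition out of Error_Log as a metadata-only operation
ALTER TABLE dbo.Error_Log SWITCH PARTITION 1 TO dbo.Error_Log_staging;
-- then discard the staged rows cheaply
TRUNCATE TABLE dbo.Error_Log_staging;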
How about you run it more often, and delete fewer rows each time? Run this every 30 minutes:
CREATE PROCEDURE [dbo].[ClearLog]
(
@Age int = 30
)
AS
BEGIN
SET NOCOUNT ON;
SET ROWCOUNT 10000 --I assume you are on an old version of SQL Server and can't use TOP
DELETE dbo.Error_Log Where Timestamp>GETDATE()-@Age
WAITFOR DELAY '00:00:01' --why???
SET ROWCOUNT 0
END
Because GETDATE()-@Age keeps the time component (it does not truncate to midnight), each run will only delete roughly 30 minutes' worth of data.
If your database is in FULL recovery mode, the only way to minimize the impact of your delete statements is to "space them out" -- only delete so many during a "transaction interval". For example, if you do t-log backups every hour, only delete, say, 20,000 rows per hour. That may not drop all you need all at once, but will things even out after 24 hours, or after a week?
If your database is in SIMPLE or BULK_LOGGED mode, breaking the deletes into chunks should do it. But since you're already doing that, I'd have to guess your database is in FULL recovery mode. (That, or the connection calling the procedure may be part of a transaction.)
A solution I have used in the past was to temporarily set the recovery model to "Bulk Logged", then back to "Full" at the end of the stored procedure:
DECLARE @dbName NVARCHAR(128);
SELECT @dbName = DB_NAME();
EXEC('ALTER DATABASE ' + @dbName + ' SET RECOVERY BULK_LOGGED')
WHILE EXISTS (...)
BEGIN
-- Delete a batch of rows, then WAITFOR here
END
EXEC('ALTER DATABASE ' + @dbName + ' SET RECOVERY FULL')
This will significantly reduce the transaction log consumption for large batches.
I don't like that it sets the recovery model for the whole database (not just for this session), but it's the best solution I could find.