I have a table dbo.Logs in SQL Server in which I store application logs; more than 100k rows are generated in this table daily. So I have written a job to delete data from the table periodically.
I am simply using a SQL DELETE statement to remove data from the Logs table. When there are too many rows in the table (approx. 2,000,000), the DELETE statement times out in SQL Server Management Studio.
Does anyone have an idea how to fix this? I tried increasing the query timeout, but that did not help.
Delete them in chunks that don't cause a timeout.
For example:
SET NOCOUNT ON;
DECLARE @r INT = 1;
WHILE @r > 0
BEGIN
    BEGIN TRANSACTION;

    DELETE TOP (100000)
    FROM dbo.Logs
    WHERE CreatedAt < DATEADD(month, -3, DATEADD(day, 1, EOMONTH(GETDATE())));

    SET @r = @@ROWCOUNT;

    COMMIT TRANSACTION;
END
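If each batch still has to scan the whole table to find the old rows, the loop can stay slow even with small batches. A minimal supporting index sketch (the index name is my own, and I'm assuming CreatedAt is the only column you filter on):

CREATE INDEX IX_Logs_CreatedAt
    ON dbo.Logs (CreatedAt);

With such an index each DELETE TOP (100000) can seek straight to the oldest rows instead of scanning dbo.Logs.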
I am trying to insert almost 17,500,000 rows into 8 tables.
I have a stored procedure for that, which opens a transaction at the start and commits it at the end.
Error: The transaction log for database 'Database' is full due to 'ACTIVE_TRANSACTION'.
Note: I want to keep everything in one transaction; it is an automated process that will run on the database every month.
CREATE OR ALTER PROCEDURE [dbo].[InsertInMainTbls]
AS
BEGIN
    PRINT('STARTED [InsertInMainTbls]')

    DECLARE @NoRows INT
    DECLARE @maxLoop INT
    DECLARE @isSuccess BIT = 1

    BEGIN TRY
        BEGIN TRAN

        --1st table
        SET @NoRows = 1
        SELECT @maxLoop = (MAX([col1]) / 1000) + 1 FROM ProcessTbl
        SELECT 'loop==' + CAST(@maxLoop AS varchar)

        WHILE (@NoRows <= @maxLoop)
        BEGIN
            INSERT INTO MainTbl WITH(TABLOCK)
                   (col1, col2, col3....col40)
            SELECT  val1, val2, val3....val40
            FROM    ProcessTbl
            WHERE   [col1] BETWEEN (@NoRows * 1000) - 1000
                               AND (@NoRows * 1000) - 1

            SET @NoRows = @NoRows + 1;
        END

        --2nd table
        .
        .
        .
        --8th table

        SET @isSuccess = 1;
        COMMIT TRAN
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE();
        SELECT ERROR_MESSAGE() 'ErrorMsg';
        SET @isSuccess = 0;
        ROLLBACK TRAN
    END CATCH
END
Frankly, such a huge transaction makes little sense when you could do a manual rollback by tagging the rows with something like a timestamp or a GUID. If you keep it, the transaction log file needs enough space to hold every row of every insert from the first to the last, plus all the transactions other users run at the same time. There are several ways to address the problem (see the sketch after this list):
1) enlarge your transaction log file
2) add one or more complementary log files
3) remove or reduce the transaction scope
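As a rough illustration of options 1 and 2 (the logical file names, sizes, and path below are placeholders, not values from the question):

-- Option 1: grow the existing log file
ALTER DATABASE [Database]
    MODIFY FILE (NAME = Database_log, SIZE = 50GB, FILEGROWTH = 1GB);

-- Option 2: add a complementary log file on another drive
ALTER DATABASE [Database]
    ADD LOG FILE (NAME = Database_log2,
                  FILENAME = N'E:\SQLLogs\Database_log2.ldf',
                  SIZE = 20GB, FILEGROWTH = 1GB);

Option 3 usually means committing after each of the 8 tables (or even after each WHILE iteration) instead of wrapping everything in one transaction, at the cost of the all-or-nothing rollback the question asks for.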
I have been trying to update a column in a table and I am getting the below error:
The transaction log for database 'STAGING' is full due to 'ACTIVE_TRANSACTION'.
I am trying to run the below statement :
UPDATE [STAGING].[dbo].[Stg_Encounter_Alias]
SET
[valid_flag] = 1
FROM [Stg_Encounter_Alias] Stg_ea
where [ACTIVE_IND] = 1
and [END_EFFECTIVE_DT_TM] > convert(date,GETDATE())
My table has approx. 18 million rows, and the above update will modify all of them. The table size is 2.5 GB, and the database is in SIMPLE recovery mode.
This is something that I'll be doing very frequently on different tables. How can I manage this?
I have tried changing the log file size to unlimited in the database properties, but it goes back to the default.
Can anyone tell me an efficient way to handle this scenario?
If I run it in batches:
begin
    DECLARE @COUNT INT
    SET @COUNT = 0

    SET NOCOUNT ON;
    DECLARE @Rows INT,
            @BatchSize INT; -- keep below 5000 to be safe
    SET @BatchSize = 2000;
    SET @Rows = @BatchSize; -- initialize just to enter the loop
    WHILE (@Rows = @BatchSize)
    BEGIN
        UPDATE TOP (@BatchSize) [STAGING].[dbo].[Stg_Encounter_Alias]
        SET [valid_flag] = 1
        FROM [Stg_Encounter_Alias] Stg_ea
        WHERE [ACTIVE_IND] = 1
          AND [END_EFFECTIVE_DT_TM] > CONVERT(date, GETDATE())

        SET @Rows = @@ROWCOUNT;
    END;
end
You are performing your update in a single transaction, and this causes the transaction log to grow very large.
Instead, perform your updates in batches, say 50K - 100K at a time.
Do you have an index on END_EFFECTIVE_DT_TM that includes ACTIVE_IND and valid_flag? That would help performance.
CREATE INDEX NC_Stg_Encounter_Alias_END_EFFECTIVE_DT_TM_I_
ON [dbo].[Stg_Encounter_Alias](END_EFFECTIVE_DT_TM)
INCLUDE (valid_flag)
WHERE ([ACTIVE_IND] = 1);
Another thing that can help performance drastically, if you are running Enterprise Edition or SQL Server 2016 SP1 or later (any edition), is turning on DATA_COMPRESSION = PAGE for the table and its indexes.
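A minimal sketch of what that looks like (table name taken from the question; check edition support and test the rebuild window first):

-- compress the base table (heap or clustered index)
ALTER TABLE [dbo].[Stg_Encounter_Alias]
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- compress the nonclustered indexes as well
ALTER INDEX ALL ON [dbo].[Stg_Encounter_Alias]
    REBUILD WITH (DATA_COMPRESSION = PAGE);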
I have a procedure which runs every midnight to delete expired rows from the database. While the job runs, my application gets deadlock errors when it tries to insert into or update the table. What is the best way to avoid the deadlocks?
I have millions of records to delete every night, which I delete in batches of 1000 rows at a time.
DECLARE @batch INT = 1000;
DECLARE @rowCount INT = @batch; -- initialize just to enter the loop

WHILE @rowCount > 0
BEGIN
    BEGIN TRY
        DELETE TOP (@batch) FROM application
        WHERE job_id IN (SELECT ID FROM JOB
                         WHERE process_time < @expiryDate)
           OR prev_job_id IN (SELECT ID FROM [dbo].JOB
                              WHERE process_time < @expiryDate)

        SELECT @rowCount = @@ROWCOUNT
    END TRY
    BEGIN CATCH
        -- some code here...
    END CATCH
END
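One common pattern for that CATCH block (a hedged sketch, not the poster's actual code) is to treat a deadlock as retryable: check for error 1205, back off briefly, and let the WHILE loop re-run the same batch; any other error is re-raised. Smaller batches and an index on JOB.process_time also shorten the locks held per iteration.

    BEGIN CATCH
        IF ERROR_NUMBER() = 1205            -- chosen as deadlock victim
        BEGIN
            WAITFOR DELAY '00:00:05';       -- back off, then the loop retries the batch
        END
        ELSE
        BEGIN
            THROW;                          -- anything else: re-raise (SQL Server 2012+)
        END
    END CATCH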
I have a server where, whenever a row is inserted into a specific table, I also want to write it to a remote server; but if the remote server is unavailable, I want the trigger to write the row to another temp table on the local server instead.
Example code that writes to the remote server:
-- REMOTO is the remote server
CREATE TRIGGER insertin
ON mangas
AFTER INSERT
AS
BEGIN
    DECLARE @serie varchar(max), @capitulo int

    SELECT @serie = serie, @capitulo = capitulo
    FROM inserted

    INSERT INTO [REMOTO].[Gest].[dbo].[MARCA] (Codigo, Descripcion)
    VALUES (@capitulo, @serie)
END
I think I need something like TRY...CATCH or similar, but I don't know how to do it.
Thanks for any answers.
If you are using SQL Server 2005 or later, you can put a TRY...CATCH block around your INSERT statement. See MSDN: http://msdn.microsoft.com/en-US/library/ms175976(v=sql.90).aspx
BEGIN TRY
    INSERT INTO [REMOTO].[Gest].[dbo].[MARCA]
        (Codigo, Descripcion)
    VALUES
        (@capitulo, @serie)
END TRY
BEGIN CATCH
    INSERT INTO [dbo].[MyTemporaryTable]
        (Codigo, Descripcion)
    VALUES
        (@capitulo, @serie)
END CATCH
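One caveat with the scalar variables in the trigger: the inserted pseudo-table can contain several rows when a multi-row INSERT fires it. A hedged, set-based variant of the same idea (reusing the fallback table name assumed above) would be:

BEGIN TRY
    INSERT INTO [REMOTO].[Gest].[dbo].[MARCA] (Codigo, Descripcion)
    SELECT capitulo, serie
    FROM   inserted;
END TRY
BEGIN CATCH
    -- remote insert failed: keep the rows locally instead
    INSERT INTO [dbo].[MyTemporaryTable] (Codigo, Descripcion)
    SELECT capitulo, serie
    FROM   inserted;
END CATCH

Be aware that some linked-server connection failures can raise errors severe enough to abort the batch before the CATCH block runs, so test the "remote server down" case explicitly.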
I went with the code below and it works well.
If anyone has a better solution, please post it; I'm learning T-SQL now, so any advice is welcome.
begin
    declare
        @serie varchar(max),
        @capitulo int,
        @maxMarca int

    select @serie = serie
    from inserted

    select @maxMarca = max(Codigo) from [REMOTO].[Gest].[dbo].[MARCA]
    set @maxMarca = @maxMarca + 1

    commit -- complete the transaction of the insert that fired this trigger

    begin TRANSACTION
    BEGIN TRY
        INSERT INTO [REMOTO].[Gest].[dbo].[MARCA] (Codigo, Descripcion) VALUES (@maxMarca, @serie)
        commit transaction -- save and finish if the remote server is available
    END TRY
    BEGIN CATCH
        IF @@trancount > 0
        begin
            rollback transaction -- the remote insert is rolled back
            INSERT INTO [mangas].[dbo].[mangasTemp] VALUES (@maxMarca, @serie)
            commit transaction -- save to the local temp table instead
        end
    END CATCH
end
go
I am running the following stored procedure to delete a large number of records. I understand that the DELETE statement writes to the transaction log, and deleting many rows will make the log grow.
I have looked into the alternative of copying the records to keep into a new table and then truncating the source, but that method will not work for me.
How can I make my stored procedure below more efficient while making sure that I keep the transaction log from growing unnecessarily?
CREATE PROCEDURE [dbo].[ClearLog]
(
    @Age int = 30
)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- DELETE ERRORLOG
    WHILE EXISTS ( SELECT [LogId] FROM [dbo].[Error_Log] WHERE DATEDIFF( dd, [TimeStamp], GETDATE() ) > @Age )
    BEGIN
        SET ROWCOUNT 10000
        DELETE [dbo].[Error_Log] WHERE DATEDIFF( dd, [TimeStamp], GETDATE() ) > @Age
        WAITFOR DELAY '00:00:01'
        SET ROWCOUNT 0
    END
END
Here is how I would do it:
CREATE PROCEDURE [dbo].[ClearLog] (
    @Age int = 30)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @d DATETIME
          , @batch INT;

    SET @batch = 10000;
    SET @d = DATEADD( dd, -@Age, GETDATE() )

    WHILE (1 = 1)
    BEGIN
        DELETE TOP (@batch) [dbo].[Error_Log]
        WHERE [Timestamp] < @d;

        IF (0 = @@ROWCOUNT)
            BREAK
    END
END
Make the Timestamp comparison SARGable.
Capture GETDATE() once, before the loop, so the run is consistent (otherwise the procedure can keep deleting indefinitely as new records 'age' while the old ones are being deleted).
Use TOP instead of SET ROWCOUNT, which is deprecated: "Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in the next release of SQL Server."
Check @@ROWCOUNT to break out of the loop instead of running a redundant SELECT.
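For completeness, a quick usage example (the parameter value is just an illustration):

-- remove everything older than 30 days
EXEC [dbo].[ClearLog] @Age = 30;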
Assuming you have the option of rebuilding the error log table on a partition scheme, one option would be to partition the table on date and switch out whole partitions. Search for 'ALTER TABLE ... SWITCH PARTITION' to dig a bit further; a rough sketch follows.
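A rough sketch of the idea (the partition function, scheme, boundary dates, and archive table name are all invented for illustration; the archive table must match the log table's structure and filegroup):

-- monthly partitions on the Timestamp column
CREATE PARTITION FUNCTION pf_ErrorLogByMonth (datetime)
    AS RANGE RIGHT FOR VALUES ('2023-01-01', '2023-02-01', '2023-03-01');

CREATE PARTITION SCHEME ps_ErrorLogByMonth
    AS PARTITION pf_ErrorLogByMonth ALL TO ([PRIMARY]);

-- switching a whole partition out is a metadata-only operation,
-- so it barely touches the transaction log
ALTER TABLE [dbo].[Error_Log]
    SWITCH PARTITION 1 TO [dbo].[Error_Log_Archive];

TRUNCATE TABLE [dbo].[Error_Log_Archive];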
How about running it more often and deleting fewer rows each time? Run this every 30 minutes:
CREATE PROCEDURE [dbo].[ClearLog]
(
    @Age int = 30
)
AS
BEGIN
    SET NOCOUNT ON;
    SET ROWCOUNT 10000 -- I assume you are on an old version of SQL Server and can't use TOP
    DELETE dbo.Error_Log WHERE Timestamp < GETDATE() - @Age
    WAITFOR DELAY '00:00:01' -- why???
    SET ROWCOUNT 0
END
Because the comparison uses GETDATE() - @Age without truncating the time portion, each half-hourly run only deletes about 30 minutes' worth of data.
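To illustrate the difference (a small sketch, not part of the original answer):

-- the cutoff used above: exactly @Age days before "now", time of day included
SELECT GETDATE() - 30 AS cutoff_with_time;

-- a midnight-truncated cutoff would make a single run delete a whole day's rows at once
SELECT DATEADD(dd, DATEDIFF(dd, 0, GETDATE()) - 30, 0) AS cutoff_at_midnight;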
If your database is in FULL recovery mode, the only way to minimize the impact of your delete statements is to "space them out" -- only delete so many per "transaction interval". For example, if you do t-log backups every hour, delete only, say, 20,000 rows per hour. That may not drop everything you need all at once, but things will even out after 24 hours, or after a week.
If your database is in SIMPLE or BULK_LOGGED mode, breaking the deletes into chunks should do it. But since you're already doing that, I'd have to guess your database is in FULL recovery mode. (That, or the connection calling the procedure may be part of a transaction.)
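Two quick checks that go with this advice (the database name and backup path are placeholders):

-- confirm the recovery model of the current database
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = DB_NAME();

-- in FULL recovery, a log backup between delete batches lets log space be reused
BACKUP LOG [YourDatabase]
    TO DISK = N'X:\Backups\YourDatabase_log.trn';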
A solution I have used in the past was to temporarily set the recovery model to "Bulk Logged", then back to "Full" at the end of the stored procedure:
DECLARE @dbName NVARCHAR(128);
SELECT @dbName = DB_NAME();

EXEC('ALTER DATABASE ' + @dbName + ' SET RECOVERY BULK_LOGGED')

WHILE EXISTS (...)
BEGIN
    -- Delete a batch of rows, then WAITFOR here
END

EXEC('ALTER DATABASE ' + @dbName + ' SET RECOVERY FULL')
This will significantly reduce the transaction log consumption for large batches.
I don't like that it sets the recovery model for the whole database (not just for this session), but it's the best solution I could find.
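If you want to see the effect, one simple way (a standard command, mentioned here only as a suggestion) is to compare log usage before and after the run:

-- reports log file size and percentage used for every database
DBCC SQLPERF(LOGSPACE);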