Troubleshoot a Full Transaction Log (SQL Server Error 9002)

We have quite a bit of automation which runs at scheduled times; however, I do not manage it, and the person who does is on annual leave. Of course, it has all fallen over.
Usually the go-to fix is to update tbl_Control, which contains columns indicating whether any of the automation is already running, and clear the running flag. One job started but has failed, and I'm unable to clear the running flag in tbl_Control; that is where all the automation falls over, producing variations of the message below.
Using:
SELECT log_reuse_wait_desc, name, database_id, state, state_desc
FROM sys.databases
it shows, for the database whose transaction log is full:
log_reuse_wait_desc ACTIVE_TRANSACTION
So I think I need to stop whatever is running and start again, but as I can't update tbl_Control I'm stuck.
Now I think I might have to do something to the database in SQL (maybe clear the transaction log, or make space?), but I have no idea what.
Thanks

You have to back up the transaction log. You can also extend the size of the transaction log if it has a fixed maximum size. The last thing you can do is check the disk space on the partition where the transaction log is stored and free some space. But backing up the transaction log should come first.
Back up the transaction log (a T-SQL sketch follows these steps):
Connect to your server with SSMS.
Expand Server Name -> Databases, select your database and right-click it.
Choose 'Tasks' -> 'Back Up...'.
On the 'General' page, select 'Transaction Log' as the 'Backup type'.
On the 'General' page, select 'Disk' under 'Back up to'.
On the 'General' page, add a new destination by clicking the 'Add...' button.
Click [OK].
Notify your administrator about the backup you took.
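If you prefer T-SQL, here is a minimal sketch of the same log backup (the database name and backup path are placeholders; adjust them to your environment):
-- Back up the transaction log so the log space can be reused.
BACKUP LOG YourDatabase
TO DISK = N'D:\Backups\YourDatabase_log.trn';  -- placeholder path
Note that while log_reuse_wait_desc shows ACTIVE_TRANSACTION, the log backup alone may not free space until the open transaction commits or is rolled back.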

Related

How do I fix this 'Restore of database AdventureWorks2014 failed' permanently?

When I try to restore AdventureWorks2014, I get this
'Exclusive access could not be obtained because the database is in
use.(Microsoft.SqlServer.SmoExtended)'
Of course there is no query window open. I do not have this issue with other databases. The only difference I found between AdventureWorks2014 and the rest of my databases is the compatibility level, 2014 and 2019 respectively, but changing it was not the solution. I checked the Microsoft link provided with the error message, which took me to the Microsoft home page, and I could not find what I was looking for in the documentation. So I worked around it by setting the database offline or checking the 'Close existing connections to destination database' option. Both of them work, but I would like to find a permanent solution. Is there one?
Set the database to SINGLE_USER before the RESTORE:
USE master
ALTER DATABASE AdventureWorks2014 SET SINGLE_USER WITH ROLLBACK IMMEDIATE
RESTORE DATABASE AdventureWorks2014 ...
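A slightly fuller sketch of that sequence (the backup path is hypothetical; WITH REPLACE is only needed if you are overwriting an existing copy of the database):
USE master;
ALTER DATABASE AdventureWorks2014 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- Hypothetical backup path; point this at your actual .bak file.
RESTORE DATABASE AdventureWorks2014
FROM DISK = N'D:\Backups\AdventureWorks2014.bak'
WITH REPLACE;
-- Put access back to normal once the restore finishes.
ALTER DATABASE AdventureWorks2014 SET MULTI_USER;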
Something was blocking your database restore. To find out what, issue the restore as T-SQL (even if you're going through some sort of UI, script it out so you can run it by hand) in one window and note the SPID for that window. If you don't know where to find that in your UI of choice (my guess is that you're using SSMS, but maybe not!), you can run SELECT @@SPID to find it. Before starting your restore, open another window/session and have this query ready to run:
select *
from sys.dm_os_waiting_tasks
where session_id = «spid from your restore session»;
Kick off your restore and then run the above query to see what is blocking.

Do SQL Server Transaction Logs get 'locked' when there is a non-zero log_reuse_wait?

Scenario:
I have an application attempting to write to a SQL Server DB, and I'm getting the message:
The transaction log for database 'DB_NAME' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
However, I see that there is plenty of space on the drive for the transaction log to be written and autogrowth is on. So from SQL Management Studio, I issue:
select name,
log_reuse_wait,
log_reuse_wait_desc
from sys.databases
where name = 'DB_NAME'
and can see that the log_reuse_wait is '3'.
Question:
Do SQL Server transaction logs get 'locked' in any way when this wait is in place?
Note that the database uses the 'Simple' recovery model. Please let me know if there is more information I need to provide to make this a complete question.
This article may be of help.
https://msdn.microsoft.com/en-us/library/ms190925.aspx
The log reuse wait you are encountering indicates that there is a backup in progress.
Even in simple recovery, a full database backup has to include part of the transaction log.
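If you want to confirm that a backup or restore is actually running, a query along these lines should show it (a sketch; only a few useful columns are selected):
-- Show any backup or restore currently executing, with its progress.
SELECT session_id, command, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%' OR command LIKE 'RESTORE%';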

How to handle restore fails in log shipping in SQL server 2008 R2?

I am working on log shipping from primary Server1(sql server 2008 R2) to secondary Server2(sql server 2008 R2) in stand by mode.
So there are 3 jobs:
backup on server1,
copy,
restore on server2.
The backup source and destination paths are on Server2, and there is no issue with folder access.
The first job runs and creates the backup, the second job copies it, and the third restores it.
Everything worked fine the first time, after which I scheduled them at 5, 7 and 9 minutes.
But it does not work on the second attempt; the restore job throws the errors below even when I run it manually:
The restore operation completed with errors. Secondary ID:
could not find a log backup file that could be applied to secondary database.
Is this happening because there is another log backup running on the primary server? If yes, how can I manage both log backups (the separate log backup and log shipping)?
Message
2015-10-13 21:09:05.13 *** Error: The file ‘C:\LS_S\LSDemo_20151013153827.trn’ is too recent to apply to the secondary database ‘LSDemo’.(Microsoft.SqlServer.Management.LogShipping) ***
2015-10-13 21:09:05.13 *** Error: The log in this backup set begins at LSN 32000000047300001, which is too recent to apply to the database. An earlier log backup that includes LSN 32000000047000001 can be restored.
RESTORE LOG is terminating abnormally.(.Net SqlClient Data Provider) ***
The above error is shown in the restore job's failure history. If the failures exceed the configured threshold, we would also start seeing the error below in the SQL Server ERRORLOG on the secondary:
2015-10-14 06:22:00.240 spid60 Error: 14421, Severity: 16, State: 1.
2015-10-14 06:22:00.240 spid60 The log shipping secondary database PinalServer.LSDemo has restore threshold of 45 minutes and is out of sync. No restore was performed for 553 minutes. Restored latency is 4 minutes. Check agent log and logshipping monitor information.
To start troubleshooting, we can look at the Job Activity Monitor on the secondary, where the restore job fails with the state below:
[Screenshot: LS-Restore job failure, 'The file is too recent to apply to the secondary database']
If you know SQL Server transaction log backup basics, you might be able to guess the cause. If we look closely at the error, it talks about an LSN mismatch. In most cases, a manual transaction log backup was taken. I remember a few scenarios where a third-party tool had taken a transaction log backup of a database that was also part of a log shipping configuration.
Since we know the cause now, what we need to figure out is where that "out of band" backup went. Here is the query I wrote on my earlier blog.
-- Assign the database name to the variable below
DECLARE @db_name VARCHAR(100);
SELECT @db_name = 'LSDemo';
-- query
SELECT TOP (30) s.database_name
    ,m.physical_device_name
    ,CAST(CAST(s.backup_size / 1000000 AS INT) AS VARCHAR(14)) + ' ' + 'MB' AS bkSize
    ,CAST(DATEDIFF(second, s.backup_start_date, s.backup_finish_date) AS VARCHAR(4)) + ' ' + 'Seconds' AS TimeTaken
    ,s.backup_start_date
    ,CAST(s.first_lsn AS VARCHAR(50)) AS first_lsn
    ,CAST(s.last_lsn AS VARCHAR(50)) AS last_lsn
    ,CASE s.[type]
        WHEN 'D' THEN 'Full'
        WHEN 'I' THEN 'Differential'
        WHEN 'L' THEN 'Transaction Log'
     END AS BackupType
    ,s.server_name
    ,s.recovery_model
FROM msdb.dbo.backupset s
INNER JOIN msdb.dbo.backupmediafamily m ON s.media_set_id = m.media_set_id
WHERE s.database_name = @db_name
ORDER BY backup_start_date DESC
    ,backup_finish_date;
Once we run the query, we get the list of backups taken on the database. This information is read from the msdb database.
[Screenshot: query output listing the database's backup history, including the out-of-band log backup]
Once we have found the "problematic" backup, we need to restore it manually on the secondary database. Make sure to use either the NORECOVERY or STANDBY option so that subsequent logs can still be restored. Once that file is restored, the restore job will be able to pick up from the same place and catch up automatically.
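A minimal sketch of that manual restore on the secondary (the file path is a placeholder; use the physical_device_name returned by the query above):
-- Apply the out-of-band log backup with NORECOVERY (or STANDBY)
-- so the log shipping restore job can continue afterwards.
RESTORE LOG LSDemo
FROM DISK = N'D:\ManualBackups\LSDemo_manual.trn'  -- placeholder path
WITH NORECOVERY;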
What other problems have you seen with log shipping? If you can share some of the common errors, it would be of great help to others, and I will try to blog about them too with your help.
Reference: Pinal Dave (https://blog.sqlauthority.com)

Why does SQL Server say "Starting Up Database" in the event log, twice per second?

I have a SQL Server [2012 Express with Advanced Services] database, with not much in it. I'm developing an application using EF Code First, and since my model is still in a state of flux, the database is getting dropped and re-created several times per day.
This morning, my application failed to connect to the database the first time I ran it. On investigation, it seems that the database is in "Recovery Pending" mode.
Looking in the event log, I can see that SQL Server has logged:
Starting up database (my database)
...roughly twice per second all night long. (The event log filled up, so I can't see beyond yesterday evening).
Those "information" log entries stop at about 6am this morning, and are immediately followed by an "error" log entry saying:
There is insufficient memory in resource pool 'internal' to run this query
What the heck happened to my database?
Note: it's just possible that I left my web application running in "debug" mode overnight - although without anyone "driving" it I can't imagine that there would be much database traffic, if any.
It's also worth mentioning that I have a full-text catalog in the database (though as I say, there's hardly any actual content in the DB at present).
I have to say, this is worrying - I would not be happy if this were to happen to my production database!
With AUTO_CLOSE ON, the database is closed as soon as there are no connections to it, and it re-opens (running recovery, albeit a fast-paced one) every time a connection is established. So you were seeing the message because every two seconds your application would connect to the database. You probably always had this behavior and never noticed it before. Now that your database has crashed, you investigated the log and discovered this. While it is good that you now know and will likely fix it, it does not address your real problem, namely the availability of the database.
So now you have a database that won't come out of recovery; what do you do? You restore from your last backup and apply your disaster recovery plan. Really, that's all there is to it, and there is no alternative.
If you want to understand why the crash happened (it can be any of about 1 myriad reasons...) then you need to contact CSS (Product Support). They have the means to guide you through investigation.
If you want to turn off this message in the event log (the T-SQL equivalent is sketched after these steps):
Go to SQL Server Management Studio.
Right-click your database and choose Properties.
Select the Options page (in the left panel).
Look in the 'Automatic' section and change 'Auto Close' to 'False'.
Click OK.
That's all :)
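The same change as a T-SQL sketch (the database name is a placeholder):
-- Keep the database open between connections.
ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;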
I had a similar problem with a SQL Express database stuck in recovery. After investigating the log, it transpired that the database was starting up every couple of minutes. Running the script
select name, state_desc, is_auto_close_on from sys.databases where name = 'mydb'
revealed that auto close was set to on.
So it appears that the database is always 'in recovery', but it actually comes online for a brief moment before going offline again because there are no client connections.
I solved this with the following script, which waits for the database to come online and then turns off AUTO_CLOSE:
DECLARE @state VARCHAR(20);
WHILE 1 = 1
BEGIN
    SELECT @state = state_desc FROM sys.databases WHERE name = 'MyDb';
    IF @state = 'ONLINE'
    BEGIN
        -- The database is briefly online; disable AUTO_CLOSE before it closes again.
        ALTER DATABASE MyDb SET AUTO_CLOSE OFF;
        PRINT 'Online';
        BREAK;
    END
    WAITFOR DELAY '00:00:02';
END

Extreme wait-time when taking a SQL Server database offline

I'm trying to perform some offline maintenance (dev database restore from live backup) on my dev database, but the 'Take Offline' command via SQL Server Management Studio is performing extremely slowly - on the order of 30 minutes plus now. I am just about at my wits' end and I can't seem to find any references online as to what might be causing the speed problem, or how to fix it.
Some sites have suggested that open connections to the database cause this slowdown, but the only application that uses this database is my dev machine's IIS instance, and the service is stopped - there are no more open connections.
What could be causing this slowdown, and what can I do to speed it up?
After some additional searching (new search terms inspired by gbn's answer and u07ch's comment on KMike's answer) I found this, which completed successfully in 2 seconds:
ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE
(Update)
When this still fails with the following error, you can fix it as inspired by this blog post:
ALTER DATABASE failed because a lock could not be placed on database 'dbname'. Try again later.
You can run the following command to find out who is holding a lock on your database:
EXEC sp_who2
And use whatever SPID you find in the following command:
KILL <SPID>
Then run the ALTER DATABASE command again. It should now work.
There is most likely a connection to the DB from somewhere (a rare example: an asynchronous statistics update).
To find connections, use sys.sysprocesses
USE master
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('MyDB')
To force disconnections, use ROLLBACK IMMEDIATE
USE master
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
Do you have any open SQL Server Management Studio windows that are connected to this DB?
Put it in single user mode, and then try again.
In my case, after waiting a long time for it to finish, I lost patience and simply closed Management Studio. Before exiting, it showed the success message: the DB was offline. The files were then available to rename.
Execute the stored procedure
sp_who2
This will let you see whether there are any blocking locks; killing the blocking SPIDs should fix it.
In SSMS: right-click the SQL Server instance and open Activity Monitor. Expand Processes, find the processes connected to the database, then right-click each process and choose Kill Process.
In my case I had looked at some tables in the DB prior to executing this action. My user account was holding an active connection to this DB in SSMS. Once I disconnected from the server in SSMS (leaving the 'Take database offline' dialog box open) the operation succeeded.
Any time you run into this type of thing, you should think of your transaction log. The ALTER DATABASE statement with ROLLBACK IMMEDIATE indicates this to be the case. Check this out: http://msdn.microsoft.com/en-us/library/ms189085.aspx
Bone up on checkpoints, etc. You need to decide whether the transactions in your log are worth saving and then pick the recovery model to run your database under accordingly. There's really no reason for you to have to wait, but also no reason for you to lose data either - you can have both.
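A quick, read-only way to see the recovery model and what the log is currently waiting on (the database name is a placeholder):
-- Check the recovery model and the reason log space cannot be reused.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';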
Closing the instance of SSMS (SQL Server Management Studio) from which the request was made solved the problem for me.
To get around this, I stopped the website in IIS that was connected to the DB, and the 'frozen' 'Take database offline' panel immediately became unfrozen.
Also, close any query windows you may have open that are connected to the database in question ;)
I tried all the suggestions below and nothing worked.
1. EXEC sp_who, then KILL <SPID>
2. ALTER DATABASE <dbname> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
3. ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE
Result: both of the above commands were also stuck.
4. Right-click the database -> Properties -> Options, set Database Read-Only to True, and click 'Yes' at the dialog warning that SQL Server will close all connections to the database.
Result: the window was stuck on 'Executing'.
As a last resort, I restarted the SQL Server service from Configuration Manager and then ran ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE. It worked like a charm.
In SSMS, set the database to read-only then back. The connections will be closed, which frees up the locks.
In my case there was a website that had open connections to the database. This method was easy enough:
Right-click the database -> Properties -> Options
Set Database Read-Only to True
Click 'Yes' at the dialog warning SQL Server will close all connections to the database.
Re-open Options and turn read-only back off
Now try renaming the database or taking it offline.
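The same toggle can be done in T-SQL (a sketch; the database name is a placeholder, and ROLLBACK IMMEDIATE is what kicks off the existing connections):
-- Force read-only, killing open connections, then switch back.
ALTER DATABASE [YourDatabase] SET READ_ONLY WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [YourDatabase] SET READ_WRITE;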
For me, I just had to go into the Job Activity Monitor and stop two things that were processing. Then it went offline immediately. In my case though I knew what those 2 processes were and that it was ok to stop them.
In my case, the database was related to an old Sharepoint install. Stopping and disabling related services in the server manager "unhung" the take offline action, which had been running for 40 minutes, and it completed immediately.
You may wish to check if any services are currently utilizing the database.
Next time, from the Take Offline dialog, remember to check the 'Drop All Active Connections' checkbox. I was also on SQL_EXPRESS on local machine with no connections, but this slowdown happened for me unless I checked that checkbox.
SSMS, especially if running it from your own desktop remotely and not directly within the database server, can be a reason for the long delays in detaching a database. For some reason SSMS may not be able to disconnect any existing "connections" to the database.
We found the process was almost instant when we did it directly from the database server itself. And in fact it killed the attempt from my own desktop SSMS session, and it "took over" and detached the database.
Nothing else suggested here worked.
Thanks
In my case, I stopped the Tomcat server, and the DB immediately went offline.
