SQL Server 2008 transaction log error - database

select log_reuse_wait_desc from sys.databases where name = 'mydb'
1. LOG_BACKUP
All update and insert queries throw:
ODBC Error: ODBC RC=-1, ODBC
SQLState=37000, DBMS RC=9002, DBMS Msg=[Microsoft][ODBC SQL Server
Driver][SQL Server]The transaction log for database 'mydb' is full. To
find out why space in the log cannot be reused, see the
log_reuse_wait_desc column in sys.databases. Operation canceled
My query:
First I delete and insert data into STATUS TABLE:
String insertQuery = "insert into "+dbmsName+"."+schemaName+".status(siteId,Severity) values(?,?)";
String deleteQuery = "delete from "+dbmsName+"."+schemaName+".status";
Now I select from status table and update live table:
String updateQuery = "update "+dbmsName+"."+schemaName+".live set status = ? where new_site_id = ?";
String updateAllQuery = "update "+dbmsName+"."+schemaName+".live set status = site_status where new_site_id = ?";
Now I can't run any other update queries either.
How can I solve this issue?

"The transaction log for database 'mydb' is full" - that's the problem.
You need to free up disk space. Until you do that, you won't be able to do much.
Do you have a regular T-log maintenance schedule? If you are in FULL recovery mode and no log backups are running, the transaction log will simply keep growing.
To shrink the transaction log for your database (don't do this routinely, only when dealing with a situation like your current one):
8 Steps to better Transaction Log throughput
Has a maximum size been set for your database? Run this to find out:
sp_helpdb mydb
go
Update: You should perform a transaction log backup. You will probably have to back it up more than once. After you back up the transaction log, try shrinking it.
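A minimal sketch of that sequence, assuming the log's logical file name is mydb_log (check with sp_helpfile) and a hypothetical backup path:

```sql
-- Back up the log; you may need to repeat this so the active
-- portion of the log wraps back to the start of the file.
BACKUP LOG mydb TO DISK = N'C:\Backups\mydb_log.trn';
GO
-- Then shrink the log file back down (target size in MB).
USE mydb;
DBCC SHRINKFILE (mydb_log, 1024);
GO
```

If the shrink doesn't reclaim much, take another log backup and shrink again: the active virtual log file may still be near the end of the physical file.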
Factors That Can Delay Log Truncation


Transaction log is full (due to NOTHING)... but this database is in simple recovery mode

I'm supporting an antediluvian webapp (soon to be retired) that still uses "aspnetdb" for its auth system. I was doing some work in prep for its retirement on my test environment, when I found my test server complaining with the following error:
The transaction log for database 'aspnetdb' is full due to 'NOTHING'.
Now, normally I'd assume the problem came from the database transaction log... but this database was recently switched into simple recovery mode (this is a test machine).
I've tried a few experiments with no luck, and done a fair bit of googling. Anybody seen this error before? Full transaction log on a database in simple recovery mode?
It's on SQL Server 2016, running in 2008 compatibility mode because aspnetdb is that old.
Got it; help received from Stack Exchange.
https://dba.stackexchange.com/questions/241172/transaction-log-is-full-due-to-nothing-but-this-database-is-in-simple-recov?noredirect=1#comment475763_241172
Autogrowth was set to 0. Unfortunately there's no way to see this in SSMS, because it hides these settings for databases in simple recovery mode.
Query to see the real value of autogrowth, thanks to HandyD:
SELECT
    db.name AS [Database],
    mf.name AS [File],
    CASE mf.[type_desc]
        WHEN 'ROWS' THEN 'Data File'
        WHEN 'LOG' THEN 'Log File'
    END AS [FileType],
    CAST(mf.[size] AS BIGINT) * 8 / 1024 AS [SizeMB],
    CASE
        WHEN mf.[max_size] = -1 THEN 'Unlimited'
        WHEN mf.[max_size] = 268435456 THEN 'Unlimited'
        ELSE CAST(mf.[max_size] * 8 / 1024 AS NVARCHAR(25)) + ' MB'
    END AS [MaxSize],
    CASE [is_percent_growth]
        WHEN 0 THEN CONVERT(VARCHAR(6), CAST(mf.growth * 8 / 1024 AS BIGINT)) + ' MB'
        WHEN 1 THEN CONVERT(VARCHAR(6), CAST(mf.growth AS BIGINT)) + '%'
    END AS [GrowthIncrement]
FROM sys.databases db
LEFT JOIN sys.master_files mf ON mf.database_id = db.database_id
WHERE mf.name LIKE 'aspnetdb%'
The other problem is that in this state you can't change autogrowth, but you can alter the size. So by increasing the size and then re-enabling autogrowth, you can fix the problem.
ALTER DATABASE aspnetdb MODIFY FILE (
NAME = aspnetdb_log
, SIZE = 1GB
) --this fixes the problem
GO
ALTER DATABASE aspnetdb MODIFY FILE (
NAME = aspnetdb_log
, SIZE = 1025MB
, MAXSIZE = UNLIMITED
, FILEGROWTH = 10MB
) -- now we have autogrowth
GO
USE aspnetdb
DBCC SHRINKFILE(aspnetdb_log,1) --now we can shrink the DB back to a sane minimum since autogrowth is in place
GO
Even in simple recovery mode, you can still get a full transaction log. Simple recovery mode just means that the transaction log is automatically truncated at each checkpoint, rather than only after a log backup.
The transaction log still needs space to accommodate all active transactions and all transactions that are being rolled back.
So one likely cause is that you still have an open transaction on your database. If that happens, the transaction log will not get truncated.
The other angle is the actual space availability. If you have configured your log file with a maximum file size, or when you are running out of disk space, you can run into this.
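To check for the open-transaction case, a quick sketch (DBCC OPENTRAN and the sys.dm_tran_* DMVs are available from SQL Server 2005 onwards):

```sql
-- Oldest active transaction in the database, if any
DBCC OPENTRAN ('aspnetdb');

-- Sessions holding open transactions, with their start times
SELECT s.session_id, s.login_name, at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS at
    ON at.transaction_id = st.transaction_id
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = st.session_id;
```

If a session shows up here with an old transaction_begin_time, killing or committing that session will let the next checkpoint truncate the log.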

How to overcome "Failure getting record lock on a record from table"?

I am running a query using OpenQuery and getting a peculiar error.
This is my query:
select * from OpenQuery("CAPITAOC",'SELECT per.*
FROM pub."re-tenancy" AS t
INNER JOIN pub."re-tncy-person" AS per
ON t."tncy-sys-ref" = per."tncy-sys-ref"
INNER JOIN pub."re-tncy-place" AS place
ON t."tncy-sys-ref" = place."tncy-sys-ref"
WHERE t."tncy-status" = ''CUR'' and place."place-ref"=''GALL01000009''')
This is the error message:
OLE DB provider "MSDASQL" for linked server "CAPITAOC" returned message "[DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Failure getting record lock on a record from table PUB.RE-TNCY-PERSON.".
OLE DB provider "MSDASQL" for linked server "CAPITAOC" returned message "[DataDirect][ODBC Progress OpenEdge Wire Protocol driver]Error in row.".
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "MSDASQL" for linked server "CAPITAOC".
How do I read this data?
The record lock error:
In a multi-user environment it is useful to lock records that are being updated, to prevent another user session from accessing that record. This prevents a "dirty read" of your data.
To overcome this issue, I suggest looking at this article:
http://knowledgebase.progress.com/articles/Article/20255
The Transaction Isolation Level must be set prior to any other
transactions within the session.
And this is how you find out WHO has locked your record:
http://knowledgebase.progress.com/articles/Article/19833
Also, if you are using something like SQL Explorer, which does not auto-commit your updates unless you ask it to, the database table might stay locked until you commit your changes.
I ran across this issue as well and the other answer's links were not as helpful as I had hoped. I used the following link: https://knowledgebase.progress.com/articles/Article/P12158
Option #1 - applies from OpenEdge 10.1A02 and later.
Use the WITH (NOLOCK) hint in the SELECT query. This ensures that no record locks are acquired. For example,
SELECT * FROM pub.customer WITH (NOLOCK);
The WITH (NOLOCK) hint is similar to using the Read Uncommitted isolation level in that it will result in a dirty read.
Option #2 - applies to all OpenEdge (10.x/11.x) versions using the Read Committed isolation level.
Use the WITH (READPAST) hint in the SELECT query. This option causes a transaction to skip rows locked by other transactions that would ordinarily appear in the result set, rather than block the transaction waiting for the other transactions to release their locks on these rows. For example,
SELECT * FROM pub.customer WITH (READPAST NOWAIT);
SELECT * FROM pub.customer WITH (READPAST WAIT 5);
Please be aware that this can lead to fewer records being returned than expected since locked records are skipped/omitted from the result set.
Option #3 - applies to all Progress/OpenEdge versions.
Change the Isolation Level to Read Uncommitted to ensure that, when a record is read, no record locks are acquired. Using the Read Uncommitted isolation level will result in a dirty read.
This can be done at ODBC DSN level or via the SET TRANSACTION ISOLATION LEVEL <isolation_level_name> statement. For example,
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Option 1 worked for me.
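For completeness, this is roughly what Option #1 looks like applied to the query from the question (a sketch; hint placement follows the Progress KB examples):

```sql
select * from OpenQuery("CAPITAOC",'SELECT per.*
FROM pub."re-tenancy" AS t WITH (NOLOCK)
INNER JOIN pub."re-tncy-person" AS per WITH (NOLOCK)
ON t."tncy-sys-ref" = per."tncy-sys-ref"
INNER JOIN pub."re-tncy-place" AS place WITH (NOLOCK)
ON t."tncy-sys-ref" = place."tncy-sys-ref"
WHERE t."tncy-status" = ''CUR'' and place."place-ref"=''GALL01000009''')
```

Remember that every table touched by the remote query needs the hint, and that the results are dirty reads.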

Count number of rollbacks in SQL Server 2008

I am experimenting with the SET option XACT_ABORT ON, as I am seeing a lot of sleeping sessions holding an open transaction, causing problems in the application.
Is there a way to measure if flipping the option has an effect or not? I am thinking about something like number of rollbacks per day. How could I collect these?
I am on SQL Server 2008 SP4.
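For reference, this is the behaviour I am hoping to get (a sketch, using a hypothetical table t1): with the option ON, a run-time error terminates and rolls back the whole transaction instead of leaving it open.

```sql
SET XACT_ABORT ON;
BEGIN TRAN;
    INSERT INTO t1 SELECT 3;
    SELECT 1 / 0;   -- run-time error: aborts the batch and rolls back
COMMIT;             -- never reached
GO
SELECT @@TRANCOUNT; -- 0: no sleeping session left holding the transaction
```

Note that RAISERROR is documented as one of the statements not affected by XACT_ABORT, which is why the sketch uses an arithmetic error.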
Assuming your database is in the full recovery model, you can combine all the transaction log backups taken per day and query them...
SELECT
    CASE
        WHEN [Operation] = 'LOP_COMMIT_XACT' THEN 'COMMITS'
        ELSE 'ROLLBACKS'
    END AS [OPERATIONS],
    COUNT(*) AS CNT
FROM fn_dblog(NULL, NULL)
WHERE [Operation] IN ('LOP_COMMIT_XACT', 'LOP_ABORT_XACT')
GROUP BY [Operation]
I did some tests using some sample data..
Begin tran test1
insert into t1
select 3
rollback
Output:
OPERATIONS   CNT
COMMITS      4
ROLLBACKS    3
Update as per comments:
Reading a transaction log backup can be expensive, so I recommend you do this on the backups taken and not on the active log. Running this against the active T-log can have the effects below:
1. Since it performs a log scan, transaction log truncation will be prevented.
2. Heavy IO load on the server, depending on your log size, since the output of ::fn_dblog can easily run into millions of rows.
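To run the same aggregation against a log backup file instead of the active log, the undocumented ::fn_dump_dblog function can be pointed at the backup (a sketch; the backup path is hypothetical, and all but the first five of the function's 68 parameters are passed as DEFAULT):

```sql
SELECT
    CASE
        WHEN [Operation] = 'LOP_COMMIT_XACT' THEN 'COMMITS'
        ELSE 'ROLLBACKS'
    END AS [OPERATIONS],
    COUNT(*) AS CNT
FROM fn_dump_dblog(
    NULL, NULL, N'DISK', 1, N'C:\Backups\mydb_log.trn',
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
WHERE [Operation] IN ('LOP_COMMIT_XACT', 'LOP_ABORT_XACT')
GROUP BY [Operation];
```

Being undocumented, fn_dump_dblog is unsupported and has been known to leak resources on some builds, so run it on a non-production server where possible.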
References:
http://rusanu.com/2014/03/10/how-to-read-and-interpret-the-sql-server-log/
https://blogs.msdn.microsoft.com/dfurman/2009/11/05/reading-database-transaction-log-with-fn_dump_dblog/

SQL Server database has 2 log files and I want to remove one. HOW?

I am new to this. I have a database (created by someone else) that has 2 .ldf files (blah_log.ldf and blah_log2.ldf). My manager asked me to remove one of the log files but I cannot. How do I do this? I tried putting it on another server: detach, delete the log files, attach. I thought that way it would create just one, but it gave an error and wanted both. Then I tried right-clicking Properties and deleting the file there, but it would not let me; it said the log file was not empty. How in the heck do I achieve this? I just want the dang database to have one freaking log file, not two. This shouldn't be this complicated. I am a beginner and know nothing, so maybe it isn't really. Please HELP!
I just tried this:
empty SQL Server database transaction log file
backup log [dbname] with truncate_only
go
DBCC SHRINKDATABASE ([dbname], 10, TRUNCATEONLY)
go
Then I deleted the second log file and clicked ok. I guess this is all I need to do? I tried it on a test server from a restore.
This MSDN article describes how to accomplish this at a high-level:
You cannot move transaction log data from one log file to another to
empty a transaction log file. To remove inactive transactions from a
transaction log file, the transaction log must be truncated or backed
up. When the transaction log file no longer contains any active or
inactive transactions, the log file can be removed from the database.
And this blog post shows the actual T-SQL that will accomplish this task:
USE master
IF DB_ID('rDb') IS NOT NULL DROP DATABASE rDb
GO
CREATE DATABASE rDb
ON
PRIMARY
( NAME = N'rDb', FILENAME = N'C:\rDb.mdf' , SIZE = 50MB ,
FILEGROWTH = 1024KB )
LOG ON
(NAME = N'rDb_log2', FILENAME = N'C:\rDb_log2.ldf', SIZE = 3MB,
FILEGROWTH = 2MB)
,(NAME = N'rDb_log3', FILENAME = N'C:\rDb_log3.ldf', SIZE = 3MB,
FILEGROWTH = 2MB)
,(NAME = N'rDb_log4', FILENAME = N'C:\rDb_log4.ldf', SIZE = 3MB,
FILEGROWTH = 2MB)
GO
ALTER DATABASE rDb SET RECOVERY FULL
BACKUP DATABASE rDb TO DISK = 'C:\rDb.bak' WITH INIT
CREATE TABLE rDb..t(c1 INT IDENTITY, c2 CHAR(100))
INSERT INTO rDb..t
SELECT TOP(15000) 'hello'
FROM syscolumns AS a
CROSS JOIN syscolumns AS b
--Log is now about 46% full
DBCC SQLPERF(logspace)
--Check virtual log file layout
DBCC LOGINFO(rDb)
--See that file 4 isn't used at all (Status = 0 for all 4's rows)
--We can remove file 4, it isn't used
ALTER DATABASE rDb REMOVE FILE rDb_log4
--Check virtual log file layout
DBCC LOGINFO(rDb)
--Can't remove 3 since it is in use
ALTER DATABASE rDb REMOVE FILE rDb_log3
--What if we backup log?
BACKUP LOG rDb TO DISK = 'C:\rDb.bak'
--Check virtual log file layout
DBCC LOGINFO(rDb)
--3 is still in use (status = 2)
--Can't remove 3 since it is in use
ALTER DATABASE rDb REMOVE FILE rDb_log3
--Shrink 3
USE rDb
DBCC SHRINKFILE(rDb_log3)
USE master
--... and backup log?
BACKUP LOG rDb TO DISK = 'C:\rDb.bak'
--Check virtual log file layout
DBCC LOGINFO(rDb)
--3 is no longer in use
--Can now remove 3 since it is not in use
ALTER DATABASE rDb REMOVE FILE rDb_log3
--Check explorer, we're down to 1 log file
--See what sys.database_files say?
SELECT * FROM rDb.sys.database_files
--Seems the physical file is gone, but SQL Server considers the file offline
--Backup log does it:
BACKUP LOG rDb TO DISK = 'C:\rDb.bak'
SELECT * FROM rDb.sys.database_files
--Can never remove the first ("primary") log file
ALTER DATABASE rDb REMOVE FILE rDb_log2
--Note error message from above

SQL Server 2005 - Error_Message() not showing full message

I have encapsulated a backup database command in a Try/Catch and it appears that the error message is being lost somewhere. For example:
BACKUP DATABASE NonExistantDB TO DISK = 'C:\TEMP\NonExistantDB.bak'
...gives the error:
Could not locate entry in sysdatabases for database 'NonExistantDB'. No entry found with that name. Make sure that the name is entered correctly. BACKUP DATABASE is terminating abnormally.
Whereas:
BEGIN TRY
BACKUP DATABASE NonExistantDB TO DISK = 'C:\TEMP\NonExistantDB.bak'
END TRY
BEGIN CATCH
PRINT ERROR_MESSAGE()
END CATCH
... only gives error: BACKUP DATABASE is terminating abnormally.
Is there a way to get the full error message or is this a limitation of try/catch?
It's a limitation of try/catch.
If you look carefully at the error generated by executing
BACKUP DATABASE NonExistantDB TO DISK = 'C:\TEMP\NonExistantDB.bak'
you'll find that there are two errors that get thrown. The first is msg 911, which states
Could not locate entry in sysdatabases for database 'NonExistantDB'. No entry
found with that name. Make sure that the name is entered correctly.
The second is the 3013 message that you are displaying. Basically, SQL is only returning the last error.
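You can confirm which of the two errors the CATCH block sees (a sketch, using the same nonexistent database as in the question):

```sql
BEGIN TRY
    BACKUP DATABASE NonExistantDB TO DISK = 'C:\TEMP\NonExistantDB.bak'
END TRY
BEGIN CATCH
    -- Returns 3013 / "BACKUP DATABASE is terminating abnormally.",
    -- not the underlying msg 911 that names the missing database.
    SELECT ERROR_NUMBER() AS err_no, ERROR_MESSAGE() AS err_msg;
END CATCH
```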
It is a limitation of the try/catch block in SQL 2005, one I just ran into myself. I don't know whether it still exists in 2008.
SQL 2005 Error Handling
