OK I know this has been asked a lot and I thought I had found the answer but it's not working.
I need to delete a large amount of data from some SQL tables. Copying the data I want to keep and truncating or deleting the old table is not an option.
The database is set to simple logging.
I'm only deleting 3,000 rows.
I've tried it inside a BEGIN/END TRANSACTION and without one, and I include a CHECKPOINT command.
My understanding was that doing this would keep the transaction log from growing, but I'm still getting a 100+ GB transaction log.
I'm looking for a way to delete and not grow the transaction log.
I understand the log is there to roll things back if needed, but I don't need that; I just want to delete without the log filling up.
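For reference, this is roughly the batched pattern I've been trying. It's only a minimal sketch; dbo.BigTable and the date filter are placeholders for my real table and criteria:
-- Sketch: delete in small batches, issuing a CHECKPOINT between batches so
-- the inactive log can be reused (database is in the simple recovery model).
-- dbo.BigTable and the WHERE clause are placeholders.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (1000) FROM dbo.BigTable
    WHERE SomeDate < '2020-01-01';
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;   -- let the inactive log be truncated before the next batch
END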
Kindly check the VLF count, and also check log_reuse_wait_desc in the database.
SELECT [name] AS 'Database Name',
COUNT(li.database_id) AS 'VLF Count',
SUM(li.vlf_size_mb) AS 'VLF Size (MB)',
SUM(CAST(li.vlf_active AS INT)) AS 'Active VLF',
SUM(li.vlf_active*li.vlf_size_mb) AS 'Active VLF Size (MB)',
COUNT(li.database_id)-SUM(CAST(li.vlf_active AS INT)) AS 'Inactive VLF',
SUM(li.vlf_size_mb)-SUM(li.vlf_active*li.vlf_size_mb) AS 'Inactive VLF Size (MB)'
FROM sys.databases s
CROSS APPLY sys.dm_db_log_info(s.database_id) li
GROUP BY [name]
ORDER BY COUNT(li.database_id) DESC;
Then check why the log cannot be reused; log_reuse_wait_desc gives a description that points to how to resolve the issue.
SELECT name, log_reuse_wait_desc FROM sys.databases;
I used the following query to view the database log file.
declare @templTable as table
( DatabaseName nvarchar(50),
LogSizeMB nvarchar(50),
LogSpaceUsedPercent nvarchar(50),
Statusee bit
)
INSERT INTO @templTable
EXEC('DBCC SQLPERF(LOGSPACE)')
SELECT * FROM @templTable ORDER BY convert(float , LogSizeMB) desc
DatabaseName LogSizeMB LogSpaceUsedPercent
===============================================
MainDB 6579.93 65.8095
I also used the following code to view the amount of space used by the main database file.
with CteDbSizes
as
(
select database_id, type, size * 8.0 / 1024 size , f.physical_name
from sys.master_files f
)
select
dbFileSizes.[name] AS DatabaseName,
(select sum(size) from CteDbSizes where type = 1 and CteDbSizes.database_id = dbFileSizes.database_id) LogFileSizeMB,
(select sum(size) from CteDbSizes where type = 0 and CteDbSizes.database_id = dbFileSizes.database_id) DataFileSizeMB
--, (select physical_name from CteDbSizes where type = 0 and CteDbSizes.database_id = dbFileSizes.database_id) as PathPfFile
from sys.databases dbFileSizes ORDER BY DataFileSizeMB DESC
DatabaseName LogFileSizeMB DataFileSizeMB
===============================================
MainDB 6579.937500 7668.250000
But no matter what I did, the amount of database log space never dropped below 6 GB. Do you think there is a reason the database log has not changed for more than a month? Is there a solution to reduce this amount or not? I also used different methods and queries to reduce the size of the log file, and I got good results on other databases, like the following:
USE [master]
GO
ALTER DATABASE [MainDB] SET RECOVERY SIMPLE WITH NO_WAIT
GO
USE [MainDB]
GO
DBCC SHRINKDATABASE(N'MainDB')
GO
DBCC SHRINKFILE (N'MainDB_log' , 1)
GO
ALTER DATABASE [MainDB] SET RECOVERY FULL WITH NO_WAIT
GO
But in this particular database, the database log is still not less than 6 GB. Please help. Thanks.
As I mentioned in the main post, after a while the size of our database log file reached about 25 GB and we could no longer even shrink the database files. After some searching, I concluded that I should back up the log file and then shrink it. For this purpose, I defined a job that takes a transaction log backup roughly every 30 minutes; the resulting backup files usually do not exceed 150 MB. Then, after each log backup, I run the log shrink command once. With this method the size of the log file is greatly reduced, and we now have about 500 MB of log file. Of course, due to the large number of transactions on the database, the job must always stay active; if it is not active, the log grows again.
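For anyone curious, the job step is essentially the following. This is only a sketch: the real job builds a time-stamped backup file name, and the path shown here is a placeholder.
-- Sketch of the 30-minute job step: back up the log, then shrink the log file.
-- N'X:\LogBackups\MainDB_log.trn' is a placeholder; the real job appends a timestamp.
BACKUP LOG [MainDB]
TO DISK = N'X:\LogBackups\MainDB_log.trn'
WITH COMPRESSION;
GO
USE [MainDB]
GO
DBCC SHRINKFILE (N'MainDB_log', 512)  -- target size in MB; adjust as needed
GO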
I'm supporting an antediluvian webapp (soon to be retired) that still uses "aspnetdb" for its auth system. I was doing some work on my test environment in preparation for its retirement when I found my test server complaining with the following error:
The transaction log for database 'aspnetdb' is full due to 'NOTHING'.
Now, normally I'd assume the problem came from the database transaction log... but this database was recently switched into simple recovery mode (this is a test machine).
I've tried a few experiments with no luck, and done a fair bit of googling. Anybody seen this error before? Full transaction log on a database in simple recovery mode?
It's on SQL Server 2016, running in 2008 compatibility mode because aspnetdb is that old.
Got it, help received from stackexchange.
https://dba.stackexchange.com/questions/241172/transaction-log-is-full-due-to-nothing-but-this-database-is-in-simple-recov?noredirect=1#comment475763_241172
Autogrowth was set to 0. Unfortunately there's no way to see this in SSMS, because it hides such settings for databases in the simple recovery model.
Query to see the real value of autogrowth, thanks to @HandyD:
SELECT
db.name AS [Database],
mf.name AS [File],
CASE mf.[type_desc]
WHEN 'ROWS' THEN 'Data File'
WHEN 'LOG' THEN 'Log File'
END AS [FileType],
CAST(mf.[size] AS BIGINT)*8/1024 AS [SizeMB],
CASE
WHEN mf.[max_size] = -1 THEN 'Unlimited'
WHEN mf.[max_size] = 268435456 THEN 'Unlimited'
ELSE CAST(mf.[max_size]*8/1024 AS NVARCHAR(25)) + ' MB'
END AS [MaxSize],
CASE [is_percent_growth]
WHEN 0 THEN CONVERT(VARCHAR(6), CAST(mf.growth*8/1024 AS BIGINT)) + ' MB'
WHEN 1 THEN CONVERT(VARCHAR(6), CAST(mf.growth AS BIGINT)) + '%'
END AS [GrowthIncrement]
FROM sys.databases db
LEFT JOIN sys.master_files mf ON mf.database_id = db.database_id
where mf.name like 'aspnetdb%'
The other problem is that in this state you can't change autogrowth, but you can alter the size. So by increasing the size first and then introducing autogrowth, you can fix the problem.
ALTER DATABASE aspnetdb MODIFY FILE (
NAME = aspnetdb_log
, SIZE = 1GB
) --this fixes the problem
GO
ALTER DATABASE aspnetdb MODIFY FILE (
NAME = aspnetdb_log
, SIZE = 1025MB
, MAXSIZE = UNLIMITED
, FILEGROWTH = 10MB
) -- now we have autogrowth
GO
USE aspnetdb
DBCC SHRINKFILE(aspnetdb_log,1) --now we can shrink the DB back to a sane minimum since autogrowth is in place
GO
Even in the simple recovery model, you can still get a full transaction log. Simple recovery means the inactive portion of the transaction log is truncated automatically at each checkpoint, rather than being kept until a log backup.
The transaction log still needs space to accommodate all active transactions and all transactions that are being rolled back.
So one likely cause is that you still have an open transaction on your database. If that happens, the transaction log will not get truncated.
The other angle is actual space availability. If you have configured your log file with a maximum file size, or if you are running out of disk space, you can run into this.
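If you want to check both angles quickly, a sketch like this (using the aspnetdb name from the question) covers the open-transaction check and the log-reuse check:
-- Sketch: look for an open transaction and for the reason the log cannot be reused.
USE aspnetdb
GO
DBCC OPENTRAN   -- reports the oldest active transaction, if there is one
GO
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'aspnetdb';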
I have an application which uses Entity Framework for DB operations. In one table the delete operation takes more than 3 minutes, but other, similar tables don't take much time. I have debugged the code and found there is no issue with the code, but executing the query directly in SQL Server also takes a long time.
Any troubleshooting steps/root causes for this issue?
My table is as below,
Id (PK,uniqueidentifier,not null)
FirstValue(real,not null)
SecondValue(real,not null)
ThirdValue(real,not null)
LastValue(int,not null)
Config_Id(FK,uniqueidentifier,not null)
Query Execution Plan
Something isn't adding up here; we're not seeing the full picture...
There are a multitude of things which can slow down deletes (usually):
deleting a lot of records (which we know isn't the case here)
many indexes (which I suspect IS the case here)
deadlocks and blocking (is this a development or production database?)
triggers
cascade delete
transaction log needing to grow
many foreign keys to check (I suspect this might also be happening; see the sketch after this list)
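To quickly rule the index, trigger, and foreign-key suspects in or out, a query along these lines can help. This is just a sketch; dbo.MyTable is a placeholder for the table you're deleting from:
-- Sketch: count indexes and triggers on the table, and list foreign keys
-- in other tables that reference it (each one is checked on every delete).
-- Replace dbo.MyTable with your actual table name.
DECLARE @table NVARCHAR(256) = N'dbo.MyTable';

SELECT COUNT(*) AS IndexCount
FROM sys.indexes
WHERE [object_id] = OBJECT_ID(@table) AND index_id > 0;

SELECT COUNT(*) AS TriggerCount
FROM sys.triggers
WHERE parent_id = OBJECT_ID(@table);

SELECT fk.name AS ForeignKeyName,
       OBJECT_NAME(fk.parent_object_id) AS ReferencingTable,
       fk.delete_referential_action_desc AS OnDeleteAction
FROM sys.foreign_keys fk
WHERE fk.referenced_object_id = OBJECT_ID(@table);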
Can you please give us a screenshot of the "View Dependencies" feature in SSMS? To get this, right click on the table in the object explorer and select View Dependencies.
Also, can you open up a query on the master database, run the following queries and post the results:
SELECT name, value, value_in_use, minimum, maximum, [description], is_dynamic, is_advanced
FROM sys.configurations WITH (NOLOCK)
where name in (
'backup compression default',
'clr enabled',
'cost threshold for parallelism',
'lightweight pooling',
'max degree of parallelism',
'max server memory',
'optimize for ad hoc workloads',
'priority boost',
'remote admin connections'
)
ORDER BY name OPTION (RECOMPILE);
SELECT DB_NAME([database_id]) AS [Database Name],
[file_id], [name], physical_name, [type_desc], state_desc,
is_percent_growth, growth,
CONVERT(bigint, growth/128.0) AS [Growth in MB],
CONVERT(bigint, size/128.0) AS [Total Size in MB]
FROM sys.master_files WITH (NOLOCK)
ORDER BY DB_NAME([database_id]), [file_id] OPTION (RECOMPILE);
I've already seen a dozen such questions, but most of them get answers that don't apply to my case.
First off: the database I am trying to get the data from is on a very slow network and is connected to over a VPN.
I am accessing it through a database link.
I have full write/read access on my schema tables, but I don't have DBA rights, so I can't create dumps and I don't have grants for creating new tables, etc.
I've been trying to get the database locally and all is well except for one table.
It has 6.5 million records and 16 columns.
There was no problem getting 14 of them but the remaining two are Clobs with huge XML in them.
The data transfer is so slow it is painful.
I tried
insert based on select
insert all 14 then update the other 2
create table as
insert based on a conditional select so I only get so many records at a time, and commit manually
The issue is mainly that the connection is lost before the transaction finishes (power loss, VPN drop, random error, etc.) and all the GBs that have been downloaded are discarded.
As I said, I tried adding conditions so I only get a few records at a time, but even this is a bit random and requires constant attention from me.
Something like :
Insert into TableA
Select * from TableA@DB_RemoteDB1
WHERE CREATION_DATE BETWEEN to_date('01-JAN-2016', 'DD-MON-YYYY') AND to_date('31-DEC-2016', 'DD-MON-YYYY')
Sometimes it works, sometimes it doesn't. After just a few GB, Toad gets stuck running, but when I look at its throughput it is 0 KB/s or a few bytes/s.
What I am looking for is a loop or a cursor that can be used to fetch maybe 100,000 or 1,000,000 rows at a time, commit them, then go for the rest until it is done.
This is a one time operation that I am doing as we need the data locally for testing - so I don't care if it is inefficient as long as the data is brought in in chunks and a commit saves me from retrieving it again.
I can count already about 15GBs of failed downloads I've done over the last 3 days and my local table still has 0 records as all my attempts have failed.
Server: Oracle 11g
Local: Oracle 11g
Attempted Clients: Toad/Sql Dev/dbForge Studio
Thanks.
You could do something like:
begin
  loop
    insert into tablea
    select * from tablea@DB_RemoteDB1 a_remote
    where not exists (select null from tablea where id = a_remote.id)
    and rownum <= 100000; -- or whatever number makes sense for you
    exit when sql%rowcount = 0;
    commit;
  end loop;
end;
/
This assumes that there is a primary/unique key you can use to check whether a row in the remote table already exists in the local one. In this example I've used a vague ID column, but replace that with your actual key column(s).
For each iteration of the loop it will identify rows in the remote table which do not exist in the local table - which may be slow, but you've said performance isn't a priority here - and then, via rownum, limit the number of rows being inserted to a manageable subset.
The loop then terminates when no rows are inserted, which means there are no rows left in the remote table that don't exist locally.
This should be restartable, due to the commit and where not exists check. This isn't usually a good approach - as it kind of breaks normal transaction handling - but as a one off and with your network issues/constraints it may be necessary.
Toad is right: using bulk collect would be (probably significantly) faster in general, as the query isn't repeated each time around the loop:
declare
  cursor l_cur is
    select * from tablea@DB_RemoteDB1 a_remote
    where not exists (select null from tablea where id = a_remote.id);
  type t_tab is table of l_cur%rowtype;
  l_tab t_tab;
begin
  open l_cur;
  loop
    fetch l_cur bulk collect into l_tab limit 100000;
    forall i in 1..l_tab.count
      insert into tablea values l_tab(i);
    commit;
    exit when l_cur%notfound;
  end loop;
  close l_cur;
end;
/
This time you would change the limit 100000 to whatever number you think sensible. There is a trade-off here though, as the PL/SQL table will consume memory, so you may need to experiment a bit to pick that value - you could get errors or affect other users if it's too high. Lower is less of a problem here, except the bulk inserts become slightly less efficient.
But because you have a CLOB column (holding your XML) this won't work for you, as @BobC pointed out; the insert ... select is supported over a DB link, but the collection version will get an error from the fetch:
ORA-22992: cannot use LOB locators selected from remote tables
ORA-06512: at line 10
22992. 00000 - "cannot use LOB locators selected from remote tables"
*Cause: A remote LOB column cannot be referenced.
*Action: Remove references to LOBs in remote tables.
I'm trying to efficiently determine if a log backup will contain any data.
The best I have come up with is the following:
DECLARE @last_lsn numeric(25,0)
SELECT @last_lsn = last_log_backup_lsn
FROM sys.database_recovery_status WHERE database_id = DB_ID()
SELECT TOP 1 [Current LSN] FROM ::fn_dblog(@last_lsn, NULL)
The problem is when there are no transactions since the last backup, fn_dblog throws error 9003 with severity 20(!!) and logs it to the ERRORLOG file and event log. That makes me nervous -- I wish it just returned no records.
FYI, the reason I care is I have hundreds of small databases that can have activity at any time of day, but are typically used 8 hours/day. That means 2/3 of my log backups are empty. Those extra thousands of files can have a measurable impact on the time required for both off-site backup and recovering from a disaster.
I figured out an answer that works for my particular application. If I compare the results of the following two queries, I can determine if any activity has occurred on the database since the last log backup.
SELECT MAX(backup_start_date) FROM msdb..backupset WHERE type = 'L' AND database_name = DB_NAME();
SELECT MAX(last_user_update) FROM sys.dm_db_index_usage_stats WHERE database_id = DB_ID() AND last_user_update IS NOT NULL;
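Wrapped together, the check looks roughly like this; a sketch rather than production code:
-- Sketch: skip the log backup when there has been no user write activity
-- since the most recent log backup, based on the two queries above.
DECLARE @last_log_backup DATETIME, @last_update DATETIME;

SELECT @last_log_backup = MAX(backup_start_date)
FROM msdb..backupset
WHERE type = 'L' AND database_name = DB_NAME();

SELECT @last_update = MAX(last_user_update)
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID() AND last_user_update IS NOT NULL;

IF @last_update IS NOT NULL
   AND (@last_log_backup IS NULL OR @last_update > @last_log_backup)
    PRINT 'Activity since the last log backup - take a log backup.';
ELSE
    PRINT 'No activity - skip this log backup.';
One caveat: sys.dm_db_index_usage_stats is cleared when the instance restarts, so immediately after a restart this check may report no activity even if there were writes before the restart.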
If I run
SELECT [Current LSN] FROM ::fn_dblog(null, NULL)
It seems to return my current LSN at the top that matches the last log backup.
What happens if you change the select from ::fn_dblog to a count(*)? Does that eliminate the error?
If not, maybe select the log records into a temp table (top 100 from ::fn_dblog(null, NULL), ordering by a date, if there is one) and then query that.
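For example, something along these lines; a sketch that uses the [Current LSN], [Operation] and [Transaction ID] columns returned by fn_dblog:
-- Sketch: stage the head of the log into a temp table, then query that
-- instead of hitting fn_dblog repeatedly.
-- (No ORDER BY here; fn_dblog's [Begin Time] is often NULL.)
SELECT TOP (100) [Current LSN], [Operation], [Transaction ID]
INTO #log_head
FROM ::fn_dblog(NULL, NULL);

SELECT COUNT(*) AS LogRecords FROM #log_head;

DROP TABLE #log_head;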