How to delete .ldf file from SQL Server 2008? - sql-server

If I stop SQL Server and then delete the database's .LDF file (transaction log file), what will happen? Will the database be marked suspect, or will SQL Server just create a new one automatically? SQL Server 2008 R2.
Also, my .LDF file is too big. How do I manage it: can I shrink it, or delete it?
Please suggest in query form.

You should not delete any of the database files, since doing so can severely damage your database!
If you run out of disk space, you might want to split your database into multiple files. This can be done in the database's properties, so you can put each file on a different storage volume.
You can also shrink the transaction log file if you change the recovery model from full to simple, using the following commands:
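For example, a secondary data file can be added on another volume with T-SQL as well; the logical name, path, and sizes below are illustrative, not from the original question:

```sql
-- Add a secondary data file on a different volume (name, path, and sizes are placeholders)
ALTER DATABASE myDatabase
ADD FILE (
    NAME = myDatabase_data2,                      -- logical file name (example)
    FILENAME = 'E:\SQLData\myDatabase_data2.ndf', -- physical path on the other volume
    SIZE = 100MB,
    FILEGROWTH = 50MB
);
```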
ALTER DATABASE myDatabase SET RECOVERY SIMPLE
DBCC SHRINKDATABASE (myDatabase , 5)
Switching back to full recovery is possible as well:
ALTER DATABASE myDatabase SET RECOVERY FULL
Update about SHRINKDATABASE, or what I did not know when answering this question:
Although the method above gets rid of some unused space, it has a severe disadvantage for database files (MDF): it fragments your indexes, worsening the performance of your database. So you need to rebuild the indexes afterwards to remove the fragmentation the shrink command caused.
If you want to shrink just the log file, you might want to use DBCC SHRINKFILE instead. I copied this example from MSDN:
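A minimal sketch of the rebuild step afterwards; the table name is a placeholder, and in practice you would repeat this for each fragmented table (or script it over sys.indexes):

```sql
-- Rebuild all indexes on one table to remove shrink-induced fragmentation
-- (dbo.MyTable is a placeholder)
ALTER INDEX ALL ON dbo.MyTable REBUILD;
```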
USE AdventureWorks2012;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE AdventureWorks2012
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (AdventureWorks2012_Log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL;
GO

Do not risk deleting your LDF files manually! If you do not need the transaction log, or wish to reduce it to any size you choose, follow these steps:
(Note this will affect your backups so be sure before doing so)
Right click database
Choose Properties
Click on the 'Options' tab.
Set recovery model to SIMPLE
Next, choose the FILES tab
Now make sure you select the LOG file and scroll right. Under the "Autogrowth" heading, click the ellipsis (...) button.
Then disable Autogrowth (this is optional and will prevent additional growth).
Then click OK, and set the "Initial Size" to the size you wish to have (I set mine to 20MB).
Click OK to save changes
Then right-click the DB again, and choose "Tasks > Shrink > Database", press OK.
Now compare your file sizes!:)
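The GUI steps above can be sketched in T-SQL as well; the database name and the logical log file name are placeholders (check sys.master_files for the real log name):

```sql
-- Scripted equivalent of the GUI steps (myDatabase and myDatabase_Log are placeholders)
ALTER DATABASE myDatabase SET RECOVERY SIMPLE;

-- Shrink the log file; 20 is the target size in MB, matching the 20MB "Initial Size"
USE myDatabase;
DBCC SHRINKFILE (myDatabase_Log, 20);

-- Optional: disable autogrowth on the log file
ALTER DATABASE myDatabase
MODIFY FILE (NAME = myDatabase_Log, FILEGROWTH = 0);
```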

I did it by:
Detach the database (including Drop Connections)
Remove the *.ldf file
Attach the database again, removing the now-missing *.ldf file from the file list in the Attach dialog
I did this for 4 different databases on SQL Server 2012; it should be the same for SQL Server 2008.
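Roughly the same detach/attach sequence can be scripted; the database name and MDF path are placeholders, and note this is only safe if the database was shut down cleanly, since ATTACH_REBUILD_LOG discards the old log:

```sql
-- 1. Detach, dropping connections first (names and path are placeholders)
USE master;
ALTER DATABASE myDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db 'myDatabase';

-- 2. Delete or rename the .ldf file in the file system, then:

-- 3. Re-attach without the log; SQL Server builds a fresh one
CREATE DATABASE myDatabase
ON (FILENAME = 'D:\SQLData\myDatabase.mdf')
FOR ATTACH_REBUILD_LOG;
```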

As you can read in the comments, removing the log is not a good solution. But if you are sure that you will not lose anything, you can change your DB recovery model to simple and then use
DBCC shrinkdatabase ('here your database name')
to clear your log.
The worst thing you can do is delete the log file from disk. If your server had unfinished transactions at the moment it stopped, those transactions will not roll back after restart, and you will end up with corrupted data.

You should back up your transaction log first; then there will be free space to shrink it. Changing to simple mode and then shrinking means you lose all the transaction log data that would be useful in the event of a restore.
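In T-SQL that sequence looks roughly like this; the database name, logical log name, backup path, and target size are all placeholders:

```sql
-- Back up the log first (full recovery model), which frees log space for reuse
BACKUP LOG myDatabase TO DISK = 'D:\Backups\myDatabase_log.trn';

-- Then shrink the log file to a reasonable target, e.g. 512 MB
USE myDatabase;
DBCC SHRINKFILE (myDatabase_Log, 512);
```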

The best way to clear ALL .ldf files (transaction log files) in all databases on an MS SQL server, IF all databases were backed up earlier, of course:
USE MASTER
print '*****************************************'
print '************ LDF Cleaner ****************'
print '*****************************************'
declare
@isql varchar(2000),
@dbname varchar(64),
@logfile varchar(128),
@recovery_model varchar(64)
declare c1 cursor for
SELECT d.name, mf.name as logfile, d.recovery_model_desc --, physical_name AS current_file_location, size
FROM sys.master_files mf
inner join sys.databases d
on mf.database_id = d.database_id
--where recovery_model_desc <> 'SIMPLE'
and d.name not in ('master','model','msdb','tempdb')
and mf.type_desc = 'LOG'
and d.state_desc = 'online'
open c1
fetch next from c1 into @dbname, @logfile, @recovery_model
while @@fetch_status <> -1
begin
print '----- OPERATIONS FOR: ' + @dbname + ' ------'
print 'CURRENT MODEL IS: ' + @recovery_model
select @isql = 'ALTER DATABASE ' + @dbname + ' SET RECOVERY SIMPLE'
print @isql
exec(@isql)
select @isql = 'USE ' + @dbname + '; checkpoint'
print @isql
exec(@isql)
select @isql = 'USE ' + @dbname + '; DBCC SHRINKFILE (' + @logfile + ', 1)'
print @isql
exec(@isql)
select @isql = 'ALTER DATABASE ' + @dbname + ' SET RECOVERY ' + @recovery_model
print @isql
exec(@isql)
fetch next from c1 into @dbname, @logfile, @recovery_model
end
close c1
deallocate c1
This is improved code, based on: https://www.sqlservercentral.com/Forums/Topic1163961-357-1.aspx
I recommend reading this article: https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-models-sql-server
Sometimes it is worthwhile to permanently set RECOVERY MODEL = SIMPLE on some databases, and thus get rid of log problems once and for all, especially when data (or the whole server) is backed up daily and intraday changes are not critical from a data-safety point of view.


Batch database creation using .BAK backups

I'm preparing to test an application in development. The application uses SQL Server 2019 for backend databases. It allows users to maintain multiple databases (for compliance and regulatory reasons).
QA testing scenarios require databases to be restored frequently to a known state before a staff member performs test cases in sequence. They then note the results of the test scenario.
There are approximately a dozen test scenarios to work on for this release, and an average of 6 databases to be used for most scenarios. For every scenario, this setup takes about 10 minutes and involves over 20 clicks.
Since scenarios will be tested before and after code changes, this means a time commitment of about 8 hours on setup alone. I suspect this can be reduced to about 1 minute since most of the time is spent navigating menus and the file system while restorations only take a few seconds each.
So I'd like to automate restorations. How can I automate the following sequence of operations inside of SSMS?
Drop all user created databases on the test instance of SQL Server
Create new or overwritten databases populated from ~6 .BAK files. I currently perform this one-by-one using "Restore Database", then adding a new file device, and finally launching the restorations.
EDIT: I usually work with SQL, C#, Batchfiles, or Python. But this task allows flexibility as long as it saves time and the restoration process is reliable. I would imagine either SSMS or a T-SQL query are the natural first places for me to begin.
We are currently using full backups and these seem to remain connected to their parent SQL Server instance and database. This caused me to encounter an SSMS bug when attempting to overwrite an existing database with a backup from another database on the same instance -- the restore fails to overwrite the target database, and the database that created the backup becomes stuck "restoring" until SSMS is closed or I manually restore it with the correct backup.
So as a minor addendum, what backup settings are appropriate for creating these independent copies of databases that have been backed up from other SQL Server instances?
I would suggest you utilize Database Snapshots instead. This allows you to take a snapshot of the database, and then revert back to it after changes are made. The disk space taken up by the snapshot is purely the difference in changes to pages, not the whole database.
Here is a script to create database snapshots for all user databases (you cannot do this for system DBs).
DECLARE @sql nvarchar(max);
SELECT @sql =
STRING_AGG(CAST(CONCAT(
'CREATE DATABASE ',
QUOTENAME(d.name + '_snap'),
' ON ',
f.files,
' AS SNAPSHOT OF ',
QUOTENAME(d.name),
';'
)
AS nvarchar(max)), '
' )
FROM sys.databases d
CROSS APPLY (
SELECT
files = STRING_AGG(CONCAT(
'(NAME = ',
QUOTENAME(f.name),
', FILENAME = ''',
REPLACE(f.physical_name + 'snap', '''', ''''''),
''')'
), ',
' )
FROM sys.database_files f
WHERE f.type_desc = 'ROWS'
) f
WHERE d.database_id > 4; -- not system DB
PRINT @sql;
EXEC sp_executesql @sql;
And here is a script to revert to the snapshots
DECLARE @sql nvarchar(max);
SELECT @sql =
STRING_AGG(CAST(CONCAT(
'RESTORE DATABASE ',
QUOTENAME(dSource.name),
' FROM DATABASE_SNAPSHOT = ',
QUOTENAME(dSnap.name),
';'
)
AS nvarchar(max)), '
' )
FROM sys.databases dSnap
JOIN sys.databases dSource ON dSource.database_id = dSnap.source_database_id;
PRINT @sql;
EXEC sp_executesql @sql;
And to drop the snapshots:
DECLARE @sql nvarchar(max);
SELECT @sql =
STRING_AGG(CAST(CONCAT(
'DROP DATABASE ',
QUOTENAME(d.name),
';'
)
AS nvarchar(max)), '
' )
FROM sys.databases d
WHERE d.source_database_id > 0;
PRINT @sql;
EXEC sp_executesql @sql;
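As for the addendum question about independent copies: one common approach (a sketch, not from the original answer) is to take the backups WITH COPY_ONLY, so they do not disturb the source database's backup chain, and restore them WITH REPLACE and MOVE so the copy is fully detached from the source. All names and paths below are placeholders; check the real logical file names with RESTORE FILELISTONLY first:

```sql
-- On the source instance: a copy-only full backup leaves the backup chain untouched
BACKUP DATABASE SourceDb
TO DISK = 'D:\Backups\SourceDb_copy.bak'
WITH COPY_ONLY, INIT;

-- On the test instance: overwrite the target, relocating files as needed
-- (logical names 'SourceDb' / 'SourceDb_log' are assumptions)
RESTORE DATABASE TestDb
FROM DISK = 'D:\Backups\SourceDb_copy.bak'
WITH REPLACE,
     MOVE 'SourceDb'     TO 'E:\SQLData\TestDb.mdf',
     MOVE 'SourceDb_log' TO 'E:\SQLData\TestDb_log.ldf';
```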

Shrink Transaction Log for none active server

Currently I have a few databases that are no longer active (no new entries), but my company does not want to drop them; they want to keep them for reference purposes.
Due to the huge database size, I have a hard time backing up and restoring them. (Company policy: from time to time we have to back up and restore to test that the database is still workable.)
So I'm thinking of shrinking the transaction log to reduce the time needed for the process. Is that advisable, given that there are no new entries?
FYI, the transaction log file (.ldf) is much bigger than the actual data file (.mdf).
Whether it is advisable or not, or if there is a better way to do this, please let me know.
I use this script
DECLARE @DBName varchar(255)
DECLARE @LogName varchar(255)
DECLARE @DATABASES_Fetch int
DECLARE DATABASES_CURSOR CURSOR FOR
select distinct
name, db_name(s_mf.database_id) dbName
from
sys.master_files s_mf
where
s_mf.state = 0 and -- ONLINE
has_dbaccess(db_name(s_mf.database_id)) = 1 -- Only look at databases to which we have access
and db_name(s_mf.database_id) not in ('master','tempdb','model','msdb','distribution')
and db_name(s_mf.database_id) not like 'MSDB%'
and db_name(s_mf.database_id) not like 'Report%'
and type=1 -- log files
order by
db_name(s_mf.database_id)
OPEN DATABASES_CURSOR
FETCH NEXT FROM DATABASES_CURSOR INTO @LogName, @DBName
WHILE @@FETCH_STATUS = 0
BEGIN
exec ('USE [' + @DBName + '] ; DBCC SHRINKFILE (N''' + @LogName + ''' , 0, TRUNCATEONLY)')
FETCH NEXT FROM DATABASES_CURSOR INTO @LogName, @DBName
END
CLOSE DATABASES_CURSOR
DEALLOCATE DATABASES_CURSOR
Yes, this is a good idea. The easiest way is to first change the recovery model to Simple (on the Options page of the database's Properties dialog).
Then you can use Tasks > Shrink > Files to reduce the transaction log space to zero. There are many resources that show how to shrink a file, but this article explains it nicely:
How to shrink the transaction log
Shrinking a file is not normally recommended, but it is advisable if you do not expect more transactions. It will save you time when you back up, and the file will take up less disk space.

SQL Server Ignoring IF Statement

Every night, I back up my production server and push the backup to my dev server. My dev server then has a job that first checks if the backup file exists; if so, it checks if the database exists in dev, drops the database if it does, and then restores from the file. This all works fine unless the file is not yet complete due to a slow transfer, etc. If the file is not completely downloaded when the job runs, the first step sees that it exists and drops the database. The next step tries to restore and of course fails.
The next day when the job runs, I would expect that when it checks whether the database exists, it would see that it does not, skip the drop, and just restore. However, what happens is that the job is unable to drop the database and simply fails at that point. This requires manual intervention to get the database restored, which is my problem. I'm less concerned with having no database on the server for a day (in theory), as I can tweak the schedule to restore sooner. What I am concerned with is why the IF statement is not working: why does the job attempt to drop the database regardless of whether it exists? Here's the T-SQL code that I am using:
DECLARE @output INT
DECLARE @SqlPath varchar(500) = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\PROD-01_prod_backup.bak'
EXECUTE master.dbo.xp_fileexist @SqlPath, @output OUT
IF @output = 1
BEGIN
IF (EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE ('[' + name + ']' = '[PROD-01]')))
BEGIN
ALTER DATABASE [PROD-01] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE [PROD-01]
END
RESTORE DATABASE [PROD-01] FROM DISK = @SqlPath
END
I'm not sure how this is happening, as I am unable to reproduce it, but a TRY/CATCH block is an ideal solution for this case:
SET XACT_ABORT ON
GO
DECLARE @output INT
DECLARE @SqlPath varchar(500) = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\PROD-01_prod_backup.bak'
EXECUTE master.dbo.xp_fileexist @SqlPath, @output OUT
IF @output = 1
BEGIN
IF (EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE (name = 'PROD-01')))
BEGIN TRY
ALTER DATABASE [PROD-01] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE [PROD-01]
END TRY
BEGIN CATCH
SELECT ERROR_MESSAGE();
END CATCH
RESTORE DATABASE [PROD-01] FROM DISK = @SqlPath
END
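An alternative sketch (not from the original answer) that avoids dropping the database at all: RESTORE ... WITH REPLACE overwrites the existing database in one step, and RESTORE VERIFYONLY rejects a half-copied backup file before anything is touched, so a slow transfer no longer leaves the server without the database:

```sql
DECLARE @output INT
DECLARE @SqlPath varchar(500) = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\PROD-01_prod_backup.bak'
EXECUTE master.dbo.xp_fileexist @SqlPath, @output OUT
IF @output = 1
BEGIN
    BEGIN TRY
        -- Fails here if the backup file is incomplete or unreadable
        RESTORE VERIFYONLY FROM DISK = @SqlPath;

        IF DB_ID('PROD-01') IS NOT NULL
            ALTER DATABASE [PROD-01] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

        -- Overwrites the existing database in place; no DROP needed
        RESTORE DATABASE [PROD-01] FROM DISK = @SqlPath WITH REPLACE;
        ALTER DATABASE [PROD-01] SET MULTI_USER;
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE();
    END CATCH
END
```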

How can I disable autogrowth server-wide in SQL Server

I have a database server with several databases used by restricted users. I need to prevent those users from changing the .MDF and .LDF autogrowth settings. Please guide me on how to restrict the users.
I think there are two ways to achieve this:
Disable autogrowth in the databases
Limit the maximum size of the MDF and LDF files
But I couldn't find any option in Management Studio to do either of these server-wide while also keeping them out of users' reach.
Thanks.
You can execute the following ALTER DATABASE command, which sets the autogrow option to off, for all databases using the undocumented stored procedure sp_MSforeachdb.
For a single database (Parallel Data Warehouse instances only):
ALTER DATABASE [database_name] SET AUTOGROW = OFF
For all databases:
EXEC sp_MSforeachdb "ALTER DATABASE [?] SET AUTOGROW = OFF"
Although this is not a server variable or instance setting, it might ease the task of updating all databases on the SQL Server instance.
Excluding the system databases, the following T-SQL can be executed to get a list of all database files; the ALTER DATABASE commands it outputs can then be executed:
select
'ALTER DATABASE [' + db_name(database_id) + '] MODIFY FILE ( NAME = N''' + name + ''', FILEGROWTH = 0)'
from sys.master_files
where database_id > 4
To prevent the data files' autogrow property from being changed, I prepared the SQL Server DDL trigger below; I once used a similar DDL trigger for logging DROP TABLE statements.
The following trigger will also prevent you from changing this property, so if you need to update it, you have to drop the trigger first.
CREATE TRIGGER prevent_filegrowth
ON ALL SERVER
FOR ALTER_DATABASE
AS
declare @SqlCommand nvarchar(max)
set @SqlCommand = ( SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)') );
if( isnull(charindex('FILEGROWTH', @SqlCommand), 0) > 0 )
begin
RAISERROR ('FILEGROWTH property cannot be altered', 16, 1)
ROLLBACK
end
GO
For more on DDL Triggers, please refer to Microsoft Docs
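Since the trigger blocks legitimate FILEGROWTH changes too, to adjust the setting yourself you would first drop it (server-scoped DDL triggers need the ON ALL SERVER clause), make the change, and then recreate it:

```sql
-- Remove the server-level DDL trigger so FILEGROWTH can be changed again
DROP TRIGGER prevent_filegrowth ON ALL SERVER;
```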

How to get the logical name of the transaction log in SQL Server 2005

I am trying to write a T-SQL routine that shrinks the transaction log file using DBCC SHRINKFILE, based on the logical name of the log. The DB_NAME() function gives you the logical name of the database. Is there an equivalent for the transaction log? If not, is there some other way to get this information? The default name for the transaction log is <<Database Name>>_log, but I would rather not rely on that.
You can use:
SELECT name
FROM sys.master_files
WHERE database_id = db_id()
AND type = 1
Log files have type = 1 for any database_id and all files for all databases can be found in sys.master_files.
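That query can feed DBCC SHRINKFILE through dynamic SQL, since arguments inside a DBCC command cannot be parameterized directly; the target size of 1 MB here is just an example:

```sql
-- Look up the current database's log file name and shrink it (sketch)
DECLARE @logname sysname, @sql nvarchar(400);

SELECT @logname = name
FROM sys.master_files
WHERE database_id = DB_ID()
  AND type = 1;  -- 1 = log file

SET @sql = N'DBCC SHRINKFILE (' + QUOTENAME(@logname) + N', 1);';
EXEC sp_executesql @sql;
```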
EDIT:
I should point out that you shouldn't be shrinking your log on a routine basis. Your transaction log should be sized appropriately to keep it from ever having to grow, and then left at that size. The transaction log cannot be instant-file-initialized and has to be zeroed out when space is added to it, which is a slow sequential operation that degrades performance.
Assuming a standard database (i.e., only one log file), the log file is always file_id = 2. This applies even if you have multiple data files (id = 3+ for NDFs).
DBCC also accepts the file id, so DBCC SHRINKFILE (2, ...) will always work. You can't parameterize inside a DBCC command, so this avoids dynamic SQL. If you want the name, use FILE_NAME(2).
select Name
from sys.database_files
generates, for example:
SomeDb_Data
SomeDb_Log
SQL Server 2012:
DECLARE @command varchar(1000)
SELECT @command = 'USE [?] DBCC SHRINKFILE (2 , 0, TRUNCATEONLY)'
EXEC sp_MSforeachdb @command
--OR simply
EXEC sp_MSforeachdb 'USE [?] DBCC SHRINKFILE (2 , 0, TRUNCATEONLY)'
