I've been researching how to delete a specific backup file through an SQL query, but I only find results about "deleting backups older than a date". That is not what I want. I want to keep old backups, but I want to be able to delete a specific backup by its ID.
I can easily remove the entries for a given backup from the msdb tables, along with its restore history, but I would also like to be able to delete the files themselves through an SQL query (I know their full paths, as they are stored in the database), so that they don't keep wasting disk space.
The procedure "xp_delete_file" doesn't seem to allow deleting a specific file.
I assume that if there is a procedure to delete old files, there should be some way to delete a specific file. Please don't worry about security here.
This may be old, but it might help someone. xp_delete_file can be used to delete a specific backup file. Try the code below:
EXECUTE master.dbo.xp_delete_file 0,N'c:\backup\backup1.bak'
--Define a backup device and physical name.
USE AdventureWorks2012 ;
GO
EXEC sp_addumpdevice 'disk', 'mybackupdisk', 'c:\backup\backup1.bak' ;
GO
--Delete the backup device and the physical name.
USE AdventureWorks2012 ;
GO
EXEC sp_dropdevice 'mybackupdisk', 'delfile' ;
GO
http://technet.microsoft.com/en-us/library/ms188711.aspx
This is what I needed:
EXEC xp_cmdshell 'del c:\backup\file.bak'
You may need to enable the command first:
EXEC sp_configure 'show advanced options', 1
GO
EXEC sp_configure 'xp_cmdshell', 1
GO
RECONFIGURE
GO
Create a backup-device, with a physical name that points to the backup-file:
exec master..sp_addumpdevice @devtype = 'disk',
@logicalname = '<logical_name>',
@physicalname = '<path + physical filename>'
Then, execute:
exec master..sp_dropdevice '<logical_name>', 'delfile'
And your file is gone!
Physical filenames can be found in the table 'msdb..backupmediafamily'
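For example, a quick way to look up the physical file for a given backup set (the `backup_set_id` value here is illustrative, substitute your own):

```sql
-- Find the physical path of a specific backup set in msdb
SELECT bs.backup_set_id, bs.database_name, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.backup_set_id = 42;
```

The `physical_device_name` column gives the full path you would pass to the delete step.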
At work, we have production databases on which developers have read permission. When developers have to fix something in the database, they must test their scripts in a copy of the production databases and then ask the DBA team to execute it in production.
Sometimes however, the data that must be fixed is not in the test databases. Developers then ask for a new copy of production databases, and this can take a lot of time.
Of course, we could grant them update permission and ask them to use BEGIN TRANSACTION / ROLLBACK, but it is too risky. Nobody wants that, not even the developers.
My question: is it possible to create a profile in SQL Server, or grant a special permission, that would allow developers to execute UPDATE and DELETE commands but would always, no matter what they wrote, roll back after a GO or after the last command issued in a session?
This would be really helpful to test scripts before sending them to production.
You could create a sproc and give developers EXEC access on that sproc only (SOLUTION #1 - SPROCS). This is probably the most elegant solution, since you want them to have a simple way to run their queries while still controlling their permissions on the production environment. An example call would be: EXEC [dbo].[usp_rollback_query] 'master', 'INSERT INTO table1 SELECT * FROM table2'
SOLUTION #1
USE [DATABASENAME]
GO
ALTER PROC dbo.usp_rollback_query
(
@db VARCHAR(128),
@query NVARCHAR(max)
)
AS
BEGIN
DECLARE @main_query NVARCHAR(max) = 'USE [' + @db + ']
' + @query;
BEGIN TRAN
EXEC sp_executesql @main_query;
ROLLBACK TRAN
END
If you can afford to have a snapshot created and dropped each time, SOLUTION #2 - DB SNAPSHOTS is the best way to go about it. It's super fast; the only two drawbacks are that you need to kick people off the DB before you can restore, and it will revert all changes made since the snapshot was created.
SOLUTION #2
-- CREATE SNAPSHOT
CREATE DATABASE [DATABASENAME_SS1]
ON
(
NAME = DATABASENAME,
FILENAME = 'your\path\DATABASENAME_SS1.ss'
) AS SNAPSHOT OF [DATABASENAME];
GO
-- let devs run whatever they want
-- CLOSE CONNECTIONS
USE [master];
GO
ALTER DATABASE [DATABASENAME]
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
-- RESTORE DB
RESTORE DATABASE [DATABASENAME]
FROM DATABASE_SNAPSHOT = 'DATABASENAME_SS1';
GO
-- CLEANUP SNAPSHOT COPY
DROP DATABASE [DATABASENAME_SS1];
I don't think ROLLBACK on each query is a good idea or a good design, but if you have to go that route, you would need to use triggers. The limitation with triggers is that a DATABASE- or SERVER-level trigger can only fire for DDL, not DML. Creating triggers on each TABLE object that you think is being altered is doable; the drawback is that you need to know which tables are being modified, and even then it's quite messy. Regardless, please look at SOLUTION #3 - TABLE TRIGGERS below. To make this better, you could create a role and roll back only if the user is a member of that role.
SOLUTION #3
USE DATABASENAME
GO
ALTER TRIGGER dbo.tr_rollback_devs
ON dbo.table_name
AFTER INSERT, DELETE, UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF SYSTEM_USER IN ('dev1', 'dev2')
ROLLBACK
END
GO
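The role-based variant mentioned above could be sketched like this (the role name `devs_rollback` is an assumption, create it and add the developers as members first):

```sql
-- Roll back DML for members of a hypothetical devs_rollback role,
-- instead of hardcoding individual login names in the trigger
ALTER TRIGGER dbo.tr_rollback_devs
ON dbo.table_name
AFTER INSERT, DELETE, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- IS_MEMBER returns 1 when the current user belongs to the role
    IF IS_MEMBER('devs_rollback') = 1
        ROLLBACK;
END
GO
```

This way you manage who is affected by changing role membership, not the trigger body.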
I have a Visual Studio solution with a database project which I spit out to SSMS whenever I hit Publish.
Recently I noticed that the default Data File Location, for the DB and log files, is the server's C:\ drive. Obviously it's bad practice to store a huge database, with its files, on the C:\ drive. So I want the files moved to another drive at some point during the Publish.
This is what I am trying to do:
I hit Publish, which creates the DB and puts the DB & log files onto C:\, as usual (1)
at this point, no data has been ingested, so the DB is just an empty shell with SPs, functions, etc (2)
I then execute the code block below which puts the DB & log files onto my Y:\ drive (3)
I then execute the rest of my Post Deployment script (4)
However, the script now only gets as far as completing (3). As soon as it starts (4), it hits a line which calls a procedure and then fails with:
Could not find stored procedure 'dbo.usp_mySP'
If I don't run the code below, and thus don't move the DB files, the whole project runs smoothly.
-- SOME CODE HERE... WORKS FINE...
-- set DB offline
ALTER DATABASE [dbName] SET OFFLINE
-- physically move the DB's primary file and log file (using the command line)
DECLARE @cmd VARCHAR(512)
SET @cmd = 'move "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\dbName_Primary.mdf" "Y:\SQLData"'
EXEC xp_cmdshell @cmd
SET @cmd = 'move "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\dbName_Primary.ldf" "Y:\SQLData"'
EXEC xp_cmdshell @cmd
-- logically move the files using T-SQL
ALTER DATABASE [dbName] MODIFY FILE
(
NAME = [dbName]
,FILENAME = 'Y:\SQLData\dbName_Primary.mdf'
)
ALTER DATABASE [dbName] MODIFY FILE
(
NAME = [dbName_log]
,FILENAME = 'Y:\SQLData\dbName_Primary.ldf'
)
-- set DB online
ALTER DATABASE [dbName] SET ONLINE
-- REST OF SCRIPT HERE, INCLUDING CALL TO [dbo].[usp_mySP]... CRASHES
Try inserting a GO statement after altering the database to online.
Edit
Since that didn't do it, I'm beginning to wonder whether you're executing the statements against the correct database. You might need to explicitly name the procedure as [database].[schema].[spname], or insert a GO statement and a USE [database] statement right after.
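Both suggestions sketched out (`dbName` and `usp_mySP` stand in for your actual names):

```sql
-- Variant 1: fully qualify the procedure call
EXEC [dbName].[dbo].[usp_mySP];

-- Variant 2: re-establish the database context after bringing the DB online
ALTER DATABASE [dbName] SET ONLINE
GO
USE [dbName]
GO
EXEC [dbo].[usp_mySP];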
By selecting "Publish" in the context menu of a VS 2015 Database project, I can create a script, which contains all the necessary commands to deploy the database to the SQL Server ("xyz.publish.sql").
The database name and its paths in this script are declared as variables:
:setvar DatabaseName "myDatabase"
:setvar DefaultFilePrefix "myDatabase"
:setvar DefaultDataPath "D:\Databases\"
:setvar DefaultLogPath "D:\Databases\"
The filenames also seem to be automatically generated:
PRIMARY(NAME = [$(DatabaseName)], FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_Primary.mdf')
LOG ON (NAME = [$(DatabaseName)_log], FILENAME = N'$(DefaultLogPath)$(DefaultFilePrefix)_Primary.ldf')...
Where can I set the paths and the filenames? I don't want "_Primary" to be attached at the filenames and the paths need an additional sub-folder.
If I change the publish script, my changes will probably be overwritten the next time Visual Studio generates it.
You can add a pre-deployment script to the project which detaches the database, moves/renames the files, then reattaches the database using the new files.
-- detach db before moving physical files
USE [master]
GO
exec sp_detach_db @dbname = N'$(DatabaseName)'
GO
-- enable xp_cmdshell
exec sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
exec sp_configure 'xp_cmdshell', 1 -- 0 = Disable , 1 = Enable
GO
RECONFIGURE
GO
-- move physical files
EXEC xp_cmdshell 'MOVE "$(DefaultDataPath)$(DefaultFilePrefix)_Primary.mdf" "C:\$(DatabaseName)\$(DatabaseName).mdf"'
EXEC xp_cmdshell 'MOVE "$(DefaultLogPath)$(DefaultFilePrefix)_Primary.ldf" "C:\$(DatabaseName)\$(DatabaseName)_log.ldf"'
GO
-- reattach db with new filepath
CREATE DATABASE [$(DatabaseName)] ON
(NAME = [$(DatabaseName)], FILENAME = 'C:\$(DatabaseName)\$(DatabaseName).mdf'),
(NAME = [$(DatabaseName)_log], FILENAME = 'C:\$(DatabaseName)\$(DatabaseName)_log.ldf')
FOR ATTACH
GO
-- disable xp_cmdshell
exec sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
exec sp_configure 'xp_cmdshell', 0 -- 0 = Disable , 1 = Enable
GO
RECONFIGURE
GO
USE [$(DatabaseName)];
GO
Some notes on this:
I hardcoded C:\ as the new location for the files for simplicity. You're better off creating a SQLCMD variable to store this path.
If xp_cmdshell 'MOVE ...' fails, it will do so silently. To keep my answer simple I did not include any error checking, but you can roll your own by inserting the results of xp_cmdshell into a temp table. See How to capture the error output from xp_cmdshell in SQL Server.
You may run into permissions problems with the xp_cmdshell 'MOVE ... command. In that case, you may need to adjust the permissions of the source and target paths in the MOVE statement. You may also need to run the command as a different user -- see here (Permissions section) or here for starters.
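The SQLCMD-variable suggestion in the first note could look like this (`NewDataPath` is an assumed variable name; in a real project you would define it in the publish profile rather than in the script):

```sql
:setvar NewDataPath "D:\SQLData\"

-- Use the variable instead of a hardcoded drive
EXEC xp_cmdshell 'MOVE "$(DefaultDataPath)$(DefaultFilePrefix)_Primary.mdf" "$(NewDataPath)$(DatabaseName).mdf"'
```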
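The temp-table capture mentioned in the second note could be sketched as follows (the paths are illustrative):

```sql
-- Capture xp_cmdshell output so a failed MOVE is not silent
CREATE TABLE #cmd_output (line NVARCHAR(255));
INSERT INTO #cmd_output
EXEC xp_cmdshell 'MOVE "C:\old\db.mdf" "D:\new\db.mdf"';

-- Inspect for error text; NULL rows are normal trailing output
SELECT line FROM #cmd_output WHERE line IS NOT NULL;
DROP TABLE #cmd_output;
```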
Yes, they will be overwritten, but that's the way it works: you have to change the script. The other option is to not specify the locations at all, in which case SQL Server will use the instance defaults, but that too involves changing the script.
I am facing a problem. I created a database and then restored a backup of it; the backup was taken from that same existing database. After the restore succeeded, the original (parent) database is showing as:
"BRPL_Payroll _31-01-2014" (Restoring.........)
I then executed the query below:
RESTORE DATABASE BRPL_Payroll _31-01-2014 ;WITH RECOVERY
but it reports "incorrect syntax at '-'". I think it is because my database name contains the date 31-01-2014.
How can I execute the query above?
You can easily restore a database using the UI: before pressing OK to restore, select Script, and SQL Server will show you the query you need to run to execute that step.
If you don't have direct access, here is a restore script which restores a database from a file. Of course, you have to replace the backup path and the database name:
USE [master]
ALTER DATABASE [YURDATABASENAME] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
RESTORE DATABASE [YURDATABASENAME] FROM DISK = N'C:\your\backup\path\backup.bak' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 5
ALTER DATABASE [YURDATABASENAME] SET MULTI_USER
GO
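As for the original syntax error: a database name that contains spaces or hyphens must be wrapped in square brackets (a delimited identifier), otherwise the parser stops at the '-'. A minimal sketch:

```sql
-- Bracket-delimit names that contain spaces or hyphens
RESTORE DATABASE [BRPL_Payroll _31-01-2014] WITH RECOVERY;
```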
If I stop SQL Server and then delete the database's .LDF file (transaction log file), what will happen? Will the database be marked suspect, or will SQL Server just create a new one automatically? (SQL Server 2008 R2.)
Also, my .LDF file is too big. How do I manage it? Can I shrink it or delete it?
Please suggest in query form.
You should not delete any of the database files, since that can severely damage your database!
If you run out of disk space you might want to split your database in multiple parts. This can be done in the database's properties. So you are able to put each part of the database to a different storage volume.
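The same split can be done in T-SQL instead of the properties dialog; a sketch under assumed names and paths:

```sql
-- Add a secondary data file on a different storage volume
ALTER DATABASE myDatabase
ADD FILE
(
    NAME = myDatabase_data2,
    FILENAME = 'E:\SQLData\myDatabase_data2.ndf',
    SIZE = 512MB,
    FILEGROWTH = 256MB
);
```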
You can also shrink the transaction log file if you change the recovery model from full to simple, using the following commands:
ALTER DATABASE myDatabase SET RECOVERY SIMPLE
DBCC SHRINKDATABASE (myDatabase , 5)
Switching back to full recovery is possible as well:
ALTER DATABASE myDatabase SET RECOVERY FULL
Update about SHRINKDATABASE, or what I did not know when answering this question:
Although the method above gets rid of some unused space, it has severe disadvantages for database files (MDF): it fragments your indexes, worsening the performance of your database. So you need to rebuild the indexes afterwards to get rid of the fragmentation the shrink command caused.
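That rebuild step, sketched for one table (the table name is illustrative; in practice you would loop over all fragmented indexes):

```sql
-- Rebuild all indexes on a table to remove shrink-induced fragmentation
ALTER INDEX ALL ON dbo.myTable REBUILD;
```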
If you want to shrink just the log file, you might want to use SHRINKFILE instead. I copied this example from MSDN:
USE AdventureWorks2012;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE AdventureWorks2012
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (AdventureWorks2012_Log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL;
GO
Do not risk deleting your LDF files manually! If you do not need transaction files or wish to reduce them to any size you choose, follow these steps:
(Note this will affect your backups so be sure before doing so)
Right click database
Choose Properties
Click on the 'Options' tab.
Set recovery model to SIMPLE
Next, choose the FILES tab
Now make sure you select the LOG file and scroll right. Under the "Autogrowth" heading click the dots ....
Then disable Autogrowth (This is optional and will limit additional growth)
Then click OK and set the "Initial Size" to the size you wish to have (I set mine to 20MB)
Click OK to save changes
Then right-click the DB again, choose "Tasks > Shrink > Database", and press OK.
Now compare your file sizes! :)
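The UI steps above can be sketched in T-SQL roughly as follows (database and log file names are assumptions; note that switching to simple recovery breaks the log backup chain):

```sql
-- Set recovery model to SIMPLE
ALTER DATABASE myDatabase SET RECOVERY SIMPLE;

-- Optionally disable log autogrowth
ALTER DATABASE myDatabase
MODIFY FILE (NAME = myDatabase_log, FILEGROWTH = 0);

-- Shrink the log file to roughly 20 MB (the target size is in MB)
DBCC SHRINKFILE (myDatabase_log, 20);
```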
I did it by
Detach the database (include Drop Connections)
Remove the *.ldf file
Attach the database, but remove the expected *.ldf file
I did this for 4 different databases in SQL 2012; it should be the same for SQL 2008.
As you can read in the comments, removing the log is not a good solution. But if you are sure that you will not lose anything, you can just change your DB recovery model to simple and then use
DBCC shrinkdatabase ('here your database name')
to clear your log.
The worst thing you can do is delete the log file from disk. If your server had unfinished transactions at the moment it was stopped, those transactions will not roll back after restart and you will end up with corrupted data.
You should back up your transaction log first; then there will be free space to shrink it. Changing to simple mode and then shrinking means you lose the transaction log data that would be useful in the event of a restore.
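That safer sequence, sketched under assumed database, file, and backup-path names:

```sql
-- Back up the log first so its space can be reused while preserving the backup chain
BACKUP LOG myDatabase TO DISK = N'D:\Backups\myDatabase_log.trn';

-- Then shrink the log file to a reasonable target size (in MB)
DBCC SHRINKFILE (myDatabase_log, 512);
```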
The best way to clear ALL LDF files (transaction log files) in all databases on an MS SQL server, IF all databases were backed up earlier, of course:
USE MASTER
print '*****************************************'
print '************ LDF Cleaner ****************'
print '*****************************************'
declare
@isql varchar(2000),
@dbname varchar(64),
@logfile varchar(128),
@recovery_model varchar(64)
declare c1 cursor for
SELECT d.name, mf.name as logfile, d.recovery_model_desc --, physical_name AS current_file_location, size
FROM sys.master_files mf
inner join sys.databases d
on mf.database_id = d.database_id
where d.name not in ('master','model','msdb','tempdb')
--and d.recovery_model_desc <> 'SIMPLE'
and mf.type_desc = 'LOG'
and d.state_desc = 'online'
open c1
fetch next from c1 into @dbname, @logfile, @recovery_model
While @@fetch_status <> -1
begin
print '----- OPERATIONS FOR: ' + @dbname + ' ------'
print 'CURRENT MODEL IS: ' + @recovery_model
select @isql = 'ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE'
print @isql
exec(@isql)
select @isql = 'USE [' + @dbname + ']; checkpoint'
print @isql
exec(@isql)
select @isql = 'USE [' + @dbname + ']; DBCC SHRINKFILE ([' + @logfile + '], 1)'
print @isql
exec(@isql)
select @isql = 'ALTER DATABASE [' + @dbname + '] SET RECOVERY ' + @recovery_model
print @isql
exec(@isql)
fetch next from c1 into @dbname, @logfile, @recovery_model
end
close c1
deallocate c1
This is improved code, based on: https://www.sqlservercentral.com/Forums/Topic1163961-357-1.aspx
I recommend reading this article: https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-models-sql-server
Sometimes it is worthwhile to permanently enable RECOVERY MODEL = SIMPLE on some databases and thus get rid of log problems once and for all, especially when we back up the data (or the server) daily and intraday changes are not critical from a safety point of view.