I have a CI/CD pipeline that creates a database snapshot every time it runs. Now I want to achieve something like "delete all previous snapshots and create the new one", or maybe "delete all previous snapshots and keep only the two or three most recent ones".
I create each snapshot with a name that has the timestamp appended.
As of now, I have been able to find only one command, DROP DATABASE snapshot_name, so I'm not sure whether it's even possible to delete all historical snapshots somehow...
Edit:
Running the code below does roughly the required job. Any idea how I can translate it into a PowerShell script so that I can run it in the pipeline?
DECLARE @Sql AS NVARCHAR(MAX) =
    (SELECT 'DROP DATABASE [' + name + ']; '
     FROM sys.databases
     WHERE name LIKE '%Dev%'
     FOR XML PATH(''));
EXEC sys.sp_executesql @Sql;
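For the retention idea ("keep only the last two or three"), a minimal sketch along the same lines, where MyDb is a placeholder for the real source database, could drop all but the two most recent snapshots using sys.databases.create_date instead of the timestamp in the name:
DECLARE @Sql nvarchar(max) = N'';

SELECT @Sql = @Sql + N'DROP DATABASE ' + QUOTENAME(s.name) + N'; '
FROM (
    SELECT snap.name,
           rn = ROW_NUMBER() OVER (PARTITION BY snap.source_database_id
                                   ORDER BY snap.create_date DESC)
    FROM sys.databases snap
    JOIN sys.databases src ON src.database_id = snap.source_database_id
    WHERE src.name = N'MyDb'   -- placeholder source database name
) AS s
WHERE s.rn > 2;                -- everything older than the two most recent snapshots

PRINT @Sql;                    -- inspect the generated statements before running them
EXEC sys.sp_executesql @Sql;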
Related
I'm preparing to test an application in development. The application uses SQL Server 2019 for backend databases. It allows users to maintain multiple databases (for compliance and regulatory reasons).
QA testing scenarios require databases to be restored frequently to a known state before a staff member performs test cases in sequence. They then note the results of the test scenario.
There are approximately a dozen test scenarios to work on for this release, and an average of 6 databases to be used for most scenarios. For every scenario, this setup takes about 10 minutes and involves over 20 clicks.
Since scenarios will be tested before and after code changes, this means a time commitment of about 8 hours on setup alone. I suspect this can be reduced to about 1 minute since most of the time is spent navigating menus and the file system while restorations only take a few seconds each.
So I'd like to automate restorations. How can I automate the following sequence of operations inside of SSMS?
Drop all user-created databases on the test instance of SQL Server
Create new or overwritten databases populated from ~6 .BAK files. I currently perform this one-by-one using "Restore Database", then adding a new file device, and finally launching the restorations.
EDIT: I usually work with SQL, C#, batch files, or Python. But this task allows flexibility as long as it saves time and the restoration process is reliable. I would imagine either SSMS or a T-SQL query are the natural first places for me to begin.
We are currently using full backups and these seem to remain connected to their parent SQL Server instance and database. This caused me to encounter an SSMS bug when attempting to overwrite an existing database with a backup from another database on the same instance -- the restore fails to overwrite the target database, and the database that created the backup becomes stuck "restoring" until SSMS is closed or I manually restore it with the correct backup.
So as a minor addendum, what backup settings are appropriate for creating these independent copies of databases that have been backed up from other SQL Server instances?
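For reference, the T-SQL equivalent of one of these manual restores is roughly the following (the database name, backup path, and logical file names are illustrative); as far as I understand, WITH REPLACE plus MOVE is what lets a backup taken from another database or instance overwrite an existing database cleanly:
RESTORE DATABASE TestDb1
FROM DISK = N'C:\Backups\TestDb1.bak'
WITH REPLACE,                                            -- overwrite the existing database
     MOVE N'TestDb1_Data' TO N'C:\Data\TestDb1.mdf',     -- logical data file -> new physical file
     MOVE N'TestDb1_Log'  TO N'C:\Data\TestDb1_log.ldf', -- logical log file  -> new physical file
     RECOVERY;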
I would suggest you utilize Database Snapshots instead. This allows you to take a snapshot of the database, and then revert back to it after changes are made. The disk space taken up by the snapshot is purely the difference in changes to pages, not the whole database.
Here is a script to create database snapshots for all user databases (you cannot do this for system DBs).
DECLARE @sql nvarchar(max);

SELECT @sql =
    STRING_AGG(CAST(CONCAT(
        'CREATE DATABASE ',
        QUOTENAME(d.name + '_snap'),
        ' ON ',
        f.files,
        ' AS SNAPSHOT OF ',
        QUOTENAME(d.name),
        ';'
    ) AS nvarchar(max)), '
')
FROM sys.databases d
CROSS APPLY (
    SELECT
        files = STRING_AGG(CONCAT(
            '(NAME = ',
            QUOTENAME(f.name),
            ', FILENAME = ''',
            REPLACE(f.physical_name + 'snap', '''', ''''''),
            ''')'
        ), ',
')
    -- sys.master_files lists each database's own data files; sys.database_files
    -- would only return the files of the database this script runs in
    FROM sys.master_files f
    WHERE f.database_id = d.database_id
      AND f.type_desc = 'ROWS'
) f
WHERE d.database_id > 4; -- not system DB

PRINT @sql;
EXEC sp_executesql @sql;
And here is a script to revert to the snapshots
DECLARE @sql nvarchar(max);

SELECT @sql =
    STRING_AGG(CAST(CONCAT(
        'RESTORE DATABASE ',
        QUOTENAME(dSource.name),
        ' FROM DATABASE_SNAPSHOT = ',
        QUOTENAME(dSnap.name),
        ';'
    ) AS nvarchar(max)), '
')
FROM sys.databases dSnap
JOIN sys.databases dSource ON dSource.database_id = dSnap.source_database_id;

PRINT @sql;
EXEC sp_executesql @sql;
And to drop the snapshots:
DECLARE @sql nvarchar(max);

SELECT @sql =
    STRING_AGG(CAST(CONCAT(
        'DROP DATABASE ',
        QUOTENAME(d.name),
        ';'
    ) AS nvarchar(max)), '
')
FROM sys.databases d
WHERE d.source_database_id > 0; -- only database snapshots have a source database

PRINT @sql;
EXEC sp_executesql @sql;
I would like to know how I can switch from one database to another within the same script. I have a script that reads the header information from a SQL Server .BAK file and loads the information into a temp table in a test database. Once the information is in the temp table (in the test database), I run the following script to get the database name.
This part works fine.
INSERT INTO #HeaderInfo EXEC('RESTORE HEADERONLY
FROM DISK = N''I:\TEST\database.bak''
WITH NOUNLOAD')
DECLARE @databasename varchar(128);
SET @databasename = (SELECT DatabaseName FROM #HeaderInfo);
The problem is that when I try to run the following script, nothing happens. The new database is never selected and the script still runs in the context of the test database.
EXEC ('USE ' + @databasename)
The goal is to switch to the new database (USE NewDatabase) so that the other part of my script (DBCC CHECKDB) can run. This script checks the integrity of the database and saves the results to a temp table.
What am I doing wrong?
You can't expect a use statement to work in this fashion using dynamic SQL. Dynamic SQL is run in its own context, so as soon as it has executed, you're back to your original context. This means that you'd have to include your SQL statements in the same dynamic SQL execution, such as:
declare @db sysname = 'tempdb';
exec ('use ' + @db + '; dbcc checkdb;')
You can alternatively use fully qualified names for your DB objects and specify the database name in your dbcc command, even with a variable, as in:
declare @db sysname = 'tempdb';
dbcc checkdb (@db);
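Applied to the scenario in the question, a minimal sketch (the #HeaderInfo temp table and the DatabaseName column are taken from the question above; everything else is illustrative) might look like this:
DECLARE @databasename sysname = (SELECT DatabaseName FROM #HeaderInfo);

-- Option 1: run USE and DBCC CHECKDB inside the same dynamic SQL batch
DECLARE @sql nvarchar(max) =
    N'USE ' + QUOTENAME(@databasename) + N'; DBCC CHECKDB WITH NO_INFOMSGS;';
EXEC sys.sp_executesql @sql;

-- Option 2: skip USE entirely and pass the database name to DBCC CHECKDB
DBCC CHECKDB (@databasename) WITH NO_INFOMSGS;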
You can't do this because the scope of EXEC is limited to the dynamic query itself. When EXEC ends, the context returns to its original state; the context only changes inside the EXEC. So you need to do your work in one big dynamic statement, like:
DECLARE @str NVARCHAR(MAX)
SET @str = 'select * from table1
USE DatabaseName
select * from table2'
EXEC (@str)
A new column has been added to a table, but the new column was not added at the end of the table definition (as the rightmost column); it was added in the middle of the table.
When I try to commit this in Redgate SQL Source Control, I get the warning "These changes may result in data loss"
Will data loss really occur?
Is there a way to preview the change script to confirm that no data will be lost?
Can I copy the script and easily turn it into a Migrations V2 script?
Will I just have to
Edit the table in SSMS and move the new column to the end
or write a migration script?
If so, are there any handy tools to do the repetitive stuff?
Up front disclosure that I work for Red Gate on SQL Source Control.
That change requires the table to be re-created. By default SSMS won't let you save a change like that, so that safeguard must have been disabled in your SSMS. The option is under Tools->Options->Designers->Table and Database Designers->Prevent saving changes that require table re-creation.
With that safeguard disabled, SQL Source Control has picked the change up as a potential data loss situation, and prompted to see if you want to add a migration script.
If other developers within your team pull this change in through a get latest, SQL Source Control will warn them about any potential data loss with more details, depending on the current state of their local database. If the only change is adding columns to an existing table then this will not drop the data in columns that are unchanged.
If you are deploying to another DB (e.g. staging/UAT/prod) and you have SQL Compare you can use that to see exactly what will be applied to a DB if you try and run this against another non-local database. Choose the create deployment script option and you can sanity check the SQL before running.
As you say, adding the column to the end of the table will avoid the need for the rebuild, so that's probably the simplest way to avoid this if you don't need to worry about where the column is.
Alternatively you can add a migration script (a rough sketch follows this list) to:
Create a new table with the new structure using a temp name
Copy the existing data to the temp table
Drop the existing table
Rename the new temp table to the original name
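A rough sketch of such a migration script, using hypothetical names (dbo.MyTable, with NewCol placed before ExistingCol; your real script would use your own table definition):
-- 1. Create a new table with the desired column order under a temporary name
CREATE TABLE dbo.MyTable_tmp
(
    Id          int          NOT NULL PRIMARY KEY,
    NewCol      nvarchar(50) NULL,
    ExistingCol nvarchar(50) NOT NULL
);

-- 2. Copy the existing data into the temporary table
INSERT INTO dbo.MyTable_tmp (Id, ExistingCol)
SELECT Id, ExistingCol
FROM dbo.MyTable;

-- 3. Drop the original table
DROP TABLE dbo.MyTable;

-- 4. Rename the temporary table to the original name
EXEC sp_rename 'dbo.MyTable_tmp', 'MyTable';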
You mention Migrations v2, the beta feature that changes how migrations work in order to better support branching and merging and DVCS systems. See http://www.red-gate.com/migrations
Version 1 migration scripts will need some modifications in order to be converted to a v2 migration script. It's a fairly trivial change. We're working on documenting this at the moment, and please reach out to us on the Google Group if you'd like more information on this change. https://groups.google.com/forum/#!forum/red-gate-migrations
I moved the column to the end of the table using SSMS to negate the need for a migration script.
In a similar scenario, where it was not convenient to move the column, this is what I did to convert an SSMS script to a Migrations V2 script (a rough skeleton of the result is sketched after the list):
Undo the change in SSMS (deleted the column)
Redo the change in SSMS, but instead of saving the change direct to the database, I saved the change script
Modified the change script
Trimmed the SSMS transaction & environment wrapper
Added a guard clause: IF COL_LENGTH('MyTable','MyColumn') IS NULL
Wrapped the script in BEGIN TRAN - ROLLBACK TRAN to test the script without dirtying the database
Replaced GO with END BEGIN
Tested within rolled-back transaction
Removed BEGIN TRAN - ROLLBACK TRAN development wrapper
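For illustration, the skeleton of the resulting script looked roughly like this (MyTable and MyColumn are the hypothetical names from the guard clause above; the body is the trimmed SSMS-generated rebuild):
IF COL_LENGTH('MyTable', 'MyColumn') IS NULL
BEGIN
    -- trimmed SSMS change script goes here:
    -- create the re-ordered table under a temporary name, copy the data,
    -- drop the original table, then rename the temporary table
END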
Here is a simple SQL script that will insert a column into a database table without data loss.
Let's say CCDetails is the table into which we want to insert the column GlobaleNote just before the column Sys_CreatedBy:
declare @str1 nvarchar(1000)
declare @tableName nvarchar(1000)
set @tableName = 'CCDetails'
set @str1 = ''
SELECT @str1 = @str1 + ', ' + COLUMN_NAME
FROM Information_Schema.Columns
WHERE Table_Name = @tableName
ORDER BY Ordinal_Position
set @str1 = right(@str1, len(@str1) - 2)
set @str1 = 'select ' + @str1 + ' into ' + @tableName + 'Temp from ' + @tableName + ' ; Drop Table ' + @tableName + ' ; EXEC sp_rename ' + @tableName + 'Temp, ' + @tableName
set @str1 = REPLACE(@str1, 'Sys_CreatedBy', 'CAST('''' as nvarchar(max)) As GlobaleNote , Sys_CreatedBy')
exec sp_executesql @str1
I have three websites which use an abstract database structure with tables like Items, Places, Categories, etc. and stored procedures like GetItemsByCategory, GetRelatedItems, etc. I'm using exactly the same database structure for these 3 different websites.
From a code perspective I'm using the same code for all websites (except the HTML, which is specific to each one), and all the common code lives in a few projects shared by all the websites, so every time I detect a bug (which affects all websites) I just fix it in one place (the shared part) and all the websites automatically get the fix.
I'm using ASP.NET MVC 3 and SQL Server.
Every time I want to extend some functionality and need a new table, stored procedure, or something else database-related, I have to make the modification in each database.
Do you know of any approach I could use to keep the same flexibility but make database modifications only once for all websites?
Do you think I'm using a good approach, or should I use something different?
If the databases are on a single server, you could generate the script for the procedure from Management Studio, and make sure to use the option to "check for object existence" (Tools > Options > SQL Server Object Explorer > Scripting). This will yield something like this (most importantly it produces your stored procedure code as something you can execute using dynamic SQL):
USE DBName;
GO
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
IF NOT EXISTS (...)
BEGIN
EXEC dbo.sp_executesql @statement = N'CREATE PROCEDURE dbo.whatever ...
'
END
GO
Now that you have this script, you can modify it to work across multiple databases - you just need to swipe the @statement = portion and re-use it. First you need to stuff the databases where you want this to work into a table variable (or you can put them in a permanent table, if you want). Then you can build a command to execute in each database, e.g.
DECLARE @dbs TABLE (name SYSNAME);

INSERT @dbs(name) SELECT N'db1';
INSERT @dbs(name) SELECT N'db2';
INSERT @dbs(name) SELECT N'db3';

-- now here is where we re-use the create / alter procedure command from above:
DECLARE @statement NVARCHAR(MAX) = N'CREATE PROCEDURE dbo.whatever ...
';

-- now let's build some dynamic SQL and run it!
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

SELECT @sql = @sql + '
EXEC ' + QUOTENAME(name) + '.dbo.sp_executesql N''' + @statement + ''';'
FROM @dbs;

EXEC sys.sp_executesql @sql;
Alternatively, you could create a custom version of my sp_msforeachdb or sp_ineachdb replacements:
Making a more reliable and flexible sp_MSforeachdb
Execute a Command in the Context of Each Database in SQL Server
I used to use a tool called SQLFarms Combine for this, but the tool doesn't seem to exist anymore, or perhaps it has been swallowed up / re-branded by another company. Red Gate has since produced SQL Multi Script that has similar functionality.
If you added a column called websiteId to all your tables, you could have just one database. Store the unique websiteId in each site's web.config and pass it with each request for data. Each site's data is then stored with its websiteId, so data can be queried per website.
It means a bit of refactoring in your db and in any calls to your db, but once done, you only have one database to maintain.
Of course this is assuming your databases are on the same server...
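A minimal sketch of that idea, reusing the Items table and GetItemsByCategory procedure mentioned in the question (the column names are illustrative):
CREATE PROCEDURE dbo.GetItemsByCategory
    @websiteId  int,
    @categoryId int
AS
BEGIN
    -- every query filters on the calling site's websiteId from its web.config
    SELECT i.ItemId, i.Name
    FROM dbo.Items AS i
    WHERE i.WebsiteId  = @websiteId
      AND i.CategoryId = @categoryId;
END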
We have a SQL Server 2005 SP2 machine running a large number of databases, all of which contain full-text catalogs. Whenever we try to drop one of these databases or rebuild a full-text index, the drop or rebuild process hangs indefinitely with a MSSEARCH wait type. The process can't be killed, and a server reboot is required to get things running again. Based on a Microsoft forums post [1], it appears that the problem might be an improperly removed full-text catalog. Can anyone recommend a way to determine which catalog is causing the problem, without having to remove all of them?
[1] http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2681739&SiteID=1
“Yes we did have full text catalogues in the database, but since I had disabled full text search for the database, and disabled msftesql, I didn't suspect them. I got however an article from Microsoft support, showing me how I could test for catalogues not properly removed. So I discovered that there still existed an old catalogue, which I ,after and only after re-enabling full text search, were able to delete, since then my backup has worked”
Here's a suggestion. I don't have any corrupted databases but you can try this:
declare @t table (name nvarchar(128))
insert into @t select name from sys.databases --where is_fulltext_enabled

while exists (SELECT * FROM @t)
begin
    declare @name nvarchar(128)
    select @name = name from @t

    declare @SQL nvarchar(4000)
    set @SQL = 'IF EXISTS(SELECT * FROM '+@name+'.sys.fulltext_catalogs) AND NOT EXISTS(SELECT * FROM sys.databases where is_fulltext_enabled=1 AND name='''+@name+''') PRINT ''' +@name + ' Could be the culprit'''

    print @SQL
    exec sp_sqlexec @SQL

    delete from @t where name = @name
end
If it doesn't work, remove the filter checking sys.databases.
Have you tried running Process Monitor when it hangs to see what the underlying error is? Using Process Monitor you should be able to tell which file or resource it is waiting for or erroring on.
I had a similar problem with invalid full text catalog locations.
The server wouldn't bring all databases online at start-up. It would process databases in dbid order and get half way through and stop. Only the older DBs were brought online and the remainder were inaccessible.
Looking at sysprocesses revealed a dozen or more processes with waittype = 0x00CC and lastwaittype = MSSEARCH. MSSEARCH could not be stopped.
The problem was caused when we relocated the full-text catalogs but entered the wrong path for one of them when running the ALTER DATABASE ... MODIFY FILE command.
The solution was to disable MSSEARCH, reboot the server so that all DBs could come online, find the offending database, take it offline, correct the file path using the ALTER DATABASE command, and bring the DB online. Then start MSSEARCH and set it back to automatic start-up.
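For illustration, the correction step looked roughly like this (the database, catalog, and path names here are hypothetical):
ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- point the full-text catalog at its actual location on disk
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = N'MyFullTextCatalog',
                 FILENAME = N'D:\FullTextCatalogs\MyFullTextCatalog');

ALTER DATABASE MyDatabase SET ONLINE;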