I'm preparing to test an application in development. The application uses SQL Server 2019 for backend databases. It allows users to maintain multiple databases (for compliance and regulatory reasons).
QA testing scenarios require databases to be restored frequently to a known state before a staff member performs test cases in sequence. They then note the results of the test scenario.
There are approximately a dozen test scenarios to work on for this release, and an average of 6 databases to be used for most scenarios. For every scenario, this setup takes about 10 minutes and involves over 20 clicks.
Since scenarios will be tested before and after code changes, this means a time commitment of about 8 hours on setup alone. I suspect this can be reduced to about 1 minute since most of the time is spent navigating menus and the file system while restorations only take a few seconds each.
So I'd like to automate restorations. How can I automate the following sequence of operations inside of SSMS?
Drop all user created databases on the test instance of SQL Server
Create new or overwritten databases populated from ~6 .BAK files. I currently perform this one-by-one using "Restore Database", then adding a new file device, and finally launching the restorations.
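For reference, I imagine the automated version ends up looping over something like this for each database (a sketch only; the database name and backup path below are made up):
USE master;

-- If the database already exists, kick out other connections so the
-- restore can take exclusive access.
ALTER DATABASE [TestDb1] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Create or overwrite the database from its backup file.
RESTORE DATABASE [TestDb1]
FROM DISK = N'C:\Backups\TestDb1.bak'
WITH REPLACE, RECOVERY;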
EDIT: I usually work with SQL, C#, Batchfiles, or Python. But this task allows flexibility as long as it saves time and the restoration process is reliable. I would imagine either SSMS or a T-SQL query are the natural first places for me to begin.
We are currently using full backups and these seem to remain connected to their parent SQL Server instance and database. This caused me to encounter an SSMS bug when attempting to overwrite an existing database with a backup from another database on the same instance -- the restore fails to overwrite the target database, and the database that created the backup becomes stuck "restoring" until SSMS is closed or I manually restore it with the correct backup.
So as a minor addendum, what backup settings are appropriate for creating these independent copies of databases that have been backed up from other SQL Server instances?
I would suggest you use Database Snapshots instead. They let you take a snapshot of the database and then revert to it after changes are made. The disk space taken up by a snapshot is only the pages that have changed since it was created, not the whole database.
Here is a script to create database snapshots for all user databases (you cannot do this for system DBs).
DECLARE @sql nvarchar(max);

SELECT @sql =
    STRING_AGG(CAST(CONCAT(
        'CREATE DATABASE ',
        QUOTENAME(d.name + '_snap'),
        ' ON ',
        f.files,
        ' AS SNAPSHOT OF ',
        QUOTENAME(d.name),
        ';'
    ) AS nvarchar(max)), '
')
FROM sys.databases d
CROSS APPLY (
    -- sys.master_files lists every database's files; sys.database_files
    -- would only see the files of the database you are connected to.
    SELECT
        files = STRING_AGG(CONCAT(
            '(NAME = ',
            QUOTENAME(f.name),
            ', FILENAME = ''',
            -- the snapshot's sparse file: the original path with 'snap' appended
            REPLACE(f.physical_name + 'snap', '''', ''''''),
            ''')'
        ), ',
')
    FROM sys.master_files f
    WHERE f.database_id = d.database_id
      AND f.type_desc = 'ROWS'
) f
WHERE d.database_id > 4              -- not system DBs
  AND d.source_database_id IS NULL;  -- skip databases that are themselves snapshots

PRINT @sql;
EXEC sp_executesql @sql;
And here is a script to revert to the snapshots:
DECLARE @sql nvarchar(max);

SELECT @sql =
    STRING_AGG(CAST(CONCAT(
        'RESTORE DATABASE ',
        QUOTENAME(dSource.name),
        ' FROM DATABASE_SNAPSHOT = ',
        QUOTENAME(dSnap.name),
        ';'
    ) AS nvarchar(max)), '
')
FROM sys.databases dSnap
JOIN sys.databases dSource
    ON dSource.database_id = dSnap.source_database_id;

PRINT @sql;
EXEC sp_executesql @sql;
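Note that the revert only succeeds while the database has exactly one snapshot, so drop any other snapshots of the source database first.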
And to drop the snapshots:
DECLARE @sql nvarchar(max);

SELECT @sql =
    STRING_AGG(CAST(CONCAT(
        'DROP DATABASE ',
        QUOTENAME(d.name),
        ';'
    ) AS nvarchar(max)), '
')
FROM sys.databases d
WHERE d.source_database_id IS NOT NULL;  -- snapshots only

PRINT @sql;
EXEC sp_executesql @sql;
I have a CI/CD pipeline that creates a snapshot every time it runs. Now I want to achieve something like "delete all previous snapshots and create the new one", or maybe delete all previous snapshots and keep only the two or three most recent.
I create each snapshot with a name that has a timestamp appended.
As of now, the only command I have been able to find is DROP DATABASE <snapshot name>, so I'm not sure whether it's even possible to delete all historical snapshots somehow...
Edit:
Running the code below does roughly the required job. Any idea how I can translate it into a PowerShell script so that I can run it in the pipeline?
DECLARE @Sql AS NVARCHAR(MAX) = (
    SELECT 'DROP DATABASE [' + name + ']; '
    FROM sys.databases
    WHERE name LIKE '%Dev%'
    FOR XML PATH('')
);
EXEC sys.sp_executesql @Sql;
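For the retention part, this is the kind of thing I'm aiming for (a sketch; it assumes snapshots are exactly the rows in sys.databases with a non-NULL source_database_id). I could then run it from the pipeline with sqlcmd or Invoke-Sqlcmd:
DECLARE @Sql AS NVARCHAR(MAX) = (
    SELECT 'DROP DATABASE [' + name + ']; '
    FROM (
        SELECT name,
               rn = ROW_NUMBER() OVER (PARTITION BY source_database_id
                                       ORDER BY create_date DESC)
        FROM sys.databases
        WHERE source_database_id IS NOT NULL  -- snapshots only
    ) AS s
    WHERE s.rn > 2                            -- keep the two newest per source DB
    FOR XML PATH('')
);
PRINT @Sql;  -- review before executing
EXEC sys.sp_executesql @Sql;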
We are testing the migration from a local SQL Server 2008R2 database to Azure, but have hit a bump in the road.
Process followed, based on SO articles:
Installed SQL Server 2012 Client tools
Amended the DB to remove indexes with a fill factor specified, as well as invalid views and procedures (this was determined by using the Export Data-tier Application tool in SSMS, until it successfully created a bacpac file)
Uploaded the successfully created bacpac file to Azure
Went through the steps to create a new database using the import method
The bacpac file is retrieved from blob storage (status shown), but then the following error occurs:
BadRequest; Request Error; Error Status Code: 'BadRequest'
Details: Error encountered during the service operation.
Exception: Microsoft.SqlServer.Management.Dac.Services.ServiceException: Unable to authenticate request
Note: the error text above was trimmed to exclude URLs, as I don't have sufficient points.
I can't seem to find any info on this error or where there may be any additional log details to help determine why it will not import.
As the error mentions unable to authenticate, we also tried doing the following:
Created a new user and password on the local DB
Used this same new user and password for the definition of the new DB on Azure
This did not make any difference.
I would appreciate it if someone could point us in the right direction to get this working, as we will need to replicate this process quite a few times.
Thanks.
We needed the same thing. Here are the steps we took and the results:
1) Exporting using SQL Database Migration Tool created by ghuey
You can download here: https://sqlazuremw.codeplex.com/
It's a great tool and I really recommend trying it first. Depending on the complexity of your database, it may work just fine.
For us, unfortunately, it didn't work, so we moved to the next step.
2) DAC Package
SQL Server 2008 has the option to generate a DACPAC, which creates the structure of the database on Azure; you can then deploy to Azure by registering a connection in the 2008 Management Studio: right-click the Azure server, then Deploy... See more details here: http://world.episerver.com/documentation/Items/Upgrading/EPiserver-Commerce/8/Migrating-Commerce-databases-to-Azure/
If this works for you, try it; it's easier.
For us, unfortunately, it didn't work, so we moved to the next step.
3) Using a 2012 server to export a bacpac and then import it into Azure
This step requires multiple actions to complete. Here they are:
a. Generate a backup on the 2008 server and move the file to the 2012 server;
b. Restore the backup on the 2012 server;
c. Run some SQL to:
c1. Set the owner of all schemas to dbo. You can move a schema with SQL like this: ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [dbo]
c2. Remove all users that you created;
c3. Remove all MS_Description extended properties from all columns and tables, for example with a generated script like the one just below;
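A hedged sketch for this step: generate the DROP statements from sys.extended_properties, review the output, then run it.
-- Generates DROP statements for MS_Description properties on tables/columns.
SELECT 'EXEC sp_dropextendedproperty @name = N''MS_Description'', '
     + '@level0type = N''SCHEMA'', @level0name = N''' + s.name + ''', '
     + '@level1type = N''TABLE'', @level1name = N''' + t.name + ''''
     + CASE WHEN ep.minor_id > 0
            THEN ', @level2type = N''COLUMN'', @level2name = N'''
                 + COL_NAME(ep.major_id, ep.minor_id) + ''''
            ELSE '' END
     + ';'
FROM sys.extended_properties ep
INNER JOIN sys.tables t ON ep.major_id = t.object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE ep.class = 1                -- object or column level
  AND ep.name = N'MS_Description';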
c4. Drop all constraints (tip: generate a complete script of the database with the drop-and-create option enabled, and copy the "drop constraint" part);
c5. Remove the fill factor option from the indexes of your database. You can do that by re-creating each index (including PKs that have an associated clustered index). Dropping every clustered PK is not that easy, but with a little help from Google you will be able to find a script to generate the creates and drops. Here is the script:
DECLARE @object_id int;
DECLARE @parent_object_id int;
DECLARE @TSQL nvarchar(4000);
DECLARE @COLUMN_NAME sysname;
DECLARE @is_descending_key bit;
DECLARE @col1 bit;
DECLARE @action char(6);

SET @action = 'DROP';
--SET @action = 'CREATE';

DECLARE PKcursor CURSOR FOR
    SELECT kc.object_id, kc.parent_object_id
    FROM sys.key_constraints kc
    INNER JOIN sys.objects o
        ON kc.parent_object_id = o.object_id
    WHERE kc.type = 'PK' AND o.type = 'U'
      AND o.name NOT IN ('dtproperties', 'sysdiagrams')  -- not true user tables
    ORDER BY QUOTENAME(OBJECT_SCHEMA_NAME(kc.parent_object_id)),
             QUOTENAME(OBJECT_NAME(kc.parent_object_id));

OPEN PKcursor;
FETCH NEXT FROM PKcursor INTO @object_id, @parent_object_id;

WHILE @@FETCH_STATUS = 0
BEGIN
    IF @action = 'DROP'
        SET @TSQL = 'ALTER TABLE '
            + QUOTENAME(OBJECT_SCHEMA_NAME(@parent_object_id))
            + '.' + QUOTENAME(OBJECT_NAME(@parent_object_id))
            + ' DROP CONSTRAINT ' + QUOTENAME(OBJECT_NAME(@object_id))
    ELSE
    BEGIN
        SET @TSQL = 'ALTER TABLE '
            + QUOTENAME(OBJECT_SCHEMA_NAME(@parent_object_id))
            + '.' + QUOTENAME(OBJECT_NAME(@parent_object_id))
            + ' ADD CONSTRAINT ' + QUOTENAME(OBJECT_NAME(@object_id))
            + ' PRIMARY KEY'
            + CASE INDEXPROPERTY(@parent_object_id,
                                 OBJECT_NAME(@object_id), 'IsClustered')
                  WHEN 1 THEN ' CLUSTERED'
                  ELSE ' NONCLUSTERED'
              END
            + ' (';

        DECLARE ColumnCursor CURSOR FOR
            SELECT COL_NAME(@parent_object_id, ic.column_id), ic.is_descending_key
            FROM sys.indexes i
            INNER JOIN sys.index_columns ic
                ON i.object_id = ic.object_id AND i.index_id = ic.index_id
            WHERE i.object_id = @parent_object_id
              AND i.name = OBJECT_NAME(@object_id)
            ORDER BY ic.key_ordinal;

        OPEN ColumnCursor;
        SET @col1 = 1;
        FETCH NEXT FROM ColumnCursor INTO @COLUMN_NAME, @is_descending_key;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            IF (@col1 = 1)
                SET @col1 = 0
            ELSE
                SET @TSQL = @TSQL + ',';

            SET @TSQL = @TSQL + QUOTENAME(@COLUMN_NAME)
                + ' '
                + CASE @is_descending_key
                      WHEN 0 THEN 'ASC'
                      ELSE 'DESC'
                  END;

            FETCH NEXT FROM ColumnCursor INTO @COLUMN_NAME, @is_descending_key;
        END;

        CLOSE ColumnCursor;
        DEALLOCATE ColumnCursor;

        SET @TSQL = @TSQL + ');';
    END;

    PRINT @TSQL;
    FETCH NEXT FROM PKcursor INTO @object_id, @parent_object_id;
END;

CLOSE PKcursor;
DEALLOCATE PKcursor;
c6. Re-create the FKs
c7. Remove all indexes
c8. Re-create all indexes (without the fill factor options)
d. Now, right-click the database on the 2012 server and export the data-tier application to Azure Storage in BACPAC format. When it finishes, import it on Azure.
It should work :-)
For anyone who may stumble across this, we have been able to locate the issue by using the bacpac file to create a new database on the local 2008R2 server, through the 2012 Client tools.
The error relates to a delete trigger being fired; why it is being executed I don't understand, but that's another question.
Hopefully this may help others with import errors on SQL Azure.
I restored a database after a server failure and now I'm running into a problem where the table names show as database_user_name.table_name. So when I query something like:
select * from contacts
it doesn't work because it expects it be fully qualified, as in:
select * from user1000.contacts
The problem with this is that I have hundreds of stored procedures that reference the tables with their name, so none of the queries work.
Is there a way to tell SQL Server 2005 to drop the username prefix from the table names without changing the owning user?
try this advice from the manual:
To change the schema of a table or view by using SQL Server Management Studio, in Object Explorer, right-click the table or view and then click Design. Press F4 to open the Properties window. In the Schema box, select a new schema.
If you are sure none of the tables exist in the dbo schema as well, then you can say:
ALTER SCHEMA dbo TRANSFER user1000.contacts;
To generate a set of scripts for all of the tables in that schema, you can say:
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';
SELECT @sql = @sql + N'
ALTER SCHEMA dbo TRANSFER user1000.' + QUOTENAME(name) + ';'
FROM sys.tables
WHERE SCHEMA_NAME([schema_id]) = N'user1000';
PRINT @sql;
--EXEC sp_executesql @sql;
(Once you're happy with the PRINT output - acknowledging that it will be truncated at 8K even though the variable really contains the whole script - uncomment the EXEC and run it again. This does not check for potential conflicts.)
But the real fix is to fix your code. You should never say select * from contacts - both the * and the missing schema prefix can be problematic for various reasons.
I'm looking to query all databases mapped to a user, similar to Security > Logins > Properties > User Mapping.
This would be done in SQL Server 2005, if possible.
For example, something similar to:
SELECT name
FROM sys.databases
WHERE HAS_DBACCESS(name) = 1
But perform the query from an administrative user, as opposed to running the above query as the user itself.
How would something like this be performed?
Thank you.
Well this might be a start, probably not the nice output you'd hope for (and it produces two resultsets):
EXEC sp_helplogins N'floob';
But it does work on SQL Server 2000. If you want to try and replicate some of the functionality in the procedure, you can see how it's checking for permissions, basically a cursor through every database. On SQL Server 2000:
EXEC sp_helptext N'sp_helplogins';
On 2005+ I much prefer the output of OBJECT_DEFINITION():
SELECT OBJECT_DEFINITION(OBJECT_ID(N'sys.sp_helplogins'));
So you could write your own cursor based on similar logic, and make the output prettier...
Here is a quick (and incomplete) example; it doesn't cover much, but it's a starting point if the above is not sufficient:
DECLARE @login NVARCHAR(255);
SET @login = N'foobarblat';
-- above would be an input parameter to a procedure, I presume

CREATE TABLE #dbs(name SYSNAME);

DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

SELECT @sql = @sql + N'INSERT #dbs SELECT ''' + name + ''' FROM '
    + QUOTENAME(name) + '.sys.database_principals AS u
    INNER JOIN sys.server_principals AS l
        ON u.sid = l.sid
    WHERE l.name = @login;'
FROM sys.databases
WHERE state_desc = 'ONLINE'
  AND user_access_desc = 'MULTI_USER';

EXEC sp_executesql @sql, N'@login SYSNAME', @login;

SELECT name FROM #dbs;

DROP TABLE #dbs;
As I said, this is not complete. It won't know if the user has been denied connect or is a member of the deny reader/writer roles, and it won't show the alias if the user name in the db doesn't match the login, etc. You can dig into more details in sp_helplogins depending on what you want to show.
The EXECUTE AS functionality was added in the 2005 release, so I don't think you can run that in 2000. You could probably mimic it by putting the relevant code in a job and setting the job owner to an admin user, but it would run within the job, not inline.
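On 2005, since the question allows it, you could also simply impersonate the login and reuse the HAS_DBACCESS query from the question (a minimal sketch; 'floob' is a placeholder login):
-- SQL Server 2005+: impersonate the login, ask which databases it can reach.
EXECUTE AS LOGIN = N'floob';

SELECT name
FROM sys.databases
WHERE HAS_DBACCESS(name) = 1;

REVERT;  -- switch back to the original security context
The caller needs IMPERSONATE permission on that login; sysadmins have it implicitly.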
If I stop SQL Server and then delete a database's .LDF file (transaction log file), what will happen? Will the database be marked suspect, or will SQL Server just create a new log automatically? This is SQL Server 2008 R2.
Also, my .LDF file is too big, so how do I manage it? Can I shrink it or delete it?
Please suggest answers in query form.
You should not delete any of the database files since it can severely damage your database!
If you run out of disk space, you might want to split your database into multiple files. This can be done in the database's properties, and it lets you put each file on a different storage volume.
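A hedged sketch of the T-SQL equivalent (database name, logical name, and path are placeholders):
-- Add a second data file on another volume so the database can grow there.
ALTER DATABASE [myDatabase]
ADD FILE (
    NAME = N'myDatabase_data2',                  -- hypothetical logical name
    FILENAME = N'D:\Data\myDatabase_data2.ndf',  -- path on the other volume
    SIZE = 512MB,
    FILEGROWTH = 256MB
);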
You can also shrink the transaction log file if you change the recovery model from full to simple, using the following commands:
ALTER DATABASE myDatabase SET RECOVERY SIMPLE
DBCC SHRINKDATABASE (myDatabase , 5)
Switching back to full recovery is possible as well:
ALTER DATABASE myDatabase SET RECOVERY FULL
Update about SHRINKDATABASE - or what I did not know when answering this question:
Although the method above gets rid of some unused space, it has a severe disadvantage for database files (MDF): it harms your indexes by fragmenting them, worsening the performance of your database. So you need to rebuild the indexes afterwards to get rid of the fragmentation the shrink command caused.
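For example (a sketch; dbo.MyTable is a placeholder, and you would repeat this per table or generate the statements from sys.tables):
-- Rebuild every index on a table to remove the fragmentation the shrink caused.
ALTER INDEX ALL ON dbo.MyTable REBUILD;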
If you want to shrink just the log file, you might want to use SHRINKFILE instead. I copied this example from MSDN:
USE AdventureWorks2012;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE AdventureWorks2012
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (AdventureWorks2012_Log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL;
GO
Do not risk deleting your LDF files manually! If you do not need the transaction log contents, or wish to reduce the files to any size you choose, follow these steps:
(Note this will affect your backups so be sure before doing so)
Right click database
Choose Properties
Click on the 'Options' tab.
Set recovery model to SIMPLE
Next, choose the FILES tab
Now make sure you select the LOG file and scroll right. Under the "Autogrowth" heading click the dots ....
Then disable Autogrowth (This is optional and will limit additional growth)
Then click OK and set the "Initial Size" to the size you wish to have (I set mine to 20MB)
Click OK to save changes
Then right-click the DB again, and choose "Tasks > Shrink > Database", press OK.
Now compare your file sizes!:)
I did it by:
Detach the database (include Drop Connections)
Remove the *.ldf file
Attach the database, but remove the expected *.ldf file from the file list
I did this for 4 different databases in SQL Server 2012; it should be the same for SQL Server 2008.
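The same steps in T-SQL look roughly like this (a sketch; the database name and path are placeholders, and the database must have been detached cleanly):
USE master;

-- Detach, leaving the data file (.mdf) in place.
EXEC sp_detach_db @dbname = N'myDatabase';

-- Delete or move the .ldf file here, then reattach without it;
-- SQL Server rebuilds a fresh log file.
CREATE DATABASE [myDatabase]
ON (FILENAME = N'C:\Data\myDatabase.mdf')
FOR ATTACH_REBUILD_LOG;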
As you can read in the comments, removing the log is not a good solution. But if you are sure that you will not lose anything, you can just change your DB recovery model to simple and then use
DBCC shrinkdatabase ('here your database name')
to clear your log.
The worst thing you can do is delete the log file from disk. If your server had unfinished transactions at the moment it stopped, those transactions will not roll back after the restart and you will get corrupted data.
You should back up your transaction log first; then there will be free space inside it to reclaim when you shrink it. Changing to simple mode and then shrinking means you lose the transaction log data that would be useful in the event of a restore.
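A sketch of that order of operations (database, path, and logical log file name are placeholders; the real logical name is in sys.database_files):
-- Back up the log so the space inside it becomes reusable,
-- keeping the backup chain intact for point-in-time restores.
BACKUP LOG [myDatabase] TO DISK = N'C:\Backups\myDatabase_log.trn';

-- Then shrink the now largely empty log file.
USE [myDatabase];
DBCC SHRINKFILE (myDatabase_log, 1);  -- logical file name, target size in MB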
The best way to clear ALL LDF (transaction log) files for all databases on an MS SQL Server instance, IF all databases were backed up earlier, of course:
USE master;

PRINT '*****************************************';
PRINT '************** LDF Cleaner **************';
PRINT '*****************************************';

DECLARE
    @isql varchar(2000),
    @dbname varchar(64),
    @logfile varchar(128),
    @recovery_model varchar(64);

DECLARE c1 CURSOR FOR
    SELECT d.name, mf.name AS logfile, d.recovery_model_desc --, mf.physical_name AS current_file_location, mf.size
    FROM sys.master_files mf
    INNER JOIN sys.databases d
        ON mf.database_id = d.database_id
    WHERE d.name NOT IN ('master', 'model', 'msdb', 'tempdb')
      --AND d.recovery_model_desc <> 'SIMPLE'  -- optional: skip databases already in SIMPLE
      AND mf.type_desc = 'LOG'
      AND d.state_desc = 'ONLINE';

OPEN c1;
FETCH NEXT FROM c1 INTO @dbname, @logfile, @recovery_model;

WHILE @@FETCH_STATUS <> -1
BEGIN
    PRINT '----- OPERATIONS FOR: ' + @dbname + ' ------';
    PRINT 'CURRENT MODEL IS: ' + @recovery_model;

    SELECT @isql = 'ALTER DATABASE ' + QUOTENAME(@dbname) + ' SET RECOVERY SIMPLE';
    PRINT @isql;
    EXEC (@isql);

    SELECT @isql = 'USE ' + QUOTENAME(@dbname) + '; CHECKPOINT;';
    PRINT @isql;
    EXEC (@isql);

    SELECT @isql = 'USE ' + QUOTENAME(@dbname) + '; DBCC SHRINKFILE (' + QUOTENAME(@logfile) + ', 1);';
    PRINT @isql;
    EXEC (@isql);

    SELECT @isql = 'ALTER DATABASE ' + QUOTENAME(@dbname) + ' SET RECOVERY ' + @recovery_model;
    PRINT @isql;
    EXEC (@isql);

    FETCH NEXT FROM c1 INTO @dbname, @logfile, @recovery_model;
END;

CLOSE c1;
DEALLOCATE c1;
This is improved code, based on: https://www.sqlservercentral.com/Forums/Topic1163961-357-1.aspx
I recommend reading this article: https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-models-sql-server
Sometimes it is worthwhile to permanently set RECOVERY MODEL = SIMPLE on some databases and thus get rid of log problems once and for all, especially when the data (or the whole server) is backed up daily and intraday changes are not critical from a data-safety point of view.