How to set change tracking on a Visual Studio database project (SSDT) - sql-server

I have a SQL Server 2005 DB project and am looking to deploy the Schema over an existing DB that is on a later version of SQL Server. The issue I have is that Change Tracking is enabled on the DB I wish to deploy to and so the first thing SSDT wants to do is disable CT. This poses a problem as I get the error below:
(43,1): SQL72014: .Net SqlClient Data Provider: Msg 22115, Level 16,
State 1, Line 5 Change tracking is enabled for one or more tables in
database 'Test'. Disable change tracking on each table before
disabling it for the database. Use the sys.change_tracking_tables
catalog view to obtain a list of tables for which change tracking is
enabled. (39,0): SQL72045: Script execution error. The executed
script:
IF EXISTS (SELECT 1
FROM [master].[dbo].[sysdatabases]
WHERE [name] = N'$(DatabaseName)')
BEGIN
ALTER DATABASE [$(DatabaseName)]
SET CHANGE_TRACKING = OFF
WITH ROLLBACK IMMEDIATE;
END
In an effort to get around this I have created a PreDeployment script that executes the below:
/* Run pre-deployment scripts to resolve issues */
IF (SELECT SUBSTRING(@@VERSION, 29, 4)) = '11.0'
BEGIN
PRINT 'Enabling Change Tracking';
DECLARE @dbname VARCHAR(250)
SELECT @dbname = DB_NAME()
EXEC('
IF NOT EXISTS(SELECT * FROM [master].[dbo].[sysdatabases] WHERE name = ''' + @dbname + ''')
ALTER DATABASE [' + @dbname + ']
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 5 DAYS, AUTO_CLEANUP = ON);
');
EXEC('
IF NOT EXISTS(SELECT * FROM sys.change_tracking_tables ctt
INNER JOIN sys.tables t ON t.object_id = ctt.object_id
INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.name = ''TableName'')
BEGIN
ALTER TABLE [dbo].[TableName] ENABLE CHANGE_TRACKING;
END;');
END
So, based on the DB version, Change Tracking is enabled on the DB and the relevant tables, assuming it is not already enabled. I got this idea from a previous post: #ifdef-type conditional compilation in T-SQL (SQL Server 2008/2005).
Unfortunately this is still not working, as SSDT tries to disable Change Tracking before the PreDeployment script is executed.
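As an aside, the database-level guard in that pre-deployment script checks [master].[dbo].[sysdatabases], which only tells you whether the database exists, not whether change tracking is already on. A sketch of a guard against sys.change_tracking_databases instead (retention values copied from the question's script) might look like:

```sql
-- Sketch: only enable change tracking when it is not already on for this DB.
IF NOT EXISTS (SELECT 1 FROM sys.change_tracking_databases
               WHERE database_id = DB_ID())
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'ALTER DATABASE ' + QUOTENAME(DB_NAME()) +
        N' SET CHANGE_TRACKING = ON
           (CHANGE_RETENTION = 5 DAYS, AUTO_CLEANUP = ON);';
    EXEC (@sql);
END;
```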

Make sure change tracking is enabled in your database project.
Open your database project's properties > Project Settings > Database Settings... > Operational tab > check the "Change tracking" option

As Keith said, if you want it enabled then enable it in the project. If you do want to disable it, then just run your script before doing the compare, so you have a pre-pre-deploy script like:
https://the.agilesql.club/Blog/Ed-Elliott/Pre-Compare-and-Pre-Deploy-Scripts-In-SSDT
If you are disabling it then it is a one-off thing, so pretty simple.
Other options are to write your own deployment contributor or to raise a bug via Connect.
Deployment Contributor:
https://the.agilesql.club/blog/Ed-Elliott/2015/09/23/Inside-A-SSDT-Deployment-Contributor
https://github.com/DacFxDeploymentContributors/Contributors
Ed

Related

Monitor when Database is created and receive an email

I created a trigger at the server level to control when a DB is created.
I have this script that was working fine on SQL 2014. Now we have moved to SQL 2017; the script works, but I receive a lot of emails:
CREATE TRIGGER [ddl_trig_database]
ON ALL SERVER
FOR ALTER_DATABASE
AS
DECLARE @results NVARCHAR(max)
DECLARE @subjectText NVARCHAR(max)
DECLARE @databaseName NVARCHAR(255)
SET @subjectText = 'NEW DATABASE Created on ' + @@SERVERNAME + ' by ' + SUSER_SNAME()
SET @results = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)'))
SET @databaseName = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/DatabaseName)[1]', 'VARCHAR(255)'))
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'EmailProfile',
@recipients = 'test@domain.com',
@body = @results,
@subject = @subjectText,
@exclude_query_output = 1 --Suppress 'Mail Queued' message
GO
I receive for example in different emails each of these lines:
ALTER DATABASE [testNewDB] SET DELAYED_DURABILITY = DISABLED
ALTER DATABASE [testNewDB] SET RECOVERY FULL
ALTER DATABASE [testNewDB] SET READ_WRITE
ALTER DATABASE [testNewDB] SET READ_COMMITTED_SNAPSHOT OFF
There are more, so I believe the trigger is sending the info for each configuration parameter of the newly created DB. Any idea how to receive only the info of the new DB created, without all the rest?
You can replace ALTER_DATABASE with CREATE_DATABASE, but this will not catch a restore event, because a restore does not generate a DDL event.
CREATE TRIGGER [ddl_trig_database]
ON ALL SERVER
FOR CREATE_DATABASE
AS
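Putting that together with the original trigger body, a minimal sketch (using the same placeholder mail profile and recipient as in the question) would be:

```sql
-- Sketch: fire once per CREATE DATABASE rather than once per configuration
-- ALTER DATABASE statement. 'EmailProfile'/'test@domain.com' are the
-- question's placeholders, not real values.
CREATE TRIGGER [ddl_trig_database]
ON ALL SERVER
FOR CREATE_DATABASE
AS
DECLARE @results NVARCHAR(MAX) =
    EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)');
DECLARE @subjectText NVARCHAR(MAX) =
    'NEW DATABASE Created on ' + @@SERVERNAME + ' by ' + SUSER_SNAME();
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'EmailProfile',
    @recipients = 'test@domain.com',
    @body = @results,
    @subject = @subjectText,
    @exclude_query_output = 1; -- suppress 'Mail queued' message
GO
```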
The following article covers a solution that will work around the missing DDL event:
DDL triggers enable us to audit DDL changes but there are a few
missing events, design decisions and installation complications. This
post explains and provides a full solution that includes auditing for
database restores (there is no DDL event for this) and an incremental
self install, which keeps the whole server audit configured for DDL
auditing.
https://www.sqlservercentral.com/forums/topic/sql-2008-ddl-auditing-a-full-self-installingupdating-solution-for-whole-server
The solution in the article for RESTORE events involves a job that runs to check for new databases:
SQL 2008 Audit RESTORE DATABASE
SQL Agent job which runs (in less than 1 second) every 1 minute to
copy new restore database auditing information from
msdb.dbo.restorehistory to dbadata.dbo.ServerAudit. If it finds that a
database restore has happened but has not been audited it
automatically runs the “Setup DDL Audit” job because there is a
possibility that the restored database is not configured for DDL
auditing as expected.

How can I disable autogrowth in SQL Server wide

I have a database server with some databases that are used by restricted users. I need to prevent those users from changing the .MDF and .LDF autogrowth settings. Please guide me on how to restrict the users.
I think there are two ways to achieve this:
Disable autogrowth in the databases
Limit the maximum size of the MDF and LDF files
But I couldn't find any option in Management Studio to do either of them server-wide, or to remove that access from users.
Thanks.
You can execute the following ALTER DATABASE command, which sets the auto growth option to off, for all databases using the undocumented stored procedure sp_MSforeachdb
for single database (Parallel Data Warehouse instances only)
ALTER DATABASE [database_name] SET AUTOGROW = OFF
for all databases
EXEC sp_Msforeachdb "ALTER DATABASE [?] SET AUTOGROW = OFF"
Although this is not a server variable or an instance setting, it might help ease your task of updating all databases on the SQL Server instance.
Excluding the system databases, the following T-SQL can be executed to get a list of all database files for the remaining databases; the output commands it prepares can then be reviewed and executed:
select
'ALTER DATABASE [' + db_name(database_id) + '] MODIFY FILE ( NAME = N''' + name + ''', FILEGROWTH = 0)'
from sys.master_files
where database_id > 4
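For illustration, against a hypothetical user database named [Sales] with one data and one log file, the generator above would emit commands along these lines (actual logical file names come from sys.master_files):

```sql
-- Hypothetical output of the generator for a database named [Sales].
ALTER DATABASE [Sales] MODIFY FILE ( NAME = N'Sales', FILEGROWTH = 0)
ALTER DATABASE [Sales] MODIFY FILE ( NAME = N'Sales_log', FILEGROWTH = 0)
```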
To prevent the data files' autogrow property from being changed, I prepared the SQL Server DDL trigger below; I had once used a similar DDL trigger for logging DROP TABLE statements.
This trigger will also prevent you from changing the property, so if you need to update it yourself, you have to drop the trigger first.
CREATE TRIGGER prevent_filegrowth
ON ALL SERVER
FOR ALTER_DATABASE
AS
declare @SqlCommand nvarchar(max)
set @SqlCommand = ( SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)') );
if( isnull(charindex('FILEGROWTH', @SqlCommand), 0) > 0 )
begin
RAISERROR ('FILEGROWTH property cannot be altered', 16, 1)
ROLLBACK
end
GO
For more on DDL Triggers, please refer to Microsoft Docs

SQL Server error 4928 - no replication or CDC

I'm trying to rename a column but I'm getting this error:
Msg 4928, Level 16, State 1, Procedure sp_rename, Line 547
Cannot alter column 'appraisal_id' because it is 'enabled for Replication or Change Data Capture'.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
No replication is configured. At some point CDC was enabled on the database and a few tables (including the table I'm trying to rename a column on), but it is currently disabled on the database. I'm assuming it was disabled on the database without first disabling it on each table, and that's causing this problem. I would say this is a SQL Server bug.
As a workaround, I can re-enable CDC on the database, disable it on the table, and then disable it on the database, then I can rename the column.
I'm trying to find out which tables have this problem (our database has 3500 tables), so I can fix this once and for all and avoid it in future. I don't see anything in any of the system tables (I checked sys.tables, sys.objects, sysobjects, sys.columns, syscolumns) that indicates this table has CDC enabled. All the relevant columns (is_published, is_schema_published, is_merge_published, is_tracked_by_cdc) have value 0.
Any idea where SQL Server stores this information ?
I'm using SQL 2008 and 2008 R2; the problem occurs on both.
You can reproduce the problem with the script below:
CREATE DATABASE TestCDC
GO
USE TestCDC
GO
CREATE TABLE dbo.fish(
fish_id int NOT NULL
, name nvarchar(100) NOT NULL
, CONSTRAINT XPKfish PRIMARY KEY (fish_id))
GO
EXECUTE sp_cdc_enable_db
GO
EXECUTE sys.sp_cdc_enable_table
@source_schema = N'dbo'
, @source_name = 'fish'
, @capture_instance = 'my_capture'
, @role_name = NULL
, @filegroup_name = NULL
GO
EXECUTE sp_cdc_disable_db
GO
EXECUTE sp_rename 'dbo.fish.name', 'fish_name'
I'm assuming you have figured this out since, but I just came across this problem again, and perhaps people would like a good answer on this thread.
It seems that disabling CDC at the database level does disable it everywhere, but these errors keep occurring.
In order to overcome this problem the trick is to:
Activate CDC on the DB level
EXECUTE sp_cdc_enable_db
GO
Activate CDC on the table
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MyTable',
@role_name = N'MyRole',
@supports_net_changes = 1
GO
Disable CDC on the table
EXEC sys.sp_cdc_disable_table
@source_schema = N'dbo',
@source_name = N'MyTable',
@capture_instance = N'dbo_MyTable'
GO
Disable CDC on the DB level
EXECUTE sp_cdc_disable_db
GO
I don't have access to an instance with CDC enabled to test this, but based on the text of the internal procedure used to enable cdc (usefully made accessible here), it might be that one or more tables in the cdc schema contain the information - I'd suggest cdc.change_tables as a starting point.
In sys.tables, there are two columns that will tell you whether the server thinks the table is replicated or enabled for CDC (regardless of the database status for those features). Run the following:
select name, is_tracked_by_cdc, is_replicated
from sys.tables
where is_tracked_by_cdc = 1
or is_replicated = 1
If either is true, you will have to enable the database feature in question (e.g. CDC), disable the feature for any tables that have it, then re-disable the feature at the database level.
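With 3500 tables, you may want to generate the per-table disable calls rather than write them by hand. A sketch that emits them from the catalog (assuming the default capture instance naming pattern, schema_table, which may not match a custom @capture_instance like the question's 'my_capture'):

```sql
-- Sketch: emit sp_cdc_disable_table calls for every table the server still
-- thinks is CDC-tracked. Run after re-enabling CDC at the database level.
SELECT 'EXEC sys.sp_cdc_disable_table'
     + ' @source_schema = N''' + s.name + ''','
     + ' @source_name = N''' + t.name + ''','
     + ' @capture_instance = N''' + s.name + '_' + t.name + ''';'
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.is_tracked_by_cdc = 1;
```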

Create Northwind.mdf for use in local db

The following is the start of the standard Install Northwind SQL script, as provided by Microsoft in their SQL Server 2000 sample databases installer.
SET NOCOUNT ON
GO
USE master
GO
if exists (select * from sysdatabases where name='Northwind')
drop database Northwind
go
DECLARE @device_directory NVARCHAR(520)
SELECT @device_directory = SUBSTRING(filename, 1, CHARINDEX(N'master.mdf', LOWER(filename)) - 1)
FROM master.dbo.sysaltfiles WHERE dbid = 1 AND fileid = 1
EXECUTE (N'CREATE DATABASE Northwind
ON PRIMARY (NAME = N''Northwind'', FILENAME = N''' + @device_directory + N'northwnd.mdf'')
LOG ON (NAME = N''Northwind_log'', FILENAME = N''' + @device_directory + N'northwnd.ldf'')')
go
exec sp_dboption 'Northwind','trunc. log on chkpt.','true'
exec sp_dboption 'Northwind','select into/bulkcopy','true'
GO
set quoted_identifier on
GO
Under normal circumstances I always use a full copy of SQL Server or SQL Server Express for development. However, an unrelated support issue with a third-party component has occurred that requires me to provide a self-contained sample application, with the basic Northwind database file contained within the sample, using LocalDB.
To that end, how should I adapt the EXECUTE CREATE DATABASE section of the SQL script so that it creates a copy of the Northwind .mdf in a given location (let's say C:\MyData), so that I can then send that file with the sample I need to build for the support team? Essentially it is vital that they have a completely self-contained sample to help narrow down the problem.
Many Thanks
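One way to adapt that section is simply to hard-code the target directory instead of deriving it from master.mdf's location. A sketch, assuming C:\MyData already exists and is writable by the SQL Server (LocalDB) service, and keeping the file names from the script:

```sql
-- Sketch: create Northwind's files in a fixed directory.
-- C:\MyData is an assumption; adjust to your target folder.
DECLARE @device_directory NVARCHAR(520)
SET @device_directory = N'C:\MyData\'
EXECUTE (N'CREATE DATABASE Northwind
ON PRIMARY (NAME = N''Northwind'', FILENAME = N''' + @device_directory + N'northwnd.mdf'')
LOG ON (NAME = N''Northwind_log'', FILENAME = N''' + @device_directory + N'northwnd.ldf'')')
```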

Azure SQL Database Create via bacpac Import Fails

We are testing the migration from a local SQL Server 2008R2 database to Azure, but have hit a bump in the road.
Process followed, based on SO articles:
Installed SQL Server 2012 Client tools
Amended DB to remove indexes with a fill factor specified, as well as invalid views and procedures (this was determined by using the Export Data-tier Application tool for SSMS, until it successfully created bacpac file)
uploaded the successfully created bacpac file to Azure
Went through steps to create new database using import method
The bacpac file is retrieved from blob storage and a status is shown, but then the following error occurs:
BadRequest; Request Error; Error Status Code: 'BadRequest'
Details: Error encountered during the service operation.
Exception Microsoft.SqlServer.Management.Dac.Services.ServiceException:
Unable to authenticate request
Note: error text above was trimmed to exclude URL's as I don't have sufficient points.
I can't seem to find any info on this error or where there may be any additional log details to help determine why it will not import.
As the error mentions unable to authenticate, we also tried doing the following:
Created a new user and password on the local DB
Used this same new user and password for the definition of the new DB on Azure
This did not make any difference.
Would appreciate if someone could point us in the right direction to get this working, as we would need to replicate this process quite a few times.
Thanks.
We needed the same thing. Here are the steps we tried and the results:
1) Exporting using the SQL Database Migration Tool created by ghuey
You can download it here: https://sqlazuremw.codeplex.com/
It's a great tool and I really recommend you try this first. Depending on the complexity of your database, it may work just fine.
For us, unfortunately, it didn't work, so we moved to the next step.
2) DAC Package
SQL Server 2008 has the option to generate a DACPAC, which creates the structure of the database on Azure; you can then deploy to Azure by referencing a connection in 2008 Management Studio: right-click the Azure server, Deploy... See more details here: http://world.episerver.com/documentation/Items/Upgrading/EPiserver-Commerce/8/Migrating-Commerce-databases-to-Azure/
Well, if this works for you, try this. It's easier.
For us, unfortunately, it didn't work, so we moved to the next step.
3) Using a 2012 server to export a bacpac and then import it into Azure
This requires multiple steps to complete. Here they are:
a. Generate a backup on 2008 and move the file to the 2012 server;
b. Restore the backup on 2012;
c. Run some SQL to:
c1. Set all owners of schemas to dbo. You can move a schema with SQL like this: ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [dbo]
c2. Remove all users that were created by you;
c3. Remove all MS_Description extended properties from all columns and tables;
c4. Drop all constraints (tip: generate a complete script of the database with the drop-and-create option enabled, and copy the "drop constraint" part);
c5. Remove the fill factor options from the indexes of your database. You can do that by re-creating the indexes (including PKs that have a clustered index associated). Dropping every clustered PK is not that easy, but with a little help from Google you will be able to find a script to help you drop and re-create them. Here is the script:
DECLARE @object_id int;
DECLARE @parent_object_id int;
DECLARE @TSQL NVARCHAR(4000);
DECLARE @COLUMN_NAME SYSNAME;
DECLARE @is_descending_key bit;
DECLARE @col1 BIT;
DECLARE @action CHAR(6);
SET @action = 'DROP';
--SET @action = 'CREATE';
DECLARE PKcursor CURSOR FOR
select kc.object_id, kc.parent_object_id
from sys.key_constraints kc
inner join sys.objects o
on kc.parent_object_id = o.object_id
where kc.type = 'PK' and o.type = 'U'
and o.name not in ('dtproperties','sysdiagrams') -- not true user tables
order by QUOTENAME(OBJECT_SCHEMA_NAME(kc.parent_object_id))
,QUOTENAME(OBJECT_NAME(kc.parent_object_id));
OPEN PKcursor;
FETCH NEXT FROM PKcursor INTO @object_id, @parent_object_id;
WHILE @@FETCH_STATUS = 0
BEGIN
IF @action = 'DROP'
SET @TSQL = 'ALTER TABLE '
+ QUOTENAME(OBJECT_SCHEMA_NAME(@parent_object_id))
+ '.' + QUOTENAME(OBJECT_NAME(@parent_object_id))
+ ' DROP CONSTRAINT ' + QUOTENAME(OBJECT_NAME(@object_id))
ELSE
BEGIN
SET @TSQL = 'ALTER TABLE '
+ QUOTENAME(OBJECT_SCHEMA_NAME(@parent_object_id))
+ '.' + QUOTENAME(OBJECT_NAME(@parent_object_id))
+ ' ADD CONSTRAINT ' + QUOTENAME(OBJECT_NAME(@object_id))
+ ' PRIMARY KEY'
+ CASE INDEXPROPERTY(@parent_object_id
,OBJECT_NAME(@object_id),'IsClustered')
WHEN 1 THEN ' CLUSTERED'
ELSE ' NONCLUSTERED'
END
+ ' (';
DECLARE ColumnCursor CURSOR FOR
select COL_NAME(@parent_object_id, ic.column_id), ic.is_descending_key
from sys.indexes i
inner join sys.index_columns ic
on i.object_id = ic.object_id and i.index_id = ic.index_id
where i.object_id = @parent_object_id
and i.name = OBJECT_NAME(@object_id)
order by ic.key_ordinal;
OPEN ColumnCursor;
SET @col1 = 1;
FETCH NEXT FROM ColumnCursor INTO @COLUMN_NAME, @is_descending_key;
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@col1 = 1)
SET @col1 = 0
ELSE
SET @TSQL = @TSQL + ',';
SET @TSQL = @TSQL + QUOTENAME(@COLUMN_NAME)
+ ' '
+ CASE @is_descending_key
WHEN 0 THEN 'ASC'
ELSE 'DESC'
END;
FETCH NEXT FROM ColumnCursor INTO @COLUMN_NAME, @is_descending_key;
END;
CLOSE ColumnCursor;
DEALLOCATE ColumnCursor;
SET @TSQL = @TSQL + ');';
END;
PRINT @TSQL;
FETCH NEXT FROM PKcursor INTO @object_id, @parent_object_id;
END;
CLOSE PKcursor;
DEALLOCATE PKcursor;
c6. Re-create the FKs;
c7. Remove all indexes;
c8. Re-create all indexes (without the fill factor options);
d. Now, right-click the database on 2012 and export the data-tier application to Azure Storage in BACPAC format. After it finishes, import it on Azure.
It should work :-)
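To find which indexes actually carry a fill factor before doing steps c5 and c8, a sketch against the catalog views:

```sql
-- Sketch: list user-table indexes whose fill factor is explicitly set
-- (fill_factor = 0 means the server default, which bacpac export accepts).
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
       OBJECT_NAME(i.object_id)        AS table_name,
       i.name                          AS index_name,
       i.fill_factor
FROM sys.indexes i
WHERE i.fill_factor > 0
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1;
```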
For anyone who may stumble across this: we were able to locate the issue by using the bacpac file to create a new database on the local 2008R2 server, through the 2012 client tools.
The error relates to a delete trigger being fired. I don't understand why it is executed, but that's another question.
Hopefully this may help others with import errors on SQL Azure.

Resources