I need to populate codelists after publishing my database using SSDT. So I've added a new post-deployment script to the project, and from it I call other scripts using the SQLCMD :r command, each of which inserts data into one table. But if a table is already populated, primary key constraints are violated and the whole setup breaks.
How can I suppress errors in a post-deployment script? The SQLCMD command :on error ignore is not supported.
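For illustration, a minimal sketch of such a post-deployment script (file names are hypothetical):
-- Script.PostDeployment.sql
:r ".\CodeLists\PopulateCountries.sql"
GO
:r ".\CodeLists\PopulateCurrencies.sql"
GO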
Here's a good example of how to achieve what you're looking for using the MERGE statement instead of raw INSERTs.
http://blogs.msdn.com/b/ssdt/archive/2012/02/02/including-data-in-an-sql-server-database-project.aspx
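As a rough sketch of that approach, assuming a hypothetical code list table dbo.Country(CountryCode, CountryName), a re-runnable post-deployment script might look like:
MERGE INTO dbo.Country AS Target
USING (VALUES
    ('US', 'United States'),
    ('GB', 'United Kingdom')
) AS Source (CountryCode, CountryName)
ON Target.CountryCode = Source.CountryCode
WHEN MATCHED THEN
    UPDATE SET Target.CountryName = Source.CountryName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CountryCode, CountryName)
    VALUES (Source.CountryCode, Source.CountryName);
Because existing rows are updated rather than re-inserted, the script can run on every publish without violating primary keys.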
Why don't you modify your script to avoid reinserting existing values? Using a common table expression, you would have something resembling:
;with cte as (select *, row_number() over (partition by ... order by ...) as Row from ... )
insert into ...
select ...
from cte where not exists (...) and cte.Row = 1
I can't be more explicit without having your table definition.
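For instance, a concrete sketch of that skeleton, assuming a hypothetical dbo.CodeList(Id, Code) table and an inline list of values to load:
;with cte as (
    select v.Id, v.Code,
           row_number() over (partition by v.Id order by v.Id) as Row
    from (values (1, 'Alpha'), (2, 'Beta'), (2, 'Beta')) as v(Id, Code)
)
insert into dbo.CodeList (Id, Code)
select cte.Id, cte.Code
from cte
where cte.Row = 1  -- drop duplicates within the source
  and not exists (select 1 from dbo.CodeList t where t.Id = cte.Id);  -- skip rows already in the table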
I am dropping a column in a table xxx that contains data. The DACPAC is generating the following check:
IF EXISTS (select top 1 1 from [dbo].[xxx])
RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
All of the data migration has already been done in the pre-deployment script. Currently I have to comment the check out manually.
How can I prevent it from being auto-generated?
For those who use SqlPackage to deploy the dacpac, or who use Azure DevOps: set the following in the additional arguments and it should work.
/p:BlockOnPossibleDataLoss=False
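For example, a full SqlPackage command line might look like this (server, database, and file names are placeholders):
SqlPackage.exe /Action:Publish ^
    /SourceFile:"MyDatabase.dacpac" ^
    /TargetServerName:"MyServer" ^
    /TargetDatabaseName:"MyDatabase" ^
    /p:BlockOnPossibleDataLoss=False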
Uncheck "Block incremental deployment if data loss might occur" in VS\Publish\Advanced\General.
This is most likely due to a dependency such as a trigger.
If you aren't sure which object is causing it, you can use the following script to identify it:
select * from sys.objects
where object_id = OBJECT_ID('<name of blocking object>')
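If the blocker turns out to be a trigger on the table, a query like this (the table name is a placeholder) lists the triggers defined on it:
select name, is_disabled
from sys.triggers
where parent_id = OBJECT_ID('dbo.YourTable')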
When publishing a dacpac with sqlpackage.exe, it runs Schema Compare first, followed by pre-deployment scripts. This causes a problem when, for instance, you need to drop a table or rename a column: the schema compare was done before the object was modified, so the deployment fails and the publish must be repeated to take the new schema into account.
Anyone have a work-around for this that does not involve publishing twice?
Gert Drapers called it the pre-pre-deployment script here.
It really is a challenge. If you need to add a non-nullable, foreign-key column to a table full of data, you can only do it with a separate script.
If you are the only developer, that is not a problem, but when you have a large team that "separate script" has to somehow be executed before every DB publish.
The workaround we used:
Create a separate SQL "Before-publish" script (in the DB project) with the property [Build action = None].
Create a custom MSBuild Task that calls the SQLCMD.EXE utility, passing the "Before-publish" script as a parameter, and then calls the SQLPACKAGE.EXE utility, passing DB.dacpac.
Add a call to the custom MSBuild Task to the db.sqlproj file. For example:
<UsingTask
    TaskName="MSBuild.MsSql.DeployTask"
    AssemblyFile="$(MSBuildProjectDirectory)\Deploy\MsBuild.MsSql.DeployTask.dll" />

<Target Name="AfterBuild">
  <DeployTask
      Configuration="$(Configuration)"
      DeployConfigPath="$(MSBuildProjectDirectory)\Deploy\Deploy.config"
      ProjectDirectory="$(MSBuildProjectDirectory)"
      OutputDirectory="$(OutputPath)"
      DacVersion="$(DacVersion)">
  </DeployTask>
</Target>
MsBuild.MsSql.DeployTask.dll above is the custom MSBuild Task.
This way the "Before-publish" script can be run from Visual Studio.
For CI we used a batch file (*.bat) that calls the same two utilities (SQLCMD.EXE & SQLPACKAGE.EXE).
The final process we ended up with is a little complicated and deserves a separate article - here I've only outlined the direction :)
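For illustration, such a batch file might look roughly like this (server, database, and file names are placeholders; the flags shown are standard SQLCMD/SqlPackage options):
REM Run the "Before-publish" script first, then publish the dacpac
SQLCMD.EXE -S MyServer -d MyDatabase -E -b -i "BeforePublish.sql"
IF ERRORLEVEL 1 EXIT /B 1
SQLPACKAGE.EXE /Action:Publish /SourceFile:"DB.dacpac" /TargetServerName:MyServer /TargetDatabaseName:MyDatabase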
Move from using Visual Studio to scripts that drive sqlpackage.exe, and you get the flexibility to run scripts before the compare:
https://the.agilesql.club/Blog/Ed-Elliott/Pre-Deploy-Scripts-In-SSDT-When-Are-They-Run
ed
We faced a situation where we needed to transform data from one table into another during deployment of the database project. Of course this is a problem for a DB project, because in the pre-deployment script the destination table (column) doesn't exist yet, while in the post-deployment script the source table (column) is already gone.
To transform data from TableA to TableB we used the following idea (this approach can be used for any data modification):
The developer adds the destination table (dbo.TableB) to the DB project and deploys it onto the local DB (without committing to SVN).
He or she creates a pre-deployment transformation script. The trick is that the script puts the resulting data into a temporary table: #TableB.
The developer deletes dbo.TableA from the DB project. It is assumed that the table will be dropped during execution of the main generated script.
The developer writes a post-deployment script that copies the data from #TableB into dbo.TableB, which has just been created by the main script.
All of the changes are committed to SVN.
This way we don't need the pre-pre-deployment script, because we store the intermediate data in the temporary table.
The approach that uses the pre-pre-deployment script involves the same intermediate (temporary) data, except that it is stored in real tables rather than temporary ones: it exists between the pre-pre-deployment and pre-deployment steps and disappears after the pre-deployment script has run.
What is more, the approach using temporary tables lets us handle the following complicated but real situation. Imagine that we have two transformations in our DB project:
TableA -> TableB
TableB -> TableC
Apart from that we have two databases:
DatabaseA, which has TableA
DatabaseB, where TableA has already been transformed into TableB. TableA is absent from DatabaseB.
Nonetheless we can handle this situation. We need just one new action in the pre-deployment: before the transformation we try to copy the data from dbo.TableA into #TableA, and the transformation script then works with the temporary tables only.
Let me show you how this idea works in DatabaseA and DatabaseB.
It is assumed that the DB project has two pairs of pre- and post-deployment scripts: "TableA -> TableB" and "TableB -> TableC".
Below is an example of the scripts for the "TableB -> TableC" transformation.
Pre-deployment script
----[The data preparation block]---
--We must prepare for a possible transformation
--The condition should verify the existence of the necessary columns
IF OBJECT_ID('dbo.TableB') IS NOT NULL AND
OBJECT_ID('tempdb..#TableB') IS NULL
BEGIN
CREATE TABLE #TableB
(
[Id] INT NOT NULL PRIMARY KEY,
[Value1] VARCHAR(50) NULL,
[Value2] VARCHAR(50) NULL
)
INSERT INTO [#TableB]
SELECT [Id], [Value1], [Value2]
FROM dbo.TableB
END
----[The data transformation block]---
--The condition that starts the transformation
--It is very important. It must be as strict as possible to ward off wrong executions.
--The condition should verify the existence of the necessary columns
--Note that the condition and the transformation must use #TableB instead of dbo.TableB
IF OBJECT_ID('tempdb..#TableB') IS NOT NULL
BEGIN
CREATE TABLE [#TableC]
(
[Id] INT NOT NULL PRIMARY KEY,
[Value] VARCHAR(50) NULL
)
--Data transformation. The source and destination tables must be temporary tables.
INSERT INTO [#TableC]
SELECT [Id], Value1 + ' '+ Value2 as Value
FROM [#TableB]
END
Post-deployment script
--There must be a strict condition here to ward off a failure
--Checking for the existence of the fields is a good idea
IF OBJECT_ID('dbo.TableC') IS NOT NULL AND
OBJECT_ID('tempdb..#TableC') IS NOT NULL
BEGIN
INSERT INTO [TableC]
SELECT [Id], [Value]
FROM [#TableC]
END
In DatabaseA the pre-deployment of the "TableA -> TableB" pair has already created #TableA and, from it, #TableB. Therefore the data preparation block above won't be executed, because there is no dbo.TableB in the database; however, the data transformation block will be executed, because #TableB was created by the transformation block of the "TableA -> TableB" script.
In DatabaseB the data preparation and transformation blocks of the "TableA -> TableB" script won't be executed, but we already have the transformed data in dbo.TableB. Hence the data preparation and transformation blocks for "TableB -> TableC" will be executed without any problem.
I use the workaround below in such scenarios.
If you would like to drop a table
Retain the table within the dacpac (Under Tables folder).
Create a post deployment script to drop the table.
If you would like to drop a column
Retain the column in the table definition within dacpac (Under Tables folder).
Create a post deployment script to drop the column.
This way you can drop tables and columns from your database, and on the next deployment (maybe after a few days or even months) you exclude that table/column from the dacpac so that the dacpac is updated to the latest schema.
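A minimal sketch of such a guarded post-deployment drop script (table and column names are hypothetical), safe to run on every publish:
-- Drop a table that has been removed from the model
IF OBJECT_ID('dbo.ObsoleteTable', 'U') IS NOT NULL
    DROP TABLE dbo.ObsoleteTable;
GO
-- Drop a column that has been removed from the model
IF COL_LENGTH('dbo.SomeTable', 'ObsoleteColumn') IS NOT NULL
    ALTER TABLE dbo.SomeTable DROP COLUMN ObsoleteColumn;
GO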
We have a moderately-sized SSDT project (~100 tables) that's deployed to dozens of different database instances. As part of our build process we generate a .dacpac file and then when we're ready to upgrade a database we generate a publish script and run it against the database. Some db instances are upgraded at different times so it's important that we have a structured process for these upgrades and versioning.
Most of the generated migration script is dropping and (re)creating procs, functions, indexes and performing any structural changes, plus some data scripts included in a Post-Deployment script. It's these two data-related items I'd like to know how best to structure within the project:
Custom data migrations needed between versions
Static or reference data
Custom data migrations needed between versions
Sometimes we want to perform a one-off data migration as part of an upgrade and I'm not sure the best way to incorporate this into our SSDT project. For example, recently I added a new bit column dbo.Charge.HasComments to contain (redundant) derived data based on another table and will be kept in sync via triggers. An annoying but necessary performance improvement (only added after careful consideration & measurement). As part of the upgrade the SSDT-generated Publish script will contain the necessary ALTER TABLE and CREATE TRIGGER statements, but I also want to update this column based on data in another table:
update dbo.Charge
set HasComments = 1
where exists ( select *
from dbo.ChargeComment
where ChargeComment.ChargeId = Charge.ChargeId )
and HasComments = 0
What's the best way to include this data migration script in my SSDT project?
Currently I have each of these types of migrations in a separate file that's included in the Post-Deployment script, so my Post-Deployment script ends up looking like this:
-- data migrations
:r "data migration\Update dbo.Charge.HasComments if never populated.sql"
go
:r "data migration\Update some other new table or column.sql"
go
Is this the right way to do it, or is there some way to tie in with SSDT and its version tracking better, so those scripts aren't even run when the SSDT Publish is being run against a database that's already at a more recent version? I could have my own table for tracking which migrations have been run, but would prefer not to roll my own if there's a standard way of doing this stuff.
Static or reference data
Some of the database tables contain what we call static or reference data, e.g. list of possible timezones, setting types, currencies, various 'type' tables etc. Currently we populate these by having a separate script for each table that is run as part of the Post-Deployment script. Each static data script inserts all the 'correct' static data into a table variable and then inserts/updates/deletes the static data table as needed. Depending on the table it might be appropriate only to insert or only insert and delete but not to update existing records. So each script looks something like this:
-- table listing all the correct static data
declare @working_data table (...)
-- add all the static data that should exist into the working table
insert into @working_data (...) select null, null, null where 1=0
union all select 'row1 col1 value', 'col2 value', etc...
union all select 'row2 col1 value', 'col2 value', etc...
...
-- insert any missing records in the live table
insert into staticDataTableX (...)
select * from @working_data
where not exists ( select * from staticDataTableX
                   where [... primary key join on @working_data...] )
-- update any columns that should be updated
update staticDataTableX
set ...
from staticDataTableX
inner join @working_data on [... primary key join on @working_data...]
-- delete any records, if appropriate with this sort of static data
delete from staticDataTableX
where not exists ( select * from @working_data
                   where [... primary key join on staticDataTableX...] )
and then my Post-Deployment script has a section like this:
-- static data. each script adds any missing static/reference data:
:r "static_data\settings.sql"
go
:r "static_data\other_static_data.sql"
go
:r "static_data\more_static_data.sql"
go
Is there a better or more conventional way to structure such static data scripts as part of an SSDT project?
To track whether or not the field has already been initialized, try adding an extended property when the initialization is performed (it can also be used to determine whether the initialization is needed).
To add the extended property:
EXEC sys.sp_addextendedproperty
    @name = N'EP_Charge_HasComments',
    @value = N'Initialized',
    @level0type = N'SCHEMA', @level0name = dbo,
    @level1type = N'TABLE', @level1name = Charge,
    @level2type = N'COLUMN', @level2name = HasComments;
To check for the extended property:
SELECT objtype, objname, name, value
FROM sys.fn_listextendedproperty (NULL,
'SCHEMA', 'dbo',
'TABLE', 'Charge',
'COLUMN', 'HasComments');
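Putting the two pieces together, the one-off update from the question could be guarded roughly like this (a sketch only, reusing the property name above):
IF NOT EXISTS (SELECT 1
               FROM sys.fn_listextendedproperty(N'EP_Charge_HasComments',
                        'SCHEMA', 'dbo', 'TABLE', 'Charge', 'COLUMN', 'HasComments'))
BEGIN
    -- run the one-off migration
    UPDATE dbo.Charge
    SET HasComments = 1
    WHERE EXISTS ( SELECT *
                   FROM dbo.ChargeComment
                   WHERE ChargeComment.ChargeId = Charge.ChargeId )
      AND HasComments = 0;

    -- mark the migration as done
    EXEC sys.sp_addextendedproperty
        @name = N'EP_Charge_HasComments',
        @value = N'Initialized',
        @level0type = N'SCHEMA', @level0name = dbo,
        @level1type = N'TABLE', @level1name = Charge,
        @level2type = N'COLUMN', @level2name = HasComments;
END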
For reference data, try using a MERGE. It's MUCH cleaner than the triple-set of queries you're using.
MERGE INTO staticDataTableX AS Target
USING (
VALUES
('row1_UniqueID', 'row1_col1_value', 'col2_value'),
('row2_UniqueID', 'row2_col1_value', 'col2_value'),
('row3_UniqueID', 'row3_col1_value', 'col2_value'),
('row4_UniqueID', 'row4_col1_value', 'col2_value')
) AS Source (TableXID, col1, col2)
ON Target.TableXID = Source.TableXID
WHEN MATCHED THEN
UPDATE SET
Target.col1 = Source.col1,
Target.col2 = Source.col2
WHEN NOT MATCHED BY TARGET THEN
INSERT (TableXID, col1, col2)
VALUES (Source.TableXID, Source.col1, Source.col2)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;
How can I generate a script instead of manually writing
if exists (select ... where id = 1)
insert ...
else
update ...
It's very tedious to do that for many records!
Using Management Studio to generate a 'Data only' script produces only inserts, so running it against an existing database gives errors on primary keys.
For SQL Server 2008 onwards you could start using MERGE statements along with a CTE.
A simple example for a typical id/description lookup table:
WITH stuffToPopulate(Id, Description)
AS
(
SELECT 1, 'Foo'
UNION SELECT 2, 'Bar'
UNION SELECT 3, 'Baz'
)
MERGE Your.TableName AS target
USING stuffToPopulate as source
ON (target.Id = source.Id)
WHEN MATCHED THEN
UPDATE SET Description=source.Description
WHEN NOT MATCHED THEN
INSERT (Id, Description)
VALUES (source.Id, source.Description);
MERGE statements have a bunch of other useful functionality (such as WHEN NOT MATCHED BY TARGET and WHEN NOT MATCHED BY SOURCE). The MERGE documentation will give you much more info.
MERGE is one of the most effective methods to do this.
However, writing a MERGE statement is not very intuitive at first, and generating the lines for many rows or many tables is a time-consuming process.
I'd suggest using one of these tools to simplify the task:
Data Script Writer (Desktop Application for Windows)
Generate SQL Merge (T-SQL Stored Procedure)
I recently wrote a blog post about these tools and an approach to leveraging SSDT to deploy a database together with its data. Find out more:
Script and deploy the data for database from SSDT project
I hope this can help.
I'm still fairly new to T-SQL and SQL 2005. I need to import a column of integers from a table in database1 to an identical table (only missing the column I need) in database2. Both are SQL 2005 databases. I've tried the built-in import command in Server Management Studio, but it forces me to copy the entire table. That causes errors due to constraints and 'read-only' columns (whatever 'read-only' means in SQL 2005). I just want to grab a single column and copy it to a table.
There must be a simple way of doing this. Something like:
INSERT INTO database1.myTable columnINeed
SELECT columnINeed from database2.myTable
Inserting won't do it, since it will attempt to add new rows at the end of the table. What it sounds like you're trying to do is fill in a column on existing rows.
I'm not sure the syntax is exactly right, but if I understood you, this will do what you're after.
Create the column allowing nulls in database2.
Perform an update:
UPDATE t2
SET t2.colname = t1.colname
FROM database2.dbo.tablename AS t2
INNER JOIN database1.dbo.tablename AS t1
    ON t2.keycol = t1.keycol
There is a simple way very much like this as long as both databases are on the same server. The fully qualified name is dbname.owner.table - normally the owner is dbo and there is a shortcut for ".dbo." which is "..", so...
INSERT INTO Database1..MyTable
(ColumnList)
SELECT FieldsIWant
FROM Database2..MyTable
first create the column if it doesn't exist:
ALTER TABLE database2..targetTable
ADD targetColumn int null -- or whatever column definition is needed
and since you're using Sql Server 2005 you can use the new MERGE statement.
The MERGE statement has the advantage of handling every case in one statement: source rows missing from the destination (it can insert them), destination rows missing from the source (it can delete them), and matching rows (it can update them), and everything is done atomically in a single transaction. Example:
MERGE database2..targetTable AS t
USING (SELECT PrimaryKeyCol, sourceColumn FROM sourceDatabase1..sourceTable) AS s
ON t.PrimaryKeyCol = s.PrimaryKeyCol -- or whatever the match should be based on
WHEN MATCHED THEN
    UPDATE SET t.targetColumn = s.sourceColumn
WHEN NOT MATCHED THEN
    INSERT (targetColumn, [other columns ...]) VALUES (s.sourceColumn, [other values ..])
The MERGE statement was introduced to solve cases like yours, and I recommend using it. It's much more powerful than solutions that string together multiple SQL batch statements to accomplish what MERGE does in a single statement, without the added complexity.
You could also use a cursor. Assuming you want to iterate over all the records in the first table and populate the second table with new rows, something like this would be the way to go:
DECLARE @FirstField nvarchar(100)
DECLARE ACursor CURSOR FOR
    SELECT FirstField FROM FirstTable
OPEN ACursor
FETCH NEXT FROM ACursor INTO @FirstField
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO SecondTable ( SecondField ) VALUES ( @FirstField )
    FETCH NEXT FROM ACursor INTO @FirstField
END
CLOSE ACursor
DEALLOCATE ACursor
MERGE is only available in SQL Server 2008, NOT SQL 2005.
insert into Test2.dbo.MyTable (MyValue) select MyValue from Test1.dbo.MyTable
This assumes a great deal. First, that the destination table is empty. Second, that the other columns are nullable. You may need an UPDATE instead; to do that you will need a common key.
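If an UPDATE is what's needed, a rough sketch (assuming both tables share a key column named Id):
UPDATE t2
SET t2.MyValue = t1.MyValue
FROM Test2.dbo.MyTable AS t2
INNER JOIN Test1.dbo.MyTable AS t1
    ON t1.Id = t2.Id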